I am having a tough time integrating ActiveMQ with Dell Boomi, as the Dell Boomi documentation is old and sometimes misleading. Since I could not find good guidance on the web, I am posting my question here. Can someone please help with the steps to integrate ActiveMQ with Boomi?
I got it working with the steps below:
Copy the activemq-core-5.4.3.jar and geronimo-j2ee-management_1.1_spec-1.0.1.jar files from your ActiveMQ installation to your Atom/usrlib/database directory (create it if it is not there).
Create a jndi.properties file and place it in the ActiveMQ home directory. Reference this.
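For illustration, a minimal jndi.properties along these lines is enough (the queue entry is optional, since the dynamicQueues lookup used below creates queues on demand):

java.naming.factory.initial=org.apache.activemq.jndi.ActiveMQInitialContextFactory
java.naming.provider.url=tcp://localhost:61616
connectionFactoryNames=ConnectionFactory
# optional: pre-register a named queue in JNDI
queue.Dell_Boomi=Dell_Boomi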
You might get a NoClassDefFoundError for JMS classes such as Topic, which means your Boomi lib does not contain the implementation for them. You need to copy activemq-all-5.4.3.jar from the ActiveMQ home folder to Atom/lib.
I am not covering how to create a JMS Connection and Operation in Boomi; however, you can use the following properties for the JMS connection in Boomi:
Connection Factory JNDI Lookup: ConnectionFactory.
Initial Context Factory: org.apache.activemq.jndi.ActiveMQInitialContextFactory (default).
Provider URL: tcp://localhost:61616 (Default port).
JMS Operation:
Destination: dynamicQueues/Dell_Boomi (the dynamicQueues prefix will create the queue if it does not already exist).
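If you want to sanity-check these settings outside Boomi, a small standalone client can perform the same JNDI lookups. This is only a sketch, assuming activemq-all-5.4.3.jar is on the classpath:

import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ActiveMqJndiCheck {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.apache.activemq.jndi.ActiveMQInitialContextFactory");
        env.put(Context.PROVIDER_URL, "tcp://localhost:61616");
        Context ctx = new InitialContext(env);

        // the same lookups Boomi performs with the settings above
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("ConnectionFactory");
        // the dynamicQueues/ prefix creates the queue on first use
        Queue queue = (Queue) ctx.lookup("dynamicQueues/Dell_Boomi");

        Connection conn = cf.createConnection();
        try {
            conn.start();
            System.out.println("Connected; queue = " + queue.getQueueName());
        } finally {
            conn.close();
        }
    }
}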
That's all, try your luck and share your experience!
Pick the jars activemq-client, hawtbuf, geronimo-jms_1.1_spec and geronimo-j2ee-management_1.1_spec from lib\plugin\queue and copy them to the lib folder. Restart the Atom and it should work now.
I have installed and tested Kafka Connect in distributed mode; it works now, connecting to the configured sink and reading from the configured source.
That being the case, I moved on to enhancing my installation. The one area I think needs immediate attention is the fact that the only available means of creating a connector is through REST calls, which means I have to send my information over the wire, unprotected.
In order to secure this, Kafka introduced the new ConfigProvider, seen here.
This is helpful, as it allows you to set properties on the server and then reference them in the REST call, like so:
{
  ...
  "property": "${file:/path/to/file:nameOfThePropertyInFile}"
  ...
}
This works really well, just by adding the property file on the server and adding the following config to the distributed.properties file:
config.providers=file # multiple comma-separated provider types can be specified here
config.providers.file.class=org.apache.kafka.common.config.provider.FileConfigProvider
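For example, assuming the worker host has a file /path/to/file containing (the property name is the placeholder from the snippet above; the value is made up):

nameOfThePropertyInFile=superSecretValue

the worker expands ${file:/path/to/file:nameOfThePropertyInFile} to superSecretValue when the connector is configured, so the secret itself never travels in the REST payload.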
While this solution works, it really does not ease my concerns about security: the information has merely moved from being sent over the wire to sitting in a repository, in plain text for everyone to see.
The Kafka team foresaw this issue and allowed clients to provide their own configuration providers by implementing the ConfigProvider interface.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
and added the following entries to the distributed properties file:
config.providers=cust
config.providers.cust.class=com.somename.configproviders.CustConfigProvider
However, I am getting an error from Connect stating that a class implementing ConfigProvider, with the name
com.somename.configproviders.CustConfigProvider
could not be found.
I am at a loss now, because the documentation on their site does not explain very well how to configure custom config providers.
Has someone worked on a similar issue and could provide some insight into this? Any help would be appreciated.
I just went through this recently to set up a custom ConfigProvider. The official documentation is ambiguous and confusing.
I have created my own implementation and packaged it in a jar, giving it the suggested final name:
META-INF/services/org.apache.kafka.common.config.ConfigProvider
You can name the jar whatever you like, but it needs to be packed in jar format, with a .jar suffix.
Here is the complete step-by-step. Suppose your custom ConfigProvider's fully qualified name is com.my.CustomConfigProvider.MyClass.
1. Create a file at META-INF/services/org.apache.kafka.common.config.ConfigProvider. The file content is the fully qualified class name:
com.my.CustomConfigProvider.MyClass
2. Include your source code and the above META-INF folder when generating the jar package. If you are using Maven, the services file typically lives under src/main/resources/META-INF/services/.
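For reference, a minimal implementation might look like this. It is only a sketch; the package, class name, and hard-coded secret are the hypothetical ones used in these steps:

package com.my.CustomConfigProvider;

import java.util.HashMap;
import java.util.Map;
import java.util.Set;

import org.apache.kafka.common.config.ConfigData;
import org.apache.kafka.common.config.provider.ConfigProvider;

public class MyClass implements ConfigProvider {

    // receives any config.providers.<alias>.param.* settings from the worker config
    public void configure(Map<String, ?> configs) {
    }

    // resolve all keys under the given path
    public ConfigData get(String path) {
        Map<String, String> data = new HashMap<>();
        // hard-coded for illustration; read from your secret store here
        data.put("password", "123");
        return new ConfigData(data);
    }

    // resolve the requested keys under the given path
    // (this sketch simply returns everything)
    public ConfigData get(String path, Set<String> keys) {
        return get(path);
    }

    // release any resources held by the provider
    public void close() {
    }
}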
3. Put your final jar file, say custom-config-provider-1.0.jar, under the Kafka worker plugin folder (this is the PLUGIN_PATH in the Kafka worker config file; the default is /usr/share/java).
4. Upload all the dependency jars to PLUGIN_PATH as well. Use the META-INF/MANIFEST.MF file inside your jar to configure the Class-Path of the dependent jars your code will use.
5. In the Kafka worker config file, create two additional properties:
CONNECT_CONFIG_PROVIDERS: 'mycustom', // Alias name of your ConfigProvider
CONNECT_CONFIG_PROVIDERS_MYCUSTOM_CLASS:'com.my.CustomConfigProvider.MyClass',
6. Restart the workers.
7. Update your connector config file by POSTing it to the Kafka Connect REST API with curl. In the connector config file, you can reference a value inside the ConfigData returned from ConfigProvider.get(path, keys) by using syntax like:
database.password=${mycustom:/path/pass/to/get/method:password}
Here ConfigData is a map containing {password: 123}.
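The POST itself is the standard Connect REST call, for example (host, port, and file name are illustrative):

curl -X POST -H "Content-Type: application/json" --data @my-connector.json http://localhost:8083/connectors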
If you are still seeing a ClassNotFoundException, your classpath is probably not set up correctly.
Note:
• If you are using AWS ECS/EC2, you need to set the worker config by setting environment variables.
• The worker config file and the connector config file are two different things.
I am starting to use FlexyPool to monitor a JNDI datasource managed by Tomcat.
I found how to monitor one datasource in this answer and in the FlexyPool doc. I cannot, however, figure out how to configure the monitoring of multiple sources through the flexy-pool.properties file. Is this possible?
Currently, the declarative configuration only supports a single DataSource. You can open an issue on GitHub for this. I would not mind if you sent a pull request for it.
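For reference, the declarative file describes a single pool along these lines (the values here are placeholders; see the FlexyPool documentation for the full property list):

flexy.pool.data.source.unique.name=myPool
flexy.pool.data.source.jndi.name=java:comp/env/jdbc/myDataSource
flexy.pool.metrics.reporter.jmx.enable=true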
I'm using IBM Integration Bus v10 (previously called IBM Message Broker) to expose COBOL routines as SOAP Web Services.
COBOL routines are integrated into IIB through MQ queues.
We have imported some COBOL copybooks as DFDL schemas in IIB, and the mapping between SOAP messages and DFDL messages is working fine.
However, when the message reaches a node where the message tree has to be serialized (for example, a FileOutput node or an MQ request), it fails with the following error:
"The PIF data could not be found for the specified application"
This is the last part of the stack trace of the exception:
RecoverableException
  File:CHARACTER:F:\build\slot1\S000_P\src\DataFlowEngine\TemplateNodes\ImbOutputTemplateNode.cpp
  Line:INTEGER:303
  Function:CHARACTER:ImbOutputTemplateNode::processMessageAssemblyToFailure
  Type:CHARACTER:ComIbmFileOutputNode
  Name:CHARACTER:MyCustomFlow#FCMComposite_1_5
  Label:CHARACTER:MyCustomFlow.File Output
  Catalog:CHARACTER:BIPmsgs
  Severity:INTEGER:3
  Number:INTEGER:2230
  Text:CHARACTER:Caught exception and rethrowing
  Insert
    Type:INTEGER:14
    Text:CHARACTER:Kcilmw20Flow.File Output
  ParserException
    File:CHARACTER:F:\build\slot1\S000_P\src\MTI\MTIforBroker\DfdlParser\ImbDFDLWriter.cpp
    Line:INTEGER:315
    Function:CHARACTER:ImbDFDLWriter::getDFDLSerializer
    Type:CHARACTER:ComIbmSOAPInputNode
    Name:CHARACTER:MyCustomFlow#FCMComposite_1_7
    Label:CHARACTER:MyCustomFlow.SOAP Input
    Catalog:CHARACTER:BIPmsgs
    Severity:INTEGER:3
    Number:INTEGER:5828
    Text:CHARACTER:The PIF data could not be found for the specified application
    Insert
      Type:INTEGER:5
      Text:CHARACTER:MyCustomProject
It seems like something is missing in my deployable BAR file. It is important to note that my application contains the message flow, and that it depends on a shared library containing all the .xsd files (DFDLs).
I suppose that the schemas are OK, as I've generated them using the Toolkit wizard, and the message parsing works well. The problem is only with serialization.
Does anybody know what may be missing here?
OutputRoot.Properties.MessageType must contain the name of the message in the DFDL schema. Additionally, when the DFDL schema is in a shared library, OutputRoot.Properties.MessageSet must contain the name of the library.
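For example, in a Compute node just before the output node, something along these lines works (a sketch; 'MySharedLib' and the message name are placeholders for your own library and DFDL message):

CREATE COMPUTE MODULE SetDfdlProperties
  CREATE FUNCTION Main() RETURNS BOOLEAN
  BEGIN
    SET OutputRoot = InputRoot;
    -- the shared library that contains the DFDL schemas
    SET OutputRoot.Properties.MessageSet = 'MySharedLib';
    -- the message definition in the DFDL schema
    -- (prefix with {namespace} if the schema has a target namespace)
    SET OutputRoot.Properties.MessageType = '{}:MyCopybookMessage';
    RETURN TRUE;
  END;
END MODULE;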
Sounds as if OutputRoot.Properties is not pointing at the shared library. I cannot remember which subfield does that job - it is either OutputRoot.Properties.MessageType or OutputRoot.Properties.MessageSet.
You can easily check - just look at the contents of InputRoot.Properties after an input node that has used the same shared library.
I faced a similar problem. In my case, a message flow with an HTTPRequest node using the DFDL domain parser/format to parse an HTTP response from the remote system threw this error (PIF data could not be found for the specified application). "Re-selecting" the same parser domain and message type on the node, followed by a build and redeploy, solved the problem. It seemed to be a project-reference-related issue within the IIB Toolkit.
You need to create static libraries and reference them from the application.
In the Compute node, your coding is based on the DFDL body.
We recently changed our application server from Glassfish to Wildfly. With Glassfish we used QBrowser to monitor our JMS queues; sadly, that tool does not work with Wildfly.
After a quick search I found the tool HermesJMS. Although there are lots of guides on how to set up a connection to a JMS queue with it, I couldn't find anything specifically for the JBoss Wildfly application server. After reading through lots of different guides, I think I can now connect to the Wildfly server, but I just can't connect to my JMS queues.
First I tried to connect via JNDI InitialContext. Here are my settings for it:
initialContextFactory: org.jboss.naming.remote.client.InitialContextFactory
providerURL: http-remoting://localhost:
urlPkgPrefixes: org.jboss.naming.remote.client
securityPrincipal: admin
securityCredentials: admin
It does connect, but all I see are my deployed web applications and a "jms" folder. They all contain the same web applications again, plus the jms folder, and they appear as a red circle with a white X in it.
So next I tried to set up a session manually via "Create new JMS Session" with following preferences:
Session: HornetQ
Plugin: HornetQ
Properties:
binding: jms/RemoteConnectionFactory
initialContextFactory: org.jboss.naming.remote.client.InitialContextFactory
providerURL: http-remoting://localhost:
urlPkgPrefixes: org.jboss.naming.remote.client
User: guest Password: pass
The guest user is a user I created in Wildfly as an application user.
When I then double-click on one of the queues, it says that there is no such queue.
javax.jms.JMSException: There is no queue with name java:jboss/jms/queue/ngsEmailProvRequestQueue
at org.hornetq.jms.client.HornetQSession.createQueue(HornetQSession.java:397)
at hermes.impl.jms.SimpleDestinationManager.createDesintaion(SimpleDestinationManager.java:60)
at hermes.impl.JNDIDestinationManager.createDesintaion(JNDIDestinationManager.java:105)
at hermes.impl.jms.SimpleDestinationManager.getDestination(SimpleDestinationManager.java:137)
at hermes.impl.jms.AbstractSessionManager.getDestination(AbstractSessionManager.java:387)
at hermes.impl.DefaultHermesImpl.getDestination(DefaultHermesImpl.java:323)
at hermes.browser.tasks.BrowseDestinationTask.invoke(BrowseDestinationTask.java:122)
at hermes.browser.tasks.TaskSupport.run(TaskSupport.java:175)
at hermes.browser.tasks.ThreadPool.run(ThreadPool.java:170)
at java.lang.Thread.run(Thread.java:745)
Does anybody know what I'm missing? Is it even possible to get HermesJMS to work with Wildfly? Or if not, is there an alternative monitoring tool for JMS queues?
Thank you for your help.
To work with Wildfly, follow this doc: https://developer.jboss.org/wiki/UsingHermesJMSWithHornetQ
Second part: Configuring HermesJMS for JBoss7 / EAP6 with HornetQ
And change those values:
binding=jms/RemoteConnectionFactory
initialContextFactory=org.jboss.naming.remote.client.InitialContextFactory
providerURL=http-remoting://localhost:8080
urlPkgPrefixes=org.jboss.naming.remote.client
In the destinations, also change:
Name: sample
Domain: QUEUE
Maybe you could have a look at JMSToolbox on sourceforge: https://sourceforge.net/projects/jmstoolbox/?source=directory
I recently revisited this as the team is moving from Glassfish (yaye...) to Wildfly. I tried with Wildfly 9 and it works.
I think it is a matter of exporting your queue name. See below:
java:/jms/queue/test does not work
java:jboss/exported/jms/queue/test works
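In WildFly 9 (HornetQ) terms, that means giving the queue an entry under java:jboss/exported in the messaging subsystem of standalone-full.xml, roughly like this (queue name from the example above):

<jms-destinations>
    <jms-queue name="test">
        <entry name="java:/jms/queue/test"/>
        <entry name="java:jboss/exported/jms/queue/test"/>
    </jms-queue>
</jms-destinations>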
Note: Wildfly 9.2 is the final version that ships with HornetQ. Wildfly 10+ uses Artemis instead.
I'm writing, or trying to write, Baby's First MDB on WebSphere 7. I have nearly no hair left, having pulled it all out trying to get the thing to work. It appears that I've got everything set up right, but I get no response when I put a message to the associated queue.
Here's the EAR file setup:
simplemdb.ear
  META-INF
    Manifest.mf
    application.xml
  simplemdb.jar
    META-INF
      Manifest.mf
      ejb-jar.xml
    com
      [ classes go here ]
I can't find any syntax for defining the queue's JNDI name in ejb-jar.xml, so instead I:
Define a WebSphere activation spec. Name SimpleMDBActivationSpec, JNDI name jms/SimpleActivationSpec, Destination jms/SimpleMDBQueue.
Define a WebSphere queue. Name SimpleMDBQueue, JNDI name jms/SimpleMDBQueue, Queue name SIMPLE.MDB.QUEUE.
Define an MQ queue, name SIMPLE.MDB.QUEUE.
Deploy the EAR file. During the deployment, I'm asked to enter binding information. I select Activation Specification, then point the Target Resource JNDI Name and Destination JNDI name at the activation spec and queue, respectively.
(The MDB code has no annotations.) At this point, the app points to the spec and queue, and the spec points to the queue - belt and suspenders. Naturally, I imagine that the app therefore knows about the queue. Full of hope, I put a message on the queue, and ... nothing. The onMessage event is supposed to use System.out to log a message. I see no message.
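For context, the bean itself is about as simple as an annotation-free (EJB 2.1-style) MDB gets; mine looks roughly like this (class and package names are illustrative):

package com.example;

import javax.ejb.EJBException;
import javax.ejb.MessageDrivenBean;
import javax.ejb.MessageDrivenContext;
import javax.jms.JMSException;
import javax.jms.Message;
import javax.jms.MessageListener;
import javax.jms.TextMessage;

public class SimpleMDB implements MessageDrivenBean, MessageListener {

    private MessageDrivenContext ctx;

    public void setMessageDrivenContext(MessageDrivenContext ctx) {
        this.ctx = ctx;
    }

    public void ejbCreate() {
    }

    public void ejbRemove() {
    }

    // invoked by the container for each message arriving on the bound queue
    public void onMessage(Message msg) {
        try {
            if (msg instanceof TextMessage) {
                System.out.println("Hello, world: " + ((TextMessage) msg).getText());
            } else {
                System.out.println("Hello, world: " + msg.getJMSMessageID());
            }
        } catch (JMSException e) {
            throw new EJBException(e);
        }
    }
}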
Clear documentation on this is conspicuous by its absence. Google gives LOTS of results, but none of them details how the configuration all fits together. There's lots of hand-waving about ibm-ejb-jar-bnd.xmi, but examples of the file are arcane, full of opaque numbers with no explanation about how they were generated, or how they relate to other parts of the configuration.
For goodness' sake. All I want to do is deploy an MDB, and have it write "Hello, world" when I put a message to a queue. I'm using vi and ant as my development and build tools. Can anybody out there give me an idea about what I'm missing?
Edit: "zos" tag added.
I found the problem. It's specific to WebSphere running on z/OS. For an activation spec to be fully available in that environment, the Control Region Adjunct (CRA) process must be started. I told WAS to start it up, recycled the app server, and lo! My MDB started responding.
To make the CRA start via the WebSphere Admin Console, go to ...
Application servers > [server name] > Communications > Messaging > WebSphere MQ CRA Settings
... and check the box that says "Start CRA". Hit OK, save it to the master configuration, and to make the CRA actually start, bring the app server down and back up. (This is for WAS 7.0.)
Thanks to everyone for their time and thoughtspace.
Have a quick look at this and see if there is anything here that helps you.
http://publib.boulder.ibm.com/infocenter/ieduasst/v1r1m0/topic/com.ibm.iea.wasfpejb/wasfpejb/6.1/DevelopmentTools/WASv61_EJB3FP_MDBLab.pdf
I haven't played with this for the last year, so I am not able to comment straight away, but I thought the PDF might be of some assistance to you.
HTH
Manglu