Is there a way to tell JMS in JBoss to delay processing of messages already in the persistent queue for a while, e.g. 2 minutes, while JBoss starts?
As it is right now, when we restart JBoss, JMS starts dispatching messages to the MessageListeners even before JBoss has started properly.
We're running JBoss 4.2.3.
I have found an annotation called @Depends with which an EJB or other service can declare what it depends on:
http://docs.jboss.org/ejb3/docs/reference/build/reference/en/html/jboss_extensions.html
To actually start an EJB only once the server is up and listening, the BarrierController works best:
http://community.jboss.org/wiki/BarrierController
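For example (only a sketch; the barrier ObjectName and queue name below are placeholders, not values from your setup), an MDB can be gated on the barrier MBean that the BarrierController publishes, so the container does not activate it until the barrier starts:

```java
// Sketch: an MDB whose activation waits for a BarrierController barrier MBean.
// Both the destination and the barrier ObjectName are illustrative.
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

import org.jboss.annotation.ejb.Depends;

@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/MyQueue")
})
@Depends("jboss:service=StartupBarrier,type=Barrier") // activation is deferred until this MBean is started
public class DelayedStartMDB implements MessageListener {
    public void onMessage(Message message) {
        // By the time messages are dispatched here, the barrier (and hence the server) is up.
    }
}
```

The same @Depends value can also list plain MBean service names if you only need to wait for a specific subsystem rather than the whole server.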
Team,
We are facing a strange issue in our webservice application.
It has 6 WebLogic managed instances: 4 (m01, m02, m04, m05) handle the web service requests and post messages to JMS queues, and 2 (m03, m06) are JMS instances hosting the MDB components that actually process the messages from the queues.
We have observed that one of the JMS instances (m06) suddenly stops processing messages, with no errors in the application or server logs. We have also observed that the connection factory stops responding, which causes hogged threads in the service instances while posting to and searching the JMS queues. We are not able to see any issue from the thread dumps either.
On top of that, when we try to stop the m06 instance it does not go down; we eventually have to kill the instance process and start it again to resolve the issue. It then works fine for a few days before the issue resurfaces.
We are using WebLogic 12c.
Has anyone faced this kind of issue before, or does anyone have an idea what could have gone wrong? Your inputs are greatly appreciated.
If I were you, I'd start by creating an error queue to get rid of any "poisoned" messages. More information can be found here: http://middlewaremagic.com/weblogic/?p=4670. Then check the error queue and the message content there.
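A plain JMS QueueBrowser is enough for that check; in this sketch the t3 URL and the JNDI names of the connection factory and error queue are assumptions and must be replaced with your own configuration:

```java
// Sketch: browse an error queue on a WebLogic JMS server without consuming the messages.
// The provider URL and JNDI names are illustrative.
import java.util.Enumeration;
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Message;
import javax.jms.Queue;
import javax.jms.QueueBrowser;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class ErrorQueueBrowser {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://m06-host:7001"); // assumed address of the JMS instance

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyConnectionFactory"); // assumed
        Queue errorQueue = (Queue) ctx.lookup("jms/MyErrorQueue");                        // assumed

        Connection connection = cf.createConnection();
        connection.start();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            QueueBrowser browser = session.createBrowser(errorQueue);
            Enumeration<?> messages = browser.getEnumeration();
            while (messages.hasMoreElements()) {
                Message m = (Message) messages.nextElement();
                System.out.println(m.getJMSMessageID() + " redelivered=" + m.getJMSRedelivered());
            }
        } finally {
            connection.close();
        }
    }
}
```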
Secondly, try turning the mentioned instance (m06) off entirely. If the bottleneck/errors do not appear on some other node, check the m06 instance configuration and compare it with the other nodes; the issue will most likely be somewhere in there.
I'd like to host apps on OpenShift that use a queue to communicate with each other.
One kind of app, the producer, will put data on the queue, and another kind, the consumer, will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it works if there are multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use WildFly. You can just use the WildFly cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide me some extra context I can try to enrich the answer a bit. I need to know what your problem is and what's not working.
If you just want to know how to configure WildFly with HornetQ, the WildFly cartridge I posted is the way to go.
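For the producer side, a standalone remote client would typically look something like the sketch below; the provider URL, credentials and JNDI names are assumptions that depend entirely on how the gear, the port forwarding and the HornetQ destinations are configured:

```java
// Sketch: a standalone producer sending to a HornetQ queue exposed by WildFly over remote JNDI.
// Host, port, credentials and JNDI names are illustrative.
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteProducer {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://consumer-app.example.com:8080"); // assumed endpoint
        env.put(Context.SECURITY_PRINCIPAL, "appuser");        // assumed application user
        env.put(Context.SECURITY_CREDENTIALS, "apppassword");

        Context ctx = new InitialContext(env);
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/tasks"); // assumed exported queue

        Connection connection = cf.createConnection("appuser", "apppassword");
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("work item from producer"));
        } finally {
            connection.close();
        }
    }
}
```

Note that only JNDI entries the server explicitly exports for remote clients are visible this way; purely in-VM names are not.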
We have two applications running on WebLogic and JBoss AS respectively. We would like to use HornetQ as an intermediate server for asynchronous messaging.
I would like to write a publisher/subscriber setup that sends a message whenever data is inserted/modified/deleted (via JPA), or for whatever other kinds of messages there may be.
Here the producer will be WebLogic and the consumer will be JBoss. How can I achieve this?
On the WLS end, define a Foreign JMS Server and point it to the HornetQ topic. Your application on WLS will publish messages to the foreign JMS destination, and your application on JBoss can consume them.
When defining the Foreign JMS Server, make sure you provide user credentials for both the topic and the JNDI lookup as needed.
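On the WLS side the publishing code is then plain JMS against local JNDI; in this sketch the two JNDI names stand for whatever local names you assign when mapping the HornetQ connection factory and topic in the Foreign JMS Server configuration (both names are illustrative):

```java
// Sketch: publish to a HornetQ topic through WebLogic's Foreign JMS Server mapping.
// "jms/foreignCF" and "jms/foreignTopic" stand for the local JNDI names you configure.
import javax.jms.Session;
import javax.jms.Topic;
import javax.jms.TopicConnection;
import javax.jms.TopicConnectionFactory;
import javax.jms.TopicPublisher;
import javax.jms.TopicSession;
import javax.naming.InitialContext;

public class EntityChangePublisher {
    public void publish(String payload) throws Exception {
        InitialContext ctx = new InitialContext(); // in-container lookup, no properties needed
        TopicConnectionFactory cf = (TopicConnectionFactory) ctx.lookup("jms/foreignCF");
        Topic topic = (Topic) ctx.lookup("jms/foreignTopic");

        TopicConnection connection = cf.createTopicConnection();
        try {
            TopicSession session = connection.createTopicSession(false, Session.AUTO_ACKNOWLEDGE);
            TopicPublisher publisher = session.createPublisher(topic);
            publisher.publish(session.createTextMessage(payload)); // e.g. an "entity changed" event
        } finally {
            connection.close();
        }
    }
}
```

On the JBoss side, an MDB subscribed to the same HornetQ topic then receives these messages.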
I think you could use a JMS bridge between WLS and JBoss:
http://docs.jboss.org/hornetq/2.3.0.CR2/docs/user-manual/html/jms-bridge.html
I have declared only one queue, called MyQueue1, in JBossMQ (jbossmq-destinations-service.xml), but when I view the list of queues via the JBoss JMX Console, I see 5 queues there.
How did the other queues get created?
I have been told that queues get created automatically when we attempt to write to a non-existent queue - is this possible? I am not able to replicate this behavior on my desktop, though.
I am using JBoss 4.0.x.
Please advise... Thanks.
We have a server app that is deployed across two server machines, each running JBoss 4.2.2. We use JBoss Messaging with MDBs to communicate between the systems. Currently we need to start the servers in a very specific order so that JBoss can connect properly. If a server starts and doesn't see its resources, it never tries again. This is problematic and time consuming in testing, when we're bouncing servers constantly. We believe that JBoss could reattempt the connection if we could specify some kind of retry flag.
Is there a flag/config option in JBoss that would make it reattempt to obtain JMS connections after a failure at startup?
I am quite new to JMS technology, so it is entirely possible that I have mixed up some terms here. Since this capability is to be used in house, experimental or deprecated options are acceptable.
Edit: The problem is that a consumer starts up with no producer available and subsequently fails, never to try again. If both a consumer and a producer are up and the producer then dies, the consumer will keep retrying until the producer comes back.
I'm 95% sure that JBoss MDBs do retry connections like that. If your MDBs are not receiving messages as you expect, I think something else is wrong. Do the MDBs depend on any other resources? Perhaps posting your EJB descriptors (META-INF/ejb-jar.xml and META-INF/jboss.xml) would help.