WebLogic 12c - JMS server suddenly stopped processing messages

Team,
We are facing a strange issue in our webservice application.
It has six WebLogic managed instances: four (m01, m02, m04, m05) handle web service requests and post messages to JMS queues, and two (m03, m06) are JMS instances hosting the MDB components that actually process the messages from the queues.
We have observed that one of the JMS instances (m06) suddenly stops processing messages, without any errors in the application or server logs. The connection factory stops responding, which also causes hogging threads in the service instances while they post messages to, and browse messages from, the JMS queues. We are not able to see any issue in the thread dumps either.
On top of this, when we try to stop the m06 instance it does not go down; eventually we have to kill the process and restart the instance to resolve the issue. It then works fine for a few days before the issue resurfaces.
We are using WebLogic 12c.
Has anyone faced this kind of issue before, or does anyone have any idea what could have gone wrong? Your inputs are greatly appreciated.
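For context, the posting path from the service instances looks roughly like the sketch below. This is only an illustration: the JNDI names, host, and port are invented, and the real ones live in the domain's JMS module configuration. A hang in the lookup or the send here is exactly what surfaces as hogging threads once the connection factory stops responding.

```java
import java.util.Hashtable;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class QueuePoster {
    public static void main(String[] args) throws Exception {
        Hashtable<String, String> env = new Hashtable<>();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "weblogic.jndi.WLInitialContextFactory");
        env.put(Context.PROVIDER_URL, "t3://m06-host:7001"); // hypothetical host/port
        InitialContext ctx = new InitialContext(env);

        // Hypothetical JNDI names; the real ones come from the JMS module config.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/MyCF");
        Queue queue = (Queue) ctx.lookup("jms/RequestQueue");

        Connection con = cf.createConnection();
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(queue);
            producer.send(session.createTextMessage("payload"));
        } finally {
            con.close(); // a hang in lookup/send/close is what shows up as stuck threads
        }
    }
}
```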

If I were you, I'd start by creating an error queue to get rid of any "poisoned" messages. More information can be found here: http://middlewaremagic.com/weblogic/?p=4670. Then check the error queue and the content of the messages that land there.
Secondly, try turning the affected instance (m06) off entirely. If the bottleneck/errors do not reappear on some other node, check the m06 instance configuration and compare it with the other nodes; the issue will almost certainly be somewhere in there.
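Not from the original answer, but a minimal sketch of the kind of poison-message guard this setup enables, assuming the broker populates the standard JMSXDeliveryCount property (WebLogic does) and using invented names. With a redelivery limit and error destination configured on the queue itself, WebLogic moves such messages aside automatically; this bean-level check is just a belt-and-braces illustration:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Illustrative MDB: consume-and-skip messages that have already failed
// repeatedly, so one bad message cannot wedge the consumer instance.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue")
    // destination wiring omitted; typically declared in weblogic-ejb-jar.xml
})
public class RequestProcessorBean implements MessageListener {
    private static final int MAX_DELIVERIES = 5; // illustrative threshold

    @Override
    public void onMessage(Message message) {
        try {
            int deliveries = message.getIntProperty("JMSXDeliveryCount");
            if (deliveries > MAX_DELIVERIES) {
                System.err.println("Poison message, skipping: " + message.getJMSMessageID());
                return; // return normally so the message is consumed, not redelivered
            }
            process(message);
        } catch (Exception e) {
            throw new RuntimeException(e); // rolls back the container tx -> redelivery
        }
    }

    private void process(Message message) { /* real work here */ }
}
```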

Related

Message count is not zero even after all messages are consumed and acknowledged

We have containerized ActiveMQ Artemis 2.16.0 and deployed it as a K8s Deployment, with KEDA scaling our consumers based on the queue's message count.
We use STOMP via the stomp.py Python module. The ack mode is set to client-individual, with consumerWindowSize = 0 on the connection, and we acknowledge each message promptly as soon as we read it.
The problem is that sometimes the message count shown in the web console does not drop to zero even after all the messages have actually been consumed and acknowledged. When I browse the queue, I don't see any messages in it. This causes KEDA to spin up pods unnecessarily. Please refer to the screenshots attached to the JIRA for this issue.
I fixed the issue in my application code. My requirement was that each queue listener should consume only one message and exit gracefully. So, immediately after sending the ACK for the consumed message, I disconnected the connection instead of waiting out the sleep duration before disconnecting.
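Not the poster's actual code (the fix was in stomp.py), but the same ack-then-disconnect pattern sketched with the JMS client API that Artemis also exposes; the broker URL and queue name are assumptions:

```java
import javax.jms.*;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class OneShotConsumer {
    public static void main(String[] args) throws Exception {
        ConnectionFactory cf = new ActiveMQConnectionFactory("tcp://localhost:61616"); // assumed URL
        Connection connection = cf.createConnection();
        try {
            Session session = connection.createSession(false, Session.CLIENT_ACKNOWLEDGE);
            MessageConsumer consumer = session.createConsumer(session.createQueue("work-queue")); // assumed name
            connection.start();

            Message message = consumer.receive(30_000); // wait up to 30s for one message
            if (message != null) {
                // ... process the message ...
                message.acknowledge(); // ack the single consumed message
            }
        } finally {
            connection.close(); // disconnect right away; do not linger after the ack
        }
    }
}
```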
Thanks, Justin, for spending time on this.

OpenShift message queue

I'd like to host apps on OpenShift that use a queue to communicate with each other.
One kind of app (producers) will put data onto the queue, and another kind (consumers) will process the messages. My question is how to implement the message queue. I've thought about two approaches:
Create an app with JBoss, HornetQ, and the consumer, and create a proxy port for HornetQ so that producers can send messages there.
Create an app with JBoss and the consumer, and make JBoss's HornetQ available to producers. This sounds a bit better to me, but I don't know whether I can make the queue available to producers, or how it works if there are multiple consumer instances on different nodes (and different JBoss instances).
I'm not sure how else to answer you besides showing you a link on how to use WildFly. You can just use the WildFly cartridge:
https://www.openshift.com/quickstarts/wildfly-8
If you provide some extra context I can try to enrich the answer. I need to know what your problem is and what's not working.
If you just want to know how to configure WildFly with HornetQ, the WildFly cartridge I posted is the way to go.
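For the second approach (exposing the app's HornetQ to external producers), a remote producer would look roughly like the sketch below. This assumes WildFly 8 defaults (remote JNDI over http-remoting on port 8080, an application-realm user, and a queue the server binds under java:jboss/exported/ so it is visible remotely); the host, credentials, and queue name are invented:

```java
import java.util.Properties;
import javax.jms.*;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteProducer {
    public static void main(String[] args) throws Exception {
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY, "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "http-remoting://my-app.example.com:8080"); // hypothetical host
        env.put(Context.SECURITY_PRINCIPAL, "appuser"); // hypothetical application user
        env.put(Context.SECURITY_CREDENTIALS, "secret");
        InitialContext ctx = new InitialContext(env);

        // Default WildFly remote JNDI names; the queue must be exported on the server.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("jms/RemoteConnectionFactory");
        Queue queue = (Queue) ctx.lookup("jms/queue/workQueue"); // hypothetical queue

        Connection con = cf.createConnection("appuser", "secret");
        try {
            Session session = con.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(queue).send(session.createTextMessage("hello"));
        } finally {
            con.close();
        }
    }
}
```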

Queue Name not declared, yet it exists in JBoss JMS MQ - How?

I have declared only one queue, called MyQueue1, in JBossMQ's jbossmq-destinations-service.xml. But when I view the list of queues via the JBoss JMX Console, I see 5 queues there.
How did the other queues get created?
I have been told that queues get created automatically when we attempt to write to a non-existent queue - is this possible? I am not able to replicate this behavior on my desktop, though.
I am using JBoss 4.0.x.
Please advise... Thanks.

JBoss Messaging Queue Stuck, with Remote Interface and MDB Consumer

I am trying to diagnose and fix what is likely an environmental problem. We have dev, SI, and production servers, and they have been set up the same way for several years. One of the environments has stopped working for a particular JBM queue, and I have so far been unable to figure out why.
What I am seeing via the JMX Console is that the messages are "stuck" in the delivering state. The MessageCount and DeliveringCount increment each time a message is sent through the queue. The consumer's onMessage() is invoked and outputs debug messages into the log4j log; however, I don't think it ever completes the request.
This is a persisted JBM setup. Restarting the JBoss server doesn't help. Clearing out, or even dropping, the JBM_* tables does not help.
The jbm_msg_ref entries have null transaction_ids and the state is 'C', which looks like they were put into this state by the "ROLLBACK_MESSAGE_REF2" prepared statement from the oracle-persistence-service.xml we use.
The MaxPoolSize for the MDB consumer is 15, and this is also the maximum number of messages that are received by the consumer instances. After 15, it seems that the queue "fills up" and there are no longer any available consumer instances to receive messages, as illustrated by the sketch below.
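To illustrate that failure mode (this is not the poster's actual bean, just a sketch): if every pooled MDB instance blocks inside onMessage() on a slow resource, the container has no free instance left once MaxPoolSize deliveries are in flight, so the queue appears to fill up while DeliveringCount stays pinned:

```java
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Illustrative only: an MDB whose onMessage() never returns because a
// downstream call (a stand-in for the slow Oracle database) blocks forever.
// With MaxPoolSize=15, the 16th message has no free instance to go to.
@MessageDriven
public class StuckConsumerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        log("received " + safeId(message)); // the debug line still reaches the log...
        blockOnSlowDatabase();              // ...but the method never completes, so the
                                            // message is never acknowledged
    }

    private void blockOnSlowDatabase() {
        try {
            Thread.sleep(Long.MAX_VALUE); // stand-in for a hung JDBC call
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    private void log(String s) { System.out.println(s); }

    private String safeId(Message m) {
        try { return m.getJMSMessageID(); } catch (Exception e) { return "?"; }
    }
}
```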
I am looking for ideas or suggestions about how to diagnose and fix the problem. I've been googling and trying things for a few days with little to show for it. There are plenty of JIRA tickets for this fairly old version of JBM, but other instances of the same setup work fine, so I suspect there is some sort of network, race-condition, or environment issue on this one server/DB combo.
JBoss Remoting 4.3.0.GA
JBoss Messaging 1.4.0.SP3
JBoss 4.3.0.GA
Thanks!
The issue was identified as being caused by Oracle database problems. The database instance was bounced to resolve it. Most likely, database performance was slow enough to cause a timing issue with message acknowledgement.

How can we have a JBoss MDB retry its connection if it fails at startup?

We have a server app that is deployed across two server machines, each running JBoss 4.2.2. We use JBoss Messaging with MDBs to communicate between the systems. Currently we need to start the servers in a very specific order so that JBoss can connect properly. If a server starts and doesn't see its resources, it never tries again. This is problematic and time-consuming in testing, when we're bouncing servers constantly. We believe that if we could specify a retry flag, JBoss could reattempt to get the connection.
Is there a flag/config option in JBoss that would make it reattempt to obtain JMS connections after a failure at startup?
I am quite new to JMS technology, so it is entirely possible that I have mixed up some terms here. Since this capability is to be used in-house, experimental or deprecated options are acceptable.
Edit: The problem is that a consumer starts up with no producer available and subsequently fails, never to try again. If a consumer and producer are both up and the producer then dies, the consumer will keep retrying until the producer comes back.
I'm 95% sure that JBoss MDBs do retry connections like that. If your MDBs are not receiving messages as you expect, I think something else is wrong. Do the MDBs depend on any other resources? Perhaps posting your EJB descriptors (META-INF/ejb-jar.xml and META-INF/jboss.xml) would help.
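For reference, a minimal EJB3-style MDB as it might look on JBoss 4.2.2 (the queue name is invented); the equivalent wiring can also live in META-INF/ejb-jar.xml and META-INF/jboss.xml, which is why posting those descriptors would help:

```java
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Minimal consumer: the container (not this code) owns the JMS connection,
// so reconnect/retry behavior is a container concern, not the bean's.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType", propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination", propertyValue = "queue/MyQueue") // hypothetical
})
public class ExampleConsumerBean implements MessageListener {
    @Override
    public void onMessage(Message message) {
        // process the message; throwing a RuntimeException triggers redelivery
    }
}
```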