Messages pending in subscriber queue - JBoss 5.x

I am using JBoss 5.1 to deploy a message-driven bean (MDB) that subscribes to messages from a third-party queue.
Around 16 messages were posted to that queue, but they remained pending in our subscriber queue. When I restarted the server, the messages were picked up immediately.
From what I have analysed, I think maxSize and maxSession could have affected this, as both are 15. But if there was a real issue, I do not understand how it was solved just by restarting.
The logs were at ERROR level and I did not get the full stack trace.
This is a snippet of the error log.
[2012-10-30 17:01:00,228] [MQQueueAgent (GQH1_PLANNING_MDM_001)]
[ERROR] STDERR: 2012.10.30 17:01:00 MQJMS1023E rollback failed
[2012-10-30 17:01:00,228] [exceptionDelivery0] [WARN ]
org.jboss.resource.adapter.jms.inflow.JmsActivation: Failure in jms activation
org.jboss.resource.adapter.jms.inflow.JmsActivationSpec@85d0d(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter@b21aae
destination=remotewsmq/NOTIFICATION_PLANNING_MDM_001.SUBQ
destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=RemoteWSMQJMSProvider
user=null maxMessages=1 minSession=1 maxSession=5 keepAlive=60000 useDLQ=false)
GQH1_PLANNING_MDM_001: The name of the queue used for subscribing.
The files that I use to configure the properties of the MDBs are as follows.
1. ejb3-interceptors-aop.xml
<domain name="Message Driven Bean" extends="Intercepted Bean" inheritBindings="true">
<bind pointcut="execution(public * *->*(..))">
<interceptor-ref name="org.jboss.ejb3.security.AuthenticationInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.security.RunAsSecurityInterceptorFactory"/>
</bind>
<!-- TODO: Authorization? -->
<bind pointcut="execution(public * *->*(..))">
<interceptor-ref name="org.jboss.ejb3.tx.CMTTxInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.stateless.StatelessInstanceInterceptor"/>
<interceptor-ref name="org.jboss.ejb3.tx.BMTTxInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.AllowedOperationsInterceptor"/>
<interceptor-ref name="org.jboss.ejb3.entity.TransactionScopedEntityManagerInterceptor"/>
<!-- interceptor-ref name="org.jboss.ejb3.interceptor.EJB3InterceptorsFactory"/ -->
<stack-ref name="EJBInterceptors"/>
</bind>
<annotation expr="class(*) AND !class(#org.jboss.ejb3.annotation.Pool)">
#org.jboss.ejb3.annotation.Pool (value="StrictMaxPool", maxSize=15, timeout=10000)
</annotation>
</domain>
2. standardjboss.xml
<invoker-proxy-binding>
<name>message-driven-bean</name>
<invoker-mbean>default</invoker-mbean>
<proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
<proxy-factory-config>
<JMSProviderAdapterJNDI>DefaultJMSProvider</JMSProviderAdapterJNDI>
<ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
<CreateJBossMQDestination>false</CreateJBossMQDestination>
<!-- WARN: Don't set this to zero until a bug in the pooled executor is fixed -->
<MinimumSize>1</MinimumSize>
<MaximumSize>15</MaximumSize>
<KeepAliveMillis>30000</KeepAliveMillis>
<MaxMessages>1</MaxMessages>
<MDBConfig>
<ReconnectIntervalSec>10</ReconnectIntervalSec>
<DLQConfig>
<DestinationQueue>queue/DLQ</DestinationQueue>
<MaxTimesRedelivered>10</MaxTimesRedelivered>
<TimeToLive>0</TimeToLive>
</DLQConfig>
</MDBConfig>
</proxy-factory-config>
</invoker-proxy-binding>
<activation-config-property>
<activation-config-property-name>maxSession</activation-config-property-name>
<activation-config-property-value>15</activation-config-property-value>
</activation-config-property>
3. jms-ds.xml
<?xml version="1.0" encoding="UTF-8"?>
<connection-factories>
<!-- ==================================================================== -->
<!-- JMS Stuff -->
<!-- ==================================================================== -->
<!--
The JMS provider loader. Currently pointing to a non-clustered ConnectionFactory. Need to
be replaced with a clustered non-load-balanced ConnectionFactory when it becomes available.
See http://jira.jboss.org/jira/browse/JBMESSAGING-843.
-->
<mbean code="org.jboss.jms.jndi.JMSProviderLoader"
name="jboss.messaging:service=JMSProviderLoader,name=JMSProvider">
<attribute name="ProviderName">DefaultJMSProvider</attribute>
<attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
<attribute name="FactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="QueueFactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="TopicFactoryRef">java:/XAConnectionFactory</attribute>
</mbean>
<!-- JMS XA Resource adapter, use this to get transacted JMS in beans -->
<tx-connection-factory>
<jndi-name>JmsXA</jndi-name>
<xa-transaction/>
<rar-name>jms-ra.rar</rar-name>
<connection-definition>org.jboss.resource.adapter.jms.JmsConnectionFactory</connection-definition>
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
<max-pool-size>20</max-pool-size>
<security-domain-and-application>JmsXARealm</security-domain-and-application>
<depends>jboss.messaging:service=ServerPeer</depends>
</tx-connection-factory>
</connection-factories>
Please help.

If the listener did not try to reconnect, then it might be the pending messages that caused it to fail.

According to the error, a transaction ROLLBACK call failed. After the failure, the queue manager probably held those messages in an outstanding unit of work. Restarting the server would have closed the connection at which point the queue manager will have rolled back the transaction on behalf of the application. On restart, the application will create a new UOW and retrieve the messages.
Look in WebSphere MQ's queue manager error logs and global error logs to determine whether the error was caused by a resource shortage. It may be necessary to increase the size of the queue manager transaction logs or to tune transaction parameters such as MAXUOW.
You may also need to update the MQ client or queue manager version. According to this Technote, the WebSphere MQ JMS classes were updated as of 6.0.2.3 to fix a bug that resulted in MQJMS1023E errors. If you need to update the client, it is available as a free download as SupportPac MQC75. A new client is able to run against any back-level queue manager. After upgrading, the application benefits from the bug fixes and performance enhancements of the new client code, which provides API functionality appropriate to the version of queue manager it connects to. Which version of the WebSphere MQ JMS client is currently installed? Which version of the WebSphere MQ queue manager is currently installed?
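To make the rollback mechanics concrete, here is a minimal sketch of a transacted MDB of the kind described (the class name is hypothetical; the activation properties mirror the activation spec in the log above):
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB mirroring the activation spec from the log above.
@MessageDriven(activationConfig = {
    @ActivationConfigProperty(propertyName = "destinationType",
            propertyValue = "javax.jms.Queue"),
    @ActivationConfigProperty(propertyName = "destination",
            propertyValue = "remotewsmq/NOTIFICATION_PLANNING_MDM_001.SUBQ")
})
public class NotificationMdb implements MessageListener {
    public void onMessage(Message message) {
        // With container-managed transactions (the default), throwing an
        // unchecked exception here marks the transaction for rollback.
        // If that rollback itself fails (MQJMS1023E), the consumed messages
        // remain in the queue manager's outstanding unit of work; closing
        // the connection (e.g. on server restart) lets the queue manager
        // roll them back, which is why the restart released the messages.

        // ... process the message ...
    }
}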

Related

JMS 2.0 durable topic subscriptions best practice in Kubernetes

We are creating a Mule application that will run in a container on Kubernetes, as part of a replica set, and connect to JMS 2.0 Red Hat AMQ 7 (based on ActiveMQ Artemis).
The pom.xml has been configured to pull in the JMS client:
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>artemis-jms-client-all</artifactId>
<version>2.10.1</version>
</dependency>
And the JMS connection is configured as:
<jms:config name="JMS_Config" doc:name="JMS Config" doc:id="8621b07d-b203-463e-bbbe-76eb03741a61" >
<jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}" clientId="${mq.client.id}">
<reconnection >
<reconnect-forever frequency="${mq.reconnection.frequency}" />
</reconnection>
<jms:connection-factory >
<jms:jndi-connection-factory connectionFactoryJndiName="ConnectionFactory" >
<jms:name-resolver-builder jndiInitialContextFactory="org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" jndiProviderUrl="${mq.brokerurl}"/>
</jms:jndi-connection-factory>
</jms:connection-factory>
</jms:generic-connection>
<jms:consumer-config>
<jms:consumer-type >
<jms:topic-consumer shared="true" durable="true"/>
</jms:consumer-type>
</jms:consumer-config>
<jms:producer-config persistentDelivery="true"/>
</jms:config>
Then in the JMS listener component:
<jms:listener doc:name="EMS JMS Listener" doc:id="318b4f08-daf6-41f4-944b-3ec1420d5c12" config-ref="JMS_Config" destination="${mq.incoming.queue}" ackMode="AUTO" >
<jms:consumer-type >
<jms:topic-consumer shared="true" subscriptionName="${mq.sub.name}" durable="true"/>
</jms:consumer-type>
<jms:response sendCorrelationId="ALWAYS" />
</jms:listener>
The variables are set as:
mq.client.id=client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40
mq.sub.name=my-sub
mq.incoming.queue=my-queue
Is this the best way to configure the client? We ask because we have seen errors in the logs regarding connections to the AMQ server when deployed to K8s:
javax.jms.InvalidClientIDException: client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40 was already set into another connection
In JMS 2.0 you don't have to set the client identifier when creating a shared durable subscription. However, if you do set the client identifier then it must be unique per connection. For whatever reason (e.g. due to Mule or perhaps K8s) multiple connections are being created and since each connection is using the same client identifier you're receiving the javax.jms.InvalidClientIDException.
Remove clientId="${mq.client.id}" from your configuration and the javax.jms.InvalidClientIDException should go away.
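For reference, a minimal sketch of the same subscription using the plain JMS 2.0 API rather than Mule config (the broker URL and credentials are placeholders; the destination and subscription names come from the question). Note that no client ID is set anywhere:
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedDurableSubscriber {
    public static void main(String[] args) {
        ActiveMQConnectionFactory cf =
                new ActiveMQConnectionFactory("tcp://localhost:61616");
        // No clientId is set on the connection: for shared durable
        // subscriptions the subscription name alone identifies the
        // subscription, so every replica can connect concurrently.
        try (JMSContext context = cf.createContext("user", "password")) {
            Topic topic = context.createTopic("my-queue");
            JMSConsumer consumer =
                    context.createSharedDurableConsumer(topic, "my-sub");
            String body = consumer.receiveBody(String.class, 10000);
            System.out.println("Received: " + body);
        }
    }
}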

How to persist console data in a cluster of ActiveMQ [OpenShift]

I have a cluster of ActiveMQ deployed in master/slave mode on OpenShift, but I have a problem. I can persist the data of topics and queues without any problem: when a pod goes down, I do not lose the messages. But I do lose the data shown in the ActiveMQ console. I have attached my activemq.xml below.
Does anyone have the same problem?
Thanks in advance.
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" dataDirectory="${activemq.data}" persistent="true" deleteAllMessagesOnStartup="false" useShutdownHook="false" schedulerSupport="true" >
<!--
For better performances use VM cursor and small memory limit.
For more information, see:
http://activemq.apache.org/message-cursors.html
Also, if your producer is "hanging", it's probably due to producer flow control.
For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VirtualTopic.>" prefix="Consumer.*." selectorAware="false"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
If using ActiveMQ embedded - the following limits could safely be used:
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
The console does not offer any sort of data persistence; the statistics are mostly all reset to zero on restart, other than those relating to queue depth or other durable topic subscription metrics.

JBoss EAP 6.3: HQ119031: Unable to validate user: null

ERROR HQ224018: Failed to create session: HornetQException[errorType=SECURITY_EXCEPTION message=HQ119031: Unable to validate user: null]
This happens when the JBoss EAP 6.3 server is about to receive a JMS message. The user is successfully authenticated by the remoting subsystem, so why is the user null? How can I overcome this error?
The EAP documentation encourages you to:
(...) set allowClientLogin to true (...) If you would like HornetQ to
authenticate using the propagated security then set the authoriseOnClientLogin to true also.
But due to the HORNETQ-883 bug, you have to turn off security for messaging:
<hornetq-server>
<!-- … -->
<security-enabled>false</security-enabled>
<!-- … -->
</hornetq-server>
In short, if your JMS client is connecting from within your JEE container and has no need to supply credentials to connect to JMS (when calling factory.createConnection()), then obtain connections using the InVM Connector. The InVM Connector doesn't require credentials when opening a connection to JMS (since the caller is within the JVM instance, hence the name) but still enforces security for remote JMS clients. Connectors and ConnectionFactories are configured in the urn:jboss:domain:messaging subsystem of standalone.xml.
Otherwise, if you don't use the InVM Connector with security enabled, you'll probably need to run the add-user script in [jboss-home]/bin to add client credentials to the application-users.properties file and supply those credentials when calling factory.createConnection(username, pwd), for both remote and InVM clients connecting via remotely available factories.
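For a remote client, the flow would look roughly like this (a sketch based on EAP 6 defaults; the host, port, and credentials are assumptions, and the user must already have been added via the add-user script):
import java.util.Properties;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.naming.Context;
import javax.naming.InitialContext;

public class RemoteJmsClient {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port/credentials for the EAP 6 remoting connector.
        Properties env = new Properties();
        env.put(Context.INITIAL_CONTEXT_FACTORY,
                "org.jboss.naming.remote.client.InitialContextFactory");
        env.put(Context.PROVIDER_URL, "remote://localhost:4447");
        env.put(Context.SECURITY_PRINCIPAL, "appuser");
        env.put(Context.SECURITY_CREDENTIALS, "apppassword");
        InitialContext context = new InitialContext(env);

        ConnectionFactory factory =
                (ConnectionFactory) context.lookup("jms/RemoteConnectionFactory");
        // Credentials are required here because the remote connector
        // enforces security, unlike the InVM connector.
        Connection connection = factory.createConnection("appuser", "apppassword");
        connection.close();
    }
}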
Gory Details
In our JBoss EAP 6.4 instance, security needs to remain enabled for remote connections (outside the JVM) so our <security-settings> for HornetQ are specified appropriately. Consequently, the JMS ConnectionFactory dictates the level of security based on which Connector it is configured with.
<hornetq-server>
<connectors>
<!-- additional connectors here -->
...
<in-vm-connector name="in-vm" server-id="0"/>
</connectors>
<jms-connection-factories>
<connection-factory name="InVmConnectionFactory">
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<!-- JNDI bindings here -->
<entry name="java:/ConnectionFactory" />
</entries>
</connection-factory>
...
</jms-connection-factories>
So, in the JMS client, apply the standard connection boilerplate:
InitialContext context = new InitialContext();
javax.jms.ConnectionFactory factory = (ConnectionFactory) context.lookup("java:/ConnectionFactory");
and when creating the connection:
javax.jms.Connection connection = factory.createConnection();
Transacted JMS
For Transaction-aware in-container client connections to JMS, our InVM ConnectionFactory is configured like this:
<jms-connection-factories>
...
<pooled-connection-factory name="hornetq-ra">
<transaction mode="xa"/>
<connectors>
<connector-ref connector-name="in-vm"/>
</connectors>
<entries>
<entry name="java:/JmsXA"/>
</entries>
</pooled-connection-factory>
</jms-connection-factories>
Obtain the transacted JMS ConnectionFactory as follows:
InitialContext context = new InitialContext();
javax.jms.ConnectionFactory factory = (ConnectionFactory) context.lookup("java:/JmsXA");
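From there, a minimal sketch of a transacted send (the queue name is hypothetical; inside a JTA transaction the session's transacted flag and acknowledge mode are ignored and the work enlists in the surrounding XA transaction):
javax.jms.Queue queue = (Queue) context.lookup("queue/ExampleQueue"); // hypothetical queue

javax.jms.Connection connection = factory.createConnection(); // no credentials needed: InVM connector
try {
    // Arguments are ignored under JTA; the send joins the XA transaction.
    javax.jms.Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
    javax.jms.MessageProducer producer = session.createProducer(queue);
    producer.send(session.createTextMessage("hello"));
} finally {
    connection.close(); // returns the connection to the pool
}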

JBoss destination queue is in CLOSED state

We are using JBoss Application Server version 4.2.3.
After starting the server, one of the bound destination queues, 'testQueue', goes into the CLOSED state.
While checking the logs, we found the following information:
2014-01-07 20:55:49,855 INFO [genericEventJmsContainer-1] - Setup of JMS message listener invoker failed for destination 'JBossQueue[testQueue]' - trying to recover. Cause: The object is closed
javax.jms.IllegalStateException: The object is closed
at org.jboss.jms.client.container.ClosedInterceptor.invoke(ClosedInterceptor.java:159)
at org.jboss.aop.advice.PerInstanceInterceptor.invoke(PerInstanceInterceptor.java:105)
at org.jboss.jms.client.delegate.ClientConsumerDelegate$receive_N8299950230150603585.invokeNext(ClientConsumerDelegate$receive_N8299950230150603585.java)
at org.jboss.jms.client.delegate.ClientConsumerDelegate.receive(ClientConsumerDelegate.java)
at org.jboss.jms.client.JBossMessageConsumer.receive(JBossMessageConsumer.java:86)
at org.springframework.jms.connection.CachedMessageConsumer.receive(CachedMessageConsumer.java:74)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveMessage(AbstractPollingMessageListenerContainer.java:405)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.doReceiveAndExecute(AbstractPollingMessageListenerContainer.java:308)
at org.springframework.jms.listener.AbstractPollingMessageListenerContainer.receiveAndExecute(AbstractPollingMessageListenerContainer.java:261)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.invokeListener(DefaultMessageListenerContainer.java:982)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.executeOngoingLoop(DefaultMessageListenerContainer.java:974)
at org.springframework.jms.listener.DefaultMessageListenerContainer$AsyncMessageListenerInvoker.run(DefaultMessageListenerContainer.java:876)
at java.lang.Thread.run(Thread.java:662)
Could anyone kindly provide us with some input on why a destination goes into the CLOSED state a certain period of time after server start?
It may be bug https://issues.jboss.org/browse/JBESB-1491, which is present in version 4.2.1 and fixed in version 4.3.
Please share the JMS configuration you have done for the queue.
You can also find a related topic at http://docs.jboss.org/jbossas/jboss4guide/r1/html/ch6.chapt.html
Thanks for your comments
<mbean code="org.jboss.jms.server.destination.QueueService" name="jboss.esb.quickstart.destination:service=Queue,name=testQueue" xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
<depends>jboss.messaging:service=PostOffice</depends>
<attribute name="RedeliveryDelay">1000</attribute>
<attribute name="MaxDeliveryAttempts">15</attribute>
<attribute name="Clustered">true</attribute>
</mbean>
Above is a sample configuration we make for a Queue destination.
These queues are defined on JBoss SOA Server version 4.3 and are listened to by components deployed on JBoss Application Server version 4.2.3.

jboss-esb fs-listener jbm message queue overflow

We have a JBoss ESB server that reads files from the file system on a schedule (frequency of 20 s), converts them into ESB messages, and then parses the messages.
There are some other providers/listeners (JMS) and services configured on the ESB servers. When there is an error in one of those services, it affects the above process: the file system provider (gateway) keeps working, but the jms-listener that takes the gateway messages stops working, and lots of messages accumulate in the JBM queue (the jbm_msg Oracle DB table).
Here is the problem: when the server is restarted, messages in the JBM queue are parsed in the ESB for just 20 seconds (the schedule frequency of the fs-provider), then messages are never processed again and CPU usage goes up to 100% and stays there. We believe the fs-provider somehow interrupts the jms-provider.
Is there any configuration we have been missing?
Here are the configuration files that we have:
jboss-esb.xml
<?xml version = "1.0" encoding = "UTF-8"?>
<jbossesb xmlns="http://anonsvn.labs.jboss.com/labs/jbossesb/trunk/product/etc/schemas/xml/jbossesb-1.0.1.xsd" parameterReloadSecs="5">
<providers>
<fs-provider name="SitaIstProvider">
<fs-bus busid="gw_sita_ist" >
<fs-message-filter
directory="/ikarussita/IST/IN"
input-suffix=".RCV"
work-suffix=".lck"
post-delete="false"
post-directory="/ikarussita/IST/OK"
post-suffix=".ok"
error-delete="false"
error-directory="/ikarussita/IST/ERR"
error-suffix=".err"/>
</fs-bus>
</fs-provider>
<jms-provider name="SitaESBQueue" connection-factory="ConnectionFactory">
<jms-bus busid="esb_sita_queue">
<jms-message-filter dest-type="QUEUE" dest-name="queue/esb_sita_queue"/>
</jms-bus>
</jms-provider>
</providers>
<services>
<service category="SITA" name="SITA_IST" description="SITA Daemon For ISTCOXH">
<listeners>
<fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20" />
<jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" />
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage" />
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender" />
</actions>
</service>
</services>
</jbossesb>
jbm-queue-service.xml
<?xml version="1.0" encoding="UTF-8"?>
<server>
<mbean code="org.jboss.jms.server.destination.QueueService"
name="jboss.messaging.destination:service=Queue,name=esb_sita_queue"
xmbean-dd="xmdesc/Queue-xmbean.xml">
<depends optional-attribute-name="ServerPeer">jboss.messaging:service=ServerPeer</depends>
<depends>jboss.messaging:service=PostOffice</depends>
</mbean>
</server>
deployment.xml
<jbossesb-deployment>
<depends>jboss.messaging.destination:service=Queue,name=esb_sita_queue</depends>
</jbossesb-deployment>
Thanks
Split the service into two separate services: one handling the JMS queue, the other the file poller. Specify the same action pipeline in both; that way you get the same functionality, but without the threading issue. Also use the max-threads attribute on the listener to specify the number of reading threads.
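A rough sketch of the split (hedged: the service names and the extra esb_sita_file_queue bus are made up, since a gateway listener still needs an ESB-aware companion listener in its service, and the maxThreads attribute name should be verified against your ESB version):
<services>
<!-- Service 1: file gateway plus its own ESB-aware queue, so file traffic
     no longer shares a listener with the existing JMS queue. The
     esb_sita_file_queue bus is hypothetical: it would need a matching
     jms-bus under the jms-provider and its own queue deployment. -->
<service category="SITA" name="SITA_IST_FILE" description="SITA file poller">
<listeners>
<fs-listener name="Sita_Ist_Gateway" busidref="gw_sita_ist" is-gateway="true" schedule-frequency="20" />
<jms-listener name="Jms_Sita_File_EsbAware" busidref="esb_sita_file_queue" />
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage" />
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender" />
</actions>
</service>
<!-- Service 2: the existing ESB-aware JMS queue with dedicated reader threads. -->
<service category="SITA" name="SITA_IST_JMS" description="SITA JMS consumer">
<listeners>
<jms-listener name="Jms_Sita_EsbAware" busidref="esb_sita_queue" maxThreads="5" />
</listeners>
<actions mep="OneWay">
<action name="parse_msg" class="com.celebi.integration.action.sita.inbound.SitaHandler" process="parseMessage" />
<action name="send_ikarus" class="com.celebi.integration.action.ikarus.outbound.fis.FlightJmsSender" />
</actions>
</service>
</services>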