I have an ActiveMQ cluster deployed in Master/Slave mode on OpenShift, but I have a problem. I can persist the data of topics and queues without any problem: when a pod goes down, I do not lose the messages. However, I lose the data shown in the ActiveMQ console. My activemq.xml is attached below.
Has anyone had the same problem?
Thanks in advance.
<beans
xmlns="http://www.springframework.org/schema/beans"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://www.springframework.org/schema/beans http://www.springframework.org/schema/beans/spring-beans.xsd
http://activemq.apache.org/schema/core http://activemq.apache.org/schema/core/activemq-core.xsd">
<bean class="org.springframework.beans.factory.config.PropertyPlaceholderConfigurer">
<property name="locations">
<value>file:${activemq.conf}/credentials.properties</value>
</property>
</bean>
<!--
The <broker> element is used to configure the ActiveMQ broker.
-->
<broker xmlns="http://activemq.apache.org/schema/core" brokerName="localhost" useJmx="true" dataDirectory="${activemq.data}" persistent="true" deleteAllMessagesOnStartup="false" useShutdownHook="false" schedulerSupport="true" >
<!--
For better performances use VM cursor and small memory limit.
For more information, see:
http://activemq.apache.org/message-cursors.html
Also, if your producer is "hanging", it's probably due to producer flow control.
For more information, see:
http://activemq.apache.org/producer-flow-control.html
-->
<destinationInterceptors>
<virtualDestinationInterceptor>
<virtualDestinations>
<virtualTopic name="VirtualTopic.>" prefix="Consumer.*." selectorAware="false"/>
</virtualDestinations>
</virtualDestinationInterceptor>
</destinationInterceptors>
<destinationPolicy>
<policyMap>
<policyEntries>
<policyEntry topic=">" producerFlowControl="true">
<!-- The constantPendingMessageLimitStrategy is used to prevent
slow topic consumers to block producers and affect other consumers
by limiting the number of messages that are retained
For more information, see:
http://activemq.apache.org/slow-consumer-handling.html
-->
<pendingMessageLimitStrategy>
<constantPendingMessageLimitStrategy limit="1000"/>
</pendingMessageLimitStrategy>
</policyEntry>
<policyEntry queue=">" producerFlowControl="true" memoryLimit="1mb">
<!-- Use VM cursor for better latency
For more information, see:
http://activemq.apache.org/message-cursors.html
<pendingQueuePolicy>
<vmQueueCursor/>
</pendingQueuePolicy>
-->
</policyEntry>
</policyEntries>
</policyMap>
</destinationPolicy>
<!--
The managementContext is used to configure how ActiveMQ is exposed in
JMX. By default, ActiveMQ uses the MBean server that is started by
the JVM. For more information, see:
http://activemq.apache.org/jmx.html
-->
<managementContext>
<managementContext createConnector="false"/>
</managementContext>
<!--
Configure message persistence for the broker. The default persistence
mechanism is the KahaDB store (identified by the kahaDB tag).
For more information, see:
http://activemq.apache.org/persistence.html
-->
<persistenceAdapter>
<kahaDB directory="${activemq.data}/kahadb"/>
</persistenceAdapter>
<!--
The systemUsage controls the maximum amount of space the broker will
use before slowing down producers. For more information, see:
http://activemq.apache.org/producer-flow-control.html
If using ActiveMQ embedded - the following limits could safely be used:
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="20 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="1 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="100 mb"/>
</tempUsage>
</systemUsage>
</systemUsage>
-->
<systemUsage>
<systemUsage>
<memoryUsage>
<memoryUsage limit="64 mb"/>
</memoryUsage>
<storeUsage>
<storeUsage limit="100 gb"/>
</storeUsage>
<tempUsage>
<tempUsage limit="50 gb"/>
</tempUsage>
</systemUsage>
</systemUsage>
<!--
The transport connectors expose ActiveMQ over a given protocol to
clients and other brokers. For more information, see:
http://activemq.apache.org/configuring-transports.html
-->
<transportConnectors>
<!-- DOS protection, limit concurrent connections to 1000 and frame size to 100MB -->
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
<transportConnector name="amqp" uri="amqp://0.0.0.0:5672?maximumConnections=1000&wireFormat.maxFrameSize=104857600"/>
</transportConnectors>
<!-- destroy the spring context on shutdown to stop jetty -->
<shutdownHooks>
<bean xmlns="http://www.springframework.org/schema/beans" class="org.apache.activemq.hooks.SpringContextHook" />
</shutdownHooks>
</broker>
<!--
Enable web consoles, REST and Ajax APIs and demos
Take a look at ${ACTIVEMQ_HOME}/conf/jetty.xml for more details
-->
<import resource="jetty.xml"/>
</beans>
The console does not offer any sort of data persistence; the statistics are mostly all reset to zero on restart, other than those relating to queue depth or other durable topic subscription related metrics.
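If you need numbers that survive a restart, they can be read from the broker's destination MBeans over JMX instead of from the web console: attributes such as QueueSize are derived from the messages still held in KahaDB, so they are preserved across a pod restart, while pure counters (EnqueueCount, DequeueCount) start again from zero. A minimal sketch, assuming a reachable JMX endpoint (your config has createConnector="false", so you would need to expose JMX some other way) and placeholder broker and queue names:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder JMX URL; point it at however you expose JMX from the pod.
        JMXServiceURL url = new JMXServiceURL("service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbs = connector.getMBeanServerConnection();
            // Destination MBean naming pattern used by ActiveMQ 5.8+; brokerName/destinationName are placeholders.
            ObjectName queue = new ObjectName(
                "org.apache.activemq:type=Broker,brokerName=localhost,destinationType=Queue,destinationName=MY.QUEUE");
            // QueueSize reflects what is still in the persistent store, so it survives a restart,
            // unlike counters such as EnqueueCount/DequeueCount.
            Long depth = (Long) mbs.getAttribute(queue, "QueueSize");
            System.out.println("Queue depth: " + depth);
        }
    }
}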
We have been experimenting with the number of Ignite server pods to see the impact on performance.
One thing that we have noticed is that if the number of Ignite server pods is increased after client nodes have established communication, the new pod just fails in a loop with the error below.
If, however, the grid is destroyed (bring down all client and server nodes) and then the desired number of server nodes is launched, there are no issues.
Even so, the above procedure is not fully dependable for anything other than launching a single Ignite server.
From reading [this Stack Overflow post][1] and [this documentation][2], it looks like the issue may be that we are not launching the "Kubernetes service".
Ignite's KubernetesIPFinder requires users to configure and deploy a special Kubernetes service that maintains a list of the IP addresses of all the alive Ignite pods (nodes).
However, this is the only documentation I have found, and it says that it is no longer current.
Is this information still relevant for Ignite 2.11.1?
If not, is there some more recent documentation?
If this service is indeed needed, are there some more concrete examples and information on setting it up?
Error on new Server pod:
[21:37:55,793][SEVERE][main][IgniteKernal] Failed to start manager: GridManagerAdapter [enabled=true, name=o.a.i.i.managers.discovery.GridDiscoveryManager]
class org.apache.ignite.IgniteCheckedException: Failed to start SPI: TcpDiscoverySpi [addrRslvr=null, addressFilter=null, sockTimeout=5000, ackTimeout=5000, marsh=JdkMarshaller [clsFilter=org.apache.ignite.marshaller.MarshallerUtils$1#78422efb], reconCnt=10, reconDelay=2000, maxAckTimeout=600000, soLinger=0, forceSrvMode=false, clientReconnectDisabled=false, internalLsnr=null, skipAddrsRandomization=false]
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:281)
at org.apache.ignite.internal.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at org.apache.ignite.internal.IgniteKernal.startManager(IgniteKernal.java:1985)
at org.apache.ignite.internal.IgniteKernal.start(IgniteKernal.java:1331)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at org.apache.ignite.internal.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
at org.apache.ignite.internal.IgnitionEx.start0(IgnitionEx.java:1172)
at org.apache.ignite.internal.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:952)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:851)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:721)
at org.apache.ignite.internal.IgnitionEx.start(IgnitionEx.java:690)
at org.apache.ignite.Ignition.start(Ignition.java:353)
at org.apache.ignite.startup.cmdline.CommandLineStartup.main(CommandLineStartup.java:367)
Caused by: class org.apache.ignite.spi.IgniteSpiException: Node with the same ID was found in node IDs history or existing node in topology has the same ID (fix configuration and restart local node) [localNode=TcpDiscoveryNode [id=000e84bb-f587-43a2-a662-c7c6147d2dde, consistentId=8751ef49-db25-4cf9-a38c-26e23a96a3e4, addrs=ArrayList [0:0:0:0:0:0:0:1%lo, 127.0.0.1, fd00:85:4001:5:f831:8cc:cd3:f863%eth0], sockAddrs=HashSet [nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local/fd00:85:4001:5:f831:8cc:cd3:f863:47500, /0:0:0:0:0:0:0:1%lo:47500, /127.0.0.1:47500], discPort=47500, order=0, intOrder=0, lastExchangeTime=1676497065109, loc=true, ver=2.11.1#20211220-sha1:eae1147d, isClient=false], existingNode=000e84bb-f587-43a2-a662-c7c6147d2dde]
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.duplicateIdError(TcpDiscoverySpi.java:2083)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.joinTopology(ServerImpl.java:1201)
at org.apache.ignite.spi.discovery.tcp.ServerImpl.spiStart(ServerImpl.java:473)
at org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at org.apache.ignite.internal.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
... 13 more
Server DiscoverySpi Config:
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="myNameSpace"/>
<property name="serviceName" value="myServiceName"/>
</bean>
</property>
</bean>
</property>
Client DiscoverySpi Configs:
<bean id="discoverySpi" class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder" ref="ipFinder" />
</bean>
<bean id="ipFinder" class="org.apache.ignite.spi.discovery.tcp.ipfinder.vm.TcpDiscoveryVmIpFinder">
<property name="shared" value="false" />
<property name="addresses">
<list>
<value>myServiceName.myNameSpace:47500</value>
</list>
</property>
</bean>
Edit:
I have experimented more with this issue. As long as I do not deploy any clients (using the static TcpDiscoveryVmIpFinder above), I am able to scale the server pods up and down without any issue. However, as soon as a single client joins, I am no longer able to scale the server pods up.
I can see that the server pods have ports 47500 and 47100 open, so I am not sure what the issue is. Does the TcpDiscoveryKubernetesIpFinder still need the port to be specified in the client config?
I have tried to change my client config to use the TcpDiscoveryKubernetesIpFinder below, but I am getting a discovery timeout failure (see below).
<property name="discoverySpi">
<bean class="org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi">
<property name="ipFinder">
<bean class="org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder">
<property name="namespace" value="680e5bbc-21b1-5d61-8dfa-6b27be10ede7"/>
<property name="serviceName" value="nkw-mnomni-ignite-1-1"/>
</bean>
</property>
</bean>
</property>
24-Feb-2023 14:15:02.450 WARNING [grid-timeout-worker-#22%igniteClientInstance%] org.apache.ignite.logger.java.JavaLogger.warning Thread dump at 2023/02/24 14:15:02 UTC
Thread [name="main", id=1, state=WAITING, blockCnt=78, waitCnt=3]
Lock [object=java.util.concurrent.CountDownLatch$Sync#45296dbd, ownerName=null, ownerId=-1]
at java.base#17.0.1/jdk.internal.misc.Unsafe.park(Native Method)
at java.base#17.0.1/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at java.base#17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at java.base#17.0.1/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
at java.base#17.0.1/java.util.concurrent.CountDownLatch.await(CountDownLatch.java:230)
at o.a.i.spi.discovery.tcp.ClientImpl.spiStart(ClientImpl.java:324)
at o.a.i.spi.discovery.tcp.TcpDiscoverySpi.spiStart(TcpDiscoverySpi.java:2207)
at o.a.i.i.managers.GridManagerAdapter.startSpi(GridManagerAdapter.java:278)
at o.a.i.i.managers.discovery.GridDiscoveryManager.start(GridDiscoveryManager.java:980)
at o.a.i.i.IgniteKernal.startManager(IgniteKernal.java:1985)
at o.a.i.i.IgniteKernal.start(IgniteKernal.java:1331)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start0(IgnitionEx.java:2141)
at o.a.i.i.IgnitionEx$IgniteNamedInstance.start(IgnitionEx.java:1787)
- locked o.a.i.i.IgnitionEx$IgniteNamedInstance#57ac9100
at o.a.i.i.IgnitionEx.start0(IgnitionEx.java:1172)
at o.a.i.i.IgnitionEx.startConfigurations(IgnitionEx.java:1066)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:952)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:851)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:721)
at o.a.i.i.IgnitionEx.start(IgnitionEx.java:690)
at o.a.i.Ignition.start(Ignition.java:353)
Edit 2:
I also spoke with an admin about opening client-side ports in case that was the issue. He indicated that this should not be needed, as clients should be able to open ephemeral ports to communicate with the server nodes.
[1]: Ignite not discoverable in kubernetes cluster with TcpDiscoveryKubernetesIpFinder
[2]: https://apacheignite.readme.io/docs/kubernetes-ip-finder
It's hard to say precisely what the root cause is, but in general it's something related to the network or to domain name resolution.
A public address is assigned to a node on startup and is exposed to other nodes for communication. Other nodes store that address and nodeId in their history. Here is what is happening: a new node tries to enter the cluster, connects to a random node, and the request is transferred to the coordinator. The coordinator issues a TcpDiscoveryNodeAddedMessage that must circle the topology ring and be ACKed by all other nodes. That process did not finish within the join timeout, so the new node tries to re-enter the topology by starting the same joining process, but with a new ID. However, the other nodes see that this address is already registered to another nodeId, causing the original duplicate nodeId error.
Some recommendations:
If the issue is reproducible on a regular basis, I'd recommend collecting more information by enabling DEBUG logging for the org.apache.ignite.spi.discovery package (discovery-related event tracing).
Take thread dumps from the affected nodes (this can be done with kill -3). Check for discovery-related issues and search for "lookupAllHostAddr".
Check that it is not a DNS issue and that all public addresses for your nodes, such as nkw-mnomni-ignite-1-1-1.nkw-mnomni-ignite-1-1.680e5bbc-21b1-5d61-8dfa-6b27be10ede7.svc.cluster.local, resolve promptly. I was asking about the provider because in OpenShift there seems to be a hard limit on DNS resolution time.
Check GC and safepoints.
To mask the underlying issue, you can experiment with the Ignite configuration: increase the network timeout and join timeout, or reduce the failure detection timeout (a sketch of those settings follows below). But I recommend finding the real root cause instead of treating the symptoms.
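If you do tune those settings, this is a minimal programmatic sketch of where the knobs live (the values are placeholders, not recommendations; the namespace and service names are copied from the XML config above):

import org.apache.ignite.Ignite;
import org.apache.ignite.Ignition;
import org.apache.ignite.configuration.IgniteConfiguration;
import org.apache.ignite.spi.discovery.tcp.TcpDiscoverySpi;
import org.apache.ignite.spi.discovery.tcp.ipfinder.kubernetes.TcpDiscoveryKubernetesIpFinder;

public class TunedIgniteNode {
    public static void main(String[] args) {
        TcpDiscoveryKubernetesIpFinder ipFinder = new TcpDiscoveryKubernetesIpFinder();
        ipFinder.setNamespace("myNameSpace");
        ipFinder.setServiceName("myServiceName");

        TcpDiscoverySpi discovery = new TcpDiscoverySpi()
            .setIpFinder(ipFinder)
            .setJoinTimeout(60_000);              // more time for TcpDiscoveryNodeAddedMessage to circle the ring

        IgniteConfiguration cfg = new IgniteConfiguration()
            .setDiscoverySpi(discovery)
            .setNetworkTimeout(10_000)            // default is 5000 ms
            .setFailureDetectionTimeout(10_000);  // tune per the advice above; default is 10000 ms

        Ignite ignite = Ignition.start(cfg);      // placeholder node start, for illustration only
    }
}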
We are creating a Mule application that will run in a container on Kubernetes as part of a replica set, and will connect to JMS 2.0 Red Hat AMQ 7 (based on ActiveMQ Artemis).
The pom.xml has been configured to pull in the JMS client:
<dependency>
<groupId>org.apache.activemq</groupId>
<artifactId>artemis-jms-client-all</artifactId>
<version>2.10.1</version>
</dependency>
And the JMS config is configured as:
<jms:config name="JMS_Config" doc:name="JMS Config" doc:id="8621b07d-b203-463e-bbbe-76eb03741a61" >
<jms:generic-connection specification="JMS_2_0" username="${mq.user}" password="${mq.password}" clientId="${mq.client.id}">
<reconnection >
<reconnect-forever frequency="${mq.reconnection.frequency}" />
</reconnection>
<jms:connection-factory >
<jms:jndi-connection-factory connectionFactoryJndiName="ConnectionFactory" >
<jms:name-resolver-builder jndiInitialContextFactory="org.apache.activemq.artemis.jndi.ActiveMQInitialContextFactory" jndiProviderUrl="${mq.brokerurl}"/>
</jms:jndi-connection-factory>
</jms:connection-factory>
</jms:generic-connection>
<jms:consumer-config>
<jms:consumer-type >
<jms:topic-consumer shared="true" durable="true"/>
</jms:consumer-type>
</jms:consumer-config>
<jms:producer-config persistentDelivery="true"/>
</jms:config>
Then in the JMS listener component:
<jms:listener doc:name="EMS JMS Listener" doc:id="318b4f08-daf6-41f4-944b-3ec1420d5c12" config-ref="JMS_Config" destination="${mq.incoming.queue}" ackMode="AUTO" >
<jms:consumer-type >
<jms:topic-consumer shared="true" subscriptionName="${mq.sub.name}" durable="true"/>
</jms:consumer-type>
<jms:response sendCorrelationId="ALWAYS" />
</jms:listener>
The variables are set as:
mq.client.id=client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40
mq.sub.name=my-sub
mq.incoming.queue=my-queue
Is this the best way to configure the client? We have seen errors in the logs, when deployed to K8s, regarding connections to the AMQ server:
javax.jms.InvalidClientIDException: client-id-135a9514-d4d5-4f52-b01c-f6ca34a76b40 was already set into another connection
In JMS 2.0 you don't have to set the client identifier when creating a shared durable subscription. However, if you do set the client identifier then it must be unique per connection. For whatever reason (e.g. due to Mule or perhaps K8s) multiple connections are being created and since each connection is using the same client identifier you're receiving the javax.jms.InvalidClientIDException.
Remove clientId="${mq.client.id}" from your configuration and the javax.jms.InvalidClientIDException should go away.
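For reference, here is roughly what that looks like at the JMS 2.0 API level: a shared durable consumer is identified by the subscription name alone, so several connections (for example, several pods in the replica set) can attach to the same subscription without any client ID. This is only an illustrative sketch with a placeholder broker URL, credentials, topic, and subscription name:

import javax.jms.ConnectionFactory;
import javax.jms.JMSConsumer;
import javax.jms.JMSContext;
import javax.jms.Message;
import javax.jms.Topic;
import org.apache.activemq.artemis.jms.client.ActiveMQConnectionFactory;

public class SharedDurableExample {
    public static void main(String[] args) {
        // Placeholder broker URL.
        ConnectionFactory factory = new ActiveMQConnectionFactory("tcp://amq-broker:61616");

        // Note: no client ID is set on the context.
        try (JMSContext context = factory.createContext("user", "password")) {
            Topic topic = context.createTopic("my-topic");
            // Every consumer created with the same subscription name shares the subscription,
            // no matter which connection or pod it lives on.
            JMSConsumer consumer = context.createSharedDurableConsumer(topic, "my-sub");
            Message message = consumer.receive(5_000);
            System.out.println("Received: " + message);
        }
    }
}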
I have a small test in which I am trying to measure the time taken by Infinispan with a local cache, and then with a local cache plus a write-behind store.
Surprisingly, the time taken in a local cache to put 8M entries is around 27 sec, and a get takes 1 millisec. That is good. However, as soon as I enable the write-behind, the test does not even finish in 30 minutes. I am sure there is something terribly wrong with the configuration.
I have tried 5.3.0.Final and 5.2.7.Final.
The configuration is pasted here:
<namedCache name="LocalWithWriteBehind">
<loaders shared="false">
<loader class="org.infinispan.loaders.file.FileCacheStore"
fetchPersistentState="true" ignoreModifications="false"
purgeOnStartup="false">
<properties>
<property name="location" value="${java.io.tmpdir}" />
</properties>
<!-- write-behind configuration starts here -->
<async enabled="true" threadPoolSize="500" />
<!-- write-behind configuration ends here -->
</loader>
</loaders>
</namedCache>
If you would like to see the Scala app, the code is here: http://pastebin.com/PSiJFFiZ
The file cache store before Infinispan 6.0 was very slow. Please upgrade to Infinispan 6.0.0.Final, and enable the single file cache store as indicated here.
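On 6.0 the equivalent of the configuration above would look roughly like the following programmatic sketch (the cache name and store location are copied from the question; the thread pool size is a placeholder, and the original 500-thread async pool is likely far more than a single file store needs):

import org.infinispan.Cache;
import org.infinispan.configuration.cache.Configuration;
import org.infinispan.configuration.cache.ConfigurationBuilder;
import org.infinispan.manager.DefaultCacheManager;

public class WriteBehindExample {
    public static void main(String[] args) {
        Configuration cfg = new ConfigurationBuilder()
            .persistence()
                .addSingleFileStore()                              // single file cache store (Infinispan 6.0+)
                    .location(System.getProperty("java.io.tmpdir"))
                    .async().enable()                              // write-behind
                        .threadPoolSize(4)                         // placeholder; far fewer threads than the original 500
            .build();

        DefaultCacheManager manager = new DefaultCacheManager();
        manager.defineConfiguration("LocalWithWriteBehind", cfg);
        Cache<String, String> cache = manager.getCache("LocalWithWriteBehind");
        cache.put("key", "value");
        manager.stop();
    }
}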
I have to configure an instance of JBoss 5.1.0 to use a different port number (i.e. 8480). To do this I made the following changes to bindings-jboss-beans.xml.
<parameter>
<set>
<inject bean="PortsDefaultBindings"/>
<inject bean="Ports01Bindings"/>
<inject bean="Ports02Bindings"/>
<inject bean="Ports03Bindings"/>
<inject bean="Ports04Bindings"/>
</set>
</parameter>
<bean name="Ports04Bindings" class="org.jboss.services.binding.impl.ServiceBindingSet">
<constructor>
<!-- The name of the set -->
<parameter>ports-04</parameter>
<!-- Default host name -->
<parameter>${jboss.bind.address}</parameter>
<!-- The port offset -->
<parameter>400</parameter>
<!-- Set of bindings to which the "offset by X" approach can't be applied -->
<parameter><null/></parameter>
</constructor>
</bean>
The change works fine, in that I can access my application using the URL http://localhost:8480/XYZApp.
Now, to be able to do the deployment, I have to inform the infrastructure people of all the port numbers that the application will use.
I know that we will be using 8480, but how would I know all the other port numbers that JBoss will use for this instance based on an offset of 400?
JBoss listens on many ports, one for each of its services, but you shouldn't need to open all of those ports if your applications don't make use of the services behind them. For example, if no external applications will use the Naming Service, you shouldn't need to open port 1099 (1499 in your case).
Anyway, if you need a list of all the ports on which JBoss listens, you can check the bean with name="StandardBindings" in the file conf/bindingservice.beans/META-INF/bindings-jboss-beans.xml. Those are the standard ports, so if you have defined an offset (in your case 400), you'll have to add it to each standard port to get the ports actually used by your JBoss instance.
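For example, assuming the stock JBoss 5.1 standard bindings (verify against the StandardBindings bean on your own installation), an offset of 400 gives, among others:
HTTP connector: 8080 -> 8480
HTTPS connector: 8443 -> 8843
AJP connector: 8009 -> 8409
JNDI/Naming: 1099 -> 1499 (RMI naming: 1098 -> 1498)
RMI/JRMP invoker: 4444 -> 4844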
I am using JBoss 5.1 to deploy a message-driven bean which is used to subscribe to messages from a third-party queue.
Around 16 messages were posted to that queue, but they remained pending in our subscriber queue. I restarted the server and the messages were picked up immediately.
From what I have analysed, I think maxSize and maxSession could have affected it, as both are 15. But if there was some real issue, I do not understand how it got solved just by restarting.
The logs were at ERROR level; I did not get the full stack trace.
This is the snippet of that error log.
[2012-10-30 17:01:00,228] [MQQueueAgent (GQH1_PLANNING_MDM_001)]
[ERROR] STDERR: 2012.10.30 17:01:00 MQJMS1023E rollback failed
[2012-10-30 17:01:00,228] [exceptionDelivery0] [WARN ]
org.jboss.resource.adapter.jms.inflow.JmsActivation: Failure in jms activation
org.jboss.resource.adapter.jms.inflow.JmsActivationSpec#85d0d(ra=org.jboss.resource.adapter.jms.JmsResourceAdapter#b21aae
destination=remotewsmq/NOTIFICATION_PLANNING_MDM_001.SUBQ
destinationType=javax.jms.Queue tx=true durable=false reconnect=10 provider=RemoteWSMQJMSProvider
user=null maxMessages=1 minSession=1 maxSession=5 keepAlive=60000 useDLQ=false)
GQH1_PLANNING_MDM_001 is the name of the queue used for subscribing.
The files that I use to configure the properties of the MDBs are as follows.
1.ejb3-interceptors-aop.xml
<domain name="Message Driven Bean" extends="Intercepted Bean" inheritBindings="true">
<bind pointcut="execution(public * *->*(..))">
<interceptor-ref name="org.jboss.ejb3.security.AuthenticationInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.security.RunAsSecurityInterceptorFactory"/>
</bind>
<!-- TODO: Authorization? -->
<bind pointcut="execution(public * *->*(..))">
<interceptor-ref name="org.jboss.ejb3.tx.CMTTxInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.stateless.StatelessInstanceInterceptor"/>
<interceptor-ref name="org.jboss.ejb3.tx.BMTTxInterceptorFactory"/>
<interceptor-ref name="org.jboss.ejb3.AllowedOperationsInterceptor"/>
<interceptor-ref name="org.jboss.ejb3.entity.TransactionScopedEntityManagerInterceptor"/>
<!-- interceptor-ref name="org.jboss.ejb3.interceptor.EJB3InterceptorsFactory"/ -->
<stack-ref name="EJBInterceptors"/>
</bind>
<annotation expr="class(*) AND !class(@org.jboss.ejb3.annotation.Pool)">
@org.jboss.ejb3.annotation.Pool (value="StrictMaxPool", maxSize=15, timeout=10000)
</annotation>
</domain>
2.standardjboss.xml
<invoker-proxy-binding>
<name>message-driven-bean</name>
<invoker-mbean>default</invoker-mbean>
<proxy-factory>org.jboss.ejb.plugins.jms.JMSContainerInvoker</proxy-factory>
<proxy-factory-config>
<JMSProviderAdapterJNDI>DefaultJMSProvider</JMSProviderAdapterJNDI>
<ServerSessionPoolFactoryJNDI>StdJMSPool</ServerSessionPoolFactoryJNDI>
<CreateJBossMQDestination>false</CreateJBossMQDestination>
<!-- WARN: Don't set this to zero until a bug in the pooled executor is fixed -->
<MinimumSize>1</MinimumSize>
<MaximumSize>15</MaximumSize>
<KeepAliveMillis>30000</KeepAliveMillis>
<MaxMessages>1</MaxMessages>
<MDBConfig>
<ReconnectIntervalSec>10</ReconnectIntervalSec>
<DLQConfig>
<DestinationQueue>queue/DLQ</DestinationQueue>
<MaxTimesRedelivered>10</MaxTimesRedelivered>
<TimeToLive>0</TimeToLive>
</DLQConfig>
</MDBConfig>
</proxy-factory-config>
</invoker-proxy-binding>
<activation-config-property>
<activation-config-property-name>maxSession</activation-config-property-name>
<activation-config-property-value>15</activation-config-property-value>
</activation-config-property>
3.jms-ds.xml
<?xml version="1.0" encoding="UTF-8"?>
<connection-factories>
<!-- ==================================================================== -->
<!-- JMS Stuff -->
<!-- ==================================================================== -->
<!--
The JMS provider loader. Currently pointing to a non-clustered ConnectionFactory. Need to
be replaced with a clustered non-load-balanced ConnectionFactory when it becomes available.
See http://jira.jboss.org/jira/browse/JBMESSAGING-843.
-->
<mbean code="org.jboss.jms.jndi.JMSProviderLoader"
name="jboss.messaging:service=JMSProviderLoader,name=JMSProvider">
<attribute name="ProviderName">DefaultJMSProvider</attribute>
<attribute name="ProviderAdapterClass">org.jboss.jms.jndi.JNDIProviderAdapter</attribute>
<attribute name="FactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="QueueFactoryRef">java:/XAConnectionFactory</attribute>
<attribute name="TopicFactoryRef">java:/XAConnectionFactory</attribute>
</mbean>
<!-- JMS XA Resource adapter, use this to get transacted JMS in beans -->
<tx-connection-factory>
<jndi-name>JmsXA</jndi-name>
<xa-transaction/>
<rar-name>jms-ra.rar</rar-name>
<connection-definition>org.jboss.resource.adapter.jms.JmsConnectionFactory</connection-definition>
<config-property name="SessionDefaultType" type="java.lang.String">javax.jms.Topic</config-property>
<config-property name="JmsProviderAdapterJNDI" type="java.lang.String">java:/DefaultJMSProvider</config-property>
<max-pool-size>20</max-pool-size>
<security-domain-and-application>JmsXARealm</security-domain-and-application>
<depends>jboss.messaging:service=ServerPeer</depends>
</tx-connection-factory>
</connection-factories>
Please help.
If the listener did not try to reconnect, then it might be the pending messages that caused it to fail.
According to the error, a transaction ROLLBACK call failed. After the failure, the queue manager probably held those messages in an outstanding unit of work. Restarting the server would have closed the connection, at which point the queue manager will have rolled back the transaction on behalf of the application. On restart, the application will create a new UOW and retrieve the messages.
Look in WebSphere MQ's queue manager error logs and global error logs to determine whether the error was caused by a resource shortage. It may be necessary to increase the size of the queue manager transaction logs or to tune transaction parameters such as MAXUOW.
You may also need to update the MQ client version or Queue Manager version. According to this Technote, WebSphere MQ JMS classes were updated as of 6.0.2.3 to fix a bug that resulted in MQJMS1023E errors. If you need to update the client version, it is available as a free download as SupportPac MQC75. A new client is able to run with any back level queue manager. After upgrading, the app benefits from the bug fixes and performance enhancements of the new client code and provides API functionality appropriate for the version of Queue Manager to which it connects. What version of WebSphere MQ JMS client is currently installed? What version of WebSphere MQ queue manager is currently installed?