AMQ222244: Unable to check if message expired (Artemis 2.17.0) - activemq-artemis

We are using ActiveMQ Artemis 2.17.0, and I observed that producers get stuck after a while. The producers are still using the ActiveMQ 5.16.3 client library.
The lock observed on the client side:
"Thread-13422" #14272 prio=5 os_prio=0 tid=0x000055a74d20a000 nid=0x7a46 waiting on condition [0x00007fa5523dd000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000006b2a00338> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.await(AbstractQueuedSynchronizer.java:2039)
at java.util.concurrent.ArrayBlockingQueue.take(ArrayBlockingQueue.java:403)
at org.apache.activemq.transport.FutureResponse.getResult(FutureResponse.java:48)
at org.apache.activemq.transport.ResponseCorrelator.request(ResponseCorrelator.java:87)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1388)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1428)
at org.apache.activemq.ActiveMQConnection.syncSendPacket(ActiveMQConnection.java:1323)
at org.apache.activemq.ActiveMQSession.send(ActiveMQSession.java:1974)
- locked <0x00000006b2a00488> (a java.lang.Object)
at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:288)
at org.apache.activemq.ActiveMQMessageProducer.send(ActiveMQMessageProducer.java:223)
at org.apache.activemq.jms.pool.PooledProducer.send(PooledProducer.java:95)
- locked <0x00000006b2a00508> (a org.apache.activemq.ActiveMQMessageProducer)
On the server side I saw this exception:
AMQ222244: Unable to check if message expired: org.apache.activemq.artemis.core.paging.cursor.NonExistentPage: Invalid messageNumber passed = PagePositionImpl [pageNr=1224, messageNr=1233, recordID=2178453908, fileOffset=9056626] on null
at org.apache.activemq.artemis.core.paging.cursor.impl.PageCursorProviderImpl.getMessage(PageCursorProviderImpl.java:148)
at org.apache.activemq.artemis.core.paging.cursor.impl.PageSubscriptionImpl.queryMessage(PageSubscriptionImpl.java:634)
at org.apache.activemq.artemis.core.paging.cursor.PagedReferenceImpl.getPagedMessage(PagedReferenceImpl.java:132)
at org.apache.activemq.artemis.core.paging.cursor.PagedReferenceImpl.getMessage(PagedReferenceImpl.java:99)
at org.apache.activemq.artemis.core.server.impl.QueueImpl.checkExpired(QueueImpl.java:3815)
at org.apache.activemq.artemis.core.server.impl.QueueImpl.deliver(QueueImpl.java:2981)
at org.apache.activemq.artemis.core.server.impl.QueueImpl.access$2400(QueueImpl.java:126)
at org.apache.activemq.artemis.core.server.impl.QueueImpl$DeliverRunner.run(QueueImpl.java:4163)
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:42)
at org.apache.activemq.artemis.utils.actors.OrderedExecutor.doTask(OrderedExecutor.java:31)
at org.apache.activemq.artemis.utils.actors.ProcessorBase.executePendingTasks(ProcessorBase.java:65)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at org.apache.activemq.artemis.utils.ActiveMQThreadFactory$1.run(ActiveMQThreadFactory.java:118)
What is the root cause of this issue? How can we prevent it?
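This does not explain the broker-side NonExistentPage error, but the client-side stack shows a synchronous send parked with no timeout in FutureResponse.getResult. As a purely illustrative, hedged sketch of a client-side guard (not a root-cause fix; the broker URL, queue name, and timeout value are placeholders), the 5.x client's sendTimeout can bound that wait:

import javax.jms.{DeliveryMode, Session}
import org.apache.activemq.ActiveMQConnectionFactory

// Minimal sketch of a producer whose synchronous send is bounded in time.
object BoundedSendSketch extends App {
  val factory = new ActiveMQConnectionFactory("tcp://localhost:61616") // placeholder broker URL
  // With a non-zero sendTimeout the client stops waiting for the broker's
  // response after the given number of milliseconds, instead of parking
  // indefinitely in FutureResponse.getResult as in the thread dump above.
  factory.setSendTimeout(30000)

  val connection = factory.createConnection()
  connection.start()
  val session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE)
  val producer = session.createProducer(session.createQueue("example.queue"))
  producer.setDeliveryMode(DeliveryMode.PERSISTENT)
  producer.send(session.createTextMessage("ping"))
  connection.close()
}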

Related

Failed to return from a Redis call in Scala/Spark; thread dump shows an apparent deadlock

I am new to the Scala and Spark world. Somewhere in the Scala code there is a Redis call via Redisson 3.9.1 to fetch keys data, which is only a small number of records, and it leads to an apparent deadlock, as seen in the trace below. Could someone please point out what the issue might be?
Full thread dump OpenJDK 64-Bit Server VM (25.282-b08 mixed mode):
"Attach Listener" #124 daemon prio=9 os_prio=0 tid=0x00007fbde8002800 nid=0x494e1 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
"Keep-Alive-Timer" #123 daemon prio=8 os_prio=0 tid=0x00007fbd68021000 nid=0x493d1 waiting on condition [0x00007fbb2fffe000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at sun.net.www.http.KeepAliveCache.run(KeepAliveCache.java:172)
at java.lang.Thread.run(Thread.java:748)
"ForkJoinPool-1-worker-5" #117 daemon prio=5 os_prio=0 tid=0x00007fbc9c87b000 nid=0x475ba waiting on condition [0x00007fbb1dbfc000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00007fc68d00ed50> (a java.util.concurrent.CountDownLatch$Sync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedInterruptibly(AbstractQueuedSynchronizer.java:997)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1304)
at java.util.concurrent.CountDownLatch.await(CountDownLatch.java:231)
at org.redisson.command.CommandAsyncService.get(CommandAsyncService.java:182)
at org.redisson.RedissonKeys$2.iterator(RedissonKeys.java:127)
at org.redisson.RedissonKeys$2.iterator(RedissonKeys.java:123)
at org.redisson.BaseIterator.hasNext(BaseIterator.java:54)
at scala.collection.convert.Wrappers$JIteratorWrapper.hasNext(Wrappers.scala:42)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
at scala.collection.IterableLike$class.foreach(IterableLike.scala:72)
at scala.collection.AbstractIterable.foreach(Iterable.scala:54)
at scala.collection.TraversableLike$class.filterImpl(TraversableLike.scala:247)
at scala.collection.TraversableLike$class.filterNot(TraversableLike.scala:267)
at scala.collection.AbstractTraversable.filterNot(Traversable.scala:104)
at com.mycomosi.eaa.common.infrastructure.topology.store.TopologyStoreEntityService$$anonfun$getTopologyInstanceIdsExcludingVersion$1$$anonfun$apply$7.apply(TopologyStoreEntityService.scala:73)
at com.mycomosi.eaa.common.infrastructure.topology.store.TopologyStoreEntityService$$anonfun$getTopologyInstanceIdsExcludingVersion$1$$anonfun$apply$7.apply(TopologyStoreEntityService.scala:70)
at scala.collection.immutable.Set$Set1.foreach(Set.scala:94)
at com.mycomosi.eaa.common.infrastructure.topology.store.TopologyStoreEntityService$$anonfun$getTopologyInstanceIdsExcludingVersion$1.apply(TopologyStoreEntityService.scala:70)
at com.mycomosi.eaa.common.infrastructure.topology.store.TopologyStoreEntityService$$anonfun$getTopologyInstanceIdsExcludingVersion$1.apply(TopologyStoreEntityService.scala:69)
at scala.collection.Iterator$class.foreach(Iterator.scala:893)
at scala.collection.AbstractIterator.foreach(Iterator.scala:1336)
The actual cause of this was the slow response of the keys-by-pattern query in the Redisson client. With a huge volume of data, the query scans with a default count of 10 while the iterator walks the complete set of keys. Providing a higher count value instead of the default performs much better; see the sketch below.
I have also reported this issue on GitHub, please refer to
https://github.com/redisson/redisson/issues/4635
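For illustration only, a minimal sketch of passing a larger count to Redisson's RKeys.getKeysByPattern; the address, key pattern, and count value are assumptions, not taken from the project above:

import org.redisson.Redisson
import org.redisson.config.Config
import scala.collection.JavaConverters._

// Minimal sketch: scan keys with a larger per-round-trip count.
object KeysByPatternSketch extends App {
  val config = new Config()
  config.useSingleServer().setAddress("redis://127.0.0.1:6379") // assumed endpoint

  val redisson = Redisson.create(config)

  // getKeysByPattern(pattern, count) asks the server-side SCAN to return up to
  // `count` keys per round trip instead of the default of 10, which cuts the
  // number of network hops when iterating over a large key space.
  val keys = redisson.getKeys.getKeysByPattern("topology:*", 1000).asScala
  keys.foreach(println)

  redisson.shutdown()
}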

Kafka HeartbeatThread BLOCKED

We are using Spring Kafka version 2.1.5.RELEASE.
While analysing a thread dump after our performance testing, we saw the stack trace below, which indicates that the HeartbeatThread is being blocked by one of the consumer threads.
LOG:
Consumer Thread Dump
org.springframework.kafka.KafkaListenerEndpointContainer#4-0-C-1 - priority:5 - threadId:0x00007fca72cf4800 - nativeId:0x5d - nativeId (decimal):93 - state:RUNNABLE
stackTrace:
java.lang.Thread.State: RUNNABLE
at sun.nio.ch.EPollArrayWrapper.epollWait(Native Method)
at sun.nio.ch.EPollArrayWrapper.poll(EPollArrayWrapper.java:269)
at sun.nio.ch.EPollSelectorImpl.doSelect(EPollSelectorImpl.java:93)
at sun.nio.ch.SelectorImpl.lockAndDoSelect(SelectorImpl.java:86)
- locked <0x000000067e24e1c0> (a sun.nio.ch.Util$3)
- locked <0x000000067e24e1a8> (a java.util.Collections$UnmodifiableSet)
- locked <0x000000067e4415b0> (a sun.nio.ch.EPollSelectorImpl)
at sun.nio.ch.SelectorImpl.select(SelectorImpl.java:97)
at org.apache.kafka.common.network.Selector.select(Selector.java:674)
at org.apache.kafka.common.network.Selector.poll(Selector.java:396)
at org.apache.kafka.clients.NetworkClient.poll(NetworkClient.java:460)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:258)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:230)
at org.apache.kafka.clients.consumer.KafkaConsumer.pollOnce(KafkaConsumer.java:1164)
at org.apache.kafka.clients.consumer.KafkaConsumer.poll(KafkaConsumer.java:1111)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:699)
at java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:511)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- <0x000000067e3716b0> (a java.util.concurrent.locks.ReentrantLock$FairSync)
HeartbeatThread Dump
kafka-coordinator-heartbeat-thread | ccm.device.migration.event
Stack Trace is:
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000067e3716b0> (a java.util.concurrent.locks.ReentrantLock$FairSync)
at java.util.concurrent.locks.LockSupport.park(LockSupport.java:175)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.parkAndCheckInterrupt(AbstractQueuedSynchronizer.java:836)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireQueued(AbstractQueuedSynchronizer.java:870)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:1199)
at java.util.concurrent.locks.ReentrantLock$FairSync.lock(ReentrantLock.java:224)
at java.util.concurrent.locks.ReentrantLock.lock(ReentrantLock.java:285)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.poll(ConsumerNetworkClient.java:243)
at org.apache.kafka.clients.consumer.internals.ConsumerNetworkClient.pollNoWakeup(ConsumerNetworkClient.java:297)
at org.apache.kafka.clients.consumer.internals.AbstractCoordinator$HeartbeatThread.run(AbstractCoordinator.java:948)
- locked <0x000000067e2e0440> (a org.apache.kafka.clients.consumer.internals.ConsumerCoordinator)
Locked ownable synchronizers:
- None
What could be the probable cause of the behaviour observed above?
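For context, a hedged sketch of the consumer timing settings involved in this interplay; the values are illustrative, not a diagnosis of the dump above (the group id is taken from the dump, the broker address is assumed):

import java.util.Properties
import org.apache.kafka.clients.consumer.ConsumerConfig

// Sketch of the consumer properties that bound how long a delayed heartbeat
// can matter. The heartbeat thread and the polling thread share a lock inside
// ConsumerNetworkClient (visible in the dump above), so short WAITING parks
// are expected; they only become a problem if heartbeats are delayed beyond
// session.timeout.ms.
object HeartbeatTimingSketch extends App {
  val props = new Properties()
  props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092")    // assumed broker
  props.put(ConsumerConfig.GROUP_ID_CONFIG, "ccm.device.migration.event") // from the dump
  props.put(ConsumerConfig.HEARTBEAT_INTERVAL_MS_CONFIG, "3000")
  props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000")
  // Long message processing between poll() calls is bounded separately.
  props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000")
  println(props)
}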

Scala/Akka/Dropwizard Metrics app hanging for 10s before shutting down

I have a Scala 2.11 app that uses Akka. The GitHub repo for it is here. To reproduce what I'm seeing, just clone it, build it via ./gradlew fullBuild, and then run it via java -jar build/libs/akka-scala-troubleshooting.jar (it's a self-contained executable JAR). Essentially, I'm noticing that when I kill the running JVM process via Ctrl+C, it takes exactly 10 seconds for the JAR to shut down!
IIRC, Ctrl+C should issue a SIGKILL to the JVM process, so how it lives for another 10 seconds thereafter is blowing my mind! Even if I'm recalling that incorrectly, I'd still like to know why my app isn't shutting down immediately (it's causing problems by not doing so).
I'm using Dropwizard (formerly Coda Hale) Metrics via the Scala Metrics library. From what I can tell there might be a scheduled reporter daemon still alive that isn't shutting down right away, even when the JVM receives the SIGKILL (or whatever Ctrl+C is sending). Here's the main Driver (app entry point):
import java.util.concurrent.TimeUnit

import akka.actor.{ActorSystem, Props}
import com.codahale.metrics.{ConsoleReporter, MetricRegistry}

object Driver extends App {
  println("Starting upp the app...")
  //SLF4JBridgeHandler.install()
  lazy val metricRegistry = new MetricRegistry()
  ConsoleReporter
    .forRegistry(metricRegistry)
    .convertRatesTo(TimeUnit.SECONDS)
    .convertDurationsTo(TimeUnit.MILLISECONDS)
    .build()
    .start(15, TimeUnit.SECONDS)
  lazy val cortex = ActorSystem("cortex")
  cortex.registerOnTermination {
    System.exit(0)
  }
  val master = cortex.actorOf(Props[Master], name = "Master")
  println("About to fire a StartUp message at Master...")
  master ! StartUp
  println("Fired! Actor system is spinning up...")
}
When you run the JAR, you'll see this output in the logs:
java -jar build/libs/akka-scala-troubleshooting.jar
Starting upp the app...
About to fire a StartUp message at Master...
Fired! Actor system is spinning up...
Master has received a command to start up the actor system!
Child will make it happen!
At this point it will just idle and do nothing; that's fine and intended! But when I hit Ctrl+C in the terminal, I immediately get this INFO log:
^C[INFO] [08/03/2017 06:15:57.236] [Thread-0] [CoordinatedShutdown(akka://cortex)] Starting coordinated shutdown from JVM shutdown hook
Then 10 seconds go by (exactly 10 seconds every time), and finally a WARN log:
[WARN] [08/03/2017 06:16:07.248] [Thread-0] [CoordinatedShutdown(akka://cortex)] CoordinatedShutdown from JVM shutdown failed: Futures timed out after [10000 milliseconds]
Any idea what could be going on here?
Update:
I did a thread dump:
2017-08-03 14:08:58
Full thread dump OpenJDK 64-Bit Server VM (25.131-b11 mixed mode):
"cortex-akka.actor.default-dispatcher-5" #19 prio=5 os_prio=0 tid=0x00007fcba8009000 nid=0x547d waiting on condition [0x00007fcc16e6f000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076ff34e28> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
- None
"Thread-0" #15 prio=5 os_prio=0 tid=0x00007fcba0001000 nid=0x547c waiting on condition [0x00007fcc17170000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x0000000770b33920> (a scala.concurrent.impl.Promise$CompletionLatch)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.doAcquireSharedNanos(AbstractQueuedSynchronizer.java:1037)
at java.util.concurrent.locks.AbstractQueuedSynchronizer.tryAcquireSharedNanos(AbstractQueuedSynchronizer.java:1328)
at scala.concurrent.impl.Promise$DefaultPromise.tryAwait(Promise.scala:212)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:222)
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:157)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.Await$$anonfun$ready$1.apply(package.scala:169)
at scala.concurrent.BlockContext$DefaultBlockContext$.blockOn(BlockContext.scala:53)
at scala.concurrent.Await$.ready(package.scala:169)
at akka.actor.CoordinatedShutdown$$anonfun$initJvmHook$1.apply(CoordinatedShutdown.scala:161)
at akka.actor.CoordinatedShutdown$$anon$2.run(CoordinatedShutdown.scala:446)
Locked ownable synchronizers:
- None
"SIGTERM handler" #18 daemon prio=9 os_prio=0 tid=0x00007fcbdc002800 nid=0x547b in Object.wait() [0x00007fcc17271000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007707bb150> (a akka.actor.CoordinatedShutdown$$anon$2)
at java.lang.Thread.join(Thread.java:1252)
- locked <0x00000007707bb150> (a akka.actor.CoordinatedShutdown$$anon$2)
at java.lang.Thread.join(Thread.java:1326)
at java.lang.ApplicationShutdownHooks.runHooks(ApplicationShutdownHooks.java:106)
at java.lang.ApplicationShutdownHooks$1.run(ApplicationShutdownHooks.java:46)
at java.lang.Shutdown.runHooks(Shutdown.java:123)
at java.lang.Shutdown.sequence(Shutdown.java:167)
at java.lang.Shutdown.exit(Shutdown.java:212)
- locked <0x00000007707bb5d0> (a java.lang.Class for java.lang.Shutdown)
at java.lang.Terminator$1.handle(Terminator.java:52)
at sun.misc.Signal$1.run(Signal.java:212)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"Attach Listener" #17 daemon prio=9 os_prio=0 tid=0x00007fcbdc001000 nid=0x545f waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"DestroyJavaVM" #16 prio=5 os_prio=0 tid=0x00007fcc3800c000 nid=0x5427 waiting on condition [0x0000000000000000]
java.lang.Thread.State: RUNNABLE
Locked ownable synchronizers:
- None
"cortex-akka.actor.default-dispatcher-4" #14 prio=5 os_prio=0 tid=0x00007fcbb0010000 nid=0x5445 waiting for monitor entry [0x00007fcc17473000]
java.lang.Thread.State: BLOCKED (on object monitor)
at java.lang.Shutdown.exit(Shutdown.java:212)
- waiting to lock <0x00000007707bb5d0> (a java.lang.Class for java.lang.Shutdown)
at java.lang.Runtime.exit(Runtime.java:109)
at java.lang.System.exit(System.java:971)
at hotmeatballsoup.Driver$$anonfun$1.apply$mcV$sp(Driver.scala:24)
at hotmeatballsoup.Driver$$anonfun$1.apply(Driver.scala:24)
at hotmeatballsoup.Driver$$anonfun$1.apply(Driver.scala:24)
at akka.actor.ActorSystemImpl$$anon$3.run(ActorSystem.scala:810)
at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$addRec$1$1.applyOrElse(ActorSystem.scala:987)
at akka.actor.ActorSystemImpl$TerminationCallbacks$$anonfun$addRec$1$1.applyOrElse(ActorSystem.scala:987)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:436)
at scala.concurrent.Future$$anonfun$andThen$1.apply(Future.scala:435)
at scala.concurrent.impl.CallbackRunnable.run(Promise.scala:36)
at akka.dispatch.BatchingExecutor$AbstractBatch.processBatch(BatchingExecutor.scala:55)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply$mcV$sp(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at akka.dispatch.BatchingExecutor$BlockableBatch$$anonfun$run$1.apply(BatchingExecutor.scala:91)
at scala.concurrent.BlockContext$.withBlockContext(BlockContext.scala:72)
at akka.dispatch.BatchingExecutor$BlockableBatch.run(BatchingExecutor.scala:90)
at akka.dispatch.TaskInvocation.run(AbstractDispatcher.scala:38)
at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(ForkJoinExecutorConfigurator.scala:43)
at akka.dispatch.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
at akka.dispatch.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
- None
"cortex-akka.actor.default-dispatcher-3" #13 prio=5 os_prio=0 tid=0x00007fcc386e8800 nid=0x5444 waiting on condition [0x00007fcc17574000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076ff34e28> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
- None
"cortex-akka.actor.default-dispatcher-2" #12 prio=5 os_prio=0 tid=0x00007fcc386db000 nid=0x5443 waiting on condition [0x00007fcc17675000]
java.lang.Thread.State: WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076ff34e28> (a akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinPool)
at akka.dispatch.forkjoin.ForkJoinPool.scan(ForkJoinPool.java:2075)
at akka.dispatch.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
at akka.dispatch.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)
Locked ownable synchronizers:
- None
"cortex-scheduler-1" #11 prio=5 os_prio=0 tid=0x00007fcc38645800 nid=0x5442 waiting on condition [0x00007fcc17976000]
java.lang.Thread.State: TIMED_WAITING (sleeping)
at java.lang.Thread.sleep(Native Method)
at akka.actor.LightArrayRevolverScheduler.waitNanos(LightArrayRevolverScheduler.scala:85)
at akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:265)
at akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:235)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
"metrics-console-reporter-1-thread-1" #10 daemon prio=5 os_prio=0 tid=0x00007fcc384ee800 nid=0x5441 waiting on condition [0x00007fcc17c77000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x000000076e328370> (a java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:215)
at java.util.concurrent.locks.AbstractQueuedSynchronizer$ConditionObject.awaitNanos(AbstractQueuedSynchronizer.java:2078)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:1093)
at java.util.concurrent.ScheduledThreadPoolExecutor$DelayedWorkQueue.take(ScheduledThreadPoolExecutor.java:809)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:1074)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1134)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
at java.lang.Thread.run(Thread.java:748)
Locked ownable synchronizers:
- None
Note that two threads are trying to call Shutdown.exit...
The 10 seconds comes from the actor-system-terminate phase in Akka's default configuration:
actor-system-terminate {
  timeout = 10 s
  depends-on = [before-actor-system-terminate]
}
When system.terminate() is called, CoordinatedShutdown by default is initiated. Notice that this method returns a Future, meaning that termination will happen asynchronously. By default, there is a 10-second timeout for the actor system to shut down.
In your driver, you call registerOnTermination. The Scaladoc for this method states:
Note that ActorSystem will not terminate until all the registered callbacks are finished.
Your registered callback invokes System.exit, and you're getting the timeout warning because the actor system is unable to properly shut down within the 10-second window. The System.exit call is probably interfering with the actor system's coordinated shutdown. Removing the System.exit call in registerOnTermination, or removing the call to registerOnTermination altogether, will remove the warning.
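As a minimal sketch of that suggestion (assuming the rest of the Driver stays as in the question, with Master and StartUp coming from the project), the termination callback simply avoids System.exit:

import akka.actor.ActorSystem

// Minimal sketch: no System.exit inside registerOnTermination, so the JVM
// shutdown hook's CoordinatedShutdown is not left waiting on a second,
// competing Shutdown.exit call (the situation visible in the dump above).
object DriverShutdownSketch extends App {
  val cortex = ActorSystem("cortex")

  cortex.registerOnTermination {
    println("Actor system terminated; the JVM can now exit on its own.")
  }

  // ... create actors and send messages as in the original Driver ...

  // Terminating the system (or pressing Ctrl+C) then completes without the
  // 10-second CoordinatedShutdown timeout warning.
  cortex.terminate()
}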

JBoss EAP server hung with many waiting threads

Can someone explain, based on the following thread dump, how to find which objects are in a waiting state?
We see the system hang. Our system performs external authentication for each page of the portal, so it is getting slow and hanging.
The thread dump showed the following:
"http-serverIP-8080-107" daemon prio=10 tid=0x0000000052017000 nid=0x4fa3 in Object.wait() [0x00002ae69d5cf000]
java.lang.Thread.State: WAITING (on object monitor)
at java.lang.Object.wait(Native Method)
- waiting on <0x00000007145a6148> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
at java.lang.Object.wait(Object.java:485)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.await(JIoEndpoint.java:416)
- locked <0x00000007145a6148> (a org.apache.tomcat.util.net.JIoEndpoint$Worker)
at org.apache.tomcat.util.net.JIoEndpoint$Worker.run(JIoEndpoint.java:442)
at java.lang.Thread.run(Thread.java:662)

jstack understanding

I'm using JBoss 5.1.0 in a production environment, and the shutdown time is at times quite long (>10 minutes). I ran jstack during one of the shutdown operations, but I can't really understand its output. I'm attaching a snippet in case someone can help.
Thank you.
"Thread-7326 (group:HornetQ-remoting-threads318818410-409317605)" prio=10 tid=0x000000004097b000 nid=0xbae waiting on condition [0x00007fc43f829000]
java.lang.Thread.State: TIMED_WAITING (parking)
at sun.misc.Unsafe.park(Native Method)
- parking to wait for <0x00000000c9d55b28> (a java.util.concurrent.SynchronousQueue$TransferStack)
at java.util.concurrent.locks.LockSupport.parkNanos(LockSupport.java:198)
at java.util.concurrent.SynchronousQueue$TransferStack.awaitFulfill(SynchronousQueue.java:424)
at java.util.concurrent.SynchronousQueue$TransferStack.transfer(SynchronousQueue.java:323)
at java.util.concurrent.SynchronousQueue.poll(SynchronousQueue.java:874)
at java.util.concurrent.ThreadPoolExecutor.getTask(ThreadPoolExecutor.java:945)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:907)
at java.lang.Thread.run(Thread.java:662)
"JBoss Shutdown Hook" daemon prio=10 tid=0x0000000041b59800 nid=0xba9 runnable [0x00007fc4530ca000]
java.lang.Thread.State: RUNNABLE
at java.lang.Throwable.fillInStackTrace(Native Method)
- locked <0x00000000f1e29d48> (a java.lang.NoSuchMethodException)
at java.lang.Throwable.<init>(Throwable.java:196)
at java.lang.Exception.<init>(Exception.java:41)
at java.lang.NoSuchMethodException.<init>(NoSuchMethodException.java:32)
at java.lang.Class.getDeclaredMethod(Class.java:1937)
at org.jboss.ejb3.interceptors.lang.SecurityActions$2.run(SecurityActions.java:58)
at org.jboss.ejb3.interceptors.lang.SecurityActions$2.run(SecurityActions.java:55)
at java.security.AccessController.doPrivileged(Native Method)
at org.jboss.ejb3.interceptors.lang.SecurityActions.getDeclaredMethod(SecurityActions.java:53)
at org.jboss.ejb3.interceptors.lang.ClassHelper.getDeclaredMethod(ClassHelper.java:165)
at org.jboss.ejb3.interceptors.lang.ClassHelper.getDeclaredMethod(ClassHelper.java:176)
at org.jboss.ejb3.interceptors.lang.ClassHelper.getDeclaredMethod(ClassHelper.java:176)
at org.jboss.ejb3.interceptors.lang.ClassHelper.getMethod(ClassHelper.java:138)
at org.jboss.ejb3.interceptors.lang.ClassHelper.isOverridden(ClassHelper.java:229)
at org.jboss.ejb3.interceptors.aop.LifecycleCallbacks.createLifecycleCallbackInterceptors(LifecycleCallbacks.java:99)
at org.jboss.ejb3.EJBContainer.invokeCallback(EJBContainer.java:1112)
at org.jboss.ejb3.EJBContainer.invokePreDestroy(EJBContainer.java:1149)
at org.jboss.ejb3.pool.AbstractPool.remove(AbstractPool.java:112)
at org.jboss.ejb3.InfinitePool.destroy(InfinitePool.java:44)
at org.jboss.ejb3.pool.ThreadlocalPool.destroy(ThreadlocalPool.java:71)
at org.jboss.ejb3.EJBContainer.lockedStop(EJBContainer.java:934)
at org.jboss.ejb3.session.SessionContainer.lockedStop(SessionContainer.java:276)
at org.jboss.ejb3.session.SessionSpecContainer.lockedStop(SessionSpecContainer.java:588)
at org.jboss.ejb3.stateless.StatelessContainer.lockedStop(StatelessContainer.java:221)
at org.jboss.ejb3.EJBContainer.stop(EJBContainer.java:923)
at sun.reflect.GeneratedMethodAccessor5513.invoke(Unknown Source)