Apache mod_cluster failed to drain active sessions - JBoss

I am using JBoss EAP 6.2 as the web application server and Apache mod_cluster for load balancing.
Whenever I try to undeploy my web application, I get the following warning:
14:22:16,318 WARN [org.jboss.modcluster] (ServerService Thread Pool -- 136) MODCLUSTER000025: Failed to drain 2 remaining active sessions from default-host:/starrassist within 10.000000 seconds
14:22:16,319 INFO [org.jboss.modcluster] (ServerService Thread Pool -- 136) MODCLUSTER000021: All pending requests drained from default-host:/starrassist in 10.002000 seconds
The undeployment takes forever, and the EAP server group and node on which the application is deployed become unresponsive.
The only workaround is to restart the entire EAP server. My question: is there an attribute I can set in EAP or mod_cluster so that active sessions past a maximum timeout expire on their own?

To control the timeout used when stopping a context, you can use the following configuration value:
Stop Context Timeout
The amount of time, measured in units specified by
stopContextTimeoutUnit, for which to wait for clean shutdown of a
context (completion of pending requests for a distributable context;
or destruction/expiration of active sessions for a non-distributable
context).
CLI Command:
/profile=full-ha/subsystem=modcluster/mod-cluster-config=configuration/:write-attribute(name=stop-context-timeout,value=10)
Ref: Configure the mod_cluster Subsystem
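For reference, the same setting can be placed directly in the server profile XML as an attribute of the mod-cluster-config element; a minimal sketch, assuming the EAP 6 modcluster subsystem (the namespace version and the other attributes shown are illustrative and may differ in your profile):

<subsystem xmlns="urn:jboss:domain:modcluster:1.2">
    <mod-cluster-config advertise-socket="modcluster" connector="ajp"
                        stop-context-timeout="10"/>
</subsystem>

For a distributable application this bounds the wait for pending requests to drain; for a non-distributable one it bounds the wait for active sessions to be destroyed or to expire.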
Likewise, if you are using JDK 8, take a look at this issue: Draining pending requests fails with Oracle JDK8

Related

IJ000607/IJ031011 connection active locks errors on transaction reaper thread in JBoss EAP

I am getting this error in my Keycloak pod.
How can I fix it?

What does "Pool is now shutting down as requested" mean when using host connection pools

I have a few streams that wake every minute or so, pull some documents from the DB, perform some actions, and finally send messages to SNS.
The tick interval is currently 1 minute.
Every few minutes I see this info message in the log:
[INFO] [06/04/2020 07:50:32.326] [default-akka.actor.default-dispatcher-5] [default/Pool(shared->https://sns.eu-west-1.amazonaws.com:443)] Pool is now shutting down as requested.
[INFO] [06/04/2020 07:51:32.666] [default-akka.actor.default-dispatcher-15] [default/Pool(shared->https://sns.eu-west-1.amazonaws.com:443)] Pool shutting down because akka.http.host-connection-pool.idle-timeout triggered after 30 seconds.
What does it mean? Has anyone seen it before? The 443 was worrying me.
Akka HTTP connection pools are terminated by Akka automatically if they are not used for a certain time (the default is 30 seconds). This can be configured and set to infinite if needed.
The pools are re-created on next use, but this takes some time, so the request that triggers the re-creation will be "blocked" until the pool is ready.
From the documentation:
The time after which an idle connection pool (without pending requests) will automatically terminate itself. Set to infinite to completely disable idle timeouts.
The config parameter that controls it is
akka.http.host-connection-pool.idle-timeout
The log message points to the config parameter, too:
Pool shutting down because akka.http.host-connection-pool.idle-timeout
triggered after 30 seconds.
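If the pool churn is a problem, the timeout can be raised or disabled in application.conf; a minimal sketch, assuming the default shared pool is used (choose a finite duration instead of infinite if you still want idle connections cleaned up):

akka.http.host-connection-pool {
  # default is 30s; "infinite" disables the idle shutdown entirely
  idle-timeout = infinite
}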

Wildfly 10 slow startup

I have Wildfly 10.1.0, and when I start it from the command line it takes about 30 seconds to start up. When I start it from inside Eclipse, it takes about 1 minute instead.
In Eclipse the first long pause (about 30 seconds) is taken after the line:
[org.jboss.as.ejb3] (MSC service thread 1-3) WFLYEJB0482: Strict pool mdb-strict-max-pool is using a max instance size of 32 (per class), which is derived from the number of CPUs on this host.
The second long pause (another 30 seconds) is taken while in the bottom right corner appears the message:
Checking Deployment Scanners for server
In the command line the first long pause (about 20 seconds) is taken after the line:
[org.jboss.as.ejb3] (MSC service thread 1-1) WFLYEJB0481: Strict pool slsb-strict-max-pool is using a max instance size of 128 (per class), which is derived from thread worker pool sizing.
The second "long" pause (about 10 seconds) is taken after the line:
[org.infinispan.configuration.cache.EvictionConfigurationBuilder] (ServerService Thread Pool -- 60) ISPN000152: Passivation configured without an eviction policy being selected. Only manually evicted entities will be passivated.
So the question is: how do I remove or reduce the above-mentioned pauses, especially those in Eclipse? Thanks.

JBoss cluster does not deploy war when infinispan/jgroups timeouts

I have a JBoss cluster composed of two JBoss EAP 6.3.3 instances. In some cases, because an application has a bug, both instances raise an exception and I must restart them (node1 and node2).
In that situation, when I restart node1 while node2 is unreachable because it is stalled, node1 starts deploying my app WAR and logs the following exception:
ERROR [org.jboss.msc.service.fail] [] (ServerService Thread Pool -- 107) MSC000001:
Failed to start service jboss.persistenceunit."app.war#persistencename":
org.jboss.msc.service.StartException in service jboss.persistenceunit."app.war#persistencename":
org.infinispan.CacheException: Unable to invoke method public void
org.infinispan.statetransfer.StateTransferManagerImpl.start() throws
java.lang.Exception on object of type StateTransferManagerImpl
at org.jboss.as.jpa.service.PersistenceUnitServiceImpl$1.run(PersistenceUnitServiceImpl.java:103)
...
Caused by: org.infinispan.CacheException: org.jgroups.TimeoutException: timeout sending
message to node2/hibernate
at org.infinispan.util.Util.rewrapAsCacheException(Util.java:542)
...
Caused by: org.jgroups.TimeoutException: timeout sending
message to node2/hibernate
at org.jgroups.blocks.MessageDispatcher.sendMessage(MessageDispatcher.java:392)
After that, JBoss logs that the deployment of the WAR has failed.
If I restart node2 too, then node1 starts without problems and deploys the WAR successfully.
Why does the deployment stop if one node of the cluster is not reachable?

JBoss shutdown takes forever

I am having issues stopping JBoss. Most of the time, when I execute the shutdown, it stops the server in a couple of seconds.
But sometimes it takes forever to stop and I have to kill the process.
Whenever the shutdown takes long, the scheduler is running, and in the logs I see:
2014-07-14 19:19:29,124 INFO [org.springframework.scheduling.quartz.SchedulerFactoryBean] (JBoss Shutdown Hook) Shutting down Quartz Scheduler
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07 shutting down.
2014-07-14 19:19:29,124 INFO [org.quartz.core.QuartzScheduler] (JBoss Shutdown Hook) Scheduler scheduler_$_s608203at1vl07 paused.
and nothing after that.
Make sure the Quartz scheduler thread and all threads in its thread pool are marked as daemon threads so that they do not prevent the JVM from exiting.
This can be achieved by setting the following Quartz properties respectively:
org.quartz.scheduler.makeSchedulerThreadDaemon=true
org.quartz.threadPool.makeThreadsDaemons=true
While it is safe to mark the scheduler thread as a daemon thread, think before you mark your thread pool threads as daemon threads, because when the JVM exits, these "worker" threads can be in the middle of executing logic that you do not want to abort abruptly. If that is the case, have your jobs implement the org.quartz.InterruptableJob interface and implement a JVM shutdown hook somewhere in your application that interrupts all currently executing jobs (the list of which can be obtained from the org.quartz.Scheduler API); a sketch follows below.
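A minimal sketch of that approach, assuming the Quartz 2.x API (the class name DrainableJob and the registerShutdownHook helper are hypothetical):

import java.util.List;
import org.quartz.InterruptableJob;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.UnableToInterruptJobException;

// A job that polls an interrupt flag so it can wind down cleanly on shutdown.
public class DrainableJob implements InterruptableJob {
    private volatile boolean interrupted = false;

    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        while (!interrupted) {
            // do one small unit of work per iteration so the flag is checked often
        }
        // flush or close resources here before returning
    }

    @Override
    public void interrupt() throws UnableToInterruptJobException {
        interrupted = true; // signals execute() to stop
    }

    // Register once at startup, after the Scheduler has been created.
    public static void registerShutdownHook(final Scheduler scheduler) {
        Runtime.getRuntime().addShutdownHook(new Thread() {
            @Override
            public void run() {
                try {
                    List<JobExecutionContext> running = scheduler.getCurrentlyExecutingJobs();
                    for (JobExecutionContext ctx : running) {
                        scheduler.interrupt(ctx.getJobDetail().getKey());
                    }
                } catch (SchedulerException e) {
                    e.printStackTrace(); // log and let the JVM exit anyway
                }
            }
        });
    }
}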