Free the EJB bean if the message is not processed within a specified time limit (JBoss)

I am using an EJB pool. The server is JBoss. I have set the maximum pool size to 20. Because of issues in dependent services, EJB instances wait for a long time. Meanwhile we receive more messages, which pushes the pool to its maximum size. Can we somehow free an EJB instance if it cannot process a message within a specified time limit?
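One possible workaround, purely a hedged sketch: it assumes a message-driven bean on a Java EE 7+ JBoss with the default ManagedExecutorService, and the bean name, queue name, and 30-second limit below are made up. The idea is to offload the slow dependent-service call to the executor and bound the wait, so the MDB thread, and with it the pooled instance, is freed when the time limit expires.

import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import javax.annotation.Resource;
import javax.ejb.ActivationConfigProperty;
import javax.ejb.MessageDriven;
import javax.enterprise.concurrent.ManagedExecutorService;
import javax.jms.Message;
import javax.jms.MessageListener;

// Hypothetical MDB: the slow dependent-service call runs on the managed executor,
// and onMessage() waits at most 30 seconds before giving up, so the bean instance
// goes back to the pool instead of blocking indefinitely.
@MessageDriven(activationConfig = {
        @ActivationConfigProperty(propertyName = "destinationLookup",
                                  propertyValue = "queue/exampleQueue") })
public class TimeBoundedWorkerBean implements MessageListener {

    @Resource // default managed executor provided by JBoss/WildFly
    private ManagedExecutorService executor;

    @Override
    public void onMessage(Message message) {
        Future<?> work = executor.submit(() -> callDependentService(message));
        try {
            work.get(30, TimeUnit.SECONDS);   // processing time limit
        } catch (TimeoutException timeout) {
            work.cancel(true);                // try to interrupt the stuck call
            // What happens to the message now (redelivery, DLQ, drop) depends on
            // the transaction and acknowledgement settings of the MDB.
        } catch (Exception e) {
            throw new RuntimeException(e);
        }
    }

    private void callDependentService(Message message) {
        // placeholder for the call into the slow dependent service
    }
}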

Related

How to recover JMS inbound-gateway containers on ActiveMQ server failure when the number of retries is limited

A JMS inbound gateway is used for request processing on the worker side. The CustomMessageListenerContainer class is configured with a limited number of back-off attempts.
In some scenarios, when the ActiveMQ server does not respond before the maximum number of attempts is reached, the container is stopped with the message below.
"Stopping container for destination 'senExtractWorkerInGateway': back-off policy does not allow for further attempts."
Is there any configuration available to recover these containers once ActiveMQ is available again?
A sample configuration is given below.
<int-jms:inbound-gateway
    id="senExtractWorkerInGateway"
    container-class="com.test.batch.worker.CustomMessageListenerContainer"
    connection-factory="jmsConnectionFactory"
    correlation-key="JMSCorrelationID"
    request-channel="senExtractProcessingWorkerRequestChannel"
    request-destination-name="senExtractRequestQueue"
    reply-channel="senExtractProcessingWorkerReplyChannel"
    default-reply-queue-name="senExtractReplyQueue"
    auto-startup="false"
    concurrent-consumers="25"
    max-concurrent-consumers="25"
    reply-timeout="1200000"
    receive-timeout="1200000"/>
You could probably emit an ApplicationEvent from the applyBackOffTime() of your CustomMessageListenerContainer when the super call returns false. This way you would know that something is wrong with the ActiveMQ connection. At that moment you also need to stop() your senExtractWorkerInGateway: just autowire it into some controlling service as a Lifecycle. When you are done fixing the connection problem, you just need to start that senExtractWorkerInGateway again; the CustomMessageListenerContainer is then started automatically.
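A minimal sketch of that idea, assuming the container class ends up as a Spring-managed bean so the ApplicationEventPublisherAware callback is invoked; the event class, service class, and wiring below are illustrative, not part of the original setup.

import org.springframework.context.ApplicationEvent;
import org.springframework.context.ApplicationEventPublisher;
import org.springframework.context.ApplicationEventPublisherAware;
import org.springframework.context.Lifecycle;
import org.springframework.context.event.EventListener;
import org.springframework.jms.listener.DefaultMessageListenerContainer;
import org.springframework.util.backoff.BackOffExecution;

// Published when the back-off policy gives up on reconnecting to ActiveMQ.
class BrokerUnavailableEvent extends ApplicationEvent {
    BrokerUnavailableEvent(Object source) {
        super(source);
    }
}

public class CustomMessageListenerContainer extends DefaultMessageListenerContainer
        implements ApplicationEventPublisherAware {

    private ApplicationEventPublisher publisher;

    @Override
    public void setApplicationEventPublisher(ApplicationEventPublisher publisher) {
        this.publisher = publisher;
    }

    @Override
    protected boolean applyBackOffTime(BackOffExecution execution) {
        boolean keepGoing = super.applyBackOffTime(execution);
        if (!keepGoing && this.publisher != null) {
            // Back-off attempts exhausted: signal that the broker connection is broken.
            this.publisher.publishEvent(new BrokerUnavailableEvent(this));
        }
        return keepGoing;
    }
}

// Controlling service (register it as a bean) that stops the gateway on the event
// and restarts it later, once the ActiveMQ connection problem has been fixed.
class GatewayRecoveryService {

    private final Lifecycle senExtractWorkerInGateway;

    // Wire in the 'senExtractWorkerInGateway' endpoint here (e.g. via XML or @Qualifier).
    GatewayRecoveryService(Lifecycle senExtractWorkerInGateway) {
        this.senExtractWorkerInGateway = senExtractWorkerInGateway;
    }

    @EventListener
    public void onBrokerUnavailable(BrokerUnavailableEvent event) {
        this.senExtractWorkerInGateway.stop();
    }

    // Call this when ActiveMQ is reachable again; the custom container
    // is started along with the gateway.
    public void recover() {
        this.senExtractWorkerInGateway.start();
    }
}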

Mongodb connection pools with cross region replicas

We have a cluster hosted on MongoDB Atlas (M50, AWS) with cross-region replicas in 5 other regions. This allows my application servers in those regions to read from a local replica using readPreference=nearest, which is much faster.
The M50 instance size has a maximum of 16000 connections.
The issue I face is that the connection pool creates a connection to each node in the cluster. With 5 other regions, each having 10 application servers and each server having a connection pool of 100 (the default), that's 5000 connections to every single node, even though those servers will only ever read from the replica in their local region. These connections take away from the connections available to the application servers in the primary region, which do all the writes (5000k writes per second). The primary region has 20 application servers. These servers are set to have a minimum of 500 connections each, which creates a minimum of 10000 connections. That's a total of 15000 connections.
This creates a problem: we sometimes run out of connections when traffic spikes, as those pools grow to cope with the additional load. The minimum pool size is required to ensure the application remains responsive during spikes (if we don't set it, we see a lot of MongoWaitQueueFullExceptions and unacceptably high response times). We could cap the maximum pool size, but this limits throughput and we see timeouts.
Is there any way that the application pools in the replica regions can be prevented from creating connections to every single node in the cluster?
We don't want to increase the instance size as it doubles the cost of the cluster.
The application is written in .NET 6 (minimal API) with MongoDB driver version 2.16.

24 hours performance test execution stopped abruptly running in jmeter pod in AKS

I am running a 24-hour load test using JMeter in Azure Kubernetes Service (AKS). I am using the Throughput Shaping Timer in my JMX file. No listener is added to the JMX file.
My test stopped abruptly after 6 or 7 hours.
The jmeter-server.log file in the JMeter slave pod shows the warning: WARN k.a.j.t.VariableThroughputTimer: No free threads left in worker pool.
Below is a snapshot from the jmeter-server.log file.
Using JMeter version 5.2.1 and Kubernetes version 1.19.6.
I checked that the JMeter master and slave pods are running continuously in AKS (no restarts happened).
I allocated 2 GB of memory to the JMeter slave pod, but the load test still stopped abruptly.
I am using a Log Analytics workspace for logging. I checked the ContainerLog table and found no errors.
Snapshot of the JMX file.
Using the following elements: Thread Group, Throughput Controller, HTTP Request Sampler, and Throughput Shaping Timer.
Please advise.
It looks like the last parameter of your Schedule Feedback Function configuration is wrong.
The warning means that the Throughput Shaping Timer is trying to increase the number of threads to reach/maintain the desired concurrency, but it does not have enough threads available to do so.
So either increase the spare threads ratio to be closer to 1 if you are using a float value as a percentage, or increase the absolute value so that it matches the required number of threads.
Quote from documentation:
Example function call: ${__tstFeedback(tst-name,1,100,10)} , where "tst-name" is name of Throughput Shaping Timer to integrate with, 1 and 100 are starting threads and max allowed threads, 10 is how many spare threads to keep in thread pool. If spare threads parameter is a float value <1, then it is interpreted as a ratio relative to the current estimate of threads needed. If above 1, spare threads is interpreted as an absolute count.
More information: Using JMeter’s Throughput Shaping Timer Plugin
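For example, with a float last parameter the call might look like this (the timer name and thread counts here are made up for illustration):
${__tstFeedback(requestsShaper,1,1000,0.8)}
Since the last parameter is below 1, it is treated as a ratio of the currently estimated thread need, so 0.8 keeps a generous spare margin; an integer value would instead reserve that absolute number of spare threads.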
However, this does not explain the premature termination of the test, so make sure there are no errors in the JMeter/Kubernetes logs. One possible reason is that the JMeter process is being terminated by the OOMKiller.

Issues with Postgres connections pool on Payara/Glassfish

I run a JEE application on Payara 4.1 which uses PostgreSQL 9.5.8. The connection pool is configured in the following way:
<jdbc-resource poolName="<poolName>" jndiName="<jndiName>" isConnectionValidationRequired="true"
    connectionValidationMethod="table" validationTableName="version()" maxPoolSize="30"
    validateAtmostOncePeriodInSeconds="30" statementTimeoutInSeconds="30" isTimerPool="true" steadyPoolSize="5"
    idleTimeoutInSeconds="0" connectionCreationRetryAttempts="100000" connectionCreationRetryIntervalInSeconds="30"
    maxWaitTimeInMillis="2000">
From what the monitoring says, the application needs 1-3 DB connections to Postgres when running. The steady pool size is set to 5, the max pool size to 30.
I see that about 4 times a day the application opens all connections to the database, hitting the max pool size limit. Some requests to the server fail at this point with the exception: java.sql.SQLException: Error in allocating a connection. Cause: In-use connections equal max-pool-size and expired max-wait-time. Cannot allocate more connections.
After a few seconds all issues are gone, and the server runs fine until the next hiccup.
I have requested some TCP dumps to be performed to look closely into what happens exactly. I see that:
After 30 connections (sockets) have been opened, most of the connections are rarely used.
After some time (1 hour or so) the server tries to use some of these pooled connections, only to find that the socket is closed (the DB responds immediately with a TCP RST).
As the pooled connection count decreases towards the steady pool size, the connection pool opens 25 new connections (sockets), which takes some time (about 0.5 to 1 second per connection; I don't know why it takes this long, as the TCP handshakes are immediate). At this point some server transactions fail.
The loop repeats.
This issue is driving me mad. I was wondering whether I am missing some crucial pool configuration that would revalidate the connections more often, but I could not find anything that helps.
EDIT:
What does not help (we have already tested it):
Making the pool size bigger (same issues).
Removing idleTimeoutInSeconds="0" (we then had issues with the connection pool every 10 minutes).

Using maxIdle and minIdle time Connection Properties in JBoss/WildFly

We are in the process of porting our configuration from Tomcat to WildFly. Within our Tomcat connection pool configuration we use the maxIdle and minIdle properties, which, as the docs say:
maxIdle (int) The maximum number of connections that should be kept in the pool at all times. Default value is maxActive:100. Idle connections are checked periodically (if enabled) and connections that have been idle for longer than minEvictableIdleTimeMillis will be released. (also see testWhileIdle)
minIdle (int) The minimum number of established connections that
should be kept in the pool at all times.
Looking at the JBoss/WildFly docs, the only available parameter is idle-timeout-minutes, which anyway refers to the idle time of a single connection:
The maximum time, in minutes, before an idle
connection is closed. A value of 0 disables timeout. Defaults to 15
minutes.
Is there a workaround to mimic the same configuration on JBoss/WildFly?
Thanks!
Tomcat uses the minIdle and maxIdle parameters to determine, respectively, the minimum and maximum number of idle connections that should be kept in the pool. While the minIdle parameter can be mapped to WildFly's min-pool-size, the maxIdle parameter has no corresponding match on WildFly. The closest match is idle-timeout-minutes, which is the number of minutes after which unused connections are closed (default 15 minutes). You can vary this parameter, say to 10 minutes, as follows:
/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=idle-timeout-minutes,value=10)
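For the minIdle side of the mapping, min-pool-size can be set the same way (ExampleDS and the value of 5 are just placeholders):
/subsystem=datasources/data-source=ExampleDS/:write-attribute(name=min-pool-size,value=5)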
Source: From Tomcat to WildFly in one day