Could not open JDBC Connection, Unable to get managed connection for java during load test - JBoss

Noticed the below error during a load test with multiple users, but not in the case of a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:

This could be due to any of the following:
The datasource connection pool has not been tuned (e.g. max-pool-size and blocking-timeout-millis) correctly for the maximum load on the application.
The application is leaking connections because it is not closing them and thereby not returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set based on the results of application load testing, and that connections are closed after use inside the application code (see the sketch below).
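A minimal sketch of the "close your connections" point, assuming a plain JDBC DAO; the class name, query, and JNDI name java:/MyAppDS are placeholders, not taken from the application in question:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.naming.InitialContext;
import javax.sql.DataSource;

public class CustomerDao {

    // Placeholder JNDI name; use the one bound in your datasource definition.
    private static final String DS_JNDI = "java:/MyAppDS";

    public int countCustomers() throws Exception {
        DataSource ds = (DataSource) new InitialContext().lookup(DS_JNDI);
        // try-with-resources returns the connection to the pool even when the
        // query throws, which is what prevents IJ000453 under sustained load.
        try (Connection con = ds.getConnection();
             PreparedStatement ps = con.prepareStatement("SELECT COUNT(*) FROM customer");
             ResultSet rs = ps.executeQuery()) {
            return rs.next() ? rs.getInt(1) : 0;
        }
    }
}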

Most likely you've found a bottleneck in your application: it seems it cannot handle that many virtual users. The easiest solution would be raising an issue in your bug tracking system and letting the developers investigate it.
If you need to provide the root cause of the failure, I can think of at least two reasons:
Your application or application server configuration is not suitable for high loads (i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than required for the number of virtual users you're simulating). Try amending the min-pool-size and max-pool-size values to match the number of virtual users.
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately (i.e. firing requests at the database directly via JMeter's JDBC Request sampler, without hitting the SOAP endpoint of your application). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about the database load testing concept.

Related

Wildfly won't deploy when datasource is unavailable

I am using wildfly-8.2.0.Final.
There are several databases that I have to connect to. However, some of them are only used for certain functionalities of the web application and do not need to be online all the time. So when WildFly starts, some of the datasources may not be online. However, a disconnected datasource causes WildFly not to deploy the .war, and I cannot find any way to solve this problem. Is there a way?
UPDATE:
I have a single table on a remote database server. The user will be able to query the table via my web application. The thing is, I have almost no control over the mentioned database. When the web application starts, the database could be offline; however, this would cause my web application to fail to start. What I want is to be able to run queries on the remote database if it is online. If it is offline, the web page can fail or the query can be cancelled. But the one thing I don't want is for my web application to be limited by a remote database that I may have no control over.
My previous solution was a workaround. I would run queries on the remote database via a local database that has a foreign table to the remote one. However, on PostgreSQL 9.5 the local database reads all the data in the remote table before applying any constraints. As the remote table has a large number of rows and I am using lazy loading, a single query takes very long and defeats the whole purpose of lazy loading.
I found a similar question, but there is no answer.
On WildFly, you can set the datasource so that it tries to reconnect periodically when it disconnects. In my case, though, the deployment has to succeed initially for this to be helpful.
The deployment will fail if it references those datasources.
Also you could define but disable those datasources.
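One way to keep the deployment free of a hard reference is to look the datasource up lazily at request time and degrade gracefully when it is offline or disabled. A rough sketch, assuming a placeholder JNDI name java:/RemoteDS (not taken from the original configuration):

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class RemoteTableService {

    // No @Resource injection here, so the deployment itself does not depend on the datasource.
    public boolean isRemoteDbAvailable() {
        try {
            DataSource ds = (DataSource) new InitialContext().lookup("java:/RemoteDS");
            try (Connection con = ds.getConnection()) {
                return con.isValid(5); // five-second validation timeout
            }
        } catch (NamingException | SQLException e) {
            // Remote database offline or datasource disabled:
            // fail only this feature, not the whole application.
            return false;
        }
    }
}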

Could not inspect JDBC autocommit mode

We are using Spring Data JPA, and in one place we have an injected entity manager. For all other functionality we use JPA repositories, but for this one functionality we use the injected entity manager. This works fine for some time after the application is started but then starts giving the error below. It seems that after some time the entity manager drops the database connection. We are not able to replicate this. Is there any timeout setting in the entity manager after which it drops the DB connection?
PersistenceException:
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not inspect JDBC autocommit mode
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1763)
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1677)
at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:458)
There is nothing like this in the entity manager itself. But your database might close connections after some time, especially when they are idle or do a lot of work (a database can have a quota on how much CPU time a connection or a transaction may use).
Your connection pool is another possible source of such surprises.
The interesting question is: why does it only happen on the entity manager you use directly? One possible reason is that you are not properly releasing the connections. This might make the connection pool consider them stale and close them after some timeout.
In order to debug this, I would start by inspecting your connection pool configuration and activating logging for it, so you can see when connections are handed out, when they get returned, and any other special events the pool might trigger.
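As a hedged illustration of the "not properly releasing" point, assuming the entity manager is created manually from an injected EntityManagerFactory (a container-managed @PersistenceContext entity manager should not be closed by hand); the entity name and query are placeholders:

import javax.persistence.EntityManager;
import javax.persistence.EntityManagerFactory;
import javax.persistence.PersistenceUnit;

public class ReportDao {

    @PersistenceUnit
    private EntityManagerFactory emf; // assumed to be injected by the container or Spring

    public long countOrders() {
        EntityManager em = emf.createEntityManager();
        try {
            return em.createQuery("select count(o) from Order o", Long.class)
                     .getSingleResult();
        } finally {
            // Without this close() the underlying connection may never be handed
            // back cleanly, and the pool can later treat it as stale and drop it.
            em.close();
        }
    }
}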

Grails shareable vs unshareable connection pools for postgres datasource

My problem is that I have two apps that are both getting these exceptions:
Caused by: org.apache.tomcat.jdbc.pool.PoolExhaustedException:
[pool-2-thread-273] Timeout: Pool empty. Unable to fetch a connection
in 30 seconds, none available[size:100; busy:99; idle:0;
lastwait:30000].
There are two apps:
a Grails app war running in Tomcat, connecting to Postgres data source A
a standalone jar connecting to data source B, which is a different database on the same server as Postgres data source A.
Both apps seem to use org.apache.tomcat.jdbc.pool.ConnectionPool by default (I didn't configure the default pool anywhere and both apps use this). Also, my max connection limit is 200 and I'm only using < 130 connections, so I'm not hitting a max-connection issue. Since the two apps are using separate data sources, I read that this would mean they cannot be the same connection pool.
When I log in to my Postgres server, I can see that app 2 has 100 idle connections and that the max idle size of the pool is 100. So this is fine. However, what I was not expecting is that app 1 would use connections from app 2's pool - or rather, since it appears that the apps share a connection pool, I suppose app 1 is trying to take from this common pool which already has 100 connections allocated. I would not have expected that, because they use a Tomcat connection pool and app B doesn't even use Tomcat, so why would they be shared?
So my questions are (since I really am having a hard time finding docs about this):
Is it accurate that by default different apps would use the same conn pool?
If they're using the same conn pool then how can they share a conn pool if they're using different data sources?
Is it possible in Grails to specify a shared vs unshared connection pool? https://tomcat.apache.org/tomcat-7.0-doc/jndi-datasource-examples-howto.html This link mentions Postgres specifically and suggests there is a shared vs unshared concept (though I can't find any good documentation about it), but it's configured outside of Grails. Is there any way to do it in Grails?
Notes: using Grails 2.4.5 and Postgres server 90502 (i.e. 9.5.2)

JMeter: java.net.SocketException: Connection reset

When the login script is executed with a few users, I don't see the connection reset problem, whereas when the same script is run with 100 users, "java.net.SocketException: Connection reset" starts being thrown for the very first link.
What I don't understand is: if there is a connection problem, then it should show the same error for a single user or a few users as well.
This means that your server is rejecting connections because it is either overloaded or misconfigured.
It is normal that you don't face it with 1 user but do face it with 100; this is typically what load testing brings, i.e. simulating traffic on your server.
It might be the case described in the Connection Reset since JMeter 2.10? wiki page.
If you are absolutely sure that your server is not overloaded and is configured to accept 100+ connections (the defaults are good for development, not for production; they need to be tweaked), you can try to work around it as follows:
In the user.properties file, add the next two lines:
httpclient4.retrycount=1
hc.parameters.file=hc.parameters
In the hc.parameters file, add the following line:
http.connection.stalecheck$Boolean=true
Both files live in JMeter's bin folder.
You need to restart JMeter to pick the properties up.
The above instructions apply to the HttpClient4 implementation, so make sure you use it. The fastest and easiest way to set the HttpClient4 implementation for all HTTP Request samplers is via the HTTP Request Defaults.

ClientSession is closed by HornetQ

We encountered the following exception in HornetQ (HornetQ 2.2.5 GA with JBoss 4.3.3, using the InVM connector; both the client and the server are on the same machine):
hornetq-failure-check-thread,Connection failure has been detected: Did not receive data from invm:0.
The error code is 3 (which is HornetQException.CONNECTION_TIMEDOUT).
This causes the RemotingServiceImpl.FailureCheckAndFlushThread to run, which writes the following log message multiple times:
Client connection failed, clearing up resources for session 95406085-7b3a-11e2-86d3-005056b14e26
Note that in our application we reuse our ClientSessions. We have one ClientSession instance per connection (we open multiple connections, one per client), and the above problem caused one of the sessions to be closed.
After reading this post: Connection timeout issues - Connection failure has been detected
I understood that we need to configure the following on our ServerLocator instance (which is used to create the ClientSessionFactory that creates our ClientSessions):
ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(connectorConfig);
locator.setClientFailureCheckPeriod(Long.MAX_VALUE); // effectively disables the client-side failure check
locator.setConnectionTTL(-1); // the server never times the connection out
This configuration solved the problem, and the above error was not reproduced.
Our problem is the following: in case the sessions are closed again by HornetQ for some other reason, how can we create new sessions to replace the closed ones?
I'm asking this because after we found the session was closed (and before we set the clientFailureCheckPeriod and connectionTTL values), we tried to create a new session by calling the createSession(false, true, true) method on the ClientSessionFactory instance (we create that instance only once, at system startup, and reuse it since), and it failed with the following error:
HornetQException[errorCode=0 message=Failed to create session]
So we did not succeed in creating new sessions, and the only solution was restarting JBoss.
Note that we can't restart our application at the client site, so we need to find a way to create new sessions in case the old ones are closed for some reason.
Instead of doing that, you should probably configure retries with a proper value; that way your connection will be reconnected.
But since you're using InVM, as long as you don't stop the server you should be fine with that configuration. However, if you intend to restart just the server, you could use reconnection retries (-1) and the session would be reattached or recreated seamlessly for you.
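A rough sketch of that retry configuration (assuming the setting referred to is ServerLocator.setReconnectAttempts; the interval values are illustrative only, and the classes come from org.hornetq.api.core.client as in the snippet above):

ServerLocator locator = HornetQClient.createServerLocatorWithoutHA(connectorConfig);
locator.setReconnectAttempts(-1);        // keep retrying indefinitely
locator.setRetryInterval(2000);          // wait 2 seconds between attempts
locator.setRetryIntervalMultiplier(1.5); // back off a little on each retry
ClientSessionFactory factory = locator.createSessionFactory();
// sessions created from this factory can then be reattached or recreated on failure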
Anyway, I would recommend moving to a newer version beyond 2.2.5.