We are periodically getting an error in our application when Spring Batch attempts to get a connection for its metadata tables. It seems we have a leak somewhere that is not releasing or closing connections.
What I am looking for is some way to have Spring Batch log when it gets a connection from the pool, releases a connection back to the pool, and so on. Then we can attempt to determine where our leak is.
You should be able to see that by enabling DEBUG logging for org.springframework.jdbc. Otherwise, you can use a tool like p6spy.
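A minimal sketch of what that looks like in a Spring Boot application.properties (assuming Spring Boot's default logging setup; adjust for your logging framework):

```properties
# Spring's JDBC support (DataSourceUtils, transaction managers) logs
# connection acquisition and release at DEBUG level
logging.level.org.springframework.jdbc=DEBUG
# Transaction begin/commit/rollback events, which bracket connection use
logging.level.org.springframework.transaction=DEBUG
```

With this in place you should see messages each time a connection is fetched from and returned to the DataSource, which you can correlate with your pool's active-connection count.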
Related
I have statistics enabled for a data source and can see that there are many more active connections than expected. I suppose some deployment on the server fails to call Connection.close(), thus keeping connections active and not returning them to the pool.
I would like to ask for your advice on how I can figure out who on the server is keeping connections active. Several deployments are using this particular data source.
Profiler? JMX? anything else?
Thanks,
Valery
https://access.redhat.com/solutions/309913
Seems to be exactly what I was looking for.
Resolution

To enable the cached connection manager (CCM) to identify a connection leak:

Enable the CCM for the datasource. It defaults to true if it is not explicitly specified, but you may set use-ccm="true" explicitly.
Verify that a cached-connection-manager element exists in the jca subsystem and set debug="true" on it.

Setting debug="true" will:

Log an INFO message indicating that JBoss is "Closing a connection for you. Please close them yourself"
Generate a stack trace for the code where the leaked connection was first opened
Close the leaked connection
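A sketch of what those two settings look like in standalone.xml (element names follow the JBoss EAP 6 / WildFly schema; the namespace version and datasource name are placeholders for your own):

```xml
<!-- jca subsystem: turn on CCM leak debugging -->
<subsystem xmlns="urn:jboss:domain:jca:1.1">
    <cached-connection-manager debug="true"/>
</subsystem>

<!-- datasources subsystem: the CCM is on by default, but it can be set explicitly -->
<datasource jndi-name="java:jboss/datasources/ExampleDS" pool-name="ExampleDS" use-ccm="true">
    <!-- connection-url, driver, pool settings omitted -->
</datasource>
```

With debug enabled, each leak produces an INFO log entry plus the stack trace of the call site that opened the connection, which points you straight at the offending code.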
During a stress/load test, after sending many queries from JMeter to my JBoss server, the server becomes unresponsive/unreachable. I want to know what mechanism could be making JBoss unstable.
This might be an issue with the threads; one may be blocking or taking too long. In this case, you will need to get a thread dump and verify where it is stuck or unresponsive. From the description, it might also be a thread on JMeter that is using the resources and destabilizing JBoss. The server log could show the issue as well.
Recommendations
Get the thread dumps at the moment it happens
Analyze them with fastthread.io or another thread analyzer, e.g. TDA
Check the server.log for any issues
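If you cannot attach jstack to the server process, a thread dump can also be produced in-process via the standard ThreadMXBean API. A minimal sketch (the class name is mine):

```java
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

// Prints each live thread's name, state, and (truncated) stack trace,
// similar in spirit to a jstack dump.
public class ThreadDump {
    public static void main(String[] args) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        // true, true = include lock and synchronizer info,
        // which is what you need to spot blocked threads
        for (ThreadInfo info : threads.dumpAllThreads(true, true)) {
            System.out.print(info);
        }
    }
}
```

The output shows each thread's state (RUNNABLE, BLOCKED, WAITING) and what lock it is waiting on, which is usually enough to see where things are stuck.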
Observation
When opening issues with JBoss 5/6/7, please attach the logs and configuration files; this will make debugging easier.
-f
Noticed the below error during a load test with multiple users, but not in the case of a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:
This could be due to any of the following:
The datasource connection pool has not been tuned (e.g. max-pool-size and blocking-timeout-millis) correctly for the maximum load on the application.
The application is leaking connections because it is not closing them and thereby returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set according to application load testing, and that connections are closed after use inside the application code.
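The "not closing them" case usually has the shape below. This is a runnable sketch with a stand-in resource instead of a real java.sql.Connection, so it needs no database; the pattern is identical for real JDBC code:

```java
// A runnable sketch of the "not closing connections" failure mode.
// StubConnection stands in for java.sql.Connection so the example
// needs no database; the fix (try-with-resources) is the same for JDBC.
public class LeakSketch {
    static class StubConnection implements AutoCloseable {
        static int openCount = 0;
        StubConnection() { openCount++; }
        void runQuery() { throw new RuntimeException("query failed"); }
        @Override public void close() { openCount--; }
    }

    // Leaky: if runQuery() throws, close() is never reached and the
    // connection is never returned to the pool.
    static void leaky() {
        StubConnection c = new StubConnection();
        c.runQuery();
        c.close();
    }

    // Safe: try-with-resources closes the connection even when the body throws.
    static void safe() {
        try (StubConnection c = new StubConnection()) {
            c.runQuery();
        }
    }

    public static void main(String[] args) {
        try { leaky(); } catch (RuntimeException ignored) { }
        System.out.println("open after leaky: " + StubConnection.openCount);
        try { safe(); } catch (RuntimeException ignored) { }
        System.out.println("open after safe: " + StubConnection.openCount);
    }
}
```

Under load, every exception path through the leaky variant strands one connection, which is exactly why the error only shows up with multiple users.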
Most likely you have found the bottleneck in your application; it seems that it cannot handle that many virtual users. The easiest solution would be raising an issue in your bug-tracking system and letting the developers investigate it.
If you need to provide the root cause of the failure I can think of at least 2 reasons for this:
Your application or application server configuration is not suitable for high loads (i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than required for the number of virtual users you're simulating). Try amending the min-pool-size and max-pool-size values to match the number of virtual users.
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately (i.e. fire requests at the database directly via JMeter's JDBC Request sampler, without hitting the SOAP endpoint of your application). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about database load testing concepts.
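For the first case, pool sizing lives in the datasource definition. A sketch against the JBoss EAP 6 / WildFly schema (the JNDI name, URL, and numbers are placeholders to be derived from your own load testing, not recommendations):

```xml
<datasource jndi-name="java:jboss/datasources/AppDS" pool-name="AppDS">
    <connection-url>jdbc:postgresql://dbhost:5432/app</connection-url> <!-- placeholder -->
    <pool>
        <min-pool-size>10</min-pool-size>
        <max-pool-size>100</max-pool-size>
    </pool>
    <timeout>
        <!-- how long a caller waits for a free connection before the
             "Unable to get managed connection" (IJ000453) error is raised -->
        <blocking-timeout-millis>5000</blocking-timeout-millis>
    </timeout>
</datasource>
```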
We are using Spring Data JPA, and in one place we have injected an entity manager. For all other functionality we use JPA repositories, but for this one piece of functionality we use the injected entity manager. This works fine for some time after the application is started, but then starts giving the error below. It seems that after some time the entity manager drops the database connection. We are not able to replicate this. Is there any timeout setting in the entity manager after which it drops the DB connection?
PersistenceException:
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not inspect JDBC autocommit mode
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1763)
at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1677)
at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:458)
There is nothing like this in the entity manager itself. But your database might close connections after some time, especially when they are idle or do a lot of work (a database can have a quota on how much CPU time a connection or a transaction may use).
Your connection pool is another possible source of such surprises.
The interesting question is: why does it only happen with the entity manager you use directly? One possible reason is that you are not properly releasing the connections. This might make the connection pool consider them stale and close them after some timeout.
To debug this, I would start by inspecting your connection pool configuration and activating its logging, so you can see when connections are handed out, when they get returned, and any other special events the pool might trigger.
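If the pool happens to be HikariCP (Spring Boot's default), it has built-in leak detection that logs the borrowing stack trace. A sketch via application.properties (the threshold value is arbitrary):

```properties
# Warn, with the stack trace of the borrowing code, if a connection is
# held for more than 30 s without being returned to the pool
spring.datasource.hikari.leak-detection-threshold=30000
# Pool housekeeping and lifecycle logging
logging.level.com.zaxxer.hikari=DEBUG
```

If the direct entity-manager code path is the one holding connections, its stack trace will show up in these warnings.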
Redshift newbie here - greetings!
I am trying to unload data to S3 from Redshift, using a java program running locally which issues an UNLOAD statement over a JDBC connection. At some point the JDBC connection appears lost on my end (exception caught).
However, looking at the S3 location, it seems that the unload runs to completion. It is true however that I am unloading a rather small set of data.
So my question is, in principle, how is UNLOAD supposed to behave in case of a lost connection (say, a firewall kills it, or someone does a kill -9 on the process that executes the unload)? Will it run to completion? Will it stop as soon as it senses that the connection is lost? I have been unable to find the answer either by rtfm'ing or by googling...
Thank you!
The UNLOAD will run until it completes, is cancelled, or encounters an error. Loss of the issuing connection is not interpreted as a cancel.
The statement can be cancelled on a separate connection using CANCEL or PG_CANCEL_BACKEND.
http://docs.aws.amazon.com/redshift/latest/dg/r_CANCEL.html
http://docs.aws.amazon.com/redshift/latest/dg/PG_CANCEL_BACKEND.html
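A sketch of that second-connection cancel (the pid value below is a placeholder; look up the real one first):

```sql
-- Find the pid of the running UNLOAD
SELECT pid, duration, TRIM(query) AS query
FROM stv_recents
WHERE status = 'Running';

-- Cancel it by pid (1234 is a placeholder)
CANCEL 1234;
-- or, equivalently:
SELECT PG_CANCEL_BACKEND(1234);
```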