Could not inspect JDBC autocommit mode - jpa

We are using Spring Data JPA, and in one place we have injected an EntityManager. For all other functionality we use JPA repositories, but for this one piece of functionality we use the injected EntityManager. This works fine for a while after the application starts, but then begins throwing the error below. It seems that after some time the entity manager drops the database connection. We have not been able to reproduce this. Is there a timeout setting in the entity manager after which it drops the DB connection?
PersistenceException:
javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: could not inspect JDBC autocommit mode
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1763)
    at org.hibernate.jpa.spi.AbstractEntityManagerImpl.convert(AbstractEntityManagerImpl.java:1677)
    at org.hibernate.jpa.internal.QueryImpl.getResultList(QueryImpl.java:458)

There is no such timeout in the entity manager itself. But your database might close connections after some time, especially when they are idle or when they do a lot of work (a database can have a quota on how much CPU time a connection or a transaction may use).
Your connection pool is another possible source of such surprises.
The interesting question is: why does it only happen with the entity manager you use directly? One possible reason is that you are not properly releasing the connections, which might make the connection pool consider them stale and close them after some timeout.
In order to debug this, I would start by inspecting your connection pool configuration and activating logging for it, so you can see when connections are handed out, when they get returned, and any other special events the pool might trigger; a configuration sketch follows below.
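For example, here is a minimal sketch assuming HikariCP as the pool (the answer above doesn't name one); leakDetectionThreshold makes the pool log a warning, with the borrowing stack trace, whenever a connection is held longer than the threshold:

import com.zaxxer.hikari.HikariConfig;
import com.zaxxer.hikari.HikariDataSource;

public class PoolDebugConfig {
    public static HikariDataSource dataSource() {
        HikariConfig config = new HikariConfig();
        config.setJdbcUrl("jdbc:postgresql://localhost:5432/mydb"); // placeholder URL
        config.setUsername("app");
        config.setPassword("secret");
        // Log a warning plus the borrowing stack trace when a connection
        // is held for more than 30 seconds without being returned.
        config.setLeakDetectionThreshold(30_000);
        // Retire connections before the database or a firewall kills them.
        config.setIdleTimeout(60_000);
        config.setMaxLifetime(300_000);
        return new HikariDataSource(config);
    }
}

Keeping maxLifetime below the database's own idle cutoff prevents the pool from handing out a connection the server has already closed.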

How much of an issue is connection pinning with RDS Proxy?

I'm using Postgres with row-level security to lock down all queries across tables to a particular tenancy. I do this by calling a SET app.tenant_id = x; statement each time my service opens a connection, and my RLS policies use this session-level setting to limit the return data. Basically the approach described here under 'Alternative Approach'.
If this service is deployed in AWS, with RDS Proxy in between it and the database then I understand it'll be subject to 'connection pinning' since I'm using a SET statement. I'm trying to get a feel for how big an issue this actually is. A few questions:
Are SET LOCAL statements also going to cause pinning?
If my service connections to RDS Proxy are short-lived and a single transaction (which they will be 99% of the time), does this lessen the impact?
Does service connection pooling (service -> RDS Proxy) help or hinder?
Basically any advice on how much of an issue this is, how I can make this work, or any workarounds, would be appreciated.
I had to delete my previous answer because I was originally using pgAdmin, which is apparently very keen on pinning connections. This meant I couldn't trust my data. I have redone this experiment with another, better-behaved tool: psql.
I understand where you're coming from. According to the documentation, SET will cause pinning, but it's unclear if that includes SET LOCAL. Since I need this information too, I will run a test today and post the results here.
I will do the following:
Step 1: Open one connection through our proxy and use a regular SET to ensure that the DatabaseConnectionsCurrentlySessionPinned metric increases to 1. I will use the following query:
SET search_path TO blah;
SET app.tenant_id TO 123;
Step 2: I will close that connection and see that the metric decreases back down to 0.
Step 3: I will open a new connection, but this time I will use the following query:
SET LOCAL search_path TO blah;
SET LOCAL app.tenant_id TO 123;
Step 4: I will close the connection, and if the connection is pinned, I will monitor the metric to see if and when it decreases back to 0.
Let's do this
Caveat: don't look at the metrics in the RDS Management Console. See: https://repost.aws/questions/QUfPWoiFxmR7-lios5NrFwBA/fix-database-connections-currently-session-pinned-metric-on-rds-proxy-dashboard
Step 1
The connection between proxy and server was pinned immediately when I ran SET commands, as expected.
Step 2
The pinned connection between proxy and server was closed immediately when I closed the connection between client and proxy.
Step 3
The connection between proxy and server was not pinned when I ran SET LOCAL commands.
Step 4
The connection was not pinned, so this step is superfluous.
Conclusion
SET LOCAL does circumvent pinning in RDS Proxy, with the caveat that it must be done within a transaction.
In my previous attempt to answer this question, pgAdmin's behavior made me conclude that pinning does occur in both cases. This is wrong.
To answer your other questions, if pinning does occur, it doesn't matter that your transactions are short. The server connection will remain pinned until the client connection is gone. Your only resort is to make sure you close client connections once they're pinned.
The documentation states that "when a connection is pinned, each later transaction uses the same underlying database connection until the session ends. Other client connections also can't reuse that database connection until the session ends. The session ends when the client connection is dropped."
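To make the transaction caveat concrete, here is a minimal JDBC sketch (pgjdbc assumed; the endpoint, credentials and table are placeholders, not from the question). The SET LOCAL is issued inside an explicit transaction, so the setting is discarded at commit and, per the test above, the proxy does not pin the connection:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

public class TenantQuery {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://my-proxy.example.us-east-1.rds.amazonaws.com:5432/mydb";
        try (Connection con = DriverManager.getConnection(url, "app", "secret")) {
            con.setAutoCommit(false); // SET LOCAL only takes effect inside a transaction
            try (Statement st = con.createStatement()) {
                st.execute("SET LOCAL app.tenant_id TO 123");
                try (ResultSet rs = st.executeQuery("SELECT id FROM orders")) {
                    while (rs.next()) {
                        System.out.println(rs.getLong("id")); // RLS limits rows to tenant 123
                    }
                }
            }
            con.commit(); // the tenant setting dies with the transaction
        }
    }
}

Without setAutoCommit(false), the SET LOCAL would apply only to its own implicit single-statement transaction and effectively do nothing.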

Spring Batch Metadata Connection Debugging

We are periodically getting an error in our application when Spring Batch attempts to get a connection to the metadata tables. It seems that we have a leak somewhere that is not releasing or closing connections.
What I am looking for is some way to have Spring Batch log when it is getting a connection from the pool, releasing a connection back to the pool, etc. Then we can attempt to determine where our leak is.
You should be able to see that by enabling debug logs for org.springframework.jdbc (in Spring Boot, for example, logging.level.org.springframework.jdbc=DEBUG in application.properties). Otherwise you can use a tool like P6Spy.
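If the logs alone don't pinpoint it, a small DataSource decorator can record every checkout and return. This is a sketch of my own, not something Spring Batch or Spring JDBC ships; wrap it around whatever pool your job repository uses:

import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.sql.Connection;
import java.sql.SQLException;
import javax.sql.DataSource;
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.jdbc.datasource.DelegatingDataSource;

public class LoggingDataSource extends DelegatingDataSource {
    private static final Logger log = LoggerFactory.getLogger(LoggingDataSource.class);

    public LoggingDataSource(DataSource target) {
        super(target);
    }

    @Override
    public Connection getConnection() throws SQLException {
        Connection target = super.getConnection();
        // Record where the connection was borrowed, so a leak can be traced.
        log.debug("Connection borrowed", new Exception("borrow site"));
        InvocationHandler handler = (proxy, method, args) -> {
            if ("close".equals(method.getName())) {
                log.debug("Connection returned to pool");
            }
            return method.invoke(target, args);
        };
        return (Connection) Proxy.newProxyInstance(
                Connection.class.getClassLoader(),
                new Class<?>[] {Connection.class}, handler);
    }
}

Borrows with no matching return in the log are your leak candidates.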

How to know if a Firebird 2.0 database is being accessed?

I know that with Firebird 2.5+ I can check whether there are users accessing my database using SQL, but unfortunately Firebird 2.0 doesn't have this feature. Yes, I know it's an old version, but it's legacy software and I'm not allowed to upgrade it any time soon... :(
I need to know if someone is connected to my 2.0 Firebird database, due to a process I'll run:
Block connections to DB (but ONLY if no one is connected)
Run my process
Allow users to reconnect again
I can start my process only when there are no users connected.
My database is part of a client/server system (no Web).
Any hints?
-at[tach] : this parameter prevents any new connections to the database from being made with the exception of the SYSDBA and the database owner. The shutdown will fail if there are any sessions connected after the timeout period has expired. It makes no difference if those connected sessions belong to the SYSDBA, the database owner or any other user. Any connections remaining will terminate the shutdown with the following details:
https://firebirdsql.org/manual/gfix-dbstartstop.html
There is also the Services API for this, so your database access library should expose the shutdown function. Specify a short shutdown timeout; if it fails, there were still users connected. If it succeeds, you can go on with maintenance, with a guarantee that client applications will not be able to connect.
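With Jaybird, for example, the sketch could look roughly like this (hedged: method and constant names may differ between Jaybird versions, so check the javadoc of org.firebirdsql.management; host, credentials and path are placeholders):

import org.firebirdsql.management.FBMaintenanceManager;
import org.firebirdsql.management.MaintenanceManager;

public class ShutdownCheck {
    public static boolean tryExclusiveShutdown() {
        MaintenanceManager mm = new FBMaintenanceManager();
        mm.setHost("localhost");
        mm.setUser("SYSDBA");
        mm.setPassword("masterkey");
        mm.setDatabase("C:/data/legacy.fdb"); // placeholder path
        try {
            // Refuse new attachments and wait up to 5 seconds for existing
            // sessions to end; an exception means connections remained.
            mm.shutdownDatabase(MaintenanceManager.SHUTDOWN_ATTACH, 5);
            return true; // database is offline, safe to run the process
        } catch (Exception e) {
            return false; // someone was still connected
        }
    }
}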
Alternatively, you can upgrade Firebird 2.0 -> 2.1, which is much closer to 2.0 than 2.5 is but already has the monitoring tables implemented.
However, this approach has one weak point: race conditions. Using the monitoring tables, you envision your work as follows:
Keep querying the monitoring tables (which slows the server down significantly) until there are no other connections.
Start the maintenance work, which would fail if other connections were active.
Complete the maintenance work.
The problem is that even after you reach the "no other connections" state in step 1, nothing guarantees that no new connections will be made between steps 1 and 2, and especially between steps 2 and 3.
Even if you run your checks and confirm the condition in step 1, by the time you proceed with maintenance some user may have reconnected and be working again. Not every time, of course, but as time goes by it will eventually happen one day.
But there is one more good thing in FB 2.1: database-level triggers.
c:\Program Files\Firebird\Firebird_2_1\doc\sql.extensions\README.db_triggers.txt
You can create a regular "all_current_connections" table, using on connect and on disconnect triggers to keep it up to date.
You would perhaps also have to add some logic to your applications so that they update the table with an internal application ID, to tell main workflow connections apart from servicing utilities. It is also possible that the CURRENT_USER and CURRENT_CONNECTION pair, which the trigger knows and can store in the table, would be enough, if you can infer the kind of application from the user name alone.
Then the on disconnect trigger could check whether all "main workflow" apps have disconnected and use POST_EVENT to notify the servicing utilities. Those utilities would still have to shut down the database first anyway. A sketch of installing such triggers follows below.
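Roughly, through JDBC (Jaybird assumed; the table and trigger names are invented for illustration):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.Statement;

public class InstallConnectionTriggers {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:firebirdsql://localhost/C:/data/legacy.fdb"; // placeholder
        try (Connection con = DriverManager.getConnection(url, "SYSDBA", "masterkey");
             Statement st = con.createStatement()) {
            st.execute("CREATE TABLE all_current_connections ("
                    + "conn_id INTEGER NOT NULL PRIMARY KEY, "
                    + "user_name VARCHAR(31))");
            // Every new attachment registers itself...
            st.execute("CREATE TRIGGER trg_on_connect ON CONNECT AS BEGIN "
                    + "INSERT INTO all_current_connections (conn_id, user_name) "
                    + "VALUES (CURRENT_CONNECTION, CURRENT_USER); END");
            // ...and removes itself when it disconnects.
            st.execute("CREATE TRIGGER trg_on_disconnect ON DISCONNECT AS BEGIN "
                    + "DELETE FROM all_current_connections "
                    + "WHERE conn_id = CURRENT_CONNECTION; END");
        }
    }
}

Servicing utilities can then SELECT from all_current_connections to see who is attached, though the database shutdown is still needed to close the race window described above.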
You can shut down the database using gfix. The gfix tool will try to shut down the database, and if connections still exist after a timeout, the shutdown will fail.
For example, use:
gfix -shut -attach 5 <your-database>
This will:
prevent new connections from being created,
wait 5 seconds for the existing connections to end,
if after 5 seconds there are still active connections, the shutdown will abort,
otherwise, after 5 seconds, the database will be shut down.
After shutdown, only SYSDBA or the database owner can create a connection to the database. This is only a viable option if your application itself doesn't use the SYSDBA or database owner account.
You bring the database back online using:
gfix -online <your-database>
For more information, see also Gfix - Database Housekeeping: Database Startup and Shutdown
Well, not an elegant way, but it works...
I try to rename the database file.
If someone is accessing the database, the rename operation gives me an exception, saying that the file is in use by some process.
If the rename succeeds, new users will not be able to access the database anymore (the connection string used by my systems is not changed).
I run the exclusive process I have to.
I rename the database file back to its original name, allowing new users to connect again.
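In Java the sequence could look like this (paths are placeholders; note this relies on the operating system refusing to rename an open file, which is Windows behavior; on Linux the rename would succeed even while the database is in use):

import java.nio.file.Files;
import java.nio.file.Path;

public class ExclusiveMaintenance {
    public static void main(String[] args) throws Exception {
        Path db = Path.of("C:/data/legacy.fdb");          // placeholder paths
        Path hidden = Path.of("C:/data/legacy.fdb.maint");
        try {
            Files.move(db, hidden); // fails while a connection holds the file open
        } catch (Exception e) {
            System.out.println("Database in use, try again later");
            return;
        }
        try {
            runExclusiveProcess(hidden); // the process that needs exclusive access
        } finally {
            Files.move(hidden, db); // restore the name so users can reconnect
        }
    }

    private static void runExclusiveProcess(Path db) {
        // placeholder for the actual maintenance work
    }
}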
I post my solution in the hope that it helps someone facing a similar problem.
The new version of our product will probably be a Web application; the database has not been chosen yet, but it certainly will not be Firebird.
Thanks to all that tried to give me an answer.

Could not open JDBC Connection, Unable to get managed connection for java during load test

We noticed the error below during a load test with multiple users, but not with a single SOAP request.
Could not open JDBC Connection for transaction; nested exception is java.sql.SQLException: javax.resource.ResourceException: IJ000453: Unable to get managed connection for java:
This could be due to any of the following:
The datasource connection pool has not been tuned correctly (e.g. max-pool-size and blocking-timeout-millis) for the maximum load on the application.
The application is leaking connections because it is not closing them, and thereby not returning them to the pool.
Threads with connections to the database are hanging and holding on to the connections.
Make sure that the min-pool-size and max-pool-size values for the respective datasource are set according to application load testing, and that connections are closed after use inside the application code, as in the sketch below.
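For the leak case, the fix is usually mechanical: acquire and close the connection in the same scope. A generic sketch (table and column names invented), using try-with-resources so the connection goes back to the pool even when an exception is thrown:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

public class CustomerDao {
    private final DataSource dataSource;

    public CustomerDao(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    public String findName(long id) throws Exception {
        String sql = "SELECT name FROM customer WHERE id = ?";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setLong(1, id);
            try (ResultSet rs = ps.executeQuery()) {
                return rs.next() ? rs.getString("name") : null;
            }
        } // con.close() runs here, returning the connection to the pool
    }
}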
Most likely you've found the bottleneck in your application; it seems that it cannot handle that many virtual users. The easiest solution would be raising an issue in your bug tracker and letting the developers investigate it.
If you need to provide the root cause of the failure, I can think of at least two reasons:
Your application or application server configuration is not suitable for high loads (i.e. the number of connections in your JBoss JDBC connection pool configuration is lower than required given the number of virtual users you're simulating). Try amending the min-pool-size and max-pool-size values to match the number of virtual users.
Your database is overloaded and hence cannot accept that many queries. In this case you can consider load testing the database separately (i.e. fire requests at the database directly via JMeter's JDBC Request sampler, without hitting the SOAP endpoint of your application). See The Real Secret to Building a Database Test Plan With JMeter article to learn more about the database load testing concept.

Postgres: "ERROR: cached plan must not change result type"

This exception is being thrown by the PostgreSQL 8.3.7 server to my application.
Does anyone know what this error means and what I can do about it?
ERROR: cached plan must not change result type
STATEMENT: select code,is_deprecated from country where code=$1
I figured out what was causing this error.
My application opened a database connection and prepared a SELECT statement for execution.
Meanwhile, another script was modifying the database table, changing the data type of one of the columns being returned in the above SELECT statement.
I resolved this by restarting the application after the database table was modified. This reset the database connection, allowing the prepared statement to execute without errors.
I'm adding this answer for anyone landing here by googling ERROR: cached plan must not change result type when trying to solve the problem in the context of a Java / JDBC application.
I was able to reliably reproduce the error by running schema upgrades (i.e. DDL statements) while my back-end app that used the DB was running. If the app queried a table that had been changed by the schema upgrade (i.e. the app ran queries before and after the upgrade on a changed table), the Postgres driver would return this error because it apparently caches some schema details.
You can avoid the problem by configuring your pgjdbc driver with autosave=conservative. With this option, the driver will flush whatever details it is caching, and you shouldn't have to bounce your server, flush your connection pool, or use whatever other workaround you may have come up with.
Reproduced on Postgres 9.6 (AWS RDS), and my initial testing seems to indicate the problem is completely resolved with this option.
Documentation: https://jdbc.postgresql.org/documentation/head/connect.html#connection-parameters
You can look at the pgjdbc Github issue 451 for more details and history of the issue.
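For plain JDBC, the option can go straight on the connection URL; a minimal sketch (URL and credentials are placeholders):

import java.sql.Connection;
import java.sql.DriverManager;

public class AutosaveExample {
    public static void main(String[] args) throws Exception {
        String url = "jdbc:postgresql://localhost:5432/mydb?autosave=conservative";
        try (Connection con = DriverManager.getConnection(url, "app", "secret")) {
            // With autosave=conservative the driver guards queries with a
            // savepoint and rolls back to it on errors like "cached plan
            // must not change result type" instead of failing the transaction.
            System.out.println(con.isValid(2));
        }
    }
}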
JRuby ActiveRecords users see this: https://github.com/jruby/activerecord-jdbc-adapter/blob/master/lib/arjdbc/postgresql/connection_methods.rb#L60
Note on performance:
Given the performance issues reported in the link above, you should do some performance / load / soak testing of your application before switching this on blindly.
In performance testing of my own app running on an AWS RDS Postgres 10 instance, enabling the conservative setting did result in extra CPU usage on the database server. It wasn't much, though; I could only see the autosave functionality using a measurable amount of CPU after I had tuned every single query my load test was using and started pushing the load test hard.
We were facing a similar issue. Our application works across multiple schemas, and whenever we made schema changes this issue started occurring.
Setting the prepareThreshold=0 parameter in the JDBC connection parameters disables the driver's use of server-side prepared statements, so no cached plan is left to go stale. This solved it for us; a sketch follows below.
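A sketch of passing the parameter through connection Properties (placeholders throughout; it can equally be appended to the URL as ?prepareThreshold=0):

import java.sql.Connection;
import java.sql.DriverManager;
import java.util.Properties;

public class PrepareThresholdExample {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.setProperty("user", "app");
        props.setProperty("password", "secret");
        // Never promote statements to server-side prepared statements,
        // so no cached plan is left to go stale across schema changes.
        props.setProperty("prepareThreshold", "0");
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost:5432/mydb", props)) {
            System.out.println(con.isValid(2));
        }
    }
}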
I got this error; I manually ran the failing select query, and that fixed the error.