postgres query timeout issue while using connection pooling in JBoss - postgresql

I am getting the error below
ERROR: canceling statement due to user request
intermittently, after I enabled a query timeout for the XA datasource in my xxx-ds.xml. I have added the following to the -ds.xml file:
<query-timeout>180</query-timeout>
The query timeout is set to 180 seconds, which means any SQL query
that takes more than 180 seconds should get cancelled from the application
server side.
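As far as I understand it, the application server enforces this by calling setQueryTimeout() on the JDBC statements it hands out, so the effect is roughly the sketch below (the helper class and the pg_sleep query are only illustrative):

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class QueryTimeoutSketch {
    // "con" is assumed to be a connection handed out by the JBoss pool
    static void runWithTimeout(Connection con) throws SQLException {
        try (Statement st = con.createStatement()) {
            st.setQueryTimeout(180); // seconds; the driver cancels the statement once this is exceeded
            // pg_sleep(300) deliberately runs longer than the limit, so it gets cancelled with
            // "ERROR: canceling statement due to user request"
            try (ResultSet rs = st.executeQuery("SELECT pg_sleep(300)")) {
                rs.next();
            }
        }
    }
}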
But the behaviour I am seeing is inconsistent: queries get cancelled now and then without having run for anywhere near 180 seconds. We are also using connection pooling.
While searching Stack Overflow I found this question, which discusses the possible causes of this issue when connection pooling is used.
The solution suggested there was to set the statement_timeout setting in the postgresql.conf file. However, it is a bit difficult for me to enable statement_timeout globally in my database environment, as the DB server is shared by multiple applications.
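For what it's worth, statement_timeout does not have to be set globally: it can be applied per role (ALTER ROLE ... SET statement_timeout = '180s') or per session, so other applications sharing the server are unaffected. A minimal JDBC sketch of the per-session variant, assuming the code runs inside the application server; the JNDI name java:MyDS and the SELECT 1 query are placeholders:

import javax.naming.InitialContext;
import javax.sql.DataSource;
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

public class SessionTimeoutSketch {
    public static void main(String[] args) throws Exception {
        // "java:MyDS" is a placeholder; use the <jndi-name> from your -ds.xml
        DataSource ds = (DataSource) new InitialContext().lookup("java:MyDS");
        try (Connection con = ds.getConnection();
             Statement st = con.createStatement()) {
            // Server-side limit scoped to this session only
            st.execute("SET statement_timeout = '180s'");
            // Every query issued on this session is now subject to the 180s limit;
            // anything longer fails with "canceling statement due to statement timeout"
            try (ResultSet rs = st.executeQuery("SELECT 1")) {
                rs.next();
            }
        }
    }
}

With pooling, the setting stays on the underlying physical connection until it is changed or the connection is closed, which is usually what you want here.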
I would like to have a solution to terminate timed out queries from client side effectively and consistently while using connection pooling. I am using
JBoss 4.2.2-GA
postgresql 9.2 (64 bit)
java 1.7
postgresql-9.2-1002.jdbc4.jar

It looks like the issue is with the PostgreSQL JDBC driver 9.2. When I upgraded to the 9.3 driver, the issue was fixed.
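If anyone needs to confirm which driver the application server is actually loading before and after such an upgrade, the standard JDBC metadata call reports it; a small sketch, where con is assumed to be a connection taken from the same datasource:

import java.sql.Connection;
import java.sql.DatabaseMetaData;
import java.sql.SQLException;

class DriverVersionCheck {
    static void printDriverVersion(Connection con) throws SQLException {
        DatabaseMetaData md = con.getMetaData();
        // Prints the JDBC driver name and version the pool is really using,
        // which helps catch a stale jar left on the server's classpath.
        System.out.println(md.getDriverName() + " " + md.getDriverVersion());
    }
}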

Related

After restarting DB2 service, the application server gets ERRORCODE=-4499, SQLSTATE=58009 in database connections

We have an application on IBM WebSphere Application Server 7.x, and it connects to a remote database on z/OS DB2 10.x. For an annual operation, DB2 was shut down and restarted. After starting the database, we first get
com.ibm.websphere.ce.cm.StaleConnectionException
and then we get
The database manager is not able to accept new requests, has terminated all requests in progress, or has terminated this particular request due to unexpected error conditions detected at the target system. ERRORCODE=-4499, SQLSTATE=58009
The connection between WebSphere and DB2 was tested with 'Test Connection' on the WAS datasource. Both systems are up and running, but there is no working connection between them! There was no change to DB2, WAS, or the JDBC driver.
Update: The JDBC driver version is 4.15.134, the connection properties are the IBM WebSphere default settings, and the connection goes directly to DB2. Another problem later showed that, while the connection still has the problem, executing the query directly on z/OS's DB2 gets the same error. The query consists of a select with a join on two different tables; selecting from each table on its own is OK, but the final query does not work and gets ERRORCODE=-4499, SQLSTATE=58009.
Update 2
The details of the environment are: IBM WebSphere Application Server 7.0.0.45, DB2 10.1, Java 1.6 SR16 and z/OS 1.13.
This specific query gets the error in all environments: on all application servers, in z/OS SPUFI, and in database viewers such as DBeaver.
Any help is greatly appreciated.
Finally, we found the solution: we ran REORG and RUNSTATS on both tables and on all their partitions, and the error vanished both in the application and in SPUFI. I guess something went wrong during the restart and the tables got corrupted. Now everything is OK.
If I understood you correctly, you are complaining about the inability of the driver to re-establish the database connections after the DB2 for z/OS restart.
If so, have you tried setting the corresponding connection properties described at the following link?
Configuration of Sysplex workload balancing and automatic client reroute for Java clients

intermittent "connection reset by peer" sql postgres

After a period of inactivity, my Go web service gets a net.OpError with the message read tcp x.x.x.x:52086->x.x.x.x:24414: read: connection reset by peer when executing the first Postgres SQL query. After the error, subsequent requests work fine.
The Postgres database is hosted with compose.com, which has HAProxy in front of the Postgres DB. My Go web app uses the standard database/sql package and sqlx.
I've tried running a ticker invoking db.Ping() every 15 minutes, but this hasn't fixed the issue.
Why is the Go standard sql library not handling these connection drops?
Since no one wrote it explicitly: the solution to this problem is setting db.SetConnMaxLifetime(time.Minute). I tried it and it works. Connection resets occur often on AWS, where there is an inactivity limit of 350 seconds; after that, a TCP RST is returned.

not able to connect postgresql after 30 connections

I have 2 cloud PostgreSQL servers. The 1st one is working fine, but on the second, after 30 mins I am not able to connect from the Java application. When I connect from pgAdmin it shows 30 to 40 connections, and after killing those connections everything runs smoothly.
Its configuration:
postgresql/9.3
max_connections = 100
shared_buffers = 4GB
When the same application connects to the other PostgreSQL server with the same schema, everything works fine indefinitely.
Configuration:
postgresql/9.1
max_connections = 100
shared_buffers = 32MB
Can you please help me understand or fix the issue?
I work on a PostgreSQL 9.3 instance with hundreds of open connections. I agree that the open connections themselves shouldn't be a problem. Since we don't have much information, what follows is a description of how to get started troubleshooting.
Check the server logs for anything wrong. Maybe there is an issue at the OS level with initiating connections?
Try logging in with psql as the application user. Does the problem persist? If not, the problem is not with PostgreSQL. I would take a closer look at the Java code and see if something is happening there.
Note that psql and other libpq-based clients may not give you the full picture. Try connecting locally over a non-SSL connection while watching a packet capture. That way you can find (and look up) the SQLSTATE error of the failed connection. This is because, for legacy and backwards-compatibility reasons, libpq does not pass the SQLSTATE up to the client application when connecting to the database.
My bet, though, is that this is not a PostgreSQL issue. It may be an operating system issue. It may be a resource issue. It may be a client application issue.
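If it helps to narrow that down, here is a small JDBC sketch (the helper class is only illustrative, and con is assumed to be any working connection to the 9.3 server) that lists what the 30 to 40 sessions are actually doing; many idle-in-transaction entries or the same query repeated over and over usually points at the client application rather than PostgreSQL:

import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

class ConnectionInspector {
    static void listSessions(Connection con) throws SQLException {
        String sql = "SELECT pid, state, application_name, query "
                   + "FROM pg_stat_activity ORDER BY state";
        try (Statement st = con.createStatement();
             ResultSet rs = st.executeQuery(sql)) {
            while (rs.next()) {
                // One line per server session: process id, state, client name, last query
                System.out.printf("%s | %s | %s | %s%n",
                        rs.getString("pid"), rs.getString("state"),
                        rs.getString("application_name"), rs.getString("query"));
            }
        }
    }
}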

How to change PostgreSQL Studio's inactivity timeout?

I want to increase PostgreSQL Studio's login/session timeout. When I leave PostgreSQL Studio idle for just a short time, I get the following message:
You have been logged out due to inactivity. Please relogin or exit.
I am using PostgreSQL from the BigSQL 5.0.3 package bundle. Actually, I am researching the compatibility of MS SQL Server and PostgreSQL queries.
As I am using Postgres for learning purposes rather than anything security-sensitive, I find it annoying to have to log in frequently.
How can I increase the login/session timeout inside PostgreSQL Studio?
Postgres itself doesn't have an idle connection timeout. This is coming from something else.
Unfortunately, the timeout value is hard-coded to 30 minutes in PostgreSQL Studio. You should only see that message if you have not used Studio at all for 30 minutes. It's a pretty simple matter to move that property to the config file so it can be changed; we just need to write a patch.
The timeout is there to prevent Studio from holding open idle connections to PostgreSQL. PostgreSQL Studio uses connection pooling to manage connections back to the database, so if the browser session goes away without logging out, we need a way to remove those connections.

Timeout of remote connection to Postgresql

I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL DB that lives on the other one. Incidentally, it's OpenStreetMap data and I am using the osm2pgsql utility to insert it. I'm not sure if that's relevant; I don't think so.
For smaller inserts everything is fine but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? I'm not too sure where to start troubleshooting.
Thanks
This could be caused by SSL renegotiation. Check the log, and maybe tweak
ssl_renegotiation_limit = 512MB (the default)
Setting it to zero will disable renegotiation.