After a period of inactivity, my Go web service gets a net.OpError with the message read tcp x.x.x.x:52086->x.x.x.x:24414: read: connection reset by peer when executing the first Postgres SQL query. After the error, subsequent requests work fine.
The Postgres database is hosted with compose.com, which puts HAProxy in front of the Postgres server. My Go web app uses the standard database/sql package and sqlx.
I've tried running a ticker that invokes db.Ping() every 15 minutes, but this hasn't fixed the issue.
Why is Go's standard sql library not handling these connection drops?
Since no one wrote this explicitly: the solution to this problem is to call db.SetConnMaxLifetime(time.Minute). I tried it and it works. Connection resets occur often on AWS, where an inactivity limit of 350 seconds is enforced; after that, a TCP RST is returned.
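For completeness, here is a minimal sketch of where that call fits when setting up the pool. It assumes the lib/pq driver and a placeholder connection string, since the question only mentions database/sql and sqlx:

package main

import (
    "database/sql"
    "log"
    "time"

    _ "github.com/lib/pq" // assumed Postgres driver; any database/sql driver works the same way
)

func main() {
    // Placeholder DSN; substitute your own connection string.
    db, err := sql.Open("postgres", "postgres://user:pass@host:5432/dbname?sslmode=require")
    if err != nil {
        log.Fatal(err)
    }
    defer db.Close()

    // Recycle pooled connections before the proxy or firewall idle limit kicks in
    // (e.g. HAProxy in front of the database, or AWS resetting idle connections
    // after ~350 seconds), so the pool never hands out an already-reset connection.
    db.SetConnMaxLifetime(time.Minute)
    db.SetMaxIdleConns(5)

    if err := db.Ping(); err != nil {
        log.Fatal(err)
    }
}

With this in place, any pooled connection older than a minute is discarded and reopened the next time it is needed, so the first query after an idle period no longer hits a dead socket.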
Related
I have a Python/Flask app that runs on the Heroku platform. The problem is that with every read/write request to Heroku Postgres, the connection that gets created remains open even after the underlying job is complete. My backend code always calls conn.close() wherever I read from the database, but regardless, Heroku keeps the connection alive until the server is restarted or the connections are manually killed using:
heroku pg:killall
The problem is that Heroku has a connection limit of 20 for free/hobby databases, and this limit gets saturated pretty quickly.
I want to know whether there is a way to automatically shut off the connection once the underlying job is complete, i.e. when the backend code calls:
conn.close()
I am facing the issue below:
ERROR: canceling statement due to user request
It happens inconsistently, ever since I enabled a query timeout for the XA datasource in my xxx-ds.xml. I have added the following to my ds XML file:
<query-timeout>180</query-timeout>
The query timeout is set to 180 seconds, which means any SQL query that takes more than 180 seconds should get cancelled from the application server side.
But the issue I am facing is inconsistent: queries get timed out now and then without actually running for 180 seconds.
We are also using connection pooling.
While searching Stack Overflow I found this question, which discusses the possible causes of this issue when using connection pooling.
The solution suggested there was to set the statement_timeout setting in postgresql.conf. But it is a bit difficult for me to enable statement_timeout in my database environment, as the DB server is shared by multiple applications.
I would like a solution that terminates timed-out queries from the client side effectively and consistently while using connection pooling. I am using:
JBoss 4.2.2-GA
postgresql 9.2 (64 bit)
java 1.7
postgresql-9.2-1002.jdbc4.jar
It looks like the issue is with the PostgreSQL driver 9.2. After I upgraded to 9.3, the issue was fixed.
I am using pgAdmin to connect remotely to my database, as phpPgAdmin is a bit limited in its features. The only problem is that if I leave the SQL window open without running a query for a few minutes, it says I need to reconnect.
Is this a setting I need to change in my database to keep remote connections alive for longer, or is it a pgAdmin setting?
It is a client setting.
You need to specify connect_timeout in your config file.
29.1. Database Connection Control Functions
29.14. The Connection Service File
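For reference, a rough sketch of what such an entry could look like in the connection service file (pg_service.conf) referenced above; the service name, host, database, user, and the timeout value are all placeholders:

[mydb]
host=db.example.com
port=5432
dbname=mydb
user=myuser
connect_timeout=10

The same connect_timeout parameter can also be passed in a connection string or via the PGCONNECT_TIMEOUT environment variable.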
We are using the PostgreSQL Crane plan, and we are getting a lot of log lines like this:
app postgres - - [5-1] ... LOG: could not receive data from client: Connection reset by peer
We are using about 50 dynos.
Is PostgreSQL running out of connections with this many dynos?
Can someone help explain this case?
Thanks
From what I've found, the cause of the errors is the client not disconnecting at the end of the session, or a new connection not being created.
New connection solving the problem:
Postgres error on Heroku with Resque
Explicit disconnection solving the problem:
https://github.com/resque/resque/issues/367 (comment #2)
There's a Heroku FAQ entry on this: Understanding Heroku Postgres Log Statements and Common Errors: could not receive data from client: Connection reset by peer.
Although this log is emitted from postgres, the cause for the error has nothing to do with the database itself. Your application happened to crash while connected to postgres, and did not clean up its connection to the database. Postgres noticed that the client (your application) disappeared without ending the connection properly, and logged a message saying so.
If you are not seeing your application’s backtrace, you may need to ensure that you are, in fact, logging to stdout (instead of a file) and that you have stdout sync’d.
Assuming that no statements to close the connection are made before my script ends, and no exception is encountered along the way, does the database connection stay open?
I'm connecting to the database programmatically, via Python psycopg2 and via the Java JDBC4 driver.
Not entirely sure what you want exactly, but let's try:
You can see the connections that exist at any time with pgAdmin or with this SQL command:
SELECT * FROM pg_stat_activity;
It should be fairly simple to spot when, for your specific use case, the connection closes.
If an SQL query is running at the time you close a connection, I think it will run to completion, i.e. the backend serving it will remain alive even if the connection is closed from the client side.