How long for Postgres to drop idle user [duplicate]

This question already has answers here:
Is there a timeout for idle PostgreSQL connections?
(7 answers)
Closed 8 years ago.
I had a user whose app had a crashing issue and was not closing its connections to the database properly; he eventually reached his connection limit and was unable to log back in.
That's OK. The problem was eventually fixed, and the app closes its connections properly now.
I was wondering, if a user reaches his connection limit, and without me doing anything, what is the default setting for Postgres 9.1 to drop the connection on its own?

Well, for your actual question:
if a user reaches his connection limit, and without me doing anything, what is the default setting for Postgres 9.1 to drop the connection on its own?
If the connections you are talking about are active connections to real clients (i.e. the client is still around, not crashed and exited, and can read and write to its connection socket), Postgres will not "drop the connection on its own". See this question if you're interested in ways to prune such idle clients.
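For example, a superuser can prune such clients by hand. Here is a minimal sketch over JDBC that terminates backends idle for more than an hour; it assumes PostgreSQL 9.2+ (which has pg_stat_activity.state; on 9.1 you would filter on current_query = '<IDLE>' and use procpid instead of pid), and the URL and credentials are placeholders:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    // Terminate backends that have been idle for over an hour.
    // Requires superuser (or, on newer versions, pg_signal_backend) rights.
    public class IdlePruner {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://localhost/postgres", "postgres", "secret");
                 Statement st = conn.createStatement()) {
                st.execute(
                    "SELECT pg_terminate_backend(pid) FROM pg_stat_activity " +
                    "WHERE state = 'idle' AND pid <> pg_backend_pid() " +
                    "AND state_change < now() - interval '1 hour'");
            }
        }
    }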
On the other hand, it is possible for a Postgres backend to be nominally connected to a client that has really crashed or otherwise exited, as you alluded to earlier. In that case, Postgres may not notice for some time that the client is actually gone rather than merely idle. The exact timeout is controlled by the operating system's TCP keepalive settings, and you can also adjust this from your client via connection parameters: see the various keepalives-related settings.
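For illustration, here is how a Java client could request keepalives using the PostgreSQL JDBC driver's tcpKeepAlive parameter (the URL and credentials are placeholders); how often probes are actually sent is still governed by the OS unless you tune it there:

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.util.Properties;

    public class KeepaliveConnect {
        public static void main(String[] args) throws Exception {
            Properties props = new Properties();
            props.setProperty("user", "app_user");       // placeholder
            props.setProperty("password", "secret");     // placeholder
            // Enable SO_KEEPALIVE on the client socket so a vanished peer
            // is eventually detected (after the OS keepalive time elapses).
            props.setProperty("tcpKeepAlive", "true");

            try (Connection conn = DriverManager.getConnection(
                    "jdbc:postgresql://db.example.com/mydb", props)) {
                System.out.println("connected: " + !conn.isClosed());
            }
        }
    }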

Related

How to kill Firebird (2.1) attachment/connection if VPN was used for database session

I am using a VPN (Endpoint Security, Check Point) to establish a connection to a Firebird 2.1 database from IBExpert on my computer. Sometimes I forget to disconnect from the database and only cancel/disconnect the VPN session.
When I connect to the VPN and the database again, I can see in MON$ATTACHMENTS that the previous connection/attachment still exists and that its unresolved transactions are causing deadlock errors (they belong to the previous attachment, which can be verified by the transaction number reported in the deadlock error message).
So the VPN sometimes retains sessions, and those retained sessions keep the Firebird attachments alive.
Is there a way (using a SYSDBA connection) to end those other Firebird attachments from my current Firebird session?
I have contacted the VPN administrator to cancel the VPN sessions, but that takes time. A database shutdown is out of the question, since the DB is in production. So ending the Firebird attachments using SQL is the only option left for me, if such an option exists at all.
In Firebird 2.5 and later, you can kill a connection by deleting its row from MON$ATTACHMENTS. This is not supported by the monitoring tables in Firebird 2.1, as far as I'm aware.
Given that even Firebird 2.5 is end-of-life, and Firebird 2.1 has been end-of-life since 2014, you should really consider upgrading.
Normally, Firebird uses the SO_KEEPALIVE socket option to detect dead connections, but this can take a long time (depending on your OS configuration). An alternative might be to set dummy_packet_interval in firebird.conf to a non-zero value (the value is in seconds, so set it to a reasonable, i.e. not too low, value).
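On Firebird 2.5+, killing the stale attachments from your own SYSDBA session boils down to a single DELETE; a sketch over JDBC with Jaybird (URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.Statement;

    public class KillStaleAttachments {
        public static void main(String[] args) throws Exception {
            try (Connection conn = DriverManager.getConnection(
                    "jdbc:firebirdsql://dbhost/employee", "SYSDBA", "masterkey");
                 Statement st = conn.createStatement()) {
                // Deleting a row from MON$ATTACHMENTS terminates that
                // attachment; CURRENT_CONNECTION excludes our own session.
                st.executeUpdate(
                    "DELETE FROM MON$ATTACHMENTS " +
                    "WHERE MON$ATTACHMENT_ID <> CURRENT_CONNECTION");
            }
        }
    }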

ADO.NET background pool validation

In Java, application servers like JBoss EAP have the option to periodically verify the connections in a database pool (https://access.redhat.com/documentation/en-us/red_hat_jboss_enterprise_application_platform/6.4/html/administration_and_configuration_guide/sect-database_connection_validation). This has been very useful for removing stale connections.
I'm now looking at an ADO.NET application, and I was wondering if there is any similar functionality that could be used with Microsoft SQL Server.
I ended up finding this post by Redgate that describes some of the validation that happens when connections are taken from the pool:
If the connection has died because a router has decided that it no longer wants to forward your packets and no other routers like you either, then there is no way to know this unless you try to send some data and don’t get a response.
If you create a connection and a connection pool is created and connections are put into the pool and not used, the longer they are in there, the bigger the chance of something bad happening to them.
When you go to use a connection there is nothing to warn you that a router has stopped forwarding your packets until you go to use it; so until you use it, you do not know that there is a problem.
This was an issue with connection pooling that was fixed in the first .NET 4 reliability update (see issue 14, which vaguely describes this) with a feature called “Connection Pool Resiliency”. The update meant that when a connection is about to be taken from the pool, it is checked for TCP validity and only returned if it is in a good state.
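For comparison, the Java-side validation mentioned in the question does essentially the following before handing out a pooled connection; this is a simplified sketch, not JBoss's actual implementation:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.util.ArrayDeque;
    import java.util.Deque;

    // Sketch of validate-on-match: check a pooled connection before
    // handing it out, and discard it if the check fails. Real pools do
    // this with more care around threading and timeouts.
    class ValidatingPool {
        private final Deque<Connection> idle = new ArrayDeque<>();

        synchronized Connection borrow() throws SQLException {
            Connection conn;
            while ((conn = idle.poll()) != null) {
                // isValid() pings the server, so a connection killed by a
                // router or firewall in between is detected here.
                if (conn.isValid(5)) {           // 5-second timeout
                    return conn;
                }
                conn.close();                    // dead; discard it
            }
            return openNewConnection();
        }

        synchronized void release(Connection conn) {
            idle.push(conn);
        }

        private Connection openNewConnection() throws SQLException {
            // Placeholder: a real pool would open a fresh connection here
            // with DriverManager or a DataSource.
            throw new UnsupportedOperationException("not part of this sketch");
        }
    }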

PostgreSQL 10 error: server closed the connection unexpectedly

When I run a query that takes a long time (maybe 30 minutes) on my Postgres server, I get this error. I've verified that the query is running with active status on the server using pgAdmin. I've also verified the correctness of the query, as it runs successfully on a smaller dataset. The server configuration is at defaults; I haven't changed anything. Please help!
Look into the PostgreSQL server log.
Either you'll find a crash report there, which would explain the broken connection, or there is something in your network that cuts connections with no activity after a while.
Investigate your firewalls!
One possible solution is to set the configuration parameter tcp_keepalives_idle to a value shorter than the interval after which the connection is cut. That will cause the server's operating system to send keepalive messages on idle connections, which may be enough to keep the overzealous connection reaper in your environment from disrupting your work.
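For illustration, tcp_keepalives_idle (and its siblings) can be set per session, so a long-running client could configure this itself; a sketch over JDBC, with values chosen as an example:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    class ServerKeepalives {
        // Ask the server's OS to probe this session's socket after 5
        // minutes of idleness instead of the OS default (often 2 hours).
        // These settings can also go into postgresql.conf; they only take
        // effect on operating systems that support them.
        static void enable(Connection conn) throws SQLException {
            try (Statement st = conn.createStatement()) {
                st.execute("SET tcp_keepalives_idle = 300");
                st.execute("SET tcp_keepalives_interval = 30"); // probe spacing
                st.execute("SET tcp_keepalives_count = 3");     // probes before giving up
            }
        }
    }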

TCP keepalive not working

The situation:
Postgres 9.1 on Debian Server
Scala (Java) application using the LISTEN/NOTIFY mechanism to get notified through JDBC
As there can be very long pauses (multiple days) between notifications, I ran into the problem that the underlying TCP connection silently got terminated after some time and my application stopped receiving notifications.
When googling for a solution, I found that there is a parameter tcpKeepAlive that you can set on the connection. So I set it to true and was happy, until the next day, when I saw that my connection was dead again.
Since I had been suspicious, I had a Wireshark capture running in parallel, which now turns out to be very useful. Almost exactly two hours after the last successful communication on the connection of interest, my application sends a keepalive packet to the database server. However, the server responds with RST, as it seems it has already closed the connection.
The net.ipv4.tcp_keepalive_time on the server is set to 7200, which is 2 hours.
Do I need to somehow enable keepalive on the server or increase the keepalive_time?
Is this the way to go about keeping my application connected?
TL;DR: My database connection gets terminated after long inactivity. Setting tcpKeepAlive didn't fix it, as the server responds with RST. What should I do?
As Craig suggested in the comments, the problem was very likely related to some piece of network hardware between the server and the application. The fix was to increase the frequency of the keepalive messages.
In my case the client OS was Windows, where you have to create a registry value (KeepAliveTime) with the idle time in milliseconds after which the keepalive should be sent. Info on that here
I have set it to 15 minutes, which seems to have solved the issue.
UPDATE:
It only seemed like it solved the issue. After about two days of program run time, my connection was gone again. I switched to checking the validity of my connection every time I use it. This does not feel like the solution, but it is a solution nonetheless.
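In JDBC, that check-before-use pattern can look like the following sketch (URL and credentials are placeholders):

    import java.sql.Connection;
    import java.sql.DriverManager;
    import java.sql.SQLException;

    // Validate the connection each time it is needed and reconnect if it
    // has silently died. isValid() pings the server, so it also catches
    // connections dropped by network gear in between.
    class ConnectionHolder {
        private static final String URL = "jdbc:postgresql://dbhost/mydb"; // placeholder
        private Connection conn;

        synchronized Connection get() throws SQLException {
            if (conn == null || !conn.isValid(5)) {   // 5-second timeout
                if (conn != null) {
                    try { conn.close(); } catch (SQLException ignored) {}
                }
                conn = DriverManager.getConnection(URL, "user", "secret"); // placeholders
            }
            return conn;
        }
    }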

How do you know if PGBouncer is working?

I've set up PGBouncer and configured it to connect to my Postgres DB, and it all connects just fine; however, I'm not sure if it's actually working.
I have a PHP script that runs as a daemon and picks up beanstalk jobs. The problem is that for every distinct user/action on the system, it opens a new connection to Postgres and then leaves that connection idling; since the daemon doesn't actually stop running, the connection is never terminated. (A quick fix for this was to reset the connection at the end of the script loop, but that is going to be inefficient with many connects.)
Anyway, this caused postgres to eventually run out of connections and lock up...
So PGBouncer seems like the answer.
But now when I run it, I see the same DB connection multiple times when I do ps ax | grep postgres.
Isn't PGBouncer supposed to only ever have 1 connection open to the DB and route all traffic through that connection? Then open a new connection if that one is full?
At present I have three connections for one database (my access control system) and two for the other (my client-specific data).
To me, it feels like if I roll out these changes, I will just be faced with the same problem: the connections will get eaten up again because they're not being released.
I hope that explains enough for someone to offer any advice.
It sounds like an important step here is to fix the scripts so they release their connections when they're done. Once you do that, PgBouncer will help reduce the connection setup/teardown overhead, but in session pooling mode it won't give you the ability to maintain more connections to Pg than you otherwise could.
However, you can also use PgBouncer in transaction pooling mode. When used for transaction pooling, PgBouncer keeps a pool of idle backend connections and only assigns one when a client starts a transaction (BEGIN). Connections are returned to the pool after the client does a COMMIT or ROLLBACK. That means that in transaction pooling mode you can have large numbers of open connections to PgBouncer; they don't each need a corresponding connection to a PostgreSQL backend, so long as most of them are idle at any point in time and don't have a transaction open.
The only real downside of transaction pooling mode is that it breaks applications that expect to SET session-level variables, keep prepared statements across transactions, etc. Applications may need modification, e.g. to use SET LOCAL.
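Transaction pooling is selected with pool_mode = transaction in pgbouncer.ini. As a sketch of the SET LOCAL adjustment over JDBC (the setting and the UPDATE are just examples): under transaction pooling a plain SET would leak onto whichever backend the pooler assigns next, so scope it to the transaction instead:

    import java.sql.Connection;
    import java.sql.SQLException;
    import java.sql.Statement;

    class TxPoolSafe {
        static void runWithTimeout(Connection conn) throws SQLException {
            conn.setAutoCommit(false);               // open a transaction
            try (Statement st = conn.createStatement()) {
                // SET LOCAL is reverted automatically at COMMIT/ROLLBACK,
                // so it never outlives PgBouncer's backend assignment.
                st.execute("SET LOCAL statement_timeout = '60s'");
                st.execute("UPDATE jobs SET state = 'done' WHERE id = 42"); // example work
                conn.commit();
            } catch (SQLException e) {
                conn.rollback();
                throw e;
            }
        }
    }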