PostgreSQL dies with 235+ concurrent connections

I have installed PostgreSQL on an Azure VM and am running tests to see whether it can support the expected load. I have increased the max_connections value to 1000, but when I run ab -c 300, PostgreSQL stops responding. Are there any other settings I should be changing?
Thanks, Kate.

PostgreSQL will perform best with far fewer than 1000 connections on most hardware, usually fewer than 100. If your application cannot queue work using a connection pool, put an external connection pooler such as PgBouncer between your application and PostgreSQL.
See: https://wiki.postgresql.org/wiki/Number_Of_Database_Connections
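For illustration, a minimal pgbouncer.ini sketch; the database name mydb, the file paths, and the pool sizes are placeholder assumptions, not values taken from the question:
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb
[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
; up to 1000 client connections share a pool of 20 server connections
max_client_conn = 1000
default_pool_size = 20
The application then connects to port 6432 instead of 5432, and PostgreSQL itself only ever sees around default_pool_size backend connections per database/user pair.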

Related

Where is the number of idle persistent connections defined in PostgreSQL?

I have PostgreSQL 9.6 and a backend in PHP. When I use persistent connections to PostgreSQL via PHP's PDO, I see some idle processes. The command
select * from pg_stat_activity; shows me 3 idle processes whose query column reads DEALLOCATE pdo_stmt_0000013e. I understand that these processes are waiting for new queries, but I do not understand why there are 3 of them. On another project with PostgreSQL I see 50 of the same idle connections. Where is this number defined, and what does it depend on?
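As a starting point, a query like this over pg_stat_activity (standard columns in 9.6) shows which client and application each idle backend belongs to; nothing here is assumed beyond the catalog view itself:
SELECT pid, usename, application_name, client_addr, backend_start, state
FROM pg_stat_activity
WHERE state = 'idle'
ORDER BY backend_start;
Since each persistent PDO connection is held open by one PHP worker process, the number of idle backends usually tracks the number of PHP workers that have served a request so far, which differs between deployments.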

Regarding the Heroku Postgres limit of 20 connections, if I close pgAdmin will I have to waste another connection?

I connected to my team's Heroku Postgres database through pgAdmin 4, using up 1 of the 20-connection limit. If I stop the pgAdmin server, will I have to reconnect later when I need it and waste another connection? I'm new to backend work and just an intern, so I don't want to mess anything up. Thanks for the help.
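If you want to check for yourself, a session that has disconnected no longer counts against the limit; you can watch the current total with a simple query (assuming your user may read pg_stat_activity):
SELECT count(*) FROM pg_stat_activity;
So stopping pgAdmin frees its slot, and reconnecting later just reuses one of the 20 again rather than permanently consuming an extra one.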

Not able to connect to PostgreSQL after 30 connections

I have 2 PostgreSQL cloud servers. The first one works fine, but on the second, after about 30 minutes I am no longer able to connect from my Java application. When I connect from pgAdmin it shows 30 to 40 connections, and after killing those connections everything runs smoothly again.
Its configuration:
postgresql/9.3
max_connections = 100
shared_buffers = 4GB
When the same application connects to another PostgreSQL server with the same schema, everything works fine indefinitely.
Configuration:
postgresql/9.1
max_connections = 100
shared_buffers = 32MB
Can you please help me understand or fix the issue?
I work on a PostgreSQL 9.3 instance with hundreds of open connections, so I agree that the open connections themselves shouldn't be a problem. Since we don't have much information, what follows is a description of how to get started troubleshooting.
Check the server logs for anything suspicious. Maybe there is an issue at the OS level with initiating connections?
Try logging in with psql as the application user. Does the problem persist? If not, the problem is not with PostgreSQL, and I would take a closer look at the Java code to see if something is happening there.
Note that psql and other libpq-based clients may not give you the full picture. Try connecting locally over a non-SSL connection while watching a packet capture; on the wire you can see (and look up) the SQLSTATE error code of the failed connection attempt. For legacy and backwards-compatibility reasons, libpq does not pass the SQLSTATE up to the client application when a connection attempt fails.
My bet, though, is that this is not a PostgreSQL issue. It may be an operating system issue, a resource issue, or a client application issue.
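As a concrete first step, a query along these lines (standard pg_stat_activity columns in 9.3) shows who is holding the 30 to 40 connections visible in pgAdmin:
SELECT usename, application_name, client_addr, state, count(*)
FROM pg_stat_activity
GROUP BY usename, application_name, client_addr, state
ORDER BY count(*) DESC;
If they all belong to the Java application and sit in state 'idle in transaction', that points at transactions being opened on the client and never committed or rolled back.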

Is there a timeout for idle PostgreSQL connections?

1 S postgres 5038 876 0 80 0 - 11962 sk_wai 09:57 ? 00:00:00 postgres: postgres my_app ::1(45035) idle
1 S postgres 9796 876 0 80 0 - 11964 sk_wai 11:01 ? 00:00:00 postgres: postgres my_app ::1(43084) idle
I see a lot of them. We are trying to fix our connection leak, but in the meantime we want to set a timeout for these idle connections, maybe 5 minutes at most.
It sounds like you have a connection leak in your application because it fails to close pooled connections. You aren't having issues just with <idle> in transaction sessions, but with too many connections overall.
Killing connections is not the right answer for that, but it's an OK-ish temporary workaround.
Rather than restarting PostgreSQL to boot all other connections off a PostgreSQL database, see: How do I detach all other users from a postgres database? and How to drop a PostgreSQL database if there are active connections to it?. The latter shows a better query.
For setting timeouts, as @Doon suggested see How to close idle connections in PostgreSQL automatically?, which advises you to use PgBouncer to proxy for PostgreSQL and manage idle connections. This is a very good idea if you have a buggy application that leaks connections anyway; I very strongly recommend configuring PgBouncer.
A TCP keepalive won't do the job here, because the app is still connected and alive; it just shouldn't be.
In PostgreSQL 9.2 and above, you can use the state_change timestamp column and the state field of pg_stat_activity to implement an idle connection reaper. Have a cron job run something like this:
SELECT pg_terminate_backend(pid)
FROM pg_stat_activity
WHERE datname = 'regress'
AND pid <> pg_backend_pid()
AND state = 'idle'
AND state_change < current_timestamp - INTERVAL '5' MINUTE;
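For illustration, a crontab entry that runs the reaper every five minutes; the database name regress comes from the query above, and authentication (for example via a .pgpass file) is assumed to be set up separately:
*/5 * * * * psql -X -d regress -c "SELECT pg_terminate_backend(pid) FROM pg_stat_activity WHERE datname = 'regress' AND pid <> pg_backend_pid() AND state = 'idle' AND state_change < current_timestamp - INTERVAL '5' MINUTE;"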
In older versions you need to implement complicated schemes that keep track of when the connection went idle. Do not bother; just use PgBouncer.
In PostgreSQL 9.6, there's a new option idle_in_transaction_session_timeout, which terminates sessions that sit idle inside an open transaction (plain idle sessions are not affected). You can set it using the SET command, e.g.:
SET SESSION idle_in_transaction_session_timeout = '5min';
In PostgreSQL 9.1, I killed the idle connections with the following query. It helped me ward off a situation that would otherwise have required restarting the database. This happens mostly with JDBC connections that are opened and not closed properly.
SELECT pg_terminate_backend(procpid)
FROM pg_stat_activity
WHERE current_query = '<IDLE>'
AND now() - query_start > '00:10:00';
(In 9.2 and later the columns changed: procpid became pid, and current_query was split into the state and query columns.)
If you are using PostgreSQL 9.6+, then in your postgresql.conf you can set
idle_in_transaction_session_timeout = 30000    # milliseconds, i.e. 30 seconds
There is a timeout on broken connections (i.e. broken due to network errors), which relies on the OS's TCP keepalive feature. By default on Linux, broken TCP connections are closed after roughly 2 hours (see sysctl net.ipv4.tcp_keepalive_time).
There is also a timeout on abandoned transactions, idle_in_transaction_session_timeout and on locks, lock_timeout. It is recommended to set these in postgresql.conf.
But there is no timeout for a properly established client connection. If a client wants to keep the connection open, it should be able to do so indefinitely. If a client is leaking connections (opening more and more connections and never closing them), then fix the client. Do not try to abort properly established idle connections on the server side.
A possible workaround that enables a database session timeout without an external scheduled task is the pg_timeout extension, which I have developed.
Another option is to set tcp_keepalives_idle. See the documentation for more: https://www.postgresql.org/docs/10/runtime-config-connection.html.
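For illustration, a sketch of the relevant postgresql.conf keepalive settings; the values below are assumptions chosen to detect a dead peer after roughly two minutes, not recommendations from this thread:
tcp_keepalives_idle = 60        # seconds of inactivity before the first keepalive probe
tcp_keepalives_interval = 10    # seconds between unanswered probes
tcp_keepalives_count = 6        # unanswered probes before the connection is considered dead
Keep in mind that keepalives only detect broken connections; as noted above, they will not close a connection whose client is alive but simply idle.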

Timeout of remote connection to PostgreSQL

I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL db that lives on the other. Incidentally it's OpenStreetMap data and I am using the osm2pgsql utility to insert it; not sure if that's relevant, I don't think so.
For smaller inserts everything is fine, but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? I'm not too sure where to start troubleshooting.
Thanks
Could be caused by SSL renegotiation. Check the log, and maybe tweak
ssl_renegotiation_limit = 512MB    # the default
Setting it to zero will disable renegotiation.
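For illustration, the corresponding postgresql.conf line; note that, as far as I know, the default changed to 0 in PostgreSQL 9.4 and the parameter was removed entirely in 9.5, so this only applies to older releases:
ssl_renegotiation_limit = 0     # disable SSL renegotiation
After editing postgresql.conf, reload the configuration (for example with SELECT pg_reload_conf();) for the change to take effect.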