PGSQL could not fork new process for connection

When I try to connect to my PostgreSQL database I get the error message "could not fork new process for connection : No error could not fork new process for connection".

You have either exceeded the process ulimit for the PostgreSQL user, or you have run out of memory. There are perhaps other causes, but these are the most frequent. You will have to determine the cause and fix the operating system and/or PostgreSQL configuration accordingly.
Reduce your max_connections to a reasonable limit like 100 and make sure that work_mem is not excessively high.
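For reference, a minimal postgresql.conf sketch along those lines (the exact values are illustrative assumptions; size them for your RAM and workload):

max_connections = 100    # cap on concurrent backends
work_mem = 4MB           # per-sort/per-hash memory; multiplied across connections

work_mem applies per sort or hash operation, so worst-case memory use scales with max_connections times several multiples of work_mem. Note that changing max_connections requires a server restart, while a changed work_mem takes effect after SELECT pg_reload_conf();.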

Related

postgres corrupted shared memory

I am trying to use PostgreSQL to process queries on the GPU using PG-Strom.
Whenever I try to process a query I get this error:
postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
I have read that I need to increase shared buffers and work memory; these are my current configurations:
shared_buffers = 10GB
work_mem = 1GB
I want to know how to increase them to their maximum limit, because whenever I try to increase them I can no longer connect to PostgreSQL.
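As a sketch of the mechanics only (the 12GB figure is an illustrative assumption, not a recommendation): settings like these can be changed with ALTER SYSTEM, and shared_buffers only takes effect after a full restart, which is a common reason the server seems unreachable right after raising it beyond what the machine can allocate:

ALTER SYSTEM SET shared_buffers = '12GB';   -- illustrative value only
-- restart the server, e.g.: pg_ctl restart -D /path/to/data
SHOW shared_buffers;                        -- verify after restarting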

How can I find the host name and the query executed on the slave Postgres database that caused the out-of-memory crash?

My slave database crashed with an out-of-memory error and is in the recovery stage.
I want to know which query caused this issue.
I have checked the logs and found one query just before the system went into recovery mode, but I want to confirm it.
I am using Postgres 9.4.
Does anyone have any idea?
If you set log_min_error_statement to error (the default) or lower, you will get the statement that caused the out-of-memory error in the database log.
But I assume that you got hit by the Linux OOM killer, that would cause a PostgreSQL process to be killed with signal 9, whereupon the database goes into recovery mode.
The correct solution here is to disable memory overcommit by setting vm.overcommit_memory to 2 in /etc/sysctl.conf and activating the setting with sysctl -p (you should then also tune vm.overcommit_ratio correctly).
Then you will get an error rather than a killed process, which is easier to debug.
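A minimal /etc/sysctl.conf sketch of that advice (the ratio value is an assumption; choose one that matches your RAM and swap):

vm.overcommit_memory = 2   # fail allocations with ENOMEM instead of invoking the OOM killer
vm.overcommit_ratio = 80   # illustrative: percent of RAM committable on top of swap

Apply it with sysctl -p. On the PostgreSQL side, log_min_error_statement = error is already the default, so normally nothing needs to change for the failing statement to be logged.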

Getting "FATAL: sorry, too many clients already" when the max_connections number is not reached

When I try to log in to the PostgreSQL database I get the error "FATAL: sorry, too many clients already". I checked the number of connections from another, already-open session with the SQL query select sum(numbackends) from pg_stat_database where datname = '$dbname'; and the number is only 13, but max_connections is set to 100. How can the error happen when max_connections has not been reached?
The PostgreSQL server is a read-replica instance running in a Docker container. I've verified the max_connections value by running show max_connections;. Also, the error seems random to me: sometimes, when I open as many connections as possible to test the limit, it accepts 100 connections and only throws the error after that.
It turned out the reason was that the I/O was fully saturated and the server was barely responding. It then threw the error "FATAL: sorry, too many clients already" even though max_connections had not been reached.
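When debugging this kind of discrepancy, it can help to count every backend rather than relying on per-database counters; a quick check sketch:

select count(*) from pg_stat_activity;   -- all backends the server currently tracks
show max_connections;
show superuser_reserved_connections;     -- slots held back for superusers

The effective limit for ordinary users is max_connections minus superuser_reserved_connections, so the error can appear slightly before the nominal limit.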

psql seems to timeout with long queries

I am performing a bulk copy into postgres with about 80GB of data.
\copy my_table FROM '/path/csv_file.csv' csv DELIMITER ','
Before the transaction is committed I get the following error.
Server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
In the PostgreSQL logs:
LOG:  server process (PID 21122) was terminated by signal 9: Killed
LOG:  terminating any other active server processes
WARNING:  terminating connection because of crash of another server process
DETAIL:  The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
HINT: In a moment you should be able to reconnect to the database and repeat your command.
Your backend process received a signal 9 (SIGKILL). This can happen if:
Somebody sends a kill -9 manually;
A cron job is set up to send kill -9 under some circumstances (very unsafe, do not do this); or
The Linux out-of-memory (OOM) killer triggers and terminates the process.
In the latter case you will see reports of OOM killer activity in the kernel's dmesg output. I expect this is what you'll see in your case.
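A quick way to check, as a sketch (the exact wording varies by kernel version):

dmesg | grep -i -E 'out of memory|killed process'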
PostgreSQL servers should be configured without virtual memory overcommit so that the OOM killer does not run and PostgreSQL can handle out-of-memory conditions itself. See the PostgreSQL documentation on Linux memory overcommit.
The separate question of why this is using so much memory remains. Answering that requires more knowledge of your setup: how much RAM the server has, how much of it is free, what your work_mem and maintenance_work_mem settings are, and so on. It isn't a very interesting problem to look into until you upgrade to the current PostgreSQL 8.4 patch release, to make sure the problem isn't one that has already been fixed.
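To gather those numbers, a quick sketch of the checks in psql:

show work_mem;
show maintenance_work_mem;
show shared_buffers;
select version();   -- confirms the exact patch release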

Timeout of remote connection to PostgreSQL

I have two EC2 instances; one of them needs to insert large amounts of data into a PostgreSQL database that lives on the other one. Incidentally, it's OpenStreetMap data and I am using the osm2pgsql utility to insert it; not sure if that's relevant, I don't think so.
For smaller inserts everything is fine but for very large inserts something times out after around 15 minutes and the operation fails with:
COPY_END for planet_osm_point failed: SSL SYSCALL error: Connection timed out
Is this timeout enforced by PostgreSQL, Ubuntu, or AWS? I'm not too sure where to start troubleshooting.
Thanks
This could be caused by SSL renegotiation. Check the log, and maybe tweak
ssl_renegotiation_limit = 512MB (the default)
Setting it to zero will disable renegotiation.
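As a sketch, in postgresql.conf on the server (note that ssl_renegotiation_limit exists only up to PostgreSQL 9.4; it was removed in later releases):

ssl_renegotiation_limit = 0   # disable SSL renegotiation entirely

Then reload the configuration, e.g. with pg_ctl reload or SELECT pg_reload_conf();.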