FATAL: out of memory - PostgreSQL 9.6

We are frequently hitting FATAL: out of memory in a PostgreSQL 9.6 database. The DB server has 125 GB of physical memory, and 8 GB has been allocated to shared_buffers.
Please suggest which database parameters to tune to avoid these out-of-memory issues.

This could be caused by resource limits on the postgres user.
Check the output of ulimit -a as the postgres user.
If they are not already, set the relevant limits (at least open files and max user processes) to unlimited.
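For example, a quick way to check and raise these limits on a typical Linux box (the file path and the 65536 value below are illustrative assumptions, not values from the original answer):

sudo -u postgres bash -c 'ulimit -a'    # show the limits the postgres user actually runs with

# make higher limits persistent in /etc/security/limits.conf
postgres  soft  nofile  65536
postgres  hard  nofile  65536
postgres  soft  nproc   unlimited
postgres  hard  nproc   unlimited

Note that if PostgreSQL is started by systemd, the effective limits come from the service unit (e.g. LimitNOFILE=) rather than from limits.conf.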

Related

RDS Postgres out of shared memory

I have an RDS server on AWS that contains two databases, staging and development. The number of sessions is not that high, but I receive an error of
ErrorResponse: out of shared memory
The projects are Node.js applications using the Sequelize ORM.
Is there a solution I can apply?
How did you set the shared_buffers option? Also, your DB instance is a really small type.
https://postgresqlco.nf/doc/en/param/shared_buffers/
If you have a dedicated database server with 1GB or more of RAM, a reasonable starting value for shared_buffers is 25% of the memory in your system
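As a rough sketch of what that means (the 2GB figure is just an illustration for an 8 GB machine; on RDS this parameter is changed through a DB parameter group rather than by editing postgresql.conf):

# postgresql.conf on a self-managed server
shared_buffers = 2GB    # roughly 25% of an 8 GB machine

shared_buffers only takes effect after a full server restart, not a configuration reload.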

Heroku Postgres FATAL: out of shared memory

I'm on the Heroku Hobby Tier Postgres. After a redeploy I got
psql: FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
when trying to access my psql database.
pg:info shows
Plan: Hobby-basic
Status: Unavailable, operator notified
Connections: 2/20
PG Version: 10.6
Created: 2018-07-02 18:38 UTC
Data Size: 1.38 GB
Tables: 78
Rows: 4643980/10000000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
Is there something I can do to resolve this myself on Heroku?
Heroku routinely performs maintenance on the Heroku Postgres resource if you are on the hobby tier. That is what happened in this situation. They are not obligated to notify you.
It's also important to note that this should only cause a minimal amount of downtime:
Maintenance windows are 4 hours long. Heroku attempts to begin the maintenance as close to the beginning of the specified window as possible. The duration of the maintenance can vary, but usually, your database is only offline for 10 – 60 seconds. Source
If your downtime is much longer than that, maintenance is likely not your problem.
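If the error keeps recurring outside a maintenance window and you are on a server where you control the configuration (which the Heroku hobby tier does not allow), the HINT can be followed directly. A minimal sketch for a self-managed postgresql.conf (128 is only an example; the default is 64):

max_locks_per_transaction = 128

This setting requires a server restart to take effect.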

How to limit pg_dump's memory usage?

I have a ~140 GB Postgres DB on Heroku / AWS. I want to create a dump of it on an Azure Windows Server 2012 R2 virtual machine, as I need to move the DB into the Azure environment.
The DB has a couple of smaller tables, but mainly consists of a single table taking ~130 GB, including indexes. It has ~500 million rows.
I've tried to use pg_dump for this, with:
./pg_dump -Fc --no-acl --no-owner --host * --port 5432 -U * -d * > F:/051418.dump
I've tried various Azure virtual machine sizes, including some fairly large ones (D12_V2: 28 GB RAM, 4 vCPUs, 12000 max IOPS, etc.), but in all cases pg_dump stalls completely due to memory swapping.
On the above machine it is currently using all available memory and has spent the past 12 hours swapping to disk. I don't expect it to complete, because of the swapping.
From other posts I've understood it could be an issue with the network speed being much faster than the disk I/O speed, causing pg_dump to consume all available memory and more, so I've tried using the Azure machine with the most IOPS. This hasn't helped.
So is there another way I can force pg_dump to cap its memory usage, or make it wait before pulling more data until it has written to disk and freed memory?
Looking forward to your help!
Kind regards,
Christian
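One thing worth trying (a sketch, not from the original thread): write the dump with -f instead of shell redirection, and use the directory format so the output is split into per-table files:

pg_dump -Fd --no-acl --no-owner --host <host> --port 5432 -U <user> -d <db> -f F:/051418_dump

Here <host>, <user>, and <db> are placeholders. -Fd writes a directory instead of one large custom-format file, and -f avoids any problems the Windows shell might introduce when redirecting binary output; whether this actually reduces memory pressure in your case is an assumption to verify.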

Postgres not able to start

I have a problem with postgres not being able to start:
* Starting PostgreSQL 9.5 database server * The PostgreSQL server failed to start. Please check the log output:
2016-08-25 04:20:53 EDT [1763-1] FATAL: could not map anonymous shared memory: Cannot allocate memory
2016-08-25 04:20:53 EDT [1763-2] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1124007936 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
This is the response I got. I checked the log with tail -f /var/log/postgresql/postgresql-9.5-main.log and I can see the same message.
Can someone suggest what can be the problem?
I used the following commands to stop/start the Postgres server on Ubuntu 14.04 with Postgres 9.5:
sudo service postgresql stop
sudo service postgresql start
As other people suggested, the error was simply a lack of memory. I increased the droplet's memory on DigitalOcean and that fixed it.
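If adding memory is not an option, the alternative the HINT points at is to shrink PostgreSQL's shared memory request. A minimal sketch, assuming a small Ubuntu VM (512MB is only an example; the failing request above was roughly 1.1 GB):

# /etc/postgresql/9.5/main/postgresql.conf
shared_buffers = 512MB

sudo service postgresql restart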

How to increase the max connections in postgres?

I am using a Postgres DB for my product. While doing a batch insert using Slick 3, I am getting an error message:
org.postgresql.util.PSQLException: FATAL: sorry, too many clients already.
My batch insert operation involves thousands of records.
The max connections for my Postgres is 100.
How do I increase the max connections?
Just increasing max_connections is a bad idea. You need to increase shared_buffers and kernel.shmmax as well.
Considerations
max_connections determines the maximum number of concurrent connections to the database server. The default is typically 100 connections.
Increasing your connection count may require scaling up your deployment, so first consider whether you really need a higher connection limit.
Each PostgreSQL connection consumes RAM for managing the connection or the client using it. The more connections you have, the more RAM you will be using that could instead be used to run the database.
A well-written app typically doesn't need a large number of connections. If you have an app that does need a large number of connections, then consider using a tool such as PgBouncer, which can pool connections for you. As each connection consumes RAM, you should be looking to minimize their use.
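A minimal pgbouncer.ini sketch (the database name, ports, and pool sizes here are placeholders, not recommendations):

[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
auth_type = md5
auth_file = /etc/pgbouncer/userlist.txt
pool_mode = transaction
max_client_conn = 500
default_pool_size = 20

The application connects to port 6432 instead of 5432, and PgBouncer multiplexes those client connections over a much smaller pool of real PostgreSQL connections.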
How to increase max connections
1. Increase max_connection and shared_buffers
in /var/lib/pgsql/{version_number}/data/postgresql.conf
change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
The shared_buffers configuration parameter determines how much memory is dedicated to PostgreSQL to use for caching data.
If you have a system with 1 GB or more of RAM, a reasonable starting value for shared_buffers is 1/4 of the memory in your system. It's unlikely you'll find using more than 40% of RAM to work better than a smaller amount (like 25%).
Be aware that if your system or PostgreSQL build is 32-bit, it might not be practical to set shared_buffers above 2 ~ 2.5 GB.
Note that on Windows, large values for shared_buffers aren't as effective, and you may find better results keeping it relatively low and using the OS cache more instead. On Windows the useful range is 64 MB to 512 MB.
2. Change kernel.shmmax
You would need to increase the kernel's maximum shared memory segment size so that it is slightly larger than shared_buffers.
In /etc/sysctl.conf, set the parameter as shown below; the file is read at system boot (the following line sets the kernel maximum to roughly 96 MB):
kernel.shmmax=100663296
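To apply the change without a full reboot (standard Linux commands; adjust the service command for your distribution):

sudo sysctl -p
sudo service postgresql restart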
References
Postgres Max Connections And Shared Buffers
Tuning Your PostgreSQL Server
Adding to Winnie's great answer: if anyone is not able to find the postgresql.conf file location in your setup, you can always ask Postgres itself.
SHOW config_file;
For me, changing max_connections alone did the trick.
EDIT: From #gies0r: In Ubuntu 18.04 it is at
/etc/postgresql/11/main/postgresql.conf
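For example, from a shell on the database host (assuming you can run psql as the postgres user):

sudo -u postgres psql -c 'SHOW config_file;'
sudo -u postgres psql -c 'SHOW max_connections;'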
If your postgres instance is hosted by Amazon RDS, Amazon configures the max connections for you based on the amount of memory available.
Their documentation says you get 112 connections per 1 GB of memory (with a limit of 5000 connections no matter how much memory you have), but we found we started getting error messages closer to 80 connections in an instance with only 1 GB of memory. Increasing to 2 GB let us use 110 connections without a problem (and probably more, but that's the most we've tried so far.) We were able to increase the memory of an existing instance from 1 GB to 2 GB in just a few minutes pretty easily.
Here's the link to the relevant Amazon documentation:
https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/CHAP_Limits.html#RDS_Limits.MaxConnections
Change the max_connections variable in the postgresql.conf file, located in
/var/lib/pgsql/data or /usr/local/pgsql/data/
Locate the postgresql.conf file with the command below:
locate postgresql.conf
Edit the postgresql.conf file with the command below:
sudo nano /etc/postgresql/14/main/postgresql.conf
Change
max_connections = 100
shared_buffers = 24MB
to
max_connections = 300
shared_buffers = 80MB
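Both settings only take effect after a restart, e.g. (the exact service name and command depend on your distribution):

sudo systemctl restart postgresql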