Postgres not able to start - postgresql

I have a problem with postgres not being able to start:
* Starting PostgreSQL 9.5 database server * The PostgreSQL server failed to start. Please check the log output:
2016-08-25 04:20:53 EDT [1763-1] FATAL: could not map anonymous shared memory: Cannot allocate memory
2016-08-25 04:20:53 EDT [1763-2] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1124007936 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
This is the response that I got. I checked the log with tail -f /var/log/postgresql/postgresql-9.5-main.log and I see the same message there.
Can someone suggest what the problem might be?
I used the following commands to stop/start the Postgres server on Ubuntu 14.04 with Postgres 9.5:
sudo service postgresql stop
sudo service postgresql start

As others suggested, the error was simply a lack of memory. I increased the droplet's memory on DigitalOcean, which fixed it.
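If adding memory is not an option, the HINT in the error points at the alternative: shrink PostgreSQL's shared-memory request. A minimal sketch for an Ubuntu 14.04 / PostgreSQL 9.5 layout; the values are illustrative assumptions, not tuned recommendations:

```
# /etc/postgresql/9.5/main/postgresql.conf
shared_buffers = 256MB    # example value; lower than whatever produced
                          # the ~1.1 GB shared memory request
max_connections = 50      # example value; fewer connections also shrink
                          # the request

# then restart:
#   sudo service postgresql restart
```

Either change reduces the size of the shared memory segment requested at startup, so the server can start within the droplet's available memory.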

Related

How can I get the host name and the query that caused my slave Postgres database to crash with an out-of-memory error?

My slave database crashed with an out-of-memory error and is now in the recovery stage.
I want to identify the query that caused the issue.
I checked the logs and found one query just before the system went into recovery mode, but I want to confirm it.
I am using Postgres 9.4.
Does anyone have any ideas?
If you set log_min_error_statement to error (the default) or lower, you will get the statement that caused the out-of-memory error in the database log.
But I assume that you got hit by the Linux OOM killer, that would cause a PostgreSQL process to be killed with signal 9, whereupon the database goes into recovery mode.
The correct solution here is to disable memory overcommit by setting vm.overcommit_memory to 2 in /etc/sysctl.conf and activating the setting with sysctl -p (you should then also tune vm.overcommit_ratio appropriately).
Then you will get an error rather than a killed process, which is easier to debug.
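Concretely, the settings described above would look like this; the overcommit_ratio value is an example and should be chosen based on your RAM and swap:

```
# /etc/sysctl.conf
vm.overcommit_memory = 2   # disable overcommit: allocations fail with an
                           # error instead of triggering the OOM killer
vm.overcommit_ratio = 80   # example value; commit limit = swap + 80% of RAM

# apply without rebooting:
#   sudo sysctl -p
```

With overcommit disabled, an oversized allocation shows up as an "out of memory" error in the PostgreSQL log (with the offending statement, given log_min_error_statement), rather than as a backend killed by signal 9.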

Heroku Postgres FATAL: out of shared memory

I'm on the Heroku Hobby Tier Postgres. After a redeploy I got
psql: FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
when trying to access my psql database.
pg:info shows
Plan: Hobby-basic
Status: Unavailable, operator notified
Connections: 2/20
PG Version: 10.6
Created: 2018-07-02 18:38 UTC
Data Size: 1.38 GB
Tables: 78
Rows: 4643980/10000000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
is there something I can do to resolve this myself from Heroku?
Heroku routinely performs maintenance on Heroku Postgres resources on the hobby tier. That is what happened in this situation. They are not obligated to notify you.
It's also important to note that this should only cause a minimal amount of downtime:
Maintenance windows are 4 hours long. Heroku attempts to begin the maintenance as close to the beginning of the specified window as possible. The duration of the maintenance can vary, but usually, your database is only offline for 10 – 60 seconds. Source
If your database is offline for much longer than that, maintenance is probably not the cause.

FATAL: out of memory in PostgreSQL 9.6

We are frequently seeing FATAL: out of memory errors in a PostgreSQL 9.6 database. The server has 125 GB of physical memory, and 8 GB has been allocated to shared_buffers.
Please suggest any database parameters we could tune to avoid these out-of-memory issues.
This could be caused by resource limits on the postgres user.
Check the output of ulimit -a as the postgres user.
Raise the limits to unlimited (at least for files/open files/max processes) if they are not already.
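A quick way to inspect and raise those limits is sketched below; the limits.conf values are illustrative assumptions, not tuned recommendations:

```shell
# Show the resource limits in effect for the current shell.
# Run this as the postgres user, e.g. after: sudo -u postgres -i
ulimit -a    # all limits at once
ulimit -n    # max open files
ulimit -u    # max user processes

# To raise the limits persistently, entries like these can go in
# /etc/security/limits.conf (example values; log in again to apply):
#   postgres  soft  nofile  65536
#   postgres  hard  nofile  65536
#   postgres  soft  nproc   unlimited
```

Note that a service started by init rather than a login shell may take its limits from the service configuration instead of limits.conf, so check how your PostgreSQL service is launched.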

postgresql consumed 100% of storage

I'm running a Red Hat 4.8.3-9 server in the AWS cloud with PostgreSQL 9.4. The database has consumed 100% of my disk space. I went into the database and truncated the table with the most data. After viewing the table sizes with \d+, no table was over a couple of MB. I ran du -h * --max-depth=1 and found that /var/lib/pgsql94/data/base/16384 held 472G of the 500G total storage. Even after truncating the tables, my disk usage is still at 99%. I'm wondering if there is a way to release the space, because I believe deleting the files in data/base/16384 directly would be a bad idea. I have tried stopping and restarting the postgres service. Unfortunately, I am not allowed to reboot the machine.
df -ih shows inode usage is 1%
sudo lsof +L1 does not show any large files at all
Thank you
Log files: 8K worth of these repeating messages:
LOG: could not write temporary statistics file "pg_stat_tmp/db_16384.tmp": No space left on device
LOG: could not close temporary statistics file "pg_stat_tmp/db_0.tmp": No space left on device
LOG: could not close temporary statistics file "pg_stat_tmp/global.tmp": No space left on device
LOG: using stale statistics instead of current ones because stats collector is not responding
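One way to narrow this down is to compare what the filesystem reports with what PostgreSQL's catalogs report; if they disagree badly, the space is held by files the catalogs no longer know about (e.g. orphaned files after a crash, or temp files). A diagnostic sketch, using the paths from the question (adjust PGDATA to your install):

```shell
# Filesystem view: biggest per-database directories, and overall usage.
PGDATA=/var/lib/pgsql94/data
du -sh "$PGDATA"/base/* 2>/dev/null | sort -h | tail -n 5
df -h "$PGDATA" 2>/dev/null || df -h

# Catalog view, from inside psql:
#   SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;
#   SELECT relname, pg_size_pretty(pg_total_relation_size(oid))
#     FROM pg_class ORDER BY pg_total_relation_size(oid) DESC LIMIT 10;
# If du reports far more than the catalogs do, look for leftover files in
# base/<oid>/pgsql_tmp or files not referenced by any pg_class.relfilenode.
```

Note that TRUNCATE does release disk space immediately for the truncated table, so space that remains in use after truncation is usually held by other relations, WAL, temp files, or orphaned files rather than by the truncated table itself.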

pg_dump out-of-memory error

I am trying to back up a database, products, with pg_dump.
The total size of the database is 1.6 GB. One of the tables in the database, product_image, is 1 GB in size.
When I run pg_dump on the database, the backup fails with this error:
pg_dump: Dumping the contents of table "product_image" failed: PQgetCopyData() failed.
pg_dump: Error message from server: lost synchronization with server: got message type "d", length 6036499
pg_dump: The command was: COPY public.product_image (id, username, projectid, session, filename, filetype, filesize, filedata, uploadedon, "timestamp") T
If I back up the database excluding the product_image table, the backup succeeds.
I tried increasing shared_buffers in postgresql.conf from 128MB to 1.5GB, but the issue still persists. How can this be resolved?
I ran into the same error, and it was due to a buggy patch from Red Hat for OpenSSL in early June 2015. There is related discussion on the PostgreSQL mailing list.
If you use SSL connections and cross a transferred size threshold, which depends on your PG version (default 512MB for PG < 9.4), the tunnel attempts to renegotiate the SSL keys and the connection dies with the errors you posted.
The fix that worked for me was setting the ssl_renegotiation_limit to 0 (unlimited) in postgresql.conf, followed by a reload.
ssl_renegotiation_limit (integer)
Specifies how much data can flow over an SSL-encrypted connection before renegotiation of the session keys will take place. Renegotiation decreases an attacker's chances of doing cryptanalysis when large amounts of traffic can be examined, but it also carries a large performance penalty. The sum of sent and received traffic is used to check the limit. If this parameter is set to 0, renegotiation is disabled. The default is 512MB.
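Assuming a pre-9.5 server (the parameter was removed in PostgreSQL 9.5, where renegotiation is never performed), the fix described above is a one-line configuration change followed by a reload:

```
# postgresql.conf (PostgreSQL < 9.5)
ssl_renegotiation_limit = 0   # 0 disables SSL renegotiation entirely

# then reload the configuration, e.g.:
#   sudo service postgresql reload
# or from psql:
#   SELECT pg_reload_conf();
```

With renegotiation disabled, pg_dump can stream more than the 512MB default over an SSL connection without the session-key renegotiation that triggered the lost-synchronization error.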