Heroku Postgres FATAL: out of shared memory - postgresql

I'm on the Heroku Hobby Tier Postgres. After a redeploy I got
psql: FATAL: out of shared memory
HINT: You might need to increase max_locks_per_transaction.
when trying to access my psql database.
heroku pg:info shows:
Plan: Hobby-basic
Status: Unavailable, operator notified
Connections: 2/20
PG Version: 10.6
Created: 2018-07-02 18:38 UTC
Data Size: 1.38 GB
Tables: 78
Rows: 4643980/10000000 (In compliance)
Fork/Follow: Unsupported
Rollback: Unsupported
Continuous Protection: Off
Is there something I can do to resolve this myself from the Heroku side?

Heroku routinely performs maintenance on the Heroku Postgres resource if you are on the hobby tier. That is what happened in this situation. They are not obligated to notify you.

It's also important to note that this should only cause a minimal amount of downtime:
Maintenance windows are 4 hours long. Heroku attempts to begin the maintenance as close to the beginning of the specified window as possible. The duration of the maintenance can vary, but usually your database is only offline for 10–60 seconds. (Source)
If your database is unavailable for much longer than that, maintenance is likely not your problem.
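If you want to verify from your side when the database comes back, the standard Heroku CLI can do that; the app name below is a placeholder:

# Re-check plan and status
heroku pg:info --app my-app
# Block until the database reports itself as available again
heroku pg:wait --app my-app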

Related

PostgreSQL 9.6 vacuum using 100% CPU

I am running PostgreSQL 9.6 on CentOS 7.
The database was migrated from a PostgreSQL 9.4 server that did not have the issue.
With autovacuum on, Postgres constantly uses 100% of one core (10% of total CPU). With autovacuum off, it does not use any CPU except when executing queries.
Is this expected or normal, or is something bad going on? Note that it is a very big database, with many schemas/tables.
I tried
vacuumdb --all -w
and
ANALYZE VERBOSE;
The "ANALYZE VERBOSE;" made the database run a lot faster, but did not change the CPU usage.

Do backups using pg_dump cause server outage if the database is too busy?

I have a Postgres database in a production environment, and it has millions of records in its tables. I wanted to take a backup using pg_dump for some investigation.
But this database is very busy, so I am afraid the backup operation could cause server issues such as slowing the server down or crashing the database.
Can anyone tell me whether there is any risk, and share some best practices for taking a backup from Postgres safely?
Running pg_dump will not cause a server crash, but it will add some extra CPU and, in particular, I/O load. You can test whether that is a problem; pg_dump can be canceled at any time.
On a busy database, it can also lead to table bloat, because old row versions have to be retained for the duration of pg_dump and cannot be vacuumed.
There are some alternatives:
Run pg_dump against a standby server.
Use pg_basebackup to perform a physical backup. That can be throttled to reduce the I/O load.
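A minimal sketch of both options; the hostnames, user names, and paths are placeholders, not anything from the original question:

# Logical dump taken from a standby instead of the busy primary
pg_dump -h replica.example.com -U postgres -Fc -f mydb.dump mydb
# Physical base backup, throttled to roughly 50 MB/s of I/O
pg_basebackup -h db.example.com -U replication_user -D /backups/base -Ft -z --max-rate=50M -P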

FATAL: out of memory +postgreSQL 9.6

We are frequently seeing FATAL: out of memory in a PostgreSQL 9.6 database. We have 125 GB of physical memory available on the DB server, and 8 GB has been allocated to shared_buffers.
Please provide input on tuning any DB-related parameters to avoid these out-of-memory issues.
This could be due to resource limits on the postgres user.
Check the output of ulimit -a as the postgres user.
Set the limits to unlimited (at least for file size/open files/max processes) if they are not already.
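A sketch of how to check and raise those limits, assuming a Linux box where PostgreSQL runs as the postgres user (the values are only examples):

# Show the limits the postgres user actually runs with
sudo -u postgres bash -c 'ulimit -a'
# Raise them persistently by adding lines like these to /etc/security/limits.conf:
#   postgres  soft  nproc   unlimited
#   postgres  hard  nproc   unlimited
#   postgres  soft  nofile  65536
#   postgres  hard  nofile  65536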

Postgres not able to start

I have a problem with postgres not being able to start:
* Starting PostgreSQL 9.5 database server * The PostgreSQL server failed to start. Please check the log output:
2016-08-25 04:20:53 EDT [1763-1] FATAL: could not map anonymous shared memory: Cannot allocate memory
2016-08-25 04:20:53 EDT [1763-2] HINT: This error usually means that PostgreSQL's request for a shared memory segment exceeded available memory, swap space, or huge pages. To reduce the request size (currently 1124007936 bytes), reduce PostgreSQL's shared memory usage, perhaps by reducing shared_buffers or max_connections.
This is the response that I got. I checked the log with tail -f /var/log/postgresql/postgresql-9.5-main.log and I can see the same message.
Can someone suggest what can be the problem?
I used the following commands to stop/start the Postgres server on Ubuntu 14.04 with Postgres 9.5:
sudo service postgresql stop
sudo service postgresql start
As other people suggested, the error was simply a lack of memory. I increased the droplet's memory on DigitalOcean and that fixed it.
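If adding memory is not an option, the hint in the error message points to the alternative: shrink the shared memory request. A sketch, assuming the default Ubuntu config path and example values:

# Edit /etc/postgresql/9.5/main/postgresql.conf and lower, for example:
#   shared_buffers = 256MB
#   max_connections = 50
# then restart the server
sudo service postgresql restart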

Heroku: update database plan, then delete the first one

I updated my DB plan on heroku quite some time ago, following this clear tutorial: https://devcenter.heroku.com/articles/upgrade-heroku-postgres-with-pgbackups
So now I have 2 DBs running:
$ heroku pg:info
=== HEROKU_POSTGRESQL_NAVY_URL (DATABASE_URL)
Plan: Crane
Status: Available
Data Size: 26.1 MB
Tables: 52
PG Version: 9.2.6
Connections: 8
Fork/Follow: Available
Rollback: Unsupported
Created: 2013-11-04 09:42 UTC
Region: eu-west-1
Maintenance: not required
=== HEROKU_POSTGRESQL_ORANGE_URL
Plan: Dev
Status: available
Connections: 0
PG Version: 9.2.7
Created: 2013-08-13 20:05 UTC
Data Size: 11.8 MB
Tables: 49
Rows: 7725/10000 (In compliance, close to row limit) - refreshing
Fork/Follow: Unsupported
Rollback: Unsupported
Region: Europe
I keep receiving emails saying that I'm close to the row limit on HEROKU_POSTGRESQL_ORANGE_URL. I'd rather delete it, but I'd like to make sure I'm not going to lose any data. Heroku is not clear about it:
The original database will continue to run (and incur charges) even after the upgrade. If desired, remove it after the upgrade is successful.
But can I be 100% sure that all the data in HEROKU_POSTGRESQL_ORANGE_URL is duplicated in HEROKU_POSTGRESQL_NAVY_URL? If HEROKU_POSTGRESQL_ORANGE_URL were a follower of HEROKU_POSTGRESQL_NAVY_URL, its data size should be about the same as the other's, but it isn't.
So I just need a confirmation.
Thanks
It sounds to me like the upgrade dumped and reloaded the DB. So the new DB is a copy of the old one. If that's the case, it will contain all data from the old one at the time it was copied - but if you kept on adding new data to the old database, that data wouldn't appear in the new one.
I strongly recommend that before dropping the DB you:
Disable access to it except for pg_dump
Dump it with pg_dump (or use Heroku's tools to do that)
... and only then delete it.
That way, if you discover you've made a mistake, you have a dump to restore.
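A minimal sketch of that sequence with the Heroku CLI (command names have changed over the years, and my-app is a placeholder app name):

# Stop traffic to the app while you take the final snapshot
heroku maintenance:on --app my-app
# Capture and download a backup of the old database
heroku pg:backups:capture HEROKU_POSTGRESQL_ORANGE_URL --app my-app
heroku pg:backups:download --app my-app
# Only after verifying the dump, remove the old add-on
heroku addons:destroy HEROKU_POSTGRESQL_ORANGE --app my-app
heroku maintenance:off --app my-app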