Heroku Postgres follower has more tables

I just created a follower for my Heroku Postgres database. The follower seems to have more tables than the 'master'. Why?
$ heroku pg:info
=== HEROKU_POSTGRESQL_XXXX_URL (DATABASE_URL)
Plan: Ronin
Status: Available
Data Size: 3.12 GB
Tables: 56
PG Version: 9.3.4
Connections: 20
Fork/Follow: Available
Rollback: Unsupported
Created: 2014-07-12 21:35 UTC
Followers: HEROKU_POSTGRESQL_YYYY
Maintenance: not required
=== HEROKU_POSTGRESQL_YYYY_URL
Plan: Premium 2
Status: Available
Data Size: 5.05 GB
Tables: 70
PG Version: 9.3.5
Connections: 2
Fork/Follow: Unavailable on followers
Rollback: earliest from 2014-08-20 05:56 UTC
Created: 2014-08-27 05:47 UTC
Data Encryption: In Use
Following: HEROKU_POSTGRESQL_XXXX
Behind By: 72755 commits
Maintenance: not required
Note: My original db plan is now legacy, so I had to create my follower with a different, larger db plan.
My app's operation isn't unduly affected, but I'm curious about the table count discrepancy. Also, if I hot-swap this follower to become the primary, will the table count go from 70 to 56?

As DrColossos said in the comments: your database is behind in commits, which means something is blocking it from applying the upstream changes. You can install the pg-extras plugin and examine your follower database:
$ heroku pg:locks HEROKU_POSTGRESQL_YYYY_URL -a app_name
That should show you some information on locks that could be preventing your database from catching up. If it's still 72k or more commits behind, I imagine you'll find a very old lock in place.
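If you want to inspect the locks yourself, a rough equivalent (assuming the pg-extras output isn't detailed enough for you) is to open a psql session against the follower and join pg_locks to pg_stat_activity, sorted so the oldest transaction holding a lock comes first:
$ heroku pg:psql HEROKU_POSTGRESQL_YYYY_URL -a app_name
-- list every lock on the follower together with the age of the transaction holding it
SELECT l.pid, l.locktype, l.mode, l.granted, a.state,
       now() - a.xact_start AS xact_age, a.query
FROM pg_locks l
JOIN pg_stat_activity a ON a.pid = l.pid
ORDER BY xact_age DESC NULLS LAST;
A very old, still-granted lock from an idle-in-transaction session would be the usual suspect for a follower that cannot catch up.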

Related

PSQL timeline conflict prevents start of master

We had an outage on one of our PSQL 14 clusters (managed by Zalando) because the k8s control plane was unreachable for 30 minutes.
The control plane is now OK, but the master PSQL does not want to start:
LOG,00000,"listening on IPv4 address ""0.0.0.0"", port 5432"
LOG,00000,"listening on IPv6 address ""::"", port 5432"
LOG,00000,"listening on Unix socket ""/var/run/postgresql/.s.PGSQL.5432"""
LOG,00000,"database system was shut down at 2023-01-30 02:51:10 UTC"
WARNING,01000,"specified neither primary_conninfo nor restore_command",,"The database server will regularly poll the pg_wal subdirectory to check for files placed there."
LOG,00000,"entering standby mode"
FATAL,XX000,"requested timeline 5 is not a child of this server's history","Latest checkpoint is at 2/82000028 on timeline 4, but in the history of the requested timeline, the server forked off from that timeline at 0/530000A0."
LOG,00000,"startup process (PID 23007) exited with exit code 1"
LOG,00000,"aborting startup due to startup process failure"
LOG,00000,"database system is shut down"
We can see in archive_status folder:
-rw-------. 1 postgres postgres 0 Jan 30 02:51 000000040000000200000081.ready
-rw-------. 1 postgres postgres 0 Jan 30 02:51 00000005.history.done
Would you know how we can recover safely from this?
I guess switching back to timeline 4 would be enough, as timeline 5 was created after the start of the outage.
The server is being started in standby mode. Remove standby.signal if you want to start the server as a primary.
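A minimal sketch of that, assuming you have a shell in the Postgres pod and that $PGDATA points at the data directory (with the Zalando operator, Patroni normally owns this file, so check its configuration before changing anything by hand):
$ ls "$PGDATA"/standby.signal   # its presence is what forces standby mode
$ rm "$PGDATA"/standby.signal   # remove it so the next start comes up as a primary
$ pg_ctl -D "$PGDATA" restart   # or simply let Patroni restart the instance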

Recover PostgreSQL with pgBarman

I've set up a PostgreSQL DB and I want to back it up.
I have one server with my main DB and one with Barman.
All the setup is working, and I can back up my DB with Barman.
I just don't understand how I can recover my DB to an exact point in time between the backups that I do every day.
barman#ubuntu:~$ barman check main-db-server
WARNING: No backup strategy set for server 'main-db-server' (using default 'exclusive_backup').
WARNING: The default backup strategy will change to 'concurrent_backup' in the future. Explicitly set 'backup_options' to silence this warning.
Server main-db-server:
PostgreSQL: OK
is_superuser: OK
wal_level: OK
directories: OK
retention policy settings: OK
backup maximum age: OK (interval provided: 1 day, latest backup age: 9 minutes, 59 seconds)
compression settings: OK
failed backups: OK (there are 0 failed backups)
minimum redundancy requirements: OK (have 6 backups, expected at least 0)
ssh: OK (PostgreSQL server)
not in recovery: OK
systemid coherence: OK (no system Id available)
archive_mode: OK
archive_command: OK
continuous archiving: OK
archiver errors: OK
And when I back up my DB:
barman#ubuntu:~$ barman backup main-db-server
WARNING: No backup strategy set for server 'main-db-server' (using default 'exclusive_backup').
WARNING: The default backup strategy will change to 'concurrent_backup' in the future. Explicitly set 'backup_options' to silence this warning.
Starting backup using rsync-exclusive method for server main-db-server in /var/lib/barman/main-db-server/base/20210427T150505
Backup start at LSN: 0/1C000028 (00000005000000000000001C, 00000028)
Starting backup copy via rsync/SSH for 20210427T150505
Copy done (time: 2 seconds)
Asking PostgreSQL server to finalize the backup.
Backup size: 74.0 MiB. Actual size on disk: 34.9 KiB (-99.95% deduplication ratio).
Backup end at LSN: 0/1C0000C0 (00000005000000000000001C, 000000C0)
Backup completed (start time: 2021-04-27 15:05:05.289717, elapsed time: 11 seconds)
Processing xlog segments from file archival for main-db-server
00000005000000000000001B
00000005000000000000001C
00000005000000000000001C.00000028.backup
I don't know how to restore my DB to a point in time between two backups :/
Thanks
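A rough sketch of a point-in-time recovery with Barman: pick the newest base backup taken before the time you want, then barman recover with --target-time replays the archived WAL from that backup up to the chosen timestamp. The backup ID, SSH command and destination path below are placeholders for your own values:
$ barman list-backup main-db-server
# choose the latest backup that precedes the target time, e.g. 20210427T090505
$ barman recover \
    --target-time "2021-04-27 12:30:00" \
    --remote-ssh-command "ssh postgres@main-db-server" \
    main-db-server 20210427T090505 /var/lib/postgresql/data
Stop PostgreSQL on the destination before recovering into its data directory, then start it again so it replays the WAL up to the target time.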

PostgreSQL failed with vacuum and autovacuum

Postgres v11.9
There are many errors on Postgres log like this:
2020-09-05 17:35:37 GMT [22464]: #: [6-1] ERROR: uncommitted xmin 636700836 from before xid cutoff 809126794 needs to be frozen
2020-09-05 17:35:37 GMT [22464]: #: [7-1] CONTEXT: automatic vacuum of table "table_name"
Manual vacuum fails with this error too.
What can I do to fix this error?
Export the database with pg_dump.
Create a new database cluster and restore the dump into it.
Remove the original database cluster.
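A minimal sketch of those three steps, assuming the old cluster runs on port 5432, the new one on 5433, and the paths below are placeholders (pg_dumpall is used instead of pg_dump so roles and all databases come along):
$ pg_dumpall -p 5432 -f /backup/full_dump.sql        # 1. export from the damaged cluster
$ initdb -D /var/lib/postgresql/11/new_data          # 2. create a fresh cluster ...
$ pg_ctl -D /var/lib/postgresql/11/new_data -o "-p 5433" start
$ psql -p 5433 -d postgres -f /backup/full_dump.sql  #    ... and restore the dump into it
$ pg_ctl -D /var/lib/postgresql/11/old_data stop     # 3. once verified, retire the old cluster
$ rm -rf /var/lib/postgresql/11/old_data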

Error during Postgres upgrade from version 11 to 12 (old and new pg_controldata WAL segment sizes are invalid or do not match)

We are facing an error during a Postgres upgrade from version 11 to 12.
Please suggest what I can do to resolve this issue.
Error:
Performing Consistency Checks on Old Live Server
------------------------------------------------
Checking cluster versions ok
old and new pg_controldata WAL segment sizes are invalid or do not match
Failure, exiting
bash-4.1$
.....................
command:
/usr/pgsql-12/bin/pg_upgrade --old-datadir /var/lib/pgsql/11/data/ --new-datadir /var/lib/pgsql/12/data/ --old-bindir /usr/pgsql-11/bin/ --new-bindir /usr/pgsql-12/bin/ --check
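For what it's worth, that check compares the "Bytes per WAL segment" value that pg_controldata reports for each data directory. A rough sketch to inspect both (paths taken from the command above): if the values differ, the v12 cluster was most likely initialized with a non-default --wal-segsize and needs to be re-created with a matching one; if both look fine, the failure may instead be pg_upgrade failing to parse the pg_controldata output, and running it with LC_ALL=C is a commonly suggested workaround:
$ /usr/pgsql-11/bin/pg_controldata /var/lib/pgsql/11/data/ | grep 'Bytes per WAL segment'
$ /usr/pgsql-12/bin/pg_controldata /var/lib/pgsql/12/data/ | grep 'Bytes per WAL segment'
# if they differ: empty the v12 data directory and re-initialize it with a matching segment size
$ /usr/pgsql-12/bin/initdb -D /var/lib/pgsql/12/data/ --wal-segsize=16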

Postgres is not accepting commands and Vacuum failed due to missing chunk number error

Version: 9.4.4
Exception while inserting a record in health_status.
org.postgresql.util.PSQLException: ERROR: database is not accepting commands to avoid wraparound data loss in database "db"
Hint: Stop the postmaster and vacuum that database in single-user mode.
As indicated in the above error, I tried logging in to single-user mode and running a full vacuum, but instead received the error below:
PostgreSQL stand-alone backend 9.4.4
backend> vacuum full;
< 2019-11-06 14:26:25.179 UTC > WARNING: database "db" must be vacuumed within 999999 transactions
< 2019-11-06 14:26:25.179 UTC > HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
< 2019-11-06 14:26:25.215 UTC > ERROR: missing chunk number 0 for toast value xxxx in pg_toast_1234
< 2019-11-06 14:26:25.215 UTC > STATEMENT: vacuum full;
I tried to run a plain vacuum, but that leads to another error indicating missing attributes for relid xxxxx:
backend> vacuum;
< 2019-11-06 14:27:47.556 UTC > ERROR: catalog is missing 3 attribute(s) for relid xxxxx
< 2019-11-06 14:27:47.556 UTC > STATEMENT: vacuum;
I tried to do a vacuum freeze for the entire DB, but it leads to the catalog error again after some time.
Furthermore, vacuum freeze on a single table works fine, but when I vacuum all tables, it presumably includes the corrupted one as well and ends with the same error:
backend> vacuum full freeze
< 2019-11-07 08:54:25.958 UTC > WARNING: database "db" must be vacuumed within 999987 transactions
< 2019-11-07 08:54:25.958 UTC > HINT: To avoid a database shutdown, execute a database-wide VACUUM in that database.
You might also need to commit or roll back old prepared transactions.
< 2019-11-07 08:54:26.618 UTC > ERROR: missing chunk number 0 for toast value xxxxx in pg_toast_xxxx
< 2019-11-07 08:54:26.618 UTC > STATEMENT: vacuum full freeze
Is there a way to figure out the corrupted table and a way to restore the integrity of the database so the application can access the rest of the database?
P.S. I do not have a backup to restore from, so deleting the corrupted data or somehow fixing it would be the only solution here.
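To at least identify the damaged table, a hedged sketch: the owner of a TOAST relation can be looked up through pg_class.reltoastrelid, and the relid from the "catalog is missing ... attribute(s)" error can be cast to regclass (pg_toast_1234 and xxxxx are the placeholders from the errors above; in single-user mode keep each statement on one line, since a newline ends the command there):
-- which table owns the damaged TOAST relation?
SELECT oid::regclass AS owning_table FROM pg_class WHERE reltoastrelid = 'pg_toast.pg_toast_1234'::regclass;
-- resolve the relid from the "catalog is missing ... attribute(s)" error to a table name
SELECT xxxxx::regclass;
Once the table is known, a common salvage path is to dump everything else (for example pg_dump with --exclude-table for the damaged table) into a fresh cluster and then try to repair or discard what is left.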