Replication on the postgres DB breaks when VACUUM job runs

We have PostgreSQL 9.1.3 running on SUSE Linux 4.3.4, 64-bit. There is a master-slave setup, and streaming replication has been set up between the two. We have a cron job that runs the VACUUM command on the master database every Friday. My observation is that replication breaks within half an hour of the time the VACUUM job runs.
The error in the postgres logs on the slave is:
FATAL: could not receive data from WAL stream: FATAL: requested WAL segment 00000003000013500000001A has already been removed
Could you please help me understand the relation between the two, and how I can prevent replication from breaking?
Please note I am a novice in Postgres.

Related

How to exit out of database recovery mode (currently locked in read-only mode)

A slave database was set up some time ago for the purpose of backing up or replicating a remote database. However, I can no longer write to the database using a Delphi-based ETL (the ETL works for another database pair, but to date has never been used for this particular pair). The replication database was set up by somebody else who has since left the company. I am reasonably sure it was set up as a replication database; however, the employee who left told me that replication never worked, for unrelated reasons. Using the ETL we can (using SQL queries) read from the one database and write back to the replication database, or at least we should be able to, but it is currently read-only.
I have tried:
Maintenance such as VACUUM
Attempt to drop tables and the entire database
Restore a full backup from the master database
None of these work; each time I am told the database is read-only.
I have looked at postgresql.conf and see that hot_standby is enabled, so I think (but am not 100% certain) that the database is in some sort of replication mode (I've never touched replication as supported by Postgres, so I wouldn't know).
I have checked permissions in pg_hba.conf and see there are some credentials in there for replication. I am not sure whether this activates "replication mode" for the database, or simply means these credentials are for replication only.
I have been through months' worth of log files (this has not been working since our IT department upgraded the entire network about 5 months ago). I see the log file contents shown below, repeated over and over with nothing else for months. Note that the IP address shown below is listed in the pg_hba.conf file, so the credentials are valid.
The database is in recovery mode, as I have found by using:
select pg_is_in_recovery();
This explains to me why it's read-only, but why can I not restore databases, or simply dump the entire database and start again (it's a backup, so losing/restoring it is not an issue)?
I was tempted to try modifying the recovery.conf file (which exists) but I read/believe that once recovery has been initiated (which in my case it has) modifying the file will have no effect.
I'm using a legacy version of Postgres: 9.2.9
Any help here would be greatly appreciated, as I have been working solidly on this for more than a day now.
Log File entry (sample):
FATAL: could not connect to the primary server:
FATAL: no pg_hba.conf entry for replication connection from host "192.168.20.2", user "postgres", SSL off
FATAL: could not connect to the primary server: server closed the connection unexpectedly
This probably means the server terminated abnormally before or while processing the request.
A couple of options would work for me:
Convert the database from being a read-only replication database to a standard read/write database, or
Dump/drop the entire database so I can create a new one with write capabilities.
It looks like the two database clusters have been set up for replication, but configuration changes on one of the machines broke the replication (changed pg_hba.conf on the primary, changed IP addresses, …).
Here is how to reach each of your desired solutions:
Bringing the standby out of recovery mode: Run
/path/to/pg_ctl promote -D /path/to/data/directory
on the standby as operating system user postgres.
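To confirm the promotion worked, you can re-run the check from above; once the standby has left recovery it should return false (a quick verification, not part of the original answer):
select pg_is_in_recovery();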
Nuking the standby: Kill all PostgreSQL processes, then remove the data directory on the standby with rm -rf (or the equivalent on your operating system).
Then use initdb to create a new database cluster in the same location.
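A minimal sketch of that sequence, run as the postgres user and assuming the data directory is /var/lib/pgsql/9.2/data (both path and version are assumptions; adjust for your installation):
pg_ctl stop -m fast -D /var/lib/pgsql/9.2/data   # stop the standby first
rm -rf /var/lib/pgsql/9.2/data                   # remove the old cluster
initdb -D /var/lib/pgsql/9.2/data                # create a fresh cluster in the same location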

PostgreSQL PITR

I have a master\server setup with pgpool and postgres 9.5. Both servers are running on centOS7.
I wanted to setup a point in time recovery with base backups every saturday, eliminating the old xlogs.
The server is archiving the xlogs with success on a external filesystem.
But when I try to execute the basebackup command it gives the following error:
pg_basebackup: could not connect to server: FATAL: database "replication" does not exist.
You seem to be missing an explicit HBA record for replication, because specifying all does not cover replication connections:
host replication postgres 127.0.0.1/0 trust
The value replication specifies that the record matches if a replication connection is requested (note that replication connections do not specify any particular database). Otherwise, this is the name of a specific PostgreSQL database.
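After adding that record, reload the server configuration so it takes effect; one way to do that (as a database superuser):
psql -c "SELECT pg_reload_conf();"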

PostgreSQL DB after replication inaccessible - "role "postgres" does not exist"

We have master-slave replication on
PostgreSQL 9.4.9
CentOS 6.8
and until today we've had a beautiful time with our replication between our two roughly identical servers. But today I ran VACUUM FULL (on the master, of course), which destroyed replication (as expected). That was not supposed to be a big deal, as we have turned the replication off and back on many times before. But this time it was different.
After executing our many-times-proven script (basically pg_start_backup(), a full rsync of the data/ directory (with some conf excludes), and pg_stop_backup()), the synchronization looked OK, but the slave DB was no longer accessible (even read-only) via psql. The error reads:
psql: FATAL: could not open file "global/12745": No such file or directory
After a couple of re-runs I gave up and created an empty global/12745 to see what would happen, but instead I always get
psql: FATAL: role "postgres" does not exist
Actually it seems that no role we have on the master exists on the slave DB, and this is still true even after disabling replication.
So now I have no idea how to even access the slave database.
At the same time, the master DB has no such issue, and "postgres" (or any other user we have) is functioning there perfectly.
I made many attempts, including complete removal of the /var/lib/pgsql/9.4 directory and a reinstall of the RPMs with initdb. (A fresh empty DB works fine on the slave.)
What could have gone wrong? Has my primary DB somehow become "non-replicable"? That would be a pity, as this is our primary means of backup.
Any help is greatly appreciated. Thanks a lot.

How do I fix a PostgreSQL 9.3 Slave that Cannot Keep Up with the Master?

We have a master-slave replication configuration as follows.
On the master:
postgresql.conf has replication configured as follows (commented lines taken out for brevity):
max_wal_senders = 1
wal_keep_segments = 8
On the slave:
Same postgresql.conf as on the master. recovery.conf looks like this:
standby_mode = 'on'
primary_conninfo = 'host=master1 port=5432 user=replication password=replication'
trigger_file = '/tmp/postgresql.trigger.5432'
When this was initially set up, we performed some simple tests and confirmed the replication was working. However, when we did the initial data load, only some of the data made it to the slave.
The slave's log is now filled with messages that look like this:
< 2015-01-23 23:59:47.241 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:47.241 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:52.259 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:52.260 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:57.270 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:57.270 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
After some analysis and help on the #postgresql IRC channel, I've come to the conclusion that the slave cannot keep up with the master. My proposed solution is as follows.
On the master:
Set max_wal_senders=5
Set wal_keep_segments=4000. Yes, I know it is very high, but I'd like to monitor the situation and see what happens. I have room on the master.
On the slave:
Save the configuration files from the data directory (i.e. pg_hba.conf, pg_ident.conf, postgresql.conf, recovery.conf)
Clear out the data directory (rm -rf /var/lib/pgsql/9.3/data/*). This seems to be required by pg_basebackup.
Run the following command:
pg_basebackup -h master -D /var/lib/pgsql/9.3/data --username=replication --password
Am I missing anything? Is there a better way to bring the slave up to date without having to reload all the data?
Any help is greatly appreciated.
The two important options for dealing with WAL for streaming replication are:
wal_keep_segments should be set high enough to allow a slave to catch up after a reasonable lag (e.g. high update volume, the slave being offline, etc.).
archive_mode enables WAL archiving, which can be used to recover files older than wal_keep_segments retains. The slave servers simply need a method to retrieve the WAL segments. NFS is the simplest method, but anything from scp to http to tapes will work, so long as it can be scripted.
# on master
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'
# on slave
restore_command = 'cp /path_to/archive/%f "%p"'
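If the archive is not on a shared filesystem, the same pattern works over the network; a sketch using scp (the standby host name and passwordless ssh access are assumptions):
# on master
archive_command = 'scp %p standby:/path_to/archive/%f'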
When the slave can't pull the WAL segment directly from the master, it will attempt to use the restore_command to load it. You can configure the slave to automatically remove segments using the archive_cleanup_command setting.
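A minimal sketch of that cleanup, in the slave's recovery.conf and assuming the same archive path as above (pg_archivecleanup ships with PostgreSQL):
archive_cleanup_command = 'pg_archivecleanup /path_to/archive %r'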
If the slave comes to a situation where the next WAL segment it needs is missing from both the master and the archive, there will be no way to consistently recover the database. The only reasonable option then is to scrub the server and start again from a fresh pg_basebackup.
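For that rebuild, note that pg_basebackup can stream the WAL it needs while the backup runs, so the new copy is self-contained (the -X stream option, available in 9.3; not part of the original answer):
pg_basebackup -h master -D /var/lib/pgsql/9.3/data -X stream --username=replication --password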
You can configure replication slots so that Postgres keeps the WAL segments needed by the replica attached to the slot.
Read more at https://www.percona.com/blog/2018/11/30/postgresql-streaming-physical-replication-with-slots/
On the master server, run
SELECT pg_create_physical_replication_slot('standby_slot');
On the slave server, add the following line to recovery.conf
primary_slot_name = 'standby_slot'
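On versions that support slots (9.4 and later), you can check on the master that the slave is attached and consuming its slot (a quick verification, not from the original answer):
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;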
Actually, to recover you don't have to drop the whole DB and start from scratch. Since the master has up-to-date data files, you can do the following to recover the slave and bring the two back in sync:
psql -c "select pg_start_backup('initial_backup');"
rsync -cva --inplace --exclude=*pg_xlog* <data_dir> slave_IP_address:<data_dir>
psql -c "select pg_stop_backup();"
Note:
1. the slave has to be shut down first (service stop)
2. pg_start_backup() puts the master into backup mode until pg_stop_backup() is run
3. the master can continue serving queries in the meantime
4. bring the slave back up at the end of the steps
I did this in prod; it works perfectly for me.
The slave and master are in sync and there is no data loss.
You will get that error if the wal_keep_segments setting is too low.
When you set the value for wal_keep_segments, consider: how long does the pg_basebackup take?
Remember that segments are generated about every 5 minutes, so if the backup takes an hour, you need at least 12 segments saved. At 2 hours, you need 24, etc. I would set the value to about 12.2 segments/hour of backup.
As Ben Grimm suggested in the comments, this is a question of making sure to set segments to the maximum possible value to allow the slave to catch up.

PostgreSQL - using log shipping to incrementally update a remote read-only slave

My company's website uses a PostgreSQL database. In our data center we have a master DB and a few read-only slave DBs, and we use Londiste for continuous replication between them.
I would like to set up another read-only slave DB for reporting purposes, and I'd like this slave to be in a remote location (outside the data center). This slave doesn't need to be 100% up to date. If it's up to 24 hours old, that's fine. Also, I'd like to minimize the load I'm putting on the master DB. Since our master DB is busy during the day and idle at night, I figure a good idea (if possible) is to get the reporting slave caught up once each night.
I'm thinking about using log shipping for this, as described on
http://www.postgresql.org/docs/8.4/static/continuous-archiving.html
My plan is:
Set up WAL archiving on the master DB
Produce a full DB snapshot and copy it to the remote location
Restore the DB and get it caught up
Go into steady state where:
DAYTIME -- the DB falls behind but people can query it
NIGHT -- I copy over the day's worth of WAL files and get the DB caught up
Note: the key here is that I only need to copy a full DB snapshot one time. Thereafter I should only have to copy a day's worth of WAL files in order to get the remote slave caught up again.
Since I haven't done log shipping before, I'd like some feedback/advice.
Will this work? Does PostgreSQL support this kind of repeated recovery?
Do you have other suggestions for how to set up a remote semi-fresh read-only slave?
thanks!
--S
Your plan should work.
As Charles says, warm standby is another possible solution. It's supported since 8.2 and has relatively low performance impact on the primary server.
Warm Standby is documented in the Manual: PostgreSQL 8.4 Warm Standby
The short procedure for configuring a standby server is as follows. For full details of each step, refer to previous sections as noted.
Set up primary and standby systems as near identically as possible, including two identical copies of PostgreSQL at the same release level.
Set up continuous archiving from the primary to a WAL archive located in a directory on the standby server. Ensure that archive_mode, archive_command and archive_timeout are set appropriately on the primary (see Section 24.3.1).
Make a base backup of the primary server (see Section 24.3.2), and load this data onto the standby.
Begin recovery on the standby server from the local WAL archive, using a recovery.conf that specifies a restore_command that waits as described previously (see Section 24.3.3).
To achieve only nightly syncs, your archive_command should exit with a non-zero exit status during daytime.
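A minimal sketch of such a gated setup (the script path, archive location, and the 22:00 to 06:00 window are assumptions, not from the original answer). In postgresql.conf on the primary:
archive_command = '/usr/local/bin/archive_at_night.sh %p %f'
and the script itself:
#!/bin/sh
# archive_at_night.sh: ship WAL only between 22:00 and 06:00;
# a non-zero exit makes PostgreSQL keep the segment and retry later
hour=$(date +%H)
if [ "$hour" -ge 22 ] || [ "$hour" -lt 6 ]; then
    cp "$1" "/path_to/archive/$2"
else
    exit 1
fi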
Additional information:
Postgres Wiki about Warm Standby
Blog Post Warm Standby Setup
9.0's built-in WAL streaming replication is designed to accomplish something that should meet your goals -- a warm or hot standby that can accept read-only queries. Have you considered using it, or are you stuck on 8.4 for now?
(Also, the upcoming 9.1 release is expected to include an updated/rewritten version of pg_basebackup, a tool for creating the initial backup point for a fresh slave.)
Update: PostgreSQL 9.1 will include the ability to pause and resume streaming replication with a simple function call on the slave.
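On 9.1 that pair of calls looks like this, run on the slave (for illustration; the functions were renamed to pg_wal_replay_pause/resume in PostgreSQL 10):
SELECT pg_xlog_replay_pause();   -- stop applying incoming WAL
SELECT pg_xlog_replay_resume();  -- catch up again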
