I have followed the PostgreSQL wiki binary replication tutorial and cannot get the wal_sender and wal_receiver processes to start on the master or slave server. I'm not seeing any relevant information in the log files to help. I'm able to connect via psql from my slave to my master server, so I'm relatively certain the connection configuration for SR has been set up correctly. Any pointers or tips on setting up SR without log shipping would be wonderful.
Assuming you have PG installed and everything else in place, the settings are:
On Master:
add wal_level = hot_standby and max_wal_senders = 5 to postgresql.conf
add host replication [insert uname] [insert slave ip]/32 trust to pg_hba.conf
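For reference, a minimal sketch of those master-side entries; the user name and slave IP below are placeholders, not values from the tutorial:
# postgresql.conf on the master
wal_level = hot_standby
max_wal_senders = 5
# pg_hba.conf on the master (replace user and slave IP with your own)
host    replication    eggie5    192.168.0.20/32    trust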
On Slave:
create a recovery.conf file and add standby_mode = 'on' and primary_conninfo = 'host=localhost port=5432 user=eggie5 password=asdf'
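So the slave's recovery.conf ends up looking like this (the conninfo is whatever points at your master; host=localhost only makes sense if master and slave share a machine):
standby_mode = 'on'
primary_conninfo = 'host=localhost port=5432 user=eggie5 password=asdf'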
Create baseline:
This is the hard part. You need to get a "snapshot" of the master data (directory) and get it to the slave so they start in sync. You can do this any number of ways: see this page for simple instructions: http://eggie5.com/15-setting-up-pg9-streaming-replication
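For 9.0 (which has no pg_basebackup yet) one way to take that snapshot, sketched with placeholder host and paths, is roughly:
psql -c "SELECT pg_start_backup('baseline');"
rsync -av --exclude pg_xlog --exclude postmaster.pid /var/lib/postgresql/9.0/main/ slave:/var/lib/postgresql/9.0/main/
psql -c "SELECT pg_stop_backup();"
On 9.1 and later, a single pg_basebackup run from the slave does the same job.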
I had the same problem. I traced the problem back to having used the Postgres-9.0 package that Martin Pitt provides (which I have used since Ubuntu 10.10 doesn't have Postgres-9* in its package repository yet). I'm guessing that he didn't build the package with streaming replication support.
I then downloaded and installed the binary package that PostgreSQL provides, and streaming replication started to work smoothly.
Is there a way to run cleanups on the master server for archive files that are old and no longer needed by the slave server for streaming replication?
You can use the recovery parameter archive_cleanup_command together with the pg_archivecleanup command:
archive_cleanup_command = 'pg_archivecleanup /var/lib/postgresql/pg_log_archive/main %r'
That command assumes that the WAL archives are accessible in /var/lib/postgresql/pg_log_archive/main on the standby server.
Note that for PostgreSQL versions older than v12, this has to be specified in recovery.conf.
If you don't have an easy way to access the WAL archives from the standby, you could use an NFS mount.
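Put together, the standby-side settings might look roughly like this, using the archive path assumed above (restore_command is only needed if the standby also replays from the archive); they go in recovery.conf before v12 and in postgresql.conf from v12 on:
restore_command = 'cp /var/lib/postgresql/pg_log_archive/main/%f %p'
archive_cleanup_command = 'pg_archivecleanup /var/lib/postgresql/pg_log_archive/main %r'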
I'm trying to set up a basic master-slave configuration using streaming replication for Postgres 10 and Docker.
Since the official Docker image provides a docker-entrypoint-initdb.d folder for placing initialization scripts, I thought it would be a swell idea to place my preparation code there.
What I'm trying to do is automate how the database is restored before starting the slave in standby mode, so I run
rm -rf /var/lib/postgresql/data/* && pg_basebackup -d 'host=postgres-master port=5432 user=foo password=foo' -D /var/lib/postgresql/data/
and this succeeds.
Then the server is shut down and restarted as per the Docker initialization script, which pops up a message saying
database system identifier differs between the primary and standby
I've been searching online for a while now, and the only two explanations I found are that I either have a misconfigured recovery.conf file, which looks like this
standby_mode = 'on'
primary_conninfo = 'host=postgres-master port=5432 user=foo password=foo'
trigger_file = '/tmp/postgresql.trigger'
Where the connection string is the same one I used for the base backup.
The second explanation circulating is that the backup command could be messed up, but the only thing I add to the data folder after the backup is the recovery.conf file.
Anyone have any idea where I'm messing up?
Should I just go for repmgr and call it a day?
Thanks in advance
To answer my own question: the issue lay in how the Dockerfile entrypoint scripts were called. They would end up starting the instance as a master or a slave according to environment variables that I had not set properly.
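For anyone hitting the same thing, this is a minimal sketch of the kind of entrypoint logic involved; the REPLICATION_ROLE variable and the file paths are illustrative, not part of the official image:
#!/bin/bash
# decide at container start whether this node is a master or a standby
if [ "$REPLICATION_ROLE" = "slave" ]; then
    rm -rf /var/lib/postgresql/data/*
    pg_basebackup -d 'host=postgres-master port=5432 user=foo password=foo' -D /var/lib/postgresql/data/
    cp /tmp/recovery.conf /var/lib/postgresql/data/recovery.conf   # recovery.conf prepared elsewhere in the image
fi
exec docker-entrypoint.sh postgres
If REPLICATION_ROLE never reaches the container (for example, it isn't passed through docker-compose), the node quietly initializes its own cluster and you end up with the "database system identifier differs" error.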
I have tried to set up database replication on Red Hat Linux 7.0 using PostgreSQL.
I referred to this URL for configuring it: http://blog.terminal.com/postgresql-replication-and-backup-methods/ and completed the steps up to
Configuring the slave server
but at the step
Initial replication
when I executed this command on the master
-bash-4.2$ psql -c "select pg_start_backup('initial_backup');"
I got an error like this
ERROR: WAL level not sufficient for making an online backup
HINT: wal_level must be set to "archive" or "hot_standby" at server start.
So kindly let me know where we went wrong.
In your master PostgreSQL server's configuration file, <PG_DATA>/postgresql.conf, there is a parameter called wal_level; it should be set to either "archive" or "hot_standby".
Both tell Postgres to keep WAL segments for a replication server. The difference between the two is rather simple:
"hot_standby" means that your replication server will allow connections for SELECT statements
"archive" means you don't want that possibility.
There is more to read in this tutorial on streaming replication.
Also keep in mind that changing the wal_level property requires a server restart to take effect.
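A minimal sketch of the change, assuming the standard locations (the service name depends on how PostgreSQL was installed):
# in <PG_DATA>/postgresql.conf on the master
wal_level = hot_standby      # or 'archive' if the standby will not serve read-only queries
# then restart so the new wal_level takes effect
sudo systemctl restart postgresql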
I am very new to Postgres and, being new, I got stuck at a point and need some help; please pardon me if you find it silly.
I am setting up pgpool HA, and at the Postgres level I have streaming replication between 3 nodes of postgresql-9.5: 1 master and 2 slaves.
I was trying to configure auto failover, but when I switched back to my original master and restarted the Postgres service, I got the following errors:
slave 1-highest timeline 1 of the primary is behind recovery timeline 11
slave 2-highest timeline 1 of the primary is behind recovery timeline 10
slave 3-highest timeline 1 of the primary is behind recovery timeline 3
I tried deleting the pg_xlog files on the slaves and copying all the files from the master's pg_xlog into the slaves, and then did an rsync.
I also did a pg_rewind but it says:
target server needs to use either data checksums or wal_log_hints = on
(I have wal_log_hints = on set in postgresql.conf already)
I've tried doing a pg_basebackup, but since the database server on the slaves is still starting up, it's not able to connect to the server.
Is there any way to bring the master and the slaves onto the same timeline?
In my case, it happened because (experimentally) I had updated the standby database tables, and when I set up master-standby streaming replication again I got the same errors.
So once again I cleaned out the whole standby data directory and copied over the master database using a command like
"pg_basebackup -P -R -X stream -c fast -h 10.10.40.105 -U postgres -D standby/"
I think something is wrong in your pgpool configuration. What tool have you been using for management of replication and master-slave control? Is it postmaster or repmgr?
I configured pgpool with 3 data nodes using a tutorial from http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool and it worked correctly.
You can also learn about auto failover there.
(This question is obviously a duplicate of this one, so I'll repeat the answer here as well.)
I'm not sure what exactly you mean by "when I switched back to my original master", but it looks like you are doing the worst possible thing in PostgreSQL streaming replication: introducing a second master.
The most important thing you should know about PostgreSQL replication is that once a failover is performed, you cannot simply "switch back to the original master": there is now a new master in the cluster, and the existence of two masters will cause damage.
After a slave is promoted to master, the only way for you to re-join the old master is to:
Destroy it (delete the data directory);
Join it as a slave (a command sketch is at the end of this answer).
If you want it to be the master again, you'll continue with the following:
Let it run for a while as a slave so that it can sync the data;
Kill the temporary master and fail over to the old master;
Rejoin the temporary master as a slave.
You cannot simply switch master servers! A master can be created ONLY by failover (promoting a slave).
You should also know that whenever you are performing failover (whenever the master is changed), all slaves (except for the one that is promoted) need to be reconfigured to target the new master.
I suggest reading this tutorial; it'll help.
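A minimal sketch of re-joining the old master as a slave; host names, user, and data directory are placeholders, and tools like repmgr or pg_rewind can automate parts of this:
# on the old master, with PostgreSQL stopped
rm -rf /var/lib/postgresql/9.5/main/*
pg_basebackup -h new-master -U replication -D /var/lib/postgresql/9.5/main -X stream -R -P
# -R writes a recovery.conf pointing at the new master
pg_ctl -D /var/lib/postgresql/9.5/main start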
I have three servers. One is running pgpool; the other two are in master-slave streaming replication. When installing pgpool, it was suggested that I install pgpool_regclass on my database servers as well. There was no problem installing it on the master node, but when I tried to do the same on the slave, I got the error ERROR: cannot execute CREATE EXTENSION in a read-only transaction.
I think it's because the slave is a hot standby, and SELECT pg_is_in_recovery(); returns true. So I wonder whether I am supposed to install pgpool_regclass on the slave or not. It seems not, but the pgpool docs say I should install it on every database pgpool is going to access.
I found the cause. Delete the recovery.conf file in the slave's data directory, and then run pgpool_regclass. Otherwise, the slave is in recovery mode and cannot execute write commands.