PostgreSQL 9.5 - change to max_connections not being visible to slaves

I've added a pgbouncer process to my master server, so I want to lower the number of connections from 1500 down to 100 or so to free up resources on the master. However, when I change the setting on both the master and the slave, the new value isn't visible to the slave:
2020-01-29 14:59:19 dbr5 postgres[47563]: [4-1] 2020-01-29 14:59:19 EST [47563]: [4-1] user=,db=,app=,client= FATAL: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 1500)
This is after the master has been changed:
master=# show max_connections;
max_connections
-----------------
100
(1 row)
Any clues why the slaves aren't taking the new setting?

The slave needs to know the master's max_connections setting to perform this check, and so the master notifies the slave of changes via a WAL entry.
However, the slave won't read any WAL entries if its current max_connections setting is incompatible with the last known setting on the master.
You should reconfigure the master first, give the corresponding WAL entry a chance to replicate, and then reconfigure the slave.
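For example, the order of operations could look roughly like this (a sketch; the data directory path is a placeholder, and the pg_controldata check is just one way to see which master value the standby currently knows about):
# on the master: lower the limit, then restart so the change is logged to WAL
psql -c "ALTER SYSTEM SET max_connections = 100;"
pg_ctl restart -D /var/lib/pgsql/9.5/data
# on the slave: wait until the new value has replicated; pg_controldata shows
# the last master setting the standby has seen
pg_controldata /var/lib/pgsql/9.5/data | grep max_connections
# max_connections setting:              100
# only then lower max_connections in the slave's postgresql.conf and restart it
pg_ctl restart -D /var/lib/pgsql/9.5/data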

Setting hot_standby = off on the slaves, restarting the master with the new, lower connection limit, confirming that the slave had received the change (by watching its log for the FATAL: the database system is starting up entry), and then switching hot_standby back to on worked. The system now has the new, lower connection limit.

Configuration parameters, when set globally, are never replicated. They can be set:
in postgresql.conf
with ALTER SYSTEM
with command line options when the server process is started
The last option clearly cannot be replicated, and the first two use configuration files which are also not replicated.
This is a feature: you may want configuration parameters to differ in some cases (although it is generally not advisable).
You will have to change the parameter on the standby server.
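For illustration, the three ways look like this (a sketch; the path and value are placeholders, and max_connections needs a restart to take effect in any case):
# 1. in postgresql.conf
max_connections = 100
# 2. with ALTER SYSTEM (written to postgresql.auto.conf)
psql -c "ALTER SYSTEM SET max_connections = 100;"
# 3. as a command line option when starting the server
postgres -D /var/lib/pgsql/9.5/data -c max_connections=100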

Related

How to Reduce max_connections value in Postgres Cluster?

I have a three-node cluster.
Now I want to reduce max_connections from 300 to 100. I have changed the value in postgresql.conf on both the master and the replicas. I restarted my master first, then the other replica nodes. Everything seems OK on the master, but the replicas are shutting down automatically.
Here is the error: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 300)
I have found a solution that requires starting with hot_standby = off.
Is there any other solution rather than this?
Since you are changing max_connections:
Stop all instances and change the max_connections setting in postgresql.conf on all three nodes.
Start the master.
Then start the replicas.
So basically what happened is that we can't start a replica with a lower max_connections value than the primary when hot_standby = on is set.
Although I had updated the primary server's max_connections, the information didn't arrive on the replica side. Normally, the primary server's config changes are propagated through WAL records, so after restarting the primary with the lower max_connections we need to wait for write activity and only then change the replica's max_connections.
But I think this is not a feasible solution.
Better solution (a command-level sketch follows the list):
Start the primary with the lower max_connections.
Start the replicas with hot_standby = off (if a server appears stuck in the starting state, don't wait for the start to complete).
Shut down the replicas.
Start the replicas with hot_standby = on.
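A rough command-level sketch of that sequence (assuming pg_ctl and a data directory of /var/lib/pgsql/data; adjust paths and service commands to your setup):
# 1. primary: lower max_connections in postgresql.conf, then restart it
pg_ctl restart -D /var/lib/pgsql/data
# 2. each replica: set hot_standby = off in postgresql.conf and start it,
#    so it can replay the primary's parameter change without the hot-standby check
pg_ctl start -D /var/lib/pgsql/data
# 3. stop the replica again
pg_ctl stop -D /var/lib/pgsql/data
# 4. set hot_standby = on again and start the replica normally
pg_ctl start -D /var/lib/pgsql/data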

WAL level not sufficient for making an online backup

I have tried to set up database replication on Red Hat Linux 7.0 using PostgreSQL.
I referred to this URL for configuring it: http://blog.terminal.com/postgresql-replication-and-backup-methods/ and completed the steps up to
Configuring the slave server
but at the step
Initial replication
when I executed this command on the master
-bash-4.2$ psql -c "select pg_start_backup('initial_backup');"
I got an error like this:
ERROR: WAL level not sufficient for making an online backup
HINT: wal_level must be set to "archive" or "hot_standby" at server start.
So kindly let me know where we are going wrong.
In your master PostgreSQL server's configuration file <PG_DATA>/postgresql.conf there is a parameter called wal_level; it should be set to either "archive" or "hot_standby".
Both tell Postgres to keep WAL segments for the replication server. The difference between the two is rather simple:
"hot_standby" means that your replication server will allow connections for SELECT statements;
"archive" means you don't want that possibility.
More to read in this tutorial on streaming replication
Also keep in mind that a change of the wal_level property needs a server restart to take effect.
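A minimal sketch of the relevant master-side settings (parameter names as in 9.x; max_wal_senders is shown only because streaming replication also needs it, and the values are placeholders):
# <PG_DATA>/postgresql.conf on the master
wal_level = hot_standby     # or 'archive' if the standby should not serve queries
max_wal_senders = 3
# then restart, e.g.:
# pg_ctl restart -D <PG_DATA>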

Primary and standby server at different timelines in postgres

I am very new to Postgres and, being new, I got stuck at a point and need some help; please pardon me if you find it silly.
I am setting up pgpool HA, and at the Postgres level I have streaming replication between 3 nodes of postgresql-9.5 - 1 master and 2 slaves.
I was trying to configure auto failover, but when I switched back to my original master and restarted the Postgres service, I am getting the following errors:
slave 1-highest timeline 1 of the primary is behind recovery timeline 11
slave 2-highest timeline 1 of the primary is behind recovery timeline 10
slave 3-highest timeline 1 of the primary is behind recovery timeline 3
I tried deleting the pg_xlog files on the slaves and copying all the files from the master's pg_xlog into the slaves, and then did an rsync.
I also did a pg_rewind, but it says:
target server needs to use either data checksums or wal_log_hints = on
(I have wal_log_hints = on set in postgresql.conf already)
I've tried doing a pg_basebackup, but since the database server on the slaves is still starting up, it's not able to connect to the server.
Is there any way to bring the master and the slave at a same timeline?
In my case it happened because (experimentally) I updated the standby database tables, and when I set up master-standby streaming replication again I got the same errors.
So once again I cleaned out the whole standby database directory and migrated the master database using a command like
"pg_basebackup -P -R -X stream -c fast -h 10.10.40.105 -U postgres -D standby/"
I think something is wrong in your pgpool configuration. What tool have you been using for management of replication and master-slave control? Is it postmaster or repmgr?
I was trying to configure pgpool with 3 data nodes using a tutorial from http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool and have done it correctly.
Also, you can learn about auto failover here.
(This question is obviously a duplicate of this one, so I'll repeat the answer here as well.)
I'm not sure what exactly you mean by "when I switched back to my original master", but it looks like you are doing the worst possible thing in PostgreSQL streaming replication - introducing a second master.
The most important thing you should know about PostgreSQL replication is that once a failover has been performed, you cannot simply "switch back to the original master" - there is now a new master in the cluster, and the existence of two masters will cause damage.
After a slave is promoted to master, the only way for you to re-join the old master is to:
Destroy it (delete the data directory);
Join it as a slave.
If you want it to be master again you'll continue with the following:
Let it run for a while as a slave so that it can sync the data;
Kill temporary master and failover to old master;
Rejoin temporary master again as a slave.
You cannot simply switch master servers! A master can be created ONLY by failover (promoting a slave).
You should also know that whenever you are performing failover (whenever the master is changed), all slaves (except for the one that is promoted) need to be reconfigured to target the new master.
I suggest you reading this tutorial - it'll help.
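For example, re-joining the old master as a slave could look roughly like this (a sketch; the host name, replication user and data directory are placeholders):
# on the old master, with PostgreSQL stopped
rm -rf /var/lib/pgsql/9.5/data/*
# take a fresh base backup from the current master; -R writes a recovery.conf
# pointing at it, so this node comes back up as a streaming slave
pg_basebackup -h current-master -U replication -D /var/lib/pgsql/9.5/data -X stream -P -R
pg_ctl start -D /var/lib/pgsql/9.5/data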

postgresql streaming replication -- continuous archiving?

I'm trying to set up streaming replication, but for some reason when I update the database on the master, the changes are not reflected on the standby server UNTIL I restart the postgresql service on the master. (I see new xlog files on the master server, but these do not get synced to the standby server.) When I restart the service on the master, I finally see new files added to my shared wal_archive folder.
The only way I can make it sync automatically is if I set archive_timeout.
Master:
wal_level = 'hot_standby' # minimal, archive, hot_standby, or logical
archive_mode = on # allows archiving to be done
# (change requires restart)
archive_command = 'copy "%p" "\\\\VBOXSVR\\wal_archive\\%f"'
max_wal_senders = 3 # max number of walsender processes
# (change requires restart)
wal_keep_segments = 10 # in logfile segments, 16MB each; 0 disables
pg_hba.conf
host replication postgres slaveip/32 trust
It sounds like you're using archive-based replication without streaming. So it's only replicating when a WAL segment is finished and a new one is opened, which happens:
When the server does a checkpoint before a clean shutdown
When a WAL segment is filled by write activity and a new one is needed
At archive_timeout time
If you want continuous replication you need to use streaming replication. See the manual for details. This involves setting a connection string in your downstream server's recovery.conf so it can connect directly to the upstream master to receive new writes in near-real-time.
You should still leave archive based replication enabled, because this allows the replica to recover if it's disconnected for a while. It's also useful for point-in-time recovery.
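A sketch of what the standby's recovery.conf could contain, reusing the shared wal_archive folder from the question (the connection string, user and paths are placeholders):
standby_mode = 'on'
primary_conninfo = 'host=masterip port=5432 user=postgres'
# keep archive recovery as a fallback for when streaming falls behind
restore_command = 'copy "\\\\VBOXSVR\\wal_archive\\%f" "%p"'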

How do I fix a PostgreSQL 9.3 Slave that Cannot Keep Up with the Master?

We have a master-slave replication configuration as follows.
On the master:
postgresql.conf has replication configured as follows (commented line taken out for brevity):
max_wal_senders = 1
wal_keep_segments = 8
On the slave:
Same postgresql.conf as on the master. recovery.conf looks like this:
standby_mode = 'on'
primary_conninfo = 'host=master1 port=5432 user=replication password=replication'
trigger_file = '/tmp/postgresql.trigger.5432'
When this was initially set up, we performed some simple tests and confirmed the replication was working. However, when we did the initial data load, only some of the data made it to the slave.
The slave's log is now filled with messages that look like this:
< 2015-01-23 23:59:47.241 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:47.241 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:52.259 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:52.260 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
< 2015-01-23 23:59:57.270 EST >LOG: started streaming WAL from primary at F/52000000 on timeline 1
< 2015-01-23 23:59:57.270 EST >FATAL: could not receive data from WAL stream: ERROR: requested WAL segment 000000010000000F00000052 has already been removed
After some analysis and help on the #postgresql IRC channel, I've come to the conclusion that the slave cannot keep up with the master. My proposed solution is as follows.
On the master:
Set max_wal_senders=5
Set wal_keep_segments=4000. Yes, I know it is very high, but I'd like to monitor the situation and see what happens. I have room on the master.
On the slave:
Save configuration files in the data directory (i.e. pg_hba.conf pg_ident.conf postgresql.conf recovery.conf)
Clear out the data directory (rm -rf /var/lib/pgsql/9.3/data/*). This seems to be required by pg_basebackup.
Run the following command:
pg_basebackup -h master -D /var/lib/pgsql/9.3/data --username=replication --password
Am I missing anything ? Is there a better way to bring the slave up-to-date w/o having to reload all the data ?
Any help is greatly appreciated.
The two important options for dealing with the WAL for streaming replication:
wal_keep_segments should be set high enough to allow a slave to catch up after a reasonable lag (e.g. high update volume, slave being offline, etc...).
archive_mode enables WAL archiving which can be used to recover files older than wal_keep_segments provides. The slave servers simply need a method to retrieve the WAL segments. NFS is the simplest method, but anything from scp to http to tapes will work so long as it can be scripted.
# on master
archive_mode = on
archive_command = 'cp %p /path_to/archive/%f'
# on slave
restore_command = 'cp /path_to/archive/%f "%p"'
When the slave can't pull the WAL segment directly from the master, it will attempt to use the restore_command to load it. You can configure the slave to automatically remove segments using the archive_cleanup_command setting.
If the slave comes to a situation where the next WAL segment it needs is missing from both the master and the archive, there will be no way to consistently recover the database. The only reasonable option then is to scrub the server and start again from a fresh pg_basebackup.
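For example, pg_archivecleanup (shipped with PostgreSQL) can be used for that cleanup; a sketch, reusing the archive path from above:
# on the slave, recovery.conf
restore_command = 'cp /path_to/archive/%f "%p"'
archive_cleanup_command = 'pg_archivecleanup /path_to/archive %r'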
You can also configure replication slots, which make PostgreSQL keep the WAL segments needed by every replica registered in a slot.
Read more at https://www.percona.com/blog/2018/11/30/postgresql-streaming-physical-replication-with-slots/
On the master server, run
SELECT pg_create_physical_replication_slot('standby_slot');
On the slave server, add the following line to recovery.conf:
primary_slot_name = 'standby_slot'
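Note that physical replication slots only exist from PostgreSQL 9.4 onwards, so on 9.3 this would require an upgrade. Once the slave reconnects, you can check on the master that the slot is active and how much WAL it is pinning (a sketch using the slot name created above):
SELECT slot_name, active, restart_lsn FROM pg_replication_slots;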
Actually, to recover you don't have to drop the whole DB and start from scratch. Since the master has the up-to-date data, you can do the following to recover the slave and bring it back in sync:
psql -c "select pg_start_backup('initial_backup');"
rsync -cva --inplace --exclude=*pg_xlog* <data_dir> slave_IP_address:<data_dir>
psql -c "select pg_stop_backup();"
Note:
1. The slave has to be stopped first (service stop).
2. pg_start_backup puts the master into online-backup mode; it does not block normal operation.
3. The master can keep serving queries while the files are copied.
4. Bring the slave back up at the end of the steps.
I did this in prod; it works perfectly for me.
The slave and master are in sync and there is no data loss.
You will get that error if the wal_keep_segments setting is too low.
When you set the value for wal_keep_segments, consider how long the pg_basebackup takes.
Remember that segments are generated about every 5 minutes, so if the backup takes an hour, you need at least 12 segments saved. At 2 hours, you need 24, etc. I would set the value to about 12.2 segments/hour of backup.
As Ben Grimm suggested in the comments, this is a question of making sure to set segments to the maximum possible value to allow the slave to catch up.
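As a rough worked example using the figures above (which depend entirely on your write volume, so treat them as placeholders): a backup that takes about 2 hours at roughly 12 segments per hour needs on the order of 24 retained segments, so you might configure something like:
# postgresql.conf on the master; ~24 segments for a ~2 hour backup, plus some margin
wal_keep_segments = 32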