How to Reduce max_connections value in Postgres Cluster? - postgresql

I have a three-node cluster.
Now, I want to reduce the max_connections setting from 300 to 100. I have changed the value in the postgresql.conf file on both the master and the replicas. I restarted my master first, then the other replica nodes. Everything seems OK on the master, but the replicas are shutting down automatically.
Here is the error: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 300)
I have found a solution where I need to start with hot_standby = off.
Is there any solution other than this?

As you are changing max_connections:
Stop all instances and change the max_connections setting in postgresql.conf on all three nodes.
Start the master.
Then start the replicas (a sketch of the sequence follows below).
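A minimal sketch of that sequence, assuming pg_ctl is on the PATH and a data directory of /var/lib/postgresql/data on each node (both are assumptions; adjust for your layout):

# on every node, stop the server
pg_ctl -D /var/lib/postgresql/data stop -m fast
# in postgresql.conf on all three nodes
max_connections = 100
# then bring the master up first, followed by the replicas
pg_ctl -D /var/lib/postgresql/data start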

So basically, what happened is that we can't start a replica with a lower max_connections value than the primary's when hot_standby = on.
Though I had updated the primary server's max_connections, the information didn't arrive on the replica side. Normally, the primary server's relevant config changes are communicated to the replicas through WAL records. For this reason, after restarting the primary with the lower max_connections, we need to wait for write activity to be replicated and only then change the replica's max_connections.
But I don't think waiting like that is a feasible solution.
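Either way, you can check which max_connections value the replica currently believes the primary has with pg_controldata, which reads it from the replica's control file. A sketch (the data directory path is an assumption, and the exact output formatting varies by version):

$ pg_controldata /var/lib/postgresql/data | grep max_connections
max_connections setting:              300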
Better Solution:
Start the primary with the lower max_connections.
Start the replicas with hot_standby = off (if a server looks stuck in the starting state, don't keep waiting after a few seconds).
Shut down the replicas (a way to confirm they have caught up first is sketched below).
Start the replicas with hot_standby = on.
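While a replica runs with hot_standby = off it won't accept queries, but you can still confirm from the primary that it has replayed past the restart (and therefore past the parameter-change record) before switching hot_standby back on. A sketch, assuming PostgreSQL 10+ names (older releases use replay_location and pg_current_xlog_location):

-- run on the primary: the replica has caught up once replay_lsn reaches the current position
SELECT application_name, replay_lsn, pg_current_wal_lsn() AS primary_lsn
FROM pg_stat_replication;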

Related

Google Cloud SQL FATAL: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 500)

I was running a PostgreSQL 13.4 replica instance on Google Cloud SQL. The replica keeps failing to start after maintenance with the error FATAL: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 500), even though the max_connections = 500 flag, which matches the primary instance, is explicitly set.
I found the post PostgreSQL 9.5 - change to max_connections not being visible to slaves, but the hot_standby flag is not modifiable in Cloud SQL.
Restarting and stopping replication have not worked so far.

Postgresql 9.6 cascading stream replication issue: physical replication slots not synced from master to slave

I am running PostgreSQL 9.6 and trying to set up cascading physical replication.
However, I notice that the replication slots set up on the master are not shown on the cascading standby units, so the downstream standby fails its base backup when a replication slot is specified.
on my master:
wal_level = replica
wal_log_hints = on
max_wal_senders = 10
wal_keep_segments = 1024
archive_mode = on
archive_command = 'test ! -f /backup/pg_archive_5432/%f && cp %p /backup/pg_archive_5432/%f'
on my standby:
hot_standby = on
Is this normal behavior on 9.6? If anyone is running an active-standby setup, can you check on your standby unit?
Thank you very much
Replication slots are not replicated. So if you want to use cascading replication with replication slots, you have to create another replication slot on the first standby server. That replication slot can be used by the second standby server.
If you think about it, that makes sense: the cascading standby is not at the same place in the WAL stream as the first one, so they need different replication slots.
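A sketch of what that looks like, where the slot name cascade_slot and the host name are made-up examples: create the slot on the first standby, then point the second standby at it (PostgreSQL 9.6 still configures this in recovery.conf):

-- on the first standby: create the slot the downstream standby will consume
SELECT pg_create_physical_replication_slot('cascade_slot');

# in the second standby's recovery.conf
standby_mode = 'on'
primary_conninfo = 'host=first-standby port=5432 user=replicator'
primary_slot_name = 'cascade_slot'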

PostgreSQL 9.5 - change to max_connections not being visible to slaves

I've added a pgbouncer process to my master server so I want to lower the number of connections from 1500 down to 100 or so to free up resources on the master, but when I change it on both the master and slave, the new setting isn't visible to the slave:
2020-01-29 14:59:19 dbr5 postgres[47563]: [4-1] 2020-01-29 14:59:19 EST [47563]: [4-1] user=,db=,app=,client= FATAL: hot standby is not possible because max_connections = 100 is a lower setting than on the master server (its value was 1500)
This is after the master has been changed:
master=# show max_connections;
max_connections
-----------------
100
(1 row)
Any clues why the slaves aren't taking the new setting?
The slave needs to know the master's max_connections setting to perform this check, and so the master notifies the slave of changes via a WAL entry.
However, the slave won't read any WAL entries if its current max_connections setting is incompatible with the last known setting on the master.
You should reconfigure the master first, give the corresponding WAL entry a chance to replicate, and then reconfigure the slave.
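A hedged way to check that the WAL entry has had that chance before touching the slave, using the 9.5-era function names (newer releases call these pg_current_wal_lsn and pg_last_wal_replay_lsn):

-- on the master, after restarting it with the lower max_connections:
SELECT pg_current_xlog_location();
-- on the slave, which is still running with the old value, confirm replay has reached at least that point:
SELECT pg_last_xlog_replay_location();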
Setting hot_standby = off on the slaves, restarting the master with the new lower connection limit, confirming that the slave had received the change by looking in its log for the FATAL: the database system is starting up entry, and then switching hot_standby back to on worked. The system now has the new lower connection limit.
Configuration parameters, when set globally, are never replicated. They can be set
in postgresql.conf
with ALTER SYSTEM
with command line options when the server process is started
The last option clearly cannot be replicated, and the first two use configuration files which are also not replicated.
This is a feature: you may want configuration parameters to differ in some cases (although that is not generally advisable).
You will have to change the parameter on the standby server.
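For example, you can either edit max_connections in the standby's own postgresql.conf or use ALTER SYSTEM on the standby; a sketch, assuming the standby is open for read-only connections (ALTER SYSTEM only writes postgresql.auto.conf, and max_connections still needs a restart to take effect):

-- run on the standby itself; the change is written to its postgresql.auto.conf and is not replicated
ALTER SYSTEM SET max_connections = 100;
-- then restart the standby for the new value to take effect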

Postgres: How do I safely remove a replica?

Do I need to do anything on the primary if I permanently remove its only replica? I'm concerned about WAL files filling up the disk.
I want to remove the only replica from a single-node replication setup:
P -> R
I want to remove R.
Your concerns are absolutely correct. The replica creates a replication slot on the primary server, where restart_lsn is stored. According to the docs, restart_lsn is:
The address (LSN) of oldest WAL which still might be required by the consumer of this slot and thus won't be automatically removed during checkpoints.
If the replica does not advance the LSN in this replication slot, the primary will keep all WAL segments from that position onward, ignoring the max_wal_size limit.
If you want to remove the replica and allow WAL rotation, you have to drop the replication slot as well:
postgres=# SELECT * FROM pg_replication_slots;
postgres=# SELECT pg_drop_replication_slot('replication_slot_name');
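It can also be useful to see how much WAL a slot is currently holding back. A hedged check, using the PostgreSQL 10+ function names (9.x uses pg_current_xlog_location and pg_xlog_location_diff):

SELECT slot_name, active, restart_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS retained_wal
FROM pg_replication_slots;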
There is a patch on the Postgres Commitfest which introduces a new GUC to limit the volume of WAL retained by a replication slot. However, the patch has been around for a long time and has not been committed yet.

Unable to elect primary after secondary taken offline Mongo

I have a replica set with 1 arbiter and 3 MongoDB databases.
I have given 2 of the databases (db1 and db2) an equal priority of becoming primary, and the third one (db3) a priority of 0.
I am trying to take db3 offline to copy the data to another server, but every time I run db.shutdownServer() on db3, it causes db1 and db2 to become secondaries, and they remain stuck in this configuration.
It's my understanding that re-election only takes place when the primary becomes unavailable.
Am I missing something?
So what was actually happening was that I had added 3 other databases (shut down) in hidden mode, which were going to become my next replica set. MongoDB needs a majority of the replica set's voting members to be up in order to elect or keep a primary, so once the number of shut-down members exceeded the running ones the set dropped to read-only, and shutting down db3 tipped that balance every time.