Alternative to a PostgreSQL cluster restore by promoting an offline secondary replica - postgresql

We have an old PostgreSQL cluster, version 10.6, running on RHEL 7.
We use repmgr version 5.0.0.
We have one primary and one secondary replica, and we use repmgr for managing manual switchovers, promotions, etc.
We do not use the repmgr daemon.
The primary uses replication slots for barman and for the secondary replica.
We have to perform an update of the database content which will last several hours, and we want to be able to restore the database to the state it was in before the update, in case the update fails.
One option is to restore a backup from barman. This would take a while.
We want to use another approach:
Before starting the update on the primary, we stop the secondary replica and leave it down until the database content update is completed.
In case the database content update fails, we stop the primary.
We promote the secondary replica by running "repmgr standby promote".
This gives us a new primary whose contents match what the former primary held before the database update started.
Then we remove the former primary and rebuild it as the new secondary with repmgr standby clone (sketched in commands below).
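In commands, the plan would look roughly like this (a sketch only; the service name, config path, host and user are assumptions):
# On the secondary, before the content update starts:
sudo systemctl stop postgresql-10
# ...run the content update on the primary...
# If the update fails, stop the primary:
sudo systemctl stop postgresql-10
# Start the secondary again and promote it:
sudo systemctl start postgresql-10
repmgr -f /etc/repmgr.conf standby promote
# On the former primary: rebuild it as the new secondary:
repmgr -h newprimary -U repmgr -d repmgr -f /etc/repmgr.conf standby clone --force
repmgr -f /etc/repmgr.conf standby register --force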
Any issues with this approach?
Should I add more steps (like dropping the replication slot before stopping the secondary replica)?
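One thing we are aware of: while the secondary is down, its slot on the primary keeps retaining WAL, so disk usage grows for the duration of the update. We would check it like this (the slot name in the drop example is an assumption):
-- On the primary: list slots and whether they are active
SELECT slot_name, slot_type, active, restart_lsn FROM pg_replication_slots;
-- If the old standby is re-cloned from scratch anyway, its stale slot can be dropped:
SELECT pg_drop_replication_slot('repmgr_slot_2');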
Many thanks Mari

Related

PostgreSQL replication with EFM: how to reject inserts when standby nodes are not in sync?

I have a 3-node PostgreSQL 14 cluster managed by EDB Failover Manager.
When I turn off the two standby nodes, the primary node takes forever to execute insert commands.
I don't want it to accept insert commands; I want it to act like a read-only node (rejecting inserts/updates, etc.) when no standby node is in sync.
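For context, the relevant synchronous replication settings on the primary look roughly like this (a sketch; the standby names are assumptions):
synchronous_standby_names = 'ANY 1 (standby1, standby2)'
synchronous_commit = remote_apply
# With no synchronous standby reachable, commits block waiting for
# confirmation instead of failing outright.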

Postgres Logical Replication disaster recovery

We are looking to use Postgres Logical Replication to move changes from an upstream server ("source" server) to a downstream server ("sink" server).
We run into issues when we simulate a disaster recovery scenario. In order to simulate this, we delete the source database while the replication is still active. We then bring up a new source database and try to: a) move data from the sink into the source, and b) set up replication. At this stage we get one of two errors, depending on when we set up the replication (before or after moving the data).
Depending on when we set up the replication, we get one of the two errors below:
Replication slot already in use, difficulty in re-enabling slot without deletion
LOG: logical replication apply worker for subscription "test_sub" has started
ERROR: could not start WAL streaming: ERROR: replication slot "test_sub" does not exist
LOG: worker process: logical replication worker for subscription 16467 (PID 205) exited with exit code 1
We tried to recover using:
ALTER SUBSCRIPTION "test_sub" DISABLE;
-- Dissociate the slot so the subscription can be dropped even though
-- the publisher (and its slot) no longer exists:
ALTER SUBSCRIPTION "test_sub" SET (slot_name = NONE);
DROP SUBSCRIPTION "test_sub";
Cannot create subscription due to PK conflicts
ERROR: duplicate key value violates unique constraint "test_pkey"
DETAIL: Key (id)=(701) already exists.
CONTEXT: COPY test, line 1
Some possible resolutions:
Set up the logical replication starting from a known WAL position (LSN). This might avoid the PK issues we are facing.
Find a way to recreate the replication slot on the source database.
Back up the Postgres server, including the replication slot, and re-import it (sketches for the first two options below).
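For instance, something like the following might work (a sketch; the subscription and publication names are from our test setup, the connection string is an assumption, and it assumes the publication has been recreated on the new source):
-- On the new source: recreate the missing slot by hand; pgoutput is the
-- decoding plugin used by built-in logical replication:
SELECT pg_create_logical_replication_slot('test_sub', 'pgoutput');
-- On the sink, after moving the data back into the source: attach to the
-- pre-created slot and skip the initial COPY to avoid duplicate-key errors:
CREATE SUBSCRIPTION test_sub
  CONNECTION 'host=new-source dbname=app user=repl'
  PUBLICATION test_pub
  WITH (copy_data = false, create_slot = false);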
Is this a well-catered-for use case for Postgres logical replication? This is a typical disaster recovery scenario, so we would like to know how best to implement it. Thanks!

Start MongoDB replication automatically

Is there any way to start MongoDB replication automatically when the mongod service starts? I don't want to have to enter the shell and turn on replication manually. Thanks!
Thanks!
You can create a mongod service which starts automatically when the server starts.
First you need to create a configuration file (mongodb.conf) which includes configuration settings such as the replica set name. Then create and install the service using the following command:
mongod -f c:\mongod.conf --install
Then start the service using
net start mongodb
Read about the configuration file, and about installing mongod as a service, in the MongoDB documentation.
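A minimal sketch of such a config file in the older ini style (paths and the set name are assumptions):
# c:\mongod.conf
dbpath=c:\data\db
logpath=c:\data\log\mongod.log
replSet=myReplicaSet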
When you create a valid replica set in MongoDB, your data will be replicated asynchronously from the primary member to the secondary members in the replica set.
Having said that, no extra manual effort is required to get data replication done.
Running rs.slaveOk() on a secondary allows you to query data from secondary members in the replica set.
It allows you to read from a secondary, provided you can tolerate possibly eventually-consistent data. The replication itself does not happen because you run rs.slaveOk() on a secondary.
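For example, in a mongo shell connected to a secondary (the collection name is hypothetical):
rs.slaveOk()            // allow reads on this secondary for the current session
db.mycollection.find()  // would otherwise fail with a "not master" error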
I'm not sure I understand. Your question was about service start. For my part, I installed mongo on Ubuntu and the service did not start in replica set mode.
Finally, I disabled the first service and created another one with the option --replSet myReplicat.
When you have only 2 servers, there is a problem with majority votes. In my case, I had 2 secondaries after I stopped the primary, and it was difficult to come back to 1 primary and 1 secondary.
Effectively, the replication is always active. By default, all connections go to the primary. If you want read-only access on a secondary, you first enter the command rs.slaveOk(). This command is active at session level; if you reconnect, you have to issue it again. It is not possible to set it server-side.
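Also note that starting mongod with --replSet only enables replica set mode; the set still has to be initiated once from the shell (host names are assumptions):
rs.initiate({
  _id: "myReplicat",
  members: [
    { _id: 0, host: "host1:27017" },
    { _id: 1, host: "host2:27017" },
    { _id: 2, host: "host3:27017" }
  ]
})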

Unable to elect primary after secondary taken offline Mongo

I have a replica set with 1 arbiter and 3 MongoDB databases.
I have given two of the databases (db1 and db2) equal priority for becoming primary, and the third (db3) a priority of 0.
I am trying to take db3 offline to copy its data to another server, but every time I run db.shutdownServer() on db3, it causes db1 and db2 to become secondaries, and they remain stuck in this configuration.
It's my understanding that re-election only takes place when primaries become unavailable.
Am I missing something?
So what was actually happening was that I had added 3 other databases (shut down) in hidden mode, which were going to become my next replica set. When more voting members are down than up, the set loses its majority, no primary can be elected, and the set is effectively read-only; so every time I shut down db3 this was triggered.
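A quick way to confirm this from a surviving member's shell (a sketch using rs.status() and rs.conf() fields):
// Reachable members vs. configured voting members:
rs.status().members.filter(function (m) { return m.health === 1; }).length
rs.conf().members.filter(function (m) { return m.votes > 0; }).length
// A primary can only be elected while a majority of the voting members is reachable.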

MongoDB replica set no primary, need to force new primary

We have a MongoDB replica set which has 3 nodes:
Primary
Secondary
Arbiter
Somehow our replica set has ended up with nodes 1 and 2 both set as secondary members. I'm not sure how this happened (we did migrate the server that one of the nodes runs on, but only that one).
Anyway, I've been trying to re-elect a new primary for the replica set following the MongoDB documentation.
I'm unable to just use
rs.reconfig(cfg)
as it will only work if directed at the primary (which I don't have).
Using the force parameter
rs.reconfig(cfg, { force: true})
would appear to work, but when I re-query the status of the replica set, both servers are still showing as secondary.
Why hasn't the force reconfig worked? Currently the database is locked out whatever I try.
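For reference, we built cfg following the usual recipe from the guide (which member to keep is an assumption):
cfg = rs.conf()
cfg.members = [cfg.members[0]]  // keep only the member we can reach
rs.reconfig(cfg, { force: true })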
1. Convert all nodes to standalone.
Stop the mongod daemon and edit /etc/mongod.conf to comment out the replSet option.
Start the mongod daemon.
2. Use mongodump to back up the data on all nodes.
Reference from the mongo docs:
https://docs.mongodb.com/manual/reference/program/mongodump/
3. Log into each node and drop the local database.
Doing this deletes the replica set config on the node.
Or you can just delete the record in the system.replset collection in the local db, as described here:
https://stackoverflow.com/a/31745150/4242454
4. Start all nodes with the replSet option.
5. On the previous data node (not the arbiter), initialize a new replica set.
6. Finally, reconfigure the replica set with rs.reconfig (a shell sketch follows).
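A sketch of steps 5 and 6 in the shell (the set name and host names are assumptions):
// Step 5: on the former data node, initiate a fresh single-member set:
rs.initiate({ _id: "rs0", members: [{ _id: 0, host: "node1:27017" }] })
// Step 6: add the remaining members back with the reconfig helpers:
rs.add("node2:27017")
rs.addArb("arbiter1:27017")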