Postgres replication for a new master - postgresql

We are putting together an architecture to support high availability for our Postgres 9.5 database. We have one master and three slaves replicating the master's data. When the master goes down, Slave 1 is promoted to the new master, but Slave 2 and Slave 3 still point to the previous master rather than to the newly promoted node.
Is there a way to make the slaves follow the new master dynamically, or does it require changing their configuration manually and restarting them?

There's no short answer, but I'll try:
When the primary server fails, you'll promote one slave and reconfigure all the other slaves to target the new master. However, there's one scenario in which reconfiguring the other slaves might not be needed: if you are using "WAL archiving" and your archive is stored on a shared drive that survived the failure of the old primary. If the new primary continues to use the same shared store, you might not need to reconfigure the other slaves. Again, I've never tried this - but you can.
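A minimal sketch of that shared-archive setup, assuming a shared mount at /mnt/shared_archive (the path and the cp-based commands are just illustrative):

    # postgresql.conf on the primary
    archive_mode = on
    archive_command = 'cp %p /mnt/shared_archive/%f'

    # recovery.conf on each standby
    standby_mode = 'on'
    restore_command = 'cp /mnt/shared_archive/%f %p'

As long as every standby's restore_command points at the same shared archive, a promoted standby that archives to that same location could, in principle, keep feeding the others.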
If your replication mechanism is based on "replication slots" (introduced in PostgreSQL 9.4), then you have to reconfigure all the slaves. In this case you'll actually have to rebuild replication on all the other slaves from scratch (as if they had never been slaves at all). Nevertheless, in my opinion, replication slots are the better choice.
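For reference, a slot-based setup looks roughly like this (the slot name and connection string are examples):

    -- on the master: one physical slot per standby
    SELECT pg_create_physical_replication_slot('standby2_slot');

    # recovery.conf on the corresponding standby
    standby_mode = 'on'
    primary_slot_name = 'standby2_slot'
    primary_conninfo = 'host=new-master.example.com port=5432 user=replicator'

Because primary_conninfo names a specific host, each standby has to be re-pointed after a promotion - and, per the above, typically re-seeded with pg_basebackup.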
Regarding automation: you've asked whether it is possible to automatically reconfigure the other slaves, but one thing you didn't mention is whether you have any failover automation in place. What I'm trying to say is that PostgreSQL itself will not automatically perform failover (promote one of the slaves when the master fails). At a minimum you have to create a "trigger file" on the slave to be promoted, and you have to do this manually or by using another product (for example, pgpool2).
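The trigger-file mechanism itself is tiny (the path is an example):

    # recovery.conf on the standby you intend to promote
    trigger_file = '/tmp/postgresql.trigger'

    # promotion = creating that file (or, equivalently, running pg_ctl promote)
    touch /tmp/postgresql.trigger

The point is that something - a human or a tool - has to perform that step.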
If you use pgpool2, you can set up automatic slave reconfiguration via the follow_master_command setting in pgpool.conf.
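Schematically, it looks like this (the script path is hypothetical; pgpool substitutes placeholders such as %h for the failed node's host and %H for the new master's host before calling your script):

    # pgpool.conf
    follow_master_command = '/etc/pgpool2/follow_master.sh %d %h %p %D %m %H %M %P'

The script body is yours to write: it typically re-points each remaining standby at the new primary and restarts its recovery.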
Finally, I strongly recommend reading this tutorial - it'll make your life easier.
Edit:
I forgot to mention two things:
Automatically reconfiguring all the other slaves as soon as the new master is promoted might not be a good idea, especially if you have many slaves. It puts additional pressure on your new primary and on your network, so in some cases it is better to postpone it - until the night hours, for example. More on this in the tutorial mentioned above.
I wrote that tutorial.

As e4c5 commented, you can use repmgr to manage this kind of task. I tried repmgr and got it done without a problem.
I followed a tutorial for doing that; here is the link:
http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool
I hope that by following this tutorial you can do what you want without any problems.

Related

Check PostgreSQL replication

I have created a replicated PostgreSQL database (master-slave). I did this with an already existing Ansible playbook (role), which I don't fully understand yet. The cluster currently consists of only two databases on different VMs.
So I want to test this replication now.
Unfortunately I have little experience with Postgresql.
How can I check whether they are connected and replicating stably?
And does the slave really take over if the master fails?
Many thanks for any information, tips & tricks.
Postgresql v. 9.6
Official PostgreSQL does not yet support automatic failover (although there are multiple third-party projects which do support this feature). Therefore, if the deployment you mentioned is only official PostgreSQL, after a master failure none of the replicas takes over writes. They can, however, still answer read queries if they are configured as hot standbys.
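For completeness, a hot-standby replica is configured roughly like this (the host name is an example):

    # postgresql.conf on the replica
    hot_standby = on

    # recovery.conf on the replica
    standby_mode = 'on'
    primary_conninfo = 'host=master-vm.example.com port=5432 user=replicator'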
If you want to check the state of replication, you can query the pg_stat_replication view on the master.
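For example (run on the master; these column names are from the 9.6 catalog):

    SELECT client_addr, state, sent_location, replay_location, sync_state
    FROM pg_stat_replication;

On the slave, SELECT pg_is_in_recovery(); should return true for as long as it is acting as a standby.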
Also these official docs would help you understand Postgres streaming replication & failover better:
https://www.postgresql.org/docs/9.6/warm-standby.html#STREAMING-REPLICATION
https://www.postgresql.org/docs/9.6/warm-standby-failover.html

MongoDb preparing for Sharded Clusters

We are currently setting up our mongodb environment for production. At the moment we only have one dedicated mongodb database server. We will expand this in the near future with a 2nd server and I already indicated to the management that for the ideal situation we should get a 3rd server as well.
Since I already know we're going to use sharding and replication in the near future I want to be prepared for it.
The idea I have now is to start with the Development Configuration (as MongoDB's documentation calls it).
Whenever our second server becomes available, I would like to expand this setup to a configuration with 2 configuration servers and 2 shards (replica sets).
And of course, when our third server becomes available, move to the fully functional sharded cluster configuration.
While reading MongoDB's documentation I was given pause by the note that the Development setup should not be used in production.
MongoDB Development Configuration
Keeping in mind that we will add more servers soon, would it be a bad idea to set up the Development Configuration now, so we can easily add the 2nd server to the cluster when it becomes available?
After setting up the 'development sharded setup' I found my answer. Of course, I'm happy to share it in case anybody runs into the same questions I did when starting out.
In my case, it was OK to start with the development setup until my new servers arrived. It was a temporary situation, and when the new servers arrived I was able to expand my replica sets easily. There are a number of reasons why this isn't advised for production:
To state the obvious, there is no replication yet. Since I was running all shards on one machine, there was a single point of failure: if the machine, or a single node, went down, the cluster stopped working.
Now this part is interesting. After I added a second server, I did have primary and secondary nodes; primaries were used for writing and secondaries for reading. That eliminated the lack of replication AND gave my data higher availability. However, I noticed that with 2-member replica sets, if one member went down (even if it was a secondary), the primary stepped down to a secondary as well. This is due to the voting mechanism MongoDB uses (see Markus' more detailed answer on this). Since there was no primary left in the replica set, the cluster stopped functioning. If I were to use an arbiter, I could eliminate this problem as well.
When you have a 3-member replica set, automatic failover kicks in: whenever a node goes down, a new primary is elected automatically and the cluster keeps performing as before.
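For illustration, a 3-member set where the third member is a vote-only arbiter (host names are examples) - this fixes the 2-member voting problem described above while adding almost no load:

    // mongo shell
    rs.initiate({
      _id: "rs0",
      members: [
        { _id: 0, host: "server1:27017" },
        { _id: 1, host: "server2:27017" },
        { _id: 2, host: "server3:27017", arbiterOnly: true }
      ]
    })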
During my tests I also reached a point where one of my mongod.exe instances stopped working with an out-of-memory exception. I was running a cluster with 3 replica sets, meaning every machine had at least 4 mongod.exe processes running (3 for the replica-set shards and one for the configuration-server replica set). Besides a query that wasn't optimized yet, I also noticed that the WiredTiger storage engine by default can use up to 50% of (RAM minus 1 GB) per process. Perhaps it wasn't the best choice to run multiple replica-set shards on one machine, but I was able to eliminate the problem by capping WiredTiger's memory usage.
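Capping it looks like this in a YAML-style mongod.conf (the 1 GB value is only an example; size it to how many mongod processes share the machine):

    storage:
      wiredTiger:
        engineConfig:
          cacheSizeGB: 1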
I hope this answer helps anybody who's starting to set up replication and sharding for MongoDB.

Is there a way to upgrade to a different v2 instance type without downtime?

Is it possible to create a read/failover replica, shut down the master, promote the replica to master, upgrade the old master, and then make it the master again - in other words, change instance types up or down without downtime?
I took a look at failover replicas, but they appear to activate only when the master goes down for maintenance, not when it is shut down deliberately.
If not, is this feature in the works?
Thanks.
You cannot upgrade from 1st Generation to 2nd Generation in place; you have to create a new instance. You can, however, use the export and import functions to transfer the data (storing it temporarily in Cloud Storage).
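With today's gcloud CLI the transfer looks roughly like this (instance, bucket, and database names are placeholders, and the exact flags have changed across gcloud releases):

    # dump the old (1st gen) instance to Cloud Storage
    gcloud sql export sql old-instance gs://my-bucket/dump.sql --database=mydb

    # load the dump into the new (2nd gen) instance
    gcloud sql import sql new-instance gs://my-bucket/dump.sql --database=mydb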
For the transfer period you could use a read-only replica to avoid changes.
Note that you might need to make small or bigger changes to your code, as access differs in many ways between the generations.

Automatic failover with PostgreSQL 9.1

PostgreSQL 9.1 has master-slave synchronous replication. Suppose the master is machine A and the slave is machine B.
If the master fails, how does PostgreSQL know when to make the slave the master? What if the slave incorrectly thought the master was down because of a temporary network glitch, while client programs could still reach the master?
Moreover, how would my client program know that the slave is the new master and, more importantly, that it is ready to accept writes? Does the slave send a message to the client?
Check out repmgr; dealing with this issue is one of its jobs.
Typically you want to use a promotion-management system like repmgr or patroni. Then you want to use some sort of a high availability proxy (could be pgbouncer or haproxy) to handle the actual abstraction so your applications do not need to know what system is master.
In answer to your question, most of these systems use a heartbeat to determine whether there is a problem. Patroni heartbeats through etcd. Repmgr has its own heartbeat check. With repmgr you need to write hook scripts to take care of STONITH and so forth.
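As an illustration of the proxy layer, here is a commonly used HAProxy sketch for Patroni-managed clusters (addresses and ports are examples; it relies on Patroni's REST endpoint answering HTTP 200 on /master only on the current leader):

    listen postgres_write
        bind *:5000
        option httpchk GET /master
        http-check expect status 200
        default-server inter 3s fall 3 rise 2
        server node1 10.0.0.1:5432 check port 8008
        server node2 10.0.0.2:5432 check port 8008

Clients always connect to port 5000 and never need to know which node is currently the primary.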

Which PostgreSQL replication solution to use for my specific scenario

I need to replicate a PostgreSQL database server as follows:
Two servers are adjacent to each other - one is the master and the other the standby. If the master fails, the standby takes over. Replication from master to standby needs to be failsafe, hence synchronous. The standby will not be used for any querying unless it has become the master, so no load balancing across the pair is required.
There is another backup server at a remote location. Data from the master server mentioned above will be replicated to this remote server asynchronously and in batches. Time is not a factor at all in this replication - a couple of hours' delay is just fine. This server would be used just for backup.
I've studied the currently available replication solutions from the PostgreSQL docs as well as from Google, but can't decide which combination of synchronous-asynchronous solutions would I need.
The closest I came up with is using pgpool-II for scenario 1 and Mammoth for scenario 2. However, as pgpool is statement-based, what would happen to queries containing rand() and now()?
Please note that I'd rather use free and open-source replication tools.
Also, just a side question: according to scenario 1 above, when the master fails, the standby will take over. Would the master-slave roles be reversed from then on, or would the slave go back to its standby state once the master server has recovered?
Any suggestion would be highly appreciated. Thanks.
I suggest using DRBD for scenario 1 and either 9.0 built-in replication or Slony for scenario 2.
Until PostgreSQL 9.1 (not yet released at the time of writing), there is no other synchronous replication solution available, and DRBD is widely established for this purpose. Together with Pacemaker or Heartbeat, which come with all the scripts needed for PostgreSQL monitoring and switchover, you get a very robust and fairly easy-to-manage solution. (In fact, I'd consider continuing to use DRBD even after 9.1 comes out; it's just a lot easier and has a longer track record.)
For the cross-site asynchronous replication, you could try the built-in replication of PostgreSQL 9.0, perhaps in conjunction with repmgr for monitoring and management. Alternatively, you could try the (by now a bit) old-school Slony, but I'd guess it would be more complicated than your needs warrant.
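For orientation, the 9.0 built-in streaming setup is only a few lines (the host name is an example):

    # postgresql.conf on the master
    wal_level = hot_standby
    max_wal_senders = 3

    # recovery.conf on the remote standby
    standby_mode = 'on'
    primary_conninfo = 'host=master.example.com port=5432 user=replicator'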
You didn't mention whether the server in question is on a specific version or whether this is a new project with the freedom to choose the version. The answer varies based on that information.
If you are starting with a clean slate, I would recommend designing based on the PostgreSQL 9.1 beta. The final version will be released long before you would be ready to go into a production environment and it has binary synchronous replication built-in.
I've been using the built-in asynchronous replication in PostgreSQL for years in almost exactly the scenario you describe, and it has always been rock-solid for me. It became even better in 9.0 with hot standby, and it's become much easier to configure and maintain. 9.1 provides the only missing piece you require.
However, if you are trying to replicate an existing server, built-in asynchronous replication with aggressive settings for checkpoint_timeout and a very frequent backup of unarchived WAL files could be sufficient until you can upgrade to 9.1.
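A sketch of that stopgap (values and the rsync destination are examples; note that archive_timeout is the companion setting that bounds how long a WAL segment can sit unshipped):

    # postgresql.conf on the existing master
    checkpoint_timeout = 5min
    archive_mode = on
    archive_command = 'rsync -a %p backup-host:/wal_archive/%f'
    archive_timeout = 300    # force a WAL segment switch every 5 minutes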
The bottom line here is that you can get exactly what you want with stock PostgreSQL 9.1 - no third-party products required.
As for failover, it is not an automatic process; you'll need to handle that yourself. I would recommend that after a failover you keep the roles of the two machines switched until either the next failover event or a controlled manual failback during a scheduled outage in a slow period. Again, this is not automatic and must be managed by the administrator (via shell scripts, presumably).