The right procedure to upgrade an RDS Postgres instance with a read replica? - postgresql

The current version is 9.4.20 and I want to upgrade to 9.5.X. I am wondering what the right procedure is for doing this, since my Postgres has a read replica, so it is a bit more involved than just clicking Modify. Downtime is acceptable. A seamless upgrade is NOT required. The docs on the AWS side are not clear (screenshot of the AWS docs: https://i.stack.imgur.com/WlCqV.png).
Here are the proposed steps, yet I can't figure out how to perform the second step:
take a snapshot of the primary instance,
stop replication,
upgrade a primary instance,
upgrade read replica,
promote read replica and start replication again

I will just post what I did as an answer.
snapshot primary,
delete old read replica,
upgrade primary,
create new read replica, done.
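For anyone who prefers to script those four steps, here is a minimal sketch using boto3. The instance identifiers, snapshot name and target engine version are placeholders/assumptions, not values from the question; check `aws rds describe-db-engine-versions` for the valid upgrade targets in your region.

```python
import boto3

rds = boto3.client("rds")

# 1. Snapshot the primary so you can restore if the upgrade goes wrong.
rds.create_db_snapshot(
    DBInstanceIdentifier="mydb",
    DBSnapshotIdentifier="mydb-pre-9-5-upgrade",
)
rds.get_waiter("db_snapshot_available").wait(
    DBSnapshotIdentifier="mydb-pre-9-5-upgrade"
)

# 2. Delete the old read replica (it cannot follow a major version upgrade).
rds.delete_db_instance(DBInstanceIdentifier="mydb-replica", SkipFinalSnapshot=True)
rds.get_waiter("db_instance_deleted").wait(DBInstanceIdentifier="mydb-replica")

# 3. Upgrade the primary to the new major version.
rds.modify_db_instance(
    DBInstanceIdentifier="mydb",
    EngineVersion="9.5.15",          # assumed target version
    AllowMajorVersionUpgrade=True,
    ApplyImmediately=True,
)
# The status may take a moment to leave "available" before the upgrade starts,
# so in practice you may want to poll until it flips and comes back.
rds.get_waiter("db_instance_available").wait(DBInstanceIdentifier="mydb")

# 4. Create a fresh read replica from the upgraded primary.
rds.create_db_instance_read_replica(
    DBInstanceIdentifier="mydb-replica-new",
    SourceDBInstanceIdentifier="mydb",
)
```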

The RDS for PostgreSQL doc https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html states:
"A read replica can't undergo a major version upgrade but the read replica's source instance can. If a read replica's source instance undergoes a major version upgrade, all read replicas for that source instance remain with the previous engine version. In this case, the read replicas can no longer replicate changes performed on the source instance.
We recommend that you either promote your read replicas, or delete and recreate them after the source instance has upgraded to a different major version."
When you initiate a major version upgrade for an RDS for PostgreSQL instance that has one or more read replicas, replication is automatically stopped and is not restarted after the primary (source) finishes upgrading. You will need to create new read replicas after upgrading the source database instance.
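If you want to confirm that state from code, a describe call shows the source's replicas, their engine versions, and their read-replication status. A small sketch assuming boto3 and a placeholder identifier "mydb":

```python
import boto3

rds = boto3.client("rds")

# Fetch the source instance and list its read replicas.
src = rds.describe_db_instances(DBInstanceIdentifier="mydb")["DBInstances"][0]
print("source engine version:", src["EngineVersion"])

for replica_id in src.get("ReadReplicaDBInstanceIdentifiers", []):
    rep = rds.describe_db_instances(DBInstanceIdentifier=replica_id)["DBInstances"][0]
    # StatusInfos carries the "read replication" status for replicas.
    repl_status = [s for s in rep.get("StatusInfos", [])
                   if s.get("StatusType") == "read replication"]
    print(replica_id, rep["EngineVersion"], repl_status)
```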

Related

Check postgresql replication

I have created a replicated PostgreSQL database (master - slave). I did this with an already existing Ansible playbook (role), which I don't fully understand yet. The cluster currently consists of only 2 databases on different VMs.
So I want to test this replication now.
Unfortunately I have little experience with Postgresql.
How can I check whether the connection between them is stable?
And whether the slave really takes over if the master fails?
Many thanks for any information, tips & tricks.
Postgresql v. 9.6
Official PostgreSQL does not yet support automatic failover (although there are multiple third-party projects that support this feature). Therefore, if the deployment you have mentioned is plain official PostgreSQL, after a master failure none of the replicas will take over the write role. They can, however, answer read queries if they are configured as hot_standby.
If you want to check the state of replication, you can look at pg_stat_replication on the master.
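As a quick sketch of that check (assuming psycopg2 and placeholder connection details; on 9.6 the columns are still named sent_location/replay_location rather than the *_lsn names used in later releases):

```python
import psycopg2

# Connection details are placeholders; the role needs enough privileges to see
# full rows in pg_stat_replication.
conn = psycopg2.connect(host="master-host", dbname="postgres",
                        user="postgres", password="secret")
with conn, conn.cursor() as cur:
    # One row per connected standby; an empty result means nothing is streaming.
    cur.execute("""
        SELECT application_name, client_addr, state, sync_state,
               sent_location, replay_location
        FROM pg_stat_replication;
    """)
    for row in cur.fetchall():
        print(row)

# On the standby itself, SELECT pg_is_in_recovery(); should return true.
```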
Also, these official docs will help you understand Postgres streaming replication & failover better:
https://www.postgresql.org/docs/9.6/warm-standby.html#STREAMING-REPLICATION
https://www.postgresql.org/docs/9.6/warm-standby-failover.html

Mongo Replication

I have a MongoDB 2.4.8 database set up and running in a live environment. I want to add a replica; however, I would like to use the latest version, 3.2.9, for the replica.
Is the only way for me to do this to upgrade the current node to version 3.2.9 and then add the replica?
My plan would be to sync all the data to the new node, make it the primary, and then update the old node to the latest version. Is this possible?
Yes, you can create a new node, make it a replica, and then update the old node.
A few things to keep in mind:
The default storage engine for 3.2.9 is WiredTiger, while 2.4.8 uses MMAPv1, so you would have to change the configuration of the new node so that you can keep using MMAPv1 as your storage engine.
Set up replication very carefully. If it is not done properly, there is a chance the whole database gets blown away, so I recommend taking a backup of the database before setting up replication.
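A quick way to verify the version/storage-engine mix before wiring the nodes together is to ask each mongod directly. A hedged sketch using pymongo (placeholder hostnames; an older 3.x pymongo is assumed, since current 4.x drivers no longer talk to 2.4 servers):

```python
from pymongo import MongoClient

for host in ("old-node:27017", "new-node:27017"):   # placeholder hostnames
    client = MongoClient(host)
    version = client.admin.command("buildInfo")["version"]
    status = client.admin.command("serverStatus")
    # 2.4 has no storageEngine section in serverStatus; it is always MMAPv1.
    engine = status.get("storageEngine", {}).get("name", "mmapv1")
    print(host, version, engine)
```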
I would definitely go with the first method that you mentioned: upgrade the current standalone database and then create a replica set. I tried to find the best practice from MongoDB, but I couldn't find an answer, so I asked Adam, an ex-employee of MongoDB and creator of the M202 course, for his opinion.
Source: Adam, ex-employee of MongoDB
I went with the route of a full mongo backup and then a restore into the new nodes.
Replication from the old version to the new one was very fragile, plus the backup is very fast to do as long as you are allowed to bring the server down.
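For reference, the dump/restore route boils down to two tool invocations. A rough sketch, driven from Python for consistency with the other examples (hosts and the dump directory are placeholders):

```python
import subprocess

DUMP_DIR = "/backups/mongo-migration"   # placeholder path

# Full dump from the old 2.4.8 node (add --oplog for a point-in-time dump
# when dumping a replica set member).
subprocess.run(["mongodump", "--host", "old-node:27017", "--out", DUMP_DIR],
               check=True)

# Restore into the new 3.2.9 node; --drop replaces any existing collections.
subprocess.run(["mongorestore", "--host", "new-node:27017", "--drop", DUMP_DIR],
               check=True)
```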

POSTGRES REPLICATION for new master

We are putting together an architecture to support high availability for our Postgres 9.5 database. We have 1 master and 3 slaves replicating the data of the master. When the master goes down, slave 1 is promoted to the new master, but slave 2 and slave 3 are still pointing to the previous master and not the newly promoted node.
Is there a way to make the slaves read from the new master dynamically, or does it require changing the configuration manually and restarting the slaves?
There's no short answer, but I'll try:
When the primary server fails, you'll promote one slave and reconfigure all the other slaves to target the new master. However, there is one scenario in which reconfiguring the other slaves might not be needed: if you are using "WAL archiving" and your archive is stored on a shared drive that survived the failure of the old primary. If the new primary continues to use the same shared store, you might not need to reconfigure the other slaves. Again, I've never tried this - you can try.
If your replication mechanism is based on "replication slots" (introduced in PostgreSQL 9.4), then you have to reconfigure all the slaves. In that case you'll actually have to rebuild replication on all the other slaves from scratch (as if they had never been slaves at all). Nevertheless, in my opinion replication slots are the better choice.
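If you go the replication-slot route, the rebuild of each remaining slave starts with creating a slot for it on the newly promoted master (typically followed by a pg_basebackup from that master). A minimal sketch assuming psycopg2, superuser access, and placeholder host/slot names:

```python
import psycopg2

conn = psycopg2.connect(host="new-master", dbname="postgres",
                        user="postgres", password="secret")
conn.autocommit = True
with conn.cursor() as cur:
    # One physical slot per standby that will be rebuilt from this master.
    for slot in ("slave2_slot", "slave3_slot"):
        cur.execute("SELECT pg_create_physical_replication_slot(%s);", (slot,))
    cur.execute("SELECT slot_name, active FROM pg_replication_slots;")
    print(cur.fetchall())
```

Each rebuilt standby then points at the new master via primary_conninfo and primary_slot_name in its recovery.conf.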
Regarding automation: you've asked whether it is possible to automatically reconfigure the other slaves, but what you haven't mentioned is whether you have any failover automation in place. What I'm trying to say is that PostgreSQL itself will not automatically perform failover (promote one of the slaves when the master fails). At a minimum you have to create a "trigger file" on the slave to be promoted, and you have to do this manually or by using another product (for example pgpool2).
If you use pgpool2, you can set up automatic slave reconfiguration by setting the follow_master_command value in pgpool.conf.
Finally I'll strongly recommend reading this tutorial - it'll make your life easier.
Edit:
I forgot to say two things:
Automatically reconfiguring all the other slaves as soon as the new master is promoted might not be a good idea, especially if you have many slaves. It'll put additional pressure on your new primary and on your network, so in some cases it is better to postpone this, for example to night hours. More on this in the tutorial mentioned above.
I wrote the tutorial.
As e4c5 commented, you can use repmgr for managing this type of task. I have tried repmgr and got it done without a problem.
I followed a tutorial for doing that; here is the link:
http://jensd.be/591/linux/setup-a-redundant-postgresql-database-with-repmgr-and-pgpool
I hope that by following this tutorial you can do what you want without any problems.

Migrating MongoDB instances with no down-time

We are using MongoDB in a production environment and now, due to some issues with the current servers, I'm going to change servers and start a new MongoDB instance.
We have a replica set and a single mongod instance (two different MongoDB networks for different purposes). Now, first I need to migrate the single mongod instance, and then the whole replica set, to the new server.
What I want to know is, how can I migrate both instances with no downtime? I don't want to shut down the server or stop write operations.
Thanks in advance.
So first of all, you should never run MongoDB as a single instance in production. At a minimum you should have 1 primary, 1 secondary, and 1 arbiter.
Second, even with a replica set you will always have a bit of write downtime when you switch primaries, as writes are not possible during the election process. From the docs:
IMPORTANT: Elections are essential for independent operation of a replica set; however, elections take time to complete. While an election is in process, the replica set has no primary and cannot accept writes. MongoDB avoids elections unless necessary.
Elections are going to occur when, for example, you bring down the primary to move it to a new server or virtual instance, or upgrade the database version (like going from 2.4 to 2.6).
You can keep downtime to a minimum with an existing replica set by setting the appropriate options to allow queries to run against secondaries. Again from the docs:
Maintaining availability during a failover. Use primaryPreferred if you want an application to read from the primary under normal circumstances, but to allow stale reads from secondaries in an emergency. This provides a “read-only mode” for your application during a failover.
This takes care of reads at least. Writes are best dealt with by having your application retry failed writes, or queue them up.
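To make that concrete, here is a hedged pymongo sketch (the replica-set name, hosts, and database/collection are placeholders): reads are allowed to fall back to a secondary during a failover, and a failed write is simply retried once the election has finished.

```python
import time
from pymongo import MongoClient
from pymongo.errors import AutoReconnect

client = MongoClient(
    "mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0",
    readPreference="primaryPreferred",   # stale reads from a secondary during failover
)

def insert_with_retry(doc, attempts=5):
    for attempt in range(attempts):
        try:
            return client.mydb.events.insert_one(doc)
        except AutoReconnect:
            # No primary is available while the election runs; back off and retry.
            time.sleep(2 ** attempt)
    raise RuntimeError("gave up waiting for a new primary")

insert_with_retry({"msg": "hello"})
```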
Regarding your standalone instance, the documented procedure for converting it to a replica set is well tested and can be completed very quickly with minimal downtime:
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
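The conversion itself is essentially one command after restarting the standalone mongod with a --replSet name. A sketch (the set name "rs0", the hostname, and the use of directConnection, which needs pymongo 3.12 or newer, are all assumptions):

```python
from pymongo import MongoClient

# Connect directly to the node that was restarted with `--replSet rs0`
# (older drivers connect to a single seed directly by default).
client = MongoClient("old-node:27017", directConnection=True)

client.admin.command("replSetInitiate", {
    "_id": "rs0",
    "members": [{"_id": 0, "host": "old-node:27017"}],
})

# It may take a few seconds before myState flips to 1 (PRIMARY).
print(client.admin.command("replSetGetStatus")["myState"])
```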
You cannot have zero downtime (the new mongod will run on a new IP, so you will at least need to reconnect to it), but you can minimize downtime by building a geographically distributed replica set.
Please read:
http://docs.mongodb.org/manual/tutorial/deploy-geographically-distributed-replica-set/
Use the given process but please note:
Do not set priority 0 on the instances at the new location, so that they can become primary when the old ones at the old location step down.
You still need to restart mongod in replica set mode at the old location.
You need 3 instances, including an arbiter, at the new location if you want it to be a replica set.
When the data is fully in sync with the instances at the new location, step down the instances at the old location one by one (a sketch of the step-down is shown below). At that point everything will go to the new location, but the problem is that traffic is still directed through a distant mongod.
So stop mongod at the old location and start a new one at the new location. Connect your applications to the new-location mongod.
Note: I have not done this myself so far. I planned it once, but then the problem I had turned out not to be with the hosting provider. In practice you may run into some issues.
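As referenced in the steps above, the step-down can be scripted. A hedged pymongo sketch (set name and hostnames are placeholders); note that the primary drops client connections when it steps down, so the resulting connection error is expected:

```python
from pymongo import MongoClient
from pymongo.errors import ConnectionFailure

client = MongoClient("mongodb://old1:27017,new1:27017,new2:27017/?replicaSet=rs0")

# Ask the current (old-location) primary to step down for 5 minutes so that a
# new-location member can win the election.
try:
    client.admin.command("replSetStepDown", 300)
except ConnectionFailure:
    # The primary closes client connections when it steps down; expected.
    pass

# Once a new primary has been elected, check who holds which role.
status = client.admin.command("replSetGetStatus")
print([(m["name"], m["stateStr"]) for m in status["members"]])
```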
A replica set is the feature provided by MongoDB to achieve high availability and automatic failover.
It is similar to a traditional master-slave configuration, but with the capability of automatic failover.
It is basically a group/cluster of mongod instances that communicate and replicate with each other to provide high availability and automatic failover.
A replica set can contain a minimum of 2 and a maximum of 12 mongod instances (a limit of the MongoDB versions of that era).
In a replica set several types of servers exist; out of all of them, one server is always the primary.
http://blog.ajduke.in/2013/05/31/setup-mongodb-replica-set-in-4-steps/
John's answer is right. By the way, in your case there is no way to avoid downtime; you can just try to make it as short as possible.
You can prepare the new replica set and save its configuration.
Do the same for the single mongod instance: prepare a .js file with its specific configuration (i.e. things that live in the admin database).
Disable client connections on the production servers.
Copy the data files from the old servers to the new ones (http://docs.mongodb.org/manual/core/backups/#backup-with-file-copies).
Apply the previously saved replica set config and configuration.
Done.
You can also use different approaches, such as adding a hidden secondary member to the replica set if you have a lot of data, so you can wait until it is up to date before stopping the production server. Basically, for the replica set you have many ways to handle a migration; with the single instance, instead, you don't have such options.
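The hidden-member approach mentioned above roughly comes down to a reconfig like the following. A pymongo sketch (hosts and the set name are placeholders; replSetGetConfig needs MongoDB 3.0+ - on older servers read the config from the local.system.replset collection instead):

```python
from pymongo import MongoClient

client = MongoClient("mongodb://old1:27017,old2:27017/?replicaSet=rs0")

cfg = client.admin.command("replSetGetConfig")["config"]
cfg["version"] += 1
cfg["members"].append({
    "_id": max(m["_id"] for m in cfg["members"]) + 1,
    "host": "new-node:27017",
    "priority": 0,      # hidden members must have priority 0
    "hidden": True,     # invisible to clients, but keeps a full copy of the data
    "votes": 0,
})
client.admin.command("replSetReconfig", cfg)
```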

mongodb automatic failover / high availability on aws

I need a proper failover mechanism for MongoDB on AWS EC2. I know failover can be accomplished with replica sets, but what is the best way to fire up a new Ubuntu EC2 AMI node with Mongo installed, add it to the replica set automatically (with zero manual operations), and return the replica set to its proper state?
EBS has some problems, but if I use local instance storage I will lose the dead node's data. Does the replica have all of the master's data, and is the replica enough to recover everything (on Mongo 1.8 with journaling), or do I have to use only EBS?
How should I start the mongo instances? If I should start with the repair option, how can I separate a node's first run from a failover restart?
Regards,
The easiest way to bring up new nodes is to start them from a recent backup.
So now it's a question of how you do your backup and how you restore from that backup quickly.
The MongoDB site has a write-up for backups (in general) and backups on EC2 specifically. There's also a write-up for adding a new set member.
You can do this with instance storage or EBS drives, but you'll need different strategies for each. There's really no single way to do this, so I would check out the docs I've linked to for a primer.
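Whichever storage you restore from, the last step of an automated replacement is usually "add the node back and wait until it reports SECONDARY". A hedged monitoring sketch with pymongo (hosts and the set name are placeholders):

```python
import time
from pymongo import MongoClient

client = MongoClient("mongodb://node1:27017,node2:27017,node3:27017/?replicaSet=rs0")

def wait_for_secondary(host, timeout=3600, poll=10):
    """Poll replSetGetStatus until `host` reports SECONDARY (i.e. caught up)."""
    deadline = time.time() + timeout
    while time.time() < deadline:
        status = client.admin.command("replSetGetStatus")
        member = next((m for m in status["members"] if m["name"] == host), None)
        if member is not None and member["stateStr"] == "SECONDARY":
            return member
        time.sleep(poll)
    raise TimeoutError("replacement member did not reach SECONDARY in time")

print(wait_for_secondary("new-node:27017"))
```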
I highly recommend reading Sean Coates' article on multi-node MongoDB elections, failover and AWS - specifically, the subtlety around distributed arbiter nodes (e.g., make sure to give yourself a voting majority when an AZ goes down). A similar recommendation can be found in a comment on this (now-closed) MongoDB vs. Cassandra thread.