I have a MongoDB 2.4.8 database set up and running in a live environment. I want to add a replica, but I would like the replica to run the latest version, 3.2.9.
Is the only way for me to do this to upgrade the current node to version 3.2.9 and then add the replica?
My plan would be to sync all the data to the new node, make it primary, then update the old node to the latest version. Is this possible?
Yes, you can create a new node as a replica and then update the old node.
A few things to keep in mind:
The default storage engine for 3.2.9 is WiredTiger, while for 2.4.8 it is MMAPv1, so you will have to change the configuration of the new node so that you can keep using MMAPv1 as your storage engine.
Do the replication very carefully; if it is not done properly, there is a chance the whole database gets blown away. I recommend taking a backup of the database before setting up replication.
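For reference, pinning MMAPv1 on the 3.2.9 node is a one-line setting in its YAML config; a minimal sketch, assuming the default dbPath:

```yaml
# mongod.conf on the 3.2.9 node (a sketch; dbPath is a placeholder)
storage:
  dbPath: /data/db
  engine: mmapv1   # keep the 2.x storage engine instead of the 3.2 default (WiredTiger)
```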
I would definitely go with the first method you mentioned: upgrade the current standalone database and then create a replica set. I tried to find the best practice from MongoDB, but I couldn't find an answer, so I asked Adam, an ex-employee of MongoDB and creator of the M202 course, for his opinion.
Source: Adam, ex-employee of MongoDB
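For reference, converting the upgraded standalone into a replica set might look like this; a sketch only, where the set name rs0 and the hostname are placeholders:

```sh
# restart the upgraded standalone with a replica set name
mongod --dbpath /data/db --port 27017 --replSet rs0

# initiate the set and add the new member from the shell
mongo --port 27017 --eval 'rs.initiate()'
mongo --port 27017 --eval 'rs.add("new-replica.example.com:27017")'
```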
I went the route of a full mongo backup and then a restore into the new nodes.
Replication from the old version to the new was very fragile, and the backup is very fast to do as long as you are allowed to bring the server down.
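A minimal sketch of that backup-and-restore route with the stock tools (hostnames and paths are placeholders):

```sh
# dump everything from the old node while it is quiesced
mongodump --host old-node.example.com:27017 --out /backup/dump

# restore the dump into the new node
mongorestore --host new-node.example.com:27017 /backup/dump
```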
In our setup, the primary, secondary, and arbiter each run on their own physical server.
The mongod version is 4.2.3.
Because too many documents had accumulated in a specific collection, some of them were deleted, oldest first.
However, deleting the documents did not release the storage space.
Upon checking, I found that MongoDB's storage mechanism retains the freed bytes for reuse even after documents are deleted.
I also found that the unused disk space can be released with the compact command on the WiredTiger engine.
Currently, all clients connect to the database using the arbiter's IP and port.
Since the deployment uses only replication, not sharding, I expect that if we run the compact command on each member independently, the arbiter will route queries to whichever instances are currently available, even while one instance is locked.
Is this possible?
Or should I shut down each instance, run it standalone, run the compact command, and then reconfigure the PSA set?
You may upgrade your MongoDB to the latest version, 4.4. From the documentation of compact:
Blocking
Changed in version 4.4.
Starting in v4.4, on WiredTiger, compact only blocks the following metadata operations:
db.collection.drop
db.collection.createIndex and db.collection.createIndexes
db.collection.dropIndex and db.collection.dropIndexes
compact does not block MongoDB CRUD Operations for the database it is currently operating on.
Before v4.4, compact blocked all operations for the database it was compacting, including MongoDB CRUD Operations, and was therefore recommended for use only during scheduled maintenance periods.
Starting in v4.4, the compact command is appropriate for use at any time.
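For reference, compact is issued per collection from the shell on the member you want to reclaim space from; a minimal sketch (the collection name is a placeholder):

```js
// run from the database that owns the collection
db.runCommand({ compact: "mycollection" })

// on pre-4.4 versions, running this on a primary additionally requires force: true;
// in a replica set, also see the caveat below about the member entering a recovery state
```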
To anyone looking for the answer with 4.4: please see this bug and the documentation entry, as the compact routine still forces the node into a recovery state if you are running a replica set (and I assume this is the default use case for most projects).
The current version is 9.4.20 and I want to upgrade to 9.5.x. I am wondering about the right procedure for this, since my Postgres instance has a read replica, so it is a bit more than just a Modify. Downtime is acceptable; a seamless upgrade is NOT required. The docs on the AWS side are not clear. (Screenshot of the AWS docs: https://i.stack.imgur.com/WlCqV.png)
Here are the proposed steps, yet I can't figure out how to perform the second step:
1. take a snapshot of the primary instance
2. stop replication
3. upgrade the primary instance
4. upgrade the read replica
5. promote the read replica and start replication again
I will just post what I did as an answer:
1. snapshot the primary
2. delete the old read replica
3. upgrade the primary
4. create a new read replica, done
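A sketch of those steps with the AWS CLI, where the instance identifiers and the target engine version are placeholders:

```sh
# 1. snapshot the primary
aws rds create-db-snapshot \
    --db-instance-identifier mydb \
    --db-snapshot-identifier mydb-pre-upgrade

# 2. delete the old read replica
aws rds delete-db-instance \
    --db-instance-identifier mydb-replica \
    --skip-final-snapshot

# 3. upgrade the primary (pick a 9.5.x version available in your region)
aws rds modify-db-instance \
    --db-instance-identifier mydb \
    --engine-version 9.5.15 \
    --allow-major-version-upgrade \
    --apply-immediately

# 4. create a new read replica from the upgraded primary
aws rds create-db-instance-read-replica \
    --db-instance-identifier mydb-replica \
    --source-db-instance-identifier mydb
```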
The RDS for PostgreSQL doc https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/USER_UpgradeDBInstance.PostgreSQL.html states:
"A read replica can't undergo a major version upgrade but the read replica's source instance can. If a read replica's source instance undergoes a major version upgrade, all read replicas for that source instance remain with the previous engine version. In this case, the read replicas can no longer replicate changes performed on the source instance.
We recommend that you either promote your read replicas, or delete and recreate them after the source instance has upgraded to a different major version."
When you initiate a major version upgrade for an RDS for PostgreSQL instance with one or more read replicas, the replication will be automatically stopped, and it won't be restarted after the primary (source) is finished upgrading. You will need to create new read replicas after upgrading the source database instance.
We have a 3-member MongoDB replica set running on mLab for a production website. We want to move the database to a new replica set hosted in our own Google Cloud account.
My current idea is to do the following steps:
1. use dump/restore to copy a snapshot of the current database to the new replica set on Google Cloud
2. use the oplog to keep the new replica set in sync with the current database
3. stop writing to the current database and switch the endpoint to the new replica set
The production website can stay accessible during steps 1 and 2, and I can do step 3 at a time of my choosing to reduce downtime.
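For step 1, the stock tools might look like this; a sketch only, where the hosts, set names, and paths are placeholders:

```sh
# take a consistent snapshot of the source, also capturing the oplog
# entries written while the dump runs
mongodump --host sourceRS/db1.mlab.com:27017 --oplog --out /backup/dump

# restore into the new replica set, replaying that captured oplog slice
mongorestore --host newRS/db1.gcp.internal:27017 --oplogReplay /backup/dump
```

Note that this only covers the window of the dump itself; keeping the new set in sync beyond that point (step 2) needs a separate oplog-tailing process.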
I don't have much MongoDB DBA experience, so I am looking for suggestions on:
Does the plan above make sense?
What commands/tools should I look into to make the plan work?
Thanks in advance!
TokuMX and MongoDB are incompatible; you couldn't build a mixed replica set because they had different storage engines and spoke different replication languages. But PSMDB seems to have closed this gap (with pluggable storage engines, at least, which can allow WiredTiger). Does this mean they can now also be mixed, i.e., have the differences in replication language also been rectified?
I ask because I've got a very old TokuMX system with important data on it and MUST bring it into a MongoDB cluster, but there seems to be no simple way to do this. If I can migrate TokuMX -> PSMDB -> MongoDB, that would be fantastic! Any help would be appreciated!
I've got a very old TokuMX system with important data on it and MUST bring it into a MongoDB cluster, but there seems to be no simple way to do this.
TokuMX's replication protocol is not compatible with MongoDB or Percona servers, so migrating from TokuMX will unfortunately require dumping and restoring your data. Outside of replication, there are also some incompatible TokuMX index options to remove before restoring into MongoDB.
See Migrate from TokuMX to Percona Server for migration approaches & scripts to help with this.
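A hedged sketch of that dump-and-restore path with the stock tools (hostnames are placeholders); restoring with --noIndexRestore sidesteps index definitions that may carry TokuMX-only options such as clustering indexes, at the cost of recreating the indexes you need by hand afterwards:

```sh
# dump everything from the TokuMX node
mongodump --host tokumx-node.example.com:27017 --out /backup/tokumx-dump

# restore data only, leaving out index definitions that may carry
# TokuMX-specific options
mongorestore --host mongodb-node.example.com:27017 --noIndexRestore /backup/tokumx-dump
```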
If I can migrate TokuMX -> PSMDB -> MongoDB, that would be fantastic!
If your goal is to migrate to MongoDB Community or Enterprise edition, an intermediate migration via PSMDB will not provide any benefit. PSMDB uses replication code from the upstream MongoDB community server, but does not provide any special migration path from TokuMX.
I need a proper failover mechanism for MongoDB on AWS EC2. I know failover can be accomplished with replica sets, but what is the best way to launch a fresh Ubuntu EC2 AMI node with MongoDB installed, add it to the replica set automatically (with zero manual operation), and return the replica set to its proper state?
EBS has some problems, but if I use local instance storage, I will lose a dead node's data. Does the replica hold all of the master's data, so that a replica alone is enough to recover everything (on mongo 1.8 with journaling), or do I have to use only EBS?
How should I start the mongo instances? If I should start with the repair option, how can I separate a node's first run from a failover restart?
Regards,
The easiest way to bring up new nodes is to seed a new node with a recent backup.
So now it's a question of how you do your backup and how you restore from the backup quickly.
The MongoDB site has a write up for backups (in general) and backups on EC2 specifically. There's also a write-up for adding a new set member.
You can do this with instance storage or EBS drives, but you'll need different strategies for each. There's really no single way to do this, so I would check out the docs I've linked to for a primer.
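Once a new node is seeded from a backup, rejoining it to the set is a couple of shell calls; a minimal sketch (the hostname is a placeholder):

```js
// run on the current primary
rs.add("new-node.example.com:27017")

// watch the new member move through STARTUP2/RECOVERING to SECONDARY
rs.status()
```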
Highly recommend reading Sean Coates' article on multi-node MongoDB elections, failover and AWS - specifically, the subtlety around distributed arbiter nodes (e.g., make sure to give yourself a voting majority when an AZ goes down). A similar recommendation can be found in a comment on this (now-closed) MongoDB vs. Cassandra thread.