How to get a consistent MongoDB backup for a single-node setup

I'm using MongoDB in a pretty simple setup and need a consistent backup strategy. I found out the hard way that wrapping a mongodump in a lock/unlock is a bad idea. Then I read that the --oplog option should provide consistency without lock/unlock. However, when I tried it, mongodump said the --oplog option can only be used on a "full dump." I've poked around the docs and lots of articles, but it's still unclear how to dump a Mongo database from a single point in time.
For now I'm just going with a normal dump, but I'm assuming that if there are writes during the dump, the backup won't be from a single point in time. Correct?
mongodump -h $MONGO_HOST:$MONGO_PORT -d $MONGO_DATABASE -o ./${EXPORT_FILE} -u backup -p password --authenticationDatabase admin

In production environments, MongoDB is typically deployed as a replica set to ensure redundancy and high availability. If you are running a standalone mongod instance, there are a few options for a point-in-time backup.
One option, as you mentioned, is to do a mongodump with the --oplog option. However, this option is only available if you are running a replica set. You can easily convert a standalone mongod instance to a single-node replica set without adding any new members. Please check the following document for details.
http://docs.mongodb.org/manual/tutorial/convert-standalone-to-replica-set/
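For illustration, the conversion is roughly the following (port and dbpath are placeholders; see the linked tutorial for the authoritative steps):
mongod --port 27017 --dbpath /data/db --replSet rs0
mongo --port 27017 --eval 'rs.initiate()'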
This way, if there are writes while mongodump is running, they will be part of your backup. Please see the Point in Time Operation Using Oplogs section at the following link.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-binary-database-dumps/#point-in-time-operation-using-oplogs
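The dump/restore pair then looks roughly like this (host and output directory are placeholders; note that --oplog requires a full dump, i.e. no -d/--db):
mongodump --host $MONGO_HOST:$MONGO_PORT --oplog -o ./dump
mongorestore --host $MONGO_HOST:$MONGO_PORT --oplogReplay ./dump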
Be aware that using mongodump and mongorestore to back up and restore MongoDB can be slow.
A file system snapshot is another option. The following link details two snapshot options for performing a hot backup of MongoDB.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-filesystem-snapshots/
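For example, with LVM (the volume group and snapshot names here are placeholders, and journaling must be enabled for the snapshot to be consistent):
lvcreate --size 100M --snapshot --name mdb-snap01 /dev/vg0/mongodb
# archive the snapshot, then drop it
dd if=/dev/vg0/mdb-snap01 | gzip > mdb-snap01.gz
lvremove /dev/vg0/mdb-snap01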
You can also look into the MongoDB Backup Service.
http://www.10gen.com/products/mongodb-backup-service
In addition, mongodump with the --oplog option does not currently work with a single db/collection. There are plans to implement the feature; you can follow the ticket below and vote for it.
https://jira.mongodb.org/browse/SERVER-4273

Related

Can I restore data from mongo oplog?

My MongoDB was hacked today. All the data was deleted, and the hacker demands payment to return it. I will not pay him, because I know he will not send my database back.
But I had the oplog turned on, and I see it contains over 300,000 documents recording all operations.
Is there any tool that can restore my data from these logs?
Depending on how far back your oplog goes, you may be able to restore the deployment. I would recommend taking a backup of the current state of your dbpath first, just in case.
Note that there are many variables in play for a restore like this, so success is never guaranteed. It can be done using mongodump and mongorestore, but only if your oplog goes back to the beginning of time (i.e. to when the deployment was first created). If it does, you may be able to restore your data; if it does not, you'll see errors during the process.
Step 1: Secure your deployment before doing anything else. This situation arises from a lack of security; MongoDB has extensive security features. Check out the Security Checklist page for details.
Step 2: Dump the oplog collection:
mongodump --host <old_host> --username <user> --password <pwd> -d local -c oplog.rs -o oplogDump
Step 3: Check the content of the oplog to determine the timestamp at which the offending drop operation occurred:
bsondump oplogDump/local/oplog.rs.bson
You're looking for a line approximately like this:
{"ts":{"$timestamp":{"t":1502172266,"i":1}},"t":{"$numberLong":"1"},"h":{"$numberLong":"7041819298365940282"},"v":2,"op":"c","ns":"test.$cmd","o":{"dropDatabase":1}}
This line means that a dropDatabase() command was executed on the test database.
Keep note of the t value in {"$timestamp":{"t":1502172266,"i":1}}.
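If the oplog dump is large, a quick way to find such a line (the path comes from Step 2; the grep pattern is just an illustration):
bsondump oplogDump/local/oplog.rs.bson | grep '"dropDatabase"'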
Step 4: Restore to a secure new deployment:
mongorestore --host <new_host> --username <user> --password <pwd> --oplogReplay --oplogLimit=1502172266 --oplogFile=oplogDump/local/oplog.rs.bson oplogDump
Note the parameter to --oplogLimit, which tells mongorestore to stop replaying the oplog once it hits that timestamp (the timestamp of the dropDatabase command found in Step 3).
The --oplogFile parameter is new in MongoDB 3.4. For older versions, you would need to copy oplogDump/local/oplog.rs.bson into the root of the dump directory as a file named oplog.bson (e.g. oplogDump/oplog.bson) and remove the --oplogFile parameter from the command above.
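On pre-3.4 tools, that workaround would look roughly like this:
cp oplogDump/local/oplog.rs.bson oplogDump/oplog.bson
mongorestore --host <new_host> --username <user> --password <pwd> --oplogReplay --oplogLimit=1502172266 oplogDump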
After Step 4, if your oplog goes back to the beginning of time and you stopped the replay at the right timestamp, you should see your data as it was just before the dropDatabase command was executed.

How does restoring a db backup affect the oplog?

I have a standalone MongoDB instance with many databases in it. I am only concerned with backing up/restoring one of those databases; let's call it DbOne.
Using the instructions in (http://www.mongodb.com/blog/post/dont-let-your-standalone-mongodb-server-stand-alone), I can create an oplog on this standalone server.
Using the tool Tayra, I can record/store the oplog entries. Being able to create incremental backups is the main reason I enabled the oplog on my standalone instance.
I intend to take full backups once a day, using the command
mongodump --db DbOne --oplog
From my understanding, this backup will contain a point-in-time snapshot of my db.
Assuming I want to discard all updates since this backup, I delete all the backed-up oplog entries and restore only this full backup, using the command
mongorestore --drop --db DbOne --oplogReplay
At this point, do I need to do something to the oplog collection in the local db? Will MongoDB automatically drop the entries pertaining to this db from the oplog? Because if not, wouldn't Tayra end up finding those oplog entries and backing them up all over again?
Tbh, I haven't tried this yet on my machine. I'm hoping someone can point me to a document that describes the supported/expected behaviour in this scenario.
I experimented with a MongoDB server set up as a replica set with only one member shortly after asking the question, but forgot to answer it.
I took a backup using mongodump --db DbOne --oplog and then executed some additional updates. Keeping the MongoDB server as is, i.e. still running under replication, running the mongorestore command created thousands of oplog entries, one for each document of each collection in the db. It was a big mess.
The alternative was to shut down MongoDB and start it as a standalone instance (i.e. not running as a replica set). If I then restored using mongorestore, the oplog wasn't touched. This was also bad, because the oplog now contained entries that were not present in the actual collections.
I wanted a mechanism that would restore both my database and the oplog info in the local database to the time the backup took place, but mongodump doesn't back up the local database.
Eventually I stopped using mongodump and instead switched to backing up the entire data directory (after stopping MongoDB). Once we switch to AWS, I can use the EBS snapshot feature to do the same.
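A minimal sketch of that file-copy approach (the dbpath and archive name are assumptions; mongod --shutdown is Linux-only):
mongod --dbpath /data/db --shutdown
tar czf dbone-backup-$(date +%F).tar.gz /data/db
# restart mongod with its usual options once the archive is written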
I understand you want a link to the docs about mongorestore:
http://docs.mongodb.org/manual/reference/program/mongorestore/
From what I understand, you want to make a point-in-time backup and then restore that backup. The commands you listed above will do that:
1) mongodump --db DbOne --oplog
2) mongorestore --drop --db DbOne --oplogReplay
However, please note that the "point in time" the backup is effectively taken at is when the dump ends, not when the command started. This is a fine detail that might not matter to you, but it is included for completeness.
Let me know if there is anything else.
Best,
Charlie

Mongodump with --oplog for hot backup

I'm looking for the right way to back up a MongoDB replica set (non-sharded).
From reading the MongoDB documentation, I understand that "mongodump --oplog" should be enough, even on a replica (slave) server.
From the mongodump documentation:
--oplog
Use this option to ensure that mongodump creates a dump of the database that includes an oplog, to create a point-in-time snapshot of the state of a mongod instance. To restore to a specific point-in-time backup, use the output created with this option in conjunction with mongorestore --oplogReplay.
Without --oplog, if there are write operations during the dump operation, the dump will not reflect a single moment in time. Changes made to the database during the update process can affect the output of the backup.
I'm still having a very hard time understanding how MongoDB can keep writing to the database and still make a consistent backup, even with --oplog.
Should I lock my collections first, or is it safe to run "mongodump --oplog"?
Is there anything else I should know about?
Thanks.
The following document explains how mongodump with the --oplog option works to create a point-in-time backup.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-binary-database-dumps/
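In practice it looks roughly like this (hostnames are placeholders; pointing mongodump at a secondary keeps the load off the primary):
mongodump --host secondary.example.net --port 27017 --oplog -o ./dump
mongorestore --oplogReplay ./dump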
However, using mongodump and mongorestore to back up and restore MongoDB can be slow. If a file system snapshot is an option, you may want to consider using snapshots for MongoDB backups. The following link details two snapshot options for performing a hot backup of MongoDB.
http://docs.mongodb.org/manual/tutorial/backup-databases-with-filesystem-snapshots/
You can also look into the MongoDB Backup Service.
http://www.10gen.com/products/mongodb-backup-service

Initial sync of MongoDB and Elasticsearch

What is an easy way to do the initial sync between MongoDB and Elasticsearch? I use https://github.com/richardwilly98/elasticsearch-river-mongodb to sync any updates. The river works by tracking the changes in the MongoDB replica set logs and applying them to ES, but how can I sync what is already in MongoDB to Elasticsearch?
A proposed solution I have seen is to dump (mongodump) the data and restore it (mongorestore), but I'm not sure of its impact on a live Mongo database.
That is actually the solution.
mongodump -u root -p 'yourpassword' --oplog
The --oplog flag will also copy the oplog (the replication log), which I think is required for your script to work.
After that, you run mongorestore on the other side:
mongorestore --oplogReplay
Another solution is to use the "OplogReplay" script instead of the one you are using.
This script does the initial sync from source to destination automatically when you run it for the first time:
https://pypi.python.org/pypi/OplogReplay/
I recommend that you download the latest code directly from GitHub:
https://github.com/uberVU/mongo-oplogreplay
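For reference, a minimal install sketch based on the two links above:
pip install OplogReplay
# or, for the latest code:
git clone https://github.com/uberVU/mongo-oplogreplay.git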

Does mongodump lock the database?

I'm in the middle of setting up a backup strategy for Mongo and was just curious whether mongodump locks the database before performing the dump.
I found this on Mongo's Google group:
Mongodump does a simple query on the live system and does not require a shutdown. Like all queries, it requires a read lock while running but doesn't block any more than normal queries.
If you have a replica set, you will probably want to use the --oplog flag to do your backups.
See the docs for more information
http://docs.mongodb.org/manual/administration/backups/
Additionally, I found this previously asked question:
MongoDB: mongodump/restore vs. backup up files directly
An excerpt from the above question:
Locking and copying files is only an option when you don't have a heavy write load.
mongodump can be run against a live server. It will create some additional load, so don't do it during peak hours. Also, it is advised to run it on a secondary node (if you don't use replica sets, you should).
There are some complications when you have a DB so large that no single machine can hold it. See this document.
Also, if you have a replica set, you can take down one of the secondaries and copy its files directly. See http://www.mongodb.org/display/DOCS/Backups.
mongodump does not lock the db, meaning other read and write operations will continue normally.
Actually, both mongodump and mongorestore are non-blocking. So if you want to mongodump/mongorestore a db, it's your responsibility to make sure you really get the desired snapshot backup/restore. To do this, you must stop all other write operations while taking/restoring backups with mongodump/mongorestore. If you are running a sharded environment, it is also recommended that you stop the balancer.
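For example, one way to do what this answer describes (hosts are placeholders; db.fsyncLock() blocks writes, so use it only if a brief write stall is acceptable):
mongo --host <mongos> --eval 'sh.stopBalancer()'
mongo --host <member> --eval 'db.fsyncLock()'
mongodump --host <member> -o ./dump
mongo --host <member> --eval 'db.fsyncUnlock()'
mongo --host <mongos> --eval 'sh.setBalancerState(true)'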