We have a three-server replica set running MongoDB 2.2 on Ubuntu 10.04, and recently had to upgrade the hard drive for each server where one particular database resides. This database contains log information for web service requests; the services write to collections in hourly buckets, using the current timestamp to determine the name, e.g. log_yyyymmddhh.
I performed this process:
1. Back up the database on the primary server with mongodump --db log_db.
2. Take a secondary server offline and replace the disk.
3. Bring the secondary server up in standalone mode (i.e. comment out the replSet entry in /etc/mongodb.conf before starting the service).
4. Restore the database on the secondary server with mongorestore --drop --db log_db.
5. Add the secondary server back into the replica set and bring it online, letting replication catch up the hourly buckets that were updated/created while it had been offline.
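For reference, this is roughly what the commands looked like (the backup path here is just illustrative):
# on the primary: dump the log database
mongodump --db log_db --out /backup/log_db_dump
# on the secondary, after replacing the disk and commenting out replSet in /etc/mongodb.conf:
mongorestore --drop --db log_db /backup/log_db_dump/log_db
# then re-enable the replSet entry and restart the service so the node rejoins the set
sudo service mongodb restart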
Everything seemed to go as expected, except that the collection which was the current bucket at the time of the backup was not brought up to date by replication. I had to copy that collection over by hand to get it up to date. Note that collections created after the backup were synced just fine.
What did I miss in this process that caused MongoDB not to get things back in sync for that one collection? I assume something got out of whack with regard to the oplog?
Edit 1:
The oplog on the primary showed that its earliest timestamp went back a couple of days, so there should have been plenty of space to maintain transactions for a few hours (which was the time the secondary was offline).
Edit 2:
Our MongoDB installation uses two disk partitions: /dev/sda1 and /dev/sdb1. The primary MongoDB directory /var/lib/mongodb/ is on /dev/sda1, and holds several databases, while the log database resides by itself on /dev/sdb1. There's a sym link /var/lib/mongodb/log_db which points to a directory on /dev/sdb1. Since the log db was getting full, we needed to upgrade the disk for /dev/sdb1.
You should be using mongodump with the --oplog option. Running a full database backup with mongodump on a replica set that is updating collections at the same time may not leave you with a consistent backup. This gets worse with larger databases, more collections, and more frequent updates/inserts/deletes.
From the documentation for your version (2.2) of MongoDB (it's the same for 2.6 but just to be as accurate as possible):
--oplog
Use this option to ensure that mongodump creates a dump of the
database that includes an oplog, to create a point-in-time snapshot of
the state of a mongod instance. To restore to a specific point-in-time
backup, use the output created with this option in conjunction with
mongorestore --oplogReplay.
Without --oplog, if there are write operations during the dump
operation, the dump will not reflect a single moment in time. Changes
made to the database during the update process can affect the output
of the backup.
http://docs.mongodb.org/v2.2/reference/mongodump/
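As a minimal sketch (the output directory is just an example), the pair of commands would look something like:
mongodump --oplog --out /backup/full_dump        # dump the whole instance plus a slice of the oplog
mongorestore --oplogReplay /backup/full_dump     # replay that oplog slice to restore to a single point in time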
This is not covered well in most MongoDB tutorials around backups and restores. Generally you are better off if you can perform a live snapshot of the storage volume your database resides on (assuming your storage solution has a live-snapshot capability compatible with MongoDB). Failing that, your next best bet is taking a secondary offline and then performing a snapshot or backup of the database files. Running mongodump against a live database becomes an increasingly poor option as databases grow, due to performance issues.
I'd definitely take a look at the MongoDB overview of backup options: http://docs.mongodb.org/manual/core/backups/
I would guess this has to do with the oplog not being long enough, although it seems like you checked that and it looked reasonably big.
Still, when adding new members to a replica set you shouldn't be snapshotting and restoring them. It's better to simply add a new member and let replication happen by itself. This is described in the Mongo docs and is the process I've always followed.
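For example, from the mongo shell on the primary (the hostname and port below are placeholders):
rs.add("newhost.example.com:27017")   // the new member then performs an initial sync on its own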
Is it possible to modify the MongoDB oplog and replay it?
A bug caused an update to be applied to more documents than intended, overwriting some data. The data was recovered from backup and reintegrated, so nothing was actually lost, but I was wondering if there is a way to modify the oplog to remove or modify the offending update and then replay it.
I don't have in-depth knowledge of MongoDB internals, so informative answers along the lines of "you don't understand how it works, it's like this" will also be considered for acceptance.
One of the big issues with data corruption caused by application bugs or human error is that the offending write to the primary is immediately replicated to the secondaries.
This is one of the reasons users take advantage of "slaveDelay" - an option to run one of your secondary nodes with a fixed time delay (of course, that only helps if you discover the error or bug within a window shorter than the delay on that secondary).
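As a sketch, assuming member 2 of the set is the node you want to delay by one hour, the reconfiguration from the mongo shell looks roughly like this:
cfg = rs.conf()
cfg.members[2].priority = 0       // a delayed member should never become primary
cfg.members[2].hidden = true      // keep it hidden from application reads
cfg.members[2].slaveDelay = 3600  // lag one hour behind the primary
rs.reconfig(cfg)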
In case you don't have such a set-up, you have to rely on a backup to recreate the state of the records you need to restore to their pre-bug state.
Perform all the operations on a separate stand-alone copy of your data - only after verifying that everything was properly recreated should you then move the corrected data into your production system.
What is required to be able to do this is a recent copy of the backup (let's say the backup is X hours old) and the oplog on your cluster must hold more than X hours worth of data. I didn't specify which node's oplog because (a) every member of the replica set has the same contents in the oplog and (b) it is possible that your oplog size is different on different node members, in which case you want to check the "largest" one.
So let's say your most recent backup is 52 hours old, but luckily you have an oplog that holds 75 hours worth of data (yay).
You already realized that all of your nodes (primary and secondaries) have the "bad" data, so what you would do is restore this most recent backup into a new mongod. This is where you will restore these records to what they were right before the offending update - and then you can just move them into the current primary from where they will get replicated to all the secondaries.
While restoring your backup, create a mongodump of your oplog collection via this command:
mongodump -d local -c oplog.rs -o oplogD
Move the oplog to its own directory renaming it to oplog.bson:
mkdir oplogR
mv oplogD/local/oplog.rs.bson oplogR/oplog.bson
Now you need to find the "offending" operation. You can dump the oplog out to human-readable form using the bsondump command on the oplogR/oplog.bson file (and then use grep or the like to find the "bad" update). Alternatively, you can query the original oplog in the replica set via the use local and db.oplog.rs.find() commands in the shell.
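For example, assuming the bad write was an update against mydb.mycollection (both names are placeholders), you could filter the dumped oplog from the command line:
bsondump oplogR/oplog.bson | grep mycollection    # dump the captured oplog as JSON and filter it
or query the live oplog from the mongo shell:
use local
db.oplog.rs.find({ op: "u", ns: "mydb.mycollection" }).sort({ $natural: -1 })   // most recent updates first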
Your goal is to find this entry and note its ts field.
It might look like this:
"ts" : Timestamp( 1361497305, 2789 )
Note that the mongorestore command has two options, one called --oplogReplay and the other called --oplogLimit. You will now replay this oplog on the restored stand-alone server, but you will stop before this offending update operation.
The command would be (host and port are where your newly restored backup is):
mongorestore -h host --port NNNN --oplogReplay --oplogLimit 1361497305:2789 oplogR
This will restore each operation from the oplog.bson file in the oplogR directory, stopping right before the entry with ts value Timestamp(1361497305, 2789).
Recall that the reason you are doing this on a separate instance is so you can verify that the restore and replay created correct data - once you have verified it, you can write the restored records to the appropriate place in the real primary (and allow replication to propagate the corrected records to the secondaries).
I'm in the middle of setting up a backup strategy for Mongo and was curious to know: does mongodump lock the database while performing the dump?
I found this on mongo's google group:
Mongodump does a simple query on the live system and does not require
a shutdown. Like all queries it requires a read lock while running, but
doesn't block any more than normal queries.
If you have a replica set you will probably want to use the --oplog
flag to do your backups.
See the docs for more information
http://docs.mongodb.org/manual/administration/backups/
Additionally, I found this previously asked question:
MongoDB: mongodump/restore vs. backup up files directly
Excerpt from above question
Locking and copying files is only an option when you don't have heavy
write load.
mongodump can be run against a live server. It will create some
additional load, so don't do it during peak hours. Also, it is advised to
do it on a secondary node (if you don't use replica sets, you should).
There are some complications when you have a DB so large that no
single machine can hold it. See this document.
Also, if you have a replica set, you can take down one of the secondaries and copy its files directly. See http://www.mongodb.org/display/DOCS/Backups.
Mongodump does not lock the db, which means other read and write operations will continue normally.
Actually, both mongodump and mongorestore are non-blocking. So if you want to mongodump/mongorestore a db, it is your responsibility to make sure that it really is the desired snapshot you are backing up or restoring. To do this, you must stop all other write operations while taking/restoring backups with mongodump/mongorestore. If you are running a sharded environment, it's recommended that you stop the balancer as well.
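A rough sketch of what that can look like from the mongo shell (this assumes you can tolerate briefly blocking writes on the node you dump from):
sh.stopBalancer()           // on a mongos: pause chunk migrations before the backup
db.fsyncLock()              // on the node being dumped: flush and block writes
// ... run mongodump from another terminal ...
db.fsyncUnlock()            // unblock writes once the dump has finished
sh.setBalancerState(true)   // re-enable the balancer afterwards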
I've got a MongoDB database that was once large (>3GB). Since then, documents have been deleted and I was expecting the size of the database files to decrease accordingly.
But since MongoDB keeps allocated space, the files are still large.
I read here and there that the admin command mongod --repair is used to free the unused space, but I don't have enough space on the disk to run this command.
Do you know a way I can free up the unused space?
UPDATE: with the compact command and WiredTiger it looks like the extra disk space will actually be released to the OS.
UPDATE: as of v1.9+ there is a compact command.
This command will perform a compaction "in-line". It will still need some extra space, but not as much.
MongoDB compresses the files by:
- copying the files to a new location
- looping through the documents and re-ordering / re-solving them
- replacing the original files with the new files
You can do this "compression" by running mongod --repair or by connecting directly and running db.repairDatabase().
In either case you need the space somewhere to copy the files. Now, I don't know why you don't have enough space to perform the compaction; however, you do have some options if you have another computer with more space.
Export the database to another computer with Mongo installed (using mongoexport) and then import that same database back (using mongoimport); this results in a new, more compact database. You can then stop the original mongod, replace its database files with the new ones, and you're good to go (a sketch of the commands follows the next option).
Stop the current mongod and copy the database files to a bigger computer, then run the repair on that computer. You can then move the new database files back to the original computer.
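For the first option, a minimal sketch (host, database, and collection names are placeholders; mongoexport works one collection at a time, so repeat as needed):
mongoexport --host original-host --db mydb --collection mycoll --out mycoll.json   # pull one collection as JSON
mongoimport --db mydb --collection mycoll --file mycoll.json                       # load it into the new local mongod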
There is not currently a good way to "compact in place" using Mongo. And Mongo can definitely suck up a lot of space.
The best strategy right now for compaction is to run a master-slave setup. You can then compact the slave, let it catch up, and switch them over. I know, it's still a little hairy. Maybe the Mongo team will come up with better in-place compaction, but I don't think it's high on their list. Drive space is currently assumed to be cheap (and it usually is).
It looks like Mongo v1.9+ has support for the compact in place!
> db.runCommand( { compact : 'mycollectionname' } )
See the docs here: http://docs.mongodb.org/manual/reference/command/compact/
"Unlike repairDatabase, the compact command does not require double disk space to do its work. It does require a small amount of additional space while working. Additionally, compact is faster."
I had the same problem, and solved it by simply doing this at the command line:
mongodump -d databasename                      # dump the database to ./dump/databasename
echo 'db.dropDatabase()' | mongo databasename  # drop the database so its files are released
mongorestore dump/databasename                 # restore it into freshly allocated files
Compact all collections in the current database:
db.getCollectionNames().forEach(function (collectionName) {
print('Compacting: ' + collectionName);
db.runCommand({ compact: collectionName });
});
If you need to run a full repair, use the repairpath option. Point it to a disk with more available space.
For example, on my Mac I've used:
mongod --config /usr/local/etc/mongod.conf --repair --repairpath /Volumes/X/mongo_repair
Update: Per MongoDB Core Server Ticket 4266, you may need to add --nojournal to avoid an error:
mongod --config /usr/local/etc/mongod.conf --repair --repairpath /Volumes/X/mongo_repair --nojournal
Starting with the 2.8 version of Mongo, you can use compression. The WiredTiger engine gives you three levels of compression; mmap (the default engine in 2.6) does not provide compression:
None
snappy (by default)
zlib
An example of how much space you can save with 16 GB of data can be found in this article.
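If you want to choose the block compressor explicitly, a minimal mongod.conf snippet might look like this (picking zlib here purely as an example):
storage:
  engine: wiredTiger
  wiredTiger:
    collectionConfig:
      blockCompressor: zlib   # options: none, snappy (the default), zlib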
There are two ways to solve this, depending on the storage engine.
1. MMAPv1 engine:
command: db.repairDatabase()
NOTE: repairDatabase requires free disk space equal to the size of your current data set plus 2 gigabytes. If the volume that holds dbpath lacks sufficient space, you can mount a separate volume and use that for the repair. When mounting a separate volume for repairDatabase you must run repairDatabase from the command line and use the --repairpath switch to specify the folder in which to store temporary repair files.
E.g. if the DB size is 120 GB, that means (120 * 2) + 2 = 242 GB of hard disk space is required.
Another way is to do it collection by collection:
command: db.runCommand({compact: 'collectionName'})
2. WiredTiger:
This is resolved automatically by the engine itself.
There has been considerable confusion over space reclamation in MongoDB, and some recommended practices are downright dangerous in certain deployment types. More details below:
TL;DR: repairDatabase attempts to salvage data from a standalone MongoDB deployment that is trying to recover from disk corruption. If it recovers space, that is purely a side effect. Recovering space should never be the primary consideration for running repairDatabase.
Recover space in a standalone node
WiredTiger: For a standalone node with WiredTiger, running compact will release space to the OS, with one caveat: The compact command on WiredTiger on MongoDB 3.0.x was affected by this bug: SERVER-21833 which was fixed in MongoDB 3.2.3. Prior to this version, compact on WiredTiger could silently fail.
MMAPv1: Due to the way MMAPv1 works, there is no safe and supported method to recover space using the MMAPv1 storage engine. compact in MMAPv1 will defragment the data files, potentially making more space available for new documents, but it will not release space back to the OS.
You may be able to run repairDatabase if you fully understand the consequences of this potentially dangerous command (see below), since repairDatabase essentially rewrites the whole database by discarding corrupt documents. As a side effect, this will create new MMAPv1 data files without any fragmentation on it and release space back to the OS.
For a less adventurous method, running mongodump and mongorestore may be possible as well in an MMAPv1 deployment, subject to the size of your deployment.
Recover space in a replica set
For replica set configurations, the best and the safest method to recover space is to perform an initial sync, for both WiredTiger and MMAPv1.
If you need to recover space from all nodes in the set, you can perform a rolling initial sync. That is, perform an initial sync on each of the secondaries before finally stepping down the primary and performing an initial sync on it. The rolling initial sync method is the safest way to perform replica set maintenance, and as a bonus it involves no downtime.
Please note that the feasibility of doing a rolling initial sync also depends on the size of your deployment. For extremely large deployments, it may not be feasible to do an initial sync, and thus your options are somewhat more limited. If WiredTiger is used, you may be able to take one secondary out of the set, start it as a standalone, run compact on it, and rejoin it to the set.
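A rough outline of that standalone-compact approach (the port, dbpath, and names below are placeholders):
# stop the secondary, then restart it without the replSet option on a temporary port
mongod --dbpath /var/lib/mongodb --port 37017
# connect and compact the collections you care about
mongo --port 37017 --eval 'db.getSiblingDB("mydb").runCommand({ compact: "mycoll" })'
# shut it down and restart it with its original replica set configuration so it rejoins and catches up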
Regarding repairDatabase
Please don't run repairDatabase on replica set nodes. This is very dangerous, as mentioned in the repairDatabase page and described in more details below.
The name repairDatabase is a bit misleading, since the command doesn't attempt to repair anything. The command was intended to be used when there's disk corruption on a standalone node, which could lead to corrupt documents.
The repairDatabase command could be more accurately described as "salvage database". That is, it recreates the databases by discarding corrupt documents in an attempt to get the database into a state where you can start it and salvage intact document from it.
In MMAPv1 deployments, this rebuilding of the database files releases space to the OS as a side effect. Releasing space to the OS was never the purpose.
Consequences of repairDatabase on a replica set
In a replica set, MongoDB expects all nodes in the set to contain identical data. If you run repairDatabase on a replica set node, there is a chance that the node contains undetected corruption, and repairDatabase will dutifully remove the corrupt documents for you.
Predictably, this leaves that node with a different dataset from the rest of the set. If an update then happens to hit one of those removed documents, the whole set could crash.
To make matters worse, it is entirely possible that this situation could stay dormant for a long time, only to strike suddenly with no apparent reason.
If a large chunk of data is deleted from a collection and the collection never reuses the deleted space for new documents, that space needs to be returned to the operating system so it can be used by other databases or collections. You will need to run a compact or repair operation to defragment the disk space and regain the usable free space.
The behavior of the compaction process depends on the MongoDB storage engine, as follows:
db.runCommand({ compact: "collection-name" })
MMAPv1
The compaction operation defragments data files and indexes. However, it does not release space to the operating system. The operation is still useful for defragmenting and creating more contiguous space for reuse by MongoDB. It is of no use, though, when free disk space is very low.
Up to 2 GB of additional disk space is required during the compaction operation.
A database-level lock is held during the compaction operation.
WiredTiger
The WiredTiger engine provides compression by default which consumes less disk space than MMAPv1.
The compact process releases the free space to the operating system.
Minimal disk space is required to run the compact operation.
WiredTiger also blocks all operations on the database, as it needs a database-level lock.
For the MMAPv1 engine, compact does not return space to the operating system; you need to run a repair operation to release the unused space.
db.runCommand({repairDatabase: 1})
MongoDB 3.0 and higher has a new storage engine: WiredTiger.
In my case, switching engines reduced disk usage from 100 GB to 25 GB.
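Switching engines is essentially a dump-and-restore; a rough sketch, assuming MongoDB 3.0+ and an empty new data directory:
mongodump --out /backup/alldata                        # dump everything from the old MMAPv1 instance
mongod --storageEngine wiredTiger --dbpath /data/wt    # start a fresh instance on WiredTiger
mongorestore /backup/alldata                           # restore the dump into the new engine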
Database files cannot be shrunk in place; while "repairing" a database, the mongo server can only delete some of its files. If a large amount of data has been deleted, the mongo server will, during repair, "release" (delete) some of its existing files.
In general compact is preferable to repairDatabase. But one advantage of repair over compact is that you can issue repair to the whole cluster; with compact you have to log into each shard, which is kind of annoying.
When I had the same problem, I stopped my mongo server and started it again with the command
mongod --repair
Before running the repair operation you should check that you have enough free space on your HDD (at minimum, the size of your database).
For standalone mode you could use compact or repair.
For a sharded cluster or replica set, in my experience, after running compact on the primary, followed by compact on the secondary, the size of the primary's database was reduced, but not the secondary's.
You might want to resync the member to reduce the size of the secondary's database. By doing this you might find that the secondary's database ends up even smaller than the primary's; I guess the compact command does not really compact the collection.
So I ended up switching the primary and secondary of the replica set and resyncing the member again.
My conclusion is that the best way to reduce the size of a sharded cluster/replica set is to resync a member, switch primary and secondary, and resync again.
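Resyncing a member is typically done by emptying its data directory so it performs a fresh initial sync; a rough sketch (the service name and dbpath assume a default Linux install):
sudo service mongod stop        # stop the secondary you want to resync
rm -rf /var/lib/mongodb/*       # empty its data directory (make sure the other members are intact!)
sudo service mongod start       # on restart it performs an initial sync from the rest of the set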
MongoDB --repair is not recommended in the case of a sharded cluster.
If you are using a replica set or sharded cluster, use the compact command; it rewrites and defragments the data and index files of a collection.
syntax:
db.runCommand( { compact : "collection_name" } )
When used with force: true, compact runs on the primary of a replica set,
e.g. db.runCommand( { compact : "collection_name", force : true } )
Other points to consider:
- It blocks operations, so it is recommended to execute it during a maintenance window.
- If the replica set members run on different servers, it needs to be executed on each member separately.
- In the case of a sharded cluster, compact needs to be executed on each shard member separately. It cannot be executed against a mongos instance.
Just one way that I was able to do it. No guarantee on the safety of your existing data; try it at your own risk.
Delete the data files directly and restart mongod.
For example, on Ubuntu (default path to data: /var/lib/mongodb), I had a couple of files with names like collection.#. I kept collection.0 and deleted all the others.
This seems an easier way if you don't have serious data in the database.