Mongorestore seems to run out of memory and kills the mongo process - mongodb

In the current setup there are two Mongo Docker containers, running on hosts A and B, with Mongo version 3.4 and running as a replica set. I would like to upgrade them to 3.6 and add a member, so the containers would run on hosts A, B and C. The containers have an 8GB memory limit and no swap allocated (currently), and are administered in Rancher. So my plan was to boot up three new containers, initialize a replica set for them, take a dump from the 3.4 container, and restore it to the new replica set master.
Taking the dump went fine, and its size was about 16GB. When I try to restore it to the new 3.6 master, the restore starts fine, but after roughly 5GB of data has been restored, the mongo process seems to be killed by the OS/Rancher; while the container itself doesn't restart, the MongoDB process just crashes and reloads itself back up again. If I run mongorestore against the same database again, it reports a unique key error for all the already inserted entries and then continues where it left off, only to do the same thing again after another 5GB or so. So it seems that mongorestore loads all the entries it restores into memory.
So I need some kind of solution to this. My options so far:
Every time it crashes, just rerun the mongorestore command so it continues where it left off. It should probably work, but I feel a bit uneasy doing it.
Restore the database one collection at a time, but the largest collection is bigger than 5GB, so it wouldn't work properly either.
Add swap or physical memory (temporarily) to the container so the process doesn't get killed after the process runs out of physical memory.
Something else, hopefully a better solution?

Increasing the swap size, as the other answer pointed out, worked for me. Also, the --numParallelCollections option controls the number of collections mongodump/mongorestore should dump/restore in parallel. The default is 4, which may consume a lot of memory.
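For example, something like the following should lower peak memory use at the cost of a slower restore (the dump path is a placeholder):
mongorestore --numParallelCollections=1 /path/to/dump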

Since mongorestore successfully continues where it left off, it doesn't sound like you're running out of disk space, so focusing on memory is the right approach. You're definitely running out of memory during the mongorestore process.
I would highly recommend going with the swap space, as this is the simplest, most reliable, least hacky, and arguably the most officially supported way to handle this problem.
Alternatively, if you're for some reason completely opposed to using swap space, you could temporarily use a node with a larger amount of memory, perform the mongorestore on this node, allow it to replicate, then take the node down and replace it with a node that has fewer resources allocated to it. This option should work, but could become quite difficult with larger data sets and is pretty overkill for something like this.
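If you do go the swap route, a typical way to add swap on a Linux host looks roughly like this (the 8G size and /swapfile path are placeholders; whether the container can actually use host swap also depends on your Docker/Rancher memory settings):
sudo fallocate -l 8G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile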

I solved the OOM problem by using the --wiredTigerCacheSizeGB parameter of mongod. Excerpt from my docker-compose.yaml below:
version: '3.6'
services:
  db:
    container_name: db
    image: mongo:3.2
    volumes:
      - ./vol/db/:/data/db
    restart: always
    # use 1.5GB for cache instead of the default (Total RAM - 1GB)/2:
    command: mongod --wiredTigerCacheSizeGB 1.5
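To confirm the limit took effect, one option is to query serverStatus from inside the container (assuming the container name db from the compose file above); it should report roughly 1.5GB in bytes:
docker exec db mongo --quiet --eval 'db.serverStatus().wiredTiger.cache["maximum bytes configured"]'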

Just documenting here my experience in 2020 using mongodb 4.4:
I ran into this problem restoring a 5GB collection on a machine with 4GB mem. I added 4GB swap, which seemed to work; I was no longer seeing the KILLED message.
However, a while later I noticed I was missing a lot of data! Turns out if mongorestore runs out of memory during the final step (at 100%) it will not show killed, BUT IT HASN'T IMPORTED YOUR DATA.
You want to make sure you see this final line:
[########################] cranlike.files.chunks 5.00GB/5.00GB (100.0%)
restoring indexes for collection cranlike.files.chunks from metadata
finished restoring cranlike.files.chunks (23674 documents, 0 failures)
34632 document(s) restored successfully. 0 document(s) failed to restore.
In my case I needed 4GB mem + 8GB swap, to import 5GB GridFS collection.
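One way to double-check that nothing was silently dropped is to compare the collection's document count against the "restored successfully" line, for example (a sketch, assuming the cranlike database from the log above):
mongo --quiet cranlike --eval 'db.getCollection("files.chunks").countDocuments({})'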

Rather than starting up a new replica set, it's possible to do the entire expansion and upgrade without even going offline.
Start MongoDB 3.6 on host C
On the primary (currently A or B), add node C into the replica set
Node C will do an initial sync of the data; this may take some time
Once that is finished, take down node B; your replica set has two working nodes still (A and C) so will continue uninterrupted
Replace v3.4 on node B with v3.6 and start back up again
When node B is ready, take down node A
Replace v3.4 on node A with v3.6 and start back up again
You'll be left with the same replica set running as before, but now with three nodes all running v3.6.
PS Be sure to check out the documentation on Upgrade a Replica Set to 3.6 before you start.
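A minimal sketch of adding node C and watching the initial sync from the mongo shell (hostnames and port are placeholders):
# on the current primary, add the new 3.6 node
mongo --eval 'rs.add("hostC:27017")'
# repeat until hostC reports SECONDARY
mongo --eval 'rs.status().members.forEach(function(m) { print(m.name, m.stateStr); })'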

I ran into a similar issue running 3 nodes on a single machine (8GB RAM total) as part of testing a replica set. The default storage cache size is .5 * (Total RAM - 1GB). mongorestore caused each node to use the full cache size on restore and consume all available RAM.
I am using ansible to template this part of mongod.conf, but you can set your cacheSizeGB to any reasonable amount so multiple instances do not consume the RAM.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: {{ ansible_memtotal_mb / 1024 * 0.2 }}
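For example, on the 8GB machine above this template works out to roughly 8 × 0.2 ≈ 1.6GB of cache per mongod, so the three nodes together use about 4.8GB of cache and leave headroom for the rest of the system.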

My scenario is similar to @qwertz's, but to get all collections into my database I created the following script to handle partial uploads; restoring every collection one by one, instead of trying to send the whole database at once, was the only way to populate it properly.
populate.sh:
#!/bin/bash
backupFiles=`ls ./backup/${DB_NAME}/*.bson.gz`
for file in $backupFiles
do
  # strip the leading "." so the path becomes /backup/<db>/<collection>.bson.gz
  file="${file:1}"
  # derive the collection name from the file name
  collection=(${file//./ })
  collection=(${collection//// })
  collection=${collection[2]}
  mongorestore \
    $file \
    --gzip \
    --db=$DB_NAME \
    --collection=$collection \
    --drop \
    --uri="${DB_URI}://${DB_USERNAME}:${DB_PASSWORD}@${DB_HOST}:${DB_PORT}/"
done
Dockerfile.mongorestore:
FROM alpine:3.12.4
RUN [ "apk", "add", "--no-cache", "bash", "mongodb-tools" ]
COPY populate.sh .
ENTRYPOINT [ "./populate.sh" ]
docker-compose.yml:
...
  mongorestore:
    build:
      context: .
      dockerfile: Dockerfile.mongorestore
    restart: on-failure
    environment:
      - DB_URI=${DB_URI}
      - DB_NAME=${DB_NAME}
      - DB_USERNAME=${DB_USERNAME}
      - DB_PASSWORD=${DB_PASSWORD}
      - DB_HOST=${DB_HOST}
      - DB_PORT=${DB_PORT}
    volumes:
      - ${PWD}/backup:/backup
...
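To run it, something like the following should work, assuming the DB_* variables are exported in your shell or defined in an .env file next to the compose file:
docker-compose build mongorestore
docker-compose run --rm mongorestore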

Related

Does mongod run in a single thread when performing a mongorestore?

When monitoring a mongodb restore, I'm tracking two processes...
CPU COMMAND
100% mongod --config /etc/mongo/mongod.conf
0% mongorestore /data/dump
MongoDB is 4.4.14 and mongorestore is version 100.5.3. I'm running it inside a Docker container.
I never see mongod go past 100%. Is there a way to allow it to use more than a single core when performing a mongorestore?
By default mongorestore will restore at most 4 collections in parallel, with a single insertion worker per collection (since you see only two processes running in parallel, perhaps you have only 2 collections in your database), but you can increase these parameters if you have more collections and would like to restore faster; check the official docs for details:
--numParallelCollections (default: 4)
Number of collections mongorestore should restore in parallel.
--numInsertionWorkersPerCollection (default: 1)
Specifies the number of insertion workers to run concurrently per collection.
For large imports, increasing the number of insertion workers may increase the speed of the import.
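For example (the values are illustrative, and higher parallelism also increases memory use on the server):
mongorestore --numParallelCollections=8 --numInsertionWorkersPerCollection=4 /data/dump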

mongorestore failing on shard cluster v4.2.x | Error: "panic: close of closed channel"

Scenario:
I had a standalone MongoDB server v3.4.x with several DBs and collections. As the plan was to upgrade to the latest 4.2.x, I created a mongo dump of all DBs.
Then I created a sharded cluster consisting of a config server (replica set), a shard-1 server (replica set) & a shard-2 server (replica set) [MongoDB v4.2.x].
Issue:
Now when I try to restore the dump, it only partially restores every time. If I try to restore a single DB, it fails with the same error. But whenever I restore a specific DB & a specific collection, it always works fine. The problem is that there are many collections across many DBs; I cannot do them all individually, and every time it fails at a different progress percentage/collection/DB.
Error:
2020-02-07T19:07:03.822+0000 [#####################...] myproduct_new.chats 68.1MB/74.8MB (91.0%)
2020-02-07T19:07:03.851+0000 [##########              ] myproduct_new.metaCrashes 216MB/502MB (42.9%)
2020-02-07T19:07:03.876+0000 [##################       ] myproduct_new.feeds 152MB/196MB (77.4%)
panic: close of closed channel
goroutine 25 [running]:
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreCollectionToDB(0xc0001a0000, 0xc000234540, 0xc, 0xc00023454d, 900, 0x7fa5503e21f0, 0xc00020b890, 0x1f66e326, 0x0, ...)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:503 +0x49b
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntent(0xc0001a0000, 0xc00022f9e0, 0x0, 0x0, 0x0, 0x0)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:311 +0xbe9
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents.func1(0xc0001a0000, 0xc000146420, 0x3)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:126 +0x1cb
created by github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:109 +0x12d
ubuntu@ip-00-xxx-xxx-00:/usr/local/backups/Dev_backup_07-02-2020$
Question:
I am connecting to mongos and trying to restore. Currently, sharding is not yet enabled for any DB. Can anyone shed some light on what's going wrong or how to restore the dump?
I had the same problem, and then found out that it was caused by a problem with my mongodb replica set.
Check rs.status() on your database.
If you get the message
Our replica set config is invalid or we are not a member of it
try this answer
We faced the exact same issue with the same spec while trying to restore from a mongodump. There is no definite reason, but it is best to check the factors below:
Check your dump size (BSON) vs the allocated free disk space on the cluster. The dump size can be 2x to 3x the size of the core Mongo data folder (which is compressed, unlike the raw BSON).
Check the oplog size configured during cluster creation. For a first-time migration, provide 10-15% of the free disk space as oplog size; you can change this after the migration. This helps the secondaries lag a bit longer and catch up on sync faster from the oplog. E.g. I allocated 3.5GB for the oplog out of a total 75GB disk, with 45GB of total (compressed) data. In a real-world usage scenario (post migration), size the oplog for 1-2 hours of write volume.
Now your total disk space requirement would be dump folder size + oplog + 6GB (default Mongo installation + system extras).
Note: if you cannot afford to allocate the dump folder size, you have to run the restore in batches (by DB or collection, using the nsInclude option), giving Mongo time to compress after importing the BSON. This should be done within minutes; see the example command after this answer.
After the restore, Mongo will shrink the data and the disk usage will more or less match your standalone data folder size.
If this is not set and your disk space is under-provisioned during the migration, then while the migration is running Mongo will try to increase disk space. It cannot do this while the primary is in use, so it tries to increase it on a secondary and demotes the current primary to secondary, which could potentially cause the above error. You can also check the hardware/status vitals to see whether the servers changed state from primary to secondary.
Also try NOT to enable server auto-upsizing while creating the cluster. I don't have a definite rationale, but we don't want any background action upgrading hardware, say from M30 to M40, because the CPU is busy in the middle of the migration (it happened to me).
Finally, as good practice, restore large databases, especially those with a single large collection > 4GB (non-sharded), separately. I had 40+ DBs, 20% of them > 15GB in dump BSON size, with 1 or 2 big collections of > 4GB and multi-million documents each. Once I separated them, it gave Mongo breathing space to bulk insert and compress them over a few minutes of elapsed time. Mongo restore happens at the collection level.
So instead of taking 40-50 minutes to restore, it took 90-120 minutes after some mock runs and ordering.
If you have time to plan, try this out. Please share any other learnings.
Key takeaways: check your dump folder size and your large collection sizes.
Rate of disk writes, oplog lag, RAM, CPU and IO are good KPIs to watch during the dump/restore.
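A sketch of the batched restore mentioned above, using --nsInclude to restore one database or one large collection at a time (namespaces and paths are placeholders):
# restore one database at a time
mongorestore --gzip --nsInclude="mydb.*" /path/to/dump
# or a single large collection on its own
mongorestore --gzip --nsInclude="mydb.bigcollection" /path/to/dump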

Mongod backup without replicate sets or db.fsyncLock()

I need to make daily backups of MongoDB database(s). I am running Mongo 3.4 on a 64-bit system, so journaling is enabled by default.
I don't have replica sets for this setup, so I want to manually (actually, via a scheduled cron job) copy or make an image of the mongo DBs from disk. I don't want to use the mongodump command.
Questions
Because of journaling, am I able to simply do a cp -R /data/db/* /backup? Or do I need to do a db.fsyncLock()? I'm not concerned about losing some small amount of data that might still be in memory when I do the cp; I care more that the copy will work if I need to use it in the future, i.e. it won't contain errors etc.
If I do need to do a db.fsyncLock() before copying the databases, what happens to reads and writes that continue to occur? If they block, will they perform as normal once db.fsyncUnlock() is called?
If I do need to use mongodump, is it true that using _ids other than normal ObjectIds will cause issues? One of my databases doesn't use normal ObjectIds.
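For reference, the lock-and-copy sequence being discussed looks roughly like this (paths are placeholders; this only illustrates the question, not a verdict on whether the lock is required):
mongo --eval 'db.fsyncLock()'
cp -R /data/db /backup/db-$(date +%F)
mongo --eval 'db.fsyncUnlock()'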

Auto compact the deleted space in mongodb?

The MongoDB documentation says that
To compact this space, run db.repairDatabase() from the mongo shell (note this operation will block and is slow).
in http://www.mongodb.org/display/DOCS/Excessive+Disk+Space
I wonder how to make MongoDB free deleted disk space automatically?
P.S. We store many downloading tasks in MongoDB, up to 20GB, and finish them within half an hour.
In general if you don't need to shrink your datafiles you shouldn't shrink them at all. This is because "growing" your datafiles on disk is a fairly expensive operation and the more space that MongoDB can allocate in datafiles the less fragmentation you will have.
So, you should try to provide as much disk-space as possible for the database.
However if you must shrink the database you should keep two things in mind.
1. MongoDB grows its data files by doubling, so the data files may be 64MB, then 128MB, etc., up to 2GB (at which point it stops doubling and keeps each file at 2GB).
2. As with most any database, to do operations like shrinking you'll need to schedule a separate job; there is no "autoshrink" in MongoDB. In fact, of the major NoSQL databases (hate that name) only Riak will autoshrink. So you'll need to create a job using your OS's scheduler to run a shrink. You could use a bash script, or have a job run a PHP script, etc.
Serverside Javascript
You can use server side Javascript to do the shrink and run that JS via mongo's shell on a regular bases via a job (like cron or the windows scheduling service) ...
Assuming a collection called foo you would save the javascript below into a file called bar.js and run ...
$ mongo foo bar.js
The javascript file would look something like ...
// Get the current collection size.
var storage = db.foo.storageSize();
var total = db.foo.totalSize();
print('Storage Size: ' + tojson(storage));
print('TotalSize: ' + tojson(total));
print('-----------------------');
print('Running db.repairDatabase()');
print('-----------------------');
// Run repair
db.repairDatabase()
// Get new collection sizes.
var storage_a = db.foo.storageSize();
var total_a = db.foo.totalSize();
print('Storage Size: ' + tojson(storage_a));
print('TotalSize: ' + tojson(total_a));
This will run and return something like ...
MongoDB shell version: 1.6.4
connecting to: foo
Storage Size: 51351
TotalSize: 79152
-----------------------
Running db.repairDatabase()
-----------------------
Storage Size: 40960
TotalSize: 65153
Run this on a schedule (during none peak hours) and you are good to go.
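For example, a crontab entry that runs the script above every Sunday at 3am might look like this (paths are placeholders):
0 3 * * 0 /usr/bin/mongo foo /path/to/bar.js >> /var/log/mongo_compact.log 2>&1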
Capped Collections
However there is one other option, capped collections.
Capped collections are fixed-size collections that have a very high-performance auto-FIFO age-out feature (age-out is based on insertion order). They are a bit like the "RRD" concept if you are familiar with that.
In addition, capped collections automatically, with high performance, maintain insertion order for the objects in the collection; this is very powerful for certain use cases such as logging.
Basically you can limit the size of (or number of documents in) a collection to, say, 20GB, and once that limit is reached MongoDB will start to throw out the oldest records and replace them with newer entries as they come in.
This is a great way to keep a large amount of data, discarding the older data as time goes by and keeping the same amount of disk-space used.
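Creating such a collection looks like this (a sketch; the collection name and the 20GB size are just examples):
mongo mydb --eval 'db.createCollection("log", { capped: true, size: 20 * 1024 * 1024 * 1024 })'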
I have another solution that might work better than doing db.repairDatabase() if you can't afford for the system to be locked, or don't have double the storage.
You must be using a replica set.
My thought is once you've removed all of the excess data that's gobbling your disk, stop a secondary replica, wipe its data directory, start it up and let it resynchronize with the master.
The process is time consuming, but it should only cost a few seconds of down time, when you do the rs.stepDown().
Also this can not be automated. Well it could, but I don't think I'm willing to try.
Running db.repairDatabase() will require that you have space equal to the current size of the database available on the file system. This can be bothersome when you know that the collections left or data you need to retain in the database would currently use much less space than what is allocated and you do not have enough space to make the repair.
As an alternative if you have few collections you actually need to retain or only want a subset of the data, then you can move the data you need to keep into a new database and drop the old one. If you need the same database name you can then move them back into a fresh db by the same name. Just make sure you recreate any indexes.
use cleanup_database
db.dropDatabase();
use oversize_database
db.collection.find({},{}).forEach(function(doc){
db = db.getSiblingDB("cleanup_database");
db.collection_subset.insert(doc);
});
use oversize_database
db.dropDatabase();
use cleanup_database
db.collection_subset.find({},{}).forEach(function(doc){
db = db.getSiblingDB("oversize_database");
db.collection.insert(doc);
});
use oversize_database
<add indexes>
db.collection.ensureIndex({field:1});
use cleanup_database
db.dropDatabase();
An export/drop/import operation for databases with many collections would likely achieve the same result but I have not tested.
Also, as a policy you can keep permanent collections in a separate database from your transient/processing data and simply drop the processing database once your jobs complete. Since MongoDB is schema-less, nothing except indexes would be lost, and your db and collections will be recreated when the inserts for the processes next run. Just make sure your jobs include creating any necessary indexes at an appropriate time.
If you are using replica sets, which were not available when this question was originally written, then you can set up a process to automatically reclaim space without incurring significant disruption or performance issues.
To do so, you take advantage of the automatic initial sync capabilities of a secondary in a replica set. To explain: if you shut down a secondary, wipe its data files and restart it, the secondary will re-sync from scratch from one of the other nodes in the set (by default it picks the node closest to it by looking at ping response times). When this resync occurs, all data is rewritten from scratch (including indexes), effectively doing the same thing as a repair, and disk space is reclaimed.
By running this on secondaries (and then stepping down the primary and repeating the process) you can effectively reclaim disk space on the whole set with minimal disruption. You do need to be careful if you are reading from secondaries, since this will take a secondary out of rotation for a potentially long time. You also want to make sure your oplog window is sufficient to do a successful resync, but that is generally something you would want to make sure of whether you do this or not.
To automate this process you would simply need to have a script run to perform this action on separate days (or similar) for each member of your set, preferably during your quiet time or maintenance window. A very naive version of this script would look like this in bash:
NOTE: THIS IS BASICALLY PSEUDO CODE - FOR ILLUSTRATIVE PURPOSES ONLY - DO NOT USE FOR PRODUCTION SYSTEMS WITHOUT SIGNIFICANT CHANGES
#!/bin/bash
# First arg is host MongoDB is running on, second arg is the MongoDB port
MONGO=/path/to/mongo
MONGOHOST=$1
MONGOPORT=$2
DBPATH=/path/to/dbpath
# make sure the node we are connecting to is not the primary
while (`$MONGO --quiet --host $MONGOHOST --port $MONGOPORT --eval 'db.isMaster().ismaster'`)
do
`$MONGO --quiet --host $MONGOHOST --port $MONGOPORT --eval 'rs.stepDown()'`
sleep 2
done
echo "Node is no longer primary!\n"
# Now shut down that server
# something like (assuming user is set up for key based auth and has password-less sudo access a la ec2-user in EC2)
ssh -t user@$MONGOHOST sudo service mongodb stop
# Wipe the data files for that server
ssh -t user@$MONGOHOST sudo rm -rf $DBPATH
ssh -t user@$MONGOHOST sudo mkdir $DBPATH
ssh -t user@$MONGOHOST sudo chown mongodb:mongodb $DBPATH
# Start up server again
# similar to shutdown something like
ssh -t user@$MONGOHOST sudo service mongodb start

Reducing MongoDB database file size

I've got a MongoDB database that was once large (>3GB). Since then, documents have been deleted and I was expecting the size of the database files to decrease accordingly.
But since MongoDB keeps allocated space, the files are still large.
I read here and there that the admin command mongod --repair is used to free the unused space, but I don't have enough space on the disk to run this command.
Do you know a way I can free up unused space?
UPDATE: with the compact command and WiredTiger it looks like the extra disk space will actually be released to the OS.
UPDATE: as of v1.9+ there is a compact command.
This command will perform a compaction "in-line". It will still need some extra space, but not as much.
MongoDB compresses the files by:
copying the files to a new location
looping through the documents and re-ordering / re-solving them
replacing the original files with the new files
You can do this "compression" by running mongod --repair or by connecting directly and running db.repairDatabase().
In either case you need the space somewhere to copy the files. Now I don't know why you don't have enough space to perform a compress, however, you do have some options if you have another computer with more space.
Export the database to another computer with Mongo installed (using mongoexport), and then you can import that same database (using mongoimport). This will result in a new database that is more compressed. Now you can stop the original mongod, replace its files with the new database files, and you're good to go.
Stop the current mongod and copy the database files to a bigger computer and run the repair on that computer. You can then move the new database files back to the original computer.
There is not currently a good way to "compact in place" using Mongo. And Mongo can definitely suck up a lot of space.
The best strategy right now for compaction is to run a Master-Slave setup. You can then compact the Slave, let it catch up and switch them over. I know still a little hairy. Maybe the Mongo team will come up with better in place compaction, but I don't think it's high on their list. Drive space is currently assumed to be cheap (and it usually is).
It looks like Mongo v1.9+ has support for the compact in place!
> db.runCommand( { compact : 'mycollectionname' } )
See the docs here: http://docs.mongodb.org/manual/reference/command/compact/
"Unlike repairDatabase, the compact command does not require double disk space to do its work. It does require a small amount of additional space while working. Additionally, compact is faster."
I had the same problem, and solved by simply doing this at the command line:
mongodump -d databasename
echo 'db.dropDatabase()' | mongo databasename
mongorestore dump/databasename
Compact all collections in current database
db.getCollectionNames().forEach(function (collectionName) {
print('Compacting: ' + collectionName);
db.runCommand({ compact: collectionName });
});
If you need to run a full repair, use the repairpath option. Point it to a disk with more available space.
For example, on my Mac I've used:
mongod --config /usr/local/etc/mongod.conf --repair --repairpath /Volumes/X/mongo_repair
Update: Per MongoDB Core Server Ticket 4266, you may need to add --nojournal to avoid an error:
mongod --config /usr/local/etc/mongod.conf --repair --repairpath /Volumes/X/mongo_repair --nojournal
Starting with the 2.8 version of Mongo, you can use compression. You will have 3 levels of compression with the WiredTiger engine (MMAP, which is the default in 2.6, does not provide compression):
None
snappy (by default)
zlib
Here is an example of how much space you will be able to save for 16GB of data (the comparison data is taken from this article).
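The block compressor can be chosen when starting mongod, for example (a sketch; it only affects collections created after the change):
mongod --storageEngine wiredTiger --wiredTigerCollectionBlockCompressor zlib --dbpath /data/db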
There are 2 ways to handle this, depending on the storage engine.
1. MMAPv1 engine:
command: db.repairDatabase()
NOTE: repairDatabase requires free disk space equal to the size of your current data set plus 2 gigabytes. If the volume that holds dbpath lacks sufficient space, you can mount a separate volume and use that for the repair. When mounting a separate volume for repairDatabase you must run repairDatabase from the command line and use the --repairpath switch to specify the folder in which to store temporary repair files.
E.g. if the DB size is 120GB, then (120 × 2) + 2 = 242GB of hard disk space is required (the existing data, plus an equal amount of free space, plus 2GB).
Another way is to do it collection-wise:
command: db.runCommand({compact: 'collectionName'})
2. WiredTiger:
It resolves this automatically by itself.
There has been considerable confusion over space reclamation in MongoDB, and some recommended practices are downright dangerous in certain deployment types. More details below:
TL;DR: repairDatabase attempts to salvage data from a standalone MongoDB deployment that is trying to recover from disk corruption. If it recovers space, that is purely a side effect. Recovering space should never be the primary consideration for running repairDatabase.
Recover space in a standalone node
WiredTiger: For a standalone node with WiredTiger, running compact will release space to the OS, with one caveat: The compact command on WiredTiger on MongoDB 3.0.x was affected by this bug: SERVER-21833 which was fixed in MongoDB 3.2.3. Prior to this version, compact on WiredTiger could silently fail.
MMAPv1: Due to the way MMAPv1 works, there is no safe and supported method to recover space using the MMAPv1 storage engine. compact in MMAPv1 will defragment the data files, potentially making more space available for new documents, but it will not release space back to the OS.
You may be able to run repairDatabase if you fully understand the consequences of this potentially dangerous command (see below), since repairDatabase essentially rewrites the whole database by discarding corrupt documents. As a side effect, this will create new MMAPv1 data files without any fragmentation on it and release space back to the OS.
For a less adventurous method, running mongodump and mongorestore may be possible as well in an MMAPv1 deployment, subject to the size of your deployment.
Recover space in a replica set
For replica set configurations, the best and the safest method to recover space is to perform an initial sync, for both WiredTiger and MMAPv1.
If you need to recover space from all nodes in the set, you can perform a rolling initial sync. That is, perform an initial sync on each of the secondaries, before finally stepping down the primary and performing an initial sync on it. The rolling initial sync method is the safest way to perform replica set maintenance, and it also involves no downtime as a bonus.
Please note that the feasibility of doing a rolling initial sync also depends on the size of your deployment. For extremely large deployments, it may not be feasible to do an initial sync, and thus your options are somewhat more limited. If WiredTiger is used, you may be able to take one secondary out of the set, start it as a standalone, run compact on it, and rejoin it to the set.
Regarding repairDatabase
Please don't run repairDatabase on replica set nodes. This is very dangerous, as mentioned in the repairDatabase page and described in more details below.
The name repairDatabase is a bit misleading, since the command doesn't attempt to repair anything. The command was intended to be used when there's disk corruption on a standalone node, which could lead to corrupt documents.
The repairDatabase command could be more accurately described as "salvage database". That is, it recreates the databases by discarding corrupt documents in an attempt to get the database into a state where you can start it and salvage intact document from it.
In MMAPv1 deployments, this rebuilding of the database files releases space to the OS as a side effect. Releasing space to the OS was never the purpose.
Consequences of repairDatabase on a replica set
In a replica set, MongoDB expects all nodes in the set to contain identical data. If you run repairDatabase on a replica set node, there is a chance that the node contains undetected corruption, and repairDatabase will dutifully remove the corrupt documents for you.
Predictably, this makes that node contain a different dataset from the rest of the set. If an update happens to hit such a document, the whole set could crash.
To make matters worse, it is entirely possible that this situation could stay dormant for a long time, only to strike suddenly with no apparent reason.
In case a large chunk of data is deleted from a collection and the collection never uses the deleted space for new documents, this space needs to be returned to the operating system so that it can be used by other databases or collections. You will need to run a compact or repair operation in order to defragment the disk space and regain the usable free space.
Behavior of compaction process is dependent on MongoDB engine as follows
db.runCommand({compact: collection-name })
MMAPv1
The compaction operation defragments data files & indexes. However, it does not release space to the operating system. The operation is still useful for defragmenting and creating more contiguous space for reuse by MongoDB, but it is of no use when the free disk space is very low.
Additional disk space of up to 2GB is required during the compaction operation.
A database-level lock is held during the compaction operation.
WiredTiger
The WiredTiger engine provides compression by default, which consumes less disk space than MMAPv1.
The compact process releases the free space to the operating system.
Minimal disk space is required to run the compact operation.
WiredTiger also blocks all operations on the database, as it needs a database-level lock.
For the MMAPv1 engine, compact does not return the space to the operating system. You need to run a repair operation to release the unused space.
db.runCommand({repairDatabase: 1})
MongoDB 3.0 and higher has a new storage engine: WiredTiger.
In my case switching engines reduced disk usage from 100GB to 25GB.
Database files cannot be reduced in size. While "repairing" a database, it is only possible for the mongo server to delete some of its files. If a large amount of data has been deleted, the mongo server will, during repair, "release" (delete) some of its existing files.
In general compact is preferable to repairDatabase. But one advantage of repair over compact is that you can issue repair to the whole cluster; with compact you have to log into each shard, which is kind of annoying.
When I had the same problem, I stopped my mongo server and started it again with the command
mongod --repair
Before running the repair operation you should check that you have enough free space on your HDD (at minimum, the size of your database).
For standalone mode you can use compact or repair.
For a sharded cluster or replica set, in my experience, after running compact on the primary, followed by compact on the secondary, the size of the primary database was reduced, but not the secondary's.
You might want to resync the member to reduce the size of the secondary database, and by doing this you might find that the secondary database ends up even smaller than the primary; I guess the compact command doesn't really compact the collection.
So, I ended up switching the primary and secondary of the replica set and resyncing the member again.
My conclusion is that the best way to reduce the size of a sharded cluster/replica set is to resync a member, switch primary and secondary, and resync again.
MongoDB repair is not recommended in the case of a sharded cluster.
If you are using a replica set or sharded cluster, use the compact command; it rewrites and defragments the data and index files of a collection.
syntax:
db.runCommand( { compact : "collection_name" } )
When used with force: true, compact runs on the primary of the replica set.
e.g. db.runCommand( { compact : "collection_name", force : true } )
Other points to consider:
- It blocks operations, so it is recommended to execute it in a maintenance window.
- If the replica set members are running on different servers, it needs to be executed on each member separately.
- In the case of a sharded cluster, compact needs to be executed on each shard member separately. It cannot be executed against a mongos instance.
Just one way that I was able to do it. No guarantee of the safety of your existing data; try at your own risk.
Delete the data files directly and restart mongod.
For example, with Ubuntu (default data path: /var/lib/mongodb), I had a couple of files with names like collection.#. I kept collection.0 and deleted all the others.
It seems an easier way if you don't have serious data in the database.