Backing up a MongoDB database of about 120 GB

We have a MongoDB database of about 120 GB. I started mongodump about 3 days ago using nohup, redirecting the logs to /dev/null, but the dump is only ~40 GB so far and is still running. Is this expected?
If yes, what is the approximate compression ratio for a MongoDB database? I.e. for a 120 GB database, how large will the backup file be?
That would help me estimate the time remaining for the dump to finish. I have no idea why it is taking so long. I would also like to know whether there is a faster/better way of backing up a MongoDB database (a remote copy is not something I'm considering).
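For reference, a sketch of a dump invocation that is usually faster and produces smaller output, assuming mongodump 3.2 or newer where --gzip and --numParallelCollections are available; host, port and paths below are placeholders:

# Placeholder host and output path; --gzip compresses each .bson file,
# --numParallelCollections dumps several collections at once.
nohup mongodump --host localhost --port 27017 \
    --gzip \
    --numParallelCollections 4 \
    --out /backup/mongo-dump-$(date +%F) \
    > /backup/mongodump.log 2>&1 &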

Related

mongorestore failing on shard cluster v4.2.x | Error: "panic: close of closed channel"

Scenario:
I had a standalone MongoDB server v3.4.x with several DBs, each with its own collections. As the plan was to upgrade to the latest 4.2.x, I created a mongodump of all DBs.
I then created a sharded cluster [MongoDB v4.2.x] consisting of a config server (replica set), a shard-1 server (replica set) and a shard-2 server (replica set).
Issue:
Now when I try to restore the dump, it only partially restores each time. If I try to restore a single DB, it fails with the same error. However, restoring a specific DB and a specific collection always works fine. The problem is that there are so many collections across many DBs that I cannot do it for all of them individually, and every time it fails at a different progress percentage/collection/DB.
Error:
2020-02-07T19:07:03.822+0000 [#####################...]  myproduct_new.chats        68.1MB/74.8MB  (91.0%)
2020-02-07T19:07:03.851+0000 [##########..............]  myproduct_new.metaCrashes  216MB/502MB    (42.9%)
2020-02-07T19:07:03.876+0000 [##################......]  myproduct_new.feeds        152MB/196MB    (77.4%)
panic: close of closed channel

goroutine 25 [running]:
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreCollectionToDB(0xc0001a0000, 0xc000234540, 0xc, 0xc00023454d, 900, 0x7fa5503e21f0, 0xc00020b890, 0x1f66e326, 0x0, ...)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:503 +0x49b
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntent(0xc0001a0000, 0xc00022f9e0, 0x0, 0x0, 0x0, 0x0)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:311 +0xbe9
github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents.func1(0xc0001a0000, 0xc000146420, 0x3)
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:126 +0x1cb
created by github.com/mongodb/mongo-tools/mongorestore.(*MongoRestore).RestoreIntents
        /data/mci/533e19bcc94a47bf738334351cf58a07/src/src/mongo/gotools/src/github.com/mongodb/mongo-tools/mongorestore/restore.go:109 +0x12d
ubuntu@ip-00-xxx-xxx-00:/usr/local/backups/Dev_backup_07-02-2020$
Question:
I am connecting to mongos and trying to restore. Currently, sharding is not yet enabled for any DB. Can anyone shed some light on what's going wrong, or on how to restore the dump?
I ran into the same problem and found out that it was caused by an issue with my MongoDB replica set.
Check rs.status() on your database.
If you get the message
Our replica set config is invalid or we are not a member of it
try this answer
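As a quick sanity check before retrying the restore, you can query the replica set state from the shell; a sketch where the host and port are placeholders:

# Placeholder host/port; every member should report a healthy state
# such as PRIMARY or SECONDARY rather than a config error.
mongo --host mongo-rs-member-1 --port 27017 --eval 'printjson(rs.status())'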
We faced the exact same issue with the same setup while trying to restore from a mongodump. There is no single definite cause, but it is best to check the factors below.
Check your dump size (BSON) against the free disk space allocated on the cluster. The dump can be 2x to 3x the size of the core Mongo data folder, because the data folder is compressed while the dump is raw BSON.
Check the oplog size configured during cluster creation. For a first-time migration, allocate 10-15% of free disk space as the oplog size; you can change this after the migration. A larger oplog lets secondaries lag a bit longer and still catch up on sync from the oplog. E.g. we allocated 3.5 GB for the oplog out of a 75 GB disk holding 45 GB of (compressed) data. For real-world usage after the migration, size the oplog to hold 1-2 hours of write volume.
Your total required disk space is then roughly: dump folder size + oplog + ~6 GB (default Mongo installation plus system extras).
Note: If you cannot afford to allocate the full dump folder size, you have to run the restore in batches (per DB or per collection, via the --nsInclude option, as sketched below), giving Mongo time to compress the data after importing the BSON. That compression usually takes only minutes.
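A minimal sketch of such a batched restore; the host, database names and dump path are placeholders, but --nsInclude is a real mongorestore option (3.4+):

# Hypothetical database names and dump path; restoring one DB at a time
# gives the storage engine time to compress between batches.
for db in myproduct_new another_db; do
    mongorestore --host mongos.example.com --port 27017 \
        --nsInclude "${db}.*" /path/to/dump-dir
done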
After the restore, Mongo compresses the data and the disk usage will more or less match your standalone data folder size.
If this is not set and your disk space is under-provisioned during the migration, Mongo will try to grow the disk while the migration is running. It cannot do that on the node currently acting as primary, so it grows the disk on a secondary and steps the current primary down to secondary, which can cause the error above. You can also check the hardware/status vitals to see whether a server changed state from primary to secondary.
Also, try NOT to enable server auto-upsizing while creating the cluster. I don't have a definite rationale, but you don't want background actions upgrading the hardware (say, M30 to M40 because the CPU is busy) in the middle of a migration; it happened to me.
Finally, as good practice, restore large databases, especially those with a single large (> 4 GB, non-sharded) collection, separately. I had 40+ DBs, about 20% of them over 15 GB in dump BSON size, with 1 or 2 big collections of more than 4 GB and multi-million documents. Once I separated them out, it gave Mongo breathing space to bulk-insert and compress them, at the cost of a few extra minutes. mongorestore works at the collection level.
So instead of the restore taking 40-50 minutes, it took 90-120 minutes after some mock runs and ordering.
If you have time to plan, try this out. Please share any other learnings.
Key takeaways: check your dump folder size and your large collection sizes.
Rate of disk writes, oplog lag, RAM, CPU and I/O are good KPIs to watch during a dump/restore.

Is it possible to restore a MongoDB database from a .bson file quicker than mongorestore?

I have a very large database from an old backup. It's about 500 GB total, in a single .bson file. At the current rate of my hard drive and CPU, the restore will probably take 10-20 hours. EDIT: about 9 hours.
I simply ran:
mongorestore -d database -c collection C:\very_large_backup.bson
Is it possible for MongoDB to simply access the .bson file directly, or is mongorestore the only option I have?
I plan on moving this data to a Microsoft SQL Server (discarding the extra bits of information that might overlap). Maybe there's a faster route that way?
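MongoDB cannot read a dump .bson file in place, so mongorestore (or a custom loader) is required; one thing that sometimes helps with a single huge collection is raising the insertion parallelism. A sketch, assuming a mongorestore version where --numInsertionWorkersPerCollection is available and keeping the original placeholder names and path:

# More insertion workers can speed up a single large collection, at the
# cost of extra CPU and write load on the target server.
mongorestore -d database -c collection --numInsertionWorkersPerCollection 8 C:\very_large_backup.bson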

mongorestore hangs while restoring fs.chunks

I am trying to upgrade from the mongodb sandbox option onto a shared cluster, and to keep my current data I have to do a mongodump and mongorestore to migrate the old data onto the new database.
This is what I put in the command line.
mongorestore -h url:host -d heroku_zc -u heroku_zc -p 470grupv030prq5uj0fm mongo-dump-dir/heroku_9r
It all seems to go fine and restores all the data entries, but while uploading the file chunks it hangs part way through: sometimes 5% of the way through, sometimes 20%, sometimes 50%.
As I say, when I look at the new database, all the documents are there correctly, and only the actual data files are missing.
This is what happens in the terminal; it doesn't give an error, it just stops.
2017-02-09T15:45:20.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:23.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:26.510+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
Both DBs were created from Heroku as add-ons to my Parse server.
EDIT: I don't know if this is related, but the local system database shows 2.03 GB. I don't understand how that can be, as the total database size is only 500 MB.
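One way to see whether the GridFS data actually arrived is to compare the files and chunks collections on both sides; a sketch with placeholder connection details (fs.files and fs.chunks are the default GridFS collection names):

# Counts the GridFS metadata and chunk documents; if fs.chunks on the new
# database is far smaller than on the old one, the binary data did not
# finish restoring.
mongo url:host/heroku_zc -u heroku_zc -p '<password>' \
    --eval 'print(db.fs.files.count(), db.fs.chunks.count())'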

Performance issue while mongodump is running

We operate a server for our customer with a single mongo instance, Gradle, Postgres and nginx running on it. The problem is that we have massive performance problems while mongodump is running: the mongo queue grows and no data can be queried. The other problem is that the customer does not want to invest in a replica set or a software update (mongod 3.x).
Does anybody have an idea how I could improve the performance?
command to create the dump:
mongodump -u ${MONGO_USER} -p ${MONGO_PASSWORD} -o ${MONGO_DUMP_DIR} -d ${MONGO_DATABASE} --authenticationDatabase ${MONGO_DATABASE} > /backup/logs/mongobackup.log
tar cjf ${ZIPPED_FILENAME} ${MONGO_DUMP_DIR}
System:
6 Cores
36 GB RAM
1TB SATA HDD
+ 2TB (backup NAS)
MongoDB 2.6.7
Thanks
Best regards,
Markus
As you have a heavy load, adding a replica set is a good solution, as the backup can then be taken on a secondary node. But be aware that a replica set needs at least three members (you can use primary/secondary/arbiter, where the arbiter needs only a small amount of resources).
mongodump reads every document and takes read locks while doing so, which will have a noticeable impact if there are a lot of writes to the dumped database.
Hint: try to make the backup when the load on the system is light.
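If a replica set is ever added, the dump can be pointed at a secondary so the primary keeps serving traffic; a sketch with hypothetical host names and replica set name, assuming a tools version that supports --readPreference:

# Hypothetical replica set "rs0" with a secondary used for backups.
mongodump --host "rs0/db-primary.example.com:27017,db-secondary.example.com:27017" \
    --readPreference secondary \
    -u "${MONGO_USER}" -p "${MONGO_PASSWORD}" \
    -d "${MONGO_DATABASE}" --authenticationDatabase "${MONGO_DATABASE}" \
    -o "${MONGO_DUMP_DIR}"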
Try volume snapshots. Check with your cloud provider what options are available for taking snapshots. It is very fast, and cheaper if you compare it with the actual resources a dump consumes (RAM, CPU and, on an HDD, the I/O cost, even if it is small).
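On a self-hosted box without a cloud snapshot API, an LVM snapshot can play the same role; a sketch with hypothetical volume group and volume names, assuming the journal and data files live on the snapshotted volume (db.fsyncLock() is used here only as an extra precaution):

# Hypothetical LVM layout: volume group "vg0", data volume "mongodata".
mongo --eval 'db.fsyncLock()'                      # flush writes and block new ones
lvcreate --size 10G --snapshot --name mongo-snap /dev/vg0/mongodata
mongo --eval 'db.fsyncUnlock()'                    # resume writes immediately
mkdir -p /mnt/mongo-snap
mount -o ro /dev/vg0/mongo-snap /mnt/mongo-snap
tar cjf /backup/mongo-$(date +%F).tar.bz2 -C /mnt/mongo-snap .
umount /mnt/mongo-snap
lvremove -f /dev/vg0/mongo-snap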

PostgreSQL backup with smallest output files

We have a PostgreSQL database that is over 732 GB when backed up as a file-system backup. When we do a pg_dump we can get it down to 585 GB. If I combine pg_dump with the PITR method, will that give me the best backup with the smallest backup file size? My plan was to run pg_start_backup, then pg_dump, then pg_stop_backup. I know the documentation says to run a file-system backup, but I want a smaller backup data set. I would then copy off the WAL files and back them up at night.
To truly get the smallest file, you'll have to experiment with compressing your pg_dump output using one of the many compression tools and settings. Using gzip or xz with the maximum possible compression would be a start. This will of course require a fast CPU and lots of CPU time.
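A sketch of two such variants; the database name and output paths are placeholders, and the compression levels and thread counts would need tuning for the hardware:

# Custom-format dump using pg_dump's own maximum compression level.
pg_dump -Fc -Z 9 -f /backup/mydb.dump mydb

# Plain-format dump piped through xz for potentially smaller output,
# at the cost of much more CPU time (-T0 uses all cores).
pg_dump mydb | xz -9 -T0 > /backup/mydb.sql.xz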