MongoDB gives out of memory error when dumping - mongodb

I have a database which is almost 4 GB in size, running on a 16 GB memory, 6-core Ubuntu 18.04 machine. I wrote a cron job which backs up the database at a specific time every day, but yesterday it hit an out of memory error during the backup process and killed the mongo service. This is my command for dumping: mongodump --db $DB --archive=/home/backup/$BACKUP_NAME --gzip. I am not sure whether 4 GB is big for a database, so what do you suggest?
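For reference, a minimal sketch of what the nightly script could look like with collection-level parallelism turned down, which can lower mongodump's memory footprint (this assumes a recent mongodump that supports --numParallelCollections, and it is not a guaranteed fix for the OOM kill):

#!/bin/bash
# Sketch of the nightly dump, reusing the variables from the question.
DB="mydb"                            # placeholder database name
BACKUP_NAME="$(date +%F).archive.gz" # placeholder archive name
# Dump one collection at a time instead of several in parallel to reduce memory pressure.
mongodump --db "$DB" --numParallelCollections=1 --archive="/home/backup/$BACKUP_NAME" --gzip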

Related

Recovering a postgresql database after initdb

I have an important issue with my database and I don't know how to fix it.
I have postgresql 9.6 running on CentOS. After a system reboot, the postgresql service didn't start, so following the instructions in the shell, I launched "sudo /usr/pgsql-9.6/bin/postgresql96-setup initdb".
To my surprise, when I started pgsql it was a new empty instance; I had a database with a size of 4GB and it has disappeared...
However, the data must still be in the files, because the data folder has a size of 4GB. How can I recover from this situation?
Thank you very much,
Regards,
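Before changing anything else, it may help to stop the service and confirm what is actually on disk; a rough diagnostic sketch, assuming the default PGDG layout under /var/lib/pgsql/9.6/data (the real path and unit name may differ):

sudo systemctl stop postgresql-9.6
sudo du -sh /var/lib/pgsql/9.6/data        # overall size of the data directory
sudo ls /var/lib/pgsql/9.6/data/base       # one numeric subdirectory per database
sudo ls /var/lib/pgsql/9.6/data/pg_xlog    # WAL files belonging to the cluster

If the old files turn out to live in a different directory from the freshly initialized one, pointing the service back at the original data directory is the thing to investigate first.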

How to stop mongodump safely?

I started a mongodump while my database server has 95% of its storage filled, and I am downloading the BSON files to my local machine.
But it is very slow, and I am afraid to stop the mongodump so that I can increase the server storage size. One observation: I am downloading the database to move to MongoDB Atlas, because I think it is easier to administer.
Is there any command to stop mongodump safely?
mongodump is a process that dumps the data from your DB; there is no particular point in the dump after which it becomes unsafe to stop it, and even if you stop it abruptly (e.g. by sending an interrupt signal):
Your DB will just release the read lock held for mongodump
The worst that could happen on your machine is a malformed BSON dump
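For example, a way to stop it from another shell (assuming a single mongodump process on the machine) is to send it SIGINT, the same signal Ctrl-C would deliver in the terminal where it is running:

kill -INT "$(pgrep -x mongodump)"   # equivalent to pressing Ctrl-C in mongodump's terminal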

How to limit pg_dump's memory usage?

I have a ~140 GB Postgres DB on Heroku / AWS. I want to create a dump of this on an Azure Windows Server 2012 R2 virtual machine, as I need to move the DB into the Azure environment.
The DB has a couple of smaller tables, but mainly consists of a single table taking ~130 GB, including indexes. It has ~500 million rows.
I've tried to use pg_dump for this, with:
./pg_dump -Fc --no-acl --no-owner --host * --port 5432 -U * -d * > F:/051418.dump
I've tried on various Azure virtual machine sizes, including some fairly large ones (D12_v2: 28 GB RAM, 4 vCPUs, 12000 max IOPS, etc.). But in all cases pg_dump stalls completely due to memory swapping.
On the above machine it is currently using all available memory and has spent the past 12 hours swapping memory to disk. I don't expect it to complete, due to the swapping.
From other posts I've understood it could be an issue with the network speed being much faster than the disk IO speed, causing pg_dump to soak up all available memory and more, so I've tried using the Azure machine with the most IOPS. This hasn't helped.
So is there another way I can force pg_dump to cap its memory usage, or to wait before pulling more data until it has written to disk and freed memory?
Looking forward to your help!
Kind regards,
Christian
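pg_dump has no flag that directly caps its memory use, but one workaround worth sketching is to dump the huge table separately from everything else, writing with -f instead of shell redirection; this keeps each run smaller and restartable, though it does not change how pg_dump buffers any single table. The table name big_table below is hypothetical, and the redacted * values are kept from the original command:

./pg_dump -Fc --no-acl --no-owner --host * --port 5432 -U * -d * --exclude-table=big_table -f F:/051418_rest.dump
./pg_dump -Fc --no-acl --no-owner --host * --port 5432 -U * -d * --table=big_table -f F:/051418_big_table.dump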

mongorestore hangs while restoring fs.chunks

I am trying to upgrade from the MongoDB sandbox option to a shared cluster, and to keep my current data I have to do a mongodump and mongorestore to migrate the old data onto the new database.
This is what I put in the command line.
mongorestore -h url:host -d heroku_zc -u heroku_zc -p 470grupv030prq5uj0fm mongo-dump-dir/heroku_9r
It seems to all go fine and restores all the data entries, but while uploading the file chunks it hangs part way through. Sometimes 5% of the way through, sometimes 20%, sometimes 50%.
As I say, when I look at the new database, all the rows are there correctly, and only the actual data files are missing.
This is what happens in the terminal; it doesn't give an error, it just stops.
2017-02-09T15:45:20.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:23.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:26.510+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
Both DBs are created from Heroku as add-ons to my Parse server.
EDIT: I also don't know if this is a problem: the local system database says 2.03 GB. I don't understand how this can be, as the total database size is only 500 MB.
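If it keeps stalling on that collection, one thing that can be tried (a sketch, not a confirmed fix) is re-running the restore for just the GridFS chunks collection with the same credentials, so only the missing file data is retried:

mongorestore -h url:host -d heroku_zc -u heroku_zc -p 470grupv030prq5uj0fm -c fs.chunks mongo-dump-dir/heroku_9r/fs.chunks.bson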

Performance issues while mongodump is running

We operate a server for our customer with a single mongo instance, gradle, postgres and nginx running on it. The problem is that we have massive performance problems while mongodump is running. The mongo queue grows and no data can be queried. The next problem is that the customer does not want to invest in a replica set or a software update (mongod 3.x).
Does anybody have an idea how I could improve the performance?
command to create the dump:
mongodump -u ${MONGO_USER} -p ${MONGO_PASSWORD} -o ${MONGO_DUMP_DIR} -d ${MONGO_DATABASE} --authenticationDatabase ${MONGO_DATABASE} > /backup/logs/mongobackup.log
tar cjf ${ZIPPED_FILENAME} ${MONGO_DUMP_DIR}
System:
6 Cores
36 GB RAM
1TB SATA HDD
+ 2TB (backup NAS)
MongoDB 2.6.7
Thanks
Best regards,
Markus
As you have heavy load, adding a replica set is a good solution, as the backup could be taken on a secondary node, but be aware that a replica set needs at least three servers (you can have primary/secondary/arbiter, where the last needs only a small amount of resources).
mongodump takes a general query lock which will have an impact if there are a lot of writes in the dumped database.
Hint: try to make the backup when there is light load on the system.
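If a replica set is added later, the existing script could simply point the dump at the secondary so the primary keeps serving traffic; a sketch reusing the question's variables, with secondary-host as a hypothetical hostname:

mongodump --host secondary-host:27017 -u ${MONGO_USER} -p ${MONGO_PASSWORD} --authenticationDatabase ${MONGO_DATABASE} -d ${MONGO_DATABASE} -o ${MONGO_DUMP_DIR} > /backup/logs/mongobackup.log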
Try volume snapshots. Check with your cloud provider what options are available for taking snapshots. It is super fast and cheaper if you compare it with the actual cost of taking a backup (RAM and CPU used, and if it is an HDD then the transaction cost, even if it is small).
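On a plain server, the same idea can be applied with LVM; a rough sketch, assuming the dbPath lives on a logical volume /dev/vg0/mongodb (hypothetical names) and journaling is enabled so the snapshot is consistent:

lvcreate --size 10G --snapshot --name mdb-snap01 /dev/vg0/mongodb
mkdir -p /mnt/mdb-snap01
mount /dev/vg0/mdb-snap01 /mnt/mdb-snap01
tar czf /backup/mdb-snap01.tar.gz -C /mnt/mdb-snap01 .
umount /mnt/mdb-snap01
lvremove -f /dev/vg0/mdb-snap01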