I am trying to upgrade from the MongoDB sandbox option to a shared cluster, and to keep my current data I have to do a mongodump and mongorestore to migrate the old data onto the new database.
This is what I put in the command line.
mongorestore -h url:host -d heroku_zc -u heroku_zc -p 470grupv030prq5uj0fm mongo-dump-dir/heroku_9r
It seems to all go fine and restores all the data entries, but while uploading the file chunks it hangs partway through: sometimes 5% of the way in, sometimes 20%, sometimes 50%.
As I say, when I look at the new database, all the rows are there correctly, and only the actual data files are missing.
This is what happens in the terminal, it doesn't give an error it just stops.
2017-02-09T15:45:20.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:23.509+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
2017-02-09T15:45:26.510+0100 [#.......................] heroku_z25kbwmc.fs.chunks 15.8 MB/299.6 MB (5.3%)
Both databases were created from Heroku as add-ons to my Parse server.
EDIT: I also don't know if this is a problem, but the local system database says 2.03 GB. I don't understand how this can be, as the total database size is only 500 MB.
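For what it's worth, the fs.chunks collection can also be retried on its own by pointing mongorestore at that collection's BSON file from the dump; a rough sketch, assuming the usual dump directory layout, with the host and password left as placeholders:

mongorestore -h <host>:<port> -d heroku_zc -u heroku_zc -p <password> -c fs.chunks mongo-dump-dir/heroku_9r/fs.chunks.bson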
Related
I just had to use pg_restore with a small dump of 30 MB and it took 5 minutes on average! On my colleagues' computers it is ultra fast, like a dozen seconds. The difference between the two is the CPU usage: while for the others the database uses quite a lot of CPU (60-70%) during the restore operation, on my machine it stays at only a few percent (0-3%), as if it were not active at all.
The exact command was: pg_restore -h 127.0.0.1 --username XXX --dbname test --no-comments test_dump.sql
The originating command to produce this dump was: pg_dump --dbname=XXX --user=XXX --no-owner --no-privileges --verbose --format=custom --file=/sql/test_dump.sql
Look at the screenshot taken in the middle of the restore operation:
Here is the corresponding vmstat 1 result running the command:
I've searched the web for a solution for a few hours, but this under-usage of the CPU remains quite mysterious. Any ideas will be appreciated.
For the stack, I am on Ubuntu 20.04, and Postgres 13.6 is running in a Docker container. I have decent hardware, neither bad nor great.
EDIT: This very same command worked in the past on my machine, with the same ordinary HDD, but now it is terribly slow. The only difference I saw compared with the others (for whom it is blazing fast) was really the CPU usage (even though they have SSDs, which shouldn't be the limiting factor at all, especially with a 30 MB dump).
EDIT 2: For those who proposed that the problem was I/O-boundness and maybe a slow disk: I just tried, without much conviction, to run my command on an SSD partition I had just created, and nothing changed.
The vmstat output shows that you are I/O bound. Get faster storage, and performance will improve.
PostgreSQL, by default, is tuned for data durability: transactions are flushed to disk at each and every commit, forcing write-through of any disk write cache, so a restore tends to be very I/O-bound.
When restoring a database from a dump file, it can make sense to lower these durability settings, especially if the restore is done while your application is offline, and above all in non-production environments.
I temporarily run postgres with these options: -c fsync=off -c synchronous_commit=off -c full_page_writes=off -c checkpoint_flush_after=256 -c autovacuum=off -c max_wal_senders=0
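Since the question mentions Postgres running in a Docker container, one way to apply these flags temporarily is to pass them straight to the container's command; a sketch, not the poster's exact setup, with the image tag, password and port as placeholders:

docker run --rm --name pg-restore-tmp -e POSTGRES_PASSWORD=secret -p 5432:5432 postgres:13.6 -c fsync=off -c synchronous_commit=off -c full_page_writes=off -c checkpoint_flush_after=256 -c autovacuum=off -c max_wal_senders=0

Once the restore has finished, drop the temporary container (or revert the flags); fsync=off is not safe for data you intend to keep.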
Refer to these documentation sections for more information:
14.4.9. Some Notes about pg_dump
14.5. Non-Durable Settings.
Also this article:
Settings for a fast pg_restore
I have a database which is almost 4 GB in size, running on a 16 GB RAM, 6-core Ubuntu 18.04 machine. I wrote a cron job which backs up the database at a specific time every day, but yesterday, for the first time, it hit an out-of-memory error during the backup and killed the mongo service. This is my command for dumping: mongodump --db $DB --archive=/home/backup/$BACKUP_NAME --gzip. I am not sure whether 4 GB is big for a DB, so what do you suggest?
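For reference, the kind of crontab entry described would look roughly like this (a sketch: the schedule, database name and backup path are invented, and % has to be escaped in crontab lines):

0 2 * * * /usr/bin/mongodump --db mydb --archive=/home/backup/mydb-$(date +\%F).gz --gzip >> /var/log/mongodump.log 2>&1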
I have a very large database from an old backup. It's about 500 GB total and it's a .bson file. At the current rate of my hard drive and CPU, I will be done in probably 10-20 hours. EDIT: About 9 hours.
I simply ran:
mongorestore -d database -c collection C:\very_large_backup.bson
Is it possible for MongoDB to simply access the .bson file directly, or is mongorestore the only option I have?
I plan on moving to a Microsoft SQL Server with this data (discarding the extra bits of information that might overlap). Maybe there's a faster way that way?
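On the question of reading the .bson file directly: without restoring anything, the file can at least be converted to JSON using bsondump, which ships alongside mongorestore; a minimal sketch using the path from the question:

bsondump C:\very_large_backup.bson > very_large_backup.json

Whether that ends up faster than mongorestore is another matter, since it still has to read and decode the full 500 GB.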
I have a ~140 GB Postgres DB on Heroku / AWS. I want to create a dump of this on a Windows Azure - Windows Server 2012 R2 virtual machine, as I need to move the DB into the Azure environment.
The DB has a couple of smaller tables, but mainly consists of a single table taking ~130 GB, including indexes. It has ~500 million rows.
I've tried to use pg_dump for this, with:
./pg_dump -Fc --no-acl --no-owner --host * --port 5432 -U * -d * > F:/051418.dump
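As a small aside on the command itself, pg_dump can also write the dump straight to a file with -f rather than going through shell redirection, which keeps the binary custom-format output out of the shell's hands on Windows; a sketch with the connection details left as placeholders:

pg_dump -Fc --no-acl --no-owner --host <host> --port 5432 -U <user> -d <db> -f F:/051418.dump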
I've tried various Azure virtual machine sizes, including some fairly large ones (D12_v2: 28 GB RAM, 4 vCPUs, 12000 max IOPS, etc.). But in all cases pg_dump stalls completely due to memory swapping.
On the above machine it's currently using all available memory and has spent the past 12 hours swapping memory to disk. I don't expect it to complete, because of the swapping.
From other posts I've understood it could be an issue with the network speed being much faster than the disk I/O speed, causing pg_dump to suck up all available memory and more, so I've tried using the Azure machine with the most IOPS. This hasn't helped.
So is there another way I can force pg_dump to cap its memory usage, or to hold off pulling more data until it has written to disk and freed memory?
Looking forward to your help!
Krgds.
Christian
We have a mongo database of about 120 GB in size. I started a mongodump using nohup, redirecting the logs to /dev/null, about 3 days back, but the dump file is ~40 GB in size now and the dump is still running. Is this expected?
If yes, what is the approximate compression ratio for a mongo database? I.e., for a 120 GB database, how big is the backup file going to be?
This would help me estimate the time remaining for the dump to finish. I have no clue why it is taking so much time; I also wanted to know whether there is a faster/better way of backing up the mongo database (remote copy is not something I'm considering).
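As a rough way to estimate it yourself: an uncompressed mongodump is roughly the size of the database's dataSize (it dumps the documents themselves, not the storage files), which can be read from the shell before guessing at ratios; a minimal sketch, assuming a database named mydb:

mongo mydb --eval "printjson(db.stats(1024*1024*1024))"  # sizes reported in GB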