BSON vs gzip dump of MongoDB

I have a database with:
On-disk size: 19.032 GB (from the show dbs command)
Data size: 56 GB (from db.collectionName.stats(1024*1024*1024).size)
When taking a dump with the mongodump command we can pass the --gzip option. These are the observations I have with and without this flag:
command | time taken to dump | size of dump | restoration time | observation
with gzip | 30 min | 7.5 GB | 20 min | in mongostat the insert rate ranged from 30k to 80k per sec
without gzip | 10 min | 57 GB | 50 min | in mongostat the insert rate was very erratic, ranging from 8k to 20k per sec
The dump was taken from an 8-core, 40 GB RAM machine (Machine B) onto a 12-core, 48 GB RAM machine (Machine A), and then restored from Machine A to another 12-core, 48 GB RAM machine (Machine C), to make sure there was no resource contention between the mongod, mongorestore, and mongodump processes. MongoDB version is 4.2.0.
I have a few questions:
1. What is the functional difference between the two dumps?
2. Can the BSON dump be zipped afterwards?
3. How does the number of indexes impact the mongodump and restore process? (If we drop some unique indexes and then recreate them, will that expedite the total dump and restore time, considering MongoDB will not have to enforce uniqueness during the inserts?)
4. Is there a way to make the overall process faster? From these results it looks like we have to choose between dump speed and restore speed.
5. Will having a bigger machine (more RAM) reading the dump and restoring it expedite the overall process?
6. Will a smaller dump help with the overall time?
Update:
2. Can the BSON dump be zipped afterwards?
Yes:
% ./mongodump -d=test
2022-11-16T21:02:24.100+0530 writing test.test to dump/test/test.bson
2022-11-16T21:02:24.119+0530 done dumping test.test (10000 documents)
% gzip dump/test/test.bson
% ./mongorestore --db=test8 --gzip dump/test/test.bson.gz
2022-11-16T21:02:51.076+0530 The --db and --collection flags are deprecated for this use-case; please use --nsInclude instead, i.e. with --nsInclude=${DATABASE}.${COLLECTION}
2022-11-16T21:02:51.077+0530 checking for collection data in dump/test/test.bson.gz
2022-11-16T21:02:51.184+0530 restoring test8.test from dump/test/test.bson.gz
2022-11-16T21:02:51.337+0530 finished restoring test8.test (10000 documents, 0 failures)
2022-11-16T21:02:51.337+0530 10000 document(s) restored successfully. 0 document(s) failed to restore.

I am no MongoDB expert, but I have good experience with MongoDB backup and restore activities, and I will answer to the best of my knowledge.
What is the functional difference between the two dumps?
Running mongodump without the --gzip option writes every document to a file in plain BSON format.
This significantly reduces the time taken by the backup and restore operations, since mongorestore just reads the BSON file and inserts the documents; the compromise is the size of the .bson dump file.
When we pass the --gzip option, however, the BSON data is compressed as it is written to the dump file. This significantly increases the time taken by mongodump and mongorestore, but the backup file ends up much smaller thanks to the compression.
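For reference, the two invocations being compared look roughly like this (a sketch; the database name and output paths are illustrative):
mongodump --db=mydb --out=dump-plain          # plain .bson files: larger on disk, no compression overhead
mongodump --db=mydb --gzip --out=dump-gzip    # .bson.gz files: much smaller, extra CPU for compression
mongorestore dump-plain                       # restore from the plain dump
mongorestore --gzip dump-gzip                 # restore from the compressed dump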
Can the BSON dump be zipped afterwards?
Yes, it can be compressed further. But you will spend additional time, since you have to compress the already compressed file and extract it again before the restore operation, which increases the overall time taken. Do it only if the resulting file is much smaller than the plain gzip output.
EDIT:
As @best wishes pointed out, I completely misread this question.
The gzip performed by mongodump is just a gzip applied on the mongodump side. It is literally the same as compressing the plain BSON file manually ourselves.
For instance, if you extract the .bson.gz file with any decompression tool, you will get the actual BSON backup file.
Note that zip and gzip are not the same (in terms of compression), since they use different compression formats even though they both compress files. So you will get different file sizes when comparing mongodump's gzip output with a manual zip of the files.
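To see that equivalence the other way around, you can decompress the files produced by mongodump --gzip yourself and then restore them as a plain dump (a sketch mirroring the example in the question's update):
mongodump -d=test --gzip                                       # writes dump/test/test.bson.gz and test.metadata.json.gz
gunzip dump/test/test.bson.gz dump/test/test.metadata.json.gz  # back to plain .bson / .metadata.json
mongorestore --nsInclude='test.*' dump                         # restore without the --gzip flag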
How does the number of indexes impact the mongodump and restore process? (If we drop some unique indexes and then recreate them, will that expedite the total dump and restore time, considering MongoDB will not have to enforce uniqueness during the inserts?)
Whenever you take a dump, the mongodump tool creates a <collection-name>.metadata.json file. This basically contains all the indexes, followed by the collection name, uuid, collection options, dbUsersAndRoles, and so on.
The number and type of indexes in the collection have no impact on the mongodump operation itself. However, after restoring the data, mongorestore goes through all the indexes in the metadata file and tries to recreate them.
The time taken by this step depends on the number of indexes and the number of documents in your collection, in short (no. of indexes * no. of documents). The type of index (even a unique one) doesn't have a major impact on performance. If the indexes were applied to the original collection with the background: true option, rebuilding them during the restore is going to take even more time.
You can skip the indexing step during mongorestore by passing the --noIndexRestore option on the command line and build the indexes later, when required.
In my company's production backup environment, indexing the keys takes more time than restoring the data.
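For example, a restore that defers the index builds could look roughly like this (the database, collection, and field names are illustrative):
mongorestore --gzip --noIndexRestore dump/                                         # restore the data only, skip index creation
mongo mydb --eval 'db.mycollection.createIndex({ userId: 1 }, { unique: true })'   # build the indexes later, when needed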
Is there a way to make the overall process faster? From these results it looks like we have to choose between dump speed and restore speed.
The solution depends...
If network bandwidth is not an issue (for example, moving data between two instances running in the cloud), don't use any compression, since that will save you time.
If the data on the newly moved instance won't be accessed immediately, perform the restore with the --noIndexRestore flag.
If the backup is for cold storage or for saving data for later use, apply gzip compression, manual zip compression, or both (whatever works best for you).
Choose whichever scenario works best for you, but you primarily have to find the right balance between time and space, and secondarily decide whether to apply the indexes or not.
In my company, we usually take non-compressed backups and restores for P-1, gzip compression for prod backups that are weeks old, and further manual compression for backups that are months old.
You have one more option, and I DON'T RECOMMEND THIS METHOD: you can directly move the data path used by your MongoDB instance and point the MongoDB instance on the migrated machine at that path. Again, I don't recommend this method, as there are many things that could go wrong, although I have had no issues with it on my end. I can't guarantee the same for you, so do this at your own risk if you decide to.
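If you do decide to go down that road anyway, the raw steps are roughly the sketch below (paths, host, and service commands are illustrative; the MongoDB versions and storage engines must match, and mongod must be stopped cleanly first):
systemctl stop mongod                                        # or db.shutdownServer(); never copy files under a running mongod
rsync -a /var/lib/mongodb/ user@target:/var/lib/mongodb/     # copy the whole data directory to the target machine
mongod --dbpath /var/lib/mongodb --fork --logpath /var/log/mongodb/mongod.log   # on the target, point mongod at the copied dbPath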
Will having a bigger machine (more RAM) reading the dump and restoring it expedite the overall process?
I don't think so. I am not sure about this, but I have 16 GB of RAM and I restored a 40 GB mongodump locally without hitting any bottleneck due to RAM; I could be wrong, though, as I am not sure. Please let me know if you find out the answer yourself.
Will a smaller dump help with the overall time?
If by a smaller dump you mean limiting the data to be dumped using the --query flag, it certainly will, since there is much less data to back up and restore. Remember the 'no. of indexes * no. of documents' rule.
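For instance, a partial dump restricted with --query might look roughly like this (the collection, field, and date are illustrative; MongoDB 4.2 expects the query in extended JSON):
mongodump --db=mydb --collection=events \
  --query='{ "createdAt": { "$gte": { "$date": "2022-01-01T00:00:00Z" } } }' \
  --out=dump-partial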
Hope this helped answer your questions. Let me know:
If you have any further questions
If I made any mistakes
If you found a better solution
What you decided on in the end

Here are my two cents:
In my experience, using --gzip trades time for space: it saves storage at the cost of extra overhead in both mongodump and mongorestore.
In addition, I also use the parallel settings:
--numParallelCollections=n1
--numInsertionWorkersPerCollection=n2
which may increase performance a little, around 10%; n1 and n2 depend on the number of CPUs on the server.
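As a rough example of how those settings are passed to mongorestore (the values are illustrative and should be tuned to the number of cores):
mongorestore --gzip \
  --numParallelCollections=4 \
  --numInsertionWorkersPerCollection=4 \
  dump/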
The restore process also rebuilds indexes, and how long that takes depends on how many indexes are in your databases. Generally speaking, rebuilding the indexes is faster than restoring the data.
Hope this helps!

Related

Postgres database dump size larger than physical size

I just made a pg_dump backup of my database and its size is about 95 GB, but the size of the directory /pgsql/data is about 38 GB.
I ran a VACUUM FULL and the size of the dump did not change. My Postgres installation is version 9.3.4, on a CentOS release 6.3 server.
Is it normal for the dump to be so much larger than the physical size, or is this very weird?
Thanks in advance!
Regards.
Neme.
The size of pg_dump output and the size of a Postgres cluster (aka 'instance') on disk have very, very little correlation. Consider:
pg_dump has 3 different output formats, 2 of which allow compression on-the-fly
pg_dump output contains only the schema definition and the raw data in a text (or possibly "binary") format. It contains no index data.
The text/"binary" representation of different data types can be larger or smaller than the actual data stored in the database. For example, the number 1 stored in a bigint field takes 8 bytes in the cluster, but only 1 byte in a pg_dump.
This is also why VACUUM FULL had no effect on the size of the backup.
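To make the first point concrete, the common pg_dump output formats look roughly like this (a sketch; the database and file names are illustrative):
pg_dump mydb > mydb.sql            # plain SQL text, uncompressed by itself
pg_dump -Fc mydb -f mydb.dump      # custom format, compressed on the fly, usable with pg_restore
pg_dump -Fd mydb -f mydb_dir       # directory format, also compressed, supports parallel dump with -j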
Note that a Point In Time Recovery (PITR) based backup is entirely different from a pg_dump backup. PITR backups are essentially copies of the data on disk.
Postgres does compress its data in certain situations, using a technique called TOAST:
PostgreSQL uses a fixed page size (commonly 8 kB), and does not allow tuples to span multiple pages. Therefore, it is not possible to store very large field values directly. To overcome this limitation, large field values are compressed and/or broken up into multiple physical rows. This happens transparently to the user, with only small impact on most of the backend code. The technique is affectionately known as TOAST (or "the best thing since sliced bread").

Is database backup size the same as database size?

I am working with PostgreSQL. I checked the following command and it returns 12 MB:
SELECT pg_size_pretty(pg_database_size('itcs'));
But when I took a backup using pgAdmin, the backup size is 1 MB. Why this difference?
If you are taking a logical backup (with pg_dump), the backup contains only the data, no empty pages, no old versions of rows, no padding, no indexes. It may also be compressed. All that can greatly reduce the size.
If you are taking a physical backup, the backup more or less consists of the actual database files as they are, plus recovery logs to get them to a consistent state. So that would be roughly the same size as the database itself (but you can also compress it).
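One way to see the difference for yourself is to compare the size of a logical dump with what Postgres reports for the database on disk (a sketch, assuming the itcs database from the question):
psql -d itcs -c "SELECT pg_size_pretty(pg_database_size('itcs'));"   # size of the database files on disk
pg_dump itcs | wc -c                                                 # bytes in an uncompressed logical dump
pg_dump -Fc itcs | wc -c                                             # bytes in a compressed (custom format) dump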

Dumping postgres DB, time and .sql file weight

I have a big db (a Nominatim db, used for reverse address geocoding); it is about 408 GB.
Now, to provide an estimate to the customer, I would like to know how long the export/re-import procedure will take and how big the .sql dump file will be.
My PostgreSQL version is 9.4, installed on a CentOS 6.7 virtual machine with 16 GB RAM and 500 GB of disk space.
Can you help me?
Thank you all for your answers. Anyway, to restore the dumped db I don't use the pg_restore command but psql -d newdb -f dump.sql (I read that this is the way to do it in an official doc). This is because I have to set up this db on another machine to avoid the Nominatim db indexing procedure! I don't know if anyone knows Nominatim (it is an OpenStreetMap open-source product), but the db indexing process for the European map (15.8 GB) on a CentOS 6.7 machine with 16 GB RAM took me 32 days...
So another possible question would be: is pg_restore equivalent to psql -d -f? Which one is faster?
Thanks again
As @a_horse_with_no_name says, nobody will be able to give you exact answers for your environment. But this is the procedure I would use to get some estimates.
I have generally found that a compressed backup of my data is 1/10th or less the size of the live database. You can also usually deduct the on-disk size of the indexes from the backup size as well. Examine the size of things in-database to get a better idea. You can also try forming a subset of the database you have which is much smaller and compare the live size to the compressed backup; this may give you a ratio that should be in the ballpark. SQL files are gassy and compress well; the on-disk representation Postgres uses seems to be even gassier though. Price of performance probably.
The best way to estimate time is just to do some exploratory runs. In my experience this usually takes longer than you expect. I have a ~1 TB database that I'm fairly sure would take about a month to restore, but it's also aggressively indexed. I have several ~20 GB databases that backup/restore in about 15 minutes. So it's pretty variable, but indexes add time. If you can set up a similar server, you can try the backup-restore procedure and see how long it will take. I would recommend doing this anyway, just to build confidence and suss out any lingering issues before you pull the trigger.
I would also recommend you try out pg_dump's "custom format" (pg_dump -Fc) which makes compressed archives that are easy for pg_restore to use.
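A rough sketch of that workflow, adding a parallel restore (the database names and job count are illustrative, and the target database must already exist):
pg_dump -Fc nominatim -f nominatim.dump     # compressed custom-format dump
pg_restore -d newdb -j 4 nominatim.dump     # restore it with 4 parallel jobs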

Why does MongoDB take up so much space?

I am trying to store records with a set of doubles and ints (around 15-20) in MongoDB. The records mostly (99.99%) have the same structure.
When I store the data in ROOT, which is a very structured data-storage format, the file is around 2.5 GB for 22.5 million records. With Mongo, however, the database size (from the show dbs command) is around 21 GB, whereas the data size (from db.collection.stats()) is around 13 GB.
This is a huge overhead (to clarify: 13 GB vs 2.5 GB; I'm not even talking about the 21 GB), and I guess it is because Mongo stores both keys and values. So the question is, why doesn't Mongo do a better job of making it smaller, and how could it?
But the main question is, what is the performance impact of this? I have 4 indexes and they come out to 3 GB, so running the server on a single 8 GB machine could become a problem if I double the amount of data and try to keep a large working set in memory.
Any guesses as to whether I should be using SQL or some other DB? Or maybe just keep working with ROOT files, if anyone has tried them?
Basically, this is Mongo preparing for the insertion of data. Mongo preallocates storage for data to prevent (or minimize) fragmentation on disk. This preallocation shows up as files that the mongod instance creates.
First it creates a 64 MB file, then 128 MB, and so on, doubling each time until it reaches 2 GB files (the maximum size of preallocated data files).
There are some more things that Mongo does that may account for extra disk space, such as journaling...
For much, much more info on how MongoDB uses storage space, you can take a look at this page, in particular the section titled "Why are the files in my data directory larger than the data in my database?".
There are some things you can do to minimize the space used, but these techniques (such as using the --smallfiles option) are usually only recommended for development and testing, never for production.
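As a side note, before trying to shrink anything it helps to see how the space breaks down between data, indexes, and allocated files; a rough sketch (the database and collection names are illustrative):
mongo mydb --quiet --eval 'printjson(db.stats(1024 * 1024))'                          # dataSize, storageSize, indexSize, fileSize in MB
mongo mydb --quiet --eval 'printjson(db.mycollection.stats(1024 * 1024).indexSizes)'  # per-index sizes in MB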
Question: Should you use SQL or MongoDB?
Answer: It depends.
A better way to ask the question: should you use a relational database or a document database?
Answer:
If your data is highly structured (every row has the same fields), or you rely heavily on foreign keys and you need strong transactional integrity on operations that use those related records... use a relational database.
If your records are heterogeneous (different fields per document) or have variable length fields (arrays) or have embedded documents (hierarchical)... use a document database.
My current software project uses both. Use the right tool for the job!

Compact command not freeing up space in MongoDB 2.0

I just installed MongoDB 2.0 and tried to run the compact command instead of the repair command from earlier versions. My database is empty at the moment, meaning there is only one collection with 0 entries and the two system collections (indexes, users). Currently the db takes about 4 GB of space on the hard disk. The db is used as a temporary queue, with all items being removed after they have been processed.
I tried to run the following in the mongo shell.
use mydb
db.theOnlyCollection.runCommand("compact")
It returns with
ok: 1
But the same space is still taken up on the hard disk. I tried to compact the system collections as well, but that did not work.
When I run the normal repair command
db.repairDatabase()
the database is compacted and only takes 400 MB.
Does anyone have an idea why the compact command is not working?
Thanks a lot for your help.
Best
Alex
Collection compaction is not supposed to decrease the size of the data files. Its main point is to defragment collection and index data, combining unused space gaps into contiguous space so that new data can be stored there. Moreover, it may actually increase the size of the data files:
Compaction may increase the total size of your data files by up to 2GB. Even in this case, total collection storage space will decrease.
http://www.mongodb.org/display/DOCS/compact+Command#compactCommand-Effectsofacompaction
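For reference, a rough sketch of checking the effect of both commands from the shell (names taken from the question; repairDatabase applies to the MMAPv1-era storage used here):
mongo mydb --quiet --eval 'printjson(db.theOnlyCollection.stats().storageSize)'        # space currently allocated to the collection
mongo mydb --quiet --eval 'printjson(db.runCommand({ compact: "theOnlyCollection" }))' # defragments, does not shrink the data files
mongo mydb --quiet --eval 'printjson(db.repairDatabase())'                             # rewrites the data files and returns disk space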