I just installed MongoDB 2.0 and tried to run the compact command instead of the repair command I used with earlier versions. My database is empty at the moment: there is only one collection with 0 entries, plus the two system collections (indexes, users). Currently the db takes about 4 GB of space on the hard disk. The db is used as a temp queue, with all items being removed after they have been processed.
I tried to run the following in the mongo shell.
use mydb
db.theOnlyCollection.runCommand("compact")
It returns with
ok: 1
But the same space is still taken on the hard disk. I tried to compact the system collections as well, but that did not work.
When I run the normal repair command
db.repairDatabase()
the database is compacted and only takes 400 MB.
Does anyone have an idea why the compact command is not working?
Thanks a lot for your help.
Best
Alex
Collection compaction is not supposed to decrease the size of the data files. Its main purpose is to defragment collection and index data: it combines unused gaps into contiguous space so that new data can be stored there. Moreover, it may actually increase the size of the data files:
Compaction may increase the total size of your data files by up to 2GB. Even in this case, total collection storage space will decrease.
http://www.mongodb.org/display/DOCS/compact+Command#compactCommand-Effectsofacompaction
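If you want to verify what compaction actually did, compare the collection stats before and after. A minimal sketch in the mongo shell (using theOnlyCollection from the question):
use mydb
// before: note storageSize, the bytes allocated on disk for the collection
db.theOnlyCollection.stats()
// run compaction (equivalent to db.theOnlyCollection.runCommand("compact"))
db.runCommand({ compact: "theOnlyCollection" })
// after: storageSize is typically unchanged; only internal gaps were merged
db.theOnlyCollection.stats()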
I have a question regarding MongoDB's collection size.
I did a small stress test in which my MongoDB server was constantly inserting, deleting and updating data for about 48 hours. The documents were only of small size, simply a numerical value and a timestamp as well as an ID.
Now, after those 48 hours, the collection used for inserting, deleting and updating data was 98,000 bytes and the preallocated storage size was 696,320 bytes. It grew that much larger than the actual collection size because of one input spike during an insertion phase. The deletions that followed decreased the actual collection size again, but the preallocated storage size didn't shrink (AFAIK a common database management problem, since it's the same with e.g. MySQL).
After the stress test was completed I created a dump of my MongoDB database and dropped the database completely, so I could import the dump afterwards and see how the stats would look then. As I suspected, the collection size was still the same (98,000 bytes) but the preallocated storage size went down to 40,960 bytes (from 696,320 bytes before).
Since we want to try out MongoDB for an application that produces hundreds of MB of data, and therefore I/O traffic, every day, we need to keep the database and its occupied space to a minimum, preferably without having to create a dump, drop the whole database and re-import the dump every now and then.
Now my question is: is there a way to invoke MongoDB's garbage collection programmatically? The software behind it is written in Java, and my idea was to trigger the cleanup after a certain amount of time/operations, or after the preallocated storage size has reached a certain threshold.
Or maybe there's an even better (more elegant) way to minimize the occupied space?
Any help would be appreciated and I'll try to provide any further information if needed. Thanks in advance.
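For what it's worth, the shell helper db.repairDatabase() just wraps a server command, so any driver (including the Java one) can issue it programmatically. A minimal sketch of the command form, in shell syntax:
// repairDatabase is a plain server command, so it can be sent from driver code;
// note that it blocks the database and needs extra free disk space while it runs
db.runCommand({ repairDatabase: 1 })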
On my first server I get:
root@prod ~ # du -hs /var/lib/mongodb/
909G /var/lib/mongodb/
After migrating this database with mongodump/mongorestore, on my second server I get:
root@prod ~ # du -hs /var/lib/mongodb/
30G /var/lib/mongodb/
After I waited a few hours and mongo finished indexing, I got:
root@prod ~ # du -hs /var/lib/mongodb/
54G /var/lib/mongodb/
I tested the database and there's no corrupted or missing data.
Why is there such a big difference in size before and after the migration?
MongoDB does not return disk space to the operating system when the actual data size drops due to data deletion (among other causes). There's a decent explanation in the online docs:
Why are the files in my data directory larger than the data in my database?
The data files in your data directory, which is the /data/db directory
in default configurations, might be larger than the data set inserted
into the database. Consider the following possible causes:
Preallocated data files.
In the data directory, MongoDB preallocates data files to a particular
size, in part to prevent file system fragmentation. MongoDB names the
first data file <databasename>.0, the next <databasename>.1, etc. The
first file mongod allocates is 64 megabytes, the next 128 megabytes,
and so on, up to 2 gigabytes, at which point all subsequent files are
2 gigabytes. The data files include files with allocated space but
that hold no data. mongod may allocate a 1 gigabyte data file that may
be 90% empty. For most larger databases, unused allocated space is
small compared to the database.
On Unix-like systems, mongod preallocates an additional data file and
initializes the disk space to 0. Preallocating data files in the
background prevents significant delays when a new database file is
next allocated.
You can disable preallocation by setting preallocDataFiles to false.
However do not disable preallocDataFiles for production environments:
only use preallocDataFiles for testing and with small data sets where
you frequently drop databases.
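In the YAML config format introduced in MongoDB 2.6, this setting lives under the MMAPv1 storage options. A minimal mongod.conf sketch (test/dev use only, per the warning above):
storage:
  mmapv1:
    preallocDataFiles: false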
On Linux systems you can use hdparm to get an idea of how costly
allocation might be:
time hdparm --fallocate $((1024*1024)) testfile
The oplog.
If this mongod is a member of a replica set, the data directory
includes the oplog.rs file, which is a preallocated capped collection
in the local database. The default allocation is approximately 5% of
disk space on 64-bit installations, see Oplog Sizing for more
information. In most cases, you should not need to resize the oplog.
However, if you do, see Change the Size of the Oplog.
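To see what the oplog is actually costing you on a given member, the shell has helpers for it (a quick sketch; exact output varies by version):
// configured oplog size, used space, and the time window it currently covers
rs.printReplicationInfo()
// the same numbers as a plain object, handy in scripts
db.getReplicationInfo()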
The journal.
The data directory contains the journal files, which store write
operations on disk prior to MongoDB applying them to databases. See
Journaling Mechanics.
Empty records.
MongoDB maintains lists of empty records in data files when deleting
documents and collections. MongoDB can reuse this space, but will
never return this space to the operating system.
To de-fragment allocated storage, use compact, which de-fragments
allocated space. By de-fragmenting storage, MongoDB can effectively
use the allocated space. compact requires up to 2 gigabytes of extra
disk space to run. Do not use compact if you are critically low on
disk space.
Important
compact only removes fragmentation from MongoDB data files and does
not return any disk space to the operating system.
To reclaim deleted space, use repairDatabase, which rebuilds the
database which de-fragments the storage and may release space to the
operating system. repairDatabase requires up to 2 gigabytes of extra
disk space to run. Do not use repairDatabase if you are critically low
on disk space.
http://docs.mongodb.org/manual/faq/storage/
What they don't tell you are the two other ways to recover disk space: mongodump/mongorestore, as you did, or adding a new member to the replica set with an empty disk so that it writes its database files from scratch.
If you are interested in monitoring this, the db.stats() command returns a wealth of data on data, index, storage and file sizes:
http://docs.mongodb.org/manual/reference/command/dbStats/
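As a rough guide to reading that output (field names as returned by the MMAPv1-era dbStats):
var s = db.stats()
s.dataSize     // logical size of the documents themselves
s.storageSize  // space allocated to collections, including freed-but-held space
s.fileSize     // total size of the data files on disk, including preallocation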
Over time the MongoDB files develop fragmentation. When you do a "migration", or whack the data directory and force a re-sync, the files pack down. If your application does a lot of deletes, or updates that grow the documents, fragmentation develops fairly quickly. In our deployment it is updates that grow the documents that cause this. MongoDB moves a document when it sees that the updated version can't fit in the space of the original document. There is a way to add padding to the collection to avoid this; see the sketch below.
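One documented way to get predictable padding on an MMAPv1-era server is powers-of-two record allocation via collMod (a sketch; "mycollection" is a placeholder name):
// allocate record space in powers of two, so documents that grow can often
// be rewritten in place instead of being moved and leaving a hole behind
db.runCommand({ collMod: "mycollection", usePowerOf2Sizes: true })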
I am trying to store records with a set of doubles and ints (around 15-20 fields) in MongoDB. The records mostly (99.99%) have the same structure.
When I store the data in ROOT, which is a very structured data storage format, the file is around 2.5 GB for 22.5 million records. For Mongo, however, the database size (from the command show dbs) is around 21 GB, whereas the data size (from db.collection.stats()) is around 13 GB.
This is a huge overhead (to clarify: 13 GB vs 2.5 GB; I'm not even talking about the 21 GB), and I guess it is because Mongo stores both keys and values in every document. So the question is: why doesn't Mongo do a better job of making it smaller, and is there a way to make it do so?
But the main question is, what is the performance impact in this? I have 4 indexes and they come out to be 3GB, so running the server on a single 8GB machine can become a problem if I double the amount of data and try to keep a large working set in memory.
Any guesses as to whether I should be using SQL or some other DB? Or maybe just keep working with ROOT files, if anyone has tried them?
Basically, this is mongo preparing for the insertion of data. Mongo preallocates storage for data to prevent (or minimize) fragmentation on the disk. This preallocation is observed in the form of files that the mongod instance creates.
First it creates a 64 MB file, next 128 MB, then 256 MB, and so on, doubling until it reaches files of 2 GB (the maximum size of preallocated data files).
There are some other things mongo does that may use more disk space as well, such as journaling...
For much, much more info on how MongoDB uses storage space, you can take a look at this page, in particular the section titled Why are the files in my data directory larger than the data in my database?
There are some things that you can do to minimize the space that is used, but these techniques (such as using the --smallfiles option) are usually only recommended for development and testing use - never for production.
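For completeness, --smallfiles is passed when starting the server; it shrinks the initial data file and journal sizes. A minimal sketch (dev/test only; the dbpath is a placeholder):
mongod --smallfiles --dbpath /data/db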
Question: Should you use SQL or MongoDB?
Answer: It depends.
A better way to ask the question: should you use a relational database or a document database?
Answer:
If your data is highly structured (every row has the same fields), or you rely heavily on foreign keys and you need strong transactional integrity on operations that use those related records... use a relational database.
If your records are heterogeneous (different fields per document), have variable-length fields (arrays), or have embedded documents (hierarchical)... use a document database (see the sketch below).
My current software project uses both. Use the right tool for the job!
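To make the heterogeneous-records point concrete, a small mongo shell sketch (collection and field names are made up):
// two differently-shaped documents can live in the same collection
db.events.insert({ type: "click", x: 12, y: 40 })
db.events.insert({ type: "purchase", items: [{ sku: "a1", qty: 2 }], total: 19.99 })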
I have a huge amount of data in my mongodb. It's filled with tweets (50 GB) and my RAM is 8 GB. When querying, it retrieves all tweets and mongodb starts filling the RAM; when it reaches 8 GB it starts moving files to disk, and this is the part where it gets really slow. So I changed the query from using skips to using indexes. Now I have indexes, I query only 8 GB into my program, save the id of the last tweet used in a file, and the program stops. Then I restart the program and it gets the id of the last tweet from the file. But the mongod server is still occupying the RAM with the first 8 GB, which will no longer be used, because I have an index past it. How can I clean the memory of the mongodb server without restarting it?
(running on Windows)
I am a bit confused by your logic here.
So I changed the query from using skips to using indexes. Now I have indexes, I query only 8 GB into my program, save the id of the last tweet used in a file, and the program stops.
Using ranged queries will not reduce the amount of data you have to page in (in fact it might worsen it because of the index); it merely makes the query faster server-side by using an index instead of a huge skip (like a 42K+ row skip). If you are doing the same thing as that skip() via an index (without a covered index), you are still paging in exactly the same data.
It is slow due to memory mapping and your working set. You have more data than RAM, and on top of that you are actively using more of that data than fits in RAM, so you are probably page faulting all the time.
Restarting the program will not solve this, nor will clearing its data OS-side (with a restart or a specific command), because of your queries. You probably need to either:
Rethink your queries so that your working set is more in line with your memory
Or shard your data across many servers so that you don't have to scale up your primary server
Or get a bigger primary server (moar RAM!!!!!)
Edit
The LRU eviction of your OS should already be swapping out old data, since MongoDB is using its fully allocated lot. If that 8 GB isn't being swapped out, it is because your working set really is taking that full 8 GB (most likely with some swap on the end).
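If you want to verify this rather than guess, serverStatus exposes the relevant counters (a sketch; exact fields vary by platform and version):
var st = db.serverStatus()
st.mem                      // resident, virtual, and mapped memory, in MB
st.extra_info.page_faults   // cumulative page faults (platform dependent)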
I'm copying 100 million records (about 97 GB of data) to another server using copyDatabase in Mongo. Both servers have more than 500 GB of disk space. However, I notice that although the process is still running, the data files are no longer growing; they stop at xxxxx.11. Any idea?
copyDatabase will copy collection data over, then build indexes for each collection. Building indexes on a 100 GB data set can take a lot of time (especially with small amounts of RAM). It's likely that you're in the middle of a large index build.
You can check the progress by watching the logs and running db.currentOp() in the shell on the destination DB.
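A quick way to spot the index build among the in-progress operations (a sketch; the msg text differs between server versions):
// print any in-progress operation whose status message mentions an index
db.currentOp().inprog.forEach(function (op) {
  if (op.msg && /index/i.test(op.msg)) printjson(op);
})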