Stop MongoDB from creating backups - mongodb

Can anyone tell me how to stop MongoDB from creating backup copies?
If my DB name is "Database", it is creating backups like:
Database.1
Database.2
Database.3
.
.
.
Database.ns
I want to use only the working copy.

MongoDB allocates data files like this:
First, a namespace file (mydb.ns) and a 64MB data file (mydb.0). If more space is needed, it adds a 128MB file (mydb.1) and continues like this, doubling the file size each time until the files reach 2GB each (mydb.5 and all subsequent files).
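The allocation schedule just described amounts to simple doubling with a cap, which can be sketched in plain JavaScript (arithmetic only, not a MongoDB API):

```javascript
// Sketch of MMAPv1 data file preallocation: 64MB, doubling up to a 2GB cap.
const MB = 1024 * 1024;
const GB = 1024 * MB;

function dataFileSize(n) {
  // File mydb.<n>: 64MB doubled n times, capped at 2GB.
  return Math.min(64 * MB * Math.pow(2, n), 2 * GB);
}

// Total disk space consumed once files mydb.0 .. mydb.N exist.
function totalAllocated(lastFileIndex) {
  let total = 0;
  for (let n = 0; n <= lastFileIndex; n++) total += dataFileSize(n);
  return total;
}

console.log(dataFileSize(0) / MB);    // 64
console.log(dataFileSize(5) / GB);    // 2 -- mydb.5 is the first 2GB file
console.log(totalAllocated(5) / GB);  // 3.9375 -- files .0 through .5
```

Note how quickly the total grows: by the time mydb.5 exists, almost 4GB is allocated regardless of how much data is actually stored.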
This is a somewhat aggressive allocation pattern. If you perform a lot of in-place updates and deletes, your datafiles can fragment severely. Running the repair database command via db.runCommand({repairDatabase:1}) can help, but it requires even more disk space while it runs and it stalls writes to the DB. Make sure to carefully read the documentation first.
Before you do that, run db.stats(), then compare dataSize (the amount of data you actually stored), storageSize (the allocated size including padding, but w/o indexes), and fileSize (the disk space allocated). If the differences are huge (factors of > 3), repair will probably reclaim quite a bit of disk space. If not, it can't help you because it can't magically shrink your data.
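That comparison can be turned into a rough heuristic. The object below only mimics the field names of a db.stats() result; the numbers and the factor-of-3 threshold come from the paragraph above:

```javascript
// Rough heuristic: repairDatabase is likely worthwhile when fileSize
// dwarfs dataSize. Field names mirror db.stats(); the sample numbers
// are invented for illustration.
function repairLooksWorthwhile(stats) {
  return stats.fileSize / stats.dataSize > 3;
}

const GB = 1024 ** 3;
const stats = { dataSize: 2 * GB, storageSize: 4 * GB, fileSize: 9 * GB };
console.log(repairLooksWorthwhile(stats)); // true: 9GB of files for 2GB of data
```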

Related

Mongodb data files become smaller after migration

On my first server I get:
root@prod ~ # du -hs /var/lib/mongodb/
909G /var/lib/mongodb/
After migrating this database with mongodump/mongorestore,
on my second server I get:
root@prod ~ # du -hs /var/lib/mongodb/
30G /var/lib/mongodb/
After I waited a few hours and mongo finished indexing, I got:
root@prod ~ # du -hs /var/lib/mongodb/
54G /var/lib/mongodb/
I tested the database and there's no corrupted or missing data.
Why is there such a big difference in size before and after migration?
MongoDB does not return disk space to the operating system when the actual data size drops due to deletion, among other causes. There's a decent explanation in the online docs:
Why are the files in my data directory larger than the data in my database?
The data files in your data directory, which is the /data/db directory
in default configurations, might be larger than the data set inserted
into the database. Consider the following possible causes:
Preallocated data files.
In the data directory, MongoDB preallocates data files to a particular
size, in part to prevent file system fragmentation. MongoDB names the
first data file <dbname>.0, the next <dbname>.1, etc. The
first file mongod allocates is 64 megabytes, the next 128 megabytes,
and so on, up to 2 gigabytes, at which point all subsequent files are
2 gigabytes. The data files include files with allocated space but
that hold no data. mongod may allocate a 1 gigabyte data file that may
be 90% empty. For most larger databases, unused allocated space is
small compared to the database.
On Unix-like systems, mongod preallocates an additional data file and
initializes the disk space to 0. Preallocating data files in the
background prevents significant delays when a new database file is
next allocated.
You can disable preallocation by setting preallocDataFiles to false.
However do not disable preallocDataFiles for production environments:
only use preallocDataFiles for testing and with small data sets where
you frequently drop databases.
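For a throwaway test instance only (per the warning above), preallocation can be disabled at startup. The exact option spelling varies by server version, so treat the lines below as a sketch and check the docs for your version:

```shell
# MongoDB 2.x style: disable data file preallocation (testing only!)
mongod --noprealloc --dbpath /tmp/mongo-test

# or, equivalently, in an ini-style mongod.conf:
#   noprealloc = true
```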
On Linux systems you can use hdparm to get an idea of how costly
allocation might be:
time hdparm --fallocate $((1024*1024)) testfile
The oplog.
If this mongod is a member of a replica set, the data directory
includes the oplog.rs file, which is a preallocated capped collection
in the local database. The default allocation is approximately 5% of
disk space on 64-bit installations, see Oplog Sizing for more
information. In most cases, you should not need to resize the oplog.
However, if you do, see Change the Size of the Oplog.
The journal.
The data directory contains the journal files, which store write
operations on disk prior to MongoDB applying them to databases. See
Journaling Mechanics.
Empty records.
MongoDB maintains lists of empty records in data files when deleting
documents and collections. MongoDB can reuse this space, but will
never return this space to the operating system.
To de-fragment allocated storage, use compact, which de-fragments
allocated space. By de-fragmenting storage, MongoDB can effectively
use the allocated space. compact requires up to 2 gigabytes of extra
disk space to run. Do not use compact if you are critically low on
disk space.
Important
compact only removes fragmentation from MongoDB data files and does
not return any disk space to the operating system.
To reclaim deleted space, use repairDatabase, which rebuilds the
database which de-fragments the storage and may release space to the
operating system. repairDatabase requires up to 2 gigabytes of extra
disk space to run. Do not use repairDatabase if you are critically low
on disk space.
http://docs.mongodb.org/manual/faq/storage/
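The FAQ's compact-vs-repairDatabase distinction can be condensed into a toy bookkeeping model (plain JavaScript, not MongoDB code; the sizes in GB are invented):

```javascript
// compact packs records inside the existing files: fragmentation drops,
// fileGB stays put. repairDatabase rewrites the files: simplified here
// to shrinking them down to the data actually stored.
function compact(db) {
  return { ...db, fragmentedGB: 0 };
}
function repairDatabase(db) {
  return { dataGB: db.dataGB, fragmentedGB: 0, fileGB: db.dataGB };
}

const db = { dataGB: 3, fragmentedGB: 4, fileGB: 10 };
console.log(compact(db).fileGB);        // 10 -- nothing returned to the OS
console.log(repairDatabase(db).fileGB); // 3  -- space released
```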
What they don't tell you are the two other ways to restore/recover disk space: mongodump/mongorestore, as you did, or adding a new member to the replica set with an empty disk, so that it writes its database files from scratch.
If you are interested in monitoring this, the db.stats() command returns a wealth of data on data, index, storage and file sizes:
http://docs.mongodb.org/manual/reference/command/dbStats/
Over time, MongoDB's data files develop fragmentation. When you do a "migration", or wipe the data directory and force a re-sync, the files pack down. If your application does a lot of deletes, or updates that grow documents, fragmentation develops fairly quickly. In our deployment it is the updates that grow documents that cause this: MongoDB moves a document when it sees that the updated version no longer fits in the space of the original record. You can adjust the collection's padding factor to make this happen less often.
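That move-on-growth behaviour can be sketched as a toy model (not MongoDB code; the 1.5 padding factor is a made-up illustration, the real factor adapts per collection):

```javascript
// Toy model of MMAPv1 record moves: an update that outgrows the record's
// allocated space forces a move, leaving a reusable hole at the old spot.
function updateRecord(record, newSize) {
  if (newSize <= record.allocated) {
    return { moved: false, record: { ...record, used: newSize } };
  }
  // Document no longer fits: it is rewritten elsewhere with fresh padding,
  // and the old slot becomes a hole that is never returned to the OS.
  const paddingFactor = 1.5; // hypothetical; the real factor is adaptive
  return {
    moved: true,
    record: { allocated: Math.ceil(newSize * paddingFactor), used: newSize },
  };
}

let rec = { allocated: 100, used: 80 };
console.log(updateRecord(rec, 95).moved);  // false: still fits in place
console.log(updateRecord(rec, 120).moved); // true: record must move
```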

Mongodb normal exit before applying a write lock

I am using Python, Scrapy, and MongoDB for my web scraping project, and I scrape about 40GB of data daily. Is there a way, or a setting in the mongodb.conf file, to make MongoDB exit cleanly before taking a write lock on the db due to a disk-full error?
I ask because I run into this disk-full error in MongoDB every time, and then I have to manually re-install MongoDB to remove the write lock from the db. I can't run the repair and compact commands on the database, because running them also requires free space.
MongoDB doesn't handle disk-full errors very well in certain cases, but you do not have to uninstall and then re-install MongoDB to remove the lock: you can simply delete the mongod.lock file. As long as you have journaling enabled, your data should be fine. Of course, at that point you still can't add more data to the MongoDB databases.
You probably don't need repair, and compact only helps if you have actually deleted data from MongoDB. compact does not compress data, so it is only useful if you have indeed deleted data.
Constantly adding and then later deleting documents can cause fragmentation and leave a lot of disk space unused. You can mostly prevent that with the usePowerOf2Sizes option that you can set on collections. compact mitigates this by rewriting the database files as well, but as you said, you need free disk space for that. I would also advise you to add some monitoring that warns you when your data size reaches 50% of your full disk space; at that point there is still plenty of time to use compact to reclaim unused space.
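The idea behind usePowerOf2Sizes can be sketched with a simplified allocation model (illustrative arithmetic, not the server's exact rule; the 32-byte minimum size class is an assumption):

```javascript
// With usePowerOf2Sizes, record allocations are rounded up to the next
// power of two, so a freed 600-byte slot lands in the same 1024-byte
// size class as any other document that rounds to 1024, and can be reused.
function powerOf2Allocation(bytes) {
  let size = 32; // minimal size class, chosen for illustration
  while (size < bytes) size *= 2;
  return size;
}

console.log(powerOf2Allocation(600));  // 1024
console.log(powerOf2Allocation(1000)); // 1024 -- same size class, slot reusable
console.log(powerOf2Allocation(5000)); // 8192
```

Because deleted slots come in only a handful of sizes, new documents fit into old holes far more often than with exact-fit allocation, which is what keeps fragmentation down.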

db.repairDatabase() did not reduce size of database

I have a db that is about 8GB.
I made a copy of it with copydb.
Then I pruned the copy using the js console.
Then I ran a repairDatabase, and the copy is still the exact same size as the original.
In all likelihood, this means that you did not free up enough space to return an entire extent or file to the OS. Imagine that you have five 2GB files (MongoDB preallocates files in 2GB increments after the first few smaller files), and now imagine that you had 8GB of data in this DB. The last file will always be empty, because MongoDB preallocates a file before it needs it. So the 8GB of data occupies four 2GB files, and one 2GB file is empty.
Now you do some pruning - maybe even 1.8GBs worth of deleting of stuff. You run repairDB which rewrites every single record, as compactly as possible in a new set of database files. Except it still needs the same five 2GB files because the fourth file has 100MB of data and the last file always has to be empty.
You can look at the output of db.stats() to see what the data size is compared to the storage size, but the fact is that these are relatively small numbers compared to the size of allocated files and that's likely why you are seeing what you are seeing.
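The arithmetic in this answer can be checked directly; the sketch below ignores the smaller initial files and counts only 2GB extents plus the one always-empty preallocated file:

```javascript
// Files needed to hold `dataGB` of data in 2GB extents, plus the one
// preallocated empty file MongoDB always keeps ahead of the write point.
const filesHolding = dataGB => Math.ceil(dataGB / 2);
const filesOnDisk = dataGB => filesHolding(dataGB) + 1;

console.log(filesOnDisk(8));   // 5: four 2GB files with data + one empty
console.log(filesOnDisk(6.2)); // 5 as well -- pruning 1.8GB changed nothing
```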

mongo db --smallfiles switch drawbacks

I want to use MongoDB for my new project. The problem is, Mongo uses pre-allocated files:
Each datafile is preallocated to a particular size. (This is done to prevent file system fragmentation, among other reasons.) The first filename for a database is .0, then .1, etc. .0 will be 64MB, .1 128MB, et cetera, up to 2GB. Once the files reach 2GB in size, each successive file is also 2GB. Thus, if the last datafile present is, say, 1GB, that file might be 90% empty if it was recently created.
from here : http://www.mongodb.org/display/DOCS/Excessive+Disk+Space
And it's normal to have many 2GB files with nothing in them. There is a --smallfiles switch to limit these files to 512MB:
--smallfiles => Use a smaller initial file size (16MB) and maximum size (512MB)
I want to know: is using smallfiles good for production, and what are its drawbacks?
There is a --noprealloc switch, but it's not good for production; there is no such note about smallfiles, though.
You would usually only use smallfiles if you are creating a whole bunch of databases, if you're only operating out of a few databases it doesn't save you enough to mess with.
We haven't seen any performance problems with it for customers that have many, many DBs (and who actually benefit from small files). Their activity level is normally somewhat low compared to other installs, though. Based on what Mongo is doing, it might be slightly slower for some operations, but I don't think you'll ever notice.
Additionally, if running in the AWS cloud and using m3.small instances with SSDs, you are limited to 4GB of storage. Setting this option allows you to run a small SSD-backed MongoDB node, which could be sufficient for small tasks.
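With --smallfiles the doubling pattern is the same but with the bounds quoted above (16MB initial, 512MB cap), which is what makes a 4GB volume workable. A quick arithmetic sketch:

```javascript
const MB = 1024 * 1024;

// Data file sizes with --smallfiles: start at 16MB, double, cap at 512MB.
function smallFileSize(n) {
  return Math.min(16 * MB * Math.pow(2, n), 512 * MB);
}

console.log(smallFileSize(0) / MB); // 16
console.log(smallFileSize(5) / MB); // 512 -- the cap is reached here
console.log(smallFileSize(9) / MB); // still 512: every later file is capped
```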

Compact command not freeing up space in MongoDB 2.0

I just installed MongoDB 2.0 and tried to run the compact command instead of the repair command used in earlier versions. My database is empty at the moment: there is only one collection with 0 entries, plus the two system collections (indexes, users). Currently the db takes about 4GB of space on the hard disk. The db is used as a temp queue, with all items being removed after they have been processed.
I tried to run the following in the mongo shell.
use mydb
db.theOnlyCollection.runCommand("compact")
It returns with
ok: 1
But the same space is still taken on the hard disk. I tried to compact the system collections as well, but that did not work.
When I run the normal repair command
db.repairDatabase()
the database is compacted and only takes 400 MB.
Anyone has an idea why the compact command is not working?
Thanks a lot for your help.
Best
Alex
Collection compaction is not supposed to decrease the size of the data files. Its main purpose is to defragment collection and index data: combining unused gaps into continuous space so new data can be stored there. Moreover, it may actually increase the size of the data files:
Compaction may increase the total size of your data files by up to 2GB. Even in this case, total collection storage space will decrease.
http://www.mongodb.org/display/DOCS/compact+Command#compactCommand-Effectsofacompaction