Should I let MongoDB make use of the new hard disk in this way?

I have MongoDB v2.4.6 running on Ubuntu 13.04. MongoDB stores all its data in /var/lib/mongodb, and it is now running out of disk space. Fortunately, I got a new hard disk, which is installed, partitioned with fdisk, formatted, and available as /dev/sda3. Unfortunately, I don't know how to let MongoDB make use of the new disk, because my knowledge of Ubuntu and MongoDB is very limited. After some research on the internet, it seems that I should execute the following command:
sudo mount /dev/sda3 /var/lib/mongodb
Is this what I need to do to let MongoDB use the new disk? If so, will MongoDB automatically and intelligently spread its data onto this disk? Are there any other things I should do? Thank you.

Unfortunately this will not be that straightforward. Even if the mount succeeds, it will not move the files at all. What you can do instead is (see the sketch after this list):
mount the disk elsewhere (mkdir /var/lib/mongodb1, mount /dev/sda3 /var/lib/mongodb1)
stop mongod
copy the files from /var/lib/mongodb to /var/lib/mongodb1 (this only helps if the new disk is bigger)
reconfigure mongod to use the new directory as its db path, or swap the directory names with mv commands
start mongod
if everything went fine and mongod started correctly (check it first!!!), you can delete the old data.
If the new disk is the same size, moving the data only postpones the same problem; if you need more space than a single disk provides, you should play around with RAID and/or LVM and more disks.
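A minimal sketch of those steps, assuming Ubuntu's default mongodb service name and config file (both may differ on your system):
sudo mkdir /var/lib/mongodb1
sudo mount /dev/sda3 /var/lib/mongodb1
sudo service mongodb stop
sudo cp -a /var/lib/mongodb/. /var/lib/mongodb1/
# point dbpath in /etc/mongodb.conf at /var/lib/mongodb1, then:
sudo service mongodb start
Also remember to add the new mount to /etc/fstab so it survives a reboot.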

Related

What files should I source control with MongoDB

I've been using a MongoDB instance in my docker compose script. I want to set it up so I can move my database from PC to PC and keep all the same data.
There seem to be quite a few files in a MongoDB docker installation: .lock, .turtle, .wt, .bson, diagnostic.data, journal, etc.
Is there a rule of thumb for what I should store and what I should ignore in my repo? It's been unclear to me; I don't want to store any files that could affect booting on another docker container.
Best is to preserve everything under the mongod dbPath folder, but some files/folders can of course be removed, like diagnostic.data. That folder contains metrics collected during operation, needed for performance analysis at a later stage, but in general the already collected stats are not necessary for the mongod process to run.
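As a sketch, one way to copy the dbPath while skipping diagnostic.data (the /data/db source and ./mongo-data target are assumptions; adjust them to your volumes):
rsync -a --exclude 'diagnostic.data' /data/db/ ./mongo-data/
Take the copy while the container (and thus mongod) is stopped, otherwise the files may be captured in an inconsistent state.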

MongoDB does not start after restoring /var/lib/mongodb manually

Our server's RAID failed today. Right now we have a zip file of the MongoDB dbPath, but after we extract it, we cannot start MongoDB again.
I would appreciate any help.
After a lot of searching and contacting MongoDB support, we found out that we had rsynced the MongoDB directory while it was being written to.
So we could not restore the data, and we had data loss!
If you want to use rsync to back up MongoDB, you should stop mongod first.
This may help.
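A minimal sketch of a safe rsync backup, assuming a service named mongod and a hypothetical /backups/mongodb target:
sudo service mongod stop
rsync -a /var/lib/mongodb/ /backups/mongodb/
sudo service mongod start
If downtime is not acceptable, take the copy from a stopped or locked secondary instead, so the primary keeps serving traffic.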

What are the important MongoDB data files for backup

If I want to back up a database by copying raw files, what files do I need to copy? Only db-name.ns, db-name.0, db-name.1, ... or the whole folder (local.ns.., journal)? I'm running a replica set. I understand the procedure for locking a hidden secondary node and then copying its files to a new location. But I'm wondering whether I need to copy the whole folder or just some files.
Thx
Simple answer: All of them. As obvious as it might sound. And here is why:
If you don't copy a namespaces file, your database will most likely not work.
When not copying all data files, some of your data will be missing and your indices will point to invalid locations. The database in question might still work (minus the data stored in the missing data file), but I would not bet on that – and since the data was important enough to create a backup in the first place, you don't want this to happen, do you?
The config, admin and local databases are vitally necessary for their respective features – and since you used those features, you probably want to use them after a restore, too.
How do I back up all files?
The best solution, save for MMS backup, that I have found so far is to create LVM snapshots of the filesystem the MongoDB data resides on. In order for this to work, the journal needs to be included. Usually, you don't need a dedicated backup node for this approach. It is a bit complicated to set up, though.
Preparing LVM backups
Let's assume you have your data in the default data directory /data/db and you have not changed any paths. Then you would mount a logical volume to /data/db and use this to hold the data. Assuming that you don't have anything like this, here is a step by step guide:
Create a logical volume big enough to hold your data. I will call that one /dev/VolGroup/LogVol1 from now on. Make sure that you only use about 80% of the available disk space in the volume group for creating the logical volume.
Create a filesystem on the logical volume. I prefer XFS, so we create an xfs filesystem on /dev/VolGroup/LogVol1:
mkfs.xfs /dev/VolGroup/LogVol1
Mount the newly created filesystem on /mnt
mount /dev/VolGroup/LogVol1 /mnt
Shut down mongod:
killall mongod
(Note that the upstart scripts sometimes have problems shutting down mongod, and this command gracefully stops mongod anyway).
Copy the data files from /data/db to /mnt by issuing
cp -a /data/db/* /mnt
Adjust your /etc/fstab so that the logical volume gets mounted on reboot:
# The noatime parameter increases io speed of mongod significantly
/dev/VolGroup/LogVol1 /data/db xfs defaults,noatime 0 1
Unmount the logical volume from its current mount point and remount it at the correct one:
cd && umount /mnt/ && mount /data/db
Restart mongod
Creating a backup
Creating a backup now becomes as easy as
Create a snapshot:
lvcreate -l100%FREE -s -n mongo_backup /dev/VolGroup/LogVol1
Mount the snapshot:
mount /dev/VolGroup/mongo_backup /mnt
Copy it somewhere. The reason we need to do this is that the snapshot can only be kept as long as the changes to the data files do not exceed the space in the volume group that you did not allocate during preparation. For example, if you have a 100GB disk and you allocated 80GB for /dev/VolGroup/LogVol1, the snapshot size would be 20GB. As long as the changes on the filesystem since the point you took the snapshot are less than 20GB, everything runs fine. After that, the filesystem will refuse to take any changes. So you aren't in a hurry, but you should definitely move the data to an offsite location, an FTP server or whatever you deem appropriate. Note that compressing the data files can take quite long and you might run out of "change space" before finishing. Personally, I like to have a slower HDD as a temporary place to store the backup, doing all other operations on the HDD. So my copy command looks like
cp -a /mnt/* /home/mongobackup/backups
when the HDD is mounted on /home/mongobackup.
Destroy the snapshot:
umount /mnt && lvremove /dev/VolGroup/mongo_backup
The space allocated for the snapshot is released and the restrictions to the amount of changes to the filesystem are removed.
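Putting the cycle together, a minimal backup script could look like the following sketch; it reuses the volume names and mount points assumed above, and /home/mongobackup/backups is just the example target from before:
#!/bin/sh
set -e
# take a snapshot using the space left unallocated in the volume group
lvcreate -l100%FREE -s -n mongo_backup /dev/VolGroup/LogVol1
# mount the snapshot and copy the data files off it
mount /dev/VolGroup/mongo_backup /mnt
cp -a /mnt/* /home/mongobackup/backups
# release and destroy the snapshot (-f skips the confirmation prompt)
umount /mnt
lvremove -f /dev/VolGroup/mongo_backup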
The whole db data folder, plus wherever you have your logs and journal.
The best solution to back up data on MongoDB would be to use the Mongo Monitoring Service (MMS). All other solutions, including copying files manually, mongodump and mongoexport, are way behind MMS.

MongoDB does not see database or collections after migrating from localhost to EBS volume

full disclosure: I am a complete n00b to mongodb and am just getting my feet wet with using mongo on AWS (but have 2 decades working in IT so not a total n00b :P)
I set up an EBS volume and installed mongo on an EC2 instance.
My problem is that I provisioned too small an EBS volume initially.
When I realized this I:
created a new larger EBS volume
mounted it on the server
stopped mongo ( $ sudo service mongod stop)
copied all my /data/db files into the new volume
updated conf files and fstab (dbpath, logpath, pidfilepath and mount point for new volume respectively)
restarted mongod
When I execute: $ sudo service mongod start
- everything runs fine.
- I can futz about in the admin and local databases.
However, when I run the mongo shell command: > show databases
- I only see the admin and local.
- the database I copied into the new volume (named encompass) is not listed.
I still have a working local copy of the database so my data is not lost, just not sure how best to move mongo data around other than:
A) start all over importing the data to the db on the AWS server (not what I would like since it is already loaded in my local db)
B) copy the local db to the new EBS volume again (also not preferred, but better than importing all the data from scratch again!).
NOTE: originally I secure copied the data into the EBS volume with this command:
$ scp -r -i <key> <local-path> ec2-user@<host>:<remote-path>
then when I copied between volumes I used a vanilla cp command.
Did I miss something here?
The best I could find on SO and the web was this process (How to scale MongoDB?), but perhaps I missed a switch in a command or a nuance to the process that rendered my database files inert/useless?
Any idea how I can get mongo to see my other database files and collections?
Or did I make an irreversible error somewhere along the way?
Thanks for any help!!
Are you sure your conf file is being loaded? As a test, you can run mongod and specify the path to your db directly, i.e.:
mongod --dbpath c:\mongo\data\db (this is Windows; Unix syntax may vary a bit)
run this from the command line and see what, if anything, mongo complains about.
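On Linux, the equivalent test would look something like this (the dbpath below is the hypothetical mount point of your new EBS volume; substitute your own):
mongod --dbpath /data/db
If mongod starts cleanly against the copied files but the service does not see your database, the service is probably reading a different dbpath than you think.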
Database files are finicky and easy to damage when moved around. Before copying from one database to another, you should probably seed the target database; a few dummy entries will tell you the database is working.
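For instance, a quick smoke test from the command line (the test database and collection names are arbitrary):
mongo --eval 'db.getSiblingDB("test").smoke.insert({ok: 1})'
mongo --eval 'printjson(db.getSiblingDB("test").smoke.findOne())'
If both the write and the read succeed, the deployment itself is healthy and the problem is confined to the copied files.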

Is it normal for MongoDB's whole /data/db to be gone after an electric trip that results in a crash

I have a single machine that has MongoDB with its data at /data/db as usual.
When my machine crashed due to an electric power trip, MongoDB refused to start at launch (Mac OS X Server via LaunchAgent) and /data/db mysteriously disappeared!
All log files were wiped out as well. This happened on my development SSD MBA, and I thought it was just a weird SSD case. But my XServe server is getting it as well when the power trips.
Am I missing some data protection articles somewhere? Surely it can't be this unreliable, just deleting /data/db!!??
MongoDB will never ever remove your database files!
In case of a crash you have to start mongod using the --repair option.
In addition, the new journaling option of MongoDB in v1.8+ should help a lot when you run MongoDB as a standalone service.
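A minimal sketch of both suggestions, assuming the default /data/db path:
mongod --dbpath /data/db --repair
mongod --dbpath /data/db --journal
Run the repair first; once it finishes, start mongod normally with journaling enabled.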
No, that is not normal.
If it won't start, it's likely MongoDB is indicating that you need to run a repair, because mongod.lock is present in /data/db and has a certain state. But that would mean /data/db exists.
If /data/db existed but were empty (which in this case would obviously be bad), it would start right up.
If your log(s) are missing, that sounds like a more general disk issue.
So check the startup message about mongod.lock and whether there is data there. Also, with v1.8+, use journaling. (Albeit you wouldn't lose all data files even without journaling.)