migrate the MMAPv1 generated data to WiredTiger - mongodb

I am running a (keystonejs) webapp using MongoDB 3.0 as its database. I cloned the webapp and ran a second instance against a MongoDB 3.2 release (on a freshly generated, still empty database). What I need to do now is get the data from the first database into the second. Since MongoDB 3.2 defaults to a different storage engine, WiredTiger, the clone uses that one, while the original app uses MMAPv1. Is there an easy way to migrate data created under MMAPv1 to WiredTiger?

Create a backup of the database on your old server using mongodump, restore it into the new one using mongorestore, and you're done. It's covered quite well in the documentation.
https://docs.mongodb.org/manual/tutorial/change-standalone-wiredtiger/
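For reference, a minimal sketch of those two steps, assuming the old MMAPv1 instance listens on localhost:27017 and the new WiredTiger instance on localhost:27018 (both placeholders):

    # Dump everything from the old MMAPv1 instance into ./dump
    mongodump --host localhost --port 27017 --out ./dump

    # Restore the dump into the new WiredTiger instance; mongorestore
    # rewrites the documents through the target's storage engine
    mongorestore --host localhost --port 27018 ./dump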

You can create a replica set and add the new machine to it. Doing so, you'll end up with the latest data on the newer server. Once the initial sync is over, promote the new machine to primary and shut down the old server if you want. This way you can easily clone your existing data to WiredTiger without losing data or negatively affecting the existing application.
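A rough sketch of that approach, assuming the old server is already started with a replica set name (e.g. --replSet rs0) and the new WiredTiger server is reachable as newhost:27017 (placeholder names):

    # On the old primary: add the new WiredTiger member, which triggers
    # an initial sync of all data onto it
    mongo --port 27017 --eval 'rs.add("newhost:27017")'

    # Check replication progress until the new member reports SECONDARY
    mongo --port 27017 --eval 'rs.status()'

    # Once it is in sync, step the old primary down so the new member
    # can take over, then retire the old server
    mongo --port 27017 --eval 'rs.stepDown()'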

Related

Recovering Mongodb collections and documents data

Good day to you all.
I am currently having a hard time restoring a MongoDB database. Here is what happened.
Basically, my server and a MongoDB 3.2 server are running on DigitalOcean.
I made some changes to the mongod.conf file to allow remote access from my local machine. Since then, it no longer worked: I got a MongoDB connection failed error, so I reinstalled a new MongoDB version, 4.4, and it doesn't want to load the existing data located in /var/lib/mongodb. So I downloaded the whole set of data files, as posted above, to my local machine.
Now I at least want to open the current database on my local machine, but I couldn't find any proper way to achieve this. Thanks for looking into this issue.
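A minimal sketch of one way to attempt this, assuming the copied files sit in ~/mongo-data (a placeholder path) and that a mongod binary matching the original 3.2 server is installed locally; a 4.4 binary will not open data files left behind by a 3.2 server, since MongoDB only supports moving up one release series at a time:

    # Start a local MongoDB 3.2 mongod against the copied data directory
    mongod --dbpath ~/mongo-data --port 27017

    # Once it is up, inspect the data with the shell, or dump it for a
    # later restore into a newer server
    mongodump --port 27017 --out ./dump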

Share MongoDB across 2 Docker containers having different MongoDB versions

A fully-fledged mongo v3.2 instance with data is already running in a container.
I need to create a mongo v3.6 container instance with the same data as v3.2.
I do not have space to clone the data on the server.
I tried a lot of stuff.
Can I point to the data of the v3.2 from my v3.6 so that it is shared and I save space?
You can try this; I don't know if it will work, because of the different MongoDB versions.
You can create a sharded cluster and add your old DB as a shard.
I got the space cleared, took the dump, and made a new instance of mongo. It works like a charm now. Sharing the volume was corrupting the data.
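For reference, a minimal sketch of that dump-and-restore route between containers, assuming the containers are named mongo32 and mongo36 (placeholder names):

    # Dump everything from the v3.2 container into an archive on the host
    docker exec mongo32 sh -c 'mongodump --archive' > mongo32.archive

    # Restore the archive into the v3.6 container
    docker exec -i mongo36 sh -c 'mongorestore --archive' < mongo32.archive

    # If host disk space is tight, the two commands can also be piped
    # together directly, skipping the intermediate archive file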

Change Storage Engine to WiredTiger for data from mongo backup

We are running MongoDB 2.6.1 and we would like to upgrade to 3.x.
My question: since we need to change the storage engine, can we do it with the files that come from a MongoDB backup, instead of doing a mongodump/mongorestore (as the docs say)?

MongoDB 2.2: why didn't replication catch up a collection following a dump/restore?

We have a three-server replicaset running MongoDB 2.2 on Ubuntu 10.04, and recently had to upgrade the hard drive for each server where one particular database resides. This database contains log information for web service requests, where they write to collections in hourly buckets using the current timestamp to determine the name, e.g. log_yyyymmddhh.
I performed this process (a rough command sketch follows the list):
1. Back up the database on the primary server with mongodump --db log_db.
2. Take a secondary server offline and replace the disk.
3. Bring the secondary server up in standalone mode (i.e. comment out the replSet entry in /etc/mongodb.conf before starting the service).
4. Restore the database on the secondary server with mongorestore --drop --db log_db.
5. Add the secondary server back into the replicaset and bring it online, letting replication catch up the hourly buckets that were updated/created while it had been offline.
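For concreteness, those steps might have looked roughly like this (paths are placeholders, and this is a reconstruction of the question's process, not a recommendation):

    # 1. On the primary: dump the log database
    mongodump --db log_db --out /backup/log_db_dump

    # 2-3. On the secondary, after replacing the disk: comment out the
    #      replSet entry and start mongod in standalone mode
    sudo sed -i 's/^replSet/#replSet/' /etc/mongodb.conf
    sudo service mongodb start

    # 4. Restore the dump onto the standalone secondary
    mongorestore --drop --db log_db /backup/log_db_dump/log_db

    # 5. Re-enable the replSet entry and restart so the node rejoins the set
    sudo sed -i 's/^#replSet/replSet/' /etc/mongodb.conf
    sudo service mongodb restart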
Everything seemed to go as expected, except that the collection which was the current bucket at the time of the backup was not brought up to date by replication. I had to manually copy that collection over by hand to get it up to date. Note that collections which were created after the backup were synched just fine.
What did I miss in this process that caused MongoDB not to get things back in synch for that one collection? I assume something got out of whack with regard to the oplog?
Edit 1:
The oplog on the primary showed that its earliest timestamp went back a couple of days, so there should have been plenty of space to maintain transactions for a few hours (which was the time the secondary was offline).
Edit 2:
Our MongoDB installation uses two disk partitions: /dev/sda1 and /dev/sdb1. The primary MongoDB directory /var/lib/mongodb/ is on /dev/sda1, and holds several databases, while the log database resides by itself on /dev/sdb1. There's a sym link /var/lib/mongodb/log_db which points to a directory on /dev/sdb1. Since the log db was getting full, we needed to upgrade the disk for /dev/sdb1.
You should be using mongodump with the --oplog option. Running a full database backup with mongodump on a replicaset that is updating collections at the same time may not leave you with a consistent backup. This becomes worse with larger databases, more collections and more frequent updates/inserts/deletes.
From the documentation for your version (2.2) of MongoDB (it's the same for 2.6 but just to be as accurate as possible):
--oplog
Use this option to ensure that mongodump creates a dump of the database that includes an oplog, to create a point-in-time snapshot of the state of a mongod instance. To restore to a specific point-in-time backup, use the output created with this option in conjunction with mongorestore --oplogReplay.
Without --oplog, if there are write operations during the dump operation, the dump will not reflect a single moment in time. Changes made to the database during the update process can affect the output of the backup.
http://docs.mongodb.org/v2.2/reference/mongodump/
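A minimal sketch of the point-in-time variant of that backup, assuming the dump is taken from the primary (host and paths are placeholders):

    # Dump the whole instance, capturing oplog entries written during the
    # dump (--oplog works on full-instance dumps, not --db dumps)
    mongodump --host primary.example.com --oplog --out /backup/full_dump

    # Replay those oplog entries on restore to land on a single point in time
    mongorestore --oplogReplay /backup/full_dump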
This is not covered well in most MongoDB tutorials around backups and restores. Generally you are better off if you can perform a live snapshot of the storage volume your database resides on (assuming your storage solution has a live snapshot ability compatible with MongoDB). Failing that, your next best bet is taking a secondary offline and then performing a snapshot or backup of the database files. Mongodump on a live database is increasingly a less optimal solution for larger databases due to performance issues.
I'd definitely take a look at the MongoDB overview of backup options: http://docs.mongodb.org/manual/core/backups/
I would guess this has to do with the oplog not being long enough, although it seems like you checked that and it looked reasonably big.
Still, when adding new members to a replica set you shouldn't be snapshotting and restoring them. It's better to simply add a new member and let replication happen by itself. This is described in the Mongo docs and is the process I've always followed.

Is it possible to undo a drop operation on a Mongo Collection?

I have a standalone setup of MongoDB 2.0.2 with default settings. I accidentally dropped a collection from the shell using db.mycollection.drop(). Is there any way I can undo this operation and roll back to the previous state?
With a standalone setup (no replica set), I am afraid you may not be able to recover your data. Read this post: How to recover a dropped MongoDB database?