I tried to use mongodump (version 3.2.5) to back up MongoDB (version 2.4.9). It succeeded, but I can't restore the backup. Why?
./mongorestore -h 127.0.0.1 -u xxx -p xxx --dir /home/jonkyon/mongo_2 --authenticationDatabase admin --drop
2016-04-25T19:08:24.028+0800 building a list of dbs and collections to restore from /home/jonkyon/mongo_2 dir
2016-04-25T19:08:24.029+0800 assuming users in the dump directory are from <= 2.4 (auth version 1)
2016-04-25T19:08:24.030+0800 cannot drop system collection products.system.users, skipping
2016-04-25T19:08:24.031+0800 reading metadata for products.system.users from /home/jonkyon/mongo_2/products/system.users.metadata.json
2016-04-25T19:08:24.031+0800 restoring products.system.users from /home/jonkyon/mongo_2/products/system.users.bson
2016-04-25T19:08:24.032+0800 error: E11000 duplicate key error index: products.system.users.$_id_ dup key: { : ObjectId('570e2f0ca19b9c2cb7e75905') }
2016-04-25T19:08:24.066+0800 restoring indexes for collection products.system.users from metadata
2016-04-25T19:08:24.068+0800 reading metadata for runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.metadata.json
2016-04-25T19:08:24.070+0800 finished restoring products.system.users (2 documents)
2016-04-25T19:08:24.070+0800 restoring runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.bson
2016-04-25T19:08:24.070+0800 restoring indexes for collection runoob.runoob from metadata
2016-04-25T19:08:24.071+0800 finished restoring runoob.runoob (2 documents)
2016-04-25T19:08:24.071+0800 restoring users from /home/jonkyon/mongo_2/admin/system.users.bson
2016-04-25T19:08:24.088+0800 Failed: restore error: error running merge command: no such cmd: _mergeAuthzCollections
The docs state the following: "The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores."
Since you are running MongoDB 2.4.9, you should avoid using a recent version of mongodump (and mongorestore) against that older data store.
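If upgrading the server first is not an option, a safer approach is to run the dump and restore with the tools that shipped with the same release series as the server. A minimal sketch, assuming the 2.4-series binaries are unpacked under /opt/mongodb-2.4.9 (that path is hypothetical; point it at wherever the older tools live):
$ mongodump --version    # check which tools build is currently on your PATH
$ /opt/mongodb-2.4.9/bin/mongodump -h 127.0.0.1 -u xxx -p xxx --authenticationDatabase admin -o /home/jonkyon/mongo_2
$ /opt/mongodb-2.4.9/bin/mongorestore -h 127.0.0.1 -u xxx -p xxx --authenticationDatabase admin --drop /home/jonkyon/mongo_2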
Related
I'm just a newbie asking a question here; I hope I'm not violating any rules with my question.
I created a full backup (fsyncLock(), snapshot using "tar czvf", fsyncUnlock()) and an incremental backup (mongodump -d local -c oplog.rs -q "(timestamp-range)") running every hour via crontab (00 * * * *), by following the link below:
Mongodb incremental backups
I also take a full backup with the native mongodump command (fsyncLock(), mongodump host dump/, fsyncUnlock()), combined with the same incremental backup strategy mentioned above.
I can restore the full backups with no problems. For the native mongodump full backup, I just create a new instance and restore. For the tar snapshot, I untar it, start it with "mongod -f mongo.cnf", and force a reconfigure of the replica set (rs.reconfig(cfg, {force: true})), since on startup it shows "rs1:OTHER>". I manage to make it "PRIMARY>" after that, even with only one instance running at first.
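For reference, a minimal sketch of the full-backup sequence described above, assuming a dbPath of /var/lib/mongodb and an illustrative archive name (adjust both to your deployment; the lock stays in effect until fsyncUnlock() is issued):
$ mongo admin --eval "db.fsyncLock()"
$ tar czvf /backups/full-snapshot.tar.gz /var/lib/mongodb
$ mongo admin --eval "db.fsyncUnlock()"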
The problem comes when restoring my hourly incremental backups; I get the following error when running the command below:
> mongorestore --port 26017 -uroot --authenticationDatabase admin --oplogReplay 20191204-incre/2019-12-04-00\:00\:01/
> Enter password:
>
> 2019-12-04T14:23:05.456+0800 preparing collections to restore from
> 2019-12-04T14:23:05.456+0800 replaying oplog
> 2019-12-04T14:23:05.609+0800 Failed: restore error: error handling transaction oplog entry: error replaying transaction: error extracting transaction ops: applyOps field: no such field
> 2019-12-04T14:23:05.609+0800 0 document(s) restored successfully. 0 document(s) failed to restore.
When I renamed the file "oplog.rs.bson" to "oplog.bson" and ran this command instead:
> mongorestore --port 26017 -uroot --authenticationDatabase admin --oplogReplay --oplogFile=20191204-incre/2019-12-04-01\:00\:01/local/oplog.bson 20191204-incre/2019-12-04-01\:00\:01/
> Enter password:
>
> 2019-12-04T14:10:22.739+0800 preparing collections to restore from
> 2019-12-04T14:10:22.741+0800 restoring to existing collection local.oplog without dropping
> 2019-12-04T14:10:22.741+0800 restoring local.oplog from 20191204-incre/2019-12-04-01:00:01/local/oplog.bson
> 2019-12-04T14:10:22.791+0800 no indexes to restore
> 2019-12-04T14:10:22.791+0800 finished restoring local.oplog (1839 documents, 0 failures)
> 2019-12-04T14:10:22.791+0800 replaying oplog
>
> 2019-12-04T14:10:22.915+0800 Failed: restore error: error handling transaction oplog entry: error replaying transaction: error extracting transaction ops: applyOps field: no such field
> 2019-12-04T14:10:22.915+0800 1839 document(s) restored successfully. 0 document(s) failed to restore.
It successfully restored the oplog entries, but into "local.oplog" instead of "local.oplog.rs", and it didn't actually replay them against their respective database.collection namespaces.
rs1:PRIMARY> use local
rs1:PRIMARY> show tables
oplog
oplog.rs
rs1:PRIMARY> db.oplog.count()
3679
Which part did I get wrong? Thanks!
Just move your .bson file out of the "local" folder, then remove (rm) the "local" folder and continue with mongorestore.
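Using the paths from the question, that would look roughly like this (untested sketch; substitute the timestamped directory of the backup you are replaying):
$ mv 20191204-incre/2019-12-04-01:00:01/local/oplog.rs.bson 20191204-incre/2019-12-04-01:00:01/oplog.bson
$ rm -r 20191204-incre/2019-12-04-01:00:01/local
$ mongorestore --port 26017 -uroot --authenticationDatabase admin --oplogReplay 20191204-incre/2019-12-04-01:00:01/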
I'm trying to import a MongoDB database.
From a MongoDB 2.4.10 instance I exported the database with the command: mongodump -d DBNAME -o /path/folder
I tried to import the dump into a MongoDB 3.6.3 engine.
I got this error:
root@server:~# mongorestore -vvvv --nsInclude NEWDBNAME /home/path/to/folder/
2018-04-06T10:16:35.729+0200 checking options
2018-04-06T10:16:35.734+0200 dumping with object check disabled
2018-04-06T10:16:35.734+0200 will listen for SIGTERM, SIGINT, and SIGKILL
2018-04-06T10:16:35.927+0200 connected to node type: standalone
2018-04-06T10:16:35.929+0200 standalone server: setting write concern w to 1
2018-04-06T10:16:35.929+0200 using write concern: w='1', j=false, fsync=false, wtimeout=0
2018-04-06T10:16:35.929+0200 mongorestore target is a directory, not a file
2018-04-06T10:16:35.929+0200 preparing collections to restore from
2018-04-06T10:16:35.929+0200 using /home/path/to/folder/ as dump root directory
2018-04-06T10:16:35.952+0200 don't know what to do with file "/home/path/to/folder/collection1.bson", skipping...
2018-04-06T10:16:35.952+0200 don't know what to do with file "/home/path/to/folder/collection1.metadata.json", skipping...
2018-04-06T10:16:35.952+0200 don't know what to do with file "/home/path/to/folder/collection2.bson", skipping...
2018-04-06T10:16:35.953+0200 don't know what to do with file "/home/path/to/folder/collection2.metadata".json", skipping...
2018-04-06T10:16:35.953+0200 don't know what to do with file "/home/path/to/folder/collection3.bson", skipping...
2018-04-06T10:16:35.953+0200 don't know what to do with file "/home/path/to/folder/collection3.metadata".json", skipping...
2018-04-06T10:16:35.953+0200 don't know what to do with file "/home/path/to/folder/collectionX.bson", skipping...
2018-04-06T10:16:35.953+0200 don't know what to do with file "/home/path/to/folder/collectionX.metadata.json", skipping...
.
.
.
2018-04-06T10:16:35.958+0200 finalizing intent manager with multi-database longest task first prioritizer
2018-04-06T10:16:35.958+0200 restoring up to 4 collections in parallel
2018-04-06T10:16:35.958+0200 starting restore routine with id=3
2018-04-06T10:16:35.958+0200 ending restore routine with id=3, no more work to do
2018-04-06T10:16:35.958+0200 starting restore routine with id=0
2018-04-06T10:16:35.958+0200 ending restore routine with id=0, no more work to do
2018-04-06T10:16:35.958+0200 starting restore routine with id=1
2018-04-06T10:16:35.958+0200 ending restore routine with id=1, no more work to do
2018-04-06T10:16:35.959+0200 starting restore routine with id=2
2018-04-06T10:16:35.959+0200 ending restore routine with id=2, no more work to do
2018-04-06T10:16:35.959+0200 done
Note: the result with another command:
root@server:~# mongorestore -vvv -d NEWDBName /home/tmp/path/folder/
2018-04-06T10:47:22.949+0200 checking options
2018-04-06T10:47:22.951+0200 dumping with object check disabled
2018-04-06T10:47:22.954+0200 will listen for SIGTERM, SIGINT, and SIGKILL
2018-04-06T10:47:22.963+0200 connected to node type: standalone
2018-04-06T10:47:22.963+0200 standalone server: setting write concern w to 1
2018-04-06T10:47:22.963+0200 using write concern: w='1', j=false, fsync=false, wtimeout=0
**2018-04-06T10:47:22.963+0200 the --db and --collection args should only be used when restoring from a BSON file. Other uses are deprecated and will not exist in the future; use --nsInclude instead**
2018-04-06T10:47:22.963+0200 mongorestore target is a directory, not a file
2018-04-06T10:47:22.963+0200 building a list of collections to restore from /home/tmp/path/folder dir
2018-04-06T10:47:22.963+0200 reading collections for database DBNAME in folder
2018-04-06T10:47:22.963+0200 found collection DBNAME.collectionX bson to restore to DBNAME.collectionX
2018-04-06T10:47:22.963+0200 found collection metadata from DBNAME.collectionX to restore to DBNAME.collectionX
2018-04-06T10:47:22.963+0200 found collection DBNAME.collectionY bson to restore to DBNAME.collectionY
2018-04-06T10:47:22.963+0200 found collection metadata from DBNAME.collectionY to restore to DBNAME.collectionY
2018-04-06T10:47:22.963+0200 found collection DBNAME.collectionZ bson to restore to DBNAME.collectionZ
2018-04-06T10:47:22.963+0200 found collection metadata from DBNAME.collectionZ to restore to DBNAME.collectionZ
.
.
.
2018-04-06T10:47:22.964+0200 not restoring system.indexes collection because database DBNAME has .metadata.json files
2018-04-06T10:47:22.964+0200 found collection DBNAME.system.users bson to restore to DBNAME.system.users
2018-04-06T10:47:22.964+0200 found collection metadata from DBNAME.system.users to restore to DBNAME.system.users
2018-04-06T10:47:22.964+0200 found collection DBNAME.typeincidents bson to restore to DBNAME.typeincidents
2018-04-06T10:47:22.964+0200 found collection metadata from DBNAME.typeincidents to restore to DBNAME.typeincidents
.
.
.
2018-04-06T10:47:22.964+0200 finalizing intent manager with multi-database longest task first prioritizer
2018-04-06T10:47:22.964+0200 restoring up to 4 collections in parallel
2018-04-06T10:47:22.964+0200 starting restore routine with id=3
2018-04-06T10:47:22.964+0200 starting restore routine with id=0
2018-04-06T10:47:22.964+0200 starting restore routine with id=1
2018-04-06T10:47:22.964+0200 starting restore routine with id=2
2018-04-06T10:47:22.964+0200 reading metadata for DBNAME.evenements from /home/tmp/path/folder/evenements.metadata.json
2018-04-06T10:47:22.964+0200 creating collection DBNAME.evenements using options from metadata
2018-04-06T10:47:22.964+0200 using collection options: bson.D{bson.DocElem{Name:"create", Value:"evenements"}, bson.DocElem{Name:"idIndex", Value:mongorestore.IndexDocument{Options:bson.M{"name":"_id_", "ns":"DBNAME.evenements"}, Key:bson.D{bson.DocElem{Name:"_id", Value:1}}, PartialFilterExpression:bson.D(nil)}}}
2018-04-06T10:47:22.965+0200 Failed: DBNAME.evenements: error creating collection DBNAME.evenements: error running create command: BSON field 'OperationSessionInfo.create' is a duplicate field
Thank you for your help.
mongorestore expects the dump folder to contain sub-folders with the database name, which in turn contain the BSON dump and the metadata. The error you're seeing is because it didn't find any subdirectory with BSON/metadata files in it.
For example, in your case you ran:
mongorestore -vvvv --nsInclude NEWDBNAME /home/path/to/folder/
You should use the command below instead:
mongorestore -vvvv --nsInclude NEWDBNAME /home/path/to/
With this, "folder" is treated as the database-level dump directory, and the .bson and .metadata.json files inside it are picked up.
Refer to the link below:
Stack Overflow question: Don't know what to do with file “/”, skipping
Migrating Data from TokuMX to Percona Server for MongoDB
Step 1 :
This guide describes how to upgrade an existing Percona TokuMX instance to Percona Server for MongoDB. The following JavaScript files are required to perform the upgrade:
• allDbStats.js
• tokumx_dump_indexes.js
• psmdb_restore_indexes.js
You can download those files from GitHub.
Step 2 :
Run the allDbStats.js script to record database state before migration:
$ mongo ./allDbStats.js > ~/allDbStats.before.out
Step 3 :
Perform a dump of the database:
$ mongodump --out /your/dump/path
Step 4 :
Perform a dump of the indexes:
$ ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json
Step 5 :
Restore the collections without indexes using “--noIndexRestore” switch:
$ mongorestore --noIndexRestore /your/dump/path
Step 6 :
Restore the indexes (this may take a while). This step will remove clustering options from the collections before inserting.
$./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json' "
Step 7 :
Run the allDbStats.js script to record database state after migration:
$ mongo ./allDbStats.js > ~/allDbStats.after.out
This is the guide I found for migrating from TokuMX to Percona Server for MongoDB. At step 6, when I try to restore the indexes, I get the error below:
/mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY SyntaxError: Unexpected identifier
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY Error: error loading js file: /mnt/tokumx-bkup/tokumxIndexes.json
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78
failed to load: /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js
Any help will be welcome. Thanks!
Check the tokumxIndexes.json file. When tokumx_dump_indexes.js is run, the mongo shell parameter --quiet must be used, or the resulting JSON will contain the shell preamble at the beginning.
You can also validate the file with something like http://jsonlint.com/.
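Assuming the index dump is produced through the mongo shell, re-dumping it with --quiet would look something like this (paths as in the original guide):
$ mongo --quiet ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json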
If the preamble is present, also delete these two lines from the tokumxIndexes.json file:
"MongoDB shell version: 3.0.11-1.6
connecting to: 127.0.0.1:27017/test"
and run the script again:
$./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json' "
Now the script will start the index build process.
I'm trying to restore a collection like so:
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --port 1234 --noOptionsRestore
Here's the error output (timestamps removed):
using write concern: w='majority', j=false, fsync=false, wtimeout=0
checking for collection data in /path/to/MY_COLLECTION.bson
found metadata for collection at /path/to/MY_COLLECTION.metadata.json
reading metadata file from /path/to/MY_COLLECTION.metadata.json
skipping options restoration
restoring MY_DB.MY_COLLECTION from file /path/to/MY_COLLECTION.bson
file /path/to/MY_COLLECTION.bson is 241330 bytes
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
restoring indexes for collection MY_DB.MY_COLLECTION from metadata
Failed: restore error: MY_DB.MY_COLLECTION: error creating indexes for MY_DB.MY_COLLECTION: createIndex error: exception: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
The result of the restore is a database and collection with correct names but no documents.
OS: Ubuntu 14.04 running on Azure VM.
I just solved my own problem. See answer below.
The problem seemed to be that I was running the restore against a mongod that was the PRIMARY member of a replica set.
Once I commented out the following line in /etc/mongod.conf, it worked without problems:
replSet=REPL_SET_NAME --> #replSet=REPL_SET_NAME
I assume passing the correct replica set name to the mongorestore command (like in this question) could also work, but I haven't tried that yet.
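Untested, but since mongorestore accepts a setname/host:port value for --host, that replica-set-aware form of the connection would presumably look something like this:
$ mongorestore --host "REPL_SET_NAME/127.0.0.1:1234" --db MY_DB --collection MY_COLLECTION --noOptionsRestore /path/to/MY_COLLECTION.bson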
I'm trying to dump a database from another server (this works fine), then restore it on a new server (this does not work fine).
I first run:
mongodump --host -d
This creates a folder dump/db which contains all of the bson documents.
Then in the dump folder, I'm running:
mongorestore -d dbname db
This works and iterates through the files, but I get this error on dbname.system.users
Wed May 23 02:08:05 { key: { _id: 1 }, ns: "dbname.system.users", name: "_id_" }
Error creating index dbname.system.usersassertion: 13111 field not found, expected type 16
Any ideas how to resolve this?
If they really are different versions, use the --noIndexRestore option and create all the indexes afterwards.
Any chance the source and destination are different versions?
In any case, to get around this, restore the collections individually using the -c flag into the target DB and then build the indexes afterward. The system collection is the one used for indexes, so it is fairly easy to recreate - try it last once everything else has been restored, and if it still fails you can always just recreate the relevant indexes.
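For example, something along these lines, where the collection, field, and path names are placeholders; indexes are skipped during the restore and recreated afterwards from the shell:
$ mongorestore -d dbname -c mycollection --noIndexRestore dump/dbname/mycollection.bson
$ mongo dbname --eval 'db.mycollection.ensureIndex({ someField: 1 })'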
The issue could also be caused by this bug in older versions of Mongo (in my case it was 2.0.8):
https://jira.mongodb.org/browse/SERVER-7181
Basically, you get 13111 field not found, expected type 16 error when it should actually be prompting you to enter your authentication details.
An example of how I fixed it:
root@precise64:/# mongorestore /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:48:15 going into namespace [test.system.indexes]
Fri May 24 11:48:15 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Error creating index test.system.usersassertion: 13111 field not found, expected type 16
# Error when not giving username and password
root@precise64:/# mongorestore -u fakeuser -p fakepassword /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:57:11 /backups/demand/ondemand.05-24-2013T114223/test/system.users.bson
Fri May 24 11:57:11 going into namespace [test.system.users]
1 objects found
# Works fine when giving username and password! :)
Hope that helps anyone whose issue isn't fixed by the previous two replies!
This can also happen if you are trying to mongorestore into MongoDB 2.6+ and the dump you are trying to restore contains a system.users collection in any database other than admin. In MongoDB 2.2 and 2.4 the system.users collections could occur in any database. The auth schema migration associated with MongoDB 2.6 moved all users into the system.users collection in the admin database, but left behind the system.users collections in the other databases (MongoDB 2.6 just ignores these). This seems to cause this assertion when importing into MongoDB 2.6.
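One possible workaround, assuming you do not need the legacy per-database user documents, is to drop the stray system.users files from the dump before restoring (DBNAME is a placeholder for each affected database):
$ rm dump/DBNAME/system.users.bson dump/DBNAME/system.users.metadata.json
$ mongorestore dump/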