Mongorestore raises error - can I skip a document? - mongodb

I'm trying to restore a mongo dump, but I get an error:
"2020-12-21T01:43:22.398-0300 Failed: namedb.namecollection: error restoring from \namedb\namecollection.bson: (InvalidBSON) not null terminated string in element with field name 'url' in object with _id: ObjectId('5fded20599e3604d10bb2adf')"
mongorestore then imports only 8000 documents, but my dump has more than 150k documents.
Any ideas?
MongoDB version: 4.2.4 Community

I solved it!
Convert the BSON to JSON: bsondump.exe --outFile=collection.json mycol.bson
This step skips all documents with problems.
Import into MongoDB: mongoimport.exe --db=mydb --collection=mycol --file=collection.json
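For reference, a minimal sketch of the same salvage workflow with non-Windows binaries and placeholder database/collection names, plus a final count check to see how many documents survived the conversion:
# 1. Convert the damaged BSON dump to JSON; bsondump skips documents it cannot parse
bsondump --outFile=collection.json mycol.bson
# 2. Import the JSON back into MongoDB (bsondump emits one JSON document per line, which is what mongoimport expects)
mongoimport --db=mydb --collection=mycol --file=collection.json
# 3. Compare the imported count against the size of the original dump
mongo mydb --quiet --eval "db.mycol.countDocuments({})"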

Related

Mongorestore for oplog.rs collection is not working - restore error: applyOps: (DuplicateKey) E11000 duplicate key error collection

I am trying to enable MongoDB backups on Azure Kubernetes Service (AKS). I have taken a mongodump of the oplog.rs collection. Now I want to restore only one dropped collection from oplog.rs, and I have found the timestamp of the drop. When I try to restore, I get the error below.
The mongodump command I used:
nohup mongodump --host=hostname --db=local --collection=Oplog.rs --username=username --authenticationDatabase=admin --password=password --out="/var/backup/"
The mongorestore command:
mongorestore --host=hostname --port=27017 --username="username" --password="password" --authenticationDatabase=admin --oplogReplay --oplogLimit 1643871153 /var/backup/local/oplog.rs.bson
It shows the following error:
2022-02-09T12:22:03.295+0000 skipping applying the config.system.sessions namespace in applyOps
2022-02-09T12:22:03.296+0000 skipping applying the config.system.sessions namespace in applyOps
2022-02-09T12:22:03.296+0000 skipping applying the config.transactions namespace in applyOps
2022-02-09T12:22:03.398+0000 oplog 694MB
2022-02-09T12:22:03.403+0000 Failed: restore error: error handling transaction oplog entry: error applying transaction op: applyOps: (DuplicateKey) E11000 duplicate key error collection: DataUniverseStg.Hierarchy index: HierachyIDandDUPKI dup key: { HierarchyID: 1343, DU_PKI: 15 }
2022-02-09T12:22:03.403+0000 0 document(s) restored successfully. 0 document(s) failed to restore.

Inserting data in mongo collection through cmd command

When I insert data into a MongoDB database using this cmd command:
mongo mongodb://localhost:27017/DummyDatabase --eval "db.Dummy.insert({a:12345,b:asd})"
I get the following error:
MongoDB shell version v4.2.0
connecting to: mongodb://localhost:27017/Ontologies?compressors=disabled&gssapiServiceName=mongodb
Implicit session: session { "id" : UUID("6da84ed8-8bf0-4a6c-9d38-7c70cbfc8c7e") }
MongoDB server version: 4.2.0
2019-11-07T13:49:15.316+0530 E QUERY [js] uncaught exception: ReferenceError: asd is not defined :
#(shell eval):1:38
2019-11-07T13:49:15.320+0530 E - [main] exiting with code -4
But when I run this command:
mongo mongodb://localhost:27017/DummyDatabase --eval "db.Dummy.insert({a:12345})"
it works. What could be the issue?
The issue with the failing command is that the value provided for the field with key 'b' is not a legal JSON value. asd is a string and needs to be enclosed in quotes so that the document being inserted is a valid JSON object. Refer to json.org for details.
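For example, quoting the string value makes the insert succeed (same local server and database as above; the single quotes are just JavaScript string quoting inside the shell's double quotes):
mongo mongodb://localhost:27017/DummyDatabase --eval "db.Dummy.insert({a:12345, b:'asd'})"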

mongodump lower version mongodb

I tried to use mongodump (version 3.2.5) to back up MongoDB (version 2.4.9). It succeeded, but I can't restore this backup. Why?
./mongorestore -h 127.0.0.1 -u xxx -p xxx --dir /home/jonkyon/mongo_2 --authenticationDatabase admin --drop
2016-04-25T19:08:24.028+0800 building a list of dbs and collections to restore from /home/jonkyon/mongo_2 dir
2016-04-25T19:08:24.029+0800 assuming users in the dump directory are from <= 2.4 (auth version 1)
2016-04-25T19:08:24.030+0800 cannot drop system collection products.system.users, skipping
2016-04-25T19:08:24.031+0800 reading metadata for products.system.users from /home/jonkyon/mongo_2/products/system.users.metadata.json
2016-04-25T19:08:24.031+0800 restoring products.system.users from /home/jonkyon/mongo_2/products/system.users.bson
2016-04-25T19:08:24.032+0800 error: E11000 duplicate key error index: products.system.users.$_id_ dup key: { : ObjectId('570e2f0ca19b9c2cb7e75905') }
2016-04-25T19:08:24.066+0800 restoring indexes for collection products.system.users from metadata
2016-04-25T19:08:24.068+0800 reading metadata for runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.metadata.json
2016-04-25T19:08:24.070+0800 finished restoring products.system.users (2 documents)
2016-04-25T19:08:24.070+0800 restoring runoob.runoob from /home/jonkyon/mongo_2/runoob/runoob.bson
2016-04-25T19:08:24.070+0800 restoring indexes for collection runoob.runoob from metadata
2016-04-25T19:08:24.071+0800 finished restoring runoob.runoob (2 documents)
2016-04-25T19:08:24.071+0800 restoring users from /home/jonkyon/mongo_2/admin/system.users.bson
2016-04-25T19:08:24.088+0800 Failed: restore error: error running merge command: no such cmd: _mergeAuthzCollections
The docs state the following: "The data format used by mongodump from version 2.2 or later is incompatible with earlier versions of mongod. Do not use recent versions of mongodump to back up older data stores."
Since you are using MongoDB 2.4.9, I think you should avoid using a recent version of mongodump with this older data store.
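If you are not sure which versions are involved on either side, checking the tool and the server versions first can save a failed restore (the host below is a placeholder):
mongodump --version
mongo --host 127.0.0.1 --eval "db.version()"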

mongorestore failing because of DocTooLargeForCapped error

I'm trying to restore a collection like so:
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --port 1234 --noOptionsRestore
Here's the error output (timestamps removed):
using write concern: w='majority', j=false, fsync=false, wtimeout=0
checking for collection data in /path/to/MY_COLLECTION.bson
found metadata for collection at /path/to/MY_COLLECTION.metadata.json
reading metadata file from /path/to/MY_COLLECTION.metadata.json
skipping options restoration
restoring MY_DB.MY_COLLECTION from file /path/to/MY_COLLECTION.bson
file /path/to/MY_COLLECTION.bson is 241330 bytes
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
restoring indexes for collection MY_DB.MY_COLLECTION from metadata
Failed: restore error: MY_DB.MY_COLLECTION: error creating indexes for MY_DB.MY_COLLECTION: createIndex error: exception: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
The result of the restore is a database and collection with correct names but no documents.
OS: Ubuntu 14.04 running on Azure VM.
I just solved my own problem. See answer below.
The problem seemed to be that the target mongod was the PRIMARY member of a replica set.
Once I commented out the following line in /etc/mongod.conf, it worked without problems:
replSet=REPL_SET_NAME --> #replSet=REPL_SET_NAME
I assume passing the correct replica set name to the mongorestore command (like in this question) could also work, but I haven't tried that yet.
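Untested here, but the replica-set form of the --host argument that the mongo tools accept would look roughly like this (replica set name, host, and paths are placeholders):
mongorestore --host "REPL_SET_NAME/host1.example.com:1234" --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --noOptionsRestore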

Mongodump and mongorestore; field not found

I'm trying to dump a database from another server (this works fine), then restore it on a new server (this does not work fine).
I first run:
mongodump --host -d
This creates a folder dump/db which contains all of the bson documents.
Then in the dump folder, I'm running:
mongorestore -d dbname db
This works and iterates through the files, but I get this error on dbname.system.users
Wed May 23 02:08:05 { key: { _id: 1 }, ns: "dbname.system.users", name: "_id_" }
Error creating index dbname.system.usersassertion: 13111 field not found, expected type 16
Any ideas how to resolve this?
If they really are different versions, use the --noIndexRestore option and create all the indexes after that.
Any chance the source and destination are different versions?
In any case, to get around this, restore the collections individually using the -c flag to the target DB and then build the indexes afterward. The system collection is the one used for indexes, so it is fairly easy to recreate - try it last, once everything else has been restored, and if it still fails you can always just recreate the relevant indexes.
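As a sketch of that approach, with placeholder database, collection, and index names (ensureIndex is the spelling on very old shells, createIndex on 2.6 and later):
# restore one collection at a time, skipping index creation
mongorestore -d dbname -c mycollection --noIndexRestore dump/dbname/mycollection.bson
# then recreate the indexes you need from the mongo shell
mongo dbname --eval "db.mycollection.createIndex({ someField: 1 })"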
The issue can also be caused by this bug in older versions of Mongo (in my case it was 2.0.8):
https://jira.mongodb.org/browse/SERVER-7181
Basically, you get the 13111 field not found, expected type 16 error when it should actually be prompting you for your authentication details.
An example of how I fixed it:
root#precise64:/# mongorestore /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:48:15 going into namespace [test.system.indexes]
Fri May 24 11:48:15 { key: { _id: 1 }, ns: "test.system.users", name: "_id_" }
Error creating index test.system.usersassertion: 13111 field not found, expected type 16
# Error when not giving username and password
root#precise64:/# mongorestore -u fakeuser -p fakepassword /backups/demand/ondemand.05-24-2013T114223/
connected to: 127.0.0.1
[REDACTED]
Fri May 24 11:57:11 /backups/demand/ondemand.05-24-2013T114223/test/system.users.bson
Fri May 24 11:57:11 going into namespace [test.system.users]
1 objects found
# Works fine when giving username and password! :)
Hope that helps anyone whose issue isn't fixed by the previous two replies!
This can also happen if you are trying to mongorestore into MongoDB 2.6+ and the dump you are trying to restore contains a system.users collection in any database other than admin. In MongoDB 2.2 and 2.4 system.users collections could occur in any database. The auth schema migration associated with MongoDB 2.6 moved all users into the system.users collection in the admin database, but left behind the system.users collections in the other databases (MongoDB 2.6 just ignores these). This seems to cause this assertion when importing into MongoDB 2.6.
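A quick way to check a source deployment for these leftover collections before dumping is to list any non-admin database that still has documents in system.users (a sketch, run with the legacy mongo shell):
mongo --quiet --eval "db.adminCommand('listDatabases').databases.forEach(function(d) { if (d.name !== 'admin' && db.getSiblingDB(d.name).getCollection('system.users').count() > 0) print(d.name); })"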