Error cloning collection using Cosmic Clone - mongodb

I'm trying to clone an existing MongoDB collection running on Azure Cosmos DB to another collection in the same database using Cosmic Clone.
Access validation succeeds, but the process fails with the following error message:
Collection Copy log
Begin Document Migration.
Source Database: myDB Source Collection: X
Target Database: myDB Target Collection: Y
LogError
Error: Error reading JObject from JsonReader. Path '', line 0, position 0., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Main process exits with error
LogError
Error: One or more errors occurred., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Any ideas are appreciated.

I've not used this tool, but I took a quick look at its source and I'm fairly certain it is not designed to work with MongoDB collections in Cosmos DB.
If you're looking to copy a MongoDB collection, you're better off using native Mongo tools like mongodump and mongorestore.
More details here: https://docs.mongodb.com/database-tools/
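I haven't run this against Cosmos DB myself, but a sketch of that approach would look roughly like the following, with placeholder values for the account name and key (take the exact host and port from your account's connection string; the host format below is the current Cosmos DB one):
$ mongodump --host <account>.mongo.cosmos.azure.com --port 10255 --ssl \
    --username <account> --password <account-key> \
    --db myDB --collection X --out /your/dump/path
$ mongorestore --host <account>.mongo.cosmos.azure.com --port 10255 --ssl \
    --username <account> --password <account-key> \
    --db myDB --collection Y /your/dump/path/myDB/X.bson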

Related

Query a stored procedure in mongodb using spring boot

I want to call a stored procedure in MongoDB. I can run it from the command-line tool, but I'm facing an issue when calling it from Java. The piece of code (the last two lines) that throws the error is:
MongoClient mongoClient = new MongoClient();
MongoDatabase mdb = mongoClient.getDatabase("mydb");
mdb.runCommand(new Document("$eval", "db.loadServerScripts()"));
Document doc1 = mdb.runCommand(new Document("$eval", "mysp(5)"));
and the error it throws is 'no such command: '$eval'' on server localhost:27017. The full response is {"ok": 0.0, "errmsg": "no such command: '$eval'", "code": 59, "codeName": "CommandNotFound"}.
I've read several posts, and the documentation as well, stating that $eval / db.eval() doesn't work in Mongo version 4.2. So what should I change in my code to make it work, or what would a possible solution be? I know this question has been asked several times, but those solutions are obsolete, so I need help with this. Can anyone help?
As of mongodb-4.2, the eval command has been removed (it had been deprecated since 3.0).
So I guess the only option for now is to stay on mongodb-4.0.
Source: https://docs.mongodb.com/manual/reference/method/db.eval/
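If the goal is just to run the saved function, note that the legacy mongo shell can load everything stored in db.system.js into its own scope and execute it client-side, with no server-side eval involved. A sketch, assuming the stored procedure is saved in db.system.js under the _id "mysp":
// legacy mongo shell, connected to mydb
db.loadServerScripts()  // defines each function stored in db.system.js in the shell's scope
mysp(5)                 // now runs inside the shell, not on the server
There is no equivalent of $eval in the Java driver on 4.2+, so the usual advice there is to port the stored JavaScript logic into application code.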

MongoDB Error when querying a capped collection

I need some help interpreting/resolving this error:
OperationFailure: Executor error during find command :: caused by :: errmsg: "CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(225404776)"
which occurs when I run this command:
mongodb_connection["databaseName"]["cappedCollectionName"].find(query)
The MongoDB instance is a single (standalone) instance, and we are querying a capped collection. The query looks for recent data, which should still be in the DB (not yet overwritten because of the cap).
Thanks!
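The error means the writer lapped the reader: the capped collection overwrote the document the collection scan was positioned on before the cursor finished. A hedged mitigation, sketched for mongosh with a hypothetical findWithRetry helper, is to restart the query when the server reports CappedPositionLost (error code 136) and to drain the cursor promptly:
// hypothetical retry wrapper for finds on a capped collection (mongosh)
function findWithRetry(coll, query, maxRetries) {
  for (let attempt = 0; attempt <= maxRetries; attempt++) {
    try {
      return coll.find(query).toArray();  // drain the cursor now so a slow consumer can't be lapped
    } catch (e) {
      if (e.code !== 136 || attempt === maxRetries) throw e;  // 136 = CappedPositionLost
      // otherwise the writer overtook the scan; start over
    }
  }
}
const docs = findWithRetry(db.getSiblingDB("databaseName").cappedCollectionName,
                           { /* your query */ }, 3);
The same pattern applies from pymongo by catching OperationFailure and checking its code attribute.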

Applying oplog but found duplicate key error

The Mongo version is 3.0.6. I have a process that applies the oplog from another database to a destination database using mongodump and mongorestore with the --oplogReplay option.
But I see duplicate key error messages many times. The source and target databases have the same structure (indexes and fields), so a duplicate record on the target should be impossible: it would have caused an error on the source DB first.
The error message looks like this:
2017-08-20T00:55:55.900+0000 Failed: restore error: error applying oplog: applyOps: exception: E11000 duplicate key error collection: <collection_name> index: <field> dup key: { : null }
And today I found a mysterious message like this:
2017-08-25T01:02:14.134+0000 Failed: restore error: error applying oplog: applyOps: not master
What does that mean? Also, my understanding is that without the "--stopOnError" option, mongorestore's default behaviour is to skip any errors and move on. But I got the errors above and the restore process terminated every time. :(
This does not answer your question directly, sorry for that, but...
If you need to apply oplog changes from database A to database B, it would be better to use the mongo-connector program than the mongodump/mongorestore pair.
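A minimal invocation, assuming the MongoDB doc manager that ships with mongo-connector and placeholder hostnames (the source must be a replica set member, since mongo-connector tails its oplog):
$ pip install mongo-connector
$ mongo-connector -m sourceHost:27017 -t targetHost:27017 -d mongo_doc_manager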

Migration From TokuMX 1.5 To Percona Server For MongoDB 3.11

Migrating data from TokuMX to Percona Server for MongoDB
Step 1 :
This guide describes how to upgrade an existing Percona TokuMX instance to Percona Server for MongoDB. The following JavaScript files are required to perform the upgrade:
• allDbStats.js
• tokumx_dump_indexes.js
• psmdb_restore_indexes.js
You can download those files from GitHub.
Step 2 :
Run the allDbStats.js script to record database state before migration:
$ mongo ./allDbStats.js > ~/allDbStats.before.out
Step 3 :
Perform a dump of the database:
$ mongodump --out /your/dump/path
Step 4 :
Perform a dump of the indexes:
$ ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json
Step 5 :
Restore the collections without indexes using the --noIndexRestore switch:
$ mongorestore --noIndexRestore /your/dump/path
Step 6 :
Restore the indexes (this may take a while). This step removes the clustering options from the collections before inserting:
$ ./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json' "
Step 7 :
Run the allDbStats.js script to record database state after migration:
$ mongo ./allDbStats.js > ~/allDbStats.after.out
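Steps 2 and 7 exist so the database state before and after migration can be compared; assuming the two output files above, a quick sanity check would be:
$ diff ~/allDbStats.before.out ~/allDbStats.after.out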
This is the guide I found for the migration from TokuMX to Percona Server for MongoDB. At step 6, when I try to restore the indexes, I get the error below:
/mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY SyntaxError: Unexpected identifier
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /mnt/tokumx-bkup/tokumxIndexes.json
2016-06-29T05:28:20.028+0000 E QUERY Error: error loading js file: /mnt/tokumx-bkup/tokumxIndexes.json
at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78:1 at /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js:78
failed to load: /tmp/tokumx2_to_psmdb3_migration-master/psmdb_restore_indexes.js
Any help will be welcome.
Thanks
Check the tokumxIndexes.json file. When tokumx_dump_indexes.js is run, the mongo shell parameter --quiet must be used, or the resulting JSON will contain the shell preamble at the beginning.
Check the file with something like http://jsonlint.com/.
If the preamble is present, delete these two lines from the tokumxIndexes.json file:
"MongoDB shell version: 3.0.11-1.6
connecting to: 127.0.0.1:27017/test"
and run the restore script again:
$ ./psmdb_restore_indexes.js --eval "data='/your/dump/path/tokumxIndexes.json' "
Now the script will start the index build process.
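For reference, re-dumping the indexes with the preamble suppressed would look like this, assuming the script is run through the mongo shell explicitly and using the same paths as in step 4:
$ mongo --quiet ./tokumx_dump_indexes.js > /your/dump/path/tokumxIndexes.json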

mongorestore failing because of DocTooLargeForCapped error

I'm trying to restore a collection like so:
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION /path/to/MY_COLLECTION.bson --port 1234 --noOptionsRestore
Here's the error output (timestamps removed):
using write concern: w='majority', j=false, fsync=false, wtimeout=0
checking for collection data in /path/to/MY_COLLECTION.bson
found metadata for collection at /path/to/MY_COLLECTION.metadata.json
reading metadata file from /path/to/MY_COLLECTION.metadata.json
skipping options restoration
restoring MY_DB.MY_COLLECTION from file /path/to/MY_COLLECTION.bson
file /path/to/MY_COLLECTION.bson is 241330 bytes
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
error: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
restoring indexes for collection MY_DB.MY_COLLECTION from metadata
Failed: restore error: MY_DB.MY_COLLECTION: error creating indexes for MY_DB.MY_COLLECTION: createIndex error: exception: write to oplog failed: DocTooLargeForCapped document doesn't fit in capped collection. size: 116 storageSize:1206976512 # 28575
The result of the restore is a database and collection with correct names but no documents.
OS: Ubuntu 14.04 running on Azure VM.
I just solved my own problem. See answer below.
The problem seemed to be that the mongod I was restoring into was running as the PRIMARY member of a replica set.
Once I commented out the following line in /etc/mongod.conf, it worked without problems:
replSet=REPL_SET_NAME --> #replSet=REPL_SET_NAME
I assume passing the correct replica set name to the mongorestore command (like in this question) could also work, but I haven't tried that yet.
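If that alternative works, the invocation would presumably look something like the following (untested, using the replica set name and port from above; mongorestore accepts the "replicaSetName/host:port" form in --host):
$ mongorestore --verbose --db MY_DB --collection MY_COLLECTION \
    --host "REPL_SET_NAME/localhost:1234" --noOptionsRestore \
    /path/to/MY_COLLECTION.bson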