MongoDB error when querying a capped collection

I need some help interpreting/resolving this error:
OperationFailure: Executor error during find command :: caused by :: errmsg: "CollectionScan died due to position in capped collection being deleted. Last seen record id: RecordId(225404776)"
which occurs when I run this command:
mongodb_connection["databaseName"]["cappedCollectionName"].find(query)
The MongoDB deployment is a single (standalone) instance, and we are querying a capped collection. The query targets recent data, which should still be in the collection (not yet overwritten by the cap).
Thanks!
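In case it helps anyone who lands here: the error means the collection scan's position was overwritten when the capped collection wrapped around, so one pragmatic workaround is simply to retry the find. A minimal pymongo sketch (the connection details and the query filter below are assumptions, not from the original post):

from pymongo import MongoClient
from pymongo.errors import OperationFailure

client = MongoClient()  # assumed standalone instance on localhost
coll = client["databaseName"]["cappedCollectionName"]
query = {"status": "recent"}  # hypothetical filter, not from the post

# The scan dies when the cap wraps past the cursor's position,
# so re-issuing the find from scratch is one way to recover.
docs = []
for attempt in range(3):
    try:
        docs = list(coll.find(query))
        break
    except OperationFailure as exc:
        if "CollectionScan died" not in str(exc):
            raise  # unrelated failure, surface it

Also note the error only arises from collection scans, so if an index covers the queried field, the find should not hit it in the first place.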

Related

MongoDB - Robo3t: Failed to do query, no good nodes, Field 'cursor' must be a nested object

When viewing a collection's documents and moving between pages with the "left" and "right" arrow buttons, I suddenly started getting the following error:
Failed to load documents.
Error:
Failed to do query, no good nodes in MyCluster-shard-0, last error: can't query replica set node mycluster-shard-xx-xx.xxx.xxx.net:27017 :: caused by :: Field 'cursor' must be a nested object in: { conversationId: 7, done: false, payload: BinData(0, 723D424753514D4F432F494C776E73765A7263356774622F42564B695A62746F45523832456A5244475473346E30616B4B597938686352413D3D2C733D6F52614C316438586F...), ok: 1 }
Any idea why this is happening?
I'm using Robo3T 1.4 on Ubuntu and Windows 10; the same thing happens on both.
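One way to narrow this down is to run the same query outside Robo3T: if a plain driver connection works, the problem lives in the client tool rather than in the cluster. A quick pymongo check (the connection string and names below are placeholders):

from pymongo import MongoClient

# placeholder SRV string; substitute your real cluster credentials
client = MongoClient("mongodb+srv://user:pass@mycluster.example.net/test")
coll = client["test"]["myCollection"]
print(list(coll.find().limit(3)))  # does the query work at the driver level?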

Error cloning collection using Cosmic Clone

I'm trying to clone an existing MongoDB collection running on Azure Cosmos DB to another collection on the same DB using Cosmic Clone.
Access validation succeeds, but the process fails with the following error message:
Collection Copy log
Begin Document Migration.
Source Database: myDB Source Collection: X
Target Database: myDB Target Collection: Y
LogError
Error: Error reading JObject from JsonReader. Path '', line 0, position 0., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Main process exits with error
LogError
Error: One or more errors occurred., Message: Error reading JObject from JsonReader. Path '', line 0, position 0.
Any ideas are appreciated.
I've not used this tool, but I took a quick look at its source, and I'm fairly certain it is not designed to work with MongoDB collections in Cosmos DB.
If you're looking to copy a MongoDB collection, you're better off using native Mongo tools like mongodump and mongorestore.
More details here: https://docs.mongodb.com/database-tools/
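For reference, such a copy can be scripted. A sketch in Python that shells out to the two tools (the host, port, and credentials are placeholders for your Cosmos DB account, and the tools must be on PATH):

import subprocess

# Placeholder Cosmos DB account details; substitute your own.
HOST = ["--host", "myaccount.mongo.cosmos.azure.com", "--port", "10255",
        "--username", "myaccount", "--password", "<account-key>", "--ssl"]

# Dump collection X from myDB, then restore its BSON into collection Y.
subprocess.run(["mongodump", *HOST, "--db", "myDB", "--collection", "X",
                "--out", "dump"], check=True)
subprocess.run(["mongorestore", *HOST, "--db", "myDB", "--collection", "Y",
                "dump/myDB/X.bson"], check=True)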

Applying oplog but found duplicate key error

Our Mongo version is 3.0.6. I have a process that applies the oplog from a source database to a destination database using mongodump and mongorestore with the --oplogReplay option.
But I see duplicate key error messages many times. The source and target databases have the same structure (indexes and fields), so a duplicate record on the target should be impossible: it would have errored on the source first.
The error message looks like this:
2017-08-20T00:55:55.900+0000 Failed: restore error: error applying oplog: applyOps: exception: E11000 duplicate key error collection: <collection_name> index: <field> dup key: { : null }
And today I found a mysterious message like this:
2017-08-25T01:02:14.134+0000 Failed: restore error: error applying oplog: applyOps: not master
What does this mean? My understanding is that mongorestore has a --stopOnError option, which implies that by default, if any error occurs, the restore process should skip it and move on. But I got the above errors, and the restore process terminated every time. :(
This doesn't directly answer your question, sorry, but...
If you need to apply oplog changes from database A to database B, it would be better to use the mongo-connector program than the mongodump/mongorestore pair.
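In case it's useful, a sketch of wiring that up (the hosts are placeholders; mongo-connector tails the oplog, so the source must be running as a replica set):

import subprocess

# -m is the source replica set to tail, -t the target to write to,
# and mongo_doc_manager is the manager that targets another MongoDB.
subprocess.run(["mongo-connector",
                "-m", "source-host:27017",
                "-t", "target-host:27017",
                "-d", "mongo_doc_manager"], check=True)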

mongodb sharded collection query failed: setShardVersion failed host

I have encountered a problem after adding a shard to a MongoDB cluster.
I did the following operations:
1. deployed a mongodb cluster with the primary shard named 'shard0002' (10.204.8.155:27010) for all databases.
2. for some reason I removed it and, after migration finished, added a new shard on a different host (10.204.8.100:27010), which was automatically named shard0002 too.
3. then added back the shard removed in step 1, now named 'shard0003'.
4. executed a query on a sharded collection.
5. the following errors appeared:
mongos> db.count.find()
error: {
"$err" : "setShardVersion failed host: 10.204.8.155:27010 { errmsg: \"exception: gotShardName different than what i had before before [shard0002] got [shard0003] \", code: 13298, ok: 0.0 }",
"code" : 10429
}
I tried to rename the shard, but that's not allowed:
mongos> use config
mongos> db.shards.update({_id: "shard0003"}, {$set: {_id: "shard_11"}})
Mod on _id not allowed
I have also tried to remove it; draining started, but the process seems to hang.
What can I do?
------------------------
Last update (24/02/2014 00:29)
I found the answer on Google: mongod keeps its own cache of the sharding configuration, so just restart the sharded mongod process and the problem will be fixed.
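For anyone scripting that restart on a Linux host, a sketch (the dbpath and config path are placeholders; mongod --shutdown cleanly stops the instance that owns the given dbpath):

import subprocess

# Cleanly stop the shard's mongod, then start it again so it re-reads
# its sharding configuration instead of trusting the stale cache.
subprocess.run(["mongod", "--shutdown", "--dbpath", "/data/shard0003"],
               check=True)
subprocess.Popen(["mongod", "--config", "/etc/mongod-shard0003.conf"])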

Exception in Mongo java client 2.4

My PC is running Mongo 1.6.5.
One of my collections has 973525 records.
When I try to find distinct keys on that collection, it throws an exception.
The query is:
db.collection.distinct("id")
java.lang.IllegalArgumentException: 'ok' should never be null...
at com.mongodb.CommandResult.ok(CommandResult.java:30)
at com.mongodb.CommandResult.throwOnError(CommandResult.java:60)
at com.mongodb.DBCollection.distinct(DBCollection.java:756)
at com.mongodb.DBCollection.distinct(DBCollection.java:741)
at com.test.TestMongo$.<init>(TestMongo.scala:26)
at com.test.TestMongo$.<clinit>(TestMongo.scala)
at com.test.TestMongo.startTesting(TestMongo.scala)
at com.test.Main.main(Main.java:13)
And when I try the same query in the mongo shell, it gives this error:
Thu Mar 10 21:40:20 uncaught exception: error { "$err" : "Invalid BSONObj spec size: 8692881 (91A48400)", "code" : 10334 }
This error comes up when you have a document that is too large. You can upgrade to 1.8, where the max document size is 16MB; Mongo 1.6.x has a max size of 8MB, which that document is slightly larger than. You may be able to solve this with a repair (run mongod --repair; it may take a long time).
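If upgrading isn't an option, note that distinct packs every value into a single result document, which is exactly what exceeds the size limit here; computing the distinct set client-side by iterating a cursor avoids building that document. The original code is Java, but the idea carries over; a sketch in Python/pymongo with placeholder names:

from pymongo import MongoClient

coll = MongoClient()["mydb"]["mycollection"]  # placeholder names

# Stream only the "id" field and deduplicate client-side, so no single
# oversized result document is ever assembled on the server.
seen = set()
for doc in coll.find({}, {"id": 1, "_id": 0}):
    if "id" in doc:
        seen.add(doc["id"])
print(len(seen), "distinct ids")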