MongoDB schema validation failing on update but not on insert

I have a MongoDB collection with schema validation.
I executed db.correspondence.validate({full:true}) and received "nInvalidDocuments" : NumberLong(0).
I am able to insert a document, but an update fails.
MongoDB Enterprise > db.correspondence.find({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"}).count()
8
MongoDB Enterprise > db.correspondence.insert({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000",mdmContractIdentifier:'3334444444444444','name':'Vat'});
WriteResult({ "nInserted" : 1 })
MongoDB Enterprise > db.correspondence.find({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"}).count()
9
MongoDB Enterprise > db.correspondence.find({mdmContractIdentifier:'3334444444444444'}).count()
2
MongoDB Enterprise > db.correspondence.updateOne({"correspondenceIdentifier": "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"},{$set:{mdmContractIdentifier:'3334444444444444'}});
2020-08-05T20:50:28.164-0400 E QUERY [thread1] WriteError: Document failed validation :
WriteError({
"index" : 0,
"code" : 121,
"errmsg" : "Document failed validation",
"op" : {
"q" : {
"correspondenceIdentifier" : "ca4697e2-a40c-11ea-a632-0a0a6b0e0000"
},
"u" : {
"$set" : {
"mdmContractIdentifier" : "3334444444444444"
}
},
"multi" : false,
"upsert" : false
}
})
WriteError#src/mongo/shell/bulk_api.js:466:48
Bulk/mergeBatchResults#src/mongo/shell/bulk_api.js:846:49
Bulk/executeBatch#src/mongo/shell/bulk_api.js:910:13
Bulk/this.execute#src/mongo/shell/bulk_api.js:1154:21
DBCollection.prototype.updateOne#src/mongo/shell/crud_api.js:572:17
#(shell):1:1
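The question does not show the collection's validator, so a useful first step is to print it and see which rule the updated document would break; validationLevel and validationAction also matter, since they control when validation is applied. A minimal sketch using standard mongo shell helpers (only the collection name and query value are taken from the question):
// Print the validator and its settings for this collection
var info = db.getCollectionInfos({ name: "correspondence" })[0];
printjson(info.options.validator);
printjson({ level: info.options.validationLevel, action: info.options.validationAction });
// Then compare the validator against the documents the update would modify
db.correspondence.find({ correspondenceIdentifier: "ca4697e2-a40c-11ea-a632-0a0a6b0e0000" }).pretty();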

Related

MongoDB hint() fails - not sure if it is because the index is still building

In SSH session 1, I ran an operation to create a partial index in MongoDB as follows:
db.scores.createIndex(
... { event_time: 1, "writes.k": 1 },
... { background: true,
... partialFilterExpression: {
... "writes.l_at": null,
... "writes.d_at": null
... }});
The index build is quite large and takes about 30+ minutes. While it was still running, I started SSH session 2.
In SSH session 2 to the cluster, I listed the indexes on my collection scores, and it looks like the index is already there...
db.scores.getIndexes()
[
...,
{
"v" : 1,
"key" : {
"event_time" : 1,
"writes.k" : 1
},
"name" : "event_time_1_writes.k_1",
"ns" : "leaderboard.scores",
"background" : true,
"partialFilterExpression" : {
"writes.l_at" : null,
"writes.d_at" : null
}
}
]
When trying to count with a hint on this index, I get the error below:
db.scores.find().hint('event_time_1_writes.k_1').count()
2019-02-06T22:35:38.857+0000 E QUERY [thread1] Error: count failed: {
"ok" : 0,
"errmsg" : "error processing query: ns=leaderboard.scoresTree: $and\nSort: {}\nProj: {}\n planner returned error: bad hint",
"code" : 2,
"codeName" : "BadValue"
} : _getErrorWithCode#src/mongo/shell/utils.js:25:13
DBQuery.prototype.count#src/mongo/shell/query.js:383:11
#(shell):1:1
I have never seen this error before; can anyone confirm whether it is failing because the index build is still running?
Thanks!
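One hedged way to confirm whether the build is still running is db.currentOp(): on the 3.x series a background index build generally reports a msg starting with "Index Build" plus a progress counter (field names vary between versions, so treat this as a sketch):
// List in-progress operations and keep only index builds
db.currentOp(true).inprog.filter(function (op) {
    return op.msg && /^Index Build/.test(op.msg);
}).forEach(function (op) {
    printjson({ ns: op.ns, msg: op.msg, progress: op.progress });
});
While the build is still in progress the index can already show up in getIndexes() even though the planner refuses it as a hint, which would match the "bad hint" error above.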

MongoDB count (after grouping) using Aggregate gives Error: cmd.cursor is undefined

I am trying to count the entries in the collection collectionUID, grouped by the field status, using the command suggested at https://stackoverflow.com/a/23116396/2806163
Sample document in the collection:
{ "_id" : ObjectId("b....5"), "status" : "processing", "product_id" : "a...2", "destination" : { }, ..., "request_id" : "b...d", "timestamp" : "1536028784.0797083", "userID" : "Daksh" }
Error: TypeError: cmd.cursor is undefined
> db.collectionUID.aggregate([
... {"$group" : {_id:"$status", count:{$sum:1}}}
... ])
2018-09-18T18:57:41.983+0530 E QUERY [thread1] TypeError: cmd.cursor is undefined :
DBCollection.prototype.aggregate#src/mongo/shell/collection.js:1322:1
#(shell):1:1
>
PS:
mongo --version: MongoDB shell version v3.4.10-4-g67ee356c6b
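One way to narrow this down is to bypass the shell's aggregate() helper (the code at collection.js:1322 that raises cmd.cursor is undefined) and send the command directly with an explicit cursor document; if this succeeds, the problem is in the shell build rather than in the server or the pipeline. A sketch using the same collection and pipeline as above:
db.runCommand({
    aggregate: "collectionUID",
    pipeline: [ { $group: { _id: "$status", count: { $sum: 1 } } } ],
    cursor: { }   // ask the server to return the result as a cursor
})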

Creating an index in MongoDB 3

I want to create an index so that the DB does not let me insert a document whose value for the key lema is already present in some other document. I did this:
db.version()
3.0.14
> db.rae.ensureIndex({"lema":1, unique: true})
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 2,
"numIndexesAfter" : 2,
"note" : "all indexes already exist",
"ok" : 1
}
> db.rae.insert({"lema":"a"})
WriteResult({ "nInserted" : 1 })
> db.rae.insert({"lema":"a"})
WriteResult({ "nInserted" : 1 })
> db.rae.insert({"lema":"a"})
WriteResult({ "nInserted" : 1 })
> db.rae.find()
{ "_id" : ObjectId("591a0ce372329f3162a314cc"), "lema" : "a" }
{ "_id" : ObjectId("591a0ce472329f3162a314cd"), "lema" : "a" }
{ "_id" : ObjectId("591a0ce572329f3162a314ce"), "lema" : "a" }
Clearly the DB is letting me insert documents that all have the same lema value of "a". How can I fix this? Thanks a lot.
From Stennie's comment:
I should use createIndex instead of ensureIndex. I also had a mistake: the options belong in a second document, so the call should be db.rae.createIndex({"lema": 1}, { unique: true }).
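A minimal sketch of the corrected call (any duplicates already in the collection have to be removed first, otherwise the unique index build fails with a duplicate key error):
// Options go in a second document; in the original call the unique flag
// ended up inside the key pattern instead of the options.
db.rae.createIndex({ lema: 1 }, { unique: true })
// With the unique index in place, inserting another document with the same
// lema value is rejected with an E11000 duplicate key error.
db.rae.insert({ lema: "a" })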

MongoDB: does Birt support the aggregation cursor?

When I use MongoDB aggregation with Birt, I get the error below:
org.eclipse.datatools.connectivity.oda.OdaException:
Unable to run the Aggregate command operation.
Check that your connected MongoDB server is version 2.2 or later. ;
com.mongodb.CommandResult$CommandFailure: command failed [aggregate]:
{ "serverUsed" : "/xxx.xx.xx.xxx:27017" ,
"errmsg" : "exception: aggregation result exceeds maximum document size (16MB)" ,
"code" : 16389 , "ok" : 0.0 ,
"$gleStats" : { "lastOpTime" : { "$ts" : 0 , "$inc" : 0} ,
"electionId" : { "$oid" : "557cd07784d145278edfba15"}}}
Yes, Birt can run MongoDB aggregation queries, so this is not a support problem. The message "aggregation result exceeds maximum document size (16MB)" means the whole aggregation result came back as a single BSON document, which is capped at 16MB. Check the MongoDB server version: from 2.6 on the result can be returned as a cursor (or written to a collection with $out) instead of a single document, so either reduce the result size or make sure the connector requests a cursor.
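If the result set really is that large, here is a rough shell-level sketch of the two usual workarounds (the pipeline and names below are hypothetical, since the report's real pipeline is not shown): request a cursor so the server streams batches instead of building a single reply document, or, on MongoDB 2.6+, divert the result to a collection with $out and read it from there.
// Hypothetical pipeline; replace with the report's real stages.
db.runCommand({
    aggregate: "mycollection",
    pipeline: [ { $group: { _id: "$someField", total: { $sum: 1 } } } ],
    cursor: { batchSize: 100 }   // stream results instead of one 16MB reply
})
// Or, on MongoDB 2.6+, write large results to a collection:
db.mycollection.aggregate([
    { $group: { _id: "$someField", total: { $sum: 1 } } },
    { $out: "aggregation_results" }
])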

MongoDB upgrade 2.4 to 2.6 check returns error in internal collections

I have a replica set and want to upgrade MongoDB from version 2.4.5 to 2.6.1. Before replacing the binaries I ran this command: db.upgradeCheckAllDBs()
However, it returns this error:
...
Checking collection local.replset.minvalid
Document Error: document is no longer valid in 2.6 because DollarPrefixedFieldName: $set is not valid for storage.: { "_id" : ObjectId("50101a875b51c70037b81c30"), "ts" : Timestamp(1398232884, 51), "h" : NumberLong("4590312020654652586"), "op" : "u", "ns" : "jumbo.jumboFile2Upload", "o2" : { "_id" : ObjectId("510b039031c82133929bd77f") }, "o" : { "$set" : { "operation" : { "operation" : "upload", "total" : NumberLong(1048768), "done" : NumberLong(671576) } } } }
...
To fix the problems above please consult http://dochub.mongodb.org/core/upgrade_checker_help
false
This error is in an internal MongoDB collection (local.replset.minvalid). The linked page states:
To resolve, remove the document and re-insert with the appropriate
corrections.
What does this local.replset.minvalid do? I do not feel comfortable updating internal MongoDB collections.
This collection local.replset.minvalid contains only one document:
set0:PRIMARY> db.replset.minvalid.findOne()
{
"_id" : ObjectId("50101a875b51c70037b81c30"),
"ts" : Timestamp(1398232884, 51),
"h" : NumberLong("4590312020654652586"),
"op" : "u",
"ns" : "jumbo.jumboFile2Upload",
"o2" : {
"_id" : ObjectId("510b039031c82133929bd77f")
},
"o" : {
"$set" : {
"operation" : {
"operation" : "upload",
"total" : NumberLong(1048768),
"done" : NumberLong(671576)
}
}
}
}
Any suggestions on what to do?
It turned out to be a minor bug that will be fixed in a newer version and could safely be ignored during the upgrade in my case. I did the upgrade and everything works as expected.