mongoTemplate save results in DuplicateKeyException occasionally - mongodb

When a record is saved using the mongoTemplate save method, it occasionally throws a DuplicateKeyException. The save method should internally use an upsert, so this should not happen under normal circumstances. So far I have not been able to reproduce it in the test environment; it only happens occasionally in production. Versions in use:
MongoDB: 3.4.12
mongodb-driver-sync: 4.6.0
spring-boot-starter-data-mongodb: 2.7.0
springboot: 2.7.0
org.springframework.dao.DuplicateKeyException: Write operation error on server mongodb1:27017. Write error: WriteError{code=11000, message='E11000 duplicate key error collection: push-db.devices index: _id_ dup key: { : { platformId: 15, uuid: "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrC..." } }', details={}}.;
nested exception is com.mongodb.MongoWriteException: Write operation error on server mongodb1:27017. Write error: WriteError{code=11000, message='E11000 duplicate key error collection: push-db.devices index: _id_ dup key: { : { platformId: 15, uuid: "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrC..." } }', details={}}.
at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:106)
at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:3044)
at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:600)
at org.springframework.data.mongodb.core.MongoTemplate.saveDocument(MongoTemplate.java:1595)
at org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:1530)
at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:1473)
at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:1458)
I can see that the _id index exists:
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "push-db.devices"
},
In the MongoDB collection, the data looks like this; the _id consists of platformId and uuid:
{
"_id" : {
"platformId" : 15,
"uuid" : "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrCasq2323232"
},
"appId" : 32342,
"appVersion" : "33.556.66",
"tags" : [
{
"_id" : 39391,
"lastLogin" : NumberLong(1666903871334),
"loginCount" : 1
},
{
"_id" : 34269,
"lastLogin" : NumberLong(1666903871526),
"loginCount" : 1
}
],
"_class" : "com.myCompany.device.PushDevice"
}
Java snippet that saves data:
private final MongoTemplate mongoTemplate;

@Override
public void storeDevice(PushDevice device) {
    mongoTemplate.save(device);
}
MongoDB runs as a replica set of 3 servers.
I tried updating the driver to the latest version, but it did not help. Any hints toward a solution will be highly appreciated.
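For context on why this can happen even with an upsert: an upsert is not atomic with respect to the unique index check, so two concurrent saves of the same new _id can both miss the match phase and both attempt the insert, and the loser fails with E11000. A common workaround is to catch the duplicate-key error and retry once, since the retry finds the now-existing document and updates it. Below is a minimal sketch of that retry shape in plain JavaScript; saveWithRetry and the fake saveFn are hypothetical illustrations, and in the Spring app the catch would be on org.springframework.dao.DuplicateKeyException around mongoTemplate.save.

```javascript
// Sketch of a retry-once wrapper for a racy upsert: if the first attempt
// loses the insert race (duplicate key), the retry finds the existing
// document and performs an update instead. saveFn is any function that
// performs the save and throws on a duplicate-key error.
function saveWithRetry(saveFn, isDuplicateKeyError) {
  try {
    return saveFn();
  } catch (e) {
    if (isDuplicateKeyError(e)) {
      return saveFn(); // the _id now exists, so the upsert updates it
    }
    throw e;
  }
}

// Demo with a fake save that fails once with a simulated E11000 error:
let attempts = 0;
const result = saveWithRetry(
  () => {
    attempts += 1;
    if (attempts === 1) throw new Error("E11000 duplicate key error");
    return "saved";
  },
  (e) => e.message.includes("E11000")
);
console.log(result, "after", attempts, "attempts"); // saved after 2 attempts
```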

Related

Removing index from mongodb

I am new to MongoDB and imported a database to my local machine. I get the following error after running my Node app:
(node:1592) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): MongoError: exception: Index with name: expires_1 already exists with different options
I logged in to the mongo console and listed the indexes for the sessions collection:
Indexes for sessions:
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "db_staging.sessions"
},
{
"v" : 1,
"key" : {
"expires" : 1
},
"name" : "expires_1",
"ns" : "db_staging.sessions",
"background" : true
}
]
I then found the same index name again under system.indexes. Can I remove the duplicate entry from system.indexes?
Any suggestion is highly appreciated. Thanks in advance.
Try dropping the conflicting index from the mongo shell with db.sessions.dropIndex("expires_1"), then restart the app so the driver can recreate the index with its new options. Do not edit system.indexes directly.

Why can't Mongoose create an index in MongoDB Atlas?

I have a Mongoose Schema which contains a field with a certain index:
const reportSchema = new mongoose.Schema({
  coords: {
    type: [Number],
    required: true,
    index: '2dsphere'
  },
  …
});
It works well on my local machine, and when I connect to MongoDB through the shell I get this output for db.reports.getIndexes():
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "weatherApp.reports"
},
{
"v" : 2,
"key" : {
"coords" : "2dsphere"
},
"name" : "coords_2dsphere",
"ns" : "weatherApp.reports",
"background" : true,
"2dsphereIndexVersion" : 3
}
]
Then I deployed the app to Heroku and connected it to MongoDB Atlas instead of my local database. Saving and retrieving data works well, but the indexes were not created (only the default one):
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "weatherApp.reports"
}
]
What might cause this problem? Atlas allows creating indexes through the web GUI, and that works fine, as does creating indexes from the shell. But Mongoose fails at this operation for some reason.
I had the same issue when I was on mongoose version v5.0.16. However, since I updated to v5.3.6 it is now creating the (compound) indexes for me on Mongo Atlas. (I just wrote a sample app with both versions to verify this is the case).
I'm not sure which version fixed this issue, but it's somewhere between v5.0.16 and v5.3.6, where v5.3.6 is working.

MongoDB upgrade 2.4 to 2.6 check returns error in internal collections

I have a replica set and want to upgrade MongoDB from version 2.4.5 to 2.6.1. Before replacing the binaries, I run this command: db.upgradeCheckAllDBs()
However, it returns this error:
...
Checking collection local.replset.minvalid
Document Error: document is no longer valid in 2.6 because DollarPrefixedFieldName: $set is not valid for storage.: { "_id" : ObjectId("50101a875b51c70037b81c30"), "ts" : Timestamp(1398232884, 51), "h"
: NumberLong("4590312020654652586"), "op" : "u", "ns" : "jumbo.jumboFile2Upload", "o2" : { "_id" : ObjectId("510b039031c82133929bd77f") }, "o" : { "$set" : { "operation" : { "operation" : "upload
", "total" : NumberLong(1048768), "done" : NumberLong(671576) } } } }
...
To fix the problems above please consult http://dochub.mongodb.org/core/upgrade_checker_help
false
This error concerns an internal MongoDB collection (local.replset.minvalid). The linked page states:
To resolve, remove the document and re-insert with the appropriate
corrections.
What does local.replset.minvalid do? I do not feel comfortable updating internal MongoDB collections.
The local.replset.minvalid collection contains only one document:
set0:PRIMARY> db.replset.minvalid.findOne()
{
"_id" : ObjectId("50101a875b51c70037b81c30"),
"ts" : Timestamp(1398232884, 51),
"h" : NumberLong("4590312020654652586"),
"op" : "u",
"ns" : "jumbo.jumboFile2Upload",
"o2" : {
"_id" : ObjectId("510b039031c82133929bd77f")
},
"o" : {
"$set" : {
"operation" : {
"operation" : "upload",
"total" : NumberLong(1048768),
"done" : NumberLong(671576)
}
}
}
}
Any suggestions what to do?
It turned out to be a minor bug that will be fixed in a newer version and could be ignored during the upgrade in my case. I did the upgrade and everything works as expected.

assertion exception in mongo mapreduce

I have a collection that stores search query logs. Its two main attributes are user_id and search_query; user_id is null for a logged-out user. I am trying to run a map-reduce job to find the count of queries and the terms per user.
var map = function() {
  if (this.user_id !== null) {
    emit(this.user_id, this.search_query);
  }
}
var reduce = function(id, queries) {
  return Array.sum(queries + ",");
}
db.searchhistories.mapReduce(map, reduce, {
  query: {
    "time": {
      $gte: ISODate("2013-10-26T14:40:00.000Z"),
      $lt: ISODate("2013-10-26T14:45:00.000Z")
    }
  },
  out: "mr2"
})
throws the following exception
Wed Nov 27 06:00:07 uncaught exception: map reduce failed:{
"errmsg" : "exception: assertion src/mongo/db/commands/mr.cpp:760",
"code" : 0,
"ok" : 0
}
I looked at mr.cpp L#760 but could not gather any vital information. What could be causing this?
My Collection has values like
> db.searchhistories.find()
{ "_id" : ObjectId("5247a9e03815ef4a2a005d8b"), "results" : 82883, "response_time" : 0.86, "time" : ISODate("2013-09-29T04:17:36.768Z"), "type" : 0, "user_id" : null, "search_query" : "awareness campaign" }
{ "_id" : ObjectId("5247a9e0606c791838005cba"), "results" : 39545, "response_time" : 0.369, "time" : ISODate("2013-09-29T04:17:36.794Z"), "type" : 0, "user_id" : 34225174, "search_query" : "eficaz eficiencia efectividad" }
Looking at the docs, I could see that writing map-reduce output to a collection is not possible on a secondary (slave); it works perfectly fine on the primary (master). If you still want to run it on the secondary, you have to use the following syntax:
db.searchhistories.mapReduce(map, reduce, {
  query: {
    "time": {
      $gte: ISODate("2013-10-26T14:40:00.000Z"),
      $lt: ISODate("2013-10-26T14:45:00.000Z")
    }
  },
  out: { inline: 1 }
})
Note: ensure that the output does not exceed the 16 MB document size limit when using inline output.
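As a sanity check, the per-user grouping the job is meant to produce can be emulated outside MongoDB in plain JavaScript (a sketch with made-up documents; groupQueries is a hypothetical helper, not part of the mongo shell):

```javascript
// Emulate the map/reduce grouping outside MongoDB: collect the search
// queries per user_id, skipping logged-out users (user_id === null).
function groupQueries(docs) {
  const byUser = {};
  for (const doc of docs) {
    if (doc.user_id === null) continue; // map() skips logged-out users
    if (!byUser[doc.user_id]) byUser[doc.user_id] = [];
    byUser[doc.user_id].push(doc.search_query);
  }
  // reduce(): join each user's queries into one comma-separated string
  const out = {};
  for (const id of Object.keys(byUser)) {
    out[id] = { queries: byUser[id].join(","), count: byUser[id].length };
  }
  return out;
}

// Made-up documents shaped like the ones in the question:
const sample = [
  { user_id: null, search_query: "awareness campaign" },
  { user_id: 34225174, search_query: "eficaz eficiencia efectividad" },
  { user_id: 34225174, search_query: "mongodb mapreduce" },
];
console.log(groupQueries(sample));
```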

mongo _id field duplicate key error

I have a collection whose _id field is an IP address with type String.
I'm using Mongoose, but here's the error on the console:
> db.servers.remove()
> db.servers.insert({"_id":"1.2.3.4"})
> db.servers.insert({"_id":"1.2.3.5"}) <-- Throws dup key: { : null }
Likely, it's because you have an index that requires a unique value for one of the fields as shown below:
> db.servers.remove()
> db.servers.ensureIndex({"name": 1}, { unique: 1})
> db.servers.insert({"_id": "1.2.3"})
> db.servers.insert({"_id": "1.2.4"})
E11000 duplicate key error index: test.servers.$name_1 dup key: { : null }
You can see your indexes using getIndexes() on the collection:
> db.servers.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "test.servers",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"name" : 1
},
"unique" : true,
"ns" : "test.servers",
"name" : "name_1"
}
]
I was confused by exactly the same error today, and later figured it out. It was because I had removed an indexed property from a Mongoose schema but did not drop the corresponding index in MongoDB. The error message in fact means that the new document has an indexed property whose value is null (not present in the JSON).
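To make the null-key behaviour concrete: a non-sparse unique index stores null as the key when the indexed field is missing from a document, so the second document without the field collides with the first. A toy sketch of that rule (a hypothetical model for illustration, not MongoDB's actual implementation):

```javascript
// Model of how a non-sparse unique index derives its key: a missing
// field is indexed as null, so two documents that both lack the field
// produce the same key and violate the unique constraint.
function indexKey(doc, field) {
  return field in doc ? doc[field] : null;
}

function insertAll(docs, field) {
  const seen = new Set();
  for (const doc of docs) {
    const key = indexKey(doc, field);
    if (seen.has(key)) {
      throw new Error(`E11000 duplicate key error dup key: { : ${JSON.stringify(key)} }`);
    }
    seen.add(key);
  }
  return "ok";
}

// Two documents without a "name" field both index as null -> duplicate:
try {
  insertAll([{ _id: "1.2.3.4" }, { _id: "1.2.3.5" }], "name");
} catch (e) {
  console.log(e.message); // E11000 duplicate key error dup key: { : null }
}
```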