Removing index from MongoDB

I am new to MongoDB and imported a database to my local machine. I get the following error after running my Node app:
(node:1592) UnhandledPromiseRejectionWarning: Unhandled promise rejection (rejection id: 1): MongoError: exception: Index with name: expires_1 already exists with different options
I logged in to the mongo console and listed the indexes for the sessions collection:
Indexes for sessions:
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "db_staging.sessions"
    },
    {
        "v" : 1,
        "key" : {
            "expires" : 1
        },
        "name" : "expires_1",
        "ns" : "db_staging.sessions",
        "background" : true
    }
]
I also see an index with the same name under system.indexes.
Can I remove the duplicate index entry from system.indexes?
Any suggestion is highly appreciated. Thanks in advance.

Try dropping the existing expires_1 index so your application can recreate it with the options it expects.
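A minimal shell sketch of that, assuming your session store (e.g. connect-mongo) rebuilds the expires index with its own options on the next app start. Don't edit system.indexes directly; dropIndex keeps the index catalog consistent:
use db_staging
// drop only the conflicting index; the documents in the sessions collection are untouched
db.sessions.dropIndex("expires_1")
// confirm it is gone (the default _id_ index always remains)
db.sessions.getIndexes()
After this, restarting the Node app should recreate expires_1 with the options the session store expects.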

Related

mongoTemplate save results in DuplicateKeyException occasionally

When a record is saved using the mongoTemplate save method, it occasionally throws a DuplicateKeyException. The save method should internally use an upsert, so this should not happen under normal circumstances. So far I cannot reproduce it in the test environment; it only happens occasionally in production. Current versions used:
MongoDB: 3.4.12
mongodb-driver-sync: 4.6.0
spring-boot-starter-data-mongodb: 2.7.0
springboot: 2.7.0
org.springframework.dao.DuplicateKeyException: Write operation error on server mongodb1:27017. Write error: WriteError{code=11000, message='E11000 duplicate key error collection: push-db.devices index: _id_ dup key: { : { platformId: 15, uuid: "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrC..." } }', details={}}.;
nested exception is com.mongodb.MongoWriteException: Write operation error on server mongodb1:27017. Write error: WriteError{code=11000, message='E11000 duplicate key error collection: push-db.devices index: _id_ dup key: { : { platformId: 15, uuid: "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrC..." } }', details={}}.
    at org.springframework.data.mongodb.core.MongoExceptionTranslator.translateExceptionIfPossible(MongoExceptionTranslator.java:106)
    at org.springframework.data.mongodb.core.MongoTemplate.potentiallyConvertRuntimeException(MongoTemplate.java:3044)
    at org.springframework.data.mongodb.core.MongoTemplate.execute(MongoTemplate.java:600)
    at org.springframework.data.mongodb.core.MongoTemplate.saveDocument(MongoTemplate.java:1595)
    at org.springframework.data.mongodb.core.MongoTemplate.doSave(MongoTemplate.java:1530)
    at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:1473)
    at org.springframework.data.mongodb.core.MongoTemplate.save(MongoTemplate.java:1458)
I can see that the _id index exists:
{
    "v" : 1,
    "key" : {
        "_id" : 1
    },
    "name" : "_id_",
    "ns" : "push-db.devices"
},
In the MongoDB collection, the data looks like this; the _id consists of platformId and uuid:
{
    "_id" : {
        "platformId" : 15,
        "uuid" : "eW9Zale3ST2EPPtFLQf3R5:APA91bGcXIc4ZIQZ3qVNTp1nDo981oPmj4CR5EXJspU-Ge_oQq1b12v0HLP7E-CPjF5qrK44K7Zr5Fszl0c6tGEmboKg2QrZoQpwFTAb6a_pICvX8V8ZJwbVIW1aMrCasq2323232"
    },
    "appId" : 32342,
    "appVersion" : "33.556.66",
    "tags" : [
        {
            "_id" : 39391,
            "lastLogin" : NumberLong(1666903871334),
            "loginCount" : 1
        },
        {
            "_id" : 34269,
            "lastLogin" : NumberLong(1666903871526),
            "loginCount" : 1
        }
    ],
    "_class" : "com.myCompany.device.PushDevice"
}
Java snippet that saves data:
private final MongoTemplate mongoTemplate;

@Override
public void storeDevice(PushDevice device) {
    mongoTemplate.save(device);
}
MongoDB runs as a replica set of 3 servers.
I tried updating the driver to the latest version, but it did not help. Any hints toward a solution would be highly appreciated.
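One known way this can surface even when the write is an upsert is two concurrent upserts for the same _id: both find no existing document, both attempt the insert, and one of them fails with E11000 and needs to be retried. As a point of reference, a minimal mongo shell sketch (with a made-up, shortened uuid) of a plain insert versus an upsert-style replace against an existing _id:
// hypothetical, shortened values used only for illustration
var id = { platformId: 15, uuid: "eW9Zale3ST2..." };
db.devices.insertOne({ _id: id, appId: 32342 });
db.devices.insertOne({ _id: id, appId: 32342 });                           // fails with E11000 duplicate key on _id_
db.devices.replaceOne({ _id: id }, { appId: 32342 }, { upsert: true });    // matches the existing document and replaces it
If the collisions only appear under concurrent writes for the same device, catching DuplicateKeyException and retrying the save is a common workaround.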

MongoDB hint() fails - not sure if it is because index is still indexing

In SSH session 1, I ran an operation to create a partial index in MongoDB as follows:
db.scores.createIndex(
... { event_time: 1, "writes.k": 1 },
... { background: true,
... partialFilterExpression: {
... "writes.l_at": null,
... "writes.d_at": null
... }});
The index build is quite large and takes 30+ minutes. While it was still running, I started SSH session 2.
In SSH session 2 to the cluster, I listed the indexes on my scores collection, and it looks like the index is already there...
db.scores.getIndexes()
[
    ...,
    {
        "v" : 1,
        "key" : {
            "event_time" : 1,
            "writes.k" : 1
        },
        "name" : "event_time_1_writes.k_1",
        "ns" : "leaderboard.scores",
        "background" : true,
        "partialFilterExpression" : {
            "writes.l_at" : null,
            "writes.d_at" : null
        }
    }
]
When trying to count with a hint on this index, I get the error below:
db.scores.find().hint('event_time_1_writes.k_1').count()
2019-02-06T22:35:38.857+0000 E QUERY [thread1] Error: count failed: {
    "ok" : 0,
    "errmsg" : "error processing query: ns=leaderboard.scoresTree: $and\nSort: {}\nProj: {}\n planner returned error: bad hint",
    "code" : 2,
    "codeName" : "BadValue"
} : _getErrorWithCode@src/mongo/shell/utils.js:25:13
DBQuery.prototype.count@src/mongo/shell/query.js:383:11
@(shell):1:1
I have never seen this before, but I need confirmation: is it failing because the index build is still running?
Thanks!
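Most likely yes: a background build can already show up in getIndexes(), but the planner will not use (or accept a hint on) an index whose build has not finished, which is consistent with the "bad hint" error. One way to confirm the build is still running, assuming you have privileges for currentOp, is a small shell sketch like this:
// list in-progress operations whose message reports an index build
db.currentOp(true).inprog.forEach(function (op) {
    if (op.msg && op.msg.indexOf("Index Build") === 0) {
        printjson({ ns: op.ns, msg: op.msg, progress: op.progress });
    }
});
Once nothing matching the index build is reported, the hint (and the count) should work.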

Why can't Mongoose create an index in MongoDB Atlas?

I have a Mongoose Schema which contains a field with a certain index:
const reportSchema = new mongoose.Schema({
    coords: {
        type: [Number],
        required: true,
        index: '2dsphere'
    },
    …
});
It works well on my local machine: when I connect to MongoDB through the shell, I get this output for db.reports.getIndexes():
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "weatherApp.reports"
    },
    {
        "v" : 2,
        "key" : {
            "coords" : "2dsphere"
        },
        "name" : "coords_2dsphere",
        "ns" : "weatherApp.reports",
        "background" : true,
        "2dsphereIndexVersion" : 3
    }
]
Then I deployed this app to Heroku and connected to MongoDB Atlas instead of my local database. It works well for saving and retrieving data, but it did not create the indexes (only the default one):
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "weatherApp.reports"
    }
]
What may cause this problem? Atlas allows creating indexes through the web GUI, and that works fine, as does creating indexes from the shell. But Mongoose fails at this operation for some reason.
I had the same issue when I was on Mongoose v5.0.16. However, since I updated to v5.3.6 it is now creating the (compound) indexes for me on MongoDB Atlas. (I just wrote a sample app with both versions to verify this is the case.)
I'm not sure which version fixed this issue, but it's somewhere between v5.0.16 and v5.3.6, where v5.3.6 is working.
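If upgrading alone does not trigger the build, a minimal sketch of forcing it explicitly after connecting; Report is assumed to be the model compiled from reportSchema, MONGODB_URI the Atlas connection string, and Model.syncIndexes() is available from Mongoose 5.2 onwards:
const mongoose = require('mongoose');
const Report = mongoose.model('Report', reportSchema);

mongoose.connect(process.env.MONGODB_URI, { useNewUrlParser: true })
    .then(() => Report.syncIndexes())        // builds indexes missing from the collection, drops ones not in the schema
    .then(() => console.log('reports indexes are in sync'))
    .catch(err => console.error(err));
Also make sure the connection does not disable autoIndex, since Mongoose only builds schema indexes automatically when autoIndex is enabled (it is by default).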

dropDups true not working mongodb

I am using the MongoDB shell, version 3.0.2.
I am trying to enforce a uniqueness constraint on the username field in the users collection.
This is what I ran:
db.users.ensureIndex({"username": 1},{unique: true})
It gave me the following error:
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 1,
    "errmsg" : "exception: E11000 duplicate key error index: mybackend.users.$username_1 dup key: { : \"rahat\" }",
    "code" : 11000,
    "ok" : 0
}
Then I used dropDups: true in the command:
db.users.ensureIndex({"username": 1},{unique: true, dropDups: true})
I still get the same error:
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 1,
    "errmsg" : "exception: E11000 duplicate key error index: mybackend.users.$username_1 dup key: { : \"rahat\" }",
    "code" : 11000,
    "ok" : 0
}
I also saw this SO link, but there the collection already had an index on the field; mine does not have one:
db.users.getIndexes() ->
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "mybackend.users"
    }
]
I looked for this issue on GitHub and restarted mongod, but to no avail. What am I doing wrong? I think I am making a silly mistake. Please help.
The drop-duplicates functionality on index creation has not been supported since MongoDB version 3.0. See the compatibility changes page for the 3.0 release.
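Since dropDups is gone, the duplicates have to be removed by hand before the unique index can be built. A minimal shell sketch of one way to do that, keeping an arbitrary document from each duplicate group (back up the collection first):
db.users.aggregate([
    { $group: { _id: "$username", ids: { $push: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]).forEach(function (group) {
    group.ids.shift();                                // keep the first _id of each duplicate group
    db.users.remove({ _id: { $in: group.ids } });     // delete the remaining duplicates
});
db.users.createIndex({ username: 1 }, { unique: true });
Once the duplicates are gone, the unique index builds without the E11000 error.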

mongo _id field duplicate key error

I have a collection whose _id field is an IP address stored as a String.
I'm using Mongoose, but here is the error reproduced in the console:
$ db.servers.remove()
$ db.servers.insert({"_id":"1.2.3.4"})
$ db.servers.insert({"_id":"1.2.3.5"}) <-- Throws dup key: { : null }
Likely, it's because you have an index that requires a unique value for one of the fields as shown below:
> db.servers.remove()
> db.servers.ensureIndex({"name": 1}, { unique: 1})
> db.servers.insert({"_id": "1.2.3"})
> db.servers.insert({"_id": "1.2.4"})
E11000 duplicate key error index: test.servers.$name_1 dup key: { : null }
You can see your indexes using getIndexes() on the collection:
> db.servers.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "test.servers",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "name" : 1
        },
        "unique" : true,
        "ns" : "test.servers",
        "name" : "name_1"
    }
]
I was confused by exactly the same error today, and later figured it out. It was because I had removed an indexed property from a Mongoose schema but did not drop the corresponding index in MongoDB. The error message in fact means that the new document has an indexed property whose value is null (because it is not present in the JSON).
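If you end up in the same situation, a minimal shell sketch of the cleanup, assuming the stale name_1 index from the example above: either drop it, or rebuild it as sparse so documents without the field are not indexed and can no longer collide on a null key:
// drop the stale unique index left over from the old schema
db.servers.dropIndex("name_1")
// optionally recreate it as sparse, keeping uniqueness only for documents that have "name"
db.servers.createIndex({ name: 1 }, { unique: true, sparse: true })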