MongoDB DuplicateKeyException

WriteConcern detected an error 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: develop.Test.$AppId_1_UserId_1_Type_1__sub_1__key_1 dup key: { ... }'. (Response was { "ok" : 1, "code" : 11000, "err" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: develop.Test.$AppId_1_UserId_1_Type_1__sub_1__key_1 dup key: {...}).
I'm getting the above error when trying to insert a new entry into my collection. What confuses me is that my key is a Guid Id field. The entity has AppId and UserId fields, but those aren't supposed to be the key and shouldn't have to be unique.
Right before I save, the Id is all zeroes; afterwards it is set to a unique Guid, yet the save call throws the MongoDuplicateKey error.
Maybe it's because I'm new to Mongo, but I don't understand this. Any help would be appreciated.
Update
Output of getIndexes():
{
    "0" : {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "develop.Test"
    },
    "1" : {
        "v" : 1,
        "unique" : true,
        "key" : {
            "AppId" : 1,
            "UserId" : 1,
            "Type" : 1,
            "_sub" : 1,
            "_key" : 1
        },
        "name" : "AppId_1_UserId_1_Type_1__sub_1__key_1",
        "ns" : "develop.Test"
    },
    "2" : {
        "v" : 1,
        "key" : {
            "Type" : 1,
            "_sub" : 1,
            "_g" : 1
        },
        "name" : "Type_1__sub_1__g_1",
        "ns" : "develop.Test"
    }
}

You have a unique compound index on the AppId, UserId, Type, _sub and _key fields; that is why you are getting this error:
"1" : {
"v" : 1,
"unique" : true,
"key" : {
"AppId" : 1,
"UserId" : 1,
"Type" : 1,
"_sub" : 1,
"_key" : 1
},
"name" : "AppId_1_UserId_1_Type_1__sub_1__key_1",
"ns" : "develop.Test"
},
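Even though each document gets a unique _id (your Guid), MongoDB rejects a second document whose combination of these five fields matches an existing one. As a minimal sketch with made-up values (only the five indexed fields matter), this pair of inserts would reproduce the E11000 error:
// hypothetical values; the generated _id values differ, but the five indexed fields collide
db.Test.insert({ AppId: 1, UserId: 42, Type: "note", _sub: "a", _key: "k1" })  // ok
db.Test.insert({ AppId: 1, UserId: 42, Type: "note", _sub: "a", _key: "k1" })  // fails with E11000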
Now, how do you solve the problem?
If you didn't create the index yourself, perhaps a co-worker did; in that case, don't drop it without talking to them first.
Otherwise, you can drop the index using the db.collection.dropIndex(index) method:
db.collection.dropIndex({ AppId: 1, UserId: 1, Type: 1, _sub: 1, _key: 1 })
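Alternatively, assuming the collection is Test (per the develop.Test namespace above), you can drop the index by the name reported in the error and then confirm the remaining indexes:
// drop by index name instead of key pattern, then verify
db.Test.dropIndex("AppId_1_UserId_1_Type_1__sub_1__key_1")
db.Test.getIndexes()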

Related

MongoDB findOne does not find document in database

In my collection I have this document:
{
    "_id" : ObjectId("5eecb84a9e41ff609fd6389a"),
    "uid" : NumberLong(619942065802969109),
    "banmute" : 0,
    "expire" : ISODate("2023-03-15T13:06:18.694Z"),
    "fid" : "3cac4490b6ca491e838d4e5317e5b87e",
    "id" : null,
    "nick" : "Flawe",
    "nicks_ld" : "",
    "old_nicks" : "",
    "reason" : ""
}
The indexes are:
/* 1 */
[
    {
        "v" : 2,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "fsl.index_profile"
    },
    {
        "v" : 2,
        "unique" : true,
        "key" : {
            "uid" : 1
        },
        "name" : "uid_1",
        "ns" : "fsl.index_profile",
        "background" : true
    }
]
A direct query returns null:
db.getCollection('index_profile').findOne({uid: 619942065802969109})
result: ->
null
But if I query with $gte, the document is found:
db.getCollection('index_profile').find({uid: {$gte: 619942065802969109}}).limit(1)
result: ->
/* 1 */
{
    "_id" : ObjectId("5eecb84a9e41ff609fd6389a"),
    "uid" : NumberLong(619942065802969109),
    "banmute" : 0,
    "expire" : ISODate("2023-03-15T13:06:18.694Z"),
    "fid" : "3cac4490b6ca491e838d4e5317e5b87e",
    "id" : null,
    "nick" : "Flawe",
    "nicks_ld" : "",
    "old_nicks" : "",
    "reason" : ""
}
I tried clearing the cache, rebooting the server, dropping the indexes, and creating different new indexes.
I am in despair; please help me solve this problem.
Have you tried:
db.getCollection('index_profile').findOne({uid: NumberLong(619942065802969109)})
The uid field is stored as a NumberLong (64-bit integer), while the shell parses a bare numeric literal as a double, which cannot represent 619942065802969109 exactly; the equality query therefore compares against a slightly different number and misses the document, whereas the $gte range query still matches.
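One caveat beyond the original answer: depending on the shell, the numeric literal inside NumberLong(...) may itself be parsed as a double first and lose precision, so passing the value as a string is the safer form:
// pass the 64-bit value as a string so no double rounding can occur
db.getCollection('index_profile').findOne({uid: NumberLong("619942065802969109")})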

Cannot create index - bad index key pattern

After some activity on my database, I lost one of my indexes. I had these indexes:
{
    "v" : 1,
    "key" : {
        "_id" : 1
    },
    "name" : "_id_",
    "ns" : "collection.statement"
},
{
    "v" : 1,
    "unique" : true,
    "key" : {
        "name" : 1
    },
    "name" : "name_1",
    "ns" : "collection.statement"
}
and now I have only the first one.
I entered this command:
db.collection.createIndex({
    "v" : 1,
    "unique" : true,
    "key" : { "name" : 1 },
    "name" : "name_1",
    "ns" : "collection.statement"
})
and I only get an error message saying I have a bad index key pattern.
Please help me: how can I get this index back? What am I doing wrong?
Use this:
db.collection.createIndex( { "name": 1 }, { unique: true } )
Your attempt includes internal aspects of the index ("v" : 1, "ns", and so on); you just need to supply the field(s) with a sort order for each, plus the unique option.
More details in the docs.
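If you also want the index to keep its original name, createIndex accepts a name in the same options document (a small addition; the default generated name here would be name_1 anyway):
// explicitly name the index; unique goes in the same options document
db.collection.createIndex( { "name": 1 }, { unique: true, name: "name_1" } )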

Unable to create unique index with sparse in MongoDB

I'm using MongoDB 2.6.1. However, I'm not able to create a unique index with sparse. Currently, I have the following indexes:
> db.products.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "snapyshop_production.products"
    },
    {
        "v" : 1,
        "key" : {
            "pickup_location" : "2dsphere"
        },
        "name" : "pickup_location_2dsphere",
        "background" : true,
        "ns" : "snapyshop_production.products",
        "2dsphereIndexVersion" : 2
    },
    {
        "v" : 1,
        "key" : {
            "category_id" : 1
        },
        "name" : "category_id_1",
        "background" : true,
        "ns" : "snapyshop_production.products"
    },
    {
        "v" : 1,
        "key" : {
            "_keywords" : 1
        },
        "name" : "_keywords_1",
        "background" : true,
        "ns" : "snapyshop_production.products"
    }
]
But when I run this command, it prints out an error:
> db.products.ensureIndex( { source_url: 1 }, { background: true, sparse: true, unique: true } )
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 4,
    "ok" : 0,
    "errmsg" : "E11000 duplicate key error index: snapyshop_production.products.$source_url_1 dup key: { : null }",
    "code" : 11000
}
I really have no idea how to fix it.
The sparse index you're creating will allow multiple documents to exist without a source_url field, but will still only allow one document where the field is present with a value of null. In other words, the sparse index doesn't treat the null value case any different, only the missing field case.
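To illustrate with hypothetical documents (not from the original question), once such a sparse unique index exists, the following behaviour results:
// documents without the field are not indexed at all, so any number of them is fine
db.products.insert({ name: "a" })                    // ok
db.products.insert({ name: "b" })                    // ok
// documents with an explicit null ARE indexed, so only one of them is allowed
db.products.insert({ name: "c", source_url: null })  // ok
db.products.insert({ name: "d", source_url: null })  // E11000 duplicate key on { : null }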
So the typical way to handle your problem would be to update your collection to remove the source_url field from your existing docs where its value is null before creating the index:
db.products.update({source_url: null}, {$unset: {source_url: true}}, {multi: true})
And then use the absence of the field as your null indicator in your program logic.
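As a side note beyond the original answer: this server is 2.6.1, but on MongoDB 3.2 and later a partial index can express the intent more directly, since the filter below only indexes documents whose source_url is actually a string, skipping both missing fields and null values:
// partial unique index: only string values of source_url take part in the uniqueness check
db.products.createIndex(
    { source_url: 1 },
    { unique: true, partialFilterExpression: { source_url: { $type: "string" } } }
)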

Cannot create a TTL index on a field that already has an index. ...really?

From the 10Gen Docs:
"You cannot create a TTL index on a field that already has an index."
However, it seems like this works just fine. What do the docs really mean?
In this example I create multiple indexes on field d before adding a TTL index. The TTL appears correct:
"expireAfterSeconds" : 5
and the documents are removed correctly.
mongo shell:
> db.boo.ensureIndex({a: 1, b: 1, d: -1})
{
    "createdCollectionAutomatically" : true,
    "numIndexesBefore" : 1,
    "numIndexesAfter" : 2,
    "ok" : 1
}
> db.boo.ensureIndex({d: -1})
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 2,
    "numIndexesAfter" : 3,
    "ok" : 1
}
> db.boo.ensureIndex({d: 1}, {expireAfterSeconds: 5});
{
    "createdCollectionAutomatically" : false,
    "numIndexesBefore" : 3,
    "numIndexesAfter" : 4,
    "ok" : 1
}
> db.boo.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "name" : "_id_",
        "ns" : "test.boo"
    },
    {
        "v" : 1,
        "key" : {
            "a" : 1,
            "b" : 1,
            "d" : -1
        },
        "name" : "a_1_b_1_d_-1",
        "ns" : "test.boo"
    },
    {
        "v" : 1,
        "key" : {
            "d" : -1
        },
        "name" : "d_-1",
        "ns" : "test.boo"
    },
    {
        "v" : 1,
        "key" : {
            "d" : 1
        },
        "name" : "d_1",
        "ns" : "test.boo",
        "expireAfterSeconds" : 5
    }
]
Edit/Summary:
The combination that is actually restricted is adding a TTL expiration to an existing index, like this:
> db.boo.ensureIndex({d: 1});
{
    "createdCollectionAutomatically" : true,
    "numIndexesBefore" : 1,
    "numIndexesAfter" : 2,
    "ok" : 1
}
> db.boo.ensureIndex({d: 1}, {expireAfterSeconds: 5});
{
    "ok" : 0,
    "errmsg" : "Index with name: d_1 already exists with different options",
    "code" : 85
}
You have actually created a different index type (descending), so the TTL index on the ascending key {d: 1} did not conflict with it.
Commands:
db.boo.ensureIndex({d: -1})
and
db.boo.ensureIndex({d: 1})
will create two separate indexes (although on the same field).
If you try to create a descending TTL index:
db.boo.ensureIndex({d: -1}, {expireAfterSeconds: 5})
you will get an error:
Index with name ... already exists with different options
If you try to be "clever" and change the name of the index you will get:
Index with pattern... already exists with different options
I guess you should submit a documentation bug so that this restriction is described more precisely in the docs/tutorial.
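For completeness (my addition, not part of the original answer): if you need TTL behaviour on a key pattern that already exists as a plain index, the straightforward way is to drop the existing index and recreate it with expireAfterSeconds; changing expireAfterSeconds on an index that is already a TTL index can instead be done with the collMod command.
// drop the conflicting plain index, then recreate it as a TTL index
db.boo.dropIndex("d_1")
db.boo.ensureIndex({d: 1}, {expireAfterSeconds: 5})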

Can't shard a collection on MongoDB

I have a db ("mydb") on MongoDB that contains 2 collections (c1 and c2). c1 is already hash-sharded. I want to shard the second collection the same way. I get the following error:
use mydb
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
{
    "proposedKey" : {
        "LOG_DATE" : "hashed"
    },
    "curIndexes" : [
        {
            "v" : 1,
            "key" : {
                "_id" : 1
            },
            "ns" : "mydb.c1",
            "name" : "_id_"
        }
    ],
    "ok" : 0,
    "errmsg" : "please create an index that starts with the shard key before sharding."
}
So I did:
db.c2.ensureIndex({LOG_DATE: 1})
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
I get the same error, but it now shows the new index:
"proposedKey" : {
"LOG_DATE" : "hashed"
},
"curIndexes" : [
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "mydb.c2",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"LOG_DATE" : 1
},
"ns" : "mydb.c2",
"name" : "LOG_DATE_1"
}
],
"ok" : 0,
"errmsg" : "please create an index that starts with the shard key before sharding."
Just to be sure, I ran:
db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "mydb.c1", "name" : "_id_" }
{ "v" : 1, "key" : { "timestamp" : "hashed" }, "ns" : "mydb.c1", "name" : "timestamp_hashed" }
{ "v" : 1, "key" : { "_id" : 1 }, "ns": "mydb.c2", "name" : "_id_" }
{ "v" : 1, "key" : { "LOG_DATE" : 1 }, "ns" : "mydb.c2", "name" : "LOG_DATE_1" }
I tried the same commands again on admin and they failed with the same error.
Then I tried on admin without "hashed" and it worked:
db.runCommand({shardCollection: "mydb.c2", key: {"LOG_DATE": 1}})
Problem: now my collection is sharded on a key that is not hashed, and I can't change it (error: "already sharded").
What was wrong with what I did?
How can I fix this?
Thanks in advance,
Thomas
The problem initially was that you did not have a hashed index on the field you proposed to use as the shard key; that is what the error message is about. After the first error message, you created this index:
{
    "v" : 1,
    "key" : {
        "LOG_DATE" : 1
    },
    "ns" : "mydb.c2",
    "name" : "LOG_DATE_1"
}
That is still just an ordinary (ascending) index, not a hashed one. If you had run this:
db.c2.ensureIndex({LOG_DATE: "hashed"})
instead of this:
db.c2.ensureIndex({LOG_DATE: 1})
then you would have had a hashed index. As you can see in the db.system.indexes.find() output, the other collection has a hashed index on timestamp; I assume that is the shard key for that collection.
So if you have no data in the c2 collection:
db.c2.drop()
db.createCollection('c2')
db.c2.ensureIndex({LOG_DATE: "hashed"})
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
This will work properly.
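If c2 already contains data you need to keep, dropping it outright is not an option; one common workaround (my addition, sketched assuming the default dump paths and that rebuilding the collection is acceptable, as the answer above already does for the empty case) is to dump the collection first, rebuild and shard it with the hashed key, and then restore the data:
// (run from the OS shell, not mongo) back up the existing data first:
//     mongodump --db mydb --collection c2
// then, in the mongo shell, rebuild and shard the collection with the hashed key:
db.c2.drop()
db.createCollection('c2')
db.c2.ensureIndex({LOG_DATE: "hashed"})
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
// finally, back in the OS shell, reload the data into the now-sharded collection:
//     mongorestore --db mydb --collection c2 dump/mydb/c2.bson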