I'm using MongoDB 2.6.1, but I'm not able to create a unique index with the sparse option. Currently, I have the following indexes:
> db.products.getIndexes()
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "snapyshop_production.products"
},
{
"v" : 1,
"key" : {
"pickup_location" : "2dsphere"
},
"name" : "pickup_location_2dsphere",
"background" : true,
"ns" : "snapyshop_production.products",
"2dsphereIndexVersion" : 2
},
{
"v" : 1,
"key" : {
"category_id" : 1
},
"name" : "category_id_1",
"background" : true,
"ns" : "snapyshop_production.products"
},
{
"v" : 1,
"key" : {
"_keywords" : 1
},
"name" : "_keywords_1",
"background" : true,
"ns" : "snapyshop_production.products"
}
]
But when I run this command, it prints out an error:
> db.products.ensureIndex( { source_url: 1 }, { background: true, sparse: true, unique: true } )
{
"createdCollectionAutomatically" : false,
"numIndexesBefore" : 4,
"ok" : 0,
"errmsg" : "E11000 duplicate key error index: snapyshop_production.products.$source_url_1 dup key: { : null }",
"code" : 11000
}
I really have no idea how to fix it.
The sparse index you're creating will allow multiple documents to exist without a source_url field, but it will still only allow one document where the field is present with a value of null. In other words, the sparse index doesn't treat the null value case any differently; only the missing-field case is exempt from the unique constraint.
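You can reproduce the behavior in a scratch collection (sparsetest is just a hypothetical name here):
db.sparsetest.ensureIndex({ source_url: 1 }, { sparse: true, unique: true })
db.sparsetest.insert({ a: 1 })              // no source_url: fine, not indexed
db.sparsetest.insert({ b: 2 })              // also fine, not indexed either
db.sparsetest.insert({ source_url: null })  // explicit null IS indexed: allowed once
db.sparsetest.insert({ source_url: null })  // E11000 duplicate key error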
So the typical way to handle your problem would be to update your collection to remove the source_url field from your existing docs where its value is null before creating the index:
db.products.update({source_url: null}, {$unset: {source_url: true}}, {multi: true})
And then use the absence of the field as your null indicator in your program logic.
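As a sanity check before running the $unset, you can count how many documents store an explicit null (BSON type 10) versus how many lack the field entirely; only the former collide under the unique sparse index:
db.products.count({ source_url: { $type: 10 } })       // explicit nulls: these collide
db.products.count({ source_url: { $exists: false } })  // missing field: these are fine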
Related
How can I find all unique indexes in MongoDB?
The db.collection.getIndexes() function doesn't give any information about uniqueness.
getIndexes() should work:
db.collection.createIndex({key: 1}, {unique: true})
db.collection.getIndexes()
[
{
"v" : 2,
"key" : { "_id" : 1 },
"name" : "_id_"
},
{
"v" : 2,
"key" : { "key" : 1 },
"name" : "key_1",
"unique" : true
}
]
If the index is not unique then "unique": true is simply missing.
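If you want to list every unique index across the whole database rather than one collection, a small shell loop works (a sketch, assuming you are connected to the database in question):
db.getCollectionNames().forEach(function (c) {
  db.getCollection(c).getIndexes().forEach(function (ix) {
    if (ix.unique) printjson(ix);
  });
})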
I probably would not have asked if I had not seen this:
> db.requests.getIndexes()
[
{
"v" : 2,
"key" : {
"_id" : 1
},
"name" : "_id_"
},
{
"v" : 2,
"unique" : true,
"key" : {
"name" : 1,
},
"name" : "name_1"
}
]
Note that the _id index does not have unique: true. Can it mean that the _id index is somehow not truly unique? Could it behave differently (non-unique) if _id is populated with non-ObjectId values, i.e. some other fundamental types?
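For what it's worth, the _id index is a special case: it is always unique, whatever type of value _id holds, and getIndexes() simply omits the implied flag. A quick experiment in a scratch collection (idtest is a hypothetical name) shows the constraint holds for plain numbers too:
db.idtest.insert({ _id: 1 })
db.idtest.insert({ _id: 1 })   // fails with E11000 duplicate key error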
After some operations on my database, I lost one of my indexes. I had these indexes:
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "collection.statement"
},
{
"v" : 1,
"unique" : true,
"key" : {
"name" : 1
},
"name" : "name_1",
"ns" : "collection.statement"
}
and now I only have the first one.
I entered this command:
db.collection.createIndex({
"v" : 1,
"unique" : true,
"key" :{ "name" : 1 },
"name" : "name_1",
"ns" : "collection.statement"
})
and I only get an error message saying that I have a bad index key pattern.
Please help me: how can I restore this index? What am I doing wrong?
Use this:
db.collection.createIndex( { "name": 1 }, { unique: true } )
Your attempt includes internal aspects of the index ("v" : 1, "ns", and so on); you just need to supply the field(s), a sort order for each, and the unique option.
More details in the docs.
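After recreating it, you can confirm the index came back with the unique flag set:
db.collection.getIndexes()   // name_1 should now be listed with "unique" : true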
WriteConcern detected an error 'insertDocument :: caused by :: 11000 E11000 duplicate key error index: develop.Test.$AppId_1_UserId_1_Type_1__sub_1__key_1 dup key: { ... }'. (Response was { "ok" : 1, "code" : 11000, "err" : "insertDocument :: caused by :: 11000 E11000 duplicate key error index: develop.Test.$AppId_1_UserId_1_Type_1__sub_1__key_1 dup key: {...}).
I'm getting the above error when trying to insert a new entry into my collection. What confuses me is that my key is a Guid Id field. The entity has AppId and UserId fields, but those aren't supposed to be the key and shouldn't have to be unique.
Right before I save, the Id is all zeroes. After the save it is set to a unique Guid, but the save call throws the duplicate key error.
Maybe it's because I'm new to Mongo, but I don't understand this. Any help would be appreciated.
Update
Output of getIndexes():
{
"0" : {
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "develop.Test"
},
"1" : {
"v" : 1,
"unique" : true,
"key" : {
"AppId" : 1,
"UserId" : 1,
"Type" : 1,
"_sub" : 1,
"_key" : 1
},
"name" : "AppId_1_UserId_1_Type_1__sub_1__key_1",
"ns" : "develop.Test"
},
"2" : {
"v" : 1,
"key" : {
"Type" : 1,
"_sub" : 1,
"_g" : 1
},
"name" : "Type_1__sub_1__g_1",
"ns" : "develop.Test"
}
}
You have a unique compound index on the AppId, UserId, Type, _sub and _key fields; that is why you are getting this error:
"1" : {
"v" : 1,
"unique" : true,
"key" : {
"AppId" : 1,
"UserId" : 1,
"Type" : 1,
"_sub" : 1,
"_key" : 1
},
"name" : "AppId_1_UserId_1_Type_1__sub_1__key_1",
"ns" : "develop.Test"
},
Now, how do you solve the problem?
If you didn't create this index yourself, perhaps a co-worker did; in that case, don't drop it without talking to them first.
Otherwise, you may want to drop the index using the db.collection.dropIndex(index) method:
db.collection.dropIndex({ AppId: 1, UserId: 1, Type: 1, _sub: 1, _key: 1 })
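You can also drop it by name, which avoids having to reproduce the key pattern exactly (this assumes you are connected to the develop database from the error message):
db.Test.dropIndex("AppId_1_UserId_1_Type_1__sub_1__key_1")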
I have a db ("mydb") on mongo that contains 2 collections (c1 and c2). c1 is already hash-sharded. I want to shard the second collection the same way, but I get the following error:
use mydb
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
{
"proposedKey" : {
"LOG_DATE" : "hashed"
},
"curIndexes" : [
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "mydb.c1",
"name" : "_id_"
}
],
"ok" : 0,
"errmsg" : "please create an index that starts with the shard key before sharding."
So I did:
db.c2.ensureIndex({LOG_DATE: 1})
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
Same error, but it now shows the new index:
"proposedKey" : {
"LOG_DATE" : "hashed"
},
"curIndexes" : [
{
"v" : 1,
"key" : {
"_id" : 1
},
"ns" : "mydb.c2",
"name" : "_id_"
},
{
"v" : 1,
"key" : {
"LOG_DATE" : 1
},
"ns" : "mydb.c2",
"name" : "LOG_DATE_1"
}
],
"ok" : 0,
"errmsg" : "please create an index that starts with the shard key before sharding."
Just to be sure, I ran:
db.system.indexes.find()
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "mydb.c1", "name" : "_id_" }
{ "v" : 1, "key" : { "timestamp" : "hashed" }, "ns" : "mydb.c1", "name" : "timestamp_hashed" }
{ "v" : 1, "key" : { "_id" : 1 }, "ns": "mydb.c2", "name" : "_id_" }
{ "v" : 1, "key" : { "LOG_DATE" : 1 }, "ns" : "mydb.c2", "name" : "LOG_DATE_1" }
I tried the same commands again on the admin database, and it failed with the same error. Then I tried on admin without "hashed", and it worked:
db.runCommand({shardCollection: "mydb.c2", key: {"LOG_DATE": 1}})
Problem: now my collection is sharded on a key that is not hashed, and I can't change it (error: "already sharded").
What was wrong with what I did ?
How can I fix this ?
Thanks in advance
Thomas
The initial problem was that you did not have a hashed index on the field you proposed to use as the shard key; that is what the error message is about. After the first error message, you created this index:
{
"v" : 1,
"key" : {
"LOG_DATE" : 1
},
"ns" : "mydb.c2",
"name" : "LOG_DATE_1"
}
That is still just an ordinary ascending index, not a hashed one. If you had done this:
db.c2.ensureIndex({LOG_DATE: "hashed"})
instead of this:
db.c2.ensureIndex({LOG_DATE: 1})
then you would have had a hashed index. As you can see in the output of db.system.indexes.find(), the other collection (c1) has a hashed index on timestamp; I assume that is its shard key.
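You can tell the two kinds apart straight from the index definitions: the value in the key document is 1 for an ordinary ascending index and the string "hashed" for a hashed one. A quick check from the shell:
db.c2.getIndexes().forEach(function (ix) { printjson(ix.key); })
// { "_id" : 1 }
// { "LOG_DATE" : 1 }   <-- ascending, not hashed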
So if you have no data in the c2 collection:
db.c2.drop()
db.createCollection('c2')
db.c2.ensureIndex({LOG_DATE: "hashed"})
sh.shardCollection("mydb.c2", {"LOG_DATE": "hashed"})
This will work properly.
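To verify afterwards, you can check that the hashed index exists and that the collection registered with the hashed shard key (the exact sh.status() output varies by version, so treat this as a sketch):
db.c2.getIndexes()   // should include a { "LOG_DATE" : "hashed" } entry
sh.status()          // mydb.c2 should be listed with shard key { "LOG_DATE" : "hashed" }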