Add a field that is a unique index to a collection in MongoDB

I'm trying to add a username field to documents in a 'users' collection, and I'd like it to be a unique index. (So far, we've been using email addresses for login but we'd like to add a username field as well.) However, running db.users.ensureIndex({username:1},{unique:true}) fails because mongo considers all the unset usernames to be duplicates and therefore not unique. Anybody know how to get around this?
Show the current users and username if they have one:
> db.users.find({},{_id:0,display_name:1,username:1})
{ "display_name" : "james" }
{ "display_name" : "sammy", "username" : "sammy" }
{ "display_name" : "patrick" }
Attempt to make the 'username' field a unique index:
> db.users.ensureIndex({username:1},{unique:true})
{
    "err" : "E11000 duplicate key error index: blend-db1.users.$username_1 dup key: { : null }",
    "code" : 11000,
    "n" : 0,
    "connectionId" : 272,
    "ok" : 1
}
It doesn't work because both james and patrick have username:null (neither has the username field set).
Let's set patrick's username to 'patrick' to eliminate the duplicate null value.
> db.users.update({display_name: 'patrick'}, { $set: {username: 'patrick'}});
> db.users.ensureIndex({username:1},{unique:true})
> db.users.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "blend-db1.users",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "username" : 1
        },
        "unique" : true,
        "ns" : "blend-db1.users",
        "name" : "username_1"
    }
]
Now it works!
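(Note that with this non-sparse unique index in place, inserting yet another document without a username would again fail with the duplicate-key error on null, since james still has no username; an illustrative check, using a made-up user:)
> db.users.insert({display_name: 'alice'})
E11000 duplicate key error index: blend-db1.users.$username_1 dup key: { : null }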
To clarify the question, what I'd like is to be able to make username a unique index without having to worry about all the documents that have username still set to null.

Try creating a unique sparse index:
db.users.ensureIndex({username:1},{unique:true,sparse:true})
As per the docs:
You can combine the sparse index option with the unique indexes option
so that mongod will reject documents that have duplicate values for a
field, but that ignore documents that do not have the key.
Note that this only helps for documents that don't have the field at all: a document whose username is explicitly set to null is still included in the sparse index (under the value null), so two such documents will still trigger the duplicate-key error.
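To illustrate (hypothetical documents, assuming the unique sparse index above has been created): documents missing username entirely are ignored by the index, but explicit nulls are indexed and can collide:
> db.users.insert({display_name: 'x'})                     // ok: no username field, not indexed
> db.users.insert({display_name: 'y', username: null})     // ok: first explicit null in the index
> db.users.insert({display_name: 'z', username: null})     // fails: E11000 dup key: { : null }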

Related

db.collection.find() omits keys with no values in mongodb

I am performing a query to show the objects from a specific collection in MongoDB using db.collection.find().
When I run the query over a collection called User, keys with no value are omitted, so the objects do not all have the same set of fields.
For example, when I do db.User.find()
I get this for the first two objects:
{
    "_id" : ObjectId("5f6be9aba8fced67a9af8154"),
    "username" : "nestor",
    "first_name" : "Nestor",
    "email" : "nestor@example.com"
}
{
    "_id" : ObjectId("5f6bee1767e194695a0a3516"),
    "username" : "salma",
    "first_name" : "Salma",
    "last_name" : "Driss",
    "email" : "salma@example.com"
}
In the example above, the key last_name is omitted from the first object since it has no value, but present in the second since it has one. I would expect to get 'last_name' : null when the last_name value is not present.
So I should expect to get this:
{
    "_id" : ObjectId("5f6be9aba8fced67a9af8154"),
    "username" : "nestor",
    "first_name" : "Nestor",
    "last_name" : null,
    "email" : "nestor@example.com"
}
{
    "_id" : ObjectId("5f6bee1767e194695a0a3516"),
    "username" : "salma",
    "first_name" : "Salma",
    "last_name" : "Driss",
    "email" : "salma@example.com"
}
What query would be useful to help me solve this problem?
You can project the result with this:
db.collection.aggregate([
    { $set: { last_name: { $ifNull: ["$last_name", null] } } }
])
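The $set used here is the aggregation stage (an alias for $addFields, available from MongoDB 4.2); on older servers the same pipeline can be written with $addFields, shown here against the User collection from the question:
db.User.aggregate([
    { $addFields: { last_name: { $ifNull: ["$last_name", null] } } }
])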
The { item : null } query matches documents that either contain the item field whose value is null or that do not contain the item field at all (MongoDB docs on querying for null values):
db.User.find( { item: null } )
You can also test this yourself; see the MongoDB docs referenced above.
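Applied to the sample User collection above, the same pattern with last_name matches only the document that lacks the field (nestor's), since a missing field is treated as null; note that this filters documents rather than adding the missing key to the output (the aggregation above does that):
> db.User.find({ last_name: null })
{ "_id" : ObjectId("5f6be9aba8fced67a9af8154"), "username" : "nestor", "first_name" : "Nestor", "email" : "nestor@example.com" }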

Unique key in mongoose db

I have the following DB:
{
    "_id" : ObjectId("556da79a77f9f7465943ff93"),
    "guid" : "a12345",
    "update_day" : "12:05:10 02.06.15"
}
{
    "_id" : ObjectId("556dc4509a0a6a002f97e972"),
    "guid" : "bbbb",
    "update_day" : "15:03:10 02.06.15",
    "__v" : 0
}
{
    "_id" : ObjectId("556dc470e57836242f5519eb"),
    "guid" : "bbbb",
    "update_day" : "15:03:10 02.06.15",
    "__v" : 0
}
{
    "_id" : ObjectId("556dc47c7e882d3c2fe9e0fd"),
    "guid" : "bbbb",
    "update_day" : "15:03:10 02.06.15",
    "__v" : 0
}
I want to set the guid to be unique, so no duplicates are possible (like a primary key in MySQL). So the DB will look like this:
{
    "_id" : ObjectId("556da79a77f9f7465943ff93"),
    "guid" : "a12345",
    "update_day" : "12:05:10 02.06.15"
}
{
    "_id" : ObjectId("556dc4509a0a6a002f97e972"),
    "guid" : "bbbb",
    "update_day" : "15:03:10 02.06.15",
    "__v" : 0
}
and when I insert another "guid" : "bbbb" (with the save command), it will fail.
When declaring the schema in mongoose, do this:
guid : { type : String, unique : true}
Alternatively, you can declare the unique index explicitly:
guid : { type : String, index : { unique : true} }
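A minimal sketch of how this could look in a full schema and model, assuming mongoose is installed and connected; the model name Server and the connection URI are placeholders, not taken from the question:
var mongoose = require('mongoose');
mongoose.connect('mongodb://localhost/test'); // placeholder URI

var serverSchema = new mongoose.Schema({
    guid:       { type: String, unique: true }, // unique index on guid
    update_day: String
});

var Server = mongoose.model('Server', serverSchema);

// Once the unique index has been built, saving a document whose guid
// already exists fails with an E11000 duplicate key error.
new Server({ guid: 'bbbb', update_day: '15:03:10 02.06.15' }).save(function (err) {
    if (err) console.log(err.message); // duplicate key error if 'bbbb' is taken
});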
First, you have to deal with the current state of your MongoDB collection and delete all the duplicated documents.
One thing is sure: you won't be able to create the unique index while there are duplicates in your collection, and the dropDups option has been deprecated since version 2.7.5, so you can't use it. By the way, it was removed because it was almost impossible to predict which documents would be deleted in the process.
Two possible solutions:
Create a new collection: create the unique index on this new collection, then run a batch to copy all the documents from the old collection to the new one, ignoring duplicate key errors during the process.
Deal with it in your own collection manually:
make sure your code won't insert any more duplicate documents,
run a batch on your collection to delete the duplicates, keeping the right one if they are not completely identical (a sketch follows this list),
then add the unique index.
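A possible sketch of such a de-duplication batch in the mongo shell, assuming a collection named mycollection (a placeholder, not from the question); it keeps the first document found for each guid, removes the rest, then adds the unique index:
db.mycollection.aggregate([
    { $group: { _id: "$guid", ids: { $push: "$_id" }, count: { $sum: 1 } } },
    { $match: { count: { $gt: 1 } } }
]).forEach(function (group) {
    // keep the first _id for this guid, remove the duplicates
    group.ids.slice(1).forEach(function (dupId) {
        db.mycollection.remove({ _id: dupId });
    });
});
db.mycollection.ensureIndex({ guid: 1 }, { unique: true });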
I would declare my guid like so in mongoose:
guid : { type : String, unique : true}

mongo _id field duplicate key error

I have a collection where the _id field is an IP address stored as a String.
I'm using mongoose, but here's the error on the console:
> db.servers.remove()
> db.servers.insert({"_id":"1.2.3.4"})
> db.servers.insert({"_id":"1.2.3.5"}) <-- Throws dup key: { : null }
Likely, it's because you have an index that requires a unique value for one of the fields as shown below:
> db.servers.remove()
> db.servers.ensureIndex({"name": 1}, { unique: 1})
> db.servers.insert({"_id": "1.2.3"})
> db.servers.insert({"_id": "1.2.4"})
E11000 duplicate key error index: test.servers.$name_1 dup key: { : null }
You can see your indexes using getIndexes() on the collection:
> db.servers.getIndexes()
[
    {
        "v" : 1,
        "key" : {
            "_id" : 1
        },
        "ns" : "test.servers",
        "name" : "_id_"
    },
    {
        "v" : 1,
        "key" : {
            "name" : 1
        },
        "unique" : true,
        "ns" : "test.servers",
        "name" : "name_1"
    }
]
I was confused by exactly the same error today, and later figured it out. It was because I had removed an indexed property from a mongoose schema, but had not dropped the corresponding index from MongoDB. The error message in fact means that the new document has an indexed property whose value is null (i.e. not present in the JSON).
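In either case, if the leftover unique index is no longer wanted, one way to clear the error is to drop it by the name reported by getIndexes() (name_1 in the output above); a sketch:
> db.servers.dropIndex("name_1")
{ "nIndexesWas" : 2, "ok" : 1 }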

mongodb: how to add one field to the _id index to make a compound index

I can't remove the _id index, why?
When I try running the dropIndexes command, it removes all indexes but not the _id index.
Doing 'db.runCommand' doesn't work either:
> db.runCommand({dropIndexes:'fs_files',index:{_id:1}})
{ "nIndexesWas" : 2, "errmsg" : "may not delete _id index", "ok" : 0 }
not ok.
Can I include _id in a compound index?
I couldn't find anything online; the ensureIndex command can't do it.
db.fs_files.ensureIndex({'_id':1, 'created':1});
The above command just created a new compound index. I haven't found a similar 'createIndex' command.
Is the default _id index a unique index?
getIndexes() suggests it's not a unique index:
{
    "v" : 1,
    "key" : {
        "_id" : 1
    },
    "ns" : "gridfs.fs_files",
    "name" : "_id_"
},
{
    "v" : 1,
    "key" : {
        "created" : 1
    },
    "unique" : true,
    "ns" : "gridfs.fs_files",
    "name" : "created_1"
}
There is also a createIndex command, in addition to ensureIndex.
E.g.
db.<coll>.createIndex({foo:1})
You cannot delete the index on "_id" in mongodb.
Please see the MongoDB documentation.
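As for the last sub-question: even though getIndexes() does not report unique: true for the _id index, uniqueness on _id is still enforced by MongoDB; an illustrative check with made-up values:
> db.fs_files.insert({ _id: 1 })
> db.fs_files.insert({ _id: 1 })
E11000 duplicate key error index: gridfs.fs_files.$_id_ dup key: { : 1.0 }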

Calling ensureIndex with compound key results in _id field in index object

When I call ensureIndex from the mongo shell on a collection for a compound index, an _id field of type ObjectId is auto-generated in the index object.
> db.system.indexes.find();
{ "name" : "_id_", "ns" : "database.coll", "key" : { "_id" : 1 } }
{ "_id" : ObjectId("4ea78d66413e9b6a64c3e941"), "ns" : "database.coll", "key" : { "a.b" : 1, "a.c" : 1 }, "name" : "a.b_1_a.c_1" }
This makes intuitive sense as all documents in a collection need an _id field (even system.indexes, right?), but when I check the indexes generated by morphia's ensureIndex call for the same collection *there is no _id property*.
Looking at morphia's source code, it's clear that it's calling the same code that the shell uses, but for some reason (whether it's the fact that I'm creating a compound index or indexing an Embedded document or both) they produce different results. Can anyone explain this behavior to me?
Not exactly sure how you managed to get an _id field in the indexes collection, but both shell- and Morphia-originated ensureIndex calls for compound indexes do not put an _id field in the index object:
> db.test.ensureIndex({'a.b':1, 'a.c':1})
> db.system.indexes.find({})
{ "v" : 1, "key" : { "_id" : 1 }, "ns" : "test.test", "name" : "_id_" }
{ "v" : 1, "key" : { "a.b" : 1, "a.c" : 1 }, "ns" : "test.test", "name" : "a.b_1_a.c_1" }
>
Upgrade to 2.x if you're running an older version, to avoid running into now-resolved issues. Judging from your output, you are running 1.8 or earlier.