How to apply constraints in MongoDB?

I have started using MongoDB and I am fairly new to it.
Is there any way by which I can apply constraints on documents in MongoDB?
Like specifying a primary key, or requiring that an attribute be unique?
Or specifying that a particular attribute is greater than a minimum value?

MongoDB 3.2 Update
Document validation is now supported natively by MongoDB.
Example from the documentation:
db.createCollection( "contacts",
   { validator: { $or:
      [
         { phone: { $type: "string" } },
         { email: { $regex: /@mongodb\.com$/ } },
         { status: { $in: [ "Unknown", "Incomplete" ] } }
      ]
   }
} )
Original answer
To go beyond the uniqueness constraint available natively in indexes, you need to use something like Mongoose and its ability to support field-based validation. That will give you support for things like minimum value, but only when updates go through your Mongoose schemas/models.
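As an illustration, here is a minimal Mongoose sketch; the schema and field names are my own assumptions, not from the question:
// Minimal sketch, assuming Mongoose; schema/field names are illustrative
const mongoose = require('mongoose');

const productSchema = new mongoose.Schema({
    sku:   { type: String, required: true, unique: true }, // "unique" builds a unique index (enforced by MongoDB, not a validator)
    price: { type: Number, min: 0 }                         // "min" is validated by Mongoose on save()
});

const Product = mongoose.model('Product', productSchema);

// new Product({ sku: 'A1', price: -5 }).save() rejects with a validation error,
// but writes that bypass Mongoose (e.g. the raw driver) are not checked.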

Because MongoDB is a "schemaless" database, some of the things you mention (such as a minimum value) must be constrained from the application side rather than the db side.
However, you can create indexes (keys to query on). Remember that a query can only use one index at a time, so it's generally better to design your indexes around your queries rather than just index each field you might query against:
http://www.mongodb.org/display/DOCS/Indexes#Indexes-Basics
And you can also create unique indexes, which will enforce uniqueness similar to a unique constraint (it does have some caveats, such as with array fields):
http://www.mongodb.org/display/DOCS/Indexes#Indexes-unique%3Atrue
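For example, a unique index on a single field can be created like this (the collection and field names are illustrative):
db.users.createIndex({ email: 1 }, { unique: true })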

Related

Multiple arrays of objects inside array of objects in MongoDB

Fellow programmers.
Is it considered a bad practice to use a MongoDB model like this:
{
    companyId: '',
    companyName: '',
    companyDivisions: [
        {
            divisionId: '',
            divisionName: '',
            divisionDepartments: [
                {
                    departmentId: '',
                    departmentName: ''
                },
                ...
            ]
        },
        ...
    ],
},
...
Because right now it's getting complicated to update certain departments.
Thanks.
I don't think this is a bad practice generally speaking. If your model resembles this data structure, it is a good choice to store data this way, leveraging a document database. You can handle the data naturally and most likely you have a direct map onto your data model.
Another choice would be to have three different collections:
companies;
divisions;
departments.
However, in this case you would end up storing data as you would do in a relational database. Thus, more than a general rule, it is a matter of data model and expected query profile on your database.
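For illustration, the normalised alternative might look like this sketch (collection and field names are my own assumptions):
db.companies.insertOne({ _id: "c1", companyName: "Acme" })
db.divisions.insertOne({ _id: "d1", companyId: "c1", divisionName: "R&D" })
db.departments.insertOne({ _id: "dep1", divisionId: "d1", departmentName: "Analytics" })

// Updating a single department no longer touches nested arrays,
// but reading a whole company now needs $lookup or multiple queries:
db.departments.updateOne({ _id: "dep1" }, { $set: { departmentName: "Data Science" } })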
Edit: using MongoDb 3.6+
Using your document oriented approach, a single department can be granularly updated using the following update:
db.companies.findAndModify({
    query: {
        "companyId": "yourCompanyId"
    },
    update: {
        $set: { "companyDivisions.$[element1].divisionDepartments.$[element2].departmentName": "yourNewName" }
    },
    arrayFilters: [
        { "element1.divisionId": "yourDivisionId" },
        { "element2.departmentId": "yourDepartmentId" }
    ]
});
This update uses the powerful filtered positional operator feature introduced by MongoDB v3.6. The $[<identifier>] syntax allows you to select an array entry based on a specific condition expressed in the arrayFilters option of the db.collection.findAndModify() method.
Note that in case the condition matches multiple array items, the update affects all such items, thus allowing for multiple updates as well.
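As a sketch of that multi-element case (the values are illustrative, and updateMany with arrayFilters likewise requires MongoDB 3.6+):
db.companies.updateMany(
    { "companyId": "yourCompanyId" },
    { $set: { "companyDivisions.$[d].divisionDepartments.$[dep].departmentName": "yourNewName" } },
    {
        arrayFilters: [
            { "d.divisionId": "yourDivisionId" },
            { "dep.departmentId": { $in: ["dep1", "dep2"] } }  // matches more than one department
        ]
    }
)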
Furthermore, note that I would apply such an optimization only when needed, since "premature optimization is the root of all evil" (D. Knuth).

Unique index in mongoDB 3.2 ignoring null values

I want to add the unique index to a field ignoring null values in the unique indexed field and ignoring the documents that are filtered based on partialFilterExpression.
The problem is that sparse indexes can't be combined with partial indexes.
Also, a unique index adds null values to the index key field, so the documents can't be excluded with an $exists criterion in the partialFilterExpression.
Is it possible in MongoDB 3.2 to get around this situation?
I am adding this answer because I was looking for a solution and didn't find one. It may not answer this exact question, but it will help a lot of others out there like me.
For example, if the field that may be null is houseName and it is of type string, the solution can look like this:
db.collectionName.createIndex(
{name: 1, houseName: 1},
{unique: true, partialFilterExpression: {houseName: {$type: "string"}}}
);
This will ignore the null values in the field houseName and still be unique.
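To illustrate the behaviour, a sketch with made-up values:
db.collectionName.insertOne({ name: "a" })                       // not indexed: no string houseName
db.collectionName.insertOne({ name: "b" })                       // not indexed either, so no conflict
db.collectionName.insertOne({ name: "a", houseName: "Stark" })   // indexed
db.collectionName.insertOne({ name: "a", houseName: "Stark" })   // fails with E11000 duplicate key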
Yes, you can create a partial index in MongoDB 3.2.
Please see https://docs.mongodb.org/manual/core/index-partial/#index-type-partial
MongoDB recommends using partial indexes over sparse indexes. I suggest you drop your sparse index in favor of a partial index.
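A possible migration sketch (the index name and field here are assumptions based on the example above):
db.collectionName.dropIndex("houseName_1")   // the old sparse unique index
db.collectionName.createIndex(
    { houseName: 1 },
    { unique: true, partialFilterExpression: { houseName: { $exists: true } } }
)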
You can create a partial index in mongo:3.2.
For example, if ipaddress can be "" but non-empty values such as "127.0.0.1" should be unique, the solution can be like this:
db.collectionName.createIndex(
    { "ipaddress": 1 },
    { "unique": true, "partialFilterExpression": { "ipaddress": { "$gt": "" } } }
)
This will ignore "" values in the ipaddress field and still be unique.
To create it in MongoDB Compass you must write it as JSON:
{
    "YourField": {
        "$exists": true,
        "$gt": "0",
        "$type": "string"
    }
}
To find other supported types, see this link.
Yes, it can be a problem that the partial filter expression cannot contain any 'not' filters.
For those who can be interested in a C# solution for an index like this, here is an example.
We have a 'User' entity, which has a one-to-one 'relation' to a 'Doctor' entity.
This relation is represented by the not required, nullable field 'DoctorId' in the 'User' entity. In other words, there is a requirement that a given 'Doctor' can be linked to only a single 'User' at a time.
So we need a unique index which fires an exception when something attempts to set DoctorId to a Guid which is already set for another 'User' entity. At the same time, multiple 'null' entries must be allowed for the 'DoctorId' field, since many users do not have any doctor attached to them.
The solution to build this kind of an index looks like:
var uniqueDoctorIdIndexDefinition = new IndexKeysDefinitionBuilder<User>()
    .Ascending(o => o.DoctorId);

var existsFilter = Builders<User>.Filter.Exists(o => o.DoctorId);
var notNullFilter = Builders<User>.Filter.Type(o => o.DoctorId, BsonType.String);
var andFilter = Builders<User>.Filter.And(existsFilter, notNullFilter);

var createIndexOptions = new CreateIndexOptions<User>
{
    Unique = true,
    Name = UniqueDoctorIdIndexName,
    PartialFilterExpression = andFilter,
};

var uniqueDoctorIdIndex = new CreateIndexModel<User>(
    uniqueDoctorIdIndexDefinition,
    createIndexOptions);

users.Indexes.CreateOne(uniqueDoctorIdIndex);
You will probably need to directly specify the BsonType of the 'DoctorId' field in your mapping of the 'User' entity, using an attribute; for example, in our case it was:
[BsonRepresentation(BsonType.String)]
public Guid? DoctorId { get; set; }
I am more than sure that there is a more elegant and compact solution to this problem, so I would be happy if somebody suggested it here.
Here is an example that I modified from the MongoDB partial index documentation:
db.contacts.createIndex(
{ email: 1 },
{ unique: true, partialFilterExpression: { email: { $exists: true } } }
)
IMPORTANT
To use the partial index, a query must contain the filter expression (or a modified filter expression that specifies a subset of the filter expression) as part of its query condition.
You can see that queries such as:
db.contacts.find({'email':'name@email.com'}).explain()
will indicate that they are doing an index scan, even if you don't specify {$exists: true}, because you're implicitly specifying a subset of the partialFilterExpression by specifying an email in your filter.
On the other hand, the following query will do a collection scan:
db.contacts.find({email: {$exists: false}})
WARNING
mythicalcoder's answer (currently the highest voted answer) is very misleading because it successfully creates a unique index, but the query planner will not generally be able to use the index you've created unless you add houseName: {$type: "string"} into your filter expression. This can have performance costs which you might not be aware of and can cause problems down the road.
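For example, a lookup that includes the type predicate should be able to use that partial index (the values here are illustrative, following the earlier houseName example):
db.collectionName.find({ name: "a", houseName: { $type: "string" } }).explain()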

MongoDB: Find document given field values in an object with an unknown key

I'm making a database on theses/arguments. They are related to other arguments, which I've placed in an object with a dynamic key, which is completely random.
{
    _id: "aeokejXMwGKvWzF5L",
    text: "test",
    relations: {
        cF6iKAkDJg5eQGsgb: {
            type: "interpretation",
            originId: "uFEjssN2RgcrgiTjh",
            ratings: [...]
        }
    }
}
Can I find this document if I only know what the value of type is? That is, I want to do something like this:
db.theses.find({ relations['anything']: { type: "interpretation" } })
This could've been done easily with the positional operator, if relations had been an array. But then I cannot make changes to the objects in ratings, as mongo doesn't support those updates. I'm asking here to see if I can keep from having to change the database structure.
Though you seem to have arrived at this structure because of a problem with updating nested arrays, you have really only caused another problem by doing something else that is not well supported: there is no "wildcard" concept for searching unspecified keys using the standard, index-friendly query operators.
The only way you can really search for such data is by using JavaScript code on the server to traverse the keys using $where. This is clearly not a really good idea as it requires brute force evaluation rather than using useful things like an index, but it can be approached as follows:
db.theses.find(function() {
    var relations = this.relations;
    return Object.keys(relations).some(function(rel) {
        return relations[rel].type == "interpretation";
    });
})
While this will return those objects from the collection that contain the required nested value, it must inspect each object in the collection in order to do the evaluation. This is why such evaluation should really only be used when paired with a filter that can directly use an index on a concrete value from the documents in the collection.
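As a sketch of that pairing, assuming here that the "text" field is indexed and selective, the indexed filter at least limits how many documents the JavaScript has to run against:
db.theses.find({
    text: "test",   // assumed indexed; narrows the candidate documents first
    $where: function() {
        var relations = this.relations;
        return Object.keys(relations).some(function(rel) {
            return relations[rel].type == "interpretation";
        });
    }
})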
Still, the better solution is to consider remodelling the data to take advantage of indexes in search. Where it is necessary to update the "ratings" information, basically "flatten" the structure so that each "rating" element is the only array data instead:
{
    "_id": "aeokejXMwGKvWzF5L",
    "text": "test",
    "relationsRatings": [
        {
            "relationId": "cF6iKAkDJg5eQGsgb",
            "type": "interpretation",
            "originId": "uFEjssN2RgcrgiTjh",
            "ratingId": 1,
            "ratingScore": 5
        },
        {
            "relationId": "cF6iKAkDJg5eQGsgb",
            "type": "interpretation",
            "originId": "uFEjssN2RgcrgiTjh",
            "ratingId": 2,
            "ratingScore": 6
        }
    ]
}
Now searching is of course quite simple:
db.theses.find({ "relationsRatings.type": "interpretation" })
And of course the positional $ operator can now be used with the flatter structure:
db.theses.update(
    { "relationsRatings.ratingId": 1 },
    { "$set": { "relationsRatings.$.ratingScore": 7 } }
)
Of course this means duplication of the "related" data for each "ratings" value, but this is generally the cost of being able to update by matched position, since only a single level of array nesting is supported.
So you can force the logic to match with the way you have it structured, but it is not a great idea to do so and will lead to performance problems. If however your main need here is to update the "ratings" information rather than just append to the inner list, then a flatter structure will be of greater benefit and of course be a lot faster to search.

JSON Schema with dynamic key field in MongoDB

We want to have i18n support for objects stored in a MongoDB collection.
Currently our schema is like this:
{
    _id: "id",
    name: "name",
    localization: [{
        lan: "en-US",
        name: "name_in_english"
    }, {
        lan: "zh-TW",
        name: "name_in_traditional_chinese"
    }]
}
but my thought is that the field "lan" is unique, so can I just use this field as a key, so that the structure would be:
{
    _id: "id",
    name: "name",
    localization: {
        "en-US": "name_in_english",
        "zh-TW": "name_in_traditional_chinese"
    }
}
which would be neater and easier to parse (just localization[language] would get the value I want for a specific language).
But then the question is: Is this a good practice in storing data in MongoDB? And how to pass the json-schema check?
It is not a good practice to have values as keys. The language codes are values and, as you say, you cannot validate them against a schema. It makes querying against them very hard. For example, you can't figure out if you have a language translation for "nl-NL", as you can't compare against keys, nor is it easy to index this. You should always have descriptive keys.
However, as you say, having the languages as keys makes it a lot easier to pull the data out as you can just access it by ['nl-NL'] (or whatever your language's syntax is).
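As a trivial sketch of that convenience (the collection name and ids are assumptions):
var doc = db.items.findOne({ _id: "id" })
var chineseName = doc.localization["zh-TW"]   // direct key lookup, no array scan in application code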
I would suggest an alternative schema:
{
    your_id: "id_for_name",
    lan: "en-US",
    name: "name_in_english"
}

{
    your_id: "id_for_name",
    lan: "zh-TW",
    name: "name_in_traditional_chinese"
}
Now you can:
set an index on { your_id: 1, lan: 1 } for speedy lookups
query for each translation individually and just get that translation:
db.so.find( { your_id: "id_for_name", lan: 'en-US' } )
query for all the versions for each id using this same index:
db.so.find( { your_id: "id_for_name" } )
and also much more easily update the translation for a specific language:
db.so.update(
{ your_id: "id_for_name", lan: 'en-US' },
{ $set: { name: "ooga" } }
)
Neither of those points are possible with your suggested schemas.
Obviously the second schema example is much better for your task (assuming, of course, that the lan field is unique as you mentioned, which seems true to me as well).
Getting an element from a dictionary/associative array/mapping/whatever_it_is_called_in_your_language is much cheaper than scanning a whole array of values. In the current case it's also more efficient from the storage-size point of view: remember that all fields are stored in MongoDB as-is, so every record holds the whole key name for a JSON field, not some compressed representation or index of it.
My experience shows that MongoDB is mature enough to be used as the main storage for your application, even under high load (whatever that means ;) ). The main problem is how you fight database-level locks (well, we'll wait for the promised table-level locks, which I hope will speed MongoDB up a lot more), though data loss is possible if your MongoDB cluster is built badly (dig into the docs and articles around the Internet for more information).
As for the schema check, you must do it in your programming language on the application side before inserting records; yeah, that's why Mongo is called schemaless.
There is a case where an object is necessarily better than an array: supporting upserts into a set. For example, if you want to update an item having name 'item1' to have val 100, or insert such an item if one doesn't exist, all in one atomic operation. With an array, you'd have to do one of two operations. Given a schema like
{ _id: 'some-id', itemSet: [ { name: 'an-item', val: 123 } ] }
you'd have commands
// Update:
db.coll.update(
{ _id: id, 'itemSet.name': 'item1' },
{ $set: { 'itemSet.$.val': 100 } }
);
// Insert:
db.coll.update(
{ _id: id, 'itemSet.name': { $ne: 'item1' } },
{ $addToSet: { 'itemSet': { name: 'item1', val: 100 } } }
);
You'd have to query first to know which is needed in advance, which can exacerbate race conditions unless you implement some versioning. With an object, you can simply do
db.coll.update(
  { _id: id },
  { $set: { 'itemSet.item1.val': 100 } }
);
If this is a use case you have, then you should go with the object approach. One drawback is that querying for a specific name requires scanning. If that is also needed, you can add a separate array specifically for indexing. This is a trade-off with MongoDB. Upserts would become
db.coll.update(
  { _id: id },
  {
    $set: { 'itemSet.item1.val': 100 },
    $addToSet: { itemNames: 'item1' }
  }
);
and the query would then simply be
db.coll.find({ itemNames: 'item1' })
(Note: the $ positional operator does not support array upserts.)

mongoDB: unique index on a repeated value

So I'm pretty new to MongoDB, so I figure this could be a misunderstanding of general usage; bear with me.
I have a document schema I'm working with as such
{
    name: "bob",
    email: "bob@gmail.com",
    logins: [
        { u: 'a', p: 'b', public_id: '123' },
        { u: 'x', p: 'y', public_id: 'abc' }
    ]
}
My problem is that I need to ensure that the public ids are unique within a document and across the collection.
Furthermore, there are some existing records being migrated from a MySQL DB that don't have this value, and they will therefore all end up as null values in Mongo.
I figure it's either an index
db.users.ensureIndex({'logins.public_id': 1}, {unique: true});
which isn't working because of the missing keys and is throwing an E11000 duplicate key error,
or it's a more fundamental schema problem, in that I shouldn't be nesting objects in an array structure like that. In which case, what? A separate collection for the user_logins??? That seems to go against the idea of an embedded document.
If you expect u and p to always have the same values on each insert (as in your example snippet), you might want to use the $addToSet operator on inserts to ensure the uniqueness of your public_id field. Otherwise I think it's quite difficult to make them unique across a whole collection without external maintenance or JS functions.
If not, I would probably store them in their own collection and use the public_id as the _id field to ensure their cross-document uniqueness inside a collection. Maybe that contradicts the idea of embedded docs in a document database, but given the requirements I think that's negligible.
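A sketch of that separate-collection idea (the "logins" collection name and the field values are assumptions):
db.logins.insertOne({ _id: "123", u: "a", p: "b" })   // public_id stored as _id
db.logins.insertOne({ _id: "123", u: "x", p: "y" })   // rejected: duplicate key on the built-in unique _id index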
Furthermore, there are some existing records being migrated from a MySQL DB that don't have this value, and they will therefore all end up as null values in Mongo.
So you want to apply a unique index on a data set that's not truly unique. I think this is just a modeling problem.
If a null logins.public_id is going to violate your uniqueness constraint, then just don't write the field at all:
{
logins: [
{ u: 'a', p: 'b' },
{ u: 'x', p: 'y' }
]
}
Thanks all.
In the end I opted to separate this into 2 collections, one for users and one for logins.
For users this looked a little like:
userDocument = {
    ...
    logins: [
        DBRef('loginsCollection', loginDocument._id),
        DBRef('loginsCollection', loginDocument2._id),
    ]
}

loginDocument = {
    ...
    user: new DBRef('userCollection', userDocument._id)
}
Although not what I was originally after (a single collection), it is working nicely, and by utilising the MongoDB _id uniqueness there is a constraint now built in at the database level rather than implemented at the application level.