I have a structure similar to this:
class Cat {
    int id;
    List<Kitten> kittens;
}
class Kitten {
    int id;
}
I'd like to prevent users from creating a cat with more than one kitten with the same id. I've tried creating an index as follows:
db.Cats.ensureIndex({'id': 1, 'kittens.id': 1}, {unique:true})
But when I attempt to insert a badly-formatted cat, Mongo accepts it.
Am I missing something? Can this even be done?
As far as I know, unique indexes only enforce uniqueness across different documents, so this would throw a duplicate key error:
db.cats.insert( { id: 123, kittens: [ { id: 456 } ] } )
db.cats.insert( { id: 123, kittens: [ { id: 456 } ] } )
But this is allowed:
db.cats.insert( { id: 123, kittens: [ { id: 456 }, { id: 456 } ] } )
I'm not sure if there's any way to enforce the constraint you need at the Mongo level; maybe it's something you could check in the application logic when you insert or update?
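For illustration, here is a minimal sketch of such an application-level check in the mongo shell (the document shape is taken from the question; the duplicate test itself is just plain JS):
// Sketch: reject the cat in application code before inserting it.
var cat = { id: 123, kittens: [ { id: 456 }, { id: 456 } ] };

var seen = {};
var hasDuplicate = cat.kittens.some(function (k) {
    if (seen[k.id]) return true;   // this kitten id was already in the array
    seen[k.id] = true;
    return false;
});

if (hasDuplicate) {
    throw new Error("A cat cannot have two kittens with the same id");
}
db.cats.insert(cat);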
Ensuring uniqueness of the individual values in an array field
In addition to the example above, MongoDB has an operator that, when you add a new object/value to an array field, only performs the update if that value/object doesn't already exist.
So if you have a document that looks like this:
{ _id: 123, kittens: [456] }
This would be allowed:
db.cats.update({_id:123}, {$push: {kittens:456}})
resulting in
{ _id: 123, kittens: [456, 456] }
However, using the $addToSet operator (as opposed to $push) checks whether the value already exists before adding it.
So, starting with:
{ _id: 123, kittens: [456] }
then executing:
db.cats.update({_id:123}, {$addToSet: {kittens:456}})
Would not have any effect.
So, long story short: unique constraints don't validate uniqueness among the items of an array field within a single document; they only ensure that two documents can't have identical values in the indexed fields.
There is an equivalent of an insert that keeps the array attribute unique. The following command essentially does an insert while ensuring the uniqueness of kittens (the upsert creates the document for you if one with id 123 doesn't already exist).
db.cats.update(
    { id: 123 },
    { $addToSet: { kittens: { $each: [ 456, 456 ] } }, $set: { otherfields: "extraval", field2: "value2" } },
    { upsert: true }
)
The resulting value of the object will be
{
    "id": 123,
    "kittens": [456],
    "otherfields": "extraval",
    "field2": "value2"
}
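Running a follow-up update against the same document shows the behaviour: only values not already present are appended (789 is an arbitrary example id):
// 456 is already in the array, so only 789 is appended
db.cats.update(
    { id: 123 },
    { $addToSet: { kittens: { $each: [ 456, 789 ] } } }
)
// kittens is now [456, 789]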
What seemed important here is ensuring that no more than one item with the same id (or some other field that needs to be treated as unique) exists in a MongoDB object array. Then a simple query like this, using $addToSet, will suffice for the update.
Forgive me, I am not a mongo-shell expert; this uses the Java Mongo Driver, version 4.0.3:
collection = database.getCollection("cat", Cat.class);
UpdateResult result = collection.updateOne(
        and(eq("id", 1), nin("kittens.id", newKittenId)),
        addToSet("kittens", new Kitten(newKittenId)));
The query used here adds an extra condition to the match: the cat's id must be 1, and newKittenId must not already belong to any of the kittens that were previously added. If the cat is found and no kitten has taken the new kitten id, the update goes ahead and adds the new kitten. But if newKittenId has already been taken by one of the kittens, the call simply returns an UpdateResult with no matched count and no modified count (nothing happens).
Note: this does not create a unique constraint on kitten.id; MongoDB does not support uniqueness within object arrays in a document, and $addToSet does not really handle duplicate items in an object array unless the new object is a 100% replica of one already in the database. Check here for more explanation about $addToSet.
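To illustrate that last point, here is a small shell sketch (the name field is invented for the example): $addToSet only skips an element that matches field for field, so two kittens can still end up with the same id.
db.cats.insert({ id: 1, kittens: [ { id: 456, name: "Tom" } ] })

// Same kitten id, different name: the element is not an exact replica,
// so $addToSet appends it and id 456 now appears twice in the array.
db.cats.update(
    { id: 1 },
    { $addToSet: { kittens: { id: 456, name: "Jerry" } } }
)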
There is a workaround you can do using the document validator.
Here is an example validator where "a" is an array and, within "a", the subdocument field "b" must have unique values. This assumes the collection is either empty or already complies with the rule:
> db.runCommand({collMod:"coll", validator: {$expr:{$eq:[{$size:"$a.b"},{$size:{$setUnion:"$a.b"}}]}}})
/* test it */
> db.coll.insert({a:[{b:1}]}) /* success */
> db.coll.update({},{ '$push' : { 'a':{b:1}}})
WriteResult({
    "nMatched" : 0,
    "nUpserted" : 0,
    "nModified" : 0,
    "writeError" : {
        "code" : 121,
        "errmsg" : "Document failed validation"
    }
})
See more info about this solution in the original post.
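For a new collection, the same rule can be attached at creation time instead of via collMod; a sketch (it needs $expr support in validators, i.e. MongoDB 3.6+):
db.createCollection("coll", {
    validator: {
        $expr: {
            // the array of b values must be the same size as its de-duplicated set
            $eq: [ { $size: "$a.b" }, { $size: { $setUnion: "$a.b" } } ]
        }
    }
})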
You can write a custom Mongoose validation method in this case. Mongoose lets you implement a hook before (pre) or after (post) validation; here, a post-validation hook can check that the kittens array contains no duplicates. There may be efficiency improvements you can make based on your details; if you only have '_id', for example, you could use plain JS array methods instead of nested loops (see the sketch after the code below).
catSchema.post('validate', function () {
    return new Promise((resolve, reject) => {
        for (var i = 0; i < this.kittens.length; i++) {
            let kitten = this.kittens[i];
            for (var p = 0; p < this.kittens.length; p++) {
                if (p == i) {
                    continue;
                }
                // Compare as strings: two distinct ObjectId instances are never == to each other
                if (String(kitten._id) == String(this.kittens[p]._id)) {
                    return reject(new Error('Duplicate Kitten Ids not allowed'));
                }
            }
        }
        return resolve();
    });
});
I like to use promises in validation because it's easier to specify errors.
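On the efficiency point above: if the kitten ids can be compared as strings, the nested loops can be replaced by a single pass. A sketch using a Set (rather than repeated includes calls):
// Sketch: same duplicate check without nested loops.
// Ids are stringified first, since two ObjectId instances are never === each other.
catSchema.post('validate', function () {
    const ids = this.kittens.map(k => String(k._id));
    // A Set collapses duplicates, so the sizes differ when there is a repeat.
    if (new Set(ids).size !== ids.length) {
        return Promise.reject(new Error('Duplicate Kitten Ids not allowed'));
    }
});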
Related
Situation:
I have several documents in the same collection (account); each has an attribute of type array(string) named uniquestrings.
Problem: each entry in uniquestrings must be unique across all documents in MongoDB. It seems that MongoDB/Mongoose does not offer such a validation (neither addToSet¹ nor index: {unique: true}² solves the problem). Is there a pattern to restructure my document schema so that MongoDB itself can validate it? At the moment the software itself checks it before updating the document.
E.g.
account {
    _id: 111,
    uniquestrings: ["a", "b", "c"]
}
account {
    _id: 222,
    uniquestrings: ["d", "e", "f"]
}
E.g. prevent account(222).uniquestrings.push("a"); by throwing a duplicate error from mongo.
¹ Uniqueness in an array is not enough
² Each item in array has to be unique across the collection
UPDATE1:
More examples. Affected Schema entry looks like:
var Account = new Schema({
    ...
    uniquestrings: [{type: String, index: {unique: true}}]
    ...
});
Now I create 4 account documents. I want only 1 and 2 to be OK, and the rest should fail.
var accountModel1 = new Account.Model({uniquestrings: ["a", "b"]});
accountModel1.save(); // OK
var accountModel2 = new Account.Model({uniquestrings: ["c", "d"]});
accountModel2.save(); // OK
var accountModel3 = new Account.Model({uniquestrings: ["c", "d"]});
accountModel3.save(); // FAIL => Not unique, so far so good
var accountModel4 = new Account.Model({uniquestrings: ["X", "d"]});
accountModel4.save(); // OK => But I want this to fail because "d" is already in use.
It might be possible, if you are willing to store the unique values in a different collection. It would be structured like this:
{ "uniquestring" : "a", "account" : 111 }
{ "uniquestring" : "b", "account" : 111 }
{ "uniquestring" : "c", "account" : 111 }
{ "uniquestring" : "d", "account" : 222 }
{ "uniquestring" : "e", "account" : 222 }
{ "uniquestring" : "f", "account" : 222 }
I am not an expert with Mongoose, but I believe that you can define Models to link collections together, with the account field here referencing the accounts collection's _id field.
Now, you can enforce the uniqueness with a straightforward index:
db.uniquestrings.createIndex( { "uniquestring" : 1 } , { unique : true } )
Now, your app will have a little more work to do when saving the data (it needs to save to the uniquestrings collection as well as the accounts collection), but you do now have database-level enforcement of the uniqueness of these strings, across the database.
PS edits are welcome from anybody with more detailed knowledge of how to implement and use such models in mongoose.
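In that spirit, here is a rough sketch of what the lookup collection might look like as a Mongoose model (names like UniqueString and the exact fields are assumptions, not from the question); the unique index on the string field is what produces the duplicate key error:
const mongoose = require('mongoose');

const uniqueStringSchema = new mongoose.Schema({
    // One document per string; the unique index rejects duplicates
    // no matter which account tries to claim the string.
    uniquestring: { type: String, required: true, unique: true },
    account:      { type: mongoose.Schema.Types.ObjectId, ref: 'Account' }
});

const UniqueString = mongoose.model('UniqueString', uniqueStringSchema);

// Saving "a" for account 222 after it was already saved for account 111
// fails with an E11000 duplicate key error from MongoDB.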
According to this MongoDB doc, there's no way to force MongoDB to enforce a unique index policy within a single document, but there is a way to enforce it across separate documents.
db.collection.createIndex({ "a.b": 1 }, { unique: true });
...will enforce uniqueness on a.b for these two inserts (the second one fails with a duplicate key error)...
db.collection.insert({ a: [{b: 1}] });
db.collection.insert({ a: [{b: 1}] });
...but will not enforce uniqueness for this...
db.collection.insert({ a: [{b: 1},{b: 1}] });
...BUT if you strictly use $addToSet with the index...
db.collection.update({}, { $addToSet: { a: { b: 1 } } }, { upsert: true });
...and you compromise: no exception is thrown, but rather the update quietly ignores the duplicate, which isn't what you want, but it's closer.
So far, we've covered what's answered in another SO question, but keep reading and maybe you'll get what you're after.
Now, achieving what you're asking with native MongoDB requests is not possible out of the box, but you could create the index and use a query against it to look up the value, throw an error if you find it, and otherwise upsert as directed above.
So...
// Ensure index (unique, as above)
db.collection.createIndex({ 'a.b': 1 }, { unique: true });

// Test for existence and throw if the value is not unique
function insertUnique(newVal) {
    // find() returns a cursor (always truthy), so use findOne() for the existence check
    var exists = db.collection.findOne({ 'a.b': newVal });
    if (exists) {
        throw "Element is not unique in the collection: " + newVal;
    } else {
        db.collection.update({}, { $addToSet: { a: { b: newVal } } }, { upsert: true });
    }
}
// Use it later...
try {
    insertUnique(1);
    insertUnique(1); // it should barf
} catch (e) {
    console.warn(e);
}
Lastly, depending on which client you use, you may be able to extend the prototype (in JS) with the insertUnique method, and soon you'll forget you couldn't do this to begin with.
In the MongoDB aggregation framework, I was hoping to use the $unwind operator on an object (i.e. a JSON collection). It doesn't look like this is possible; is there a workaround? Are there plans to implement this?
For example, take the article collection from the aggregation documentation. Suppose there is an additional field "ratings" that is a map from user -> rating. Could you calculate the average rating for each user?
Other than this, I'm quite pleased with the aggregation framework.
Update: here's a simplified version of my JSON collection per request. I'm storing genomic data. I can't really make genotypes an array, because the most common lookup is to get the genotype for a random person.
variants: [
    {
        name: 'variant1',
        genotypes: {
            person1: 2,
            person2: 5,
            person3: 7
        }
    },
    {
        name: 'variant2',
        genotypes: {
            person1: 3,
            person2: 3,
            person3: 2
        }
    }
]
It is not possible to do the type of computation you are describing with the aggregation framework - and it's not because there is no $unwind method for non-arrays. Even if the person:value objects were documents in an array, $unwind would not help.
The "group by" functionality (whether in MongoDB or in any relational database) is done on the value of a field or column. We group by value of field and sum/average/etc based on the value of another field.
A simple example is a variant of what you suggest: a ratings field added to the example article collection, not as a map from user to rating but as an array like this:
{ title: "title of article", ...
  ratings: [
      { voter: "user1", score: 5 },
      { voter: "user2", score: 8 },
      { voter: "user3", score: 7 }
  ]
}
Now you can aggregate this with:
[ {$unwind: "$ratings"},
{$group : {_id : "$ratings.voter", averageScore: {$avg:"$ratings.score"} } }
]
But this example structured as you describe it would look like this:
{ title: "title of article", ...
  ratings: {
      user1: 5,
      user2: 8,
      user3: 7
  }
}
or even this:
{ title: "title of article", ...
  ratings: [
      { user1: 5 },
      { user2: 8 },
      { user3: 7 }
  ]
}
Even if you could $unwind this, there is nothing to aggregate on here. Unless you know the complete list of all possible keys (users) you cannot do much with this. [*]
An analogous relational DB schema to what you have would be:
CREATE TABLE T (
    user1 INTEGER,
    user2 INTEGER,
    user3 INTEGER
    ...
);
That's not what would be done; instead we would do this:
CREATE TABLE T (
    username VARCHAR(32),
    score INTEGER
);
and now we aggregate using SQL:
select username, avg(score) from T group by username;
There is an enhancement request for MongoDB that may allow you to do this in the aggregation framework in the future - the ability to project values to keys and vice versa. Meanwhile, there is always map/reduce.
[*] There is a complicated way to do this if you know all the unique keys (you can find all unique keys with a method similar to this), but if you know all the keys you may as well just run a sequence of queries of the form db.articles.find({"ratings.user1":{$exists:true}},{_id:0,"ratings.user1":1}) for each userX. That returns all their ratings, and you can sum and average them simply enough, rather than doing the very complex projection the aggregation framework would require.
Since 3.4.4, you can transform an object into an array using $objectToArray.
See:
https://docs.mongodb.com/manual/reference/operator/aggregation/objectToArray/
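For instance, a sketch of the per-user average from the question, assuming an articles collection with the ratings map shown above:
db.articles.aggregate([
    // Turn the ratings map into an array of { k: <user>, v: <score> } pairs
    { $project: { ratings: { $objectToArray: "$ratings" } } },
    { $unwind: "$ratings" },
    // Group by the original key (the user) and average the values
    { $group: { _id: "$ratings.k", averageScore: { $avg: "$ratings.v" } } }
])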
This is an old question, but I've run across a tidbit of information through trial and error that people may find useful.
It's actually possible to unwind on a dummy value by fooling the parser this way:
db.Opportunity.aggregate(
    { $project: {
        Field1: 1, Field2: 1, Field3: 1,
        DummyUnwindField: { $ifNull: [null, [1.0]] }
      }
    },
    { $unwind: "$DummyUnwindField" }
);
This will produce 1 row per document, regardless of whether or not the value exists. You may be able to tinker with this to generate the results you want. I had hoped to combine this with multiple $unwinds (sort of like emit() in map/reduce), but alas, the last $unwind wins, or they combine as an intersection rather than a union, which makes it impossible to achieve the results I was looking for. I am sadly disappointed with the aggregation framework's functionality, as it doesn't fit the one use case I was hoping to use it for (and, strangely, a lot of the questions on StackOverflow in this area seem to be asking for the same thing): ordering results based on match rate. Improving the poor map/reduce performance would have made this entire feature unnecessary.
This is what I found & extended.
Let's create an experimental database in mongo:
db.copyDatabase('livedb' , 'experimentdb')
Now use experimentdb and convert the object into an array in your experimentcollection:
db.getCollection('experimentcollection').find({}).forEach(function(e){
    if (e.ratings) {                      // only touch documents that have the field
        e.ratings = [e.ratings];          // wrap the object (e.g. ratings) in an array
        db.experimentcollection.save(e);
    }
})
Some nerdy JS code to convert the JSON into flat objects:
var flatArray = [];
var data = db.experimentcollection.find().toArray();
for (var index = 0; index < data.length; index++) {
    var flatObject = {};
    for (var prop in data[index]) {
        var value = data[index][prop];
        if (Array.isArray(value) && prop === 'ratings') {
            for (var i = 0; i < value.length; i++) {
                for (var inProp in value[i]) {
                    flatObject[inProp] = value[i][inProp];
                }
            }
        } else {
            flatObject[prop] = value;
        }
    }
    flatArray.push(flatObject);
}
printjson(flatArray);