Unique index into a subdocument list, indexed by one key - mongodb

I need to know if it is possible to have a list of objects where the objects are unique by day.
I have a collection with this format:
{
  domain: "google.com",
  counters: [
    { day: "2011-08-03", metric1: 10, metric_2: 15 },
    { day: "2011-08-04", metric1: 8, metric_2: 7 },
    { day: "2011-08-05", metric1: 20, metric_2: 150 }
  ]
}
I tried something like this:
db.test.ensureIndex({ domain: 1, 'counters.day': 1 }, { unique: true })
with upsert and $push, but this does not work.
Then I tried with upsert and $addToSet, but I can't set the unique fields.
I need to push a new counter; if the day exists, it should be replaced.

Unique indexes work only across root documents, not within an embedded array. That means you can't insert two documents with the same domain and counters.day, but you can still insert duplicate rows into the embedded counters array of a single document.
I need to push a new counter, if the day exists, it should be
replaced.
When you try to insert a new embedded document, you should check whether an element with that day already exists: if it does, perform an update; otherwise, do an insert.
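One way to sketch this without a read-modify-write race is a pair of guarded updates: the first only matches when the day already exists (and replaces it), the second only matches when it does not (and pushes it). The collection name "test" and counter shape follow the question; the builder helpers are illustrative assumptions, not a real API.

```javascript
// Hypothetical helpers that build the arguments for db.test.update(filter, update).
function buildReplaceDay(domain, counter) {
  return {
    // matches only when an element for this day already exists
    filter: { domain: domain, "counters.day": counter.day },
    update: { $set: { "counters.$": counter } } // replace the matched element
  };
}

function buildPushDay(domain, counter) {
  return {
    // matches only when no element for this day exists yet
    filter: { domain: domain, "counters.day": { $ne: counter.day } },
    update: { $push: { counters: counter } }
  };
}

// Usage against a live collection (not executed here):
// var c = { day: "2011-08-06", metric1: 5, metric_2: 9 };
// var rep = buildReplaceDay("google.com", c);
// db.test.update(rep.filter, rep.update);
// var push = buildPushDay("google.com", c);
// db.test.update(push.filter, push.update);
```

Running both unconditionally is safe: for any given day exactly one of the two filters matches, so the counter is either replaced or pushed, never duplicated.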

Related

Increment or create object in sub-array

I want to store/update statistics for each year. I have the following model
{
  entityid: ObjectId,
  stats: [
    { year: 2018, value: 25 }
  ]
}
(This model is a bit simplified, in reality the year has also an array with months -> days -> hours. But the problem stays the same for the simplified model)
For updating I can simply use $inc, like:
db.statistics.updateOne(
  { entityid, 'stats.year': 2018 },
  { $inc: { 'stats.$.value': 1 } }
)
But now a problem arises when a new year begins, because there will be no { year: 2019, value: 0 } inside the stats array. Upsert cannot really be used because of the positional operator $.
The current solution is to check the result of the update query above if we actually modified a document. If no changes were applied we execute a push to insert the array element for the new year and execute the update again.
The solution feels like a hack and produces some problem with race conditions where multiple objects are pushed for the same year, although this can be fixed easily.
Can the update/push operation be performed in one go? Or is there a better database model to store this information?
You can either keep your workaround, or restructure the documents like this and upsert on the year key while using $inc on the value:
{
  entityid: ObjectId,
  year: 2018,
  value: 25
}
and use $group on entityid when fetching, if you want the per-entity data grouped back together.
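With the flattened model the whole operation becomes a single atomic upsert, so neither the two-phase check nor the race-condition fix is needed. A minimal sketch; buildYearlyInc is a hypothetical helper that just assembles the three arguments for updateOne(filter, update, options):

```javascript
// Hypothetical helper for the flattened per-year model.
function buildYearlyInc(entityid, year, amount) {
  return {
    filter: { entityid: entityid, year: year },
    update: { $inc: { value: amount } },
    // the first hit in a new year creates { entityid, year, value: amount }
    options: { upsert: true }
  };
}

// Usage against a live collection (not executed here):
// const op = buildYearlyInc(id, 2019, 1);
// db.statistics.updateOne(op.filter, op.update, op.options);
```

$inc on a field of a newly upserted document starts from 0, which is exactly the "create { year: 2019, value: 0 } then increment" behavior the question asks for.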

In Mongoose, how to limit find, update, delete to first 100 documents in the collection?

A simplified situation is this:
There are 1000+ documents in the MongoDB collection; certain users (e.g. free accounts) can only operate on the first 100 documents.
The operations include: find, update, and delete.
How to limit operations to the first 100 documents in a collection? I have the following algorithm in mind:
1) find the first 100 documents
2) do find, update, delete, paginate only for this sub set of documents.
How to achieve this? If possible, please provide some sample code.
I would suggest keeping a numeric id field in your collection.
That way you can easily filter to your first n customers, without having to fetch the records first and then process them separately.
Find customers having first name 'John':
Modal.find({
  id: { $gte: 1, $lte: 100 },
  first_name: 'John'
})
Delete a customer having first name 'John':
Modal.deleteOne({
  id: { $gte: 1, $lte: 100 },
  first_name: "John"
})
Paginate with 10 docs per page:
Modal.find({
  id: { $gte: 1, $lte: 100 }
}).limit(10).skip(10)
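The skip/limit arithmetic generalizes to any page number; a small sketch with 1-based pages, capped so no window ever reaches past the first 100 documents (pageWindow is an illustrative helper, not part of Mongoose):

```javascript
// Hypothetical pagination helper. `cap` is the largest number of documents the
// user may see (100 in the question); the window is clipped at that boundary.
function pageWindow(page, pageSize, cap) {
  const skip = (page - 1) * pageSize;
  const limit = Math.max(0, Math.min(pageSize, cap - skip));
  return { skip: skip, limit: limit };
}

// Usage (not executed against a DB):
// const w = pageWindow(2, 10, 100);           // → { skip: 10, limit: 10 }
// Modal.find({ id: { $gte: 1, $lte: 100 } }).skip(w.skip).limit(w.limit);
```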

Find holes in sequential MongoDB field

I have a MongoDB collection with a sequential integer field.
I need to find the best approach to find "holes" in that sequence that occur due to record deletion.
For example if I have collection with these documents:
{ _id: "aab", seq: 1 ... }, { _id: "aac", seq: 2 ... }, { _id: "aad", seq: 4 ... }
The next insert I do, needs to be:
{ _id: "aae", seq: 3 ... }
Maybe you can create a MongoDB stored function to achieve this.
You can create a JavaScript stored function that records the sequence numbers of deleted documents in a collection such as "StoreDeletedSequenceNumCollection".
Whenever you perform a delete on the collection, make sure you call the stored function, which saves the sequence number of the document being deleted into "StoreDeletedSequenceNumCollection".
db.system.js.save({
  _id: "DeletedSeqNum",
  value: function (seqNum) {
    db.StoreDeletedSequenceNumCollection.insert({ _id: seqNum });
    return 1;
  }
});
When inserting a document, check whether any sequence numbers are present in "StoreDeletedSequenceNumCollection". If there are, take the MINIMUM sequence number from it and use that for the insertion; otherwise, if "StoreDeletedSequenceNumCollection" is empty, take the MAXIMUM sequence number from your actual collection and use MAXIMUM + 1 as the sequence number for the insertion.
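The allocation rule above can be stated as a small pure function. In production the two arguments would come from MongoDB queries (a min over StoreDeletedSequenceNumCollection and a max over the main collection); the function name is illustrative:

```javascript
// Sketch of the allocation rule: reuse the smallest recorded hole, otherwise
// extend the sequence past the current maximum.
function nextSeq(deletedSeqs, existingSeqs) {
  if (deletedSeqs.length > 0) return Math.min(...deletedSeqs); // fill a hole
  if (existingSeqs.length === 0) return 1;                     // empty collection
  return Math.max(...existingSeqs) + 1;                        // extend the sequence
}

// nextSeq([3], [1, 2, 4]) → 3  (the hole from the question's example)
// nextSeq([],  [1, 2, 3]) → 4
```

Note that without some locking or a unique index on the sequence field, two concurrent inserts could still both pick the same number; a unique index turns that race into a retryable duplicate-key error.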

Why is my MongoDb query inserting an embedded document on Update?

This is my MongoDB query:
db.events.update(
  { date: { $gte: ISODate("2014-09-01T00:00:00Z") } },
  { $set: { "artists.$.soundcloud_toggle": false } },
  { multi: true, upsert: false }
)
Apparently I cannot use "artists.$.soundcloud_toggle" to update all artist documents within the artists array:
"The $ operator can update the first array element that matches
multiple query criteria specified with the $elemMatch() operator.
http://docs.mongodb.org/manual/reference/operator/update/positional/"
I'm happy to run the query a number of times, changing the array index each time, in order to set the soundcloud_toggle property of every artist in every event that matches the query, e.g.
artists.0.soundcloud_toggle
artists.1.soundcloud_toggle
artists.2.soundcloud_toggle
artists.3.soundcloud_toggle
The problem is that when there is, say, only one artist document in the artists array and I run the query with "artists.1.soundcloud_toggle", it will insert an artist document into the artists array with a single property:
{
  "soundcloud_toggle": true
}
(I have declared "upsert: false", which should be false by default anyway.)
How do I stop the query from inserting a document and setting soundcloud_toggle:false when there is no existing document there? I only want it to update the property if an artist exists at the given artists array index.
If, like you said, you don't mind completing the operation with multiple queries, you can add an $exists condition to your filter.
E.g. in the 5th iteration, when updating index=4, add: "artists.4": {$exists: true}, like:
db.events.update(
  { date: { $gte: ISODate("2014-09-01T00:00:00Z") },
    "artists.4": { $exists: true } },
  { $set: { "artists.4.soundcloud_toggle": false } },
  { multi: true, upsert: false }
)
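The per-index queries can be generated in a loop instead of being written out by hand. A sketch; buildGuardedToggle and the maxArtists bound are illustrative assumptions:

```javascript
// Hypothetical helper: builds one guarded update per array index. Documents
// whose artists array is shorter than the index simply match nothing, so
// nothing is ever inserted.
function buildGuardedToggle(index, cutoff) {
  const filter = { date: { $gte: cutoff } };
  filter["artists." + index] = { $exists: true };
  const update = { $set: {} };
  update.$set["artists." + index + ".soundcloud_toggle"] = false;
  return { filter: filter, update: update };
}

// Usage (not executed here), assuming at most maxArtists entries per event:
// for (let i = 0; i < maxArtists; i++) {
//   const op = buildGuardedToggle(i, ISODate("2014-09-01T00:00:00Z"));
//   db.events.update(op.filter, op.update, { multi: true, upsert: false });
// }
```

On MongoDB 3.6+ the all-positional operator $[] makes the loop unnecessary: { $set: { "artists.$[].soundcloud_toggle": false } } updates every element of the array in one query.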

mongoose compound unique index does not work

I have following index:
PaymentSchema.index({ driver_id: 1, year: 1, month: 1 },{ unique: true });
So I want this collection to hold just one record for each distinct combination of the fields driver_id, year, and month. I want to update the collection with the upsert option:
var query = {
  driver_id: req.params.driver_id,
  year: req.params.year,
  month: req.params.month,
  amount: req.params.old_value
};
var update = {
  $set: {
    amount: req.params.new_value
  }
};
var options = {
  upsert: true
};
Payment.update(query, update, options, function (err, rows) {
  if (err) return next(err);
  res.json({});
});
So what I want is to update the document with the given unique key (driver_id + year + month) and the additional condition amount = .... If the query conditions match, the document is updated, and that works. If no document matches and no document with that unique key exists, it is created. But if a document with that unique key does exist and only the amount condition fails, a new document is created with the same unique key (driver_id + year + month). That is strange, because I declared a unique index on those three fields, yet I can see in the mongo shell two documents with identical values for them...
Solved: I had to restart mongod and delete the database (reindexing would probably have worked too).