MongoDB: count max concurrent sessions

I have a collection where each document has a start_time and an end_time denoting a session.
I need to count the max concurrent sessions in a given hour,
something like an aggregate and group by the hour.
What's the most efficient way to do this?

Your query to do so will be something like:
db.collection_name.aggregate(
    [ { $group : { _id : "$hour", no_of_sessions : { $sum: 1 } } } ]
)
Here "$hour" is your time field (assuming you are just storing the hour; if not, you can derive it from a date with { hour: { $hour: "$date" } }).
If hours span ranges like 1:01 to 2:59 then you will need to define _id as a compound key, something like: _id: { start_time: "$start_time", end_time: "$end_time" }.
To get a more specific answer, please give the exact case.
Cheers!

The problems with this type of aggregation come in that a "session" with a "start_time" and an "end_time" can "emit" hours that cross each grouped hour, so it is present in more than one hourly time period until the session ends. This can potentially span hours.
The other main problem here is that the session may indeed "start" before the time period you want to look at, or even end "after" the specified range, such as a day. Here you need to consider that you are generally looking for a "start_time" that is less than the end of the day you are looking at, and an "end_time" that is greater than the start of the day you are looking at.
Even so, there are other considerations, such as: does something have an "end_time" at all at the time it is analysed? Usually the best way to deal with this is to pick a reasonable "session life" value and factor that into the basic query selection.
So with a few variables at play, we basically come up with the "base criteria" for selection:
var startDay = new Date("2015-08-30"),
    endDay = new Date("2015-08-31"),
    oneHour = 1000*60*60,
    sessionTime = 3*oneHour;

var query = {
    "start_time": {
        "$gte": new Date(startDay.valueOf() - sessionTime),
        "$lt": endDay
    },
    "$or": [
        { "end_time": { "$exists": false } },
        { "end_time": null },
        { "end_time": {
            "$lt": new Date(endDay.valueOf() + sessionTime),
            "$gte": startDay
        }}
    ]
};
We are working with a three-hour window here, for example, so that dates found just outside of the current day are also included as "possible" output.
Next consider some data to work with as a sample:
{ "_id": 1, "start_time": new Date("2015-08-29T23:30"), "end_time": new Date("2015-08-29T23:45") },
{ "_id": 2, "start_time": new Date("2015-08-29T23:30"), "end_time": new Date("2015-08-30T00:45") },
{ "_id": 3, "start_time": new Date("2015-08-30T00:30"), "end_time": new Date("2015-08-30T01:30") },
{ "_id": 4, "start_time": new Date("2015-08-30T01:30"), "end_time": new Date("2015-08-30T01:45") },
{ "_id": 5, "start_time": new Date("2015-08-30T01:30"), "end_time": new Date("2015-08-30T03:45") },
{ "_id": 6, "start_time": new Date("2015-08-30T01:45"), "end_time": new Date("2015-08-30T02:30") },
{ "_id": 7, "start_time": new Date("2015-08-30T23:30"), "end_time": null },
{ "_id": 8, "start_time": new Date("2015-08-30T23:30") },
{ "_id": 9, "start_time": new Date("2015-08-31T01:30") }
If we look at the criteria for the date range and the general query selection, you can expect that records 2 through 8 would be considered in the day we are looking at, as they either "ended" within the day or "started" within the day. The "session window" exists mainly because some data does not have an "end_time", being either null or not present. That "window" helps filter out other irrelevant data that may be from more recent dates than what is being looked at, and keeps the result size reasonable.
A quick visual scan should tell you that the counts per hour should be this:
0: 2
1: 4
2: 2
3: 1
23: 2
The actual process is better handled with mapReduce than any other aggregation medium. This is because the conditional logic required allows a "single document" to be "emitted" as a value valid for multiple periods, so there is an inherent "looping" required here:
db.sessions.mapReduce(
    function() {
        // Round start and end down to the hour, clamped to the day boundaries
        var oneHour = 1000*60*60,
            start = (this.start_time > startDay)
                ? ( this.start_time.valueOf() - ( this.start_time.valueOf() % oneHour ))
                : startDay,
            end = (this.hasOwnProperty("end_time") && this.end_time != null)
                ? ( this.end_time.valueOf() - ( this.end_time.valueOf() % oneHour ))
                : endDay;

        // Uncomment to emit blank values for each hour on the first iteration
        /*
        if ( count == 0 ) {
            for ( var x = 1; x <= 24; x++ ) {
                emit(x,0);
            }
            count++;
        }
        */

        // Emit a count of 1 for every hour the session spans within the day
        for ( var y = start; y <= end && (y - startDay)/oneHour < 24; y += oneHour ) {
            emit(
                (y - startDay == 0) ? 0 : ((y - startDay)/oneHour),
                1
            );
        }
    },
    function(key,values) {
        return Array.sum(values);
    },
    {
        "out": { "inline": 1 },
        "scope": {
            "startDay": startDay.valueOf(),
            "endDay": endDay.valueOf(),
            "count": 0
        },
        "query": query
    }
)
Combined with the variables set earlier, this will correctly count how many sessions are running within each hour:
"results" : [
{
"_id" : 0,
"value" : 2
},
{
"_id" : 1,
"value" : 4
},
{
"_id" : 2,
"value" : 2
},
{
"_id" : 3,
"value" : 1
},
{
"_id" : 23,
"value" : 2
}
],
The basic actions for each record are:
Round the start and end times down to the hour
Replace the start with startDay when the session started before the current day, and the end with endDay when the end_time is not present
From the start time, loop in one-hour increments until the end time is reached or a one-day difference is reached. Each emission is a "count" for the hour's offset from startDay.
Reduce to sum the totals for each hour
There is an optional section which will also emit 0 values for every hour of the day, so that if no data is recorded then at least there is output for that hour as 0.

Related

MongoDB query: how to select the longest period of time of a matched value

I have a mongo database with many records in the format of:
{
    id: { $_id },
    date: { $date: YYYY-MM-DDThh:mm:ssZ },
    reading: X.XX
}
where date is a timestamp in mongo and reading is a float (id is just the unique identifier for the data point).
I would like to be able to count the longest period of time when the reading was a certain value (let's say 0.00 for ease of use) and return the start and end points of this time period. If there were multiple time periods of the same length, I would like them all returned.
Ultimately, for example, I would like to be able to say
"The longest time period the reading is 0.00 and 1.25 hours
between
2000-01-01T00:00:00 - 2000-01-01T01:15:00,
2000-06-01T02:00:00 - 2000-06-01T03:15:00,
2000-11-11T20:00:00 - 2000-11-11T21:15:00 ."
For my mongo aggregation query I am thinking of doing this:
get the timeframe I am interested in (e.g. 2000-01-01 to 2001-01-01)
sort the data by date descending
somehow select the longest run where the reading is 0.00.
This is the query I have so far:
[
    { $match: {
        date: { $gte: ISODate("2000-01-01T00:00:00.0Z"), $lt: ISODate("2001-01-01T00:00:00.0Z") }
    }},
    { "$sort": { "date": -1 } },
    { "$group": {
        "_id": null,
        "Maximum": { "$max": { "max": "$reading", "date": "$date" } },
        "Longest": { XXX: { start_dates: [], end_dates: [] } }
    }},
    { "$project": {
        "_id": 0,
        "max": "$Maximum",
        "longest": "$Longest"
    }}
]
I do not know how to select the longest run. How would you do this?
(You will notice I am also interested in the maximum reading within the time period and the dates on which that maximum reading occurs. At the moment I am only recording the latest date/time it occurs, but would eventually like it to record all the dates/times the maximum value occurs on.)

MongoDB time is stored as a string, how to filter on the parameter?

There is a data set where, unfortunately, time is not stored in ISO datetime format but as a string, something like:
{ "time": "2015-08-28 09:24:30" }
Is there a way to filter records based on this time variable?
Changing all the data to timestamps would be the right way, but is there a way to do it without that?
So the "real" answer here is "don't do it", as converting your "strings" to a "BSON date" is a very trivial process, best done in the mongodb shell as a "one off" operation:
var bulk = db.collection.initializeOrderedBulkOp(),
    count = 0;

// $type 2 selects values stored as strings
db.collection.find({ "time": { "$type": 2 } }).forEach(function(doc) {
    bulk.find({ "_id": doc._id }).updateOne({
        "$set": { "time": new Date( doc.time.replace(" ","T") ) }
    });
    count++;

    // flush once per 1000 queued operations
    if ( count % 1000 == 0 ) {
        bulk.execute();
        bulk = db.collection.initializeOrderedBulkOp();
    }
});

if ( count % 1000 != 0 )
    bulk.execute();
Of course you would adjust for "timezone" as required, but it is a fairly simple case anyway.
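For instance (a hedged sketch; the "+10:00" offset is just an illustrative value for your own locale), you can append a UTC offset to the ISO string so the resulting Date is interpreted in the intended timezone:

// interpret the stored string as UTC+10 rather than the shell's default
var asDate = new Date( doc.time.replace(" ","T") + "+10:00" );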
And then all "strings" are now BSON dates that you can query for a "day" for example with:
db.collection.find({
    "time": { "$gte": new Date("2015-08-28"), "$lt": new Date("2015-08-29") }
})
And you can do so with relative ease, no matter what your language is, as long as the Date object passed in is supported for serialization via the driver.
But of course, as long as your strings sort "lexically" (which the "yyyy-mm-dd hh:mm:ss" format does), then you can actually use a "range" with "string values" instead:
db.collection.find({
    "time": {
        "$gte": "2015-08-28 00:00:00",
        "$lt": "2015-08-29 00:00:00"
    }
})
And it works, but it just is not "wise".
Change your "strings" to BSON Date. It takes less storage and there is no "mucking around" with working the data into a real "Date" for your language API when you actually need it as such. The work is already done.

Moving averages with MongoDB's aggregation framework?

If you have 50 years of daily temperature data, for example, how would you calculate moving averages using 3-month intervals for that time period? Can you do that with one query, or would you have to use multiple queries?
Example Data
01/01/2014 = 40 degrees
12/31/2013 = 38 degrees
12/30/2013 = 29 degrees
12/29/2013 = 31 degrees
12/28/2013 = 34 degrees
12/27/2013 = 36 degrees
12/26/2013 = 38 degrees
.....
The agg framework now has $map, $reduce and $range built in, so array processing is much more straightforward. Below is an example of calculating a moving average on a set of data where you wish to filter by some predicate. The basic setup is that each doc contains filterable criteria and a value, e.g.
{sym: "A", d: ISODate("2018-01-01"), val: 10}
{sym: "A", d: ISODate("2018-01-02"), val: 30}
Here it is:
// This controls the number of observations in the moving average:
days = 4;

c = db.foo.aggregate([
    // Filter down to what you want. This can be anything or nothing at all.
    {$match: {"sym": "S1"}}

    // Ensure dates are going earliest to latest:
    ,{$sort: {d: 1}}

    // Turn docs into a single doc with a big vector of observations, e.g.
    //   {sym: "A", d: d1, val: 10}
    //   {sym: "A", d: d2, val: 11}
    //   {sym: "A", d: d3, val: 13}
    // becomes
    //   {_id: "A", prx: [ {v:10,d:d1}, {v:11,d:d2}, {v:13,d:d3} ] }
    //
    // This will set us up to take advantage of array processing functions!
    ,{$group: {_id: "$sym", prx: {$push: {v: "$val", d: "$d"}} }}

    // Nice additional info. Note use of dot notation on the array to get
    // just the scalar date at elem 0, not the object {v:val, d:date}:
    ,{$addFields: {numDays: days, startDate: {$arrayElemAt: [ "$prx.d", 0 ]}} }

    // The Juice! Assume we have a variable "days" which is the desired number
    // of days of moving average.
    // The complex expression below does this in python pseudocode:
    //
    //   for z in range(0, size of value vector - # of days in moving avg):
    //       seg = vector[z:z+days]
    //       values = seg.v
    //       dates = seg.d
    //       for v in seg:
    //           tot += v
    //       avg = tot/len(seg)
    //
    // Note that it is possible to overrun the segment at the end of the "walk"
    // along the vector, i.e. not enough date-values. So we only run the
    // vector to (len(vector) - (days-1)).
    // Also, for extra info, we add the number of days *actually* used in the
    // calculation AND the as-of date, which is the tail date of the segment!
    //
    // Again we take advantage of dot notation to turn the vector of
    // objects {v:val, d:date} into two vectors of simple scalars [v1,v2,...]
    // and [d1,d2,...] with $prx.v and $prx.d
    //
    ,{$addFields: {"prx": {$map: {
        input: {$range: [0, {$subtract: [{$size: "$prx"}, (days-1)]}]},
        as: "z",
        in: {
            avg: {$avg: {$slice: [ "$prx.v", "$$z", days ] } },
            d: {$arrayElemAt: [ "$prx.d", {$add: ["$$z", (days-1)] } ]}
        }
    }}
    }}
]);
This might produce the following output:
{
    "_id" : "S1",
    "prx" : [
        { "avg" : 11.738793632512115, "d" : ISODate("2018-09-05T16:10:30.259Z") },
        { "avg" : 12.420766702631376, "d" : ISODate("2018-09-06T16:10:30.259Z") },
        ...
    ],
    "numDays" : 4,
    "startDate" : ISODate("2018-09-02T16:10:30.259Z")
}
The way I would tend to do this in MongoDB is maintain a running sum of the past 90 days in the document for each day's value, e.g.
{"day": 1, "tempMax": 40, "tempMaxSum90": 2232}
{"day": 2, "tempMax": 38, "tempMaxSum90": 2230}
{"day": 3, "tempMax": 36, "tempMaxSum90": 2231}
{"day": 4, "tempMax": 37, "tempMaxSum90": 2233}
Whenever a new data point needs to be added to the collection, instead of reading and summing 90 values you can efficiently calculate the next sum with two simple queries, one addition and one subtraction, like this (pseudo-code):
tempMaxSum90(day) = tempMaxSum90(day-1) + tempMax(day) - tempMax(day-90)
The 90-day moving average for each day is then just the 90-day sum divided by 90.
If you wanted to also offer moving averages over different time-scales (e.g. 1 week, 30 days, 90 days, 1 year), you could simply maintain an array of sums in each document instead of a single sum, one sum for each time-scale required.
This approach costs additional storage space and additional processing when inserting new data, but it is appropriate in most time-series charting scenarios where new data is collected relatively slowly and fast retrieval is desirable.
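A minimal sketch of that maintenance step in the mongo shell, assuming the field names above (the temps collection name and insertDay helper are illustrative, not part of any API):

// Compute and store the new running sum for a day
function insertDay(day, tempMax) {
    var prev = db.temps.findOne({ "day": day - 1 }),      // yesterday's running sum
        expiring = db.temps.findOne({ "day": day - 90 }); // value dropping out of the 90-day window

    var sum = (prev ? prev.tempMaxSum90 : 0)
        + tempMax
        - (expiring ? expiring.tempMax : 0);

    db.temps.insertOne({ "day": day, "tempMax": tempMax, "tempMaxSum90": sum });
}

// The moving average for that day is then just:
// db.temps.findOne({ "day": day }).tempMaxSum90 / 90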
The accepted answer helped me, but it took a while for me to understand how it worked, so I thought I'd explain my method to help others out. Particularly in your context I think my answer will help.
This works best on smaller datasets.
First group the data by day, then append all days in an array to each day:
{
    "$sort": { "Date": -1 }
},
{
    "$group": {
        "_id": {
            "Day": "$Date",
            "Temperature": "$Temperature"
        },
        "Previous Values": {
            "$push": {
                "Date": "$Date",
                "Temperature": "$Temperature"
            }
        }
    }
},
This will leave you with a record that looks like this (it'll be ordered correctly):
{"_id.Day": "2017-02-01",
"Temperature": 40,
"Previous Values": [
{"Day": "2017-03-01", "Temperature": 20},
{"Day": "2017-02-11", "Temperature": 22},
{"Day": "2017-01-18", "Temperature": 03},
...
]},
Now that each day has all days appended to it, we need to remove the items from the Previous Values array that are more recent than this record's _id.Day field, as the moving average is backward looking:
{
    "$project": {
        "_id": 0,
        "Date": "$_id.Day",
        "Temperature": "$_id.Temperature",
        "Previous Values": 1
    }
},
{
    "$project": {
        "_id": 0,
        "Date": 1,
        "Temperature": 1,
        "Previous Values": {
            "$filter": {
                "input": "$Previous Values",
                "as": "pv",
                "cond": {
                    "$lte": ["$$pv.Date", "$Date"]
                }
            }
        }
    }
},
Each item in the Previous Values array will only contain the dates that are less than or equal to the date for each record:
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": [
{"Day": "2017-01-31", "Temperature": 33},
{"Day": "2017-01-30", "Temperature": 36},
{"Day": "2017-01-29", "Temperature": 33},
{"Day": "2017-01-28", "Temperature": 32},
...
]}
Now we can pick our average window size. Since the data is by day, for a week we'd take the first 7 records of the array; for monthly, 30; for 3-monthly, 90 days:
{
    "$project": {
        "_id": 0,
        "Date": 1,
        "Temperature": 1,
        "Previous Values": {
            "$slice": ["$Previous Values", 0, 90]
        }
    }
},
To average the previous temperatures we unwind the Previous Values array and then group by the date field (the stage itself is shown after the example below). The unwind operation does this:
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-31",
"Temperature": 33}
},
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-30",
"Temperature": 36}
},
{"Day": "2017-02-01",
"Temperature": 40,
"Previous Values": {
"Day": "2017-01-29",
"Temperature": 33}
},
...
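The stage that produces those documents is simply:

{ "$unwind": "$Previous Values" },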
In those unwound documents the Day field is the same, but we now have a document for each of the previous dates from the Previous Values array. Now we can group back on day and average Previous Values.Temperature to get the moving average:
{"$group": {
"_id": {
"Day": "$Date",
"Temperature": "$Temperature"
},
"3 Month Moving Average": {
"$avg": "$Previous Values.Temperature"
}
}
}
That's it! I know that joining every record to every record isn't ideal, but this works fine on smaller datasets.
Starting in Mongo 5, this is a perfect use case for the new $setWindowFields aggregation stage.
Note that I consider the rolling average to have a 3-day window for simplicity (today and the 2 previous days):
// { date: ISODate("2013-12-26"), temp: 38 }
// { date: ISODate("2013-12-27"), temp: 36 }
// { date: ISODate("2013-12-28"), temp: 34 }
// { date: ISODate("2013-12-29"), temp: 31 }
// { date: ISODate("2013-12-30"), temp: 29 }
// { date: ISODate("2013-12-31"), temp: 38 }
// { date: ISODate("2014-01-01"), temp: 40 }
db.collection.aggregate([
    { $setWindowFields: {
        sortBy: { date: 1 },
        output: {
            movingAverage: {
                $avg: "$temp",
                window: { range: [-2, "current"], unit: "day" }
            }
        }
    }}
])
// { date: ISODate("2013-12-26"), temp: 38, movingAverage: 38 }
// { date: ISODate("2013-12-27"), temp: 36, movingAverage: 37 }
// { date: ISODate("2013-12-28"), temp: 34, movingAverage: 36 }
// { date: ISODate("2013-12-29"), temp: 31, movingAverage: 33.67 }
// { date: ISODate("2013-12-30"), temp: 29, movingAverage: 31.33 }
// { date: ISODate("2013-12-31"), temp: 38, movingAverage: 32.67 }
// { date: ISODate("2014-01-01"), temp: 40, movingAverage: 35.67 }
This:
sorts documents chronologically: sortBy: { date: 1 }
creates for each document a span of documents (the window) that includes the "current" document and all previous documents within a "2"-"day" window
and within that window, averages temperatures: $avg: "$temp"
I think I may have an answer to my own question. Map-reduce would do it. First use emit to map each document to the neighbors it should be averaged with, then use reduce to average each array... and that new array of averages should be the moving-averages plot over time, since its id would be the new date interval that you care about.
I guess I needed to understand map-reduce better ...
:)
For instance... if we wanted to do it in memory (later we can create collections)
GIST https://gist.github.com/mrgcohen/3f67c597a397132c46f7
Does that look right?
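A rough sketch of that idea (a hedged illustration, not the linked gist's code), assuming daily documents like { date: ISODate("2013-12-26"), temp: 38 } and a trailing 3-day window; each reading is emitted against every day whose window it falls into, and the average is computed in a finalize step so the reduce stays re-reducible:

db.temps.mapReduce(
    function() {
        var oneDay = 1000*60*60*24;
        // this reading belongs to the window of its own day and the
        // two following days, so emit it against all three day-keys
        for (var i = 0; i < 3; i++) {
            emit(new Date(this.date.valueOf() + i*oneDay), { sum: this.temp, count: 1 });
        }
    },
    function(key, values) {
        // keep sums and counts separate so reduce can run repeatedly
        var out = { sum: 0, count: 0 };
        values.forEach(function(v) {
            out.sum += v.sum;
            out.count += v.count;
        });
        return out;
    },
    {
        "finalize": function(key, value) {
            return value.sum / value.count;
        },
        "out": { "inline": 1 }
    }
)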
I don't believe the aggregation framework can do this for multiple dates in the current version (2.6), or at least not without some serious gymnastics. The reason is that the aggregation pipeline processes one document at a time and one document only, so it would be necessary to somehow create a document for each day that contains the previous 3 months' worth of relevant information. This would then be a $group stage that calculates the average, meaning that the prior stage would have to produce about 90 copies of each day's record, with some distinguishing key that can be used for the $group.
So I don't see a way to do this for more than one date at a time in a single aggregation. I'd be happy to be wrong and have to edit/remove this answer if somebody finds a way to do it, even if it's so complicated it's not practical. A PostgreSQL PARTITION-type function would do the job here; maybe that function will be added someday.

MongoDB - Querying between a time range of hours

I have a MongoDB datastore set up with location data stored like this:
{
    "_id" : ObjectId("51d3e161ce87bb000792dc8d"),
    "datetime_recorded" : ISODate("2013-07-03T05:35:13Z"),
    "loc" : {
        "coordinates" : [ 0.297716, 18.050614 ],
        "type" : "Point"
    },
    "vid" : "11111-22222-33333-44444"
}
I'd like to be able to perform a query similar to the date range example, but instead on a time range, i.e. retrieve all points recorded between 12AM and 4PM (it can be done with 1200 and 1600 in 24-hour time as well).
e.g.
With points:
"datetime_recorded" : ISODate("2013-05-01T12:35:13Z"),
"datetime_recorded" : ISODate("2013-06-20T05:35:13Z"),
"datetime_recorded" : ISODate("2013-01-17T07:35:13Z"),
"datetime_recorded" : ISODate("2013-04-03T15:35:13Z"),
a query
db.points.find({'datetime_recorded': {
$gte: Date(1200 hours),
$lt: Date(1600 hours)}
});
would yield only the first and last point.
Is this possible? Or would I have to do it for every day?
Well, the best way to solve this is to store the minutes separately as well. But you can get around it with the aggregation framework, although that is not going to be very fast:
db.so.aggregate( [
    { $project: {
        loc: 1,
        vid: 1,
        datetime_recorded: 1,
        minutes: { $add: [
            { $multiply: [ { $hour: '$datetime_recorded' }, 60 ] },
            { $minute: '$datetime_recorded' }
        ] }
    } },
    { $match: { 'minutes' : { $gte : 12 * 60, $lt : 16 * 60 } } }
] );
In the first step, $project, we calculate the minutes from hour * 60 + min, which we then match against in the second step, $match.
Adding an answer since I disagree with the other answers: even though there are great things you can do with the aggregation framework, this really is not an optimal way to perform this type of query.
If your identified application usage pattern is that you rely on querying for "hours" or other times of the day without wanting to look at the "date" part, then you are far better off storing that as a numeric value in the document. Something like "milliseconds from start of day" would be granular enough for as many purposes as a BSON Date, but of course gives better performance without the need to compute for every document.
Set Up
This does require some set-up in that you need to add the new fields to your existing documents and make sure you add these on all new documents within your code. A simple conversion process might be:
MongoDB 4.2 and upwards
This can actually be done in a single request due to aggregation operations being allowed in "update" statements now.
db.collection.updateMany(
    {},
    [{ "$set": {
        "timeOfDay": {
            "$mod": [
                { "$toLong": "$datetime_recorded" },
                1000 * 60 * 60 * 24
            ]
        }
    }}]
)
Older MongoDB
var batch = [];
db.collection.find({ "timeOfDay": { "$exists": false } }).forEach(doc => {
    batch.push({
        "updateOne": {
            "filter": { "_id": doc._id },
            "update": {
                "$set": {
                    "timeOfDay": doc.datetime_recorded.valueOf() % (60 * 60 * 24 * 1000)
                }
            }
        }
    });

    // write once only per reasonable batch size
    if ( batch.length >= 1000 ) {
        db.collection.bulkWrite(batch);
        batch = [];
    }
})

if ( batch.length > 0 ) {
    db.collection.bulkWrite(batch);
    batch = [];
}
If you can afford to write to a new collection, then looping and rewriting would not be required:
db.collection.aggregate([
    { "$addFields": {
        "timeOfDay": {
            "$mod": [
                { "$subtract": [ "$datetime_recorded", new Date(0) ] },
                1000 * 60 * 60 * 24
            ]
        }
    }},
    { "$out": "newcollection" }
])
Or with MongoDB 4.0 and upwards:
db.collection.aggregate([
    { "$addFields": {
        "timeOfDay": {
            "$mod": [
                { "$toLong": "$datetime_recorded" },
                1000 * 60 * 60 * 24
            ]
        }
    }},
    { "$out": "newcollection" }
])
All using the same basic conversion of:
1000 milliseconds in a second
60 seconds in a minute
60 minutes in an hour
24 hours in a day
Taking the modulo of the numeric milliseconds since epoch (which is the value actually stored internally as a BSON Date) by the number of milliseconds in a day simply extracts the current milliseconds within the day.
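A quick worked example, using the timestamp from the sample document above:

// 2013-07-03T05:35:13Z as milliseconds since epoch
var ms = new Date("2013-07-03T05:35:13Z").valueOf();   // 1372829713000

// modulo one day of milliseconds leaves just the time-of-day part
var timeOfDay = ms % (1000 * 60 * 60 * 24);            // 20113000

// which is 5 hours, 35 minutes, 13 seconds:
// 5*3600000 + 35*60000 + 13*1000 == 20113000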
Query
Querying is then really simple, and as per the question example:
db.collection.find({
    "timeOfDay": {
        "$gte": 12 * 60 * 60 * 1000,
        "$lt": 16 * 60 * 60 * 1000
    }
})
Of course this uses the same time-scale conversion from hours into milliseconds to match the stored format. But just like before, you can make this whatever scale you actually need.
Most importantly, as real document properties which don't rely on computation at run-time, you can place an index on this:
db.collection.createIndex({ "timeOfDay": 1 })
So not only is this negating run-time overhead for calculating, but also with an index you can avoid collection scans as outlined on the linked page on indexing for MongoDB.
For optimal performance you never want to calculate such things at query time: at any real-world scale it simply takes an order of magnitude longer to process all documents in the collection just to work out which ones you want than it does to reference an index and fetch only those documents.
The aggregation framework may just be able to help you rewrite the documents here, but it really should not be used as a production system method of returning such data. Store the times separately.
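To keep the field current on writes, apply the same conversion in your application code; a minimal sketch (the insertReading wrapper is just an illustrative helper, not a driver API):

function insertReading(datetime, loc, vid) {
    db.collection.insertOne({
        "datetime_recorded": datetime,
        // store the time-of-day portion alongside the full timestamp
        "timeOfDay": datetime.valueOf() % (1000 * 60 * 60 * 24),
        "loc": loc,
        "vid": vid
    });
}

insertReading(
    new Date("2013-07-03T05:35:13Z"),
    { "type": "Point", "coordinates": [ 0.297716, 18.050614 ] },
    "11111-22222-33333-44444"
);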

How to return the index of an array item in MongoDB?

The document is like below.
{
    "title": "Book1",
    "dailyactiviescores": [
        { "date": "2013-06-05", "score": 10 },
        { "date": "2013-06-06", "score": 21 }
    ]
}
The daily active score is intended to increase once the book is opened by a reader. The first solution that comes to mind is to use "$" to find whether the target date has a score or not, and deal with it:
err = bookCollection.Update(
    {"title": "Book1", "dailyactivescore.date": "2013-06-06"},
    {"$inc": {"dailyactivescore.$.score": 1}})
if err == ErrNotFound {
    bookCollection.Update({"title": "Book1"}, {"$push": ...})
}
But I cannot help wondering: is there any way to return the index of an item inside an array? If so, I could use one query to do the job rather than two, like this:
index = bookCollection.Find(
    {"title": "Book1", "dailyactivescore.date": "2013-06-06"}).Select({"$index"})
if index != -1 {
    incTarget = FormatString("dailyactivescore.%d.score", index)
    bookCollection.Update(..., {"$inc": {incTarget: 1}})
} else {
    // push here
}
Incrementing a field that's not present isn't the issue, as doing $inc: 1 on it will just create it and set it to 1 post-increment. The issue is when you don't have an array item corresponding to the date you want to increment.
There are several possible solutions here (that don't involve multiple steps to increment).
One is to pre-create all the dates in the array elements with score: 0, like so:
{
    "title": "Book1",
    "dailyactiviescores": [
        { "date": "2013-06-01", "score": 0 },
        { "date": "2013-06-02", "score": 0 },
        { "date": "2013-06-03", "score": 0 },
        { "date": "2013-06-04", "score": 0 },
        { "date": "2013-06-05", "score": 0 },
        { "date": "2013-06-06", "score": 0 },
        { etc ... }
    ]
}
But how far into the future do you go? So one option here is to "bucket": for example, have an activities document "per month", and before the start of each month have a job that creates the new documents for the next month (sketched below). Slightly yucky. But it'll work.
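A minimal sketch of that pre-creation job (the monthly_scores collection name and createMonthBucket helper are illustrative assumptions):

// Run before the start of each month to pre-create next month's bucket
function createMonthBucket(title, year, month) {
    // JS Date months are 0-based, so day 0 of "month" is the last day
    // of the 1-based month we were given
    var daysInMonth = new Date(year, month, 0).getDate(),
        days = [];

    for (var d = 1; d <= daysInMonth; d++) {
        days.push({
            "date": year + "-" + ("0" + month).slice(-2) + "-" + ("0" + d).slice(-2),
            "score": 0
        });
    }

    db.monthly_scores.insertOne({
        "title": title,
        "month": year + "-" + ("0" + month).slice(-2),
        "dailyactiviescores": days
    });
}

createMonthBucket("Book1", 2013, 7);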
Other options involve slight changes in schema.
You can use a collection with book, date, activity_scores. Then you can use a simple upsert to increment a score:
db.books.update({title:"Book1", date:"2013-06-02", {$inc:{score:1}}, {upsert:true})
This will increment the score or insert a new record with score: 1 for this book and date, and your collection will look like this:
{ "title": "Book1", "date": "2013-06-01", "score": 10 },
{ "title": "Book1", "date": "2013-06-02", "score": 1 },
...
Depending on how much you simplified your example from your real use case, this might work well.
Another option is to stick with the array but switch to using the date string as a key that you increment:
Schema:
{
    "title": "Book1",
    "dailyactiviescores": {
        "2013-06-01": 10,
        "2013-06-02": 8
    }
}
Note it's now a subdocument and not an array, and you can do:
db.books.update({title: "Book1"}, {$inc: {"dailyactiviescores.2013-06-03": 1}})
and it will add the new date into the subdocument and increment it, resulting in:
{
    "title": "Book1",
    "dailyactiviescores": {
        "2013-06-01": 10,
        "2013-06-02": 8,
        "2013-06-03": 1
    }
}
Note it's now harder to "add up" the scores for the book, so you can atomically also update a "subtotal" in the same update statement (shown below), whether it's for all time or just for the month.
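For instance, a sketch assuming a subtotal field kept alongside the scores subdocument:

// one atomic statement bumps both the day's score and the running subtotal
db.books.update(
    { "title": "Book1" },
    { "$inc": { "dailyactiviescores.2013-06-03": 1, "subtotal": 1 } }
)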
But here it's once again problematic to keep adding days to this subdocument: what happens when you're still around in a few years and these book documents have grown hugely?
I suspect that unless you will only be keeping activity scores for the last N days (which you can do with the capped array feature in 2.4), it will be simpler to have a separate collection for book-activity-score tracking, where each book-day is a separate document, than to embed the scores for each day into the book documents in a collection of books.
According to the docs:
The $inc operator increments a value of a field by a specified amount.
If the field does not exist, $inc sets the field to the specified
amount.
So, if there is no score field in the array item, $inc will set it to 1. In your case, given this document:
{
    "title": "Book1",
    "dailyactiviescores": [
        { "date": "2013-06-05", "score": 10 },
        { "date": "2013-06-06" }
    ]
}
bookCollection.Update(
    {"title": "Book1", "dailyactivescore.date": "2013-06-06"},
    {"$inc": {"dailyactivescore.$.score": 1}})
will result in:
{
    "title": "Book1",
    "dailyactiviescores": [
        { "date": "2013-06-05", "score": 10 },
        { "date": "2013-06-06", "score": 1 }
    ]
}
Hope that helps.