I have a collection in MongoDB. The model is:
{
  currency: String,
  price: Number,
  time: Date
}
Documents are recorded to that collection any time the official rate for a currency changes.
I am given a timestamp, and I need to fetch the rates for all available currencies as of that time. So first I need to filter all documents whose time is $lte the required timestamp, then, for each currency, I need to fetch only the document with the maximum timestamp.
After reading your requirement, I think you want the maximum price and time per currency; you can use the $max operator:
db.collection.aggregate([
  {
    $group: {
      _id: "$currency",
      time: { $max: "$time" },
      price: { $max: "$price" }
    }
  }
])
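Note that $max is applied to each field independently, so this returns the highest price ever recorded for a currency, not the price in effect at the latest time. If you need the rate as of the given timestamp (as the question asks), here is a sketch of the usual sort-then-$first approach; requiredTime is a placeholder for your input timestamp:

// Placeholder for the given timestamp
var requiredTime = ISODate("2020-01-01T00:00:00Z");

db.collection.aggregate([
  // Keep only rates recorded at or before the given time
  { $match: { time: { $lte: requiredTime } } },
  // Newest first, so $first picks the latest rate per currency
  { $sort: { time: -1 } },
  { $group: {
      _id: "$currency",
      time: { $first: "$time" },
      price: { $first: "$price" }
  } }
])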
You can use the Mongo aggregate function to do so. Please find the example below:
db.<collection_name>.aggregate([
// First sort all the docs by time in descending
{$sort: {time: -1}},
// Take the first 3 of those
{$limit: 3}
])
Hope this helps!
I currently have a database with about 270 000 000 documents. They look like this:
[{
'location': 'Berlin',
'product': 4531,
'createdAt': ISODate(...),
'value': 3523,
'minOffer': 3215,
'quantity': 7812
},{
'location': 'London',
'product': 1231,
'createdAt': ISODate(...),
'value': 53523,
'minOffer': 44215,
'quantity': 2812
}]
The database currently holds a bit over one month of data and has ~170 locations (in EU and US) with ~8000 products. These documents represent timesteps, so there are about ~12-16 entries per day, per product per location (at most 1 per hour though).
My goal is to retrieve all timesteps of a product in a given location for the last 7 days. For a single location this query works reasonably fast (~150 ms) with the index { product: 1, location: 1, createdAt: -1 }.
However, I also need these timesteps not just for a single location, but an entire region (so about 85 locations). I'm currently doing that with this aggregation, which groups all the entries per hour and averages the desired values:
this.db.collection('...').aggregate([
  { $match: {
      location: { $in: [array of ~85 locations] },
      product: productId,
      createdAt: { $gte: new Date(Date.now() - sevenDaysAgo) }
  } },
  { $group: {
      _id: {
        $toDate: {
          $concat: [
            { $toString: { $year: '$createdAt' } },
            '-',
            { $toString: { $month: '$createdAt' } },
            '-',
            { $toString: { $dayOfMonth: '$createdAt' } },
            ' ',
            { $toString: { $hour: '$createdAt' } },
            ':00'
          ]
        }
      },
      value: { $avg: '$value' },
      minOffer: { $avg: '$minOffer' },
      quantity: { $avg: '$quantity' }
  } }
]).sort({ _id: 1 }).toArray()
However, this is really really slow, even with the index { product: 1, createdAt: -1, location: 1 } (~40 secs). Is there any way to speed up this aggregation so it goes down to a few seconds at most? Is this even possible, or should I think about using something else?
I've thought about saving these aggregations in another database and just retrieving those and aggregating the rest; however, this is really awkward for the first users on the site, who would have to sit through a 40-second wait.
Here are some ideas that can benefit querying and performance. Whether they all work together is a matter of trials and testing. Also, note that changing how the data is stored and adding new indexes means changes to the application, i.e., how data is captured, and the other queries on the same data need to be carefully verified so that they are not adversely affected.
(A) Storing a Day's Details in a Document:
Store (embed) a day's data within the same document as an array of sub-documents. Each sub-document represents an hour's entry.
From:
{
'location': 'London',
'product': 1231,
'createdAt': ISODate(...),
'value': 53523,
'minOffer': 44215,
'quantity': 2812
}
to:
{
location: 'London',
product: 1231,
createdAt: ISODate(...),
details: [ { value: 53523, minOffer: 44215, quantity: 2812 }, ... ]
}
This means about ten to sixteen entries per document. Adding data for an entry becomes a push into the details array, instead of inserting a new document as in the present application. If the hour's info (time) is required, it can also be stored as part of the details sub-document; that depends entirely upon your application's needs.
The benefits of this design:
- The number of documents to maintain and query is reduced (from roughly ten or more documents per product per day to one).
- In the query, the group stage goes away; it becomes just a project stage. Note that $project supports the accumulators $avg and $sum.
The following stage will create the sums and averages for the day (i.e., for a document). Note that with the embedded design, the fields are referenced through the details array:
{
  $project: {
    value: { $avg: '$details.value' },
    minOffer: { $avg: '$details.minOffer' },
    quantity: { $avg: '$details.quantity' }
  }
}
Note that the increase in document size is not much, given the amount of detail stored per day. A full pipeline sketch follows below.
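Putting (A) together, a sketch of the seven-day query against the embedded design; productId, the locations array, and sevenDaysAgo are the same placeholders used in the question:

db.collection.aggregate([
  // One document per product/location/day, so far fewer documents are scanned
  { $match: {
      location: { $in: locations },   // the ~85 locations
      product: productId,
      createdAt: { $gte: new Date(Date.now() - sevenDaysAgo) }
  } },
  // No $group stage: average the embedded hourly entries within each document
  { $project: {
      location: 1,
      createdAt: 1,
      value: { $avg: '$details.value' },
      minOffer: { $avg: '$details.minOffer' },
      quantity: { $avg: '$details.quantity' }
  } },
  { $sort: { createdAt: 1 } }
])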
(B) Querying by Region:
The present query matches multiple locations (a region) with this filter: { location: { $in: [array of ~85 locations] } }. This filter effectively says: location: location-1, -or- location: location-2, -or- ..., location: location-85. Adding a new field, region, allows filtering with a single value match.
The query by region will change to:
{
$match: {
region: regionId,
product: productId,
createdAt: { $gte: new Date(Date.now() - sevenDaysAgo) }
}
}
The regionId variable is to be supplied to match with the region field.
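A one-off backfill of the new field might look like the following; this is a sketch, where euLocations and the 'EU' value are hypothetical, and you would repeat the update per region:

// Hypothetical list of the EU locations; repeat similarly for other regions
var euLocations = ['Berlin', 'London' /* , ... */];

db.collection.updateMany(
  { location: { $in: euLocations } },
  { $set: { region: 'EU' } }
)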
Note that both queries, "by location" and "by region", will benefit from the two considerations above, A and B.
(C) Indexing Considerations:
The present index: { product: 1, location: 1, createdAt: -1 }.
Taking the new region field into consideration, new indexing will be needed. The query with region cannot benefit without an index on the region field, so a second index, a compound index to suit the query, will be needed. Creating an index with the region field means additional overhead on write operations; there will also be memory and storage considerations.
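For example, a compound index with the equality fields (region, product) first and the range field (createdAt) last might be created like this; treat it as a sketch to verify against your workload:

db.collection.createIndex({ region: 1, product: 1, createdAt: -1 })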
NOTES:
After adding the index, both queries ("by location" and "by region") need to be verified using explain to confirm they are using their respective indexes. This will require some testing; a trial-and-error process. An example follows below.
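For example, in the shell (regionId, productId, and sevenDaysAgo as before):

// Shows the winning plan and how many documents/keys were examined
db.collection.explain('executionStats').aggregate([
  { $match: {
      region: regionId,
      product: productId,
      createdAt: { $gte: new Date(Date.now() - sevenDaysAgo) }
  } }
])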
Again, adding new data, storing data in a different format, and adding new indexes require considering the following:
- Careful testing to verify that the other existing queries perform as usual.
- The change in data capture needs.
- Testing the new queries and verifying that the new design performs as expected.
Honestly your aggregation is pretty much as optimized as it can get, especially if you have { product: 1, createdAt: -1, location: 1 } as an index like you stated.
I'm not exactly sure how your entire product is built; however, the best solution in my opinion is to have another collection containing just the "relevant" documents from the past week.
Then you could query that collection with ease. This is quite easy to maintain in Mongo as well, using a TTL index.
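A sketch of that idea; the 'recent' collection name is hypothetical, and the TTL index removes documents roughly seven days after their createdAt value:

// Documents expire roughly 7 days (604800 s) after their createdAt value
db.recent.createIndex({ createdAt: 1 }, { expireAfterSeconds: 604800 })

Writes would then go to both collections, and the seven-day queries would read only from the small recent collection.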
If this is not an option, you could add a temporary field to the "relevant" documents and query on that, making it somewhat faster to retrieve them. But maintaining this field will require you to have a process running every X amount of time, which could make your results not 100% accurate depending on when you decide to run it.
My schema implementation is influenced by this tutorial on the official Mongo site:
{
_id: String,
data:[
{
point_1: Number,
ts: Date
}
]
}
This is basically a schema designed for time series data: I store the data for each hour per device in an array in a single document. I create the _id field by combining the id of the device sending the data with the time. For example, if a device with id xyz1234 sends data at 2018-09-11 12:30:00, then my _id field becomes xyz1234:2018091112.
I create a new doc if the document for that hour for that device doesn't exist; otherwise I just push my data onto the data array.
client.db('iot')
  .collection('iotdata')
  .updateOne(
    { _id: id },                                // deviceId:YYYYMMDDHH
    { $push: { data: { point_1, ts: date } } }, // append this reading
    { upsert: true }                            // create the hour doc if missing
  );
Now I am facing a problem while doing aggregation. I am trying to get these types of values:
Min point_1 value for many devices in last 24 hours by grouping on device id
Max point_1 value for many devices in last 24 hours by grouping on device id
Average point_1 for many devices in last 24 hours by grouping on device id
I thought this was a very simple aggregation, but then I realized the device id is not a standalone field but is mixed with the time data, so grouping by device id is not so direct. How can I split the _id and group based on the device id? I tried my level best to write the question as clearly as possible, so please ask in the comments if any part of the question is not clear.
You can start with $unwind on data to get a single document per entry. Then you can extract deviceId using the $substr and $indexOfBytes operators. Finally you can apply your filtering condition (last 24 hours) and use $group to get the min, max, and avg:
db.col.aggregate([
{
$unwind: "$data"
},
{
$project: {
point_1: "$data.point_1",
deviceId: { $substr: [ "$_id", 0, { $indexOfBytes: [ "$_id", ":" ] } ] },
dateTime: "$data.ts"
}
},
{
$match: {
dateTime: { $gte: ISODate("2018-09-10T12:00:00Z") }
}
},
{
$group: {
_id: "$deviceId",
min: { $min: "$point_1" },
max: { $max: "$point_1" },
avg: { $avg: "$point_1" }
}
}
])
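One caveat: because the $match on dateTime runs after $unwind, this pipeline scans every document. On a large collection you could, for example, add a coarse first-stage match on the embedded timestamps so an index can be used; this is a sketch assuming an index on { "data.ts": 1 } exists:

db.col.aggregate([
  // Coarse filter first: only docs containing at least one recent entry,
  // so an index on { "data.ts": 1 } can be used
  { $match: { "data.ts": { $gte: ISODate("2018-09-10T12:00:00Z") } } },
  { $unwind: "$data" },
  { $project: {
      point_1: "$data.point_1",
      deviceId: { $substr: [ "$_id", 0, { $indexOfBytes: [ "$_id", ":" ] } ] },
      dateTime: "$data.ts"
  } },
  // Exact filter after unwind drops older entries within the matched docs
  { $match: { dateTime: { $gte: ISODate("2018-09-10T12:00:00Z") } } },
  { $group: {
      _id: "$deviceId",
      min: { $min: "$point_1" },
      max: { $max: "$point_1" },
      avg: { $avg: "$point_1" }
  } }
])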
You can use the query below in MongoDB 3.6.
db.colname.aggregate([
  {"$project":{
    "deviceandtime":{"$split":["$_id", ":"]},
    "minpoint":{"$min":"$data.point_1"},
    "maxpoint":{"$max":"$data.point_1"},
    "sumpoint":{"$sum":"$data.point_1"},
    "count":{"$size":"$data.point_1"}
  }},
  // Compare against an hour string in the same YYYYMMDDHH format as the _id suffix
  {"$match":{"$expr":{"$gte":[{"$arrayElemAt":["$deviceandtime",1]},"2018091000"]}}},
  {"$group":{
    "_id":{"$arrayElemAt":["$deviceandtime",0]},
    "minpoint":{"$min":"$minpoint"},
    "maxpoint":{"$max":"$maxpoint"},
    "sumpoint":{"$sum":"$sumpoint"},
    "countpoint":{"$sum":"$count"}
  }},
  {"$project":{
    "minpoint":1,
    "maxpoint":1,
    "avgpoint":{"$divide":["$sumpoint","$countpoint"]}
  }}
])
I have this huge dataset for which every entry has a datetime field. The data was inserted irregularly. For example:
2015-04-20 : 500 entries,
2015-04-23 : 300 entries,
2015-05-01 : 600 entries
The thing is, I do not know when these active days are. What I would like is a mongodb query which returns some sort of array containing all days which occur in the database, like so:
['2015-04-20',
 '2015-04-23',
 '2015-04-25',
 '2015-05-01',
 '2015-05-05',
 '2015-05-09']
Is this possible, and if so: how can I achieve this?
There is a "distinct" command with a shell wrapper, which can be used like:
db.collection.distinct(dateFieldName, query)
If you are not running it from shell, check whether your driver wraps this command, if not you can use the command directly:
{ distinct: "<collection>", key: "<field>", query: <query> }
http://docs.mongodb.org/manual/reference/command/distinct/#dbcmd.distinct
If your timestamp field needs some additional processing, you can use the aggregation framework (this example assumes the timestamp is stored as a string):
db.collection.aggregate([{$group: {_id: {$substr: ["$timestamp", 0, 10]}}}])
http://docs.mongodb.org/v2.6/core/aggregation-introduction/
Assuming a field named dateField that contains Date values, you can use the aggregation date operators with $group to do this.
It's easiest if you're using Mongo 3.x where the $dateToString operator is available:
db.dates.aggregate([
{$group: {
_id: {$dateToString: {format: '%Y-%m-%d', date: '$dateField'}},
count: {$sum: 1}
}},
{$sort: {count: -1}}
])
Prior to 3.0 you need to use multiple date operators to piece together the date into the _id when grouping:
db.dates.aggregate([
{$group: {
_id: {
year: {$year: '$dateField'},
month: {$month: '$dateField'},
day: {$dayOfMonth: '$dateField'}
},
count: {$sum: 1}
}},
{$sort: {count: -1}}
])
In both cases, note the use of $sort to order the results by the number of docs on each day, descending.
I am trying to group by DayHours in a mongo aggregate function to get the past 24 hours of data.
For example: if the time of an event was 6:00 Friday, the "DayHour" would be 6-5.
I'm easily able to group by hour with the following query:
db.api_log.aggregate([
{ '$group': {
'_id': {
'$hour': '$time'
},
'count': {
'$sum':1
}
}
},
{ '$sort' : { '_id': -1 } }
])
I feel like there is a better way to do this. I've tried concatenation in the $project statement; however, you can apparently only concatenate strings in Mongo.
I effectively just need to end up grouping by day and hour, however it gets done. Thank you.
I assume that the time field contains an ISODate.
If you want only last 24 hours you can use this:
var yesterday = new Date((new Date).setDate(new Date().getDate() - 1));
db.api_log.aggregate([
  { $match: { time: { $gt: yesterday } } },
  { $group: {
      _id: {
        hour: { $hour: "$time" },
        day: { $dayOfMonth: "$time" }
      },
      count: { $sum: 1 }
  } }
])
If you want general grouping by day-hour you can use this:
db.api_log.aggregate([
  { $group: {
      _id: {
        hour: { $hour: "$time" },
        day: { $dayOfMonth: "$time" },
        month: { $month: "$time" },
        year: { $year: "$time" }
      },
      count: { $sum: 1 }
  } }
])
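Since the complaint was that only strings can be concatenated: on MongoDB 3.0+ you can sidestep the multi-field _id entirely with $dateToString, which yields a single, chronologically sortable day-hour key. A sketch, reusing the yesterday variable from above:

db.api_log.aggregate([
  { $match: { time: { $gt: yesterday } } },
  { $group: {
      // e.g. "2015-05-01 06" -- sorts chronologically as a string
      _id: { $dateToString: { format: '%Y-%m-%d %H', date: '$time' } },
      count: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
])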
Also, this is not an answer per se (I do not have MongoDB at hand right now to come up with the answer), but I think you cannot do this with the aggregation framework alone (I might be wrong, so I will explain myself).
You can obtain date and time information from the mongo ObjectId using the .getTimestamp() method. The problem is that you cannot output this information in a mongo query (something like db.find({}, {_id.getTimestamp}) does not work). You also cannot search by this field (except by using a $where clause).
So if it is possible to achieve, it can only be done using mapreduce, where in the reduce function you group based on the output of getTimestamp.
If this is a query you are going to run quite often, I would recommend actually adding a date field to your documents, because with this field you will be able to aggregate your data properly and also use indexes: instead of scanning your whole collection (as you are doing with $sort: -1), you can $match only the part that is later than the current date minus 24 hours.
I hope this can help even without code. If no one is able to answer this, I will try to play with it tomorrow.
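A minimal backfill sketch of that last suggestion, assuming documents currently lack a usable date field and you want to derive one from each _id's embedded timestamp:

// Derive a 'time' field from each ObjectId's embedded creation timestamp
db.api_log.find({ time: { $exists: false } }).forEach(function (doc) {
  db.api_log.updateOne(
    { _id: doc._id },
    { $set: { time: doc._id.getTimestamp() } }
  );
});

// Then index it so the 24-hour $match can avoid a collection scan
db.api_log.createIndex({ time: 1 });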
I have time series data stored in a mongodb database, where one of the fields is an ISODate object. I'm trying to retrieve all items for which the ISODate object has a zero value for minutes and seconds. That is, all the objects that have a timestamp at a round hour.
Is there any way to do that, or do I need to create separate fields for hour, min, second, and query for them directly by doing, e.g., find({"minute":0, "second":0})?
Thanks!
You could do this as @Devesh says, or if it fits better you could use the aggregation framework, like so:
db.col.aggregate([
  { $project: { _id: 1, mins: { $minute: '$dateField' }, secs: { $second: '$dateField' } } },
  { $match: { mins: 0, secs: 0 } }
]);
Use the $expr operator along with the date aggregate operators $minute and $second in your find query as:
db.collection.find({
'$expr': {
'$and': [
{ '$eq': [ { '$minute': '$dateField' }, 0 ] },
{ '$eq': [ { '$second': '$dateField' }, 0 ] },
]
}
})
Could you add one more field to the collection containing only the datetime truncated to the hour, i.e., with no minutes and seconds? It would make your query faster and easier to use.
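For example, on MongoDB 5.0+ such a field could be backfilled with $dateTrunc; this is a sketch, and the hourStart field name is hypothetical:

// Backfill: store each timestamp truncated down to the hour
db.col.updateMany({}, [
  { $set: { hourStart: { $dateTrunc: { date: '$dateField', unit: 'hour' } } } }
]);

// Round-hour documents are exactly those whose timestamp equals its truncation
db.col.find({ $expr: { $eq: ['$dateField', '$hourStart'] } });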