How is change in a value over time calculated in MongoDB?
Streaming values over time.
Streaming data is collected at sub-second random time intervals into individual documents.
Streamed data is grouped and averaged over one-minute time groups.
The goal is to compare each value to the one-minute average one hour later.
Example data:
[
{
_id: ObjectId("63a318c36ccc42d2330fae5e"),
timestamp: ISODate("2022-12-21T14:30:31.172Z"),
value: 3.8
},
{
_id: ObjectId("63a318c46ccc42d2330fae8d"),
timestamp: ISODate("2022-12-21T14:30:32.189Z"),
value: 4.0
},
{
_id: ObjectId("63a318c36ccc42d2330fae5e"),
timestamp: ISODate("2022-12-21T15:30:14.025Z"),
value: 5.0
},
{
_id: ObjectId("63a318c36ccc42d2330fae5e"),
timestamp: ISODate("2022-12-21T15:30:18.025Z"),
value: 5.5
}
]
Values grouped and averaged in one-minute groups:
{$group: {_id: {
  "code": "$code",
  "year": { "$year": "$timestamp" },
  "dayOfYear": { "$dayOfYear": "$timestamp" },
  "hour": { "$hour": "$timestamp" },
  "minute": { "$minute": "$timestamp" }
  },
  value: {$avg: "$value"},
  timestamp: {$first: "$timestamp"}
}}
This gets close to the goal, but aggregates all the values over an hour interval:
{$group:{_id:{
"code": "$code",
"year": { "$year": "$timestamp" },
"dayOfYear": { "$dayOfYear": "$timestamp" },
"hour": { "$hour": "$timestamp" }
},
value:{$first:"$value"},
valueLast:{$last:"$value"},
timestamp:{$first:"$timestamp"},
}
},
Instead, I want to look at the change in the individual documents.
That is, what is the 14:30 value at 15:30, and what is the 15:35 value at 16:35?
How do I compare each value to the value one hour later, for each document? Desired output:
[
{
_id: ObjectId("63a318c36ccc42d2330fae5e"),
timestamp: ISODate("2022-12-21T14:30:31.172Z"),
value: 3.8,
valueLast: 5.25,
gainPct: .382
},
{
_id: ObjectId("63a318c46ccc42d2330fae8d"),
timestamp: ISODate("2022-12-21T14:30:32.189Z"),
value: 4.0,
valueLast: 5.25,
gainPct: .313
},
]
One option is to use $setWindowFields with a time range for this.
It allows you to partition by code, sort by cleanTimeStamp, and perform an accumulation function ($avg) on all documents within a (time) range from the current document (each document in context):
db.collection.aggregate([
{$set: {
cleanTimeStamp: {
$dateTrunc: {
date: "$timestamp",
unit: "minute"
}
}
}},
{$setWindowFields: {
partitionBy: "$code",
sortBy: {cleanTimeStamp: 1},
output: {
valueLast: {
$avg: "$value",
window: {range: [59, 60], unit: "minute"}
}
}
}},
{$set: {
gainPct: {$round: [{$divide: [{$subtract: ["$valueLast", "$value"]}, "$value"]}, 3]},
cleanTimeStamp: "$$REMOVE"
}
}
])
It is not clear to me whether you want the result for each document or only for a specific timestamp. If you only want results for a specific minute, you can add a $match stage as the first step to limit the documents to those between the wanted timestamp and one hour after it.
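For example, a minimal sketch of such a first-step $match (the concrete timestamps are only placeholders for the minute you care about):
db.collection.aggregate([
  {$match: {
    timestamp: {
      $gte: ISODate("2022-12-21T14:30:00Z"),   // the wanted minute
      $lt: ISODate("2022-12-21T15:31:00Z")     // one hour (plus the window minute) later
    }
  }},
  // ... followed by the $set / $setWindowFields / $set stages shown above
])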
Related
I have a "status" collection like this strcture -
{
_id: ObjectId("545a0b63b03dbcd1238b4567"),
status: 1004,
comment: "Rem dolor ipsam placeat omnis non. Aspernatur nobis qui nisi similique.",
created_at: ISODate("2014-11-05T11:34:59.804Z")
},
{
_id: ObjectId("545a0b66b03dbcd1238b4568"),
status: 1001,
comment: "Sint et eos vero ipsa voluptatem harum. Hic unde voluptatibus et blanditiis quod modi.",
created_at: ISODate("2014-11-05T11:35:02.814Z")
}
....
....
I need to get results grouped by 15-minute intervals from that collection.
There are a couple of ways to do this.
The first is with Date Aggregation Operators, which allow you to dissect the "date" values in documents. Specifically for "grouping" as the primary intent:
db.collection.aggregate([
  { "$group": {
    "_id": {
      "year": { "$year": "$created_at" },
      "dayOfYear": { "$dayOfYear": "$created_at" },
      "hour": { "$hour": "$created_at" },
      "interval": {
        "$subtract": [
          { "$minute": "$created_at" },
          { "$mod": [{ "$minute": "$created_at"}, 15] }
        ]
      }
    },
    "count": { "$sum": 1 }
  }}
])
The second way uses a little trick: when one date object is subtracted from another (or used in another direct math operation), the result is a number representing the milliseconds between the two. So by subtracting the epoch date you get the epoch-milliseconds representation, and then you can use plain date math for the interval:
db.collection.aggregate([
{ "$group": {
"_id": {
"$subtract": [
{ "$subtract": [ "$created_at", new Date("1970-01-01") ] },
{ "$mod": [
{ "$subtract": [ "$created_at", new Date("1970-01-01") ] },
1000 * 60 * 15
]}
]
},
"count": { "$sum": 1 }
}}
])
So it depends on what kind of output format you want for the grouping interval. Both basically represent the same thing and have sufficient data to re-construct as a "date" object in your code.
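For instance, a quick client-side sketch (plain shell/Node.js JavaScript) that turns the numeric _id from the second form back into a Date while iterating the cursor:
db.collection.aggregate([
  // ... the epoch-math $group pipeline from above ...
]).forEach(function(doc) {
  // doc._id is milliseconds since epoch, already rounded down to the 15 minute boundary
  var bucketStart = new Date(doc._id);
  print(bucketStart.toISOString() + " -> " + doc.count);
});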
You can put anything else you want in the "grouping operator" portion after the grouping _id. I'm just using the basic "count" example in lieu of any real statement from yourself as to what you really want to do.
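To illustrate, here is a sketch of the same 15-minute grouping with a few more accumulators alongside the count (the firstAt, lastAt and statuses names are made up for illustration; swap in whatever summary you actually need):
db.collection.aggregate([
  { "$group": {
    "_id": {
      "$subtract": [
        { "$subtract": [ "$created_at", new Date("1970-01-01") ] },
        { "$mod": [
          { "$subtract": [ "$created_at", new Date("1970-01-01") ] },
          1000 * 60 * 15
        ]}
      ]
    },
    "count": { "$sum": 1 },
    "firstAt": { "$min": "$created_at" },
    "lastAt": { "$max": "$created_at" },
    "statuses": { "$addToSet": "$status" }
  }}
])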
MongoDB 4.x and Upwards
There have been some additions to the Date Aggregation Operators since the original writing, and from MongoDB 4.0 there is actual "real casting of types" as opposed to the basic math tricks done here with BSON Date conversion.
For instance we can use $toLong and $toDate as new helpers here:
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": "$created_at" },
{ "$mod": [ { "$toLong": "$created_at" }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
That's a bit shorter and does not require defining an external BSON Date for the "epoch" value as a constant when building the pipeline, so it's consistent across all language implementations.
Those are just two of the "helper" methods for type conversion; they all tie back to the $convert operator, which is the "longer" form allowing custom handling of null values or conversion errors.
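As a brief sketch of that longer form (the createdMillis field name and the null fallbacks here are arbitrary choices for illustration):
db.collection.aggregate([
  { "$addFields": {
    "createdMillis": {
      "$convert": {
        "input": "$created_at",
        "to": "long",
        "onError": null,   // value present but not convertible
        "onNull": null     // field missing or explicitly null
      }
    }
  }}
])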
It's even possible with such casting to get the Date information from the ObjectId of the primary key, as this would be a reliable source of "creation" date:
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": { "$toDate": "$_id" } },
{ "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
So "casting types" with this sort of conversion can be pretty powerful tool.
Warning - ObjectId values are limited to precision to the second only for the internal time value that makes up part of their data allowing the $toDate conversion. The actual inserted "time" is most probably dependent on the driver in use. Where precision is required, it's still recommended to use a discrete BSON Date field instead of relying on ObjectId values.
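A quick shell check against the sample documents above makes that limit visible:
// The time embedded in an ObjectId is whole seconds only, so the .804Z
// milliseconds stored in created_at cannot be recovered from it.
ObjectId("545a0b63b03dbcd1238b4567").getTimestamp()
// ISODate("2014-11-05T11:34:59Z")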
I like the other answer here, and mostly for the use of date math instead of aggregation date operators which while helpful can also be a little obscure.
The only thing I want to add here is that you can also return a Date object from the aggregation framework by this approach as opposed to the "numeric" timestamp as the result. It's just a little extra math on the same principles, using $add:
db.collection.aggregate([
{ "$group": {
"_id": {
"$add": [
{ "$subtract": [
{ "$subtract": [ "$current_date", new Date(0) ] },
{ "$mod": [
{ "$subtract": [ "$current_date", new Date(0) ] },
1000 * 60 * 15
]}
] },
new Date(0)
]
},
"count": { "$sum": 1 }
}}
])
The Date(0) constructs in JavaScript here represent the same "epoch" date in a shorter form, as 0 milliseconds from the epoch is the epoch. The main point is that when a numeric value is "added" to a BSON date object, the inverse of the condition described above applies and the end result is actually a Date.
All drivers will return the native Date type to their language by this approach.
Another useful way:
db.collection.aggregate([
{$group: {
_id: {
overallTime: {
$dateToString: { format: "%Y-%m-%dT%H", date: "$created_at" }
},
interval: { $trunc: { $divide: [{ $minute: "$created_at" }, 15 ]}}
},
}},
])
And even easier for minute, hour, and day intervals:
var format = "%Y-%m-%dT%H:%M"; // 1 min
var format = "%Y-%m-%dT%H"; // 1 hour
var format = "%Y-%m-%d"; // 1 day
db.collection.aggregate([
{$group: {
_id: { $dateToString: { format: format, date: "$created_at" } },
}},
])
A little nicer for mongo db.version() < 3.0:
db.collection.aggregate([
{$match: {created_at:{$exists:1}}},
{$group: {
_id: {$add:[
{$dayOfYear: "$created_at" },
{$multiply: [{$year: "$created_at"}, 1000]}
]},
count: {$sum: 1 }
}},
{$sort:{_id:-1}}
])
MongoDB 5.x and Upwards
Date truncation is now supported in aggregation pipelines. Example:
{
$group: {
"_id": { "$dateTrunc": { date: "$created_at", unit: "minute", binSize: 15 } },
"count" : { $sum: 1 }
}
},
You can also find useful info about window functions and $dateTrunc in the MongoDB documentation.
Neil Lunn's answer at https://stackoverflow.com/a/26814496/8474325 for MongoDB 4.x upwards is fantastic. But there is a small mistake in the code where he uses the ObjectId for the aggregation: the line { "$toDate": "_id" } has to be changed to { "$toDate": "$_id" } for the code to work.
Here's the corrected code.
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": { "$toDate": "$_id" } },
{ "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
With MongoDB v5.0+, you can use $setWindowFields to perform computations on nearby documents (i.e. documents within a 15-minute interval). In the following example, it counts all documents that fall within 15 minutes before or after the current document. You can adjust this by changing the window parameter.
db.collection.aggregate([
{
$setWindowFields: {
partitionBy: null,
sortBy: {
created_at: 1
},
output: {
count: {
$count: {},
window: {
range: [
-15,
15
],
unit: "minute"
}
}
}
}
}
])
I have a "status" collection like this strcture -
{
_id: ObjectId("545a0b63b03dbcd1238b4567"),
status: 1004,
comment: "Rem dolor ipsam placeat omnis non. Aspernatur nobis qui nisi similique.",
created_at: ISODate("2014-11-05T11:34:59.804Z")
},
{
_id: ObjectId("545a0b66b03dbcd1238b4568"),
status: 1001,
comment: "Sint et eos vero ipsa voluptatem harum. Hic unde voluptatibus et blanditiis quod modi.",
created_at: ISODate("2014-11-05T11:35:02.814Z")
}
....
....
I need to get result grouped by 15 minutes interval from that collection.
There are a couple of ways to do this.
The first is with Date Aggregation Operators, which allow you to dissect the "date" values in documents. Specifically for "grouping" as the primary intent:
db.collection.aggregate([
{ "$group": {
"_id": {
"year": { "$year": "$created_at" },
"dayOfYear": { "$dayOfYear": "$created_at" },
"hour": { "$hour": "$created_at" },
"interval": {
"$subtract": [
{ "$minute": "$created_at" },
{ "$mod": [{ "$minute": "$created_at"}, 15] }
]
}
}},
"count": { "$sum": 1 }
}}
])
The second way is by using a little trick of when a date object is subtracted (or other direct math operation) from another date object, then the result is a numeric value representing the epoch timestamp milliseconds between the two objects. So just using the epoch date you get the epoch milliseconds representation. Then use date math for the interval:
db.collection.aggregate([
{ "$group": {
"_id": {
"$subtract": [
{ "$subtract": [ "$created_at", new Date("1970-01-01") ] },
{ "$mod": [
{ "$subtract": [ "$created_at", new Date("1970-01-01") ] },
1000 * 60 * 15
]}
]
},
"count": { "$sum": 1 }
}}
])
So it depends on what kind of output format you want for the grouping interval. Both basically represent the same thing and have sufficient data to re-construct as a "date" object in your code.
You can put anything else you want in the "grouping operator" portion after the grouping _id. I'm just using the basic "count" example in lieu of any real statement from yourself as to what you really want to do.
MongoDB 4.x and Upwards
There were some additions to Date Aggregation Operators since the original writing, but from MongoDB 4.0 there will be actual "real casting of types" as opposed to the basic math tricks done here with BSON Date conversion.
For instance we can use $toLong and $toDate as new helpers here:
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": "$created_at" },
{ "$mod": [ { "$toLong": "$created_at" }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
That's a bit shorter and does not require defining an external BSON Date for the "epoch" value as a constant in defining the pipeline so it's pretty consistent for all language implementations.
Those are just two of the "helper" methods for type conversion which all tie back to the $convert method, which is a "longer" form of the implementation allowing for custom handling on null or error in conversion.
It's even possible with such casting to get the Date information from the ObjectId of the primary key, as this would be a reliable source of "creation" date:
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": { "$toDate": "$_id" } },
{ "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
So "casting types" with this sort of conversion can be pretty powerful tool.
Warning - ObjectId values are limited to precision to the second only for the internal time value that makes up part of their data allowing the $toDate conversion. The actual inserted "time" is most probably dependent on the driver in use. Where precision is required, it's still recommended to use a discrete BSON Date field instead of relying on ObjectId values.
I like the other answer here, and mostly for the use of date math instead of aggregation date operators which while helpful can also be a little obscure.
The only thing I want to add here is that you can also return a Date object from the aggregation framework by this approach as opposed to the "numeric" timestamp as the result. It's just a little extra math on the same principles, using $add:
db.collection.aggregate([
{ "$group": {
"_id": {
"$add": [
{ "$subtract": [
{ "$subtract": [ "$current_date", new Date(0) ] },
{ "$mod": [
{ "$subtract": [ "$current_date", new Date(0) ] },
1000 * 60 * 15
]}
] },
new Date(0)
]
},
"count": { "$sum": 1 }
}}
])
The Date(0) contructs in JavaScript here represent the same "epoch" date in a shorter form, as 0 millisecond from epoch is epoch. But the main point is that when the "addition" to another BSON date object is done with a numeric identifier, then the inverse of the described condition is true and the end result is actually now a Date.
All drivers will return the native Date type to their language by this approach.
Another useful way:
db.collection.aggregate([
{$group: {
_id: {
overallTime: {
$dateToString: { format: "%Y-%m-%dT%H", date: "$created_at" }
},
interval: { $trunc: { $divide: [{ $minute: "$created_at" }, 15 ]}}
},
}},
])
And more easier for min, hour, day intervals:
var format = "%Y-%m-%dT%H:%M"; // 1 min
var format = "%Y-%m-%dT%H"; // 1 hour
var format = "%Y-%m-%d"; // 1 day
db.collection.aggregate([
{$group: {
_id: { $dateToString: { format: format, date: "$created_at" } },
}},
])
A little more beautiful for mongo db.version() < 3.0
db.collection.aggregate([
{$match: {created_at:{$exists:1}}},
{$group: {
_id: {$add:[
{$dayOfYear: "$created_at" },
{$multiply: [{$year: "$created_at"}, 1000]}
]},
count: {$sum: 1 }
}},
{$sort:{_id:-1}}
])
MongoDB 5.x and Upwards
date truncation is now supported in aggergation pipelines, example:
{
$group: {
"_id": { "$dateTrunc": { date: "$created_at", unit: "minute", binSize: 15 } },
"count" : { $sum: 1 }
}
},
You can also find useful info about window functions and dateTrunc here
#Neil Lunn's answer at https://stackoverflow.com/a/26814496/8474325 for MongoDb 4.x upwards is fantastic. But there is a small mistake in the code where he uses ObjectId for the aggregation. The Line { "$toDate": "_id" } has to be changed to { "$toDate": "$_id" } for the code to work.
Here's the corrected code.
db.collection.aggregate([
{ "$group": {
"_id": {
"$toDate": {
"$subtract": [
{ "$toLong": { "$toDate": "$_id" } },
{ "$mod": [ { "$toLong": { "$toDate": "$_id" } }, 1000 * 60 * 15 ] }
]
}
},
"count": { "$sum": 1 }
}}
])
With MongoDB v5.0+, you can use $setWindowFields to perform computation on nearby documents(i.e. documents within 15 minute interval). In following example, it will count all documents which is 15 minutes before or after the current document. You can adjust it by changing the window param.
db.collection.aggregate([
{
$setWindowFields: {
partitionBy: null,
sortBy: {
created_at: 1
},
output: {
count: {
$count: {},
window: {
range: [
-15,
15
],
unit: "minute"
}
}
}
}
}
])
Here is the Mongo Playground for your reference.
I am trying to aggregate some data and group it by Time Intervals as well as maintaining a sub-category, if you will. I want to be able to chart this data out so that I will have multiple different Lines corresponding to each Office that was called. The X axis will be the Time Intervals and the Y axis would be the Average Ring Time.
My data looks like this:
Calls: [{
created: ISODate(xyxyx),
officeCalled: 'ABC Office',
answeredAt: ISODate(xyxyx)
},
{
created: ISODate(xyxyx),
officeCalled: 'Office 2',
answeredAt: ISODate(xyxyx)
},
{
created: ISODate(xyxyx),
officeCalled: 'Office 3',
answeredAt: ISODate(xyxyx)
}];
My goal is to get my calls grouped by Time Intervals (30 Minutes/1 Hour/1 Day) AND by the Office Called. So when my aggregate completes, I'm looking for data like this:
[{"_id":TimeInterval1,"calls":[{"office":"ABC Office","ringTime":30720},
{"office":"Office2","ringTime":3070}]},
{"_id":TimeInterval2,"calls":[{"office":"Office1","ringTime":1125},
{"office":"ABC Office","ringTime":15856}]}]
I have been poking around for the past few hours and I was able to aggregate my data, but I haven't figured out how to group it properly so that I have each time interval along with the office data. Here is my latest code:
Call.aggregate([
{$match: {
$and: [
{created: {$exists: 1}},
{answeredAt: {$exists: 1}}]}},
{$project: { created: 1,
officeCalled: 1,
answeredAt: 1,
timeToAns: {$subtract: ["$answeredAt", "$created"]}}},
{$group: {_id: {"day": {"$dayOfYear": "$created"},
"hour": {
"$subtract": [
{"$hour" : "$created"},
{"$mod": [ {"$hour": "$created"}, 2]}
]
},
"officeCalled": "$officeCalled"
},
avgRingTime: {$avg: '$timeToAns'},
total: {$sum: 1}}},
{"$group": {
"_id": "$_id.day",
"calls": {
"$push": {
"office": "$_id.officeCalled",
"ringTime": "$avgRingTime"
},
}
}},
{$sort: {_id: 1}}
]).exec(function(err, results) {
//My results look like this
[{"_id":118,"calls":[{"office":"ABC Office","ringTime":30720},
{"office":"Office 2","ringTime":31384.5},
{"office":"Office 3","ringTime":7686.066666666667},...];
});
This just doesn't quite get it... I get my data, but it's broken down by day only, not the 2-hour time interval I was shooting for. Let me know if I'm doing this all wrong, please --- I am VERY NEW to aggregation, so your help is very much appreciated.
Thank you!!
All you really need to do is include both parts of the _id value you want in the final group. No idea why you thought to only reference a single field.
Also, "lose the $project", as it is just wasted cycles and processing when you can use the expression directly in the first $group:
Call.aggregate(
[
{ "$match": {
"created": { "$exists": 1 },
"answeredAt": { "$exists": 1 }
}},
{ "$group": {
"_id": {
"day": {"$dayOfYear": "$created"},
"hour": {
"$subtract": [
{"$hour" : "$created"},
{"$mod": [ {"$hour": "$created"}, 2]}
]
},
"officeCalled": "$officeCalled"
},
"avgRingTime": {
"$avg": { "$subtract": [ "$answeredAt", "$created" ] }
},
"total": { "$sum": 1 }
}},
{ "$group": {
"_id": {
"day": "$_id.day",
"hour": "$_id.hour"
},
"calls": {
"$push": {
"office": "$_id.officeCalled",
"ringTime": "$avgRingTime"
},
},
"total": { "$sum": "$total" }
}},
{ "$sort": { "_id": 1 } }
]
).exec(function(err, results) {
});
Also note the complete omission of $and. This is not needed as all MongoDB query arguments are already "AND" conditions anyway, unless specifically stated otherwise. Just stick to what is simple. It's meant to be simple.
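For example, these two queries select exactly the same documents (assuming the underlying collection is named calls); the explicit $and only earns its keep when you need to repeat the same key within one query:
// implicit AND - preferred
db.calls.find({ "created": { "$exists": 1 }, "answeredAt": { "$exists": 1 } })
// explicit $and - same result, more typing
db.calls.find({ "$and": [
  { "created": { "$exists": 1 } },
  { "answeredAt": { "$exists": 1 } }
]})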
I have a Collection containing a date field. I want to group it by dayOfMonth but at the time of projection I want to project the complete Date and associated count.
I have a raw Collection in mongodb containing a Timestamp (Date field)
This is my Aggregation query:
db.raw.aggregate(
{
"$match" : { "Timestamp":{$gte:new Date("2012-05-30T00:00:00.000Z"),$lt:new Date("2014-05-31T00:00:00.000Z")}}
},
{
$group:
{
_id: { ApplicationId: "$ApplicationId", date: {$dayOfMonth: '$Timestamp'} },
count: { $sum: 1 }
}
}
)
In the above query I'm grouping by dayOfMonth, but how can I project the complete Date along with the count?
Your "Timestamp" values are clearly actual points in time so there really isn't a "complete date" to return. You could just generally "do the math" based on the date range you are applying and the "day of month" values returned as you process the results returned.
But alternately you could just "apply the math" to the date values in order by rounding the "timestamp" values out to the day. The returned values are no longer date objects, but they are the millisecond since epoch values, so it is relatively easy to "seed" those to date functions:
db.raw.aggregate([
{ "$match" : {
"Timestamp":{
"$gte": new Date("2012-05-30"),
"$lt": new Date("2014-05-31")
}
}},
{ "$group": {
"_id": {
"$subtract": [
{ "$subtract": [ "$Timestamp", new Date("1970-01-01") ] },
{ "$mod": [
{ "$subtract": [ "$Timestamp", new Date("1970-01-01") ] },
1000 * 60 * 60 * 24
]}
]
},
"count": { "$sum": 1 }
}}
])
So when you subtract one date object from another, the difference in milliseconds is returned as a number. Subtracting the epoch date therefore normalizes the value to milliseconds since the epoch. The rest is basic date math to round the result down to the current day.
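Spelled out in plain JavaScript for a single (hypothetical) timestamp, the same arithmetic looks like this:
var ms = new Date("2014-05-30T17:25:14.123Z") - new Date("1970-01-01"); // milliseconds since epoch
var dayMs = 1000 * 60 * 60 * 24;                                        // one day
var rounded = ms - (ms % dayMs);                                        // drop the time-of-day remainder
new Date(rounded);                                                      // 2014-05-30T00:00:00Z - the whole-day bucket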
Alternately, you could just use the other date aggregation operators and concatenate them into a string, but there would usually be a bit more work involved unless those values were for direct use:
db.raw.aggregate([
{ "$match" : {
"Timestamp":{
"$gte": new Date("2012-05-30"),
"$lt": new Date("2014-05-31")
}
}},
{ "$group": {
"_id": {
"$concat": [
{ "$substr": [{ "$year": "$Timestamp" },0,4] },
"-",
{ "$substr": [{ "$month": "$Timestamp" },0,2] },
"-",
{ "$substr": [{ "$dayOfMonth": "$Timestamp" },0,2] }
]
},
"count": { "$sum": 1 }
}}
])
Neil Lunn has provided a great answer.
There's one more approach that you can use:
db.raw.aggregate([
{
"$match" :
{
"Timestamp":{"$gte": new Date("2012-05-30"), "$lt": new Date("2014-07-31")}
}
},
{
"$group" :
{
"_id":{"$dayOfMonth": "$Timestamp"},
"Date":{"$first":"$Timestamp"},
"count": { "$sum": 1 }
}
}
])
It will return the date for you.
Hope this helps.
I have a time-series dataset with a few hundred thousand records in it. I am trying to create an aggregation query in Mongo to group this data into intervals while averaging the price.
Ideally I would want 10-minute intervals (600000 ms) and the price averages. I'm not too sure how to carry on from where I am.
Data ~a few hundred thousand records:
{
"time" : 1391485215000,
"price" : "0.00133355",
}
query = [
{
"$project": {
"_id":"$_id",
"price":"$price",
"time": {
xxxx
}
}
},
{
"$group": {xxxx}
}
]
So it would appear that I had a fundamental flaw in my schema. I was using an epoch timestamp instead of Mongo's Date type, as well as storing the other numbers as strings instead of doubles. I tried a few workarounds, but it doesn't look like you are able to use the built-in aggregation functions unless the fields are of the correct type.
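That said, on newer MongoDB releases the conversion can be done inside the pipeline itself: $toDate and $toDouble (4.0+) accept the epoch-millisecond numbers and numeric strings as stored, and $dateTrunc (5.0+) handles the 10-minute buckets. A rough sketch under those assumptions, not what was available at the time of writing:
db.collection.aggregate([
  { $group: {
    _id: {
      $dateTrunc: {
        date: { $toDate: "$time" },          // epoch milliseconds -> BSON Date
        unit: "minute",
        binSize: 10                          // 10 minute buckets
      }
    },
    avgPrice: { $avg: { $toDouble: "$price" } },
    count: { $sum: 1 }
  }},
  { $sort: { _id: 1 } }
])
The hourly grouping pipeline below is the version that works once the fields are stored as proper Date and double types: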
$project: {
year: { $year: '$time'},
month: { $month: '$time'},
day: { $dayOfMonth: '$time'},
hour: { $hour: '$time'},
price: 1,
total: 1,
amount: 1
}
},
{
$group : {
_id: { year: '$year', month: '$month', day: '$day', hour: '$hour' },
price:{
$avg: "$price"
},
high:{
$max: "$price"
},
low:{
$min: "$price"
},
amount:{
$sum: "$amount"
},
total:{
$sum: "$total"
}
}