In a MongoDB database I am trying to group some data by date (one group for each day of the year) and then add an additional field that is the product of two already existing fields.
The data structure is:
{
"_id" : ObjectId("567a7c6d9da4bc18967a3947"),
"units" : 3.0,
"price" : 50.0,
"name" : "Name goes here",
"datetime" : ISODate("2015-12-23T10:50:21.560+0000")
}
I first tried a two-stage approach using $project and then $group, like this:
db.things.aggregate(
[
{
$project: {
"_id" : 1,
"name" : 1,
"units" : 1,
"price" : 1,
"datetime":1,
"unitsprice" : { $multiply: [ "$price", "$units" ] }
}
},
{
$group: {
"_id" : {
"day" : {
"$dayOfMonth" : "$datetime"
},
"month" : {
"$month" : "$datetime"
},
"year" : {
"$year" : "$datetime"
}
},
"things" : {
"$push" : "$$ROOT"
}
}
}
],
)
In this case, the first step (the $project) gives the expected output (with the expected value of unitsprice), but the second $group step then fails with this error:
"errmsg" : "$multiply only supports numeric types, not String",
"code" : 16555
I also tried turning things around, doing the $group step first and then the $project:
db.things.aggregate(
[
{
$group: {
"_id" : {
"day" : {
"$dayOfMonth" : "$datetime"
},
"month" : {
"$month" : "$datetime"
},
"year" : {
"$year" : "$datetime"
}
},
"things" : {
"$push" : "$$ROOT"
}
}
},
{
$project: {
"_id" : 1,
"things":{
"name" : 1,
"units" : 1,
"price" : 1,
"datetime":1,
"unitsprice" : { $multiply: [ "$price", "$units" ] }
}
}
}
],
);
But in this case, the result of the multiplication is: unitsprice:null
Is there any way of doing this multiplication? Also, it would be nice to do it in a way that the output does not have nested fields, so it would look like this:
{"_id":
"units":
"price":
"name":
"datetime":
"unitsprice":
}
Thanks in advance
PS: I am running MongoDB 3.2
Finally found the error. When the data was imported, a few of the price fields were created as strings. Surprisingly, the error did not come up when the multiplication was first done in the $project step (the output was normal until it reached the first bad field, then it stopped), but only in the $group step.
In order to find the text fields I used this query:
db.things.find( { price: { $type: 2 } } );
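To convert those string prices back into numbers, a small shell loop like the one below should do it (just a sketch: it assumes every string price parses cleanly with parseFloat and that updating the documents in place is acceptable):
db.things.find( { price: { $type: 2 } } ).forEach(function (doc) {
    // re-store the price as a proper double so $multiply works
    db.things.update(
        { _id: doc._id },
        { $set: { price: parseFloat(doc.price) } }
    );
});
After that, the original $project + $group pipeline should run without the type error.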
Thanks for the hints
I have a MongoDB aggregation pipeline that has been frustrating me for a while now, because it never seems to produce results that are accurate or match my needs. The aim is to count the number of new unique users each day per chatbot, starting from the very beginning.
Here's what my pipeline looks like right now.
[
{
"$project" : {
"_id" : 0,
"bot_id" : 1,
"customer_id" : 1,
"timestamp" : {
"$ifNull" : [
'$incoming_log.created_at', '$outcome_log.created_at'
]
}
}
},
{
"$project" : {
"customer_id" : 1,
"bot_id" : 1,
"timestamp" : {
"$dateFromString" : {
"dateString" : {
"$substr" : [
"$timestamp", 0, 10
]
}
}
}
}
},
{
"$group" : {
"_id" : "$customer_id",
"timestamp" : {
"$first" : "$timestamp"
},
"bot_id" : {
"$addToSet" : "$bot_id"
}
}
},
{
"$unwind" : "$bot_id"
},
{
"$group" : {
"_id" : {
"bot_id" : "$bot_id",
"customer_id" : "$_id"
},
"timestamp" : {
"$first" : "$timestamp"
}
}
},
{
"$project" : {
"_id" : 0,
"timestamp" : 1,
"customer_id" : "$_id.customer_id",
"bot_id" : "$_id.bot_id"
}
},
{
"$group" : {
"_id": {
"timestamp" : "$timestamp",
"bot_id" : "$bot_id"
},
"new_users" : {
"$sum" : 1
}
}
},
{
"$project" : {
"_id" : 0,
"timestamp" : "$_id.timestamp",
"bot_id" : "$_id.bot_id",
"new_users" : 1
}
}
]
Some sample data for an idea of what the data looks like...
{
"mid" : "...",
"bot_id" : "...",
"bot_name" : "JOBBY",
"customer_id" : "U122...",
"incoming_log" : {
"created_at" : ISODate("2020-12-08T09:14:16.237Z"),
"event_payload" : "",
"event_type" : "text"
},
"outcome_log" : {
"created_at" : ISODate("2020-12-08T09:14:18.145Z"),
"distance" : 0.25,
"incoming_msg" : "🥺"
}
}
My expected outcome is something along the lines of:
{
"new_users" : 1187.0,
"timestamp" : ISODate("2021-01-27T00:00:00.000Z"),
"bot_id" : "5ffd......."
},
{
"new_users" : 1359.0,
"timestamp" : ISODate("2021-01-27T00:00:00.000Z"),
"bot_id" : "6def......."
}
Have I overcomplicated my pipeline somewhere? I seem to get a reasonable number of new users per bot each day, but for some reason my colleague tells me that the number is too high. I need some tips, please!
I really have no idea what you are looking for.
"The aim is to count the number of new unique users each day per chatbot, starting from the very beginning."
What is "new unique users"? What do you mean by "starting from the very beginning"? You ask for count per day but you use {"$group": {"_id": "$customer_id", "timestamp": { "$first": "$timestamp" } } }
For me your grouping does not make any sense. With only one single sample document, it is almost impossible to guess what you like to count.
Regarding group per day: I prefer to work always with Date values, rather than strings. It is less error prone. Maybe you have to consider time zones, because UTC midnight is not your local midnight. When you work with Dates then you have better control over it.
The $project stages are useless when you do $group afterwards. Typically you have only one $project stage at the end.
So, put something to start.
db.collection.aggregate([
{
$set: {
day: {
$dateToParts: {
date: { $ifNull: ["$incoming_log.created_at", "$outcome_log.created_at"] }
}
}
}
},
{
$group: {
_id: "$customer_id",
timestamp: {$min: { $dateFromParts: { year: "$day.year", month: "$day.month", day: "$day.day" } }}
}
}
]);
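Building on that starting point, here is one possible continuation (only a sketch, assuming a "new user" means the first calendar day a customer_id appears for a given bot_id; the collection name is the same placeholder as above):
db.collection.aggregate([
  {
    $set: {
      day: {
        $dateToParts: {
          date: { $ifNull: ["$incoming_log.created_at", "$outcome_log.created_at"] }
          // a "timezone" option could be added here if local midnight matters
        }
      }
    }
  },
  {
    // first day each customer is seen for a given bot
    $group: {
      _id: { customer_id: "$customer_id", bot_id: "$bot_id" },
      timestamp: { $min: { $dateFromParts: { year: "$day.year", month: "$day.month", day: "$day.day" } } }
    }
  },
  {
    // count those first appearances per day and bot
    $group: {
      _id: { timestamp: "$timestamp", bot_id: "$_id.bot_id" },
      new_users: { $sum: 1 }
    }
  },
  {
    $project: { _id: 0, timestamp: "$_id.timestamp", bot_id: "$_id.bot_id", new_users: 1 }
  }
]);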
I have a database that contains information about flights. I'm trying to find the category that has the least minutes of delay. I managed to find and show the minimum number of delay minutes, but not the category itself.
I've tried putting ": true" after each field to show it:
db.delayData.aggregate([{
$group: {
"_id": "$carrier",
"arr_sum": {
$sum: "$arr_delay"
},
"carrier_sum": {
$sum: "$carrier_delay"
},
"weather_sum": {
$sum: "$weather_delay"
},
"nas_sum": {
$sum: "$nas_delay"
},
"sec_sum": {
$sum: "$security_delay"
},
"late_air_sum": {
$sum: "$late_aircraft_delay"
}
}
},
{
$project: {
"min_delay_category": {
$min: ["$arr_sum", "$carrier_sum", "$weather_sum", "$nas_sum", "$sec_sum", "$late_air_sum"]
}
}
}
]).pretty()
I want to have something like this:
{ "_id" : "VX", "min_delay_category" : 1449, "sec_sum"... }
I've tried to write:
..."$sec_sum":1,"$late_air_sum":1]
but the error message is:
"missing ] after element list"
when I wrote:
...{"sec_sum":1},{"late_air_sum":1}]
I don't get an error message, but it gives me the second least result, not the first one.
for example:
{ "_id" : "VX", "min_delay_category" : 69081 }
but the true result for "VX" is 1449
The following query can get us the expected output:
db.collection.aggregate([
{
$project:{
"carrier":1,
"category.arr_delay":"$arr_delay",
"category.carrier_delay":"$carrier_delay",
"category.weather_delay":"$weather_delay",
"category.nas_delay":"$nas_delay",
"category.security_delay":"$security_delay",
"category.late_aircraft_delay":"$late_aircraft_delay"
}
},
{
$project:{
"carrier":1,
"categories":{
$objectToArray:"$category"
}
}
},
{
$unwind:"$categories"
},
{
$group:{
"_id":{
"carrier":"$carrier",
"category":"$categories.k"
},
"carrier":{
$first:"$carrier"
},
"category":{
$first:"$categories.k"
},
"total_delay":{
$sum:"$categories.v"
}
}
},
{
$sort:{
"total_delay":1
}
},
{
$group:{
"_id": "$carrier",
"carrier":{
$first:"$carrier"
},
"category":{
$first:"$category"
},
"minimum_delay":{
$first:"$total_delay"
}
}
},
{
$project:{
"_id":0
}
}
]).pretty();
Data set:
{
"_id" : ObjectId("5d5b5058435c7584459b7bae"),
"year" : 2003,
"month" : 6,
"carrier" : "AA",
"carrier_name" : "American Airlines Inc.",
"airport" : "ABQ",
"airport_name" : "Albuquerque, NM: Albuquerque International Sunport",
"arr_flights" : 307,
"arr_del15" : 56,
"carrier_ct" : 14.68,
"weather_ct" : 10.79,
"nas_ct" : 19.09,
"security_ct" : 1.48,
"late_aircraft_ct" : 9.96,
"arr_cancelled" : 1,
"arr_diverted" : 1,
"arr_delay" : 2530,
"carrier_delay" : 510,
"weather_delay" : 621,
"nas_delay" : 676,
"security_delay" : 25,
"late_aircraft_delay" : 698,
"" : ""
},
{
"_id" : ObjectId("5d5b5058435c7584459b7bbe"),
"year" : 2003,
"month" : 6,
"carrier" : "AA",
"carrier_name" : "American Airlines Inc.",
"airport" : "ABQ",
"airport_name" : "Albuquerque, NM: Albuquerque International Sunport",
"arr_flights" : 307,
"arr_del15" : 56,
"carrier_ct" : 14.68,
"weather_ct" : 10.79,
"nas_ct" : 19.09,
"security_ct" : 1.48,
"late_aircraft_ct" : 9.96,
"arr_cancelled" : 1,
"arr_diverted" : 1,
"arr_delay" : 2530,
"carrier_delay" : 510,
"weather_delay" : 621,
"nas_delay" : 676,
"security_delay" : 2512,
"late_aircraft_delay" : 698,
"" : ""
}
Output:
{ "carrier" : "AA", "category" : "carrier_delay", "minimum_delay" : 1020 }
Aggregation stage details:
STAGE I: Projecting all the delays as part of a category document
STAGE II: Converting category into an array of key-value pairs, where 'k' is the delay type and 'v' is the delay
STAGE III: Unwinding the prepared array
STAGE IV: Grouping by carrier and delay type (k) and summing up the delay for each type
STAGE V: Sorting on the total calculated delay in ascending order
STAGE VI: Grouping on carrier and fetching the first document, which holds the minimum delay
STAGE VII: Dropping the _id field from the output
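As a side note (this stage is my own illustrative addition, not part of the answer above): if you only care about a single carrier such as "VX" from the question, prepending a $match stage reduces the amount of data that has to be grouped and sorted:
{ $match: { "carrier": "VX" } }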
I am working on software that uses MongoDB as a database. I have a collection like this (this is just one document):
{
"_id" : ObjectId("5aef51e0af42ea1b70d0c4dc"),
"EndpointId" : "89799bcc-e86f-4c8a-b340-8b5ed53caf83",
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Url" : "test",
"Tags" : [
{
"Uid" : "E2:02:00:18:DA:40",
"Type" : 1,
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Sensors" : [
{
"Type" : 1,
"Value" : NumberDecimal("-98")
},
{
"Type" : 2,
"Value" : NumberDecimal("-65")
}
]
},
{
"Uid" : "12:3B:6A:1A:B7:F9",
"Type" : 1,
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Sensors" : [
{
"Type" : 1,
"Value" : NumberDecimal("-95")
},
{
"Type" : 2,
"Value" : NumberDecimal("-59")
},
{
"Type" : 3,
"Value" : NumberDecimal("12.939770381907275")
}
]
}
]
}
and I want to run this query on it.
db.myCollection.aggregate([
{ $unwind: "$Tags" },
{
$match: {
$and: [
{
"Tags.DateTime": {
$gte: ISODate("2018-05-06T19:05:02Z"),
$lte: ISODate("2018-05-06T19:05:09Z"),
},
},
{ "Tags.Uid": { $in: ["C1:3D:CA:D4:45:11"] } },
],
},
},
{ $unwind: "$Tags.Sensors" },
{ $match: { "$Tags.Sensors.Type": { $in: [1, 2] } } },
{
$project: {
_id: 0,
EndpointId: "$EndpointId",
TagId: "$Tags.Uid",
Url: "$Url",
TagType: "$Tags.Type",
Date: "$Tags.DateTime",
SensorType: "$Tags.Sensors.Type",
Value: "$Tags.Sensors.Value",
},
},
])
The problem is that the second $match (the one that checks $Tags.Sensors.Type) doesn't work and doesn't affect the result of the query.
How can I solve that?
If this is not the right way, what is the right way to run these conditions?
The $match stage accepts field names without a leading $ sign. You've done that correctly in your first $match stage but in the second one you write $Tags.Sensors.Type. Simply removing the leading $ sign should make your query work.
Mind you, the whole thing can be a bit simplified (and some beautification doesn't hurt, either):
You don't need to use $and in your example since it's assumed by default if you specify more than one criterion in a filter.
The $in that you use for the Tags.Sensors.Type filter can be replaced by a simple equality match unless you have more than one element in the list of acceptable values.
In the $project stage, instead of (kind of) duplicating identical field names you can use the <field>: 1 syntax unless the order of the fields matters.
So the final query would be something like this.
db.myCollection.aggregate([
{
"$unwind" : "$Tags"
},
{
"$match" : {
"Tags.DateTime" : { "$gte" : ISODate("2018-05-06T19:05:02Z"), "$lte" : ISODate("2018-05-06T19:05:09Z") },
"Tags.Uid" : { "$in" : ["C1:3D:CA:D4:45:11"] }
}
}, {
"$unwind" : "$Tags.Sensors"
}, {
"$match" : {
"Tags.Sensors.Type" : { "$in" : [1,2] }
}
},
{
"$project" : {
"_id" : 0,
"EndpointId" : 1,
"TagId" : "$Tags.Uid",
"Url" : 1,
"TagType" : "$Tags.Type",
"Date" : "$Tags.DateTime",
"SensorType" : "$Tags.Sensors.Type",
"Value" : "$Tags.Sensors.Value"
}
}])
Our project database has a capped collection called values which gets updated every few minutes with new data from sensors. These sensors all belong to a single sensor node, and I would like to query the latest data from these nodes in a single aggregation. The problem I am having is filtering out just the last value of ALL the sensor types while still using only one (efficient) query. I looked around and found the $group stage, but I can't seem to figure out how to use it correctly in this case.
The database is structured as follows:
nodes:
{
"_id": 681
"sensors": [
{
"type": "foo"
},
{
"type": "bar"
}
]
}
values:
{
"_id" : ObjectId("570cc8b6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 10
}
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 681,
"value" : 20
}
{
"_id" : ObjectId("167bc997bb66750d5740665e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 200,
"value" : 20
}
{
"_id" : ObjectId("110cc9c6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-09T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 12
}
So, let's imagine I want the data from node 681; I would want a structure like this:
nodes:
{
"_id": 681
"sensors": [
{
"_id" : ObjectId("570cc8b6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 10
},
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 681,
"value" : 20
}
]
}
Notice how one value of foo is not returned, because I only want the latest value when there is more than one (which is always going to be the case). The collection is already ordered by timestamp because it is capped.
I have this query, but it just gets all the values from the database (which is waaay too much to do in a lifetime, let alone one request of the web app), so I was wondering how I would filter it before it gets aggregated.
query:
db.nodes.aggregate(
[
{
$unwind: "$sensors"
},
{
$match:{
nodeid: 681
}
},
{
$lookup:{
from: "values", localField: "sensors.type", foreignField: "type", as: "sensors"
}
}
]
)
Try this
// Pipeline
[
// Stage 1 - sort by timestamp if the collection is not already in that order (optional)
{
$sort: {
"timestamp":1
}
},
// Stage 2 - group by type & nodeid, then keep the latest item found in each group
{
$group: {
"_id":{type:"$type",nodeid:"$nodeid"},
"sensors": {"$first":"$$CURRENT"} //consider using $last if your collection is on reverse
}
},
// Stage 3 - project the desired fields
{
$project: {
"_id":"$sensors._id",
"timestamp":"$sensors.timestamp",
"type":"$sensors.type",
"nodeid":"$sensors.nodeid",
"value":"$sensors.value"
}
},
// Stage 4 - group by node and push each document into the sensors array
{
$group: {
"_id":{nodeid:"$nodeid"},
"sensors": {"$addToSet":"$$CURRENT"}
}
}
]
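A usage note (an assumption on my part, since only the pipeline array is shown above): this is meant to run against the values collection, and prepending a $match keeps the amount of grouped data small when you only need one node. A condensed version with that stage added:
db.values.aggregate([
    // optional: restrict to the node you care about before grouping
    { $match: { "nodeid": 681 } },
    // Stage 1 - sort by timestamp (ascending)
    { $sort: { "timestamp": 1 } },
    // Stage 2 - keep the latest document per (type, nodeid)
    { $group: { "_id": { type: "$type", nodeid: "$nodeid" }, "sensors": { "$last": "$$CURRENT" } } },
    // Stage 3 - flatten the grouped document
    { $project: { "_id": "$sensors._id", "timestamp": "$sensors.timestamp", "type": "$sensors.type", "nodeid": "$sensors.nodeid", "value": "$sensors.value" } },
    // Stage 4 - collect all the latest readings per node
    { $group: { "_id": "$nodeid", "sensors": { "$push": "$$CURRENT" } } }
])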
As far as I understand the document structure, there is no need to use $lookup, since all the data is in the readings (values) collection.
Please see the proposed solution:
db.readings.aggregate([{
$match : {
nodeid : 681
}
},
{
$group : {
_id : {
type : "$type",
nodeid : "$nodeid"
},
readings : {
$push : {
timestamp : "$timestamp",
value : "$value",
id : "$_id"
}
}
}
}, {
$project : {
_id : "$_id",
readings : {
$slice : ["$readings", -1]
}
}
}, {
$unwind : "$readings"
}, {
$project : {
_id : "$readings.id",
type : "$_id.type",
nodeid : "$_id.nodeid",
timestamp : "$readings.timestamp",
value : "$readings.value",
}
}, {
$group : {
_id : "$nodeid",
sensors : {
$push : {
_id : "$_id",
timestamp : "$timestamp",
value : "$value",
type:"$type"
}
}
}
}
])
and output:
{
"_id" : 681,
"sensors" : [
{
"_id" : ObjectId("110cc9c6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-09T12:06:46.344Z"),
"value" : 12,
"type" : "foo"
},
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"value" : 20,
"type" : "bar"
}
]
}
Any comments welcome!
I have a collection that has records looking like this:
"_id" : ObjectId("550424ef2f44472856286d56"), "accountId" : "123",
"contactOperations" :
[
{ "contactId" : "1", "operation" : 1, "date" : 500 },
{ "contactId" : "1", "operation" : 2, "date" : 501 },
{ "contactId" : "2", "operation" : 1, "date" : 502 }
]
}
I want to know the latest operation number that has been applied on a certain contact.
I'm using the aggregation framework to first unwind the contactOperations and then group by accountId and contactOperations.contactId, taking the max of contactOperations.date:
aggregate([{$unwind : "$contactOperations"}, {$group : {"_id":{"accountId":"$accountId", "contactId":"$contactOperations.contactId"}, "date":{$max:"$contactOperations.date"} }}])
The result I get is:
"_id" : { "accountId" : "123", "contactId" : "2" }, "time" : 502 }
"_id" : { "accountId" : "123", "contactId" : "1" }, "time" : 501 }
Which seems correct so far, but I also need the contactOperations.operation field that was recorded with $max date. How can I select that?
You have to sort the unwound values and then apply the $last operator to get the operation for the max date. Hope this query solves your problem:
aggregate([
{
$unwind: "$contactOperations"
},
{
$sort: {
"date": 1
}
},
{
$group: {
"_id": {
"accountId": "$accountId",
"contactId": "$contactOperations.contactId"
},
"date": {
$max: "$contactOperations.date"
},
"operationId": {
$last: "$contactOperations.operation"
}
}
}
])
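For reference, with the single sample document from the question, this pipeline should produce something like the following (my own dry run of the logic, not verified output):
{ "_id" : { "accountId" : "123", "contactId" : "1" }, "date" : 501, "operationId" : 2 }
{ "_id" : { "accountId" : "123", "contactId" : "2" }, "date" : 502, "operationId" : 1 }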