I have a database that contains information about flights. I'm trying to find the delay category that has the fewest minutes of delay. I managed to find and show the minimum number of minutes, but not the category it belongs to.
I've tried putting ":true" after each field to show it:
db.delayData.aggregate([{
$group: {
"_id": "$carrier",
"arr_sum": {
$sum: "$arr_delay"
},
"carrier_sum": {
$sum: "$carrier_delay"
},
"weather_sum": {
$sum: "$weather_delay"
},
"nas_sum": {
$sum: "$nas_delay"
},
"sec_sum": {
$sum: "$security_delay"
},
"late_air_sum": {
$sum: "$late_aircraft_delay"
}
}
},
{
$project: {
"min_delay_category": {
$min: ["$arr_sum", "$carrier_sum", "$weather_sum", "$nas_sum", "$sec_sum", "$late_air_sum"]
}
}
}
]).pretty()
I want to get something like this:
{ "_id" : "VX", "min_delay_category" : 1449, "sec_sum"... }
I've tried to write:
..."$sec_sum":1,"$late_air_sum":1]
but the error message is:
"missing ] after element list"
when I wrote:
...{"sec_sum":1},{"late_air_sum":1}]
I don't get an error message, but it gives me the second-smallest result, not the smallest one.
for example:
{ "_id" : "VX", "min_delay_category" : 69081 }
but the true result for "VX" is 1449
The following query can get us the expected output:
db.collection.aggregate([
{
$project:{
"carrier":1,
"category.arr_delay":"$arr_delay",
"category.carrier_delay":"$carrier_delay",
"category.weather_delay":"$weather_delay",
"category.nas_delay":"$nas_delay",
"category.security_delay":"$security_delay",
"category.late_aircraft_delay":"$late_aircraft_delay"
}
},
{
$project:{
"carrier":1,
"categories":{
$objectToArray:"$category"
}
}
},
{
$unwind:"$categories"
},
{
$group:{
"_id":{
"carrier":"$carrier",
"category":"$categories.k"
},
"carrier":{
$first:"$carrier"
},
"category":{
$first:"$categories.k"
},
"total_delay":{
$sum:"$categories.v"
}
}
},
{
$sort:{
"total_delay":1
}
},
{
$group:{
"_id": "$carrier",
"carrier":{
$first:"$carrier"
},
"category":{
$first:"$category"
},
"minimum_delay":{
$first:"$total_delay"
}
}
},
{
$project:{
"_id":0
}
}
]).pretty();
Data set:
{
"_id" : ObjectId("5d5b5058435c7584459b7bae"),
"year" : 2003,
"month" : 6,
"carrier" : "AA",
"carrier_name" : "American Airlines Inc.",
"airport" : "ABQ",
"airport_name" : "Albuquerque, NM: Albuquerque International Sunport",
"arr_flights" : 307,
"arr_del15" : 56,
"carrier_ct" : 14.68,
"weather_ct" : 10.79,
"nas_ct" : 19.09,
"security_ct" : 1.48,
"late_aircraft_ct" : 9.96,
"arr_cancelled" : 1,
"arr_diverted" : 1,
"arr_delay" : 2530,
"carrier_delay" : 510,
"weather_delay" : 621,
"nas_delay" : 676,
"security_delay" : 25,
"late_aircraft_delay" : 698,
"" : ""
},
{
"_id" : ObjectId("5d5b5058435c7584459b7bbe"),
"year" : 2003,
"month" : 6,
"carrier" : "AA",
"carrier_name" : "American Airlines Inc.",
"airport" : "ABQ",
"airport_name" : "Albuquerque, NM: Albuquerque International Sunport",
"arr_flights" : 307,
"arr_del15" : 56,
"carrier_ct" : 14.68,
"weather_ct" : 10.79,
"nas_ct" : 19.09,
"security_ct" : 1.48,
"late_aircraft_ct" : 9.96,
"arr_cancelled" : 1,
"arr_diverted" : 1,
"arr_delay" : 2530,
"carrier_delay" : 510,
"weather_delay" : 621,
"nas_delay" : 676,
"security_delay" : 2512,
"late_aircraft_delay" : 698,
"" : ""
}
Output:
{ "carrier" : "AA", "category" : "carrier_delay", "minimum_delay" : 1020 }
Aggregation stage details:
STAGE I: Project all the delays as parts of a "category" subdocument
STAGE II: Convert "category" into an array of key-value pairs,
where 'k' is the delay type and 'v' is the delay
STAGE III: Unwind the prepared array
STAGE IV: Group by carrier and delay type (k) and sum up the delay for each type
STAGE V: Sort by the total calculated delay in ascending order
STAGE VI: Group by carrier and fetch the first document,
which holds the minimum delay
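As an aside, if you would rather keep the single-$group shape from the question, a minimal sketch of an alternative (assuming MongoDB 3.4.4+ for $objectToArray, and the same collection and summed field names as in the question) is to turn the per-carrier sums into key-value pairs and pick the entry whose value equals the minimum:
db.delayData.aggregate([
  {
    $group: {
      "_id": "$carrier",
      "arr_sum": { $sum: "$arr_delay" },
      "carrier_sum": { $sum: "$carrier_delay" },
      "weather_sum": { $sum: "$weather_delay" },
      "nas_sum": { $sum: "$nas_delay" },
      "sec_sum": { $sum: "$security_delay" },
      "late_air_sum": { $sum: "$late_aircraft_delay" }
    }
  },
  {
    $project: {
      // turn the sums into an array of { k: <category>, v: <total> } pairs
      "sums": {
        $objectToArray: {
          "arr_sum": "$arr_sum",
          "carrier_sum": "$carrier_sum",
          "weather_sum": "$weather_sum",
          "nas_sum": "$nas_sum",
          "sec_sum": "$sec_sum",
          "late_air_sum": "$late_air_sum"
        }
      }
    }
  },
  {
    $project: {
      // keep the first entry whose value equals the minimum of all sums
      "min_delay_category": {
        $arrayElemAt: [
          {
            $filter: {
              input: "$sums",
              as: "s",
              cond: { $eq: ["$$s.v", { $min: "$sums.v" }] }
            }
          },
          0
        ]
      }
    }
  }
]).pretty()
This should return something like { "_id" : "VX", "min_delay_category" : { "k" : "sec_sum", "v" : 1449 } } for each carrier.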
I have a report that has been developed in PowerBI. It runs over a collection of jobs, and for a given month and year counts the number of jobs that were created, due or completed in that month using measures.
I am attempting to reproduce this report using a MongoDB aggregation pipeline. At first, I thought I could just use the $group stage to do this, but quickly realised that grouping by a specific date would exclude jobs.
Some sample documents are below (most fields excluded as they are not relevant):
{
"_id": <UUID>,
"createdOn": ISODate("2022-07-01T00:00"),
"dueOn": ISODate("2022-08-01T00:00"),
"completedOn": ISODate("2022-07-29T00:00")
},
{
"_id": <UUID>,
"createdOn": ISODate("2022-06-01T00:00"),
"dueOn": ISODate("2022-08-01T00:00"),
"completedOn": ISODate("2022-07-24T00:00")
}
For example, if I group by created date, the record for July 2022 would show 1 created job and only 1 completed job, but it should show 2.
How can I go about recreating this report? One idea was that I needed to determine the minimum and maximum of all the possible dates across those 3 date fields in my collection, but I don't know where to go from there.
I ended up solving this by using a facet. I followed this process:
Each facet field grouped by a different date field from the source documents and then aggregated the relevant values (e.g. counts, or sums as required). I made sure each field produced in the facets had a unique name.
I then did a project stage where I took each of the facet stage fields (arrays) and concatenated them into a single array.
I unwound the array and then replaced the root to make it simpler to work with.
I then grouped again by the _id field, which was set to the relevant date during the facet stage, and then grabbed the relevant fields.
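In outline, the pattern looks like this (a minimal sketch only, using the sample field names createdOn / completedOn / dueOn and plain counts; $dateTrunc requires MongoDB 5.0+; the real pipeline with all the measures follows below):
db.getCollection("jobs").aggregate([
  {
    $facet: {
      // one facet per date field, each grouped by its own month
      "created": [
        { $group: { _id: { $dateTrunc: { date: "$createdOn", unit: "month" } }, createdCount: { $sum: 1 } } }
      ],
      "completed": [
        { $group: { _id: { $dateTrunc: { date: "$completedOn", unit: "month" } }, completedCount: { $sum: 1 } } }
      ],
      "due": [
        { $group: { _id: { $dateTrunc: { date: "$dueOn", unit: "month" } }, dueCount: { $sum: 1 } } }
      ]
    }
  },
  // merge the three facet arrays into one stream of documents
  { $project: { docs: { $concatArrays: ["$created", "$completed", "$due"] } } },
  { $unwind: "$docs" },
  { $replaceRoot: { newRoot: "$docs" } },
  // recombine per month; each count comes from exactly one facet, the others sum to 0
  {
    $group: {
      _id: "$_id",
      createdCount: { $sum: "$createdCount" },
      completedCount: { $sum: "$completedCount" },
      dueCount: { $sum: "$dueCount" }
    }
  }
])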
The relevant parts of the pipeline are below:
db.getCollection("jobs").aggregate(
// Pipeline
[
// Stage 3
{
$facet: {
//Facet 1, group by created date, count number of jobs created
//facet 2, group by completed date, count number of jobs completed
//facet 3, group by due date, count number of jobs due
"created" : [
{
$addFields : {
"monthStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$createdAt",
"unit" : "month",
"binSize" : 1.0,
"timezone" : "$timezone",
"startOfWeek" : "mon"
}
},
"timezone" : "$timezone"
}
}
}
},
"yearStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$createdAt",
"unit" : "year",
"binSize" : 1.0,
"timezone" : "$timezone"
}
},
"timezone" : "$timezone"
}
}
}
}
}
},
{
$group : {
"_id" : {
"year" : "$yearStarting",
"month" : "$monthStarting"
},
"monthStarting" : {
"$first" : "$monthStarting"
},
"yearStarting" : {
"$first" : "$yearStarting"
},
"createdCount": {$sum: 1}
}
}
],
"completed" : [
{
$addFields : {
"monthStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$completedDate",
"unit" : "month",
"binSize" : 1.0,
"timezone" : "$timezone",
"startOfWeek" : "mon"
}
},
"timezone" : "$timezone"
}
}
}
},
"yearStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$completedDate",
"unit" : "year",
"binSize" : 1.0,
"timezone" : "$timezone"
}
},
"timezone" : "$timezone"
}
}
}
}
}
},
{
$group : {
"_id" : {
"year" : "$yearStarting",
"month" : "$monthStarting"
},
"monthStarting" : {
"$first" : "$monthStarting"
},
"yearStarting" : {
"$first" : "$yearStarting"
},
"completedCount": {$sum: 1}
}
}
],
"due": [
{
$match: {
"dueDate": {$ne: null}
}
},
{
$addFields : {
"monthStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$dueDate",
"unit" : "month",
"binSize" : 1.0,
"timezone" : "$timezone",
"startOfWeek" : "mon"
}
},
"timezone" : "$timezone"
}
}
}
},
"yearStarting" : {
"$dateFromString" : {
"dateString" : {
"$dateToString" : {
"date" : {
"$dateTrunc" : {
"date" : "$dueDate",
"unit" : "year",
"binSize" : 1.0,
"timezone" : "$timezone"
}
},
"timezone" : "$timezone"
}
}
}
}
}
},
{
$group : {
"_id" : {
"year" : "$yearStarting",
"month" : "$monthStarting"
},
"monthStarting" : {
"$first" : "$monthStarting"
},
"yearStarting" : {
"$first" : "$yearStarting"
},
"dueCount": {$sum: 1},
"salesRevenue": {$sum: "$totalSellPrice"},
"costGenerated": {$sum: "$totalBuyPrice"},
"profit": {$sum: "$profit"},
"avgValue": {$avg: "$totalSellPrice"},
"finalisedRevenue": {$sum: {
$cond: {
"if": {$in: ["$status",["Finalised","Closed"]]},
"then": "$totalSellPrice",
"else": 0
}
}}
}
}
]
}
},
// Stage 4
{
$project: {
"docs": {$concatArrays: ["$created","$completed","$due"]}
}
},
// Stage 5
{
$unwind: {
path: "$docs",
}
},
// Stage 6
{
$replaceRoot: {
// promote each facet result document to the root
"newRoot": "$docs"
}
},
// Stage 7
{
$group: {
_id: "$_id",
"monthStarting" : {
"$first" : "$monthStarting"
},
"yearStarting" : {
"$first" : "$yearStarting"
},
"monthStarting" : {
"$first" : "$monthStarting"
},
"createdCountSum" : {
"$sum" : "$createdCount"
},
"completedCountSum" : {
"$sum" : "$completedCount"
},
"dueCountSum" : {
"$sum" : "$dueCount"
},
"salesRevenue" : {
"$sum" : "$salesRevenue"
},
"costGenerated" : {
"$sum" : "$costGenerated"
},
"profit" : {
"$sum" : "$profit"
},
"finalisedRevenue" : {
"$sum" : "$finalisedRevenue"
},
"avgJobValue": {
$sum: "$avgValue"
}
}
},
],
);
I have a MongoDB aggregation pipeline that has been frustrating me for a while now, because it never seems to be accurate or correct to my needs. The aim is to count the number of new unique users each day per chatbot, starting from the very beginning.
Here's what my pipeline looks like right now.
[
{
"$project" : {
"_id" : 0,
"bot_id" : 1,
"customer_id" : 1,
"timestamp" : {
"$ifNull" : [
'$incoming_log.created_at', '$outcome_log.created_at'
]
}
}
},
{
"$project" : {
"customer_id" : 1,
"bot_id" : 1,
"timestamp" : {
"$dateFromString" : {
"dateString" : {
"$substr" : [
"$timestamp", 0, 10
]
}
}
}
}
},
{
"$group" : {
"_id" : "$customer_id",
"timestamp" : {
"$first" : "$timestamp"
},
"bot_id" : {
"$addToSet" : "$bot_id"
}
}
},
{
"$unwind" : "$bot_id"
},
{
"$group" : {
"_id" : {
"bot_id" : "$bot_id",
"customer_id" : "$_id"
},
"timestamp" : {
"$first" : "$timestamp"
}
}
},
{
"$project" : {
"_id" : 0,
"timestamp" : 1,
"customer_id" : "$_id.customer_id",
"bot_id" : "$_id.bot_id"
}
},
{
"$group" : {
"_id": {
"timestamp" : "$timestamp",
"bot_id" : "$bot_id"
},
"new_users" : {
"$sum" : 1
}
}
},
{
"$project" : {
"_id" : 0,
"timestamp" : "$_id.timestamp",
"bot_id" : "$_id.bot_id",
"new_users" : 1
}
}
]
Some sample data to give an idea of what the documents look like:
{
"mid" : "...",
"bot_id" : "...",
"bot_name" : "JOBBY",
"customer_id" : "U122...",
"incoming_log" : {
"created_at" : ISODate("2020-12-08T09:14:16.237Z"),
"event_payload" : "",
"event_type" : "text"
},
"outcome_log" : {
"created_at" : ISODate("2020-12-08T09:14:18.145Z"),
"distance" : 0.25,
"incoming_msg" : "🥺"
}
}
My expected outcome is something along the lines of:
{
"new_users" : 1187.0,
"timestamp" : ISODate("2021-01-27T00:00:00.000Z"),
"bot_id" : "5ffd......."
},
{
"new_users" : 1359.0,
"timestamp" : ISODate("2021-01-27T00:00:00.000Z"),
"bot_id" : "6def......."
}
Have I overcomplicated my pipeline somewhere? I seem to get a reasonable number of new users per bot each day, but for some reason my colleague tells me that the number is too high. I need some tips, please!
I really have no idea what you are looking for.
"The aim is to count the number of new unique users each day per chatbot, starting from the very beginning."
What are "new unique users"? What do you mean by "starting from the very beginning"? You ask for a count per day, but you use {"$group": {"_id": "$customer_id", "timestamp": { "$first": "$timestamp" } } }
To me, your grouping does not make sense. With only a single sample document, it is almost impossible to guess what you would like to count.
Regarding grouping per day: I prefer to always work with Date values rather than strings; it is less error-prone. You may also have to consider time zones, because UTC midnight is not your local midnight. When you work with Dates, you have better control over that.
The $project stages are useless when you $group afterwards. Typically you have only one $project stage at the end.
So, here is something to start with:
db.collection.aggregate([
{
$set: {
day: {
$dateToParts: {
date: { $ifNull: ["$incoming_log.created_at", "$outcome_log.created_at"] }
}
}
}
},
{
$group: {
_id: "$customer_id",
timestamp: {$min: { $dateFromParts: { year: "$day.year", month: "$day.month", day: "$day.day" } }}
}
}
]);
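If a "new user" means a customer on the first day they ever contacted a given bot, one way to finish the count (a sketch only, assuming bot_id belongs in the per-customer grouping key; $dateTrunc needs MongoDB 5.0+, on older versions the $dateToParts/$dateFromParts pair above does the same job) is:
db.collection.aggregate([
  {
    $set: {
      day: {
        $dateTrunc: {
          date: { $ifNull: ["$incoming_log.created_at", "$outcome_log.created_at"] },
          unit: "day"
        }
      }
    }
  },
  // first day each customer was ever seen, per bot
  {
    $group: {
      _id: { bot_id: "$bot_id", customer_id: "$customer_id" },
      firstSeen: { $min: "$day" }
    }
  },
  // count the customers whose first day falls on that day, per bot
  {
    $group: {
      _id: { bot_id: "$_id.bot_id", timestamp: "$firstSeen" },
      new_users: { $sum: 1 }
    }
  },
  {
    $project: { _id: 0, bot_id: "$_id.bot_id", timestamp: "$_id.timestamp", new_users: 1 }
  }
]);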
I'm trying to clean a huge database.
Sample document:
{
"_id" : ObjectId("59fc5249d5ab401d99f3de7f"),
"addedAt" : ISODate("2017-11-03T11:26:01.744Z"),
"__v" : 0,
"check" : 17602,
"lastCheck" : ISODate("2018-04-05T11:47:00.609Z"),
"tracking" : [
{
"timeCheck" : ISODate("2017-11-06T13:17:20.861Z"),
"_id" : ObjectId("5a0060e00f3c330012bafe39"),
"rank" : 2395,
},
{
"timeCheck" : ISODate("2017-11-06T13:22:31.254Z"),
"_id" : ObjectId("5a0062170f3c330012bafe77"),
"rank" : 2395,
},
{
"timeCheck" : ISODate("2017-11-06T13:27:40.551Z"),
"_id" : ObjectId("5a00634c0f3c330012bafebe"),
"rank" : 2379,
},
{
"timeCheck" : ISODate("2017-11-06T13:32:41.084Z"),
"_id" : ObjectId("5a0064790f3c330012baff03"),
"rank" : 2395,
},
{
"timeCheck" : ISODate("2017-11-06T13:37:51.012Z"),
"_id" : ObjectId("5a0065af0f3c330012baff32"),
"rank" : 2379,
},
{
"timeCheck" : ISODate("2017-11-07T13:37:51.012Z"),
"_id" : ObjectId("5a0065af0f3c330012baff34"),
"rank" : 2379,
}]
}
I have a lot of duplicate values, but I need to deduplicate only within each day.
For example, to obtain this:
{
"_id" : ObjectId("59fc5249d5ab401d99f3de7f"),
"addedAt" : ISODate("2017-11-03T11:26:01.744Z"),
"__v" : 0,
"check" : 17602,
"lastCheck" : ISODate("2018-04-05T11:47:00.609Z"),
"tracking" : [
{
"timeCheck" : ISODate("2017-11-06T13:17:20.861Z"),
"_id" : ObjectId("5a0060e00f3c330012bafe39"),
"rank" : 2395,
},
{
"timeCheck" : ISODate("2017-11-06T13:27:40.551Z"),
"_id" : ObjectId("5a00634c0f3c330012bafebe"),
"rank" : 2379,
},
{
"timeCheck" : ISODate("2017-11-07T13:37:51.012Z"),
"_id" : ObjectId("5a0065af0f3c330012baff34"),
"rank" : 2379,
}]
}
How can I aggregate by day and then delete the duplicate values within each day?
I need to keep one value per day, even if it is identical to a value from another day.
The aggregation framework cannot update data at this stage. However, you can use the following aggregation pipeline in order to get the desired output and then use e.g. a bulk replace to update all your documents:
db.collection.aggregate({
$unwind: "$tracking" // flatten the "tracking" array into separate documents
}, {
$sort: {
"tracking.timeCheck": 1 // sort by timeCheck to allow us to use the $first operator in the next stage reliably
}
}, {
$group: {
_id: { // group by
"_id": "$_id", // "_id" and
"rank": "$tracking.rank", // "rank" and
"date": { // the "date" part of the "timeCheck" field
$dateFromParts : {
year: { $year: "$tracking.timeCheck" },
month: { $month: "$tracking.timeCheck" },
day: { $dayOfMonth: "$tracking.timeCheck" } // $dayOfMonth, not $dayOfWeek, so grouping is per calendar day
}
}
},
"doc": { $first: "$$ROOT" } // only keep the first document per group
}
}, {
$sort: {
"doc.tracking.timeCheck": 1 // restore ascending sort order - may or may not be needed...
}
}, {
$group: {
_id: "$_id._id", // merge everything again per "_id"
"addedAt": { $first: "$doc.addedAt" },
"__v": { $first: "$doc.__v" },
"check": { $first: "$doc.check" },
"lastCheck": { $first: "$doc.lastCheck" },
"tracking": { $push: "$doc.tracking" } // in order to join the tracking values into an array again
}
})
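To actually write the cleaned documents back, one option (a sketch only, assuming the mongo shell and a result set that fits in memory; for a huge collection you would batch the operations, and on MongoDB 4.4+ a $merge stage back into the same collection would be another option) is to feed the aggregation result into a bulk replace:
var ops = [];
db.collection.aggregate([
  /* ...the pipeline above... */
]).forEach(function (doc) {
  // replace each original document with its cleaned version (same _id)
  ops.push({ replaceOne: { filter: { _id: doc._id }, replacement: doc } });
});
if (ops.length > 0) {
  db.collection.bulkWrite(ops);
}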
I have been learning MongoDB, and while doing so I tried out aggregation on one of my collections. I grouped the employee details by age and filtered with $match; my question is: is it possible to also display the other key-value pairs once a document passes the age criteria?
db.employee.aggregate([
{ $match: { age: { $gte: 23 } } },
{
$group: {
_id:'$age',
total: { $sum: 1 },
name: { $addToSet: '$name' }
}
}
])
and the output was like this
{ "_id" : 27, "total" : 2, "name" : [ "indhu", "logesh" ] }
{ "_id" : 26, "total" : 1, "name" : [ "keerthana" ] }
{ "_id" : 25, "total" : 1, "name" : [ "sneha" ] }
{ "_id" : 24, "total" : 1, "name" : [ "dhiva" ] }
{ "_id" : 23, "total" : 1, "name" : [ "elango" ] }
where _id denotes their age.
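If the goal is simply to keep the other fields of every employee that passes the age filter, one common way (a sketch, assuming you want the whole documents carried along with the count) is to push $$ROOT in the group stage instead of just the name:
db.employee.aggregate([
  { $match: { age: { $gte: 23 } } },
  {
    $group: {
      _id: '$age',
      total: { $sum: 1 },
      employees: { $push: '$$ROOT' } // keeps every field of each matching document
    }
  }
])
Each group then contains the full employee documents in an employees array rather than only the names.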
In a MongoDB database I am trying to group some data by date (one group for each day of the year) and then add an additional field that is the result of multiplying two of the existing fields.
The data structure is:
{
"_id" : ObjectId("567a7c6d9da4bc18967a3947"),
"units" : 3.0,
"price" : 50.0,
"name" : "Name goes here",
"datetime" : ISODate("2015-12-23T10:50:21.560+0000")
}
I first tried a two-stage approach using $project and then $group, like this:
db.things.aggregate(
[
{
$project: {
"_id" : 1,
"name" : 1,
"units" : 1,
"price" : 1,
"datetime":1,
"unitsprice" : { $multiply: [ "$price", "$units" ] }
}
},
{
$group: {
"_id" : {
"day" : {
"$dayOfMonth" : "$datetime"
},
"month" : {
"$month" : "$datetime"
},
"year" : {
"$year" : "$datetime"
}
},
"things" : {
"$push" : "$$ROOT"
}
}
}
],
)
In this case, the first step (the $project) gives the expected output (with the expected value of unitsprice), but when running the second $group step it outputs this error:
"errmsg" : "$multiply only supports numeric types, not String",
"code" : 16555
I also tried turning things around, doing the $group step first and then the $project:
db.things.aggregate(
[
{
$group: {
"_id" : {
"day" : {
"$dayOfMonth" : "$datetime"
},
"month" : {
"$month" : "$datetime"
},
"year" : {
"$year" : "$datetime"
}
},
"things" : {
"$push" : "$$ROOT"
}
}
},
{
$project: {
"_id" : 1,
"things":{
"name" : 1,
"units" : 1,
"price" : 1,
"datetime":1,
"unitsprice" : { $multiply: [ "$price", "$units" ] }
}
}
}
],
);
But in this case, the result of the multiplication is unitsprice: null.
Is there any way of doing this multiplication? Also, it would be nice to do it in a way that the output has no nested fields, so it would look like:
{"_id":
"units":
"price":
"name":
"datetime":
"unitsprice":
}
Thanks in advance.
PS: I am running MongoDB 3.2.
I finally found the error. When importing the data, a few of the price fields were created as strings. Surprisingly, the error didn't come up when doing the multiplication in the $project step (the output was normal until it reached the first wrong field, then it stopped), but only in the $group step.
In order to find the text fields, I used this query:
db.things.find( { price: { $type: 2 } } );
Thanks for the hints
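As a follow-up, the null result in the second attempt most likely also comes from the fact that after $group there is no top-level price or units field any more, so $multiply receives missing operands. Once the string prices are found, they can be converted in place from the shell (a sketch, assuming every string price holds numeric text; $toDouble, which would let you coerce inside the pipeline instead, only exists from MongoDB 4.0 and is therefore not available on 3.2):
// convert the string prices found with the $type query into numbers
db.things.find({ price: { $type: 2 } }).forEach(function (doc) {
  db.things.updateOne({ _id: doc._id }, { $set: { price: parseFloat(doc.price) } });
});
After that, the original $project + $group pipeline multiplies cleanly.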