MongoDB aggregation framework: value of a field where another field is at its max

I have a collection that has records looking like this:
"_id" : ObjectId("550424ef2f44472856286d56"), "accountId" : "123",
"contactOperations" :
[
{ "contactId" : "1", "operation" : 1, "date" : 500 },
{ "contactId" : "1", "operation" : 2, "date" : 501 },
{ "contactId" : "2", "operation" : 1, "date" : 502 }
]
}
I want to know the latest operation number that has been applied on a certain contact.
I'm using the aggregation framework to first unwind the contactOperations, then group by accountId and contactOperations.contactId, taking the max of contactOperations.date.
aggregate([{$unwind : "$contactOperations"}, {$group : {"_id":{"accountId":"$accountId", "contactId":"$contactOperations.contactId"}, "date":{$max:"$contactOperations.date"} }}])
The result I get is:
"_id" : { "accountId" : "123", "contactId" : "2" }, "time" : 502 }
"_id" : { "accountId" : "123", "contactId" : "1" }, "time" : 501 }
Which seems correct so far, but I also need the contactOperations.operation field that was recorded at that max date. How can I select that?

You have to sort the unwound values and then apply the $last operator to get the operation for the max date. Hopefully this query solves your problem:
aggregate([
    { $unwind: "$contactOperations" },
    { $sort: { "contactOperations.date": 1 } },
    { $group: {
        "_id": {
            "accountId": "$accountId",
            "contactId": "$contactOperations.contactId"
        },
        "date": { $max: "$contactOperations.date" },
        "operationId": { $last: "$contactOperations.operation" }
    }}
])
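An equivalent sketch (against the same sample collection) sorts descending and takes $first for both fields, which guarantees that the returned date and operation come from the same array element:
aggregate([
    { $unwind: "$contactOperations" },
    { $sort: { "contactOperations.date": -1 } },
    { $group: {
        "_id": { "accountId": "$accountId", "contactId": "$contactOperations.contactId" },
        "date": { $first: "$contactOperations.date" },
        "operationId": { $first: "$contactOperations.operation" }
    }}
])
For the sample documents above, contact "1" resolves to operation 2 (date 501) and contact "2" to operation 1 (date 502).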

Related

What's the alternative to $replaceRoot on MongoDB? ($replaceRoot is incompatible with DocumentDB)

The problem: I'm trying to make a query on MongoDB, but I'm using Amazon's DocumentDB, where some operations are not supported. I wanted to find an alternative that gets the same result, if possible. Basically, I want to change the root of the result: instead of it being the top-level entity, I need it to be a merge of values from different levels of the document.
So, I have the following structure in my collection:
{
"_id" : ObjectId("5e598bf4d98f7c70f9aa3b58"),
"status" : "active",
"invoices" : [
{
"_id" : ObjectId("5e598bf13b24713f50600375"),
"value" : 1157.52,
"receivables" : [
{
"situation" : {
"status" : "active",
"reason" : []
},
"rec_code" : "001",
"_id" : ObjectId("5e598bf13b24713f50600374"),
"expiration_date" : ISODate("2020-03-25T00:00:00.000Z"),
"value" : 1157.52
}
],
"invoice_code" : 9773,
"buyer" : {
"legal_name" : "test name",
"buyer_code" : "223132165498797"
}
},
],
"seller" : {
"code" : "321654897986",
"name" : "test name 2"
}
}
What I want to achieve is to list all "receivables" like this, where the _id is the _id of the receivable:
[{
"_id" : ObjectId("5e598bf13b24713f50600374"),
"situation" : {
"status" : "active",
"reason" : []
},
"rec_code" : "001",
"expiration_date" : ISODate("2020-03-25T00:00:00.000Z"),
"value" : 1157.52,
"status" : "active",
"seller" : {
"cnpj" : "321654897986",
"name" : "test name 2"
},
"invoice_code" : 9773.0,
"buyer" : {
"legal_name" : "test name",
"cnpj" : "223132165498797"
}
}]
This I can do with $replaceRoot using the query below on MongoDB, but on DocumentDB I can't use $replaceRoot or $mergeObjects. Do you know how I can get the same result with other operators?
db.testCollection.aggregate([
    { $unwind: "$invoices" },
    { $replaceRoot: {
        newRoot: { $mergeObjects: ["$$ROOT", "$invoices"] }
    }},
    { $project: { "_id": 0, "value": 0, "created_at": 0, "situation": 0 } },
    { $unwind: "$receivables" },
    { $replaceRoot: {
        newRoot: { $mergeObjects: ["$receivables", "$$ROOT"] }
    }},
    { $project: { "created_at": 0, "receivables": 0, "invoices": 0 } }
])
After going through the MongoDB operators, I could get a similar result to what I wanted with the following query, without $replaceRoot. It turns out it was a better query, I think:
db.testCollection.aggregate([
    { $unwind: "$invoices" },
    { $project: {
        created_at: 1,
        seller: "$seller",
        buyer: "$invoices.buyer",
        invoice_code: "$invoices.invoice_code",
        receivable: "$invoices.receivables"
    }},
    { $unwind: "$receivable" },
    { $project: {
        _id: "$receivable._id",
        seller: 1,
        buyer: 1,
        invoice_code: 1,
        receivable: 1,
        created_at: 1
    }},
    { $sort: { "created_at": -1 } }
])
This query resulted in the following list structure:
[{
"created_at" : ISODate("2020-03-06T09:47:26.161Z"),
"seller" : {
"name" : "Test name",
"cnpj" : "21231232131232"
},
"buyer" : {
"cnpj" : "21322132164654",
"legal_name" : "Test name 2"
},
"invoice_code" : 66119,
"receivable" : {
"rec_code" : "001",
"_id" : ObjectId("5e601bb5efff82b92935bad4"),
"expiration_date" : ISODate("2020-03-17T00:00:00.000Z"),
"value" : 6540.7,
"situation" : {
"status" : "active",
"reason" : []
}
},
"_id" : ObjectId("5e601bb5efff82b92935bad4")
}]
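If the receivable fields need to sit at the root of each document (as in the desired output earlier) rather than nested under receivable, plain $unwind/$project stages can flatten them without $replaceRoot. A minimal sketch, assuming the same field names (renames such as cnpj would need additional nested projections):
db.testCollection.aggregate([
    { $unwind: "$invoices" },
    { $unwind: "$invoices.receivables" },
    { $project: {
        _id: "$invoices.receivables._id",
        situation: "$invoices.receivables.situation",
        rec_code: "$invoices.receivables.rec_code",
        expiration_date: "$invoices.receivables.expiration_date",
        value: "$invoices.receivables.value",
        status: 1,
        seller: 1,
        invoice_code: "$invoices.invoice_code",
        buyer: "$invoices.buyer"
    }}
])
Since this only uses $unwind and $project, it should also run on DocumentDB.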
Support for $replaceRoot was added to Amazon DocumentDB in January 2021.

mongodb aggregation $max and corresponding timestamp

I'm being challenged by $group / $max in an aggregation with MongoDB in a Node Express app. Here is a sample of the collection:
{"_id":"5b7e78cf022be03c35776bec",
"humidity":60,
"pressure":1014.18,
"temperature":26.8,
"light":2464,
"timestampiso":"2018-08-23T09:05:19.112Z",
"timestamp":1535015119112
},
{
"_id":"5b7e7892022be03c35776bea",
"humidity":60.4,
"pressure":1014.14,
"temperature":26.7,
"light":2422,
"timestampiso":"2018-08-23T09:04:18.115Z",
"timestamp":1535015058115
},
{
"_id":"5b7e7855022be03c35776be8",
"humidity":60.6,
"pressure":1014.2,
"temperature":26.6,
"light":2409,
"timestampiso":"2018-08-23T09:03:17.113Z",
"timestamp":1535014997113
}]
What I'm trying to do is query the collection, first retrieving the entries of the last hour based on the timestamp, and then looking for the highest pressure in that sample (there should be 60 entries, as there is one entry per minute).
What I can do is find this value. What I'm struggling with is getting the timestamp related to that max value.
When I run
db.collection("ArduinoSensorMkr1000").
aggregate([{ "$match" : {"timestamp" : {"$gte" : (Date.now()-60*60*1000)}}},
{ "$group" : {"_id" : null, maxpressure : {"$max" : "$pressure"}
}
},
{
"$project" : { "_id" : 0 }
}
])
Fine, the output is correct and I get the maxpressure like so:
[{"maxpressure":1014.87}]
but what I'm trying to output is the maxpressure field together with its corresponding timestamp. The output should look like so:
[{"maxpressure":1014.87,"timestamp":1535015058115}]
Any hints on how I get this timestamp value to show?
Thank you for your support
You can try this: first you need to sort your data using $sort, then you can pick the max value using $first.
QUERY
db.col.aggregate([
{ "$match": { "timestamp": { "$gte": (Date.now() - 60 * 60 * 1000) } } },
{ "$sort": { "pressure": -1 } },
{
"$group": {
"_id": null, "maxpressure": { "$first": "$pressure" },
"timestamp": { "$first": "$timestamp" }
}
},
{
"$project": { "_id": 0 }
}
])
DATA
[{
"_id" : "5b7e78cf022be03c35776bec",
"humidity" : 60.0,
"pressure" : 1014.18,
"temperature" : 26.8,
"light" : 2464.0,
"timestampiso" : "2018-08-23T09:05:19.112Z",
"timestamp" : 1535015119112.0
},
{
"_id" : "5b7e7892022be03c35776bea",
"humidity" : 60.4,
"pressure" : 1014.14,
"temperature" : 26.7,
"light" : 2422.0,
"timestampiso" : "2018-08-23T09:04:18.115Z",
"timestamp" : 1535015058115.0
},
{
"_id" : "5b7e7855022be03c35776be8",
"humidity" : 60.6,
"pressure" : 1014.2,
"temperature" : 26.6,
"light" : 2409.0,
"timestampiso" : "2018-08-23T09:03:17.113Z",
"timestamp" : 1535014997113.0
}]
THE OUTPUT
{
    "maxpressure" : 1014.18,
    "timestamp" : 1535015119112.0
}
My suggestion is to use sort/limit instead of grouping. This way you get the entire document and can then project only the interesting fields:
db['ArduinoSensorMkr1000'].aggregate([
    { "$match" : { "timestamp" : { "$gte" : (Date.now() - 5*60*60*1000) } } },
    { $sort: { pressure: -1 } },
    { $limit: 1 },
    { "$project" : { "_id" : 0, "timestamp": 1, "pressure": 1 } }
])
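Since the question mentions a Node/Express app, here is a minimal sketch of running the sort/limit pipeline with the official MongoDB Node.js driver (the connection string and database name are assumptions):
const { MongoClient } = require("mongodb");

async function latestMaxPressure() {
    // assumption: local MongoDB instance and a database named "test"
    const client = await MongoClient.connect("mongodb://localhost:27017");
    try {
        const col = client.db("test").collection("ArduinoSensorMkr1000");
        // resolves to e.g. [{ pressure: 1014.87, timestamp: 1535015058115 }]
        return await col.aggregate([
            { $match: { timestamp: { $gte: Date.now() - 60 * 60 * 1000 } } },
            { $sort: { pressure: -1 } },
            { $limit: 1 },
            { $project: { _id: 0, pressure: 1, timestamp: 1 } }
        ]).toArray();
    } finally {
        await client.close();
    }
}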

Aggregate (group by) query in MongoDB by 2 fields

I am using MongoDB. My collection object structure is like the following:
{
"_id" : ObjectId("5a58800acebcda57188bf0aa"),
"title" : "Article title",
"categories" : "politics",
"url" : "https://example.com",
"article_date" : ISODate("2018-01-11T10:00:00.000Z"),
"content" : "content here..."
},
{
"_id" : ObjectId("5a58800acebcda57188bf0aa"),
"title" : "Article title 2",
"categories" : "economics",
"url" : "https://example.com",
"article_date" : ISODate("2018-01-12T10:00:00.000Z"),
"content" : "content here..."
}
Articles are published each day and I have many categories.
How can I group the data by date and count documents by specific category, for example:
{
"date": ISODate("2018-01-11T10:00:00.000Z"),
"result": [{
"category": "politics",
"count": 2
}, {
"category": "economics",
"count": 1
}]
},
{
"date": ISODate("2018-01-12T10:00:00.000Z"),
"result": [{
"category": "politics",
"count": 2
}, {
"category": "economics",
"count": 1
}]
}
Thank you in advance
You need to $group twice to get the result: first by article_date and categories, then $group on article_date alone.
db.art.aggregate([
{$group : {
_id : {article_date : "$article_date", categories : "$categories"},
count : {$sum : 1}
}},
{$group : {
_id : {article_date : "$_id.article_date"},
result : {$push : {category : "$_id.categories", count : "$count"}}
}},
{$addFields :{
_id : "$_id.article_date"
}}
]).pretty()
Result for the sample data in the question:
{
"_id" : ISODate("2018-01-11T10:00:00Z"),
"result" : [
{
"category" : "politics",
"count" : 1
}
]
}
{
"_id" : ISODate("2018-01-12T10:00:00Z"),
"result" : [
{
"category" : "economics",
"count" : 1
}
]
}
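If the output keys should match the desired shape exactly ({date, result} instead of {_id, result}), a final $project can rename the field; a small sketch of the same pipeline with that stage added:
db.art.aggregate([
    {$group : {
        _id : {article_date : "$article_date", categories : "$categories"},
        count : {$sum : 1}
    }},
    {$group : {
        _id : "$_id.article_date",
        result : {$push : {category : "$_id.categories", count : "$count"}}
    }},
    {$project : {_id : 0, date : "$_id", result : 1}}
])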

Sort a match group by id in aggregate

(Mongo newbie here, sorry.) I have a MongoDB collection, the result of a map-reduce, with this schema:
{
"_id" : "John Snow",
"value" : {
"countTot" : 500,
"countCall" : 30,
"comment" : [
{
"text" : "this is a text",
"date" : 2016-11-17 00:00:00.000Z,
"type" : "call"
},
{
"text" : "this is a text",
"date" : 2016-11-12 00:00:00.000Z,
"type" : "visit"
},
...
]
}
}
My goal is to have a document containing all the comments of a certain type. For example, a document for John Snow with all the calls.
I managed to get all the comments of a certain type using this:
db.general_stats.aggregate(
{ $unwind: '$value.comment' },
{ $match: {
'value.comment.type': 'call'
}}
)
However, I can't find a way to group the data received by the ID (for example John Snow), even using the $group stage. Any idea?
Thanks for reading.
Here is the solution for your query.
db.getCollection('calls').aggregate([
{ $unwind: '$value.comment' },
{ $match: {
'value.comment.type': 'call'
}},
{
$group : {
_id : "$_id",
comment : { $push : "$value.comment"},
countTot : {$first : "$value.countTot"},
countCall : {$first : "$value.countCall"},
}
},
{
$project : {
_id : 1,
value : {"countTot":"$countTot","countCall":"$countCall","comment":"$comment"}
}
}
])
Alternatively, you can go with $project and the $filter option:
db.getCollection('calls').aggregate([
{
$project: {
"value.comment": {
$filter: {
input: "$value.comment",
as: "comment",
cond: { $eq: [ "$$comment.type", 'call' ] }
}
},
"value.countTot":"$value.countTot",
"value.countCall":"$value.countCall",
}
}
])
In both cases, below is my output:
{
"_id" : "John Snow",
"value" : {
"countTot" : 500,
"countCall" : 30,
"comment" : [
{
"text" : "this is a text",
"date" : "2016-11-17 00:00:00.000Z",
"type" : "call"
},
{
"text" : "this is a text 2",
"date" : "2016-11-17 00:00:00.000Z",
"type" : "call"
}
]
}
}
Here is a query which extends the one in the OP.
db.general_stats.aggregate(
{ $unwind: '$value.comment' },
{ $match: {
'value.comment.type': 'call'
}},
{$group : {_id : "$_id", allValues : {"$push" : "$$ROOT"}}},
{$project : {"allValues" : 1, _id : 0} },
{$unwind : "$allValues" }
);
Output:
{
"allValues" : {
"_id" : "John Snow",
"value" : {
"countTot" : 500,
"countCall" : 30,
"comment" : {
"text" : "this is a text",
"date" : ISODate("2016-11-25T10:46:49.258Z"),
"type" : "call"
}
}
}
}
Got my answer looking at this:
How to retrieve all matching elements present inside array in Mongo DB?
using the $addToSet operator in the $group stage.
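For completeness, a minimal sketch of that $addToSet variant (same unwind/match stages as above; it differs from $push only in that duplicate comments are collapsed):
db.general_stats.aggregate([
    { $unwind: '$value.comment' },
    { $match: { 'value.comment.type': 'call' } },
    { $group: {
        _id: "$_id",
        comments: { $addToSet: "$value.comment" }
    }}
])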

MongoDB filtering out subdocuments with lookup aggregation

Our project database has a capped collection called values which gets updated every few minutes with new data from sensors. These sensors all belong to a single sensor node, and I would like to query the last data from these nodes in a single aggregation. The problem I am having is filtering out just the latest value for ALL the sensor types while still having only one (efficient) query. I looked around and found the $group stage, but I can't seem to figure out how to use it correctly in this case.
The database is structured as follows:
nodes:
{
"_id": 681
"sensors": [
{
"type": "foo"
},
{
"type": "bar"
}
]
}
values:
{
"_id" : ObjectId("570cc8b6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 10
}
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 681,
"value" : 20
}
{
"_id" : ObjectId("167bc997bb66750d5740665e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 200,
"value" : 20
}
{
"_id" : ObjectId("110cc9c6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-09T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 12
}
so let's imagine I want the data from node 681, I would want a structure like this:
nodes:
{
"_id": 681
"sensors": [
{
"_id" : ObjectId("570cc8b6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "foo",
"nodeid" : 681,
"value" : 10
},
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"type" : "bar",
"nodeid" : 681,
"value" : 20
}
]
}
Notice how one value of foo is not returned, because I only want the latest value when there is more than one (which is always going to be the case). The ordering of the collection already follows the timestamp because the collection is capped.
I have this query, but it just gets all the values from the database (which is waaay too much to do in a lifetime, let alone one request of the web app), so I was wondering how I would filter it before it gets aggregated.
query:
db.nodes.aggregate(
[
{
$unwind: "$sensors"
},
{
$match:{
nodeid: 681
}
},
{
$lookup:{
from: "values", localField: "sensors.type", foreignField: "type", as: "sensors"
}
}
}
]
)
Try this
// Pipeline
[
// Stage 1 - sort by timestamp, newest first, so that $first picks the latest reading
{
    $sort: {
        "timestamp": -1
    }
},
// Stage 2 - group by type & nodeid, then take the first (i.e. most recent) item in each group
{
    $group: {
        "_id":{type:"$type",nodeid:"$nodeid"},
        "sensors": {"$first":"$$CURRENT"}
    }
},
// Stage 3 - project the fields in desired
{
$project: {
"_id":"$sensors._id",
"timestamp":"$sensors.timestamp",
"type":"$sensors.type",
"nodeid":"$sensors.nodeid",
"value":"$sensors.value"
}
},
// Stage 4 - group and push it to array sensors
{
$group: {
"_id":{nodeid:"$nodeid"},
"sensors": {"$addToSet":"$$CURRENT"}
}
}
]
As far as I understand the document structure, there is no need to use $lookup, as all the data is already in the readings (values) collection.
Please see the proposed solution:
db.readings.aggregate([{
$match : {
nodeid : 681
}
},
{
$group : {
_id : {
type : "$type",
nodeid : "$nodeid"
},
readings : {
$push : {
timestamp : "$timestamp",
value : "$value",
id : "$_id"
}
}
}
}, {
$project : {
_id : "$_id",
readings : {
$slice : ["$readings", -1]
}
}
}, {
$unwind : "$readings"
}, {
$project : {
_id : "$readings.id",
type : "$_id.type",
nodeid : "$_id.nodeid",
timestamp : "$readings.timestamp",
value : "$readings.value",
}
}, {
$group : {
_id : "$nodeid",
sensors : {
$push : {
_id : "$_id",
timestamp : "$timestamp",
value : "$value",
type:"$type"
}
}
}
}
])
and output:
{
"_id" : 681,
"sensors" : [
{
"_id" : ObjectId("110cc9c6ac55850d5740784e"),
"timestamp" : ISODate("2016-04-09T12:06:46.344Z"),
"value" : 12,
"type" : "foo"
},
{
"_id" : ObjectId("190ac8b6ac55850d5740776e"),
"timestamp" : ISODate("2016-04-12T12:06:46.344Z"),
"value" : 20,
"type" : "bar"
}
]
}
Any comments welcome!
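One caveat: $slice: ["$readings", -1] picks the last element in insertion order, which is the newest reading only because the capped collection is already ordered by timestamp. A minimal sketch (same readings collection, assumed field names) that makes the ordering explicit with $sort and $last instead:
db.readings.aggregate([
    { $match : { nodeid : 681 } },
    { $sort : { timestamp : 1 } },
    { $group : {
        _id : { type : "$type", nodeid : "$nodeid" },
        latest : { $last : "$$ROOT" }
    }},
    { $group : {
        _id : "$_id.nodeid",
        sensors : { $push : {
            _id : "$latest._id",
            timestamp : "$latest.timestamp",
            type : "$latest.type",
            value : "$latest.value"
        }}
    }}
])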