Slow MongoDB query, how can it happen?

I have the following query in MongoDB:
db.getCollection('message').aggregate([
  {
    "$match": {
      "who": { "$in": ["manager", "woker"] },
      "sendTo": { "$in": ["userId:243369", "userId:160921"] },
      "exceptSendTo": { "$nin": ["userId:37355"] },
      "msgTime": { "$lt": 1559716155 },
      "isInvalid": { "$exists": false }
    }
  },
  {
    "$sort": { "msgTime": 1, "who": 1, "sendTo": 1 }
  },
  {
    "$group": { "_id": "$who", "doc": { "$first": "$type" } }
  }
], { allowDiskUse: true })
Forget about the field meanings. I have these indexes:
/* 1 */
[
{
"v" : 1,
"key" : {
"_id" : 1
},
"name" : "_id_",
"ns" : "db.message"
},
{
"v" : 1,
"key" : {
"who" : 1.0,
"sendTo" : 1.0
},
"name" : "who_sendTo",
"ns" : "db.message"
},
{
"v" : 1,
"key" : {
"msgTime" : 1.0
},
"name" : "msgTime_1",
"ns" : "db.message"
},
{
"v" : 1,
"key" : {
"msgTime" : 1.0,
"who" : 1.0,
"sendTo" : 1.0
},
"name" : "msgTime_1.0_who_1.0_sendTo_1.0",
"ns" : "db.message",
"background" : true
}
]
Running the query above takes 1.52s, and explain shows that it does use the msgTime_1.0_who_1.0_sendTo_1.0 index.
Why is the query still slow even though the index is used? And is there any way to fix the slowness, for example by changing the index?

I don't think you are using the sort the way you intend to.
$first only gives a meaningful result if the documents are sorted on the field you want the first value of:
https://docs.mongodb.com/manual/reference/operator/aggregation/first/
So you need to sort on the key you want the first element of.
Or you could use $$ROOT, which returns the whole first document.
I think you should modify it to something like this:
{ "$sort": { "who": 1, "msgTime": 1, "sendTo": 1 } },
{ "$group": { "_id": "$who", "doc": { "$first": "$$ROOT" } } },
In this case the $group stage can find the result for each group "instantly", since the documents of each group sit next to each other after the sort.
If you are only interested in the type, add a projection:
{ "$project": { "doc.type": 1 } }
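Putting those pieces together, a minimal sketch of the full pipeline (my consolidation of the suggestions above, reusing the question's $match and allowDiskUse; sorting who before msgTime is an assumption based on the $group key):
db.getCollection('message').aggregate([
  {
    "$match": {
      "who": { "$in": ["manager", "woker"] },
      "sendTo": { "$in": ["userId:243369", "userId:160921"] },
      "exceptSendTo": { "$nin": ["userId:37355"] },
      "msgTime": { "$lt": 1559716155 },
      "isInvalid": { "$exists": false }
    }
  },
  // sort on the grouping key first, then the field(s) you take $first from
  { "$sort": { "who": 1, "msgTime": 1, "sendTo": 1 } },
  // $$ROOT keeps the whole first document of each group
  { "$group": { "_id": "$who", "doc": { "$first": "$$ROOT" } } },
  // keep only the type if that is all you need
  { "$project": { "doc.type": 1 } }
], { allowDiskUse: true })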

Related

Partition data around a match query during aggregation

What I have been trying to get my head around is how to perform some kind of partitioning (split by predicate) in a Mongo query. My current query looks like:
db.posts.aggregate([
{"$match": { $and:[ {$or:[{"toggled":false},{"toggled":true, "status":"INACTIVE"}]} , {"updatedAt":{$gte:1549786260000}} ] }},
{"$unwind" :"$interests"},
{"$group" : {"_id": {"iid": "$interests", "pid":"$publisher"}, "count": {"$sum" : 1}}},
{"$project":{ _id: 0, "iid": "$_id.iid", "pid": "$_id.pid", "count": 1 }}
])
This results in the following output:
{
"count" : 3.0,
"iid" : "INT456",
"pid" : "P789"
}
{
"count" : 2.0,
"iid" : "INT789",
"pid" : "P789"
}
{
"count" : 1.0,
"iid" : "INT123",
"pid" : "P789"
}
{
"count" : 1.0,
"iid" : "INT123",
"pid" : "P123"
}
All good so far, but then I realized that for the documents that match the specific filter {"toggled": true, "status": "INACTIVE"}, I would rather decrement the count (-1), considering that the eventual value can be negative as well.
Is there a way to partition the data after the match, so that different grouping operations are performed on the two sets of documents?
Something that sounds similar to what I am looking for is $mergeObjects, or maybe $reduce, but there is not much in the documentation examples that I can relate to this.
Note: I realize one straightforward way to deal with this would be to perform two queries, but I am looking for a single query that performs the operation.
Sample documents for the above output would be:
/* 1 */
{
"_id" : ObjectId("5d1f7******"),
"id" : "CON123",
"title" : "Game",
"content" : {},
"status" : "ACTIVE",
"toggle":false,
"publisher" : "P789",
"interests" : [
"INT456"
],
"updatedAt" : NumberLong(1582078628264)
}
/* 2 */
{
"_id" : ObjectId("5d1f8******"),
"id" : "CON456",
"title" : "Home",
"content" : {},
"status" : "INACTIVE",
"toggle":true,
"publisher" : "P789",
"interests" : [
"INT456",
"INT789"
],
"updatedAt" : NumberLong(1582078628264)
}
/* 3 */
{
"_id" : ObjectId("5d0e9******"),
"id" : "CON654",
"title" : "School",
"content" : {},
"status" : "ACTIVE",
"toggle":false,
"publisher" : "P789",
"interests" : [
"INT123",
"INT456",
"INT789"
],
"updatedAt" : NumberLong(1582078628264)
}
/* 4 */
{
"_id" : ObjectId("5d207*******"),
"id" : "CON789",
"title":"Stack",
"content" : { },
"status" : "ACTIVE",
"toggle":false,
"publisher" : "P123",
"interests" : [
"INT123"
],
"updatedAt" : NumberLong(1582078628264)
}
What I am looking for as a result, though, is:
{
"count" : 1.0, (2-1)
"iid" : "INT456",
"pid" : "P789"
}
{
"count" : 0.0, (1-1)
"iid" : "INT789",
"pid" : "P789"
}
{
"count" : 1.0,
"iid" : "INT123",
"pid" : "P789"
}
{
"count" : 1.0,
"iid" : "INT123",
"pid" : "P123"
}
This aggregation gives the desired result.
db.posts.aggregate( [
{ $match: { updatedAt: { $gte: 1549786260000 } } },
{ $facet: {
FALSE: [
{ $match: { toggle: false } },
{ $unwind : "$interests" },
{ $group : { _id : { iid: "$interests", pid: "$publisher" }, count: { $sum : 1 } } },
],
TRUE: [
{ $match: { toggle: true, status: "INACTIVE" } },
{ $unwind : "$interests" },
{ $group : { _id : { iid: "$interests", pid: "$publisher" }, count: { $sum : -1 } } },
]
} },
{ $project: { result: { $concatArrays: [ "$FALSE", "$TRUE" ] } } },
{ $unwind: "$result" },
{ $replaceRoot: { newRoot: "$result" } },
{ $group : { _id : "$_id", count: { $sum : "$count" } } },
{ $project:{ _id: 0, iid: "$_id.iid", pid: "$_id.pid", count: 1 } }
] )
[ EDIT ADD ]
The output from the query using the input data from the question post:
{ "count" : 1, "iid" : "INT123", "pid" : "P789" }
{ "count" : 1, "iid" : "INT123", "pid" : "P123" }
{ "count" : 0, "iid" : "INT789", "pid" : "P789" }
{ "count" : 1, "iid" : "INT456", "pid" : "P789" }
[ EDIT ADD 2 ]
This query gets the same result with a different approach:
db.posts.aggregate( [
{
$match: { updatedAt: { $gte: 1549786260000 } }
},
{
$unwind : "$interests"
},
{
$group : {
_id : {
iid: "$interests",
pid: "$publisher"
},
count: {
$sum: {
$switch: {
branches: [
{ case: { $eq: [ "$toggle", false ] },
then: 1 },
{ case: { $and: [ { $eq: [ "$toggle", true] }, { $eq: [ "$status", "INACTIVE" ] } ] },
then: -1 }
]
}
}
}
}
},
{
$project:{
_id: 0,
iid: "$_id.iid",
pid: "$_id.pid",
count: 1
}
}
] )
[ EDIT ADD 3 ]
NOTE:
The facet query runs the two facets (TRUE and FALSE) on the same set of documents; it is like two queries running in parallel. But there is some duplication of code, as well as additional stages for shaping the documents down the pipeline to get the desired output.
The second query avoids the code duplication, and it has far fewer stages in the aggregation pipeline. This makes a difference in performance when the input dataset has a large number of documents to process. In general, fewer stages mean fewer passes over the documents, since each stage has to scan the documents that the previous stage outputs.
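If you want to see how the server treats each version, one quick check (my own suggestion, not part of the original answer) is to request the plan instead of the results with the aggregate explain option:
db.posts.aggregate(
  [ /* either of the two pipelines above */ ],
  { explain: true }  // returns the optimized pipeline / planner information instead of documents
)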

Count of MongoDB aggregation match results

I'm working with a MongoDB collection that has a lot of duplicate keys. I regularly do aggregation queries to find out what those duplicates are, so that I can dig in and find out what is and isn't different about them.
Unfortunately the database is huge and duplicates are often intentional. What I'd like to do is to find the count of keys that have duplicates, instead of printing a result with thousands of lines of output. Is this possible?
(Side Note: I do all of my querying through the shell, so solutions that don't require external tools or a lot of code would be preferred, but I understand that's not always possible.)
Example Records:
{ "_id" : 1, "type" : "example", "key" : "111111", "value" : "abc" }
{ "_id" : 2, "type" : "example", "key" : "222222", "value" : "def" }
{ "_id" : 3, "type" : "example", "key" : "222222", "value" : "ghi" }
{ "_id" : 4, "type" : "example", "key" : "333333", "value" : "jkl" }
{ "_id" : 5, "type" : "example", "key" : "333333", "value" : "mno" }
{ "_id" : 6, "type" : "example", "key" : "333333", "value" : "pqr" }
{ "_id" : 7, "type" : "example", "key" : "444444", "value" : "stu" }
{ "_id" : 8, "type" : "example", "key" : "444444", "value" : "vwx" }
{ "_id" : 9, "type" : "example", "key" : "444444", "value" : "yz1" }
{ "_id" : 10, "type" : "example", "key" : "444444", "value" : "234" }
Here is the query that I've been using to find duplicates based on key:
db.collection.aggregate([
{
$match: {
type: "example"
}
},
{
$group: {
_id: "$key",
count: {
$sum: 1
}
}
},
{
$match: {
count: {
$gt: 1
}
}
}
])
Which gives me an output of:
{
"_id": "222222",
"count": 2
},
{
"_id": "333333",
"count": 3
},
{
"_id": "444444",
"count": 4
}
The result I want to get instead:
3
You are almost there, just missing the last $count:
db.collection.aggregate([
{
$match: {
type: "example"
}
},
{
$group: {
_id: "$key",
count: {
$sum: 1
}
}
},
{
$match: {
count: {
$gt: 1
}
}
},
{
$count: "count"
}
])
Akrion's answer seems to be correct, but I can't test it because we're on an older version of MongoDB. A coworker gave me an alternative solution that works on 3.2 (not sure about other versions).
Adding .toArray() will convert the results to an array, and you can then get the size of the array using .length.
db.collection.aggregate([
{
$match: {
type: "example"
}
},
{
$group: {
_id: "$key",
count: {
$sum: 1
}
}
},
{
$match: {
count: {
$gt: 1
}
}
}
]).toArray().length
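As a further shell-only aside (my addition, not from the original answers): the cursor returned by aggregate() in the mongo shell also has itcount(), which iterates the cursor and returns the number of results without building an array in memory:
db.collection.aggregate([
  { $match: { type: "example" } },
  { $group: { _id: "$key", count: { $sum: 1 } } },
  { $match: { count: { $gt: 1 } } }
]).itcount()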

How can I do match after second level unwind in mongodb?

I am working on a software that uses MongoDB as a database. I have a collection like this (this is just one document)
{
"_id" : ObjectId("5aef51e0af42ea1b70d0c4dc"),
"EndpointId" : "89799bcc-e86f-4c8a-b340-8b5ed53caf83",
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Url" : "test",
"Tags" : [
{
"Uid" : "E2:02:00:18:DA:40",
"Type" : 1,
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Sensors" : [
{
"Type" : 1,
"Value" : NumberDecimal("-98")
},
{
"Type" : 2,
"Value" : NumberDecimal("-65")
}
]
},
{
"Uid" : "12:3B:6A:1A:B7:F9",
"Type" : 1,
"DateTime" : ISODate("2018-05-06T19:05:04.574Z"),
"Sensors" : [
{
"Type" : 1,
"Value" : NumberDecimal("-95")
},
{
"Type" : 2,
"Value" : NumberDecimal("-59")
},
{
"Type" : 3,
"Value" : NumberDecimal("12.939770381907275")
}
]
}
]
}
and I want to run this query on it.
db.myCollection.aggregate([
{ $unwind: "$Tags" },
{
$match: {
$and: [
{
"Tags.DateTime": {
$gte: ISODate("2018-05-06T19:05:02Z"),
$lte: ISODate("2018-05-06T19:05:09Z"),
},
},
{ "Tags.Uid": { $in: ["C1:3D:CA:D4:45:11"] } },
],
},
},
{ $unwind: "$Tags.Sensors" },
{ $match: { "$Tags.Sensors.Type": { $in: [1, 2] } } },
{
$project: {
_id: 0,
EndpointId: "$EndpointId",
TagId: "$Tags.Uid",
Url: "$Url",
TagType: "$Tags.Type",
Date: "$Tags.DateTime",
SensorType: "$Tags.Sensors.Type",
Value: "$Tags.Sensors.Value",
},
},
])
The problem is that the second $match (the one on $Tags.Sensors.Type) doesn't work and doesn't affect the result of the query.
How can I solve that?
If this is not the right way, what is the right way to run these conditions?
The $match stage accepts field names without a leading $ sign. You've done that correctly in your first $match stage but in the second one you write $Tags.Sensors.Type. Simply removing the leading $ sign should make your query work.
Mind you, the whole thing can be a bit simplified (and some beautification doesn't hurt, either):
You don't need to use $and in your example since it's assumed by default if you specify more than one criterion in a filter.
The $in that you use for the Tags.Sensors.Type filter can be a plain equality match unless you have more than one element in the list of acceptable values.
In the $project stage, instead of (kind of) duplicating identical field names you can use the <field>: 1 syntax unless the order of the fields matters.
So the final query would be something like this.
db.myCollection.aggregate([
{
"$unwind" : "$Tags"
},
{
"$match" : {
"Tags.DateTime" : { "$gte" : ISODate("2018-05-06T19:05:02Z"), "$lte" : ISODate("2018-05-06T19:05:09Z") },
"Tags.Uid" : { "$in" : ["C1:3D:CA:D4:45:11"] }
}
}, {
"$unwind" : "$Tags.Sensors"
}, {
"$match" : {
"Tags.Sensors.Type" : { "$in" : [1,2] }
}
},
{
"$project" : {
"_id" : 0,
"EndpointId" : 1,
"TagId" : "$Tags.Uid",
"Url" : 1,
"TagType" : "$Tags.Type",
"Date" : "$Tags.DateTime",
"SensorType" : "$Tags.Sensors.Type",
"Value" : "$Tags.Sensors.Value"
}
}])

Mongodb output field with multiple $cond

Here's an example of the documents I use:
{
"_id" : ObjectId("554a1f5fe36a768b362ea5c0"),
"store_state" : 1,
"services" : [
{
"id" : "XXX",
"state" : 1,
"active": true
},
{
"id" : "YYY",
"state" : 1,
"active": true
},
...
]
}
I want to output a new field with "Y" if the id is "XXX" and active is true, and "N" in any other case. The service element with "XXX" as id is not present in every document (output "N" in that case).
Here's my query at the moment:
db.stores.aggregate({
$match : {"store_state":1}
},
{ $project : {
"XXX_active": {
$cond: [ {
$and:[
{$eq:["services.$id","XXX"]},
{$eq:["services.$active",true]}
]},"Y","N"
] }
}
}).pretty()
But it always outputs "N" for the XXX_active field.
The expected output I need is:
{
"_id" : ObjectId("554a1f5de36a768b362e7e6f"),
"XXX_active" : "Y"
},
{
"_id" : ObjectId("554a1f5ee36a768b362e9d25"),
"XXX_active" : "N"
},
{
"_id" : ObjectId("554a1f5de36a768b362e73a5"),
"XXX_active" : "Y"
}
Another example of a possible result:
{
"_id" : ObjectId("554a1f5de36a768b362e7e6f"),
"XXX_active" : "Y",
"YYY_active" : "N"
},
{
"_id" : ObjectId("554a1f5ee36a768b362e9d25"),
"XXX_active" : "N",
"YYY_active" : "N"
},
{
"_id" : ObjectId("554a1f5de36a768b362e73a5"),
"XXX_active" : "Y",
"YYY_active" : "Y"
}
There should be only one XXX_active per object and no duplicate objects, but I need every object to get an XXX_active value even if the service element with id "XXX" is not present. Could someone help, please?
First $unwind the services array and then use $cond as below:
db.stores.aggregate({
"$match": {
"store_state": 1
}
}, {
"$unwind": "$services"
}, {
"$project": {
"XXX_active": {
"$cond": [{
"$and": [{
"$eq": ["$services.id", "XXX"]
}, {
"$eq": ["$services.active", true]
}]
}, "Y", "N"]
}
}
},{"$group":{"_id":"$_id","XXX_active":{"$first":"$XXX_active"}}}) //group by id
The following aggregation pipeline will give the desired result. You need to apply the $unwind operator on the services array field as your initial aggregation pipeline step. This deconstructs the services array from the input documents and outputs one document per array element; each output document replaces the array with a single element value.
db.stores.aggregate([
{
"$match" : {"store_state": 1}
},
{
"$unwind": "$services"
},
{
"$project": {
"store_state" : 1,
"services": 1,
"XXX_active": {
"$cond": [
{
"$and": [
{"$eq":["$services.id", "XXX"]},
{"$eq":["$services.active",true]}
]
},"Y","N"
]
}
}
},
{
"$match": {
"services.id": "XXX"
}
},
{
"$group": {
"_id": {
"_id": "$_id",
"store_state": "$store_state",
"XXX_active": "$XXX_active"
},
"services": {
"$push": "$services"
}
}
},
{
"$project": {
"_id": "$_id._id",
"store_state" : "$_id.store_state",
"services": 1,
"XXX_active": "$_id.XXX_active"
}
}
])
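As an aside (my own sketch, not from the original answers): on MongoDB 2.6 or later you can compute a flag per service id without $unwind, using $map and $anyElementTrue. This also covers documents where the "XXX" element is absent, and it makes adding YYY_active straightforward:
db.stores.aggregate([
  { "$match": { "store_state": 1 } },
  { "$project": {
      // "Y" if any services element has this id and active: true, otherwise "N"
      "XXX_active": {
        "$cond": [
          { "$anyElementTrue": [ {
              "$map": {
                "input": { "$ifNull": [ "$services", [] ] },  // tolerate documents with no services array
                "as": "s",
                "in": { "$and": [
                  { "$eq": [ "$$s.id", "XXX" ] },
                  { "$eq": [ "$$s.active", true ] }
                ] }
              }
          } ] },
          "Y", "N"
        ]
      }
  } }
])
Repeating the same $cond expression with "YYY" in place of "XXX" produces the YYY_active field from the second expected output.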

MongoDB $ne in sub documents

{
"_id" : ObjectId("53692eb238ed04c824679f18"),
"firstUserId" : 1,
"secondUserId" : 17,
"messages" : [
{
"_id" : ObjectId("5369338997b964b81d579fc6"),
"read" : true,
"dateTime" : 1399403401,
"message" : "d",
"userId" : 1
},
{
"_id" : ObjectId("536933c797b964b81d579fc7"),
"read" : false,
"dateTime" : 1399403463,
"message" : "asdf",
"userId" : 17
}
]
}
I'm trying to select all documents that have firstUserId = 1 and also have subdocuments
whose userId is different ($ne) from 1 and whose read is false.
I tried:
db.usermessages.find({firstUserId: 1, "messages.userId": {$ne: 1}, "messages.read": false})
But it returns nothing, because messages contains both 1 and 17.
Also, how can I count the subdocuments that match this condition?
Are you trying to get the count of all the documents returned by your match criteria? If yes, then you might consider using the aggregation framework: http://docs.mongodb.org/manual/aggregation/
Something like the following could be done to get you the count:
db.usermessages.aggregate(
{ "$unwind": "$messages" },
{ "$match":
{ "firstUserId": 1,
"messages.userId": { "$ne" : 1},
"messages.read": false
}
},
{ "$group": { "_id" :null, "count" : { "$sum": 1 } } }
)
Hope this helps.
PS: I have not tried this on my system.
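One more aside on the original find() (my addition, not part of the answer above): "messages.userId": { $ne: 1 } only matches documents where no array element has userId 1, which is why nothing comes back. If you just need the matching documents rather than a count, $elemMatch applies both conditions to the same array element:
// matches documents that have at least one message with userId != 1 and read: false
db.usermessages.find({
  firstUserId: 1,
  messages: { $elemMatch: { userId: { $ne: 1 }, read: false } }
})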