This seems like a simple question but I couldn't find an answer. This is very easy to do in SQL.
I want to query MongoDB for documents where a value is the combination of two fields. For example, I want to search for documents where fieldA + fieldB == "ID01".
I tried the following, but it doesn't work:
collection.find({{$concat:[$fieldA, $fieldB]}: 'ID01'})
You can try using the aggregation framework:
db.collection.aggregate(
[
{ $project: { newField: { $concat: [ "$fieldA", "$fieldB" ] }, fieldA: 1, fieldB: 1 } },
{ $match: { newField: 'ID01' } }
]
)
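Alternatively, on MongoDB 3.6 or newer you can skip the extra $project and compare the concatenation directly inside the query with $expr. A minimal sketch (note that $concat only works on string fields and evaluates to null if either field is missing, in which case the document simply won't match):
db.collection.find({
  $expr: {
    $eq: [ { $concat: [ "$fieldA", "$fieldB" ] }, "ID01" ]
  }
})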
I need to group all entries by id using $group. However, the id is under different objects, so it is $student.id for some and $teacher.id for others. I've tried $cond, $is, and every other conditional I could find, but haven't had any luck. What I'd want is something like this:
lessons.aggregate([
// matches, lookups, etc.
{$group: {
  "_id": {
    "id": (if student exists "student.id", else if teacher exists "teacher.id")
  },
  // other fields
}}
]);
How can I do this? I've scoured the MongoDB docs for hours yet nothing works. I'm new to this company and trying to debug something so not familiar with the tech yet, so apologies if this is rudimentary stuff!
Update: providing some sample data to demo what I'd want, shortened from the real thing to fit the question. After all the matches, lookups, etc., and before using $group, the data looks like this. As the student.id of the first and second objects is the same, I want them to be grouped.
{
student: {
_id: new ObjectId("61dc0fce904d07184b461c03"),
name: "Jess W"
},
duration: 30
},
{
student:{
_id: new ObjectId("61dc0fce904d07184b461c03"),
name: "Jess W"
},
duration: 30
},
{
teacher: {
_id: new ObjectId("61dc0f6a904d07184b461be7"),
name: "Michael S"
},
duration: 30
},
{
teacher: {
_id: new ObjectId("61dc1087904d07184b461c6a"),
name: "Andrew J"
},
duration: 30
},
If the fields are mutually exclusive, i.e. only one of them exists on any given document, then you can simply combine them:
{ $group: { _id: { student: "$student.id", teacher: "$teacher.id" }, /* other fields */ } }
Concatenating them should also work:
{ $group: { _id: { $concat: [ "$student.id", "$teacher.id" ] }, /* other fields */ } }
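One caveat with the $concat variant: $concat returns null if any argument is null or refers to a missing field, so with mutually exclusive fields you'd want to default each one to an empty string, e.g. (a sketch; wrap the ids in $toString first if they are ObjectIds rather than strings):
{ $group: { _id: { $concat: [ { $ifNull: [ "$student.id", "" ] }, { $ifNull: [ "$teacher.id", "" ] } ] }, /* other fields */ } }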
You can just use $ifNull and chain them, like so:
db.collection.aggregate([
{
$group: {
_id: {
"$ifNull": [
"$student._id",
{
$ifNull: [
"$teacher._id",
"$archer._id"
]
}
]
}
}
}
])
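If you're on MongoDB 5.0 or newer, $ifNull also accepts more than two arguments, so the same grouping can be written without nesting; a minimal sketch:
db.collection.aggregate([
  {
    $group: {
      _id: {
        // first non-null value wins: student, then teacher, then archer
        $ifNull: [ "$student._id", "$teacher._id", "$archer._id" ]
      }
    }
  }
])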
I have a query collection in MongoDB which contains documents in the below format:
{
_id : ObjectId("61aced92ede..."),
query : "How to solve...?",
answer : [],
is_solved : false
}
Now, I want to filter the documents with the following conditions:
filter all documents that are not solved (is_solved : false)
filter "n" number of documents that are solved
So the result will have all unsolved documents and only 10 solved documents in an array.
You can use this aggregation query:
First use $facet to create two branches: the solved documents and the not-solved documents.
In each branch do the necessary $match, and $limit the solved documents.
Then concatenate the values using $concatArrays.
db.collection.aggregate([
{
"$facet": {
"not_solved": [
{
"$match": {
"is_solved": false
}
}
],
"solved": [
{
"$match": {
"is_solved": true
}
},
{
"$limit": 10
}
]
}
},
{
"$project": {
"result": {
"$concatArrays": [
"$not_solved",
"$solved"
]
}
}
}
])
In the example I used $limit: 1 to make the result easier to see.
Also, if you want, you can add a $unwind at the end of the aggregation to get the values at the top level, as in the sketch below.
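A rough sketch of that variant, with a $replaceRoot added after the $unwind (assuming you want each matched document back as its own top-level result):
db.collection.aggregate([
  {
    "$facet": {
      "not_solved": [ { "$match": { "is_solved": false } } ],
      "solved": [ { "$match": { "is_solved": true } }, { "$limit": 10 } ]
    }
  },
  { "$project": { "result": { "$concatArrays": [ "$not_solved", "$solved" ] } } },
  { "$unwind": "$result" },                        // one output document per array element
  { "$replaceRoot": { "newRoot": "$result" } }     // promote each element back to the top level
])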
In Node.js and MongoDB, using Mongoose,
how can I query multiple collections?
For example, I have 3 collections:
mycollection1, mycollection2, mycollection3
I want to create a query like findOne or findMany on mycollection*,
and have the query return all the documents that exist in those collections
(the same as I can do in Elasticsearch).
Thanks,
Larry
You can use $unionWith:
db.coll1.aggregate([
{
"$unionWith": {
"coll": "coll2"
}
},
{
"$unionWith": {
"coll": "coll3"
}
},
{
"$match": {
"$expr": {
"$eq": [
"$a",
1
]
}
}
}
])
It's better to do the filters before the union rather than after, as the pipeline above does: if you have filters, you can take the $match and add it to each union's pipeline, using this form:
{ $unionWith: { coll: "<collection>", pipeline: [ <stage1>, ... ] } }
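For example, reusing the same filter as the pipeline above (a sketch):
db.coll1.aggregate([
  { "$match": { "$expr": { "$eq": [ "$a", 1 ] } } },   // filter coll1 before the unions
  {
    "$unionWith": {
      "coll": "coll2",
      "pipeline": [ { "$match": { "$expr": { "$eq": [ "$a", 1 ] } } } ]
    }
  },
  {
    "$unionWith": {
      "coll": "coll3",
      "pipeline": [ { "$match": { "$expr": { "$eq": [ "$a", 1 ] } } } ]
    }
  }
])
With Mongoose, the same pipeline can be passed to Model.aggregate() on the model bound to the first collection.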
I have an article collection which stores a list of tags as follows:
{
id: 1,
title: "Sample title"
tags: ["tag1", "tag2", "tag3", "tag4"]
}
In order to match articles to a user's interests, I use the aggregation stages $match and $setIntersection
to count how many tags are common between the user's interests and an article's tags, then sort to get the best matches.
db.article.aggregate([
  {
    "$match": {
      "tags": {"$in": ["tag1", ..., "tag100"]}
    }
  },
  {
    "$project": {
      "tags_match": {
        "$setIntersection": ["$tags", ["tag1", ..., "tag100"]]
      }
    }
  },
  {
    "$project": {
      "tags_match_size": {
        "$size": "$tags_match"
      }
    }
  },
  {"$sort": {"tags_match_size": 1}},
  {"$limit": 40}
]);
It works fine when I have a few hundred documents in the article collection. Now that I have around 1M articles, it takes around half an hour to finish.
I can't create an index on "tags_match_size" to make it run faster, as it is a new field computed inside the aggregation.
How can I make the query run faster?
Thank you.
Create an index on the tags field. The index will only help the first $match.
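For example (a minimal sketch): the multikey index on the array field can serve the $in in the first $match, but the computed tags_match and tags_match_size fields can never use an index, so the later stages still process everything that first stage lets through.
db.article.createIndex({ tags: 1 })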
Here's a sample document of my Mongo DB structure:
{ _id: 1,
records: [{n: "Name", v: "Kevin"},
{n: "Age", v: "100"},
...,
{n: "Field25", v: "Value25"} ]
}
To search for all documents having a Name of "Kevin" and an Age of "100", I'm using $all with $elemMatch. I need to use $elemMatch for an exact sub-document match on n: "Name" and v: "Kevin", as well as $all since I'm querying an array.
db.collection.find({"records" : { $all: [
{$elemMatch: {n: "Name", v: "Kevin"},
{$elemMatch: {n: "Age", v: "100"}
]}})
However, the $all operator is inefficient when the first $elemMatch argument is non-selective, i.e. there are many documents that match this field.
The Mongo Docs elaborate:
In the current release, queries that use the $all operator must scan all the documents that match the first element in the query array. As a result, even with an index to support the query, the operation may be long running, particularly when the first element in the array is not very selective.
Is there a better alternative for my queries?
I would suggest a radical change to your structure, which would simplify the queries. Sorry if this change is not possible, but without more data I do not see any problem with it:
{ _id: 1,
records: [{"Name":"Kevin",
"Age":"100",
...,
"Field25":"Value25"} ]
}
And the query:
db.collection.find("records":{$elemMatch:{Name:"Kevin","Age":100}})
This will return all the objects that have a record (assuming there are more records, for example if they are students of a class) matching all of the conditions mentioned.
But in case you want to have just one document per _id, forget about the records array:
{ _id: 1,
"Name":"Kevin",
"Age":"100",
...,
"Field25":"Value25"} ]
}
and the query:
db.collection.find({Name: "Kevin", Age: "100"})
Hope this helps.
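If you do keep the flat structure, a compound index would then support that query directly; a sketch, assuming these field names:
db.collection.createIndex({ Name: 1, Age: 1 })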
I think this should be the best solution.
db.collection.find({},
{
records: {
$filter: {
input: "$records",
as: "record",
cond: {
$or: [
{
$eq: [
"$$record.v",
"Kevin"
]
},
{
$eq: [
"$$record.v",
"100"
]
}
]
}
}
}
})
solution link: https://mongoplayground.net/p/aQAw0cG9Ipm
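Note that the second argument here is only a projection, so on its own this returns every document with its records array trimmed down. If you also want to restrict which documents come back, you can combine it with a query filter, for example (a sketch):
db.collection.find(
  { records: { $elemMatch: { n: "Name", v: "Kevin" } } },     // only documents containing the Name/Kevin pair
  {
    records: {
      $filter: {
        input: "$records",
        as: "record",
        cond: { $in: [ "$$record.v", [ "Kevin", "100" ] ] }    // keep just the matching array elements
      }
    }
  }
)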