MongoDB paginate 2 collections together on common field

I have two Mongo collections: File and Folder.
Both have some common fields such as name, createdAt, etc. I have a resources API that returns a response containing items from both collections, with a type property added; type can be file or folder.
I want to support pagination and sorting on this list, for example sorting by createdAt. Is this possible with aggregation, and how?
Moving them to a container collection is not a preferred option, as I would then have to maintain the container collection on each create/update/delete in either collection.
I'm also using Mongoose, in case it has a utility function or plugin for this.

In this case, you can use $unionWith. Something like:
Folder.aggregate([
  { $project: { name: 1, createdAt: 1 } },
  {
    $unionWith: {
      coll: "files",
      pipeline: [ { $project: { name: 1, createdAt: 1 } } ]
    }
  },
  ... // your sorting goes here
])
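If you also want the type property from the question together with sorting and pagination, a fuller sketch could look like this (the "folder"/"file" labels, the descending createdAt sort, and the page/pageSize values are assumptions for illustration; $unionWith needs MongoDB 4.4+):
const page = 0;
const pageSize = 20;

Folder.aggregate([
  { $project: { name: 1, createdAt: 1 } },
  { $set: { type: "folder" } },                 // tag folder documents (assumed label)
  {
    $unionWith: {
      coll: "files",                            // assumes the File model maps to the "files" collection
      pipeline: [
        { $project: { name: 1, createdAt: 1 } },
        { $set: { type: "file" } }              // tag file documents (assumed label)
      ]
    }
  },
  { $sort: { createdAt: -1 } },                 // sort the merged list
  { $skip: page * pageSize },                   // pagination
  { $limit: pageSize }
])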

Related

How to build a MongoDB query that combines two fields temporarily?

I have a schema which has one field named ownerId and an array field named participantsIds. In the frontend users can select participants. I'm using these ids to filter documents by querying participantsIds with the $all operator and the list of participant ids from the frontend. This works perfectly, except that the participantsIds in the document don't include the ownerId. I thought about using aggregate to add a new field consisting of a list like [participantsIds, ownerId], then querying against this new field with $all, and afterwards deleting the field again since it isn't needed in the frontend.
What would such a query look like, or is there a better way to achieve this behavior? I'm really lost right now since I've been trying to implement this with mongo_dart for the last 3 hours.
This is what the schema looks like:
{
  _id: ObjectId(),
  title: 'Title of the Event',
  startDate: '2020-09-09T00:00:00.000',
  endDate: '2020-09-09T00:00:00.000',
  startHour: 1,
  durationHours: 1,
  ownerId: '5f57ff55202b0e00065fbd10',
  participantsIds: ['5f57ff55202b0e00065fbd14', '5f57ff55202b0e00065fbd15', '5f57ff55202b0e00065fbd13'],
  classesIds: [],
  categoriesIds: [],
  roomsIds: [],
  creationTime: '2020-09-10T16:42:14.966',
  description: 'Some Desc'
}
Tl;dr I want to query documents with the $all operator on the participantsIds field but the ownerId should be included in this query.
What I want is instead of querying against:
participantsIds: ['5f57ff55202b0e00065fbd14', '5f57ff55202b0e00065fbd15', '5f57ff55202b0e00065fbd13']
I want to query against:
participantsIds: ['5f57ff55202b0e00065fbd14', '5f57ff55202b0e00065fbd15', '5f57ff55202b0e00065fbd13', '5f57ff55202b0e00065fbd10']
Having fun here. By the way, it's better to use Joe's answer if you are running the query frequently, or even better, maintain an "all" field on insertion.
Additional notes: use projection at the start/end to get only what you need.
https://mongoplayground.net/p/UP_-IUGenGp
db.collection.aggregate([
  {
    "$addFields": {
      "all": {
        $setUnion: [
          "$participantsIds",
          ["$ownerId"]
        ]
      }
    }
  },
  {
    $match: {
      all: {
        $all: [
          "5f57ff55202b0e00065fbd14",
          "5f57ff55202b0e00065fbd15",
          "5f57ff55202b0e00065fbd13",
          "5f57ff55202b0e00065fbd10"
        ]
      }
    }
  }
])
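The question also mentions dropping the temporary field again since it isn't needed in the frontend; one way (a small sketch) is to append a final projection stage to the pipeline above:
// appended as the last stage of the aggregation above
{ $project: { all: 0 } }   // exclude the temporary "all" field from the result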
Didn't fully understand what you want to do but maybe this helps:
db.collection.find({
  ownerId: "5f57ff55202b0e00065fbd10",
  participantsIds: {
    $all: [
      "5f57ff55202b0e00065fbd14",
      "5f57ff55202b0e00065fbd15",
      "5f57ff55202b0e00065fbd13"
    ]
  }
})
You could use the pipeline form of update to either add the owner to the participant list or add a new consolidated field:
db.collection.update({}, [{
  $set: {
    allParticipantsIds: {
      $setUnion: ["$participantsIds", ["$ownerId"]]
    }
  }
}])
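Note that update with an empty filter modifies only one document by default, so backfilling the whole collection would need updateMany (or {multi: true}). Once the consolidated field exists, the frontend query becomes a plain $all against it; a rough sketch, reusing the ids from the question:
// one-time backfill across all documents
db.collection.updateMany({}, [{
  $set: {
    allParticipantsIds: { $setUnion: ["$participantsIds", ["$ownerId"]] }
  }
}])

// afterwards the filter can hit the consolidated field directly
// (an index on allParticipantsIds keeps this fast)
db.collection.find({
  allParticipantsIds: {
    $all: [
      "5f57ff55202b0e00065fbd14",
      "5f57ff55202b0e00065fbd15",
      "5f57ff55202b0e00065fbd13",
      "5f57ff55202b0e00065fbd10"
    ]
  }
})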

MongoDB querying aggregation in one single document

I have a short but important question. I am new to MongoDB and querying.
My database looks like the following: I only have one document stored in my database (sorry for blurring).
The document consists of different fields:
two are blurred and not important
datum -> date
instance -> array of embedded document objects; each instance entry has an id, two unimportant fields, and a code.
Now I want to query how many times an object in my instance array has the group "a" and the text "sample".
Is this even possible?
I only found methods to count how many documents have something...
I am using MongoDB Compass, but I can also use PyMongo, MongoEngine, or any other tool for querying MongoDB.
Thank you in advance and if you have more questions please leave a comment!
You can try this:
db.collection.aggregate([
  { $unwind: "$instance" },
  { $unwind: "$instance.label" },
  {
    $match: {
      "instance.label.group": "a",
      "instance.label.text": "sample"
    }
  },
  {
    $group: {
      _id: {
        group: "$instance.label.group",
        text: "$instance.label.text"
      },
      count: { $sum: 1 }
    }
  }
])
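If you only need a single number per document rather than a grouped result, the same count can also be computed without $unwind; this is just a sketch, assuming instance is an array whose elements each carry a label array of { group, text } sub-documents, as the pipeline above implies:
db.collection.aggregate([
  {
    $project: {
      count: {
        $size: {
          $filter: {
            // flatten the nested label arrays into one array
            input: {
              $reduce: {
                input: "$instance.label",
                initialValue: [],
                in: { $concatArrays: ["$$value", "$$this"] }
              }
            },
            as: "label",
            cond: {
              $and: [
                { $eq: ["$$label.group", "a"] },
                { $eq: ["$$label.text", "sample"] }
              ]
            }
          }
        }
      }
    }
  }
])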

Returning whole object in MongoDB aggregation

I have an Item schema which holds item details along with the respective restaurant. I have to find all items of a particular restaurant and group them by 'type' and 'category' (type and category are fields in the Item schema). I am able to group items as I want, but I am not able to get the complete item object.
My query:
db.items.aggregate([
  {
    '$match': { 'restaurant': ObjectId("551111450712235c81620a57") }
  },
  {
    '$group': {
      id: { '$push': '$_id' },
      _id: {
        type: '$type',
        category: '$category'
      }
    }
  },
  {
    $project: { id: '$id' }
  }
])
I have seen one method: adding each field value to the group and then projecting it. Since I have many fields in my Item schema, I don't feel this is a good solution for me. Can I get the complete object instead of only the ids?
Well, you can always use $$ROOT, provided that your server is MongoDB 2.6 or greater:
db.items.aggregate([
{ '$match': {'restaurant': ObjectId("551111450712235c81620a57")}},
{ '$group':{
_id : {
type : '$type',
category : '$category'
},
id: { '$push': '$$ROOT' },
}}
])
Which is going to place every whole object into the members of the array.
You need to be careful when doing this as with larger results you are certain to break BSON limits.
I would suggest that you are trying to construct some kind of "search results" with "facet counts" or similar. For that you are better off running a separate query for the "aggregation" part and another for the actual document results.
That is a much safer and flexible approach than trying to group everything together.
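A sketch of that two-query approach, under the same $match as above: keep the aggregation light by pushing only the _id values, and fetch the full documents with a plain find (which can also be paged with skip/limit):
// counts / ids per (type, category) bucket
var buckets = db.items.aggregate([
  { $match: { restaurant: ObjectId("551111450712235c81620a57") } },
  {
    $group: {
      _id: { type: "$type", category: "$category" },
      ids: { $push: "$_id" },
      count: { $sum: 1 }
    }
  }
]).toArray();

// the actual documents, fetched separately
var docs = db.items.find({
  restaurant: ObjectId("551111450712235c81620a57")
}).toArray();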

Refine/restructure data from a MongoDB query

I'm using Node.js with the MongoDB Native driver 2.0+.
The following query fetches one client document containing arrays of embedded staff and services.
db.collection('clients').findOne({_id: sessId}, {"services._id": 1, "staff": {$elemMatch: {_id: reqId}}}, callback)
It returns a result like this:
{
  _id: "5422c33675d96d581e09e4ca",
  staff: [
    {
      name: "Anders",
      _id: "5458d0aa69d6f72418969428"
      // More fields not relevant to the question...
    }
  ],
  services: [
    { _id: "54578da02b1c54e40fc3d7c6" },
    { _id: "54578da42b1c54e40fc3d7c7" },
    { _id: "54578da92b1c54e40fc3d7c9" }
  ]
}
Note that each embedded object in services actually contains several fields, but _id is the only field returned by means of the projection of the query.
From this returned data I start by "plucking" all ids from services and saving them in an array that is later used for validation. This is by no means a difficult operation... but I'm curious: is there an easy way to do some kind of aggregation instead of find, to get an array of already plucked ObjectIds directly from the DB? Something like this:
{
  _id: "5422c33675d96d581e09e4ca",
  staff: [
    {
      name: "Anders",
      _id: "5458d0aa69d6f72418969428"
      // More fields not relevant to the question...
    }
  ],
  services: [
    "54578da02b1c54e40fc3d7c6",
    "54578da42b1c54e40fc3d7c7",
    "54578da92b1c54e40fc3d7c9"
  ]
}
One way of doing it is to first $unwind the document on the staff field; this is done to select the intended staff member. This step is required because of the unavailability of the $elemMatch operator in the aggregation framework (there is an open ticket for it on Jira).
Once the document with the correct staff is selected, $unwind again, this time on services.
Then $group, $pushing all the services _id values together into an array.
This is followed by a $project operator to show only the intended fields.
db.clients.aggregate([
  { $match: { "_id": sessId } },
  { $unwind: "$staff" },
  { $match: { "staff._id": reqId } },
  { $unwind: "$services" },
  {
    $group: {
      "_id": "$_id",
      "services_id": { $push: "$services._id" },
      "staff": { $first: "$staff" }
    }
  },
  { $project: { "services_id": 1, "staff": 1 } }
])
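If the server supports $map and $filter (MongoDB 3.2+), the same shape can be produced without any $unwind; a sketch, reusing sessId and reqId from the question:
db.clients.aggregate([
  { $match: { _id: sessId } },
  {
    $project: {
      // pluck just the _id of every embedded service
      services: { $map: { input: "$services", as: "s", in: "$$s._id" } },
      // keep only the staff member matching reqId (still returned as an array)
      staff: {
        $filter: {
          input: "$staff",
          as: "member",
          cond: { $eq: ["$$member._id", reqId] }
        }
      }
    }
  }
])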

Create MongoDB fields with names based on sub-document values using aggregation pipeline?

Given a MongoDB collection with the following document structure:
{
  array_of_subdocs: [
    {
      animal: "cat",
      count: 10
    },
    {
      animal: "dog",
      count: 20
    },
    ...
  ]
}
where each document contains an array of sub-documents, I want to transform the collection into documents of the structure:
{
  cat: { count: 10 },
  dog: { count: 20 },
  ...
}
where each sub-document is now the value of a new field in the main document, named after one of the values within the sub-document (in the example, the values of the animal field are used to create the names of the new fields, i.e. cat and dog).
I know how to do this with eval and a JavaScript snippet. It's slow. My question is: how can this be done using the aggregation pipeline?
According to this feature request and its resolution, a new function called arrayToObject will be added for exactly this functionality:
db.foo.aggregate([
  {
    $project: {
      array_of_subdocs: {
        $arrayToObject: {
          $map: {
            input: "$array_of_subdocs",
            as: "pair",
            in: ["$$pair.animal", "$$pair.count"]
          }
        }
      }
    }
  }
])
But at the moment there is no solution, so I suggest you change your data structure. As far as I can see, there are many feature requests labeled as Major that have not been done for years.
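For what it's worth, $arrayToObject did ship later (MongoDB 3.4.4+), so on current servers the exact shape asked for in the question can be produced; a sketch that pairs it with $replaceRoot and wraps each count so the values come out as { count: ... }:
db.foo.aggregate([
  {
    $replaceRoot: {
      newRoot: {
        $arrayToObject: {
          $map: {
            input: "$array_of_subdocs",
            as: "pair",
            // k becomes the new field name, v its value
            in: { k: "$$pair.animal", v: { count: "$$pair.count" } }
          }
        }
      }
    }
  }
])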