I have a document model that looks like this:
{
  _id: ObjectId,
  per: [{
    _pid: ObjectId,
    thing1: string,
    thing2: string
  }],
  ore: [{
    _oid: ObjectId,
    thing1: string,
    thing2: string
  }],
  tre: [{
    _tid: ObjectId,
    thing1: string,
    thing2: string
  }]
}
and I want to pull back a tabular representation
[
{_id,_pid,thing1,thing2},
{_id,_pid,thing1,thing2},
{_id,_oid,thing1,thing2},
{_id,_oid,thing1,thing2},
{_id,_oid,thing1,thing2},
{_id,_tid,thing1,thing2}
]
How would I go about doing this? I'm sure it's an aggregation thing.
$setUnion within aggregation would let you combine multiple arrays into one:
See live at MongoPlayground
db.collection.aggregate([
  { $group: {
      _id: "$_id",
      array: { $push: {
        $setUnion: [
          "$per",
          "$ore",
          "$tre"
        ]
      }}
  }}
])
From that point, it's just a matter of unwinding the result to your liking.
Here is the completed example with all the unwinding: https://mongoplayground.net/p/Z9-HHMoQOPA
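To make the idea concrete outside of MongoDB, here is a plain-JavaScript sketch of what the combined stages produce: the three subdocument arrays merged into one list, then "unwound" into one row per subdocument carrying the parent _id. The field names come from the question's model; this only illustrates the semantics, it is not a MongoDB API.

```javascript
// Plain-JS illustration of combining the per/ore/tre arrays into one
// tabular list, as the aggregation does with $setUnion plus $unwind.
function toTabular(doc) {
  const combined = [...(doc.per || []), ...(doc.ore || []), ...(doc.tre || [])];
  // $unwind-equivalent: one row per subdocument, carrying the parent _id
  return combined.map(item => ({ _id: doc._id, ...item }));
}

const doc = {
  _id: "D1",
  per: [{ _pid: "P1", thing1: "a", thing2: "b" }],
  ore: [{ _oid: "O1", thing1: "c", thing2: "d" }],
  tre: [{ _tid: "T1", thing1: "e", thing2: "f" }]
};
console.log(toTabular(doc));
```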
Input data
{
  user_name: "jon_doe",
  followers: ["useroneID", "usertwoID"],
  followers_count: 2
}
my code
db.user.updateOne(
  { user_name: "jon_doe" },
  {
    $addToSet: { followers: "userthreeID" },
    $set: { followers_count: { $size: "$followers" } }
  }
)
Expected output
{
  user_name: "jon_doe",
  followers: ["useroneID", "usertwoID", "userthreeID"],
  followers_count: 3
}
Is this possible with MongoDB, and how do I do it? The code above doesn't work.
This works with an update that uses the aggregation pipeline:

$set - Set the followers field by combining the existing followers array with the new value (wrapped in an array). Using $setUnion prevents insertion of duplicate entries.
$set - Set the followers_count field to the size of the followers array.
db.user.updateOne(
  { user_name: "jon_doe" },
  [
    {
      $set: {
        followers: {
          $setUnion: [
            "$followers",
            ["userthreeID"]
          ]
        }
      }
    },
    {
      $set: {
        followers_count: {
          $size: "$followers"
        }
      }
    }
  ]
)
Sample Mongo Playground
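As a plain-JavaScript sketch of what the two pipeline stages do (illustrative only, using the sample data from the question): the followers array becomes the set-union of the old array and the new value, and followers_count becomes the size of the result, so repeating the same update is a no-op.

```javascript
// Plain-JS model of the two $set stages:
// 1) followers = set-union of existing followers and the new value,
// 2) followers_count = size of the resulting array.
function addFollower(userDoc, newFollower) {
  const followers = [...new Set([...userDoc.followers, newFollower])];
  return { ...userDoc, followers, followers_count: followers.length };
}

const user = {
  user_name: "jon_doe",
  followers: ["useroneID", "usertwoID"],
  followers_count: 2
};
console.log(addFollower(user, "userthreeID"));
// Adding the same follower twice leaves the document unchanged
console.log(addFollower(addFollower(user, "userthreeID"), "userthreeID"));
```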
I recently updated my subschemas (called Courses) to have timestamps and am trying to backfill existing documents to include createdAt/updatedAt fields.
Courses are stored in an array called courses in the user document.
// User document example
{
name: "Joe John",
age: 20,
courses: [
{
_id: <id here>,
name: "Intro to Geography",
units: 4
} // Trying to add timestamps to each course
]
}
I would also like to derive the createdAt field from the Course's Mongo ID.
This is the code I'm using to attempt adding the timestamps to the subdocuments:
db.collection('user').updateMany(
{
'courses.0': { $exists: true },
},
{
$set: {
'courses.$[elem].createdAt': { $toDate: 'courses.$[elem]._id' },
},
},
{ arrayFilters: [{ 'elem.createdAt': { $exists: false } }] }
);
However, after running the code, no fields are added to the Course subdocuments.
I'm using mongo ^4.1.1 and mongoose ^6.0.6.
Any help would be appreciated!
Using aggregation operators and referencing the value of another field in an update statement requires using the pipeline form of update, which is not available until MongoDB 4.2.
Once you upgrade, you could use an update like this:
db.collection.updateMany(
  {
    courses: {
      $elemMatch: {
        _id: { $exists: true },
        createdAt: { $exists: false }
      }
    }
  },
  [{
    $set: {
      courses: {
        $map: {
          input: "$courses",
          in: {
            $mergeObjects: [
              {
                createdAt: {
                  $convert: {
                    input: "$$this._id",
                    to: "date",
                    onError: { error: "$$this._id" }
                  }
                }
              },
              "$$this"
            ]
          }
        }
      }
    }
  }]
)
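The createdAt derivation works because the first four bytes of an ObjectId encode its creation time in seconds since the Unix epoch, which is exactly what $convert/$toDate read. A small plain-JavaScript sketch of that conversion (the hex string is a made-up example id, not from the question):

```javascript
// The first 8 hex characters of an ObjectId are its creation timestamp
// in seconds since the Unix epoch; $toDate/$convert use the same bytes.
function objectIdToDate(hexId) {
  const seconds = parseInt(hexId.substring(0, 8), 16);
  return new Date(seconds * 1000);
}

console.log(objectIdToDate("507f1f77bcf86cd799439011").toISOString());
```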
My requirement is to write a Mongo aggregation which returns a List of "virtual" Documents by grouping some existing "actual" Documents from the collection.
I intend to use this result as-is on my UI project, I'm looking for ways I can add a unique and decodable ID to it during the aggregation itself.
Example:
[
{... pipeline stages},
{
$group: {
_id: {
bookCode: '$bookCode',
bookName: '$bookName'
},
books: {
$push: '$bookId'
}
}
},
{
$project: {
//virtual unique Id by combining bookCode and bookName
virtualId: {
$concat: [
{
$ifNull: [ '$_id.bookCode', '~' ]
},
'-',
{
$ifNull: [ '$_id.bookName', '~' ]
}
]
},
books: '$books'
}
}
]
Sample Output:
[
  {
    virtualId: 'BC01-BOOKNAME01',
    books: ['BID01', 'BID02']
  },
  {
    virtualId: 'BC02-BOOKNAME01',
    books: ['BID03', 'BID04']
  },
  {
    virtualId: '~-BOOKNAME01',
    books: ['BID05', 'BID06']
  },
  {
    virtualId: 'BC02-~',
    books: ['BID07', 'BID08']
  },
  {
    virtualId: '~-~',
    books: ['BID09', 'BID10']
  }
]
This method of concatenating the grouping fields to generate virtualId works, but is there a way to make it more terse?
Perhaps some way I could convert it to a format that is unreadable by humans but still decodable.
TLDR: I'm looking for a way to create an ID for each result document in the aggregation query itself, one that gives back its contributing fields when I decode it later.
MongoDB Version: 4.0.0
Use this aggregation; it generates the code with a JS function via the $function operator (note that $function requires MongoDB 4.4 or later):
db.collection.aggregate([
{
"$project": {
books: 1,
virtualId: {
"$function": {
"body": "function(a){var t = '';for(i=0;i<a.length;i++){t=a.charCodeAt(i)+t;};return t;}",
"args": [
"$virtualId"
],
"lang": "js"
}
}
}
}
])
https://mongoplayground.net/p/Lm_VjIG54BG
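For reference, here is the same char-code function in plain Node, with one caution: because character codes vary in length and are simply prepended, the result is not uniquely decodable. If you need to recover the contributing fields later, a reversible encoding such as base64 (sketched below with Node's Buffer, as an alternative not taken from the answer above) is safer.

```javascript
// Plain-JS version of the $function body above: prepend each char code.
// Note this is NOT uniquely decodable (codes have variable length).
function charCodeEncode(s) {
  let t = "";
  for (let i = 0; i < s.length; i++) t = s.charCodeAt(i) + t;
  return t;
}

// A reversible alternative: base64 via Node's Buffer.
const encodeId = s => Buffer.from(s, "utf8").toString("base64");
const decodeId = s => Buffer.from(s, "base64").toString("utf8");

console.log(charCodeEncode("AB"));                   // "6665"
console.log(decodeId(encodeId("BC01-BOOKNAME01")));  // round-trips exactly
```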
I have no idea about how to build a query which does this:
I have a collection of users, each user has a field userdata which contains an array of String.
Each string is the string of the ObjectID of other documents (news already seen) in another collection.
I need, knowing the username of this user, to perform a query which gets all the news except those that have already been seen.
I think the $nin operator does what I need but I don't know how to mix it with data from another collection.
Users
  user
    username: String
    userdata: Object
      news: Array of String
News
  news1
    _id: ObjectID
  news2
    _id: ObjectID
EXAMPLE:
Users: [{
username: 'mario',
userdata: {
news: ['10', '11']
}
}]
News: [{
_id: '10',
content: 'hello world10'
},{
_id: '11',
content: 'hello world11'
},{
_id: '12',
content: 'hello world12'
}]
Passing to the query the username (as a String) 'mario', I need to query the collection News and get back only the one with _id '12'.
Thanks
You need to run $lookup with a custom pipeline. There's no $nin expression for aggregations, but you can use $not along with $in. Then you can also apply $unwind with $replaceRoot to promote the filtered News to the root level:
db.Users.aggregate([
{ $match: { username: "mario" } },
{
$lookup: {
from: "News",
let: { user_news: "$userdata.news" },
pipeline: [{ $match: { $expr: { $not: { $in: [ "$_id", "$$user_news" ] } } } }],
as: "filteredNews"
}
},
{ $unwind: "$filteredNews" },
{ $replaceRoot: { newRoot: "$filteredNews" }}
])
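The $expr filter inside the $lookup pipeline behaves like this plain-JavaScript filter over the sample data from the question (illustration only, not a MongoDB API):

```javascript
// Plain-JS model of the lookup's pipeline: keep only news whose _id
// is NOT in the user's seen list ($not wrapping $in in the aggregation).
function unseenNews(user, news) {
  const seen = user.userdata.news;
  return news.filter(n => !seen.includes(n._id));
}

const users = [{ username: "mario", userdata: { news: ["10", "11"] } }];
const news = [
  { _id: "10", content: "hello world10" },
  { _id: "11", content: "hello world11" },
  { _id: "12", content: "hello world12" }
];
console.log(unseenNews(users[0], news)); // only the news with _id "12"
```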
I have a large collection called posts, like so:
[{
  _id: "349348jf49rk",
  user: "frje93u45t",
  comments: [{
    _id: "fks9272ewt",
    user: "49wnf93hr9",
    comment: "Hello world"
  }, {
    _id: "j3924je93h",
    user: "49wnf93hr9",
    comment: "Heya"
  }, {
    _id: "30283jt9dj",
    user: "dje394ifjef",
    comment: "Text"
  }, {
    _id: "dkw9278467",
    user: "fgsgrt245",
    comment: "Hola"
  }, {
    _id: "4irt8ej4gt",
    user: "49wnf93hr9",
    comment: "Test"
  }]
}]
My comments subdocument can sometimes be hundreds of documents long. My question is: how can I return just the 3 newest comments (based on the ID) instead of all of them, and return the total count as totalNumberOfComments? I need to do this for hundreds of posts at a time. This is what the final result would look like:
[{
  _id: "349348jf49rk",
  user: "frje93u45t",
  totalNumberOfComments: 5,
  comments: [{
    _id: "fks9272ewt",
    user: "49wnf93hr9",
    comment: "Hello world"
  }, {
    _id: "j3924je93h",
    user: "49wnf93hr9",
    comment: "Heya"
  }, {
    _id: "30283jt9dj",
    user: "dje394ifjef",
    comment: "Text"
  }]
}]
I understand that this could be done by splicing after MongoDB returns the data, but I think it would be best to do it within the query itself, so that Mongo doesn't have to return every comment for every single post.
Does this solve your problem? Try plugging in your _id values, see what you are missing, and post it here.
Begin with this query:
db.collection.aggregate([
  { $match: { _id: "349348jf49rk" } },
  { $project: {
      _id: 1,
      user: 1,
      totalNumberOfComments: { $size: "$comments" },
      comments: { $slice: ["$comments", 3] }
  }}
])
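In plain JavaScript, the $size/$slice combination in that $project stage corresponds to the sketch below (sample data abbreviated from the question; $slice with a positive limit takes from the front of the array, matching the expected output):

```javascript
// Plain-JS model of the $project stage: total comment count plus the
// first three comments.
function summarize(post) {
  return {
    _id: post._id,
    user: post.user,
    totalNumberOfComments: post.comments.length, // $size
    comments: post.comments.slice(0, 3)          // $slice: ["$comments", 3]
  };
}

const post = {
  _id: "349348jf49rk",
  user: "frje93u45t",
  comments: [{ comment: "Hello world" }, { comment: "Heya" },
             { comment: "Text" }, { comment: "Hola" }, { comment: "Test" }]
};
console.log(summarize(post));
```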