MongoDB - Subtract

Basically I want to subtract finish - start for each object.
So in the example below, the output should be 2 and 4 ((7-5=2) and (8-4=4)), and then I want to add both of those values as fields to my existing documents.
How can I do this in MongoDB?
{
"title" : "The Hobbit",
"rating_average" : "???",
"ratings" : [
{
"title" : "best book ever",
"start" : 5,
"finish" : 7
},
{
"title" : "good book",
"start" : 4,
"finish" : 8
}
]
}

Not sure exactly what the requirements for the query output are, but this might help.
Query:
db.books.aggregate([
{ $unwind: "$ratings" },
{ $addFields: { "ratings.diff": { $subtract: [ "$ratings.finish", "$ratings.start" ] } } }
])
Pipeline Explanation:
Unwind the array, because $subtract cannot act on array data.
Add a new field called diff to each 'ratings' subdocument that holds the calculated value.
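For the document in the question, that pipeline should return something along these lines (one document per rating, any _id field omitted for brevity):
{ "title" : "The Hobbit", "rating_average" : "???", "ratings" : { "title" : "best book ever", "start" : 5, "finish" : 7, "diff" : 2 } }
{ "title" : "The Hobbit", "rating_average" : "???", "ratings" : { "title" : "good book", "start" : 4, "finish" : 8, "diff" : 4 } }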
EDIT:
The OP asks for the results to be stored in the same subdocument. After discussions with friends and family I discovered this is possible, but only with MongoDB version 4.2 or later. Here is an update statement that achieves the desired result. Note that this is an update, not an aggregation.
Update: (MongoDB 4.2 or later specific)
db.books.update(
{},
[
{
$replaceWith: {
ratings: {
$map: {
input: "$ratings",
in: {
start: "$$this.start",
finish: "$$this.finish",
diff: {
$subtract: [
"$$this.finish",
"$$this.start"
]
}
}
}
}
}
}
])
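Note that $replaceWith replaces each document with only the fields you list, so the statement above drops title and rating_average. If you want to keep the rest of the document intact, a minimal alternative sketch (also MongoDB 4.2+) is a $set pipeline stage with $mergeObjects over the same $map:
db.books.updateMany({}, [
  {
    $set: {
      ratings: {
        $map: {
          input: "$ratings",
          in: {
            $mergeObjects: [
              "$$this",
              { diff: { $subtract: [ "$$this.finish", "$$this.start" ] } }
            ]
          }
        }
      }
    }
  }
])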

Related

NoSQL aggregation query on Shakespeare's dataset

I'm trying to learn NoSQL aggregation queries, and here is the dataset structure (collection name: shakespeare_plays):
{
"_id" : "Romeo and Juliet",
"acts" : [
{
"title" : "ACT I",
"scenes" : [
{
"title" : "SCENE I. Verona. A public place.",
"action" : [
{
"character" : "SAMPSON",
"says" : [
"Gregory, o' my word, we'll not carry coals."
]
},
{
"character" : "GREGORY",
"says" : [
"No, for then we should be colliers."
]
},
// ...
{
"character" : "GREGORY",
"says" : [
"To move is to stir; and to be valiant is to stand:",
"therefore, if thou art moved, thou runn'st away."
]
},
{
"character" : "SAMPSON",
"says" : [
"A dog of that house shall move me to stand: I will",
"take the wall of any man or maid of Montague's."
]
},
{
"character" : "GREGORY",
"says" : [
"That shows thee a weak slave; for the weakest goes",
"to the wall."
]
},
// ...
]
},
// ...
]
},
// ...
]
}
The tasks I am trying to do:
Which characters are found in more than one play?
How many lines (replies) does Juliet have?
How many characters are in Othello?
Any tips on how to do this via aggregate?
You're on the right track. Here are some queries to achieve your goal.
From where you are right now, you can get a list of all characters by adding a $group stage:
db.getCollection('shakespeare_plays').aggregate([{
$unwind: "$acts"
}, {
$unwind: "$acts.scenes"
}, {
$unwind: "$acts.scenes.action"
}, {
$group: {
_id: "$acts.scenes.action.character"
}
}])
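For the sample fragment above, this returns one document per distinct character, e.g.:
{ "_id" : "SAMPSON" }
{ "_id" : "GREGORY" }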
Going further, if you want to see how many times each character appears, you can use the $sum operator inside $group:
db.getCollection('shakespeare_plays').aggregate([{
$unwind: "$acts"
}, {
$unwind: "$acts.scenes"
}, {
$unwind: "$acts.scenes.action"
}, {
$group: {
_id: "$acts.scenes.action.character",
count: {$sum: 1}
}
}])
//Results : [{ "_id" : "GREGORY", "count" : 4 }]
You can export the results to an array and perform whatever logic you need on them, which will give you all the answers you were after:
var myResults = db.getCollection('shakespeare_plays').aggregate([pipelineQuery]).toArray();
//Here you can perform any logic on the variable myResults in your programming language
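You can also answer the first two tasks directly in the pipeline. A rough sketch, assuming each document's _id is the play title as in the sample above:
// Task 1: characters found in more than one play
db.getCollection('shakespeare_plays').aggregate([
  { $unwind: "$acts" },
  { $unwind: "$acts.scenes" },
  { $unwind: "$acts.scenes.action" },
  { $group: {
      _id: "$acts.scenes.action.character",
      plays: { $addToSet: "$_id" }              // the document _id is the play title
  } },
  { $match: { "plays.1": { $exists: true } } }  // i.e. appears in 2+ plays
])
// Task 2: how many lines Juliet has (counting action entries; use the "says"
// arrays instead if you want to count individual sentences)
db.getCollection('shakespeare_plays').aggregate([
  { $unwind: "$acts" },
  { $unwind: "$acts.scenes" },
  { $unwind: "$acts.scenes.action" },
  { $match: { "acts.scenes.action.character": "JULIET" } },
  { $count: "lines" }
])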
Read more about $group and $sum

Project values of different columns into one field

{
"_id" : ObjectId("5ae84dd87f5b72618ba7a669"),
"main_sub" : "MATHS",
"reporting" : [
{
"teacher" : "ABC"
}
],
"subs" : [
{
"sub" : "GEOMETRIC",
"teacher" : "XYZ",
}
]
}
{
"_id" : ObjectId("5ae84dd87f5b72618ba7a669"),
"main_sub" : "SOCIAL SCIENCE",
"reporting" : [
{
"teacher" : "XYZ"
}
],
"subs" : [
{
"sub" : "CIVIL",
"teacher" : "ABC",
}
]
}
I have simplified the structure of the documents that I have.
The basic structure is that I have a parent subject with an array of reporting teachers and an array of sub-subjects (each having a teacher).
I now want to extract all the subjects (parent and sub-subjects) taught by a particular teacher, along with a flag indicating whether each one is a sub-subject or not.
For example, for teacher ABC I want the following structure:
[{'subject':'MATHS', 'is_parent':'True'}, {'subject':'CIVIL', 'is_parent':'FALSE'}]
-- What is the most efficient query possible? I have tried $project with $cond and $switch, but in both cases I had to repeat the conditional statement for 'subject' and 'is_parent'.
-- Is it advisable to do the computation in a query, or should I get the data dump and then modify the structure in the server code? As in, I could $unwind, get a mapping of the parent subjects with each sub-subject, and then do a for loop.
I have tried
db.collection.aggregate(
{$unwind:'$reporting'},
{$project:{
'result':{$cond:[
{$eq:['ABC', '$reporting.teacher']},
"$main_sub",
"$subs.sub"]}
}}
)
Then I realised that even if I transform the else part into another query for the sub-subjects, I will have to write the exact same thing for the is_parent property.
You have 2 arrays, so you need to unwind both: the reporting and the subs.
After that stage each document will have at most one parent teacher-subject pair and at most one sub teacher-subject pair.
You then build a combined array and unwind it again to get a single teacher-subject pair per document; that is where you define whether it is a parent or not.
Then you can group by teacher. No need for $cond, $filter, or $facet. E.g.:
db.collection.aggregate([
{ $unwind: "$reporting" },
{ $unwind: "$subs" },
{ $project: {
teachers: [
{ teacher: "$reporting.teacher", sub: "$main_sub", is_parent: true },
{ teacher: "$subs.teacher", sub: "$subs.sub", is_parent: false }
]
} },
{ $unwind: "$teachers" },
{ $group: {
_id: "$teachers.teacher",
subs: { $push: {
subject: "$teachers.sub",
is_parent: "$teachers.is_parent"
} }
} }
])
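If you only need the structure from the question for one teacher (ABC in the example), a variation of the same pipeline can $match on the combined array and reshape at the end. A sketch:
db.collection.aggregate([
  { $unwind: "$reporting" },
  { $unwind: "$subs" },
  { $project: {
      teachers: [
        { teacher: "$reporting.teacher", sub: "$main_sub", is_parent: true },
        { teacher: "$subs.teacher", sub: "$subs.sub", is_parent: false }
      ]
  } },
  { $unwind: "$teachers" },
  { $match: { "teachers.teacher": "ABC" } },
  { $project: { _id: 0, subject: "$teachers.sub", is_parent: "$teachers.is_parent" } }
])
For the two sample documents this yields { "subject" : "MATHS", "is_parent" : true } and { "subject" : "CIVIL", "is_parent" : false }.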

Compare a date of two elements

My problem is difficult to explain:
On my website I save every action of my visitors (view, click, buy, etc.).
I have a simple collection named "flow" where this data is stored:
{
"_id" : ObjectId("534d4a9a37e4fbfc0bf20483"),
"profile" : ObjectId("534bebc32939ffd316a34641"),
"activities" : [
{
"id" : ObjectId("534bebc42939ffd316a3af62"),
"date" : ISODate("2013-12-13T22:39:45.808Z"),
"verb" : "like",
"product" : "5"
},
{
"id" : ObjectId("534bebc52939ffd316a3f480"),
"date" : ISODate("2013-12-20T19:19:10.098Z"),
"verb" : "view",
"product" : "6"
},
{
"id" : ObjectId("534bebc32939ffd316a3690f"),
"date" : ISODate("2014-01-01T07:11:44.902Z"),
"verb" : "buy",
"product" : "5"
},
{
"id" : ObjectId("534bebc42939ffd316a3741b"),
"date" : ISODate("2014-01-11T08:49:02.684Z"),
"verb" : "favorite",
"product" : "26"
}
]
}
I would like to aggregate these data to retrieve the number of people who performed one action (for example "view") and then another one later in time (for example "buy"). To do that I need to compare "date" inside my "activities" array...
I tried to use the aggregation framework to do that, but I do not see how to build this request.
This is my beginning:
db.flows.aggregate([
{ $project: { profile: 1, activities: 1, _id: 0 } },
{ $match: { $and: [{'activities.verb': 'view'}, {'activities.verb': 'buy'}] }}, //First verb + second verb
{ $unwind: '$activities' },
{ $match: { 'activities.verb': {$in:['view', 'buy']} } }, //First verb + second verb,
{
$group: {
_id: '$profile',
view: { $push: { $cond: [ { $eq: [ "$activities.verb", "view" ] } , "$activities.date", null ] } },
buy: { $push: { $cond: [ { $eq: [ "$activities.verb", "buy" ] } , "$activities.date", null ] } }
}
}
])
Maybe the format of my collection "flow" is not the best for what I want... If you have any better idea, don't hesitate to share it.
Thank you for your help!
Here is the aggregation that will give you the total number of buyers who viewed first and then bought (though not necessarily the same product that they viewed).
db.flow.aggregate(
{$match: {"activities.verb":{$all:["view","buy"]}}},
{$unwind :"$activities"},
{$match: {"activities.verb":{$in:["view","buy"]}}},
{$group: {
_id:"$_id",
firstViewed:{$min:{$cond:{
if:{$eq:["$activities.verb","view"]},
then : "$activities.date",
else : new Date(9999,0,1)
}}},
lastBought: {$max:{$cond:{
if:{$eq:["$activities.verb","buy"]},
then:"$activities.date",
else:new Date(1900,0,1)}
}}}
},
{$project: {viewedThenBought:{$cond:{
if:{$gt:["$lastBought","$firstViewed"]},
then:1,
else:0
}}}},
{$group:{_id:null,totalViewedThenBought:{$sum:"$viewedThenBought"}}}
)
Here you first pass through the pipeline only the documents that have all the "verbs" you are interested in. When you group the first time, you want to use the earliest "view" and the latest "buy", and the next $project compares them to see whether they viewed before they bought.
The last step gives you the count of all the people who satisfied your criteria.
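For the single sample document in the question (a "view" on 2013-12-20 followed by a "buy" on 2014-01-01), the pipeline returns something like:
{ "_id" : null, "totalViewedThenBought" : 1 }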
Be careful to leave out all $project phases that don't actually compute any new fields (like your very first $project). The aggregation framework is smart enough to never pass through any fields that it sees are not used in any later stages, so there is never a need to $project just to "eliminate" fields, as that will happen automatically.
For your query:
I would like to aggregate these data to retrieve the number of people who made an action
Try this:
db.flows.aggregate([
// De-normalize the array into individual documents
{"$unwind" : "$activities"},
// Match for the verbs you are interested in
{"$match" : {"activities.verb":{$in:["buy", "view"]}}},
// Group by verb to get the count
{"$group" : {_id:"$activities.verb", count:{$sum:1}}}
])
The above query would produce an output like:
{
"result" : [
{
"_id" : "buy",
"count" : 1
},
{
"_id" : "view",
"count" : 1
}
],
"ok" : 1
}
Note: since both conditions in your $match ({ $match: { $and: [{'activities.verb': 'view'}, {'activities.verb': 'buy'}] }}) are on the same field, you could also express it with $all ({'activities.verb': {$all: ['view', 'buy']}}). For conditions on different fields an explicit $and is not required, as AND is the default when you specify multiple conditions; the $or operator is only needed when you want a logical OR.
If you want to use the date in the aggregation query to do queries like "how many views per day", etc., the Date Aggregation Operators will come in handy.
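For example, a rough sketch of a "views per day" count with $dateToString (available since MongoDB 3.0):
db.flows.aggregate([
  { $unwind: "$activities" },
  { $match: { "activities.verb": "view" } },
  { $group: {
      _id: { $dateToString: { format: "%Y-%m-%d", date: "$activities.date" } },
      views: { $sum: 1 }
  } },
  { $sort: { _id: 1 } }
])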
I see where you are going with this and I think you are basically on the right track. So here it is more or less unaltered (except for formatting preference), with a few tweaks at the end:
db.flows.aggregate([
// Try to $match "first" always to make sure you can get an index
{ "$match": {
"$and": [
{"activities.verb": "view"},
{"activities.verb": "buy"}
]
}},
// Don't worry, the optimizer "sees" this and will sort of "blend" with
// with the first stage.
{ "$project": {
"profile": 1,
"activities": 1,
"_id": 0
}},
{ "$unwind": "$activities" },
{ "$match": {
"activities.verb": { "$in":["view", "buy"] }
}},
{ "$group": {
"_id": "$profile",
"view": { "$min": { "$cond": [
{ "$eq": [ "$activities.verb", "view" ] },
"$activities.date",
null
]}},
"buy": { "$max": { "$cond": [
{ "$eq": [ "$activities.verb", "buy" ] },
"$activities.date",
null
]}}
}},
{ "$project": {
"viewFirst": { "$lt": [ "$view", "$buy" ] }
}}
])
So essentially the $min and $max operators should be self-explanatory in this context, in that you are looking for the "first" view to correspond with the "last" purchase. If it were me, and it would make sense, you would actually be matching these by product (hint: "Grouping"), but I'll leave that part up to you.
The other advantage here is that the null values are always discarded by $min and $max when there is an actual date matching the "verb". Otherwise the comparison comes through as false, and that turns out to be okay.
That is because the next thing you do is $project to "compare" the values and ask the question "Did the 'view' happen before the 'buy'?" which is a logical evaluation of the "less than" $lt operator.
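For the sample document this produces something like:
{ "_id" : ObjectId("534bebc32939ffd316a34641"), "viewFirst" : true }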
As for the schema itself: if you are storing a lot of these "events" then you are probably better off flattening things out into separate documents and finding some way to mark each with the same "session" identifier, if that is separate from "profile".
Getting away from large arrays (which this seems to lead to) is likely going to help performance and, with care, makes little difference to the aggregation process.

way to update multiple documents with different values

I have the following documents:
[
{ "_id": 1, "name": "john", "position": 1 },
{ "_id": 2, "name": "bob", "position": 2 },
{ "_id": 3, "name": "tom", "position": 3 }
]
In the UI a user can change the position of items (e.g. moving Bob to first position, so John gets position 2 and Tom position 3).
Is there any way to update all positions in all documents at once?
You cannot update two documents at once with a single MongoDB query. You will always have to do that in two queries. You can of course set a field to the same value, or increment it by the same number, but you cannot do two distinct updates in MongoDB with the same query.
You can use db.collection.bulkWrite() to perform multiple operations in bulk. It has been available since MongoDB 3.2.
It is possible to perform the operations out of order (ordered: false) to increase performance.
From MongoDB 4.2 you can do this using a pipeline in update with the $set operator.
There are many ways to do it now thanks to the aggregation pipeline operators; I am providing one of them:
exports.updateDisplayOrder = async keyValPairArr => {
  try {
    // keyValPairArr is an array of { id, displayOrder } pairs (see example below)
    let data = await ContestModel.collection.update(
      { _id: { $in: keyValPairArr.map(o => o.id) } },
      [{
        $set: {
          displayOrder: {
            $let: {
              // pick the pair whose id matches this document's _id
              vars: {
                obj: {
                  $arrayElemAt: [
                    { $filter: { input: keyValPairArr, as: "kvpa", cond: { $eq: ["$$kvpa.id", "$_id"] } } },
                    0
                  ]
                }
              },
              in: "$$obj.displayOrder"
            }
          }
        }
      }],
      { runValidators: true, multi: true }
    );
    return data;
  } catch (error) {
    throw error;
  }
};
An example key-value pair array is: [{"id":"5e7643d436963c21f14582ee","displayOrder":9}, {"id":"5e7643e736963c21f14582ef","displayOrder":4}]
Since MongoDB 4.2, update can accept an aggregation pipeline as its second argument, allowing modification of multiple documents based on their own data.
See https://docs.mongodb.com/manual/reference/method/db.collection.update/#modify-a-field-using-the-values-of-the-other-fields-in-the-document
Excerpt from documentation:
Modify a Field Using the Values of the Other Fields in the Document
Create a members collection with the following documents:
db.members.insertMany([
{ "_id" : 1, "member" : "abc123", "status" : "A", "points" : 2, "misc1" : "note to self: confirm status", "misc2" : "Need to activate", "lastUpdate" : ISODate("2019-01-01T00:00:00Z") },
{ "_id" : 2, "member" : "xyz123", "status" : "A", "points" : 60, "misc1" : "reminder: ping me at 100pts", "misc2" : "Some random comment", "lastUpdate" : ISODate("2019-01-01T00:00:00Z") }
])
Assume that instead of separate misc1 and misc2 fields, you want to gather these into a new comments field. The following update operation uses an aggregation pipeline to:
add the new comments field and set the lastUpdate field.
remove the misc1 and misc2 fields for all documents in the collection.
db.members.update(
{ },
[
{ $set: { status: "Modified", comments: [ "$misc1", "$misc2" ], lastUpdate: "$$NOW" } },
{ $unset: [ "misc1", "misc2" ] }
],
{ multi: true }
)
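Applied to the documents in this question, a minimal sketch (MongoDB 4.2+) could compute each new position inside a single updateMany, for instance with $switch. Here db.collection and the hard-coded positions are placeholders for your collection name and for whatever new order the user chose:
db.collection.updateMany(
  { _id: { $in: [ 1, 2, 3 ] } },
  [
    { $set: {
        position: {
          $switch: {
            branches: [
              { case: { $eq: [ "$_id", 2 ] }, then: 1 },  // bob moves to first position
              { case: { $eq: [ "$_id", 1 ] }, then: 2 },  // john moves to second
              { case: { $eq: [ "$_id", 3 ] }, then: 3 }   // tom stays third
            ],
            default: "$position"
          }
        }
    } }
  ]
)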
Suppose that after updating the positions your array looks like this:
const objectToUpdate = [{
"_id":1,
"name":"john",
"position":2
},
{
"_id":2,
"name":"bob",
"position":1
},
{
"_id":3,
"name":"tom",
"position":3
}].map( eachObj => {
return {
updateOne: {
filter: { _id: eachObj._id },
update: { $set: { name: eachObj.name, position: eachObj.position } }
}
}
})
YourModelName.bulkWrite(objectToUpdate,
{ ordered: false }
).then((result) => {
console.log(result);
}).catch(err=>{
console.log(err.result.result.writeErrors[0].err.op.q);
})
It will update all positions with different values.
Note: I have used ordered: false here for better performance.

Mongo order by length of array

Let's say I have Mongo documents like this:
Question 1
{
answers:[
{content: 'answer1'},
{content: '2nd answer'}
]
}
Question 2
{
answers:[
{content: 'answer1'},
{content: '2nd answer'},
{content: 'The third answer'}
]
}
Is there a way to order the collection by the size of answers?
After a little research I saw suggestions to add another field that contains the number of answers and use it for sorting, but maybe there is a native way to do it?
I thought you might be able to use $size, but that's only to find arrays of a certain size, not ordering.
From the mongo documentation:
http://www.mongodb.org/display/DOCS/Advanced+Queries#AdvancedQueries-%24size
You cannot use $size to find a range of sizes (for example: arrays with more than 1 element). If you need to query for a range, create an extra size field that you increment when you add elements. Indexes cannot be used for the $size portion of a query, although if other query expressions are included indexes may be used to search for matches on that portion of the query expression.
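If you go the extra-field route the docs describe, the bookkeeping is just a matter of incrementing the counter whenever you push an answer. A sketch (the questions collection, answers_count field, and questionId variable are made-up names here):
// keep answers_count in sync when adding an answer
db.questions.updateOne(
  { _id: questionId },
  {
    $push: { answers: { content: "a new answer" } },
    $inc: { answers_count: 1 }
  }
)
// then sorting (and indexing) on the counter is straightforward
db.questions.find().sort({ answers_count: -1 })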
Looks like you can probably fairly easily do this with the new aggregation framework, edit: which isn't out yet.
http://www.mongodb.org/display/DOCS/Aggregation+Framework
Update: Now the Aggregation Framework is out...
> db.test.aggregate([
{$unwind: "$answers"},
{$group: {_id:"$_id", answers: {$push:"$answers"}, size: {$sum:1}}},
{$sort:{size:1}}]);
{
"result" : [
{
"_id" : ObjectId("5053b4547d820880c3469365"),
"answers" : [
{
"content" : "answer1"
},
{
"content" : "2nd answer"
}
],
"size" : 2
},
{
"_id" : ObjectId("5053b46d7d820880c3469366"),
"answers" : [
{
"content" : "answer1"
},
{
"content" : "2nd answer"
},
{
"content" : "The third answer"
}
],
"size" : 3
}
],
"ok" : 1
}
I use $project for this:
db.test.aggregate([
{
$project : { answers_count: {$size: { "$ifNull": [ "$answers", [] ] } } }
},
{
$sort: {"answers_count":1}
}
])
It also allows including documents where the answers field is missing or null.
But it also has a disadvantage (or sometimes an advantage): you have to manually add all the needed fields in the $project step.
You can use the MongoDB aggregation stage $addFields, which adds an extra field to store the count, followed by a $sort stage. Unlike $project, $addFields keeps all the existing fields of the document.
db.test.aggregate([
{
$addFields: { answers_count: {$size: { "$ifNull": [ "$answers", [] ] } } }
},
{
$sort: {"answers_count":1}
}
])
You can use the $size operator to order by array length (note that $size errors on documents where answers is missing; the $ifNull variants above handle that case).
db.getCollection('test').aggregate([
{$project: { "answers": 1, "answer_count": { $size: "$answers" } }},
{$sort: {"answer_count": -1}}])