I'm new to Mongo. By new I mean a couple of hours new.
Basically I have this document structure:
{
_id: ObjectId("614513461af3bf569fdc420e"),
item: 'postcard',
status: 'A',
size: { h: 10, w: 15.25, uom: 'cm' },
instock: [ { warehouse: 'B', qty: 15 }, { warehouse: 'C', qty: 35 } ]
}
I would like, if possible, to extract a particular field (i.e. its value) from the last element of instock. In this case I just need to extract 35, i.e. the qty field.
I have managed to do this:
db.offer.find( { _id: ObjectId("614513461af3bf569fdc420e") }, { instock: 1, _id: 0} )
Which results in:
{ instock: [ { warehouse: 'B', qty: 15 }, { warehouse: 'C', qty: 35 } ] }
I don't know how to reach the last object in the array and then its qty field, and everything needs to be a single query.
Aggregate solution
(requires MongoDB 5.0 for $getField; on older versions the query would be a little bigger, see the sketch after the code)
Query
filter for the _id with the $match stage
get the last element of $instock, and then its qty field
project to keep only that part
*we do it like we would in a programming language: get the last element, then get a field value.
db.collection.aggregate([
{"$match": {"_id": ObjectId("614513461af3bf569fdc420e")}},
{
"$project": {
"_id": 0,
"qty": {"$getField": {"field": "qty","input": {"$last": "$instock"}}}
}
}
])
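For what it's worth, here is a pre-5.0 variant (my own sketch, not part of the answer above): without $getField you can let "$instock.qty" resolve to the array of qty values and take its last element with $arrayElemAt, which only needs a much older MongoDB version.
db.collection.aggregate([
  {"$match": {"_id": ObjectId("614513461af3bf569fdc420e")}},
  {
    "$project": {
      "_id": 0,
      // "$instock.qty" resolves to [15, 35]; index -1 picks the last element
      "qty": {"$arrayElemAt": ["$instock.qty", -1]}
    }
  }
])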
Related
Currently stuck with an issue using MongoDB aggregation. I have an array of _ids that I need to check exist in a specific collection.
Example:
I have 3 records in 'Collection 1' with _id 1,2,3. I can find the matching values using:
$match: {
_id: {
$in: [1, 2, 3, 4]
}
}
However, what I want to know is: of the values I passed in (1, 2, 3, 4), which ones don't match up to a record? (In this case _id 4 will not have a matching record.)
So instead of returning the records with _id 1, 2, 3, it needs to return the _id that doesn't exist. So in this example '_id: 4'.
The query should also disregard any extra records in the collection. For example, if the collection held records with _id 1-10 and I passed in a query to determine whether the _ids 1, 7, 15 existed, the value I'm expecting would be along the lines of '_id: 15 doesn't exist'.
The first thought was to use $project within an aggregation to hold each _id that was passed in, and then attach each record in the collection to the matching _id passed in. E.g.:
Record 1:
{
_id: 1,
Collection1: [
record details: ...,
...
...
]
},
{
_id: 2,
Collection1: [] // This _id passed in, doesn't have a matching collection
}
However I can't seem to get a working example in this instance. Any help would be appreciated!
If the input documents are:
{ _id: 1 },
{ _id: 2 },
{ _id: 5 },
{ _id: 10 }
And the array to match is:
var INPUT_ARRAY = [ 1, 7, 15 ]
The following aggregation:
db.test.aggregate( [
{
$match: {
_id: {
$in: INPUT_ARRAY
}
}
},
{
$group: {
_id: null,
matches: { $push: "$_id" }
}
},
{
$project: {
ids_not_exist: { $setDifference: [ INPUT_ARRAY, "$matches" ] },
_id: 0
}
}
] )
Returns:
{ "ids_not_exist" : [ 7, 15 ] }
Are you looking for $not? See the MDB docs.
When I use db.collection.insert(document) where the inserted document is an array, and one element of the array causes a duplicate-key error, can all the other elements in the array still be successfully inserted into the collection?
If you set the ordered option to false, the insert statement will insert all documents except the duplicates.
For example:
db.products.insertMany( [
{ _id: 10, item: "large box", qty: 20 },
{ _id: 11, item: "small box", qty: 55 },
{ _id: 11, item: "medium box", qty: 30 },
{ _id: 12, item: "envelope", qty: 100},
{ _id: 13, item: "stamps", qty: 125 },
{ _id: 13, item: "tape", qty: 20},
{ _id: 14, item: "bubble wrap", qty: 30}
], { ordered: false } );
In the above insert statement, all documents will be inserted except the duplicate _ids 11 and 13.
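One detail worth adding (standard shell/driver behaviour as far as I know, not something stated above): even with ordered: false the call still raises a bulk-write error describing the duplicate keys, so in a script you would typically wrap it in try/catch; the non-duplicate documents are already written by the time the error surfaces. A rough sketch:
try {
  db.products.insertMany( [
    { _id: 10, item: "large box", qty: 20 },
    { _id: 11, item: "small box", qty: 55 },
    { _id: 11, item: "medium box", qty: 30 },   // duplicate _id
    { _id: 13, item: "stamps", qty: 125 },
    { _id: 13, item: "tape", qty: 20 }          // duplicate _id
  ], { ordered: false } );
} catch (e) {
  // the duplicate-key failures are reported here, but the unique
  // documents (_id 10, 11, 13) have already been inserted
  printjson(e.writeErrors || e);
}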
So, I have the following schema; it's an index of car parts that is used to automatically find the part types according to the keywords.
nameEnglish: {
type: String
},
keywords: [{
type: String
}]
I have two documents in the database:
[
{ _id: 1, nameEnglish: 'abcdef', keywords: ['a', 'b', 'c', 'd', 'e', 'f'] },
{ _id: 2, nameEnglish: 'cde', keywords: ['c', 'd', 'e'] }
]
What I want to do now is query this collection and get the document with the MOST matched keywords.
myArr = ['b', 'c', 'f'];
db.collection.find({ keywords: { $in: myArr } });
I want this to always return the first document, since it has 3 matched keywords, and the second has only one. How can I achieve this?
You can try the aggregation below: $setIntersection finds the matching elements between the keywords array and the input array, $size counts those matching elements, and a descending $sort with $limit: 1 outputs the document with the most matches.
You can drop the count field by adding a project exclusion {$project:{count:0}} as the final stage.
db.col.aggregate(
[{
$addFields: {
"count": {
$size: {
$setIntersection: ["$keywords", myArr]
}
}
}
}, {
$sort: {
"count": -1
}
}, {
$limit: 1
}]
)
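For the two documents in the question and myArr = ['b', 'c', 'f'], this should return something along the lines of:
{ "_id": 1, "nameEnglish": "abcdef", "keywords": [ "a", "b", "c", "d", "e", "f" ], "count": 3 }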
You need to use aggregation with $setIntersection to get the number of matches, sort by the number of matches, and limit to the expected number of results.
db.col.aggregate([
{$addFields : {noOfMatch :{$size :{$setIntersection : ["$keywords", ['b', 'c', 'f']]}}}},
{$sort : {"noOfMatch" : -1}},
{$limit : 1}
])
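A small defensive variant (my own addition, assuming some documents might be missing the keywords field entirely): $setIntersection returns null when one input is missing, and $size then errors, so wrapping the field in $ifNull avoids that.
db.col.aggregate([
  // $ifNull substitutes an empty array when "keywords" is missing or null
  {$addFields : {noOfMatch : {$size : {$setIntersection : [{$ifNull : ["$keywords", []]}, ['b', 'c', 'f']]}}}},
  {$sort : {"noOfMatch" : -1}},
  {$limit : 1}
])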
I have a collection where each document contains user_ids as a property, which is an array field. Example documents would be:
[{
_id: 'i3oi1u31o2yi12o3i1',
unique_prop: 33,
prop1: 'some string value',
prop2: 212,
user_ids: [1, 2, 3 ,4]
},
{
_id: 'i3oi1u88ffdfi12o3i1',
unique_prop: 34,
prop1: 'some string value',
prop2: 216,
user_ids: [2, 3 ,4]
},
{
_id: 'i3oi1u8834432ddsda12o3i1',
unique_prop: 35,
prop1: 'some string value',
prop2: 211,
user_ids: [2]
}]
My goal is to get the number of documents per user, so a sample output would be:
[
{user_id: 1, count: 1},
{user_id: 2, count: 3},
{user_id: 3, count: 2},
{user_id: 4, count: 2}
]
I've tried a couple of things, none of which worked; lastly I tried:
aggregate([
{ $group: {
_id: { unique_prop: "$unique_prop"},
users: { "$addToSet": "$user_ids" },
count: { "$sum": 1 }
}}
])
But it just returned the users per document. I'm still trying to learn, so any resource or advice would help.
You need to $unwind the "user_ids" array, and in the $group stage count the number of times each "id" appears in the collection.
db.collection.aggregate([
{ "$unwind": "$user_ids" },
{ "$group": { "_id": "$user_ids", "count": {"$sum": 1 }}}
])
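If you want the output shaped exactly like the sample in the question ({user_id, count} instead of {_id, count}), a $project rename and an optional $sort can be appended; this is just a sketch on top of the pipeline above:
db.collection.aggregate([
  { "$unwind": "$user_ids" },
  { "$group": { "_id": "$user_ids", "count": { "$sum": 1 } } },
  // rename _id to user_id so the documents match the requested shape
  { "$project": { "_id": 0, "user_id": "$_id", "count": 1 } },
  { "$sort": { "user_id": 1 } }
])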
MongoDB aggregation performs computations on groups of values from documents in a collection and returns computed results by executing its stages in a pipeline.
Based on the description above, please try executing the following aggregate query in the MongoDB shell.
db.collection.aggregate(
// Pipeline
[
// Stage 1
{
$unwind: "$user_ids"
},
// Stage 2
{
$group: {
_id:{user_id:'$user_ids'},
total:{$sum:1}
}
},
// Stage 3
{
$project: {
_id:0,
user_id:'$_id.user_id',
count:'$total'
}
},
]
);
In the above aggregate query, the $unwind operator first breaks the array field user_ids of each document into multiple documents, one per array element; the query then groups the documents by the value of the user_ids field and sums the number of documents for each value.
I'm trying to count documents containing
{ date, direction, procedure }, e.g.
{ 'Dec 12', 'West', 'Up' }
and I want as output: for each date, for each direction, a count of each procedure type
Dec 12
North Up 2 Down 3
South Up 4 Down 17
etc
It's fairly easy using JavaScript but I'd like to use MongoDB if possible. I can't get aggregate $group to filter more than one level, and I'm not sure whether map-reduce would help; I don't properly understand either one.
I would appreciate a little guidance. Thanks
Some detail:
It's a schema-less collection but the interesting bits look like this:
{ "_id" : ObjectId(), "direction" : String, "procedure" : String, "date" : String, .... , "format" : "procedure" }
direction: "North" | "East" | "South" | "West"
procedure: "Arrive" | "Depart"
date: "Mmm dd"
.... lots of other stuff
The output is not critical - it could be:
[ { date: "Mmm dd",
direction: { procedure: count, procedure: count },
direction: { procedure: count, ... },
....
}
{ ... }
...
]
e.g:
[ { date: "Dec 12",
"West": { "Arrive": 5, "Depart": 5 },
"East": { "Arrive": 1, "Depart": 7 },
...
},
{ date: ...},
...
]
The more I play with it the more I think it's a bit of a stretch - that could be good advice :-)
This is a solution for your aggregation pipeline:
[{
'$group': {
'_id': {
'date': '$date',
'direction': '$direction',
'procedure': '$procedure'
},
'count': {'$sum': 1}
}
},
{
'$group': {
'_id': '$_id.date',
'directions': {
'$push': {
'direction': '$_id.direction',
'procedure': '$_id.procedure',
'count': '$count'
}
}
}
}]
Giving the following result:
{
_id: "Dec 12",
directions: [
{ "direction": "North", "procedure": "Arrive", "count": 5},
{ "direction": "North", "procedure": "Depar", "count": 3},
{ "direction": "South", "procedure": "Arrive", "count": 1},
...
]
},
...
Explanation
Basically what you are asking for is a count for each (date, direction, procedure) tuple. You just want it reorganized a little; more precisely: grouped by date, with, for each date, all the possible (direction, procedure) pairs and the corresponding count.
So that is exactly what we do:
the first $group stage in the pipeline groups by unique (date, direction, procedure), putting them in the _id field, and counts occurrences; at this stage the output is:
[{
_id: {
date: "Dec 12",
direction: "North",
procedure: "Depar"
},
count: 4
},
...
]
the second $group stage just re-groups the results by date, pushing the other fields (which are embedded in a document at the _id field as a result of the previous stage) into an array at the new directions field, as (direction, procedure, count) tuples sharing the same date.
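If you want to push the result all the way to the nested shape sketched in the question ({ date, West: { Arrive: n, Depart: n }, ... }), one way (my own sketch, assuming MongoDB 3.6+ for $arrayToObject and $mergeObjects) is to add two more regrouping stages and a $replaceRoot:
db.collection.aggregate([
  { '$group': {
      '_id': { 'date': '$date', 'direction': '$direction', 'procedure': '$procedure' },
      'count': { '$sum': 1 }
  } },
  // regroup by (date, direction), collecting { k: procedure, v: count } pairs
  { '$group': {
      '_id': { 'date': '$_id.date', 'direction': '$_id.direction' },
      'procedures': { '$push': { 'k': '$_id.procedure', 'v': '$count' } }
  } },
  // regroup by date, turning each direction's pairs into an embedded object
  { '$group': {
      '_id': '$_id.date',
      'directions': { '$push': { 'k': '$_id.direction', 'v': { '$arrayToObject': '$procedures' } } }
  } },
  // merge the date with the per-direction objects into one flat document
  { '$replaceRoot': {
      'newRoot': { '$mergeObjects': [ { 'date': '$_id' }, { '$arrayToObject': '$directions' } ] }
  } }
])
That should yield documents along the lines of { "date": "Dec 12", "North": { "Arrive": 5, "Depart": 3 }, "South": { ... }, ... }.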