My understanding is that NumberDecimal pads to the hundredths place regardless of the value, and I do see that in the aggregation pipeline, but when I look at the raw JSON, without MongoDB types, it reverts to the tenths place and cuts off the second 0.
I have tried $round with 2 decimal places on the original value 2997:
db.getCollection("collection").aggregate([
{
$set:
{
"column_Decimal":
{
$round: [
{
$arrayElemAt:
["$column", 0]
},
2
]
}
}
}
])
and I see
"column_Decimal" : NumberDecimal("2997.00")
Then I switch the view to JSON Pure and it shows this:
"column_Decimal" : 2997.0
How do I avoid this and get 2997.00 in pure JSON as well?
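A plain JSON number has no notion of trailing zeros, so any pure-JSON view will print 2997.0. One common workaround (not from this thread, just a sketch) is to emit the value as a string, since $toString preserves the decimal's printed precision:
db.getCollection("collection").aggregate([
  {
    $set: {
      "column_Decimal": {
        // $toString keeps the NumberDecimal formatting, e.g. "2997.00"
        $toString: {
          $round: [{ $arrayElemAt: ["$column", 0] }, 2]
        }
      }
    }
  }
])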
Related
Is there a way to remove a certain number of elements from the start of an array in MongoDB?
Suppose I don't know the elements' details (like an id or uuid), but I know that I want to remove the first N elements from the start of the array. Is there a way to do this in MongoDB? I know I can fetch the whole document and process it in my own programming language environment, but I thought it would be nicer if MongoDB already implemented a way to achieve this atomically in its own query language.
There is a $pop operator to remove a single element from either the top or the bottom of an array (a quick sketch follows), and there is a closed JIRA request, SERVER-4798, about popping multiple elements; in its comments the suggestion is to use an update with the aggregation pipeline option.
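For illustration, a $pop that drops one element from the front could look like this (collection and field names are placeholders):
// -1 removes the first element of the array, 1 removes the last
db.collection.updateOne({}, { $pop: { arr: -1 } })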
So starting from MongoDB 4.2 you can try an update with an aggregation pipeline: pass a negative count to $slice and it keeps only the last n elements, dropping the ones at the start:
let n = 2;
db.collection.updateOne(
  {}, // your query
  [
    { $set: { arr: { $slice: ["$arr", -n] } } }
  ]
)
What @turivishal mentioned is true, however it only works when the array always has the same size (4 in the playground example, with n = 2). For it to work for all array sizes we have to take the size of the array into account in the aggregation, so:
let n = 2;
db.collection.update(
  {}, // your query
  [
    {
      $set: {
        arr: {
          $slice: [
            "$arr",
            // n - size is negative, so $slice keeps the last
            // (size - n) elements, i.e. it drops the first n
            { $subtract: [n, { $size: "$arr" }] }
          ]
        }
      }
    }
  ]
)
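With arr = [1, 2, 3, 4, 5] and n = 2, for example, the $subtract evaluates to 2 - 5 = -3, so $slice keeps the last three elements, [3, 4, 5], dropping the first two.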
I want the total number of cases across all my documents.
This is the query I tried to use:
db.coviddatajson.aggregate([
{ $group: { _id: null, total: { $sum: "$total_cases"} } }
])
For some reason the result is 0, which does not make sense; it should be at least 1000+, realistically a few thousand or more.
This is the dataset I am using:
https://covid.ourworldindata.org/data/owid-covid-data.json
What am I doing wrong here?
Any ideas on how to fix this query?
The total_cases field is inside the data array, and $sum in the $group stage requires a numeric field, so we first need to total data.total_cases within each document and then pass that result to the $group stage to compute the overall sum:
db.coviddatajson.aggregate([
  {
    $project: { total_cases: { $sum: "$data.total_cases" } }
  },
  {
    $group: {
      _id: null,
      total: { $sum: "$total_cases" }
    }
  }
])
The data set has some issues.
The document size is bigger than 16 MiB, and you cannot load documents larger than 16 MiB into MongoDB; this is an internal limitation. You would need to split the document into sub-documents.
The document contains data for each country but also summarized data for "World". Do you have to exclude the "World" data, or can you use it instead of a manual summary?
The data is not consistent. For example, some countries do not provide the number of male/female smokers or the median age, and not all countries provide all data for each date, so you may have missing values. How do you deal with them?
Do you want a simple sum of all total_cases? If yes, the query would be easy, but the result would be pointless (15'773'189'214 total cases, twice the population of the world), because total_cases is cumulative and summing it across every date counts each case many times.
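If you do want the per-country sum without the summary entry, a minimal sketch, assuming the imported documents kept OWID's per-country location field:
db.coviddatajson.aggregate([
  // "World" is OWID's own summary entry; drop it before summing
  { $match: { location: { $ne: "World" } } },
  { $project: { total_cases: { $sum: "$data.total_cases" } } },
  { $group: { _id: null, total: { $sum: "$total_cases" } } }
])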
In MongoDB I want to calculate the sum of the partialAmount field, which is of string type; its values are stored as "20,00", "15,00", and so on.
How do I calculate the sum of all values? Both of the queries I have tried return 0.
collection.aggregate([
  {
    $group: {
      _id: null,
      sum: { $sum: "$$partialAmount" }
    }
  }
]);
And:
collection.aggregate([
  {
    $group: {
      _id: null,
      totalAmount: {
        $sum: {
          $toDouble: "$partialAmount"
        }
      }
    }
  }
]);
Your first query is obviously not going to work because you're trying to sum strings ($sum silently ignores non-numeric values and returns 0), and you also have an extra "$" in "$$partialAmount".
Your second query would work if your partialAmount values were stored in the format "15.00" and "20.00".
If they are saved as "15,00" and "20,00" in the db, your second query should throw an error, not return 0. (If you are actually getting a zero result, then maybe your partialAmount field is misspelled in the db, or the field gets lost in a previous stage of the pipeline.)
In this case you need to either change the values in your db to the "20.00" format or, if that is not feasible, use $split and $concat to convert them to the proper format before converting to double and summing up the values.
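A minimal sketch of that conversion, assuming every value contains exactly one comma:
collection.aggregate([
  {
    $group: {
      _id: null,
      totalAmount: {
        $sum: {
          $toDouble: {
            // "20,00" -> ["20", "00"] -> "20.00"
            $concat: [
              { $arrayElemAt: [{ $split: ["$partialAmount", ","] }, 0] },
              ".",
              { $arrayElemAt: [{ $split: ["$partialAmount", ","] }, 1] }
            ]
          }
        }
      }
    }
  }
]);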
I tried to update a field in a document with a long integer, but it was updated to the value '14818435007969200' instead of '14818435007969199'.
db.getCollection('title').updateMany({},
{$set:{'skillId':[NumberLong(14818435007969199)]}})
db.getCollection('title').find({})
{
  "_id" : ObjectId("5853351c0274072315da2426"),
  "skillId" : [
    NumberLong(14818435007969200)
  ]
}
Is there any solution? I am using Robomongo 0.9.0.
The mongo shell treats all numbers as floating-point values, so when using the NumberLong() wrapper, pass the long value as a string or risk loss of precision and conversion mismatches.
This should work as expected:
db.getCollection('title').updateMany({},
{$set:{'skillId':[NumberLong("14818435007969199")]}})
Just to demonstrate: 14818435007969199 cannot be represented exactly as a 64-bit float, so it is rounded to the nearest representable value, 110100101001010100100111000010110001101111011110110000 in binary, which converts back to base 10 as 14818435007969200.
You can read up on floating-point arithmetic for more details.
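You can see the same rounding in the shell itself (the mongo shell is JavaScript, so number literals are doubles); a quick sketch:
// Both literals round to the same 64-bit double, so they compare equal
14818435007969199 === 14818435007969200  // true
// Passing the digits as a string preserves them exactly
NumberLong("14818435007969199")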
Here is an example with a where condition in the query; note the value is again passed as a string to be safe:
db.CustomerRatibs.update(
  { custRatibId: '8b19bfdbac7b468b9c3edafc37ad5409' },
  { $set: { uAt: NumberLong("1536581726000") } },
  { multi: false }
)
I want to build a query for a very dynamic collection.
An example:
I have a collection like
{
  _id: ObjectId(),
  value: x
  // some other data
}
The example dataset has the values
{ value: 1 },
{ value: 1 },
{ value: 2 },
{ value: 3 },
{ value: 3 }
As you can see the same value can be there multiple times.
But if I run the following query it only returns the first document with value: 3:
db.collection.aggregate([
  {
    $sort: { value: 1 }
  },
  {
    $limit: 4
  }
])
But what I want is at least 4 documents, including all occurrences of the boundary value, so I want every document where value: 3.
Edit
Sorry, the question might be a bit misleading. I want a complete result, so all documents with value: 3. It is for a public transport database and the value is the departure time. I want at least the next 30 departures, but if departures 30 and 31 leave at the same time, I want the 31st as well.
I now use a small Python function which extends the limit as needed. Since the query returns a cursor, I do not waste resources; I do not specify a limit in the query.
def extend_limit(cursor, original_limit):
    result = []
    try:
        # Take the first original_limit documents from the cursor.
        while original_limit > 0:
            result.append(cursor.next())
            original_limit -= 1
        # Keep consuming documents as long as they tie with the last value.
        last_element = result[-1]
        while True:
            next_element = next(cursor)
            if last_element['value'] != next_element['value']:
                break
            result.append(next_element)
    except StopIteration:
        # The cursor ran out of documents; return what we have.
        pass
    return result
Thanks to Adam Comerford
There is no need to use aggregation here, just do a normal find with a projection, a sort and a limit:
db.collection.find({}, {_id : 0, value : 1}).sort({value : 1}).limit(4)
I'd recommend that you actually query on some criteria (rather than empty in my example) and that the criteria have an appropriate index that includes the sorted field if possible (for performance reasons).
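For instance, if the real query filtered on a route field (a hypothetical name, since the question doesn't show the criteria), a compound index covering both the filter and the sort could look like:
db.collection.createIndex({ route: 1, value: 1 })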