I'm trying to build a $near query with an additional condition:
query = {
    $and : [
        { address : { $near : [x, y] } },
        { available : 1 }
    ]
};
db.points.find(query)
It gives me an error:
error: {
"$err" : "can't find any special indices: 2d (needs index), 2dsphere (needs index), for: { $and: [ { ipaddr: { $near: [ -82.49412043543862, 0.0 ] } }, { available: 1.0 } ] }",
"code" : 13038
}
However, a query like this works fine:
query = { address : { $near : [x, y] }, available : 1 }
I need to use $and to build a complex query. Can I build a $near query with the $and keyword?
See this ticket - https://jira.mongodb.org/browse/SERVER-4572 - it looks like a bug, and it hasn't been fixed yet.
Probably not the greatest solution, but I found a way to work around this issue. What I did is split the query into two parts: 1) query the nearest addresses and collect their object ids, 2) use them in a second query with the $in operator.
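A minimal sketch of that two-step workaround in the mongo shell, assuming the collection and field names from the question (x and y are the query coordinates):

```javascript
// Step 1: run the $near query alone and collect the matching _ids.
var nearbyIds = db.points.find(
    { address : { $near : [x, y] } },
    { _id : 1 }
).toArray().map(function (doc) { return doc._id; });

// Step 2: apply the remaining conditions against those _ids with $in.
db.points.find({ _id : { $in : nearbyIds }, available : 1 });
```

Note that the second query loses the distance ordering that $near provides, so re-sort on the client side if the order matters.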
Version: MongoDB v 4.2.18.
I have the following MongoDB script I'm trying to run in the MongoDB shell:
const operations = [ {
    "updateOne" : {
        "filter" : { <snipped> },
        "update" : [
            { "$pull" : { <snipped> } },
            { "$set" : { <snipped> } }
        ]
    }
} ]
db.accounts.bulkWrite(operations);
I've snipped out the schema for the filter, pull, and set. I've used them before in single operations and they worked 100% fine.
bulkWrite is good when you've got an array of operations. This example has one operation; I've just removed the other items for the simplicity of this question. So please don't say "don't use bulkWrite, just do a normal update query".
The shell basically errors with:
WriteCommandError({
"ok" : 0,
"errmsg" : "Unrecognized pipeline stage name: '$pull'",
"code" : 40324,
"codeName" : "Location40324"
})
So, can you do a $pull in a MongoDB Bulk Write which has an array of update operations?
By the way, this works if I:
do just one operation instead of two
don't pass the update as an array
const operations = [ {
    "updateOne" : {
        "filter" : { <snipped> },
        "update" : { "$pull" : { <snipped> } }
    }
} ]
Finally, I've found some information in the official docs about $pull and bulkWrite, but the example they give only has a single operation, not an array of operations. As highlighted above, I can get it working if I have a single operation (like the example), but I cannot with an array :(
The syntax you're using for the update is the aggregation pipeline update syntax; this means you're executing a limited "aggregation pipeline" as your update body.
$pull is not an aggregation stage, which means it cannot be used in an aggregation pipeline. $set works for you because it does have an aggregation version.
So if you want to keep the current logic, we just have to use an aggregation operator that can do the same thing $pull does. Specifically, the $filter operator seems like a good fit:
const operations = [
    {
        "updateOne": {
            "filter": {},
            "update": [
                {
                    "$set": {
                        arrayYouPullFrom: {
                            $filter: {
                                input: "$arrayYouPullFrom",
                                cond: { $ne: ["$$this", "valueToPull"] }
                            }
                        }
                    }
                },
                { "$set": {} }
            ]
        }
    }
]
db.accounts.bulkWrite(operations);
Here I have a collection, say test, storing data with a field named timestamp (in ms). Documents in this collection are densely inserted with a timestamp interval of 60000. That is to say, I can always find one and only one document whose timestamp is 1 minute before that of a referred one (except for the very first one, of course). Now I want to perform a join to correlate each document with the one whose timestamp is 1 minute earlier. I've tried this aggregation:
...
$lookup : {
from: 'test',
let : { lastTimestamp: '$timestamp'-60000 },
pipeline : [
{$match : {timestamp:'$timestamp'}}
],
as: 'lastObjArr'
},
...
which intends to find the array containing that very document and set it as the value of the key lastObjArr. But in fact lastObjArr is always empty. What happened?
Your $lookup pipeline is incomplete, as it's missing the necessary operators. For a start, lastObjArr is empty due to a number of factors, one of them being that the expression
let : { lastTimestamp: '$timestamp'-60000 },
doesn't evaluate correctly; it needs to use the $subtract operator:
let : { lastTimestamp: { $subtract: ['$timestamp', 60000] } },
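As a side note on why the original expression never worked: the shell evaluates '$timestamp' - 60000 client-side as plain JavaScript before the pipeline is ever sent to the server, and subtracting a number from a string yields NaN. A quick check in plain JavaScript:

```javascript
// '$timestamp' is just a string on the client; arithmetic on it produces NaN,
// so the server never sees a meaningful value for lastTimestamp.
const result = '$timestamp' - 60000;
console.log(Number.isNaN(result)); // true
```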
Also, the $match pipeline stage needs to use the $expr operator together with $eq for the query to work, i.e.
$lookup : {
from: 'test',
let : { lastTimestamp: { $subtract: ['$timestamp', 60000] } },
pipeline : [
{ $match : {
$expr: { $eq: ['$timestamp', '$$lastTimestamp'] }
} }
],
as: 'lastObjArr'
}
You defined a variable called "lastTimestamp" and assigned it with
'$timestamp'-60000
But you never use it. Change your code as follows and it should work:
$lookup : {
    from: 'test',
    let : { lastTimestamp: { $subtract: ['$timestamp', 60000] } },
    pipeline : [
        { $match : { $expr: { $eq: ['$timestamp', '$$lastTimestamp'] } } }
    ],
    as: 'lastObjArr'
},
New to Mongo. I'm trying to group across different subfields of a document based on a condition. The condition is a regex on a field value. It looks like:
db.collection.aggregate([{
{
"$group": {
"$cond": [{
"upper.leaf": {
$not: {
$regex: /flower/
}
}
},
{
"_id": {
"leaf": "$upper.leaf",
"stem": "$upper.stem"
}
},
{
"_id": {
"stem": "$upper.stem",
"petal": "$upper.petal"
}
}
]
}
}])
Using API v4.0. $cond in the docs shows: { $cond: [ <boolean-expression>, <true-case>, <false-case> ] }
The error I get with the above code is - "Syntax error: dotted field name 'upper.leaf' can not used in a sub object."
Reading up on that, I tried $let to re-assign the dotted field name, but started to hit various syntax errors with no obvious issue in the query.
I also tried using $project to rename the fields, but got: Field names may not start with '$'
Thoughts on the best approach here? I can always address this at the application level and split my query into two, but it's potentially attractive to solve it natively in Mongo.
Your $group syntax is wrong. The correct form is:
{
$group:
{
_id: <expression>, // Group By Expression
<field1>: { <accumulator1> : <expression1> },
...
}
}
You tried to do
{
$group:
<expression>
}
And even if your expression resulted in the same code, it's invalid syntax for $group (check in the documentation where you are allowed to use expressions).
One other problem is that you used the query regex operator rather than the aggregation regex operators. You can't do that: in an aggregation you can use only aggregation operators ($match is the only exception, where you can use both if you add $expr).
I think you need this:
[{
    "$group" : {
        "_id" : {
            "$cond" : [
                { "$not" : [ { "$regexMatch" : { "input" : "$upper.leaf", "regex" : /flower/ } } ] },
                { "leaf" : "$upper.leaf", "stem" : "$upper.stem" },
                { "stem" : "$upper.stem", "petal" : "$upper.petal" }
            ]
        }
    }
}]
It's similar code, but the expression goes in as the value of "_id", and $regexMatch, which is an aggregation operator, is used.
I haven't tested the code.
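One detail worth flagging in the snippet above: the regex argument can be a regex literal (/flower/) or a plain string, but the string form must not include the surrounding slashes; the string "/flower/" would treat the slashes as literal characters to match. The same distinction exists in plain JavaScript:

```javascript
// A regex literal matches the bare pattern.
const literal = /flower/;
// Building a RegExp from the string "/flower/" makes the slashes part of the pattern.
const fromString = new RegExp("/flower/");

console.log(literal.test("a flower here"));    // true
console.log(fromString.test("a flower here")); // false: input has no literal slashes
```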
I don't understand the behaviour of the $exists operator.
I have two simple documents in the collection 'user':
/* 1 */
{
"_id" : ObjectId("59788c2f6be212c210c73233"),
"user" : "google"
}
/* 2 */
{
"_id" : ObjectId("597899a80915995e50528a99"),
"user" : "werty",
"extra" : "very important"
}
I want to retrieve documents which contain the field "extra" and the value is not equal to 'unimportant':
The query:
db.getCollection('users').find(
{"extra":{$exists:true},"extra": {$ne:"unimportant"}}
)
returns both documents.
Also the query
db.getCollection('users').find(
{"extra":{$exists:false},"extra": {$ne:"unimportant"}}
)
returns both documents.
It seems that $exists (when used with another condition on the same field) works like an 'OR'.
What am I doing wrong? Any help appreciated.
I used MongoDB 3.2.6 and 3.4.9.
I have seen Mongo $exists query does not return correct documents, but I don't have sparse indexes.
Per MongoDB documentation (https://docs.mongodb.com/manual/reference/operator/query/and/):
Using an explicit AND with the $and operator is necessary when the same field or operator has to be specified in multiple expressions.
Therefore, in order to enforce that both clauses hold, you should use the $and operator as follows:
db.getCollection('users').find({ $and : [ { "extra": { $exists : true } }, { "extra" : { $ne : "unimportant" } } ] });
The way you constructed your query is wrong; it has nothing to do with how $exists works. Because you are checking two conditions on the same field, you need a query that performs a logical AND of the two conditions.
The correct syntax for the query
I want to retrieve documents which contain the field "extra" and the
value is not equal to 'unimportant'
is:
db.getCollection('users').find(
{
"extra": {
"$exists": true,
"$ne": "unimportant"
}
}
)
or using the $and operator as:
db.getCollection('users').find(
{
"$and": [
{ "extra": { "$exists": true } },
{ "extra": { "$ne": "unimportant" } }
]
}
)
I am using aggregation with MongoDB, and I am facing a problem: I am trying to match documents which are present in my input array using the $in operator. Now I want to know the index of the element from the input array that each document matched. Can anyone please tell me how I can do that?
My code
var coupons_ids = ["58455a5c1f65d363bd5d2600", "58455a5c1f65d363bd5d2601", "58455a5c1f65d363bd5d2602"]
couponmodel.aggregate(
    { $match : { '_id': { $in : coupons_ids } } },
    /* Here I want to know the index of the coupons_ids element that matched, because I want to perform some operations on it in the code below */
function(err, docs) {
if (err) {
} else {
}
});
Couponmodel Schema
var CouponSchema = new Schema({
category: {type: String},
coupon_name: {type: String}, // this is a string
});
UPDATE-
As suggested by user3124885, aggregation is not better in performance. Can anyone please tell me the performance difference between aggregation and a normal query in MongoDB, and which one is better?
Update-
I read this question on SO: mongodb-aggregation-match-vs-find-speed. There the user himself commented that both take the same time, but seeing vlad-z's answer, I think aggregation is better. If any of you have worked with MongoDB, please tell me your opinion about this.
UPDATE-
I used sample JSON data containing 30,000 rows and compared a match with aggregation against a find query: the aggregation executed in 180 ms, while the find query took 220 ms. I also ran $lookup, and it took no more than 500 ms, so I think aggregation is a bit faster than a normal query. Please correct me if any of you have tried aggregation; and if you haven't, why not?
UPDATE-
I read this post where a user suggests the code below as a replacement for $zip (SERVER-20163), but I don't see how I can solve my problem with it. Can anybody please tell me how I could use the code below to solve my issue?
{$map: {
    input: {
        elt1: "$array1",
        elt2: "$array2"
    },
    in: ["$elt1", "$elt2"]
}}
Can anyone please help me? It would be a great favor.
So say we have the following in the database collection:
> db.couponmodel.find()
{ "_id" : "a" }
{ "_id" : "b" }
{ "_id" : "c" }
{ "_id" : "d" }
and we wish to search for the following ids in the collection:
var coupons_ids = ["c", "a" ,"z"];
We'll then have to build up a dynamic projection stage so that we can project the correct indexes; we'll map each id to its corresponding index:
var conditions = coupons_ids.map(function(value, index){
return { $cond: { if: { $eq: ['$_id', value] }, then: index, else: -1 } };
});
Then we can inject this into our aggregation pipeline:
db.couponmodel.aggregate([
{ $match : { '_id' : { $in : coupons_ids } } },
{ $project: { indexes : conditions } },
{ $project: {
index : {
$filter: {
input: "$indexes", as: "indexes", cond: { $ne: [ "$$indexes", -1 ] }
}
}
}
},
{ $unwind: '$index' }
]);
Running the above will output each _id and its corresponding index within the coupons_ids array:
{ "_id" : "a", "index" : 1 }
{ "_id" : "c", "index" : 0 }
However, we can also add more stages at the end of the pipeline and reference $index to get the current matched index.
I think you could do it in a faster way by simply retrieving the array and searching manually. Remember that aggregation doesn't buy you performance here.
// $match, $in, $and
$match: {
    $and: [
        { "uniqueID": { $in: ["CONV0001"] } },
        { "parentID": { $in: ["null"] } }
    ]
}