I have a MongoDB query like this:
aggregate([
  {
    "$geoNear": {
      "near": { "type": "Point", "coordinates": [35.709770973477255, 51.404043066431775] },
      "distanceField": "dist.calculated",
      "maxDistance": 5000,
      "spherical": True,
      "query": { "active_delivery_categories": { "$in": ["biker"] }, "ban_status": False },
      "num": 100
    }
  }
])
When I add a new field to the query, like this:
"query": { "availability_status": "idle", "active_delivery_categories": { "$in": ["biker"] }, "ban_status": False }
the response time doubles.
It happens only with this field; for example, with this query it does not happen:
"query": { "city": "London", "active_delivery_categories": { "$in": ["biker"] }, "ban_status": False }
Do you have any idea?
One possibility I can think of is as follows: your original query is able to make use of an appropriate index and is therefore fast; with the added condition, it cannot use the index and is therefore slower.
Look at the .explain() output to see whether this is the case.
That said, you also say it happens only when adding this particular field. Are you sure of that?
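If the index is the culprit, one way to check is to run the pipeline with explain and compare totalDocsExamined before and after creating a compound index that covers both the geo field and the filtered fields. A sketch follows; the collection name (`couriers`) and the geo field name (`location`) are assumptions, since they don't appear in the question:

```javascript
// Hypothetical collection/field names -- adjust to your schema.
// 1. Inspect how the pipeline executes before changing anything.
db.couriers.explain("executionStats").aggregate([
  {
    "$geoNear": {
      "near": { "type": "Point", "coordinates": [35.709770973477255, 51.404043066431775] },
      "distanceField": "dist.calculated",
      "maxDistance": 5000,
      "spherical": true,
      "query": { "availability_status": "idle", "active_delivery_categories": { "$in": ["biker"] }, "ban_status": false }
    }
  }
]);

// 2. A compound index covering the geo field plus the query fields,
//    so the filter can be satisfied without a collection scan:
db.couriers.createIndex({
  location: "2dsphere",
  availability_status: 1,
  active_delivery_categories: 1,
  ban_status: 1
});
```

If totalDocsExamined drops back toward nReturned after the index is built, the missing index was indeed the cause.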
Related
db.units.aggregate([
{
"$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -3.70256, 40.4165 ]
},
"distanceField": "dist.calculated",
"spherical": true,
"maxDistance": 50000
}
},
{
$match: {
"some.field.a": true,
"otherField": null
}
}
]).explain("executionStats");
Gives me:
nReturned: 671,
executionTimeMillis: 8,
totalKeysExamined: 770,
totalDocsExamined: 671,
However:
db.units.aggregate([
{
"$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -3.70256, 40.4165 ]
},
"distanceField": "dist.calculated",
"spherical": true,
"maxDistance": 50000,
"query": {
"some.field.a": true,
"otherField": null
}
}
}
]).explain("executionStats");
Gives me:
nReturned: 67,
executionTimeMillis: 6,
totalKeysExamined: 770,
totalDocsExamined: 1342,
The first question that comes to my mind is: why is the number of returned documents different?
The second is: why is totalDocsExamined higher when using the query option of $geoNear?
Updated
When the query field of $geoNear is used, there is a COLLSCAN to find all documents matching the query filter, unless you create a compound index with all the fields:
db.units.createIndex({ coordinates: '2dsphere', 'some.field.a': 1, otherField: 1 })
So the behavior when query is used is by default a COLLSCAN, unless you have a compound index on the geospatial field plus the fields included in query.
The reason is that the query parameter of $geoNear determines the number of documents examined. From the docs:
"Limits the results to the documents that match the query. The query syntax is the usual MongoDB read operation query syntax."
In your first case, it's treated as a pipeline: $geoNear executes first, then the $match stage. Hence the numbers differ.
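To confirm the effect of the compound index, you can re-run the query variant with explain and compare executionStats before and after. A sketch, assuming the same units collection as above and a running server:

```javascript
// Build the compound index: 2dsphere geo key first, then the filter keys.
db.units.createIndex({ coordinates: "2dsphere", "some.field.a": 1, otherField: 1 });

// Re-run the "query" variant; with the compound index in place,
// totalDocsExamined should drop back toward nReturned.
db.units.aggregate([
  {
    "$geoNear": {
      "near": { "type": "Point", "coordinates": [-3.70256, 40.4165] },
      "distanceField": "dist.calculated",
      "spherical": true,
      "maxDistance": 50000,
      "query": { "some.field.a": true, "otherField": null }
    }
  }
]).explain("executionStats");
```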
I'm wondering if anyone can help me solve a problem with this query.
I'm trying to query all my items with a $geoNear operator, but with a very large maxDistance it doesn't seem to search all records.
The logs show the error "Too many geoNear results for query", which apparently means the query hit the 16MB limit, but the output is only 20 records and the reported total is 1401, where I would expect a total of 17507.
The average record is 12345 bytes; at 1401 records the query stops because it hits the 16MB limit.
How can I run this query so that it returns the first 20 results taken from the entire pool of items?
This is the query I'm running:
db.getCollection('items').aggregate([
{
"$geoNear": {
"near": {
"type": "Point",
"coordinates": [
10,
30
]
},
"minDistance": 0,
"maxDistance": 100000,
"spherical": true,
"distanceField": "location",
"limit": 100000
}
},
{
"$sort": {
"createdAt": -1
}
},
{
"$facet": {
"results": [
{
"$skip": 0
},
{
"$limit": 20
}
],
"total": [
{
"$count": "total"
}
]
}
}
])
This is the output of the query (and the error is added to the log):
{
"results" : [
// 20 items
],
"total" : [
{
"total" : 1401
}
]
}
I changed my query to use separate find() and count() calls. The $facet was severely slowing down the query, and since it really isn't a complex query, there was no reason to use an aggregate.
I initially used the aggregate because it made sense to do one DB call instead of several, and with $facet you get built-in paging with a total count, but the single aggregate call took 600ms, whereas a find() and a count() call now take 20ms.
The 16MB limit is also no longer a problem.
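A sketch of what that split might look like. The collection name comes from the question, but the geo field name (`position`) is an assumption; $nearSphere returns results nearest-first, and countDocuments uses $geoWithin because the $near family isn't allowed in count queries:

```javascript
// Page of 20 results within 100km, nearest first (requires a 2dsphere index).
var page = db.getCollection("items").find({
  position: {
    $nearSphere: {
      $geometry: { type: "Point", coordinates: [10, 30] },
      $maxDistance: 100000  // meters
    }
  }
}).skip(0).limit(20).toArray();

// Separate count of everything within range, for the paging total.
// $centerSphere takes a radius in radians: distance / Earth radius in meters.
var total = db.getCollection("items").countDocuments({
  position: {
    $geoWithin: {
      $centerSphere: [[10, 30], 100000 / 6378100]
    }
  }
});
```

Note that this gives up the sort on createdAt from the original pipeline; $nearSphere imposes distance ordering.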
I have a dataset of ~400k objects in the format:
{
"trip": {
"type": "Feature",
"geometry": {
"type": "LineString",
"coordinates": [
[
-73.9615,
40.6823
],
[
-73.9704,
40.7849
]
]
},
"properties": {
......
}
}
}
I tried making a 2dsphere index on mLab like so:
{"trip.geometry" : "2dsphere"}
Which I assume just calls:
db.collection.createIndex( {"trip.geometry" : "2dsphere"} )
When I try to do a $geoWithin query like so (about 500 hits):
db.collection.find(
{
"trip.geometry": {
$geoWithin: {
$geometry: {
type : "Polygon" ,
coordinates: [
[
[-74.0345,40.7267],
[-73.9824,40.7174],
[-73.9934,40.7105],
[-74.0345,40.7267]
]
]
}
}
}}
)
I noticed it was very slow, ~2 seconds. I then tried deleting the index entirely, and the time increase was very slight, ~0.5 seconds. Is it possible that this query is not using the index I had set? I've included the explain() here.
By my interpretation, the winning plan first fetches all the data based on a simple filter, then uses the 2dsphere index. Shouldn't it start out using the 2dsphere index, given that the lat and lon data aren't indexed directly?
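One way to check is to look at the winning plan's stages and, if needed, force the index with hint() to compare timings. A sketch reusing the query from above; the index name assumes MongoDB's default naming for the index created earlier:

```javascript
var polygon = {
  type: "Polygon",
  coordinates: [[
    [-74.0345, 40.7267],
    [-73.9824, 40.7174],
    [-73.9934, 40.7105],
    [-74.0345, 40.7267]
  ]]
};

// If the 2dsphere index is used, the winning plan should contain an IXSCAN
// over "trip.geometry_2dsphere" rather than a COLLSCAN.
db.collection.find({ "trip.geometry": { $geoWithin: { $geometry: polygon } } })
  .explain("executionStats");

// Force the geo index explicitly to compare the timings directly.
db.collection.find({ "trip.geometry": { $geoWithin: { $geometry: polygon } } })
  .hint("trip.geometry_2dsphere")
  .explain("executionStats");
```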
I'm trying to make use of some geolocation functionality in mongodb. Using a find query with $near doesn't seem to work!
I currently have this object in my database:
{
"Username": "Deano",
"_id": {
"$oid": "533f0b722ad3a8d39b6213c3"
},
"location": {
"type": "Point",
"coordinates": [
51.50998,
-0.1337
]
}
}
I have the following index set up as well:
{
"v": 1,
"key": {
"location": "2dsphere"
},
"ns": "heroku_app23672911.catchmerequests",
"name": "location_2dsphere",
"background": true
}
When I run this query:
db.collectionname.find({ "location" : { $near : [50.0 , -0.1330] , $maxDistance : 10000 }})
I get this error:
error: {
"$err" : "can't parse query (2dsphere): { $near: [ 50.0, -0.133 ], $maxDistance: 10000.0 }",
"code" : 16535
}
Does anyone know where I'm going wrong? Any help would be much appreciated!
It seems you need to use the GeoJSON format if your data is in GeoJSON format too, as yours is. If you use:
db.collectionname.find({
"location": {
$near: {
$geometry:
{ type: "Point", coordinates: [50.0, -0.1330] }, $maxDistance: 500
}
}
})
it should work. I could replicate your error using the GeoJSON storage format for the field, but what the docs call legacy points in the query expression. I think the docs are a bit unclear in that they suggest you can use both GeoJSON and legacy coordinates with a 2dsphere index.
I am using 2.4.10, for what it is worth, as there were some big changes to geospatial support in the 2.4 release.
This isn't exactly a solution, as I never got the above working, but using geoNear I managed to get what I wanted.
db.runCommand( { geoNear : 'catchmerequests', near:
{ type: 'Point', coordinates : [50, 50] }, spherical : true } );
If anyone can find out why the original $near attempt failed, that would still be appreciated, but I'm posting this for anyone else who is looking for a working alternative.
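For what it's worth, the geoNear database command was removed in MongoDB 4.2, so on modern servers the equivalent of the workaround above is the $geoNear aggregation stage. A sketch using the same collection and point:

```javascript
db.catchmerequests.aggregate([
  {
    $geoNear: {
      near: { type: "Point", coordinates: [50, 50] },
      distanceField: "dist",  // required: field to store the computed distance
      spherical: true
    }
  }
]);
```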
I have a document :
{
"_id": ObjectId("5324d5b30cf2df0b84436141"),
"value": 0,
"metaId": {
"uuid": "8df088b2-9aa1-400a-8766-3080a6206ed1",
"domain": "domain1"
}
}
I have also created an index of this type:
ensureIndex({"metaId.uuid" : 1})
Now here are two queries:
db.test.find({"metaId" : {"uuid" : "8df088b2-9aa1-400a-8766-3080a6206ed1"}}).explain()
"cursor" : "BasicCursor"
NO Index used!
db.test.find({"metaId.uuid" : "8df088b2-9aa1-400a-8766-3080a6206ed1"}).explain()
"cursor" : "BtreeCursor metaId.uuid_1"
Index used!
Is there a way to make both queries use the index?
Firstly, the following document:
{
"_id": ObjectId("5324d5b30cf2df0b84436141"),
"value": 0,
"metaId": {
"uuid": "8df088b2-9aa1-400a-8766-3080a6206ed1",
"domain": "domain1"
}
}
would not match the query:
db.test.find({
"metaId": {
"uuid": "8df088b2-9aa1-400a-8766-3080a6206ed1"
}
});
because it's querying by the value of "metaId", which has to match exactly:
{
"uuid": "8df088b2-9aa1-400a-8766-3080a6206ed1",
"domain": "domain1"
}
In this case, you'd be using the index on "metaId".
There is a known issue on this, SERVER-2953. You can vote that up if you wish.
In the meantime you could do this instead:
{
"value": 0,
"metaId": [{
"uuid": "8df088b2-9aa1-400a-8766-3080a6206ed1",
"domain": "domain1"
}]
}
With a slightly different query form, the index will then be selected:
db.test.find(
{"metaId" : {
"$elemMatch": {
"uuid" : "8df088b2-9aa1-400a-8766-3080a6206ed1"
}
}}
).explain()
Actually, that query will use the index with your current data form as well; however, it will not return results. With the data in the array form, it will return a match.
It is generally better to use an array element with a "contained" sub-document, even if there is only one. This allows much more flexible searching, especially if you want to add more field keys to the sub-document in the future.
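The whole workaround can be sketched end to end as follows (assumes a scratch test collection and a running shell):

```javascript
// Store the sub-document inside an array instead of embedding it directly.
db.test.insertOne({
  value: 0,
  metaId: [{
    uuid: "8df088b2-9aa1-400a-8766-3080a6206ed1",
    domain: "domain1"
  }]
});

// A multikey index on the array element's field.
db.test.createIndex({ "metaId.uuid": 1 });

// $elemMatch matches an array element on the listed fields only,
// so it both uses the index and tolerates extra fields like "domain".
db.test.find({
  metaId: { $elemMatch: { uuid: "8df088b2-9aa1-400a-8766-3080a6206ed1" } }
}).explain();
```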