MongoDB find near $maxDistance

I am firing the following query in MongoDB:
db.acollection.find({
    "field.location": {
        "$near": [19.0723058, 73.00067739999997]
    },
    $maxDistance: 100000
}).count()
and getting the following error -
uncaught exception: count failed: {
    "shards" : {
    },
    "cause" : {
        "errmsg" : "exception: unknown top level operator: $maxDistance",
        "code" : 2,
        "ok" : 0
    },
    "code" : 2,
    "ok" : 0,
    "errmsg" : "failed on : Shard ShardA"
}

You did it wrong. The $maxDistance argument is a "child" of the $near operator:
db.acollection.find({
    "field.location": {
        "$near": [19.0723058, 73.00067739999997],
        "$maxDistance": 100000
    }
}).count()
It has to be within the same expression.
Also look at GeoJSON when you are building a new application. It is the format you should be storing in going forward.
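For example, a minimal sketch of the GeoJSON approach (collection and field names follow the question; note that GeoJSON requires [longitude, latitude] order, so the question's pair, which looks like [latitude, longitude], is swapped here, and $maxDistance is in meters rather than coordinate units):

// Store the point as a GeoJSON Point ([longitude, latitude] order)
db.acollection.insert({
    field: {
        location: {
            type: "Point",
            coordinates: [73.00067739999997, 19.0723058]
        }
    }
})

// GeoJSON queries need a 2dsphere index
db.acollection.createIndex({ "field.location": "2dsphere" })

// With GeoJSON points, $maxDistance is in meters
db.acollection.find({
    "field.location": {
        $near: {
            $geometry: { type: "Point", coordinates: [73.00067739999997, 19.0723058] },
            $maxDistance: 100000
        }
    }
}).count()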

Related

MongoDB hint() fails - not sure if it is because index is still indexing

In SSH session 1, I ran an operation to create a partial index in MongoDB as follows:
db.scores.createIndex(
    { event_time: 1, "writes.k": 1 },
    {
        background: true,
        partialFilterExpression: {
            "writes.l_at": null,
            "writes.d_at": null
        }
    }
);
The index is quite large and its creation takes 30+ minutes. While it was still running, I started SSH session 2.
In SSH session 2, connected to the same cluster, I listed the indexes on my collection scores, and it looks like the index is already there...
db.scores.getIndexes()
[
    ...,
    {
        "v" : 1,
        "key" : {
            "event_time" : 1,
            "writes.k" : 1
        },
        "name" : "event_time_1_writes.k_1",
        "ns" : "leaderboard.scores",
        "background" : true,
        "partialFilterExpression" : {
            "writes.l_at" : null,
            "writes.d_at" : null
        }
    }
]
When trying to count with a hint on this index, I get the error below:
db.scores.find().hint('event_time_1_writes.k_1').count()
2019-02-06T22:35:38.857+0000 E QUERY [thread1] Error: count failed: {
    "ok" : 0,
    "errmsg" : "error processing query: ns=leaderboard.scoresTree: $and\nSort: {}\nProj: {}\n planner returned error: bad hint",
    "code" : 2,
    "codeName" : "BadValue"
} : _getErrorWithCode#src/mongo/shell/utils.js:25:13
DBQuery.prototype.count#src/mongo/shell/query.js:383:11
#(shell):1:1
I have never seen this error before, but I need confirmation: is it failing because the index build is still running?
Thanks!
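One way to check whether the build really is still in progress (a sketch following the currentOp filter the MongoDB docs use for active index builds) is shown below. If the build is still running, the index generally cannot be used or hinted until it completes, which would line up with the "bad hint" error:

// Run in a separate session while the createIndex call is in flight
db.currentOp({
    $or: [
        { op: "command", "command.createIndexes": { $exists: true } },
        { op: "none", msg: /^Index Build/ }
    ]
})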

MongoDB $near returning error

I have a basic document, like so:
{
    "_id" : ObjectId("5760fe623f6d3ad25e387ffc"),
    "type" : 5,
    "product" : {
        "location" : {
            "geometry" : [ 153.39999999999998, -28.016667 ],
            "name" : "Gold Coast QLD, Australia",
            "id" : "ChIJt2BdK0cakWsRcK_e81qjAgM"
        }
    }
}
I am trying to query the location using the $near operator provided by MongoDB.
This is my query:
db.posts.find({
    'product.location.geometry': {
        $near: [ 153.39999999999998, -28.016667 ]
    }
})
The MongoDB documentation states that:
To specify a point using legacy coordinates, $near requires a 2d index
and has the following syntax:
{
    $near: [ <x>, <y> ],
    $maxDistance: <distance in radians>
}
It even gives this example on their site:
db.legacy2d.find({
location : { $near : [ -73.9667, 40.78 ], $maxDistance: 0.10 }
})
This is the error it is producing:
Error: error: {
    "waitedMS" : NumberLong(0),
    "ok" : 0,
    "errmsg" : "error processing query: ns=mytestnodedb.postsTree: GEONEAR field=product.location.geometry maxdist=1.79769e+308 isNearSphere=0\nSort: {}\nProj: {}\n planner returned error: unable to find index for $geoNear query",
    "code" : 2
}
I am unable to identify anything wrong with my query. Mongo states that $near must be given longitude followed by latitude, which I am definitely doing. I am purposely leaving out $maxDistance, since Mongo states that results will be sorted from nearest to farthest.
Well, the error is pretty much self-explanatory: the query requires a 2d index, which it can't find.
I'd create the index on your posts collection as:
db.posts.createIndex({ "product.location.geometry": "2d" })
Now if I run your query on sample data, I get
{
    "_id" : ObjectId("5760fe623f6d3ad25e387ffc"),
    "type" : 5.0,
    "product" : {
        "location" : {
            "geometry" : [
                153.39999999999998,
                -28.016667
            ],
            "name" : "Gold Coast QLD, Australia",
            "id" : "ChIJt2BdK0cakWsRcK_e81qjAgM"
        }
    }
}
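To double-check that the new 2d index is actually picked up, one option (a sketch; the index and stage names here are what I'd expect from the default field_2d naming and the 2d query planner, not output from the question) is to inspect the query plan:

db.posts.find({
    'product.location.geometry': {
        $near: [ 153.39999999999998, -28.016667 ]
    }
}).explain("executionStats")
// Once the index exists, the winning plan should contain a GEO_NEAR_2D
// stage over the product.location.geometry_2d index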

MongoDB spatial query - error unable to find index or no results

I am having trouble executing some spatial queries in MongoDB. I have a collection "cities15000" in which every record has this format:
"_id" : ObjectId("5624aefe4728347a51b1d751"),
"geonameid" : "292932",
"name" : "Ajman",
"asciiname" : "Ajman",
"latitude" : "25.41111",
"longitude" : "55.43504",
"feature_code" : "PPLA",
"country_code" : "AE",
"population" : "226172",
"elevation" : "",
"timezone" : "Asia/Dubai",
"geography" : {
"type" : "Point",
"loc" : [
"55.43504",
"25.41111"
]
}
I have created a 2dsphere index with db.cities15000.ensureIndex({ loc: '2dsphere' }) and then tried to get results using $near or $geoNear.
While using $near, db.cities15000.find({ loc: { $near: [55.400, 25.400] } }), I get this error message:
Unable to execute query: error processing query...n planner returned
error: unable to find index for $geoNear query"
which means that I have a wrong index (I think).
But then when I use db.runCommand({ geoNear: "cities15000", near: { type: "Point", coordinates: [ 55.400, 25.400 ] }, spherical: true })
I get:
{
    "results" : [],
    "stats" : {
        "nscanned" : 0,
        "objectsLoaded" : 0,
        "avgDistance" : NaN,
        "maxDistance" : 0.0000000000000000,
        "time" : 0
    },
    "ok" : 1.0000000000000000
}
which means that it doesn't find anything nearby (which is wrong). There are many similar topics, but I have tried what is proposed there and nothing worked.
You have an error in both the index creation and the query:
according to your JSON schema there is no top-level loc field; instead there is
"geography.loc"
i.e.
db.cities15000.find({loc: {$near: [55.400 ,25.400]} })
should be something like
db.cities15000.find({"geography.loc": {$near: [55.400 ,25.400]} })

MongoDB aggregation query

I am using MongoDB 2.6.4 and still getting an error:
uncaught exception: aggregate failed: {
    "errmsg" : "exception: aggregation result exceeds maximum document size (16MB)",
    "code" : 16389,
    "ok" : 0,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1422033698000, 105),
        "electionId" : ObjectId("542c2900de1d817b13c8d339")
    }
}
Reading different advice, I came across the idea of saving the result in another collection using $out. My query looks like this now:
db.audit.aggregate([
    { $match: { "date": { $gte: ISODate("2015-01-22T00:00:00.000Z"),
                          $lt: ISODate("2015-01-23T00:00:00.000Z") } } },
    { $unwind: "$data.items" },
    { $out: "tmp" }
])
But I am getting a different error:
uncaught exception: aggregate failed: {
    "errmsg" : "exception: insert for $out failed: { lastOp: Timestamp 1422034172000|25, connectionId: 625789, err: \"insertDocument :: caused by :: 11000 E11000 duplicate key error index: duties_and_taxes.tmp.agg_out.5.$_id_ dup key: { : ObjectId('54c12d784c1b2a767b...\", code: 11000, n: 0, ok: 1.0, $gleStats: { lastOpTime: Timestamp 1422034172000|25, electionId: ObjectId('542c2900de1d817b13c8d339') } }",
    "code" : 16996,
    "ok" : 0,
    "$gleStats" : {
        "lastOpTime" : Timestamp(1422034172000, 26),
        "electionId" : ObjectId("542c2900de1d817b13c8d339")
    }
}
Does someone have a solution?
The error is due to the $unwind step in your pipeline.
When you unwind a field that has n elements, n copies of the same document are produced, all with the same _id; each copy holds one of the elements from the array that was unwound. See the demonstration below of the records after an unwind operation.
Sample demo:
> db.t.insert({"a":[1,2,3,4]})
WriteResult({ "nInserted" : 1 })
> db.t.aggregate([{$unwind:"$a"}])
{ "_id" : ObjectId("54c28dbe8bc2dadf41e56011"), "a" : 1 }
{ "_id" : ObjectId("54c28dbe8bc2dadf41e56011"), "a" : 2 }
{ "_id" : ObjectId("54c28dbe8bc2dadf41e56011"), "a" : 3 }
{ "_id" : ObjectId("54c28dbe8bc2dadf41e56011"), "a" : 4 }
>
Since all these documents have the same _id, you get a duplicate key exception (due to the same value in the _id field for all the unwound documents) on insert into a new collection named tmp.
The pipeline will fail to complete if the documents produced by the
pipeline would violate any unique indexes, including the index on the
_id field of the original output collection.
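If you do want to keep the $out stage, one workaround (my sketch, not part of the quoted answer; the projected field list is illustrative and assumes you only need date and the unwound data.items) is to drop _id in the pipeline so each inserted document gets a fresh ObjectId:

db.audit.aggregate([
    { $match: { "date": { $gte: ISODate("2015-01-22T00:00:00.000Z"),
                          $lt: ISODate("2015-01-23T00:00:00.000Z") } } },
    { $unwind: "$data.items" },
    // excluding _id avoids the E11000 duplicate key error on insert
    { $project: { _id: 0, date: 1, "data.items": 1 } },
    { $out: "tmp" }
])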
To solve your original problem, you could set the allowDiskUse option to true, which lets the aggregation use disk space whenever it needs to.
Optional. Enables writing to temporary files. When set to true,
aggregation operations can write data to the _tmp subdirectory in the
dbPath directory. See Perform Large Sort Operation with External Sort
for an example.
as in:
db.audit.aggregate([
    { $match: { "date": { $gte: ISODate("2015-01-22T00:00:00.000Z"),
                          $lt: ISODate("2015-01-23T00:00:00.000Z") } } },
    { $unwind: "$data.items" }  // note, the pipeline ends here
],
{
    allowDiskUse: true
});

assertion exception in mongo mapreduce

I have a collection that stores search query logs. Its two main attributes are user_id and search_query; user_id is null for a logged-out user. I am trying to run a map-reduce job to find the count of queries and the search terms per user.
var map = function () {
    if (this.user_id !== null) {
        emit(this.user_id, this.search_query);
    }
};
var reduce = function (id, queries) {
    return Array.sum(queries + ",");
};
db.searchhistories.mapReduce(
    map,
    reduce,
    {
        query: {
            "time": {
                $gte: ISODate("2013-10-26T14:40:00.000Z"),
                $lt: ISODate("2013-10-26T14:45:00.000Z")
            }
        },
        out: "mr2"
    }
)
throws the following exception
Wed Nov 27 06:00:07 uncaught exception: map reduce failed: {
    "errmsg" : "exception: assertion src/mongo/db/commands/mr.cpp:760",
    "code" : 0,
    "ok" : 0
}
I looked at mr.cpp L#760 but could not gather any vital information. What could be causing this?
My collection has values like
> db.searchhistories.find()
{ "_id" : ObjectId("5247a9e03815ef4a2a005d8b"), "results" : 82883, "response_time" : 0.86, "time" : ISODate("2013-09-29T04:17:36.768Z"), "type" : 0, "user_id" : null, "search_query" : "awareness campaign" }
{ "_id" : ObjectId("5247a9e0606c791838005cba"), "results" : 39545, "response_time" : 0.369, "time" : ISODate("2013-09-29T04:17:36.794Z"), "type" : 0, "user_id" : 34225174, "search_query" : "eficaz eficiencia efectividad" }
Looking at the docs, I could see that this is not possible on a slave: writing map-reduce output to a collection only works on the master. If you still want to run it on the slave, you have to use inline output, with the following syntax.
db.searchhistories.mapReduce(
    map,
    reduce,
    {
        query: {
            "time": {
                $gte: ISODate("2013-10-26T14:40:00.000Z"),
                $lt: ISODate("2013-10-26T14:45:00.000Z")
            }
        },
        out: { inline: 1 }
    }
)
** Ensure that the output does not exceed the 16MB document size limit when using inline output.
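As an aside, the same per-user counts can be computed with the aggregation framework instead of map-reduce (a sketch using the field names from the question; the usual result size and memory limits still apply unless you add allowDiskUse):

db.searchhistories.aggregate([
    { $match: {
        user_id: { $ne: null },   // skip logged-out users, as the map function does
        time: { $gte: ISODate("2013-10-26T14:40:00.000Z"),
                $lt: ISODate("2013-10-26T14:45:00.000Z") } } },
    { $group: {
        _id: "$user_id",                      // one bucket per user
        queries: { $push: "$search_query" },  // the terms per user
        count: { $sum: 1 }                    // number of searches per user
    } }
])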