Mongoose aggregate match returns empty array - mongodb

I'm working with MongoDB aggregations in Mongoose and I'm not sure what I'm doing wrong in my application.
Here is my document:
{
"_id": "5bf6fe505ca52c2088c39a45",
"loc": {
"type": "Point",
"coordinates": [
-43.......,
-19......
]
},
"name": "......",
"friendlyName": "....",
"responsibleName": "....",
"countryIdentification": "0000000000",
"categories": [
"5bf43af0f9b41a21e03ef1f9"
],
"created_at": "2018-11-22T19:06:56.912Z",
"__v": 0
}
In the context of my application I need to search for documents by GeoJSON location, and I run that search using geoNear; it works fine. But I also need to "match" or "filter" specific "categories" in the documents. I think this should be possible using $match, but I'm clearly doing something wrong. Here is the code:
CompanyModel.aggregate(
[
{
"$geoNear": {
"near": {
"type": "Point",
"coordinates": [pageOptions.loc.lng, pageOptions.loc.lat]
},
"distanceField": "distance",
"spherical": true,
"maxDistance": pageOptions.distance
}
},
{
"$match": {
categories: { "$in": [pageOptions.category] }
}
}
]
).then(data => {
resolve({ statusCode: 200, data: data });
}).catch(err => {
console.log(err);
reject({ statusCode: 500, error: "Error getting documents", err: err });
})
pageOptions:
var pageOptions = {
loc: {
lat: parseFloat(req.query.lat),
lng: parseFloat(req.query.lng)
},
distance: parseInt(req.query.distance, 10) || 10000,
category: req.params.category || ""
}
If I remove $match I get all the documents by location, but I need to filter by specific categories. I don't believe I should have to filter manually; it ought to be possible with aggregation stages.
Can anyone help me with this Mongoose implementation?
Thanks for any help.

In MongoDB you need to make sure that the data type in your document matches the type in your query. In this case you have strings stored in the database and you're trying to use an ObjectId to build the $match stage. To fix that you can call the valueOf() method on pageOptions.category, try:
{
"$match": {
categories: { "$in": [pageOptions.category.valueOf()] }
}
}
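To make the type handling explicit, the pipeline construction can be sketched as a plain function (a sketch; buildSearchPipeline is an illustrative name, not part of Mongoose):

```javascript
// Sketch: build the aggregation pipeline so the type of the $match
// value is explicit. Since "categories" holds strings here, we match
// with the plain string value of the category.
function buildSearchPipeline(pageOptions) {
  var pipeline = [
    {
      "$geoNear": {
        "near": {
          "type": "Point",
          "coordinates": [pageOptions.loc.lng, pageOptions.loc.lat]
        },
        "distanceField": "distance",
        "spherical": true,
        "maxDistance": pageOptions.distance
      }
    }
  ];
  // Only add the $match stage when a category was actually supplied,
  // so an empty category doesn't filter everything out.
  if (pageOptions.category) {
    pipeline.push({
      "$match": {
        "categories": { "$in": [String(pageOptions.category)] }
      }
    });
  }
  return pipeline;
}
```

If the collection actually stored ObjectId values in categories, this is the same spot where you would cast with mongoose.Types.ObjectId(pageOptions.category) instead, since the aggregation pipeline does not apply Mongoose's schema casting.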

Related

Azure CosmosDb MongoAPI nearSphere return nothing

I'm using CosmosDB with the MongoDB API and I'd like to use $nearSphere to find documents.
Here is an example of one document I have in my collection "locations":
{
"_id": {
"$oid": "60523b3dd72e4d4aa21f473b"
},
"data": [{
"Indicateur": "pr50mm",
"Value": 0,
"score": 0,
"Exposure": "low"
}],
"Domain": "EU-CORDEX",
"GCMS": "ICHEC-EC-EARTH",
"RCMS": "RACMO22E",
"Scenario": "rcp85",
"Horizon": "Medium (1941-1970)",
"location": {
"type": "Point",
"coordinates": [26.32, 27.25]
}
}
I would like to find the document with the location nearest to [26, 27].
When I execute the following request, it returns nothing. However, this same command works fine with a MongoDB database: it returns the documents ordered by distance from the point with coordinates [26, 27].
db.locations.find({
'location': {
$nearSphere: {
$geometry: {
type: 'Point',
coordinates: [
26, 27
]
}
}
}
})
Do you know how I can make it work for Azure CosmosDB?
Thank you in advance.
It looks like for Azure CosmosDB I have to specify location.coordinates in my query. Here is the working query:
db.locations.find({
'location.coordinates': {
$nearSphere: {
$geometry: {
type: 'Point',
coordinates: [
26, 27
]
}
}
}
})
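The only difference between the two queries is the field path, which a small helper makes explicit (a sketch; buildNearSphereQuery is an illustrative name):

```javascript
// Sketch: build a $nearSphere query for a given field path.
// CosmosDB's MongoDB API wanted 'location.coordinates', while native
// MongoDB accepts the GeoJSON field 'location' itself.
// Remember MongoDB coordinates are [longitude, latitude].
function buildNearSphereQuery(fieldPath, lng, lat) {
  var query = {};
  query[fieldPath] = {
    $nearSphere: {
      $geometry: { type: 'Point', coordinates: [lng, lat] }
    }
  };
  return query;
}

var mongoQuery  = buildNearSphereQuery('location', 26, 27);
var cosmosQuery = buildNearSphereQuery('location.coordinates', 26, 27);
```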
This example about $nearSphere helped me: https://github.com/MicrosoftDocs/azure-docs/blob/master/articles/cosmos-db/mongodb-feature-support.md
I hope this will help other people!

How to group by geospatial attribute in mongodb?

I have a set of documents in MongoDB, and I am trying to group the documents by the nearest geopoint coordinates within a 100 m radius of a given document, taking the average value of type and the $first value of cordinates. A sample document set is below. Is there a way to do this using existing functions in the MongoDB aggregation pipeline, or do I have to use the newly introduced $function operator to build a custom aggregation function? Any suggestions are highly appreciated.
{"_id":{"$oid":"5e790cfe46fa8260f41d2626"},
"cordinates":[103.96277219999999,1.3437526],
"timestamp":1584991486436,
"user":{"$oid":"5e4bbbc31eac8e2e3ca219a6"},
"type": 1,
"__v":0}
{"_id":{"$oid":"5e790d7346fa8260f41d2627"},
"cordinates":[103.97242539965999,1.33508],
"timestamp":1584991603400,
"user":{"$oid":"5e4bbbc31eac8e2e3ca219a6"},
"type": 1,
"__v":0}
{"_id":{"$oid":"5e790d7346fa8260f41d2627"},
"cordinates":[103.97242539990099,1.33518],
"timestamp":1584991603487,
"user":{"$oid":"5e4bbbc31eac8e2e3ca219a6"},
"type": 2,
"__v":0}
A sample document that would be expected as output after the aggregation pipeline:
{"avgCordinates":[103.97242539990099,1.33518],
"avgType": 1.6
}
I managed to solve this by building a custom function that maps each geospatial coordinate to a single scalar value and then grouping by the returned values. Nearby coordinates get grouped into a single document because the function maps them to nearby scalar values. So far it has given me the expected outputs for the heatmap, but I'm still not sure this is the correct way to do it; there should be a better answer. I have posted my aggregation pipeline below. Any suggestions for improving it are appreciated.
[
{
'$match': {
'timestamp': {
'$gte': 1599889338000
}
}
}, {
'$addFields': {
'singleCoordinate': {
'$function': {
'body': 'function(coordinates){return ((coordinates[1]+90)*180+coordinates[0])*1000000000000;}',
'args': [
'$cordinates'
],
'lang': 'js'
}
}
}
}, {
'$group': {
'_id': {
'$subtract': [
'$singleCoordinate', {
'$mod': [
'$singleCoordinate', 100
]
}
]
},
'coordinates': {
'$first': '$cordinates'
},
'avgType': {
'$avg': '$type'
}
}
}, {
'$addFields': {
'latitude': {
'$arrayElemAt': [
'$coordinates', 1
]
},
'longitude': {
'$arrayElemAt': [
'$coordinates', 0
]
},
'weight': {
'$multiply': [
'$avgType', '$_id'
]
}
}
}, {
'$project': {
'_id': false,
'coordinates': false,
'avgType': false
}
}
]
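The $function body used above can be exercised as plain JavaScript, which shows how it collapses a [longitude, latitude] pair into one scalar before the $subtract/$mod stage buckets it (a sketch for testing the transform outside MongoDB; bucketOf is an illustrative name):

```javascript
// Sketch of the $function body from the pipeline above: collapse a
// [longitude, latitude] pair into a single scalar.
function singleCoordinate(coordinates) {
  return ((coordinates[1] + 90) * 180 + coordinates[0]) * 1000000000000;
}

// The $group _id then becomes the bucket floor, exactly as the
// $subtract/$mod pair computes it in the pipeline.
function bucketOf(coordinates) {
  var s = singleCoordinate(coordinates);
  return s - (s % 100);
}
```

Note that for real-world coordinates the scalar exceeds Number.MAX_SAFE_INTEGER, so the arithmetic loses precision; that is one more reason to treat this bucketing as a heuristic rather than a true 100 m radius grouping.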

Forbid usage of a specific index for the query

I have a mongodb collection with the following schema:
{
"description": "some arbitrary text",
"type": "TYPE", # there are a lot of different types
"status": "STATUS" # there are a few different statuses
}
I also have two indexes: for "type" and for "status".
Now I run a query:
db.obj.count({
type: { $in: ["SOME_TYPE"] },
status: { $ne: "SOME_STATUS" },
description: { $regex: /.*/ }
})
MongoDB chooses to use an index for "status", while "type" would be much better:
"query": {
"count": "obj",
"query": {
"description": Regex('.*', 2),
"status": {
"$ne": "SOME_STATUS"
},
"type": {
"$in": [
"SOME_TYPE"
]
}
}
},
"planSummary": "IXSCAN { status: 1 }"
I know I can use hint to specify an index to use, but I have different queries (which should use different indexes) and I can't annotate every one of them.
As far as I can see, a possible solution would be to forbid usage of "status" index for all queries that contain status: { $ne: "SOME_STATUS" } condition.
Is there a way to do it? Or maybe I want something weird and there is a better way?

$geoWithin not returning anything

I'm trying to use $geoWithin and $centerSphere to return a list of items within a radius, but no luck.
This is my item's schema:
var ItemSchema = new Schema({
type : String,
coordinates : []
});
ItemSchema.index({coordinates: '2dsphere'});
This is my database item that I should be seeing:
{
"_id": {
"$oid": "552fae4c13f82d0000000002"
},
"type": "Point",
"coordinates": [
6.7786656,
51.2116958
],
"__v": 0
}
This is running on the server currently just to test; the coordinates seen here will eventually be variable.
Item.find( {
coordinates: { $geoWithin: { $centerSphere: [ [ 51, 6 ], 100/6378.1 ] } }
}, function(err, items) {
console.log(items); // undefined
});
Items are always undefined, even though that coordinate is within 100 km of the other coordinate.
I get no errors in the console.
Any ideas of what's happening? Is the schema wrong?
Thanks.
The format's wrong. The GeoJSON needs to live under one field:
{
"location" : {
"type": "Point",
"coordinates": [6.7786656, 51.2116958]
}
}
See e.g. create a 2dsphere index.
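Given the corrected shape, the query can be built with a small helper (a sketch; geoWithinQuery and EARTH_RADIUS_KM are illustrative names). Note that MongoDB coordinates are [longitude, latitude], so the centre is longitude-first:

```javascript
// Sketch: build a $geoWithin/$centerSphere query for points within
// radiusKm of [lng, lat]. $centerSphere takes the radius in radians,
// i.e. kilometres divided by the Earth's radius in kilometres.
var EARTH_RADIUS_KM = 6378.1;

function geoWithinQuery(lng, lat, radiusKm) {
  return {
    location: {
      $geoWithin: {
        $centerSphere: [[lng, lat], radiusKm / EARTH_RADIUS_KM]
      }
    }
  };
}
```

For example, Item.find(geoWithinQuery(6, 51, 100), callback) would search within 100 km of longitude 6, latitude 51.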

Waypoint matching query

We have a collection as follows. Each document represents a trip by a driver; the loc property contains way-points and the time property contains the times corresponding to those way-points. For example, in Trip A, the driver would be at GeoLocation tripA.loc.coordinates[0] at time tripA.time[0].
{
tripId : "Trip A",
time : [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:48:43.589Z",
"2015-03-08T04:49:43.589Z",
"2015-03-08T04:50:43.589Z",
],
loc: {
type: "MultiPoint",
coordinates: [
[ -73.9580, 40.8003 ],
[ -73.9498, 40.7968 ],
[ -73.9737, 40.7648 ],
[ -73.9814, 40.7681 ]
]
}
}
{
tripId : "Trip B",
time : [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:48:43.589Z",
"2015-03-08T04:49:43.589Z",
"2015-03-08T04:50:43.589Z",
],
loc: {
type: "MultiPoint",
coordinates: [
[ -72.9580, 41.8003 ],
[ -72.9498, 41.7968 ],
[ -72.9737, 41.7648 ],
[ -72.9814, 41.7681 ]
]
}
}
We would like to query for trips which start near (within 1 km of) location [long1, lat1] around time t (±10 minutes) and end at [long2, lat2].
Is there a simple and efficient way to formulate this query for MongoDB or Elasticsearch?
If so, could you please give the query, in either MongoDB or Elasticsearch? (MongoDB preferred.)
This did start as a comment but was clearly getting way too long, so here is a long explanation of the limitations and the approach.
The bottom line of what you are asking for is effectively a "union query", generally defined as two separate queries where the end result is the "set intersection" of their results: in brief, the "trips" selected by your "origin" query that also appear in the results of your "destination" query.
In general database terms we refer to such a "union" operation as a "join", or at least a condition where one set of selection criteria "and" another must both meet a common grouping identifier.
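In code terms, that final merge reduces to intersecting the trip ids returned by the two queries (a plain JavaScript sketch of the idea, not part of either datastore's API):

```javascript
// Sketch: the "union query" result is the set intersection of the
// trip ids matched by the origin query and the destination query.
function matchedTrips(originIds, destinationIds) {
  var destSet = new Set(destinationIds);
  return originIds.filter(function (id) {
    return destSet.has(id);
  });
}
```

For example, matchedTrips(['Trip A', 'Trip B'], ['Trip A', 'Trip C']) keeps only 'Trip A'.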
The basic point in MongoDB terms, and I believe this also applies to Elasticsearch indexes, is that neither datastore supports a "join" in any form within a single direct query.
There is another MongoDB principle that affects your current modelling: there is no way to express an "and" condition within a single geospatial search on coordinates, and because you chose to model the points as a GeoJSON "MultiPoint", the query cannot "choose" which element of that object to match as the nearest. "All points" are considered when determining the nearest match.
Your explanation of intent is quite clear. The "origin" is represented as the "first" element in each of what are essentially "two arrays" in your document structure, with a "location" and a "time" for each progressive "waypoint" in the "trip", naturally ending with your "destination" as the last element of each array, assuming of course that the data points are correctly "paired".
I see the logic in thinking that this is a good way to store things, but it does not follow the allowed query patterns of either of the storage solutions you mention here.
As I mentioned already, this is indeed a "union" in intent so while I see the thinking that led to the design it would be better to store things like this:
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:47:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9580, 40.8003 ]
},
"seq": 0
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:48:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9498, 40.7968 ]
},
"seq": 1
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:49:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9737, 40.7648 ]
},
"seq": 2
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:50:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9814, 40.7681 ]
},
"seq": 3,
"isEnd": true
}
In the example, I'm just inserting those documents into a collection called "geojunk" and then creating a 2dsphere index on the "loc" field:
db.geojunk.createIndex({ "loc": "2dsphere" })
The processing is then done with "two" .aggregate() queries. The reason for .aggregate() is that you want to match the "first" document "per trip" in each case, representing the nearest waypoint of each trip found by the query. Then you basically want to "merge" these results into some kind of "hash" structure keyed by "tripId".
The end logic says that if both an "origin" and a "destination" matched your query conditions for a given "trip", then that is a valid result for your overall query.
The code I give here is an arbitrary Node.js implementation, mostly because it's a good base for issuing the queries in "parallel" for best performance, and also because I'm choosing to use nedb as an example of the "hash" with a little more "Mongo-like" syntax:
var async = require('async'),
MongoClient = require("mongodb").MongoClient,
DataStore = require('nedb');
// Common stream upsert handler
function streamProcess(stream,store,callback) {
stream.on("data",function(data) {
// Clean "_id" to keep nedb happy
data.trip = data._id;
delete data._id;
// Upsert to store
store.update(
{ "trip": data.trip },
{
"$push": {
"time": data.time,
"loc": data.loc
}
},
{ "upsert": true },
function(err,num) {
if (err) callback(err);
}
);
});
stream.on("error",callback);
stream.on("end",callback);
}
MongoClient.connect('mongodb://localhost/test',function(err,db) {
if (err) throw err;
db.collection('geojunk',function(err,collection) {
if (err) throw err;
var store = new DataStore();
// Parallel execution
async.parallel(
[
// Match origin trips
function(callback) {
var stream = collection.aggregate(
[
{ "$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -73.9580, 40.8003 ],
},
"query": {
"time": {
"$gte": new Date("2015-03-08T04:40:00.000Z"),
"$lte": new Date("2015-03-08T04:50:00.000Z")
},
"seq": 0
},
"maxDistance": 1000,
"distanceField": "distance",
"spherical": true
}},
{ "$group": {
"_id": "$tripId",
"time": { "$first": "$time" },
"loc": { "$first": "$loc" }
}}
],
{ "cursor": { "batchSize": 1 } }
);
streamProcess(stream,store,callback);
},
// Match destination trips
function(callback) {
var stream = collection.aggregate(
[
{ "$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -73.9814, 40.7681 ]
},
"query": { "isEnd": true },
"maxDistance": 1000,
"distanceField": "distance",
"spherical": true
}},
{ "$group": {
"_id": "$tripId",
"time": { "$first": "$time" },
"loc": { "$first": "$loc" }
}}
],
{ "cursor": { "batchSize": 25 } }
);
streamProcess(stream,store,callback);
}
],
function(err) {
if (err) throw err;
// Just documents that matched origin and destination
store.find({ "loc": { "$size": 2 }},{ "_id": 0 },function(err,result) {
if (err) throw err;
console.log( JSON.stringify( result, undefined, 2 ) );
db.close();
});
}
);
});
});
On the sample data as listed, this returns:
[
{
"trip": "Trip A",
"time": [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:50:43.589Z"
],
"loc": [
{
"type": "Point",
"coordinates": [
-73.958,
40.8003
]
},
{
"type": "Point",
"coordinates": [
-73.9814,
40.7681
]
}
]
}
]
So it found the origin and destination that were nearest to the queried locations, where the origin is also within the required time window and the destination is marked as such, i.e. "isEnd".
So the $geoNear operation does the matching, with the returned results being the documents nearest to the given point that also satisfy the other conditions. The $group stage is required because other documents in the same trip could "possibly" match the conditions, so it's just a way of making sure. The $first operator ensures that the already "sorted" results contain only one result per "trip". If you are really "sure" that cannot happen under your conditions, then you could just use a standard $nearSphere query outside of aggregation instead. I'm playing it safe here.
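For reference, that non-aggregation alternative would look roughly like this, assuming at most one waypoint per trip can satisfy the conditions (a sketch building only the query object; originQuery is an illustrative name):

```javascript
// Sketch: a plain $nearSphere query for "origin" waypoints, usable
// instead of the $geoNear aggregation when at most one waypoint per
// trip can match.
function originQuery(lng, lat, maxMetres, from, to) {
  return {
    seq: 0,                                // first waypoint of the trip
    time: { $gte: from, $lte: to },        // the +-10 minute window
    loc: {
      $nearSphere: {
        $geometry: { type: "Point", coordinates: [lng, lat] },
        $maxDistance: maxMetres            // metres, with a 2dsphere index
      }
    }
  };
}
```

You would pass the result to a normal .find() call, e.g. db.geojunk.find(originQuery(-73.9580, 40.8003, 1000, fromDate, toDate)).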
One thing to note: even with the inclusion of "nedb" here, and though it does support persisting to disk, the data is still accumulated in memory. If you are expecting large results then, rather than this kind of "hash table" implementation, you would need to output to another MongoDB collection in a similar fashion to what is shown, and retrieve the matching results from there.
That doesn't change the overall logic, though, and it's yet another reason to use "nedb" for the demonstration, since you would "upsert" to documents in the results collection in the same way.