Forbid usage of a specific index for a query - mongodb

I have a mongodb collection with the following schema:
{
"description": "some arbitrary text",
"type": "TYPE", # there are a lot of different types
"status": "STATUS" # there are a few different statuses
}
I also have two indexes: for "type" and for "status".
Now I run a query:
db.obj.count({
type: { $in: ["SOME_TYPE"] },
status: { $ne: "SOME_STATUS" },
description: { $regex: /.*/ }
})
MongoDB chooses to use an index for "status", while "type" would be much better:
"query": {
"count": "obj",
"query": {
"description": Regex('.*', 2),
"status": {
"$ne": "SOME_STATUS"
},
"type": {
"$in": [
"SOME_TYPE"
]
}
}
},
"planSummary": "IXSCAN { status: 1 }"
I know I can use hint to specify an index to use, but I have different queries (which should use different indexes) and I can't annotate every one of them.
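For reference, the hint variant looks like this (hint is a standard option to count):
db.obj.count(
  {
    type: { $in: ["SOME_TYPE"] },
    status: { $ne: "SOME_STATUS" },
    description: { $regex: /.*/ }
  },
  { hint: { type: 1 } }
)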
As far as I can see, a possible solution would be to forbid usage of the "status" index for all queries that contain a status: { $ne: "SOME_STATUS" } condition.
Is there a way to do it? Or maybe I want something weird and there is a better way?
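Update: one mechanism that looks relevant, though I haven't verified it covers every case, is index filters. The planCacheSetFilter command restricts the planner to the listed indexes for a given query shape, and since filters are matched by shape rather than by concrete values, a single filter should cover all queries of this form. A minimal sketch:
db.runCommand({
  planCacheSetFilter: "obj",
  query: {
    type: { $in: ["SOME_TYPE"] },
    status: { $ne: "SOME_STATUS" },
    description: { $regex: ".*" }
  },
  indexes: [ { type: 1 } ]
})
Note that index filters do not persist across server restarts.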

Related

How to use Atlas Search to find a text containing a subtext

I have a collection hosted on Atlas, and I have declared an Atlas Search index with the default configuration, but I am unable to use it to find documents that partially match a text.
For instance, I have the following documents:
[
{
_id: 'ABC123',
designation: 'ENPHASE IQ TERMINAL CABLE 3PH-1 UD',
supplierIdentifier: 205919
},
{
_id: 'DEF456',
designation: 'ENPHASE CABLE VERT IQ 60/72CELLS 400VAC',
supplierIdentifier: 205919
},
{
_id: 'GHI789',
designation: 'P/SOLAR PC ASTROENERGY 275W 60 CELULAS',
supplierIdentifier: 206382
}
]
If I use the text search to search for "EN", nothing is returned:
[{ "$search" : { "index" : "default", "text" : { "query" : "EN", "path" : { "wildcard" : "*"}}, "count": {"type": "total"}}}]
No result
But if I use the regex search, my documents are correctly returned:
db.testproducts.aggregate([{ "$search" : { "index" : "default", "regex" : { "query" : "(.*)EN(.*)", "allowAnalyzedField" : true, "path" : { "wildcard" : "*"}}, "count": {"type": "total"}}}])
[
{
_id: 'ABC123',
designation: 'ENPHASE IQ TERMINAL CABLE 3PH-1 UD',
supplierIdentifier: 205919
},
{
_id: 'DEF456',
designation: 'ENPHASE CABLE VERT IQ 60/72CELLS 400VAC',
supplierIdentifier: 205919
},
{
_id: 'GHI789',
designation: 'P/SOLAR PC ASTROENERGY 275W 60 CELULAS',
supplierIdentifier: 206382
}
]
As the regex operator is pretty slow, how can I achieve the same with the text search?
Gfhyser, you have a few options, and I'm not sure which one you will like best, as each has limitations.
Option 1: you can specify a path. As you can imagine, wildcard paths and leading and trailing regexes can be expensive. If you know the path you want to search is designation, performance will be better if you change your existing query to:
db.testproducts.aggregate([{ "$search" : { "index" : "default", "regex" : { "query" : "(.*)EN(.*)", "allowAnalyzedField" : true, "path" : "designation", "count": {"type": "total"}}}])
Option 2: you can refine your search. Ask yourself if you are truly looking for Enphase and Energy wherever they appear in the same result.
Option 3: the final option is somewhat experimental for me, because I need to spend more time on it; I simply want to help. It might be the best performing. It involves reversing the tokens you index, and likewise when querying, with a custom analyzer, because that can speed up leading-wildcard queries. If you don't mind a bit of complexity, here is how it would look. Let me know if it works out, as I don't use regular expressions as much these days.
I create a custom analyzer in the sample_airbnb.listings_and_reviews dataset to search with leading wildcard characters. The index looks like:
{
"analyzer": "lucene.keyword",
"mappings": {
"dynamic": false,
"fields": {
"name": [
{
"dynamic": true,
"type": "document"
},
{
"type": "string"
}
],
"summary": {
"analyzer": "fastRegex",
"type": "string"
}
}
},
"analyzers": [
{
"charFilters": [],
"name": "fastRegex",
"tokenFilters": [
{
"type": "reverse"
}
],
"tokenizer": {
"type": "keyword"
}
}
]
}
And a query that exploits this speed and has the flexibility to potentially match both of your desired terms would look like this:
[
{
'$search': {
'index': 'reviews_search',
'compound': {
'should': [
{
'wildcard': {
'query': '*cated*',
'path': 'summary',
'allowAnalyzedField': true
}
}
]
}
}
}
]
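As background on why the reverse token filter can help here (general Lucene behaviour, as I understand it, rather than anything Atlas-specific): the index stores each token reversed, so a leading-wildcard pattern like *cated can effectively be evaluated as a trailing-wildcard pattern (detac*) against the reversed tokens, which can be answered from the term dictionary far more cheaply than scanning every term.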

Mongoose aggregate match returns empty array

I'm working with MongoDB aggregations using Mongoose, and I'm in doubt about what I am doing wrong in my application.
Here is my document:
{
"_id": "5bf6fe505ca52c2088c39a45",
"loc": {
"type": "Point",
"coordinates": [
-43.......,
-19......
]
},
"name": "......",
"friendlyName": "....",
"responsibleName": "....",
"countryIdentification": "0000000000",
"categories": [
"5bf43af0f9b41a21e03ef1f9"
],
"created_at": "2018-11-22T19:06:56.912Z",
"__v": 0
}
In the context of my application I need to search documents by GeoJSON location, and I execute this search using geoNear. OK, it works fine! But I also need to "match" or "filter" specific "categories" in the document. I think it's possible using $match, but I'm certainly doing something wrong. Here is the code:
CompanyModel.aggregate(
[
{
"$geoNear": {
"near": {
"type": "Point",
"coordinates": [pageOptions.loc.lng, pageOptions.loc.lat]
},
"distanceField": "distance",
"spherical": true,
"maxDistance": pageOptions.distance
}
},
{
"$match": {
categories: { "$in": [pageOptions.category] }
}
}
]
).then(data => {
resolve({ statusCode: 200, data: data });
}).catch(err => {
console.log(err);
reject({ statusCode: 500, error: "Error getting documents", err: err });
})
pageOptions:
var pageOptions = {
loc: {
lat: parseFloat(req.query.lat),
lng: parseFloat(req.query.lng)
},
distance: parseInt(req.query.distance) || 10000,
category: req.params.category || ""
}
If I remove $match I get all the documents by location, but I need to filter specific categories... I don't believe I need to filter them manually; it should be possible with the aggregation stages...
So, can anyone help me with this Mongoose implementation?
Thanks for all the help.
In MongoDB you need to make sure that the data type in your document matches the type in your query. In this case you have a string stored in the database and you're trying to use an ObjectId to build the $match stage. To fix that you can use the valueOf() method on pageOptions.category; try:
{
"$match": {
categories: { "$in": [pageOptions.category.valueOf()] }
}
}
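The reverse situation is also worth knowing about (this note is mine, not part of the answer above): if categories actually stored ObjectIds, the plain string from req.params would never match, because aggregate() pipelines bypass Mongoose's schema casting. In that case the string has to be cast explicitly:
const mongoose = require("mongoose");
// Cast the request string to an ObjectId before building the $match stage
const categoryId = new mongoose.Types.ObjectId(pageOptions.category);
// ...then in the pipeline:
// { "$match": { categories: { "$in": [categoryId] } } }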

Mongodb aggregate match query with priority on full match

I am attempting to do a MongoDB regex query on a field. I'd like the query to prioritize a full match if it finds one, followed by partial matches.
For instance if I have a database full of the following entries.
{
"username": "patrick"
},
{
"username": "robert"
},
{
"username": "patrice"
},
{
"username": "pat"
},
{
"username": "patter"
},
{
"username": "john_patrick"
}
And I query for the username 'pat' I'd like to get back the results with the direct match first, followed by the partials. So the results would be ordered ['pat', 'patrick', 'patrice', 'patter', 'john_patrick'].
Is it possible to do this with a mongo query alone? If so could someone point me towards a resource detailing how to accomplish it?
Here is the query that I am attempting to use to perform this.
db.accounts.aggregate({ $match :
{
$or : [
{ "usernameLowercase" : "pat" },
{ "usernameLowercase" : { $regex : "pat" } }
]
} })
Given your precise example, this could be accomplished in the following way, though if your real-world scenario is a bit more complex you may hit problems:
db.accounts.aggregate([{
$match: {
"username": /pat/i // find all documents that somehow match "pat" in a case-insensitive fashion
}
}, {
$addFields: {
"exact": {
$eq: [ "$username", "pat" ] // add a field that indicates if a document matches exactly
},
"startswith": {
$eq: [ { $substr: [ "$username", 0, 3 ] }, "pat" ] // add a field that indicates if a document matches at the start
}
}
}, {
$sort: {
"exact": -1, // sort by our primary temporary field
"startswith": -1 // sort by our seconday temporary
}
}, {
$project: {
"exact": 0, // get rid of the "exact" field,
"startswith": 0 // same for "startswith"
}
}])
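One caveat with the pipeline above (my observation, not part of the original answer): the $match regex is case-insensitive (/pat/i) while the $eq comparisons are case-sensitive, so a document with "Pat" would pass the match stage but never be flagged as exact. If that matters, normalizing inside the comparison should work:
"exact": {
  $eq: [ { $toLower: "$username" }, "pat" ]
}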
Another way would be using $facet, which may prove a bit more powerful by enabling more complex scenarios, but slower (several people here will hate me for this proposal, though):
db.accounts.aggregate([{
$facet: { // run two pipelines against all documents
"exact": [{ // this one will capture all exact matches
$match: {
"username": "pat"
}
}],
"others": [{ // this one will capture all others
$match: {
"username": { $ne: "pat", $regex: /pat/i }
}
}]
}
}, {
$project: {
"result": { // merge the two arrays
$concatArrays: [ "$exact", "$others" ]
}
}
}, {
$unwind: "$result" // flatten the resulting array into separate documents
}, {
$replaceRoot: { // restore the original document structure
"newRoot": "$result"
}
}])

Waypoint matching query

We have a collection as follows. Each document represents a trip of a driver; the loc property contains the way-points and the time property contains the times corresponding to those way-points. For example, in Trip A, the driver would be at GeoLocation tripA.loc.coordinates[0] at time tripA.time[0].
{
tripId : "Trip A",
time : [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:48:43.589Z",
"2015-03-08T04:49:43.589Z",
"2015-03-08T04:50:43.589Z",
],
loc: {
type: "MultiPoint",
coordinates: [
[ -73.9580, 40.8003 ],
[ -73.9498, 40.7968 ],
[ -73.9737, 40.7648 ],
[ -73.9814, 40.7681 ]
]
}
}
{
tripId : "Trip B",
time : [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:48:43.589Z",
"2015-03-08T04:49:43.589Z",
"2015-03-08T04:50:43.589Z",
],
loc: {
type: "MultiPoint",
coordinates: [
[ -72.9580, 41.8003 ],
[ -72.9498, 41.7968 ],
[ -72.9737, 41.7648 ],
[ -72.9814, 41.7681 ]
]
}
}
We would like to query for trips which start near (within 1 km of) location [long1,lat1] around time t (+-10 minutes) and end at [long2,lat2].
Is there a simple and efficient way to formulate the above query for MongoDB or Elasticsearch?
If so, could you please give the query to do so, either in MongoDB or Elasticsearch? (MongoDB preferable.)
This did start as a comment but was clearly getting way too long. So it's a long explanation of the limitations and the approach.
The bottom line of what you are asking to achieve here is effectively a "union query", which is generally defined as two separate queries where the end result is the "set intersection" of each of the results. More briefly: where the selected "trips" from your "origin" query match results found in your "destination" query.
In general database terms we refer to a "union" operation as a "join", or at least a condition where the selection of one set of criteria "and" the selection of another must both be met under a common base grouping identifier.
The basic point, in MongoDB terms (and I believe this also applies to Elasticsearch indexes), is that neither datastore supports the notion of a "join" in any way within a single direct query.
There is another MongoDB principle here, given your proposed or existing modelling: even with items specified in "array" terms, there is no way to implement an "and" condition within a geospatial search on coordinates. Also, since you modelled the coordinates as a GeoJSON "MultiPoint", the query cannot "choose" which element of that object to match the "nearest" to, so "all points" would be considered when determining the "nearest match".
Your explanation is quite clear in the intent. So we can see that the "origin" is notated as and represented within what are essentially "two arrays" in your document structure, as the "first" element in each of those arrays. The representative data is a "location" and a "time" for each progressive "waypoint" in the "trip", naturally ending with your "destination" at the last element of each array, considering of course that the data points are "paired".
I see the logic in thinking that this is a good way to store things, but it does not follow the allowed query patterns of either of the storage solutions you mention here.
As I mentioned already, this is indeed a "union" in intent, so while I see the thinking that led to the design, it would be better to store things like this:
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:47:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9580, 40.8003 ]
},
"seq": 0
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:48:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9498, 40.7968 ]
},
"seq": 1
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:49:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9737, 40.7648 ]
},
"seq": 2
},
{
"tripId" : "Trip A",
"time" : ISODate("2015-03-08T04:50:43.589Z"),
"loc": {
"type": "Point",
"coordinates": [ -73.9814, 40.7681 ]
},
"seq": 3,
"isEnd": true
}
In the example, I'm just inserting those documents into a collection called "geojunk", and then creating a 2dsphere index on the "loc" field:
db.geojunk.ensureIndex({ "loc": "2dsphere" })
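(On current MongoDB versions ensureIndex is deprecated in favour of createIndex; db.geojunk.createIndex({ "loc": "2dsphere" }) is the equivalent.)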
The processing of this is then done with "two" .aggregate() queries. The reason for .aggregate() is that you want to match the "first" document "per trip" in each case. This represents the nearest waypoint for each trip found by the queries. Then basically you want to "merge" these results into some kind of "hash" structure keyed by "tripId".
The end logic says that if both an "origin" and a "destination" matched your query conditions for a given "trip", then that is a valid result for your overall query.
The code I give here is an arbitrary Node.js implementation, mostly because it's a good base for issuing the queries in "parallel" for best performance, and also because I'm choosing to use nedb as an example of the "hash" with a little more "Mongo-like" syntax:
var async = require('async'),
    MongoClient = require("mongodb").MongoClient,
    DataStore = require('nedb');
// Common stream upsert handler
function streamProcess(stream,store,callback) {
stream.on("data",function(data) {
// Clean "_id" to keep nedb happy
data.trip = data._id;
delete data._id;
// Upsert to store
store.update(
{ "trip": data.trip },
{
"$push": {
"time": data.time,
"loc": data.loc
}
},
{ "upsert": true },
function(err,num) {
if (err) callback(err);
}
);
});
stream.on("err",callback)
stream.on("end",callback);
}
MongoClient.connect('mongodb://localhost/test',function(err,db) {
if (err) throw err;
db.collection('geojunk',function(err,collection) {
if (err) throw err;
var store = new DataStore();
// Parallel execution
async.parallel(
[
// Match origin trips
function(callback) {
var stream = collection.aggregate(
[
{ "$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -73.9580, 40.8003 ],
},
"query": {
"time": {
"$gte": new Date("2015-03-08T04:40:00.000Z"),
"$lte": new Date("2015-03-08T04:50:00.000Z")
},
"seq": 0
},
"maxDistance": 1000,
"distanceField": "distance",
"spherical": true
}},
{ "$group": {
"_id": "$tripId",
"time": { "$first": "$time" },
"loc": { "$first": "$loc" }
}}
],
{ "cursor": { "batchSize": 1 } }
);
streamProcess(stream,store,callback);
},
// Match destination trips
function(callback) {
var stream = collection.aggregate(
[
{ "$geoNear": {
"near": {
"type": "Point",
"coordinates": [ -73.9814, 40.7681 ]
},
"query": { "isEnd": true },
"maxDistance": 1000,
"distanceField": "distance",
"spherical": true
}},
{ "$group": {
"_id": "$tripId",
"time": { "$first": "$time" },
"loc": { "$first": "$loc" }
}}
],
{ "cursor": { "batchSize": 25 } }
);
streamProcess(stream,store,callback);
}
],
function(err) {
if (err) throw err;
// Just documents that matched origin and destination
store.find({ "loc": { "$size": 2 }},{ "_id": 0 },function(err,result) {
if (err) throw err;
console.log( JSON.stringify( result, undefined, 2 ) );
db.close();
});
}
);
});
});
On the sample data as I listed it, this will return:
[
{
"trip": "Trip A",
"time": [
"2015-03-08T04:47:43.589Z",
"2015-03-08T04:50:43.589Z"
],
"loc": [
{
"type": "Point",
"coordinates": [
-73.958,
40.8003
]
},
{
"type": "Point",
"coordinates": [
-73.9814,
40.7681
]
}
]
}
]
So it found the origin and destination that were nearest to the queried locations, the origin also falling within the required time window and the destination being something that is defined as an end point, i.e. "isEnd".
So the $geoNear operation does the matching, with the returned results being the documents nearest to the point and the other conditions. The $group stage is required because other documents in the same trip could "possibly" match the conditions, so it's just a way of making sure. The $first operator makes sure that the already "sorted" results will contain only one result per "trip". If you are really "sure" that will not happen with the conditions, then you could just use a standard $nearSphere query outside of aggregation instead, as sketched below. So I'm playing it safe here.
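For completeness, that non-aggregation variant might look something like this (a sketch against the same remodeled documents; it assumes at most one waypoint per trip can satisfy the filters):
db.geojunk.find({
  "seq": 0,
  "time": {
    "$gte": new Date("2015-03-08T04:40:00.000Z"),
    "$lte": new Date("2015-03-08T04:50:00.000Z")
  },
  "loc": {
    "$nearSphere": {
      "$geometry": {
        "type": "Point",
        "coordinates": [ -73.9580, 40.8003 ]
      },
      "$maxDistance": 1000
    }
  }
})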
One thing to note: even with the inclusion of "nedb" here, and though it does support dumping output to disk, the data is still accumulated in memory. If you are expecting large results, then rather than this type of "hash table" implementation, you would need to output in a similar fashion to what is shown into another MongoDB collection, and retrieve the matching results from there.
That doesn't change the overall logic though, and yet another reason to use "nedb" to demonstrate, since you would "upsert" to documents in the results collection in the same way.

Are there restrictions on MongoDB collection property names?

I have a document structure which contains a property named shares, which is an array of objects.
Now I tried to match all documents where shares contains the matching _account string, using dot notation (shares._account).
It's not working, and it seems it's because of the _ character in front of the property _account.
If I put the string to search for inside the name property of that object instead, everything works fine with dot notation.
Are there any limitations on property names?
I thought an _ was allowed, because _id also has it in MongoDB, and for me it's a kind of convention to declare bindings.
Example:
// Collection Item example
{
"_account": { "$oid" : "526fd2a571e1e13b4100000c" },
"_id": { "$oid" : "5279456932db6adb60000003" },
"name": "shared.jpg",
"path": "/upload/24795-4ui95s.jpg",
"preview": "/img/thumbs/24795-4ui95s.jpg",
"shared": false,
"shares": [
{
"name": "526fcb177675f27140000001",
"_account": "526fcb177675f27140000001"
},
{
"name": "tim",
"_account": "526fd29871e1e13b4100000b"
}
],
"tags": [
"grüngelb",
"farbe"
],
"type": "image/jpeg"
},
I tried to get the item with following query:
// Query example
{
"$or": [
{
"$and": [
{
"type": {
"$in": ["image/jpeg"]
}
}, {
"shares._account": "526fcb177675f27140000001" // Not working
//"shares.name": "526fcb177675f27140000001" // Working
}
]
}
]
}
Apart from the fact that $and can be omitted and $or is pointless, note that "image/jpeg" != "image/jpg":
db.foo.find({
"type": {"$in": ["image/jpeg"]},
"shares._account": "526fcb177675f27140000002"
})
Or if you really want the old structure:
db.foo.find({
"$or": [
{
"$and": [
{
"type": {
"$in": ["image/jpeg"]
}
}, {
"shares._account": "526fcb177675f27140000002"
}
]
}
]
}
Both will return the example document.
Your current query has some unnecessarily complicated constructs:
you don't need the $or and $and clauses ("and" is the default behaviour)
you are matching a single value using $in
The query won't match the sample document, because your query doesn't match the data:
the type field you are looking for is "image/jpg" but your sample document has "image/jpeg"
the shares._account value you are looking for is "526fcb177675f27140000001" but your sample document doesn't include this.
In other words, there is nothing wrong with an underscore prefix in a property name (_id itself has one); the query simply doesn't match the data.
A simplified query that should work is:
db.shares.find(
{
"type": "image/jpeg",
"shares._account": "526fcb177675f27140000002"
}
)