Extensive filtering - MongoDB

Example:
{
shortName: "KITT",
longName: "Knight Industries Two Thousand",
fromZeroToSixty: 2,
year: 1982,
manufacturer: "Pontiac",
/* 25 more fields */
}
I need the ability to query by at least 20 of the fields, which means only 10 fields would be left unindexed.
There are 3 numeric fields that could be used for sorting (in both directions).
This leaves me wondering how sites with lots of searchable fields do it, e.g. real estate or car sale sites where you can filter by every small detail and choose between several sort options.
How could I pull this off with MongoDB? How should I index that kind of collection?
I'm aware that there are databases made specifically for searching, but there must be general rules of thumb for doing this (even if less performantly) in any database. I'm sure not everybody uses Elasticsearch or similar.
---
Optional reading:
My reasoning is that the index could be huge, but index order matters. You always make sure that the fields that return the fewest results come first in the index and the most generic fields come last. However, what if the user chooses only generic fields? Should I include the non-generic fields in the query anyway? How do I handle sorting in both directions? Or does index intersection save the day, so I should just add 20 different indexes?

A text index is your friend.
Read up on it here: https://docs.mongodb.com/v3.2/core/index-text/
In short, it's a way to tell MongoDB that you want full text search over a specific field, multiple fields, or all fields (yay!)
To allow text indexing of all fields, use the special symbol $** and define it with type 'text':
db.collection.createIndex( { "$**": "text" } )
You can also configure it with case insensitivity, diacritic insensitivity, and more.
To perform text searches using the index, use the $text query operator; see: https://docs.mongodb.com/v3.2/reference/operator/query/text/#op._S_text
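For example, a minimal sketch (the collection name and search terms are placeholders, and the $caseSensitive/$diacriticSensitive options assume MongoDB 3.2+):

// Requires a text index on the collection, e.g. the $** wildcard index above.
db.collection.find({ $text: { $search: "knight industries" } })

// Case and diacritic sensitivity can be requested explicitly per query:
db.collection.find({
  $text: {
    $search: "knight industries",
    $caseSensitive: false,
    $diacriticSensitive: false
  }
})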
Update:
In order to allow the user to select specific fields to search on, it's possible to use weights when creating the text index: https://docs.mongodb.com/v3.2/core/index-text/#specify-weights
If you carefully select your fields' weights, for example using only distinct prime numbers, and then add the $meta textScore to your results, you may be able to figure out from the "textScore" which field was matched by the query, and so filter out the results that didn't get a hit on a selected search field.
Read more here: https://docs.mongodb.com/v3.2/tutorial/control-results-of-text-search/
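A rough sketch of that prime-weight idea, reusing field names from the car example above (whether the textScore reliably identifies the matched field is something you would have to verify against your own data):

// Distinct prime weights per field (names and weights are illustrative):
db.collection.createIndex(
  { shortName: "text", longName: "text", manufacturer: "text" },
  { weights: { shortName: 2, longName: 3, manufacturer: 5 }, name: "weighted_text" }
)

// Project and sort by the text score so the application can inspect it:
db.collection.find(
  { $text: { $search: "knight" } },
  { score: { $meta: "textScore" } }
).sort({ score: { $meta: "textScore" } })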

Related

How to both search text then sort the results by another field in MongoDB efficiently?

So let's say I have the collection posts in the format:
_id
user
title
body
tags
likes
shares
date
I have a text index on title, body, tags, then just regular indexes on all the other fields.
What I want to achieve is for the user to be able to search the text fields and have the results sorted by either likes, shares, or date (likes being the number of likes and shares being the number of shares, in case that was ambiguous).
Now, currently, find and sort are very fast on any field - surprisingly, the text fields can even be queried faster than the numeric fields - with something like db.posts.find({$text: {$search: "computer technology"}}).limit(20) returning in 0ms. Likewise, if I want to sort based on the likes field with db.posts.find().sort({likes: 1}).limit(20), it will also return in 0ms.
The problem is, however, that if I want a query that both finds and sorts on these fields db.posts.find({$text: {$search: "computer technology"}}).sort({likes: 1}).limit(20), then the query takes 8-9s to complete.
With this in mind, I was curious to see whether adding a compound index like db.posts.createIndex({likes: 1, title: "text", body: "text", tags: "text"}, {name: "ltbt"}) would help. Of course, I realised this would be inefficient from a storage point of view, but I received the error message "only one text index per collection allowed, found existing text index 'tbt'" anyway, which I kind of expected might be the case. Likewise, this approach wouldn't really be viable, since even if you could have multiple text indexes, you'd need a compound index of the text fields with each of the numeric fields you'd want to sort by.
So I'm just curious if I have missed something really obvious here or, if not, is there at least some way to improve performance?

Difference between wildcard search and individual text search

Is there a difference between a wildcard text index like $** and text indexes that I create for each of the fields in the collection?
I do see a small difference in response time when I create text indexes individually; using individual indexes returns a better response time. I am not able to post an example now, but will try to.
A wildcard text search will index every field that contains string data for each document in the collection (https://docs.mongodb.com/manual/core/index-text/#wildcard-text-indexes).
Because you are essentially increasing the number of fields indexed with a wildcard text index, it will take longer to run compared to targeting specific fields with a text index.
Since you can only have one text index per collection (https://docs.mongodb.com/manual/core/index-text/#create-text-index), it's worth considering which fields you plan on querying against beforehand.
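To make the comparison concrete, a small sketch (the collection and field names are just examples):

// Wildcard text index: every string field of every document is indexed.
db.articles.createIndex({ "$**": "text" })

// Targeted text index: only the fields you actually search against.
// Only one text index is allowed per collection, so pick one of the two.
db.articles.createIndex({ title: "text", body: "text" })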

What should the indexing strategy be to support queries that are a combination of different fields?

Let's say I have a User collection, where a document looks like this:
{
"name": "Starlord",
"age": 24,
"gender": "Male",
"height": 180,
"weight": 230,
"hobbies": "Flying Spaceships"
}
Now, I want someone to be able to search for User based on one or more of these fields. So I add a compound index containing all these fields in the order above.
The issue is that MongoDB indexing works great when the query fields are a prefix of the indexed fields. For example, if I query by name, age and gender then the performance of the query is great. If I query by name, gender and weight, then the performance of the query is not so great (although it still uses the index and is faster than no-index).
What indexing strategy do you use when you have a use case like this?
The reason why your query by name, age and gender works great while the query by name, gender and weight does not is that the order of the fields matters significantly for compound indexes in MongoDB, especially the index's prefixes. As explained in this page in the documentation, a compound index can support queries on any prefix of its fields. So, assuming you created the index in the order you presented the fields, the query for name, age and gender is a prefix of your compound index, while name, gender and weight can only take advantage of the name part of the index.
Supporting all possible combinations of queries on these fields would require you to create enough compound indexes so that all possible queries are prefixes of your indexes. I would say that this is not something you would want to do. Since your question asks about indexing strategies for queries with multiple fields, I would suggest that you look into the specific data access patterns that are most useful for your data set and create a few compound indexes that support these, taking advantage of the prefixes concept and omitting certain fields with low cardinality from the index, such as gender.
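As an illustration of the prefix rule using the example document (a sketch only; the collection name and the chosen field order are assumptions you would adapt to your own access patterns):

// Compound index, deliberately omitting low-cardinality gender:
db.users.createIndex({ name: 1, age: 1, height: 1 })

// Served by a prefix of the index:
db.users.find({ name: "Starlord" })
db.users.find({ name: "Starlord", age: 24 })
db.users.find({ name: "Starlord", age: 24, height: 180 })

// Only the name part of the index helps here, because age is skipped:
db.users.find({ name: "Starlord", height: 180 })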
If you need to be able to query for all combinations, the number of indexes required explodes quickly. The feature that comes to the rescue is called "index intersection".
Create a simple index on each field and trust the query optimizer to perform the correct index intersection. This feature is relatively new (since 2.6) and not as feature complete as in the well-known RDBMSes. It makes sense to track the Jira ticket for index intersections to know the limitations, because the limitations are quite severe. It usually makes sense to carefully mix simple indexes (which can be intersected) and compound indexes (for very common queries).
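A minimal sketch of that approach (again with assumed names; whether the planner actually chooses an intersection has to be confirmed with explain()):

// One simple index per queryable field, still leaving gender out:
db.users.createIndex({ name: 1 })
db.users.createIndex({ age: 1 })
db.users.createIndex({ height: 1 })
db.users.createIndex({ weight: 1 })

// A multi-field query may then be answered via index intersection;
// inspect the winning plan to confirm:
db.users.find({ age: 24, weight: 230 }).explain("queryPlanner")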
In your specific case, you can utilize the fact that many fields are numeric and the range of valid values is very limited (e.g. for age, height and weight). The gender field has low selectivity and shouldn't be indexed in any case; filter on gender in the last step, because leaving it out of the index will, on average, only double the amount of data that must be processed.
Creating n! compound indexes is almost certainly not an option for n > 3...

MongoDB fulltext search + workaround for partial word match

Since it is not possible to find "blueberry" from the word "blue" using a MongoDB full text search, I want to help my users complete the word "blue" to "blueberry". To do so, is it possible to query all the words in a MongoDB full text index, so that I can use them as suggestions, e.g. for typeahead.js?
Language stemming in text search uses an algorithm to try to relate words derived from a common base (e.g. "running" should match "run"). This is different from the prefix match (e.g. "blue" matching "blueberry") that you want to implement for an autocomplete feature.
To most effectively use typeahead.js with MongoDB text search I would suggest focusing on the prefetch support in typeahead:
Create a keywords collection which holds the common words (perhaps with a usage frequency count) used in your collection. You could create this collection by running a Map/Reduce across the collection you have the text search index on (as sketched at the end of this answer), and keep the word list up to date using a periodic incremental Map/Reduce as new documents are added.
Have your application generate a JSON document from the keywords collection with the unique keywords (perhaps limited to "popular" keywords based on word frequency to keep the list manageable/relevant).
You can then use the generated keywords JSON for client-side autocomplete with typeahead's prefetch feature:
$('.mysearch .typeahead').typeahead({
  name: 'mysearch',
  prefetch: '/data/keywords.json'
});
typeahead.js will cache the prefetch JSON data in localStorage for client-side searches. When the search form is submitted, your application can use the server-side MongoDB text search to return the full results in relevance order.
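For reference, a rough sketch of the keyword-extraction Map/Reduce mentioned in the first step (the "posts" collection and the "body" field are assumptions; it is the classic word-count pattern):

var mapWords = function () {
  // Tokenize one field of the document and emit each word with a count of 1.
  (this.body || "").toLowerCase().split(/\W+/).forEach(function (word) {
    if (word.length > 1) { emit(word, 1); }
  });
};
var reduceWords = function (key, values) {
  return Array.sum(values);
};
db.posts.mapReduce(mapWords, reduceWords, { out: "keywords" });
// Each resulting document looks like { _id: "computer", value: <count> },
// which your application can dump into /data/keywords.json.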
A simple workaround I am using right now is to break the text into individual characters stored as a text-indexed array.
Then, when you run the $search query, you simply break the query up into characters again.
Please note that this only works for short strings, say of length smaller than 32; otherwise the index building process takes really long, and insert performance drops significantly.
You cannot query for all the words in the index, but you can of course query the original document's fields. The words in the search index are also not always full words; they are stemmed. So you probably wouldn't find "blueberry" in the index, just "blueberri".
I don't know if this might be useful to new people facing this problem.
Depending on the size of your collection and how much RAM you have available, you can search with $regex by creating the proper index. E.g.:
db.collection.find({ query: { $regex: /querywords/ } }).sort({ criteria: -1 }).limit(limit)
You would need an index as follows:
db.collection.ensureIndex( { "query": 1, "criteria" : -1 } )
This could be really fast if you have enough memory.
Hope this helps.
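Building on that, if the partial match you need is a prefix (the "blue" to "blueberry" case), a left-anchored, case-sensitive regex can make use of an ordinary index, which is usually much cheaper than an unanchored one (field names as above, purely illustrative):

db.collection.ensureIndex({ query: 1 })
// An anchored, case-sensitive prefix regex becomes a bounded index range scan:
db.collection.find({ query: { $regex: /^blue/ } }).limit(10)
// An unanchored regex such as /blue/ cannot be bounded and scans far more.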
For those who have not yet started implementing any database architecture and are here for a solution, consider Elasticsearch. It is a JSON-document-driven database, structurally similar to MongoDB. It has an "edge-ngram" analyzer which is really efficient and quick at providing "did you mean" suggestions for misspelled searches, and you can also search partially.

The fastest way to show Documents with certain property first in MongoDB

I have collections with a huge amount of Documents on which I need to do custom searches with various different queries.
Each Document has a boolean property. Let's call it "isInTop".
I need to show Documents which have this property first in all queries.
Yes, I can easily sort on this field like:
.sort( { isInTop: -1 } );
and create a proper index with the field "isInTop" as the last field in it. But this will work slowly, as indexes in Mongo work best with unique fields.
So is there a solution to show Documents with the field "isInTop" at the top of each query?
I see two solutions here.
First: give the Documents which need to be at the top an _id from the "future". As you know, ObjectId contains a timestamp, so I can create an ObjectId with a timestamp from the future and use natural order.
Second: create a separate collection for Documents which need to be at the top, and query it first.
Is there any other solution for this problem? Which will work faster?
UPDATE
I have solved this issue by sorting on a custom field which represents rank.
Using the _id field trick you mention has the problem that at some point in time you will reach the special time, and you can't change the _id field (without inserting a new document and removing the old one).
Creating a special collection which just holds the ones you care about is probably the best option. It gives you the ability to logically (and to some extent, physically) separate the documents.
Newly introduced in MongoDB, there is also support for a "sparse" index, which may fulfill your needs as well. You could set the "isInTop" field only when you want a document to be special, and then create a sparse index on it, which would not have the problems you would normally have with a single indexed boolean field (in B-trees).
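A minimal sketch of that approach (the collection name, document id, and query are placeholders):

// Only documents that actually carry the isInTop field enter the index:
db.documents.createIndex({ isInTop: 1 }, { sparse: true })

// Set the flag only on documents that should be surfaced first:
db.documents.updateOne({ _id: someTopDocId }, { $set: { isInTop: true } })

// Sorting descending puts the flagged documents before those missing the field,
// since a missing field sorts as null:
db.documents.find(yourQuery).sort({ isInTop: -1 })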