MongoDB indexing: 'key too large to index'

I have a dataset whose schema is like this:
{..., "url":"www.google.com", "time::143703672, "geo":"US-NJ", ...}
I want to search documents by the following (I'm using pymongo):
data.find({'url':url, 'geo':user_geo, 'time':{"$gt": userlog_gmt_time - 10, "$lt": userlog_gmt_time + 10}})
I tried to build a compound index:
db.AdxRevenueData_Jul.createIndex({url:1, geo:1, time:1})
However, I receive the error "key too large to index". The reason is that some URLs are extremely long; I believe there are some "wrong" URLs in the dataset.
Then I tried to build a compound index on only 'geo' and 'time', thinking I could iterate over the results and pick out the documents with the matching url. However, this method was too slow...
Someone suggested that I set a server parameter to skip those long URLs:
sudo mongod --setParameter failIndexKeyTooLong=false
But I am not sure that all long URLs are wrong, and I do not want to skip a lot of documents.
My question is: is there any other solution, short of changing the database?
For instance, should I use a "text" index? Will it work?
db.AdxRevenueData_Jul.createIndex({url:'text', geo:1, time:1})
Or, should I additionally build url as a single 'hashed' index? Could that be a compromise?
db.AdxRevenueData_Jul.createIndex({url:'hashed'})
db.AdxRevenueData_Jul.createIndex({geo:1, time:1})

Using a text index will not be helpful here: a text index tokenizes the string into terms, so it supports word search, not the exact-match lookup your query needs.
I would recommend using a hashed index for this field instead, since a hashed index stores a fixed-size hash of the value, so the length of the URL no longer matters.
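For reference, a minimal sketch of that layout, reusing the collection and field names from the question; equality matches on url can use the hashed index, and you can check which index the planner actually picks with .explain():
db.AdxRevenueData_Jul.createIndex({url: 'hashed'})
db.AdxRevenueData_Jul.createIndex({geo: 1, time: 1})
// the original lookup; the time bounds here are invented for illustration
db.AdxRevenueData_Jul.find({url: 'www.google.com', geo: 'US-NJ', time: {$gt: 143703662, $lt: 143703682}}).explain()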

Related

mongodb index on regex fields not working

I'm new to MongoDB and I'm facing a performance issue that I need your help with. I have a collection with 400k records. Without an index on any field, each query took 20-30 s, so I created indexes for the fields that are usually used in search queries. The problem is that when I use $regex to search a string field that has an index on it, MongoDB does not use the index and still scans all records in the collection. I searched the internet for "index on regex fields mongodb" and found answers saying that "MongoDB uses the prefix of the RegEx to look up indexes", meaning you have to anchor the pattern with "^" for the index to be used, like db.users.find({name: /^key word/}), but that is not working for me. Does "index on $regex field" need MongoDB Atlas to work? I'm using the community version of MongoDB. Thanks!
There's a lot to unpack here. We'll split the answer into two parts, the first to try and answer some of the direct questions about index usage and the second to explore solutions to satisfy the application requirements.
Index Usage with $regex
As is true with an index in any database that captures the full string value as the key, MongoDB can use the index for a $regex operation, but its efficiency in doing so greatly depends on the regex being applied. That is what the Index Use documentation from the comments and the other answers you reference are describing.
In the comments you mention that an example query might be db.users.find({name: {$regex: '.*keyword.*', $options: 'i'}}). That means the regex is both unanchored and case-insensitive. The aforementioned documentation states directly:
Case insensitive regular expression queries generally cannot use indexes effectively.
Why is this? Because the substring you are searching for can be found in any string value captured by the index. The document with matching value {name: 'a keyword'} would be located at one end of the index, {name: 'keyWord'} may be somewhere in the middle, and {name: 'Z keyword'} may be at the end. The only way to ensure correct results is for the database to scan the index over all string values. So while it is still using the index, it may not be efficient, as most of the scanned values will not match and will be discarded.
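To make the contrast concrete, here is a sketch using the names from the question; only the anchored, case-sensitive prefix form lets the planner bound the index scan:
// anchored, case-sensitive prefix: the scan is bounded to keys starting with 'keyword'
db.users.find({name: /^keyword/})
// unanchored and case-insensitive: every index key must be examined
db.users.find({name: {$regex: '.*keyword.*', $options: 'i'}})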
You may always use .explain() to better understand how the database is answering the query, such as if and how it is using an index.
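For example (a sketch, reusing the query from the comments):
db.users.find({name: {$regex: '.*keyword.*', $options: 'i'}}).explain('executionStats')
// compare executionStats.totalKeysExamined with nReturned: a large gap means
// the scan examines far more keys than it returns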
Solutions
So what do we do about this?
Well, as #rickhg12hs suggests in the comments, it depends on exactly what you are trying to achieve. You reiterate that you are looking for 'full regex search capability', but that is really an approach/solution rather than a goal. If what you really need, for example, is just to match an exact string in a case-insensitive manner, then something as simple as a case-insensitive index would likely do the trick (see the sketch below).
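A minimal sketch of that, assuming the users/name names from earlier (collation strength 2 makes comparisons case-insensitive):
db.users.createIndex({name: 1}, {collation: {locale: 'en', strength: 2}})
// the query must specify the same collation for the index to be used
db.users.find({name: 'keyword'}).collation({locale: 'en', strength: 2})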
However, if you truly do wish to perform arbitrary substring searching, then you are really looking at search-engine capabilities. In that situation your best bets would probably be to emulate their indexes directly in MongoDB (e.g. have the application manually tokenize the strings to be indexed, as sketched below), stand up something like Solr/Elasticsearch next to MongoDB, or use MongoDB's Atlas Search offering. The $text operator mentioned in the comments has limitations when it comes to substring searching (such as matching just part of a word), which may or may not be relevant for your needs.
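A hypothetical sketch of the manual-tokenization approach (the name_tokens field is invented here, and pipeline-style updates require MongoDB 4.2+):
// store lowercased word tokens next to the original string
db.users.updateMany({}, [
  {$set: {name_tokens: {$split: [{$toLower: '$name'}, ' ']}}}
])
// index the array; a multikey index indexes each token separately
db.users.createIndex({name_tokens: 1})
// whole-word, case-insensitive lookups now use tight index bounds
db.users.find({name_tokens: 'keyword'})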

MongoDB searching on array / indexing

I'm using the airbnb sample set and it has a field that looks like:
"amenities": ["TV", "Cable TV", "Wifi"....
So I'm trying to do a case-INsensitive, wildcard search (on one or more values passed in).
The only thing I've found that works is:
{ amenities: { $in: [ /wi/ ] }}
Is that the best way?
So I ran it in Compass right after the dataset was imported (5600 docs), and the explain said it took ~20 ms on my machine and warned that there was no index. I then created an index on the amenities field, and the same search jumped up to ~100 ms. I just created the index through the Compass UI, so I'm not sure why it's taking 5x as long with an index, or whether there is a better way to do this.
The way to run that query is:
{ amenities: /wi/i }
// better, but not always useful: project only the indexed field
{ amenities: /wi/i }, { amenities: 1, _id: 0 }
A query on an array field already traverses the array entries, and the case-insensitivity must be specified in the regex options.
Because an index on amenities is multikey, the second (projected) query won't be a covered query; otherwise it would be blazing fast.
I've tested a similar search with and without an index, though. Execution time was reduced 10x (1500 ms down to 150 ms, on a huge collection), measured with Mongo Hacker.
As you report it, executionTimeMillis is not that different, but it is still smaller.
The reason you don't see a huge decrease is that the index stores each array entry separately; when it finds a match, it still goes back to the collection to fetch the whole document instead of answering from the index alone.
So for this kind of query, an index on an array field buys relatively little.
When querying with an unanchored regex, the query executor will have to scan every index key to see if there is a match.
You might find a collated index to be helpful.
Create an index with the appropriate collation, like:
(strength 1 and 2 are case-insensitive)
db.collection.createIndex({amenities:1},{collation:{locale:"en",strength:1}})
Then query using the same collation:
db.collection.find({amenities:"wifi"}).collation({locale:"en",strength:1})
The search will be case insensitive, and it can efficiently use the index.
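As a quick sanity check (a sketch, with the placeholder collection name from above), the winning plan should show an IXSCAN on the collated index rather than a COLLSCAN:
db.collection.find({amenities: "wifi"}).collation({locale: "en", strength: 1}).explain().queryPlanner.winningPlan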

MongoDB: Indexes, Sorting

After having read the official documentation on indexes, sorting, and index intersection, I'm a little confused about how everything works together.
I'm having trouble making my query use the indexes I've created. I'm on MongoDB 3.0.3, with a collection of ~4 million documents.
To simplify, let's say my document is composed of 6 fields:
{
  a: <text>,
  b: <boolean>,
  c: <text>,
  d: <boolean>,
  e: <date>,
  f: <date>
}
The query I want to achieve is the following :
db.mycoll.find({ a:"OK", b:true, c:"ProviderA", d:true, e:{ $gte:ISODate("2016-10-28T12:00:01Z"),$lt:ISODate("2016-10-28T12:00:02") } }).sort({f:1});
So, intuitively, I created two indexes:
db.mycoll.createIndex({a: 1, b: 1, c: 1, d: 1, e: 1}, {background: true, name: "test1"})
db.mycoll.createIndex({f: 1}, {background: true, name: "test2"})
But explain() shows that the first index is not used at all.
I know there is some kind of limitation when ranges are in play in the filter (on the e field), but I can't find my way around it.
Also, instead of a single-field index on f, I tried a compound index on {e: 1, f: 1}, but it didn't change anything.
So what have I misunderstood?
Thanks for your support.
Update: I also found the following guideline for MongoDB 2.6:
A good rule of thumb for queries with sort is to order the indexed fields in this order:
First, the field(s) on which you will query for exact values.
Second, the field(s) on which you will sort.
Finally, field(s) on which you will query for a range of values (e.g., $gt, $lt, $in)
An example of using this rule of thumb is in the section on “Sorting the results of a complex query on a range of values” below, including a link to further reading.
Does this also apply to 3.x versions?
Update 2: following the guideline above, I created the following index:
db.mycoll.createIndex({a: 1, b: 1, c: 1, d: 1, f: 1, e: 1}, {background: true, name: "test1"})
And for the same query:
db.mycoll.find({ a:"OK", b:true, c:"ProviderA", d:true, e:{ $gte:ISODate("2016-10-28T12:00:01Z"),$lt:ISODate("2016-10-28T12:00:02") } }).sort({f:1});
the index is indeed used. However, too many keys seem to be scanned; I may need to find a better order for the fields in the query/index.
Mongo sometimes acts a bit strangely when it comes to index selection.
Mongo automagically decides which index to use. The smaller an index is, the more likely it is to be used (especially indexes with only one field) - this is my experience. Maybe this happens because a small index is more often already loaded in RAM? To decide which index to use, Mongo runs test queries while it is idle, but the result is sometimes unexpected.
Therefore, if you know which index should be used, you can force the query to use it with a hint. You should try that.
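For instance (a sketch, reusing the query and the index name test1 from the question):
db.mycoll.find({ a:"OK", b:true, c:"ProviderA", d:true, e:{ $gte:ISODate("2016-10-28T12:00:01Z"), $lt:ISODate("2016-10-28T12:00:02Z") } }).sort({f:1}).hint("test1")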
The index used by your query and the index used by the sort do not overlap, so MongoDB cannot use them for index intersection:
Index intersection does not apply when the sort() operation requires an index completely separate from the query predicate.

Mongo: Quick way to find the last n inserts into a collection

I am running a query like the following:
db.sales.find({location: "LLLA",
created_at: {$gt: ISODate('2012-07-27T00:00:00Z')}}).
sort({created_at: -1}).limit(3)
This query works as expected, but it's not fast enough. I have an index on the location field and a separate index on the created_at field. Please let me know if I'm missing something obvious here that would make it faster.
Thanks for your time on this.
Mongo won't use two separate indexes for a single query; if you want to filter by location and sort by created_at, you can add a compound index:
db.sales.ensureIndex({location: 1, created_at: -1})
You should run explain on your queries when you have issues like that. Sort in particular causes non-intuitive complications with indexing, and the explain command can occasionally make it obvious why.
It's also worth noting that you can usually sort by _id to approximate insert time (assuming auto generated ObjectId values), but that won't help you as much as a compound index will in this case since you're filtering on an additional field.
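A sketch of that check, assuming the compound index above is in place; the plan should walk the index in created_at order and stop after 3 documents, with no in-memory SORT stage:
db.sales.find({location: "LLLA", created_at: {$gt: ISODate('2012-07-27T00:00:00Z')}}).sort({created_at: -1}).limit(3).explain("executionStats")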
Just use IDs:
db.sales.find({location: "LLLA"}).sort({_id: -1}).limit(3)

MongoDB: [Key Too Large To Index]

Some Background: I'm planning to use MongoDB as the publishing frontend db for a few of my websites. The actual data will be kept in a SQL Server db, and background jobs will populate MongoDB at predefined time intervals, for read-only purposes, to boost website performance.
The Situation: I have a table 'x' that I translated into a Mongo collection; everything worked fine.
'x' has a column 'c' that was originally an NVARCHAR(MAX) in the source db and contains multilingual text.
When I searched by column 'c', Mongo was doing a full scan of the collection.
So I tried ensureIndex({c: 1}), which worked, but when I checked the MongoDB logs they showed that 90% of the data could not be indexed: [Key Too Large To Index]!!
So it has indexed only 10% of the data, and now it returns results only from that 10%!!
What are my alternatives?
Note: I was using this column for full text searching in SQL Server; now I'm not sure if I should go ahead with Mongo or not :(
Try to run your mongod process with this parameter:
sudo mongod --setParameter failIndexKeyTooLong=false
And then try again.
If you need to search text inside a large string, you can use one of these:
keyword splitting
regular expressions
The former has the downside that you need some "logic" to combine the keywords into a search; the latter heavily impacts performance.
If you really need full text search, the best option is probably an external indexer like Solr or Lucene.
Since you can do some processing, you could extract some keywords and put them in a field:
_keywords : [ "mongodb" , "full search" , "nosql" ]
and make an index on that.
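A hypothetical sketch of that approach (collection name x from the question; someId is a placeholder, and the keyword values would come from your background job):
// store the extracted keywords on each document
db.x.update({_id: someId}, {$set: {_keywords: ["mongodb", "full search", "nosql"]}})
// build a multikey index over the array
db.x.createIndex({_keywords: 1})
// exact keyword lookups are now index-backed
db.x.find({_keywords: "mongodb"})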
Don't use Mongo for full text searching;
it's not designed for that. Yes, you will obviously get a 'key too large to index' error when indexing long string values.
A better approach would be to use a full text search server (Solr/Lucene or Sphinx) if your main concern is search.
Recent (2.4 and above) MongoDB builds provide a couple other options:
As the OP's stated desire is for full text search, the right approach would be to use a text index which directly supports that use case.
For an exact match index on long string values you can use a hashed index.
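A minimal sketch of both options, using the collection x and column c from the question:
// full text search: the index stores stemmed terms rather than the whole
// string, so long values no longer hit the key-size limit
db.x.createIndex({c: "text"})
db.x.find({$text: {$search: "some words"}})
// exact-match lookups: the index key is a fixed-size hash of the value
db.x.createIndex({c: "hashed"})
db.x.find({c: "the exact stored string"})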