How to ignore hyphens in MongoDB document values - mongodb

Hi, my problem is that I have a collection whose data contains string values with hyphens between words,
example: "item: 'e-commerce'"
My question is whether there is any option to make Mongo ignore the hyphens when I query a string.
For example, a search for "e commerce" should return the document with "item: 'e-commerce'". The worst-case solution would be to normalize the collection so the values have no hyphens.

Normalizing this field sounds like the way to go here.
Another (likely bad) tack would be using a collation that sets alternate to "shifted", so that whitespace and punctuation are ignored during comparisons.
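A rough sketch of what that could look like in the shell (the "items" collection name here is just an assumption):

// alternate: "shifted" makes whitespace and punctuation ignorable at strength <= 3,
// so "e commerce" and "e-commerce" compare as equal under this collation
db.items.find({ item: "e commerce" })
        .collation({ locale: "en", strength: 3, alternate: "shifted" })
// the query can only use an index if the index was built with the same collation:
db.items.createIndex({ item: 1 }, { collation: { locale: "en", strength: 3, alternate: "shifted" } })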

Related

MongoDB - Difference between index on text field and text index?

For a MongoDB field that contains strings (for example, state or province names), what (if any) difference is there between creating an index on a string-type field:
db.ensureIndex( { field: 1 } )
and creating a text index on that field:
db.ensureIndex( { field: "text" } )
Where, in both cases, field is of string type.
I'm looking for a way to do a case-insensitive search on a text field which would contain a single word (maybe more). Being new to Mongo, I'm having trouble distinguishing between using the above two index methods, and even something like a $regex search.
The two index options are very different.
When you create a regular index on a string field it indexes the
entire value in the string. Mostly useful for single word strings
(like a username for logins) where you can match exactly.
A text index on the other hand will tokenize and stem the content of
the field. So it will break the string into individual words or
tokens, and will further reduce them to their stems so that variants
of the same word will match ("talk" matching "talks", "talked" and
"talking" for example, as "talk" is a stem of all three). Mostly
useful for true text (sentences, paragraphs, etc).
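A quick illustration of the difference (the collection and field names here are hypothetical):

// regular index: matches the whole string value exactly
db.users.createIndex({ username: 1 })
db.users.find({ username: "alice" })
// text index: content is tokenized and stemmed before indexing
db.articles.createIndex({ body: "text" })
db.articles.find({ $text: { $search: "talking" } })  // matches "talk", "talks", "talked"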
Text Search
Text search supports the search of string content in documents of a
collection. MongoDB provides the $text operator to perform text search
in queries and in aggregation pipelines.
The text search process:
- tokenizes and stems the search term(s) during both the index creation and the text command execution.
- assigns a score to each document that contains the search term in the indexed fields. The score determines the relevance of a document to a given search query.
The $text operator can search for words and phrases. The query matches
on the complete stemmed words. For example, if a document field
contains the word blueberry, a search on the term blue will not match
the document. However, a search on either blueberry or blueberries
will match.
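For example, assuming a "posts" collection with a text index on a "body" field:

db.posts.createIndex({ body: "text" })
db.posts.find({ $text: { $search: "blue" } })         // does not match documents containing "blueberry"
db.posts.find({ $text: { $search: "blueberries" } })  // matches "blueberry" via the shared stem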
$regex searches can be used with regular indexes on string fields, to
provide some pattern matching and wildcard search. This is not a terribly
efficient use of indexes, but it will use an index where it can:
If an index exists for the field, then MongoDB matches the regular
expression against the values in the index, which can be faster than a
collection scan. Further optimization can occur if the regular
expression is a “prefix expression”, which means that all potential
matches start with the same string. This allows MongoDB to construct a
“range” from that prefix and only match against those values from the
index that fall within that range.
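A small hypothetical example of that prefix optimization:

db.users.createIndex({ username: 1 })
db.users.find({ username: /^abc/ })   // anchored prefix: only the matching index range is scanned
db.users.find({ username: /abc/ })    // unanchored: every index key has to be examined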
http://docs.mongodb.org/manual/core/index-text/
http://docs.mongodb.org/manual/reference/operator/query/regex/
text indexes allow you to search for words inside texts. You can do the same using a regex on a non text-indexed text field, but it would be much slower.
Prior to MongoDB 2.6, text search operations had to be made with their own command, which was a big drawback because you couldn't combine them with other filters, nor treat the result as a common cursor. As of now, text search is just another operator for the typical find method, and that's super nice.
So, why are a text index and its subsequent searches faster than a regex on a non-indexed text field? It's because text indexes work like a dictionary, a clever one that's capable of discarding words on a per-language basis (defaults to English). When you run a text search query, you run it against the dictionary, saving yourself the time that would otherwise be spent iterating over the whole collection.
Keep in mind that the text index will grow along with your collection, and it can use a lot of space. I learnt this the hard way when using capped collections. There's no way to cap text indexes.
A regular index on a text field, such as
db.ensureIndex( { field: 1 } )
will be useful only if you search for the whole text. It's used, for example, to look up alphanumeric hashes. It doesn't make any sense to apply this kind of index when storing text paragraphs, phrases, etc.
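For example (the collection and field names are made up):

db.files.createIndex({ hash: 1 })
db.files.find({ hash: "2fd4e1c67a2d28fced849ee1bb76e7391b93eb12" })  // whole-value match, uses the regular index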

REST interface and leading zeroes

I'm using Mongo's simple REST interface to query data in my collection.
One field I'm searching on is a mixture of numeric and character data, e.g. 000107011JXK
If I do the following query:
http://[server]:[port]/[db]/[collection]/?filter_[field]=000107011JXK
... Mongo removes leading zeroes and only searches on the numerical part of the criteria (e.g. 107011) which obviously does not bring back the required results.
Is there any way I can get around this issue?
Thanks in advance for any assistance.

Data model built on Mongo: store IDs as one massive string or array of strings? Is Mongo faster at using regular expressions or looking inside arrays?

We could use help on structuring our Mongo database. We need to store country IDs then run queries to return documents containing matching countries. Assume the IDs are strings 6-10 chars long.
Two options:
1) Store the country IDs as one massive string separated by some delimiter
(e.g., /). Ex: "IDIDID1/IDIDID2/IDIDID3/IDIDID4/IDIDID5".
2) Store the IDs in an array.
Ex: ["IDIDID1", "IDIDID2", "IDIDID3", "IDIDID4", "IDIDID5"].
We want to optimize for queries like "Find all documents containing country ID IDIDID3."
For option 1, we plan to use a RegEx to query documents (e.g., /IDIDID3/).
For option 2, we will use the standard $in operator.
Which option yields better read performance?
Does using the string approach yield better performance because you can index strings (as opposed to the limitation of only one array indexable by Mongo)?
We're using MongoMapper.
From the MongoDB Manual:
$regex can only use an index efficiently when the regular expression
has an anchor for the beginning (i.e. ^) of a string and is a case-sensitive match.
Additionally, while /^a/, /^a.*/, and /^a.*$/ match equivalent strings,
they have different performance characteristics.
All of these expressions use an index if an appropriate index exists;
however, /^a.*/, and /^a.*$/ are slower. /^a/ can stop scanning after matching the prefix.
So using an array and a multikey index makes more sense in terms of performance.
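A sketch of option 2, assuming the array field is called countryIds (the collection and field names are illustrative):

db.documents.createIndex({ countryIds: 1 })   // an index on an array field is a multikey index
db.documents.find({ countryIds: "IDIDID3" })  // equality on an array field matches any element
db.documents.find({ countryIds: { $in: ["IDIDID3", "IDIDID5"] } })  // several candidate IDs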

mongodb retrieve slice of a text field

Is there something like "$slice" in MongoDB that retrieves a slice of a text field, instead of an array?
I mean, just as we can get a slice of comments in this way:
db.posts.find({}, {comments:{$slice: 5}}) // first 5 comments
can we get a slice of descriptions in some way like this:
db.posts.find({}, {description:{$slice: 100}}) // first 100 chars
thanks
MongoDB's $slice operator only applies to arrays, not text fields. If you want to trim strings you'll have to take care of this in the display code for your application (or possibly save a "trimmed" version of the field for display).
Note that if you are truncating text like a comment or description, the usual practice is to truncate to the nearest whole word (so the logic is a bit more involved than a simple # of characters).
e.g. How to Truncate a string in PHP to the word closest to a certain number of characters?
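A minimal sketch of that whole-word truncation in application-side JavaScript (the field name and 100-character limit are just illustrative):

function truncateToWord(text, maxChars) {
  if (text.length <= maxChars) return text;
  var cut = text.slice(0, maxChars);
  var lastSpace = cut.lastIndexOf(" ");  // back up to the last space so no word is cut in half
  return (lastSpace > 0 ? cut.slice(0, lastSpace) : cut) + "...";
}
var doc = db.posts.findOne({}, { description: 1 });
print(truncateToWord(doc.description, 100));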

Can Kinosearch do mathematical comparisons on numbers like "greater-than"?

I am using Perl's KinoSearch module to index a bunch of text.
Some of the text has numeric fields associated with each word. For example, the word "Pizza" in the index may have a dollar field value like "5.50" (dollars).
How can I write a query in KinoSearch that will find all words that have a dollar value greater than 5?
I'm not even sure if a full-text search engine can do this kind of thing. It seems more like a SQL query.
After a bunch of searching (heh, heh), I found this in the docs: RangeQuery
I may be able to make that work. But it appears the new required classes are not part of the standard release, yet.