Full text search (Postgres) vs Elasticsearch - postgresql

Read Query
In Postgres, full text indexing allows documents to be preprocessed and an index saved for later rapid searching. Preprocessing includes:
Parsing documents into tokens.
Converting tokens into lexemes.
Storing preprocessed documents optimized for searching.
The tsvector type is used in Postgres for full text search.
The tsvector type differs from the text type in the following ways:
Eliminates case: upper- and lower-case letters are treated as identical.
Removes stop words (and, or, not, she, him, and hundreds of others), because these words are not relevant for text search.
Replaces synonyms and reduces words to their stems (elephant -> eleph); the full text catalogue does not contain the word elephant, only the stem eleph.
Can (and should) be indexed with GiST and GIN.
Supports custom ranking with weights and ts_rank (see the sketch below).
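A minimal sketch of how these pieces fit together, assuming PostgreSQL 12+ (for the generated column) and a hypothetical documents(id, body) table; all object names here are illustrative, not from the original text:

-- Keep a preprocessed tsvector column alongside the raw text.
ALTER TABLE documents ADD COLUMN body_tsv tsvector
    GENERATED ALWAYS AS (to_tsvector('english', body)) STORED;

-- GIN index over the preprocessed column for fast @@ matching.
CREATE INDEX documents_body_tsv_idx ON documents USING gin (body_tsv);

-- Preprocessing in action: case folded, stop words dropped, stems kept.
SELECT to_tsvector('english', 'The Elephants and the thoughts');
-- roughly: 'eleph':2 'thought':5

-- Search and rank by relevance with ts_rank.
SELECT id, ts_rank(body_tsv, query) AS rank
FROM documents, to_tsquery('english', 'elephant') AS query
WHERE body_tsv @@ query
ORDER BY rank DESC;

With setweight and ts_rank you can additionally weight some fields (e.g. a title) above others when ranking.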
How does Elasticsearch (a search engine) have an advantage over full text search in Postgres?

Full text search and Elasticsearch are both built on the same basic technology, inverted indices, so performance is going to be about the same.
FTS is going to be easier to deploy.
ES comes with Lucene; if you want Lucene-like capabilities on top of FTS, that will require extra effort.

Related

Is there a way to create an index on a set of words (not all words) in postgres?

I want to have some sort of limited indexed full-text search. With FTS, Postgres will index all the words in the text, but I want it to track only a given set of words. For example, I have a database of tweets, and I want them to be indexed by special words that I give: awesome, terrible, etc.
In case anyone is interested in such a specific thing: I made it work by creating a custom dictionary (thanks, Mark).
I documented my findings here: https://hackmd.io/#z889TbyuRlm0vFIqFl_AYQ/r1gKJQBZS
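The write-up behind that link is not reproduced here, but the general shape of a custom-dictionary approach can be sketched as follows. Everything in this sketch is an assumption for illustration: the tweets table, the body column, and a tracked.syn file placed in $SHAREDIR/tsearch_data/ containing one line per tracked word (e.g. "awesome awesome"):

-- A synonym dictionary that only knows the tracked words (read from tracked.syn).
CREATE TEXT SEARCH DICTIONARY tracked_words (
    TEMPLATE = synonym,
    SYNONYMS = tracked
);

-- A configuration whose word tokens are handled only by that dictionary;
-- words it does not recognize have no fallback dictionary and are dropped.
CREATE TEXT SEARCH CONFIGURATION tracked_only (COPY = simple);
ALTER TEXT SEARCH CONFIGURATION tracked_only
    ALTER MAPPING FOR asciiword, word WITH tracked_words;
-- (other token types such as numword still map to the simple dictionary
--  and may need their mappings dropped too, depending on your data)

-- Index tweets with that configuration: only the tracked words are indexed.
CREATE INDEX tweets_tracked_idx
    ON tweets USING gin (to_tsvector('tracked_only', body));

SELECT * FROM tweets
WHERE to_tsvector('tracked_only', body) @@ to_tsquery('tracked_only', 'awesome');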

Convert Sphinx Index to Table?

I go through a pretty intense Sphinx configuration each day to convert millions of records into a usable/searchable Sphinx index.
However, I now need to export that as an XML file, or failing that, as a new table.
Naturally I could do most/all of the work I do in the Sphinx index in MySQL as well, but it seems like a lot of unnecessary work if I've just generated a Sphinx index. Can I somehow 'export' that index to a table, or is the full-text index essentially now useless to me as readable data?
Well, it depends on WHAT you want out.
The Sphinx index is essentially an inverted index: https://en.wikipedia.org/wiki/Inverted_index
... as such it's good for finding which 'documents' contain a given word; it literally stores that as a list. (This is ideally suited to the fundamental function of a query! Sphinx just does the heavy lifting for multi-word queries, as well as ranking results.)
... such a structure is NOT organized by document, so you can't directly get a list of which words are in a given document. (To compute that, you would have to traverse the entire data structure.)
But if it's the inverted index that you DO want, you can dump it with indextool:
http://sphinxsearch.com/docs/current.html#ref-indextool
... e.g. the --dumpdict and even --dumphitlist commands.
(although dumpdict only works on dict=keywords indexes)
You might be interested in the --dump-rows option on indexer:
http://sphinxsearch.com/docs/current.html#ref-indexer
... it dumps out the textual data during indexing, as retrieved from MySQL.
It's not dumped from the index itself, and is not subject to all the 'magic' tokenizing and normalizing Sphinx does (charset_table/wordforms etc.).
Back to indextool: there are also the --fold, --htmlstrip and --morph options, which can be used on a stream to tokenize text.
In theory you could use these to harness the 'power' of Sphinx, and the settings from an actual index, to create a processed dataset (similar to what Sphinx does when generating an index).

Autocomplete by most frequent words - postgres or lucene?

We're using Postgres and its full text feature to search for documents (post content) in our system, and it works really well.
For autocomplete we want to build an index (dictionary?) of all the words used in the documents and search by the most frequent ones.
We will always search for one word. We will never search for a phrase.
So if I write:
"th"
I will receive (suppose the most frequent words in our documents):
"this"
"there"
"thoughts"
...
How do I do this with Postgres? Or maybe we need a more advanced solution like Apache Lucene / Solr?
Neither Postgres full text search (which produces lexemes) nor Postgres trigrams seems to be suitable for this job. Or maybe I am wrong?
I don't want to manually parse the text and strip out all the English stop words myself, which would be error prone. Postgres does a good job of this while building its lexeme index. But instead of lexemes, we need to build and search a dictionary of whole words, without normalization.
Thank you for your assistance.
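For what it's worth, one possible direction is the following sketch (assuming a posts(content) table; all names are invented for illustration): keep the English stop-word list but use the non-stemming simple dictionary template, and let ts_stat compute word frequencies.

-- A dictionary that keeps whole words (no stemming) but still drops English stop words.
CREATE TEXT SEARCH DICTIONARY english_words (
    TEMPLATE  = pg_catalog.simple,
    STOPWORDS = english
);
CREATE TEXT SEARCH CONFIGURATION words_only (COPY = simple);
ALTER TEXT SEARCH CONFIGURATION words_only
    ALTER MAPPING FOR asciiword, word WITH english_words;

-- Build a word-frequency table with ts_stat over all documents.
CREATE TABLE word_freq AS
SELECT word, nentry
FROM ts_stat($$ SELECT to_tsvector('words_only', content) FROM posts $$);

-- text_pattern_ops lets a btree index serve the LIKE 'th%' prefix search.
CREATE INDEX word_freq_prefix_idx ON word_freq (word text_pattern_ops);

-- Autocomplete for "th": most frequent words first.
SELECT word FROM word_freq
WHERE word LIKE 'th%'
ORDER BY nentry DESC
LIMIT 10;

The word_freq table is a snapshot, so it would need to be rebuilt (or maintained as a materialized view) as new documents arrive.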

MongoDB Text Index Unsupported Language

I have a large database of Greek tweets stored in a MongoDB database
(3M tweets, around 30 GB of storage).
I have created a text index on the text field and an ordered index on the timestamp field. However, I found that MongoDB does not support the Greek language for text indexing, so text queries in Greek are relatively slow. How can I address this issue and create an inverted index for the Greek documents as well?
Use Solr to build your index rather than MongoDB; it has a lot of features to support multilingual search.
I have just found that if I select "none" as the language, then according to the documentation a simple inverted index using tokenization will be created:
http://docs.mongodb.org/manual/reference/text-search-languages/#text-search-languages
If you specify a language value of "none", then the text search uses
simple tokenization with no list of stop words and no stemming

MongoDB fulltext search + workaround for partial word match

Since it is not possible to find "blueberry" by the word "blue" using a MongoDB full text search, I want to help my users complete the word "blue" to "blueberry". To do so, is it possible to query all the words in a MongoDB full text index, so that I can use those words as suggestions, e.g. for typeahead.js?
Language stemming in text search uses an algorithm to try to relate words derived from a common base (e.g. "running" should match "run"). This is different from the prefix match (e.g. "blue" matching "blueberry") that you want to implement for an autocomplete feature.
To most effectively use typeahead.js with MongoDB text search I would suggest focusing on the prefetch support in typeahead:
Create a keywords collection which has the common words (perhaps with usage frequency count) used in your collection. You could create this collection by running a Map/Reduce across the collection you have the text search index on, and keep the word list up to date using a periodic Incremental Map/Reduce as new documents are added.
Have your application generate a JSON document from the keywords collection with the unique keywords (perhaps limited to "popular" keywords based on word frequency to keep the list manageable/relevant).
You can then use the generated keywords JSON for client-side autocomplete with typeahead's prefetch feature:
$('.mysearch .typeahead').typeahead({
  name: 'mysearch',
  prefetch: '/data/keywords.json'
});
typeahead.js will cache the prefetch JSON data in localStorage for client-side searches. When the search form is submitted, your application can use the server-side MongoDB text search to return the full results in relevance order.
A simple workaround I am using right now is to break the text into individual characters and store them as a text-indexed array.
Then when you run the $search query you simply break the query up into characters again.
Please note that this only works for short strings (say, shorter than 32 characters), otherwise the index building process will take a really long time and performance will drop significantly when inserting new records.
You cannot query for all the words in the index, but you can of course query the original document's fields. The words in the search index are also not always the full words; they are stemmed. So you probably wouldn't find "blueberry" in the index, just "blueberri".
Don't know if this might be useful to some new people facing this problem.
Depending on the size of your collection and how much RAM you have available, you can search with $regex by creating the proper index, e.g.:
db.collection.find({ query: { $regex: /querywords/ } }).sort({ 'criteria': -1 }).limit(limit)
You would need an index like the following:
db.collection.ensureIndex({ "query": 1, "criteria": -1 })
This could be really fast if you have enough memory.
Hope this helps.
For those who have not yet started implementing any database architecture and are here for a solution: go for Elasticsearch. It's a JSON document driven database, structurally similar to MongoDB. It has an "edge n-gram" analyzer which is really efficient and quick at giving you "did you mean" suggestions for misspelled searches. You can also search on partial words.