PostgreSQL Full Text Search: Can't get a partial match of tsvector - postgresql

Here's the problem:
I have a table in PostgreSQL with addresses in plain text and tsvectors, and I'm trying to find an address record with a query like this:
SELECT * FROM address_catalog
WHERE address_catalog.search_vector @@ to_tsquery('123456:* & Klingon:* & Empire:* & Kronos:* & city:* & Matrok:* & street:* & 789:*')
But the problem is that I don't know anything about the address in the query. I can't tell where the country, the city or the street is in the incoming string. I don't know what order the words of the address are in, or whether it contains extra words.
I can only search for countries and cities, but if the incoming string contains a street, an index or anything else, the search returns nothing because of the conjunction of all the vector tokens. At the same time, I simply can't delete parts of the string or use disjunction, because I never know where in the string the extra words are.
So, is there any way to construct a tsquery that returns the best matches for the incoming string? Or maybe partial matches? When I tried forcing it to use OR instead of AND everywhere in the tsquery, it returned nearly the whole database. I need a vector intersection... in PostgreSQL.

I'd recommend using the smlar extension (PDF) for this. It was written by the same guys who wrote text search. It lets you use the TF-IDF similarity measure, which allows for "extraneous" query terms.
Here's how to compile it (I haven't figured out how to compile it on Windows):
http://blog.databasepatterns.com/2014/07/postgresql-install-smlar-extension.html
And here's how to use it:
http://blog.databasepatterns.com/2014/08/tf-idf-text-search-in-postgres.html
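For reference, here's a rough sketch (untested, not a recipe) of what a smlar query could look like. smlar compares arrays, so the stored tsvector's lexemes first have to be turned into a text[]; the % similarity operator and the smlar.threshold setting come from the extension:
CREATE EXTENSION smlar;
SET smlar.threshold = 0.6;
SELECT *
FROM address_catalog
WHERE tsvector_to_array(search_vector)
      % string_to_array('123456 klingon empire kronos city matrok street 789', ' ');
-- For actual TF-IDF weighting, smlar additionally needs smlar.type and a
-- statistics table configured; see the links above for the details.
This returns rows whose lexeme sets are sufficiently similar to the query terms, so a few extraneous or missing words no longer drop a row entirely.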

Related

Postgres fuzzy array intersection

I'm using PostgreSQL 13 and my problem was easily solved with the @> operator, like this:
select id from documents where keywords @> '{"winter", "report", "2020"}';
meaning that the keywords array should contain all of these elements. I've also created a GIN index on this column.
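The index itself is just the stock array operator class (a sketch, assuming keywords is a text[] column; the index name is made up):
CREATE INDEX documents_keywords_idx ON documents USING gin (keywords);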
Is it possible to achieve similar behavior even if I provide my request like '{"re", "202", "w"}'? I've heard that n-grams have semantics like this, but the "intersection" capability of arrays is crucial for me.
In your example, the matches are all prefixes. Is that the general rule here? If so, you would probably want to use the prefix-match feature of full text search, not trigrams. It would require you to reformat your data, or at least your query.
select * from
(values (to_tsvector('simple','winter report 2020'))) f(x)
where x @@ 're:* & 202:* & w:*'::tsquery;
If the strings can contain punctuation which you want preserved, you would need to take pains to properly format them into a quoted tsvector yourself rather than just letting to_tsvector deal with it. Using 'simple' config gets rid of the stemming and stop word removal features, which would interfere with what you want to do.
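Applied to a real table rather than a VALUES list, the same idea could look like this (a sketch, assuming the keyword array has been flattened into a single hypothetical keywords_text column; the expression GIN index keeps the prefix matches fast):
CREATE INDEX documents_kw_fts_idx
  ON documents USING gin (to_tsvector('simple', keywords_text));
SELECT id
FROM documents
WHERE to_tsvector('simple', keywords_text) @@ 're:* & 202:* & w:*'::tsquery;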

Use Postgresql full text search to fuzzy match all search terms

I have 2 tables (projects and tasks) that both contain a name field. I want users to be able to search both tables at the same time when entering a new item. I want to rank results based on all the terms entered. A user should be able to enter text in any order he/she chooses.
For example, searching on:
office bmt
should yield these results:
PR BMT Time - Office
BMT Office - Development
BMT Office - Development
...
The following search should also work:
BMT canter
should contain this result:
Canterburry - BMT time
So partial matches need to work too.
Ideally, if the user types a small error like:
ofice bmt
The results should still appear.
I now use something like this:
where to_tsvector(projects.name || ' - ' || tasks.name) @@ to_tsquery('OFF:*&BMT:*')
I build the search string itself in the Ruby backend by splitting the user entry according to its spaces.
This works fine, but in some cases it doesn't, and I believe that's because it interprets the text as English and ignores some words like of, off, in, etc...
For example searching for:
off bmt
Gives results that don't contain Off at all because off is ignored completely.
Is there a way to avoid this but still have good performance and fuzzy search? I'm not keen on having to sync my PG with ElasticSearch for this.
I could do it by building a list of AND statements in the WHERE clause with LIKE '% ... %', but that would probably hurt performance and doesn't support fuzzy search.
Ideally, if the user types a small error like:
ofice bmt
The results should still appear.
This could be very hard to do on more than a best-effort basis. If someone enters "Canter", how should the system know if they meant a shortening of Canterburry, or a misspelling of "cancer", or of "cantor", or if they really meant a horse's gait? Perhaps you can create a dictionary of common typos for your specific field? Also, without the specific knowledge that time zones are expected and common, "bmt" seems like a misspelling of, well, something.
This works fine, but in some cases it doesn't, and I believe that's because it interprets the text as English and ignores some words like of, off, in, etc...
Don't just believe, check and see!
select to_tsquery('english','OFF:*&BMT:*');
to_tsquery
------------
'bmt':*
Yes indeed, to_tsquery does omit stop words, even with the :* thingy.
One option is to use 'simple' rather than 'english' as your configuration:
select to_tsquery('simple','OFF:*&BMT:*');
to_tsquery
-------------------
'off':* & 'bmt':*
Another option is to write tsquery directly rather than processing through to_tsquery. Note that in this case, you have to lower-case it yourself:
select 'off:*&bmt:*'::tsquery;
tsquery
-------------------
'off':* & 'bmt':*
Also note that if you do this with 'office:*', you will never get a match in an 'english' configuration, because 'office' in the document gets stemmed to 'offic', while no stemming occurs when you write 'office:*'::tsquery. So you could use 'simple' rather than 'english' to avoid both stemming and stop words. Or you could test each word in the query individually to see if it gets stemmed before deciding to add :* to it.
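A quick way to test a word is to run it through the configuration and compare the lexeme to the input:
select to_tsvector('english', 'office');  -- 'offic':1   (stemmed)
select to_tsvector('english', 'bmt');     -- 'bmt':1     (unchanged)
If the lexeme differs from the input word, write the stemmed form (or switch to 'simple') before appending :*.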
Is there a way to avoid this but still have good performance and fuzzy search? I'm not keen on having to sync my PG with ElasticSearch for this.
What do you mean by fuzzy search? You don't seem to be using that now. You are just using prefix matching, and accidentally using stemming and stop words. How large is the table to be searched, and what kind of performance is acceptable?
If you did use ElasticSearch, how would you then phrase your searches? If you explained how you would phrase the search in ES, maybe someone can help you do the same thing in PostgreSQL. I don't think we can take it as a given that switching to ES will just magically do the right thing.
I could do it by building a list of AND statements in the WHERE clause
with LIKE '% ... %', but that would probably hurt performance and
doesn't support fuzzy search.
Have you looked into pg_trgm? It can make those types of queries quite fast. Also, LIKE '%...%' is a lot more fuzzy than what you are currently doing, so I don't understand how you would lose that. pg_trgm also provides the '<->' operator, which is even fuzzier and might be your best bet. It can deal with typos fairly well when they are embedded in long strings, but in short strings they can really be a problem.
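A sketch of what that could look like on the combined name (the index and column names are made up; % is pg_trgm's similarity operator and <-> its distance operator, both supported by a GiST trigram index):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX projects_name_trgm_idx
  ON projects USING gist (name gist_trgm_ops);
SELECT name
FROM projects
WHERE name % 'ofice bmt'            -- % uses pg_trgm.similarity_threshold (0.3 by default)
ORDER BY name <-> 'ofice bmt'       -- closest matches first
LIMIT 10;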
In your case, to_tsquery() needs to indicate that all words are required. You can use to_tsquery('english', 'off & bmt') and specify a particular dictionary that contains the word 'off'; see link 4 below.
Some tips for using tsvector:
Create a field on your table that combines all the fields with terms you want to search; this field should be of type tsvector.
Your search should use tsquery, as you mentioned. With the search you can use some good tricks, such as the following:
2.a. Create a rank with ts_rank(), indicating the search priority; it measures how closely the tsquery matches the original terms.
2.b. If you have domain-specific words (as in my case, searching chemical terms), you can create a dictionary of the commonly used words; these words can be used to extract the radical or parts of words to compare for similarity.
2.c. About performance: tsquery works very well with GIN and GiST indexes. I have used full text search on a table with 200k+ rows and searches return in under 0.4 seconds.
If you need fuzzier matching of words, you can also use fuzzy matching. Together with tsquery I used the levenshtein_less_equal search with a distance of 3; the function matches words that differ from the search term by 3 letters or fewer, which is a good way to search single words (a small illustration follows below).
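A small illustration of levenshtein_less_equal from the fuzzystrmatch extension (it stops counting once the distance exceeds the given limit, which keeps it cheap):
CREATE EXTENSION IF NOT EXISTS fuzzystrmatch;
SELECT levenshtein_less_equal('ofice', 'office', 3);  -- returns 1, within the limit of 3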
tsquery and tsvector: https://www.postgresql.org/docs/10/datatype-textsearch.html
text search: https://www.postgresql.org/docs/10/textsearch-controls.html#TEXTSEARCH-RANKING
Fuzzy: https://www.postgresql.org/docs/11/fuzzystrmatch.html#id-1.11.7.24.6
Lexize: https://www.postgresql.org/docs/10/textsearch-dictionaries.html#TEXTSEARCH-SIMPLE-DICTIONARY

Using full text search in PostgreSQL, how can I make certain words worth less to match?

I am trying to use Postgres full-text search to search an index of company names. There are lots of duplicates, typos, etc. When matching company names, things like LLC and Inc are not quite stop-words (as in, I want them to count for something) but they are not nearly as important as most other words. Is there a way to query such that some words count more than other words when matching?
(I'm doing this all through Django, but if I can figure out the SQL to use I can probably get the rest of the way there...)
You can use the 3-argument form of "setweight" to de-weight specific lexemes. You would do this in the tsvector, not in the tsquery.
select setweight(setweight(to_tsvector('The DBA LLC'),'A'),'D','{llc}');
setweight
-------------------
'dba':2A 'llc':3D
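At query time you can then pass a weights array to ts_rank so that D-weighted lexemes contribute almost nothing to the rank. A sketch (the companies table and the 0.01 weight are illustrative; the array order is {D, C, B, A}):
SELECT name,
       ts_rank('{0.01, 0.2, 0.4, 1.0}',
               setweight(setweight(to_tsvector(name), 'A'), 'D', '{llc,inc}'),
               to_tsquery('dba & llc')) AS rank
FROM companies
ORDER BY rank DESC;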

How to make a tsquery perform a partial match?

I have the following situation. In our database, our user has the ability to search part numbers as 'keywords'. Part numbers are attached as 'footnotes' which get attached to certain items. An example of a footnote of this nature would have a description of:
Part Number: 09C888
Our keyword search searches multiple tables through an incredibly fun set of LEFT JOINs, eventually forming a tsvector which is then matched against a tsquery. Our current issue is that this methodology seems to accept only exact matches. Example:
select to_tsvector('Part Number: 09C888') @@ to_tsquery('09C888:*');
?column?
---------
t
Using the full version of the part number as the search criteria works fine. However...
select to_tsvector('Part Number: 09C888') @@ to_tsquery('9C888:*');
?column?
----------
f
Is there a way to modify the above tsquery to match against 09C888 with values of both 09C888 AND 9C888? Normally I could do something similar with the LIKE construct, but we're currently using full text search for efficiency on large amounts of data. From perusing the PostgreSQL documentation, I cannot figure out an easy way to do this. I am also hesitant to change the overall query since it's doing... well, it's doing a lot of stuff, of which the text matching is only one part. (Obviously a potential place for improvement.)
EDIT:
I've actually figured out how to do this using a modified query
select to_tsvector('Part Number: 09C888') @@ to_tsquery('09C888|9C888:*');
Is there a better way to determine a match than what I've listed above? I ask mostly because the solution is incredibly specific, but essentially these part numbers may or may not have leading 0s.
Have you considered storing the part number with leading zeroes removed in a separate column and searching against that?
+---------------------+----------------------+
| footnote_str        | stripped_part_number |
+---------------------+----------------------+
| Part Number: 09C888 | 9C888                |
+---------------------+----------------------+
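One way to populate that column (a sketch, assuming PostgreSQL 12+ and that the part number is also stored in its own hypothetical part_number column; on older versions a trigger would do the same job):
ALTER TABLE footnotes
  ADD COLUMN stripped_part_number text
  GENERATED ALWAYS AS (regexp_replace(part_number, '^0+', '')) STORED;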
CREATE INDEX footnote_part_number_txt_idx
ON footnotes (stripped_part_number text_pattern_ops);
Then you can query (using the index):
SELECT footnote_str
FROM footnotes
WHERE stripped_part_number LIKE '9C88%'
See: http://petereisentraut.blogspot.se/2009/10/rethink-your-text-column-indexing-with.html

ends with (suffix) and contains string search using MATCH in SQLite FTS

I am using SQLite FTS extension in my iOS application.
It performs well, but the problem is that it matches only string prefixes (i.e. 'starts with' keyword searches).
i.e.
This works:
SELECT * FROM tablename WHERE columnname MATCH 'searchterm*'
but the following two don't:
SELECT * FROM tablename WHERE columnname MATCH '*searchterm'
SELECT * FROM tablename WHERE columnname MATCH '*searchterm*'
Is there any workaround for this, or any way to use FTS to build a query similar to a LIKE '%searchterm%' query?
EDIT:
As pointed out by Retterdesdialogs, storing the entire text in reverse order and running a prefix search on the reversed string is a possible solution for the ends-with/suffix search problem, which was my original question, but it won't work for a 'contains' search. I have updated the question accordingly.
In my iOS and Android applications, I have shied away from FTS search for exactly the reason that it doesn't support substring matches due to lack of suffix queries.
The workarounds seem complicated.
I have resorted to using LIKE queries, which while being less performant than MATCH, served my needs.
The workaround is to store the reversed string in an extra column. See this link (it's not exactly the same, but it should give an idea):
Search Suffix using Full Text Search
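A minimal sketch of that workaround (the table and data are made up; SQLite has no built-in reverse(), so the application reverses strings before inserting and before querying):
CREATE VIRTUAL TABLE docs USING fts4(body, body_rev);
INSERT INTO docs VALUES ('my searchterm', 'mrethcraes ym');
-- "ends with 'searchterm'" becomes a prefix match on the reversed column:
SELECT body FROM docs WHERE body_rev MATCH 'mrethcraes*';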
To get it to work for contains queries, you need to store all suffixes of the terms you want to be able to search. This has the downside of making the database really large, but that can be avoided by compressing the data.
SQLite FTS contains and suffix matches