Longest matching substring - postgresql

How would you search for the longest match within a varchar variable? For example, table GOB has entries as follows:
magic_word | prize
===================
sh         | $0.20
sha        | $0.40
shaz       | $0.60
shaza      | $1.50
I would like to write a plpgsql function that takes amongst other arguments a string as input (e.g. shazam), and returns the 'prize' column on the row of GOB with the longest matching substring. In the example shown, that would be $1.50 on the row with magic_word shaza.
The function boilerplate I can handle; it's just the matching bit I'm stuck on. I can't think of an elegant solution. I'm guessing it's probably really easy, but I am scratching my head. I don't know the input string in advance, as it will be derived from the result of a query on another table.
Any ideas?

Simple solution
SELECT magic_word
FROM gob
WHERE 'shazam' LIKE (magic_word || '%')
ORDER BY magic_word DESC
LIMIT 1;
This works because, among the prefixes that match, the longest one sorts last - so I sort DESC and pick the first match.
I am assuming from your example that you want to match left-anchored, from the beginning of the string. If you want to match anywhere in the string (which is more expensive and even harder to back up with an index), use:
...
WHERE 'shazam' LIKE ('%' || magic_word || '%')
...
Performance
The query is not sargable. It might help quite a bit if you had additional information, like a minimum length that you could base a partial index on, to reduce the number of rows to consider. It needs to be a criterion that gets you down to less than ~5% of the table to be effective. So initials (a natural minimum pick) may or may not be useful, but two or three letters at the start might help quite a bit.
In fact, you could optimize this iteratively, something along the lines of (a sketch follows the list):
Try a partial index of words with 15 letters+
If not found, try 12 letters+
If not found, try 9 letters+
...
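A minimal sketch of the first step, assuming the gob table from the question; the index name and the length cutoff are illustrative:

CREATE INDEX gob_word_15plus_idx ON gob (magic_word)
WHERE length(magic_word) >= 15;

-- First iteration: repeat the query with the index predicate added,
-- so only the few long words are considered.
SELECT prize
FROM   gob
WHERE  length(magic_word) >= 15          -- matches the index predicate
AND    'shazam' LIKE (magic_word || '%')
ORDER  BY magic_word DESC
LIMIT  1;
-- If this returns no row, retry with >= 12, then >= 9, ...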
A simple case of what I outlined in this related answer on dba.SE:
Can spatial index help a “range - order by - limit” query
Another approach would be to use a trigram index. You'd need the additional module pg_trgm for that. Normally you would search with a short pattern in a table with longer strings. But trigrams work for your reverse approach, too, with some limitations. Obviously you couldn't match a string with just two characters in the middle of a longer string using trigrams ... Test for corner cases.
There are a number of answers here on SO with more information. Example:
Effectively query on column that includes a substring
Advanced solution
Consider the solution under this closely related question for a whole table of search strings. Implemented with a recursive CTE:
Longest Prefix Match
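For reference, a hedged sketch of that recursive-CTE idea applied to the single input here (table and column names from the question): shorten the input one character at a time until a stored magic_word matches.

WITH RECURSIVE search(term) AS (
   SELECT 'shazam'::text
   UNION ALL
   SELECT left(term, -1)
   FROM   search
   WHERE  term <> ''
   AND    NOT EXISTS (SELECT 1 FROM gob WHERE magic_word = term)
)
SELECT g.prize
FROM   search s
JOIN   gob g ON g.magic_word = s.term;
-- Recursion stops at the first (longest) hit, so the join returns at most one row.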

How about:
1.
select max(FOO.matchingValue)
from
(
    select magic_word as matchingValue
    from T
    where substr('abracadabra', 1, length(magic_word)) = magic_word
) as FOO
2.
select prize
from T
join
(
    select max(FOO.matchingValue) as MaxValue
    from
    (
        select magic_word as matchingValue
        from T
        where substr('abracadabra', 1, length(magic_word)) = magic_word
    ) as FOO
) as BAR
on BAR.MaxValue = T.magic_word
(Note: the string literal needs single quotes; double quotes would denote an identifier in PostgreSQL.)

Related

Efficient way to find ordered string's exact, prefix and postfix match in PostgreSQL

Given a table named table and a string column named column, I want to search for the word word in that column in the following way: exact matches on top, followed by prefix matches, and finally postfix matches.
Currently I got the following solutions:
Solution 1:
select column
from (
    select column,
           case
               when column like 'word'  then 1
               when column like 'word%' then 2
               when column like '%word' then 3
           end as rank
    from table
) as ranked
where rank is not null
order by rank;
Solution 2:
select column
from table
where column like 'word'
   or column like 'word%'
   or column like '%word'
order by case
             when column like 'word'  then 1
             when column like 'word%' then 2
             when column like '%word' then 3
         end;
Now my question is which one of the two solutions is more efficient, or better yet, is there a solution better than both of them?
Your 2nd solution looks simpler for the planner to optimize, but it is possible that the first one gets the same plan as well.
In the WHERE clause, the column like 'word' check is not needed, as it is covered by column like 'word%'; leaving it in might make the DB do two checks instead of one.
But the biggest problem is the third condition, column like '%word': a leading wildcard has no way to be optimized by a regular index.
So either way, PostgreSQL is going to scan your full table and manually extract the matches. This is going to be slow for 20,000 rows or more.
I recommend exploring fuzzy string matching and full text search; it looks like that is what you're trying to emulate.
Even if you don't want the full power of FTS or fuzzy string matching, you should definitely add the extension pg_trgm, as it will enable you to add a GIN index on the column that will speed up LIKE '%word' searches.
https://www.postgresql.org/docs/current/pgtrgm.html
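A minimal sketch, reusing the question's placeholder names (quoted, since table and column are reserved words); the index name is made up:

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX table_column_trgm_idx
ON "table" USING gin ("column" gin_trgm_ops);

-- Both the prefix and the postfix patterns can now use a bitmap index scan:
EXPLAIN SELECT "column" FROM "table" WHERE "column" LIKE '%word';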
And seriously, have a look at FTS. It does provide ranking. If your requirements are strictly what you described, you can still perform the FTS query to "prefilter" and then apply this ranking logic afterwards.
There are tons of introduction articles to PostgreSQL FTS, here's one:
https://www.compose.com/articles/mastering-postgresql-tools-full-text-search-and-phrase-search/
I even wrote a post recently, when I added FTS search to my site:
https://deavid.wordpress.com/2019/05/28/sedice-adding-fts-with-postgresql-was-really-easy/

Convert to SARGable query

I want to write a query that searches for a containing substring in the table.
Table:
Create table tbl_sarg
(
    colname varchar(100),
    coladdres varchar(500)
);
Note: I just want to use Index Seek for searching on 300 million records.
Index:
create nonclustered index ncidx_colname on tbl_sarg(colname);
Sample Records:
insert into tbl_sarg values('John A Mak','HNo 102 Street Road Uk');
insert into tbl_sarg values('Shawn A Meben','Church road USA');
insert into tbl_sarg values('Lee Decose','ShopNo 22 K Mark UK');
insert into tbl_sarg values('James Don','A Mall, 90 feet road UAE');
Query 1:
select * from tbl_sarg
where colname like '%ee%'
Actual Execution Plan: (plan screenshot omitted)
Query 2:
select * from tbl_sarg
where charindex('ee',colname)>0
Actual Execution Plan: (plan screenshot omitted)
Query 3:
select * from tbl_sarg
where patindex('%ee%',colname)>0
Actual Execution Plan: (plan screenshot omitted)
How can I force the query processor to use an index seek instead of a table/index scan on a large data set?
None of the queries you have posted is sargable by definition: a pattern like '%..%' automatically forces the query engine to do a scan, and so does wrapping the column in a function (such as charindex or patindex) inside a predicate.
Here is a post about it: https://bertwagner.com/2017/08/22/how-to-search-and-destroy-non-sargable-queries-on-your-server/
Kimberly Tripp has also written very interesting articles about this. If it is mandatory for you to execute this kind of wildcard query, it may be worth checking whether the Full-Text Search feature is an option. My point is: either limit yourself to precise predicates in your queries, or change strategy. And before I forget: don't try to force a seek with a HINT; I can't see that medicine being better than the illness.
A search argument, or SARG for short, is a filter predicate that enables the optimizer to rely on index order. The filter predicate uses the following form (or a variant with two delimiters of a range, or with the operand positions flipped):
WHERE <column> <operator> <expression>
Such a filter is sargable if:
* You don't apply manipulation to the filtered column.
* The operator identifies a consecutive range of qualifying rows in the index. That's the case with operators like =, >, >=, <, <=, BETWEEN, and LIKE with a known prefix. It's not the case with operators like <>, or LIKE with a wildcard as a prefix.
In most cases, when you apply manipulation to the filtered column, the optimizer doesn’t
try to be too smart and understand the meaning of the calculation, and if index ordering
can still be relied on. It simply assumes that the result values might sort differently than the
source values, and therefore index ordering can’t be trusted.
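To make this concrete against the question's table and its ncidx_colname index (a hedged illustration):

-- Sargable: known prefix, the optimizer can seek ncidx_colname.
SELECT colname FROM tbl_sarg WHERE colname LIKE 'Lee%';

-- Not sargable: a leading wildcard forces a scan.
SELECT colname FROM tbl_sarg WHERE colname LIKE '%ee%';

-- Not sargable: the filtered column is wrapped in a function.
SELECT colname FROM tbl_sarg WHERE charindex('ee', colname) > 0;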
So why doesn’t SQL Server use the index for the %ee% query? Pretend for a moment that you held a phone book in your hand, and I asked you to find everyone whose last name contains the letters %ee%. You would have to scan every single page in the phone book, because the results would include things like:
Anne Lee
Lee Yung
Kathlee
Aleen
When I asked you for all last names containing %ee% anywhere in the name, my query was not sargable – meaning, you couldn’t leverage the indexes to do an index seek.
That’s where SQL Server’s Full Text Search comes in.
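A hedged sketch of the setup (T-SQL; the key, catalog, and index names are assumptions, and note that full-text search matches words and word prefixes, not arbitrary infixes like %ee%):

-- Full-text indexing requires a unique key index on the table.
ALTER TABLE tbl_sarg ADD id int IDENTITY CONSTRAINT pk_tbl_sarg PRIMARY KEY;

CREATE FULLTEXT CATALOG ft_sarg_catalog;

CREATE FULLTEXT INDEX ON tbl_sarg (colname)
    KEY INDEX pk_tbl_sarg ON ft_sarg_catalog;

-- Word and word-prefix searches can now use the full-text index:
SELECT * FROM tbl_sarg WHERE CONTAINS(colname, '"Lee*"');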

PostgreSQL: Find sentences closest to a given sentence

I have a table of images with sentence captions. Given a new sentence I want to find the images that best match it based on how close the new sentence is to the stored old sentences.
I know that I can use the @@ operator with a to_tsquery, but tsquery accepts specific words as queries.
One problem is I don't know how to convert the given sentence into a meaningful query. The sentence may have punctuation and numbers.
However, I also feel that some kind of cosine-similarity measure is what I need, but I don't know how to get that out of PostgreSQL. I am using the latest GA version and am happy to use the development version if that would solve my problem.
Full Text Search (FTS)
You could use plainto_tsquery() to (per documentation) ...
produce tsquery ignoring punctuation
SELECT plainto_tsquery('english', 'Sentence: with irrelevant words (and punctuation) in it.')
plainto_tsquery
------------------
'sentenc' & 'irrelev' & 'word' & 'punctuat'
Use it like:
SELECT *
FROM tbl
WHERE to_tsvector('english', sentence) @@ plainto_tsquery('english', 'My new sentence');
But that is still rather strict and only provides very limited tolerance for similarity.
Trigram similarity
Might be better suited to search for similarity, even overcome typos to some degree.
Install the additional module pg_trgm, create a GiST index and use the similarity operator % in a nearest neighbour search:
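The setup could look like this (a sketch; the index name is an assumption):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX tbl_sentence_trgm_idx ON tbl USING gist (sentence gist_trgm_ops);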
Basically, with a trigram GiST index on sentence:
-- SELECT set_limit(0.3); -- adjust tolerance if needed
SELECT *
FROM tbl
WHERE sentence % 'My new sentence'
ORDER BY sentence <-> 'My new sentence'
LIMIT 10;
More:
Finding similar strings with PostgreSQL quickly
Finding similar posts with PostgreSQL
Slow fulltext search for terms with high occurence
Combine both
You can even combine FTS and trigram similarity:
PostgreSQL FTS and Trigram-similarity Query Optimization
It's a pretty late answer, but I'm adding it in case anyone else runs into this. If you add ":*" to the end of the words, the query will also bring up prefix matches.
Sample:
JS autocomplete -> CodeIgniter:
$barcode = $this->input->get("term") . ":*";
Query:
$query = 'select * from tablename where xx @@ ? LIMIT 15';
$barcodequery = $this->db->query($query, array(explode(" ", $barcode)))->result_array();

How to index a postgres table by name, when the name can be in any language?

I have a large postgres table of locations (shops, landmarks, etc.) which the user can search in various ways. When the user wants to do a search for the name of a place, the system currently does (assuming the search is on cafe):
lower(location_name) LIKE '%cafe%'
as part of the query. This is hugely inefficient. Prohibitively so. It is essential I make this faster. I've tried indexing the table on
gin(to_tsvector('simple', location_name))
and searching with
(to_tsvector('simple', location_name) @@ to_tsquery('simple', 'cafe'))
which works beautifully, and cuts down the search time by a couple of orders of magnitude.
However, the location names can be in any language, including languages like Chinese, which aren't whitespace delimited. This new system is unable to find any Chinese locations, unless I search for the exact name, whereas the old system could find matches to partial names just fine.
So, my question is: Can I get this to work for all languages at once, or am I on the wrong track?
If you want to optimize arbitrary substring matches, one option is to use the pg_trgm module. Add an index:
CREATE INDEX table_location_name_trigrams_key ON table
USING gin (location_name gin_trgm_ops);
This will break "Simple Cafe" into "sim", "imp", "mpl", etc., and add an entry to the index for each trigram in each row. The query planner can then automatically use this index for substring pattern matches, including:
SELECT * FROM table WHERE location_name ILIKE '%cafe%';
This query will look up "caf" and "afe" in the index, find the intersection, fetch those rows, then check each row against your pattern. (That last check is necessary since the intersection of "caf" and "afe" matches both "simple cafe" and "unsafe scaffolding", while "%cafe%" should only match one). The index becomes more effective as the input pattern gets longer since it can exclude more rows, but it's still not as efficient as indexing whole words, so don't expect a performance improvement over to_tsvector.
The catch is, trigrams don't work at all for patterns that are under three characters. That may or may not be a deal-breaker for your application.
Edit: I initially added this as a comment.
I had another thought last night when I was mostly asleep. Make a cjk_chars function that takes an input string, matches against the CJK Unicode ranges with regexp_matches, and returns an array of any such characters, or NULL if there are none. Add a GIN index on cjk_chars(location_name). Then query for:
WHERE CASE
        WHEN cjk_chars('query') IS NOT NULL THEN
             cjk_chars(location_name) @> cjk_chars('query')
             AND location_name LIKE '%query%'
        ELSE
             <tsvector/trigrams>
      END
Ta-da, unigrams!
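A hedged sketch of such a function and index; the regex covers only the main CJK Unified Ideographs block, so extend the character class as needed:

CREATE FUNCTION cjk_chars(txt text) RETURNS text[]
LANGUAGE sql IMMUTABLE STRICT AS
$$
    -- Collect every CJK character; NULLIF turns an empty result into NULL.
    SELECT NULLIF(
        ARRAY(SELECT (regexp_matches(txt, '[\u4E00-\u9FFF]', 'g'))[1]),
        '{}'
    );
$$;

CREATE INDEX location_cjk_idx
ON location USING gin (cjk_chars(location_name));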
For full text search in a multi-language environment you need to store the language each datum is in alongside the text itself. You can then use the language-specific flavours of the tsearch functions to get proper stemming, etc.
eg given:
CREATE TABLE location(
location_name text,
location_name_language text
);
... plus any appropriate constraints, you might write:
CREATE INDEX location_name_ts_idx ON location
USING gin (to_tsvector(location_name_language::regconfig, location_name));
(Note the regconfig cast: an expression index needs the immutable two-argument form of to_tsvector, and a text column must be cast explicitly.)
and for search:
SELECT to_tsvector(location_name_language::regconfig, location_name) @@ to_tsquery('english', 'cafe');
Cross-language searches will be problematic no matter what you do. In practice I'd use multiple matching strategies: I'd compare the search term to the tsvector of location_name in the simple configuration and the stored language of the text. I'd possibly also use a trigram based approach like willglynn suggests, then I'd unify the results for display, looking for common terms.
It's possible you may find Pg's full-text search too limited, in which case you might want to check out something like Lucene / Solr.
See:
* controlling full text search.
* tsearch dictionaries
Similar to what @willglynn already posted, I would consider the pg_trgm module. But preferably with a GiST index:
CREATE INDEX tbl_location_name_trgm_idx
ON tbl USING gist (location_name COLLATE "C" gist_trgm_ops);
The gist_trgm_ops operator class ignores case generally, and ILIKE is just as fast as LIKE. Quoting the source code:
Caution: IGNORECASE macro means that trigrams are case-insensitive.
I use COLLATE "C" here, which is effectively no special collation (byte order instead), because you obviously have a mix of various collations in your column. Collation is relevant for ordering or ranges; for a basic similarity search you can do without it. I would consider setting COLLATE "C" for your column to begin with.
This index would lend support to your first, simple form of the query:
SELECT * FROM tbl WHERE location_name ILIKE '%cafe%';
* Very fast.
* Retains the capability to find partial matches.
* Adds the capability for fuzzy search. Check out the % operator and set_limit().
GiST index is also very fast for queries with LIMIT n to select n "best" matches. You could add to the above query:
ORDER BY location_name <-> 'cafe'
LIMIT 20
Read more about the "distance" operator <-> in the manual.
Or even:
SELECT *
FROM tbl
WHERE location_name ILIKE '%cafe%' -- exact partial match
OR location_name % 'cafe' -- fuzzy match
ORDER BY
(location_name ILIKE 'cafe%') DESC -- exact beginning first
,(location_name ILIKE '%cafe%') DESC -- exact partial match next
,(location_name <-> 'cafe') -- then "best" matches
,location_name -- break remaining ties (collation!)
LIMIT 20;
I use something like that in several applications for (to me) satisfactory results. Of course, it gets a bit slower with multiple features applied in combination. Find your sweet spot ...
You could go one step further and create a separate partial index for every language and use a matching collation for each:
CREATE INDEX location_name_trgm_idx
ON location USING gist (location_name COLLATE "de_DE" gist_trgm_ops)
WHERE location_name_language = 'German';
-- repeat for each language
That would only be useful if you want results in a single language per query, and it would be very fast in that case.

Postgres full text search with multiple columns, why concat in index and not at runtime?

I've come across full text search in postgres in the last few days, and I am a little confused about indexing when searching across multiple columns.
The postgres docs talk about creating a tsvector index on concatenated columns, like so:
CREATE INDEX pgweb_idx ON pgweb
USING gin(to_tsvector('english', title || ' ' || body));
which I can search like so:
... WHERE
(to_tsvector('english', title||' '||body) @@ to_tsquery('english', 'foo'))
However, if I wanted to sometimes search just the title, sometimes just the body, and sometimes both, I would need 3 separate indexes. And if I added in a third column, that could potentially be 6 indexes, and so on.
An alternative which I haven't seen in the docs is to index the two columns separately, and then use a normal WHERE...AND query:
... WHERE
(to_tsvector('english', title) @@ to_tsquery('english','foo'))
AND
(to_tsvector('english', body) @@ to_tsquery('english','foo'))
Benchmarking the two on ~1 million rows shows basically no difference in performance.
So my question is:
Why would I want to concatenate indexes like this, rather than just indexing columns individually? What are the advantages/disadvantages of both?
My best guess is that if I knew in advance I would only ever want to search both columns (never one at a time), I would only need the one concatenated index, which would use less memory.
Edit
moved to: https://dba.stackexchange.com/questions/15412/postgres-full-text-search-with-multiple-columns-why-concat-in-index-and-not-at
1. Using one index is easier / faster for a DB.
2. It will be quite difficult to properly rank results when using two indexes.
3. You can assign relative weights to columns when creating a single index, so that a match in title will be worth more than a match in body.
4. You are searching for a single word here; what happens if you search for several and they appear separately in different columns?
To answer the question about the implementation of #3, see https://www.postgresql.org/docs/9.1/textsearch-controls.html:
a weight is one of the letters A, B, C, or D
UPDATE tt SET ti =
setweight(to_tsvector(coalesce(title,'')), 'A') ||
setweight(to_tsvector(coalesce(keyword,'')), 'B') ||
setweight(to_tsvector(coalesce(abstract,'')), 'C') ||
setweight(to_tsvector(coalesce(body,'')), 'D');
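A hedged follow-up, reusing the tt/ti names from the docs example above: the weights feed into ts_rank, so title matches float to the top.

SELECT title,
       ts_rank(ti, query) AS rank
FROM   tt, to_tsquery('foo') AS query
WHERE  ti @@ query
ORDER  BY rank DESC;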