PostgreSQL query on a text column ignoring special characters - postgresql

I have a table which contains a text column, say vehicle number.
Now I want to query the table for fields which contain a particular vehicle number.
While matching I do not want to consider non-alphanumeric characters.
example: query condition - DEL123
should match - DEL-123, DEL/123, DEL#123, etc...

If you know which characters to skip, put them as the second parameter of this translate() call (which is faster than regexp functions):
select *
from a_table
where translate(code, '-/#', '') = 'DEL123';
Otherwise, you can compare only alphanumeric characters using regexp_replace():
select *
from a_table
where regexp_replace(code, '[^[:alnum:]]', '', 'g') = 'DEL123';

@klin's answer is great, but it is not sargable, so when you are searching through millions of records (maybe not your case, but perhaps relevant to someone else with a similar question), using a regular expression will likely yield much better results.
The following can use an index on code, significantly reducing the number of rows tested:
select *
from a_table
where code ~ '^DEL[^[:alnum:]]*123$';
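For the pattern to actually use an index, the left-anchored regular expression needs a b-tree index that sorts in C order. A minimal sketch (the index name is illustrative; text_pattern_ops is unnecessary if the database already uses the C collation):
-- b-tree index usable for left-anchored LIKE and regular expression patterns on code
create index a_table_code_pattern_idx on a_table (code text_pattern_ops);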

Related

Matching performance with pattern from table column

I have a query which looks like:
SELECT *
FROM my_table
WHERE 'some_string' LIKE my_table.some_column || '%%'
How can I index some_column to improve this query performance?
Or is there a better way to filter this?
This predicate effectively searches for all prefixes for a given string:
WHERE 'some_string' LIKE my_table.some_column || '%'
Perhaps % is a special character in your client and needs to be escaped with another % to pass a literal %; otherwise, '%%' is just noise and can be replaced with '%'.
The most efficient solution should be a recursive CTE (or similar) that matches every prefix exactly, starting with some_column = left('some_string', 1), up to some_column = left('some_string', length('some_string')) (= 'some_string').
You only need a plain b-tree index on the column for this. Depending on the details of your implementation, partial or expression indexes might improve performance further ...
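A minimal sketch of that idea, assuming the table and column names from the question and a plain b-tree index on some_column:
WITH RECURSIVE prefixes AS (
   SELECT left('some_string', 1) AS prefix, 1 AS len
   UNION ALL
   SELECT left('some_string', len + 1), len + 1
   FROM   prefixes
   WHERE  len < length('some_string')
)
SELECT m.*
FROM   prefixes p
JOIN   my_table m ON m.some_column = p.prefix;  -- exact match, so a plain b-tree index applies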
Related:
Reverse pattern matching: find the longest prefix
Algorithm for finding the longest prefix
PostgreSQL LIKE query performance variations
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
I believe you intend to write the following query:
SELECT *
FROM my_table
WHERE my_table.some_column LIKE 'some_string%';
In other words, you want to find records where some column begins with some_string followed by anything, possibly nothing at all.
As far as I know, a regular B-tree index on some_column will be effective, up to a point, for your query. The reason is that Postgres can traverse the tree looking for the prefix some_string. Once it has found that entry, the index might not help beyond that point. But an index on some_column should give you some performance benefit here.
A condition where an index would not help would be the following:
WHERE my_table.some_column LIKE '%some_string';
In this case, the index is rendered mostly useless, because we have no idea with what prefix the column value should begin.
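For reference, a minimal sketch of an index that supports the prefix form (the index name is illustrative; with a non-C collation, the text_pattern_ops operator class is what lets LIKE use the b-tree):
CREATE INDEX my_table_some_column_pattern_idx ON my_table (some_column text_pattern_ops);

-- can use the index: the pattern is anchored at the start
SELECT * FROM my_table WHERE some_column LIKE 'some_string%';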

How do I find a word that certain rows contain in SQLite?

I am having a hard time figuring out how to do the following in SQLite:
I have a table with let's say the following:
table name: terms
golden
waterfall
inception
castaway
I would like to do a lookup on all of the terms in the table that are contained in a specific string. So a string like "abc_golden#hotmail.com" should return a match. Or "life_waterfall_5" should return a match.
I understand how to do this with the LIKE statement if it were the other way around (if I were looking for matches in the table that contain a specific word). But how do I do it in my case, where I have to match all entries that are contained WITHIN my search term?
To find rows that contain a string:
SELECT * FROM tbl WHERE col LIKE '%word%';
To find rows that a string contains, just turn it backwards:
SELECT * FROM tbl WHERE 'some string' LIKE '%' || col || '%';

Searching individual words in a string

I know about full-text search, but that only matches your query against individual words. I want to select strings that contain a word that starts with my query term. For example, if I search:
appl
the following should match:
a really nice application
apples are cool
appliances
since all those strings contain words that start with appl. In addition, it would be nice if I could select the number of words that match, and sort based on that.
How can I implement this in PostgreSQL?
Prefix matching with Full Text Search
FTS supports prefix matching. Your query works like this:
SELECT * FROM tbl
WHERE to_tsvector('simple', string) @@ to_tsquery('simple', 'appl:*');
Note the appended :* in the tsquery. This can use an index.
See:
Get partial match from GIN indexed TSVECTOR column
Alternative with regular expressions
SELECT * FROM tbl
WHERE string ~ '\mappl';
Quoting the manual here:
\m .. matches only at the beginning of a word
To order by the count of matches, you could use regexp_matches()
SELECT tbl_id, count(*) AS matches
FROM  (
   SELECT tbl_id, regexp_matches(string, '\mappl', 'g')
   FROM   tbl
   WHERE  string ~ '\mappl'
   ) sub
GROUP BY tbl_id
ORDER BY matches DESC;
Or regexp_split_to_table():
SELECT tbl_id, string, count(*) - 1 AS matches
FROM  (
   SELECT tbl_id, string, regexp_split_to_table(string, '\mappl')
   FROM   tbl
   WHERE  string ~ '\mappl'
   ) sub
GROUP BY 1, 2
ORDER BY 3 DESC, 2, 1;
Postgres 9.3 or later has index support for simple regular expressions with a trigram GIN or GiST index. The release notes for Postgres 9.3:
Add support for indexing of regular-expression searches in pg_trgm
(Alexander Korotkov)
See:
PostgreSQL LIKE query performance variations
Depesz wrote a blog post about index support for regular expressions.
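A minimal sketch of such a trigram index, using the tbl/string names from the queries above (the index name is illustrative):
CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX tbl_string_trgm_idx ON tbl USING gin (string gin_trgm_ops);

-- Postgres 9.3+ can use the trigram index for the regular expression search
SELECT * FROM tbl WHERE string ~ '\mappl';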
SELECT * FROM some_table WHERE some_field LIKE 'appl%' OR some_field LIKE '% appl%';
As for counting the number of words that match, I believe that would be too expensive to do dynamically in Postgres (though maybe someone else knows better). One way you could do it is by writing a function that counts occurrences in a string and then adding ORDER BY myFunction('appl', some_field). Again though, this method is VERY expensive (i.e. slow) and not recommended.
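A minimal sketch of that idea; count_occurrences is a hypothetical helper, not a built-in:
-- hypothetical helper: counts non-overlapping occurrences of needle in haystack
CREATE FUNCTION count_occurrences(needle text, haystack text)
RETURNS integer LANGUAGE sql IMMUTABLE AS $$
   SELECT (length(haystack) - length(replace(haystack, needle, ''))) / length(needle);
$$;

SELECT *
FROM   some_table
WHERE  some_field LIKE 'appl%' OR some_field LIKE '% appl%'
ORDER  BY count_occurrences('appl', some_field) DESC;  -- evaluated per row, hence expensive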
For things like that, you should probably use a separate/complementary full-text search engine like Sphinx Search (google it), which is specialized for that sort of thing.
An alternative to that is to have another table that contains keywords and the number of occurrences of those keywords in each string. This means you need to store each phrase you have (e.g. really really nice application) and also store the keywords in another table (i.e. really, 2, nice, 1, application, 1) and link that keyword table to your full-phrase table. This means that you would have to break strings up into keywords as they are entered into your database and store them in two places. This is a typical space vs. speed trade-off.
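A minimal sketch of that keyword-table layout; all table, column, and index names here are illustrative:
CREATE TABLE phrases (
   phrase_id serial PRIMARY KEY,
   phrase    text NOT NULL                -- e.g. 'really really nice application'
);

CREATE TABLE phrase_keywords (
   phrase_id   integer NOT NULL REFERENCES phrases (phrase_id),
   keyword     text    NOT NULL,          -- e.g. 'really'
   occurrences integer NOT NULL,          -- e.g. 2
   PRIMARY KEY (phrase_id, keyword)
);

CREATE INDEX phrase_keywords_keyword_idx ON phrase_keywords (keyword text_pattern_ops);

-- the prefix search becomes an indexable join, ordered by total matches
SELECT p.phrase, sum(k.occurrences) AS matches
FROM   phrase_keywords k
JOIN   phrases p USING (phrase_id)
WHERE  k.keyword LIKE 'appl%'
GROUP  BY p.phrase
ORDER  BY matches DESC;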

Postgresql ILIKE versus TSEARCH

I have a query over a number of text fields, something like this:
SELECT * FROM some_table
WHERE field1 ILIKE '%thing%'
OR field2 ILIKE '%thing'
OR field3 ILIKE '%thing';
The columns are pretty much all varchar(50) or thereabouts. Now, I understand that to improve performance I should index the fields on which the search operates. Should I be considering replacing ILIKE with TSEARCH completely?
A full text search setup is not identical to a "contains"-style LIKE query. It stems words etc., so you can match "cars" against "car".
If you really want a fast ILIKE, then no standard database index or FTS will help. Fortunately, the pg_trgm module can do that.
http://www.postgresql.org/docs/9.1/static/pgtrgm.html
http://www.depesz.com/2011/02/19/waiting-for-9-1-faster-likeilike/
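A minimal sketch of the pg_trgm approach, assuming the column names from the question (index names are illustrative):
CREATE EXTENSION IF NOT EXISTS pg_trgm;

-- one trigram index per searched column
CREATE INDEX some_table_field1_trgm_idx ON some_table USING gin (field1 gin_trgm_ops);
CREATE INDEX some_table_field2_trgm_idx ON some_table USING gin (field2 gin_trgm_ops);
CREATE INDEX some_table_field3_trgm_idx ON some_table USING gin (field3 gin_trgm_ops);

-- ILIKE with a leading wildcard can now use the trigram indexes
SELECT * FROM some_table WHERE field1 ILIKE '%thing%';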
One thing that is very important: NO B-TREE INDEX will ever improve this kind of search:
where field ilike '%SOMETHING%'
What I am saying is that if you do a:
create index idx_name on some_table(field);
The only access you will improve is WHERE field LIKE 'something%' (when you search for values starting with a literal prefix). So you will get no benefit from adding a regular index to the field column in this case.
If you need to improve your search response time, definitely consider using FULL TEXT SEARCH.
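If you go the full text search route, a minimal sketch is an expression index over the concatenated columns (column names taken from the question, index name illustrative):
CREATE INDEX some_table_fts_idx ON some_table USING gin (
   to_tsvector('english',
      coalesce(field1, '') || ' ' || coalesce(field2, '') || ' ' || coalesce(field3, ''))
);

-- the query must repeat the same expression so the planner can match it to the index
SELECT *
FROM   some_table
WHERE  to_tsvector('english',
          coalesce(field1, '') || ' ' || coalesce(field2, '') || ' ' || coalesce(field3, ''))
       @@ plainto_tsquery('english', 'thing');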
Adding a bit to what the others have said.
First, you can't really use an index based on a value in the middle of the string. Indexes are generally tree searches, and you have no way to know whether your search will be faster than just scanning the table, so PostgreSQL will default to a seq scan. Indexes will only be used if they match the first part of the string. So:
SELECT * FROM invoice
WHERE invoice_number like 'INV-2012-435%'
may use an index, but LIKE '%44354456%' cannot.
In general in LedgerSMB we use both, depending on what kind of search we are doing. You might see a search like:
select * from parts
WHERE partnumber ilike ? || '%'
and plainto_tsquery(get_default_language(), ?) @@ description;
So these are very different. Use each one where it makes the most sense.

ltrim(rtrim(x)) leaves blanks on RTL content - does anyone know of a workaround?

I have a table [Company] with a column [Address3] defined as varchar(50).
I cannot control the values entered into that table, but I need to extract the values without leading and trailing spaces. I run the following query:
SELECT DISTINCT RTRIM(LTRIM([Address3])) Address3 FROM [Company] ORDER BY Address3
The column contains both RTL and LTR values.
Most of the data is retrieved correctly, but SOME (not all) RTL values are returned with leading and/or trailing spaces.
I attempted the following query:
SELECT DISTINCT ltrim(rTRIM(ltrim(rTRIM([Address3])))) c, ltrim(rTRIM([Address3])) b, [Address3] a, rtrim(LTRIM([Address3])) Address3 FROM [Company] ORDER BY Address3
but it returned the same problem in all columns - does anyone have any idea what could cause it?
The rows that return with extraneous spaces might have a kind of space or invisible character the trim functions don't know about. The documentation doesn't even mention what is considered "a blank" (pretty damn sloppy if you ask me). Try taking one of those rows and looking at the characters one by one to see which characters they actually are.
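A hedged sketch of that character-by-character inspection (SQL Server; the literal is a placeholder to replace with one of the problem Address3 values):
DECLARE @s varchar(50) = 'suspect value here ';   -- substitute a suspect Address3 value

SELECT n                          AS position,
       SUBSTRING(@s, n, 1)        AS ch,
       ASCII(SUBSTRING(@s, n, 1)) AS code
FROM  (SELECT TOP (DATALENGTH(@s))
              ROW_NUMBER() OVER (ORDER BY (SELECT NULL)) AS n
       FROM sys.all_objects) AS nums
ORDER BY n;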
Since you are using varchar, just do this to get the ASCII code of all the bad characters:
--identify the bad character
SELECT
COUNT(*) AS CountOf
,'>'+RIGHT(LTRIM(RTRIM(Address3)),1)+'<' AS LastChar_Display
,ASCII(RIGHT(LTRIM(RTRIM(Address3)),1)) AS LastChar_ASCII
FROM Company
GROUP BY RIGHT(LTRIM(RTRIM(Address3)),1)
ORDER BY 3 ASC
Do a one-time fix to the data to remove the bogus character, where xxxx is the ASCII value identified in the previous select:
--only one bad character found in previous query
UPDATE Company
SET Address3=REPLACE(Address3,CHAR(xxxx),'')
--multiple different bad characters found by previous query
UPDATE Company
SET Address3=REPLACE(REPLACE(Address3,CHAR(xxxx1),''),char(xxxx2),'')
If you have bogus characters in your data, remove them from the data rather than every time you select the data. You WILL have to add this REPLACE logic to all INSERTs and UPDATEs on this column, to keep any new data from containing the bogus characters.
If you can't alter the data, you can just select it this way:
SELECT
LTRIM(RTRIM(REPLACE(Address3,CHAR(xxxx),'')))
,LTRIM(RTRIM(REPLACE(REPLACE(Address3,CHAR(xxxx1),''),char(xxxx2),'')))
...