What PostgreSQL type is good for storing an array of strings and offering fast lookup afterwards

I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-value pairs. When we perform a search, there is a requirement to include all values from this column as well. Searching in jsonb is quite slow, so my plan is to create a trigger that will extract all the values from the jsonb column with something like this:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
The trigger would then insert the values into a newly created column in the same table, so I can use that column for faster searches.
My question is: what type would be most suitable for storing the extracted values and then searching within them? Currently the search looks like this:
CASE
WHEN something IS NOT NULL
THEN EXISTS(SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where search_term is what the user entered on the front end.

This is not going to be pretty, and normalizing the data model would be better.
You can define a function
CREATE FUNCTION jsonb_values_to_string(
j jsonb,
separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left hand side expression to speed it up.
Make sure that you choose a separator that does not occur in the data or the pattern...
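For example, a minimal sketch (it assumes the pg_trgm extension is available; the index name is made up):
-- trigram support comes from the pg_trgm extension
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX t1_jsonb_values_trgm ON app.t1
    USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);
Unlike a btree index, a trigram GIN index can also support patterns with a leading wildcard, such as '%foo%'.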

Related

Alphanumeric sorting without any pattern on the strings [duplicate]

I've got a Postgres ORDER BY issue with the following table:
em_code name
EM001 AAA
EM999 BBB
EM1000 CCC
To insert a new record into the table, I:
1. Select the last record with SELECT * FROM employees ORDER BY em_code DESC
2. Strip the alphabetic prefix from em_code using a regular expression and store it in ec_alpha
3. Cast the remaining part to an integer, ec_num
4. Increment ec_num by one
5. Pad ec_num with sufficient zeros and prefix it with ec_alpha again
When em_code reaches EM1000, the above algorithm fails.
The first step returns EM999 instead of EM1000, so the algorithm generates EM1000 again as the new em_code, breaking the unique key constraint.
Any idea how to select EM1000?
Since Postgres 10, it is possible to specify a collation that will sort columns containing numbers naturally.
https://www.postgresql.org/docs/10/collation.html
-- First create a collation with numeric sorting
CREATE COLLATION numeric (provider = icu, locale = 'en@colNumeric=yes');
-- Alter table to use the collation
ALTER TABLE "employees" ALTER COLUMN "em_code" type TEXT COLLATE numeric;
Now just query as you would otherwise.
SELECT * FROM employees ORDER BY em_code
On my data, I get results in this order (note that it also sorts foreign numerals):
Value
0
0001
001
1
06
6
13
۱۳
14
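If you would rather not alter the column, the collation created above can also be applied per query (a quick sketch):
SELECT * FROM employees ORDER BY em_code COLLATE numeric;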
One approach you can take is to create a naturalsort function for this. Here's an example, written by Postgres legend RhodiumToad.
create or replace function naturalsort(text)
returns bytea language sql immutable strict as $f$
select string_agg(convert_to(coalesce(r[2], length(length(r[1])::text) || length(r[1])::text || r[1]), 'SQL_ASCII'),'\x00')
from regexp_matches($1, '0*([0-9]+)|([^0-9]+)', 'g') r;
$f$;
Source: http://www.rhodiumtoad.org.uk/junk/naturalsort.sql
To use it, simply call the function in your ORDER BY:
SELECT * FROM employees ORDER BY naturalsort(em_code) DESC
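Since the function is declared immutable, it can also back an expression index, so the ORDER BY does not have to recompute it for every row (a sketch; the index name is made up):
create index employees_em_code_natsort_idx on employees (naturalsort(em_code));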
The reason is that the string sorts alphabetically (instead of numerically, like you would want), and '1' sorts before '9'.
You could solve it like this:
SELECT * FROM employees
ORDER BY substring(em_code, 3)::int DESC;
It would be more efficient to drop the redundant 'EM' from your em_code - if you can - and save an integer number to begin with.
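A sketch of that idea, assuming a hypothetical integer column em_num, with the display code rebuilt on the fly:
-- em_num is a plain integer; 'EM' and zero-padding are added only for display
SELECT 'EM' || lpad(em_num::text, greatest(3, length(em_num::text)), '0') AS em_code
FROM employees
ORDER BY em_num DESC;
Sorting and incrementing then operate on a plain integer, with no parsing at all.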
Answer to question in comment
To strip any and all non-digits from a string:
SELECT regexp_replace(em_code, E'\\D','','g')
FROM employees;
\D is the regular expression class-shorthand for "non-digits".
'g' as 4th parameter is the "globally" switch to apply the replacement to every occurrence in the string, not just the first.
After replacing every non-digit with the empty string, only digits remain.
This always comes up in questions and in my own development, and I finally tired of tricky ways of doing this. So I broke down and implemented it as a PostgreSQL extension:
https://github.com/Bjond/pg_natural_sort_order
It's free to use, MIT license.
Basically it just normalizes the numerics (zero-prepending them) within strings such that you can create an index column for full-speed sorting au naturel. The readme explains.
The advantage is you can have a trigger do the work and not your application code. It will be calculated at machine-speed on the PostgreSQL server and migrations adding columns become simple and fast.
You can use just this line:
ORDER BY length(substring(em_code FROM '[0-9]+')), em_code
I wrote about this in detail in this related question:
Humanized or natural number sorting of mixed word-and-number strings
(I'm posting this answer as a useful cross-reference only, so it's community wiki).
I came up with something slightly different.
The basic idea is to create an array of tuples (integer, string) and then order by these. The magic number 2147483647 is int32_max, used so that strings are sorted after numbers.
ORDER BY ARRAY(
SELECT ROW(
CAST(COALESCE(NULLIF(match[1], ''), '2147483647') AS INTEGER),
match[2]
)
FROM REGEXP_MATCHES(col_to_sort_by, '(\d*)|(\D*)', 'g')
AS match
)
I thought about another way of doing this that uses less database storage than padding and saves time compared to calculating on the fly.
https://stackoverflow.com/a/47522040/935122
I've also put it on GitHub
https://github.com/ccsalway/dbNaturalSort
The following solution is a combination of various ideas presented in another question, as well as some ideas from the classic solution:
create function natsort(s text) returns text immutable language sql as $$
select string_agg(r[1] || E'\x01' || lpad(r[2], 20, '0'), '')
from regexp_matches(s, '(\D*)(\d*)', 'g') r;
$$;
The design goals of this function were simplicity and pure string operations (no custom types and no arrays), so it can easily be used as a drop-in solution and is trivial to index over.
Note: If you expect numbers with more than 20 digits, you'll have to replace the hard-coded maximum length 20 in the function with a suitable larger length. Note that this will directly affect the length of the resulting strings, so don't make that value larger than needed.
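As a quick sanity check against the em_code values from the question (runnable standalone via VALUES):
select code
from (values ('EM001'), ('EM999'), ('EM1000')) v(code)
order by natsort(code);
This yields EM001, EM999, EM1000, in that order.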

ILIKE query with indexing for jsonb array data in postgres

I have a table with a jsonb column named city, which holds a JSON array like the one below:
[{"name":"manchester",..},{"name":"liverpool",....}]
Now I want to query the table on the "name" key with an ILIKE query.
I have tried the below, but it is not working for me:
select * from data where city->>'name' ILIKE '%man%'
I know that I can search for an exact match with the query below:
select * from data where city @> '[{"name":"manchester"}]'
I also know we can use jsonb functions to flatten the data and search it, but then the search will not use an index.
Is there any way to search the data with ILIKE such that it also uses an index?
Index support will be difficult; for that, a schema that adheres to the first normal form would be beneficial.
Other than that, you can use the JSONPATH language from v12 on:
WITH t(c) AS (
SELECT '[{"name":"manchester"},{"name":"liverpool"}]'::jsonb
)
SELECT jsonb_path_exists(
c,
'$.**.name ? (@ like_regex "man" flag "i")'::jsonpath
)
FROM t;
jsonb_path_exists
═══════════════════
t
(1 row)
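Applied as a filter on the table from the question, that could look like this (a sketch; like_regex matching is case-sensitive unless flag "i" is given):
SELECT *
FROM data
WHERE jsonb_path_exists(city, '$.**.name ? (@ like_regex "man" flag "i")');
Note that the function call by itself does not get index support either; it merely expresses the ILIKE-style match natively in jsonpath.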
You should really store your data differently.
You can do the ilike query "naturally" but without index support, like this:
select * from data where exists (select 1 from jsonb_array_elements(city) f(x) where x->>'name' ILIKE '%man%');
You can get index support like this (the gin_trgm_ops operator class comes from the pg_trgm extension):
create index on data using gin ((city::text) gin_trgm_ops);
select * from data where city::text ilike '%man%';
But it will find matches within the text of the keys as well as the values, including in irrelevant keys/values if any are present. You could get around this by creating a function that returns just the values, all banged together into one string, and then use a functional index. But the index will get less effective as the length of the string gets longer, as there will be more false positives that need to be tracked down and weeded out.
create or replace function concat_val(jsonb, text) returns text immutable language sql as $$
select string_agg(x->>$2,' ') from jsonb_array_elements($1) f(x)
$$ parallel safe;
create index on data using gin (concat_val(city,'name') gin_trgm_ops);
select * from data where concat_val(city,'name') ilike '%man%';
You should really store your data differently.
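For reference, a normalized layout could look roughly like this (a sketch; it assumes data has an id primary key and that pg_trgm is installed):
create table city_name (
    data_id bigint references data (id),
    name text
);
create index on city_name using gin (name gin_trgm_ops);
select d.* from data d where exists (
    select 1 from city_name c
    where c.data_id = d.id and c.name ilike '%man%'
);
The trigram index then covers exactly the values you want to search, with no false positives coming from keys.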

Indexing on jsonb keys in postgresql

I'm using PostgreSQL.
Is there any way to create an index just on the dictionary keys, not the values?
For example imagine a jsonb column like:
select data from tablename where id = 0;
answer: {1:'v1', 2:'v2'}
I want to index the key set (or key list), which is [1, 2], to speed up queries like:
select count(*) from tablename where data ? '2';
As you can see in the docs, there is a way of indexing the column entirely (keys + values):
CREATE INDEX idxgin ON api USING GIN (jdoc);
This is not good for me, considering that I store a large amount of data in the values.
I tried this before:
CREATE INDEX test ON tablename (jsonb_object_keys(data));
The error was:
ERROR: set-returning functions are not allowed in index expressions
Also, I don't want to store keys in the dictionary as a value.
Can you help me?
Your example doesn't make much sense, as your WHERE clause isn't specifying a JSON operation, and your example output is not valid JSON syntax.
You can hide the set-returning function (and the aggregate) into an IMMUTABLE function:
create function object_keys(jsonb) returns text[] language SQL immutable as $$
select array_agg(jsonb_object_keys) from jsonb_object_keys($1)
$$;
create index on tablename using gin ( object_keys(data));
If you did it this way, you could then query it formulated like this:
select * from tablename where object_keys(data) @> ARRAY['2'];
You could instead make the function return a JSONB containing an array rather than returning a PostgreSQL text array, if you would rather query it that way:
select * from tablename where object_keys_jsonb(data) @> '"2"';
You can't use a ? formulation, because in jsonb that is specifically for objects, not arrays. If you really wanted to use ?, you could instead write a function that keeps the object as an object but converts all the values to JSON null or to an empty string, so they take up less space.
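A sketch of such a function (the name object_keys_only is made up; it rebuilds the object with the same keys but empty-string values, so the default jsonb GIN operator class and the ? operator still apply):
create function object_keys_only(j jsonb) returns jsonb language sql immutable as $$
  -- keep only the keys; blank out every value
  select coalesce(jsonb_object_agg(k, ''::text), '{}'::jsonb)
  from jsonb_object_keys(j) as t(k)
$$;
create index on tablename using gin (object_keys_only(data));
select count(*) from tablename where object_keys_only(data) ? '2';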

Postgresql: query on jsonb column - index doesn't make it quicker

There is a table in PostgreSQL 9.6. A query on its jsonb column is slow compared to a relational table, and adding a GIN index on it doesn't make it quicker.
Table:
-- create table
create table dummy_jsonb (
id serial8,
data jsonb,
primary key (id)
);
-- create index
CREATE INDEX dummy_jsonb_data_index ON dummy_jsonb USING gin (data);
-- CREATE INDEX dummy_jsonb_data_index ON dummy_jsonb USING gin (data jsonb_path_ops);
Generate data:
-- generate data,
CREATE OR REPLACE FUNCTION dummy_jsonb_gen_data(n integer) RETURNS integer AS $$
DECLARE
i integer:=1;
name varchar;
create_at varchar;
json_str varchar;
BEGIN
WHILE i<=n LOOP
name:='dummy_' || i::text;
create_at:=EXTRACT(EPOCH FROM date_trunc('milliseconds', now())) * 1000;
json_str:='{
"name": "' || name || '",
"size": ' || i || ',
"create_at": ' || create_at || '
}';
insert into dummy_jsonb(data) values
(json_str::jsonb
);
i:= i + 1;
END LOOP;
return n;
END;
$$ LANGUAGE plpgsql;
-- call function,
select dummy_jsonb_gen_data(1000000);
-- drop function,
DROP FUNCTION IF EXISTS dummy_jsonb_gen_data(integer);
Query:
select * from dummy_jsonb
where data->>'name' like 'dummy_%' and data->>'size' >= '500000'
order by data->>'size' desc
offset 50000 limit 10;
Test result:
The query takes 1.8 seconds on a slow VM.
Adding or removing the index doesn't make a difference.
Changing to a GIN index with jsonb_path_ops doesn't make a difference either.
Questions:
Is it possible to make the query quicker, by improving either the index or the SQL?
If not, does that mean a relational table is more appropriate for this case in Postgres?
And since MongoDB performs better in my test, does that mean MongoDB is more appropriate for this kind of storage and query?
Quote from the manual:
The default GIN operator class for jsonb supports queries with top-level key-exists operators ?, ?& and ?| operators and path/value-exists operator @> [...] The non-default GIN operator class jsonb_path_ops supports indexing the @> operator only.
Your query uses LIKE and a string comparison with >= (which is probably not correct to begin with); neither of those is supported by a GIN index.
But even an index on (data ->> 'name') wouldn't be used for the condition data->>'name' like 'dummy_%' as that is true for all rows because every name starts with dummy.
You can create a regular btree index on the name:
CREATE INDEX ON dummy_jsonb ( (data ->> 'name') varchar_pattern_ops);
Which will be used if the condition is restrictive enough, e.g.:
where data->>'name' like 'dummy_9549%'
If you need to query for the size, you can create an index on ((data ->> 'size')::int) and then use something like this:
where (data->>'size')::int >= 500000
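For completeness, that index could be created like this (note the extra parentheses around the expression):
CREATE INDEX ON dummy_jsonb ( ((data ->> 'size')::int) );
The query must then use the exact same expression, (data ->> 'size')::int, for the index to be considered.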
However, your use of limit and offset will always force the database to read all rows, sort them, and only then limit the result. This is never going to be very fast. You might want to read this article for more information on why limit/offset is not very efficient.
JSON is a nice addition to the relational world, but only if you use it appropriately. If you don't need dynamic attributes for a row, then use standard columns and data types. Even though JSON support in Postgres is extremely good, that doesn't mean one should use it for everything just because it's the current hype. Postgres is still a relational database and should be used as such.
Unrelated, but: your function to generate the test data can be simplified to a single SQL statement. You might not have been aware of the generate_series() function for things like that:
insert into dummy_jsonb(data)
select jsonb_build_object('name', 'dummy_'||i,
'size', i::text,
'create_at', (EXTRACT(EPOCH FROM date_trunc('milliseconds', clock_timestamp())) * 1000)::text)
from generate_series(1,1000000) as t(i);
While a btree index (the standard PostgreSQL index, based on B-trees) can optimize ordering-based queries like >= '500000', a GIN index uses an inverted index structure and is meant to quickly find data containing specific elements (it is used, e.g., in text search to find rows containing given words), so (AFAIK) it can't be used for the query you provide.
The PostgreSQL docs on jsonb indexing indicate which WHERE conditions an index can be applied to. As pointed out there, you can create a btree index on specific elements of a jsonb column: indexes on the specific elements referenced in the WHERE clause should work for the query you indicate.
Also, as commented above, think whether you actually need JSON for your use case.

How can I change all occurrences of a particular value in any column in PostgreSQL?

I have three different values in my database that represent a null: an actual null, an empty string, and a string {x:Null}. This value appears across multiple columns.
{x:Null} is normalized on the web front-end, so all these values look exactly the same although they end up ordered differently in a sort. How can I write a query that will take these values and make them actual nulls across every column and every table?
Bonus points if you can tell me how to make sure these other empty values are always inserted as nulls going forward. (Disclaimer: I have no power to grant any actual bonus points. ;)
You can query the information_schema to get a list of all tables and columns with a string type.
SELECT table_name, column_name
FROM information_schema.columns
WHERE data_type IN ('text', 'character', 'character varying')
NOTE: double-check first what values data_type has; I'm not sure if it will be 'character' or 'char' or something else.
Then I would write a small program to update each column in each table. Here it is sketched out in Perl.
while( my($table, $column) = $sth->fetchrow_array ) {
    # quote_identifier() double-quotes identifiers the PostgreSQL way
    my $q_table  = $dbh->quote_identifier($table);
    my $q_column = $dbh->quote_identifier($column);
    # qq[] (unlike q[]) interpolates the quoted identifiers
    $dbh->do(qq[
        UPDATE $q_table
        SET    $q_column = NULL
        WHERE  $q_column = '{x:Null}'
           OR  $q_column = ''
    ]);
}
Be sure to quote $table and $column as identifiers, as in my sample.
Going forward, you'll have to set CONSTRAINTS on each and every column. You can use the information_schema.columns to do this as well. Something like
ALTER TABLE $q_table ADD CHECK ($q_column NOT IN ('{x:Null}', ''))
You could use a trigger to change the values to NULL, but I don't like data stores that silently change basic data for application purposes.
For new columns and tables, you'll have to remember to add that constraint. Same caveats about data_type apply.
However, it's probably a bad idea to say that no column can ever be an empty string. You might want to be a bit more selective.
Another thing to note: NULL is a funny thing; it's not true and it's not false. You might be better off deciding that an empty string is the thing to set empty values to.
I don't think this approach is maintainable. It's scribbling an application rule all over the data layer. What if you have some data that doesn't follow that rule? And it will have to be continuously maintained for any new data schema added. Perhaps instead you should put this at your ORM layer. Or write a few stored procedures to take care of this.
Using the information_schema.columns table, write a procedural-language routine which iterates through all applicable tables and columns, executing an UPDATE ... SET column = NULL ... WHERE column IN ('', '{x:Null}') for each eligible column.
As for inserting these values as NULL going forward, you would have to set triggers on your tables to intercept these values and replace them with NULL.
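Such a routine could look like this as an anonymous DO block (a sketch; it uses format()'s %I to quote identifiers and restricts itself to text-like columns of base tables):
DO $$
DECLARE
    r record;
BEGIN
    FOR r IN
        SELECT c.table_schema, c.table_name, c.column_name
        FROM information_schema.columns c
        JOIN information_schema.tables t
          ON t.table_schema = c.table_schema AND t.table_name = c.table_name
        WHERE t.table_type = 'BASE TABLE'
          AND c.data_type IN ('text', 'character varying', 'character')
          AND c.table_schema NOT IN ('pg_catalog', 'information_schema')
    LOOP
        -- %I quotes each identifier safely
        EXECUTE format(
            'UPDATE %I.%I SET %I = NULL WHERE %I IN ('''', ''{x:Null}'')',
            r.table_schema, r.table_name, r.column_name, r.column_name);
    END LOOP;
END
$$;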
I don't think there is any query that would do this thing for every table and every column. In principle, what you want to do is
UPDATE table SET column=NULL WHERE column='' OR column='{x:Null}';
You could try selecting data from the pg_attribute and pg_class catalogs to get the names of the tables and columns and then generating the queries automatically. Be sure to select only those columns that contain textual data.
What if somebody has entered a genuine string '{x:Null}'? You would then change it into NULL.
However, you made a real mistake by letting the situation get as bad as it currently is. You should always normalize data before putting it into the database.