Postgres: Can I create an index to use in the SELECT clause?

I have defined a function that determines the timezone from table tz_world for a set of lon, lat values:
create function get_timezone(numeric, numeric)
returns character varying(30) as $$
select tzid from tz_world where ST_Contains(geom, ST_MakePoint($1, $2));
$$ language SQL immutable;
Now I would like to use this function in the SELECT clause of a query on a different table:
select get_timezone(lon, lat) from event where...;
The function is rather slow, so I tried using an index to speed things up:
create index event_timezone_idx on event (get_timezone(event.lon, event.lat));
While this speeds up queries where the function is used in the WHERE clause, it has no effect on the variant above where get_timezone(lon, lat) is used in the SELECT clause.
Is it possible to rephrase the query and/or index to speed up the timezone determination?
Update
Thank you for the answers!! I decided to include an extra column for the timezone in the end and populate it when creating/updating the events.
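For reference, that approach can look like this (a minimal sketch; the column and trigger names are my own, and it reuses the get_timezone() function defined above):
alter table event add column timezone character varying(30);

create function event_set_timezone() returns trigger as $$
begin
    -- reuse the lookup function from the question
    new.timezone := get_timezone(new.lon, new.lat);
    return new;
end;
$$ language plpgsql;

create trigger event_timezone_trg
    before insert or update of lon, lat on event
    for each row execute procedure event_set_timezone();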

I would recommend creating a local temporary table from the part of the SELECT you want to index, and then creating an index on that temporary table:
CREATE LOCAL TEMPORARY TABLE temp_table AS (
  SELECT
  ...
);
CREATE INDEX temp_table_idx
  ON temp_table
  USING btree
  (col1, col2, ...);
Otherwise, decide what your WHERE condition should be: indexes are only used when the query filters on them, and the indexed expression must exactly match the one you filter on.
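For example, the expression index from the question can be used by a query that filters on the exact same expression (the timezone literal here is just an illustration):
select * from event where get_timezone(lon, lat) = 'Europe/Berlin';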

Related

ILIKE query with indexing for jsonb array data in postgres

I have a table with a city jsonb column that holds a JSON array like this:
[{"name":"manchester",..},{"name":"liverpool",....}]
Now I want to query the table on the "name" key with an ILIKE query.
I tried the following, but it does not work for me:
select * from data where city->>'name' ILIKE '%man%'
I know I can search for an exact match with the query below:
select * from data where city @> '[{"name": "manchester"}]'
I also know I can use jsonb functions to flatten the data and search it, but then the search won't use an index.
Is there any way to search the data with ILIKE so that it also uses an index?
Index support will be difficult; for that, a schema that adheres to the first normal form would be beneficial.
Other than that, you can use the JSONPATH language from v12 on:
WITH t(c) AS (
SELECT '[{"name":"manchester"},{"name":"liverpool"}]'::jsonb
)
SELECT jsonb_path_exists(
c,
'$.**.name ? (@ like_regex "man" flag "i")'::jsonpath
)
FROM t;
jsonb_path_exists
═══════════════════
t
(1 row)
You should really store your data differently.
You can do the ilike query "naturally" but without index support, like this:
select * from data where exists (select 1 from jsonb_array_elements(city) f(x) where x->>'name' ILIKE '%man%');
You can get index support like this:
create index on data using gin ((city::text) gin_trgm_ops);
select * from data where city::text ilike '%man%';
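Note that the gin_trgm_ops operator class comes from the pg_trgm extension, which has to be installed once per database:
CREATE EXTENSION IF NOT EXISTS pg_trgm;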
But it will find matches within the text of the keys as well as the values, and in irrelevant keys/values, if any are present. You could get around this by creating a function that returns just the values, all banged together into one string, and then use a functional index. But the index will get less effective as the strings get longer, as there will be more false positives that need to be tracked down and weeded out.
create or replace function concat_val(jsonb, text) returns text immutable language sql as $$
select string_agg(x->>$2,' ') from jsonb_array_elements($1) f(x)
$$ parallel safe;
create index on data using gin (concat_val(city,'name') gin_trgm_ops);
select * from data where concat_val(city,'name') ilike '%man%';
You should really store your data differently.

Syntax for parameterized PostgreSQL function using dynamic SQL

This code:
ALTER TABLE myschema.mytable add column geom geometry (point,4326);
CREATE INDEX mytable_idx on myschema.mytable using GIST(geom);
UPDATE myschema.mytable set geom = st_setsrid(st_point(mytable.long, mytable.lat), 4326);
This works fine when updating a single table. How would you convert it into a dynamic SQL function, with schema and table as parameters?
Since the function input must be an existing table, the simplest safe way would be to use a regclass input parameter, as demonstrated here:
Table name as a PostgreSQL function parameter
However, you also need the bare table name for the concatenated index name, so I'll stick with taking text for schema and table separately:
CREATE OR REPLACE FUNCTION create_geom(_sch text, _tab text)
  RETURNS void
  LANGUAGE plpgsql AS
$func$
BEGIN
   EXECUTE format(
      'ALTER TABLE %1$I.%2$I ADD COLUMN geom geometry(POINT,4326);
       UPDATE %1$I.%2$I SET geom = st_setsrid(st_point(long, lat), 4326);
       CREATE INDEX %3$I ON %1$I.%2$I USING gist(geom);'
    , _sch, _tab
    , _tab || '_geom_gist_idx');
END
$func$;
Call:
SELECT create_geom('myschema', 'mytable');
Use a single EXECUTE, no need for multiple calls.
Just omit table-qualification for the columns in the UPDATE. As long as no additional tables are joined, the column names are unambiguous. Otherwise, use a table alias, which can be constant. Like:
UPDATE %1$I.%2$I AS x SET geom = st_setsrid(st_point(x.long, x.lat), 4326);
But it's smarter to populate the column before you build the index. That's a lot faster and produces a balanced index without bloat. So I switched the commands.
Note how I concatenate the index name first (_tab || '_geom_gist_idx'), and then double-quote as required with %3$I. That's the safe way. Something like %I_idx fails with non-standard names.
That said, it's typically a mistake to add columns with redundant information to a table. (What keeps you from changing one or the other? Why bloat the table?) Either just use an expression index instead of all of the above:
CREATE INDEX ON myschema.mytable USING gist (st_setsrid(st_point(long, lat), 4326));
Or drop the now redundant long & lat from the table. Those can be extracted from the new geom cheaply on the fly.
Or, if you need all columns (for special performance reasons?), consider a generated column instead. See:
Computed / calculated / virtual / derived columns in PostgreSQL
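For example, a sketch of that last option (requires PostgreSQL 12 or later; assumes long and lat are stored as numeric types):
ALTER TABLE myschema.mytable
  ADD COLUMN geom geometry(POINT,4326)
  GENERATED ALWAYS AS (st_setsrid(st_point(long, lat), 4326)) STORED;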
An alternative: keep your queries as SQL templates and use the format() function for identifiers:
CREATE OR REPLACE FUNCTION public.create_geom(sch text, tab text)
  RETURNS void language plpgsql AS $body$
DECLARE
  DYNSQLA constant text := 'ALTER TABLE %I.%I add column geom geometry (point,4326)';
  DYNSQLB constant text := 'CREATE INDEX %I_idx on %I.%I using GIST(geom);';
  DYNSQLC constant text := 'UPDATE %I.%I set geom = st_setsrid(st_point(%I.long, %I.lat), 4326)';
BEGIN
  execute format(DYNSQLA, sch, tab);
  execute format(DYNSQLB, tab, sch, tab);
  execute format(DYNSQLC, sch, tab, tab, tab);
END;
$body$;
SELECT create_geom('myschema','mytable');

What PostgreSQL type is good for storing an array of strings and offering fast lookups afterwards

I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-value pairs. There is a requirement that when we perform a search, all values from this column are included as well. Searching in jsonb is quite slow, so my plan is to create a trigger which will extract all the values from the jsonb column with something like this:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
The trigger would then insert the values into a newly created column in the same table, so I can use that column for faster searches.
My question is: what type would be most suitable for storing the extracted values and then searching within them? Currently the search looks like this:
CASE
WHEN something IS NOT NULL
THEN EXISTS(SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where the search_term is what the user entered from the front end.
This is not going to be pretty, and normalizing the data model would be better.
You can define a function
CREATE FUNCTION jsonb_values_to_string(
   j jsonb,
   separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left hand side expression to speed it up.
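That index could look like this (a sketch; it assumes the table and column names from the question and the pg_trgm extension):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX ON app.t1 USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);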
Make sure that you choose a separator that does not occur in the data or the pattern...

Postgresql: query on jsonb column - index doesn't make it quicker

There is a table in PostgreSQL 9.6 where a query on a jsonb column is slow compared to a relational table, and adding a GIN index on it doesn't make it quicker.
Table:
-- create table
create table dummy_jsonb (
  id serial8,
  data jsonb,
  primary key (id)
);
-- create index
CREATE INDEX dummy_jsonb_data_index ON dummy_jsonb USING gin (data);
-- CREATE INDEX dummy_jsonb_data_index ON dummy_jsonb USING gin (data jsonb_path_ops);
Generate data:
-- generate data
CREATE OR REPLACE FUNCTION dummy_jsonb_gen_data(n integer) RETURNS integer AS $$
DECLARE
  i integer := 1;
  name varchar;
  create_at varchar;
  json_str varchar;
BEGIN
  WHILE i <= n LOOP
    name := 'dummy_' || i::text;
    create_at := EXTRACT(EPOCH FROM date_trunc('milliseconds', now())) * 1000;
    json_str := '{
      "name": "' || name || '",
      "size": ' || i || ',
      "create_at": ' || create_at || '
    }';
    insert into dummy_jsonb(data) values (json_str::jsonb);
    i := i + 1;
  END LOOP;
  return n;
END;
$$ LANGUAGE plpgsql;
-- call function,
select dummy_jsonb_gen_data(1000000);
-- drop function,
DROP FUNCTION IF EXISTS dummy_jsonb_gen_data(integer);
Query:
select * from dummy_jsonb
where data->>'name' like 'dummy_%' and data->>'size' >= '500000'
order by data->>'size' desc
offset 50000 limit 10;
Test result:
The query takes 1.8 seconds on a slow vm.
Adding or removing the index doesn't make a difference.
Changing to a GIN index with jsonb_path_ops also doesn't make a difference.
Questions:
Is it possible to make the query quicker, either by improving the index or the SQL?
If not, does that mean a relational table is more appropriate in this case?
Also, in my test MongoDB performs better; does that mean MongoDB is more appropriate for this kind of storage and query?
Quote from the manual:
The default GIN operator class for jsonb supports queries with top-level key-exists operators ?, ?& and ?| operators and path/value-exists operator @> [...] The non-default GIN operator class jsonb_path_ops supports indexing the @> operator only.
Your query uses LIKE and a string comparison with >= (which is probably not correct to begin with); neither of those is supported by a GIN index.
But even an index on (data ->> 'name') wouldn't be used for the condition data->>'name' like 'dummy_%', as that is true for all rows because every name starts with dummy_.
You can create a regular btree index on the name:
CREATE INDEX ON dummy_jsonb ( (data ->> 'name') varchar_pattern_ops);
Which will be used if the condition is restrictive enough, e.g.:
where data->>'name' like 'dummy_9549%'
If you need to query for the size, you can create an index on ((data ->> 'size')::int) and then use something like this:
where (data->>'size')::int >= 500000
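For example, a sketch of that expression index:
CREATE INDEX ON dummy_jsonb (((data ->> 'size')::int));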
However, your use of LIMIT and OFFSET will always force the database to read all matching rows, sort them, and only then limit the result. This is never going to be very fast. You might want to read this article for more information on why LIMIT/OFFSET is not very efficient.
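A common alternative is keyset pagination: remember the last value from the previous page and filter on it instead of skipping rows with OFFSET. A sketch (the literal 510000 stands in for a hypothetical "last size seen on the previous page"):
select * from dummy_jsonb
where (data->>'size')::int >= 500000
  and (data->>'size')::int < 510000  -- hypothetical: last "size" from the previous page
order by (data->>'size')::int desc
limit 10;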
JSON is a nice addition to the relational world, but only if you use it appropriately. If you don't need dynamic attributes for a row, then use standard columns and data types. Even though JSON support in Postgres is extremely good, this doesn't mean one should use it for everything just because it's the current hype. Postgres is still a relational database and should be used as such.
Unrelated, but: your function to generate the test data can be simplified to a single SQL statement. You might not have been aware of the generate_series() function for things like that:
insert into dummy_jsonb(data)
select jsonb_build_object('name', 'dummy_' || i,
                          'size', i::text,
                          'create_at', (EXTRACT(EPOCH FROM date_trunc('milliseconds', clock_timestamp())) * 1000)::text)
from generate_series(1, 1000000) as t(i);
While a btree index (the standard PostgreSQL index, based on balanced trees) is able to optimize ordering-based conditions like >= '500000', a GIN index uses an inverted index structure and is meant to quickly find data containing specific elements (it is often used, e.g., in text search to find rows containing given words), so (AFAIK) it can't be used for the query you provide.
The PostgreSQL docs on jsonb indexing indicate which WHERE conditions an index can support. As pointed out there, you can create a btree index on specific elements of a jsonb column; indexes on the specific elements referenced in the WHERE clause should work for the query you show.
Also, as commented above, consider whether you actually need JSON for your use case.

Dynamic table name in PostgreSQL 9.3

I am using PostgreSQL. I want to select data from a table whose name contains the current year, such as abc2013. I have tried:
select * from concat('abc',date_part('year',current_date))
select * from concat('abc', extract(year from current_date))
So how can I fetch data from such a table dynamically?
Please don't do this - look hard at alternatives first, starting with partitioning and constraint exclusion.
If you must use dynamic table names, do it at application level during query generation.
If all else fails you can use a PL/PgSQL procedure like:
CREATE OR REPLACE FUNCTION pleasedont(year int) RETURNS SETOF basetable AS $$
BEGIN
  RETURN QUERY EXECUTE format('SELECT col1, col2, col3 FROM %I', 'basetable_' || year);
END;
$$ LANGUAGE plpgsql;
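You would then call it like this (a usage sketch, assuming the year-suffixed table exists):
SELECT * FROM pleasedont(2013);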
This will only work if you have a base table that has the same structure as the sub-tables. It's also really painful to work with when you start adding qualifiers (where clause constraints, etc), and it prevents any kind of plan caching or effective prepared statement use.