How can I run a PostgreSQL query over an array of strings? - postgresql

I'm writing a script which will take all tables from my schema and perform some actions on them. The tables to be processed have the same prefix and different suffixes. Now, what I want to do is declare an array at the beginning of the script; the array will contain the LIKE patterns for all the tables I need, for example:
base_tables varchar[2] := ARRAY['table_name_format_2%',
'another_format_3%'];
Using this array, I would like to go through all the tables in my schema and take only those that match the name pattern in the array.
I tried to do this as such:
FOR table_item IN
    SELECT table_name
    FROM information_schema.tables
    WHERE table_name LIKE IN base_tables
LOOP
    ---- Some code goes here -----
END LOOP;
The error I get is :
ERROR: syntax error at or near "IN"
What is the correct way to compare each table name, to the names in my array?
Thanks in advance.

demo:db<>fiddle
To get a match for an array element you have to use:
-- general case
WHERE element = ANY(ARRAY['elem1', 'elem2'])
-- your case
WHERE table_name = ANY(base_tables)
If you want to achieve a LIKE comparison instead of an exact match, you'll need another way:
SELECT table_name
FROM information_schema.tables t
JOIN (SELECT unnest(base_tables) as name) bt
ON t.table_name LIKE bt.name
This joins the tables against the unnested base_tables array (unnest expands an array into one row per element), using the LIKE operator as the join condition.
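Putting it together with the loop from the question, a minimal PL/pgSQL sketch might look like this (the RAISE NOTICE is just a placeholder for the real per-table work):

DO $$
DECLARE
    base_tables varchar[] := ARRAY['table_name_format_2%',
                                   'another_format_3%'];
    table_item  record;
BEGIN
    FOR table_item IN
        SELECT t.table_name
        FROM information_schema.tables t
        JOIN unnest(base_tables) AS bt(name)
          ON t.table_name LIKE bt.name
    LOOP
        -- some code goes here
        RAISE NOTICE 'matched table: %', table_item.table_name;
    END LOOP;
END $$;

PostgreSQL also accepts WHERE t.table_name LIKE ANY (base_tables), which avoids the join entirely.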

Related

ILIKE query with indexing for jsonb array data in postgres

I have a table with a city jsonb column that contains a JSON array like the one below:
[{"name":"manchester",..},{"name":"liverpool",....}]
Now I want to query the table on the "name" key with an ILIKE query. I have tried the below, but it is not working for me:
select * from data where city->>'name' ILIKE '%man%'
while I know I can search for an exact match with the containment query below:
select * from data where city @> '[{"name":"manchester"}]'
I also know we can use jsonb functions to flatten the data and search, but then the search will not use an index. Is there any way to search the data with ILIKE in a way that also uses an index?
Index support will be difficult; for that, a schema that adheres to the first normal form would be beneficial.
Other than that, you can use the JSONPATH language from v12 on:
WITH t(c) AS (
    SELECT '[{"name":"manchester"},{"name":"liverpool"}]'::jsonb
)
SELECT jsonb_path_exists(
           c,
           '$.**.name ? (@ like_regex "man" flag "i")'::jsonpath
       )
FROM t;
jsonb_path_exists
═══════════════════
t
(1 row)
You should really store your data differently.
You can do the ilike query "naturally" but without index support, like this:
select *
from data
where exists (select 1
              from jsonb_array_elements(city) f(x)
              where x->>'name' ILIKE '%man%');
You can get index support like this (the gin_trgm_ops operator class comes from the pg_trgm extension):
create extension if not exists pg_trgm;
create index on data using gin ((city::text) gin_trgm_ops);
select * from data where city::text ilike '%man%';
But it will find matches within the text of the keys as well as the values, and within irrelevant keys/values, if any are present. You could get around this by creating a function that returns just the values, concatenated together into one string, and then use a functional index. But the index will get less effective as the length of the string grows, as there will be more false positives that need to be tracked down and weeded out.
create or replace function concat_val(jsonb, text) returns text immutable language sql as $$
select string_agg(x->>$2,' ') from jsonb_array_elements($1) f(x)
$$ parallel safe;
create index on data using gin (concat_val(city,'name') gin_trgm_ops);
select * from data where concat_val(city,'name') ilike '%man%';
You should really store your data differently.

PostgreSQL, allow filtering by non-existing fields

I'm using PostgreSQL with a Go driver. Sometimes I need to query fields that may not exist, just to check whether something exists in the DB. Before querying, I can't tell whether a given field exists. Example:
where size=10 or length=10
By default I get the error column "length" does not exist; however, the size column could exist and I could get some results.
Is it possible to handle such cases to return what is possible?
EDIT:
Yes, I could get all the existing columns first. But the initial queries can be rather complex and not created by me directly, I can only modify them.
That means the query can be simple like the previous example and can be much more complex like this:
WHERE size=10 OR (length=10 AND n='example') OR (c BETWEEN 1 and 5 AND p='Mars')
If the missing columns are length and c, does that mean I have to parse the SQL, split it by OR (or other operators), check every part of the query, then remove any part with missing columns, and in the end generate a new SQL query?
Any easier way?
I would try to check within the information schema first:
select column_name from INFORMATION_SCHEMA.COLUMNS where table_name = 'table_name';
And then build the query based on the result.
Why don't you get a list of the columns that exist in the table first? Like this:
select column_name
from information_schema.columns
where table_name = 'table_name' and (column_name = 'size' or column_name = 'length');
The result will be the columns that exist.
There is no way to do what you want, except for constructing an SQL string from the list of available columns, which can be obtained by querying information_schema.columns.
SQL statements are parsed before they are executed, and there is no conditional compilation or short-circuiting, so you get an error if a non-existent column is referenced.
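A minimal sketch of that approach, assuming a table named data and the two-predicate query from the example (in a real application you would build the string in Go; the DO block only keeps the example self-contained):

DO $$
DECLARE
    sql text := 'SELECT * FROM data WHERE size = 10';
BEGIN
    -- append the predicate only if the column actually exists
    IF EXISTS (SELECT 1
               FROM information_schema.columns
               WHERE table_name = 'data'
                 AND column_name = 'length') THEN
        sql := sql || ' OR length = 10';
    END IF;
    EXECUTE sql;
END $$;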

What PostgreSQL type is good for storing an array of strings and offering fast lookup afterwards

I am using PostgreSQL 11.9
I have a table containing a jsonb column with an arbitrary number of key-value pairs. There is a requirement, when we perform a search, to include all values from this column as well. Searching in jsonb is quite slow, so my plan is to create a trigger which will extract all the values from the jsonb column:
select t.* from app.t1, jsonb_each(column_jsonb) as t(k,v)
with something like this, and then insert the values into a newly created column in the same table so I can use that column for faster searches.
My question is what type would be most suitable for storing the keys and then searching within them. Currently the search looks like this:
CASE
WHEN something IS NOT NULL
THEN EXISTS(SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
END
where the search_term is what the user entered from the front end.
This is not going to be pretty, and normalizing the data model would be better.
You can define a function
CREATE FUNCTION jsonb_values_to_string(
j jsonb,
separator text DEFAULT ','
) RETURNS text LANGUAGE sql IMMUTABLE STRICT
AS 'SELECT string_agg(value->>0, $2) FROM jsonb_each($1)';
Then you can query like
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE 'search_term'
and you can define a trigram index on the left hand side expression to speed it up.
Make sure that you choose a separator that does not occur in the data or the pattern...
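A sketch of that trigram index and a matching query, assuming the app.t1 table from the question (gin_trgm_ops requires the pg_trgm extension):

CREATE EXTENSION IF NOT EXISTS pg_trgm;

CREATE INDEX ON app.t1
    USING gin (jsonb_values_to_string(column_jsonb, '|') gin_trgm_ops);

-- queries of this shape can now use the index
SELECT *
FROM app.t1
WHERE jsonb_values_to_string(column_jsonb, '|') ILIKE '%search_term%';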

Select * from Table A (variable) issues

Recently I've been reading some DB2 code, but I don't know what the function of these brackets in the FROM part is.
SELECT A INTO TypeCd
FROM TX
WHERE TOKEN_ID IN (SELECT ELEMENTS
                   FROM (AB_OWN.ELEMENTS (pv_token_id)) AS T)
LIMIT 1;
In the above code, pv_token_id is a variable.
I mean, I know the usual form of a select query:
Select * from Table A
But I don't know what this form means:
Select * from Table A(variable)
what do the brackets and variable do in this query?
This looks like a use of a table function, but with the TABLE keyword omitted.
It should be like below:
SELECT ELEMENTS
FROM TABLE (AB_OWN.ELEMENTS (pv_token_id)) AS T
Look at the table-function-reference description of the table-reference, which you can use in the from-clause of subselect.
pv_token_id here is a variable passed to the function as a parameter.
Consider a table function used in a SELECT as you would an ordinary table. It's a convenient way to get a result set for further processing (joining with other tables/table functions, filtering, etc.).
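For illustration, a sketch of how such a table function might be declared and consumed in DB2; the parameter, column, and source table names here are hypothetical, not taken from the question:

-- hypothetical definition; the real AB_OWN.ELEMENTS will differ
CREATE FUNCTION AB_OWN.ELEMENTS (p_token_id BIGINT)
RETURNS TABLE (ELEMENTS BIGINT)
LANGUAGE SQL
READS SQL DATA
RETURN SELECT e.element_id
       FROM AB_OWN.TOKEN_ELEMENTS e
       WHERE e.token_id = p_token_id;

-- consumed with the TABLE keyword in FROM
SELECT t.ELEMENTS
FROM TABLE (AB_OWN.ELEMENTS (1234)) AS t;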

Display colon-separated values in postgres

I am using PostgreSQL 8.1 and I don't have the new functions in that version. Please help me with what to do in that case.
My first table is as follows
unit_office table:
Mandal_ids Name
82:05: test sample
08:20:16: test sample
Mandal Master table:
mandal_id mandal_name
08 Etcherla
16 Hiramandalam
20 Gara
Now when I say select * from unit_office it should display:
Mandal               Name of office
Hiramandalam, Gara   test sample
i.e. in place of the ids I want the corresponding names (which are in the master table), separated by commas.
I have a column in postgres which holds colon-separated ids. The following is one record from my table:
mandalid
18:82:14:11:08:05:20:16:83:37:23:36:15:06:38:33:26:30:22:04:03:
When I say select * from the table, the mandalid column should display the names of the mandals in place of the ids, separated by commas. I have the corresponding name for each id in a master table, and I want to display those names instead of the ids in the select query on the first table: my first table's name is unit_office, and when I say select * from unit_office, I want the names in place of the ids.
I suggest you redesign your tables, but if you cannot, then you may need to define a function which splits the mandal_ids string into individual ids and maps them to names; a sketch follows below. I suggest you read the PostgreSQL documentation on creating functions. The "PL/pgSQL" language may be a good choice. You may use the functions string_to_array and array_to_string.
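A minimal sketch of such a function, assuming the master table is named mandal_master(mandal_id, mandal_name) as shown in the question, and assuming mandal_id is stored as text such as '08' (PostgreSQL 8.1 has string_to_array, but neither unnest nor string_agg):

CREATE OR REPLACE FUNCTION mandal_names(ids text) RETURNS text AS $$
DECLARE
    parts  text[] := string_to_array(ids, ':');
    result text   := '';
    mname  text;
    i      integer;
BEGIN
    FOR i IN array_lower(parts, 1) .. array_upper(parts, 1) LOOP
        IF parts[i] <> '' THEN  -- the trailing colon leaves an empty element
            SELECT mandal_name INTO mname
            FROM mandal_master  -- assumed name of the master table
            WHERE mandal_id = parts[i];
            IF mname IS NOT NULL THEN
                IF result <> '' THEN
                    result := result || ', ';
                END IF;
                result := result || mname;
            END IF;
        END IF;
    END LOOP;
    RETURN result;
END;
$$ LANGUAGE plpgsql STABLE;

-- usage:
-- SELECT mandal_names(Mandal_ids) AS mandal, Name AS "Name of office" FROM unit_office;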
But if you can, I suggest you define your tables in the following way:
mandals:
id name
16 Hiramandalam
20 Gara
unit_offices:
id name
1 test sample
mandals_in_offices:
office_id mandal_id
1 16
1 20
The output from the following query should be what you need:
SELECT string_agg(m.name,',') AS mandal_names,
max(o.name) AS office_name
FROM mandals_in_offices i
INNER JOIN unit_offices o ON i.office_id = o.id
INNER JOIN mandals m ON i.mandal_id = m.id
GROUP BY o.id;
The function string_agg appeared in PostgreSQL 9.0, so if you are using an older version, you may need to write a similar function yourself. I believe this will not be too hard.
Here's what we did in LedgerSMB:
- created a function to do the concatenation
- created a custom aggregate to do the aggregation
This allows you to do this easily.
CREATE OR REPLACE FUNCTION concat_colon(TEXT, TEXT) returns TEXT as
$$
select CASE WHEN $1 IS NULL THEN $2 ELSE $1 || ':' || $2 END;
$$ language sql;
CREATE AGGREGATE concat_colon (
BASETYPE = text,
STYPE = text,
SFUNC = concat_colon
);
Then you can:
select concat_colon(mycol::text) from mytable;
Works just fine.