Create table with for loop in PostgreSQL

I have a function test_func() that takes in 1 argument (let's say the argument name is X) and returns a table. Now, I have a list of inputs (from a subquery) that I want to pass into argument X and collect all the results of the calls in a table.
In Python, I would do something like
# create empty list
all_results = []
for argument in (1, 2, 3):
    result = test_func(argument)
    # collect the result
    all_results.append(result)
return all_results
How can I do the same thing in postgresql?
Thank you.
For the sake of example, my test_func(X) takes in 1 argument and spits out a table with 3 columns: col1 is X, col2 is X+1 and col3 is X+2. For example:
select * from test_func(1)
gives
|col1|col2|col3|
----------------
| 1 | 2 | 3 |
----------------
My list of arguments would be results of a subquery, for example:
select * from (values (1), (2)) x
I expect something like:
|col1|col2|col3|
----------------
| 1 | 2 | 3 |
----------------
| 2 | 3 | 4 |
----------------

demo:db<>fiddle
This gives you a result list of all results:
SELECT
    mt.my_data AS input,
    tf.*
FROM
    (SELECT * FROM my_table) mt,  -- input data from a subquery
    test_func(mt.my_data) tf      -- the function is called once per input row
In the fiddle, test_func() takes an integer and generates that many rows (input argument = generated row count); it also adds a text column. The generated records for all inputs are unioned into one result set.

You can join your function to the input values:
select f.*
from (
    values (1), (2)
) as x(id)
cross join lateral test_func(x.id) as f;
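For completeness, here is a minimal sketch of a test_func matching the example output above (the function body is an assumption; the question does not show the real definition):

```sql
-- hypothetical definition matching the example: col1 = X, col2 = X+1, col3 = X+2
CREATE FUNCTION test_func(x integer)
RETURNS TABLE(col1 integer, col2 integer, col3 integer)
LANGUAGE sql
AS $$
    SELECT x, x + 1, x + 2;
$$;
```

With that in place, both answers above return one row per input value, e.g. (1, 2, 3) and (2, 3, 4) for inputs 1 and 2.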

Related

Does String Value Exists in a List of Strings | Redshift Query

I have some interesting data that I'm trying to query, however I cannot get the syntax correct. I have a temporary table (temp_id), which I've filled with the id values I care about. In this example it is only two ids.
CREATE TEMPORARY TABLE temp_id (id bigint PRIMARY KEY);
INSERT INTO temp_id (id) VALUES ( 1 ), ( 2 );
I have another table in production (let's call it foo) which holds multiple of those ids in a single cell. The ids column looks like this (below), with the ids as a single string separated by "|":
ids
-----------
1|9|3|4|5
6|5|6|9|7
NULL
2|5|6|9|7
9|11|12|99
I want to evaluate each cell in foo.ids, and see if any of the ids in it match the ones in my temp_id table.
Expected output
ids |does_match
-----------------------
1|9|3|4|5 |true
6|5|6|9|7 |false
NULL |false
2|5|6|9|7 |true
9|11|12|99 |false
So far I've come up with this, but I can't seem to return anything. Instead of trying to create a new column does_match, I tried to filter within the WHERE clause. However, I cannot figure out how to compare all the id values in my temp table to the string blob full of ids in foo.
SELECT
    ids
FROM foo
WHERE ids = ANY(SELECT LISTAGG(id, ' | ') FROM temp_id)
Any suggestions would be helpful.
Cheers,
This would work, however I'm not sure about the performance:
SELECT
    foo.ids
FROM foo
JOIN temp_id
    ON '|' || foo.ids || '|' LIKE '%|' || temp_id.id::varchar || '|%'
You wrap the ids list in an extra pair of separators, so you can always search for |id|, including the first and the last number.
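If the true/false column from the expected output is needed (rather than just the matching rows), the same wrapping trick can be moved into an EXISTS; a sketch in plain PostgreSQL syntax, which may need adjusting for Redshift:

```sql
SELECT f.ids,
       EXISTS (
           SELECT 1
           FROM temp_id t
           WHERE '|' || f.ids || '|' LIKE '%|' || t.id::varchar || '|%'
       ) AS does_match
FROM foo f;
```

A NULL ids value concatenates to NULL, so the LIKE never matches and does_match comes out false, as in the expected output.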
The following SQL (I know it's a bit of a hack) returns exactly what you expect as output. It's tested with your sample data; I don't know how it would behave on your real data, so try it and let me know.
with seq as (                -- a sequence CTE to stand in for postgres' unnest
    select 1 as i union all  -- assuming you have at most 10 ids in the ids
    select 2 union all       -- field; feel free to extend this part
    select 3 union all
    select 4 union all
    select 5 union all
    select 6 union all
    select 7 union all
    select 8 union all
    select 9 union all
    select 10)
select distinct ids,
       case  -- since I can't take a max over a boolean column, I use 1s and
             -- 0s and convert the max back to a boolean
           when max(case
                        when t.id in (
                            select split_part(f.ids, '|', seq.i)
                            from seq
                            join foo f
                                on seq.i <= REGEXP_COUNT(f.ids, '\\|') + 1
                            where split_part(f.ids, '|', seq.i) != ''
                              and foo.ids = f.ids)
                        then 1
                        else 0
                    end) = 1
           then true
           else false
       end as does_match
from temp_id t, foo
group by 1
Please let me know if this works for you!

Why am I getting a false average when applied on a temporary table column

I'm trying to get the average number of words per message.body from the messages table.
An example of that would be
**message.body**
-------------------
-->"aaz aae aar"
-->"aaz"
-->"aaz aae"
Output must be: AVG( 3 + 1 + 2 ) = 2
For that I've been applying the following query
SELECT AVG(temp.words)
FROM (
    SELECT array_length(string_to_array(messages.body, ' '), 1) AS words
    FROM messages
) AS temp
message.body is just text.
Any help will be appreciated.
This gives the result you expect:
t=# with messages(body) as (values('aaz aae aar'),('aaz'),('aaz aae')) SELECT AVG(temp.words) FROM (SELECT (array_length(string_to_array(messages.body,' '),1)) AS words FROM messages) AS temp;
avg
--------------------
2.0000000000000000
(1 row)
t=# with messages(body) as (values('aaz aae aar'),('aaz'),('aaz aae')) SELECT *FROM (SELECT (array_length(string_to_array(messages.body,' '),1)) AS words,messages.body FROM messages) AS temp;
words | body
-------+-------------
3 | aaz aae aar
1 | aaz
2 | aaz aae
(3 rows)
I'm answering my own question:
AVG in Postgres does accept integers (it returns an exact numeric), but casting the input to FLOAT, and excluding empty bodies, gave me the expected result. Like this:
SELECT AVG(temporary_.words) AS average_amount
FROM (
    SELECT CAST(array_length(string_to_array(messages.body, ' '), 1) AS FLOAT) AS words
    FROM messages
    WHERE body != ''
) AS temporary_
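For reference, AVG over an integer expression already returns numeric in Postgres, so the cast mainly changes the result type from numeric to double precision; a quick check with hypothetical inline data:

```sql
SELECT AVG(x)                        -- numeric: 2.0000000000000000
FROM (VALUES (3), (1), (2)) v(x);

SELECT AVG(x::float)                 -- double precision: 2
FROM (VALUES (3), (1), (2)) v(x);
```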

DB2 CASE statement and CONCAT

I'm having trouble returning the correct information with my select statement.
table:
prefix | suffix | alternate
------ | ------ | --------
A | 12345 | 0
B | 67890 | 0
C | 0 | 555555
Here is my query
SELECT
    CASE WHEN prefix = 'C' THEN alternate
         ELSE CONCAT(prefix, suffix)
    END AS Result
FROM table
What I would like to see as a result:
Result
------
A12345
B67890
555555
What I actually see:
555555
If I take out the CONCAT, using this select:
SELECT
    CASE WHEN prefix = 'C' THEN alternate
         ELSE suffix
    END AS Result
FROM table
I get the number of rows I want but not the correct column values. I'm missing the prefix in the first two rows.
Result
12345
67890
555555
Thoughts on how I can do this without duplicating code with union?
SELECT CONCAT(prefix, suffix) AS result FROM table
UNION
SELECT alternate AS result FROM table
You can do something like this:
SELECT
    CASE WHEN prefix <> 'C'
         THEN prefix || suffix
         ELSE CAST(alternate AS CHAR(20))
    END AS Result
FROM table

Find all multipolygons from one table within another

So, I've got two tables - PLUTO (pieces of land), and NYZMA (rezoning boundaries). They look like:
pluto                      nyzma
id | geom                  name | geom
---+-----------------      -----+----------------
 1 | MULTIPOLYGON(x)       A    | MULTIPOLYGON(a)
 2 | MULTIPOLYGON(y)       B    | MULTIPOLYGON(b)
And I want it to spit out something like this, assuming that PLUTO record 1 is in multipolygons A and B, and PLUTO record 2 is in neither:
pluto_id | nyzma_id
-------------------
1 | [A, B]
2 |
How do I, for every PLUTO record's corresponding geometry, cycle through each NYZMA record, and print the names of any whose geometry matches?
Join the two tables using the spatial function ST_Contains. Then use GROUP BY and ARRAY_AGG in the main query:
WITH subquery AS (
    SELECT pluto.id, nyzma.name
    FROM pluto
    LEFT OUTER JOIN nyzma
        ON ST_Contains(nyzma.geom, pluto.geom)
)
SELECT id, array_agg(name) FROM subquery GROUP BY id;
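With the LEFT JOIN above, a pluto row that falls inside no boundary aggregates to {NULL}; if an empty list is preferred, as in the expected output, the NULLs can be stripped with array_remove (available since PostgreSQL 9.3), e.g.:

```sql
WITH subquery AS (
    SELECT pluto.id, nyzma.name
    FROM pluto
    LEFT OUTER JOIN nyzma
        ON ST_Contains(nyzma.geom, pluto.geom)
)
SELECT id, array_remove(array_agg(name), NULL) AS nyzma_id
FROM subquery
GROUP BY id;
```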

How to split a string in a smart way?

The function string_to_array splits strings without keeping quoted substrings together:
# select unnest(string_to_array('one, "two,three"', ','));
unnest
--------
one
"two
three"
(3 rows)
I would like to have a smarter function, like this:
# select unnest(smarter_string_to_array('one, "two,three"', ','));
unnest
--------
one
two,three
(2 rows)
Purpose: I know that the COPY command does this properly, but I need the feature internally.
I want to parse the text representation of the rows of an existing table. Example:
# select * from dataset limit 2;
id | name | state
----+-----------------+--------
1 | Smith, Reginald | Canada
2 | Jones, Susan |
(2 rows)
# select dataset::text from dataset limit 2;
dataset
------------------------------
(1,"Smith, Reginald",Canada)
(2,"Jones, Susan","")
(2 rows)
I want to do it dynamically, in a plpgsql function, for different tables. I cannot assume a constant number of columns, nor any particular format of the column values.
There is a nice method to transpose a whole table into a one-column table:
select (json_each_text(row_to_json(t))).value from dataset t;
If the column id is unique, then
select id, array_agg(value order by rn) as arr
from (
    select row_number() over () as rn, id, value
    from (
        select id, (json_each_text(row_to_json(t))).value
        from dataset t
    ) alias
) alias
group by id;
gives you exactly what you want. The additional row_number() step is necessary to keep the original order of the columns.