Appending query with if-else condition in PostgreSQL

I have a function that runs a SELECT query with three variants of the HAVING clause:
having sum() > 0
having sum() <= 0
no HAVING clause at all
Here is my function:
DROP function getf(arg int);
create or replace function getf(arg int)
returns table (
option_id bigint,
importQuantity bigint,
sold bigint,
remain bigint
)
as $$
begin
if arg = 1 then
return query select b.option_id,
    SUM(b.import_quantity)::bigint as importQuantity,
    SUM(b.sold_quantity)::bigint as sold,
    SUM(b.remaining_quantity)::bigint as remain
from batch b
where b.product_id = 220
    and b.option_id in (select o.id from "option" o where o.barcode like '%%' or o.barcode is null)
group by b.option_id
having sum(b.remaining_quantity) > 0;
elsif arg = 2 then
return query select b.option_id,
    SUM(b.import_quantity)::bigint as importQuantity,
    SUM(b.sold_quantity)::bigint as sold,
    SUM(b.remaining_quantity)::bigint as remain
from batch b
where b.product_id = 220
    and b.option_id in (select o.id from "option" o where o.barcode like '%%' or o.barcode is null)
group by b.option_id
having sum(b.remaining_quantity) <= 0;
elsif arg = 3 then
return query select b.option_id,
    SUM(b.import_quantity)::bigint as importQuantity,
    SUM(b.sold_quantity)::bigint as sold,
    SUM(b.remaining_quantity)::bigint as remain
from batch b
where b.product_id = 220
    and b.option_id in (select o.id from "option" o where o.barcode like '%%' or o.barcode is null)
group by b.option_id;
end if;
end; $$ language plpgsql;
And I call my function:
select getf(3);
Question
The function works fine, but the SELECT queries differ only in the HAVING clause.
How can I use a dynamic query to append the HAVING clause based on an if-else condition?

Since you need to change the structure of the query, not just replace a parameter value within it, you need dynamic SQL. But first let's put the query on a diet, that is, remove the unnecessary parts.
b.option_id in (select o.id from "option" o where o.barcode like '%%' or o.barcode is null)
This is unnecessary: the WHERE clause contains a tautology (a statement that is always true). Why? The predicate o.barcode like '%%' checks whether the barcode contains 0 or more characters. Only NULL makes this false, but in that case the other predicate, o.barcode is null, evaluates true, so the overall condition is always true. As a result the sub-query always returns ALL ids in the option table, and b.option_id is always in the list. (That of course assumes batch.option_id is properly defined as not null and a FK to option.)
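A quick check of that claim, using literals in place of o.barcode:
select 'abc' like '%%';   -- true for any non-null value
select null like '%%';    -- null, but then "o.barcode is null" is true
So every row of "option" passes the filter and the IN test never eliminates anything.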
Now let's consider that HAVING clause. The first two cases are trivial replacements: just replace the rvalue with '> 0' or '<= 0'. The third case presents a problem, as you need to remove the entire clause rather than change the rvalue. That's not all that difficult, but it too can be turned into a simple rvalue change: the HAVING predicate merely needs to evaluate true, and we accomplish the same thing by replacing the rvalue with 'is not null'. Finally, we can create an array of the rvalue replacements and avoid any if-then logic on the parameter (arg), other than validating that it contains a valid value. So: see demo
create or replace function getf(arg int)
returns table (option_id bigint
,importQuantity bigint
,sold bigint
,remain bigint
)
language plpgsql
as $$
declare
k_having_arg constant text[] = array['> 0','<= 0', 'is not null']; -- Replacement values for RVALUE below
k_base_query constant text =
'select b.option_id'
', SUM(b.import_quantity)::bigint '
', SUM(b.sold_quantity)::bigint '
', SUM(b.remaining_quantity)::bigint '
' from batch b'
' where b.product_id = 220'
' group by b.option_id '
' having sum(b.remaining_quantity) %s; '; -- expression's RVALUE to be replaced
l_exec_query text;
begin
if arg not between 1 and 3 then
raise exception 'Invalid Arg value (%) out of range, Must be 1, 2, or 3.', arg;
end if;
l_exec_query = format (k_base_query,k_having_arg[arg]);
raise notice E'Running query\n%',l_exec_query;
return query execute l_exec_query;
end;
$$;
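For example, assuming the batch table from the question is in place, each supported argument value runs the same base query with a different rvalue:
select * from getf(1);  -- having sum(b.remaining_quantity) > 0
select * from getf(2);  -- having sum(b.remaining_quantity) <= 0
select * from getf(3);  -- having sum(b.remaining_quantity) is not null (always true)
select * from getf(4);  -- raises: Invalid Arg value (4) out of range, Must be 1, 2, or 3.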

Try using a CTE (WITH clause). You can put your query (up to the GROUP BY) in the CTE, then use your if conditions to select from the CTE and append the filter, as in the sketch below.
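A minimal sketch of that idea, untested and keeping the table and column names from the question (getf_cte is a name made up here); the aggregation lives in the CTE and the former HAVING conditions become ordinary WHERE filters on the aggregated column:
create or replace function getf_cte(arg int)
returns table (option_id bigint, importQuantity bigint, sold bigint, remain bigint)
as $$
begin
return query
with totals as (
    select b.option_id,
        sum(b.import_quantity)::bigint as importQuantity,
        sum(b.sold_quantity)::bigint as sold,
        sum(b.remaining_quantity)::bigint as remain
    from batch b
    where b.product_id = 220
    group by b.option_id
)
select t.option_id, t.importQuantity, t.sold, t.remain
from totals t
where (arg = 1 and t.remain > 0)
   or (arg = 2 and t.remain <= 0)
   or  arg = 3;
end; $$ language plpgsql;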
For reference you can check
How to declare a variable in a PostgreSQL query

Related

In Postgres database How to get column values in one single row comma separated value [duplicate]

I am looking for a way to concatenate the strings of a field within a group by query. So for example, I have a table:
ID  COMPANY_ID  EMPLOYEE
1   1           Anna
2   1           Bill
3   2           Carol
4   2           Dave
and I wanted to group by company_id to get something like:
COMPANY_ID  EMPLOYEE
1           Anna, Bill
2           Carol, Dave
There is a built-in function in MySQL to do this: group_concat.
PostgreSQL 9.0 or later:
Modern Postgres (since 2010) has the string_agg(expression, delimiter) function which will do exactly what the asker was looking for:
SELECT company_id, string_agg(employee, ', ')
FROM mytable
GROUP BY company_id;
Postgres 9 also added the ability to specify an ORDER BY clause in any aggregate expression; otherwise you have to order all your results or deal with an undefined order. So you can now write:
SELECT company_id, string_agg(employee, ', ' ORDER BY employee)
FROM mytable
GROUP BY company_id;
PostgreSQL 8.4.x:
PostgreSQL 8.4 (in 2009) introduced the aggregate function array_agg(expression) which collects the values in an array. Then array_to_string() can be used to give the desired result:
SELECT company_id, array_to_string(array_agg(employee), ', ')
FROM mytable
GROUP BY company_id;
PostgreSQL 8.3.x and older:
When this question was originally posed, there was no built-in aggregate function to concatenate strings. The simplest custom implementation (suggested by Vajda Gabo in this mailing list post, among many others) is to use the built-in textcat function (which lies behind the || operator):
CREATE AGGREGATE textcat_all(
basetype = text,
sfunc = textcat,
stype = text,
initcond = ''
);
Here is the CREATE AGGREGATE documentation.
This simply glues all the strings together, with no separator. In order to get a ", " inserted in between them without having it at the end, you might want to make your own concatenation function and substitute it for the "textcat" above. Here is one I put together and tested on 8.3.12:
CREATE FUNCTION commacat(acc text, instr text) RETURNS text AS $$
BEGIN
IF acc IS NULL OR acc = '' THEN
RETURN instr;
ELSE
RETURN acc || ', ' || instr;
END IF;
END;
$$ LANGUAGE plpgsql;
This version will output a comma even if the value in the row is null or empty, so you get output like this:
a, b, c, , e, , g
If you would prefer to remove extra commas to output this:
a, b, c, e, g
Then add an ELSIF check to the function like this:
CREATE FUNCTION commacat_ignore_nulls(acc text, instr text) RETURNS text AS $$
BEGIN
IF acc IS NULL OR acc = '' THEN
RETURN instr;
ELSIF instr IS NULL OR instr = '' THEN
RETURN acc;
ELSE
RETURN acc || ', ' || instr;
END IF;
END;
$$ LANGUAGE plpgsql;
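To wire either function in, substitute it as the sfunc of an aggregate, for example (commacat_all is a name invented here, following the textcat_all pattern above):
CREATE AGGREGATE commacat_all(
basetype = text,
sfunc = commacat_ignore_nulls,
stype = text,
initcond = ''
);
SELECT company_id, commacat_all(employee)
FROM mytable
GROUP BY company_id;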
How about using Postgres built-in array functions? At least on 8.4 this works out of the box:
SELECT company_id, array_to_string(array_agg(employee), ',')
FROM mytable
GROUP BY company_id;
As of PostgreSQL 9.0 you can use the aggregate function called string_agg. Your new SQL should look something like this:
SELECT company_id, string_agg(employee, ', ')
FROM mytable
GROUP BY company_id;
I claim no credit for the answer because I found it after some searching:
What I didn't know is that PostgreSQL allows you to define your own aggregate functions with CREATE AGGREGATE.
This post on the PostgreSQL list shows how trivial it is to create a function to do what's required:
CREATE AGGREGATE textcat_all(
basetype = text,
sfunc = textcat,
stype = text,
initcond = ''
);
SELECT company_id, textcat_all(employee || ', ')
FROM mytable
GROUP BY company_id;
As already mentioned, creating your own aggregate function is the right thing to do. Here is my concatenation aggregate function (you can find details in French):
CREATE OR REPLACE FUNCTION concat2(text, text) RETURNS text AS $$
SELECT CASE WHEN $1 IS NULL OR $1 = '' THEN $2
            WHEN $2 IS NULL OR $2 = '' THEN $1
            ELSE $1 || ' / ' || $2
       END;
$$ LANGUAGE SQL;
CREATE AGGREGATE concatenate (
sfunc = concat2,
basetype = text,
stype = text,
initcond = ''
);
And then use it as:
SELECT company_id, concatenate(employee) AS employees FROM ...
This latest announcement list snippet might be of interest if you'll be upgrading to 8.4:
Until 8.4 comes out with a super-efficient native one, you can add the array_accum() function in the PostgreSQL documentation for rolling up any column into an array, which can then be used by application code, or combined with array_to_string() to format it as a list:
http://www.postgresql.org/docs/current/static/xaggr.html
I'd link to the 8.4 development docs but they don't seem to list this feature yet.
Following up on Kev's answer, using the Postgres docs:
First, create an array of the elements, then use the built-in array_to_string function.
CREATE AGGREGATE array_accum (anyelement)
(
sfunc = array_append,
stype = anyarray,
initcond = '{}'
);
select array_to_string(array_accum(name),'|') from table group by id;
Following yet again on the use of a custom aggregate function of string concatenation: you need to remember that the select statement will place rows in any order, so you will need to do a sub select in the from statement with an order by clause, and then an outer select with a group by clause to aggregate the strings, thus:
SELECT custom_aggregate(MY.special_strings)
FROM (SELECT special_strings, grouping_column
FROM a_table
ORDER BY ordering_column) MY
GROUP BY MY.grouping_column
Use the STRING_AGG function for PostgreSQL and Google BigQuery SQL:
SELECT company_id, STRING_AGG(employee, ', ')
FROM employees
GROUP BY company_id;
I found this PostgreSQL documentation helpful: http://www.postgresql.org/docs/8.0/interactive/functions-conditional.html.
In my case, I sought plain SQL to concatenate a field with brackets around it, if the field is not empty.
select itemid,
    CASE itemdescription
        WHEN '' THEN itemname
        ELSE itemname || ' (' || itemdescription || ')'
    END
from items;
If you are on Amazon Redshift, where string_agg is not supported, try using listagg.
SELECT company_id, listagg(EMPLOYEE, ', ') as employees
FROM EMPLOYEE_table
GROUP BY company_id;
As of PostgreSQL 9.0 and above, you can use the aggregate function called string_agg. Your new SQL should look something like this:
SELECT company_id, string_agg(employee, ', ')
FROM mytable GROUP BY company_id;
You can also use the format function, which can implicitly take care of type conversion of text, int, etc. by itself.
create or replace function concat_return_row_count(tbl_name text, column_name text, value int)
returns integer as $row_count$
declare
total integer;
begin
EXECUTE format('select count(*) from %s WHERE %s = %s', tbl_name, column_name, value) INTO total;
return total;
end;
$row_count$ language plpgsql;
postgres=# select concat_return_row_count('tbl_name','column_name',2); --2 is the value
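Note that %s splices the arguments in verbatim, so this version breaks on identifiers that need quoting and is open to SQL injection. A safer sketch of the same function (a hypothetical variant, not the original code) uses %I for identifiers and %L for the literal:
create or replace function concat_return_row_count_safe(tbl_name text, column_name text, value int)
returns integer as $row_count$
declare
total integer;
begin
-- %I quotes identifiers, %L quotes the value as a literal
EXECUTE format('select count(*) from %I WHERE %I = %L', tbl_name, column_name, value) INTO total;
return total;
end;
$row_count$ language plpgsql;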
I'm using JetBrains Rider and it was a hassle copying the results from the above examples to re-execute, because it seemed to wrap it all in JSON. This joins them into a single statement that is easier to run:
select string_agg('drop table if exists "' || tablename || '" cascade', ';')
from pg_tables where schemaname != $$pg_catalog$$ and tableName like $$rm_%$$

PostgreSQL - ALL ( array ) Operator - Suggestion

Sample code follows. The ALL or ANY operator is not working; I need to compare ALL the values of the array.
CREATE OR REPLACE FUNCTION public.sample_function(
tt_sample_function text)
RETURNS TABLE (..... )
LANGUAGE 'plpgsql'
COST 100
VOLATILE
ROWS 1000
AS $BODY$
declare
e record;
v_cnt INTEGER:=0;
rec record;
str text;
a_v text [];
BEGIN
FOR rec IN ( SELECT * FROM json_populate_recordset(null::sample_function ,sample_function::json) )
LOOP
a_v:= array_append(a_v, ''''||rec.key || '#~#' || rec.value||'''');
END LOOP;
SELECT MAInfo.userid FROM
(SELECT DISTINCT i.userid,
CASE WHEN (i.settingKey || '#~#' || i.settingvalue) = ALL (a_v)
THEN i.settingKey || '#~#' || 'Y'
ELSE i.settingKey || '#~#' || 'N' END
AS MatchResult
FROM public.sample_table i
WHERE (i.settingKey || '#~#' || i.settingvalue) = ALL (a_v)
GROUP BY i.userid, MatchResult) AS MAInfo
GROUP BY MAInfo.userid
HAVING COUNT(MAInfo.userid) >= 1;
RETURN QUERY (....);
END;
$BODY$;
CREATE TYPE tt_sample_function AS
(
key character varying,
value character varying
)
Inputs are
SELECT public.sample_function(
'[{"key":"devicetype", "value":"TestType"},{"key":"ostype", "value":"TestType"}]'
)
Any suggestion why my ALL operator is not working? I mean it's always giving false; it should match all the array elements...
Note: of course the data is there in the table.
You are overcomplicating things. You don't need the FOR loop or the array to do the comparison. You can do that all in a single statement. No need for an extra TYPE or generating an array.
The parameter to the function should be declared as jsonb as you clearly want to pass valid JSON there.
I don't understand what you are trying to achieve with the CASE expression. The WHERE clause only returns rows that match the first condition in the CASE, so the second one will never be reached.
I also don't understand why you have the CASE at all, as you discard the result of that in the outer query completely.
But keeping the original structure as close as possible, I think you can simplify this to a single CREATE TABLE AS statement and get rid of all the array processing.
CREATE OR REPLACE FUNCTION public.sample_function(p_settings jsonb)
RETURNS TABLE (..... )
LANGUAGE plpgsql
AS $BODY$
declare
...
begin
CREATE TEMP TABLE hold_userID AS
SELECT MAInfo.userid
FROM (
-- the distinct is useless as the GROUP BY already does that
SELECT i.userid,
CASE
-- this checks if the parameter contains the settings key/value from sample_table
-- but the WHERE clause already makes sure of that???
WHEN p_settings #> jsonb_build_object('key', i.settingKey, 'value', i.settingvalue)
THEN i.settingKey || '#~#' || 'Y'
ELSE i.settingKey || '#~#' || 'N'
END AS MatchResult
FROM public.sample_table i
WHERE (i.settingKey, i.settingvalue) IN (select t.element ->> 'key' as key,
t.element ->> 'value' as value
from jsonb_array_elements(p_settings) as t(element))
GROUP BY i.userid, MatchResult
) AS MAInfo
GROUP BY MAInfo.userid
HAVING COUNT(MAInfo.userid) >= 1;
return query ...;
end;
$BODY$;
If you want to check if certain users have all the settings passed to the function, you don't really need a CASE expression, just a proper having condition
So maybe you want this instead:
CREATE TEMP TABLE hold_userID AS
SELECT i.userid
FROM public.sample_table i
WHERE (i.settingKey, i.settingvalue) IN (select t.element ->> 'key' as key,
t.element ->> 'value' as value
from jsonb_array_elements(p_settings) as t(element))
GROUP BY i.userid
HAVING COUNT(*) = jsonb_array_length(p_settings);
Or alternatively:
SELECT i.userid
FROM (
select userid, settingkey as key, settingvalue as value
from public.sample_table
) i
group by i.userid
HAVING jsonb_object_agg(key, value) = p_settings
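With the parameter declared as jsonb, the call then looks like this (same input as the question, cast to jsonb):
SELECT * FROM public.sample_function(
'[{"key":"devicetype", "value":"TestType"},{"key":"ostype", "value":"TestType"}]'::jsonb
);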

postgres lag and window to create cohort table [duplicate]

I am trying to create crosstab queries in PostgreSQL such that they automatically generate the crosstab columns instead of hardcoding them. I have written a function that dynamically generates the column list that I need for my crosstab query. The idea is to substitute the result of this function in the crosstab query using dynamic SQL.
I know how to do this easily in SQL Server, but my limited knowledge of PostgreSQL is hindering my progress here. I was thinking of storing the result of function that generates the dynamic list of columns into a variable and use that to dynamically build the sql query. It would be great if someone could guide me regarding the same.
-- Table which has be pivoted
CREATE TABLE test_db
(
kernel_id int,
key int,
value int
);
INSERT INTO test_db VALUES
(1,1,99),
(1,2,78),
(2,1,66),
(3,1,44),
(3,2,55),
(3,3,89);
-- This function dynamically returns the list of columns for crosstab
CREATE FUNCTION test() RETURNS TEXT AS '
DECLARE
key_id int;
text_op TEXT = '' kernel_id int, '';
BEGIN
FOR key_id IN SELECT DISTINCT key FROM test_db ORDER BY key LOOP
text_op := text_op || key_id || '' int , '' ;
END LOOP;
text_op := text_op || '' DUMMY text'';
RETURN text_op;
END;
' LANGUAGE 'plpgsql';
-- This query works. I just need to convert the static list
-- of crosstab columns to be generated dynamically.
SELECT * FROM
crosstab
(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2',
'SELECT DISTINCT key FROM test_db ORDER BY 1'
)
AS x (kernel_id int, key1 int, key2 int, key3 int); -- How can I replace ..
-- .. this static list with a dynamically generated list of columns ?
You can use the provided C function crosstab_hash for this.
The manual is not very clear in this respect. It's mentioned at the end of the chapter on crosstab() with two parameters:
You can create predefined functions to avoid having to write out the result column names and types in each query. See the examples in the previous section. The underlying C function for this form of crosstab is named crosstab_hash.
For your example:
CREATE OR REPLACE FUNCTION f_cross_test_db(text, text)
RETURNS TABLE (kernel_id int, key1 int, key2 int, key3 int)
AS '$libdir/tablefunc','crosstab_hash' LANGUAGE C STABLE STRICT;
Call:
SELECT * FROM f_cross_test_db(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'
,'SELECT DISTINCT key FROM test_db ORDER BY 1');
Note that you need to create a distinct crosstab_hash function for every crosstab function with a different return type.
Related:
PostgreSQL row to columns
Your function to generate the column list is rather convoluted; it can be replaced with this SQL query:
SELECT 'kernel_id int, '
|| string_agg(DISTINCT key::text, ' int, ' ORDER BY key::text)
|| ' int, DUMMY text'
FROM test_db;
And it cannot be used dynamically anyway.
#erwin-brandstetter: The return type of the function isn't an issue if you're always returning a JSON type with the converted results.
Here is the function I came up with:
CREATE OR REPLACE FUNCTION report.test(
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab JSON
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret JSON;
BEGIN
-- SELECT DISTINCT for query results
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
RETURN QUERY
EXECUTE '
SELECT array_to_json(array_agg(row_to_json(t)))
FROM (
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || ' ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')
) t;';
END;
$ab$ LANGUAGE 'plpgsql';
So, when you run it, you get the dynamic results in JSON, and you don't need to know how many values were pivoted:
select * from report.test(now()- '1 week'::interval, now(), 1);
tab
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{"date_start":"2015-07-27T08:40:01.277556-04:00","burn_rate":0.00,"monthly_revenue":5800.00,"cash_balance":0.00},{"date_start":"2015-07-27T08:50:02.458868-04:00","burn_rate":34000.00,"monthly_revenue":15800.00,"cash_balance":24000.00}]
(1 row)
Edit: If you have mixed datatypes in your crosstab, you can add logic to look it up for each column with something like this:
SELECT a.attname as column_name, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class b ON (a.attrelid = b.oid)
JOIN pg_catalog.pg_namespace n ON n.oid = b.relnamespace
WHERE n.nspname = $$schema_name$$ AND b.relname = $$table_name$$ and a.attstattarget = -1;
I realise this is an older post, but I struggled for a little while with the same issue.
My problem statement:
I had a table with multiple values in a field and wanted to create a crosstab query with 40+ column headings per row.
My solution was to create a function that loops through the table column to grab the values I wanted to use as column headings within the crosstab query.
Within this function I could then create the crosstab query. In my use case I added the crosstab result to a separate table.
E.g.
CREATE OR REPLACE FUNCTION field_values_ct ()
RETURNS VOID AS $$
DECLARE rec RECORD;
DECLARE str text;
BEGIN
str := '"Issue ID" text,';
-- looping to get column heading string
FOR rec IN SELECT DISTINCT field_name
FROM issue_fields
ORDER BY field_name
LOOP
str := str || '"' || rec.field_name || '" text' ||',';
END LOOP;
str:= substring(str, 0, length(str));
EXECUTE 'CREATE EXTENSION IF NOT EXISTS tablefunc;
DROP TABLE IF EXISTS temp_issue_fields;
CREATE TABLE temp_issue_fields AS
SELECT *
FROM crosstab(''select issue_id, field_name, field_value from issue_fields order by 1'',
''SELECT DISTINCT field_name FROM issue_fields ORDER BY 1'')
AS final_result ('|| str ||')';
END;
$$ LANGUAGE plpgsql;
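Usage would then be (the function fills temp_issue_fields as a side effect):
SELECT field_values_ct();
SELECT * FROM temp_issue_fields;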
The approach described here worked well for me. Instead of retrieving the pivot table directly, the easier approach is to let the function generate a SQL query string, then dynamically execute the resulting string on demand, as sketched below.
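A minimal sketch of that two-step pattern against the test_db table above (build_crosstab_sql is a name made up here): the function only returns the statement text, which you can then run, e.g. with \gexec in psql or with EXECUTE in another function. It requires the tablefunc extension:
CREATE OR REPLACE FUNCTION build_crosstab_sql()
RETURNS text LANGUAGE sql AS
$func$
SELECT 'SELECT * FROM crosstab(
''SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'',
''SELECT DISTINCT key FROM test_db ORDER BY 1'')
AS x (kernel_id int, "'
|| string_agg(DISTINCT key::text, '" int, "' ORDER BY key::text)
|| '" int)'
FROM test_db;
$func$;
-- in psql, generate and execute in one step:
SELECT build_crosstab_sql() \gexec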

Loop through columns of RECORD

I need to loop through type RECORD items by key/index, like I can with array structures in other programming languages.
For example:
DECLARE
data1 record;
data2 text;
...
BEGIN
...
FOR data1 IN
SELECT
*
FROM
sometable
LOOP
FOR data2 IN
SELECT
unnest( data1 ) -- THIS DOESN'T WORK!
LOOP
RETURN NEXT data1[data2]; -- SOMETHING LIKE THIS
END LOOP;
END LOOP;
As #Pavel explained, it is not simply possible to traverse a record, like you could traverse an array. But there are several ways around it - depending on your exact requirements. Ultimately, since you want to return all values in the same column, you need to cast them to the same type - text is the obvious common ground, because there is a text representation for every type.
Quick and dirty
Say, you have a table with an integer, a text and a date column.
CREATE TEMP TABLE tbl(a int, b text, c date);
INSERT INTO tbl VALUES
(1, '1text', '2012-10-01')
,(2, '2text', '2012-10-02')
,(3, ',3,ex,', '2012-10-03') -- text with commas
,(4, '",4,"ex,"', '2012-10-04') -- text with commas and double quotes
Then the solution can be as simple as:
SELECT unnest(string_to_array(trim(t::text, '()'), ','))
FROM tbl t;
Works for the first two rows, but fails for the special cases of rows 3 and 4.
You can easily solve the problem with commas in the text representation:
SELECT unnest(('{' || trim(t::text, '()') || '}')::text[])
FROM tbl t
WHERE a < 4;
This would work fine - except for line 4 which has double quotes in the text representation. Those are escaped by doubling them up. But the array constructor would need them escaped by \. Not sure why this incompatibility is there ...
SELECT ('{' || trim(t::text, '()') || '}') FROM tbl t WHERE a = 4
Yields:
{4,""",4,""ex,""",2012-10-04}
But you would need:
SELECT '{4,"\",4,\"ex,\"",2012-10-04}'::text[]; -- works
Proper solution
If you knew the column names beforehand, a clean solution would be simple:
SELECT unnest(ARRAY[a::text,b::text,c::text])
FROM tbl
Since you operate on records of well-known type, you can just query the system catalog:
SELECT string_agg(a.attname || '::text', ',' ORDER BY a.attnum)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = 'tbl'::regclass
AND a.attnum > 0
AND a.attisdropped = FALSE
Put this in a function with dynamic SQL:
CREATE OR REPLACE FUNCTION unnest_table(_tbl text)
RETURNS SETOF text LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE '
SELECT unnest(ARRAY[' || (
SELECT string_agg(a.attname || '::text', ',' ORDER BY a.attnum)
FROM pg_catalog.pg_attribute a
WHERE a.attrelid = _tbl::regclass
AND a.attnum > 0
AND a.attisdropped = false
) || '])
FROM ' || _tbl::regclass;
END
$func$;
Call:
SELECT unnest_table('tbl') AS val
Returns:
val
-----
1
1text
2012-10-01
2
2text
2012-10-02
3
,3,ex,
2012-10-03
4
",4,"ex,"
2012-10-04
This works without installing additional modules. Another option is to install the hstore extension and use it like #Craig demonstrates.
PL/pgSQL isn't really designed for what you want to do. It doesn't consider a record to be iterable; it's a tuple of possibly different and incompatible data types.
PL/pgSQL has EXECUTE for dynamic SQL, but EXECUTE queries cannot refer to PL/pgSQL variables like NEW or other records directly.
What you can do is convert the record to a hstore key/value structure, then iterate over the hstore. Use each(hstore(the_record)), which produces a rowset of key,value tuples. All values are cast to their text representations.
This toy function demonstrates iteration over a record by creating an anonymous ROW(..) - which will have column names f1, f2, f3 - then converting that to hstore, iterating over its column/value pairs, and returning each pair.
CREATE EXTENSION hstore;
CREATE OR REPLACE FUNCTION hs_demo()
RETURNS TABLE ("key" text, "value" text)
LANGUAGE plpgsql AS
$$
DECLARE
data1 record;
hs_row record;
BEGIN
data1 = ROW(1, 2, 'test');
FOR hs_row IN SELECT kv."key", kv."value" FROM each(hstore(data1)) kv
LOOP
"key" = hs_row."key";
"value" = hs_row."value";
RETURN NEXT;
END LOOP;
END;
$$;
In reality you would never write it this way, since the whole loop can be replaced with a simple RETURN QUERY statement that does the same thing each(hstore) does anyway. So this is only to show how each(hstore(record)) works; the above function should never actually be used.
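For illustration, the RETURN QUERY form would look something like this (hs_demo2 is a made-up name; same hstore cast, no loop):
CREATE OR REPLACE FUNCTION hs_demo2()
RETURNS TABLE ("key" text, "value" text)
LANGUAGE plpgsql AS
$$
DECLARE
data1 record;
BEGIN
data1 = ROW(1, 2, 'test');
RETURN QUERY SELECT kv."key", kv."value" FROM each(hstore(data1)) kv;
END;
$$;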
This feature is not supported in plpgsql - a record is NOT a hash array like in other scripting languages - it is similar to C or ADA, where this functionality is impossible. You can use another PL language like PL/Perl or PL/Python, or some tricks - you can iterate with the HSTORE datatype (extension) or via dynamic SQL:
see How to set value of composite variable field using dynamic SQL
But a request for this functionality usually means you are doing something wrong. When you use PL/pgSQL you have to think differently than when you use JavaScript or Python.
FOR data2 IN
SELECT d
from unnest( data1 ) s(d)
LOOP
RETURN NEXT data2;
END LOOP;
If you order your results prior to looping, you will accomplish what you want.
for rc in select * from t1 order by t1.key asc loop
return next rc;
end loop;
will do exactly what you need. It is also the fastest way to perform that kind of task.
I wasn't able to find a proper way to loop over a record, so what I did was convert the record to JSON first and loop over the JSON:
declare
_src_schema varchar := 'db_utility';
_targetjson json;
_key text;
_value text;
BEGIN
select row_to_json(c.*) from information_schema.columns c where c.table_name = prm_table and c.column_name = prm_column
and c.table_schema = _src_schema into _targetjson;
raise notice '_targetjson %', _targetjson;
FOR _key, _value IN
SELECT * FROM json_each_text(_targetjson)
LOOP
-- do some math operation on its corresponding value
RAISE NOTICE '%: %', _key, _value;
END LOOP;
return true;
end;

How to use SIMILAR TO with variables

I have a function with a SELECT using a SIMILAR TO expression with a variable and I don't know how to do it:
DECLARE pckg_data cl_data;
DECLARE contacts contacts_reg%ROWTYPE;
DECLARE sim_name varchar;
BEGIN
SELECT client_reg._name,
client_reg.last_name,
client_reg.id_card,
client_reg.address
INTO pckg_data
FROM client_reg WHERE(client_reg._name = (cl_name ||' '|| cl_lastname));
RETURN NEXT pckg_data;
SELECT ('%'||cl_name || ' ' || cl_lastname ||'%') INTO sim_name;
FOR contacts IN SELECT contacts_reg.id
FROM contacts_reg, contactscli_asc, client_reg
WHERE(contacts_reg._name SIMILAR TO sim_name) LOOP
SELECT client_reg._name, client_reg.last_name, client_reg.id_card,
client_reg.address, client_reg.id
INTO pckg_data
FROM client_reg, contactscli_asc WHERE(contactscli_asc.contact = contacts.id
AND client_reg.id = contactscli_asc.client);
END LOOP;
END;
Your query that feeds the loop has CROSS JOIN over three (!) tables. I removed the last two on the notion that they are not needed. One of them is repeated in the body of the loop. Also consider #kgrittn's note on CROSS JOIN.
In the body of the loop you select data into a variable repeatedly, which does nothing. I assume you want to return those rows - that's what my edited version does, anyway.
I rewrote the LOOP construct with a simple SELECT with RETURN QUERY, because that's much faster and simpler.
Actually, I rewrote everything in a way that would make sense. What you presented is still incomplete (missing function header) and syntactically and logically a mess.
This is an educated guess, no more:
CREATE FUNCTION very_secret_function_name(cl_name varchar, cl_lastname varchar)
RETURNS TABLE (name varchar, last_name varchar,
id_card int, address varchar, id int)
LANGUAGE plpgsql AS
$func$
DECLARE
_sim_name varchar := (cl_name ||' '|| cl_lastname);
BEGIN
RETURN QUERY
SELECT c._name, c.last_name, c.id_card, c.address, NULL::int
-- added NULL for an id to match the second call
FROM client_reg c
WHERE c._name = _sim_name;
RETURN QUERY
SELECT c._name, c.last_name, c.id_card, c.address, r.id
FROM client_reg c
JOIN contactscli_asc a ON a.client = c.id
JOIN contacts_reg r ON r.id = a.contact
WHERE r._name LIKE ('%' || _sim_name || '%');
END
$func$;
In any case, consider the features used.
Some advice:
You can assign a variable at declaration time.
The keyword DECLARE is only needed once.
Use table aliases to make your code easier to read.
You don't have to enclose the WHERE clause in parenthesis.
Most likely you don't need SIMILAR TO, and LIKE does the job faster. I never use SIMILAR TO; LIKE or regular expressions (~) do a better job:
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
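For instance, these predicates all perform the same plain substring test, but the LIKE and regex forms avoid SIMILAR TO's internal rewrite to a regular expression (with ~, escape any regex metacharacters in _sim_name first):
WHERE r._name SIMILAR TO '%' || _sim_name || '%'
WHERE r._name LIKE '%' || _sim_name || '%'
WHERE r._name ~ _sim_name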