Return SETOF rows from PostgreSQL function - postgresql

I have a situation where I want to return the join between two views, and that's a lot of columns. It was pretty easy in SQL Server, but in PostgreSQL when I do the join I get the error "a column definition list is required".
Is there any way I can bypass this? I don't want to provide the definitions of the returned columns.
CREATE OR REPLACE FUNCTION functionA(username character varying DEFAULT ''::character varying, databaseobject character varying DEFAULT ''::character varying)
RETURNS SETOF ???? AS
$BODY$
Declare
SqlString varchar(4000) = '';
BEGIN
IF(UserName = '*') THEN
Begin
SqlString := 'select * from view1 left join ' + databaseobject + ' as view2 on view1.id = view2.id';
End;
ELSE
Begin
SqlString := 'select * from view3 left join ' + databaseobject + ' as view2 on view3.id = view2.id';
End;
END IF;
execute (SqlString );
END;
$BODY$

Sanitize function
What you currently have can be simplified / sanitized to:
CREATE OR REPLACE FUNCTION func_a (username text = '', databaseobject text = '')
RETURNS ????
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE
format ('SELECT * FROM %s v1 LEFT JOIN %I v2 USING (id)'
, CASE WHEN username = '*' THEN 'view1' ELSE 'view3' END
, databaseobject);
END
$func$;
You only need additional instances of BEGIN ... END in the function body to start separate code blocks with their own scope, which is rarely needed.
The standard SQL concatenation operator is ||. + is a "creative" addition of your former vendor.
Don't use CaMeL-case identifiers unless you double-quote them. Best don't use them at all. See:
Are PostgreSQL column names case-sensitive?
varchar(4000) is also tailored to a specific limitation of SQL Server. It has no specific significance in Postgres. Only use varchar(4000) if you actually need a limit of 4000 characters. I would just use text - except that we don't need any variables at all here, after simplifying the function.
If you have not used format(), yet, consult the manual here.
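To give a feel for it, a minimal sketch (the table name and value are made up for illustration):
SELECT format('SELECT * FROM %I WHERE id = %L', 'my_table', 42);
-- %I quotes identifiers, %L quotes literals, %s inserts the string as-is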
Return type
Now, for your actual question: The return type for a dynamic query can be tricky since SQL requires that to be declared at call time at the latest. If you have a table or view or composite type in your database already matching the column definition list, you can just use that:
CREATE FUNCTION foo()
RETURNS SETOF my_view AS
...
Else, spell out the column definition list with (simplest) RETURNS TABLE:
CREATE FUNCTION foo()
RETURNS TABLE (col1 int, col2 text, ...) AS
...
If you are making the row type up as you go, you can return anonymous records:
CREATE FUNCTION foo()
RETURNS SETOF record AS
...
But then you have to provide a column definition list with every call, so I hardly ever use that.
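To illustrate, every call of such a function has to spell out the columns (the names and types here are made up):
SELECT * FROM foo() AS t(col1 int, col2 text);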
I wouldn't use SELECT * to begin with. Use a definitive list of columns to return and declare your return type accordingly:
CREATE OR REPLACE FUNCTION func_a(username text = '', databaseobject text = '')
RETURNS TABLE(col1 int, col2 text, col3 date)
LANGUAGE plpgsql AS
$func$
BEGIN
RETURN QUERY EXECUTE
format ($f$SELECT v1.col1, v1.col2, v2.col3
FROM %s v1 LEFT JOIN %I v2 USING (id)$f$
, CASE WHEN username = '*' THEN 'view1' ELSE 'view3' END
, databaseobject);
END
$func$;
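A call might then look like this (the username and 'some_view' are just placeholders for whatever you pass as databaseobject):
SELECT * FROM func_a('*', 'some_view');    -- joins view1 to some_view
SELECT * FROM func_a('joe', 'some_view');  -- joins view3 to some_view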
For completely dynamic queries, consider building the query in your client to begin with, instead of using a function.
You need to understand basics first:
Refactor a PL/pgSQL function to return the output of various SELECT queries
PL/pgSQL in the Postgres manual
Then there are more advanced options with polymorphic types, which allow you to pass the return type at call time. More in the last chapter of:
Refactor a PL/pgSQL function to return the output of various SELECT queries

Related

Function to return dynamic set of columns for given table

I have a fields table to store column information for other tables:
CREATE TABLE public.fields (
schema_name varchar(100),
table_name varchar(100),
column_text varchar(100),
column_name varchar(100),
column_type varchar(100) default 'varchar(100)',
column_visible boolean
);
And I'd like to create a function to fetch data for a specific table.
I just tried something like this:
create or replace function public.get_table(schema_name text,
table_name text,
active boolean default true)
returns setof record as $$
declare
entity_name text default schema_name || '.' || table_name;
r record;
begin
for r in EXECUTE 'select * from ' || entity_name loop
return next r;
end loop;
return;
end
$$
language plpgsql;
With this function I have to specify columns when I call it!
select * from public.get_table('public', 'users') as dept(id int, uname text);
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in public.fields table.
Solution for the simple case
As explained in the referenced answers below, you can use registered (row) types, and thus implicitly declare the return type of a polymorphic function:
CREATE OR REPLACE FUNCTION public.get_table(_tbl_type anyelement)
RETURNS SETOF anyelement AS
$func$
BEGIN
RETURN QUERY EXECUTE format('TABLE %s', pg_typeof(_tbl_type));
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM public.get_table(NULL::public.users); -- note the syntax!
Returns the complete table (with all user columns).
Wait! How?
Detailed explanation in this related answer, chapter
"Various complete table types":
Refactor a PL/pgSQL function to return the output of various SELECT queries
TABLE foo is just short for SELECT * FROM foo:
Is there a shortcut for SELECT * FROM?
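To illustrate with the table from the call above:
TABLE public.users;          -- short form
SELECT * FROM public.users;  -- equivalent long form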
2 steps for completely dynamic return type
But what you are trying to do is strictly impossible in a single SQL command.
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in
public.fields table.
There is no direct way to return an arbitrary selection of columns (return type not known at call time) from a function - or any SQL command. SQL demands to know number, names and types of resulting columns at call time. More in the 2nd chapter of this related answer:
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
There are various workarounds. You could wrap the result in one of the standard document types (json, jsonb, hstore, xml).
Or you generate the query with one function call and execute the result with the next:
CREATE OR REPLACE FUNCTION public.generate_get_table(_schema_name text, _table_name text)
RETURNS text AS
$func$
SELECT format('SELECT %s FROM %I.%I'
, string_agg(quote_ident(column_name), ', ')
, schema_name
, table_name)
FROM fields
WHERE column_visible
AND schema_name = _schema_name
AND table_name = _table_name
GROUP BY schema_name, table_name
ORDER BY schema_name, table_name;
$func$ LANGUAGE sql;
Call:
SELECT public.generate_get_table('public', 'users');
This creates a query of the form:
SELECT usr_id, usr FROM public.users;
Execute it in the 2nd step. (You might want to add column numbers and order columns.)
Or append \gexec in psql to execute the return value immediately. See:
How to force evaluation of subquery before joining / pushing down to foreign server
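A minimal psql sketch, reusing the example call from above (\gexec sends each row of the result back to the server as a new statement):
SELECT public.generate_get_table('public', 'users')\gexec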
Be sure to defend against SQL injection:
INSERT with dynamic table name in trigger function
Define table and column names as arguments in a plpgsql function?
Asides
varchar(100) does not make much sense for identifiers, which are limited to 63 characters in standard Postgres:
Maximum characters in labels (table names, columns etc)
If you understand how the object identifier type regclass works, you might replace schema and table name with a single regclass column.
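A quick way to see regclass at work (assuming the table public.users from above exists):
SELECT 'public.users'::regclass;  -- resolves and validates the name; errors out if the table does not exist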
I think you just need another query to get the list of columns you want.
Maybe something like (this is untested):
create or replace function public.get_table(_schema_name text, _table_name text, active boolean default true) returns setof record as $$
declare
entity_name text default _schema_name || '.' || _table_name;
r record;
columns varchar;
begin
-- Get the list of columns
SELECT string_agg(column_name, ', ')
INTO columns
FROM public.fields
WHERE fields.schema_name = _schema_name
AND fields.table_name = _table_name
AND fields.column_visible = TRUE;
-- Return rows from the specified table
RETURN QUERY EXECUTE 'select ' || columns || ' from ' || entity_name;
RETURN;
end
$$
language plpgsql;
Keep in mind that column/table references may need to be surrounded by double quotes if they have certain characters in them.
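quote_ident() (or format() with %I) takes care of that, for example:
SELECT quote_ident('My Table');  -- returns "My Table" (quoted, because of the capital letters and the space)
SELECT quote_ident('users');     -- returns users (no quoting needed)
In the untested function above that would mean string_agg(quote_ident(column_name), ', ') when building the column list.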

Execute a dynamic crosstab query

I implemented this function in my Postgres database: http://www.cureffi.org/2013/03/19/automatically-creating-pivot-table-column-names-in-postgresql/
Here's the function:
create or replace function xtab (tablename varchar, rowc varchar, colc varchar, cellc varchar, celldatatype varchar) returns varchar language plpgsql as $$
declare
dynsql1 varchar;
dynsql2 varchar;
columnlist varchar;
begin
-- 1. retrieve list of column names.
dynsql1 = 'select string_agg(distinct '||colc||'||'' '||celldatatype||''','','' order by '||colc||'||'' '||celldatatype||''') from '||tablename||';';
execute dynsql1 into columnlist;
-- 2. set up the crosstab query
dynsql2 = 'select * from crosstab (
''select '||rowc||','||colc||','||cellc||' from '||tablename||' group by 1,2 order by 1,2'',
''select distinct '||colc||' from '||tablename||' order by 1''
)
as ct (
'||rowc||' varchar,'||columnlist||'
);';
return dynsql2;
end
$$;
So now I can call the function:
select xtab('globalpayments','month','currency','(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)','text');
Which returns (because the return type of the function is varchar):
select * from crosstab (
'select month,currency,(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)
from globalpayments
group by 1,2
order by 1,2'
, 'select distinct currency
from globalpayments
order by 1'
) as ct ( month varchar,CAD text,EUR text,GBP text,USD text );
How can I get this function to not only generate the code for the dynamic crosstab, but also execute it? When I manually copy, paste and execute the generated statement I get the result I want, but I want the function to assemble the dynamic query and execute it without that extra step.
Edit 1
This function comes close, but I need it to return more than just the first column of the first record
Taken from: Are there any way to execute a query inside the string value (like eval) in PostgreSQL?
create or replace function eval( sql text ) returns text as $$
declare
as_txt text;
begin
if sql is null then return null ; end if ;
execute sql into as_txt ;
return as_txt ;
end;
$$ language plpgsql;
usage: select * from eval($$select * from analytics limit 1$$)
However, it just returns the first column of the first record:
eval
----
2015
when the actual result looks like this:
Year, Month, Date, TPV_USD
---- ----- ------ --------
2016, 3, 2016-03-31, 100000
What you ask for is impossible. SQL is a strictly typed language. PostgreSQL functions need to declare a return type (RETURNS ..) at the time of creation.
A limited way around this is with polymorphic functions, if you can provide the return type at the time of the function call. But that's not evident from your question.
Refactor a PL/pgSQL function to return the output of various SELECT queries
You can return a completely dynamic result with anonymous records. But then you are required to provide a column definition list with every call. And how do you know about the returned columns? Catch 22.
There are various workarounds, depending on what you need or can work with. Since all your data columns seem to share the same data type, I suggest returning an array: text[]. Or you could return a document type like hstore or json. Related:
Dynamic alternative to pivot with CASE and GROUP BY
Dynamically convert hstore keys into columns for an unknown set of keys
But it might be simpler to just use two calls: 1: Let Postgres build the query. 2: Execute and retrieve returned rows.
Selecting multiple max() values using a single SQL statement
I would not use the function from Eric Minikel as presented in your question at all. It is not safe against SQL injection by way of maliciously malformed identifiers. Use format() to build query strings unless you are running an outdated version older than Postgres 9.1.
A shorter and cleaner implementation could look like this:
CREATE OR REPLACE FUNCTION xtab(_tbl regclass, _row text, _cat text
, _expr text -- still vulnerable to SQL injection!
, _type regtype)
RETURNS text
LANGUAGE plpgsql AS
$func$
DECLARE
_cat_list text;
_col_list text;
BEGIN
-- generate categories for xtab param and col definition list
EXECUTE format(
$$SELECT string_agg(quote_literal(x.cat), '), (')
, string_agg(quote_ident (x.cat), %L)
FROM (SELECT DISTINCT %I AS cat FROM %s ORDER BY 1) x$$
, ' ' || _type || ', ', _cat, _tbl)
INTO _cat_list, _col_list;
-- generate query string
RETURN format(
'SELECT * FROM crosstab(
$q$SELECT %I, %I, %s
FROM %s
GROUP BY 1, 2 -- only works if the 3rd column is an aggregate expression
ORDER BY 1, 2$q$
, $c$VALUES (%5$s)$c$
) ct(%1$I text, %6$s %7$s)'
, _row, _cat, _expr -- expr must be an aggregate expression!
, _tbl, _cat_list, _col_list, _type);
END
$func$;
Same function call as your original version. The function crosstab() is provided by the additional module tablefunc which has to be installed. Basics:
PostgreSQL Crosstab Query
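Installing the module is a one-time command per database (appropriate privileges required):
CREATE EXTENSION IF NOT EXISTS tablefunc;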
This handles column and table names safely. Note the use of object identifier types regclass and regtype. Also works for schema-qualified names.
Table name as a PostgreSQL function parameter
However, it is not completely safe while you pass a string to be executed as expression (_expr - cellc in your original query). This kind of input is inherently unsafe against SQL injection and should never be exposed to the general public.
SQL injection in Postgres functions vs prepared queries
The table is scanned only once for both lists (categories and column definitions), so it should be a bit faster.
Still can't return completely dynamic row types since that's strictly not possible.
Not quite impossible: you can still execute it (from a query, execute the string and return SETOF record).
Then you have to specify the return record format. The reason in this case is that the planner needs to know the return format before it can make certain decisions (materialization comes to mind).
So in this case you would EXECUTE the query, return the rows and return SETOF RECORD.
For example, we could do something like this with a wrapper function but the same logic could be folded into your function:
CREATE OR REPLACE FUNCTION crosstab_wrapper
(tablename varchar, rowc varchar, colc varchar,
cellc varchar, celldatatype varchar)
returns setof record language plpgsql as $$
DECLARE outrow record;
BEGIN
FOR outrow IN EXECUTE xtab($1, $2, $3, $4, $5)
LOOP
RETURN NEXT outrow;
END LOOP;
END;
$$;
Then you supply the record structure when calling the function, just like you do with crosstab() or connectby(): AS (col1 type, col2 type, ...).
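With the sample data from the question, such a call might look like this (a sketch; the column list simply mirrors the generated query shown earlier):
SELECT * FROM crosstab_wrapper('globalpayments', 'month', 'currency'
, '(sum(total_fees)/sum(txn_amount)*100)::decimal(48,2)', 'text')
AS ct (month varchar, CAD text, EUR text, GBP text, USD text);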

Unexpected behaviour for custom type returned from a function

I have created a custom type
CREATE TYPE rc_test_type AS (a1 bigint);
and a function
CREATE OR REPLACE FUNCTION public.rc_test_type_function(test_table character varying, dummy integer)
RETURNS rc_test_type AS
$BODY$
DECLARE
ret rc_test_type;
query text;
BEGIN
query := 'SELECT count(*) from ' || test_table ;
EXECUTE query into ret.a1;
RETURN ret;
END $BODY$
LANGUAGE plpgsql VOLATILE;
If I run
SELECT * FROM rc_test_type_function('some_table', 1);
I get
"a1"
1389
So far so good.
If I run
SELECT p FROM (SELECT rc_test_type_function('some_table', s.step) AS p
FROM some_other_table s) foo;
I get
"p"
"(1389)"
"(1389)"
since 'some_other_table' has just two records. Fine.
But then if I try
SELECT p.a1 FROM (select rc_test_type_function('some_table', s.step) AS p
FROM some_other_table s) foo;
I get the error
missing FROM-clause entry in subquery for table »p«
which I find strange since the subquery has not changed.
Two questions:
Can anyone explain what's going on?
How do I extract the field value a1 from the returned array?
Use parentheses around the composite type:
SELECT (p).a1
FROM (SELECT rc_test_type_function('some_table', s.step) AS p
FROM some_other_table s
) foo;
Even though your type has just a single column, it is still a composite type - with its own column name. Doesn't make a lot of sense, but that's how you built it.
(You might want to just use a simple type or maybe a DOMAIN instead.)
Quoting the manual here:
(compositecol).somefield
(mytable.compositecol).somefield
The parentheses are required here to show that compositecol is a column name, not a table name, or that mytable is a table name, not a schema name, in the second case.
Proper function
Omitting the part with the composite type, your function would be safer, simpler and faster this way:
CREATE OR REPLACE FUNCTION foo(test_table varchar, dummy int, OUT p bigint)
AS
$func$
BEGIN
EXECUTE format('SELECT count(*) from %I', test_table) -- !avoid SQLi!
INTO p;
END
$func$ LANGUAGE plpgsql;
Avoid SQL injection with dynamic SQL!
An OUT parameter simplifies the syntax in this case. You don't need a DECLARE section at all, and no RETURN either.
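A call is then as simple as (table name taken from the question):
SELECT * FROM foo('some_table', 1);  -- returns one row with the count in column p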
Even better
CREATE OR REPLACE FUNCTION foo(test_table regclass, dummy int, OUT p bigint)
AS
$func$
BEGIN
EXECUTE 'SELECT count(*) from ' || test_table
INTO p;
END
$func$ LANGUAGE plpgsql;
By using the object identifier type regclass this also works with schema-qualified table names, and SQLi is not possible to begin with: the function fails immediately if the table name is invalid, and the name is quoted automatically when converted to text.

PostgreSQL - Writing dynamic sql in stored procedure that returns a result set

How can I write a stored procedure that contains a dynamically built SQL statement that returns a result set? Here is my sample code:
CREATE OR REPLACE FUNCTION reporting.report_get_countries_new (
starts_with varchar,
ends_with varchar
)
RETURNS TABLE (
country_id integer,
country_name varchar
) AS
$body$
DECLARE
starts_with ALIAS FOR $1;
ends_with ALIAS FOR $2;
sql VARCHAR;
BEGIN
sql = 'SELECT * FROM lookups.countries WHERE lookups.countries.country_name >= ' || starts_with ;
IF ends_with IS NOT NULL THEN
sql = sql || ' AND lookups.countries.country_name <= ' || ends_with ;
END IF;
RETURN QUERY EXECUTE sql;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100 ROWS 1000;
This code returns an error:
ERROR: syntax error at or near "RETURN"
LINE 1: RETURN QUERY SELECT * FROM omnipay_lookups.countries WHERE o...
^
QUERY: RETURN QUERY SELECT * FROM omnipay_lookups.countries WHERE omnipay_lookups.countries.country_name >= r
CONTEXT: PL/pgSQL function "report_get_countries_new" line 14 at EXECUTE statement
I have tried other ways instead of this:
RETURN QUERY EXECUTE sql;
Way 1:
RETURN EXECUTE sql;
Way 2:
sql = 'RETURN QUERY SELECT * FROM....
/*later*/
EXECUTE sql;
In all cases without success.
Ultimately I want to write a stored procedure that contains a dynamic sql statement and that returns the result set from the dynamic sql statement.
There is room for improvement:
CREATE OR REPLACE FUNCTION report_get_countries_new (starts_with text
, ends_with text = NULL)
RETURNS SETOF lookups.countries AS
$func$
DECLARE
sql text := 'SELECT * FROM lookups.countries WHERE country_name >= $1';
BEGIN
IF ends_with IS NOT NULL THEN
sql := sql || ' AND country_name <= $2';
END IF;
RETURN QUERY EXECUTE sql
USING starts_with, ends_with;
END
$func$ LANGUAGE plpgsql;
-- the rest is default settings
Major points
PostgreSQL 8.4 introduced the USING clause for EXECUTE, which is useful for several reasons. Recap in the manual:
The command string can use parameter values, which are referenced in
the command as $1, $2, etc. These symbols refer to values supplied in
the USING clause. This method is often preferable to inserting data
values into the command string as text: it avoids run-time overhead of
converting the values to text and back, and it is much less prone to
SQL-injection attacks since there is no need for quoting or escaping.
IOW, it is safer and faster than building a query string with text representation of parameters, even when sanitized with quote_literal().
Note that $1, $2 in the query string refer to the supplied values in the USING clause, not to the function parameters.
While you return SELECT * FROM lookups.countries, you can simplify the RETURN declaration as demonstrated:
RETURNS SETOF lookups.countries
In PostgreSQL there is a composite type defined for every table automatically. Use it. The effect is that the function depends on the type and you get an error message if you try to alter the table. Drop & recreate the function in such a case.
This may or may not be desirable - generally it is! You want to be made aware of side effects if you alter tables. The way you have it, your function would break silently and raise an exception on its next call.
If you provide an explicit default for the second parameter in the declaration like demonstrated, you can (but don't have to) simplify the call in case you don't want to set an upper bound with ends_with.
SELECT * FROM report_get_countries_new('Zaire');
instead of:
SELECT * FROM report_get_countries_new('Zaire', NULL);
Be aware of function overloading in this context.
Don't quote the language name 'plpgsql' even if that's tolerated (for now). It's an identifier.
You can assign a variable at declaration time. Saves an extra step.
Parameters are named in the header. Drop the nonsensical lines:
starts_with ALIAS FOR $1;
ends_with ALIAS FOR $2;
Use quote_literal() to avoid SQL injection (!!!) and fix your quoting problem:
CREATE OR REPLACE FUNCTION report_get_countries_new (
starts_with varchar,
ends_with varchar
)
RETURNS TABLE (
country_id integer,
country_name varchar
) AS
$body$
DECLARE
starts_with ALIAS FOR $1;
ends_with ALIAS FOR $2;
sql VARCHAR;
BEGIN
sql := 'SELECT * FROM lookups.countries WHERE lookups.countries.country_name >= ' || quote_literal(starts_with) ;
IF ends_with IS NOT NULL THEN
sql := sql || ' AND lookups.countries.country_name <= ' || quote_literal(ends_with) ;
END IF;
RETURN QUERY EXECUTE sql;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100 ROWS 1000;
This is tested in version 9.1, works fine.

PostgreSQL: store function in column as value

Can functions be stored as anonymous functions directly in a column as its value?
Let's say I want this function to be stored in a column.
Example (pseudocode):
Table my_table: pk (int), my_function (func)
func ( x ) { return x * 100 }
And later use it as:
select
t.my_function(some_input) AS output
from
my_table as t
where t.pk = 1999
The function may vary for each pk.
Your title asks for something different from your example.
A function has to be created before you can call it. (title)
An expression has to be evaluated. You would need a meta-function for that. (example)
Here are solutions for both:
1. Evaluate expressions dynamically
You have to take into account that the resulting type can vary. I use polymorphic types for that.
CREATE OR REPLACE FUNCTION f1(int)
RETURNS int
LANGUAGE sql IMMUTABLE AS
'SELECT $1 * 100;';
CREATE OR REPLACE FUNCTION f2(text)
RETURNS text
LANGUAGE sql IMMUTABLE AS
$$SELECT $1 || '_foo';$$;
CREATE TABLE my_expr (
expr text PRIMARY KEY
, def text
, rettype regtype
);
INSERT INTO my_expr VALUES
('x', 'f1(3)' , 'int')
, ('y', $$f2('bar')$$, 'text')
, ('z', 'now()' , 'timestamptz')
;
CREATE OR REPLACE FUNCTION f_eval(text, _type anyelement = 'NULL'::text, OUT _result anyelement)
LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE
'SELECT ' || (SELECT def FROM my_expr WHERE expr = $1)
INTO _result;
END
$func$;
Related:
Refactor a PL/pgSQL function to return the output of various SELECT queries
Call:
SQL is strictly typed; the same result column can only have one data type. For multiple rows with possibly heterogeneous data types, you might settle for type text, as every data type can be cast to and from text:
SELECT *, f_eval(expr) AS result -- default to type text
FROM my_expr;
Or return multiple columns like:
SELECT *
, CASE WHEN rettype = 'text'::regtype THEN f_eval(expr) END AS text_result -- default to type text
, CASE WHEN rettype = 'int'::regtype THEN f_eval(expr, NULL::int) END AS int_result
, CASE WHEN rettype = 'timestamptz'::regtype THEN f_eval(expr, NULL::timestamptz) END AS tstz_result
-- , more?
FROM my_expr;
db<>fiddle here
2. Create and use functions dynamically
It is possible to create functions dynamically and then use them. You cannot do that with plain SQL, however. You will have to use another function to do that or at least an anonymous code block (DO statement), introduced in PostgreSQL 9.0.
It can work like this:
CREATE TABLE my_func (func text PRIMARY KEY, def text);
INSERT INTO my_func VALUES
('f'
, $$CREATE OR REPLACE FUNCTION f(int)
RETURNS int
LANGUAGE sql IMMUTABLE AS
'SELECT $1 * 100;'$$);
CREATE OR REPLACE FUNCTION f_create_func(text)
RETURNS void
LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE (SELECT def FROM my_func WHERE func = $1);
END
$func$;
Call:
SELECT f_create_func('f');
SELECT f(3);
db<>fiddle here
You may want to drop the function afterwards.
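For example, to remove the function created above:
DROP FUNCTION IF EXISTS f(int);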
In most cases you should just create the functions instead and be done with it. Use separate schemas if you have problems with multiple versions or privileges.
For more information on the features I used here, see my related answer on dba.stackexchange.com.