What I'm trying to do:
Modular function querying for time-range of any table I specify (pg_attribute as string input)
Chain multiple conditions as function inputs in where clause, after ::regclass is called
Reference the attrelid = _tablename::regclass in the chained where clauses
CREATE OR REPLACE FUNCTION the_func(
rngstart timestamptz,
rngend timestamptz,
_tablename VARCHAR(16),
_id INT
) AS $$
SELECT *
FROM pg_attribute
WHERE attrelid = _tablename::regclass
AND id = _id
AND time > rngstart
AND time <= rngend
$$ LANGUAGE sql STABLE;
You have to write a PL/pgSQL function and use dynamic SQL, something along the lines of
BEGIN
RETURN QUERY EXECUTE format(
'SELECT ... FROM %I '
'WHERE id = $1 '
'AND time > $2 AND time <= $3',
_tablename::text)
USING _id, rngstart, rngend;
END;
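Spelled out, a complete version of that sketch could look like this (a minimal sketch, not the answer's original code: it assumes the target tables expose id and time columns, and it declares RETURNS SETOF record, so callers must supply a column definition list; adjust the return type to your actual row shape):
CREATE OR REPLACE FUNCTION the_func(
    rngstart   timestamptz,
    rngend     timestamptz,
    _tablename text,
    _id        int
) RETURNS SETOF record AS
$func$
BEGIN
    -- %I quotes the table name as an identifier; the values are passed with USING
    RETURN QUERY EXECUTE format(
        'SELECT * FROM %I
         WHERE  id = $1
         AND    time > $2 AND time <= $3',
        _tablename)
    USING _id, rngstart, rngend;
END
$func$ LANGUAGE plpgsql STABLE;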
Related
I want to create a PostgreSQL function that returns records. But if I pass an id parameter, it should be added to the WHERE clause; if I pass no id or a null id, the WHERE clause should not be added to the query.
CREATE OR REPLACE FUNCTION my_func(id integer)
RETURNS TABLE (type varchar, total bigint) AS $$
DECLARE where_clause VARCHAR(200);
BEGIN
IF id IS NOT NULL THEN
where_clause = ' group_id= ' || id;
END IF ;
RETURN QUERY SELECT
type,
count(*) AS total
FROM
table1
WHERE
where_clause ???
GROUP BY
type
ORDER BY
type;
END
$$
LANGUAGE plpgsql;
You can either use one condition that takes care of both situations (then you don't need PL/pgSQL to begin with):
CREATE OR REPLACE FUNCTION my_func(p_id integer)
RETURNS TABLE (type varchar, total bigint)
AS $$
SELECT type,
count(*) AS total
FROM table1
WHERE p_id is null or group_id = p_id
GROUP BY type
ORDER BY type;
$$
LANGUAGE sql;
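A couple of sample calls (assuming table1 exists with type and group_id columns; 42 is an arbitrary example id):
SELECT * FROM my_func(NULL);  -- no id given: all groups are counted
SELECT * FROM my_func(42);    -- only rows with group_id = 42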
But an OR condition like that is typically not really good for performance. The second option you have is to simply run two different statements:
CREATE OR REPLACE FUNCTION my_func(p_id integer)
RETURNS TABLE (type varchar, total bigint)
AS $$
begin
if (p_id is null) then
return query
SELECT type,
count(*) AS total
FROM table1
GROUP BY type
ORDER BY type;
else
return query
SELECT type,
count(*) AS total
FROM table1
WHERE group_id = p_id
GROUP BY type
ORDER BY type;
end if;
END
$$
LANGUAGE plpgsql;
And finally you can build a dynamic SQL string depending on the parameter:
CREATE OR REPLACE FUNCTION my_func(p_id integer)
RETURNS TABLE (type varchar, total bigint)
AS $$
declare
l_sql text;
begin
l_sql := 'SELECT type, count(*) AS total FROM table1 ';
if (p_id is not null) then
l_sql := l_sql || ' WHERE group_id = '||p_id;
end if;
l_sql := l_sql || ' GROUP BY type ORDER BY type';
return query execute l_sql;
end;
$$
LANGUAGE plpgsql;
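A variant of the same idea, sketched as just the function body: keep the value out of the SQL string entirely and pass it via USING, so no quoting or concatenation of p_id is needed:
declare
    l_sql text := 'SELECT type, count(*) AS total FROM table1';
begin
    if p_id is not null then
        -- $1 refers to the value supplied in the USING clause
        return query execute l_sql || ' WHERE group_id = $1 GROUP BY type ORDER BY type'
        using p_id;
    else
        return query execute l_sql || ' GROUP BY type ORDER BY type';
    end if;
end;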
Nothing special is required, just use the variable as it is. For more info please refer to: plpgsql function parameters
This is my function:
CREATE OR REPLACE FUNCTION SANDBOX.DAILYVERIFY_DATE(TABLE_NAME regclass, DATE_DIFF INTEGER)
RETURNS void AS $$
DECLARE
RESULT BOOLEAN;
DATE DATE;
BEGIN
EXECUTE 'SELECT VORHANDENES_DATUM AS DATE, CASE WHEN DATUM IS NULL THEN FALSE ELSE TRUE END AS UPDATED FROM
(SELECT DISTINCT DATE VORHANDENES_DATUM FROM ' || TABLE_NAME ||
' WHERE DATE > CURRENT_DATE -14-'||DATE_DIFF|| '
) A
RIGHT JOIN
(
WITH compras AS (
SELECT ( NOW() + (s::TEXT || '' day'')::INTERVAL )::TIMESTAMP(0) AS DATUM
FROM generate_series(-14, -1, 1) AS s
)
SELECT DATUM::DATE
FROM compras)
B
ON DATUM = VORHANDENES_DATUM'
INTO date,result;
RAISE NOTICE '%', result;
INSERT INTO SANDBOX.UPDATED_TODAY VALUES (TABLE_NAME, DATE, RESULT);
END;
$$ LANGUAGE plpgsql;
It is supposed to upload rows into the table SANDBOX.UPDATED_TODAY, which contains table name, a date and a boolean.
The boolean shows, whether there was an entry for that date in the table. The whole part, which is inside of EXECUTE ... INTO works fine and gives me those days.
However, this code only inserts the first row of the query's result. What I want is that all 14 rows get inserted. Obviously, I need to change it into something like a loop or something completely different, but how exactly would that work?
Side note: I removed some unnecessary parts regarding those 2 parameters you can see. The problem does not have anything to do with that.
Put the INSERT statement inside the EXECUTE. You don't need the result of the SELECT for anything other than inserting it into that table, right? So just insert it directly as part of the same query:
CREATE OR REPLACE FUNCTION SANDBOX.DAILYVERIFY_DATE(TABLE_NAME regclass, DATE_DIFF INTEGER)
RETURNS void AS
$$
BEGIN
EXECUTE
'INSERT INTO SANDBOX.UPDATED_TODAY
SELECT ' || QUOTE_LITERAL(TABLE_NAME) || ', VORHANDENES_DATUM, CASE WHEN DATUM IS NULL THEN FALSE ELSE TRUE END
FROM (
SELECT DISTINCT DATE VORHANDENES_DATUM FROM ' || TABLE_NAME ||
' WHERE DATE > CURRENT_DATE -14-'||DATE_DIFF|| '
) A
RIGHT JOIN (
WITH compras AS (
SELECT ( NOW() + (s::TEXT || '' day'')::INTERVAL )::TIMESTAMP(0) AS DATUM
FROM generate_series(-14, -1, 1) AS s
)
SELECT DATUM::DATE
FROM compras
) B
ON DATUM = VORHANDENES_DATUM';
END;
$$ LANGUAGE plpgsql;
The idiomatic way to loop through dynamic query results would be
FOR date, result IN
EXECUTE 'SELECT ...'
LOOP
INSERT INTO ...
END LOOP;
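Applied to this case, that would be roughly (a sketch; '<the SELECT from above>' stands for the query string already built in the question):
FOR date, result IN
    EXECUTE '<the SELECT from above>'
LOOP
    INSERT INTO SANDBOX.UPDATED_TODAY VALUES (TABLE_NAME, date, result);
END LOOP;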
I am trying to create crosstab queries in PostgreSQL such that it automatically generates the crosstab columns instead of hardcoding it. I have written a function that dynamically generates the column list that I need for my crosstab query. The idea is to substitute the result of this function in the crosstab query using dynamic sql.
I know how to do this easily in SQL Server, but my limited knowledge of PostgreSQL is hindering my progress here. I was thinking of storing the result of function that generates the dynamic list of columns into a variable and use that to dynamically build the sql query. It would be great if someone could guide me regarding the same.
-- Table which has be pivoted
CREATE TABLE test_db
(
kernel_id int,
key int,
value int
);
INSERT INTO test_db VALUES
(1,1,99),
(1,2,78),
(2,1,66),
(3,1,44),
(3,2,55),
(3,3,89);
-- This function dynamically returns the list of columns for crosstab
CREATE FUNCTION test() RETURNS TEXT AS '
DECLARE
key_id int;
text_op TEXT = '' kernel_id int, '';
BEGIN
FOR key_id IN SELECT DISTINCT key FROM test_db ORDER BY key LOOP
text_op := text_op || key_id || '' int , '' ;
END LOOP;
text_op := text_op || '' DUMMY text'';
RETURN text_op;
END;
' LANGUAGE 'plpgsql';
-- This query works. I just need to convert the static list
-- of crosstab columns to be generated dynamically.
SELECT * FROM
crosstab
(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2',
'SELECT DISTINCT key FROM test_db ORDER BY 1'
)
AS x (kernel_id int, key1 int, key2 int, key3 int); -- How can I replace ..
-- .. this static list with a dynamically generated list of columns ?
You can use the provided C function crosstab_hash for this.
The manual is not very clear in this respect. It's mentioned at the end of the chapter on crosstab() with two parameters:
You can create predefined functions to avoid having to write out the
result column names and types in each query. See the examples in the
previous section. The underlying C function for this form of crosstab
is named crosstab_hash.
For your example:
CREATE OR REPLACE FUNCTION f_cross_test_db(text, text)
RETURNS TABLE (kernel_id int, key1 int, key2 int, key3 int)
AS '$libdir/tablefunc','crosstab_hash' LANGUAGE C STABLE STRICT;
Call:
SELECT * FROM f_cross_test_db(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'
,'SELECT DISTINCT key FROM test_db ORDER BY 1');
Note that you need to create a distinct crosstab_hash function for every crosstab function with a different return type.
Related:
PostgreSQL row to columns
Your function to generate the column list is rather convoluted and the result is incorrect (int missing after kernel_id). It can be replaced with this SQL query:
SELECT 'kernel_id int, '
|| string_agg(DISTINCT key::text, ' int, ' ORDER BY key::text)
|| ' int, DUMMY text'
FROM test_db;
And it cannot be used dynamically anyway.
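With the sample data from the question (keys 1, 2, 3), that query yields the string: kernel_id int, 1 int, 2 int, 3 int, DUMMY text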
@erwin-brandstetter: The return type of the function isn't an issue if you're always returning a JSON type with the converted results.
Here is the function I came up with:
CREATE OR REPLACE FUNCTION report.test(
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab JSON
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret JSON;
BEGIN
-- SELECT DISTINCT for query results
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
RETURN QUERY
EXECUTE '
SELECT array_to_json(array_agg(row_to_json(t)))
FROM (
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || ' ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')
) t;';
END;
$ab$ LANGUAGE 'plpgsql';
So, when you run it, you get the dynamic results in JSON, and you don't need to know how many values were pivoted:
select * from report.test(now()- '1 week'::interval, now(), 1);
tab
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{"date_start":"2015-07-27T08:40:01.277556-04:00","burn_rate":0.00,"monthly_revenue":5800.00,"cash_balance":0.00},{"date_start":"2015-07-27T08:50:02.458868-04:00","burn_rate":34000.00,"monthly_revenue":15800.00,"cash_balance":24000.00}]
(1 row)
Edit: If you have mixed datatypes in your crosstab, you can add logic to look it up for each column with something like this:
SELECT a.attname as column_name, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class b ON (a.attrelid = b.oid)
JOIN pg_catalog.pg_namespace n ON n.oid = b.relnamespace
WHERE n.nspname = $$schema_name$$ AND b.relname = $$table_name$$ and a.attstattarget = -1;
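For instance (a sketch; schema_name and table_name are placeholders), the same lookup can be rolled up into one ready-made column-definition string for the crosstab call:
SELECT string_agg(quote_ident(a.attname) || ' ' || format_type(a.atttypid, a.atttypmod),
                  ', ' ORDER BY a.attnum)
FROM pg_attribute a
JOIN pg_class     c ON c.oid = a.attrelid
JOIN pg_namespace n ON n.oid = c.relnamespace
WHERE n.nspname = 'schema_name'
  AND c.relname = 'table_name'
  AND a.attnum > 0          -- skip system columns
  AND NOT a.attisdropped;   -- skip dropped columns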
I realise this is an older post but I struggled for a little while on the same issue.
My Problem Statement:
I had a table with multiple values in a field and wanted to create a crosstab query with 40+ column headings per row.
My solution was to create a function which looped through the table column to grab the values that I wanted to use as column headings within the crosstab query.
Within this function I could then create the crosstab query. In my use case I added this crosstab result into a separate table.
E.g.
CREATE OR REPLACE FUNCTION field_values_ct ()
RETURNS VOID AS $$
DECLARE rec RECORD;
DECLARE str text;
BEGIN
str := '"Issue ID" text,';
-- looping to get column heading string
FOR rec IN SELECT DISTINCT field_name
FROM issue_fields
ORDER BY field_name
LOOP
str := str || '"' || rec.field_name || '" text' ||',';
END LOOP;
str:= substring(str, 0, length(str));
EXECUTE 'CREATE EXTENSION IF NOT EXISTS tablefunc;
DROP TABLE IF EXISTS temp_issue_fields;
CREATE TABLE temp_issue_fields AS
SELECT *
FROM crosstab(''select issue_id, field_name, field_value from issue_fields order by 1'',
''SELECT DISTINCT field_name FROM issue_fields ORDER BY 1'')
AS final_result ('|| str ||')';
END;
$$ LANGUAGE plpgsql;
The approach described here worked well for me.
Instead of retrieving the pivot table directly, the easier approach is to let the function generate a SQL query string, and then dynamically execute the resulting SQL string on demand.
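A minimal sketch of that two-step approach, reusing the issue_fields table from the example above (field_values_ct_sql is just a name made up for illustration): the function only builds and returns the crosstab SQL as text, and you execute the returned string yourself whenever you need it.
CREATE OR REPLACE FUNCTION field_values_ct_sql()
RETURNS text AS $$
DECLARE
    col_list text;
BEGIN
    -- build the quoted column list from the distinct field names
    SELECT string_agg(quote_ident(field_name) || ' text', ', ' ORDER BY field_name)
    INTO   col_list
    FROM  (SELECT DISTINCT field_name FROM issue_fields) s;

    RETURN 'SELECT * FROM crosstab('
        || quote_literal('select issue_id, field_name, field_value from issue_fields order by 1')
        || ', '
        || quote_literal('SELECT DISTINCT field_name FROM issue_fields ORDER BY 1')
        || ') AS final_result ("Issue ID" text, ' || col_list || ')';
END;
$$ LANGUAGE plpgsql;
-- Then: SELECT field_values_ct_sql();  and run the string it returns.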
I have a database with multiple identical schemas. There is a number of tables all named 'tran_...' in each schema. I want to loop through all 'tran_' tables in all schemas and pull out records that fall within a specific date range. This is the code I have so far:
CREATE OR REPLACE FUNCTION public."configChanges"(starttime timestamp, endtime timestamp)
RETURNS SETOF character varying AS
$BODY$DECLARE
tbl_row RECORD;
tbl_name VARCHAR(50);
tran_row RECORD;
out_record VARCHAR(200);
BEGIN
FOR tbl_row IN
SELECT * FROM pg_tables WHERE schemaname LIKE 'ivr%' AND tablename LIKE 'tran_%'
LOOP
tbl_name := tbl_row.schemaname || '.' || tbl_row.tablename;
FOR tran_row IN
SELECT * FROM tbl_name
WHERE ch_edit_date >= starttime AND ch_edit_date <= endtime
LOOP
out_record := tbl_name || ' ' || tran_row.ch_field_name;
RETURN NEXT out_record;
END LOOP;
END LOOP;
RETURN;
END;
$BODY$
LANGUAGE plpgsql;
When I attempt to run this, I get:
ERROR: relation "tbl_name" does not exist
LINE 1: SELECT * FROM tbl_name WHERE ch_edit_date >= starttime AND c...
@Pavel already provided a fix for your basic error.
However, since your tbl_name is actually schema-qualified (two separate identifiers in the form schema.table), it cannot be escaped as a whole with %I in format(). You have to escape each identifier individually.
Aside from that, I suggest a different approach. The outer loop is necessary, but the inner loop can be replaced with a simpler and more efficient set-based approach:
CREATE OR REPLACE FUNCTION public.config_changes(_start timestamp, _end timestamp)
RETURNS SETOF text AS
$func$
DECLARE
_tbl text;
BEGIN
FOR _tbl IN
SELECT quote_ident(schemaname) || '.' || quote_ident(tablename)
FROM pg_tables
WHERE schemaname LIKE 'ivr%'
AND tablename LIKE 'tran_%'
LOOP
RETURN QUERY EXECUTE format (
$$
SELECT %1$L || ' ' || ch_field_name
FROM %1$s
WHERE ch_edit_date BETWEEN $1 AND $2
$$, _tbl
)
USING _start, _end;
END LOOP;
RETURN;
END
$func$ LANGUAGE plpgsql;
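A sample call (any two timestamps work):
SELECT * FROM public.config_changes(now()::timestamp - interval '7 days', now()::timestamp);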
You have to use dynamic SQL to parametrize identifiers (or code), like @Pavel already told you. With RETURN QUERY EXECUTE you can return the result of a dynamic query directly. Examples:
Return SETOF rows from PostgreSQL function
Refactor a PL/pgSQL function to return the output of various SELECT queries
Remember that identifiers have to be treated as unsafe user input in dynamic SQL and must always be sanitized to avoid syntax errors and SQL injection:
Table name as a PostgreSQL function parameter
Note how I escape table and schema separately:
quote_ident(schemaname) || '.' || quote_ident(tablename)
Consequently I just use %s to insert the already escaped table name in the later query, and %L to escape it as a string literal for output.
I like to prepend parameter and variable names with _ to avoid naming conflicts with column names. No other special meaning.
There is a slight difference compared to your original function. This one returns an escaped identifier (double-quoted only where necessary) as table name, e.g.:
"WeIRD name"
instead of
WeIRD name
Much simpler yet
If possible, use inheritance to obviate the need for above function altogether. Complete example:
Select (retrieve) all records from multiple schemas using Postgres
You cannot use a plpgsql variable as SQL table name or SQL column name. In this case you have to use dynamic SQL:
FOR tran_row IN
EXECUTE format('SELECT * FROM %I
WHERE ch_edit_date >= starttime AND ch_edit_date <= endtime', tbl_name)
LOOP
out_record := tbl_name || ' ' || tran_row.ch_field_name;
RETURN NEXT out_record;
END LOOP;
I have a stored procedure like below:
CREATE FUNCTION select_transactions3(text, text, int)
RETURNS SETOF transactions AS
$body$
DECLARE
rec transactions%ROWTYPE;
BEGIN
FOR rec IN (SELECT invoice_no, trans_date FROM transactions WHERE $1 = $2 limit $3 )
LOOP
RETURN NEXT rec;
END LOOP;
END;
$body$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
When I execute the query like this:
select * from select_transactions3("invoice_no", '1103300105472',10);
or
select * from select_transactions3(invoice_no, '1103300105472',10);
I get an error like this:
ERROR: column "invoice_no" does not exist
but when I try to execute it with single quotes like this:
select * from select_transactions3('invoice_no', '1103300105472',10);
the result is no rows.
How can I get the data like this:
invoice_no | trans_date
---------------+-------------------------
1103300105472 | 2011-03-30 12:25:35.694
Thanks.
UPDATE: If we want to show only certain columns of the table:
CREATE FUNCTION select_to_transactions14(_col character varying, _val character varying, _limit int)
RETURNS SETOF RECORD AS
$$
DECLARE
rec record;
BEGIN
FOR rec IN EXECUTE 'SELECT invoice_no, amount FROM transactions
WHERE ' || _col || ' = $1 LIMIT $2' USING _val, _limit LOOP
RETURN NEXT rec;
END LOOP;
END;
$$ LANGUAGE plpgsql;
to get the result :
SELECT * FROM select_to_transactions14( 'invoice_no', '1103300105472',1)
as ("invoice_no" varchar(125), "amount" numeric(12,2));
Your function could look like this:
CREATE FUNCTION select_transactions3(_col text, _val text, _limit int)
RETURNS SETOF transactions AS
$BODY$
BEGIN
RETURN QUERY EXECUTE '
SELECT *
FROM transactions
WHERE ' || quote_ident(_col) || ' = $1
LIMIT $2'
USING _val, _limit;
END;
$BODY$
LANGUAGE plpgsql VOLATILE SECURITY DEFINER;
In PostgreSQL 9.1 or later that's simpler with format():
...
RETURN QUERY EXECUTE format('
SELECT *
FROM transactions
WHERE %I = $1
LIMIT $2', _col)
USING _val, _limit;
...
%I escapes identifiers like quote_ident().
Major points:
You were bumping into the limitation of dynamic SQL that you cannot use parameters for identifiers. You have to build the query string with the column name and then execute it.
You can do that with values though. I demonstrate the use of the USING clause for EXECUTE. Also note the use of quote_ident(): prevents SQL injection and certain syntax errors.
I also largely simplified your function. RETURN QUERY EXECUTE makes your code shorter and faster. No need to loop if all you do is return the row.
I use named IN parameters, so you don't get confused with the $-notation in the query string. $1 and $2 inside the query string refer to the values provided in the USING clause, not to the input parameters.
I change to SELECT * as you have to return the whole row to match the declared return type anyway.
Last but not least: Be sure to consider what the manual has to say about functions declared SECURITY DEFINER.
RETURN TYPE
If you don't want to return the whole row, one convenient possibility is:
CREATE FUNCTION select_transactions3(_col text, _val text, _limit int)
RETURNS TABLE (invoice_no varchar(125), amount numeric(12,2)) AS ...
Then you don't have to provide a column definition list with every call and can simplify to:
SELECT * FROM select_transactions3('invoice_no', '1103300105472', 1);
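Spelled out, that variant could look like this (a sketch, assuming invoice_no and amount in transactions actually have the declared types; drop the earlier version of the function first, since the return type changes):
CREATE FUNCTION select_transactions3(_col text, _val text, _limit int)
  RETURNS TABLE (invoice_no varchar(125), amount numeric(12,2)) AS
$BODY$
BEGIN
   RETURN QUERY EXECUTE format('
      SELECT invoice_no, amount
      FROM   transactions
      WHERE  %I = $1
      LIMIT  $2', _col)
   USING _val, _limit;
END;
$BODY$ LANGUAGE plpgsql;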
You can also query the information schema to get the column names of any table in your database:
SELECT column_name
FROM INFORMATION_SCHEMA.COLUMNS
WHERE TABLE_NAME = 'tableName';
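For example (a sketch), that result can be aggregated into a single, properly quoted column list ready to be spliced into dynamic SQL:
SELECT string_agg(quote_ident(column_name), ', ' ORDER BY ordinal_position)
FROM   information_schema.columns
WHERE  table_name = 'tableName';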