Error: a column definition list is required for functions in dblink using PostgreSQL 9.3

I have the following function, in which I update one database table by joining it to a table in another database via dblink(). I have installed the extension:
create extension dblink;
The details are shown below:
CREATE OR REPLACE FUNCTION Fun_test
(
    Table_Name varchar
)
RETURNS void AS
$BODY$
DECLARE
    dynamic_statement varchar;
BEGIN
    PERFORM dblink_connect('port=5234 dbname=testdb user=postgres password=****');
    dynamic_statement := 'With CTE AS
    (
        Select HNumber, JoiningDate, Name, Address
        From ' || Table_Name || ' c
    )
    , Test_A AS
    (
        Select Row_Number() over (Partition by PNumber order by Date1 Desc, Date2 Desc) AS roNum,
               Name, PNumber, Date1, Address
        From dblink(
            ''Select distinct PNumber,
            (
                case when fname is null then '' else fname end || '' ||
                case when lname is null then '' else lname end
            ) as FullName,
            Address,
            Date1, Date2
            From testdb_Table
            inner join CTE on CTE.HNumber = PNumber''
        ) Num
    )
    Update CTE
    Set Name = Test_A.FullName
      , SubAddress_A = Test_A.Address
      , Date1 = Test_A.Date1
    from CTE
    left outer join Test_A on CTE.HNumber = Test_A.PNumber
    where roNum = 1';
    RAISE INFO '%', dynamic_statement;
    EXECUTE dynamic_statement;
    PERFORM dblink_disconnect();
END;
$BODY$
LANGUAGE plpgsql;
Calling Function:
select fun_test('test1');
Getting an error:
ERROR: a column definition list is required for functions returning "record"
LINE 11: From dblink
^

You have to tell PostgreSQL what columns the dblink query will return.
See the manual for dblink for details.
This is the same as for any function returning a runtime-determined record type: you can't query it without telling PostgreSQL what the column layout of the result will be.
You use a column specifier list, e.g.
SELECT * FROM my_function_returning_record() f(col1 text, col2 integer);
If you are on a current PostgreSQL version you may want to look at postgres_fdw as an alternative to dblink.
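Applied to the dblink() call above, that means attaching a column definition list to the call itself; a minimal sketch (it assumes the connection opened by dblink_connect(), and the column names and types are assumptions that must match what the remote query actually returns):
SELECT *
FROM dblink('SELECT pnumber, fullname, address, date1, date2 FROM testdb_table')
     AS num (pnumber integer, fullname text, address text, date1 date, date2 date);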

Related

Postgresql : null values cannot be formatted as an SQL identifier

How do I resolve this "ERROR: null values cannot be formatted as an SQL identifier", raised when calling my function?
select ws_sls_core.ars_pricing_test()
ERROR: null values cannot be formatted as an SQL identifier
CONTEXT: SQL statement "select string_agg(distinct format('(props ->> %L) as %I', w_order_line_d.matl_grp2_desc, w_order_line_d.matl_grp2_desc), ', ')
from ws_sls_core.w_support_pricing_d
left join ws_sls_core.w_order_line_d
on w_support_pricing_d.svc_pricing_type = w_order_line_d.matl_grp2_cd"
PL/pgSQL function ars_pricing_test() line 6 at SQL statement
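For context, format()'s %I specifier raises exactly this error whenever the identifier argument is NULL, which can be reproduced in isolation:
SELECT format('%I', NULL::text);
-- ERROR:  null values cannot be formatted as an SQL identifier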
I have checked, and the query in use doesn't produce any NULLs:
select * from ws_sls_core.w_support_pricing_d where svc_pricing_type is null
I have tried the code below without the JOIN and it works fine; I need an additional column and have to use the join, and only then do I see this error.
Here is my complete code:
CREATE OR REPLACE FUNCTION ws_sls_core.ars_pricing_test(
)
RETURNS boolean
LANGUAGE 'plpgsql'
COST 100
VOLATILE
AS $BODY$
declare
l_sql text;
l_columns text;
begin
select string_agg(distinct format('(props ->> %L) as %I', w_order_line_d.matl_grp2_desc, w_order_line_d.matl_grp2_desc), ', ')
into l_columns
from ws_sls_core.w_support_pricing_d
left join ws_sls_core.w_order_line_d
on w_support_pricing_d.svc_pricing_type = w_order_line_d.matl_grp2_cd;
-- and A.svc_pricing_type is not null;
l_sql :=
'create or replace view ars_pricing_test as
select w_support_pricing_d.item_num, '||l_columns||'
from (
select w_support_pricing_d.item_num, json_object_agg(w_order_line_d.matl_grp2_desc,w_support_pricing_d.mnth_maint_price) as props
from ws_sls_core.w_support_pricing_d
left join ws_sls_core.w_order_line_d
on w_support_pricing_d.svc_pricing_type = w_order_line_d.matl_grp2_cd
group by w_support_pricing_d.item_num
) t';
execute l_sql;
return true;
end;
$BODY$;

postgres lag and window to create cohort table [duplicate]

I am trying to create crosstab queries in PostgreSQL such that the crosstab columns are generated automatically instead of being hardcoded. I have written a function that dynamically generates the column list I need for my crosstab query. The idea is to substitute the result of this function into the crosstab query using dynamic SQL.
I know how to do this easily in SQL Server, but my limited knowledge of PostgreSQL is hindering my progress here. I was thinking of storing the result of the function that generates the dynamic column list in a variable and using that to build the SQL query dynamically. It would be great if someone could guide me here.
-- Table which is to be pivoted
CREATE TABLE test_db
(
kernel_id int,
key int,
value int
);
INSERT INTO test_db VALUES
(1,1,99),
(1,2,78),
(2,1,66),
(3,1,44),
(3,2,55),
(3,3,89);
-- This function dynamically returns the list of columns for crosstab
CREATE FUNCTION test() RETURNS TEXT AS '
DECLARE
key_id int;
text_op TEXT = '' kernel_id int, '';
BEGIN
FOR key_id IN SELECT DISTINCT key FROM test_db ORDER BY key LOOP
text_op := text_op || key_id || '' int , '' ;
END LOOP;
text_op := text_op || '' DUMMY text'';
RETURN text_op;
END;
' LANGUAGE 'plpgsql';
-- This query works. I just need to convert the static list
-- of crosstab columns to be generated dynamically.
SELECT * FROM
crosstab
(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2',
'SELECT DISTINCT key FROM test_db ORDER BY 1'
)
AS x (kernel_id int, key1 int, key2 int, key3 int); -- How can I replace ..
-- .. this static list with a dynamically generated list of columns ?
You can use the provided C function crosstab_hash for this.
The manual is not very clear in this respect. It's mentioned at the end of the chapter on crosstab() with two parameters:
You can create predefined functions to avoid having to write out the
result column names and types in each query. See the examples in the
previous section. The underlying C function for this form of crosstab
is named crosstab_hash.
For your example:
CREATE OR REPLACE FUNCTION f_cross_test_db(text, text)
RETURNS TABLE (kernel_id int, key1 int, key2 int, key3 int)
AS '$libdir/tablefunc','crosstab_hash' LANGUAGE C STABLE STRICT;
Call:
SELECT * FROM f_cross_test_db(
'SELECT kernel_id, key, value FROM test_db ORDER BY 1,2'
,'SELECT DISTINCT key FROM test_db ORDER BY 1');
Note that you need to create a distinct crosstab_hash function for every crosstab function with a different return type.
Related:
PostgreSQL row to columns
Your function to generate the column list is rather convoluted and its result is incorrect (int is missing after kernel_id). It can be replaced with this SQL query:
SELECT 'kernel_id int, '
|| string_agg(DISTINCT key::text, ' int, ' ORDER BY key::text)
|| ' int, DUMMY text'
FROM test_db;
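With the sample data above, this yields the column list: kernel_id int, 1 int, 2 int, 3 int, DUMMY text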
And it cannot be used dynamically anyway.
@erwin-brandstetter: The return type of the function isn't an issue if you're always returning a JSON type with the converted results.
Here is the function I came up with:
CREATE OR REPLACE FUNCTION report.test(
i_start_date TIMESTAMPTZ,
i_end_date TIMESTAMPTZ,
i_interval INT
) RETURNS TABLE (
tab JSON
) AS $ab$
DECLARE
_key_id TEXT;
_text_op TEXT = '';
_ret JSON;
BEGIN
-- SELECT DISTINCT for query results
FOR _key_id IN
SELECT DISTINCT at_name
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start BETWEEN i_start_date AND i_end_date
AND interval_type_id = i_interval
LOOP
-- build function_call with datatype of column
IF char_length(_text_op) > 1 THEN
_text_op := _text_op || ', ' || _key_id || ' NUMERIC(20,2)';
ELSE
_text_op := _text_op || _key_id || ' NUMERIC(20,2)';
END IF;
END LOOP;
-- build query with parameter filters
RETURN QUERY
EXECUTE '
SELECT array_to_json(array_agg(row_to_json(t)))
FROM (
SELECT * FROM crosstab(''SELECT date_start, at.at_name, cda.amount ct
FROM report.company_data_date cd
JOIN report.company_data_amount cda ON cd.id = cda.company_data_date_id
JOIN report.amount_types at ON cda.amount_type_id = at.id
WHERE date_start between $$' || i_start_date::TEXT || '$$ AND $$' || i_end_date::TEXT || '$$
AND interval_type_id = ' || i_interval::TEXT || ' ORDER BY date_start'')
AS ct (date_start timestamptz, ' || _text_op || ')
) t;';
END;
$ab$ LANGUAGE 'plpgsql';
So, when you run it, you get the dynamic results in JSON, and you don't need to know how many values were pivoted:
select * from report.test(now()- '1 week'::interval, now(), 1);
tab
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{"date_start":"2015-07-27T08:40:01.277556-04:00","burn_rate":0.00,"monthly_revenue":5800.00,"cash_balance":0.00},{"date_start":"2015-07-27T08:50:02.458868-04:00","burn_rate":34000.00,"monthly_revenue":15800.00,"cash_balance":24000.00}]
(1 row)
Edit: If you have mixed datatypes in your crosstab, you can add logic to look it up for each column with something like this:
SELECT a.attname as column_name, format_type(a.atttypid, a.atttypmod) AS data_type
FROM pg_attribute a
JOIN pg_class b ON (a.attrelid = b.relfilenode)
JOIN pg_catalog.pg_namespace n ON n.oid = b.relnamespace
WHERE n.nspname = $$schema_name$$ AND b.relname = $$table_name$$ and a.attstattarget = -1;
I realise this is an older post, but I struggled for a little while on the same issue.
My problem statement:
I had a table with multiple values in a field and wanted to create a crosstab query with 40+ column headings per row.
My solution was to create a function which looped through the table column to grab the values that I wanted to use as column headings within the crosstab query.
Within this function I could then create the crosstab query. In my use case I added the crosstab result into a separate table.
E.g.
CREATE OR REPLACE FUNCTION field_values_ct ()
RETURNS VOID AS $$
DECLARE rec RECORD;
DECLARE str text;
BEGIN
str := '"Issue ID" text,';
-- looping to get column heading string
FOR rec IN SELECT DISTINCT field_name
FROM issue_fields
ORDER BY field_name
LOOP
str := str || '"' || rec.field_name || '" text' ||',';
END LOOP;
str:= substring(str, 0, length(str));
EXECUTE 'CREATE EXTENSION IF NOT EXISTS tablefunc;
DROP TABLE IF EXISTS temp_issue_fields;
CREATE TABLE temp_issue_fields AS
SELECT *
FROM crosstab(''select issue_id, field_name, field_value from issue_fields order by 1'',
''SELECT DISTINCT field_name FROM issue_fields ORDER BY 1'')
AS final_result ('|| str ||')';
END;
$$ LANGUAGE plpgsql;
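A call would then look like this (the function itself creates and fills temp_issue_fields):
SELECT field_values_ct();
SELECT * FROM temp_issue_fields;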
The approach described here worked well for me: instead of retrieving the pivot table directly, the easier approach is to let the function generate a SQL query string and then execute the resulting string on demand.

Perform query using tables and columns from information_schema

I'm trying to use information_schema.columns to find all of the columns in my database that have a geometry type, and then check the SRID of the data in those columns.
I can do this with multiple queries where I first find the table names and column names
SELECT table_name, column_name
FROM information_schema.columns
WHERE udt_name = 'geometry';
and then (manually)
SELECT ST_SRID(column_name)
FROM table_name;
for each entry.
Does anyone know how to streamline this into a single query?
Table names can't be variable; Postgres needs to be able to come up with an execution plan before it knows the parameter values. So you can't do this in a simple SQL statement.
Instead, you need to construct a dynamic query string using a procedural language like PL/pgSQL:
CREATE FUNCTION SRIDs() RETURNS TABLE (
tablename TEXT,
columnname TEXT,
srid INTEGER
) AS $$
BEGIN
FOR tablename, columnname IN (
SELECT table_name, column_name
FROM information_schema.columns
WHERE udt_name = 'geometry'
)
LOOP
EXECUTE format(
'SELECT ST_SRID(%I) FROM %I',  -- %I safely quotes the identifiers
columnname, tablename
) INTO srid;
RETURN NEXT;
END LOOP;
END
$$
LANGUAGE plpgsql;
SELECT * FROM SRIDs();
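Note that format()'s %I quotes an identifier only when necessary; for example (hypothetical names):
SELECT format('SELECT ST_SRID(%I) FROM %I', 'geom', 'my table');
-- SELECT ST_SRID(geom) FROM "my table"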

How to execute dynamic query in PostgreSQL?

I am trying to execute a dynamic query in PostgreSQL. I have a function with three parameters; I need to append those parameters to some prefixes (to build two temp table names and a view name), retrieve rows from that view, and return the result.
Example
create or replace function testing(abc varchar, def varchar, ghi varchar)
returns setof record as
$BODY$
declare
    temptable1 varchar := 'temp1_';
    temptable2 varchar := 'temp2_';
    viewname   varchar := 'view_';
begin
    temptable1 := temptable1 || abc;
    temptable2 := temptable2 || def;
    viewname   := viewname || ghi;
    execute 'Drop table if exists ' || temptable1;
    execute 'Drop table if exists ' || temptable2;
    WITH cm AS
    (
        SELECT "ssno", "rlno",
               DENSE_RANK() OVER (Partition by "ssno" Order By "rlno") FoundIn
        From viewname
    )
    SELECT DISTINCT * INTO temptable1
    FROM cm
    WHERE FoundIn > 1;
    SELECT DISTINCT cr."ssno", cr."rlno"
    INTO temptable2
    FROM temptable1 l1
    INNER JOIN viewname cr on l1."rlno" = cr."rlno"
    ORDER BY "rlno";
    /* The result of the query below should be returned */
    SELECT DISTINCT cr.ssno AS Nos, cr.rlno, FoundIn, cr.Name, cr.Address
    from temptable1 l1
    inner join viewname cr on l1.rlno = cr.rlno
    order by "rlno";
end;
$BODY$
Language plpgsql;
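For reference, identifiers such as the view name cannot be used as variables inside static SQL; they have to be spliced into a query string and run with EXECUTE. A minimal sketch of the pattern (the function name is illustrative, and it assumes the view has text columns ssno and rlno):
create or replace function testing_sketch(ghi varchar)
returns table (ssno text, rlno text, foundin bigint) as
$BODY$
begin
    -- format(%I) quotes the identifier safely; never concatenate raw input
    return query execute format(
        'select "ssno", "rlno",
                dense_rank() over (partition by "ssno" order by "rlno")
         from %I', 'view_' || ghi);
end;
$BODY$
language plpgsql;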

EXECUTE...INTO...USING statement in PL/pgSQL can't execute into a record?

I'm attempting to write part of a function in PL/pgSQL that loops through an hstore and sets a record's column (the key of the hstore) to a specific value (the value of the hstore). I'm using Postgres 9.1.
The hstore will look like: ' "column1"=>"value1","column2"=>"value2" '
Generally, here is what I want from a function that takes in an hstore and has a record with values to modify:
FOR my_key, my_value IN
SELECT key,
value
FROM EACH( in_hstore )
LOOP
EXECUTE 'SELECT $1'
INTO my_row.my_key
USING my_value;
END LOOP;
The error which I am getting with this code is:
"myrow" has no field "my_key"
I've been searching for quite a while for a solution, but everything else I've tried to achieve the same result hasn't worked.
Here is a simpler alternative to your posted answer; it should perform much better.
This function retrieves a row from a given table (in_table_name) and primary key value (in_row_pk), and inserts it as new row into the same table, with some values replaced (in_override_values). The new primary key value as per default is returned (pk_new).
CREATE OR REPLACE FUNCTION f_clone_row(in_table_name regclass
, in_row_pk int
, in_override_values hstore
, OUT pk_new int)
LANGUAGE plpgsql AS
$func$
DECLARE
_pk text; -- name of PK column
_cols text; -- list of names of other columns
BEGIN
-- Get name of PK column
SELECT INTO _pk a.attname
FROM pg_catalog.pg_index i
JOIN pg_catalog.pg_attribute a ON a.attrelid = i.indrelid
AND a.attnum = i.indkey[0] -- single PK col!
WHERE i.indrelid = in_table_name
AND i.indisprimary;
-- Get list of columns excluding PK column
SELECT INTO _cols string_agg(quote_ident(attname), ',')
FROM pg_catalog.pg_attribute
WHERE attrelid = in_table_name -- regclass used as OID
AND attnum > 0 -- exclude system columns
AND attisdropped = FALSE -- exclude dropped columns
AND attname <> _pk; -- exclude PK column
-- INSERT cloned row with override values, returning new PK
EXECUTE format('
INSERT INTO %1$I (%2$s)
SELECT %2$s
FROM (SELECT (t #= $1).* FROM %1$I t WHERE %3$I = $2) x
RETURNING %3$I'
, in_table_name, _cols, _pk)
USING in_override_values, in_row_pk -- use override values directly
INTO pk_new; -- return new pk directly
END
$func$;
Call:
SELECT f_clone_row('tbl', 1, '"col1"=>"foo_new","col2"=>"bar_new"');
Use regclass as input parameter type, so only valid table names can be used to begin with and SQL injection is ruled out. The function also fails earlier and more gracefully if you should provide an illegal table name.
Use an OUT parameter (pk_new) to simplify the syntax.
No need to figure out the next value for the primary key manually. It is inserted automatically and returned after the fact. That's not only simpler and faster, you also avoid wasted or out-of-order sequence numbers.
Use format() to simplify the assembly of the dynamic query string and make it less error-prone. Note how I use positional parameters for identifiers and unquoted strings respectively.
I build on your implicit assumption that allowed tables have a single primary key column of type integer with a column default. Typically serial columns.
Key element of the function is the final INSERT:
Merge override values with the existing row using the #= operator in a subselect and decompose the resulting row immediately.
Then you can select only relevant columns in the main SELECT.
Let Postgres assign the default value for the PK and get it back with the RETURNING clause.
Write the returned value into the OUT parameter directly.
All done in a single SQL command, that is generally fastest.
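For illustration, the #= operator (from the hstore extension) overwrites exactly those fields of a record that have matching hstore keys; with a hypothetical table tbl:
SELECT (t #= '"col2"=>"new"'::hstore).* FROM tbl t WHERE tbl_id = 1;
-- the row with col2 replaced by 'new', all other columns unchanged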
Since I didn't want to have to use any external functions for speed purposes, I created a solution using hstores to insert a record into a table:
CREATE OR REPLACE FUNCTION fn_clone_row(in_table_name character varying, in_row_pk integer, in_override_values hstore)
RETURNS integer
LANGUAGE plpgsql
AS $function$
DECLARE
my_table_pk_col_name varchar;
my_key text;
my_value text;
my_row record;
my_pk_default text;
my_pk_new integer;
my_pk_new_text text;
my_row_hstore hstore;
my_row_keys text[];
my_row_keys_list text;
my_row_values text[];
my_row_values_list text;
BEGIN
-- Get the next value of the pk column for the table.
SELECT ad.adsrc,
at.attname
INTO my_pk_default,
my_table_pk_col_name
FROM pg_attrdef ad
JOIN pg_attribute at
ON at.attnum = ad.adnum
AND at.attrelid = ad.adrelid
JOIN pg_class c
ON c.oid = at.attrelid
JOIN pg_constraint cn
ON cn.conrelid = c.oid
AND cn.contype = 'p'
AND cn.conkey[1] = at.attnum
JOIN pg_namespace n
ON n.oid = c.relnamespace
WHERE c.relname = in_table_name
AND n.nspname = 'public';
-- Get the next value of the pk in a local variable
EXECUTE ' SELECT ' || my_pk_default
INTO my_pk_new;
-- Set the integer value back to text for the hstore
my_pk_new_text := my_pk_new::text;
-- Add the next value statement to the hstore of changes to make.
in_override_values := in_override_values || hstore( my_table_pk_col_name, my_pk_new_text );
-- Copy over only the given row to the record.
EXECUTE ' SELECT * '
' FROM ' || quote_ident( in_table_name ) ||
' WHERE ' || quote_ident( my_table_pk_col_name ) ||
' = ' || quote_nullable( in_row_pk )
INTO my_row;
-- Replace the values that need to be changed in the column name array
my_row := my_row #= in_override_values;
-- Create an hstore of my record
my_row_hstore := hstore( my_row );
-- Create a string of comma-delimited, quote-enclosed column names
my_row_keys := akeys( my_row_hstore );
SELECT array_to_string( array_agg( quote_ident( x.colname ) ), ',' )
INTO my_row_keys_list
FROM ( SELECT unnest( my_row_keys ) AS colname ) x;
-- Create a string of comma-delimited, quote-enclosed column values
my_row_values := avals( my_row_hstore );
SELECT array_to_string( array_agg( quote_nullable( x.value ) ), ',' )
INTO my_row_values_list
FROM ( SELECT unnest( my_row_values ) AS value ) x;
-- Insert the values into the columns of a new row
EXECUTE 'INSERT INTO ' || in_table_name || '(' || my_row_keys_list || ')'
' VALUES (' || my_row_values_list || ')';
RETURN my_pk_new;
END
$function$;
It's quite a bit longer than what I had envisioned, but it works and is actually quite speedy.
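A call, with hypothetical table and column names:
SELECT fn_clone_row('my_table', 5, '"name"=>"copy of row 5"'::hstore);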