PostgreSQL: Create foreign table with string and variable

My problem: I need to define a foreign table dynamically and set different WHERE conditions each time. I am doing this in a function, but during creation of the foreign table (via oracle_fdw) I get an error that doesn't make sense to me.
This creation of a foreign table works:
CREATE FOREIGN TABLE MYFOREIGNTABLE
(
column1 int,
column2 text
)
SERVER fwdb
OPTIONS (table $$(
select
column1,
column2
from
table1
where
column3 = 5
and column4 = 'a'
)$$);
Now if I try to split the string to insert my variables (I left a literal number in place of the variable so anybody can try it), it stops working and I get this error:
[Code: 0, SQL State: 42601] ERROR: syntax error at or near "||"
CREATE FOREIGN TABLE MYFOREIGNTABLE
(
column1 int,
column2 text
)
SERVER fwdb
OPTIONS (table $$(
select
column1,
column2
from
table1
where
column3 = $$ || 5 || $$
and column4 = 'a'
)$$);
Just to be sure, I ran my string in a SELECT to check that I hadn't made a syntax mistake, and it works without a problem:
select $$(
select
column1,
column2
from
table1
where
column3 = $$ || 5 || $$
and column4 = 'a'
)$$
I tried a few other things, like using concat() or putting my whole string into a variable and writing OPTIONS (table myvariable), but neither worked. What is the correct syntax here?
PostgreSQL 11.10 on x86_64-pc-linux-gnu, compiled by gcc (GCC) 4.8.5
20150623 (Red Hat 4.8.5-39), 64-bit

You have to use a string literal as the value of an FDW option; expressions like the string concatenation you are trying to use are not allowed.
You will have to construct the complete statement with dynamic SQL, for example:
DO
$$DECLARE
   var integer := 5;
BEGIN
   EXECUTE
      format(
         E'CREATE FOREIGN TABLE MYFOREIGNTABLE (\n'
         ' column1 int,\n'
         ' column2 text\n'
         ') SERVER fwdb OPTIONS (\n'
         ' table ''(SELECT column1,\n'
         ' column2\n'
         ' FROM table1\n'
         ' WHERE column3 = %s\n'
         ' AND column4 = ''''a'''')'')',
         var
      );
END;$$;
For string variables you have to get the quoting right by using quote_literal(quote_literal(var)).
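To see why the double call is needed: the value has to survive two levels of string-literal parsing, once inside the remote query text and once as the option's own literal. A rough Python mimic of quote_literal() (ignoring backslash handling) illustrates how the quotes stack up:

```python
def quote_literal(s):
    # rough mimic of PostgreSQL's quote_literal() for strings without
    # backslashes: wrap in single quotes and double any embedded ones
    return "'" + s.replace("'", "''") + "'"

# one level embeds the value in the remote query text,
# the second escapes that text for the option's string literal
print(quote_literal("O'Brien"))                 # 'O''Brien'
print(quote_literal(quote_literal("O'Brien")))  # '''O''''Brien'''
```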

Related

Concatenate string instead of just replacing it

I have a table with standard columns where I want to perform regular INSERTs.
But one of the columns is of type varchar with special semantics. It's a string that's supposed to behave as a set of strings, where the elements of the set are separated by commas.
E.g. if one row has in that varchar column the value fish,sheep,dove, and I insert the string ,fish,eagle, I want the result to be fish,sheep,dove,eagle (i.e. eagle gets added to the set, but fish doesn't because it's already in the set).
I have here this Postgres code that does the "set concatenation" that I want:
SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array('fish,sheep,dove' || ',fish,eagle', ','))) AS x;
But I can't figure out how to apply this logic to insertions.
What I want is something like:
CREATE TABLE IF NOT EXISTS t00(
userid int8 PRIMARY KEY,
a int8,
b varchar);
INSERT INTO t00 (userid,a,b) VALUES (0,1,'fish,sheep,dove');
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array(t00.b || EXCLUDED.b, ','))) AS x;
How can I achieve something like that?
Storing comma separated values is a huge mistake to begin with. But if you really want to make your life harder than it needs to be, you might want to create a function that merges two comma separated lists:
create function merge_lists(p_one text, p_two text)
  returns text
as
$$
  select string_agg(item, ',')
  from (
    select e.item
    from unnest(string_to_array(p_one, ',')) as e(item)
    where e.item <> '' --< necessary because of the leading , in your data
    union
    select t.item
    from unnest(string_to_array(p_two, ',')) t(item)
    where t.item <> ''
  ) t;
$$
language sql;
If you are using Postgres 14 or later, unnest(string_to_array(..., ',')) can be replaced with string_to_table(..., ',').
Then your INSERT statement gets a bit simpler:
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = merge_lists(excluded.b, t00.b);
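For clarity, the set-merge semantics of merge_lists() can be sketched in Python (note that string_agg() without ORDER BY returns the items in no guaranteed order; the first-seen order below is just for readability):

```python
def merge_lists(p_one, p_two):
    # union of the non-empty comma-separated items of both inputs;
    # the emptiness check guards against leading/trailing commas
    seen = []
    for item in (p_one or '').split(',') + (p_two or '').split(','):
        if item and item not in seen:
            seen.append(item)
    return ','.join(seen)

print(merge_lists('fish,sheep,dove', ',fish,eagle'))  # fish,sheep,dove,eagle
```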
I think I was only missing parentheses around the SELECT statement:
INSERT INTO t00 (userid,a,b) VALUES (0,1,',fish,eagle')
ON CONFLICT (userid)
DO UPDATE SET
a = EXCLUDED.a,
b = (SELECT string_agg(unnest, ',') AS x FROM (SELECT DISTINCT unnest(string_to_array(t00.b || EXCLUDED.b, ','))) AS x);

Using a column of a join as an argument to a table function in PostgreSQL

I am joining a table with a PostgreSQL table function:
SELECT * FROM tb_accounts a,
(SELECT column1, column2 FROM ft_extra_data(a.id) ) e
And it is throwing me the following error:
ERROR: invalid reference to FROM-clause entry for table "a"
LINE 4: ft_extra_data(a.id)
^
HINT: There is an entry for table "a", but it cannot be referenced from this part of the query.
SQL state: 42P01
Character: 153
Here is an example of the table function (shown only to illustrate the definition):
CREATE OR REPLACE FUNCTION ft_extra_data(IN p_account_id bigint)
RETURNS TABLE(column1 character varying, column2 character varying) AS
$BODY$
DECLARE
BEGIN
  RETURN QUERY
  SELECT 'xxx' AS column1, 'yyy' AS column2;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100
ROWS 1000;
I've done some research and haven't found anything. Is this impossible to do?
Code example:
dbfiddle
In order to access table a in the subquery (aka derived table) you must make the join lateral:
SELECT *
FROM tb_accounts a
CROSS JOIN LATERAL ( SELECT column1, column2 FROM ft_extra_data(a.id) ) e;
Alternatively you can put the function directly into the FROM clause without a derived table:
SELECT a.*, ft.column1, ft.column2
FROM tb_accounts a
CROSS JOIN LATERAL ft_extra_data(a.id) as ft;
In that case the LATERAL keyword is optional, because a function call in the FROM clause may reference earlier FROM items anyway.

Dropping multiple columns based on their name?

I'm creating a procedure/function in PostgreSQL. I have an array containing some column names and a temporary table as follows:
columns_names varchar[] := array['A','B','C','D'];
table PQR(A integer, B integer, C integer, X integer, Y integer);
I want to drop columns X and Y (i.e. the columns which are not present in the given array).
Is there any way to achieve this in a single statement?
Something like
alter table pqr drop column where columnName not in column_names
You can do that if, as you mentioned, you are doing this in a function with the language set to plpgsql; there dynamic SQL is possible.
For example:
EXECUTE concat('ALTER TABLE ',
               attrelid::regclass::text, ' ',
               string_agg(concat('DROP COLUMN ', attname), ', '))
FROM pg_attribute
WHERE attnum > 0
  AND NOT attisdropped
  AND attrelid = 'PQR'::regclass
  AND attname != ALL(array['A','B','C','D'])
GROUP BY attrelid;
It will only work for one table; otherwise it will complain about the expression returning more than one row.
If you need more tables, you can use a LOOP and execute the query inside it.
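The statement-assembly step can be sketched in Python, under the assumption that the (table, columns) pairs have already been fetched from pg_attribute:

```python
# hypothetical sketch: build one ALTER TABLE per table, dropping every
# column whose name is not in the keep-list
keep = {'A', 'B', 'C', 'D'}
table_columns = {'pqr': ['A', 'B', 'C', 'X', 'Y']}

statements = []
for table, cols in table_columns.items():
    drops = ['DROP COLUMN ' + c for c in cols if c not in keep]
    if drops:
        statements.append('ALTER TABLE ' + table + ' ' + ', '.join(drops))

print(statements)  # ['ALTER TABLE pqr DROP COLUMN X, DROP COLUMN Y']
```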

Transpose columns and rows in Firebird 2.5

I've written a procedure in Firebird (Dialect 3), which returns me something like this:
column1 | column2 | column3 | column4 | ...
--------|---------|---------|---------|----
1       | 55      | 2.5     | 100€    | ...
The specific column names don't really matter. I access it like this:
SELECT * FROM MY_PROCEDURE(:START_DATE, :END_DATE);
It only returns one row, so I guess I could also access it with EXECUTE PROCEDURE.
Now what I want is to transpose the columns and rows in the result:
row     | result
--------|-------
column1 | 1
column2 | 55
column3 | 2.5
column4 | 100€
...     | ...
What I initially did is something like this:
select 'column1' AS row, column1 AS result
FROM MY_PROCEDURE(:START_DATE, :END_DATE)
union all
select 'column2' AS row, column2 AS result
FROM MY_PROCEDURE(:START_DATE, :END_DATE)
union all
...
Basically one query for each column. It worked. However, eventually I ran into this problem:
Dynamic SQL Error
Too many Contexts of Relation/Procedure/Views. Maximum allowed is 255.
So I need to restructure my script. As you can see, my SQL knowledge is pretty mediocre, and I simply don't know how to fetch each column as a row in a single select.
Would anyone be able to help? Thanks in advance.
Firebird by itself has no unpivot or other built-in support for transposing columns.
The best, and probably best-performing, solution would be to rewrite MY_PROCEDURE (or write an alternative version) to output the rows transposed.
For example, assuming your stored procedure does something like this:
set term #;
create procedure test_1
  returns (id integer, column1 double precision, column2 double precision, column3 double precision)
as
begin
  for select id, column1, column2, column3
      from sometable
      into :id, :column1, :column2, :column3
  do
  begin
    suspend;
  end
end#
set term ;#
You can then rewrite this by manually transposing the values into separate suspends:
set term #;
create procedure test_2
  returns (id integer, columnname varchar(100), columnvalue double precision)
as
  declare column1 double precision;
  declare column2 double precision;
  declare column3 double precision;
begin
  for select id, column1, column2, column3
      from sometable
      into :id, :column1, :column2, :column3
  do
  begin
    columnname = 'column1';
    columnvalue = column1;
    suspend;
    columnname = 'column2';
    columnvalue = column2;
    suspend;
    columnname = 'column3';
    columnvalue = column3;
    suspend;
  end
end#
set term ;#
This will output something like
id  columnname  columnvalue
1   column1     1.0
1   column2     1.5
1   column3     5.0
2   ...etc
This solution does require that all output (columnvalue) has the same type. Otherwise you will need to cast to a common data type.
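The fan-out performed by the suspend-per-column loop can be sketched in Python: each wide row becomes one (id, columnname, columnvalue) row per column (the sample values here are made up for illustration):

```python
# sketch of the transposition: sample wide rows are (id, column1..column3)
names = ('column1', 'column2', 'column3')
wide_rows = [(1, 1.0, 1.5, 5.0), (2, 2.0, 2.5, 6.0)]

narrow_rows = [(row[0], name, value)
               for row in wide_rows
               for name, value in zip(names, row[1:])]

print(narrow_rows[0])    # (1, 'column1', 1.0)
print(len(narrow_rows))  # 6
```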
Alternatively, you could chain the first procedure into the second by using for select * from test_1 into .... This may be more or less efficient depending on the internals of your stored procedure:
set term #;
create procedure test_3
  returns (id integer, columnname varchar(100), columnvalue double precision)
as
  declare column1 double precision;
  declare column2 double precision;
  declare column3 double precision;
begin
  for select id, column1, column2, column3
      from test_1
      into :id, :column1, :column2, :column3
  do
  begin
    columnname = 'column1';
    columnvalue = column1;
    suspend;
    columnname = 'column2';
    columnvalue = column2;
    suspend;
    columnname = 'column3';
    columnvalue = column3;
    suspend;
  end
end#
set term ;#
This last option is probably best if you need both variants of the output, as it means you have only a single place for the logic of that stored procedure.
For ad-hoc querying, you can also replace the stored procedure with an execute block with the same code.

Duplicate single database record

Hello, what is the easiest way to duplicate a DB record in the same table?
My problem is that the table where I am doing this has many columns, 100+, and I don't like how the solution looks. Here is what I do (this is inside a plpgsql function):
...
-- 1. duplicate the record
INSERT INTO history
(SELECT NEXTVAL('history_id_seq'), col_1, col_2, ... , col_100
 FROM history
 WHERE history_id = 1234
 ORDER BY datetime DESC
 LIMIT 1)
RETURNING history_id INTO new_history_id;
-- 2. update some columns
UPDATE history
SET col_5 = 'test_5',
    col_23 = 'test_23',
    datetime = CURRENT_TIMESTAMP
WHERE history_id = new_history_id;
Here are the problems I am attempting to solve:
Listing all these 100+ columns looks lame.
When a new column is eventually added, the function has to be updated too.
On separate DB instances the column order might differ, which would cause the function to fail.
I am not sure if I can list them once more (solving issue 3), like insert into <table> (<columns_list>) values (<query>), but then the query looks even uglier.
I would like to achieve something like 'insert into ', but this seems impossible: the unique primary key constraint will raise a duplication error.
Any suggestions?
Thanks in advance for your time.
This isn't pretty or particularly optimized, but there are a couple of ways to go about it. Ideally you might want to do this all in an UPDATE trigger, though you could implement a duplication function something like this:
-- create source table
CREATE TABLE history (history_id serial not null primary key, col_2 int, col_3 int, col_4 int, datetime timestamptz default now());
-- add some data
INSERT INTO history (col_2, col_3, col_4)
SELECT g, g * 10, g * 100 FROM generate_series(1, 100) AS g;
-- function to duplicate record
CREATE OR REPLACE FUNCTION fn_history_duplicate(p_history_id integer) RETURNS SETOF history AS
$BODY$
DECLARE
  cols text;
  insert_statement text;
BEGIN
  -- build the list of columns, excluding the primary key
  SELECT array_to_string(array_agg(column_name::name), ',') INTO cols
  FROM information_schema.columns
  WHERE (table_schema, table_name) = ('public', 'history')
    AND column_name <> 'history_id';
  -- build the insert statement
  insert_statement := 'INSERT INTO history (' || cols || ') SELECT ' || cols || ' FROM history WHERE history_id = $1 RETURNING *';
  -- execute it
  RETURN QUERY EXECUTE insert_statement USING p_history_id;
  RETURN;
END;
$BODY$
LANGUAGE plpgsql;
-- test
SELECT * FROM fn_history_duplicate(1);
history_id | col_2 | col_3 | col_4 | datetime
------------+-------+-------+-------+-------------------------------
101 | 1 | 10 | 100 | 2013-04-15 14:56:11.131507+00
(1 row)
As I noted in my original comment, you might also take a look at the colnames extension as an alternative to querying the information schema.
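The heart of the function is plain string assembly; here is a Python sketch of that step, assuming the column names were already fetched from information_schema.columns as in the function above:

```python
# hypothetical sketch: build the dynamic INSERT from a fetched column
# list, excluding the serial primary key so it gets a fresh value
columns = ['history_id', 'col_2', 'col_3', 'col_4', 'datetime']
cols = ','.join(c for c in columns if c != 'history_id')
stmt = ('INSERT INTO history (' + cols + ') SELECT ' + cols +
        ' FROM history WHERE history_id = $1 RETURNING *')

print(stmt)
```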
You don't need the UPDATE anyway; you can supply the constant values directly in the SELECT statement:
INSERT INTO history
SELECT NEXTVAL('history_id_seq'),
       col_1,
       col_2,
       col_3,
       col_4,
       'test_5',
       ...
       'test_23',
       ...,
       col_100
FROM history
WHERE history_id = 1234
ORDER BY datetime DESC
LIMIT 1
RETURNING history_id INTO new_history_id;