I am trying to pass XML nodes as a parameter, extract values from them, and insert those values into a table. This works if I pass the entire XML node literally, but if I try to pass it through a variable I get a syntax error. Since the XML will be dynamic and needs to be parsed, I have to pass it via a variable. A simplified version of what I am trying to achieve is below. I am getting the error: syntax error at or near "xmlvalue".
CREATE TEMPORARY TABLE SAMPLE(Id text,Author text,Title text);
xmlvalue text := '<Book><Id>1</Id><Author>Subhrendu</Author><Title>Postgre</Title></Book>';
with data as (
select xmlvalue::xml val)
--select '<Book><Id>1</Id><Author>Subhrendu</Author><Title>Postgre</Title></Book>'::xml val)
INSERT INTO SAMPLE(Id, Author,Title)
SELECT Id, Author,Title
FROM data x,
XMLTABLE('/Book'
PASSING val
COLUMNS
Id text PATH 'Id',
Author text PATH 'Author',
Title text PATH 'Title' )
;
select * from sample
Edit 1: As suggested, I am now trying to wrap the above code inside a function, since variables cannot be used outside procedures/functions.
create or replace function xml()
returns table(Id text,Author Text,Title Text)
as $$
declare
xmlvalue text := '<Book><Id>1</Id><Author>Subhrendu</Author><Title>Postgre</Title></Book>';
begin
CREATE TEMPORARY TABLE SAMPLE(Id text,Author text,Title text);
with data as (
select xmlvalue::xml val)
INSERT INTO SAMPLE(Id, Author,Title)
SELECT Id, Author,Title
FROM data x,
XMLTABLE('/Book'
PASSING val
COLUMNS
Id text PATH 'Id',
Author text PATH 'Author',
Title text PATH 'Title' )
;
return query
select s.Id,s.Author,s.Title from sample s ;
end;
$$
language plpgsql
While trying to execute the above function I get the error below. My understanding is that I have to qualify the column with a table alias.
ERROR: column reference "id" is ambiguous
LINE 4: SELECT Id, Author,Title
^
DETAIL: It could refer to either a PL/pgSQL variable or a table column.
QUERY: with data as (
select xmlvalue::xml val)
INSERT INTO SAMPLE(Id, Author,Title)
SELECT Id, Author,Title
FROM data x,
XMLTABLE('/Book'
PASSING val
COLUMNS
Id text PATH 'Id',
Author text PATH 'Author',
Title text PATH 'Title' )
CONTEXT: PL/pgSQL function xml() line 6 at SQL statement
SQL state: 42702
This works.
create or replace function xml()
returns table(Id text,Author Text,Title Text)
as $$
declare
xmlvalue text := '<Book><Id>1</Id><Author>Subhrendu</Author><Title>Postgre</Title></Book>';
begin
CREATE TEMPORARY TABLE SAMPLE(Id text,Author text,Title text);
with data as (
select xmlvalue::xml val)
INSERT INTO SAMPLE(Id, Author,Title)
SELECT d.Id,d.Author,d.Title
FROM data x,
XMLTABLE('/Book'
PASSING val
COLUMNS
Id text PATH 'Id',
Author text PATH 'Author',
Title text PATH 'Title' ) as d
;
return query
select s.Id,s.Author,s.Title from sample s ;
end;
$$
language plpgsql
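Since the eventual goal is to parse dynamic XML, the hard-coded xmlvalue can also become a function parameter. Here is a minimal sketch (not from the original post; the function name xml_to_sample is my own, and it skips the temporary table and returns the parsed rows directly):
create or replace function xml_to_sample(_xmlvalue text)
returns table(Id text, Author text, Title text)
as $$
begin
    -- the text parameter is cast to xml inside the query;
    -- qualifying the columns with d. avoids the ambiguity error
    return query
    SELECT d.Id, d.Author, d.Title
    FROM XMLTABLE('/Book'
                  PASSING _xmlvalue::xml
                  COLUMNS
                      Id     text PATH 'Id',
                      Author text PATH 'Author',
                      Title  text PATH 'Title') as d;
end;
$$
language plpgsql;
-- example call:
select * from xml_to_sample('<Book><Id>1</Id><Author>Subhrendu</Author><Title>Postgre</Title></Book>');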
I have the following query that I call from .NET Core:
select * from crosstab(
$$select
cast(s.sample_name as text) as sample_name,
cast(dt.demographic_type_name as text) as demographic_type_name,
cast(sdd.demographic_value as text) as demographic_value
from
sample s
inner join sample_demographic_data sdd on s.sample_id = sdd.sample_id
inner join demographic_type dt on sdd.demographic_type_id = dt.demographic_type_id and dt.sort_order >= 0
where
s.sample_id in (0,14,28)
$$
) as t (
sample_name text,
"SEX" text,
"TUMOR_STAGE" text,
"RACE" text,
"TUMOR_TYPE" text,
"PATIENT_AGE" text);
This produces the expected pivoted output (one row per sample, one column per demographic type).
However, I'd like to pass in an array of int for the sample Ids. I changed the where clause to look like this:
s.sample_id = ANY(#sample_id)
and added code in the .NET Core application to supply the parameter:
cmd.Parameters.Add("#sample_id", NpgsqlDbType.Array | NpgsqlDbType.Integer).Value = sampleIds;
When I do that, I get an error:
Npgsql.PostgresException (0x80004005): 42702: column reference "sample_id" is ambiguous
My assumption is that the parameter is not playing nicely with the $$-quoted query for crosstab. What is the correct syntax for including a parameter inside the $$-quoted part of a crosstab query?
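One possible workaround (not from the original post, just a sketch with assumed names): because the $$-quoted inner query is only a string literal handed to crosstab, the .NET parameter is never expanded inside it. You can instead wrap the crosstab call in a SQL function that receives the ids as an int[] parameter and splices them into the inner query text with format():
create or replace function sample_demographics(_sample_ids int[])
returns table (sample_name text, "SEX" text, "TUMOR_STAGE" text,
               "RACE" text, "TUMOR_TYPE" text, "PATIENT_AGE" text)
as $func$
select * from crosstab(
    -- %L quotes the array as a literal, e.g. '{0,14,28}', so the inner query stays a plain string
    format($q$select
                cast(s.sample_name as text) as sample_name,
                cast(dt.demographic_type_name as text) as demographic_type_name,
                cast(sdd.demographic_value as text) as demographic_value
              from sample s
              inner join sample_demographic_data sdd on s.sample_id = sdd.sample_id
              inner join demographic_type dt on sdd.demographic_type_id = dt.demographic_type_id
                                             and dt.sort_order >= 0
              where s.sample_id = ANY (%L::int[])$q$, _sample_ids)
) as t (
    sample_name text,
    "SEX" text,
    "TUMOR_STAGE" text,
    "RACE" text,
    "TUMOR_TYPE" text,
    "PATIENT_AGE" text);
$func$ language sql;
The .NET side would then bind the array parameter to a plain select * from sample_demographics(...) call, outside any $$ quoting.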
I would like to access a column by using variable instead of a static column name.
Example:
variable := 'customer';
SELECT table.variable (this is what I would prefer) instead of table.customer
I need this functionality because records in my table vary in length (e.g. some have data in 10 columns, some in 14 or 16 columns, etc.), so I need to address columns dynamically. As I understand it, I can't address columns by their index (e.g. select the 8th column of the table), right?
I can loop and put the desired column name in a variable for the given iteration. However, I get errors when I try to access a column using that variable (e.g. table_name.variable does not work).
For the sake of simplicity, I paste just some dummy code to illustrate the issue:
CREATE OR REPLACE FUNCTION dynamic_column_name() returns text
LANGUAGE PLPGSQL
AS $$
DECLARE
col_name text;
return_value text;
BEGIN
create table customer (
id bigint,
name varchar
);
INSERT INTO customer VALUES(1, 'Adam');
col_name := 'name';
-- SELECT customer.name INTO return_value FROM customer WHERE id = 1; -- WORKING, returns 'Adam' but it is not DYNAMIC.
-- SELECT customer.col_name INTO return_value FROM customer WHERE id = 1; -- ERROR: column customer.col_name does not exist
-- SELECT 'customer.'||col_name INTO return_value FROM customer WHERE id = 1; -- NOT working, returns 'customer.name'
-- SELECT customer||'.'||col_name INTO return_value FROM customer WHERE id = 1; -- NOT working, returns whole record + .name, i.e.: (1,Adam).name
DROP TABLE customer;
RETURN return_value;
END;
$$;
SELECT dynamic_column_name();
So how to obtain 'Adam' string with SQL query using col_name variable when addressing column of customer table?
SQL does not allow parameterizing identifiers (including column names) or syntax elements. Only values can be parameters.
You need dynamic SQL for that. (Basically, build the SQL string and execute.) Use EXECUTE in a plpgsql function. There are multiple syntax variants. For your simple example:
CREATE OR REPLACE FUNCTION dynamic_column_name(_col_name text, OUT return_value text)
RETURNS text
LANGUAGE plpgsql AS
$func$
BEGIN
EXECUTE format('SELECT %I FROM customer WHERE id = 1', _col_name)
INTO return_value;
END
$func$;
Call:
SELECT dynamic_column_name('name');
db<>fiddle here
Data types have to be compatible, of course.
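For example, if the selected column might not already be text (an assumption on my part, since the sample table only has one non-id column), the EXECUTE line above could cast it explicitly:
EXECUTE format('SELECT %I::text FROM customer WHERE id = 1', _col_name)
INTO return_value;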
More examples:
How to use text input as column name(s) in a Postgres function?
https://stackoverflow.com/search?q=%5Bpostgres%5D+%5Bdynamic-sql%5D+parameter+column+code%3AEXECUTE
I am very new to PostgreSQL. My requirement is to pass a dynamic number of (columnId, columnValue) pairs and insert that combination into a table (example: an employeeId, employeeName combination). The list can have any length. I am thinking of building the dynamic query on the code side and passing it as a string to a function that executes the statement. Is there a better approach to this problem? Any example or idea would be much appreciated.
If you are allowed to pass that information as a structured JSON value, this gets quite easy. Postgres can map a JSON value to a table type using the function json_populate_record.
Sample table:
create table some_table
(
id integer primary key,
some_name text,
some_date date,
some_number integer
);
The insert function:
create function do_insert(p_data text)
returns void
as
$$
insert into some_table (id, some_name, some_date, some_number)
select (json_populate_record(null::some_table, p_data::json)).*;
$$
language sql;
Then you can use:
select do_insert('{"id": 42, "some_name": "Arthur"}');
select do_insert('{"id": 1, "some_value": 42}');
Note that columns that are not part of the passed JSON string are explicitly set to NULL with this approach (so column defaults are not applied).
If the passed string contains column names that do not exist, they are simply ignored, so
select do_insert('{"id": 12, "some_name": "Arthur", "not_there": 123}');
will ignore the not_there "column".
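To see both behaviours in isolation, json_populate_record can be called directly (a quick check, not part of the original answer):
select *
from json_populate_record(null::some_table,
                          '{"id": 12, "some_name": "Arthur", "not_there": 123}'::json);
-- id = 12, some_name = 'Arthur', some_date and some_number are NULL, not_there is ignored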
Online example: https://rextester.com/JNIBL25827
Edit
A similar approach can be used for updating:
create function do_update(p_data text)
returns void
as
$$
update some_table
set id = t.id,
some_name = t.some_name,
some_date = t.some_date,
some_number = t.some_number
from json_populate_record(null::some_table, p_data::json) as t
where some_table.id = t.id;
$$
language sql;
or using insert on conflict to cover both use cases with one function:
create function do_upsert(p_data text)
returns void
as
$$
insert into some_table (id, some_name, some_date, some_number)
select (json_populate_record(null::some_table, p_data::json)).*
on conflict (id) do update
set id = excluded.id,
some_name = excluded.some_name,
some_date = excluded.some_date,
some_number = excluded.some_number
$$
language sql;
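Usage is the same as before (example values are my own):
select do_upsert('{"id": 1, "some_name": "Zaphod"}');  -- no row with id 1 yet: inserts
select do_upsert('{"id": 1, "some_number": 42}');      -- id 1 exists: updates (columns missing from the JSON become NULL, as noted above)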
I have a generic key/value table that I would like to preprocess in a function to filter by the key, and then name the value according to the key.
The table is like this, where id points into another table (but that really doesn't matter for the purposes of this question):
CREATE TABLE keyed_values (id INTEGER, key TEXT, value TEXT, PRIMARY KEY(id, key));
INSERT INTO keyed_values(id, key, value) VALUES
(1, 'x', 'Value for x for 1'),
(1, 'y', 'Value for y for 1'),
(2, 'x', 'Value for x for 2'),
(2, 'z', 'Value for z for 2');
and so forth. The key values can vary and not all IDs will use the same keys.
Specifically, I'd like to create a function that produces the rows matching a given key, returning one row per match with the value column named after the key.
So the output of the function given the key name argument 'x' would be the same as if I executed this:
SELECT id, value x FROM keyed_values WHERE key = 'x';
id | x
----+-------------------
1 | Value for x for 1
2 | Value for x for 2
(2 rows)
Here's my stab, put together from looking at some other SO answers:
CREATE FUNCTION value_for(key TEXT) RETURNS SETOF RECORD AS $$
BEGIN
RETURN QUERY EXECUTE format('SELECT id, value %I FROM keyed_values WHERE key = %I', key, key);
END;
$$ LANGUAGE 'plpgsql';
The function is accepted. But when I execute it, I get this error:
SELECT * FROM value_for('x');
ERROR: a column definition list is required for functions returning "record"
LINE 1: SELECT * FROM value_for('x');
I get what it's telling me, but not how to fix that. How can I dynamically define the output column name?
The key used in the WHERE clause is not an identifier but a literal, so it should be formatted with %L instead of %I; the function should look like:
CREATE FUNCTION value_for(key TEXT) RETURNS SETOF RECORD AS $$
BEGIN
RETURN QUERY EXECUTE format('SELECT id, value %I FROM keyed_values WHERE key = %L', key, key);
END;
$$ LANGUAGE plpgsql;
From the documentation:
If the function has been defined as returning the record data type, then an alias or the key word AS must be present, followed by a column definition list in the form ( column_name data_type [, ... ]). The column definition list must match the actual number and types of columns returned by the function.
Use a column definition list:
SELECT * FROM value_for('x') AS (id int, x text);
Note that you need neither dynamic SQL nor PL/pgSQL here, as the column name is fixed by the definition list above. A plain SQL function should be a bit faster:
CREATE OR REPLACE FUNCTION value_for(_key text)
RETURNS SETOF RECORD AS $$
SELECT id, value
FROM keyed_values
WHERE key = _key
$$ LANGUAGE SQL;
If you do not like being forced to append a column definition list to every function call, you can return a table from the function:
CREATE OR REPLACE FUNCTION value_for_2(_key text)
RETURNS TABLE(id int, x text) AS $$
SELECT id, value
FROM keyed_values
WHERE key = _key
$$ LANGUAGE SQL;
SELECT * FROM value_for_2('x');
id | x
----+-------------------
1 | Value for x for 1
2 | Value for x for 2
(2 rows)
However, this does not solve the underlying issue: there is no way to dynamically define a column name returned from a function. The names and types of the returned columns must be known before the query is executed.
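One workaround (my own sketch, not part of the answer above) is to move the dynamic name into the data itself by returning json, since json keys are values rather than column names:
CREATE OR REPLACE FUNCTION value_for_json(_key text)
RETURNS SETOF json AS $$
   SELECT json_build_object('id', id, _key, value)
   FROM keyed_values
   WHERE key = _key
$$ LANGUAGE sql;
-- SELECT value_for_json('x') returns one json object per match,
-- e.g. {"id" : 1, "x" : "Value for x for 1"}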
I have a fields table to store column information for other tables:
CREATE TABLE public.fields (
schema_name varchar(100),
table_name varchar(100),
column_text varchar(100),
column_name varchar(100),
column_type varchar(100) default 'varchar(100)',
column_visible boolean
);
And I'd like to create a function to fetch data for a specific table.
I just tried something like this:
create or replace function public.get_table(schema_name text,
table_name text,
active boolean default true)
returns setof record as $$
declare
entity_name text default schema_name || '.' || table_name;
r record;
begin
for r in EXECUTE 'select * from ' || entity_name loop
return next r;
end loop;
return;
end
$$
language plpgsql;
With this function I have to specify the columns when I call it:
select * from public.get_table('public', 'users') as dept(id int, uname text);
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in public.fields table.
Solution for the simple case
As explained in the referenced answers below, you can use registered (row) types, and thus implicitly declare the return type of a polymorphic function:
CREATE OR REPLACE FUNCTION public.get_table(_tbl_type anyelement)
RETURNS SETOF anyelement AS
$func$
BEGIN
RETURN QUERY EXECUTE format('TABLE %s', pg_typeof(_tbl_type));
END
$func$ LANGUAGE plpgsql;
Call:
SELECT * FROM public.get_table(NULL::public.users); -- note the syntax!
Returns the complete table (with all user columns).
Wait! How?
Detailed explanation in this related answer, chapter
"Various complete table types":
Refactor a PL/pgSQL function to return the output of various SELECT queries
TABLE foo is just short for SELECT * FROM foo:
Is there a shortcut for SELECT * FROM?
2 steps for completely dynamic return type
But what you are trying to do is strictly impossible in a single SQL command.
I want to pass schema_name and table_name as parameters to function and get record list, according to column_visible field in
public.fields table.
There is no direct way to return an arbitrary selection of columns (return type not known at call time) from a function - or any SQL command. SQL demands to know number, names and types of resulting columns at call time. More in the 2nd chapter of this related answer:
How do I generate a pivoted CROSS JOIN where the resulting table definition is unknown?
There are various workarounds. You could wrap the result in one of the standard document types (json, jsonb, hstore, xml).
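As an illustration of the jsonb variant (my own sketch reusing the fields table from the question, not part of the original answer):
CREATE OR REPLACE FUNCTION public.get_table_jsonb(_schema_name text, _table_name text)
  RETURNS SETOF jsonb AS
$func$
DECLARE
   _cols text;
BEGIN
   -- build the list of visible columns from the metadata table
   SELECT string_agg(quote_ident(column_name), ', ')
   INTO   _cols
   FROM   fields
   WHERE  column_visible
   AND    schema_name = _schema_name
   AND    table_name  = _table_name;

   -- wrap each result row in a jsonb document
   RETURN QUERY EXECUTE format(
      'SELECT to_jsonb(t) FROM (SELECT %s FROM %I.%I) t'
    , _cols, _schema_name, _table_name);
END
$func$ LANGUAGE plpgsql;
Each row comes back as a single jsonb value containing only the visible columns, so the return type no longer depends on the table.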
Or you generate the query with one function call and execute the result with the next:
CREATE OR REPLACE FUNCTION public.generate_get_table(_schema_name text, _table_name text)
RETURNS text AS
$func$
SELECT format('SELECT %s FROM %I.%I'
, string_agg(quote_ident(column_name), ', ')
, schema_name
, table_name)
FROM fields
WHERE column_visible
AND schema_name = _schema_name
AND table_name = _table_name
GROUP BY schema_name, table_name
ORDER BY schema_name, table_name;
$func$ LANGUAGE sql;
Call:
SELECT public.generate_get_table('public', 'users');
This creates a query of the form:
SELECT usr_id, usr FROM public.users;
Execute it in the 2nd step. (You might want to add column numbers and order columns.)
Or append \gexec in psql to execute the return value immediately. See:
How to force evaluation of subquery before joining / pushing down to foreign server
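In psql, the two steps can be chained in one line (a sketch):
SELECT public.generate_get_table('public', 'users')\gexec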
Be sure to defend against SQL injection:
INSERT with dynamic table name in trigger function
Define table and column names as arguments in a plpgsql function?
Asides
varchar(100) does not make much sense for identifiers, which are limited to 63 characters in standard Postgres:
Maximum characters in labels (table names, columns etc)
If you understand how the object identifier type regclass works, you might replace schema and table name with a single regclass column.
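For example, a quick sanity check (assuming the table exists):
SELECT 'public.users'::regclass;  -- resolves the identifier, and errors out if the table does not exist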
I think you just need another query to get the list of columns you want.
Maybe something like (this is untested):
create or replace function public.get_table(_schema_name text, _table_name text, active boolean default true) returns setof record as $$
declare
entity_name text default _schema_name || '.' || _table_name;
r record;
columns varchar;
begin
-- Get the list of columns
SELECT string_agg(column_name, ', ')
INTO columns
FROM public.fields
WHERE fields.schema_name = _schema_name
AND fields.table_name = _table_name
AND fields.column_visible = TRUE;
-- Return rows from the specified table
RETURN QUERY EXECUTE 'select ' || columns || ' from ' || entity_name;
RETURN;
end
$$
language plpgsql;
Keep in mind that column/table references may need to be surrounded by double quotes if they have certain characters in them.
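One way to take care of that quoting automatically (my own variation on the function above, not part of the original answer; the function name is hypothetical) is to run every identifier through quote_ident() or format()'s %I:
create or replace function public.get_table_safe(_schema_name text, _table_name text)
returns setof record as $$
declare
    columns text;
begin
    -- quote_ident() double-quotes each column name only when necessary
    select string_agg(quote_ident(column_name), ', ')
    into columns
    from public.fields
    where fields.schema_name = _schema_name
      and fields.table_name = _table_name
      and fields.column_visible = true;

    -- %I applies the same identifier quoting to the schema and table names
    return query execute format('select %s from %I.%I', columns, _schema_name, _table_name);
end
$$
language plpgsql;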