I have a custom type:
CREATE TYPE public.relacion AS
(id integer,
codpadre character varying,
codhijo character varying,
canpres numeric(7,3),
cancert numeric(7,3),--this is usually NULL
posicion smallint);
After that, I need another type built from relacion:
CREATE TYPE public.borrarrelacion AS
(idborrar integer,
paso integer,
rel relacion);
Now, inside a function, I need to copy some rows from a table of type relacion into a table of type borrarrelacion.
This is a snippet of my code:
DECLARE
r relacion%ROWTYPE;
------------
BEGIN
EXECUTE FORMAT ('CREATE TABLE IF NOT EXISTS "mytable" OF borrarrelacion (PRIMARY KEY (idborrar))');
EXECUTE FORMAT ('SELECT * FROM %I WHERE codpadre = %s AND codhijo = %s',
tablarelacion,quote_literal(codigopadre),quote_literal(codigohijo)) INTO r;
EXECUTE FORMAT ('INSERT INTO "mytable" VALUES(0,0,%s)',r);
But I get an error because the field r.cancert is NULL, so it tries to execute something like this:
INSERT INTO "mytable" VALUES(0,0,(0,'cod1','cod2',10,,0));
I can solve this problem by reading every field of r and putting its value into the INSERT statement (replacing the NULL value with 0), like this:
EXECUTE FORMAT ('INSERT INTO "mytable" VALUES(0,0,(%s,%s,%s,%s,%s,%s))',
r.id,quote_literal(r.codpadre),quote_literal(r.codhijo),r.canpres,COALESCE(r.cancert,0),r.posicion);
But I would like to know if I can avoid this approach and instead insert exactly the same row from one table into the other.
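Two ways to copy the composite value without spelling out each field, sketched under the assumption that "mytable" is a fixed name as in the snippet above:
-- 1) For a fixed table name, no dynamic SQL is needed at all;
--    PL/pgSQL can pass the whole composite variable directly:
INSERT INTO "mytable" VALUES (0, 0, r);
-- 2) If the statement must stay dynamic, let format() quote the row
--    with %L instead of %s; a NULL field becomes an empty slot in the
--    composite literal and survives the round trip as NULL:
EXECUTE format('INSERT INTO "mytable" VALUES (0, 0, %L)', r);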
I am trying to create a stored procedure to insert data into a table in a PostgreSQL 13 database. How do I automatically enter the primary key?
Here is how I made the table
CREATE TABLE public.testing_serial
(
id integer NOT NULL DEFAULT nextval('testing_serial_id_seq'::regclass),
name character varying COLLATE pg_catalog."default",
age integer,
CONSTRAINT testing_serial_pkey PRIMARY KEY (id)
)
Here is my attempt at a stored procedure, and the call to it:
CREATE OR REPLACE PROCEDURE test_input("name" character varying,
"age" integer)
LANGUAGE SQL
AS $$
INSERT INTO testing_serial VALUES ("name","age");
$$;
CALL test_input('Cheese',2000);
I got the following error and surmise that SQL is expecting an input for the "id" column. Is that correct? How can I automatically generate the ID during input?
ERROR: column "id" is of type integer but expression is of type character varying
LINE 5: INSERT INTO testing_serial VALUES ("name","age");
^
HINT: You will need to rewrite or cast the expression.
SQL state: 42804
Character: 145
You need to specify the column list in the INSERT statement; any column you omit (here id) falls back to its default, which for id is the next value from testing_serial_id_seq:
INSERT INTO testing_serial (name, age) VALUES ("name","age");
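For completeness, the procedure with that one-line fix applied (same names as in the question):
CREATE OR REPLACE PROCEDURE test_input("name" character varying,
                                       "age" integer)
LANGUAGE SQL
AS $$
    -- id is omitted from the column list, so it is generated
    -- automatically from the sequence default
    INSERT INTO testing_serial (name, age) VALUES ("name", "age");
$$;
CALL test_input('Cheese', 2000);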
Steps for Execution:
Table Creation
CREATE TABLE xyz.table_a(
id bigint NOT NULL,
scores jsonb,
CONSTRAINT table_a_pkey PRIMARY KEY (id)
);
Add some dummy data:
INSERT INTO xyz.table_a(
id, scores)
VALUES (1, '{"a":20,"b":20}');
Function Creation
CREATE OR REPLACE FUNCTION xyz.example(
table_name text,
regular_columns text,
json_column text,
view_name text
) RETURNS text
LANGUAGE 'plpgsql'
COST 100
VOLATILE
AS $BODY$
DECLARE
cols TEXT;
cols_sum TEXT;
BEGIN
EXECUTE
format(
$ex$SELECT string_agg(
format(
'CAST(%2$s->>%%1$L AS INTEGER)',
key),
', '
)
FROM (SELECT DISTINCT key
FROM %1$s, jsonb_each(%2$s)
ORDER BY 1
) s;$ex$,
table_name, json_column
)
INTO cols;
EXECUTE
format(
$ex$SELECT string_agg(
format(
'CAST(%2$s->>%%1$L AS INTEGER)',
key
),
'+'
)
FROM (SELECT DISTINCT key
FROM %1$s, jsonb_each(%2$s)
ORDER BY 1) s;$ex$,
table_name, json_column
)
INTO cols_sum;
EXECUTE
format(
$ex$DROP VIEW IF EXISTS %2$s;
CREATE VIEW %2$s AS
SELECT %3$s, %4$s, SUM(%5$s) AS total
FROM %1$s
GROUP BY %3$s$ex$,
table_name, view_name, regular_columns, cols, cols_sum
);
RETURN cols;
END
$BODY$;
Call Function
SELECT xyz.example(
'xyz.table_a',
' id',
'scores',
'xyz.view_table_a'
);
After running these steps, I get this error:
ERROR: column "int4" specified more than once
CONTEXT: SQL statement "
DROP VIEW IF EXISTS xyz.view_table_a;
CREATE VIEW xyz.view_table_a AS
SELECT id, CAST(scores->>'a' AS INTEGER), CAST(scores->>'b' AS INTEGER), SUM(CAST(scores->>'a' AS INTEGER)+CAST(scores->>'b' AS INTEGER)) AS total FROM xyz.table_a GROUP BY id
Look at the error message closely:
...
SELECT id, CAST(scores->>'a' AS INTEGER), CAST(scores->>'b' AS INTEGER),
...
There are multiple expressions without column alias. A named column like "id" defaults to the given name. But other expressions default to the internal type name, which is "int4" for integer. One might assume that the JSON key name is used, but that's not so. CAST(scores->>'a' AS INTEGER) is just another expression returning an unnamed integer value.
This still works for a plain SELECT: Postgres tolerates duplicate column names in the (outer) SELECT list. But a VIEW cannot be created that way, as it would result in ambiguities.
Either add column aliases to expressions in the SELECT list:
SELECT id, CAST(scores->>'a' AS INTEGER) AS a, CAST(scores->>'b' AS INTEGER) AS b, ...
Or add a list of column names to CREATE VIEW:
CREATE VIEW xyz.view_table_a(id, a, b, ...) AS ...
Something like this should fix your function (preserving the literal spelling of JSON key names):
...
format(
'CAST(%2$s->>%%1$L AS INTEGER) AS %%1$I',
key),
...
See the working demo: db<>fiddle here
As an aside, your nested format() calls make the code pretty hard to read and maintain.
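One possible simplification, sketched here as an untested variant: build each expression with quote_literal() / quote_ident() inside the executed query, so only the outer format() remains and the %%-escaping disappears:
EXECUTE format(
    $ex$SELECT string_agg(
               'CAST(%2$s->>' || quote_literal(key)
               || ' AS integer) AS ' || quote_ident(key),
               ', ')
        FROM (SELECT DISTINCT key
              FROM %1$s, jsonb_each(%2$s)
              ORDER BY 1) s$ex$,
    table_name, json_column)
INTO cols;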
I have a function which takes input in the form of JSON, and I would like to either insert or update it into an existing table. If I get multiple inputs, how do I handle them?
Table Structure
create table sample ( colA Integer, colB character varying, colC character varying, colD character varying);
create type tt_sample AS
(colA Integer, colB character varying, colC character varying, colD character varying);
Function
CREATE OR REPLACE FUNCTION ins_upd_sample(
tt_sample text)
RETURNS timestamp without time zone
LANGUAGE 'plpgsql'
COST 100
VOLATILE
AS $BODY$
DECLARE
BEGIN
with cte as(
INSERT INTO sample(
colA, colB, colC, colD
)
SELECT
tmd.colA, tmd.colB, tmd.colC, tmd.colD
FROM json_populate_recordset(null::tt_sample ,tt_sample::json) tmd
LEFT JOIN sample md ON
md.colA = tmd.colA
AND md.colB = tmd.colB
WHERE md.colB IS NULL
RETURNING * /*Some Usage*/
)
/*some usage*/
with cte2 as(
UPDATE sample md SET
colC = tmd.colC, colD = tmd.colD
FROM json_populate_recordset(null::tt_sample ,tt_sample::json) tmd
where md.colA = tmd.colA AND md.colB = tmd.colB
AND md.colB IS NOT NULL
RETURNING * /*some usage */
)
/*some usage*/
return( SELECT
/*timestamp */);
END;
$BODY$;
INPUTS:
select ins_upd_sample ('[{"colA":21, "colB":"abc", "colC":null, "colD":null},
{"colA":21, "colB":"abc", "colC":"xyz", "colD":"xyz"}]')
Desired result:
Only 1 record should be in the table: the first record should get inserted and the next one should update it. Instead I am getting two inserted records, which are duplicates (obviously, the update applies to the second one).
Is it possible to commit the transaction in between?
So here is a possible workaround. Add a column that only allows one value: give it a default, a check constraint that the value equals the default, and a unique constraint. Then do not reference this column in your insert; just take the default. Something like:
create table <your_table>
( ...
, singleton varchar(1) default 'A'
, constraint singleton_bk unique (singleton)
, constraint singleton check (singleton = 'A')
) ;
Then revise insert to:
insert into <your_table>( ... ) -- omit column singleton
values (...)
on conflict (singleton)
do update
set <column> = excluded.<column>;
See example here.
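A concrete sketch of the same idea against the question's sample table (two separate statements mirror the two JSON records; note that a single multi-row INSERT ... ON CONFLICT cannot update the same row twice):
create table sample
( colA integer
, colB character varying
, colC character varying
, colD character varying
, singleton varchar(1) default 'A'
, constraint singleton_bk unique (singleton)
, constraint singleton check (singleton = 'A')
);
-- first record: table is empty, so this inserts
insert into sample (colA, colB, colC, colD)
values (21, 'abc', null, null)
on conflict (singleton)
do update set colC = excluded.colC, colD = excluded.colD;
-- second record: conflicts on singleton, so it updates in place
insert into sample (colA, colB, colC, colD)
values (21, 'abc', 'xyz', 'xyz')
on conflict (singleton)
do update set colC = excluded.colC, colD = excluded.colD;
-- leaves exactly one row: (21, abc, xyz, xyz)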
I have three tables in PostgreSQL:
CREATE TABLE organization (id int, name text, parent_id int);
CREATE TABLE staff (id int, name text, family text, organization_id int);
CREATE TABLE clock(id int, staff_id int, Date date, Time time);
I need a function that gets all the fields of these tables as inputs (8 in total) and then inserts them into the appropriate fields of the tables.
Here is my code:
CREATE FUNCTION insert_into_tables(org_name character varying(50), org_PID int, person_name character varying(50),_family character varying(50), org_id int, staff_id int,_date date, _time time without time zone)
RETURNS void AS $$
BEGIN
INSERT INTO "Org".organisation("Name", "PID")
VALUES ($1, $2);
INSERT INTO "Org".staff("Name", "Family", "Organization_id")
VALUES ($3, $4, $5);
INSERT INTO "Org"."Clock"("Staff_Id", "Date", "Time")
VALUES ($6, $7, $8);
END;
$$ LANGUAGE plpgsql;
select * from insert_into_tables('SASAD',9,'mamad','Imani',2,2,1397-10-22,'08:26:47')
But no data is inserted. I get the error:
ERROR: function insert_into_tables(unknown, integer, unknown, unknown, integer, integer, integer, unknown) does not exist
LINE 17: select * from insert_into_tables('SASAD',9,'mamad','Imani',2... ^
HINT: No function matches the given name and argument types. You might need to add explicit type casts.
Where did I go wrong?
That's because the second-to-last parameter is declared as date, not int. You forgot the single quotes:
select * from insert_into_tables('SASAD',9,'mamad','Imani',2,2,'1397-10-22','08:26:47');
Without single quotes, this is interpreted as subtraction between 3 integer constants, resulting in an integer: 1397-10-22 = 1365.
Also fix your identifiers: double-quoting preserves upper-case letters, so "Name" is distinct from name etc. See:
Are PostgreSQL column names case-sensitive?
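For completeness, a sketch of the function with both fixes applied, assuming the unqualified tables from the top of the question rather than the "Org" schema (adjust names to your actual schema):
CREATE FUNCTION insert_into_tables(
    org_name varchar(50), org_pid int,
    person_name varchar(50), _family varchar(50),
    org_id int, _staff_id int,
    _date date, _time time without time zone)
RETURNS void AS $$
BEGIN
    -- all identifiers unquoted and lower-case, matching the CREATE TABLE statements
    INSERT INTO organization(name, parent_id) VALUES ($1, $2);
    INSERT INTO staff(name, family, organization_id) VALUES ($3, $4, $5);
    INSERT INTO clock(staff_id, date, time) VALUES ($6, $7, $8);
END;
$$ LANGUAGE plpgsql;
SELECT insert_into_tables('SASAD', 9, 'mamad', 'Imani', 2, 2, '1397-10-22', '08:26:47');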
I have the following issue.
I will receive input as text from a web service and insert it into a certain psql table. Assume:
create table test ( id serial, myvalues text[])
The received input will be:
insert into test(myvalues) values ('this,is,an,array');
I want to create a BEFORE INSERT trigger that converts this string to a text[] and inserts it. The first idea that came to mind was:
create function test_convert() returns trigger as $BODY%
BEGIN
new.myvalues = string_to_array(new.myvalues,',')
RETURNS NEW
END; $BODY$ language plpgsql
But this did not work.
You can use the string_to_array function to convert your string into a string array within your insert query:
INSERT INTO test ( myvalues )
VALUES ( string_to_array( 'this,is,an,array', ',' ) );
Suppose you receive text in the format this is an array and you want to convert it to this,is,an,array: use string_to_array('this is an array', ' '). If the input is already comma-separated, just use ',' as the delimiter instead.
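A quick sanity check with the question's table (hypothetical data, result shown as a comment):
INSERT INTO test (myvalues) VALUES (string_to_array('this,is,an,array', ','));
SELECT myvalues FROM test;  -- {this,is,an,array}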
Create the table schema like this:
CREATE TABLE expert (
id VARCHAR(32) NOT NULL,
name VARCHAR(36),
twitter_id VARCHAR(40),
coin_supported text[],
start_date TIMESTAMP,
followers BIGINT,
PRIMARY KEY (id)
);
Insert values like this to populate the array column:
insert into expert(id, name, twitter_id, coin_supported, start_date, followers)
values ('9ed1cdf2-564c-423e-b8e2-137eg', 'dev1', 'dev1#twitter', '{"btc","eth"}', current_timestamp, 12);
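Equivalently, the ARRAY constructor syntax avoids writing the braces-and-quotes literal by hand:
insert into expert(id, name, twitter_id, coin_supported, start_date, followers)
values ('9ed1cdf2-564c-423e-b8e2-137eg', 'dev1', 'dev1#twitter',
        ARRAY['btc','eth'], current_timestamp, 12);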