Insert string into text[] column - PostgreSQL

I have the following issue.
I will receive input as text from a webservice and need to insert it into a certain PostgreSQL table. Assume:
create table test ( id serial, myvalues text[] );
The received input will be:
insert into test(myvalues) values ('this,is,an,array');
I want to create a BEFORE INSERT trigger that converts this string to a text[] and inserts it.
The first idea that came to mind was such a trigger:
create function test_convert() returns trigger as $BODY$
BEGIN
    new.myvalues = string_to_array(new.myvalues, ',');
    RETURN NEW;
END; $BODY$ language plpgsql;
but this did not work.

You can use the string_to_array function to convert your string into a text array within your insert query:
INSERT INTO test ( myvalues )
VALUES ( string_to_array( 'this,is,an,array', ',' ) );
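Note that a BEFORE INSERT trigger cannot help here: PostgreSQL casts 'this,is,an,array' to text[] before the row (and thus the trigger) ever sees it, and that cast fails. If the webservice cannot send a real array literal, one workaround is a small wrapper function it calls instead of a raw INSERT (insert_test is a hypothetical name):
CREATE OR REPLACE FUNCTION insert_test(_csv text) RETURNS void AS $$
    -- split the comma separated string and store it as text[]
    INSERT INTO test (myvalues) VALUES (string_to_array(_csv, ','));
$$ LANGUAGE sql;
SELECT insert_test('this,is,an,array');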

Suppose you receive text in the format this is an array and want to convert it to this,is,an,array; then you can use string_to_array('this is an array', ' ') and it will be converted. If the input is already comma separated, just pass ',' as the delimiter.
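For instance (results shown as psql prints them):
SELECT string_to_array('this is an array', ' ');  -- {this,is,an,array}
SELECT string_to_array('this,is,an,array', ','); -- {this,is,an,array}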

Create the table schema like this:
CREATE TABLE expert (
id VARCHAR(32) NOT NULL,
name VARCHAR(36),
twitter_id VARCHAR(40),
coin_supported text[],
start_date TIMESTAMP,
followers BIGINT,
PRIMARY KEY (id)
);
Inserting values like this will insert the array:
insert into expert (id, name, twitter_id, coin_supported, start_date, followers)
values ('9ed1cdf2-564c-423e-b8e2-137eg', 'dev1', 'dev1#twitter',
        '{"btc","eth"}', current_timestamp, 12);
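Once stored, the array column can be searched with ANY; for example, a hypothetical query for experts supporting btc:
SELECT name
FROM expert
WHERE 'btc' = ANY (coin_supported);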

Related

Postgresql multi insert from array of composite type and one additional column

What I have:
CREATE TYPE Item AS (
a bigint,
b bigint
);
CREATE TABLE items (
id bigint NOT NULL,
a bigint NOT NULL,
b bigint NOT NULL
);
CREATE OR REPLACE FUNCTION items_insert(
_id bigint,
_items Item[]
) RETURNS void AS
...
How can I insert multiple rows into the items table, all with the same _id taken from _items, using ONE multi-row INSERT query?
I am using PostgreSQL 9.2.
I think you mean "insert multiple rows" rather than columns.
Assuming that is correct, I think you are looking for a function like this:
create or replace function items_insert(_id bigint, _items item[])
returns void
as
$$
insert into items (id, a, b)
select _id, it.*
from unnest(_items) as it(a,b);
$$
language sql;
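A hypothetical call, assuming the type and table above; both rows share the id 42:
SELECT items_insert(42, ARRAY[(1,2)::item, (3,4)::item]);
-- inserts rows (42,1,2) and (42,3,4) into items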

How to merge two input records and Insert or Update accordingly in PostgreSQL

Detail question: I have a function which takes input in the form of JSON, and I would like to either insert or update the input into the existing table. Now, if I get multiple inputs, how do I handle them?
Table Structure
create table sample (
    colA integer,
    colB character varying,
    colC character varying,
    colD character varying
);
create type tt_sample AS (
    colA integer,
    colB character varying,
    colC character varying,
    colD character varying
);
Function
CREATE OR REPLACE FUNCTION ins_upd_sample(
tt_sample text)
RETURNS timestamp without time zone
LANGUAGE 'plpgsql'
COST 100
VOLATILE
AS $BODY$
DECLARE
BEGIN
with cte as(
INSERT INTO sample(
colA, colB, colC, colD
)
SELECT
tmd.colA, tmd.colB, tmd.colC, tmd.colD
FROM json_populate_recordset(null::tt_sample ,tt_sample::json) tmd
LEFT JOIN sample md ON
md.colA = tmd.colA
AND md.colB = tmd.colB
WHERE md.colB IS NULL
RETURNING * /*Some Usage*/
)
/*some usage*/
with cte2 as(
UPDATE sample md SET
colC = tmd.colC, colD = tmd.colD
FROM json_populate_recordset(null::tt_sample ,tt_sample::json) tmd
where md.colA = tmd.colA AND md.colB = tmd.colB
AND md.colB IS NOT NULL
RETURNING * /*some usage */
)
/*some usage*/
return( SELECT
/*timestamp */);
END;
$BODY$;
INPUTS:
select ins_upd_sample ('[{"colA":21, "colB":"abc", "colC":null, "colD":null},
{"colA":21, "colB":"abc", "colC":"xyz", "colD":"xyz"}]')
Desired result:
Only 1 record should be in the table: the first record should get inserted and the second should update it. Instead I am getting two inserted records, which are duplicates (obviously, the update is there for the second one).
Is it possible to commit the transaction in between?
So here is a possible workaround. Add a column that only allows one value: give it a default, a check constraint that the value equals the default, and a unique constraint. Now in your insert do not reference this column; just take the default. So something like:
create table <your_table>
( ...
, singleton varchar(1) default 'A'
, constraint singleton_bk unique (singleton)
, constraint singleton check (singleton = 'A')
) ;
Then revise the insert to use ON CONFLICT (available since PostgreSQL 9.5):
insert into <your_table>( ... ) -- omit column singleton
values (...)
on conflict (singleton)
do update
set <column> = excluded.<column>;
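For a self-contained picture, here is a minimal sketch of the pattern (the table name only_one and the column payload are made up for illustration):
create table only_one
( payload   text
, singleton varchar(1) default 'A'
, constraint singleton_bk unique (singleton)
, constraint singleton_ck check (singleton = 'A')
);
insert into only_one (payload) values ('first')
on conflict (singleton) do update set payload = excluded.payload; -- inserts the row
insert into only_one (payload) values ('second')
on conflict (singleton) do update set payload = excluded.payload; -- updates it in place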

Insert a row with NULL values from one table

I have a custom type:
CREATE TYPE public.relacion AS
(id integer,
codpadre character varying,
codhijo character varying,
canpres numeric(7,3),
cancert numeric(7,3),--this is usually NULL
posicion smallint);
After that, I need another type built on the relacion type:
CREATE TYPE public.borrarrelacion AS
(idborrar integer,
paso integer,
rel relacion);
Now, inside a function, I need to copy some rows from a relacion-typed table into the borrarrelacion-typed table.
This is a snippet of my code:
DECLARE
    r relacion%ROWTYPE;
------------
BEGIN
EXECUTE FORMAT ('CREATE TABLE IF NOT EXISTS "mytable" OF borrarrelacion (PRIMARY KEY (idborrar))');
EXECUTE FORMAT ('SELECT * FROM %I WHERE codpadre = %s AND codhijo = %s',
tablarelacion,quote_literal(codigopadre),quote_literal(codigohijo)) INTO r;
EXECUTE FORMAT ('INSERT INTO "mytable" VALUES(0,0,%s)',r);
But I get an error because the field r.cancert is NULL, so it's trying to insert something like this:
INSERT INTO "mytable" VALUES(0,0,(0,'cod1','cod2',10,,0));
I can solve this problem by reading every field of r and putting its values in the INSERT statement like this (replacing the NULL value with 0):
EXECUTE FORMAT ('INSERT INTO "mytable" VALUES(0,0,(%s,%s,%s,%s,%s,%s))',
r.id,quote_literal(r.codpadre),quote_literal(r.codhijo),r.canpres,COALESCE(r.cancert,0),r.posicion);
But I would like to know if I can avoid this and insert exactly the same row from one table into the other.
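One possible way to avoid the field-by-field rewrite, assuming the row really should pass through unchanged: format's %L specifier renders the whole record as a single quoted literal, and composite literal syntax preserves NULL fields:
-- %L quotes r as one composite literal, e.g. '(0,cod1,cod2,10.000,,0)';
-- the empty field between commas is read back as NULL instead of failing
EXECUTE FORMAT ('INSERT INTO "mytable" VALUES(0,0,%L)', r);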

jsonb_populate_record / jsonb_populate_recordset should return a table

Currently I am trying to build a history table based on PostgreSQL jsonb. As an example I have two tables:
CREATE TABLE data (
    id BIGSERIAL PRIMARY KEY,
    price NUMERIC(10,4) NOT NULL,
    article TEXT NOT NULL,
    quantity BIGINT NOT NULL,
    lose BIGINT NOT NULL,
    username TEXT NOT NULL
);
CREATE TABLE data_history (
    id BIGSERIAL PRIMARY KEY,
    data JSONB NOT NULL,
    username TEXT NOT NULL
);
The history table acts as a simple history (the username column there could be avoided).
I populate the history with a trigger:
CREATE OR REPLACE FUNCTION insert_history() RETURNS TRIGGER AS $$
BEGIN
INSERT INTO data_history (data, username) VALUES (row_to_json(NEW.*), NEW.username);
RETURN NEW;
END;
$$ LANGUAGE plpgsql;
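The trigger definition itself is not shown in the question; presumably something like this (the trigger name is assumed):
CREATE TRIGGER data_history_insert
AFTER INSERT ON data
FOR EACH ROW EXECUTE PROCEDURE insert_history();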
Now I try to populate the history back to the data table:
SELECT jsonb_populate_record(NULL::data, data) FROM data_history;
However the result will now be a tuple and not a table:
jsonb_populate_record
-------------------------------------
(1,45.4500,0A45477,100,1,c.schmitt)
(2,5.4500,0A45477,100,1,c.schmitt)
(2 rows)
Is there any way to get the data back in the shape of the data table? I know there is jsonb_populate_recordset, too, but it doesn't accept a query.
jsonb_populate_record() returns a row type (or record type), so if you use it in the SELECT clause, you'll get a single column which is that row type.
To avoid this, use it in the FROM clause instead (with an implicit LATERAL join):
SELECT r.*
FROM data_history,
jsonb_populate_record(NULL::data, data) r
Technically, the statement below could work too
-- DO NOT use, just for illustration
SELECT jsonb_populate_record(NULL::data, data).*
FROM data_history
but it will call jsonb_populate_record() for each column in data (as a result of an engine limitation).
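Putting it together, restoring the history into the data table could look like this (a sketch; it assumes the stored ids can be re-inserted as-is without colliding with existing rows):
INSERT INTO data
SELECT r.*
FROM data_history,
     jsonb_populate_record(NULL::data, data) r;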

Need foreign key as array

CREATE TABLE test ( id int PRIMARY KEY, name text );
CREATE TABLE test1 ( id integer[] REFERENCES test , rollid int );
ERROR: foreign key constraint "test3_id_fkey" cannot be implemented
DETAIL: Key columns "id" and "id" are of incompatible types: integer[] and integer.
After that I also tried another way:
CREATE TABLE test1 ( id integer[] , rollid int);
ALTER TABLE test1 ADD CONSTRAINT foreignkeyarray FOREIGN KEY (id) REFERENCES test;
ERROR: foreign key constraint "fkarray" cannot be implemented
DETAIL: Key columns "id" and "id" are of incompatible types: integer[] and integer.
So when I try to create a foreign key on an array column, it raises an error. Can anyone explain?
My PostgreSQL version is 9.1.
What you're trying to do simply can't be done. At all. No ifs, no buts.
Create a new table, test1_test, containing two fields, test1_id, test_id. Put the foreign keys as needed on that one, and make test1's id an integer.
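A minimal sketch of that design, using the names suggested above (column types are assumed, and test1 is redefined with a plain integer key):
CREATE TABLE test1 (
    id int PRIMARY KEY,
    rollid int
);
CREATE TABLE test1_test (
    test1_id int REFERENCES test1 (id),
    test_id  int REFERENCES test (id),
    PRIMARY KEY (test1_id, test_id)
);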
Using arrays as foreign keys is usually a sign of incorrect design; you need a separate table with a one-to-many relationship.
But technically it is possible. Here is an example of checking array values without triggers: one reusable function with parameters and dynamic SQL. Tested on PostgreSQL 10.5.
create schema if not exists test;

CREATE OR REPLACE FUNCTION test.check_foreign_key_array(data anyarray, ref_schema text, ref_table text, ref_column text)
    RETURNS BOOL
    RETURNS NULL ON NULL INPUT
    LANGUAGE plpgsql
AS
$body$
DECLARE
    fake_id text;
    sql text default format($$
        select id::text
        from unnest($1) as x(id)
        where id is not null
          and id not in (select %3$I
                         from %1$I.%2$I
                         where %3$I = any($1))
        limit 1;
        $$, ref_schema, ref_table, ref_column);
BEGIN
    EXECUTE sql USING data INTO fake_id;
    IF (fake_id IS NOT NULL) THEN
        RAISE NOTICE 'Array element value % does not exist in column %.%.%', fake_id, ref_schema, ref_table, ref_column;
        RETURN false;
    END IF;
    RETURN true;
END
$body$;

drop table if exists test.t1, test.t2;

create table test.t1 (
    id integer generated by default as identity primary key
);

create table test.t2 (
    id integer generated by default as identity primary key,
    t1_ids integer[] not null check (test.check_foreign_key_array(t1_ids, 'test', 't1', 'id'))
);

insert into test.t1 (id) values (default), (default), (default); -- ok
insert into test.t2 (id, t1_ids) values (default, array[1,2,3]); -- ok
insert into test.t2 (id, t1_ids) values (default, array[1,2,3,555]); -- error
If the array can contain only values from test.id, then you can try this:
CREATE OR REPLACE FUNCTION test_trigger() RETURNS trigger
LANGUAGE plpgsql AS $BODY$
DECLARE
    val integer;
BEGIN
    SELECT id INTO val
    FROM (
        SELECT UNNEST(id) AS id
        FROM test1
    ) AS q
    WHERE id = OLD.id;
    IF val IS NULL THEN
        RETURN OLD;
    ELSE
        RAISE 'Integrity Constraint Violation: ID "%" in Test1', val USING ERRCODE = '23000';
        RETURN NULL;
    END IF;
END; $BODY$;

-- DROP TRIGGER test_delete_trigger ON test;
CREATE TRIGGER test_delete_trigger BEFORE DELETE OR UPDATE OF id ON test
    FOR EACH ROW EXECUTE PROCEDURE test_trigger();
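A quick check of the trigger's effect, with hypothetical data in the tables above:
-- suppose test1 contains a row whose id array includes 2
DELETE FROM test WHERE id = 2;
-- ERROR:  Integrity Constraint Violation: ID "2" in Test1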