I have a table which records the most recently inserted b for a given a:
CREATE TABLE IF NOT EXISTS demo (
id serial primary key,
a int not null,
b int not null,
current boolean not null
);
CREATE UNIQUE INDEX ON demo (a, current) WHERE CURRENT = true;
INSERT INTO demo (a, b, current) VALUES (1, 101, true);
I want to be able to insert values, and when they conflict, the new row should be inserted and the prior, conflicting row should be updated.
E.g. I have
select * from demo;
id | a | b | current
----+---+-----+---------
1 | 1 | 101 | t
Then I run something like:
INSERT INTO demo (a, b, current)
VALUES (1, 102, true)
ON CONFLICT SET «THE OTHER ONE».current = false;
and then I would see:
select * from demo;
id | a | b | current
----+---+-----+---------
1 | 1 | 101 | f <- changed
2 | 1 | 102 | t
Is there syntax in PostgreSQL that allows this?
As proposed by @Adrian, you can do it with a trigger:
CREATE OR REPLACE FUNCTION before_insert()
RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
    UPDATE demo
    SET current = false
    WHERE a = NEW.a;
    RETURN NEW;
END;
$$;
CREATE TRIGGER before_insert BEFORE INSERT ON demo
FOR EACH ROW EXECUTE FUNCTION before_insert();
see test result in dbfiddle
PS: the constraint one_per will prevent having several former rows with current = false for the same a value.
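A quick way to check the behaviour (a sketch, assuming the demo table, the partial unique index, and the trigger above are all in place):

```sql
-- with the trigger installed, insert a second value for a = 1
INSERT INTO demo (a, b, current) VALUES (1, 102, true);

-- the trigger has already flipped the previous row to current = false,
-- so the partial unique index on (a, current) WHERE current = true is not violated
SELECT id, a, b, current FROM demo ORDER BY id;
--  id | a |  b  | current
-- ----+---+-----+---------
--   1 | 1 | 101 | f
--   2 | 1 | 102 | t
```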
I need a function to insert rows because one column's (serialno) default value should be the same as the PK id.
I have defined table:
CREATE SEQUENCE some_table_id_seq
INCREMENT 1
START 1
MINVALUE 1
MAXVALUE 9223372036854775807
CACHE 1;
CREATE TABLE some_table
(
id bigint NOT NULL DEFAULT nextval('some_table_id_seq'::regclass),
itemid integer NOT NULL,
serialno bigint,
CONSTRAINT stockitem_pkey PRIMARY KEY (id),
CONSTRAINT stockitem_serialno_key UNIQUE (serialno)
);
and function to insert count of rows:
CREATE OR REPLACE FUNCTION insert_item(itemid int, count int DEFAULT 1) RETURNS SETOF bigint AS
$func$
DECLARE
ids bigint[] DEFAULT '{}';
id bigint;
BEGIN
FOR counter IN 1..count LOOP
id := NEXTVAL( 'some_table_id_seq' );
INSERT INTO some_table (id, itemid, serialno) VALUES (id, itemid, id);
ids := array_append(ids, id);
END LOOP;
RETURN QUERY SELECT unnest(ids);
END
$func$
LANGUAGE plpgsql;
And inserting with it works fine:
$ select insert_item(123, 10);
insert_item
-------------
1
2
3
4
5
6
7
8
9
10
(10 rows)
$ select * from some_table;
id | itemid | serialno
----+--------+----------
1 | 123 | 1
2 | 123 | 2
3 | 123 | 3
4 | 123 | 4
5 | 123 | 5
6 | 123 | 6
7 | 123 | 7
8 | 123 | 8
9 | 123 | 9
10 | 123 | 10
(10 rows)
But if I want to use function insert_item as subquery, it seems not to work anymore:
$ select id, itemid from some_table where id in (select insert_item(123, 10));
id | itemid
----+--------
(0 rows)
I created dumb function insert_dumb to test in a subquery:
CREATE OR REPLACE FUNCTION insert_dumb(itemid int, count int DEFAULT 1) RETURNS SETOF bigint AS
$func$
DECLARE
ids bigint[] DEFAULT '{}';
BEGIN
FOR counter IN 1..count LOOP
ids := array_append(ids, counter::bigint);
END LOOP;
RETURN QUERY SELECT unnest(ids);
END
$func$
LANGUAGE plpgsql;
and this works in a subquery as expected:
$ select id, itemid from some_table where id in (select insert_dumb(123, 10));
id | itemid
----+--------
1 | 123
2 | 123
3 | 123
4 | 123
5 | 123
6 | 123
7 | 123
8 | 123
9 | 123
10 | 123
(10 rows)
Why does the insert_item function not insert new rows when called as a subquery? I tried adding RAISE NOTICE to the loop, and it runs as expected, printing the new id every time (and increasing the sequence), but no new rows are appended to the table.
I made all the setup available as fiddle
I am using Postgres 11 on Ubuntu.
EDIT
Of course, I left out my real reason, and it pays off...
I need the insert_item function returning ids, so I could use it in update-statement, like:
update some_table set some_text = 'x' where id in (select insert_item(123, 10));
And addition to the why-question: it is understandable I can get no ids in return (because they share the same snapshot), but the function runs all the needed INSERTs without affecting the table. Shouldn't those rows be available in the next query?
The problem is that the subquery and the surrounding query share the same snapshot, that is, they see the same state of the database. Hence the outer query cannot see the rows inserted by the inner query.
See the documentation (which explains that in the context of WITH, although it also applies here):
The sub-statements in WITH are executed concurrently with each other and with the main query. Therefore, when using data-modifying statements in WITH, the order in which the specified updates actually happen is unpredictable. All the statements are executed with the same snapshot (see Chapter 13), so they cannot “see” one another's effects on the target tables.
In addition, there is a second problem with your approach: if you run EXPLAIN (ANALYZE) on your statement, you will find that the subquery is not executed at all! Since the table is empty, there is no id, and running the subquery is not necessary to calculate the (empty) result.
You will have to run that in two different statements. Or, better, do it in a different fashion: updating a row that you just inserted is unnecessarily wasteful.
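One way to split it into two statements while still keeping the generated ids around is a temporary table (a sketch; the some_text column comes from the question's edit and is not part of the DDL shown above):

```sql
-- first statement: run the inserts and capture the returned ids
CREATE TEMP TABLE new_ids AS
SELECT insert_item(123, 10) AS id;

-- second statement: a fresh snapshot, so the inserted rows are visible now
UPDATE some_table
SET some_text = 'x'
WHERE id IN (SELECT id FROM new_ids);
```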
Laurenz explained the visibility problem, but you don't need the sub-query at all if you re-write your function to return the actual table rather than just the IDs:
CREATE OR REPLACE FUNCTION insert_item(itemid int, count int DEFAULT 1)
RETURNS setof some_table
AS
$func$
INSERT INTO some_table (id, itemid, serialno)
select NEXTVAL( 'some_table_id_seq' ), itemid, currval('some_table_id_seq')
from generate_series(1,count)
returning *;
$func$
LANGUAGE sql;
Then you can use it like this:
select id, itemid
from insert_item(123, 10);
And you get the complete inserted rows.
Online example
There is a modes table with attributes link, name, anotherAtrr.
link and name are string values and unique.
The task is to query all the existing rows in the table, multiply them into N new entries, and insert them into the same table, while keeping the link and name fields unique (for example, by appending a random number to the existing values in these fields).
insert into modes (link, name, "anotherAtrr")
select * from modes
This code will give an error, because the uniqueness of the first 2 columns is violated.
modes
create table if not exists modes
(
link varchar,
name varchar,
"anotherAtrr" integer
);
alter table modes owner to postgres;
create unique index if not exists modes_link_uindex
on modes (link);
create unique index if not exists modes_name_uindex
on modes (name);
and then there must be N times as many rows.
N is the number of rows that I would like to get in a single query (i.e. duplicate rows based on the available ones, but considering the uniqueness of some attributes).
Who has any ideas on how to write this? Can you provide code with explanations?
You can join on a generate_series expression.
Step by step...
Create a table expression returning n rows with an integer field:
SELECT * FROM generate_series(0,4) AS ser(nr);
nr
----
0
1
2
3
4
(5 rows)
modes contains:
SELECT * FROM modes;
link | name | anotherAtrr
------+------+-------------
foo | bar | 42
(1 row)
Select everything from the existing table and cross join it with the generated data (JOIN on true):
SELECT *
FROM modes
JOIN generate_series(0,4) AS ser(nr) ON true;
link | name | anotherAtrr | nr
------+------+-------------+----
foo | bar | 42 | 0
foo | bar | 42 | 1
foo | bar | 42 | 2
foo | bar | 42 | 3
foo | bar | 42 | 4
(5 rows)
Now combine the fields by concatenating the numbers to the string values:
INSERT INTO modes (link, name, "anotherAtrr")
SELECT t1.link ||'_'|| ser.nr, t1.name ||'_'|| ser.nr, t1."anotherAtrr"
FROM modes t1
JOIN generate_series(0,4) AS ser(nr) ON true;
Could also be written as:
WITH ser AS (
SELECT * FROM generate_series(0,4) AS nr
)
INSERT INTO modes (link, name , "anotherAtrr")
SELECT t1.link ||'_'||ser.nr, t1.name||'_'||ser.nr , t1."anotherAtrr"
FROM modes t1
JOIN ser ON true;
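After running either INSERT, modes should contain the original row plus five suffixed copies (a sketch of the expected state, assuming only the single foo/bar row shown above):

```sql
SELECT * FROM modes ORDER BY link;
--  link  | name  | anotherAtrr
-- -------+-------+-------------
--  foo   | bar   |          42
--  foo_0 | bar_0 |          42
--  foo_1 | bar_1 |          42
--  foo_2 | bar_2 |          42
--  foo_3 | bar_3 |          42
--  foo_4 | bar_4 |          42
```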
Unrelated:
double quoting identifiers creates more problems than it solves.
If you need a table with generated data and want to control the number of rows, you can also add parameters to the function...
create table foo
(
id integer not null
constraint foo_pkey
primary key,
name text,
data timestamp
);
CREATE OR REPLACE FUNCTION add_table_foo()
RETURNS void AS
$$
DECLARE
startId INT;
endId INT;
nameVar TEXT;
dateVar TIMESTAMP;
period INT default 0;
stepTimeZone INT default 0;
begin
startId := 1;
endId := 20;
for idVar in startId..endId
loop
nameVar := md5(random()::text);
stepTimeZone := 3;
dateVar :=
(now()::timestamp with time zone +
make_interval(hours := stepTimeZone) +
make_interval(days := period)
);
insert into foo(id, name, data) values (idVar, nameVar, dateVar);
period := period + 1;
end loop;
end ;
$$ LANGUAGE plpgsql;
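The same data can also be generated without a loop; a set-based sketch that mirrors the loop above (fixed 3-hour offset, day step increasing by one per row):

```sql
-- equivalent of add_table_foo() as a single INSERT ... SELECT;
-- g plays the role of idVar, and g - 1 the role of period
INSERT INTO foo (id, name, data)
SELECT g,
       md5(random()::text),
       now()::timestamp
         + make_interval(hours => 3)
         + make_interval(days => g - 1)
FROM generate_series(1, 20) AS g;
```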
I have a table like this:
id | group_id | parent_group
---+----------+-------------
1 | 1 | null
2 | 1 | null
3 | 2 | 1
4 | 2 | 1
Is it possible to add a constraint such that a row is automatically deleted when there is no row with a group_id equal to the row's parent_group? For example, if I delete rows 1 and 2, I want rows 3 and 4 to be deleted automatically because there are no more rows with group_id 1.
The answer that clemens posted led me to the following solution. I'm not very familiar with triggers though; could there be any problems with this and is there a better way to do it?
CREATE OR REPLACE FUNCTION on_group_deleted() RETURNS TRIGGER AS $$
BEGIN
IF NOT EXISTS (SELECT 1 FROM my_table WHERE group_id = OLD.group_id) THEN
DELETE FROM my_table WHERE parent_group = OLD.group_id;
END IF;
RETURN OLD;
END;
$$ LANGUAGE PLPGSQL;
CREATE TRIGGER my_table_delete_trigger AFTER DELETE ON my_table
FOR EACH ROW
EXECUTE PROCEDURE on_group_deleted();
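To see the cascade in action (a sketch, assuming my_table holds the four rows from the question):

```sql
-- delete the last remaining rows of group 1
DELETE FROM my_table WHERE id IN (1, 2);

-- the AFTER DELETE trigger notices that group_id = 1 is now empty
-- and deletes the rows with parent_group = 1; those deletes fire the
-- trigger again, so deeper hierarchies cascade as well
SELECT * FROM my_table;
-- 0 rows: rows 3 and 4 were removed by the trigger
```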
I have a table which contains multiple rows for a user, holding their station_ids. When a station ID is changed from the front end via drop down button, I want to update the station ID in the table. I only want there to ever be one TRUE value for "is_default_station", but a user can have multiple FALSE values. I am using postgres 9.5, and the PG drive for NodeJS.
My table looks like this:
station_id | station_name | user_id | is_default_station
-----------+--------------+---------+-------------------
         1 | station 1    |       1 | TRUE
         2 | station 2    |       1 | FALSE
         3 | station 3    |       1 | FALSE
         4 | station 4    |       2 | FALSE
         5 | station 5    |       2 | FALSE
         6 | station 6    |       2 | TRUE
Here is my function:
CREATE OR REPLACE FUNCTION UPDATE_All_STATIONS_FUNC (
userId INTEGER,
stationId INTEGER
)
RETURNS RECORD AS $$
DECLARE
ret RECORD;
BEGIN
--Find all the user stations associated to a user, and set them to false. Then, update one to TRUE
UPDATE user_stations SET (is_default_station) = (FALSE) WHERE station_id = ALL (SELECT station_id FROM user_stations WHERE user_id =$1 AND is_default_station = TRUE);
UPDATE user_stations SET (is_default_station) = (TRUE) WHERE station_id =$2 AND user_id = $1 RETURNING user_id, station_id INTO ret;
RETURN ret;
END;
$$ LANGUAGE plpgsql;
I am accessing the function like so:
SELECT user_id, station_id FROM update_all_stations_func($1, $2) AS (user_id INTEGER, station_id INTEGER)
The function is not updating anything in the DB, and returns null values for user_id and station_id, like so: rows: [ { user_id: null, dashboard_id: null } ].
I am guessing that the initial update query with the nested SELECT is not finding anything inside the function, but if I use the first query alone to update, I find results and it updates as expected. What am I missing?
I simplified the first update statement, and the following works as expected:
CREATE OR REPLACE FUNCTION UPDATE_All_STATIONS_FUNC (
userId INTEGER,
stationId INTEGER
)
RETURNS RECORD AS $$
DECLARE
ret RECORD;
BEGIN
--Find all the user stations associated to a user, and set them to false. Then, update one to TRUE
UPDATE user_stations SET (is_default_station) = (FALSE) WHERE user_id = $1 AND station_id <> $2;
UPDATE user_stations SET (is_default_station) = (TRUE) WHERE user_id = $1 AND station_id = $2 RETURNING user_id, station_id INTO ret;
RETURN ret;
END;
$$ LANGUAGE plpgsql;
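As a safety net, the one-TRUE-per-user rule can also be enforced declaratively with a partial unique index, so no code path can ever leave a user with two default stations (a sketch, assuming the user_stations table above):

```sql
-- at most one row per user_id may have is_default_station = TRUE;
-- any number of FALSE rows are still allowed
CREATE UNIQUE INDEX one_default_per_user
ON user_stations (user_id)
WHERE is_default_station;
```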
I have a table Table_A:
\d "Table_A";
Table "public.Table_A"
Column | Type | Modifiers
----------+---------+-------------------------------------------------------------
id | integer | not null default nextval('"Table_A_id_seq"'::regclass)
field1 | bigint |
field2 | bigint |
and now I want to add a new column. So I run:
ALTER TABLE "Table_A" ADD COLUMN "newId" BIGINT DEFAULT NULL;
now I have:
\d "Table_A";
Table "public.Table_A"
Column | Type | Modifiers
----------+---------+-------------------------------------------------------------
id | integer | not null default nextval('"Table_A_id_seq"'::regclass)
field1 | bigint |
field2 | bigint |
newId | bigint |
And I want newId to be filled with the same value as id for new/updated rows.
I created the following function and trigger:
CREATE OR REPLACE FUNCTION autoFillNewId() RETURNS TRIGGER AS $$
BEGIN
NEW."newId" := NEW."id";
RETURN NEW;
END $$ LANGUAGE plpgsql;
CREATE TRIGGER "newIdAutoFill" AFTER INSERT OR UPDATE ON "Table_A" EXECUTE PROCEDURE autoFillNewId();
Now if I insert something with:
INSERT INTO "Table_A" values (97, 1, 97);
newId is not filled:
select * from "Table_A" where id = 97;
id | field1 | field2 | newId
----+----------+----------+-------
97 | 1 | 97 |
Note: I also tried with FOR EACH ROW from some answer here in SO
What's missing me?
You need a BEFORE INSERT OR UPDATE ... FOR EACH ROW trigger to make this work:
CREATE TRIGGER "newIdAutoFill"
BEFORE INSERT OR UPDATE ON "Table_A"
FOR EACH ROW EXECUTE PROCEDURE autoFillNewId();
A BEFORE trigger takes place before the new row is inserted or updated, so you can still make changes to the field values. An AFTER trigger is useful to implement side effects, like auditing of changes or cascading changes to other tables.
By default, triggers are FOR EACH STATEMENT and then the NEW parameter is not defined (because the trigger does not operate on a row). So you have to specify FOR EACH ROW.
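With the corrected BEFORE ... FOR EACH ROW trigger in place, the insert from the question now fills the column (a sketch):

```sql
INSERT INTO "Table_A" VALUES (98, 1, 98);

-- the trigger copied id into "newId" before the row was stored
SELECT id, "newId" FROM "Table_A" WHERE id = 98;
--  id | newId
-- ----+-------
--  98 |    98
```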