Weird now() time difference with Postgres triggers

In a Postgres 10.10 database, I have a table table1, and an AFTER INSERT trigger on table1 that inserts into table2:
CREATE TABLE table1 (
  id SERIAL PRIMARY KEY,
  -- other cols
  created_at timestamp with time zone NOT NULL,
  updated_at timestamp with time zone NOT NULL
);
CREATE UNIQUE INDEX table1_pkey ON table1(id int4_ops);
CREATE TABLE table2 (
  id SERIAL PRIMARY KEY,
  table1_id integer NOT NULL REFERENCES table1(id) ON UPDATE CASCADE,
  -- other cols (not used in query)
  created_at timestamp with time zone NOT NULL,
  updated_at timestamp with time zone NOT NULL
);
CREATE UNIQUE INDEX table2_pkey ON table2(id int4_ops);
This query is executed on application start:
CREATE OR REPLACE FUNCTION after_insert_table1()
RETURNS trigger AS
$$
BEGIN
  INSERT INTO table2 (table1_id, ..., created_at, updated_at)
  VALUES (NEW.id, ..., 'now', 'now');
  RETURN NEW;
END;
$$
LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS after_insert_table1 ON "table1";
CREATE TRIGGER after_insert_table1
AFTER INSERT ON "table1"
FOR EACH ROW
EXECUTE PROCEDURE after_insert_table1();
I noticed that some created_at and updated_at values in table2 differ from those in table1. In fact, table2 mostly has older values.
Here are 10 sequential entries, which show the difference jumping around by a huge amount within a few minutes:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|2000 |2019-11-07 22:29:47.245+00|2019-11-07 19:51:09.727021+00|-02:38:37.517979|
|2001 |2019-11-07 22:30:02.256+00|2019-11-07 13:18:29.45962+00 |-09:11:32.79638 |
|2002 |2019-11-07 22:30:43.021+00|2019-11-07 13:44:12.099577+00|-08:46:30.921423|
|2003 |2019-11-07 22:31:00.794+00|2019-11-07 19:51:09.727021+00|-02:39:51.066979|
|2004 |2019-11-07 22:31:11.315+00|2019-11-07 13:18:29.45962+00 |-09:12:41.85538 |
|2005 |2019-11-07 22:31:27.234+00|2019-11-07 13:44:12.099577+00|-08:47:15.134423|
|2006 |2019-11-07 22:31:47.436+00|2019-11-07 13:18:29.45962+00 |-09:13:17.97638 |
|2007 |2019-11-07 22:33:19.484+00|2019-11-07 17:22:48.129063+00|-05:10:31.354937|
|2008 |2019-11-07 22:33:51.607+00|2019-11-07 19:51:09.727021+00|-02:42:41.879979|
|2009 |2019-11-07 22:34:28.786+00|2019-11-07 13:18:29.45962+00 |-09:15:59.32638 |
|2010 |2019-11-07 22:36:50.242+00|2019-11-07 13:18:29.45962+00 |-09:18:20.78238 |
Sequential entries have similar differences (mostly negative/mostly positive) and similar orders of magnitude (mostly minutes vs. mostly hours) within the sequence, though there are exceptions.
Here are the top 5 largest positive differences:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|1630 |2019-10-25 21:12:14.971+00|2019-10-26 00:52:09.376+00 |03:39:54.405 |
|950 |2019-09-16 12:36:07.185+00|2019-09-16 14:07:35.504+00 |01:31:28.319 |
|1677 |2019-10-26 22:19:12.087+00|2019-10-26 23:38:34.102+00 |01:19:22.015 |
|58 |2018-12-08 20:11:20.306+00|2018-12-08 21:06:42.246+00 |00:55:21.94 |
|171 |2018-12-17 22:24:57.691+00|2018-12-17 23:16:05.992+00 |00:51:08.301 |
Here are the top 5 largest negative differences:
|table1_id|table1_created |table2_created |diff |
|---------|--------------------------|-----------------------------|----------------|
|1427 |2019-10-15 16:03:43.641+00|2019-10-14 17:59:41.57749+00 |-22:04:02.06351 |
|1426 |2019-10-15 13:26:07.314+00|2019-10-14 18:00:50.930513+00|-19:25:16.383487|
|1424 |2019-10-15 13:13:44.092+00|2019-10-14 18:00:50.930513+00|-19:12:53.161487|
|4416 |2020-01-11 00:15:03.751+00|2020-01-10 08:43:19.668399+00|-15:31:44.082601|
|4420 |2020-01-11 01:58:32.541+00|2020-01-10 11:04:19.288023+00|-14:54:13.252977|
Negative differences outnumber positive differences 10x. The database timezone is UTC.
table2.table1_id is a foreign key, so it should be impossible to insert into table2 before the insert on table1 completes.
table1.created_at is set by Sequelize, using option timestamps: true on the model.
When a row is inserted into table1, it's done inside a transaction. From the documentation I can find, triggers are executed inside the same transaction, so I can't think of a reason for this.
I can fix the issue by changing my trigger to use NEW.created_at instead of 'now', but I'm curious if anyone has any idea what the cause of this bug is?
Here is the query used to produce the above difference tables:
SELECT
  table1.id AS table1_id,
  table1.created_at AS table1_created,
  table2.created_at AS table2_created,
  (table2.created_at - table1.created_at) AS diff
FROM table1
INNER JOIN table2 ON
  table2.table1_id = table1.id AND (
    (table2.created_at - table1.created_at) > '2 min' OR
    (table1.created_at - table2.created_at) > '2 min')
ORDER BY diff;

While 'now' is not a plain string, it is also not a function call in this context, but a special date/time input value. The manual:
... simply notational shorthands that will be converted to ordinary date/time values when read. (In particular, now and related strings are converted to a specific time value as soon as they are read.)
The body of a PL/pgSQL function is stored as a string; each nested SQL command is parsed and prepared when control first reaches it in a session. The manual:
The PL/pgSQL interpreter parses the function's source text and
produces an internal binary instruction tree the first time the
function is called (within each session). The instruction tree fully
translates the PL/pgSQL statement structure, but individual SQL
expressions and SQL commands used in the function are not translated
immediately.
As each expression and SQL command is first executed in the function,
the PL/pgSQL interpreter parses and analyzes the command to create a
prepared statement, using the SPI manager's SPI_prepare function.
Subsequent visits to that expression or command reuse the prepared statement.
There is more. Read on. But that's enough for our case:
The first time the trigger is executed per session, 'now' is translated to the current timestamp (the transaction timestamp). While doing more inserts in that same transaction, there won't be any difference to transaction_timestamp() because that is stable within a transaction by design.
But every subsequent transaction in the same session will insert the same, constant timestamp in table2, while values for table1 may be anything (not sure what Sequelize does there). If new values in table1 are the then current timestamp, that results in a "negative" diff in your test. (Timestamps in table2 will be older.)
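You can reproduce the effect in a single psql session. A minimal sketch, assuming a hypothetical table t and autocommit (so each statement runs in its own transaction):
CREATE TABLE t (ts timestamptz);

CREATE OR REPLACE FUNCTION ins_t()
RETURNS void LANGUAGE plpgsql AS
$$
BEGIN
  INSERT INTO t VALUES ('now');  -- string literal, frozen at first parse
END
$$;

SELECT ins_t();    -- 1st call: 'now' is coerced to the current timestamp
SELECT pg_sleep(2);
SELECT ins_t();    -- 2nd call reuses the prepared statement and its constant
SELECT * FROM t;   -- both rows carry the same timestamp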
Solution
Situations where you actually want 'now' are few and far between. Typically, you want the function now() (without single quotes!) - which is equivalent to CURRENT_TIMESTAMP (standard SQL) and transaction_timestamp(). Related (recommended reading!):
Difference between now() and current_timestamp
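The difference is easy to see inside a transaction; a quick sketch (now(), CURRENT_TIMESTAMP and transaction_timestamp() are all frozen at transaction start, while clock_timestamp() keeps advancing):
BEGIN;
SELECT now() = transaction_timestamp();  -- t
SELECT pg_sleep(1);
SELECT now() = CURRENT_TIMESTAMP;        -- t, still frozen at transaction start
SELECT now() < clock_timestamp();        -- t, the wall clock has moved on
COMMIT;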
In your particular case I suggest column defaults instead of doing additional work in triggers. If you set the same default now() in table1 and table2, you also eliminate any nonsense the INSERT to table1 might add. And you never have to even mention these columns in inserts any more:
CREATE TABLE table1 (
  id SERIAL PRIMARY KEY,
  -- other cols
  created_at timestamptz NOT NULL DEFAULT now(),
  updated_at timestamptz NOT NULL DEFAULT now() -- or leave this one NULL?
);
CREATE TABLE table2 (
  id SERIAL PRIMARY KEY,
  table1_id integer NOT NULL REFERENCES table1(id) ON UPDATE CASCADE,
  -- other cols (not used in query)
  created_at timestamptz NOT NULL DEFAULT now(), -- not 'now'!
  updated_at timestamptz NOT NULL DEFAULT now() -- or leave this one NULL?
);
CREATE OR REPLACE FUNCTION after_insert_table1()
RETURNS trigger LANGUAGE plpgsql AS
$$
BEGIN
  INSERT INTO table2 (table1_id) -- more columns? but not: created_at, updated_at
  VALUES (NEW.id);               -- more columns?
  RETURN NULL; -- can be NULL for AFTER trigger
END
$$;

Related

Determine next auto_increment value before an INSERT in Postgres

I am in the process of switching from MariaDB to Postgres and have run into a small issue. There are times when I need to establish the next AUTO_INCREMENT value prior to making an INSERT. This is because the INSERT has an impact on a few other tables that would be quite messy to repair if done after the INSERT itself. In MySQL/MariaDB this was easy. I simply did
"SELECT AUTO_INCREMENT
FROM information_schema.tables
WHERE table_name = 'users'
AND table_schema = DATABASE( ) ;";
and used the returned value to pre-correct the other tables prior to making the actual INSERT. I am aware that with Postgres one can use RETURNING with INSERT and UPDATE statements. However, this would require a post-INSERT correction to the other tables, which in turn would involve breaking code that has been tested and proven to work. I imagine that there is a way to find the next AUTO_INCREMENT, but I have been unable to find it. Amongst other things I tried nextval('users_id_seq'), which did not do anything useful.
To port my original MariaDB schema over to Postgres I edited the SQL emitted by Adminer with the MariaDB version to ensure it works with Postgres. This mostly involved changing INT(11) to INTEGER, TINYINT(3) to SMALLINT, VARCHAR to CHARACTER VARYING, etc. With the auto-increment columns I read up a bit and concluded that I needed to use SERIAL instead. So the typical SQL I fed to Postgres was like this
CREATE TABLE "users"
(
"id" SERIAL NOT NULL,
"bid" INTEGER NOT NULL DEFAULT 0,
"gid" INTEGER NOT NULL DEFAULT 0,
"sid" INTEGER NOT NULL DEFAULT 0,
"s1" character varying(64)NOT NULL,
"s2" character varying(64)NOT NULL,
"name" character varying(64)NOT NULL,
"apik" character varying(128)NOT NULL,
"email" character varying(192)NOT NULL,
"gsm" character varying(64)NOT NULL,
"rights" character varying(64)NOT NULL,
"managed" character varying(256)NOT NULL DEFAULT
'M_BepHJXALYpLyOjHxVGWJnlAMqxv0KNENmcYA,,',
"senior" SMALLINT NOT NULL DEFAULT 0,
"refs" INTEGER NOT NULL DEFAULT 0,
"verified" SMALLINT NOT NULL DEFAULT 0,
"vkey" character varying(64)NOT NULL,
"lang" SMALLINT NOT NULL DEFAULT 0,
"leader" INTEGER NOT NULL
);
This SQL run from Adminer works correctly. However, when I then try to get Adminer to export the new users table in Postgres it gives me
CREATE TABLE "public"."users"
(
"id" integer DEFAULT nextval('users_id_seq') NOT NULL,
"bid" integer DEFAULT 0 NOT NULL,
It is perhaps possible that I have gone about things incorrectly when porting over the AUTO_INCREMENT columns - in which case there is still time to correct the error.
If you used serial in the column definition then you have a sequence named TABLE_COLUMN_seq in the same namespace as the table (where TABLE and COLUMN are, respectively, the names of the table and the column). You can just do:
SELECT nextval('TABLE_COLUMN_seq');
I see you have tried that, can you show your CREATE TABLE statement so that we can check all names are ok?
As documented in the manual, serial is not a "real" data type; it's just a shortcut for a column that takes its default value from a sequence.
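Per that manual section, serial is roughly shorthand for the following (sketched here for the users table; the sequence name follows the TABLE_COLUMN_seq pattern):
CREATE SEQUENCE users_id_seq;
CREATE TABLE users (
    id integer NOT NULL DEFAULT nextval('users_id_seq')
);
ALTER SEQUENCE users_id_seq OWNED BY users.id;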
If you need the generated value in your code before inserting, use nextval() then use the value you got in your insert statement:
In PL/pgSQL this would be something like the following. The exact syntax obviously depends on the programming language you use:
declare
  l_userid integer;
begin
  l_userid := nextval('users_id_seq');
  -- do something with that value
  insert into users (id, ...)
  values (l_userid, ...);
end;
It is important that you never pass a value to the insert statement that was not generated by the sequence. Postgres will not automagically sync the sequence values with "manually" provided values.
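A small sketch of that pitfall with a hypothetical table: a manually supplied id leaves the sequence untouched, so the next generated value collides:
CREATE TABLE demo (id serial PRIMARY KEY, name text);
INSERT INTO demo (id, name) VALUES (1, 'alice');  -- manual id, sequence not advanced
INSERT INTO demo (name) VALUES ('bob');           -- nextval() also yields 1
-- ERROR:  duplicate key value violates unique constraint "demo_pkey"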
You can select last_value + 1 from the sequence itself, e.g.:
t=# create table so109(i serial,n int);
CREATE TABLE
Time: 2.585 ms
t=# insert into so109(n) select i from generate_series(1,22,1) i;
INSERT 0 22
Time: 1.236 ms
t=# select * from so109_i_seq ;
sequence_name | last_value | start_value | increment_by | max_value | min_value | cache_value | log_cnt | is_cycled | is_called
---------------+------------+-------------+--------------+---------------------+-----------+-------------+---------+-----------+-----------
so109_i_seq | 22 | 1 | 1 | 9223372036854775807 | 1 | 1 | 11 | f | t
(1 row)
or use currval, eg:
t=# select currval('so109_i_seq')+1;
?column?
----------
23
(1 row)
UPDATE
While this answer gives an idea of how to determine the next auto_increment value before an INSERT in Postgres (which is the title), the proposed methods would not fit the needs of the post itself. If you are looking for a "replacement" for the RETURNING clause of an INSERT statement, the better way is actually "reserving" the value with nextval, just as fog proposed. That way concurrent transactions cannot get the same value twice...

Set the value of a column to its default value

I have a few existing tables in which I have to modify various columns to have a default value.
How can I apply the default value to old records that are NULL, so that the old records will be consistent with the new ones?
ALTER TABLE "mytable" ALTER COLUMN "my_column" SET DEFAULT NOW();
After modifying, the table looks something like this ...
Table "public.mytable"
Column | Type | Modifiers
-------------+-----------------------------+-----------------------------------------------
id | integer | not null default nextval('mytable_id_seq'::regclass)
....
my_column | timestamp(0) with time zone | default now()
Indexes:
"mytable_pkey" PRIMARY KEY, btree (id)
Is there a simple way to set all columns that are currently NULL, and that have a default value, to that default value?
Quoting the documentation for INSERT:
For clarity, you can also request default values explicitly, for individual columns or for the entire row:
INSERT INTO products (product_no, name, price) VALUES (1, 'Cheese', DEFAULT);
INSERT INTO products DEFAULT VALUES;
I just tried this, and it is as simple as
update mytable
set my_column = default
where my_column is null
See sqlfiddle
Edit: Olaf's answer is the easiest and correct way of doing this; however, the below is also a viable solution for most cases.
For each column, it is easy to use information_schema to get the default value of the column, and then use that in an UPDATE statement:
UPDATE mytable
SET my_column = (
  SELECT column_default
  FROM information_schema.columns
  WHERE (table_schema, table_name, column_name) = ('public', 'mytable', 'my_column')
)::timestamp
WHERE my_column IS NULL;
Note that the sub-query must be typecast to the corresponding column data type.
Also, this statement will not evaluate expressions, since column_default is of type character varying; it works for now() but not for expressions like (now() + interval '7 days').
For those, it is better to fetch the expression, validate it, and then apply it manually, as sketched below.
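A sketch of that manual approach, reusing the information_schema query from above:
-- 1. Fetch the default expression for inspection:
SELECT column_default
FROM information_schema.columns
WHERE (table_schema, table_name, column_name) = ('public', 'mytable', 'my_column');
-- suppose it returns: (now() + '7 days'::interval)

-- 2. After validating it, apply the expression by hand:
UPDATE mytable
SET my_column = now() + '7 days'::interval
WHERE my_column IS NULL;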

postgresql - insert result of query SELECT EXTRACT into another table

I have the following table in postgresql (table1):
var1,
var2,
var3,
timestamp1 timestamp without time zone NOT NULL,
timestamp2 timestamp without time zone NOT NULL,
diff double precision,
The column diff is empty.
I calculate the variable diff by the following code:
SELECT EXTRACT(EPOCH FROM ((timestamp1 - timestamp2)/1800))
I want to insert the result of this operation into the diff column of table1.
I wrote the following code, but it does not work:
CREATE TEMPORARY TABLE temptablename AS
SELECT EXTRACT(EPOCH FROM ((timestamp1 - timestamp2)/1800)) AS diff2 from table1;
INSERT INTO table1 (diff) SELECT diff2 FROM temptablename;
ERROR: null value in column "" violates not-null constraint
DETAIL: Failing row contains (null, null, null, null, null,83).
Assuming your arithmetic is right, it sounds like you just need an update statement.
update table1
set diff = extract(epoch from ((timestamp1 - timestamp2)/1800))
where diff is null;
The WHERE clause isn't strictly necessary, since you already know that column is empty. But it guards against overwriting values the second time you run that statement.

Calling now() in a function

One of our Postgres tables, called rep_event, has a timestamp column that indicates when each row was inserted. But all of the rows have a timestamp value of 2000-01-01 00:00:00, so something isn't set up right.
There is a function that inserts rows into the table, and it is the only code that inserts rows into that table - no other code inserts into that table. (There also isn't any code that updates the rows in that table.) Here is the definition of the function:
CREATE FUNCTION handle_event() RETURNS trigger
AS $$
BEGIN
  IF (TG_OP = 'DELETE') THEN
    INSERT INTO rep_event SELECT 'D', TG_RELNAME, OLD.object_id, now();
    RETURN OLD;
  ELSIF (TG_OP = 'UPDATE') THEN
    INSERT INTO rep_event SELECT 'U', TG_RELNAME, NEW.object_id, now();
    RETURN NEW;
  ELSIF (TG_OP = 'INSERT') THEN
    INSERT INTO rep_event SELECT 'I', TG_RELNAME, NEW.object_id, now();
    RETURN NEW;
  END IF;
  RETURN NULL;
END;
$$ LANGUAGE plpgsql;
Here is the table definition:
CREATE TABLE rep_event
(
  operation character(1) NOT NULL,
  table_name text NOT NULL,
  object_id bigint NOT NULL,
  time_stamp timestamp without time zone NOT NULL
)
As you can see, the now() function is called to get the current time. Doing a "select now()" on the database returns the correct time, so is there an issue with calling now() from within a function?
A simpler solution is to just modify your table definition to have NOW() be the default value:
CREATE TABLE rep_event (
  operation character(1) NOT NULL,
  table_name text NOT NULL,
  object_id bigint NOT NULL,
  time_stamp timestamp without time zone NOT NULL DEFAULT NOW()
);
Then you can get rid of the now() calls in your trigger.
Also, as a side note, I strongly suggest including the column list in your function; in other words:
INSERT INTO rep_event (operation,table_name,object_id,time_stamp) SELECT ...
This way if you ever add a new column or make other table changes that change the internal ordering of the tables, your function won't suddenly break.
Your problem has to be elsewhere, as your function works well. Create a test database, paste the code you cited, and run:
create table events (object_id bigserial, data text);
create trigger rep_event
before insert or update or delete on events
for each row execute procedure handle_event();
insert into events (data) values ('v1'),('v2'),('v3');
delete from events where data='v2';
update events set data='v4' where data='v3';
select * from events;
object_id | data
-----------+------
1 | v1
3 | v4
select * from rep_event;
operation | table_name | object_id | time_stamp
-----------+------------+-----------+----------------------------
I | events | 1 | 2011-07-08 10:31:50.489947
I | events | 2 | 2011-07-08 10:31:50.489947
I | events | 3 | 2011-07-08 10:31:50.489947
D | events | 2 | 2011-07-08 10:32:12.65699
U | events | 3 | 2011-07-08 10:32:33.662936
(5 rows)
Check the other triggers, the trigger creation command, etc. And change this timestamp without time zone to timestamp with time zone.
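If you take that last piece of advice, the column type can be changed in place; a sketch, assuming the stored values represent UTC:
ALTER TABLE rep_event
ALTER COLUMN time_stamp TYPE timestamp with time zone
USING time_stamp AT TIME ZONE 'UTC';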

PostgreSQL: Auto-increment based on multi-column unique constraint

One of my tables has the following definition:
CREATE TABLE incidents
(
  id serial NOT NULL,
  report integer NOT NULL,
  year integer NOT NULL,
  month integer NOT NULL,
  number integer NOT NULL, -- report serial number for this period
  ...
  PRIMARY KEY (id),
  UNIQUE (report, year, month, number)
);
How would you go about incrementing the number column for every report, year, and month independently? I'd like to avoid creating a sequence or table for each (report, year, month) set.
It would be nice if PostgreSQL supported incrementing "on a secondary column in a multiple-column index" like MySQL's MyISAM tables, but I couldn't find a mention of such a feature in the manual.
An obvious solution is to select the current value in the table + 1, but this is not safe for concurrent sessions. Maybe a pre-insert trigger would work, but are they guaranteed to be non-concurrent?
Also note that I'm inserting incidents individually, so I can't use generate_series as suggested elsewhere.
It would be nice if PostgreSQL supported incrementing "on a secondary column in a multiple-column index" like MySQL's MyISAM tables
Yeah, but note that in doing so, MyISAM locks your entire table. Which then makes it safe to find the biggest +1 without worrying about concurrent transactions.
In Postgres, you can do this too, and without locking the whole table. An advisory lock and a trigger will be good enough:
CREATE TYPE animal_grp AS ENUM ('fish','mammal','bird');

CREATE TABLE animals (
  grp animal_grp NOT NULL,
  id INT NOT NULL DEFAULT 0,
  name varchar NOT NULL,
  PRIMARY KEY (grp, id)
);
CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
DECLARE
  _rel_id constant int := 'animals'::regclass::int;
  _grp_id int;
BEGIN
  _grp_id := array_length(enum_range(NULL, NEW.grp), 1);
  -- Obtain an advisory lock on this table/group.
  PERFORM pg_advisory_lock(_rel_id, _grp_id);
  SELECT COALESCE(MAX(id) + 1, 1)
  INTO NEW.id
  FROM animals
  WHERE grp = NEW.grp;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto
BEFORE INSERT ON animals
FOR EACH ROW WHEN (NEW.id = 0)
EXECUTE PROCEDURE animals_id_auto();
CREATE OR REPLACE FUNCTION animals_id_auto_unlock()
RETURNS trigger AS $$
DECLARE
  _rel_id constant int := 'animals'::regclass::int;
  _grp_id int;
BEGIN
  _grp_id := array_length(enum_range(NULL, NEW.grp), 1);
  -- Release the lock.
  PERFORM pg_advisory_unlock(_rel_id, _grp_id);
  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

CREATE TRIGGER animals_id_auto_unlock
AFTER INSERT ON animals
FOR EACH ROW
EXECUTE PROCEDURE animals_id_auto_unlock();
INSERT INTO animals (grp,name) VALUES
('mammal','dog'),('mammal','cat'),
('bird','penguin'),('fish','lax'),('mammal','whale'),
('bird','ostrich');
SELECT * FROM animals ORDER BY grp,id;
This yields:
grp | id | name
--------+----+---------
fish | 1 | lax
mammal | 1 | dog
mammal | 2 | cat
mammal | 3 | whale
bird | 1 | penguin
bird | 2 | ostrich
(6 rows)
There is one caveat. Advisory locks are held until released or until the session expires. If an error occurs during the transaction, the lock is kept around and you need to release it manually.
SELECT pg_advisory_unlock('animals'::regclass::int, i)
FROM generate_series(1, array_length(enum_range(NULL::animal_grp),1)) i;
In Postgres 9.1 and later, you can discard the unlock trigger and replace the pg_advisory_lock() call with pg_advisory_xact_lock(). That one is held until, and automatically released at, the end of the transaction.
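A sketch of that 9.1+ variant of the trigger function above:
CREATE OR REPLACE FUNCTION animals_id_auto()
RETURNS trigger AS $$
DECLARE
  _rel_id constant int := 'animals'::regclass::int;
  _grp_id int;
BEGIN
  _grp_id := array_length(enum_range(NULL, NEW.grp), 1);
  -- Held automatically until end of transaction; no unlock trigger needed.
  PERFORM pg_advisory_xact_lock(_rel_id, _grp_id);
  SELECT COALESCE(MAX(id) + 1, 1)
  INTO NEW.id
  FROM animals
  WHERE grp = NEW.grp;
  RETURN NEW;
END;
$$ LANGUAGE plpgsql STRICT;

DROP TRIGGER IF EXISTS animals_id_auto_unlock ON animals;  -- no longer needed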
On a separate note, I'd stick to using a good old sequence. That will make things faster -- even if it's not as pretty-looking when you look at the data.
Lastly, a unique sequence per (year, month) combo could also be obtained by adding an extra table, whose primary key is a serial, and whose (year, month) value has a unique constraint on it.
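A minimal sketch of that extra-table idea (hypothetical names):
CREATE TABLE periods (
  id    serial PRIMARY KEY,
  year  int NOT NULL,
  month int NOT NULL,
  UNIQUE (year, month)
);
-- incidents would then reference periods(id) instead of storing year/month directly.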
I think I found a better solution. It doesn't depend on the grp type (it can be an enum, integer, or string) and can be used in a lot of cases.
myFunc() - the function for the trigger. You can name it as you want.
number - the auto-increment column, which grows independently for each existing value of grp.
grp - the column you want number to be counted within.
myTrigger - the trigger for your table.
myTable - the table on which you want to create the trigger.
unique_grp_number_key - the unique constraint key. We need it to make the pair of values grp and number unique.
ALTER TABLE "myTable"
ADD CONSTRAINT "unique_grp_number_key" UNIQUE(grp, number);
CREATE OR REPLACE FUNCTION myFunc() RETURNS trigger AS $body_start$
BEGIN
SELECT COALESCE(MAX(number) + 1, 1)
INTO NEW.number
FROM "myTable"
WHERE grp = NEW.grp;
RETURN NEW;
END;
$body_start$ LANGUAGE plpgsql;
CREATE TRIGGER myTrigger BEFORE INSERT ON "myTable"
FOR EACH ROW
WHEN (NEW.number IS NULL)
EXECUTE PROCEDURE myFunc();
How does it work? When you insert something into myTable, the trigger fires and checks whether the number field is empty. If it is, myFunc() selects the MAX value of number where grp equals the new grp value you want to insert. It returns that max value + 1, like auto_increment, and replaces the NULL number field with the new auto-increment value. Note that, unlike the advisory-lock solution above, nothing here stops two concurrent transactions from computing the same MAX and colliding on the unique constraint.
This solution is more universal than Denis de Bernardy's because it doesn't depend on the grp type, but thanks to him; his code helped me write my solution.
Maybe it's too late to write an answer, but I couldn't find a general solution for this problem on Stack Overflow, so it may help someone. Enjoy, and thanks for the help!
I think this will help:
http://www.varlena.com/GeneralBits/130.php
Note that in MySQL it is for MyISAM tables only.
P.S. I have tested advisory locks and found them useless with more than one concurrent transaction. I am using two windows of pgAdmin. The first is as simple as possible:
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','dog');
COMMIT;
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','cat');
COMMIT;
ERROR: duplicate key violates unique constraint "animals_pkey"
Second:
BEGIN;
INSERT INTO animals (grp,name) VALUES ('mammal','dog');
INSERT INTO animals (grp,name) VALUES ('mammal','cat');
COMMIT;
ERROR: deadlock detected
SQL state: 40P01
Detail: Process 3764 waits for ExclusiveLock on advisory lock [46462,46496,2,2]; blocked by process 2712.
Process 2712 waits for ShareLock on transaction 136759; blocked by process 3764.
Context: SQL statement "SELECT pg_advisory_lock( $1 , $2 )"
PL/pgSQL function "animals_id_auto" line 15 at perform
And the database is locked and cannot be unlocked - it is unknown what to unlock.