We are experiencing a strange problem with triggers and a sequence generator in Postgres.
We have a trigger on a table that inserts audit log information (some business data before the update, some business data after the update, ...).
The problem is that the order of the generated sequence values doesn't match the timestamps and the data at the time of capture.
We are sure that, functionally, the row with id 7 228 916 happened after the row with id 7 229 471, as the timestamps confirm.
But the sequence values are not in the corresponding order.
At the end of the trigger, here is the insert statement:
INSERT INTO AUDIT_TABLE (ID, ROW_ID, CREATED_AT, ...)
VALUES (nextval('AUDIT_TABLE_SEQ'), NEW.PK_, current_timestamp, ...);
Does anyone have an explanation for this problem?
Thank you for your help
We expect the sequence values to be ordered consistently with the timestamps of the captured events.
insert into table1 (ID, date)
select ID, sysdate
from table2;
Assume I insert a record into table2 with the values ID: 1, date: 2023-1-1.
The expected result is that the ID of table1 is populated from the ID of table2, and the date of table1 is populated from SYSDATE at the time the insert runs.
select *
from table1;
The expected result after running the insert statement would be:

ID  date
1   2023-1-6

but what I get is:

ID  date
1   2023-1-1
I see a few possibilities based on the information given:
You say "the expected result is update the ID of table1 base on the ID from table2" and this begs the question - did ID = 1 exist in table1 BEFORE you ran the INSERT statement? If so are you expecting that the INSERT will update the value for ID #1? Redshift doesn't enforce or check uniqueness of primary keys and you would get 2 rows in the table1 in this case. Is this what is happening?
SYSDATE on Redshift provides the start timestamp of the current transaction, NOT the current statement. Have you had the current transaction open since the 1st?
You didn't COMMIT the results (or the statement failed) and are checking from a different session. It could also be that the transaction started before in the second session before the COMMIT completed. Working with MVCC across multiple sessions can trip anyone up.
There are likely other possible explanations. If you could provide DDL, sample data, and a simple test case so that others can recreate what you are seeing it would greatly narrow down the possibilities.
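To illustrate the second point, here is a quick check you could run (a minimal sketch; Redshift syntax):
BEGIN;
SELECT sysdate;  -- returns the start time of this transaction
-- ... wait a few seconds ...
SELECT sysdate;  -- same value again: SYSDATE is pinned to transaction start
COMMIT;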
If I have a Postgresql db schema for a tires table like this (a user has many tires):
user_id integer
description text
size integer
color text
created_at timestamp
and I want to enforce a constraint that says "a user can only have 4 tires".
A naive way to implement this would be to do:
SELECT COUNT(*) FROM tires WHERE user_id = 123
, compare the result to 4, and insert only if it's lower. It's susceptible to race conditions, which is what makes it a naive approach.
I don't want to add a count column. How can I do this (or can I do this) using an exclusion constraint? If it's not possible with exclusion constraints, what is the canonical way?
The "right" way is using locks here.
First, you need to lock the user's existing rows. Then, insert the new record only if the user has fewer than 4 tires. Here is the SQL:
begin;
-- lock the user's rows; FOR UPDATE is not allowed together with
-- aggregates, so lock in a subquery and count the locked rows
select count(*)
from (
    select 1
    from tires
    where user_id = 123
    for update
) locked;
-- throw an error if there are more than 3
-- insert new row
insert into tires (user_id, description)
values (123, 'tire123');
commit;
You can read more here: How to perform conditional insert based on row count?
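If there may be nothing to lock yet (a user with no tires), a per-user advisory lock serializes the check and the insert just as well. A minimal sketch, assuming user_id fits in a bigint:
begin;
-- serialize all tire inserts for this user; the lock is released at commit/rollback
select pg_advisory_xact_lock(123);
-- insert only while the user still has fewer than 4 tires
insert into tires (user_id, description)
select 123, 'tire123'
where (select count(*) from tires where user_id = 123) < 4;
commit;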
I want a table to contain logs and be circular, so that when the sequence reaches its last key it starts over from one and rewrites the old logs.
I can do this if I mark the sequence as CYCLE and run requests like this:
INSERT INTO logs (initiator_id, target_id, action, message) VALUES ($1, $2, $3, $4)
ON CONFLICT (id) DO UPDATE SET target_id=$2, initiator_id=$1, action=$3, message=$4;
What I don't like is that I need to repeat the same values if there is a conflict (line 2).
What I'm curious about: is this a good way to do it, or are there more practical approaches for tables that you need to clear, let's say, every 3 months?
Or maybe there is a way to generate unique IDs without using sequences?
For now I decided to make a 2-column primary key (date, initiator_id), as a log entry is always one action at a time.
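For reference, a sketch of the cycling-sequence setup described above (the sequence name and MAXVALUE are assumptions); using EXCLUDED also avoids repeating the bind parameters in the update branch:
-- the sequence wraps back to 1 after reaching MAXVALUE
CREATE SEQUENCE logs_id_seq MAXVALUE 100000 CYCLE;
CREATE TABLE logs (
    id           integer PRIMARY KEY DEFAULT nextval('logs_id_seq'),
    initiator_id integer,
    target_id    integer,
    action       text,
    message      text
);
-- overwrite the old entry when the sequence wraps around
INSERT INTO logs (initiator_id, target_id, action, message)
VALUES ($1, $2, $3, $4)
ON CONFLICT (id) DO UPDATE
SET initiator_id = EXCLUDED.initiator_id,
    target_id    = EXCLUDED.target_id,
    action       = EXCLUDED.action,
    message      = EXCLUDED.message;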
Are there any conditions under which records created in a table using a typical auto-increment field would be available for read out of sequence?
For instance, could a record with value 10 ever appear in the result of a select query when the record with value 9 is not yet visible to a select query?
The purpose of my question is… I want to know if it is reliable to use the maximum value retrieved from one query as the lower bound for identifying previously unretrieved values in a later query, or whether that could miss a row.
If that kind of race condition is possible under some circumstances, are there any isolation levels for the select queries that are immune to that problem?
Yes, and good on you for thinking about it.
You can trivially demonstrate this with three concurrent psql sessions, given some table
CREATE TABLE x (
seq serial primary key,
n integer not null
);
then
SESSION 1: BEGIN;
SESSION 2: BEGIN;
SESSION 1: INSERT INTO x(n) VALUES (1);  -- gets seq = 1
SESSION 2: INSERT INTO x(n) VALUES (2);  -- gets seq = 2
SESSION 2: COMMIT;
SESSION 3: SELECT * FROM x;              -- sees only seq = 2
SESSION 1: COMMIT;
SESSION 3: SELECT * FROM x;              -- now sees seq = 1 and seq = 2
It is not safe to assume that for any generated value n, all lower generated values have been used by already-committed or already-aborted transactions. They might be in progress and commit after you see n.
I don't think isolation levels really help you here. There's no mutual dependency for SERIALIZABLE to detect.
This is partly why logical decoding was added, so you can get a consistent stream in commit order.
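A minimal sketch using the built-in test_decoding output plugin (the slot name is an assumption; requires wal_level = logical):
-- create a logical replication slot once
SELECT pg_create_logical_replication_slot('consistent_stream', 'test_decoding');
-- later: consume the changes, which are returned in commit order
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('consistent_stream', NULL, NULL);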
Is there a way to generate some kind of in-order identifier for a table's records?
Suppose that we have two threads doing queries:
Thread 1:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
Thread 2:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'world');
commit;
It's entirely possible (depending on timing) that an external observer would see the (2, 'world') record appear before the (1, 'hello').
That's fine, but I want a way to get all the records in the 'table1' that appeared since the last time the external observer checked it.
So, is there any way to get the records in the order they were inserted? Maybe OIDs can help?
No. Since there is no natural order of rows in a database table, all you have to work with is the values in your table.
Well, there are the Postgres specific system columns cmin and ctid you could abuse to some degree.
The tuple ID (ctid) contains the file block number and position in the block for the row, so it represents the current physical ordering on disk. Later additions will normally have a bigger ctid. Your SELECT statement could look like this:
SELECT *, ctid -- save ctid from last row in last_ctid
FROM tbl
WHERE ctid > last_ctid
ORDER BY ctid
ctid has the data type tid. Example: '(0,9)'::tid
However it is not stable as long-term identifier, since VACUUM or any concurrent UPDATE or some other operations can change the physical location of a tuple at any time. For the duration of a transaction it is stable, though. And if you are just inserting and nothing else, it should work locally for your purpose.
I would add a timestamp column with default now() in addition to the serial column ...
I would also let a column default populate your id column (a serial or IDENTITY column). That retrieves the number from the sequence at a later stage than explicitly fetching and then inserting it, thereby minimizing (but not eliminating) the window for a race condition - the chance that a lower id would be inserted at a later time. Detailed instructions:
Auto increment table column
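A minimal sketch of that combination (table and column names are assumptions):
CREATE TABLE tbl (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    payload    text
);
-- the insert never touches id or created_at; column defaults fill them in
INSERT INTO tbl (payload) VALUES ('hello');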
What you want is to force transactions to commit (making their inserts visible) in the same order that they did the inserts. As far as other clients are concerned the inserts haven't happened until they're committed, since they might roll back and vanish.
This is true even if you don't wrap the inserts in an explicit begin / commit. Transaction commit, even if done implicitly, still doesn't necessarily happen in the same order that the rows themselves were inserted. It's subject to operating system CPU scheduler ordering decisions, etc.
Even if PostgreSQL supported dirty reads this would still be true. Just because you start three inserts in a given order doesn't mean they'll finish in that order.
There is no easy or reliable way to do what you seem to want that will preserve concurrency. You'll need to do your inserts in order on a single worker - or use table locking as Tometzky suggests, which has basically the same effect since only one of your insert threads can be doing anything at any given time.
You can use advisory locking, but the effect is the same.
Using a timestamp won't help, since you don't know if for any two timestamps there's a row with a timestamp between the two that hasn't yet been committed.
You can't rely on an identity column where you read rows only up to the first "gap" because gaps are normal in system-generated columns due to rollbacks.
I think you should step back and look at why you have this requirement and, given this requirement, why you're using individual concurrent inserts.
Maybe you'll be better off doing small-block batched inserts from a single session?
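For example, a sketch using the table and sequence from the question (the batch size is arbitrary):
-- one writer, one transaction per small batch: id order and commit order agree
begin;
insert into table1 (id, value)
values (nextval('table1_seq'), 'hello'),
       (nextval('table1_seq'), 'world'),
       (nextval('table1_seq'), '!');
commit;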
If you mean that any query that sees the world row must also see the hello row, then you'd need to do:
begin;
lock table table1 in share update exclusive mode;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
This share update exclusive mode is the weakest lock mode which is self-exclusive — only one session can hold it at a time.
Be aware that this will not make this sequence gap-less — this is a different issue.
We found another solution with recent PostgreSQL servers, similar to @Erwin's answer but based on txid.
When inserting rows, instead of using a sequence, insert txid_current() as row id. This ID is monotonically increasing on each new transaction.
Then, when selecting rows from the table, add to the WHERE clause id < txid_snapshot_xmin(txid_current_snapshot()).
txid_snapshot_xmin(txid_current_snapshot()) corresponds to the transaction ID of the oldest still-open transaction. Thus, if row 20 is committed before row 19, it will be filtered out because transaction 19 is still open. Once transaction 19 commits, both rows 19 and 20 become visible.
When no other transaction is open, the snapshot xmin will be the transaction ID of the currently running SELECT statement.
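A sketch of the approach (table and bookmark names are assumptions; one row per transaction, and id must be bigint since these transaction IDs are 64-bit):
-- writer: use the transaction id as the row id
INSERT INTO table1 (id, value)
VALUES (txid_current(), 'hello');
-- reader: only rows below the oldest still-open transaction are safe to consume
SELECT id, value
FROM table1
WHERE id > last_seen_id  -- placeholder: the bookmark saved from the previous poll
  AND id < txid_snapshot_xmin(txid_current_snapshot())
ORDER BY id;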
The returned transaction IDs are 64-bits, the higher 32 bits are an epoch and the lower 32 bits are the actual ID.
Here is the documentation of these functions: https://www.postgresql.org/docs/9.6/static/functions-info.html#FUNCTIONS-TXID-SNAPSHOT
Credits to tux3 for the idea.