Postgres: `last_value` Clarification

I'm currently fixing up a bad postgresql migration and have just reset the pg_sequence for a few of my tables. Just to clarify specifically:
Is last_value supposed to EQUAL the last/highest PK for a given table? Or should last_value be the highest value + 1? The reason I ask is that I see a few that are equal to the max PK, and a few that are slightly higher.
I know this seems like an odd question ("why wouldn't last_value be the last used value?"), but I just want to remove any ambiguity: is last_value in fact equal to the last PK, rather than the last PK + 1?

From the pg_sequences documentation:
last_value: The last sequence value written to disk. If caching is used, this value can be greater than the last value handed out from the sequence. Null if the sequence has not been read from yet. Also, if the current user does not have USAGE or SELECT privilege on the sequence, the value is null.
Your question:
Is last_value supposed to EQUAL TO the last-highest PK for a given table?
If caching is not used, last_value gives you the last value handed out by the sequence, which is normally the highest PK in the table. If caching is used (or if some generated values were never actually committed), last_value can be somewhat greater than the highest PK in the table.
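As a quick sanity check after a migration, something like the following can be used. This is only a sketch; the table name mytable, its id column, and the sequence name mytable_id_seq are assumptions rather than anything from the question:
-- compare the sequence state with the table (names are assumed)
SELECT last_value FROM mytable_id_seq;
SELECT max(id) FROM mytable;

-- reset the sequence so the next nextval() returns max(id) + 1
-- (the third argument 'true' marks the value as already used; assumes the table is not empty)
SELECT setval('mytable_id_seq', (SELECT max(id) FROM mytable), true);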

Related

Postgresql: increment column with unique values without constraint violation?

I am trying to implement a table with revision history in Postgresql as follows:
table has a multi-column primary key or unique constraint on columns id and rev (both numeric)
to create a new entry, insert data, have id auto-generated and rev set to 0
to update an existing entry, insert a new row with the previous id and rev set to -1, then increment the rev on all entries with that id by 1
to get the latest version, select by id and rev = 0
The problem that I am facing is the update after the insert; unsurprisingly, Postgresql sometimes raises a "duplicate key" error when rows are updated in the wrong order.
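For reference, the naive version of the flow looks roughly like this (the data column is just a placeholder for the payload columns):
-- insert the new revision with rev = -1, then shift all revisions for that id up by one
INSERT INTO test (id, rev, data) VALUES (99, -1, 'new contents');
UPDATE test SET rev = rev + 1 WHERE id = 99;  -- this is the statement that can hit the duplicate key error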
Since there is no ORDER BY available on UPDATE, I searched for other solutions and noticed that the updates pass without errors if the starting point is the result set of a subquery:
UPDATE test
SET rev = test.rev + 1
FROM (
    SELECT rev
    FROM test
    WHERE id = 99
    ORDER BY rev DESC
) AS prev
WHERE test.rev = prev.rev;
The descending order ensures that the greater values get incremented first, so that the lower values do not violate the unique constraint when they get updated.
One catch, though: I can't tell from the documentation whether this works due to some implementation detail (which might change without notice in the future) or is indeed guaranteed by the language specification. Can someone explain?
I was also wondering whether it is performance-wise better to have the rev column in the index (as described above, which leads to at least a partial index rebuild on every update, but maybe also to faster reads) or to define a (non-unique) index on id only and ignore the performance impact that could be caused by an (initially) larger query set. (I am expecting a rather low revision count per unique id on average, maybe 5.)

Are race conditions possible with PostgreSQL auto-increment

Are there any conditions under which records created in a table using a typical auto-increment field would be available for read out of sequence?
For instance, could a record with value 10 ever appear in the result of a select query when the record with value 9 is not yet visible to a select query?
The purpose of my question is this: I want to know whether it is reliable to use the maximum value retrieved from one query as the lower bound for identifying previously unretrieved values in a later query, or whether that could potentially miss a row.
If that kind of race condition is possible under some circumstances, are there any isolation levels for the select queries that are immune to that problem?
Yes, and good on you for thinking about it.
You can trivially demonstrate this with three concurrent psql sessions, given some table
CREATE TABLE x (
    seq serial primary key,
    n   integer not null
);
then
SESSION 1                       SESSION 2                       SESSION 3

BEGIN;
                                BEGIN;
INSERT INTO x(n) VALUES (1);
                                INSERT INTO x(n) VALUES (2);
                                COMMIT;
                                                                SELECT * FROM x;
COMMIT;
                                                                SELECT * FROM x;
It is not safe to assume that, for any generated value n, all generated values less than n have been used by already-committed or already-aborted transactions. They might be in progress and commit after you see n.
I don't think isolation levels really help you here. There's no mutual dependency for SERIALIZABLE to detect.
This is partly why logical decoding was added, so you can get a consistent stream in commit order.
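As a rough sketch of what that looks like (the slot name is arbitrary, test_decoding is the example output plugin shipped with PostgreSQL, and the server must run with wal_level = logical):
-- create a logical replication slot using the built-in test_decoding plugin
SELECT pg_create_logical_replication_slot('my_slot', 'test_decoding');

-- later, read the decoded changes; rows come back in commit order
SELECT lsn, xid, data
FROM pg_logical_slot_get_changes('my_slot', NULL, NULL);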

Postgresql Increment if exist or Create a new row

Hello, I have a simple table like this:
+------------+------------+----------------------+----------------+
|id (serial) | date(date) | customer_fk(integer) | value(integer) |
+------------+------------+----------------------+----------------+
I want to use each row as a daily accumulator: when a value arrives for a customer and no record exists for that customer and date, create a new row for that customer and date; but if one exists, just increment the value.
I don't know how to implement something like that; I only know how to increment a value using SET, but more logic is required here. Thanks in advance.
I'm using version 9.4
It sounds like what you are wanting to do is an UPSERT.
http://www.postgresql.org/docs/devel/static/sql-insert.html
In this type of query, you update the record if it exists or you create a new one if it does not. The key in your table would consist of customer_fk and date.
This would be a normal insert, but with ON CONFLICT DO UPDATE SET value = value + 1.
NOTE: ON CONFLICT only works as of Postgres 9.5; it is not available in previous versions. For versions prior to 9.1, the only solution is two steps. For 9.1 or later, a CTE may be used as well.
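On 9.5 or later, a minimal sketch could look like this (the table and column names follow the question, and a unique constraint on (customer_fk, date) is assumed to exist):
-- requires a unique constraint or index on (customer_fk, date)
INSERT INTO mytable (date, customer_fk, value)
VALUES (current_date, 24, 1)
ON CONFLICT (customer_fk, date)
DO UPDATE SET value = mytable.value + 1;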
For earlier versions of Postgres, you will need to perform an UPDATE first with customer_fk and date in the WHERE clause. From there, check whether the number of affected rows is 0. If it is, then do the INSERT. The only problem with this is the chance of a race condition if the operation happens twice at nearly the same time (common in a web environment): the INSERT has a chance of failing for one of them, and your count will always have a chance of being slightly off.
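A rough sketch of that two-step approach, using the same assumed table (the check of the affected row count happens in application code or PL/pgSQL):
-- step 1: try to increment an existing row
UPDATE mytable
SET value = value + 1
WHERE customer_fk = 24
  AND date = current_date;

-- step 2: if the UPDATE reported 0 affected rows, insert instead
INSERT INTO mytable (date, customer_fk, value)
VALUES (current_date, 24, 1);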
If you are using Postgres 9.1 or above, you can use an updatable CTE as cleverly pointed out here: Insert, on duplicate update in PostgreSQL?
This solution is less likely to result in a race condition since it's executed in one step.
WITH new_values (date, customer_fk, value) AS (
    VALUES
        (current_date, 24, 1)  -- the day, the customer, and the value to record
),
upsert AS (
    UPDATE mytable m
    SET value = m.value + 1
    FROM new_values nv
    WHERE m.date = nv.date AND m.customer_fk = nv.customer_fk
    RETURNING m.*
)
INSERT INTO mytable (date, customer_fk, value)
SELECT date, customer_fk, value
FROM new_values
WHERE NOT EXISTS (SELECT 1
                  FROM upsert up
                  WHERE up.date = new_values.date
                  AND up.customer_fk = new_values.customer_fk);
This contains two CTEs. One contains the data you are inserting (new_values) and the other contains the result of an UPDATE query using those values (upsert). The last part uses these two to check whether the records in new_values are not present in upsert, which would mean the UPDATE matched nothing, and performs an INSERT to create the record instead.
As a side note, if you were doing this in another SQL engine that conforms to the standard, you would use a MERGE query instead. [ https://en.wikipedia.org/wiki/Merge_(SQL) ]

why would anyone alter a column twice leaving it as it began?

I'm looking at code someone has written, and the first thing the script does is alter columns like this:
alter column column_1 decimal(12,0)
alter column column_1 varchar(30)
alter column column_2 decimal(12,0)
alter column column_2 varchar(30)
The fields are originally varchar(30) in the table, and I am curious why someone would do this. I cannot think of any reason, but I'm sure it must be something obvious I am overlooking.
In some cases (depending on how the values were inserted) you may want to normalize the information in the column. Casting to a decimal has formatting repercussions, but also leaves it as a numeric field. So the author decided to cast it once (apply formatting) then cast it back (for whatever reason).
See this example.
Alternatively, they could have just run an UPDATE script, which results in the same outcome. I'm not that fluent in SQL optimization, but there may be performance benefits to performing a column alteration over an update. Or, as @t-clausen.dk mentions, there are implications with regard to constraints that the author may have wanted. Either way, it was the author's decision to go this route (for whatever reason they had).
So, to illustrate, a table that originates looking like:
ID  VAL
1   123
2   123.45
3   .123
4   12.34567
After the two alters (or an update), you'd end up with:
ID  VAL
1   123
2   123
3   0
4   12
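For comparison, a round trip like that in PostgreSQL syntax might look like the following sketch (the table and column names are made up, and the USING clauses spell out the conversions):
-- cast the text column to a number (rounds/truncates to 0 decimal places) ...
ALTER TABLE some_table
    ALTER COLUMN val TYPE numeric(12,0) USING val::numeric(12,0);

-- ... and then back to text
ALTER TABLE some_table
    ALTER COLUMN val TYPE varchar(30) USING val::varchar(30);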

In-order sequence generation

Is there a way to generate some kind of in-order identifier for a table's records?
Suppose that we have two threads doing queries:
Thread 1:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
Thread 2:
begin;
insert into table1(id, value) values (nextval('table1_seq'), 'world');
commit;
It's entirely possible (depending on timing) that an external observer would see the (2, 'world') record appear before the (1, 'hello').
That's fine, but I want a way to get all the records in the 'table1' that appeared since the last time the external observer checked it.
So, is there any way to get the records in the order they were inserted? Maybe OIDs can help?
No. Since there is no natural order of rows in a database table, all you have to work with is the values in your table.
Well, there are the Postgres specific system columns cmin and ctid you could abuse to some degree.
The tuple ID (ctid) contains the file block number and position in the block for the row. So this represents the current physical ordering on disk. Later additions will have a bigger ctid, normally. Your SELECT statement could look like this
SELECT *, ctid -- save ctid from last row in last_ctid
FROM tbl
WHERE ctid > last_ctid
ORDER BY ctid
ctid has the data type tid. Example: '(0,9)'::tid
However it is not stable as long-term identifier, since VACUUM or any concurrent UPDATE or some other operations can change the physical location of a tuple at any time. For the duration of a transaction it is stable, though. And if you are just inserting and nothing else, it should work locally for your purpose.
I would add a timestamp column with default now() in addition to the serial column ...
I would also let a column default populate your id column (a serial or IDENTITY column). That retrieves the number from the sequence at a later stage than explicitly fetching and then inserting it, thereby minimizing (but not eliminating) the window for a race condition - the chance that a lower id would be inserted at a later time. Detailed instructions:
Auto increment table column
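A minimal sketch of such a table (the names are invented; IDENTITY needs Postgres 10 or later, and a serial column works the same way on older versions):
CREATE TABLE tbl (
    id         bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY,
    created_at timestamptz NOT NULL DEFAULT now(),
    value      text
);

-- id and created_at are filled by their column defaults
INSERT INTO tbl (value) VALUES ('hello');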
What you want is to force transactions to commit (making their inserts visible) in the same order that they did the inserts. As far as other clients are concerned the inserts haven't happened until they're committed, since they might roll back and vanish.
This is true even if you don't wrap the inserts in an explicit begin / commit. Transaction commit, even if done implicitly, still doesn't necessarily happen in the same order in which the rows themselves were inserted. It's subject to operating system CPU scheduler ordering decisions, etc.
Even if PostgreSQL supported dirty reads this would still be true. Just because you start three inserts in a given order doesn't mean they'll finish in that order.
There is no easy or reliable way to do what you seem to want that will preserve concurrency. You'll need to do your inserts in order on a single worker - or use table locking as Tometzky suggests, which has basically the same effect since only one of your insert threads can be doing anything at any given time.
You can use advisory locking, but the effect is the same.
Using a timestamp won't help, since you don't know if for any two timestamps there's a row with a timestamp between the two that hasn't yet been committed.
You can't rely on an identity column where you read rows only up to the first "gap" because gaps are normal in system-generated columns due to rollbacks.
I think you should step back and look at why you have this requirement and, given this requirement, why you're using individual concurrent inserts.
Maybe you'll be better off doing small-block batched inserts from a single session?
If you mean that any query that sees the world row must also see the hello row, then you'd need to do:
begin;
lock table table1 in share update exclusive mode;
insert into table1(id, value) values (nextval('table1_seq'), 'hello');
commit;
This share update exclusive mode is the weakest lock mode which is self-exclusive — only one session can hold it at a time.
Be aware that this will not make this sequence gap-less — this is a different issue.
We found another solution with recent PostgreSQL servers, similar to @Erwin's answer but with txid.
When inserting rows, instead of using a sequence, insert txid_current() as row id. This ID is monotonically increasing on each new transaction.
Then, when selecting rows from the table, add to the WHERE clause id < txid_snapshot_xmin(txid_current_snapshot()).
txid_snapshot_xmin(txid_current_snapshot()) corresponds to the transaction ID of the oldest still-open transaction. Thus, if row 20 is committed before row 19, it will be filtered out because transaction 19 will still be open. When transaction 19 is committed, both rows 19 and 20 will become visible.
When no transaction is open, the snapshot xmin will be the transaction ID of the currently running SELECT statement.
The returned transaction IDs are 64-bit: the higher 32 bits are an epoch and the lower 32 bits are the actual ID.
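A rough sketch of the approach (the events table and its columns are invented; the txid functions are the ones documented below):
-- writer: use the current transaction id as the row identifier
INSERT INTO events (id, payload)
VALUES (txid_current(), 'hello');

-- reader: only return rows whose writing transaction can no longer be in flight;
-- replace 0 with the cutoff remembered from the previous poll
SELECT id, payload
FROM events
WHERE id >= 0
  AND id < txid_snapshot_xmin(txid_current_snapshot())
ORDER BY id;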
Here is the documentation of these functions: https://www.postgresql.org/docs/9.6/static/functions-info.html#FUNCTIONS-TXID-SNAPSHOT
Credits to tux3 for the idea.