Postgres - What is inserted when a delete executes?

I have been digging into Postgres' MVCC. I've watched videos about the behavior of inserts, updates, and deletes. I keep seeing that a delete actually updates the xmax of the previous version (and possibly a commit flag) and then inserts another record. I have yet to see any documentation or videos on what is actually inserted with a DELETE statement.
This seems very similar to Cassandra and its tombstone behavior, but I have not been able to confirm that. So my question is: what is actually inserted for a row when a delete occurs?

A DELETE does not insert a new version of the affected row; it only fills in the xmax field of the existing row version, thereby marking it as non-existent for all later transactions. (See this article for more details.)
So the answer to your question
What is inserted when a DELETE executes?
is: nothing is inserted. If you have any articles stating otherwise, they are either wrong or imprecise.
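You can observe this yourself through the xmin and xmax system columns. A minimal sketch, assuming a scratch table t and two concurrent sessions:

-- Session 1:
CREATE TABLE t (id integer);
INSERT INTO t VALUES (1);
BEGIN;
DELETE FROM t WHERE id = 1;  -- only sets xmax on the existing row version

-- Session 2 (while session 1 is still open): the row is still visible,
-- and its xmax now holds session 1's transaction ID instead of 0.
-- No new row has appeared:
SELECT xmin, xmax, id FROM t;

-- Session 1:
ROLLBACK;  -- the deleter aborts, so the old row version simply stays current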

Related

How does Postgres support rollback without undo logs

I was going through this link, and it mentions that PostgreSQL does not support an undo log. So I am wondering how PostgreSQL rolls back a transaction without an undo log.
When a row is updated or deleted in PostgreSQL, it is not really updated or deleted. The old row version just remains in the table and is marked as removed by a certain transaction. That makes an update much like a delete of the old and insert of the new row version.
To roll back a transaction is nothing but to mark the transaction as aborted. Then the old row version automatically becomes the current row version again, without a need for undoing anything.
The Achilles' heel of this technique is that data modifications produce “dead row versions”, which have to be reclaimed later by a background procedure called “vacuuming”.
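A minimal sketch of this behavior, assuming a scratch table demo:

CREATE TABLE demo (id integer, val text);
INSERT INTO demo VALUES (1, 'old');

BEGIN;
UPDATE demo SET val = 'new' WHERE id = 1;  -- writes a new row version
ROLLBACK;                                  -- merely marks the transaction aborted

SELECT val FROM demo;  -- 'old' again; nothing was physically undone
VACUUM demo;           -- reclaims the dead 'new' row version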

PostgreSQL logical replication - ignore pre-existing data

Imagine dropping a subscription and recreating it from scratch. Is it possible to ignore existing data during the first synchronization?
Creating a subscription with (copy_data=false) is not an option because I do want to copy data, I just don't want to copy already existing data.
Example: There is a users table and a corresponding publication on the master. This table has 1 million rows and every minute a new row is added. Then we drop the subscription for a day.
If we recreate the subscription with (copy_data=true), replication will not start due to a conflict with already existing data. If we specify (copy_data=false), 1440 new rows will be missing. How can we synchronize the publisher and the subscriber properly?
You cannot do that, because PostgreSQL has no way of telling when the data were added.
You'd have to reconcile the tables by hand (or INSERT ... ON CONFLICT DO NOTHING).
Unfortunately PostgreSQL does not yet support convenient skip options for conflicts, but I believe this will be enhanced in the future.
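A hedged sketch of that reconciliation, assuming the publisher's backlog has already been copied into a staging table users_pub on the subscriber, and that id is the primary key; the subscription, connection, and publication names are placeholders:

CREATE SUBSCRIPTION mysub
    CONNECTION 'host=publisher dbname=app'
    PUBLICATION mypub
    WITH (copy_data = false);  -- stream new changes only, no initial copy

-- Reconcile the backlog by hand, skipping rows the subscriber already has:
INSERT INTO users
SELECT * FROM users_pub
ON CONFLICT (id) DO NOTHING;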
Based on @Laurenz Albe's answer, which recommends the statement:
INSERT ... ON CONFLICT DO NOTHING
I believe it would be better to use the following command, which also takes care of any updates that may have happened to your data before you restart the subscription:
INSERT ... ON CONFLICT (...) DO UPDATE SET ...
Finally, I have to say that both are dirty solutions: between the execution of the above statement and the creation of the subscription, new rows may arrive, and they will be lost until you perform the custom sync again.
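A hedged sketch of the upsert variant, reusing the hypothetical users_pub staging table and assuming id is the primary key and name is a column to refresh:

INSERT INTO users
SELECT * FROM users_pub
ON CONFLICT (id) DO UPDATE
    SET name = EXCLUDED.name;  -- EXCLUDED holds the row proposed for insertion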
I have seen some other suggested solutions using the LSN from the PostgreSQL log file...
To me, it may be more elegant and safe to delete all the data from the destination table and create the replication again.

CREATE SCHEMA IF NOT EXISTS raises duplicate key error

To give some context, the command is issued inside a task, and many tasks might issue the same command from multiple workers at the same time.
Each task tries to create a Postgres schema. I often get the following error:
IntegrityError: (IntegrityError) duplicate key value violates unique constraint "pg_namespace_nspname_index"
DETAIL: Key (nspname)=(9621584361) already exists.
'CREATE SCHEMA IF NOT EXISTS "9621584361"'
Postgres version is PostgreSQL 9.4rc1.
Is it a bug in Postgres?
This is a bit of a wart in the implementation of IF NOT EXISTS for tables and schemas. Basically, they're an upsert attempt, and PostgreSQL doesn't handle the race conditions cleanly. It's safe, but ugly.
If the schema is being concurrently created in another session but isn't yet committed, then it both exists and does not exist, depending on who you are and how you look. It's not possible for other transactions to "see" the new schema in the system catalogs because it's uncommitted; its entry in pg_namespace is not visible to other transactions. So CREATE SCHEMA / CREATE TABLE tries to create it because, as far as it's concerned, the object doesn't exist.
However, that inserts a row into a table with a unique constraint. Unique constraints must be able to see uncommitted rows in order to function. So the insert blocks (stops) until the first transaction that did the CREATE either commits or rolls back. If it commits, the second transaction aborts, because it tried to insert a row that violates a unique constraint. CREATE SCHEMA isn't smart enough to catch this case and re-try.
To properly fix this PostgreSQL would probably need predicate locking, where it could lock the potential for a row. This might get added as part of the current work going on for implementing UPSERT.
For these particular commands, PostgreSQL could probably do a dirty read of the system catalogs, where it can see uncommitted changes. Then it could wait for the uncommitted transaction to commit or roll back, re-do the dirty read to see if someone else is waiting, and retry. But this would have a race condition where someone else might create the schema between when you do the read to check for it and when you try to create it.
So the IF NOT EXISTS variants would have to:
Check whether the schema exists; if it does, finish without doing anything.
Attempt to create the schema (or table).
If creation fails due to a unique constraint error, retry from the start.
If creation succeeds, finish.
As far as I know nobody's implemented that, or they tried and it wasn't accepted. There would be possible issues with transaction ID burn rate, etc, with this approach.
I think this is a bug of sorts, but it's a "yeah, we know" kind of bug, not a "we'll get right on fixing that" kind of bug. Feel free to post to pgsql-bugs about it; at the very least the documentation should mention this caveat about IF NOT EXISTS.
I don't recommend doing DDL concurrently like that.
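If concurrent creation is unavoidable, one client-side workaround is to catch the unique-constraint error and retry, essentially implementing the loop above in the application. A minimal sketch in PL/pgSQL, with myschema as a placeholder name:

DO $$
BEGIN
    LOOP
        BEGIN
            CREATE SCHEMA IF NOT EXISTS myschema;  -- placeholder name
            EXIT;  -- created it, or it already existed
        EXCEPTION WHEN unique_violation THEN
            NULL;  -- lost the race; loop so IF NOT EXISTS sees the winner
        END;
    END LOOP;
END
$$;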
I needed to work around this limitation in an application where schemas are created concurrently. What worked for me was adding
LOCK TABLE pg_catalog.pg_namespace
in the transaction that includes CREATE SCHEMA. It looks like a dirty and unsafe thing to do, but it helped me solve the problem, which occurred only in tests anyway.
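Spelled out, that workaround looks roughly like this (myschema is a placeholder, and locking a system catalog typically requires superuser privileges):

BEGIN;
LOCK TABLE pg_catalog.pg_namespace;    -- serializes schema creation database-wide; heavy-handed
CREATE SCHEMA IF NOT EXISTS myschema;  -- placeholder name
COMMIT;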

What is supported as transactional in postgres

I am trying to find out what Postgres can handle safely inside a transaction, but I cannot find the relevant information in the Postgres manual. So far I have found out the following:
UPDATE, INSERT and DELETE are fully supported inside transactions and are rolled back when the transaction is not committed
DROP TABLE is not handled safely inside a transaction, and is undone with a CREATE TABLE, which recreates the dropped table but does not repopulate it
CREATE TABLE is also not truly transactional and is instead undone with a corresponding DROP TABLE
Is this correct? Also, I could not find any hints on the handling of ALTER TABLE and TRUNCATE. In what way are those handled, and are they safe inside transactions? Does the handling differ between different types of transactions or different versions of Postgres?
DROP TABLE is transactional. To undo this, you need to issue a ROLLBACK not a CREATE TABLE. The same goes for CREATE TABLE (which is also undone using ROLLBACK).
ROLLBACK is always the only correct way to undo a transaction - that includes ALTER TABLE and TRUNCATE.
The only things that are never transactional in Postgres are the values generated by a sequence (CREATE/ALTER/DROP SEQUENCE themselves are transactional, though).
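A minimal sketch of that distinction, assuming a scratch sequence s:

CREATE SEQUENCE s;

BEGIN;
CREATE TABLE t (id integer);  -- transactional: undone by ROLLBACK
SELECT nextval('s');          -- returns 1
ROLLBACK;

SELECT nextval('s');  -- returns 2, not 1: the rolled-back value stays burned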
As far as I'm aware, all of these commands are transaction-aware, except for TRUNCATE ... RESTART IDENTITY (and even that has been transactional since 9.1).
See the manual on concurrency control and transaction-related commands.

ADO Transaction and READPAST

I don't know if this is the best way, if there is a better way, please post.
I have an application that reads a file and inserts records.
The entire file is processed in one transaction.
Before a record is inserted, the table needs to be checked for duplicates
(note: I can't make this a table constraint since there are exceptions).
So the duplicate check is a normal SELECT statement, but the problem is that it reads the uncommitted records from the current transaction.
I have included READPAST and READCOMMITTED hints in the SELECT statement, but it still returns the record.
Any ideas?
A transaction always sees its own uncommitted changes, so locking hints like READPAST cannot hide them. The only way to implement this within the DB is locking the table; look at ISOLATION LEVEL SERIALIZABLE.
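A hedged sketch of that lock-based approach in T-SQL, assuming a hypothetical records table keyed on key_col; the UPDLOCK and HOLDLOCK hints make the existence check and the insert behave atomically against other sessions, though, as noted above, nothing can hide the transaction's own uncommitted rows:

DECLARE @key int = 42;  -- hypothetical key value read from the file

SET TRANSACTION ISOLATION LEVEL SERIALIZABLE;
BEGIN TRANSACTION;

IF NOT EXISTS (SELECT 1 FROM records WITH (UPDLOCK, HOLDLOCK)
               WHERE key_col = @key)
    INSERT INTO records (key_col) VALUES (@key);

COMMIT TRANSACTION;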