I think I read somewhere that running an ALTER TABLE foo ADD COLUMN baz text on a postgres database will not cause a read or write lock. Setting a default value causes locking, but allowing a null default prevents a lock.
I can't find this in the documentation, though. Can anyone point to a place that says, definitively, if this is true or not?
The different kinds of locks and when they're used are documented under Table-level Locks. For instance, Postgres 11's ALTER TABLE may acquire a SHARE UPDATE EXCLUSIVE, SHARE ROW EXCLUSIVE, or ACCESS EXCLUSIVE lock.
Postgres 9.1 through 9.3 claimed to support two of the above three, but actually forced ACCESS EXCLUSIVE for all variants of this command. This limitation was lifted in Postgres 9.4, but ADD COLUMN remains at ACCESS EXCLUSIVE by design.
It's easy to check in the source code because there's a function dedicated to establishing the lock level needed for this command in various cases: AlterTableGetLockLevel in src/backend/commands/tablecmds.c.
Concerning how long the lock is held once acquired:
When the column's default value is NULL, the column's addition should be very quick because it doesn't need a table rewrite: it's only an update in the catalog.
When the column has a non-NULL default value, it depends on PostgreSQL version: with version 11 or newer, there is no immediate rewriting of all the rows, so it should be as fast as the NULL case. But with version 10 or older, the table is entirely rewritten, so it may be quite expensive depending on the table's size.
Adding a new nullable column locks the table only very briefly, since there is no need to rewrite the data on disk. Adding a column with a default value (on PostgreSQL 10 and older) requires PostgreSQL to make new versions of all rows and store them on disk, and the table stays locked during that time.
So when you need to add a column with a default value to a big table, it's recommended to add a nullable column first and then update the rows in small batches. This way you avoid a high load on the disk and allow autovacuum to do its job, so you don't end up doubling the table's size.
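A minimal sketch of that batched approach, assuming a hypothetical big_table with an integer primary key id; names, the default value, and batch bounds are illustrative:
-- 1. Add the column without a default: catalog-only, nearly instant.
ALTER TABLE big_table ADD COLUMN some_flag integer;
-- 2. Set the default for rows inserted from now on; this does not
--    rewrite existing rows.
ALTER TABLE big_table ALTER COLUMN some_flag SET DEFAULT 0;
-- 3. Backfill existing rows in small batches so each transaction holds
--    its locks briefly and autovacuum can clean up between batches.
UPDATE big_table SET some_flag = 0
 WHERE some_flag IS NULL AND id BETWEEN 1 AND 100000;
-- ...repeat for subsequent id ranges until no NULLs remain...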
http://www.postgresql.org/docs/current/static/sql-altertable.html#AEN57290
"Adding a column with a non-null default or changing the type of an existing column will require the entire table and indexes to be rewritten."
So the documentation only specifies when the table is rewritten, not what locking occurs.
There will always be a lock, but it will be very short in case the table is not to be rewritten.
Related: https://stackoverflow.com/a/53016193/10894456 is an answer provided for Oracle 11g.
My question is the same:
What is the best approach to add a not null column with a default value to a production Oracle database when that table contains one million records and is live? Does it create any locks if we do the column creation, adding the default value, and making it not null in a single statement?
but for PostgreSQL?
This prior answer essentially answers your query.
Cross referencing the relevant PostgreSQL doc with the PostgreSQL sourcecode for AlterTableGetLockLevel mentioned in the above answer shows that ALTER TABLE ... ADD COLUMN will always obtain an ACCESS EXCLUSIVE table lock, precluding any other transaction from accessing the table for the duration of the ADD COLUMN operation.
This same exclusive lock is obtained for any ADD COLUMN variation; i.e., it doesn't matter whether you add a NULL column (with or without DEFAULT) or a NOT NULL column with a default.
However, as mentioned in the linked answer above, adding a NULL column with no DEFAULT should be very quick as this operation simply updates the catalog.
In contrast, adding a column with a DEFAULT specifier necessitates a rewrite of the entire table in PostgreSQL 10 or older.
This operation is likely to take a considerable time on your 1M record table.
According to the linked answer, PostgreSQL >= 11 does not require such a rewrite for adding such a column, so should perform more similarly to the no-DEFAULT case.
I should add that for PostgreSQL 11 and above, the ALTER TABLE docs note that table rewrites are only avoided for non-volatile DEFAULT specifiers:
When a column is added with ADD COLUMN and a non-volatile DEFAULT is specified, the default is evaluated at the time of the statement and the result stored in the table's metadata. That value will be used for the column for all existing rows. If no DEFAULT is specified, NULL is used. In neither case is a rewrite of the table required.
Adding a column with a volatile DEFAULT [...] will require the entire table and its indexes to be rewritten. [...] Table and/or index rebuilds may take a significant amount of time for a large table; and will temporarily require as much as double the disk space.
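To make the distinction concrete, a hedged pair of examples on a hypothetical big_table (table and column names are made up): the first default is non-volatile and is stored once in the catalog, while the second calls a volatile function that must produce a value per row, forcing a rewrite.
-- Non-volatile default: evaluated once, stored in the catalog; no rewrite on 11+.
ALTER TABLE big_table ADD COLUMN is_archived boolean NOT NULL DEFAULT false;
-- Volatile default: must yield a distinct value per row, so the table is rewritten.
ALTER TABLE big_table ADD COLUMN sort_seed double precision DEFAULT random();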
Need help figuring this out. Tried googling, but found no specific answer.
Will altering a table in PostgreSQL like this lock the table?
ALTER TABLE public.account ALTER COLUMN acc_type TYPE VARCHAR(16) USING acc_type::VARCHAR(16);
Yes, that has to take an access exclusive lock because of the metadata change. Depending on the previous data type, it will either be a very fast operation (e.g., if it was varchar(10) before) or a very slow one (if the previous type is not binary compatible).
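To illustrate the two cases, a hedged sketch assuming acc_type was varchar(10); only the second statement is from the question above:
-- Binary compatible: varchar(10) -> varchar(16) only relaxes the length
-- limit, so only the catalog changes (the ACCESS EXCLUSIVE lock is brief).
ALTER TABLE public.account ALTER COLUMN acc_type TYPE varchar(16);
-- Not binary compatible: had acc_type been integer, each value must be
-- converted, so the whole table is rewritten while the lock is held.
ALTER TABLE public.account ALTER COLUMN acc_type TYPE varchar(16)
    USING acc_type::varchar(16);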
I have a database that is going to experience the integer exhaustion problem that Basecamp famously faced back in November. I have several months to figure out what to do.
Is there a no-downtime-required, proactive solution to migrating this column type? If so what is it? If not, is it just a matter of eating the downtime and migrating the column when I can?
Is this article sufficient, assuming I have several days/weeks to perform the migration now before I'm forced to do it when I run out of ids?
Use logical replication.
With logical replication you can have different data types at primary and standby.
Copy the schema with pg_dump -s, change the data types on the copy and then start logical replication.
Once all data is copied over, switch the application to use the standby.
For zero down time, the application has to be able to reconnect and retry, but that's always a requirement in such a case.
You need PostgreSQL v10 or better for that, and your database
- shouldn't modify the schema, as DDL is not replicated,
- shouldn't use sequences (SERIAL or IDENTITY), as the last used value would not be replicated.
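A minimal sketch of that logical-replication setup, assuming a database app, a table big_table, and connection details that are purely illustrative:
-- On the primary: publish the table whose key must grow.
CREATE PUBLICATION bigint_migration FOR TABLE big_table;
-- On the standby, after restoring the pg_dump -s schema with the id
-- column changed to bigint:
CREATE SUBSCRIPTION bigint_migration
    CONNECTION 'host=primary dbname=app user=repluser'
    PUBLICATION bigint_migration;
-- Once the tables are in sync, point the application at the standby.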
Another solution for pre-v10 databases where all transactions are short:
Add a bigint column to the table.
Create a BEFORE trigger that sets the new column whenever a row is inserted or updated (see the sketch after these steps).
Run a series of updates that set the new column from the old one where it IS NULL. Keep those batches short so you don't hold locks long and don't deadlock much. Make sure these transactions run with session_replication_role = replica so they don't fire triggers.
Once all rows are updated, create a unique index CONCURRENTLY on the new column.
Add a unique constraint USING the index you just created. That will be fast.
Perform the switch:
BEGIN;
ALTER TABLE ... DROP oldcol;
ALTER TABLE ... RENAME COLUMN newcol TO oldcol;
COMMIT;
That will be fast.
Your new column has no NOT NULL set, and adding one cannot be done without a long, invasive lock. But you can add a check constraint IS NOT NULL and create it NOT VALID. That is good enough, and you can validate it later without disruption.
If there are foreign key constraints, things get a little more complicated. You have to drop these and create NOT VALID foreign keys to the new column.
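A sketch of the steps above, assuming a table big_table with an integer primary key id; all names are illustrative, and EXECUTE PROCEDURE is used since this targets pre-v10 servers:
-- Step 1: add the bigint column.
ALTER TABLE big_table ADD COLUMN id_new bigint;
-- Step 2: keep it in sync on every insert or update.
CREATE FUNCTION sync_id_new() RETURNS trigger LANGUAGE plpgsql AS $$
BEGIN
    NEW.id_new := NEW.id;  -- mirror the old key into the new column
    RETURN NEW;
END;
$$;
CREATE TRIGGER big_table_sync_id_new
    BEFORE INSERT OR UPDATE ON big_table
    FOR EACH ROW EXECUTE PROCEDURE sync_id_new();
-- Step 3: backfill in short batches, each run in its own transaction
-- with SET session_replication_role = replica.
UPDATE big_table SET id_new = id
 WHERE id_new IS NULL AND id BETWEEN 1 AND 100000;
-- Steps 4 and 5: index without blocking writes, then attach the constraint.
CREATE UNIQUE INDEX CONCURRENTLY big_table_id_new_key ON big_table (id_new);
ALTER TABLE big_table ADD CONSTRAINT big_table_id_new_unique
    UNIQUE USING INDEX big_table_id_new_key;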
Create a copy of the old table, but with a modified ID field. Next, create a trigger on the old table that inserts new data into both tables. Finally, copy the data from the old table to the new one (it would be a good idea to distinguish pre-trigger data from post-trigger data, for example by id if it is sequential). Once you are done, switch the tables and delete the old one.
This obviously requires twice as much space (and time for copy) but will work without any downtime.
I have run into a case where Pg always prefers a sequential scan for a table that has around 70M rows. (An index scan is ideal for that query, and I have confirmed it by setting enable_seq_scan=off; speed improved by 200x.)
So, in order to help Pg understand my data better, I executed this:
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
Unfortunately this requires a SHARE UPDATE EXCLUSIVE lock, which is held on the entire table (too much locking).
Is there a solution to avoid locking for this statement?
Data sharding is done for this table based on a primary key range, so I would like Pg to understand my PK better as well, so that it knows which users have large amounts of data. Will it help if I increase the statistics target of the primary key column too?
From the very docs you linked:
SET STATISTICS
This form sets the per-column statistics-gathering target for subsequent ANALYZE operations. The target can be set in the range 0 to 10000; alternatively, set it to -1 to revert to using the system default statistics target (default_statistics_target). For more information on the use of statistics by the PostgreSQL query planner, refer to Section 14.2.
SET STATISTICS acquires a SHARE UPDATE EXCLUSIVE lock.
And, from the docs on Explicit Locking:
SHARE UPDATE EXCLUSIVE
Conflicts with the SHARE UPDATE EXCLUSIVE, SHARE, SHARE ROW EXCLUSIVE, EXCLUSIVE, and ACCESS EXCLUSIVE lock modes. This mode protects a table against concurrent schema changes and VACUUM runs.
Acquired by VACUUM (without FULL), ANALYZE, CREATE INDEX CONCURRENTLY, and ALTER TABLE VALIDATE and other ALTER TABLE variants (for full details see ALTER TABLE).
So while that statement runs, you can't change the schema or VACUUM the table. So what? It should happen very fast, almost instantly.
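In other words, SET STATISTICS only updates the catalog; the heavier work happens at the next ANALYZE. A sketch reusing the statement from the question (tablename and columnname stand in for the real names):
ALTER TABLE tablename ALTER COLUMN columnname SET STATISTICS 1000;
-- The new target only takes effect at the next ANALYZE (manual or via
-- autovacuum), which takes the same SHARE UPDATE EXCLUSIVE lock but
-- samples rows rather than scanning all 70M of them.
ANALYZE tablename (columnname);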
I have a PostgreSQL DB and a table with almost a billion rows.
When I try to add a new column with a default value:
ALTER TABLE big_table
ADD COLUMN some_flag integer NOT NULL DEFAULT 0;
The transaction goes on for 30+ minutes, and the DB logs start to fill with warnings.
Is there any way to optimize the query?
Besides doing it in batches (which will still take a while):
You could dump the table as COPY statements and write a script to edit the contents of the COPY statements to insert another column (COPY can be CSV IIRC).
Then you just reload your altered COPY dump and it should in theory be faster than the ALTER because COPY will not log transactions.
The other option is to turn off fsync while you run the command... just remember to turn it back on.
You can also do both of the above in batches.
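A hedged outline of the COPY-based idea, assuming superuser access for server-side files; the paths and the appended value are illustrative:
-- Export the data as CSV so a script can append ",0" (the new
-- some_flag value) to every line.
COPY big_table TO '/tmp/big_table.csv' WITH (FORMAT csv);
-- After editing the file and recreating big_table with the extra
-- column, reload it.
COPY big_table FROM '/tmp/big_table_edited.csv' WITH (FORMAT csv);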
Starting from PostgreSQL 11, this behaviour changes.
Waiting for PostgreSQL 11 – Fast ALTER TABLE ADD COLUMN with a non-NULL default:
So, for the longest time, when you did:
alter table x add column z text;
it was virtually instantaneous. Get a lock on table, add information about new column to system catalogs, and it's done.
But when you tried:
alter table x add column z text default 'some value';
then it took a long time. How long depended on the size of the table.
This was because PostgreSQL was actually rewriting the whole table, adding the column to each row and filling it with the default value.
"What happens if you want to set the column to NOT NULL also? Are we back to the slow version in that case or does this handle that as well?"
Not null doesn't change anything; it is a constraint for new rows. So adding a column with "not null default 'xxx'" will be fast.
I'd consider creating the column without the default and manually updating the rows in batches with intermittent commits to apply the default.