Does ALTER COLUMN TYPE varchar(N) rewrite the table in Postgres 9.6?

In the past
The way we handled this with Postgres 8.4 was to manually update the pg_attribute table:
LOCK TABLE pg_attribute IN EXCLUSIVE MODE;

UPDATE pg_attribute
SET atttypmod = 104   -- declared length 100 plus 4 bytes of header (VARHDRSZ)
WHERE attrelid = 'table_name'::regclass
  AND attname = 'column_name';
column_name was a varchar(50) and we wanted a varchar(100), but the table was too large (tens of millions of rows) and too heavily used to allow a rewrite.
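A quick sanity check of what the catalog reports afterwards, assuming the same table and column names as above (atttypmod minus the 4-byte header gives the declared length):

SELECT atttypmod - 4 AS declared_length
FROM pg_attribute
WHERE attrelid = 'table_name'::regclass
  AND attname = 'column_name';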
Nowadays
Content and answers around this topic are sparse and outdated for such an (at least anecdotally) common problem.
But after seeing hints that this might be the case in at least three discussions, I've come to think that with newer versions of Postgres (we're on 9.6) you can now run the following:
ALTER TABLE table_name ALTER COLUMN column_name TYPE varchar(100);
...without rewriting the table.
Is this correct?
If so, do you know where some definitive info on the topic exists in the Postgres docs?

That ALTER TABLE will not require a rewrite.
The documentation says:
Adding a column with a DEFAULT clause or changing the type of an existing column will require the entire table and its indexes to be rewritten.
It is very simple to test:
Try with an empty table and see if the relfilenode column in the pg_class row for the table changes:
SELECT relfilenode FROM pg_class
WHERE relname = 'table_name';
Reading on in the documentation, you see:
As an exception, when changing the type of an existing column, if the USING clause does not change the column contents and the old type is either binary coercible to the new type or an unconstrained domain over the new type, a table rewrite is not needed; but any indexes on the affected columns must still be rebuilt.
Since varchar(50) is clearly binary coercible to varchar(100), your case will not require a table rewrite, as the above test should confirm.
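As an aside, you can see which type-to-type casts the server considers binary coercible by looking for castmethod = 'b' in pg_cast; a pure length change like varchar(50) to varchar(100) stays within one type, so it needs no pg_cast entry at all. For example:

SELECT castsource::regtype AS source,
       casttarget::regtype AS target,
       castmethod  -- 'b' means binary coercible: no conversion function needed
FROM pg_cast
WHERE castsource = 'varchar'::regtype
   OR casttarget = 'varchar'::regtype;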

According to What's new in PostgreSQL 9.2, the above answer seemed at least strange to me (the accepted answer has since been edited to align with the quote below):
Reduce ALTER TABLE rewrites

A table won't get rewritten anymore during an ALTER TABLE when changing the type of a column in the following cases:

varchar(x) to varchar(y) when y >= x. It works too if going from varchar(x) to varchar or text (no size limitation)
I tested with Postgres 10.4, and the relfilenode remained the same after running alter table ... alter column ... type varchar(50):
create table aaa (field varchar(10));
insert into aaa select f from generate_series(1,1e6) f;
commit;
SELECT oid, relfilenode FROM pg_class WHERE relname = 'aaa';
alter table aaa alter column field type varchar(50);
commit;
SELECT oid, relfilenode FROM pg_class WHERE relname = 'aaa';
I'm not sure why you got a different relfilenode in 9.6 (or I'm missing something...).

Related

How to remove columns for real in PostgreSQL?

I have a large system whose table schemas are updated quite often. I noticed that after repeatedly dropping and recreating columns, the limitation "tables can have at most 1600 columns" kicks in, even though only a few columns remain visible in information_schema.columns.
I've tried VACUUM FULL ANALYZE, but it doesn't help. Is there any way to avoid this limitation?
-- setup (assumed): start from a fresh single-column table
create table vacuum_test (id int);

DO $$
DECLARE
    stmt varchar(1024);
BEGIN
    -- fill the table up to the 1600-column limit
    FOR i IN 1..1599 LOOP
        stmt := 'alter table vacuum_test add column test' || CAST(i AS varchar(8)) || ' int';
        EXECUTE stmt;
    END LOOP;
END $$;

alter table vacuum_test drop column test1;
VACUUM FULL ANALYZE vacuum_test;
alter table vacuum_test add column test1 int;
result:
alter table vacuum_test add column test1 int
> ERROR: tables can have at most 1600 columns
> Time: 0.054s
Unfortunately VACUUM FULL does not remove dropped columns from the table (i.e. entries that have attisdropped = true in pg_attribute). I would have expected that, but apparently it does not happen.
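You can see how many attribute slots a table is still carrying from dropped columns with a catalog query like this:

SELECT count(*) AS dropped_columns
FROM pg_attribute
WHERE attrelid = 'vacuum_test'::regclass
  AND attisdropped;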
The only way to get rid of the hidden columns is to create a brand new table and copy the data to the new table.
Something along the lines:
create table new_table (like old_table including all);
insert into new_table
select *
from old_table;
Then drop the old table and rename the new one to the old name. Constraint and index names will be different, so you might want to rename them as well.
You will have to re-create all foreign keys (incoming and outgoing) manually, as they are not included when using CREATE TABLE (LIKE ...).
Another option is to use pg_repack which does this transparently in the background without locking the table.

Prefix the foreign table with the schema

If I follow https://docs.aws.amazon.com/AmazonRDS/latest/UserGuide/Appendix.PostgreSQL.CommonDBATasks.html#postgresql-commondbatasks-fdw, how can I prefix the tables with the schema I am retrieving them from? E.g.
IMPORT FOREIGN SCHEMA lands
LIMIT TO (land, land2)
FROM SERVER foreign_server INTO public;
The created tables are named land and land2. Is it possible to prefix land and land2 with lands, e.g. lands_land and lands_land2?
With psql and recent PostgreSQL versions, you could simply run (after the IMPORT FOREIGN SCHEMA):
SELECT format(
          'ALTER FOREIGN TABLE public.%I RENAME TO %I;',
          relname,
          'lands_' || relname
       )
FROM pg_class
WHERE relkind = 'f' -- foreign table
  AND relnamespace = 'public'::regnamespace \gexec
The \gexec makes psql interpret each result row as an SQL statement and execute it.
Another option that I'd like better is to keep the original names, but use a different schema for the foreign tables:
IMPORT FOREIGN SCHEMA lands
LIMIT TO (land, land2)
FROM SERVER foreign_server INTO lands;
Then all the foreign tables will be in the lands schema, and you get the same effect in a more natural fashion. You can adjust search_path to include the lands schema.
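Note that the local schema has to exist before the import. A minimal sequence, sticking with the names from the question:

CREATE SCHEMA lands;

IMPORT FOREIGN SCHEMA lands
    LIMIT TO (land, land2)
    FROM SERVER foreign_server INTO lands;

-- let unqualified names find the foreign tables too
SET search_path = public, lands;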

What does each column in an archive's table of contents mean? (pg_dump/pg_restore)

I'm using pg_restore to reconstitute a database I've backed up. As suggested in the pg_restore docs (https://www.postgresql.org/docs/current/app-pgrestore.html), I've created a .list file with the archive's table of contents.
Nothing is wrong per se, but I'm struggling to figure out what each column in this ToC means. The lines each look like this:
5602; 0 16476 TABLE DATA public <table_name> postgres
The first column is the archive ID for that entry, but what do the next two numbers mean? In my ToC the first non-archive column is always zero, but in other examples that's not true.
The fields are:
Archive ID
Catalog table OID (0 in your case, because that row pertains to TABLE DATA and not to a catalog object; a function would get the value of SELECT oid FROM pg_class WHERE relname = 'pg_proc' here, a table would get SELECT oid FROM pg_class WHERE relname = 'pg_class', and so on)
Table OID (16476 is the OID you would find in pg_class)
Description (TABLE DATA in your example)
Schema (public in your example)
Name (<table_name>)
Owner (postgres in your example)
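If you want to check such an entry against a live database, both OIDs are easy to resolve; a small sketch, using the 16476 from the example line:

-- resolve the table OID from the ToC line; prints the relation's name,
-- or just the raw number if no relation with that OID exists
SELECT 16476::regclass;

-- the second field is the OID of the system catalog the object lives in:
-- 1259 is pg_class (tables), 1255 is pg_proc (functions)
SELECT oid, relname
FROM pg_class
WHERE relname IN ('pg_class', 'pg_proc');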

Change column type VARCHAR to TEXT in PostgreSQL without lock table

I have a "Parent Table" and partition table by year with a lot column and now I need change a column VARCHAR(32) to TEXT because we need more length flexibility.
So I will alter the parent them will also change all partition.
But the table have 2 unique index with this column and also 1 index.
This query lock the table:
ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE VARCHAR(64)
USING column_need_change::VARCHAR(64);
So does this one:
ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE TEXT
USING column_need_change::TEXT;
I have seen this solution:
UPDATE pg_attribute SET atttypmod = 64 + 4
WHERE attrelid = 'my_schema.my_table'::regclass
  AND attname = 'column_need_change';
But I dislike this solution.
How can I change the VARCHAR(32) type to TEXT without locking the table? I need to keep inserting data into the table while the change runs.
My PostgreSQL version: 9.6
EDIT :
This is the solution I ended up taking:
ALTER TABLE my_schema.my_table
ALTER COLUMN column_need_change TYPE TEXT USING column_need_change :: TEXT;
The query locked my table for 1m 52s 548ms on 2.6 million rows, but that's fine.
The supported and safe variant is to use ALTER TABLE. This will not rewrite the table, since varchar and text have the same on-disk representation, so it will be done in a split second once the ACCESS EXCLUSIVE table lock is obtained.
Provided that your transactions are short, you will only experience a short stall while ALTER TABLE waits for all prior transactions to finish.
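If even that short stall is a concern, a common trick is to cap how long the ALTER TABLE may wait for its lock and retry at a quieter moment; a sketch, reusing the names from the question:

-- give up after 2 seconds instead of queueing behind long transactions
SET lock_timeout = '2s';

ALTER TABLE my_schema.my_table
    ALTER COLUMN column_need_change TYPE text;

RESET lock_timeout;

If the ALTER times out, nothing has changed and you can simply retry. An ALTER TABLE waiting for its lock also blocks every statement that queues up behind it, so failing fast is usually kinder to a busy table than waiting indefinitely.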
Messing with the system catalogs is dangerous, and you do so at your own risk.
You might get away with
UPDATE pg_attribute
SET atttypmod = -1,   -- -1 means "no length limit"
    atttypid = 25     -- 25 is the OID of the text type
WHERE attrelid = 'my_schema.my_table'::regclass
  AND attname = 'column_need_change';
But if it breaks something, you get to keep the pieces…
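If you do go down that road, at least verify afterwards that the catalog reports what you expect:

SELECT atttypid::regtype AS type, atttypmod
FROM pg_attribute
WHERE attrelid = 'my_schema.my_table'::regclass
  AND attname = 'column_need_change';
-- should show: text | -1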

Copy Postgres table while maintaining primary key autoincrement

I am trying to copy a table with this Postgres command; however, the primary key autoincrement feature does not copy over. Is there a quick and simple way to accomplish this? Thanks!
CREATE TABLE table2 AS TABLE table;
Here's what I'd do:
BEGIN;
LOCK TABLE oldtable;
CREATE TABLE newtable (LIKE oldtable INCLUDING ALL);
INSERT INTO newtable SELECT * FROM oldtable;
-- for a serial column, the copied default still points at the old table's
-- sequence, so push it past the highest existing id
SELECT setval('the_seq_name', (SELECT max(id) FROM oldtable)+1);
COMMIT;
... though this is a moderately unusual thing to need to do and I'd be interested in what problem you're trying to solve.
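If you don't know the sequence name, pg_get_serial_sequence can look it up from the owning table and column; note that with LIKE ... INCLUDING ALL a serial column's copied default still points at the original table's sequence, so you'd look it up on oldtable. A variant of the setval call above, assuming the serial column is called id:

SELECT setval(pg_get_serial_sequence('oldtable', 'id'),
              (SELECT max(id) FROM oldtable) + 1);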