I have a generated column in PostgreSQL 12 defined as
create table people (
id bigserial primary key,
a varchar,
b boolean generated always as (a is not null) stored
);
but now I want column b to be settable, without losing the data already in the column. I could drop the column and recreate it, but that would lose the current data.
You can run several ALTER TABLE statements in a transaction:
BEGIN;
ALTER TABLE people ADD b_new boolean;
UPDATE people SET b_new = b;
ALTER TABLE people DROP b;
ALTER TABLE people RENAME b_new TO b;
COMMIT;
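If upgrading is an option: starting with PostgreSQL 13, a generated column can be turned into a plain column in place, keeping its stored values, so the copy-and-swap above is no longer needed:
ALTER TABLE people ALTER COLUMN b DROP EXPRESSION;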
alter table people add column temp_data boolean;
update people set temp_data = b;  -- copy data from column b to temp_data
Do whatever you want with column "b" (see the sketch after these steps).
update people set b = temp_data;  -- move the data back
alter table people drop column temp_data;  -- optional
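For this question, the "do whatever you want" step between the two updates would be replacing the generated column with a plain one, along these lines:
-- drop the generated column (its values are preserved in temp_data)
alter table people drop column b;
-- re-add b as a plain, settable column
alter table people add column b boolean;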
I hit the int limit on a large table I use.
The table is in single user mode and has no FK constraints.
CREATE TABLE my_table_bigint (LIKE my_table INCLUDING ALL);
ALTER TABLE my_table_bigint ALTER id DROP DEFAULT;
ALTER TABLE my_table_bigint ALTER COLUMN id SET DATA TYPE bigint;
CREATE SEQUENCE my_table_bigint_id_seq;
INSERT INTO my_table_bigint SELECT * FROM my_table;
ALTER TABLE my_table_bigint ALTER id SET DEFAULT nextval('my_table_bigint_id_seq');
ALTER SEQUENCE my_table_bigint_id_seq OWNED BY my_table_bigint.id;
SELECT setval('my_table_bigint_id_seq', (SELECT max(id) FROM my_table_bigint), true);
At this point I tested that I could insert new rows without any problems. Success, I thought.
I went about renaming the tables.
ALTER TABLE my_table RENAME TO my_table_old;
ALTER TABLE my_table_bigint RENAME TO my_table;
ALTER INDEX my_table_pkey RENAME TO my_table_old_pkey;
ALTER INDEX my_table_bigint_pkey RENAME TO my_table_pkey;
Now, when I checked the schema, the table's id column type had changed back to integer instead of bigint.
Copying took about 3 days, so I am really, really hoping that I don't need to do this again. This is Postgres 10 on RDS.
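For reference, one way to check a column's actual type directly from the catalog (with the table names used above):
SELECT data_type
FROM information_schema.columns
WHERE table_name = 'my_table' AND column_name = 'id';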
EDIT
I'm going to take care of this problem like this:
Create a new table - call it my_table_bigint2.
Do this:
CREATE TABLE my_table_bigint2 (LIKE my_table INCLUDING ALL);
ALTER TABLE my_table_bigint2 ALTER id DROP DEFAULT;
ALTER TABLE my_table_bigint2 ALTER COLUMN id SET DATA TYPE bigint;
CREATE SEQUENCE my_table_bigint2_id_seq;
ALTER TABLE my_table_bigint2 ALTER id SET DEFAULT nextval('my_table_bigint2_id_seq');
ALTER SEQUENCE my_table_bigint2_id_seq OWNED BY my_table_bigint2.id;
And start populating that table with the new data. (This is fine given the usecase.)
In the meantime, I'm going to run
ALTER TABLE my_table ALTER COLUMN id SET DATA TYPE bigint;
And finally, once that's done, I'm going to
INSERT INTO my_table SELECT * FROM my_table_bigint2;
My follow-up question - is this allowed? Will this create some interaction between the sequences? Should I use a new sequence?
I created a table in PostgreSQL but forgot to add auto increment.
How do I alter the empty id column in Postgres to add auto increment?
Starting with Postgres 10 it's recommended to use identity columns for this.
You can turn an existing column into an identity column using an ALTER TABLE:
alter table the_table
alter id add generated always as identity;
If you already have data in the table, you will need to sync the sequence:
select setval(pg_get_serial_sequence('the_table', 'id'), (select max(id) from the_table));
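Note that generated always rejects direct INSERTs into the column (unless you use OVERRIDING SYSTEM VALUE); if you want to be able to supply explicit values, use the by default variant instead:
alter table the_table
    alter id add generated by default as identity;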
You will need to create a sequence owned by that column and set that as the default value.
e.g.
CREATE TABLE mytable (id int);
CREATE SEQUENCE mytable_id_seq OWNED BY mytable.id;
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');
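The default only applies to new rows; if the table already contains rows, their id values stay NULL and would need a backfill along these lines before you could add NOT NULL or a primary key:
UPDATE mytable SET id = nextval('mytable_id_seq') WHERE id IS NULL;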
I am using PostgreSQL 9.5 and have a TYPE that describes a collection of columns:
CREATE TYPE datastore.record AS
(recordid bigint,
...
tags text[]);
I have created many tables relying on this TYPE:
CREATE TABLE datastore.events
OF datastore.record;
Now I would like to add a column to one table that relies on this TYPE, without updating the TYPE itself. I think that is impossible as such, so I am wondering whether there is a way to unbind my table from this TYPE without losing any data or copying the table into a temporary table.
There is a special option not of for this purpose. Per the documentation:
NOT OF - This form dissociates a typed table from its type.
So:
alter table my_table not of;
alter table my_table add new_column integer;
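Should you later want to tie the table back to a type, the converse form exists as well; it requires the table's columns to match the type exactly (my_type here is a hypothetical type with the matching definition):
alter table my_table of my_type;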
If you don't want to break the link between the tables and the type, you can instead add the attribute to the type itself with CASCADE:
--drop table if exists t2;
--drop table if exists t1;
--drop type if exists tp_foo;
create type tp_foo as (i int, x int);
create table t1 of tp_foo;
create table t2 (y text) inherits(t1);
alter type tp_foo add attribute z date cascade;
select * from t2;
I want to alter a column's data type from float4 to float8 on a table with a huge row count. If I do it the usual way, it takes a long time and my table is locked for the duration.
Is there any hack to do it without rewriting the table contents?
ALTER TABLE ... ALTER COLUMN ... TYPE ... USING ... (or related things like ALTER TABLE ... ADD COLUMN ... DEFAULT ... NOT NULL) requires a full table rewrite with an exclusive lock.
You can, with a bit of effort, work around this in steps:
ALTER TABLE thetable ADD COLUMN thecol_tmp newtype, without the NOT NULL constraint.
Create a trigger on the table that, for every write, fills thecol_tmp from thecol, so rows that are inserted or updated get a value for thecol_tmp as well as thecol (see the sketch after these steps).
In batches by ID range, UPDATE thetable SET thecol_tmp = CAST(thecol AS newtype) WHERE id BETWEEN .. AND ..
Once all values are populated in thecol_tmp, ALTER TABLE thetable ALTER COLUMN thecol_tmp SET NOT NULL;.
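A minimal sketch of such a trigger, assuming the new type is float8 and using the placeholder names from the steps above:
CREATE FUNCTION sync_thecol_tmp() RETURNS trigger AS $$
BEGIN
    NEW.thecol_tmp := NEW.thecol::float8;  -- keep the new column in sync on every write
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER sync_thecol_tmp_trg
BEFORE INSERT OR UPDATE ON thetable
FOR EACH ROW EXECUTE PROCEDURE sync_thecol_tmp();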
Now swap the columns and drop the trigger in a single tx:
BEGIN;
ALTER TABLE thetable DROP COLUMN thecol;
ALTER TABLE thetable RENAME COLUMN thecol_tmp TO thecol;
DROP TRIGGER whatever_trigger_name ON thetable;
COMMIT;
Ideally we'd have an ALTER TABLE ... ALTER COLUMN ... CONCURRENTLY that did this within PostgreSQL, but nobody's implemented that. Yet.
Consider the following table with approximately 10M rows
CREATE TABLE user
(
id bigint NOT NULL,
...
CONSTRAINT user_pk PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
)
Then I applied the following ALTER statements:
ALTER TABLE USER ADD COLUMN BUSINESS_ID VARCHAR(50);
-- OK
UPDATE USER SET BUSINESS_ID = ID;  -- ~1500 sec
-- OK
ALTER TABLE USER ALTER COLUMN BUSINESS_ID SET NOT NULL;
ERROR: column "business_id" contains null values
SQL state: 23502
This is very strange, since the id column (which was copied into business_id) cannot contain null values, being the primary key. But to be sure I checked:
select count(*) from USER where BUSINESS_ID is null;
-- 0 records
I suspect that this is a bug; just wondering if I am missing something trivial.
The only logical explanation would be a concurrent INSERT.
(Using tbl instead of the reserved word user as table name.)
ALTER TABLE tbl ADD COLUMN BUSINESS_ID VARCHAR(50);
-- OK
UPDATE tbl SET BUSINESS_ID = ID;  -- ~1500 sec
-- OK
-- concurrent INSERT HERE !!!
ALTER TABLE tbl ALTER COLUMN BUSINESS_ID SET NOT NULL;
To prevent this, use instead:
ALTER TABLE tbl
ADD COLUMN BUSINESS_ID VARCHAR(50) DEFAULT ''; -- or whatever is appropriate
...
You may end up with a default value in some rows. You might want to check.
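For instance, with the empty-string default from above, the check could look like:
SELECT count(*) FROM tbl WHERE business_id = '';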
Or run everything as transaction block:
BEGIN;
-- LOCK tbl; -- not needed
ALTER ...
UPDATE ...
ALTER ...
COMMIT;
You might take an exclusive lock to be sure, but ALTER TABLE .. ADD COLUMN takes an ACCESS EXCLUSIVE lock anyway. (Which is only released at the end of the transaction, like all locks.)
Maybe it wants a default value? From the PostgreSQL docs on ALTER TABLE:
To add a column, use a command like this:
ALTER TABLE products ADD COLUMN description text;
The new column is initially filled with whatever default value is given (null if you don't specify a DEFAULT clause).
So,
ALTER TABLE USER ALTER COLUMN BUSINESS_ID SET DEFAULT '',
    ALTER COLUMN BUSINESS_ID SET NOT NULL;
You cannot do that in the same transaction. Add your column and update it; then, in a separate transaction, set the NOT NULL constraint.