PostgreSQL supports this operation as shown below:
ALTER TABLE name
SET SCHEMA new_schema
The operation won't work in Redshift. Is there any way to do that?
I tried updating pg_class to set relnamespace (the schema id) for the table, which requires a superuser account with usecatupd set to true in the pg_shadow table. But I got a permission denied error. The only account that can modify pg system tables is rdsdb.
server=# select * from pg_user;
 usename | usesysid | usecreatedb | usesuper | usecatupd |  passwd  | valuntil | useconfig
---------+----------+-------------+----------+-----------+----------+----------+-----------
 rdsdb   |        1 | t           | t        | t         | ******** |          |
 myuser  |      100 | t           | t        | f         | ******** |          |
So does Redshift really grant no permission for that?
The quickest way to do this now is as follows:
CREATE TABLE my_new_schema.my_table (LIKE my_old_schema.my_table);
ALTER TABLE my_new_schema.my_table APPEND FROM my_old_schema.my_table;
DROP TABLE my_old_schema.my_table;
The data for my_old_schema.my_table is simply remapped to belong to my_new_schema.my_table in this case. Much faster than doing an INSERT INTO.
Important note: "After data is successfully appended to the target table, the source table is empty" (from AWS docs on ALTER TABLE APPEND), so be careful to run the ALTER statement only once!
Note that you may have to drop and recreate any views that depend on my_old_schema.my_table. UPDATE: If you do this regularly you should create your views using WITH NO SCHEMA BINDING and they will continue to point at the correct table without having to be recreated.
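For example, a late-binding view can be created like this (a sketch; the view name is made up, and note that the referenced table must be schema-qualified):
CREATE VIEW my_view AS
SELECT * FROM my_new_schema.my_table
WITH NO SCHEMA BINDING;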
The best way to do that is to create a new table in the desired schema, and after that do an INSERT ... SELECT with the data from the old table.
Then drop your current table and rename the new one with ALTER TABLE.
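Putting those steps together, something like this should work (a sketch; the schema names are placeholders and the temporary name my_table_tmp is made up):
CREATE TABLE my_new_schema.my_table_tmp (LIKE my_old_schema.my_table INCLUDING DEFAULTS);
INSERT INTO my_new_schema.my_table_tmp SELECT * FROM my_old_schema.my_table;
DROP TABLE my_old_schema.my_table;
ALTER TABLE my_new_schema.my_table_tmp RENAME TO my_table;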
You can create a new table with
CREATE TABLE schema1.tableName (LIKE schema2.tableName INCLUDING DEFAULTS);
and then copy the contents of the table from one schema to the other using an INSERT INTO statement,
followed by a DROP TABLE to delete the old table.
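For instance, using the same placeholder names:
INSERT INTO schema1.tableName SELECT * FROM schema2.tableName;
DROP TABLE schema2.tableName;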
This is how I do it.
-- Drop if you have already one backup
DROP TABLE IF EXISTS TABLE_NAME_BKP CASCADE;
-- Create two backups: one to work with (deleted at the end) and one as the real backup
SELECT * INTO TABLE_NAME_BKP FROM TABLE_NAME;
SELECT * INTO TABLE_NAME_4_WORK FROM TABLE_NAME;
-- We could also do the ALTER below, but it keeps the primary key constraint name, so you can't create a new table with the same constraint name
ALTER TABLE TABLE_NAME RENAME TO TABLE_NAME_4_WORK;
-- Ensure the row counts match
SELECT COUNT(*) FROM TABLE_NAME;
SELECT COUNT(*) FROM TABLE_NAME_4_WORK;
-- Create the new table schema
DROP TABLE IF EXISTS TABLE_NAME CASCADE;
CREATE TABLE TABLE_NAME (
    ID varchar(36) NOT NULL,
    OLD_COLUMN varchar(36),
    NEW_COLUMN_1 varchar(36)
)
compound sortkey (ID, OLD_COLUMN, NEW_COLUMN_1);
ALTER TABLE TABLE_NAME
ADD CONSTRAINT PK__TAB_NAME__ID
PRIMARY KEY (id);
-- copy data from old to new
INSERT INTO TABLE_NAME (
    id,
    OLD_COLUMN)
(SELECT
    id,
    OLD_COLUMN FROM TABLE_NAME_4_WORK);
-- Drop the work table TABLE_NAME_4_WORK
DROP TABLE TABLE_NAME_4_WORK;
-- Compare the BKP and new table row counts, and keep the BKP table for some time.
SELECT COUNT(*) FROM TABLE_NAME_BKP;
SELECT COUNT(*) FROM TABLE_NAME;
Related
I have a table that has the following fields
----------------------------------
| id | user_id | doc_id |
----------------------------------
I want to create a new unique constraint to make sure that there are no repeat user_id and doc_id records. Aka a user can only be linked to a doc one time. That is simple enough.
ALTER TABLE mytable
ADD CONSTRAINT uniquectm_const UNIQUE (user_id, doc_id);
The issue is I have records that currently violate that constraint. I was wondering if there is an easy way to query for those records, or to tell Postgres to just delete anything that violates the constraint.
Identifying records that violate your new key:
SELECT *
FROM
(
    SELECT id, user_id, doc_id
         , COUNT(*) OVER (PARTITION BY user_id, doc_id) AS unique_check
    FROM mytable
) dup
WHERE unique_check > 1;
Then you can figure out from those duplicates which ones should be deleted, and perform the delete.
To my knowledge there is no other way to perform this since any automated "Delete any duplicates" command would leave the database engine to decide which of the two-or-more duplicate records to get rid of.
If the entire record is a duplicate (all columns match) then you could just create a new table with your new unique constraint and do an INSERT INTO newtable SELECT DISTINCT * FROM oldtable, but I'm betting that isn't the case.
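If you decide to keep, say, the row with the lowest id for each (user_id, doc_id) pair, a delete along these lines would do it (a sketch; adjust the ORDER BY to whichever row you want to keep):
DELETE FROM mytable
WHERE id IN (
    SELECT id
    FROM (
        SELECT id,
               ROW_NUMBER() OVER (PARTITION BY user_id, doc_id ORDER BY id) AS rn
        FROM mytable
    ) ranked
    WHERE rn > 1
);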
I hit the int limit on a large table I use.
The table is in single user mode and has no FK constraints.
CREATE TABLE my_table_bigint (LIKE my_table INCLUDING ALL);
ALTER TABLE my_table_bigint ALTER id DROP DEFAULT;
ALTER TABLE my_table_bigint alter column id set data type bigint;
CREATE SEQUENCE my_table_bigint_id_seq;
INSERT INTO my_table_bigint SELECT * FROM my_table;
ALTER TABLE my_table_bigint ALTER id SET DEFAULT nextval('my_table_bigint_id_seq');
ALTER SEQUENCE my_table_bigint_id_seq OWNED BY my_table_bigint.id;
SELECT setval('my_table_bigint_id_seq', (SELECT max(id) FROM my_table_bigint), true);
At this point I tested that I could insert new rows without any problems. Success, I thought.
I went about renaming the tables.
alter table my_table rename to my_table_old;
alter table my_table_bigint rename to my_table;
ALTER INDEX post_comments_pkey RENAME TO post_comments_old_pkey;
ALTER INDEX post_comments_pkey_bigint RENAME TO post_comments_pkey;
Now, when I checked the schema.... the table ID type had changed BACK to integer, instead of bigint.
Copying took about 3 days, so I am really, really hoping that I don't need to do this again. This is Postgres 10 on RDS.
EDIT
I'm going to take care of this problem like this:
Create a new table - call it my_table_bigint2.
Do this:
CREATE TABLE my_table_bigint2 (LIKE my_table INCLUDING ALL);
ALTER TABLE my_table_bigint2 ALTER id DROP DEFAULT;
ALTER TABLE my_table_bigint2 alter column id set data type bigint;
CREATE SEQUENCE my_table_bigint2_id_seq;
ALTER TABLE my_table_bigint2 ALTER id SET DEFAULT nextval('my_table_bigint2_id_seq');
ALTER SEQUENCE my_table_bigint2_id_seq OWNED BY my_table_bigint2.id;
And start populating that table with the new data. (This is fine given the use case.)
In the meantime, I'm going to run
ALTER TABLE post_comments alter column id set data type bigint;
And finally, once that's done, I'm going to
INSERT INTO my_table SELECT * FROM my_table_bigint2;
My follow-up question - is this allowed? Will this create some interaction between the sequences? Should I use a new sequence?
In Postgres, you can select into a temporary table. Is it possible to select into a temporary table using a function, such as
Select * into temporary table myTempTable from someFunctionThatReturnsATable(1);
Thanks!
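A self-contained sketch of that pattern, assuming a simple set-returning function (the function body here is made up purely for illustration):
CREATE FUNCTION someFunctionThatReturnsATable(n integer)
RETURNS TABLE (id integer, label text) AS $$
    SELECT g, 'row ' || g FROM generate_series(1, n) AS g;
$$ LANGUAGE sql;

SELECT * INTO TEMPORARY TABLE myTempTable FROM someFunctionThatReturnsATable(5);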
I am trying to copy a table with this postgres command however the primary key autoincrement feature does not copy over. Is there any quick and simple way to accomplish this? Thanks!
CREATE TABLE table2 AS TABLE table;
Here's what I'd do:
BEGIN;
LOCK TABLE oldtable;
CREATE TABLE newtable (LIKE oldtable INCLUDING ALL);
INSERT INTO newtable SELECT * FROM oldtable;
SELECT setval('the_seq_name', (SELECT max(id) FROM oldtable)+1);
COMMIT;
... though this is a moderately unusual thing to need to do and I'd be interested in what problem you're trying to solve.
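One caveat: if id is a serial column, LIKE ... INCLUDING ALL copies its default, which still calls nextval on oldtable's own sequence, so both tables will draw from the same sequence. If you would rather give newtable an independent sequence, a sketch along these lines (the sequence name here is made up):
CREATE SEQUENCE newtable_id_seq OWNED BY newtable.id;
ALTER TABLE newtable ALTER COLUMN id SET DEFAULT nextval('newtable_id_seq');
SELECT setval('newtable_id_seq', (SELECT max(id) FROM newtable));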
Can I ALTER an existing table to be UNLOGGED?
PostgreSQL 9.5+ allows setting an existing table to LOGGED / UNLOGGED with the ALTER TABLE command.
For example:
ALTER TABLE table_test SET LOGGED;
ALTER TABLE table_test SET UNLOGGED;
The following solution is for PostgreSQL versions <= 9.4:
You can do:
create unlogged table your_table_alt as
select * from your_table;
Then:
drop table your_table;
alter table your_table_alt rename to your_table;