CREATE TABLE AS with PRIMARY KEY in one statement (PostgreSQL) - postgresql

Is there a way to set the PRIMARY KEY in a single "CREATE TABLE AS" statement?
Example - I would like the following to be written in 1 statement rather than 2:
CREATE TABLE "new_table_name" AS SELECT a.uniquekey, a.some_value + b.some_value FROM "table_a" AS a, "table_b" AS b WHERE a.uniquekey=b.uniquekey;
ALTER TABLE "new_table_name" ADD PRIMARY KEY (uniquekey);
Is there a better way of doing this in general (assume there are more than 2 tables, e.g. 10)?

According to the manual pages for CREATE TABLE and CREATE TABLE AS, you can either:
create the table with a primary key first, and fill it with INSERT ... SELECT later
use CREATE TABLE AS first, and ADD PRIMARY KEY later
But not both CREATE TABLE AS with a PRIMARY KEY in a single statement - which is what you wanted.
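For completeness, a minimal sketch of the first alternative, using the question's tables (the column types and the name total are illustrative guesses, not taken from the question):
CREATE TABLE new_table_name (
    uniquekey integer PRIMARY KEY,
    total numeric
);
INSERT INTO new_table_name (uniquekey, total)
SELECT a.uniquekey, a.some_value + b.some_value
FROM table_a AS a
JOIN table_b AS b ON a.uniquekey = b.uniquekey;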

If you want to create a new table with the same table structure of another table, you can do this in one statement (both creating a new table and setting the primary key) like this:
CREATE TABLE mytable_clone (
LIKE mytable
INCLUDING defaults
INCLUDING constraints
INCLUDING indexes
);

No, there is no shorter way to create the table and the primary key.

See the command below; it will create a new table with all the constraints and with no data. This worked on Postgres 9.5:
CREATE TABLE IF NOT EXISTS <ClonedTableName> (LIKE <OriginalTableName> INCLUDING ALL);

Well, in MySQL both are possible in one command. The command is:
create table new_tbl (PRIMARY KEY(`id`)) as select * from old_tbl;
where id is the primary key column of old_tbl.

You may do it this way (this is Oracle syntax):
CREATE TABLE IOT (EMPID,ID,Name, CONSTRAINT PK PRIMARY KEY( ID,EMPID))
ORGANIZATION INDEX NOLOGGING COMPRESS 1 PARALLEL 4
AS SELECT 1 as empid,2 id,'XYZ' Name FROM dual;

Related

Add column to show a row number in PostgreSQL [duplicate]

I have a table with existing data. Is there a way to add a primary key without deleting and re-creating the table?
(Updated - Thanks to the people who commented)
Modern Versions of PostgreSQL
Suppose you have a table named test1, to which you want to add an auto-incrementing, primary-key id (surrogate) column. The following command should be sufficient in recent versions of PostgreSQL:
ALTER TABLE test1 ADD COLUMN id SERIAL PRIMARY KEY;
Older Versions of PostgreSQL
In old versions of PostgreSQL (prior to 8.x?) you had to do all the dirty work. The following sequence of commands should do the trick:
ALTER TABLE test1 ADD COLUMN id INTEGER;
CREATE SEQUENCE test_id_seq OWNED BY test1.id;
ALTER TABLE test1 ALTER COLUMN id SET DEFAULT nextval('test_id_seq');
UPDATE test1 SET id = nextval('test_id_seq');
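Note that this sequence, by itself, does not make the column a primary key; to fully match the one-liner you would presumably still run:
ALTER TABLE test1 ALTER COLUMN id SET NOT NULL;
ALTER TABLE test1 ADD PRIMARY KEY (id);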
Again, in recent versions of Postgres this is roughly equivalent to the single command above.
ALTER TABLE test1 ADD COLUMN id SERIAL PRIMARY KEY;
This is all you need to:
Add the id column
Populate it with a sequence from 1 to count(*).
Set it as primary key / not null.
Credit is given to #resnyanskiy who gave this answer in a comment.
To use an identity column in v10,
ALTER TABLE test
ADD COLUMN id { int | bigint | smallint}
GENERATED { BY DEFAULT | ALWAYS } AS IDENTITY PRIMARY KEY;
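For example, one concrete form of that statement might be (bigint and GENERATED ALWAYS are just one possible choice here):
ALTER TABLE test ADD COLUMN id bigint GENERATED ALWAYS AS IDENTITY PRIMARY KEY;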
For an explanation of identity columns, see https://blog.2ndquadrant.com/postgresql-10-identity-columns/.
For the difference between GENERATED BY DEFAULT and GENERATED ALWAYS, see https://www.cybertec-postgresql.com/en/sequences-gains-and-pitfalls/.
For altering the sequence, see https://popsql.io/learn-sql/postgresql/how-to-alter-sequence-in-postgresql/.
I landed here because I was looking for something like that too. In my case, I was copying the data from a set of staging tables with many columns into one table while also assigning row ids to the target table. Here is a variant of the above approaches that I used.
I added the serial column at the end of my target table, so that I don't have to have a placeholder for it in the INSERT statement. Then a simple INSERT ... SELECT * into the target table auto-populated this column. Here are the two SQL statements that I used on PostgreSQL 9.6.4.
ALTER TABLE target ADD COLUMN some_column SERIAL;
INSERT INTO target SELECT * from source;
ALTER TABLE test1 ADD id int8 NOT NULL GENERATED ALWAYS AS IDENTITY;
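That fills the new identity column for the existing rows, but it does not make it the primary key; presumably you would still follow it with:
ALTER TABLE test1 ADD PRIMARY KEY (id);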

DBeaver does not keep primary keys on import/export

I'm using DBeaver to migrate data from Postgres to Derby. When I use the wizard in DBeaver to go directly from one table to another, the primary key in Derby is being generated instead of inserted. This causes issues on foreign keys for subsequent tables.
If I generate the SQL, the primary key is part of the SQL statement and is properly inserted. However there are too many rows to handle in this way.
Is there a way to have DBeaver insert the primary key instead of letting it be generated when importing / exporting directly to database tables?
Schema of target table
CREATE TABLE APP.THREE_PHASE_MOTOR (
ID BIGINT NOT NULL DEFAULT GENERATED_BY_DEFAULT,
VERSION INTEGER NOT NULL,
CONSTRAINT SQL130812103636700 PRIMARY KEY (ID)
);
CREATE INDEX SQL160416184259290 ON APP.THREE_PHASE_MOTOR (ID);
Schema of source table
CREATE TABLE public.three_phase_motor (
id int8 NOT NULL DEFAULT nextval('three_phase_motor_id_seq'::regclass),
"version" int4 NOT NULL,
CONSTRAINT three_phase_motor_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
I found a trick that works with version 6.0.5; follow these steps:
double-click the table name
then select the Data tab
then click the gray table corner (the one above the row-order numbers) to select all rows
then right-click the same gray table corner
then select the Generate SQL -> INSERT menu
A window with the INSERT statements, including the id (primary key), will pop up.
PS: when selecting a subset of rows, the same menu works for only those rows too.
When you go to export, check the Include generated column option, and the primary key (auto-incremented) will be included in the export.
See this for more details: https://github.com/dbeaver/dbeaver/commit/d1f74ec88183d78c7c6620690ced217a52555262
Personally I think this needs to be clearer, and excluding it in the first place was not good for data integrity.
As of DBeaver version 22.0.5, you have to set Include generated columns to true; that will export the primary/generated columns.

postgres update table based on another table

I am relatively new to postgres (I am a django user - I use pgsql via the ORM), and I am trying to figure out a way to insert content into a specific column - but so far, I am not having any luck. So, I first have a table dzmodel_uf with two columns: id (which is the PK) and content - both of which are populated (say 50 entries).
Now, I would like to create another table which references (foreign keys) the id of dzmodel_uf. So, I do the following:
--INITIALIZATION
CREATE TABLE MyNewTable(id integer REFERENCES dzmodel_uf (id));
ALTER TABLE ONLY MyNewTable ADD CONSTRAINT mynewtable_pkey PRIMARY KEY (id);
which works fine. Now, I create a column on my MyNewTable table like so:
ALTER TABLE MyNewTable ADD COLUMN content_tsv_gin tsvector;
...which also works fine. Finally, I would like to add the content from dzmodel_uf's content column, like so:
UPDATE MyNewTable SET content_tsv_gin = to_tsvector('public.wtf', dzmodel_uf(content) )
...but this FAILS and says that column content does not exist.
In a nutshell, I am not sure how I can reference values from another table.
I hope I understood the question (it is rather fuzzy). There are no rows in the target table, so you have to add them.
You need INSERT, not UPDATE:
INSERT INTO MyNewTable (id,content_tsv_gin)
SELECT dzu.id, to_tsvector('public.wtf', dzu.content)
FROM dzmodel_uf dzu
;
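If the rows already existed in MyNewTable and only the tsvector column needed filling, an UPDATE ... FROM would be the rough equivalent (same hypothetical names and text search configuration as above):
UPDATE MyNewTable m
SET content_tsv_gin = to_tsvector('public.wtf', dzu.content)
FROM dzmodel_uf dzu
WHERE dzu.id = m.id;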

T-SQL create table with primary keys

Hello, I want to create a new table based on another one and create primary keys as well.
Currently this is how I'm doing it. Table B has no primary keys defined, but I would like to create them in table A. Is there a way to do that using this SELECT TOP 0 statement? Or do I need to do an ALTER TABLE after I create tableA?
Thanks
select TOP 0 *
INTO [tableA]
FROM [tableB]
SELECT INTO does not support copying any of the indexes, constraints, triggers or even computed columns and other table properties, aside from the IDENTITY property (as long as you don't apply an expression to the IDENTITY column).
So, you will have to add the constraints after the table has been created and populated.
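A minimal sketch of that, using the question's table names and assuming tableB has a NOT NULL column named id that can serve as the key (a nullable column would first need an ALTER COLUMN ... NOT NULL):
SELECT TOP 0 *
INTO dbo.tableA
FROM dbo.tableB;

ALTER TABLE dbo.tableA
ADD CONSTRAINT PK_tableA PRIMARY KEY (id);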
The short answer is NO. SELECT INTO will always create a HEAP table and, according to Books Online:
Indexes, constraints, and triggers defined in the source table are not transferred to the new table, nor can they be specified in the SELECT...INTO statement. If these objects are required, you must create them after executing the SELECT...INTO statement.
So, after executing SELECT INTO you need to execute an ALTER TABLE or CREATE UNIQUE INDEX in order to add a primary key.
Also, if dbo.TableB does not already have an IDENTITY column (or if it does and you want to leave it out for some reason), and you need to create an artificial primary key column (rather than use an existing column in dbo.TableB as the new primary key), you could use the IDENTITY function to create a candidate key column. But you still have to add the constraint to TableA after the fact to make it a primary key, since the IDENTITY function/property alone does not make it so.
-- This statement will create a HEAP table
SELECT Col1, Col2, IDENTITY(INT,1,1) Col3
INTO dbo.MyTable
FROM dbo.AnotherTable;
-- This statement will create a clustered PK
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable_Col3 PRIMARY KEY (Col3);

How to add a new identity column to a table in SQL Server?

I am using SQL Server 2008 Enterprise. I want to add an identity column (as a unique clustered index and primary key) to an existing table. An integer identity column auto-incrementing by 1 is OK. Any solutions?
BTW: what confuses me most is the existing rows - how do I automatically fill in the new identity column's data for them?
Thanks in advance,
George
You can use:
alter table <mytable> add ident INT IDENTITY
This adds an ident column to your table and populates it with data starting from 1 and incrementing by 1.
To add a clustered index:
CREATE CLUSTERED INDEX <indexName> on <mytable>(ident)
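If you also want the new column to be the primary key (as the question asks), a primary key constraint can create the clustered index in one step instead of the separate CREATE CLUSTERED INDEX above (a sketch, assuming your table is called mytable):
ALTER TABLE mytable ADD CONSTRAINT PK_mytable PRIMARY KEY CLUSTERED (ident);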
I have one approach in mind, but I'm not sure whether it is feasible at your end or not. Let me assure you, though, it is a very effective approach: you can create a table that has an identity column and insert your entire data into that table; from there on, handling any duplicate data is child's play. There are two ways of adding an identity column to a table with existing data (a sketch of the first option follows the links below):
Create a new table with an identity column, copy the data to this new table, then drop the existing table and rename the new one.
Create a new column with identity & drop the existing column.
For reference, I have found two articles:
http://blog.sqlauthority.com/2009/05/03/sql-server-add-or-remove-identity-property-on-column/
http://cavemansblog.wordpress.com/2009/04/02/sql-how-to-add-an-identity-column-to-a-table-with-data/
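A minimal sketch of the first option (the table and column names here are made up for illustration):
-- copy into a new table, generating the identity values as we go
SELECT IDENTITY(INT, 1, 1) AS ID, Col1, Col2
INTO dbo.MyTable_new
FROM dbo.MyTable;
-- swap the tables
DROP TABLE dbo.MyTable;
EXEC sp_rename 'dbo.MyTable_new', 'MyTable';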
You don't always have permissions for DBCC commands.
Solution #2:
-- seed the new IDENTITY column from another table's max ID, adding the column via dynamic SQL
create table #tempTable1 (Column1 int)
declare @new_seed varchar(20) = CAST((select max(ID) from SomeOtherTable) as varchar(20))
exec (N'alter table #tempTable1 add ID int IDENTITY(' + @new_seed + ', 1)')