PostgreSQL: using a uuid_generate_v4() value from an INSERT...SELECT in a later UPDATE statement

I'm writing a database migration that adds a new table whose id column is populated using uuid_generate_v4(). However, that generated id needs to be used in an UPDATE on another table to associate the entities. Here's an example:
BEGIN;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
CREATE TABLE IF NOT EXISTS models(
    id uuid PRIMARY KEY,
    type text
);
INSERT INTO models(id, type)
SELECT
    uuid_generate_v4() AS id
    ,t.type
FROM body_types AS t WHERE t.type != 'foo';
ALTER TABLE body_types
    ADD COLUMN IF NOT EXISTS model_id uuid NOT NULL DEFAULT uuid_generate_v4();
UPDATE body_types SET model_id =
    (SELECT ....??? I'M STUCK RIGHT HERE)
This is obviously a contrived query with flaws, but it illustrates that what I seem to need is a way to store the uuid_generate_v4() value from each inserted row in a variable or hash that I can reference in the later UPDATE statement.
Maybe I've modeled the solution wrong and there's a better way? Maybe there's a PostgreSQL feature I just don't know about? Any pointers greatly appreciated.

I was modeling the solution incorrectly. The short answer is "don't make the id in the INSERT random". In this case the key is to add the 'model_id' column to 'body_types' first. Then I can use it in the INSERT...SELECT without having to save it for later use because I'll be selecting it from the body_types table.
BEGIN;
CREATE EXTENSION IF NOT EXISTS "uuid-ossp";
ALTER TABLE body_types
    ADD COLUMN IF NOT EXISTS model_id uuid NOT NULL DEFAULT uuid_generate_v4();
CREATE TABLE IF NOT EXISTS models(
    id uuid PRIMARY KEY,
    type text
);
INSERT INTO models(id, type)
SELECT
    t.model_id AS id
    ,t.type
FROM body_types AS t WHERE t.type != 'foo';
Wish I had a better contrived example, but the point is: avoid generating random values that you have to reference later; in this case it was totally unnecessary anyway.
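For completeness, a sketch of how the association could then be enforced once the data is in place (this foreign key is my own addition, not part of the original migration; it assumes the corrected schema above):
ALTER TABLE body_types
    ADD CONSTRAINT body_types_model_id_fkey
    FOREIGN KEY (model_id) REFERENCES models(id);
COMMIT;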

Related

Adding Identity column in Ingres Db

I am trying to add an identity column to a table through an ALTER query in Ingres. When creating the table I am able to define the identity column, but not when adding it through an ALTER query. Kindly suggest an ALTER query for this.
It's not as straightforward as you might think: ALTER TABLE has a number of restrictions which make this a multi-step operation. Try this:
create table something(a integer, b varchar(20)) with page_size=8192;
alter table something add column c integer not null with default;
modify something to reconstruct;
alter table something alter column c integer not null generated always as identity;
modify something to reconstruct;

Is there a way to change the datatype for a column without changing the order of the column?

I have a column where I want to change the data type. I am currently using Redshift.
Is there a way to change the datatype without changing the order of the column?
I would recommend creating a new table with the schema you want and copying the data over from the old table using an insert into new_table (select * from old_table) statement (here you can also do any casting to the new data type), after which you can drop the old table and rename the new one:
drop table old_table;
alter table new_table rename to old_table;
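Putting the whole sequence together, a sketch (the column names and types are hypothetical; adjust the CAST to your target type):
CREATE TABLE new_table (
    id BIGINT,
    name VARCHAR(50)
);
INSERT INTO new_table
SELECT CAST(id AS BIGINT), name
FROM old_table;
DROP TABLE old_table;
ALTER TABLE new_table RENAME TO old_table;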
Using ALTER TABLE table_name ALTER COLUMN column_name TYPE new_data_type will not change the order of the columns in your table.
Please note that this clause can only change the size of a column defined as a VARCHAR data type.
There are also other limitations, described in the AWS documentation for ALTER TABLE.
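For example, widening a VARCHAR column in place is allowed (table and column names here are hypothetical):
ALTER TABLE old_table ALTER COLUMN name TYPE VARCHAR(256);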

Getting error for auto increment fields when inserting records without specifying columns

We're in process of converting over from SQL Server to Postgres. I have a scenario that I am trying to accommodate. It involves inserting records from one table into another, WITHOUT listing out all of the columns. I realize this is not recommended practice, but let's set that aside for now.
drop table if exists pk_test_table;
create table public.pk_test_table
(
    recordid SERIAL PRIMARY KEY NOT NULL,
    name text
);
--example 1: works and will insert a record with an id of 1
insert into pk_test_table values(default,'puppies');
--example 2: fails
insert into pk_test_table
select first_name from person_test;
Error I receive in the second example:
column "recordid" is of type integer but expression is of type
character varying Hint: You will need to rewrite or cast the
expression.
The default keyword will tell the database to grab the next value.
Is there any way to utilize this keyword in the second example? Or some way to tell the database to ignore auto-incremented columns and just let them be populated as normal?
I would prefer to not use a subquery to grab the next "id".
This functionality works in SQL Server and hence the question.
Thanks in advance for your help!
If you can't list column names, you should instead use the DEFAULT keyword, as you've done in the simple insert example. This won't work with an insert into ... select ..., though.
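To illustrate, this form is rejected with a syntax error, because DEFAULT is only valid in a VALUES list (or an UPDATE ... SET), not in a SELECT list:
insert into pk_test_table
select default, first_name from person_test; -- fails: DEFAULT is not allowed in a SELECT list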
For that, you need to invoke nextval. A subquery is not required, just:
insert into pk_test_table
select nextval('pk_test_table_recordid_seq'), first_name from person_test;
You do need to know the sequence name. You could get that from information_schema based on the table name, inferring its primary key, using a function that takes just the table name as an argument. It'd be ugly, but it'd work. I don't think there's any way around needing to know the table name.
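Alternatively, PostgreSQL can resolve the sequence for you: pg_get_serial_sequence() returns the name of the sequence owned by a column, so you don't have to hard-code it:
insert into pk_test_table
select nextval(pg_get_serial_sequence('pk_test_table', 'recordid')), first_name
from person_test;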
You're inserting a value into the first column, but the value belongs in the second position.
Therefore you can use INSERT INTO table(field) VALUES(value) syntax.
Since you need to fetch the values from another table, you have to remove VALUES and put the query there:
insert into pk_test_table(name)
select first_name from person_test;
I hope it helps
I do it this way via a separate function, though I think I'm getting around the issue via the table-level DEFAULT settings on a per-field basis.
-- the sequence must exist before the table can reference it in a DEFAULT
create sequence if not exists pk_test_table_id_seq;
create table public.pk_test_table
(
    recordid integer NOT NULL DEFAULT nextval('pk_test_table_id_seq'),
    name text,
    field3 integer NOT NULL DEFAULT 64,
    null_field_if_not_set integer,
    CONSTRAINT pk_test_table_pkey PRIMARY KEY ("recordid")
);
With function:
CREATE OR REPLACE FUNCTION func_pk_test_table() RETURNS void AS
$BODY$
    -- recordid and field3 pick up their DEFAULTs because they aren't listed
    INSERT INTO pk_test_table (name)
    SELECT first_name FROM person_test;
$BODY$
LANGUAGE sql VOLATILE;
Then just execute the function via SELECT func_pk_test_table();
Notice it doesn't have to specify all the fields, as long as the constraints and defaults allow it.

T-SQL create table with primary keys

Hello, I want to create a new table based on another one, and create primary keys as well.
Currently this is how I'm doing it. Table B has no primary keys defined, but I would like to create them in table A. Is there a way, using this SELECT TOP 0 statement, to do that? Or do I need to do an ALTER TABLE after I've created tableA?
Thanks
select TOP 0 *
INTO [tableA]
FROM [tableB]
SELECT INTO does not support copying any of the indexes, constraints, triggers, or even computed columns and other table properties, aside from the IDENTITY property (as long as you don't apply an expression to the IDENTITY column).
So, you will have to add the constraints after the table has been created and populated.
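For example (assuming [tableB] has a non-nullable id column suitable for keying on; the constraint name is my own):
SELECT TOP 0 *
INTO [tableA]
FROM [tableB];

ALTER TABLE [tableA]
ADD CONSTRAINT PK_tableA PRIMARY KEY (id);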
The short answer is NO. SELECT INTO will always create a HEAP table and, according to Books Online:
Indexes, constraints, and triggers defined in the source table are not
transferred to the new table, nor can they be specified in the
SELECT...INTO statement. If these objects are required, you must
create them after executing the SELECT...INTO statement.
So, after executing SELECT INTO you need to execute an ALTER TABLE or CREATE UNIQUE INDEX in order to add a primary key.
Also, if dbo.TableB does not already have an IDENTITY column (or if it does and you want to leave it out for some reason), and you need to create an artificial primary key column (rather than use an existing column in dbo.TableB to serve as the new primary key), you could use the IDENTITY function to create a candidate key column. But you still have to add the constraint to TableA after the fact to make it a primary key, since just the IDENTITY function/property alone does not make it so.
-- This statement will create a HEAP table
SELECT Col1, Col2, IDENTITY(INT,1,1) Col3
INTO dbo.MyTable
FROM dbo.AnotherTable;
-- This statement will create a clustered PK
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable_Col3 PRIMARY KEY (Col3);

auto-increment column in PostgreSQL on the fly?

I was wondering if it is possible to add an auto-increment integer field on the fly, i.e. without defining it in a CREATE TABLE statement?
For example, I have a statement:
SELECT 1 AS id, t.type FROM t;
and I would like to change it to something like:
SELECT some_nextval_magic AS id, t.type FROM t;
I need to create the auto-increment field on the fly in the some_nextval_magic part because the result relation is a temporary one during the construction of a bigger SQL statement. And the value of id field is not really important as long as it is unique.
I searched around here, and the answers to related questions (e.g. PostgreSQL Autoincrement) mostly involve specifying SERIAL or using nextval in CREATE TABLE. But I don't necessarily want to use CREATE TABLE or a VIEW (unless I have to). There are also some discussions of generate_series(), but I am not sure whether it applies here.
-- Update --
My motivation is illustrated in this GIS.SE answer regarding the PostGIS extension. The original query was:
CREATE VIEW buffer40units AS
SELECT
    g.path[1] as gid,
    g.geom::geometry(Polygon, 31492) as geom
FROM
    (SELECT (ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).*
     FROM point
    ) as g;
where g.path[1] as gid is an id field "required for visualization in QGIS". I believe the only requirement is that it is an integer and unique across the table. I encountered errors when running the above query whenever the g.path[] array is empty.
While trying to fix the array in the above query, this thought came to me:
Since the gid value does not matter anyways, is there an auto-increment function that can be used here instead?
If you wish to have an id field that assigns a unique integer to each row in the output, then use the row_number() window function:
select
    row_number() over () as id,
    t.type
from t;
The generated id will only be unique within each execution of the query. Multiple executions will not generate new unique values for id.
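Applied to the PostGIS view from the question, that might look like this (a sketch, not tested against the original data; it sidesteps g.path[1] entirely, which also avoids the empty-array errors):
CREATE VIEW buffer40units AS
SELECT
    (row_number() over ())::int as gid, -- cast, since row_number() returns bigint
    g.geom::geometry(Polygon, 31492) as geom
FROM
    (SELECT (ST_Dump(ST_UNION(ST_Buffer(geom, 40)))).geom
     FROM point
    ) as g;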