insert into table1 select * from table2
table1 has one additional column compared to table2.
How can I move the data for the other columns from table2 to table1 without listing every column name individually in the insert query, for Redshift?
Any ideas?
If you really want to do this, table1's extra columns have to be at the end of its column list; then you can supply nulls or values after the select star, like this:
insert into table1
select *, null, null, null
from table2
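For illustration, here is a minimal sketch with hypothetical table definitions (the column names are made up). Because table1's extra column comes last, the columns produced by select * line up positionally with table1's leading columns and the trailing null fills the rest:
-- hypothetical tables: table1 has one extra column, placed last
create table table2 (id int, name varchar(50));
create table table1 (id int, name varchar(50), created_at timestamp);
-- select * yields (id, name); the trailing null goes into created_at
insert into table1
select *, null
from table2;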
I want to insert multiple rows from table2 into table1. The problem is that some fields in table1 should be computed, while others should be selected from table2. For example, something like this:
insert into table1 (id, selectField1, selectField2, constant)
values ((gen_random_uuid()), (select superField1 from table2), (select superField2 from table2), 'test');
So the logic is to select superField1 and superField2 from all the rows in table2 and insert them into table1 along with the constant value 'test' and generated UUIDs. superField1 and superField2 should come from the same row of table2 when inserting into table1. How can I achieve something like this using Liquibase?
P.S.: I'm using the <sql> tag since it's easier to implement this in SQL than as an XML changeset, but if you know how to do it in XML that would be appreciated too; plain SQL is enough. The DBMS is Postgres.
Don't use the VALUES clause if the source is a SELECT statement:
insert into table1 (id, selectField1, selectField2, constant)
select gen_random_uuid(), superField1, superField2, 'test'
from table2;
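Note that gen_random_uuid() is built into PostgreSQL 13 and later; on older versions it comes from the pgcrypto extension. If you want to keep the statement inside Liquibase, a formatted SQL changelog is one way to wrap it; this is only a sketch, and the author and changeset id below are placeholders:
--liquibase formatted sql
--changeset your.name:insert-from-table2
insert into table1 (id, selectField1, selectField2, constant)
select gen_random_uuid(), superField1, superField2, 'test'
from table2;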
I have this query that inserts rows into a MySQL database, and it works perfectly:
insert into test(id,user)
select null,user from table2
union
select null,user from table3
But when I run the above query in PostgreSQL it does not work; I get this error: column "id" is of type integer but expression is of type text. However, each of the queries below works on its own.
When I run this query in PostgreSQL it works properly:
insert into test(id,user)
select null,user from table2
This query also works properly in PostgreSQL:
insert into test(id,user)
select null,user from table3
And this query works properly in PostgreSQL as well:
select null,user from table2
union
select null,user from table3
null is not a real value and thus has no data type. The default assumed data type is text; that's where the error message comes from. Just cast the value to int in the first SELECT:
insert into test(id, "user")
select null::int, "user" from table2
union
select null, "user" from table3
Or, even better, leave out the id completely so that any default defined for the id column is used. It seems strange to try to insert null into a column named id:
insert into test("user")
select "user" from table2
union
select "user" from table3
Note that user is a reserved keyword and a built-in function, so you will have to quote it to avoid problems. In the long run, I recommend finding a different name for that column.
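As a sketch of what the second variant relies on: the id column needs a default for it to be filled in automatically. Assuming you control the table definition, an identity column (PostgreSQL 10 and later) would do it; this definition is hypothetical, not taken from the question:
-- hypothetical definition of test where omitting id works
create table test (
    id     integer generated by default as identity primary key,
    "user" text
);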
I want to create a new table called table2 from another table called table1, without copying the data or the constraints. I used this query:
create table table2 as select * from table1 where 1=2;
This created table2 without any data, but it copied the constraints from table1. Is there a way to avoid copying the constraints from table1?
The answer can be found in the question create table with select union has no constraints.
If the select is a union, Oracle will not add any constraints, so simply use the same select twice, and be sure not to include any records in the second select:
create table table2 as
select * from table1 where 1=2
union all
select * from table1 where 1=2;
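If you want to verify the result, one option (not part of the original answer) is to query Oracle's data dictionary for constraints on the new table; an empty result means nothing, not even a NOT NULL constraint, was copied:
-- table names are stored in upper case in the data dictionary
select constraint_name, constraint_type
from user_constraints
where table_name = 'TABLE2';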
I am new to postgresql (and databases in general) and was hoping to get some pointers on improving the efficiency of the following statement.
I am inserting data from one table into another and do not want to insert duplicate values. Both tables have a rid column (a unique identifier) that is indexed and is the primary key.
I am currently using the following statement:
INSERT INTO table1 SELECT * FROM table2 WHERE rid NOT IN (SELECT rid FROM table1);
As of now, table1 has 200,000 records and table2 has 20,000. Table1 is going to keep growing (probably to around 2,000,000 records) while table2 will stay at around 20,000. The statement currently takes about 15 minutes to run, and I am concerned that as table1 grows it is going to take far too long. Any suggestions?
This should be more efficient than your current query:
INSERT INTO table1
SELECT *
FROM table2
WHERE NOT EXISTS (
    SELECT 1 FROM table1 WHERE table1.rid = table2.rid
);
insert into table1
select t2.*
from table2 t2
left join table1 t1 on t1.rid = t2.rid
where t1.rid is null;
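Since rid is the primary key and the question is about PostgreSQL, another option, not shown in the answers above, is to let the unique constraint reject duplicates (available from PostgreSQL 9.5 on):
insert into table1
select *
from table2
on conflict (rid) do nothing;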
I need to insert data from several tables, all with the same field names, into one temp table. I know I can use a cursor/loop to do this; I wanted to know whether there is a quicker way. Something like:
select from table1, table2, table3 into #temptable
select * into #temptable from table1
insert into #temptable select * from table2
insert into #temptable select * from table3
The first query creates the temp table as part of the SELECT ... INTO; the rest just keep appending data.
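Assuming the column lists really are identical, the whole thing can also be done in a single statement; SELECT ... INTO combined with UNION ALL creates the temp table and fills it in one go:
select *
into #temptable
from table1
union all
select * from table2
union all
select * from table3;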