How can I insert the result of a UNION into a table in PostgreSQL?

I have this query that inserts rows into a MySQL database and works perfectly:
insert into test(id,user)
select null,user from table2
union
select null,user from table3
But when I run the same query in PostgreSQL it fails with this error: column "id" is of type integer but expression is of type text. However, each of the queries below works on its own.
This query works in PostgreSQL:
insert into test(id,user)
select null,user from table2
So does this query:
insert into test(id,user)
select null,user from table3
And the UNION by itself also works:
select null,user from table2
union
select null,user from table3

A bare null is not a real value and has no data type of its own. PostgreSQL assumes text by default, which is where the error message comes from. Just cast the value to int in the first SELECT:
insert into test(id, "user")
select null::int, "user" from table2
union
select null, "user" from table3
Or, even better, leave out id completely so that any default defined for the id column is used. It sounds strange to try to insert null into a column named id:
insert into test("user")
select "user" from table2
union
select "user" from table3
Note that user is a reserved keyword and a built-in function, so you will have to quote it to avoid problems. In the long run I recommend finding a different name for that column.
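For completeness, a minimal sketch of what such a table definition could look like (hypothetical, assuming you control the schema): with an identity column, omitting id from the INSERT lets PostgreSQL generate the values automatically.
-- hypothetical definition of test; the identity column supplies id
create table test (
    id     integer generated always as identity primary key,
    "user" text
);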

Related

Liquibase insert select multiple rows postgres

I want to insert multiple rows from table2 into table1. The problem is that some columns of table1 need to be computed, and some need to be selected from table2. For example, something like this:
insert into table1 (id, selectField1, selectField2, constant)
values ((gen_random_uuid()), (select superField1 from table2), (select superField2 from table2), 'test');
So the logic is to select superField1 and superField2 from all the rows in table2 and insert them into table1 with the constant value test and generated UUIDs. superField1 and superField2 should come from the same row of table2 when inserting into table1. How can I achieve something like this using Liquibase?
P.S.: I'm using the <sql> tag since this is easier to implement in SQL than as an XML changeset, but if you know how to do it in XML that would be appreciated too; plain SQL is enough. The DBMS is Postgres.
Don't use the VALUES clause if the source is a SELECT statement:
insert into table1 (id, selectField1, selectField2, constant)
select gen_random_uuid(), superField1, superField2, 'test'
from table2;
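One practical note: gen_random_uuid() is built into PostgreSQL 13 and later; on older versions (an assumption about your server) it comes from the pgcrypto extension, which has to be enabled once per database:
-- only needed on PostgreSQL versions before 13
create extension if not exists pgcrypto;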

How to use the same common table expression in two consecutive psql statements?

I'm trying to perform a pretty basic operation with a few steps:
1. SELECT data from table1.
2. Use the id column of the selected data to remove matching rows from table2.
3. Insert the data selected in step 1 into table2.
I would imagine that this would work:
begin;
with temp as (
select id
from table1
)
delete from table2
where id in (select id from temp);
insert into table2 (id)
select id from temp;
commit;
But I'm getting an error saying that temp is not defined during my insert step.
The only other post I found about this is this one, but it didn't really answer my question.
Thoughts?
From the Postgres documentation:
WITH provides a way to write auxiliary statements for use in a larger
query. These statements, which are often referred to as Common Table
Expressions or CTEs, can be thought of as defining temporary tables
that exist just for one query.
If you need a temp table for more than one query, you can do this instead:
begin;
create temp table temp_table as (
select id
from table1
);
delete from table2
where id in (select id from temp_table);
insert into table2 (id)
select id from temp_table;
commit;
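As a small variation, if the temporary table is only needed inside this transaction, it can be created with on commit drop so it is cleaned up automatically at commit; a sketch with the rest of the transaction unchanged:
begin;
-- the temp table disappears automatically when the transaction ends
create temp table temp_table on commit drop as
select id
from table1;
delete from table2
where id in (select id from temp_table);
insert into table2 (id)
select id from temp_table;
commit;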

postgres: insert rows into a table with multiple records from other joined tables

I am trying to insert multiple records obtained from a join into another table, user_to_property. In the user_to_property table, user_to_property_id is the primary key and not null, but it is not auto-incrementing. So I am trying to add user_to_property_id manually with an increment of 1.
WITH selectedData AS
( -- selection of the data that needs to be inserted
SELECT t2.user_id as userId
FROM property_lines t1
INNER JOIN user t2 ON t1.account_id = t2.account_id
)
INSERT INTO user_to_property (user_to_property_id, user_id, property_id, created_date)
VALUES ((SELECT MAX( user_to_property_id )+1 FROM user_to_property),(SELECT
selectedData.userId
FROM selectedData),3,now());
The above query gives me the below error:
ERROR: more than one row returned by a subquery used as an expression
How can I insert multiple records into a table from a join of other tables, given that user_to_property should contain only one record for the same user_id and property_id?
Typically for INSERT you use either VALUES or SELECT. The structure VALUES (SELECT ...) often just causes more trouble than it is worth, and it is never necessary: you can always select a constant or an expression. In this case, convert the statement to a plain SELECT. To generate the ID, take the current maximum value from the table and add the row_number of each row you are inserting:
insert into user_to_property(user_to_property_id
, user_id
, property_id
, created_date
)
with start_with(current_max_id) as
( select max(user_to_property_id) from user_to_property )
select current_max_id + id_incr, user_id, 3, now()
from (
select t2.user_id, row_number() over() id_incr
from property_lines t1
join users t2 on t1.account_id = t2.account_id
) js
join start_with on true;
A couple of notes:
DO NOT use user as a table name, or for any other object name. It is a documented reserved word in both Postgres and the SQL standard (and has been since Postgres v7.1 and the SQL-92 standard at least).
You really should create another column or change user_to_property_id to an auto-generated column. Using max()+1, or anything based on that idea, is a virtual guarantee that you will generate duplicate keys, much to the amusement of users and developers alike. Consider what happens under MVCC when two users run the query concurrently.
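To make the second note concrete, here is a hedged sketch (assuming PostgreSQL 10+, a plain integer key column, and the renamed users table from the answer above) of switching user_to_property_id to an identity column, after which the key no longer has to be computed in the INSERT at all:
-- sketch only: make the key auto-generated and align the backing
-- sequence with the data already in the table
alter table user_to_property
    alter column user_to_property_id add generated always as identity;
select setval(pg_get_serial_sequence('user_to_property', 'user_to_property_id')::regclass,
              coalesce((select max(user_to_property_id) from user_to_property), 1));
-- the insert can then simply omit the key
insert into user_to_property (user_id, property_id, created_date)
select t2.user_id, 3, now()
from property_lines t1
join users t2 on t1.account_id = t2.account_id;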

Is a subquery able to select columns from the outer query? [duplicate]

I have the following select:
SELECT DISTINCT pl
FROM [dbo].[VendorPriceList] h
WHERE PartNumber IN (SELECT DISTINCT PartNumber
FROM [dbo].InvoiceData
WHERE amount > 10
AND invoiceDate > DATEADD(yyyy, -1, CURRENT_TIMESTAMP)
UNION
SELECT DISTINCT PartNumber
FROM [dbo].VendorDeals)
The issue here is that the table [dbo].VendorDeals has NO column PartNumber; however, no error is raised and the query works with the first part of the union.
Even more, IntelliSense also allows and recognizes PartNumber. This fails only when it is inside a complex statement.
It is pretty obvious that if you qualify column names, the mistake will be evident.
This isn't a bug in SQL Server or in T-SQL parsing; it is working exactly as intended. The problem, or bug, is in your T-SQL, specifically because you haven't qualified your columns. As I don't have the definition of your tables, I'm going to provide sample DDL first:
CREATE TABLE dbo.Table1 (MyColumn varchar(10), OtherColumn int);
CREATE TABLE dbo.Table2 (YourColumn varchar(10), OtherColumn int);
And then an example that is similar to your query:
SELECT MyColumn
FROM dbo.Table1
WHERE MyColumn IN (SELECT MyColumn FROM dbo.Table2);
This, firstly, will parse; it is a valid query. Secondly, provided that dbo.Table2 contains at least one row, every row from dbo.Table1 where MyColumn has a non-NULL value will be returned. Why? Well, let's qualify the columns with their table names, the way SQL Server parses them:
SELECT Table1.MyColumn
FROM dbo.Table1
WHERE Table1.MyColumn IN (SELECT Table1.MyColumn FROM dbo.Table2);
Notice that the column inside the IN is also referencing Table1, not Table2. By default, if a column has its alias omitted in a subquery, it is assumed to reference the table(s) defined in that subquery. If, however, none of the tables in the subquery have a column by that name, then it is assumed to reference a table where that column does exist; in this case Table1.
Let's, instead, take a different example, using the other column in the tables:
SELECT OtherColumn
FROM dbo.Table1
WHERE OtherColumn IN (SELECT OtherColumn FROM dbo.Table2);
This would be parsed as the following:
SELECT Table1.OtherColumn
FROM dbo.Table1
WHERE Table1.OtherColumn IN (SELECT Table2.OtherColumn FROM dbo.Table2);
This is because OtherColumn exists in both tables. As, in the subquery, OtherColumn isn't qualified it is assumed the column wanted is the one in the table defined in the same scope, Table2.
So what is the solution? Alias and qualify your columns:
SELECT T1.MyColumn
FROM dbo.Table1 T1
WHERE T1.MyColumn IN (SELECT T2.MyColumn FROM dbo.Table2 T2);
This will, unsurprisingly, error as Table2 has no column MyColumn.
Personally, I suggest that unless you have only one table referenced in a query, you alias and qualify all your columns. This not only ensures that the wrong column can't be referenced (such as in a subquery) but also means that other readers know exactly which columns are being referenced. It also prevents failures in the future. I have honestly lost count of how many times over the years I have had a process fall over with an "ambiguous column" error because a table's definition was changed and a query referencing that table wasn't properly qualified by the developer...

Redshift move data from one table to another table

insert into table1 select * from table2
table1 has one additional column compared to table2.
How can I move the data for the other columns from table2 to table1 without listing all the column names individually in the INSERT query, for Redshift?
Any ideas?
If you really want to do this, the extra columns of table1 have to come at the end of its column list; then you can append NULLs (or values) after the SELECT * like this:
insert into table1
select *, null, null, null
from table2
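For instance, with hypothetical table layouts where table1 simply appends one extra column to table2's columns, a single NULL is enough:
-- hypothetical layouts, to illustrate that column order is what matters:
--   table2(a int, b varchar(20))
--   table1(a int, b varchar(20), extra_col varchar(20))
insert into table1
select *, null
from table2;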