I have already worked in Oracle, and my MERGE statements don't work in Postgres.
The whole scenario is:
I have two schemas in a Postgres DB, with exactly the same tables in both. The first schema has data in its tables and the second schema is empty for now. Data is inserted into the first schema from the source every 8 hours. I want to transfer the data from the first schema to the second one (by using a trigger) and then truncate the tables of the first schema. I wrote a MERGE statement, but I get two errors.
The first error is about null values: it doesn't accept
coalesce(tgt.movie_length, 'NULL') = coalesce(src.movie_length, 'NULL')
If I change it to
coalesce(tgt.movie_length, NULL) = coalesce(src.movie_length, NULL)
it goes further. In Oracle I write 'NULL' for the null value:
NVL(tgt.movie_length, 'NULL') = NVL(src.movie_length, 'NULL')
The second error appears after
when matched then update set
tgt.movie_id = src.movie_id
The error is: column "tgt" of relation "movies" does not exist
Why does it have a problem with tgt?
merge into second_schema.movies as tgt
using (select
movie_id
,movie_name
,movie_length
,movie_lang
,age_certificate
,release_date
,director_id
from public.movies
) src
on (
coalesce (tgt.movie_length,NULL)= coalesce (src.movie_length,NULL)
and coalesce (tgt.movie_lang,NULL)= coalesce (src.movie_lang,NULL)
and coalesce (tgt.age_certificate,NULL)= coalesce (src.age_certificate,NULL)
and coalesce (tgt.release_date ,NULL) = coalesce (src.release_date, NULL)
--and coalesce (tgt.director_id, NULL)= coalesce (src.director_id, NULL)
)
when matched then update set
tgt.movie_id =src.movie_id -------> error: column "tgt" of relation "movies" does not exist
,tgt.movie_name=src.movie_id
,tgt.movie_length=src.movie_length
,tgt.movie_lang =src.movie_lang
,tgt.age_certificate =src.age_certificate
,tgt.release_date =src.release_date
,tgt.director_id =src.director_id
How should the null value be written, and why doesn't it recognise tgt after UPDATE SET?
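For comparison, here is a minimal sketch of how the same statement is usually written in Postgres 15 or later (where MERGE is available); it is untested against the real tables. The string literal 'NULL' is not the null value, which is presumably why the first version is rejected for non-text columns, and coalesce(x, NULL) is simply x, so two NULLs still will not match. Postgres spells the null-safe comparison IS NOT DISTINCT FROM, and the columns on the left-hand side of UPDATE SET must not be qualified with the tgt alias:
merge into second_schema.movies as tgt
using public.movies as src
-- null-safe comparison instead of coalescing to a placeholder value
on tgt.movie_length is not distinct from src.movie_length
   and tgt.movie_lang is not distinct from src.movie_lang
   and tgt.age_certificate is not distinct from src.age_certificate
   and tgt.release_date is not distinct from src.release_date
when matched then update set
   -- target columns are written without the tgt. prefix
   movie_id   = src.movie_id,
   movie_name = src.movie_name
when not matched then insert
   (movie_id, movie_name, movie_length, movie_lang,
    age_certificate, release_date, director_id)
   values
   (src.movie_id, src.movie_name, src.movie_length, src.movie_lang,
    src.age_certificate, src.release_date, src.director_id);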
I have two tables related by id, and I want to insert two related records. The problem is that the id is not known until I make the first insert. Is there a way to write a kind of embedded query that does both inserts correctly? I would like to have it as a single query and to avoid variables, if possible. What I tried is:
insert into "b" ("value", "b_id")
select 'val2', (select insert into "a" ("value") values ('val1') returning id);
I get the error:
ERROR: syntax error at or near "("
You'll need to use a CTE to do that; INSERT statements cannot be arbitrarily nested (unlike SELECT):
WITH a_results AS (
INSERT INTO a (value)
VALUES ('val1')
RETURNING id
)
INSERT INTO b (value, b_id)
SELECT 'val2', id
FROM a_results;
I've been looking for an answer to this question for a few days and can't find anything referencing this specific issue.
First of all, should it work if I use an INSERT INTO SELECT statement to copy rows of a table back into the same table, but with new ids and one of the columns modified?
Example:
INSERT INTO TABLE_A (column1, column2, column3) SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value';
When I try this on a DB2 database, I'm getting the following error:
INVALID MULTIPLE-ROW INSERT. SQLCODE=-533, SQLSTATE=21501, DRIVER=4.18.60
If I run the same statement but put a specific ID in the select statement, ensuring only one row is returned, then the statement works. But that goes against what I'm trying to do, which is to copy multiple rows from the same table into itself while updating a specific column to a new value.
Thanks everyone!
It works fine for me without error on Db2 11.1.4.0
CREATE TABLE TABLE_A( column1 int , column2 varchar(16), column3 int)
INSERT INTO TABLE_A values (1,'original value',3)
INSERT INTO TABLE_A (column1, column2, column3)
SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value'
SELECT * FROM TABLE_A
returns
COLUMN1|COLUMN2        |COLUMN3
-------|---------------|-------
      1|original value |      3
      1|value to change|      3
maybe there is something you are not telling us....
You don't mention your platform and version, but the docs seem pretty clear:
IBM LUW 11.5
A multiple-row INSERT into a self-referencing table is invalid.
First Google result:
An INSERT operation with a subselect attempted to insert multiple rows into a self-referencing table. The subselect of the INSERT operation should return no more than one row of data.
System action: The INSERT statement cannot be executed. The contents of the object table are unchanged.
Programmer response: Examine the search condition of the subselect to make sure that no more than one row of data is selected.
EDIT: You've apparently got a self-referencing constraint on the table. Ex: an EMPLOYEES table with a MANAGER column defined as a FK referencing back to the EMPLOYEES table.
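To confirm that diagnosis, you could check the catalog for a foreign key whose child and parent are the same table. This is a sketch for Db2 LUW (the catalog views are named differently on Db2 for z/OS):
-- List foreign keys defined on TABLE_A that reference TABLE_A itself.
SELECT constname, tabschema, tabname, reftabschema, reftabname
FROM syscat.references
WHERE tabname = 'TABLE_A'
  AND tabname = reftabname
  AND tabschema = reftabschema;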
Db2 simply doesn't support what you are trying to do.
You need to use a temporary table to hold the modified rows.
Optionally, assuming that your table has a primary key, try using the MERGE statement instead.
This syntax is valid for PostgreSQL:
select T from table_name as T
T seems to become a CSV list of the values from all columns in table_name. select T from table_name as T works, as does, for that matter, select table_name from table_name. Where is this syntax documented, and what is the datatype of T?
This syntax is not in SQL Server, and (AFAIK) does not exist in any other SQL variant.
If you create a table, Postgres creates a type with the same name in the background. The table is then essentially a "list of that type".
Postgres also allows you to reference a complete row as a single "record" - a value built from multiple columns. Such records can also be created dynamically through a row constructor.
Each row in the result of a SELECT statement is implicitly assigned a type - if the row comes from a single table, it's the table's type; otherwise it's an anonymous record type.
When you use the table name in a place where a column would be allowed, it references the full row as a single record. If the table is aliased in the select, the type of that record is still the table's type.
So the statement:
select T
from table_name as T;
returns a result with a single column which is a record (of the table's type) containing each column of the table as a field. The default output format of a record is a comma separated list of the values enclosed in parentheses.
Assuming table_name has three columns c1, c2 and c3, the following would essentially do the same thing:
select row(c1, c2, c3)
from table_name;
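As a quick illustration (the values here are made up), a standalone row constructor shows the default record output format mentioned above:
select row(1, 'Casablanca', 102);
-- one column of an anonymous record type, displayed as (1,Casablanca,102)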
Note that a record reference can also be used in comparisons, e.g. finding rows that differ between two tables can be done in the following manner:
select *
from table_one t1
full outer join table_two t2 on t1.id = t2.id
where t1 <> t2;
Instead of stating each column name individually, is there a more efficient way to select all rows which do not have any nulls from a table in a Postgres database?
For example, if there are 20 columns in a table, how to avoid typing out each of those columns individually?
Just check the whole row:
select *
from my_table
where my_table is not null
my_table is not null is only true if all columns in that row are not null.
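A small self-contained example (the table and data here are made up purely for illustration) shows the behaviour:
create table demo (a int, b text);
insert into demo values (1, 'x'), (2, null), (null, 'y');

-- only the row (1, 'x') is returned; any row containing a null is filtered out
select * from demo where demo is not null;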
I am using the following code to insert data via a table-valued parameter in my stored procedure. It works when one record exists in my TVP, but when it has more than one record it raises the following error:
Violation of PRIMARY KEY constraint 'PK_ReceivedCash'. Cannot insert duplicate key in object 'Banking.ReceivedCash'. The statement has been terminated.
insert into banking.receivedcash(ReceivedCashID,Date,Time)
select (select isnull(Max(ReceivedCashID),0)+1 from Banking.ReceivedCash),t.Date,t.Time from #TVPCash as t
Your query is indeed flawed if there is more than one row in #TVPCash. The subquery that retrieves the maximum ReceivedCashID is evaluated as a single constant, so the same value is used for every row of #TVPCash inserted into Banking.ReceivedCash.
I strongly suggest finding an alternative rather than doing it this way: multiple users might run this query at the same time and retrieve the same maximum. If you insist on keeping the query as it is, try running the following:
insert into banking.receivedcash(
    ReceivedCashID,
    Date,
    Time
)
select
    (select isnull(Max(ReceivedCashID), 0) from Banking.ReceivedCash)
        + ROW_NUMBER() OVER (ORDER BY t.Date, t.Time),
    t.Date,
    t.Time
from
    #TVPCash as t
This uses ROW_NUMBER to number the rows in #TVPCash and adds that number to the current maximum ReceivedCashID of Banking.ReceivedCash.
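As for the safer alternative hinted at above, one option (a sketch only, assuming SQL Server 2012 or later and that you are free to change how the key is generated; ReceivedCashSeq is a hypothetical name) is to let a SEQUENCE hand out the ids instead of computing MAX() + 1 by hand, so concurrent sessions cannot draw the same value:
-- One-time setup: pick a START WITH above the current maximum ReceivedCashID
-- if Banking.ReceivedCash already contains rows.
CREATE SEQUENCE Banking.ReceivedCashSeq AS int START WITH 1 INCREMENT BY 1;

-- Each inserted row draws its own value atomically, even across concurrent sessions.
INSERT INTO Banking.ReceivedCash (ReceivedCashID, Date, Time)
SELECT NEXT VALUE FOR Banking.ReceivedCashSeq, t.Date, t.Time
FROM #TVPCash AS t;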