Assume I have the following PostgreSQL table, Tbl1:
Tbl1 has the following attributes (C_ID is the unique ID field, populated by a sequence):
C_ID, Col2, Col3, Col4, C_IDr, Col5, etc.
I want to create a trigger that fires when I INSERT a new record and sets the field C_IDr (the 5th column) equal to C_ID, based on a certain condition (the condition is the easy part). In other words, on INSERT of a new record: C_IDr = C_ID.
How do I go about achieving that?
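A minimal sketch of the usual approach, assuming C_ID is filled by a sequence default and using a placeholder condition on Col2 that you would replace with your real one (PostgreSQL 11+ syntax; on older versions use EXECUTE PROCEDURE). In a BEFORE INSERT trigger the sequence default has already populated NEW.C_ID, so it can simply be copied:
CREATE OR REPLACE FUNCTION set_c_idr()
RETURNS trigger AS $$
BEGIN
    -- placeholder condition; substitute the real one
    IF NEW.Col2 IS NOT NULL THEN
        NEW.C_IDr := NEW.C_ID;  -- C_ID already holds the new sequence value here
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER tbl1_set_c_idr
BEFORE INSERT ON Tbl1
FOR EACH ROW
EXECUTE FUNCTION set_c_idr();
Using a BEFORE trigger here avoids having to UPDATE the row a second time after it is inserted.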
I've been looking for an answer to this question for a few days and can't find anything referencing this specific issue.
First of all, is it possible to use an INSERT INTO ... SELECT statement to copy rows of a table back into the same table, but with new IDs and one of the columns modified?
Example:
INSERT INTO TABLE_A (column1, column2, column3) SELECT column1, 'value to change', column3 FROM TABLE_A WHERE column2 = 'original value';
When I try this on a DB2 database, I'm getting the following error:
INVALID MULTIPLE-ROW INSERT. SQLCODE=-533, SQLSTATE=21501, DRIVER=4.18.60
If I run the same statement but put a specific ID in the SELECT statement, ensuring only one row is returned, then the statement works. But that goes against what I'm trying to do, which is to copy multiple rows from the same table into itself while updating a specific column to a new value.
Thanks everyone!
It works fine for me without error on Db2 11.1.4.0
CREATE TABLE TABLE_A (column1 int, column2 varchar(16), column3 int)
INSERT INTO TABLE_A VALUES (1, 'original value', 3)
INSERT INTO TABLE_A (column1, column2, column3)
SELECT column1, 'value to change', column3 from TABLE_A where column2 = 'original value'
SELECT * FROM TABLE_A
returns
COLUMN1|COLUMN2        |COLUMN3
-------|---------------|-------
      1|original value |      3
      1|value to change|      3
Maybe there is something you are not telling us...
You don't mention your platform and version, but the docs seem pretty clear. From the IBM Db2 for LUW 11.5 documentation:
A multiple-row INSERT into a self-referencing table is invalid.
The first Google result for SQLCODE -533 explains:
An INSERT operation with a subselect attempted to insert multiple rows
into a self-referencing table. The subselect of the INSERT operation
should return no more than one row of data. System action: The INSERT
statement cannot be executed. The contents of the object table are
unchanged. Programmer response: Examine the search condition of the
subselect to make sure that no more than one row of data is selected.
EDIT: You've apparently got a self-referencing constraint on the table. Ex: an EMPLOYEES table with a MANAGER column defined as an FK referencing back to the EMPLOYEES table.
Db2 simply doesn't support what you are trying to do.
You need to use a temporary table to hold the modified rows.
Alternatively, assuming that your table has a primary key, try using the MERGE statement instead.
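For illustration, a rough sketch of the temporary-table route with the TABLE_A columns from the question (untested; a declared global temporary table is not self-referencing, so it can be loaded with a multi-row subselect):
DECLARE GLOBAL TEMPORARY TABLE SESSION.TABLE_A_TMP
    LIKE TABLE_A ON COMMIT PRESERVE ROWS NOT LOGGED;
-- stage the modified copies outside the self-referencing table
INSERT INTO SESSION.TABLE_A_TMP (column1, column2, column3)
    SELECT column1, 'value to change', column3
    FROM TABLE_A
    WHERE column2 = 'original value';
-- copy the staged rows back; if your platform still rejects a multi-row
-- INSERT into the self-referencing table, insert them one row at a time instead
INSERT INTO TABLE_A (column1, column2, column3)
    SELECT column1, column2, column3
    FROM SESSION.TABLE_A_TMP;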
I am trying to create a Db2 trigger with the following functionality...
When a record is inserted / updated in Table1, insert / update that same record in Table2. The catch is that Table1 has 11 columns and Table2 has 5. The 5 columns in Table2 are exactly the same as 5 of the columns in Table1; however, since Table1 has 6 additional columns, how will the trigger know which 5 of the 11 columns I would like to insert / update into Table2?
For example:
Table1
MasterID
SubMasterID
Price
Location
Type
Status
Contact First Name
Contact Last Name
Contact Email
Contact State
Contact Zip
Table2
MasterID
SubMasterID
Price
Location
Type
From the example above, after an insert / update into Table1, I would like to insert / update that same record into Table2, but with only the 5 columns Table2 has (meaning MasterID, SubMasterID, Price, Location, Type).
Is this actually possible with a trigger in Db2?
Or would anyone recommend writing a piece of code instead?
Thanks! Any help is appreciated.
I've tried creating a trigger (untested) like so,
CREATE OR REPLACE TRIGGER "SCHEMA1"."TABLE1_TABLE2_INSERT" AFTER
INSERT
ON
SCHEMA1.TABLE1 REFERENCING NEW AS A
FOR EACH ROW MODE DB2SQL BEGIN ATOMIC
INSERT
INTO
SCHEMA2.TABLE2
(
"MasterID",
"SubMasterID",
"Price",
"Location",
"Type"
)
VALUES
(
a.MasterID,
a.SubMasterID,
a.Price,
a.Location,
a.Type
);
END
However, I am unsure how this trigger can specifically identify which columns from Table1 to insert / update into Table2.
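For the update half, a companion sketch (also untested, and assuming MasterID + SubMasterID uniquely identify the matching Table2 row):
CREATE OR REPLACE TRIGGER SCHEMA1.TABLE1_TABLE2_UPDATE
    AFTER UPDATE ON SCHEMA1.TABLE1
    REFERENCING NEW AS A
    FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
    -- push the new values of the three non-key shared columns
    -- into the Table2 row identified by the two key columns
    UPDATE SCHEMA2.TABLE2 T2
       SET Price = a.Price,
           Location = a.Location,
           Type = a.Type
     WHERE T2.MasterID = a.MasterID
       AND T2.SubMasterID = a.SubMasterID;
END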
I am using SQL Server, where I have designed a view to sum the results of two tables, and I want the output to be a single table with the results. My query, simplified, is something like:
SELECT SUM(col1) AS col1, col2, col3
FROM Table1
GROUP BY col2, col3
This gives me the data I want, but when updating my EDM the view is excluded because "a primary key cannot be inferred".
With a little research I modified the query to spoof an ID column, as follows:
SELECT ROW_NUMBER() OVER (ORDER BY col2) AS ID, SUM(col1) AS col1, col2, col3
FROM Table1
GROUP BY col2, col3
This kind of query gives me a nice increasing set of ids. However, when I attempt to update my model it still excludes my view because it cannot infer a primary key. How can we use views that aggregate records and connect them with Linq-to-Entities?
As already discussed in the comments, you can try adding MAX(id) AS id to the view. Based on your feedback this would become:
SELECT ISNULL(MAX(id), 0) AS ID,
SUM(col1) AS col1,
col2,
col3
FROM Table1
GROUP BY col2, col3
Another option is to try creating an index on the view:
CREATE UNIQUE CLUSTERED INDEX idx_view1 ON dbo.View1(id)
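Note that SQL Server only allows an index on a view under fairly strict rules: the view needs WITH SCHEMABINDING and two-part table names, an aggregate view must also expose COUNT_BIG(*), and the unique clustered index has to be keyed on the GROUP BY columns. A sketch under those assumptions (col1 assumed NOT NULL, since indexed views disallow SUM over a nullable expression):
CREATE VIEW dbo.View1
WITH SCHEMABINDING
AS
SELECT col2,
       col3,
       SUM(col1) AS col1,
       COUNT_BIG(*) AS cnt  -- required in an indexed aggregate view
FROM dbo.Table1
GROUP BY col2, col3;
GO
CREATE UNIQUE CLUSTERED INDEX idx_view1 ON dbo.View1 (col2, col3);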
I use this code when altering the view:
ISNULL(ROW_NUMBER() OVER (ORDER BY ActionDate DESC), -1) AS RowID
I use this clause in views and queries over multiple related tables. ROW_NUMBER never returns NULL, so the -1 is never actually seen; the ISNULL wrapper just makes the column non-nullable in the metadata, which lets Entity Framework infer it as a key.
This is all I needed to add in order to import my view into EF6:
select ISNULL(1, 1) keyField
What I'm trying to do is select various rows from a certain table and insert them right back into the same table. My problem is that I keep running into the whole "duplicate PK" error - is there a way to skip the PK field when executing an INSERT INTO statement in PostgreSQL?
For example:
INSERT INTO reviews SELECT * FROM reviews WHERE rev_id=14;
the rev_id in the preceding SQL is the PK, which I somehow need to skip. (To clarify: I am using * in the SELECT statement because the number of table columns can increase dynamically.)
So finally, is there any way to skip the PK field?
Thanks in advance.
You can insert only the values you want, so your PK will get auto-incremented:
insert into reviews (col1, col2, col3) select col1, col2, col3 from reviews where rev_id=14
Please do not retrieve/insert the id-column
insert into reviews (col0, col1, ...) select col0, col1, ... from reviews where rev_id=14;
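Since the question notes that the column list can grow dynamically, one possible sketch is to build the list at run time from the catalog, excluding the PK (assuming PostgreSQL 9.0+ for DO and string_agg):
DO $$
DECLARE
    cols text;
BEGIN
    -- collect every column of reviews except the PK rev_id
    SELECT string_agg(quote_ident(column_name::text), ', ')
      INTO cols
      FROM information_schema.columns
     WHERE table_name = 'reviews'
       AND column_name <> 'rev_id';

    -- rev_id is omitted from both lists, so the sequence assigns a new one
    EXECUTE format(
        'INSERT INTO reviews (%s) SELECT %s FROM reviews WHERE rev_id = 14',
        cols, cols);
END $$;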
I am inserting only new records that do not exist in a live table from a "dump" table. My issue is that there is an identity column that I don't want to insert into the live table; I want the live table's identity column to take care of incrementing the value. But I am getting an insert error: "Insert Error: Column name or number of supplied values does not match table definition." Is there a way around this, or is the only fix to remove the identity column altogether?
Thanks,
Sam
You need to list all the needed columns in your query, excluding the identity column.
One more reason why you should never use SELECT *.
INSERT liveTable
    (col1, col2, col3)
SELECT col1, col2, col3
FROM dumpTable dt
WHERE NOT EXISTS
(
    SELECT 1
    FROM liveTable lt
    WHERE lt.Id = dt.Id
)
Pro tip: You can also achieve the above by using an OUTER JOIN between the dump and live tables and keeping only the rows WHERE the live table's join column IS NULL (you will probably need to qualify the column names selected with the dump table alias).
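A sketch of that variant, with the same assumed names as the query above:
INSERT liveTable
    (col1, col2, col3)
SELECT dt.col1, dt.col2, dt.col3
FROM dumpTable dt
LEFT OUTER JOIN liveTable lt
    ON lt.Id = dt.Id
WHERE lt.Id IS NULL  -- no match in the live table means the row is new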
I figured out the issue: my live table didn't have the ID field set as an identity; somehow, when I created it, that field wasn't set up correctly.
You can leave that column out of your insert statement, like this:
insert into destination (col2, col3, col4)
select col2, col3, col4 from source
Don't just do:
insert into destination
select * from source