I am using DB2 LUW version 10.x.
My question is: can we use a common table expression inside a BEFORE INSERT trigger?
CREATE OR REPLACE TRIGGER TRG_TEST
BEFORE INSERT ON TEST
FOR EACH ROW
WHEN(
WITH DS AS (SELECT COUNT(1) FROM TEST)
SELECT 1 FROM DS
)
The CREATE TRIGGER statement documentation topic really should describe in more detail what a search-condition is.
Actually, a search-condition may contain expressions, and you can use a scalar-fullselect as part of such an expression. But a fullselect can't contain a CTE; only a select-statement can.
So you can't use CTEs in a search-condition. If you really need one, wrap your CTE in a function call.
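For illustration, here is a sketch of that workaround (untested; the function name, the SIGNAL body, and the @ terminator are my own stand-ins, not from the original). The FOR statement in SQL PL takes a select-statement, which may begin with a CTE:

-- run with a non-default statement terminator, e.g. db2 -td@
CREATE OR REPLACE FUNCTION TEST_HAS_ROWS()
RETURNS INTEGER
LANGUAGE SQL
READS SQL DATA
BEGIN ATOMIC
  DECLARE V_CNT INTEGER DEFAULT 0;
  -- the FOR statement's select-statement may legally start with a CTE
  FOR R AS
    WITH DS (CNT) AS (SELECT COUNT(1) FROM TEST)
    SELECT CNT FROM DS
  DO
    SET V_CNT = R.CNT;
  END FOR;
  RETURN V_CNT;
END@

CREATE OR REPLACE TRIGGER TRG_TEST
BEFORE INSERT ON TEST
FOR EACH ROW
WHEN (TEST_HAS_ROWS() > 0)
-- placeholder body; note that DB2 may still reject reading the subject
-- table from a trigger on that same table (SQL0746)
SIGNAL SQLSTATE '75000' SET MESSAGE_TEXT = 'TEST already has rows'@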
I would like to execute an update similar to this:
UPDATE dummy
SET customer=subquery.customer,
address=subquery.address,
partn=subquery.partn
FROM (SELECT address_id, customer, address, partn
FROM dummy) AS subquery
WHERE dummy.address_id=subquery.address_id;
Taken from this answer: https://stackoverflow.com/a/6258586/411965
I found this and wondered whether it can be auto-converted to jOOQ's fluent syntax.
What is the equivalent jOOQ query? Specifically, how do I write the outer update so that it references the subquery?
Assuming you're using code generation, do it like this:
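// Assuming the usual static imports from the DSL and the code generator
// (package names below are illustrative, not from the original):
// import static org.jooq.impl.DSL.*;
// import static com.example.generated.Tables.DUMMY;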
Table<?> subquery = table(
select(
DUMMY.ADDRESS_ID,
DUMMY.CUSTOMER,
DUMMY.ADDRESS,
DUMMY.PARTN
)
.from(DUMMY)
).as("subquery");
ctx.update(DUMMY)
.set(DUMMY.CUSTOMER, subquery.field(DUMMY.CUSTOMER))
.set(DUMMY.ADDRESS, subquery.field(DUMMY.ADDRESS))
.set(DUMMY.PARTN, subquery.field(DUMMY.PARTN))
.from(subquery)
.where(DUMMY.ADDRESS_ID.eq(subquery.field(DUMMY.ADDRESS_ID)))
.execute();
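Note: subquery.field(DUMMY.CUSTOMER) looks up the matching column of the derived table by name, which is what lets the outer update statement reference the subquery's columns.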
Of course, the query makes no sense as it is, because you're just going to touch every row without modifying it, but since you copied the SQL from another answer, I'm assuming your subquery's dummy table is really something else.
A trigger seems to be ignoring the 'when condition' in my definition but I'm unsure why. I'm running the following:
create trigger trigger_update_candidate_location
after update on candidates
for each row
when (
OLD.address1 is distinct from NEW.address1
or
OLD.address2 is distinct from NEW.address2
or
OLD.city is distinct from NEW.city
or
OLD.state is distinct from NEW.state
or
OLD.zip is distinct from NEW.zip
or
OLD.country is distinct from NEW.country
)
execute procedure entities.tf_update_candidate_location();
But when I check back in on it, I get the following:
-- auto-generated definition
create trigger trigger_update_candidate_location
after update
on candidates
for each row
execute procedure tf_update_candidate_location();
This is problematic because the procedure I call ends up doing an update on the same table for different columns (lat/lng). Since the WHEN condition is ignored, this creates an infinite loop.
My intention is to watch for address changes and do a lookup on another table to get lat/lng values.
Postgresql version: 10.6
IDE: DataGrip 2018.1.3
How exactly do you create the trigger and "check back"? With DataGrip?
The WHEN condition was added with Postgres 9.0. Some old (or poor) clients may be outdated. To be sure, check in psql with:
SELECT pg_get_triggerdef(oid, true)
FROM pg_trigger
WHERE tgrelid = 'candidates'::regclass -- schema-qualify name to be sure
AND NOT tgisinternal;
Any actual WHEN qualification is stored in internal format in pg_trigger.tgqual, btw. Details in the manual here.
Also what's your current search_path and what's the schema of table candidates?
It stands out that the table candidates is unqualified, while the trigger function entities.tf_update_candidate_location() has a schema-qualification ... You are not confusing tables of the same name in different DB schemas, are you?
Aside, you can simplify with this shorter, equivalent syntax:
create trigger trigger_update_candidate_location
after update on candidates -- schema-qualify??
for each row
when (
(OLD.address1, OLD.address2, OLD.city, OLD.state, OLD.zip, OLD.country)
IS DISTINCT FROM
(NEW.address1, NEW.address2, NEW.city, NEW.state, NEW.zip, NEW.country)
)
execute procedure entities.tf_update_candidate_location();
Unfortunately, that's an issue in DataGrip. Please follow the ticket to be notified when it's fixed.
https://youtrack.jetbrains.com/issue/DBE-7247
Using Postgres 9.3:
I am attempting to automatically populate a table when an insert is performed on another table. This seems like a good use for rules, but after adding the rule to the first table, I am no longer able to perform inserts into the second table using the writable CTE. Here is an example:
CREATE TABLE foo (
id INT PRIMARY KEY
);
CREATE TABLE bar (
id INT PRIMARY KEY REFERENCES foo
);
CREATE RULE insertFoo AS ON INSERT TO foo DO INSERT INTO bar VALUES (NEW.id);
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
INSERT INTO foo SELECT * FROM a
When this is run, I get the error:
ERROR: WITH cannot be used in a query that is rewritten by rules into multiple queries
I have searched for that error string, but am only able to find links to the source code. I know that I can perform the above using row-level triggers instead, but it seems like I should be able to do this at the statement level. Why can I not use the writable CTE, when queries like this can (in this case) be easily re-written as:
INSERT INTO foo SELECT * FROM (VALUES (1), (2)) a
Does anyone know of another way to accomplish what I am attempting, other than 1) using rules, which prevents the use of WITH queries, or 2) using row-level triggers?
TL;DR: use triggers, not rules.
Generally speaking, prefer triggers over rules, unless rules are absolutely necessary. (Which, in practice, they never are.)
Using rules introduces heaps of problems which will needlessly complicate your life down the road. You've run into one here. Another (major) one is, for instance, that the number of affected rows will correspond to that of the very last query -- if you're relying on FOUND somewhere and your query is incorrectly reporting that no rows were affected by a query, you'll be in for painful bugs.
Moreover, there's occasional talk of deprecating Postgres rules outright:
http://postgresql.nabble.com/Deprecating-RULES-td5727689.html
Like the other answer, I definitely recommend using INSTEAD OF triggers rather than RULEs.
However, if for some reason you don't want to change existing VIEW RULEs and still want to use WITH, you can do so by wrapping the insert in a function:
create function insert_foo(int) returns void as $$
insert into foo values ($1)
$$ language sql;
WITH a AS (SELECT * FROM (VALUES (1), (2)) b)
SELECT insert_foo(a.column1) from a;
This could be useful when using some legacy db through some system that wraps statements with CTEs.
I need to execute statements conditionally in DB2. I searched the DB2 documentation and thought IF..THEN..ELSEIF would serve the purpose. But can't I use IF without a procedure?
My DB2 version is 9.7.6.
My requirement is this: I have a table, say Group(name, gp_id), and another table Group_attr(gp_id, value, elem_id). We can ignore the elem_id for now.
-> I need to check whether Group already contains a specific name.
-> If it does, nothing needs to be done.
-> If it doesn't, I need to add the name to Group. Then I need to insert corresponding rows into Group_attr. Assume the value and elem_id are static.
You can use an anonymous block for PL/SQL or a compound statement for SQL PL code.
BEGIN ATOMIC
FOR ROW AS
SELECT PK, C1, DISCRETIZE(C1) AS D FROM SOURCE
DO
IF ROW.D IS NULL THEN
INSERT INTO EXCEPT VALUES(ROW.PK, ROW.C1);
ELSE
INSERT INTO TARGET VALUES(ROW.PK, ROW.D);
END IF;
END FOR;
END
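Note: to run a compound statement like this from the command line processor, use a non-default statement terminator (for example db2 -td@ -vf script.sql), because the statement itself contains semicolons.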
Compound statements:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0004240.html
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.apdv.sqlpl.doc/doc/c0053781.html
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.sql.ref.doc/doc/r0004239.html
Anonymous blocks:
http://publib.boulder.ibm.com/infocenter/db2luw/v9r7/topic/com.ibm.db2.luw.apdv.plsql.doc/doc/c0053848.html
http://www.ibm.com/developerworks/data/library/techarticle/dm-0908anonymousblocks/
Many of these features have been available since version 9.7.
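Applied to the requirement in the question, a compound statement might look like this (an untested sketch; it assumes gp_id is generated automatically, and note that GROUP is a reserved word, so the real table name probably differs):

BEGIN ATOMIC
  IF NOT EXISTS (SELECT 1 FROM Group WHERE name = 'Name1') THEN
    INSERT INTO Group (name) VALUES ('Name1');
    INSERT INTO Group_attr (gp_id, value, elem_id)
      SELECT gp_id, 'value1', 'elem1'
      FROM Group
      WHERE name = 'Name1';
  END IF;
END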
I found a solution for the conditional insert. For the scenario I mentioned, it can look like this:
Insert into Group (name)
  select 'Name1' from sysibm.sysdummy1
  where (select count(*) from Group where name = 'Name1') = 0;

Insert into Group_attr (gp_id, value, elem_id)
  select g.gp_id, 'value1', 'elem1'
  from Group g
  where (select count(*) from Group_attr ga1 where ga1.gp_id = g.gp_id) = 0;
-- In my case, Group_attr is sure to contain some data already if the group exists
Some SQL servers have a feature where INSERT is skipped if it would violate a primary/unique key constraint. For instance, MySQL has INSERT IGNORE.
What's the best way to emulate INSERT IGNORE and ON DUPLICATE KEY UPDATE with PostgreSQL?
With PostgreSQL 9.5, this is now native functionality (like MySQL has had for several years):
INSERT ... ON CONFLICT DO NOTHING/UPDATE ("UPSERT")
9.5 brings support for "UPSERT" operations.
INSERT is extended to accept an ON CONFLICT DO UPDATE/IGNORE clause. This clause specifies an alternative action to take in the event of a would-be duplicate violation.
...
A further example of the new syntax:
INSERT INTO user_logins (username, logins)
VALUES ('Naomi',1),('James',1)
ON CONFLICT (username)
DO UPDATE SET logins = user_logins.logins + EXCLUDED.logins;
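(EXCLUDED refers to the row that was proposed for insertion, so each existing user's login count is incremented by the newly supplied value.)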
Edit: in case you missed warren's answer, PG9.5 now has this natively; time to upgrade!
Building on Bill Karwin's answer, to spell out what a rule-based approach would look like (transferring from another schema in the same DB, and with a multi-column primary key):
CREATE RULE "my_table_on_duplicate_ignore" AS ON INSERT TO "my_table"
WHERE EXISTS(SELECT 1 FROM my_table
WHERE (pk_col_1, pk_col_2)=(NEW.pk_col_1, NEW.pk_col_2))
DO INSTEAD NOTHING;
INSERT INTO my_table SELECT * FROM another_schema.my_table WHERE some_cond;
DROP RULE "my_table_on_duplicate_ignore" ON "my_table";
Note: The rule applies to all INSERT operations until the rule is dropped, so not quite ad hoc.
For those of you that have Postgres 9.5 or higher, the new ON CONFLICT DO NOTHING syntax should work:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT field_one, field_two, field_three
FROM source_table
ON CONFLICT (field_one) DO NOTHING;
For those of us who have an earlier version, this left join will work instead:
INSERT INTO target_table (field_one, field_two, field_three )
SELECT source_table.field_one, source_table.field_two, source_table.field_three
FROM source_table
LEFT JOIN target_table ON source_table.field_one = target_table.field_one
WHERE target_table.field_one IS NULL;
Try to do an UPDATE first. If it doesn't modify any rows, the row didn't exist, so do an INSERT. Obviously, you do this inside a transaction.
You can of course wrap this in a function if you don't want to put the extra code on the client side. You also need a loop to handle the very rare race condition inherent in that approach.
There's an example of this in the documentation: http://www.postgresql.org/docs/9.3/static/plpgsql-control-structures.html, example 40-2 right at the bottom.
That's usually the easiest way. You can do some magic with rules, but it's likely going to be a lot messier. I'd recommend the wrap-in-function approach over that any day.
This works for single-row (or few-row) values. If you're dealing with large numbers of rows, for example from a subquery, you're best off splitting it into two queries, one for the INSERT and one for the UPDATE (as an appropriate join/subselect, of course; no need to write your main filter twice).
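For reference, the documentation's example (a merge_db function for a table db with columns a and b) looks like this:

CREATE FUNCTION merge_db(key INT, data TEXT) RETURNS VOID AS
$$
BEGIN
    LOOP
        -- first try to update the key
        UPDATE db SET b = data WHERE a = key;
        IF found THEN
            RETURN;
        END IF;
        -- not there, so try to insert the key;
        -- if someone else inserts the same key concurrently,
        -- we could get a unique-key failure
        BEGIN
            INSERT INTO db(a, b) VALUES (key, data);
            RETURN;
        EXCEPTION WHEN unique_violation THEN
            -- do nothing, and loop to try the UPDATE again
        END;
    END LOOP;
END;
$$
LANGUAGE plpgsql;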
To get the INSERT IGNORE logic, you can do something like below. I found that simply inserting from a SELECT statement of literal values worked best; you can then mask out the duplicate keys with a NOT EXISTS clause. To get the ON DUPLICATE KEY UPDATE logic, I suspect a pl/pgsql loop would be necessary.
INSERT INTO manager.vin_manufacturer
(SELECT * FROM( VALUES
('935',' Citroën Brazil','Citroën'),
('ABC', 'Toyota', 'Toyota'),
('ZOM',' OM','OM')
) as tmp (vin_manufacturer_id, manufacturer_desc, make_desc)
WHERE NOT EXISTS (
--ignore anything that has already been inserted
SELECT 1 FROM manager.vin_manufacturer m where m.vin_manufacturer_id = tmp.vin_manufacturer_id)
)
INSERT INTO mytable(col1,col2)
SELECT 'val1','val2'
WHERE NOT EXISTS (SELECT 1 FROM mytable WHERE col1='val1')
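Note that this check is not atomic: two concurrent sessions can both pass the NOT EXISTS test, so you still want a unique constraint as a backstop (or ON CONFLICT DO NOTHING on 9.5+).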
As #hanmari mentioned in his comment. when inserting into a postgres tables, the on conflict (..) do nothing is the best code to use for not inserting duplicate data.:
query = "INSERT INTO db_table_name(column_name)
VALUES(%s) ON CONFLICT (column_name) DO NOTHING;"
The ON CONFLICT line of code will allow the insert statement to still insert rows of data. The query and values code is an example of inserted date from a Excel into a postgres db table.
I have constraints added to a postgres table I use to make sure the ID field is unique. Instead of running a delete on rows of data that is the same, I add a line of sql code that renumbers the ID column starting at 1.
Example:
# the sequence behind a serial column is usually named <table>_<column>_seq
q = 'ALTER SEQUENCE my_table_id_column_seq RESTART WITH 1'
If my data has an ID field, I do not use it as the primary/serial ID; instead, I create a separate ID column and set it to serial.
I hope this information is helpful to everyone.
*I have no college degree in software development/coding. Everything I know in coding, I study on my own.
Looks like PostgreSQL supports a schema object called a rule.
http://www.postgresql.org/docs/current/static/rules-update.html
You could create a rule ON INSERT for a given table, making it do NOTHING if a row exists with the given primary key value, or else making it do an UPDATE instead of the INSERT if a row exists with the given primary key value.
I haven't tried this myself, so I can't speak from experience or offer an example.
This solution avoids using rules:
DO $$
BEGIN
    INSERT INTO tableA (unique_column, c2, c3) VALUES (1, 2, 3);
EXCEPTION
    WHEN unique_violation THEN
        UPDATE tableA SET c2 = 2, c3 = 3 WHERE unique_column = 1;
END
$$;
but it has a performance drawback (see PostgreSQL.org):
A block containing an EXCEPTION clause is significantly more expensive
to enter and exit than a block without one. Therefore, don't use
EXCEPTION without need.
For bulk loads, you can always delete the rows before the insert. Deleting a row that doesn't exist doesn't cause an error, so it's safely skipped.
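For example (a sketch; my_table and staging are placeholder names), in one transaction:

BEGIN;
DELETE FROM my_table WHERE id IN (SELECT id FROM staging);
INSERT INTO my_table SELECT * FROM staging;
COMMIT;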
For data import scripts, to replace "IF NOT EXISTS", in a way, there's a slightly awkward formulation that nevertheless works:
DO
$do$
BEGIN
PERFORM id
FROM whatever_table;
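-- add a WHERE clause to the PERFORM query to test for the specific row(s);
-- as written, this only checks whether the table contains any rows at all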
IF NOT FOUND THEN
-- INSERT stuff
END IF;
END
$do$;