Some background info: I have a table named defects which has a column named status_id and another column named date_closed. I want to set date_closed after status_id has been updated.
I already tried to do this using an AFTER UPDATE trigger with the following code:
after update on eba_bt_sw_defects
for each row
declare
l_status number(20) := null;
begin
select status_id into l_status from eba_bt_sw_defects D,eba_bt_status S where D.status_id = S.id;
if l_status in ( select id from eba_bt_status where is_open = 'N' and NVL(is_enhancement,'N')='N') then
:NEW.DATE_CLOSED := LOCALTIMESTAMP ;
end if;
end;
but an error occurred (subquery not allowed in this context / Compilation failed).
Can anyone help?
A couple of things that need fixing in your code:
In a trigger, do not select from the table the trigger is defined on. This will probably raise ORA-04091: table ... is mutating, trigger/function may not see it.
IF l_variable IN (SELECT ...) is not valid Oracle syntax. It raises PLS-00405: subquery not allowed in this context. (See the sketch just below for one workaround.)
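One way around the second point is to select into a local variable first and then test that variable. A rough sketch of just the trigger body, reusing the table and column names from your question (untested, and it assumes a BEFORE row trigger like the full example below, since :new can only be assigned there):
declare
  l_cnt pls_integer;
begin
  -- count matching closed, non-enhancement statuses instead of using IN (subquery)
  select count(*)
    into l_cnt
    from eba_bt_status
   where id = :new.status_id
     and is_open = 'N'
     and nvl(is_enhancement, 'N') = 'N';
  if l_cnt > 0 then
    :new.date_closed := localtimestamp;
  end if;
end;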
I don't have your data so here is a similar example:
drop table todos;
drop table statuses;
-- create tables
create table statuses (
id number generated by default on null as identity
constraint statuses_id_pk primary key,
status varchar2(60 char),
is_open varchar2(1 char) constraint statuses_is_open_ck
check (is_open in ('Y','N'))
)
;
create table todos (
id number generated by default on null as identity
constraint todos_id_pk primary key,
name varchar2(255 char) not null,
close_date timestamp with local time zone,
status_id number
constraint todos_status_id_fk
references statuses on delete cascade
)
;
-- load data
insert into statuses (id, status, is_open ) values (1, 'OPEN', 'Y' );
insert into statuses (id, status, is_open ) values (2, 'COMPLETE', 'N' );
insert into statuses (id, status, is_open ) values (3, 'ON HOLD', 'Y' );
insert into statuses (id, status, is_open ) values (4, 'CANCELLED', 'N' );
commit;
insert into todos (name, close_date, status_id ) values ( 'Y2 Security Review', NULL, 1 );
-- triggers
CREATE OR REPLACE TRIGGER todos_biu
BEFORE INSERT OR UPDATE ON todos
FOR EACH ROW
DECLARE
    l_dummy NUMBER;
BEGIN
    SELECT 1
      INTO l_dummy
      FROM statuses
     WHERE is_open = 'N'
       AND id = :new.status_id;

    :new.close_date := localtimestamp;
EXCEPTION
    WHEN no_data_found THEN
        -- I'm assuming you want close_date set back to NULL if the todo is re-opened.
        :new.close_date := NULL;
END todos_biu;
/
update todos set status_id = 2;
select * from todos;
id    name                  close_date                          status_id
1     Y2 Security Review    11-MAY-22 05.27.04.987117000 PM     2
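If you then move the todo back to an open status, the NO_DATA_FOUND branch clears the date again:
update todos set status_id = 3;  -- 'ON HOLD' has is_open = 'Y'
select id, name, close_date, status_id from todos;
close_date comes back as NULL for that row.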
I have about 100 rows to insert in a table - some of them have already been inserted before and some of them have not.
This is my insert, which works fine if the primary key doesn't exist. I'm going to run it 100 times with different values each time. However, if the primary key exists, it fails and stops the following commands from running.
How do I ignore the failure and keep going, or simply ignore duplicates?
INSERT INTO MY_TABLE
VALUES( 12342, 'fdbvdfb', 'svsdv', '5019 teR','' , 'saa', 'AL',35005 , 'C', 37, '0',368 , 'P', '2023-02-13', '2023-01-01', '2023-01-10', '2023-01-20','' , 'Test', 'Test', 'Test', 'JFK', '', null, 'Y', 'Y', '', '', '', '', '', '',2385 ,2 , '', 'N', '2023-01-16', '2023-01-20', '', NULL,NULL, NULL, NULL, 'Y', 'Test', 'Test', '', 'N', 'Test', '')
This is the error:
SQL0803N One or more values in the INSERT statement, UPDATE statement, or foreign key update caused by a DELETE statement are not valid because the primary key, unique constraint or unique index identified by "XPS01ME1" constrains
INSERT IGNORE INTO throws:
The use of the reserved word "IGNORE" following "" is not valid.
If it helps, I'm using WinSQL 10.0.157.697.
You don't mention which platform and version of Db2 you're on, so I'll point you to the Linux/Unix/Windows (LUW) documentation for the MERGE statement...
Since I don't know your table or column names, I'll just give you an example with dummy names.
merge into MYTABLE as tgt
using (select *
from table( values(1,1,'CMW',5,1)
) tmp ( tblKey, fld1, fld2, fld3, fld4)
) as src
on src.tblKey = tgt.tblKey
when not matched then
insert ( tblKey, fld1, fld2, fld3, fld4)
values ( src.tblKey, src.fld1, src.fld2, src.fld3, src.fld4);
You're basically building a one-row temporary table on the fly:
table( values(1,1,'CMW',5,1) ) tmp ( tblKey, fld1, fld2, fld3, fld4)
Then, if there's no matching record via on src.tblKey = tgt.tblKey, you do an insert.
Note that while you could do this 100 times, it is a much better performing solution to do all 100 rows at a time.
merge into MYTABLE as tgt
using (select *
from table( values (1,1,'CMW1',5,1)
, (2,11,'CMW2',50,11)
, (3,21,'CMW3',8,21)
-- , <more rows here>
) tmp ( tblKey, fld1, fld2, fld3, fld4)
) as src
on src.tblKey = tgt.tblKey
when not matched then
insert ( tblKey, fld1, fld2, fld3, fld4)
values ( src.tblKey, src.fld1, src.fld2, src.fld3, src.fld4);
Optionally, you could create an actual temporary table, insert the 100 rows (preferably in a single insert) and then use MERGE.
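For example, a rough sketch of that last variant (it assumes a user temporary tablespace exists and reuses the dummy table/column names from above):
-- stage all the rows once, then merge them in a single statement
declare global temporary table session.stage_rows (
    tblKey integer,
    fld1   integer,
    fld2   varchar(10),
    fld3   integer,
    fld4   integer
) on commit preserve rows not logged;
insert into session.stage_rows (tblKey, fld1, fld2, fld3, fld4)
values (1, 1, 'CMW1', 5, 1)
     , (2, 11, 'CMW2', 50, 11)
     , (3, 21, 'CMW3', 8, 21);
merge into MYTABLE as tgt
using (select * from session.stage_rows) as src
on src.tblKey = tgt.tblKey
when not matched then
insert ( tblKey, fld1, fld2, fld3, fld4)
values ( src.tblKey, src.fld1, src.fld2, src.fld3, src.fld4);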
You may do it with a compound statement like below:
--#SET TERMINATOR #
CREATE TABLE MY_TABLE (ID INT NOT NULL PRIMARY KEY)#
BEGIN
DECLARE CONTINUE HANDLER FOR SQLSTATE '23505' BEGIN END;
INSERT INTO MY_TABLE (ID) VALUES (1);
END#
BEGIN
DECLARE CONTINUE HANDLER FOR SQLSTATE '23505' BEGIN END;
INSERT INTO MY_TABLE (ID) VALUES (1), (2), (3);
END#
BEGIN
DECLARE CONTINUE HANDLER FOR SQLSTATE '23505' BEGIN END;
INSERT INTO MY_TABLE (ID) VALUES (4);
END#
SELECT * FROM MY_TABLE#
ID
1
4
Only 1 and 4 are present: the multi-row insert of (1), (2), (3) failed as a whole on the duplicate key, and the CONTINUE handler simply let the script carry on.
This is how you ignore an error in Db2 --
note: double-check that this is the right SQLSTATE for your problem and replace it with the one you need if it differs.
DECLARE CONTINUE HANDLER FOR SQLSTATE '23505'
BEGIN -- ignore error for duplicate value
END;
Documented here
https://www.ibm.com/docs/en/db2-for-zos/12?topic=procedure-ignoring-condition-in-sql
I have 2 tables like this
drop table if exists public.table_1;
drop table if exists public.table_2;
CREATE TABLE public.table_1 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
CREATE TABLE public.table_2 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
alter table public.table_1
add constraint my_constraint_1
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
alter table public.table_2
add constraint my_constraint_2
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
Each table contains rows which belong to a user, and the rows of the same user cannot overlap in range. In addition, some rows may be logically deleted, so I added a where condition.
So far it's working without problems, but the two constraints work separately, one per table.
I need to create a constraint that covers both tables, so that a given daterange (of the same user, and not deleted) may appear only once across the two tables.
Can the EXCLUDE notation be extended to work across different tables, or do I need to check it with a trigger? If a trigger is the answer, what is the simplest way to do this? Create a temporary table with the union of the two, add the constraint to it, and check whether it fails?
Starting from @Laurenz Albe's suggestion, this is what I made:
-- #################### SETUP SAMPLE TABLES ####################
drop table if exists public.table_1;
drop table if exists public.table_2;
CREATE TABLE public.table_1 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
CREATE TABLE public.table_2 (
id serial NOT NULL,
user_id bigint not null,
status varchar(255) not null,
date_start date NOT NULL,
date_end date NULL
);
alter table public.table_1
add constraint my_constraint_1
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
alter table public.table_2
add constraint my_constraint_2
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
-- #################### SETUP TRIGGER ####################
create or REPLACE FUNCTION check_date_overlap_trigger_hook()
RETURNS trigger as
$body$
DECLARE
l_table text;
l_sql text;
l_row record;
begin
l_table := TG_ARGV[0];
l_sql := format('
select *
from public.%s as t
where
t.user_id = %s -- Include only records of the same user
and t.status != ''deleted'' -- Include only records that are active
', l_table, new.user_id);
for l_row in execute l_sql
loop
IF daterange(l_row.date_start, COALESCE(l_row.date_end, 'infinity'::date)) && daterange(new.date_start, COALESCE(new.date_end, 'infinity'::date))
THEN
RAISE EXCEPTION 'Date interval is overlapping with another one in table %', l_table
USING HINT = 'You can''t have the same interval across table1 AND table2';
END IF;
end loop;
RETURN NEW;
end
$body$
LANGUAGE plpgsql;
-- #################### INSTALL TRIGGER ####################
create trigger check_date_overlap
BEFORE insert or update
ON public.table_1
FOR EACH row
EXECUTE PROCEDURE check_date_overlap_trigger_hook('table_2');
create trigger check_date_overlap
BEFORE insert or update
ON public.table_2
FOR EACH row
EXECUTE PROCEDURE check_date_overlap_trigger_hook('table_1');
-- #################### INSERT DEMO ROWS ####################
insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-10', '2020-12-20');
insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'deleted', '2020-12-15', '2020-12-25');
insert into public.table_1 (user_id, status, date_start, date_end) values (2, 'active', '2020-12-10', '2020-12-20');
insert into public.table_1 (user_id, status, date_start, date_end) values (2, 'deleted', '2020-12-15', '2020-12-25');
-- This will fail for overlap on the same table
-- insert into public.table_1 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-15', '2020-12-25');
-- This will fail as the user 1 already has an overlapping period on table 1
-- insert into public.table_2 (user_id, status, date_start, date_end) values (1, 'active', '2020-12-15', '2020-12-25');
-- This will fail as the user 1 already has an overlapping period on table 1
insert into public.table_2 (user_id, status, date_start, date_end) values (1, 'deleted', '2020-12-15', '2020-12-25');
update public.table_2 set status = 'active' where id = 1;
select 'table_1' as src_table, * from public.table_1
union
select 'table_2', * from public.table_2
You can probably use a trigger, but triggers are always vulnerable to race conditions (unless you are using SERIALIZABLE isolation).
If your tables really have the same columns, why don't you use a single table (and perhaps add a type column to disambiguate)?
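For what it's worth, a minimal sketch of that single-table idea (the table name and the src column are made up; like your current constraints it needs the btree_gist extension):
CREATE TABLE public.user_ranges (
    id serial NOT NULL,
    src varchar(20) not null,  -- which logical table the row belongs to, e.g. 'table_1' or 'table_2'
    user_id bigint not null,
    status varchar(255) not null,
    date_start date NOT NULL,
    date_end date NULL
);
alter table public.user_ranges
add constraint user_ranges_no_overlap
EXCLUDE USING gist (user_id with =, daterange(date_start, date_end, '[]') WITH &&)
where (status != 'deleted');
A single exclusion constraint then covers what used to be both tables.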
I have a table with records which can reference another row in the same table, so there is a parent-child relationship between rows of the same table.
What I am trying to achieve is to create the same data for another user, so that they can see and manage their own version of this structure through the web UI, where these rows are displayed as a tree.
The problem is that when I bulk insert this data changing only the user_id, I lose the relationship between rows, because the parent_id values will be invalid for the new records; they need to be updated as well with the newly generated ids.
Here is what I tried (it did not work):
Iterate over main_table
copy the static values for each row
do another insert into a temp table holding the old and new ids
update the old parent_ids with the new ids after the loop ends
My attempt at doing this (the last step is not included here):
create or replace function test_x()
returns void as
$BODY$
declare
r RECORD;
userId int8;
rowPK int8;
begin
userId := (select 1)
create table if not exists id_map (old_id int8, new_id int8);
create table if not exists temp_table as select * from main_table;
for r in select * from temp_table
loop
rowPK := insert into main_table(id, user_id, code, description, parent_id)
values(nextval('hibernate_sequence'), userId, r.code, r.description, r.parent_id) returning id;
insert into id_map (old_id, new_id) values (r.id, rowPK);
end loop;
end
$BODY$
language plpgsql;
My PostgreSQL version is 9.6.14.
DDL below for testing.
create table main_table(
id bigserial not null,
user_id int8 not null,
code varchar(3) not null,
description varchar(100) not null,
parent_id int8 null,
constraint mycompkey unique (user_id, code, parent_id),
constraint mypk primary key (id),
constraint myfk foreign key (parent_id) references main_table(id)
);
insert into main_table (id, user_id, code, description, parent_id)
values(0, 0, '01', 'Root row', null);
insert into main_table (id, user_id, code, description, parent_id)
values(1, 0, '001', 'Child row 1', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(2, 0, '002', 'Child row 2', 0);
insert into main_table (id, user_id, code, description, parent_id)
values(3, 0, '002', 'Grand child row 1', 2);
How to write a procedure to accomplish this?
Thanks in advance.
It appears your task is copying all data for a given user to another user while maintaining the hierarchical relationship within the new rows. The following accomplishes that.
It begins by creating a new copy of the existing rows with the new user_id, still carrying the old rows' parent_id. That is used in the next (update) step.
The CTE logically begins with the new rows that have a parent_id and joins them to the old parent row. From there it joins the old parent row to the new parent row using the code and description. At that point we have the new id along with the new parent id, and the update just applies those values. Strictly speaking, the update's CTE only needs to select those two columns, but I've left the intermediate columns in so you can trace through if you wish.
create or replace function copy_user_data_to_user(
source_user_id bigint
, target_user_id bigint
)
returns void
language plpgsql
as $$
begin
insert into main_table ( user_id,code, description, parent_id )
select target_user_id, code, description, parent_id
from main_table
where user_id = source_user_id ;
with n_list as
(select mt.id, mt.code, mt.description, mt.parent_id
, mtp.id p_id,mtp.code p_code,mtp.description p_des
, mtc.id c_id, mtc.code c_code, mtc.description c_description
from main_table mt
join main_table mtp on mtp.id = mt.parent_id
join main_table mtc on ( mtc.user_id = target_user_id
and mtc.code = mtp.code
and mtc.description = mtp.description
)
where mt.parent_id is not null
and mt.user_id = target_user_id
)
update main_table mt
set parent_id = n_list.c_id
from n_list
where mt.id = n_list.id;
return;
end ;
$$;
-- test
select * from copy_user_data_to_user(0,1);
select * from main_table;
CREATE TABLE <table name you want to create> SELECT * FROM myset
The columns of the new table and of myset should match. You can also list specific column names in place of *, but those columns must exist in the new table, otherwise you will get errors.
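For PostgreSQL (which this question is about), the equivalent form is CREATE TABLE ... AS SELECT; a small sketch with a made-up target table name:
-- copies the selected rows into a brand-new table whose columns come from the SELECT list
CREATE TABLE main_table_copy AS
SELECT id, user_id, code, description, parent_id
FROM main_table
WHERE user_id = 0;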
So I'm setting up a schema in which I can enter the transactions of a journal entry independently of each other, while they still depend on each other as a group (mainly to ensure that debits = credits). I set up the tables, function, and trigger. Then, when I try to insert values into the transactions table, I get the error below. I'm doing all of this in pgAdmin 4.
CREATE TABLE transactions (
transactions_id UUID PRIMARY KEY DEFAULT uuid_generate_v1(),
entry_id INTEGER NOT NULL,
post_date DATE NOT NULL,
account_id INTEGER NOT NULL,
contact_id INTEGER NULL,
description TEXT NOT NULL,
reference_id UUID NULL,
document_id UUID NULL,
amount NUMERIC(12,2) NOT NULL
);
CREATE TABLE entries (
id UUID PRIMARY KEY,
test_date DATE NOT NULL,
balance NUMERIC(12,2)
CHECK (balance = 0.00)
);
CREATE OR REPLACE FUNCTION transactions_biut()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
BEGIN
EXECUTE 'INSERT INTO entries (id,test_date,balance)
SELECT
entry_id,
post_date,
SUM(amount) AS ''balance''
FROM
transactions
GROUP BY
entry_id;';
END;
$$;
CREATE TRIGGER transactions_biut
BEFORE INSERT OR UPDATE ON transactions
FOR EACH ROW EXECUTE PROCEDURE transactions_biut();
INSERT INTO transactions (
entry_id,
post_date,
account_id,
description,
amount
)
VALUES
(
'1',
'2019-10-01',
'101',
'MISC DEBIT: PAID FOR FACEBOOK ADS',
-200.00
),
(
'1',
'2019-10-01',
'505',
'MISC DEBIT: PAID FOR FACEBOOK ADS',
200.00
);
After I execute this input, I get the following error:
ERROR: column "id" of relation "entries" does not exist
LINE 1: INSERT INTO entries (id,test_date,balance)
^
QUERY: INSERT INTO entries (id,test_date,balance)
SELECT
entry_id,
post_date,
SUM(amount) AS "balance"
FROM
transactions
GROUP BY
entry_id;
CONTEXT: PL/pgSQL function transactions_biut() line 2 at EXECUTE
SQL state: 42703
There are a few problems here:
You're not returning anything from the trigger function => should probably be return NEW or return OLD since you're not modifying anything
Since you're executing the trigger before each row, it's bound to fail for any entry that doesn't sum to 0 yet => maybe you want a deferred constraint trigger? (See the sketch after this list.)
You're not grouping by post_date, so your select should fail
You've defined entry_id as INTEGER, but entries.id is of type UUID
Also note that this isn't really going to scale (you're summing up all transactions of all days, so this will get slower and slower...)
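A minimal sketch of the deferred constraint trigger idea (function and trigger names are made up; the balance check then runs per row, but only at commit time):
CREATE OR REPLACE FUNCTION check_entry_balance()
RETURNS TRIGGER
LANGUAGE plpgsql
AS $$
DECLARE
    l_balance NUMERIC(12,2);
BEGIN
    SELECT COALESCE(SUM(amount), 0)
    INTO l_balance
    FROM transactions
    WHERE entry_id = NEW.entry_id;
    IF l_balance <> 0 THEN
        RAISE EXCEPTION 'entry % does not balance (sum = %)', NEW.entry_id, l_balance;
    END IF;
    RETURN NULL; -- result is ignored for AFTER triggers
END;
$$;
CREATE CONSTRAINT TRIGGER transactions_balance_check
AFTER INSERT OR UPDATE ON transactions
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW EXECUTE PROCEDURE check_entry_balance();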
@chirs I was able to figure out how to create a functioning solution using statement-level triggers:
CREATE TABLE transactions (
transactions_id UUID PRIMARY KEY DEFAULT uuid_generate_v1(),
entry_id INTEGER NOT NULL,
post_date DATE NOT NULL,
account_id INTEGER NOT NULL,
contact_id INTEGER NULL,
description TEXT NOT NULL,
reference_id UUID NULL,
document_id UUID NULL,
amount NUMERIC(12,2) NOT NULL
);
CREATE TABLE entries (
entry_id INTEGER PRIMARY KEY,
post_date DATE NOT NULL,
balance NUMERIC(12,2),
CHECK (balance = 0.00)
);
CREATE OR REPLACE FUNCTION transactions_entries() RETURNS TRIGGER AS $$
BEGIN
IF (TG_OP = 'DELETE') THEN
INSERT INTO entries
SELECT o.entry_id, o.post_date, SUM(o.amount) FROM old_table o GROUP BY o.entry_id, o.post_date;
ELSIF (TG_OP = 'UPDATE') THEN
INSERT INTO entries
SELECT o.entry_id, n.post_date, SUM(n.amount) FROM new_table n, old_table o GROUP BY o.entry_id, n.post_date;
ELSIF (TG_OP = 'INSERT') THEN
INSERT INTO entries
SELECT n.entry_id,n.post_date, SUM(n.amount) FROM new_table n GROUP BY n.entry_id, n.post_date;
END IF;
RETURN NULL; -- result is ignored since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER transactions_ins
AFTER INSERT ON transactions
REFERENCING NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
CREATE TRIGGER transactions_upd
AFTER UPDATE ON transactions
REFERENCING OLD TABLE AS old_table NEW TABLE AS new_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
CREATE TRIGGER transactions_del
AFTER DELETE ON transactions
REFERENCING OLD TABLE AS old_table
FOR EACH STATEMENT EXECUTE PROCEDURE transactions_entries();
Any thoughts on optimization?
Suppose I have a table like this
create schema test;
CREATE TABLE test.customers (
customer_id serial PRIMARY KEY,
name VARCHAR UNIQUE,
email VARCHAR NOT NULL,
active bool NOT NULL DEFAULT TRUE,
is_active_datetime TIMESTAMP(3) NOT NULL DEFAULT '1900-01-01T00:00:00.000Z'::timestamp(3),
updated_datetime TIMESTAMP(3) NOT NULL DEFAULT '1900-01-01T00:00:00.000Z'::timestamp(3)
);
Now, if I want to update email on conflict (name):
WHERE $tableName.updated_datetime < excluded.updated_datetime
and I want to update is_active_datetime on the same conflict (name), but the condition for that update is that the active flag has changed:
WHERE customer.active != excluded.active
Basically I want to track when the active status changes. Can I do that in a single statement?
Initial insert:
insert INTO test.customers (NAME, email)
VALUES
('IBM', 'contact#ibm.com'),
(
'Microsoft',
'contact#microsoft.com'
),
(
'Intel',
'contact#intel.com'
);
To achieve this, I am trying something like the following:
select * from test.customers;
INSERT INTO customers (name, email)
VALUES
(
'Microsoft',
'hotline#microsoft.com'
)
ON CONFLICT (name)
DO
UPDATE
SET customers.email = EXCLUDED.email
WHERE $tableName.updated_datetime < excluded.updated_datetime
on CONFLICT (name)
do
update
set is_active_datetime = current_timestamp()
WHERE customer.active != excluded.active ;
Is it possible to do this? If so, how?
You could update multiple columns with CASE conditions in a single DO UPDATE clause.
INSERT INTO customers (
name
,email
,updated_datetime
)
VALUES (
'Microsoft'
,'hotline#microsoft.com'
,now()
) ON CONFLICT(name) DO
UPDATE
SET email = CASE
WHEN customers.updated_datetime < excluded.updated_datetime
THEN excluded.email
ELSE customers.email --default when condition not satisfied
END
,is_active_datetime = CASE
WHEN customers.active != excluded.active
THEN current_timestamp
ELSE customers.is_active_datetime
END;