DB2 trigger syntax

I am learning to write triggers in DB2. I need one and only one record in MYTABLE to have ENABLED_IND = 'Y'.
CREATE TABLE MYTABLE(
OBJECTID BIGINT GENERATED BY DEFAULT AS IDENTITY (START WITH 1, INCREMENT BY 1, NO CACHE, NO CYCLE),
MYDATA VARCHAR(8) NOT NULL,
....
ENABLED_IND CHAR(1) NOT NULL
) IN "TS_PROF_D" INDEX IN "TS_PROF_IX" ;
Whenever I insert into MYTABLE I need the ENABLED_IND of all existing rows to be set to 'N'. I came up with the following; an Oracle EE DBA says it should work, but it does not:
CREATE TRIGGER MYSCHEMA.MYTRIGGER BEFORE INSERT ON MYSCHEMA.MYTABLE
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
UPDATE MYSCHEMA.MYTABLE SET ENABLED_IND = 'N';
END
All DB2 tells me is "illegal character". We don't see an illegal character. The web examples of DB2 triggers are very confusing.
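A hedged sketch of a version that should compile on DB2 for LUW (not tested against your system): a BEFORE trigger cannot contain an UPDATE statement, so it is rewritten as an AFTER INSERT trigger, and a WHERE clause keeps the freshly inserted row enabled. Also note that the command line processor treats the semicolon inside BEGIN ATOMIC as a statement terminator, which commonly produces cryptic parse errors; run the script with an alternate terminator, e.g. db2 -td@ -f trigger.sql:
CREATE TRIGGER MYSCHEMA.MYTRIGGER
AFTER INSERT ON MYSCHEMA.MYTABLE
REFERENCING NEW AS N
FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  -- disable every row except the one just inserted
  UPDATE MYSCHEMA.MYTABLE
    SET ENABLED_IND = 'N'
    WHERE OBJECTID <> N.OBJECTID;
END@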

Related

Flyway fail on error in Transact-SQL migration

When using Flyway in combination with Microsoft SQL Server, we are observing the issue described in this question.
Basically, a migration script like the one below does not roll back the successful GO-delimited batches when another part fails:
BEGIN TRANSACTION
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
GO
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
GO
COMMIT TRANSACTION
In the above example, the table t2 is being created even though the preceding ALTER TABLE statement fails.
On the linked question, the following approaches (outside of the Flyway context) were suggested:
- "A multi-batch script should have a single error handler scope that rolls back the transaction on error, and commits at the end. In T-SQL you can do this with dynamic SQL." However, dynamic SQL makes for hard-to-read scripts and would be very inconvenient.
- "With SQLCMD you can use the -b option to abort the script on error." Is this available in Flyway?
- "Or roll your own script runner." Is this maybe the case in Flyway? Is there a Flyway-specific configuration to enable proper failing on errors?
EDIT: alternative example
Given: simple database
BEGIN TRANSACTION
CREATE TABLE [a] (
[a_id] [nvarchar](36) NOT NULL,
[a_name] [nvarchar](100) NOT NULL
);
CREATE TABLE [b] (
[b_id] [nvarchar](36) NOT NULL,
[a_name] [nvarchar](100) NOT NULL
);
INSERT INTO [a] VALUES (NEWID(), 'name-1');
INSERT INTO [b] VALUES (NEWID(), 'name-1'), (NEWID(), 'name-2');
COMMIT TRANSACTION
Migration Script 1 (failing, without GO)
BEGIN TRANSACTION
ALTER TABLE [b] ADD [a_id] [nvarchar](36) NULL;
UPDATE [b] SET [a_id] = [a].[a_id] FROM [a] WHERE [a].[a_name] = [b].[a_name];
ALTER TABLE [b] ALTER COLUMN [a_id] [nvarchar](36) NOT NULL;
ALTER TABLE [b] DROP COLUMN [a_name];
COMMIT TRANSACTION
This results in the error message "Invalid column name 'a_id'." for the UPDATE statement.
Possible solution: introduce GO between statements
Migration Script 2 (with GO: working for "happy case" but only partial rollback when there's an error)
BEGIN TRANSACTION
SET XACT_ABORT ON
GO
ALTER TABLE [b] ADD [a_id] [nvarchar](36) NULL;
GO
UPDATE [b] SET [a_id] = [a].[a_id] FROM [a] WHERE [a].[a_name] = [b].[a_name];
GO
ALTER TABLE [b] ALTER COLUMN [a_id] [nvarchar](36) NOT NULL;
GO
ALTER TABLE [b] DROP COLUMN [a_name];
GO
COMMIT TRANSACTION
This performs the desired migration as long as all values in table [b] have a matching entry in table [a].
In the given example, that's not the case. I.e. we get two errors:
expected: Cannot insert the value NULL into column 'a_id', table 'test.dbo.b'; column does not allow nulls. UPDATE fails.
unexpected: The COMMIT TRANSACTION request has no corresponding BEGIN TRANSACTION.
Horrifyingly: the last ALTER TABLE [b] DROP COLUMN [a_name] statement was actually executed, committed and not rolled back. I.e. one cannot fix this up afterwards as the linking column is lost.
This behaviour is actually independent of flyway and can be reproduced directly via SSMS.
The problem is fundamental to the GO command. It's not a part of the T-SQL language. It's a construct in use within SQL Server Management Studio, sqlcmd, and Azure Data Studio. Flyway is simply passing the commands on to your SQL Server instance through the JDBC connection. It's not going to be dealing with those GO commands like the Microsoft tools do, separating them into independent batches. That's why you won't see individual rollbacks on errors, but instead see a total rollback.
The only way to get around this that I'm aware of would be to break apart the batches into individual migration scripts. Name them in such a way that it's clear, V3.1.1, V3.1.2, etc., so that everything is under the V3.1* version (or something similar). Then, each individual migration will pass or fail instead of all going or all failing. For example:
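The question's alternative migration could be split along its GO boundaries into files like these (hypothetical names, following Flyway's V<version>__<description>.sql naming convention):
V3.1.1__add_a_id_column.sql
V3.1.2__backfill_a_id.sql
V3.1.3__make_a_id_not_null.sql
V3.1.4__drop_a_name_column.sql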
Edited 20201102 -- learned a lot more about this and largely rewrote it! So far I have been testing in SSMS; I plan to test in Flyway as well and write up a blog post. For brevity in migrations, I believe you could put the @@TRANCOUNT check / error handling into a stored procedure if you prefer; that's also on my list to test.
Ingredients in the fix
For error handling and transaction management in SQL Server, there are three things which may be of great help:
Set XACT_ABORT to ON (it is off by default). This setting "specifies whether SQL Server automatically rolls back the current transaction when a Transact-SQL statement raises a runtime error" docs
Check the @@TRANCOUNT state after each batch delimiter you send, and use this to "bail out" with RAISERROR / RETURN if needed
Try/catch/throw -- I'm using RAISERROR in these examples; Microsoft recommends you use THROW if it's available to you (it was introduced in SQL Server 2012) - docs. A minimal sketch of that pattern follows this list.
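A minimal sketch of the TRY/CATCH/THROW variant, assuming SQL Server 2012+ (the migration statements are placeholders, and note that TRY/CATCH cannot span GO batch delimiters, so this wraps a single batch):
BEGIN TRY
    BEGIN TRANSACTION;
    SET XACT_ABORT ON;
    -- ... migration statements go here ...
    COMMIT TRANSACTION;
END TRY
BEGIN CATCH
    IF @@TRANCOUNT > 0
        ROLLBACK TRANSACTION;
    THROW; -- re-raise the original error so the caller (e.g. Flyway) sees the failure
END CATCH;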
Working on the original sample code
Two changes:
Set XACT_ABORT ON;
Perform a check on @@TRANCOUNT after each batch delimiter is sent to see if the next batch should be run. The key here is that if an error has occurred, @@TRANCOUNT will be 0. If an error hasn't occurred, it will be 1. (Note: if you explicitly open multiple "nested" transactions you'd need to adjust the trancount checks, as it can be higher than 1.)
In this case the @@TRANCOUNT check clause will work even if XACT_ABORT is off, but I believe you want it on for other cases. (I need to read up more on this, but I haven't come across a downside to having it ON yet.)
BEGIN TRANSACTION;
SET XACT_ABORT ON;
GO
-- Create a table with two nullable columns
CREATE TABLE [dbo].[t1](
[id] [nvarchar](36) NULL,
[name] [nvarchar](36) NULL
)
-- add one row having one NULL column
INSERT INTO [dbo].[t1] VALUES(NEWID(), NULL)
-- set one column as NOT NULLABLE
-- this fails because of the previous insert
ALTER TABLE [dbo].[t1] ALTER COLUMN [name] [nvarchar](36) NOT NULL
GO
IF @@TRANCOUNT <> 1
BEGIN
DECLARE @ErrorMessage AS NVARCHAR(4000);
SET @ErrorMessage
= N'Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
+ N'). Exactly 1 transaction should be open at this point. Rolling-back any pending transactions.';
RAISERROR(@ErrorMessage, 16, 127);
RETURN;
END;
-- create a table as next action, so that we can test whether the rollback happened properly
CREATE TABLE [dbo].[t2](
[id] [nvarchar](36) NOT NULL
)
GO
COMMIT TRANSACTION;
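As the edit note above mentions, the repeated @@TRANCOUNT check could be moved into a stored procedure to keep migrations short. A hypothetical, untested sketch (the procedure name and message are made up; THROW is used because, unlike RAISERROR, it terminates the calling batch):
CREATE PROCEDURE dbo.AssertSingleOpenTransaction
    @CheckName NVARCHAR(100)
AS
BEGIN
    IF @@TRANCOUNT <> 1
    BEGIN
        DECLARE @Msg NVARCHAR(2048)
            = @CheckName + N': transaction in an invalid or closed state (@@TRANCOUNT='
            + CAST(@@TRANCOUNT AS NVARCHAR(10)) + N'). Exactly 1 transaction should be open here.';
        -- THROW aborts the batch, mirroring the inline RAISERROR + RETURN pattern
        THROW 50000, @Msg, 1;
    END;
END;
Each GO-delimited batch would then start with EXEC dbo.AssertSingleOpenTransaction @CheckName = N'Check 1'; instead of the inline IF block.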
Alternative example
I added a bit of code at the top to be able to reset the test database. I repeated the pattern of using XACT_ABORT ON and checking @@TRANCOUNT after each batch terminator (GO) is sent.
/* Reset database */
USE master;
GO
IF DB_ID('transactionlearning') IS NOT NULL
BEGIN
ALTER DATABASE transactionlearning
SET SINGLE_USER
WITH ROLLBACK IMMEDIATE;
DROP DATABASE transactionlearning;
END;
GO
CREATE DATABASE transactionlearning;
GO
/* set up simple schema */
USE transactionlearning;
GO
BEGIN TRANSACTION;
CREATE TABLE [a]
(
[a_id] [NVARCHAR](36) NOT NULL,
[a_name] [NVARCHAR](100) NOT NULL
);
CREATE TABLE [b]
(
[b_id] [NVARCHAR](36) NOT NULL,
[a_name] [NVARCHAR](100) NOT NULL
);
INSERT INTO [a]
VALUES
(NEWID(), 'name-1');
INSERT INTO [b]
VALUES
(NEWID(), 'name-1'),
(NEWID(), 'name-2');
COMMIT TRANSACTION;
GO
/*******************************************************/
/* Test transaction error handling starts here */
/*******************************************************/
USE transactionlearning;
GO
BEGIN TRANSACTION;
SET XACT_ABORT ON;
GO
IF @@TRANCOUNT <> 1
BEGIN
DECLARE @ErrorMessage AS NVARCHAR(4000);
SET @ErrorMessage
= N'Check 1: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
+ N'). Exactly 1 transaction should be open at this point. Rolling-back any pending transactions.';
RAISERROR(@ErrorMessage, 16, 127);
RETURN;
END;
ALTER TABLE [b] ADD [a_id] [NVARCHAR](36) NULL;
GO
IF @@TRANCOUNT <> 1
BEGIN
DECLARE @ErrorMessage AS NVARCHAR(4000);
SET @ErrorMessage
= N'Check 2: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
+ N'). Exactly 1 transaction should be open at this point. Rolling-back any pending transactions.';
RAISERROR(@ErrorMessage, 16, 127);
RETURN;
END;
UPDATE [b]
SET [a_id] = [a].[a_id]
FROM [a]
WHERE [a].[a_name] = [b].[a_name];
GO
IF @@TRANCOUNT <> 1
BEGIN
DECLARE @ErrorMessage AS NVARCHAR(4000);
SET @ErrorMessage
= N'Check 3: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
+ N'). Exactly 1 transaction should be open at this point. Rolling-back any pending transactions.';
RAISERROR(@ErrorMessage, 16, 127);
RETURN;
END;
ALTER TABLE [b] ALTER COLUMN [a_id] [NVARCHAR](36) NOT NULL;
GO
IF @@TRANCOUNT <> 1
BEGIN
DECLARE @ErrorMessage AS NVARCHAR(4000);
SET @ErrorMessage
= N'Check 4: Transaction in an invalid or closed state (@@TRANCOUNT=' + CAST(@@TRANCOUNT AS NVARCHAR(10))
+ N'). Exactly 1 transaction should be open at this point. Rolling-back any pending transactions.';
RAISERROR(@ErrorMessage, 16, 127);
RETURN;
END;
ALTER TABLE [b] DROP COLUMN [a_name];
GO
COMMIT TRANSACTION;
My fave references on this topic
There is a wonderful free resource online which digs into error and transaction handling in great detail. It is written and maintained by Erland Sommarskog:
Part One – Jumpstart Error Handling
Part Two – Commands and Mechanisms
Part Three – Implementation
One common question is why XACT_ABORT is still needed, or whether it has been entirely replaced by TRY/CATCH. Unfortunately it has not been entirely replaced; Erland has some examples of this in his paper, which is a good place to start on that.

Sequence in postgresql

I am converting the SQL Server procedure and table below, which store and generate a sequence, to PostgreSQL.
Can anyone show how to do this in Postgres (via a table and a function like this one), and not via a sequence or nextval/currval?
Sequence table
IF NOT EXISTS (SELECT name FROM sys.tables WHERE name = 'testtable')
    CREATE TABLE dbo.testtable(Sequence int NOT NULL )
go
IF NOT EXISTS (SELECT * FROM testtable)
    INSERT INTO testtable VALUES (-2147483648) 
go 
Sequence generating proc
CREATE PROCEDURE test_proc
AS
SET NOCOUNT ON
DECLARE @iReturn int
BEGIN TRANSACTION
SELECT @iReturn = Sequence FROM schema.test (TABLOCKX) -- set exclusive table lock 
UPDATE schema.test SET Sequence = ( Sequence + 1 )
COMMIT TRANSACTION
SELECT @iReturn
RETURN @iReturn 
go 
grant execute on schema.test to public 
go
Disclaimer: using a sequence is the only scalable and efficient way to generate unique numbers.
Having said that, it is possible to implement your own sequence generator. The only situation where this makes any sense is if you are required to generate gapless numbers. If you do not have such a requirement, use a sequence.
You need one table that stores the values of the sequences. I usually use one table with a row for each "generator", which avoids costly table locks.
create table seq_generator
(
entity varchar(30) not null primary key,
seq_value integer default 0 not null
);
insert into seq_generator (entity) values ('testsequence');
Then create a function to increment the sequence value:
create or replace function next_value(p_entity varchar)
returns integer
as
$$
update seq_generator
set seq_value = seq_value + 1
where entity = lower(p_entity)
returning seq_value;
$$
language sql;
To obtain the next sequence value, e.g. inside an insert:
insert into some_table
(id, ...)
values
(next_value('testsequence'), ...);
Or make it a default value:
create table some_table
(
id integer not null primary key default next_value('testsequence'),
...
);
The UPDATE increments and locks the row in a single statement returning the new value for the sequence. If the calling transaction commits, the update to seq_generator will also be committed. If the calling transaction rolls back, the update will roll back as well.
If a second transaction calls next_value() for the same entity, it has to wait until the first transaction commits or rolls back.
So access to the generator is serialized through this function. Only one transaction at a time can do that.
If you need a second gapless sequence, just insert a new row in the seq_generator table.
This will seriously affect performance when you use it in an environment that does a lot of concurrent inserts.
The only reason that would justify this is a legal requirement to have a gapless number. In every other case you should really, really use a native Postgres sequence.
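For comparison, a minimal sketch of the native approach (table and sequence names are made up):
create sequence some_table_id_seq;

create table some_table
(
  id integer not null primary key default nextval('some_table_id_seq'),
  payload text
);
On PostgreSQL 10 or later, an identity column (id integer generated always as identity) avoids managing the sequence by hand.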

Postgres: how to add unique identifier to table

I have the following table:
CREATE TABLE myid
(
nid bigserial NOT NULL,
myid character varying NOT NULL,
CONSTRAINT myid_pkey PRIMARY KEY (myid )
)
Now, I want to add records to this table with the following function:
CREATE FUNCTION getmyid(_myid character varying)
RETURNS bigint AS
$BODY$ --version 1.1 2015-03-04 08:16
DECLARE
p_nid bigint;
BEGIN
SELECT nid INTO p_nid FROM myid WHERE myid=_myid FOR UPDATE;
IF NOT FOUND THEN
INSERT INTO myid(myid) VALUES(_myid) RETURNING nid INTO p_nid;
END IF;
RETURN p_nid;
END;$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
Generally it works fine, but under high load this function sometimes fails with: duplicate key value violates unique constraint "myid_pkey".
This function is called from a trigger on insert on another table, and the inserts run within a transaction. The isolation level is READ COMMITTED, Postgres 9.1 on Debian Wheezy.
What am I doing wrong?
I see the following way it happens.
Two processes (threads) call the function simultaneously with the same myid.
Both threads successfully execute the SELECT nid INTO ... query and see that there is no such myid in the table yet.
Both threads go into IF NOT FOUND THEN.
Thread 1 executes INSERT INTO myid(myid) and commits its transaction with no errors.
Thread 2 executes INSERT INTO myid(myid) and fails, because the same myid value already exists in the table (PRIMARY KEY constraint).
Why does Thread 2 see the other transaction's committed data in its own transaction?
Because of the 'non-repeatable read' phenomenon, which is possible with READ COMMITTED isolation (http://www.postgresql.org/docs/9.2/static/transaction-iso.html).
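One common fix on Postgres 9.1 is to catch the unique_violation error inside the function and retry, so the losing thread simply re-reads the row the winner inserted (on 9.5 and later, INSERT ... ON CONFLICT would be simpler). A sketch, reusing the table above:
CREATE OR REPLACE FUNCTION getmyid(_myid character varying)
RETURNS bigint AS
$BODY$
DECLARE
    p_nid bigint;
BEGIN
    LOOP
        -- fast path: the key already exists
        SELECT nid INTO p_nid FROM myid WHERE myid = _myid;
        IF FOUND THEN
            RETURN p_nid;
        END IF;
        -- slow path: try to insert; if another transaction wins the race,
        -- the unique_violation is caught and the loop re-reads the row
        BEGIN
            INSERT INTO myid(myid) VALUES (_myid) RETURNING nid INTO p_nid;
            RETURN p_nid;
        EXCEPTION WHEN unique_violation THEN
            -- lost the race: loop and select the row committed by the winner
        END;
    END LOOP;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;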

Capture columns in plpgsql during UPDATE

I am writing a trigger in plpgsql for Postgres 9.1. I need to be able to capture the column names that were issued in the SET clause of an UPDATE so I can record the specified action in an audit table. The examples in the Postgres documentation are simple and inadequate for my needs. I have searched the internet for days and I am unable to find any other examples that try to achieve what I want to do here.
I am on a tight schedule to resolve this soon. I don't know Tcl so pl/Tcl is out of the question for me at this point. pl/Perl may work but I don't know where to start with it. Also I wanted to find a way to accomplish this in pl/pgsql if at all possible for portability and maintenance. If someone can recommend a pl/Perl solution to this I would be grateful.
Here is the table structure of the target table that will be audited:
Note: There are many other columns in the record table but I have not listed them here in order to keep things simple. But the trigger should be able to record changes to any of the columns in the row.
CREATE TABLE record (
record_id integer NOT NULL PRIMARY KEY,
lastname text,
frstname text,
dob date,
created timestamp default NOW(),
created_by integer,
inactive boolean default false
);
create sequence record_record_id_seq;
alter table record alter record_id set default nextval('record_record_id_seq');
Here is my audit table:
CREATE TABLE record_audit (
id integer NOT NULL PRIMARY KEY,
operation char(1) NOT NULL, -- U, I or D
source_column text,
source_id integer,
old_value text,
new_value text,
created_date timestamp default now(),
created_by integer
);
create sequence record_audit_id_seq;
alter table record_audit alter id set default nextval('record_audit_id_seq');
My goal is to record INSERTS and UPDATES to the record table in the record_audit table that will detail not only what the target record_id was (source_id) that was updated and what column was updated (source_column), but also the old_value and the new_value of the column.
I understand that the column values will have to be CAST() to a type of text. I believe I can access the old_value and new_value by accessing NEW and OLD, but I am having difficulty figuring out how to obtain the column names used in the SET clause of the UPDATE query. I need the trigger to add a new record to the record_audit table for every column specified in the SET clause. Note, there are no DELETE actions, as records are simply UPDATED to inactive = 't' (and thus recorded in the audit table).
Here is my trigger so far (obviously incomplete). Please forgive me, I am learning pl/pgsql as I go.
-- Trigger function for record_audit table
CREATE OR REPLACE FUNCTION audit_record() RETURNS TRIGGER AS $$
DECLARE
insert_table text;
ref_col text; --how to get the referenced column name??
BEGIN
--
-- Create a new row in record_audit depending on the operation (TG_OP)
--
IF (TG_OP = 'INSERT') THEN
-- old_value and new_value are meaningless for INSERTs. Just record the new ID.
INSERT INTO record_audit
(operation,source_id,created_by)
VALUES
('I', NEW.record_id, NEW.created_by);
ELSIF (TG_OP = 'UPDATE') THEN
FOR i in 1 .. TG_ARGV[0] LOOP
ref_col := TG_ARGV[i].column; -- I know .column doesn't exist but what to use?
INSERT INTO record_audit
(operation, source_column, source_id, old_value, new_value, created_by)
VALUES
('U', ref_col, NEW.record_id, OLD.ref_col, NEW.ref_col, NEW.created_by);
END LOOP;
END IF;
RETURN NULL; -- result is ignored anyway since this is an AFTER trigger
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER record_audit_trig
AFTER INSERT OR UPDATE on record
FOR EACH ROW EXECUTE PROCEDURE audit_record();
Thanks for reading this long and winding question!
You cannot get this information - not at the PL level; it is probably possible in C.
A good enough solution is based on the changed fields in the NEW and OLD records. You can get the list of fields from the system catalogs for the table the trigger is attached to. A sketch of this approach follows.
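One way to implement that comparison (an assumption on my part, not part of the original answer) is the hstore extension, available on PostgreSQL 9.1: hstore(NEW) - hstore(OLD) keeps exactly the key/value pairs whose values changed. Note this captures columns whose values actually changed, not the literal column list of the SET clause:
CREATE EXTENSION IF NOT EXISTS hstore;

CREATE OR REPLACE FUNCTION audit_record() RETURNS trigger AS $$
DECLARE
    changed hstore;
    col text;
BEGIN
    IF TG_OP = 'INSERT' THEN
        INSERT INTO record_audit (operation, source_id, created_by)
        VALUES ('I', NEW.record_id, NEW.created_by);
    ELSIF TG_OP = 'UPDATE' THEN
        -- keep only the key/value pairs that differ between NEW and OLD
        changed := hstore(NEW) - hstore(OLD);
        FOR col IN SELECT skeys(changed) LOOP
            INSERT INTO record_audit
                (operation, source_column, source_id, old_value, new_value, created_by)
            VALUES
                ('U', col, NEW.record_id,
                 hstore(OLD) -> col, hstore(NEW) -> col, NEW.created_by);
        END LOOP;
    END IF;
    RETURN NULL; -- result is ignored for an AFTER trigger
END;
$$ LANGUAGE plpgsql;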

Trying to prevent duplicate entries in SQL Developer (Oracle)

I am trying to create a trigger to prevent the insertion of duplicate entries in SQL Developer (Oracle 11g Express), but it does not compile correctly. Can you help me figure out why? I can't see any obvious error in the syntax.
CREATE OR REPLACE TRIGGER trig1
BEFORE INSERT ON table1
BEGIN
DECLARE CURSOR C1
IS
SELECT value1,value2 FROM inserted;
DECLARE value11 number;
DECLARE value22 number;
OPEN C1;
FETCH NEXT FROM C1 INTO @value11, @value22;
WHILE FETCH_STATUS = 0
LOOP
IF NOT EXISTS (SELECT * FROM table1 WHERE value1 = @value11 AND value2 = @value22)
THEN
INSERT INTO table1 (value1,value2)
VALUES
(@value11, @value22);
ELSE
ROLLBACK TRANSACTION
--DELETE FROM table1 WHERE value1 = @value11 AND value2 = @value22
PRINT 'Cannot add duplicate entry.'
END IF;
FETCH NEXT FROM C1 INTO @value11, @value22;
END LOOP;
CLOSE C1;
END;
Most of the problems with your trigger are down to using the wrong syntax; this looks like SQL Server T-SQL, not Oracle PL/SQL. I would recommend reading the documentation and looking at some examples before continuing.
Having said all that, you're going about this the wrong way. In order to prevent the insertion of duplicates you have to create a unique constraint on your table. It is the only way to guarantee that you prevent them; trying to work around it in code is bound to fail at some point.
You can create a unique constraint inline, or if your table already exists you could create a unique index:
CREATE UNIQUE INDEX index_name
ON table_name (column1, column2, ... column_n);
or use an ALTER TABLE statement:
ALTER TABLE table_name
add CONSTRAINT constraint_name UNIQUE (column1, column2, ... column_n);
If the set of columns you're testing against are the primary key you can add a primary key constraint instead.
In addition to enforcing integrity no matter what your users do, a unique constraint enables you to simply insert data and catch the errors. There's no need to query the table prior to insertion, which should speed up your application. A sketch of that pattern follows.
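A minimal PL/SQL sketch of the insert-and-catch pattern, assuming the unique constraint above is in place (DUP_VAL_ON_INDEX is the predefined exception Oracle raises on a unique-constraint violation; the bind variable names are made up):
BEGIN
    INSERT INTO table1 (value1, value2) VALUES (:v1, :v2);
EXCEPTION
    WHEN DUP_VAL_ON_INDEX THEN
        -- the row already exists; report it instead of failing the call
        DBMS_OUTPUT.PUT_LINE('Cannot add duplicate entry.');
END;
/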