IBM DB2 error SQLCODE -724 & SQLSTATE 54038 - db2

I was trying to insert some sample data into the table, but the DB2 command line processor returned this message:
DB21034E The command was processed as an SQL statement because it was not a valid Command Line Processor command. During SQL processing it returned:
SQL0723N An error occurred in a triggered SQL statement in trigger "EDWIN.CALLCQ". Information returned for the error includes SQLCODE "-724", SQLSTATE "54038" and message tokens "EDWIN.CHKQUANTITY|PROCEDURE". SQLSTATE=09000
Here is my procedure:
CREATE PROCEDURE chkQuantity (Cart_ID INT, Food_ID INT, Food_Quantity INT)
BEGIN
  DECLARE c CURSOR WITH RETURN FOR
    SELECT SUM(Food_Quantity) FROM Cart_details GROUP BY Cart_ID;
  OPEN c;
  IF (Food_Quantity <= 10) THEN
    INSERT INTO cart_details (Cart_ID, Food_ID, Food_Quantity)
      VALUES (Cart_ID, Food_ID, Food_Quantity);
  ELSE
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = '1 Cart Maximum order only 10 food Quantity';
    DELETE FROM cart_details WHERE cart_details_id = cart_details_id;
  END IF;
  CLOSE c;
END
The trigger:
CREATE TRIGGER callCQ
  AFTER INSERT ON cart_details
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
  CALL chkQuantity(N.Cart_ID, N.Food_ID, N.Food_Quantity)
The table:
CREATE TABLE Cart_Details (
  Cart_Details_ID INT NOT NULL PRIMARY KEY,
  Cart_ID INT,
  FOREIGN KEY (Cart_ID) REFERENCES Cart,
  Food_ID INT,
  FOREIGN KEY (Food_ID) REFERENCES Food,
  Food_quantity INT CHECK (Food_quantity <= 10)
)

The description of the SQL0724N message clearly states that:
SQL0724N The activation of object-name of type object-type would exceed the maximum level of indirect SQL cascading.
Explanation
Cascading of indirect SQL occurs when a trigger activates another trigger (possibly through referential constraint delete rules) or a routine containing SQL invokes another routine. The depth of this cascading is limited to 16 for triggers and 64 for routines. Note that recursive situations, where a trigger includes a triggered SQL statement that directly or indirectly causes the same trigger to be activated, or where a routine directly or indirectly invokes itself, are a form of cascading that is very likely to cause this error if there are no conditions to prevent cascading from exceeding the limit. The object-type is one of TRIGGER, FUNCTION, METHOD, or PROCEDURE. The object-name specified is one of the objects that would have been activated at the seventeenth level of cascading.
User response
Start with the objects that are activated or invoked by the statement that received this error. If any of these objects are recursive, ensure that there is some condition that prevents the object from being activated or invoked more than the limit allows. If this is not the cause of the problem, follow the chain of objects that are activated or invoked to determine the chain that exceeds the cascading limit.
You call a procedure that inserts a row into cart_details from an AFTER INSERT trigger on that same table. Each insert fires the trigger again, which calls the procedure again, so the calls cascade recursively until the limit of 16 trigger levels is exceeded.
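One way to break the cycle is to validate in a NO CASCADE BEFORE INSERT trigger that rejects the offending row with SIGNAL, instead of calling a procedure that inserts again. A rough sketch against the schema above (untested; the trigger name checkQty is made up):
CREATE TRIGGER checkQty
  NO CASCADE BEFORE INSERT ON cart_details
  REFERENCING NEW AS N
  FOR EACH ROW MODE DB2SQL
BEGIN ATOMIC
  -- Reject the row if it would push the cart total above 10 units.
  IF (SELECT COALESCE(SUM(Food_Quantity), 0)
        FROM cart_details
       WHERE Cart_ID = N.Cart_ID) + N.Food_Quantity > 10 THEN
    SIGNAL SQLSTATE '45000'
      SET MESSAGE_TEXT = '1 Cart Maximum order only 10 food Quantity';
  END IF;
END
A BEFORE trigger may read the subject table but cannot modify it, so nothing cascades and the -724 goes away.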

Related

Can postgres insert triggers and/or check be run without inserting

I would love to be able to validate objects representing table rows using the database's existing constraints (triggers that raise exceptions and checks) without actually inserting them into the database.
Is there currently a way to do this in Postgres? At least with BEFORE INSERT triggers and CHECK constraints; I assume it makes no sense with AFTER INSERT triggers.
The easiest way I can think of right now would be to:
Lock the table
Insert a new row
If an exception is raised, pass it to the API; otherwise DELETE the row and call it valid
Unlock
But I can see several issues with this.
A simpler way is to insert within a transaction and not commit:
BEGIN;
INSERT INTO tbl(...) VALUES (...);
-- see effects ...
ROLLBACK;
No need for additional locking. The row is never visible to any other transaction with the default transaction isolation level READ COMMITTED. (You might be stalling concurrent writes that conflict with the tested row.)
Notable side-effect: Sequences of serial or IDENTITY columns are advanced even if the INSERT is never committed. But gaps in sequential numbers are to be expected anyway and nothing to worry about.
Be wary of triggers with side-effects. All "transactional" SQL effects are rolled back, even most DDL commands. But some special operations (like advancing sequences) are never rolled back.
Also, DEFERRED constraints do not kick in. The manual:
DEFERRED constraints are not checked until transaction commit.
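If you do need deferred constraints to participate in such a test, you can force them to be checked inside the transaction before rolling back:
BEGIN;
INSERT INTO tbl(...) VALUES (...);
SET CONSTRAINTS ALL IMMEDIATE;  -- deferred constraints are checked right here
ROLLBACK;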
If you need this a lot, work with a copy of your table, or even your database.
Strictly speaking, while any trigger / constraint / concurrent event is allowed, there is no other way to "validate objects" than to insert them into the actual target table in the actual target database at the actual point in time. Triggers, constraints, even default values, can interact with the current state of the whole DB. The more possibilities are ruled out and requirements are reduced, the more options we might have to emulate the test.
CREATE FUNCTION validate_function()
  RETURNS trigger LANGUAGE plpgsql
AS $function$
DECLARE
  valid_flag boolean := true;
BEGIN
  -- Validation code goes here; set valid_flag to false if the row fails.
  IF NOT valid_flag THEN
    RAISE EXCEPTION 'This record is not valid, id %', NEW.id
      USING HINT = 'Please enter a valid record';
    RETURN NULL;  -- never reached: RAISE EXCEPTION aborts the function
  ELSE
    RETURN NEW;
  END IF;
END;
$function$;

CREATE TRIGGER validate_rec BEFORE INSERT OR UPDATE ON some_tbl
FOR EACH ROW EXECUTE FUNCTION validate_function();
With this function and trigger you validate inside the trigger. If the new record fails validation, you set valid_flag to false and use that to raise an exception. The RETURN NULL; is never reached, because RAISE EXCEPTION aborts the function immediately; it is kept only as a safety net, and if it were reached it would also abort the insert or update. If the record is valid, you RETURN NEW and the insert/update completes.

How can a pgsql sequence be undefined when I just called nextval?

I've got an app built on top of PostgreSQL which makes use of a custom sequence. I think I understand sequences pretty well by now: they are non-transactional, currval is defined only within the current session, etc. But I don't understand this:
2015-10-13 10:37:16 SQLSelect: SELECT nextval('commit_id_seq')
2015-10-13 10:37:16 commit_id_seq: 57
2015-10-13 10:37:16 SQLExecute: UPDATE bid SET is_archived=false,company_id=1436,contact_id=15529,...(etc)...,sharing_policy='' WHERE id = 56229
2015-10-13 10:37:16 ERROR: ERROR: currval of sequence "commit_id_seq" is not yet defined in this session
CONTEXT: SQL statement "INSERT INTO history (table_name, record_id, sec_user_id, created, action, notes, status, before, after, commit_id)
SELECT TG_TABLE_NAME, rec.id, (SELECT id FROM sec_user WHERE name = CURRENT_USER), now(), SUBSTR(TG_OP,1,1), note, stat, oldH, newH, currval('commit_id_seq')"
PL/pgSQL function log_to_history() line 28 at SQL statement
We log every call to the database, and in the case of the SELECT nextval, I also log the result. The above are the exact calls, except that I trimmed the UPDATE statement (because the original is really long).
So, you can see that we just called nextval on the sequence, got a reasonable number back, and then we do an UPDATE that invokes a trigger function that attempts to use currval on that sequence... and it fails, claiming currval is not defined.
Note that this doesn't usually happen, but once it does start happening, it does so consistently (perhaps until the user disconnects from the DB).
How can this be? And what can I do about it?
Your UPDATE statement obviously calls a trigger. The most plausible cause of this error is that the trigger function is in a different schema from where the sequence is defined and the schema of the sequence is not in the search_path. That gives you two options to resolve this:
Make the schema of the sequence visible to the trigger function using SET search_path TO .... Note that this will make all objects in the schema of the sequence visible, which may be something of a security risk, depending on your database design.
Schema-qualify the sequence name in the trigger function: currval('my_schema.commit_id_seq').
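For example (a sketch; my_schema stands in for the actual schema of the sequence):
-- Option 1: pin the search_path for the trigger function only
ALTER FUNCTION log_to_history() SET search_path = my_schema, public;
-- Option 2: schema-qualify the sequence wherever it is used
SELECT currval('my_schema.commit_id_seq');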
Another plausible cause is connection pooling at your application end. Log the session ID (really just the start time and PID of the current session) by adding %c to the log_line_prefix parameter in postgresql.conf. In PostgreSQL every command runs in its own transaction unless a transaction is explicitly established, and connection pooling software often works at the transaction level: you are guaranteed to stay on the same backend session only for the duration of a transaction, so outside of one, consecutive statements may run on different sessions and session state like currval() is lost. If that is the case, you can wrap your entire set of commands in a BEGIN ... COMMIT block (or use the corresponding call of your pooling software), or better yet, change your code to not depend on a previous nextval() call.
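For illustration, wrapping the two statements from the log in one transaction guarantees they run on the same session, so currval() stays defined for the trigger (the trimmed SET list is elided as in the question):
BEGIN;
SELECT nextval('commit_id_seq');
UPDATE bid SET is_archived = false /* ...other columns... */ WHERE id = 56229;
COMMIT;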

PL/pgSQL query in PostgreSQL returns result for new, empty table

I am learning to use triggers in PostgreSQL but run into an issue with this code:
CREATE OR REPLACE FUNCTION checkAdressen() RETURNS TRIGGER AS $$
DECLARE
  adrCnt int = 0;
BEGIN
  SELECT INTO adrCnt count(*) FROM Adresse
  WHERE gehoert_zu = NEW.kundenId;

  IF adrCnt < 1 OR adrCnt > 3 THEN
    RAISE EXCEPTION 'Customer must have 1 to 3 addresses.';
  ELSE
    RAISE EXCEPTION 'No exception';
  END IF;
END;
$$ LANGUAGE plpgsql;
I create a trigger with this procedure after freshly creating all my tables, so they are all empty. However, the count(*) in the above code returns 1.
When I run SELECT count(*) FROM adresse; outside of PL/pgSQL, I get 0.
I tried using the FOUND variable but it is always true.
Even more strangely, when I insert some values into my tables and then delete them again so that they are empty again, the code works as intended and count(*) returns 0.
Also, if I leave out the WHERE gehoert_zu = NEW.kundenId, count(*) returns 0, which means I get more results with the WHERE clause than without.
--Edit:
Here is an example of how I use the procedure:
CREATE TABLE kunde (
kundenId int PRIMARY KEY
);
CREATE TABLE adresse (
id int PRIMARY KEY,
gehoert_zu int REFERENCES kunde
);
CREATE CONSTRAINT TRIGGER adressenKonsistenzTrigger AFTER INSERT ON Kunde
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE checkAdressen();
INSERT INTO kunde VALUES (1);
INSERT INTO adresse VALUES (1,1);
It looks like I am getting the DEFERRABLE INITIALLY DEFERRED part wrong. I assumed the trigger would be executed after the first INSERT statement, but it happens after the second one, although the inserts are not inside a BEGIN; ... COMMIT; block.
According to the PostgreSQL documentation, inserts are committed automatically each time when not inside such a block, and thus there shouldn't be an entry in adresse when the first INSERT statement is committed.
Can anyone point out my mistake?
--Edit:
The trigger and DEFERRABLE INITIALLY DEFERRED seem to be working all right.
My mistake was to assume that since I am not using a BEGIN-COMMIT block, each insert would be executed in its own transaction, with the trigger being executed afterwards every time.
However, even without the BEGIN-COMMIT all inserts got bundled into one transaction, with the trigger executed afterwards.
Given this behaviour, what is the point in using BEGIN-COMMIT?
You need a transaction plus the DEFERRABLE INITIALLY DEFERRED because of a chicken-and-egg problem.
Starting with two empty tables:
you cannot insert a single row into the person (kunde) table, because it needs at least one address;
you cannot insert a single row into the address table, because the FK constraint needs a corresponding row in the person table to exist.
This is why you need to bundle the two inserts into one operation: a transaction. You need the BEGIN + COMMIT, and the DEFERRABLE allows transient forbidden database states to exist: it causes the check to be evaluated at commit time.
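With the tables from the question that looks like this; the deferred constraint trigger fires at COMMIT, when the address row already exists:
BEGIN;
INSERT INTO kunde VALUES (1);
INSERT INTO adresse VALUES (1, 1);
COMMIT;  -- adressenKonsistenzTrigger runs here and sees one address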
This may seem a bit silly, but the answer is you need to stop deferring the trigger and run it BEFORE the insert. If you run it after the insert, of course there is data in the table.
As far as I can tell this is working as expected.
One further note: you probably don't mean
RAISE EXCEPTION 'No Exception';
You probably want
RAISE INFO 'No Exception';
Then you can change your settings and run queries in transactions to test that the trigger does what you want it to do. As it is, every insert is going to fail and you have no way to move this into production without editing your procedure.

Mutating table in SQL for a specific case of update

I have created a trigger for the table stock.
The schema of the table is as follows:
create table stock(item_code varchar2(2) primary key, p_qty number(2),s_qty number(2));
The Trigger is as follows:
CREATE OR REPLACE TRIGGER TR_STOCK BEFORE UPDATE OF S_QTY ON STOCK FOR EACH ROW
DECLARE
  V_P STOCK.P_QTY%TYPE;
  V_S STOCK.S_QTY%TYPE;
  V_I VARCHAR2(2);
BEGIN
  V_S := :NEW.S_QTY;
  V_I := :NEW.ITEM_CODE;
  SELECT P_QTY INTO V_P FROM STOCK WHERE ITEM_CODE = V_I;
  IF V_S > V_P THEN
    RAISE_APPLICATION_ERROR(-20400, 'SOLD QTY CANNOT EXCEED PURCHASED QTY...');
  END IF;
END;
/
Now every time I execute an update query, it says the table is mutating and flags the following error:
update stock set s_qty=2 where item_code='i4'
*
ERROR at line 1:
ORA-04091: table HR.STOCK is mutating, trigger/function may not see it
ORA-06512: at "HR.TR_STOCK", line 8
ORA-04088: error during execution of trigger 'HR.TR_STOCK'
Any help with this specific problem?
There is no need to query the STOCK table. Simply compare the :NEW.P_QTY and :NEW.S_QTY fields directly:
CREATE OR REPLACE TRIGGER TR_STOCK BEFORE UPDATE OF S_QTY ON STOCK FOR EACH ROW
BEGIN
  IF :NEW.S_QTY > :NEW.P_QTY THEN
    RAISE_APPLICATION_ERROR(-20400, 'SOLD QTY CANNOT EXCEED PURCHASED QTY...');
  END IF;
END;
/
You really should consider using a database constraint to implement this logic, in which case you wouldn't need the trigger at all.
ALTER TABLE hr.stock
ADD CONSTRAINT stock_ck1
CHECK (
s_qty <= p_qty
)
Triggers have many drawbacks compared with constraints:
Triggers do not account for existing data rows; constraints can do this if you desire (see the sketch after this list).
A FOR EACH ROW trigger has to context-switch between the SQL engine and the PL/SQL engine for every row, which increases the overhead of the INSERT or UPDATE statement running. This adds up as your number of rows increases.
Oracle can use constraints when optimising your SQL statements (it knows that a WHERE clause that violates a CHECK constraint will never return any rows without needing to inspect the rows).
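As an aside on the first point: if existing rows already violate the rule and you only want to enforce it for future changes, Oracle lets you add the constraint without validating old data. A sketch:
ALTER TABLE hr.stock
  ADD CONSTRAINT stock_ck1
  CHECK (s_qty <= p_qty)
  ENABLE NOVALIDATE;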
If you're using the trigger to provide an error message, you should really consider moving this into your application logic, with constraints as a safeguard.

Is it possible to dynamically loop through a table's columns?

I have a trigger function for a table test which has the following code snippet:
IF TG_OP='UPDATE' THEN
IF OLD.locked > 0 AND
( OLD.org_id <> NEW.org_id OR
OLD.document_code <> NEW.document_code OR
-- other columns ...
)
THEN
RAISE EXCEPTION 'Message';
-- more code
So I am statically comparing each column's new value with its previous value to ensure integrity. Every time my business logic changes and I add new columns to that table, I have to modify this trigger as well. I thought it would be better if I could somehow check all the columns of the table dynamically, without typing their names explicitly.
How can it be done?
From the 9.0 beta 2 documentation about the WHEN clause in triggers (the same expression may also work in earlier versions inside the trigger body):
OLD.* IS DISTINCT FROM NEW.*
or possibly (from the 8.2 release notes):
IF row(new.*) IS DISTINCT FROM row(old.*)
Take a look at the information_schema: there is a view columns. Execute a query to get all current column names of the table that fired the trigger:
SELECT column_name
FROM   information_schema.columns
WHERE  table_schema = TG_TABLE_SCHEMA
AND    table_name = TG_TABLE_NAME;
Loop through the result and there you go!
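A sketch of such a loop inside a row-level trigger function, using to_jsonb() (PostgreSQL 9.5 or later) to read column values by name; the function name check_locked_row is made up and the locked rule is borrowed from the question:
CREATE OR REPLACE FUNCTION check_locked_row()
RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
  col text;
  o   jsonb := to_jsonb(OLD);
  n   jsonb := to_jsonb(NEW);
BEGIN
  IF OLD.locked > 0 THEN
    FOR col IN
      SELECT column_name
      FROM   information_schema.columns
      WHERE  table_schema = TG_TABLE_SCHEMA
      AND    table_name = TG_TABLE_NAME
    LOOP
      -- Compare each column of OLD and NEW by name.
      IF o -> col IS DISTINCT FROM n -> col THEN
        RAISE EXCEPTION 'Column % of a locked row may not change', col;
      END IF;
    END LOOP;
  END IF;
  RETURN NEW;
END
$$;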
More information can be found in the fine manual.
In Postgres 9.0 or later add a WHEN clause to your trigger definition (CREATE TRIGGER statement):
CREATE TRIGGER foo
BEFORE UPDATE
FOR EACH ROW
WHEN (OLD IS DISTINCT FROM NEW) -- parentheses required!
EXECUTE PROCEDURE ...;
Only possible for triggers BEFORE / AFTER UPDATE, where both OLD and NEW are defined. You'd get an exception trying to use this WHEN clause with INSERT or DELETE triggers.
And radically simplify the trigger function accordingly:
...
IF OLD.locked > 0 THEN
RAISE EXCEPTION 'Message';
END IF;
...
No need to test IF TG_OP='UPDATE' ... since this trigger only works for UPDATE anyway.
Or move that condition in the WHEN clause, too:
CREATE TRIGGER foo
BEFORE UPDATE
FOR EACH ROW
WHEN (OLD.locked > 0
AND OLD IS DISTINCT FROM NEW)
EXECUTE PROCEDURE ...;
Leaving only an unconditional RAISE EXCEPTION in your trigger function, which is only called when needed to begin with.
Read the fine print:
In a BEFORE trigger, the WHEN condition is evaluated just before the function is or would be executed, so using WHEN is not materially different from testing the same condition at the beginning of the trigger function. Note in particular that the NEW row seen by the condition is the current value, as possibly modified by earlier triggers. Also, a BEFORE trigger's WHEN condition is not allowed to examine the system columns of the NEW row (such as oid), because those won't have been set yet.
In an AFTER trigger, the WHEN condition is evaluated just after the row update occurs, and it determines whether an event is queued to fire the trigger at the end of statement. So when an AFTER trigger's WHEN condition does not return true, it is not necessary to queue an event nor to re-fetch the row at end of statement. This can result in significant speedups in statements that modify many rows, if the trigger only needs to be fired for a few of the rows.
Related:
Fire trigger on update of columnA or ColumnB or ColumnC
To also address the question title
Is it possible to dynamically loop through a table's columns?
Yes. Examples:
Handle result when dynamic SQL is in a loop
Removing all columns with given name
Iteration over RECORD variable inside trigger
Use PL/Perl or PL/Python; they are much better suited for such tasks. Much better.
You can also install hstore-new and use its row->hstore semantics, but that's definitely not a good idea when using normal datatypes.
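For illustration only, a sketch of that hstore approach (assuming the hstore extension is installed via CREATE EXTENSION hstore; in modern PostgreSQL the stock extension provides hstore(record)): subtracting the OLD row from the NEW row leaves exactly the changed key/value pairs:
-- Inside a row-level UPDATE trigger function:
IF OLD.locked > 0 AND hstore(NEW) <> hstore(OLD) THEN
  RAISE EXCEPTION 'Locked row changed in columns: %',
    akeys(hstore(NEW) - hstore(OLD));
END IF;
RETURN NEW;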