PL/SQL block in triggers getting set to null - oracle-sqldeveloper

Not sure how much information I can provide because I'm still not exactly sure why it happens, but the PL/SQL block in my triggers in Oracle SQL Developer keeps getting set to null at random times. It's happened when I've dropped rows, enabled the trigger, compiled the trigger, inserted rows (through ODP.NET), and updated rows.
I've tried my best to find an answer, but I can't find anything. Any ideas would be much, much appreciated.
Also, I wasn't sure whether to post here or on Server Fault.
The trigger is a variation of this:
create or replace TRIGGER basic_ticket_trg
BEFORE INSERT ON basic_ticket
FOR EACH ROW
BEGIN
  <<COLUMN_SEQUENCES>>
  BEGIN
    IF INSERTING AND :NEW.ID IS NULL THEN
      SELECT basic_ticket_seq.NEXTVAL INTO :NEW.ID FROM SYS.DUAL;
    END IF;
  END COLUMN_SEQUENCES;
END;
The IF statement is what keeps getting set to null, so the trigger ends up looking like:
create or replace TRIGGER basic_ticket_trg
BEFORE INSERT ON basic_ticket
FOR EACH ROW
BEGIN
  <<COLUMN_SEQUENCES>>
  BEGIN
    null;
  END COLUMN_SEQUENCES;
END;

I'm still not 100% sure why it was happening, but for those viewing, it was caused by updating the table the trigger acts on.

Try getting rid of the label COLUMN_SEQUENCES: just delete it and everything should work, at least it did for me. I think it's a bug that erases the code to NULL when you drop a column or change its name.
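For reference, this is what the trigger from the question looks like with just the label deleted (a sketch; the nested block itself can stay, or be removed as well):
create or replace TRIGGER basic_ticket_trg
BEFORE INSERT ON basic_ticket
FOR EACH ROW
BEGIN
  BEGIN
    IF INSERTING AND :NEW.ID IS NULL THEN
      SELECT basic_ticket_seq.NEXTVAL INTO :NEW.ID FROM SYS.DUAL;
    END IF;
  END;
END;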

Related

Can postgres insert triggers and/or checks be run without inserting

I would love to be able to validate objects representing table rows using the database's existing constraints (triggers that raise exceptions and checks) without actually inserting them into the database.
Is there currently a way one could do this in Postgres? At least with BEFORE INSERT triggers and CHECK constraints; I assume it makes no sense with AFTER INSERT triggers.
The easiest way I can think of right now would be to:
Lock the table
Insert a new row
If an exception is raised, pass it to the API; else DELETE the row and call it valid
Unlock
But I can see several issues with this.
A simpler way is to insert within a transaction and not commit:
BEGIN;
INSERT INTO tbl(...) VALUES (...);
-- see effects ...
ROLLBACK;
No need for additional locking. The row is never visible to any other transaction with the default transaction isolation level READ COMMITTED. (You might be stalling concurrent writes that conflict with the tested row.)
Notable side-effect: Sequences of serial or IDENTITY columns are advanced even if the INSERT is never committed. But gaps in sequential numbers are to be expected anyway and nothing to worry about.
Be wary of triggers with side-effects. All "transactional" SQL effects are rolled back, even most DDL commands. But some special operations (like advancing sequences) are never rolled back.
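A quick way to see the sequence side-effect for yourself (a minimal sketch with a throwaway table; demo_tbl is just a placeholder name):
CREATE TABLE demo_tbl (id serial PRIMARY KEY, val text);
BEGIN;
INSERT INTO demo_tbl(val) VALUES ('test');   -- consumes a sequence value
ROLLBACK;                                    -- the row is gone ...
SELECT currval(pg_get_serial_sequence('demo_tbl', 'id'));   -- ... but the sequence stays advanced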
Also, DEFERRED constraints do not kick in. The manual:
DEFERRED constraints are not checked until transaction commit.
If you need this a lot, work with a copy of your table, or even your database.
Strictly speaking, as long as any trigger / constraint / concurrent event is allowed, there is no other way to "validate objects" than to insert them into the actual target table in the actual target database at the actual point in time. Triggers, constraints, and even default values can interact with the current state of the whole DB. The more possibilities are ruled out and requirements are reduced, the more options we might have to emulate the test.
CREATE FUNCTION validate_function()
RETURNS trigger LANGUAGE plpgsql
AS $function$
DECLARE
  valid_flag boolean := 't';
BEGIN
  -- Validation code: set valid_flag to false if the row fails your checks
  IF valid_flag = 'f' THEN
    RAISE EXCEPTION 'This record is not valid id %', NEW.id
      USING HINT = 'Please enter valid record';
    RETURN NULL;  -- never reached; the exception already aborts the statement
  ELSE
    RETURN NEW;
  END IF;
END;
$function$;
CREATE TRIGGER validate_rec BEFORE INSERT OR UPDATE ON some_tbl
FOR EACH ROW EXECUTE FUNCTION validate_function();
With this function and trigger you validate inside the trigger. If the new record fails validation, you set valid_flag to false and then use that to raise an exception. The RETURN NULL; is probably redundant, and I am not sure it will ever be reached, but if it is, it will also abort the insert or update. If the record is valid, you RETURN NEW and the insert/update completes.

A trigger fires on a table, but the select on the table returns null. How can I create the code to be able to access the row that fired the trigger?

I have the following in the trigger:
begin
dws_edi_api.init_edi_message(message_id, order_no, supplier_no);
end;
This fires on update of the column row_state in the table out_message_tab
The trigger fires OK, but when, in the procedure dws_edi_api.init_edi_message_line, I do a SELECT c08 FROM out_message_tab WHERE message_id = message_id_ (variable from the trigger), it returns null.
I assume the change hasn't been committed. I have tried adding a commit as the first line in my code to force the change to commit, but that doesn't help. I have tried adding a dbms_lock.sleep(10), but that doesn't help either.
I've added the procedure code below:
procedure init_edi_message_line(message_id in number) is
  pragma autonomous_transaction;
  message_id_  number;
  order_no_    varchar2(20);
  supplier_no_ varchar2(20);
  c08_         varchar2(200);
  cursor c1 is
    select c08
    from jdifs.out_message_line_tab
    where message_id = message_id_
    and name = 'HEADER';
begin
  -- dbms_lock.sleep(10);
  message_id_ := message_id;
  open c1;
  loop
    fetch c1 into c08_;
    exit when c08_ is not null;
    insert into jdifs.jdws_temp_line_tab
    values (message_id_, '2', c08_, '4');
    commit;
  END LOOP;
  close c1;
EXCEPTION
  WHEN NO_DATA_FOUND THEN
    -- Do something
    null;
  WHEN OTHERS THEN
    null;
end init_edi_message_line;
EDIT:
Hi, no, this didn't solve the problem unfortunately.
I will try again to explain as thoroughly as possible.
I have a trigger on the table called out_message_line_tab. When a row is created in that table it contains a large number of columns.
The ones that are interesting to me are message_id (a sequential number), order_no (P123456), supplier_no (11242), linenumber (1), part_no (F1524).
When the trigger fires, data needs to be fetched from that table (and from a table "connected" to it, in this case out_message_tab).
So the trigger is on out_message_line_tab, but it isn't enough to send the values in the trigger to the procedure, since I need some data from the other table as well.
The primary key between the tables out_message_tab and out_message_line_tab is message_id
So my problem is how to do the select from out_message_tab where message_id = message_id (the primary key from out_message_line_tab).
When I do, it just says no data found. I assume it's because it has not been committed yet.
I hope this is clearer.
Your procedure init_edi_message_line() is defined with pragma autonomous_transaction. That means it executes in a completely separate transaction. Consequently it cannot see any of the uncommitted data in the transaction which fired the trigger.
If you want init_edi_message_line() to process data from that transaction, your trigger needs to pass everything to the procedure as arguments. However, it's not clear exactly what you're doing - is out_message_line_tab the table which owns the trigger? - so I can't guarantee that it will be easy for you to make the necessary changes.
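For illustration only, a sketch of the "pass it as arguments" idea - assuming the trigger really is on out_message_line_tab, and assuming you extend init_edi_message_line to accept the extra values. The trigger name, the triggering event, and the exact parameter list are placeholders based on the columns mentioned in the question and may need adjusting:
create or replace trigger out_message_line_trg   -- placeholder name
after insert on out_message_line_tab
for each row
begin
  -- hand the (still uncommitted) row values straight to the procedure
  -- instead of having the procedure re-query the table
  dws_edi_api.init_edi_message_line(:new.message_id,
                                    :new.order_no,
                                    :new.supplier_no,
                                    :new.c08);
end;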

Postgresql insert stopped working, duplicate key value violations

About 8 months ago I used a suggestion to set up a holding table, then push to the formal table and prevent duplicate entries, per this post: Best way to prevent duplicate data on copy csv postgresql
It's been working very nicely, but today I noticed some errors and gaps in the data.
Here's my insert statement:
And here's how the index is set up:
And here's an example of the error I'm getting, although on the next cron insert, it went through.
Here's it going through fine:
I haven't noted any large changes in the data incoming. Here's what the data looks like that's coming in now:
In summary, I've noticed recent oddities with the insert statements, and success is erratic, resulting in large data gaps in the database. Thanks for any help, and I'm happy to provide more details, but I wanted to see if my information sounds like something someone else has already dealt with.
Thanks very much for any help,
S
As Gordon pointed out in his answer to your previous question, this approach only works if you have exclusive access to the table. There is a delay between the existence check and the insert itself, and if another process modifies the table during this window, you may end up with duplicates.
If you're on Postgres 9.5+, the best approach is to skip the existence check and simply use an INSERT ... ON CONFLICT DO NOTHING statement.
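On 9.5+ that could be as simple as the following sketch (column list borrowed from the loop example below; adjust it to your actual tables):
INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
SELECT pulsecount, intensity, time, lon, lat, ltg_geom
FROM   holding
ON     CONFLICT DO NOTHING;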
On earlier versions, the simplest solution (if you can afford to do so) would be to lock the table for the duration of the import. Otherwise, you can emulate ON CONFLICT DO NOTHING (albeit less efficiently) using a loop and an exception handler:
DO $$
DECLARE
  r RECORD;
BEGIN
  FOR r IN (SELECT * FROM holding) LOOP
    BEGIN
      INSERT INTO ltg_data (pulsecount, intensity, time, lon, lat, ltg_geom)
      VALUES (r.pulsecount, r.intensity, r.time, r.lon, r.lat, r.ltg_geom);
    EXCEPTION WHEN unique_violation THEN
      NULL;  -- duplicate row: silently skip it
    END;
  END LOOP;
END
$$;
As an aside, DELETE FROM holding is time-consuming and probably unnecessary. Create your staging table as TEMP, and it will be cleaned up automatically at the end of your session. You can easily build a table which matches the structure of your import target:
CREATE TEMP TABLE holding (LIKE ltg_data);
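Following that, a sketch of the rest of the import flow (the CSV path is a placeholder; COPY from a server-side file needs the right privileges, otherwise use psql's \copy):
COPY holding FROM '/path/to/file.csv' WITH (FORMAT csv);   -- placeholder path
INSERT INTO ltg_data
SELECT * FROM holding
ON CONFLICT DO NOTHING;   -- 9.5+, as shown above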

Postgres audit trigger only fired by one row UPDATE

Hi, I'm trying to develop a simple audit trigger for a PostgreSQL server. According to this document, I pretty much understand how it works. But I want to record my activity only when a certain row is updated. Below is the code from the link; as it is, it records an entry whenever there is an UPDATE, no matter which row is updated.
IF (TG_OP = 'UPDATE') THEN
...
Please help me add a condition to the above code. Thanks!
The trigger is written in PL/PgSQL. I strongly suggest you study the PL/PgSQL manual if you're going to modify PL/PgSQL code.
In triggers, the row data is in OLD and NEW (for UPDATE triggers). So you can do IF tests on that like anything else. E.g.:
IF (TG_OP = 'UPDATE') THEN
  IF NEW."name" = 'name_to_audit' OR OLD."name" = 'name_to_audit' THEN
    -- do audit commands
  END IF;
END IF;
Both NEW and OLD are tested in case the name is being changed from/to the name of interest.
In this case you could instead change it to use a WHEN clause on the CREATE TRIGGER, so you never fire the trigger at all unless the conditions to audit are met. See the WHEN clause on triggers.
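A sketch of that variant, with trigger, table, and function names as placeholders for whatever your audit setup uses:
CREATE TRIGGER audit_name_trg
AFTER UPDATE ON audited_table
FOR EACH ROW
WHEN (NEW."name" = 'name_to_audit' OR OLD."name" = 'name_to_audit')
EXECUTE PROCEDURE my_audit_func();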
This is just a basic programming problem; you'll need to learn the programming language in order to use it.
See also the updated trigger for Pg 9.1.
Oh, and remember to think about NULL; remember that NULL = 'anything' yields NULL. Use IS DISTINCT FROM / IS NOT DISTINCT FROM for null-safe comparisons; IS NOT DISTINCT FROM is true when two values are equal or both null.
From the PostgreSQL docs:
CREATE TRIGGER log_update
AFTER UPDATE ON accounts
FOR EACH ROW
WHEN (OLD.* IS DISTINCT FROM NEW.*)
EXECUTE PROCEDURE log_account_update();
This only works for UPDATE on that table. For INSERT and DELETE you can use the same trigger without the WHEN condition. Hope this helps others.
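For instance, reusing the docs example above (a sketch; for INSERT there is no OLD row and for DELETE no NEW row, so the row comparison would not apply there anyway):
CREATE TRIGGER log_insert_delete
AFTER INSERT OR DELETE ON accounts
FOR EACH ROW
EXECUTE PROCEDURE log_account_update();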

PL/pgSQL query in PostgreSQL returns result for new, empty table

I am learning to use triggers in PostgreSQL but run into an issue with this code:
CREATE OR REPLACE FUNCTION checkAdressen() RETURNS TRIGGER AS $$
DECLARE
  adrCnt int = 0;
BEGIN
  SELECT INTO adrCnt count(*) FROM Adresse
  WHERE gehoert_zu = NEW.kundenId;
  IF adrCnt < 1 OR adrCnt > 3 THEN
    RAISE EXCEPTION 'Customer must have 1 to 3 addresses.';
  ELSE
    RAISE EXCEPTION 'No exception';
  END IF;
END;
$$ LANGUAGE plpgsql;
I create a trigger with this procedure after freshly creating all my tables so they are all empty. However the count(*) function in the above code returns 1.
When I run SELECT count(*) FROM adresse; outside of PL/pgSQL, I get 0.
I tried using the FOUND variable but it is always true.
Even more strangely, when I insert some values into my tables and then delete them again so that they are empty again, the code works as intended and count(*) returns 0.
Also if I leave out the WHERE gehoert_zu = NEW.kundenId, count(*) returns 0 which means I get more results with the WHERE clause than without.
--Edit:
Here is an example of how I use the procedure:
CREATE TABLE kunde (
  kundenId int PRIMARY KEY
);
CREATE TABLE adresse (
  id int PRIMARY KEY,
  gehoert_zu int REFERENCES kunde
);
CREATE CONSTRAINT TRIGGER adressenKonsistenzTrigger AFTER INSERT ON Kunde
DEFERRABLE INITIALLY DEFERRED
FOR EACH ROW
EXECUTE PROCEDURE checkAdressen();
INSERT INTO kunde VALUES (1);
INSERT INTO adresse VALUES (1,1);
It looks like I am getting the DEFERRABLE INITIALLY DEFERRED part wrong. I assumed the trigger would be executed after the first INSERT statement, but it happens after the second one, although the inserts are not inside a BEGIN; ... COMMIT; block.
According to the PostgreSQL documentation, inserts are committed automatically every time if not inside such a block, and thus there shouldn't be an entry in adresse when the first INSERT statement is committed.
Can anyone point out my mistake?
--Edit:
The trigger and DEFERRABLE INITIALLY DEFERRED seem to be working all right.
My mistake was to assume that since I am not using a BEGIN-COMMIT block, each insert would be executed in its own transaction, with the trigger being executed afterwards every time.
However, even without the BEGIN-COMMIT, all inserts get bundled into one transaction and the trigger is executed afterwards.
Given this behaviour, what is the point in using BEGIN-COMMIT?
You need a transaction plus the "DEFERRABLE INITIALLY DEFERRED" because of the chicken-and-egg problem.
Starting with two empty tables:
you cannot insert a single row into the person table, because it needs at least one address.
you cannot insert a single row into the address table, because the FK constraint needs a corresponding row in the person table to exist
This is why you need to bundle the two inserts into one operation: the transaction. You need the BEGIN + COMMIT, and the DEFERRABLE allows transient forbidden database states to exist: it causes the check to be evaluated at commit time.
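With the deferred constraint trigger from the question, the bundled version of the two example inserts would be the following; the trigger's check then runs once, at COMMIT, when both rows exist:
BEGIN;
INSERT INTO kunde VALUES (1);
INSERT INTO adresse VALUES (1,1);
COMMIT;   -- the deferred trigger fires here and sees both rows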
This may seem a bit silly, but the answer is you need to stop deferring the trigger and run it BEFORE the insert. If you run it after the insert, of course there is data in the table.
As far as I can tell this is working as expected.
One further note: you probably don't mean:
RAISE EXCEPTION 'No Exception';
You probably want
RAISE INFO 'No Exception';
Then you can change your settings and run queries in transactions to test that the trigger does what you want it to do. As it is, every insert is going to fail and you have no way to move this into production without editing your procedure.
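For reference, the function with that change might look like this (a sketch, kept as the AFTER constraint trigger from the question: once the ELSE branch no longer raises, the function must return something, and the return value of an AFTER row trigger is ignored):
CREATE OR REPLACE FUNCTION checkAdressen() RETURNS TRIGGER AS $$
DECLARE
  adrCnt int := 0;
BEGIN
  SELECT INTO adrCnt count(*) FROM Adresse
  WHERE gehoert_zu = NEW.kundenId;
  IF adrCnt < 1 OR adrCnt > 3 THEN
    RAISE EXCEPTION 'Customer must have 1 to 3 addresses.';
  ELSE
    RAISE INFO 'No exception';   -- informational only; the insert is allowed to proceed
  END IF;
  RETURN NULL;   -- ignored for AFTER row triggers; use RETURN NEW if you switch to a BEFORE trigger
END;
$$ LANGUAGE plpgsql;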