To aid me in debugging another issue in my database, I've written the following function as a trigger in PostgreSQL:
CREATE OR REPLACE FUNCTION stage.triggerlogfunction()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
begin
insert into stage.temptriggerlog_complete
values
(current_timestamp, 'Hello');
perform pg_sleep(5);
insert into stage.temptriggerlog_complete
values
(current_timestamp, 'Did');
perform pg_sleep(5);
insert into stage.temptriggerlog_complete
values
(current_timestamp, 'This Work?');
return null;
END;
$function$
;
When a new row is inserted into a table in my database, this trigger is set to fire. This works as expected, but I expected to see three rows with timestamps 5 seconds apart (the inserts running in sequence with a 5-second delay between each); instead, the actual result was three rows all with the same timestamp.
Why is this happening? Is there a way I can force the behaviour that I was expecting?
Is there any way I can get the current timestamp at the time each insert statement runs?
You're looking for statement_timestamp() (the timestamp of the statement that caused your trigger to fire) or, for the actual wall-clock time at each call, clock_timestamp().
See the documentation for what they do and the alternatives.
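To get the 5-second gaps you expected, clock_timestamp() is the one to use, since it keeps advancing within a transaction. A minimal sketch of the same trigger function with that change:
CREATE OR REPLACE FUNCTION stage.triggerlogfunction()
RETURNS trigger
LANGUAGE plpgsql
AS $function$
begin
-- clock_timestamp() is read at each call, unlike current_timestamp,
-- which is frozen at the start of the transaction
insert into stage.temptriggerlog_complete
values
(clock_timestamp(), 'Hello');
perform pg_sleep(5);
insert into stage.temptriggerlog_complete
values
(clock_timestamp(), 'Did');
perform pg_sleep(5);
insert into stage.temptriggerlog_complete
values
(clock_timestamp(), 'This Work?');
return null;
END;
$function$
;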
Related
I have a table where I would like to calculate the difference in time (in hours) between two columns after inserting a row. I would like to set up a trigger to do this whenever an insert or update is performed on the table.
My columns are delay_start, delay_stop, and delay_duration. I would like to do the following:
delay_duration = delay_stop - delay_start
The result should be a numeric(4,2) value and go into the delay_duration column. Below is what I have so far, but it will not populate the column for some reason.
BEGIN
INSERT INTO public.deckdelays(delay_duration)
VALUES(DATEDIFF(hh, delay_stop, delay_start));
RETURN NEW;
END;
I am quite new to all of this so if anyone could help I would greatly appreciate it!
If you have Postgres 12 or later you can define delay_duration as a generated column. This allows you to eliminate triggers.
create table deckdelays(id integer generated always as identity
, delay_start timestamp
, delay_stop timestamp
, delay_duration numeric(4,2)
generated always as
( extract(epoch from (delay_stop - delay_start))/3600 )
stored
--, other attributes
);
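For illustration, a quick insert with hypothetical sample values shows the column being computed automatically:
insert into deckdelays (delay_start, delay_stop)
values ('2023-01-01 08:00', '2023-01-01 09:30');
-- delay_duration is computed as 1.50 (an hour and a half)
select delay_duration from deckdelays;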
But if you insist on a trigger:
create or replace
function delayduration_func()
returns trigger
language plpgsql
as $$
begin
new.delay_duration = (extract(epoch from (new.delay_stop - new.delay_start))/3600)::numeric;
return new;
end;
$$;
create trigger delaydurationset1
before insert
or update of delay_stop, delay_start
on deckdelays
for each row
execute procedure delayduration_func();
Changes:
Before trigger instead of after. A before trigger can modify the values in a column without additional DML statements; an after trigger cannot. Issuing a DML statement on a table from within a trigger on that same table can lead to all kinds of problems and is best avoided if possible.
Trigger name and function name are not the same. It might just be me, but I do not like different things having the same name; although it works, it often leads to confusion. Always avoid confusion if possible.
Trigger fires on update of delay_start as well. An update of either delay_start or delay_stop also updates delay_duration.
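As a quick sanity check (hypothetical values, and assuming delay_duration here is a plain column rather than the generated one from the first variant):
update deckdelays
set delay_stop = delay_start + interval '2 hours 15 minutes'
where id = 1;
-- the trigger recomputes the value; expect delay_duration = 2.25
select delay_duration from deckdelays where id = 1;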
I have a problem I am stuck on for some time now. So I wanted to reach out for a little help.
I have 2 tables which are holding the same data: transactions and transactions2.
I want to write a trigger that fires every time a new row is added to transactions and inserts it into transactions2, in PL/pgSQL.
First I simply duplicated the table with
CREATE TABLE transactions2 AS (SELECT * FROM transactions WHERE 1=1);
I think I found out how to insert
CREATE OR REPLACE FUNCTION copyRow RETURNS TRIGGER AS $$
DECLARE
BEGIN
INSERT INTO transaction2
VALUES transaction;
END;
I think the syntax with this is also wrong, but how do I say, that the Trigger should start as soon as a new Insert into the first table is made?
Can anyone help me with this?
Thanks
Bobby
The correct syntax for an INSERT is INSERT INTO <table> (<column list>) VALUES (<values list>). The INSERT syntax isn't different inside a function compared to "outside". So your trigger function should look something like:
CREATE OR REPLACE FUNCTION t2t2_f ()
RETURNS TRIGGER
AS
$$
BEGIN
INSERT INTO transactions2
(column_1,
...,
column_n)
VALUES (NEW.column_1,
...,
NEW.column_n);
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
Replace the column_i placeholders with the actual column names of your table. NEW is a pseudo-record through which you can access the values of the new row.
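As a side note: assuming the two tables really do have identical column layouts, PostgreSQL also accepts a shorter form that expands the whole NEW record:
INSERT INTO transactions2 VALUES (NEW.*);
The explicit column list is safer in the long run, though, because it keeps working if the two tables ever drift apart.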
To create the trigger itself use something like:
CREATE TRIGGER t2t2_t
AFTER INSERT
ON transactions
FOR EACH ROW
EXECUTE PROCEDURE t2t2_f();
You may want to use another timing, e.g. BEFORE instead of AFTER.
That should give you something to start with. Please consider studying the comprehensive PostgreSQL Manual for further and more detailed information.
I have two tables, chourly (hourly) and cdaily (daily), and my aim is to calculate the average of values from the hourly table and save it to the daily table. I have written a trigger function like this -
CREATE OR REPLACE FUNCTION public.calculate_daily_avg()
RETURNS trigger AS
$BODY$
DECLARE chrly CURSOR for
SELECT device, date(datum) datum, avg(cpu_util) cpu_util
FROM chourly WHERE date(datum) = current_date group by device, date(datum);
BEGIN
FOR chrly_rec IN chrly
LOOP
insert into cdaily (device, datum, cpu_util)
values (chrly_rec.device, chrly_rec.datum, chrly_rec.cpu_util)
on conflict (device, datum) do update set
device=chrly_rec.device, datum=chrly_rec.datum, cpu_util=chrly_rec.cpu_util;
return NEW;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE NOTICE 'NO DATA IN chourly FOR %', current_date;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION public.calculate_daily_avg()
OWNER TO postgres;
and a trigger on hourly table like this -
CREATE TRIGGER calculate_daily_avg_trg
BEFORE INSERT OR UPDATE
ON public.chourly
FOR EACH ROW
EXECUTE PROCEDURE public.calculate_daily_avg();
But when I try to insert or update about 3000 records in the hourly table, only 3 or 4 devices are inserted, not 3000. (In the trigger I have already tried AFTER INSERT OR UPDATE, but even that gives the same result.) What am I doing wrong here? Please suggest a better way to write the trigger if you feel I have written it wrongly. Thanks!
I don't suggest using a TRIGGER for this calculation on INSERT. Try a different approach: a function executed by cron, hourly or daily.
WHY?
Because every time you INSERT one row, Postgres will run the AVG() aggregate and loop over the result to insert (based on your flow).
That means each INSERT statement will wait until the previous INSERT has committed, which will hurt your database performance under a high volume of INSERT transactions. If you somehow manage to break that rule (maybe through configuration) you will get inconsistent data, like what is happening to you right now.
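A minimal sketch of that approach, reusing the chourly and cdaily tables from the question (the function name rollup_daily_avg is hypothetical, and the on conflict clause assumes the same unique constraint on (device, datum) your trigger relied on):
create or replace function public.rollup_daily_avg()
returns void
language plpgsql
as $$
begin
-- one aggregate pass per scheduled run, instead of one per inserted row
insert into cdaily (device, datum, cpu_util)
select device, date(datum), avg(cpu_util)
from chourly
where date(datum) = current_date
group by device, date(datum)
on conflict (device, datum) do update
set cpu_util = excluded.cpu_util;
end;
$$;
-- scheduled externally, e.g. a cron entry running: psql -c 'select public.rollup_daily_avg()'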
I've got a function in a postgres database that does a lot of analysis; it consists of a succession of update and insert statements and eventually throws back some output. I'd like to figure out which statements execute slowly, without looking through the log files. (I'm much more comfortable with SQL than I am with, say, perl, to write date / time arithmetic queries in order to spot problems.)
I have a table, activity_log:
CREATE TABLE activity_log
(
action character varying(250),
action_date date,
action_tune time without time zone
);
then throughout my function, after each INSERT / UPDATE I write statements like
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, current_timestamp, 'INSERT to base_model');
So the function looks something like this:
CREATE FUNCTION rebucket(pos_control character varying, absolute_max_cpc numeric, absolute_max_bucket character varying)
RETURNS integer AS
$BODY$
DECLARE qty INT;
BEGIN
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, current_timestamp, 'Off we go');
-- Do something that takes 5 minutes
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, current_timestamp, 'INSERT to base_model');
-- Then do something else that also takes about 5 minutes ...
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, current_timestamp, 'INSERT to diagnostics');
RETURN qty;
END
$BODY$
LANGUAGE plpgsql VOLATILE;
I've got away with this in other databases in the past, but when I try this approach in Postgres (9.1 on Windows 7), whenever I run the whole function the date and time in activity_log are exactly the same for every statement within the function: in the example above,
SELECT * FROM activity_log
gets me
Off we go 2013-05-13 12:33:23:386
INSERT to base_model 2013-05-13 12:33:23:386
INSERT to diagnostics 2013-05-13 12:33:23:386
(The function takes from 5 minutes to an hour to run, depending on what parameters we feed it, and it has upwards of 20 different statements in there, so it seems highly unlikely that every statement completed within the same 1/100th of a second.)
Why is that?
The timestamp you are using always gives the start of the current transaction. If you look in the manuals you will see that you want clock_timestamp().
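Applied to the logging inserts above, that looks like this; clock_timestamp() is read at each call, so successive log rows get distinct times:
INSERT INTO activity_log (action_date, action_tune, action)
VALUES (current_date, clock_timestamp(), 'INSERT to base_model');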
I am using PostgreSQL 9.0.5, and I have a cron job that periodically reads newly created rows from a table and accumulates their values into a summary table that holds hourly data.
I need to get the latest ID (serial) that is committed and all rows before it are committed.
The currval function will not give a correct value in this case, because a transaction that obtained a higher serial value may commit earlier than others. Using a SELECT statement at a given moment, I can see that the id column is not continuous, because some rows are still not committed.
Here is some sample code I have used to test:
--test race condition
create table mydata(id serial,val int);
--run in thread 1
create or replace function insert_delay() returns void as $$
begin
insert into mydata(val) values (1);
perform pg_sleep(60);
end;
$$ language 'plpgsql';
--run in thread 2
create or replace function insert_ok() returns void as $$
begin
insert into mydata(val) values (2);
end;
$$ language 'plpgsql';
--run in thread 3
mytest=# select * from mydata; --SHOULD HAVE SEEN id = 1 and 2;
id | val
----+-----
2 | 2
(1 row)
I even tried a statement like the one below:
select max(id) from mydata where age(xmin) >= age(txid_snapshot_xmin(txid_current_snapshot())::text::xid);
But in a production environment (running a high volume of transactions), the returned max(id) will not move forward (even after all the busy transactions have finished), so this does not work either.
There isn't a really good way to do this directly. I think the best option is to create a temporary table that truncates on transaction commit, plus a trigger that inserts each newly assigned id into that table. Then you can look up the pending values from the temp table.
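A minimal sketch of that idea against the mydata table from the question (the staging table and trigger names here are hypothetical):
-- per-session staging table; rows vanish when the transaction commits
create temporary table mydata_pending (id integer) on commit delete rows;
create or replace function track_pending() returns trigger as $$
begin
insert into mydata_pending (id) values (new.id);
return new;
end;
$$ language plpgsql;
create trigger mydata_track_pending
before insert on mydata
for each row execute procedure track_pending();
Note that the temp table has to exist in every session that inserts into mydata, or the trigger will fail; while a transaction is open, its pending ids are visible in mydata_pending, and they disappear at commit.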