I'd like to get an opinion on a trigger I've written for a PostgreSQL database in PL/pgSQL. I haven't written one before and would like to get suggestions from more experienced users.
Task is simple enough:
Reduce the number of entries in a table to a set amount.
What should happen:
An INSERT into the table device_position occurs.
If the number of entries with a specific column (deviceid) value exceeds 50, delete the oldest.
Repeat.
Please let me know if you see any obvious flaws:
CREATE OR REPLACE FUNCTION trim_device_positions() RETURNS trigger AS $trim_device_positions$
DECLARE
    devicePositionCount int;
    maxDevicePos CONSTANT int := 50;
    aDeviceId device_position.id%TYPE;
BEGIN
    SELECT count(*) INTO devicePositionCount
    FROM device_position
    WHERE device_position.deviceid = NEW.deviceid;

    IF devicePositionCount > maxDevicePos THEN
        FOR aDeviceId IN
            SELECT id FROM device_position
            WHERE device_position.deviceid = NEW.deviceid
            ORDER BY device_position.id ASC
            LIMIT devicePositionCount - maxDevicePos
        LOOP
            DELETE FROM device_position WHERE device_position.id = aDeviceId;
        END LOOP;
    END IF;

    RETURN NULL;
END;
$trim_device_positions$ LANGUAGE plpgsql;
DROP TRIGGER IF EXISTS trim_device_positions_trigger ON device_position;
CREATE TRIGGER trim_device_positions_trigger AFTER INSERT ON device_position FOR EACH ROW EXECUTE PROCEDURE trim_device_positions();
Thanks for any wisdom coming my way :)
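One possible simplification, sketched under the assumption (which your ORDER BY already makes) that id grows monotonically with insertion order: the count-then-loop can be collapsed into a single DELETE that keeps only the newest rows per deviceid.

```sql
CREATE OR REPLACE FUNCTION trim_device_positions() RETURNS trigger AS $trim_device_positions$
BEGIN
    -- Keep the 50 newest rows for this deviceid, delete the rest in one statement.
    DELETE FROM device_position
    WHERE deviceid = NEW.deviceid
      AND id NOT IN (SELECT id
                     FROM device_position
                     WHERE deviceid = NEW.deviceid
                     ORDER BY id DESC
                     LIMIT 50);
    RETURN NULL;
END;
$trim_device_positions$ LANGUAGE plpgsql;
```

This avoids the separate count query and the per-row DELETE loop.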
I have checked similar threads, but none matches exactly what I am trying to do.
I have two identical tables:
t_data: holds the data of the last two months. A job will delete data older than two months.
t_data_historical: holds all data.
What I want to do: if an INSERT is done into t_data, I want to "clone" the same row into t_data_historical.
I have seen that this can be done with triggers, but there is something I don't understand: in the trigger definition I would need to know the columns and values, and I don't want to care about the INSERT query (I trust the client to do the query correctly); I only want to clone it.
If you know that the history table will have the same definition, and the name is always constructed that way, you can
EXECUTE format('INSERT INTO %I.%I VALUES (($1).*)',
TG_TABLE_SCHEMA,
TG_TABLE_NAME || '_historical')
USING NEW;
in your trigger function.
That will work for all tables, and you don't need to know the actual columns.
Laurenz,
I created the trigger like:
CREATE OR REPLACE FUNCTION copy2historical() RETURNS trigger AS
$BODY$
BEGIN
    EXECUTE format('INSERT INTO %I.%I VALUES (($1).*)', TG_TABLE_SCHEMA, TG_TABLE_NAME || '_historical')
    USING NEW;
    RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER copy2historical_tg
AFTER INSERT
ON t_data
FOR EACH ROW
EXECUTE PROCEDURE copy2historical();
thanks for your help.
I have a problem I am stuck on for some time now. So I wanted to reach out for a little help.
I have 2 tables holding the same data: transactions and transactions2.
I want to write a trigger in PL/pgSQL that fires every time a new row is added to transactions and inserts it into transactions2.
First I simply duplicated the table with
CREATE TABLE transactions2 AS SELECT * FROM transactions;
I think I found out how to insert
CREATE OR REPLACE FUNCTION copyRow RETURNS TRIGGER AS $$
DECLARE
BEGIN
INSERT INTO transaction2
VALUES transaction;
END;
I think the syntax with this is also wrong, but how do I say, that the Trigger should start as soon as a new Insert into the first table is made?
Can anyone help me with this?
Thanks
Bobby
The correct syntax for an INSERT is INSERT INTO <table> (<column list>) VALUES (<value list>). The INSERT syntax isn't different inside a function compared to "outside". So your trigger function should look something like:
CREATE OR REPLACE FUNCTION t2t2_f ()
RETURNS TRIGGER
AS
$$
BEGIN
INSERT INTO transactions2
(column_1,
...,
column_n)
VALUES (NEW.column_1,
...,
NEW.column_n);
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
Replace the column_i placeholders with the actual column names of your table. NEW is a pseudo-record through which you can access the values of the new row.
To create the trigger itself use something like:
CREATE TRIGGER t2t2_t
AFTER INSERT
ON transactions
FOR EACH ROW
EXECUTE PROCEDURE t2t2_f();
You may want to use another timing, e.g. BEFORE instead of AFTER.
That should give you something to start with. Please consider studying the comprehensive PostgreSQL Manual for further and more detailed information.
I need to check if a value inside a jsonb is already present in an array. I'm trying to achieve this with a trigger, but I'm new to this language and I don't know how to write the query.
CREATE TABLE merchants (
key uuid PRIMARY KEY,
data jsonb NOT NULL
)
Here is the trigger. I think the NEW.data.ids part is wrong.
CREATE FUNCTION validate_id_constraint() returns trigger as $$
DECLARE merchants_count int;
BEGIN
merchants_count := (SELECT count(*) FROM merchants WHERE data->'ids' #> NEW.data.ids);
IF (merchants_count != 0) THEN
RAISE EXCEPTION 'Duplicate id';
END IF;
RETURN NEW;
END;
$$ language plpgsql;
CREATE TRIGGER validate_id_constraint_trigger BEFORE INSERT OR UPDATE ON merchants
FOR EACH ROW EXECUTE PROCEDURE validate_id_constraint();
When I insert into the table I get this error message:
ERROR: missing FROM-clause entry for table "data"
LINE 1: ...LECT count(*) FROM merchants WHERE data->'ids' #> NEW.data.i...
^
I have done the query outside the trigger and it works fine
SELECT count(*) FROM merchants WHERE data->'ids' #> '["11176", "11363"]'
You get that error because you are using . instead of -> to extract the ids array in the expression NEW.data.ids.
But your trigger won't work anyway because you are not trying to avoid containment, but overlaps in the arrays.
One way you could write the trigger function is:
CREATE OR REPLACE FUNCTION validate_id_constraint() RETURNS trigger
LANGUAGE plpgsql AS
$$DECLARE
j jsonb;
BEGIN
FOR j IN
SELECT jsonb_array_elements(NEW.data->'ids')
LOOP
IF EXISTS
(SELECT 1 FROM merchants WHERE j <@ (data->'ids'))
THEN
RAISE EXCEPTION 'Duplicate IDs';
END IF;
END LOOP;
RETURN NEW;
END;$$;
You have to loop because there is no “overlaps” operator on jsonb arrays.
This is all slow and cumbersome because of your table design.
Note 1: You would be much better off if you only store data in jsonb that you do not need to manipulate in the database. In particular, you should store the ids array as a regular array column in the table. Then you can use the "overlaps" operator && and speed it up with a GIN index.
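A sketch of that redesign (the table name merchants_v2 is made up, and it assumes the ids are text values):

```sql
-- ids as a native array column instead of being buried inside jsonb
CREATE TABLE merchants_v2 (
    key  uuid PRIMARY KEY,
    ids  text[] NOT NULL,
    data jsonb NOT NULL
);

-- GIN index so the "overlaps" operator && can use an index scan
CREATE INDEX ON merchants_v2 USING gin (ids);

-- the whole trigger check then collapses to one query:
--   SELECT 1 FROM merchants_v2 WHERE ids && NEW.ids;
```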
You would be even faster if you normalized the table structure and stored the individual array entries in a separate table; then a regular unique constraint would do.
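The normalized variant might look like this (all names invented); the unique constraint then enforces the check declaratively, with no trigger at all:

```sql
-- one row per merchant id; duplicates are rejected by the unique constraint
CREATE TABLE merchant_ids (
    merchant_key uuid NOT NULL REFERENCES merchants (key),
    id           text NOT NULL UNIQUE
);
```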
Note 2: Any constraint enforced by a trigger suffers from a race condition: if two concurrent INSERTs conflict with each other, the trigger function will not see the values from the concurrent INSERT, and you may end up with inconsistent data.
I have 2 tables, hourly (chourly) and daily (cdaily), and my aim is to calculate the average of the values from the hourly table and save it to the daily table. I have written a trigger function like this -
CREATE OR REPLACE FUNCTION public.calculate_daily_avg()
RETURNS trigger AS
$BODY$
DECLARE chrly CURSOR for
SELECT device, date(datum) datum, avg(cpu_util) cpu_util
FROM chourly WHERE date(datum) = current_date group by device, date(datum);
BEGIN
FOR chrly_rec IN chrly
LOOP
insert into cdaily (device, datum, cpu_util)
values (chrly_rec.device, chrly_rec.datum, chrly_rec.cpu_util)
on conflict (device, datum) do update set
device=chrly_rec.device, datum=chrly_rec.datum, cpu_util=chrly_rec.cpu_util;
return NEW;
END LOOP;
EXCEPTION
WHEN NO_DATA_FOUND THEN
RAISE NOTICE 'NO DATA IN chourly FOR %', current_date;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION public.calculate_daily_avg()
OWNER TO postgres;
and a trigger on hourly table like this -
CREATE TRIGGER calculate_daily_avg_trg
BEFORE INSERT OR UPDATE
ON public.chourly
FOR EACH ROW
EXECUTE PROCEDURE public.calculate_daily_avg();
But when I try to insert or update about 3000 records in the hourly table, only 3 or 4 devices are inserted, not 3000. (In the trigger I have also tried AFTER INSERT OR UPDATE, but even that gives the same result.) What am I doing wrong here? Please suggest a better way to write the trigger if you feel I have written it wrongly. Thanks!
I don't suggest using a TRIGGER for this calculation on INSERT. Try a different approach: a function executed by cron hourly or daily.
WHY?
Because every time you INSERT one row, Postgres will run the aggregate function AVG() and the insert loop (based on your flow).
That means the next INSERT statement will wait until the previous INSERT is committed, which will hurt your database performance under heavy INSERT traffic. And if you somehow manage to break that rule (maybe through configuration), you will get inconsistent data, like what is happening to you right now.
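A sketch of that cron-driven alternative (the function name is invented; schedule it with cron or pg_cron instead of using a trigger):

```sql
CREATE OR REPLACE FUNCTION rollup_daily_avg() RETURNS void AS $$
    -- one set-based statement instead of a per-row trigger
    INSERT INTO cdaily (device, datum, cpu_util)
    SELECT device, date(datum), avg(cpu_util)
    FROM chourly
    WHERE date(datum) = current_date
    GROUP BY device, date(datum)
    ON CONFLICT (device, datum)
        DO UPDATE SET cpu_util = EXCLUDED.cpu_util;
$$ LANGUAGE sql;
```

A crontab entry such as 0 * * * * psql -c 'SELECT rollup_daily_avg()' then replaces the trigger entirely.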
I am trying to create a trigger that on update of one table, runs a query and updates another table with the results.
Where I am getting stuck is assigning the result of the query to a correctly typed variable.
The current error is that the array must start with "{" or other dimensional information, but as I make tweaks I get other errors.
Please see my current code below and let me know the best approach
Your help is very appreciated as I have spent a huge amount of time consulting google.
CREATE TYPE compfoo AS (ownership character varying (50), count INT);
CREATE OR REPLACE FUNCTION test1_update() RETURNS trigger AS
$$
DECLARE
largest_owner character varying (50);
temp_result compfoo[];
BEGIN
SELECT ownership, count(*) INTO temp_result
FROM austpoly2
WHERE ownership IS NOT NULL
group by ownership
ORDER BY count DESC
LIMIT 1;
largest_owner = temp_result[0].ownership;
UPDATE public.states
SET ownership= largest_owner
WHERE statecode='1';
RETURN NEW;
END;
$$
LANGUAGE plpgsql;
CREATE TRIGGER test1_update_trigger
BEFORE UPDATE ON austpoly2
FOR EACH ROW EXECUTE PROCEDURE test1_update();
Thank you, a_horse_with_no_name.
Your response combined with
temp_result compfoo%ROWTYPE;
solved this problem
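Putting the thread's resolution together, the fixed function might look like this (a sketch: a single composite variable replaces the compfoo[] array, so the array element access goes away):

```sql
CREATE OR REPLACE FUNCTION test1_update() RETURNS trigger AS
$$
DECLARE
    temp_result compfoo%ROWTYPE;  -- one (ownership, count) row, not an array
BEGIN
    SELECT ownership, count(*) INTO temp_result
    FROM austpoly2
    WHERE ownership IS NOT NULL
    GROUP BY ownership
    ORDER BY count(*) DESC
    LIMIT 1;

    UPDATE public.states
    SET ownership = temp_result.ownership
    WHERE statecode = '1';

    RETURN NEW;
END;
$$
LANGUAGE plpgsql;
```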