I'm trying to keep track of a client's database with which we sync. I need to record records_added (INSERTs) and records_updated (UPDATEs) to our table.
I'm using an UPSERT to handle the sync, and a trigger to update a table that keeps track of inserts/updates.
The issue is counting records that are updated. I have 40+ columns to check; do I have to put all of these in my check logic? Is there a more elegant way?
Section of code in question:
select
case
when old.uuid = new.uuid
and (
old.another_field != new.another_field
or old.and_another_field != new.and_another_field
-- many more columns here << This is particularly painful
) then 1
else 0
end into update_count;
Reproducible example:
-- create tables
CREATE TABLE IF NOT EXISTS example (uuid serial primary key, another_field int, and_another_field int);
CREATE TABLE IF NOT EXISTS tracker_table (
records_added integer DEFAULT 0,
records_updated integer DEFAULT 0,
created_at date unique
);
-- create function
CREATE OR REPLACE FUNCTION update_records_inserted () RETURNS TRIGGER AS $body$
DECLARE update_count INT;
DECLARE insert_count INT;
BEGIN
-- ---------------- START OF BLOCK IN QUESTION -----------------
select
case
when old.uuid = new.uuid
and (
old.another_field != new.another_field
-- many more columns here
) then 1
else 0
end into update_count;
-- ------------------ END OF BLOCK IN QUESTION ------------------
-- count INSERTs
select
case
when old.uuid is null
and new.uuid is not null then 1
else 0
end into insert_count;
-- --log the counts
-- raise notice 'update %', update_count;
-- raise notice 'insert %', insert_count;
-- insert or update count to tracker table
insert into
tracker_table(
created_at,
records_added,
records_updated
)
VALUES
(CURRENT_DATE, insert_count, update_count) ON CONFLICT (created_at) DO
UPDATE
SET
records_added = tracker_table.records_added + insert_count,
records_updated = tracker_table.records_updated + update_count;
RETURN NEW;
END;
$body$ LANGUAGE plpgsql;
-- Trigger
DROP TRIGGER IF EXISTS example_trigger ON example;
CREATE TRIGGER example_trigger
AFTER
INSERT
OR
UPDATE
ON example FOR EACH ROW EXECUTE PROCEDURE update_records_inserted ();
-- A query to insert, then update when run more than once
insert into example (uuid, another_field) values (2, 3) ON CONFLICT (uuid) DO UPDATE SET another_field = excluded.another_field + 1;
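A possible shortcut (a sketch only, assuming the tables above): compare the whole rows instead of listing every column. OLD IS DISTINCT FROM NEW checks all columns at once and treats NULLs sensibly, and branching on TG_OP keeps the INSERT path from touching OLD:
CREATE OR REPLACE FUNCTION update_records_inserted () RETURNS TRIGGER AS $body$
DECLARE
  update_count INT := 0;
  insert_count INT := 0;
BEGIN
  IF TG_OP = 'INSERT' THEN
    insert_count := 1;
  ELSIF TG_OP = 'UPDATE' AND OLD IS DISTINCT FROM NEW THEN
    -- true as soon as any of the 40+ columns differs, NULLs included
    update_count := 1;
  END IF;
  INSERT INTO tracker_table (created_at, records_added, records_updated)
  VALUES (CURRENT_DATE, insert_count, update_count)
  ON CONFLICT (created_at) DO UPDATE
    SET records_added = tracker_table.records_added + EXCLUDED.records_added,
        records_updated = tracker_table.records_updated + EXCLUDED.records_updated;
  RETURN NEW;
END;
$body$ LANGUAGE plpgsql;
If the update counting is split into its own AFTER UPDATE trigger, the test can even move into the trigger definition (WHEN (OLD.* IS DISTINCT FROM NEW.*)), so the function is not called at all for no-op updates.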
Related
I would like to create a basic audit log trigger in postgres that I can use across various tables.
My only requirement is that the audit log show each updated value as a separate entry.
i.e. if a row is inserted into a table with 5 columns, I receive 5 entries in the audit log table (one for each value added), or if that row is deleted from the table, I receive 5 entries (one for each value removed).
I have looked at various examples and am still having trouble getting the output correct, especially when the operation is UPDATE.
Here is the basic trigger flow.
-- 1. Making the Audit Table
CREATE TABLE audit_table
-- The audit table which will show:
-- Table, the ID of whats being changed, new columns, old columns,
-- user, operation(insert update delete), and timestamp
(
"src_table" TEXT NOT NULL,
"src_id" INT NOT NULL,
"col_new" TEXT,
"col_old" TEXT,
"user" TEXT DEFAULT current_user,
"action" TEXT NOT NULL,
"when_changed" TIMESTAMP
);
-- 2. Creating the base audit trigger function, the engine for processing changes --
CREATE OR REPLACE FUNCTION audit_trigger()
RETURNS trigger AS
$$
BEGIN
IF TG_OP = 'INSERT' --Only shows new values
THEN
INSERT INTO audit_table ( "src_table", "src_id", "col_new", "user", "action", "when_changed")
VALUES(TG_TABLE_NAME, TG_RELID, row_to_json(NEW), current_user, TG_OP, current_timestamp);
RETURN NEW;
ELSIF TG_OP = 'UPDATE' --Shows new and old values
THEN
INSERT INTO audit_table ("src_table", "src_id", "col_new", "col_old", "user", "action", "when_changed")
VALUES(TG_TABLE_NAME, TG_RELID, row_to_json(NEW), row_to_json(OLD), current_user, TG_OP, current_timestamp);
RETURN NEW;
ELSIF TG_OP = 'DELETE' --Only shows old values
THEN
INSERT INTO audit_table ("src_table", "src_id", "col_old", "user", "action", "when_changed")
VALUES(TG_TABLE_NAME, TG_RELID, row_to_json(OLD), current_user, TG_OP, current_timestamp);
RETURN OLD;
END IF;
END
$$
LANGUAGE 'plpgsql';
-- 3. Basic logic for calling audit trigger on tables, works for any insert, update, delete
CREATE TRIGGER test_audit_trigger
BEFORE INSERT OR UPDATE OR DELETE
ON test_table
FOR EACH ROW EXECUTE PROCEDURE audit_trigger();
The issue:
row_to_json returns the entire old or new payload as one column. I would like to return each change as a separate row in audit_table, using, for example, column_name / old_value / new_value as the schema. What is the simplest way of achieving this? Having it be JSON is not a requirement.
Also not required:
Having it use the BEFORE keyword; AFTER or a combination of the two is fine if that works better.
Having one trigger do all 3 functions, it can be a separate trigger or function per action
Returning values that are not changed, for example if an UPDATE only changes 1 value, I do not need the unchanged values in the log.
Here is a possible solution based on yours. One AFTER row trigger for insert, update and delete. The structure of audit_table is a bit different. Old and new values are saved as their text representation. You may wish to change the behaviour under insert and delete as they seem to be too verbose.
Unrelated but under your requirements the audit table has to implement the EAV (entity, attribute, value) antipattern.
drop table if exists audit_table;
create table audit_table
(
src_table text not null,
col_name text not null,
v_old text,
v_new text,
"user" text not null default current_user,
action text not null,
when_changed timestamptz not null default current_timestamp
);
create or replace function audit_tf() returns trigger language plpgsql as
$$
declare
k text;
v text;
j_new jsonb := to_jsonb(new);
j_old jsonb := to_jsonb(old);
begin
if TG_OP = 'INSERT' then -- only shows new values
for k, v in select * from jsonb_each_text(j_new) loop
insert into audit_table (src_table, col_name, v_new, action)
values (TG_TABLE_NAME, k, v, TG_OP);
end loop;
elsif TG_OP = 'UPDATE' then -- shows new and old values
for k, v in select * from jsonb_each_text(j_new) loop
if (v <> j_old ->> k) then
insert into audit_table (src_table, col_name, v_new, v_old, action)
values (TG_TABLE_NAME, k, v, j_old ->> k, TG_OP);
end if;
end loop;
elsif TG_OP = 'DELETE' then -- only shows old values
for k, v in select * from jsonb_each_text(j_old) loop
insert into audit_table (src_table, col_name, v_old, action)
values (TG_TABLE_NAME, k, v, TG_OP);
end loop;
end if;
return null;
end;
$$;
-- Demo
drop table if exists delme;
create table delme (x integer, y integer, z text, t timestamptz default now());
create trigger test_audit_trigger
after insert or update or delete on delme
for each row execute procedure audit_tf();
insert into delme (x, y, z) values (2, 2, 'two');
update delme set x = 1, y = 10 where x = 2;
delete from delme where x = 1;
select src_table, col_name, v_old, v_new, action from audit_table;
 src_table | col_name | v_old                      | v_new                      | action
-----------+----------+----------------------------+----------------------------+--------
 delme     | t        |                            | 2021-12-28T00:21:03.966621 | INSERT
 delme     | x        |                            | 2                          | INSERT
 delme     | y        |                            | 2                          | INSERT
 delme     | z        |                            | two                        | INSERT
 delme     | x        | 2                          | 1                          | UPDATE
 delme     | y        | 2                          | 10                         | UPDATE
 delme     | t        | 2021-12-28T00:21:03.966621 |                            | DELETE
 delme     | x        | 1                          |                            | DELETE
 delme     | y        | 10                         |                            | DELETE
 delme     | z        | two                        |                            | DELETE
(10 rows)
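One caveat: v <> j_old ->> k is not NULL-safe, so an UPDATE that changes a column from NULL to a value (or back) is never logged. If that matters, a NULL-safe variant of the UPDATE branch of audit_tf (a sketch) could read:
elsif TG_OP = 'UPDATE' then -- shows new and old values, including NULL transitions
  for k, v in select * from jsonb_each_text(j_new) loop
    if v is distinct from j_old ->> k then
      insert into audit_table (src_table, col_name, v_new, v_old, action)
      values (TG_TABLE_NAME, k, v, j_old ->> k, TG_OP);
    end if;
  end loop;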
I have two stored functions: delete_item, which deletes one item, logs its success or failure in an actionlog table, and returns 1 on error and 0 on success.
Second, I have another function, remove_expired, that finds what to delete and loops through the results, calling delete_item.
All of this is intended to be called from a simple bash script (a hard requirement from operations, so calling it this way is not up for discussion), and it has to return an error code when things don't work, for their reporting tools.
We want every deletion that can succeed to succeed (we don't expect errors, but humans are still humans, and errors do happen), so if we want to delete 10 items and 1 fails, we still want the other 9 to be deleted.
Second, we would really like the log entries to end up in the actionlog table in both the success and the error case, i.e. we want that log to be complete.
Since plpgsql functions don't allow manual transaction management, that seems not to be an option (unless I've missed a way to circumvent this?).
The only way I've found so far to achieve this is to wrap scripts around it outside plpgsql, but we would very much like this to be possible in pure plpgsql, so we can just give operations a psql -c ... command and they don't have to be concerned with anything else.
SQL to reproduce the problem:
DROP FUNCTION IF EXISTS remove_expired(timestamp with time zone);
DROP FUNCTION IF EXISTS delete_item(integer);
DROP TABLE IF EXISTS actionlog;
DROP TABLE IF EXISTS evil;
DROP TABLE IF EXISTS test;
CREATE TABLE test (
id serial primary key not null,
t timestamp with time zone not null
);
CREATE TABLE evil (
test_id integer not null references test(id)
);
CREATE TABLE actionlog (
eventTime timestamp with time zone not null default now(),
message text not null
);
INSERT INTO test (t)
VALUES ('2020-04-01T10:00:00+0200'),
('2020-04-01T10:15:00+0200'), -- Will not be deletable due to foreign key
('2020-04-01T10:30:00+0200')
;
INSERT INTO evil (test_id) SELECT id FROM test WHERE id = 2;
CREATE OR REPLACE FUNCTION remove_expired(timestamp with time zone)
RETURNS void
AS
$$
DECLARE
test_id int;
failure_count int = 0;
BEGIN
FOR test_id IN
SELECT id FROM test WHERE t < $1
LOOP
failure_count := delete_item(test_id) + failure_count;
END LOOP;
IF failure_count > 0 THEN
-- I want this to cause 'psql ... -c "SELECT * FROM remove_expred...' to exit with exit code != 0
RAISE 'There was one or more errors deleting. See the log for details';
END IF;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE FUNCTION delete_item(integer)
RETURNS integer
AS
$$
BEGIN
DELETE FROM test WHERE id = $1;
INSERT INTO actionlog (message)
VALUES ('Deleted with ID: ' || $1);
RETURN 0;
EXCEPTION WHEN OTHERS THEN
INSERT INTO actionlog (message)
VALUES ('Error deleting ID: ' || $1 || '. The error was: ' || SQLERRM);
RETURN 1;
END
$$ LANGUAGE plpgsql;
Thanks in advance for any useful input
You can get something close to what you expect in PostgreSQL 11 or 12, but only with procedures, because, as already said, functions always roll everything back in case of errors.
With:
DROP PROCEDURE IF EXISTS remove_expired(timestamp with time zone);
DROP PROCEDURE IF EXISTS delete_item(integer);
DROP FUNCTION IF EXISTS f_removed_expired;
DROP SEQUENCE IF EXISTS failure_count_seq;
DROP TABLE IF EXISTS actionlog;
DROP TABLE IF EXISTS evil;
DROP TABLE IF EXISTS test;
CREATE TABLE test (
id serial primary key not null,
t timestamp with time zone not null
);
CREATE TABLE evil (
test_id integer not null references test(id)
);
CREATE TABLE actionlog (
eventTime timestamp with time zone not null default now(),
message text not null
);
INSERT INTO test (t)
VALUES ('2020-04-01T10:00:00+0200'),
('2020-04-01T10:15:00+0200'), -- Will not be removed due to foreign key
('2020-04-01T10:30:00+0200')
;
select * from test where t < current_timestamp;
INSERT INTO evil (test_id) SELECT id FROM test WHERE id = 2;
CREATE SEQUENCE failure_count_seq MINVALUE 0;
SELECT SETVAL('failure_count_seq', 0, true);
CREATE OR REPLACE PROCEDURE remove_expired(timestamp with time zone)
AS
$$
DECLARE
test_id int;
failure_count int = 0;
return_code int;
BEGIN
FOR test_id IN
SELECT id FROM test WHERE t < $1
LOOP
call delete_item(test_id);
COMMIT;
END LOOP;
SELECT currval('failure_count_seq') INTO failure_count;
IF failure_count > 0 THEN
-- I want this to cause 'psql ... -c "SELECT * FROM remove_expred...' to exit with exit code != 0
RAISE 'There was one or more errors deleting. See the log for details';
END IF;
END;
$$ LANGUAGE plpgsql;
CREATE OR REPLACE PROCEDURE delete_item(in integer)
AS
$$
DECLARE
forget_value int;
BEGIN
DELETE FROM test WHERE id = $1;
INSERT INTO actionlog (message)
VALUES ('Deleted with ID: ' || $1);
EXCEPTION WHEN OTHERS THEN
INSERT INTO actionlog (message)
VALUES ('Error deleting ID: ' || $1 || '. The error was: ' || SQLERRM);
COMMIT;
SELECT NEXTVAL('failure_count_seq') INTO forget_value;
END
$$ LANGUAGE plpgsql;
--
I get:
select * from test;
id | t
----+------------------------
1 | 2020-04-01 10:00:00+02
2 | 2020-04-01 10:15:00+02
3 | 2020-04-01 10:30:00+02
(3 rows)
select current_timestamp;
current_timestamp
-------------------------------
2020-04-01 16:52:26.171975+02
(1 row)
call remove_expired(current_timestamp);
psql:test.sql:80: ERROR: There was one or more errors deleting. See the log for details
CONTEXT: PL/pgSQL function remove_expired(timestamp with time zone) line 17 at RAISE
select currval('failure_count_seq');
currval
---------
1
(1 row)
select * from test;
id | t
----+------------------------
2 | 2020-04-01 10:15:00+02
(1 row)
select * from actionlog;
eventtime | message
-------------------------------+-------------------------------------------------------------------------------------------------------------------------------------------
2020-04-01 16:52:26.172173+02 | Deleted with ID: 1
2020-04-01 16:52:26.179794+02 | Error deleting ID: 2. The error was: update or delete on table "test" violates foreign key constraint "evil_test_id_fkey" on table "evil"
2020-04-01 16:52:26.196503+02 | Deleted with ID: 3
(3 rows)
I use a sequence to record the number of failures: you can use this sequence to test for failures and return the right return code.
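On the bash side (a sketch; $CONNSTRING stands in for the real connection options): because the final RAISE makes the CALL fail, psql already exits with a non-zero status that the reporting tools can check:
psql "$CONNSTRING" -c "CALL remove_expired(current_timestamp);"
echo "exit code: $?"  # non-zero when remove_expired raised
If operations run a script file instead of a single -c command, adding -v ON_ERROR_STOP=1 gives the same behaviour.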
CREATE OR REPLACE FUNCTION public.flowrate7(
)
RETURNS TABLE(oms_id integer, flowrate numeric, chakno character varying)
LANGUAGE 'plpgsql'
COST 100
VOLATILE
ROWS 1000
AS $BODY$
declare temp_omsId integer;
declare temp_flowrate numeric(20,3);
declare temp_chakno varchar(100);
begin
DROP TABLE IF EXISTS tbl_oms;
DROP TABLE IF EXISTS tbl_calFlow;
CREATE temporary TABLE tbl_oms(omsid__ integer) ON COMMIT DELETE ROWS;
CREATE temporary TABLE tbl_calFlow(OmsId_ integer,FlowRate_ numeric(20,3),ChakNo_ varchar(100)) ON COMMIT DELETE ROWS;
insert into tbl_oms (select OmsId from MstOms);
while (select count(*) from tbl_oms) <> 0 LOOP
select temp_omsId = omsid__ from tbl_oms LIMIT 1;
temp_flowrate = (select (case when(InLetPressure > 0.5) then 1 else 0 end) from MstOms where OmsId = temp_omsId);
temp_chakno = (select ChakNo from MstOms where OmsId = temp_omsId);
insert into tbl_calFlow values (temp_omsId,temp_flowrate,temp_chakno);
delete from tbl_oms where omsid__ = temp_omsId;
END LOOP;
return query (select OmsId_,FlowRate_,ChakNo_ from tbl_calFlow);
end;
$BODY$;
ERROR: query has no destination for result data
HINT: If you want to discard the results of a SELECT, use PERFORM instead.
CONTEXT: PL/pgSQL function flowrate7() line 19 at SQL statement
SQL state: 42601
You are retrieving values into the variables incorrectly. To store the value of a query result into a variable (or two) you need to use select .. into variable ...
CREATE OR REPLACE FUNCTION public.flowrate7()
RETURNS TABLE(oms_id integer, flowrate numeric, chakno character varying)
LANGUAGE plpgsql
AS $BODY$
declare
temp_omsId integer;
temp_flowrate numeric(20,3);
temp_chakno varchar(100);
begin
DROP TABLE IF EXISTS tbl_oms;
DROP TABLE IF EXISTS tbl_calFlow;
CREATE temporary TABLE tbl_oms(omsid__ integer) ON COMMIT DELETE ROWS;
CREATE temporary TABLE tbl_calFlow(OmsId_ integer,FlowRate_ numeric(20,3),ChakNo_ varchar(100)) ON COMMIT DELETE ROWS;
insert into tbl_oms
select OmsId
from MstOms;
while (select count(*) from tbl_oms) <> 0 LOOP
select omsid__
into temp_omsId --<< here
from tbl_oms LIMIT 1;
select case when inletpressure> 0.5 then 1 else 0 end, chakno
into temp_flowrate, temp_chakno --<< here
from MstOms
where omsid = temp_omsId;
insert into tbl_calFlow values (temp_omsId,temp_flowrate,temp_chakno);
delete from tbl_oms where omsid__ = temp_omsId;
END LOOP;
return query select omsid_, flowrate_, chakno_
from tbl_calflow;
end;
$BODY$;
However, the processing in that function is unnecessarily complicated:
it first copies all rows from MstOms into the temp table tbl_oms,
retrieves the ID of one row from tbl_oms,
retrieves that one row from MstOms, calculating the flowrate,
stores that one row in the other temp table (tbl_calFlow),
deletes the just-processed row from tbl_oms,
counts the number of rows in tbl_oms and, if that is not zero, moves on to the "next" row.
This is an extremely inefficient and complicated way of calculating a simple value, and it will not scale well for large tables.
Row-by-row processing of tables in the database is an anti-pattern to begin with (and doing it by deleting, inserting and counting rows makes it even slower).
This is not how things are done in SQL. The whole inefficient loop can be replaced with a single query, which also allows you to turn the whole thing into a SQL function.
CREATE OR REPLACE FUNCTION public.flowrate7()
RETURNS TABLE(oms_id integer, flowrate numeric, chakno character varying)
LANGUAGE sql
AS $BODY$
select omsid,
case when inletpressure> 0.5 then 1 else 0 end as flowrate,
chakno
from mstoms;
$BODY$;
Actually a view would be more appropriate for this.
create or replace view flowrate7
as
select omsid,
case when inletpressure> 0.5 then 1 else 0 end as flowrate,
chakno
from mstoms;
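Usage is nearly identical either way (a quick sketch, assuming mstoms exists); the only visible difference is the parentheses on the function call:
-- set-returning function
select * from flowrate7();
-- view
select * from flowrate7;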
I have these tables:
CREATE EXTENSION citext;
CREATE EXTENSION "uuid-ossp";
CREATE TABLE cities
(
city_id serial PRIMARY KEY,
city_name citext NOT NULL UNIQUE
);
INSERT INTO cities(city_name) VALUES
('New York'), ('Paris'), ('Madrid');
CREATE TABLE etags
(
etag_name varchar(128) PRIMARY KEY,
etag_value uuid
);
INSERT INTO etags(etag_name, etag_value)
VALUES ('cities', uuid_generate_v4());
I want to update the cities etag when the cities table changes. If no rows are affected by the insert, update or delete statement, I'd like to avoid changing the cities etag, so I wrote the following statement-level trigger:
CREATE OR REPLACE FUNCTION update_etag()
RETURNS trigger AS
$BODY$
DECLARE
record_count integer;
vetag_name varchar(128);
BEGIN
GET DIAGNOSTICS record_count = ROW_COUNT;
vetag_name := TG_ARGV[0];
RAISE NOTICE 'affected %:%', vetag_name, record_count;
IF record_count = 0 THEN
RETURN NULL;
END IF;
UPDATE etags SET etag_value = uuid_generate_v4()
WHERE etag_name = vetag_name;
RETURN null;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
CREATE TRIGGER update_cities_etag_trigger
AFTER INSERT OR UPDATE OR DELETE
ON cities
FOR EACH STATEMENT
EXECUTE PROCEDURE update_etag('cities');
However GET DIAGNOSTICS record_count = ROW_COUNT; doesn't work for me, as it always returns 0.
If I execute the following:
DELETE FROM cities;
The following is output:
NOTICE: affected cities:0
Query returned successfully: 3 rows affected, 47 msec execution time.
Is there a way to figure out how many rows are affected by the statement that triggers the trigger in a PostgreSQL statement-level trigger?
PostgreSQL 10 added transition tables for AFTER triggers:
CREATE TRIGGER
...
[ REFERENCING { { OLD | NEW } TABLE [ AS ] transition_relation_name } [ ... ] ]
...
https://www.postgresql.org/docs/current/static/release-10.html
Add AFTER trigger transition tables to record changed rows (Kevin Grittner, Thomas Munro)
Transition tables are accessible from triggers written in server-side languages.
An example that solves it:
CREATE OR REPLACE FUNCTION update_etag()
RETURNS trigger AS
$BODY$
DECLARE
record_count integer;
vetag_name varchar(128);
begin
IF (TG_OP = 'DELETE') or (TG_OP = 'UPDATE') THEN
select count(*) from oldtbl into record_count ;
ELSE
select count(*) from newtbl into record_count ;
END IF;
vetag_name := TG_ARGV[0];
RAISE NOTICE 'affected %:%:%', vetag_name,TG_OP, record_count;
IF record_count = 0 THEN
RETURN NULL;
END IF;
UPDATE etags SET etag_value = uuid_generate_v4()
WHERE etag_name = vetag_name;
RETURN null;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
CREATE TRIGGER update_ins_cities_etag_trigger
AFTER INSERT
ON cities
REFERENCING NEW TABLE AS newtbl
FOR EACH STATEMENT
EXECUTE PROCEDURE update_etag('cities');
CREATE TRIGGER update_upd_cities_etag_trigger
AFTER UPDATE
ON cities
REFERENCING OLD TABLE AS oldtbl
FOR EACH STATEMENT
EXECUTE PROCEDURE update_etag('cities');
CREATE TRIGGER update_del_cities_etag_trigger
AFTER DELETE
ON cities
REFERENCING OLD TABLE AS oldtbl
FOR EACH STATEMENT
EXECUTE PROCEDURE update_etag('cities');
so=# INSERT INTO cities(city_name) VALUES
so-# ('New York'), ('Paris'), ('Madrid');
NOTICE: affected cities:INSERT:3
INSERT 0 3
so=# select * from etags;
etag_name | etag_value
-----------+--------------------------------------
cities | dc7d1525-eea7-4822-b736-5141a20764f8
(1 row)
so=# insert into cities(city_name) values ('Budapest');
NOTICE: affected cities:INSERT:1
INSERT 0 1
so=# select * from etags;
etag_name | etag_value
-----------+--------------------------------------
cities | df835f44-dada-4a94-bb62-5890f2316103
(1 row)
so=# delete from cities where city_id > 42;
NOTICE: affected cities:DELETE:0
DELETE 0
so=# select * from etags;
etag_name | etag_value
-----------+--------------------------------------
cities | df835f44-dada-4a94-bb62-5890f2316103
(1 row)
I have a trigger function that copies rows of unique values to another table on update or insert, and it ALMOST works.
The trigger should only insert a new row into the sample table if the number doesn't already exist there. At the moment it inserts a new row into the sample table with the value NULL if the number already exists in the table. I don't want it to do anything when maintbl.number = sample.nb_main.
EDIT: sample table and sample data
CREATE TABLE schema.maintbl (
sid SERIAL NOT NULL,
number INTEGER,
CONSTRAINT sid_pk PRIMARY KEY (sid)
);
CREATE TABLE schema.sample (
gid SERIAL NOT NULL,
nb_main INTEGER,
CONSTRAINT gid_pk PRIMARY KEY (gid)
);
Example and desired result
schema.maintbl    schema.sample
number            nb_main
234233            234233
234234            555555
234234
555555
555555
CREATE OR REPLACE FUNCTION schema.update_number()
RETURNS trigger AS
$BODY$
BEGIN
INSERT INTO schema.sample(
nb_main)
SELECT DISTINCT(maintbl.number)
FROM schema.maintbl
WHERE NOT EXISTS (
SELECT nb_main FROM schema.sample WHERE maintbl.number = sample.nb_main);
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION schema.update_number()
OWNER TO postgres;
CREATE TRIGGER update_number
AFTER INSERT OR UPDATE
ON schema.maintbl
FOR EACH ROW
EXECUTE PROCEDURE schema.update_number();
I just found out that my SELECT query is probably wrong: if I run the SELECT by itself it returns one row containing NULL, but it shouldn't, should it?
SELECT DISTINCT(maintbl.number)
FROM schema.maintbl
WHERE NOT EXISTS (
SELECT nb_main FROM schema.sample WHERE maintbl.number = sample.nb_main);
Any good advice?
Best
If I understood correctly, you wish to append to schema.sample a number that has been inserted or updated in schema.maintbl, right?
CREATE OR REPLACE FUNCTION schema.update_number()
RETURNS trigger AS
$BODY$
BEGIN
IF (SELECT COUNT(*) FROM schema.sample WHERE nb_main = NEW.number) = 0 THEN
INSERT INTO schema.sample(nb_main) VALUES (NEW.number);
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
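A NOT EXISTS variant along the same lines (a sketch): build the insert directly from NEW and skip NULL numbers, so the sample table never receives a NULL row:
CREATE OR REPLACE FUNCTION schema.update_number()
RETURNS trigger AS
$BODY$
BEGIN
  INSERT INTO schema.sample (nb_main)
  SELECT NEW.number
  WHERE NEW.number IS NOT NULL
    AND NOT EXISTS (
      SELECT 1 FROM schema.sample s WHERE s.nb_main = NEW.number
    );
  RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
If a unique constraint can be added on sample.nb_main, INSERT ... ON CONFLICT (nb_main) DO NOTHING would be simpler still and safe under concurrent inserts.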