PostgreSQL - Determine which columns were updated - postgresql

I have a table with many columns, one of which is a lastUpdate column.
I am writing a trigger in PL/pgSQL for Postgres 9.1 that should set a value for lastUpdate upon an UPDATE to the record.
The challenge is to exclude some pre-defined columns from that trigger; meaning, updating those specific columns shouldn't affect the lastUpdate value of the record.
Any advice?

In PostgreSQL you can access the previous values through the OLD row variable and the new ones through NEW. There is even a specific example in the docs for what you need:
CREATE TRIGGER check_update
BEFORE UPDATE ON accounts
FOR EACH ROW
WHEN (OLD.balance IS DISTINCT FROM NEW.balance)
EXECUTE PROCEDURE check_account_update();
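Applied to the question above, the WHEN clause can simply list the columns that should bump lastUpdate, so that updates touching only the excluded columns never fire the trigger at all. A minimal sketch (table and column names here are illustrative, not from the original post):

```sql
-- Only changes to col1/col2 touch lastupdate; updates that only
-- hit other columns never even invoke the trigger function.
CREATE OR REPLACE FUNCTION set_lastupdate()
RETURNS trigger AS $$
BEGIN
    NEW.lastupdate := now();
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER touch_lastupdate
BEFORE UPDATE ON mytable
FOR EACH ROW
WHEN (ROW(OLD.col1, OLD.col2) IS DISTINCT FROM ROW(NEW.col1, NEW.col2))
EXECUTE PROCEDURE set_lastupdate();
```

Note this inverts the problem: instead of listing the excluded columns, you list the ones that matter.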

I know it is an old question, but I found myself with the same need and I managed to do it with a trigger using the information_schema.columns view.
I attach here a possible solution, where the only parameters to edit are TIMEUPDATE_FIELD and EXCLUDE_FIELDS in the trigger function check_update_testtrig():
CREATE TABLE testtrig
(
id bigserial NOT NULL,
col1 integer,
col2 integer,
col3 integer,
lastupdate timestamp not null default now(),
lastread timestamp,
CONSTRAINT testtrig_pkey PRIMARY KEY (id)
)
WITH (
OIDS=FALSE
);
CREATE OR REPLACE FUNCTION check_update_testtrig()
RETURNS trigger AS
$BODY$
DECLARE
TIMEUPDATE_FIELD text := 'lastupdate';
EXCLUDE_FIELDS text[] := ARRAY['lastread'];
PK_FIELD text := 'id';
ROW_RES RECORD;
IS_DISTINCT boolean := false;
COND_RES integer := 0;
BEGIN
FOR ROW_RES IN
SELECT column_name
FROM information_schema.columns
WHERE table_schema = TG_TABLE_SCHEMA
AND table_name = TG_TABLE_NAME
AND column_name != TIMEUPDATE_FIELD
AND NOT(column_name = ANY (EXCLUDE_FIELDS))
LOOP
EXECUTE 'SELECT CASE WHEN $1.' || ROW_RES.column_name || ' IS DISTINCT FROM $2.' || ROW_RES.column_name || ' THEN 1 ELSE 0 END'
INTO STRICT COND_RES
USING NEW, OLD;
IS_DISTINCT := IS_DISTINCT OR (COND_RES = 1);
END LOOP;
IF (IS_DISTINCT)
THEN
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME || ' SET ' || TIMEUPDATE_FIELD || ' = now() WHERE ' || PK_FIELD || ' = $1.' || PK_FIELD
USING NEW;
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
CREATE TRIGGER trigger_update_testtrig
AFTER UPDATE
ON testtrig
FOR EACH ROW
EXECUTE PROCEDURE check_update_testtrig();
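A possible variant of the same function, sketched here untested: run it BEFORE UPDATE and assign to NEW directly, which avoids issuing a second UPDATE against the table (and thus avoids running the whole UPDATE machinery twice per change):

```sql
CREATE OR REPLACE FUNCTION check_update_testtrig_before()
RETURNS trigger AS
$BODY$
DECLARE
    TIMEUPDATE_FIELD text   := 'lastupdate';
    EXCLUDE_FIELDS   text[] := ARRAY['lastread'];
    ROW_RES          record;
    COND_RES         integer;
BEGIN
    FOR ROW_RES IN
        SELECT column_name
        FROM information_schema.columns
        WHERE table_schema = TG_TABLE_SCHEMA
          AND table_name   = TG_TABLE_NAME
          AND column_name  != TIMEUPDATE_FIELD
          AND NOT (column_name = ANY (EXCLUDE_FIELDS))
    LOOP
        EXECUTE 'SELECT CASE WHEN $1.' || quote_ident(ROW_RES.column_name)
             || ' IS DISTINCT FROM $2.' || quote_ident(ROW_RES.column_name)
             || ' THEN 1 ELSE 0 END'
        INTO STRICT COND_RES
        USING NEW, OLD;
        IF COND_RES = 1 THEN
            NEW.lastupdate := now();  -- assign in place, no extra UPDATE
            EXIT;                     -- one changed column is enough
        END IF;
    END LOOP;
    RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;

CREATE TRIGGER trigger_update_testtrig_before
BEFORE UPDATE ON testtrig
FOR EACH ROW
EXECUTE PROCEDURE check_update_testtrig_before();
```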

Looking at your question and your comment on the answer of Jakub Kania, I would say that part of the solution is to create an extra table.
The issue is that constraints on columns should only apply to the functioning of the column itself; they should not affect the values of other columns in the table. Specifying which columns should influence the status 'lastUpdate' is, in my opinion, business logic.
Which columns should have an impact on the value of the status column 'lastUpdate' changes along with the business, not with the table design. Therefore the solution should, in my opinion, consist of a table in combination with a trigger.
I would add a table with a column holding a list of column names (the column can be of type array) that can be used in a trigger on the table, as described by Jakub Kania. If the default behaviour should be that a new column changes the value of 'lastUpdate', then I would create the trigger so that the table only lists the names of columns that do NOT change the value of 'lastUpdate'. If the default behaviour is to not change the value of 'lastUpdate', then I would store the names of the columns that DO change it.
If the table column is within the list of columns, then the trigger should update the field lastUpdate.

Related

Loop over NEW Record

Writing an audit trigger. Inside the PostgreSQL function I'm trying to do:
'INSERT INTO ' || table_name || ' (' || columns || ') VALUES ' || NEW || ';'
When NEW is turned into a string, varchar values will not have quotes around them, which causes the INSERT to fail. It would be easier to turn all the column values of NEW into varchar values; Postgres would automatically cast them to the right types when the INSERT is executed.
Can I loop over the NEW record without turning it into json?
Looking around, I couldn't find good resource explaining how to work with Postgres Record type.
If your target table's structure is identical to that of the source table, you don't really need to iterate over the columns.
Something like this will work:
create function audit_trigger()
returns trigger
as
$$
declare
l_columns text;
l_table_name text;
begin
-- this builds the name of the target table dynamically
l_table_name := tg_table_name||'_audit';
execute format('insert into %I select ($1).*', l_table_name) using new;
return new;
end;
$$
language plpgsql;
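A quick usage sketch for the function above (hypothetical table names, assuming the audit table mirrors the source table and follows the <table>_audit naming convention the function expects):

```sql
CREATE TABLE t (id int, val text);
CREATE TABLE t_audit (LIKE t);   -- same structure, name = source || '_audit'

CREATE TRIGGER t_audit_trg
AFTER INSERT OR UPDATE ON t
FOR EACH ROW EXECUTE PROCEDURE audit_trigger();

INSERT INTO t VALUES (1, 'one'); -- the row is also written to t_audit
```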
Even if you don't want to store the changed data in a JSONB column, you can still use JSON functions to iterate over the columns of the new record if you think you need that.
The following will store the list of column names of the new record in the variable l_columns:
select string_agg(quote_ident(col), ',')
into l_columns
from jsonb_each_text(to_jsonb(new)) as t(col, val);

Postgres create universal function on all table

I have a few db tables.
I want to write a universal Postgres function that copies rows to history tables.
I have tables:
table1
table1_h
table2
table2_h
I wrote this function (with help from Stack Overflow):
CREATE OR REPLACE FUNCTION copy_history_f() RETURNS TRIGGER AS
$BODY$
DECLARE
tablename_h text:= TG_TABLE_NAME || '_h';
BEGIN
EXECUTE 'INSERT INTO ' || quote_ident(TG_TABLE_SCHEMA) || '.' || quote_ident(tablename_h) || ' VALUES (' || OLD.* ||')';
RETURN NULL;
END;
$BODY$
LANGUAGE plpgsql VOLATILE;
The function was created, but an UPDATE then raises an error:
ERROR: syntax error at or near ","
ROW 1: ...RT INTO table1_h VALUES ((12,,,0,,"Anto...
I know where the error is in this INSERT, but I don't know how to repair it.
The structures of table1 and table1_h are identical, except that table1_h has one extra column (id_h).
Can you help me create the PL/pgSQL function?
Thank you.
drop table if exists t;
drop table if exists t_h;
drop function if exists ftg();
create table t(i serial, x numeric);
insert into t(x) values(1.1),(2.2);
create table t_h(i int, x numeric);
create function ftg() returns trigger language plpgsql as $ftg$
declare
tablename_h text:= TG_TABLE_NAME || '_h';
begin
execute format($q$ insert into %I.%I select $1.*; $q$, TG_TABLE_SCHEMA, tablename_h) using old;
return null;
end $ftg$;
create trigger tg_t after delete on t for each row execute procedure ftg();
delete from t where i = 1;
select * from t_h;
dbfiddle
Update: this solves your problem, but I think you will want a bit more info in your history tables. It will be a bit more complex:
drop table if exists t;
drop table if exists t_h;
drop function if exists ftg();
create table t(i serial, x numeric);
insert into t(x) values(1.1),(2.2);
create table t_h(
hi serial, -- just ID
hd timestamp, -- timestamp
hu text, -- user who made changes
ha text, -- action
i int, x numeric
);
create function ftg() returns trigger language plpgsql as $ftg$
declare
tablename_h text:= TG_TABLE_NAME || '_h';
begin
execute format(
$q$
insert into %I.%I
select
nextval(%L || '_hi_seq'),
clock_timestamp(),
current_user,
%L,
$1.*
$q$, TG_TABLE_SCHEMA, tablename_h, tablename_h, TG_OP) using old;
return null;
end $ftg$;
create trigger tg_t after delete or update on t for each row execute procedure ftg();
update t set x = x * 2;
update t set x = x * 2 where i = 2;
delete from t where i = 1;
select * from t_h;
dbfiddle
I assume you are inserting the 'old' values from table1 into table1_h.
The additional column is your problem. When you use an INSERT without naming the columns, you must supply a matching number of values of matching types.
You must reference the columns explicitly, e.g.:
INSERT INTO table1_h (column1, column2, column3)
VALUES (a, b, c)
Consider a default value for the additional column in table table1_h.

Trigger with dynamic field name

I have a problem creating a PostgreSQL (9.3) trigger on update of a table.
I want to set new values in a loop, as in:
EXECUTE 'NEW.'|| fieldName || ':=''some prepend data'' || NEW.' || fieldName || ';';
where fieldName is set dynamically. But this string raises an error:
ERROR: syntax error at or near "NEW"
How do I go about achieving that?
You can implement that rather conveniently with the hstore operator #=:
Make sure the additional module is installed properly (once per database), in a schema that's included in your search_path:
How to use % operator from the extension pg_trgm?
Best way to install hstore on multiple schemas in a Postgres database?
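Assuming sufficient privileges, installing the module comes down to a one-liner (available since PostgreSQL 9.1):

```sql
CREATE EXTENSION IF NOT EXISTS hstore;  -- once per database
```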
Trigger function:
CREATE OR REPLACE FUNCTION tbl_insup_bef()
RETURNS TRIGGER AS
$func$
DECLARE
_prefix CONSTANT text := 'some prepend data'; -- your prefix here
_prelen CONSTANT int := 17; -- length of above string (optional optimization)
_col text := quote_ident(TG_ARGV[0]);
_val text;
BEGIN
EXECUTE 'SELECT $1.' || _col
USING NEW
INTO _val;
IF left(_val, _prelen) = _prefix THEN
-- do nothing: prefix already there!
ELSE
NEW := NEW #= hstore(_col, _prefix || _val);
END IF;
RETURN NEW;
END
$func$ LANGUAGE plpgsql;
Trigger (reuse the same func for multiple tables):
CREATE TRIGGER insup_bef
BEFORE INSERT OR UPDATE ON tbl
FOR EACH ROW
EXECUTE PROCEDURE tbl_insup_bef('fieldName'); -- unquoted, case-sensitive column name
Closely related with more explanation and advice:
Assignment of a column with dynamic column name
How to access NEW or OLD field given only the field's name?
Get values from varying columns in a generic trigger
Your problem is that EXECUTE can only be used to execute SQL statements and not PL/pgSQL statements like the assignment in your question.
You can maybe work around that like this:
Let's assume that table testtab is defined like this:
CREATE TABLE testtab (
id integer primary key,
val text
);
Then a trigger function like the following will work:
BEGIN
EXECUTE 'SELECT $1.id, ''prefix '' || $1.val' INTO NEW USING NEW;
RETURN NEW;
END;
I used the hard-coded column names id and val in my example, but that is not necessary.
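To sketch how the hard-coding could be removed (untested; the column to prefix is passed as a trigger argument, and the prefix and function name are illustrative):

```sql
CREATE OR REPLACE FUNCTION prefix_col()
RETURNS trigger AS
$$
DECLARE
    _list text;
BEGIN
    -- Build "($1).col1, 'prefix ' || ($1).target_col, ($1).col3, ..."
    -- preserving physical column order so the row matches NEW.
    SELECT string_agg(
             CASE WHEN attname = TG_ARGV[0]
                  THEN format('''prefix '' || ($1).%I', attname)
                  ELSE format('($1).%I', attname)
             END, ', ' ORDER BY attnum)
    INTO _list
    FROM pg_attribute
    WHERE attrelid = TG_RELID     -- OID of the table that fired the trigger
      AND attnum > 0
      AND NOT attisdropped;

    EXECUTE 'SELECT ' || _list INTO NEW USING NEW;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
```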
I found a working solution: the trigger should execute AFTER insert/update, not before. Then the desired statement takes the form
EXECUTE 'UPDATE ' || TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME ||
' SET ' || fieldName || '= ''prefix:'' ||''' || fieldValue || ''' WHERE id = ' || NEW.id;
fieldName and fieldValue I get in the following way:
FOR fieldName, fieldValue IN SELECT key, value FROM each(hstore(NEW)) LOOP
IF .... THEN
END IF;
END LOOP;

How to clone a RECORD in PostgreSQL

I want to loop through a query, but also retain the actual record for the next loop, so I can compare two adjacent rows.
CREATE OR REPLACE FUNCTION public.test ()
RETURNS void AS
$body$
DECLARE
previous RECORD;
actual RECORD;
query TEXT;
isdistinct BOOLEAN;
tablename VARCHAR;
columnname VARCHAR;
firstrow BOOLEAN DEFAULT TRUE;
BEGIN
tablename = 'naplo.esemeny';
columnname = 'esemeny_id';
query = 'SELECT * FROM ' || tablename || ' LIMIT 2';
FOR actual IN EXECUTE query LOOP
--do stuff
--save previous record
IF NOT firstrow THEN
EXECUTE 'SELECT ($1).' || columnname || ' IS DISTINCT FROM ($2).' || columnname
INTO isdistinct USING previous, actual;
RAISE NOTICE 'previous: %', previous.esemeny_id;
RAISE NOTICE 'actual: %', actual.esemeny_id;
RAISE NOTICE 'isdistinct: %', isdistinct;
ELSE
firstrow = false;
END IF;
previous = actual;
END LOOP;
RETURN;
END;
$body$
LANGUAGE 'plpgsql'
VOLATILE
CALLED ON NULL INPUT
SECURITY INVOKER
COST 100;
The table:
CREATE TABLE naplo.esemeny (
esemeny_id SERIAL,
felhasznalo_id VARCHAR DEFAULT "current_user"() NOT NULL,
kotesszam VARCHAR(10),
idegen_azonosito INTEGER,
esemenytipus_id VARCHAR(10),
letrehozva TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
szoveg VARCHAR,
munkalap_id VARCHAR(13),
ajanlat_id INTEGER,
CONSTRAINT esemeny_pkey PRIMARY KEY(esemeny_id),
CONSTRAINT esemeny_fk_esemenytipus FOREIGN KEY (esemenytipus_id)
REFERENCES naplo.esemenytipus(esemenytipus_id)
ON DELETE RESTRICT
ON UPDATE RESTRICT
NOT DEFERRABLE
)
WITH (oids = true);
The code above doesn't work, the following error message is thrown:
ERROR: could not identify column "esemeny_id" in record data type
LINE 1: SELECT ($1).esemeny_id IS DISTINCT FROM ($2).esemeny_id
^
QUERY: SELECT ($1).esemeny_id IS DISTINCT FROM ($2).esemeny_id
CONTEXT: PL/pgSQL function "test" line 18 at EXECUTE statement
What am I missing?
Disclaimer: I know the code doesn't make much sense; I only created it to demonstrate the problem.
This does not directly answer your question, and may be of no use at all, since you did not really describe your end goal.
If the end goal is to be able to compare the value of a column in the current row with the value of the same column in the previous row, then you might be much better off using a windowing query:
SELECT actual, previous
FROM (
SELECT mycolumn AS actual,
lag(mycolumn) OVER () AS previous
FROM mytable
ORDER BY somecriteria
) as q
WHERE previous IS NOT NULL
AND actual IS DISTINCT FROM previous
This example prints the rows where the current row is different from the previous row.
Note that I added an ORDER BY clause - it does not make sense to talk about "the previous row" without specifying ordering, otherwise you would get random results.
This is plain SQL, not PL/pgSQL, but you can wrap it in a function if you want to dynamically generate the query.
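For instance, a rough PL/pgSQL wrapper around that query with a dynamic table and column name might look like this (identifiers are illustrative, and the ORDER BY should be replaced with real ordering criteria):

```sql
CREATE OR REPLACE FUNCTION changed_pairs(_tbl regclass, _col text)
RETURNS TABLE (previous text, actual text) AS
$$
BEGIN
    RETURN QUERY EXECUTE format(
        'SELECT previous, actual
         FROM (SELECT %1$I::text AS actual,
                      (lag(%1$I) OVER (ORDER BY %1$I))::text AS previous
               FROM %2$s) q
         WHERE previous IS NOT NULL
           AND actual IS DISTINCT FROM previous',
        _col, _tbl);
END;
$$ LANGUAGE plpgsql;

-- SELECT * FROM changed_pairs('naplo.esemeny', 'esemeny_id');
```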
I am pretty sure there is a better solution for your actual problem. But to answer the question asked, here is a solution with polymorphic types:
The main problem is that you need well-known composite types to work with. The structure of anonymous records is undefined until assigned.
CREATE OR REPLACE FUNCTION public.test (actual anyelement, _col text
, OUT previous anyelement) AS
$func$
DECLARE
isdistinct bool;
BEGIN
FOR actual IN
EXECUTE format('SELECT * FROM %s LIMIT 3', pg_typeof(actual))
LOOP
EXECUTE format('SELECT ($1).%1$I IS DISTINCT FROM ($2).%1$I', _col)
INTO isdistinct
USING previous, actual;
RAISE NOTICE 'previous: %; actual: %; isdistinct: %'
, previous, actual, isdistinct;
previous := actual;
END LOOP;
previous := NULL; -- reset dummy output (optional)
END
$func$ LANGUAGE plpgsql;
Call:
SELECT public.test(NULL::naplo.esemeny, 'esemeny_id')
I am abusing an OUT parameter, since it's not possible to declare additional variables with a polymorphic composite type (at least I have failed repeatedly).
If your column name is stable you can replace the second EXECUTE with a simple expression.
I am running out of time, explanation in these related answers:
Declare variable of composite type in PostgreSQL using %TYPE
Refactor a PL/pgSQL function to return the output of various SELECT queries
Asides:
Don't quote the language name, it's an identifier, not a string.
Do you really need WITH (oids = true) in your table? OIDs on user tables are largely deprecated in modern Postgres (and the option was removed entirely in PostgreSQL 12).

Generic trigger to restrict insertions based on count

Background
In a PostgreSQL 9.0 database, there are various tables that have many-to-many relationships. The number of those relationships must be restricted. A couple of example tables include:
CREATE TABLE authentication (
id bigserial NOT NULL, -- Primary key
cookie character varying(64) NOT NULL, -- Authenticates the user with a cookie
ip_address character varying(40) NOT NULL -- Device IP address (IPv6-friendly)
)
CREATE TABLE tag_comment (
id bigserial NOT NULL, -- Primary key
comment_id bigint, -- Foreign key to the comment table
tag_name_id bigint -- Foreign key to the tag name table
)
Different relationships, however, have different limitations. For example, in the authentication table, a given ip_address is allowed 1024 cookie values; whereas, in the tag_comment table, each comment_id can have 10 associated tag_name_ids.
Problem
Currently, a number of functions have these restrictions hard-coded; scattering the limitations throughout the database, and preventing them from being changed dynamically.
Question
How would you impose a maximum many-to-many relationship limit on tables in a generic fashion?
Idea
Create a table to track the limits:
CREATE TABLE imposed_maximums (
id serial NOT NULL,
table_name character varying(128) NOT NULL,
column_group character varying(128) NOT NULL,
column_count character varying(128) NOT NULL,
max_size INTEGER
)
Establish the restrictions:
INSERT INTO imposed_maximums
(table_name, column_group, column_count, max_size) VALUES
('authentication', 'ip_address', 'cookie', 1024);
INSERT INTO imposed_maximums
(table_name, column_group, column_count, max_size) VALUES
('tag_comment', 'comment_id', 'tag_id', 10);
Create a trigger function:
CREATE OR REPLACE FUNCTION impose_maximum()
RETURNS trigger AS
$BODY$
BEGIN
-- Join this up with imposed_maximums somehow?
select
count(1)
from
-- the table name
where
-- the group column = NEW value to INSERT;
RETURN NEW;
END;
Attach the trigger to every table:
CREATE TRIGGER trigger_authentication_impose_maximum
BEFORE INSERT
ON authentication
FOR EACH ROW
EXECUTE PROCEDURE impose_maximum();
Obviously it won't work as written... is there a way to make it work, or otherwise enforce the restrictions such that they are:
in a single location; and
not hard-coded?
Thank you!
I've been writing similar generic triggers.
The trickiest part is getting at a value in the NEW record based on the column name.
I'm doing it the following way:
convert the NEW data into an array;
find the attnum of the column and use it as an index into the array.
This approach works as long as there are no commas in the data :( I don't know of other ways to convert the NEW or OLD variables into an array of values.
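For reference, on releases newer than the 9.0 targeted here, the fragile text-splitting step can be avoided entirely; both variants below are comma-safe (the first needs the hstore extension, the second needs PostgreSQL 9.3+; _val stands for a text variable):

```sql
-- Instead of:
-- _vals := string_to_array(translate(trim(NEW::text), '()', ''), ',');

-- hstore: key lookup by column name, returns text
_val := hstore(NEW) -> _im.column_group;

-- JSON: built in from 9.3 on, no extension required
_val := to_json(NEW) ->> _im.column_group;
```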
The following function might help:
CREATE OR REPLACE FUNCTION impose_maximum() RETURNS trigger AS $impose_maximum$
DECLARE
_sql text;
_cnt int8;
_vals text[];
_anum int4;
_im record;
BEGIN
_vals := string_to_array(translate(trim(NEW::text), '()', ''), ',');
FOR _im IN SELECT * FROM imposed_maximums WHERE table_name = TG_TABLE_NAME LOOP
SELECT attnum INTO _anum FROM pg_catalog.pg_attribute a
JOIN pg_catalog.pg_class t ON t.oid = a.attrelid
WHERE t.relkind = 'r' AND t.relname = TG_TABLE_NAME
AND NOT a.attisdropped AND a.attname = _im.column_group;
_sql := 'SELECT count('||quote_ident(_im.column_count)||')'||
' FROM '||quote_ident(_im.table_name)||
' WHERE '||quote_ident(_im.column_group)||' = $1';
EXECUTE _sql INTO _cnt USING _vals[_anum];
IF _cnt > CAST(_im.max_size AS int8) THEN
RAISE EXCEPTION 'Maximum of % hit for column % in table %(%=%)',
_im.max_size, _im.column_count,
_im.table_name, _im.column_group, _vals[_anum];
END IF;
END LOOP;
RETURN NEW;
END; $impose_maximum$ LANGUAGE plpgsql;
This function will check for all conditions defined for a given table.
Yes, there is a way to make it work.
In my personal opinion your idea is the way to go. It just needs one level of "meta". So the table imposed_maximums should have trigger(s) fired after INSERT, UPDATE and DELETE. The code should then in turn create, modify or remove triggers and functions.
Take a look at the EXECUTE statement of PL/pgSQL, which essentially allows you to execute any string. Needless to say, this string may contain definitions of triggers, functions, etc. Obviously, you have access to OLD and NEW in the triggers, so you can fill in the placeholders in the string and you are done.
I believe you should be able to accomplish what you want with this answer. Please note that this is my personal view on the topic and it might not be an optimal solution - I would like to see a different, maybe also more efficient, approach.
Edit - Below is a sample from one of my old projects. It is located inside a function that is triggered before update (though now that I think of it, maybe it should have been fired after ;)). And yes, the code is messy, as it does not use the nice $escape$ syntax. I was really, really young then. Nonetheless, the snippet demonstrates that it is possible to achieve what you want.
query:=''CREATE FUNCTION '' || NEW.function_name || ''('';
IF NEW.parameter=''t'' THEN
query:=query || ''integer'';
END IF;
query:=query || '') RETURNS setof '' || type_name || '' AS'' || chr(39);
query:=query || '' DECLARE list '' || type_name || ''; '';
query:=query || ''BEGIN '';
query:=query || '' FOR list IN EXECUTE '' || chr(39) || chr(39);
query:=query || temp_s || '' FROM '' || NEW.table_name;
IF NEW.parameter=''t'' THEN
query:=query || '' WHERE id='' || chr(39) || chr(39) || ''||'' || chr(36) || ''1'';
ELSE
query:=query || '';'' || chr(39) || chr(39);
END IF;
query:=query || '' LOOP RETURN NEXT list; '';
query:=query || ''END LOOP; RETURN; END; '' || chr(39);
query:=query || ''LANGUAGE '' || chr(39) || ''plpgsql'' || chr(39) || '';'';
EXECUTE query;
This function + trigger could be used as a template. If you combine them with @Sorrow's technique of dynamically generating the functions + triggers, this could solve the OP's problem.
Please note that, instead of recalculating the count for every affected row (by calling the COUNT() aggregate function), I maintain an 'incremental' count. This should be cheaper.
DROP SCHEMA tmp CASCADE;
CREATE SCHEMA tmp ;
SET search_path='tmp';
CREATE TABLE authentication
( id bigserial NOT NULL -- Primary key
, cookie varchar(64) NOT NULL -- Authenticates the user with a cookie
, ip_address varchar(40) NOT NULL -- Device IP address (IPv6-friendly)
, PRIMARY KEY (ip_address, cookie)
);
CREATE TABLE authentication_ip_count (
ip_address character varying(40) NOT NULL
PRIMARY KEY -- REFERENCES authentication(ip_address)
, refcnt INTEGER NOT NULL DEFAULT 0
--
-- This is much easier:
-- keep the max value inside the table
-- + use a table constraint
-- , maxcnt INTEGER NOT NULL DEFAULT 2 -- actually 100
-- , CONSTRAINT no_more_cookies CHECK (refcnt <= maxcnt)
);
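The "much easier" variant described in the comments above would look roughly like this (a sketch only; it would replace the counter table just defined, keeping the per-row maximum next to the count so a plain CHECK constraint enforces it):

```sql
CREATE TABLE authentication_ip_count (
  ip_address varchar(40) NOT NULL PRIMARY KEY
, refcnt INTEGER NOT NULL DEFAULT 0
, maxcnt INTEGER NOT NULL DEFAULT 1024  -- per-row maximum
, CONSTRAINT no_more_cookies CHECK (refcnt <= maxcnt)
);
```

With this, the trigger only needs to increment refcnt; the CHECK constraint raises the error by itself.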
CREATE TABLE imposed_maxima
( id serial NOT NULL
, table_name varchar NOT NULL
, column_group varchar NOT NULL
, column_count varchar NOT NULL
, max_size INTEGER NOT NULL
, PRIMARY KEY (table_name,column_group,column_count)
);
INSERT INTO imposed_maxima(table_name,column_group,column_count,max_size)
VALUES('authentication','ip_address','cookie', 2);
CREATE OR REPLACE FUNCTION authentication_impose_maximum()
RETURNS trigger AS
$BODY$
DECLARE
dummy INTEGER;
BEGIN
IF (TG_OP = 'INSERT') THEN
INSERT INTO authentication_ip_count (ip_address)
SELECT sq.*
FROM ( SELECT NEW.ip_address) sq
WHERE NOT EXISTS (
SELECT *
FROM authentication_ip_count nx
WHERE nx.ip_address = sq.ip_address
);
UPDATE authentication_ip_count
SET refcnt = refcnt + 1
WHERE ip_address = NEW.ip_address
;
SELECT COUNT(*) into dummy -- ac.refcnt, mx.max_size
FROM authentication_ip_count ac
JOIN imposed_maxima mx ON (1=1) -- outer join
WHERE ac.ip_address = NEW.ip_address
AND mx.table_name = 'authentication'
AND mx.column_group = 'ip_address'
AND mx.column_count = 'cookie'
AND ac.refcnt > mx.max_size
;
IF FOUND AND dummy > 0 THEN
RAISE EXCEPTION 'Cookie moster detected';
END IF;
ELSIF (TG_OP = 'DELETE') THEN
UPDATE authentication_ip_count
SET refcnt = refcnt - 1
WHERE ip_address = OLD.ip_address
;
DELETE FROM authentication_ip_count ac
WHERE ac.ip_address = OLD.ip_address
AND ac.refcnt <= 0
;
-- ELSIF (TG_OP = 'UPDATE') THEN
-- (Only needed if we allow updates of ip-address)
-- otherwise the count stays the same.
END IF;
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER trigger_authentication_impose_maximum
BEFORE INSERT OR UPDATE OR DELETE
ON authentication
FOR EACH ROW
EXECUTE PROCEDURE authentication_impose_maximum();
-- Test it ...
INSERT INTO authentication(ip_address, cookie) VALUES ('1.2.3.4', 'Some koekje' );
INSERT INTO authentication(ip_address, cookie) VALUES ('1.2.3.4', 'kaakje' );
INSERT INTO authentication(ip_address, cookie) VALUES ('1.2.3.4', 'Yet another cookie' );
RESULTS:
INSERT 0 1
CREATE FUNCTION
CREATE TRIGGER
INSERT 0 1
INSERT 0 1
ERROR: Cookie moster detected