Following the SQLite documentation (https://www.sqlite.org/lang_createtrigger.html), I created a very simple trigger on an even simpler table (using an SQLite3 database):
CREATE TABLE IF NOT EXISTS
users_ids
(
id INTEGER PRIMARY KEY AUTOINCREMENT,
status INTEGER DEFAULT 0
)
And the trigger:
CREATE TRIGGER addID
AFTER UPDATE OF status
ON users_ids
WHEN NEW.status > 0
BEGIN
INSERT INTO users_ids DEFAULT VALUES;
END
This gives me an error regarding the DEFAULT keyword. If I change the INSERT to this:
INSERT INTO users_ids (status) VALUES (0);
Then it works. But since the documentation clearly mentions DEFAULT VALUES as an option for INSERT (except in an INSERT trigger), I see no reason why it gives me this error. What am I missing here?
CREATE TABLE IF NOT EXISTS users_ids ...
AFTER UPDATE OF status ON user_ids ...
INSERT INTO user_id ...
Please decide what the name of that table actually is.
Furthermore, the documentation says:
The "INSERT INTO table DEFAULT VALUES" form of the INSERT statement is not supported.
This is my first question here on SO, so
let me describe the setup:
I have a PostgreSQL database (version 12) with a table guilds (containing an internal guild_id and a few other pieces of information). The guild_id is used as a foreign key for many other tables, like a teams table. Now, if a team is inserted into teams for a guild other than the guild with guild_id = 1, I want a trigger function to create the same team entry, but with a modified guild_id (it should now be 1).
Definitions of the relevant objects I have at the moment:
create table if not exists bot.guilds
(
guild_id bigserial not null
constraint guilds_pk
primary key,
guild_dc_id bigint not null
);
create table if not exists bot.teams
(
team_id bigserial not null
constraint teams_pk
primary key,
guild_id bigserial not null
constraint teams_guilds_guild_id_fk
references bot.guilds
on delete cascade,
team_name varchar(20) not null,
team_nickname varchar(10) not null
);
alter table bot.teams
owner to postgres;
create unique index if not exists teams_guild_id_team_name_uindex
on bot.teams (guild_id, team_name);
create unique index if not exists teams_guild_id_team_nickname_uindex
on bot.teams (guild_id, team_nickname);
create function bot.duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
INSERT INTO bot.teams VALUES(1,NEW."team_name",NEW."team_nickname");
RETURN NEW;
END;
$$;
create trigger duplicate_team
after insert
on bot.teams
for each row
execute procedure bot.duplicate_teams();
If I now try to insert a new row into teams (INSERT INTO bot.teams ("guild_id", "team_name", "team_nickname") VALUES (14, 'test2', 'test2');), I get the following error message (originally German, translated by me to English):
[42804] ERROR: Column »guild_id« has type integer, but the expression has the type character varying
HINT: You have to rewrite the expression or cast the value.
CONTEXT: PL/pgSQL function duplicate_teams() line 3 at SQL statement
After execution, neither the original insert nor the copy is in the table.
I tried casting the values for the guild id to serial, integer, bigserial... but I get the same error every time. I'm confused by the part of the error message that says "has the type character varying".
So my questions are:
Is my understanding correct that the error is caused by the trigger, and that due to the error in the trigger the original insert statement doesn't work either?
Why is the type varying even with a cast?
Where is the error in the code?
I tried to search for the problem, but found nothing helpful. Any hints are welcome. Thank you for your help!
EDIT:
The answer from @Lukas Thaler works, but now I get a new error:
[23505] ERROR: duplicate key value violates unique constraint »teams_guild_id_team_name_uindex«
Detail: Key »(guild_id, team_name)=(1, test3)« already exists.
CONTEXT: SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
But the table only contains "3,11,TeamUtils,TU"...
bot.teams has four columns: team_id, guild_id (both numerical data types), team_name and team_nickname (both varchars). In your INSERT statement in the function definition, you only provide three values and no association to particular columns. The default is to insert them in order, which assigns 1 to team_id and (crucially) NEW."team_name" to guild_id, hence the insert fails with a type mismatch error.
Specifying
INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname");
in your function should resolve your problem.
To answer your other questions:
The INSERT statement is being executed inside a transaction, and a failure in the trigger causes the entire transaction to be aborted and rolled back, hence you don't see the original row inserted into the table either.
The type is not deviating from the cast; it was the wrong value being inserted that caused the data type mismatch.
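As for the error in your edit: the trigger also fires for the row it inserts itself, so the copy with guild_id = 1 triggers yet another copy of the same (guild_id, team_name) pair, which violates teams_guild_id_team_name_uindex (the two nested SQL statement entries in the error context show the recursion). One way to break the loop (a sketch, not tested against your schema) is a WHEN clause so rows that already belong to guild 1 are not duplicated again:
drop trigger if exists duplicate_team on bot.teams;
create trigger duplicate_team
after insert
on bot.teams
for each row
when (NEW.guild_id <> 1)
execute procedure bot.duplicate_teams();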
I have a Postgres database with a single table. The primary key of this table is a generated UUID. I am trying to add a logging table to this database such that whenever a row is added or deleted, the logging table gets an entry. My table has the following structure:
CREATE TABLE configuration (
id uuid NOT NULL DEFAULT uuid_generate_v4(),
name text,
data json
);
My logging table has the following structure:
CREATE TABLE configuration_log (
configuration_id uuid,
new_configuration_data json,
old_configuration_data json,
"user" text,
time timestamp
);
I have added the following rules:
CREATE OR REPLACE RULE log_configuration_insert AS ON INSERT TO "configuration"
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
'{}',
current_user,
current_timestamp
);
CREATE OR REPLACE RULE log_configuration_update AS ON UPDATE TO "configuration"
WHERE NEW.data::json::text != OLD.data::json::text
DO INSERT INTO configuration_log VALUES (
NEW.id,
NEW.data,
OLD.data,
current_user,
current_timestamp
);
Now, if I insert a row into the configuration table, the UUIDs in the configuration table and the configuration_log table are different. For example, with the insert query
INSERT INTO configuration (name, data)
VALUES ('test', '{"property1":"value1"}')
the UUID is c2b6ca9b-1771-404d-baae-ae2ec69785ac in the configuration table, whereas in the configuration_log table it is 16109caa-dddc-4959-8054-0b9df6417406.
However, the update rule works as expected. So if I write an update query as
UPDATE "configuration"
SET "data" = '{"property1":"abcd"}'
WHERE "id" = 'c2b6ca9b-1771-404d-baae-ae2ec69785ac';
The configuration_log table gets the correct UUID, i.e. c2b6ca9b-1771-404d-baae-ae2ec69785ac.
I am using NEW.id in both the rules so I was expecting the same behavior. Can anyone point out what I might be doing wrong here?
Thanks
This is another good example of why rules should be avoided.
Quote from the manual:
For any reference to NEW, the target list of the original query is searched for a corresponding entry. If found, that entry's expression replaces the reference.
So NEW.id is replaced with uuid_generate_v4(), which explains why you are seeing a different value.
You should rewrite this as a trigger.
Btw: jsonb is preferred over json; then you can also get rid of the (essentially incorrect) cast of the json column to text for comparing the content.
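A minimal sketch of that trigger rewrite (the function and trigger names are mine; it assumes the configuration and configuration_log tables exactly as defined above):
CREATE OR REPLACE FUNCTION log_configuration_change() RETURNS trigger AS $$
BEGIN
    -- json has no equality operator, so compare the text form
    -- (with jsonb, as suggested above, NEW.data = OLD.data would work directly)
    IF TG_OP = 'UPDATE' AND NEW.data::text = OLD.data::text THEN
        RETURN NEW; -- data unchanged, nothing to log
    END IF;
    INSERT INTO configuration_log VALUES (
        NEW.id, -- in a trigger this is the actually inserted UUID, not a re-evaluated default
        NEW.data,
        CASE WHEN TG_OP = 'UPDATE' THEN OLD.data ELSE '{}'::json END,
        current_user,
        current_timestamp
    );
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;
CREATE TRIGGER log_configuration
AFTER INSERT OR UPDATE ON configuration
FOR EACH ROW EXECUTE PROCEDURE log_configuration_change();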
I need to log any changes made to some table via a trigger that inserts the older version of the modified row into another table, with some additional data like:
-which action was performed
-when this action was performed
-by whom.
I have a problem with the last requirement. The SQL is executed somewhere in Java via JDBC, and I need to somehow pass the logged-in user's id, stored in a variable, to the Postgres table where all the older versions of modified rows will be stored.
Is it even possible?
It may be a stupid question, but I'm desperately trying to avoid inserting data like that manually in Java. Triggers have done some of the work for me, but not everything I need.
Demonstrative code below (I've cut out some code for security reasons):
"notes" table:
CREATE TABLE my_database.notes
(
pk serial NOT NULL,
client_pk integer,
description text,
CONSTRAINT notes_pkey PRIMARY KEY (pk)
)
Table storing older versions of every row changed in "notes" table:
CREATE TABLE my_database_log.notes_log
(
pk serial NOT NULL,
note_pk integer,
client_pk integer,
description text,
who_changed integer DEFAULT 0, -- how to fill in this field?
action_date timestamp without time zone DEFAULT now(), --when action was performed
action character varying, --which action was performed
CONSTRAINT notes_log_pkey PRIMARY KEY (pk)
)
Trigger for "notes" table:
CREATE TRIGGER after_insert_or_update_note_trigger
AFTER INSERT OR UPDATE
ON my_database.notes
FOR EACH ROW
EXECUTE PROCEDURE my_database.notes_new_row_log();
Procedure executed by trigger:
CREATE OR REPLACE FUNCTION my_database.notes_new_row_log()
RETURNS trigger AS
$BODY$
BEGIN
INSERT INTO my_database_log.notes_log(
note_pk, client_pk, description, action)
VALUES (
NEW.pk, NEW.client_pk, NEW.description, TG_OP);
RETURN NEW;
END;
$BODY$
LANGUAGE plpgsql VOLATILE
COST 100;
ALTER FUNCTION my_database.notes_new_row_log()
OWNER TO database_owner;
According to @Nick Barnes' hint in the comments, there is a need to declare a variable in the postgresql.conf file (this applies to PostgreSQL before 9.2; newer versions accept any two-part setting name without a declaration):
...
#----------------------------------------------------------------------------
# CUSTOMIZED OPTIONS
#----------------------------------------------------------------------------
custom_variable_classes = 'myapp' # list of custom variable class names
myapp.user_id = 0
and call:
SET LOCAL myapp.user_id = <set_user_id_value_here>
before the query that fires the trigger.
To read the variable in the trigger, use:
current_setting('myapp.user_id')
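Putting the pieces together, a sketch (the user id value 42 and the UPDATE are placeholders; the setting name matches the myapp.user_id declared above):
BEGIN;
SET LOCAL myapp.user_id = '42';  -- set by the application (e.g. via JDBC) in the same transaction
UPDATE my_database.notes SET description = 'new text' WHERE pk = 1;
COMMIT;
-- and inside notes_new_row_log(), read the setting back:
INSERT INTO my_database_log.notes_log(note_pk, client_pk, description, who_changed, action)
VALUES (NEW.pk, NEW.client_pk, NEW.description,
        current_setting('myapp.user_id')::integer, TG_OP);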
I have the following table in Postgres:
CREATE TABLE "test" (
"id" serial NOT NULL PRIMARY KEY,
"value" text
)
I am doing the following insertions:
insert into test (id, value) values (1, 'alpha')
insert into test (id, value) values (2, 'beta')
insert into test (value) values ('gamma')
In the first two inserts I explicitly specify the id. However, the table's auto-increment pointer is not updated in this case, so on the third insert I get the error:
ERROR: duplicate key value violates unique constraint "test_pkey"
DETAIL: Key (id)=(1) already exists.
I never faced this problem in MySQL with either the MyISAM or InnoDB engine. Explicit or not, MySQL always updates the auto-increment pointer based on the max row id.
What is the workaround for this problem in Postgres? I need it because I want tighter control over some ids in my table.
UPDATE:
I need it because for some values I need to have a fixed id. For other new entries I don't mind creating new ones.
I think it may be possible by manually advancing the sequence's nextval pointer to max(id) + 1 whenever I explicitly insert ids. But I am not sure how to do that.
That's how it's supposed to work: nextval('test_id_seq') is only called when the system needs a value for this column and you have not provided one. If you provide a value, no such call is performed, and consequently the sequence is not "updated".
You could work around this by manually setting the value of the sequence after your last insert with explicitly provided values:
SELECT setval('test_id_seq', (SELECT MAX(id) from "test"));
The name of the sequence is autogenerated and follows the pattern tablename_columnname_seq (it may be truncated for long table or column names).
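If you would rather not rely on the generated name, pg_get_serial_sequence() can look it up for you; a sketch using the test table above:
SELECT setval(pg_get_serial_sequence('test', 'id'), (SELECT MAX(id) FROM "test"));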
In the recent version of Django, this topic is discussed in the documentation:
Django uses PostgreSQL’s SERIAL data type to store auto-incrementing
primary keys. A SERIAL column is populated with values from a sequence
that keeps track of the next available value. Manually assigning a
value to an auto-incrementing field doesn’t update the field’s
sequence, which might later cause a conflict.
Ref: https://docs.djangoproject.com/en/dev/ref/databases/#manually-specified-autoincrement-pk
There is also the management command manage.py sqlsequencereset app_label ... which generates SQL statements for resetting the sequences of the given app name(s).
Ref: https://docs.djangoproject.com/en/dev/ref/django-admin/#django-admin-sqlsequencereset
For example these SQL statements were generated by manage.py sqlsequencereset my_app_in_my_project:
BEGIN;
SELECT setval(pg_get_serial_sequence('"my_project_aaa"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_aaa";
SELECT setval(pg_get_serial_sequence('"my_project_bbb"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_bbb";
SELECT setval(pg_get_serial_sequence('"my_project_ccc"','id'), coalesce(max("id"), 1), max("id") IS NOT null) FROM "my_project_ccc";
COMMIT;
It can be done automatically using a trigger. This way you are sure that the sequence always stays in step with the largest value in the column.
CREATE OR REPLACE FUNCTION set_serial_id_seq()
RETURNS trigger AS
$BODY$
BEGIN
-- Builds and runs: SELECT setval('<table>_<column>_seq', (SELECT MAX(<column>) FROM <table>));
EXECUTE (FORMAT('SELECT setval(''%s_%s_seq'', (SELECT MAX(%s) from %s));',
TG_TABLE_NAME,
TG_ARGV[0],
TG_ARGV[0],
TG_TABLE_NAME));
RETURN OLD;
END;
$BODY$
LANGUAGE plpgsql;
CREATE TRIGGER set_mytable_id_seq
AFTER INSERT OR UPDATE OR DELETE
ON mytable
FOR EACH STATEMENT
EXECUTE PROCEDURE set_serial_id_seq('id');
The function can be reused for multiple tables: change "mytable" to the table of interest and pass the column name (here id) as the trigger argument, since the sequence name is built as <table>_<column>_seq.
For more info regarding triggers:
https://www.postgresql.org/docs/9.1/plpgsql-trigger.html
https://www.postgresql.org/docs/9.1/sql-createtrigger.html
I'm converting a MySQL table to PostgreSQL for the first time in my life and running into the traditional newbie problem of having no auto_increment.
Now I've found out that the postgres solution is to use a sequence and then request the nextval() of this sequence as the default value every time you insert. I've also read that the SERIAL type creates a sequence and a primary key automatically, and that nextval() increments the counter even when called inside transactions to avoid locking the sequence.
What I can't find addressed is the issue of what happens when you manually insert values into a field with a UNIQUE or PRIMARY constraint and a nextval() of a sequence as default. As far as I can see, this causes the INSERT to fail when the sequence reaches that value.
Is there a simple (or common) way to fix this?
A clear explanation would be very much appreciated.
Update: If you feel I shouldn't do this, will never be able to fix this, or am making some flawed assumptions, please feel free to point them out in your answers. Above all, please tell me what to do instead to offer programmers a stable and robust database that can't be corrupted with a simple insert (preferably without hiding everything behind stored procedures).
If you're migrating your data, then I would drop the sequence default on the column, perform all of your inserts, use setval() to set the sequence to the maximum value of your data, and then reinstate the column's nextval() default.
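A sketch of those steps, using the illustrative names mytable / id / mytable_id_seq:
ALTER TABLE mytable ALTER COLUMN id DROP DEFAULT;
-- ... perform all inserts with explicit ids here ...
SELECT setval('mytable_id_seq', (SELECT MAX(id) FROM mytable));
ALTER TABLE mytable ALTER COLUMN id SET DEFAULT nextval('mytable_id_seq');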
You can create a trigger which will check if currval('id_sequence_name')>=NEW.id.
If your transaction did not use the default value or nextval('id_sequence_name'), then the currval function will throw an error, as it only works when the sequence has been updated in the current session. If you use nextval and then try to insert a bigger primary key, it will throw another error. The transaction will then be aborted.
This prevents inserting any bad primary keys which would break the serial sequence.
Example code:
create table test (id serial primary key, value text);
create or replace function test_id_check() returns trigger language plpgsql as
$$ begin
if ( currval('test_id_seq')<NEW.id ) then
raise exception 'currval(test_id_seq)<id';
end if;
return NEW;
end; $$;
create trigger test_id_seq_check before insert or update of id on test
for each row execute procedure test_id_check();
Then inserting with default primary key will work fine:
insert into test(value) values ('a'),('b'),('c'),('d');
But inserting a too-big primary key will error out and abort:
insert into test(id, value) values (10,'z');
To expand on Tometzky's great answer, here is a more general version:
CREATE OR REPLACE FUNCTION check_serial() RETURNS trigger AS $$
BEGIN
IF currval(TG_TABLE_SCHEMA || '.' || TG_TABLE_NAME || '_' || TG_ARGV[0] || '_seq') <
(row_to_json(NEW)->>TG_ARGV[0])::bigint
THEN RAISE SQLSTATE '55000'; -- same as currval() of uninitialized sequence
END IF;
RETURN NULL;
EXCEPTION
WHEN SQLSTATE '55000'
THEN RAISE 'manual entry of serial field %.%.% disallowed',
TG_TABLE_SCHEMA, TG_TABLE_NAME, TG_ARGV[0]
USING HINT = 'use DEFAULT instead of specifying value manually',
SCHEMA = TG_TABLE_SCHEMA, TABLE = TG_TABLE_NAME, COLUMN = TG_ARGV[0];
END;
$$ LANGUAGE plpgsql;
Which you can apply to any column, say test.id, thusly:
CREATE CONSTRAINT TRIGGER test_id_check
AFTER INSERT OR UPDATE OF id ON test
FOR EACH ROW EXECUTE PROCEDURE check_serial(id);
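With that in place, a manual id is rejected with the custom error (a sketch of the expected output, assuming the table sits in the default public schema):
INSERT INTO test (id, value) VALUES (10, 'z');
-- ERROR:  manual entry of serial field public.test.id disallowed
-- HINT:  use DEFAULT instead of specifying value manually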
I don't exactly understand your question, but if your goal is just to do the insert and end up with a valid field (e.g. an id), then insert the values without the id field; that's what the default is for. It will work.
E.g. having an id serial NOT NULL and a CONSTRAINT table_pkey PRIMARY KEY (id) in the table definition will auto-set the id and auto-increment the sequence table_id_seq.
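For example (a quick sketch reusing the test table from the question; both statements let the sequence assign the id):
INSERT INTO test (value) VALUES ('alpha');              -- id comes from test_id_seq
INSERT INTO test (id, value) VALUES (DEFAULT, 'beta');  -- explicit DEFAULT, same effect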
What about using a CHECK?
CREATE SEQUENCE pk_test
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
CREATE TABLE test (
id INT PRIMARY KEY CHECK (id=currval('pk_test')) DEFAULT nextval('pk_test'),
num int not null
);
ALTER SEQUENCE pk_test OWNED BY test.id;
-- Testing:
INSERT INTO test (num) VALUES (3) RETURNING id, num;
1,3 -- OK (first execution)
2,3 -- OK (second execution)
INSERT INTO test (id, num) values (30,3) RETURNING id, num;
/*
ERROR: new row for relation "test" violates check constraint "test_id_check"
DETAIL: Failing row contains (30, 3).
********** Error **********
ERROR: new row for relation "test" violates check constraint "test_id_check"
SQL state: 23514
Detail: Failing row contains (30, 3).
*/
DROP TABLE test;