I'm experiencing a peculiar problem with a Postgres table. When I try to perform a simple INSERT, it returns an error - duplicate key value violates unique constraint.
For starters, here's the schema for the table:
CREATE TABLE app.guardians
(
guardian_id serial NOT NULL,
first_name character varying NOT NULL,
middle_name character varying,
last_name character varying NOT NULL,
id_number character varying NOT NULL,
telephone character varying,
email character varying,
creation_date timestamp without time zone NOT NULL DEFAULT now(),
created_by integer,
active boolean NOT NULL DEFAULT true,
occupation character varying,
address character varying,
marital_status character varying,
modified_date timestamp without time zone,
modified_by integer,
CONSTRAINT "PK_guardian_id" PRIMARY KEY (guardian_id ),
CONSTRAINT "U_id_number" UNIQUE (id_number )
)
WITH (
OIDS=FALSE
);
ALTER TABLE app.guardians
OWNER TO postgres;
The table has 400 rows. Now suppose I try to perform this simple INSERT:
INSERT INTO app.guardians(first_name, last_name, id_number) VALUES('This', 'Fails', '123456');
I get the error:
ERROR: duplicate key value violates unique constraint "PK_guardian_id"
DETAIL: Key (guardian_id)=(2) already exists.
If I try running the same query again, the detail on the error message will be:
DETAIL: Key (guardian_id)=(3) already exists.
And
DETAIL: Key (guardian_id)=(4) already exists.
and so on, incrementing until it reaches a guardian_id that does not exist yet.
What could have gone wrong with this particular table, and how can it be rectified? I reckon it might have to do with the fact that the table had earlier been dropped with CASCADE and the data re-entered afresh, but I'm not sure about this theory.
The reason for this error is that the sequence behind guardian_id is out of sync with the table's data. This happens when you insert values into an auto-increment column manually: explicit inserts do not advance the sequence, so nextval() eventually hands out values that already exist. Re-entering the data after dropping the table with CASCADE, as you describe, would do exactly that.
So you have to reset the sequence to the maximum value of the column. Note that the sequence is named after the table and column rather than after the "PK_guardian_id" constraint, and that ALTER SEQUENCE ... START WITH does not accept a subquery, so use setval() instead:
SELECT setval(
    pg_get_serial_sequence('app.guardians', 'guardian_id'),
    (SELECT max(guardian_id) FROM app.guardians)
);
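You can verify the fix by inspecting the sequence's state (assuming the serial column got the default sequence name app.guardians_guardian_id_seq):
SELECT last_value, is_called FROM app.guardians_guardian_id_seq;
Once last_value is at or above max(guardian_id), the INSERT from the question will succeed.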
Note:
To avoid blocking of concurrent transactions that obtain numbers from the same sequence, ALTER SEQUENCE's effects on the sequence generation parameters are never rolled back; those changes take effect immediately and are not reversible. However, the OWNED BY, OWNER TO, RENAME TO, and SET SCHEMA clauses cause ordinary catalog updates that can be rolled back.
ALTER SEQUENCE will not immediately affect nextval results in backends, other than the current one, that have preallocated (cached) sequence values. They will use up all cached values prior to noticing the changed sequence generation parameters. The current backend will be affected immediately.
Documentation:
https://www.postgresql.org/docs/9.6/static/sql-altersequence.html
So, my first question here on SO;
let me describe the setup:
I have a PostgreSQL database (version 12) with a table guilds (containing an internal guild_id and some other information). The guild_id is used as a foreign key in many other tables, such as a teams table. Now, if a team is inserted into teams for a guild other than the guild with guild_id = 1, I want a trigger function to create the same team entry again, but with guild_id = 1.
Definition of the relevant things I have atm:
create table if not exists bot.guilds
(
guild_id bigserial not null
constraint guilds_pk
primary key,
guild_dc_id bigint not null
);
create table if not exists bot.teams
(
team_id bigserial not null
constraint teams_pk
primary key,
guild_id bigserial not null
constraint teams_guilds_guild_id_fk
references bot.guilds
on delete cascade,
team_name varchar(20) not null,
team_nickname varchar(10) not null
);
alter table bot.teams
owner to postgres;
create unique index if not exists teams_guild_id_team_name_uindex
on bot.teams (guild_id, team_name);
create unique index if not exists teams_guild_id_team_nickname_uindex
on bot.teams (guild_id, team_nickname);
create function bot.duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
INSERT INTO bot.teams VALUES(1,NEW."team_name",NEW."team_nickname");
RETURN NEW;
END;
$$;
create trigger duplicate_team
after insert
on bot.teams
for each row
execute procedure bot.duplicate_teams();
If I now try to insert a new row into teams (INSERT INTO bot.teams ("guild_id", "team_name", "team_nickname") VALUES (14, 'test2', 'test2');), I get the following error message (original German, translated by me to English):
[42804] ERROR: column »guild_id« has type integer, but the expression has type character varying
HINT: You have to rewrite the expression or cast the value.
CONTEXT: PL/pgSQL function duplicate_teams() line 3 at SQL statement
After execution, neither the original row nor the copy is in the table.
I tried casting the values for the guild id to serial, integer, bigserial... but I get the same error every time. I'm confused by the part of the error message saying "has the type character varying".
So my questions are:
Is my understanding correct that the error is caused by the trigger, and that the original insert statement also fails due to the error in the trigger?
Why is the type character varying, even with a cast?
Where is the error in the code?
I tried to search for the problem but found nothing helpful. Any hints are welcome. Thank you for your help!
EDIT:
The answer from @Lukas Thaler works, but now I get a new error:
[23505] ERROR: duplicate key value violates unique constraint »teams_guild_id_team_name_uindex«
DETAIL: Key »(guild_id, team_name)=(1, test3)« already exists.
CONTEXT: SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
But the table only contains "3,11,TeamUtils,TU"...
bot.teams has four columns: team_id, guild_id (both numerical data types), team_name and team_nickname (both varchars). In your INSERT statement in the function definition, you only provide three values and no association to particular columns. The default is to insert them in order, which assigns 1 to team_id and (crucially) NEW."team_name" to guild_id, hence the insert fails with a type mismatch error.
Specifying
INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname");
in your function should resolve your problem.
To answer your other questions:
The INSERT statement is executed inside a transaction, and a failure in the trigger causes the entire transaction to be aborted and rolled back; hence you don't see the original row in the table either.
The type is not deviating from the cast; it was the wrong value being inserted that caused the data type mismatch.
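As for the new error in your edit: the corrected trigger now fires for its own INSERT as well (the guild_id = 1 copy), tries to duplicate that copy once more, and violates teams_guild_id_team_name_uindex. A minimal sketch of a guarded version (the WHEN clause is my suggestion, not part of the original setup):
drop trigger if exists duplicate_team on bot.teams;
create or replace function bot.duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
-- Copy the team to guild 1, naming the target columns explicitly.
INSERT INTO bot.teams(guild_id, team_name, team_nickname)
VALUES (1, NEW."team_name", NEW."team_nickname");
RETURN NEW;
END;
$$;
create trigger duplicate_team
after insert
on bot.teams
for each row
-- Skip rows that already belong to guild 1, so the trigger
-- does not fire again for its own insert.
when (NEW.guild_id <> 1)
execute procedure bot.duplicate_teams();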
I have a table
CREATE TABLE users (
id BIGSERIAL NOT NULL PRIMARY KEY,
created_at TIMESTAMP DEFAULT NOW()
);
First I run
INSERT INTO users (id) VALUES (1);
After I run
INSERT INTO users (created_at) VALUES ('2016-11-10T09:37:59+00:00');
And I get
ERROR: duplicate key value violates unique constraint "users_pkey"
DETAIL: Key (id)=(1) already exists.
Why is the id sequence not incremented when I insert the "id" myself?
That is because the DEFAULT clause only gets evaluated if you either omit the column from the INSERT statement's column list or insert the special value DEFAULT.
In your first INSERT, the DEFAULT clause is not evaluated, so the sequence is not increased. Your second INSERT uses the DEFAULT clause, the sequence is increased and returns the value 1, which collides with the value explicitly given in the previous INSERT.
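For illustration, each of these forms evaluates the DEFAULT clause and advances the sequence:
-- Explicit DEFAULT keyword:
INSERT INTO users (id) VALUES (DEFAULT);
-- Column omitted from the column list:
INSERT INTO users (created_at) VALUES (NOW());
-- All columns defaulted:
INSERT INTO users DEFAULT VALUES;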
Don't mix INSERTs that rely on automatic value generation from a sequence with INSERTs that specify the column explicitly. Or, if you have to, make sure the values cannot collide, e.g. by using even numbers for automatically generated values and odd numbers for explicit INSERTs, as sketched below.
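A minimal sketch of that even/odd split (the sequence parameters are my addition to the table from the question):
-- Automatic ids are even; odd ids are reserved for manual INSERTs by convention.
CREATE SEQUENCE users_id_seq START WITH 2 INCREMENT BY 2;
CREATE TABLE users (
    id BIGINT NOT NULL PRIMARY KEY DEFAULT nextval('users_id_seq'),
    created_at TIMESTAMP DEFAULT NOW()
);
ALTER SEQUENCE users_id_seq OWNED BY users.id;
INSERT INTO users (id) VALUES (1);              -- manual, odd
INSERT INTO users (created_at) VALUES (NOW());  -- automatic, gets id 2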
I want to add revisioning for records in an existing application which stores data in a PostgreSQL database. I read about strategies e.g. in this question, this question and this blog post.
I think the approach of creating a second history table, which will rarely be queried, will work best. However, I have some practical problems. Let's say this is the table I want to add revision control to:
create table people(
id serial not null primary key,
name varchar(255) not null
);
For this very simple table my history table could look like this:
create table people_history(
peopleId int not null references people(id) on delete cascade on update restrict,
revision int not null,
revisionTimestamp timestamptz not null default current_timestamp,
name character varying(255) not null,
primary key(peopleId, revision)
);
And this brings the first problems up:
How do I generate the revision number?
Of course I could create a sequence from which I request revision numbers, which would be easy. However, that would leave large gaps between revisions per person, as many people share the same sequence, and it would feel more natural if the revision numbers were ascending and gap-free per person.
So I am tempted to find my revision number with select max(revision)+1 from ... where peopleId=.... However, that could lead to a race condition if two threads ask for the next revision number and try to insert. That is very unlikely, I have to admit (especially in my case, where few updates happen anyway), and it would not corrupt data, since the duplicate primary key would cause a transaction rollback, but it is not pretty either. I wonder if there is a prettier solution.
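For concreteness, the max()+1 approach would look like this when archiving a row (a sketch against the tables above; the subselect is exactly where two concurrent transactions could collide):
-- Archive the current state of person 1 with the next per-person revision.
insert into people_history(peopleId, revision, name)
select id,
       coalesce((select max(revision)
                 from people_history
                 where peopleId = people.id), 0) + 1,
       name
from people
where id = 1;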
How do I insert data into the history table?
Two ways come to mind: manually, in every statement that updates the main table, or using a trigger. A trigger sounds less error-prone, as it is less likely that I forget about a query somewhere. However, I cannot communicate to the application exactly which revision number was just created, can I? So if I want to create a couple of event tables like this:
create table peopleUserEditEvent (
peopleId int not null,
revision int not null,
userId int not null references users(id) on delete set null on update restrict,
comment text not null default '',
primary key(peopleId, revision),
foreign key (peopleId, revision) references people_history
);
That table lists some metadata for a revision which explains why the data was changed. In this case, a user with a specific ID edited the data and might have supplied a comment.
In another case (and another event table), a cronjob might have changed something and documents the event, which probably has no userId and no comment, but other metadata.
To add that event data I need the revision id, and if the revision id was created by a trigger it will be difficult to find out (or is there a practical way to do so?).
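One practical way I can imagine (my own assumption, not taken from the answers): since a trigger runs inside the same transaction as the UPDATE that fired it, the application can read the freshly created revision back right after the statement:
begin;
update people set name = 'New Name' where id = 1;
-- The history trigger has already fired, so its row is visible here:
select max(revision) from people_history where peopleId = 1;
-- ...insert the matching peopleUserEditEvent row using that revision...
commit;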
Well, you need one revision strategy for all the tables and columns you have. You can create one table to record all changes, and insert into it every time you run an UPDATE, INSERT, or DELETE statement. Maybe this example, the changelog table from the iDempiere framework, can help you:
CREATE TABLE ad_changelog (
ad_changelog_id NUMERIC(10,0) NOT NULL,
ad_session_id NUMERIC(10,0) NOT NULL,
ad_table_id NUMERIC(10,0) NOT NULL,
ad_column_id NUMERIC(10,0) NOT NULL,
isactive CHAR(1) DEFAULT 'Y'::bpchar NOT NULL,
created TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
createdby NUMERIC(10,0) NOT NULL,
updated TIMESTAMP WITHOUT TIME ZONE DEFAULT now() NOT NULL,
updatedby NUMERIC(10,0) NOT NULL,
record_id NUMERIC(10,0) NOT NULL,
oldvalue VARCHAR(2000),
newvalue VARCHAR(2000),
undo CHAR(1),
redo CHAR(1),
iscustomization CHAR(1) DEFAULT 'N'::bpchar NOT NULL,
description VARCHAR(255),
ad_changelog_uu VARCHAR(36) DEFAULT NULL::character varying,
CONSTRAINT adcolumn_adchangelog FOREIGN KEY (ad_column_id)
REFERENCES adempiere.ad_column(ad_column_id)
MATCH PARTIAL
ON DELETE CASCADE
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED,
CONSTRAINT adsession_adchangelog FOREIGN KEY (ad_session_id)
REFERENCES adempiere.ad_session(ad_session_id)
MATCH PARTIAL
ON DELETE NO ACTION
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED,
CONSTRAINT adtable_adchangelog FOREIGN KEY (ad_table_id)
REFERENCES adempiere.ad_table(ad_table_id)
MATCH PARTIAL
ON DELETE CASCADE
ON UPDATE NO ACTION
DEFERRABLE
INITIALLY DEFERRED
)
WITH (oids = false);
CREATE INDEX ad_changelog_speed ON adempiere.ad_changelog
USING btree (ad_table_id, record_id);
CREATE UNIQUE INDEX ad_changelog_uu_idx ON adempiere.ad_changelog
USING btree (ad_changelog_uu COLLATE pg_catalog."default");
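To populate a table like that automatically, you can attach a generic row-level trigger to every audited table. A much simplified sketch (my own, not from iDempiere; it stores whole rows as JSON text instead of one changelog row per changed column):
CREATE TABLE changelog (
    changelog_id bigserial PRIMARY KEY,
    table_name text NOT NULL,
    operation text NOT NULL,
    changed_at timestamptz NOT NULL DEFAULT now(),
    oldvalue text,
    newvalue text
);
CREATE OR REPLACE FUNCTION log_change() RETURNS trigger
LANGUAGE plpgsql AS
$$
BEGIN
    -- TG_TABLE_NAME and TG_OP are automatic trigger variables.
    IF TG_OP = 'INSERT' THEN
        INSERT INTO changelog(table_name, operation, newvalue)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(NEW)::text);
    ELSIF TG_OP = 'UPDATE' THEN
        INSERT INTO changelog(table_name, operation, oldvalue, newvalue)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD)::text, row_to_json(NEW)::text);
    ELSE -- DELETE
        INSERT INTO changelog(table_name, operation, oldvalue)
        VALUES (TG_TABLE_NAME, TG_OP, row_to_json(OLD)::text);
    END IF;
    RETURN NULL; -- the return value is ignored for AFTER triggers
END;
$$;
-- Attach it to each table you want audited, e.g.:
CREATE TRIGGER people_changelog
AFTER INSERT OR UPDATE OR DELETE ON people
FOR EACH ROW EXECUTE PROCEDURE log_change();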
I have a database which was migrated from MSSQL to PostgreSQL (9.2).
This database has 100+ tables. These tables have an auto-numbering field (the PRIMARY KEY field); given below is an example of one table:
CREATE TABLE company
(
companyid integer NOT NULL DEFAULT nextval('seq_company_id'::regclass),
company character varying(100),
add1 character varying(100),
add2 character varying(100),
add3 character varying(100),
phoneoff character varying(30),
phoneres character varying(30),
CONSTRAINT gcompany_pkey PRIMARY KEY (companyid)
)
sample data
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company1','add1','add2','add3','00055544','7788848');
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company2','add9','add5','add2','00088844','7458844');
INSERT INTO company (company, add1, add2, add3, phoneoff, phoneres) VALUES
('company5','add5','add8','add7','00099944','2218844');
and below is the sequence for this table
CREATE SEQUENCE seq_company_id
INCREMENT 1
MINVALUE 1
MAXVALUE 9223372036854775807
START 1
CACHE 1;
ALTER TABLE seq_company_id
OWNER TO postgres;
While reading the PostgreSQL documentation I read about serial types, so I wish to change all the existing auto-numbering fields to serial.
How do I do it?
I have tried:
alter table company alter column companyid type serial
ERROR: type "serial" does not exist
There is indeed no data type serial. It is just shorthand notation for a default value populated from a sequence (see the manual for details), essentially what you have now.
The only difference between your setup and a column defined as serial is that there is a link between the sequence and the column, which you can define manually as well:
alter sequence seq_company_id owned by company.companyid;
With that link in place, you can no longer distinguish your column from a column initially defined as serial. What this change does is ensure that the sequence is automatically dropped if the table (or the column) that uses it is dropped.
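For comparison, this is what the serial shorthand expands to internally (the sequence name follows PostgreSQL's tablename_columnname_seq convention):
-- companyid serial NOT NULL is equivalent to:
CREATE SEQUENCE company_companyid_seq;
CREATE TABLE company (
    companyid integer NOT NULL DEFAULT nextval('company_companyid_seq'::regclass)
);
ALTER SEQUENCE company_companyid_seq OWNED BY company.companyid;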
I wanted to use the timetravel function (F.39. spi, PostgreSQL 9.1 documentation) in my application, but it doesn't seem to work properly for me. Inserting rows into a table works just fine, and I get the start and stop dates properly, but when I try to update those rows, Postgres gives me an error about violating the PRIMARY KEY constraint. It tries to insert a tuple with the same primary id as the previous tuple...
It would be insane to remove the primary key constraints from all tables in the database, but this is the functionality I need. So maybe you have some experience with timetravel?
Any sort of help will be appreciated. Thanks in advance.
DDL:
CREATE TABLE cities
(
city_id serial NOT NULL,
state_id integer,
name character varying(80) NOT NULL,
start_date abstime,
stop_date abstime,
CONSTRAINT pk_cities PRIMARY KEY (city_id ),
CONSTRAINT fk_cities_states FOREIGN KEY (state_id)
REFERENCES states (state_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
-- Trigger: time_travel on cities
-- DROP TRIGGER time_travel ON cities;
CREATE TRIGGER time_travel
BEFORE INSERT OR UPDATE OR DELETE
ON cities
FOR EACH ROW
EXECUTE PROCEDURE timetravel('start_date', 'stop_date');
STATEMENT GIVEN:
INSERT INTO cities(
state_id, name)
VALUES (20,'Paris');
and that's OK, I get start_date and stop_date.
But with:
UPDATE cities SET name='Rome' WHERE name='Paris'
I get the error described earlier.
Schema of states
-- Table: states
-- DROP TABLE states;
CREATE TABLE states
(
state_id serial NOT NULL, -- unique id of the province
country_id integer, -- id of the country the province belongs to
name character varying(50), -- name of the province
CONSTRAINT pk_states PRIMARY KEY (state_id ),
CONSTRAINT uq_states_state_id UNIQUE (state_id )
)
WITH (
OIDS=FALSE
);
Unfortunately, as a new user I'm not allowed to post images here.
You can see them here:
Sample data from table cities: korpusvictifrew.cba.pl/postgres_cities.png
Sample data from table states: korpusvictifrew.cba.pl/states_data.png
Timetravel converts an UPDATE into an UPDATE of the old record's stop_date plus an INSERT of a new record with the changed data and an infinity stop_date. Due to pk_cities you can't have more than one record per city_id, and the timetravel triggers do not allow you to break that requirement.
You cannot use this:
CONSTRAINT pk_cities PRIMARY KEY (city_id )
You must use this:
CONSTRAINT pk_cities PRIMARY KEY (city_id, stop_date)
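Applied to the existing table, the change would be (a sketch; it assumes no other table references pk_cities through a foreign key, otherwise the DROP needs more care):
ALTER TABLE cities DROP CONSTRAINT pk_cities;
ALTER TABLE cities ADD CONSTRAINT pk_cities PRIMARY KEY (city_id, stop_date);
With that key, all historical versions of a city share city_id and differ in stop_date, with the current version carrying stop_date = 'infinity'.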