Duplicate key error even after using ON CONFLICT clause - PostgreSQL

My table has the following structure:
CREATE TABLE myTable
(
user_id VARCHAR(100) NOT NULL,
task_id VARCHAR(100) NOT NULL,
start_time TIMESTAMP NOT NULL,
SOME_COLUMN VARCHAR,
col1 INTEGER,
col2 INTEGER DEFAULT 0
);
ALTER TABLE myTable
ADD CONSTRAINT pk_4_col_constraint UNIQUE (task_id, user_id, start_time, SOME_COLUMN);
ALTER TABLE myTable
ADD CONSTRAINT pk_3_col_constraint UNIQUE (task_id, user_id, start_time);
CREATE INDEX IF NOT EXISTS index_myTable ON myTable USING btree (task_id);
However, when I try to insert data into the table using
INSERT INTO myTable VALUES (...)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET ... --updating other columns except for [task_id, user_id, start_time]
I get the following error:
ERROR: duplicate key value violates unique constraint "pk_4_col_constraint"
Detail: Key (task_id, user_id, start_time, SOME_COLUMN)=(XXXXX, XXX, 2021-08-06 01:27:05, XXXXX) already exists.
I got the above error when I tried to insert the row programmatically. I was able to execute the query successfully via a SQL IDE.
Now I have the following questions:
How is that possible? If pk_3_col_constraint ensures my data is unique on 3 columns, adding one extra column cannot introduce a duplicate. What's happening here?
I am aware that although my constraint names start with 'pk', I am using UNIQUE constraints rather than a PRIMARY KEY constraint (probably a mistake while creating the constraints, but either way this error shouldn't have occurred).
Why didn't I get the error when using the SQL IDE?
I read in a few articles that a unique constraint works a little differently from a primary key constraint and hence causes this issue at times. If this is a known issue, is there any way I can replicate this error to understand it in more detail?
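For reference, a minimal version of the statement I'm running looks like this (values are hypothetical):
-- Expected behaviour: collide on pk_3_col_constraint and take the DO UPDATE path.
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN, col1)
VALUES ('user-1', 'task-1', '2021-08-06 01:27:05', 'x', 1)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET col1 = EXCLUDED.col1;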
I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit. My programmatic environment was a Java AWS Lambda.
I have noticed people have faced this error occasionally in the past:
https://www.postgresql.org/message-id/15556-7b3ae3aba2c39c23%40postgresql.org
https://www.postgresql.org/message-id/flat/65AECD9A-CE13-4FCB-9158-23BE62BB65DD%40msqr.us#d05d2bb7b2f40437c2ccc9d485d8f41e, but there are no conclusions as to why it is happening.

Related

Db2 error when enlarging a column that is part of the primary key

I'm using Db2 11.5.4000.1449 on Windows 10.
I have the table THETABLE with those columns:
THEKEY CHAR(30) NOT NULL
THEDATA CHAR(30)
Primary key: THEKEY
I try to enlarge the primary key column using the following statement:
ALTER TABLE THETABLE ALTER COLUMN THEKEY SET DATA TYPE CHAR(50)
But I got the error:
SQLCODE=-668, SQLSTATE=57007 reason code="7"
The official IBM documentation says that this error means the table is in reorg-pending state.
The table is NOT in reorg-pending state.
I've checked using:
SELECT REORG_PENDING FROM SYSIBMADM.ADMINTABINFO
where TABSCHEMA='DB2ADMIN' and tabname='THETABLE'
The result of the above query is: N
I've tried to reorg both the table and indexes but the problem persists.
The only way I have found is to drop the primary key, alter the column and then add the primary key.
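Spelled out, that workaround looks like this (a REORG may still be needed afterwards, since changing a CHAR length is a reorg-recommended operation):
ALTER TABLE THETABLE DROP PRIMARY KEY;
ALTER TABLE THETABLE ALTER COLUMN THEKEY SET DATA TYPE CHAR(50);
ALTER TABLE THETABLE ADD PRIMARY KEY (THEKEY);
-- If Db2 still reports reorg-pending after the ALTER:
-- CALL SYSPROC.ADMIN_CMD('REORG TABLE DB2ADMIN.THETABLE');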
Note:
I also have other tables in which I've enlarged a CHAR column that is a primary key (without dropping and recreating the primary key).
The problem does not occur for all tables but only for some of them.
I have no idea why for some tables it is possible to enlarge a CHAR column that is part of the primary key and for other tables it is not.
Do you have any idea?

Problems with creating a PostgreSQL trigger function to create a modified entry for every insert statement

So, this is my first question here on SO. Let me describe the setup:
I have a PostgreSQL database (version 12) with a table guilds (containing an internal guild_id and a few other pieces of information). The guild_id is used as a foreign key in many other tables, such as a teams table. Now, if a team is inserted into teams for a guild other than the guild with guild_id = 1, I want a trigger function to create the same team entry, but with a modified guild_id (it should now be 1).
Definition of the relevant things I have atm:
create table if not exists bot.guilds
(
guild_id bigserial not null
constraint guilds_pk
primary key,
guild_dc_id bigint not null
);
create table if not exists bot.teams
(
team_id bigserial not null
constraint teams_pk
primary key,
guild_id bigserial not null
constraint teams_guilds_guild_id_fk
references bot.guilds
on delete cascade,
team_name varchar(20) not null,
team_nickname varchar(10) not null
);
alter table bot.teams
owner to postgres;
create unique index if not exists teams_guild_id_team_name_uindex
on bot.teams (guild_id, team_name);
create unique index if not exists teams_guild_id_team_nickname_uindex
on bot.teams (guild_id, team_nickname);
create function duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
INSERT INTO bot.teams VALUES(1,NEW."team_name",NEW."team_nickname");
RETURN NEW;
END;
$$;
create trigger duplicate_team
after insert
on bot.teams
for each row
execute procedure bot.duplicate_teams();
If I now try to insert a new row into teams (INSERT INTO bot.teams ("guild_id", "team_name", "team_nickname") VALUES (14, 'test2', 'test2');), I get the following error message (originally German, translated by me to English):
[42804] ERROR: column »guild_id« is of type integer, but the expression is of type character varying
HINT: You will need to rewrite or cast the expression.
CONTEXT: PL/pgSQL function duplicate_teams() line 3 at SQL statement
After execution, neither the original insert nor the copy is in the table.
I tried casting the values for the guild id to serial, integer, bigserial... but every time I get the same error. I'm confused by the part of the error message saying "is of type character varying".
So my questions are:
Is my understanding correct that the error is caused by the trigger, and that due to the error in the trigger the original insert statement doesn't work either?
Why is the type varying even with a cast?
Where is the error in the code?
I tried to search for the problem, but found nothing helpful. Any hints are welcome. Thank you for your help!
EDIT:
The answer from @Lukas Thaler works, but now I get a new error:
[23505] ERROR: duplicate key value violates unique constraint »teams_guild_id_team_name_uindex«
DETAIL: Key »(guild_id, team_name)=(1, test3)« already exists.
CONTEXT: SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
SQL statement »INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname")«
PL/pgSQL function duplicate_teams() line 3 at SQL statement
But the table contains only "3,11,TeamUtils,TU"...
bot.teams has four columns: team_id, guild_id (both numerical data types), team_name and team_nickname (both varchars). In your INSERT statement in the function definition, you only provide three values and no association to particular columns. The default is to insert them in order, which assigns 1 to team_id and (crucially) NEW."team_name" to guild_id, hence the insert fails with a type mismatch error.
Specifying
INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname");
in your function should resolve your problem.
To answer your other questions:
The INSERT statement is being executed inside a transaction, and a failure in the trigger will cause the entire transaction to be aborted and rolled back, hence you don't see the original row inserted to the table, either
The type is not deviating from the cast, it was the wrong value being inserted that caused the data type mismatch
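Regarding the new error in your EDIT: the AFTER INSERT trigger fires again for the copy it inserts itself (the row with guild_id = 1), so the function attempts to insert the same (guild_id, team_name) pair a second time and trips the unique index. A minimal sketch of a guard, assuming rows that already belong to guild 1 should never be copied again:
create or replace function bot.duplicate_teams() returns trigger
language plpgsql
as
$$
BEGIN
-- Skip rows already belonging to guild 1; without this guard the trigger
-- recurses and violates teams_guild_id_team_name_uindex.
IF NEW.guild_id <> 1 THEN
INSERT INTO bot.teams(guild_id, team_name, team_nickname) VALUES(1,NEW."team_name",NEW."team_nickname");
END IF;
RETURN NEW;
END;
$$;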

Copying rows violates not-null constraint in PostgreSQL

I am trying to do what is described in this solution and also here. That means I would like to copy rows with many columns while changing only a few values. So my query looks like this:
CREATE TEMPORARY TABLE temp_table AS
SELECT * FROM original_table WHERE <conditions>;
UPDATE temp_table
SET <auto_inc_field>=NULL,
<fieldx>=<valuex>,
<fieldy>=<valuey>;
INSERT INTO original_table SELECT * FROM temp_table;
However, the <auto_inc_field>=NULL part is not working for me on my PostgreSQL 9.4 database:
Exception: null value in column "auto_inc_field" violates not-null constraint
The <auto_inc_field> column is defined as BIGINT, SERIAL, and has a primary key constraint.
What do I need to pass, if NULL is not working? Is there an alternative method?
I understand that the primary key is a serial. List all columns but the primary key in the insert command. List the corresponding columns and values in the select command:
insert into original_table (col_1, col_2, col_3)
select col_1, value_2, value_3
from original_table
where the_conditions;
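If you prefer to keep the temp-table approach from the question, a sketch of an alternative (using the same placeholder names) is to drop the serial column from the temporary copy and name the remaining columns explicitly in the final insert, so the sequence generates fresh keys:
CREATE TEMPORARY TABLE temp_table AS
SELECT * FROM original_table WHERE <conditions>;
-- Drop the serial PK from the copy instead of setting it to NULL.
ALTER TABLE temp_table DROP COLUMN <auto_inc_field>;
UPDATE temp_table SET <fieldx> = <valuex>, <fieldy> = <valuey>;
-- Explicit, matching column lists keep the remaining columns aligned.
INSERT INTO original_table (<fieldx>, <fieldy>, <other_columns>)
SELECT <fieldx>, <fieldy>, <other_columns> FROM temp_table;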

Timetravel in postgres - violating PRIMARY KEY constraint

I wanted to use the timetravel function (F.39. spi, PostgreSQL 9.1 documentation) in my application; however, it doesn't seem to work properly for me. When inserting rows into the table everything works just fine and I get the start and stop dates properly, but when I try to update those rows Postgres gives me an error about violating the PRIMARY KEY constraint. It's trying to insert a tuple with the same primary key as the previous tuple...
It's insane to remove primary key constraints from all tables in the database, but it's the functionality I need. So maybe you have some experience with timetravel?
Any sort of help will be appreciated. Thanks in advance.
DDL:
CREATE TABLE cities
(
city_id serial NOT NULL,
state_id integer,
name character varying(80) NOT NULL,
start_date abstime,
stop_date abstime,
CONSTRAINT pk_cities PRIMARY KEY (city_id ),
CONSTRAINT fk_cities_states FOREIGN KEY (state_id)
REFERENCES states (state_id) MATCH SIMPLE
ON UPDATE CASCADE ON DELETE NO ACTION
)
WITH (
OIDS=FALSE
);
-- Trigger: time_travel on cities
-- DROP TRIGGER time_travel ON cities;
CREATE TRIGGER time_travel
BEFORE INSERT OR UPDATE OR DELETE
ON cities
FOR EACH ROW
EXECUTE PROCEDURE timetravel('start_date', 'stop_date');
STATEMENT GIVEN:
INSERT INTO cities(
state_id, name)
VALUES (20,'Paris');
and that's ok. I get start_date and stop_date.
But by:
UPDATE cities SET name='Rome' WHERE name='Paris'
I get the error described earlier.
Schema of states
-- Table: states
-- DROP TABLE states;
CREATE TABLE states
(
state_id serial NOT NULL, -- unique id of the province
country_id integer, -- id of the country the province belongs to
name character varying(50), -- name of the province
CONSTRAINT pk_states PRIMARY KEY (state_id ),
CONSTRAINT uq_states_state_id UNIQUE (state_id )
)
WITH (
OIDS=FALSE
);
Unfortunately, as a new user I'm not allowed to post images here.
You can see them there:
Sample data from table cities: korpusvictifrew.cba.pl/postgres_cities.png
Sample data from table states: korpusvictifrew.cba.pl/states_data.png
Time travel converts an UPDATE into an UPDATE of the old record's stop_date and an INSERT of a new one with the changed data plus an infinity stop_date. You can't have more than one record for city_id due to pk_cities. The time travel triggers do not allow you to break that requirement.
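Concretely, with the statements from the question, the single UPDATE leaves two rows that share city_id = 1, which is exactly what pk_cities forbids (row values sketched for illustration):
UPDATE cities SET name='Rome' WHERE name='Paris';
-- old row, closed off by the trigger: (city_id=1, name='Paris', ..., stop_date=now())
-- new row inserted by the trigger: (city_id=1, name='Rome', ..., stop_date='infinity')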
You cannot use this:
CONSTRAINT pk_cities PRIMARY KEY (city_id )
You must use this:
CONSTRAINT pk_cities PRIMARY KEY (city_id, stop_date)
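A sketch of the change, reusing the constraint name from your DDL:
ALTER TABLE cities DROP CONSTRAINT pk_cities;
ALTER TABLE cities ADD CONSTRAINT pk_cities PRIMARY KEY (city_id, stop_date);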

PostgreSQL: after importing some data, inserts fail with IntegrityError: duplicate key value violates unique constraint "place_country_pkey"

When I import some data into PostgreSQL through phpPgAdmin, everything is fine.
But when I later try to insert some data into the previously populated tables, I get an error:
IntegrityError: duplicate key value violates unique constraint "place_country_pkey"
And this happens only with prepopulated tables.
Here is my SQL:
DROP TABLE IF EXISTS place_country CASCADE;
CREATE TABLE place_country (
id SERIAL PRIMARY KEY,
country_en VARCHAR(100) NOT NULL,
country_ru VARCHAR(100) NOT NULL,
country_ua VARCHAR(100) NOT NULL
);
INSERT INTO place_country VALUES(1,'Ukraine','Украина','Україна');
How to avoid this?
Thanks!
Try not inserting the "1". IIRC, in Postgres, when you define a column as SERIAL, it will auto-generate an ID from a counter to populate that column automatically. So use:
INSERT INTO place_country (country_en, country_ru, country_ua) VALUES ('Ukraine','Украина','Україна');
Which is a good practice anyway, BTW (explicitly naming the columns in an INSERT, I mean).
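That said, omitting the id only helps future inserts. If your import already supplied explicit ids, the sequence behind the SERIAL column is likely still behind the imported data, which would explain why only prepopulated tables fail. A sketch to resynchronize it (the column is named id, as in your DDL):
SELECT setval(pg_get_serial_sequence('place_country', 'id'),
COALESCE((SELECT MAX(id) FROM place_country), 1));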