Db2 error when enlarging a column that is part of the primary key

I'm using Db2 11.5.4000.1449 on Windows 10.
I have the table THETABLE with those columns:
THEKEY CHAR(30) NOT NULL
THEDATA CHAR(30)
Primary key: THEKEY
I try to enlarge the primary key column using the following statement:
ALTER TABLE THETABLE ALTER COLUMN THEKEY SET DATA TYPE CHAR(50)
But I got the error:
SQLCODE=-668, SQLSTATE=57007 reason code="7"
The official IBM documentation says that this error means the table is in reorg-pending state.
The table is NOT in reorg pending state.
I've checked using:
SELECT REORG_PENDING FROM SYSIBMADM.ADMINTABINFO
where TABSCHEMA='DB2ADMIN' and tabname='THETABLE'
The result of this query is: N
I've tried to reorg both the table and indexes but the problem persists.
The only way I have found is to drop the primary key, alter the column and then add the primary key.
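That workaround can be sketched as follows, using the names from the question (Db2 syntax; note that widening a CHAR column is a REORG-recommended operation, so a REORG may still be needed before further DDL on the table):

```sql
-- A sketch of the workaround described above:
ALTER TABLE THETABLE DROP PRIMARY KEY;
ALTER TABLE THETABLE ALTER COLUMN THEKEY SET DATA TYPE CHAR(50);
ALTER TABLE THETABLE ADD PRIMARY KEY (THEKEY);
-- If the table ends up reorg-pending afterwards, reorganize it
-- (run from the CLP or via ADMIN_CMD):
REORG TABLE THETABLE;
```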
Note:
I also have other tables in which I've enlarged a CHAR column that is a primary key, without dropping and recreating the primary key.
The problem does not occur for all tables, only for some.
I have no idea why enlarging a CHAR primary-key column is possible for some tables and not for others.
Have you any idea?

Avoid scan on attach partition with check constraint

I am recreating an existing table as a partitioned table in PostgreSQL 11.
After some research, I am approaching it using the following procedure so this can be done online while writes are still happening on the table:
add a check constraint on the existing table, first as not valid and then validating
drop the existing primary key
rename the existing table
create the partitioned table under the prior table name
attach the existing table as a partition to the new partitioned table
My expectation was that the last step would be relatively fast, but I don't really have a number for this. In my testing, it's taking about 30s. I wonder if my expectations are incorrect or if I'm doing something wrong with the constraint or anything else.
Here's a simplified version of the DDL.
First, the inserted_at column is declared like this:
inserted_at timestamp without time zone not null
I want to have an index on the ID even after I drop the PK for existing queries and writes, so I create an index:
create unique index concurrently my_events_temp_id_index on my_events (id);
The check constraint is created in one transaction:
alter table my_events add constraint my_events_2022_07_events_check
check (inserted_at >= '2018-01-01' and inserted_at < '2022-08-01')
not valid;
In the next transaction, it's validated (and the validation is successful):
alter table my_events validate constraint my_events_2022_07_events_check;
Then before creating the partitioned table, I drop the primary key of the existing table:
alter table my_events drop constraint my_events_pkey cascade;
Finally, in its own transaction, the partitioned table is created:
alter table my_events rename to my_events_2022_07;
create table my_events (
id uuid not null,
... other columns,
inserted_at timestamp without time zone not null,
primary key (id, inserted_at)
) partition by range (inserted_at);
alter table my_events attach partition my_events_2022_07
for values from ('2018-01-01') to ('2022-08-01');
That last transaction blocks inserts and takes about 30s for the 12M rows in my test database.
Edit
I wanted to add that in response to the attach I see this:
INFO: partition constraint for table "my_events_2022_07" is implied by existing constraints
That makes me think I'm doing this right.
The problem is not the check constraint, it is the primary key.
If you make the original unique index include both columns:
create unique index concurrently my_events_temp_id_index on my_events (id,inserted_at);
And if you make the new table have a unique index rather than a primary key on those two columns, then the attach is nearly instantaneous.
These seem to me like unneeded restrictions in PostgreSQL: both that a unique index on one column can't be used to imply uniqueness on both columns, and that a unique index on both columns cannot be used to imply the primary key (nor even a unique constraint, but only a unique index).
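The answer's second change (a unique index in place of the primary key on the new partitioned table) might look like this sketch; the index name is illustrative, and PostgreSQL 11 allows a unique index on a partitioned table as long as it includes the partition key:

```sql
create table my_events (
    id uuid not null,
    -- ... other columns,
    inserted_at timestamp without time zone not null
) partition by range (inserted_at);

-- unique index instead of PRIMARY KEY (id, inserted_at); it must
-- include the partition key column inserted_at
create unique index my_events_id_inserted_at_idx
    on my_events (id, inserted_at);

alter table my_events attach partition my_events_2022_07
    for values from ('2018-01-01') to ('2022-08-01');
```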

Duplicate Key error even after using "On Conflict" clause

My table has following structure
CREATE TABLE myTable
(
user_id VARCHAR(100) NOT NULL,
task_id VARCHAR(100) NOT NULL,
start_time TIMESTAMP NOT NULL,
SOME_COLUMN VARCHAR,
col1 INTEGER,
col2 INTEGER DEFAULT 0
);
ALTER TABLE myTable
ADD CONSTRAINT pk_4_col_constraint UNIQUE (task_id, user_id, start_time, SOME_COLUMN);
ALTER TABLE myTable
ADD CONSTRAINT pk_3_col_constraint UNIQUE (task_id, user_id, start_time);
CREATE INDEX IF NOT EXISTS index_myTable ON myTable USING btree (task_id);
However when I try to insert data into the table using
INSERT INTO myTable VALUES (...)
ON CONFLICT (task_id, user_id, start_time) DO UPDATE
SET ... --updating other columns except for [task_id, user_id, start_time]
I get following error
ERROR: duplicate key value violates unique constraint "pk_4_col_constraint"
Detail: Key (task_id, user_id, start_time, SOME_COLUMN)=(XXXXX, XXX, 2021-08-06 01:27:05, XXXXX) already exists.
I got the above error when I tried to insert the row programmatically. I was able to execute the query successfully via a SQL IDE.
Now I have the following questions:
How is that possible? If pk_3_col_constraint ensures my data is unique across 3 columns, adding one extra column should not change anything. What's happening here?
I am aware that although my constraint names start with 'pk', I am using UNIQUE constraints rather than a PRIMARY KEY constraint (probably a mistake while creating the constraints, but either way this error shouldn't have occurred).
Why didn't I get the error when using the SQL IDE?
I read in a few articles that a unique constraint works a little differently from a primary key constraint and hence causes this issue at times. If this is a known issue, is there any way I can replicate this error to understand it in more detail?
I am running PostgreSQL 11.9 on x86_64-pc-linux-gnu, compiled by x86_64-pc-linux-gnu-gcc (GCC) 7.4.0, 64-bit. My programmatic environment was a Java AWS Lambda.
I have noticed people have faced this error occasionally in the past.
https://www.postgresql.org/message-id/15556-7b3ae3aba2c39c23%40postgresql.org
https://www.postgresql.org/message-id/flat/65AECD9A-CE13-4FCB-9158-23BE62BB65DD%40msqr.us#d05d2bb7b2f40437c2ccc9d485d8f41e but there are no firm conclusions as to why it is happening.
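As an aside, the conflict arbiter can also be named via the constraint itself rather than a column list, which removes any ambiguity about which constraint ON CONFLICT is arbitrating on. A sketch with hypothetical values (this does not by itself explain the error above):

```sql
INSERT INTO myTable (user_id, task_id, start_time, SOME_COLUMN, col1, col2)
VALUES ('u1', 't1', '2021-08-06 01:27:05', 'x', 1, 0)  -- hypothetical values
ON CONFLICT ON CONSTRAINT pk_3_col_constraint DO UPDATE
SET SOME_COLUMN = EXCLUDED.SOME_COLUMN,  -- update everything except the key columns
    col1 = EXCLUDED.col1,
    col2 = EXCLUDED.col2;
```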

postgres not setting foreign key to null when truncating

I'm trying to truncate a set of tables, but it keeps complaining about a foreign key,
even though that foreign key is set to ON DELETE SET NULL.
to reproduce:
create table test_players (id SERIAL PRIMARY KEY, name VARCHAR(255));
create table test_items (id SERIAL PRIMARY KEY, name VARCHAR(255), player_id INTEGER REFERENCES test_players(id) ON DELETE SET NULL);
now if you truncate the test_players it will complain:
ERROR: cannot truncate a table referenced in a foreign key constraint
DETAIL: Table "test_items" references "test_players".
HINT: Truncate table "test_items" at the same time, or use TRUNCATE ... CASCADE.
SQL state: 0A000
what must I do to be able to truncate test_players without deleting the test_items rows?
You cannot do what you are attempting. You will have to do this in four steps:
Update test_items, setting player_id to NULL for each row. Technically you don't need this step, but skipping it gives you data integrity issues.
Drop the test_items to test_players FK.
Truncate test_players.
Recreate your FK.
The reason is that TRUNCATE basically just zaps the table; it does NOT process individual rows. Therefore it does not apply the FK's SET NULL action, and throws the error you got instead. This happens even if the child table is empty, or for that matter even if the parent is empty. See fiddle here. The fiddle also contains a function to do it, and a test for it.
Then of course you could instead just DELETE FROM test_players and let the FK action take care of updating test_items. That takes longer, especially for a larger table, but you keep your FK.
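The steps above can be sketched as follows, using the reproduction tables from the question (the FK constraint name is assumed to be the default-generated one; check \d test_items for the actual name):

```sql
UPDATE test_items SET player_id = NULL;   -- step 1: keep the data consistent

-- step 2: drop the FK (default-generated constraint name assumed)
ALTER TABLE test_items DROP CONSTRAINT test_items_player_id_fkey;

TRUNCATE TABLE test_players;              -- step 3

-- step 4: recreate the FK
ALTER TABLE test_items
    ADD FOREIGN KEY (player_id) REFERENCES test_players (id)
    ON DELETE SET NULL;
```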

postgres key is not present in table constraint

When trying to ALTER TABLE in Postgres 9.5 to create foreign key constraint: from product_template.product_brand_id to product_brand.id
ALTER TABLE public.product_template
ADD CONSTRAINT product_template_product_brand_id_fkey
FOREIGN KEY (product_brand_id)
REFERENCES public.product_brand (id) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE SET NULL;
Returns error
ERROR: insert or update on table "product_template" violates foreign key constraint "product_template_product_brand_id_fkey"
DETAIL: Key (product_brand_id)=(12) is not present in table "product_brand".
STATEMENT: ALTER TABLE "product_template" ADD FOREIGN KEY ("product_brand_id") REFERENCES "product_brand" ON DELETE set null
I'm confused why Postgres is trying to find product_brand.product_brand_id, when the fkey is from product_template.product_brand_id to product_brand.id.
Any ideas?
The error message simply states that there is at least one row in the table product_template that contains the value 12 in the column product_brand_id,
but there is no corresponding row in the table product_brand where the column id contains the value 12.
Key (product_brand_id)=(12) refers to the source column of the foreign key, not the target column.
In simple terms, a value in the foreign key column (product_brand_id) of product_template is not present in the referenced table (product_brand).
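One way to locate the offending rows before adding the constraint, plus one possible (hypothetical) fix that nulls out the orphaned references to match the FK's ON DELETE SET NULL intent; inserting the missing brands is the alternative:

```sql
-- List values in product_template.product_brand_id with no match
-- in product_brand.id:
SELECT pt.product_brand_id, count(*)
FROM public.product_template pt
WHERE pt.product_brand_id IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM public.product_brand pb
                  WHERE pb.id = pt.product_brand_id)
GROUP BY pt.product_brand_id;

-- Hypothetical fix: null out the orphaned references
UPDATE public.product_template pt
SET product_brand_id = NULL
WHERE pt.product_brand_id IS NOT NULL
  AND NOT EXISTS (SELECT 1
                  FROM public.product_brand pb
                  WHERE pb.id = pt.product_brand_id);
```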

PostgreSQL: after importing some data, inserts fail with IntegrityError: duplicate key value violates unique constraint "place_country_pkey"

When I import some data into PostgreSQL through phpPgAdmin, everything works fine.
But when I later try to insert some data into the previously populated tables, I get an error:
IntegrityError: duplicate key value violates unique constraint "place_country_pkey"
And this happens only with the pre-populated tables.
Here is my SQL:
DROP TABLE IF EXISTS place_country CASCADE;
CREATE TABLE place_country (
id SERIAL PRIMARY KEY,
country_en VARCHAR(100) NOT NULL,
country_ru VARCHAR(100) NOT NULL,
country_ua VARCHAR(100) NOT NULL
);
INSERT INTO place_country VALUES(1,'Ukraine','Украина','Україна');
How to avoid this?
Thanks!
Try not inserting the "1". In Postgres, when you define a column as SERIAL, an ID is auto-generated from a sequence to populate that column. So use:
INSERT INTO place_country (country_en, country_ru, country_ua) VALUES ('Ukraine','Украина','Україна');
Which is a good practice anyway, BTW (explicitly naming the columns in an INSERT, I mean).
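If rows with explicit ids have already been imported, another option (assuming the cause is the sequence lagging behind max(id)) is to resynchronize the sequence that backs the SERIAL column:

```sql
-- pg_get_serial_sequence resolves the sequence created for the SERIAL column:
SELECT setval(pg_get_serial_sequence('place_country', 'id'),
              (SELECT max(id) FROM place_country));
```

After this, omitting the id in future INSERTs will generate values past the imported ones.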