I am getting a unique constraint violation in PostgreSQL while updating a table. I have a table with 3 columns and a unique constraint on one of them (internal_state). The table only ever holds two rows, and the values for internal_state are 1 and 0.
The update query is
UPDATE backfeed_state SET internal_state = internal_state - 1
WHERE EXISTS (SELECT 1 FROM backfeed_state d2 WHERE d2.internal_state = 1 )
Running this query works fine in MS SQL Server, but in PostgreSQL it throws a unique constraint error.
What I understand is that SQL Server checks the constraint only after all rows have been updated, whereas PostgreSQL checks it after updating each row. So after updating the first row (internal_state from 1 to 0), PostgreSQL checks the constraint and throws the error before the second row has even been updated.
Is there a way to avoid this situation?
From http://www.postgresql.org/docs/9.0/static/sql-createtable.html, section "Non-deferred Uniqueness Constraints": "When a UNIQUE or PRIMARY KEY constraint is not deferrable, PostgreSQL checks for uniqueness immediately whenever a row is inserted or modified."
Changing your unique constraint to deferrable will hold off checking until the end of the update. You can either use SET CONSTRAINTS to defer checking within each transaction (which is annoyingly repetitive) or drop and re-create the uniqueness constraint with the DEFERRABLE option (I'm not aware of an ALTER construct that changes it in place without dropping).
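A minimal sketch of the drop-and-recreate route; the constraint name below is the default PostgreSQL would generate for this example and may differ in your schema:

ALTER TABLE backfeed_state DROP CONSTRAINT backfeed_state_internal_state_key;
ALTER TABLE backfeed_state
    ADD CONSTRAINT backfeed_state_internal_state_key
    UNIQUE (internal_state) DEFERRABLE INITIALLY DEFERRED;

-- The update now succeeds; uniqueness is verified at commit time:
BEGIN;
UPDATE backfeed_state SET internal_state = internal_state - 1
WHERE EXISTS (SELECT 1 FROM backfeed_state d2 WHERE d2.internal_state = 1);
COMMIT;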
Related
I am writing an idempotent schema change script for a Postgres 12 database. However, I noticed that if I include IF NOT EXISTS in an ADD COLUMN statement, then even when the column already exists it adds a duplicate index for the uniqueness constraint that already exists. Simple example:
-- set up base table
CREATE TABLE IF NOT EXISTS test_table
(id SERIAL PRIMARY KEY
);
-- statement intended to be idempotent
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50) UNIQUE;
Running this script creates a new index test_table_name_key[n] each time it is run. I can't find anything about this in the Postgres documentation and don't understand why it is allowed to happen. If I break it into two parts, e.g.:
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50);
ALTER TABLE test_table
ADD CONSTRAINT test_table_name_key UNIQUE (name);
Then the transaction fails because Postgres rejects the creation of a constraint which already exists (which I can then catch in a DO ... EXCEPTION block). As far as I can tell this is because, with this approach, I am forced to give the constraint a name. This contrasts with ALTER COLUMN ... SET NOT NULL, which can be run multiple times without error or side effects as far as I can tell.
Question: why does it add a duplicate unique constraint, and are there any problems with having multiple identical indexes on a table column? (I think this is a subtle 'error' and I only spotted it by chance, so I am concerned it may arise in a production situation.)
You can create multiple unique constraints on the same column as long as they have different names, simply because there is nothing in the PostgreSQL code that forbids that. Each unique constraint will create a unique index with the same name, because that is how unique constraints are implemented.
This can be a valid use case: for example, if the index is bloated, you could create a new constraint and then drop the old one.
But normally, it is useless and does harm, because each index will make data modifications on the table slower.
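As a hedged sketch (names assumed from the example above, and the numeric suffix depends on how many times the script ran), you can list the unique constraints from the catalog and drop the superfluous ones:

-- List all unique constraints currently defined on test_table
SELECT conname
FROM pg_constraint
WHERE conrelid = 'test_table'::regclass
  AND contype = 'u';

-- Drop a duplicate by name, keeping the original
ALTER TABLE test_table DROP CONSTRAINT test_table_name_key1;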
I want to use a constraint so I can use upsert, because I don't want duplicate entries on customer_identifier_value.
on conflict (customer_identifier_value) do nothing
[42P10] ERROR: there is no unique or exclusion constraint matching the ON CONFLICT specification
When I create the constraint
alter table subscriber_historization
add constraint customer_identifier_value_unique unique (customer_identifier_value);
[0A000] ERROR: insufficient columns in UNIQUE constraint definition
Detail: UNIQUE constraint on table "subscriber_historization" lacks column "processing_date" which is part of the partition key.
Here is the DDL.
-- auto-generated definition
create table subscriber_historization
(
customer_identifier_value text not null,
product_value text,
contract_date_end date,
processing_date date not null,
constraint subscriber_historization_pk
primary key (processing_date, customer_identifier_value)
)
partition by RANGE (processing_date);
If I use
ON CONFLICT ON CONSTRAINT subscriber_historization_pk DO NOTHING
The row will be inserted if processing_date is different, and then there will be duplicate entries on customer_identifier_value.
How to use upsert then?
Thanks for your help.
You cannot prevent that with partitioned tables, because all unique indexes must contain the partitioning key.
Your only way out is to use SERIALIZABLE transaction isolation throughout and verify the constraint with a trigger. This will be a performance hit, however.
This is a limitation of partitioning.
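A minimal sketch of the trigger approach (function and trigger names are my own; on PostgreSQL versions before 13 you would have to create the row trigger on each partition rather than on the partitioned parent, and the check is only race-free if all writers use SERIALIZABLE isolation):

CREATE FUNCTION enforce_unique_customer_identifier() RETURNS trigger AS $$
BEGIN
    -- Reject the new row if the identifier already exists in any partition
    IF EXISTS (
        SELECT 1
        FROM subscriber_historization
        WHERE customer_identifier_value = NEW.customer_identifier_value
    ) THEN
        RAISE EXCEPTION 'customer_identifier_value % already exists',
            NEW.customer_identifier_value;
    END IF;
    RETURN NEW;
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER enforce_unique_customer_identifier_trg
    BEFORE INSERT ON subscriber_historization
    FOR EACH ROW
    EXECUTE FUNCTION enforce_unique_customer_identifier();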
I have a table named base_types that contains this constraint:
ALTER TABLE public.base_types
ADD CONSTRAINT base_type_gas_type_fk FOREIGN KEY (gas_type)
REFERENCES public.gas_types (gas_type) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
DEFERRABLE INITIALLY DEFERRED;
And I have a table named alarm_history that contains five constraints, including this one:
ALTER TABLE public.alarm_history
ADD CONSTRAINT alarm_history_device_fk FOREIGN KEY (device)
REFERENCES public.bases (alarm_device) MATCH SIMPLE
ON UPDATE NO ACTION
ON DELETE NO ACTION
DEFERRABLE INITIALLY DEFERRED;
I am trying to convert a database from one that didn't bother with anything weird and useless like constraints into one that uses them. I am beginning with this script:
delete from gas_types;
select conversion.convert_base_types();
alter table base_types validate constraint base_type_gas_type_fk;
select conversion.convert_alarm_history();
alter table alarm_history validate constraint alarm_history_base_fk;
alter table alarm_history validate constraint alarm_history_charge_fk;
alter table alarm_history validate constraint alarm_history_cooler_fk;
alter table alarm_history validate constraint alarm_history_device_fk;
alter table alarm_history validate constraint alarm_history_furnace_fk;
I duly get an error message telling me that the gas_type field in my new base_types record doesn't match anything in the gas_types table, since the gas_types table is empty. But if I comment out the base_types commands, I get 18,000 nice, shiny new records in the alarm_history table, despite the fact that every single one of them violates at least one of that table's five foreign key constraints, since all of the tables those keys refer to are empty. I need to ensure that my converted data is consistent, and therefore I need to validate my constraints, but that's obviously not happening. Why not?
Since the constraints above are created as DEFERRABLE INITIALLY DEFERRED, they are not checked until the DML statements (your delete statement) are committed, or, in your case, until you explicitly validate the constraint.
This is the normal and expected operation of an initially deferred deferrable constraint.
To change this functionality within your current transaction you can issue a SET CONSTRAINTS command to alter this:
SET CONSTRAINTS alarm_history_device_fk IMMEDIATE;
delete from gas_types;
This should raise a foreign key violation, alerting you earlier that you have data dependent on the records you are trying to delete.
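If you want every violation to surface at the statement that causes it, a sketch (assuming the whole conversion script runs inside a single transaction, since SET CONSTRAINTS only affects the current transaction) is to promote all deferrable constraints at the start:

BEGIN;
SET CONSTRAINTS ALL IMMEDIATE;
-- the conversion statements now fail as soon as they violate a deferrable constraint
COMMIT;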
I have a column sort_order with a unique constraint on it.
The following SQL fails on Postgres 9.5:
UPDATE test
SET sort_order = sort_order + 1;
-- [23505] ERROR: duplicate key value violates unique constraint "test_sort_order_key"
-- Detail: Key (sort_order)=(2) already exists.
Clearly, if the sort_order values were unique before the update, they will still be unique after the update. So why does this fail?
The same statement works fine on Oracle and MS SQL, but also fails on MySQL and SQLite.
Here's the complete setup code for a SQL fiddle:
DROP TABLE IF EXISTS test;
CREATE TABLE test (
val TEXT,
sort_order INTEGER NOT NULL UNIQUE
);
INSERT INTO test
VALUES ('A', 1), ('B', 2);
Postgres checks non-deferrable (IMMEDIATE) constraints at a different time than the SQL standard proposes: per modified row rather than at the end of the statement.
Specifically, the documentation for SET CONSTRAINTS states (emphasis mine):
NOT NULL and CHECK constraints are always checked immediately when a row is inserted or modified (not at the end of the statement). Uniqueness and exclusion constraints that have not been declared DEFERRABLE are also checked immediately.
Postgres chooses to execute this query using a plan that results in a temporary collision for sort_order, and it fails immediately. Note that this means that for the same schema and the same data, the same query may work or fail depending on the execution plan.
You'll have to make the constraint DEFERRABLE or DEFERRABLE INITIALLY DEFERRED, which delays verification of the constraint until the end of the transaction or up to the point where a statement SET CONSTRAINTS ... IMMEDIATE is executed.
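A hedged sketch, assuming the auto-generated constraint name test_sort_order_key from the error message:

ALTER TABLE test DROP CONSTRAINT test_sort_order_key;
ALTER TABLE test
    ADD CONSTRAINT test_sort_order_key
    UNIQUE (sort_order) DEFERRABLE INITIALLY DEFERRED;

-- The whole-statement update now succeeds; uniqueness is checked at commit:
UPDATE test SET sort_order = sort_order + 1;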
Addendum from #HansGinzel's comment:
for COPY, it seems that (even IMMEDIATE) constraints are checked only after all data have been copied.
Hi!
This is my table:
CREATE TABLE [ORG].[MyTable](
..
[my_column2] UNIQUEIDENTIFIER NOT NULL CONSTRAINT FK_C1 REFERENCES ORG.MyTable2 (my_column2),
[my_column3] INT NOT NULL CONSTRAINT FK_C2 REFERENCES ORG.MyTable3 (my_column3)
..
)
I've written this constraint to ensure that the combination of my_column2 and my_column3 is always unique.
ALTER TABLE [ORG].[MyTable] ADD
CONSTRAINT UQ_MyConstraint UNIQUE NONCLUSTERED
(
my_column2,
my_column3
)
But then, suddenly, the DB stopped responding. There is a lock or something.
Do you have any idea why?
What is bad with the constraint?
What I've already checked is that I have some locked objects when I select * from master.dbo.syslockinfo (joined with sysprocesses). After 10 minutes of inactivity this list is empty again and I can access all the objects in the database. Weird.
It has to lock the table while checking the data to see whether it violates the constraint; otherwise some bad data might get inserted while the check was running.
Some operations, like rebuilding an index (unless you use the ONLINE option, which is Enterprise Edition only), will also make the table inaccessible while they run.
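If the blocking is a problem and Enterprise Edition is available, one alternative (a sketch, not part of the original answer) is to enforce the uniqueness with an online unique index instead of the table-level constraint, so the table stays accessible while it is built:

CREATE UNIQUE NONCLUSTERED INDEX UQ_MyConstraint
    ON [ORG].[MyTable] (my_column2, my_column3)
    WITH (ONLINE = ON);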