Doesn't Postgres reuse unique indexes for unique key constraints?

Having experience with Oracle, I assumed that each unique constraint would reuse a unique index.
I created a schema population script that creates a named unique index and then the same unique constraint. That way I hoped to set the index name explicitly rather than rely on Postgres's default naming scheme.
As the experiment showed, I ended up with two indexes with the same definition:
CREATE UNIQUE INDEX agent_ux ON agent (branch_id, initials);
ALTER TABLE agent ADD CONSTRAINT agent_uk UNIQUE (branch_id, initials);
select indexname from pg_indexes where tablename = 'agent';
agent_ux
agent_uk
Doesn't Postgres reuse unique indexes for unique key constraints?
NOTE: I can't drop the index corresponding to the unique constraint (the error mentions the related constraint), but the index is deleted automatically if I drop the constraint.

In Postgres, creating a UNIQUE constraint automatically creates an index. You can also create the constraint by promoting an existing index, using the ALTER TABLE ttt ADD CONSTRAINT ccc UNIQUE USING INDEX xxx syntax: Documentation
ALTER TABLE agent
ADD CONSTRAINT agent_uk UNIQUE USING INDEX agent_ux;
[untested]
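If that syntax is right, re-running the check from the question should then show a single index; per the documentation, the promoted index is renamed to match the constraint name (expected output shown as a comment, not verified here):
select indexname from pg_indexes where tablename = 'agent';
-- agent_uk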

Related

Avoid scan on attach partition with check constraint

I am recreating an existing table as a partitioned table in PostgreSQL 11.
After some research, I am approaching it using the following procedure so this can be done online while writes are still happening on the table:
add a check constraint on the existing table, first as not valid and then validating
drop the existing primary key
rename the existing table
create the partitioned table under the prior table name
attach the existing table as a partition to the new partitioned table
My expectation was that the last step would be relatively fast, but I don't really have a number for this. In my testing, it's taking about 30s. I wonder if my expectations are incorrect or if I'm doing something wrong with the constraint or anything else.
Here's a simplified version of the DDL.
First, the inserted_at column is declared like this:
inserted_at timestamp without time zone not null
I want to keep an index on the ID for existing queries and writes even after I drop the PK, so I create one:
create unique index concurrently my_events_temp_id_index on my_events (id);
The check constraint is created in one transaction:
alter table my_events add constraint my_events_2022_07_events_check
check (inserted_at >= '2018-01-01' and inserted_at < '2022-08-01')
not valid;
In the next transaction, it's validated (and the validation is successful):
alter table my_events validate constraint my_events_2022_07_events_check;
Then before creating the partitioned table, I drop the primary key of the existing table:
alter table my_events drop constraint my_events_pkey cascade;
Finally, in its own transaction, the partitioned table is created:
alter table my_events rename to my_events_2022_07;
create table my_events (
id uuid not null,
... other columns,
inserted_at timestamp without time zone not null,
primary key (id, inserted_at)
) partition by range (inserted_at);
alter table my_events attach partition my_events_2022_07
for values from ('2018-01-01') to ('2022-08-01');
That last transaction blocks inserts and takes about 30s for the 12M rows in my test database.
Edit
I wanted to add that in response to the attach I see this:
INFO: partition constraint for table "my_events_2022_07" is implied by existing constraints
That makes me think I'm doing this right.
The problem is not the check constraint, it is the primary key.
If you make the original unique index include both columns:
create unique index concurrently my_events_temp_id_index on my_events (id,inserted_at);
And if you make the new table have a unique index rather than a primary key on those two columns, then the attach is nearly instantaneous.
These seem to me like unneeded restrictions in PostgreSQL, both that the unique index on one column can't be used to imply uniqueness on both columns, and that the unique index on both columns cannot be used to imply the primary key (nor even a unique constraint, but only a unique index).
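A sketch of what the adjusted DDL might look like (untested against the original data; the parent-level index name is made up here): the partitioned table is created without a primary key, and a unique index on (id, inserted_at) is created on the parent, which the attach can then match against the partition's existing index:
create table my_events (
    id uuid not null,
    -- ... other columns,
    inserted_at timestamp without time zone not null
) partition by range (inserted_at);

create unique index my_events_id_inserted_at_idx on my_events (id, inserted_at);

alter table my_events attach partition my_events_2022_07
    for values from ('2018-01-01') to ('2022-08-01');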

Cannot create primary key using already created index

I have a table ideas with columns idea_id, element_id and element_value.
Initially, I had created a composite primary key (ideas_pkey) using all three columns, but I started facing size limit issues with the index associated with the primary key, since the element_value column can hold very large values.
Hence, I created another unique index hashing the column with the possibly large values:
CREATE UNIQUE INDEX ideas_pindex ON public.ideas USING btree (idea_id, element_id, md5(element_value))
Now I have deleted the initial primary key ideas_pkey and want to recreate it using this newly created index, like so:
alter table ideas add constraint ideas_pkey PRIMARY KEY ("idea_id", "element_id", "element_value") USING INDEX ideas_pindex;
But this fails with the following error
ERROR: syntax error at or near "ideas_pindex"
LINE 2: ...a_id", "element_id", "element_value") USING INDEX ideas_...
^
SQL state: 42601
Character: 209
What am I doing wrong?
A primary key index can't be a functional index. You can instead just have a unique index on your table, or create another column storing the md5() of your larger column and use it in the PK.
That being said, there is also another error in your query: if you want to specify an index name, you can't specify the PK columns (they are derived from the underlying index). And if you want to specify the PK columns, you can't specify the index name/definition, as the index will be created automatically. See the documentation.
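A sketch of the second option, assuming a new column named element_value_md5 (the name is made up here); keeping it in sync with element_value would be up to the application or a trigger:
-- add a column holding the hash of the large value
ALTER TABLE ideas ADD COLUMN element_value_md5 text;
UPDATE ideas SET element_value_md5 = md5(element_value);
ALTER TABLE ideas ALTER COLUMN element_value_md5 SET NOT NULL;
-- use it in the primary key instead of the large column itself
ALTER TABLE ideas ADD CONSTRAINT ideas_pkey
    PRIMARY KEY (idea_id, element_id, element_value_md5);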

Altering the table by adding a constraint and using an index in PostgreSQL gives a syntax error at or near "IDX_emp_PK"

CREATE INDEX IDX_emp_PK ON EMP(ID);
ALTER TABLE EMP ADD CONSTRAINT PK_emp PRIMARY KEY (ID)
USING INDEX IDX_emp_PK;
There are two errors in your script:
First: you can't use a non-unique index for a primary key constraint, so you need
CREATE UNIQUE INDEX idx_emp_pk ON emp(id);
Second: when you add a primary key or unique constraint based on an index, you can't specify the columns (they are already defined by the index):
ALTER TABLE emp ADD
CONSTRAINT pk_emp PRIMARY KEY
USING INDEX idx_emp_pk;
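As an optional check (assuming the statements above succeeded), the constraint now owns the index, and the index is renamed to match the constraint name:
SELECT conname, conindid::regclass AS index_name
FROM pg_constraint
WHERE conrelid = 'emp'::regclass AND contype = 'p';
-- conname | index_name
-- pk_emp  | pk_emp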

How to safely reindex primary key on postgres?

We have a huge table that contains bloat on the primary key index. We constantly archive old records on that table.
We reindex other columns by recreating the index concurrently and dropping the old one. This is to avoid interfering with production traffic.
But this is not possible for a primary key since there are foreign keys depending on it. At least based on what we have tried.
What's the right way to reindex the primary key safely without blocking DML statements on the table?
REINDEX CONCURRENTLY seems to work as well. I tried it on my database and didn't get any error.
REINDEX INDEX CONCURRENTLY <indexname>;
I think it possibly does something similar to what @jlandercy has described in his answer. While the reindex was running I saw an index with the suffix _ccnew, and the existing one was intact as well. I guess that index was then renamed to the original index name after the older one was dropped, and in the end I see a single unique primary key index on my table.
I am using Postgres v12.7.
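One related note, not from the original answer: if a REINDEX ... CONCURRENTLY is interrupted, it can leave an invalid index (typically with a _ccnew or _ccold suffix) behind. A query along these lines finds any leftovers:
SELECT indexrelid::regclass AS index_name
FROM pg_index
WHERE NOT indisvalid;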
You can use pg_repack for this.
pg_repack is a PostgreSQL extension which lets you remove bloat from tables and indexes, and optionally restore the physical order of clustered indexes.
It doesn't hold exclusive locks during the whole process. It still takes some locks, but only for short periods of time. You can check the details here: https://reorg.github.io/pg_repack/
To perform repack on indexes, you can try:
pg_repack -t table_name --only-indexes
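For completeness, a fuller invocation might also name the database and schema explicitly (the database and schema names here are placeholders, not from the original answer):
pg_repack -d mydb -t public.table_name --only-indexes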
TL;DR
Just reindex it like any other index, using its index name:
REINDEX INDEX <indexname>;
MCVE
Let's create a table with a Primary Key constraint which is also an Index:
CREATE TABLE test(
Id BIGSERIAL PRIMARY KEY
);
Looking at the catalogue we see the constraint name:
SELECT conname FROM pg_constraint WHERE conname LIKE 'test%';
-- "test_pkey"
Having the name of the index, we can reindex it:
REINDEX INDEX test_pkey;
You can also fix the Constraint Name at creation time:
CREATE TABLE test(
Id BIGSERIAL NOT NULL
);
ALTER TABLE test ADD CONSTRAINT myconstraint PRIMARY KEY(Id);
If you must address concurrency, then use the method a_horse_with_no_name suggested and create a unique index concurrently:
-- Ensure Uniqueness while recreating the Primary Key:
CREATE UNIQUE INDEX CONCURRENTLY tempindex ON test USING btree(Id);
-- Drop PK:
ALTER TABLE test DROP CONSTRAINT myconstraint;
-- Recreate PK:
ALTER TABLE test ADD CONSTRAINT myconstraint PRIMARY KEY(Id);
-- Drop redundant Index:
DROP INDEX tempindex;
To check that the index exists:
SELECT * FROM pg_index WHERE indexrelid::regclass = 'tempindex'::regclass;
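Before dropping the old constraint it may also be worth confirming that the concurrent build actually succeeded, since a failed CREATE INDEX CONCURRENTLY leaves the index marked invalid (this check is an addition, not part of the original answer):
SELECT indexrelid::regclass AS index_name, indisvalid
FROM pg_index
WHERE indexrelid = 'tempindex'::regclass;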

Dropping Unique Constraint - PostgreSQL

TL;DR
I am seeking clarity on this: does a FOREIGN KEY require a UNIQUE CONSTRAINT on the other side, specifically in Postgres and generally in relational database systems?
Perhaps I could test this, but I'll ask: if the UNIQUE CONSTRAINT is required by the FOREIGN KEY, what happens if I don't create it? Will the database create one, or will it throw an error?
How I got there
I had earlier created a table with a column username on which I imposed a unique constraint. I then created another table with a column bearer_name having a FOREIGN KEY referencing the first table's username column, the one with the UNIQUE CONSTRAINT.
Now I want to drop the UNIQUE CONSTRAINT on the username column, because I later created a UNIQUE INDEX on the same column and intuitively I feel they serve the same purpose, or don't they? But the database complains that it has dependent objects and so it can't be dropped unless I provide the CASCADE option to drop the dependent objects as well. It identifies the FOREIGN KEY on the bearer_name column in the second table as the dependent object.
And is it possible for the FOREIGN KEY to point to the UNIQUE INDEX instead of the UNIQUE CONSTRAINT?
I am seeking clarity on this: does a FOREIGN KEY require a UNIQUE CONSTRAINT on the other side
No, it does not require a UNIQUE CONSTRAINT specifically. It could be a PRIMARY KEY or a UNIQUE INDEX.
Perhaps I could test this, but I'll ask: if the UNIQUE CONSTRAINT is required by the FOREIGN KEY, what happens if I don't create it? Will the database create one, or will it throw an error?
CREATE TABLE tab_a(a_id INT, b_id INT);
CREATE TABLE tab_b(b_id INT);
ALTER TABLE tab_a ADD CONSTRAINT fk_tab_a_tab_b FOREIGN KEY (b_id)
REFERENCES tab_b(b_id);
ERROR: there is no unique constraint matching given keys
for referenced table "tab_b"
DBFiddle Demo
And is it possible for the FOREIGN KEY to be a point to the UNIQUE INDEX instead of the UNIQUE CONSTRAINT?
Yes, it is possible.
CREATE UNIQUE INDEX tab_b_i ON tab_b(b_id);
DBFiddle Demo2
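With that unique index in place, the foreign key from the first snippet should now succeed (a sketch continuing the example above, not taken from the linked fiddle):
ALTER TABLE tab_a ADD CONSTRAINT fk_tab_a_tab_b FOREIGN KEY (b_id)
REFERENCES tab_b(b_id);
-- succeeds: the unique index on tab_b(b_id) satisfies the requirement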