Unique constraint with soft-deleted rows excluded - PostgreSQL

We have a table with a unique constraint on it, for feedback left from one user, for another, in relation to a sale.
ALTER TABLE feedback
ADD CONSTRAINT unique_user_subject_and_sale
UNIQUE (user_id, subject_id, sale_id)
This ensures we don't accidentally get duplicated rows of feedback.
Currently we sometimes hard-delete feedback left in error and let the user leave it again. We want to change to soft-delete:
ALTER TABLE feedback
ADD COLUMN deleted_at timestamptz
If deleted_at IS NOT NULL, consider the feedback deleted, though we still have the audit trail in our DB (and will probably show it ghosted out to site admins).
How can we keep our unique constraint when we're using soft-delete like this? Is it possible without using a more general CHECK() constraint that does an aggregate check (I've never tried using a check constraint like that)?
It's like I need to append a WHERE clause to the constraint.

Your unique index (which you later edited out of the question):
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL
Your unique index has at least two side effects that might cause you some trouble.
In other tables, you can't set a foreign key constraint that references "feedback". A foreign key reference requires some combination of columns to be declared as either primary key or unique.
Your unique index allows multiple rows that differ only in the "deleted_at" timestamp. So it's possible to end up with rows that look like the example below. Whether this is a problem is application-dependent.
Example:
user_id  subject_id  sale_id  deleted_at
-------  ----------  -------  ----------------------
      1           1        1  2012-01-01 08:00:01.33
      1           1        1  2012-01-01 08:00:01.34
      1           1        1  2012-01-01 08:00:01.35
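To illustrate the first side effect: a foreign key that targets those three columns fails when only a partial unique index (and no unique constraint) covers them. A minimal sketch, assuming a hypothetical child table feedback_replies:
CREATE TABLE feedback_replies (
    reply_id   integer PRIMARY KEY,
    user_id    integer,
    subject_id integer,
    sale_id    integer,
    -- fails: a partial unique index is not a unique constraint
    FOREIGN KEY (user_id, subject_id, sale_id)
        REFERENCES feedback (user_id, subject_id, sale_id)
);
-- ERROR:  there is no unique constraint matching given keys for referenced table "feedback"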
PostgreSQL documents this kind of index as a partial index, should you need to Google it sometime. Other platforms use different terms for it--filtered index is one. You can limit the problems to a certain extent with a pair of partial indexes.
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_not_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NOT NULL
But I see no reason to go to this much trouble, especially given the potential problems with foreign keys. If your table looks like this
create table feedback (
feedback_id integer primary key,
user_id ...
subject_id ...
sale_id ...
deleted_at ...
constraint unique_user_subj_sale
unique (user_id, subject_id, sale_id)
);
then all you need is that unique constraint on {user_id, subject_id, sale_id}. You might further consider making all deletes use the "deleted_at" column instead of doing a hard delete.
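One way to route all deletes through "deleted_at" is a trigger along these lines (a sketch only; the function and trigger names are made up, it assumes the feedback_id primary key from the table above, and EXECUTE FUNCTION needs Postgres 11 or later):
CREATE FUNCTION soft_delete_feedback() RETURNS trigger AS $$
BEGIN
    -- mark the row instead of removing it
    UPDATE feedback SET deleted_at = now() WHERE feedback_id = OLD.feedback_id;
    RETURN NULL;  -- cancel the physical DELETE
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER feedback_soft_delete
BEFORE DELETE ON feedback
FOR EACH ROW
WHEN (OLD.deleted_at IS NULL)   -- rows that are already soft-deleted can still be hard-deleted
EXECUTE FUNCTION soft_delete_feedback();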

Despite the fact that the PostgreSQL documentation advises against using a unique index instead of a constraint (if the point is to have a constraint), it appears you can do:
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL

You could include deleted_at in the constraint:
ALTER TABLE feedback
ADD CONSTRAINT unique_user_subject_and_sale
UNIQUE (user_id, subject_id, sale_id, deleted_at)
Unlike the partial unique index, this is a real constraint, so it avoids the problems described above.
The only remaining collision would be deleting a user's feedback, creating a new one, and deleting that again at exactly the same moment. Given the microsecond precision of the column, both deletions would have to land within the same microsecond, which, let's face it, will not happen; even automated requests will be more than a microsecond apart.
The real catch is that NULL values are never considered equal in a unique constraint, so rows with deleted_at IS NULL are not checked against each other and the constraint as written does nothing for live rows.
You could work around that by giving non-deleted rows a non-NULL default, such as the lowest value the column can store.
If you cannot accept a default value for deleted_at on non-deleted objects, consider adding a deleted_token column instead: generate a unique token when you delete a row, and keep 0 (or some other fixed value) for rows that are not deleted.
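Going back to the default-value suggestion, a minimal sketch, assuming '-infinity' is acceptable as the marker for "not deleted" (existing rows are updated first so NOT NULL can be added):
UPDATE feedback SET deleted_at = '-infinity' WHERE deleted_at IS NULL;

ALTER TABLE feedback
ALTER COLUMN deleted_at SET DEFAULT '-infinity',
ALTER COLUMN deleted_at SET NOT NULL;

ALTER TABLE feedback
ADD CONSTRAINT unique_user_subject_and_sale
UNIQUE (user_id, subject_id, sale_id, deleted_at);
Application queries then filter on deleted_at = '-infinity' instead of deleted_at IS NULL.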

Related

Avoid scan on attach partition with check constraint

I am recreating an existing table as a partitioned table in PostgreSQL 11.
After some research, I am approaching it using the following procedure so this can be done online while writes are still happening on the table:
add a check constraint on the existing table, first as not valid and then validating
drop the existing primary key
rename the existing table
create the partitioned table under the prior table name
attach the existing table as a partition to the new partitioned table
My expectation was that the last step would be relatively fast, but I don't really have a number for this. In my testing, it's taking about 30s. I wonder if my expectations are incorrect or if I'm doing something wrong with the constraint or anything else.
Here's a simplified version of the DDL.
First, the inserted_at column is declared like this:
inserted_at timestamp without time zone not null
I want to have an index on the ID even after I drop the PK for existing queries and writes, so I create an index:
create unique index concurrently my_events_temp_id_index on my_events (id);
The check constraint is created in one transaction:
alter table my_events add constraint my_events_2022_07_events_check
check (inserted_at >= '2018-01-01' and inserted_at < '2022-08-01')
not valid;
In the next transaction, it's validated (and the validation is successful):
alter table my_events validate constraint my_events_2022_07_events_check;
Then before creating the partitioned table, I drop the primary key of the existing table:
alter table my_events drop constraint my_events_pkey cascade;
Finally, in its own transaction, the partitioned table is created:
alter table my_events rename to my_events_2022_07;
create table my_events (
id uuid not null,
... other columns,
inserted_at timestamp without time zone not null,
primary key (id, inserted_at)
) partition by range (inserted_at);
alter table my_events attach partition my_events_2022_07
for values from ('2018-01-01') to ('2022-08-01');
That last transaction blocks inserts and takes about 30s for the 12M rows in my test database.
Edit
I wanted to add that in response to the attach I see this:
INFO: partition constraint for table "my_events_2022_07" is implied by existing constraints
That makes me think I'm doing this right.
The problem is not the check constraint, it is the primary key.
If you make the original unique index include both columns:
create unique index concurrently my_events_temp_id_index on my_events (id,inserted_at);
And if you make the new table have a unique index rather than a primary key on those two columns, then the attach is nearly instantaneous.
These seem to me like unneeded restrictions in PostgreSQL: both that the unique index on one column can't be used to imply uniqueness on both columns, and that the unique index on both columns cannot be used to imply the primary key (nor even a unique constraint--only a unique index).
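A sketch of the adjusted procedure with those two changes, using the same object names as in the question (the check constraint is added and validated, and the old primary key dropped, beforehand as described there):
create unique index concurrently my_events_temp_id_index
on my_events (id, inserted_at);

alter table my_events rename to my_events_2022_07;

create table my_events (
id uuid not null,
-- ... other columns,
inserted_at timestamp without time zone not null
) partition by range (inserted_at);

create unique index on my_events (id, inserted_at);  -- instead of primary key (id, inserted_at)

alter table my_events attach partition my_events_2022_07
for values from ('2018-01-01') to ('2022-08-01');
Because the partition already carries a matching unique index on (id, inserted_at) and the validated check constraint implies the partition bounds, the attach should not need to scan or rebuild anything.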

Would this PostgreSQL model work for long-term use and security?

I'm making a real-time chat app and was stuck figuring out what the DB model should look like. I've made a diagram (not reproduced here), but would this work? My issue is more to do with foreign keys.
I know this is a very vague question. But have been struggling with this model for a while now. This is the first database I'm setting up so it's probably got a load of errors.
Actually you are fairly close, but have overcomplicated it a bit. At the conceptual/logical level you have just 2 entities, Users and Messages, with a many-to-many relationship. At the physical level the Channels table resolves the M:M into the 2 one-to-many relationships you have described. But viewing it this way reveals a couple of issues. The user attribute is not required in the Messages table, and if physically implemented it requires a validation (not easily done) that the user actually exists in the Channels table. Further, everything the Message:User relationship provides is already available via the Users:Channels:Messages relationship. A similar argument applies to the Channels column in Users - it is completely resolved by the resolution table. Suggestion: drop user from the Messages table and channels from Users.
Now let's look at the columns of Channels. It looks like you're using boilerplate for created_at and updated_at, but are they necessary? At least for updated_at, no. What can be updated? If either User or Message changes, you have a brand new entry; it may seem like the same physical row (it actually is not), but the meaning is completely different. How about last message? What does it indicate that the maximum created_at for the user does not already give you? I cannot see anything. I suppose you could change created_at, but what is the point of tracking when that column was changed? Suggestion: drop last message sent and updated at from the message table (unless required by institution standards).
That leaves the Users table itself. Besides Channels, mentioned above, there is the Contacts column. Physically, as an array it violates 1NF and becomes difficult to manage (as well as to validate that each contact is in fact a user). Logically it creates a M:M on Users:Users. So resolve it the same way as Users:Messages: pull it out into another table, say User_Contacts, with 2 attributes referencing the Users table. Suggestion: drop contacts from the Users table and create a resolution table.
Unfortunately, I do not have a good ERD diagrammer, so I just provide DDL.
create table users (
user_id integer generated always as identity primary key
, name text
, phone_number text
, last_login timestamptz
, created_at timestamptz
, updated_at timestamptz
) ;
create type message_type as enum ('short', 'long'); -- list all values
create table messages(
msg_id integer generated always as identity primary key
, msg_type message_type
, message text
, created_at timestamptz
, updated_at timestamptz
);
create table channels( -- resolves M:M Users:Messages
user_id integer
, msg_id integer
, created_at timestamptz
, constraint channels_pk
primary key (user_id, msg_id)
, constraint channels_2_users_fk
foreign key (user_id)
references users(user_id)
, constraint channels_2_messages_fk
foreign key (msg_id)
references messages(msg_id )
);
create table user_contacts( -- resolves M:M Users:Users
user_id integer
, contact_id integer
, created_at timestamptz
, constraint user_contacts_pk
primary key (user_id, contact_id)
, constraint user_2_users_fk
foreign key (user_id)
references users(user_id)
, constraint contact_2_user_fk
foreign key (contact_id)
references users(user_id)
, constraint contact_not_me_check check (user_id <> contact_id)
);
Notes:
Do not use text as PK, use either integer (bigint) or UUID, and generate them during insert.
Caution on ENUM. In Postgres you can add new values, but you cannot remove a value. Depending upon the number of values and how often they change, consider creating a lookup/reference table for them.
Do not use the data type TIME. It is really not that useful without the date. A simple example: I log in today at 15:00, you log in tomorrow at 13:00. Now, from the database alone, which of us logged in first?
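A tiny illustration (the dates are made up): with TIME alone the comparison is misleading, while a timestamp gives the right answer.
SELECT time '15:00' < time '13:00';  -- false, suggesting I logged in later
SELECT timestamptz '2024-01-01 15:00' < timestamptz '2024-01-02 13:00';  -- true, I logged in first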

Composite PRIMARY KEY enforces NOT NULL constraints on involved columns

This is one strange, unwanted behavior I encountered in Postgres:
When I create a Postgres table with a composite primary key, it enforces a NOT NULL constraint on each column of the composite combination.
For example,
CREATE TABLE distributors (m_id integer, x_id integer, PRIMARY KEY(m_id, x_id));
enforces a NOT NULL constraint on columns m_id and x_id, which I don't want!
MySQL doesn't do this. I think Oracle doesn't do it as well.
I understand that PRIMARY KEY enforces UNIQUE and NOT NULL automatically, but that makes sense for a single-column primary key. In a multi-column primary key table, the uniqueness is determined by the combination.
Is there any simple way of avoiding this behavior of Postgres? When I execute this:
CREATE TABLE distributors (m_id integer, x_id integer);
I do not get any NOT NULL constraints of course. But I would not have a primary key either.
If you need to allow NULL values, use a UNIQUE constraint (or index) instead of a PRIMARY KEY (and add a surrogate PK column - I suggest a serial or IDENTITY column in Postgres 10 or later).
See: Auto increment table column
A UNIQUE constraint allows columns to be NULL:
CREATE TABLE distributor (
distributor_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY
, m_id integer
, x_id integer
, UNIQUE(m_id, x_id) -- !
-- , CONSTRAINT distributor_my_name_uni UNIQUE (m_id, x_id) -- verbose form
);
The manual:
For the purpose of a unique constraint, null values are not considered equal, unless NULLS NOT DISTINCT is specified.
In your case, you could enter something like (1, NULL) for (m_id, x_id) any number of times without violating the constraint. Postgres never considers two NULL values equal - as per definition in the SQL standard.
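A quick demonstration against the table above (a sketch):
INSERT INTO distributor (m_id, x_id) VALUES (1, NULL);
INSERT INTO distributor (m_id, x_id) VALUES (1, NULL);  -- also succeeds: the NULLs are "distinct"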
If you need to treat NULL values as equal (i.e. "not distinct") to disallow such "duplicates", I see two options - three since Postgres 15:
0. NULLS NOT DISTINCT
This option was added with Postgres 15 and allows treating NULL values as "not distinct", so two of them conflict in a unique constraint or index. This is the most convenient option going forward. The manual:
That means even in the presence of a unique constraint it is possible
to store duplicate rows that contain a null value in at least one of
the constrained columns. This behavior can be changed by adding the
clause NULLS NOT DISTINCT ...
Detailed instructions:
Create unique constraint with null columns
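For example (a sketch; requires Postgres 15 or later, and the plain UNIQUE (m_id, x_id) constraint from above would be dropped in favour of it):
ALTER TABLE distributor
ADD CONSTRAINT distributor_m_x_uni UNIQUE NULLS NOT DISTINCT (m_id, x_id);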
1. Two partial indexes
In addition to the UNIQUE constraint above:
CREATE UNIQUE INDEX dist_m_uni_idx ON distributor (m_id) WHERE x_id IS NULL;
CREATE UNIQUE INDEX dist_x_uni_idx ON distributor (x_id) WHERE m_id IS NULL;
But this gets out of hand quickly with more than two columns that can be NULL. See:
Create unique constraint with null columns
2. A multi-column UNIQUE index on expressions
This replaces the UNIQUE constraint. We need a free default value that is never present in the involved columns, like -1. Add CHECK constraints to disallow it:
CREATE TABLE distributor (
distributor serial PRIMARY KEY
, m_id integer
, x_id integer
, CHECK (m_id <> -1)
, CHECK (x_id <> -1)
);
CREATE UNIQUE INDEX distributor_uni_idx
ON distributor (COALESCE(m_id, -1), COALESCE(x_id, -1));
When you want a polymorphic relation
Your table uses column names that indicate that they are probably references to other tables:
CREATE TABLE distributors (m_id integer, x_id integer);
So I think you probably are trying to model a polymorphic relation to other tables – where a record in your table distributors can refer to one m record xor one x record.
Polymorphic relations are difficult in SQL. The best resource I have seen about this topic is "Modeling Polymorphic Associations in a Relational Database". There, four alternative options are presented, and the recommendation for most cases is called "Exclusive Belongs To", which in your case would lead to a table like this:
CREATE TABLE distributors (
id serial PRIMARY KEY,
m_id integer REFERENCES m,
x_id integer REFERENCES x,
CHECK (
((m_id IS NOT NULL)::integer + (x_id IS NOT NULL)::integer) = 1
)
);
CREATE UNIQUE INDEX ON distributors (m_id) WHERE m_id IS NOT NULL;
CREATE UNIQUE INDEX ON distributors (x_id) WHERE x_id IS NOT NULL;
Like other solutions, this uses a surrogate primary key column because primary keys are enforced to not contain NULL values in the SQL standard.
This solution adds a 4th option to the three in @Erwin Brandstetter's answer for how to avoid the case where "you could enter something like (1, NULL) for (m_id, x_id) any number of times without violating the constraint." Here, that case is excluded by a combination of two measures:
Partial unique indexes on each column individually: two records (1, NULL) and (1, NULL) would not violate the constraint on the second column as NULLs are considered distinct, but they would violate the constraint on the first column (two records with value 1).
Check constraint: The missing piece is preventing multiple (NULL, NULL) records, which are still allowed because NULLs are considered distinct - and in any case our partial indexes do not cover them, to save space and writes. This is achieved by the CHECK constraint, which prevents any (NULL, NULL) record by making sure that exactly one column is NULL.
There's one difference though: all alternatives in @Erwin Brandstetter's answer allow at least one record (NULL, NULL) and any number of records with no NULL value in any column (like (1, 2)). When modeling a polymorphic relation, you want to disallow such records. That is achieved by the check constraint in the solution above.
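A short demonstration of what the check constraint accepts and rejects (the values are examples and assume matching rows exist in m and x):
INSERT INTO distributors (m_id, x_id) VALUES (1, NULL);     -- OK: refers to exactly one m record
INSERT INTO distributors (m_id, x_id) VALUES (NULL, 2);     -- OK: refers to exactly one x record
INSERT INTO distributors (m_id, x_id) VALUES (NULL, NULL);  -- rejected by the CHECK constraint
INSERT INTO distributors (m_id, x_id) VALUES (1, 2);        -- rejected by the CHECK constraint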

Postgres unique constraint not enforcing uniqueness

Here is my constraint:
CREATE UNIQUE INDEX index_subscriptions_on_user_id_and_class_type_id_and_deleted_at
ON subscriptions
USING btree
(user_id, class_type_id, deleted_at);
This query proves the constraint is not actually working:
SELECT id, user_id, class_type_id,deleted_at
FROM subscriptions;
Here is the output (screenshot not reproduced); it shows multiple rows with the same user_id and class_type_id and a NULL deleted_at.
Why is uniqueness not being enforced?
Unique indexes in Postgres are based on values being equal, but NULL is never equal to anything, including other NULLs. Therefore any row with a NULL deleted_at value is distinct from any other possible row - so you can insert any number of them.
One way around this is to create partial indexes, applying different rules to rows with and without NULLs:
CREATE UNIQUE INDEX ... ON subscriptions
(user_id, class_type_id) WHERE deleted_at IS NULL;
CREATE UNIQUE INDEX ... ON subscriptions
(user_id, class_type_id, deleted_at) WHERE deleted_at IS NOT NULL;
This happens because of the NULL values in the deleted_at column. A unique index (or constraint) allows multiple rows with NULL in them.
The only way you can prevent that is to either declare the column NOT NULL in order to force a value for it, or reduce the unique columns to (user_id, class_type_id).

Do I need a primary key for my table, which has a UNIQUE (composite 4-columns), one of which can be NULL?

I have the following table (PostgreSQL 8.3) which stores prices of some products. The prices are synchronised with another database: basically, most of the fields below (apart from one) are not updated by our client, but instead dropped and refreshed every once in a while to sync with another stock database:
CREATE TABLE product_pricebands (
template_sku varchar(20) NOT NULL,
colourid integer REFERENCES colour (colourid) ON DELETE CASCADE,
currencyid integer NOT NULL REFERENCES currency (currencyid) ON DELETE CASCADE,
siteid integer NOT NULL REFERENCES site (siteid) ON DELETE CASCADE,
master_price numeric(10,2),
my_custom_field boolean,
UNIQUE (template_sku, siteid, currencyid, colourid)
);
On the synchronisation, I basically DELETE most of the data above except for data WHERE my_custom_field is TRUE (if it's TRUE, it means the client updated this field via their CMS and therefore this record should not be dropped). I then INSERT 100s to 1000s of rows into the table, and UPDATE where the INSERT fails (i.e. where the combination of (template_sku, siteid, currencyid, colourid) already exists).
My question is - what best practice should be applied here to create a primary key? Is a primary key even needed? I wanted to make the primary key = (template_sku, siteid, currencyid, colourid) - but the colourid field can be NULL, and using it in a composite primary key is not possible.
From what I read on other forum posts, I think I have done the above correctly, and just need to clarify:
1) Should I use a "serial" primary key just in case I ever need one? At the moment I don't, and don't think I ever will, because the important data in the table is the price and my custom field, only identified by the (template_sku, siteid, currencyid, colourid) combination.
2) Since (template_sku, siteid, currencyid, colourid) is the combination that I will use to query a product's price, should I add any further indexing to my columns, such as the "template_sku" which is a varchar? Or is the UNIQUE constraint a good index already for my SELECTs?
Should I use a "serial" primary key just in case I ever need one?
You can easily add a serial column later if you need one:
ALTER TABLE product_pricebands ADD COLUMN id serial;
The column will be filled with unique values automatically. You can even make it the primary key in the same statement (if no primary key is defined, yet):
ALTER TABLE product_pricebands ADD COLUMN id serial PRIMARY KEY;
If you reference the table from other tables I would advise using such a surrogate primary key, because it is rather unwieldy to link by four columns. It is also slower in SELECTs with JOINs.
Either way, you should define a primary key. The UNIQUE index including a nullable column is not a full replacement. It allows duplicates for combinations including a NULL value, because two NULL values are never considered the same. This can lead to trouble.
As the colourid field can be NULL, you might want to create two unique indexes. The combination (template_sku, siteid, currencyid, colourid) cannot be a PRIMARY KEY because of the nullable colourid, but you can create a UNIQUE constraint like you already have (implementing an index automatically):
ALTER TABLE product_pricebands ADD CONSTRAINT product_pricebands_uni_idx
UNIQUE (template_sku, siteid, currencyid, colourid)
This index perfectly covers the queries you mention in 2).
Create a partial unique index in addition if you want to avoid "duplicates" with (colourid IS NULL):
CREATE UNIQUE INDEX product_pricebands_uni_null_idx
ON product_pricebands (template_sku, siteid, currencyid)
WHERE colourid IS NULL;
That covers all bases. I wrote more about that technique in a related answer on dba.SE.
The simple alternative to the above is to make colourid NOT NULL and create a primary key instead of the above product_pricebands_uni_idx.
Also, as you basically DELETE most of the data for your refill operation, it will be faster to drop the indexes that are not needed during the refill and recreate them afterwards. It is faster by an order of magnitude to build an index from scratch than to add all rows incrementally.
How do you know which indexes are used (needed)?
Test your queries with EXPLAIN ANALYZE.
Or use the built-in statistics. pgAdmin displays statistics in a separate tab for the selected object.
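For instance, the statistics views show how often each index has been scanned (a sketch; adjust the table name):
SELECT indexrelname, idx_scan, idx_tup_read, idx_tup_fetch
FROM pg_stat_user_indexes
WHERE relname = 'product_pricebands';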
It may also be faster to select the few rows with my_custom_field = TRUE into a temporary table, TRUNCATE the base table and re-INSERT the survivors. Depends on whether you have foreign keys defined. Would look like this:
CREATE TEMP TABLE pr_tmp AS
SELECT * FROM product_pricebands WHERE my_custom_field;
TRUNCATE product_pricebands;
INSERT INTO product_pricebands SELECT * FROM pr_tmp;
This avoids a lot of vacuuming.