Using Postgres SQL, is there a way to set up a condition on a foreign key where it's limited to another table like a normal foreign key constraint, but also allows the value 0 to exist without it being in the other table? For example:
table_a:
id
table_b:
id
foreign_key_on_table_a_id
table_a would have a list of things, and table_b relates to table_a through the foreign key constraint. I would also like it to allow a value of 0, even though there is no id of 0 in table_a.
Is this the right constraint to use? Is there another/better way of doing this without adding the value into table_a?
I'd change foreign_key_on_table_a_id to allow NULL values. Then use an FK as usual and put NULLs in there instead of zero. You can have a NULL in a column that references another table.
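A minimal sketch of that, assuming integer keys:
CREATE TABLE table_a (
    id integer PRIMARY KEY
);

CREATE TABLE table_b (
    id integer PRIMARY KEY
  , foreign_key_on_table_a_id integer REFERENCES table_a (id)  -- nullable; NULL replaces the old 0
);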
Alternatively, you could write a function that returns true if a value is in the other table and false otherwise and then add a CHECK constraint:
CHECK (your_column = 0 or the_function(your_column))
You won't get any of the usual cascade behavior for FKs though and this CHECK is a massive kludge.
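For completeness, a sketch of that kludge (the function name is my own invention):
CREATE FUNCTION exists_in_table_a(_id integer) RETURNS boolean AS $$
    SELECT EXISTS (SELECT 1 FROM table_a WHERE id = _id);
$$ LANGUAGE sql STABLE;

ALTER TABLE table_b
    ADD CONSTRAINT table_a_id_or_zero
    CHECK (foreign_key_on_table_a_id = 0 OR exists_in_table_a(foreign_key_on_table_a_id));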
Hi all,
Can anyone understand what's going on here?
The case is:
There are 2 tables, called "matricula" and "pagament" with a 1:1 relationship cardinality.
Table matricula's primary key is composed of 3 fields: "edicio", "curs" and "estudiant".
Table pagament's primary key is the same as above. Furthermore, it references matricula.
As shown, trying to insert a row into the pagament table is rejected because a matching row does not exist in table matricula. However, querying for that row returns one result.
What am I missing?
Thank you all
Carles
The problem is that the order of the fields in the two tables is not the same and, moreover, the foreign key constraint in table pagament was declared as
foreign key (estudiant,curs,edicio) references matricula
without specifying the matricula fields.
It was solved by declaring the constraint as
foreign key (estudiant,curs,edicio) references matricula(estudiant,curs,edicio)
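A minimal reconstruction of the pitfall (column types assumed):
CREATE TABLE matricula (
    edicio    integer
  , curs      integer
  , estudiant integer
  , PRIMARY KEY (edicio, curs, estudiant)
);

CREATE TABLE pagament (
    estudiant integer
  , curs      integer
  , edicio    integer
  , PRIMARY KEY (estudiant, curs, edicio)
    -- a bare REFERENCES matricula pairs these columns positionally with
    -- matricula's PK (edicio, curs, estudiant); naming them avoids the mismatch
  , FOREIGN KEY (estudiant, curs, edicio)
        REFERENCES matricula (estudiant, curs, edicio)
);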
This is a strange, unwanted behavior I encountered in Postgres:
When I create a Postgres table with a composite primary key, it enforces a NOT NULL constraint on each column of the composite combination.
For example,
CREATE TABLE distributors (m_id integer, x_id integer, PRIMARY KEY(m_id, x_id));
enforces a NOT NULL constraint on columns m_id and x_id, which I don't want!
MySQL doesn't do this. I think Oracle doesn't do it either.
I understand that PRIMARY KEY enforces UNIQUE and NOT NULL automatically but that makes sense for single-column primary key. In a multi-column primary key table, the uniqueness is determined by the combination.
Is there any simple way of avoiding this behavior of Postgres? When I execute this:
CREATE TABLE distributors (m_id integer, x_id integer);
I do not get any NOT NULL constraints of course. But I would not have a primary key either.
If you need to allow NULL values, use a UNIQUE constraint (or index) instead of a PRIMARY KEY (and add a surrogate PK column - I suggest a serial or IDENTITY column in Postgres 10 or later).
Auto increment table column
A UNIQUE constraint allows columns to be NULL:
CREATE TABLE distributor (
distributor_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY
, m_id integer
, x_id integer
, UNIQUE(m_id, x_id) -- !
-- , CONSTRAINT distributor_my_name_uni UNIQUE (m_id, x_id) -- verbose form
);
The manual:
For the purpose of a unique constraint, null values are not considered equal, unless NULLS NOT DISTINCT is specified.
In your case, you could enter something like (1, NULL) for (m_id, x_id) any number of times without violating the constraint. Postgres never considers two NULL values equal - as per definition in the SQL standard.
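A quick demonstration with the distributor table above:
INSERT INTO distributor (m_id, x_id) VALUES (1, NULL);
INSERT INTO distributor (m_id, x_id) VALUES (1, NULL);  -- also succeeds: NULLs are distinct
INSERT INTO distributor (m_id, x_id) VALUES (1, 2);
INSERT INTO distributor (m_id, x_id) VALUES (1, 2);     -- fails: duplicate key value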
If you need to treat NULL values as equal (i.e. "not distinct") to disallow such "duplicates", I see three options (the first only since Postgres 15):
0. NULLS NOT DISTINCT
This option was added with Postgres 15 and allows treating NULL values as "not distinct", so that two of them conflict in a unique constraint or index. This is the most convenient option going forward. The manual:
That means even in the presence of a unique constraint it is possible
to store duplicate rows that contain a null value in at least one of
the constrained columns. This behavior can be changed by adding the
clause NULLS NOT DISTINCT ...
Detailed instructions:
Create unique constraint with null columns
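In short, the table above would look like this (sketch):
CREATE TABLE distributor (
    distributor_id integer GENERATED ALWAYS AS IDENTITY PRIMARY KEY
  , m_id integer
  , x_id integer
  , UNIQUE NULLS NOT DISTINCT (m_id, x_id)
);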
1. Two partial indexes
In addition to the UNIQUE constraint above:
CREATE UNIQUE INDEX dist_m_uni_idx ON distributor (m_id) WHERE x_id IS NULL;
CREATE UNIQUE INDEX dist_x_uni_idx ON distributor (x_id) WHERE m_id IS NULL;
But this gets out of hand quickly with more than two columns that can be NULL. See:
Create unique constraint with null columns
2. A multi-column UNIQUE index on expressions
This replaces the UNIQUE constraint above. We need a free default value that is never present in the involved columns, like -1. Add CHECK constraints to disallow it:
CREATE TABLE distributor (
distributor_id serial PRIMARY KEY
, m_id integer
, x_id integer
, CHECK (m_id <> -1)
, CHECK (x_id <> -1)
);
CREATE UNIQUE INDEX distributor_uni_idx
ON distributor (COALESCE(m_id, -1), COALESCE(x_id, -1));
When you want a polymorphic relation
Your table uses column names that indicate that they are probably references to other tables:
CREATE TABLE distributors (m_id integer, x_id integer);
So I think you are probably trying to model a polymorphic relation to other tables – where a record in your table distributors can refer to exactly one m record or one x record, but not both.
Polymorphic relations are difficult in SQL. The best resource I have seen about this topic is "Modeling Polymorphic Associations in a Relational Database". There, four alternative options are presented, and the recommendation for most cases is called "Exclusive Belongs To", which in your case would lead to a table like this:
CREATE TABLE distributors (
id serial PRIMARY KEY,
m_id integer REFERENCES m,
x_id integer REFERENCES x,
CHECK (
((m_id IS NOT NULL)::integer + (x_id IS NOT NULL)::integer) = 1
)
);
CREATE UNIQUE INDEX ON distributors (m_id) WHERE m_id IS NOT NULL;
CREATE UNIQUE INDEX ON distributors (x_id) WHERE x_id IS NOT NULL;
Like other solutions, this uses a surrogate primary key column because primary keys are enforced to not contain NULL values in the SQL standard.
This solution adds a 4th option to the three in Erwin Brandstetter's answer for how to avoid the case where "you could enter something like (1, NULL) for (m_id, x_id) any number of times without violating the constraint." Here, that case is excluded by a combination of two measures:
Partial unique indexes on each column individually: two records (1, NULL) and (1, NULL) would not violate the constraint on the second column, as NULLs are considered distinct, but they would violate the constraint on the first column (two records with value 1).
Check constraint: The missing piece is preventing multiple (NULL, NULL) records, which are still allowed because NULLs are considered distinct (and because our partial indexes do not cover them, to save space and writes). This is achieved by the CHECK constraint, which rules out (NULL, NULL) records entirely by making sure that exactly one column is non-NULL.
There's one difference though: all alternatives in Erwin Brandstetter's answer allow at least one record (NULL, NULL) and any number of records with no NULL value in any column (like (1, 2)). When modeling a polymorphic relation, you want to disallow such records. That is achieved by the check constraint in the solution above.
Hello, I want to create a new table based on another one, and create primary keys as well.
Currently this is how I'm doing it. Table B has no primary keys defined, but I would like to create them in table A. Is there a way using this SELECT TOP 0 statement to do that, or do I need to do an ALTER TABLE after I create tableA?
Thanks
select TOP 0 *
INTO [tableA]
FROM [tableB]
SELECT INTO does not support copying any of the indexes, constraints, triggers or even computed columns and other table properties, aside from the IDENTITY property (as long as you don't apply an expression to the IDENTITY column).
So, you will have to add the constraints after the table has been created and populated.
The short answer is NO. SELECT INTO will always create a HEAP table and, according to Books Online:
Indexes, constraints, and triggers defined in the source table are not
transferred to the new table, nor can they be specified in the
SELECT...INTO statement. If these objects are required, you must
create them after executing the SELECT...INTO statement.
So, after executing SELECT INTO you need to execute an ALTER TABLE or CREATE UNIQUE INDEX in order to add a primary key.
Also, if dbo.TableB does not already have an IDENTITY column (or if it does and you want to leave it out for some reason), and you need to create an artificial primary key column (rather than use an existing column in dbo.TableB to serve as the new primary key), you could use the IDENTITY function to create a candidate key column. But you still have to add the constraint to TableA after the fact to make it a primary key, since just the IDENTITY function/property alone does not make it so.
-- This statement will create a HEAP table
SELECT Col1, Col2, IDENTITY(INT,1,1) Col3
INTO dbo.MyTable
FROM dbo.AnotherTable;
-- This statement will create a clustered PK
ALTER TABLE dbo.MyTable
ADD CONSTRAINT PK_MyTable_Col3 PRIMARY KEY (Col3);
We have a table with a unique constraint on it, for feedback left from one user, for another, in relation to a sale.
ALTER TABLE feedback
ADD CONSTRAINT unique_user_subject_and_sale
UNIQUE (user_id, subject_id, sale_id)
This ensures we don't accidentally get duplicated rows of feedback.
Currently we sometimes hard-delete feedback left in error and let the user leave it again. We want to change to soft-delete:
ALTER TABLE feedback
ADD COLUMN deleted_at timestamptz
If deleted_at IS NOT NULL, consider the feedback deleted, though we still have the audit trail in our DB (and will probably show it ghosted out to site admins).
How can we keep our unique constraint when we're using soft-delete like this? Is it possible without using a more general CHECK() constraint that does an aggregate check? (I've never tried using a check constraint like this.)
It's like I need to append a WHERE clause to the constraint.
The unique index you proposed (since edited out of the question):
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL
Your unique index has at least two side effects that might cause you some trouble.
In other tables, you can't set a foreign key constraint that references "feedback". A foreign key reference requires some combination of columns to be declared as either primary key or unique.
Your unique index allows multiple rows that differ only in the "deleted_at" timestamp. So it's possible to end up with rows that look like the example below. Whether this is a problem is application-dependent.
Example
user_id  subject_id  sale_id  deleted_at
-------  ----------  -------  ----------------------
1        1           1        2012-01-01 08:00:01.33
1        1           1        2012-01-01 08:00:01.34
1        1           1        2012-01-01 08:00:01.35
PostgreSQL documents this kind of index as a partial index, should you need to Google it sometime. Other platforms use different terms for it--filtered index is one. You can limit the problems to a certain extent with a pair of partial indexes.
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale_not_null
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NOT NULL
But I see no reason to go to this much trouble, especially given the potential problems with foreign keys. If your table looks like this
create table feedback (
feedback_id integer primary key,
user_id ...
subject_id ...
sale_id ...
deleted_at ...
constraint unique_user_subj_sale
unique (user_id, subject_id, sale_id)
);
then all you need is that unique constraint on {user_id, subject_id, sale_id}. You might further consider making all deletes use the "deleted_at" column instead of doing a hard delete.
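The soft delete itself is then just an UPDATE where you used to issue a DELETE. A sketch:
UPDATE feedback
SET    deleted_at = now()
WHERE  feedback_id = 123      -- whatever row the hard DELETE used to target
AND    deleted_at IS NULL;    -- don't overwrite an earlier deletion time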
Despite the fact that the PostgreSQL documentation advises against using a unique index instead of a constraint (if the point is to have a constraint), it appears you can do
CREATE UNIQUE INDEX feedback_unique_user_subject_and_sale
ON feedback(user_id, subject_id, sale_id)
WHERE deleted_at IS NULL
You could include deleted_at in the unique constraint:
ALTER TABLE feedback
ADD CONSTRAINT unique_user_subject_and_sale
UNIQUE (user_id, subject_id, sale_id, deleted_at)
This does not create the problems that a unique index does.
The only problem would be removing a user's feedback, creating it anew, and then deleting it again at exactly the same time.
And considering the precision of the column, the two deletions would have to land within the same microsecond. Which, let's face it, is impossible: even if the user tried, the time between those requests would certainly be greater than 1 microsecond.
The real catch is that NULL values are never considered equal in a unique constraint, so as it stands this won't work for rows that haven't been deleted.
But you could give not-deleted rows a default value, such as the lowest value the column can store.
If you cannot accept a default value for deleted_at when the object is not deleted, you could instead add a deleted_token column and generate a key for it when you delete a value, keeping 0 (or whatever) for an undeleted object.
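A sketch of that default-value idea, using '-infinity' as the lowest value a timestamptz can hold (the sentinel choice is mine):
-- backfill rows that are not deleted, then make the sentinel the default
UPDATE feedback SET deleted_at = '-infinity' WHERE deleted_at IS NULL;

ALTER TABLE feedback
    ALTER COLUMN deleted_at SET DEFAULT '-infinity'
  , ALTER COLUMN deleted_at SET NOT NULL;

-- drop the old three-column constraint first, if it is still in place
ALTER TABLE feedback
    ADD CONSTRAINT unique_user_subject_sale_deleted
    UNIQUE (user_id, subject_id, sale_id, deleted_at);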
Is it possible in PostgreSQL to conditionally add a foreign key?
Something like:
ALTER TABLE table1 ADD FOREIGN KEY (some_id) REFERENCES other_table WHERE some_id NOT IN (0,-1) AND some_id IS NOT NULL;
Specifically, my reference table has all positive integers (1+) but the table I need to add the foreign key to can contain zero (0), null and negative one (-1) instead, all meaning something different.
Notes:
I am fully aware that this is poor table design, but it was a clever trick built 10+ years ago when the features and resources we have available at this point did not exist. This system is running hundreds of retail stores so going back and changing the method at this point could take months which we don't have.
I cannot use a trigger; this MUST be done with a foreign key.
The short answer is no, Postgres does not have conditional foreign keys. Some options you might consider are:
Just not have a FK constraint. Move this logic into the data access layer and live without the referential integrity.
Allow NULL in the column, which is perfectly valid even with a FK constraint. Then, use another column to store whatever the meaning of 0 and -1 is.
Add a dummy row in the referenced table for 0 and -1. Even if it just had bogus data, it would satisfy the FK constraint.
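A sketch of that last option (assuming other_table has an other_id key and a description column):
INSERT INTO other_table (other_id, description)
VALUES (0,  'placeholder: zero')
     , (-1, 'placeholder: minus one');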
Hope this helps!
You can add another "shadow" column to table1 which holds the cleaned values (i.e. everything but 0 and -1). Use this column for the referential integrity checks. This shadow column is updated/filled by a simple trigger on table1 which writes all values but 0 and -1 into the shadow column. Both 0 and -1 could be mapped to null.
Then you have referential integrity and your original column unchanged. The downside: you also have a little trigger and some redundant data. But alas, this is the fate of a legacy schema!
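A sketch of the shadow-column approach (all names invented here):
ALTER TABLE table1 ADD COLUMN some_id_shadow integer REFERENCES other_table;

-- map existing 0 / -1 values to NULL once
UPDATE table1
SET    some_id_shadow = CASE WHEN some_id IN (0, -1) THEN NULL ELSE some_id END;

CREATE FUNCTION table1_sync_shadow() RETURNS trigger AS $$
BEGIN
    NEW.some_id_shadow := CASE WHEN NEW.some_id IN (0, -1) THEN NULL
                               ELSE NEW.some_id END;
    RETURN NEW;
END
$$ LANGUAGE plpgsql;

CREATE TRIGGER table1_sync_shadow
BEFORE INSERT OR UPDATE ON table1
FOR EACH ROW EXECUTE FUNCTION table1_sync_shadow();  -- EXECUTE PROCEDURE before Postgres 11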
Your requirement is equivalent to this check constraint:
create table t (a float check (a >= -1 and a = floor(a) or a is null));
You can implement this with a check constraint and a foreign key.
CREATE TABLE table1 (
    some_id      INT
  , some_id_fkey INT REFERENCES other_table (other_id)
  , CHECK (some_id IN (0,-1) OR some_id IS NOT DISTINCT FROM some_id_fkey)
);
(not tested)
Here's another possibility. Use PG inheritance to enforce a partition of the table into rows that hold a real positive reference and the rest (with the usual rules/triggers for maintaining this). Then have the FK relationship between only the referencing child table and the referenced table.
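A sketch of that idea (child table names are mine; other_id is assumed to be the referenced key):
-- queries against table1 still see the rows of both children
CREATE TABLE table1_referencing (
    CHECK (some_id IS NOT NULL AND some_id NOT IN (0, -1))
  , FOREIGN KEY (some_id) REFERENCES other_table (other_id)
) INHERITS (table1);

CREATE TABLE table1_special (
    CHECK (some_id IN (0, -1) OR some_id IS NULL)
) INHERITS (table1);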