Postgres CREATE TABLE ... LIKE: exclude constraints but include indexes

I am trying to recreate a table's structure using the following statement. However, I would like to exclude constraints while including indexes from the original table.
CREATE TABLE users_audit (LIKE users EXCLUDING CONSTRAINTS);
The above statement works as expected: it creates a new table users_audit without constraints. However, I am not sure whether it is possible to also copy the indexes using INCLUDING INDEXES, or even to include everything except the primary key constraint. Looking at the documentation, it seems I can only pass EXCLUDING | INCLUDING.

You can say
CREATE TABLE users_audit (LIKE users EXCLUDING CONSTRAINTS INCLUDING INDEXES);
That will create all the indexes, so it will also create the indexes that implement primary key and unique constraints. Foreign key constraints, however, won't be created.
There is no way to create only the indexes that do not belong to a constraint. To get that effect, you could use INCLUDING INDEXES and afterwards drop all the constraints; dropping a constraint also removes its backing index, leaving just the standalone indexes.
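For example, a sketch of that approach (the constraint name users_audit_pkey is an assumption based on Postgres's default naming; verify it with \d users_audit):
CREATE TABLE users_audit (LIKE users EXCLUDING CONSTRAINTS INCLUDING INDEXES);
-- dropping the copied primary key constraint also removes its index,
-- leaving only the plain indexes
ALTER TABLE users_audit DROP CONSTRAINT users_audit_pkey;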

Related

Unexpected creation of duplicate unique constraints in Postgres

I am writing an idempotent schema change script for a Postgres 12 database, and I noticed that if I include IF NOT EXISTS in an ADD COLUMN statement, then even when the column already exists, a duplicate index is added for the uniqueness constraint that already exists. A simple example:
-- set up base table
CREATE TABLE IF NOT EXISTS test_table (
    id SERIAL PRIMARY KEY
);
-- statement intended to be idempotent
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50) UNIQUE;
Running this script creates a new index test_table_name_key[n] each time it is run. I can't find anything about this in the Postgres documentation, and I don't understand why it is allowed to happen. If I break it into two parts, e.g.:
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50);
ALTER TABLE test_table
ADD CONSTRAINT test_table_name_key UNIQUE (name);
Then the transaction fails, because Postgres rejects the creation of a constraint that already exists (which I can then catch in a DO ... EXCEPTION block). As far as I can tell, this is because with this approach I am forced to give the constraint a name. This contrasts with ALTER COLUMN ... SET NOT NULL, which, as far as I can tell, can be run multiple times without error or side effects.
Question: why does it add a duplicate unique constraint, and are there any problems with having multiple identical indexes on a table column? (I think this is a subtle 'error'; I only spotted it by chance, so I am concerned it may arise in production.)
You can create multiple unique constraints on the same column as long as they have different names, simply because there is nothing in the PostgreSQL code that forbids that. Each unique constraint will create a unique index with the same name, because that is how unique constraints are implemented.
This can be a valid use case: for example, if the index is bloated, you could create a new constraint and then drop the old one.
But normally, it is useless and does harm, because each index will make data modifications on the table slower.
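For the idempotency goal itself, here is a minimal sketch of the DO ... EXCEPTION approach the question alludes to. Which error Postgres raises depends on whether the constraint itself or its backing index name collides, so both conditions are caught; treat the details as an assumption to verify:
DO $$
BEGIN
    ALTER TABLE test_table ADD COLUMN IF NOT EXISTS name varchar(50);
    BEGIN
        ALTER TABLE test_table
            ADD CONSTRAINT test_table_name_key UNIQUE (name);
    EXCEPTION
        -- duplicate_object: the constraint already exists;
        -- duplicate_table: an index with that name already exists
        WHEN duplicate_object OR duplicate_table THEN
            NULL;  -- already in place, nothing to do
    END;
END
$$;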

DB2: accessing a specific row in a non-unique table for update/delete operations

Can I do row-specific update/delete operations in a DB2 table via SQL when there is no unique primary key?
The table is a physical file (PF) on the native system of the AS/400.
Like many other files, it was created without a unique key definition, which leads DB2 to conclude that the table has no unique key.
And that's my problem. I can't change the structure of the table to add a unique ID column, because I would have to recompile all the programs on the AS/400 that touch it, which is a serious issue; many things would, "perhaps", no longer work. Of course, I could do that refactoring for one table, but our system has thousands of these native files, some properly built with a unique key, some without one.
I work most of the time with DB2 and SQL on those old files, and all the files that have a unique key are no problem for the important update/delete operations.
Is there some way to get an additional column with a truly unique row ID or row number in every SELECT? And, much more importantly, how can I use that row number in an UPDATE?
I did some research, and by now I assume there is no way to do exact updates or deletes when no unique key is present. What I would wish for is some additional ID column that is always returned with the table, which I could refer to in my update/delete operations. Perhaps there is a fallacy in my thinking here, and tables without a unique key are meant to be edited in other ways.
Try the RRN function.
-- RRN returns the relative record number of each row
SELECT RRN(EMPLOYEE), LASTNAME
FROM EMPLOYEE
WHERE ...;
-- target exactly one row by its relative record number;
-- note: RRNs are stable only until the physical file is reorganized
UPDATE EMPLOYEE
SET ...
WHERE RRN(EMPLOYEE) = ...;

Composite key with user-supplied string column, foreign keys

Let's say I have the following table
TABLE subgroups (
group_id t_group_id NOT NULL REFERENCES groups(group_id),
subgroup_name t_subgroup_name NOT NULL,
more attributes ...
)
subgroup_name is UNIQUE within a group (group_id).
A group can have many subgroups.
The subgroup names are user-supplied. (I would like to avoid using a subgroup_id column; subgroup_name has meaning in the model and is more than just a label. I am providing a list of predetermined names but allow users to add their own for flexibility.)
This table has two levels of referencing child tables containing subgroup attributes (with many-to-one relations).
I would like to have a PRIMARY KEY on (group_id, upper(trim(subgroup_name))).
From what I know, Postgres doesn't allow a PRIMARY KEY or UNIQUE constraint on a function expression.
IIRC, the relational model also requires columns to be used as stored.
CREATE UNIQUE INDEX ON subgroups (group_id, upper(trim(subgroup_name))); doesn't solve my problem
as other tables in my model will have FOREIGN KEYs pointing to those two columns.
I see two options.
Option A)
Store a cleaned up subgroup name in subgroup_name
Add an extra column called subgroup_name_raw that would contain the uncleaned string
Option B)
Create both a UNIQUE INDEX and a PRIMARY KEY on my key pair. (seems like a huge waste)
Any insights?
Note: I'm using Postgres 9.2
Actually, you can enforce uniqueness on the output of a function; you just can't do it in the table definition. What you need to do is create a unique index afterwards. So something like:
CREATE UNIQUE INDEX subgroups_ukey2 ON subgroups(group_id, upper(trim(subgroup_name)));
PostgreSQL has a number of absolutely amazing indexing capabilities, and the ability to create unique (and partial unique) indexes on function output is quite underrated.
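If the foreign keys are the sticking point, here is a hedged sketch of Option A from the question, with the cleaned-up name stored as a real key column (the column types are illustrative):
CREATE TABLE subgroups (
    group_id          integer NOT NULL REFERENCES groups(group_id),
    subgroup_name     text    NOT NULL,  -- cleaned: upper(trim(raw input))
    subgroup_name_raw text    NOT NULL,  -- the string as the user typed it
    PRIMARY KEY (group_id, subgroup_name)
);
Child tables can then declare FOREIGN KEY (group_id, subgroup_name) REFERENCES subgroups (group_id, subgroup_name), since the key now consists of stored columns rather than an expression.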

Setting constraint for two unique fields in PostgreSQL

I'm new to Postgres. I wonder, what is the PostgreSQL way to set a constraint for a pair of values, so that each pair is unique? Should I create an INDEX on the bar and baz fields?
CREATE UNIQUE INDEX foo ON table_name(bar, baz);
If not, what is the right way to do that? Thanks in advance.
If each field needs to be unique unto itself, then create unique indexes on each field. If they need to be unique in combination only, then create a single unique index across both fields.
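A sketch of both variants, using the question's names (the index names are illustrative):
-- each field unique on its own
CREATE UNIQUE INDEX table_name_bar_key ON table_name (bar);
CREATE UNIQUE INDEX table_name_baz_key ON table_name (baz);
-- unique in combination only
CREATE UNIQUE INDEX table_name_bar_baz_key ON table_name (bar, baz);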
Don't forget to set each field NOT NULL if it should be. NULLs are never unique, so something like this can happen:
create table test (a int, b int);
create unique index test_a_b_unq on test (a,b);
insert into test values (NULL,1);
insert into test values (NULL,1);
and get no error, because the two NULLs do not count as duplicates.
You can do what you are already thinking of: create a unique constraint on both fields. That way, a unique index will be created behind the scenes, and you will get the behavior you need. Plus, the information can be picked up from information_schema if you ever need to infer metadata about the fact that the fields must be unique in combination. I would recommend this option. You could also use triggers, but a unique constraint is far better for this specific requirement.
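A minimal sketch of that constraint form (the constraint name is illustrative):
ALTER TABLE table_name
    ADD CONSTRAINT table_name_bar_baz_unique UNIQUE (bar, baz);
Behind the scenes this creates the same unique index as above, but the rule is also visible as a constraint in information_schema.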

DB associative entities and indexing

This is kind of a general DB design question. If one has an associative entity table, i.e. a cross-reference containing records that basically just consist of two FK references, should it be indexed in some way? Is it necessary to index that table explicitly, since the PKs in the associated tables are already indexed by definition? And if one should index it, should it be a combined index consisting of the two FK fields together?
Indexes on the referenced pk columns in the other tables do not cover it.
By defining the two FK columns as the composite primary key of the "associative entity" table (as you should in most cases, provided that associations are unique), you implicitly create a multi-column index.
That covers all queries involving both columns, or just the first column, optimally.
It also covers queries on the second column, but less effectively.
If you have important queries involving just the second column, create an additional index on that one, too.
Read all the details about the topic in this related question on dba.SE, or this question on SO, which also covers it.
Suppose your associative table has a schema such as:
CREATE TABLE Association
(
    ReferenceA INTEGER NOT NULL CONSTRAINT FK1_Association REFERENCES TableA,
    ReferenceB INTEGER NOT NULL CONSTRAINT FK2_Association REFERENCES TableB,
    CONSTRAINT PK_Association PRIMARY KEY (ReferenceA, ReferenceB)
);
The chances are that your DBMS will automatically create some indexes.
Some DBMSs will create an index for each of the two foreign keys and also a unique index for the primary key. This is slightly wasteful, since the PK index could be used for accessing ReferenceA too.
Ideally, there will be just two indexes: the PK (unique) index and the (duplicates allowed) FK index for ReferenceB, assuming that the PK index has ReferenceA as the first column.
If a DBMS does not automatically create indexes to enforce the referential integrity constraints, you'll want to create the RI or FK duplicates-allowed index. If it doesn't automatically create an index to enforce the PK constraint, you'll want to create that unique index too. The upside is that you'll only create the indexes for the ideal case.
Depending on your DBMS, you might find it more effective to create the table without the constraints, then to add the indexes, and then to add the constraints (which will then use the indexes you created). Things like fragmentation schemes can also factor into this; I ignored them above.
The concept remains simple: you want two indexes in total, one to enforce uniqueness on both columns and provide fast access on the leading column, and a non-unique (duplicates-allowed) index on the trailing column.
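In Postgres terms, a minimal sketch of that end state for the Association table above (the PK in the table definition already supplies the first, unique index; the index name here is illustrative):
-- duplicates-allowed index for lookups on the trailing column
CREATE INDEX Association_ReferenceB_idx ON Association (ReferenceB);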