PostgreSQL unique index using gist - postgresql

I have a table "locations" with "from" and "to" columns with "point" type.
I'm only able to use "gist" indexes on those columns as the b-tree is not available for "point" type.
I would like to have a unique index on both of the columns (to ensure there is no same location stored).
This is not possible: it fails with the error "access method "gist" does not support unique indexes".
Is it somehow possible to achieve this? I could work around it by creating a regular text column storing "from_lat,from_lng:to_lat,to_lng" and adding a unique index on it, but is there a better way?

You can use an exclusion constraint. A unique constraint (or index) is essentially just a special case of an exclusion constraint.
An exclusion constraint can be defined using GiST:
alter table locations
add constraint unique_points
exclude using gist ("from" with ~=, "to" with ~=);
The operator ~= checks two points for equality.
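For illustration, here is a minimal sketch of how the constraint behaves (assuming a bare locations table with just the two point columns; the exact error text may differ by version):

create table locations ("from" point, "to" point);

alter table locations
  add constraint unique_points
  exclude using gist ("from" with ~=, "to" with ~=);

insert into locations values (point(1, 2), point(3, 4));  -- accepted
insert into locations values (point(1, 2), point(3, 4));  -- rejected: conflicting key value violates exclusion constraint "unique_points"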

You may use the "exclude" constraint, but it is neither a unique index nor a primary key. Furthermore, an upsert via "on conflict (...) do update" is impossible, because resolving it requires a primary key or unique constraint.
Per https://www.postgresql.org/docs/13/indexes-unique.html, it is impossible to use any range type as a unique key (or primary key, including rejecting violations by overlapping ranges, etc.).
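To make the upsert limitation concrete, a hedged sketch (reusing the unique_points constraint from the previous answer): DO NOTHING can name an exclusion constraint as its arbiter, while DO UPDATE cannot.

insert into locations values (point(1, 2), point(3, 4))
on conflict on constraint unique_points do nothing;  -- works: DO NOTHING accepts an exclusion constraint as arbiter

-- on conflict on constraint unique_points do update set ...
-- would be rejected: DO UPDATE requires a unique index or unique/primary key constraint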

Related

PostgreSQL Unique constraint and compound index

I have a table with a unique constraint on two fields, I also use this as an index for faster performance. I want to query a third field as part of this index but I don't want the third field to be part of the unique constraint. i.e. I don't want a new composite index just for the third field as it's quite large.
Is there a way to do this in Postgres? I presently create the unique constraint and get the index created for free, can I specify the three-field composite index and tell the unique constraint to use this index, and Postgres will figure out it can use this index as a UC?
You can use the INCLUDE option:
create unique index on the_table (column_1, column_2)
include (column_3);
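INCLUDE is available from PostgreSQL 11 onward. As a sketch (table and column names are placeholders), uniqueness is still enforced on the first two columns only, while the third is merely stored in the index so a query touching all three can often be answered by an index-only scan:

create table the_table (
  column_1 int not null,
  column_2 int not null,
  column_3 text
);

create unique index the_table_c1_c2_uq
  on the_table (column_1, column_2)
  include (column_3);

-- column_3 is not part of the unique key, but it is available to the index:
select column_3 from the_table where column_1 = 1 and column_2 = 2;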

ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found

This may be marked as a duplicate, but I am still running into an issue after referring to
Create Unique case-insensitive constraint on two varchar fields
I have a table std_tbl with some duplicate records in one of the columns, say Column_One.
I created a unique constraint on that column
ALTER TABLE std_tbl
ADD CONSTRAINT Unq_Column_One
UNIQUE (Column_One) ENABLE NOVALIDATE;
I used ENABLE NOVALIDATE as I want to keep existing duplicate records and validate future records for duplicates.
But here, the constraint is case sensitive: if the value of Column_One is 'abcd', it still allows 'Abcd' and 'ABCD' to be inserted into the table.
I want the validation to be case insensitive, so it should ignore case when checking for duplicates. For this I came up with the following:
CREATE UNIQUE INDEX Unq_Column_One_indx ON std_tbl (LOWER(Column_One));
But it is giving me the error:
ORA-01452: cannot CREATE UNIQUE INDEX; duplicate keys found
Please help me out...
This occurs when you try to execute a CREATE UNIQUE INDEX statement on one or more columns that contain duplicate values.
Two ways to resolve (that I know of):
If the values need not be unique, remove the UNIQUE keyword from your CREATE UNIQUE INDEX statement and rerun the command.
If they must be unique, delete the extraneous records that are causing the duplicate values and rerun the CREATE UNIQUE INDEX statement.
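To see which values are causing the duplicates before deleting anything (using the table and column names from the question), a query along these lines can help:

SELECT LOWER(Column_One) AS lowered_value, COUNT(*) AS dup_count
FROM   std_tbl
GROUP  BY LOWER(Column_One)
HAVING COUNT(*) > 1;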

How to create a primary key using the hash method in postgresql

Is there any way to create a primary key using the hash method? Neither of the following statements works:
oid char(30) primary key using hash
primary key(oid) using hash
I assume you meant the hash index method/type.
Primary keys are constraints. Some constraints can create index(es) in order to work properly (but this fact should not be relied upon). For example, a UNIQUE constraint will create a unique index. Note that only B-tree currently supports unique indexes. The PRIMARY KEY constraint is a combination of the UNIQUE and the NOT NULL constraints, so (currently) it only supports B-tree.
You can set up a hash index too, if you want (besides the PRIMARY KEY constraint) -- but you cannot make that unique.
CREATE INDEX name ON table USING hash (column);
But if you are willing to do this, you should be aware that hash indexes have some limitations (up until PostgreSQL 10):
Hash index operations are not presently WAL-logged, so hash indexes might need to be rebuilt with REINDEX after a database crash if there were unwritten changes. Also, changes to hash indexes are not replicated over streaming or file-based replication after the initial base backup, so they give wrong answers to queries that subsequently use them. For these reasons, hash index use is presently discouraged.
Also:
Currently, only the B-tree, GiST and GIN index methods support multicolumn indexes.
Note: Unfortunately, oid is not the best name for a column in PostgreSQL, because it can also be a name for a system column and type.
Note 2: The char(n) type is also discouraged. You can use varchar or text instead, with a CHECK constraint -- or (if the id is uuid-like) the uuid type itself.
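Putting it together, a minimal sketch (table and column names are placeholders): a B-tree-backed primary key plus a separate, non-unique hash index for equality lookups.

create table items (
  id   uuid primary key,  -- backed by a unique B-tree index
  name text not null
);

-- A non-unique hash index; useful only for equality comparisons
create index items_name_hash_idx on items using hash (name);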

Postgres - unique index on primary key

On Postgres, a unique index is automatically created for primary key columns. From the docs,
When an index is declared unique, multiple table rows with equal indexed values are not allowed. Null values are not considered equal. A multicolumn unique index will only reject cases where all indexed columns are equal in multiple rows.
From my understanding, it seems like this index only checks uniqueness and isn't actually present for faster access when querying by primary key IDs. Does this mean that this index structure doesn't consist of a sorted table (or a tree) for the primary key column? Is this correct?
In theory a unique or primary key constraint could be enforced without the presence of an index, but it would be a painful process. The index is mainly there for performance purposes.
However some databases (eg Oracle) allow a unique or primary key constraint to be supported by a non-unique index. Primarily this allows the enforcement of the constraint to be deferred until the end of a transaction, so lack of uniqueness can be permitted temporarily during a transaction, but also allows indexes to be built in parallel and with the constraint then defined as a secondary step.
Also, I'm not sure how the internals work in a PostgreSQL B-tree index, but all Oracle B-trees are internally declared to be unique, either:
on the key column(s), for an index that is intended to be UNIQUE, or
on the key column(s) plus the indexed row's ROWID, for a non-unique index.
Quite the contrary: the index is created in order to allow faster access - mainly to check for duplicates when a new record is inserted, but it can also be used by other queries against the PK columns. The best structure for unique-key indexes is a B-tree, because the index is maintained during the insert - if the RDBMS detects a collision in the leaf, it raises a unique constraint violation.
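A quick way to see the primary key index being used for lookups (a hypothetical table; the default index name accounts_pkey is assumed, and the plan depends on table size and statistics):

create table accounts (id bigint primary key, name text);

explain select * from accounts where id = 42;
-- On a sufficiently large table this should show an index scan on accounts_pkey
-- rather than a sequential scan.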

Setting constraint for two unique fields in PostgreSQL

I'm new to Postgres. I wonder what the PostgreSQL way is to set a constraint on a pair of fields so that each pair of values is unique. Should I create an INDEX for the bar and baz fields?
CREATE UNIQUE INDEX foo ON table_name(bar, baz);
If not, what is the right way to do that? Thanks in advance.
If each field needs to be unique unto itself, then create unique indexes on each field. If they need to be unique in combination only, then create a single unique index across both fields.
Don't forget to set each field NOT NULL if it should be. NULLs never count as duplicates (they are not considered equal), so something like this can happen:
create table test (a int, b int);
create unique index test_a_b_unq on test (a,b);
insert into test values (NULL,1);
insert into test values (NULL,1);
and get no error, because the two NULLs are not considered equal.
You can do what you are already thinking of: create a unique constraint on both fields. A unique index will be created behind the scenes, and you will get the behavior you need. Plus, the constraint shows up in information_schema, so tools can infer from the metadata that the pair must be unique. I would recommend this option; a sketch follows below. You can also enforce this with triggers, but a unique constraint is far better for this specific requirement.
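A minimal sketch of the constraint form (table and column names taken from the question):

create table table_name (
  bar int not null,
  baz int not null
);

alter table table_name
  add constraint table_name_bar_baz_key unique (bar, baz);

-- Equivalent in effect to the unique index from the question:
-- create unique index foo on table_name (bar, baz);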