How to create a case insensitive index or constraint in HSQL - postgresql

How can I create a case insensitive index or constraint in HSQL running in PostgreSQL (;sql.syntax_pgs=true) mode?
In PostgreSQL it can be done with lower() (HSQLDB offers lcase()):
CREATE UNIQUE INDEX lower_username_index ON enduser_table ((lcase(name)));
PostgreSQL also has the CITEXT datatype, but unfortunately it does not seem to be supported in HSQL.
I'm currently on HSQL 2.2.8 and PostgreSQL 9.0.5. Alternatively, are there other in-memory databases that might be a better fit for testing PostgreSQL DDL and SQL?
Thanks in advance!

With HSQLDB, it's better to define a UNIQUE constraint, rather than a unique index.
There are two ways of achieving your aim:
Change the type of the column to VARCHAR_IGNORECASE, then use ALTER TABLE enduser_table ADD CONSTRAINT CONST_1 UNIQUE(name)
Alternatively, create a generated column and then create the UNIQUE constraint on that column: `ALTER TABLE enduser_table ADD COLUMN lc_name VARCHAR(1000) GENERATED ALWAYS AS (LCASE(name))`
With both methods, duplicate values in the NAME column are rejected. With the first method, the index is used for searches on the NAME column. With the second method, the index is used for searches on the lc_name column.
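A minimal sketch of the second approach (the column length and the constraint name CONST_2 are illustrative, not required):
ALTER TABLE enduser_table ADD COLUMN lc_name VARCHAR(1000) GENERATED ALWAYS AS (LCASE(name));
ALTER TABLE enduser_table ADD CONSTRAINT CONST_2 UNIQUE(lc_name);
-- searches should then target the generated column, e.g.
SELECT * FROM enduser_table WHERE lc_name = LCASE('SomeName');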
(UPDATE) If you want to use the PostgreSQL CITEXT type, define the type in HSQLDB, then use the first alternative.
CREATE TYPE CITEXT AS VARCHAR_IGNORECASE(2000)
CREATE TABLE enduser_table (name CITEXT, ...
ALTER TABLE enduser_table ADD CONSTRAINT CONST_1 UNIQUE(name)
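With the constraint in place, values that differ only in case are rejected; a quick check (assuming the table's other columns are nullable or have defaults):
INSERT INTO enduser_table (name) VALUES ('Alice');
INSERT INTO enduser_table (name) VALUES ('ALICE'); -- rejected: violates CONST_1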

Related

Unexpected creation of duplicate unique constraints in Postgres

I am writing an idempotent schema change script for a Postgres 12 database. However, I noticed that if I include IF NOT EXISTS in an ADD COLUMN statement, then even if the column already exists it adds a duplicate index for the uniqueness constraint that already exists. Simple example:
-- set up base table
CREATE TABLE IF NOT EXISTS test_table
(id SERIAL PRIMARY KEY
);
-- statement intended to be idempotent
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50) UNIQUE;
Running this script creates a new index test_table_name_key[n] each time it is run. I can't find anything about this in the Postgres documentation and don't understand why it is allowed to happen. If I break it into two parts, e.g.:
ALTER TABLE test_table
ADD COLUMN IF NOT EXISTS name varchar(50);
ALTER TABLE test_table
ADD CONSTRAINT test_table_name_key UNIQUE (name);
Then the transaction fails because Postgres rejects the creation of a constraint that already exists (which I can then catch in a DO ... EXCEPTION block). As far as I can tell this is because, with this approach, I am forced to give the constraint a name. This contrasts with ALTER COLUMN ... SET NOT NULL, which can be run multiple times without error or side effects as far as I can tell.
Question: why does it add a duplicate unique constraint, and are there any problems with having multiple identical indexes on a table column? (I think this is a subtle 'error', and I only spotted it by chance, so I am concerned it may arise in a production situation.)
You can create multiple unique constraints on the same column as long as they have different names, simply because there is nothing in the PostgreSQL code that forbids that. Each unique constraint will create a unique index with the same name, because that is how unique constraints are implemented.
This can be a valid use case: for example, if the index is bloated, you could create a new constraint and then drop the old one.
But normally, it is useless and does harm, because each index will make data modifications on the table slower.
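If the goal is an idempotent script, one option is the DO ... EXCEPTION approach mentioned in the question: name the constraint and swallow the "already exists" errors. A sketch, reusing the constraint name from above:
DO $$
BEGIN
  ALTER TABLE test_table
    ADD CONSTRAINT test_table_name_key UNIQUE (name);
EXCEPTION
  WHEN duplicate_object OR duplicate_table THEN
    NULL;  -- the constraint, or the index backing it, already exists
END
$$;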

With Check Option Postgresql

I have an ALTER TABLE statement, written in T-SQL (SQL Server):
ALTER TABLE myTable WITH CHECK ADD CONSTRAINT [FK_myTable_myColumn] FOREIGN KEY(myColumn) REFERENCES otherTable (Column)
If I want to translate this statement to PostgreSQL, how can I do that, paying particular attention to WITH CHECK ADD CONSTRAINT?
You need to
remove WITH CHECK - I don't know what this is supposed to do, but you can't have a "check constraint" together with a foreign key constraint in Postgres
use standard-compliant identifiers (without the square brackets)
ALTER TABLE my_table
ADD CONSTRAINT fk_mytable_mycolumn
FOREIGN KEY (my_column) REFERENCES other_table (other_column)
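For completeness: in SQL Server, WITH CHECK means existing rows are validated against the new constraint, which is what PostgreSQL does by default; if you want the opposite behavior, PostgreSQL offers NOT VALID. A rough sketch, reusing the placeholder names above:
-- add the constraint without checking existing rows
ALTER TABLE my_table
ADD CONSTRAINT fk_mytable_mycolumn
FOREIGN KEY (my_column) REFERENCES other_table (other_column) NOT VALID;
-- check existing rows later, once the data has been cleaned up
ALTER TABLE my_table VALIDATE CONSTRAINT fk_mytable_mycolumn;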

UNACCENT when checking for UNIQUE constraint violations in PostgreSQL

We have a UNIQUE constraint on a table to prevent our city_name and state_id combinations from being duplicated. The problem we have found is that accents circumvent this.
Example:
"Montréal" "Quebec"
and
"Montreal" "Quebec"
We need a way to have the unique constraint run UNACCENT() and preferably wrap it in LOWER() as well for good measure. Is this possible?
You can create an immutable version of unaccent:
CREATE FUNCTION noaccent(text) RETURNS text
LANGUAGE sql IMMUTABLE STRICT AS
'SELECT unaccent(lower($1))';
and use that in a unique index on the column.
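For example (a sketch; the table name cities and the index name are assumptions, the column names come from the question):
CREATE EXTENSION IF NOT EXISTS unaccent;  -- provides unaccent()
-- unique on the unaccented, lower-cased city name within each state
CREATE UNIQUE INDEX cities_city_state_uniq
ON cities (noaccent(city_name), state_id);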
An alternative is to use a BEFORE INSERT OR UPDATE trigger that fills a new column with the unaccented value and to put a unique constraint on that column.
You can create unique indexes on expressions, see the Postgres manual:
https://www.postgresql.org/docs/9.3/indexes-expressional.html
So in your case it could be something like
CREATE UNIQUE INDEX idx_foo ON my_table ( UNACCENT(LOWER(city_name)), state_id )

Adding Identity column in Ingres Db

I am trying to add an identity column to a table through an ALTER query in Ingres. When creating the table, I am able to define the identity column, but not when I try to add it through an ALTER query. Kindly suggest an ALTER query for it.
It's not as straightforward as you might think: "alter table" has a number of restrictions which make this a multi-step operation. Try this:
create table something(a integer, b varchar(20)) with page_size=8192;
alter table something add column c integer not null with default;
modify something to reconstruct;
alter table something alter column c integer not null generated always as identity;
modify something to reconstruct;

PostgreSQL: autoincrement for varchar type field

I'm switching from MongoDB to PostgreSQL and was wondering how I can implement the same concept MongoDB uses to uniquely identify each row by its MongoId.
After the migration, the already existing unique fields in our database are stored as a character type. I am looking for minimal source code changes.
So, is there any way in PostgreSQL to generate an auto-incrementing unique ID for each row inserted into a table?
The closest thing to MongoDB's ObjectId in PostgreSQL is the uuid type. Note that ObjectId has only 12 bytes, while UUIDs have 128 bits (16 bytes).
You can convert your existing IDs by appending (or prepending) e.g. '00000000' to them.
alter table some_table
alter id_column
type uuid
using (id_column || '00000000')::uuid;
Although it would be best if you could do this while migrating the schema + data. If you can't do it during the migration, you need to update your IDs while they are still varchars (this way the referenced columns will propagate the change), drop the foreign keys, do the ALTER TYPE, and then re-apply the foreign keys.
You can generate various UUIDs (for default values of the column) with the uuid-ossp module.
create extension "uuid-ossp";
alter table some_table
alter id_column
set default uuid_generate_v4();
Use a sequence as a default for the column:
create sequence some_id_sequence
start with 100000
owned by some_table.id_column;
The start with value should be bigger than your current maximum number.
Then use that sequence as a default for your column:
alter table some_table
alter id_column set default nextval('some_id_sequence')::text;
The better solution would be to change the column to an integer column. Storing numbers in a text (or varchar) column is a really bad idea.
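If you do go that route, a rough sketch of the conversion (assuming every existing ID is a purely numeric string, PostgreSQL 10 or later for identity columns, and ignoring the foreign-key caveats above; 100000 is only a placeholder for a value above your current maximum):
-- 1) change the column type; this fails if any value is not a valid integer
alter table some_table
alter id_column type bigint using id_column::bigint;
-- 2) have the database generate new values, starting above the current maximum
alter table some_table
alter id_column add generated by default as identity (start with 100000);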