Prevent non-collation characters in a NVarChar column using constraint? - tsql

Little weird requirement, but here it goes. We have a CustomerId VarChar(25) column in a table. We need to make it NVarChar(25) to work around issues with type conversions.
But we don't want to allow non-Latin characters to be stored in this column. Is there any way to place such a constraint on the column? I'd rather let the database handle this check. In general we're OK with NVarChar for all of our strings, but some columns, like IDs, are not good candidates for it because of the possibility of look-alike strings from different languages.
Example:
CustomerId NVarChar(25) - PK
Value 1: BOPOH
Value 2: ВОРОН
Those two strings are different (the second one is Cyrillic).
I want to prevent this entry scenario and make sure Value 2 cannot be saved into the field.

Just in case it helps somebody: I'm not sure it's the most "elegant" solution, but I placed a constraint like this on those fields:
ALTER TABLE [dbo].[Carrier] WITH CHECK ADD CONSTRAINT [CK_Carrier_CarrierId] CHECK ((CONVERT([varchar](25),[CarrierId],(0))=[CarrierId]))
GO
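A hypothetical demonstration of how that CHECK behaves, assuming the dbo.Carrier table above has a Latin-based column collation and no other required columns (the inserts below are illustrative only):
-- Characters outside the column collation's code page become '?' when converted to VARCHAR,
-- so the round-trip comparison fails and the CHECK rejects the row.
INSERT INTO dbo.Carrier (CarrierId) VALUES (N'BOPOH'); -- succeeds: Latin letters survive the conversion
INSERT INTO dbo.Carrier (CarrierId) VALUES (N'ВОРОН'); -- fails the CHECK: Cyrillic letters do not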

Related

Fast way to check if PostgreSQL jsonb column contains certain string

For the past two days I've been reading a lot about jsonb, full text search, GIN indexes, trigram indexes and whatnot, but I still can't find a definitive (or at least good enough) answer on how to quickly search whether a row of type JSONB contains a certain string as a value. Since it's a search feature, the behavior should be like that of ILIKE.
What I have is:
A table, let's call it app.table_1, which contains a lot of columns, one of which is of type JSONB; let's call it column_jsonb
The data inside column_jsonb will always be flat (no nested objects, etc.), but the keys can vary. An example of the data in the column, with obfuscated values, looks like this:
"{""Key1"": ""Value1"", ""Key2"": ""Value2"", ""Key3"": null, ""Key4"": ""Value4"", ""Key5"": ""Value5""}"
I have a GIN index on this column which doesn't seem to affect the search time significantly (I am testing with 20k records now, which takes about 550 ms). The index looks like this:
CREATE INDEX ix_table_1_column_jsonb_gin
ON app.table_1 USING gin
(column_jsonb jsonb_path_ops)
TABLESPACE pg_default;
I am interested only in the VALUES and the way I am searching them now is this:
EXISTS(SELECT value FROM jsonb_each(column_jsonb) WHERE value::text ILIKE search_term)
Here search_term is a variable coming from the front end with the string the user is searching for.
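For reference, a sketch of how that predicate might look in the full query (app.table_1 and column_jsonb are the names from above; $1 stands for the bound search term, e.g. '%foo%'):
SELECT t.*
FROM app.table_1 AS t
WHERE EXISTS (
    SELECT 1
    FROM jsonb_each(t.column_jsonb) AS e(key, value)
    WHERE e.value::text ILIKE $1  -- note: the ::text cast keeps the JSON quotes; jsonb_each_text avoids them
);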
I have the following questions:
Is it possible to make the check faster without modifying the data model? I've read that a trigram index might be useful for similar cases, but at least to me it seems that converting jsonb to text and then checking would be slower, and I'm actually not sure whether a trigram index will even work if the column's original type is JSONB and I explicitly cast each row to text. If I'm wrong I would really appreciate an explanation, with an example if possible.
Is there some JSONB function that I am not aware of which offers what I am searching for out of the box? I'm constrained to PostgreSQL v11.9, so some new things coming with version 12 are not available to me.
If it's not possible to achieve a significant improvement with the current data structure, can you propose a way to restructure the data in column_jsonb? Maybe another column of some other type, with the data persisted in some other way, I don't know...
Thank you very much in advance!
If the data structure is flat, and you regularly need to search the values, and the values are all the same type, a traditional key/value table would seem more appropriate.
create table table1_options (
table1_id bigint not null references table1(id),
key text not null,
value text not null
);
create index table1_options_key on table1_options(key);
create index table1_options_value on table1_options(value);
select *
from table1_options
where value ilike 'some search%';
I've used simple B-Tree indexes, but you can use whatever you need to speed up your particular searches.
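For ILIKE searches with a leading wildcard ('%term%'), a trigram index is one option (a sketch, assuming the pg_trgm extension can be installed):
CREATE EXTENSION IF NOT EXISTS pg_trgm;
CREATE INDEX table1_options_value_trgm
    ON table1_options USING gin (value gin_trgm_ops);
-- The planner can now use this index for predicates like:
-- WHERE value ILIKE '%search%'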
The downsides are that all values must have the same type (doesn't seem to be a problem here) and you need an extra table for each table. That last one can be mitigated somewhat with table inheritance.
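If you go this route, the key/value table could be populated from the existing jsonb column in one pass. A sketch, assuming table1_options and app.table_1/column_jsonb line up as above and that app.table_1.id is the referenced key:
INSERT INTO table1_options (table1_id, key, value)
SELECT t.id, kv.key, kv.value
FROM app.table_1 AS t
CROSS JOIN LATERAL jsonb_each_text(t.column_jsonb) AS kv(key, value)
WHERE kv.value IS NOT NULL;  -- value is declared NOT NULL, so skip JSON nulls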

How to query PostgreSQL for all records where hstore is empty?

I would like to query my PostgreSQL table for all rows that have an empty hstore. It must be obvious to all but me, as I can't find any documentation on how to do it, nor any other Stack Overflow questions answering it. Checking for NULL isn't helpful, as I get back all rows, even those where properties has no keys/values:
SELECT widgets.*
FROM "widgets"
WHERE properties IS NOT NULL
Any ideas how to do this?
One option would be to return all keys of the column as an array and check whether that array is empty (array_length returns NULL for an empty array, hence the coalesce):
SELECT *
FROM widgets
WHERE properties IS NOT NULL
AND coalesce(array_length(akeys(properties), 1), 0) = 0;
A simpler way of doing this, as per a message on the PostgreSQL mailing list, is:
SELECT * FROM widgets WHERE properties = ''::HSTORE;
An aside:
As a general rule, I would recommend not having a column that can be empty sometimes and NULL other times, outside of cases where you genuinely want a semantic difference, e.g. for an application that interprets the two differently. Especially for BOOLEAN columns, I generally choose to prevent this extra degree of freedom.
I often do something similar to:
ALTER TABLE widgets ALTER COLUMN properties SET NOT NULL;
-- OR:
ALTER TABLE widgets ADD CONSTRAINT its_null_or_nothin CHECK ( properties <> ''::HSTORE );
Depending on what you find easier to remember, you can also create an empty hstore via hstore('{}'::TEXT[]).
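A minimal sketch illustrating the difference between NULL and an empty hstore (the table and data here are hypothetical):
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE widgets (id serial PRIMARY KEY, properties hstore);
INSERT INTO widgets (properties)
VALUES (NULL),                    -- no value at all
       (''::hstore),              -- empty hstore
       ('color => red'::hstore);  -- one key/value pair
SELECT * FROM widgets WHERE properties = ''::hstore;  -- returns only the empty-hstore row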

Using TSQL, CAST() with COLLATE is non-deterministic. How to make it deterministic? What is the work-around?

I have a function that includes:
SELECT @pString = CAST(@pString AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS
This is useful, for example, to remove accents in French:
UPPER(CAST('Éléctricité' AS VARCHAR(255)) COLLATE SQL_Latin1_General_Cp1251_CS_AS)
gives ELECTRICITE.
But using COLLATE makes the function non-deterministic and therefore I cannot use it as a computed persisted value in a column.
Q1. Is there another (quick and easy) way to remove accents like this, with a deterministic function?
Q2. (Bonus Question) The reason I do this computed persisted column is to search. For example the user may enter the customer's last name as either 'Gagne' or 'Gagné' or 'GAGNE' or 'GAGNÉ' and the app will find it using the persisted computed column. Is there a better way to do this?
EDIT: Using SQL Server 2012 and SQL-Azure.
You will find that it is in fact deterministic; it just has different behavior depending on the character you're trying to collate.
Check the page for the Windows-1251 encoding for its behavior on accepted and unacceptable characters.
Here is a collation chart for Cyrillic_General_CI_AI. This is code page 1251, case-insensitive and accent-insensitive. It shows the mappings for all acceptable characters within this collation.
As for the search question, as Keith said, I would investigate putting a full text index on the column you are going to be searching on.
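For example (a sketch only; dbo.Customer, PK_Customer and LastName are assumed names, not from the question), an accent-insensitive full-text catalog lets 'Gagne' match 'Gagné':
CREATE FULLTEXT CATALOG CustomerSearchCatalog WITH ACCENT_SENSITIVITY = OFF;
CREATE FULLTEXT INDEX ON dbo.Customer (LastName)
    KEY INDEX PK_Customer            -- must be an existing unique index on the table
    ON CustomerSearchCatalog
    WITH CHANGE_TRACKING AUTO;
-- Full-text search is case-insensitive, and with ACCENT_SENSITIVITY = OFF it ignores accents too:
SELECT * FROM dbo.Customer WHERE CONTAINS(LastName, N'Gagne');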
The best answer I got was from Sebastian Sajaroff. I used his example to fix the issue. He suggested a VIEW with a UNIQUE INDEX. This gives a good idea of the solution:
create table Test(Id int primary key, Name varchar(20))
create view TestCIAI with schemabinding as
select ID, Name collate SQL_Latin1_General_CP1_CI_AI as NameCIAI from Test
create unique clustered index ix_Unique on TestCIAI (Id)
create unique nonclustered index ix_DistinctNames on TestCIAI (NameCIAI)
insert into Test values (1, 'Sébastien')
--Insertion 2 will fail because of the unique nonclustered indexed on the view
--(which is case-insensitive, accent-insensitive)
insert into Test values (2, 'Sebastien')

In Postgres, is it performance critical to define low cardinality column as int and not text?

I have a column with 4 possible values.
The column is defined as text.
The table is big, about 100 million records, and keeps growing.
The table is used as a report table.
The index on the table is on provider_id, date, enum_field.
I wonder if I should change enum_field from text to int, and how performance-critical that is.
Using Postgres 9.1.
Table:
provider_report:
id bigserial NOT NULL,
provider_id bigint,
date timestamp without time zone,
enum_field character varying,
....
Index:
provider_id,date,enum_field
TL;DR version: worrying about this is probably not worth your time.
Long version:
There is an enum type in Postgres:
create type myenum as enum('foo', 'bar');
There are pros and cons related to using it vs a varchar or an integer field. Mostly pros imho.
In terms of size, it's stored as an oid, i.e. an int32 type. This makes it smaller than a varchar populated with typical values (e.g. 'draft', 'published', 'pending', 'completed', whatever your enum is about), and the same size as an int type. If you have very few values, a smallint / int16 will admittedly be smaller. Some of your performance change will come from there (smaller vs larger field, i.e. mostly negligible).
Validation is possible in each case, be it through the built-in catalog lookup for the enum, or a check constraint or a foreign key for a varchar or an int. Some of your performance change will come from there, and it'll probably not be worth your time either.
Another benefit of the enum type is that it is ordered. In the above example, 'foo'::myenum < 'bar'::myenum, making it possible to order by enumcol. To achieve the same using a varchar or an int, you'll need a separate table with a sortidx column or something... In this case, the enum can yield an enormous benefit if you ever want to order by your enum's values. This brings us to (imho) the only gotcha, which is related to how the enum type is stored in the catalog...
Internally, each enum's value carries an oid, and the latter are stored as is within the table. So it's technically an int32. When you create the enum type, its values are stored in the correct order within the catalog. In the above example, 'foo' would have an oid lower than 'bar'. This makes it very efficient for Postgres to order by an enum's value, since it amounts to sorting int32 values.
When you ALTER your enum, however, you may end up in a situation where you change that order. For instance, imagine you alter the above enum in such a way that myenum is now ('foo', 'baz', 'bar'). For reasons tied to efficiency, Postgres does not assign a new oid for existing values and rewrite the tables that use them, let alone invalidate cached query plans that use them. What it does instead is populate a separate field in the pg_catalog, so as to make it yield the correct sort order. From that point forward, ordering by the enum field requires an extra lookup, which de facto amounts to joining the table with a separate values table that carries a sortidx field -- much like you would do with a varchar or an int if you ever wanted to sort them.
This is usually fine and perfectly acceptable. Occasionally, it's not. When it's not, there is a solution: alter the tables that use the enum type and change those columns to varchar, and locate and adjust functions and triggers that use the type as you go. Then drop the type entirely and recreate it to get fresh oid values. Finally, alter the tables back to the enum type and readjust the functions and triggers. Not trivial, but certainly feasible.
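A sketch of that procedure, assuming a single table orders with a status column of type myenum:
ALTER TABLE orders ALTER COLUMN status TYPE varchar USING status::text;
DROP TYPE myenum;
CREATE TYPE myenum AS ENUM ('foo', 'baz', 'bar');  -- values in the desired final order
ALTER TABLE orders ALTER COLUMN status TYPE myenum USING status::text::myenum;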
It would be best to define enum_field as an ENUM type. It takes minimal space and checks which values are allowed.
As for performance: the only reliable way to know whether it really affects performance is to test it (with a proper set of correct tests). My guess is that the difference will be less than 5%.
And if you really do change the table, don't forget to VACUUM it after the change.
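For example, a sketch of such a conversion (the type name and the four values are hypothetical; note that the ALTER rewrites the whole table):
CREATE TYPE report_enum AS ENUM ('value_a', 'value_b', 'value_c', 'value_d');
ALTER TABLE provider_report
    ALTER COLUMN enum_field TYPE report_enum
    USING enum_field::text::report_enum;
VACUUM ANALYZE provider_report;  -- as suggested above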

Why does Postgres handle NULLs inconsistently where unique constraints are involved?

I recently noticed an inconsistency in how Postgres handles NULLs in columns with a unique constraint.
Consider a table of people:
create table People (
pid int not null,
name text not null,
SSN text unique,
primary key (pid)
);
The SSN column should be kept unique. We can check that:
-- Add a row.
insert into People(pid, name, SSN)
values(0, 'Bob', '123');
-- Test the unique constraint.
insert into People(pid, name, SSN)
values(1, 'Carol', '123');
The second insert fails because it violates the unique constraint on SSN. So far, so good. But let's try a NULL:
insert into People(pid, name, SSN)
values(1, 'Carol', null);
That works.
select *
from People;
0;"Bob";"123"
1;"Carol";"<NULL>"
A unique column will take a null. Interesting. How can Postgres assert that null is in any way unique, or not unique for that matter?
I wonder if I can add two rows with null in a unique column.
insert into People(pid, name, SSN)
values(2, 'Ted', null);
select *
from People;
0;"Bob";"123"
1;"Carol";"<NULL>"
2;"Ted";"<NULL>"
Yes I can. Now there are two rows with NULL in the SSN column even though SSN is supposed to be unique.
The Postgres documentation says, "For the purpose of a unique constraint, null values are not considered equal."
Okay. I can see the point of this. It's a nice subtlety in null-handling: by considering all NULLs in a unique-constrained column to be distinct from one another, we delay enforcing the unique constraint until there is an actual non-null value on which to base that enforcement.
That's pretty cool. But here's where Postgres loses me. If all NULLs in a unique-constrained column are not equal, as the documentation says, then we should see all of the nulls in a select distinct query.
select distinct SSN
from People;
"<NULL>"
"123"
Nope. There's only a single null there. It seems like Postgres has this wrong. But I wonder: Is there another explanation?
Edit:
The Postgres docs do specify, in the section on SELECT DISTINCT, that "null values are considered equal in this comparison." While I do not understand that notion, I'm glad it's spelled out in the docs.
It is almost always a mistake when dealing with null to say:
"nulls behave like so-and-so here, so they should behave like such-and-such here."
Here is an excellent essay on the subject from a Postgres perspective. Briefly summed up: nulls are treated differently depending on the context, so don't make the mistake of making any assumptions about them.
The bottom line is, PostgreSQL does what it does with nulls because the SQL standard says so.
Nulls are obviously tricky and can be interpreted in multiple ways (unknown value, absent value, etc.), and so when the SQL standard was initially written, the authors had to make some calls at certain places. I'd say time has proved them more or less right, but that doesn't mean that there couldn't be another database language that handles unknown and absent values slightly (or wildly) differently. But PostgreSQL implements SQL, so that's that.
As was already mentioned in a different answer, Jeff Davis has written some good articles and presentations on dealing with nulls.
NULL is considered to be unique because NULL doesn't represent the absence of a value. A NULL in a column is an unknown value. When you compare two unknowns, you don't know whether or not they are equal because you don't know what they are.
Imagine that you have two boxes marked A and B. If you don't open the boxes and you can't see inside, you never know what the contents are. If you're asked "Are the contents of these two boxes the same?" you can only answer "I don't know".
In this case, PostgreSQL will do the same thing. When asked to compare two NULLs, it says "I don't know." This has a lot to do with the crazy semantics around NULL in SQL databases. The article linked to in the other answer is an excellent starting point to understanding how NULLs behave. Just beware: it varies by vendor.
Multiple NULL values in a unique index are okay because x = NULL is never true for any x (the comparison yields NULL, i.e. unknown), in particular when x is itself NULL. You'll also run into this behavior in WHERE clauses, where you have to say WHERE x IS NULL and WHERE x IS NOT NULL rather than WHERE x = NULL and WHERE x <> NULL.
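To make that last point concrete, here is a quick sketch using the People table from the question:
SELECT * FROM People WHERE SSN = NULL;       -- no rows: the predicate is NULL (unknown), never true
SELECT * FROM People WHERE SSN IS NULL;      -- returns Carol and Ted
SELECT * FROM People WHERE SSN IS NOT NULL;  -- returns Bob
-- DISTINCT uses "is not distinct from" semantics, under which two NULLs are not distinct:
SELECT NULL::text IS NOT DISTINCT FROM NULL::text;  -- true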