I have two databases, one running on PostgreSQL 8.4 and the other on PostgreSQL 9.1.
Both are on CentOS machines with the same locale (en_US).
Suppose I have a table with this data:
id | description
---+------------
 1 | Morango
 2 | CAFÉ
 3 | pera
 4 | Uva
The odd thing is, when I run a query like this one:
SELECT * FROM products WHERE description ~* 'café'
On the 8.4 machine I get no results, but on the 9.1 machine I get the row (CAFÉ).
Apparently they differ in how they compare the uppercase Unicode character.
Could someone give me some insight into this problem?
Is it the difference in PostgreSQL versions that causes this?
Is there any additional configuration I could make to equalize the behavior of the two machines?
UPDATE: Both databases are UTF-8
Case-insensitive regex matching for non-ASCII Unicode characters was basically not supported before 9.0.
See this snippet in the 9.0 release notes:
E.14.3.6. Functions
[...]
Support locale-specific regular expression processing with UTF-8
server encoding (Tom Lane)
Locale-specific regular expression functionality includes
case-insensitive matching and locale-specific character classes.
Previously, these features worked correctly for non-ASCII characters
only if the database used a single-byte server encoding (such as
LATIN1). They will still misbehave in multi-byte encodings other than
UTF-8.
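If upgrading is not an option, one possible workaround on 8.4 (assuming lower() handles non-ASCII characters correctly under your UTF-8 locale, which you should verify) is to fold case explicitly instead of relying on the case-insensitive regex operator:
-- Sketch of a workaround: compare case-folded values with the
-- case-sensitive regex operator.
SELECT *
FROM   products
WHERE  lower(description) ~ lower('café');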
Related
I have a varchar column that contains only ASCII symbols. I don't need to sort by this field, but I need to search it by full equality.
Default locale is en.UTF8. Will I gain anything if I create this column with collate "C"?
Yes, it makes a difference.
Even if you do not sort deliberately, there are various operations requiring sort steps internally (some aggregate functions, DISTINCT, merge joins, etc.).
Also, any index on the field has to sort values internally - and observe collation rules unless COLLATE "C" applies (no collation rules).
For searches by full equality you'll want an index - which works either way (for equality), but it's faster overall without collation rules. Depending on the details of your use case, the effect may be negligible or substantial. The impact grows with the length of your strings. I ran a benchmark on a related case some time ago:
Slow query ordering by a column in a joined table
Also, there are more pattern matching options with locale "C". The alternative would be to create an index with the special varchar_pattern_ops operator class.
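For illustration, a minimal sketch of the two approaches (table, column and index names are made up):
-- Option 1: a column with the "C" collation (Postgres 9.1+); a plain
-- btree index on it also supports left-anchored LIKE patterns.
CREATE TABLE tbl (
  code varchar(20) COLLATE "C"
);
CREATE INDEX tbl_code_idx ON tbl (code);

-- Option 2: keep the default collation and add an index with the
-- special operator class for pattern matching.
CREATE INDEX tbl_code_pattern_idx ON tbl (code varchar_pattern_ops);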
Related:
PostgreSQL LIKE query performance variations
Operator “~<~” uses varchar_pattern_ops index while normal ORDER BY clause doesn't?
Pattern matching with LIKE, SIMILAR TO or regular expressions in PostgreSQL
Postgres 9.5 introduced a performance improvement for sorting with a technique called "abbreviated keys", which ran into problems with some locales. So it was deactivated, except for the C locale. Quoting the release notes of Postgres 9.5.2:
Disable abbreviated keys for string sorting in non-C locales (Robert Haas)
PostgreSQL 9.5 introduced logic for speeding up comparisons of string
data types by using the standard C library function strxfrm() as a
substitute for strcoll(). It now emerges that most versions of glibc
(Linux's implementation of the C library) have buggy implementations
of strxfrm() that, in some locales, can produce string comparison
results that do not match strcoll(). Until this problem can be better
characterized, disable the optimization in all non-C locales. (C
locale is safe since it uses neither strcoll() nor strxfrm().)
Unfortunately, this problem affects not only sorting but also entry
ordering in B-tree indexes, which means that B-tree indexes on text,
varchar, or char columns may now be corrupt if they sort according to
an affected locale and were built or modified under PostgreSQL 9.5.0
or 9.5.1. Users should REINDEX indexes that might be affected.
It is not possible at this time to give an exhaustive list of
known-affected locales. C locale is known safe, and there is no
evidence of trouble in English-based locales such as en_US, but some
other popular locales such as de_DE are affected in most glibc
versions.
The problem also illustrates where collation rules come in, generally.
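If you ran 9.5.0 or 9.5.1 with an affected non-C locale, rebuilding potentially corrupted btree indexes on text / varchar / char columns might look like this (index and table names are placeholders):
-- Rebuild a single suspect index:
REINDEX INDEX tbl_description_idx;
-- Or rebuild all indexes of a table at once:
REINDEX TABLE tbl;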
Is it possible to order the results of a PostgreSQL query by a title field that contains characters like [](),; etc., but do so ignoring these punctuation characters and sorting only by the text characters?
I've read articles on changing the database collation or locale, but have not found any clear instructions on how to do this on an existing database and on a per-column basis. Is this even possible?
"Normalize" for sorting
You could use regexp_replace() with the pattern '[^a-zA-Z]' in the ORDER BY clause, but that only recognizes pure ASCII letters. Better to use the class shorthand '\W', which also recognizes non-ASCII letters in your locale like äüóèß etc.
Or you could improvise and "normalize" all characters with diacritic elements to their base form with the help of the unaccent() function. Consider this little demo:
SELECT *
     , regexp_replace(x, '[^a-zA-Z]', '', 'g')
     , regexp_replace(x, '\W', '', 'g')
     , regexp_replace(unaccent(x), '\W', '', 'g')
FROM  (SELECT 'XY ÖÜÄöüäĆČćč€ĞğīїıŁłŃńŇňŐőōŘřŠšŞşůŽžż‘´’„“”–—[](),;.:̈ XY'::text AS x) t;
->SQLfiddle for Postgres 9.2.
->SQLfiddle for Postgres 9.1.
The regular expression code was updated in version 9.2. I assume this is the reason for the improved handling in 9.2, where all letter characters in the example are matched, while 9.1 only matches some of them.
unaccent() is provided by the additional module unaccent. Run:
CREATE EXTENSION unaccent;
once per database to install it (Postgres 9.1+; older versions use a different technique).
locales / collation
You must be aware that Postgres relies on the underlying operating system for locales (including collation). The sort order is governed by your chosen locale, or more specifically by LC_COLLATE. More in this related answer:
String sort order (LC_COLLATE and LC_CTYPE)
There are plans to incorporate collation support into Postgres directly, but that's not available at this time.
Many locales ignore the special characters you describe for sorting character data out of the box. If you have a locale installed in your system that provides the sort order you are looking for, you can use it ad-hoc in Postgres 9.1 or later:
SELECT foo FROM bar ORDER BY foo COLLATE "xy_XY"
To see which collations are installed and available in your current Postgres installation:
SELECT * FROM pg_collation;
Unfortunately it is not possible to define your own custom collation (yet) unless you hack the source code.
The collation rules are usually governed by the rules of a language as spoken in a country. The sort order telephone books would be in, if there were still telephone books ... Your operating system provides them.
For instance, in Debian Linux you can use:
locale -a
to display all generated locales. And:
dpkg-reconfigure locales
as root user (one way of several) to generate / install more.
If you want to have this ordering in one particular query, you can:
ORDER BY regexp_replace(title, '[^a-zA-Z]', '', 'g')
It will delete all non-A-Z characters from the string and order by the resulting value.
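For example, with a hypothetical table books:
-- Sort by title while ignoring everything except ASCII letters:
SELECT title
FROM   books
ORDER  BY regexp_replace(title, '[^a-zA-Z]', '', 'g');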
According to the PostgreSQL 9.2 documentation, if I am using a locale other than the C locale (en_US.UTF-8 in my case), btree indexes on text columns for supporting queries like
SELECT * from my_table WHERE text_col LIKE 'abcd%'
need to be created using text_pattern_ops like so
CREATE INDEX my_idx ON my_table (text_col text_pattern_ops)
Now section 11.9 of the documentation states that this results in a "character by character" comparison. Are these (non-wide) C characters or does the comparison understand UTF-8?
Good question, I'm not totally sure, but my tentative understanding is:
Here PostgreSQL means "real characters" (possibly multibyte), not bytes. The comparison "understands UTF-8" always, with or without this special index.
The point is that, for locales that have special (non-C) collation rules, we normally want to follow those rules (and call the respective locale libraries) when doing comparisons (<, >, ...) and sorting. But we don't want to use those collations for POSIX regular expression matching and LIKE patterns. Hence the existence of two different types of indexes for text.
The operators in the text_pattern_ops operator class actually do a memcmp() on the strings, so the documentation is perhaps slightly inaccurate talking about characters.
But this doesn't really affect the question of whether they support UTF-8. The indexing of pattern-matching operations in the described fashion does support UTF-8. The underlying operators don't have to worry about the encoding.
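To illustrate with the names from the question (the plan is only indicative and depends on table size and statistics):
-- Assuming the index from the question exists:
--   CREATE INDEX my_idx ON my_table (text_col text_pattern_ops);
EXPLAIN
SELECT * FROM my_table WHERE text_col LIKE 'abcd%';
-- In a non-C locale, only the text_pattern_ops index is usable for this
-- left-anchored pattern; expect an index or bitmap index scan on my_idx
-- once the table is big enough for an index scan to pay off.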
I want to store Unicode characters in one of the columns of a PostgreSQL 8.4 database table. I want to store non-English language data, say Indic language texts. I achieved the same in Oracle XE by converting the text to Unicode and storing it in the table using the nvarchar2 column data type.
In the same way, I want to store Unicode characters of Indic languages (say Tamil, Hindi) in one of the columns of a table. How can I achieve that, and what data type should I use?
Please guide me, thanks in advance.
Just make sure the database is initialized with encoding UTF8. This applies to the whole database for 8.4; later versions are more sophisticated. You might want to check the locale settings too - see the manual for details, particularly around matching with LIKE and text pattern ops.
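A minimal sketch, assuming the database does not exist yet (the database name and locale are just examples):
-- Check the encoding of the current database:
SHOW server_encoding;

-- Create a new UTF-8 database; 8.4+ allows per-database locale settings,
-- and template0 avoids conflicts with a differently encoded template1.
CREATE DATABASE indic_db
  ENCODING   'UTF8'
  LC_COLLATE 'en_US.UTF-8'
  LC_CTYPE   'en_US.UTF-8'
  TEMPLATE   template0;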
If I have fields of NVARCHAR (or NTEXT) data type in a Microsoft SQL Server database, what would be the equivalent data type in a PostgreSQL database?
I'm pretty sure postgres varchar is the same as Oracle/Sybase/MSSQL nvarchar even though it is not explicit in the manual:
http://www.postgresql.org/docs/7.4/static/datatype-character.html
Encoding conversion functions are here:
http://www.postgresql.org/docs/current/static/functions-string.html
http://www.postgresql.org/docs/current/static/functions-string.html#CONVERSION-NAMES
Example:
create table nvctest (
    utf8fld varchar(12)
);
insert into nvctest
select convert('PostgreSQL' using ascii_to_utf_8);
select * from nvctest;
Also, there is this response to a similar question from a PostgreSQL rep:
All of our TEXT datatypes are multibyte-capable, provided you've installed PostgreSQL correctly.
This includes: TEXT (recommended), VARCHAR, CHAR.
Short answer: There is no PostgreSQL equivalent to SQL Server NVARCHAR.
The NVARCHAR(N) types on different databases are not equivalent.
The standard allows for a wide choice of character collations and encodings/character sets. When dealing with Unicode, PostgreSQL and SQL Server fall into different camps, and no equivalence exists.
These differ w.r.t.
length semantics
representable content
sort order
padding semantics
Thus moving data from one DB system (or encoding/character set) to another can lead to truncation/content loss.
Specifically, no PostgreSQL (9.1) character type is equivalent to SQL Server NVARCHAR.
You may migrate the data to a PostgreSQL binary type, but would then lose text querying capabilities.
(Unless PostgreSQL starts supporting a UTF-16 based Unicode character set.)
Length semantics
N is interpreted differently (Characters, Bytes, 2*N = Bytes) depending on database and encoding.
Microsoft SQL Server uses the UCS-2 encoding, with the NVARCHAR length interpreted as UCS-2 code points, that is length * 2 = length in bytes ( https://learn.microsoft.com/en-us/sql/t-sql/data-types/nchar-and-nvarchar-transact-sql?view=sql-server-2017 ):
their NVARCHAR(1) can store 1 UCS-2 character (2 bytes of UCS-2).
Oracle's UTF encoding has the same semantics (and uses CESU-8 storage internally).
Postgres 9.1 only has a Unicode UTF-8 character set (https://www.postgresql.org/docs/9.1/multibyte.html), which, like
Oracle (in AL32UTF8 or AL16UTF16 encoding), can store full Unicode code points. That is potentially up to 4 bytes per character (see e.g.
http://www.oracletutorial.com/oracle-basics/oracle-nvarchar2/ which explicitly states that an NVARCHAR2(50) column may take up to 200 bytes).
The difference becomes significant when dealing with characters outside the Basic Multilingual Plane, which count as one "char unit" in UTF-8/UTF-32 (Go, char, char32_t, PostgreSQL), but are represented as surrogate pairs in UTF-16 and count as two units (Java, JavaScript, C#, ABAP, wchar_t, SQL Server).
For example, U+1F60A SMILING FACE WITH SMILING EYES will use up all the space in an SQL Server NVARCHAR(2),
but only one character unit in PostgreSQL.
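A quick way to see this in a UTF-8 PostgreSQL database:
-- The emoji counts as one character but occupies four bytes in UTF-8:
SELECT char_length('😊') AS characters,  -- 1
       octet_length('😊') AS bytes;      -- 4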
Classical enterprise-grade DBs will offer at least a choice with UTF-16-like semantics (SAP HANA (CESU-8), DB2 with collation, SQL Anywhere (CESU8BIN), ...).
E.g. Oracle also offers what they misleadingly call a UTF-8 collation, which is effectively CESU-8.
This has the same length semantics and representable content as UTF-16 (= Microsoft SQL Server), and is a suitable collation for UTF-16-based enterprise systems (e.g. SAP R/3) or under a Java application server.
Note that some databases may still interpret NVARCHAR(N) as a limit on the length in bytes, even with a variable-length Unicode encoding (example: SAP IQ).
Unrepresentable content
UTF-16 / CESU-8 based systems can represent half surrogate pairs, while
UTF-8 / UTF-32 based systems cannot.
Such content is unrepresentable in those character sets, but it is a frequent occurrence in UTF-16 based enterprise systems
(e.g. Windows pathnames may contain such non-UTF-8-representable characters, see e.g. https://github.com/rust-lang/rust/issues/12056).
Thus UTF-16 is, in this respect, a "superset" of UTF-8/UTF-32, which is typically a killer criterion when dealing with data from enterprise/OS systems based on this encoding (SAP, Windows, Java, JavaScript). Note that JavaScript's JSON encoding took specific care to be able to represent these characters (https://www.rfc-editor.org/rfc/rfc8259#page-10).
(2) and (3) are more relevant when migrating queries, but not for data migration.
Binary sort order:
Note that the binary sort order of CESU-8/UTF-16 is different from that of UTF-8/UTF-32.
UTF-16/CESU-8/Java/JavaScript/ABAP sort order:
U+0041 LATIN CAPITAL LETTER A
U+1F60A SMILING FACE WITH SMILING EYES
U+FB03 LATIN SMALL LIGATURE ffi
UTF-8 / UCS-32 (go) sort order:
U+0041 LATIN CAPITAL LETTER A
U+FB03 LATIN SMALL LIGATURE ffi
U+1F60A SMILING FACE WITH SMILING EYES
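This byte-wise behavior can be reproduced in PostgreSQL with the "C" collation (a small, self-contained sketch):
-- Byte-wise UTF-8 ordering, matching the UTF-8 / UCS-32 list above:
SELECT c
FROM  (VALUES ('A'::text), ('😊'), ('ﬃ')) AS t(c)
ORDER  BY c COLLATE "C";
-- A  (U+0041, 1 byte)
-- ﬃ  (U+FB03, 3 bytes starting 0xEF)
-- 😊 (U+1F60A, 4 bytes starting 0xF0)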
Padding semantics
Padding semantics differ between databases, esp. when comparing VARCHAR with CHAR content.
It's varchar and text, assuming your database is in UNICODE encoding. If your database is in a non-UNICODE encoding, there is no special datatype that will give you a unicode string - you can store it as a bytea stream, but that will not be a string.
Standard TEXT datatype is perfectly fine for it.