PostgreSQL ORDER BY - Danish characters are expanded

I'm trying to make an ORDER BY clause in a SQL query work, but for some reason the Danish special characters are expanded instead of being evaluated by their collation value.
SELECT roadname FROM mytable ORDER BY roadname
The result:
Abildlunden
Æblerosestien
Agern Alle 1
The result in the middle should come last.
The locale is set to Danish, so PostgreSQL should know the value of the Danish special characters.

What is the collation of your database? (You might also want to give the PostgreSQL version you are using.) Use \l from psql to see it.
Compare and contrast:
steve#steve#[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "en_GB";
word
---------------
Abildlunden
Æblerosestien
Agern Alle 1
(3 rows)
steve#steve#[local] =# select * from (values('Abildlunden'),('Æblerosestien'),('Agern Alle 1')) x(word)
order by word collate "da_DK";
word
---------------
Abildlunden
Agern Alle 1
Æblerosestien
(3 rows)
The database collation is set when you create the database cluster, from the locale you have set at the time. If you installed PostgreSQL through a package manager (e.g. apt-get) then it is likely taken from the system-default locale.
You can override the collation used in a particular column, or even in a particular expression (as done in the examples above). However, if you're not specifying anything (likely), then the database default will be used (which is itself inherited from the template database when the database is created; the template database's collation is fixed when the cluster is created).
If you want to use da_DK as your default collation throughout, and it's not currently your database default, your simplest option might be to dump the database, then drop and re-create the cluster, specifying the collation to initdb (or pg_createcluster, or whatever tool you use to create the server).
BTW, the question isn't well-phrased: PostgreSQL is very much not ignoring the "special" characters; it is correctly expanding "Æ" into "AE", which is a correct rule for English. Collating "Æ" at the end is actually closer to the unlocalised behaviour.
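The per-column override mentioned above can be sketched as follows (the table and column names are illustrative; it assumes the da_DK locale is available on the server):

```sql
-- Attach the Danish collation to the column itself, so every ORDER BY
-- and comparison on it uses Danish rules without an explicit COLLATE:
CREATE TABLE roads (
    roadname text COLLATE "da_DK"
);

SELECT roadname FROM roads ORDER BY roadname;  -- sorts Æ after Z
```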
Collation documentation: http://www.postgresql.org/docs/current/static/collation.html

Related

Query failed: collation "numerickn" for encoding "UTF8" does not exist

I have a column (vendor_name) in a PostgreSQL (AWS RDS) table which can contain alphanumeric values. I would like to do a natural sort on this column.
The sample data in the table is as follows
delta 20221120
delta 20220109
costco delivery 564
costco delivery 561
united 01672519702943
Uber
I have created a collation in the DB as below.
CREATE COLLATION IF NOT EXISTS numerickn (provider = icu, locale = 'en-u-kn-true');
If anyone sorts on the vendor name column in the UI grid, I add the following clause dynamically to my query:
ORDER BY "vendor" COLLATE "numerickn"
However, it gives the following error, even though I can see the collation exists in the DB:
Error: Query failed: collation "numerickn" for encoding "UTF8" does
not exist
I am not sure why it does not work if the collation exists in the DB. In my vendor names, numbers can appear anywhere within the string, so there is no fixed pattern.
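One possible explanation, offered as an assumption rather than something confirmed in this thread: CREATE COLLATION creates the collation in the current schema, so if it was created in a schema that is not on the stage database's search_path, an unqualified reference fails with exactly this error. Schema-qualifying the collation rules that out:

```sql
-- Hypothetical fix: qualify the collation name with its schema
-- (assumed here to be "public"):
ORDER BY "vendor" COLLATE public."numerickn"
```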
I could not find out why it was not working in the stage environment while it was working in my local environment.
In the end, I moved away from the collation logic and implemented the natural sort in a different way, found on Stack Overflow:
PostgreSQL ORDER BY issue - natural sort
I am using Node.js in my API code. My solution goes as follows:
qOrderBy = String.raw` ORDER BY ARRAY(
    SELECT ROW(
        CAST(COALESCE(NULLIF(match[1], ''), '9223372036854775807') AS BIGINT),
        match[2]
    )
    FROM REGEXP_MATCHES(vendor, '(\d*)|(\D*)', 'g')
    AS match ) ${sortOrder}`
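For reference, this is the SQL that the template produces, written out directly (the table name expenses is hypothetical; ascending order assumed). Each value is split into (number, text) pairs so that runs of digits compare as BIGINTs rather than character by character:

```sql
SELECT vendor
FROM expenses  -- hypothetical table name
ORDER BY ARRAY(
    SELECT ROW(
        CAST(COALESCE(NULLIF(match[1], ''), '9223372036854775807') AS BIGINT),
        match[2]
    )
    FROM REGEXP_MATCHES(vendor, '(\d*)|(\D*)', 'g') AS match
) ASC;
```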

Postgresql ORDER BY not working as expected

Let's try this simple example to represent the problem I'm facing.
Assume this table:
CREATE TABLE testing1
(
id serial NOT NULL,
word text,
CONSTRAINT testing1_pkey PRIMARY KEY (id)
);
and that data:
insert into testing1 (word) values ('Heliod, God');
insert into testing1 (word) values ('Heliod''s Inter');
insert into testing1 (word) values ('Heliod''s Pilg');
insert into testing1 (word) values ('Heliod, Sun');
Then I want to run this query to get the results ordered by the word column:
SELECT
id, word
FROM testing1
WHERE UPPER(word::text) LIKE UPPER('heliod%')
ORDER BY word asc;
But look at the output: it's not in the order I expect. I would expect the rows to come in this order, using their ids: 2, 3, 1, 4 (or, using the word values: Heliod's Inter, Heliod's Pilg, Heliod, God, Heliod, Sun). That is not what I get.
I thought that maybe the WHERE criteria were confusing PostgreSQL, but the same thing happens if I simply ORDER BY over all the rows.
Am I missing something here? I couldn't find anything in the docs about ordering values that contain quotes (I suspect the quotes cause this behaviour because of their special meaning in PostgreSQL, but I may be wrong).
I am using UTF-8 encoding for my database (not sure if it matters), and this issue is happening on PostgreSQL version 12.7.
The output of
show lc_ctype;
is
"en_GB.UTF-8"
and the output of
show lc_collate;
is
"en_GB.UTF-8"
That is the correct way to order the rows under en_GB.UTF-8. That locale does 'weird' (to someone used to ASCII) things with punctuation and whitespace: it skips them on a first pass and considers them only for otherwise-tied values.
If you don't want those rules, use the C collation instead.
Indeed, I've tried jjanes's suggestion to use the C collation, and the output is the one I would expect:
SELECT
id, word
FROM testing1
ORDER BY word collate "C" ;
How weird; I have been using PostgreSQL for some years now and I never noticed that behaviour.
Relevant section from the docs:
23.2.2.1. Standard Collations
On all platforms, the collations named default, C, and POSIX are available. Additional collations may be available depending on operating system support. The default collation selects the LC_COLLATE and LC_CTYPE values specified at database creation time. The C and POSIX collations both specify "traditional C" behavior, in which only the ASCII letters "A" through "Z" are treated as letters, and sorting is done strictly by character code byte values.

VARCHAR comparison on an indexed column

Postgres is behaving differently from the 'common sense' expected behaviour.
Given a table 'my_table' and a VARCHAR(250) column named 'MyVarcharColumn', with an index IDX_MyVarcharColumn created on 'MyVarcharColumn':
Collation: Default
Postgres version: 11.12
LC_COLLATE: en_US.utf8
Encoding: UTF8
CTYPE: en_US.utf8
The problem is presented below:
Given a query (A)
SELECT * FROM my_table t
WHERE 'mystring' = t.MyVarcharColumn
When running the query above, no records are returned, even though the value 'mystring' is present in 'my_table'.
Workaround:
SELECT * FROM my_table t
WHERE 'mystring' = t.MyVarcharColumn collate "C"
By adding collate "C" the query works fine. Obviously, no one wants to have to add a COLLATE clause at the end of every query.
Second 'Workaround':
By recreating the database's indexes (REINDEX DATABASE myDB) the query also starts to work as expected, without the need for the COLLATE clause.
The question is: is there a way to make this work without the COLLATE clause and/or a REINDEX?
Re-creating the database with a different collation is also not an option at the moment.
Using lower(column_name) for the comparison isn't an option either, since it does not use the index and would make the query slow.
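A hedged diagnostic sketch, assuming the root cause is an OS collation-library change (a common explanation for REINDEX fixing comparisons on an index over a libc-collated column; not confirmed in this thread). PostgreSQL 10+ records the collation version objects were created with, and mismatches against the currently installed version can be listed:

```sql
-- List collations whose recorded version no longer matches what the OS
-- currently provides; a mismatch suggests indexes over affected columns
-- may be inconsistent and need a REINDEX:
SELECT collname,
       collversion,
       pg_collation_actual_version(oid) AS actual_version
FROM pg_collation
WHERE collversion IS DISTINCT FROM pg_collation_actual_version(oid);
```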

SQL Command to insert Chinese Letters

I have a database with one column of the type nvarchar. If I write
INSERT INTO table VALUES ("玄真")
It shows ¿¿ in the table. What should I do?
I'm using SQL Developer.
Use single quotes, rather than double quotes, to create a text literal. For an NVARCHAR2/NCHAR text literal, you also need to prefix it with N:
SQL Fiddle
Oracle 11g R2 Schema Setup:
CREATE TABLE table_name ( value NVARCHAR2(20) );
INSERT INTO table_name VALUES (N'玄真');
Query 1:
SELECT * FROM table_name
Results:
| VALUE |
|-------|
| 玄真 |
First, using NVARCHAR might not even be necessary.
The 'N' character data types are for storing data that doesn't 'fit' in the database's defined character set. There's an auxiliary character set, defined as the NCHAR character set. It's kind of a band-aid: once you create a database, it can be difficult to change its character set. The moral of this story: take great care in choosing the character set when creating your database, and do not just accept the defaults.
Here's a scenario (LiveSQL) where we're storing a Chinese string in both NVARCHAR and VARCHAR2.
CREATE TABLE SO_CHINESE ( value1 NVARCHAR2(20), value2 varchar2(20 char));
INSERT INTO SO_CHINESE VALUES (N'玄真', '我很高興谷歌翻譯。' )
select * from SO_CHINESE;
Note that both character sets are in the Unicode family. Note also that I told my VARCHAR2 column to hold 20 characters. That's because some characters may require up to 4 bytes of storage; a plain definition of (20) (i.e. 20 bytes) would leave room for only 5 of those characters.
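The byte-versus-character distinction above can be sketched like this (Oracle; assumes an AL32UTF8 database character set and the default BYTE length semantics; table and column names are hypothetical):

```sql
-- BYTE vs CHAR length semantics in VARCHAR2:
CREATE TABLE len_demo (
    v_bytes VARCHAR2(20),       -- 20 BYTES: only ~5-6 CJK characters fit
    v_chars VARCHAR2(20 CHAR)   -- 20 CHARACTERS, regardless of byte width
);

-- The CHAR-semantics column can always hold 20 CJK characters;
-- the BYTE-semantics column would reject this 20-character string:
INSERT INTO len_demo (v_chars) VALUES ('玄真玄真玄真玄真玄真玄真玄真玄真玄真玄真');
```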
Let's look at the same scenario using SQL Developer and my local database.
And to confirm the character sets:
SQL> clear screen
SQL> set echo on
SQL> set sqlformat ansiconsole
SQL> select *
2 from database_properties
3 where PROPERTY_NAME in
4 ('NLS_CHARACTERSET',
5 'NLS_NCHAR_CHARACTERSET');
PROPERTY_NAME              PROPERTY_VALUE   DESCRIPTION
NLS_NCHAR_CHARACTERSET     AL16UTF16        NCHAR Character set
NLS_CHARACTERSET           AL32UTF8         Character set
First of all, you should establish the Chinese character encoding on your database, for example:
UTF-8, Chinese_Hong_Kong_Stroke_90_BIN, Chinese_PRC_90_BIN, Chinese_Simplified_Pinyin_100_BIN, ...
I'll show you an example with SQL Server 2008 (Management Studio), which incorporates all of these collations; however, you can find the same character encodings in other databases (MySQL, SQLite, MongoDB, MariaDB, ...).
Create the database with Chinese_PRC_90_BIN, but you can choose another collation:
Select a Page (left header) > Options > Collation > choose the collation
Create a Table with the same Collation:
Execute the Insert Statement
INSERT INTO ChineseTable VALUES ('玄真');

TYPO3 RTE: Saving mathematical/greek symbols doesn't work

I need to display some mathematical/Greek symbols in the RTE and later in the frontend. Inserting them via copy/paste or the "Insert characters" option works great, but as soon as I save the text, the inserted symbol gets replaced with a question mark and TYPO3 throws the following error:
1: These fields of record 56 in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
I think there is an issue with the character set of TYPO3 or my DB, but I don't know where to start looking.
I tested on my 7.6.8 installation and it seems to work OK. When I log in to MySQL and run this query:
SELECT default_character_set_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
(7_6_local_typo3_org is database name) it returns:
+----------------------------+
| default_character_set_name |
+----------------------------+
| utf8 |
+----------------------------+
1 row in set (0.00 sec)
and also collation:
SELECT default_collation_name FROM information_schema.SCHEMATA
WHERE schema_name = "7_6_local_typo3_org";
+------------------------+
| default_collation_name |
+------------------------+
| utf8_general_ci |
+------------------------+
1 row in set (0.00 sec)
Then I also have this in my my.cnf (MySQL config file):
character-set-server = utf8
collation-server = utf8_general_ci
I had a similar problem when pasting HTML with UTF icons into a raw-HTML content element in TYPO3 8.7.x, but it works when I encode the symbols, for example:
<span class="menuicon">⌚</span>
Possible reasons for error message
1: These fields of record X in table "tt_content" have not been saved correctly: bodytext! The values might have changed due to type casting of the database.
in a TYPO3 installation (example installation's version: 10.4.20) can be:
- the MySQL/MariaDB tables of this TYPO3 installation are using an inappropriate/outdated character set and/or collation (Step 1 below);
- this TYPO3 installation is not yet configured to use utf8mb4 for the database (Step 2 below).
TYPO3 supports utf8mb4 since at least version 9.5. With it comes proper Unicode support, including emojis, mathematical symbols, and Greek letters (e.g. ⌚∰β) in CKEditor bodytext.
I migrated my TYPO3 installation's database and configuration to utf8mb4 in the following way, getting rid of the aforementioned error message and saving and displaying Unicode multibyte characters correctly.
Be sure to apply these migrations in a test environment first, then check existing content and test usual content editing scenarios before applying these migrations on a production system to make sure MySQL/MariaDB converted between the character sets correctly and without data loss (truncation).
Step 1
Update TYPO3 database tables to use utf8mb4 as character set and utf8mb4_unicode_ci as collation.
The following bash loop iterates over all tables in database typo3 and applies these updates. It assumes MySQL/MariaDB root privileges, a password-less socket connection, and a TYPO3 database (table_schema) named typo3; adapt accordingly. Tested successfully on
Debian 11 MariaDB Server (10.5.12-MariaDB-0+deb11u1)
Ubuntu 20.04 LTS MySQL Server (8.0.27-0ubuntu0.20.04.1)
for tbl in $(mysql --disable-column-names --batch -e \
    'select distinct TABLE_NAME from information_schema.tables where table_schema="typo3" and table_type="BASE TABLE";')
do
  echo "updating table $tbl" && \
    mysql -e "ALTER TABLE typo3.${tbl} CONVERT TO CHARACTER SET utf8mb4 COLLATE utf8mb4_unicode_ci;"
done
To ensure that during this conversion (from a "smaller" encoding to the up-to-four-bytes-per-character utf8mb4 encoding) no (string) data gets lost/truncated, MySQL/MariaDB automatically adapts a text/string column's datatype to a larger text/string datatype, e.g. from TEXT to MEDIUMTEXT.
To restore a TYPO3 (extension) table's column back to its specified datatype, visit TYPO3 backend -> Maintenance -> Analyze Database Structure. This tool will allow restoring those columns' original (smaller) datatypes. This may cause data truncation; I'm not sure whether TYPO3 warns if truncation actually occurs. However, assuming the TYPO3 (extension) developers had utf8mb4 in mind when specifying a column's datatype, and the user-provided content of a particular database cell is not too large, truncation should not happen (see the overview of text/string datatype sizes).
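A quick way to verify the result of Step 1 afterwards (a sketch; it assumes the database is named typo3, as above):

```sql
-- List any tables that were not converted; an empty result means every
-- table in the typo3 database now uses the utf8mb4_unicode_ci collation:
SELECT TABLE_NAME, TABLE_COLLATION
FROM information_schema.TABLES
WHERE TABLE_SCHEMA = 'typo3'
  AND TABLE_TYPE = 'BASE TABLE'
  AND TABLE_COLLATION <> 'utf8mb4_unicode_ci';
```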
Step 2
Configure TYPO3 to use utf8mb4. For example, when leveraging typo3conf/AdditionalConfiguration.php, have the following configurations in AdditionalConfiguration.php:
// ...
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['charset'] = 'utf8mb4';
$GLOBALS['TYPO3_CONF_VARS']['DB']['Connections']['Default']['tableoptions']['collate'] = 'utf8mb4_unicode_ci';
// ...