How do you set strings to uppercase / lowercase in Unicode?

This is mostly a theoretical question I'm just very curious about. (I'm not trying to do this by coding it myself or anything, I'm not reinventing wheels.)
My question is how the uppercase/lowercase table of equivalence works for Unicode.
For example, if I had to do this in ASCII, I'd take a character, and if it falls within the [a-z] range, I'd add the difference between A and a.
If it doesn't fall in that range, I'd have a small equivalence table for the 10 or so accented characters plus ñ.
(Or, I could just have a full equivalence array with 256 entries, most of which would be the same as the input)
However, I'm guessing that there's a better way of specifying the equivalences in Unicode, given that there are hundreds of thousands of characters, and that theoretically a new language or set of characters can be added (and I'm expecting that you wouldn't need to patch Windows when that happens).
Does Windows have a huge hard-coded equivalence table for each character? Or how is this implemented?
A related question is how SQL Server implements Unicode-based accent-insensitive and case-insensitive queries. Does it have an internal table that tells it that é ë è E É È and Ë are all equivalent to "e"?
That doesn't sound very fast when it comes to comparing strings.
How does it access Indexes quickly? Does it already index values converted to their "base" characters, corresponding to that field's collation?
Does anyone know the internals for these things?
Thank you!

I'm going to address the MS SQL Server part of this question, but the "correct" answer actually depends on the language(s) supported and the application.
When you create a table in SQL Server, each text field has either an implicitly or explicitly specified collation. This affects both sort order and comparison behavior. The default, for most English (US) locales, is Latin1_General_CI_AS, or Latin 1, Case-insensitive, Accent-Sensitive. That means that, for example, a=A, but a!=Ä and a!=ä. You can also use accent-insensitive (Latin1_General_CI_AI) which treats all the diacritic variations of "A" as equal.
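As a rough illustration (assuming a SQL Server instance where these collations are installed), you can force a collation directly in a comparison and see the difference:
-- Case-insensitive, accent-sensitive: 'a' matches 'A' but not 'ä'
SELECT CASE WHEN N'a' = N'A' COLLATE Latin1_General_CI_AS THEN 1 ELSE 0 END AS a_equals_upper_a,
       CASE WHEN N'a' = N'ä' COLLATE Latin1_General_CI_AS THEN 1 ELSE 0 END AS a_equals_a_umlaut;
-- Case-insensitive, accent-insensitive: 'a' matches both
SELECT CASE WHEN N'a' = N'ä' COLLATE Latin1_General_CI_AI THEN 1 ELSE 0 END AS a_equals_a_umlaut_ai;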
Some locales support other categories of comparison; for example, French orders words containing diacritics somewhat differently than German does. Turkish considers a dotless i and a dotted i semantically different, so I and i don't match even with case-insensitive comparisons if you use a Turkish case-insensitive, accent-sensitive collation.
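For example (hedged: assuming the Turkish_CI_AS collation is available on the server), the Turkish casing rules pair I with ı and İ with i, so even a case-insensitive comparison of I and i fails:
-- Under a Turkish collation, dotted and dotless i are distinct letters,
-- so 'I' and 'i' are not a case-insensitive match.
SELECT CASE WHEN N'I' = N'i' COLLATE Turkish_CI_AS THEN 'match' ELSE 'no match' END AS turkish_result;
-- The same comparison under Latin1_General_CI_AS returns 'match'.
SELECT CASE WHEN N'I' = N'i' COLLATE Latin1_General_CI_AS THEN 'match' ELSE 'no match' END AS latin1_result;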
You can change the collation per database, per table, per field, and, with some cost, even per query. My understanding is that indexes are normalized according to the specified collation, which means that the index effectively keeps a flattened version of the original string. For example, with a case-insensitive collation, Apple and apple are both stored as apple. Queries are flattened with the same collation before the search.
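A short sketch of the per-query override (hypothetical Products table; note that comparing under a collation other than the column's own generally prevents an index seek):
-- The column's collation is fixed at table-definition time.
CREATE TABLE Products (Name nvarchar(100) COLLATE Latin1_General_CI_AS);
CREATE INDEX IX_Products_Name ON Products (Name);
-- Force an accent-insensitive comparison for just this query; since the
-- index was built with the accent-sensitive collation, SQL Server will
-- typically scan rather than seek here.
SELECT Name FROM Products WHERE Name = N'Äpfel' COLLATE Latin1_General_CI_AI;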
In Japanese, there's another category of normalization, where fullwidth and halfwidth characters like ア and ｱ are treated as equal, and in some cases two halfwidth characters are flattened to a single, semantically equivalent character (ﾊﾞ = バ). Finally, for some languages, there's another ball of wax with composite characters, where isolated diacritic characters can be composed with other characters (e.g. the umlaut in ä is one character, composed with the simple form a). Vietnamese, Thai and a few other languages have variations of this category. If there's a canonical form, Unicode normalization allows the composed and decomposed forms to be treated as equivalent. Unicode normalization is typically applied before any comparisons are made.
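In SQL Server these distinctions surface as the kana-sensitive (_KS) and width-sensitive (_WS) collation flags; a collation without them treats the variants as equal. A hedged sketch (assuming the Japanese collations are installed):
-- Japanese_CI_AS is kana- and width-insensitive, so fullwidth ア,
-- halfwidth ｱ and hiragana あ all compare equal.
SELECT CASE WHEN N'ア' = N'ｱ' COLLATE Japanese_CI_AS THEN 'equal' ELSE 'different' END AS width_test,
       CASE WHEN N'ア' = N'あ' COLLATE Japanese_CI_AS THEN 'equal' ELSE 'different' END AS kana_test;
-- With the _KS_WS flags, both comparisons return 'different'.
SELECT CASE WHEN N'ア' = N'ｱ' COLLATE Japanese_CI_AS_KS_WS THEN 'equal' ELSE 'different' END AS width_sensitive_test;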
To summarize, for a case-insensitive comparison, you do something much like you would when comparing ASCII-range strings: flatten the left and right side of the comparison "to lower case" (for example), then compare the two as binary arrays. The difference is that you need to do the following (a rough SQL sketch follows the list):
1) normalize the strings to the same Unicode normalization form (e.g. NFKC or NFKD)
2) normalize the strings to the same case according to the rules of that locale
3) normalize the accents according to the accent-sensitivity rules
4) compare using a binary comparison
5) if applicable, such as when sorting, compare using additional secondary and tertiary sorting rules, which cover things like "Mc" sorting before "M" in some languages.
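A rough sketch of that pipeline in SQL Server terms (hedged: T-SQL does not expose Unicode normalization directly, t and col1 are hypothetical names, and the engine does the flattening internally once you pick a collation):
-- Steps 2-4 in one go: the CI_AI collation flattens case and accents and
-- then effectively compares the resulting sort keys as binary data.
SELECT * FROM t WHERE col1 = N'résumé' COLLATE Latin1_General_CI_AI;
-- A cruder, explicit version of the same idea: flatten the case yourself
-- and compare under a binary collation (this ignores accents entirely).
SELECT * FROM t WHERE LOWER(col1) = N'apple' COLLATE Latin1_General_BIN2;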
And yes, Windows stores tables for all of these rules. You don't get all of them by default in every installation unless you add support for them via the East Asian Language Support and Complex Scripts options in the Control Panel.

There is a mapping file that contains all the case mappings that have a 1:1 mapping ratio. Usually operating systems/frameworks/libraries support a specific version of Unicode, and since this case mappings file is versioned, you would get the mappings for whichever version of Unicode your particular OS/framework/library/whatever happened to support.
For more information on Unicode case mappings, see: http://www.unicode.org/faq/casemap_charprop.html

Most writing systems do not have separate uppercase and lowercase letters. According to Wikipedia, exceptions include "Roman, Greek, Cyrillic and Armenian alphabets".
So there aren't that many letters to worry about. This page shows that large ranges of characters follow a simple scheme of adding 1 to an uppercase character to get the lowercase equivalent (though of course there are some exceptions).
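As a quick illustration (a SQL Server sketch; NCHAR builds a character from a code point), Latin Extended-A alternates upper and lower case so the offset is 1, while basic Latin uses an offset of 32:
-- U+0100 (Ā, code point 256) + 1 = U+0101 (ā): the "add one" scheme.
SELECT NCHAR(256) AS upper_char, NCHAR(256 + 1) AS lower_char;
-- Basic Latin instead uses an offset of 32: U+0041 (A) -> U+0061 (a).
SELECT NCHAR(65) AS upper_char, NCHAR(65 + 32) AS lower_char;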

The correct answer is a little more complicated, depending on what you are trying to do.
When comparing character strings, for sorting or searching applications, the correct algorithm to use is specified in UTS #10: "Unicode Collation Algorithm". Case-insensitivity is part of the mix, but there are different ways to represent many characters, and applications often need to treat the various representations as equivalent.
The sorting rules are locale-dependent. This is mainly an issue when you are sorting results for display to a user. Ignoring the rules can frustrate users and even result in security vulnerabilities.
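As an illustration of locale-dependent ordering (hedged SQL Server sketch; people is a hypothetical table, and the German phone-book collations treat ä as if it were ae):
-- Dictionary-style ordering under the general Latin collation.
SELECT name FROM people ORDER BY name COLLATE Latin1_General_CI_AS;
-- Phone-book-style German ordering; the same rows can come back in a different order.
SELECT name FROM people ORDER BY name COLLATE German_PhoneBook_CI_AS;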
If you are just trying to capitalize words for display purposes, the rules there can be tricky too; there are one-to-many conversions and other issues. Depending on the locale, the same letter may capitalize differently. The letter's position in a word can make a difference. There's also a distinct notion of "title case", where you just want to capitalize the first letter of each word. Sometimes the title-case of a character is not the same as its upper-case.
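For a feel of how naive word-by-word capitalization differs from full title-casing rules, PostgreSQL's initcap() simply uppercases the first letter of each word and lowercases the rest; it implements the simple "capitalize the first letter of each word" notion, not locale-aware title casing:
-- Simple word-based title casing; fine for ASCII, not locale-aware.
SELECT initcap('the QUICK brown fox');   -- 'The Quick Brown Fox'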

Related

Postgresql support for national character data types

Looking for a reference that discusses PostgreSQL's support for the NATIONAL CHARACTER set of data types. e.g. this query runs without error:
select cast('foo' as national character varying(10))
yet the docs on Postgres character data types don't seem to discuss that type.
Does Postgres implement these differently from the CHARACTER data types? That is, how does the NATIONAL keyword affect how data is stored or represented?
Can someone share a link or two to any references I can't seem to find? (other than some mailing list correspondence from a while back)
If you request a national character varying in PostgreSQL, you'll get a regular character varying.
PostgreSQL uses the same encoding for normal and national characters.
“National character” is a leftover from the bad old days when people still used single-byte encodings like LATIN-1 and needed a different encoding for characters that didn't fit.
PostgreSQL has always supported UNICODE encodings, so this is not an issue. Just make sure that you don't specify an encoding other than the default UTF8.
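You can verify this directly; pg_typeof shows the type the cast resolves to:
-- The NATIONAL keyword is accepted but simply maps to character varying.
SELECT pg_typeof(CAST('foo' AS NATIONAL CHARACTER VARYING(10)));
-- returns: character varying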
NATIONAL CHARACTER has no real meaning in the SQL:92 standard (section 4.2.1), which says only that it means "a particular implementation-defined character repertoire". If you are surprised, don't be. There are many screwy aspects to the SQL standard.
As for text handling in Postgres, you would likely be interested in learning about:
character encoding
Unicode
UTF-8
collations
support for ICU in Postgres 10 and later.
See:
More robust collations with ICU support in PostgreSQL 10 by Peter Eisentraut, post, 2017-05.
Collations: Introduction, Features, Problems by Peter Eisentraut, video, 2019-07-12.
Unicode Collation Algorithm (UCA)
ICU User Guide – Locale
List of locales with 209 languages, 501 regions & variants, as defined in ICU
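For example (assuming PostgreSQL 10 or later built with ICU support, and a hypothetical customers table), an ICU collation is created and used like this:
-- Define a German ICU collation and sort a column with it.
CREATE COLLATION german_icu (provider = icu, locale = 'de-DE');
SELECT name FROM customers ORDER BY name COLLATE german_icu;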

Will it make a difference if I use non-ASCII chars as ENUM values in PostgreSQL?

I think most people will use alphabet chars (English) as ENUM values in PostgreSQL. What about using Asian chars (Japanese, Chinese, Korean)? Will that make a difference in performance or in any other respect? Is that recommended or not?
It shouldn't make a difference, as these strings are only ever compared for equality, which is independent of the encoding. Under the hood, enum values are stored as numbers (OIDs), not as the label text.
The more important consideration when using enums is that they are only suitable if the values don't ever change. For example, you cannot remove an enum value.
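A quick sketch (hypothetical type and table names):
-- Non-ASCII labels behave like any other enum labels.
CREATE TYPE weekday AS ENUM ('月', '火', '水', '木', '金');
CREATE TABLE schedule (dow weekday);
INSERT INTO schedule VALUES ('月');
-- Equality tests and ORDER BY use the internal numeric representation,
-- not the label text, so the script of the labels doesn't matter.
SELECT * FROM schedule WHERE dow = '月';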

Multiple languages with utf8 in postgresql

How exactly is one meant to seamlessly support all languages stored within postgres's utf8 character set? We seem to be required to specify a single language-specific collation along with the character set, such as en_US.utf8. If I'm not mistaken, we don't have the ability to store both English (en_US) and Chinese (zh_CN) in the same utf8 column, while maintaining any kind of meaningful collation behavior. If I define a column as en_US.utf8, how is it supposed to handle values containing Chinese (zh_CN) characters / byte sequences? The reality is that a single column value can contain multiple languages (ex: "Hello and 晚安"), and simply cannot be collated according to a single language.
Yes, I can physically store any character sequences; but what is the defined behavior for ordering on a en_US.utf8 column that contains English, German, Chinese, Japanese and Korean strings?
I understand that mysql's utf8mb4_unicode_ci collation isn't perfect, and that it is not following any set standard for how to collate the entire unicode set. I can already hear the anti-mysql crowd sighing about how mysql's language-agnostic collations are arbitrary, semantically meaningless, or even purely invalid. But the fact is, it works well enough, and fulfills the expectation that utf8 = multi-language unicode support.
Is postgres just being extremely stubborn with the fact that it's semantically incorrect to collate across the unicode spectrum? I know the developers are very strict when it comes to "doing things according to spec", but this inability to juggle multiple languages is frustrating to say the least. Am I missing something that solves the multi-language problem, or is the official stance that a single utf8 column can handle any language, but only one language at a time?
You are right that there will never be a perfect way to collate strings across languages.
PostgreSQL has decided not to create its own collations but to use those provided by the operating system. The idea behind this is to avoid re-inventing the wheel and to reduce maintenance effort.
So the traditional PostgreSQL answer to your question would be: if you want a string collation that works reasonably well for strings in different languages, complain to your operating system vendor or pick an operating system that provides such a collation.
However, this approach has drawbacks that the PostgreSQL community is aware of:
Few – if any – people decide on an operating system based on the collation support it provides.
PostgreSQL's sorting behaviour depends on the underlying operating system, which leads to frequent questions by confused users on the mailing lists.
With some operating systems collation behaviour can change during an operating system upgrade, leading to corrupt database indexes (see for example this thread).
It may well be that PostgreSQL changes its approach; there have been repeated efforts to use ICU libraries instead of operating system collations (see for example this recent thread), which would mitigate some of these problems.
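A concrete footnote (hedged sketch, assuming the ICU support in Postgres 10 and later that other answers on this page mention, plus a hypothetical documents table): the ICU root locale 'und' sorts by the default Unicode collation order rather than any single language's rules, which is about as close to a language-agnostic collation as Postgres offers.
-- 'und' is ICU's undetermined/root locale.
CREATE COLLATION multilingual (provider = icu, locale = 'und');
SELECT title FROM documents ORDER BY title COLLATE multilingual;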

Which Japanese sorting / collation orders are supported by ICU / CLDR / UCA?

The Japanese language, I believe, has more than one sort order equivalent to alphabetical order in English.
I believe there's at least one based on pronunciation (I think the kana have used two orders historically) and one based on radical + stroke count. Chinese also has multiple orders with one based on radical/stroke but due to Unicode Han Unification the same character can have a different stroke count for Chinese and Japanese.
I believe the standard for sort order in Unicode is CLDR for the data, with the UCA for the algorithm, and ICU as the reference implementation.
Implementations generally lag behind standards and this information is proving hard to track down to canonical sources.
If I set up a collator with the language specifier ja, which sort order should I expect to be used?
If several are available for Japanese, or are planned to be available at some point, which specifiers should be used for those? For example the specifier for the traditional alphabetical order of Spanish is es-u-co-trad.
The basic Japanese sort order provided by the CLDR (and therefore ICU) is based on the sort order specified in JIS X 4061-1996:
Kana are sorted by their gojuuon (五十音) order (with Hiragana preceding Katakana).
Kanji are sorted by their order in JIS X 0208, which is by their "representative reading" (and following all Kana).
A ja-u-co-unihan collation is also available, which includes the rules for sorting radicals by their stroke order (followed by the standard rules above). This is only useful if you are actually sorting radicals.
If you need more accurate sorting of Kanji—for instance, by the reading of the words they are used in—you will need to perform some kind of morphological analysis with a dictionary to figure out what readings to use, and then apply the Unicode Collation Algorithm on those.
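In PostgreSQL-with-ICU terms (a hedged sketch: it assumes ICU support and a hypothetical words table, and passes the ja-u-co-unihan tag from the answer above straight through to ICU), the two orders would look like this:
-- Default Japanese order (JIS X 4061 style).
CREATE COLLATION ja_default (provider = icu, locale = 'ja');
-- Radical/stroke-based order via the unihan collation type.
CREATE COLLATION ja_unihan (provider = icu, locale = 'ja-u-co-unihan');
SELECT word FROM words ORDER BY word COLLATE ja_unihan;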

Can't find varchar chart of acceptable characters

Does anyone know of a simple chart or list that would show all acceptable varchar characters? I cannot seem to find this in my googling.
What codepage? Collation? Varchar stores characters assuming a specific codepage. Only the lower 128 characters (the ASCII subset) are standard. Higher characters vary by codepage.
The default codepage used matches the collation of the column, whose defaults are inherited from the table, database, and server. All of the defaults can be overridden.
In short, there IS no "simple chart". You'll have to check the character chart for the specific codepage, e.g. using the "Character Map" utility in Windows.
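You can ask SQL Server which codepage a given collation uses, for example:
-- Latin1_General_CI_AS stores varchar data in Windows codepage 1252.
SELECT COLLATIONPROPERTY('Latin1_General_CI_AS', 'CodePage') AS code_page;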
It's far, far better to use Unicode and nvarchar when storing to the database. If you store text data from the wrong codepage you can easily end up with mangled and unrecoverable data. The only way to ensure the correct codepage is used, is to enforce it all the way from the client (ie the desktop app) to the application server, down to the database.
Even if your client/application server uses Unicode, a difference in the locale between the server and the database can result in faulty codepage conversions and mangled data.
On the other hand, when you use Unicode no conversions are needed or made.