Mongo locale variant meaning - mongodb

Some MongoDB locales have variants: for instance, Catalan has the variant search and Spanish has the variants search and traditional. What do those variants mean, and what effect do they have on string comparisons? The MongoDB documentation specifies which variants are available for each supported language (see this page from their manual), but it does not specify what they mean.

The collation data comes from CLDR - Unicode Common Locale Data Repository.
Downloading the common archive and looking at the ca locale (common/collation/ca.xml), we find the following notes. Standard variant:
<!-- standard collation &L<<ŀ=l·<<<Ŀ=L· is equivalent to root collation order
(except root uses prefix rules for the middle dot, rather than contractions)
references="Enciclopèdia Catalana: Diccionari de la llengua catalana ISBN 84-85194-46-2" -->
Search variant:
# Below are the rules specific to ca.
# Per Apple language group, these are modified from standard rules below
# to make L primary-different from L-dot for search.
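To see what a variant does in practice, you can ask ICU (which, as far as I know, is what MongoDB's collation support is built on) for the variant collators directly. Below is a minimal ICU4J sketch of my own, not something from the MongoDB manual; the expected results are noted in comments and may vary with the ICU/CLDR version:

import com.ibm.icu.text.Collator;
import com.ibm.icu.util.ULocale;

public class VariantDemo {
    public static void main(String[] args) {
        // Catalan: under the standard rules "l·" differs from "l" only at the
        // secondary level; the search variant makes the difference primary.
        Collator caStandard = Collator.getInstance(new ULocale("ca"));
        Collator caSearch = Collator.getInstance(new ULocale("ca@collation=search"));
        caStandard.setStrength(Collator.PRIMARY);
        caSearch.setStrength(Collator.PRIMARY);
        System.out.println(caStandard.compare("l·", "l")); // expected 0: equal at primary strength
        System.out.println(caSearch.compare("l·", "l"));   // expected non-zero: primary-different

        // Spanish: the traditional variant treats "ch" as a single letter that
        // sorts after "c", so "cz" comes before "ch" under the traditional rules.
        Collator esDefault = Collator.getInstance(new ULocale("es"));
        Collator esTraditional = Collator.getInstance(new ULocale("es@collation=traditional"));
        System.out.println(esDefault.compare("cz", "ch"));     // expected > 0: "ch" sorts first
        System.out.println(esTraditional.compare("cz", "ch")); // expected < 0: "cz" sorts first
    }
}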


How to determine the simplified Unicode variant of a semantic variant of a traditional Chinese character?

As mentioned in an answer to Simplified Chinese Unicode table, the Unihan database specifies whether a traditional character has a simplified variant (kSimplifiedVariant). However, some characters have semantic variants (kSemanticVariant) which themselves have simplified variants. For example U+8216 舖 has a semantic variant U+92EA 鋪 which in turn has a simplified variant U+94FA 铺.
Should traditional to simplified mappings convert U+8216 to U+94FA?
If so, what's the easiest way of generating or downloading the full mapping, given that the Unihan database does not list U+94FA as a kSimplifiedVariant directly for U+8216, only for the intermediate form U+92EA?
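There is no definitive answer here, but as a sketch of how one could generate the transitive mapping: parse Unihan_Variants.txt (tab-separated lines of code point, field name, and values, where values may carry "<source" annotations) and follow one kSemanticVariant hop for characters that lack a direct kSimplifiedVariant. Whether that hop is linguistically appropriate is exactly the judgement call the question raises:

import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Paths;
import java.util.*;

public class TransitiveSimplified {
    public static void main(String[] args) throws IOException {
        Map<String, List<String>> semantic = new HashMap<>();
        Map<String, List<String>> simplified = new HashMap<>();
        for (String line : Files.readAllLines(Paths.get("Unihan_Variants.txt"))) {
            if (line.isEmpty() || line.startsWith("#")) continue;
            String[] parts = line.split("\t");
            if (parts.length < 3) continue;
            List<String> targets = new ArrayList<>();
            for (String value : parts[2].split(" ")) {
                targets.add(value.split("<")[0]); // drop the "<source" annotations
            }
            if (parts[1].equals("kSemanticVariant")) semantic.put(parts[0], targets);
            if (parts[1].equals("kSimplifiedVariant")) simplified.put(parts[0], targets);
        }
        // For characters without a direct simplified variant, follow one semantic
        // hop, e.g. U+8216 -> U+92EA -> U+94FA.
        for (Map.Entry<String, List<String>> entry : semantic.entrySet()) {
            if (simplified.containsKey(entry.getKey())) continue;
            for (String variant : entry.getValue()) {
                for (String simp : simplified.getOrDefault(variant, Collections.emptyList())) {
                    System.out.println(entry.getKey() + " -> " + variant + " -> " + simp);
                }
            }
        }
    }
}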

Why doesn't ICU4J match UTF-8 sort order?

I am having a hard time understanding Unicode sorting order.
When I run Collator.getInstance(Locale.ENGLISH).compare("_", "#") under ICU4J 55.1 I get a return value of -1 indicating that _ comes before #.
However, looking at http://www.utf8-chartable.de/unicode-utf8-table.pl?utf8=dec I see that # (U+0023) comes before _ (U+005F). Why is ICU4J returning a value of -1?
First, UTF-8 is just an encoding. It specifies how to store the Unicode code points physically, but does not handle sorting, comparisons, etc.
Now, the page you linked to shows everything in numerical Code Point order. That is the order things would sort in if using a binary collation (in SQL Server, that would be collations with names ending in _BIN and _BIN2). But the non-binary ordering is far more complex. The rules are described here: Unicode Collation Algorithm (UCA).
The base rules are found here: http://www.unicode.org/repos/cldr/tags/release-28/common/uca/allkeys_CLDR.txt
It shows:
005F ; [*010A.0020.0002] # LOW LINE
...
0023 ; [*0290.0020.0002] # NUMBER SIGN
It is very important to keep in mind that any locale / culture can override these base rules. Hence, while the few lines noted above explain this specific circumstance, other circumstances would need to check http://www.unicode.org/repos/cldr/tags/release-28/common/collation/ to see if there are any locale-specific overrides.
Converting Mark Ransom's comments into an answer:
The ordering of individual characters is based on a collation table, which has little relationship to the codepoint numbers. See: http://www.unicode.org/reports/tr10/#Default_Unicode_Collation_Element_Table
If you follow the first link on that page, it leads to allkeys.txt which gives the default collation ordering.
In particular, _ is 005F ; [*020B.0020.0002] # LOW LINE while # is 0023 ; [*0391.0020.0002] # NUMBER SIGN. Note that the collation numbers for _ are lower than the numbers for #.
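To make the contrast concrete, here is a small sketch of my own (not from the original answer) comparing the binary, code-point-based order a plain String comparison gives with the collated order ICU4J gives:

import com.ibm.icu.text.Collator;
import java.util.Locale;

public class OrderDemo {
    public static void main(String[] args) {
        // Binary comparison follows code point order: '#' (U+0023) < '_' (U+005F).
        System.out.println("_".compareTo("#")); // positive: "_" after "#"
        // The collator follows the UCA/CLDR weights: LOW LINE before NUMBER SIGN.
        System.out.println(Collator.getInstance(Locale.ENGLISH).compare("_", "#")); // -1: "_" before "#"
    }
}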

How do you translate Sakai tool names and descriptions?

The names of several Sakai tools always appear in English even if I have set the Java default locale to Russian.
I see this problem with the following tools in a new Sakai 10 build: Roster and Sign-up.
How do I translate these tool names and descriptions?
Typically, strings are collected into .properties files in the resource bundle of a Sakai tool. The strings in these files must be carefully translated into the new language and saved in files named using the language codes (pt-BR for Brazilian Portuguese, for example). There are limits on the size of strings in properties files, but most strings won't hit this limit.
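As a hedged illustration of how the resource-bundle lookup works (the bundle name "tools" and key "tool.title" below are made up; each real Sakai tool defines its own):

import java.util.Locale;
import java.util.ResourceBundle;

public class TitleLookup {
    public static void main(String[] args) {
        // With a file named tools_ru.properties on the classpath, the Russian
        // strings are picked up for the Russian locale; otherwise Java falls
        // back to tools.properties (the default, usually English, strings).
        ResourceBundle bundle = ResourceBundle.getBundle("tools", new Locale("ru"));
        System.out.println(bundle.getString("tool.title"));
    }
}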

Unicode character default collation table

I don't know exactly which site this question belongs on, so I'm posting it here.
I use PostgreSQL 9.2 on RHEL 6.4 and observe the following:
select foo
from unnest('{а,ә,б,в,г,д,е,ж}'::text[]) as foo
order by foo collate "kk_KZ.utf8"
gives
а
ә
б
в
г
д
е
ж
BUT
select foo
from unnest('{а,ә,б,в,г,д,е,ж}'::text[]) as foo
order by foo collate "en_US.utf8"
gives
а
б
в
г
д
е
ә -- misplaced
ж
Further, I found that there is the Default Unicode Collation Element Table [1], which lists the character in question (04D9 ; [.199D.0020.0002.04D9] # CYRILLIC SMALL LETTER SCHWA) in proper order.
I understand that it is silly to expect Cyrillic characters to be handled properly by the "en_US.utf8" locale, but what is the correct behavior according to Unicode or any other relevant standard in cases where a character does not normally belong to the language/locale used for collation?
[1] http://www.unicode.org/Public/UCA/latest/allkeys.txt
It's not misplaced. It might be to you, but it's not to me. :-) In all seriousness, there is no correct behavior by Unicode; there simply cannot be. A character set is a mapping; the collation is a locale-specific set of rules to sort the characters in that set -- and even within the same locale there can be multiple collations.
The ICU docs have colorful examples of how thorny this kind of stuff gets, in case you're curious. Quoting extensively:
http://userguide.icu-project.org/collation
[H]ere are some of the ways languages vary in ordering strings:
The letters A-Z can be sorted in a different order than in English. For example, in Lithuanian, "y" is sorted between "i" and "k".
Combinations of letters can be treated as if they were one letter. For example, in traditional Spanish "ch" is treated as a single letter, and sorted between "c" and "d".
Accented letters can be treated as minor variants of the unaccented letter. For example, "é" can be treated equivalent to "e".
Accented letters can be treated as distinct letters. For example, "Å" in Danish is treated as a separate letter that sorts just after "Z".
Unaccented letters that are considered distinct in one language can be indistinct in another. For example, the letters "v" and "w" are two different letters according to English. However, "v" and "w" are considered variant forms of the same letter in Swedish.
A letter can be treated as if it were two letters. For example, in traditional German "ä" is compared as if it were "ae".
Thai requires that the order of certain letters be reversed.
French requires that letters sorted with accents at the end of the string be sorted ahead of accents in the beginning of the string. For example, the word "côte" sorts before "coté" because the acute accent on the final "e" is more significant than the circumflex on the "o".
Sometimes lowercase letters sort before uppercase letters. The reverse is required in other situations. For example, lowercase letters are usually sorted before uppercase letters in English. Latvian letters are the exact opposite.
Even in the same language, different applications might require different sorting orders. For example, in German dictionaries, "öf" would come before "of". In phone books the situation is the exact opposite.
Sorting orders can change over time due to government regulations or new characters/scripts in Unicode.
PostgreSQL uses the locales provided by the operating system. In your setup, locales are provided by glibc. Glibc uses a heavily modified version of an "ancient" version of ISO 14651 (see glibc Bug 14095 - Review / update collation data from Unicode / ISO 14651 for information on current pains in trying to update glibc locale data).
As of glibc 2.28, to be released on 2018-08-01, glibc will use data from ISO 14651:2016 (which is synchronized to Unicode 9), and will give the order the OP expects for en_US.
ISO 14651 is "Method for comparing character strings and description of the common template tailorable ordering", and it is similar to the UCA, with some differences. The CTT (Common Template Table) is the ISO 14651 equivalent of the DUCET, and they are aligned.
The first time CYRILLIC SMALL LETTER SCHWA appeared in a collation table in glibc was for the az_AZ locale (Azerbaijani), where it is ordered after CYRILLIC SMALL LETTER IE. This corresponds to:
commit fcababc4e18fee81940dab20f7c40b1e1fb67209
Author: Ulrich Drepper <drepper@redhat.com>
Date: Fri Aug 3 08:42:28 2001 +0000
Update.
2001-08-03 Ulrich Drepper <drepper@redhat.com>
* locale/iso-639.def: Add Tigrinya.
From there, that ordering was eventually moved to the file iso14651_t1 as per Bug 672 - Include iso14651_t1 in collation rules, which was an effort to simplify glibc locale data. This corresponds to:
commit 5d2489928c0040d2a71dd0e63c801f2cf98e7efc
Author: Ulrich Drepper <drepper@redhat.com>
Date: Sun Feb 18 04:34:28 2007 +0000
[BZ #672]
2005-01-16 Denis Barbier <barbier@linuxfr.org>
[BZ #672]
* locales/ca_ES: Replace current collation rules by including
iso14651_t1 and adding extra rules if needed. There should be
no noticeable changes in sorted text. only ligatures and
ignoreable characters have modified weights.
* locales/da_DK: Likewise.
* locales/en_CA: Likewise.
* locales/es_US: Likewise.
* locales/fi_FI: Likewise.
* locales/nb_NO: Likewise.
[BZ #672]
* locales/iso14651_t1: Simplified. Extended.
Most locales in glibc start from iso14651_t1, and tailor it, which is what you are seeing with en_US.
While glibc based its default ordering on Azerbaijani, the DUCET instead bases it on the ordering for Kazakh and Tatar, which is where the difference comes from.
The Unicode Collation Algorithm allows any tailorings to be made to the DUCET.
There isn't a "correct" behaviour. There are various behaviours one could expect, and the most appropriate one depends on the context and the audience. Sometimes any behaviour could be correct, since there isn't really a reason to force any particular order of Cyrillic letters in an American English collation.
The Common Locale Data Repository provides locale-specific tailorings to the DUCET. The CLDR uses LDML (Locale Data Markup Language) to specify the tailorings, and the syntax is given by Unicode Technical Standard #35, part 5.
The latest version of the data provided by the CLDR for en_US has no tailorings: it uses a modified version of the DUCET (as stated in UTS #35 under "Root collation"). It lists the Cyrillic schwa after the Cyrillic а, i.e., the order you were expecting.
There is also data for an en_US_POSIX locale, and that one includes some modifications, but none changes anything that isn't in ASCII.
It appears the en_US locale installed in your system uses a tailoring that puts the schwa next to E probably because of their similar form. It could be argued that would cause fewer surprises to an American English audience than sorting the schwa after A: ask people what that is and see how many will just tell you it is an "upside-down E". It isn't right or wrong, but if you ask me, it seems more appropriate than the collation found in the CLDR.
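To compare the CLDR/ICU behaviour described above with what the glibc locales give PostgreSQL, one could run a small ICU4J sketch of my own (the output depends on the CLDR version bundled with your ICU):

import com.ibm.icu.text.Collator;
import com.ibm.icu.util.ULocale;
import java.util.Arrays;

public class SchwaDemo {
    public static void main(String[] args) {
        String[] letters = {"а", "ә", "б", "в", "г", "д", "е", "ж"};

        String[] en = letters.clone();
        Arrays.sort(en, Collator.getInstance(new ULocale("en_US")));
        System.out.println(Arrays.toString(en)); // CLDR root order: ә expected right after а

        String[] kk = letters.clone();
        Arrays.sort(kk, Collator.getInstance(new ULocale("kk_KZ")));
        System.out.println(Arrays.toString(kk)); // Kazakh tailoring: ә also expected after а
    }
}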

Unicode range mapping between languages

There are 7,707 languages listed at http://www.sil.org/iso639-3/download.asp and http://en.wikipedia.org/wiki/ISO_639:a.
Unicode also supports the writing systems of these languages, but I want to know the mapping between the languages and the Unicode ranges.
The Unicode ranges are listed at http://www.unicode.org/roadmaps/bmp/
For example, one of the Unicode ranges is: "start" => "0x0900", "end" => "0x097F", "block_name" => "Devanagari" (which languages use this range?).
Is there any documentation? I need the full mapping between the languages supported by Unicode and the Unicode ranges.
You can take a look at the ICU4C locale API (http://icu-project.org/apiref/icu4c/uloc_8h.html).
You can get all the locales (with uloc_getAvailable), then for each locale call uloc_addLikelySubtags, and then uloc_getScript on the result.
This is going to give you the most likely script used by a language. But there are languages that use more than one script. Some of them are captured by ICU, but some are not.
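ICU4J exposes the same calls on ULocale, so the approach described above looks roughly like this (a sketch of the idea, not a complete language-to-script mapping):

import com.ibm.icu.util.ULocale;

public class LanguageScripts {
    public static void main(String[] args) {
        // For every locale ICU knows about, maximize it with the likely subtags
        // and report the most likely script for that language.
        for (ULocale locale : ULocale.getAvailableLocales()) {
            ULocale maximized = ULocale.addLikelySubtags(locale);
            System.out.println(locale.getLanguage() + " -> " + maximized.getScript());
        }
    }
}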