If you already know the name of a codepoint, such as GREEK SMALL LETTER PHI, you can obtain the character using \c. There is an NQP equivalent, getunicode. However, is there a way of looking up all the GREEK SMALL LETTER codepoints, for instance?
I have tried to find out where those names are stored, but I haven't found them in NQP, Rakudo or MoarVM source. Any idea?
You can just go through all of the Unicode codepoints and test them:
(0..0x10FFFF).grep: *.uniname.starts-with('GREEK SMALL LETTER ');
You might want to get the values that are Greek and lowercase instead of only the ones that have GREEK SMALL LETTER in their name.
(0..0x10FFFF).grep: {
    .uniprop('Script') eq 'Greek'
    and .uniprop('Lowercase')
}
When reporting bugs in one of my programs (which handles specialized XML files containing Bible text, which may contain copyrighted or otherwise protected material), there is an option to submit an anonymized XML file with the report. As some bugs only happen with text formatted in certain ways in the fields, punctuation should be preserved and only letters and digits anonymized.
At the moment, the code first identifies fields that should be anonymized, then iterates over each character and replaces uppercase Latin letters by X, lowercase Latin letters by x, digits by 0, and Greek letters by Χ/χ.
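In Java terms, the replacement step looks roughly like this (a minimal sketch of the approach described above; the method name and structure are made up for illustration):

// Sketch of the replacement described above: Latin -> X/x, digits -> 0,
// Greek -> Χ/χ (U+03A7 / U+03C7); everything else (punctuation, spaces) is kept.
static String anonymizeField(String field) {
    StringBuilder out = new StringBuilder(field.length());
    field.codePoints().forEach(cp -> {
        if (cp >= 'A' && cp <= 'Z') {
            out.append('X');
        } else if (cp >= 'a' && cp <= 'z') {
            out.append('x');
        } else if (Character.isDigit(cp)) {
            out.append('0');
        } else if (Character.isLetter(cp)
                && Character.UnicodeScript.of(cp) == Character.UnicodeScript.GREEK) {
            out.append(Character.isUpperCase(cp) ? 'Χ' : 'χ');
        } else {
            out.appendCodePoint(cp);
        }
    });
    return out.toString();
}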
Now I got a report that it does not work with Cyrillic text. I have fixed it for now by replacing any Unicode letter by X or x, but that changes the script of the letters unnecessarily and may make bugs unreproducible.
I could now of course add another special case for Cyrillic to solve this, and wait for the next bug report.
But I wondered if anybody else may have already compiled a list or database of "anonymizing" characters for each script available in Unicode, or found a way to extract it from the UnicodeData included in Java or provided by the Unicode Consortium. Preferably as a Java library, but any other file that can be used from Java should be fine, too.
I am pre-processing dirty text fields. I have managed to delete single characters and numbers, but there are still Greek letters (from formulas) that I want to delete altogether.
Any type of Greek letter can occur at any position in the string.
Any ideas how to do it?
select regexp_replace(' ω ω α ω alkanediylbis alkylimino bis alkanolpolyethoxylate the formula where straight branched chain alkylene group also known alkanediyl group that has the range carbon atoms and least carbon atoms length and can the same different and are primary alkyl groups which contain carbon atoms each and can the same different and are alkylene groups which contain the range from carbon atoms each and and are the same different numerals the range each ', '\W+', '')
[Α-Ωα-ω] will match the standard Greek alphabet. (Note that the Α here is a distinct character from the Latin A, though they probably look identical).
Some commonly-used symbols are outside of the standard alphabet, so at the very least, you probably want to match the whole Greek Unicode block using [\u0370-\u03FF].
Unicode also has
The Greek Extended block containing letters with diacritics
The Coptic block with some very similar-looking characters
The Mathematical Operators block with its own ∆/∏/∑ symbols
Several copies of the Greek alphabet in the Mathematical Alphanumeric Symbols block
...and probably more.
Rather than trying to list everything you want to replace, it might be easier to list the things you want to keep. For example, to remove everything outside the printable ASCII range:
select regexp_replace(
  'ABCΑαΒβΓγΔδΕεΖζΗηΘθΙιΚκΛλΜμΝνΞξΟοΠπΡρΣσςΤτΥυΦφΧχΨψΩω123',
  '[^\u0020-\u007E]', '', 'g'
);
regexp_replace
----------------
ABC123
Turkish has dotted and dotless I as two separate characters, each with their own uppercase and lowercase forms.
Uppercase   Lowercase
I U+0049    ı U+0131
İ U+0130    i U+0069
Whereas in other languages using the Latin alphabet, we have:
Uppercase   Lowercase
I U+0049    i U+0069
Now, the Unicode Consortium could have implemented this as six different characters, each with its own casing rules, but it instead decided to use only four, with different casing rules in different locales. This seems rather odd to me. What was the rationale behind that decision?
A possible implementation with six different characters:
Uppercase   Lowercase
I U+0049    i U+0069
I NEW       ı U+0131
İ U+0130    i NEW
Codepoints currently used:
U+0049 ‹I› \N{LATIN CAPITAL LETTER I}
U+0130 ‹İ› \N{LATIN CAPITAL LETTER I WITH DOT ABOVE}
U+0131 ‹ı› \N{LATIN SMALL LETTER DOTLESS I}
U+0069 ‹i› \N{LATIN SMALL LETTER I}
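The locale-dependent case mapping with these four code points can be seen directly in, for example, Java, whose String case mapping follows the Unicode casing rules (a small illustration):

import java.util.Locale;

public class TurkishCasing {
    public static void main(String[] args) {
        Locale tr = new Locale("tr", "TR");
        // With the existing four code points, the result depends on the locale:
        System.out.println("I".toLowerCase(Locale.ROOT)); // i (U+0069)
        System.out.println("I".toLowerCase(tr));          // ı (U+0131)
        System.out.println("i".toUpperCase(Locale.ROOT)); // I (U+0049)
        System.out.println("i".toUpperCase(tr));          // İ (U+0130)
    }
}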
There is one theoretical and one practical reason.
The theoretical one is that the i of most Latin-script alphabets and the i of the Turkish and Azerbaijani alphabets are the same, and likewise the I of most Latin-script alphabets and the I of the Turkish and Azerbaijani alphabets are the same. The alphabets differ only in the relationship between those two. One could easily enough argue that they are in fact different (as your proposed encoding treats them), but that is how the Language Commission considered them when defining the alphabet and orthography in Turkey in the 1920s, and Azerbaijani usage in the 1990s copied that.
(In contrast, there are Latin-based scripts in which the i should be considered semantically the same as the dotted i even though it is never drawn with a dot [just use a different font for differently shaped glyphs], particularly those that predate Carolingian minuscule or derive from a script that does, such as Gaelic script, which was derived from Insular script. Indeed, it is particularly important never to write Irish in Gaelic script with a dot on the i, because it could be confused with the sí buailte diacritic of the orthography that was used with that script. Sadly, many fonts attempting this script not only add a dot, but make the worse orthographical error of turning it into a stroke, which is confusable with the fada diacritic; since the fada can appear on an i while the sí buailte cannot, this makes the spelling of words appear wrong. There are probably more "Irish" fonts with this error than without.)
The practical reason is that existing Turkish character encodings such as ISO/IEC 8859-9, EBCDIC 1026 and IBM 00857, which had common subsets with either ASCII or EBCDIC, already treated i and I as the same characters as those in ASCII or EBCDIC (that is to say, as those in most Latin-script alphabets), and ı and İ as separate characters which are their case-changed equivalents; exactly as Unicode does now. Compatibility with such encodings requires continuing that practice.
Another practical reason for that implementation is that doing otherwise would have created great confusion and difficulty for users of the Turkish keyboard layout.
Imagine it were implemented the way you suggest, and pressing the ıI key and the iİ key on Turkish keyboards produced Turkish-specific Unicode characters. Then, even though the Turkish keyboard layout otherwise includes all ASCII/Basic Latin characters (e.g. q, w, x are on the keyboard even though they are not in the Turkish alphabet), one character would have become impossible to type. So, for example, Turkish users wouldn't be able to visit wikipedia.org, because what they actually typed would be w�k�ped�a.org. Maybe web browsers could implement a workaround specifically for Turkish users, but think of the other use cases and the heaps of non-localized applications that would become difficult to use.

Perhaps the Turkish keyboard layout could add an additional key to become ASCII-complete again, so that there are three keys, i.e. ıI, iİ, iI. But that would be a pointless waste of a key in an already crowded layout and would be even more confusing: Turkish users would need to think about which key is appropriate in every context. "I am typing a user name, and those tend to expect ASCII characters, so use the iI key here." "When creating my password with the i character, did I use the iI key or the iİ key?"
Due to a myriad of such problems, even if Unicode had included Turkish-specific i and I characters, the keyboard layouts would most likely have ignored them and continued to use the regular ASCII/Basic Latin characters, so the new characters would be completely unused and moot. Except they would still probably turn up occasionally and create confusion, so it's a good thing that they didn't go that route.
I am new to Unicode and have been given the requirement to look at some translated text, iterate over all of the characters of that translation, and determine whether all the characters are valid for the target culture (language and location).
For example, if I am translating a document from English to Greek, I want to detect if there are any English/ASCII "A"s in the Greek translation and report that as an error. This could well happen with corrupted data coming from a translation memory.
Is there any existing grouping of Unicode characters by culture? Or is there any existing strategy for developing this kind of grouping? I see that there is some grouping of characters at http://www.unicode.org/charts/, but at first glance it does not seem to be quite what I am looking for.
Does anything exist like "Here are the valid Unicode characters for Spanish - Spain: [some Unicode range(s)]" or "Here are the valid Unicode characters for Russian - Russia: [some Unicode range(s)]"?
Or has anyone developed a strategy to define these?
If this is not the right place to ask this question, I would welcome any direction on where might be a good place to ask the question.
This is something that CLDR (Common Locale Data Repository) deals with. It is not part of the Unicode Standard, but it is an activity and a resource managed by the Unicode Consortium. The LDML specification defines the format of the locale data. The Character Elements define some sets of characters: “main/standard”, “auxiliary”, “index”, and “punctuation”.
The data for Greek includes only Greek letters and some basic punctuation. This, like all such data in CLDR, is largely subjective. And even though the CLDR process is meant to produce well-reviewed data based on consensus, the reality is different. It can be argued that in normal Greek texts, Latin letters are not uncommon, especially in technical areas. For example, the international symbol for the ampere is "A", a Latin letter; the symbol for the kilogram is "kg", in Latin letters, even though the word for it is written in Greek letters in Greek.
Thus, no matter how you run the analysis, the occurrence of Latin “A” in Greek text could be flagged as potentially suspicious, but not an error.
There are C/C++ and Java libraries that implement access to CLDR data, as part of ICU.
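For example, with ICU4J the CLDR exemplar sets can be queried directly. A hedged sketch (the exact contents of the sets depend on the CLDR version shipped with your ICU build):

import com.ibm.icu.text.UnicodeSet;
import com.ibm.icu.util.LocaleData;
import com.ibm.icu.util.ULocale;

public class ExemplarCheck {
    public static void main(String[] args) {
        // CLDR "main" exemplar characters for Greek, case-folded, via ICU4J.
        // ES_AUXILIARY and ES_PUNCTUATION can be merged in the same way.
        UnicodeSet allowed = LocaleData.getExemplarSet(
                new ULocale("el"), UnicodeSet.CASE, LocaleData.ES_STANDARD);

        String text = "Καλημέρα A";
        text.codePoints()
            .filter(cp -> !Character.isWhitespace(cp) && !allowed.contains(cp))
            .forEach(cp -> System.out.printf(
                    "suspicious: U+%04X %s%n", cp, Character.getName(cp)));
    }
}

Whether a stray Latin "A" found this way is a real error or an acceptable symbol is still a policy decision, as noted above.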
I have a phonebook app where I generate the title for section headers by comparing the first letters of entries.
The indexes are predefined, so I expect letters to be assigned to A-Z and numbers to #.
The problem is that many languages have letters with accents, including ü, İ, ç, etc. In my approach, since these characters do not fall in the range A-Z, they are assigned to #, which is not desired.
The native iOS Phonebook app assigns, for example, ü to U, and so on. Is there a simple way to make this mapping without defining a set of characters myself?
Thanks.
Check out Unicode Normalization. You probably want some combination of NFD and extraction of the adequate data. If you look at the UnicodeData.txt file from Unicode, you will see lines like:
00E9;LATIN SMALL LETTER E WITH ACUTE;Ll;0;L;0065 0301;;;;N;LATIN SMALL LETTER E ACUTE;;00C9;;00C9
Here 00E9, i.e. 'é', is decomposed as 0065 0301. You keep 0065 (e) and discard 0301 (the combining acute accent). This file should get you started nicely. There may be equivalent functions in Objective-C/iOS, but I wouldn't know where to start...
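The question is about iOS, but as a language-neutral illustration of the decompose-and-strip-marks idea, here is a minimal Java sketch using java.text.Normalizer (the helper name and the A-Z/# policy mirror the question and are otherwise made up):

import java.text.Normalizer;
import java.util.Locale;

public class SectionIndex {
    // Map the first character of a name to a section header A-Z, or # otherwise:
    // decompose (NFD), drop combining marks, then uppercase.
    static char sectionFor(String name) {
        if (name == null || name.isEmpty()) return '#';
        String decomposed = Normalizer.normalize(name.substring(0, 1), Normalizer.Form.NFD);
        String base = decomposed.replaceAll("\\p{M}+", "").toUpperCase(Locale.ROOT);
        if (!base.isEmpty()) {
            char c = base.charAt(0);
            if (c >= 'A' && c <= 'Z') return c;
        }
        return '#';
    }

    public static void main(String[] args) {
        System.out.println(sectionFor("über"));     // U
        System.out.println(sectionFor("İstanbul")); // I (İ decomposes to I + U+0307)
        System.out.println(sectionFor("çelik"));    // C
        System.out.println(sectionFor("42nd St"));  // #
    }
}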