Can Unicode NFC normalization increase the length of a string?

If I apply Unicode Normalization Form C to a string, will the number of code points in the string ever increase?

Yes, there are code points that expand to multiple code points after applying NFC normalization. Within the Basic Multilingual Plane, for example, there are 70 code points that expand to 2 code points after applying NFC normalization, and there are 2 code points (U+FB2C and U+FB2D within the Alphabetic Presentation Forms block) that expand to 3 code points.
One guarantee that you have for this so-called "expansion factor" is that no string will ever expand to more than three times its length (in terms of number of code units) after NFC normalization is applied:
There is also a Unicode Consortium stability policy that canonical mappings are always limited in all versions of Unicode, so that no string when normalized to NFC expands to more than 3× in length (measured in code units). This is true whether the text is in UTF-8, UTF-16, or UTF-32. This guarantee also allows for certain optimizations in processing, especially in determining buffer sizes.
— Section 9, Detecting Normalization Forms, in UAX #15: Unicode Normalization Forms
I have written a Java program to determine which code points within a Unicode block expand to multiple code points: http://ideone.com/9PUOCb
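That program isn't reproduced here, but a minimal sketch of the same idea, using java.text.Normalizer, could look like the following (the class name and the BMP-only scan range are my own choices):

import java.text.Normalizer;

public class NfcExpansion {
    public static void main(String[] args) {
        // Scan the Basic Multilingual Plane (U+0000..U+FFFF).
        for (int cp = 0; cp <= 0xFFFF; cp++) {
            // Lone surrogates are not valid in a String on their own; skip them.
            if (Character.isSurrogate((char) cp)) continue;
            String original = new String(Character.toChars(cp));
            String nfc = Normalizer.normalize(original, Normalizer.Form.NFC);
            int count = nfc.codePointCount(0, nfc.length());
            if (count > 1) {
                System.out.printf("U+%04X expands to %d code points%n", cp, count);
            }
        }
    }
}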
Alternatively, Tom Christiansen's unichars utility, part of the Unicode::Tussle CPAN module, can be used. (Note: Mac users may see an error at the make test installation step saying that the Perl version is too old. If you see this error, you can install the module by running notest install Unicode::Tussle within a CPAN shell.)
Examples:
Print the code points in the BMP that expand to 3 code points:
unichars 'length(NFC) == 3'
שּׁ U+FB2C HEBREW LETTER SHIN WITH DAGESH AND SHIN DOT
שּׂ U+FB2D HEBREW LETTER SHIN WITH DAGESH AND SIN DOT
Count the number of code points in all planes that expand to more than one code point:
unichars -a 'length(NFC) > 1' | wc -l
85
See also the frequently asked question "What are the maximum expansion factors for the different normalization forms?"

Related

How can I draw devanagari characters on a screen?

I am trying to display Devanagari characters on a screen, but the development environment I'm programming in has no Unicode support. To draw characters, I instead use binary matrices to color the corresponding screen pixels, and I ordered these matrices by Unicode code point. For languages that use the Latin alphabet this worked fine: I only needed to draw the characters one after the other to render a string. For Devanagari characters, though, it's different.
In the Devanagari script, some characters placed next to others can completely change the appearance of the word, both in the order and in the shape of the characters. What results looks like a single character, but read as Unicode it is still two distinct code points.
This merging sometimes occurs in a simple way:
क + ् = क्
ग + ् = ग्
फ + ि = फि
But other times you get completely different characters:
क + ् + क = क्क
ग + ् + घ = ग्घ
क + ् + ष = क्ष
I found several documents describing the complex grammatical rules that determine how these characters merge (https://www.unicode.org/versions/Unicode8.0.0/UnicodeStandard-8.0.pdf), but the more I look into it, the more I get the impression that I would need to learn Hindi to understand those rules before I could create an algorithm.
I would like to understand the principles behind these character combinations without necessarily having to learn Hindi. Has anyone already solved this problem or found an alternative approach they would be willing to share?
Whether Devanagari text is encoded using Unicode or ISCII, display of the text requires a combination of shaping engine and font data that maps a string of characters into an appropriate sequence of positioned glyphs. The set of glyphs needed for Devanagari will be a fair bit larger than the initial set of characters.
The shaping step involves analysis of clusters, re-ordering of certain elements within clusters, substitution of glyphs, and finally positioning adjustments to the glyphs. Consider this example:
क + ् + क + ि = क्कि
The cluster analysis is needed to recognize elements against a general cluster pattern: for example, which element is the "base" consonant within the cluster, which are additional consonants that will conjoin to it, which are vowels, and what type each vowel is with regard to visual positioning. In that sequence, the <ka, virama, ka> subsequence will form a base that the vowel or other marks are positioned relative to. The second ka is the "base" consonant, and the initial <ka, virama> sequence will conjoin to it as a "half" form. And the short-i vowel is one that needs to be re-positioned to the left of the conjoined-consonant combination.
The Devanagari section in the Unicode Standard describes in a general way some of the actions that will be needed in display, but it's not a specific implementation guide.
The OpenType font specification supports display of scripts like Devanagari through a combination of "OpenType Layout" data in the font plus shaping implementations that interact with that data. You can find documentation specifically for Devanagari font implementations here:
https://learn.microsoft.com/en-us/typography/script-development/devanagari
You might also find helpful the specification for the "Universal Shaping Engine" that several implementations use (in combination with OpenType fonts) for shaping many different scripts:
https://learn.microsoft.com/en-us/typography/script-development/use
You don't necessarily need to use OpenType, but you will want some implementation with the functionality I've described. If you're running in a specific embedded OS environment that isn't, say, Windows IoT, you evidently can't take advantage of the OpenType shaping support built into Windows or other major OS platforms. But perhaps you could take advantage of HarfBuzz, an open-source OpenType shaping library:
https://github.com/harfbuzz/harfbuzz
This would need to be combined with Devanagari fonts that have appropriate OpenType Layout data, and there are plenty of those, including OSS options (e.g., Noto Sans Devanagari).
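To illustrate the shaping step in code: the following minimal Java sketch uses the JDK's built-in layout support rather than HarfBuzz (the font name is an assumption; a font with Devanagari OpenType Layout data must be installed). It shows that the number of positioned glyphs produced by shaping differs from the number of code points in the input:

import java.awt.Font;
import java.awt.font.FontRenderContext;
import java.awt.font.GlyphVector;

public class DevanagariShaping {
    public static void main(String[] args) {
        // ka + virama + ka + vowel sign i: four code points before shaping.
        String text = "\u0915\u094D\u0915\u093F";

        // Assumes a Devanagari-capable font is installed; otherwise
        // Java falls back to a default font and results will differ.
        Font font = new Font("Noto Sans Devanagari", Font.PLAIN, 32);
        FontRenderContext frc = new FontRenderContext(null, true, true);

        // layoutGlyphVector performs complex-script shaping:
        // cluster analysis, reordering, and glyph substitution.
        GlyphVector glyphs = font.layoutGlyphVector(
                frc, text.toCharArray(), 0, text.length(),
                Font.LAYOUT_LEFT_TO_RIGHT);

        System.out.println("code points: " + text.codePointCount(0, text.length()));
        System.out.println("glyphs:      " + glyphs.getNumGlyphs());
    }
}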

Why summation top and bottom ⎲⎳?

I totally understand the necessity of the integral and bracket pieces (U+2320, U+2321, U+239B..U+23AE), since they help build large notation.
But for the large summation pieces (U+23B2, U+23B3), if one stretches these two, they still lose their shape. I do not understand the logic behind splitting this character, or why there is no corresponding pair for the product symbol U+220F. Furthermore, I wonder in which cases these two symbols should be used.
This doesn't answer the question fully, but the following paragraph from Unicode Technical Report #28: Unicode 3.2 is probably the most authoritative explanation you can find:
Symbol Pieces. [to follow “APL Functional Symbols”] The characters in the range U+239B..U+23B3, plus U+23B7, comprise a set of bracket and other symbol fragments for use in mathematical typesetting. These pieces originated in older font standards, but have been used in past mathematical processing as characters in their own right to make up extra-tall glyphs for enclosing multi-line mathematical formulae. Mathematical fences are ordinarily sized to the content that they enclose. However, in creating a large fence, the glyph is not scaled proportionally; in particular the displayed stem weights must remain compatible with the accompanying smaller characters. Thus, simple scaling of font outlines cannot be used to create tall brackets. Instead, a common technique is to build up the symbol from pieces. In particular, the characters U+239B LEFT PARENTHESIS UPPER HOOK through U+23B3 SUMMATION BOTTOM represent a set of glyph pieces for building up large versions of the fences (, ), [, ], {, and }, and of the large operators ∑ and ∫. These brace and operator pieces are compatibility characters. They should not be used in stored mathematical text, but are often used in the data stream created by display and print drivers.
This is followed by a table showing how pieces are intended to be used together to create specific symbols.
Unicode mostly standardises existing character repertoires, and keeps their peculiarities so that conversions round-trip properly. Corresponding pieces for the product symbol are not part of Unicode because the originating character repertoire did not have them. Ask on the Unicode mailing list (https://www.unicode.org/consortium/distlist-unicode.html) about the provenance of the summation top/bottom.

Why does Unicode have multiple characters representing the same letter?

ASCII has versions of the whole Roman alphabet. I was surprised recently to learn that Unicode contains other versions of those same characters. One example is U+1D5C4 MATHEMATICAL SANS-SERIF SMALL K, or "𝗄".
Can't LaTeX math mode, or MS Word equation editor, or whatever other program just use a sans-serif font if it wants the letters in a mathematical formula to be sans-serif?
These characters exist so that the semantic distinction between them can be encoded in plain text, or where the specific font shape can't be controlled.
The block you mention is only intended for use in mathematical and technical contexts, where the distinction between, say, 𝑑 as a variable vs. d as a differential operator vs. 𝖽 as an object (in category theory) is important. TR #25 gives another example where losing the distinction between ℋ and H can completely change the meaning of an equation. Being able to encode this formatting into the text itself is also important for ISO 31-11.
All of these characters maintain compatibility mappings with their "normal" Latin and Greek counterparts, so the distinction between them should not affect searching and sorting.
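For instance, a compatibility normalization (NFKC) folds the mathematical letters back to their ordinary counterparts; a quick sketch with java.text.Normalizer:

import java.text.Normalizer;

public class MathVariants {
    public static void main(String[] args) {
        // U+1D5C4 MATHEMATICAL SANS-SERIF SMALL K lies outside the BMP,
        // so it occupies a surrogate pair in a Java String.
        String mathK = new String(Character.toChars(0x1D5C4));

        // NFKC applies the compatibility mapping, yielding plain "k".
        String folded = Normalizer.normalize(mathK, Normalizer.Form.NFKC);

        System.out.println(folded.equals("k")); // true
    }
}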
You are confusing display with text encoding.
The idea is that Unicode has ALL the symbols used in writing known to mankind, grouped by usage. That's why you will find many code points that look alike.
So a formula written with a k is supposed to be different from a word written with a k. The sans-serif part is just a description of the kind of k best used for display. Tomorrow somebody might want to add a serif k, and then how would you describe the difference?

What is the need of combining characters in Unicode?

What is a practical application of having a combining character representation of a symbol in Unicode when a single code point mapping to the symbol will alone suffice?
What programming/non-programming advantage does it give us?
There is no particular programming advantage in using a decomposed presentation (base character plus combining character) when a precomposed presentation exists, e.g. using U+0065 LATIN SMALL LETTER E followed by U+0301 COMBINING ACUTE ACCENT instead of U+00E9 LATIN SMALL LETTER E WITH ACUTE "é". Such decomposed presentations are something that needs to be dealt with in programming, part of the problem rather than an advantage. So it's similar to asking about the benefits of having the letter U in the character code.
The reasons why decomposed presentations (or the letter U) are used in actual data and need to be handled are external to programming and hence off-topic at SO.
Decomposing all decomposable characters may have advantages in processing, as it makes the data more uniform, canonical. This would relate to particular features of the processing needed, and it would be implemented by performing (usually with a library routine) normalization to NFD or NFKD form. But this would normally be part of the processing, not something imposed on the input format. If string matching is performed, it is mostly desirable to treat decomposed and precomposed representations of a character as equivalent, and normalization makes this easy. But this is a way of dealing with the two different representations, not a cause for their existence, and it can equally well be performed by normalizing to NFC (i.e., precomposing everything that can be precomposed). See the Unicode FAQ section on Normalization.
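A small sketch of that matching approach with java.text.Normalizer (the example strings are my own):

import java.text.Normalizer;

public class NormalizedMatch {
    public static void main(String[] args) {
        String precomposed = "\u00E9";  // é as one code point
        String decomposed = "e\u0301";  // e + combining acute accent

        // The raw strings are different code point sequences...
        System.out.println(precomposed.equals(decomposed)); // false

        // ...but after normalizing both to the same form they compare equal.
        String a = Normalizer.normalize(precomposed, Normalizer.Form.NFC);
        String b = Normalizer.normalize(decomposed, Normalizer.Form.NFC);
        System.out.println(a.equals(b)); // true
    }
}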
Decomposed components are better for text editing, and possibly (though not definitely) better for compression.
When editing text, there are times when modifying just an accent mark is wanted, but precomposed characters do not allow partial modifications. Sometimes users may want to modify the base character without removing the accent. Those kinds of edits are easier with decomposed characters.
As for compression, this made more sense in the days of separate encodings per language. Back then, an 8-bit encoding per language allowed each language its own character set. The small space of 8 bits could fit only so many unique code points, so some languages achieved better coverage, and a better compression ratio, by representing composed characters as variable-width sequences of decomposed parts.

Choosing a checksum for short code to prevent typing errors

I need to choose a checksum algorithm to detect when users mistype a 4-character [A-Z0-9] code, by adding 1 check character (also in [A-Z0-9]) at the end of the code.
Summing ASCII codes and applying a modulo is a bad solution, since transposing 2 keystrokes won't be noticed.
I would probably use the Fletcher algorithm, but I would like to know if anyone knows of an algorithm designed for this use case (a very small number of bytes, position-dependent)?
Thank you.
You can try the ISO 7064 Mod x,y algorithms. According to the ISO description (http://www.iso.org/iso/home/store/catalogue_ics/catalogue_detail_ics.htm?csnumber=31531), the check character systems specified in ISO/IEC 7064:2002 can detect:
all single substitution errors (the substitution of a single character for another, for example 4234 for 1234);
all or nearly all single (local) transposition errors (the transposition of two single characters, either adjacent or with one character between them, for example 12354 or 12543 for 12345);
all or nearly all shift errors (shifts of the whole string to the left or right);
a high proportion of double substitution errors (two separate single substitution errors in the same string, for example 7234587 for 1234567);
a high proportion of all other errors.
There are some partial implementations you can find like:
http://code.google.com/p/checkdigits/wiki/CheckDigitSystems (includes Java and JavaScript implementations of several checksum algorithms).
http://www.codeproject.com/Articles/16540/Error-Detection-Based-on-Check-Digit-Schemes (explains and includes VC implementations).
For example, you could use ISO 7064 Mod 37,36, which uses 0-9 and A-Z for both the data and the check character. The detailed description of the algorithm (if you don't feel like buying the ISO standard) can be found in the documents below, followed by a code sketch:
http://www.cdfa.ca.gov/ahfss/animal_health/pdfs/NAIS/Program_Standard_and_Technical_Reference10-07.pdf (it's used for animal identification)
http://www.ifpi.org/content/library/GRid_Standard_v2_1.pdf (also used by the music industry)
http://www.ddex.net/sites/default/files/DDEX-DPID-10-2006.pdf (other media companies)
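Based on those documents, a sketch of the hybrid MOD 37,36 check-character computation might look like this in Java (my own transcription of the published algorithm, not the normative ISO text):

public class Mod3736 {
    private static final String ALPHABET = "0123456789ABCDEFGHIJKLMNOPQRSTUVWXYZ";

    // Computes the ISO 7064 MOD 37,36 check character for a [0-9A-Z] string.
    static char checkCharacter(String data) {
        int p = 36;
        for (char c : data.toCharArray()) {
            int value = ALPHABET.indexOf(c);
            if (value < 0) throw new IllegalArgumentException("invalid character: " + c);
            int s = (p + value) % 36;
            if (s == 0) s = 36;     // the hybrid system's "substitute 0 with modulus" rule
            p = (s * 2) % 37;
        }
        return ALPHABET.charAt((37 - p) % 36);
    }

    public static void main(String[] args) {
        String code = "A1B2";                 // example 4-character code
        char check = checkCharacter(code);
        System.out.println(code + check);     // code with check character appended

        // Verification: recompute over the data part and compare to the last character.
        String typed = code + check;
        boolean ok = checkCharacter(typed.substring(0, 4)) == typed.charAt(4);
        System.out.println(ok);               // true
    }
}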