Is there a "end of heading" or "beginning of transmission" character in Unicode? - unicode

Unicode has characters for START OF HEADING (␁ U+0001), START OF TEXT (␂ U+0002), END OF TEXT (␃ U+0003), and END OF TRANSMISSION (␄ U+0004). What's confusing about this is that, while there is a START OF HEADING character, there is no END OF HEADING character, and while there is an END OF TRANSMISSION character, there is no START OF TRANSMISSION character.
Where are these missing characters?
How should I go about representing the start of a transmission, or the end of a heading, using Unicode?
If the answer is "just use START OF HEADING in place of START OF TRANSMISSION," then what should I do if my "transmission" doesn't have a "heading"?
If the second part of the answer is "just use START OF TEXT in place of END OF HEADING," what happens if there is something between the heading and the text?†
† I can't imagine that this happens often (if ever), but I'm asking just in case someone out there ever tries to put something between the end of the heading and the start of their text.
Stack Exchange doesn't have a Unicode site, so I'm posting this here. If someone thinks that it would fit better on one of the other Network sites, please let me know in the comments.

The characters U+0000 to U+001F are imported directly from ASCII. If it didn't exist in ASCII, it doesn't exist in Unicode, in that range.
Most are obsolete; in-band delimiters are not much used nowadays. If you're using an existing protocol with in-band delimiters, it will have rules based on ASCII usage; if you're designing a new protocol, there are probably better ways to proceed.
As far as I recall, there's no need for an end-of-heading character in typical usage, because it coincides with start-of-text. There's presumably no need for start-of-transmission because the first thing you receive is the start of the transmission, after synchronization (start bits in async disciplines, SYN in synchronous ones).
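For illustration, here is a minimal sketch (Python) of the classic in-band framing these characters were designed for; the frame() helper and the example heading are assumptions for demonstration, not part of any standard.

    # A rough sketch of the classic ASCII framing convention:
    # [SOH heading] STX text ETX EOT. The heading block is optional, which is
    # one pragmatic answer to "what if my transmission has no heading?".
    SOH, STX, ETX, EOT = "\x01", "\x02", "\x03", "\x04"

    def frame(heading, text):
        parts = []
        if heading:
            parts += [SOH, heading]        # START OF HEADING, then the heading
        parts += [STX, text, ETX]          # text delimited by STX / ETX
        parts.append(EOT)                  # END OF TRANSMISSION
        return "".join(parts)

    print(frame("TO: alice", "hello").encode())
    # b'\x01TO: alice\x02hello\x03\x04'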

The list of unusual Unicode characters

Where can I get the complete list of all Unicode characters that don't behave as simple characters? Examples: character 0x0363 (won't be printed without another one before it), character 0x0084 (does weird things when printed). I need just a raw list of such unusual characters so I can replace them with something harmless and avoid unwanted output effects. Regular characters (those not in this list) should use exactly one character cell when printed (i.e. the cursor moves +1 to the right), should not depend on the previous or next characters, and should not affect printing style in any way.
Edit because of multiple comments:
I have some Unicode string, usually consisting of "usual" characters like 0x20-0x7E or Cyrillic letters. There are also a lot of other Unicode characters that are usual and may safely be assumed to have strlen() = 1. The string is printed on the terminal and I need to know the resulting position of the cursor. I don't want to use some complex and unstable library to do that; I want the simplest possible logic. Every problematic character may be replaced with U+FFFD or something like "<U+0363>" (an ASCII string with its index instead of the character itself). I want a list of "possibly problematic" characters to replace. It is acceptable to have some non-problematic characters in this list too, but not many.
There is no simple algorithm for this. You'll likely need a complex, but extremely stable library: libicu, or something based on it. Basically every other library that does this kind of work is based on libicu, which is maintained by the Unicode organization.
If you don't want to use the official library (or something based on their library), you'll need to parse the Unicode Character Database yourself. In particular, you need to look at Character Properties, and parse the files in the UCD.
I believe you're asking for Bidi_Class (i.e. "direction") to be Left_To_Right, Canonical_Combining_Class to be Not_Reordered, and Joining_Type to be Non_Joining.
You probably also want to check the General_Category and avoid M* (Marks) and C* (Other).
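As a rough approximation of those checks, here is a minimal sketch in Python using only the standard-library unicodedata module; the possibly_problematic and sanitize helpers are assumptions based on the heuristics above, not an exact implementation of the Unicode rules.

    # Flag a code point as possibly problematic when its General_Category
    # starts with M (marks) or C (control/other), or when its
    # Canonical_Combining_Class is nonzero. This over- and under-matches in
    # places (notably emoji), as discussed below.
    import unicodedata

    def possibly_problematic(ch):
        cat = unicodedata.category(ch)            # e.g. 'Lu', 'Mn', 'Cc'
        return cat[0] in ("M", "C") or unicodedata.combining(ch) != 0

    def sanitize(s):
        return "".join("\ufffd" if possibly_problematic(c) else c for c in s)

    print(sanitize("e\u0301 test"))   # prints 'e' + U+FFFD + ' test': the
                                      # combining acute accent was replaced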
This should work for some Emoji, but this whole approach will break a lot of emoji that look simple and are not. Most famously: ❤️, which is two "characters," not one. You may want to filter out Emoji. As a simple starting point, you may want to restrict yourself to the Basic Multilingual Plane (BMP), which are code points 0000-FFFF. Anything above this range is, almost by definition, rare or unusual. The BMP does include some emoji, but most emoji (and all new emoji) are outside the range.
Remember that the glyphs for single characters can still have radically different widths, even in nominally fixed-width fonts. For example, 𒈙 (U+12219 CUNEIFORM SIGN LUGAL OPPOSING LUGAL) is a completely "normal" character in the way you're describing. It is left-to-right. It doesn't depend on or influence characters around it (it's non-combining and non-joining). Its "length in characters" is 1. Its glyph is also extremely wide in most fonts and breaks a lot of layout. I don't know anything in the Unicode database that would warn you of this, since "glyph width" is entirely a function of fonts, not characters, and Unicode explicitly does not consider fonts. (That said, most of the most problematic characters are outside the BMP. Probably the most common exception is DŽ, but many fixed-width fonts have a narrow glyph for it: DŽ.)
Let's write some cuneiform in a fixed-width font.
Normally, every character should line up with a character above.
Here: 𒈙. See how these characters don't align correctly?
Not only is it a very wide glyph, but its width is not even a multiple.
At least not in my font (Mac Safari 15.0).
But DŽ is ok.
Also remember that there are multiple ways to encode the same "character." For example, é can be a "simple" character (U+00E9), or it can be two characters (U+0065, U+0301). So in some cases é may print in your scheme, and in others it won't. I suspect this is fine for your problem, but if it isn't, you're going to need to apply a normalization form (likely NFC).
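A minimal sketch of that normalization step, again using only the standard-library unicodedata module:

    # NFC folds decomposed sequences like U+0065 U+0301 into the precomposed
    # form U+00E9, so the length check above sees one code point instead of two.
    import unicodedata

    s = "e\u0301"                                         # 'é' as two code points
    print(len(s), len(unicodedata.normalize("NFC", s)))   # 2 1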

How does UTF-16 achieve self-synchronization?

I know that UTF-16 is a self-synchronizing encoding scheme. I also read the Wiki article below, but did not quite get it.
Self Synchronizing Code
Can you please explain it to me with an example of UTF-16?
In UTF-16, characters outside of the BMP are represented using a surrogate pair, in which the first code unit (CU) lies in the range 0xD800—0xDBFF and the second one in 0xDC00—0xDFFF. Each CU carries 10 bits of the code point (minus 0x10000). Characters in the BMP are encoded as themselves, in a single code unit.
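A minimal sketch (Python) of that pairing arithmetic, just for illustration:

    # Split (code point - 0x10000) into two 10-bit halves and add the
    # surrogate range bases.
    cp = 0x1F600                                   # 😀, outside the BMP
    v = cp - 0x10000
    high = 0xD800 + (v >> 10)                      # first (high) surrogate
    low = 0xDC00 + (v & 0x3FF)                     # second (low) surrogate
    print(hex(high), hex(low))                     # 0xd83d 0xde00
    print("\U0001F600".encode("utf-16-be").hex())  # d83dde00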
Now the synchronization is easy. Given the position of any arbitrary code unit (see the sketch after this list):
If the code unit is in the 0xD800—0xDBFF range, it's the first code unit of two; just read the next one and decode. Voilà, we have a full character outside of the BMP.
If the code unit is in the 0xDC00—0xDFFF range, it's the second code unit of two; just go back one unit to read the first part, or advance to the next unit to skip the current character.
If it's in neither of those ranges then it's a character in the BMP. We don't need to do anything more.
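Here is a minimal sketch (Python) of that rule, treating the string's UTF-16 code units as a list of 16-bit integers; the char_start helper is just for illustration.

    import struct

    def char_start(units, i):
        # Return the index at which the character containing units[i] starts.
        if 0xDC00 <= units[i] <= 0xDFFF:   # second (low) surrogate
            return i - 1                   # step back to the first code unit
        return i                           # first surrogate or BMP unit

    data = "a\U0001F600b".encode("utf-16-le")
    units = list(struct.unpack("<%dH" % (len(data) // 2), data))
    print([hex(u) for u in units])         # ['0x61', '0xd83d', '0xde00', '0x62']
    print(char_start(units, 2))            # 1: resynchronized without rescanning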
In UTF-16 the CU is the unit, i.e. the smallest element. We work at the CU level and read CUs one by one instead of byte by byte. Because of that, along with historical reasons, UTF-16 is only self-synchronizing at the CU level.
The point of self-synchronization is to know immediately whether we're in the middle of something, instead of having to read again from the start and check. UTF-16 allows us to do that.
Since the ranges for the high surrogates, low surrogates, and valid BMP characters are disjoint, it is not possible for a surrogate to match a BMP character, or for (parts of) two adjacent characters to look like a legal surrogate pair. This simplifies searches a great deal. It also means that UTF-16 is self-synchronizing on 16-bit words: whether a code unit starts a character can be determined without examining earlier code units. UTF-8 shares these advantages, but many earlier multi-byte encoding schemes (such as Shift JIS and other Asian multi-byte encodings) did not allow unambiguous searching and could only be synchronized by re-parsing from the start of the string (UTF-16 is not self-synchronizing if one byte is lost or if traversal starts at a random byte).
https://en.wikipedia.org/wiki/UTF-16#Description
Of course that means UTF-16 may not be suitable for working over a medium without error correction/detection, like a bare network link. However, in a proper local environment it's a lot better than working without self-synchronization. For example, in DOS/V for Japanese, every time you press Backspace you must iterate from the start to know which character was deleted, because in the awful Shift-JIS encoding there's no way to know how long the character before the cursor is without a length map.

Detect if character is simplified or traditional Chinese character

I found this question which gives me the ability to check if a string contains a Chinese character. I'm not sure if the Unicode ranges are correct, but they seem to return false for Japanese and Korean and true for Chinese.
What it doesn't do is tell if the character is traditional or simplified Chinese. How would you go about finding this out?
update
Q: How can I recognize from the 32 bit value of a Unicode character if this is a Chinese, Korean or Japanese character?
http://unicode.org/faq/han_cjk.html
Their argument is that the characters, regardless of their shape, have the same meaning and therefore should be represented by the same code. Well, the distinction is not meaningless to me, because I am analyzing individual characters, which doesn't work with their solution:
A better solution is to look at the text as a whole: if there's a fair amount of kana, it's probably Japanese, and if there's a fair amount of hangul, it's probably Korean.
As already stated, you can't reliably detect the script style from a single character, but it is possible for a sufficiently long sample of text. See https://github.com/jpatokal/script_detector for a Ruby gem that does the job, and Simplified Chinese Unicode table for a general discussion.
It is possible for some characters. The Traditional and Simplified character sets overlap, so you have basically three sets of characters:
Characters that are traditional only.
Characters that are simplified only.
Characters that have been left untouched, and are available in both.
Take the character 面 for instance. It belongs to both #2 and #3... As a simplified character, it stands for both 面 and 麵, face and noodles, whereas 麵 is a traditional-only character. So in the Unihan database, 麵 has a kSimplifiedVariant, which points to 面, and from that you can deduce that it is a traditional character only.
But 面 also has a kTraditionalVariant, which points to 麵. This is where the system breaks down: if you used this data to deduce that 面 is a simplified character only, you'd be wrong...
On the other hand, 韩 has a kTraditionalVariant, pointing to 韓, and these two are a "real" Simplified/Traditional pair. But nothing in the Unihan database differentiates cases like 韓/韩 from cases like 麵/面.
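To make this concrete, here is a minimal sketch (Python); the classify helper and the tiny hand-filled dicts are hypothetical stand-ins for data you would parse out of Unihan_Variants.txt in the Unicode Character Database, and, as explained above, the result is only reliable for the clear-cut cases.

    # Classify a character from its kSimplifiedVariant / kTraditionalVariant
    # entries. The last case printed shows exactly where this breaks down:
    # 面 has a kTraditionalVariant but is not simplified-only.
    def classify(ch, k_simplified, k_traditional):
        has_simp = ch in k_simplified      # character has a kSimplifiedVariant
        has_trad = ch in k_traditional     # character has a kTraditionalVariant
        if has_simp and not has_trad:
            return "traditional only"      # e.g. 麵 -> 面
        if has_trad and not has_simp:
            return "has a traditional variant (maybe simplified, maybe shared)"
        if not has_simp and not has_trad:
            return "shared / no variant data"
        return "both variants listed"

    # Hypothetical excerpt of Unihan data, for illustration only:
    k_simplified = {"麵": "面"}
    k_traditional = {"韩": "韓", "面": "麵"}
    for c in "麵韩面":
        print(c, classify(c, k_simplified, k_traditional))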
As I think you've discovered, you can't. Simplified and traditional are just two styles of writing the same characters - it's like the difference between Roman and Gothic script for European languages.

What is the difference between \n and \r? [duplicate]

This question already has answers here:
Closed 12 years ago.
Possible Duplicate:
What is the difference between \r and \n?
Hi,
What is the difference between \n (newline) and \r (carriage return)? They both move the current cursor to the next line. Are they the same?
\r returns the cursor to the beginning of the line, NOT to the next line. When you use \n on Linux, a \r is implied (the terminal adds it); on Windows, it is not.
Using \r in Unix-like systems may result in overwriting the same line.
I suggest you read this.
In short, a newline in Windows is "\r\n", while a newline in Unix is just "\n" (and, just to make life difficult, a newline on older Macs is "\r").
Actually, a carriage return is supposed to move the cursor to the beginning of the current line. Then, newline moves the cursor exactly down one.
Nowadays, text-mode I/O in language runtimes will often automatically convert a bare \n to \r\n on Windows, and leave it as \n on Linux. Mac used to use \r, but it has changed to the \n convention.
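A minimal sketch (Python) of the difference as seen on a typical terminal (assuming stdout is a TTY):

    import sys

    sys.stdout.write("12345\rab\n")   # '\r' rewinds to column 0, so 'ab'
                                      # overwrites '12': the line shows "ab345"
    sys.stdout.write("12345\nab\n")   # '\n' starts a new line: "12345" then "ab"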
(edit: removed false/untested statements)
Read The Great Newline Schism; it explains everything in deep detail with great humor.
Ah the old days of the typewriter...
The difference between the two stems from the days of yore, when typing was done directly to paper. It required two actions to go to the next line:
pushing the 'carriage' (the big cylinder on the top) back to the left (this is where the next character would end up);
shifting the paper one line up (thus going down one line).
Splitting these two actions facilitated going back to a precise character position to correct it (there was no way to go up one line, or left one character!). Holding paper whiteout over the erroneous character and hitting that key would neatly white out exactly that erroneous character; then you could go back again and hit the correct key
(there was a key that struck without moving the carriage, though).
In the early computer age these actions were translated one-to-one into \r for carriage return and \n for shifting the 'paper'.
Nowadays the major operating systems apparently have differing opinions on whether this split is still necessary, now that going back to a previous position is much easier. However, in modern programming languages you'll generally see that a \n written to a text-mode stream stands in for the platform's line ending (e.g. \r\n on Windows).
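A minimal sketch (Python) of that text-mode translation, assuming a writable current directory for the throwaway demo.txt file:

    # Writing '\n' to a file opened in text mode produces the platform's
    # native line ending; reading the raw bytes back shows the difference.
    with open("demo.txt", "w") as f:    # text mode: newline translation on
        f.write("line1\n")
    with open("demo.txt", "rb") as f:   # binary mode: see the raw bytes
        print(f.read())                 # b'line1\r\n' on Windows, b'line1\n' elsewhere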
No, they're not. Modern text editors often treat them the same, however, because their old uses don't make much sense for digital word processors.
For example, \r literally means "return to the beginning of the line". While this might have been useful on a typewriter if you just wanted to overwrite everything on that line, this sort of functionality doesn't make much sense for digital type.
\n, on the other hand, would simply move down a line without returning to the beginning. This was also useful on a typewriter for indentation or bulleting. Again, not something that makes much sense for digital type.
Telnet is one example where both characters are still used in this manner.
Both characters were included in ASCII simply because, when it was being spec'd, they hadn't realized that functionality that was useful on a typewriter didn't make much sense on a computer.

How can I figure out what code page I am looking at?

I have a device with some documentation on how to send it text. It uses 0x00-0x7F to send 'special' characters like accented characters, euro signs, ...
I am guessing they copied an existing code page and made some changes, but I have no idea how to figure out what code page is closest to the one in my documentation.
In theory, this should be easy to do. For example, they map Á to 0x41, so if I could find some way to go through all code pages and find the ones that have this character at that position, it would be a piece of cake.
However, all I can find on the internet are links to code page dumps just like the one I'm looking at, or software that uses heuristics to read text and guess the most likely code page. Surely someone out there has made it possible to look up what code page one is looking at?
If it uses 0x00 to 0x7F for the "special" characters, how does it encode the regular ASCII characters?
In most of the charsets that support the character Á, its codepoint is 193 (0xC1). If you subtract 128 from that, you get 65 (0x41). Maybe your "codepage" is just the upper half of one of the standard charsets like ISO-8859-1 or windows-1252, with the high-order bit set to zero instead of one (that is, subtracting 128 from each one).
If that's the case, I would expect to find a flag you can set to tell it whether the next bunch of codepoints should be converted using the "upper" or "lower" encoding. I don't know of any system that uses that scheme, but it's the most sensible explanation I can come up with for the situation you describe.
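A minimal sketch (Python) of that "shifted upper half" theory, checking which common single-byte code pages put Á at 0x41 + 0x80; the candidate list is just an assumption for illustration.

    # For each candidate code page, decode the byte 0xC1 (0x41 + 0x80) and see
    # whether it comes out as 'Á'. Some code pages leave bytes undefined, hence
    # the try/except.
    candidates = ["latin-1", "cp1252", "iso8859-2", "iso8859-15", "cp437", "cp850"]
    for name in candidates:
        try:
            if bytes([0x41 + 0x80]).decode(name) == "\u00c1":   # 'Á'
                print(name, "has Á at 0xC1")
        except UnicodeDecodeError:
            pass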
There is no way to auto-detect the codepage without additional information. Below the display layer it’s just bytes and all bytes are created equal. There’s no way to say “I’m a 0x41 from this and that codepage”, there’s only “I’m 0x41. Display me!”
What endian is the system? Perhaps you're flipping bit orders?
In most codepages, 0x41 is just the normal "A"; I don't think any standard codepage has "Á" in that position. It could have a control character somewhere before the A that adds the accent, or it could use a non-standard codepage.
I don't see any use in knowing the "closest codepage"; you just need to use the docs you got with the device.
Your last sentence is puzzling: what do you mean by "possible to look up what code page one is looking at"?
If you include your whole codepage, people here on SO could be more helpful and give you more insight about this issue, having one data point 0x41=Á doesn't help much.
Somewhat random idea, but if you can replicate a significant amount of the text off the device, you could try running it through something like the detect function in http://chardet.feedparser.org/.
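For reference, a minimal sketch of that approach (assuming the third-party chardet package is installed and that you have captured a reasonably large sample of bytes from the device; the file name is hypothetical):

    import chardet

    with open("dump_from_device.bin", "rb") as f:   # hypothetical capture
        sample = f.read()
    print(chardet.detect(sample))   # e.g. {'encoding': ..., 'confidence': ..., 'language': ...}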