What's the purpose of the noncharacters U+FDD0 to U+FDEF? - unicode

U+FFFE needs to be a noncharacter in order to allow the Byte Order Mark to work.
U+FFFF is described in The Unicode Standard as "useful for internal purposes as sentinels". Makes sense.
But I can't figure out, and The Unicode Standard doesn't really explain, why the set of noncharacters includes some random block within "Arabic Presentation Forms-A". What are these for? (Besides the eye of the basilisk?)

OK, the question is "what are they for?" and "why are they in the middle of the Arabic Presentation Forms?".
There was a need for a block of 32 non-characters "to make additional codes available to programmers to use for internal processing purposes" http://scripts.sil.org/cms/scripts/page.php?site_id=nrsi&id=IWS-Chapter04a#4d3110c8
It was required that the block be in the Basic Multilingual Plane (BMP), i.e. 0x0000 to 0xFFFF, so that each of these code points could be represented by a single code unit in UTF-16.
There was a block of unused codepoints in the Arabic Presentation Forms block.
It had been agreed not to encode any more Arabic Presentation Forms, so these were never going to be used.
http://www.unicode.org/mail-arch/unicode-ml/y2001-m10/0014.html
Therefore it was agreed that these codepoints, which were never going to be used otherwise, would be designated noncharacters so they could be used internally by applications/programmers.

These noncharacters are for internal use by applications and should not be interchanged.
I'll try to explain based on what is said in the Unicode Standard.
Unicode has 66 noncharacters. Each of the 17 planes contributes two: the last two code points of the plane, ending in FFFE and FFFF. The other 32 noncharacters form the contiguous block U+FDD0 to U+FDEF.
So total count
17*2 + 32 = 66
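A quick Python sketch (nothing official, just enumerating the ranges described above) confirms the count:

# Enumerate the 66 Unicode noncharacters:
# the 32 contiguous ones in the BMP, plus the last two code points of each of the 17 planes.
contiguous = list(range(0xFDD0, 0xFDF0))                 # U+FDD0..U+FDEF
plane_ends = [p * 0x10000 + o for p in range(17)         # planes 0..16
                              for o in (0xFFFE, 0xFFFF)]

noncharacters = contiguous + plane_ends
assert len(noncharacters) == 17 * 2 + 32 == 66

for cp in noncharacters:
    print(f"U+{cp:04X}")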
Read the following text from Unicode chapter 16, which says that the block sits in that seemingly random place for "historical reasons". I'm curious too, but I don't think there is any ambiguity.
For historical reasons, the range U+FDD0..U+FDEF is contained within the Arabic Presentation Forms-A block, but those noncharacters are not
"Arabic noncharacters" or "right-to-left noncharacters," and are not distinguished in any
other way from the other noncharacters, except in their code point values
U+FEFF is the BOM and U+FFFE is its byte-swapped counterpart. Because U+FFFE is a noncharacter, when an interpreting process finds U+FFFE as the first character, it signals either that the process has encountered text in the wrong byte order or that the file is not valid Unicode text. It is only a signal, not a prescribed recovery: the process may either swap the bytes or treat the text as invalid.
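For illustration, here is a minimal sketch (not a full decoder; the function name is mine) of how a process might use that first code unit:

import codecs

def sniff_utf16_byte_order(data: bytes) -> str:
    """Guess the byte order of UTF-16 text from its first code unit (illustration only)."""
    if len(data) < 2:
        return "too short to tell"
    first = int.from_bytes(data[:2], "big")
    if first == 0xFEFF:
        return "big-endian (BOM found)"
    if first == 0xFFFE:
        # U+FFFE is a noncharacter, so this is either byte-swapped text or not valid Unicode.
        return "probably little-endian (byte-swapped BOM), or not valid Unicode text"
    return "no BOM; the byte order must be known from elsewhere"

data = codecs.BOM_UTF16_LE + "hello".encode("utf-16-le")   # bytes FF FE 68 00 65 00 ...
print(sniff_utf16_byte_order(data))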
In the Unicode section 3.2 clause C2 says
C2 A process shall not interpret a noncharacter code point as an abstract character.
The noncharacter code points may be used internally, such as for sentinel values or delimiters, but should not be exchanged publicly.
So as application developers you are free to use these code points as you wish. They can serve as sentinels, delimiters, or maybe basilisk characters, but they should not be interchanged.
Section 16.7 says
In effect, noncharacters can be thought of as application-internal private-use code points.
Unlike the private-use characters discussed in Section 16.5, Private-Use Characters, which
are assigned characters and which are intended for use in open interchange, subject to
interpretation by private agreement, noncharacters are permanently reserved (unassigned)
and have no interpretation whatsoever outside of their possible application-internal private uses
Again, U+FFFF is not reserved as a sentinel by the Unicode Standard; that is just given as its typical use case. Read in Section 16.7:
U+FFFF and U+10FFFF. These two noncharacter code points have the attribute of being
associated with the largest code unit values for particular Unicode encoding forms. In
UTF-16, U+FFFF is associated with the largest 16-bit code unit value, FFFF₁₆.
U+10FFFF is associated with the largest legal UTF-32 32-bit code unit value, 10FFFF₁₆.
This attribute renders these two noncharacter code points useful for internal purposes as sentinels. For
example, they might be used to indicate the end of a list, to represent a value in an index
guaranteed to be higher than any valid character value, and so on
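As a toy illustration of that sentinel idea (purely internal use, never to be written out), a sketch:

SENTINEL = "\uFFFF"   # noncharacter: safe as an internal end-of-data marker,
                      # because it can never occur in valid interchanged text

def read_until_sentinel(buffer: str) -> str:
    """Collect characters up to the internal U+FFFF terminator (illustration only)."""
    out = []
    for ch in buffer:
        if ch == SENTINEL:
            break
        out.append(ch)
    return "".join(out)

print(read_until_sentinel("abc" + SENTINEL + "junk"))   # -> abc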

As mentioned here at xkcd, U+FDD0 is actually the Unicode character for the eye of a basilisk. For (obvious) reasons of personal safety however, the character is not rendered to the screen... :)

Are control sequences the same number in every encoding?

I am writing a parser, and the original spec states:
The file header ends with the control sequence Ctrl-Z
They do not specify which encoding the header is written in (it could be latin1, utf8, windows-1252, ...), so I wonder whether the sequence is the same number in every encoding.
It appears to be the case that it always corresponds to decimal 26 or hex 1A.
It would be good to know, in a more general way, whether this holds for all control sequences.
Most likely, ASCII is assumed. In ASCII, "Ctrl-Z" corresponds to code point dec 26 / hex 1A, which is the SUB (substitute) control character.
The extended character sets/encodings don't change this, because dec 26 lies within the lower 7 bits of the byte (ASCII defines dec 0-127 of the 256 possible byte values). The 8th bit was later used to gain the other half, another 128 code points at dec 128-255. The idea is that the extended character sets usually retain the lower ASCII code points/mappings (also for backward compatibility) but introduce their own special characters in the higher range 128-255. There are many different sets of this type, each trying to support more of the world's writing scripts with its own custom extension: Windows-1252 (a Western European mix), ISO-8859-1 (Latin-1 for Western European languages), ISO-8859-15 (nearly identical, mainly adding the Euro currency symbol), code page 437 (IBM DOS, with box-drawing and other shapes for drawing a TUI on the console; this one, for example, even maps graphics onto positions that ASCII uses for control codes), and so on. The problem, obviously, is that there are a lot of these:
each only gains 128 more characters
you can't combine or apply two of them at the same time (if characters from multiple different code sets are needed)
each application has to know (or be told) beforehand which code set a file was saved in, in order to map the byte patterns to the correct characters/symbols on screen. If a user or tool saves the file under the wrong code set without recognizing that the source was created with a different one, some bytes end up stored under the assumption of code set A and some under code set B. Because these flat plain-text files on old, memory-short DOS file systems had no mechanism to record which part of a file belongs to which code set, the characters can never all be rendered correctly, and it can be difficult or impossible to guess and repair afterwards what the intended interpretation of each byte pattern was
there is no hope of getting anywhere with only 128 more characters added on top of ASCII when it comes to Chinese etc.
So the improvement was to stop using the 8th bit for these code pages and instead use it as a marker: if it is set, it indicates that another byte follows (UTF-8), hence expanding the number of code points greatly. This can even be repeated in the subsequent bytes. But it's optional: if the character is within the 7-bit ASCII code points, UTF-8 does not need to set the 8th bit or add another byte.
This also means the extended code pages and UTF-8 cannot be mixed (used/applied at the same time). For most code pages, and for UTF-8 as well, the byte values in the 7-bit range map to characters exactly as in ASCII. So if your characters are within the lower 7 bits of the byte, it does not matter which of these encodings is theoretically in use, as the 8th bit is not used by any of the code pages or by UTF-8 for them. The encoding only matters for characters whose bytes do have the 8th bit set; when such bytes occur, the choice of encoding usually applies to the entire file (even though some bytes may still stay within single-byte ASCII), and you should take great care when inserting or interpreting byte patterns that have the 8th bit set.
The easy rule is: if all the bytes (or the byte in question) don't have the 8th bit set, it only matters whether the encoding of the lower 7 bits is ASCII or not. EBCDIC, for example, is a non-ASCII alternative, where dec 26 / hex 1A is UBS (unit backspace); it does have a SUB (substitute) as well, but at code point dec 63 / hex 3F. Other encodings may not have ASCII's SUB at all, or may have something similar with a slightly different meaning/use, or maybe ASCII got its SUB from EBCDIC, etc. But there's no need to worry about UTF-8 if ASCII can be assumed, because the characters of ASCII are encoded identically in UTF-8: a single byte with the highest bit not set.
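To make that concrete, a small Python check (cp500 is used here as one example EBCDIC codec):

raw = b"\x1a"   # the byte the spec calls Ctrl-Z

# ASCII-compatible encodings all agree: byte 0x1A decodes to U+001A (SUB).
for enc in ("ascii", "latin-1", "cp1252", "utf-8"):
    assert raw.decode(enc) == "\x1a", enc

# EBCDIC is not ASCII-based; as described above, SUB lives at byte 0x3F there instead.
print(hex(ord(b"\x3f".decode("cp500"))))   # expected 0x1a, i.e. SUB
print(hex(ord(raw.decode("cp500"))))       # something other than 0x1a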
Maybe it can be determined from the spec whether all the characters it mentions are within the ASCII range and follow the ASCII code point definitions, whether there are characters that would only be found in UTF-8 (or UTF-16/UTF-32) or in one of the old extended code pages (but not in others), or whether there's any indication that the encoding might not be ASCII-based.
It's obviously problematic if a spec doesn't explicitly state the encoding it is implicitly assuming, when the spec describes a format, protocol or data representation. On the other hand, maybe the "Ctrl-Z" wording is misleading, because dec 26 / hex 1A is the same byte value no matter which text encoding it would be interpreted under. Maybe the spec just uses this byte pattern as a marker with no meaning in terms of character display at all, and introduces only its own particular local meaning as defined within the spec.

How do you determine the byte width of a UTF-16 character?

What are the rules for reading a UTF-16 byte stream to determine how many bytes a character takes up? I've read the standards, but based on empirical observations of real-world UTF-16 encoded streams, it looks like there are certain cases where the standards don't hold true (or there's an aspect of the standard that I'm missing).
From reading the UTF-16 standard https://www.rfc-editor.org/rfc/rfc2781:
Value of leading 2 bytes | Resulting character length (bytes)
0x0000-0xD7FF | 2
0xD800-0xDBFF | 4
0xDC00-0xDFFF | Invalid sequence (RFC 2781 2.2.2)
0xE000-0xFFFF | 2
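In code, I'd sketch that rule roughly like this (Python; the function name is just illustrative):

def utf16_units_for_lead(lead: int) -> int:
    """How many 16-bit code units a UTF-16 sequence starting with `lead` occupies."""
    if 0xD800 <= lead <= 0xDBFF:
        return 2            # high surrogate: a low surrogate must follow (4 bytes total)
    if 0xDC00 <= lead <= 0xDFFF:
        raise ValueError("unpaired low surrogate is not a valid lead unit")
    return 1                # everything else is a single 16-bit unit (2 bytes)

print(utf16_units_for_lead(0x0041))   # 1 -> 'A'
print(utf16_units_for_lead(0xD83D))   # 2 -> lead unit of an astral-plane character
print(utf16_units_for_lead(0x2695))   # 1 -> STAFF OF AESCULAPIUS fits in one unit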
In practice, this appears to hold true, for some cases at least. Using an ad-hoc SQL script (SQL Server 2019; UTF-16 collation), but also verified with an online decoder:
Character | Unicode Name | ISO 10646 | UTF-16 Encoding (hexadecimal, big endian) | Size (bytes)
A | LATIN CAPITAL LETTER A | U+0041 | 00 41 | 2
Б | CYRILLIC CAPITAL LETTER BE | U+0411 | 04 11 | 2
ァ | KATAKANA LETTER SMALL A | U+30A1 | 30 A1 | 2
🐰 | RABBIT FACE | U+1F430 | D8 3D DC 30 | 4
However when encoding the following ISO 10646 character into UTF-16, it appears to be 4 bytes, but reading the leading 2 bytes appears to give no indication that it will be this long:
Character | Unicode Name | UTF-16 Encoding (hexadecimal, big endian) | Size (bytes)
⚕️ | STAFF OF AESCULAPIUS | 26 95 FE 0F | 4
Whilst I'd rather keep my question software-agnostic, the following SQL will reproduce this behaviour on Microsoft SQL Server 2019, with default collation and default language. (Note that SQL Server is little endian.)
select cast(N'⚕️' as varbinary);
----------
0x95260FFE
Quite simply, how/why do you read 0x2695 and think "I'll need to read in the next word for this character."? Why doesn't this appear to align with the published UTF-16 standard?
The formal definition of all of this is called an "extended grapheme cluster," and it's defined in the Unicode Text Segmentation report. As Joachim Sauer notes, it's wise to be careful with the term "character" in Unicode.
Code points are what the "U+...." syntax refers to; a code point attempts to capture a "unit" of written language, for example "an acute accent." But what a reader would think of as a character (for example "an e with an acute accent") is a "grapheme cluster," which is made up of one or more code points. What is ultimately rendered to the screen is a "glyph," which is both context- and font-dependent.
Grapheme clusters in Unicode are actually more subtle than this. Unicode attempts to define them in a "neutral" way. (There's really no such thing as "neutral" when thinking about languages, but Unicode does try.) For example, in Slovak, ch, dz, and dž are each one letter, but are considered two grapheme clusters in Unicode. (Try to count the "letters" in a Slovak word. There are words that contain the letter dz and other words that have the letter d followed by the letter z. Oh human writing systems. I love you so much.)
The mapping of grapheme clusters to glyphs is also complex. For example, in Arabic, the single glyph لا is actually two grapheme clusters, ل (ARABIC LETTER LAM) followed by ا (ARABIC LETTER ALEF). If you use your mouse to select the glyph, you'll see there are two selectable pieces, and if you copy and paste them to another window you'll see them transform into their component parts. (Just to make things even more complicated, Unicode also defines a single code point for the ligature, ARABIC LIGATURE LAM WITH ALEF ISOLATED FORM: ﻻ. If you try to select part of that one, you'll find you can't. It's one "character.")
Your specific case is a bit more special. The Variation Selector predates emoji, and was mostly designed to handle different variations of Han (Chinese) characters. However, as with every Unicode feature, it eventually has come to be used primarily for emoji. VS-16 is the "emoji" presentation form. The most famous example is the red heart, which is HEAVY BLACK HEART ❤, followed by VS-16: ❤️.
Similarly, your character U+2695 STAFF OF AESCULAPIUS is a single code point, and it looks like this by default (text style): ⚕. When you add VS-16, it is rendered in "emoji style": ⚕️. In some ways it's the same "character." Or is it? Depends on what you're using it for.
Emoji style is typically a bit larger and centered in its block, sometimes adding color. Notice where the period after the staff is drawn in each case (there are no extra spaces in the second example; the glyph is just much wider).
There are other combining systems as well:
U+0031: 1
U+0031 U+20e3: 1⃣ (+ COMBINING ENCLOSING KEYCAP, default text style)
U+0031 U+20e3 U+fe0f: 1⃣️ (+ VARIATION SELECTOR-16, emoji style)
All of these predate emoji. Modern emoji is dramatically more complicated, and includes several combining systems of its own (including two that are currently just used for flags).
But luckily, to your actual question, your wife is correct, and you can generally just consume all trailing code points that are marked "combining" to form an extended grapheme cluster, and that is kind of a "character" for some broad enough definition of "character."
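If it helps, here is a rough Python sketch of that "consume trailing combining code points" idea, using the standard unicodedata module; real extended grapheme cluster segmentation follows UAX #29 and has more rules than this:

import unicodedata

def rough_clusters(text):
    """Naive grouping: a base character plus any trailing combining marks.
    Variation selectors such as U+FE0F are nonspacing marks, so they fold in too.
    This only approximates extended grapheme clusters; the real rules are in UAX #29."""
    clusters = []
    for ch in text:
        if clusters and unicodedata.category(ch).startswith("M"):
            clusters[-1] += ch
        else:
            clusters.append(ch)
    return clusters

print(rough_clusters("e\u0301 \u2695\uFE0F"))   # roughly: ['é', ' ', '⚕️']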
All of your assertions are completely correct; your interpretation of the UTF-16 standards is correct and complete.
In your empirical observations, however, you've assumed that you only have one character. In actuality, you've run into a nuance of the Unicode implementation. Your "character" is actually two (albeit technically, not visually): U+2695 "STAFF OF AESCULAPIUS" followed by U+FE0F "VARIATION SELECTOR-16". The second character is a non-spacing mark which combines with the base character for the purpose of rendering a character variant.
This results in the byte sequence 26 95 FE 0F; however, as you note, neither of the words falls within the UTF-16 surrogate (extension) range. That is because neither of them requires the UTF-16 four-byte extension. They're simply classified as two discrete Unicode characters.
From 7.9 Combining Marks in ISO 10646: Universal Coded Character Set (UCS):
Combining marks are a special class of characters in the Unicode Standard that are
intended to combine with a preceding character, called their base.
Combining marks usually have a visible glyphic form... a combining mark may interact graphically with neighbouring characters in various ways.
http://unicode.org/L2/L2010/10038-fcd10646-main.pdf
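As a quick sanity check, a Python sketch (not from the standard) confirms the split and the byte sequence:

import unicodedata

s = "\u2695\uFE0F"                    # the "single" glyph is two code points
for ch in s:
    print(f"U+{ord(ch):04X}", unicodedata.name(ch, "<unnamed>"))
# U+2695 STAFF OF AESCULAPIUS
# U+FE0F VARIATION SELECTOR-16

print(s.encode("utf-16-be").hex())    # 2695fe0f -- two 2-byte units, no surrogates
print(s.encode("utf-16-le").hex())    # 95260ffe -- the little-endian order SQL Server stores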
To explain why I'm answering my own question; I had my SO question all ready to fire off. My wife came into my office; after looking over my shoulder she whispered into my ear, "You know combination characters are a thing, right?". I've however still asked the question and answered it myself, in case my wife's sweet nothings help another member of the community.

Why can't we store Unicode directly?

I read some articles about Unicode and UTF-8.
The Unicode standard describes how characters are represented by code points. A code point is an integer value, usually denoted in base 16. In the standard, a code point is written using the notation U+12CA to mean the character with value 0x12ca (4,810 decimal). The Unicode standard contains a lot of tables listing characters and their corresponding code points:
Strictly, these definitions imply that it’s meaningless to say ‘this is character U+12CA’. U+12CA is a code point, which represents some particular character; in this case, it represents the character ‘ETHIOPIC SYLLABLE WI’. In informal contexts, this distinction between code points and characters will sometimes be forgotten.
To summarize the previous section: a Unicode string is a sequence of code points, which are numbers from 0 through 0x10FFFF (1,114,111 decimal). This sequence needs to be represented as a set of bytes (meaning, values from 0 through 255) in memory. The rules for translating a Unicode string into a sequence of bytes are called an encoding.
I wonder why we have to encode U+12CA to UTF-8 or UTF-16 instead of saving the binary of 12CA in the disk directly. I think the reason is:
Unicode is not a self-synchronizing code, so if
10 represents A
110 represents B
10110 represents C
then when I see 10110 on the disk, we can't tell whether it's A followed by B or just C.
Unicode stored directly would use much more space than UTF-8 or UTF-16.
Am I right?
Read about Unicode, UTF-8 and the UTF-8 everywhere website.
There are more than a million Unicode code points (you mentioned 1,114,111...). So you need at least 21 bits to be able to separate all of them (since 2^21 > 1,114,111).
So you can store Unicode characters directly, if you represent each of them by a wide enough integral type. In practice, that type would be a 32-bit integer (because it is not convenient to handle 3-byte, i.e. 24-bit, integers). This is called UCS-4, and some systems or software already handle their Unicode strings in such a format.
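For example, in Python that fixed-width "direct" form is just UTF-32/UCS-4, four bytes per code point; a quick sketch:

s = "AЖ€😀"                                   # code points of very different UTF-8 sizes
raw = s.encode("utf-32-be")                   # fixed 4 bytes per code point (UCS-4 / UTF-32)
print(len(s), "code points ->", len(raw), "bytes")    # 4 code points -> 16 bytes
print(len(s.encode("utf-8")), "bytes in UTF-8")       # 1 + 2 + 3 + 4 = 10 bytes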
Notice also that displaying Unicode strings is quite difficult, because of the variety of human languages (and also because Unicode has combining characters). Some need to be displayed right to left (Arabic, Hebrew, ...), others left to right (English, French, Spanish, German, Russian, ...), and some top to bottom (Chinese, ...). A library displaying Unicode strings should be capable of displaying a string containing English, Chinese and Arabic words... Then you see that decoding UTF-8 is the easy part of displaying Unicode strings (and storing UCS-4 strings won't help much).
But since English is the dominant language in IT (for economic reasons), it is very often cheaper to keep strings in UTF-8 form. If most of the strings handled by your system are English (or in some other European language using the Latin alphabet), it is cheaper and takes less space to keep them in UTF-8.
I guess that when China becomes a dominant power in IT, things might change (or maybe not).
(I have no idea of the most common encoding used today on Chinese supercomputers or smartphones; I guess it is still UTF-8)
In practice, use a library (perhaps libunistring or Glib in C), to process UTF-8 strings and another one (e.g. pango and GTK in C) to display them. You'll find many Unicode related libraries in various programming languages.
I wonder why we have to encode U+12CA to UTF-8 or UTF-16 instead of saving the binary of 12CA in the disk directly.
How do you write 12CA to a disk directly? It is a bigger value than a byte can hold, so you need to write at least two bytes. Do you write 12 followed by CA? You just encoded it in UTF-16BE. That's what an encoding is...a definition of how to write an abstract number as bytes.
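You can see this directly in Python: writing the number 0x12CA as two big-endian bytes is literally the UTF-16BE encoding:

ch = "\u12CA"                                # ETHIOPIC SYLLABLE WI
print(ch.encode("utf-16-be").hex())          # 12ca -- "12 followed by CA"
print((0x12CA).to_bytes(2, "big").hex())     # 12ca -- the same bytes, written by hand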
Other reading:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)
Pragmatic Unicode
For good and specific reasons, Unicode doesn't specify any particular encoding. If it makes sense for your scenario, you can specify your own.
Because Unicode doesn't specify any serialization, there is no way to "directly" store Unicode, just like you can't "directly" store a mathematical number or a flow chart to implement a program you designed. The question isn't really well-defined.
There are a number of existing serialization formats (encodings) so it is very likely that it makes the most sense to use an existing one unless your requirements are significantly different than what any existing encoding provides; even then, is it really worth the cost?
A stream of bits is just a stream of bits. Conventionally, we chop them up into groups of 8 and call that a "byte" and the latter half of your question is really "if it's not a byte, how can you tell which bits belong to which symbol?" There are many ways to do that, but the common ones generally define a sequence of some particular length (8, 16, and 32 are often convenient for reasons of compatibility with bus width on modern computers etc) but again, if you really wanted to, you could come up with something different. Huffman trees come to mind as one way to implement a way to communicate a structure of variable length (and is used for precisely that in many compression algorithms).
Consider one situation: even if you could directly save the Unicode binary to disk and close the file, what happens when you open the file again? It's just a bunch of binary, and you don't know how many bytes one character occupies. Say '🥶' (U+1F976) and 'A' (U+0041) are the content of your file, stored as the raw code point bytes 01 F9 76 41. A reader has no way to know whether to take one, two, or more bytes per character: reading two bytes at a time, for example, gives the two characters U+01F9 and U+7641 instead of the one emoji plus 'A' you saved.
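A small sketch of that ambiguity, assuming a hypothetical "raw code point bytes" format with no length convention:

# Hypothetical "raw" storage: the code point bytes of U+1F976 followed by 'A' (U+0041).
raw = bytes([0x01, 0xF9, 0x76, 0x41])

# Reader A assumes 4 bytes per character:
print(hex(int.from_bytes(raw, "big")))                             # 0x1f97641 -- one bogus "character"

# Reader B assumes 2 bytes per character:
print([hex(int.from_bytes(raw[i:i+2], "big")) for i in (0, 2)])    # ['0x1f9', '0x7641']

# Reader C assumes 1 byte per character:
print([hex(b) for b in raw])                                       # ['0x1', '0xf9', '0x76', '0x41']
# Without an agreed encoding, none of these readers can know which split was intended.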

What is the meaning of the indicator XXX in the Unicode charts

Consider the Unicode chart for C1 Controls and Latin-1 Supplement in the Unicode Charts. If a character has a glyph, it is shown; if it does not have a glyph, a special dotted box with a symbolic marker or identifier is given. In this case, both 0080 and 0081 seem to have some "invalid marker", which I think is what "XXX" means. Is that what it means?
Secondly, what should be the behaviour of a Unicode-aware string type that has a value of 0x80 (hex) or 128 (decimal) stored into the string? Should it be converted to some other code point, with a mapping like this:
Byte Value 128 in many ANSI Codepages is the EURO marker.
Storing a 128 decimal value is equivalent to storing U+20AC ?
The magic "non orthogonality" I have encountered in a particular language or operating system API implementation of its MBCS and Unicode types, and Java's interesting handling, leads me to wonder, what is the real intended use of the U+0080 character? This reference link confuses me by showing that Java treats this character as a Euro symbol (ANSI codepage to Unicode one way friendliness) but that it's name is <control> which is not anything I know how to deal with. Wikipedia says it's PAD here
Can anyone help me? Did I skip a foundational concepts day at Unicode School? What am I missing?
Update: The block from 0080 to 0098 consists of non-printable control characters. This much I know. What I wonder is: what does the XXX mean, and how am I to think of this character when I am processing Unicode data with this value in it?
According to the explanation in Ch. 17 (About the Code Charts) of the Unicode Standard, p. 573, by the “Dashed Box Convention”, characters that have no visible rendering as such “are represented by a square dashed box. This box surrounds a short mnemonic abbreviation of the character’s name.” The characters referred to in the questions are control characters, in the C1 Controls area.
The Unicode Standard says, in Ch. 16, p. 544, about C0 and C1 Controls: “The Unicode Standard provides for the intact interchange of these code points, neither adding to nor subtracting from their semantics. The semantics of the control codes are generally determined by the application with which they are used. However, in the absence of specific application uses, they may be interpreted according to the control function semantics specified in ISO/IEC 6429:1992.” And the abbreviations in the square dashed boxes reflect the meanings given in ISO/IEC 6429:1992.
Some code points in the C1 Controls area are not defined in ISO/IEC 6429:1992. For them, such as U+0080, the code chart has “XXX” in place of a mnemonic abbreviation. So this indicates that the Unicode standard does not refer to any meaning for those code points, beyond their being control characters with some abstract properties.
Thus, “XXX” does not mean “invalid”, but rather “completely undefined meaning”. The meaning of such code points can be defined by various standards or other conventions, as long as they are consistent with the general definitions—e.g., it would be incompatible to define U+0080 as a graphic character.
Such code points must not be replaced or omitted in any character-level processing; applications that actually change data may do whatever they want, but any general conversion routines, for example, must keep these code points (characters) intact. They must not be treated as malformed or invalid; but an application may treat them as undefined. By Unicode principles, it’s OK to be ignorant of a character, but not completely wrong about it.
This has nothing to do with the meaning of bytes like 0x80 in 8-bit codes like Windows-1252. But if you send e.g. data labeled as ISO-8859-1 encoded (where e.g. 0x80 is in principle U+0080) to a web browser, it will actually treat it as Windows-1252 encoded. The reason is that characters like U+0080 are practically never used in ISO-8859-1 data; occurrence of 0x80 in ISO-8859-1 labeled data is virtually always either windows-1252 mislabeled or messed-up data that cannot be meaningfully processed. So browsers take the practical route and treat ISO-8859-1 as windows-1252; this is being formalized in HTML5 and related specifications.
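The difference is easy to observe, for example in Python (codec names as in its standard library):

b = b"\x80"
print(hex(ord(b.decode("latin-1"))))    # 0x80   -> U+0080, a C1 control (<control>/PAD)
print(hex(ord(b.decode("cp1252"))))     # 0x20ac -> U+20AC, the euro sign €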

XML and Unicode specifications: what’s a legal character?

My manager asked me to explain why I called jdom’s checkCharacterData before passing my string to an XMLStreamWriter, so I referred to the XML spec and then got confused.
XML 1.0 and XML 1.1 say that a valid XML character is “tab, carriage return, line feed, and the legal characters of Unicode and ISO/IEC 10646.” That sounds stupid: tab, carriage return, and line feed are legal characters of Unicode. Then there’s the comment “any Unicode character, excluding the surrogate blocks, FFFE, and FFFF,” which was modified in XML 1.1 to refer to U+0000 – U+10FFFF excluding U+0000, U+D800 – U+DFFF, and U+FFFE – U+FFFF; note that NUL is excluded. Then there’s the Note that says authors are “discouraged” from using the compatibility characters including some characters that are already excluded by the BNF.
Question: What is/was a legal Unicode character? Is NUL a valid Unicode character? (I found a pdf of ISO 10646 (2nd edition, 2010) which doesn’t seem to exclude U+0000.) Did ISO 10646 or Unicode change between the 2000 edition and the 2010 edition to include control characters that were previously excluded? And as for XML, is there a reason that the text is so lenient/sloppy while the BNF is strict?
Question: What is/was a legal Unicode character?
The Unicode Glossary defines it thus:
Character. (1) The smallest component of written language that has semantic value; refers to the abstract meaning and/or shape, rather than a specific shape (see also glyph), though in code tables some form of visual representation is essential for the reader’s understanding. (2) Synonym for abstract character. (3) The basic unit of encoding for the Unicode character encoding. (4) The English name for the ideographic written elements of Chinese origin. [See ideograph (2).]
Is NUL a valid Unicode character? (I found a pdf of ISO 10646 (2nd edition, 2010) which doesn’t seem to exclude U+0000.)
NUL is a codepoint, and it falls under the definition of "abstract character" so it is a character by sense 2 above.
Did ISO 10646 or Unicode change between the 2000 edition and the 2010 edition to include control characters that were previously excluded?
NUL has been a control character from early versions.
Appendix D contains a list of changes.
It says in table D.2 that there have been 65 control characters from Version 1 through Version 3 without change.
Table D-2 documents the number of characters assigned in the different versions of the Unicode standard.
V1.0 V1.1 V2.0 V2.1 V3.0
...
Controls 65 65 65 65 65
And as for XML, is there a reason that the text is so lenient/sloppy while the BNF is strict?
Writing specifications that are both complete and succinct is hard. When the text disagrees with the BNF, trust the BNF.
The use of the word “character” is intentionally fuzzy in the Unicode standard, but mostly it is used in a technical sense: a code point designated as an assigned character code point. This does not completely coincide with the intuitive concept of character. For example, the intuitive character that consists of letter i with macron and grave accent does not exist as a code point; in Unicode, it can only be represented as a sequence of two or three code points. As another example, the so-called control characters are not characters in the intuitive sense.
When other standards and specifications refer to “Unicode characters,” they refer to code points designated as assigned character code points. The set of Unicode characters varies by Unicode standard version, since new code points are assigned. Technically, the UnicodeData.txt file (at ftp://ftp.unicode.org/Public/UNIDATA/) indicates which code points are characters.
U+0000, conventionally denoted by NUL, has been a Unicode character since the beginning.
The XML specifications are inexact in many ways as regards to characters, as you have observed. But the essential definition is the BNF production for “Char” and the statement “XML processors MUST accept any character in the range specified for Char.” This means that in XML specifications, the concept of character is broader than Unicode character. The ranges in the production contain unassigned code points, actually a huge number of them.
The comment to the “Char” production in XML specifications is best ignored. It is very confusing and even incorrect. The “Char” production simply refers to a set of Unicode code points (different sets in different versions of XML). The set includes code points that you should never use in character data, as well as code points that should be avoided for various reasons. But such rules are at a level different from the formal rules of XML and requirements on XML implementations.
When selecting or writing a routine for checking character data, it depends on the application and purpose what should be accepted and what should be done with code points that fail the test. Even surrogate code points might be processed in some way instead of being just discarded; they may well appear due to confusions with encodings (or e.g. when a Java string has been naively taken as a string of Unicode characters – it is as such just a sequence of 16-bit code units).
I would ignore the verbiage and just focus on the definitions:
XML 1.0:
Char ::= #x9 | #xA | #xD | [#x20-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
Document authors are encouraged to avoid "compatibility characters", as defined in section 2.3 of [Unicode]. The characters defined in the following ranges are also discouraged. They are either control characters or permanently undefined Unicode characters:
[#x7F-#x84], [#x86-#x9F], [#xFDD0-#xFDEF],
[#x1FFFE-#x1FFFF], [#x2FFFE-#x2FFFF], [#x3FFFE-#x3FFFF],
[#x4FFFE-#x4FFFF], [#x5FFFE-#x5FFFF], [#x6FFFE-#x6FFFF],
[#x7FFFE-#x7FFFF], [#x8FFFE-#x8FFFF], [#x9FFFE-#x9FFFF],
[#xAFFFE-#xAFFFF], [#xBFFFE-#xBFFFF], [#xCFFFE-#xCFFFF],
[#xDFFFE-#xDFFFF], [#xEFFFE-#xEFFFF], [#xFFFFE-#xFFFFF],
[#x10FFFE-#x10FFFF].
XML 1.1:
Char ::= [#x1-#xD7FF] | [#xE000-#xFFFD] | [#x10000-#x10FFFF]
RestrictedChar ::= [#x1-#x8] | [#xB-#xC] | [#xE-#x1F] | [#x7F-#x84] | [#x86-#x9F]
Document authors are encouraged to avoid "compatibility characters", as defined in Unicode [Unicode]. The characters defined in the following ranges are also discouraged. They are either control characters or permanently undefined Unicode characters:
[#x1-#x8], [#xB-#xC], [#xE-#x1F], [#x7F-#x84], [#x86-#x9F], [#xFDD0-#xFDDF],
[#x1FFFE-#x1FFFF], [#x2FFFE-#x2FFFF], [#x3FFFE-#x3FFFF],
[#x4FFFE-#x4FFFF], [#x5FFFE-#x5FFFF], [#x6FFFE-#x6FFFF],
[#x7FFFE-#x7FFFF], [#x8FFFE-#x8FFFF], [#x9FFFE-#x9FFFF],
[#xAFFFE-#xAFFFF], [#xBFFFE-#xBFFFF], [#xCFFFE-#xCFFFF],
[#xDFFFE-#xDFFFF], [#xEFFFE-#xEFFFF], [#xFFFFE-#xFFFFF],
[#x10FFFE-#x10FFFF].
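A direct transcription of the two Char productions into Python, as a sketch (function names are mine):

def is_xml10_char(cp: int) -> bool:
    """Char production of XML 1.0."""
    return (cp in (0x9, 0xA, 0xD)
            or 0x20 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)

def is_xml11_char(cp: int) -> bool:
    """Char production of XML 1.1 (note: U+0000 is still excluded)."""
    return (0x1 <= cp <= 0xD7FF
            or 0xE000 <= cp <= 0xFFFD
            or 0x10000 <= cp <= 0x10FFFF)

print(is_xml10_char(0x0000))   # False in both versions
print(is_xml10_char(0x0001))   # False in 1.0 ...
print(is_xml11_char(0x0001))   # ... but True in 1.1 (though a RestrictedChar)
print(is_xml10_char(0xFDD0))   # True: a discouraged noncharacter, but allowed by the BNF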
It sounds stupid because it is stupid. The First Edition of XML (1998) read "the legal graphic characters of Unicode." For whatever reason, the word "graphic" was removed from the Second Edition of 2000, perhaps because it is inaccurate: XML allows many characters that are not graphic characters.
The definition in the Char production is indeed the right place to look.