Why does Unicode have multiple planes?

At the moment, Unicode has 17 planes.
But why are there 17? Why aren't they all just included in one plane?
Why are the 4th to 13th planes unoccupied? I expected they would be contiguous.

It looks like initially there was only one plane. But as Unicode has gone from its first version to Unicode 13.0.0 as of March 2020, more planes have come into use over time.
It looks like they aren't filled contiguously because the first planes are used for encoding written languages, while the final planes contain only non-graphical characters. The "gap" is left for any new text that needs to be added to Unicode. Quote:
It is not anticipated that all these planes will be used in the foreseeable future, given the total sizes of the known writing systems left to be encoded. The number of possible symbol characters that could arise outside of the context of writing systems is potentially huge. At the moment, these 11 planes out of 17 are unused.
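Structurally, a plane is just the high-order bits of a code point: the code space 0x000000..0x10FFFF divides into 17 blocks of 65,536 code points each. A tiny Java sketch of that arithmetic (the helper name is just for illustration):

    static int plane(int codepoint) {
        // 0 = Basic Multilingual Plane, 1..16 = supplementary planes;
        // valid code points never exceed 0x10FFFF, so this is at most 16.
        return codepoint >>> 16;
    }

For example, plane(0x1F600) is 1, so the emoji U+1F600 sits on Plane 1, the Supplementary Multilingual Plane.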


Are characters made up of pixels?

On a computer screen, are the characters made up of pixels? If so, it means that characters are images!
And if characters are made up of pixels, then why are there ASCII, Unicode and other standards that associate binary digits with different characters, but no standards that associate binary digits with image formats? If both characters and images are made up of pixels, what is the difference between them?
No, #1: Characters are not "on a computer screen". What goes on the screen is the result of all kinds of rendering, painting, and combining onto a 2-D grid of pixels.
No, #2: Unicode characters are independent of the specific fonts used to present them graphically. So with one font a character will end up producing certain pixels, and with another font, other pixels altogether.
No, #3: Character strings are held in your computer's memory as sequences of bytes, i.e. numeric values (with each character typically occupying one byte, two bytes, or a variable number of bytes).
On a computer screen, are the characters made up of pixels? If so, it means that characters are images!
On a typical modern screen, yes, the graphical representation of a character is a group of pixels. But no, computers don't always have a screen.
For example, in the past people interacted with computers via many types of terminals, like mechanical terminals where the text was printed directly onto paper. Sometimes a vector screen or a 16-segment/14-segment display was used, where the text representation has no pixels at all. Many computers don't even have a screen or a way to display characters, and interact with humans via switches, LEDs, punched cards, a network or a serial port...
So the premise of the question is already wrong. Characters have nothing to do with pixels. Even when characters are displayed on a screen, the pixels representing a character vary depending on the font face and font size.
Character traditionally means a symbol or a glyph representing something. In computing, a character means a unit of information that roughly corresponds to a grapheme, grapheme-like unit, or symbol, such as in an alphabet or syllabary in the written form of a natural language. None of that says anything about pixels.
Each language has a known set of symbols, so logically they're grouped together and each assigned a number. The whole set of those numbers and their mappings is called a character set. You can see that it makes sense to associate numbers with characters, but doing the same for images makes no sense. What common feature of images could we map?
In the past there was no need to cooperate with people using other languages, so each group of people chose a small character set that worked for their own language. However, with the advent of portable devices and the internet, that doesn't work anymore. It would be extremely awkward to receive a message that you can't read, or to send an email that the customer sees as a bunch of garbage. That's why a bigger character set called Unicode was invented.
However, a character set is just a way to map numbers to glyphs in computers. To deal with characters we also need a way to encode those numbers, which is called a character encoding. For example, in a variable-length encoding, a larger number may be encoded using more bytes. Unicode has multiple encodings like UTF-1, UTF-7, UTF-8, UTF-16 and UTF-32.
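To make the character set vs. encoding distinction concrete, here is a minimal Java sketch (the class name is just illustrative) showing the same one-character string producing different byte counts under different encodings; UTF-32 is looked up by name because it is not among the guaranteed StandardCharsets, although common JDKs support it:

    import java.nio.charset.Charset;
    import java.nio.charset.StandardCharsets;

    public class EncodingDemo {
        public static void main(String[] args) {
            String s = "é";  // one character: code point U+00E9
            // Same code point, different byte sequences per encoding.
            System.out.println(s.getBytes(StandardCharsets.UTF_8).length);      // 2
            System.out.println(s.getBytes(StandardCharsets.UTF_16BE).length);   // 2
            System.out.println(s.getBytes(Charset.forName("UTF-32BE")).length); // 4
        }
    }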

Large product ∏ symbol in unicode

I am looking for large symbols in unicode like these:
∏ ∐ ∑ ∫
⨀ ⨁ ⨂
⊕ ⊖ ⊗ ⊘ ⊙
⎲
⎳
⌠
⌡
The only one I found is by combining two Unicode symbols, ⎲ and ⎳. Not sure why that exists, but it's not a large product symbol. That's all I'm really looking for (a ∏ spanning multiple lines, like the sigma). If any of the other ones exist over two lines, that would be great to know as well. Perhaps there is some way to manually make the large ∏ symbol out of smaller primitives.
⎲ and ⎳. Not sure why that exists
When a collection of existing glyphs is added to Unicode, it is desirable to make encoding between character sets round-trip safe. So glyphs that are duplicates or variants of each other are kept anyway.
As of Unicode 10, these are the available forms of the Greek letter pi (and its compatibility decompositions): ∏ Π π ϖ ᴨ ℼ ℿ. There are no top and bottom halves like there are for the integral and summation signs.
You should not attempt to build a glyph piecewise from other glyphs shifted into position. (You said "primitives", but Unicode does not work that way.) The result is not accessible and somewhat likely to break in rendering on systems other than yours.
The correct solution is to use the ∏ glyph and simply scale up its font size. Look into MathML if you are using only ad-hoc notation so far.
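If you do control the markup, big operators scale automatically in math renderers. A minimal LaTeX sketch of the idea (MathML's munderover element behaves similarly):

    \[
      \prod_{i=1}^{n} x_i
      \qquad
      \sum_{i=1}^{n} x_i
    \]

In display style the renderer draws \prod and \sum at operator size and places the limits above and below them; no glyph assembly is needed.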

Unicode Noncharacters

Is there a good resource for finding the last two characters of each plane, particularly planes 3–13?
Obviously 0xFFFE and 0xFFFF are noncharacters, as are 0x10FFFE and 0x10FFFF, but I can't find a complete list of where the last characters of each plane are, as I can't tell where each plane ends.
On Unicode's website it refers to the last two characters of every plane being noncharacters.
The official source can be found at http://unicode.org/charts/index.html; search for "Noncharacters in Charts." In fact, the noncharacters at the end of Planes 3 to D [as of Unicode 12.1] are the only designated code points in those planes.
There are exactly 66 noncharacters in Unicode. There are 34 noncharacters residing at the final two code points of each of the 17 planes, and there is an additional contiguous range of 32 noncharacters from U+FDD0 to U+FDEF in the Arabic Presentation Forms-B block.
Any code point ending in FFFE or FFFF is a noncharacter. Beyond those, any four-digit code point beginning with FDD or FDE is a noncharacter. (A small test sketch follows the list below.)
I'll enumerate the noncharacters:
FDD0-FDEF [These 32 are designated in Unicode 3.1, to allocate more code points for internal use]
FFFE [Probably the most notable one, this one is involved in BOM usage]
FFFF [Can be used as a sentinel, equal to -1 in a 16-bit signed int]
XFFFE [16 of them in the supplementary planes; X is a hexadecimal digit 1-F, or 10]
XFFFF [16 of them in the supplementary planes; X is a hexadecimal digit 1-F, or 10]
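Both rules reduce to a couple of comparisons. A minimal Java sketch (the helper name is hypothetical; it assumes a non-negative code point):

    static boolean isNoncharacter(int codepoint) {
        // The last two code points of each of the 17 planes: U+xxFFFE, U+xxFFFF.
        if (codepoint <= 0x10FFFF && (codepoint & 0xFFFE) == 0xFFFE) return true;
        // The contiguous run U+FDD0..U+FDEF in the BMP.
        return codepoint >= 0xFDD0 && codepoint <= 0xFDEF;
    }

Counting: 17 planes × 2 + 32 = 66, matching the total given above.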
The Unicode Character Database contains authoritative information on the status of each code point. Using it, you can determine the last assigned code point of each plane. This may (actually, will) change over time, as new characters are assigned. You would also need to define what you mean by “character” – in particular, whether you regard Private Use code points as “characters”.
Each Unicode plane contains 2^16 = 65,536 code points, the first starting at 0x000000, and the last two code points of each plane are noncharacters. Therefore, all 0x••FFFE and 0x••FFFF code points are noncharacters, where •• is anything from 0x00 through 0x10 (identifying the plane).
..., as I can't tell where each plane ends.
Every plane by definition ends at U+xxFFFF.
On Unicode's website it refers to the last two characters of every plane being noncharacters.
No. The Unicode Standard Version 9.0 - Core Specification says (in section 23.7 Noncharacters):
The Unicode Standard sets aside 66 noncharacter code points. The last two code points of each plane are noncharacters: U+FFFE and U+FFFF on the BMP, U+1FFFE and U+1FFFF on Plane 1, and so on, up to U+10FFFE and U+10FFFF on Plane 16, for a total of 34 code points. In addition, there is a contiguous range of another 32 noncharacter code points in the BMP: U+FDD0..U+FDEF. For historical reasons, the range U+FDD0..U+FDEF is contained within the Arabic Presentation Forms-A block, but those noncharacters are not “Arabic noncharacters” or “right-to-left noncharacters,” and are not distinguished in any other way from the other noncharacters, except in their code point values.
Note the keyword "code points", not "characters"; they are always U+xxFFFE and U+xxFFFF.

In a Unicode string, how are planes indicated (or are they not)?

I have read the article by Joel and have done a lot of searching. Every site and article on Unicode talks about how there are 16 bits per code point, but Unicode supports more than 2^16 code points with Unicode planes.
But none of them explain how a Unicode string indicates the plane. Furthermore, this leaves the question of how a Unicode string can hold characters from multiple planes.
So, how are planes indicated in Unicode strings?
Someone can feel free to correct me on this, I'm still learning about Unicode myself.
I think your confusion is between a code point and how an encoding represents that code point. The number of bits/bytes per code point depends on your encoding. Let's take the simplest example, UTF-32. UTF-32 uses, drum roll please, 32 bits for each code point. It can directly represent every Unicode character in every plane. UTF-16 is a variable-length encoding; it encodes each code point in one or two code units. The first plane is represented using a single code unit. For the rest, well, you can read more here: http://en.wikipedia.org/wiki/UTF-16 and http://en.wikipedia.org/wiki/UTF-8.
In essence, if the encoding supports specific planes, they are there and represented in the encoding. It's just more clear in the case of UTF-32 than the others.
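A short Java sketch of the UTF-16 case, since Java strings are UTF-16 internally: a Plane 1 code point is stored as a surrogate pair of two code units, and that pair is precisely what indicates the plane:

    public class PlaneDemo {
        public static void main(String[] args) {
            // U+1F600 (GRINNING FACE) lives on Plane 1.
            String s = new String(Character.toChars(0x1F600));
            System.out.println(s.length());                    // 2 UTF-16 code units
            System.out.printf("%04X %04X%n",
                    (int) s.charAt(0), (int) s.charAt(1));     // D83D DE00: a surrogate pair
            System.out.printf("%X%n", s.codePointAt(0));       // 1F600: the code point
        }
    }

Decoding the pair recovers the code point, and with it the plane: ((0xD83D - 0xD800) << 10) + (0xDE00 - 0xDC00) + 0x10000 = 0x1F600.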
I wrote a chapter that explains this topic (and some other Unicode issues) in a manual for an open-source project. Here is a link to the PDF manual (read Chapter 10). And here is a link to that chapter in the HTML version of the manual.

Space-saving character encoding for japanese?

In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types, and even a lot of unused code points within them. So if I want to use them, I waste a lot of memory - not only for storing multi-byte text, but especially for the gaps in my bitmap font - and VRAM is usually really valuable... So the only reasonable approach seems to be using a custom mapping on my texture for, say, UTF-8 characters (so that no space is wasted). BUT: that effort seems to be the same as using my own proprietary character encoding (and thus my own ordering of characters in the texture). In my specific case I have texture space for 4096 different characters and need to display Latin languages as well as Japanese (it's a mess that Unicode only offers the general CJK blocks, with no Japanese-only subset). Has anybody ever had a similar problem (I'd be surprised if not)? Is there already an established approach?
Edit: The same problem is described here: http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to map these bitmap-font glyphs into UTF-8 space efficiently. So any further help is welcome!
Edit2:
Thank you very much for your answer. I'm sorry that my problem wasn't described clearly enough.
What I really want to solve is this: the CJK Unicode range covers over 20,000 characters, but only a subset of around 2,000 characters is necessary to display Japanese text properly. These characters are spread across the range U+4E00 to U+9FA5. So I need to transform these Unicode code points (only the ~2,000 needed for Japanese) somehow into the coordinates of my texture (where I can also order the characters however I want).
For example: U+4E03 is a Japanese character, but U+4E04, U+4E05 and U+4E06 are not, and then U+4E07 is a Japanese character again. So the easiest solution I can see is: after character U+4E03, leave three empty slots in my texture (or store the unneeded characters U+4E04, U+4E05, U+4E06 there) and then store U+4E07. But this would waste so much texture space (20,000 slots even though only 2,000 are necessary). Instead I want to be able to put only "...U+4E03, U+4E07..." in my texture. But then I have no idea how to write my displayText function, because I can't know the texture coordinates of the glyph I want to display. A hashmap or something similar would be necessary, but I have no idea how to store that data (it would be a mess to hand-write something like ...{U+4E03, 128}, {U+4E07, 129}... for every character to fill the hashmap).
To the questions:
1) No specific format - I will write the displayText function myself.
2) No reason against Unicode - it's only that CJK-range problem for my bitmap font.
3) I think it's generally platform- and language-independent, but in my case I'm using C++ with OpenGL on Mac OS X/iOS.
Thank you very much for your help! If you have any further idea for this, it would really help me a lot!
What is the real problem you want to solve?
Is it that a UTF-8 encoded string occupies three bytes per character? If yes, switch to UTF-16. Otherwise don't blame UTF-8. (Explanation: UTF-8 is just an algorithm to convert a sequence of integers to a sequence of bytes. It has nothing to do with the grouping of characters in codepages. That in turn is what Unicode code points are for.)
Is it that the Unicode code points are distributed over many "codepages" (where a "codepage" means a block of 256 adjacent Unicode code points)? If yes, invent a mapping from the Unicode code points (0x000000 - 0x10FFFF) to a smaller set of integers (see the sketch after this list). In terms of memory this should cost no more than 4 bytes times the number of characters you really need. The lookup time would be approximately 24 memory accesses, 24 integer comparisons and 24 branch instructions. (In fact, this would be a binary search in a tree map.) And if that's too expensive you could use a mapping based on a hash table.
Is it something else? Then please give us some examples, to better understand your problem.
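A minimal sketch of that mapping idea, written in Java to match the code later in this answer (the asker's C++ version would use std::map the same way); the table is generated in a loop rather than written out by hand, and the input list is hypothetical:

    import java.util.List;
    import java.util.TreeMap;

    public class GlyphIndex {
        // Maps each needed code point to a dense index 0..n-1 (a texture slot).
        private static final TreeMap<Integer, Integer> INDEX = new TreeMap<>();

        static void build(List<Integer> neededCodepoints) {
            int next = 0;
            for (int cp : neededCodepoints) {
                INDEX.put(cp, next++);   // e.g. U+4E03 -> 0, U+4E07 -> 1, ...
            }
        }

        static int indexOf(int codepoint) {
            // O(log n) lookup; -1 means the glyph is not in the texture.
            Integer i = INDEX.get(codepoint);
            return i != null ? i : -1;
        }
    }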
As far as I understand it, you should probably write a small utility program that takes as input the set of Unicode code points you want to use in your application and then generates the code and data for displaying texts. This raises the questions:
Do you have to use a specific bitmap font format or will you write the displayText function yourself?
Is there any reason against using Unicode for all strings and converting them to your bitmap-optimized encoding only when you render text? The encoding conversion would of course be internal to the displayText method and not visible to normal application code.
Just out of interest: Is the problem specific to a certain programming language or environment?
Update:
I am assuming that your main problem is some function like this:
Rectangle position(int codepoint)
If I had to do this, I would start by having one bitmap for each character. The bitmap's file name would be the codepoint, so that the "big picture" can be regenerated easily, just in case you find some more characters you need. The preparation consists of the following steps:
Load all the bitmaps and determine their dimensions. The result of this step is a map from integers to (width, height) pairs.
Compute a good layout for the character images in the big picture and remember where each character was placed. Save the big picture. Save the mapping from codepoints to (x, y, width, height) to another file. This can be a text file, or if you don't have disk space, a binary file. The details don't matter.
The displayText function would then work as follows:
void displayText(int x, int y, String s) {
    // Walk the string by code point rather than by char, so characters
    // outside the BMP (stored as surrogate pairs in UTF-16) work too.
    for (int i = 0; i < s.length(); ) {
        int codepoint = s.codePointAt(i);
        i += Character.charCount(codepoint);
        Rectangle position = positions.get(codepoint);
        if (position != null) {
            // draw the glyph's sub-rectangle of the big picture at (x, y)
            x += position.width;
        }
    }
}

Map<Integer, Rectangle> positions = loadPositionsFromFile();
Now the only problem that is left is how this map can be represented in memory using as little memory as possible, and still be fast enough. That, of course, depends on your programming language.
The in-memory representation could be a few arrays that contain x, y, width and height. For each element, a 16-bit integer should be enough, and you probably only need 8 bits for width and height anyway. Another array would then map the codepoint to the index into the position arrays (or some special value if the codepoint is not available). This would be an array of roughly 20,000 16-bit integers, so in summary you have:
2000 * (2 + 2 + 1 + 1) = 12000 bytes for positionX, positionY, positionWidth and positionHeight
20000 * 2 = 40000 bytes for codepointToIndexInPositionArrays, if you use an array instead of a map.
Compared to the size of the bitmap itself, this should be small enough. And since the arrays don't change they can be in read-only memory.
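A minimal Java sketch of that layout, with illustrative field names; width and height are stored as unsigned bytes as suggested above:

    // Parallel arrays, indexed by glyph index (about 2,000 entries each).
    short[] positionX, positionY;
    byte[] positionWidth, positionHeight;    // stored as unsigned bytes

    // One slot per code point in U+4E00..U+9FA5; -1 marks "not in the texture".
    short[] codepointToIndex = new short[0x9FA5 - 0x4E00 + 1];

    Rectangle position(int codepoint) {
        if (codepoint < 0x4E00 || codepoint > 0x9FA5) return null;
        int index = codepointToIndex[codepoint - 0x4E00];
        if (index < 0) return null;
        return new Rectangle(positionX[index], positionY[index],
                positionWidth[index] & 0xFF,     // mask to read the byte unsigned
                positionHeight[index] & 0xFF);
    }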
I believe the most efficient (lossless) method for encoding this data will be to use a Huffman encoding to store your document information. This is a classic information theory problem. You will need to perform a mapping to go from your compressed space to your character space.
This technique will compress your document as efficiently as possible, based on character frequency per document (or whatever domain/documents you choose to apply it to). Only the characters you use will be stored, and they will be stored in an efficient manner directly proportional to how often they are used.
I think the best way for you to solve this problem is to use an existing encoding (UTF-16, UTF-8...). This will be much less error-prone than implementing your own Huffman coding in order to save a little bit of space. Disk space and bandwidth are cheap; errors that anger customers or managers are not. It is my belief that a Huffman encoding would theoretically be the most efficient (lossless) encoding possible, but not the most practical for this application. Check out the link though; it might help with some of these concepts.
-Brian J. Stinar-
UTF-8 is usually a very efficient encoding. If your application focuses primarily on Asia and other regions with multi-byte character sets, you may benefit more from using UTF-16. You could of course write your own encoding, but it won't save you that much data and it will create a lot of work for you.
If you really need to compact your data (and I wonder if and why), you could best use some algorithm to compress your UTF data. Most algorithms work more efficiently on larger blocks of data, but there are also algorithms for compressing small chunks of text. I think you will save yourself a lot of time if you explore these instead of defining your own encoding.
The paper is pretty much obsolete; it isn't 1980 any more, and scrounging bits is not a requirement of almost any display application. When developing an application, e.g. for the iPhone, you have to plan for l10n across multiple languages, so saving a few bits just for Japanese is a bit pointless.
Japan is still on Shift-JIS because, like China with GB18030, Hong Kong with BIG5, etc., they have a big, stable, and efficient resource pool already locked into locale encodings. Migrating to Unicode requires rewriting a significant amount of framework tools and the additional testing that ensues.
If you look at the iPod, it saves bits by only supporting Latin, Chinese, Japanese, and Korean, skipping Thai and other scripts. As memory prices dropped and storage increased, Apple was able to add support for more scripts in the iPhone.
UTF-8 is the way to save space: use UTF-8 for storage and convert to UCS-2 or higher for more convenient manipulation and display. The differences between Shift-JIS and Unicode are really pretty minor.
Chinese alone has more than 4096 characters, and I'm not talking punctuation, but characters that are used to form words. From Wikipedia:
The number of Chinese characters contained in the Kangxi dictionary is approximately 47,035, although a large number of these are rarely used variants accumulated throughout history.
Even though many of those are rarely used, even if 90% weren't needed you'd still exhaust your 4096-character quota. (I think the actual number used in modern text is somewhere around 10-20k.)
If you know in advance which characters you'll need to use your best bet may be to create an indirection table of Unicode codepoints to indexes into your texture. Then you only have to put as many characters in your texture as you'll actually use. I believe Flash (and some PDFs) do something like this internally.
You could use multiple bitmaps and load them on demand, instead of a single bitmap that tries to encompass all possible characters.