What is char size in a computer architecture? - cpu-architecture

This Wikipedia article on word sizes provides a table of word sizes in different computer architectures. It has different columns like 'integer size', 'floating point size' etc. I suppose integer size is the size of the ALU's operands, floating point size is the size of the FPU's operands, and unit of address resolution is the number of bits/trits/digits represented by a single address. Word size is given as the natural size of data used by the processor (which is still somewhat confusing).
But I'm wondering what the char size column in the table represents. Is it the smallest object size theoretically possible? Is it the smallest alignment possible? What are the common operations defined over data of char size? In the x86, x86-64 and ARM architectures, char size is 8 bits, which is the same as the smallest integer size. But on some other architectures, char size is 5/6/7 bits, which is very different from the integer size on that architecture.

In modern C, a char is guaranteed to be independently modifiable, without disturbing surrounding data. It's usually chosen to be the width of the narrowest load/store instruction. So on Alpha or word-addressable CPUs, a char had to be the word size, or else every char store would have to compile to an atomic RMW on the containing word. (Rather than a much cheaper non-atomic RMW like some early compilers actually used, before C11 introduced a thread-aware memory model to the language.) See Can modern x86 hardware not store a single byte to memory? (which covers modern ISAs in general) and C++ memory model and race conditions on char arrays for the requirements C++11 and C11 place on char.
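To make that concrete, here is a rough C rendering of the non-atomic read-modify-write described above, i.e. what a compiler for a hypothetical word-addressable machine with no byte stores would effectively have to emit for a single char store (the function name and layout are made up for illustration):

#include <stddef.h>
#include <stdint.h>

/* On a byte-addressable ISA, bytes[i] = v compiles to one byte store.
 * On a word-only machine, the compiler must instead read, modify and
 * write back the whole containing word, roughly like this: */
void store_char_via_word_rmw(uint32_t *words, size_t byte_index, uint8_t v)
{
    size_t word_index = byte_index / 4;        /* which 32-bit word holds the byte */
    unsigned shift    = (byte_index % 4) * 8;  /* byte position within that word   */

    uint32_t w = words[word_index];            /* load the whole word     */
    w &= ~(0xFFu << shift);                    /* clear the target byte   */
    w |= (uint32_t)v << shift;                 /* insert the new value    */
    words[word_index] = w;                     /* store the whole word    */
    /* If another thread writes a neighbouring byte between the load and
     * the store, that update is lost - exactly the kind of data race the
     * C11/C++11 memory model forbids for independent char objects. */
}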
But that Wikipedia table of word and char sizes in historical machines is clearly not about that, given the sizes. (e.g. smaller than a word on some word-addressable machines, I'm pretty sure).
It's about how software (and character I/O hardware like terminals) packed multiple characters of the machine's native character encoding (e.g. a subset of ASCII, EBCDIC, or something earlier) into machine words.
Unicode, and variable-length character encodings like UTF-8 and UTF-16, are recent inventions compared to that history. https://en.wikipedia.org/wiki/Character_encoding#History
Many systems used fewer than 8 bits per character, e.g. 6 (64 unique encodings) is enough for the upper and lower case Latin alphabet plus some special characters and control codes.
These historical character sets are what motivated some of the choices for programming languages to use certain special characters or not, because they were developed on systems that had a certain character set.
Historical machines really did do things like pack 3 characters of text into an 18-bit word.
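For illustration, a small C sketch of that kind of packing, using a uint32_t to stand in for an 18-bit word and made-up 6-bit character codes:

#include <stdint.h>
#include <stdio.h>

/* Pack three 6-bit character codes into one 18-bit "word"
 * (held in a uint32_t here for convenience). */
static uint32_t pack3(unsigned c0, unsigned c1, unsigned c2)
{
    return ((c0 & 0x3F) << 12) | ((c1 & 0x3F) << 6) | (c2 & 0x3F);
}

static unsigned unpack(uint32_t word, int pos)   /* pos = 0, 1 or 2 */
{
    return (word >> (12 - 6 * pos)) & 0x3F;
}

int main(void)
{
    uint32_t w = pack3(1, 2, 3);   /* three hypothetical 6-bit codes */
    printf("0%o -> %u %u %u\n", (unsigned)w,
           unpack(w, 0), unpack(w, 1), unpack(w, 2));
    return 0;
}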
You might want to search on https://retrocomputing.stackexchange.com/, or even ask a question there after doing some more reading.

Related

What is a realistic maximum number of unicode combining characters?

I'm looking for the maximum number of Unicode combining characters that appear after a non-combining one in realistic natural text.
I know that in Unicode text there can be an arbitrary number of combining characters placed anywhere in the text. However, I am writing a specialized application that has to operate under constrained resources, and because of that and other technical reasons, displaying an arbitrary number of combining characters after a non-combining one is not an option. However, I would still like to display natural languages properly if possible, and support for a small number of combining characters should not be a problem.
My intuition is that natural languages don't need more than two or three combining characters after a base character, but I'm not sure and can't find any source for that number.
Ok, for a lack of a better answer, here's what I did (for future reference if needed):
I ended up using a SmallVec-like thing with a threshold of 8 bytes before allocation and an upper limit of around 50 bytes (text stored in UTF-8). That should make everyone happy, I think, and performance doesn't suffer.
Take those numbers with a pinch of salt, they are arbitrary and I might tune them anyway.
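For what it's worth, here is a minimal C sketch of that kind of small-buffer container, using the 8-byte inline threshold and ~50-byte cap from above; all names are made up and error handling is kept to a bare minimum:

#include <stdlib.h>
#include <string.h>

#define INLINE_CAP 8    /* bytes stored without allocating   */
#define HARD_CAP   50   /* upper limit on stored UTF-8 bytes */

/* A tiny small-buffer string: short cluster data lives inline,
 * longer (but still capped) data spills to the heap. */
typedef struct {
    size_t len;
    union {
        char  inline_buf[INLINE_CAP];
        char *heap_buf;
    } u;
} ClusterText;

/* Fill an empty ClusterText; returns 0 on success, -1 on overflow or OOM. */
int cluster_set(ClusterText *c, const char *utf8, size_t len)
{
    if (len > HARD_CAP)
        return -1;                        /* refuse pathological clusters */
    if (len <= INLINE_CAP) {
        memcpy(c->u.inline_buf, utf8, len);
    } else {
        char *p = malloc(len);
        if (!p) return -1;
        memcpy(p, utf8, len);
        c->u.heap_buf = p;
    }
    c->len = len;
    return 0;
}

const char *cluster_data(const ClusterText *c)
{
    return c->len <= INLINE_CAP ? c->u.inline_buf : c->u.heap_buf;
}

void cluster_free(ClusterText *c)
{
    if (c->len > INLINE_CAP)
        free(c->u.heap_buf);
    c->len = 0;
}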

How are datatypes that need more than 32 bits stored in a 32 bit OS

How are datatypes that would need more than 32 bits stored in the system?
For example, consider an unsigned int or a long that can hold a value greater than 2 to the power of 32: how is it stored in memory?
Any OS, or compiler, will use the number of bits that it needs. So if an OS or language has a need for 64-bit integers, it will just store such integers into an 8-byte representation.
There are standards for this, for integers as well as floating point numbers. See this article on Wikipedia for more: http://en.wikipedia.org/wiki/Computer_numbering_formats
The 32 bits in a 32-bit architecture refer to the number of bits the CPU's registers are wide (there are some exceptions, such as floating point registers). This does not mean that the system can't handle datatypes larger than this, only that it must deal with these datatypes 32 bits at a time.
For example, machines have an "Add With Carry" instruction, which allows the machine to chain multiple adds together so that arbitrarily sized numbers, say two 512-bit numbers, can be added in 16 steps (512/32).
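A C sketch of the same idea, the way a compiler or bignum routine chains 32-bit limbs; the hardware "add with carry" instruction just does the carry bookkeeping for free:

#include <stddef.h>
#include <stdint.h>

/* Add two n-limb little-endian numbers (32 bits per limb). Each step adds
 * one pair of limbs plus the carry from the previous step; for two 512-bit
 * numbers, n = 16. Returns the final carry out. */
uint32_t add_multiword(uint32_t *sum, const uint32_t *a, const uint32_t *b, size_t n)
{
    uint32_t carry = 0;
    for (size_t i = 0; i < n; i++) {
        uint64_t t = (uint64_t)a[i] + b[i] + carry;  /* at most 33 bits, fits in 64 */
        sum[i] = (uint32_t)t;                        /* low 32 bits                 */
        carry  = (uint32_t)(t >> 32);                /* carry into the next limb    */
    }
    return carry;
}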

Why UTF-32 exists whereas only 21 bits are necessary to encode every character?

We know that codepoints can be in this interval 0..10FFFF which is less than 2^21. Then why do we need UTF-32 when all codepoints can be represented by 3 bytes? UTF-24 should be enough.
Computers are generally much better at dealing with data on 4 byte boundaries. The benefits in terms of reduced memory consumption are relatively small compared with the pain of working on 3-byte boundaries.
(I speculate there was also a reluctance to have a limit that was "only what we can currently imagine being useful" when coming up with the original design. After all, that's caused a lot of problems in the past, e.g. with IPv4. While I can't see us ever needing more than 24 bits, if 32 bits is more convenient anyway then it seems reasonable to avoid having a limit which might just be hit one day, via reserved ranges etc.)
I guess this is a bit like asking why we often have 8-bit, 16-bit, 32-bit and 64-bit integer datatypes (byte, int, long, whatever) but not 24-bit ones. I'm sure there are lots of occasions where we know that a number will never go beyond 2^21, but it's just simpler to use int than to create a 24-bit type.
First there were 2 character coding schemes: UCS-4 that coded each character into 32 bits, as an unsigned integer in range 0x00000000 - 0x7FFFFFFF, and UCS-2 that used 16 bits for each codepoint.
Later it was found out that using just the 65536 codepoints of UCS-2 would get one into problems anyway, but many programs (Windows, cough) relied on wide characters being 16 bits wide, so UTF-16 was created. UTF-16 encodes the codepoints in the range U+0000 - U+FFFF just like UCS-2, and U+10000 - U+10FFFF using surrogate pairs, i.e. a pair of 16-bit values.
As this was a bit complicated, UTF-32 was introduced as a simple one-to-one mapping for characters beyond U+FFFF. Now, since UTF-16 can only encode up to U+10FFFF, it was decided that this will be the maximum value ever assigned, so that there will be no further compatibility problems, so UTF-32 indeed just uses 21 bits. As an added bonus, UTF-8, which was initially planned to be a 1-6-byte encoding, now never needs more than 4 bytes for each code point. Therefore it can easily be proven that it never requires more storage than UTF-32.
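For reference, the surrogate-pair arithmetic is tiny; a minimal C sketch (assumes the input is a valid Unicode scalar value, i.e. not itself a surrogate):

#include <stdint.h>

/* Encode one code point (<= 0x10FFFF) as UTF-16.
 * Returns the number of 16-bit units written (1 or 2). */
int utf16_encode(uint32_t cp, uint16_t out[2])
{
    if (cp <= 0xFFFF) {                          /* BMP: identical to UCS-2       */
        out[0] = (uint16_t)cp;
        return 1;
    }
    cp -= 0x10000;                               /* 20 bits remain                */
    out[0] = (uint16_t)(0xD800 | (cp >> 10));    /* high surrogate: upper 10 bits */
    out[1] = (uint16_t)(0xDC00 | (cp & 0x3FF));  /* low surrogate:  lower 10 bits */
    return 2;
}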
It is true that a hypothetical UTF-24 format would save memory. However, its savings would be dubious anyway, as it would mostly consume more storage than UTF-8, except for blasts of emoji or the like - and not many interesting texts of significant length consist solely of emoji.
But UTF-32 is used as an in-memory representation for text in programs that need simply indexed access to codepoints - it is the only encoding where the Nth element of a C array is also the Nth codepoint. UTF-24 would do the same with 25% memory savings but more complicated element accesses.
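To illustrate the indexing difference (UTF-24 is purely hypothetical, little-endian here just for the sake of the example):

#include <stddef.h>
#include <stdint.h>

/* UTF-32: the Nth code point is simply the Nth array element. */
uint32_t nth_utf32(const uint32_t *text, size_t n)
{
    return text[n];
}

/* Hypothetical UTF-24: still O(1) indexing, but every access has to
 * reassemble three bytes, and natural alignment is lost. */
uint32_t nth_utf24(const uint8_t *text, size_t n)
{
    const uint8_t *p = text + 3 * n;
    return (uint32_t)p[0] | ((uint32_t)p[1] << 8) | ((uint32_t)p[2] << 16);
}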
It's true that only 21 bits are required (reference), but modern computers are good at moving 32-bit units of things around and generally interacting with them. I don't think I've ever used a programming language that had a 24-bit integer or character type, nor a platform where that was a multiple of the processor's word size (not since I last used an 8-bit computer; UTF-24 would be reasonable on an 8-bit machine), though naturally there have been some.
UTF-32 is a multiple of 16 bits. Working with 32-bit quantities is much more common than working with 24-bit quantities and is usually better supported. It also helps keep each character 4-byte aligned (assuming the entire string is 4-byte aligned). Going from 1 byte to 2 bytes to 4 bytes is the most "logical" progression.
Apart from that: The Unicode standard is ever-growing. Codepoints outside of that range could eventually be assigned (it is somewhat unlikely in the near future, however, due to the huge number of unassigned codepoints still available).

How the word-length of an ISA is implemented in the hardware and software?

I have learnt that word length is an ISA feature, which has to be implemented in both hardware and software. I have only a vague idea about the answer and need correction or confirmation. Does the word length become the size of the general-purpose registers in the CPU? Does the word length become the size of 'int' (just plain int, not long or short) for a compiler?
The word length is the number of bits natively handled by the system. Common versions right now are 32-bit words and 64-bit words.
For example, a byte can hold a number from 0 to 255. However, a 32-bit integer can range from 0 to 4,294,967,295. An integer is the native "word size" of the system, so it is 4 bytes wide on 32-bit systems and can therefore hold a considerably larger range than 0-255.
In fact, in many systems/compilers/etc. types which are smaller than a system's native word size are converted to that word size simply because it's more efficient than trying to put multiple values into a single word. A boolean, for example, can be represented by a single bit. However, if you write a piece of software that uses 32 boolean values, it's not going to squeeze them all into a single word. Each will usually be given its own word (or at least its own byte) when it runs on the metal.
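A quick C illustration; the sizes in the comments are typical for a 64-bit Linux/x86-64 (LP64) system and are assumptions rather than guarantees:

#include <stdbool.h>
#include <stdio.h>

int main(void)
{
    printf("bool : %zu\n", sizeof(bool));    /* usually 1                        */
    printf("char : %zu\n", sizeof(char));    /* always 1 by definition           */
    printf("int  : %zu\n", sizeof(int));     /* usually 4                        */
    printf("long : %zu\n", sizeof(long));    /* 8 on LP64, 4 on 64-bit Windows   */
    printf("void*: %zu\n", sizeof(void *));  /* 4 or 8 = "bitness" of the target */

    /* In expressions, char and bool operands are promoted to int (the
     * "natural" width), so this addition is carried out at int width: */
    char a = 100, b = 100;
    int  c = a + b;    /* 200: the operands were promoted to int first */
    printf("c    : %d\n", c);
    return 0;
}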
I am taking the liberty of interpreting this question as being about the size of an integer on a computer in C or C++. In that case this link will help - Does the size of an int depend on the compiler and/or processor?.
However, if read literally, the word size of a CPU should be the size of its registers.
Hardware implementation: the word length is the number of bytes fetched by the CPU at a time and can also be called the natural size of the machine (though there is nothing natural about computers). It also becomes the size of the CPU's registers in an implementation, since the CPU needs registers to store what it fetches. Having said that, it is possible to use a bigger register for storage: IA-32 software (with a word length of 32 bits) can run on x86-64 (with a word length of 64 bits). Software implementation: the word length becomes the size of 'int' (just plain int, not long or short).

Space-saving character encoding for japanese?

In my opinion a common problem: character encoding in combination with a bitmap font. Most multi-language encodings have huge gaps between different character types and a lot of unused code points in between. So if I want to use them I waste a lot of memory (not only for storing multi-byte text - I mean especially for the gaps in my bitmap font) - and VRAM is usually really valuable... So the only reasonable thing seems to be: using a custom mapping on my texture for e.g. UTF-8 characters (so that no space is wasted). BUT: this effort seems to be the same as using my own proprietary character encoding (and so also my own ordering of characters in my texture). In my specific case I have texture space for 4096 different characters and need characters to display Latin languages as well as Japanese (it's a mess with UTF-8, which only supports the general CJK code pages). Has anybody ever had a similar problem (I'd really wonder if not)? Is there already any approach for this?
Edit: The same problem is described here http://www.tonypottier.info/Unicode_And_Japanese_Kanji/ but it doesn't provide a real solution for how to store these bitmap-font mappings to UTF-8 space-efficiently. So any further help is welcome!
Edit2:
Thank you very much for your answer. I'm sorry that my problem wasn't described clearly enough.
What I really want to solve is this: the CJK Unicode range covers over 20000 characters, but only a subset of around 2000 characters is necessary to display Japanese text properly. These characters are spread over the range U+4E00 to U+9FA5. So I need to transform these Unicode codepoints (only the ~2000 needed for Japanese) somehow into the coordinates of my texture (where I can also order the characters however I want).
For example, U+4E03 is a Japanese character, but U+4E04, U+4E05 and U+4E06 are not; then U+4E07 is a Japanese character again. So the easiest solution I can see: after character U+4E03, leave three gaps in my texture (or put the unnecessary characters U+4E04, U+4E05, U+4E06 there) and then write U+4E07. But this would waste so much texture space (20000 characters, even if only 2000 are necessary). So I want to be able to put in my texture only: "...U+4E03, U+4E07...". But then I have no idea how to write my displayText function, because I can't know where the texture coordinates of the glyph I want to display are. A hashmap or something like that would be necessary, but I have no idea how to store that data (it would be a mess to write something like ...{U+4E03, 128}, {U+4E07, 129}... for every character just to fill the hashmap).
To the questions:
1) No specific format - so I will write the displayText function myself.
2) No reason against Unicode - it's only that CJK range problem for my bitmap font.
3) I think that's generally platform- and language-independent, but in my case I'm using C++ with OpenGL on Mac OS X/iOS.
Thank you very much for your help! If you have any further idea for this, it would really help me a lot!
What is the real problem you want to solve?
Is it that a UTF-8 encoded string occupies three bytes per character? If yes, switch to UTF-16. Otherwise don't blame UTF-8. (Explanation: UTF-8 is just an algorithm to convert a sequence of integers to a sequence of bytes. It has nothing to do with the grouping of characters in codepages. That in turn is what Unicode code points are for.)
Is it that the Unicode code points are distributed over many "codepages" (where a "codepage" means a block of 256 adjacent Unicode code points)? If yes, invent a mapping from the Unicode code points (0x000000 - 0x10FFFF) to a smaller set of integers. In terms of memory this should cost no more than 4 bytes times the number of characters you really need. The lookup time would be approximately 24 memory accesses, 24 integer comparisons and 24 branch instructions. (In fact, this would be a binary search in a tree map.) And if that's too expensive you could use a mapping based on a hash table.
Is it something else? Then please give us some examples, to better understand your problem.
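To make the second option concrete, here is a sketch of such a mapping in C, assuming you keep a sorted table of the code points you actually use (the sample entries below are made up; in practice the table would hold your ~2000 entries):

#include <stddef.h>
#include <stdint.h>

/* Sorted list of the code points the application actually uses. */
static const uint32_t used_codepoints[] = {
    0x0041, 0x0042, 0x3042, 0x4E03, 0x4E07, 0x9FA5,
};
#define N_USED (sizeof used_codepoints / sizeof used_codepoints[0])

/* Map a code point to a dense index 0..N_USED-1, or -1 if it is not used.
 * A plain binary search: about log2(N) comparisons, no hash table needed. */
int dense_index(uint32_t cp)
{
    size_t lo = 0, hi = N_USED;
    while (lo < hi) {
        size_t mid = lo + (hi - lo) / 2;
        if (used_codepoints[mid] < cp)
            lo = mid + 1;
        else
            hi = mid;
    }
    return (lo < N_USED && used_codepoints[lo] == cp) ? (int)lo : -1;
}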
As far as I understand it, you should probably write a small utility program that takes as input the set of Unicode code points you want to use in your application and then generates the code and data for displaying texts. This raises the questions:
Do you have to use a specific bitmap font format or will you write the displayText function yourself?
Is there any reason against using Unicode for all strings and to convert them to your bitmap-optimized encoding just for the time when you render text? The encoding conversion would of course be internal to the displayText method and not visible to the normal application code.
Just out of interest: Is the problem specific to a certain programming language or environment?
Update:
I am assuming that your main problem is some function like this:
Rectangle position(int codepoint)
If I had to do this, I would start by having one bitmap for each character. The bitmap's file name would be the codepoint, so that the "big picture" can be regenerated easily, just in case you find some more characters you need. The preparation consists of the following steps:
Load all the bitmaps and determine their dimensions. The result of this step is a map from integers to (width, height) pairs.
Compute a good layout for the character images in the big picture and remember where each character was placed. Save the big picture. Save the mapping from codepoints to (x, y, width, height) to another file. This can be a text file, or if you don't have disk space, a binary file. The details don't matter.
The displayText function would then work as follows:
void displayText(int x, int y, String s) {
    // iterate over code points rather than chars so surrogate pairs are handled too
    for (int codepoint : s.codePoints().toArray()) {
        Rectangle position = positions.get(codepoint);
        if (position != null) {
            // draw position's rectangle from the big bitmap at (x, y)
            x += position.width;
        }
    }
}

Map<Integer, Rectangle> positions = loadPositionsFromFile();
Now the only problem that is left is how this map can be represented in memory using as little memory as possible, and still be fast enough. That, of course, depends on your programming language.
The in-memory representation could be a few arrays that contain x, y, width, height. For each element, a 16-bit integer should be enough, and you probably only need 8 bits for width and height anyway. Another array would then map the codepoint to the index into those position arrays (or some special value if the codepoint is not available). This would be an array of 20000 16-bit integers, so in summary you have:
2000 * (2 + 2 + 1 + 1) = 12000 bytes for positionX, positionY, positionWidth and positionHeight
20000 * 2 = 40000 bytes for codepointToIndexInPositionArrays, if you use an array instead of a map.
Compared to the size of the bitmap itself, this should be small enough. And since the arrays don't change they can be in read-only memory.
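As a sketch of that layout in C (the names, the covered range and the sentinel value are my own choices; the arrays would be filled from the file written in the preparation step):

#include <stdint.h>

#define FIRST_CP    0x4E00   /* first code point covered by the lookup array */
#define RANGE_LEN   20000    /* number of entries in the lookup array        */
#define NOT_PRESENT 0xFFFF   /* sentinel: glyph not present in the texture   */

/* One entry per glyph actually packed into the texture (~2000 of them). */
static uint16_t positionX[2000];
static uint16_t positionY[2000];
static uint8_t  positionWidth[2000];
static uint8_t  positionHeight[2000];

/* One 16-bit entry per code point in the covered range (~40 KB),
 * mapping a code point to its index in the arrays above. */
static uint16_t codepointToIndex[RANGE_LEN];

/* Look up the texture rectangle for a code point; returns 0 if absent. */
int glyph_rect(uint32_t cp, uint16_t *x, uint16_t *y, uint8_t *w, uint8_t *h)
{
    if (cp < FIRST_CP || cp >= FIRST_CP + RANGE_LEN)
        return 0;
    uint16_t i = codepointToIndex[cp - FIRST_CP];
    if (i == NOT_PRESENT)
        return 0;
    *x = positionX[i];
    *y = positionY[i];
    *w = positionWidth[i];
    *h = positionHeight[i];
    return 1;
}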
I believe the most efficient (lossless) method for encoding this data will be to use a Huffman encoding to store your document information. This is a classic information theory problem. You will need to perform a mapping to go from your compressed space to your character space.
This technique will compress your document as efficiently as possible, based on character frequency per document (or whatever domain/documents you choose to apply it to). Only the characters you use will be stored, and they will be stored in an efficient manner directly proportional to how often they are used.
I think the best way for you to solve this problem is to use an existing encoding (UTF-16, UTF-8, ...). This will be much less error prone than implementing your own Huffman coding in order to save a little bit of space. Disk space and bandwidth are cheap; errors that anger customers or managers are not. It is my belief that a Huffman encoding will theoretically be the most efficient (lossless) encoding possible, but not the most practical for this application. Check out the link though, as it might help with some of these concepts.
-Brian J. Stinar-
UTF-8 is usually a very efficient encoding. If your application focuses primarily on Asia and other regions with multi-byte character sets, you may benefit more from using UTF-16. You could of course write your own encoding, but it won't save you that much data and it will create a lot of work for you.
If you really need to compact your data (and I wonder if and why), your best bet is to use some algorithm to compress your UTF data. Most algorithms work more efficiently on larger blocks of data, but there are also algorithms for compressing small chunks of text. I think you will save yourself a lot of time if you explore these instead of defining your own encoding.
The paper is pretty much obsolete; it isn't 1980 any more, and scrounging bits is not a requirement of almost any display application. When developing an application, e.g. for the iPhone, you have to plan for l10n across multiple languages, so saving a few bits for just Japanese is a bit pointless.
Japan is still on Shift-JIS because like China with GB18030, Hong Kong with BIG5, etc, they have a big, stable, and efficient resource pool already locked into locale encodings. Migrating to Unicode requires re-writing a significant amount of framework tools and the additional testing that ensues.
If you look at the iPod, it saves bits by only supporting Latin, Chinese, Japanese, and Korean, skipping Thai and other scripts. As memory prices dropped and storage increased, Apple has been able to add support for more scripts with the iPhone.
UTF-8 is the way to save space, use UTF-8 for storage and convert to UCS-2 or higher for more convenient manipulation and display. The differences between Shift-JIS and Unicode are really pretty minor.
Chinese alone has more than 4096 characters, and I'm not talking punctuation, but characters that are used to form words. From Wikipedia:
The number of Chinese characters contained in the Kangxi dictionary is approximately 47,035, although a large number of these are rarely used variants accumulated throughout history.
Even though many of those are rarely used, even if 90% weren't needed you'd still exhaust your quota. (I think the actual number used in modern text is somewhere around 10 - 20k.)
If you know in advance which characters you'll need to use your best bet may be to create an indirection table of Unicode codepoints to indexes into your texture. Then you only have to put as many characters in your texture as you'll actually use. I believe Flash (and some PDFs) do something like this internally.
You could use multiple bitmaps and load them on demand, instead of a single bitmap that tries to encompass all possible characters.