Are there bytes that are not used in the UTF-8 encoding? - unicode

As I understand it, UTF-8 is a superset of ASCII, and therefore includes the control characters which are not used to represent printable characters.
My question is: Are there any bytes (of the 256 different) that are not used by the UTF-8 encoding?
I wondered if you could therefore convert/encode UTF-8 text into a more compact binary form.
Here is my thought process:
I have no idea how the UTF-8 text encoding works, or how it can represent so many characters (only that it uses multiple bytes for characters not in ASCII (Latin-1??)), but I know that ASCII text is valid UTF-8, so the control characters (bytes 0-31) are not reused by the UTF-8 encoding, yet at the same time they are not used for displaying characters, right??
So of the 256 different byte values, only about 230 are actually used. For a 1000-byte Unicode text there are then only about 230^1000 different possible texts, right?
If that is true, you could convert it to binary data which is smaller than 1000 bytes.
(Each byte would then carry only log2(230) ≈ 7.85 bits of information instead of 8, so 1000 bytes of such text could in principle be packed into roughly 981 bytes.)
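Here is a rough sketch (in Python) of what I mean, just to make the counting argument concrete. The helper name is made up, the toy alphabet exaggerates the gain, and a real scheme would also have to transmit the alphabet and the original length:

    # Hypothetical sketch: repack text that uses only K of the 256 byte values
    # into fewer bytes by treating it as one big base-K number.
    def pack(data: bytes, alphabet: bytes) -> bytes:
        k = len(alphabet)                           # ~230 for "UTF-8 minus unused bytes"
        index = {b: i for i, b in enumerate(alphabet)}
        n = 0
        for b in data:                              # interpret the data as a base-k integer
            n = n * k + index[b]
        return n.to_bytes((n.bit_length() + 7) // 8 or 1, "big")

    text = "Hello, wörld!".encode("utf-8")
    alphabet = bytes(sorted(set(text)))             # toy alphabet: only the bytes that occur
    print(len(text), len(pack(text, alphabet)))     # packed form is shorter
    # Note: the big saving here comes from the tiny toy alphabet; with ~230 of
    # 256 byte values allowed, the theoretical saving is only about 2 %.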

Yes, it is possible to devise encodings which are more space-efficient than UTF-8, but you have to weigh the advantages against the disadvantages.
For example, if your primary target is (say) ISO-8859-1, you could map the character codes 0xA0-0xFF to themselves and use only 0x80-0x9F to introduce extension sequences, somewhat like UTF-8 uses (nearly) all of 0x80-0xFF to encode sequences which can represent all of Unicode above 0x7F. You would gain a significant advantage when the majority of your text does not use characters in the ranges 0x80-0x9F or 0x0100-0x10FFFF, but correspondingly lose when this is not the case.
Or you could require the user to keep a state variable which tells you which range of characters is currently selected, and have each byte in the stream act as an index into that range. This has significant disadvantages, but used to be how these things were done way back when (witness e.g. ISO-2022).
The original UTF-8 draft before Ken Thompson and Rob Pike famously intervened was probably also somewhat more space-efficient than the final specification, but the changes they introduced had some very attractive properties, trading (I assume) some space efficiency for lack of contextual ambiguity.
I would urge you to read the Wikipedia article about UTF-8 to understand the design desiderata -- the spec is possible to grasp in just a few minutes, although you might want to reserve an hour or more to follow footnotes etc. (The Thompson anecdote is currently footnote #7.)
All in all, unless you are working on space travel or some similarly efficiency-intensive application, losing UTF-8 compatibility is probably not worth the time you have already spent, and you should stop now.

0xF8-0xFF are not valid anywhere in UTF-8, and some other bytes are not valid at certain positions.
The lead byte of a character indicates the number of bytes used to encode the character, and each continuation byte has 10 as its two high-order bits. This is so that you can pick any byte within the text and find the start of the character containing it. If you don't mind losing this ability, you could certainly come up with a more efficient encoding.
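Here is a small illustrative sketch (Python assumed; the helper names are made up) of how those bit patterns let you resynchronize to a character boundary from an arbitrary byte position:

    def is_continuation(byte: int) -> bool:
        return byte & 0b1100_0000 == 0b1000_0000    # continuation bytes look like 10xxxxxx

    def char_start(data: bytes, pos: int) -> int:
        while pos > 0 and is_continuation(data[pos]):
            pos -= 1                                # back up until we hit the lead byte
        return pos

    data = "naïve €".encode("utf-8")
    # byte 3 is the second byte of 'ï' (0xC3 0xAF), so its character starts at offset 2
    print(char_start(data, 3))                      # -> 2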

You have to distinguish Characters, Unicode and UTF-8 encoding:
In encodings like ASCII, Latin-1, etc. there is a one-to-one relation between one character and one number from 0 to 255, so a character can be encoded by exactly one byte (e.g. "A" -> 65). To decode such a text you need to know which encoding was used (does 65 really mean "A"?).
To overcome this situation, Unicode assigns every character (including all kinds of special things like control characters, diacritic marks, etc.) a unique number in the range from 0 to 0x10FFFF (a so-called Unicode codepoint). As this range does not fit into one byte, the question is how to encode it. There are several ways to do this; the simplest would be to always use 4 bytes per character. As this consumes a lot of space, a more efficient encoding is UTF-8: here every Unicode codepoint (= character) is encoded in one, two, three or four bytes (for this encoding not all byte values from 0 to 255 are used, but that is only a technical detail).

Related

Understanding the need of encoding and decoding in context to saving the strings on disk

I have read the answer here. I understand what a byte stream is (a stream of bytes, i.e. 1s and 0s), what encoding is (a mapping from the characters that we humans understand to corresponding bytes) and what decoding is (the reverse mapping from bytes back to characters).
I still cannot reconcile the entire concept in my head. In RAM we already have everything as bytes. And I guess my interpreter is inherently using some decoding scheme to show me the characters corresponding to that byte stream. What, then, do we mean by having to encode before saving to disk? If my interpreter is using 'utf-8' to show us this text that I am typing, and I ask it to save this text using 'cp-1252', have I changed the underlying byte stream?
There are different ways to see it.
One way: "Hello World!" could be encoded in different ways. What you want is the semantics of the string: a salutation and a target. But if you save it to a UTF-8 file, you will get different byte values than in a UTF-16LE file, or in an EBCDIC encoding.
E.g. "A" is 65 in the ASCII encoding, but 193 in the EBCDIC encoding (used e.g. by many IBM mainframes), and 0 65 in a UTF-16 encoding (or 65 0). So when you save text, you need to specify the encoding (as expected by the reader, so it may depend on the file format).
Also, the libraries of a language may not handle all encodings (for all functions). It is usually better to decode using the standard libraries on input, and encode again when the data goes out. That way you only need to implement encoding and decoding (e.g. for EBCDIC), and not all of sorting, upper/lower case handling, is_digit, is_symbol, etc. for every encoding.
It is standard practice to separate semantics from raw values, or display from logic. If you are a control freak, you can do everything without decoding the values, but it is error prone, and you would have to know so many details that few people want to know.
Another example: do you need to know the real values of your data/strings? You have a number; is it encoded little-endian or big-endian? Or maybe as a float (e.g. in JavaScript)? We only care when we save the data (e.g. to send it over the internet we need a way to tell the ordering; or when saving images we record the ordering, so on some machines the bytes will be swapped when reading a large number).
Or another example: you take a selfie. You have an image, but you can save it as a PNG file or a JPEG file: you will get very different files, with different values. But you know the encoding (fortunately, for such image files, the first bytes describe the format, and then a few details about the encoding). For you it is enough to know that it is your image. But do you think the computer works directly on the bytes of either format? Probably not. When you read the image, it will be converted to a different encoding in memory (though you probably do not need to care about it): often an RGB (or RGBA) format, but with how many bits per channel, or with what colour rendering (from profiles), you do not know [JPEG stores it as YCC].
Python takes a stricter, semantic view: you do not know how Python will encode a string internally. It may be 8-bit (ASCII/Latin-1), 16-bit (UCS-2) or 32-bit (UTF-32); it chooses the internal representation dynamically, according to the most efficient way to store the string. You can still get the code point of each character, and use the many string/character functions. Only when you encode a string do you get a fixed sequence of bytes. On the string side you really do not know how strings are represented in memory. This keeps the two parts of Unicode clearly separated: the semantic values (the description of all characters), and encoding/decoding (how to represent those values in bytes).
When you are handling a string in Python, you should care only about the semantics. The implementation (and so the physical layout of the string in memory) is not your business, and Python may change it (it has changed it).
But with your example:
You may not notice much of this, because of recent standardisation: ASCII has become nearly the only encoding for the most common Latin letters and symbols; Latin-1 is compatible with ASCII, just extending it from 7 bits to 8; "Windows ANSI" (cp-1252) takes Latin-1 and adds characters in the unallocated part; and Unicode based its first 256 code points on Latin-1. So today you tend to see a character with a fixed number (or not available at all), but this was not always the rule, even in early Windows.
So your cp-1252 agrees with Unicode (and hence with what UTF-8 encodes) for most characters, apart from a few. But if you use other encodings, you may have to do much more transcoding (converting from one encoding to another). Usually you do this only when you save: you keep the internal encoding, and write out a converted copy.
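A minimal Python sketch of this "decode on the way in, work on characters, encode on the way out" pattern might look like this (the file name is made up for illustration):

    # Decode bytes into the language's string type as early as possible,
    # work on characters, and only encode again when writing out.
    raw = "Héllo Wörld!".encode("cp1252")        # pretend this came from a cp-1252 file

    text = raw.decode("cp1252")                  # bytes -> characters (decoding)
    text = text.upper()                          # string logic works on characters

    with open("out_utf8.txt", "wb") as f:        # hypothetical output file
        f.write(text.encode("utf-8"))            # characters -> bytes (encoding)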
A byte is 8 bits, whether it is in RAM, on disk, or on the wire.
A bit is the "atom" of computer data. A byte is the "molecule", except that there is only one kind of byte.
A bit is the smallest unit of information in computers. It is usually said to represent 0 or 1, or OFF or ON.
Whether you "interpret" a byte as a number (0 to 255), a signed number (-128 to +127), or an "ascii" character, like the characters I am typing, depends on what you (or the computer) does with the byte. Or a byte can be part of a bigger number, one that requires several bytes to represent.
Because there are too many "letters" or "characters" (especially in Chinese) to fit in a byte, there is the additional concept that a "character" may be composed of multiple bytes. UTF-8 is the main standard today. Giacomo discusses several less-common encodings that say what "character" is represented by a byte (or bytes). Remember, each byte is composed of 8 bits.
English letters, numbers, and some punctuation are represented (encoded) in bytes in the same way in ASCII, Latin-1, cp-1252, and UTF-8 (and some other encodings). But as soon as you get into European accented letters, the encodings diverge.
A common thing you may hear of is to represent one byte as two hexadecimal digits.
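If you want to see that divergence for yourself, one way (assuming Python) is to dump the same text as hexadecimal bytes under several encodings:

    # "Hello" looks the same in ASCII, Latin-1, cp1252 and UTF-8,
    # but an accented letter encodes differently (or not at all).
    for enc in ("ascii", "latin-1", "cp1252", "utf-8"):
        try:
            print(enc, "Hello é".encode(enc).hex(" "))
        except UnicodeEncodeError:
            print(enc, "cannot encode 'é'")
    # latin-1 and cp1252 end in e9, utf-8 ends in c3 a9, ascii fails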

Understanding encoding schemes

I cannot understand some key elements of encoding:
Is ASCII only a character set, or does it also have an encoding scheme/algorithm?
Do other Windows code pages, such as Latin-1, have their own encoding algorithms?
Are UTF-7, 8, 16 and 32 the only encoding algorithms?
Are the UTF algorithms used only with the Unicode character set?
Given the ASCII text "Hello World", if I want to convert it into Latin-1 or Big5, which encoding algorithm is used in the process? More specifically, do Latin-1/Big5 use their own encoding algorithms, or do I have to use a UTF algorithm?
1: Ascii is just an encoding — a really simple encoding. It's literally just the positive end of a signed byte (0...127) mapped to characters and control codes.
Refer to https://www.ascii.codes/ to see the full set and inspect the characters.
There are definitely encoding algorithms to convert ascii strings to and from strings in other encodings, but there is no compression/decompression algorithm required to write or read ascii strings like there is for utf8 or utf16, if that's what you're implying.
2: LATIN-1 is also not a compressed (usually called 'variable width') encoding, so there's no algorithm needed to get in and out of it.
See https://kb.iu.edu/d/aepu for a nice description of LATIN-1 conceptually and of each character in the set. Like a lot of encodings, its first 128 slots are just ascii. Like ascii, it's 1 byte in size, but it's an unsigned byte, so after the last ascii character (DEL/127), LATIN1 adds another 128 characters.
As with any conversion from one string encoding to another, there is an algorithm specifically tailored to that conversion.
3: Again, unicode encodings are just that — encodings. But they're all compressed except for utf32. So unless you're working with utf32 there is always a compression/decompression step required to write and read them.
Note: When working with utf32 strings there is one nonlinear oddity that has to be accounted for... combining characters. Technically that is yet another type of compression since they save space by not giving a codepoint to every possible combination of uncombined character and combining character. They "precombine" a few, but they would run out of slots very quickly if they did them all.
4: Yes. The compression/decompression algorithms for the compressed unicode encodings are just for those encodings. They would not work for any other encoding.
Think of it like zip/unzip. Unzipping anything other than a zipped file or folder would of course not work. That goes for things that are not compressed in the first place and also things that are compressed but using another compression algorithm (e.g.: rar).
I recently wrote the utf8 and utf16 compression/decompression code for a new cross-platform library being developed, and I can tell you quite confidently if you feed a Big5-encoded string into my method written specifically for decompressing utf8... not only would it not work, it might very well crash.
Re: your "Hello World" question... Refer to my answer to your second question about LATIN-1. No conversion is required to go from ascii to LATIN-1 because the first 128 characters (0...127) of LATIN-1 are ascii. If you're converting from LATIN-1 to ascii, the same is true for the lower half of LATIN-1, but if any of the characters beyond 127 are in the string, it would be what's called a "lossy"/partial conversion or an outright failure, depending on your tolerance level for lossiness. In your example, however, all of the characters in "Hello World" have the exact same values in both encodings, so it would convert perfectly, without loss, in either direction.
I know practically nothing about Big5, but regardless, don't use utf-x algos for other encodings. Each one of those is written very specifically for 1 particular encoding (or in the case of conversion: pair of encodings).
If you're curious about utf8/16 compression/decompression algorithms, the unicode website is where you should start (watch out though. they don't use the compression/decompression metaphor in their documentation):
http://unicode.org
You probably won't need anything else.
... except maybe a decent codepoint lookup tool: https://www.unicode.codes/
You can roll your own code based on the unicode documentation, or use the official unicode library:
http://site.icu-project.org/home
Hope this helps.
In general, most encoding schemes like ASCII or Latin-1 are simply big tables mapping characters to specific byte sequences. There may or may not be some specific algorithm how the creators came up with those specific character⟷byte associations, but there's generally not much more to it than that.
One of the innovations of Unicode specifically is the indirection of assigning each character a unique number first and foremost, and worrying about how to encode that number into bytes secondarily. There are a number of encoding schemes for how to do this, from the UCS and GB 18030 encodings to the most commonly used UTF-8/UTF-16 encodings. Some, like UCS-2, are largely defunct by now. Each one has its pros and cons in terms of space tradeoffs, ease of processing and transportability (e.g. UTF-7 for safe transport over 7-bit systems like email). Unless otherwise noted, they can all encode the full set of current Unicode characters.
To convert from one encoding to another, you pretty much need to map bytes from one table to another. Meaning, if you look at the EBCDIC table and the Windows 1250 table, the characters 0xC1 and 0x41 respectively both seem to represent the same character "A", so when converting between the two encodings, you'd map those bytes as equivalent. Yes, that means there needs to be one such mapping between each possible encoding pair.
Since that is obviously rather laborious, modern converters virtually always go through Unicode as a middleman. This way each encoding only needs to be mapped to the Unicode table, and the conversion can be done with encoding A → Unicode code point → encoding B. In the end you just want to identify which characters look the same/mean the same, and change the byte representation accordingly.
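A concrete sketch of the middleman idea (Python assumed; cp500 is one of the EBCDIC code pages shipped with Python):

    # Convert EBCDIC bytes to Windows code page bytes by going through Unicode.
    ebcdic = b"\xc1\xc2\xc3"                   # "ABC" in EBCDIC (code page 500)

    text = ebcdic.decode("cp500")              # encoding A -> Unicode code points
    cp1250 = text.encode("cp1250")             # Unicode code points -> encoding B

    print(text, cp1250)                        # ABC b'ABC'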
A character encoding is a mapping from a sequence of characters to a sequence of bytes (in the past there were also encodings to a sequence of bits - they are falling out of fashion). Usually this mapping is one-to-one but not necessarily onto. This means there may be byte sequences that don't correspond to a character sequence in this encoding.
The domain of the mapping defines which characters can be encoded.
Now to your questions:
1: ASCII is both: it defines 128 characters (some of them are control codes) and how they are mapped to the byte values 0 to 127.
2: Each encoding may define its own set of characters and how they are mapped to bytes.
3: No, there are others as well: ASCII, ISO-8859-1, ...
4: Unicode uses a two-step mapping: first the characters are mapped to (relatively) small integers called "code points", then these integers are mapped to a byte sequence. The first part is the same for all UTF encodings; the second step differs. Unicode has the ambition to contain all characters, which means most characters are in the "Unicode set".
Every character in the world has been assigned a Unicode value [numbered from 0 upwards]. It is a unique value. Now it is up to the individual how to use that Unicode value: it can be used directly, or through a known encoding scheme like UTF-8, UTF-16, etc. An encoding scheme maps that Unicode value to a specific bit sequence [which can vary from 1 byte to 4 bytes, or maybe 8 in the future if we get to know all the languages of the universe/aliens/multiverse] so that it can be uniquely identified within the encoding scheme.
For example, ASCII is an encoding scheme which encodes only 128 characters out of all characters. It uses one byte for every character, which is the same as the UTF-8 representation for those characters. GSM-7 is another format, which uses 7 bits per character to encode 128 characters from the Unicode character list.
UTF-8:
It uses 1 byte for characters whose Unicode value is up to 127.
Beyond this it has its own way of representing the Unicode values:
it uses 2 bytes for Cyrillic and 3 bytes for Hindi (Devanagari) characters.
UTF-16:
It uses 2 bytes for characters whose Unicode value is up to 127,
and it also uses 2 bytes for Cyrillic and Hindi characters.
All the UTF encoding schemes fix the initial bits in a specific pattern [e.g. 110|rest-bits], and the remaining bits [e.g. initial-bits|11001] carry the Unicode value, so that each character has a unique representation in the encoding scheme.
The Wikipedia articles on UTF-8, UTF-16 and Unicode will make this clear.
I once coded a UTF translator which converts incoming UTF-8 text, across all languages, into its equivalent UTF-16 text.
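The byte counts described above are easy to check yourself; a small Python sketch (the -be codec variant is used so no BOM is counted):

    # Bytes per character in UTF-8 vs UTF-16 for different scripts.
    for ch in ("A", "Я", "ह", "€"):            # ASCII, Cyrillic, Devanagari (Hindi), a symbol
        utf8 = ch.encode("utf-8")
        utf16 = ch.encode("utf-16-be")         # big-endian variant: no byte order mark
        print(ch, len(utf8), len(utf16))
    # A 1 2 / Я 2 2 / ह 3 2 / € 3 2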

Two bytes of unicode letters is a myth?

I have read an article that talks about text encoding. It says that the claim that a Unicode letter takes two bytes is a myth.
It explains why, but my English is not good enough to understand the reasons.
Could anyone here explain whether that is true, and why? Please keep the English as simple as possible.
It can need more or fewer bytes, depending on the Unicode format and which character you wish to represent. At most 4 bytes per character:
Character encoding standards define not only the identity of each
character and its numeric value, or code point, but also how this
value is represented in bits.
The Unicode Standard defines three encoding forms that allow the same
data to be transmitted in a byte, word or double word oriented format
(i.e. in 8, 16 or 32-bits per code unit). All three encoding forms
encode the same common character repertoire and can be efficiently
transformed into one another without loss of data. The Unicode
Consortium fully endorses the use of any of these encoding forms as a
conformant way of implementing the Unicode Standard.
UTF-8 is popular for HTML and similar protocols. UTF-8 is a way of
transforming all Unicode characters into a variable length encoding of
bytes. It has the advantages that the Unicode characters corresponding
to the familiar ASCII set have the same byte values as ASCII, and that
Unicode characters transformed into UTF-8 can be used with much
existing software without extensive software rewrites.
UTF-16 is popular in many environments that need to balance efficient
access to characters with economical use of storage. It is reasonably
compact and all the heavily used characters fit into a single 16-bit
code unit, while all other characters are accessible via pairs of
16-bit code units.
UTF-32 is useful where memory space is no concern, but fixed width,
single code unit access to characters is desired. Each Unicode
character is encoded in a single 32-bit code unit when using UTF-32.
All three encoding forms need at most 4 bytes (or 32-bits) of data for
each character.
See http://www.unicode.org/standard/principles.html
Windows, and many legacy applications, have traditionally used 16 bits (two bytes) to represent Unicode characters, but the actual code point range requires 21 bits (0x000000 to 0x10FFFF). That's why there are so many different encodings (UTF-8 and so on). Today, a common internal representation of Unicode characters inside programs is UTF-32 (32 bits, 4 bytes), while most text is stored on disk in UTF-8.
For more information about the different unicode encoding schemes see this Wikipedia article: http://en.wikipedia.org/wiki/Comparison_of_Unicode_encodings
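A quick way to convince yourself that "two bytes per character" is a myth is to measure a few characters directly (a Python sketch; the emoji is just an arbitrary character outside the BMP):

    # Characters can need 1-4 bytes in UTF-8 and 2 or 4 bytes in UTF-16.
    for ch in ("A", "é", "€", "😀"):
        print(hex(ord(ch)), len(ch.encode("utf-8")), len(ch.encode("utf-16-be")))
    # 0x41 1 2 / 0xe9 2 2 / 0x20ac 3 2 / 0x1f600 4 4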

What is the Best UTF [closed]

I'm really confused about the UTFs in Unicode.
There are UTF-8, UTF-16 and UTF-32.
My questions are:
Which UTFs support all the Unicode blocks?
What is the best UTF (performance, size, etc.), and why?
What are the differences between these three UTFs?
What are endianness and byte order marks (BOM)?
Thanks
Which UTFs support all the Unicode blocks?
All UTF encodings support all Unicode blocks - there is no Unicode codepoint that a UTF encoding cannot represent. However, some non-UTF, older encodings, such as UCS-2 (which is like UTF-16, but lacks surrogate pairs, and thus lacks the ability to encode codepoints above 65535/U+FFFF), may not.
What is the best UTF (performance, size, etc.), and why?
For textual data that is mostly English and/or just ASCII, UTF-8 is by far the most space-efficient. However, UTF-8 is sometimes less space-efficient than UTF-16 and UTF-32 where most of the codepoints used are high (such as large bodies of CJK text).
What are the differences between these three UTFs?
UTF-8 encodes each Unicode codepoint using one to four bytes. The Unicode values 0 to 127, which are the same as they are in ASCII, are encoded as they are in ASCII. Bytes with values 128 to 255 are used for multi-byte codepoints.
UTF-16 encodes each Unicode codepoint in either two bytes (one UTF-16 value) or four bytes (two UTF-16 values). Anything in the Basic Multilingual Plane (Unicode codepoints 0 to 65535, or U+0000 to U+FFFF) is encoded with one UTF-16 value. Codepoints from higher planes use two UTF-16 values, through a technique called 'surrogate pairs'.
UTF-32 is not a variable-length encoding for Unicode; all Unicode codepoint values are encoded as-is. This means that U+10FFFF is encoded as 0x0010FFFF.
What are endianness and byte order marks (BOM)?
Endianness is the order in which a piece of data, a particular CPU architecture, or a protocol arranges the bytes of multi-byte data types. Little-endian systems (such as x86-32 and x86-64 CPUs) put the least-significant byte first, and big-endian systems (such as ARM, PowerPC and many networking protocols) put the most-significant byte first.
In a little-endian encoding or system, the 32-bit value 0x12345678 is stored or transmitted as 0x78 0x56 0x34 0x12. In a big-endian encoding or system, it is stored or transmitted as 0x12 0x34 0x56 0x78.
A byte order mark is used in UTF-16 and UTF-32 to signal which endianness the text is to be interpreted as. Unicode does this in a clever way -- U+FEFF is a valid codepoint, used for the byte order mark, while U+FFFE is not. Therefore, if a file starts with 0xFF 0xFE, it can be assumed that the rest of the file is stored in a little-endian byte ordering.
A byte order mark in UTF-8 is technically possible, but is meaningless in the context of endianness for obvious reasons. However, a stream that begins with the UTF-8 encoded BOM almost certainly implies that it is UTF-8, and thus can be used for identification because of this.
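Here is a small sketch of that BOM-sniffing logic (Python assumed; only the three BOMs discussed above are checked, so a UTF-32 BOM would be misreported):

    import codecs

    def sniff(data: bytes) -> str:
        # Guess the encoding of a text stream from its byte order mark, if any.
        if data.startswith(codecs.BOM_UTF8):      # EF BB BF
            return "utf-8 (with BOM)"
        if data.startswith(codecs.BOM_UTF16_LE):  # FF FE
            return "utf-16, little-endian"
        if data.startswith(codecs.BOM_UTF16_BE):  # FE FF
            return "utf-16, big-endian"
        return "no BOM - decide some other way"

    # Python's 'utf-16' codec writes a BOM in native byte order (FF FE on most machines).
    print(sniff("hi".encode("utf-16")))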
Benefits of UTF-8
ASCII is a subset of the UTF-8 encoding and therefore is a great way to introduce ASCII text into a 'Unicode world' without having to do data conversion
UTF-8 text is the most compact format for ASCII text
Valid UTF-8 can be sorted on byte values and result in sorted codepoints
Benefits of UTF-16
UTF-16 is easier than UTF-8 to decode, even though it is a variable-length encoding
UTF-16 is more space-efficient than UTF-8 for characters in the BMP, but outside ASCII
Benefits of UTF-32
UTF-32 is not variable-length, so it requires no special logic to decode
“Answer me these questions four, as all were answered long before.”
You really should have asked one question, not four. But here are the answers.
All UTF transforms by definition support all Unicode code points. That is something you needn’t worry about. The only problem is that some systems are really UCS-2 yet claim they are UTF-16, and UCS-2 is severely broken in several fundamental ways:
UCS-2 is not a valid Unicode encoding.
UCS-2 supports only ¹⁄₁₇ᵗʰ of Unicode. That is, Plane 0 only, not Planes 1–16.
UCS-2 permits code points that The Unicode Standard guarantees will never be in a valid Unicode stream. These include
all 2,048 UTF-16 surrogates, code points U+D800 through U+DFFF
the 32 non-character code points between U+FDD0 and U+FDEF
both sentinels at U+FFFE and U+FFFF
For what encoding is used internally by seven different programming languages, see slide 7 on Feature Support Summary in my OSCON talk from last week entitled “Unicode Support Shootout”. It varies a great deal.
UTF-8 is the best serialization transform of a stream of logical Unicode code points because, in no particular order:
UTF-8 is the de facto standard Unicode encoding on the web.
UTF-8 can be stored in a null-terminated string.
UTF-8 is free of the vexing BOM issue.
UTF-8 risks no confusion of UCS-2 vs UTF-16.
UTF-8 compacts mainly-ASCII text quite efficiently, so that even Asian texts that are in XML or HTML often wind up being smaller in bytes than UTF-16. This is an important thing to know, because it is a counterintuitive and surprising result. The ASCII markup tags often make up for the extra byte. If you are really worried about storage, you should be using proper text compression, like LZW and related algorithms. Just bzip it.
If need be, it can be roped into use for trans-Unicodian points of arbitrarily large magnitude. For example, MAXINT on a 64-bit machine becomes 13 bytes using the original UTF-8 algorithm. This property is of rare usefulness, though, and must be used with great caution lest it be mistaken for a legitimate UTF-8 stream.
I use UTF-8 whenever I can get away with it.
I have already given properties of UTF-8, so here are some for the other two:
UTF-32 enjoys a singular advantage for internal storage: O(1) access to code point N. That is, constant time access when you need random access. Remember we lived forever with O(N) access in C’s strlen function, so I am not sure how important this is. My impression is that we almost always process our strings in sequential not random order, in which case this ceases to be a concern. Yes, it takes more memory, but only marginally so in the long run.
UTF-16 is a terrible format, having all the disadvantages of UTF-8 and UTF-32 but none of the advantages of either. It is grudgingly true that when properly handled, UTF-16 can certainly be made to work, but doing so takes real effort, and your language may not be there to help you. Indeed, your language is probably going to work against you instead. I’ve worked with UTF-16 enough to know what a royal pain it is. I would stay clear of both these, especially UTF-16, if you possibly have any choice in the matter. The language support is almost never there, because there are massive pods of hysterical porpoises all contending for attention. Even when proper code-point instead of code-unit access mechanisms exist, these are usually awkward to use and lengthy to type, and they are not the default. This leads too easily to bugs that you may not catch until deployment; trust me on this one, because I’ve been there.
That’s why I’ve come to talk about there being a UTF-16 Curse. The only thing worse than The UTF-16 Curse is The UCS-2 Curse.
Endianness and the whole BOM thing are problems that curse both UTF-16 and UTF-32 alike. If you use UTF-8, you will not ever have to worry about these.
I sure do hope that you are using logical (that is, abstract) code points internally with all your APIs, and worrying about serialization only for external interchange alone. Anything that makes you get at code units instead of code points is far far more hassle than it’s worth, no matter whether those code units are 8 bits wide or 16 bits wide. You want a code-point interface, not a code-unit interface. Now that your API uses code points instead of code units, the actual underlying representation no longer matters. It is important that this be hidden.
Category Errors
Let me add that everyone talking about ASCII versus Unicode is making a category error. Unicode is very much NOT “like ASCII but with more characters.” That might describe ISO 10646, but it does not describe Unicode. Unicode is not merely a particular repertoire but rules for handling them. Not just more characters, but rather more characters that have particular rules accompanying them. Unicode characters without Unicode rules are no longer Unicode characters.
If you use an ASCII mindset to handle Unicode text, you will get all kinds of brokenness, again and again. It doesn’t work. As just one example of this, it is because of this misunderstanding that the Python pattern-matching library, re, does the wrong thing completely when matching case-insensitively. It blindly assumes two code points count as the same if both have the same lowercase. That is an ASCII mindset, which is why it fails. You just cannot treat Unicode that way, because if you do you break the rules and it is no longer Unicode. It’s just a mess.
For example, Unicode defines U+03C3 GREEK SMALL LETTER SIGMA and U+03C2 GREEK SMALL LETTER FINAL SIGMA as case-insensitive versions of each other. (This is called Unicode casefolding.) But since they don’t change when blindly mapped to lowercase and compared, that comparison fails. You just can’t do it that way. You can’t fix it in the general case by switching the lowercase comparison to an uppercase one, either. Using casemapping when you need to use casefolding belies a shaky understanding of the whole works.
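A quick Python 3 illustration with exactly this pair of characters:

    # U+03C3 (σ) and U+03C2 (ς, final sigma) compare equal only under casefolding.
    sigma, final_sigma = "\u03c3", "\u03c2"
    print(sigma.lower() == final_sigma.lower())          # False - blind casemapping misses it
    print(sigma.casefold() == final_sigma.casefold())    # True  - casefolding maps ς -> σ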
(And that’s nothing: Python 2 is broken even worse. I recommend against using Python 2 for Unicode; use Python 3 if you want to do Unicode in Python. For Pythonistas, the solution I recommend for Python’s innumerably many Unicode regex issues is Matthew Barnett’s marvelous regex library for Python 2 and Python 3. It is really quite neat, and it actually gets Unicode casefolding right — amongst many other Unicode things that the standard re gets miserably wrong.)
REMEMBER: Unicode is not just more characters: Unicode is rules for handling more characters. One either learns to work with Unicode, or else one works against it, and if one works against it, then it works against you.
All of them support all Unicode code points.
They have different performance characteristics - for example, UTF-8 is more compact for ASCII characters, whereas UTF-32 makes it easier to deal with the whole of Unicode including values outside the Basic Multilingual Plane (i.e. above U+FFFF). Due to its variable width per character, UTF-8 strings are hard to use to get to a particular character index in the binary encoding - you have to scan through. The same is true for UTF-16 unless you know that there are no non-BMP characters.
It's probably easiest to look at the wikipedia articles for UTF-8, UTF-16 and UTF-32
Endianness determines (for UTF-16 and UTF-32) whether the most significant byte comes first and the least significant byte comes last, or vice versa. For example, if you want to represent U+1234 in UTF-16, that can either be { 0x12, 0x34 } or { 0x34, 0x12 }. A byte order mark indicates which endianness you're dealing with. UTF-8 doesn't have different endiannesses, but seeing a UTF-8 BOM at the start of a file is a good indicator that it is UTF-8.
Some good questions here and already a couple good answers. I might be able to add something useful.
As said before, all three cover the full set of possible codepoints, U+0000 to U+10FFFF.
Depends on the text, but here are some details that might be of interest. UTF-8 uses 1 to 4 bytes per character; UTF-16 uses 2 or 4; UTF-32 always uses 4. A useful thing to note is this: if you use UTF-8, then English text will be encoded with the vast majority of characters in one byte each, but Chinese needs 3 bytes per character. Using UTF-16, English and Chinese will both require 2. So basically UTF-8 is a win for English; UTF-16 is a win for Chinese.
The main difference is mentioned in the answer to #2 above, or as Jon Skeet says, see the Wikipedia articles.
Endianness: For UTF-16 and UTF-32 this refers to the order in which the bytes appear; for example in UTF-16, the character U+1234 can be encoded either as 12 34 (big endian) or 34 12 (little endian). The BOM, or byte order mark, is interesting. Let's say you have a file encoded in UTF-16, but you don't know whether it is big or little endian, and you notice the first two bytes of the file are FE FF. If this were big-endian the character would be U+FEFF; if little endian, it would signify U+FFFE. But here's the thing: in Unicode the codepoint U+FFFE is permanently guaranteed never to be assigned: there is no character there! Therefore we can tell the encoding must be big-endian. The U+FEFF character is harmless here; it is the ZERO WIDTH NO-BREAK SPACE (invisible, basically). Similarly, if the file began with FF FE we would know it is little endian.
Not sure if I added anything to the other answers, but I have found the English vs. Chinese concrete analysis useful in explaining this to others in the past.
One way of looking at it is as size over complexity. Generally they increase in the number of bytes they need to encode text, but decrease in the complexity of decoding the scheme they use to represent characters. Therefore, UTF-8 is usually small but can be complex to decode, whereas UTF-32 takes up more bytes but is easy to decode (but is rarely used, UTF-16 being more common).
With this in mind, UTF-8 is often chosen for network transmission, as it has the smaller size, whereas UTF-16 is chosen where easier decoding is more important than storage size.
BOMs are intended as information at the beginning of files which describes which encoding has been used. This information is often missing though.
Joel Spolsky wrote a nice introductory article about Unicode:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!)

What are Unicode, UTF-8, and UTF-16?

What's the basis for Unicode and why the need for UTF-8 or UTF-16?
I have researched this on Google and searched here as well, but it's not clear to me.
In VSS, when doing a file comparison, sometimes there is a message saying the two files have differing UTFs. Why would this be the case?
Please explain in simple terms.
Why do we need Unicode?
In the (not too) early days, all that existed was ASCII. This was okay, as all that would ever be needed were a few control characters, punctuation, numbers and letters like the ones in this sentence. Unfortunately, today's strange world of global intercommunication and social media was not foreseen, and it is not too unusual to see English, العربية, 汉语, עִבְרִית, ελληνικά, and ភាសាខ្មែរ in the same document (I hope I didn't break any old browsers).
But for argument's sake, let’s say Joe Average is a software developer. He insists that he will only ever need English, and as such only wants to use ASCII. This might be fine for Joe the user, but this is not fine for Joe the software developer. Approximately half the world uses non-Latin characters and using ASCII is arguably inconsiderate to these people, and on top of that, he is closing off his software to a large and growing economy.
Therefore, an encompassing character set including all languages is needed. Thus came Unicode. It assigns every character a unique number called a code point. One advantage of Unicode over other possible sets is that the first 256 code points are identical to ISO-8859-1, and hence also ASCII. In addition, the vast majority of commonly used characters are representable by only two bytes, in a region called the Basic Multilingual Plane (BMP). Now a character encoding is needed to access this character set, and as the question asks, I will concentrate on UTF-8 and UTF-16.
Memory considerations
So how many bytes give access to what characters in these encodings?
UTF-8:
1 byte: Standard ASCII
2 bytes: Arabic, Hebrew, most European scripts (most notably excluding Georgian)
3 bytes: BMP
4 bytes: All Unicode characters
UTF-16:
2 bytes: BMP
4 bytes: All Unicode characters
It's worth mentioning now that characters not in the BMP include ancient scripts, mathematical symbols, musical symbols, and rarer Chinese, Japanese, and Korean (CJK) characters.
If you'll be working mostly with ASCII characters, then UTF-8 is certainly more memory efficient. However, if you're working mostly with non-European scripts, using UTF-8 could be up to 1.5 times less memory efficient than UTF-16. When dealing with large amounts of text, such as large web-pages or lengthy word documents, this could impact performance.
Encoding basics
Note: If you know how UTF-8 and UTF-16 are encoded, skip to the next section for practical applications.
UTF-8: For the standard ASCII (0-127) characters, the UTF-8 codes are identical. This makes UTF-8 ideal if backwards compatibility is required with existing ASCII text. Other characters require anywhere from 2-4 bytes. This is done by reserving some bits in each of these bytes to indicate that it is part of a multi-byte character. In particular, the first bit of each byte is 1 to avoid clashing with the ASCII characters.
UTF-16: For valid BMP characters, the UTF-16 representation is simply its code point. However, for non-BMP characters UTF-16 introduces surrogate pairs. In this case a combination of two two-byte portions map to a non-BMP character. These two-byte portions come from the BMP numeric range, but are guaranteed by the Unicode standard to be invalid as BMP characters. In addition, since UTF-16 has two bytes as its basic unit, it is affected by endianness. To compensate, a reserved byte order mark can be placed at the beginning of a data stream which indicates endianness. Thus, if you are reading UTF-16 input, and no endianness is specified, you must check for this.
As can be seen, UTF-8 and UTF-16 are nowhere near compatible with each other. So if you're doing I/O, make sure you know which encoding you are using! For further details on these encodings, please see the UTF FAQ.
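As a sketch of the surrogate-pair arithmetic mentioned above (Python is used here only to check the result; the constants come from the UTF-16 scheme):

    # Encode a non-BMP code point as a UTF-16 surrogate pair by hand.
    cp = 0x1F600                                  # 😀, outside the BMP
    v = cp - 0x10000                              # 20 bits remain
    high = 0xD800 + (v >> 10)                     # high (lead) surrogate
    low  = 0xDC00 + (v & 0x3FF)                   # low (trail) surrogate
    print(hex(high), hex(low))                    # 0xd83d 0xde00
    print("😀".encode("utf-16-be").hex())          # d83dde00 - matches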
Practical programming considerations
Character and string data types: How are they encoded in the programming language? If they are raw bytes, the minute you try to output non-ASCII characters, you may run into a few problems. Also, even if the character type is based on a UTF, that doesn't mean the strings are proper UTF. They may allow byte sequences that are illegal. Generally, you'll have to use a library that supports UTF, such as ICU for C, C++ and Java. In any case, if you want to input/output something other than the default encoding, you will have to convert it first.
Recommended, default, and dominant encodings: When given a choice of which UTF to use, it is usually best to follow recommended standards for the environment you are working in. For example, UTF-8 is dominant on the web, and since HTML5, it has been the recommended encoding. Conversely, both .NET and Java environments are founded on a UTF-16 character type. Confusingly (and incorrectly), references are often made to the "Unicode encoding", which usually refers to the dominant UTF encoding in a given environment.
Library support: The libraries you are using support some kind of encoding. Which one? Do they support the corner cases? Since necessity is the mother of invention, UTF-8 libraries will generally support 4-byte characters properly, since 1, 2, and even 3 byte characters can occur frequently. However, not all purported UTF-16 libraries support surrogate pairs properly since they occur very rarely.
Counting characters: There exist combining characters in Unicode. For example, the code point U+006E (n), and U+0303 (a combining tilde) forms ñ, but the code point U+00F1 forms ñ. They should look identical, but a simple counting algorithm will return 2 for the first example, and 1 for the latter. This isn't necessarily wrong, but it may not be the desired outcome either.
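A small Python illustration of this (the unicodedata module is used for normalization):

    import unicodedata

    composed   = "\u00f1"                  # ñ as a single code point
    decomposed = "n\u0303"                 # n + combining tilde

    print(composed, decomposed)            # both display as ñ
    print(len(composed), len(decomposed))  # 1 and 2 - naive counting disagrees
    print(unicodedata.normalize("NFC", decomposed) == composed)   # True after normalization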
Comparing for equality: A, А, and Α look the same, but they're Latin, Cyrillic, and Greek respectively. You also have cases like C and Ⅽ. One is a letter, and the other is a Roman numeral. In addition, we have the combining characters to consider as well. For more information, see Duplicate characters in Unicode.
Surrogate pairs: These come up often enough on Stack Overflow, so I'll just provide some example links:
Getting string length
Removing surrogate pairs
Palindrome checking
Unicode
is a set of characters used around the world
UTF-8
a character encoding capable of encoding all possible characters (called code points) in Unicode.
its code unit is 8 bits
uses one to four code units to encode a code point
00100100 for "$" (one 8-bit unit); 11000010 10100010 for "¢" (two 8-bit units); 11100010 10000010 10101100 for "€" (three 8-bit units)
UTF-16
another character encoding
its code unit is 16 bits
uses one or two code units to encode a code point
00000000 00100100 for "$" (one 16-bit unit); 11011000 01010010 11011111 01100010 for "𤭢" (two 16-bit units)
Unicode is a fairly complex standard. Don’t be too afraid, but be
prepared for some work! [2]
Because a credible resource is always needed, but the official report is massive, I suggest reading the following:
The Absolute Minimum Every Software Developer Absolutely, Positively Must Know About Unicode and Character Sets (No Excuses!) An introduction by Joel Spolsky, Stack Exchange CEO.
To the BMP and beyond! A tutorial by Eric Muller, Technical Director then, Vice President later, at The Unicode Consortium (the first 20 slides and you are done)
A brief explanation:
Computers read bytes and people read characters, so we use encoding standards to map characters to bytes. ASCII was the first widely used standard, but covers only Latin (seven bits/character can represent 128 different characters). Unicode is a standard with the goal to cover all possible characters in the world (can hold up to 1,114,112 characters, meaning 21 bits/character maximum. Current Unicode 8.0 specifies 120,737 characters in total, and that's all).
The main difference is that an ASCII character can fit to a byte (eight bits), but most Unicode characters cannot. So encoding forms/schemes (like UTF-8 and UTF-16) are used, and the character model goes like this:
Every character holds an enumerated position from 0 to 1,114,111 (hex: 0-10FFFF) called a code point.
An encoding form maps a code point to a code unit sequence. A code unit is the way you want characters to be organized in memory: 8-bit units, 16-bit units and so on. UTF-8 uses one to four units of eight bits, and UTF-16 uses one or two units of 16 bits, to cover the entire Unicode range of 21 bits maximum. Units use prefixes so that character boundaries can be spotted, and more units mean more prefixes that occupy bits. So, although UTF-8 uses one byte for the Latin script, it needs three bytes for scripts later in the Basic Multilingual Plane, while UTF-16 uses two bytes for all of these. And that's their main difference.
Lastly, an encoding scheme (like UTF-16BE or UTF-16LE) maps (serializes) a code unit sequence to a byte sequence.
character: π
code point: U+03C0
encoding forms (code units):
      UTF-8: CF 80
      UTF-16: 03C0
encoding schemes (bytes):
      UTF-8: CF 80
      UTF-16BE: 03 C0
      UTF-16LE: C0 03
Tip: a hexadecimal digit represents four bits, so a two-digit hex number represents a byte.
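The example can be reproduced directly in code (Python assumed; the -be/-le codec variants are used so that no BOM is prepended):

    ch = "\u03c0"                              # π, code point U+03C0
    print(ch.encode("utf-8").hex(" "))         # cf 80
    print(ch.encode("utf-16-be").hex(" "))     # 03 c0  (big-endian byte order)
    print(ch.encode("utf-16-le").hex(" "))     # c0 03  (little-endian byte order)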
Also take a look at plane maps on Wikipedia to get a feeling of the character set layout.
The article What every programmer absolutely, positively needs to know about encodings and character sets to work with text explains all the details.
Writing to buffer
If you write the symbol あ to a 4-byte buffer with the UTF-8 encoding, your binary will look like this:
00000000 11100011 10000001 10000010
If you write the symbol あ to a 4-byte buffer with the UTF-16 encoding, your binary will look like this:
00000000 00000000 00110000 01000010
As you can see, depending on what language your content uses, this will affect your memory accordingly.
Example: For this particular symbol あ, the UTF-16 encoding is more efficient, since we have 2 spare bytes to use for the next symbol. But that doesn't mean you must use UTF-16 for the Japanese alphabet.
Reading from buffer
Now if you want to read the above bytes back, you have to know what encoding they were written in and decode them correctly.
e.g. If you decode these bytes:
00000000 11100011 10000001 10000010
as UTF-16, you will end up with something like 臣, not あ.
Note: Encoding and Unicode are two different things. Unicode is the big table mapping each symbol to a unique code point; e.g. the symbol (letter) あ has the code point 30 42 (hex). Encoding, on the other hand, is an algorithm that converts symbols into a form more appropriate for storage in hardware.
30 42 (hex) -> UTF-8 encoding -> E3 81 82 (hex), which is the result above in binary.
30 42 (hex) -> UTF-16 encoding -> 30 42 (hex), which is the result above in binary.
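The same experiment in code (a Python sketch; the unused zero padding of the 4-byte buffer is left out):

    a = "あ"                                     # U+3042

    utf8  = a.encode("utf-8")                    # b'\xe3\x81\x82' - three bytes
    utf16 = a.encode("utf-16-be")                # b'\x30\x42'     - two bytes
    print(utf8.hex(" "), utf16.hex(" "))

    # Reading the bytes back with the wrong encoding gives mojibake:
    print(utf8[:2].decode("utf-16-le"))          # 臣 (U+81E3) - not あ
    print(utf16.decode("utf-8"))                 # "0B"        - also not あ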
Originally, Unicode was intended to have a fixed-width 16-bit encoding (UCS-2). Early adopters of Unicode, like Java and Windows NT, built their libraries around 16-bit strings.
Later, the scope of Unicode was expanded to include historical characters, which would require more than the 65,536 code points a 16-bit encoding would support. To allow the additional characters to be represented on platforms that had used UCS-2, the UTF-16 encoding was introduced. It uses "surrogate pairs" to represent characters in the supplementary planes.
Meanwhile, a lot of older software and network protocols were using 8-bit strings. UTF-8 was made so these systems could support Unicode without having to use wide characters. It's backwards-compatible with 7-bit ASCII.
Unicode is a standard which maps the characters in all languages to a particular numeric value called a code point. The reason it does this is that it allows different encodings to be possible using the same set of code points.
UTF-8 and UTF-16 are two such encodings. They take code points as input and encode them using some well-defined formula to produce the encoded string.
Choosing a particular encoding depends upon your requirements. Different encodings have different memory requirements and depending upon the characters that you will be dealing with, you should choose the encoding which uses the least sequences of bytes to encode those characters.
For more in-depth details about Unicode, UTF-8 and UTF-16, you can check out this article,
What every programmer should know about Unicode
Why Unicode? Because ASCII has just 128 characters. The values from 128 to 255 differ between countries, and that's why there are code pages. So they said: let's allow code points up to 1,114,111.
So how do you store the highest code point? You'll need 21 bits to store it, so you'll use a DWORD of 32 bits, with 11 bits wasted. Using a DWORD to store a Unicode character is the easiest way, because the value in your DWORD matches the code point exactly.
But DWORD arrays are of course larger than WORD arrays, and of course even larger than BYTE arrays. That's why there is not only UTF-32, but also UTF-16. But UTF-16 means a WORD stream, and a WORD has 16 bits, so how can the highest code point 1,114,111 fit into a WORD? It cannot!
So they put everything higher than 65535 into a DWORD which they call a surrogate pair. Such a surrogate pair is two WORDs and can be detected by looking at the first 6 bits.
So what about UTF-8? It is a byte array or byte stream, but how can the highest code point 1,114,111 fit into a byte? It cannot! Okay, so they put it into a DWORD too, right? Or possibly a WORD, right? Almost right!
They invented UTF-8 sequences, which means that every code point higher than 127 must be encoded into a 2-byte, 3-byte or 4-byte sequence. Wow! But how can we detect such sequences? Well, everything up to 127 is ASCII and is a single byte. What starts with 110 is a two-byte sequence, what starts with 1110 is a three-byte sequence and what starts with 11110 is a four-byte sequence. The remaining bits of these so-called "start bytes" belong to the code point.
Now, depending on the sequence, following bytes must follow. A following byte starts with 10, and its remaining 6 bits are payload bits that belong to the code point. Concatenate the payload bits of the start byte and the following byte(s) and you'll have the code point. That's all the magic of UTF-8.
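That start-byte/following-byte logic can be written out as a tiny single-character decoder (a sketch only: it assumes valid input and does no error checking):

    def decode_one(seq: bytes) -> int:
        """Decode the first UTF-8 sequence in seq to a code point (no validation)."""
        b0 = seq[0]
        if b0 < 0b10000000:                # 0xxxxxxx: plain ASCII, one byte
            return b0
        elif b0 >> 5 == 0b110:             # 110xxxxx: two-byte sequence
            n, payload = 2, b0 & 0b00011111
        elif b0 >> 4 == 0b1110:            # 1110xxxx: three-byte sequence
            n, payload = 3, b0 & 0b00001111
        else:                              # 11110xxx: four-byte sequence
            n, payload = 4, b0 & 0b00000111
        for b in seq[1:n]:                 # following bytes: 10xxxxxx, 6 payload bits each
            payload = (payload << 6) | (b & 0b00111111)
        return payload

    print(hex(decode_one("€".encode("utf-8"))))   # 0x20ac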
ASCII - Software allocates only one 8-bit byte in memory for a given character. It works well for English characters, as their values fall below 128; adopted characters such as the 'ç' in loanwords like façade fall outside this range.
UTF-8 - Software allocates one to four 8-bit bytes for a given character. What is meant by "variable" here? Let us say you are sending the character 'A' through your HTML pages in the browser (HTML is usually UTF-8): the corresponding decimal value of A is 65, which in binary is 01000001. This requires only one byte. An adopted character such as the 'ç' in façade needs two bytes, as do other European accented characters. Asian characters require a minimum of two bytes and a maximum of four bytes, and emoji require three to four bytes. UTF-8 will cover all of these needs.
UTF-16 allocates a minimum of 2 bytes and a maximum of 4 bytes per character; it will not allocate 1 or 3 bytes. Each character is represented in either 16 bits or 32 bits.
Then why does UTF-16 exist? Originally, Unicode was 16-bit, not 8-bit. Java adopted that original 16-bit form, which became UTF-16.
In a nutshell, you don't need UTF-16 anywhere unless it has already been adopted by the language or platform you are working on.
A Java program invoked by a web browser uses UTF-16 internally, but the web browser sends characters using UTF-8.
UTF stands for Unicode Transformation Format. Basically, in today's world there are scripts in hundreds of languages that are not covered by the basic ASCII used earlier. Hence, the UTFs came into existence.
UTF-8 and UTF-16 are both character encodings; UTF-8's code unit is eight bits, while UTF-16's code unit is 16 bits.