Content-Transfer-Encoding 7bit or 8bit - email

When sending email content, the "Content-Transfer-Encoding" header must be set. I have looked at the headers of many emails I received; some use "7bit" and some use "8bit".
What is the difference between these two? Which is recommended? Is any special encoding of the email body required in order to set these headers?

It can be a bit dense to read, but the "Content-Transfer-Encoding" section of RFC 1341 has all of the details:
http://www.w3.org/Protocols/rfc1341/5_Content-Transfer-Encoding.html
The situation kinda goes from bad to worse. Here's my summary:
Background
SMTP, by definition (RFC 821), limits mail to lines of 1000 characters of 7 bits each. That means that none of the bytes you send down the pipe can have the most significant ("highest-order") bit set to "1".
The content that we want to send will often not obey this restriction inherently. Think of an image file, or a text file that contains Unicode characters: the bytes of these files will often have their 8th bit set to "1". SMTP doesn't allow this, so you need to use "transfer encoding" to describe how you've worked around the mismatch.
The values for the Content-Transfer-Encoding header describe the rule that you've chosen to solve this problem.
7Bit Encoding
7bit simply means "My data consists only of US-ASCII characters, which only use the lower 7 bits for each character." You're basically guaranteeing that all of the bytes in your content already adhere to the restrictions of SMTP, and so it needs no special treatment. You can just read it as-is.
Note that when you choose 7bit, you're agreeing that all of the lines in your content are less than 1000 characters in length.
As long as your content adheres to these rules, 7bit is the best transfer encoding, since there's no extra work necessary; you just read/write the bytes as they come off the pipe. It's also easy to eyeball 7bit content and make sense of it. The idea here is that if you're just writing in "plain English text" you'll be fine, but that wasn't a safe assumption in 2005 and it isn't today.
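To make "already adheres to the restrictions" concrete, here is a minimal Python sketch of the kind of check a mail agent might perform before labeling a part 7bit (the function name, and the 998-octet figure as the 1000-character limit minus CRLF, are my own framing):

```python
def is_7bit_clean(payload: bytes, max_line: int = 998) -> bool:
    """Rough check: no byte uses the 8th bit and no line exceeds the SMTP limit.

    998 octets of content plus the trailing CRLF gives the 1000-character limit.
    """
    if any(b > 0x7F for b in payload):                   # 8th bit set anywhere?
        return False
    return all(len(line) <= max_line for line in payload.split(b"\r\n"))

print(is_7bit_clean(b"Hello, world!\r\n"))               # True
print(is_7bit_clean("Héllo".encode("utf-8")))            # False: 0xC3 0xA9 use the 8th bit
```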
8Bit Encoding
8bit means "My data may include extended ASCII characters; they may use the 8th (highest) bit to indicate special characters outside of the standard US-ASCII 7-bit characters." As with 7bit, there's still a 1000-character line limit.
8bit, just like 7bit, does not actually do any transformation of the bytes as they're written to or read from the wire. It just means that you're not guaranteeing that none of the bytes will have the highest bit set to "1".
This seems like a step up from 7bit, since it gives you more freedom in your content. However, RFC 1341 contains this tidbit:
As of the publication of this document, there are no standardized Internet transports for which it is legitimate to include unencoded 8-bit or binary data in mail bodies. Thus there are no circumstances in which the "8bit" or "binary" Content-Transfer-Encoding is actually legal on the Internet.
RFC 1341 came out over 20 years ago. Since then we've gotten the 8bit MIME extensions in RFC 6152. But even then, line limits may still apply:
Note that this extension does NOT eliminate the possibility of an SMTP server limiting line length; servers are free to implement this extension but nevertheless set a line length limit no lower than 1000 octets.
Binary Encoding
binary is the same as 8bit, except that there's no line-length restriction. You can still include any characters you want, and there's no extra encoding. Similar to 8bit, RFC 1341 states that it's not really a legitimate transfer encoding. RFC 3030 later extended this with BINARYMIME.
Quoted Printable
Before the 8BITMIME extension, there needed to be a way to send content that couldn't be 7bit over SMTP. HTML files (which might have more than 1000-character lines) and files with international characters are good examples of this. The quoted-printable encoding (Defined in Section 5.1 of RFC 1341) is designed to handle this. It does two things:
Defines how to escape non-US-ASCII characters so that they can be represented using only 7-bit characters. (Short version: each one becomes an equals sign followed by two hexadecimal digits.)
Defines that lines will be no greater than 76 characters, and that line breaks will be represented using special characters (which are then escaped).
Quoted Printable, because of the escaping and short lines, is much harder to read by a human than 7bit or 8bit, but it does support a much wider range of possible content.
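As an illustration, Python's standard quopri module implements this scheme; a small sketch (the sample text is mine):

```python
import quopri

text = "Übergroße Zeilen with ümlauts need escaping.\n"
encoded = quopri.encodestring(text.encode("utf-8"))
print(encoded.decode("ascii"))
# Non-ASCII bytes become =XX hex escapes (e.g. =C3=9C for "Ü"), and encoded
# lines longer than 76 characters are soft-wrapped with a trailing "=".
print(quopri.decodestring(encoded).decode("utf-8") == text)  # round-trips: True
```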
Base64 Encoding
If your data is largely non-text (ex: an image file), you don't have many options. 7bit is off the table. 8bit and binary were unsupported prior to the MIME extension RFCs. quoted-printable would work, but is really inefficient (every byte is going to be represented by 3 characters).
base64 is a good solution for this type of data. It encodes 3 raw bytes as 4 US-ASCII characters, which is relatively efficient. RFC 1341 further limits the line length of base64-encoded data to 76 characters to fit within an SMTP message, but that's relatively easy to manage when you're just splitting or concatenating arbitrary characters at fixed lengths.
The big downside is that base64-encoded data is pretty much entirely unreadable by humans, even if it's just "plain" text underneath.
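For a sense of the mechanics and the overhead, here is a short sketch using Python's standard base64 module (the sample data is arbitrary):

```python
import base64

raw = bytes(range(256)) * 4                  # 1024 bytes of arbitrary binary data
encoded = base64.encodebytes(raw)            # 4 ASCII chars per 3 input bytes,
                                             # wrapped to 76-character lines
print(len(raw), len(encoded))                # 1024 -> 1386 (roughly 4/3 expansion plus newlines)
print(all(len(line) <= 76 for line in encoded.splitlines()))  # True
print(base64.decodebytes(encoded) == raw)    # decodes back exactly: True
```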

With Content-Transfer-Encoding: 7bit the bytes used in the body (or, more correctly, within a part's boundaries) should represent ASCII characters, not extended-ASCII characters. That means decimal 0-127 (the 8th bit is not used).
Since the 8th bit is not used, you cannot encode your text as UTF-8 or ISO-8859-7 bytes, because those encodings use the 8th bit. Nor can you add binary content.
With Content-Transfer-Encoding: 8bit you can use any possible byte, which means you can encode your text as UTF-8 or ISO-8859-7 bytes (both assuming the 8BITMIME extension is used in SMTP). You are, however, still unsafe adding binary content, because the maximum line-length restriction still applies and could break your bytes with newlines.
Even with the 7bit Content-Transfer-Encoding you can still set the Content-Type's charset parameter to utf-8, as long as you keep your bytes within the 0-127 range.
For example, one way to represent characters outside ASCII with the 7bit Content-Transfer-Encoding is to use HTML character references (with Content-Type: text/html).
Many email clients will set the Content-Transfer-Encoding to 7bit or 8bit depending on the case, e.g. 7bit when sending English text and 8bit when sending multilingual text. And there are always the options of quoted-printable and base64, whose output also avoids the 8th bit, but that is out of scope for this question.
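To see this behaviour from code, Python's email library lets you request a transfer encoding per part; a minimal sketch (addresses and sample text are mine; choosing 8bit assumes an 8BITMIME-capable path):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "sender@example.com"
msg["To"] = "recipient@example.com"
msg["Subject"] = "Content-Transfer-Encoding demo"

# Pure ASCII text: the library happily labels it 7bit.
msg.set_content("Plain English text.")
print(msg["Content-Transfer-Encoding"])      # 7bit

# Multilingual text: request 8bit explicitly (this assumes 8BITMIME end to end);
# cte="quoted-printable" or cte="base64" are the safe choices when it isn't.
msg.set_content("Καλημέρα κόσμε", charset="utf-8", cte="8bit")
print(msg["Content-Transfer-Encoding"])      # 8bit
```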

Related

How to handle >1000 character lines in 8bit MIME

When using 8BITMIME SMTP, you can set Content-Transfer-Encoding: 8bit in MIME messages and send text without encoding.
Except there is still a line limit of 1000 octets (and the line endings should all be <CR><LF>).
When my library gets arbitrary UTF-8 data from a user, how should I go about splitting lines? Is there any way to split a 1002-octet line in a safe way? And what about a 1002-octet word (without whitespace)?
In Quoted-Printable you can do =<CR><LF>; is there something similar for 8bit?
There is no way for 8bit to have longer lines, just like there is no way for 7bit to (legitimately) contain 8-bit characters. If you want arbitrarily long lines, the binary transfer encoding is available, but the standard, robust approach is to use a content-transfer-encoding such as quoted-printable or base64. Then the content within the encoding can be completely free-form.
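For example, re-encoding an over-long line as quoted-printable lets the standard soft line breaks handle the splitting, and the decoder reassembles the original bytes exactly (a sketch; the sample data is mine):

```python
import quopri

# A single 1200-octet "line" of UTF-8 text with no whitespace and no CR/LF.
long_line = ("ä" * 600).encode("utf-8")

encoded = quopri.encodestring(long_line)
encoded_lines = encoded.split(b"\n")
print(max(len(l) for l in encoded_lines))                 # 76: every encoded line fits the limit
print(all(l.endswith(b"=") for l in encoded_lines[:-1]))  # soft break on every wrapped line
print(quopri.decodestring(encoded) == long_line)          # reassembles exactly: True
```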

Replacing Base64 - What are the limitations?

Because HTTP and HTTPS are already 8-bit clean, there is no need to use an encoding system designed for channels that are not 8-bit clean (such as Base64); we can encode using all 8 bits.
Are there any inherent limitations? I.e., what governs what can be represented by 8 bits, or 256 permutations?
I noticed that UTF-8's single-byte representation can only represent 128 permutations, because the MSB must be 0 to signal that a one-byte representation is being used. So this is not a possibility.
What are the limitations in creating a system that uses all 8 bits specifically for the use of transmitting data in an 8 bit clean system?
The only requirement is that the data must be visibly represented using 256 symbols.
HTTP (or any protocol/system) being 8-bit clean does not mean that you can simply use any 8-bit value wherever you want within the protocol. It means only that the protocol or system is capable of handling 8-bit encoding given the right circumstances.
For example, HTTP uses carriage return+line feed (Hex values 0D0A) to delimit header fields and the body of the message, so you can't use those values together anywhere in the headers. Further, the headers and body may have limitations on their character encoding based on what type of data is contained in them. If the HTTP Content-Type is set to text/html; charset=utf-8, characters in the body like < (Hex value 3C) are reserved for HTML tags. The HTTP body may be 8-bit clean, but that doesn't mean you can put any 8-bit content you want in it; you still have to conform to UTF-8 (or some other encoding) and abide by the content rules that HTML imposes.
The purpose of Base64 is to encode arbitrary binary data for use inside other encoding schemes where characters other than [A-Za-z0-9+/] are reserved for special uses, or are totally invalid (such as inside HTML, or in a URL query string). You cannot just replace Base64 with a full 8-bit encoding scheme because an 8-bit scheme is not valid in situations where Base64 is necessary. This is true even if the protocol you're using is, itself, 8-bit clean.
In short, whichever binary encoding scheme you use depends on much more than just 8-bit clean vs. not 8-bit clean. It depends on the protocol you're using the encoding inside of, what that protocol's control characters are, and in what situations those characters are reserved.
Update:
If all you're really looking to do is return raw binary in an HTTP response, just set the HTTP Content-Type to application/octet-stream. This will allow you to return arbitrary binary in the HTTP body without any need for encoding.
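A minimal sketch with Python's standard http.server, just to show the header at work (the handler class, port, and payload are my own choices):

```python
from http.server import BaseHTTPRequestHandler, HTTPServer

class RawBinaryHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        payload = bytes(range(256))                      # arbitrary binary, no encoding applied
        self.send_response(200)
        self.send_header("Content-Type", "application/octet-stream")
        self.send_header("Content-Length", str(len(payload)))
        self.end_headers()
        self.wfile.write(payload)                        # raw bytes go straight into the body

if __name__ == "__main__":
    HTTPServer(("localhost", 8000), RawBinaryHandler).serve_forever()
```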

Unicode Encodings

I have a question as to how programs parse strings if they do not know a priori the encoding that is used.
As I understand it, the UTF-8 encoding stores ASCII characters with 1 byte, and all other characters with up to as many as 6 (I think it's 6) bytes. Thus, for example, two spaces would be stored in memory as 0x2020.
How, then, would a program be able to tell the difference between this string and the string 0x2020 encoded in UTF-16, which corresponds to a single character that evidently appears similar to the symbol sometimes used to denote the adjoint of an operator in mathematics (I just looked that up here)?
It seems as if the parser would always have to know the encoding of a string beforehand. If so, how is this implemented in practice? Is there a byte preceding each string which tells the parser what encoding is used, or something?
In general, it is not possible to know for certain the exact encoding used based solely on the stream of bytes that can represent text. However, if there is a byte order mark somewhere, you can use it at least as a hint as to what encoding is being used.
But with no hints or some kind of contract/exchange of metadata between the producer and consumer of the text, you can't be 100% sure. You can try using a heuristic, but then you get these kinds of problems if you end up guessing wrong.
If you want to be really sure, set up some kind of protocol or contract between the producer and the consumer of the text so that both the text and the encoding scheme are known. You can hardcode the encoding scheme (for example, your program may parse UTF-8 and only UTF-8), or ensure the producer of the text always prepends a byte order mark or specially designed header bytes to communicate the encoding scheme.
Does the language always store strings in a certain encoding so that the display function could safely assume that the string was encoded, say, using UTF-8?
It depends on the language.
In C#, yes. A char is defined by the language specification (8.2.1) as a UTF-16 code unit, and thus a string is always UTF-16. Just like Java.
In Ruby 1.9, a string is a byte array with an associated Encoding.
But in pre-Unicode languages like C (and badly-designed post-Unicode languages like PHP), a string is just a byte array with no encoding information. You have to rely on convention. It's a real interesting experience to write a program that uses both a library that assumes UTF-8 strings and another that assumes windows-1252 strings.
A question that's equally relevant to all languages is: How do you determine the encoding of a byte array that contains encoded text? There are several different approaches:
Encoding declarations.
In protocols that use MIME types (notably, SMTP and HTTP), you can declare Content-Type: text/html; charset=UTF-8. In HTML, you can use <meta http-equiv="Content-Type" content="text/html; charset=UTF-8"> or the newer <meta charset="UTF-8">. In XML, there's <?xml version="1.0" encoding="UTF-8"?>. In Python source code, there's # -*- coding: UTF-8 -*-.
Unfortunately, such declarations aren't always accurate. And they aren't available at all for locally-stored plain .txt files, so then a different approach must be used.
Byte-order mark (BOM)
Putting the special character U+FEFF at the beginning of a file lets you distinguish between the various UTF encodings.
But it's not usable for legacy encodings like ISO-8859-x or Windows-125x, and not always used with UTF-8.
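A small sketch of that check, using the BOM constants from Python's codecs module (the function name and ordering are mine; the UTF-32 BOMs must be tested before the UTF-16 ones because they share a prefix):

```python
import codecs

_BOMS = [
    (codecs.BOM_UTF32_LE, "utf-32-le"), (codecs.BOM_UTF32_BE, "utf-32-be"),
    (codecs.BOM_UTF16_LE, "utf-16-le"), (codecs.BOM_UTF16_BE, "utf-16-be"),
    (codecs.BOM_UTF8, "utf-8-sig"),
]

def sniff_bom(data: bytes):
    """Return a likely encoding based on a leading BOM, or None if there isn't one."""
    for bom, name in _BOMS:
        if data.startswith(bom):
            return name
    return None

print(sniff_bom("\ufeffhello".encode("utf-16-le")))      # utf-16-le
print(sniff_bom(b"plain ASCII, no BOM here"))            # None
```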
Validation
Some encodings have strict rules about what makes a valid string. The best-known is UTF-8, with its rigid separation of leading/trailing bytes, prohibition of "overlong" encodings, etc. UTF-32 is even easier to recognize because the restriction of Unicode to 17 "planes" means that every code unit must have the form 00 {00-10} xx xx (or xx xx {00-10} 00 for little-endian).
So if text validates as being UTF-8 or UTF-32, you can safely assume that it is. There's a possibility of false positives, but it's very low.
However, this approach doesn't work well for UTF-16, where the false-positive rate is too high. (The only way for an even-length byte array to not be valid UTF-16 is to contain unpaired surrogates, or U+FFFE or U+FFFF.)
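A sketch of the validation approach, simply letting the codec enforce its own rules (the helper function is mine):

```python
def validates_as(data: bytes, encoding: str) -> bool:
    """True if the byte string is well-formed in the given encoding."""
    try:
        data.decode(encoding)
        return True
    except UnicodeDecodeError:
        return False

utf8_bytes = "naïve café".encode("utf-8")
print(validates_as(utf8_bytes, "utf-8"))                 # True
print(validates_as(b"\xff\xfe\xfa", "utf-8"))            # False: malformed UTF-8
# Nearly any even-length byte string also decodes as UTF-16, which is why
# validation alone is a weak signal for that encoding:
print(validates_as(utf8_bytes, "utf-16-le"))             # True here -- a false positive
```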
Statistical analysis
Use character frequency tables of various language/encoding combinations. This is the approach used by chardet (in combination with BOM and validation).
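For instance, with the third-party chardet package (pip install chardet; this is just usage, not part of the standard library):

```python
import chardet  # third-party: pip install chardet

raw = "Съешь же ещё этих мягких французских булок, да выпей чаю.".encode("windows-1251")
print(chardet.detect(raw))
# Returns a dict like {'encoding': ..., 'confidence': ..., 'language': ...}.
# The guess is statistical, so longer samples give more reliable answers.
```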
Falling back on a default encoding
When all else fails, assume ISO-8859-1, windows-1252, or Encoding.Default.

IMAP message encoding problem

Some of the mail contents fetched from the IMAP server look like =C3=B6=C3=BC=C3=B6=C3=BC=C3=B6=C3=BC=. What kind of encoding is this? The mail header encoding is UTF-8, but decoding with UTF-8 I get a scrambled message.
Any help is much appreciated.
Quoted-Printable
It is used to transmit 8-bit data over a 7-bit medium.
Characters are converted from 8-bit into three 7-bit characters of the form =XX, where XX is the hexadecimal code for the 8-bit character; the = character itself becomes =3D.
The length of a line is restricted to 76 characters; soft line breaks are added to comply with this rule by ending a line with = to indicate that it continues on the next line.
https://www.rfc-editor.org/rfc/rfc2045
http://en.wikipedia.org/wiki/Quoted-printable
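Concretely, decoding the sample above as quoted-printable and then as UTF-8 (shown here with Python's quopri module, purely for illustration):

```python
import quopri

sample = b"=C3=B6=C3=BC=C3=B6=C3=BC=C3=B6=C3=BC=\n"   # trailing "=" is a soft line break
decoded_bytes = quopri.decodestring(sample)            # b'\xc3\xb6\xc3\xbc...' -- raw UTF-8
print(decoded_bytes.decode("utf-8"))                   # öüöüöü
```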

RS-232C and Email in 7bit char set

The book "Designing Embedded Hardware" in the chapter "9.3. Old Faithful: RS-232C" mentions that emails are still sent in 7bit char set because of RS-232C:
It's also not unheard of to see RS-232C systems still using 7-bit data frames (another leftover from the '60s), rather than the more common 8-bit. In fact, this is one of the reasons why you'll still see email being sent on the Internet limited to a 7-bit character set, just in case the packets happen to be routed via a serial connection that supports only 7-bit transmissions.
How can I confirm the observation?
Check out the spec. The original rfc822, for ARPA Internet Text Messages, explicitly states:
A message consists of header fields and, optionally, a body. The body is simply a sequence of lines containing ASCII characters.
Since ASCII is 7-bit, voila.
Note, however, that there are a whole bunch of additions to that original spec, all the MIME extensions, which allow message header extensions for non-ASCII text.
The Quoted-printable MIME encoding is specifically designed to encode 8-bit data in 7-bit characters. This encoding is widely used to encode email.
Note also that the text you quoted says "in case the packets happen to be routed via a serial connection" which is misleading, especially if they're talking in a context of IP packets. IP packets assume an 8-bit data path, and cannot be sent directly over a 7-bit RS-232 link without additional encoding (and then it's not a 7-bit data path anymore, it's 8-bit).
The systems that were restricted to 7 bits were already old when email first became popular. The chances that you will find one today approach zero.
Since certain characters have special meaning to email programs (most notably the end-of-line character), it still makes sense to limit the character set.