Testing email encoded for Japanese

After figuring out how to encode email that will be read by a Japanese user (encoding for ISO-2022-JP and then base64 encoding), I need to figure out how to test that this actually works. I'm not fluent in Japanese. How does one go about testing that the email reads correctly? The message I'm sending would be written by my program in English, to be read by a Japanese user.

If you have a hard time dealing with Japanese characters for test cases, why not test using wide Latin? Wide (fullwidth) Latin is the Latin character set rendered at the same full width as CJK characters, with its own code points in ISO-2022-JP and in Unicode. If you get yourself a Japanese IME, you should be able to input them.
Here is a comparison:
Hello World // standard ASCII code points
Ｈｅｌｌｏ　Ｗｏｒｌｄ // Japanese wide Latin
Since you will be able to enter and read everything with ease, this strategy might be good for testing.
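If you do not have a Japanese IME handy, the fullwidth forms can also be generated programmatically: printable ASCII (U+0021..U+007E) maps one-to-one onto the Unicode fullwidth block (U+FF01..U+FF5E), and the space maps to the ideographic space U+3000. A minimal Python sketch (the function name is just for illustration):

def to_fullwidth(text):
    """Convert printable ASCII to the Unicode fullwidth (wide Latin) forms."""
    out = []
    for ch in text:
        if ch == " ":
            out.append("\u3000")                # IDEOGRAPHIC SPACE
        elif "!" <= ch <= "~":
            out.append(chr(ord(ch) + 0xFEE0))   # shift into U+FF01..U+FF5E
        else:
            out.append(ch)
    return "".join(out)

print(to_fullwidth("Hello World"))              # prints Ｈｅｌｌｏ　Ｗｏｒｌｄ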

If you're sending English, you shouldn't need to worry about encoding. ISO-2022-JP starts in ASCII.
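To test the full pipeline without a human reader, you can also round-trip a message through Python's standard email package and check that the text survives ISO-2022-JP plus base64. This is only a sketch under the assumption that your program produces a standard MIME message; the addresses and body text are placeholders:

from email import message_from_bytes, policy
from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Encoding test"
msg["From"] = "sender@example.com"        # placeholder
msg["To"] = "recipient@example.jp"        # placeholder
# Force the charset and transfer encoding the question describes.
msg.set_content("Hello World", charset="iso-2022-jp", cte="base64")

raw = msg.as_bytes()                      # what actually goes over the wire
print(raw.decode("ascii"))                # headers plus a base64 body, all 7-bit safe

# Round-trip: parse the bytes back and confirm nothing was mangled.
parsed = message_from_bytes(raw, policy=policy.default)
assert parsed.get_content().rstrip("\n") == "Hello World"

Swapping a short Japanese string (or the wide-Latin text above) into set_content gives a stronger test than plain ASCII, since it actually exercises the JIS escape sequences.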

Related

Chinese in Japanese encoding

This may sound like a stupid question. I typed some Chinese characters into an empty text file in the VS Code text editor (default UTF-8). Then I saved the file in an encoding for Japanese, Shift JIS, which apparently doesn't cover all the characters I typed in.
However, before I closed the file, all the Chinese characters were displayed properly in VS Code. After I closed the file and reopened it using the Shift JIS encoding, several characters are displayed as a question mark (?). I guess these are the Chinese characters not covered by the Japanese encoding?
What happened in the process? Is there any way I can get back the Chinese characters that are now shown as ?? I don't really understand how encoding works in this scenario...
Not all encodings cover all characters. (Unicode encodings, in principle, do, but even they don't have quite everything yet.) If you save some text in an encoding which does not include all characters in that text, something has to give.
Options:
you get an error message,
nothing saves at all,
the characters which cannot be included are silently dropped,
the characters which cannot be included are converted to some other character (such as the question mark).
Once that conversion is done, the data is lost, and cannot be recovered. Why not use UTF-8 or another Unicode encoding? (GB 18030 might be the best for large amounts of Chinese text.)
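If Python is at hand, you can watch these options happen by encoding a string that mixes characters Shift JIS does and does not cover (the example text is arbitrary; 简 is a simplified Chinese character outside the Japanese repertoire):

text = "日本語 and 简体字"       # 简 has no Shift JIS code point

# The error-message option: the strict handler refuses to encode anything.
try:
    text.encode("shift_jis")
except UnicodeEncodeError as err:
    print("strict:", err)

# The converted-to-another-character option, which matches what the question describes.
lossy = text.encode("shift_jis", errors="replace")
print(lossy.decode("shift_jis"))          # prints 日本語 and ?体字

# The silently-dropped option.
dropped = text.encode("shift_jis", errors="ignore")
print(dropped.decode("shift_jis"))        # prints 日本語 and 体字

Once the bytes on disk contain only the question mark, the original code point is gone; that is why reopening the file cannot bring the characters back.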

Find non-ASCII characters in a text file and convert them to their Unicode equivalent

I am importing a .txt file from a remote server and saving it to a database. I use a .NET script for this purpose. I sometimes notice garbled words/characters (Ullerهkersvنgen) inside the files, which cause problems when saving to the database.
I want to filter out all such characters and convert them to Unicode before saving to the database.
Note: I have been through many similar posts but had no luck.
Your help in this context will be highly appreciated.
Thanks.
Assuming your script knows the correct encoding of your text snippet, this regular expression will find all non-ASCII characters:
[^\x00-\x7F]+
see here: https://stackoverflow.com/a/20890052/1144966 and https://stackoverflow.com/a/8845398/1144966
Also, the base-R tools package provides two functions to detect non-ASCII characters:
tools::showNonASCII()
tools::showNonASCIIfile()
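The character class itself is not specific to any one regex engine (it works the same in .NET's Regex); purely as an illustration, here is a small Python sketch of using it on a file, where the file name and the encoding used to read it are assumptions:

import re

NON_ASCII = re.compile(r"[^\x00-\x7F]+")

# "latin-1" is only a guess at the file's real encoding; see the next answer
# on how to work out what that encoding actually is.
with open("import.txt", encoding="latin-1") as fh:
    for lineno, line in enumerate(fh, 1):
        for match in NON_ASCII.finditer(line):
            print(f"line {lineno}: non-ASCII run {match.group()!r}")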
You need to know or at least guess the character encoding of the data in order to be able to convert it properly. So you should try and find information about the origin and format of the text file and make sure that you read the file properly in your software.
For example, “Ullerهkersvنgen” looks like a Scandinavian name, with Scandinavian letters in it, misinterpreted according to a wrong character encoding assumption or as munged by an incorrect character code conversion. The first Arabic letter in it, “ه”, is U+0647 ARABIC LETTER HEH. In the ISO-8859-6 encoding, it is E7 (hex.); in windows-1256, it is E5. Since Scandinavian text is normally represented in ISO-8859-1 or windows-1252 (when Unicode encodings are not used), it is natural to check what E7 and E5 mean in them: “ç” and “å”. For linguistic reasons, the latter is much more probable here. The second Arabic letter is “ن” U+0646 ARABIC LETTER NOON, which is E4 in windows-1256. And in ISO-8859-1, E4 is “ä”. This makes perfect sense: the word is “Ulleråkersvägen”, a real Swedish street name (in Uppsala, at least).
Thus, the data is probably ISO-8859-1 or windows-1252 (Windows Latin 1) encoded text, incorrectly interpreted as windows-1256 (Windows Arabic). No conversion is needed; you just need to read the data as windows-1252 encoded. (After reading, it can of course be converted to another encoding.)
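If the garbled string has already been stored, this particular kind of mojibake is reversible, because the windows-1256 misreading did not lose any bytes: encode the string back with the wrong codec and decode the resulting bytes with the right one. A Python sketch of that round trip (the same two-step works in any language that exposes both code pages):

garbled = "Ullerهkersvنgen"

# Undo the bad decode: recover the original bytes via windows-1256,
# then interpret those bytes as windows-1252 (Windows Latin 1).
fixed = garbled.encode("windows-1256").decode("windows-1252")
print(fixed)                              # prints Ulleråkersvägen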

What string of characters should a source send to disambiguate the byte-encoding they are using?

I'm decoding bytestreams into unicode characters without knowing the encoding that's been used by each of a hundred or so senders.
Many of the senders are not technically astute, and will not be able to tell me what encoding they are using. It will be determined by the happenstance of the toolchains they are using to generate the data.
The senders are, for the moment, all UK/English based, using a variety of operating systems.
Can I ask all the senders to send me a particular string of characters that will unambiguously demonstrate what encoding each sender is using?
I understand that there are libraries that use heuristics to guess at the encoding - I'm going to chase that up too, as a runtime fallback, but first I'd like to try and determine what encodings are being used, if I can.
(Don't think it's relevant, but I'm working in Python)
A full answer to this question depends on a lot of factors, such as the range of encodings used by the various upstream systems, and how well your users will comply with instructions to type magic character sequences into text fields, and how skilled they will be at the obscure keyboard combinations to type the magic character sequences.
There are some very easy character sequences which only some users will be able to type. Only users with a Cyrillic keyboard and encoding will find it easy to type "Ильи́ч" (Ilyich), and so you only have to distinguish between the Cyrillic-capable encodings like UTF-8, UTF-16, iso8859_5, and koi8_r. Similarly, you could come up with Japanese, Chinese, and Korean character sequences which distinguish between users of Japanese, simplified Chinese, traditional Chinese, and Korean systems.
But let's concentrate on users of western European computer systems, and the common encodings like ISO-8859-15, Mac_Roman, UTF-8, UTF-16LE, and UTF-16BE. A very simple test is to have users enter the Euro character '€', U+20AC, and see what byte sequence gets generated:
byte ['\xa4'] means iso-8859-15 encoding
bytes ['\xe2', '\x82', '\xac'] mean utf-8 encoding
bytes ['\x20', '\xac'] mean utf-16be encoding
bytes ['\xac', '\x20'] mean utf-16le encoding
byte ['\x80'] means cp1252 ("Windows ANSI") encoding
byte ['\xdb'] means macroman encoding
iso-8859-1 won't be able to represent the Euro character at all. iso-8859-15 is the Euro-supporting successor to iso-8859-1.
U.S. users probably won't know how to type a Euro character. (OK, that's too snarky. 3% of them will know.)
You should check that none of these byte sequences, interpreted in any of the other candidate encodings, is a character sequence that users would be likely to type themselves. For instance, the '\xa4' of the iso-8859-15 Euro symbol could also be the iso-8859-1 or cp1252 or UTF-16LE encoding of '¤', the macroman encoding of '§', or the first byte of any of thousands of UTF-16 characters, such as the U+A4xx Yi Syllables or U+01A4 LATIN CAPITAL LETTER P WITH HOOK. It would not be a valid first byte of a UTF-8 sequence. If some of your users submit text in Yi, you might have a problem.
The Python 3.x documentation, 7.2.3. Standard Encodings lists the character encodings which the Python standard library can easily handle. The following program lets you see how a test character sequence is encoded into bytes by various encodings:
>>> euro = '\u20ac'   # the Euro sign, U+20AC
>>> for e in ['iso-8859-1', 'iso-8859-15', 'utf-8', 'utf-16be', 'utf-16le',
...           'cp1252', 'macroman']:
...     print(e, list(euro.encode(e, 'backslashreplace')))
So, as an expedient, satisficing hack, consider telling your users to type a '€' as the first character of a text field, if there are any problems with encoding. Then your system should interpret any of the above byte sequences as an encoding clue, and discard them. If users want to start their text content with a Euro character, they start the field with '€€'; the first gets swallowed, the second remains part of the text.
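A minimal Python sketch of that convention, where the signature table is the one given above and the fallback behaviour is an assumption you would tune for your own senders:

# Byte sequences produced by a leading '€' (U+20AC) in the candidate encodings.
EURO_SIGNATURES = [
    (b"\xe2\x82\xac", "utf-8"),
    (b"\x20\xac",     "utf-16-be"),
    (b"\xac\x20",     "utf-16-le"),
    (b"\xa4",         "iso-8859-15"),
    (b"\x80",         "cp1252"),
    (b"\xdb",         "mac_roman"),
]

def sniff_and_decode(raw):
    """Detect the encoding from a leading Euro sign, swallow it, decode the rest."""
    for signature, encoding in EURO_SIGNATURES:   # multi-byte signatures are checked first
        if raw.startswith(signature):
            return raw[len(signature):].decode(encoding)
    # No clue bytes present: fall back to a guess or a heuristic library.
    return raw.decode("utf-8", errors="replace")

print(sniff_and_decode("€Ilyich".encode("utf-8")))    # prints Ilyich
print(sniff_and_decode("€naïve".encode("cp1252")))    # prints naïve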

Sending multilingual email. Which charset should I use?

I want to send emails in a number of languages (en/es/fr/de/ru/pl). I notice that Gmail uses the KOI8-R charset when sending emails containing Cyrillic characters.
Can I just use KOI8-R for ALL my emails, or is there any reason to select a particular charset for each language?
I would recommend always using UTF-8 nowadays.
Wikipedia on UTF-8:
UTF-8 (8-bit UCS/Unicode Transformation Format) is a variable-length character encoding for Unicode. It is able to represent any character in the Unicode standard, yet is backwards compatible with ASCII. For these reasons, it is steadily becoming the preferred encoding for e-mail, web pages, and other places where characters are stored or streamed.
Use UTF-8. KOI8-R wouldn't be ideal for non-Russian languages, and changing codesets always tends to be a headache on the receiving side.
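A minimal sketch of what that looks like with Python's standard email package; the addresses and sample text are placeholders, and quoted-printable is just one reasonable transfer-encoding choice:

from email.message import EmailMessage

msg = EmailMessage()
msg["Subject"] = "Multilingual test (en/es/fr/de/ru/pl)"
msg["From"] = "noreply@example.com"       # placeholder
msg["To"] = "user@example.org"            # placeholder
# One charset covers every language on the list.
msg.set_content("¡Hola! / Grüße / Привет / Zażółć gęślą jaźń",
                charset="utf-8", cte="quoted-printable")

print(msg.as_bytes().decode("ascii"))     # charset="utf-8" is declared in the headers
# Sending is the usual smtplib call, e.g. smtplib.SMTP(...).send_message(msg)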

How can I convert non-ASCII characters encoded in UTF8 to ASCII-equivalent in Perl?

I have a Perl script that is being called by third parties to send me names of people who have registered my software. One of these parties encodes the names in UTF-8, so I have adapted my script accordingly to decode UTF-8 to ASCII with Encode::decode_utf8(...).
This usually works fine, but every 6 months or so one of the names contains Cyrillic, Greek or Romanian characters, so decoding the name results in garbage characters such as "ПодражанÑкаÑ". I have to follow up with the customer and ask him for a "latin character version" of his name in order to issue a registration code.
So, is there any Perl module that can detect whether there are such characters and automatically translate them to their closest ASCII representation if necessary?
It seems that I can use Lingua::Cyrillic::Translit::ICAO plus Lingua::DetectCharset to handle Cyrillic, but I would prefer something that works with other character sets as well.
I believe you could use Text::Unidecode for this; it is precisely what it tries to do.
In the documentation for Text::Unidecode, under "Caveats", it appears that this phrase is incorrect:
Make sure that the input data really is a utf8 string.
UTF-8 is a variable-length encoding, whereas Text::Unidecode only accepts a fixed-length (two-byte) encoding for each character. So that sentence should read:
Make sure that the input data really is a string of two-byte Unicode characters.
This is also referred to as UCS-2.
If you want to convert strings which really are utf8, you would do it like so:
use Text::Unidecode;   # exports unidecode()

my $decode_status = utf8::decode($input_to_be_converted);
my $converted_string = unidecode($input_to_be_converted);
If you have to deal with UTF-8 data that is not in the ASCII range, your best bet is to change your backend so it doesn't choke on UTF-8. How would you go about transliterating kanji signs?
If you get Cyrillic text, there is no "closest ASCII representation" for many characters.
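As a side note for anyone landing here from Python rather than Perl: the same idea exists as the third-party Unidecode package, which is a quick way to see what a "closest ASCII representation" looks like, and where it breaks down:

from unidecode import unidecode           # pip install Unidecode

print(unidecode("Подражанская"))          # ASCII-only transliteration of the Cyrillic name
print(unidecode("漢字"))                  # CJK characters come out as romanized (Mandarin-based) readings,
                                          # which illustrates the kanji caveat raised above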