I'm running this code in Windows PowerShell, and it uses this file called languages.txt, where I'm trying to convert between bytes and strings:
Here is languages.txt:
Afrikaans
አማርኛ
Аҧсшәа
العربية
Aragonés
Arpetan
Azərbaycanca
Bamanankan
বাংলা
Bân-lâm-gú
Беларуская
Български
Boarisch
Bosanski
Буряад
Català
Чӑвашла
Čeština
Cymraeg
Dansk
Deutsch
Eesti
Ελληνικά
Español
Esperanto
فارسی
Français
Frysk
Gaelg
Gàidhlig
Galego
한국어
Հայերեն
हिन्दी
Hrvatski
Ido
Interlingua
Italiano
עברית
ಕನ್ನಡ
Kapampangan
ქართული
Қазақша
Kreyòl ayisyen
Latgaļu
Latina
Latviešu
Lëtzebuergesch
Lietuvių
Magyar
Македонски
Malti
मराठी
მარგალური
مازِرونی
Bahasa Melayu
Монгол
Nederlands
नेपाल भाषा
日本語
Norsk bokmål
Nouormand
Occitan
Oʻzbekcha/ўзбекча
ਪੰਜਾਬੀ
پنجابی
پښتو
Plattdüütsch
Polski
Português
Română
Romani
Русский
Seeltersk
Shqip
Simple English
Slovenčina
کوردیی ناوەندی
Српски / srpski
Suomi
Svenska
Tagalog
தமிழ்
ภาษาไทย
Taqbaylit
Татарча/tatarça
తెలుగు
Тоҷикӣ
Türkçe
Українська
اردو
Tiếng Việt
Võro
文言
吴语
ייִדיש
中文
Then, here is the code I used:
import sys

script, input_encoding, error = sys.argv

def main(language_file, encoding, errors):
    line = language_file.readline()
    if line:
        print_line(line, encoding, errors)
        return main(language_file, encoding, errors)

def print_line(line, encoding, errors):
    next_lang = line.strip()
    raw_bytes = next_lang.encode(encoding, errors=errors)
    cooked_string = raw_bytes.decode(encoding, errors=errors)
    print(raw_bytes, "<===>", cooked_string)

languages = open("languages.txt", encoding="utf-8")

main(languages, input_encoding, error)
Here's the output (credit: Learn Python 3 the Hard Way by Zed A. Shaw): [screenshot of console output omitted; most of the non-Latin strings render as question-mark boxes]
I don't know why it doesn't display the characters and shows question blocks instead. Can anyone help me?
The first string that fails is አማርኛ. The first character, አ, is Unicode code point U+12A0; in UTF-8, that is b'\xe1\x8a\xa0'. So that part is obviously fine: the file really is UTF-8.
Printing did not raise an exception, so your output encoding can handle all of the characters. Everything is fine.
The only remaining reason I see for it to fail is that the font used in the console does not support all of the characters.
If it is just for play, you should not worry about it. Consider it working correctly.
On the other hand, I would suggest changing some things in your code:
You are running main recursively for each line. There is absolutely no need for that, and it would hit the recursion depth limit on a longer file. Use a for loop instead:
for line in language_file:
    print_line(line, encoding, errors)
You are opening the file as UTF-8, so reading from it automatically decodes the UTF-8 bytes into Unicode; you then encode that back into raw_bytes and decode it again into cooked_string, which is the same as line. It would be a better exercise to read the file as raw binary, split it on newlines and then decode. Then you'd have a clearer picture of what is going on:
with open("languages.txt", 'rb') as f:
    raw_file_contents = f.read()
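Building on that, a minimal sketch of the full exercise might look like this (an illustration only, assuming the same languages.txt and UTF-8 content throughout):
# Read the raw bytes, split on newlines, and decode each line explicitly.
with open("languages.txt", "rb") as f:
    for raw_line in f.read().split(b"\n"):
        raw_bytes = raw_line.strip()
        if raw_bytes:
            cooked_string = raw_bytes.decode("utf-8")
            print(raw_bytes, "<===>", cooked_string)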
Related
I'm currently working on something that requires me to pass a Base64 string to a PowerShell script. But while decoding the string back to the original, I get unexpected results: I have to use UTF-7 during decoding, and I don't understand why. Would someone know why?
The Mozilla documentation suggests that Base64 alone is insufficient if you have Unicode characters in your string, so you need a workaround that consists of using encodeURIComponent and a replace. I didn't really get why the replace is needed, so I shortened the encoding step to btoa(escape('✓ à la mode')). The result of that operation is JXUyNzEzJTIwJUUwJTIwbGElMjBtb2Rl.
Using PowerShell to decode the string back to the original, I first need to undo the Base64 encoding. To do that, System.Convert can be used (which results in a byte array), and its output can be converted to a UTF-8 string using System.Text.Encoding. Together this looks like the following:
$bytes = [System.Convert]::FromBase64String($inputstring)
$utf8string = [System.Text.Encoding]::UTF8.GetString($bytes)
What's left is to URL-decode the whole thing. As it's a UTF-8 string, I'd expect the URL decode to work without any further parameters. But if you do that, you end up with an accented à that looks like � in a file or ? on the console. To get the actual original string back, it's necessary to tell the URL decode to use UTF-7 as the character set. It's nice that this works, but I don't really get why it's necessary, since the string should be UTF-8, and UTF-8 certainly supports an accented à. See the last two lines of the entire script for what I mean: with those two lines you end up with one line of garbled text and one with the original text, in the same file, encoded as UTF-8.
Entire PowerShell script:
Add-Type -AssemblyName System.Web
$inputstring = "JXUyNzEzJTIwJUUwJTIwbGElMjBtb2Rl"
$bytes = [System.Convert]::FromBase64String($inputstring)
$utf8string = [System.Text.Encoding]::UTF8.GetString($bytes)
[System.Web.HttpUtility]::UrlDecode($utf8string) | Out-File -Encoding utf8 C:\temp\output.txt
[System.Web.HttpUtility]::UrlDecode($utf8string, [System.Text.UnicodeEncoding]::UTF7) | Out-File -Append -Encoding utf8 C:\temp\output.txt
Clarification:
The problem isn't the conversion of the Base64 to UTF-8. The problem is some inconsistent behavior of the UrlDecode of C#. If you run escape('✓ à la mode') in your browser, you end up with the following string: %u2713%20%E0%20la%20mode. So we have a non-standard %u escape for the check mark and a single-byte (Latin-1-style) escape for the à. If we use this directly in UrlDecode we end up with the same error. My current assumption was that it's an issue with the encoding of the PowerShell window and pasting characters into it.
It turns out it actually isn't all that strange; it's just that, for what I want to do, it's advantageous to use a newer function. I'm still not sure why it works if you use the UTF-7 encoding. But anyway, as an explanation:
... The hexadecimal form for characters, whose code unit value is 0xFF or less, is a two-digit escape sequence: %xx. For characters with a greater code unit, the four-digit format %uxxxx is used.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/escape
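To see concretely why that lone %E0 escape then misbehaves, here is a small illustration in Python (used here only because it's convenient; note that Python's unquote, like most modern URL decoders, leaves the non-standard %uXXXX form untouched):
from urllib.parse import unquote

s = "%20%E0%20la%20mode"  # escape()'s output, minus the %u2713 part

# As UTF-8, the lone 0xE0 byte is an incomplete multi-byte sequence:
print(unquote(s, encoding="utf-8", errors="replace"))  # prints ' � la mode'

# As Latin-1 (one byte per character), it decodes cleanly:
print(unquote(s, encoding="latin-1"))                  # prints ' à la mode'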
As TesselatingHecksler pointed out, What is the proper way to URL encode Unicode characters? would indicate that the %u format was never formally standardized. A newer way to escape characters does exist, though: encodeURIComponent.
The encodeURIComponent() function encodes a Uniform Resource Identifier (URI) component by replacing each instance of certain characters by one, two, three, or four escape sequences representing the UTF-8 encoding of the character (will only be four escape sequences for characters composed of two "surrogate" characters).
The output of this function actually works with the C# implementation of UrlDecode without supplying an additional encoding of UTF-7.
The original linked Mozilla article about Base64 encoding a UTF-8 string modifies the whole process in a way that allows you to just call the Base64 decode function to get the whole string back. This is realized by converting the URL-encoded version of the string to bytes.
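For comparison, Python's urllib.parse.quote behaves like encodeURIComponent in that it emits UTF-8 percent-escapes, so a plain UTF-8 URL decode recovers the original without any UTF-7 detour (again just an illustration):
from urllib.parse import quote, unquote

# quote() percent-encodes the UTF-8 bytes of each character,
# just like JavaScript's encodeURIComponent:
encoded = quote("✓ à la mode")
print(encoded)           # %E2%9C%93%20%C3%A0%20la%20mode
print(unquote(encoded))  # ✓ à la mode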
I use luasocket to GET a web page which contains Chinese characters "开奖结果" (the page itself is encoded in charset="gb2312"), as below:
require "socket"
host = '61.129.89.226'
fileformat = '/fcopen/cp_kjgg_dfw.jsp?lottery_type=ssq&lottery_issue=%s'
function getlottery(num)
c = assert(socket.connect(host, 80))
c:send('GET ' .. string.format(fileformat, num) .. " HTTP/1.0\r\n\r\n")
content = c:receive('*l')
while content do
if content and content:find('开奖结果') then -- failed
print(content)
end
content = c:receive('*l')
end
c:close()
end
--http://61.129.89.226/fcopen/cp_kjgg_dfw.jsp?lottery_type=ssq&lottery_issue=2012138
getlottery('2012138')
Unfortunately, it fails to match the expected characters:
content:find('开奖结果') -- failed
I know Lua is capable of finding unicode characters:
Lua 5.1.4 Copyright (C) 1994-2008 Lua.org, PUC-Rio
> if string.find("This is 开奖结果", "开奖结果") then print("found!") end
found!
Then I guess it might be caused by how luasocket retrieves data from the web. Could anyone shed some light on this?
Thanks.
If the page is encoded in GB2312 and your script (the file itself) is encoded in UTF-8, there's no way the match will work: find() searches for the pattern's UTF-8 bytes and simply slides over the characters you're looking for, because they're not encoded the same way:
        开      奖      结      果
GB2312  bfaa    bdb1    bde1    b9fb
UTF-16  5f00    5956    7ed3    679c
UTF-8   e5bc80  e5a596  e7bb93  e69e9c
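You can verify those byte sequences quickly in Python (shown purely as a cross-check; the fix on the Lua side is to match against the GB2312 byte string, e.g. by embedding the GB2312-encoded bytes of the pattern rather than the UTF-8 ones):
# The pattern's bytes differ completely between the two encodings,
# so a byte-level find() on GB2312 content never matches UTF-8 pattern bytes.
pattern = "开奖结果"
print(pattern.encode("gb2312").hex())  # bfaabdb1bde1b9fb
print(pattern.encode("utf-8").hex())   # e5bc80e5a596e7bb93e69e9c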
I have a reasonably sized flat-file database of text documents, mostly saved in 8859 format, which have been collected through a web form (using Perl scripts). Up until recently I was handling the common 1252 characters (curly quotes, apostrophes, etc.) with a simple set of regexes:
$line=~s/\x91/\&\#8216\;/g; # smart apostrophe left
$line=~s/\x92/\&\#8217\;/g; # smart apostrophe right
... etc.
However, since I decided I ought to be going Unicode and have converted all my scripts to read in and output UTF-8 (which works a treat for all new material), the regexes for these (existing) 1252 characters no longer work, and my Perl HTML output literally prints the four characters '\x92', '\x93', etc. At least, that's how it appears in a browser in UTF-8 mode. Downloading (FTP, not HTTP) and opening the output in a text editor (TextPad), it's different: a single undefined character remains. And opening the output file in Firefox's default (no Content-Type header) 8859 mode renders the correct character.
The new utf8 pragmas at the start of the script are:
use CGI qw(-utf8);
use open IO => ':utf8';
I understand this is due to UTF-8 mode making the characters double-byte instead of single-byte, and that it applies to chars in the 0x80 to 0xff range; I've read the Wikibooks article on this, but I was none the wiser as to how to filter them. Ideally I know I ought to resave all the documents in UTF-8 (since the flat-file database now contains a mixture of 8859 and UTF-8); however, I will need some kind of filter in the first place if I'm going to do that anyway.
And I could be wrong about the two-byte internal storage, since the article did seem to imply that Perl handles things very differently depending on circumstances.
If anybody could provide me with a regex solution I would be very grateful. Or some other method. I have been tearing my hair out for weeks on this with various attempts and failed hacking. There are only about six 1252 characters that commonly need replacing, and with a filter method I could resave the whole flippin' lot in UTF-8 and forget there ever was a 1252...
Encoding::FixLatin was specifically written to help fix data broken in the same manner as yours.
Ikegami already mentioned the Encoding::FixLatin module.
Another way to do it, if you know that each string will be either UTF-8 or CP1252, but not a mixture of both, is to read it as a binary string and do:
unless ( utf8::decode($string) ) {
    require Encode;
    $string = Encode::decode(cp1252 => $string);
}
Compared to Encoding::FixLatin, this has two small advantages: a slightly lower chance of misinterpreting CP1252 text as UTF-8 (because the entire string must be valid UTF-8) and the possibility of replacing CP1252 with some other fallback encoding. A corresponding disadvantage is that this code could fall back to CP1252 on strings that are not entirely valid UTF-8 for some other reason, such as because they were truncated in the middle of a multi-byte character.
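If it helps make the logic concrete, the same strategy translates almost line for line into Python (a sketch, under the same assumption that each string is wholly UTF-8 or wholly CP1252):
def decode_mixed(raw: bytes) -> str:
    try:
        # Succeeds only if the entire string is valid UTF-8,
        # which keeps the chance of misclassification low.
        return raw.decode("utf-8")
    except UnicodeDecodeError:
        # Fall back to CP1252 for legacy records.
        return raw.decode("cp1252")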
You could also use Encode.pm's support for fallback.
use Encode qw[decode];

my $octets = "\x91 Foo \xE2\x98\xBA \x92";
my $string = decode('UTF-8', $octets, sub {
    my ($ordinal) = @_;
    return decode('Windows-1252', pack 'C', $ordinal);
});

printf "<%s>\n",
    join ' ', map { sprintf 'U+%.4X', ord $_ } split //, $string;
Output:
<U+2018 U+0020 U+0046 U+006F U+006F U+0020 U+263A U+0020 U+2019>
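The same per-byte fallback can be expressed in Python with a custom decode-error handler, for comparison (a sketch; the handler name cp1252_fallback is made up here):
import codecs

def cp1252_fallback(exc):
    # Re-decode just the offending byte as CP1252 and resume after it.
    byte = exc.object[exc.start:exc.start + 1]
    return byte.decode("cp1252"), exc.start + 1

codecs.register_error("cp1252_fallback", cp1252_fallback)

octets = b"\x91 Foo \xE2\x98\xBA \x92"
string = octets.decode("utf-8", errors="cp1252_fallback")
print(" ".join("U+%04X" % ord(c) for c in string))
# U+2018 U+0020 U+0046 U+006F U+006F U+0020 U+263A U+0020 U+2019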
Did you recode the data files? If not, opening them as UTF-8 won't work. You can simply open them as
open $filehandle, '<:encoding(cp1252)', $filename or die ...;
and everything (tm) should work.
If you did recode, something seems to have gone wrong, and you need to analyze what it is and fix it. I recommend using hexdump to find out what is actually in a file. Text consoles and editors sometimes lie to you; hexdump never lies.
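As an aside, the Python equivalent of that open mode looks like this, in case you're juggling both languages (the filename legacy.txt is just a placeholder):
# Read a legacy CP1252 file as properly decoded text.
with open("legacy.txt", encoding="cp1252") as fh:
    for line in fh:
        print(line, end="")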
I decided to use Python 3 for making my website, but I encountered a problem with Unicode output.
It seems like plain print(html) (where html is a str) should work, but it doesn't: I get UnicodeEncodeError: 'ascii' codec can't encode characters [...]: ordinal not in range(128). This must be because the web server doesn't support Unicode output.
The next thing I tried was print(html.encode('utf-8')), but I got something like the repr of the bytes object: it's wrapped in b'...' and all the escapes appear literally (e.g. \n and \xd0\x9c).
Please show me the correct way to output a Unicode (str) string as a raw UTF-8 encoded byte string in Python 3.1.
The problem here is that your stdout isn't attached to an actual terminal, so it falls back to the ASCII encoding by default. You therefore need to write to sys.stdout.buffer, which is the "raw" binary layer underneath sys.stdout. This can be done in various ways; the most common one seems to be:
import codecs, sys
writer = codecs.getwriter('utf8')(sys.stdout.buffer)
And then use writer. In a CGI script you may be able to replace sys.stdout with the writer:
sys.stdout = codecs.getwriter('utf8')(sys.stdout.buffer)
After that, plain print should work normally. Try that!
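Alternatively, for a one-off write you can skip the writer object and hit the buffer directly (a sketch; the html value here is just an example payload):
import sys

html = "<p>\u041c\u0438\u0440</p>"  # example str containing non-ASCII text
sys.stdout.buffer.write(html.encode("utf-8"))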
I've tried everything Google and StackOverflow have recommended (that I could find) including using Encode. My code works but it just uses UTF8 and I get the wide character warnings. I know how to work around those warnings but I'm not using UTF8 for anything else so I'd like to just convert it and not have to adapt the rest of my code to deal with it. Here's my code:
my $xml = XMLin($content);

# Populate the @titles array with each item title.
my @titles;
for my $item (@{$xml->{channel}->{item}}) {
    my $title = Encode::decode_utf8($item->{title});
    #my $title = $item->{title};
    #utf8::downgrade($title, 1);
    Encode::from_to($title, 'utf8', 'iso-8859-1');
    push @titles, $title;
}

return @titles;
Commented out you can see some of the other things I've tried. I'm well aware that I don't know what I'm doing here. I just want to end up with a plain old ASCII string though. Any ideas would be greatly appreciated. Thanks.
The answer depends on how you want to use the title. There are 3 basic ways to go:
Bytes that represent a UTF-8 encoded string.
This is the format that should be used if you want to store the UTF-8 encoded string outside your application, be it on disk or sending it over the network or anything outside the scope of your program.
A string of Unicode characters.
The concept of characters is internal to Perl. When you call Encode::decode_utf8, Perl attempts to convert a bunch of bytes into a string of characters as Perl sees them. The Perl VM (and the programmer writing Perl code) cannot externalize that concept except by decoding UTF-8 bytes on input and encoding them to UTF-8 bytes on output. For example, suppose your program receives two bytes as input that you know represent UTF-8 encoded character(s), say 0xC3 0xB6. In that case decode_utf8 returns a representation that, instead of two bytes, sees one character: ö.
You can then proceed to manipulate that string in Perl. To illustrate the difference further, consider the following code:
my $bytes = "\xC3\xB6";
say length($bytes); # prints "2"
my $string = decode_utf8($bytes);
say length($string); # prints "1"
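The same distinction exists in Python, shown here purely for comparison:
raw = b"\xC3\xB6"            # two bytes: the UTF-8 encoding of "ö"
print(len(raw))              # prints 2
string = raw.decode("utf-8")
print(len(string))           # prints 1 (one character)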
The special case of ASCII, a subset of UTF-8.
ASCII is a very small subset of Unicode, where characters in that range are represented by a single byte. Converting Unicode into ASCII is an inherently lossy operation, as most Unicode characters are not ASCII characters. When coercing a Unicode string to ASCII, you're forced either to drop every character that is not ASCII or to map each one to its closest ASCII equivalent (which isn't possible in the vast majority of cases).
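To make the lossiness concrete, here are the two coercion strategies in Python (an illustration only; in Perl, modules like Text::Unidecode play a similar role):
import unicodedata

s = "\u00f6"  # ö

# Strategy 1: drop anything non-ASCII outright.
print(s.encode("ascii", errors="ignore"))  # b''

# Strategy 2: strip accents first, then drop what's left.
decomposed = unicodedata.normalize("NFKD", s)
print(decomposed.encode("ascii", errors="ignore"))  # b'o'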
Since you have wide character warnings, it means that you're trying to manipulate (possibly output) Unicode characters that cannot be represented as ASCII or ISO-8859-1.
If you do not need to manipulate the title from your XML document as a string, I'd suggest you leave it as UTF-8 bytes (I'd mention that you should be careful not to mix bytes and characters in strings). If you do need to manipulate it, then decode, manipulate, and on output encode it in UTF-8.
For further reading, please use perldoc to study perlunitut, perlunifaq, perlunicode, perluniintro, and Encode.
Although this is an old question, I just spent several hours (!) trying to do more or less the same thing! That is: read data from a UTF-8 XML file, and convert that data into the Windows-1252 codepage (I could also have used Latin1, ISO-8859-1 etc.) in order to be able to create filenames with accented letters.
After much experimentation, and even more searching, I finally managed to get the conversion working. The "trick" is to use Encode::encode instead of Encode::decode.
For example, given the code in the original question, the correct (or at least one :-) way to convert from UTF-8 would be:
my $title = Encode::encode("Windows-1252", $item->{title});
or
my $title = Encode::encode("ISO-8859-1", $item->{title});
or
my $title = Encode::encode("<your-favourite-codepage-here>", $item->{title});
I hope this helps others having similar problems!
You can use the following line to simply get rid of the warning. This assumes that you want to use UTF8, which shouldn't normally be a problem.
binmode(STDOUT, ":encoding(utf8)");