All the documentation directs me to Unicode support, yet I don't think my request has anything to do with Unicode. I want to work with raw bytes within the context of a single scalar; I need to be able to figure out its length (in bytes), take substrings of it (in bytes), write the bytes to disk, and send them over the network. Is there an easy way to do this, without treating the bytes as any sort of encoding in Perl?
EDIT
More explicitly,
my $data = "Perl String, unsure of encoding and don't need to know";
my @data_chunked_into_1024_bytes_each = ???
Perl strings are, conceptually, strings of characters, which are positive 32-bit integers that (normally) represent Unicode code points. A byte string, in Perl, is just a string in which all the characters have values less than 256.
(That's the conceptual view. The internal representation is somewhat more complicated, as the perl interpreter tries to store byte strings — in the above sense — as actual byte strings, while using a generalized UTF-8 encoding for strings that contain character values of 256 or higher. But this is all supposed to be transparent to the user, and in fact mostly is, except for some ugly historical corner cases like the bitwise not (~) operator.)
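For example, a minimal sketch of the character/byte distinction (the strings here are made up for illustration):
use strict;
use warnings;
my $bytes = "\x61\xFF";      # every character below 256: a byte string
my $wide  = "\x{263A}";      # one character above 255 (WHITE SMILING FACE)
print length($bytes), "\n";  # prints 2 -- length counts characters, which here are bytes
print length($wide), "\n";   # prints 1 -- one character, even though its UTF-8 form is 3 bytes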
As for how to turn a general string into a byte string, that really depends on what the string you have contains and what the byte string is supposed to contain:
If your string already is a string of bytes — e.g. if you read it from a file in binary mode — then you don't need to do anything. The string shouldn't contain any characters above 255 to begin with, and if it does, that's an error and will probably be reported as such by whatever code consumes the string.
Similarly, if your string is supposed to encode text in the ASCII or ISO-8859-1 encodings (which encode the 7- and 8-bit subsets of Unicode respectively), then you don't need to do anything: any characters up to 255 are already correctly encoded, and any higher values are invalid for those encodings.
If your input string contains (Unicode) text that you want to encode in some other encoding, then you'll need to convert the string to that encoding. The usual way to do that is by using the Encode module, like this:
use Encode;
my $byte_string = encode( "name of encoding", $text_string );
Obviously, you can convert the byte string back to the corresponding character string with:
use Encode;
my $text_string = decode( "name of encoding", $byte_string );
For the special case of the UTF-8 encoding, it's also possible to use the built-in utf8::encode() function instead of Encode::encode():
utf8::encode( $string );
which does essentially the same thing as:
use Encode;
$string = encode( "utf8", $string );
Note that, unlike Encode::encode(), the utf8::encode() function modifies the input string directly. Also note that the "utf8" above refers to Perl's extended UTF-8 encoding, which allows values outside the official Unicode range; for strictly standards-compliant UTF-8 encoding, use "utf-8" with a hyphen (see Encode documentation for the gory details). And, yes, there's also a utf8::decode() function that does pretty much what you'd expect.
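A minimal sketch of that difference (the sample string is made up):
use Encode;
my $text = "caf\x{e9}";              # "café" as a character string
my $copy = encode("utf8", $text);    # returns a new byte string; $text is unchanged
utf8::encode($text);                 # modifies $text in place
print $copy eq $text ? "same bytes\n" : "different bytes\n";   # prints "same bytes"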
If I understood your question correctly, what you want is the pack/unpack functions: http://perldoc.perl.org/functions/pack.html
As long as your string doesn't contain characters above codepoint 255, it will mostly work as a plain byte string, with length and substr operating on bytes. Additionally, most output functions like print expect octets/bytes by default and will actually complain if you try to feed anything else to them.
You may need to explicitly encode/decode your output if it is known to be in some encoding, but more details can only be added if you ask another specific question for each problematic part of your program.
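As for the chunking in the original question, assuming $data is already a byte string (no characters above 255), one sketch uses unpack with a grouped template:
use strict;
use warnings;
my $data = "Perl String, unsure of encoding and don't need to know";
# "(a1024)*" pulls 1024 bytes at a time; the final chunk may be shorter
my @data_chunked_into_1024_bytes_each = unpack "(a1024)*", $data;
A plain substr loop works equally well, since on byte strings length and substr already count bytes.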
In a Perl script of mine, I have to write a mix of UTF-8 and raw bytes into files.
I have a big string in which everything is encoded as UTF-8. In that "source" string, UTF-8 characters are just like they should be (that is, UTF-8-valid byte sequences), while the "raw bytes" have been stored as if they were codepoints with the value held by the raw byte. So, in the source string, a "raw" byte of 0x50 would be stored as one 0x50 byte; whereas a "raw" byte of 0xff would be stored as the UTF-8-valid two-byte sequence 0xc3 0xbf. When I write these "raw" bytes back, I need to put them back into single-byte form.
I have other data structures allowing me to know which parts of the string represent what kind of data. A list of fields, types, lengths, etc.
When writing to a plain file, I write each field in turn, either directly (if it's UTF-8) or by encoding its value to ISO-8859-1 if it's meant to be raw bytes. It works perfectly.
Now, in some cases, I need to write the value not directly to a file, but as a record of a BerkeleyDB (Btree, but that's mostly irrelevant) database.
To do that, I need to write ALL the values that compose my record, in a single write operation. Which means that I need to have a scalar that holds a mix of UTF-8 and raw bytes.
Example:
Input Scalar (all hex values): 61 C3 8B 00 C3 BF
Expected Output Format: 2 UTF-8 characters, then 2 raw bytes.
Expected Output: 61 C3 8B 00 FF
At first, I created the string by starting from an empty string and concatenating the same values I was writing to my file. Then I tried writing that very string to a "standard" file without adding any encoding. I got '?' characters instead of all my raw bytes over 0x7f (because, obviously, Perl decided to consider my string to be UTF-8).
Then, to try and tell Perl that it was already encoded, and to "please not try to be smart about it", I tried to encode the UTF-8 parts into "UTF-8", encode the binary parts into "ISO-8859-1", and concatenate everything. Then I wrote it. This time, the bytes looked perfect, but the parts which were already UTF-8 had been "double-encoded", that is, each byte of a multi-byte character had been seen as its codepoint...
I thought Perl wasn't supposed to re-encode "internal" UTF-8 into "encoded" UTF-8, if it was internally marked as UTF-8. The string holding all the values in UTF-8 comes from a C API, which sets the UTF-8 marker (or is supposed to, at the very least), to let Perl know it is already decoded.
Any idea about what I did miss there?
Is there a way to tell Perl what I want to do is just put a bunch of bytes one after another, and to please not try to interpret them in any way? The file I write to is opened as ">:raw" for that very reason, but I guess I need a way to specify that a given scalar is "raw" too?
Epilogue: I found the cause of the problem. The $bigInputString was supposed to be entirely composed of UTF-8 encoded data. But it did contain raw bytes with big values, because of a bug in C (turns out a "char" (not "unsigned char") is best tested with bitwise operators, instead of a " > 127"... ahem). So, "big" bytes weren't split into a two-bytes UTF-8 sequence, in the C API.
Which means the $bigInputString, created from the bad C data, didn't have the expected contents, and Perl rightfully didn't like it either.
After I corrected the bug, the string correctly encoded to UTF-8 (for the parts I wanted to keep as UTF-8) or LATIN-1 (for the "raw bytes" I wanted to convert back), and I got no further problems.
Sorry for wasting your time, guys. But I still learned some things, so I'll keep this here. Moral of the story: Devel::Peek is GOOD for debugging (thanks ikegami), and one should always double-check instead of assuming. Granted, I was in a hurry on Friday, but the fault is still mine.
So, thanks to everyone who helped, or tried to, and special thanks to ikegami (again), who used quite a bit of his time helping me.
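For reference, the Devel::Peek module mentioned above is handy for exactly this kind of debugging; a minimal sketch:
use Devel::Peek;
my $string = "caf\x{e9}";
Dump($string);   # prints the scalar's FLAGS (including UTF8) and raw bytes to STDERR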
Assuming you have a Unicode string where you know what each codepoint is supposed to be stored as (a UTF-8 sequence or a single byte), and a way to create a template string where each character says what the corresponding character of the Unicode string should use (U for UTF-8, C for a single byte, to keep things simple), you can use pack:
#!/usr/bin/env perl
use strict;
use warnings;
sub process {
    my ($str, $formats) = @_;
    # A leading "C0" makes pack produce a byte string; within it,
    # "U" emits a codepoint's UTF-8 bytes and "C" emits a single byte.
    my $template = "C0$formats";
    my @chars = map { ord } split(//, $str);
    pack $template, @chars;
}
my $str = "\x61\xC3\x8B\x00\xC3\xBF";
utf8::decode($str);
print process($str, "UUCC"); # Outputs 0x61 0xc3 0x8b 0x00 0xff
So you have
my $in = "\x61\xC3\x8B\x00\xC3\xBF";
and you want
my $out = "\x61\xC3\x8B\x00\xFF";
This is the result of decoding only some parts of the input string, so you want something like the following:
sub decode_utf8 { my ($s) = @_; utf8::decode($s) or die("Invalid Input"); $s }
my $out = join "",
substr($in, 0, 3),
decode_utf8(substr($in, 3, 1)),
decode_utf8(substr($in, 4, 2));
Tested.
Alternatively, you could decode the entire thing and re-encode the parts that should be encoded.
sub encode_utf8 { my ($s) = @_; utf8::encode($s); $s }
utf8::decode($in) or die("Invalid Input");
my $out = join "",
encode_utf8(substr($in, 0, 2)),
substr($in, 2, 1),
substr($in, 3, 1);
Tested.
You have not indicated how you know which parts to decode and which not to, but you indicated that you have this information.
I'm trying to fetch data from PostgreSQL with Erlang.
Here's my code that gets data from the DB. However, I have Cyrillic data in the 'status' column, and it is not being fetched correctly.
I tried using UserInfo = io_lib:format("~tp ~n", [UserInfoQuery]); however, this doesn't seem to work, because it crashes the app.
UserInfoQuery = odbc_queries:get_user_info(LServer,LUser),
UserInfo = io_lib:format("~p",[UserInfoQuery]),
?DEBUG("UserInfo: ~p",[UserInfo]),
StringForUserInfo = lists:flatten(UserInfo),
get_user_info(LServer, Id) ->
    ejabberd_odbc:sql_query(
        LServer,
        [<<"select * from users "
           "where email_hash='">>, Id, "';"]).
Here's the data that is fetched from DB
{selected,[<<"username">>,<<"password">>,<<"created_at">>,
<<"id">>,<<"email_hash">>,<<"status">>],
[{<<"admin">>,<<"admin">>,<<"2014-05-13 12:40:30.757433">>,
<<"1">>,<<"adminhash">>,
<<209,139,209,132,208,178,208,176,209,139,209,132,208,
178,208,176>>}]}
Question:
How can I extract data from a column? For example, only the data from the 'status' column?
How can I extract data in Unicode from the DB? Should I fetch the data from the DB and then use io_lib:format("~tp~n") on it? Is there a better way to do it?
Additional question: is there any way to get the string in a human-readable format, so that StringForUserInfo = 'ыфваыфва' from RowUnicode?
I tried this:
{selected, _, [Row]} = UserInfoQuery,
RowUnicode = io_lib:format("~tp~n", [Row]),
?DEBUG("RowUnicode: ~p",[RowUnicode]),
StringForUserInfo = lists:flatten(RowUnicode),
Error:
bad argument in call to erlang:iolist_size([123,60,60,34,97,100,109,105,110,34,
62,62,44,60,60,34,97,100,109,105,110,34,62,62,44,60,60,34,50,...])
The Erlang ODBC driver correctly fetched the status column from your database. Indeed, PostgreSQL encodes your data in UTF-8, and the value you get is UTF-8 encoded.
Status = <<209,139,209,132,208,178,208,176,209,139,209,132,208,178,208,176>>.
This is a binary representing the string ыфваыфва in UTF-8.
You can directly use UTF-8 encoded binaries in your code. If you want to use Unicode code points instead of UTF-8 bytes, you can convert the binary to a list of integers (a string in Erlang parlance). Just use unicode:characters_to_list/1, which in your case will yield the list [1099,1092,1074,1072,1099,1092,1074,1072]. This is a list representation of the same string. Unicode character 1099 (16#044B in hex) is ы (CYRILLIC SMALL LETTER YERU, cf. the Cyrillic Unicode chart).
Erlang can handle unicode texts in the two representations above: lists of unicode characters as integers and binaries of UTF-8 encoded characters.
Let's examine a smaller example, string "ы". This string is composed of unicode character 044B CYRILLIC SMALL LETTER YERU, and it can be encoded as a binary as <<209,139>> or as a list as [16#044B] (= [1099]).
Historically, lists of integers as well as binaries were Latin-1 (ISO-8859-1) encoded. Unicode and ISO-8859-1 have the same values from 0 to 255, but UTF-8 transformation only matches ISO-8859-1 for characters in the 0-127 range. For this reason, Erlang's ~s format argument has a unicode translation modifier, ~ts. The following line will not work as expected:
io:format("~s", [<<209,139>>]).
It will output two characters, 00D1 (LATIN CAPITAL LETTER N WITH TILDE) and 008B (PARTIAL LINE FORWARD). This is because <<209,139>> is interpreted as a Latin-1 string and not as a UTF-8 encoded string.
The following line will fail:
io:format("~s", [[1099]]).
This is because [1099] is not a valid Latin-1 string.
Instead, you should write:
io:format("~ts", [<<209,139>>]),
io:format("~ts", [[1099]]).
Erlang's ~p format argument also has a unicode translation modifier, ~tp. However, ~tp will not do what you are looking for alone. Whether you use ~p or ~tp, by default, io_lib:format/2 will format the Status UTF-8 encoded binary above as:
<<209,139,209,132,208,178,208,176,209,139,209,132,208,178,208,176>>
Indeed, the t modifier only means that the argument shall accept Unicode input. If you use ~p when formatting a string or a binary, Erlang determines whether it could be represented as a Latin-1 string, since the input may be Latin-1 encoded. This heuristic allows Erlang to properly distinguish lists of integers from strings, most of the time. To see the heuristic at work, you can try something like:
io:format("~p\n~p\n", [[69,114,108,97,110,103], [1,2,3,4,5,6]]).
The heuristic detects that [69,114,108,97,110,103] actually is "Erlang", while [1,2,3,4,5,6] is just, well, a list of integers.
If you use ~tp instead, Erlang expects strings or binaries to be Unicode-encoded, and then applies the default identification heuristic. And the default heuristic currently (R17) happens to be Latin-1 as well. Since your string cannot be represented in Latin-1, Erlang displays it as a list of integers. Fortunately, you can switch to Unicode heuristics by passing +pc unicode to Erlang on the command line, and this will produce what you are looking for.
$ erl +pc unicode
So a solution to your problem is to pass +pc unicode and to use ~tp.
I don't understand why io:format("~tp") doesn't work, but you can extract the row and column you need and print that with io:format("~ts"):
> {selected, _, [Row]} = UserInfoQuery.
> io:format("~ts~n", [element(6, Row)]).
ыфваыфва
ok
Hoping someone can point me in the direction of where I'm going wrong with this:
I have a string of what I believe is hex-encoded UCS2, but the provider cannot tell me whether it is UCS2-LE or UCS2-BE.
Like so: 0627062E062A062806270631
It translates to this: اختبا
In Arabic, apparently... but no matter whether I try converting it out of hex, using it as straight UCS2 (LE or BE), or practically anything else I can think of under the sun, I can't turn it into native Perl UTF-8 so that I can then re-encode it as standard UTF-8 (the native format of our system).
Code:
my $string = "0627062E062A062806270631";
my $decodedHex = hex($string);
#NEAREST
my $perlDecodedUTF8 = decode("UCS-2BE", $decodedHex);
my $utf8 = encode('UTF-8',$perlDecodedUTF8);
open(ARABICTEST,">ucs2test.txt");
print(ARABICTEST $perlDecodedUTF8);
print("Done!");
close(ARABICTEST);
It outputs gibberish characters at the moment.
Now one idea I did come up with was to split the string in question into 4-character sections (i.e. per hex code), but even trying this with an individual, known UCS2 hex value doesn't appear to work.
Also tried forcing the output encoding, no joy there either.
Thanks!
hex is not the way to decode a hex string to a byte sequence. pack is. (hex produces a single integer, not a string of bytes.) Other than that, you were close. Try this:
use strict;
use warnings;
use Encode;
my $string = "0627062E062A062806270631";
my $decodedHex = pack('H*', $string);
my $perlDecodedUTF8 = decode("UCS-2BE", $decodedHex);
open(my $ARABICTEST,">:utf8", "ucs2test.txt");
print $ARABICTEST $perlDecodedUTF8;
print("Done!");
close($ARABICTEST);
Note: You probably want to use UTF-16BE instead of UCS-2BE. They're basically the same thing, but UTF-16BE allows surrogate pairs, and UCS-2BE doesn't. So all UCS-2BE text is also valid UTF-16BE, but not vice versa.
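If you want to verify the decode, printf's %v flag dumps the code points of a string; a small self-contained sketch:
use strict;
use warnings;
use Encode;
my $decodedHex = pack('H*', "0627062E062A062806270631");   # hex string -> raw bytes
my $text = decode("UTF-16BE", $decodedHex);                # bytes -> characters
printf "%vX\n", $text;   # prints 627.62E.62A.628.627.631, the Arabic code points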
I've tried everything Google and StackOverflow have recommended (that I could find) including using Encode. My code works but it just uses UTF8 and I get the wide character warnings. I know how to work around those warnings but I'm not using UTF8 for anything else so I'd like to just convert it and not have to adapt the rest of my code to deal with it. Here's my code:
my $xml = XMLin($content);
# Populate the @titles array with each item title.
my @titles;
for my $item (@{$xml->{channel}->{item}}) {
    my $title = Encode::decode_utf8($item->{title});
    #my $title = $item->{title};
    #utf8::downgrade($title, 1);
    Encode::from_to($title, 'utf8', 'iso-8859-1');
    push @titles, $title;
}
return @titles;
Commented out, you can see some of the other things I've tried. I'm well aware that I don't know what I'm doing here. I just want to end up with a plain old ASCII string, though. Any ideas would be greatly appreciated. Thanks.
The answer depends on how you want to use the title. There are 3 basic ways to go:
Bytes that represent a UTF-8 encoded string.
This is the format that should be used if you want to store the UTF-8 encoded string outside your application, be it on disk or sending it over the network or anything outside the scope of your program.
A string of Unicode characters.
The concept of characters is internal to Perl. When you call Encode::decode_utf8, a bunch of bytes is converted to a string of characters, as seen by Perl. The Perl VM (and the programmer writing Perl code) cannot externalize that concept except by decoding UTF-8 bytes on input and encoding them to UTF-8 bytes on output. For example, suppose your program receives two bytes as input that you know represent UTF-8 encoded character(s), let's say 0xC3 0xB6. In that case, decode_utf8 returns a representation that, instead of two bytes, sees one character: ö.
You can then proceed to manipulate that string in Perl. To illustrate the difference further, consider the following code:
my $bytes = "\xC3\xB6";
say length($bytes); # prints "2"
my $string = decode_utf8($bytes);
say length($string); # prints "1"
The special case of ASCII, a subset of UTF-8.
ASCII is a very small subset of Unicode, where characters in that range are represented by a single byte. Converting Unicode to ASCII is an inherently lossy operation, as most Unicode characters are not ASCII characters. When trying to coerce a Unicode string to ASCII, you're forced either to drop every character in your string that is not ASCII, or to map each Unicode character to its closest ASCII equivalent (which isn't possible in the vast majority of cases).
Since you have wide character warnings, it means that you're trying to manipulate (possibly output) Unicode characters that cannot be represented as ASCII or ISO-8859-1.
If you do not need to manipulate the title from your XML document as a string, I'd suggest you leave it as UTF-8 bytes (I'd mention that you should be careful not to mix bytes and characters in strings). If you do need to manipulate it, then decode, manipulate, and on output encode it in UTF-8.
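A minimal round-trip sketch of that advice (the byte string here is made up):
use Encode;
my $bytes = "Fa\xC3\xA7ade";            # UTF-8 bytes, e.g. fresh out of the XML
my $title = decode("UTF-8", $bytes);    # bytes -> characters, safe to manipulate
$title = uc $title;                     # manipulate characters, not bytes
my $out = encode("UTF-8", $title);      # characters -> bytes again for output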
For further reading, please use perldoc to study perlunitut, perlunifaq, perlunicode, perluniintro, and Encode.
Although this is an old question, I just spent several hours (!) trying to do more or less the same thing! That is: read data from a UTF-8 XML file, and convert that data into the Windows-1252 codepage (I could also have used Latin1, ISO-8859-1 etc.) in order to be able to create filenames with accented letters.
After much experimentation, and even more searching, I finally managed to get the conversion working. The "trick" is to use Encode::encode instead of Encode::decode.
For example, given the code in the original question, the correct (or at least one :-) way to convert from UTF-8 would be:
my $title = Encode::encode("Windows-1252", $item->{title});
or
my $title = Encode::encode("ISO-8859-1", $item->{title});
or
my $title = Encode::encode("<your-favourite-codepage-here>", $item->{title});
I hope this helps others having similar problems!
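One caveat to this approach: characters with no equivalent in the target codepage are lost. By default Encode substitutes a question mark; pass FB_CROAK if you'd rather fail loudly. A sketch (the sample title is made up):
use Encode;
my $title = "na\x{ef}ve \x{2603}";            # "naïve" plus U+2603 SNOWMAN, which Windows-1252 lacks
my $lossy = encode("Windows-1252", $title);   # the SNOWMAN silently becomes "?"
my $strict = eval { encode("Windows-1252", $title, Encode::FB_CROAK) }
    or warn "not representable in Windows-1252: $@";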
You can use the following line to simply get rid of the warning. This assumes that you want to use UTF8, which shouldn't normally be a problem.
binmode(STDOUT, ":encoding(utf8)");
I'm running Perl 5.10.0 and Postgres 8.4.3, and inserting strings into a database, which is behind DBIx::Class.
These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately, some of these strings are bad, containing malformed UTF-8, so when I run it I get an exception:
DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5
I thought that I could simply ignore the invalid ones and worry about the malformed UTF-8 later, so I used this code, which should flag and ignore the bad titles:
if (not utf8::valid($title)) {
    $title = "Invalid UTF-8";
}
$data->title($title);
$data->update();
However Perl seems to think that the strings are valid, but it still throws the exceptions.
How can I get Perl to detect the bad UTF-8?
First off, please follow the documentation - the utf8 module should only be used in the 'use utf8;' form to indicate that your source code is UTF-8 instead of Latin-1. Don't use any of the utf8 functions.
Perl makes the distinction between bytes and UTF-8 strings. In byte mode, Perl doesn't know or care what encoding you are using, and will use Latin-1 if you print it. Take for example the Euro sign (€). In UTF-8 this is 3 bytes, 0xE2, 0x82, 0xAC. If you print the length of these bytes, Perl will return 3. Again, it doesn't care about the encoding. It can be any bytes or any encoding, legal or illegal.
If you use the Encode module and call Encode::decode("UTF-8", $bytes), you will get a new string which has the so-called UTF8 flag set. Perl now knows your string is in UTF-8, and will return a length of 1.
The problem is that utf8::valid only applies to the second type of string. Your strings are probably in the first form, byte mode, and utf8::valid just returns true for anything in byte form. This is documented in the perldoc.
The solution is to get Perl to decode your byte strings as UTF-8, and detect any errors. This can be done with FB_CROAK as brian d foy explains:
use Encode qw(decode FB_CROAK);
my $ustring =
    eval { decode( 'UTF-8', $byte_string, FB_CROAK ) }
    or die "Could not decode string: $@";
You can then catch that error and skip those invalid strings.
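Applied to the code from the question, that might look like this (a sketch; $data and $title are the question's variables):
use Encode qw(decode FB_CROAK);
my $ustring = eval { decode('UTF-8', $title, FB_CROAK) };
if (!defined $ustring) {          # decode croaked: malformed UTF-8
    $title = "Invalid UTF-8";     # flag it, as in the original code
}
$data->title($title);
$data->update();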
Or if you know your code is mostly UTF-8 with a few invalid sequences here and there, you can use:
my $ustring = decode( 'UTF-8', $byte_string );
which uses the default mode of FB_DEFAULT, replacing invalid characters with U+FFFD, the Unicode REPLACEMENT CHARACTER (diamond with question mark in it).
You can then pass the string directly to your database driver in most cases. Some drivers may require you to re-encode the string back to byte form first:
my $byte_string = encode('UTF-8', $ustring);
There are also regexes online that you can use to check for valid UTF-8 sequences before calling decode (check other Stack Overflow answers). If you use those regexes, you don't need to do any encoding or decoding.
Finally, please use UTF-8 rather than utf8 in your calls to decode. The latter is more lax and lets some invalid UTF-8 sequences (such as sequences encoding values outside the Unicode range) through.
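To see that laxness in action, here is a sketch; the three bytes below encode a UTF-16 surrogate, which strict UTF-8 rejects:
use Encode qw(decode FB_CROAK);
my $bytes = "\xED\xA0\x80";   # would decode to U+D800, a lone surrogate
my $lax = decode('utf8', $bytes);                          # accepted by Perl's lax utf8
my $strict = eval { decode('UTF-8', $bytes, FB_CROAK) };   # croaks
print defined $strict ? "accepted\n" : "rejected by strict UTF-8\n";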
How are you getting your strings? Are you sure that Perl thinks they are UTF-8 already? If they aren't decoded yet (that is, they're still octets in some encoding), you need to do that yourself:
use Encode qw(decode FB_CROAK);
my $ustring =
    eval { decode( 'utf8', $byte_string, FB_CROAK ) }
    or die "Could not decode string: $@";
Better yet, if you know that your source of strings is already UTF-8, you need to read that source as UTF-8. Look at the code you have that gets the strings to see if you are doing that properly.
As the documentation for utf8::valid points out, it returns true if the string is marked as UTF-8 and it's valid UTF-8, or if the string isn't UTF-8 at all. Although it's impossible to tell without seeing the code in context and knowing what the data is, most likely what you want isn't the "valid utf8" check at all; probably you just need to do
$data->title( Encode::encode("UTF-8", $title) )