I have to decode some base64 strings using Perl, and I want to know whether the decode succeeded or not.
How can I tell if the decode is OK? What will happen if my decode fails?
There is no "decode failed" with MIME::Base64::decode_base64. It simply ignores anything which does not fit: characters which are not valid base64 characters, incomplete padding at the end, or any data following the end marker '='. Thus, it will always return something, and in the worst case this will be an empty string.
Note that this behavior is not even wrong. At least some of the various Base64 standards explicitly require invalid characters to be skipped and none defines error handling in case of incomplete padding or data after '='. Still, the output of MIME::Base64 might be different compared to other implementations in case of invalid data.
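A short demonstration of that behavior (the input strings are just illustrative):

use MIME::Base64 qw(decode_base64);

my $ok      = decode_base64("aGVsbG8=");      # "hello"
my $skipped = decode_base64("aGVs!!!bG8=");   # also "hello": invalid characters are skipped
my $trailer = decode_base64("aGVsbG8=junk");  # still "hello": data after '=' is never decoded
my $empty   = decode_base64("!!!");           # "": nothing decodable, the worst case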
When using MIME::Base64's decode_base64, the decode is always deemed to be successful. Disallowed characters are ignored.
You could strictly verify that you have a valid base64 string using the following:

# $s holds the string to check.
my $c1 = '[A-Za-z0-9+/]';       # any character of the base64 alphabet
my $c2 = '[AQgw]';              # second char of a 2-char final group (its low 4 bits must be zero)
my $c3 = '[AEIMQUYcgkosw048]';  # third char of a 3-char final group (its low 2 bits must be zero)

die "Invalid data\n"
    if $s !~ m{^(?:$c1{4})*+(?>$c1(?>$c2==|$c1$c3=)|)\z};
Whitespace is often used in the middle, so you might want to allow whitespace. (In fact, encode_base64 includes whitespace in its output by default!)
The trailing = padding is often left out, so you might want to allow missing = (a tolerant variant is sketched below).
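For example, a more tolerant check might look like this (a sketch only, assuming you want to accept both embedded whitespace and unpadded input; it drops the strict trailing-bit check from the version above):

(my $t = $s) =~ s/\s+//g;   # drop embedded whitespace (encode_base64 adds newlines by default)
my $c1 = '[A-Za-z0-9+/]';
die "Invalid data\n"
    if $t !~ m{^(?:$c1{4})*(?:$c1{2}(?:==)?|$c1{3}=?)?\z};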
If you're worried about data corruption, include a hash of the data with the data.
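If you go that route, a minimal sketch might look like this (assuming Digest::SHA, which ships with Perl; the '|' separator is arbitrary):

use Digest::SHA  qw(sha256_hex);
use MIME::Base64 qw(encode_base64 decode_base64);

my $data = "payload bytes";

# Sender: ship a digest alongside the encoded payload.
my $packet = sha256_hex($data) . '|' . encode_base64($data, '');

# Receiver: decode, then verify the digest to detect corruption.
my ($digest, $b64) = split /\|/, $packet, 2;
my $decoded = decode_base64($b64);
die "Data corrupted\n" unless sha256_hex($decoded) eq $digest;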
I have the following function that converts Unicode to HTML entities, but if I run the function again over its own output, it will not leave the already-converted HTML entities intact. How can I get the function to leave them alone?
sub convert_unicode {
    use HTML::Entities;
    use Encode;

    my $str = shift;
    Encode::_utf8_off($str);
    return encode_entities(decode('utf8', $str));
}
What you're asking for is the ability to safely double-encode characters. Some encodings allow this. HTML character encoding does not, because it uses certain characters, like &, to do the encoding, and it cannot tell the difference between a special character being used for encoding and one that needs to be encoded.
For example...
use HTML::Entities;
use v5.10;
say encode_entities("&foo");
That produces &amp;foo. If we encode it again it produces &amp;amp;foo, because & is a special character which it faithfully encodes. It does not know that &amp; is an already encoded &, so it treats it as a literal & and encodes it.
You could write your own custom HTML encoding function that assumes &xxx; (and its variants) are already encoded, but that's just a guess. You can't actually tell a literal &foo; and an encoded &foo; apart. It will break with, for example, old school Perl code like &function;. Maybe you can be super clever and use an array of objects to indicate which parts are encoded and have the whole thing overload stringification so it looks like a string, and so long as everything carefully preserves that object that looks like a string it'll work...
And now we're into the lava flow anti-pattern where rather than fixing bad design, more complex and bad design is layered on top of it. Trying to "fix" that will just create more problems. The real problem lies deeper.
The real problem is that you're encoding multiple times. This probably means you've welded your formatting and your functionality together. For example...
sub get_user_name {
    my $uid = shift;
    my $name = ...do a bunch of work to get the user name...
    return encode_entities($name);
}
By HTML encoding the data, a function like this makes assumptions about how the data is going to be used. It limits its use to just HTML. If all your functions do this, you have a double encoding problem.
Then maybe you have something like this:
sub do_something {
    my $uid = shift;

    # $name is already HTML encoded.
    my $name = get_user_name($uid);

    my $stuff = ...something incorporating $name...

    # Whoops, the user name is double encoded.
    return encode_entities($stuff);
}
The answer is to leave the HTML formatting and encoding until the last minute. Ideally don't do it at all, just work with data and let an HTML template system take care of it. Template Toolkit, for example.
This also provides a clean separation between the formatting and the code, so now non-programmers can work on the formatting using a documented template system.
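For instance, here is a minimal sketch using Template Toolkit's built-in html filter to escape at output time (the template text and variable names are made up for illustration):

use Template;

my $tt = Template->new or die Template->error;

# The template, not the application code, decides how data is escaped.
my $template = '<p>Hello, [% name | html %]!</p>';

$tt->process(\$template, { name => 'Alice & Bob' }, \my $output)
    or die $tt->error;

print $output;   # <p>Hello, Alice &amp; Bob!</p>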
All the documentation directs me to Unicode support, yet I don't think my request has anything to do with Unicode. I want to work with raw bytes within the context of a single scalar; I need to be able to figure out its length (in bytes), take substrings of it (in bytes), and write the bytes to disk and over the network. Is there an easy way to do this, without treating the bytes as being in any sort of encoding in Perl?
EDIT
More explicitly,
my $data = "Perl String, unsure of encoding and don't need to know";
my @data_chunked_into_1024_bytes_each = ???;
Perl strings are, conceptually, strings of characters, which are non-negative integers that (normally) represent Unicode code points. A byte string, in Perl, is just a string in which all the characters have values less than 256.
(That's the conceptual view. The internal representation is somewhat more complicated, as the perl interpreter tries to store byte strings — in the above sense — as actual byte strings, while using a generalized UTF-8 encoding for strings that contain character values of 256 or higher. But this is all supposed to be transparent to the user, and in fact mostly is, except for some ugly historical corner cases like the bitwise not (~) operator.)
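A small illustration of that distinction, using nothing beyond core Perl:

my $byte_string = "\x41\xB5";      # two characters, both below 256: a byte string
my $wide_string = "\x{263A}";      # one character above 255: not a byte string

print length($byte_string), "\n";  # 2 -- length counts characters
print length($wide_string), "\n";  # 1 -- one character, however perl stores it internally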
As for how to turn a general string into a byte string, that really depends on what the string you have contains and what the byte string is supposed to contain:
If your string already is a string of bytes — e.g. if you read it from a file in binary mode — then you don't need to do anything. The string shouldn't contain any characters above 255 to begin with, and if it does, that's an error and will probably be reported as such by whatever code consumes the bytes.
Similarly, if your string is supposed to encode text in the ASCII or ISO-8859-1 encodings (which encode the 7- and 8-bit subsets of Unicode respectively), then you don't need to do anything: any characters up to 255 are already correctly encoded, and any higher values are invalid for those encodings.
If your input string contains (Unicode) text that you want to encode in some other encoding, then you'll need to convert the string to that encoding. The usual way to do that is by using the Encode module, like this:
use Encode;
my $byte_string = encode( "name of encoding", $text_string );
Obviously, you can convert the byte string back to the corresponding character string with:
use Encode;
my $text_string = decode( "name of encoding", $byte_string );
For the special case of the UTF-8 encoding, it's also possible to use the built-in utf8::encode() function instead of Encode::encode():
utf8::encode( $string );
which does essentially the same thing as:
use Encode;
$string = encode( "utf8", $string );
Note that, unlike Encode::encode(), the utf8::encode() function modifies the input string directly. Also note that the "utf8" above refers to Perl's extended UTF-8 encoding, which allows values outside the official Unicode range; for strictly standards-compliant UTF-8 encoding, use "utf-8" with a hyphen (see Encode documentation for the gory details). And, yes, there's also a utf8::decode() function that does pretty much what you'd expect.
If I understood your question correctly, what you want is the pack/unpack functions: http://perldoc.perl.org/functions/pack.html
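For instance, a minimal sketch of the 1024-byte chunking asked about above, using unpack (the 'a1024' template takes up to 1024 bytes at a time):

my $data   = "x" x 5000;                 # any byte string
my @chunks = unpack '(a1024)*', $data;   # split into 1024-byte pieces
# @chunks has 5 elements here; the last holds the remaining 904 bytes
print scalar(@chunks), "\n";             # 5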
As long as your string doesn't contain characters above code point 255, it will mostly work as a plain byte string, with length and substr operating on bytes. Additionally, most output functions like print expect octets/bytes by default and will actually complain if you try to stuff anything else into them.
You may need to explicitly encode/decode your output if it is known to be in some encoding, but more details can only be added if you ask another specific question for each problematic part of your program.
I am using Perl to load some 'macro' files. These macros can, however, be encoded in various encodings, so there is a directive defined for users writing their macros (i.e.
#encoding iso-8859-2
at the beginning of the macro).
Every time this directive is encountered in the macro, a function setting the encoding is called; it looks something like this:
sub change_encoding {
    my ($file_handle, $encoding) = @_;
    $file_handle->flush();
    binmode($file_handle);                         # get rid of IO layers
    binmode($file_handle, ":encoding($encoding)");
}
The problem is that when I read the macro using the standard

while ($line = <$file_handle>) {
    process_macro($line);
}
I get messages saying "utf8 "\xXY" does not map to Unicode", but only if characters with diacritics are near the #encoding directive. I tried several examples, and I was able to get half of the string as \xXY codes and the other half as correctly decoded characters, like here:
sub macro5_fn {
    print "\xBElu\xBBou\xE8k\xFD k\xF9\xF2 úpěl ďábelské ódy\n";
}
If I put more comments before the function, all the characters are OK:
sub macro5_fn {
    print "žluťoučký kůň úpěl ďábelské ódy\n";
}
Simply said, the number of correctly decoded characters depends on the distance of these characters from the #encoding directive, the ones that are close are not decoded correctly.
It seems to me that this is an issue of Perl and PerlIO (not) flushing the buffer. Or am I doing something wrong?
Thank you for your answers.
The problem is that <> reads more than just one line at a time, so the next line or so is being interpreted under the old encoding before you ever see the #encoding directive for the new one.
Your best bet is probably to read the file in binary mode and use the Encode module to decode each line from the current encoding.
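A rough sketch of that approach (process_macro and the iso-8859-2 default come from the question; $macro_file is a made-up name):

use Encode qw(decode);

open my $fh, '<:raw', $macro_file or die "Can't open macro file: $!";

my $encoding = 'iso-8859-2';   # whatever your default is
while (my $line = <$fh>) {
    # React to the directive immediately: no already-read lines are
    # sitting in an :encoding() layer's buffer under the old encoding.
    if ($line =~ /^#encoding\s+(\S+)/) {
        $encoding = $1;
        next;
    }
    process_macro(decode($encoding, $line));
}

Reading in :raw mode splits lines on "\n" bytes, which is safe for 8-bit encodings and for UTF-8; multibyte encodings like UTF-16 would need more care.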
I've tried everything Google and StackOverflow have recommended (that I could find) including using Encode. My code works but it just uses UTF8 and I get the wide character warnings. I know how to work around those warnings but I'm not using UTF8 for anything else so I'd like to just convert it and not have to adapt the rest of my code to deal with it. Here's my code:
my $xml = XMLin($content);

# Populate the @titles array with each item title.
my @titles;
for my $item (@{$xml->{channel}->{item}}) {
    my $title = Encode::decode_utf8($item->{title});
    #my $title = $item->{title};
    #utf8::downgrade($title, 1);
    Encode::from_to($title, 'utf8', 'iso-8859-1');
    push @titles, $title;
}
return @titles;
Commented out you can see some of the other things I've tried. I'm well aware that I don't know what I'm doing here. I just want to end up with a plain old ASCII string though. Any ideas would be greatly appreciated. Thanks.
The answer depends on how you want to use the title. There are 3 basic ways to go:
Bytes that represent a UTF-8 encoded string.
This is the format that should be used if you want to store the UTF-8 encoded string outside your application, be it on disk or sending it over the network or anything outside the scope of your program.
A string of Unicode characters.
The concept of characters is internal to Perl. When you call Encode::decode_utf8, Perl attempts to convert a bunch of bytes into a string of characters as Perl sees them. The Perl VM (and the programmer writing Perl code) cannot externalize that concept except by decoding UTF-8 bytes on input and encoding them to UTF-8 bytes on output. For example, say your program receives two bytes as input that you know represent UTF-8 encoded text, say 0xC3 0xB6. In that case decode_utf8 returns a representation that, instead of two bytes, sees one character: ö.
You can then proceed to manipulate that string in Perl. To illustrate the difference further, consider the following code:
use v5.10;
use Encode qw(decode_utf8);

my $bytes = "\xC3\xB6";
say length($bytes);    # prints "2"

my $string = decode_utf8($bytes);
say length($string);   # prints "1"
The special case of ASCII, a subset of UTF-8.
ASCII is a very small subset of Unicode, where characters in that range are represented by a single byte. Converting Unicode into ASCII is an inherently lossy operation, as most Unicode characters are not ASCII characters. When trying to coerce a Unicode string into ASCII, you're forced either to drop every character in your string which is not in ASCII, or to map Unicode characters to their closest ASCII equivalents (which isn't possible in the vast majority of cases).
Since you have wide character warnings, it means that you're trying to manipulate (possibly output) Unicode characters that cannot be represented as ASCII or ISO-8859-1.
If you do not need to manipulate the title from your XML document as a string, I'd suggest you leave it as UTF-8 bytes (I'd mention that you should be careful not to mix bytes and characters in strings). If you do need to manipulate it, then decode, manipulate, and on output encode it in UTF-8.
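As a minimal sketch of that workflow (reusing $item from the question's loop; the whitespace squashing is just a stand-in for whatever manipulation you need):

use Encode qw(decode_utf8 encode_utf8);

my $title = decode_utf8($item->{title});   # UTF-8 bytes -> Perl characters
$title =~ s/\s+/ /g;                       # manipulate the text as characters
my $bytes = encode_utf8($title);           # characters -> UTF-8 bytes for output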
For further reading, please use perldoc to study perlunitut, perlunifaq, perlunicode, perluniintro, and Encode.
Although this is an old question, I just spent several hours (!) trying to do more or less the same thing! That is: read data from a UTF-8 XML file, and convert that data into the Windows-1252 codepage (I could also have used Latin1, ISO-8859-1 etc.) in order to be able to create filenames with accented letters.
After much experimentation, and even more searching, I finally managed to get the conversion working. The "trick" is to use Encode::encode instead of Encode::decode.
For example, given the code in the original question, the correct (or at least one :-) way to convert from UTF-8 would be:
my $title = Encode::encode("Windows-1252", $item->{title});
or
my $title = Encode::encode("ISO-8859-1", $item->{title});
or
my $title = Encode::encode("<your-favourite-codepage-here>", $item->{title});
I hope this helps others having similar problems!
You can use the following line to simply get rid of the warning. This assumes that you want to use UTF8, which shouldn't normally be a problem.
binmode(STDOUT, ":encoding(utf8)");
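For example (a small sketch; the string is arbitrary Czech text with characters above 255):

binmode(STDOUT, ":encoding(utf8)");

# Without the layer above, printing characters above 255 triggers
# "Wide character in print" warnings; with it, they are encoded on output.
print "\x{017E}lu\x{0165}ou\x{010D}k\x{00FD}\n";   # prints "žluťoučký"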
I'm running Perl 5.10.0 and Postgres 8.4.3, and I'm inserting strings into a database that sits behind DBIx::Class.
These strings should be in UTF-8, and therefore my database is running in UTF-8. Unfortunately some of these strings are bad, containing malformed UTF-8, so when I run it I get an exception:
DBI Exception: DBD::Pg::st execute failed: ERROR: invalid byte sequence for encoding "UTF8": 0xb5
I thought that I could simply ignore the invalid ones, and worry about the malformed UTF-8 later, so using this code, it should flag and ignore the bad titles.
if (not utf8::valid($title)) {
    $title = "Invalid UTF-8";
}
$data->title($title);
$data->update();
However Perl seems to think that the strings are valid, but it still throws the exceptions.
How can I get Perl to detect the bad UTF-8?
First off, please follow the documentation - the utf8 module should only be used in the 'use utf8;' form to indicate that your source code is UTF-8 instead of Latin-1. Don't use any of the utf8 functions.
Perl makes the distinction between bytes and UTF-8 strings. In byte mode, Perl doesn't know or care what encoding you are using, and will use Latin-1 if you print it. Take for example the Euro sign (€). In UTF-8 this is 3 bytes, 0xE2, 0x82, 0xAC. If you print the length of these bytes, Perl will return 3. Again, it doesn't care about the encoding. It can be any bytes or any encoding, legal or illegal.
If you use the Encode module and call Encode::decode("UTF-8", $bytes), you will get a new string which has the so-called UTF8 flag set. Perl now knows your string is in UTF-8, and will return a length of 1.
The problem is that utf8::valid only applies to the second type of string. Your strings are probably in the first form, byte mode, and utf8::valid just returns true for anything in byte form. This is documented in the perldoc.
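You can see this for yourself (a small sketch; 0xB5 is the byte from the Postgres error message):

my $bytes = "\xb5";   # a lone 0xB5 byte: not well-formed UTF-8
print utf8::valid($bytes) ? "valid\n" : "invalid\n";   # prints "valid"!

# utf8::valid asks "is this string internally consistent?", not
# "are these bytes well-formed UTF-8?", so byte strings always pass.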
The solution is to get Perl to decode your byte strings as UTF-8 and to detect any errors. This can be done with FB_CROAK, as brian d foy explains:

use Encode qw(decode FB_CROAK);

my $ustring =
    eval { decode( 'UTF-8', $byte_string, FB_CROAK ) }
        or die "Could not decode string: $@";
You can then catch that error and skip those invalid strings.
Or if you know your code is mostly UTF-8 with a few invalid sequences here and there, you can use:
my $ustring = decode( 'UTF-8', $byte_string );
which uses the default mode of FB_DEFAULT, replacing invalid characters with U+FFFD, the Unicode REPLACEMENT CHARACTER (diamond with question mark in it).
You can then pass the string directly to your database driver in most cases. Some drivers may require you to re-encode the string back to byte form first:
my $byte_string = encode('UTF-8', $ustring);
There are also regexes online that you can use to check for valid UTF-8 sequences before calling decode (check other Stack Overflow answers). If you use those regexes, you don't need to do any encoding or decoding.
Finally, please use "UTF-8" rather than "utf8" in your calls to decode. The latter is more lax and lets some invalid UTF-8 sequences (such as sequences outside the Unicode range) through.
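A quick illustration of the difference (a sketch; see the Encode documentation for the exact rules):

use Encode qw(encode FB_CROAK);

my $char = chr(0x110000);   # one past the top of the Unicode range

my $lax = encode('utf8', $char);                        # succeeds: Perl's extended utf8
my $ok  = eval { encode('UTF-8', $char, FB_CROAK); 1 }; # fails: strict UTF-8 rejects it
print $ok ? "strict ok\n" : "strict rejected: $@";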
How are you getting your strings? Are you sure that Perl thinks that they are UTF-8 already? If they aren't decoded yet (that is, octets interpreted as some encoding), you need to do that yourself:
use Encode qw(decode FB_CROAK);

my $ustring =
    eval { decode( 'utf8', $byte_string, FB_CROAK ) }
        or die "Could not decode string: $@";
Better yet, if you know that your source of strings is already UTF-8, you need to read that source as UTF-8. Look at the code you have that gets the strings to see if you are doing that properly.
As the documentation for utf8::valid points out, it returns true if the string is marked as UTF-8 and it's valid UTF-8, or if the string isn't UTF-8 at all. Although it's impossible to tell without seeing the code in context and knowing what the data is, most likely what you want isn't the "valid utf8" check at all; probably you just need to do
$data->title( Encode::encode("UTF-8", $title) )