Reading unicode chars on the byte level - perl

Suppose I wanted to detect unicode characters and encode them using \u notation. If I had to use a byte array, are there simple rules I can follow to detect groups of bytes that belong to a single character?
I am referring to UTF-8 bytes that need to be encoded for an ASCII-only receiver. At the moment, non-ASCII-printable characters are stripped: s/[^\x20-\x7e\r\n\t]//g.
I want to improve this functionality to write \u0000 notation.

You need to have Unicode characters, so start by decoding your byte array.
use Encode qw( decode );
my $decoded_text = decode("UTF-8", $encoded_text);
Only then can you escape Unicode characters.
( my $escaped_text = $decoded_text ) =~
s/([^\x0A\x20-\x5B\x5D-\x7E])/sprintf("\\u%04X", ord($1))/eg;
For example,
$ perl -CSDA -MEncode=decode -E'
my $encoded_text = "\xC3\x89\x72\x69\x63\x20\xE2\x99\xA5\x20\x50\x65\x72\x6c";
my $decoded_text = decode("UTF-8", $encoded_text);
say $decoded_text;
( my $escaped_text = $decoded_text ) =~
s/([^\x0A\x20-\x5B\x5D-\x7E])/sprintf("\\u%04X", ord($1))/eg;
say $escaped_text;
'
Éric ♥ Perl
\u00C9ric \u2665 Perl

Related

How can I escape a string in Perl for LDAP searching?

I want to escape a string, per RFC 4515. So, the string "u1" would be transformed to "\75\31", that is, the ordinal value of each character, in hex, preceded by backslash.
Has to be done in Perl. I already know how to do it in Python, C++, Java, etc., but Perl is baffling.
Also, I cannot use Net::LDAP and I may not be able to add any new modules, so, I want to do it with basic Perl features.
Skimming through RFC 4515: this encoding escapes the individual octets of multi-byte UTF-8 characters, not codepoints. So, here is something that works with non-ASCII text too:
#!/usr/bin/env perl
use strict;
use warnings;
use feature qw/say/;
sub valueencode ($) {
# Unpack format returns octets of UTF-8 encoded text
my @bytes = unpack "U0C*", $_[0];
sprintf '\%02x' x @bytes, @bytes;
}
say valueencode 'u1';
say valueencode "Lu\N{U+010D}i\N{U+0107}"; # Lučić, from the RFC 4515 examples
Example:
$ perl demo.pl
\75\31
\4c\75\c4\8d\69\c4\87
Or an alternative using the vector flag:
use Encode qw/encode/;
sub valueencode ($) {
sprintf '\%*vx', "\\", encode('UTF-8', $_[0]);
}
Finally, a smarter version that only escapes ASCII characters when it has to (and multi-byte characters, even though on a closer read of the RFC they don't actually need to be escaped if they're valid UTF-8):
# Encode according to RFC 4515 valueencoding grammar rules:
#
# Text is UTF-8 encoded. Bytes can be escaped with the sequence
# \XX, where the X's are hex digits.
#
# The characters NUL, LPAREN, RPAREN, ASTERISK and BACKSLASH all MUST
# be escaped.
#
# Bytes > 0x7F that aren't part of a valid UTF-8 sequence MUST be
# escaped. This version assumes there are no such bytes and that input
# is an ASCII or Unicode string.
#
# Single bytes and valid multibyte UTF-8 sequences CAN be escaped,
# with each byte escaped separately. This version escapes multibyte
# sequences, to give ASCII results.
sub valueencode ($) {
my $encoded = "";
for my $byte (unpack 'U0C*', $_[0]) {
if (($byte >= 0x01 && $byte <= 0x27) ||
($byte >= 0x2B && $byte <= 0x5B) ||
($byte >= 0x5D && $byte <= 0x7F)) {
$encoded .= chr $byte;
} else {
$encoded .= sprintf '\%02x', $byte;
}
}
return $encoded;
}
This version returns the strings 'u1' and 'Lu\c4\8di\c4\87' from the above examples.
In short, one way is just as the question says: split the string into characters, get their ordinals, then convert them to hex; then put it all back together. I don't know of a built-in way to get the \nn format so I'd make it 'by hand'. For instance
my $s = join '', map { sprintf '\%x', ord } split //, 'u1';
Or use vector flag %v to treat the string as a "vector" of integers
my $s = sprintf '\%*vx', '\\', 'u1';
With %v the string is broken up into the numerical representation of its characters, each is converted (%x), and they're joined back with . between them. The (optional) * allows us to specify our own string by which to join them instead, \ (escaped) here.
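As a quick sanity check, the vector-flag one-liner can be verified against the 'u1' example from the question (a minimal sketch, not part of the original answer):

```perl
use strict;
use warnings;

# The %v flag treats the string as a vector of ordinals; the * consumes
# an extra argument, our own join string (a backslash), instead of the
# default ".".
my $s = sprintf '\%*vx', '\\', 'u1';
print "$s\n";  # \75\31  ('u' is 0x75, '1' is 0x31)
```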
This can also be done with pack + unpack, see the link below. Also see that page if there is a wide range of input characters.†
See ord and sprintf for details.
† If there is non-ASCII input then you may need to encode it first to get octets, if octets are to be escaped (and not whole codepoints):
use Encode qw(encode);
my $s = sprintf '\%*vx', '\\', encode('UTF-8', $input);
See the linked page for more.

Is it guaranteed that trailing bytes in mbcs encodings are in specific range?

I need to read a text file which contains strings in arbitrary MBCS encodings. The format of the file (simplified) is like this:
CODEPAGE "STRING"
CODEPAGE STRING
...
where CODEPAGE can be any MBCS codepage: UTF-8, cp1251 (Cyrillic), cp932 (Japanese), etc.
I can't decode the whole file in one call to MultiByteToWideChar. I need to extract string between quotes or until space or carriage return and call MultiByteToWideChar on extracted string.
But in MBCS (multi-byte character sets) one character can be represented with more than one byte. If I want to find a Latin 'A' in a multi-byte encoded file, I can't just search for code 65, because 65 can be a trailing byte in some encoding sequence.
So I'm not sure if I'm allowed to search for '"' or space or CR in an MBCS string. I browsed several codepages (for example the Chinese 936 codepage: https://ssl.icu-project.org/icu-bin/convexp?conv=windows-936-2000&s=ALL) and as far as I can see all trailing bytes start from 0x40, so it's safe to scan the file for punctuation characters. But is there some guarantee of that for any codepage?
Analyse which octets can occur in encoded multi-byte sequences, discarding the leading octet of each. The result is 0x40..0x7E, 0x80..0xFE.
#!/usr/bin/env perl
use Encode qw(encode);
my @encodings = qw(
cp1006 cp1026 cp1047 cp1250 cp1251 cp1252 cp1253 cp1254 cp1255 cp1256
cp1257 cp1258 cp37 cp424 cp437 cp500 cp737 cp775 cp850 cp852 cp855 cp856
cp857 cp858 cp860 cp861 cp862 cp863 cp864 cp865 cp866 cp869 cp874 cp875
cp932 cp936 cp949 cp950
);
my %continuation_octets;
for my $e (@encodings) {
for my $c (0..0x10_ffff) {
my $encoded = encode $e, chr($c), sub { -1 };
if ($encoded ne -1 && length($encoded) > 1) {
my @octets = split //, $encoded;
shift @octets;
$continuation_octets{$_}++ for @octets;
}
}
}
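The collected keys of %continuation_octets can then be compressed into contiguous ranges to get the summary quoted above. Here is a self-contained sketch of that post-processing step (the range-compression helper and sample data are mine, not part of the original answer):

```perl
use strict;
use warnings;

# Compress a list of byte values into contiguous ranges -- the same
# post-processing one would apply to keys %continuation_octets.
sub ranges {
    my @ranges;
    for my $o (sort { $a <=> $b } @_) {
        if (@ranges && $o == $ranges[-1][1] + 1) {
            $ranges[-1][1] = $o;        # extend the current range
        }
        else {
            push @ranges, [$o, $o];     # start a new range
        }
    }
    return map { sprintf "0x%02X..0x%02X", @$_ } @ranges;
}

print "$_\n" for ranges(0x40 .. 0x7E, 0x80 .. 0xFE);
# 0x40..0x7E
# 0x80..0xFE
```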

perl - matching at even positions and remove non-printable chars

I have a hex2string from database table dump that is like
"41424320202020200A200B000C"
what I want to do is to match at the even positions and detect the control chars that could break the string when printed, i.e. remove ASCII NUL \x00, \n, \r, \f and \x80 to \xFF, etc.
I tried removing ascii null like
perl -e ' $x="41424320202020200A200B000C"; $x=~s/00//g; print "$x\n" '
but the result is incorrect, as it removed the trailing 0 from the hex value of space \x20 and the leading 0 of newline \x0A, i.e. 20 0A became 2A:
414243202020202A2B0C
what i wanted is
414243202020202020
say unpack("H*", pack("H*", "41424320202020200A200B000C") =~ s/[^\t[:print:]]//arg);
or
my $hex = "41424320202020200A200B000C";
my $bytes = pack("H*", $hex);
$bytes =~ s/[^\t[:print:]]//ag;
$hex = unpack("H*", $bytes);
say $hex;
or
my $hex = "41424320202020200A200B000C";
my $bytes = pack("H*", $hex);
$bytes =~ s/[^\t\x20-\x7E]//g;
$hex = unpack("H*", $bytes);
say $hex;
Solutions using /a and /r require Perl 5.14+.
The above starts with the following string:
41424320202020200A200B000C
It is converted into the following using pack:
ABC␠␠␠␠␠␊␠␋␀␌
The substitution removes all non-ASCII and all non-printable characters except TAB, leaving us with the following:
ABC␠␠␠␠␠␠
It is converted into the following using unpack:
414243202020202020
This solution is not only shorter than the previous solutions, it is also faster because it allocates far fewer variables and only starts the regex match once.
detect the control chars that could break the string when printed.. i.e remove ascii null \x00, \n, \r, \f and \x80 to \xFF, etc..
Building on Hakon's answer (which only strips out NUL bytes, not all the other ones):
#!/usr/bin/perl
use warnings;
use strict;
use feature qw/say/;
my $x="41424320202020200A200B000C";
say $x;
say grep { chr(hex($_)) =~ /[[:print:]\t]/ && hex($_) < 128 } unpack("(A2)*", $x);
gives you
41424320202020200A200B000C
414243202020202020
The character class [:print:] inside a character set matches all printable characters including space (but not control characters like newline and carriage return), and I added in tab as well. Then it also checks that the byte is in the ASCII range (since higher characters are still printable in many locales).
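To see exactly which single bytes that filter keeps, one can run it over all 256 byte values (a quick sketch of my own, assuming an ASCII platform):

```perl
use strict;
use warnings;

# Keep a byte only if its character is printable (or TAB) and ASCII.
my @kept = grep { chr($_) =~ /[[:print:]\t]/ && $_ < 128 } 0 .. 255;

# @kept is TAB (0x09) followed by 0x20..0x7E -- 96 bytes in total.
printf "kept TAB (0x09) plus 0x%02X..0x%02X\n", $kept[1], $kept[-1];
```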
It is possible to work directly with the hex form of the characters, but it's far more complicated. I recommend against using this approach. This answer serves to illustrate why this solution wasn't proposed.
You wish to exclude all characters except the following:
ASCII printables (0x20 to 0x7E)
TAB (0x09)
That means you wish to exclude the following characters:
0x00 to 0x08
0x0A to 0x1F
0x7F to 0xFF
If we group these by leading digit, we get
0x00 to 0x08, 0x0A to 0x0F
0x10 to 0x1F
0x7F
0x80 to 0xFF
We can therefore use the following:
$hex =~ s/\G(?:..)*?\K(?:0[0-8A-Fa-f]|7F|[189A-Fa-f].)//sg; # 5.10+
$hex =~ s/\G((?:..)*?)(?:0[0-8A-Fa-f]|7F|[189A-Fa-f].)/$1/sg; # Slower
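Applied to the question's example string, the 5.10+ pattern does produce the expected result (a quick check, not part of the original answer):

```perl
use strict;
use warnings;
use feature qw/say/;

my $hex = "41424320202020200A200B000C";
# \G keeps each match aligned to an even offset; \K retains the pairs
# already scanned, so only the offending pair is removed.
$hex =~ s/\G(?:..)*?\K(?:0[0-8A-Fa-f]|7F|[189A-Fa-f].)//sg;
say $hex;  # 414243202020202020
```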
You can try split the string into 2 bytes substrings using unpack:
my $x="41424320202020200A200B000C";
say $x;
say join '', grep { $_ !~ /00/} unpack "(A2)*", $x;
Output:
41424320202020200A200B000C
41424320202020200A200B0C

perl- trim utf8 bytes to 'length' and sanitize the data

I have a UTF-8 sequence of bytes and need to trim it to, say, 30 bytes. This may result in an incomplete sequence at the end. I need to figure out how to remove the incomplete sequence.
e.g
$b="\x{263a}\x{263b}\x{263c}";
my $sstr;
print STDERR "length in utf8 bytes =" . length(Encode::encode_utf8($b)) . "\n";
{
use bytes;
$sstr= substr($b,0,29);
}
# After this $sstr contains "\342\230\272\342"
# How to remove \342 from the end?
UTF-8 has some neat properties that allow us to do what you want while dealing with UTF-8 rather than characters. So first, you need UTF-8.
use Encode qw( encode_utf8 );
my $bytes = encode_utf8($str);
Now, to split between codepoints. The UTF-8 encoding of every code point will start with a byte matching 0b0xxxxxxx or 0b11xxxxxx, and you will never find those bytes in the middle of a code point. That means you want to truncate before
[\x00-\x7F\xC0-\xFF]
Together, we get:
use Encode qw( encode_utf8 );
my $max_bytes = 8;
my $str = "\x{263a}\x{263b}\x{263c}"; # ☺☻☼
my $bytes = encode_utf8($str);
$bytes =~ s/^.{0,$max_bytes}(?![^\x00-\x7F\xC0-\xFF])\K.*//s;
# $bytes contains encode_utf8("\x{263a}\x{263b}")
# instead of encode_utf8("\x{263a}\x{263b}") . "\xE2\x98"
Great, yes? Nope. The above can truncate in the middle of a grapheme. A grapheme (specifically, an "extended grapheme cluster") is what someone would perceive as a single visual unit. For example, "é" is a grapheme, but it can be encoded using two codepoints ("\x{0065}\x{0301}"). If you cut between the two code points, it would be valid UTF-8, but the "é" would become a "e"! If that's not acceptable, neither is the above solution. (Oleg's solution suffers from the same problem too.)
Unfortunately, UTF-8's properties are no longer sufficient to help us here. We'll need to grab one grapheme at a time, and add it to the output until we can't fit one.
my $max_bytes = 6;
my $str = "abcd\x{0065}\x{0301}fg"; # abcdéfg
my $bytes = '';
my $bytes_left = $max_bytes;
while ($str =~ /(\X)/g) {
my $grapheme = $1;
my $grapheme_bytes = encode_utf8($grapheme);
$bytes_left -= length($grapheme_bytes);
last if $bytes_left < 0;
$bytes .= $grapheme_bytes;
}
# $bytes contains encode_utf8("abcd")
# instead of encode_utf8("abcde")
# or encode_utf8("abcde") . "\xCC"
First, please don't use bytes (and never make assumptions about Perl's internal encoding). As the documentation says: This pragma reflects early attempts to incorporate Unicode into perl and has since been superseded <...> use of this module for anything other than debugging purposes is strongly discouraged.
To strip an incomplete sequence at the end of a line, assuming it contains octets, use Encode::decode's Encode::FB_QUIET handling mode to stop processing once an invalid sequence is hit, and then just encode the result back:
my $valid = Encode::decode('utf8', $sstr, Encode::FB_QUIET);
$sstr = Encode::encode('utf8', $valid);
Note that if you plan to use this with another encoding in the future, not all encodings may support this handling method.
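Put together on a concrete truncated string, the round-trip looks like this (a self-contained sketch of the approach above):

```perl
use strict;
use warnings;
use Encode ();

# "\x{263a}" encodes to the three octets \xE2\x98\xBA; the final \xE2
# here is the leading byte of a truncated sequence.
my $sstr  = "\xE2\x98\xBA\xE2";
my $valid = Encode::decode('utf8', $sstr, Encode::FB_QUIET);
$sstr = Encode::encode('utf8', $valid);
# $sstr is now "\xE2\x98\xBA" -- the dangling \xE2 is gone
```

Note that with FB_QUIET, decode also removes the consumed octets from its source argument in place; reassigning $sstr afterwards makes that harmless here.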

Perl substr based on bytes

I'm using SimpleDB for my application. Everything goes well except that one attribute is limited to 1024 bytes. So for a long string I have to chop the string into chunks and save them.
My problem is that sometimes my string contains Unicode characters (Chinese, Japanese, Greek) and the substr() function is based on character count, not bytes.
I tried to use use bytes for byte semantics, or
substr(encode_utf8($str), $start, $length), but it does not help at all.
Any help would be appreciated.
UTF-8 was engineered so that character boundaries are easy to detect. To split the string into chunks of valid UTF-8, you can simply use the following:
my $utf8 = encode_utf8($text);
my @utf8_chunks = $utf8 =~ /\G(.{1,1024})(?![\x80-\xBF])/sg;
Then either
# The saving code expects bytes.
store($_) for @utf8_chunks;
or
# The saving code expects decoded text.
store(decode_utf8($_)) for @utf8_chunks;
Demonstration:
$ perl -e'
use Encode qw( encode_utf8 );
# This character encodes to three bytes using UTF-8.
my $text = "\N{U+2660}" x 342;
my $utf8 = encode_utf8($text);
my @utf8_chunks = $utf8 =~ /\G(.{1,1024})(?![\x80-\xBF])/sg;
CORE::say(length($_)) for @utf8_chunks;
'
1023
3
substr operates on 1-byte characters unless the string has the UTF-8 flag on. So this will give you the first 1024 bytes of a decoded string:
substr encode_utf8($str), 0, 1024;
although not necessarily splitting the string on character boundaries. To discard any partial character at the end, you can use:
$str = decode_utf8($str, Encode::FB_QUIET);
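Combining the two steps gives a small helper (my sketch, assembling the pieces from this answer; the sub name is mine):

```perl
use strict;
use warnings;
use Encode qw(encode_utf8 decode_utf8);

# Truncate $text to at most $max UTF-8 bytes, dropping any character
# that was split at the boundary.
sub trim_to_bytes {
    my ($text, $max) = @_;
    my $bytes = substr encode_utf8($text), 0, $max;
    return decode_utf8($bytes, Encode::FB_QUIET);
}

my $trimmed = trim_to_bytes("\x{263a}\x{263b}\x{263c}", 8);
# $trimmed is "\x{263a}\x{263b}" -- 8 bytes only covers two of the
# three 3-byte characters, so the split third character is discarded
```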