CRLF translation with Unicode in Perl - perl

I'm trying to write to a Unicode (UCS-2 Little Endian) file in Perl on Windows, like this:
open my $f, ">$fName" or die "can't write $fName\n";
binmode $f, ':raw:encoding(UCS-2LE)';
print $f "ohai\ni can haz unicodez?\nkthxbye\n";
close $f;
It basically works, except that I no longer get the automatic LF -> CR/LF translation on output that I get with regular text files (the output files contain bare LF). If I leave out :raw or add :crlf in the binmode call, the output file is garbled. I've tried re-ordering the layers (e.g. :encoding before :raw) and can't get it to work. The same problem exists for reading.

This works for me on windows:
open my $f, ">:encoding(UCS-2LE):crlf", "test.txt";
print $f "ohai\ni can haz unicodez?\nkthxbye\n";
close $f;
Yielding UCS-2 LE output in test.txt of
ohai
i can haz unicodez?
kthxbye
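A quick way to confirm the stack is doing the right thing is to dump the raw bytes back. This is a minimal sketch (the file name and the :raw-prefixed variant of the stack are assumptions, not from the answer above) that writes through the layers and checks that each \n came out as the UCS-2LE sequence 0D 00 0A 00:

```perl
use strict;
use warnings;

# Sketch: round-trip check of the layer order (file name is assumed).
# :raw pops any default layers; :crlf on top of :encoding means
# "\n" -> "\r\n" happens before encoding, as intended.
open my $f, '>:raw:encoding(UCS-2LE):crlf', 'test.txt' or die $!;
print $f "ohai\nkthxbye\n";
close $f;

# Reread the raw bytes: every newline should be 0D 00 0A 00.
open my $raw, '<:raw', 'test.txt' or die $!;
my $bytes = do { local $/; <$raw> };
close $raw;
print "ok\n" if $bytes =~ /\x0D\x00\x0A\x00/;
```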

Here is what I have found to work, at least with perl 5.10.1:
Input:
open(my $f_in, '<:raw:perlio:via(File::BOM):crlf', $file);
Output:
open(my $f_out, '>:raw:perlio:encoding(UTF-16LE):crlf:via(File::BOM)', $file);
These handle BOM, CRLF translation, and UTF-16LE encoding/decoding transparently.
Note that according to the perlmonks post below, if trying to specify with binmode() instead of open(), an extra ":pop" is required:
binmode $f_out, ':raw:pop:perlio:encoding(UTF-16LE):crlf';
which my experience corroborates. I was not able to get this to work with the ":via(File::BOM)" layer, however.
References:
http://www.perlmonks.org/?node_id=608532
http://metacpan.org/pod/File::BOM

The :crlf layer does a simple byte mapping of 0x0A -> 0x0D 0x0A (\n -> \r\n) in the output stream, which is generally not valid inside a wide-character encoding, where 0x0A may be only one byte of a multi-byte code unit.
How about using a raw mode but explicitly print the CR?
print $f "ohai\r\ni can haz unicodez?\r\nkthxbye\r\n";
Or if portability is a concern, discover and explicitly use the correct line ending:
## never mind - $/ doesn't work
# print $f "ohai$/i can haz unicodez?$/kthxbye$/";
open DUMMY, '>', 'dummy'; print DUMMY "\n"; close DUMMY;
open DUMMY, '<:raw', 'dummy'; $EOL = <DUMMY>; close DUMMY;
unlink 'dummy';
...
print $f "ohai${EOL}i can haz unicodez?${EOL}kthxbye${EOL}";
Unrelated to the question, but Ωmega asked in a comment about the difference between :raw and :bytes. As documented in perldoc perlio, you can think of :raw as removing all I/O layers, and :bytes as removing a :utf8 layer. Compare the output of these two commands:
$ perl -E 'binmode *STDOUT,":crlf:raw"; say' | od -c
0000000 \n
0000001
$ perl -E 'binmode *STDOUT,":crlf:bytes";say' | od -c
0000000 \r \n
0000002

Related

Setting diamond operator to work in binmode?

Diamond operator (<>) works in text mode by default; is it possible to change binmode for it? It seems the binmode function accepts a handle only.
See perldoc perlopentut:
Binary Files
On certain legacy systems with what could charitably be called terminally
convoluted (some would say broken) I/O models, a file isn't a file--at
least, not with respect to the C standard I/O library. On these old
systems whose libraries (but not kernels) distinguish between text and
binary streams, to get files to behave properly you'll have to bend over
backwards to avoid nasty problems. On such infelicitous systems, sockets
and pipes are already opened in binary mode, and there is currently no way
to turn that off. With files, you have more options.
Another option is to use the "binmode" function on the appropriate handles
before doing regular I/O on them:
binmode(STDIN);
binmode(STDOUT);
while (<STDIN>) { print }
Passing "sysopen" a non-standard flag option will also open the file in
binary mode on those systems that support it. This is the equivalent of
opening the file normally, then calling "binmode" on the handle.
sysopen(BINDAT, "records.data", O_RDWR | O_BINARY)
|| die "can't open records.data: $!";
Now you can use "read" and "print" on that handle without worrying about
the non-standard system I/O library breaking your data. It's not a pretty
picture, but then, legacy systems seldom are. CP/M will be with us until
the end of days, and after.
On systems with exotic I/O systems, it turns out that, astonishingly
enough, even unbuffered I/O using "sysread" and "syswrite" might do sneaky
data mutilation behind your back.
while (sysread(WHENCE, $buf, 1024)) {
syswrite(WHITHER, $buf, length($buf));
}
Depending on the vicissitudes of your runtime system, even these calls may
need "binmode" or "O_BINARY" first. Systems known to be free of such
difficulties include Unix, the Mac OS, Plan 9, and Inferno.
<> is a convenience. If it only iterated through filenames specified on the command line, you could use $ARGV within while (<>) to detect when a new file was opened, binmode it, and then seek back to the beginning. Of course, this does not work in the presence of redirection (console input is a whole other story).
One solution is to detect whether @ARGV contains anything, open each file individually, and default to reading from STDIN. A rudimentary implementation of this using an iterator could be:
#!/usr/bin/env perl
use strict;
use warnings;
use Carp qw( croak );
my $argv = sub {
    @_ or return sub {
        my $done;
        sub {
            $done and return;
            $done = 1;
            binmode STDIN;
            \*STDIN;
        }
    }->();
    my @argv = @_;
    sub {
        @argv or return;
        my $file = shift @argv;
        open my $fh, '<', $file
            or croak "Cannot open '$file': $!";
        binmode $fh;
        $fh;
    };
}->(@ARGV);
binmode STDOUT;
while (my $fh = $argv->()) {
    while (my $line = <$fh>) {
        print $line;
    }
}
Note:
C:\...\Temp> xxd test.txt
00000000: 7468 6973 2069 7320 6120 7465 7374 0a0a this is a test..
Without binmode:
C:\...\Temp> perl -e "print while <>" test.txt | xxd
00000000: 7468 6973 2069 7320 6120 7465 7374 0d0a this is a test..
00000010: 0d0a ..
With the script above:
C:\...\Temp> perl argv.pl test.txt | xxd
00000000: 7468 6973 2069 7320 6120 7465 7374 0a0a this is a test..
Same results using perl ... < test.txt | xxd, or piping text through perl ...

Cannot write UTF-16LE encoded CSV file with Text::CSV_XS Perl module

I want to write a CSV file encoded in UTF-16LE.
However, the output in the file gets messed up. There are strange chinese looking letters: ਍挀攀氀氀㄀⸀㄀㬀挀攀氀氀㄀⸀㈀㬀ഀ.
This looks like off-by-one-byte problem mentioned here: Creating UTF-16 newline characters in Python for Windows Notepad
Other threads about Perl and Text::CSV_XS didn't help.
This is how I try it:
#!perl
use strict;
use warnings;
use utf8;
use Text::CSV_XS;
binmode STDOUT, ":utf8";
my $csv = Text::CSV_XS->new({
    binary     => 1,
    sep_char   => ";",
    quote_char => undef,
    eol        => $/,
});
open my $in, '<:encoding(UTF-16LE)', 'in.csv' or die "in.csv: $!";
open my $out, '>:encoding(UTF-16LE)', 'out.csv' or die "out.csv: $!";
while (my $row = $csv->getline($in)) {
    $_ =~ s/ä/æ/ for @$row; # something will be done to the data...
    $csv->print($out, $row);
}
close $in;
close $out;
in.csv contains some test data and it is encoded in UTF-16LE:
header1;header2;
cell1.1;cell1.2;
äöü2.1;ab"c2.2;
The results looks like this:
header1;header2;਍挀攀氀氀㄀⸀㄀㬀挀攀氀氀㄀⸀㈀㬀ഀ
æöü2.1;abc2.2;਍
It is not an option to switch to UTF-8 as output format (which works fine btw).
So, how do I write valid UTF-16LE encoded CSV files using Text::CSV_XS?
Perl adds :crlf by default on Windows. It's added first, before your :encoding is added.
That means LF⇔CRLF conversion will be performed before decoding on reads, and after encoding on writes. This is backwards.
It ends up working with UTF-8 despite being done backwards because all of the following conditions are met:
The UTF-8 encoding of LF is the same as its Code Point (0A).
The UTF-8 encoding of CR is the same as its Code Point (0D).
0A always refers to LF no matter where they are in the file.
0D always refers to CR no matter where they are in the file.
None of those conditions holds for UTF-16LE.
Fix:
open(my $fh_in, '<:raw:encoding(UTF-16LE):crlf', $qfn_in) or die $!;
open(my $fh_out, '>:raw:encoding(UTF-16LE):crlf', $qfn_out) or die $!;
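To see concretely why the default order is backwards, here is a small sketch (the file names are made up) that writes a single newline through each stack and compares the raw bytes:

```perl
use strict;
use warnings;

# Backwards order: :crlf sits below :encoding, so "\n" is encoded to
# 0A 00 first and the CR byte is then inserted between the code units.
open my $bad, '>:crlf:encoding(UTF-16LE)', 'bad.bin' or die $!;
print $bad "\n";
close $bad;

# Fixed order: :raw pops the default layers, and the :crlf pushed on top
# translates "\n" -> "\r\n" before encoding, giving 0D 00 0A 00.
open my $good, '>:raw:encoding(UTF-16LE):crlf', 'good.bin' or die $!;
print $good "\n";
close $good;
```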

How to output binary files in Perl?

I want to be able to output 0x41, and have it show up as A.
This is what I have tried so far:
my $out;
open $out, ">file.txt" or die $!;
binmode $out;
print $out 0x41;
close $out;
It outputs 65 instead of A in the resulting file. This is not what I want.
I have also read this similar question, but I couldn't transfer the answer over: packing a short results in 2 bytes instead of 1 byte.
You can use chr(0x41).
For larger structures, you can use pack:
pack('c3', 0x41, 0x42, 0x43) # gives "ABC"
Regarding your suspicion of pack, do go read its page - it is extremely versatile. 'c' packs a single byte, 's' (as seen in that question) will pack a two-byte word.
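To make the width difference visible, here is a short sketch; note it uses 'v' (the portable little-endian 16-bit template) rather than the native-endian 's' from that question, so the byte layout is deterministic:

```perl
use strict;
use warnings;

# 'C' packs one unsigned byte; 'v' packs a little-endian 16-bit word.
my $one = pack('C', 0x41);   # "A"       (1 byte)
my $two = pack('v', 0x41);   # "A\x00"   (2 bytes)
printf "%d %d\n", length($one), length($two);   # prints "1 2"
```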
Use the chr function:
print $out chr 0x41
pack takes two kinds of arguments: the first (the template) says how, and how many, data items are to be packed:
perl -e 'printf "|%s|\n",pack("c",0x41,0x42,0x44);'
|A|
perl -e 'printf "|%s|\n",pack("c3",0x41,0x42,0x44);'
|ABD|
perl -e 'my @bytes=(0x41,0x42,0x43,0x48..0x54);
printf "|%s|\n",pack("c".(1+$#bytes),@bytes);'
|ABCHIJKLMNOPQRST|
you could even mix formats in the first part:
perl -e 'printf "|%s|\n",pack("c3B8",0x41,0x42,0x44,"01000001");'
|ABDA|

Why does my Perl script remove characters from the file?

I have some issue with a Perl script. It modifies the content of a file, then reopen it to write it, and in the process some characters are lost. All words starting with '%' are deleted from the file. That's pretty annoying because the % expressions are variable placeholders for dialog boxes.
Do you have any idea why? Source file is an XML with default encoding
Here is the code:
undef $/;
open F, $file or die "cannot open file $file\n";
my $content = <F>;
close F;
$content =~s{status=["'][\w ]*["']\s*}{}gi;
printf $content;
open F, ">$file" or die "cannot reopen $file\n";
printf F $content;
close F or die "cannot close file $file\n";
You're using printf there and it thinks its first argument is a format string. See the printf documentation for details. When I run into this sort of problem, I always ensure that I'm using the functions correctly. :)
You probably want just print:
print FILE $content;
In your example, you don't need to read in the entire file since your substitution does not cross lines. Instead of trying to read and write to the same filename all at once, use a temporary file:
open my($in), "<", $file or die "cannot open file $file\n";
open my($out), ">", "$file.bak" or die "cannot open file $file.bak\n";
while( <$in> )
{
    s{status=["'][\w ]*["']\s*}{}gi;
    print $out $_;
}
rename "$file.bak", $file or die "Could not rename file\n";
This also reduces to this command-line program:
% perl -pi.bak -e 's{status=["\'][\w ]*["\']\s*}{}g' file
Er. You're using printf.
printf interprets "%" as something special.
use "print" instead.
If you have to use printf, use
printf "%s", $content;
Important note:
printf stands for "print formatted", just as it does in C.
fprintf is the equivalent in C for file I/O.
Perl is not C.
And even in C, passing your content as parameter 1 (the format string) gets you shot for security reasons.
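A minimal sketch of the difference (the variable name is made up):

```perl
use strict;
use warnings;

my $content = "100%win";
print $content, "\n";      # safe: print treats '%' as a plain character
printf "%s\n", $content;   # safe: the data is an argument, not the format
# printf $content;         # unsafe: "%w" is parsed as a conversion
```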
Or even
perl -i.bak -pe 's{status=["\'][\w ]*["\']\s*}{}gi;' yourfiles
-e says "there's code following for you to run"
-i.bak says "rename the old file to whatever.bak"
-p adds a read-print loop around the -e code
Perl one-liners are a powerful tool and can save you a lot of drudgery.
If you want a solution that is aware of the XML nature of the docs (i.e., only delete status attributes, and not matching text contents) you could also use XML::PYX:
$ pyx doc.xml | perl -ne'print unless /^Astatus/' | pyxw
That's because you used printf instead of print, and printf doesn't print a literal "%" (it assumes you forgot to type a format code such as %s or %f) unless you explicitly write it as "%%". :-)

How do I read UTF-8 with diamond operator (<>)?

I want to read UTF-8 input in Perl, no matter if it comes from the standard input or from a file, using the diamond operator: while(<>){...}.
So my script should be callable in these two ways, as usual, giving the same output:
./script.pl utf8.txt
cat utf8.txt | ./script.pl
But the outputs differ! Only the second call (using cat) seems to work as designed, reading UTF-8 properly. Here is the script:
#!/usr/bin/perl -w
binmode STDIN, ':utf8';
binmode STDOUT, ':utf8';
while (<>) {
    my @chars = split //, $_;
    print "$_\n" foreach (@chars);
}
How can I make it read UTF-8 correctly in both cases? I would like to keep using the diamond operator <> for reading, if possible.
EDIT:
I realized I should probably describe the different outputs. My input file contains this sequence: a\xCA\xA7b. The method with cat correctly outputs:
a
\xCA\xA7
b
But the other method gives me this:
a
\xC3\x8A
\xC2\xA7
b
Try to use the pragma open instead:
use strict;
use warnings;
use open qw(:std :utf8);
while (<>) {
    my @chars = split //, $_;
    print "$_" foreach (@chars);
}
You need to do this because the <> operator is magical. As you know, it will read from STDIN or from the files in @ARGV. Reading from STDIN causes no problem, as STDIN is already open and binmode works well on it. The problem is when reading from the files in @ARGV: when your script starts and calls binmode, those files are not yet open. binmode sets STDIN to UTF-8, but that IO channel is not used when @ARGV has files. In that case the <> operator opens a new file handle for each file in @ARGV, and each of those handles misses the UTF-8 layer. By using the pragma open you force every newly opened handle to use UTF-8.
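A sketch of the pragma's effect (the file name is assumed): handles opened after use open pick up the layer automatically, so the two raw bytes of U+02A7's UTF-8 form decode to a single character:

```perl
use strict;
use warnings;
use open IN => ':utf8';   # default input layer for handles opened below

# Write the raw UTF-8 bytes of U+02A7 (explicit :raw overrides nothing
# here, since the pragma was set only for input).
open my $out, '>:raw', 'sample.txt' or die $!;
print $out "\xCA\xA7";
close $out;

# A plain open now gets :utf8 from the pragma, so the bytes decode to 1 char.
open my $in, '<', 'sample.txt' or die $!;
my $ch = <$in>;
close $in;
print length($ch), "\n";   # prints "1"
```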
Your script works if you do this:
#!/usr/bin/perl -w
binmode STDOUT, ':utf8';
while (<>) {
    binmode ARGV, ':utf8';
    my @chars = split //, $_;
    print "$_\n" foreach (@chars);
}
The magic filehandle that <> reads from is called *ARGV, and it is
opened when you call readline.
But really, I am a fan of explicitly using Encode::decode and
Encode::encode when appropriate.
You can switch on UTF8 by default with the -C flag:
perl -CSD -ne 'print join("\n",split //);' utf8.txt
The switch -CSD turns on UTF8 unconditionally; if you use simply -C it will turn on UTF8 only if the relevant environment variables (LC_ALL, LC_CTYPE and LANG) indicate so. See perlrun for details.
This is not recommended if you don't invoke perl directly (in particular, it might not work reliably if you pass options to perl from the shebang line). See the other answers in that case.
If you put a call to binmode inside of the while loop, then it will switch the handle to utf8 mode AFTER the first line is read in. That is probably not what you want to do.
Something like the following might work better:
#!/usr/bin/env perl -w
binmode STDOUT, ':utf8';
eof() ? exit : binmode ARGV, ':utf8';
while ( <> ) {
    my @chars = split //, $_;
    print "$_\n" foreach (@chars);
} continue {
    binmode ARGV, ':utf8' if eof && !eof();
}
The call to eof() with parens is magical, as it checks for end of file on the pseudo-filehandle used by <>. It will, if necessary, open the next handle that needs to be read, which typically has the effect of making *ARGV valid, but without reading anything out of it. This allows us to binmode the first file that's read from, before anything is read from it.
Later, eof (without parens) is used; this checks the last handle that was read from for end of file. It will be true after we process the last line of each file from the command line (or when stdin reaches its end).
Obviously, if we've just processed the last line of one file, calling eof() (with parens) opens the next file (if there is one), makes *ARGV valid (if it can), and tests for end of file on that next file. If that next file is present, and isn't at end of file, then we can safely use binmode on ARGV.