Perl - File Encoding and Word Comparison

I have a file with one phrase/term per line, which I read into Perl from STDIN. I have a list of stopwords (like "á", "são", "é") and I want to compare each one of them with each term and remove the term if they are equal. The problem is that I'm not certain of the file's encoding format.
I get this from the file command:
words.txt: Non-ISO extended-ASCII English text
My Linux terminal is in UTF-8 and it shows the right content for some words but not for others. Here is the output for some of them:
condi<E3>
conte<FA>dos
ajuda, mas não resolve
mo<E7>ambique
pedagógico são fenómenos
You can see that the 3rd and 5th lines correctly show the words with accents and special characters, while the others don't. The correct output for the other lines should be: condiã, conteúdos and moçambique.
If I use binmode(STDOUT, utf8) the "incorrect" lines now output correctly while the other ones don't. For example the 3rd line:
ajuda, mas não resolve
What should I do, guys?

I strongly suggest you create a filter that takes a file with lines in mixed encodings and translates them to pure UTF-8. Then instead of
open(INPUT, "< badstuff.txt") || die "open failed: $!";
you would open either the fixed version, or a pipe from the fixer, like:
open(INPUT, "fixit < badstuff.txt |") || die "open failed: $!"
In either event, you would then
binmode(INPUT, ":encoding(UTF-8)") || die "binmode failed";
Then the fixit program could just do this:
use strict;
use warnings;
use Encode qw(decode FB_CROAK);
binmode(STDIN, ":raw") || die "can't binmode STDIN";
binmode(STDOUT, ":utf8") || die "can't binmode STDOUT";
while (my $line = <STDIN>) {
$line = eval { decode("UTF-8", $line, FB_CROAK()) };
if ($@) {
$line = decode("CP1252", $line, FB_CROAK()); # no eval{}!
}
$line =~ s/\R\z/\n/; # fix raw mode reads
print STDOUT $line;
}
close(STDIN) || die "can't close STDIN: $!";
close(STDOUT) || die "can't close STDOUT: $!";
exit 0;
See how that works? Of course, you could change it to default to some other encoding, or have multiple fallbacks. Probably it would be best to take a list of them in @ARGV.
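For instance, a minimal sketch of that @ARGV-driven variant could look like the following; the default fallback list and the pass-through of undecodable lines are my own assumptions, not part of the original fixer:
use strict;
use warnings;
use Encode qw(decode);

# Encodings to try in order; the defaults here are an assumption.
my @encodings = @ARGV ? @ARGV : ("UTF-8", "CP1252");

binmode(STDIN,  ":raw")  || die "can't binmode STDIN";
binmode(STDOUT, ":utf8") || die "can't binmode STDOUT";

LINE: while (my $line = <STDIN>) {
    for my $enc (@encodings) {
        # LEAVE_SRC keeps $line untouched if this attempt croaks partway through.
        my $decoded = eval { decode($enc, $line, Encode::FB_CROAK | Encode::LEAVE_SRC) };
        if (defined $decoded) {
            $decoded =~ s/\R\z/\n/;   # fix raw mode reads
            print STDOUT $decoded;
            next LINE;
        }
    }
    warn "line $. could not be decoded with any of: @encodings\n";
    print STDOUT $line;               # pass the raw line through as a last resort
}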

It works like this:
C:\Dev\Perl :: chcp
Aktive Codepage: 1252.
C:\Dev\Perl :: type mixed-encoding.txt
eins zwei drei Käse vier fünf Wurst
eins zwei drei Käse vier fünf Wurst
C:\Dev\Perl :: perl mixed-encoding.pl < mixed-encoding.txt
eins zwei drei vier fünf
eins zwei drei vier fünf
Where mixed-encoding.pl goes like this:
use strict;
use warnings;
use utf8; # source in UTF-8
use Encode 'decode_utf8';
use List::MoreUtils 'any';
my @stopwords = qw( Käse Wurst );
while ( <> ) { # read octets
chomp;
my @tokens;
for ( split /\s+/ ) {
# Try UTF-8 first. If that fails, assume legacy Latin-1.
my $token = eval { decode_utf8 $_, Encode::FB_CROAK };
$token = $_ if $@;
push @tokens, $token unless any { $token eq $_ } @stopwords;
}
print "#tokens\n";
}
Note that the script doesn't have to be encoded in UTF-8. It's just that if you have funky character data in your script you have to make sure the encoding matches, so use utf8 if your encoding is UTF-8, and don't if it isn't.
Update based on tchrist's sound advice:
use strict;
use warnings;
# source in Latin1
use Encode 'decode';
use List::MoreUtils 'any';
my @stopwords = qw( Käse Wurst );
while ( <> ) { # read octets
chomp;
my @tokens;
for ( split /\s+/ ) {
# Try UTF-8 first. If that fails, assume 8-bit encoding.
my $token = eval { decode utf8 => $_, Encode::FB_CROAK };
$token = decode Windows1252 => $_, Encode::FB_CROAK if $@;
push @tokens, uc $token unless any { $token eq $_ } @stopwords;
}
print "#tokens\n";
}


Perl and utf8 output from file [duplicate]

I have a problem with Perl output: the French word "préféré" is sometimes output as "pr�f�r�":
The sample script :
devel@k0:~/tmp$ cat 02.pl
#!/usr/bin/env perl
use strict;
use warnings;
print "préféré\n";
open( my $fh, '<:encoding(UTF-8)', 'text' ) ;
while ( <$fh> ) { print $_ }
close $fh;
exit;
The execution :
devel@k0:~/tmp$ ./02.pl
préféré
pr�f�r�
devel@k0:~/tmp$ cat text
préféré
devel@k0:~/tmp$ file text
text: UTF-8 Unicode text
Can someone please help me?
Decode your inputs, encode your outputs. You have two bugs related to failure to properly decode and encode.
Specifically, you're missing
use utf8;
use open ":std", ":encoding(UTF-8)";
Details follow.
Perl source code is expected to be ASCII (with 8-bit clean string literals) unless you use use utf8 to tell Perl it's UTF-8.
I believe you have a UTF-8 terminal. We can conclude from the fact that cat 02.pl works that your source code is encoded using UTF-8. This means Perl sees the equivalent of this:
print "pr\x{C3}\x{A9}f\x{C3}\x{A9}r\x{C3}\x{A9}\n"; # C3 A9 = é encoded using UTF-8
You should be using use utf8; so Perl sees the equivalent of
print "pr\x{E9}f\x{E9}r\x{E9}\n"; # E9 = Unicode Code Point for é
You correctly decode the file you read.
The file presumably contains
70 72 C3 A9 66 C3 A9 72 C3 A9 0A # préféré␊ encoded using UTF-8
Because of the encoding layer you add, you are effectively doing
$_ = decode( "UTF-8", "\x{70}\x{72}\x{C3}\x{A9}\x{66}\x{C3}\x{A9}\x{72}\x{C3}\x{A9}\x{0A}" );
or
$_ = "pr\x{E9}f\x{E9}r\x{E9}\n";
This is correct.
Finally, you fail to encode your outputs.
The following does what you want:
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
BEGIN {
binmode( STDIN, ":encoding(UTF-8)" ); # Well, not needed here.
binmode( STDOUT, ":encoding(UTF-8)" );
binmode( STDERR, ":encoding(UTF-8)" );
}
print "préféré\n";
open( my $fh, '<:encoding(UTF-8)', 'text' ) or die $!;
while ( <$fh> ) { print $_ }
close $fh;
But the open pragma makes it a lot cleaner.
The following does what you want:
#!/usr/bin/env perl
use strict;
use warnings;
use utf8;
use open ":std", ":encoding(UTF-8)";
print "préféré\n";
open( my $fh, '<', 'text' ) or die $!;
while ( <$fh> ) { print $_ }
close $fh;
UTF-8 is an interesting problem. At first, your Perl prints correctly because you don't do any UTF-8 handling: you have a UTF-8 encoded string, but Perl itself doesn't really know that it is UTF-8, so it prints it as-is.
So on a UTF-8 terminal everything looks fine, even though that's not really correct.
When you add use utf8; to your source code, you will see that your print now produces the same garbage. But if you have strings containing UTF-8, that's what you should do.
use utf8;
# Now also prints garbage
print "préféré\n";
open my $fh, '<:encoding(UTF-8)', 'text';
while ( <$fh> ) {
print $_;
}
close $fh;
Next: for every input you read from the outside you need to do a decode, and for every output you produce you need to do an encode.
use utf8;
use Encode qw(encode decode);
# Now correct
print encode("UTF-8", "préféré\n");
open my $fh, '<:encoding(UTF-8)', 'text';
while ( <$fh> ) {
print encode("UTF-8", $_);
}
close $fh;
This can be tedious, but you can enable automatic encoding on a filehandle with binmode:
use utf8;
# Activate UTF-8 Encode on STDOUT
binmode STDOUT, ':utf8';
print "préféré\n";
open my $fh, '<:encoding(UTF-8)', 'text';
while ( <$fh> ) {
print $_;
}
close $fh;
Now everything is UTF-8! You can also activate it on STDERR. Remember that if you want to print binary data on STDOUT (for whatever reason) you must disable the layer:
binmode STDOUT, ':raw';
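For example, here is a minimal sketch of temporarily dropping the layer to emit raw bytes and then restoring it; the byte values are purely illustrative:
use utf8;
binmode STDOUT, ':utf8';
print "préféré\n";                        # text goes through the :utf8 layer
binmode STDOUT, ':raw';                   # drop the layer for binary output
print pack('C*', 0x89, 0x50, 0x4E, 0x47); # e.g. the start of a PNG signature, as raw bytes
binmode STDOUT, ':utf8';                  # restore the layer for further text
print "préféré\n";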

Trying to improve Encode::decode warning message: Segfault in $SIG{__WARN__} handler

I am trying to improve the warning message issued by Encode::decode(). Instead of printing the name of the module and the line number in the module, I would like it to print the name of the file being read and the line number in that file where the malformed data was found. To a developer the original message can be useful, but to an end user not familiar with Perl it is probably quite meaningless. The end user would probably rather like to know which file is giving the problem.
I first tried to solve this using a $SIG{__WARN__} handler (which is probably not a good idea), but I get a segfault. Probably a silly mistake, but I could not figure it out:
#! /usr/bin/env perl
use feature qw(say);
use strict;
use warnings;
use Encode ();
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
my $fn = 'test.txt';
write_test_file( $fn );
# Try to improve the Encode::FB_WARN fallback warning message :
#
# utf8 "\xE5" does not map to Unicode at <module_name> line xx
#
# Rather we would like the warning to print the filename and the line number:
#
# utf8 "\xE5" does not map to Unicode at line xx of file <filename>.
my $str = '';
open ( my $fh, "<:encoding(utf-8)", $fn ) or die "Could not open file '$fn': $!";
{
local $SIG{__WARN__} = sub { my_warn_handler( $fn, $_[0] ) };
$str = do { local $/; <$fh> };
}
close $fh;
say "Read string: '$str'";
sub my_warn_handler {
my ( $fn, $msg ) = @_;
if ( $msg =~ /\Qdoes not map to Unicode\E/ ) {
recover_line_number_and_char_pos( $fn, $msg );
}
else {
warn $msg;
}
}
sub recover_line_number_and_char_pos {
my ( $fn, $err_msg ) = @_;
chomp $err_msg;
$err_msg =~ s/(line \d+)\.$/$1/; # Remove period at end of sentence.
open ( $fh, "<:raw", $fn ) or die "Could not open file '$fn': $!";
my $raw_data = do { local $/; <$fh> };
close $fh;
my $str = Encode::decode( 'utf-8', $raw_data, Encode::FB_QUIET );
my ($header, $last_line) = $str =~ /^(.*\n)([^\n]*)$/s;
my $line_no = $str =~ tr/\n//;
++$line_no;
my $pos = ( length $last_line ) + 1;
warn "$err_msg, in file '$fn' (line: $line_no, pos: $pos)\n";
}
sub write_test_file {
my ( $fn ) = @_;
my $bytes = "Hello\nA\x{E5}\x{61}"; # 2 lines ending in iso 8859-1: åa
open ( my $fh, '>:raw', $fn ) or die "Could not open file '$fn': $!";
print $fh $bytes;
close $fh;
}
Output:
utf8 "\xE5" does not map to Unicode at ./p.pl line 27
, in file 'test.txt' (line: 2, pos: 2)
Segmentation fault (core dumped)
Here is another way to locate where the warning fires, with un-buffered sysread
use warnings;
use strict;
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
my $file = 'test.txt';
open my $fh, "<:encoding(UTF-8)", $file or die "Can't open $file: $!";
$SIG{__WARN__} = sub { print "\t==> WARN: @_" };
my $char_cnt = 0;
my $char;
while (sysread($fh, $char, 1)) {
++$char_cnt;
print "$char ($char_cnt)\n";
}
The file test.txt was written by the posted program, except that I had to add to it to reproduce the behavior (as posted, it runs without warnings on v5.10 and v5.16): I added \x{234234} to the end. The line number can be tracked with $char =~ /\n/.
The sysread returns undef on error. It can be moved into the body of while (1) to allow reads to continue and catch all warnings, breaking out on 0 (returned on EOF).
This prints
H (1)
e (2)
l (3)
l (4)
o (5)
(6)
A (7)
å (8)
a (9)
==> WARN: Code point 0x234234 is not Unicode, may not be portable at ...
(10)
While this does catch the character warned about, re-reading the file using Encode may well be better than reaching for sysread, in particular if sysread uses Encode.
However, Perl is utf8 internally and I am not sure that sysread needs Encode.
Note. The page for sysread supports its use on data with encoding layers
Note that if the filehandle has been marked as :utf8 Unicode
characters are read instead of bytes (the LENGTH, OFFSET, and the
return value of sysread are in Unicode characters). The
:encoding(...) layer implicitly introduces the :utf8 layer.
See binmode, open, and the open pragma.
Note: Apparently things have moved on, and after a certain version sysread no longer supports encoding layers. The page linked above, for an older version (v5.10 for one), indeed shows what is quoted, while for a newer version it tells us that there will be an exception.
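Following the earlier remark that the line number can be tracked with $char =~ /\n/, here is a minimal sketch of that idea; it only applies on Perls where sysread still accepts a handle with an encoding layer, and the variable names are mine:
use warnings;
use strict;
binmode STDOUT, ':utf8';
binmode STDERR, ':utf8';
my $file = 'test.txt';
open my $fh, "<:encoding(UTF-8)", $file or die "Can't open $file: $!";

# $line_no is kept current as newlines are read, so a warning raised
# during sysread() can report the line being read at that moment.
my $line_no = 1;
$SIG{__WARN__} = sub { print "\t==> WARN in $file, line $line_no: @_" };

my $char;
while (sysread($fh, $char, 1)) {
    ++$line_no if $char =~ /\n/;
}
close $fh;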

Perl Handling Malformed Characters

I'd like advice about Perl.
I have text files I want to process with Perl. Those text files are encoded in cp932, but for some reason they may contain malformed characters.
My program is like:
#! /usr/bin/perl -w
use strict;
use encoding 'utf-8';
# 'workfile.txt' is supposed to be encoded in cp932
open my $in, "<:encoding(cp932)", "./workfile.txt";
while ( my $line = <$in> ) {
# my process comes here
print $line;
}
If workfile.txt includes malformed characters, Perl complains:
cp932 "\x81" does not map to Unicode at ./my_program.pl line 8, <$in> line 1234.
Perl knows if its input contains malformed characters. So I want to rewrite my program to check whether the input is good or bad and act accordingly: say, print all good lines (lines that do not contain malformed characters) to output filehandle A, and print lines that do contain malformed characters to output filehandle B.
#! /usr/bin/perl -w
use strict;
use encoding 'utf-8';
use English;
# 'workfile.txt' is supposed to be encoded in cp932
open my $in, "<:encoding(cp932)", "./workfile.txt";
open my $output_good, ">:encoding(utf8)", "good.txt";
open my $output_bad, ">:encoding(utf8)", "bad.txt";
select $output_good; # in most cases workfile.txt lines are good
while ( my $line = <$in> ) {
if ( $line contains malformed characters ) {
select $output_bad;
}
print "$INPUT_LINE_NUMBER: $line";
select $output_good;
}
My question is how I can write this "if ( $line contains malformed characters )" part. How can I check whether the input is good or bad?
Thanks in advance.
#! /usr/bin/perl -w
use strict;
use utf8; # Source encoded using UTF-8
use open ':std', ':encoding(UTF-8)'; # STD* is UTF-8;
# UTF-8 is default encoding for open.
use Encode qw( decode );
open my $fh_in, "<:raw", "workfile.txt"
or die $!;
open my $fh_good, ">", "good.txt"
or die $!;
open my $fh_bad, ">:raw", "bad.txt"
or die $!;
while ( my $line = <$fh_in> ) {
my $decoded_line =
eval { decode('cp932', $line, Encode::FB_CROAK|Encode::LEAVE_SRC) };
if (defined($decoded_line)) {
print($fh_good "$. $decoded_line");
} else {
print($fh_bad "$. $line");
}
}

How to match Chinese character in the web pages whose charset is big5 (Perl)?

I am doing a match between Chinese words, for example "语言中心", and a number of web files (php, html, htm, etc.).
However, somehow I get the following error:
Malformed UTF-8 character (1 byte, need 2, after start byte 0xdf) in regexp compilation at ../Final_FindOnlyNoReplace_CLE_Chinese.pl line 89, <INFILE> line 12.
Can anyone help?
Here is my code.
#!/usr/bin/env perl
use Encode qw/encode decode/;
use utf8;
use strict;
use Cwd;
use LWP::UserAgent;
my($path) = @_;
## append a trailing / if it's not there
$path .= '/' if($path !~ /\/$/);
use File::Glob ':glob';
my @all_files = bsd_glob($path."*");
for my $eachFile (@all_files) {
open(INFILE, "<$eachFile") || die ("Could not open '$eachFile'\n");
my(@inlines) = <INFILE>;
my($line, $find);
my $outkey = 1;
foreach $line (@inlines) {
$find = &find($line);
if ($find ne 'false') {
chomp($line);
print "\tline$outkey : $line\n";
}
$outkey ++;
}
}
#subroutine
sub find {
my $m = encode("utf8", decode("big5", @_));
my $html = LWP::UserAgent->new
->get($m)
->decoded_content;
my $str_chinese = '語言中心';
if ($m =~ /$str_chinese/) {
$m; ##if match, return the whole line.
}
}
You aren't searching in the $html you've retrieved and decoded, but in the URL instead: $m =~ /$str_chinese/, which, I guess, is not what you intend.
Also, you're comparing the result of the find function with the exact string "false", which will never work. Change if ($find ne 'false') to if (defined($find)) and add explicit returns for success and failure to find for clarity.
Finally, your script seems to fail because you point it at a directory that has some other Perl scripts among the other files. They're most likely in UTF-8, so when your script tries to read them as big5 data, it fails on decoding. Just change your glob to cover data files only.
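A minimal sketch of the find routine with those two changes applied (match against the fetched content, return a defined value only on success); the exact shape is my guess, not the poster's code:
use utf8;
use Encode qw(encode decode);
use LWP::UserAgent;

sub find {
    my ($line) = @_;
    my $url = encode("utf8", decode("big5", $line));
    my $html = LWP::UserAgent->new
        ->get($url)
        ->decoded_content;
    my $str_chinese = '語言中心';
    # Search the retrieved page, not the URL, and return explicitly.
    return $line if defined($html) && $html =~ /$str_chinese/;
    return undef;   # caller then checks: if ( defined($find) ) { ... }
}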
#!/usr/bin/env perl
use utf8;
use strictures;
use LWP::UserAgent qw();
use Path::Class::Rule qw();
use URI::file qw();
my $start_directory = q(.);
my $search_text = qr'語言中心';
my $next = Path::Class::Rule->new->name(qw(*.php *.htm*))->iter($start_directory);
my @matching_lines;
while (my $file = $next->()) {
for my $line (split /\R/, LWP::UserAgent
->new
->get(URI::file->new_abs($file))
->decoded_content
) {
push @matching_lines, $line if $line =~ $search_text;
}
}
# @matching_lines is (
# '<title>Untitled 語言中心 Document</title>',
# 'abc 語言中心 cde',
# '天天向上語言中心他'
# )

How do I count the characters, words, and lines in a file, using Perl?

What is a good/best way to count the number of characters, words, and lines of a text file using Perl (without using wc)?
Here's the perl code. Counting words can be somewhat subjective, but I just say it's any string of characters that isn't whitespace.
open(FILE, "<file.txt") or die "Could not open file: $!";
my ($lines, $words, $chars) = (0,0,0);
while (<FILE>) {
$lines++;
$chars += length($_);
$words += scalar(split(/\s+/, $_));
}
print("lines=$lines words=$words chars=$chars\n");
A variation on bmdhacks' answer that will probably produce better results is to use \s+ (or even better, \W+) as the delimiter. Consider the string "The quick  brown  fox" (note the extra spaces between the words): using a delimiter of a single whitespace character gives a word count of six, not four. So, try:
open(FILE, "<file.txt") or die "Could not open file: $!";
my ($lines, $words, $chars) = (0,0,0);
while (<FILE>) {
$lines++;
$chars += length($_);
$words += scalar(split(/\W+/, $_));
}
print("lines=$lines words=$words chars=$chars\n");
Using \W+ as the delimiter will stop punctuation (amongst other things) from counting as words.
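As a quick illustration of the difference between the two delimiters (a one-off check, not part of the answer above):
use strict;
use warnings;
my $text = "Hello , world !";
my @by_space   = split /\s+/, $text;   # ("Hello", ",", "world", "!") -> 4 "words"
my @by_nonword = split /\W+/, $text;   # ("Hello", "world")           -> 2 words
printf "split /\\s+/ gives %d, split /\\W+/ gives %d\n",
    scalar @by_space, scalar @by_nonword;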
The Word Count tool counts characters, words and lines in text files
Here. Try this Unicode-savvy version of the wc program.
It skips non-file arguments (pipes, directories, sockets, etc).
It assumes UTF-8 text.
It counts any Unicode whitespace as a word separator.
It also accepts alternate encodings if there is a .ENCODING at the end of the filename, like foo.cp1252, foo.latin1, foo.utf16, etc.
It also work with files that have been compressed in a variety of formats.
It gives counts of Paragraphs, Lines, Words, Graphemes, Characters, and Bytes.
It understands all Unicode linebreak sequences.
It warns about corrupted textfiles with linebreak errors.
Here’s an example of running it:
   Paras    Lines    Words   Graphs    Chars    Bytes File
       2     2270    82249   504169   504333   528663 /tmp/ap
       1     2404    11163    63164    63164    66336 /tmp/b3
uwc: missing linebreak at end of corrupted textfiile /tmp/bad
      1*       2*        4       19       19       19 /tmp/bad
       1       14       52      273      273      293 /tmp/es
      57      383     1369    11997    11997    12001 /tmp/funny
       1   657068  3175429 31205970 31209138 32633834 /tmp/lw
       1        1        4       27       27       27 /tmp/nf.cp1252
       1        1        4       27       27       34 /tmp/nf.euc-jp
       1        1        4       27       27       27 /tmp/nf.latin1
       1        1        4       27       27       27 /tmp/nf.macroman
       1        1        4       27       27       54 /tmp/nf.ucs2
       1        1        4       27       27       56 /tmp/nf.utf16
       1        1        4       27       27       54 /tmp/nf.utf16be
       1        1        4       27       27       54 /tmp/nf.utf16le
       1        1        4       27       27      112 /tmp/nf.utf32
       1        1        4       27       27      108 /tmp/nf.utf32be
       1        1        4       27       27      108 /tmp/nf.utf32le
       1        1        4       27       27       39 /tmp/nf.utf7
       1        1        4       27       27       31 /tmp/nf.utf8
       1    26906   101528   635841   636026   661202 /tmp/o2
      131      346     1370     9590     9590     4486 /tmp/perl5122delta.pod.gz
      291      814     3941    25318    25318     9878 /tmp/perl51310delta.pod.bz2
       1     2551     5345   132655   132655   133178 /tmp/tailsort-pl.utf8
       1       89      334     1784     1784     2094 /tmp/til
       1        4       18       88       88      106 /tmp/w
     276     1736     5773    53782    53782    53804 /tmp/www
Here ya go:
#!/usr/bin/env perl
#########################################################################
# uniwc - improved version of wc that works correctly with Unicode
#
# Tom Christiansen <tchrist@perl.com>
# Mon Feb 28 15:59:01 MST 2011
#########################################################################
use 5.10.0;
use strict;
use warnings FATAL => "all";
use sigtrap qw[ die untrapped normal-signals ];
use Carp;
$SIG{__WARN__} = sub {
confess("FATALIZED WARNING: #_") unless $^S;
};
$SIG{__DIE__} = sub {
confess("UNCAUGHT EXCEPTION: #_") unless $^S;
};
$| = 1;
my $Errors = 0;
my $Headers = 0;
sub yuck($) {
my $errmsg = $_[0];
$errmsg =~ s/(?<=[^\n])\z/\n/;
print STDERR "$0: $errmsg";
}
process_input(\&countem);
sub countem {
my ($_, $file) = @_;
my (
@paras, @lines, @words,
$paracount, $linecount, $wordcount,
$grafcount, $charcount, $bytecount,
);
if ($charcount = length($_)) {
$wordcount = eval { @words = split m{ \p{Space}+ }x };
yuck "error splitting words: $@" if $@;
$linecount = eval { @lines = split m{ \R }x };
yuck "error splitting lines: $@" if $@;
$grafcount = 0;
$grafcount++ while /\X/g;
#$grafcount = eval { @lines = split m{ \R }x };
yuck "error splitting lines: $@" if $@;
$paracount = eval { @paras = split m{ \R{2,} }x };
yuck "error splitting paras: $@" if $@;
if ($linecount && !/\R\z/) {
yuck("missing linebreak at end of corrupted textfiile $file");
$linecount .= "*";
$paracount .= "*";
}
}
$bytecount = tell;
if (-e $file) {
$bytecount = -s $file;
if ($bytecount != -s $file) {
yuck "filesize of $file differs from bytecount\n";
$Errors++;
}
}
my $mask = "%8s " x 6 . "%s\n";
printf $mask => qw{ Paras Lines Words Graphs Chars Bytes File } unless $Headers++;
printf $mask => map( { show_undef($_) }
$paracount, $linecount,
$wordcount, $grafcount,
$charcount, $bytecount,
), $file;
}
sub show_undef {
my $value = shift;
return defined($value)
? $value
: "undef";
}
END {
close(STDOUT) || die "$0: can't close STDOUT: $!";
exit($Errors != 0);
}
sub process_input {
my $function = shift();
my $enc;
if (@ARGV == 0 && -t) {
warn "$0: reading from stdin, type ^D to end or ^C to kill.\n";
}
unshift(@ARGV, "-") if @ARGV == 0;
FILE:
for my $file (@ARGV) {
# don't let magic open make an output handle
next if -e $file && ! -f _;
my $quasi_filename = fix_extension($file);
$file = "standard input" if $file eq q(-);
$quasi_filename =~ s/^(?=\s*[>|])/< /;
no strict "refs";
my $fh = $file; # is *so* a lexical filehandle! ☺
unless (open($fh, $quasi_filename)) {
yuck("couldn't open $quasi_filename: $!");
next FILE;
}
set_encoding($fh, $file) || next FILE;
my $whole_file = eval {
use warnings "FATAL" => "all";
local $/;
scalar <$fh>;
};
if ($@) {
$@ =~ s/ at \K.*? line \d+.*/$file line $./;
yuck($@);
next FILE;
}
$function->($whole_file, $file);
unless (close $fh) {
yuck("couldn't close $quasi_filename at line $.: $!");
next FILE;
}
} # foreach file
}
sub set_encoding(*$) {
my ($handle, $path) = @_;
my $enc_name = "utf8";
if ($path && $path =~ m{ \. ([^\s.]+) \z }x) {
my $ext = $1;
die unless defined $ext;
require Encode;
if (my $enc_obj = Encode::find_encoding($ext)) {
my $name = $enc_obj->name || $ext;
$enc_name = "encoding($name)";
}
}
return 1 if eval {
use warnings FATAL => "all";
no strict "refs";
binmode($handle, ":$enc_name");
1;
};
for ($@) {
s/ at .* line \d+\.//;
s/$/ for $path/;
}
yuck("set_encoding: $#");
return undef;
}
sub fix_extension {
my $path = shift();
my %Compress = (
Z => "zcat",
z => "gzcat", # for uncompressing
gz => "gzcat",
bz => "bzcat",
bz2 => "bzcat",
bzip => "bzcat",
bzip2 => "bzcat",
lzma => "lzcat",
);
if ($path =~ m{ \. ( [^.\s] +) \z }x) {
if (my $prog = $Compress{$1}) {
return "$prog $path |";
}
}
return $path;
}
I stumbled upon this while googling for a character count solution.
Admittedly, I know next to nothing about perl so some of this may be off base, but here are my tweaks of newt's solution.
First, there is a built-in line count variable anyway, so I just used that. This is probably a bit more efficient, I guess.
As it is, the character count includes newline characters, which is probably not what you want, so I chomped $_.
Perl also complained about the way the split() is done (implicit split, see: Why does Perl complain "Use of implicit split to @_ is deprecated"?), so I tweaked that.
My input files are UTF-8 so I opened them as such. That probably helps get the correct character count if the input file contains non-ASCII characters.
Here's the code:
open(FILE, "<:encoding(UTF-8)", "file.txt") or die "Could not open file: $!";
my ($lines, $words, $chars) = (0,0,0);
my @wordcounter;
while (<FILE>) {
chomp($_);
$chars += length($_);
@wordcounter = split(/\W+/, $_);
$words += @wordcounter;
}
$lines = $.;
close FILE;
print "\nlines=$lines, words=$words, chars=$chars\n";
There is the Perl Power Tools project whose goal is to reconstruct all the Unix bin utilities, primarily for those on operating systems deprived of Unix. Yes, they did wc. The implementation is overkill, but it is POSIX compliant.
It gets a little ridiculous when you look at the GNU compliant implementation of true.
Non-serious answer:
system("wc foo");
Reading the file in fixed-size chunks may be more efficient than reading line-by-line. The wc binary does this.
#!/usr/bin/env perl
use constant BLOCK_SIZE => 16384;
for my $file (@ARGV) {
open my $fh, '<', $file or do {
warn "couldn't open $file: $!\n";
next;
};
my ($chars, $words, $lines) = (0, 0, 0);
my ($new_word, $new_line);
while ((my $size = sysread $fh, local $_, BLOCK_SIZE) > 0) {
$chars += $size;
$words += () = /\s+/g;
$words-- if $new_word && /\A\s/;
$lines += () = /\n/g;
$new_word = /\s\Z/;
$new_line = /\n\Z/;
}
$lines-- if $new_line;
print "\t$lines\t$words\t$chars\t$file\n";
}
To be able to count CHARS and not bytes, consider this:
(Try it with Chinese or Cyrillic letters and file saved in utf8)
use utf8;
my $file='file.txt';
my $LAYER = ':encoding(UTF-8)';
open( my $fh, '<', $file )
|| die( "$file couldn't be opened: $!" );
binmode( $fh, $LAYER );
read $fh, my $txt, -s $file;
close $fh;
print length $txt,$/;
use bytes;
print length $txt,$/;
This may be helpful to Perl beginners.
I tried to simulate MS Word's counting functionality and added one more feature which wc in Linux does not show:
number of lines
number of words
number of characters with space
number of characters without space (wc will not give this in its output, but Microsoft Word shows it)
Here is the URL: Counting words, characters and lines in a file
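The linked post isn't reproduced here, but a minimal sketch of those four counts could look like the following; the filename and the treatment of whitespace are my assumptions:
use strict;
use warnings;

my $file = 'input.txt';    # placeholder name
open my $fh, '<:encoding(UTF-8)', $file or die "Could not open '$file': $!";

my ($lines, $words, $chars_with_space, $chars_without_space) = (0, 0, 0, 0);
while (my $line = <$fh>) {
    chomp $line;
    $lines++;
    $chars_with_space    += length $line;
    $chars_without_space += ($line =~ tr/ \t//c);   # everything except spaces and tabs
    my @tokens = grep { length } split /\s+/, $line;
    $words += @tokens;
}
close $fh;

print "lines=$lines words=$words ",
    "chars with spaces=$chars_with_space ",
    "chars without spaces=$chars_without_space\n";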