An alternative to block eval? - perl

Is there an equivalent of the following -
eval { die "reason 1"; }; warn $# if $#;
eval { die "reason 2"; }; warn $# if $#;
eval { die "reason 3"; }; warn $# if $#;
.
.
As you can see, the following code won't print out every possible reason for the script to die:
eval {
    die "reason 1";
    die "reason 2";
    die "reason 3";
};
warn $@ if $@;
[EDIT] I would like to know all possible reasons the script (which uses many libraries) can fail. The die statements are not in a place I can edit.

Overriding die is not exactly an alternative to eval, but I think this is what you're asking about: trapping die and turning it into a warn, so the failure is logged and the program resumes.
$ perl -e " eval{die 1;}; die 2; die 3; "
2 at -e line 1.
$ perl -Mwarnerous -e " eval{die 1;}; die 2; die 3; "
FAKE die : 2 at -e line 1
FAKE die : 3 at -e line 1
$ cat warnerous.pm
*CORE::GLOBAL::die = sub {
    unless ($^S) {
        warn( qq{FAKE die : @_ @{[ sprintf q{at %s line %s }, (caller)[1,2] ]}\n} );
    }
};
1;

Do you mean something like this?
my $problems;
for my $r (1 .. 3) {
    eval { die "reason $r"; 1 } or $problems .= $@;
}
warn "There were the following problems:\n$problems";

Perl: open/close capturing return code

myb.py
import time
import sys

stime = time.time()
run_until = 600
cnt = 0
while True:
    dur = time.time() - stime
    if dur > run_until:
        break
    cnt += 1
    print cnt
    time.sleep(1)
    if cnt == 10:
        sys.exit(2)  # <---- capture 2
mya.pl
use FileHandle;

my $myexe = 'myb.py';
my $FH = FileHandle->new;
open $FH, q{-|}, "$myexe 2>&1"
    or print "Cannot open\n";
process_output($FH);
close $FH or warn $!;

sub process_output {
    my ($fh) = @_;
    while (my $line = <$fh>) {
        chomp $line;
        print "$line\n";
    }
}
OUTPUT:
1
2
3
4
5
6
7
8
9
10
Warning: something's wrong at ./mya.pl line 10.
If I change the line to:
my $err = close $FH;
it gives me a blank for $err.
Question: How can I capture the return code 2 from myb.py in mya.pl?
As documented in http://perldoc.perl.org/functions/close.html, the exit value is available as part of $?. But it can be more convenient to use a wrapper:
use IPC::System::Simple qw(capture $EXITVAL EXIT_ANY);
my @output = capture([0, 2], "$myexe 2>&1");
print @output;
print "Program exited with value $EXITVAL\n";
The [0,2] says that exit values 0 or 2 are expected, and anything else is a fatal error; you can use EXIT_ANY instead to accept any exit value.
This does get all the output at the end, rather than when it is produced, though.
When open creates a child, close functions as waitpid and sets $? accordingly.
$ perl -e'
    open(my $fh, "-|", @ARGV)
        or die $!;
    print while <$fh>;
    close($fh);
    if    ($? == -1 ) { die $!; }
    elsif ($? & 0x7F) { die "Killed by signal ".($? & 0x7F)."\n"; }
    elsif ($? >> 8  ) { die "Exited with error ".($? >> 8)."\n"; }
' perl -E'
    $| = 1;
    for (1..5) {
        say;
        sleep 1;
    }
    exit 2;
'
1
2
3
4
5
Exited with error 2

perl Digest add addfile computes different SHA1 digest

The Perl Digest module computes different SHA1 digests for the add and addfile functions.
I created binary random data using /dev/urandom.
Running on Ubuntu:
$ lsb_release -a
No LSB modules are available.
Distributor ID: Ubuntu
Description: Ubuntu 12.04.1 LTS
Release: 12.04
Codename: precise
$ perl -v
This is perl 5, version 14, subversion 2 (v5.14.2) built for i686-linux-gnu-thread-multi-64int
output from the script
$ perl t.pl sha1 a.tmp
doesntwork da39a3ee5e6b4b0d3255bfef95601890afd80709
works ee49451434cffe001a568090c86f16f076677af5
$ openssl dgst -sha1 a.tmp
SHA1(a.tmp)= ee49451434cffe001a568090c86f16f076677af5
Following is my code:
use strict;
use warnings;
use Switch;
use Digest;

sub doesntwork {
    my ($datafile, $hashfun) = @_;
    open(my $fh, "<", $datafile) or die "error: Can't open '$datafile', $!\n";
    binmode($fh);
    read($fh, my $data, -s $datafile);
    close($fh);
    $hashfun->add($data);
    my $hashval = $hashfun->digest();
    return unpack('H*', $hashval);
}

sub works {
    my ($datafile, $hashfun) = @_;
    open(my $fh, "<", $datafile) or die "error: Can't open '$datafile', $!\n";
    binmode($fh);
    $hashfun->addfile($fh);
    my $hashval = $hashfun->digest();
    close($fh);
    return unpack('H*', $hashval);
}

###############################################################################
(@ARGV >= 2) or die "usage: perl $0 algo datafile\n";
my ($algo, $datafile) = @ARGV;

my $hashfun;
switch ($algo) {
    case "md5"    { $hashfun = Digest->new("MD5"    ); }
    case "sha1"   { $hashfun = Digest->new("SHA-1"  ); }
    case "sha256" { $hashfun = Digest->new("SHA-256"); }
    case "sha512" { $hashfun = Digest->new("SHA-512"); }
    else          { die "error: invalid algorithm '$algo'\n" }
}

print "doesntwork\t", doesntwork($datafile, $hashfun), "\n";
print "works     \t", works     ($datafile, $hashfun), "\n";
I would like the add function to work, since I want to compute the digest on buffered data rather than reading from a file. Possibly add treats the data as text, while for addfile the binmode on the file handle makes it use binary data; if so, how can I make add treat the buffer as binary data?
Edited the post to print the size of the data read:
$ stat -c "%n %s" a.tmp
a.tmp 671088640
$ openssl dgst -sha1 a.tmp
SHA1(a.tmp)= 7dfcced1b0c8864e6a20b2daa63de7ffc1cd7a26
#### Works
$ perl -W -MDigest -e 'open(my $fh, "<", "a.tmp") or die "cant open $!\n";
> binmode($fh);
> my $hf = Digest->new("SHA-1");
> $hf->addfile($fh);
> print unpack("H*", $hf->digest()),"\n";
> close($fh);'
7dfcced1b0c8864e6a20b2daa63de7ffc1cd7a26
#### Doesnt Work
$ perl -W -MDigest -e 'open(my $fh, "<", "a.tmp") or die "cant open $!\n";
> binmode($fh);
> read($fh, my $data, -s "a.tmp") or die "cant read $!\n";
> close($fh);
> printf("## data.length=%d,file.length=%d\n",length($data),-s "a.tmp");
> length($data)==(-s "a.tmp") or die "couldnt read all the data";
> my $hf = Digest->new("SHA-1");
> $hf->add($data);
> print unpack("H*", $hf->digest()),"\n";'
## data.length=671088640,file.length=671088640
9eecafd368a50fb240e0388e3c84c0c94bd6cc2a
I also tried the following, according to Fred's answer:
$ perl -W -MDigest -e '
> open(my $fh, "<", "a.tmp") or die "cant open $!\n";
> binmode($fh);
> my $size = -s "a.tmp";
> my $got = read($fh, my $data, $size) or die "cant read $!\n";
> print "##read $got bytes, size=$size\n";
> my $done = $size - $got;
> print "done=$done, size=$size, got=$got\n";
> until(!$done) {
> $got = read($fh, my $newdata, $done);
> $done -= $got ;
> $data .= $newdata;
> print "##read1 $got bytes, size=$size, done=$done\n";
> }
> close($fh);
> printf("## data.length=%d,file.length=%d\n",length($data),-s "a.tmp");
> length($data)==(-s "a.tmp") or die "couldnt read all the data";
> my $hf = Digest->new("SHA-1");
> $hf->add($data);
> print unpack("H*", $hf->digest()),"\n";'
##read 671088640 bytes, size=671088640
done=0, size=671088640, got=671088640
## data.length=671088640,file.length=671088640
9eecafd368a50fb240e0388e3c84c0c94bd6cc2a
You have yet to provide data that reproduces the problem, and I cannot replicate it using the Perl script itself as the input.
Here's the definition of addfile:
sub addfile {
    my ($self, $handle) = @_;

    my $n;
    my $buf = "";

    while (($n = read($handle, $buf, 4*1024))) {
        $self->add($buf);
    }
    unless (defined $n) {
        require Carp;
        Carp::croak("Read failed: $!");
    }

    $self;
}
Your claim that addfile works while add doesn't makes little sense. I suppose there could be a bug in the module when it comes to handling long strings, but it's far more likely that you're passing different inputs to the module.
You need to test the return value from read. There is no guarantee that you have read the full contents of the file.
read in Perl is generally implemented in terms of the C library's buffered fread call. When you use low-level reads like this, you must test the return value to see whether you got as much as you asked for.
my $size = -s $datafile;
my $got  = read($fh, my $data, $size);
my $done = $size - $got;
while ($done) {
    $got   = read($fh, my $newdata, $done);
    $done -= $got;
    $data .= $newdata;
}
That's just off the top of my head and probably has a glaring fencepost error. This is why I avoid using read whenever possible. See http://perltricks.com/article/21/2013/4/21/Read-an-entire-file-into-a-string for some less painful ways to do this.
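For completeness, the usual way to avoid the read loop entirely is to slurp the file by localizing the input record separator; a minimal sketch, assuming the same a.tmp file as above:
open(my $fh, '<', 'a.tmp') or die "can't open: $!\n";
binmode($fh);
my $data = do { local $/; <$fh> };   # undef $/ makes <$fh> return the whole file at once
close($fh);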

Process hanging -SIGALRM not delivered- Perl

I have a command that I'm executing using open with a pipe, and I want to set a timeout of 10 seconds and have the subprocess aborted if the execution time exceeds this. However, my code just causes the program to hang. Why is my alarm not getting delivered properly?
my $pid = 0;
my $cmd = "someCommand";
print "Running Command # $num";
eval {
    local $SIG{ALRM} = sub {
        print "alarm \n";
        kill 9, $pid;
    };
    alarm 10;
    $pid = open(my $fh, "$cmd|");
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm \n";
} else {
    print $_ while (<$fh>);
}
EDIT:
So From the answers below, This is what I have:
my $pid = open(my $fh, qq(perl -e 'alarm 10; exec \@ARGV; die "exec: $!\n" ' $cmd |));
print $_ while(<$fh>);
But this prints ALARM CLOCK to the console when the alarm times out, whereas I don't specify this anywhere in the code. How can I get rid of this, and where would I put a custom alarm handler?
Thanks!
I want to set a timeout of 10 seconds and have the subprocess aborted if the execution time exceeds this
A different approach is to set the alarm on the subprocess itself, with a handy scripting language you already have:
my $cmd = "someCommand";
my $pid = open(my $child_stdout, '-|',
'perl', '-e', 'alarm 10; exec #ARGV; die "exec: $!"', $cmd);
...
Your child process will initially be perl (well, the shell and then perl), which will set an alarm on itself and then exec (replace itself with) $someCommand. Pending alarms, however, are inherited across exec()s.
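Back in the parent, you can then tell a timeout apart from a normal exit by inspecting $? after close, since the child is killed by SIGALRM when the timer fires. A rough sketch along those lines (someCommand is just a placeholder, and the status handling is my addition, not part of the original answer):
use POSIX qw(SIGALRM);

my $cmd = "someCommand";
my $pid = open(my $child_stdout, '-|',
               'perl', '-e', 'alarm 10; exec @ARGV; die "exec: $!"', $cmd)
    or die "open: $!";

print while <$child_stdout>;
close($child_stdout);                 # acts as waitpid and sets $?

if (($? & 127) == SIGALRM) {          # child was killed by its own alarm
    warn "command timed out\n";
}
elsif ($? >> 8) {
    warn "command exited with status ", $? >> 8, "\n";
}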
All your code is doing is setting a 10 second timeout on the open call, not on the whole external program. You want to bring the rest of your interaction with the external command into the eval block:
eval {
    local $SIG{ALRM} = sub {
        print "alarm \n";
        kill 9, $pid;
    };
    alarm 10;
    $pid = open(my $fh, "$cmd|");
    print while <$fh>;
    close $fh;
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm \n";
}

perl "or" error handling: multi-statement on error possible?

This construct is pretty common in perl:
opendir (B,"/somedir") or die "couldn't open dir!";
But this does not seem to work:
opendir ( B, "/does-not-exist " ) or {
print "sorry, that directory doesn't exist.\n";
print "now I eat fugu.\n";
exit 1;
};
Is it possible for the "or" error-handling to have more than one command?
Compiling the above:
# perl -c test.pl
syntax error at test.pl line 5, near "print"
syntax error at test.pl line 7, near "}"
test.pl had compilation errors.
You can always use do:
opendir ( B, "/does-not-exist " ) or do {
print "sorry, that directory doesn't exist.\n";
print "now I eat fugu.\n";
exit 1;
}
Or you can use if/unless:
unless (opendir ( B, "/does-not-exist " )) {
    print "sorry, that directory doesn't exist.\n";
    print "now I eat fugu.\n";
    exit 1;
}
Or you can swing together your own subroutine:
opendir ( B, "/does-not-exist " ) or fugu();
sub fugu {
print "sorry, that directory doesn't exist.\n";
print "now I eat fugu.\n";
exit 1;
}
There is more than one way to do it.
Exception handling in Perl is done with eval():
eval {
    ...
} or do {
    ... use $@ to handle the error ...
};
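Spelled out for the opendir case from the question, that eval/do pattern might look like this (a sketch, using a lexical directory handle rather than the bareword B):
eval {
    opendir(my $dh, "/does-not-exist") or die "opendir failed: $!\n";
    # ... work with $dh here ...
    closedir($dh);
    1;
} or do {
    print "sorry, that directory doesn't exist.\n";
    print "the error was: $@";
    exit 1;
};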

Perl: Append to file and get new line count

Quick question, and I'm sure it's something I'm doing completely wrong with variables; here is the issue.
Code first:
#!/usr/bin/perl
use strict;
use warnings;

my $File = "file.txt";
my $CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
chomp($CurrentLinesCount);

sub GetStatistics() {
    if (-d $dir) {
        print "Current Lines In File: $CurrentLinesCount\n";
    }
    else {
        exit;
    }
}

sub EditFile() {
    my $editfile = $File;
    my $text = "1234\n12345\n234324\n2342\n2343";
    open(MYFILE, ">>$editfile") || die("Cannot Open File");
    print MYFILE "$text";
    close(MYFILE);
    sleep 5;
}

## MAIN
GetStatistics();
EditFile();
GetStatistics();
This is the output I get:
Current Lines In File: 258
Current Lines In File: 258
I verified that the file is being written and appended to. Can someone point me in the correct direction on how to have a variable set, updated, and then called again properly?
You call subs, not variables.
Try:
sub CurrentLinesCount {
    my $CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
    chomp($CurrentLinesCount);
    return $CurrentLinesCount;
}
...
print "Current Lines In File: ", CurrentLinesCount(), "\n";
You're only doing the call to wc once. Thus you're setting the value of $CurrentLinesCount once, and you get the same number when you print it twice.
You'll have to redo the
$CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
line after you append to the file.
Edit: Or put that line in the GetStatistics function, which would probably be a better place for it.
I would probably move the code block
my $CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
chomp($CurrentLinesCount);
to the GetStatistics subroutine, so the variable is updated whenever you call your sub.
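Put together, that suggestion might look roughly like this (a sketch of the idea above, reusing the $File variable from the question, not the poster's exact code):
sub GetStatistics {
    # Recount on every call, so the value reflects anything appended
    # since the last time the sub ran.
    my $CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
    chomp($CurrentLinesCount);
    print "Current Lines In File: $CurrentLinesCount\n";
}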
As an optimization, you can count how many lines you added rather than recounting the whole file (unless another process may also be writing to the file).
use strict;
use warnings;
use FileHandle;
use IPC::Open2;

our $CurrentLinesCount;
our $file = "file.txt";

sub CountLines {
    my $File = shift;
    my $CurrentLinesCount = `wc -l < $File` or die "wc failed: $?";
    $CurrentLinesCount =~ s/\s+//g;
    return $CurrentLinesCount;
}

sub ShowStatistics {
    my $file = shift;
    if (-f $file) {
        print "Current Lines In File: $CurrentLinesCount\n";
    } else {
        exit;
    }
}

sub EditFile {
    my $editfile  = shift;
    my $sleeptime = shift || 5;
    my $text = "1234\n12345\n234324\n2342\n2343";
    open(MYFILE, ">>$editfile") || die("Cannot Open File");
    print MYFILE "$text";
    close(MYFILE);
    # Look here:
    my $pid = open2(*Reader, *Writer, "wc -l");
    print Writer $text;
    close Writer;
    $CurrentLinesCount += <Reader>;
    sleep $sleeptime;
}

$CurrentLinesCount = CountLines($file);
ShowStatistics($file);
# EditFile updates $CurrentLinesCount
EditFile($file, 2);
ShowStatistics($file);
Still one too many globals for my taste, but I suppose this isn't a program of consequence. On the other hand, globals can be habit forming.
Note that wc doesn't count anything after the final "\n" when counting lines (it views "\n" as a line terminator). If you want to view "\n" as a line separator and count those trailing characters as a line, you'll need an alternate method of counting lines.
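If that distinction matters, counting lines in Perl itself treats a trailing fragment without a final "\n" as a line, unlike wc -l; a small sketch:
sub count_lines {
    my ($file) = @_;
    open(my $fh, '<', $file) or die "can't open $file: $!";
    my $count = 0;
    $count++ while <$fh>;   # readline also returns a final partial line
    close($fh);
    return $count;
}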