Retrying an operation after an exception: Please criticize my code - perl

My Perl application uses resources that become temporarily unavailable at times, causing exceptions via die. Most notably, it accesses SQLite databases that are shared between multiple threads and with other applications, through DBIx::Class. Whenever such an exception occurs, the operation should be retried until a timeout has been reached.
I prefer concise code, therefore I quickly got fed up with repeatedly
typing 7 extra lines for each such operation:
use Time::HiRes 'sleep';
use Carp;
# [...]
for (0..150) {
    sleep 0.1 if $_;
    eval {
        # database access
    };
    next if $@ =~ /database is locked/;
}
croak $@ if $@;
... so I put them into a (DB access-specific) function:
sub _retry {
    my ( $timeout, $func ) = @_;
    for (0..$timeout*10) {
        sleep 0.1 if $_;
        eval { $func->(); };
        next if $@ =~ /database is locked/;
    }
    croak $@ if $@;
}
which I call like this:
my @thingies;
_retry 15, sub {
    $schema->txn_do(
        sub {
            @thingies = $thingie_rs->search(
                { state => 0, job_id => $job->job_id },
                { rows => $self->{batchsize} } );
            if (@thingies) {
                for my $thingie (@thingies) {
                    $thingie->update( { state => 1 } );
                }
            }
        } );
};
Is there a better way to implement this? Am I re-inventing the wheel? Is
there code on CPAN that I should use?
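As for CPAN: one common building block is Try::Tiny, which provides try/catch but no retrying, so the loop itself still has to be written. A minimal sketch of the same retry on top of it, reusing the question's "database is locked" test and 0.1-second delay (_retry_tt is a made-up name, and the return value of $func is discarded, as in the question):
use Try::Tiny;
use Time::HiRes 'sleep';
use Carp;

sub _retry_tt {
    my ($timeout, $func) = @_;
    for my $try (0 .. $timeout * 10) {
        sleep 0.1 if $try;
        my $err;
        try { $func->() } catch { $err = $_ };
        return unless defined $err;                       # success
        croak $err unless $err =~ /database is locked/;   # not a retryable error
    }
    croak "database still locked after $timeout seconds";
}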

I'd probably be inclined to write retry like this:
sub _retry {
    my ( $retrys, $func ) = @_;
    attempt: {
        my $result;
        # if it works, return the result
        return $result if eval { $result = $func->(); 1 };
        # nah, it failed, if failure reason is not a lock, croak
        croak $@ unless $@ =~ /database is locked/;
        # if we have 0 remaining retrys, stop trying.
        last attempt if $retrys < 1;
        # sleep for 0.1 seconds, and then try again.
        sleep 0.1;
        $retrys--;
        redo attempt;
    }
    croak "Attempts Exceeded $@";
}
It doesn't work identically to your existing code, but it has a few advantages:
I got rid of the *10 thing; like another poster, I couldn't discern its purpose.
This function is able to return the value of whatever $func->() does to its caller.
Semantically, the code is closer to what it is you are doing, at least to my deluded mind.
_retry 0, sub { }; will still execute once, but never retry, unlike your present version, which will never execute the sub.
More suggested (but slightly less rational) abstractions:
sub do_update {
    my %params = @_;
    my @result;
    $params{schema}->txn_do( sub {
        @result = $params{rs}->search( @{ $params{search} } );
        return unless (@result);
        for my $result_item (@result) {
            $result_item->update( @{ $params{update} } );
        }
    } );
    return \@result;
}
my $data = _retry 15, sub {
    do_update(
        schema => $schema,
        rs     => $thingy_rs,
        search => [ { state => 0, job_id => $job->job_id }, { rows => $self->{batchsize} } ],
        update => [ { state => 1 } ],
    );
};
These might also be handy additions to your code. (Untested.)

The only real problem I see is the lack of a last statement. This is how I would write it:
sub _retry {
    my ($timeout, $func) = @_;
    for my $try (0 .. $timeout*10) {
        sleep 0.1 if $try;
        eval { $func->(); 1 } or do {
            next if $@ =~ /database is locked/; # ignore this error
            croak $@;                           # but raise any other error
        };
        last;
    }
}

I might use 'return' instead of 'last' (in the code as amended by Chas Owens), but the net effect is the same. I am also not clear why you multiply the first parameter of your retry function by 10.
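For concreteness, here is a sketch of that variant (not part of the original answers): the same loop with return in place of last, plus a final croak so a database that never unlocks still raises an error.
sub _retry {
    my ($timeout, $func) = @_;
    for my $try (0 .. $timeout*10) {
        sleep 0.1 if $try;
        eval { $func->(); 1 } or do {
            next if $@ =~ /database is locked/; # ignore this error
            croak $@;                           # but raise any other error
        };
        return;   # success: leave the whole sub, not just the loop
    }
    croak $@;     # still locked after the final attempt (added here)
}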
IMNSHO, it is far better to (re)factor common skeletal code into a function as you have done than to continually write the same code fragment over and over. There's too much danger that:
You have to change the logic - in far too many places
You forget to edit the logic correctly at some point
These are standard arguments in favour of using functions or equivalent abstractions over inline code.
In other words - good job on creating the function. And it is useful that Perl allows you to create the functions on the fly (thanks, Larry)!

The Attempt module by Mark Fowler seems to be pretty close to what I described above. Now, it would be handy if one could specify some sort of exception filter.
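An exception filter could be bolted onto such a retry helper by passing a regex (or a code ref) that decides which errors are retryable. A sketch (the name _retry_if is made up, and the filter defaults to the question's "database is locked" case):
use Time::HiRes 'sleep';
use Carp;

sub _retry_if {
    my ($retries, $filter, $func) = @_;
    $filter //= qr/database is locked/;
    my $result;
    while (1) {
        return $result if eval { $result = $func->(); 1 };
        my $err = $@;
        # The filter (a regex or a code ref) decides which errors are retryable.
        my $retryable = ref $filter eq 'CODE' ? $filter->($err) : $err =~ $filter;
        croak $err unless $retryable;
        croak "Attempts exceeded: $err" if $retries-- < 1;
        sleep 0.1;
    }
}

# e.g. also treat transient network errors as retryable:
# _retry_if 15, qr/database is locked|connection reset/, sub { ... };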

Related

How can I write a SIG{__DIE__} handler that does not trigger in eval blocks?

According to perldoc -f die, which documents $SIG{__DIE__}:
Although this feature was to be run only right before your program was to exit, this is not currently so: the $SIG{__DIE__} hook is currently called even inside evaled blocks/strings! If one wants the hook to do nothing in such situations, put die @_ if $^S; as the first line of the handler (see $^S in perlvar). Because this promotes strange action at a distance, this counterintuitive behavior may be fixed in a future release.
So let's take a basic signal handler which will trigger with eval { die 42 },
package Stupid::Insanity {
BEGIN { $SIG{__DIE__} = sub { print STDERR "ERROR"; exit; }; }
}
We make this safe with
package Stupid::Insanity {
BEGIN { $SIG{__DIE__} = sub { return if $^S; print STDERR "ERROR"; exit; }; }
}
Now this will NOT trigger with eval { die 42 }, but it will trigger when that same code is in a BEGIN {} block like
BEGIN { eval { die 42 } }
This may seem obscure, but it's rather real-world: you can see it being used in this method here (where the require fails and is caught by an eval), or in my case specifically here in Net::DNS::Parameters. You may think you can catch the compiler phase too, like this,
BEGIN {
    $SIG{__DIE__} = sub {
        return if ${^GLOBAL_PHASE} eq 'START' || $^S;
        print STDERR "ERROR";
        exit;
    };
}
This will work for the above case, but alas it will NOT work for a require of a file which has a BEGIN statement in it,
eval "BEGIN { eval { die 42 } }";
Is there any way to solve this problem and write a $SIG{__DIE__} handler that doesn't interfere with eval?
Check caller(1)
Just a bit further down the rabbit hole with [caller(1)]->[3] eq '(eval)'
return if [caller(1)]->[3] eq '(eval)' || ${^GLOBAL_PHASE} eq 'START' || $^S;
Crawl entire call stack
You can crawl the entire stack and be sure you're NOT deeply in an eval with,
for ( my $i = 0; my @frame = caller($i); $i++ ) {
    return if $frame[3] eq '(eval)';
}
Yes, this is total insanity. Thanks to mst on irc.freenode.net/#perl for the pointer.
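Putting those pieces together, a handler that stays quiet for anything an eval could catch might look like this (a sketch; the ERROR message and exit stand in for whatever the real handler should do):
package Stupid::Insanity;

BEGIN {
    $SIG{__DIE__} = sub {
        # Ignore dies that are (or may be) caught by an eval somewhere.
        return if $^S;                            # directly inside an eval
        return if ${^GLOBAL_PHASE} eq 'START';    # compile time / BEGIN blocks
        for ( my $i = 0; my @frame = caller($i); $i++ ) {
            return if $frame[3] eq '(eval)';      # an eval further up the stack
        }
        print STDERR "ERROR";
        exit;
    };
}

1;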

How can I force exiting a perl subroutine/closure via last/next to fail the program automatically?

Given the following fully functional perl script and module:
tx_exec.pl:
#!/usr/bin/perl
use strict; # make sure $PWD is in your PERL5LIB
# no warnings!
use tx_exec qw(tx_exec);
tx_exec ("normal", sub { return "foobar"; });
tx_exec ("die", sub { die "barbaz\n"; });
tx_exec ("last", sub { last; });
tx_exec ("next", sub { next; });
tx_exec.pm:
package tx_exec;
use strict;
use warnings;
require Exporter;
our @ISA = qw(Exporter);
our @EXPORT_OK = qw(tx_exec);
my $MAX_TRIES = 3;

sub tx_exec {
    my ($desc, $sub, $args) = @_;
    print "\ntx_exec($desc):\n";
    my $try = 0;
    while (1) {
        $try++;
        my $sub_ret;
        my $ok = eval {
            # start transaction
            $sub_ret = $sub->($args);
            # commit transaction
            1;
        };
        unless ($ok) {
            print "failed with error: $@";
            # rollback transaction
            if ($try >= $MAX_TRIES) {
                print "failed after $try tries\n";
                return (undef, undef);
            }
            print "try #$try failed, retrying...\n";
            next;
        }
        # some cleanup
        print "returning (1, ".($sub_ret//'<undef>').")\n";
        return (1, $sub_ret);
    }
}
I get the following output:
$ ./tx_exec.pl
tx_exec(normal):
returning (1, foobar)
tx_exec(die):
failed with error: barbaz
try #1 failed, retrying...
failed with error: barbaz
try #2 failed, retrying...
failed with error: barbaz
failed after 3 tries
tx_exec(last):
tx_exec(next):
# infinite loop
I understand what is happening, and I'm getting a warning about it if I turn on warnings in the script defining the closures. However, can I force the program to fail/die automatically/idiomatically, when next/last would exit a closure-subroutine like here, under the following strict circumstances:
The $sub being passed is a closure and not a simple function (a simple function dies on bare next/last anyway, which is trivial to handle)
The library code (tx_exec) and the client code (invoking it) are in separate compilation units and the client does not use warnings.
Using perl 5.16.2 (without possibility of upgrading).
Here is a github gist documenting all the approaches so far:
use warnings FATAL => qw(exiting) doesn't make a difference in library code
local $SIG handler doesn't work if the call site doesn't have FATAL => qw(exiting) warning enabled
manual detection works, but is somewhat cumbersome and all over the place (nonlocalized)
ysth's approach with a bare block works best, as it catches the last/next, fully localizing manual detection and guaranteeing that nothing can go wrong (except next/last with labels, which is easier to avoid).
In short: using next/last in the sub (that the caller passes as a coderef) triggers an exception if it is not within a "loop block." This affords easy handling of such use, with a small change to tx_exec().
The wrong use of last/next raised in the question is a little nuanced. First, from last
last cannot be used to exit a block that returns a value such as eval {}, sub {}, or do {}, and should not be used to exit a grep or map operation.
and for doing this in a sub or eval we get a warning
Exiting subroutine via last at ...
(and for "eval"), and similarly for next. These are classified as W in perldiag and can be controlled by using/not the warnings pragma.† This fact foils attempts to make such use fatal by FATAL => 'exiting' warning or by $SIG{__WARN__} hook.
However, if such use of next or last (in a sub or eval) has no "loop block" in any enclosing scope (or call stack) then it also raises an exception.‡ The message is
Can't "last" outside a loop block...
and similarly for next. It is found in perldiag (search for outside a loop), classified as F.
One solution for the posed problem, then, is to run the coderef passed by the caller outside of any loop block, so that the interpreter itself checks for the offending use and alerts us to it (raises an exception). Since the while (1) loop is there only to allow multiple tries, this can be implemented.
The coderef can be run and tested against this exception in a utility routine
sub run_coderef {
    my ($sub, @args) = @_;
    my $sub_ret;
    my $ok = eval { $sub_ret = $sub->(@args); 1 };
    if (not $ok) {
        if ($@ =~ /^Can't "(?:next|last)"/) {
            die $@;       # disallow such use
        }
        else { return }   # other error, perhaps retry
    }
    else { return $sub_ret }
}
which can be used like
sub tx_exec {
    my ($sub, @args) = @_;
    my $sub_ret = run_coderef($sub, @args);
    my $run_again = (defined $sub_ret) ? 0 : 1;
    if ($run_again) {
        my $MAX_TRIES = 3;
        my $try = 0;
        while (1) {
            ++$try;
            $sub_ret = run_coderef($sub, @args);
            if ( not defined $sub_ret ) {   # "other error", run again
                if ($try >= $MAX_TRIES) {
                    print "failed after $try tries\n";
                    return (undef, undef);
                }
                print "try #$try failed, retrying...\n";
                next;
            }
            ...
        }
    }
}
This approach makes perfect sense design-wise: it allows an exception to be raised for the disallowed use, and it localizes the handling in its own sub.
The disallowed behavior really only needs to be checked on the first run, since after that run_coderef is called from within a loop, in which case this exception isn't thrown. That is fine, since the repeated runs (for "allowed" failures) execute the same sub, so it is enough to check its first use.
On the other hand, it also means that we can
run eval { $sub_ret = $sub->(@args) ... } directly in the while (1), since we have checked for bad use of last/next on the first run
add further cases to check for in run_coderef, making it a more rounded checker/enforcer. The first example is the "Exiting" warnings, which we can make fatal and check for as well. This will be useful if warnings are enabled in the caller
This approach can be foiled but the caller would have to go out of their way toward that end.
Tested with v5.16.3 and v5.26.2.
† Btw, you can't fight a caller's decision to turn off warnings. Let them be. It's their code.
‡ This can be checked with
perl -wE'sub tt { last }; do { tt() }; say "done"'
where we get
Exiting subroutine via last at -e line 1.
Can't "last" outside a loop block at -e line
while if there is a "loopy" block
perl -wE'sub tt { last }; { do { tt() } }; say "done"'
we get to see the end of the program, no exception
Exiting subroutine via last at -e line 1.
done
The extra block { ... } is "semantically identical to a loop that executes once" (see the entry for next in perlfunc).
The same can be checked for eval by printing its message in $@.
The original post, based on the expectation that only warnings are emitted
The warnings pragma is lexical, so adding, per ysth's comment,
use warnings FATAL => 'exiting';
in the sub itself (or in eval to scope it more tightly) should work under the restrictions
sub tx_exec {
    use warnings FATAL => "exiting";
    my ($sub, $args) = @_;
    $sub->($args);
};
since the warning fires inside the tx_exec scope. In my test, a call to this with a coderef that does not do last/next runs fine, and it dies only for a later call with one that does.
Or, one can implement it using the $SIG{__WARN__} "signal" (hook):
sub tx_exec {
    local $SIG{__WARN__} = sub {
        die @_ if $_[0] =~ /^Exiting subroutine via (?:last|next)/;
        warn @_;
    };
    my ($sub, $args) = @_;
    ...
}
This is the manual approach I was mentioning in the question. So far this was the only approach that helped me cleanly handle misbehaving client code, without any assumptions or expectations.
I'd prefer, and will gladly consider, a more idiomatic approach, like the local $SIG or use warnings FATAL => 'exiting', if they work without any expectation from client code (specifically that it has warnings enabled in any form).
tx_exec.pl:
#!/usr/bin/perl
use strict;
# no warnings!
use tx_exec qw(tx_exec);
tx_exec ("normal", sub { return "foobar"; });
tx_exec ("die", sub { die "barbaz\n"; });
tx_exec ("last", sub { last; });
tx_exec ("next", sub { next; });
tx_exec.pm:
package tx_exec;
use strict;
use warnings;
require Exporter;
our @ISA = qw(Exporter);
our @EXPORT_OK = qw(tx_exec);
my $MAX_TRIES = 3;

sub tx_exec {
    my ($desc, $sub, $args) = @_;
    print "\ntx_exec($desc):\n";
    my $try = 0;
    my $running = 0;
    while (1) {
        $try++;
        my $sub_ret;
        my $ok = eval {
            # start transaction
            die "Usage of `next` disallowed in closure passed to tx_exec\n" if $running;
            $running = 1;
            $sub_ret = $sub->($args);
            print "sub returned properly\n";
            # commit transaction
            1;
        };
        $running = 0;
        unless ($ok) {
            if ($@ =~ /^Usage of `next`/) {
                print $@;
                return (undef, undef); # don't retry
            }
            print "failed with error: $@";
            # rollback transaction
            if ($try >= $MAX_TRIES) {
                print "failed after $try tries\n";
                return (undef, undef);
            }
            print "try #$try failed, retrying...\n";
            next;
        }
        # some cleanup
        print "returning (1, ".($sub_ret//'<undef>').")\n";
        return (1, $sub_ret);
    }
    print "Usage of `last` disallowed in closure passed to tx_exec\n";
    return (undef, undef);
}
output:
tx_exec(normal):
sub returned properly
returning (1, foobar)
tx_exec(die):
failed with error: barbaz
try #1 failed, retrying...
failed with error: barbaz
try #2 failed, retrying...
failed with error: barbaz
failed after 3 tries
tx_exec(last):
Usage of `last` disallowed in closure passed to tx_exec
tx_exec(next):
Usage of `next` disallowed in closure passed to tx_exec
For lack of @ysth's involvement in writing an answer, I'm writing up the best solution I found so far, inspired by his first attempt from the comments to the question. (I will re-accept ysth's answer if he posts it later.)
The eval calling the coderef needs to look like this:
my $ok = eval {
    # start transaction
    my $proper_return = 0;
    {
        $sub_ret = $sub->($args);
        $proper_return = 1;
    }
    die "Usage of `next` or `last` disallowed in coderef passed to tx_exec\n" unless $proper_return;
    # commit transaction
    1;
};
The bare block acts as a loop that exits immediately on either next or last, so depending on whether calling the coderef lands us back inside the block (where $proper_return gets set) or just past it, we can deduce whether the coderef executed next/last and act appropriately.
More on bare block semantics and their interaction with next/last can be found here.
It is left as an exercise for the reader to handle the rarely seen redo in the code above.
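One way to handle it (a sketch that is not from ysth or the original answer): since redo restarts the bare block's body without leaving the eval, a flag declared outside the block detects the second entry.
my $ok = eval {
    # start transaction
    my $entered = 0;
    my $proper_return = 0;
    {
        # a redo from the coderef jumps back here, so $entered is already 1
        die "Usage of `redo` disallowed in coderef passed to tx_exec\n" if $entered++;
        $sub_ret = $sub->($args);
        $proper_return = 1;
    }
    die "Usage of `next` or `last` disallowed in coderef passed to tx_exec\n" unless $proper_return;
    # commit transaction
    1;
};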

Perl procedural return two stack call levels

My question is similar to:
Is it possible for a Perl subroutine to force its caller to return?
but I need a procedural method.
I want to write a message procedure that also returns; here is the essential example code:
sub PrintMessage {
    # this function can print to the screen and to the logfile as well
    print "Script message: $_[0]\n";
}

sub ReturnMessage {
    PrintMessage($_[0]);
    return $_[1]; # <-- we are thinking about *this* return
}

sub WorkingProc {
    PrintMessage("Job is started now");
    # some code
    PrintMessage("processed 5 items");
    # this should return from WorkingProc with the given exitcode
    ReturnMessage("too many items!", 5) if $items > 100;
    # another code
    ReturnMessage("time exceded!", 6) if $timespent > 3600;
    PrintMessage("All processed succesfully");
    return 0;
}
my $ExitCode = WorkingProc();
# finish something
exit $ExitCode;
The idea is: how can I use return inside the ReturnMessage function to exit WorkingProc with the specified code? Note that ReturnMessage is called from many places.
This isn't possible. However, you can explicitly return:
sub WorkingProc {
    PrintMessage("Job is started now");
    ...
    PrintMessage("processed 5 items");
    # this returns from WorkingProc with the given exitcode
    return ReturnMessage("too many items!", 5) if $items > 100;
    ...
    return ReturnMessage("time exceded!", 6) if $timespent > 3600;
    PrintMessage("All processed succesfully");
    return 0;
}
A sub can have any number of return statements, so this isn't an issue.
Such a solution is preferable to hacking through the call stack, because the control flow is more obvious to the reader. What you were dreaming of is a kind of GOTO, which most people not writing C or BASIC gave up 45 years ago.
Your code relies on exit codes to determine errors in subroutines. *Sigh*. Perl has an exception system which is fairly backwards, but still more advanced than that.
Throw a fatal error with die "Reason", or use Carp and croak "Reason". Catch errors with the Try::Tiny or TryCatch modules.
sub WorkingProc {
    PrintMessage("Job is started now");
    ...
    PrintMessage("processed 5 items");
    # this should return from WorkingProc with the given exitcode
    die "Too many items!" if $items > 100;
    ...
    die "Time exceeded" if $timespent > 3600;
    PrintMessage("All processed succesfully");
    return 0;
}
WorkingProc();
If an error is thrown, this will exit with a non-zero status.
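If the caller wants to turn that back into a specific exit status rather than letting the uncaught die terminate the program with Perl's default status, it can catch the error; a sketch using Try::Tiny (the mapping of messages back to the exit codes 5 and 6 is ad hoc, and PrintMessage is the question's own helper):
use Try::Tiny;

my $exit_code = 0;
try {
    WorkingProc();
}
catch {
    PrintMessage("Job failed: $_");
    $exit_code = /Time exceeded/ ? 6 : 5;   # map the error text back to an exit code
};
exit $exit_code;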
The approach that springs to mind for non-local return is to throw an exception (die) from the innermost function.
You'll then need to have some wrapping code to handle it at the top level. You could devise a set of utility routines to automatically set that up.
Using Log::Any and Log::Any::Adapter in conjunction with Exception::Class allows you to put all the pieces together with minimum fuss and maximum flexibility:
#!/usr/bin/env perl

package My::Worker;
use strict; use warnings;
use Const::Fast;
use Log::Any qw($log);
use Exception::Class (
    JobException => { fields => [qw( exit_code )] },
    TooManyItemsException => {
        isa => 'JobException',
        description => 'The worker was given too many items to process',
    },
    TimeExceededException => {
        isa => 'JobException',
        description => 'The worker spent too much time processing items',
    },
);

sub work {
    my $jobid = shift;
    my $items = shift;

    const my $ITEM_LIMIT => 100;
    const my $TIME_LIMIT => 10;

    $log->infof('Job %s started', $jobid);

    shift @$items for 1 .. 5;
    $log->info('Processed 5 items');

    if (0.25 > rand) {
        # throw this one with 25% probability
        if (@$items > $ITEM_LIMIT) {
            TooManyItemsException->throw(
                error => sprintf(
                    '%d items remain. Limit is %d.',
                    scalar @$items, $ITEM_LIMIT,
                ),
                exit_code => 5,
            );
        }
    }

    { # simulate some work that might take more than 10 seconds
        local $| = 1;
        for (1 .. 40) {
            sleep 1 if 0.3 > rand;
            print '.';
        }
        print "\n";
    }

    my $time_spent = time - $^T;
    ($time_spent > $TIME_LIMIT) and
        TimeExceededException->throw(
            error => sprintf(
                'Spent %d seconds. Limit is %d.',
                $time_spent, $TIME_LIMIT,
            ),
            exit_code => 6,
        );

    $log->info('All processed succesfully');
    return;
}

package main;
use strict; use warnings;
use Log::Any qw( $log );
use Log::Any::Adapter ('Stderr');

eval { My::Worker::work(exceptional_job => [1 .. 200]) };
if (my $x = JobException->caught) {
    $log->error($x->description);
    $log->error($x->error);
    exit $x->exit_code;
}
Sample output:
Job exceptional_job started
Processed 5 items
........................................
The worker spent too much time processing items
Spent 12 seconds. Limit is 10.
or
Job exceptional_job started
Processed 5 items
The worker was given too many items to process
195 items remain. Limit is 100.

Is it ok to use Attribute::Handlers for implementing retry logic

I have almost 50+ subroutines like verifyXXXX. I need to implement retry logic for all of these subs. I want to write this retry logic where the sub is actually implemented. If the return value of the sub is false/undef, then it will retry again.
The subs will be called in the regular way, so that the caller will not know about the retry logic; something like:
verify_am_i_doing_good()
    or die('sorry you are not doing as expected.');
verify_am_i_fine()
    or die('sorry you are not fine.');
:
:
The actual implementation of these functions is something like this in the package:
use Attribute::Handlers;
use constant RETRY_LIMIT => 4;
use constant RETRY_DELAY => 2;

sub verify_am_i_doing_good : __retry
{
    return 1 if ($x == $y);
    return;
}

sub __retry : ATTR(CODE) {
    my ($pkg, $sym, $code) = @_;
    my $name = *{ $sym }{NAME};
    no warnings 'redefine';
    *{ $sym } = sub
    {
        my $self = $_[0];
        my $result;
        logMsg (INFO, "Executing subroutine $name with retry limit " . RETRY_LIMIT);
        for (my $retryCount = 1; $retryCount <= RETRY_LIMIT; $retryCount++)
        {
            logMsg (INFO, "Executing subroutine $name with retry count $retryCount");
            my $result = $code->( @_ );
            if ($result)
            {
                logMsg (INFO, "Expected result observed in retry count $retryCount");
                return wantarray ? @$result : $result;
            }
            else
            {
                logMsg (INFO, "Expected result is NOT observed in retry count $retryCount");
                logMsg (INFO, "Retrying again by updating uixml");
                sleep RETRY_DELAY;
                $self->updateState();
            }
        }
        logMsg (WARN, "Failed to verify expected result for subroutine $name with retry limit " . RETRY_LIMIT);
        return;
    };
}
The reason to use Attribute::Handlers in place of Attribute::Attempts is that, in the case of failure, I need to call another subroutine, updateState(), before retrying (re-executing) the subroutine.
I got the idea of writing the retry logic this way from the following post: http://www.perl.com/pub/2007/04/12/lightning-four.html
My main concern is that I am using this __retry attribute for almost 50+ subs. Is it good practice to do it this way, or is there anything simpler I can do?
Your help will be highly appreciated.
You don't need attributes to do a sub wrapper. There was Memoize long before there was Memoize::Attrs (or Attribute::Memoize for that matter). You can just take a look at how Memoize handles it.
Quite recently, I was writing some Perl for functions called in another interface. All the arguments passed to the Perl function from this interface would be passed in a funky-but-universal format used by my division. Rather than deal with this everywhere, I wrote a logic wrapper like so
sub external (@) {
    my ( $subname, $code ) = @_;
    ...
    my $wrapped = sub {
        my $count = 5;
        my @results;   # declared before the loop so it is still in scope for the return below
        while ( --$count and not( @results = &$code ) ) {
            adjust_stuff();
        }
        return @results;
    };
    { no strict 'refs'; # my special "no-block"
        *$subname = $wrapped;
    }
    return;
}
And used it like this (some people don't like this use of the "fat comma")
external something_I_want_to_do => sub {
    my @regular_old_perl_args = @_;
    ...
};
The prototype (@) helps a sub act as an operator, so it need not always be called with parentheses.
But by all means if you like method attributes and it works and you can get it not to bite you, use them. But you don't have to. You should probably read up on the caveats though.
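If attributes end up feeling like too much machinery, the same wrap-at-load-time idea can also be applied to all 50+ verify_* subs at once by walking the package's symbol table. A sketch, assuming every sub to wrap matches /^verify_/ and lives in one package (My::Verifier is a made-up name; RETRY_LIMIT, RETRY_DELAY and updateState are taken from the question):
package My::Verifier;
use strict;
use warnings;

use constant RETRY_LIMIT => 4;
use constant RETRY_DELAY => 2;

sub verify_am_i_doing_good { ... }   # the plain subs, written with no attributes
sub verify_am_i_fine       { ... }

# Replace every verify_* sub in this package with a retrying wrapper.
{
    no strict 'refs';
    no warnings 'redefine';
    for my $name ( grep { /^verify_/ && defined &{"My::Verifier::$_"} }
                   keys %{'My::Verifier::'} ) {
        my $orig = \&{"My::Verifier::$name"};
        *{"My::Verifier::$name"} = sub {
            my $self = $_[0];
            for my $try (1 .. RETRY_LIMIT) {
                my $result = $orig->(@_);
                return $result if $result;
                sleep RETRY_DELAY;
                $self->updateState();
            }
            return;
        };
    }
}

1;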

How do I use a block as an 'or' clause instead of a simple die?

I want to check the results of an operation in the Net::FTP Perl module rather than die.
Typically you would do:
$ftp->put($my_file)
    or die "Couldn't upload file";
But I want to do something else instead of just dying in this script so I tried:
$ftp->put($my_file)
    or {
        log("Couldn't upload $my_file");
        return(-1);
    }
log("$my_file uploaded");
But Perl complains of compilation errors saying:
syntax error at toto.pl line nnn, near "log"
which is the second log in my code fragment.
Any suggestions greatly appreciated.
cheers,
do is what you're looking for:
$ftp->put($my_file)
    or do {
        log("Couldn't upload $my_file");
        return(-1);
    };
log("$my_file uploaded");
But this is probably better style:
unless( $ftp->put( $my_file )) { # OR if ( !$ftp->put...
    log("Couldn't upload $my_file");
    return(-1);
}
If you just want to return an error condition, then you can die and use eval in the calling func.
use English qw<$EVAL_ERROR>; # Thus, $@ <-> $EVAL_ERROR
eval {
    put_a_file( $ftp, $file_name );
    handle_file_put();
};
if ( $EVAL_ERROR ) {
    log( $EVAL_ERROR );
    handle_file_not_put();
}
and then call
sub put_a_file {
    my ( $ftp, $my_file ) = @_;
    $ftp->put( $my_file ) or die "Couldn't upload $my_file!";
    log( "$my_file uploaded" );
}
or do{}; always makes my head hurt. Is there a good reason to use the "or" syntax (which I admit using a lot for one-liners) vs "if" (which I prefer for multi-liners)?
So, is there a reason to use or not to use one of these methods in preference to the other?
foo()
    or do {
        log($error);
        return($error);
    };
log($success);
if (!foo()) {
    log($error);
    return($error);
}
log($success);
Use do. Here is a small code snippet:
sub test {
    my $val = shift;
    if ($val != 2) {
        return undef;
    }
    return 1;
}

test(3) || do {
    print "another value was sent";
};
I'm having a hard time understanding why this needs to be wrapped up in a do. Is there a reason that this isn't sufficient?
my $ftp_ok = $ftp->put( $my_file )
    or log("Couldn't upload $my_file") and return -1;
log("$my_file uploaded") if $ftp_ok;
This assumes that the put function doesn't return undef (or another false value) on success.
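A closing note on why the do block matters rather than chaining with and: EXPR or log(...) and return -1 only reaches the return if log() returned a true value, and many loggers return nothing. A tiny self-contained demonstration (demo and the inline logger are made up for illustration):
use strict;
use warnings;

sub demo {
    my ($ok) = @_;
    my $log = sub { print "logging...\n"; return };   # like many loggers, returns false

    $ok or $log->("failed") and return 'returned early';
    return 'fell through';
}

print demo(0), "\n";   # prints "logging..." then "fell through": the return never ran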