I am using syntax checking to see whether my Perl script is being used in the correct way. If the syntax is not correct, I display a message saying what the correct syntax is, and then end the execution of the program.
Between:
print "Please use the following syntax: ...";
exit 1;
and:
die("Please use the following syntax: ...");
Which one should I use? I know die would have been the answer if exception handling was in place, but that is not the case.
Which one is the better choice for program termination without exception handling? Please also say why.
It depends on what you want to happen with STDERR and STDOUT. I prefer to send error and warning type messages to STDERR, so that when someone redirects output to a file they still see the error messages. There are times, though, when STDOUT is used to communicate status to the user (so he or she can tail -f or paginate it), and then writing to STDERR causes pain: they have to redirect STDERR back into STDOUT with 2>&1, and not everyone knows how to do that. So whether to use die or print-and-exit depends heavily on what type of program you are writing.
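The distinction in a nutshell (a minimal sketch):

```perl
use strict;
use warnings;

print STDOUT "status update\n";      # follows a redirect such as: script > out.txt
print STDERR "error: bad input\n";   # still reaches the terminal after that redirect
```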
There are other benefits/drawbacks to using die:
You can trap die with an eval, but not exit
You can run code when the program calls die by installing a handler for the __DIE__ pseudo-signal ($SIG{__DIE__})
You can easily override the die function
Each of those has times where it is handy to be able to do them, and times when it is a pain that they can be done.
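The first two points can be demonstrated in a few lines (note that the __DIE__ hook fires even for a die inside an eval):

```perl
use strict;
use warnings;

# Point 1: die can be trapped by eval { }; exit cannot.
my $ok = eval { die "trapped\n"; 1 };
print "eval caught: $@" unless $ok;    # prints "eval caught: trapped"

# Point 2: a __DIE__ handler runs before the exception propagates.
local $SIG{__DIE__} = sub { print "handler saw: $_[0]" };
eval { die "seen\n" };                 # prints "handler saw: seen"
```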
print prints to STDOUT, but die prints to STDERR
exit 1 exits with a status of 1, but die's exit status is the current value of $! (errno) if that is non-zero, otherwise $? >> 8 if that is non-zero, otherwise 255
Hence die is the preferred way, as it's simpler and shorter, though note that it does not give you precise control over the exit status.
It is best to follow existing practice ("rule of least surprise"): exit with 2 on a fatal error such as a syntax error.
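For the original question, that combination might look like this (the usage text and argument names are made up; the exit status of 2 follows the convention above):

```perl
use strict;
use warnings;

sub usage {
    # Usage text goes to STDERR so it is not lost in redirected output.
    print STDERR "Usage: $0 <input-file>\n";
    exit 2;    # 2 is the conventional status for a usage/syntax error
}

usage() unless @ARGV == 1;
print "Processing $ARGV[0]\n";
```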
Related
I noticed in my program that an exception raised from IO::Pipe was behaving oddly, and I cannot figure out what it's doing (let alone how it's doing it). I've boiled it down to a simple example program:
use strict;
use warnings;
use Carp;
use IO::Pipe;
my($path) = shift;
my($bad) = shift || "";
eval {
if ($path =~ m{pipe}i) {
my($bin) = ($bad ? "/bin/lsddd" : "/bin/ls");
my($pipe) = IO::Pipe->new();
$pipe->reader("$bin -l .");
print "$_" while <$pipe>;
$pipe->close;
}
elsif ($path =~ m{croak}i) {
croak "CROAKED" if $bad;
}
else {
die "DIED" if $bad;
}
};
if ($@) {
my($msg) = $@;
die "Caught Exception: $msg\n";
}
die "Uh-oh\n" if $bad;
print "Made it!\n";
The example program takes two arguments, one to indicate which code path to go down inside the eval block, and the second to indicate whether or not to generate an error (anything that evaluates to false will not generate an error). All three paths behave as expected when no error is requested; they all print Made it! with no error messages.
When asking for an error and running through the croak or die paths, it also behaves as I expect: the exception is caught, reported, and the program terminates.
$ perl example.pl die foo
Caught Exception: DIED at example.pl line 23.
and
$ perl example.pl croak foo
Caught Exception: CROAKED at example.pl line 11.
eval {...} called at example.pl line 10
When I send an error down the IO::Pipe path, though, it reports an error, but the program execution continues until the outer die is reached:
$ perl example.pl pipe foo
Caught Exception: IO::Pipe: Cannot exec: No such file or directory at example.pl line 15.
Uh-oh
The first question is why -- why does the program report the "Caught Exception" message but not terminate? The second question is how do I prevent this from happening? I want the program to stop executing if the program can't be run.
There are two processes running after the eval in the case of interest. You can see this by adding a print statement before if ($@). One of them drops through the eval and thus gets to the last die.
The reader method forks when used with an argument, in order to open a process. That process is exec-ed in the child, while the parent returns with its pid. The code for this is in the internal _doit subroutine.
When this fails, the child croaks with the message you get. But the parent returns regardless, as it has no IPC with the child, which is expected to just disappear via exec. So the parent escapes and makes its way past the eval. That process has no $@ set and bypasses if ($@).
It appears that this is a hole in error handling, in the case when reader is used to open a process.
There are ways to tackle this. The $pipe is an IO::Handle, and we can check it and exit that extra process if it's bad (but a simple $pipe->error turns out to be the same in both cases). Or, since close is involved, we can look at $?, which is indeed non-zero when the error happens:
# ...
$pipe->close;
exit if $? != 0;
(or rather first examine it). This is still a "fix," which may not always work. Other ways to probe the $pipe, or to find PID of the escapee, are a bit obscure (or worse, digging into class internals).
On the other hand, a simple way to collect the output and exit code of a program is to use a module for that. A nice pick is Capture::Tiny. There are others, like IPC::Run and IPC::Run3, or the core, but rather low-level, IPC::Open3.
Given the clarifications, the normal open should also be adequate.
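A sketch with a plain piped open, checking both the open itself and $? after close (the directory listing is just an example command):

```perl
use strict;
use warnings;

# The list form of piped open bypasses the shell. open itself only
# fails if the fork fails, so the command's own status must be
# checked via $? after close.
open( my $pipe, '-|', '/bin/ls', '-l', '.' ) or die "Cannot fork: $!";
print while <$pipe>;
close($pipe);
die "Command failed, status $?\n" if $? != 0;
```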
I have a Plack/Starman application running with TryCatch statements that call confess from the Carp module. However, I notice that the confess output is not printed to STDOUT. I've tried routing STDERR output to STDOUT with 2>&1, but I still don't see anything. I have searched for possible error log files with no luck. Where in the world is this printing to? I am sure it's probably a simple answer. Where are the log files located? I am running on an Ubuntu box, if that matters.
Thanks
Some confusion here. First, confess (and all the other carps in the pond) don't print to STDOUT: they print to STDERR. Second, you're stopping the exception, and hence the associated output, with try/catch (a glorified eval), so nothing is printed unless you explicitly print it yourself. You'll see warnings, but you won't see messages from exceptions that would terminate the program (well, not Plack, but your script), because they're swallowed by your try/catch code, and it's up to you to decide whether any of it should be printed, and where.
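A sketch of that last point using a plain eval (the principle is the same with TryCatch): nothing appears unless the catch side prints it.

```perl
use strict;
use warnings;
use Carp;

eval {
    confess "something went wrong";   # would normally print to STDERR and die
};
if ($@) {
    # Without this line the message vanishes silently.
    print STDERR "handled: $@";
}
```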
https://metacpan.org/module/Carp
https://metacpan.org/module/TryCatch
https://metacpan.org/module/Plack
I have a perl script, using standard-as-dirt Net::HTTP code, and perl 5.8.8.
I have come across an error condition in which the server returns 0 bytes of data when I call:
$_http_connection->read_response_headers;
Unfortunately, my perl script dies, because the Net::HTTP::Methods module has a "die" on line 306:
Server closed connection without sending any data back at
/usr/lib/perl5/vendor_perl/5.8.8/Net/HTTP/Methods.pm line 306
And lines 305-307 are, of course:
unless (defined $status) {
die "Server closed connection without sending any data back";
}
How can I have my script recover gracefully from this situation, detecting the die and then running my own error-handling code, instead of dying itself?
I'm sure this is a common case, and probably something simple, but I have not come across it before.
Using eval to catch exceptions can occasionally be problematic, especially pre 5.14. You can use Try::Tiny.
You can use eval { } to catch die() exceptions. Use $@ to inspect the thrown value:
eval {
die "foo";
};
print "the block died with $@" if $@;
See http://perldoc.perl.org/functions/eval.html for details.
Customizing the die to mean something else is simple:
sub custom_exception_handler { ... } # Define custom logic
local $SIG{__DIE__} = \&custom_exception_handler;
# custom_exception_handler is called first; unless it exits or
# throws something else, the original die still proceeds
The big advantage of this approach over eval is that the handler applies globally, so you don't have to wrap every piece of problematic code in its own eval block.
Of course, the custom exception handler should be adequate for the task at hand.
I am trying to run bp_genbank2gff3.pl (BioPerl package) from another Perl script that gets a GenBank file as its argument.
This does not work (no output files are generated):
my $command = "bp_genbank2gff3.pl -y -o /tmp $ARGV[0]";
open( my $command_out, "-|", $command );
close $command_out;
but this does
open( my $command_out, "-|", $command );
sleep 3; # why do I need to sleep?
close $command_out;
Why?
I thought that close is supposed to block until the command is done:
Closing any piped filehandle causes
the parent process to wait for the
child to finish...
(see http://perldoc.perl.org/functions/open.html).
Edit
I added this as last line:
say "ret=$ret, \$?=$?, \$!=$!";
and in both cases the printout is:
ret=, $?=13, $!=
(which means close failed in both cases, right?)
$? = 13 means your child process was terminated by a SIGPIPE signal. Your external program (bp_genbank2gff3.pl) tried to write some output to a pipe to your perl program. But the perl program closed its end of the pipe so your OS sent a SIGPIPE to the external program.
By sleeping for 3 seconds, you are letting the external program run for 3 seconds before the OS kills it, so it can get something done. Note that pipes have a limited capacity, though: if your parent Perl script is not reading from the pipe and the external program is writing a lot to standard output, its write operations will eventually block, and you may not really get 3 seconds of effort out of it.
The workaround is to read the output from the external program, even if you are just going to throw it away.
open( my $command_out, "-|", $command );
my @ignore_me = <$command_out>;
close $command_out;
Update: If you really don't care about the command's output, you can avoid SIGPIPE issues by redirecting the output to /dev/null:
open my $command_out, "-|", "$command > /dev/null";
close $command_out; # succeeds, no SIGPIPE
Of course if you are going to go to that much trouble to ignore the output, you might as well just use system.
Additional info: As the OP says, closing a piped filehandle causes the parent to wait for the child to finish (by using waitpid or something similar). But before it starts waiting, it closes its end of the pipe. In this case, that end is the read end of the pipe that the child process is writing its standard output to. The next time the child tries to write something to standard output, the OS detects that the read end of that pipe is closed and sends a SIGPIPE to the child process, killing it and quickly letting the close statement in the parent finish.
I'm not sure what you're trying to do but system is probably better in this case...
I'm learning Perl, and in a lot of the examples I see errors are handled like this
open FILE, "file.txt" or die $!;
Is die in the middle of a script really the best way to deal with an error?
Whether die is appropriate in the middle of the script really depends on what you're doing. If it's only tens of lines, then it's fine. A small tool with a couple hundred lines, then consider confess (see below). If it's a large object-oriented system with lots of classes and interconnected code, then maybe an exception object would be better.
confess in the Carp package:
Often the bug that led to the die isn't on the line that die reports.
Replacing die with confess (see Carp package) will give the stack trace (how we got to this line) which greatly aids in debugging.
For handling exceptions from Perl builtins, I like to use autodie. It catches failures from open and other system calls and throws exceptions for you, without your having to do the or die bit. These exceptions can be caught with an eval { }, or better yet, by using Try::Tiny.
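A sketch of autodie with a plain eval (Try::Tiny works the same way but is a separate CPAN install; autodie has shipped with Perl since 5.10.1):

```perl
use strict;
use warnings;
use autodie;   # open now throws an exception on failure

eval {
    open( my $fh, '<', 'no_such_file.txt' );   # no "or die" needed
    1;
} or print "caught: $@";
```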
Since I use Log::Log4perl almost everywhere, I use $logger->logdie instead of die. And if you want to have more control over your exceptions, consider Exception::Class.
It is better to catch your exceptions with Try::Tiny (see its documentation why).
Unless you've got a more specific idea, then yes you want to die when unexpected things happen.
Dying at the failure to open a file and giving the file name is better than the system telling you it can't read from or write to an anonymous undefined.
If you're talking about a "script", in general you're talking about a fairly simple piece of code, not layers that need coordination (not usually). In a Perl module, the attendant idea is that you don't own the execution environment, so either the main software cares and catches things in an eval, or it doesn't really care and dying is fine. However, as a module you should aim for a little more robustness and just pass back undef or something.
You can catch whatever dies (or croaks) in an eval block. And you can do your more specific handling there.
But if you want to inspect $! then write that code, and you'll have a more specific resolution.
Take a look at the near-universal standard of using strict. That's a pragma that makes the program die on questionable constructs, rather than letting you continue along.
So I think the general idea is: yes, DIE unless you have a better idea of how things should be handled. If you put enough foresight into it, you can be forgiven for the one or two times you don't die, because you know you don't need to.
The more modern approach is to use the Carp standard library.
use Carp;
my $fh;
open $fh, '<', "file.txt" or confess($!);
The main advantage is it gives a stack trace on death.
I use die but I wrap it in eval blocks for control over the error handling:
my $status = eval
{
# Some code...
};
If the 'eval' fails:
$status will be undefined.
$@ will be set to whatever error message was produced (or the contents of a die).
If the 'eval' succeeds:
$status will be the last returned value of the block.
$@ will be set to ''.
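Put together, the pattern looks like this:

```perl
use strict;
use warnings;

my $status = eval {
    # some code that may die...
    "finished";    # the block's last value becomes $status on success
};
if ( defined $status ) {
    print "ok: $status\n";        # prints "ok: finished"
}
else {
    warn "eval failed: $@";
}
```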
As @friedo wrote, if this is a standalone script of a few lines, die is fine, but in my opinion using die in modules or in require'd pieces of code is not a good idea, because it interrupts the flow of the program. I think control of the flow should be a prerogative of the main part of the program.
Thus, I think, in a module it is better to return undef, such as with a bare return (which yields undef in scalar context and an empty list in list context), and to set an exception object that can be retrieved for more details. This concept is implemented in the module Module::Generic, like this:
# in some module
sub myfunc
{
my $self = shift( @_ );
# some code...
some_operation() or return( $self->error( "Oops, something went wrong." ) ); # some_operation() is a placeholder
}
Then, in the caller, you would write:
$obj->myfunc || die( "An error occurred at line ", $obj->error->line, " with stack trace: ", $obj->error->trace );
Here error will set an exception object and return undef.
However, because many module die or croak, you also need to catch those interruption using eval or try-catch blocks such as with Nice::Try
Full disclosure: I am the developer of both Module::Generic and Nice::Try.