How to fix next in eval-like logic in Perl? [closed]

I wrote this code:
my $sql = ...
my $sth = $dbh->prepare($sql);
eval {
    $sth->execute;
}
or do {
    # die "SQL Error: $DBI::errstr\n";
    addToLog("SQL Error: $DBI::errstr\n");
    sleep(30);
    next mysql_recover;
};
but the checker complains that I can't use next inside eval. How should I rewrite this?

The problem is in a scope enclosing the shown code (which we can't see). From the documentation for next:
next LABEL
next EXPR
next
The next command is like the continue statement in C; it starts the next iteration of the loop [...]
So the shown code should be inside a loop labeled mysql_recover. Or inside a block with such a label, since a block is a loop that executes once (though in that case next effectively acts as last, and last would be clearer).
However, we can't tell what is wrong without more code or the actual error message. The shown code gives no hints: a next LABEL; is legitimate only when it targets a construct subject to loop control, so the mere acceptance of that statement implies the next itself is fine.
The do block is not part of the eval, so I don't know what your checker might mean.
But, if there were a next inside of eval, then
eval BLOCK does not count as a loop, so the loop control statements next, last, or redo cannot be used to leave or restart the block.
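For instance, here is a minimal sketch of a valid structure: a hypothetical labeled retry loop, where the next sits in the loop body, outside the eval (the die simulates a failing $sth->execute).

```perl
use strict;
use warnings;

# Hypothetical retry loop; the die simulates a failing $sth->execute
my $attempts = 0;

MYSQL_RECOVER: while (1) {
    my $ok = eval {
        $attempts++;
        die "simulated SQL error\n" if $attempts < 3;
        1;    # eval returns true only if nothing died
    };
    if (!$ok) {
        # Legal: this next is in the loop body, not inside the eval block
        next MYSQL_RECOVER;
    }
    last MYSQL_RECOVER;
}
print "succeeded after $attempts attempts\n";
```

In the question's real code a sleep(30) would go before the next, and the loop would presumably re-prepare the statement handle.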
I'd like to also comment on that eval.
The $sth->execute shown as the sole statement in the eval can return undef if there is an error, and so your eval-handler (the do block) would trigger in that case. If that's your intent, fine, but it would be confusing, since an eval is meant to guard against exceptions (a die).
A full idiom is eval { ...; 1 } or do {...};, so the eval always returns success (that 1) unless a die was thrown. Then or do runs only if there was an exception, not on "ordinary" errors (like an undef from DBI).
Another way to handle that would be to explicitly check for errors from $sth->execute, which needs to be done anyway. (But then add that 1 as well, to guard against an unexpected false value. Why not.)
(There is still some fine print about $@, but that would take us elsewhere.)
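The difference the trailing 1 makes can be seen with plain subs in place of DBI calls. In this sketch, returns_false stands in for an execute that returns undef on an "ordinary" error, and throws stands in for one that dies:

```perl
use strict;
use warnings;

# Sketch: returns_false stands in for a DBI call that returns undef on an
# "ordinary" error; throws stands in for one that raises an exception.
sub returns_false { return }
sub throws        { die "boom\n" }

my $handler_runs = 0;

eval { returns_false(); 1 } or do { $handler_runs++ };  # handler NOT run: the 1 wins
eval { throws();        1 } or do { $handler_runs++ };  # handler run: die skipped the 1

print "handler ran $handler_runs time(s)\n";
```

With the trailing 1, the do block fires only for the genuine exception.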

Related

"Can't goto subroutine outside a subroutine" [closed]

I've created a subroutine named MENU with a label named INIT_MENU at the top of the subroutine, but when I call this label I get an error: Can't goto subroutine outside a subroutine at program.pl line 15
Here is an example:
sub MENU {INIT_MENU: print "blah blah";}
and here is the line 15:
goto &MENU, INIT_MENU;
Sorry if this is a duplicate question; I searched in all possible places, even the official Perl site.
The goto expects a single argument, so this code first executes goto &MENU and then, after the comma operator, evaluates the bareword INIT_MENU in void context (drawing a warning).
The purpose of goto &NAME is to replace the subroutine in which it is encountered with subroutine NAME; it doesn't make sense to call it outside of a subroutine, and that is an error. From perldiag:
Can't goto subroutine outside a subroutine
(F) The deeply magical "goto subroutine" call can only replace one subroutine call for another. It can't manufacture one out of whole cloth. In general you should be calling it out of only an AUTOLOAD routine anyway. See goto.
A comment on the purpose of this.
A lot has been written on the harmfulness of goto LABEL.† Not to mention that INIT_MENU: cannot be found when hidden inside a routine. The upshot is that there are always better ways.
A sampler
Call the function and pass the parameter
MENU('INIT_MENU');

sub MENU {
    my $menu = shift;
    if ($menu eq 'INIT_MENU') { ... }
    ...
}
If, for some reason, you want to 'hide' the call to MENU, use goto &MENU as intended:
sub top_level {   # pass INIT_MENU to this sub
    ...
    goto &MENU;   # also passes its @_ as it is at this point
    # control doesn't return here; MENU took the place of this sub
}
This altogether replaces top_level() with MENU() and passes its @_ on to it. Then process input in MENU as above (for example) to trigger the chosen block. After this, "not even caller will be able to tell that this routine was called first"; see goto. Normally this is unneeded.
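A runnable sketch of that mechanism (the sub names are illustrative):

```perl
use strict;
use warnings;

sub MENU {
    my ($item) = @_;
    return "MENU handled $item";
}

sub top_level {
    # Replaces this sub's call frame with MENU, passing along the current @_
    goto &MENU;
    # never reached
}

print top_level('INIT_MENU'), "\n";
```

The caller sees MENU's return value directly, as if top_level had never been on the call stack.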
Why even have MENU()? Instead, put the menu options in separate subs whose references are values in a hash keyed by option name. Then you have a dispatch table: once the logic (or the user's choice) selects a menu item, its code runs directly.
my %do_menu_item = (
    INIT_MENU => sub { ... },   # code for INIT_MENU
    ...
);
...
$do_menu_item{$item_name}->();
Menu systems tend to grow bigger and more complex with development. It would make sense to adopt OO from the start, and then there are yet other approaches for this detail.
If you find yourself considering goto it may be time to reconsider (some of) the design.
† See for instance this post and this post. It's (even) worse than that in Perl since there are specific constructs (and restrictions) for situations where goto can be argued acceptable, so there is even less room for it. One example is jumping out of nested loops: there are labels in Perl.

Perl `defined' and `undef' subroutine scope

Please take a look at the following code:
use strict;
use warnings;
print "subroutine is defined\n" if defined &myf;
myf();
sub myf
{
print "called myf\n";
}
undef &myf;
#myf();
print "now subroutine is defined\n" if defined &myf;
The output is
subroutine is defined
called myf
The first print statement prints; does that mean the interpreter (or compiler?) looks ahead and sees the subroutine definition? If so, why doesn't it see the undef &myf; before the second print statement?
Thanks
That doesn't have to do with scope, but with compile time and run time. Here's a simplified explanation.
The Perl interpreter will scan your code initially, and follow any use statements or BEGIN blocks. At that point, it sees all the subs, and notes them down in their respective packages. So now you have a &::myf in your symbol table.
When compile time has reached the end of the program, it will switch into run time.
At that point, it actually runs the code. Your first print statement is executed if &myf is defined. We know it is, because it got set at compile time. Perl then calls that function. All is well. Now you undef that entry in the symbol table. That occurs at run time, too.
After that, defined &myf returns false, so it doesn't print.
You even have the second call to myf() there in the code, but commented out. If you remove the comment, it will complain about Undefined subroutine &main::myf called. That's a good hint at what happened.
So in fact it doesn't look forward or backward in the code. By that time it has already finished scanning the code.
The different stages are explained in perlmod.
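The two stages can be made visible directly: a BEGIN block runs during compilation, before any ordinary run-time statement executes, regardless of where those statements appear in the file.

```perl
use strict;
use warnings;

our @stages;

# A BEGIN block runs at compile time, while the file is still being compiled
BEGIN { push @stages, 'compile time' }

# Ordinary statements run only after compilation has finished
push @stages, 'run time';

print "$_\n" for @stages;
```

The output always lists "compile time" first, which is the same mechanism that records sub definitions before any code runs.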
Note that there are not a lot of use cases for actually undefing a function. I don't see why you would remove it, unless you wanted to clean up your namespace manually.

global level exception handling in perl

I have written a Perl program which internally calls three Perl modules. My supervisor, after reviewing the code, asked me to add global exception handling. I didn't understand what he meant by this. He also said to use eval to accomplish it. I am not sure how to use eval so that it catches any exception in the entire Perl module. Can anyone help by providing links or an explanation?
Thanks in advance.
For each program he wants me to have exception handling where, if something goes wrong, it is highlighted and becomes easy for us to debug.
When an uncaught exception occurs, it is printed to STDERR. Your highlighting requirement is already being met.
Exceptions messages already include the line number at which they were thrown (unless specifically suppressed), so some information to help debug is already available.
$ perl -e'sub f { die "bar" } f("foo")'
bar at -e line 1.
Adding use Carp::Always; to your scripts will cause a stack backtrace to be provided as well, providing more information.
$ perl -e'use Carp::Always; sub f { die "bar" } f("foo")'
bar at -e line 1.
main::f("foo") called at -e line 1
The problem you are given seems imprecise. "Global" and eval are somewhat contradictory, as Borodin explained in his comment. Another way to do things "globally" is given in ikegami's answer. However, since eval is specifically mentioned, here is a rundown on a very basic use of it.
You use eval on a block of code (or an expression, but that is not what you want here). If a die is thrown anywhere inside that block, the eval catches it, in the sense that you get control back: the program won't just die. The variable $@ is set to the error message. Then you can interrogate what happened, print out diagnostics, and possibly recover from the error.
use Carp;

eval { run_some_code(@args) };
if ($@) {
    carp "Error in run_some_code(): $@ --";
    # Do further investigation, print, recover ...
}
You can have any code in the eval block above; it needn't be a single function call. Note that eval itself returns values that convey what happened (aside from $@ being set).
As for the "global" in your problem statement, one thing that comes to mind is that you use eval at the level of main:: -- wrap in it any subs that themselves invoke functions from the modules.
A crucial thing about exceptions is that they "bubble up". When a die (Perl's sole exception) is thrown in a sub and the caller doesn't eval it, it goes up the call chain ... eventually showing up in main::, and if it is not caught (eval-ed) there then the program dies. So you can eval the top-level call in main:: and get to know whether anything anywhere below went wrong.
eval { top_level_call() };
if ($@) {
    warn "Error from somewhere in top_level_call(): $@";
}

# Functions

sub top_level_call {
    # doing some work ...
    another_sub();
    # doing more ...
}

sub another_sub {
    # do work, no eval checks
}
If an error triggering a die happens in another_sub(), its processing stops immediately and control returns to the caller, top_level_call(). Since that sub doesn't check (no eval there), its execution stops at that point as well, and control returns to its caller (in this example, main:: itself). So the error eventually reaches main::, and eval-ing there lets you know about errors so your program won't just exit.
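Here is that call chain as a self-contained sketch: the die in another_sub() bubbles up through top_level_call() to the eval in main::.

```perl
use strict;
use warnings;

sub another_sub    { die "failed in another_sub\n" }   # no eval here
sub top_level_call { another_sub(); return "unreached" }

my $result = eval { top_level_call() };
my $err    = $@;
print "caught in main:: $err" if $err;
```

Nothing after the die in either sub runs; $result is undef and $err carries the original message, line number intact if the message had no trailing newline.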
Perhaps that's what was meant by "global" exception handling using eval.
You can do far more along these lines, if this is what you need to be doing. See eval for starters.
Update your question with clarifications, so you get more discussion here.
In practical terms, I would say that you equip yourself with some understanding of eval use as well as some of "global" error reporting, and then ask your supervisor for clarification and/or examples, as suggested by Borodin.

sysread returns undef, errno is EINVAL, but working?

I have some code set up for non-blocking input from a system event file.
sysopen(FILE, $targetInput, O_NONBLOCK|O_RDONLY)
    or die "Failed to open $targetInput, quitting.\n";
binmode(FILE);
# More assignments and preparations here...

while (1) {
    # Code that justifies non-blocking I/O here...
    $rBytes = sysread(FILE, $buffer, 16);
    printf("%d vs %d\n", $!, EAGAIN);
    print defined($rBytes) ? "Defined!\n" : "Undef!\n";
    if (!defined($rBytes) && $! == EAGAIN) {
        # Nothing actually read in non-blocking mode:
        usleep(1000);
    } else {
        # Got an event, moving on:
        print "Got it!\n";
        print "$rBytes!\n";
        last;
    }
}
# Logic using $buffer here...
#Logic using $buffer here...
This is a pretty standard setup, dozens of examples are available for this very thing. However, I have discovered that, 100% of the time, $rBytes remains undefined, and $! is set to code 22, EINVAL (Invalid Argument). Much testing has gone into ensuring that it is the sysread function specifically that causes this, and definitely nothing before it.
The catch is, it works. As you can see, my code treats anything that is not the combination of $rBytes being undefined (always true) and $! being EAGAIN (always false) as success, since I haven't added any error handling. What follows this code block is a massive switch/case; the nonexistent data "harmlessly" passes over it all, looping back to do it again as fast as it can.
When valid input is received, $rBytes is still undefined. But since $! is still not EAGAIN, execution falls through to the rest of the program and functions precisely as intended, with $buffer containing exactly what it should. I actually would not have noticed this issue at all if I hadn't glanced at my CPU usage meter, and probably wouldn't try to fix it if this were a quick one-shot script.
I can safely say that an accusation of "invalid argument" is bogus. The question is, why does it give that error code, and why is $rBytes always undefined?
As I was cobbling together a quick script to mock the assumptions made thus far, I actually stumbled upon the answer myself; sure enough, all the necessary information was there in the question all along.
A typical system event file works with messages that are 24 bytes in length. This block of code was, of course, trying to extract only the 16-byte timestamp from those events and leave the rest unconsumed for later. For reasons I don't understand myself, sysread doesn't like that.
By ensuring that the requested length is greater than or equal to 24, sysread reports bytes read and $! as documented. 24 could be a magic number, or sysread could simply have problems reading a partial message from a non-blocking resource (no errors occur in blocking mode). It was still putting the appropriate data into $buffer, presumably because the underlying code takes it by reference and does not zero it out in the event of an error, and that is why everything after the shown block worked fine.
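For illustration, assuming the 64-bit Linux input_event layout (two 64-bit timeval fields, then a 16-bit type, a 16-bit code, and a 32-bit value, 24 bytes total), a full record can be unpacked like this. The record here is synthetic, built with pack rather than read from a device:

```perl
use strict;
use warnings;

# Synthetic 24-byte input_event record (assumed 64-bit layout):
# q = signed 64-bit, S = unsigned 16-bit, l = signed 32-bit
my $record = pack('q q S S l', 1700000000, 123456, 1, 30, 1);
die "unexpected record length" unless length($record) == 24;

my ($sec, $usec, $type, $code, $value) = unpack('q q S S l', $record);
printf "t=%d.%06d type=%d code=%d value=%d\n",
       $sec, $usec, $type, $code, $value;
```

Reading 24 bytes at a time and unpacking the whole record sidesteps the partial-read problem entirely.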

Turn off or disallow fatal errors during script run?

We have a while(1) script that loops through its various workings, then sleeps for 60 seconds and goes through it all again. This script needs to run 24/7.
As we add new functionality within the while(1) loop, or just over the course of random issues, parts of it sometimes fail and unfortunately crash the entire script. The solution has been wrapping any such functions in eval{}, but my question is: is there any way to globally set that errors or fatals do NOT halt/kill the entire script, so we don't have to wrap everything in eval{}?
What you are trying to do (ignore any errors and carry on at all costs) is a very questionable practice: it may leave your program in an undefined state, and it makes actual bugs even harder to find.
You could in theory override the CORE::GLOBAL::die subroutine to catch exceptions from your Perl code, but a real die is still available as the CORE::die sub, and this doesn't trap errors from XS code or perl itself (unlike using eval). Note that some modules use die and warn for control flow. Consider the following code:
use Carp;

sub foo {
    my ($x, $y) = @_;
    croak "X must be smaller than Y" unless $x < $y;
    return $y - $x;
}
Now if die becomes a warn, that function could start to return negative numbers, wreaking all kinds of havoc (e.g. when used for array indices).
Please, stay with the eval solution, or even better: migrate to Try::Tiny. Fatal errors exist for a reason.
If high reliability is a must, you may want to adopt an Erlang-like model: A pool of worker processes. If an error turns up, that process is killed, and a replacement process started.
That makes no sense. How would the program know where to resume? You must tell it where, and you do that using eval.
You would surely be better writing a wrapper for the script that logs failures and restarts the script.
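If wrapping every call site is the real objection, a middle ground is one eval around the whole body of the while(1) loop, so a die in any step aborts that iteration rather than the program. A sketch, with placeholder jobs standing in for the script's steps:

```perl
use strict;
use warnings;

# Placeholder jobs standing in for the steps inside the while(1) loop;
# the second one simulates a crash.
my @jobs = (sub { 1 }, sub { die "job 2 blew up\n" }, sub { 1 });
my $failures = 0;

for my $job (@jobs) {               # stands in for one pass of the while(1) body
    eval { $job->(); 1 } or do {
        $failures++;
        warn "iteration failed: $@";    # log it and carry on
    };
}
print "completed with $failures failure(s)\n";
```

One eval, one place to log, and the loop keeps running; in the real script the for loop would be the while(1) with its sleep(60).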
Despite the other answers, the fact is that some errors may not be easily recoverable and just ignoring them and trudging along could easily cause unwanted behavior. Another option is to remove the while loop so that the script only executes once, and call it from cron, which allows you to run programs on a schedule. For example, you might open a shell and call crontab -e to edit the scheduler and add this line:
* * * * * perl /path/to/script.pl
which will run your script every minute and send you a mail with the output if there is any, including warnings and errors.