I'm writing Perl for the first time, and the Internet tells me I can use alarm if I have a long-running SQL query. I need to run a stored procedure that will run for hours from Perl, and I would like to set a time limit.
Below is the last step of my code, but I'm getting a segmentation fault when the process runs longer than my timeout, in this case sleep(20). The log shows it finishes printing ---1---- and then segfaults.
How can I fix this?
I tried taking sub DelRef {} out into a separate .pl file and commenting out the $db part on the third line, to test whether alarm works on its own, and it worked all right. I'm confused about which part went wrong and caused the segmentation fault.
sub DelRef {
    print "starting defRefData\n";
    $db = Sybapi->new($user, $password, $server, $margin_database);
    print "entering eval\n";
    my $timeout = 10;
    eval {
        local $SIG{ALRM} = sub { die "timeout\n" };
        print "inside eval now\n";
        alarm($timeout);
        print "---1----\n";
        #sleep(5);  # working fine
        sleep(20);  # not working
        #$db->exec_sql("exec CPN_Margins..clean_up_refData_db '$XrefCode'");
        print "----2----\n";
        alarm(0);
    };
    #alarm(0);
    print "out of eval\n";
    if ($@) {
        #die unless $@ eq "timeout\n";
        if ($@ eq "timeout\n") {
            warn "Timed out!\n";
            #exit 0;
        }
        else {
            print $@;
        }
    }
}
&DelRef();
print "process is done\n";
$db->close();
&DelRef();
print "process is done\n";
$db->close();
From perldoc (https://perldoc.perl.org/functions/alarm.html)-
It is usually a mistake to intermix alarm and sleep calls, because
sleep may be internally implemented on your system with alarm.
If you want to implement a timeout, you can do it without sleep; just follow the example mentioned in the linked documentation.
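To make that concrete, here is a minimal sketch of that pattern: alarm wrapped around the long-running call, with no sleep involved. The do_work sub is a CPU-bound stand-in for the real $db->exec_sql call, not part of the original code.

```perl
use strict;
use warnings;

# Stand-in for the long-running stored-procedure call (an assumption,
# not the asker's code): just burns some CPU and returns a number.
sub do_work {
    my $total = 0;
    $total += $_ for 1 .. 1_000_000;
    return $total;
}

my $timeout = 10;
my $result;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };  # the "\n" matters for the eq test below
    alarm($timeout);
    $result = do_work();   # real code would call $db->exec_sql(...) here
    alarm(0);              # cancel the alarm on success
};
alarm(0);                  # also cancel if the eval died for some other reason
if ($@) {
    die $@ unless $@ eq "timeout\n";
    warn "Timed out!\n";
}
else {
    print "finished: $result\n";
}
```

The second alarm(0) outside the eval follows the perldoc example: it guarantees the pending alarm is cleared even when the eval exits via die.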
EDIT:
I have added it here: How to set timeout for a long running Sybase sp in Perl
Related
I am using the AsyncTimeout Perl package to invoke a process asynchronously with a timeout in seconds. The process takes longer than that to complete, and it returns the "Timed out" message only after the process has finished.
Is there a way to stop the process and return timed-out?
The timeout in AsyncTimeout is implemented using alarm(), which is not delivered automatically on Windows. If you can call sleep() from time to time (in your worker callback), the alarm will get caught.
use strict;
use warnings;
use Async;   # the Async distribution also provides AsyncTimeout

my $proc = AsyncTimeout->new(
    sub {
        for (1 .. 10000) {
            print "count: $_\n";
            sleep(0);   # gives the pending alarm a chance to be delivered
        }
    },
    1,
    "my timeout\n"
);
while (defined $proc) {
print "Main program: The time is now ", scalar(localtime), "\n";
my $e;
if ($proc->ready) {
if ($e = $proc->error) {
print "Something went wrong. The error was: $e\n";
} else {
print "The result of the computation is: ", $proc->result, "\n";
}
undef $proc;
}
# The result is not ready; we can go off and do something else here.
sleep 1; # One thing we could do is to take nap.
}
print "main process continues\n";
prints
count: ...
count: 5555
The result of the computation is: my timeout
main process continues
This works with Strawberry Perl 5.22 and 5.32. If you have to make long-running system calls, this approach will be of no use.
I am using the Net::SSH2 library.
I can connect to my device and get the output OK in most cases. Below is the relevant code:
sub logIntoDevice
{
    my $self = shift;
    my ($ssh2) = @_;
    if (!$ssh2->connect($self->deviceIP))
    {
        say "Failed to connect to device:", $self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    if (!$ssh2->auth_password($self->usr, $self->pass))
    {
        say "Authentication Fail to ", $self->deviceIP;
        $ssh2->disconnect();
        exit 0;
    }
    my $channel = $ssh2->channel();
    $channel->blocking(0);
    $channel->shell();
    return $channel;
}
sub sendCmd
{
    my $self = shift;
    my ($cmd, $channel) = @_;
    my @cmdOutput;
    print $channel "$cmd\n";
    while (<$channel>)
    {
        chomp $_;
        push(@cmdOutput, $_);
    }
    return @cmdOutput;
}
So below are the commands I send to the subs. They work fine, and the output is written to the file OK.
$self->writeToFile($fileName,$self->sendCmd("show clock",$channel));
$self->writeToFile($fileName,$self->sendCmd("\n",$channel));
$self->writeToFile($fileName,$self->sendCmd("dir",$channel));
with the exception of when I send the following command:
$self->writeToFile($fileName,$self->sendCmd("sh run",$channel));
The output of the command on the device (via PuTTY) is:
sh run
Building configuration...
Current configuration : 16575 bytes
!
!
!Last configuration change at 16:37:19 CET+1 Sat Mar 15 2014
.....
but in the log file all you see is
sh run
Building configuration...
So the issue is that the blank lines after the "Building configuration..." output make the while (<$channel>) loop think it has reached the end of the output.
My problem is I can't figure out a way to loop through the data without using a while loop.
UPDATE
OK, I've come up with this solution, but it seems very clunky. There must be a better way of doing this:
sub sendCmd
{
    my $self = shift;
    my ($cmd, $channel) = @_;
    my @cmdOutput;
    my $currPrompt;
    # get prompt. I am sure there is a better way!!! just can't figure it out
    print $channel "\n";
    while (<$channel>)
    {
        $currPrompt = $_;
    }
    print $channel "$cmd\n";
    while (42)
    {
        my $inerOutput;
        while (<$channel>)
        {
            chomp $_;
            $inerOutput = $_;
            push(@cmdOutput, $_);
        }
        if ($inerOutput ne $currPrompt)
        {
            sleep(7);
        }
        else
        {
            last;
        }
    }
    return @cmdOutput;
}
I don't think your issue is blank lines. Most likely, the issue is that you are using non-blocking mode, and it takes time for the device to perform the command. So you are getting an empty line (or an undef) after "Building configuration..." is read because there is no extra output produced yet.
I would use Net::SSH2's poll method with a timeout, which will let you know when there is something to read. If "sh run" takes substantially longer than other commands you are issuing, your sendCmd method needs to be aware of this, and allow more time to pass before it decides no more output is coming its way.
Alternatively, you can (as is the custom when using, for example, Net::Telnet) keep reading output until you see the prompt, whatever the prompt is for the device in question; then you will know that the command has finished executing.
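As a sketch of that read-until-prompt idea with Net::SSH2 (the helper name, the prompt regex, and the deadline handling are all illustrative assumptions, not the library's API):

```perl
use strict;
use warnings;

# Keep reading the non-blocking channel until the device prompt appears
# at the end of the buffer, or until an overall deadline passes.
# $prompt_re is whatever matches your device's prompt, e.g. qr/Router# $/.
sub read_until_prompt {
    my ($channel, $prompt_re, $deadline) = @_;
    my $buf = '';
    my $end = time + $deadline;
    while (time < $end) {
        my $n = $channel->read(my $chunk, 4096);   # Net::SSH2::Channel read
        if (defined $n && $n > 0) {
            $buf .= $chunk;
        }
        else {
            select(undef, undef, undef, 0.1);      # no data yet; wait briefly
        }
        return split /\r?\n/, $buf if $buf =~ $prompt_re;
    }
    die "timed out waiting for prompt\n";
}
```

The essential change from the question's code is that end-of-output is detected by matching the prompt, not by <$channel> returning nothing, so pauses in the device's output no longer end the read early.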
Net::SSH2->poll is deprecated as a result of the libssh2_poll deprecation.
How can I end a script without using exit when using Perl FCGI? After searching for days, the only solution I found is to jump to a label in the main script. Below is the code of the main index.fcgi.
$fcgi_requests = 0; # the number of requests this fcgi process handled.
$handling_request = 0;
$exit_requested = 0;
$app_quit_request = 0; # End the application but not the FCGI process
# workaround for known bug in libfcgi
while (($ignore) = each %ENV) { }
$fcgi_request = FCGI::Request();
#$fcgi_request = FCGI::Request( \*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);
sub sig_handler {
my ($callpackage, $callfile, $callline) = caller;
if ($app_quit_request) {
$app_quit_request = 0;
goto ABORTLABEL;
}
$exit_requested = 1;
exit(0) if !$handling_request;
}
$SIG{USR1} = \&sig_handler;
$SIG{TERM} = \&sig_handler;
$SIG{PIPE} = 'IGNORE';
#The goal of fast cgi is to load the program once, and iterate in a loop for every request.
while ($handling_request = ($fcgi_request->Accept() >= 0)) {
process_fcgi_request();
$handling_request = 0;
last if $exit_requested;
#exit if -M $ENV{SCRIPT_FILENAME} < 0; # Autorestart
}
$fcgi_request->Finish();
exit(0);
#=========================================================#
sub process_fcgi_request() {
$fcgi_requests++;
# dispatch current request
my_app();
$fcgi_request->Finish();
}
#=========================================================#
# let it think we are done, used by abort
ABORTLABEL:
$fcgi_request->Finish();
#=========================================================#
The main requirement is that I want to stop program execution from inside subs in modules that may be called at any depth, for example inside a login function in an accounts module.
Of course I cannot use exit, because it would terminate the FCGI process. I tried all the error/throw/try modules; they all use die, which also ends the process. Of course I could use return from each sub, but this would require rewriting the whole program for FCGI.
The normal way to model exceptions in Perl is to call die inside an eval BLOCK, which catches the die and so doesn't terminate the process. It just terminates the eval, and the program continues to run from immediately afterwards. As far as I've seen, the exception-handling modules on CPAN are mostly wrappers around this basic functionality that give it different syntax or make it easier to write catch blocks, so I'm surprised these don't work for you. Did you actually try them, or did you just assume die always kills the process? The name is slightly misleading: it really means "throw an exception". It's just that if you do that outside any eval, the interpreter catches the exception, and its only response is to terminate the process.
eval {
say "Hello world";
die;
say "Not printed";
};
say "Is printed";
You don't want to call exit inside an eval though. Nothing catches that.
I would, though, recommend rewriting the entire control flow for FCGI. The lifecycle of your code changes significantly, so you have to make a certain number of modifications to ensure that variable re-use works properly and you're not leaking memory. Often it's better to do that up front rather than spend days tracking down odd bugs later.
After several questions and deep research, I arrived at this solution. This code example lets you return from any number of nested call levels. The module Scope::Upper is XS, so it should be fast.
use Scope::Upper qw/unwind CALLER/;

sub level1 {
    print "before level 1 \n";
    level2();
    print "after level 1 \n";
}
sub level2 {
    print "before level 2 \n";
    level3();
    print "after level 2 \n";
}
sub level3 {
    print "before level 3 \n";
    level4();
    print "after level 3 \n";
}
sub level4 {
    print "before level 4 \n";
    #unwind CALLER 2;
    my @frame;
    my $i;
    #$i++ while @frame = caller($i) and $frame[0] ne "main";
    $i++ while @frame = caller($i);
    #print "i=: $i \n";
    #unwind CALLER (@frame ? $i : $i - 1);
    unwind CALLER $i - 1;
    print "after level 4 \n";
}
print level1();
If you run this code the output will be:
before level 1
before level 2
before level 3
before level 4
You can return to any upper level using:
my $intLevel = 2;
unwind CALLER $intLevel;
I am running a scraper that is fetching some key pieces of information from several hundred webpages. Everything is mostly working fine, but on timeouts I am getting a segmentation fault.
use Perl::Unsafe::Signals;
require LWPx::ParanoidAgent;
...
$ua = LWPx::ParanoidAgent->new();
$ua->timeout(60);
...
local $SIG{ALRM} = sub {
print "Timeout occurred. Skipping to next record.\n";
};
alarm 60; # give each journal a minute to respond, in total.
UNSAFE_SIGNALS {
...
# some calls like the following:
my $pageResponse = $ua->get($url);
if ($pageResponse->is_success) {
# calls to a sub
# that also does $ua->get()
# I think it fails inside the sub (if that makes a diff)
}
};
alarm 0; # clear the timeout.
Running: perl 5, version 16, subversion 3 (v5.16.3) built for i686-linux
The script throws the segmentation fault after a timeout: I get the "Timeout occurred" print and then Segmentation fault.
Does anybody have any clues as to what might be happening? Suggestions for debugging?
Extra Info:
I had just an eval block instead of an UNSAFE_SIGNALS block before, and it would simply hang when a timeout occurred.
The problem was solved by putting the eval block only around the $ua->get request and nothing else. It seems the problem was reaching an inconsistent state depending on when the alarm was triggered.
Used the following code for all my get requests:
sub uaGetWrapper($) {
    my $url = shift;
    my $response;
    eval {
        local $SIG{ALRM} = sub { die "timeout" };
        alarm 60;
        $response = $ua->get($url);
        alarm 0;
    };
    if ($@ && $@ =~ /timeout/) {
        return 0;   # return false on a timeout.
    }
    return $response;
}
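With a wrapper like that, each call site only needs a truthiness check before using the response. A runnable sketch of that control flow, with a stub in place of the real wrapper (the stub, the RECORD label, and the fake URLs are all illustrative, not from the original script):

```perl
use strict;
use warnings;

# Stub standing in for the real uaGetWrapper: pretend every second URL
# times out, so both branches of the check are exercised.
my $calls = 0;
sub uaGetWrapper { return (++$calls % 2) ? "response for $_[0]" : 0 }

RECORD: for my $url (qw(a b c d)) {
    my $response = uaGetWrapper($url);
    if (!$response) {
        print "Timeout occurred. Skipping to next record.\n";
        next RECORD;
    }
    print "Got: $response\n";
}
```

Because the timeout is now handled inside the wrapper, the scraping loop itself never sees a half-finished request: it either gets a response object or a false value.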
Is there a way to suppress warnings and error messages in PDF::Reuse?
(I don't need the warnings... if this part of the script fails for any particular PDF, that's OK.)
I've tried the following, but it doesn't seem to work:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm 10;
    {
        local $SIG{__WARN__} = sub { };
        use PDF::Reuse;   # note: use takes effect at compile time, not at this point in the run
        prFile($copyPdf);
        prDoc($file);
        prEnd() or next;
    }
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";
    print "timed out\n";
}
What warnings are you seeing?
I tried the above script with a PDF I had lying around and got no errors or warnings. perl 5.8.8, PDF::Reuse 0.35.
Is the problem that one of your PDFs is badly formed?