perl: can an END block be called when the program is killed?

BEGIN {
    while (1) {
        print "hi\n";
    }
}
END {
    print "end is called\n";
}
in shell:
kill <pid>
OUTPUT:
hi
hi
hi
hi
hi
hi
hi
hi
hi
Terminated
The END block didn't get called when I killed it via kill or Ctrl-C.
Is there something equivalent that will always get called before the program exits?

Ctrl-C sends a SIGINT to your program. You can 'catch' this with a signal handler by setting the appropriate entry in %SIG. I would also note - I don't see why you're using BEGIN that way. BEGIN is a special code block that's called at compile time, at the very first opportunity. That means it's triggered when you run perl -c to validate your code, and as such an infinite loop inside it is a really bad idea. See: perlmod
E.g.
#!/usr/bin/perl
use strict;
use warnings;

$SIG{'INT'} = \&handle_kill;

my $finished = 0;

sub handle_kill {
    print "Caught a kill signal\n";
    $finished++;
}

while ( not $finished ) {
    print "Not finished yet\n";
    sleep 1;
}

END {
    print "end is called\n";
}
But there's a drawback - some signals you can't trap in this way. See perlipc for more details.
Some signals can be neither trapped nor ignored, such as the KILL and STOP (but not the TSTP) signals. Note that ignoring signals makes them disappear. If you only want them blocked temporarily without them getting lost you'll have to use POSIX' sigprocmask.
By default, if you send a kill, it'll send a SIGTERM, so you may want to override that handler too. However, it's typically considered bad to do anything other than exit gracefully on a SIGTERM - it's more acceptable to 'do something' and resume when trapping SIGHUP (hangup) and SIGINT.
You should note that Perl does 'safe signals', though - some system calls won't be interrupted; perl will wait for the call to return before processing the signal. That's because bad things can happen if you abort certain operations (aborting a close on a file while you're flushing data might leave it corrupt, for example). Usually that's not a problem, but it's something to be aware of.
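For example, a minimal sketch (my own, not from the answer) that also traps SIGTERM, so a plain kill <pid> lets the program shut down cleanly and still run its END block:
#!/usr/bin/perl
use strict;
use warnings;

my $finished = 0;

# Both Ctrl-C (SIGINT) and a plain `kill <pid>` (SIGTERM) just set a flag,
# so the loop below exits on its own and END still gets to run.
$SIG{'INT'}  = sub { $finished = 1 };
$SIG{'TERM'} = sub { $finished = 1 };

while ( not $finished ) {
    print "Not finished yet\n";
    sleep 1;
}

END {
    print "end is called\n";
}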

Put the proper signal handler in your code:
$SIG{INT} = sub { die "Caught a sigint $!" };
Ctrl-C sends the SIGINT signal to the script, which is caught by this handler.
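A small follow-up sketch (mine, not part of the answer): because the handler exits via die rather than the process being terminated by the signal's default action, the END block from the original question does get to run:
#!/usr/bin/perl
use strict;
use warnings;

# Dying from the handler unwinds normally, so END blocks still execute.
$SIG{'INT'} = sub { die "Caught a sigint\n" };

sleep 60;    # press Ctrl-C while this is sleeping

END {
    print "end is called\n";
}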

Related

ForkManager SIGINT only kills current process in fork

I want to have all child processes die when I kill a Perl process that is using Parallel::ForkManager. In the code below, if I run it and hit Ctrl-C while the sleep line is running, the sleep process is killed, but the print lines are then all executed simultaneously before the script ends. Ideally, I'd like an interrupt to immediately stop all execution. What can I do?
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = new Parallel::ForkManager(8);
while (1) {
    $fork1->start and next;
    system("sleep 15s");
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
According to perldoc system, system actually ignores both SIGINT and SIGQUIT:
Since SIGINT and SIGQUIT are ignored during the execution of system,
if you expect your program to terminate on receipt of these signals
you will need to arrange to do so yourself based on the return value.
So if you want your processes to stop executing if you SIGINT during the system call, you need to implement that logic yourself:
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = new Parallel::ForkManager(8);
while (1) {
    $fork1->start and next;
    print "Sleeping...";
    system("sleep 15s") == 0 or exit($?);
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
Or the more reasonable approach is to use the Perl built-in sleep:
#!/usr/bin/perl -w
use Parallel::ForkManager;

my $fork1 = new Parallel::ForkManager(8);
while (1) {
    $fork1->start and next;
    print "Sleeping...";
    sleep 15;
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;
First off - using system means you might have something strange happen, because you're then allowing whatever you're calling to handle signals by itself.
That may be your problem.
Otherwise, what you can do with perl is configure signal handlers - what to do if a signal is received by this process. By default, signals are either set to 'exit' or 'ignore'.
You can see the current settings via print Dumper \%SIG; (after use Data::Dumper;).
However, the simplest solution to your problem, I think, would be to set a handler to trap SIGINT and then send a kill to your current process group.
The behavior of kill when a PROCESS number is zero or negative depends on the operating system. For example, on POSIX-conforming systems, zero will signal the current process group, -1 will signal all processes, and any other negative PROCESS number will act as a negative signal number and kill the entire process group specified.
$SIG{'INT'} = sub {
    kill( 'TERM', -$$ );
};
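Putting that handler together with a loop like the one in the question, a rough sketch (my adaptation, untested against the asker's exact setup) might look like this:
#!/usr/bin/perl
use strict;
use warnings;
use Parallel::ForkManager;

# On Ctrl-C, signal the whole process group (parent, children, and any
# system() grandchildren) with TERM. The parent is part of the group too,
# so it is terminated by the default TERM action.
$SIG{'INT'} = sub {
    kill( 'TERM', -$$ );
};

my $fork1 = Parallel::ForkManager->new(8);
while (1) {
    $fork1->start and next;
    sleep 15;
    print "Still going!";
    $fork1->finish;
}
$fork1->wait_all_children;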

How to catch signal in Perl and don't stop process

I am trying to catch a signal, SIGUSR2 in my case. I am creating a subroutine to handle the signal using the following code:
$SIG{USR2} = \&handle_usr2;

sub handle_usr2 {
    open HELLO, ">hello" or die "die";
    print HELLO "SAYHELLO";
    close HELLO;
}
In this example I am catching the signal and printing some text to a file. The signal really does enter the handler subroutine and it writes to the file, BUT after that the process is killed.
So it kills the process regardless of which signal I am trapping.
But the interesting thing is that if I set the handler to 'IGNORE'
$SIG{USR2} = 'IGNORE';
it really ignores the signal and doesn't kill the process. How can I handle the signal without killing the process?
What does the rest of your code look like?
Because that should work fine, with one caveat (well, two - you do potentially issue a 'die' within your handler). A signal will interrupt certain system calls, like 'sleep', and your code will jump past them.
'IGNORE' works a little differently - your code will discard the signal without processing it.
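For reference, a self-contained sketch (mine, using a lexical filehandle instead of the bareword HELLO) where the process keeps running after each SIGUSR2:
#!/usr/bin/perl
use strict;
use warnings;

$SIG{USR2} = \&handle_usr2;

sub handle_usr2 {
    # Append rather than overwrite, and avoid die() inside the handler.
    open my $fh, '>>', 'hello' or return;
    print {$fh} "SAYHELLO\n";
    close $fh;
}

print "my pid: $$\n";
while (1) {
    sleep 5;    # sleep is interrupted when the signal arrives
    print "still alive\n";
}
Sending kill -USR2 <pid> from another shell should append to the file and the loop should carry on.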

signal a perl process from an independent perl process to trigger code in handler

I am using perl v14 on Windows. I have 2 simple files:
$SIG{'INT'} = sub {
    print "got SIGINT\n";
    # some useful code to be executed
    # on reception of the signal
};
$SIG{'ALRM'} = sub {
    print "got SIGALRM\n";
};

print "my pid: ", $$, "\n";

while (1)
{
    print "part 1\n";
    sleep(3);
    print "part 2\n\n";
    sleep(3);
}
The above file starts and waits to be killed, having printed its pid.
The second file simply kills the first Perl process using its pid (set manually).
$pid = xxxx;    # the manually entered pid of the first process
print "will attempt to kill process: $pid\n";
kill 'INT', $pid;
When I run the first Perl script and press Ctrl-C, the handler works as expected, but using the second file I can't get the same result. I have also tried other signals like ALRM, HUP, TERM and FPE, but no success. All I want to do is execute the code in the signal handler.
I found something called the INT2 signal for Win32.
Thanks in advance.
Windows only lets you use signals within the same thread, so signaling a different process will not work.
Instead of signals you could use other methods of interprocess communication, like sockets, pipes or files (see the sketch after the quote below).
From perlwin32:
Signal handling may not behave as on Unix platforms (where it doesn't
exactly "behave", either :). For instance, calling die() or exit()
from signal handlers will cause an exception, since most
implementations of signal() on Windows are severely crippled. Thus,
signals may work only for simple things like setting a flag variable
in the handler. Using signals under this port should currently be
considered unsupported.
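As a rough illustration of the 'files' option (my own sketch, not from the answer; the flag-file name is made up), the first script could poll for a flag file that the second script creates:
# First script: poll for a flag file instead of waiting for a signal.
use strict;
use warnings;

print "my pid: $$\n";
while (1) {
    if ( -e 'wake_up.flag' ) {
        unlink 'wake_up.flag';
        print "got the flag - running the handler code\n";
    }
    print "part 1\n";
    sleep(3);
}

# Second script: 'signal' the first one by creating the flag file.
use strict;
use warnings;

open my $fh, '>', 'wake_up.flag' or die "cannot create flag file: $!";
close $fh;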

Terminating a system() after certain amount of time in Windows

I'm running a command line application from within a Perl script (using system()) that sometimes doesn't return; to be precise, it throws an exception which requires user input to abort the application. This script is used for automated testing of the application I'm running via the system() command. Since it is part of automated testing, the system() call has to return if the exception occurs, and the test should be considered a failure.
I want to write a piece of code that runs this application and, if the exception occurs, continues with the script, considering this test to have failed.
One way to do this is to run the application for a certain period of time, and if the system call doesn't return in that period, terminate the system() and continue with the script.
(How can I terminate a system command with alarm in Perl?)
code for achieving this:
my $output;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 60;
    $output = system("testapp.exe");
    alarm 0;
};
if ($@) {
    print "Test Failed";
} else {
    # compare the returned value ($output) with the expected value
}
But this code doesn't work on Windows. I did some research and found out that %SIG doesn't work on Windows (see the book Programming Perl).
Could someone suggest how I could achieve this on Windows?
I would recommend looking at the Win32::Process module. It allows you to start a process, wait on it for some variable amount of time, and even kill it if necessary. Based on the example the documentation provides, it looks quite easy:
use Win32::Process;
use Win32;

sub ErrorReport {
    print Win32::FormatMessage( Win32::GetLastError() );
}

my $ProcessObj;
Win32::Process::Create( $ProcessObj,
                        "C:\\path\\to\\testapp.exe",
                        "",
                        0,
                        NORMAL_PRIORITY_CLASS,
                        "." ) || die ErrorReport();
if($ProcessObj->Wait(60000)) # Timeout is in milliseconds
{
# Wait succeeded (process completed within the timeout value)
}
else
{
# Timeout expired. $! is set to WAIT_FAILED in this case
}
You could also sleep for the appropriate number of seconds and use the Kill method in this module. I'm not exactly sure whether the NORMAL_PRIORITY_CLASS creation flag is the one you want to use; the documentation for this module is pretty bad. I see some examples using the DETACHED_PROCESS flag. You'll have to play around with that part to see what works.
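For instance, a hedged sketch of the timeout branch (my addition; the exit code passed to Kill is arbitrary):
if ( $ProcessObj->Wait(60000) ) {    # timeout in milliseconds
    # Process finished in time - fetch its exit code and compare it
    # with the expected value.
    my $exit_code;
    $ProcessObj->GetExitCode($exit_code);
    print "testapp.exe exited with $exit_code\n";
}
else {
    # Timeout expired - kill the stuck process and mark the test as failed.
    $ProcessObj->Kill(1);
    print "Test Failed\n";
}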
See Proc::Background; it abstracts the code for both Win32 and Linux. The function is timeout_system( $seconds, $command, $arg, $arg, $arg ).
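A minimal usage sketch, assuming timeout_system is imported explicitly and (my assumption) that a nonzero wait status in scalar context means the command failed or was killed on timeout:
use strict;
use warnings;
use Proc::Background qw(timeout_system);

# Give the application 60 seconds; if it is still running after that,
# it gets killed and we treat the test as failed.
my $wait_status = timeout_system( 60, 'testapp.exe' );
if ( $wait_status != 0 ) {
    print "Test Failed\n";
}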

Perl: when doing 'open(A,"proc|")' make 'close(A)' return instantly

If I do 'open(A,"proc|")', how do I make 'close(A)' return instantly,
even if 'proc' hasn't finished writing to stdout?
"man perlfunc" tells me:
Prematurely closing the read end of a pipe (i.e. before the process
writing to it at the other end has closed it) will result in a
SIGPIPE being delivered to the writer. If the other end can't handle
that, be sure to read all the data before closing the pipe.
but is there a workaround? Specific example:
$| = 1;
open(A, "curl -sN http://test.barrycarter.info/bc-slow-cgi.pl|");
while (<A>) {
    print "THUNK: $_\n";
    if (/5$/) { last; }
}
print "LOOP EXIT\n";
close(A);
print "A CLOSED\n";
bc-slow-cgi.pl just prints time() once per second forever: the above
code prints "LOOP EXIT", but never "A CLOSED".
close on a handle created by open '-|' waits for the child to end. It seems to me that the child should die from a PIPE signal or an error the next time it attempts to write after you call close, but you could kill the child if you don't want to wait that long.
my $pid = open(...);
while (...) {
    ...
}
kill PIPE => $pid;
close(...);
PIPE is a bit unorthodox, but it seemed appropriate here. Feel free to send TERM or whatever.
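Applied to the curl example from the question, a sketch (mine, untested) would be:
$| = 1;

# open() with a trailing | returns the pid of the child process.
my $pid = open( A, "curl -sN http://test.barrycarter.info/bc-slow-cgi.pl|" );
while (<A>) {
    print "THUNK: $_\n";
    if (/5$/) { last; }
}
print "LOOP EXIT\n";

# Kill the writer first so close() doesn't block waiting for it to finish.
kill PIPE => $pid;
close(A);
print "A CLOSED\n";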