Kill a hung child process - perl

My Perl script runs an external program (which takes a single command-line parameter) and processes its output. Originally, I was doing this:
my @result = `prog arg`;
However, it turns out that the program is buggy and hangs unpredictably in rare cases. How can I kill the program if it hasn't exited after a certain amount of time? The script has to work both on Windows and on Linux, and it is my understanding that alarms and forks don't work well (or at all) on Windows.
I found a module called IPC::Run but I can't figure out how to use it properly from its documentation. :-( I tried this:
use strict;
use warnings;
use IPC::Run qw(run timeout);
my $in;
my $out;
my $err;
my @result;
my @cmd = qw(prog arg);
run \@cmd, \$in, \$out, \$err, timeout(10) or die "@cmd: $?";
push @result, $_ while (<$out>);
close $out;
print @result;
As a test, I created a program that just sleeps 60 seconds, prints a string to stdout and exits. When I try to run it with the above code, it hangs for 60 seconds (instead of for 10 seconds, as specified in the timeout) and aborts with a bizarre error:
IPC::Run: timeout on timer #1 at C:/Bin/Unix/Perl/site/lib/IPC/Run.pm line 2956
Then I found another module, Proc::Reliable. From the description, it seems to do precisely what I want. Except that it doesn't work! I tried this:
use strict;
use warnings;
use Proc::Reliable;
my $proc = Proc::Reliable->new ();
$proc->maxtime (10);
my $out = $proc->run ("prog arg");
print "$out\n";
It indeed aborts the child process after 10 seconds. So far, so good. But then I modified the external program and made it sleep for only 5 seconds. This means that the program should finish before the 10-second timeout specified in the above code and its stdout output should be captured into the variable $out. But it isn't! The above script doesn't output anything.
Any ideas how to do it properly? (Fixing the buggy external program is not an option.) Thanks in advance.

Try the poor man's alarm
my $pid;
if ($^O eq 'MSWin32') {
$pid = system 1, "prog arg"; # Win32 only, run proc in background
} else {
$pid = fork();
if (defined($pid) && $pid == 0) {
exec("prog arg");
}
}
my $poor_mans_alarm = "sleep 1,kill(0,$pid)||exit for 1..$TIMEOUT;kill -9,$pid";
system($^X, "-e", $poor_mans_alarm);
The poor man's alarm runs in a separate process. Every second, it checks whether the process with identifier $pid is still alive. If it isn't, the alarm process exits. If the process is still alive after $TIMEOUT seconds, the alarm process sends it a kill signal (I used 9 to make it untrappable and -9 to take out the whole subprocess tree; your needs may vary, and kill 9,... is also portable).
Edit: How do you capture the output of the process with the poor man's alarm?
Not with backticks -- then you can't get the process id and you may lose the intermediate output if the process times out and gets killed. The alternatives are
1) send output to a file, read the file when the process is done
$pid = system 1, "prog arg > some_file";
... start poor man's alarm, wait for program to finish ...
open my $fh, '<', 'some_file';
my @process_output = <$fh>;
...
2) use Perl's open to start the process
$pid = open my $proc, '-|', 'prog arg';
if (fork() == 0) {
# run poor man's alarm in a background process
exec($^X, '-e', "sleep 1,kill 0,$pid||exit ...");
}
my @process_output = ();
while (<$proc>) {
push @process_output, $_;
}
The while loop will end when the process ends, either naturally or unnaturally.

This is the best I could do. Any ideas on how to avoid the use of a temporary file on Windows would be appreciated.
#!/usr/bin/perl
use strict;
use warnings;
use File::Temp;
use Win32::Process qw(STILL_ACTIVE NORMAL_PRIORITY_CLASS);
my $pid;
my $timeout = 10;
my $prog = "prog arg";
my @output;
if ($^O eq "MSWin32")
{
my $exitcode;
my $fh = File::Temp->new ();
my $output_file = $fh->filename;
close ($fh);
open (OLDOUT, ">&STDOUT");
open (STDOUT, ">$output_file" ) || die ("Unable to redirect STDOUT to $output_file.\n");
Win32::Process::Create ($pid, $^X, $prog, 1, NORMAL_PRIORITY_CLASS, '.') or die Win32::FormatMessage (Win32::GetLastError ());
for (1 .. $timeout)
{
$pid->GetExitCode ($exitcode);
last if ($exitcode != STILL_ACTIVE);
sleep 1;
}
$pid->GetExitCode ($exitcode);
$pid->Kill (0) or die "Cannot kill '$pid'" if ($exitcode == STILL_ACTIVE);
close (STDOUT);
open (STDOUT, ">&OLDOUT");
close (OLDOUT);
open (FILE, "<$output_file");
push @output, $_ while (<FILE>);
close (FILE);
}
else
{
$pid = open my $proc, "-|", $prog;
exec ($^X, "-e", "sleep 1, kill (0, $pid) || exit for 1..$timeout; kill -9, $pid") unless (fork ());
push @output, $_ while (<$proc>);
close ($proc);
}
print "Output:\n";
print @output;

You may want to use the alarm system call, as described in perldoc -f alarm.

Related

Process hanging -SIGALRM not delivered- Perl

I have a command that I'm executing using open with a pipe, and I want to set a timeout of 10 seconds and have the subprocess aborted if the execution time exceeds this. However, my code just causes the program to hang. Why is my alarm not getting delivered properly?
my $pid = 0;
my $cmd = "someCommand";
print "Running Command # $num";
eval {
local $SIG{ALRM} = sub {
print "alarm \n";
kill 9, $pid;
};
alarm 10;
$pid = open(my $fh, "$cmd|");
alarm 0;
};
if($@) {
die unless $@ eq "alarm \n";
} else {
print $_ while(<$fh>);
}
EDIT:
So, from the answers below, this is what I have:
my $pid = open(my $fh, qq(perl -e 'alarm 10; exec \@ARGV; die "exec: $!\n" ' $cmd |));
print $_ while(<$fh>);
But this prints "Alarm clock" to the console when the alarm times out, whereas I don't specify this anywhere in the code. How can I get rid of it, and where would I put the custom alarm handler?
Thanks!
I want to set a timeout of 10 seconds and have the subprocess aborted if the execution time exceeds this
A different approach is to set the alarm on the subprocess itself, with a handy scripting language you already have:
my $cmd = "someCommand";
my $pid = open(my $child_stdout, '-|',
'perl', '-e', 'alarm 10; exec @ARGV; die "exec: $!"', $cmd);
...
Your child process will initially be perl (well, the shell and then perl), which will set an alarm on itself and then exec (replace itself with) $cmd. Pending alarms, however, are inherited across exec()s.
All your code is doing is setting a 10 second timeout on the open call, not on the whole external program. You want to bring the rest of your interaction with the external command into the eval block:
eval {
local $SIG{ALRM} = sub {
print "alarm \n";
kill 9, $pid;
};
alarm 10;
$pid = open(my $fh, "$cmd|");
print while <$fh>;
close $fh;
alarm 0;
};
if($@) {
die unless $@ eq "alarm \n";
}

perl background process

I am trying to run a background process in Perl. I create a child process, which is used to call another Perl script. I want to run a few lines of code in parallel with this child process, and after the child process is done, I want to print a line of text.
Main script
#!/usr/bin/perl
$|=1;
print "before the child process\n";
my $pid = fork();
if (defined $pid)
{
system("perl testing.pl");
}
print "before wait command\n";
wait();
print "after 20 secs of waiting\n";
testing.pl
#!/usr/bin/perl
print "inside testing\n";
sleep(20);
Expected output
before the child process
before wait command
(should wait for 20 secs and then print)
after 20 secs of waiting
There are many problems with your script. Always:
use strict;
use warnings;
Localizing special variables is good practice. Only a variable containing the special value undef returns false for defined, so every other value (even 0, which fork returns in the child here) returns true for defined. Also, the shebang in the other script is wrong.
#!/usr/bin/perl
use strict;
use warnings;
local $| = 1;
print "Before the child process\n";
unless (fork) {
system("perl testing.pl");
exit;
}
print "Before wait command\n";
wait;
print "After 20 secs of waiting\n";
The “Background Processes” section of the perlipc documentation reads
You can run a command in the background with:
system("cmd &");
The command’s STDOUT and STDERR (and possibly STDIN, depending on your shell) will be the same as the parent’s. You won't need to catch SIGCHLD because of the double-fork taking place; see below for details.
Adding an ampersand to the argument of system can vastly simplify your main program.
#! /usr/bin/env perl
print "before the child process\n";
system("perl testing.pl &") == 0
or die "$0: perl exited " . ($? >> 8);
print "before wait command\n";
wait;
die "$0: wait: $!" if $? == -1;
print "after 20 secs of waiting\n";
fork return value handling is a bit tricky, indeed.
A recent article by Aristotle features a nice and concise forking idiom, which, in your case, looks like:
#!/usr/bin/env perl
use 5.010000;
use strict;
use warnings qw(all);
say 'before the child process';
given (fork) {
when (undef) { die "couldn't fork: $!" }
when (0) {
exec $^X => 'testing.pl';
} default {
my $pid = $_;
say 'before wait command';
waitpid $pid, 0;
say 'after 20 secs of waiting';
}
}
Pay attention to the exec $^X => '...' line: the $^X variable holds the full path to the current Perl executable, so the "right Perl version" is guaranteed. Also, calling system is pointless when you have already forked; exec avoids the extra process.

Why doesn't die in alarm signal handler kill the process?

From How can I specify timeout limit for Perl system call?
eval {
local $SIG{ALRM} = sub { die "alarm\n" }; # NB: \n required
alarm $timeout;
$nread = sysread SOCKET, $buffer, $size;
alarm 0;
};
if ($@) {
die unless $@ eq "alarm\n"; # propagate unexpected errors
# timed out
}
else {
# didn't
}
If a timeout happens, shouldn't sub { die "alarm\n" } cause the end of the process? I guess I am not able to understand die. This http://www.cs.cf.ac.uk/Dave/PERL/node111.html says that "The die() function is used to quit your script and display a message for the user to read". However, in the case of the script above, the script goes on to run the code at # timed out. Also, sysread continues to work. Instead of sysread, I used a Perl script that slept for 30 seconds. My timeout was set to 10 seconds. As expected, the code at # timed out was executed, but the script continued to sleep. Any inputs appreciated.
die doesn't cause the end of a process, it throws an exception.
Now, if nothing catches an exception, that ends a process, but you have code in place to catch this very exception.
The process doesn't end because you explicitly prevent it from ending.
Since you're not very clear on what behaviour you are getting, there could be another possibility: That you are using a Windows build of Perl.
alarm is a Unix system call. Its very purpose (sending a signal after a certain amount of time has passed) makes no sense on Windows, since Windows doesn't have signals.
Perl emulates alarm to some extent, but only in a very limited manner; sleep could very well be the only operation that's interruptible by alarm. Otherwise, the timeout is only checked between statements.
So it won't interrupt sysread, but once sysread returns, Perl notices the timeout has expired and emulates a signal then.
From man alarm
alarm() arranges for a SIGALRM signal to be delivered to the calling process in seconds seconds.
Before SIGALRM is delivered, execution reaches the else block. Insert a read from STDIN before sysread so that SIGALRM fires, producing the expected results.
"Instead of sysread, I had a perl script that slept for 30 seconds. My
timeout was set to 10 seconds. As expected, the code in #timed out is
executed but the script continued to sleep."
Really?
#!/usr/bin/perl
use strict;
use warnings FATAL => qw(all);
eval {
open my $fh, '<', $0 or die "open: $!";
local $SIG{ALRM} = sub {
print STDERR "hello!\n";
die "bye!";
};
alarm 3;
while (<$fh>) {
print $_;
sleep 1;
}
close $fh;
};
if ($@) {
print "HERE: $@\n";
}
The output:
#!/usr/bin/perl
use strict;
use warnings FATAL => qw(all);
hello!
HERE: bye! at ./test.pl line 9, <$fh> line 3.
Over in the expected 3 seconds; this is still the case if I just use "sleep 100" instead of the file read. Note that if you spawn a subprocess, alarm will not kill that and the parent process must wait. In that case, the "hello!" in the signal handler will appear when alarm fires, but the eval which catches the die will not complete until the subprocess does.
I had the same issue when porting a Linux Perl script to Windows.
I solved it by ...
Creating a non-blocking socket
$recsock = IO::Socket::INET->new(
LocalPort => 68,
Proto => "udp",
Broadcast => 1,
Blocking => 0,
) or die "socket: $@";
Adding $continue variable to the timeout handle
# Timeout handle
$SIG{ALRM} = sub {
print "timeout\n";
$continue = 1;
};
and checking for $continue to become true when the timeout occurs:
alarm($timeout);
while(1){
$recsock->recv($newmsg, 1024);
eval {
$packet = Net::Package->new($newmsg);
...
};
sleep 0.1; # note: the core sleep truncates to whole seconds; use Time::HiRes qw(sleep) for fractional sleeps
last if ($continue);
}
alarm(0);

How to run in parallel two command from a parent one?

I have two pieces of code:
1.
use File::Temp qw(tempfile);
$tmp = new File::Temp( UNLINK => 0 );
system("tv_grab_au | tv_sort >> $file");
system("cp $file $HOME/.xmltv/listings.xml");
unlink($file);
2.
while (-e $file) {
sleep 2;
system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
}
I would like to combine these two pieces of code: run the tv_grab_au xmltv grabber (to update the TV guide) and, simultaneously, send a command to tvtime to display the message 'Updating TV Guide. Please wait, this might take several minutes...' every two seconds, for as long as $file exists.
I try this one:
use strict;
use warnings;
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
use POSIX qw(:sys_wait_h);
$|++;
defined(my $pid = fork) or die "Couldn't fork: $!";
if (!$pid) {
system("tv_grab_huro | tv_sort >> $file");
unlink($file);
}
else {
while (! waitpid($pid, WNOHANG)) {
system("tvtime-command DISPLAY_MESSAGE \'Updating TV Guide. Please wait this might take a several minutes...\'");
sleep 2;
}
}
Thanks.
The builtin fork function creates a copy of your current program in a new background process. The original process and the "child" process will then run at the same time. So you can do something like:
use File::Temp qw(tempfile);
my $file = new File::Temp( UNLINK => 0 );
my $new_pid = fork();
die "fork failed $!" unless defined $new_pid; # this is uncommon
# Task 1 - in the background
if ($new_pid == 0) {
system("tv_grab_au | tv_sort >> $file");
system("cp $file $HOME/.xmltv/listings.xml");
unlink($file);
exit; # don't forget this part!
}
# Task 2 - in the foreground
while (-e $file) {
print "...";
sleep 2;
}
Using $file as an indicator of when the first task has finished has some drawbacks. What if the child code has some runtime error? What if the child process gets interrupted? The child process could exit before it gets a chance to delete $file. Then your while loop in the parent process would never end.
The builtin waitpid command can check if a child process is still running, and can handle the case where the child terminates abnormally.
# Task 2
use POSIX ':sys_wait_h';
while (! waitpid $new_pid, &WNOHANG) { # WNOHANG => non-blocking wait
print "...";
sleep 2;
}
Use fork(). I've added extra sleep() calls so you can see that the processes both run and work. In practice, the crontab update will probably run fast enough that the monitor loop doesn't run at all, or only runs once. I used "unless(...)" because it seems to me to make the code clearer; the thing to remember is that fork() returns the pid to the parent, and zero to the child. The process that doesn't see the pid is therefore a subprocess. (As has been pointed out, if the fork fails, the fork will return undef, and the code will be executing in the original process. In our case, that will simply mean that the monitoring starts up after the writing finishes, so the only thing we lose is the monitoring.)
my $file = "/tmp/.$$.crontab.txt";
my $crontab = <<EOS;
# Crontab lines here. Inserted at @{[scalar localtime()]}
EOS
my ($writer_pid, $monitor_pid);
$|++;
# Open file BEFORE launching processes. The monitor depends on the file's
# presence or absence, so if we opened it in the writer process, there'd be a
# chance the monitor process would check before we created it, and exit without
# monitoring.
die "Cannot open temp file\n" unless open(WRITE, ">" . $file);
# Crontab file handle gets passed to the forked process, so we can just use it.
# Altered so we can see the process do its thing.
unless ($writer_pid = fork()) {
print WRITE $crontab."\n";
close WRITE;
print("crontab -l |grep -v backup >> $file");
sleep 20;
print("crontab $file");
sleep 10;
unlink($file);
print "done!\n";
exit;
}
# Either file will exist, or the previous process will
# have completed. If it exists, we monitor. If not,
# we exit immediately.
unless ($monitor_pid = fork()) {
# Child: monitor the writer.
my $waitcount = 1;
while ( -e $file ) {
sleep 2;
print "($waitcount) installing crontab...";
$waitcount++;
}
print "installed\n";
exit;
}
waitpid($monitor_pid, 0);
waitpid($writer_pid,0);
print "both processes done\n";

How can I terminate a system command with alarm in Perl?

I am running the below code snippet on Windows. The server starts listening continuously after reading from the client. I want to terminate this command after a period of time.
If I use the alarm() function call within main.pl, it terminates the whole Perl program (here, main.pl), so I placed the system command in a separate Perl file (alarm.pl) and call that file from the original one using system().
But this way I was unable to capture the output of the system() call in either the original Perl file or the called one.
Could anybody please let me know how to terminate a system() call, or how to capture its output in the arrangement I used above?
main.pl
my @output = system("alarm.pl");
print"one iperf completed\n";
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
alarm.pl
alarm 30;
my @output_1 = readpipe("adb shell cd /data/app; ./iperf -u -s -p 5001");
open FILE, ">display.txt" or die $!;
print FILE @output_1;
close FILE;
In both ways display.txt is always empty.
There are a few separate issues here.
First, to keep the alarm from killing your script, you need to handle the ALRM signal. See the alarm documentation. You shouldn't need two scripts for this.
Second, system doesn't capture output. You need one of the backtick variants or a pipe if you want to do that. There are answers for that on Stackoverflow already.
Third, if alarm.pl puts anything in display.txt, you discard it in main.pl when you re-open the file in write mode. You only need to create the file in one place. When you get rid of the extra script, you won't have this problem.
I recently had some problems with alarm and system, but switching to IPC::System::Simple fixed that.
Good luck, :)
What the hell was I thinking? You don't need a background process for this task. You just need to follow the example in perldoc -f alarm and wrap your time-sensitive code in an eval block.
my $command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
my @output;
eval {
local $SIG{ALRM} = sub { die "Timeout\n" };
alarm 30;
@output = `$command`;
alarm 0;
};
if ($@) {
warn "$command timed out.\n";
} else {
print "$command successful. Output was:\n", @output;
}
Inside the eval block, you can capture your output the regular way (with backticks or qx() or readpipe). Though if the call times out, there won't be any output.
If you don't need the output (or don't mind hacking some interprocess communication together), an almost idiot-proof alternative is to set the alarm and run the system call in a child process.
$command = "adb shell cd /data/app; ./iperf -u -s -p 5001";
if (($pid = fork()) == 0) {
# child process
$SIG{ALRM} = sub { die "Timeout\n" }; # handling SIGALRM in child is optional
alarm 30;
my $c = system($command);
alarm 0;
exit $c >> 8; # if you want to capture the exit status
}
# parent
waitpid $pid, 0;
waitpid will return when either the child's system command is finished, or when the child's alarm goes off and kills the child. $? will hold the exit code of the system call, or something else (142 on my system) for an unhandled SIGALRM or 255 if your SIGALRM handler calls die.
I ran into a similar problem that requires:
run a system command and capture its output
time out the system command after x seconds
kill the system command process and all child processes
After much reading about Perl IPC and manual fork & exec, I came up with this solution. It is implemented as a simulated 'backtick' subroutine.
use Error qw(:try);
$SIG{ALRM} = sub {
my $sig_name = shift;
die "Timeout by signal [$sig_name]\n";
};
# example
my $command = "vmstat 1 1000000";
my $output = backtick(
command => $command,
timeout => 60,
verbose => 0
);
sub backtick {
my %arg = (
command => undef,
timeout => 900,
verbose => 1,
@_,
);
my @output;
defined( my $pid = open( KID, "-|" ) )
or die "Can't fork: $!\n";
if ($pid) {
# parent
# print "parent: child pid [$pid]\n" if $arg{verbose};
try {
alarm( $arg{timeout} );
while (<KID>) {
chomp;
push @output, $_;
}
alarm(0);
}
catch Error with {
my $err = shift;
print $err->{-text} . "\n";
print "Killing child process [$pid] ...\n" if $arg{verbose};
kill -9, $pid;
print "Killed\n" if $arg{verbose};
alarm(0);
}
finally {};
}
else {
# child
# set the child process to be a group leader, so that
# kill -9 will kill it and all its descendents
setpgrp( 0, 0 );
# print "child: pid [$pid]\n" if $arg{verbose};
exec $arg{command};
exit;
}
wantarray ? @output : join( "\n", @output );
}
You might wrap your commands with timeout <n> <command> if that utility is already common on your system.
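For example, with GNU coreutils timeout (assuming it is installed; the durations here are arbitrary):

```shell
timeout 1 sleep 5          # kill sleep after 1 second
echo "exit status: $?"     # GNU timeout exits with 124 when it had to kill the command
```

This sidesteps Perl signal handling entirely, at the cost of requiring the external utility.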