Killing an application started using system() in Perl - perl

I am trying to run an application inside a Perl script using system(). The application I'm running gets stuck sometimes (it enters some kind of infinite loop). Is there a way I can know if this application is stuck and kill it to continue with the Perl script?
I'm trying to do something like this:
start testapp.exe;
if(stuck with testapp.exe) {
kill testapp.exe;
}

Determining whether "it is stuck in an infinite loop" is known as the Halting Problem and is undecidable in general.
If you want to kill it, you will have to start the application in a child process using fork and then kill it from the parent if it runs for too long.
You can check whether the process has been running for too long like this:
use POSIX ":sys_wait_h";
waitpid($pid, WNOHANG) > 0  #waitpid returns 0 if it is still running
at least, according to this manual page
I am not sure how well it works on various systems; you can try it out.
Not a direct answer, but I can recommend the forks module if you want to fork with ease; note that it works only on UNIX-like systems (not Windows).
OK, here is some more helpful code :) It works on UNIX; according to the perlfork perldoc, it should work on Windows in exactly the same way.
use warnings;
use strict;
use POSIX ":sys_wait_h";

my $exited_cleanly;                 #to this variable I will save the info about exiting
my $pid = fork;
if (!$pid) {
    system("anything_long.exe");    #your long program
    exit;                           #the child must exit here, or it would keep running the parent's code below
} else {
    sleep 10;                       #wait 10 seconds (can be longer)
    my $result = waitpid($pid, WNOHANG);    #here will be the result
    if ($result == 0) {             #the program is still running
        $exited_cleanly = 0;        #I already know I had to kill it
        kill('TERM', $pid);         #kill it with TERM ("cleaner") first
        sleep(1);                   #wait a bit to see if it ends
        my $result_term = waitpid($pid, WNOHANG);   #did it end?
        if ($result_term == 0) {    #if it still didn't...
            kill('KILL', $pid);     #kill it with full force!
        }
    } else {
        $exited_cleanly = 1;        #it exited cleanly
    }
}
#you can now say something to the user, for example
if (!$exited_cleanly) {...}

system("start testapp")
is short for
system("cmd", "/c", "start testapp")
Perl just knows about cmd; it doesn't know anything about start, much less about testapp. system is not the tool you want. That's the first problem.
The second problem is that you haven't defined what it means to be "stuck". If you want to monitor a program, it needs a heartbeat. A heartbeat is a periodic activity that can be externally examined. It can be writing to a pipe. It can be changing a file. Anything.
The monitoring program listens for this heartbeat, and presumes the program is dead if the heart stops beating, so to speak.
"Killing" is done using signals in unix, but it's done using TerminateProcess in Windows. The third problem is that Perl core does not give you access to that function.
The solution to the first and third problem is Win32::Process. It allows you to launch a process in the background, and it also allows you to terminate it.
Creating a heartbeat is up to you.

Here is one way you can handle the problem: if you know that testapp should not take more than N seconds to do its thing, you can use a timeout to kill the app by way of IPC::Run.
In the example below, a timeout of 1 second kills the sleep 10 command, which takes longer than the timeout allows. If this doesn't do what you want, you should provide more information about how you can detect that testapp.exe is "stuck".
#!/usr/bin/env perl
use strict;
use warnings;
use IPC::Run qw( run timeout );

my ($in, $out, $err);
eval { # if (stuck with testapp.exe for more than N seconds)
    my @cmd = ('sleep', '10'); # this could be testapp.exe instead of sleep
    run \@cmd, \$in, \$out, \$err, timeout( 1 ) or die "test"; # start testapp.exe
    print "do stuff if cmd succeeds\n";
};
print "more stuff to do afterwards whether or not command fails or succeeds\n";

You can't determine that the application is stuck if you execute it like that, because the system statement won't return until the application terminates.
So, at least, you need to start the test application so it can run asynchronously from the Perl script that is to monitor it.
Having resolved that part of the problem, you have to establish a mechanism that will allow the monitoring Perl script to determine that the application is stuck. That is a non-trivial exercise, and likely system dependent, unless you adopt a simple expedient such as requiring the application to write a heart-beat indication somewhere, and the Perl script monitors for the heart-beat. For example (not necessarily a good example), the application could write the current time into a file identified by its PID, and the Perl script could monitor the file to see if the heart-beat is sufficiently recent. Of course, this assumes that the 'infinite loop' doesn't include code that writes to the heart-beat file.
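As a rough sketch of the heartbeat idea just described (the file name and the staleness threshold here are made-up examples, not anything from the original answer), the monitoring script could check the heartbeat file's modification time:

```perl
use strict;
use warnings;

# Hypothetical heartbeat file the monitored application is assumed to
# rewrite periodically; 30 seconds of silence is an arbitrary threshold.
my $heartbeat_file = "heartbeat.$$.txt";
my $max_age        = 30;

sub looks_stuck {
    my ($file, $limit) = @_;
    return 1 unless -e $file;             # no heartbeat file at all
    my $age = time() - (stat $file)[9];   # element [9] of stat is mtime
    return $age > $limit ? 1 : 0;         # stale heartbeat => presumed stuck
}

# Pretend the monitored application just wrote its heartbeat:
open my $fh, '>', $heartbeat_file or die "cannot write heartbeat: $!";
print {$fh} time(), "\n";
close $fh;

print looks_stuck($heartbeat_file, $max_age) ? "stuck\n" : "alive\n";
unlink $heartbeat_file;
```

As the answer notes, this only helps if the application's infinite loop does not also keep touching the heartbeat file.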

Related

A daemon to tail a log and fork multiple external (perl) script

I'm trying to write a program, actually a daemon, which stays in memory and performs something like tail -F on a rapidly updated log file. When the program detects a new line in the file, it has to launch another Perl script which will perform some operations on the log line and then send it with a POST.
To explain clearly, I will refer to these two programs as "prgTAIL" and "prgPROCESS". So, prgTAIL tails the log and launches prgPROCESS, passing the new line to it.
Obviously prgTAIL doesn't have to wait for prgPROCESS to finish, because prgTAIL has to stay in memory and keep detecting new lines in the log. Also, the rate of file updates requires launching multiple parallel prgPROCESS instances. For this reason I'm using two programs: the first is small and fast and just passes the data to the second, which may be heavier because it can be launched in multiple instances.
In prgTAIL I used:
a pipe to tail the log file
a while loop to launch prgPROCESS on each new log line
a fork() to continue without waiting for prgPROCESS to end
my $log_csv = "/log/csv.csv";
open (my $pipe, "-|", "tail", "-n0", "-F", $log_csv) or die "error";
while (<$pipe>) {
    my $line = $_;
    my $pid = fork();
    if (defined $pid && $pid == 0) {
        exec("/bin/prgPROCESS ".$line);  # I tried system() too.
        exit 0;
    }
}
The prgPROCESS operation are not so important; anyway.. it parses the $line passed as arguments, construct an XML and then post it via https.
So, this stuff actually runs, but I think I messed up something with the processes, because after around 550 new lines and prgPROCESS calls, prgTAIL keeps running but it can't call prgPROCESS anymore because there are too many processes. I get this error in bash:
-bash: fork: Resource temporarily unavailable
What's wrong? Any idea? Maybe the prgPROCESS processes don't end and stay stuck, leaving no room for other processes?
PS: I'm using a Mac OS X now, but this will run on Linux.
Your problem is this:
while () {
doesn't have any constraint condition, so it's just spinning as fast as it can. You're never actually reading from your pipe, you're just forking as fast as you can and spawning that new script.
You might be wanting:
while ( my $line = <$pipe> ) {
#....
}
But really, it's arguable that you don't need to fork at all, because a read/process/read loop would probably do just fine; fork() and exec() is basically what system already does anyway.
You should also, if forking, clean up your child processes. It doesn't matter much for short-running programs, but anything that sits in a loop will accumulate a lot of zombie processes. Reap them either by setting $SIG{CHLD} or by calling waitpid.
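A minimal reaper along those lines might look like this (a Unix sketch; the fork at the end is only there to demonstrate that the handler reaps the child):

```perl
use strict;
use warnings;
use POSIX ":sys_wait_h";

# Reap finished children without blocking the main loop. We loop,
# because a single SIGCHLD can stand for several exited children.
$SIG{CHLD} = sub {
    while ((my $kid = waitpid(-1, WNOHANG)) > 0) {
        # $kid has been reaped; its exit status is in $?
    }
};

# Demonstration: fork a short-lived child and let the handler reap it.
my $pid = fork();
die "fork failed: $!" unless defined $pid;
exit 0 if $pid == 0;    # child exits immediately
sleep 1;                # give SIGCHLD a chance to be delivered
# The child has already been reaped, so waitpid now returns -1.
print waitpid($pid, WNOHANG) == -1 ? "reaped\n" : "not reaped\n";
```

If you never need the children's exit status, `$SIG{CHLD} = 'IGNORE';` is the even simpler alternative.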

Perl: Move to next item in loop before it is done executing

I have a perl script that is like so:
foreach my $addr ('http://site1.com', ...., 'http://site2.com') {
my $script = `curl -m 15 $addr`;
*do stuff with $script*
}
The -m sets a timeout of 15 seconds. Is there a way to make it so that if the user presses a key, the script stops the current execution and moves on to the next item in the foreach? I know last; can move to the next item, but I am unsure how to tie that to a key press, and how to do it while curl is running.
Edit: Based on the answers, it seems difficult to do this while curl is running. Would it be possible to press a key while curl is running and have the loop skip to the next item as soon as the curl command returns (or times out after 15 sec)?
The problem you've got with this is that when you run curl, Perl hands over control and waits for completion. It blocks until it's 'done'.
So it's not as easy to do this as it might seem.
As another poster alludes to - you can use a variety of parallel processing options. I would suggest the easiest is to move away from using 'any' key, and require a ctrl-c.
So you'd then do:
foreach my $addr ('http://site1.com', ...., 'http://site2.com') {
    my $pid = open ( my $curl_fh, "-|", "curl -m 15 $addr" );
    $SIG{'INT'} = sub { print "Aborting fetch of $addr"; kill 'INT', $pid };
    while ( <$curl_fh> ) {
        print;
    }
    #might want to set it to something else.
    #'DEFAULT' means 'ctrl-c' will abort the whole program.
    #'IGNORE' means exactly what it says on the tin.
    #important to change it though, as the handler kills a specific pid,
    #and that might cause problems.
    $SIG{'INT'} = 'DEFAULT';
}
What this does is configure SIGINT (e.g. ctrl-c) so it doesn't kill your program, but does kill the sub-process.
If you wanted to look at other options, I'd offer:
Multithreading, spawn a thread to 'do' the curl fetching in the background and use Thread::Queue to pass results back and forth. (Thread::Queue supports nonblocking checks).
Forking - fork a sub process to do the curl, and use your 'main' process to send a signal if a key is pressed.
IO::Select such that you're not making blocking reads on your process.
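The IO::Select route from that list could look roughly like this (a Unix sketch; the shell command is a stand-in for curl, the 15-second can_read timeout mirrors curl's -m 15, and because the terminal is line-buffered the "key press" needs Enter after it):

```perl
use strict;
use warnings;
use IO::Select;

# Stand-in for "curl -m 15 $addr"; prints one line after a short delay.
my $pid = open(my $cmd_fh, '-|', 'sleep 1; echo done')
    or die "cannot start command: $!";

my $sel     = IO::Select->new($cmd_fh, \*STDIN);
my $aborted = 0;

OUTER: while (my @ready = $sel->can_read(15)) {
    for my $fh (@ready) {
        if (fileno($fh) == fileno(STDIN)) {
            if (eof(STDIN)) {            # STDIN closed (e.g. no terminal)
                $sel->remove($fh);
                next;
            }
            my $ignored = <STDIN>;       # user pressed a key + Enter
            kill 'TERM', $pid;           # abort this fetch
            $aborted = 1;
            last OUTER;
        }
        my $line = <$cmd_fh>;
        last OUTER unless defined $line; # command finished
        print $line;
    }
}
close $cmd_fh;
print $aborted ? "aborted\n" : "finished\n";
```

In the real loop you would put this inside the foreach over addresses, setting $aborted per iteration.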
Basically you have two options:
1. Use threads
Create a new thread, call desired system function there. Wait for output. In another thread, check for user input. On input, you can kill the child process. When child process has finished, you can ignore user input.
Such a solution seems to be rather complex, with a lot of synchronization needed, probably with using signals. Risky.
2. Use non-blocking IO
Please read this thread. It explains how to make non-blocking IO reads from either a file or a pipe. You'd like to make a non-blocking read from pipe (created with open), then non-blocking read from STDIN, loop.
Seems like a way to go, but, alas, rather complex as well.

signal a perl process from an independent perl process to trigger code in handler

I am using perl v14 on Windows. I have 2 simple files:
$SIG{'INT'} = sub {
    print "got SIGINT\n";
    #some useful code to be executed
    #on reception of signal
};
$SIG{'ALRM'} = sub {
    print "got SIGALRM\n";
};
print "my pid: ", $$, "\n";
while (1)
{
    print "part 1\n";
    sleep(3);
    print "part 2\n\n";
    sleep(3);
}
The above script starts and, having printed its pid, waits to be killed.
The second script simply sends a signal to the first Perl process using its pid (set manually).
$pid = xxxx; #this is the manually entered pid of the first process
print "will attempt to kill process: $pid\n";
kill INT, $pid;
When I run the first script and press Ctrl-C, the handler works as expected, but using the second script I can't get the same result. I have also tried other signals like ALRM, HUP, TERM, and FPE, but with no success. All I want is to execute the code in the signal handler.
I found something called the INT2 signal for Win32.
Thanks in advance.
Windows lets you use signals only within the same thread, so signaling a different process will not work.
Instead of signals you could use other methods of interprocess communication like sockets, pipes or files.
From perlwin32:
Signal handling may not behave as on Unix platforms (where it doesn't exactly "behave", either :). For instance, calling die() or exit() from signal handlers will cause an exception, since most implementations of signal() on Windows are severely crippled. Thus, signals may work only for simple things like setting a flag variable in the handler. Using signals under this port should currently be considered unsupported.
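One portable stand-in for the signal, along the lines of the files suggestion above, is a flag file that both scripts agree on (the file name here is a made-up convention, not anything from the original answer):

```perl
use strict;
use warnings;

# Hypothetical flag file both scripts would have to agree on.
my $flag = "trigger.flag";

# In the second ("signaling") script, instead of kill:
sub send_trigger {
    open my $fh, '>', $flag or die "cannot create flag file: $!";
    close $fh;
}

# In the first script's main loop, polled instead of a signal handler:
sub check_trigger {
    return 0 unless -e $flag;
    unlink $flag;    # consume the trigger
    return 1;        # caller now runs what the handler would have done
}

send_trigger();
print check_trigger() ? "triggered\n" : "no trigger\n";
print check_trigger() ? "triggered\n" : "no trigger\n";   # already consumed
```

The first script would call check_trigger() once per loop iteration, so the reaction is only as prompt as the loop's sleep interval.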

Terminating a system() after certain amount of time in Windows

I'm running a command-line application from within a Perl script (using system()) that sometimes doesn't return; to be precise, it throws an exception which requires user input to abort the application. The script is used for automated testing of the application I'm running with the system() command. Since it is part of automated testing, the system() command has to return if the exception occurs, and the test is to be considered failed.
I want to write a piece of code that runs this application, and if the exception occurs, continues with the script, considering the test to have failed.
One way to do this is to run the application for a certain period of time, and if the system call doesn't return in that period, terminate the system() call and continue with the script.
(How can I terminate a system command with alarm in Perl?)
Code for achieving this:
my @output;
eval {
    local $SIG{ALRM} = sub { die "Timeout\n" };
    alarm 60;
    my $return = system("testapp.exe");
    alarm 0;
};
if ($@) {
    print "Test Failed";
} else {
    #compare the returned value with expected
}
But this code doesn't work on Windows. I did some research and found out that %SIG handling doesn't work reliably on Windows (per the book Programming Perl).
Could someone suggest how I could achieve this on Windows?
I would recommend looking at the Win32::Process module. It allows you to start a process, wait on it for some variable amount of time, and even kill it if necessary. Based on the example the documentation provides, it looks quite easy:
use Win32::Process;
use Win32;

sub ErrorReport {
    print Win32::FormatMessage( Win32::GetLastError() );
}

my $ProcessObj;
Win32::Process::Create($ProcessObj,
                       "C:\\path\\to\\testapp.exe",
                       "",
                       0,
                       NORMAL_PRIORITY_CLASS,
                       ".") || die ErrorReport();

if ($ProcessObj->Wait(60000))   # Timeout is in milliseconds
{
    # Wait succeeded (process completed within the timeout value)
}
else
{
    # Timeout expired. $! is set to WAIT_FAILED in this case
    $ProcessObj->Kill(1);       # terminate it, with 1 as the exit code
}
You could also sleep for the appropriate number of seconds and use the kill method in this module. I'm not exactly sure if the NORMAL_PRIORITY_CLASS creation flag is the one you want to use; the documentation for this module is pretty bad. I see some examples using the DETACHED_PROCESS flag. You'll have to play around with that part to see what works.
See Proc::Background; it abstracts the code for both Win32 and Linux. The function is timeout_system( $seconds, $command, $arg, $arg, $arg ).
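A minimal sketch of that call, assuming Proc::Background is installed and exports timeout_system on request ('sleep 10' stands in for testapp.exe here):

```perl
use strict;
use warnings;
use Proc::Background qw(timeout_system);

# Run the command, killing it if it runs longer than 2 seconds.
# A command that was killed will not have a clean (zero) exit status.
my $exit_status = timeout_system(2, 'sleep', '10');
print "command did not exit cleanly\n" if $exit_status != 0;
```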

How can I make fork in Perl in different scripts?

I have a process in Perl that creates another one with the system command. I leave it in memory and pass some variables like this:
my $var1 = "Hello";
my $var1 = "World";
system "./another_process.pl $var1 $var2 &";
But the system command only returns the result; I need to get the PID. I want to do something like fork. What should I do? How can I do something like fork, but across different scripts?
Thanks in advance!
Perl has a fork function.
See perldoc perlfaq8 - How do I start a process in the background?
(contributed by brian d foy)
There's not a single way to run code in the background so you don't have to wait for it to finish before your program moves on to other tasks. Process management depends on your particular operating system, and many of the techniques are in perlipc.
Several CPAN modules may be able to help, including IPC::Open2 or IPC::Open3, IPC::Run, Parallel::Jobs, Parallel::ForkManager, POE, Proc::Background, and Win32::Process. There are many other modules you might use, so check those namespaces for other options too. If you are on a Unix-like system, you might be able to get away with a system call where you put an & on the end of the command:
system("cmd &")
You can also try using fork, as described in perlfunc (although this is the same thing that many of the modules will do for you).
STDIN, STDOUT, and STDERR are shared
Both the main process and the backgrounded one (the "child" process) share the same STDIN, STDOUT and STDERR filehandles. If both try to access them at once, strange things can happen. You may want to close or reopen these for the child. You can get around this with opening a pipe (see open) but on some systems this means that the child process cannot outlive the parent.
Signals
You'll have to catch the SIGCHLD signal, and possibly SIGPIPE too. SIGCHLD is sent when the backgrounded process finishes. SIGPIPE is sent when you write to a filehandle whose child process has closed (an untrapped SIGPIPE can cause your program to silently die). This is not an issue with system("cmd&").
Zombies
You have to be prepared to "reap" the child process when it finishes.
$SIG{CHLD} = sub { wait };
$SIG{CHLD} = 'IGNORE';
You can also use a double fork. You immediately wait() for your first child, and the init daemon will wait() for your grandchild once it exits.
unless ($pid = fork) {
    unless (fork) {
        exec "what you really wanna do";
        die "exec failed!";
    }
    exit 0;
}
waitpid($pid, 0);
See Signals in perlipc for other examples of code to do this. Zombies are not an issue with system("prog &").
It's true that you can use fork/exec, but I think it will be much easier to simply use the pipe form of open. Not only is the return value the pid you are looking for, you can be connected to either the stdin or stdout of the process, depending on how you open. For instance:
open my $handle, "foo|";
will return the pid of foo and connect you to its stdout, so that reading from $handle gets you a line of foo's output. Using "|foo" instead will let you write to foo's stdin.
You can also use open2 and open3 to do both simultaneously, though that has some major caveats, as you can run into unexpected issues due to IO buffering.
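For illustration (echo stands in for the foo command), the three-argument list form of the pipe open avoids going through the shell and still returns the child's pid:

```perl
use strict;
use warnings;

# The pipe form of open returns the child's pid; passing the command
# as a list avoids shell interpretation of the arguments.
my $pid = open(my $out, '-|', 'echo', 'hello from child')
    or die "cannot fork: $!";

print "child pid: $pid\n";
while (my $line = <$out>) {
    print "got: $line";
}
close $out;   # close also waits for the child; its exit status lands in $?
```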
Use fork and exec.
If you need to get the PID of a Perl script, you can use the $$ variable. You can put it in your another_process.pl and have it output the pid to a file. Can you be more specific about what you mean by "like fork"? You can always use the fork/exec combination.