I'm writing a Perl program for Windows that runs several SVN commands.
I need to capture the status of the SVN process, so I'm using backticks.
e.g.:
{
    $COMMAND = "blabla...";
    $results = `$COMMAND 2>&1`;
    parse_results($results);
}
Sometimes the process gets stuck, so I need to set a timeout on it.
I tried using the ALRM signal, but it didn't kill the stuck process; I only receive the indication if and when the process finishes.
What can I do to deal with processes that don't complete fast enough?
Signals are a Unix concept; on Windows you should use IPC::Run instead.
use IPC::Run qw( run timeout );

run [ 'bla', 'arg' ], '>&', \my $results, timeout( 10 )
    or die "bla: $?";
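Note that IPC::Run's timeout() throws an exception when the timer expires rather than returning false, so in practice you wrap the call in eval. A minimal sketch adapted to the question, where the svn command and $path are placeholders rather than your actual invocation:

use IPC::Run qw( run timeout );

# placeholder command; substitute your real SVN invocation
my @cmd = ( 'svn', 'status', $path );

my $results;
my $ok = eval {
    # '>&' sends both stdout and stderr into $results, like 2>&1
    run \@cmd, '>&', \$results, timeout( 10 );
};
if ( $@ ) {
    warn "command timed out or could not be run: $@";
}
parse_results($results) if $ok;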
Related
I'm trying to do the following:
Fork.
Launch a desktop application.
Wait for 2 secs.
Kill the desktop application.
This is what I have:
#!/usr/bin/perl
use utf8;
use strict;
use warnings;

my @cmd = ('calc.exe');

my $pid = fork();
die "fork() failed: $!" unless defined $pid;

if ($pid) {
    system @cmd;
}

sleep 2;
kill 'TERM' => $pid;
The application launches correctly, but it isn't killed after the two seconds. I know I'm missing something; I hope someone can point me in the right direction. Right now I'm testing this code on Windows 7 SP1 with Perl 5.32.1 bundled with msys2.
You have up to four processes here: parent, child, shell, and app. You're killing the child, not the app. Use exec instead of system, and use the exec PROGRAM LIST form to avoid a shell, as in the sketch below.
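A minimal sketch of that restructuring, assuming msys2's fork/exec emulation cooperates (see the caveat below); the child execs the app directly, so $pid refers to the app:

#!/usr/bin/perl
use strict;
use warnings;

my @cmd = ('calc.exe');

my $pid = fork();
die "fork() failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: replace this process with the app, bypassing any shell
    exec { $cmd[0] } @cmd;
    die "exec failed: $!";
}

# parent
sleep 2;
kill 'TERM' => $pid;   # now this signals the app itself
waitpid $pid, 0;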
Even then, that may not work; it depends on how well msys2 emulates fork and exec. A better solution might be to use the Windows system calls (CreateProcess and TerminateProcess) instead of emulations of Unix system calls.
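For example, a sketch using the Win32::Process module, which wraps CreateProcess; the module is my suggestion rather than part of the answer above, and the calc.exe path is just its usual location:

use Win32;
use Win32::Process;

# CreateProcess wants the executable's full path plus the command line
Win32::Process::Create(
    my $proc,
    'C:\\Windows\\System32\\calc.exe',
    'calc.exe',
    0,                        # don't inherit handles
    NORMAL_PRIORITY_CLASS,
    '.',
) or die Win32::FormatMessage(Win32::GetLastError());

sleep 2;
$proc->Kill(0);               # terminate the app with exit code 0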
This question isn't really new (https://www.perlmonks.org/?node_id=620645), but I did not find a working answer for my problem. In short, I want access to the PID of a thread/process created by a function such as system, exec or open. For example, using the open function, my $pid = open my $fhOut, "| the command " or die ...;, on Linux the actual PID according to the ps command is $pid + 2, while on Win32 the actual PID is a negative number (like -1284). In both cases, the PID returned by the open function does not match the real PID of the command!
Likewise, the PID returned by my $pid = system 1, "command params" does not match the PID reported by the operating system. Can someone explain, please? What is the proper way to quit an endless-loop program started via the open or system functions?
This is my test code:
my $pid = fork();
if ( $pid == 0 )
{
    my $mimperf_pid = open my $cmd, "mimperf -C $db > results/mimperf/mimperf.log |" or die $!;
    sleep(10);
    print $mimperf_pid;
    kill 'KILL', $mimperf_pid;
    exit 0;
}
In this code, I am trying to kill the process ($mimperf_pid) created via the open function, but it does not succeed.
fork() isn't supported by Windows, so Perl (poorly) emulates it using threads. These virtual processes have negative PIDs. You should avoid fork() on Windows if you can, using threads instead if nothing else.
This isn't pertinent to your question, though. While the PID returned by fork is negative, the PID returned by open is not.
Setting aside that one should generally avoid two-arg open:
open my $cmd, "foo bar |"
is equivalent to
open my $cmd, "-|", "foo bar"
And that's equivalent to
open my $cmd, "-|", "cmd", "/x", "/c", "foo bar"
You are launching a shell to execute a shell command, and it's the PID of the shell that open returns.
Same goes with system 1, $shell_cmd.
That means that the Ctrl-Break signal is being sent to the shell. (That's what Perl sends in lieu of non-existent SIGKILL.)
Now, I don't have mimperf, so I used a different program (perl) in its place, and it received the Ctrl-Break signal along with the shell. So if mimperf didn't exit, maybe it simply isn't responsive to Ctrl-Break?
If you want to avoid the shell, you will need to do the output redirection yourself. For that, I recommend IPC::Run. It handles timeouts too.
use IPC::Run qw( run timeout );

run [ "mimperf", "-C", $db ],
    ">", "results/mimperf/mimperf.log",
    timeout(10);
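If instead you want to decide yourself when to stop the long-running program, as in your test code, IPC::Run's start/kill_kill interface can be used; a sketch along those lines:

use IPC::Run qw( start );

# start() returns a harness without waiting for the command to finish
my $h = start [ "mimperf", "-C", $db ],
    ">", "results/mimperf/mimperf.log";

sleep 10;

$h->kill_kill;   # signal the child (escalating if needed) and reap it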
I am working on a capstone project and am hoping for some insight.
This is the first time I've worked with Perl, and it's pretty much a basic Perl script to automate a few different Unix commands that need to be executed in a specific order. There are two lines in the script that execute a Unix command which needs to finish processing before the rest of the script can run (the data will be incorrect otherwise).
How can I use Perl (or maybe this is a Unix question?) to print a simple string once the Unix command has finished processing? I am looking into ways to read in the Unix command's name, but I'm not sure how to check whether the process is still running and print a string such as "X command has finished processing" upon its completion.
Example:
system("nohup scripts_pl/RunAll.pl &");
This runs a command in the background that takes time to process. I am asking how I can use Perl (or Unix?) to print a string once the process has finished.
I'm sorry if I've misunderstood the context of your question.
But couldn't you use Perl's fork function instead of & if you want to run the processes in parallel?
use feature 'say';

# parent process
if (my $pid = fork) {
    # this block continues in the foreground
    system("nohup scripts_pl/RunAll2.pl");   # you can run other commands here (like RunAll2.pl)
    wait;                                    # wait for the background child to finish
    say 'finished both';
}
# child process
else {
    # this block behaves as the background process
    system("nohup scripts_pl/RunAll.pl");    # note: no trailing &
}
You could try to use IPC::Open3 instead of system:
use IPC::Open3;
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
waitpid( $pid, 0 );
Or, if you need to run nohup through the shell:
my $pid = open3("<&STDIN", ">&STDOUT", ">&STDERR", 'bash','-c', 'nohup scripts_pl/RunAll.pl & wait');
Update: Thanks to @ikegami. A better approach if you would like STDIN to stay open after running the command:
open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
my $pid = open3("<&CHILD_STDIN", ">&STDOUT", ">&STDERR", 'nohup scripts_pl/RunAll.pl');
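As with the first form, the parent can then reap the child and report completion, which is what the question was after:

waitpid( $pid, 0 );
print "scripts_pl/RunAll.pl has finished processing\n";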
I am calling an external program from my Perl code using backticks:
print `<some long running program>`
The long running program prints detailed log messages onto standard output.
The problem I'm having is that due to buffering, the output from the long running program is printed all at once after it has finished its execution.
I tried making the STDOUT filehandle "hot" but that did not help.
Is there any way I can have my program print continuously onto the screen?
Open as an exec pipe rather than using backticks.
open ( my $prog_stdout, "-|", "/your/program" ) or die $!;
This will fork and exec but give you access to $prog_stdout to do things with.
E.g.
while ( <$prog_stdout> ) {
    print;
}
(It'll close if your external program exits, so the while will terminate).
You may also want to include autoflushing of the filehandle. http://perldoc.perl.org/IO/Handle.html
But that may not be necessary, as output won't be buffered indefinitely.
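If the remaining delay turns out to be the external program block-buffering its own stdout when it isn't writing to a terminal, one workaround is to force line buffering when you open the pipe. This assumes GNU coreutils' stdbuf is available, which is my addition rather than part of the answer above:

# hypothetical wrapper: stdbuf -oL asks the child to line-buffer its stdout
open( my $prog_stdout, "-|", "stdbuf", "-oL", "/your/program" ) or die $!;

while ( <$prog_stdout> ) {
    print;   # each line appears as soon as the child emits it
}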
It might not be buffering, but the fact that backticks only return once the external program finishes.
You can, however, use a read pipe to consume the external program's output line by line:
use autodie;
open my $pipe, "-|", "<some long running program>";
# $pipe->autoflush();
while (<$pipe>) { .. }
I'm trying to grasp the concept of fork() & exec() for my own learning purposes. I'm trying to use Perl's fork to create a second, identical process, and then use that to exec a .sh script.
If I use fork() & exec(), can I get the .sh script to run in parallel to my Perl script? The Perl script shouldn't wait on the child process; it should continue executing. So my Perl script doesn't care about the output of the child process, only that the command is valid and running. Sort of like calling the script to run in the background.
Is there some sort of safety I can implement to know that the child process exited correctly as well?
If I use fork() & exec(), can I get the .sh script to run in parallel to my Perl script? [...] Sort of like calling the script to run in the background.
Yes. Fork & exec is actually the way shells run commands in the background.
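For instance, a minimal sketch; ./myscript.sh is just a placeholder for your shell script:

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    # child: become the shell script
    exec './myscript.sh';
    die "exec failed: $!";
}

# parent: carries on immediately, without waiting for the child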
Is there some sort of safety I can implement to know that the child process exited correctly as well?
Yes, using waitpid() and looking at the return value stored in $?.
Like @rohanpm mentioned, the perlipc man page has a lot of useful examples showing how to do this. Here is one of the most relevant, where a signal handler is set up for SIGCHLD (which will be sent to the parent when the child terminates):
use POSIX ":sys_wait_h";

$SIG{CHLD} = sub {
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        $Kid_Status{$child} = $?;
    }
};
To get waitpid to not wait for the child:
use POSIX qw/ WNOHANG /;
my $st = waitpid $pid, WNOHANG;
$st is 0 if the process is still running and the pid if it's reaped.
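To tie that back to "exited correctly": once waitpid has reaped the child, the exit status is in $?, so a sketch like this reports it (the message strings are just illustrative):

use POSIX qw( WNOHANG );

my $st = waitpid $pid, WNOHANG;
if ($st == $pid) {
    my $exit = $? >> 8;   # child's exit code
    print $exit == 0
        ? "child exited correctly\n"
        : "child exited with status $exit\n";
}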