perl: executing multiple system processes and waiting for them to finish

Currently in my Perl script I make a call like the following:
system(" ./long_program1 & ./long_program2 & ./long_program3 & wait ");
I would like to be able to log when each of the long-running commands executes while still executing them asynchronously. I know that the system call causes Perl to fork, so is something like this possible? Could this be replaced by multiple Perl fork() and exec() calls?
Please help me find a better solution.

Yes, definitely. You can fork off a child process for each of the programs to be executed.
You can either call system() or exec() after forking, depending on how much processing you want your Perl code to do after the command finishes (exec($cmd) is very similar in functionality to system($cmd); exit $rc;).
foreach my $i (1, 2, 3) {
    my $pid = fork();
    if (!defined $pid) {
        warn "Fork $i failed: $!\n";
    } elsif ($pid == 0) {              # child: replace this process with the program
        exec("./long_program$i");
        die "Exec $i failed: $!\n";    # only reached if exec() itself fails
    }
}
1 while wait() >= 0;                   # reap children until none are left
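For comparison, here is a minimal sketch of the system() variant of the child, for when the child needs to run more Perl code after the command finishes:
foreach my $i (1, 2, 3) {
    my $pid = fork();
    warn "Fork $i failed: $!\n" if !defined $pid;
    next if !defined $pid || $pid > 0;       # parent: continue the loop
    # Child: system() returns, so post-processing can happen here.
    my $status = system("./long_program$i");
    print "long_program$i ended with status $status\n";
    exit($status == 0 ? 0 : 1);              # exit so the child never re-enters the loop
}
1 while wait() >= 0;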
Please note that if you need to manage many forks, you are better off controlling them with Parallel::ForkManager rather than forking by hand, as sketched below.
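A minimal sketch of that approach, assuming the same ./long_programN commands as above; the cap of three concurrent children is arbitrary:
use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(3);   # run at most 3 children at once
foreach my $i (1, 2, 3) {
    $pm->start and next;                  # parent: schedule the next command
    # Child: replace this process with the program.
    exec("./long_program$i") or die "Exec $i failed: $!\n";
}
$pm->wait_all_children;                   # block until every child has exited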

Two alternatives:
use IPC::Open3 qw( open3 );

sub launch {
    open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
    return open3('<&CHILD_STDIN', '>&STDOUT', '>&STDERR', @_);
}
my %children;
for my $cmd (@cmds) {
    print "Command $cmd started at ".localtime."\n";
    my $pid = launch($cmd);
    $children{$pid} = $cmd;
}

while (%children) {
    my $pid = wait();
    die $! if $pid < 1;
    my $cmd = delete($children{$pid});
    print "Command $cmd ended at ".localtime." with \$? = $?.\n";
}
I use open3 since it's shorter than even a trivial fork+exec, and since it doesn't misattribute exec errors to the command you launch the way a trivial fork+exec does.
use threads;

my @threads;
for my $cmd (@cmds) {
    push @threads, async {
        print "Command $cmd started at ".localtime."\n";
        system($cmd);
        print "Command $cmd ended at ".localtime." with \$? = $?.\n";
    };
}
$_->join() for @threads;

Related

Perl kill process with timeout ignored

I was testing my source code, in which the child process calls several other programs (some of which are C++).
# Other variables and functions
my $MAX_TIME = 10;                  # testing: 10 minutes
my $timeRemaining = $MAX_TIME * 60;

my $pid = fork();
if ( $pid == 0 ) {
    # child process
    my $nowTime = localtime;
    print "Run started at $nowTime\n";
    # run() contains a Perl for loop; each iteration calls several C++ programs
    run();
    setpgrp(0, 0);
}
elsif ($pid > 0) {
    my $nowTime = localtime;
    eval {
        local $SIG{ALRM} = sub {
            kill -9, $pid;
            print "Run completed at $nowTime\nJob timed out after $MAX_TIME minutes\n";
            log();
            die "TIMEOUT!\n";
        };
        alarm $timeRemaining;
        waitpid($pid, 0);
    };
    print "Run completed at $nowTime with no timeout\n";
}
When I checked the printout, I noticed that after 10 minutes the "Run completed at $nowTime with no timeout\n" line gets printed and the child process is still executing. The die "TIMEOUT!\n"; part in the parent process never executes.
Is it because the C++ programs that the Perl script calls cannot be killed once they have started?
First of all, kill is failing because $pid isn't a process group.
run();
setpgrp(0,0);
should be
setpgrp(0,0);
run();
Secondly, the reason you see
Run completed at $nowTime with no timeout
even when there's a timeout is that you execute
print "Run completed at $nowTime with no timeout\n";
whether there's a timeout or not.
Thirdly, you don't disable the alarm when the child is reaped. Add
alarm(0);
Fourthly, you expect $nowTime to contain the current time without making it so; it was set once, before the wait, and never updated.
Finally, you still need to reap your child even if you kill it. (Ok, this can be skipped if the parent exits immediately anyway.)
Fixed:
use strict;
use warnings;

use POSIX qw( strftime );

sub current_time { strftime("%Y-%m-%d %H:%M:%S", localtime) }

sub run {
    print("a\n");
    system('perl', '-e', 'sleep 3;');
    print("b\n");
    system('perl', '-e', 'sleep 3;');
    print("c\n");
}

my $MAX_TIME = 5;

my $pid = fork();
die($!) if !defined($pid);

if ($pid) {
    if (eval {
        local $SIG{ALRM} = sub {
            kill KILL => -$pid;    # negative PID signals the whole process group
            die "TIMEOUT!\n";
        };
        alarm($MAX_TIME);
        waitpid($pid, 0);
        alarm(0);                  # cancel the alarm once the child is reaped
        return 1;
    }) {
        print "[".current_time()."] Run completed.\n";
    } else {
        die($@) if $@ ne "TIMEOUT!\n";
        print "[".current_time()."] Run timed out.\n";
        waitpid($pid, 0);          # still need to reap the killed child
        print "[".current_time()."] Child reaped.\n";
    }
} else {
    print "[".current_time()."] Run started.\n";
    setpgrp(0, 0);                 # become a process group leader *before* run()
    run();
}
Output:
[2017-05-11 14:58:06] Run started.
a
b
[2017-05-11 14:58:11] Run timed out.
[2017-05-11 14:58:11] Child reaped.

Call Several Other Scripts Async

I know there are a lot of ways to do this, but because there are so many I don't know which one to choose.
What I want to accomplish:
1. Start several child scripts
2. Be able to check if they are running
3. Be able to kill them
4. I DON'T need to capture their output, and their output does not need to be displayed.
Each of these scripts is in its own file.
I haven't done scripting in a while and I'm stuck in an OOP mindset, so forgive me if I say something ridiculous.
use Parallel::ForkManager qw( );

use constant MAX_SIMUL_CHILDREN => 10;

my $pm = Parallel::ForkManager->new(MAX_SIMUL_CHILDREN);
for my $cmd (@cmds) {
    $pm->start()
        and next;       # true in the parent, so the parent moves to the next command

    # Child: silence output, then replace this process with the command.
    open(STDOUT, '>', '/dev/null')
        or die($!);
    exec($cmd)
        or die($!);
    $pm->finish();      # never reached, but that's ok
}
$pm->wait_all_children();
Adding the following before the loop will log the PIDs of the children.
$pm->run_on_start(sub {
    my ($pid, $ident) = @_;
    print("Child $pid started.\n");
});

$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $exit_signal) = @_;
    if    ($exit_signal) { print("Child $pid killed by signal $exit_signal.\n"); }
    elsif ($exit_code)   { print("Child $pid exited with error $exit_code.\n"); }
    else                 { print("Child $pid completed successfully.\n"); }
});
$ident is the value passed to $pm->start(). It can be used to give a "name" to a process, as in the sketch below.
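For example, passing the command itself to start() makes the finish callback self-describing; a minimal sketch, assuming the same $pm and @cmds as above:
# Register the callback before starting children; the value passed to
# start() comes back as $ident when the child is reaped.
$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident) = @_;
    print("Command '$ident' (pid $pid) exited with code $exit_code.\n");
});

for my $cmd (@cmds) {
    $pm->start($cmd) and next;   # $cmd becomes this child's $ident
    exec($cmd) or die($!);
}
$pm->wait_all_children();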
Perl and parallel don't go well together, but here are a few thoughts:
- fork() a few times, and manage each child independently
- Perl allows you to open a filehandle to a process: open my $fh, '-|', 'command_to_run.sh'. You could use this and poll those handles
- Fork them to the background and store their process IDs (a sketch combining these last two ideas follows)
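A minimal sketch, assuming hypothetical script names in @cmds. One caveat: with '-|' opens, child output goes into the pipes, so a real program that expects much output should read or select() on the handles (an unread pipe eventually fills and blocks the writer); also, kill 0 only probes that a PID exists, and an exited-but-unreaped zombie still counts as present.
use strict;
use warnings;

my @cmds = ('./child1.pl', './child2.pl');   # hypothetical child scripts

my %running;    # pid => command
for my $cmd (@cmds) {
    # For pipe opens, open() returns the child's PID.
    my $pid = open(my $fh, '-|', $cmd) or die "Can't start $cmd: $!";
    $running{$pid} = $cmd;
}

# Check whether they are running: signal 0 probes a PID without sending anything.
for my $pid (sort keys %running) {
    printf "%s (pid %d) is %s\n", $running{$pid}, $pid,
        kill(0, $pid) ? "running" : "gone";
}

# Kill them, then reap so no zombies are left behind.
kill 'TERM', keys %running;
waitpid($_, 0) for keys %running;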

Perl (Tk): how to run a system command asynchronously, being able to react to its output?

I'm writing a wrapper for an external command ("sox", if that helps) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd) or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;   # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop() until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with standard fork stuff?
The solution should work under Win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script showing how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) {   # child (XXX checking for failed forks is missing)
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;   # sleep 0.2s between messages
    }
    exit;
}
$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');   # unregister the callback
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});
MainLoop;
Forking may or may not work under Windows. Also, one needs to be cautious when forking within Tk; you must make sure that only one of the two processes does X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow, as the example above does.

TCP Client hangs with Perl fork() + system()

I have a Perl script running a TCP listener via the Net::Server module. When the remote side connects to the Perl server, it sends the filename of an mp3 file to play. When I fork() and then call system("mpg123 $filename"), the client hangs. How can I background the mpg123 process so the child can close the connection?
my $pid = fork();
if (defined $pid && $pid == 0)
{
    # child process -- never gets to print statement until $cmd is done
    system($cmd);
    print STDERR "child launched\n";
    exit(0);
}
Perl’s system doesn’t return until the command completes. You might rearrange the child to
if (defined $pid && $pid == 0)
{
    # child process
    warn "child launched\n";
    exec $cmd or die "$0: exec $cmd: $!";
}
Ended up using Proc::Daemon:
#!/usr/bin/perl -w
use strict;
use Proc::Daemon;

my $dm = Proc::Daemon->new( work_dir => '/tmp/' );
my $pid = $dm->Init( { exec_command => '/usr/bin/find / >/tmp/find.txt' } );

while (1)
{
    print "child status: " . $dm->Status($pid) . "\n";
    sleep 2;
    if ($dm->Status($pid) == 0)    # Status() is false once the daemon is gone
    {
        print "child terminated: " . $dm->Status($pid) . "\n";
        last;
    }
}
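If the daemonized process later needs to be stopped, Proc::Daemon can also kill it; a minimal sketch, assuming the $dm and $pid from the code above (Kill_Daemon() sends SIGKILL by default, and an explicit signal can be passed as a second argument):
# Stop the daemonized process; returns the number of processes signalled.
$dm->Kill_Daemon($pid);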