Call Several Other Scripts Async - perl

I know there are a lot of ways to do this, but precisely because there are so many, I don't know which one to choose.
What I want to accomplish:
1. Start several child scripts
2. Be able to check if they are running
3. Be able to kill them
4. I DON'T need to capture their output, and their output does not need to be displayed.
Each of these scripts is in its own file.
I haven't done scripting in a while and I'm stuck in an OOP mindset, so forgive me if I say something ridiculous.

use Parallel::ForkManager qw( );

use constant MAX_SIMUL_CHILDREN => 10;

my $pm = Parallel::ForkManager->new(MAX_SIMUL_CHILDREN);
for my $cmd (@cmds) {
    $pm->start()
        and next;

    open(STDOUT, '>', '/dev/null')
        or die($!);

    exec($cmd)
        or die($!);

    $pm->finish();   # Never reached, but that's ok.
}

$pm->wait_all_children();
Adding the following before the loop will log the PID of the children.
$pm->run_on_start(sub {
    my ($pid, $ident) = @_;
    print("Child $pid started.\n");
});

$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $exit_signal) = @_;
    if    ($exit_signal) { print("Child $pid killed by signal $exit_signal.\n"); }
    elsif ($exit_code)   { print("Child $pid exited with error $exit_code.\n"); }
    else                 { print("Child $pid completed successfully.\n"); }
});
$ident is the value passed to $pm->start(). It can be used to give a "name" to a process.
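The question also asks about checking whether the children are still running (point 2) and killing them (point 3). A minimal sketch of one way to do that, assuming you collect the PIDs into a %children hash of your own inside the run_on_start callback above:

my %children;    # pid => 1; populate inside the run_on_start callback

# Point 2: signal 0 delivers nothing, but returns true if the process
# exists and we are allowed to signal it.
my @alive = grep { kill(0 => $_) } keys %children;
print "Still running: @alive\n";

# Point 3: ask the survivors to terminate (escalate to KILL if ignored).
kill(TERM => @alive);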

Perl and parallel don't go well together, but here are a few thoughts:
- fork() a few times, and manage each child independently
- Perl allows you to open filehandles to processes: open my $fh, '-|', 'command_to_run.sh'. You could use this and poll those handles (a sketch of this follows the list)
- Fork them to the background and store their process IDs
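As a rough sketch of the second idea (the command names ./child1.sh and ./child2.sh are placeholders, and IO::Select is my choice for the polling): open one read pipe per command, then poll the handles so no single child blocks the loop.

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Select;

my @cmds = ('./child1.sh', './child2.sh');   # placeholder commands

my $sel = IO::Select->new;
my %cmd_for;    # handle => command name, for reporting

for my $cmd (@cmds) {
    open my $fh, '-|', $cmd or die "Can't start $cmd: $!";
    $sel->add($fh);
    $cmd_for{$fh} = $cmd;
}

# can_read(1) waits up to one second and returns the handles that have
# data (or have reached EOF, which also counts as "readable").
while ($sel->count) {
    for my $fh ($sel->can_read(1)) {
        my $line = <$fh>;
        if (!defined $line) {    # EOF: this child has exited
            print "$cmd_for{$fh} finished\n";
            $sel->remove($fh);
            close $fh;           # close() also reaps the child
        }
        # else: we don't need the output, so just discard $line
    }
}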

Related

IPC communication between 2 processes with Perl

Let's say we have 'Child' and 'Parent' processes defined, along with some subroutines:
my $pid = fork;
die "fork failed: $!" unless defined($pid);

local $SIG{USR1} = sub {
    kill KILL => $pid;
    $SIG{USR1} = 'IGNORE';
    kill USR1 => $$;
};
and we split execution between them - is it possible to do the following?
if ($pid == 0) {
    sub1();
    # switch to Parent process to execute sub4()
    sub2();
    # switch to Parent process to execute sub5()
    sub3();
}
else {
    sub4();
    # send message to child process so it executes sub2
    sub5();
    # send message to child process so it executes sub3
}
If yes, can you point how, or where can I look for the solution? Maybe a short example would suffice. :)
Thank you.
There is a whole page in the docs about inter-process communication: perlipc
To answer your question - yes, there is a way to do what you want. The problem is that exactly what it is depends on your use case. I can't tell what you're trying to accomplish - what do you mean by 'switch to parent', for example?
But generally the simplest (in my opinion) is using pipes:
#!/usr/bin/env perl
use strict;
use warnings;

pipe ( my $reader, my $writer );

my $pid = fork(); # you should probably test for undef for fork failure.

if ( $pid == 0 ) {
    ## in child:
    close ( $writer );
    while ( my $line = <$reader> ) {
        print "Child got $line\n";
    }
}
else {
    ## in parent:
    close ( $reader );
    print {$writer} "Parent says hello!\n";
    sleep 5;
}
Note: you may want to check fork's return value - 0 means we're in the child, a positive number (the child's PID) means we're in the parent, and undef means the fork failed.
Also: your pipe will buffer - this might trip you up in some cases. It'll run to the end just fine, but you may not get IO when you think you should.
You can open pipes the other way around - for child->parent comms. Be slightly cautious when you multi-fork though, because an active pipe is inherited by every child of the fork - but it's not a broadcast.
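Regarding the 'switch to parent' sequencing in the question: with one pipe per direction, the two processes can hand control back and forth. A minimal sketch, with the subN() bodies replaced by placeholder prints:

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Handle;

# One pipe per direction: parent->child and child->parent.
pipe( my $from_parent, my $to_child  ) or die "pipe: $!";
pipe( my $from_child,  my $to_parent ) or die "pipe: $!";

my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {                      # child
    close $to_child;
    close $from_child;
    $to_parent->autoflush(1);

    print "child: sub1()\n";
    print {$to_parent} "sub1 done\n";   # hand control to the parent
    scalar <$from_parent>;              # block until sub4() has run
    print "child: sub2()\n";
    print {$to_parent} "sub2 done\n";
    scalar <$from_parent>;              # block until sub5() has run
    print "child: sub3()\n";
    exit 0;
}

close $from_parent;
close $to_parent;
$to_child->autoflush(1);

scalar <$from_child>;                   # wait for sub1()
print "parent: sub4()\n";
print {$to_child} "sub4 done\n";        # let the child run sub2()
scalar <$from_child>;                   # wait for sub2()
print "parent: sub5()\n";
print {$to_child} "sub5 done\n";        # let the child run sub3()
waitpid $pid, 0;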

Perl (Tk): how to run a system command asynchronously and react to its output?

I'm writing a wrapper around an external command ("sox", if this helps) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd)
        or error("Error running command \"$cmd\"");
}

while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;

POSIX::close($_) for 3 .. 1024;   # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with standard fork stuff?
The solution should work under win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) { # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;
    }
    exit;
}
$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});
MainLoop;
Forking may or may not work under Windows. Also one needs to be cautious when forking within Tk; you must make sure that only one of the two processes is doing X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.

perl: executing multiple system processes and waiting for them to finish

Currently in my Perl script I make a call like the following:
system(" ./long_program1 & ./long_program2 & ./long_program3 & wait ");
I would like to be able to log when each of the long-running commands executes while still executing them asynchronously. I know that the system call causes Perl to fork, so is something like this possible? Could this be replaced by multiple Perl fork() and exec() calls?
Please help me find a better solution.
Yes, definitely. You can fork off a child process for each of the programs to be executed.
You can either do system() or exec() after forking, depending on how much processing you want your Perl code to do after the system call finishes (since exec() is very similar in functionality to system(); exit $rc;)
foreach my $i (1, 2, 3) {
    my $pid = fork();
    if ($pid == 0) { # child
        exec("./long_program$i");
        die "Exec $i failed: $!\n";
    } elsif (!defined $pid) {
        warn "Fork $i failed: $!\n";
    }
}
1 while wait() >= 0;
Please note that if you need to do a lot of forks, you are better off controlling them via Parallel::ForkManager instead of doing forking by hand.
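For instance, here's how the same three programs might look under Parallel::ForkManager - a minimal sketch of my own, not from the original answer, with an arbitrary concurrency limit of 3:

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(3);   # at most 3 concurrent children

foreach my $i (1, 2, 3) {
    $pm->start and next;                  # parent: move on to the next program
    # child: replace ourselves with the program; die only if exec fails
    exec("./long_program$i") or die "Exec $i failed: $!\n";
}
$pm->wait_all_children;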
Two alternatives:
use IPC::Open3 qw( open3 );

sub launch {
    open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
    return open3('<&CHILD_STDIN', '>&STDOUT', '>&STDERR', @_);
}

my %children;
for my $cmd (@cmds) {
    print "Command $cmd started at " . localtime . "\n";
    my $pid = launch($cmd);
    $children{$pid} = $cmd;
}

while (%children) {
    my $pid = wait();
    die $! if $pid < 1;
    my $cmd = delete($children{$pid});
    print "Command $cmd ended at " . localtime . " with \$? = $?.\n";
}
I use open3 since it's shorter than even a trivial fork+exec, and since it doesn't misattribute exec errors to the command you launch the way a trivial fork+exec does.
use threads;

my @threads;
for my $cmd (@cmds) {
    push @threads, async {
        print "Command $cmd started at " . localtime . "\n";
        system($cmd);
        print "Command $cmd ended at " . localtime . " with \$? = $?.\n";
    };
}

$_->join() for @threads;

Problems while making a multiprocessing task in Perl

I'm trying to write a basic multiprocessing task, and this is what I have. First of all, I don't know the right way to make this program non-blocking: while I am waiting for the response of one child (with waitpid), the other processes also have to wait in the queue. And what will happen if some child processes die earlier (I mean, the processes die out of order)? Searching around, I found that I can get the PID of the process that just died by using waitpid(-1, WNOHANG). I always get a warning that WNOHANG is not a number, but when I added the lib sys_wait_h, I didn't get that error - yet the script never waits for the PID. What may be the error?
#!/usr/bin/perl
#use POSIX ":sys_wait_h"; #if I use this library, I dont get the error, but it wont wait for the return of the child
use warnings;

main(@ARGV);

sub main {
    my $num = 3;
    for (1..$num) {
        my $pid = fork();
        if ($pid) {
            print "Im going to wait (Im the parent); my child is: $pid\n";
            push(@childs, $pid);
        }
        elsif ($pid == 0) {
            my $slp = 5 * $_;
            print "$_ : Im going to execute my code (Im a child) and Im going to wait like $slp seconds\n";
            sleep $slp;
            print "$_ : I finished my sleep\n";
            exit(0);
        }
        else {
            die "couldn't fork: $!\n";
        }
    }

    foreach (@childs) {
        print "Im waiting for: $_\n";
        my $ret = waitpid(-1, WNOHANG);
        #waitpid($_, 0);
        print "Ive just finish waiting for: $_; the return: $ret \n";
    }
}
Thanks in advance, bye!
If you use WNOHANG, the process will not block if no children have terminated. That's the point of WNOHANG; it ensures that waitpid() will return quickly. In your case, it looks like you want to just use wait() instead of waitpid().
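To make that concrete, here's a minimal sketch of both approaches - a blocking wait() loop that reaps children in whatever order they exit, and a non-blocking WNOHANG poll (WNOHANG is exported by POSIX ':sys_wait_h', which is why the constant was unrecognized without that import). The @childs array is the list of PIDs saved at fork time, as in the question:

use POSIX ":sys_wait_h";    # exports WNOHANG

# Option 1: block until every child has been reaped, in exit order.
# wait() returns -1 once there are no children left.
while ((my $pid = wait()) > 0) {
    print "Reaped child $pid (exit status: " . ($? >> 8) . ")\n";
}

# Option 2: poll without blocking, doing other work in between.
my %running = map { $_ => 1 } @childs;
while (%running) {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $running{$pid};
        print "Child $pid finished (exit status: " . ($? >> 8) . ")\n";
    }
    sleep 1;    # or do useful work here instead of sleeping
}

You'd use one or the other, not both in sequence, since option 1 already reaps every child.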
I find that POE handles all of this stuff for me quite nicely. It's asynchronous (non-blocking) control of all sorts of things, including external processes. You don't have to deal with all the low level stuff because POE does it for you.
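If you go that route, here's a minimal sketch loosely adapted from the POE::Wheel::Run synopsis (the ./long_program1 command is a placeholder); treat it as a starting point rather than a drop-in solution:

use strict;
use warnings;
use POE qw(Wheel::Run);

POE::Session->create(
    inline_states => {
        _start => sub {
            # Launch the external program under POE's supervision.
            my $child = POE::Wheel::Run->new(
                Program     => ['./long_program1'],   # placeholder command
                StdoutEvent => 'got_child_stdout',
            );
            # Ask the kernel to deliver the child's SIGCHLD to us, and
            # keep the wheel alive by storing it in the heap.
            $_[KERNEL]->sig_child($child->PID, 'got_child_signal');
            $_[HEAP]{children_by_pid}{$child->PID} = $child;
        },
        got_child_stdout => sub {
            print "child said: $_[ARG0]\n";
        },
        got_child_signal => sub {
            # ARG1 is the PID, ARG2 the exit status.
            print "child $_[ARG1] exited with status $_[ARG2]\n";
            delete $_[HEAP]{children_by_pid}{$_[ARG1]};
        },
    },
);

POE::Kernel->run();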