perl: executing multiple systems processes and waiting for them to finish

Currently in my Perl script I make a call like the following:
system(" ./long_program1 & ./long_program2 & ./long_program3 & wait ");
I would like to be able to log when each of the long-running commands executes while still executing them asynchronously. I know that the system call causes Perl to fork, so is something like this possible? Could this be replaced by multiple Perl fork() and exec() calls?
Please help me find a better solution.

Yes, definitely. You can fork off a child process for each of the programs to be executed.
You can either do system() or exec() after forking, depending on how much processing you want your Perl code to do after the command finishes (exec() is very similar in functionality to system(); exit $rc;).
foreach my $i (1, 2, 3) {
    my $pid = fork();
    if ($pid == 0) {                      # child: replace this process with the program
        exec("./long_program$i");
        die "Exec $i failed: $!\n";       # only reached if exec itself fails
    } elsif (!defined $pid) {
        warn "Fork $i failed: $!\n";
    }
}
1 while wait() >= 0;                      # reap children until none remain
Please note that if you need to do a lot of forks, you are better off controlling them via Parallel::ForkManager instead of doing forking by hand.
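For comparison, here is a minimal sketch of the same loop under Parallel::ForkManager (the cap of 4 concurrent children is an arbitrary choice for illustration):
use Parallel::ForkManager qw( );

my $pm = Parallel::ForkManager->new(4);   # run at most 4 children at a time
foreach my $i (1, 2, 3) {
    $pm->start and next;                  # parent: move on to the next program
    exec("./long_program$i");             # child: replace ourselves with the program
    die "Exec $i failed: $!\n";
}
$pm->wait_all_children;
Because the child execs, it never calls $pm->finish; that is the same pattern the Parallel::ForkManager example further down uses.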

Two alternatives:
use IPC::Open3 qw( open3 );

sub launch {
    open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
    return open3('<&CHILD_STDIN', '>&STDOUT', '>&STDERR', @_);
}
my %children;
for my $cmd (@cmds) {
    print "Command $cmd started at ".localtime."\n";
    my $pid = launch($cmd);
    $children{$pid} = $cmd;
}

while (%children) {
    my $pid = wait();
    die $! if $pid < 1;
    my $cmd = delete($children{$pid});
    print "Command $cmd ended at ".localtime." with \$? = $?.\n";
}
I use open3 since it's shorter than even a trivial fork+exec and since it doesn't misattribute exec errors to the command you launch the way a trivial fork+exec does.
use threads;

my @threads;
for my $cmd (@cmds) {
    push @threads, async {
        print "Command $cmd started at ".localtime."\n";
        system($cmd);
        print "Command $cmd ended at ".localtime." with \$? = $?.\n";
    };
}
$_->join() for @threads;

Related

Call Several Other Scripts Async

I know there are a lot of ways to do this, but because there are so many I don't know which one to choose.
What I want to accomplish:
1. Start several child scripts
2. Be able to check if they are running
3. Be able to kill them
4. I DON'T need to capture their output, and their output does not need to be displayed.
Each of these scripts is in their own file.
I haven't done scripting in a while and I'm stuck in an OOP mindset, so forgive me if I say something ridiculous.
use Parallel::ForkManager qw( );

use constant MAX_SIMUL_CHILDREN => 10;

my $pm = Parallel::ForkManager->new(MAX_SIMUL_CHILDREN);

for my $cmd (@cmds) {
    $pm->start()
        and next;                  # parent: continue the loop

    # child: silence its output, then replace this process with the command
    open(STDOUT, '>', '/dev/null')
        or die($!);
    exec($cmd)
        or die($!);
    $pm->finish();                 # Never reached, but that's ok.
}

$pm->wait_all_children();
Adding the following before the loop will log the children's PIDs and exit statuses.
$pm->run_on_start(sub {
    my ($pid, $ident) = @_;
    print("Child $pid started.\n");
});

$pm->run_on_finish(sub {
    my ($pid, $exit_code, $ident, $exit_signal) = @_;
    if    ($exit_signal) { print("Child $pid killed by signal $exit_signal.\n"); }
    elsif ($exit_code)   { print("Child $pid exited with error $exit_code.\n"); }
    else                 { print("Child $pid completed successfully.\n"); }
});
$ident is the value passed to $pm->start(). It can be used to give a "name" to a process.
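For example, a hypothetical tweak to the loop above, passing the command string itself so the callbacks can report it:
for my $cmd (@cmds) {
    $pm->start($cmd) and next;   # $cmd arrives as $ident in the callbacks above
    # ... child code as before ...
}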
Perl and parallel don't go well together, but here are a few thoughts:
fork() a few times, and manage each child independently
Perl allows you to open filehandles to processes: open my $fh, '-|', 'command_to_run.sh'. You could use this and poll those handles (see the sketch after this list)
Fork them to the background and store their process IDs
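A minimal sketch of the piped-open approach (the script names are placeholders; in '-|' mode, open() returns the child's PID, and note that the children's output accumulates in the pipes unless you read or redirect it):
use strict;
use warnings;

my %kids;   # pid => command
for my $cmd ('./script1.pl', './script2.pl') {   # placeholder scripts
    my $pid = open(my $fh, '-|', $cmd) or die "Can't start $cmd: $!";
    $kids{$pid} = $cmd;
}

# check which children are still alive (signal 0 probes without killing)
for my $pid (keys %kids) {
    print "$kids{$pid} ($pid) is still running\n" if kill 0, $pid;
}

# kill them all
kill 'TERM', keys %kids;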

Perl (tk): how to run asynchronously a system command, being able to react to its output?

I'm writing a wrapper to an external command ("sox", if this can help) with Perl "Tk".
I need to run it asynchronously, of course, to avoid blocking tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
use IPC::Open3;
use Symbol qw( gensym );
use POSIX ();

{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd)
        or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;   # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd does terminate.
Is there some way to read the output handle asynchronously?
Or should I go with standard fork stuff?
The solution should work under win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) { # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;   # sleep for 0.2 seconds
    }
    exit;
}
$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});
MainLoop;
Forking may or may not work under Windows. Also one needs to be cautious when forking within Tk; you must make sure that only one of the two processes is doing X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.

TCP Client hangs with Perl fork() + system()

I have a perl script running a TCP listener via Net::Server module. When the remote connects to the perl server, the remote sends the filename of an mp3 music file to play. When I fork() and then call system('mpg123 $filename'), the client hangs. How can I background the mpg123 process so the child can close the connection?
my $pid = fork();
if (defined $pid && $pid == 0)
{
    # child process -- never gets to print statement until $cmd is done
    system($cmd);
    print STDERR "child launched\n";
    exit(0);
}
Perl's system doesn't return until the command completes. You might rearrange the child to:
if (defined $pid && $pid == 0)
{
    # child process
    warn "child launched\n";
    exec $cmd or die "$0: exec $cmd: $!";
}
I ended up using Proc::Daemon:
#!/usr/bin/perl -w
use strict;
use Proc::Daemon;

my $dm = Proc::Daemon->new( work_dir => '/tmp/' );
my $pid = $dm->Init( { exec_command => '/usr/bin/find / >/tmp/find.txt' } );
while (1)
{
    print "child status :".$dm->Status($pid)."\n";
    sleep 2;
    if ($dm->Status($pid) == 0)
    {
        print "child terminated :".$dm->Status($pid)."\n";
        last;
    }
}

How can I timeout a forked process that might hang?

I am writing a Perl script that will write some inputs and send those inputs to an external program. There is a small but non-zero chance that this program will hang, and I want to time it out:
my $pid = fork;
if ($pid > 0){
    eval{
        local $SIG{ALRM} = sub { die "TIMEOUT!" };
        alarm $num_secs_to_timeout;
        waitpid($pid, 0);
        alarm 0;
    };
}
elsif ($pid == 0){
    exec('echo blahblah | program_of_interest');
    exit(0);
}
As it stands now, after $num_secs_to_timeout, program_of_interest still persists. I tried to kill it in the anonymous subroutine for $SIG{ALRM} as follows:
local $SIG{ALRM} = sub { kill 9, $pid; die "TIMEOUT!" };
but this doesn't do anything. program_of_interest is still persisting. How do I go about killing this process?
I was able to successfully kill my exec()ed process by killing the process group, as shown in the answer to the question In perl, killing child and its children when child was created using open. I modified my code as follows:
my $pid = fork;
if ($pid > 0){
    eval{
        local $SIG{ALRM} = sub { kill 9, -$PID; die "TIMEOUT!" };
        alarm $num_secs_to_timeout;
        waitpid($pid, 0);
        alarm 0;
    };
}
elsif ($pid == 0){
    setpgrp(0, 0);
    exec('echo blahblah | program_of_interest');
    exit(0);
}
After timeout, program_of_interest is successfully killed.
The above code (by strictlyrude27) didn't work out of the box, because -$PID is spelt in capitals: it should be the lowercase -$pid.
(BTW: there's also: http://www.gnu.org/software/coreutils/manual/html_node/timeout-invocation.html)
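For instance, a sketch of the same guard using that coreutils timeout command (timeout exits with status 124 when it had to kill the program):
# let coreutils timeout(1) do the killing instead of alarm + kill
my $rc = system('timeout', $num_secs_to_timeout, 'program_of_interest');
warn "program timed out\n" if $rc != -1 && ($rc >> 8) == 124;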
Here's an example with a test:
#!/usr/bin/perl
use strict;
use warnings;
use File::Basename;

my $prg = basename $0;
my $num_secs_sleep = 2;
my $num_secs_to_timeout = 1;
my $orig_program = "sleep $num_secs_sleep; echo \"Look ma, survived!\"";
my $program = $orig_program;
my $expect = "";

if (@ARGV){
    if($ARGV[0] eq "test"){
        test();
        exit 0;
    } elsif (@ARGV == 1) {
        $num_secs_to_timeout = $ARGV[0];
    } elsif (@ARGV == 2) {
        $program = $ARGV[0];
        $num_secs_to_timeout = $ARGV[1];
    } else {
        die "Usage: $prg [ \"test\" | [program] seconds ] "
    }
}

if($orig_program eq $program) {
    if(@ARGV < 2) {
        $expect = $num_secs_to_timeout > $num_secs_sleep ?
            "(we expected to survive.)" : "(we expected to TIME OUT!)";
    }
    print STDERR "sleeping: $num_secs_sleep seconds$/";
}

print STDERR <<END;
timeout after: $num_secs_to_timeout seconds,
running program: '$program'
END

if($orig_program eq $program) {
    print STDERR "$expect$/";
}

exit Timed::timed($program, $num_secs_to_timeout);

sub test {
    eval "use Test::More qw(no_plan);";
    my $stdout;
    close STDOUT;
    open STDOUT, '>', \$stdout or die "Can't open STDOUT: $!";
    Timed::timed("sleep 1", 3);
    is($stdout, undef);
    Timed::timed("sleep 2", 1);
    is($stdout, "TIME OUT!$/");
}
################################################################################
package Timed;
use strict;
use warnings;

sub timed {
    my $retval;
    my ($program, $num_secs_to_timeout) = @_;
    my $pid = fork;
    if ($pid > 0){ # parent process
        eval{
            local $SIG{ALRM} =
                sub { kill 9, -$pid; print STDOUT "TIME OUT!$/"; $retval = 124; };
            alarm $num_secs_to_timeout;
            waitpid($pid, 0);
            alarm 0;
        };
        return defined($retval) ? $retval : $? >> 8;
    }
    elsif ($pid == 0){ # child process
        setpgrp(0, 0);
        exec($program);
    } else { # forking not successful
    }
}
Hmmm, your code works for me after some minor modifications, which I assume are changes you made to turn the code into a generic example.
So that leaves me with two ideas:
You removed the problem when you created the sample code - try creating a small sample that actually runs (I had to change 'program_of_interest' and $num_secs_to_timeout to real values to test it). Make sure the sample has the same problem.
It's something to do with the program_of_interest you're running - as far as I know, you can't mask a kill 9, but maybe there's something going on. Have you tried testing your code with a really simple script? I created one for my testing that goes while (1) { print "hi\n"; sleep 1; }
Something else.
Good luck...
The only way SIGKILL can be ignored is if the process is stuck in a system call which is uninterruptible. Check the state of the hung process (with ps aux); if the state is D, then the process can't be killed.
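For example, a quick probe from the parent (a sketch, assuming $pid holds the child's PID):
# probe the child's state; a leading 'D' means uninterruptible sleep
chomp(my $state = qx(ps -o stat= -p $pid));
warn "child $pid cannot be killed right now\n" if $state =~ /^D/;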
You might also want to check that the signal handler is actually being called by outputting something from it.