close multiple output pipes in perl without blocking on each one

I have a perl script which sends a lot of output to multiple subprocesses. I need to be able to close my end of all the pipes and then wait for the subprocesses to finish their work. So far I've only succeeded at closing each pipe and waiting for each subprocess to finish one by one.
More concretely, I'm doing something like this:
for ($i=0;$i<24;$i++) {
    my $fh;
    open $fh, "|externalprogram $i";
    $fhs{$i} = $fh;
}
#...now I can write output to the pipes
while (moreworktodo()) {
    my ($whichone, $data) = do_some_work();
    print { $fhs{$whichone} } $data;
}
#Now I just need to wait for all the subprocesses to finish. However, they
#need to do a lot of work that can only begin when they've finished reading input. So I need to close my end of the pipe to indicate I'm finished.
for ($i=0;$i<24;$i++) {
    my $file = $fhs{$i};
    close $file;  # unfortunately, this blocks until process $i finishes;
                  # meanwhile all the other processes are waiting for EOF
                  # on their STDIN before they can proceed, so I end up waiting
                  # for 24 processes to finish one-at-a-time instead of all at once
}
One way to get all the subprocesses to finish promptly (by closing their stdin) is simply to let my script exit without closing the (pipe) filehandles at all, but that's no good because the script is part of a larger job that needs the subprocesses' work to actually be done before proceeding.
What is a simple way to close each subprocess's stdin (so that they can all finish working) and then wait for all of them to finish before proceeding? I've tried forking off a child to close each pipe, but that doesn't seem to work -- only the parent's close actually closes the subprocess's stdin and waits for the subprocess to finish.

I would create the pipes myself and not use open(P, "|external-program").
Then you can close the pipe and not wait for the child process to exit.
Example of opening a pipe to a child process yourself:
sub spawn {
    my ($cmd) = @_;
    pipe(my $rp, my $wp) or die "pipe failed: $!";
    my $pid = fork();
    die "fork: $!" unless defined($pid);
    if ($pid) {
        # parent
        close($rp);
        return ($wp, $pid);
    } else {
        # child
        close($wp);
        open(STDIN, "<&", $rp) or die "dup: $!";
        exec($cmd) or die "exec: $!";
    }
}
sub main {
    $| = 1;
    my ($wp, $pid) = spawn("./child");
    for (1..10) {
        print {$wp} "sending $_\n";
    }
    close($wp);
    print "done\n";
}

main();
Here's a sample child program to test that close() is NOT waiting for the child to exit:
# file: ./child
while (<STDIN>) {
    print "got: $_";
    sleep(2);
}
The last piece of the puzzle is to asynchronously wait for the child processes to exit.
This can be done with a $SIG{CHLD} handler (a sketch of that approach follows the function below), or, alternatively, here is a simplistic "join_children" function:
my @child_ids = (1..24);  # or whatever ids you want to use
my %pipe;                 # hash map from child_id -> pipe handle

sub join_children {
    for my $id (@child_ids) {
        close( $pipe{$id} );
    }
    my $count = scalar(@child_ids);
    while ($count > 0) {
        wait;
        $count--;
    }
}
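For the $SIG{CHLD} route mentioned above, a rough sketch (reusing the same @child_ids and %pipe, and kept deliberately simple) might look like this:
use POSIX ":sys_wait_h";

my $remaining = scalar(@child_ids);
$SIG{CHLD} = sub {
    local ($!, $?);
    $remaining-- while waitpid(-1, WNOHANG) > 0;   # reap every child that has exited
};

# close every write end, then wait for the handler to count all the exits
close( $pipe{$_} ) for @child_ids;
sleep 1 while $remaining > 0;   # sleep is cut short by each SIGCHLD
Note this is a sketch: it ignores the small race between checking the counter and going back to sleep, which the join_children version above avoids.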

Related

IO::Select loop is somehow keeping child from exiting?

I have a perl IO::Select loop for running a command (with no stdin) and saving its output into a variety of logs.
It has worked great for a number of years, but suddenly it's misbehaving with one particular command. The command runs to completion and exits properly if it's run from the command line, but when I run it from this perl code, then after it's done (as seen from the logs) the child process won't exit; it just sits in a hung state. I don't have strace rights on the systems this is happening on, so I can't see exactly what is stuck.
The code is essentially:
# Start process
open(local *nostdin, '<', '/dev/null') or die $!;
my $pid = open3('<&nostdin',
                my $outH = gensym(),
                my $errH = gensym(),
                $command
               );

# Setup select
my $sel = IO::Select->new();
$sel->add($outH);
$sel->add($errH);

# Select loop
while (my @ready = $sel->can_read()) {
    foreach my $handle (@ready) {
        my $bytesRead = sysread($handle, my $buf='', 1024);
        if ($bytesRead <= 0) {
            warn("Error reading command out $!\n Command: $command\n") if $bytesRead;
            $sel->remove($handle);
            next;
        }
        if ($handle==$outH) {
            syswrite(SYSOUT,$buf) if $stdoutOpen;
            syswrite(SYSLOG,$buf) if $logOpen;
        }
        if ($handle==$errH) {
            syswrite(SYSERR,$buf) if $stderrOpen;
            syswrite(SYSLOG,$buf) if $logOpen;
        }
    }
}
Is there anything I could be doing wrong? And even if there is a mistake, how can the perl I/O loop keep the child process from exiting? It seems that once the child exits it would close all its I/O and we'd finish this loop (as has always been the case with this code).
I've done some tests using waitpid and some extra output, and it looks like $outH is closing, but $errH never has any data and never closes, and yet I can see from waitpid that the child has exited.
So I open two pipes, for the child's stdout and stderr. The child uses the output pipe and never the error pipe. Then the child closes the output pipe and exits, yet somehow this exit isn't closing the error pipe. How is this happening? Is this something a child could even do on purpose?
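One mechanism that produces exactly this symptom (offered as an illustration, not a diagnosis of your particular command): the command forks a grandchild, for example to daemonize a helper, and that grandchild inherits the write end of the stderr pipe, so the pipe only reaches EOF when the grandchild also exits or closes it. A toy stand-in that reproduces it:
#!/usr/bin/perl
# toy stand-in for the misbehaving command: it exits promptly, but first
# forks a grandchild that keeps the inherited stderr pipe open
use strict;
use warnings;

print "doing my work\n";            # goes down the stdout pipe

my $pid = fork();
die "fork: $!" unless defined $pid;
if ($pid == 0) {
    close(STDOUT);                  # grandchild drops stdout but keeps stderr
    sleep 60;                       # lingers, holding the stderr pipe open
    exit 0;
}

exit 0;                             # the direct child exits; stdout sees EOF, stderr does not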

perl, no child process w/ "open"

Hi I have this problem where the perl script spits back "No child process found at" ...
My script calls several different types of forks, so I tried implementing the waitpid() approach from perldoc so that I could handle fork & exec & system & qx alike.
$SIG{CHLD} = sub {
    # don't change $! and $? outside handler
    local ($!, $?);
    my $pid = waitpid(-1, WNOHANG) > 0;
    return if $pid == -1;
    return unless defined $children{$pid};
    delete $children{$pid};
};

my $pid = fork();
die "cannot fork" unless defined $pid;
if ($pid == 0) {
    # ...
    exit 0;
} else {
    $children{$pid}=1;
    # ...
    exec($command);
}
There is no problem with this part of the execution of the code; however, the "No child process found" error occurs when I try to call close on the filehandle. Can someone explain to me why this is happening, as I really want to understand this problem in more depth? Do I end up reaping the child process forked by the open call, so that close doesn't know how to handle the filehandle? Or maybe I'm 100% off. Any solutions would be appreciated.
open(RESULTS, "-|", "find $dir/ -maxdepth 1 -name RESULTS -print0 | xargs -0 cat ") or die $!;
while (<RESULTS>) {
    if ($_ =~ /match/) {
        print $_;
    }
}
close RESULTS;
close on a handle* opened thusly calls waitpid to reap the child created by open. However, your signal handler managed to reap the child before close did, so close could not find the child, so close returned an error.
You could fix this by changing your signal handler to only reap the children you've created using fork (below), or you could ignore that error from close (also sketched below).
$SIG{CHLD} = sub {
    local ($!, $?, $^E);
    for my $pid (keys(%children)) {
        if (waitpid($pid, WNOHANG) > 0) {
            delete $children{$pid};
        }
    }
};
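The other fix, ignoring that specific error from close, could look roughly like this (a sketch; ECHILD is the errno behind "No child processes"):
use Errno qw(ECHILD);

unless (close(RESULTS)) {
    if ($! == ECHILD) {
        # the SIGCHLD handler already reaped the child; nothing to do
    }
    else {
        warn "close failed: status=$? errno=$!\n";
    }
}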
* — The proper term is "file handle", so named because it lets you get hold of a file. It is not "handler", since it performs no action.

Problems while making a multiprocessing task in Perl

I'm trying to make a basic multiprocessing task and this is what I have. First of all, I don't know the right way to make this program non-blocking, because while I am waiting for the response of one child (with waitpid) the other processes also have to wait in the queue. But what happens if some child processes die earlier (I mean, the processes die out of order)? So I've been searching, and I found that I can get the PID of the process that has just died by using waitpid(-1, WNOHANG). I always get a warning that WNOHANG is not a number, but when I added the lib sys_wait_h, I didn't get that error; the script just never waits for the PID. What might the error be?
#!/usr/bin/perl
#use POSIX ":sys_wait_h"; #if I use this library, I dont get the error, but it wont wait for the return of the child
use warnings;

main(@ARGV);

sub main{
    my $num = 3;
    for(1..$num){
        my $pid = fork();
        if ($pid) {
            print "Im going to wait (Im the parent); my child is: $pid\n";
            push(@childs, $pid);
        }
        elsif ($pid == 0) {
            my $slp = 5 * $_;
            print "$_ : Im going to execute my code (Im a child) and Im going to wait like $slp seconds\n";
            sleep $slp;
            print "$_ : I finished my sleep\n";
            exit(0);
        }
        else {
            die "couldn't fork: $!\n";
        }
    }
    foreach (@childs) {
        print "Im waiting for: $_\n";
        my $ret = waitpid(-1, WNOHANG);
        #waitpid($_, 0);
        print "Ive just finish waiting for: $_; the return: $ret \n";
    }
}
Thanks in advance, bye!
If you use WNOHANG, the process will not block if no children have terminated. That's the point of WNOHANG; it ensures that waitpid() will return quickly. In your case, it looks like you want to just use wait() instead of waitpid().
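A sketch of the final loop rewritten that way (wait() blocks until whichever child exits next, which is all you need if the goal is simply to know that every child has finished):
foreach (@childs) {
    my $reaped = wait();   # blocks until some child exits
    print "reaped child $reaped\n";
}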
I find that POE handles all of this stuff for me quite nicely. It's asynchronous (non-blocking) control of all sorts of things, including external processes. You don't have to deal with all the low level stuff because POE does it for you.

Killing child and its children when child was created using open

Here's my code, with error handling and other stuff removed for clarity:
sub launch_and_monitor {
    my ($script, $timeout) = @_;

    sub REAPER {
        while ((my $child = waitpid(-1, &WNOHANG)) > 0) {}
        $SIG{CHLD} = \&REAPER;
    }
    $SIG{CHLD} = \&REAPER;

    my $pid = fork;
    if (defined $pid) {
        if ($pid == 0) {
            # in child
            monitor($timeout);
        }
        else {
            launch($script);
        }
    }
}
The launch sub executes a shell script which in turn launches other processes, like so:
sub launch($) {
    my ($script) = @_;
    my $pid = open(PIPE, "$script|");
    # write pid to pidfile
    if ($pid != 0) {
        while(<PIPE>) {
            # do stuff with output
        }
        close(PIPE) or die $!;
    }
}
The monitor sub basically just waits for a specified period of time and then attempts to kill the shell script.
sub monitor($) {
    my ($timeout) = @_;
    sleep $timeout;
    # check if script is still running and if so get pid from pidfile
    if (...) {
        my $pid = getpid(...);
        kill 9, $pid;
    }
}
This kills the script; however, it does not kill any of its subprocesses. How can I fix that?
You can do this with process groups, if your operating system supports them. You need to make the script process become a process group leader. The child processes that it runs will inherit the process group from their parent. You can then use kill to send a signal to each process in the group at the same time.
In launch(), you will need to replace the open line with one that forks. Then in the child, you would call setpgrp() before exec'ing the command. Something like the following should work:
my $pid = open(PIPE, "-|");
if (0 == $pid) {
    setpgrp(0, 0);
    exec $script;
    die "exec failed: $!\n";
}
else {
    while(<PIPE>) {
        # do stuff with output
    }
    close(PIPE) or die $!;
}
Later, to kill the script process and its children, negate the process ID that you're signalling:
kill 9, -$pid;
In general, I don't think you can expect signals to be propagated into all child processes; this isn't specific to perl.
That said, you might be able to use the process group signal feature built into perl kill():
...if SIGNAL is negative, it kills process groups instead of processes...
You probably need to use setpgrp() on your (direct) child process, then change your kill call to something like:
kill -9, $pgrp;
Try adding:
use POSIX qw(setsid);
setsid;
at the top of your launch_and_monitor function. This will put your processes in a separate session, and cause things to exit when the session leader (i.e. the master) exits.
Killing a process group works, but don't forget that the parent can also be killed on its own. Assuming the child processes have an event loop, they can check the socket created with socketpair() prior to the fork() for validity. In fact, select() returns cleanly once the parent's end of the socket is gone; all that needs to be done is to check the socket.
E.g.:
use strict; use warnings;
use Socket;

$SIG{CHLD} = sub {};

socketpair(my $p, my $c, AF_UNIX, SOCK_STREAM, PF_UNSPEC) or die $!;

print "parent $$, fork 2 kids\n";
for (0..1){
    my $kid = fork();
    unless($kid){
        child_loop($p, $c);
        exit;
    }
    print "parent $$, forked kid $kid\n";
}
print "parent $$, waiting 5s\n";
sleep 5;
print "parent $$ exit, closing sockets\n";

sub child_loop {
    my ($p_s, $c_s) = @_;
    print "kid: $$\n";
    close($c_s);
    my $rin = '';
    vec($rin, fileno($p_s), 1) = 1;
    while(1){
        select my $rout = $rin, undef, undef, undef;
        if(vec($rout, fileno($p_s), 1)){
            print "kid: $$, parent gone, exiting\n";
            last;
        }
    }
}
Runs like this:
tim@mint:~$ perl ~/abc.pl
parent 5638, fork 2 kids
parent 5638, forked kid 5639
kid: 5639
parent 5638, forked kid 5640
parent 5638, waiting 5s
kid: 5640
parent 5638 exit, closing sockets
kid: 5640, parent gone, exiting
kid: 5639, parent gone, exiting
tim@mint:~$

How can I prevent the parent from blocking when writing to a child?

Recently I had a problem using open with "|-" when I wanted to communicate between two processes.
Basically, the child process couldn't process its STDIN as fast as the parent filled it up. This caused the parent to wait until there was room in the pipe again, and made it run slowly.
How big can the pipe feeding the child's STDIN be, and is it possible to modify it? If yes, what is the best practice size?
Here is some code sample to show what I mean:
if ($child_pid = open($child, "|-"))
{
    $child->autoflush(1);
    # PARENT process
    while (1)
    {
        # Read packet from socket save in $packet
        process_packet($packet);
        # forward packet to child
        print $child $packet;
    }
}
else
{
    die "Cannot fork: $!" unless defined $child_pid;
    # CHILD process
    my $line;
    while($line = <STDIN>)
    {
        chomp $line;
        another_process_packet($line);
    }
}
In this sample, another_process_packet is slower than process_packet. The reason I write the code like this is that I want to use the same data that comes from the socket, and read it only once.
Thanks in advance.
You can of course buffer in the parent process, and only write to the child when the child's fd is writable (i.e., writing won't block). You can do this yourself with the right args to syswrite (a rough sketch of that appears at the end of this answer), or use an event loop:
use AnyEvent;
use AnyEvent::Handle;

# make child, assume you write to it via $fh

my $done = AnyEvent->condvar;
my $h = AnyEvent::Handle->new( fh => $fh );

while( you do stuff ){
    my $data = ...;
    $h->push_write($data); # this will never block
}

$h->on_drain(sub { $done->send });
$done->wait; # now you block, waiting for all writes to actually complete
Edit: This used to be untested, but I tested it, and it works. (I used perl -ne "sleep 1; print $_" as the slow child.) Writes proceed during the while loop, if possible, but never block the loop. At the end, you actually block until all the writes have completed.
My test scripts are on gist.github: http://gist.github.com/126488
You can see how the child blocks the blocking loop, but how it doesn't block the non-blocking loop. Obvious when you put it that way ;)
(Finally, as a general rule of thumb; if you are interacting with the network or with other processes, you should probably be using an event loop.)
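For the do-it-yourself route mentioned at the top of this answer, here is a rough sketch of the parent side only (read_packet() is a hypothetical stand-in for the socket-reading code in the question; $child is the pipe handle from the question): keep your own buffer, flush opportunistically with non-blocking syswrite, and block only at the very end.
use IO::Handle;
use Errno qw(EAGAIN EWOULDBLOCK);

$child->blocking(0);      # non-blocking writes on the pipe to the child
my $buffer = '';          # data queued for the child

# push as much buffered data down the pipe as it will take; never blocks
sub try_flush {
    while (length $buffer) {
        my $n = syswrite($child, $buffer);
        if (defined $n) {
            substr($buffer, 0, $n, '');           # drop what was written
        }
        elsif ($! == EAGAIN || $! == EWOULDBLOCK) {
            last;                                 # pipe is full; retry later
        }
        else {
            die "write to child failed: $!";
        }
    }
}

while (my $packet = read_packet()) {              # read_packet() is hypothetical
    process_packet($packet);
    $buffer .= $packet;
    try_flush();                                  # opportunistic, non-blocking flush
}

# before closing, block until everything has reached the child
while (length $buffer) {
    my $win = '';
    vec($win, fileno($child), 1) = 1;
    select(undef, $win, undef, undef);            # wait until the pipe is writable
    try_flush();
}
close($child);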
The size is set in the kernel. You can either recompile the kernel with a higher limit or use an intermediary buffer process.
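As an aside to this answer (not mentioned in it): on Linux kernels since 2.6.35 the capacity of an individual pipe can also be changed at run time with fcntl's F_SETPIPE_SZ, with no recompile needed. A sketch, assuming $child is the pipe handle from the question:
# Linux-specific; F_SETPIPE_SZ is 1031 (recent Fcntl versions also export it)
my $F_SETPIPE_SZ = 1031;
my $size = fcntl($child, $F_SETPIPE_SZ, 1_048_576);   # ask for about 1 MB
if ($size) {
    print "pipe capacity is now $size bytes\n";        # the kernel may round up
} else {
    warn "could not resize pipe: $!";
}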
The process handle has a method named blocking (provided by IO::Handle). Just set blocking to 0, and the parent process will not block.
if ($child_pid = open($child, "|-"))
{
    $child->blocking(0); # Key to the solution.
    $child->autoflush(1);
    # PARENT process
    while (1)
    {
        # Read packet from socket save in $packet
        process_packet($packet);
        # forward packet to child
        print $child $packet;
    }
}
else
{
    die "Cannot fork: $!" unless defined $child_pid;
    # CHILD process
    my $line;
    while($line = <STDIN>)
    {
        chomp $line;
        another_process_packet($line);
    }
}