perl: one parent, many children - single pipe reader in parent?

Is it possible in Perl to establish a pipe in such a way that the parent has only one READER pipe and many children write to it as they come to life and exit?
The typical cookbook code is:
#!/usr/bin/perl -w
# pipe2 - use pipe and fork so child can send to parent
use IO::Handle;
pipe(READER, WRITER);
WRITER->autoflush(1);

if ($pid = fork) {
    close WRITER;
    chomp($line = <READER>);
    print "Parent Pid $$ just read this: `$line'\n";
    # do what you need
} else {
    die "cannot fork: $!" unless defined $pid;
    close READER;
    print WRITER "Child Pid $$ is sending this\n";
    close WRITER;    # this will happen anyway
    exit;
}
Let's instead assume a case where I need my parent READER to get messages from multiple children. Is it possible to do this without keeping a list of pipes, one per child? I can't close WRITER in the parent, because then the next child wouldn't get a valid handle to write to. I also need the parent to continue its regular operation and not block waiting for child data on the pipe.
Pseudocode of what I need:
# parent code
pipe(READER, WRITER)
fork_random_number_of_children(READER, WRITER)
on_some_tick => {
    my $data = read_from(READER, non_blocking)
    if (data) print "Hey some child sent me: $data"
    else      print "No data, going back to life"
    do_other_things_before_next_tick()
}

# child code; let's assume this is called for each child fork
child_job(R, W) {
    close(R)    # no problem, it's a copy
    sleep(random duration)
    print W, "Message from child with pid $$"
    exit 0
}

I don't think it's necessary to close WRITER in the parent. It may be good practice, but since you can't reuse the same pipe for new child processes after you close it, that's a good excuse not to. If you keep WRITER open until you are done launching all your child processes, you can reuse the pipe with multiple child processes. Here's a proof of concept:
use IO::Handle;
use POSIX ':sys_wait_h';

pipe(READER, WRITER);
WRITER->autoflush(1);

sub child_process {
    my $stage = shift;
    close READER;    # also good practice, though optional here
    srand($$);
    do {
        sleep 1 + 5*rand();
        print WRITER "Child Pid $$ ($stage) is sending this\n";
    } while (rand > 0.5);
    exit;
}
# initial set of children
for (my $i=0; $i<5; $i++) {
    if (fork() == 0) {
        child_process("LAUNCH");
    }
}

# parent
my ($rin, $rout) = ('');
vec($rin, fileno(READER), 1) = 1;
while (1) {
    # non-blocking read on pipe
    my $read_avail = select($rout=$rin, undef, undef, 0.0);
    if ($read_avail < 0) {
        if (!$!{EINTR}) {
            warn "READ ERROR: $read_avail $!\n";
            last;
        }
    } elsif ($read_avail > 0) {
        chomp(my $line = <READER>);
        print "Read in Parent $$: '$line'\n";
    } else {
        print STDERR "No input ... do other stuff\n";
        # start some run-time child processes
        if (time-$^T > 5 && time-$^T < 10) {
            # launch a few more children in the middle of the program
            if (fork() == 0) {
                child_process("RUN");
            }
        }
        sleep 1;
    }
    last if waitpid(-1, &WNOHANG) < 0;    # no more children are alive
}
close WRITER;    # now it is safe to do this ...
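For comparison, the same non-blocking poll can be written with IO::Select, which hides the vec()/fileno() bookkeeping around select(). A minimal sketch of the parent loop, reusing the READER handle and child_process() from the script above:

use IO::Select;
use POSIX ':sys_wait_h';

my $sel = IO::Select->new(\*READER);
while (1) {
    if ($sel->can_read(0)) {              # timeout of 0: poll, never block
        chomp(my $line = <READER>);
        print "Read in Parent $$: '$line'\n";
    } else {
        print STDERR "No input ... do other stuff\n";
        sleep 1;
    }
    last if waitpid(-1, WNOHANG) < 0;     # no more children are alive
}

Like the select() version, this checks readiness on the underlying file descriptor, so it pairs best with line-at-a-time traffic that doesn't linger in perl's stdio buffer.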

Detecting the end of a Pipe in Perl

I'm trying to fork a separate process in Perl and get its output back in the parent via a pipe. For instance:
my ($RD, $WR);
pipe($RD, $WR);
if (fork())
{
    # parent
    while (!eof $RD) { print "From Child: " . readline($RD); }
    print "Parent reached EOF\n";
} else {
    # child
    for (my $i = 0; $i < 25; $i++) { print $WR "$i\n"; }
    close $WR;
}
All of the lines from the child are received and printed out by the parent, but the parent never detects EOF and is stuck in that while loop, waiting. What is the proper way to detect EOF here?
A pipe only reports EOF once every file descriptor referring to its write end has been closed. The parent still holds its own copy of $WR open; have the parent close its copy.
pipe(my ($RD, $WR))
    or die("pipe: $!\n");

defined( my $pid = fork() )
    or die("fork: $!\n");

if ($pid) {
    # parent
    close($WR);
    print "From Child: $_" while <$RD>;
    print "Parent reached EOF\n";
} else {
    # child
    close($RD);
    print $WR "$_\n" for 0..24;
}

Perl: creating a pipe and forking a sub-process

All I'm trying to do here is:
Create a pipe
Fork a sub-process
Parent gets a message from the user, sends it to the child
Child gets the message, prints it to the screen
Repeat until user doesn't enter a message
This is what I have so far; I still need to implement a loop to repeat until the user doesn't enter a message.
#!perl -w
use strict;

pipe(PIPE_READ, PIPE_WRITE);
autoflush PIPE_WRITE 1;

my $pid = fork();
if ($pid) {
    &write_pipe($pid);
    waitpid($pid, 0);
}
elsif (defined $pid) {
    &read_pipe;
}
else {
    die "cannot fork: $!";
}

sub write_pipe {
    print "pid $$ \n";
    print "Enter message: ";
    sleep 1;
    my $usr_msg = <>;
    print "Parent pid = $$ message = $usr_msg";
    print PIPE_WRITE "$usr_msg\n";
    close(PIPE_WRITE);
    close(PIPE_READ);
}

sub read_pipe {
    print "child pid = $pid";
    my $msg_read = <PIPE_READ>;
    close(PIPE_WRITE);
    print "received from pipe $msg_read";
}
First of all, be careful not to call fork twice: writing
if ($pid = fork)
after my $pid = fork() unintentionally creates a second child. Test the saved pid instead, as above:
if ($pid)
In the child, call
close(PIPE_WRITE);
In the parent, call
close(PIPE_READ);
In the parent (when done writing), call
close(PIPE_WRITE);
As for reading from a file handle until EOF or a specific command is entered,
while (my $line = <>) {
    last if $line =~ /^(?:quit|exit)$/;
    ...
}
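Putting those pieces together, the whole program might look something like the sketch below. The loop that repeats until the user enters nothing is an assumption filled in from the question; here a blank line ends the session.

#!perl -w
use strict;
use IO::Handle;

pipe(PIPE_READ, PIPE_WRITE) or die "cannot pipe: $!";
PIPE_WRITE->autoflush(1);

my $pid = fork();
die "cannot fork: $!" unless defined $pid;

if ($pid) {
    # parent: read messages from the user and send them down the pipe
    close(PIPE_READ);
    while (1) {
        print "Enter message: ";
        my $usr_msg = <STDIN>;
        last unless defined $usr_msg && $usr_msg =~ /\S/;  # blank line quits
        print PIPE_WRITE $usr_msg;
    }
    close(PIPE_WRITE);    # child sees EOF and its read loop ends
    waitpid($pid, 0);
}
else {
    # child: print each message as it arrives, until the pipe closes
    close(PIPE_WRITE);
    print "received from pipe: $_" while <PIPE_READ>;
    exit 0;
}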

close multiple output pipes in perl without blocking on each one

I have a perl script which sends a lot of output to multiple subprocesses. I need to be able to close my end of all the pipes and then wait for the subprocesses to finish their work. So far I've only succeeded at closing each pipe and waiting for each subprocess to finish one by one.
More concretely, I'm doing something like this:
my %fhs;
for ($i=0; $i<24; $i++) {
    my $fh;
    open $fh, "|externalprogram $i";
    $fhs{$i} = $fh;
}

# ...now I can write output to the pipes
while (moreworktodo()) {
    my ($whichone, $data) = do_some_work();
    print { $fhs{$whichone} } $data;
}

# Now I just need to wait for all the subprocesses to finish. However, they
# need to do a lot of work that can only begin when they've finished reading
# input, so I need to close my end of the pipe to indicate I'm finished.
for ($i=0; $i<24; $i++) {
    my $file = $fhs{$i};
    close $file;    # unfortunately, this blocks until process $i finishes;
                    # meanwhile all the other processes are waiting for EOF
                    # on their STDIN before they can proceed, so I end up
                    # waiting for 24 processes to finish one-at-a-time
                    # instead of all at once
}
One way to get all the subprocesses to finish promptly (closing their stdin) is simply to let my script exit without closing the (pipe) filehandles at all, but that's no good because the script is part of a larger job that needs the subprocesses' work to actually be done before proceeding.
What is a simple way to close each subprocess's stdin (so that they can all finish working) and then wait for all of them to finish before proceeding? I've tried forking off a child to close each pipe, but that doesn't seem to work: only the parent's close() actually closes the stdin of the subprocess and waits for the subprocess to finish.
I would create the pipes myself and not use open(P, "|external-program").
Then you can close the pipe and not wait for the child process to exit.
Example of opening a pipe to a child process yourself:
sub spawn {
    my ($cmd) = @_;
    pipe(my $rp, my $wp) or die "pipe failed: $!";
    my $pid = fork();
    die "fork: $!" unless defined($pid);
    if ($pid) {
        # parent keeps the write end
        close($rp);
        return ($wp, $pid);
    } else {
        # child reads the pipe on its STDIN
        close($wp);
        open(STDIN, "<&", $rp);
        exec($cmd) or die "exec: $!";
    }
}
sub main {
    $| = 1;
    my ($wp, $pid) = spawn("./child");
    for (1..10) {
        print {$wp} "sending $_\n";
    }
    close($wp);    # returns immediately; no waiting for the child here
    print "done\n";
}

main();
Here's a sample child program to test that close() is NOT waiting for the child to exit:
# file: ./child
while (<STDIN>) {
    print "got: $_";
    sleep(2);
}
The last piece of the puzzle is to asynchronously wait for the child processes to exit.
This can be done with a $SIG{CHLD} handler (sketched after the code below), or, alternatively, here is a simplistic "join_children" function:
my @child_ids = (1..24);    # or whatever ids you want to use
my %pipe;                   # hash map from child_id -> pipe handle

sub join_children {
    for my $id (@child_ids) {
        close( $pipe{$id} );
    }
    my $count = scalar(@child_ids);
    while ($count > 0) {
        wait;
        $count--;
    }
}
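And the $SIG{CHLD} variant might look something like this. It is only a sketch, assuming the same @child_ids and %pipe as above, and that the handles came from spawn(), so close() returns immediately rather than waiting:

use POSIX ':sys_wait_h';

my $children_left = scalar(@child_ids);
$SIG{CHLD} = sub {
    # reap every child that has exited so far
    $children_left-- while waitpid(-1, WNOHANG) > 0;
};

# close all the write ends up front; none of these close() calls blocks,
# because the handles are plain pipe ends, not open("|...") handles
close( $pipe{$_} ) for @child_ids;

# idle until the handler has reaped everyone
sleep 1 while $children_left > 0;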

perl - child process signaling parent

I have written the following piece of code to test signaling between child and parent. Ideally, when the child sends a SIGINT to the parent, the parent should come back for a new iteration and wait for user input. I have observed this in Perl 5.8, but in Perl 5.6.1 (which I am asked to use) the parent is actually "killed". There is no next iteration.
my $parent_pid = $$;
$pid = fork();
if ($pid == 0)
{
    print "child started\n";
    kill 2, $parent_pid;
}
else
{
    while (1)
    {
        eval
        {
            $SIG{INT} = sub { die "GOTCHA"; };
            print 'inside parent' . "\n";
            $a = <>;
        };
        if ($@)
        {
            print "got the signal!!!!\n$@\n";
            next;
        }
    }
}
Could someone please suggest a workaround for this problem, or some other way to signal the parent so that it enters the new iteration?
The failure on 5.6.X might be because of the way Perl used to handle signals, which was fixed with 'Safe Signal Handling' in Perl 5.8.0. In either case, you are using a Perl which is practically archaeological and you should argue strongly to your masters that you should be using at least Perl 5.12, and ideally 5.14.
This is likely to be a race condition, caused by the child sending the SIGINT before the parent was ready for it. Remember that after you fork() you will have two independent processes, each of which might proceed at whatever pace it likes.
It's best in your case to set up the SIGINT handler before the fork() call, so you know it's definitely in place before the child tries to kill() its parent.
Here is the code again, with some minor corrections:
$SIG{INT} = sub { die "GOTCHA" };

my $parent_pid = $$;
defined( my $pid = fork() ) or die "Cannot fork() - $!";

if ($pid == 0)
{
    print "child started\n";
    kill INT => $parent_pid;
}
else
{
    while (1)
    {
        eval
        {
            print "inside parent\n";
            <>;
        };
        if ($@)
        {
            print "got the signal!!!!\n$@\n";
            next;
        }
    }
}

Killing child and its children when child was created using open

Here's my code, with error handling and other stuff removed for clarity:
sub launch_and_monitor {
    my ($script, $timeout) = @_;

    sub REAPER {
        while ((my $child = waitpid(-1, &WNOHANG)) > 0) {}
        $SIG{CHLD} = \&REAPER;
    }
    $SIG{CHLD} = \&REAPER;

    my $pid = fork;
    if (defined $pid) {
        if ($pid == 0) {
            # in child
            monitor($timeout);
        }
        else {
            launch($script);
        }
    }
}
The launch sub executes a shell script which in turn launches other processes, like so:
sub launch($) {
    my ($script) = @_;
    my $pid = open(PIPE, "$script|");
    # write pid to pidfile
    if ($pid != 0) {
        while (<PIPE>) {
            # do stuff with output
        }
        close(PIPE) or die $!;
    }
}
The monitor sub basically just waits for a specified period of time and then attempts to kill the shell script.
sub monitor($) {
    my ($timeout) = @_;
    sleep $timeout;
    # check if script is still running and if so get pid from pidfile
    if (...) {
        my $pid = getpid(...);
        kill 9, $pid;
    }
}
This kills the script; however, it does not kill any of its subprocesses. How can I fix that?
You can do this with process groups, if your operating system supports them. You need to make the script process become a process group leader. The child processes that it runs will inherit the process group from their parent. You can then use kill to send a signal to each process in the group at the same time.
In launch(), you will need to replace the open line with one that forks. Then in the child, you would call setpgrp() before exec'ing the command. Something like the following should work:
my $pid = open(PIPE, "-|");
if (0 == $pid) {
    # child: become a process group leader, then run the script
    setpgrp(0, 0);
    exec $script;
    die "exec failed: $!\n";
}
else {
    while (<PIPE>) {
        # do stuff with output
    }
    close(PIPE) or die $!;
}
Later, to kill the script process and its children, negate the process ID that you're signalling:
kill 9, -$pid;
In general, I don't think you can expect signals to be propagated into all child processes; this isn't specific to perl.
That said, you might be able to use the process group signal feature built into perl kill():
...if SIGNAL is negative, it kills process groups instead of processes...
You probably need to use setpgrp() on your (direct) child process, then change your kill call to something like:
kill -9, $pgrp;
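Since setpgrp(0, 0) makes the child's process group id equal to its own pid, $pgrp here is simply the pid returned by the piped open. A one-line sketch, assuming $pid came from the open(PIPE, "-|") call shown earlier:

kill -9, $pid;    # negative signal number => signal the whole process group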
Try adding:
use POSIX qw(setsid);
setsid;
at the top of your launch_and_monitor function. This will put your processes in a separate session, and cause things to exit when the session leader (i.e. the master) exits.
Killing a process group works, but don't forget that the parent can also be killed on its own. Assuming the child processes have an event loop, they can check whether the parent's end of a socketpair created before the fork() is still valid. In fact, select() returns cleanly when the parent socket is gone; all that needs to be done is to check the socket.
E.g.:
use strict; use warnings;
use Socket;

$SIG{CHLD} = sub {};

socketpair(my $p, my $c, AF_UNIX, SOCK_STREAM, PF_UNSPEC) or die $!;

print "parent $$, fork 2 kids\n";
for (0..1) {
    my $kid = fork();
    unless ($kid) {
        child_loop($p, $c);
        exit;
    }
    print "parent $$, forked kid $kid\n";
}

print "parent $$, waiting 5s\n";
sleep 5;
print "parent $$ exit, closing sockets\n";

sub child_loop {
    my ($p_s, $c_s) = @_;
    print "kid: $$\n";
    close($c_s);
    my $rin = '';
    vec($rin, fileno($p_s), 1) = 1;
    while (1) {
        select(my $rout = $rin, undef, undef, undef);
        if (vec($rout, fileno($p_s), 1)) {
            print "kid: $$, parent gone, exiting\n";
            last;
        }
    }
}
Runs like this:
tim@mint:~$ perl ~/abc.pl
parent 5638, fork 2 kids
parent 5638, forked kid 5639
kid: 5639
parent 5638, forked kid 5640
parent 5638, waiting 5s
kid: 5640
parent 5638 exit, closing sockets
kid: 5640, parent gone, exiting
kid: 5639, parent gone, exiting
tim@mint:~$