What would be the right way to fork processes so that each of them runs a different subroutine sub1, sub2, ..., subN? After reading a lot of previous threads and material, I feel that I understand the logic, but I'm a bit confused about how to write it in the cleanest way possible (readability is important to me).
Consider 4 subs. Each of them gets different arguments. It feels like the most efficient way would be to create 4 forks, each of which runs a different sub. The code would look something like this:
my $forks = 0;
foreach my $i (1..4) {
    if ($i == 1) {
        my $pid = fork();
        if ($pid == 0) {
            run1();
            exit;
        }
        $forks++;
    } elsif ($i == 2) {
        my $pid = fork();
        if ($pid == 0) {
            run2();
            exit;
        }
        $forks++;
    } elsif ($i == 3) {
        my $pid = fork();
        if ($pid == 0) {
            run3();
            exit;
        }
        $forks++;
    } elsif ($i == 4) {
        my $pid = fork();
        if ($pid == 0) {
            run4();
            exit;
        }
        $forks++;
    }
}

for (1 .. $forks) {
    my $pid = wait();
    print "Parent saw $pid exiting\n";
}
print "done\n";
Some points:
This will work only if all of the forks were successful. But I would like to run the subs even if a fork fails (even though it will not be parallel then). In that case, I guess we need to take the subs out of the if and exit only in the child (when $pid is 0), something like:
my $pid = fork();
run1();
$forks++ if ($pid == 0);
exit if ($pid == 0);
But it still doesn't feel right.
Is using exit the right way to end the child process? If the processes were ended with exit, should I still use wait? Will that prevent zombies?
Maybe the most interesting question: what do I do if there are 15 function calls? I would like to somehow create 15 forks, but I can't write 15 if-else branches - the code would not be readable that way. At first I thought it might be possible to put those function calls into an array (somehow) and loop over that array, but after some research I didn't find a way to do it.
If possible, I prefer not to use any additional modules like Parallel::ForkManager.
Is there a clean and simple way to solve it?
There are a few questions to clear up here.
A basic example
use warnings;
use strict;
use feature 'say';

my @coderefs;
for my $i (1..4) {
    push @coderefs, sub {
        my @args = @_;
        say "Sub #$i with args: @args";
    };
}

my @procs;
for my $i (0 .. $#coderefs) {
    my $pid = fork // do {
        warn "Can't fork: $!";
        # retry, or record which subs failed so to run later
        next;
    };
    if ($pid == 0) {
        $coderefs[$i]->("In $$: $i");
        exit;
    }
    push @procs, $pid;
    #sleep 1;
}
say "Started: @procs";

for my $pid (@procs) {
    my $goner = waitpid $pid, 0;
    say "$goner exited with $?";
}
We generate anonymous subroutines and store those code references in an array, then go through that array and start that many processes, running one sub in each. After that the parent waitpids on them in the order in which they were started, but normally you'll want to reap them as they exit; see the docs listed below.
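If you'd rather reap children as they actually finish (instead of in start order), here is a minimal sketch, assuming the @procs array from the example above:

my %running = map { $_ => 1 } @procs;
while (%running) {
    my $pid = wait();                 # blocks until any child exits
    last if $pid == -1;               # no children left to reap
    delete $running{$pid};
    say "Reaped $pid with exit status ", $? >> 8;
}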
A child process always exits, or you'd have multiple processes executing all of the rest of the code in the program. Once a child process exits the kernel will notify the parent, and the parent can "pick up" that notification ("reap" the exit status of the child process) via wait/waitpid, or use a signal handler to handle/ignore it.
If the parent never does this, then after the child exits the OS is left holding the information about that (exited) child process in the process table; that's a zombie. So you do need to wait, so that the OS can get done with the child process (and so you can check how it went). Or, you can indicate in a signal handler that you don't care about the child's exit.† Modern systems reap would-be zombies in some situations, but not always, and you cannot rely on that; clean up after yourself.
Note, you'll need to be reading perlipc, fork, wait and waitpid, perlvar ... and yet other resources that'll come up while working on all this. It will take a little playing and some trial and error. Once you get it all down you may want to start using modules, at least for some types of tasks.
† To ignore SIGCHLD (exited children are then reaped automatically)
$SIG{CHLD} = 'IGNORE';
Or, you can run code in the handler (but it's well advised to keep it minimal)
$SIG{CHLD} = sub { ... };
These signal "dispositions" are inherited in fork-ed processes (but not via execve).
See the docs listed above, and the basics of %SIG variable in perlvar. Also see man(7) signal. All this is generally *nix business.
This is a global variable, affecting all code in the interpreter. In order to limit the change to the nearest scope use local
local $SIG{CHLD} = ...
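For instance, a small sketch (do_work() is just a placeholder here, and the // operator needs perl 5.10+):

{
    local $SIG{CHLD} = 'IGNORE';          # auto-reap children within this block only
    my $pid = fork // die "fork: $!";
    if ($pid == 0) { do_work(); exit }    # do_work() stands in for the real child code
    # no wait/waitpid needed here; the exited child is reaped automatically
}
# outside the block the previous SIGCHLD disposition is back in effect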
Related
Let's say we have a 'Child' and a 'Parent' process defined, along with some subroutines
my $pid = fork;
die "fork failed: $!" unless defined($pid);
local $SIG{USR1} = sub {
    kill KILL => $pid;
    $SIG{USR1} = 'IGNORE';
    kill USR1 => $$;
};
and we divide the work between them - is it possible to do the following?
if ($pid == 0) {
    sub1();
    # switch to Parent process to execute sub4()
    sub2();
    # switch to Parent process to execute sub5()
    sub3();
}
else {
    sub4();
    # send message to child process so it executes sub2
    sub5();
    # send message to child process so it executes sub3
}
If yes, can you point out how, or where I can look for the solution? Maybe a short example would suffice. :)
Thank you.
There is a whole page in the docs about inter-process communication: perlipc
To answer your question - yes, there is a way to do what you want. The problem is, exactly what it is ... depends on your use case. I can't tell what you're trying to accomplish - what do you mean by 'switch to parent', for example?
But generally the simplest (in my opinion) is using pipes:
#!/usr/bin/env perl
use strict;
use warnings;
pipe ( my $reader, my $writer );
my $pid = fork(); #you should probably test for undef for fork failure.
if ( $pid == 0 ) {
    ## in child:
    close ( $writer );
    while ( my $line = <$reader> ) {
        print "Child got $line\n";
    }
}
else {
    ## in parent:
    close ( $reader );
    print {$writer} "Parent says hello!\n";
    sleep 5;
}
Note: you may want to check your fork return values - 0 means we're in the child, a positive number (the child's PID) means we're in the parent, and undef means the fork failed.
Also: your pipe will buffer - this might trip you up in some cases. It'll run to the end just fine, but you may not get IO when you think you should.
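If the buffering gets in the way, one common workaround (a sketch, not the only option) is to turn on autoflush on the write end:

use IO::Handle;            # gives the filehandle an autoflush method
pipe ( my $reader, my $writer );
$writer->autoflush(1);     # flush every print immediately instead of buffering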
You can open pipes the other way around - for child->parent comms. Be slightly cautious when you multi-fork though, because an active pipe is inherited by every child of the fork - but it's not a broadcast.
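For example, here is the same pattern turned around, as a sketch - the child writes and the parent reads (and reaps the child at the end):

#!/usr/bin/env perl
use strict;
use warnings;

pipe ( my $from_child, my $to_parent );

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {
    ## in child: close the read end, write to the parent
    close ( $from_child );
    print {$to_parent} "Child $$ says hello!\n";
    close ( $to_parent );
    exit;
}

## in parent: close the write end, read until the child closes its end
close ( $to_parent );
while ( my $line = <$from_child> ) {
    print "Parent got: $line";
}
waitpid $pid, 0;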
In this scenario, I need my perl program to start multiple child processes that last an unknown amount of time, and in fact only the parent process knows when the child processes need to end. I've been trying to fork off more than one process and then end them from the parent, but have been unsuccessful. What I have so far:
Successfully forking off one process then ending it
my $pid = fork();
if ($pid == 0) {
    # do things in child process
}
else {
    # examine external conditions, when the time is right:
    kill 1, $pid;
}
Unsuccessfully trying to extend it to 2 processes:
my $pid = fork();
if ($pid != 0) {    # parent makes another fork
    my $pid2 = fork();
}
if ($pid == 0 || $pid2 = 0) {
    # do things in child process
}
else {
    # examine external conditions, when the time is right:
    kill 1, $pid;
    kill 2, $pid;
}
I've read all the documentation on fork available on the internet, and it was all written about forking off one process which I understand pretty well, but I have no clue how to extend it to 2 or more processes, and would appreciate any help on how to do that.
Once you understand well what's going on in the first answer (but only then!), go have a look at Parallel::ForkManager (or something similar) for real work. There are many, many small niggling details that you can get wrong while working with child processes, so using a third-party module for that can save you a lot of time.
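For the record, a minimal Parallel::ForkManager sketch (the body of the child is just placeholder work):

use strict;
use warnings;
use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(5);   # at most 5 children at a time

for my $task (1 .. 5) {
    $pm->start and next;                  # parent gets the child's PID and moves on
    # --- child: placeholder work ---
    print "Task $task running in $$\n";
    sleep $task;
    $pm->finish;                          # child exits here
}
$pm->wait_all_children;                   # parent reaps all children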
Follow this code; I hope it is self-explanatory:
my $num_process = 5;    ## for as many as you want; I tested with 5
my %processes;          ## to store the list of children

for ( 1 .. $num_process ) {
    my $pid = fork();
    if ( not defined $pid ) {
        die "Could not fork";
    }
    elsif ( $pid > 0 ) {
        ## boss
        $processes{$pid} = 1;
    }
    else {
        # do things in child process
        ## exit for child, don't forget this
        exit;
    }
}

## when things are right to kill ;-)
foreach my $pid ( keys %processes ) {
    kill 1, $pid;
}
Let's say I have this:
pipe(READ, WRITE);

$pid = fork();
if ($pid == 0) {
    close(READ);
    # do something that may be blocking
    print WRITE "done";
    close(WRITE);
    exit(0);
} else {
    close(WRITE);
    $resp = <READ>;
    close(READ);
    # do other stuff
}
In this situation, it's possible for the child to hang indefinitely. Is there a way I can read from READ for a certain amount of time (ie, a timeout) and if I don't get anything, I proceed in the parent with the assumption that the child is hanging?
Typically, in C or Perl, you use select() to test whether there is any input available. You can specify a timeout of 0 if you like, though I used 1 second in the example below:
use IO::Select;
pipe(READ, WRITE);

$s = IO::Select->new();
$s->add(\*READ);

$pid = fork();
if ($pid == 0) {
    close(READ);
    # do something that may be blocking
    for $i (0..2) {
        print "child - $i\n";
        sleep 1;
    }
    print WRITE "donechild";
    close(WRITE);
    print "child - end\n";
    exit(0);
} else {
    print "parent - $pid\n";
    close(WRITE);
    for $i (0..10) {
        print "parent - $i\n";
        # 1 second wait (timeout) here. Can be 0.
        print "parent - ", (@r = $s->can_read(1)) ? "yes" : "no", "\n";
        last if @r;
    }
    $resp = <READ>;
    print "parent - read: $resp\n";
    close(READ);
    # do other stuff
}
Is there a way I can read from READ for a certain amount of time (ie, a timeout) and if I don't get anything, I proceed in the parent with the assumption that the child is hanging?
When you fork, you are working with two entirely separate processes. You're running two separate copies of your program. Your code cannot switch back and forth between the parent and child in your program. Your program is either the parent or the child.
You can use alarm in the parent to have a SIGALRM sent to the parent process after a timeout. If I remember correctly, you set up your $SIG{ALRM} handler, start the alarm, do your read, and then set the alarm back to zero to shut it off. The whole thing needs to be wrapped in an eval.
I did this once a long time ago. For some reason, I remember that the standard system read didn't work. You have to use sysread. See Perl Signal Processing for more help.
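Roughly, the pattern looks like the sketch below, built around the READ handle from the question (the 10-second timeout and read length are arbitrary):

my $resp;
eval {
    local $SIG{ALRM} = sub { die "timeout\n" };   # the trailing \n matters for the check below
    alarm 10;                                     # give the child up to 10 seconds
    sysread(READ, $resp, 1024);                   # sysread rather than <READ>, per the note above
    alarm 0;                                      # got something in time; cancel the alarm
};
if ($@) {
    die $@ unless $@ eq "timeout\n";   # re-throw anything that isn't our timeout
    # timed out: assume the child is hanging and carry on in the parent
}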
I am aware of the many questions regarding waitpid and timeouts but they all cover this by killing the child from within an alarm handler.
That is not what I want; I want to keep the process running but release it from waitpid.
The underlying problem I'm trying to solve is a daemon process with a main loop that processes a queue. The tasks are processed one at a time.
If a task hangs, the whole main loop hangs. To get around this, fork() and waitpid seemed an obvious choice. Still, if a task hangs the loop hangs.
I can think of workarounds where I do not use waitpid at all, but then I would have to track running processes some other way, as I still want to process one task at a time in parallel with possibly hanging tasks.
I could even kill the task, but I would like to keep it running so I can examine what exactly is going wrong. A kill handler that dumps some debug information is also possible.
Anyway, the most convenient way to solve this issue is to time out waitpid if possible.
Edit:
This is how I used fork() and waitpid and it may be clearer what is meant by child.
my $pid = fork();
if ($pid == 0) {
    # I am the child and I don't want to die
}
elsif ($pid > 0) {
    waitpid $pid, 0;
    # I am the parent and I don't want to wait longer than $timeout
    # for the child to exit
}
else {
    die "Could not fork()";
}
Edit:
Using waitpid WNOHANG does what I want. Is this usage good practice or would you do it differently?
use strict;
use warnings;
use 5.012;
use POSIX ':sys_wait_h';
my $pid = fork();
if ($pid == 0) {
    say "child will sleep";
    sleep 20;
    say "child slept";
}
else {
    my $time = 10;
    my $status;
    do {
        sleep 1;
        $status = waitpid -1, WNOHANG;
        $time--;
    } while ($time && not $status);
    say "bye";
}
If a task hangs the whole main loop hangs. To get around this fork() and waitpid seemed an obvious choice. Still if a task hangs the loop hangs.
Use waitpid with the WNOHANG option. This way it's not going to suspend the parent process and will immediately return 0 when the child has not yet exited. In your main loop you'll have to periodically poll all the children (tasks).
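A sketch of what that polling could look like inside such a main loop (%running and the queue handling are placeholders):

use POSIX ':sys_wait_h';

my %running;    # pid => task description, filled in wherever you fork a task

while (1) {
    # ... pull new tasks off the queue and fork them here, recording the PIDs in %running ...

    # non-blocking sweep over any children that have finished
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        my $task = delete $running{$pid};
        warn "Task '$task' (pid $pid) finished with status $?\n";
    }

    sleep 1;    # or whatever drives one iteration of your main loop
}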
Instead of polling all the children periodically, you might want to set up a signal handler to handle SIGCHLD... from perlipc:
use POSIX ":sys_wait_h";
$SIG{CHLD} = sub {
    while ((my $child = waitpid(-1, WNOHANG)) > 0) {
        $Kid_Status{$child} = $?;
    }
};
# do something that forks...
Enabling and handling SIGCHLD is also a possibility; it'll notify you of child process state changes without polling -- see sigprocmask(2) and signal(3) in the manual pages.
I'm trying to write a basic multiprocessing task, and this is what I have. First of all, I don't know the right way to make this program non-blocking, because while I am waiting for the response of one child (with waitpid) the other processes also have to wait in the queue; but what happens if some child processes die earlier (I mean, the processes die out of order)? So I've been searching, and I found that I can get the PID of the process that just died by using waitpid(-1, WNOHANG). I always get a warning that WNOHANG is not a number, but when I add the lib sys_wait_h I don't get that error - yet the script never waits for the PID. What might the error be?
#!/usr/bin/perl
#use POSIX ":sys_wait_h"; #if I use this library, I dont get the error, but it wont wait for the return of the child
use warnings;

main(@ARGV);

sub main {
    my $num = 3;
    for (1..$num) {
        my $pid = fork();
        if ($pid) {
            print "Im going to wait (Im the parent); my child is: $pid\n";
            push(@childs, $pid);
        }
        elsif ($pid == 0) {
            my $slp = 5 * $_;
            print "$_ : Im going to execute my code (Im a child) and Im going to wait like $slp seconds\n";
            sleep $slp;
            print "$_ : I finished my sleep\n";
            exit(0);
        }
        else {
            die "couldn't fork: $!\n";
        }
    }

    foreach (@childs) {
        print "Im waiting for: $_\n";
        my $ret = waitpid(-1, WNOHANG);
        #waitpid($_, 0);
        print "Ive just finish waiting for: $_; the return: $ret \n";
    }
}
Thanks in advance, bye!
If you use WNOHANG, the process will not block if no children have terminated. That's the point of WNOHANG; it ensures that waitpid() will return quickly. In your case, it looks like you want to just use wait() instead of waitpid().
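That is, the waiting loop from the question could look something like this sketch (using the @childs array from above):

foreach (@childs) {
    print "I'm waiting for any child to finish\n";
    my $ret = wait();    # blocks until one of the children exits
    print "I've just reaped child $ret (exit status ", $? >> 8, ")\n";
}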
I find that POE handles all of this stuff for me quite nicely. It's asynchronous (non-blocking) control of all sorts of things, including external processes. You don't have to deal with all the low level stuff because POE does it for you.