TCP Client hangs with Perl fork() + system()

I have a Perl script running a TCP listener via the Net::Server module. When a remote client connects, it sends the filename of an mp3 file to play. When I fork() and then call system("mpg123 $filename"), the client hangs. How can I background the mpg123 process so the child can close the connection?
my $pid = fork();
if (defined $pid && $pid == 0)
{
    # child process -- never gets to the print statement until $cmd is done
    system($cmd);
    print STDERR "child launched\n";
    exit(0);
}

Perl's system doesn't return until the command completes. You might rearrange the child to:
if (defined $pid && $pid == 0)
{
    # child process
    warn "child launched\n";
    exec $cmd or die "$0: exec $cmd: $!";
}
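In case it helps, here is a minimal sketch of the complete fork/exec pattern for this case. It assumes $filename has already been read from the socket and validated; the variable names are illustrative, not from the original post.

my $pid = fork();
die "fork failed: $!" if !defined $pid;
if ($pid == 0) {
    # Child: replace this process with mpg123. exec never returns on
    # success, so the die only runs if the exec itself fails.
    # Note: handles the child inherits (e.g. the client socket) stay
    # open in mpg123 unless they are closed before the exec.
    exec('mpg123', $filename) or die "$0: exec mpg123: $!";
}
# Parent: continues (and can close the client connection) immediately.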

I ended up using Proc::Daemon:
#!/usr/bin/perl -w
use strict;
use Proc::Daemon;

my $dm = Proc::Daemon->new( work_dir => '/tmp/' );
my $pid = $dm->Init( { exec_command => '/usr/bin/find / >/tmp/find.txt' } );

while (1)
{
    print "child status: " . $dm->Status($pid) . "\n";
    sleep 2;
    if ($dm->Status($pid) == 0)
    {
        print "child terminated: " . $dm->Status($pid) . "\n";
        last;
    }
}
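Applied back to the mpg123 question, the same pattern might look like the sketch below; $filename is assumed to be the already-validated name received over the socket.

use Proc::Daemon;

# Daemonize mpg123 so the server child can close the client
# connection immediately. Init returns the daemon's pid.
my $daemon = Proc::Daemon->new( work_dir => '/tmp/' );
my $mpg_pid = $daemon->Init( { exec_command => "mpg123 $filename" } );
print STDERR "mpg123 running as pid $mpg_pid\n";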

Related

Perl kill process with timeout ignored

I was testing my source code, in which the child process calls several other programs (some of which are C++).
# Other variables and functions
my $MAX_TIME = 10;    # testing: 10 minutes
my $timeRemaining = $MAX_TIME * 60;

my $pid = fork();
if ( $pid == 0 ) {
    # child process
    my $nowTime = localtime;
    print "Run started at $nowTime\n";
    # This run() calls a for loop in perl; each iteration runs several C++ programs
    run();
    setpgrp(0,0);
}
elsif ($pid > 0) {
    my $nowTime = localtime;
    eval {
        local $SIG{ALRM} = sub {
            kill -9, $pid;
            print "Run completed at $nowTime\nJob time out for $MAX_TIME minutes\n";
            log();
            die "TIMEOUT!\n";
        };
        alarm $timeRemaining;
        waitpid($pid, 0);
    };
    print "Run completed at $nowTime with no timeout\n";
}
When I checked the printout, I noticed that after 10 minutes the "Run completed at $nowTime with no timeout\n" part gets printed, yet the child process is still executing. The die "TIMEOUT!\n"; part in the parent process never runs.
Is it because the C++ programs that the Perl program calls cannot be killed once they have started?
First of all, kill is failing because $pid isn't a process group.
run();
setpgrp(0,0);
should be
setpgrp(0,0);
run();
Secondly, the reason you see
Run completed at $nowTime with no timeout
even when there's a timeout is that you execute
print "Run completed at $nowTime with no timeout\n";
whether there's a timeout or not.
Thirdly, you don't disable the alarm when the child is reaped. Add
alarm(0);
Fourthly, you expect $nowTime to contain the current time without making it so.
Finally, you still need to reap your child even if you kill it. (Ok, this can be skipped if the parent exits immediately anyway.)
Fixed:
use strict;
use warnings;
use POSIX qw( strftime );

sub current_time { strftime("%Y-%m-%d %H:%M:%S", localtime) }

sub run {
    print("a\n");
    system('perl', '-e', 'sleep 3;');
    print("b\n");
    system('perl', '-e', 'sleep 3;');
    print("c\n");
}

my $MAX_TIME = 5;

my $pid = fork();
die($!) if !defined($pid);

if ($pid) {
    if (eval {
        local $SIG{ALRM} = sub {
            kill KILL => -$pid;
            die "TIMEOUT!\n";
        };

        alarm($MAX_TIME);
        waitpid($pid, 0);
        alarm(0);
        return 1;
    }) {
        print "[".current_time()."] Run completed.\n";
    } else {
        die($@) if $@ ne "TIMEOUT!\n";
        print "[".current_time()."] Run timed out.\n";
        waitpid($pid, 0);
        print "[".current_time()."] Child reaped.\n";
    }
} else {
    print "[".current_time()."] Run started.\n";
    setpgrp(0, 0);
    run();
}
Output:
[2017-05-11 14:58:06] Run started.
a
b
[2017-05-11 14:58:11] Run timed out.
[2017-05-11 14:58:11] Child reaped.

How to autorun a Perl script after it's killed?

I have a script that uses multiple $pids. If too many child connections are spawned, my server hangs and the program stops running.
I want to restart it automatically from a cron job on the server, but I can't, because I use an alternative cPanel. So I want to restart it automatically from the script itself.
I tried to restart it with:
kill 9, $pid;
sleep 60;
which displays this output:
Child Spawned : 15945
Killed
But I don't know how to autorun or re-execute the script.
It looks like you want a pre-forking worker pool. Your server process starts a number of child processes to handle requests, and automatically restarts any that die.
A basic template:
use strict;
use warnings;
use POSIX qw(sigprocmask SIG_BLOCK SIG_UNBLOCK SIGINT SIGTERM WNOHANG);

my $pool_size = 4;    # number of workers
my %pool;

# When one or more workers die, delete them from the pool
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $pool{$pid};
    }
};

# If a fatal signal is sent to the server, kill all children and exit
for my $sig (qw(INT TERM)) {
    $SIG{$sig} = sub {
        local $SIG{CHLD} = 'IGNORE';
        kill $sig => keys %pool;
        exit;
    };
}

# HUP = restart all workers
$SIG{HUP} = sub {
    print "Caught HUP, restarting workers.\n";
    kill TERM => keys %pool;
};

# Set up pool
make_worker() for (1 .. $pool_size);

# Maintain population
while (1) {
    sleep;    # wait for a signal
    while (keys %pool < $pool_size) {
        make_worker();
    }
}

exit;

sub make_worker {
    # Block INT and TERM during fork so the parent's handlers don't fire
    # in a half-initialized child
    my $sigset = POSIX::SigSet->new(SIGINT, SIGTERM);
    sigprocmask(SIG_BLOCK, $sigset) or die "Couldn't block signals for fork: $!";

    my $pid = fork;
    die "fork: $!" if !defined $pid;

    if ($pid) {
        # Parent: record the worker and restore signals
        sigprocmask(SIG_UNBLOCK, $sigset) or die "Couldn't unblock signals for fork: $!";
        $pool{$pid} = 1;
        return;
    }
    else {
        # Child: restore default handlers before unblocking
        $SIG{$_} = 'DEFAULT' for qw(INT TERM);
        sigprocmask(SIG_UNBLOCK, $sigset) or die "Couldn't unblock signals for child: $!";
        # Your worker code goes here.
        exit;
    }
}
For simply restarting a single command in a loop, try:
while (1) {
    system("/path/to/your-command", "--args");
}
As soon as the command dies (for any reason), it is re-executed. Processing the exit code of system is not that simple, so I recommend using IPC::System::Simple here.
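For comparison, checking $? by hand after a plain system call looks roughly like this (these are the checks documented in perldoc -f system), which is the boilerplate IPC::System::Simple saves you:

my $rc = system("/path/to/your-command", "--args");
if ($rc == -1) {
    # system couldn't even start the command
    warn "failed to execute: $!\n";
}
elsif ($? & 127) {
    # low 7 bits of $? hold the signal that killed the child
    warn sprintf "child died with signal %d\n", $? & 127;
}
else {
    # high bits hold the exit value
    warn sprintf "child exited with value %d\n", $? >> 8;
}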
use IPC::System::Simple qw(system);
use Try::Tiny;

while (1) {
    try { system(...) }
    catch { warn "Command failed, restarting: $_" };
}
You should also detect if the command exited too quickly, which would indicate a fatal error.
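One way to do that, sketched here with an arbitrary 5-second threshold, is to time each run and give up when the command dies too quickly:

use Time::HiRes qw(time);

my $min_runtime = 5;    # threshold is an assumption; tune to your command
while (1) {
    my $started = time();
    system("/path/to/your-command", "--args");
    if (time() - $started < $min_runtime) {
        die "Command exited after less than ${min_runtime}s; assuming a fatal error\n";
    }
}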

perl: executing multiple system processes and waiting for them to finish

Currently in my Perl script I make a call like the following:
system(" ./long_program1 & ./long_program2 & ./long_program3 & wait ");
I would like to be able to log when each of the long running commands executes while still executing them asynchronously. I know that the system call causes perl to fork, so is something like this possible? Could it be replaced by multiple perl fork() and exec() calls?
Please help me find a better solution.
Yes, definitely. You can fork off a child process for each of the programs to be executed.
You can either do system() or exec() after forking, depending on how much processing you want your Perl code to do after the system call finishes (since exec() is very similar in functionality to system(); exit $rc;)
foreach my $i (1, 2, 3) {
    my $pid = fork();
    if ($pid == 0) {    # child
        exec("./long_program$i");
        die "Exec $i failed: $!\n";
    } elsif (!defined $pid) {
        warn "Fork $i failed: $!\n";
    }
}
1 while wait() >= 0;
Please note that if you need to do a lot of forks, you are better off controlling them via Parallel::ForkManager instead of doing forking by hand.
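A sketch of the same three-program example with Parallel::ForkManager, assuming the module is installed:

use Parallel::ForkManager;

my $pm = Parallel::ForkManager->new(3);    # run at most 3 children at once
foreach my $i (1, 2, 3) {
    $pm->start and next;    # parent: move on to the next program
    # Child: run the program, then report its exit value via finish()
    system("./long_program$i");
    $pm->finish($? >> 8);
}
$pm->wait_all_children;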
Two alternatives:
use IPC::Open3 qw( open3 );

sub launch {
    open(local *CHILD_STDIN, '<', '/dev/null') or die $!;
    return open3('<&CHILD_STDIN', '>&STDOUT', '>&STDERR', @_);
}

my %children;
for my $cmd (@cmds) {
    print "Command $cmd started at ".localtime."\n";
    my $pid = launch($cmd);
    $children{$pid} = $cmd;
}

while (%children) {
    my $pid = wait();
    die $! if $pid < 1;
    my $cmd = delete($children{$pid});
    print "Command $cmd ended at ".localtime." with \$? = $?.\n";
}
I use open3 since it's shorter than even a trivial fork+exec, and since it doesn't misattribute exec errors to the command you launch the way a trivial fork+exec does.
use threads;

my @threads;
for my $cmd (@cmds) {
    push @threads, async {
        print "Command $cmd started at ".localtime."\n";
        system($cmd);
        print "Command $cmd ended at ".localtime." with \$? = $?.\n";
    };
}

$_->join() for @threads;

Killing a child and its children when the child was created using open

Here's my code, with error handling and other stuff removed for clarity:
sub launch_and_monitor {
    my ($script, $timeout) = @_;

    sub REAPER {
        while ((my $child = waitpid(-1, &WNOHANG)) > 0) {}
        $SIG{CHLD} = \&REAPER;
    }
    $SIG{CHLD} = \&REAPER;

    my $pid = fork;
    if (defined $pid) {
        if ($pid == 0) {
            # in child
            monitor($timeout);
        }
        else {
            launch($script);
        }
    }
}
The launch sub executes a shell script which in turn launches other processes, like so:
sub launch($) {
    my ($script) = @_;
    my $pid = open(PIPE, "$script|");
    # write pid to pidfile
    if ($pid != 0) {
        while (<PIPE>) {
            # do stuff with output
        }
        close(PIPE) or die $!;
    }
}
The monitor sub basically just waits for a specified period of time and then attempts to kill the shell script.
sub monitor($) {
    my ($timeout) = @_;
    sleep $timeout;
    # check if the script is still running and if so get its pid from the pidfile
    if (...) {
        my $pid = getpid(...);
        kill 9, $pid;
    }
}
This kills the script; however, it does not kill any of its subprocesses. How can I fix that?
You can do this with process groups, if your operating system supports them. You need to make the script process become a process group leader. The child processes that it runs will inherit the process group from their parent. You can then use kill to send a signal to each process in the group at the same time.
In launch(), you will need to replace the open line with one that forks. Then in the child, you would call setpgrp() before exec'ing the command. Something like the following should work:
my $pid = open(PIPE, "-|");
if (0 == $pid) {
    setpgrp(0, 0);
    exec $script;
    die "exec failed: $!\n";
}
else {
    while (<PIPE>) {
        # do stuff with output
    }
    close(PIPE) or die $!;
}
Later, to kill the script process and its children, negate the process ID that you're signalling:
kill 9, -$pid;
In general, I don't think you can expect signals to be propagated into all child processes; this isn't specific to Perl.
That said, you might be able to use the process group signal feature built into Perl's kill():
...if SIGNAL is negative, it kills process groups instead of processes...
You probably need to use setpgrp() on your (direct) child process, then change your kill call to something like:
kill -9, $pgrp;
Try adding:
use POSIX qw(setsid);
setsid;
at the top of your launch_and_monitor function. This will put your processes in a separate session, and cause things to exit when the session leader (i.e. the master) exits.
Killing a process group works, but don't forget that the parent can also be killed on its own. Assuming the child processes have an event loop, they can check the validity of the parent socket that was created with socketpair() before the fork(). In fact, select() returns cleanly when the parent's end of the socket is gone; all that needs to be done is to check the socket.
E.g.:
use strict;
use warnings;
use Socket;

$SIG{CHLD} = sub {};

socketpair(my $p, my $c, AF_UNIX, SOCK_STREAM, PF_UNSPEC) or die $!;

print "parent $$, fork 2 kids\n";
for (0 .. 1) {
    my $kid = fork();
    unless ($kid) {
        child_loop($p, $c);
        exit;
    }
    print "parent $$, forked kid $kid\n";
}

print "parent $$, waiting 5s\n";
sleep 5;
print "parent $$ exit, closing sockets\n";

sub child_loop {
    my ($p_s, $c_s) = @_;
    print "kid: $$\n";
    close($c_s);

    my $rin = '';
    vec($rin, fileno($p_s), 1) = 1;
    while (1) {
        select(my $rout = $rin, undef, undef, undef);
        if (vec($rout, fileno($p_s), 1)) {
            print "kid: $$, parent gone, exiting\n";
            last;
        }
    }
}
Runs like this:
tim@mint:~$ perl ~/abc.pl
parent 5638, fork 2 kids
parent 5638, forked kid 5639
kid: 5639
parent 5638, forked kid 5640
parent 5638, waiting 5s
kid: 5640
parent 5638 exit, closing sockets
kid: 5640, parent gone, exiting
kid: 5639, parent gone, exiting
tim@mint:~$