How to auto-restart a Perl script after it is killed?

I have a script that uses multiple $pids. If too many child connections are spawned, my server hangs and the program stops running.
I want to restart it automatically from a cron job on my server, but I can't do that because I use an alternative to cPanel. So I want to restart it automatically from within the script itself.
I tried to restart it with:
kill 9, $pid;
sleep 60;
which displays this output:
Child Spawned : 15945
Killed
But I don't know how to autorun or re-execute the script after that.

It sounds like you want a pre-forking worker pool: your server process starts a number of child processes to handle requests, and automatically restarts any that die.
A basic template:
use strict;
use warnings;
use POSIX qw(sigprocmask SIG_BLOCK SIG_UNBLOCK SIGINT SIGTERM WNOHANG);

my $pool_size = 4;    # 4 workers
my %pool;

# When one or more workers die, delete them from the pool
$SIG{CHLD} = sub {
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        delete $pool{$pid};
    }
};

# If a fatal signal is sent to the server, kill all children and exit
for my $sig (qw(INT TERM)) {
    $SIG{$sig} = sub {
        local $SIG{CHLD} = 'IGNORE';
        kill $sig => keys %pool;
        exit;
    };
}

# HUP = restart all workers
$SIG{HUP} = sub {
    print "Caught HUP, restarting workers.\n";
    kill TERM => keys %pool;
};

# Set up pool
make_worker() for (1 .. $pool_size);

# Maintain population
while (1) {
    sleep;    # wait for a signal
    while (keys %pool < $pool_size) {
        make_worker();
    }
}
exit;

sub make_worker {
    # Block INT/TERM during fork, so the parent handlers are not called
    my $sigset = POSIX::SigSet->new(SIGINT, SIGTERM);
    sigprocmask(SIG_BLOCK, $sigset)
        or die "Couldn't block signals for fork: $!";
    my $pid = fork;
    die "fork: $!" if !defined $pid;
    if ($pid) {
        sigprocmask(SIG_UNBLOCK, $sigset)
            or die "Couldn't unblock signals for fork: $!";
        $pool{$pid} = 1;
        return;
    }
    else {
        $SIG{$_} = 'DEFAULT' for qw(INT TERM);
        sigprocmask(SIG_UNBLOCK, $sigset)
            or die "Couldn't unblock signals for child: $!";
        # Your worker code goes here.
        exit;
    }
}
For simply restarting a single command in a loop, try:
while (1) {
    system("/path/to/your-command", "--args");
}
As soon as the command dies (for any reason), it is re-executed. Processing the exit code of system is not that simple, so I recommend using IPC::System::Simple here.
use IPC::System::Simple qw(system);
use Try::Tiny;

while (1) {
    try   { system(...) }
    catch { warn "Command failed, restarting: $_" };
}
Note that inside a Try::Tiny catch block the error is in $_, not $!.
You should also detect if the command exited too quickly, which would indicate a fatal error.
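One possible sketch of that fast-exit check (the 2-second/3-strike thresholds and the `perl -e 'exit 1'` stand-in for the real command are my own illustrative choices, not from the post): time each run and give up after several consecutive quick deaths.

```perl
use strict;
use warnings;

# Sketch only: thresholds and the stand-in command are hypothetical.
my $min_runtime     = 2;    # a healthy run should outlive this (seconds)
my $fast_exit_limit = 3;    # consecutive fast exits before giving up
my $fast_exits      = 0;

while ($fast_exits < $fast_exit_limit) {
    my $started = time;
    system('perl', '-e', 'exit 1');    # replace with the real command
    if (time - $started < $min_runtime) {
        $fast_exits++;
        warn "died after <${min_runtime}s ($fast_exits/$fast_exit_limit)\n";
    }
    else {
        $fast_exits = 0;    # a long enough run resets the counter
    }
}
warn "giving up: command keeps failing immediately\n";
```

A healthy long-running command resets the counter, so only a burst of immediate failures aborts the restart loop.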

Related

Perl : Implement timeout (& kill) for process invoked via backticks

I am trying to implement a routine which will take in a "command" and associated "timeout".
If the command completes within the specified time, it should return the output.
Or else - it should kill the process.
sub runWithTimeout {
    my ($pCommand, $pTimeOut) = @_;
    my (@aResult);
    print "Executing command [$pCommand] with timeout [$pTimeOut] sec/s \n";
    eval {
        local $SIG{ALRM} = sub { die "alarm\n" };
        alarm $pTimeOut;
        @aResult = `$pCommand`;
        alarm 0;
    };
    if ($@) {
        print("Command [$pCommand] timed out\n");
        # Need to kill the process. However I don't have the PID here.
        # kill -9 pid
    } else {
        print "Command completed\n";
        #print Dumper(\@aResult);
    }
}
Sample Invocation :
&runWithTimeout('ls -lrt',5);
Executing command [ls -lrt] with timeout [5] sec/s
Command completed
&runWithTimeout('sleep 10;ls -lrt',5);
Executing command [sleep 10;ls -lrt] with timeout [5] sec/s
Command [sleep 10;ls -lrt] timed out
I guess that if I had the PID, I could use kill on it in the if block.
Any pointer on how I can get the PID (or any other better approach) would be a great help.
Don't run the command with backticks; use open instead. For bonus points, use IO::Select and can_read to see if you've got any output:
use IO::Select;

my $pid = open(my $output_fh, '-|', 'ls -lrt');
my $select = IO::Select->new($output_fh);
while ($select->can_read(5)) {
    my $line = <$output_fh>;
    print "GOT: $line";
}
## timed out after 5s waiting.
kill 15, $pid;
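A possible refinement of that answer (my sketch, not from the original): distinguish a normal end-of-file from a real timeout with a flag, and reap the child afterwards. The `ls -lrt` command and 5-second timeout are carried over from the answer above.

```perl
use strict;
use warnings;
use IO::Select;

# The $done flag separates "command finished" from "timed out".
my $pid = open(my $output_fh, '-|', 'ls', '-lrt') or die "open: $!";
my $select = IO::Select->new($output_fh);

my $done = 0;
while ($select->can_read(5)) {
    my $line = <$output_fh>;
    if (!defined $line) { $done = 1; last }    # EOF: command finished
    print "GOT: $line";
}
kill 'TERM', $pid unless $done;    # loop was left via timeout, not EOF
waitpid $pid, 0;                   # reap the child either way
```

Without the flag, the unconditional kill in the original snippet fires even when the command completed normally.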

Perl kill process with timeout ignored

I was testing my source code, in which the child process calls several other programs (some of which are C++).
# Other variables and functions
my $MAX_TIME = 10;    # testing 10 minutes
my $timeRemaining = $MAX_TIME * 60;
my $pid = fork();
if ($pid == 0) {
    # child process
    my $nowTime = localtime;
    print "Run started at $nowTime\n";
    # This run() calls a for loop in perl; each iteration runs several C++ programs
    run();
    setpgrp(0, 0);
}
elsif ($pid > 0) {
    my $nowTime = localtime;
    eval {
        local $SIG{ALRM} = sub {
            kill -9, $pid;
            print "Run completed at $nowTime\nJob time out for $MAX_TIME minutes\n";
            log();
            die "TIMEOUT!\n";
        };
        alarm $timeRemaining;
        waitpid($pid, 0);
    };
    print "Run completed at $nowTime with no timeout\n";
}
When I checked the printout, I noticed that after 10 minutes the "Run completed at $nowTime with no timeout\n" part gets printed, yet the child process is still executing. The die "TIMEOUT!\n"; part in the parent process is never reached.
Is it because the C++ programs that the Perl program calls cannot be killed once they have started?
First of all, kill is failing because $pid isn't a process group.
run();
setpgrp(0,0);
should be
setpgrp(0,0);
run();
Secondly, the reason you see
Run completed at $nowTime with no timeout
even when there's a timeout is that you execute
print "Run completed at $nowTime with no timeout\n";
whether there's a timeout or not.
Thirdly, you don't disable the alarm when the child is reaped. Add
alarm(0);
Fourthly, you expect $nowTime to contain the current time without making it so.
Finally, you still need to reap your child even if you kill it. (Ok, this can be skipped if the parent exits immediately anyway.)
Fixed:
use strict;
use warnings;
use POSIX qw( strftime );

sub current_time { strftime("%Y-%m-%d %H:%M:%S", localtime) }

sub run {
    print("a\n");
    system('perl', '-e', 'sleep 3;');
    print("b\n");
    system('perl', '-e', 'sleep 3;');
    print("c\n");
}

my $MAX_TIME = 5;

my $pid = fork();
die($!) if !defined($pid);
if ($pid) {
    if (eval {
        local $SIG{ALRM} = sub {
            kill KILL => -$pid;
            die "TIMEOUT!\n";
        };
        alarm($MAX_TIME);
        waitpid($pid, 0);
        alarm(0);
        return 1;
    }) {
        print "[".current_time()."] Run completed.\n";
    } else {
        die($@) if $@ ne "TIMEOUT!\n";
        print "[".current_time()."] Run timed out.\n";
        waitpid($pid, 0);
        print "[".current_time()."] Child reaped.\n";
    }
} else {
    print "[".current_time()."] Run started.\n";
    setpgrp(0, 0);
    run();
}
Output:
[2017-05-11 14:58:06] Run started.
a
b
[2017-05-11 14:58:11] Run timed out.
[2017-05-11 14:58:11] Child reaped.

Kill current LWP request with CTRL + C

I have a script based on Term::ReadLine and LWP::UserAgent
The logic is like this:
while (defined ($_ = $term->readline('console> ')))
{
    next unless $_;
    chomp;
    if ($_ eq 'exit')
    {
        last;
    }
    &run($_);
}

sub run {
    my $ua = LWP::UserAgent->new;
    my $resp = $ua->get(...);
    say $resp->content;
}
In run it makes an LWP request. Now, if I press CTRL + C, not only is the LWP request terminated, the whole Perl script is terminated as well.
I want to kill only the LWP request. Any ideas?
I can add a SIGINT handler, but I don't know what the handler should do.
Convert the signal into an exception.
local $SIG{INT} = sub { die "SIGINT\n" };
Generally, one would then wrap the code in an eval BLOCK, but LWP::UserAgent catches these exceptions and returns an error response.
For example,
use feature 'say';
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();

my $response = do {
    local $SIG{INT} = sub { die "SIGINT\n" };
    $ua->get("http://localhost/zzz.crx")
};

say $response->is_success ? "Successful" : "Unsuccessful";
say $response->code;
say $response->status_line;
Output if no SIGINT received:
Successful
200
200 OK
Output if SIGINT received:
Unsuccessful
500
500 SIGINT
One way to stop code is to run it in a child process and kill that child in the parent's signal handler when SIGINT is received by the parent. The parent keeps running since the signal is handled.
use warnings;
use strict;
use feature 'say';

$SIG{INT} = \&sigint_handler;    # or: $SIG{INT} = sub { ... };

say "Parent $$ start.";
my $pid = run_proc();
my $gone_pid = waitpid $pid, 0;    # check status, in $?
say "Parent exiting";

sub run_proc {
    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {    # child process
        say "\tKid, sleep 5 (time for Ctrl-C)";    # run your job here
        sleep 5;
        say "\tKid exiting.";
        exit;
    }
    return $pid;
}

sub sigint_handler {
    if ($pid and kill 0, $pid) {
        say "Got $_[0], send 'kill TERM' to child process $pid.";
        my $no_signalled = kill 15, $pid;
    }
    else { die "Got $_[0]" }    # or use exit
}
A good deal of the code is for diagnostic prints. Some comments follow.
The kill only sends a signal; it does not in any way ensure that the process terminates. Check this with kill 0, $pid, which returns true if the process has not been reaped (even if it's a zombie). On my system TERM is 15, and even though this is very common, please check.
The signal could come at a time when the child is not running. The handler first checks whether the $pid is out there, and if not it dies/exits, respecting SIGINT. Change as appropriate.
After the fork the parent drops past if ($pid == 0) and returns the $pid right away.
You can install $SIG{TERM} in the child, where it can clean up if it needs to exit orderly.
The SIGINT handler will run in the child as well, so "Got $_[0] ..." is printed twice. If this is a concern, add a handler to the child to ignore the signal: $SIG{INT} = 'IGNORE';. With this in place, and with Ctrl-C hit while the child is running, the output is
Parent 9334 start.
Kid, sleep 5 (time for Ctrl-C)
^CGot INT, send 'kill TERM' to child process 9335.
Parent exiting
The status of the child once it has exited can be checked via $?; see system and $? in perlvar.
Documentation: fork (and exec, system), %SIG in perlvar, waitpid, parts of perlipc, kill.
If the job done in the child needed to communicate with the parent then there would be more to do. However, the code snippet added to the question indicates that this is not the case.
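As a side note on the `kill 0` existence check and the TERM signal number mentioned above, here is a small dependency-free sketch of my own: `kill 0` probes whether a PID is signalable without sending anything, and the numeric values can be looked up portably via %Config instead of hard-coding 15.

```perl
use strict;
use warnings;
use Config;

# Build a name-to-number signal map from this perl's configuration.
my %signo;
my @names = split ' ', $Config{sig_name};
@signo{@names} = split ' ', $Config{sig_num};

# `kill 0` sends no signal; it only reports whether the PID can be signalled.
my $pid = $$;    # our own PID, guaranteed to exist
print "process $pid exists\n" if kill 0, $pid;
print "SIGTERM is number $signo{TERM}\n";
```

On most systems this prints 15 for TERM, but using %Config avoids relying on that.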
You need to provide a callback in your call to $ua->request. Issuing die in that callback will terminate the transfer.
You then just need to set a flag variable in your Ctrl-C signal handler, and die in your callback if that flag is set.
I'll write some code when I get back to a PC, and when you have shown what your run subroutine does.
Here's some code that looks right, but I can't test it at present
Beware that run is a dire identifier for any subroutine, especially one that starts a network transfer and prints the result
sub run {
    my ($url) = @_;

    my $die;
    local $SIG{INT} = sub { $die = 1 };

    my $ua = LWP::UserAgent->new;

    my $resp = $ua->get(
        $url,
        ':content_cb' => sub {
            die "Interrupted LWP transfer" if $die;
            my ($data, $resp, $proto) = @_;
            print $data;
        },
        ':read_size_hint' => 1024
    );

    print "\n";    # Emulate additional newline from `say`
}
Note that reducing :read_size_hint will cause the callback to be called more frequently with smaller chunks of data. That will improve the response to Ctrl-C but reduce the efficiency of the transfer.

perl process queue

I have a Perl script which forks a number of sub-processes. I'd like to have some kind of functionality like xargs --max-procs=4 --max-args=1 or make -j 4, where Perl will keep a given number of processes running until it runs out of work.
It's easy to fork four processes and wait for them all to complete, then fork another four, but I'd like to keep four (or n) processes running at the same time, forking a new process as soon as one completes.
Is there a simple way in Perl to implement such a process pool?
Forks::Super can handle this requirement.
use Forks::Super MAX_PROC => 5, ON_BUSY => [ block | queue ];
Calls to fork() can block until the number of active subprocesses falls below 5, or you can pass additional parameters to the fork call and the tasks to perform can queue up:
fork { sub => sub { ... task to run in subprocess ... } }
When one subprocess finishes, another job on the queue will start up.
(I am the author of this module).
Check out Parallel::ForkManager -- it does much of what you describe. You can set a maximum number of processes, and the callback function could start a new child as soon as one finishes (as long as there is work to do).
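For example, a minimal sketch of that pattern (my own illustration, assuming Parallel::ForkManager is installed from CPAN; the task count and sleep are placeholders):

```perl
use strict;
use warnings;
use Parallel::ForkManager;

# At most 4 children run at once; start() waits for a free slot, so a
# new child is forked as soon as an old one finishes.
my $pm = Parallel::ForkManager->new(4);

my $completed = 0;
$pm->run_on_finish(sub { $completed++ });    # runs in the parent on reap

for my $task (1 .. 8) {
    $pm->start and next;    # parent: returns child PID, move to next task
    # ----- child process -----
    sleep 1;                # stand-in for real work
    $pm->finish;            # child exits here
}
$pm->wait_all_children;
print "completed $completed tasks\n";
```

The run_on_finish callback is the hook the answer alludes to: it fires in the parent each time a child is reaped, which is the natural place to hand out more work.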
While I would almost always use a CPAN module, or write something with the fantastic AnyEvent modules, I think it's important to understand how these things work under the hood. Here's an example that has no dependencies other than perl itself. The same approach could also be written in C without too much trouble.
#!/usr/bin/env perl
use strict;
use warnings;

## run a function in a forked process
sub background (&) {
    my $code = shift;
    my $pid = fork;
    if ($pid) {
        return $pid;
    } elsif ($pid == 0) {
        $code->();
        exit;
    } else {
        die "can't fork: $!";
    }
}

my @work = ('sleep 30') x 8;
my %pids = ();

for (1 .. 4) {
    my $w = shift @work;
    my $pid = background {
        exec $w;
    };
    $pids{$pid} = $w;
}

while ((my $pid = waitpid(-1, 0)) > 0) {
    if ($?) {
        if ($? & 127) {
            warn "child died with signal " . ($? & 127);
        } else {
            warn "child exited with value " . ($? >> 8);
        }
        ## redo work that died or got killed
        my $npid = background {
            exec $pids{$pid};
        };
        $pids{$npid} = delete $pids{$pid};
    } else {
        delete $pids{$pid};
        ## send more work if there is any
        if (my $w = shift @work) {
            my $pid = background {
                exec $w;
            };
            $pids{$pid} = $w;
        }
    }
}

Killing child and its children when child was created using open

Here's my code, with error handling and other stuff removed for clarity:
sub launch_and_monitor {
    my ($script, $timeout) = @_;

    sub REAPER {
        while ((my $child = waitpid(-1, &WNOHANG)) > 0) {}
        $SIG{CHLD} = \&REAPER;
    }
    $SIG{CHLD} = \&REAPER;

    my $pid = fork;
    if (defined $pid) {
        if ($pid == 0) {
            # in child
            monitor($timeout);
        }
        else {
            launch($script);
        }
    }
}
The launch sub executes a shell script which in turn launches other processes, like so:
sub launch($) {
    my ($script) = @_;
    my $pid = open(PIPE, "$script|");
    # write pid to pidfile
    if ($pid != 0) {
        while (<PIPE>) {
            # do stuff with output
        }
        close(PIPE) or die $!;
    }
}
The monitor sub basically just waits for a specified period of time and then attempts to kill the shell script.
sub monitor($) {
    my ($timeout) = @_;
    sleep $timeout;
    # check if the script is still running, and if so get the pid from the pidfile
    if (...) {
        my $pid = getpid(...);
        kill 9, $pid;
    }
}
This kills the script; however, it does not kill any of its subprocesses. How can I fix that?
You can do this with process groups, if your operating system supports them. You need to make the script process become a process group leader. The child processes that it runs will inherit the process group from their parent. You can then use kill to send a signal to each process in the group at the same time.
In launch(), you will need to replace the open line with one that forks. Then in the child, you would call setpgrp() before exec'ing the command. Something like the following should work:
my $pid = open(PIPE, "-|");
if (0 == $pid) {
    setpgrp(0, 0);
    exec $script;
    die "exec failed: $!\n";
}
else {
    while (<PIPE>) {
        # do stuff with output
    }
    close(PIPE) or die $!;
}
Later, to kill the script process and its children, negate the process ID that you're signalling:
kill 9, -$pid;
In general, I don't think you can expect signals to be propagated into all child processes; this isn't specific to perl.
That said, you might be able to use the process group signal feature built into perl kill():
...if SIGNAL is negative, it kills process groups instead of processes...
You probably need to use setpgrp() on your (direct) child process, then change your kill call to something like:
kill -9, $pgrp;
Try adding:
use POSIX qw(setsid);
setsid;
at the top of your launch_and_monitor function. This will put your processes in a separate session, and cause things to exit when the session leader (i.e. the master) exits.
Killing a process group works, but don't forget that the parent can also be killed on its own. Assuming the child processes have an event loop, they can check whether the parent end of a socket created with socketpair() before the fork() is still valid. In fact, select() returns cleanly when the parent socket is gone; all that needs to be done is to check the socket.
E.g.:
use strict;
use warnings;
use Socket;

$SIG{CHLD} = sub {};

socketpair(my $p, my $c, AF_UNIX, SOCK_STREAM, PF_UNSPEC) or die $!;

print "parent $$, fork 2 kids\n";
for (0 .. 1) {
    my $kid = fork();
    unless ($kid) {
        child_loop($p, $c);
        exit;
    }
    print "parent $$, forked kid $kid\n";
}
print "parent $$, waiting 5s\n";
sleep 5;
print "parent $$ exit, closing sockets\n";

sub child_loop {
    my ($p_s, $c_s) = @_;
    print "kid: $$\n";
    close($c_s);

    my $rin = '';
    vec($rin, fileno($p_s), 1) = 1;
    while (1) {
        select my $rout = $rin, undef, undef, undef;
        if (vec($rout, fileno($p_s), 1)) {
            print "kid: $$, parent gone, exiting\n";
            last;
        }
    }
}
Runs like this:
tim@mint:~$ perl ~/abc.pl
parent 5638, fork 2 kids
parent 5638, forked kid 5639
kid: 5639
parent 5638, forked kid 5640
parent 5638, waiting 5s
kid: 5640
parent 5638 exit, closing sockets
kid: 5640, parent gone, exiting
kid: 5639, parent gone, exiting
tim@mint:~$