I would like to use IPC::Run to communicate with a child process via the child's STDIN, STDOUT and STDERR (start, pump, finish). It seems to work.
I would like to know how to detect:
(1) premature child exit (e.g. caused by errors)
(2) pipes closed by the child
pump dies on errors, or prints its message as a warning to STDERR if it is "called after all harnessed activities have completed" (see the IPC::Run docs right before the ROUTINES section, and pump itself). The second case can come about if the child exited. So wrap the pump call in eval, and also convert warnings to dies to catch both cases:
if ($talk_to_child)
{
    eval {
        local $SIG{__WARN__} = sub { die "pump WARNING: @_" };
        pump $harness;
    };
    if ($@) {
        print $@;
        $talk_to_child = 0;
    };
}
# ... and eval {} for finish()
But this alone won't cut it: when a parent tries to write to a child that exited, it gets a SIGPIPE, which outright terminates the process. The same goes when a child closes its streams and the parent attempts to write. So also install a signal handler for SIGPIPE:
$SIG{PIPE} = sub {
    say "$_[0]: $!";
    $talk_to_child = 0;  # global
};
so that the parent survives the SIGPIPE. Consider local-izing the change to the global %SIG by doing local $SIG{PIPE} = ... instead, a good practice even just on general principle. On the other hand, there's good sense in globally handling a signal that can terminate you out of the blue (even in cases where the handler may decide to exit).
The eval is still needed even with $SIG{PIPE} handled, since pump throws exceptions, too.
Together these take care of all tests I came up with, practically as they stand. Still, some processing in the handler and in the eval is needed to distinguish the cases of interest, if that is wanted.
If this adds up to too much, another way is to check before each call. See this post for one-line checks (wrapped in subs) of: (1) whether a child is running, using result, and (2) whether "there are open I/O channels or active processes", using pumpable. A sketch of both follows.
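For reference, a minimal sketch of such checks (hedged: the result() behavior is taken from the linked post, so verify it against your IPC::Run version; the sub names child_finished and can_pump are made up here):

sub child_finished {
    my ($harness) = @_;
    # Per the linked post, result() throws an exception until a child
    # has been harvested, so surviving the eval means the child is done.
    return eval { $harness->result; 1 } ? 1 : 0;
}

sub can_pump {
    my ($harness) = @_;
    # pumpable() is documented to return true while "there are open
    # I/O channels or active processes".
    return $harness->pumpable;
}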
I think that you want both, and also throw in the SIGPIPE handler. That should cover it.
I cannot be more specific here since the question doesn't provide specifics.
Update: Thanks to @zdim for reminding me to check the SIGPIPE signal. Here is an update of my answer that also checks SIGPIPE:
I did a simple test using start, pump, and finish. Here is the main script p.pl that I used:
use feature qw(say);
use strict;
use warnings;
use IPC::Run;
my $child_in;
my $child_out;
my $child_err;
my $child_name = shift;
my $harness = eval {
    IPC::Run::start [ $child_name ], \$child_in, \$child_out, \$child_err;
};
if ( $@ ) {
    chomp $@;
    die "Caught exception: '$@'";
}
for (1..2) {
    $child_in = "Joe$_\n";
    say "Parent sleeping for 1 second..";
    sleep 1;
    eval {
        local $SIG{PIPE} = sub {
            die "Parent received SIGPIPE. "
              . "Child is either dead or has closed its input pipe\n";
        };
        say "Sending data to child..";
        my $result = $harness->pump;
        say "IPC::Run::pump() returned: ", $result ? "TRUE" : "FALSE";
    };
    if ( $@ ) {
        chomp $@;
        say "IPC::Run::pump() failed: '$@'";
        last;
    }
    say "\$child_in = '$child_in'";
    say "\$child_out = '$child_out'";
}
say "Finishing harness..";
my $res = eval {
    local $SIG{PIPE} = sub {
        die "Parent received SIGPIPE. "
          . "Child is either dead or has closed its input pipe\n";
    };
    $harness->finish;
};
if ( $@ ) {
    chomp $@;
    die "IPC::Run::finish() failed: '$@'\n";
}
printf "IPC::Run::finish() returned: '%s'\n", $res ? "TRUE" : "FALSE";
chomp $child_out;
say "STDOUT from child: '$child_out'";
chomp $child_err;
say "STDERR from child: '$child_err'";
say "Child returned exit code: ", $harness->result;
say "Parent exited normally.."
I used three different child scripts:
child.pl:
#! /usr/bin/env perl
use feature qw(say);
use strict;
use warnings;
my $reply = <STDIN>;
chomp $reply;
say "Hello $reply";
my $reply2 = <STDIN>;
chomp $reply2;
say "Got second reply: $reply2";
exit 0;
and output:
$ p.pl child.pl
Parent sleeping for 1 second..
Sending data to child..
IPC::Run::pump() returned: TRUE
$child_in = ''
$child_out = ''
Parent sleeping for 1 second..
Sending data to child..
IPC::Run::pump() returned: TRUE
$child_in = ''
$child_out = ''
Finishing harness..
IPC::Run::finish() returned: 'TRUE'
STDOUT from child: 'Hello Joe1
Got second reply: Joe2'
STDERR from child: ''
Child returned exit code:
Parent exited normally..
child2.pl:
#! /usr/bin/env perl
use feature qw(say);
use strict;
use warnings;
my $reply = <STDIN>;
chomp $reply;
say "Hello $reply";
die "Child exception\n";
and output:
$ p.pl child2.pl
Parent sleeping for 1 second..
Sending data to child..
IPC::Run::pump() returned: TRUE
$child_in = ''
$child_out = ''
Parent sleeping for 1 second..
Sending data to child..
IPC::Run::pump() failed: 'Parent received SIGPIPE. Child is either dead or has closed its input pipe'
Finishing harness..
IPC::Run::finish() failed: 'Parent received SIGPIPE. Child is either dead or has closed its input pipe'
child3.pl:
#! /usr/bin/env perl
use strict;
use warnings;
close \*STDIN;
close \*STDOUT;
close \*STDERR;
sleep 5;
exit 2;
and output:
$ p.pl child3.pl
Parent sleeping for 1 second..
Sending data to child..
IPC::Run::pump() failed: 'ack Parent received SIGPIPE. Child is either dead or has closed its input pipe'
Finishing harness..
IPC::Run::finish() failed: 'Parent received SIGPIPE. Child is either dead or has closed its input pipe'
So for these tests, it seems that the SIGPIPE signal can be used to check whether a child is alive or has closed its input pipe. Note that if you try to call pump() after a child has exited, the previous output from the child is lost; see the child2.pl example.
Related
I have a script based on Term::ReadLine and LWP::UserAgent
The logic is like this,
while (defined ($_ = $term->readline('console> ')))
{
    next unless $_;
    chomp;
    if ($_ eq 'exit')
    {
        last;
    }
    &run ($_);
}

sub run {
    my $ua = LWP::UserAgent->new;
    my $resp = $ua->get (...);
    say $resp->content;
}
In run it does an LWP request. Now if I press Ctrl-C, not only is the LWP request terminated, the whole Perl script is terminated as well.
I want to kill the LWP request only. Any ideas?
I can add a SIGINT handler, but I don't know what the handler should do.
Convert the signal into an exception.
local $SIG{INT} = sub { die "SIGINT\n" };
Generally, one would then wrap the code in an eval BLOCK, but LWP::UserAgent catches these exceptions and returns an error response.
For example,
use strict;
use warnings;
use feature qw(say);
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();

my $response = do {
    local $SIG{INT} = sub { die "SIGINT\n" };
    $ua->get("http://localhost/zzz.crx")
};

say $response->is_success ? "Successful" : "Unsuccessful";
say $response->code;
say $response->status_line;
Output if no SIGINT received:
Successful
200
200 OK
Output if SIGINT received:
Unsuccessful
500
500 SIGINT
One way to stop code is to run it in a child process and kill that child in the parent's signal handler when SIGINT is received by the parent. The parent keeps running since the signal is handled.
use warnings;
use strict;
use feature 'say';
$SIG{INT} = \&sigint_handler; # or: $SIG{INT} = sub { ... };
say "Parent $$ start.";
my $pid = run_proc();
my $gone_pid = waitpid $pid, 0; # check status, in $?
say "Parent exiting";
sub run_proc
{
    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {  # child process
        say "\tKid, sleep 5 (time for Ctrl-C)";  # run your job here
        sleep 5;
        say "\tKid exiting.";
        exit;
    }
    return $pid;
}

sub sigint_handler {
    if ($pid and kill 0, $pid) {
        say "Got $_[0], send 'kill TERM' to child process $pid.";
        my $no_signalled = kill 15, $pid;
    }
    else { die "Got $_[0]" }  # or use exit
}
A good deal of the code is for diagnostic prints. Some comments follow.
The kill only sends a signal. It does not in any way ensure that the process terminates. Check this with kill 0, $pid, which returns true if the process has not been reaped (even if it's a zombie). On my system TERM is 15, and even though this is very common, please check on yours. (A sketch of checking and reaping the signalled child follows these notes.)
The signal could come at a time when the child is not running. The handler first checks whether the $pid is out there and if not it dies/exits, respecting SIGINT. Change as appropriate.
After the fork the parent drops past if ($pid == 0) and returns the $pid right away.
You can install $SIG{TERM} in the child, where it can clean up if it needs to exit orderly.
The SIGINT handler will run in the child as well, so "Got $_[0] ..." is printed twice. If this is a concern, add a handler to the child to ignore the signal, $SIG{INT} = 'IGNORE';. With this in place and with Ctrl-C hit while the child is running, the output is
Parent 9334 start.
Kid, sleep 5 (time for Ctrl-C)
^CGot INT, send 'kill TERM' to child process 9335.
Parent exiting
The status of the child once it exited can be checked via $?; see system and perlvar.
Documentation: fork (and exec, system), %SIG in perlvar, waitpid, parts of perlipc, kill.
If the job done in the child needed to communicate with the parent then there would be more to do. However, the code snippet added to the question indicates that this is not the case.
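To make the check-and-reap part concrete, here is a minimal sketch (hedged: the 10 x 100 ms polling loop is an arbitrary choice made here, not something from the answer above):

use POSIX qw(WNOHANG);

kill 'TERM', $pid;                          # ask the child to terminate
my $reaped;
for (1 .. 10) {                             # poll for up to ~1 second
    $reaped = waitpid($pid, WNOHANG) > 0;   # non-blocking reap
    last if $reaped;
    select undef, undef, undef, 0.1;        # sleep 100 ms without Time::HiRes
}
if ($reaped) {
    # decode the status in $?, as in perldoc -f system
    if ($? & 127) { printf "child died on signal %d\n", $? & 127 }
    else          { printf "child exited with %d\n",    $? >> 8  }
}
else {
    warn "child $pid not reaped after TERM\n";
}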
You need to provide a callback in your call to $ua->request. Issuing die in that callback will terminate the transfer.
You then just need to set a flag variable in your Ctrl-C signal handler, and die in your callback if that flag is set.
I'll write some code when I get back to a PC, and when you have shown what your run subroutine does.
Here's some code that looks right, but I can't test it at present
Beware that run is a dire identifier for any subroutine, especially one that starts a network transfer and prints the result
sub run {
    my ($url) = @_;

    my $die;
    local $SIG{INT} = sub { $die = 1 };

    my $ua = LWP::UserAgent->new;

    my $resp = $ua->get(
        $url,
        ':content_cb' => sub {
            die "Interrupted LWP transfer" if $die;
            my ($data, $resp, $proto) = @_;
            print $data;
        },
        ':read_size_hint' => 1024,
    );

    print "\n";  # Emulate additional newline from `say`
}
Note that reducing :read_size_hint will cause the callback to be called more frequently with smaller chunks of data. That will improve the response to Ctrl-C but reduce the efficiency of the transfer
Let's say we have 'Child' and 'Parent' processes and subroutines defined like this,
my $pid = fork;
die "fork failed: $!" unless defined($pid);
local $SIG{USR1} = sub {
    kill KILL => $pid;
    $SIG{USR1} = 'IGNORE';
    kill USR1 => $$;
};
and we divide the work between them. Is it possible to do the following?
if($pid == 0){
    sub1();
    # switch to Parent process to execute sub4()
    sub2();
    # switch to Parent process to execute sub5()
    sub3();
}
else
{
    sub4();
    # send message to child process so it executes sub2()
    sub5();
    # send message to child process so it executes sub3()
}
If yes, can you point how, or where can I look for the solution? Maybe a short example would suffice. :)
Thank you.
There is a whole page in the docs about inter process communication: perlipc
To answer your question - yes, there is a way to do what you want. The problem is, exactly what it is ... depends on your use case. I can't tell what you're trying to accomplish - what do you mean by 'switch to parent', for example?
But generally the simplest (in my opinion) is using pipes:
#!/usr/bin/env perl
use strict;
use warnings;
pipe ( my $reader, my $writer );
my $pid = fork(); #you should probably test for undef for fork failure.
if ( $pid == 0 ) {
    ## in child:
    close ( $writer );
    while ( my $line = <$reader> ) {
        print "Child got $line\n";
    }
}
else {
    ## in parent:
    close ( $reader );
    print {$writer} "Parent says hello!\n";
    sleep 5;
}
Note: you may want to check your fork return codes - 0 means we're in the child - a number means we're in the parent, and undef means the fork failed.
Also: Your pipe will buffer - this might trip you up in some cases. It'll run to the end just fine, but you may not get IO when you think you should.
You can open pipes the other way around - for child->parent comms. Be slightly cautious when you multi-fork though, because an active pipe is inherited by every child of the fork - but it's not a broadcast.
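For completeness, a minimal sketch of the reverse direction (child writes, parent reads), with autoflush enabled on the write end to sidestep the buffering issue mentioned above:

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Handle;                      # for autoflush() on the lexical handle

pipe( my $reader, my $writer ) or die "pipe failed: $!";

my $pid = fork;
die "fork failed: $!" unless defined $pid;

if ( $pid == 0 ) {
    ## in child:
    close ( $reader );
    $writer->autoflush(1);           # avoid the buffering pitfall above
    print {$writer} "Child $$ says hello!\n";
    exit;
}

## in parent:
close ( $writer );
print "Parent got: ", scalar <$reader>;
waitpid $pid, 0;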
In Perl, system("command") will wait till the command is completed. Is there a way to let it wait only for 20 sec? One scenario is like the following:
The command is an infinite loop and won't finish. The command will freeze and the program can't proceed. What I want is for the program not to be blocked by the command.
I know Ruby has a way to do this. Does Perl have a solution?
Use alarm:
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
    alarm 20;
    system("<Your command>");
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";  # propagate unexpected errors
    # timed out
}
else {
    # didn't time out
}
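Note that even when the alarm fires, the command started by system may keep running. If it must be stopped as well, one option (a sketch along the same lines; "<Your command>" is the placeholder from above) is to fork and exec so there is a PID to kill on timeout:

my $pid = fork;
die "Cannot fork: $!" unless defined $pid;
if ($pid == 0) {
    exec "<Your command>" or die "exec failed: $!";
}

eval {
    local $SIG{ALRM} = sub { die "alarm\n" };  # NB: \n required
    alarm 20;
    waitpid $pid, 0;                           # wait for the command ...
    alarm 0;                                   # ... and cancel the alarm
};
if ($@) {
    die unless $@ eq "alarm\n";  # propagate unexpected errors
    kill 'TERM', $pid;           # timed out: stop the command ...
    waitpid $pid, 0;             # ... and reap it
}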
#!/usr/bin/perl -w
use strict;
use 5.010;

my $foo = 123;
my $pidChild = fork();  # All objects before this fork statement will be copied.

given ($pidChild)
{
    when (!defined($_)) {
        die "Cannot fork: $!";
    }
    when ($_ == 0) {
        # The child process goes here.
        $foo = 456;  # This is a duplicate.
        system 'subprocess options';  # Or: exec 'subprocess options';
    }
    default {
        # The original process goes here.
        waitpid($_, 0);  # Whether to wait or not is up to you.
        say $foo;        # Output: 123
    }
}
If Inter-Process Communication (IPC) is needed, the built-in function pipe can be used before the invocation of fork to create two handles, one for input and another for output; they'll be shared by the original process and the subprocess.
There's surely more than one way to do IPC. The built-in function open, the subroutine open2 offered by the module IPC::Open2, and open3 offered by IPC::Open3 can all run a subprocess asynchronously.
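For instance, a minimal open2 sketch (using sort as a stand-in subprocess):

use strict;
use warnings;
use IPC::Open2 qw(open2);

# Start 'sort' asynchronously, with handles on its STDIN and STDOUT.
my $pid = open2( my $from_child, my $to_child, 'sort' );

print {$to_child} "banana\napple\ncherry\n";
close $to_child;                  # send EOF so sort can emit its output

print while <$from_child>;        # read the sorted lines back
waitpid $pid, 0;                  # reap the subprocess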
I'm writing a wrapper for an external command ("sox", if this can help) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd) or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;  # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with standard fork stuff?
The solution should work under win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script how one can use a pipe, a forked process, and fileevent together:
use strict;
use IO::Pipe;
use Tk;
my $pipe = IO::Pipe->new;
if (!fork) { # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;
    }
    exit;
}

$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;

$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});

MainLoop;
Forking may or may not work under Windows. Also one needs to be cautious when forking within Tk; you must make sure that only one of the two processes is doing X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.
I'm using fork with CGI::Fast in Perl.
When I run it normally, it prints "In handle request" 5 times, as expected.
But when I configure this file as a FastCGI script in the Apache server, it prints "In handle request" only one time.
Can anybody help me out here?
Sample program:
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;
use Data::Dumper;
use HTML::Template;
while(my $query = CGI::Fast->new()) {
    my $timeout = '60';
    eval {
        local $SIG{ALRM} = sub {
            print STDERR "Fcgi error";
            exit(0);
        };
        alarm $timeout;
        # wait till all the results return
        my $val = process_request( $query );
        alarm 0;
    } or print STDERR $@;
}

sub process_request {
    my $query = shift;
    my @childs = ();
    print $query->header;
    foreach my $i ( 1..5 ) {
        my $pid = fork();
        if( $pid ) {
            push( @childs, $pid );
        }
        elsif( $pid == 0 ) {
            _handle_request();
            exit(0);
        }
    }
    foreach my $child ( @childs ) {
        my $pid = waitpid( $child, 0 );
    }
    return;
}

sub _handle_request {
    print "In handle request<br/>";
    return;
}
The FastCGI FAQ includes a section specifically on How do I use fork or exec? in Perl.
When you fork, however, without calling exec as well, i.e. when you have two instances of perl running, the request handle object will (eventually) be destroyed in both instances of perl. As a result, a possible request being handled will be finished when the object is destroyed for the first time. This is probably not what you expect, since you will usually only be handling the request in either the parent or the child. To inhibit this unwanted request finishing, you can send the Detach() message to the request handle object. In a detached state, destruction will not lead to any finishing of requests. It is advised you call Detach() before forking and Attach() afterwards (in the process that will continue handling the request).
What's probably happening is that, after the first forked child sends its response, it marks the request as complete and the connection to the client is closed before the other children send their responses.
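A minimal sketch of the Detach()/Attach() pattern the FAQ describes, dropping down to FCGI directly since CGI::Fast does not expose the request handle object (the response body here is just a placeholder):

use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();
while ( $request->Accept() >= 0 ) {
    $request->Detach();               # a child's exit can no longer finish the request
    my $pid = fork;
    die "fork failed: $!" unless defined $pid;
    if ( $pid == 0 ) {
        # child: do the background work; do not touch the request
        exit 0;
    }
    $request->Attach();               # parent resumes handling the request
    waitpid $pid, 0;
    print "Content-type: text/html\r\n\r\n", "In handle request<br/>\n";
}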