fork with CGI::Fast in Perl

I'm using fork with CGI::Fast in Perl.
When I run the script normally, it prints "In handle request" five times, as expected.
But when I configure this file as a FastCGI script under Apache, it prints "In handle request" only once.
Can anybody help me out here?
Sample program:
#!/usr/bin/perl
use strict;
use warnings;
use CGI::Fast;
use Data::Dumper;
use HTML::Template;

while (my $query = CGI::Fast->new()) {
    my $timeout = '60';
    eval {
        local $SIG{ALRM} = sub {
            print STDERR "Fcgi error";
            exit(0);
        };
        alarm $timeout;
        # wait till all the results return
        my $val = process_request($query);
        alarm 0;
    } or print STDERR $@;
}

sub process_request {
    my $query = shift;
    my @childs = ();

    print $query->header;
    foreach my $i (1 .. 5) {
        my $pid = fork();
        if ($pid) {
            push(@childs, $pid);
        }
        elsif ($pid == 0) {
            _handle_request();
            exit(0);
        }
    }
    foreach my $child (@childs) {
        my $pid = waitpid($child, 0);
    }
    return;
}

sub _handle_request {
    print "In handle request<br/>";
    return;
}

The FastCGI FAQ includes a section specifically on "How do I use fork or exec?" in Perl.
When you fork, however, without calling exec as well, i.e. when you have two instances of perl running, the request handle object will (eventually) be destroyed in both instances of perl. As a result, a possible request being handled will be finished when the object is destroyed for the first time. This is probably not what you expect, since you will usually only be handling the request in either the parent or the child.
To inhibit this unwanted request finishing, you can send the Detach() message to the request handle object. In a detached state, destruction will not lead to any finishing of requests. It is advised you call Detach() before forking and Attach() afterwards (in the process that will continue handling the request).
What's probably happening is that, after the first forked child sends its response, it marks the request as complete and the connection to the client is closed before the other children send their responses.
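Based on that advice, here is a minimal sketch using FCGI directly, since CGI::Fast does not expose the underlying request object. It relies on FCGI.pm's documented Detach() and Attach() methods; do_child_work() is a hypothetical stand-in for the real per-child job:

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;

my $request = FCGI::Request();
while ($request->Accept() >= 0) {
    print "Content-Type: text/html\r\n\r\n";

    $request->Detach();    # children inherit a detached request, so their exit cannot finish it
    my @childs;
    foreach my $i (1 .. 5) {
        my $pid = fork();
        die "fork failed: $!" unless defined $pid;
        if ($pid) {
            push @childs, $pid;
        }
        else {
            do_child_work();    # hypothetical stand-in; children do not print to the client
            exit(0);
        }
    }
    waitpid $_, 0 for @childs;
    $request->Attach();    # the parent resumes ownership and finishes the request itself

    print "In handle request<br/>" for 1 .. 5;
}

sub do_child_work { return }

Only the parent writes to the client here; if the children must produce the output themselves, they need to send it back to the parent (over a pipe, for example) rather than printing it directly.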

Related

alarm does not seem to fire if I set $SIG{ALRM}

I'm trying to implement an alarm in my Perl backend process so that it will terminate if it gets stuck for too long. I tried to implement the code given on the alarm documentation page on Perldoc (this is verbatim from the documentation, except that the line in the middle calls my program's key subroutine instead of the sample line in the documentation):
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };    # NB: \n required
    alarm $timeout;
    &FaithTree::Backend::commandLine({ 'skipWidgets' => $skipWidgets, 'commandLineId' => $commandLineId, 'force' => $force });
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";    # propagate unexpected errors
    # timed out
}
else {
    # didn't
}
Given this code, nothing happens when the alarm should have timed out. On the other hand, if I remove the custom definition for $SIG{ALRM} (which, again, came straight from the Perl documentation), the alarm does fire, just without the custom handler.
I'm wondering if the fact that I'm using Thread::Queue is playing a role in the alarm failing, but that doesn't explain why it works as long as I skip redefining $SIG{ALRM}.
Here's a minimal, runnable version with a subroutine that is intentionally an infinite loop, for testing:
eval {
    $SIG{ALRM} = sub { die "alarm\n" };    # NB: \n required
    alarm 1;
    &FaithTree::Test::Backend::commandLine({ 'skipWidgets' => $skipWidgets, 'commandLineId' => $commandLineId, 'force' => $force });
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";    # propagate unexpected errors
    # timed out
}
else {
    exit;
}

package FaithTree::Test::Backend;
use File::Tail;
use threads;
use threads::shared;
use Thread::Queue;
use feature 'say';

sub commandLine {
    our $N //= 4;
    my $Q = new Thread::Queue;
    my @kids = map threads->create( \&FaithTree::Test::Backend::fetchChild, $Q ), 1 .. $N;
    my @feeds = ( "1", "2", "3", "4" );
    foreach my $feed (@feeds) {
        $Q->enqueue( $feed );
    }
    $Q->enqueue( ( undef ) x $N );
    $_->join for @kids;
}

sub fetchChild {
    print "Test";
    # Access queue.
    my $Q = shift;
    # What is my thread id?
    my $tid = threads->tid();
    my ($num, $num2) = (0, 0);
    for ( ; ; ) {
        if ($num2 == 10000) {
            say STDERR $tid . ': ' . $num;
            $num2 = 0;
        }
        $num++;
        $num2++;
    }
    return 1;
}
If you comment out the $SIG{ALRM} line, it will terminate when the alarm is set to time out. If you leave it in place, it will never terminate.
Signals and threads don't mix well. You might want to rethink your use of signals. For example, you could move all the thread stuff to a child process.
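For example, a minimal sketch of that suggestion, reusing the question's commandLine routine: the threads live only in a forked child, so the parent's alarm handler is never blocked by an XS join:

use strict;
use warnings;

my $timeout = 1;
my $pid = fork // die "fork failed: $!";
if ($pid == 0) {
    # Child: all thread creation and joining happens in here.
    FaithTree::Test::Backend::commandLine();
    exit 0;
}
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };    # NB: \n required
    alarm $timeout;
    waitpid $pid, 0;    # a plain blocking wait, interruptible by SIGALRM
    alarm 0;
};
if ($@) {
    die unless $@ eq "alarm\n";    # propagate unexpected errors
    kill 'KILL', $pid;    # stop the stuck child outright
    waitpid $pid, 0;      # reap it
}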
Signal handlers are only called between Perl ops. The main thread is in a call to XS sub thread->join, and the signal handler will be called once join returns.
Most blocking system calls can be interrupted (returning error EINTR), so it might be possible to write a signal-aware version of join. Except I seem to remember pthread functions not being interruptible, so maybe not.
In this particular case, you could have the threads signal the main thread when they are done, using a mechanism that allows the main thread to block until a signal occurs or a timeout has occurred. cond_signal/cond_timedwait is such a mechanism.
use threads;
use threads::shared;
use Thread::Queue;
use Sub::ScopeFinalizer qw( scope_finalizer );
use Time::HiRes qw( time );

my $lock :shared;
my $threads_remaining :shared = 0;

my $Q = new Thread::Queue;
my @threads;
{
    lock $lock;
    for (1..$N) {    # $N worker threads, as in the question
        ++$threads_remaining;
        push @threads, async {
            my $guard = scope_finalizer {
                lock $lock;
                --$threads_remaining;
                cond_signal($lock);
            };
            worker($Q);    # the actual work routine
        };
    }
}

my $max_end_time = time + 1;

# ...

{
    lock $lock;
    while ($threads_remaining) {
        if (!cond_timedwait($lock, $max_end_time)) {
            # ... Handle timeout ...
        }
    }
}

$_->join for @threads;

What is the best way in Perl to set a timer to stop a long-running process?

I've got an application that invokes a potentially long-running process. I want my program, the caller of this process, to be able to cancel it and move on to the next entry whenever a time limit is exceeded. Using Perl's AnyEvent module, I tried something like this:
#!/usr/bin/env perl
use Modern::Perl '2017';
use Path::Tiny;
use EV;
use AnyEvent;
use AnyEvent::Strict;

my $cv = AE::cv;
$cv->begin;    ## In case the loop runs zero times...
while ( my $filename = <> ) {
    chomp $filename;
    $cv->begin;
    my $timer = AE::timer( 10, 0, sub {
        say "Canceled $filename...";
        $cv->end;
        next;
    });
    potentially_long_running_process( $filename );
    $cv->end;
}
$cv->end;
$cv->recv;
exit 0;

sub potentially_long_running_process {
    my $html = path('foo.html')->slurp;
    my @a_pairs = ( $html =~ m|(<a [^>]*>.*?</a>)|gsi );
    say join("\n", @a_pairs);
}
The problem is the long-running processes never time out and get canceled, they just keep on going. So my question is "How do I use AnyEvent (and/or related modules) to time out a long-running task?"
You have not mentioned the platform you are running this script on, but if it is running on *nix, you can use the SIGALRM signal, something like this:
my $run_flag = 1;
$SIG{ALRM} = sub {
    $run_flag = 0;
};
alarm(300);
while ($run_flag) {
    # do your stuff here
    # note - you cannot use sleep and alarm at the same time
}
print "This will print after 300 seconds";

Kill current LWP request with CTRL + C

I have a script based on Term::ReadLine and LWP::UserAgent.
The logic is like this:
while (defined ($_ = $term->readline('console> '))) {
    next unless $_;
    chomp;
    if ($_ eq 'exit') {
        last;
    }
    run($_);
}

sub run {
    my $ua = LWP::UserAgent->new;
    my $resp = $ua->get(...);
    say $resp->content;
}
In run it will do an LWP request. Now if I press CTRL + C, not only is the LWP request terminated, the whole Perl script is terminated as well.
I want to kill the LWP request only. Any ideas?
I can add a SIGINT handler, but I don't know what the handler should do.
Convert the signal into an exception.
local $SIG{INT} = sub { die "SIGINT\n" };
Generally, one would then wrap the code in an eval BLOCK, but LWP::UserAgent catches these exceptions and returns an error response.
For example,
use LWP::UserAgent;

my $ua = LWP::UserAgent->new();

my $response = do {
    local $SIG{INT} = sub { die "SIGINT\n" };
    $ua->get("http://localhost/zzz.crx")
};

say $response->is_success ? "Successful" : "Unsuccessful";
say $response->code;
say $response->status_line;
Output if no SIGINT received:
Successful
200
200 OK
Output if SIGINT received:
Unsuccessful
500
500 SIGINT
One way to stop code is to run it in a child process and kill that child in the parent's signal handler when SIGINT is received by the parent. The parent keeps running since the signal is handled.
use warnings;
use strict;
use feature 'say';

$SIG{INT} = \&sigint_handler;    # or: $SIG{INT} = sub { ... };

say "Parent $$ start.";
my $pid = run_proc();
my $gone_pid = waitpid $pid, 0;    # check status, in $?
say "Parent exiting";

sub run_proc {
    my $pid = fork // die "Can't fork: $!";
    if ($pid == 0) {    # child process
        say "\tKid, sleep 5 (time for Ctrl-C)";    # run your job here
        sleep 5;
        say "\tKid exiting.";
        exit;
    }
    return $pid;
}

sub sigint_handler {
    if ($pid and kill 0, $pid) {
        say "Got $_[0], send 'kill TERM' to child process $pid.";
        my $no_signalled = kill 15, $pid;
    }
    else { die "Got $_[0]" }    # or use exit
}
A good deal of the code is for diagnostic prints. Some comments follow
The kill only sends a signal. It does not in any way ensure that the process terminates. Check this with kill 0, $pid, which returns true if the process has not been reaped (even if it's a zombie). On my system TERM is 15, and even though this is very common, please check.
The signal could come at a time when the child is not running. The handler first checks whether the $pid is out there and if not it dies/exits, respecting SIGINT. Change as appropriate.
After the fork the parent drops past if ($pid == 0) and returns the $pid right away.
You can install $SIG{TERM} in the child, where it can clean up if it needs to exit orderly.
The SIGINT handler will run in the child as well, so "Got $_[0] ..." is printed twice. If this is a concern, add a handler to the child to ignore the signal, $SIG{INT} = 'IGNORE';. With this in place, and with Ctrl-C hit while the child is running, the output is
Parent 9334 start.
Kid, sleep 5 (time for Ctrl-C)
^CGot INT, send 'kill TERM' to child process 9335.
Parent exiting
The status of the child once it has exited can be checked via $?; see system and perlvar.
Documentation: fork (and exec, system), %SIG in perlvar, waitpid, parts of perlipc, kill.
If the job done in the child needed to communicate with the parent then there would be more to do. However, the code snippet added to the question indicates that this is not the case.
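As a minimal sketch of the child-side handling mentioned above (the cleanup body is hypothetical), the child branch of run_proc could become:

if ($pid == 0) {    # child process
    $SIG{INT}  = 'IGNORE';    # let the parent handle Ctrl-C
    $SIG{TERM} = sub {
        say "\tKid got TERM, cleaning up.";
        # close files, remove temporary data, etc.
        exit 0;
    };
    say "\tKid, sleep 5 (time for Ctrl-C)";
    sleep 5;
    say "\tKid exiting.";
    exit;
}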
You need to provide a callback in your call to $ua->request. Issuing die in that callback will terminate the transfer.
You then just need to set a flag variable in your Ctrl-C signal handler, and die in your callback if that flag is set.
I'll write some code when I get back to a PC, and when you have shown what your run subroutine does.
Here's some code that looks right, but I can't test it at present
Beware that run is a dire identifier for any subroutine, especially one that starts a network transfer and prints the result
sub run {
    my ($url) = @_;

    my $die;
    local $SIG{INT} = sub { $die = 1 };

    my $ua = LWP::UserAgent->new;
    my $resp = $ua->get(
        $url,
        ':content_cb' => sub {
            die "Interrupted LWP transfer" if $die;
            my ($data, $resp, $proto) = @_;
            print $data;
        },
        ':read_size_hint' => 1024,
    );
    print "\n";    # Emulate additional newline from `say`
}
Note that reducing :read_size_hint will cause the callback to be called more frequently with smaller chunks of data. That will improve the response to Ctrl-C but reduce the efficiency of the transfer.

IPC communication between 2 processes with Perl

Let's say we have 'Child' and 'Parent' processes defined, along with some subroutines:

my $pid = fork;
die "fork failed: $!" unless defined($pid);

local $SIG{USR1} = sub {
    kill KILL => $pid;
    $SIG{USR1} = 'IGNORE';
    kill USR1 => $$;
};
and we divide the work between them; is it possible to do the following?
if ($pid == 0) {
    sub1();
    # switch to Parent process to execute sub4()
    sub2();
    # switch to Parent process to execute sub5()
    sub3();
}
else {
    sub4();
    # send message to child process so it executes sub2
    sub5();
    # send message to child process so it executes sub3
}
If yes, can you point out how, or where I can look for the solution? Maybe a short example would suffice. :)
Thank you.
There is a whole page in the docs about inter process communication: perlipc
To answer your question - yes, there is a way to do what you want. The problem is, exactly what it is ... depends on your use case. I can't tell what you're trying to accomplish - what do you mean by 'switch to parent', for example?
But generally the simplest (in my opinion) is using pipes:
#!/usr/bin/env perl
use strict;
use warnings;

pipe( my $reader, my $writer );

my $pid = fork();    # you should probably test for undef for fork failure.

if ( $pid == 0 ) {
    ## in child:
    close($writer);
    while ( my $line = <$reader> ) {
        print "Child got $line\n";
    }
}
else {
    ## in parent:
    close($reader);
    print {$writer} "Parent says hello!\n";
    sleep 5;
}
Note: you may want to check your fork return codes - 0 means we're in the child - a number means we're in the parent, and undef means the fork failed.
Also: Your pipe will buffer - this might trip you over in some cases. It'll run to the end just fine, but you may not get IO when you think you should.
You can open pipes the other way around - for child->parent comms. Be slightly cautious when you multi-fork though, because an active pipe is inherited by every child of the fork - but it's not a broadcast.
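For the strict alternation in the question, here is a minimal sketch (an illustration, not part of the answer above) using two pipes, one per direction; the subN bodies are stand-ins:

#!/usr/bin/env perl
use strict;
use warnings;
use IO::Handle;

pipe( my $child_in,  my $to_child )  or die "pipe: $!";
pipe( my $parent_in, my $to_parent ) or die "pipe: $!";
$to_child->autoflush(1);     # avoid the buffering pitfall noted above
$to_parent->autoflush(1);

my $pid = fork // die "fork failed: $!";
if ( $pid == 0 ) {    # child
    close $to_child;
    close $parent_in;
    sub1();
    print {$to_parent} "ran sub1\n";    # parent may run sub4 now
    scalar <$child_in>;                 # wait for the parent's go-ahead
    sub2();
    print {$to_parent} "ran sub2\n";    # parent may run sub5 now
    scalar <$child_in>;
    sub3();
    exit 0;
}
close $to_parent;
close $child_in;
scalar <$parent_in>;               # wait until the child ran sub1
sub4();
print {$to_child} "ran sub4\n";    # child may run sub2 now
scalar <$parent_in>;               # wait until the child ran sub2
sub5();
print {$to_child} "ran sub5\n";    # child may run sub3 now
waitpid $pid, 0;

sub sub1 { print "child: sub1\n" }
sub sub2 { print "child: sub2\n" }
sub sub3 { print "child: sub3\n" }
sub sub4 { print "parent: sub4\n" }
sub sub5 { print "parent: sub5\n" }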

Perl (Tk): how to run a system command asynchronously and react to its output?

I'm writing a wrapper for an external command ("sox", if this helps) with Perl Tk.
I need to run it asynchronously, of course, to avoid blocking Tk's MainLoop().
But I need to read its output to notify the user about the command's progress.
I am testing a solution like this one, using IPC::Open3:
use IPC::Open3;
use Symbol qw(gensym);
use POSIX ();

{
    $| = 1;
    $pid = open3(gensym, ">&STDERR", \*FH, $cmd) or error("Error running command \"$cmd\"");
}
while (defined($ch = FH->getc)) {
    notifyUser($ch) if ($ch =~ /$re/);
}
waitpid $pid, 0;
$retval = $? >> 8;
POSIX::close($_) for 3 .. 1024;    # close all open handles (arbitrary upper bound)
But of course the while loop blocks MainLoop() until $cmd terminates.
Is there some way to read the output handle asynchronously?
Or should I go with the standard fork stuff?
The solution should work under Win32, too.
For non-blocking read of a filehandle, take a look at Tk::fileevent.
Here's an example script how one can use a pipe, a forked process, and fileevent together:
use strict;
use warnings;
use IO::Pipe;
use Tk;

my $pipe = IO::Pipe->new;
if (!fork) {    # Child XXX check for failed forks missing
    $pipe->writer;
    $pipe->autoflush(1);
    for (1..10) {
        print $pipe "something $_\n";
        select undef, undef, undef, 0.2;    # sleep 0.2s without disturbing alarm/sleep
    }
    exit;
}

$pipe->reader;

my $mw = tkinit;
my $text;
$mw->Label(-textvariable => \$text)->pack;
$mw->Button(-text => "Button", -command => sub { warn "Still working!" })->pack;
$mw->fileevent($pipe, 'readable', sub {
    if ($pipe->eof) {
        warn "EOF reached, closing pipe...";
        $mw->fileevent($pipe, 'readable', '');
        return;
    }
    warn "pipe is readable...\n";
    chomp(my $line = <$pipe>);
    $text = $line;
});

MainLoop;
Forking may or may not work under Windows. Also one needs to be cautious when forking within Tk; you must make sure that only one of the two processes is doing X11/GUI stuff, otherwise bad things will happen (X11 errors, crashes...). A good approach is to fork before creating the Tk MainWindow.