Perl WebService increase max connections using HTTP::Server::Simple::CGI

I'm running a little Perl web service, based on the first example I found on this page: https://www.perlmonks.org/?node_id=1078567
However, when a lot of clients call it at once, the requests suddenly start failing, and a lot of TIME_WAIT TCP connections are left on the server running the web service, as if the web service were unable to handle that many connections at once.
Is there a parameter in that module (or elsewhere) that I could use to raise this limit?
Or a way to queue the incoming requests somehow?
Here are some parts of my code, to help:
{
    package TACWebService;

    use HTTP::Server::Simple::CGI;
    use base qw(HTTP::Server::Simple::CGI);
    use Cwd 'abs_path';
    use POSIX;
    use DBI;
    use warnings;
    .........
    my %dispatch = (
        '/insertunix' => \&resp_insertunix,
        '/insertwin'  => \&resp_insertwin,
        '/getpwdate'  => \&resp_getpwdate,
    );
# ---------------------------------------------------------------------
# Requests Handling
# ---------------------------------------------------------------------
    sub handle_request {
        my $self = shift;
        my $cgi  = shift;

        my $path    = $cgi->path_info();
        my $handler = $dispatch{$path};

        if (ref($handler) eq "CODE") {
            print "HTTP/1.0 200 OK\r\n";
            $handler->($cgi);
        } else {
            print "HTTP/1.0 404 Not found\r\n";
            print $cgi->header,
                  $cgi->start_html('Not found'),
                  $cgi->h1('Not found'),
                  $cgi->end_html;
        }
    }
    sub resp_insertwin {
        my $cgi = shift;    # CGI.pm object
        return if !ref $cgi;
        ....
        } else {
            print $cgi->header("text/plain"), "INSERT";
        }
        .....
# ---------------------------------------------------------------------
# WebService Start in background
# ---------------------------------------------------------------------
my $pid = TACWebService->new($TACWebService::conf{tac_ws_port})->background();
print "Use 'kill $pid' to stop TAC WebService.\n";
The clients themselves are using LWP::UserAgent like this:
my $ua = LWP::UserAgent->new();
$ua->timeout($timeout);
my $response = $ua->post($TAC_Url, [
    'args' => $here,
]);
if (!$response->is_success) {
    print "Timeout while connecting to $TAC_Url\n";
} else {
    my $content = $response->as_string();
    print $content if (grep(/INSERT_/, $content));
}
Describing the exact issue would be complicated. In short: the clients are Unix servers sending their user databases (user accounts), and when lots of clients send their user DBs at once, I can see the web service receiving half of the data and answering "timeout" after a couple of accounts (probably because it's overloaded in some way).
Thanks again.

The problem is that the client waits too long for the server to respond. To solve this, you have to make the server handle multiple requests in parallel. The easiest solution is to add
sub net_server { 'Net::Server::PreFork' }
to your package TACWebService, and HTTP::Server::Simple::CGI will do the rest of the magic.
Or you can use HTTP::Server::Simple::CGI::PreFork instead. See https://metacpan.org/pod/HTTP::Server::Simple::CGI::PreFork
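For context, here is a minimal sketch of where that override goes, assuming the package layout from the question (Net::Server::PreFork is a separate CPAN distribution and must be installed):

{
    package TACWebService;
    use base qw(HTTP::Server::Simple::CGI);

    # HTTP::Server::Simple consults net_server() for the name of a
    # Net::Server subclass to run under; Net::Server::PreFork maintains
    # a pool of worker processes, so several clients are served in parallel.
    sub net_server { 'Net::Server::PreFork' }

    # handle_request, resp_insertwin, etc. stay exactly as before
}

my $pid = TACWebService->new($TACWebService::conf{tac_ws_port})->background();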

Related

Capture both resolved and rejected in Perl's Mojo::Promises

I'm playing with Mojo::UserAgent and Mojo::Promise to run non-blocking calls to three services A, B, and C. The problem is that it works fine when all the services connect/resolve, but if one of them, say service C, is unable to connect, the whole thing fails. Is there a way to capture all services (connected and not connected)? Any insight is greatly appreciated. Thanks!
my @urls = (
    'https://hostA/serviceA',    # ServiceA connects and returns some text
    'https://hostB/serviceB',    # ServiceB connects and returns some text
    'https://hostC/serviceC',    # ServiceC refuses to connect
);
my $ua = Mojo::UserAgent->new;
my @promises = map { $ua->get_p($_) } @urls;
Mojo::Promise->all( @promises )->then(
    sub {
        for my $tx (map { $_->[0] } @_) {
            print "Service result: ", $tx->res->text;
        }    # end for
    }    # end sub
)->catch(
    sub {
        for my $err (map { $_->[0] } @_) {
            print "ERROR: $err";
        }    # end for
    }    # end sub
)->wait;
I think I'd make it simpler. Give each Promise its own handlers, then simply put all of those together. Inside the code refs in then, do whatever you need to do:
#!perl
use v5.10;

use Mojo::Promise;
use Mojo::UserAgent;

my @urls = qw(
    https://www.yahoo.com
    https://hostB/serviceB
    https://hostC/serviceC
);

my $ua = Mojo::UserAgent->new;

my @promises = map {
    my $url = $_;
    $ua->get_p( $url )->then(
        sub { say "$url connected" },
        sub { say "$url failed" },
    );
} @urls;

Mojo::Promise->all( @promises )->wait;
This outputs which connected or failed, although I could have also marked their status in some data structure or database:
https://hostB/serviceB failed
https://hostC/serviceC failed
https://www.yahoo.com connected
I have many other Promises examples in Mojo Web Clients.
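If your Mojolicious is recent enough, Mojo::Promise->all_settled is another option (check that your installed version has it; the exact shape of the results below is my recollection of the API, so verify against the docs). Unlike all, it never rejects early; every promise reports its own outcome:

use v5.10;
use Mojo::Promise;
use Mojo::UserAgent;

my $ua       = Mojo::UserAgent->new;
my @promises = map { $ua->get_p($_) } @urls;

Mojo::Promise->all_settled(@promises)->then(sub {
    for my $outcome (@_) {
        if ($outcome->{status} eq 'fulfilled') {
            my ($tx) = @{ $outcome->{value} };    # get_p settles with the transaction
            say $tx->req->url, ': ', $tx->res->code;
        }
        else {
            say 'failed: ', join ' ', @{ $outcome->{reason} };
        }
    }
})->wait;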

Mojolicious re-using a previously established connection

I'm trying to reuse a previously established WebSocket connection to avoid the WebSocket handshake. I found that a custom WebSocket transaction can be built using build_websocket_tx, and there's a connection identifier for every WebSocket connection which can be retrieved using the connection method defined in Mojo::Transaction. Can I somehow combine both of these to reuse the connection? Is there another way to do so?
PS: WebSocket connections are supposed to be persistent and reusable, but Mojolicious doesn't provide any such options for WebSocket connections.
EDIT
Example code without connection re-use.
#!/usr/bin/perl
use strict;
use warnings;

use Mojo::UserAgent;
use JSON qw|encode_json|;

my $ua  = Mojo::UserAgent->new;
my $url = "wss://trello.com/1/Session/socket";
$| = 1;

sub _connect {
    my $req = {
        type  => "ping",
        reqid => 0,
    };
    $ua->websocket(
        $url => sub {
            my ($ua, $tx) = @_;
            die "error: ", $tx->res->error->{message}, "\n" if $tx->res->error;
            die 'Not a websocket connection' unless $tx->is_websocket;
            # Connection established.
            $tx->on(
                message => sub {
                    my ($tx, $msg) = @_;
                    print "$msg\n";
                    $tx->closed;    # Close connection
                });
            $tx->send(encode_json($req));
        });
}

sub reuse_conn {
    # Re-use connection
}
sub reuse_conn {
# Re use connection
}
_connect();
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
Preliminaries: to run a Mojolicious client script with debugging information:
MOJO_EVENTEMITTER_DEBUG=1 MOJO_USERAGENT_DEBUG=1 perl mua.pl
As of version 7.43, Mojo::UserAgent has built-in connection pooling but specifically refuses to use it for WebSockets. This may well be because, as you said, WebSockets are stateful, and if the UserAgent blindly reused connections, that could cause chaos. However, if your application knows how to safely reuse them, that is a different matter.
This question came up recently in the IRC channel for Mojolicious, and sri, the author, said:
16:28 sri mohawk: some frameworks like phoenix have their own higher level protocol on top of websockets to multiplex multiple channels https://hexdocs.pm/phoenix/channels.html
16:28 sounds like that's what you want
16:28 mojolicious should have something like that, but doesn't yet
[...]
16:42 sri it's not hard to build on top of mojolicious, but for now you have to do that yourself
16:42 ultimately i'd hope for us to have it in core, without the message bus part
16:43 but channel management and routing
[...]
16:50 jberger mohawk I did write Mojolicious::Plugin::Multiplex which might help
16:51 For an example of a higher level tool
I acknowledge OP said:
The only way I was able to reuse connection was by storing the transaction object and use it in subsequent calls.
However, as the code currently is, it appears this is the only way. This code demonstrates how to make, maintain, and use your own connection pool:
#!/usr/bin/perl
use strict;
use warnings;

use Mojo::UserAgent;
use Time::HiRes qw(time);

$| = 1;

my $REQ = {
    type  => "ping",
    reqid => 0,
};
my $URL       = "wss://trello.com/1/Session/socket";
my $SECONDS   = 2;
my $POOL_SIZE = 5;

my $ua = Mojo::UserAgent->new;
my @pool;

sub make_conn {
    my ($ua, $url, $pool) = @_;
    $ua->websocket($URL => sub {
        my (undef, $tx) = @_;
        die "error: ", $tx->res->error->{message}, "\n" if $tx->res->error;
        die 'Not a websocket connection' unless $tx->is_websocket;
        push @$pool, $tx;
    });
}

# pool gets pushed onto, shifted off, so using the least-recently-used connection
sub send_message {
    my ($pool, $request, $start) = @_;
    my $tx = shift @$pool;
    die "got bad connection" unless $tx;    # error checking needs improving
    $tx->once(message => sub {
        my (undef, $msg) = @_;
        print "got back: $msg\n";
        print "took: ", time - $start, "\n";
        push @$pool, $tx;
    });
    $tx->send({json => $request});
}

make_conn($ua, $URL, \@pool) for (1 .. $POOL_SIZE);    # establish pool

# every 2 secs, send a message
my $timer_cb;
$timer_cb = sub {
    my $loop = shift;
    print "every $SECONDS\n";
    send_message(\@pool, $REQ, time);
    $loop->timer($SECONDS => $timer_cb);
};
Mojo::IOLoop->timer($SECONDS => $timer_cb);
Mojo::IOLoop->start unless Mojo::IOLoop->is_running;
At a high level, it works like this:
make 5 connections in the pool
send_message uses the least-recently-used connection in that pool
send a message every two seconds, registering a one-time "on message" callback to deal with the response
For the sake of simplicity, it does not check connections are still working when it gets them from the pool, and relies on the first two-second delay to initialise all 5 connections.
The use of time calls demonstrates the speed gain from using this pool. Your provided code takes (on my system) around 300ms to start up the connection, then send and receive. Using the pool, it is taking around 120ms.
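If you want to harden the pool, one approach is to skip dead connections when taking one out; a sketch, assuming Mojo::Transaction's is_websocket and is_finished checks are sufficient for your case (get_live_conn is a made-up helper name):

# Hypothetical replacement for the plain shift in send_message.
sub get_live_conn {
    my ($pool) = @_;
    while (my $tx = shift @$pool) {
        return $tx if $tx->is_websocket && !$tx->is_finished;
    }
    return undef;    # pool drained; caller could call make_conn to refill it
}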

mojolicious script works three times, then crashes

The following script should demonstrate a problem I'm facing using Mojolicious on OpenBSD5.2 using mod_perl.
The script works fine 4 times being called as CGI under mod_perl. Additional runs of the script result in Mojolicious not returning the asynchronous posts. The subs that are usually called when data is arriving just don't seem to be called anymore. Running the script from command line works fine since perl is then completely started from scratch and everything is reinitialized, which is not the case under mod_perl. Stopping and starting Apache reinitializes mod_perl so that the script can be run another 4 times.
I only tested this on OpenBSD 5.2, using Mojolicious in the version provided in OpenBSD's ports tree (2.76). This is kinda old, I think, but that's what OpenBSD comes with.
Am I doing something completely wrong here? Or is it possible that Mojolicious has some circular reference or something which causes this issue?
I have no influence on the platform (OpenBSD) being used. So please don't suggest to "use Linux and install latest Mojolicious version". However if you are sure that running a later version of Mojolicous will solve the problem, I might get the permission to install that (though I don't yet know how to do that).
Thanks in advance!
T.
Here's the script:
#!/usr/bin/perl
use diagnostics;
use warnings;
use strict;
use feature qw(switch);

use CGI qw/:param/;
use CGI qw/:url/;
use CGI::Carp qw(fatalsToBrowser warningsToBrowser);
use Mojo::IOLoop;
use Mojo::JSON;
use Mojo::UserAgent;

my ($activeconnections, $md5, $cgi);

my $ua = Mojo::UserAgent->new;
$ua->max_redirects(0)->connect_timeout(3)->request_timeout(6);    # Timeout 6 seconds of which 3 may be connecting
my $delay = Mojo::IOLoop->delay();

sub online {
    my $url = "http://www.backgroundtask.eu/Systeemtaken/Search.php";
    $delay->begin;
    $activeconnections++;
    my $response_bt = $ua->post_form($url, { 'ex' => $md5 }, sub {
        my ($ua, $tx) = @_;
        my $content = $tx->res->body;
        $content =~ m/(http:\/\/www\.backgroundtask\.eu\/Systeemtaken\/taakinfo\/.*$md5\/)/;
        if ($1) {
            print "getting $1\n";
            my $response_bt2 = $ua->get($1, sub {
                $delay->end();
                $activeconnections--;
                print "got result, ActiveConnections: $activeconnections\n";
                ($ua, $tx) = @_;
                my $filename = $tx->res->dom->find('table.view')->[0]->find('tr.even')->[2]->td->[1]->all_text;
                print "fn = " . $filename . "\n";
            })
        } else {
            print "query did not return a result\n";
            $activeconnections--;
            $delay->end;
        }
    });
}

$cgi = new CGI;
print $cgi->header(-cache_control => "no-cache, no-store, must-revalidate") . "\n";
$md5 = lc($cgi->param("md5") || "");                # read param
$md5 =~ s/[^a-f0-9]*//g if (length($md5) == 32);    # custom input filter for md5 values only
if (length $md5 != 32) {
    $md5 = lc($ARGV[0]);
    $md5 =~ s/[^a-f0-9]*//g;
    die "invalid MD5 $md5\n" if (length $md5 ne 32);
}

online;
if ($activeconnections) {
    print "waiting..., activeconnections: $activeconnections\n" for $delay->wait;
}
print "all pending requests completed, activeconnections is " . $activeconnections . "\n";
print "script done.\n md5 was $md5\n";
exit 0;
Well, I hate to say it, but there's a lot wrong here. The most glaring problem is your use of ... for $delay->wait, which doesn't make much sense. Also, you are comparing numbers with ne rather than !=. Not my-ing the arguments in the deeper callback also seems problematic for async-style code.
Then there are some code smells, like regexing for URLs and closing over the $md5 variable unnecessarily.
Lastly, why use CGI.pm when Mojolicious can operate under CGI just fine? When you do that, the IOLoop is already running, so some things get easier. And yes, I understand that you are using the system-provided Mojolicious; however, I feel I should mention that the current version is 3.93 :-)
Anyway, here is an example which strips out a lot of things but should still do pretty much the same thing as the original. Of course I can't test it without a valid md5 for the site (and I also can't get rid of the URL regex without sample data).
#!/usr/bin/perl
use Mojolicious::Lite;
use Mojo::UserAgent;

my $ua = Mojo::UserAgent->new;
$ua->max_redirects(0)->connect_timeout(3)->request_timeout(6);    # Timeout 6 seconds of which 3 may be connecting

any '/' => sub {
    my $self = shift;
    $self->res->headers->cache_control("no-cache, no-store, must-revalidate");

    my $md5 = lc($self->param("md5") || "");            # read param
    $md5 =~ s/[^a-f0-9]*//g if (length($md5) == 32);    # custom input filter for md5 values only
    if (length $md5 != 32) {
        $md5 = lc($ARGV[0]);
        $md5 =~ s/[^a-f0-9]*//g;
        die "invalid MD5 $md5\n" if (length $md5 != 32);
    }

    $self->render_later;    # wait for ua

    my $url = "http://www.backgroundtask.eu/Systeemtaken/Search.php";
    $ua->post_form($url, { 'ex' => $md5 }, sub {
        my ($ua, $tx) = @_;
        my $content = $tx->res->body;
        $content =~ m{(http://www\.backgroundtask\.eu/Systeemtaken/taakinfo/.*$md5/)};
        return $self->render( text => 'Failed' ) unless $1;
        $ua->get($1, sub {
            my ($ua, $tx) = @_;
            my $filename = $tx->res->dom->find('table.view')->[0]->find('tr.even')->[2]->td->[1]->all_text;
            $self->render( text => "md5 was $md5, filename was $filename" );
        });
    });
};

app->start;
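As a usage note (hedged, since commands vary between Mojolicious versions and 2.76 is old): Mojolicious applications auto-detect CGI and PSGI environments in app->start, so the same script should run unchanged as an Apache CGI, or standalone for testing:

perl script.pl daemon --listen 'http://*:8080'   # standalone daemon for testing
plackup script.pl                                # under a PSGI server, if Plack is installed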

Make Perl web server deliver an ogg through HTTP to Chrome for a HTML5 audio element

I am writing a Perl script that acts as a simple web server that serves audio files over HTML5. I have succeeded in getting it to show a page to a web browser with an HTML5 audio element. It continues to listen to the socket for when the browser asks for an audio file via a GET request; hh.ogg in this example and tries to respond with the ogg inside the message body. It works over port 8888.
#!/usr/bin/perl
use strict;
use warnings;
use IO::Socket;

my $port = 8888;

my $server = new IO::Socket::INET( Proto     => 'tcp',
                                   LocalPort => $port,
                                   Listen    => SOMAXCONN,
                                   ReuseAddr => 1)
    or die "Unable to create server socket";

# Server loop
while(my $client = $server->accept())
{
    my $client_info;
    my $faviconRequest = 0;
    while(<$client>)
    {
        last if /^\r\n$/;
        $faviconRequest = 1 if ($_ =~ m/favicon/is);
        print "\n$_" if ($_ =~ m/GET/is);
        $client_info .= $_;
    }
    if ($faviconRequest == 1)
    {
        # Ignore favicon requests for now
        print "Favicon request, ignoring and closing client";
        close($client);
    }
    incoming($client, $client_info) if ($faviconRequest == 0);
}

sub incoming
{
    print "\n=== Incoming Request:\n";
    my $client = shift;
    print $client &buildResponse($client, shift);
    print "Closing \$client";
    close($client);
}

sub buildResponse
{
    my $client      = shift;
    my $client_info = shift;
    my $re1 = '.*?';
    my $re2 = '(hh\\.ogg)';
    my $re  = $re1 . $re2;
    print "client info is $client_info";
    # Send the file over socket if it's the ogg the browser wants.
    return sendFile($client) if ($client_info =~ m/$re/is);
    my $r = "HTTP/1.0 200 OK\r\nContent-type: text/html\r\n\r\n
    <html>
    <head>
        <title>Hello!</title>
    </head>
    <body>
        Hello World.
        <audio src=\"hh.ogg\" controls=\"controls\" preload=\"none\"></audio>
    </body>
    </html>";
    return $r;
}

sub sendFile
{
    print "\n>>>>>>>>>>>>>>>>>>>>>>> sendFile";
    my $client = shift;
    open my $fh, '<', 'hh.ogg';
    my $size = -s $fh;
    print "\nsize: $size";
    print $client "Allow: GET\015\012";
    print $client "Accept-Ranges: none\015\012";
    print $client "Content-Type: \"audio/ogg\"\015\012";
    print $client "Content-Length: $size\015\012";
    print "\nsent headers before sending file";
    ############################################
    # Take the filehandle and send it over the socket.
    my $scalar = do { local $/; <$fh> };
    my $offset = 0;
    while(1)
    {
        print "\nsyswriting to socket. Offset: $offset";
        $offset += syswrite($client, $scalar, $size, $offset);
        last if ($offset >= $size);
    }
    print "Finished writing to socket.";
    close $fh;
    return "";
}
The sendFile subroutine is called when the GET request matches a regex for hh.ogg.
I send a few headers in the response before writing the ogg to the socket before closing.
This code works exactly as I'd expect in Firefox. When I press play the script receives a GET from Firefox asking for the ogg, I send it over and Firefox plays the track.
My problem is the script crashes in Google Chrome. Chrome's developer tools just says it cannot retrieve hh.ogg. When I visit 127.0.0.1:8888 in my browser while the script is running I can download hh.ogg. I have noticed that Chrome will make multiple GET requests for hh.ogg whereas Firefox just makes one. I've read that it may do this for caching reasons? This could be a reason as to why the script crashes.
I have
print $client "Accept-Ranges: none\015\012";
to try and stop this behaviour but it didn't work.
I'm not sure of exactly what headers to respond to Chrome to let it receive the file within one HTTP response. When the script crashes I also occasionally get this message printed out from Perl; otherwise there are no other errors. It will quit somewhere inside the while loop where I syswrite() to the socket.
Use of uninitialized value in addition (+) at ./test.pl line 91, <$fh> line 1.
Which is referring to this line.
$offset += syswrite($client, $scalar, $size, $offset);
I don't know why there would be any uninitialized values.
Would anyone have any ideas why this could be happening? If at all possible I'd like to accomplish this without requiring additional modules from CPAN.
Use a real web server that is already working and thoroughly debugged instead of messing with sockets yourself. The Web is always more complicated than you think. Run the following app with plackup --port=8888.
use HTTP::Status qw(HTTP_OK);
use Path::Class qw(file);
use Plack::Request qw();
use Router::Resource qw(router resource GET);

my $app = sub {
    my ($env) = @_;
    my $req = Plack::Request->new($env);
    my $router = router {
        resource '/' => sub {
            GET {
                return $req->new_response(
                    HTTP_OK,
                    [Content_Type => 'application/xhtml+xml;charset=UTF-8'],
                    [ '… HTML …' ]    # array of strings or just one big string
                )->finalize;
            };
        };
        resource '/hh.ogg' => sub {
            GET {
                return $req->new_response(
                    HTTP_OK,
                    [Content_Type => 'audio/vorbis'],
                    file(qw(path to hh.ogg))->resolve->openr    # file handle
                )->finalize;
            };
        };
    };
    $router->dispatch($env);
};
Your error says "Use of uninitialized value in addition", which means the problem is not inside the syswrite but in the += operation: syswrite() returns undef if there is an error, which seems consistent with your overall problem with Chrome. The $! variable contains some information about the write error.
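Building on that, a defensive version of the write loop that checks syswrite's return value instead of blindly adding it (a sketch keeping the question's variable names; the SIGPIPE line matters because a peer that hangs up otherwise kills the whole script):

local $SIG{PIPE} = 'IGNORE';    # Chrome closing the socket mid-transfer raises SIGPIPE
my $offset = 0;
while ($offset < $size) {
    my $wrote = syswrite($client, $scalar, $size - $offset, $offset);
    if (!defined $wrote) {
        # write failed; $! says why, and we stop instead of adding undef to $offset
        print "\nsyswrite error: $!";
        last;
    }
    $offset += $wrote;
}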

True timeout on LWP::UserAgent request method

I am trying to implement a request to an unreliable server. The request is a nice-to-have, but not 100% required for my Perl script to complete successfully. The problem is that the server will occasionally deadlock (we're trying to figure out why), and the request will never succeed. Since the server thinks it is live, it keeps the socket connection open, so LWP::UserAgent's timeout value does us no good whatsoever. What is the best way to enforce an absolute timeout on a request?
FYI, this is not a DNS problem. The deadlock has something to do with a massive number of updates hitting our Postgres database at the same time. For testing purposes, we've essentially put a while(1) {} line in the server's response handler.
Currently, the code looks like so:
my $ua = LWP::UserAgent->new;
$ua->timeout(5);
$ua->cookie_jar({});

my $req = HTTP::Request->new(POST => "http://$host:$port/auth/login");
$req->content_type('application/x-www-form-urlencoded');
$req->content("login[user]=$username&login[password]=$password");

# This line never returns
$res = $ua->request($req);
I've tried using signals to trigger a timeout, but that does not seem to work.
eval {
    local $SIG{ALRM} = sub { die "alarm\n" };
    alarm(1);
    $res = $ua->request($req);
    alarm(0);
};

# This never runs
print "here\n";
The final answer I'm going to use was proposed by someone offline, but I'll mention it here. For some reason, Sys::SigAction works while $SIG{ALRM} does not. Still not sure why, but this has been tested to work. Here are two working versions:
# Takes an LWP::UserAgent and an HTTP::Request, returns an HTTP::Response
sub ua_request_with_timeout {
    my $ua  = $_[0];
    my $req = $_[1];
    # Get whatever timeout is set for LWP and use that to
    # enforce a maximum timeout per request in case of server
    # deadlock. (This has happened.)
    use Sys::SigAction qw( timeout_call );
    our $res = undef;
    if( timeout_call( 5, sub { $res = $ua->request($req); } ) ) {
        return HTTP::Response->new( 408 );    # 408 is the HTTP timeout
    } else {
        return $res;
    }
}
sub ua_request_with_timeout2 {
    print "ua_request_with_timeout\n";
    my $ua  = $_[0];
    my $req = $_[1];
    # Get whatever timeout is set for LWP and use that to
    # enforce a maximum timeout per request in case of server
    # deadlock. (This has happened.)
    my $timeout_for_client = $ua->timeout() - 2;
    our $socket_has_timedout = 0;
    use POSIX;
    sigaction SIGALRM, new POSIX::SigAction(
        sub {
            $socket_has_timedout = 1;
            die "alarm timeout";
        }
    ) or die "Error setting SIGALRM handler: $!\n";
    my $res = undef;
    eval {
        alarm ($timeout_for_client);
        $res = $ua->request($req);
        alarm(0);
    };
    if ( $socket_has_timedout ) {
        return HTTP::Response->new( 408 );    # 408 is the HTTP timeout
    } else {
        return $res;
    }
}
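For illustration, a hedged usage sketch of the helpers above ($host, $port, $username, and $password are placeholders from the question):

my $ua = LWP::UserAgent->new;
$ua->timeout(5);

my $req = HTTP::Request->new(POST => "http://$host:$port/auth/login");
$req->content_type('application/x-www-form-urlencoded');
$req->content("login[user]=$username&login[password]=$password");

my $res = ua_request_with_timeout($ua, $req);
if ($res->code == 408) {
    warn "request timed out; continuing without it\n";    # the nice-to-have case
}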
You might try LWPx::ParanoidAgent, a subclass of LWP::UserAgent which is more cautious about how it interacts with remote webservers.
Among other things, it allows you to specify a global timeout. It was developed by Brad Fitzpatrick as part of the LiveJournal project.
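A sketch of the drop-in usage, assuming its documented behaviour that timeout() caps the entire request rather than individual socket reads:

use LWPx::ParanoidAgent;

my $ua = LWPx::ParanoidAgent->new;
$ua->timeout(5);    # total seconds for the whole request, per the module's docs
# Note: ParanoidAgent also refuses private/loopback addresses by default,
# so an internal $host may need to be allowed via whitelisted_hosts().
my $res = $ua->request($req);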
You can make your own timeout like this:
use LWP::UserAgent;
use IO::Pipe;

my $agent    = new LWP::UserAgent;
my $finished = 0;
my $timeout  = 5;

$SIG{CHLD} = sub { wait, $finished = 1 };

my $pipe = new IO::Pipe;
my $pid  = fork;
if($pid == 0) {
    $pipe->writer;
    my $response = $agent->get("http://stackoverflow.com/");
    $pipe->print($response->content);
    exit;
}

$pipe->reader;
sleep($timeout);
if($finished) {
    print "Finished!\n";
    my $content = join('', $pipe->getlines);
}
else {
    kill(9, $pid);
    print "Timed out.\n";
}
From what I understand, the timeout property doesn't take into account DNS timeouts. It's possible that you could make a DNS lookup separately, then make the request to the server if that works, with the correct timeout value set for the useragent.
Is this a DNS problem with the server, or something else?
EDIT: It could also be a problem with IO::Socket. Try updating your IO::Socket module, and see if that helps. I'm pretty sure there was a bug in there that was preventing LWP::UserAgent timeouts from working.
Alex
The following generalization of one of the original answers also restores the alarm signal handler to the previous handler, and adds a second call to alarm(0) in case the code in the eval block throws a non-alarm exception and we want to cancel the alarm. Further $@ inspection and handling can be added:
sub ua_request_with_timeout {
    my $ua      = $_[0];
    my $request = $_[1];
    # Get whatever timeout is set for LWP and use that to
    # enforce a maximum timeout per request in case of server
    # deadlock. (This has happened.)
    my $timeout_for_client_sec = $ua->timeout();
    our $res_has_timedout = 0;
    use POSIX ':signal_h';
    my $newaction = POSIX::SigAction->new(
        sub { $res_has_timedout = 1; die "web request timeout"; },    # the handler code ref
        POSIX::SigSet->new(SIGALRM),
        # not using (perl 5.8.2 and later) 'safe' switch or sa_flags
    );
    my $oldaction = POSIX::SigAction->new();
    if(!sigaction(SIGALRM, $newaction, $oldaction)) {
        log('warn', "Error setting SIGALRM handler: $!");
        return $ua->request($request);
    }
    my $response = undef;
    eval {
        alarm ($timeout_for_client_sec);
        $response = $ua->request($request);
        alarm(0);
    };
    alarm(0);    # cancel alarm (if eval failed because of non-alarm cause)
    if(!sigaction(SIGALRM, $oldaction)) {
        log('warn', "Error resetting SIGALRM handler: $!");
    }
    if ( $res_has_timedout ) {
        log('warn', "Timeout($timeout_for_client_sec sec) while waiting for a response from cred central");
        return HTTP::Response->new(408);    # 408 is the HTTP timeout
    } else {
        return $response;
    }
}