Why would a hot deploy of Hypnotoad rerun old HTTP requests? - perl

The nutshell:
When I do a hot deployment of Hypnotoad sometimes the new server immediately processes a slew of HTTP requests that were already handled by the previous server.
If a response has been rendered but the handler is still doing some processing, does Mojo/Hypnotoad retain the request until the processing has finished? Do I need to tell the server that the HTTP request is resolved?
The long version:
I have a Mojolicious::Lite app running under Hypnotoad.
The app's function is to accept HTTP requests from another service.
We are processing jobs that progress through a series of states.
At each job state change the app is notified with an HTTP request.
This is a busy little script - receiving more than 1000 req/hour.
The script's job is to manipulate some data: doing DB updates, editing files, sending mail.
In an effort to keep things moving along, when it receives the HTTP request it sanity-checks the data it received. If the data looks good it sends a 200 response to the caller immediately and then continues on to do the more time-consuming tasks. (I'm guessing this is the underlying cause.)
When I hot deploy - by rerunning the start script (which runs 'localperl/bin/hypnotoad $RELDIR/etc/bki/bki.pl') - some requests that were already handled are sent to the new server and reprocessed.
Why are these old transactions still being held by the original server? Many have been long since completed!
Do I need to tell Mojolicious that the request is done before it goes off and messes with the data?
(I considered $c->finish() but that is just for sockets?)
How does Hypnotoad decide which requests should be passed to its replacement server?
Here is some pseudo code showing what I'm doing:
get '/jobStateChange/:jobId/:jobState/:jobCause' => sub {
    my $c = shift;
    my $jobId = $c->stash("jobId");
    return $c->render(text => "invalid jobId: $jobId", status => 400) unless $jobId =~ /^\d+$/;
    my $jobState = $c->stash("jobState");
    return $c->render(text => "invalid jobState: $jobState", status => 400) unless $jobState =~ /^\d+$/;
    my $jobCause = $c->stash("jobCause");
    return $c->render(text => "invalid jobCause: $jobCause", status => 400) unless $jobCause =~ /^\d+$/;
    my $jobLocation = $c->req->param('jobLocation');
    if ($jobLocation) { $jobLocation = $ENV{'DATADIR'} . "/jobs/" . $jobLocation; }
    unless ( $jobLocation && -d $jobLocation ) {
        app->log->debug("determining jobLocation because passed jobLocation isn't usable");
        $jobLocation = getJobLocation($jobId);
        $c->stash("jobLocation", $jobLocation);
    }
    # TODO - more validation? would BKI lie to us?
    return if $c->tx->res->code && 400 == $c->tx->res->code; # return if we rendered an error above
    # tell BKI we're all set ASAP
    $c->render(text => 'ok');
    handleJobStatusUpdate($c, $jobId, $jobState, $jobCause, $jobLocation);
};

sub handleJobStatusUpdate {
    my ($c, $jobId, $jobState, $jobCause, $jobLocation) = @_;
    app->log->info("job $jobId, state $jobState, cause $jobCause, loc $jobLocation");
    # set the job states in jobs
    app->work_db->do($sql, undef, @params);
    if ($jobState == $SOME_JOB_STATE) {
        ... do stuff ...
        ... uses $c->stash to hold data used by other functions
    }
    if ($jobState == $OTHER_JOB_STATE) {
        ... do stuff ...
        ... uses $c->stash to hold data used by other functions
    }
}

Your request will not be complete until the request handler returns. This little app, for example, will take 5 seconds to output "test":
# test.pl
use Mojolicious::Lite;
get '/test' => sub { $_[0]->render( text => "test" ); sleep 5 };
app->start;
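To see this from the client's side (assuming the built-in server on its default port 3000), the blocked response is easy to observe:

$ perl test.pl daemon
# in a second terminal:
$ time curl http://127.0.0.1:3000/test    # "test" arrives only after ~5 seconds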
The workaround for your app would be to run handleJobStatusUpdate in a background process.
get '/jobStateChange/:jobId/:jobState/:jobCause' => sub {
    my $c = shift;
    my $jobId = $c->stash("jobId");
    my $jobState = $c->stash("jobState");
    my $jobCause = $c->stash("jobCause");
    my $jobLocation = $c->req->param('jobLocation');
    ...
    $c->render(text => 'ok');
    if (fork() == 0) {
        handleJobStatusUpdate($c, $jobId, $jobState, $jobCause, $jobLocation);
        exit;
    }
};
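One caveat worth adding (my note, not part of the original answer): in a long-running server, exited children must be reaped or they linger as zombies, and fork() returns undef on failure. A minimal, hedged guard around the same fork:

# Auto-reap finished children so the Hypnotoad worker does not
# accumulate zombie processes from completed background jobs.
$SIG{CHLD} = 'IGNORE';

my $pid = fork();
die "fork failed: $!" unless defined $pid;    # fork() returns undef on failure
if ($pid == 0) {
    handleJobStatusUpdate($c, $jobId, $jobState, $jobCause, $jobLocation);
    exit 0;    # never fall back into the server's request loop
}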

Related

Delayed response to slash command with Mojolicious in Perl

I am trying to create a Slack application in Perl with Mojolicious and I have the following use case:
Slack sends a request to my API from a slash command and needs a response within a 3-second timeframe. However, Slack also gives me the opportunity to send up to 5 more responses in a 30-minute timeframe, but it still needs the initial response within 3 seconds (it just sends a "late_response_url" in the initial callback so that I can POST something to that URL later on). In my case I would like to send an initial response to Slack to inform the user that the operation is "running" and, after a while, send the actual outcome of my slow function to Slack.
Currently, I can do this by spawning a second process using fork() and using one process to respond immediately as Slack dictates and the second to do the rest of the work and respond later on.
I am trying to do this with Mojolicious' subprocesses to avoid using fork(). However I can't find a way to get this to work...
A sample of what I am already doing with fork is like this:
sub withpath
{
    my $c = shift;
    my $user = $c->param('user_name');
    my $response_url = $c->param('response_url');    # Slack's late-response URL
    my $response_body = {
        response_type => "ephemeral",
        text => "Running for $user:",
        attachments => [
            { text => 'analyze' },
        ],
    };
    my $pid = fork();
    if($pid != 0){
        $c->render( json => $response_body );
    }else{
        my $output = do_time_consuming_things();
        $response_body = {
            response_type => "in-channel",
            text => "Result for $user:",
            attachments => [
                { text => $output },
            ],
        };
        my $ua = Mojo::UserAgent->new;
        my $tx = $ua->post(
            $response_url,
            { Accept => '*/*' },
            json => $response_body,
        );
        if( my $res = $tx->success )
        {
            print "\n success \n";
        }
        else
        {
            my $err = $tx->error;
            print "$err->{code} response: $err->{message}\n" if $err->{code};
            print "Connection error: $err->{message}\n" unless $err->{code};
        }
    }
}
So the problem is that no matter what I tried, I couldn't replicate the exact same code with Mojolicious' subprocesses. Any ideas?
Thanks in advance!
Actually I just found a solution to my problem!
So here is my solution:
my $c = shift;                                   # receive request
my $user = $c->param('user_name');               # get parameters
my $response_url = $c->param('response_url');
my $text = $c->param('text');
my $response_body = {                            # create the immediate response that Slack is waiting for
    response_type => "ephemeral",
    text => "Running for $user:",
    attachments => [
        { text => 'analyze' },
    ],
};
my $subprocess = Mojo::IOLoop::Subprocess->new;  # create the subprocess
$subprocess->run(
    # this first callback is the actual subprocess that will run in the
    # background; it contains the POST request from my "fork" code (with
    # the output) that sends the late response to Slack
    sub { do_time_consuming_things($user, $response_url, $text) },
    # this second callback runs in the parent when the subprocess
    # finishes; it does nothing useful here but is needed by Mojo
    sub {
        my ($subprocess, $err, @results) = @_;
        say $err if $err;
        say "\n\nok\n\n";
    }
);
# and here is the actual immediate response, outside of the subprocess, so
# that the server does not wait for the subprocess to finish before responding!
$c->render( json => $response_body );
So I actually simply had to put my do_time_consuming_things code in the first callback (so that it runs as a subprocess), use the second callback (which is linked to the parent process) as a do-nothing handler, and keep my "immediate" response in the main body of the function instead of putting it inside one of the subprocess callbacks. See the code comments for more information!
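As a side note (an assumption on my part; verify against your installed version): newer Mojolicious releases expose the same mechanism directly on the event loop via Mojo::IOLoop->subprocess, which saves constructing the object by hand. A minimal sketch, placed inside the handler just like the code above:

use Mojo::Base -strict;    # enables strict, warnings, and say()
use Mojo::IOLoop;

# First callback runs in a forked child; the second runs in the parent
# with the child's return values once it finishes.
Mojo::IOLoop->subprocess(
    sub { do_time_consuming_things($user, $response_url, $text) },
    sub {
        my ($subprocess, $err, @results) = @_;
        say $err if $err;
    },
);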

Race between pool requests in Guzzle

I'm doing multiple concurrent API requests using the Guzzle Pool.
Everything's working fine.
But I want to stop/cancel all remaining requests as soon as any one of them responds. That is, I want to race the requests against each other. Is this possible using Guzzle in Laravel?
Here's what I've done so far:
$requests = function (array $urls) {
    foreach ($urls as $url) {
        yield new Request('GET', $url);
    }
};

$pool = new Pool($client,
    $requests($urls),
    [
        'concurrency' => 5,
        'fulfilled' => function ($response, $index) use ($urls) {
            echo "<br>Completed " . $urls[$index];
        },
        'rejected' => function ($reason, $index) {
            echo "Rejected " . $index;
        },
    ]);

$promise = $pool->promise();
$promise->wait();
$urls is an array of URIs
I don't think it is possible with the current implementation of the Guzzle Pool. The only thing you can possibly do with it is to exit; in the fulfilled function:
'fulfilled' => function ($response, $index) use ($urls) {
    echo "Completed " . $urls[$index];
    exit;
},
In this case it will still send all the requests, but immediately exit the script on the fastest response.
Without the Pool you can use GuzzleHttp\Promise\any or GuzzleHttp\Promise\some helper functions:
use GuzzleHttp\Client;
use GuzzleHttp\Promise;

$client = new Client(['base_uri' => 'http://site.local/']);

// Initiate each request but do not block
$promises = [
    'delay3' => $client->getAsync('/async/delay3.php'),
    'delay2' => $client->getAsync('/async/delay2.php'),
    'delay1' => $client->getAsync('/async/delay1.php'),
];

// Initiate a competitive race between multiple promises
$promise = Promise\any($promises)->then(
    function (\GuzzleHttp\Psr7\Response $response) {
        echo "Completed: " . $response->getStatusCode() . "\n";
        echo $response->getBody() . "\n";
    },
    function ($reason) {
        echo $reason;
    }
);

$results = $promise->wait();
From the docs for GuzzleHttp\Promise\some($count, $promises):
Initiate a competitive race between multiple promises or values
(values will become immediately fulfilled promises).
When count amount of promises have been fulfilled, the returned
promise is fulfilled with an array that contains the fulfillment
values of the winners in order of resolution.
This promise is rejected with a {@see
GuzzleHttp\Promise\AggregateException} if the number of fulfilled
promises is less than the desired $count.
From the docs for GuzzleHttp\Promise\any($promises):
Like some(), with 1 as count. However, if the promise fulfills, the
fulfillment value is not an array of 1 but the value directly.
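If you need more than one winner, here is a hedged sketch of some() reusing the $promises array from the example above (my illustration, not from the original answer):

use GuzzleHttp\Promise;

// Fulfilled once the two fastest of the three requests succeed.
$promise = Promise\some(2, $promises)->then(
    function (array $responses) {
        // $responses holds the winners in order of resolution
        foreach ($responses as $response) {
            echo "Completed: " . $response->getStatusCode() . "\n";
        }
    }
);
$promise->wait();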

How do I direct a Perl script to check for website response?

I'm pinging a website, checking its availability, and sending an email only when it's down. (That part is working just fine according to the code below.)
require LWP::UserAgent;

my $ua = LWP::UserAgent->new;
$ua->timeout(20);
my $response = $ua->get('https://www.Mysite.net/websuite/');
if (! $response->is_success) {
    #print 'CMM Is up and Running';
    $path = "C:\\prac\\send_email_failure.ps1";
    $pwspath = "c:\\windows\\system32\\windowspowershell\\v1.0\\powershell.exe";
    system("$pwspath -command $path"); # using PowerShell to invoke the email utility
}
Now I'm working on expanding the script to handle two things:
It should send an email once the site goes down (which it's doing now) and not send another email until the site has been brought back up. By the way, I'm using Windows Task Scheduler to run the script every twenty minutes.
After it sees the website is up, it should go back to its normal process of checking whether the site is down again and sending email (for example: the website went down, then was brought back up, and then went down again).
Any help appreciated.
If your script is executed from some kind of scheduler you'll need to persist the status of your last request somehow. You could for example create a file which flags the last status as "down".
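A minimal sketch of that flag-file approach (the flag path is hypothetical, and the PowerShell command is carried over from your question):

use strict;
use warnings;
use LWP::UserAgent;

my $flag = 'C:\\prac\\site_down.flag';    # hypothetical flag-file location
my $ua = LWP::UserAgent->new(timeout => 20);
my $up = $ua->get('https://www.Mysite.net/websuite/')->is_success;

if (!$up && !-e $flag) {
    # site just went down: remember that, and send a single e-mail
    open my $fh, '>', $flag or die "cannot create flag file: $!";
    close $fh;
    my $path = "C:\\prac\\send_email_failure.ps1";
    my $pwspath = "c:\\windows\\system32\\windowspowershell\\v1.0\\powershell.exe";
    system("$pwspath -command $path");
}
elsif ($up && -e $flag) {
    # site is back up: clear the flag so the next outage alerts again
    unlink $flag or warn "cannot remove flag file: $!";
}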
Or you could simply run your script as a daemon and schedule a check every 20 minutes (for example with AnyEvent). This way you wouldn't have to cope with filesystem related issues.
use LWP::UserAgent;
use AnyEvent;

my $previous = 1;
my $watch = AnyEvent->timer(interval => 1200, cb => sub {
    if (check_status() == 0) {
        if ($previous == 1) {
            # send e-mail
        }
        $previous = 0;
    }
    else {
        $previous = 1;
    }
});

AnyEvent->condvar->recv;

sub check_status {
    my $ua = LWP::UserAgent->new(timeout => 20);
    my $response = $ua->get('...');
    return $response->is_success ? 1 : 0;
}

Why do I have to send multiple messages to my Jabber bot before it will logout?

I am trying to make my own Jabber bot but I have run into a little trouble. I have gotten my bot to respond to messages; however, if I try to change the bot's presence then it seems as though all of the messages you send to the bot get delayed.
What I mean is: when I run the script, I change the presence so I can see that it is online. Then when I send it a message, it takes three messages before the callback subroutine I have set up for messages gets called. After the third message is sent and the chat subroutine is called, it still processes the first message I sent.
This really doesn't pose too much of a problem except that I have it set up to log out when I send the message "logout", and it has to be followed by two more messages in order to log out. I am not sure what I have to do to fix this, but I think it has something to do with iq packets, because I have an iq callback set as well and it gets called two times after setting the presence.
Here is my source code:
#!/usr/bin/perl
use strict;
use warnings;

# Libraries
use Net::Jabber;
use DBI;
use DBD::mysql;

#--------------- Config Vars -----------------
# Jabber Client
my $jbrHostname = "DOMAINNAME";
my $jbrUserName = "USERNAME";
my $jbrPassword = "PASSWORD";
my $jbrResource = "RESOURCE";
my $jbrBoss = new Net::Jabber::JID();
$jbrBoss->SetJID(userid=>"USERNAME",server=>$jbrHostname);

# MySQL
my $dbHostname = "DOMAINNAME";
my $dbName = "DATABASENAME";
my $dbUserName = "USERNAME";
my $dbPassword = "PASSWORD";
#--------------- End Config -----------------

# connect to the db
my $dbh = DBI->connect("DBI:mysql:database=$dbName;host=$dbHostname", $dbUserName, $dbPassword, {RaiseError => 1}) or die "Couldn't connect to the database: $!\n";

# create a new jabber client and connect to server
my $jabberBot = Net::Jabber::Client->new();
my $status = $jabberBot->Connect(hostname=>$jbrHostname) or die "Cannot connect ($!)\n";
my @results = $jabberBot->AuthSend(username=>$jbrUserName,password=>$jbrPassword,resource=>$jbrResource);
if($results[0] ne "ok")
{
    die "Jabber auth error @results\n";
}

# set jabber bot callbacks
$jabberBot->SetMessageCallBacks(chat=>\&chat);
$jabberBot->SetPresenceCallBacks(available=>\&welcome);
$jabberBot->SetCallBacks(iq=>\&gotIQ);

$jabberBot->PresenceSend(type=>"available");
$jabberBot->Process(1);

sub welcome
{
    $jabberBot->MessageSend(to=>$jbrBoss->GetJID(),subject=>"",body=>"Hello There!",type=>"chat",priority=>10);
    &keepItGoing;
}

$jabberBot->MessageSend(to=>$jbrBoss->GetJID(),subject=>"",body=>"Hello There! Global...",type=>"chat",priority=>10);
#$jabberBot->Process(5);
&keepItGoing;

sub chat
{
    print "Chat Called!\n";
    my ($sessionID,$msg) = @_;
    $jabberBot->MessageSend(to=>$msg->GetFrom(),subject=>"",body=>"Chatting!",type=>"chat",priority=>10);
    if($msg->GetBody() ne 'logout')
    {
        print $msg->GetBody()."\n";
        &keepItGoing;
    }
    else
    {
        &killBot($msg);
    }
}

sub gotIQ
{
    print $_[1]->GetID()."\n";
    &chat;
}

sub keepItGoing
{
    print "Movin' the chains!\n";
    my $proc = $jabberBot->Process(1);
    while(defined($proc) && $proc != 1)
    {
        $proc = $jabberBot->Process(1);
    }
}

sub killBot
{
    $jabberBot->MessageSend(to=>$_[0]->GetFrom(),subject=>"",body=>"Logging Out!",type=>"chat",priority=>10);
    $jabberBot->Process(1);
    $jabberBot->Disconnect();
    exit;
}
Thanks for your help!
You've got resource starvation because of your keepItGoing routine. In general, trying to use XMPP synchronously like this is not going to work. I suggest getting your callbacks set up, then just calling Process() in one loop.
The docs for Process() say:
Process(integer) - takes the timeout period as an argument. If no
timeout is listed then the function blocks until
a packet is received. Otherwise it waits that
number of seconds and then exits so your program
can continue doing useful things. NOTE: This is
important for GUIs. You need to leave time to
process GUI commands even if you are waiting for
packets. The following are the possible return
values, and what they mean:
1 - Status ok, data received.
0 - Status ok, no data received.
undef - Status not ok, stop processing.
IMPORTANT: You need to check the output of every
Process. If you get an undef then the connection
died and you should behave accordingly.
Each time you call Process(), 0 or more of your callbacks will fire. You never know which, since it depends on server timing. If you wait for Process() to return before sending something, you're almost always thinking synchronously rather than asynchronously, which kills you in XMPP.
In your case, if you remove the call to keepItGoing from chat(), I bet things will work more like you expect.
Replace the line:
$jabberBot->Process(1);
with these:
while (defined($jabberBot->Process(1))) {
    # Do stuff here
}

How do I get killed using Perl FCGI?

I'm having a little problem with nginx and the Perl FCGI module. I have a long operation in my FCGI program that may outlive the server (or the user on the server) on the other end of the Unix socket I'm using to communicate FCGI. I need the FCGI accept() loop in my program to break if the FCGI request is closed. I tried installing INT, TERM, etc. signal handlers, but they do nothing, since the only communication between nginx and my program happens over the FCGI socket, AFAIK.
I also tried this but there's no way that I can see to use the FCGI module in Perl to send raw data to or from nginx over the FCGI socket. Is there a way I can do it without modifying the FCGI module to have a "ping" function?
The basic problem is that my program does not know if nginx has terminated the FCGI request.
Example:
#!/usr/bin/perl -w
use strict;
use FCGI;

my $fcgi_socket = FCGI::OpenSocket( '/tmp/test.socket', 100000 );
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $fcgi_socket);

REQUEST: while($request->Accept() >= 0) {
    # begin handling request
    my $result = '';
    while (1) { # or select(), etc
        if (somehow check whether the fcgi $request is still live) {
            next REQUEST;
        }
        # check for results, set $result if done
    }
    print $result;
}
You have to use an FCGI implementation that handles FCGI_ABORT_REQUEST.
You cannot use the following, because they ignore FCGI_ABORT_REQUEST:
FCGI <=v0.69 (the one which you are currently using?)
FCGI-Async <=v0.19
Net-FastCGI <=v0.08
FCGI-EV <=1.0.7
You could use the following, which handles FCGI_ABORT_REQUEST:
Vitaly Kramskikh's AnyEvent-FCGI
When using AnyEvent-FCGI, checking for an aborted request is as easy as calling $request->is_active(). Keep in mind, however, that is_active() will not reflect the true state of the request until the on_request handler returns. This means you have to return from on_request as soon as possible and do the actual work "in parallel" (you probably don't want to use Perl threads, but something more akin to continuations) in order to give the AnyEvent loop the opportunity to process further requests (including FCGI_ABORT_REQUESTs) while you complete the long-winded operations.
I am not familiar enough with AnyEvent to know for sure whether there is a better way of doing this, but here's my take, below, for a start:
use AnyEvent;
use AnyEvent::FCGI;

my @jobs;
my $process_jobs_watcher;

sub process_jobs {
    # cancel aborted jobs
    @jobs = grep {
        if ($_->[0]->is_active) {
            1;
        } else {
            # perform any job cleanup
            0;
        }
    } @jobs;

    # any jobs left?
    if (scalar(@jobs)) {
        my $job = $jobs[0];
        my ( $job_request, $job_state ) = @$job;
        # process another chunk of $job
        # if job is done, remove (shift) it from @jobs
    } else {
        # all jobs done; go to sleep until next job request
        undef $process_jobs_watcher;
    }
}

my $fcgi = new AnyEvent::FCGI(
    port => 9000,
    on_request => sub {
        my $request = shift;
        if (scalar(@jobs) < 5) { # set your own limit
            # accept request and send back headers, HTTP status etc.
            $request->print_stdout("Content-Type: text/plain\nStatus: 200 OK\n\n");
            # This will hold your job state; can also use Continuity
            # http://continuity.tlt42.org/
            my $job_state = ...;
            # Enqueue job for parallel processing:
            push @jobs, [ $request, $job_state ];
            if (!$process_jobs_watcher) {
                # If and only if AnyEvent->idle() does not work,
                # use AnyEvent->timer() and renew from process_jobs
                $process_jobs_watcher = AnyEvent->idle(cb => \&process_jobs);
            }
        } else {
            # refuse request
            $request->print_stdout("Content-Type: text/plain\nStatus: 503 Service Unavailable\n\nBusy!");
        }
    }
);

AnyEvent->loop;
AnyEvent->loop;
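For completeness (my addition, not from the original answer): on the nginx side, a location block forwarding to the TCP port used in the sample above might look like this:

location / {
    include fastcgi_params;
    fastcgi_pass 127.0.0.1:9000;
}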