This is not the same as a background/asynchronous HTTP request.
Is there a way to fire an HTTP PUT request and not wait on response or determination of success/failure?
My codebase is single-threaded and single-process.
I'm open to monkey-patching LWP::UserAgent->request if needed.
You could just abandon processing the response when the first data comes in:
use LWP::UserAgent;

my $ua = LWP::UserAgent->new;
# Dying inside the :content_cb aborts the transfer after the first chunk;
# LWP catches the die and records it in an X-Died header on the response.
my $resp = $ua->put('http://example.com/', ':content_cb' => sub { die "break early" });
print $resp->as_string;
You might also create a request using HTTP::Request->new('PUT', ...)->as_string, create a socket with IO::Socket::IP->new(...) or (for https) IO::Socket::SSL->new(...), and send this request over the socket - then leave the socket open while your program does other things.
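A rough sketch of that second approach for plain HTTP (host, port, path, and payload are placeholders):

use HTTP::Request;
use IO::Socket::IP;

my $req = HTTP::Request->new( PUT => '/some/path' );
$req->protocol('HTTP/1.1');
$req->header( Host => 'example.com' );
$req->content('payload');
$req->header( 'Content-Length' => length $req->content );

my $sock = IO::Socket::IP->new(
    PeerHost => 'example.com',
    PeerPort => 80,
) or die "connect failed: $!";
print $sock $req->as_string("\r\n");   # send, then just keep $sock around
# ... go do other work; close $sock later (or let it fall out of scope)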
But the first approach, with the early break in the :content_cb, is probably simpler. And unlike crafting and sending the request yourself, it guarantees that the server at least started to process your request, since it started to send a response back.
If you are open to alternatives to LWP, Mojo::UserAgent is a non-blocking user agent that, used that way, lets you control how the event loop handles the response. For example, as described in the cookbook, you can change how the response stream is handled; this could be used to simply ignore the stream, or to do something else.
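For instance, a fire-and-ignore PUT with Mojo::UserAgent might look like the sketch below; note that the request only makes progress while the event loop runs, which matters in an otherwise blocking, single-process program:

use Mojo::UserAgent;
use Mojo::IOLoop;

my $ua = Mojo::UserAgent->new;

# Non-blocking PUT: the callback runs whenever the response arrives,
# and here it simply discards the result.
$ua->put('http://example.com/' => sub {
    my ($ua, $tx) = @_;
    # ignore success or failure
});

Mojo::IOLoop->start unless Mojo::IOLoop->is_running;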
My Plack web service logs via a TCP connection to fluentD, and I'd like to execute my logging code after I've sent the response back to the client. This would reduce response time (assume this is a high request volume service where such a performance optimization is worth doing).
At least one other web framework, Express for Node.js, supports this by enabling middlewares to add an on-end event handler to the request object.
I've looked at the Plack::Request and Plack::Response interfaces, and I didn't see a similar event hook.
I think I could probably do a local override of the finalize method in my middleware to force the framework to do my logging after the response is finalized, but I'd like to avoid tinkering with the Plack internals if possible.
Is there a better way to defer execution of some code until after a response has been sent to the client?
Thanks to LeoNerd and ilmari for their debugging help in MagNET #io-async.
#!/usr/bin/env -S plackup -s Net::Async::HTTP::Server
use strict;
use warnings;
use Future::AsyncAwait;
use Time::HiRes qw(time);
use Future::IO qw();
use Future::IO::Impl::IOAsync qw();

async sub mylogger {
    # simulate expensive run-time
    await Future::IO->sleep(1);
    open my $log, '>>', '/tmp/so-58605156.log' or die "cannot open log: $!";
    $log->say(time);
}

my $app = sub {
    my ($env) = @_;
    # retain() keeps the pending future referenced so it is not garbage
    # collected; the response below is returned immediately while the
    # logging finishes later on the IO::Async event loop
    mylogger()->retain;
    return [200, ['Content-Type' => 'text/plain'], [time]];
};
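For comparison, a server-agnostic way to get close to the same effect is PSGI's delayed-response (streaming) interface: hand the response to the server and close the writer before doing the slow work. Whether the client has actually received the bytes before the logging runs depends on the server's buffering, so treat this as a sketch (expensive_logging is a hypothetical placeholder):

my $app = sub {
    my ($env) = @_;
    return sub {
        my ($responder) = @_;
        my $writer = $responder->([200, ['Content-Type' => 'text/plain']]);
        $writer->write(time);
        $writer->close;       # response is handed off to the server here
        expensive_logging();  # hypothetical slow work, done after the close
    };
};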
I'm using WWW::Mechanize to fetch a web page that includes a Google Maps widget that receives constant data from a single response of type text/event-stream.
That kind of response is like a never-ending response from the server that constantly returns updated data for the widget to work with.
I'm trying to find out how to read that exact response from Perl. Using something like:
my $mech = WWW::Mechanize->new;
# Do some normal GET and POST requests to authenticate and set cookies for the session
# Now try to get that text/event-stream response
$mech->get('https://some.domain.com/event_stream_page');
But that doesn't work because the response never ends.
How can I make that request and start reading the response and do something with that data every time the server updates the stream?
Found a way to do this, using a handler from LWP::UserAgent, from which WWW::Mechanize inherits:
$mech->add_handler(
    'response_data',
    sub {
        my ($response, $ua, $h, $data) = @_;
        # Your chunk of response is now in $data, do what you need.
        # If you plan on reading an infinite stream, it's a good idea to
        # clear the accumulated response so it doesn't grow infinitely too!
        $response->content(undef);
        # Important: return a true value if you want to keep reading the response!
        return 1;
    },
);
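For example, to turn those raw chunks into server-sent events, you could keep a line buffer around the handler and act only on complete data: lines; a sketch (the buffering is needed because chunks do not align with line boundaries):

my $buffer = '';
$mech->add_handler(
    'response_data',
    sub {
        my ($response, $ua, $h, $data) = @_;
        $buffer .= $data;
        # consume only complete lines; a partial line stays in the buffer
        while ($buffer =~ s/\A(.*?)\r?\n//) {
            my $line = $1;
            next unless $line =~ /^data:\s*(.*)/;
            print "got event data: $1\n";
        }
        $response->content(undef);
        return 1;
    },
);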
When a request is made to the server (with an action like GET, POST, or PATCH) through a REST client such as LWP, REST::Client, or HTTP::Request, how can we decode the request so that we get the actual method the client called? If we can get the action, we can process it and respond to the client accordingly.
This way I am able to get the headers and all params sent in a POST request:
use strict;
use warnings;
use CGI;

my $q = CGI->new;
my $input   = $q->param('POSTDATA');    # raw request body
my %headers = map { $_ => $q->http($_) } $q->http();

print $q->header('text/plain');
print "Got the following headers:\n";
for my $header ( keys %headers ) {
    print "$header: $headers{$header}\n";
}
Now my question is: how do I receive the action, like GET or POST?
From the docs:
request_method()
Returns the method used to access your script, usually one of 'POST', 'GET' or 'HEAD'.
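So, continuing the snippet above, dispatching on the method could look like this minimal sketch:

my $method = $q->request_method();   # 'GET', 'POST', 'PATCH', ...
if ( $method eq 'POST' ) {
    # create or update something with the request body
}
elsif ( $method eq 'GET' ) {
    # return a representation of the resource
}
else {
    print "Unsupported method: $method\n";
}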
Also from the docs:
CGI.pm is no longer considered good practice for developing web applications, including quick prototyping and small web scripts. There are far better, cleaner, quicker, easier, safer, more scalable, more extensible, more modern alternatives available at this point in time. These will be documented with CGI::Alternatives.
I want a request to a Mojolicious application to be able to trigger a long running job. The client doesn't need to wait for that long job to finish, so I'd like the app to send back a quick response and start the job. Here's what I have in mind:
use Mojolicious::Lite;

get '/foo' => sub {
    my $self = shift;
    $self->render( text => 'Thanks for requesting /foo. I will get started on that.' );
    # ... force Mojolicious to send response now ...
    do_long_running_job();
};

app->start;
But when I write the code like this, the client doesn't receive the response until after the long running job is finished (which may trigger inactivity timeouts, etc.). Is there any way to send the response more quickly? Is there another way to structure my code/app to achieve this?
Things from the docs that looked promising but didn't work:
$self->rendered(200);
$self->res->finish;
Randal Schwartz's Watching long processes through CGI should help:
The child goes on, but it must first close STDOUT, because otherwise Apache will think there might still be some output coming for the browser, and won't respond to the browser or release the connection until this is all resolved. Next, we have to launch a child process of the child to execute …
We'll do this with a pipe-open which includes an implicit fork, in line 37. The grandchild process merges STDERR to STDOUT, and then executes …
The child (that is, the parent of the traceroute) reads from the filehandle opened from the STDOUT (and STDERR) …
In short, the child process scurries off to execute the command. …
Given that you are only interested in kicking off a process rather than watching it, you should be able to prune most of the code.
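As an alternative to pruning the CGI-era fork code by hand, newer Mojolicious versions also provide Mojo::IOLoop->subprocess, which forks a worker for you and keeps the event loop out of the child's way; a sketch (do_long_running_job is the placeholder from the question):

use Mojolicious::Lite;
use Mojo::IOLoop;

get '/foo' => sub {
    my $self = shift;
    $self->render( text => 'Thanks for requesting /foo. I will get started on that.' );
    # The handler returns right after starting the subprocess, so the
    # response above is flushed while the job runs in a forked child.
    Mojo::IOLoop->subprocess(
        sub { do_long_running_job() },                  # child: blocking work
        sub { my ($subprocess, $err, @results) = @_; }, # parent: done callback
    );
};

app->start;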
In my Catalyst app I have a very important connection to a remote server using SOAP with WSDL.
Everything works fine, but when the remote server goes down for any reason, ALL of my app waits until the timeout expires. EVERYTHING. ALL the controllers and processes, ALL the clients!!
If I set a 15-second timeout for the SOAP::Lite transport, everything waits for 15 seconds.
No page can be served to any user or connection during that wait.
I use FastCGI and Nginx for the Catalyst app. If I use multiple FCGI processes, when one waits the others take care of the connections, but if all of them try to access the faulty SOAP service... they all wait and wait for an answer until they reach their timeouts. When all of them are waiting, no more connections are allowed.
Looking for answers I have read somewhere that SOAP::LITE is "single threaded".
Is it true? Does it mean that ALL of my app, with ALL its visitors, can only use one SOAP connection? It is hard to believe.
This is my code for the call:
sub check_result {
    my ($self, $code, $IP, $PORT) = @_;
    my $soap = SOAP::Lite->new( proxy => "http://$IP:$PORT/REMOTE_SOAP" );
    $soap->autotype(0);
    $soap->default_ns('http://REMOTENAMESPACE/namespace/default');
    $soap->transport->timeout(15);
    $soap->on_fault(sub {
        my ($soap, $res) = @_;
        eval { die ref $res ? $res->faultstring : $soap->transport->status };
        return ref $res ? $res : SOAP::SOM->new;
    });
    my $som = $soap->call( "remote_function",
        SOAP::Data->name('Entry1')->value($code),
    );
    return $som->paramsout;
}
I also tried a slightly different approach kindly suggested at PerlMonks, but nothing improved.
Please, can someone point me in the right direction?
Migue
This is not a problem with SOAP::Lite or Catalyst per se. Pretty much any resource you query will most likely wait for the return (e.g. a file read from disk, a database access). If the resource blocks for a long time, there's a chance you could "starve" other requests while waiting for that return.
There's no easy answer to this problem, but you could create a "job queue" that a separate process executes. Instead of calling the other service directly, you would add an entry to the queue and get a token back. When the job is finished, the queue stores the result associated with that token, and your app, in a separate request, checks whether the token you want has a result yet.
There are specialized "job queue" frameworks, such as RabbitMQ, Apache ActiveMQ, and even some solutions on top of Redis. If your web application uses rich JavaScript, you could even have the "job queue" notification reach the JavaScript client using, for instance, WebSockets; otherwise, just poll every second to see whether there is a response.
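To illustrate that token flow, here is a sketch in which enqueue_job and job_result are hypothetical helpers backed by whatever store you pick (a Redis list, a database table, ...):

# Controller action 1: enqueue instead of calling the SOAP service inline.
sub start_check {
    my ($self, $code) = @_;
    my $token = enqueue_job( check_result => { code => $code } );
    return $token;                    # hand the token back to the client
}

# Controller action 2, polled by the client: is the result ready yet?
sub poll_check {
    my ($self, $token) = @_;
    my $result = job_result($token);  # undef until a worker stores it
    return defined $result ? $result : 'pending';
}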