How do I properly shut down a Mojolicious::Lite server? - perl

In a Mojolicious::Lite app I have a route that I want to have kill the server and then redirect to another site. Here is the snippet.
my $me = $$;
get '/kill' => sub {
    my $self = shift;
    $self->res->code(301);
    $self->redirect_to('http://www.google.com');
    $self->app->log->debug("Goodbye, $me.");
    # I need this function to return, so I delay the kill a little.
    system("(sleep 1; kill $me) &");
};
This code does what I want, but it doesn't feel right. I have tried $self->app->stop but that is not available.
Is there a proper technique I should be using to get access to the server?

Update 2021:
This answer was referred to recently in an IRC discussion, so an update is warranted. The response below was a mechanism that I had used in a very specific case. While it may still be useful in rare cases, the more correct manner of stopping a service would be
https://docs.mojolicious.org/Mojo/IOLoop#stop_gracefully
or https://docs.mojolicious.org/Mojo/Server/Daemon#SIGNALS for a single-process server, or https://docs.mojolicious.org/Mojo/Server/Prefork#MANAGER-SIGNALS for a preforking one.
Original:
There are several ways to do this, of course.
Probably the best is to simply attach a finish handler to the transaction:
#!/usr/bin/env perl
use Mojolicious::Lite;

get '/kill' => sub {
    my $c = shift;
    $c->redirect_to('http://google.com');
    $c->tx->on( finish => sub { exit } );
};

app->start;
The method most like your example would be to set up a Mojo::IOLoop timer which would wait a few seconds and exit.
#!/usr/bin/env perl
use Mojolicious::Lite;
use Mojo::IOLoop;

get '/kill' => sub {
    my $c = shift;
    $c->redirect_to('http://google.com');
    my $loop = Mojo::IOLoop->singleton;
    $loop->timer( 1 => sub { exit } );
    $loop->start unless $loop->is_running; # portability
};

app->start;

Joel mentioned Mojo::IOLoop, so here's what I've used for a simple Mojo Lite throwaway app:
get '/shutdown' => sub ($c) {
    $c->render(text => "Shutting down");
    $c->tx->on( finish => sub { Mojo::IOLoop->stop_gracefully } );
};
Sending signals also works since this is a single process program:
get '/shutdown' => sub ($c) {
    $c->render(text => "Shutting down");
    $c->tx->on( finish => sub { kill 'TERM', $$ } );
};
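As an aside, the kill 'TERM', $$ idiom is plain Perl rather than anything Mojolicious-specific; here is a minimal sketch outside any web app, trapping the signal so the demo survives to report it (the handler and flag variable are illustrative only):

```perl
# A process can deliver a signal to its own pid ($$) with kill.
# We install a TERM handler first so this demonstration isn't killed.
my $caught = 0;
$SIG{TERM} = sub { $caught = 1 };

kill 'TERM', $$;    # signal ourselves; Perl's safe signals deliver it
                    # between statements, so the handler runs before the
                    # next line executes
print $caught ? "caught TERM\n" : "TERM still pending\n";
```

Without the handler, TERM would terminate the process, which is exactly the effect wanted in the shutdown route above.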

Related

How can I ensure that a function never gets executed concurrently in perl?

I use Mojo::IOLoop to talk to a port to send commands to a digital receiver.
Because the receiver can only open the port once, but the code talking to it sits behind a web interface and a web server, there is a risk that commands get sent concurrently - so the digital receiver loses them, and things go wrong.
So - how can I ensure that some piece of code never gets run concurrently?
I was trying to use file locks like this:
sub enqueue {
    my $self = shift;
    my ($cmd, $promise) = @_;
    while (!(flock DATA, LOCK_EX|LOCK_NB)) { sleep(0.25) }
    my (@r);
    my $id = Mojo::IOLoop->client({ address => '192.168.1.204',
                                    port => 23, timeout => 13 }
        => sub ($loop, $err, $stream) {
            return $promise->reject('could not open stream') unless $stream;
            $stream->on(error => sub ($stream, $err) {
                flock(DATA, LOCK_UN);
                $stream->close;
                $promise->reject($err);
            });
            $stream->on(read => sub ($stream, $bytes) {
                $bytes =~ s/\r\n$//;
                push @r, split /\r\n/, $bytes;
                # stream_done is a function that checks that
                # the appropriate number of lines got sent back
                if (stream_done(@r)) {
                    flock(DATA, LOCK_UN);
                    $promise->resolve(\@r);
                    $stream->close;
                }
            });
            $stream->on(timeout => sub ($stream) {
                flock(DATA, LOCK_UN);
                $stream->close;
                $promise->reject('timeout') if ($err);
            });
            $stream->write($cmd);
        });
}
I'm not sure if I understand correctly what you want to do, as I am not familiar with Mojo. But if you just don't want to execute a function two times, create a lock file and delete it when the function is finished:
use File::Slurp;

sub my_method_to_execute_only_once {
    my $tmp = 'temp.txt';
    return if (is_method_running($tmp));
    # DO YOUR STUFF
    unlink $tmp;
}

sub is_method_running {
    my ($filename) = @_;
    if (-e $filename) {
        return 1;
    }
    write_file($filename, "running");
    return 0;
}

Set Inactivity Timeout Perl (Mojo) Subprocess

I have a Perl Mojo server running, and when posting to a certain URL, a script creates a subprocess for a very long-running job (around a minute).
This process runs for about 30 seconds and then crashes, with no exceptions thrown and no logs generated.
My natural assumption was that this has something to do with a connection timeout, so I increased the server's timeout. That said, I'm now pretty confident this has nothing to do with the server process, but rather with the Perl script itself timing out.
I came across the docs on the subprocess page that says:
Note that it does not increase the timeout of the connection, so if your forked process is going to take a very long time, you might need to increase that using "inactivity_timeout" in Mojolicious::Plugin::DefaultHelpers.
The DefaultHelpers docs say:
inactivity_timeout
$c = $c->inactivity_timeout(3600);
Use "stream" in Mojo::IOLoop to find the current connection and increase timeout if possible.
Longer version
Mojo::IOLoop->stream($c->tx->connection)->timeout(3600);
but I'm not exactly sure how (or where) to define the inactivity timeout, or what exactly the $c variable in the docs is.
My Code:
sub long_process {
    my ($self) = @_;
    my $fc = Mojo::IOLoop::Subprocess->new;
    $fc->run(
        sub {
            my @args = @_;
            sleep(60);
        },
        [],
    );
}
links:
inactivity_timeout
subprocess
Here is a minimal example:
use Mojolicious::Lite;

get '/' => sub {
    my $c = shift;
    say Mojo::IOLoop->stream($c->tx->connection)->timeout;
    $c->inactivity_timeout(60);
    say Mojo::IOLoop->stream($c->tx->connection)->timeout;
    my $fc = Mojo::IOLoop::Subprocess->new;
    $fc->run(
        sub {
            my @args = @_;
            sleep(20);
            return 'Hello Mojo!';
        },
        sub {
            my ($subprocess, $err, $result) = @_;
            say $result;
            $c->stash(result => $result);
            $c->render(template => 'foo');
        }
    );
};

app->start;

__DATA__

@@ foo.html.ep
%== $result
The second callback passed to run() does the processing when the subprocess has finished.
See Mojo::IOLoop::Subprocess for details.

I am trying to create a template for placeholders for Error messages in perl. Any suggestions?

I have a solution for this currently but it may not be the most versatile code. I know there is a way to use templates with placeholders for variables instead of putting the actual runtime parameters into the error message. Apologies if what I'm asking is unclear. I don't have a whole lot of knowledge on how to use templates.
use constant {
    # list will contain more errors
    ERROR_SW => {
        errorCode => 727,
        message   => sub { "Not able to ping switch $switch_ip in $timeout seconds" },
        fatal     => 1,
        web_page  => 'http://www.errorsolution.com/727',
    }
};

sub error_post {
    my ($error) = @_;
    print($error->{message}());
}

error_post(ERROR_SW);
I am trying to design it so that I can use placeholders for $switch_ip and $timeout instead of having to declare the message as a subroutine reference.
Like below
use constant {
    # list will contain more errors
    ERROR_SW => {
        errorCode => 727,
        message   => "Not able to ping switch **{{switch_ip}}** in **{{timeout}}** seconds",
        fatal     => 1,
        web_page  => 'http://www.errorsolution.com/727',
    }
};

sub error_post {
    my ($error) = @_;
    print($error->{message});
}

error_post(ERROR_SW);
They also appear in code like so:
%%error%%
I'm not sure how to create the template which will handle the parameters.
Again Apologies for being vague or not explaining this well.
I can't immediately see what this approach buys you that isn't provided by the printf format I explained before, but if you want to do it this way I suggest you use the Text::Template module. It is less extensive than Template::Toolkit but perfectly adequate for your purposes.
Here's what a program using Text::Template would look like. I hope it helps you
use strict;
use warnings 'all';

use Text::Template qw/ fill_in_string /;

use constant {
    ERROR_SW => {
        errorCode => 727,
        message   => 'Not able to ping switch {$switch_ip} in {$timeout} seconds',
        fatal     => 1,
        web_page  => 'http://www.errorsolution.com/727',
    }
};

my $values = {
    switch_ip => '192.168.0.1',
    timeout   => 60,
};

sub error_post {
    my ($error) = @_;
    print( fill_in_string($error->{message}, hash => $values) );
}

error_post(ERROR_SW);
output
Not able to ping switch 192.168.0.1 in 60 seconds
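If pulling in Text::Template feels like overkill, the same {$name} placeholder fill can be approximated in core Perl with a single substitution (unlike Text::Template this evaluates nothing, it only looks names up in a hash):

```perl
# Core-only sketch of the same fill: replace each {$name} placeholder
# (syntax borrowed from the Text::Template answer above) with the
# matching hash value.
my %vals = (switch_ip => '192.168.0.1', timeout => 60);

my $tmpl = 'Not able to ping switch {$switch_ip} in {$timeout} seconds';
(my $filled = $tmpl) =~ s/\{\$(\w+)\}/$vals{$1}/g;

print "$filled\n";
```

For anything beyond simple name substitution (conditionals, expressions), the real module is the better choice.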
I would create a package for each error type in your project. Each error object should have the necessary attributes to describe the error and an as_string() method giving a human-readable message.
These packages can be written using your normal object-oriented framework (e.g. Moose). With good old Perl objects it could look like this:
package My::Error;

sub new {
    my ($class, %self) = @_;
    bless \%self, $class;
}

package My::Error::SW;
use parent 'My::Error';

sub as_string {
    my $self = shift;
    return sprintf "Not able to ping switch %s in %s seconds",
        $self->{switch_ip}, $self->{timeout};
}
There exist multiple frameworks for this on CPAN. One example is the Throwable module, which uses Moose.
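To make the sketch concrete, here is a condensed, self-contained version of those two packages plus a usage example (the IP and timeout values are made up; @ISA is set directly only because both packages live in one file, where `use parent` would try to load a .pm from disk):

```perl
package My::Error;
sub new {
    my ($class, %self) = @_;
    bless \%self, $class;
}

package My::Error::SW;
our @ISA = ('My::Error');   # same-file stand-in for: use parent 'My::Error';
sub as_string {
    my $self = shift;
    return sprintf "Not able to ping switch %s in %s seconds",
        $self->{switch_ip}, $self->{timeout};
}

package main;
# Construct an error with its describing attributes, then render it.
my $err = My::Error::SW->new(switch_ip => '10.0.0.7', timeout => 30);
print $err->as_string, "\n";
```

Each new error type is then just another small subclass with its own as_string().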

Can't get Mojo::Redis2 to subscribe

I wrote the following program (redis.pl). Redis is running locally with the default port settings, but when I run it with morbo redis.pl I never get ********* 1 on the screen. Why is that? It seems the subscription never happens. How can I fix this?
#!/usr/bin/perl
use v5.18;
use warnings;
use Mojolicious::Lite;
use Mojo::Redis2;

say "Welcome";

my $redis = Mojo::Redis2->new();
$redis->subscribe(['pubsub'] => sub {
    say "********* 1";
});

get '/' => sub {
    my $self = shift;
    $self->render(json => {a => 1});
};

app->start;
I don't have a redis instance installed currently, but I think this should work.
#!/usr/bin/perl
use v5.18;
use warnings;
use Mojolicious::Lite;
use Mojo::Redis2;

say "Welcome";

helper redis => sub { state $redis = Mojo::Redis2->new() };

app->redis->subscribe(['pubsub'] => sub {
    say "********* 1";
});

get '/' => sub {
    my $self = shift;
    $self->render(json => {a => 1});
};

app->start;
I suspect that once the redis instance goes out of scope, you lose it and its connections.
I solved it by making sure I retain the return value of $redis->subscribe in a permanent variable, like so:
Instead of...
$redis->subscribe(['pubsub'] => sub {
    say "********* 1";
});
...I wrote...
our $subscription = $redis->subscribe(['pubsub'] => sub {
    say "********* 1";
});
That fixed the problem. I guess it's similar to AnyEvent, where the return value must stay alive.
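That guess is right in spirit: when the last reference to an object disappears, Perl destroys it immediately, taking its connections down with it. A generic, non-Redis sketch of the pitfall, using a stand-in class whose DESTROY plays the role of the subscription teardown:

```perl
# Stand-in for a subscription object: DESTROY runs the moment the last
# reference to it goes away, like a connection being torn down.
package Subscription;
sub new     { bless {}, shift }
sub DESTROY { $main::dropped++ }

package main;
our $dropped = 0;
{
    my $sub = Subscription->new;   # not saved anywhere permanent...
}                                  # ...so it dies right here
print $dropped ? "subscription dropped\n" : "subscription alive\n";

our $kept = Subscription->new;     # a package variable keeps this one
                                   # alive for the life of the program
```

This is the same discipline AnyEvent demands for its watcher "guard" objects.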

How to do asynchronous www-mechanize using anyevent

I've been doing a fair amount of research on the topic, and while there are some questions out there that relate, I'm really having a hard time understanding how to properly do async programming using AnyEvent and WWW::Mechanize. I'm trying to stick with Mechanize because it has a clean interface and built-in functions for the things I expect to do (like getting all the images of a site, etc). If there is no reliable/good way to do what I want, then I'll start looking at AnyEvent::HTTP, but I figured I'd ask first before moving in that direction.
I'm a newbie to AnyEvent programming but have done a fair amount of perl and javascript / jquery async calls with callbacks before. These make a lot of sense to me but it's just not clicking for me with AnyEvent + Mech.
Here is the code I'm working on, which pulls URLs from an upstream queue. Given a URL, I want to fetch the page, collect all the image links on it, and then asynchronously grab all the images.
So pseudo-code would look something like this:
grab url from queue
get page
get all img url links
do many async calls on the img urls (store the imgs for example in a backend)
I've read (after researching errors) that I cannot block in an AnyEvent callback. How do I structure my program to do the async calls without blocking?
AE events can only be processed when AE-aware functions block, so I'm using LWP::Protocol::AnyEvent::http. It replaces the normal HTTP backend for LWP (Net::HTTP) with AnyEvent::HTTP, which is AE-aware.
The worker gets created like:
my $worker = Worker->new(upstream_job_url => 'tcp://127.0.0.1:5555', run_on_create => 1);
The async part is sub _recv_msg, which calls _proc_msg.
I already have an AnyEvent loop watching the ZeroMQ socket as per the ZeroMQ perl binding docs...
Any help much appreciated!
Code:
package Worker;
use 5.12.0;
use Moose;
use AnyEvent;
use LWP::Protocol::AnyEvent::http;
use ZMQ::LibZMQ3;
use ZMQ::Constants qw/ZMQ_PUSH ZMQ_PULL ZMQ_POLLIN ZMQ_FD/;
use JSON;
use WWW::Mechanize;
use Carp;
use Coro;

has 'max_children' => (
    is       => 'rw',
    isa      => 'Int',
    required => 1,
    default  => sub { 0 }
);
has 'upstream_job_url' => (
    is       => 'rw',
    isa      => 'URI',
    required => 1,
);
has ['uri', 'sink_url'] => (
    is       => 'rw',
    isa      => 'URI',
    required => 0,
);
has 'run_on_create' => (
    is       => 'rw',
    isa      => 'Bool',
    required => 1,
    default  => sub { 1 }
);
has '_receiver' => (
    is       => 'rw',
    isa      => 'ZMQ::LibZMQ3::Socket',
    required => 0
);

sub BUILD {
    my $self = shift;
    $self->start if $self->run_on_create;
}

sub start {
    my $self = shift;
    $self->_init_zmq();
    my $fh = zmq_getsockopt( $self->_receiver, ZMQ_FD );
    my $w; $w = AnyEvent->io( fh => $fh, poll => "r", cb => sub { $self->_recv_msg } );
    AnyEvent->condvar->recv;
}

sub _init_zmq {
    my $self = shift;
    my $c = zmq_init() or die "zmq_init: $!\n";
    my $recv = zmq_socket($c, ZMQ_PULL) or die "zmq_socket: $!\n";
    if ( zmq_connect($recv, $self->upstream_job_url) != 0 ) {
        croak "zmq_connect: $!\n";
    }
    $self->_receiver($recv);
}

sub _recv_msg {
    my $self = shift;
    while ( my $message = zmq_msg_data(zmq_recvmsg($self->_receiver)) ) {
        my $msg = JSON::from_json($message, {utf8 => 1});
        $self->uri(URI->new($msg->{url}));
        $self->_proc_msg;
    }
}

sub _proc_msg {
    my $self = shift;
    my $c = async {
        my $ua = WWW::Mechanize->new;
        $ua->protocols_allowed(['http']);
        print "$$ processing " . $self->uri->as_string . "... ";
        $ua->get($self->uri->as_string);
        if ($ua->success()) {
            say $ua->status . " OK";
        } else {
            say $ua->status . " NOT OK";
        }
    };
    $c->join;
}

1;
As you can see, I was trying Coro in _proc_msg. I've tried just doing Mech calls, but I get an error:
AnyEvent::CondVar: recursive blocking wait attempted at lib/Worker.pm line 91.
because the Mech call is still blocking in the callback. I'm not sure how to do the Mech calls in my callback properly.
At ikegami's request, I've added the driver program that sends the URLs. For test purposes, I have it just reading an RSS feed and sending the links to the workers to process. I was curious mostly about the basic structure of AnyEvent with callbacks, but I'm more than happy to get help on the program in general. Here is the driver code:
#!/usr/local/bin/perl
use strict;
use warnings;
use v5.12.0;
use lib './lib';
use Config::General;
use Getopt::Long;
use Carp;
use AnyEvent;
use AnyEvent::Feed;
use Parallel::ForkManager;
use ZMQ::LibZMQ3;
use ZMQ::Constants qw(ZMQ_PUSH ZMQ_PULL);
use Worker;

# Debug
use Data::Dumper;
$Data::Dumper::Deparse = 1;

my $config_file = "feeds.cfg";
GetOptions(
    "--config|c" => \$config_file,
    "--help|h"   => sub { usage(); exit(0); }
);

sub usage()
{
    say "TODO";
}

$SIG{INT} = sub { croak; }; $SIG{TERM} = sub { croak; };
$SIG{CHLD} = 'IGNORE';

my $conf = Config::General->new($config_file) or croak "Couldn't open config file '$config_file' $!\n";
my %config = $conf->getall();

my @readers = ();
my @feeds = load_feeds(\%config);

my $mgr = Parallel::ForkManager->new( $config{'max_download_children'} ) or croak "Can't create fork manager: $!\n";
my $context = zmq_init() or croak "zmq_init: $!\n";
my $sender = zmq_socket($context, ZMQ_PUSH) or die "zmq_socket: $!\n";

foreach my $feed_cfg (@feeds) {
    my $reader = AnyEvent::Feed->new(url => delete $feed_cfg->{url}, %$feed_cfg);
    push(@readers, $reader); # save, don't go out of scope
}

# Fork Downloader children. These processes will look for incoming data
# in the img_queue and download the images, storing them in nosql
for ( 1 .. $config{'max_download_children'} ) {
    my $pid = $mgr->start;
    if (!$pid) {
        # Child
        my $worker = Worker->new({
            upstream_job_url => URI->new('tcp://127.0.0.1:5555')
        });
        $mgr->finish;
        say "$$ exiting.";
        exit(0);
    } else {
        # Parent
        say "[forked child $pid] my pid is $$";
    }
}

if (zmq_bind($sender, 'tcp://127.0.0.1:5555') < 0) {
    croak "zmq_bind: $!\n";
}

# Event loop
AnyEvent->condvar->recv;

sub load_feeds
{
    my $conf = shift;
    my @feeds = ();
    foreach my $feed ( keys %{$conf->{'feeds'}} ) {
        my $feed_ref = $conf->{'feeds'};
        $feed_ref->{$feed}->{'name'} = $feed;
        $feed_ref->{$feed}->{'on_fetch'} = \&fetch_feed_cb;
        push(@feeds, $feed_ref->{$feed});
    }
    return @feeds;
}

sub fetch_feed_cb
{
    my ($feed_reader, $new_entries, $feed, $error) = @_;
    if (defined $error) {
        say "Error fetching feed: $error";
        return;
    }
    say "$$ checking for new feeds";
    for (@$new_entries) {
        my ($hash, $entry) = @$_;
        say "$$ sending " . $entry->link;
        zmq_send($sender, JSON::to_json( { url => $entry->link }, { pretty => 1, utf8 => 1 } ));
    }
}
Here is a sample run:
[forked child 40790] my pid is 40789
[forked child 40791] my pid is 40789
[forked child 40792] my pid is 40789
40789 checking for new feeds
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/f5nNM3zYBt0/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/Ay9V5pIpFBA/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/5XCVvt75ppU/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/mWprjBD3UhM/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/NngMs9pCQew/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/wiUsvafLGFU/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/QMp6gnZpFcA/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/kqUb_rpU5dE/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/tHItKqKhGXg/
40789 sending http://feedproxy.google.com/~r/PerlNews/~3/7LleQbVnPmE/
FATAL: $Coro::IDLE blocked itself - did you try to block inside an event loop callback? Caught at lib/Worker.pm line 99.
FATAL: $Coro::IDLE blocked itself - did you try to block inside an event loop callback? Caught at lib/Worker.pm line 99.
FATAL: $Coro::IDLE blocked itself - did you try to block inside an event loop callback? Caught at lib/Worker.pm line 99.
40791 processing http://feedproxy.google.com/~r/PerlNews/~3/Ay9V5pIpFBA/...
40790 processing http://feedproxy.google.com/~r/PerlNews/~3/f5nNM3zYBt0/...
40792 processing http://feedproxy.google.com/~r/PerlNews/~3/5XCVvt75ppU/... ^C at /usr/local/perls/perl5162/lib/perl5/site_perl/darwin-thread-multi-2level/AnyEvent/Loop.pm line 231.
If I don't explicitly do a 'use Coro;' in Worker.pm, the coro FATAL errors don't show. I don't know how async was working before without further runtime errors.
Sample config file (feeds.cfg):
max_download_children = 3
<feeds>
<feed1>
url="http://feeds.feedburner.com/PerlNews?format=xml"
interval=60
</feed1>
</feeds>
So I spent a little more time with this today. The error of my ways was doing a $c->join: I shouldn't do that, since I can't block in the callback. Coro will schedule the async block, and it will be done when it's done. The only thing I need to make sure of is to somehow know when all the asyncs are done, which I think I can figure out. Now the tricky part is figuring out this little piece of mystery:
sub _recv_msg {
    my $self = shift;
    while ( my $message = zmq_msg_data(zmq_recvmsg($self->_receiver)) ) {
        my $msg = JSON::from_json($message, {utf8 => 1});
        $self->uri(URI->new($msg->{url}));
        $self->_proc_msg;
    }
}
This while loop causes my async { } threads in _proc_msg to NOT RUN. Remove the while loop and just handle the first message, and the coros run. Leave the while loop in place, and they never get run. Strange to me; I haven't figured out why yet.
Further updates:
zmq_recvmsg was blocking. Also, zmq_send in the parent can block. Have to use ZMQ_NOBLOCK.
I split the worker and main into separate programs entirely.
You could use https://metacpan.org/pod/AnyEvent::HTTP::LWP::UserAgent for async calls.
use AnyEvent::HTTP::LWP::UserAgent;
use AnyEvent;

my $ua = AnyEvent::HTTP::LWP::UserAgent->new;
my @urls = (...);

my $cv = AE::cv;
$cv->begin;
foreach my $url (@urls) {
    $cv->begin;
    $ua->get_async($url)->cb(sub {
        my $r = shift->recv;
        print "url $url, content " . $r->content . "\n";
        $cv->end;
    });
}
$cv->end;
$cv->recv;
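The begin/end calls above implement a simple pending-job counter on the condition variable: recv returns only once every begin has been matched by an end. Here is the same pattern isolated with AnyEvent timers standing in for the HTTP requests (assumes AnyEvent is installed; delays and counts are arbitrary):

```perl
use AnyEvent;

my $cv = AE::cv;
my @done;

$cv->begin;                 # outer guard: stops recv firing before the
                            # loop has registered every job
for my $i (1 .. 3) {
    $cv->begin;             # one pending "job" per timer
    my $w; $w = AE::timer 0.01 * $i, 0, sub {
        push @done, $i;
        undef $w;           # watcher stayed alive via this closure
        $cv->end;           # this job is finished
    };
}
$cv->end;                   # release the outer guard

$cv->recv;                  # runs the event loop until the count hits zero
print "finished: @done\n";
```

The outer begin/end pair is the important subtlety: without it, if the first job finished before the loop registered the second, the counter could hit zero early and recv would return with work still pending.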