How to shut down Perl Dancer applications nicely - perl

I run several Perl Dancer applications at the same time, under the same user, in FCGI mode (Apache). As I understand it, Apache (or any other web server) will fork a new Dancer process if the current one(s) are busy.
To make sure that no visitor is interrupted by the Dancer shutdown, I would like Dancer to handle the current connection until it has finished and only then exit the process.
How can I shut down a Perl Dancer application on the HUP signal so that it performs such a graceful shutdown?
To roll out a new version of a Dancer application I run pkill -HUP perl as the dancer user to "shut down" the processes. But currently (due to the missing signal handler) this is more like shooting the applications down than shutting them down.

The solution by mugen kenichi works (Starman):
If you are able to change your infrastructure, you could try one of the Plack web servers that support your need. Starman and Hypnotoad both do graceful restarts on SIGHUP.
There are a few shortcomings regarding <% request.uri_base %>, so we have to develop with hard-coded URI paths. Not very elegant, but necessary.

If I read your question correctly, you are concerned that Apache/FCGI might kill the Dancer app while it is in the middle of handling a request. Is that correct?
If so, don't worry about it. Apache/FCGI doesn't do that. When it forks a new instance of the handler because existing ones are busy, that's a new one in addition to the existing instances. The existing ones are left alone to finish what they're doing.
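If you do end up driving the FCGI loop yourself (rather than letting Plack/Dancer manage it), the usual graceful-shutdown pattern is to set a flag in the HUP handler and leave the accept loop only after the current request has been answered. A minimal sketch with the FCGI module, not Dancer-specific, with the request handling left as a placeholder:

use strict;
use warnings;
use FCGI;

my $exit_requested = 0;
$SIG{HUP} = sub { $exit_requested = 1 };    # only note the request, never die mid-request

my $request = FCGI::Request();
while ($request->Accept() >= 0) {
    # ... hand the request over to the application here (placeholder) ...
    last if $exit_requested;                # finish this response, then leave the loop
}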

Related

Terminate running perl script started with CGI

I am creating a Perl script that creates a Net::WebSocket::Server on port 3000. Now I had the (not so brilliant) idea of starting the script in the browser via CGI, so it runs in the background and cannot be stopped. However, I have to restart the script whenever I modify it.
Is it possible to stop a CGI script that is stuck in an endless loop, other than by restarting the computer?
You didn't say what operating system you are on, so we cannot give you specific advice on how to find and kill the process. But you can always restart the web server. CGI scripts are children of the server process (probably Apache) that starts them; if you simply restart the Apache server, they should all be terminated.
Please don't put code that is supposed to run persistently in your cgi-bin directory. That's a bad idea, as you discovered.

What is the application life cycle using Perl Dancer?

Can someone explain the life cycle of a request in a Perl Dancer application, starting from the server accepting the request? Does the application stay in memory, as with FCGI, or does it have to be loaded for every request?
When using CGI, the application must be loaded with each request. FCGI, as you said, keeps the application running. Here is the life cycle under CGI:
1. loads the Perl runtime
2. loads the necessary modules
3. configures the application
4. sets up all routes (not just the one needed)
5. finds the correct route and handles the request
6. exits
When using FCGI, steps 1-4 are done at load time. So if you are running with Apache, the Perl runtime for your application is started when Apache is started, and each request only goes through step 5. Requests respond much faster when using FCGI.
Nowadays, many shared web hosts support FastCGI; it is just a matter of configuring it correctly.
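For reference, this persistence is usually wired up through a small FCGI dispatcher. A sketch of a typical Dancer 1 dispatch.fcgi, assuming the standard generated layout with bin/app.pl as the startup script (mod_fastcgi/mod_fcgid then keep this process alive between requests):

#!/usr/bin/env perl
use strict;
use warnings;
use Dancer ':syntax';
use FindBin '$RealBin';
use Plack::Handler::FCGI;

# Force the PSGI apphandler and environment here; SetEnv directives do not
# always reach the dispatcher.
set apphandler  => 'PSGI';
set environment => 'production';

# Load the application once; it then stays in memory for subsequent requests.
my $psgi = path($RealBin, '..', 'bin', 'app.pl');
my $app  = do($psgi);
die "Unable to read startup script: $@" if $@;

Plack::Handler::FCGI->new(nproc => 5, detach => 1)->run($app);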

Watchdog monitoring UNIX domain socket, triggering events upon specific content

I am on an embedded platform (mipsel architecture, Linux 2.6 kernel) where I need to monitor IPC between two closed-source processes (router firmware) in order to react to a certain event (dynamic IP change because of DSL reconnect). What I found out so far via strace is that whenever the IP changes, the DSL daemon writes a special message into a UNIX domain socket bound to a specific file name. The message is consumed by another daemon.
Now here is my requirement: I want to monitor the data flow through that specific UNIX domain socket and trigger an event (call a shell script) if a certain message is detected. I tried to monitor the file name with inotify, but it does not work on socket files. I know I could run strace all the time, filtering its output and react to changes in the filtered log file, but that would be too heavy a solution because strace really slows down the system. I also know I could just poll for the IP address change via cron, but I want a watchdog, not a polling solution. And I am interested in finding out whether there is a tool which can specifically monitor UNIX domain sockets and react to specific messages flowing through in a predefined direction. I imagine something similar to inotifywait, i.e. the tool should wait for a certain event, then exit, so I can react to the event and loop back into starting the tool again, waiting for the next event of the same type.
Is there any existing Linux tool capable of doing that? Or is there some simple C code for a stand-alone binary which I could compile on my platform (uClibc, not glibc)? I am not a C expert, but capable of running a makefile. Using a binary from the shell is no problem, I know enough about shell programming.
It has been a while since I dealt with this topic, and I never actually got around to testing what an acquaintance of mine, Denys Vlasenko, the maintainer of BusyBox, proposed to me as a solution several months ago. But since I just checked my account here on Stack Overflow and saw the question again, let me share his insights with you. Maybe it is helpful for somebody:
One relatively easy hack I can propose is to do the following:
I assume that you have a running server app which opened a Unix domain listening socket (say, /tmp/some.socket), and client programs connect to it and talk to the server.
rename /tmp/some.socket -> /tmp/some.socket1
create a new socket /tmp/some.socket
listen on it for new client connections
for every such connection, open another connection to /tmp/some.socket1, i.e. to the original server process
pump data (client<->server) over the resulting pairs of sockets (the code to do so is very similar to what a telnetd server does) until EOF from either side.
While you are pumping data, it's easy to look at it, to save it, and even to modify it if you need to.
The downside is that this sniffer program needs to be restarted every time the original server program is restarted.
This is similar to what Celada also answered. Thanks to him as well! Denys's answer was a bit more concrete, though.
I asked back:
This sounds hacky, yes, because of the restart necessity, but feasible. Me not being a C programmer, I keep wondering, though, whether you know a command-line tool which could do the pass-through and logging or event-based triggering work for me. I have one guy from our project in mind who could hack up a little C binary for that, but I am unsure whether he would like to do it. If there is something pre-fab, I would prefer it. Can it even be done with a (combination of) BusyBox applet(s), maybe?
Denys answered again:
You need to build busybox with CONFIG_FEATURE_UNIX_LOCAL=y.
Run the following as intercepting server:
busybox tcpsvd -vvvE local:/tmp/socket 0 ./script.sh
Where script.sh is a simple passthrough connection
to the "original server":
#!/bin/sh
busybox nc -o /tmp/hexdump.$$ local:/tmp/socket1 0
As an example, I added hex logging to file (-o FILE option).
Test it by running an emulated "original server":
busybox tcpsvd -vvvE local:/tmp/socket1 0 sh -c 'echo PID:$$'
and by connecting to "intercepting server":
echo Hello world | busybox nc local:/tmp/socket 0
You should see a "PID:19094" message and have a new /tmp/hexdump.19093 file with the dumped data. Both tcpsvd processes should print some log output too (they are run with -vvv verbosity).
If you need more complex processing, replace the nc invocation in script.sh with a custom program.
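For completeness, the same rename-and-pass-through idea can be sketched in plain Perl as well (assuming a stream socket and handling one client at a time; the socket paths are the placeholders from above, and the matching/triggering logic is only indicated by a comment):

use strict;
use warnings;
use Socket qw(SOCK_STREAM);
use IO::Socket::UNIX;
use IO::Select;

# The real server has been moved to /tmp/some.socket1; we take its place.
my $listener = IO::Socket::UNIX->new(
    Type   => SOCK_STREAM,
    Local  => '/tmp/some.socket',
    Listen => 5,
) or die "listen: $!";

while (my $client = $listener->accept) {
    my $server = IO::Socket::UNIX->new(
        Type => SOCK_STREAM,
        Peer => '/tmp/some.socket1',
    ) or die "connect to real server: $!";

    my $sel = IO::Select->new($client, $server);
    PUMP: while (my @ready = $sel->can_read) {
        for my $fh (@ready) {
            my $n = sysread($fh, my $buf, 4096);
            last PUMP unless $n;                          # EOF on either side
            # inspect $buf here and trigger your event/shell script on a match
            syswrite($fh == $client ? $server : $client, $buf);
        }
    }
    close $client;
    close $server;
}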
I don't think there is anything that will let you cleanly sniff UNIX socket traffic. Here are some options:
Arrange for the sender process to connect to a different socket where you are listening. Also connect to the original socket as a client. When data arrives, check for the messages you care about and pass everything along to the original socket.
Monitor the system for IP address changes yourself using a netlink socket (RTM_NEWADDR, RTM_NEWLINK, etc...).
Run ip monitor as an external process and take action when it writes messages about added & removed IP addresses on its standard output.
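As a sketch of that last option, assuming perl and iproute2's ip are available on the device (the handler path is a placeholder):

use strict;
use warnings;

# Read `ip monitor address` line by line and fire a handler whenever
# an IPv4 address is added or removed.
open my $monitor, '-|', 'ip', 'monitor', 'address'
    or die "cannot start ip monitor: $!";

while (my $line = <$monitor>) {
    next unless $line =~ /\binet\b/;          # IPv4 address events only
    system('/path/to/handler.sh', $line) == 0
        or warn "handler exited with status $?";
}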

How to start a server monitoring perl script and execute the client side code in the same script

I need to launch a server script which will not exit, and once the server is ready I need to start the client code to run some automated tests.
I tried the following, but it does not work: the server process is not put into the background and the client code is never executed.
system("$server &")
Is it possible to use Parallel::ForkManager to handle this, and if so, how? All the examples are about repetitive tasks, while my case is a server and a client.
Parallel::ForkManager isn't really designed for this; there are various other distributions that support what a server needs to do. Daemon::Daemonize looks like it does the fewest other things besides just running your designated server code in the background.
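If all you need is "start the server, wait until it is ready, run the tests, tear it down", plain fork/exec is enough. A sketch, assuming the server listens on port 3000; server.pl and client_tests.pl are placeholders for your own scripts:

use strict;
use warnings;
use IO::Socket::INET;

my $pid = fork();
die "fork failed: $!" unless defined $pid;

if ($pid == 0) {
    exec 'perl', 'server.pl' or die "exec failed: $!";   # child becomes the server
}

# Parent: poll until the server accepts connections, then run the tests.
my $ready;
for (1 .. 30) {
    $ready = IO::Socket::INET->new(PeerAddr => 'localhost', PeerPort => 3000);
    last if $ready;
    sleep 1;
}
die "server never became ready" unless $ready;
close $ready;

system('perl', 'client_tests.pl');    # the automated client tests

kill 'TERM', $pid;                    # shut the server down again
waitpid $pid, 0;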

How do you deploy a PSGI script in Apache without restarting?

I want to deploy a PSGI script that runs in Apache2 with Plack. Apache is configured with:
<Location "/mypath">
SetHandler perl-script
PerlResponseHandler Plack::Handler::Apache2
PerlSetVar psgi_app /path/to/my/script.psgi
</Location>
When I test the script with plackup, the --reload parameter watches for updates to the .psgi file. In the production environment it is fine that Apache and Plack do not check and restart on each change, for performance reasons, but how can I tell them explicitly to restart Plack::Handler::Apache2 and/or reload the PSGI script to deploy a new version?
It looks like Plack checks for changes at some interval, but I have no clue when. Moreover, it seems to create multiple instances, so I sometimes get different versions of script.psgi at /mypath. It would be helpful to be able to flush the Perl response handler manually without having to restart Apache or wait for an unknown amount of time.
The short answer is: you can't. That's why we recommend using plackup (with -r) for quick development and Apache only for deployment (production use).
The other option is to have a development Apache process and set MaxRequestsPerChild to a really small value, so that a fresh child is spawned after a very short time. I haven't tested this, and doing so will definitely impact the performance of your entire httpd if a non-development application runs in the same process (which is a bad idea in the first place anyway).
Apache2::Reload (untested)
You can move your application out of the Apache process, e.g.
FastCgiExternalServer /virtual/filename/fcgi -socket /path/to/my/socket
and run your program with
plackup -s FCGI --listen /path/to/my/socket --nproc 10 /path/to/my/script.psgi
This way you can restart your application without restarting Apache.
If you save the PID of the main FCGI process (--pid $pid_file), you can easily restart it and load your new code.
There is also a module available to manage (start, stop, restart) all your FCGI pools:
https://metacpan.org/pod/FCGI::Engine::Manager::Server::Plackup (not tested)