FastCGI or PSGI Interface to NGINX in 2021 - perl

This question I asked led me to explore interfacing my FastCGI script directly with NGINX, rather than reverse-proxying to Apache. I successfully modified my FastCGI script to run as a daemon, using some code I found online:
use FCGI;

# Listen on TCP port 9000 with a backlog of 20 connections
my $s = FCGI::OpenSocket(':9000', 20);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $s);

# Remaining code stays just as it does under Apache's mod_fcgid
while ($request->Accept() >= 0) {
    # Call core app subroutines.
}
It works, but as near as I can tell this has a distinct disadvantage compared with mod_fcgid: I have a single process that handles one request at a time, and if that process dies, there is nothing to restart it. There are references on Stack Overflow to code that properly spun off workers, but the sites referenced inevitably seem to have gone offline, much like FastCGI's own site.
So, I'm trying to figure out what I need to add and also -- pardon the pun -- figure out if I need to take a fork in this road. Here are the options that I am trying to consider, if I understand my issues correctly:
Directly implement some sort of forking mechanism. Ideally it should (1) hand the request off to a process/thread/worker -- perhaps one that stays alive for multiple requests -- and immediately be ready for the next request, and (2) be independent enough from the workers that a failing worker doesn't bring down the whole system before I catch it and restart the main process (e.g. by auto-restarting workers). If this can be done simply and reliably, it has huge appeal since the code already works with FastCGI.
Give up on direct FastCGI, convert to PSGI, and use an application server to handle these things. Given that I'm using Perl, Starman seems the logical option, although I've been reading about uWSGI's PSGI support and it sounds almost ideal in Emperor mode (with its Tyrant option), where it can run processes under different privileges, auto-restart missing processes, etc.
Option 1 seems intriguing since it requires the least modification to my existing code, and a FastCGI script that isn't started under FastCGI still works like a normal CGI script. (I don't run this code under FastCGI for sites that get very little traffic.)
Option 2, though, feels like it might be more "modern." At least the PSGI documentation still seems to be online, for example, and Starman or uWSGI would likely take care of the background management I need better than anything I'd cook up myself. Downside: I'd need two startup scripts for my code: one for the PSGI-enabled sites and one for sites still running plain CGI.
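To give a feel for option 2, here is a minimal sketch of what the PSGI entry point might look like; handle_request() is purely hypothetical and stands in for the existing core app subroutines:

# app.psgi -- minimal sketch; handle_request() is a hypothetical stand-in
# for whatever the FastCGI accept loop currently calls per request.
use strict;
use warnings;

sub handle_request {
    my ($env) = @_;
    # ... call the same core app subroutines the FastCGI loop calls ...
    return "<html><body>ok</body></html>";
}

my $app = sub {
    my ($env) = @_;
    my $body = handle_request($env);
    return [
        200,
        [ 'Content-Type' => 'text/html; charset=utf-8' ],
        [ $body ],
    ];
};

# Run with something like: plackup -s Starman --workers 5 app.psgi
$app;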
Update: Continuing to explore option 1, I read through this tutorial on Perl fork(), which seems somewhat relevant. Would using fork() to hand off each FastCGI request be a good approach if I go with option 1? I assume I'd be at risk of fork-bombing, although if I kept track of the number of forks and issued wait() if ($forks > 10); perhaps that would be a safe approach? (Or perhaps use Parallel::ForkManager to do that process watching.) Or would it be safer and/or more efficient to use something like Thread::Queue and pass FastCGI request objects to a set of threads that are reliably already established? There seem to be plenty of pitfalls I might overlook, which brings me back to whether I should opt for option 2.
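For what it's worth, the prefork pattern option 1 describes already exists as a CPAN module: FCGI::ProcManager forks a pool of workers that share the listen socket and restarts any that die. A minimal sketch of how it might wrap the existing accept loop (the worker count here is arbitrary):

use FCGI;
use FCGI::ProcManager;

my $pm = FCGI::ProcManager->new({ n_processes => 5 });

my $s = FCGI::OpenSocket(':9000', 20);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $s);

# Forks the workers; the parent becomes a supervisor that restarts
# any worker that exits.
$pm->pm_manage();

# Each worker runs its own accept loop on the shared socket.
while ($request->Accept() >= 0) {
    $pm->pm_pre_dispatch();
    # Call core app subroutines.
    $pm->pm_post_dispatch();
}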

Related

How to spawn pre-loaded mojolicious applications from a mojolicious application?

I work on a fairly large Mojolicious application, which takes many seconds to compile.
Parts of the test suite of that application are written using Playwright, which currently sets up a pristine database for each test case and spins up an instance of the Mojolicious application using @mojolicious/server-starter.
The compile-time of the application is starting to make it impractical to significantly expand the playwright test suite, and I'd like to address that without having to give up on the isolation that separate test databases and mojolicious application instances currently afford me.
In order to achieve that, the idea I'm currently pursuing is to have a small Perl application that can pre-load the larger Mojolicious application, and which can be asked by the Playwright test suite to spawn a new instance of the larger application with a pristine database on an open port.
I'd like to communicate with that small Perl application using HTTP, mainly out of convenience, and I'd like that small Perl application to use Mojolicious to perform the HTTP communication, because that's also convenient and consistent with the rest of the code-base.
I've tried some naive approaches to implementing this idea, which looked roughly like this:
use TheBigApp;

$app->routes->post('/spawn-child')->to(cb => sub ($c) {
    my ($sock, $port) = new_listen_socket();
    if (my $pid = fork()) {
        # record pid to later be able to shut it down or whatever
        $c->render(json => { url => "http://localhost:$port" });
    }
    else {
        my $bapp = TheBigApp->new($c->req->json);
        my $s = Mojo::Server::Daemon->new(listen => ...);
        $s->app($bapp);
        $bapp->start;
        return;
    }
});
All of the implementations I've tried along these lines seemed to run into issues due to the various singletons, such as the IOLoop. Even when overriding IOLoop->singleton to return a new instance with a new reactor within the sub-process, it appeared as if the forked-off child processes were still listening on the same socket as the parent process that spawned them.
Are there perhaps lower-level Mojolicious APIs that could make this use-case work? Would it perhaps be simpler to implement the small parent process without Mojolicious to sidestep the issue entirely?
Thanks!
Digging a little into the Mojo::Server::Daemon code, it always(?) sets 'ReusePort' on the IO::Socket.
On Linux (not macOS or BSD; on Windows I have no idea) that means TCP connections to the same IP and port combination are 'load balanced' across multiple server instances by the kernel.
It is unclear from your post whether the spawned processes are listening on the same port or not.
Assuming you are running Linux, changing the listening port for each spawned child might help.
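To illustrate the per-child-port idea (this is only a sketch, not the asker's code: TheBigApp, the fixed starting port, and the route shape are assumptions), the child resets the inherited IOLoop before starting a fresh daemon on its own port:

# Sketch only: TheBigApp and the port numbering are assumptions.
use Mojolicious::Lite -signatures;
use Mojo::Server::Daemon;
use Mojo::IOLoop;

my $next_port = 3001;

post '/spawn-child' => sub ($c) {
    my $port = $next_port++;
    my $pid  = fork // die "fork failed: $!";
    if ($pid) {
        # Parent: remember $pid so the test suite can shut the child down later
        return $c->render(json => { url => "http://127.0.0.1:$port", pid => $pid });
    }

    # Child: drop the event loop state inherited from the parent before
    # building a daemon that listens on its own, distinct port.
    Mojo::IOLoop->reset;
    require TheBigApp;
    my $daemon = Mojo::Server::Daemon->new(
        app    => TheBigApp->new($c->req->json),
        listen => ["http://127.0.0.1:$port"],
    );
    $daemon->run;    # blocks for the lifetime of the child
    exit 0;
};

app->start;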

Perl script running a periodic (main) task and providing a REST interface

I am working on a Perl script which does some periodic processing based on file-system contents.
The overall structure is like this:
# ... initialization...
while (1) {
    # ... scan filesystem, perform actions depending on changes detected ...
    sleep 5;
}
I would like to add the ability to input some data into this process by exposing an interface through HTTP. E.g. I would like to add an endpoint to skip the sleep, but also some means to input data that is processed in the next iteration. Additionally, I would like to be able to query some of the program's status through HTTP (which, I think, means a simple fork() to run the webserver part in a separate process is insufficient, since it could not see that state).
So far I have used the Dancer2 framework once, but its start; call blocks and thus does not allow any other tasks (like my loop) to run. I could of course move the code that is currently inside the loop into an endpoint exposed through Dancer2, but then I would need to call that endpoint periodically (through an external program?), which seems like an obscure indirection compared to just having the webserver part run in the background.
Is it possible to unobtrusively (i.e. without blocking the program) add a REST-server capability to a Perl script? If yes: which modules would be suitable for the purpose? If no: should I really implement an external process to periodically invoke a certain endpoint, or pursue a different solution altogether?
(I tried to add a dancer2 tag, but could not do so due to insufficient reputation. Do not be misled by this: I have so far only tried Dancer2, not Dancer (v1).)
You could try launching your processing loop in a background thread before you call start;.
See man perlthrtut.
You probably want use threads::shared; to declare some variables as shared between the REST part and the background thread, or use dedicated queues/event mechanisms.
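A minimal sketch of that arrangement with Dancer2 (endpoint names, the 5-second interval, and the payload shapes are placeholders; it assumes a threads-enabled perl):

# Sketch only: endpoint names, interval, and payload shapes are placeholders;
# requires a threads-enabled perl.
use strict;
use warnings;
use threads;
use threads::shared;
use Thread::Queue;
use Dancer2;

my $status :shared = 'starting';    # readable from both threads
my $inbox = Thread::Queue->new;     # REST handlers -> worker loop

# The original periodic loop, started in a worker thread before start().
threads->create(sub {
    while (1) {
        # ... scan filesystem, perform actions depending on changes ...
        { lock($status); $status = 'last scan: ' . localtime }

        # Pick up anything the REST side queued for this iteration.
        while (defined(my $msg = $inbox->dequeue_nb)) {
            # ... use $msg in the scan ...
        }
        sleep 5;
    }
})->detach;

get '/status' => sub {
    lock($status);
    return to_json({ status => $status });
};

post '/input' => sub {
    $inbox->enqueue(request->body);
    return to_json({ queued => 1 });
};

start;    # blocks here, but the worker thread keeps running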

Sharing variables/data between PowerShell processes

I would like to come up with a mechanism by which I can share data between different PowerShell processes. This would be in order to implement a kind of job system, whereby a function can be run in one PowerShell process, complete, and then somehow communicate its status to a function run from another (distinct) PowerShell process...
I guess what I'd ideally like is for PSJob results to be shareable between sessions, but this does not seem to be possible.
I can think of a few dirty ways of achieving this (like O/S environment variables), but am I missing a semi-elegant way?
For example:
Function giveMeNumber
{
    $return_value = Get-Random -Minimum -100 -Maximum 100
    Return $return_value
}
What are some ways I could get this function to store its return value somewhere and then grab it from another PowerShell session (without using a database)?
Cheers.
The Q&A mentioned by Keith refers to using MSMQ, a message-queueing feature optionally available on Microsoft's desktop, mobile and server OSes.
It doesn't run by default on desktop OSes, so you would have to ensure that the appropriate service was started. Seems like serious overkill to me unless you wanted something pretty beefy.
Of course, the most common choice for this type of task would be a simple shared file.
Alternatively, you could create a TCP listener in each of the jobs that you want to have accept external info. I've not done this myself in PowerShell, though I know it is possible; Node.js or Python would be a more familiar environment for it. Seems like overkill if a shared file would do the job!
Another way would be to use the registry. Though you might consider that cheating since it is actually a database (of a very broken and simplistic sort).
I'm actually not sure that environment variables would work, since I know they can be picky about the parent environment scope (for example, setting an env variable in a cmd doesn't make it available outside of that cmd's scope by default).
UPDATE: Doh, missed a few! Some of them very obvious. Microsoft have a list:
Clipboard
COM
Data Copy
DDE
File Mapping
Mailslots
Pipes
RPC
Windows Sockets
Pipes was the one I was trying to remember. Windows sockets would be similar to a TCP listener.

threads in Dancer

I'm using Dancer 1.31, in a standard configuration (plackup/Starman).
In a request handler I wished to call a Perl function asynchronously, so that the request returns immediately. Think of the typical "long running operation" scenario, in which one wants to return a "processing" page with a refresh+redirect.
I (naively?) tried with a thread:
use threads;

sub myfunc {
    sleep 9;    # just for testing a slow operation
}

any '/test1' => sub {
    my $thr = threads->create('myfunc');
    $thr->detach();
    return "done";
};
It does not work: the server seems to freeze, and the error log does not show anything. I guess manual creation of threads is forbidden inside Dancer? Is it an issue with PSGI? What is the recommended way?
I would stay away from Perl threads, especially in a web-server environment. They will most likely crash your server when you join or detach them.
I usually create a few threads (a thread pool) BEFORE initializing other modules and keep them around for the entire lifetime of the application. Thread::Queue nicely provides communication between the workers and the main thread.
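A rough sketch of that pattern applied to the example above (the pool size and job payload are arbitrary, and it is shown with the standalone Dancer server rather than Starman):

# Sketch only: pool size and job payload are arbitrary; shown with the
# standalone Dancer server rather than Starman.
use strict;
use warnings;
use threads;
use Thread::Queue;
use Dancer;

my $jobs = Thread::Queue->new;

sub myfunc {
    my ($job) = @_;
    sleep 9;    # stand-in for the long-running operation
}

# Create the worker pool before any other state or sockets exist.
my @pool = map {
    threads->create(sub {
        while (defined(my $job = $jobs->dequeue)) {
            myfunc($job);
        }
    });
} 1 .. 4;

any '/test1' => sub {
    $jobs->enqueue(time);
    return "done";    # returns immediately; the pool does the work
};

dance;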
The best asynchronous solution I have found in Perl is POE. On Linux I prefer using POE::Wheel::Run to run executables and subroutines asynchronously. It uses fork and has a beautiful interface allowing communication with the child process. (On Windows it's not usable due to its thread dependency.)
Setting up Dancer and POE inside the same application/script may cause problems, and POE's event loop may be blocked. A single worker thread dedicated to POE may come in handy, or I would write another server based on POE and just communicate with the Dancer application via sockets.
Threads are definitely iffy with Perl. It might be possible to write some threaded Dancer code, but to be honest I don't think we ever tried it. And considering that Dancer 1's core uses singleton classes, it might also be very tricky.
As Ogla says, there are other ways to implement asynchronous behavior in Dancer. You say that you are using Starman, which is a forking engine. But there is also Twiggy, which is AnyEvent-based. To see how to leverage it to write asynchronous code, have a gander at Dancer::Plugin::Async.

How can I avoid zombies in Perl CGI scripts run under Apache 1.3?

Various Perl scripts (Server Side Includes) are calling a Perl module with many functions on a website.
EDIT:
The scripts are using use lib to reference the libraries from a folder.
During busy periods the scripts (not the libraries) become zombies and overload the server.
The server lists:
319 ? Z 0:00 [scriptname1.pl] <defunct>
320 ? Z 0:00 [scriptname2.pl] <defunct>
321 ? Z 0:00 [scriptname3.pl] <defunct>
I have hundreds of instances of each.
EDIT:
We are not using fork, system or exec, apart from the SSI directive
<!--#exec cgi="/cgi-bin/scriptname.pl"-->
As far as I know, in this case httpd itself will be the owner of the process.
MaxRequestsPerChild is set to 0, which should prevent the parent from dying before the child process is finished.
So far we have found that temporarily suspending some of the scripts helps the server cope with the defunct processes and prevents it from falling over; however, zombie processes are without a doubt still forming.
Apparently gbacon is closest to the truth with his theory that the server is not able to cope with the load.
What could lead to httpd abandoning these processes?
Is there any best practice to prevent these from happening?
Thanks
Answer:
The point goes to Rob.
As he says, CGI scripts that generate SSI's will not have those SSI's handled. The evaluation of SSI's happens before the running of CGI's in the Apache 1.3 request cycle. This was fixed with Apache 2.0 and later so that CGI's can generate SSI commands.
Since we were running on Apache 1.3, for every page view the SSI's turned into defunct processes. Although the server was trying to clear them, it was way too busy with the running tasks to succeed. As a result, the server fell over and became unresponsive.
As a short term solution we reviewed all SSI's and moved some of the processes to client side to free up server resources and give it time to clean up.
Later we upgraded to Apache 2.2.
More Band-Aid than best practice, but sometimes you can get away with a simple
$SIG{CHLD} = "IGNORE";
According to the perlipc documentation
On most Unix platforms, the CHLD (sometimes also known as CLD) signal has special behavior with respect to a value of 'IGNORE'. Setting $SIG{CHLD} to 'IGNORE' on such a platform has the effect of not creating zombie processes when the parent process fails to wait() on its child processes (i.e., child processes are automatically reaped). Calling wait() with $SIG{CHLD} set to 'IGNORE' usually returns -1 on such platforms.
If you care about the exit statuses of child processes, you need to collect them (commonly referred to as "reaping") by calling wait or waitpid. Despite the creepy name, a zombie is merely a child process that has exited but whose status has not yet been reaped.
If your Perl programs themselves are the child processes becoming zombies, that means their parents (the ones that are forking-and-forgetting your code) need to clean up after themselves. A process cannot stop itself from becoming a zombie.
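If the parent does care about exit statuses, a minimal sketch of a non-blocking reaper (the %children bookkeeping is only illustrative):

use POSIX ':sys_wait_h';

my %children;    # pid => whatever you recorded when you forked

$SIG{CHLD} = sub {
    # Reap every child that has exited; WNOHANG keeps the handler from blocking.
    while ((my $pid = waitpid(-1, WNOHANG)) > 0) {
        my $status = $? >> 8;
        delete $children{$pid};
        # ... log "child $pid exited with $status" if useful ...
    }
};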
I just saw your comment that you are running Apache 1.3 and that may be associated with your problem.
SSI's can run CGI's. But CGI scripts that generate SSI's will not have those SSI's handled. The evaluation of SSI's happens before the running of CGI's in the Apache 1.3 request cycle. This was fixed with Apache 2.0 and later so that CGI's can generate SSI commands.
As I'd suggested above, try running your scripts on their own and have a look at the output. Are they generating SSI's?
Edit: Have you tried launching a trivial Perl CGI script to simply printout a Hello World type HTTP response?
Then if this works, add a trivial SSI directive such as
<!--#printenv -->
and see what happens.
Edit 2: Just realised what is probably happening. Zombies occur when a child process exits and isn't reaped. These processes are hanging around and slowly using up resources within the process table. A process without a parent is an orphaned process.
Are you forking off processes within your Perl script? If so, have you added a waitpid() call to the parent?
Have you also got the correct exit within the script?
CORE::exit(0);
As you have all the bits yourself, I'd suggest running the individual scripts one at a time from the command line to see if you can spot the ones that are hanging.
Does a ps listing show an inordinate number of instances of one particular script running?
Are you running the CGI's using mod_perl?
Edit: Just saw your comments regarding SSI's. Don't forget that SSI directives can run Perl scripts themselves. Have a look to see what the CGI's are trying to run.
Are they dependent on yet another server or service?