I'm using Dancer 1.31, in a standard configuration (plackup/Starman).
In a request I wished to call a Perl function asynchronously, so that the request returns immediately. Think of the typical "long-running operation" scenario, in which one wants to return a "processing" page with a refresh+redirect.
I (naively?) tried with a thread:
use threads;

sub myfunc {
    sleep 9;    # just for testing a slow operation
}

any '/test1' => sub {
    my $thr = threads->create('myfunc');
    $thr->detach();
    return "done";
};
It does not work: the server seems to freeze, and the error log does not show anything. I guess manual creation of threads is forbidden inside Dancer? Is it an issue with PSGI? What is the recommended way?
I would stay away from perl threads especially in a web server environment. It will most likely crash your server when you join or detach them.
I usually create a few threads (a thread pool) BEFORE initializing other modules and keep them around for the entire lifetime of the application. Thread::Queue nicely provides communication between the workers and the main thread.
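A minimal sketch of that pattern (pool size, job format, and route name are only illustrative, not a tested implementation):

use strict;
use warnings;
use threads;
use Thread::Queue;

# Create the worker pool before loading other modules or starting the PSGI app.
my $jobs = Thread::Queue->new;

my @pool = map {
    threads->create(sub {
        while (defined(my $job = $jobs->dequeue)) {
            sleep 9;    # stand-in for the slow operation
        }
    });
} 1 .. 4;

# Inside a route handler, hand the work off and return immediately:
# any '/test1' => sub {
#     $jobs->enqueue({ id => 42 });
#     return "processing";
# };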
The best asynchronous solution I have found in Perl is POE. On Linux I prefer using POE::Wheel::Run to run executables and subroutines asynchronously. It uses fork and has a beautiful interface allowing communication with the child process. (On Windows it's not usable due to its thread dependency.)
Setting up Dancer and POE inside the same application/script may cause problems, and POE's event loop may be blocked. A single worker thread dedicated to POE may come in handy, or I would write another server based on POE and just communicate with the Dancer application via sockets.
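For reference, here is a minimal, standalone POE::Wheel::Run sketch that runs a slow subroutine in a child process (event names and the 9-second sleep are only illustrative):

use strict;
use warnings;
use POE qw(Wheel::Run);

POE::Session->create(
    inline_states => {
        _start => sub {
            my ($kernel, $heap) = @_[KERNEL, HEAP];
            my $wheel = POE::Wheel::Run->new(
                Program     => sub { sleep 9; print "worker finished\n" },
                StdoutEvent => 'got_stdout',
                CloseEvent  => 'got_close',
            );
            $heap->{wheel} = $wheel;
            # Reap the child so it does not turn into a zombie.
            $kernel->sig_child($wheel->PID, 'got_sigchld');
        },
        got_stdout  => sub { print "child says: $_[ARG0]\n" },
        got_close   => sub { delete $_[HEAP]{wheel} },
        got_sigchld => sub { },    # child has been reaped
    },
);

POE::Kernel->run();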
Threads are definitely iffy with Perl. It might be possible to write some threaded Dancer code, but to be honest I don't think we ever tried it. And considering that Dancer 1's core uses singleton classes, it might also be very tricky.
As Ogla says, there are other ways to implement asynchronous behavior in Dancer. You say that you are using Starman, which is a forking engine. But there is also Twiggy, which is AnyEvent-based. To see how to leverage it to write asynchronous code, have a gander at Dancer::Plugin::Async.
This question I asked has resulted in me exploring directly interfacing my FastCGI script to NGINX, rather than using a reverse proxy to Apache. I successfully modified my FastCGI script to run as a daemon using some code I found online:
use FCGI;

my $s = FCGI::OpenSocket(':9000', 20);
my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $s);

# Remaining code stays just as it does when used with Apache's mod_fcgid
while ($request->Accept() >= 0) {
    # Call core app subroutines.
}
It works, but near as I can tell this has a distinct disadvantage over mod_fcgid: I have one process running which will handle one request at a time and if that process dies, there's nothing to start it back up. There are references on Stack Overflow to code that properly spun off workers, but the sites referenced inevitably seem to have gone offline, much like FastCGI's own site.
So, I'm trying to figure out what I need to add and also -- pardon the pun -- figure out if I need to take a fork in this road. Here are the options that I am trying to consider, if I understand my issues correctly:
Directly implement some sort of forking mechanism. Ideally it should (1) toss off the request to a process/thread/worker -- perhaps one that can stay alive for multiple requests -- and move on to being ready for the next request, and (2) be independent enough from the workers that if something goes wrong with a worker, it doesn't bring down the whole system until I catch it and restart the main process (e.g. auto-restarting processes). If this can be done simply and reliably, this seems to have a huge appeal since the code already works with FastCGI.
Give up on direct FastCGI, convert to PSGI, and use an application server to handle these things. Given that I'm using Perl, I'd guess Starman is the logical option, although I've been reading about uWSGI's PSGI support, and it sounds almost ideal in Emperor ("tyrant") mode, where it could run processes with different privileges, auto-restart missing processes, etc.
Option 1 seems intriguing since it requires the least modification to my existing code and a FastCGI script started up without FastCGI still works like a normal CGI script. (I'm not running this code under FastCGI when it is used by sites that are very low traffic).
Option 2, though, feels like it might be more "modern." At least the PSGI documentation still seems to be online, for example, and Starman or uWSGI would probably take care of the background stuff I need better than anything I would cook up myself. Downside: I'd need two startup scripts for my code: one for the PSGI-enabled sites and one for sites still running under CGI.
Update: Continuing to explore option 1, I read through this tutorial on Perl fork(), which seems somewhat relevant. Would using fork to break off each FastCGI request be a good approach if I go with option 1? I assume I'd be at risk of fork bombing, although if I kept track of the number of forks and issued wait() if $forks > 10; perhaps that would be a safe approach? (Or perhaps using Parallel::ForkManager to do that process watching.) Or would it be safer and/or more efficient to use something like Thread::Queue and pass FastCGI request objects to a set of threads that are reliably already established? There seem to be plenty of pitfalls I might overlook, which then returns me to whether I should opt for option 2.
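For concreteness, this is the rough prefork shape I have in mind for option 1 (worker count and error handling are placeholders, untested):

use strict;
use warnings;
use FCGI;

my $socket  = FCGI::OpenSocket(':9000', 20);
my $workers = 5;    # placeholder worker count

for (1 .. $workers) {
    my $pid = fork();
    die "fork failed: $!" unless defined $pid;
    next if $pid;    # parent keeps spawning workers

    # Child: each worker runs its own accept loop on the shared socket.
    my $request = FCGI::Request(\*STDIN, \*STDOUT, \*STDERR, \%ENV, $socket);
    while ($request->Accept() >= 0) {
        # Call core app subroutines.
    }
    exit 0;
}

# Parent: reap workers; a crashed worker could be replaced here.
while ((my $pid = wait()) != -1) {
    warn "worker $pid exited\n";
}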
I am working on a Perl script which does some periodic processing based on file-system contents.
The overall structure is like this:
# ... initialization ...

while (1) {
    # ... scan filesystem, perform actions depending on changes detected ...
    sleep 5;
}
I would like to add the ability to feed some data into this process by exposing an interface through HTTP. E.g. I would like to add an endpoint to skip the sleep, but also some means to input data that is processed in the next iteration. Additionally, I would like to be able to query some of the program's status through HTTP (which is why a simple fork() to run the webserver part in a separate process is not sufficient).
So far I have already used the Dancer2 framework once, but its start call blocks and thus does not allow any other tasks (like my loop) to run. Additionally, I could of course move the code which is currently inside the loop to an endpoint exposed through Dancer2, but then I would need to call that periodically (through an external program?), which seems to be quite an obscure indirection compared to just having the webserver part running in the background.
Is it possible to unobtrusively (i.e. without blocking the program) add a REST-server capability to a Perl script? If yes: Which modules would be used for the purpose? If no: Should I really implement an external process to periodically invoke a certain endpoint or pursue a different solution altogether?
(I have tried to add a dancer2 tag, but could not do so due to insufficient reputation. Do not be misled by this: I have so far only tried with Dancer2, not Dancer (v.1).)
You could try to launch your processing loop in a background thread, before you run start;.
See man perlthrtut
You probably want use threads::shared; to declare some variables shared between the REST part and the background thread. Or use dedicated queues/event mechanisms.
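A rough sketch of that idea against the loop from the question (endpoint names, the shared-variable layout, and the locking are illustrative, not a tested implementation):

use strict;
use warnings;
use threads;
use threads::shared;
use Dancer2;

my $skip_sleep :shared = 0;
my @pending    :shared;    # data to consume on the next iteration

# The original processing loop, moved into a background thread
# created before start; is called.
my $worker = threads->create(sub {
    while (1) {
        {
            lock(@pending);
            # ... scan filesystem, process @pending, act on changes ...
            @pending = ();
        }
        sleep 5 unless $skip_sleep;
        $skip_sleep = 0;
    }
});
$worker->detach;

get  '/status' => sub { return "pending items: " . scalar(@pending) };
post '/skip'   => sub { $skip_sleep = 1; return "ok" };
post '/data'   => sub {
    lock(@pending);
    push @pending, body_parameters->get('item');
    return "queued";
};

start;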
I want to write unit tests for a subroutine in Perl. The subroutine uses multiple threads to do its tasks: it first creates some threads and then waits for them to join.
The problem is that our unit tests run on a server which is not able to run multi-threaded tests, so I need to somehow mock out the thread behavior. Basically I want to override the threads create and join functions so that it is not threaded anymore. Any pointers on how I can do that and test the code?
Edit: The server fails to run the threaded code for the following reason:
Devel::Cover does not yet work with threads
Update: this answer doesn't solve the OP's problem as described in the edited question, but it might be useful to someone.
Perl threads are an interpreter emulation, not an operating system feature. So, they should work on any platform. If your testing server doesn't support threads, it's probably for one of these reasons:
Your version of Perl is very old.
Perl was compiled without thread support.
Your testing framework wasn't created with threaded code in mind.
The first two could be easily rectified by updating your environment. However, I suspect yours is the third issue.
I don't think you should solve this by mocking the thread behavior. This changes the original code too much to be a valid test. And it would be a significant amount of work anyway, so why not direct that effort toward getting a threaded test working?
The exact issue depends on your code, but probably your subroutine starts a thread and then returns, with the thread still running. Then your test framework runs the sub over and over, accumulating a whole bunch of concurrent threads.
In that case, all you need is a wrapper sub that calls the sub you are testing, and then blocks until the threads are complete. This should be fairly simple. Take a look at threads->list() to see how you can detect running threads. Just have a loop that waits until the threads in question are no longer running before exiting the wrapper sub.
Here is a simple complete example demonstrating a wrapper sub:
#!/usr/bin/perl
use strict;
use warnings;
use threads;

sub sub_to_test {
    threads->create(sub { sleep 5; print("Thread done\n"); threads->detach() });
    return "Sub done\n";
}

sub wrapper {
    # Get a count of the running threads.
    my $original_running_threads = threads->list(threads::running);
    my @results = sub_to_test(@_);
    # Block until the number of running threads is back to what it was when we started.
    sleep 1 while (threads->list(threads::running) > $original_running_threads);
    return @results;
}

print wrapper;
I have a multithreaded application in Perl for which I have to rely on several non-thread-safe modules, so I have been using fork()ed processes with kill() signals as a message passing interface.
The problem is that the signal handlers are a bit erratic (to say the least) and often end up with processes that get killed in inappropriate states.
Is there a better way to do this?
Depending on exactly what your program needs to do, you might consider using POE, which is a Perl framework for multi-threaded applications with user-space threads. It's complex, but elegant and powerful and can help you avoid non-thread-safe modules by confining activity to a single Perl interpreter thread.
Helpful resources to get started:
Programming POE presentation by Matt Sergeant (start here to understand what it is and does)
POE project page (lots of cookbook examples)
Plus there are hundreds of pre-built POE components you can use to assemble into an application.
You can always have a pipe between parent and child to pass messages back and forth.
pipe my $reader, my $writer;

my $pid = fork();
if ( $pid == 0 ) {
    # Child: write to the parent through $writer.
    close $reader;
    ...
}
else {
    # Parent: read messages from the child through $reader.
    close $writer;
    my $msg_from_child = <$reader>;
    ....
}
Not a very comfortable way of programming, but it shouldn't be 'erratic'.
Have a look at forks.pm, a "drop-in replacement for Perl threads using fork()" which makes for much more sensible memory usage (but don't use it on Win32). It will allow you to declare "shared" variables and then it automatically passes changes made to such variables between the processes (similar to how threads.pm does things).
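A tiny example of the drop-in style, assuming forks and forks::shared are installed (the worker body is only illustrative):

use strict;
use warnings;
use forks;           # must be loaded before any threads-style code
use forks::shared;

my $counter :shared = 0;

my @workers = map {
    threads->create(sub {
        lock($counter);
        $counter++;
    });
} 1 .. 3;

$_->join for @workers;
print "counter = $counter\n";    # prints 3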
From Perl 5.8 onwards you should be looking at the core threads module. Have a look at http://metacpan.org/pod/threads
If you want to use modules which aren't thread-safe, you can usually load them with a require and import inside the thread entry point.
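Roughly like this (Some::Legacy::Module is a hypothetical stand-in for the non-thread-safe module):

use strict;
use warnings;
use threads;

my $thr = threads->create(sub {
    # Load the module only inside this thread, so its state is never
    # cloned across threads.
    require Some::Legacy::Module;    # hypothetical module name
    Some::Legacy::Module->import;

    # ... use the module's functions here ...
    return;
});

$thr->join;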
I'm writing some server code that talks to a client process via STDIN. I'm trying to write a snippet of perl code that asynchronously receives responses from the client's STDOUT. The blocking version of the code might look like this:
sub _read_from_client
{
    my ($file_handle) = @_;
    while (my $line = <$file_handle>) {
        print STDOUT $line;
    }
    return;
}
Importantly, the snippet needs to work in Win32 platform. There are many solutions for *nix platforms that I'm not interested in. I'm using ActivePerl 5.10.
This thread on Perlmonks suggests you can make a socket nonblocking on Windows in Perl this way:
ioctl($socket, 0x8004667e, 1);
More details and resources in that thread
If you don't want to go the low-level route, you will have to look at the other, more framework-based solutions.
You can use a thread to read from the input and have it stuff all data it reads into a Thread::Queue, which you then handle in your main thread (a sketch follows at the end of this answer).
You can look at POE which implements an event based framework, especially POE::Wheel::Run::Win32. Potentially, you can also steal the code from it to implement the nonblocking reads yourself.
You can look at Coro, which implements a cooperative multitasking system using coroutines. This is mostly similar to threads except that you get userspace threads, not system threads.
You haven't stated how far up the stack you want to go, but your choice is between sysread plus a framework, or writing said framework yourself. The easiest route is just to use threads, or to go through the code of POE::Wheel::Run::Win32.
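A sketch of the Thread::Queue approach mentioned above (handle names and the polling loop are illustrative, untested on Win32):

use strict;
use warnings;
use threads;
use Thread::Queue;

my $queue = Thread::Queue->new;

# Reader thread: blocks on the child's output handle and pushes
# complete lines onto the queue.
sub start_reader {
    my ($fh) = @_;
    return threads->create(sub {
        while (my $line = <$fh>) {
            $queue->enqueue($line);
        }
        $queue->enqueue(undef);    # signal end-of-stream
    });
}

# The main loop can then poll without blocking:
# while (1) {
#     while (defined(my $line = $queue->dequeue_nb())) {
#         print STDOUT $line;
#     }
#     # ... do other server work ...
# }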