Can someone explain the life cycle of a request in a Perl Dancer application, starting from the server accepting the request? Does the application stay in memory, as with FCGI, or does it have to be loaded for every request?
When using CGI, the application must be loaded with each request. FCGI, like you said, will keep the application running. Here's the lifecycle for CGI:
1. loads the perl runtime
2. loads necessary modules
3. configures the application
4. sets up all routes (not just the one needed)
5. finds the correct route and handles the request
6. exits
When using FCGI, steps 1-4 are done at load time. So if you are running with Apache, the Perl runtime for your application starts when Apache starts. Each request then only performs step 5; step 6 never happens until the server retires the process. Requests respond much faster when using FCGI.
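Schematically, the FCGI case looks like this (MyDancerApp and its dispatch call are hypothetical placeholders, not Dancer's real dispatcher; the loop uses the CPAN FCGI module):

#!/usr/bin/env perl
use strict;
use warnings;
use FCGI;

# steps 1-4 run once, when the web server starts this process:
use MyDancerApp;    # hypothetical: loads modules, reads the config,
                    # registers all routes

my $req = FCGI::Request();
while ($req->Accept() >= 0) {
    # step 5 runs once per request:
    MyDancerApp::handle_current_request();   # hypothetical dispatch call
}
# step 6 (exit) happens only when the server retires this process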
Nowadays, many shared web hosts support FastCGI; it's just a matter of configuring it correctly.
Hello, I am experimenting with some small things in Mojolicious and I have the following question:
What happens when a request is received?
Is there some caching like in mod_perl, or is the code compiled each time?
It depends on the server it runs under.
If you use a pre-forking app server or a FastCGI server, then you'll get one or more processes reused across multiple requests.
You could run it as plain CGI, launching the script for each request, but that wouldn't be common.
Deployment options are in the manual.
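For example, under a pre-forking setup the routes are compiled once at startup and then reused. A minimal sketch with Mojolicious::Lite (the file name app.pl is just an example):

#!/usr/bin/env perl
use Mojolicious::Lite;   # implies strict, warnings and utf8

# This route is compiled once when the worker starts, then reused
# for every request that worker handles.
get '/' => sub {
    my $c = shift;
    $c->render(text => "Served by persistent worker PID $$\n");
};

app->start;   # dispatches on @ARGV (daemon, ...)

Started with hypnotoad app.pl (Mojolicious's bundled pre-forking server) or perl app.pl daemon, the process stays resident; only the code inside the route body runs per request.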
I have an example and a question regarding Unix/Apache session scope. Here is the test script I am using:
#! /usr/bin/perl -I/gcne/etc
use strict;
use warnings;

my $pid = $$;                                  # PID of this CGI process
system("mkdir -p /gcne/var/nick/hello.$pid");  # per-process scratch directory
chdir "/gcne/var/nick/hello.$pid" or die "chdir failed: $!";

my $num = 3;
while ($num--) {
    system("> blah.$pid.$num");   # shell redirection creates an empty file
    #sleep(5);
    system("sleep 5");
}
system("> blahDONE.$pid");        # marker file: this request finished
I have noticed that if I call this script twice from a web browser, it executes the requests in sequence, taking a total of 30 seconds. How does Perl/Unix deal with parallel execution when using system commands? Is there a possibility of cross-session problems when using system calls? Or does Apache treat each of these server calls as a new, separate process?
In this example, I'm basically trying to test whether or not different PID files would be created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then your script is invoked anew each time you request the web page, which means it gets a new process ID every time. Basically, for CGIs the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDIN, STDOUT and STDERR. Otherwise it is a normal command invocation.
The situation is a little different when you use a mechanism like mod_perl, but it seems you are not doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
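A quick way to observe this: a minimal CGI that just reports its process ID (hypothetical file name pid.cgi, placed wherever your cgi-bin lives). Each plain-CGI request, parallel or not, prints a different PID:

#!/usr/bin/perl
use strict;
use warnings;

# Plain CGI: Apache starts a brand-new process for every request,
# so $$ is different on every page load.
print "Content-Type: text/plain\r\n\r\n";
print "This request was handled by PID $$\n";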
P.S. The environment variables differ a bit between a call from Apache and one from the shell, but this is not relevant to your question (though you may wonder why e.g. USER or similar variables are missing).
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
A browser may issue only one request at a time to the same URL (tested with Firefox), so when testing, the requests may appear to be handled one after another. This is not server-related; it is caused by the web browser.
For the record, I don't really know Perl. I've deployed Rails apps on dotCloud. Here is what I am trying to do:
Currently I work for a SaaS company. We run scripts (Perl/Python/PHP) on an external shared server to do what our software cannot. We need to move the scripts off the shared server, and dotCloud seemed like a good option.
However, I have nearly no experience running Perl. It looks like I cannot just move the Perl script, as dotCloud says it runs any Perl web application using the PSGI standard:
From dotcloud documentation: "The Perl service can host any Perl web application compatible with the PSGI standard."
I moved the script to my own hosting account and it worked, but it runs too slowly. A virtual host/server seems like the best option, which is why I was excited about dotCloud, but since I'm not qualified to modify the Perl myself (i.e. adapt it to the PSGI standard), I need another option.
My question is two-fold: how easy or difficult is it to make a simple Perl script PSGI-compatible, and are there any other virtual hosting options for Perl with fewer restrictions?
If you just have a normal Perl script that doesn't need to be served by a web server, then you should use the perl-worker service. It is meant for normal Perl scripts, so you don't need to worry about PSGI; that is only for web applications.
Here is a link to the perl worker page on dotcloud:
http://docs.dotcloud.com/0.9/services/perl-worker/
This will give you access to a normal Perl environment, and you can run whatever you need: cron jobs, shell, etc.
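As for the first half of the question: making a trivial script speak PSGI is not much work, because the interface is tiny. A minimal sketch (hypothetical file name app.psgi; run it with plackup app.psgi, where plackup comes with the Plack distribution):

# app.psgi
use strict;
use warnings;

my $app = sub {
    my $env = shift;    # request data, roughly CGI's %ENV as a hashref

    return [
        200,                                  # HTTP status
        [ 'Content-Type' => 'text/plain' ],   # headers as key/value pairs
        [ "Hello from PSGI\n" ],              # body as a list of strings
    ];
};

$app;   # a .psgi file must return the code reference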
I run several Perl Dancer applications at the same time, under the same user, in FCGI mode (Apache). If I understand correctly, Apache (or any other web server) will fork a new Dancer process if the current one(s) are busy.
To ensure that no visitor is interrupted by a Dancer shutdown, I would like Dancer to finish handling the current connection and only then exit the process.
How can I shut down a Perl Dancer application with a HUP signal so that it performs such a graceful shutdown?
To roll out a new version of a Dancer application, I use pkill -HUP perl as the dancer user to "shut down" the processes. But currently (due to the missing signal handler) it's more like shooting them down than shutting them down.
The solution by mugen kenichi works (Starman):
If you are able to change your infrastructure, you could try one of the Plack web servers that support your need. Starman and Hypnotoad both do graceful restarts on SIGHUP.
There are a few shortcomings regarding <% request.uri_base %>, so we have to develop with hard-coded URI paths. Not very elegant, but necessary.
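For reference, the working setup looks roughly like this (the worker count, port, and process-title match are illustrative assumptions, not exact values from our deployment):

# start the Dancer app under Starman via Plack
plackup -E deployment -s Starman --workers=10 -p 5001 -a bin/app.pl

# graceful restart: the Starman master catches SIGHUP and replaces each
# worker only after it has finished its current request (assuming
# Starman's default process title)
pkill -HUP -f 'starman master'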
If I read your question correctly, you are concerned that Apache/FCGI might kill the Dancer app while it is in the middle of handling a request. Is that correct?
If so, don't worry about it. Apache/FCGI doesn't do that. When it forks a new instance of the handler because existing ones are busy, that's a new one in addition to the existing instances. The existing ones are left alone to finish what they're doing.
I am new to FastCGI and am looking to use this platform to speed up my existing vanilla CGI (Perl) programs.
However, in reading the FastCGI/Apache FAQ, it appears I can set up my scripts (once converted to use separate initialization/request sections) in the Apache config as one of the following:
1) dynamic
2) static "inside the scope of the SetHandler"
3) static "inside the scope of the AddHandler"
4) static "outside the scope of the Set/AddHandler" (or, I think, this can be called 'external')
I am confused about those four options and am assuming the default of 'dynamic' is what I should go with, but could someone explain the pros and cons of each?
There isn't much to worry about with AddHandler/SetHandler. They are just a way of defining which extensions are recognized as FastCGI scripts.
What you might want to consider is dynamic, static or external.
Static applications are started as Apache starts (possibly the most common setup).
Dynamic applications are started when the first request arrives (this is the default).
External applications require the FastCGI server to run separately from Apache (this is the most advanced configuration); see the sketch after this list.
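Roughly, in the Apache config the three types look like this (paths, extensions and option values are made-up examples; see the mod_fastcgi docs for the full option lists):

# static: spawned and managed by Apache's process manager at startup
FastCgiServer /var/www/app/app.fcgi -processes 4

# dynamic: any URI handled as fastcgi-script that is not configured
# explicitly is spawned on its first request
AddHandler fastcgi-script .fcgi

# external: the application runs outside Apache's control; Apache only
# connects to the socket it listens on
FastCgiExternalServer /var/www/app/ext.fcgi -socket /tmp/ext.sock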
I suggest you refer to the module documentation for more information (at least the summary):
FastCGI applications under mod_fastcgi are defined as one of three types: static, dynamic, or external. They're configured using the FastCgiServer, FastCgiConfig, and FastCgiExternalServer directives respectively. Any URI that Apache identifies as a FastCGI application and which hasn't been explicitly configured using a FastCgiServer or FastCgiExternalServer directive is handled as a dynamic application (see the FastCgiConfig directive for more information).
FastCGI static and dynamic applications are spawned and managed by the FastCGI Process Manager, fcgi-pm. The process manager is spawned by Apache at server initialization. External applications are presumed to be started and managed independently.
Of course, if you are using Perl, you can also try mod_perl, where you can start by running your existing CGI scripts.
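To make the "separate initialization/request sections" from the question concrete, here is a minimal sketch using the CPAN FCGI module (the counter exists only to demonstrate that the process persists across requests):

#!/usr/bin/perl
use strict;
use warnings;
use FCGI;    # low-level FastCGI bindings from CPAN

# --- initialization section: runs once, when the process starts ---
my $request = FCGI::Request();
my $count   = 0;

# --- request section: runs once per accepted request ---
while ($request->Accept() >= 0) {
    $count++;
    print "Content-Type: text/plain\r\n\r\n";
    print "Request $count served by PID $$\n";
}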