I am trying to get my Perl CGI scripts to run in FastCGI mode. By that I mean that I can run Perl CGI scripts on Nginx without any issues, but based on performance measurements it does not appear that they are actually running as FastCGI. I've seen mentions of spawn-fcgi, but nothing recent, and no details on how to set it up with current versions of Nginx and Ubuntu.
I am relatively new to Linux.
Environment info:
Ubuntu 22.04 hosted in the "cloud"
nginx version: nginx/1.18.0 (Ubuntu)
Tutorial I followed: https://techexpert.tips/nginx/perl-cgi-nginx/
I have installed fcgiwrap.
CGI works, HTTPS works.
I am enclosing the bulk of my Perl CGI script within:
use CGI::Fast;
while (my $q = CGI::Fast->new) {
    # all my code
}
My script is doing a simple write and query from a very small SQLite3 database (4 columns, 500 rows, 290k db size).
I am testing across the Internet.
Script execution takes roughly 345-420 ms per page. That is about the same as plain CGI under IIS on Windows.
I also saw that requesting a simple HTML file only has marginally better performance.
I understand that there is a lot more to performance testing than the preceding. The point is that I think something is missing from my FastCGI configuration. I imagine I need to launch the Perl FastCGI script as a persistent process and then connect it to Nginx somehow. If anyone can point me in the right direction or provide example config files, that would be great. Thanks all!
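From what I have read so far, the missing piece may be exactly that: fcgiwrap speaks FastCGI to Nginx but forks a fresh process per request, so the script never stays resident. Below is a sketch of what I imagine is needed, untested, with the socket path and script path as guesses:

spawn-fcgi -s /run/perl-fcgi.sock -M 0660 -u www-data -g www-data -- /var/www/cgi-bin/myscript.pl

and in the Nginx server block:

location /cgi-bin/ {
    include fastcgi_params;
    # talk to the persistent Perl process instead of fcgiwrap
    fastcgi_pass unix:/run/perl-fcgi.sock;
}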
Related
I have an example and question regarding unix/apache session scope. Here is the test script I am using:
#!/usr/bin/perl -I/gcne/etc
use strict;
use warnings;

my $pid = $$;                                   # PID of this CGI process
system("mkdir -p /gcne/var/nick/hello.$pid");   # per-request scratch directory
chdir "/gcne/var/nick/hello.$pid" or die "chdir failed: $!";

my $num = 3;
while ($num--) {
    system("> blah.$pid.$num");   # create an empty marker file per iteration
    system("sleep 5");            # 3 iterations x 5 s = 15 s per request
}
system("> blahDONE.$pid");        # marker written when the request finishes
I have noticed that if I call this script twice from a web browser, it executes the requests in sequence, taking a total of 30 seconds. How does Perl/Unix deal with parallel execution when using system commands? Is there a possibility of cross-session problems when using system calls? Or does Apache treat each of these requests as a new console session process?
In this example, I'm basically trying to test whether or not different PID files would be created in the "wrong" PID folder.
CentOS release 5.3
Apache/2.2.3 Jul 14 2009
Thanks
If you call the script via the normal CGI interface, then each time you request a web page your script is called, which means it gets a new process ID each time. Basically, for CGIs the interface between Apache and your program consists of the command-line arguments, the environment variables, and STDOUT and STDERR. Otherwise it is a normal command invocation.
The situation is a little different when you use a mechanism like mod_perl, but it seems you are not doing that at the moment.
Apache does not do any synchronisation, so you can expect up to MaxClients (see the Apache docs) parallel invocations of your script.
P.S. The environment variables differ a bit between a call from Apache and one from a shell, but this is not relevant to your question (though you may wonder why e.g. USER or similar variables are missing).
See also for more information: http://httpd.apache.org/docs/2.4/howto/cgi.html
Especially: http://httpd.apache.org/docs/2.4/howto/cgi.html#behindscenes
A browser may only issue one call at a time (tested with Firefox), so when testing it may appear that requests are handled one after another. This is not server related; it is caused by the web browser.
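To rule out the browser, you can fire two requests in parallel from a shell instead; a quick sketch (the URL is an assumption):

# two concurrent requests; if the server ran them in parallel,
# both blahDONE markers should appear about 15 s later, not 30 s
curl "http://localhost/cgi-bin/test.pl" &
curl "http://localhost/cgi-bin/test.pl" &
wait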
For the record, I don't really know Perl. I've deployed Rails apps on dotcloud. Here is what I am trying to do:
Currently I work for a SaaS. We run scripts (perl/python/php) on an external shared server to do what our software cannot. We need to move the script off of the shared server, and dotcloud seemed like a good option.
However, I have nearly no experience running Perl. It looks like I cannot just move the Perl script over, as dotcloud says it runs any Perl using the PSGI standard:
From dotcloud documentation: "The Perl service can host any Perl web application compatible with the PSGI standard."
I moved the script to my own hosting account and it worked, but it appears to run too slowly. A virtual host/server seems like the best option, which is why I was excited about dotcloud; but since I'm not qualified to modify the Perl myself (i.e. to make it meet the PSGI standard), I need another option.
My question is two-fold: how easy or difficult is it to make a simple Perl script PSGI-compliant, OR are there other virtual hosting options for Perl with fewer restrictions?
If you just have a normal Perl script that doesn't need to be served from a web server, then you should use the perl-worker service. It is meant for plain Perl scripts, so you don't need to worry about PSGI; that is only for web applications.
Here is a link to the perl worker page on dotcloud:
http://docs.dotcloud.com/0.9/services/perl-worker/
This will give you access to a normal Perl environment, and you can run whatever you need: cron jobs, shell commands, etc.
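That said, to answer the other half of your question: making a simple script PSGI-compliant is not a big job either. A minimal sketch of the convention (the response body is illustrative):

# hello.psgi - the file just has to return a code reference
my $app = sub {
    my $env = shift;    # request data, similar to CGI environment variables
    return [
        200,                                  # HTTP status
        [ 'Content-Type' => 'text/plain' ],   # headers
        [ "Hello from PSGI\n" ],              # body
    ];
};

You can then run it locally with plackup hello.psgi.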
I want to deploy a PSGI script that runs under Apache2 with Plack. Apache is configured with:
<Location "/mypath">
SetHandler perl-script
PerlResponseHandler Plack::Handler::Apache2
PerlSetVar psgi_app /path/to/my/script.psgi
</Location>
When I test the script with plackup, the --reload parameter watches updates on the .psgi file. In the production environment it is fine that Apache and Plack do not check and restart on each change for performance reasons, but how can I tell them explicitly to restart Plack::Handler::Apache2 and/or the PSGI script to deploy a new version?
It looks like Plack checks for some changes at regular intervals, but I have no clue when. Moreover, it seems to create multiple instances, so I sometimes get different versions of script.psgi at /mypath. It would be helpful to be able to manually flush the Perl response handler without having to restart Apache or wait an unknown amount of time.
The short answer is: you can't. That's why we recommend using plackup (with -r) for quick development and Apache only for deployment (production use).
The other option is to have a development Apache process and set MaxRequestsPerChild to a really small value, so that a fresh child is spawned after a very short period of time. I haven't tested this, and doing so will definitely impact the performance of your entire httpd if you run non-development applications in the same process (which is a bad idea in the first place anyway).
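For reference, that development-only knob is a single directive in the httpd configuration (the value here is illustrative):

# recycle each child after a handful of requests so fresh code loads quickly;
# this hurts throughput, so use only on a development server
MaxRequestsPerChild 5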
Apache2::Reload (untested)
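A sketch of how Apache2::Reload is typically wired up (untested, and the module pattern is an assumption):

PerlModule Apache2::Reload
PerlInitHandler Apache2::Reload
PerlSetVar ReloadAll Off
PerlSetVar ReloadModules "MyApp::*"

Note that it reloads changed Perl modules inside the running interpreter, not the .psgi file itself.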
You can move your application out of the Apache process, e.g.
FastCgiExternalServer /virtual/filename/fcgi -socket /path/to/my/socket
and run your program with
plackup -s FCGI --listen /path/to/my/socket --nproc 10 /path/to/my/script.psgi
This way you can restart your application without restarting Apache.
If you save the PID of the main FCGI process (--pid $pid_file), you can easily restart it and load your new code.
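A restart could then look like this (paths are assumptions):

# stop the old pool, then start a fresh one on the same socket
kill `cat /path/to/my/pidfile`
plackup -s FCGI --listen /path/to/my/socket --nproc 10 \
    --pid /path/to/my/pidfile --daemonize /path/to/my/script.psgi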
There is also a module available to manage (start, stop, restart) all your FCGI pools:
https://metacpan.org/pod/FCGI::Engine::Manager::Server::Plackup (not tested)
I have PHP running via Fastcgi on Apache2. The PHP process uses a Unix Socket.
Would it be possible to access the socket from the command line and to execute a PHP script?
I have some long running operations that can take hours to execute, so going via the webserver is not an option. On the other hand, calling PHP from the command line directly is not optimal, because the CLI process cannot access the shared caches of the fastcgi php process.
I tried the socket command, but I do not really know what to do with it.
I finally got it working with cgi-fcgi.
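The working invocation looked roughly like this (socket and script paths are assumptions):

# ask the resident PHP FastCGI process, via its socket, to run a script
SCRIPT_FILENAME=/var/www/longtask.php \
REQUEST_METHOD=GET \
cgi-fcgi -bind -connect /var/run/php5-fpm.sock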
I have 2 different web servers on a Debian Lenny machine. One is running FastCGI (TRAC) and the other is running PHP and some CGI scripts. So I currently have the 2 Apache2 modules enabled (cgi and fcgi) and the 2 vhosts set up accordingly. I have no particular interest in having both modules running at the same time.
So I want to keep ONLY the Apache fastcgi module running, as it looks to be the more efficient one.
Could you please confirm whether the following assessments are correct?
1- I will have nothing to do/change for the TRAC site (already running fcgi)
2- I will have to tune the other web server's vhost to use a handler for fastcgi scripts
3- I will have to change only the Perl modules from "use CGI" to "use CGI::Fast"
4- I will be able to keep the rest of the existing Perl CGI scripts without other changes
5- I do not need to use CGI::Apache but CGI::Fast (instead of the current CGI module) in the web server scripts
I hope my point is clear as it's all a bit foreign to me ...
Thx
EDIT:
Thanks to Naveed and J-16 for the hints.
Here is what I did to get it working, in case it helps others:
installed CGI::Fast with CPAN, which made things work better:
On Debian with libperl already installed
perl -MCPAN -e shell
cpan> install CGI::Fast
changed the filenames from *.cgi to *.fcgi,
included the FastCGI while loop, as advised below by Naveed,
set up the relevant Apache vhost with the right handler for FastCGI (see the FastCGI docs, and the vhost sketch after this list),
enabled the Apache fastcgi module (a2enmod fastcgi) and disabled the cgi module,
checked the fastcgi.conf file in the Apache settings,
restarted Apache,
checked that FastCGI was running as an Apache subprocess (ps -afx),
fixed some script issues that were already present but only surfaced when running under FastCGI, as advised (errors detected by checking the Apache logs),
EDIT: adapted the file upload code, as the initial script no longer worked (I still don't understand why), so I had to replace the while loop with one like this:
open(FILE, ">", "$upload_dir/$file_name")
    or die "Cannot open $upload_dir/$file_name: $!";
binmode FILE;    # uploads may contain binary data
while ($bytes_count = read($file_query, $buffer, 2096)) {
    $size += $bytes_count;
    print FILE $buffer;
}
close(FILE);
done.
World is not yet perfect but it finally works.
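The vhost handler setup mentioned in the list above was along these lines (a hypothetical mod_fastcgi snippet; paths are assumptions):

<IfModule mod_fastcgi.c>
    AddHandler fastcgi-script .fcgi
    <Directory /var/www/cgi-bin>
        Options +ExecCGI
    </Directory>
</IfModule>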
You will have to do a little more than just change use CGI to use CGI::Fast. Make sure you wrap your CGI scripts in a while loop, as the documentation shows: http://p3rl.org/CGI::Fast
use CGI::Fast;
while (my $q = CGI::Fast->new) {
    # The original CGI code goes in here, using $q as the query object
}