nginx and Catalyst configuration - perl

I am having trouble deploying a Catalyst application using nginx and
FastCGI. I am attempting to do this under Ubuntu 12.04.
I have successfully configured nginx to serve static content from my
app's /root subdirectory. However, when I try to access any of my dynamic
URLs, I get a 404 error in my application's error log saying the
(unmapped) URL is not found, which leads me to believe that nginx is
attempting to serve the request like a static page instead of
passing it to my Catalyst app.
To restate: hitting 'localhost:3001/root/static.html' results in the
static content being successfully displayed in the browser, but
hitting 'localhost:3001/expense/editor' results in the following error:
"GET /expense/editor HTTP/1.1" 404
(where '/expense/editor' is a path in my app, one that I can
successfully access when running the built-in Catalyst development
server).
I am launching the Catalyst app as:
> perl script/budgetweb_fastcgi.pl -l localhost:3003
I also tried running /etc/init.d/fcgiwrap. I am unclear whether I need to run a
separate FastCGI wrapper, or whether the Perl script above is my FastCGI
wrapper. I edited fcgiwrap to use TCP sockets (127.0.0.1:3003), which
then prevented me from running both /etc/init.d/fcgiwrap and
script/budgetweb_fastcgi.pl at the same time, since they both use the
same socket. So I'm guessing I'm only supposed to use the Catalyst
script? Also, when running fcgiwrap, I get 502 "bad gateway" errors
when attempting to access static content.
Any help, or pointers to help, will be much appreciated. So far I have looked at the following pages (among others; StackOverflow will only allow me to post two links):
Catalyst wiki
HOWTO: Deploy a Catalyst application using FastCGI and nginx
Here is my nginx config file for the server:
server {
    listen 3001;
    server_name budgetweb.com;
    root /local/www/money/budgetweb;

    location /root {
        add_header Cache-control public;
        root /local/www/money/budgetweb/;
    }

    location / {
        access_log /local/www/money/budgetweb/logs/access.log;
        error_log /local/www/money/budgetweb/logs/error.log;
        index index.html index.htm index.pl;
        try_files $uri =404;
        gzip off;
        fastcgi_pass localhost:3003;
        fastcgi_index index.pl;
        include /etc/nginx/fastcgi_params;
        fastcgi_param SCRIPT_FILENAME /local/www/money/budgetweb$fastcgi_script_name;
        fastcgi_param SCRIPT_NAME /;
        fastcgi_param PATH_INFO $fastcgi_script_name;
    }

    # Disable gzip (it makes scripts feel slower since they have to complete
    # before getting gzipped)
    gzip off;
    # include /etc/nginx/fcgiwrap.conf;
}

The fastcgi.pl script included with Catalyst is your FastCGI wrapper. All you should have to do is start it on a socket, then point your web server at that socket, and everything should pass through. The only thing you'll want to do for a production system is create a start/stop script that starts and stops your application on boot and shutdown. The start command will look pretty much like what you ran above (you may want to add a '-d' flag to daemonize it).
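For example, a daemonized start might look like this (a sketch: -n, -p, and -d are the usual nproc/pidfile/daemonize options of Catalyst's generated FastCGI script, and the pidfile path here is illustrative; check the script's --help for your version):
> perl script/budgetweb_fastcgi.pl -l localhost:3003 -n 5 -p /tmp/budgetweb.pid -d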
On your web server configuration, pointing '/' at your application should be fine. You might try removing the 'index', 'try_files', and 'fastcgi_index' lines; those might be causing nginx to try to serve the content statically instead of passing the request to your application.
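A trimmed-down version of that location block might look like this (a sketch based on the recipe in the Catalyst deployment docs, which suggest an empty SCRIPT_NAME when the app lives at the root; paths and port as in the question):

location / {
    include /etc/nginx/fastcgi_params;
    # hand every URI to the Catalyst app; no static lookups here
    fastcgi_param SCRIPT_NAME '';
    fastcgi_param PATH_INFO $fastcgi_script_name;
    fastcgi_pass localhost:3003;
}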

Related

Stop nginx from ignoring specific subdomain in server block

I've been using nginx on Ubuntu 20 for a few years now, mostly without problems, but on a newly deployed server I can't seem to get this right.
I have a server block in a file named alpha in sites-available. The server block has server_name alpha.example.com set up with a document root of /var/www/alpha.
I have a server block in a file named beta in sites-available. The server block has server_name beta.example.com set up with a document root of /var/www/beta.
DNS A records exist for both alpha.example.com and beta.example.com.
What happens:
URL http://beta.example.com displays http://beta.example.com in the browser address bar, but pulls content from /var/www/alpha.
By experimenting, I've found that nginx is consistently processing the file that comes first alphabetically in sites-available, regardless of the subdomain in the URL.
My questions are:
Why does it behave this way? and
How can I turn that off?
The behavior I want is for each server block to route a single subdomain to a specific document root and ignore everything else.
So... http://beta.example.com doesn't even try to go to /var/www/alpha
Here's an example of the contents of one of the server block files:
server {
    listen 80;
    listen [::]:80;
    server_name alpha.example.com;
    root /var/www/alpha;
    index index.php index.html index.htm index.nginx-debian.html;
    access_log /var/log/nginx/alpha_access.log;
    error_log /var/log/nginx/alpha_error.log;

    location / {
        try_files $uri $uri/ /index.php;
    }

    location ~ ^/(doc|sql|setup)/ {
        deny all;
    }

    location ~ \.php$ {
        fastcgi_pass unix:/run/php/php7.4-fpm.sock;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
        include snippets/fastcgi-php.conf;
    }

    location ~ /\.ht {
        deny all;
    }
}
So, to say it another way: What do I need to put in there to tell nginx "If the subdomain isn't alpha, ignore all this... this is only for alpha.example.com"?
Turned out my problem was a typo in the server name in one of the blocks. With my attention on the subdomain piece, I failed to notice the domain name itself was wrong (e.g. alpha.exampple.com would do it).
The results of this are counterintuitive and can send you barking up all sorts of wrong trees trying to figure out what's wrong.
Part of the answer, too, though, is that nginx is apparently designed with a strong bias toward finding a server block that can respond to the request... even if the subdomain in the URL doesn't match anything.
The key to tightening that up is probably writing a good default server block... which I'm still working on.
For example, I have http://example.com going to the default, but in the scenario I described above, with server blocks for alpha.example.com and beta.example.com, keying in http://x.example.com doesn't land at my default.
So some work to do there. Advice welcome.
(Edit: I'm pretty sure that a catch-all default server for anyrandomcharacters.example.com can't happen on its own. There would have to be DNS routing the subdomain to your server before nginx can do anything with it. Maybe wildcard DNS can serve that purpose.)
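Once DNS (e.g. a wildcard record) does point a stray subdomain at the box, a default_server block will catch any Host header that no other server_name matches. A minimal sketch:

server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    # 444 is nginx's "close the connection without responding"
    return 444;
}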

Nginx not executing .pl extensions. It downloads the file instead [duplicate]

This question already has answers here: How to run CGI scripts on Nginx (below).

nginx reverse proxy to a REST service alternates 200 and 404 responses for the same URI

the nginx.conf:
server {
    listen 8080;
}

server {
    listen 80;
    server_name localhost;

    location / {
        root /test/public;
        index index.html index.htm;
    }

    location /api {
        proxy_pass http://localhost:8080;
    }

    error_page 500 502 503 504 /50x.html;
    location = /50x.html {
        root html;
    }
}
The request and response headers are almost plain; no auth/session/cache parameters are involved.
For the same URI, the first request returns successfully, while the second returns 404, and so on.
I've tried disabling proxy buffering, but it has no effect.
I'm 99.99% sure you have IPv6 enabled. In that case localhost resolves to two IP addresses, 127.0.0.1 and [::1], and nginx balances requests between them.
http://nginx.org/r/proxy_pass:
If a domain name resolves to several addresses, all of them will be used in a round-robin fashion.
On the other hand, you have a listen 8080; directive that tends to listen only on IPv4 addresses (this depends on the OS, nginx version, and other environment details).
You could solve your problem in several ways:
use an explicit IPv4 address: proxy_pass http://127.0.0.1:8080;
use an explicit IPv4 and IPv6 listen: listen [::]:8080 ipv6only=off;
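In config form, either one-line change should stop the alternating 404s (a sketch of the two options just described):

# Option 1: pin the upstream to IPv4 so 'localhost' can't
# round-robin between 127.0.0.1 and [::1]
location /api {
    proxy_pass http://127.0.0.1:8080;
}

# Option 2: make the backend listen on both IPv4 and IPv6
server {
    listen [::]:8080 ipv6only=off;
}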
I observed the same problem in a Docker environment, but the reason was independent of nginx. I had just made a stupid copy-paste mistake.
The setting:
I deployed the Docker containers via several docker-compose files, giving the following structure:
API-Gateway-Container based on nginx which references to
Webserver 1 based on nginx and
Webserver 2 based on nginx
Each of them has its own Dockerfile and docker-compose file. Because the structure of the compose files for Webserver1 and Webserver2 is very similar, I copied it and replaced the container name and some other stuff. So far so good. Starting and stopping the containers was no problem, and watching them with docker container ls showed no abnormalities. Accessing Webserver1 and Webserver2 by http://localhost:<Portnumber for server> was no problem, but accessing Webserver1 through the API gateway led to alternating 200 and 404 responses, while Webserver2 worked fine.
After days of debugging I found the problem: as I mentioned, I copied the docker-compose file from Webserver1 for Webserver2, and while I replaced the container name, I forgot to replace the service name. My docker-compose file starts like this:
version: '3'
services:
  webserver1:
    image: 'nginx:latest'
    container_name: webserver2
    ...
This constellation also leads to the described behavior.
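The fix was simply to make the service name unique again in the second compose file (a sketch using the names from the example above):

version: '3'
services:
  webserver2:
    image: 'nginx:latest'
    container_name: webserver2
    ...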
Hope someone can save some days or hours by reading this post ;-)
André
Well, in my case the problem was pretty straightforward. I had about 15 server blocks, and the port that I had set up for my Node.js proxy_pass was already being used in some old server block hiding in my enabled-sites directory. So nginx was alternating between proxying to the old server, which was not running, and the one I had just started.
So I just grepped for the port number in the directory and found two instances. Changed my port number and the problem was fixed.
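A quick way to spot such a clash is to grep the enabled-sites directory for the port (path assumes the stock Debian/Ubuntu layout; the port number here is illustrative):
grep -rn '8080' /etc/nginx/sites-enabled/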

Replace image and javascript absolute paths through proxy_redirect in nginx

I have a scenario as follows.
Nginx is being used as a reverse proxy to an Apache server listening on port 8080; nginx is running on port 80. There is a WSGI application being run by the Apache server.
Now, I have added a proxy_pass to the nginx configuration such that whatever requests come to localhost/ (nginx's port is the default port 80) get proxied to localhost:8080.
Here is an excerpt from the nginx conf file:
server {
    listen 80;
    proxy_set_header X-Real-IP $remote_addr;
    proxy_set_header X-Forwarded-For $remote_addr;
    proxy_set_header Host $host;

    location <app_name> {
        proxy_pass http://localhost:8080/;
        proxy_redirect http://localhost/ http://localhost:8080/;
    }
}
I have added the proxy_redirect to take care of any redirects from the Apache server, so that any request mapping to http://localhost/<something> gets redirected to http://localhost:8080/<something>, because the application's resources are available under port 8080.
Now, the problem is that the WSGI application generates HTML in which the image and JavaScript paths are absolute, like /img/image.png and /js/javascript.js. These are part and parcel of the HTML and involve no redirect with a complete http:// prefix, so the browser tries to fetch the images from localhost/img instead of localhost:8080/img.
A dirty workaround could be to define the /img and /js directories as separate "location" blocks in the server config, each with its own proxy_pass, as sketched below. But there could be many such directories, and maintaining them all could become a headache.
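For illustration, that workaround would look something like this (a sketch using the question's /img and /js prefixes):

location /img/ {
    proxy_pass http://localhost:8080;
}
location /js/ {
    proxy_pass http://localhost:8080;
}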
Is there a cleaner way of doing this?
This is in reference to fixing the graphite issue "Apache cannot serve graphite-web from URLs other than /".
In cases where the proxied application (the Graphite app in this case) can't be configured as a "slave" app, it is better to dedicate a whole subdomain to the application.
Or you may try the ngx_http_sub_module which:
…modifies a response by replacing one specified string by another.
As an example, this code will change every ':/localhost:8080/' to ':/localhost/app_name/':
location <app_name> {
    # ...
    sub_filter ':/localhost:8080/' ':/localhost/app_name/';
    # replace every occurrence, not just the first (the default)
    sub_filter_once off;
}
Note that:
This module is not built by default; it should be enabled with the --with-http_sub_module configuration parameter.
But in most package systems nginx is built with all its optional modules already. Just check whether your version has the sub_module built in (nginx -V prints to stderr, hence the redirect):
nginx -V 2>&1 | grep sub_module
In my case, for nginx from Homebrew, it gives output like this.

How to run CGI scripts on Nginx

I have a problem setting up CGI scripts to run on Nginx. So far I've found http://wiki.nginx.org/SimpleCGI, but the problem is that I can't make the Perl script run as a service, so that it runs in the background and starts automatically after a restart.
Do you have any ideas? I'm running CentOS 5.
I've found some solutions here, but I couldn't integrate the code given there with this Perl script.
I'm completely new to Perl, please help me.
Thanks
Nginx doesn't have native CGI support (it supports FastCGI instead). The typical solution is to run your Perl script as a FastCGI process and edit the nginx config file to redirect requests to the FastCGI process. This is quite a complex solution if all you want to do is run a CGI script.
Do you have to use nginx for this? If all you want to do is execute some Perl CGI scripts, consider using Apache or Lighttpd, as they come with CGI modules that process CGI scripts natively and don't require the script to run as a separate process. To do this, install the web server and edit its config file to load the CGI module. For Lighttpd, you will need to add a line in the config file to enable processing of CGI files, then put the CGI files into the cgi-bin folder.
Install another web server (Apache, Lighttpd) that runs on a different port, then proxy your CGI requests to that web server with nginx.
You just need to add this to your nginx configuration, after installing the other web server on port 8080:
location /cgi-bin {
    proxy_pass http://127.0.0.1:8080;
}
Take a look at Nginx Location Directive Explained for more details.
Nginx is a web server. You need to use an application server for your task, such as uWSGI. It can talk to nginx using its native, very efficient binary protocol, called uwsgi.
I found this hack using FastCGI to be a little nicer than running another web server. http://nginxlibrary.com/perl-fastcgi/
I found this: https://github.com/ruudud/cgi It says:
===
On Ubuntu: apt-get install nginx fcgiwrap
On Arch: pacman -S nginx fcgiwrap
Example Nginx config (Ubuntu: /etc/nginx/sites-enabled/default):
server {
    listen 80;
    server_name localhost;
    access_log /var/log/nginx/access.log;

    location / {
        root /srv/static;
        autoindex on;
        index index.html index.htm;
    }

    location ~ ^/cgi {
        root /srv/my_cgi_app;
        rewrite ^/cgi/(.*) /$1 break;
        include fastcgi_params;
        fastcgi_pass unix:/var/run/fcgiwrap.socket;
        fastcgi_param SCRIPT_FILENAME /srv/my_cgi_app$fastcgi_script_name;
    }
}
Change the root and fastcgi_param lines to a directory containing CGI scripts, e.g. the cgi-bin/ dir in this repository.
If you are a control freak and run fcgiwrap manually, be sure to change fastcgi_pass accordingly. The path listed in the example is the default in Ubuntu when using the out-of-the-box fcgiwrap setup.
===
I'm about to try it.
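For reference, if you do run fcgiwrap by hand on a TCP port instead of the packaged Unix socket, only the pass line in the config above changes to match (a sketch; the port number is illustrative):
fastcgi_pass 127.0.0.1:9001;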