How to redirect URL with HAProxy - haproxy

I need to redirect www.foo.com and foo.com to www.bar.com in HAProxy. This is my configuration:
frontend http-in
    bind *:80
    acl bar.com hdr(host) -i www.bar.com
    ...
    use_backend bar.com_cluster if bar.com
    ...
    redirect prefix http://foo.com code 301 if { hdr(host) -i www.bar.com }
    redirect prefix http://www.foo.com code 301 if { hdr(host) -i www.bar.com }
    ...

backend bar.com_cluster
    balance roundrobin
    option httpclose
    option forwardfor
    server bar 10.0.0.1:80 check
I have tried with redirect prefix but it doesn't work. Any ideas?

Swap the hostnames: the condition should match the host you are redirecting from (foo.com), and the redirect prefix should be the destination (www.bar.com):
redirect prefix http://www.bar.com code 301 if { hdr(host) -i foo.com }
redirect prefix http://www.bar.com code 301 if { hdr(host) -i www.foo.com }
instead of
redirect prefix http://foo.com code 301 if { hdr(host) -i www.bar.com }
redirect prefix http://www.foo.com code 301 if { hdr(host) -i www.bar.com }
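Putting it together, the frontend from the question would then look like this (a minimal sketch based only on the configuration shown above; the elided lines are kept as "..."):

```haproxy
frontend http-in
    bind *:80

    # Redirect both foo.com variants to www.bar.com, preserving the request path
    redirect prefix http://www.bar.com code 301 if { hdr(host) -i foo.com }
    redirect prefix http://www.bar.com code 301 if { hdr(host) -i www.foo.com }

    acl bar.com hdr(host) -i www.bar.com
    ...
    use_backend bar.com_cluster if bar.com
```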

Related

How to apply HTTPS in a Reverse Proxy Multidomain in Docker

I am working on a project in which we decided to run several web servers in Docker behind a multi-domain reverse proxy (all requests go to the same machine, and the proxy forwards each request to the web container it belongs to). All of this works correctly, but when securing it with HTTPS (using our own certificates), the proxy is not able to perform the redirection (I have not set up a DNS server yet, so I am using the /etc/hosts file for name resolution).
Since making this work with our real structure is somewhat complex, I have put together a simple example, which also fails to redirect to HTTPS.
Here is the structure:
And here are the files:
reverse-proxy_simple/docker-compose.yml
version: "3.2"
services:
  proxy:
    image: nginx
    container_name: proxy_examples
    ports:
      - 80:80
      - 443:443
    volumes:
      - ./confProxy/default.conf:/etc/nginx/conf.d/default.conf
      - ./confProxy/ssl:/etc/nginx/certs/
      - ./confProxy/includes:/etc/nginx/includes/
      - /var/run/docker.sock:/tmp/docker.sock:ro
    networks:
      - examples
  example1.com:
    image: php:7-apache
    container_name: example1.com
    ports:
      - 8081:443
    volumes:
      - ./example1/sites-available:/etc/apache2/sites-available/
      - ./example1/example1.com:/var/www/html/
      - ./example1/certs:/etc/ssl/certs/
    networks:
      examples:
        ipv4_address: 192.168.1.10
  example2.com:
    image: php:7-apache
    container_name: example2.com
    ports:
      - 8082:443
    volumes:
      - ./example2/sites-available:/etc/apache2/sites-available/
      - ./example2/example2.com:/var/www/html/
      - ./example2/certs:/etc/ssl/certs/
    networks:
      examples:
        ipv4_address: 192.168.1.20
networks:
  examples:
    ipam:
      config:
        - subnet: 192.168.1.0/24
reverse-proxy_simple/confProxy (directory):
reverse-proxy_simple/confProxy/default.conf
# web example1 config.
server {
    listen 80;
    listen 443 ssl http2;
    server_name example1.com;

    # Path for SSL
    ssl_certificate /etc/nginx/certs/certificate.crt;
    ssl_certificate_key /etc/nginx/certs/certificate.key;
    ssl_trusted_certificate /etc/nginx/certs/certificate.ca.crt;
    include /etc/nginx/includes/ssl.conf;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_pass https://example1.com/;
        proxy_read_timeout 600;
        proxy_redirect http://example1.com https://example1.com;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}

# web example2 config.
server {
    listen 80;
    listen 443 ssl http2;
    server_name example2.com;

    # Path for SSL
    ssl_certificate /etc/nginx/certs/certificate.crt;
    ssl_certificate_key /etc/nginx/certs/certificate.key;
    ssl_trusted_certificate /etc/nginx/certs/certificate.ca.crt;
    include /etc/nginx/includes/ssl.conf;

    location / {
        include /etc/nginx/includes/proxy.conf;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_headers_hash_max_size 512;
        proxy_headers_hash_bucket_size 128;
        proxy_pass https://example2.com/;
        proxy_read_timeout 600;
        proxy_redirect http://example2.com https://example2.com;
    }

    access_log off;
    error_log /var/log/nginx/error.log error;
}
reverse-proxy_simple/confProxy/includes (directory):
reverse-proxy_simple/confProxy/includes/proxy.conf
proxy_set_header Host $host;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $scheme;
proxy_buffering off;
proxy_request_buffering off;
proxy_http_version 1.1;
proxy_intercept_errors on;
reverse-proxy_simple/confProxy/includes/ssl.conf
ssl_session_timeout 1d;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
ssl_ciphers 'ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA:ECDHE-RSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-RSA-AES256-SHA256:DHE-RSA-AES256-SHA:ECDHE-ECDSA-DES-CBC3-SHA:ECDHE-RSA-DES-CBC3-SHA:EDH-RSA-DES-CBC3-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:DES-CBC3-SHA:!DSS';
ssl_prefer_server_ciphers on;
reverse-proxy_simple/confProxy/ssl (directory)
certificate.crt
certificate.key
certificate.ca.crt
reverse-proxy_simple/example1 (directory)
reverse-proxy_simple/example1/certs (directory)
certificate.crt
certificate.key
certificate.ca.crt
reverse-proxy_simple/example1.com (directory)
error.log
requests.log
public_html (directory)
index.html
reverse-proxy_simple/sites-available (directory)
000-default.conf
<VirtualHost *:80>
    ServerName example1.com
    DocumentRoot /var/www/html/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <Directory /var/www/html/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName example1.com
        DocumentRoot /var/www/html/public_html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        <Directory /var/www/html/public_html>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        SSLCertificateFile /etc/ssl/certs/certificate.crt
        SSLCertificateKeyFile /etc/ssl/certs/certificate.key
        SSLEngine on
    </VirtualHost>
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
reverse-proxy_simple/example2 (directory)
reverse-proxy_simple/example2/certs (directory)
certificate.crt
certificate.key
certificate.ca.crt
reverse-proxy_simple/example2.com (directory)
error.log
requests.log
public_html (directory)
index.html
reverse-proxy_simple/sites-available (directory)
000-default.conf
<VirtualHost *:80>
    ServerName example2.com
    DocumentRoot /var/www/html/public_html
    ErrorLog ${APACHE_LOG_DIR}/error.log
    CustomLog ${APACHE_LOG_DIR}/access.log combined
    <Directory /var/www/html/public_html>
        Options Indexes FollowSymLinks MultiViews
        AllowOverride All
        Order allow,deny
        allow from all
    </Directory>
</VirtualHost>

<IfModule mod_ssl.c>
    <VirtualHost *:443>
        ServerName example2.com
        DocumentRoot /var/www/html/public_html
        ErrorLog ${APACHE_LOG_DIR}/error.log
        CustomLog ${APACHE_LOG_DIR}/access.log combined
        <Directory /var/www/html/public_html>
            Options Indexes FollowSymLinks MultiViews
            AllowOverride All
            Order allow,deny
            allow from all
        </Directory>
        SSLCertificateFile /etc/ssl/certs/certificate.crt
        SSLCertificateKeyFile /etc/ssl/certs/certificate.key
        SSLEngine on
    </VirtualHost>
</IfModule>

# vim: syntax=apache ts=4 sw=4 sts=4 sr noet
/etc/hosts
192.168.1.10 example1.com
192.168.1.20 example2.com
Let me know if you know what's wrong here!
Thanks ;)

HTTPS not working in nginx reverse proxy (docker compose)

I have a Rails app, a MySQL db, and I'm trying to configure a reverse proxy server using nginx. The HTTP connection works fine, but no matter what I try, the HTTPS connection won't work. The nginx server just won't listen on 443. I've tried many solutions (e.g. 1, 2, 3) but none of them worked.
I use our own certificates rather than Let's Encrypt or similar options.
docker-compose.yml:
version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
      - /home/ssl:/etc/nginx/certs
      - /home/log/nginx:/var/log/nginx
    environment:
      - DEFAULT_HOST=app.test
  db:
    container_name: db
    image: mysql:8.0
    restart: always
    .
    .
    .
    ports:
      - "3306:3306"
  app:
    container_name: app
    .
    .
    .
    environment:
      - VIRTUAL_HOST=app.test
      - VIRTUAL_PROTO=https
      - HTTPS_METHOD=redirect
      - CERT_NAME=app.test
Running docker exec -it proxy ls -l /etc/nginx/certs shows the certificates are mounted:
total 8
-rw-rw-r-- 1 1000 1000 1391 Nov 8 14:36 app.test.crt
-rw-rw-r-- 1 1000 1000 1751 Nov 8 14:29 app.test.key
Running docker exec -it proxy cat /etc/nginx/conf.d/default.conf gives:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
  default $http_x_forwarded_proto;
  '' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
  default $http_x_forwarded_port;
  '' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
  default upgrade;
  '' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
  default off;
  https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
                 '"$request" $status $body_bytes_sent '
                 '"$http_referer" "$http_user_agent" '
                 '"$upstream_addr"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
  server_name _; # This is just an invalid value which will never trigger on a real hostname.
  server_tokens off;
  listen 80;
  access_log /var/log/nginx/access.log vhost;
  return 503;
}
# app.test
upstream app.test {
  ## Can be connected with "test_default" network
  # app
  server 192.168.176.4:3000;
}
server {
  server_name app.test;
  listen 80 default_server;
  access_log /var/log/nginx/access.log vhost;
  location / {
    proxy_pass http://app.test;
  }
}
As you can see, no 443 server blocks were created. When I try to reach the site, I get an ERR_CONNECTION_REFUSED message in Chrome, and nothing is recorded in either access.log or error.log.
Any ideas? I've spent the last three days trying to crack it.
The solution appeared here: I needed to add CERT_NAME to the proxy environment and mount the certificates directory into the app container as well:
docker-compose.yml:
version: "3"
services:
  proxy:
    image: jwilder/nginx-proxy
    container_name: proxy
    restart: always
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/tmp/docker.sock
      - /home/ssl:/etc/nginx/certs
      - /home/log/nginx:/var/log/nginx
    environment:
      - DEFAULT_HOST=app.test
      - CERT_NAME=app.test
  db:
    container_name: db
    image: mysql:8.0
    restart: always
    .
    .
    .
    ports:
      - "3306:3306"
  app:
    container_name: app
    .
    .
    .
    volumes:
      .
      .
      .
      - /home/ssl:/etc/ssl/certs:ro
    environment:
      - VIRTUAL_HOST=app.test
      - CERT_NAME=app.test

Kubernetes: using container as proxy

I have the following pod setup:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-test
  namespace: test
spec:
  containers:
  - name: container-a
    image: <Image>
    imagePullPolicy: Always
    ports:
    - name: http-port
      containerPort: 8083
  - name: container-proxy
    image: <Image>
    ports:
    - name: server
      containerPort: 7487
      protocol: TCP
  - name: container-b
    image: <Image>
I exec into container-b and execute the following curl request:
curl --proxy localhost:7487 -X POST http://localhost:8083/
For some reason, http://localhost:8083/ is called directly and the proxy is ignored. Can someone explain why this happens?
Environment
I replicated the scenario on kubeadm and GCP GKE kubernetes clusters to see if there is any difference - no, they behave the same, so I assume AWS EKS should behave the same too.
I created a pod with 3 containers within:
apiVersion: v1
kind: Pod
metadata:
  name: proxy-pod
spec:
  containers:
  - image: ubuntu # client where connection will go from
    name: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: proxy-container # proxy - that's obvious
    image: ubuntu
    command: ['bash', '-c', 'while true ; do sleep 60; done']
  - name: server # regular nginx server which listens to port 80
    image: nginx
For this test setup I installed the squid proxy on proxy-container (what squid is and how to install it). By default it listens on port 3128.
curl was also installed on ubuntu, the client container (with the net-tools package as a bonus, since it provides netstat).
Tests
Note!
I used 127.0.0.1 instead of localhost because squid has some issues resolving localhost, and I didn't find an easy/fast solution.
curl is used with -v flag for verbosity.
We have proxy on 3128 and nginx on 80 within the pod:
# netstat -tulpn
Active Internet connections (only servers)
Proto Recv-Q Send-Q Local Address Foreign Address State PID/Program name
tcp 0 0 0.0.0.0:80 0.0.0.0:* LISTEN -
tcp 0 0 0.0.0.0:3128 0.0.0.0:* LISTEN -
tcp6 0 0 :::80 :::* LISTEN -
curl directly:
# curl 127.0.0.1 -vI
* Trying 127.0.0.1:80... # connection goes directly to port 80 which is expected
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
curl via proxy:
# curl --proxy 127.0.0.1:3128 127.0.0.1:80 -vI
* Trying 127.0.0.1:3128... # connecting to proxy!
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connected to proxy
> HEAD http://127.0.0.1:80/ HTTP/1.1 # going further to nginx on `80`
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
squid logs:
# cat /var/log/squid/access.log
1635161756.048 1 127.0.0.1 TCP_MISS/200 958 GET http://127.0.0.1/ - HIER_DIRECT/127.0.0.1 text/html
1635163617.361 0 127.0.0.1 TCP_MEM_HIT/200 352 HEAD http://127.0.0.1/ - HIER_NONE/- text/html
NO_PROXY
The NO_PROXY environment variable might be set; by default it's empty.
I added it manually:
# export NO_PROXY=127.0.0.1
# printenv | grep -i proxy
NO_PROXY=127.0.0.1
Now a curl request via the proxy looks like this:
# curl --proxy 127.0.0.1:3128 127.0.0.1 -vI
* Uses proxy env variable NO_PROXY == '127.0.0.1' # curl detects NO_PROXY envvar
* Trying 127.0.0.1:80... # and ignores the proxy, connection goes directly
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 80 (#0)
> HEAD / HTTP/1.1
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
It's possible to override the NO_PROXY envvar when executing the curl command by using the --noproxy flag.
--noproxy no-proxy-list
Comma-separated list of hosts which do not use a proxy, if one is specified. The only wildcard is a single *
character, which matches all hosts, and effectively disables the
proxy. Each name in this list is matched as either a domain which
contains the hostname, or the hostname itself. For example, local.com
would match local.com, local.com:80, and www.local.com, but not
www.notlocal.com. (Added in 7.19.4).
Example:
# curl --proxy 127.0.0.1:3128 --noproxy "" 127.0.0.1 -vI
* Trying 127.0.0.1:3128... # connecting to proxy as it was supposed to
* TCP_NODELAY set
* Connected to 127.0.0.1 (127.0.0.1) port 3128 (#0) # connection to proxy is established
> HEAD http://127.0.0.1/ HTTP/1.1 # connection to nginx on port 80
> Host: 127.0.0.1
> User-Agent: curl/7.68.0
> Accept: */*
This proves that the proxy works with localhost.
Another possibility is that something is incorrectly configured in the proxy used in the question. You can create this pod, install squid and curl into both containers, and try it yourself.

Connect nginx on host with wsgi unicorn inside docker container

Starting to dockerize my Rails application, I am facing the following problem:
My idea was to have every web application running with its WSGI server and dependencies in a separate Docker container, and the databases also running in separate containers, using docker-compose to set it all up.
Outside the containers, nginx routes traffic to the specific container via unix sockets, depending on the domain. (I didn't want nginx in a container, to reduce complexity and to avoid having multiple nginx instances running in multiple containers for multiple webapps.)
Before dockerizing, my WSGI server and nginx were connected via unix sockets, but this no longer works. Only connecting them via ports works now, which I would like to avoid.
Is there any way to connect nginx on the host via unix sockets with the WSGI server inside a container? If not, what is best practice here?
My approach was to use a shared volume as the location for the socket file, but nginx can't access the socket created by the unicorn WSGI server:
Socket created by unicorn:
srwxrwxrwx 1 root root 0 Nov 14 14:53 unicorn.sock=
Nginx error:
*2 connect() to unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock failed (13: Permission denied) while connecting to upstream
Nginx sites-available/myapp:
upstream myapp {
    # Path to Unicorn SOCK file, as defined previously
    server unix:/ruby-webapps/myapp/shared/sockets/unicorn.sock fail_timeout=0;
}

server {
    listen 80 default_server;
    ...
}

server {
    listen 443 ssl http2;
    listen [::]:443 ssl http2;
    server_name myapp.de www.myapp.de;
    root /ruby-webapps/myapp;
    try_files $uri/index.html $uri @MyApp;
    location @MyApp {
        proxy_pass http://myapp;
        #proxy_pass http://127.0.0.1:3000;
        proxy_set_header X-Forwarded-For https;
        proxy_redirect off;
    }
}
docker-compose.yml:
version: '2'
services:
  postgresmyapp:
    image: postgres
    env_file: .env
  myapp:
    build: .
    env_file: .env
    command: supervisord -c /myapp/unicorn_supervisord.conf
    volumes:
      - .:/myapp
    ports:
      - "3000:3000"
    links:
      - postgresmyapp
config/unicorn.rb:
app_dir = File.expand_path("../..", __FILE__)
shared_dir = "#{app_dir}/shared"
working_directory app_dir
rails_env = ENV['RAILS_ENV'] || 'production'
# Set unicorn options
worker_processes 2
preload_app true
timeout 30
# Set up socket location
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64
#listen(3000, backlog: 64)
stderr_path "#{shared_dir}/log/unicorn.stderr.log"
stdout_path "#{shared_dir}/log/unicorn.stdout.log"
pid "#{shared_dir}/pids/unicorn.pid"
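One thing worth checking for the "(13: Permission denied)" error (an assumption on my part, not something from the original question): even when the socket file itself is world-writable, the nginx worker user on the host still needs the execute bit on every parent directory of the socket path. Unicorn's listen call also accepts a :umask option for unix sockets, which makes the intended socket permissions explicit in config/unicorn.rb:

```ruby
# Sketch: create the socket world-accessible (:umask => 0000 yields srwxrwxrwx)
# so the host's nginx worker (e.g. www-data) can connect across the shared
# volume. Each parent directory of shared/sockets also needs the execute bit
# for the nginx user.
listen "#{shared_dir}/sockets/unicorn.sock", :backlog => 64, :umask => 0000
```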

Atomic deployment: after an Nginx reload, PHP-FPM is still pointing to the old webroot

I'm trying to make atomic deploys with Nginx and PHP5.5-FPM with Opcache.
The idea is just to change the webroot in nginx.conf and then just run
nginx reload
What I'm expecting is that Nginx will wait for the current requests to end and then reload itself, passing the new webroot path to PHP-FPM, but it's not working: PHP-FPM is still loading the PHP files from the old directory.
I'm using the (undocumented) $realpath_root variable in Nginx in order to get the real path rather than the symlink (/prod/current).
The technique is documented here: http://codeascraft.com/2013/07/01/atomic-deploys-at-etsy/
Debugging Nginx, I can clearly see that it is passing the new (real) path:
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script var: "/www/htdocs/current/web"
2014/09/23 17:13:22 [debug] 26234#0: *1742 posix_memalign: 00000000010517A0:4096 #16
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script copy: "SCRIPT_FILENAME"
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script var: "/www/htdocs/prod/releases/20140923124417/web"
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script var: "/index.php"
2014/09/23 17:13:22 [debug] 26234#0: *1742 fastcgi param: "SCRIPT_FILENAME: /www/htdocs/prod/releases/20140923124417/web/app.php"
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script copy: "DOCUMENT_ROOT"
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script var: "/www/htdocs/prod/releases/20140923124417/web"
2014/09/23 17:13:22 [debug] 26234#0: *1742 fastcgi param: "DOCUMENT_ROOT: /www/htdocs/prod/releases/20140923124417/web"
2014/09/23 17:13:22 [debug] 26234#0: *1742 http script copy: "APPLICATION_ENV"
To make it work I have to run a
php-fpm reload
but then I'm losing some requests:
'recv() failed (104: Connection reset by peer) while reading response header from upstream'
This is the nginx file I'm using:
server {
    listen 26023;
    server_name prod.example.com;
    client_max_body_size 20m;
    client_header_timeout 1200;
    client_body_timeout 1200;
    send_timeout 1200;
    keepalive_timeout 1200;
    access_log /var/logs/prod/nginx/prod.access.log main;
    error_log /var/logs/prod/nginx/prod.error.log;

    set $root_location /var/www/htdocs/prod/current/web;
    root $root_location;
    try_files $uri $uri/ /index.php?$args;
    index index.php;

    location ~ \.php$ {
        fastcgi_pass unix:/var/run/php5-fpm/prod.sock;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_connect_timeout 1200;
        fastcgi_send_timeout 1200;
        fastcgi_read_timeout 1200;
        fastcgi_ignore_client_abort on;
        fastcgi_param SCRIPT_FILENAME $realpath_root$fastcgi_script_name;
        fastcgi_param DOCUMENT_ROOT $realpath_root;
        fastcgi_param APPLICATION_ENV live;
        fastcgi_param HTTPS $thttps;
    }
}
And this is the FPM pool status:
:~$ curl http://127.0.0.1/fpm_status_prod
pool: prod
process manager: dynamic
start time: 23/Sep/2014:22:42:34 +0400
start since: 1672
accepted conn: 446
listen queue: 0
max listen queue: 0
listen queue len: 0
idle processes: 49
active processes: 1
total processes: 50
max active processes: 2
max children reached: 0
slow requests: 0
Any suggestions?
I fixed the issue: I was also using APC for the classloader, and it wasn't being cleared.
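For reference, the opcode and user caches can usually be reset without a full PHP-FPM reload by talking to the pool over FastCGI, for example with the cachetool utility (this is not from the original post, and the exact subcommand should be checked against cachetool's documentation):

```shell
# Reset the opcode cache of the prod pool over its FastCGI socket
# (socket path taken from the nginx config above).
php cachetool.phar opcache:reset --fcgi=/var/run/php5-fpm/prod.sock
```

Because the reset happens inside a normal FastCGI request, no worker processes are restarted and in-flight requests are not dropped.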