HTTPS not working in nginx reverse proxy (docker compose) - docker-compose

I have a Rails app, a MySQL DB, and I'm trying to configure a reverse proxy server using nginx. The HTTP connection works fine, but no matter what I try, the HTTPS connection won't work: the nginx server just won't listen on 443. I've tried many solutions (e.g. 1, 2, 3) but none of them worked.
I use our own certificates rather than Let's Encrypt or similar options.
docker-compose.yml:
version: "3"
services:
proxy:
image: jwilder/nginx-proxy
container_name: proxy
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock
- /home/ssl:/etc/nginx/certs
- /home/log/nginx:/var/log/nginx
environment:
- DEFAULT_HOST=app.test
db:
container_name: db
image: mysql:8.0
restart: always
.
.
.
ports:
- "3306:3306"
app:
container_name: app
.
.
.
environment:
- VIRTUAL_HOST=app.test
- VIRTUAL_PORTO=https
- HTTPS_METHOD=redirect
- CERT_NAME=app.test
Running docker exec -it proxy ls -l /etc/nginx/certs shows the certificates are mounted:
total 8
-rw-rw-r-- 1 1000 1000 1391 Nov 8 14:36 app.test.crt
-rw-rw-r-- 1 1000 1000 1751 Nov 8 14:29 app.test.key
Running docker exec -it proxy cat /etc/nginx/conf.d/default.conf:
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header based on $proxy_x_forwarded_proto
map $proxy_x_forwarded_proto $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent" '
'"$upstream_addr"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
server_tokens off;
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
# app.test
upstream app.test {
## Can be connected with "test_default" network
# app
server 192.168.176.4:3000;
}
server {
server_name app.test;
listen 80 default_server;
access_log /var/log/nginx/access.log vhost;
location / {
proxy_pass http://app.test;
}
}
As you can see, no 443 server blocks are created. When I try to reach the site over HTTPS, I get an ERR_CONNECTION_REFUSED message in Chrome, but nothing is recorded in either access.log or error.log.
Any ideas? I've spent the last three days trying to crack it.

The solution appeared here: I needed to add CERT_NAME to the proxy environment and mount the certificates directory into the app container as well:
docker-compose.yml:
version: "3"
services:
proxy:
image: jwilder/nginx-proxy
container_name: proxy
restart: always
ports:
- "80:80"
- "443:443"
volumes:
- /var/run/docker.sock:/tmp/docker.sock
- /home/ssl:/etc/nginx/certs
- /home/log/nginx:/var/log/nginx
environment:
- DEFAULT_HOST=app.test
- CERT_NAME=app.test
db:
container_name: db
image: mysql:8.0
restart: always
.
.
.
ports:
- "3306:3306"
app:
container_name: app
.
.
.
volumes:
.
.
.
- /home/ssl:/etc/ssl/certs:ro
environment:
- VIRTUAL_HOST=app.test
- CERT_NAME=app.test
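A quick way to double-check that the proxy picked up the certificates (a minimal sketch; it assumes app.test resolves to the Docker host, e.g. via /etc/hosts, and uses -k because the certificate is not from a public CA):
docker exec -it proxy grep -A2 'listen 443' /etc/nginx/conf.d/default.conf   # the vhost should now have a 443 server block
curl -vk https://app.test/                                                   # the TLS handshake should succeed and return the Rails app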

Related

Docker compose connect back and frontend

Problem: connecting my backend and frontend together using Docker Compose (NestJS and Next.js). I need them to run as a single cluster on AWS. Locally it doesn't work the same way either...
Everything worked with separate Docker Compose setups (a backend created on AWS and my frontend running locally against the created endpoints), but together... I have no idea how to solve it. I have tried multiple solutions found on the internet.
Connecting using the Docker host on the front end:
const fetcher = (url: string) => fetch(url).then((res) => res.json())
useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data)
  }
})
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, or if I try localhost it turns into a CORS issue.
Inside the API route in Next.js too, though there I don't get the CORS issue:
//
try {
  const data = await axios.get('http://host.docker.internal:3000/grandetabela')
    .then((resp: any) => {
      return resp
    })
  res.status(200).json(data)
} catch (error) {
  console.error(error)
  res.status(502).json({ error: 'error on server request' })
}
If I try to use localhost as the option, it causes another problem, AxiosError: Request failed, and if I try some other API from the internet I get a response normally.
To give some idea of what I tried, look at my docker-compose... I've tried using the IPs... I can ping inside Docker, but I can't manage to reach host:3000, for example, to query my endpoints.
version: '3.1'
services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1
  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1
  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build \
      npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1
  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1
  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build \
      npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1
# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec ... bash), everything works as intended too. I can get a response from my backend...
Is there something I missed? .env?
You have a similar/same issue as a few others I forwarded the same SO answer to.
But I'll quote it here:
I am no expert on MERN (we mainly run Angular & .NET), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the REST of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a WEB application is there to only serve your browser (client) files and then the JS and the logic are executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ... That proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container and then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy)
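Either way, the important part is that the URL the front end fetches must be reachable from the visitor's browser, not from inside the containers. A minimal sketch of that idea in the Next.js front end (NEXT_PUBLIC_API_URL is an assumed variable you would define yourself; fetcher and useSWR are from the question above):
// The base URL must resolve in the *browser*: the proxy's public domain or the host's published port,
// never host.docker.internal, which only exists inside containers.
const API_BASE = process.env.NEXT_PUBLIC_API_URL ?? 'http://localhost:3000'
const fetcher = (url: string) => fetch(url).then((res) => res.json())
useSWR(`${API_BASE}/grandetabela`, fetcher)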
UPDATE 2022-10-05
Added the nginx config from the utility server, showing how requests get from the nginx running inside the container to the other containers on the same network.
Nginx config:
server_tokens off;
# ----------------------------------------------------------------------------------------------------
upstream local-docker-verdaccio {
    server verdaccio:4873; #verdaccio is docker compose's service name and port 4873 is port on which container is listening internally
}
# ----------------------------------------------------------------------------------------------------
# ----------------------------------------------------------------------------------------------------
# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;
    # ----------------------------------------------------------------------------------------------------
    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;
    # ----------------------------------------------------------------------------------------------------
    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}
server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
# ----------------------------------------------------------------------------------------------------
And the corresponding docker-compose.yml file:
version: "3.7"
services:
proxy:
container_name: proxy
image: nginx:alpine
ports:
- "443:443"
restart: always
volumes:
- 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
- /etc/tls/:/etc/tls:ro
verdaccio:
container_name: verdaccio
depends_on:
- proxy
expose:
- "4873"
image: verdaccio/verdaccio:4
restart: always
volumes:
- d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
- 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
- ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
542b4ca1-aefe-43a8-8fb3-804b46049bab:
external: true
5fb31181-8e07-4304-9276-9da8c3a581c9:
external: true
ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
external: true
d820f373-d868-40ec-bb6b-08a99efddc06:
external: true

How to run docker and node together?

I want to run Docusaurus with Docker, using nginx and Node.
My directories look like this. Without Docker, Docusaurus works correctly.
application/
blog/
docs/
src/
static/
babel.config.js
default.conf
docker-compose.yml
docusaurus.config.js
nginx.dockerfile
nodejs.dockerfile
package.json
package-lock.json
sidebars.js
In the default.conf I have
server {
    listen 80;
    location / {
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
Inside docker-compose.yml I have
version: '2'
services:
  nodejs:
    build:
      context: .
      dockerfile: nodejs.dockerfile
    container_name: nodejs
    ports:
      - "1602:3000"
    volumes:
      - ./:/var/www/html
    networks:
      - application
  nginx:
    build:
      context: .
      dockerfile: nginx.dockerfile
    container_name: nginx
    ports:
      - "1601:80"
    volumes:
      - ./:/var/www/html
    depends_on:
      - nodejs
    networks:
      - application
networks:
  application:
    driver: bridge
Inside nginx.dockerfile I have
FROM nginx
COPY ./default.conf /etc/nginx/conf.d/
Inside nodejs.dockerfile I have
FROM node:14
WORKDIR /var/www/html
COPY ./ /var/www/html
RUN npm install
RUN npm start
EXPOSE 1602
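Two things stand out in the setup above, assuming the intent is for nginx on port 1601 to serve the Docusaurus app: the location block in default.conf sets headers but never proxies anywhere, and RUN npm start executes (and blocks) at image build time rather than starting the server when the container runs. A minimal sketch of the proxy side under those assumptions (nodejs is the compose service name, 3000 its internal port):
server {
    listen 80;
    location / {
        # "nodejs" resolves through Docker's embedded DNS on the shared "application" network
        proxy_pass http://nodejs:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }
}
In nodejs.dockerfile, CMD ["npm", "start"] instead of RUN npm start would keep the Node server running when the container starts.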

docker and nginx: How to reverse proxy multiple postgresql databases

I have a Docker Compose setup with multiple PostgreSQL databases:
version: "3.7"
services:
nginx:
image: nginx:1.18.0-alpine
ports:
- 8028:80
volumes:
- ./nginx/localhost/conf.d:/etc/nginx/conf.d
depends_on:
- webapp
networks:
- postgresql_network
postgresql:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db1
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
postgresql2:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA2
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db2
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
networks:
postgresql_network:
driver: bridge
Now I can expose a different port for each of the PostgreSQL instances and access them from the host on different ports (like localhost:5432 and localhost:5445).
But instead of that I want to handle them using different domain names:
like pg1.docker.db, port 8028 (the nginx port) and pg2.docker.db, port 8028 (the nginx port), and have nginx reverse proxy them internally to the different Docker containers postgresql and postgresql2,
and in the host /etc/hosts I will have
127.0.0.1 pg1.docker.db
127.0.0.1 pg2.docker.db
I am not sure, but something like this. Can someone say whether I am doing this right?
## WEBSERVER
upstream webapp {
    server webapp:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}
## POSTGRESQL DBS
upstream postgres {
    server postgresql:5432;
}
upstream postgres2 {
    server postgresql2:5432;
}
server {
    listen 80 so_keepalive=on;
    server_name db1.domain.my;
    proxy_pass postgres;
}
server {
    listen 80 so_keepalive=on;
    server_name db2.domain.my;
    proxy_pass postgres2;
}
I see people using both stream and http. Will the above also work, or should I do something else?
Redirecting http requests based on domain name is pretty easy.
You can use something like this:
http {
    server {
        listen 80;
        server_name pg1.docker.db;
        location / {
            ...
        }
    }
    server {
        listen 80;
        server_name pg2.docker.db;
        location / {
            ...
        }
    }
}
But the problem here is that nginx only forwards HTTP requests to the upstream servers, so unless you have a web-based interface in front of your containers, this is not going to work properly.
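For the raw PostgreSQL wire protocol you would need nginx's stream module instead of the http block, and because plain TCP carries no Host header, the two databases generally have to be distinguished by port rather than by domain name. A rough sketch under those assumptions (the listen ports 5432 and 5445 are arbitrary choices here, and a stream block must sit at the top level of nginx.conf, not in a file included inside http):
stream {
    upstream postgres1 {
        server postgresql:5432;    # compose service name on the shared network
    }
    upstream postgres2 {
        server postgresql2:5432;
    }
    server {
        listen 5432;               # publish this port from the nginx container for db1
        proxy_pass postgres1;
    }
    server {
        listen 5445;               # publish this port from the nginx container for db2
        proxy_pass postgres2;
    }
}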

Docker-compose nginx with letsencrypt -> ln: failed to create symbolic link - Not supported

Setup: Docker on OpenSuse-Server on local Intel-NUC
Here is my docker-compose.yml
version: '3.5'
services:
  proxy:
    image: jwilder/nginx-proxy:alpine
    labels:
      - "com.github.jrcs.letsencrypt_nginx_proxy_companion.nginx_proxy=true"
    container_name: nextcloud-proxy
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15
    ports:
      - 443:443
      - 80:80
    volumes:
      - ./proxy/conf.d:/etc/nginx/conf.d:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:ro
      - ./proxy/html:/usr/share/nginx/html:rw
      - ./proxy/certs:/etc/nginx/certs:rw
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    restart: unless-stopped
  letsencrypt:
    image: jrcs/letsencrypt-nginx-proxy-companion
    container_name: nextcloud-letsencrypt
    depends_on:
      - proxy
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15 #need for access the letsencrypt API
    volumes:
      - ./proxy/acme:/etc/acme.sh
      - ./proxy/certs:/etc/nginx/certs:rw
      - ./proxy/vhost.d:/etc/nginx/vhost.d:rw
      - ./proxy/html:/usr/share/nginx/html:rw
      - /etc/localtime:/etc/localtime:rw
      - /var/run/docker.sock:/tmp/docker.sock:ro
      - /var/run/docker.sock:/var/run/docker.sock:ro
    environment:
      NGINX_PROXY_CONTAINER: "nextcloud-proxy"
      DEFAULT_EMAIL: "mymail#pm.me"
    restart: unless-stopped
  db:
    image: mariadb
    container_name: nextcloud-mariadb
    networks:
      - nextcloud_network
    dns:
      - 192.168.178.15
    volumes:
      - db-data2:/var/lib/mysql:rw
      - /etc/localtime:/etc/localtime:ro
    environment:
      - MYSQL_ROOT_PASSWORD=Tstrong
      - MYSQL_PASSWORD=Tstrong
      - MYSQL_DATABASE=nextcloud
      - MYSQL_USER=nextcloud
    restart: unless-stopped
  app:
    image: nextcloud:latest
    container_name: nextcloud-app
    networks:
      - nextcloud_network
    depends_on:
      - letsencrypt
      - proxy
      - db
    dns:
      - 192.168.178.15
    #  - 8.8.8.8
    # ports:
    #   - 10000:80 -> makes the app available without nginx and ssl
    volumes:
      - nextcloud-stage:/var/www/html
      - ./app/config:/var/www/html/config
      - ./app/custom_apps:/var/www/html/custom_apps
      - ./app/data:/var/www/html/data
      - ./app/themes:/var/www/html/themes
      - /etc/localtime:/etc/localtime:ro
    environment:
      - VIRTUAL_HOST=mydomain.chickenkiller.com
      - LETSENCRYPT_HOST=mydomain.chickenkiller.com
      - LETSENCRYPT_EMAIL=mymail#pm.me
      - "ServerName=nextcloud"
    restart: unless-stopped
volumes:
  nextcloud-stage:
  db-data2:
networks:
  nextcloud_network:
    # external:
    driver: bridge
    name: nginx-proxy
And then it throws this warning/info and SSL does not work.
The application would only be available over port 80 if I opened that port on the app container, which is clearly wrong.
So is this warning an actual problem, or am I missing something else?
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.dhparam.pem': Not supported
I need to specify the DNS so the letsencrypt container is able to communicate with the API, so I pointed the Docker DNS to my local router, 192.168.178.15. Do I need this setting for the other services as well? Or is that the problem that breaks the symbolic links?
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Your cert is in /etc/acme.sh/mymail#pm.me/mydomain.chickenkiller.com/mydomain.chickenkiller.com.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Your cert key is in /etc/acme.sh/mymail#pm.me/mydomain.chickenkiller.com/mydomain.chickenkiller.com.key
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] The intermediate CA cert is in /etc/acme.sh/mymail#pm.me/mydomain.chickenkiller.com/ca.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] And the full chain certs is there: /etc/acme.sh/mymail#pm.me/mydomain.chickenkiller.com/fullchain.cer
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing cert to:/etc/nginx/certs/mydomain.chickenkiller.com/cert.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing CA to:/etc/nginx/certs/mydomain.chickenkiller.com/chain.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:19 CET 2020] Installing key to:/etc/nginx/certs/mydomain.chickenkiller.com/key.pem
nextcloud-letsencrypt | [Wed Dec 23 10:47:20 CET 2020] Installing full chain to:/etc/nginx/certs/mydomain.chickenkiller.com/fullchain.pem
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.crt': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.key': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.dhparam.pem': Not supported
nextcloud-letsencrypt | ln: failed to create symbolic link '/etc/nginx/certs/mydomain.chickenkiller.com.chain.pem': Not supported
nextcloud-letsencrypt | Reloading nginx proxy (ac49344ba0acb6026615358abf5568dc6a1df173a308a936b615fa00e413f767)...
nextcloud-letsencrypt | 2020/12/23 09:47:20 Contents of /etc/nginx/conf.d/default.conf did not change. Skipping notification ''
nextcloud-letsencrypt | 2020/12/23 09:47:20 [notice] 115#115: signal process started
nextcloud-letsencrypt | Sleep for 3600s
Nginx throws Error 503 when accessing the application from WAN using the IP (and not the DynDNS)
So local port forwarding should also be correct, right? Ports 80 and 443 are forwarded from the router to the NUC.
Using DynDNS to access the application from WAN leads to an SSL error (HSTS).
So I think it is just the connection (symbolic link) from the certificate folder to the application?
Let me know if I can provide more information/logs
Cheers
UPDATE:
Here is the NGINX config from /proxy/conf.d/default.conf
# If we receive X-Forwarded-Proto, pass it through; otherwise, pass along the
# scheme used to connect to this server
map $http_x_forwarded_proto $proxy_x_forwarded_proto {
default $http_x_forwarded_proto;
'' $scheme;
}
# If we receive X-Forwarded-Port, pass it through; otherwise, pass along the
# server port the client connected to
map $http_x_forwarded_port $proxy_x_forwarded_port {
default $http_x_forwarded_port;
'' $server_port;
}
# If we receive Upgrade, set Connection to "upgrade"; otherwise, delete any
# Connection header that may have been passed to this server
map $http_upgrade $proxy_connection {
default upgrade;
'' close;
}
# Apply fix for very long server names
server_names_hash_bucket_size 128;
# Default dhparam
ssl_dhparam /etc/nginx/dhparam/dhparam.pem;
# Set appropriate X-Forwarded-Ssl header
map $scheme $proxy_x_forwarded_ssl {
default off;
https on;
}
gzip_types text/plain text/css application/javascript application/json application/x-javascript text/xml application/xml application/xml+rss text/javascript;
log_format vhost '$host $remote_addr - $remote_user [$time_local] '
'"$request" $status $body_bytes_sent '
'"$http_referer" "$http_user_agent"';
access_log off;
ssl_protocols TLSv1.2 TLSv1.3;
ssl_ciphers 'ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384';
ssl_prefer_server_ciphers off;
resolver 127.0.0.11;
# HTTP 1.1 support
proxy_http_version 1.1;
proxy_buffering off;
proxy_set_header Host $http_host;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $proxy_connection;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header X-Forwarded-Proto $proxy_x_forwarded_proto;
proxy_set_header X-Forwarded-Ssl $proxy_x_forwarded_ssl;
proxy_set_header X-Forwarded-Port $proxy_x_forwarded_port;
# Mitigate httpoxy attack (see README for details)
proxy_set_header Proxy "";
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 80;
access_log /var/log/nginx/access.log vhost;
return 503;
}
server {
server_name _; # This is just an invalid value which will never trigger on a real hostname.
listen 443 ssl http2;
access_log /var/log/nginx/access.log vhost;
return 503;
ssl_session_cache shared:SSL:50m;
ssl_session_tickets off;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
}
# mydomain.chickenkiller.com
upstream mydomain.chickenkiller.com {
## Can be connected with "nginx-proxy" network
# nextcloud-app
server 172.23.0.5:80;
}
server {
server_name mydomain.chickenkiller.com;
listen 80 ;
access_log /var/log/nginx/access.log vhost;
include /etc/nginx/vhost.d/default;
location / {
proxy_pass http://mydomain.chickenkiller.com;
}
}
server {
server_name mydomain.chickenkiller.com;
listen 443 ssl http2 ;
access_log /var/log/nginx/access.log vhost;
return 500;
ssl_certificate /etc/nginx/certs/default.crt;
ssl_certificate_key /etc/nginx/certs/default.key;
}
I had this exact issue. For me, the volume the certificates were being saved to was a mounted file share in Azure, and those don't support symlinks out of the box.
See: https://learn.microsoft.com/en-us/azure/storage/files/storage-troubleshoot-linux-file-connection-problems.
I am using autofs but adding ",mfsymlinks" to the end of the "fstype" part worked fine once restarted.
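For reference, a sketch of what that looks like for an Azure Files share mounted via CIFS; the share, mount point and credentials file below are placeholders, mfsymlinks is the relevant option:
# /etc/fstab (or the equivalent autofs map entry): enable symlink emulation on the SMB share
//mystorageacct.file.core.windows.net/certs  /mnt/certs  cifs  vers=3.0,credentials=/etc/smbcredentials/mystorageacct.cred,serverino,mfsymlinks  0  0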

Failing to debug Nginx load-balancer

I'm trying to set up a load balancer using Nginx for a Perl Dancer2 app which runs inside Docker containers.
And I have this problem that I don't know how to debug:
if in my nginx.conf I set just one server in the upstream block, it runs fine. But the second I add another server, it fails to execute the login.
My nginx.conf file looks like this:
worker_processes 2;
events { worker_connections 512; }
http {
    upstream pb {
        server pearlbee:5000 weight=10 max_fails=3 fail_timeout=30s;
        server pearlbee2:5000 weight=10 max_fails=3 fail_timeout=30s;
    }
    server {
        listen *:80;
        location / {
            proxy_pass http://pb;
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection 'upgrade';
            proxy_set_header Host $host;
            proxy_cache_bypass $http_upgrade;
            #proxy_redirect off;
            #proxy_set_header Host $host;
            #proxy_set_header X-Real-IP $remote_addr;
            #proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            #proxy_set_header X-Forwarded-Host $server_name;
        }
    }
}
My docker-compose file looks like this:
version: '2'
services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      # here i pass my configuration to nginx
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee
  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    #ports:
    #  - "5000:5000"
    volumes:
      - ./config.yml:/config.yml
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql
  pearlbee2:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000"
    volumes:
      - ./config.yml:/config.yml
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql
  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
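One guess about the symptom, offered only as an assumption: if the Dancer2 app keeps login sessions in memory or on local disk, round-robin balancing sends the user's next request to the other backend, which has no record of the session, so the login appears to fail. A sticky-session sketch for the upstream above (ip_hash is a standard nginx directive; sharing the session store between both containers, e.g. in MySQL, would be the more robust fix):
upstream pb {
    ip_hash;    # pin each client IP to one backend so its session is always found
    server pearlbee:5000 max_fails=3 fail_timeout=30s;
    server pearlbee2:5000 max_fails=3 fail_timeout=30s;
}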