docker-compose, WSL2 and self-signed SSL certificate

So I'm using docker-compose to set up local dev environments while I'm working on WordPress sites.
Now I'm trying to set things up so I can get a self-signed SSL certificate for that local dev environment. But all I end up with when I try to visit the site is ERR_CONNECTION_CLOSED, and there is nothing in the error log or anywhere else.
This is what my docker-compose.yml file looks like:
version: '3.1'
services:
  wordpress:
    image: wordpress:5.8-fpm
    restart: always
    container_name: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./wp:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on:
      - db
  db:
    image: mysql:5.7
    restart: always
    container_name: db
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql
    ports:
      - "8086:3306"
  nginx:
    image: nginx:latest
    container_name: nginx
    ports:
      - '80:80'
      - '443:433'
    volumes:
      - ./nginx:/etc/nginx/conf.d
      - ./logs/nginx:/var/log/nginx
      - ./wp:/var/www/html
      - ./certs:/etc/cert
    depends_on:
      - wordpress
    restart: always
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025" # smtp server
      - "8025:8025" # web ui
volumes:
  db:
And this is what my nginx config file looks like:
server {
    listen 80;
    listen [::]:80;
    server_name dev.mydomain.com;
    root /var/www/html;
    index index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
server {
    listen 443 ssl;
    server_name dev.mydomain.com;
    root /var/www/html;
    index index.php;
    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log;
    ssl_certificate /etc/cert/mydomain.com.crt;
    ssl_certificate_key /etc/cert/mydomain.com.key;
    location / {
        try_files $uri $uri/ /index.php?$args;
    }
    location ~ \.php$ {
        try_files $uri =404;
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass wordpress:9000;
        fastcgi_index index.php;
        include fastcgi_params;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        fastcgi_param PATH_INFO $fastcgi_path_info;
    }
}
To create the cert files I used openssl on WSL2 (Ubuntu 20.04) with this command:
openssl req -newkey rsa:2048 -nodes -keyout mydomain.com.key -x509 -days 365 -out mydomain.com.crt
I can visit the site without HTTPS and everything works fine. But when I try to visit the site with HTTPS I get the error ERR_CONNECTION_CLOSED, and I have no idea where to start. I have tried many different solutions, but so far no luck.
Also good to know: I have pointed dev.mydomain.com to 127.0.0.1 in DNS, and I have also added it to my hosts file on Windows.
But I have a feeling that there is something I'm missing here.
My plan is to have a wildcard domain pointed to localhost, and to make the docker-compose setup more or less automatically create the self-signed certificates when I run docker-compose up -d.
So I hope someone out there has the solution or can point me in the right direction :)

Local dev: Be your own CA
This is the best tutorial you will find on the web for setting up SSL locally:
https://www.youtube.com/watch?v=oxDUdjbdfR0&t=1558s
Make sure to install your root cert into the browser, otherwise it won't work.
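For reference, a minimal sketch of that approach with openssl on WSL2/bash; the file names, validity period and the dev.mydomain.com / wildcard SANs below are placeholders to adapt:

# 1. Create the root CA key and self-signed root certificate
openssl genrsa -out rootCA.key 4096
openssl req -x509 -new -nodes -key rootCA.key -sha256 -days 825 \
  -subj "/CN=Local Dev Root CA" -out rootCA.crt

# 2. Create a key and CSR for the dev domain
openssl genrsa -out dev.mydomain.com.key 2048
openssl req -new -key dev.mydomain.com.key \
  -subj "/CN=dev.mydomain.com" -out dev.mydomain.com.csr

# 3. Sign the CSR with the root CA, adding the SAN that browsers require
openssl x509 -req -in dev.mydomain.com.csr -CA rootCA.crt -CAkey rootCA.key \
  -CAcreateserial -days 825 -sha256 \
  -extfile <(printf "subjectAltName=DNS:dev.mydomain.com,DNS:*.mydomain.com") \
  -out dev.mydomain.com.crt

# 4. Import rootCA.crt into the browser / Windows trust store

The leaf cert and key are then what you mount into the nginx container.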
Production: just use certbot and it will do the trick. The command will be something like this:
certbot --nginx -d ${SERVER2} -d ${SERVER1}
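For example, with the nginx plugin installed and the variables replaced by real domains (the domains below are placeholders), a run might look like:

sudo certbot --nginx -d example.com -d www.example.com
# renewals can be tested with:
sudo certbot renew --dry-run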

Looks like you may have a DNS resolution problem. Use the Docker embedded DNS resolver inside the nginx location block:
resolver 127.0.0.11;
I would also recommend using a root CA and the ssl_trusted_certificate directive.
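As a quick sanity check of the embedded DNS (container and service names taken from the compose file in the question), something like this should work:

docker exec nginx cat /etc/resolv.conf      # should list nameserver 127.0.0.11
docker exec nginx getent hosts wordpress    # should resolve to the wordpress container IP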
CURITY EXAMPLE
For something to compare against, see this Curity example, which uses DNS via the hosts file, wildcard certs, a root CA, docker compose and nginx:
GitHub Repo
Certs Creation Script
NGINX Config
Tutorial
This is quite an advanced sample with two forms of Mutual TLS, though it also has a lot in common with your requirements. So hopefully it will help you to resolve your problem.

HTTPS uses port 443, but SSL/TLS does not itself use any port.
In your Docker config there is a line that contains - '443:433'. Is this a typo, or is that your actual port mapping? For network port mapping, check this document: https://docs.docker.com/compose/networking/
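A quick way to see what is actually published (container and domain names taken from the question, so adjust as needed):

docker port nginx                            # prints the host->container port mappings
docker compose config | grep -B1 -A3 ports   # shows how compose parsed the ports sections
curl -vk https://dev.mydomain.com/           # -k skips verification of the self-signed cert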

For anyone who looks at this later, I found a solution that works really well for me.
What I ended up doing was running it with ngrok instead, and having that implemented in the docker-compose file.
I can choose to just run with a domain name that ngrok gives me, or use my own (this requires a premium account). So this is what my docker-compose.yml file looks like now:
version: '3.1'
networks:
  default:
services:
  wordpress:
    image: wordpress:5.9.0-php8.0-fpm
    restart: always
    container_name: wordpress
    environment:
      WORDPRESS_DB_HOST: db
      WORDPRESS_DB_USER: exampleuser
      WORDPRESS_DB_PASSWORD: examplepass
      WORDPRESS_DB_NAME: exampledb
    volumes:
      - ./wp:/var/www/html
      - ./uploads.ini:/usr/local/etc/php/conf.d/uploads.ini
    depends_on:
      - db
  db:
    image: mysql:5.7
    restart: always
    container_name: db
    environment:
      MYSQL_DATABASE: exampledb
      MYSQL_USER: exampleuser
      MYSQL_PASSWORD: examplepass
      MYSQL_RANDOM_ROOT_PASSWORD: '1'
    volumes:
      - db:/var/lib/mysql
    ports:
      - "8086:3306"
  nginx:
    image: nginx:latest
    container_name: nginx
    networks:
      - default
    ports:
      - '80:80'
    volumes:
      - ${NGINX_CONF_DIR:-./nginx}:/etc/nginx/conf.d
      - ${NGINX_LOG_DIR:-./logs/nginx}:/var/log/nginx
      - ${WORDPRESS_DATA_DIR:-./wp}:/var/www/html
    depends_on:
      - wordpress
    restart: always
  ngrok:
    image: wernight/ngrok:latest
    ports:
      - '4040:4040'
    links:
      - nginx
    environment:
      NGROK_PROTOCOL: http
      NGROK_REGION: eu
      NGROK_PORT: nginx:80
      NGROK_AUTH: my auth key
      NGROK_HOSTNAME: if I want my own domain otherwise comment this out
    depends_on:
      - nginx
    networks:
      - default
  mailhog:
    image: mailhog/mailhog
    ports:
      - "1025:1025" # smtp server
      - "8025:8025" # web ui
volumes:
  db:
As you can see, I'm using the wernight/ngrok docker container to make this work.
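If it helps anyone: the ngrok container also exposes its inspection API on port 4040, so after docker-compose up -d you can grab the public URL from the host, for example (assuming jq is installed):

curl -s http://localhost:4040/api/tunnels | jq -r '.tunnels[].public_url'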

Related

Docker compose connect back and frontend

Problem: connecting my backend and frontend together using Docker compose (NestJS and Next.js). I need them to run as a single cluster on AWS, and locally it doesn't work the same way either...
Everything worked when they ran in separate docker compose setups (a backend created on AWS and my frontend running locally against those endpoints), but together... I have no idea how to solve it. I have tried multiple solutions I found on the internet.
Connecting using the Docker host on the front end:
const fetcher = (url: string) => fetch(url).then((res) => res.json())
useSWR('http://host.docker.internal:3000/grandetabela', fetcher, {
  onSuccess: (data, key, config) => {
    console.log(data)
  }
})
This results in the error GET http://host.docker.internal:3000/grandetabela net::ERR_NAME_NOT_RESOLVED, or, if I try localhost, it runs into a CORS issue.
Inside the API route in Next.js too, but there I don't get the CORS issue:
//
try {
  const data = await axios.get('http://host.docker.internal:3000/grandetabela')
    .then((resp: any) => {
      return resp
    })
  res.status(200).json(data)
} catch (error) {
  console.error(error)
  res.status(502).json({ error: 'error on sever request' })
}
If I try to use localhost instead, it causes another problem, an AxiosError: Request failed, while if I try another API from the internet I get a response normally.
To give some idea of what I tried, look at my docker compose... I have tried using the IPs... I can ping from inside Docker, but I can't reach host:3000, for example, to query my endpoints.
version: '3.1'
services:
  db:
    image: postgres
    # restart: always
    container_name: 'pgsql'
    ports:
      - "5432:5432"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.20.1
  adminer:
    image: adminer
    # restart: always
    ports:
      - "8080:8080"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.70.1
  node-ytalo-backend:
    image: ytalojacs/nestjsbasic_1-0
    ports:
      - "3000:3000"
    command: >
      sh -c "npm run build \
      npm run start:prod"
    environment:
      POSTGRES_USER: pgadmin
      POSTGRES_PASSWORD: pgpalavra
      POSTGRES_DB: mydatabase
      POSTGRES_HOST: db
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.50.1
  prophet:
    image: ytalojacs/prophetforecast-1_0
    ports:
      - "3001:3001"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.100.1
  front-end:
    depends_on:
      - node-ytalo-backend
    image: ytalojacs/frontendjsprophet
    environment:
      PORT: 3010
    command: >
      sh -c "npm run build \
      npm run start"
    ports:
      - "3010:3010"
    links:
      - "node-ytalo-backend:myback.org"
    # networks:
    #   mynetwork:
    #     ipv4_address: 172.20.128.1
# networks:
#   mynetwork:
#     ipam:
#       config:
#         - subnet: 172.20.0.0/16
When I use host.docker.internal with curl inside the container (docker exec ... bash) everything works as intended too. I can get a response from my backend...
Is there something I missed? .env?
You have a similar/same issue to the few I forwarded the same SO answer to.
But I quote here:
I am no expert on MERN (we mainly run Angular & .Net), but I have to warn you of one thing. We had an issue when setting this up in the beginning as well: it worked locally in containers but not on our deployment servers, because we forgot the basic thing about web applications.
Applications run in your browser, whereas if you deploy an application stack somewhere else, the rest of the services (APIs, DB and such) do not. So referencing your IP/DNS/localhost inside your application won't work, because there is nothing there. A container that contains a WEB application is only there to serve your browser (client) the files, and then the JS and the logic are executed inside your browser, not the container.
I suspect this might be affecting your ability to connect to the backend.
To solve this you have two options.
Create an HTTP proxy as an additional service and have your FE call that proxy (set up a domain and routing), for instance Nginx, Traefik, ...; that proxy can then reference your backend by its service name, since it lives in the same environment as the API.
Expose the HTTP port directly from the container and then your FE can call remoteServerIP:exposedPort and you will connect directly to the container's interface. (NOTE: I do not recommend this way for real use, only for testing direct connectivity without any proxy)
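To illustrate the difference (service names taken from your compose file; the availability of wget/curl inside the images is an assumption):

# From another container on the same compose network the service name resolves:
docker compose exec front-end wget -qO- http://node-ytalo-backend:3000/grandetabela
# From the host, where the browser actually runs, only published ports exist:
curl http://localhost:3000/grandetabela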
UPDATE 2022-10-05
Added the nginx config from the utility server, showing how to route request calls from the nginx running inside the container to the other containers on the same network.
Nginx config:
server_tokens off;
# ----------------------------------------------------------------------------------------------------
upstream local-docker-verdaccio {
    server verdaccio:4873; #verdaccio is docker compose's service name and port 4873 is port on which container is listening internally
}
# ----------------------------------------------------------------------------------------------------
# ----------------------------------------------------------------------------------------------------
# si.company.verdaccio
server {
    listen 443 http2 ssl;
    server_name verdaccio.company.org;
    # ----------------------------------------------------------------------------------------------------
    add_header Strict-Transport-Security "max-age=31536000" always;
    proxy_set_header Host $http_host;
    proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    proxy_set_header X-Forwarded-Port $server_port;
    proxy_set_header X-Forwarded-Proto $scheme;
    proxy_set_header X-Real-IP $remote_addr;
    ssl_certificate /etc/tls/si.company.verdaccio-chain.crt;
    ssl_certificate_key /etc/tls/si.company.verdaccio-unencrypted.key;
    ssl_ciphers ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384;
    ssl_prefer_server_ciphers off;
    ssl_protocols TLSv1.2 TLSv1.3;
    # ----------------------------------------------------------------------------------------------------
    location / {
        proxy_pass http://local-docker-verdaccio/;
        proxy_redirect off;
    }
}
server {
    listen 80;
    server_name verdaccio.company.org;
    return 301 https://verdaccio.company.org$request_uri;
}
# ----------------------------------------------------------------------------------------------------
And the corresponding docker-compose.yml file:
version: "3.7"
services:
proxy:
container_name: proxy
image: nginx:alpine
ports:
- "443:443"
restart: always
volumes:
- 5fb31181-8e07-4304-9276-9da8c3a581c9:/etc/nginx/conf.d:ro
- /etc/tls/:/etc/tls:ro
verdaccio:
container_name: verdaccio
depends_on:
- proxy
expose:
- "4873"
image: verdaccio/verdaccio:4
restart: always
volumes:
- d820f373-d868-40ec-bb6b-08a99efddc06:/verdaccio
- 542b4ca1-aefe-43a8-8fb3-804b46049bab:/verdaccio/conf
- ab018ca9-38b8-4dad-bbe5-bd8c41edff77:/verdaccio/storage
volumes:
542b4ca1-aefe-43a8-8fb3-804b46049bab:
external: true
5fb31181-8e07-4304-9276-9da8c3a581c9:
external: true
ab018ca9-38b8-4dad-bbe5-bd8c41edff77:
external: true
d820f373-d868-40ec-bb6b-08a99efddc06:
external: true
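For what it's worth, a quick check that the proxy container reaches verdaccio by its compose service name (busybox wget ships with the alpine-based nginx image; the names come from the files above):

docker exec proxy wget -qO- http://verdaccio:4873/ | head -c 200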

Connect multiple containers, nginx and php through fastcgi

I have three containers, mysql, phpfpm and nginx.
When I try to open localhost:8080 I get this error:
[error] 11#11: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.27.0.1, server: _,.
request: "GET / HTTP/1.1", upstream: "fastcgi://172.27.0.3:9000", host: "localhost:8080"
Can anybody help me with this? What am I missing?
Here is my docker-compose.yaml:
version: '3'
services:
  php:
    build: php
    container_name: phpfpm
    expose:
      - '9002'
      - '9000'
    depends_on:
      - db
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log
    command: /bin/bash -c "rm -rf /var/run/php && mkdir /var/run/php && rm -rf /run/php && mkdir /run/php && /usr/sbin/php-fpm7.4 -F -R"
  nginx:
    build: nginx
    container_name: webserver
    ports:
      - '8080:80'
    depends_on:
      - php
      - db
    volumes:
      - /srv/www/mito/app:/var/www/html
      - ./dockerlive/logs:/var/log/nginx
    environment:
      - NGINX_HOST=localhost
      - NGINX_PORT=80
    command: /bin/bash -c "nginx -g 'daemon off;'"
  db:
    build: database
    container_name: mysql
    ports:
      - "3306:3306"
    environment:
      MYSQL_ALLOW_EMPTY_PASSWORD: 1
      MYSQL_DATABASE: mito
and the nginx config:
server {
    listen 80;
    listen [::]:80;
    server_name _;
    root /var/www/html;
    index index.html index.php;
    #error_log /var/log/nginx/mito.localhost-error.log;
    #access_log /var/log/nginx/mito.localhost-acces.log;
    location / {
        try_files $uri $uri/ =404;
    }
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;
        #fastcgi_pass unix:/var/run/php/php7.4-fpm.sock;
        fastcgi_pass phpfpm:9000;
    }
}
I found a useful idea somewhere and implemented it.
I added an entry to both the nginx and the php volumes:
- php-fpm-socket:/var/run/php
and I also defined it in the top-level volumes section without any parameters.
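If anyone hits the same connection-refused error, it is also worth checking which transport php-fpm actually listens on. With the Debian/Ubuntu-style php-fpm7.4 started in the compose file above, the pool config would typically live here (the path is an assumption):

docker exec phpfpm grep -R "^listen" /etc/php/7.4/fpm/pool.d/
# listen = 9000 (or 0.0.0.0:9000) matches fastcgi_pass phpfpm:9000;
# listen = /run/php/php7.4-fpm.sock matches the shared-socket approach above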

docker and nginx: How to reverse proxy multiple postgresql databases

I have a docker compose setup with multiple postgresql database containers:
version: "3.7"
services:
nginx:
image: nginx:1.18.0-alpine
ports:
- 8028:80
volumes:
- ./nginx/localhost/conf.d:/etc/nginx/conf.d
depends_on:
- webapp
networks:
- postgresql_network
postgresql:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db1
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
postgresql2:
image: "postgres:13-alpine"
restart: always
volumes:
- type: bind
source: ../DO_NOT_DELETE_POSTGRESQL_DATA2
target: /var/lib/postgresql/data
environment:
POSTGRES_DB: db2
POSTGRES_USER: simha
POSTGRES_PASSWORD: krishna
PGDATA: "/var/lib/postgresql/data/pgdata"
networks:
- postgresql_network
networks:
postgresql_network:
driver: bridge
Now I can expose a different port for each of the postgresql instances and access them separately from the host (like localhost 5432 and localhost 5445).
But instead of that I want to handle them using different domain names,
like pg1.docker.db, port 8028 (nginx port) and pg2.docker.db, port 8028 (nginx port), and have nginx reverse proxy them internally to the different docker containers postgresql and postgresql2,
and in the host /etc/hosts I will have
127.0.0.1 pg1.docker.db
127.0.0.1 pg2.docker.db
I am not sure, but something like this. Can someone say whether I am doing this right?
## WEBSERVER
upstream webapp {
    server webapp:8000;
}
server {
    listen 80;
    location / {
        proxy_pass http://webapp;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header Host $http_host;
        proxy_redirect off;
    }
}
## POSTGRESQL DBS
upstream postgres {
    server postgresql:5432;
}
upstream postgres2 {
    server postgresql2:5432;
}
server {
    listen 80 so_keepalive=on;
    server_name db1.domain.my;
    proxy_pass postgres;
}
server {
    listen 80 so_keepalive=on;
    server_name db2.domain.my;
    proxy_pass postgres2;
}
I see people using stream and http blocks. Will the above also work, or should I do something different?
Redirecting http requests based on domain name is pretty easy.
You can use something like this:
http {
    server {
        listen 80;
        server_name pg1.docker.db;
        location / {
            ...
        }
    }
    server {
        listen 80;
        server_name pg2.docker.db;
        location / {
            ...
        }
    }
}
But the problem here is that nginx's http block only forwards HTTP requests to the upstream servers, so unless you have a web-based interface behind those containers, it's not going to work properly.
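nginx can still proxy raw PostgreSQL traffic with its stream module, but at the TCP level there is no Host header, so the two databases can only be told apart by port (or by TLS SNI), not by the pg1/pg2 domain names. A client connection would then still look something like this (the ports are placeholders for whatever you publish):

psql "host=127.0.0.1 port=5432 user=simha dbname=db1"
psql "host=127.0.0.1 port=5445 user=simha dbname=db2"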

Nginx routing with Docker Rails 5 Postgres app

When I tried to Dockerize a Rails app into a container and run Nginx on the host, I ran into a problem with routing from the outside in.
I can't access /public in the Rails app container. Instead I see /var/www/app/public on the host.
How can I route from Nginx to the Docker Rails container?
nginx.conf:
upstream puma_app {
server 127.0.0.1:3000;
}
server {
listen 80;
client_max_body_size 4G;
keepalive_timeout 10;
error_page 500 502 504 /500.html;
error_page 503 #503;
server_name localhost app;
root /var/www/app/public;
try_files $uri/index.html $uri #puma_app;
location #puma_app {
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
proxy_set_header Host $http_host;
proxy_redirect off;
proxy_pass http://puma_app;
# limit_req zone=one;
access_log /var/log/nginx/access.log;
error_log /var/log/nginx/error.log;
}
location ^~ /assets/ {
gzip_static on;
expires max;
add_header Cache-Control public;
}
location = /50x.html {
root html;
}
location = /404.html {
root html;
}
location #503 {
error_page 405 = /system/maintenance.html;
if (-f $document_root/system/maintenance.html) {
rewrite ^(.*)$ /system/maintenance.html break;
}
rewrite ^(.*)$ /503.html break;
}
if ($request_method !~ ^(GET|HEAD|PUT|PATCH|POST|DELETE|OPTIONS)$ ){
return 405;
}
if (-f $document_root/system/maintenance.html) {
return 503;
}
location ~ \.(php|html)$ {
return 405;
}
}
docker-compose.yml:
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - 'app:/var/www/app'
      - 'public:/var/www/app/public'
    ports:
      - '3000:3000'
    depends_on:
      - postgres
    env_file:
      - '.env'
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_USER: 'postgres_user'
    ports:
      - '5432:5432'
    volumes:
      - 'postgres:/var/lib/postgresql/data'
volumes:
  postgres:
  app:
  public:
Dockerfile
# Base image:
FROM ruby:2.4
# Install dependencies
RUN apt-get update -qq && apt-get install -y build-essential libpq-dev nodejs
# Set an environment variable where the Rails app is installed to inside of Docker image:
ENV RAILS_ROOT /var/www/app
RUN mkdir -p $RAILS_ROOT
ENV RAILS_ENV production
ENV RACK_ENV production
# Set working directory, where the commands will be ran:
WORKDIR $RAILS_ROOT
# Gems:
COPY Gemfile Gemfile
COPY Gemfile.lock Gemfile.lock
RUN gem install bundler
RUN bundle install
COPY config/puma.rb config/puma.rb
# Copy the main application.
COPY . .
RUN bundle exec rake RAILS_ENV=production assets:precompile
VOLUME ["$RAILS_ROOT/public"]
EXPOSE 3000
# The default command that gets ran will be to start the Puma server.
CMD bundle exec puma -C config/puma.rb
I think you are trying to access /public on the host from inside the container at /var/www/app/public.
You need to mount the host directory inside the container. You can use -v "/public:/var/www/app/public" when running the container.
There are some issues with your Dockerfile. I'm not sure how you want to set up your docker image, but it seems like you are trying to keep the public directory in a docker volume; I would suggest storing the compiled assets in the docker image itself. That way you can be sure the assets always travel with the image.
Your current Dockerfile should run assets:precompile before the COPY . ., meaning the assets should be compiled into the public directory first, before it is copied into the docker image.
Anyhow, you should try running a really simple docker app first before using it on a more complex project setup; here's a blog post that might help you (disclaimer: I wrote that post).
In Docker, every container has its own IP address, and containers are not local to each other. So you can't use the 127.0.0.1 IP in the Nginx container as the IP of the Rails container. Fortunately, Docker containers can be linked together using their service names, so you must change your upstream to
upstream puma_app {
    server app:3000;
}
Also, you should add an Nginx container to your docker-compose file (assuming your nginx conf files are in the config/nginx/conf.d dir):
version: '2'
services:
  app:
    build: .
    command: bundle exec puma -C config/puma.rb
    volumes:
      - app:/var/www/app
      - public:/var/www/app/public
      - nginx-confs:/var/www/app/config/nginx/conf.d
    ports:
      - 3000:3000
    depends_on:
      - postgres
    env_file: .env
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_USER=postgres_user
    ports:
      - 5432:5432
    volumes:
      - postgres:/var/lib/postgresql/data
  nginx:
    image: nginx:latest
    ports:
      - 80:80
      - 443:443
    volumes:
      - nginx-confs:/etc/nginx/conf.d
volumes:
  postgres:
  app:
  public:
  nginx-confs:
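Once it is up, a quick way to check the wiring (service names from the compose file above; getent is available in the Debian-based nginx image):

docker-compose exec nginx getent hosts app   # the app service should resolve via Docker's DNS
curl -I http://localhost/                    # host -> nginx container -> puma upstream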

docker caddy proxy not forwarding

So I set up the Caddy server using docker and have set it to proxy to another docker image that is running nodejs. However, when I hit the URL it routes to caddyserver.com instead of my nodejs app. Is there something that I am missing?
myserver {
    proxy / 172.17.0.5:8000 {
        header_upstream Host {host}
        header_upstream X-Real-IP {remote}
        header_upstream X-Forwarded-For {remote}
        header_upstream X-Forwarded-Proto {scheme}
    }
}
My docker-compose looks like the following:
web:
  image: ca9a385372b0
  env_file: .env
  environment:
    AUTO_ALIASES: /src/docker/aliases/auto
    TERM: xterm-256color
    VIRTUAL_HOST: myserver
    VIRTUAL_PORT: 8000
  volumes:
    - .:/src
  ports:
    - "8000:8000"
  container_name: web
  links:
    - caddy
caddy:
  image: blackglory/caddy-proxy:0.2.1
  env_file: .env
  environment:
    CADDY_OPTIONS: "-email someaddress@myserver.com"
  volumes:
    - /var/run/docker.sock:/tmp/docker.sock:ro
  ports:
    - "80:80"
    - "443:443"
I added the following to my /etc/hosts as well:
127.0.0.1 myserver
However, when I go to http://myserver it redirects me to caddyserver.com.
My .env file looks like this:
WEB_VIRTUAL_HOST=myserver
WEB_VIRTUAL_PORT=8000