load ssl certificate in haproxy with docker-compose

Hi, I have HAProxy in docker-compose as below:
haproxy:
  image: haproxy:2.3
  depends_on:
    - my-service
  volumes:
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
haproxy.cfg:
frontend https
    bind *:443 ssl crt /usr/local/etc/ssl/cert1.pem
but when I run docker-compose up -d, I always get:
unable to stat SSL certificate from file '/etc/ssl/cert1.pem' : No such file or directory.
I do not understand how to pass the certificate to HAProxy, or what I am missing here. Can someone help with this?
I have an ssl directory locally, from which I mount the certificate to /usr/local/etc/ssl in the container.
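Two details stand out here. First, the bind line references /usr/local/etc/ssl/cert1.pem, but the error complains about /etc/ssl/cert1.pem, which suggests the haproxy.cfg the container actually loads is not the one shown above. Second, the compose file never publishes port 443. A consistent setup might look like this (a sketch based on those two observations, not a confirmed fix):

haproxy:
  image: haproxy:2.3
  depends_on:
    - my-service
  volumes:
    # mount the config and the certificates at the exact paths haproxy.cfg references
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
    - 443:443   # publish the port the TLS frontend binds to

with the frontend binding matching the mounted path:

frontend https
    bind *:443 ssl crt /usr/local/etc/ssl/cert1.pem

Also note that HAProxy expects the file passed to crt to contain the certificate and its private key concatenated into a single PEM file.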

Related

Keycloak redirecting to Hostname but with port number too

I have configured nginx and given Keycloak the hostname http://keycloak.formsflow.ai for localhost:8080, but the redirect URL still shows port 8080. How can I remove it?
Below is my Docker config for Keycloak:
keycloak:
  image: quay.io/keycloak/keycloak:14.0.0
  container_name: keycloak
  volumes:
    - ./configuration/imports:/opt/jboss/keycloak/imports
  command:
    - "-b 0.0.0.0 -bmanagement=0.0.0.0 -Dkeycloak.import=/opt/jboss/keycloak/imports/formsflow-ai-realm.json -Dkeycloak.migration.strategy=OVERWRITE_EXISTING"
  environment:
    - DB_VENDOR=POSTGRES
    - DB_ADDR=keycloak-db
    - KEYCLOAK_HOSTNAME=keycloak.formsflow.ai
    - DB_DATABASE=${KEYCLOAK_JDBC_DB:-keycloak}
    - DB_USER=${KEYCLOAK_JDBC_USER:-admin}
    - DB_PASSWORD=${KEYCLOAK_JDBC_PASSWORD:-changeme}
    - KEYCLOAK_USER=${KEYCLOAK_ADMIN_USERNAME:-admin}
    - KEYCLOAK_PASSWORD=${KEYCLOAK_ADMIN_PASSWORD:-changeme}
  ports:
    - 8080:8080
What config do I need to set for Keycloak to remove the port number from the redirect URL?
When behind a reverse proxy, configure Keycloak properties:
PROXY_ADDRESS_FORWARDING=true
KEYCLOAK_FRONTEND_URL=http://keycloak.formsflow.ai/auth
You may also need to configure the X-Forwarded-Proto and X-Forwarded-Host headers in Nginx.
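A sketch of what those Nginx headers could look like (the server_name and upstream address are taken from the question; treat this as an illustration, not a verified config):

server {
    listen 80;
    server_name keycloak.formsflow.ai;

    location / {
        # pass the original host and scheme so Keycloak builds redirect URLs
        # without the backend port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Host $host;
        # PROXY_ADDRESS_FORWARDING=true relies on the forwarded client address
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_pass http://keycloak:8080;
    }
}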

Attempt to rewrite minimal Traefik example to use TLS does not work

The minimal example from https://doc.traefik.io/traefik/user-guides/docker-compose/basic-example/ works on my local machine. However, when I try to adapt it to use TLS, I run into an issue. I'm a Traefik newbie, so I might be making a stupid mistake.
This is my attempt:
version: "3.3"
services:
traefik:
image: "traefik:v2.8"
container_name: "traefik"
command:
- "--log.level=DEBUG"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
ports:
- "443:443"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "traefik/whoami"
container_name: "simple-service"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
So the major modification is to use "traefik.http.routers.whoami.entrypoints=websecure" instead of "traefik.http.routers.whoami.entrypoints=web".
Running
$ curl -k https://127.0.0.1
I get
404 page not found
The Traefik log shows no routing-related issues, and the internal Traefik routing setup shown by curl https://127.0.0.1:8080/api/rawdata | jq . looks the same as in the working example, except for the changed port.
So I opted for a new answer instead of just editing the old one. (Reason being that even incorrect answers teach something.)
My reference is this great post by Marc Mogdanz (link: https://marcmogdanz.de/posts/infrastructure-with-traefik-and-cloudflare/).
The direct answer to your query is:
Expose port 8080 but do not publish it
Add a host name rule. This will allow Traefik to route a URL request to its own port 8080.
The affected part of the compose file would be as follows (assuming that the URL https://dashboard.example.com is the desired URL to reach the dashboard):
expose:
  - 8080
...
labels:
  - "traefik.enable=true"
  - "traefik.http.routers.traefik.rule=Host(`dashboard.example.com`)"
  - "traefik.http.routers.traefik.tls=true"
  - "traefik.http.services.traefik.loadbalancer.server.port=8080"
Finally, I noticed you are testing on localhost. If you are testing on a local machine, use localhost for the dashboard and keep 127.0.0.1 for whoami.
Or, alternately, add a static entry for a subdomain (see https://stackoverflow.com/a/19016600).
Either way, Traefik is looking at the SNI requested - not necessarily the IP address - when matching the Host rule.
Request ----> Docker:443 ----> {Traefik} --"SNI?"--+--"127.0.0.1"----------> {whoami}
                                                   |
                                                   +--"dashboard.localhost"--> :8080 (dashboard)
Add the following entry to your Traefik:
"--entrypoints.websecure.address=:8080"
Normally it would be 8080 for HTTP and 8443 for HTTPS as alternative ports, but since your example specifically states https://~:8080, I have adapted it accordingly.
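One more thing that may be worth checking (my assumption from Traefik v2 defaults, not something stated in the question): a router attached to a TLS entrypoint does not terminate TLS unless TLS is enabled on the router or the entrypoint, and an HTTPS request that no router matches gets Traefik's 404 page. A sketch of the whoami labels with that flag added:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
  - "traefik.http.routers.whoami.entrypoints=websecure"
  # without this, the router has no TLS configuration and https:// requests return 404
  - "traefik.http.routers.whoami.tls=true"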

Using https with grafana/caddy on docker compose

I'm trying to understand how to implement HTTPS with grafana/caddy in docker compose, without a domain name.
Currently, I access Grafana via http://xx.xxx.xx.xx:3000/
I would like this to be HTTPS, but I'm struggling to understand how to generate the cert and have it work as expected. I think Let's Encrypt requires a domain, which I don't have.
version: "3"
networks:
monitor-net:
driver: bridge
volumes:
grafana_data: {}
services:
grafana:
image: grafana/grafana:8.4.4
container_name: grafana
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
environment:
- GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}
- GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PASS}
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
expose:
- 3000
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
caddy:
image: caddy:2.3.0
container_name: caddy
ports:
- "3000:3000"
- "9090:9090"
- "9093:9093"
- "9091:9091"
volumes:
- ./caddy:/etc/caddy
environment:
- ADMIN_USER=${GF_ADMIN_USER}
- ADMIN_PASSWORD=${GF_ADMIN_PASS}
- ADMIN_PASSWORD_HASH=${ADMIN_PASS_HASH}
restart: unless-stopped
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
I'm assuming I would create a volume at /etc/caddy/certs where I'd store the certificates, but I don't know how to generate a certificate for an IP only, or how Caddy would pick it up.
Caddy for IP with SSL
By default, Caddy serves all sites over HTTPS.
Caddy serves IP addresses and local/internal hostnames over HTTPS using self-signed certificates that are automatically trusted locally (if permitted).
Examples: localhost, 127.0.0.1
Official docs here.
In your Caddyfile, you have to add something like this:
http://192.168.1.25:3000 {
    reverse_proxy grafana_ip:3000
}
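If the goal is HTTPS on the bare IP, Caddy's internal CA can issue a self-signed certificate for it via the tls internal directive. A sketch based on that feature, not verified against this exact setup; clients must trust Caddy's root CA or skip verification:

https://192.168.1.25:3000 {
    # use Caddy's built-in local CA instead of Let's Encrypt
    tls internal
    # grafana is the compose service name from the question
    reverse_proxy grafana:3000
}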
It looks like Caddy does not support generating HTTPS certificates for IP addresses. Additionally, Let's Encrypt does not currently support issuing certificates for bare IP addresses.
However, it does appear that ZeroSSL supports generating certificates for IPs. You could try using these instructions to change one or all of your sites to use ZeroSSL, but I wasn't able to get this to work on my test server.
The best option is probably to get a domain that you can point at your server, and then serve it from there.

How to connect to Traefik TCP Services with TLS configuration enabled?

I am trying to configure Traefik so that I can access services via domain names without having to assign different ports. For example, two MongoDB services, both on the default port, but under different domains: example.localhost and example2.localhost. Only this example works. I mean, other cases probably work too, but I can't connect to them, and I don't understand what the problem is. This is probably not even a problem with Traefik.
I have prepared a repository with an example that works. You just need to generate your own certificate with mkcert. The page at example.localhost returns a 403 Forbidden error, but you should not worry about that; the purpose of this configuration is to show that SSL is working (padlock, green status). So don't focus on the 403.
Only the SSL connection to the mongo service works. I tested it with Robo 3T. After selecting SSL, providing example.localhost as the host, and selecting the certificate for a self-signed (or own) connection, it works. And that's the only thing that works that way. Connections to redis (Redis Desktop Manager) and to pgsql (PhpStorm, DBeaver, DbVisualizer) do not work, regardless of whether I provide certificates or not. I do not forward SSL to the services; I only terminate it at Traefik. I have spent long hours on this and searched the internet, but I haven't found the answer yet. Has anyone solved this?
PS. I work on Linux Mint, so my configuration should work in this environment without any problems. I would appreciate solutions for Linux.
If you do not want to browse the repository, I attach the most important files:
docker-compose.yml
version: "3.7"
services:
traefik:
image: traefik:v2.0
ports:
- 80:80
- 443:443
- 8080:8080
- 6379:6379
- 5432:5432
- 27017:27017
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config.toml:/etc/traefik/traefik.config.toml:ro
- ./certs:/etc/certs:ro
command:
- --api.insecure
- --accesslog
- --log.level=INFO
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.mongo.address=:27017
- --entrypoints.postgres.address=:5432
- --entrypoints.redis.address=:6379
- --providers.file.filename=/etc/traefik/traefik.config.toml
- --providers.docker
- --providers.docker.exposedByDefault=false
- --providers.docker.useBindPortIP=false
apache:
image: php:7.2-apache
labels:
- traefik.enable=true
- traefik.http.routers.http-dev.entrypoints=http
- traefik.http.routers.http-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.entrypoints=https
- traefik.http.routers.https-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.tls=true
- traefik.http.services.dev.loadbalancer.server.port=80
pgsql:
image: postgres:10
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
labels:
- traefik.enable=true
- traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.pgsql.tls=true
- traefik.tcp.routers.pgsql.service=pgsql
- traefik.tcp.routers.pgsql.entrypoints=postgres
- traefik.tcp.services.pgsql.loadbalancer.server.port=5432
mongo:
image: mongo:3
labels:
- traefik.enable=true
- traefik.tcp.routers.mongo.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.mongo.tls=true
- traefik.tcp.routers.mongo.service=mongo
- traefik.tcp.routers.mongo.entrypoints=mongo
- traefik.tcp.services.mongo.loadbalancer.server.port=27017
redis:
image: redis:3
labels:
- traefik.enable=true
- traefik.tcp.routers.redis.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.redis.tls=true
- traefik.tcp.routers.redis.service=redis
- traefik.tcp.routers.redis.entrypoints=redis
- traefik.tcp.services.redis.loadbalancer.server.port=6379
config.toml
[tls]
  [[tls.certificates]]
    certFile = "/etc/certs/example.localhost.pem"
    keyFile = "/etc/certs/example.localhost-key.pem"
Build & Run
mkcert example.localhost # in ./certs/
docker-compose up -d
Prepare step by step
Install mkcert (also run mkcert -install to set up the CA)
Clone my code
In the certs folder, run mkcert example.localhost
Start the containers with docker-compose up -d
Open https://example.localhost/ and check that the connection is secure
If http://example.localhost/ is not reachable, add 127.0.0.1 example.localhost to /etc/hosts
Certs:
Public: ./certs/example.localhost.pem
Private: ./certs/example.localhost-key.pem
CA: ~/.local/share/mkcert/rootCA.pem
Test MongoDB
Install Robo 3T
Create new connection:
Address: example.localhost
Use SSL protocol
CA Certificate: rootCA.pem (or Self-signed Certificate)
Test Redis
Install RedisDesktopManager
Create new connection:
Address: example.localhost
SSL
Public Key: example.localhost.pem
Private Key: example.localhost-key.pem
Authority: rootCA.pem
So far:
Can connect to Postgres via IP (info from Traefik)
jdbc:postgresql://172.21.0.4:5432/postgres?sslmode=disable
jdbc:postgresql://172.21.0.4:5432/postgres?sslfactory=org.postgresql.ssl.NonValidatingFactory
Trying telnet (the IP changes on every docker restart):
> telnet 172.27.0.5 5432
Trying 172.27.0.5...
Connected to 172.27.0.5.
Escape character is '^]'.
^]
Connection closed by foreign host.
> telnet example.localhost 5432
Trying ::1...
Connected to example.localhost.
Escape character is '^]'.
^]
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad RequestConnection closed by foreign host.
If I connect directly to Postgres, everything is fine. If I connect via Traefik, I get a Bad Request when closing the connection. I have no idea what this means, or whether it means anything at all.
At least for the PostgreSQL issue, it seems that the connection is started in cleartext and then upgraded to TLS:
Docs
Mailing list discussion
Issue on another proxy project
So it is basically impossible to use TLS termination with a proxy if that proxy doesn't support the protocol's cleartext handshake + upgrade-to-TLS mechanism.
Update to #jose-liber's answer:
SNI routing for Postgres with STARTTLS has been added to Traefik in this PR. Now Traefik will listen to the initial bytes sent by the Postgres client, and if the client is going to initiate a TLS handshake (note that Postgres TLS requests are created as non-TLS first and then upgraded to TLS), Traefik will handle the handshake and is then able to receive the TLS headers, which contain the SNI information it needs to route the request properly. This means that you can use HostSNI(`example.com`) along with TLS to expose Postgres databases under different subdomains.
As of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)
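To check the STARTTLS + SNI behavior from the command line, something like the following should work (assuming OpenSSL 1.1.1 or newer, which added -starttls postgres; this test is my addition, not part of the original answer):

# open a PostgreSQL STARTTLS session, sending the SNI the Traefik router matches on
openssl s_client -connect example.localhost:5432 -starttls postgres -servername example.localhost

If routing works, the output should show the certificate that Traefik serves for example.localhost.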

Failing to execute nginx proxy_pass directive for a Dancer2 app inside a Docker container

I have tried to orchestrate a Dancer2 app, which runs on Starman, using docker-compose. I'm failing to integrate nginx; it crashes with a 502 Bad Gateway error.
Inside my server, the error looks like this:
*1 connect() failed (111: Connection refused) while connecting to upstream, client: 172.22.0.1,
My docker-compose file looks like this:
version: '2'

services:
  web:
    image: nginx
    ports:
      - "80:80"
    volumes:
      - ./nginx.conf:/etc/nginx/nginx.conf
    links:
      - pearlbee
    volumes_from:
      - pearlbee

  pearlbee:
    build: pearlbee
    command: carton exec starman bin/app.psgi
    ports:
      - "5000:5000"
    environment:
      - MYSQL_PASSWORD=secret
    depends_on:
      - mysql

  mysql:
    image: mysql
    environment:
      - MYSQL_ROOT_PASSWORD=secret
      - MYSQL_USER=root
My nginx.conf file looks like this:
user root nogroup;
worker_processes auto;

events { worker_connections 512; }

http {
    include /etc/nginx/sites-enabled/*;

    upstream pb {
        # this is the localhost that starts starman
        #server 127.0.0.1:5000;
        # the name of the docker-compose service that creates the app
        server pearlbee;
        # both return the same error message
    }

    server {
        listen *:80;
        #root /usr/share/nginx/html/;
        #index index.html 500.html favico.ico;
        location / {
            proxy_pass http://pb;
        }
    }
}
You're right to use the service name as the upstream server for Nginx, but you need to specify the port:
upstream pb {
    server pearlbee:5000;
}
Within the Docker network, which Compose creates for you, services can access each other by name. Also, you don't need to publish ports for other containers to use them, only if you want to access them externally. The Nginx container can reach port 5000 on your app container without that port being published to the host.
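Applied to the compose file from the question, the pearlbee service could then drop its published port entirely (a sketch illustrating the point above, not a verified config):

pearlbee:
  build: pearlbee
  command: carton exec starman bin/app.psgi
  # no ports: entry needed; nginx reaches pearlbee:5000 over the compose network
  environment:
    - MYSQL_PASSWORD=secret
  depends_on:
    - mysql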