How to connect to Traefik TCP Services with TLS configuration enabled? - postgresql

I am trying to configure Traefik so that I can reach services via domain names without assigning each one a different port. For example, two MongoDB services, both on the default port, but under different domains: example.localhost and example2.localhost. Only this example works; the other cases probably work too, but I can't connect to them and I don't understand what the problem is. It is probably not even a problem with Traefik.
I have prepared a repository with an example that works. You just need to generate your own certificate with mkcert. The page at example.localhost returns a 403 Forbidden error, but don't worry about that: the purpose of this configuration is to show that SSL is working (padlock, green status). So don't focus on the 403.
Only the SSL connection to the mongo service works. I tested it with Robo 3T: after selecting the SSL connection, entering example.localhost as the host, and choosing the self-signed (or own) certificate option, the connection works. And that is the only thing that works this way. Connections to redis (Redis Desktop Manager) and to pgsql (PhpStorm, DBeaver, DbVisualizer) do not work, regardless of whether I provide certificates or not. I do not forward SSL to the services; I only connect to Traefik with it. I have spent long hours on this and searched the internet, but I haven't found the answer yet. Has anyone solved this?
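A sanity check that separates Traefik's TLS termination from any particular client GUI: probe each TCP entrypoint with openssl s_client (a sketch, using the hostname and ports from the compose file below; -servername supplies the SNI that the HostSNI rule matches on):

openssl s_client -connect example.localhost:27017 -servername example.localhost </dev/null
openssl s_client -connect example.localhost:6379 -servername example.localhost </dev/null
openssl s_client -connect example.localhost:5432 -servername example.localhost </dev/null

If all three handshakes present the mkcert certificate, the Traefik side is working and the remaining failures are more likely client-protocol issues than Traefik misconfiguration.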
PS. I work on Linux Mint, so my configuration should work in that environment without problems. Please keep solutions Linux-oriented.
If you do not want to browse the repository, I attach the most important files:
docker-compose.yml
version: "3.7"
services:
traefik:
image: traefik:v2.0
ports:
- 80:80
- 443:443
- 8080:8080
- 6379:6379
- 5432:5432
- 27017:27017
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config.toml:/etc/traefik/traefik.config.toml:ro
- ./certs:/etc/certs:ro
command:
- --api.insecure
- --accesslog
- --log.level=INFO
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.mongo.address=:27017
- --entrypoints.postgres.address=:5432
- --entrypoints.redis.address=:6379
- --providers.file.filename=/etc/traefik/traefik.config.toml
- --providers.docker
- --providers.docker.exposedByDefault=false
- --providers.docker.useBindPortIP=false
apache:
image: php:7.2-apache
labels:
- traefik.enable=true
- traefik.http.routers.http-dev.entrypoints=http
- traefik.http.routers.http-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.entrypoints=https
- traefik.http.routers.https-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.tls=true
- traefik.http.services.dev.loadbalancer.server.port=80
pgsql:
image: postgres:10
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
labels:
- traefik.enable=true
- traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.pgsql.tls=true
- traefik.tcp.routers.pgsql.service=pgsql
- traefik.tcp.routers.pgsql.entrypoints=postgres
- traefik.tcp.services.pgsql.loadbalancer.server.port=5432
mongo:
image: mongo:3
labels:
- traefik.enable=true
- traefik.tcp.routers.mongo.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.mongo.tls=true
- traefik.tcp.routers.mongo.service=mongo
- traefik.tcp.routers.mongo.entrypoints=mongo
- traefik.tcp.services.mongo.loadbalancer.server.port=27017
redis:
image: redis:3
labels:
- traefik.enable=true
- traefik.tcp.routers.redis.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.redis.tls=true
- traefik.tcp.routers.redis.service=redis
- traefik.tcp.routers.redis.entrypoints=redis
- traefik.tcp.services.redis.loadbalancer.server.port=6379
config.toml
[tls]
[[tls.certificates]]
certFile = "/etc/certs/example.localhost.pem"
keyFile = "/etc/certs/example.localhost-key.pem"
Build & Run
mkcert example.localhost # in ./certs/
docker-compose up -d
Prepare step by step
Install mkcert (and run mkcert -install to set up the local CA)
Clone my code
In certs folder run mkcert example.localhost
Start container by docker-compose up -d
Open https://example.localhost/ and check that the connection is secure (a curl check is sketched after this list)
If address http://example.localhost/ is not reachable, add 127.0.0.1 example.localhost to /etc/hosts
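To verify the certificate from the command line as well (a sketch; the rootCA.pem path is the mkcert default listed below):

curl --cacert ~/.local/share/mkcert/rootCA.pem https://example.localhost/

A 403 response over a successfully verified TLS connection is the expected outcome here.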
Certs:
Public: ./certs/example.localhost.pem
Private: ./certs/example.localhost-key.pem
CA: ~/.local/share/mkcert/rootCA.pem
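To double-check what the generated certificate actually covers (standard openssl x509, using the paths above):

openssl x509 -in ./certs/example.localhost.pem -noout -subject -dates
openssl x509 -in ./certs/example.localhost.pem -noout -text | grep -A1 'Subject Alternative Name'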
Test MongoDB
Install Robo 3T
Create new connection:
Address: example.localhost
Use SSL protocol
CA Certificate: rootCA.pem (or Self-signed Certificate)
Test Redis
Install RedisDesktopManager
Create new connection:
Address: example.localhost
SSL
Public Key: example.localhost.pem
Private Key: example.localhost-key.pem
Authority: rootCA.pem
So far:
I can connect to Postgres via its IP address (taken from Traefik):
jdbc:postgresql://172.21.0.4:5432/postgres?sslmode=disable
jdbc:postgresql://172.21.0.4:5432/postgres?sslfactory=org.postgresql.ssl.NonValidatingFactory
Trying telnet (the IP changes on every docker restart):
> telnet 172.27.0.5 5432
Trying 172.27.0.5...
Connected to 172.27.0.5.
Escape character is '^]'.
^]
Connection closed by foreign host.
> telnet example.localhost 5432
Trying ::1...
Connected to example.localhost.
Escape character is '^]'.
^]
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad RequestConnection closed by foreign host.
If I connect directly to postgres, everything looks fine. If I connect via Traefik, I get a Bad Request when closing the connection. I have no idea what this means, or whether it has to mean anything.

At least for the PostgreSQL issue, it seems that the connection is started in cleartext and then upgraded to TLS:
Docs
Mailing list discussion
Issue on another proxy project
So it is basically impossible to use TLS termination with a proxy if said proxy doesn't support this cleartext-handshake-then-upgrade-to-TLS part of the protocol.
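This upgrade step can be reproduced with a recent OpenSSL (1.1.1+), whose s_client knows the PostgreSQL STARTTLS exchange (a sketch against the setup from the question):

openssl s_client -starttls postgres -connect example.localhost:5432

The client first sends a cleartext SSLRequest message and waits for the server's one-byte 'S' reply before beginning the TLS handshake; a proxy that expects a TLS ClientHello as the very first bytes on the wire never answers that initial message.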

Update to @jose-liber's answer:
SNI routing for Postgres with STARTTLS has been added to Traefik in this PR. Traefik now peeks at the initial bytes the PostgreSQL client sends and, if the client is about to initiate a TLS handshake (note that Postgres connections start out in cleartext and are only then upgraded to TLS), Traefik answers the STARTTLS exchange itself. It can then complete the TLS handshake and read the SNI from the ClientHello, which is what it needs to route the request properly. This means that you can use HostSNI(`example.com`) together with TLS to expose Postgres databases under different subdomains.
At the time of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)
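For reference, a minimal end-to-end test once a Traefik build containing this PR is running (a sketch, assuming the example.localhost setup from the question and a stock psql client):

psql "host=example.localhost port=5432 user=postgres dbname=postgres sslmode=require"

sslmode=require makes libpq perform the cleartext SSLRequest upgrade, which Traefik can now answer before reading the SNI from the ClientHello.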

Related

Attempt to rewrite minimal Traefik example to use TLS does not work

The minimal example from https://doc.traefik.io/traefik/user-guides/docker-compose/basic-example/ works on my local machine. However, when I try to adapt it to use TLS, I run into an issue. I'm a Traefik newbie, so I might be making a silly mistake.
This is my attempt:
version: "3.3"
services:
traefik:
image: "traefik:v2.8"
container_name: "traefik"
command:
- "--log.level=DEBUG"
- "--accesslog=true"
- "--api.insecure=true"
- "--providers.docker=true"
- "--providers.docker.exposedbydefault=false"
- "--entrypoints.web.address=:80"
- "--entrypoints.websecure.address=:443"
ports:
- "443:443"
- "8080:8080"
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
whoami:
image: "traefik/whoami"
container_name: "simple-service"
labels:
- "traefik.enable=true"
- "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
- "traefik.http.routers.whoami.entrypoints=websecure"
So the major modification is to use "traefik.http.routers.whoami.entrypoints=websecure" instead of "traefik.http.routers.whoami.entrypoints=web"
Running
$ curl -k https://127.0.0.1
I get
404 page not found
The Traefik log shows no routing-related issues, and the internal Traefik routing setup, shown via curl http://127.0.0.1:8080/api/rawdata | jq ., looks the same as for the working example, except for the changed port.
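One thing worth checking, although it cannot be confirmed from the output shown (an assumption, not a verified diagnosis): in Traefik v2, pointing a router at an HTTPS entrypoint does not by itself enable TLS on that router, so TLS requests may not match it and fall through to a 404. The label set would then look like:

labels:
  - "traefik.enable=true"
  - "traefik.http.routers.whoami.rule=Host(`127.0.0.1`)"
  - "traefik.http.routers.whoami.entrypoints=websecure"
  - "traefik.http.routers.whoami.tls=true"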
I opted for a new answer instead of just editing my old one (even incorrect answers teach something).
My reference is this great post by Marc Mogdanz (link: https://marcmogdanz.de/posts/infrastructure-with-traefik-and-cloudflare/).
The direct answer to your query is:
Expose port 8080 but do not publish it
Add a host name rule. This will allow Traefik to route a URL request to its own port 8080.
The affected part of the compose file would be as follows (assuming that the URL https://dashboard.example.com is the desired URL to reach the dashboard):
    expose:
      - 8080
    ...
    labels:
      - "traefik.enable=true"
      - "traefik.http.routers.traefik.rule=Host(`dashboard.example.com`)"
      - "traefik.http.routers.traefik.tls=true"
      - "traefik.http.services.traefik.loadbalancer.server.port=8080"
Finally, I noticed you are testing on localhost. If you are testing on a local machine, use localhost for the dashboard and keep 127.0.0.1 for whoami.
Or, alternatively, add a static hosts entry for a subdomain (see https://stackoverflow.com/a/19016600).
Either way, Traefik is looking at the SNI requested - not necessarily the IP address - when matching the Host rule.
Request ----> Docker:443 ---> {Traefik} --"SNI?"--+--"127.0.0.1"---------------> {whoami}
                                                  |
                                                  +--"dashboard.localhost"-----> :8080 (dashboard)
Add the following entry to your Traefik:
"--entrypoints.websecure.address=:8080"
Normally it would be 8080 for HTTP and 8443 for HTTPS as the alternative ports, but since your example specifically states https://~:8080, I have adapted it accordingly.

load ssl certificate in haproxy with docker-compose

Hi, I have haproxy in docker-compose as below:
haproxy:
  image: haproxy:2.3
  depends_on:
    - my-service
  volumes:
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
haproxy.cfg
frontend https
    bind *:443 ssl crt /usr/local/etc/ssl/cert1.pem
but when I run docker-compose up -d, I always get
unable to stat SSL certificate from file '/etc/ssl/cert1.pem' : No such file or directory.
I do not understand how to pass the certificate to haproxy, or what I am missing here. Can someone help with this?
I have an ssl directory locally, from which I mount the certificates to /usr/local/etc/ssl in the container.
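A couple of hedged observations, since the question leaves some details out: haproxy's crt option expects a single PEM file containing the certificate chain and the private key concatenated, and the bind on :443 is only reachable from outside if that port is published. A sketch of both pieces (cert1.crt and privkey.pem are placeholder names):

cat cert1.crt privkey.pem > ./ssl/cert1.pem   # chain + key in one file

ports:
  - 80:80
  - 443:443

Also, the error message references /etc/ssl/cert1.pem while the config says /usr/local/etc/ssl/cert1.pem, which suggests haproxy is reading a different configuration file than the one mounted; that is worth double-checking.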

Using https with grafana/caddy on docker compose

I'm trying to understand how to implement HTTPS with grafana/caddy in Docker Compose without a domain name.
Currently, I access grafana via http://xx.xxx.xx.xx:3000/
I would like this to be https, but am struggling to understand how to generate the cert and have it work as expected. I think letsencrypt requires a domain which I don't have.
version: "3"
networks:
monitor-net:
driver: bridge
volumes:
grafana_data: {}
services:
grafana:
image: grafana/grafana:8.4.4
container_name: grafana
volumes:
- grafana_data:/var/lib/grafana
- ./grafana/provisioning/dashboards:/etc/grafana/provisioning/dashboards
- ./grafana/provisioning/datasources:/etc/grafana/provisioning/datasources
environment:
- GF_SECURITY_ADMIN_USER=${GF_ADMIN_USER}
- GF_SECURITY_ADMIN_PASSWORD=${GF_ADMIN_PASS}
- GF_USERS_ALLOW_SIGN_UP=false
restart: unless-stopped
expose:
- 3000
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
caddy:
image: caddy:2.3.0
container_name: caddy
ports:
- "3000:3000"
- "9090:9090"
- "9093:9093"
- "9091:9091"
volumes:
- ./caddy:/etc/caddy
environment:
- ADMIN_USER=${GF_ADMIN_USER}
- ADMIN_PASSWORD=${GF_ADMIN_PASS}
- ADMIN_PASSWORD_HASH=${ADMIN_PASS_HASH}
restart: unless-stopped
networks:
- monitor-net
labels:
org.label-schema.group: "monitoring"
I'm assuming I would create a volume at /etc/caddy/certs to store the certificates, but I don't know how to generate one for an IP only, or how Caddy would pick it up.
Caddy for IP with SSL
By default, Caddy serves all sites over HTTPS.
Caddy serves IP addresses and local/internal hostnames over HTTPS using self-signed certificates that are automatically trusted locally (if permitted).
Examples: localhost, 127.0.0.1
Official docs here
In your Caddyfile, you have to add something like this:
http://192.168.1.25:3000 {
    reverse_proxy grafana_ip:3000
}
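If the goal is HTTPS on the bare IP, Caddy's locally trusted internal CA can also be requested explicitly (a sketch; 192.168.1.25 is a placeholder, grafana is the compose service name from the question, and clients must trust Caddy's local root certificate for the padlock to appear):

https://192.168.1.25:3000 {
    tls internal
    reverse_proxy grafana:3000
}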
It looks like Caddy cannot obtain publicly trusted HTTPS certificates for IP addresses. Additionally, Let's Encrypt does not currently support issuing certificates for bare IP addresses.
However, it does appear that ZeroSSL supports generating certificates for IPs. You could try using these instructions to change one or all of your sites to use ZeroSSL, but I wasn't able to get this to work on my test server.
The best option is probably to get a domain that you can point at your server, and then serve it from there.

Setting up SMTP server for your app in a container

I have an app which sends email and requires an SMTP server on port 25, so I created another container and mapped port 25 from the host to the container.
That didn't work well, as it kept throwing the following error:
ERROR: for smtp Cannot start service smtp: driver failed programming external connectivity on endpoint push_smtp_1 (25f260f6185dd34cfdb8fb9956c28187028aaca4d850d7a73acc4c2180c55696): Error starting userland proxy: Bind for 0.0.0.0:25: unexpected error (Failure EADDRINUSE)
Not sure what could be wrong here. Following other posts, I tried restarting the Docker client and verified that nothing else is running on port 25 (lsof -i :25). Let me know if I am missing something here.
The second part of this question is: what is the ideal way to set up an SMTP server?
Should the SMTP server be created within the app container? I came across this blog: http://www.tothenew.com/blog/setting-up-sendmail-inside-your-docker-container/
If not (1), is it better to create a separate SMTP container and map ports? If so, what's the reason for getting the above error?
Below is my docker-compose file:
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "25:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default
Thanks!
If you want the SMTP server to just be reachable from the other container and not from the outside, no need to map the port.
Using docker-compose, all defined containers will automatically be added to a network in which containers can reach each other by their name (see https://docs.docker.com/compose/networking/). If your custom "default" network is a bridge network, this will work as well.
That means, your SMTP container will directly be reachable at smtp:25 from other containers (i.e. its internal port and internal hostname instead of the host port and publicly routable IP address of your docker host).
Nobody else will be able to use your SMTP server like that. I think this might lead to problems with recipients not accepting the emails it sends (see https://serverfault.com/q/364473). @David Maze has a point in saying that it's probably better to use a public/official mail provider anyway.
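Concretely, the app container would point its mail settings at the service name instead of localhost — a sketch with made-up variable names, since the question doesn't show how the app is configured:

push:
  environment:
    - SMTP_HOST=smtp   # the compose service name, resolvable on the shared network
    - SMTP_PORT=25     # the container-internal port; no ports: mapping needed for this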
I think the issue is that something else on the host is already listening on that port.
Try to find out which ports on the host are listening: https://www.cyberciti.biz/faq/how-do-i-find-out-what-ports-are-listeningopen-on-my-linuxfreebsd-server/
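For example, either of these should reveal the process already bound to port 25 (run as root, since other users' sockets may be hidden otherwise):

sudo ss -ltnp | grep ':25 '
sudo lsof -nP -iTCP:25 -sTCP:LISTEN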
Then use some other port on the host:
version: '3'
services:
  push:
    image: emailService
    ports:
      - "9602:9602/tcp"
    networks:
      - default
    build:
      context: ./
      dockerfile: Dockerfile
      args:
        - "TARGET=build"
    depends_on:
      - gearmand
      - smtp
  smtp:
    image: catatnight/postfix:latest
    ports:
      - "2525:25"
    networks:
      - default
  gearmand:
    image: <path>/<to>/gearmand:latest
    ports:
      - "4730:4730/tcp"
    networks:
      - default
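With this mapping, other containers still reach the relay at smtp:25 over the compose network, while from the host it is now exposed on port 2525, e.g.:

telnet localhost 2525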

Gogs + Drone getsockopt: connection refused

In the Gogs webhooks interface, when I click the test delivery button, I get this error:
Delivery: Post http://localhost:8000/hook?access_token=eyJhbGciOiJIUzI1NiIsInR5cCI6IkpXVCJ9.eyJ0ZXh0IjoidG9tL2Ryb255IiwidHlwZSI6Imhvb2sifQ.UZdOVW2IdiDcLQzKcnlmlAxkuA8GTZBH634G0K7rggI: dial tcp [::1]:8000: getsockopt: connection refused
This is my docker-compose.yml file
version: '2'
services:
  gogs:
    image: gogs/gogs
    ports:
      - 3000:3000
      - 22:22
    links:
      - mysql
  mysql:
    image: mysql
    expose:
      - 3306
    environment:
      - MYSQL_ROOT_PASSWORD=1234
      - MYSQL_DATABASE=gogs
  drone:
    image: drone/drone
    links:
      - gogs
    ports:
      - 8000:8000
    volumes:
      - ./drone:/var/lib/drone
      - /var/run/docker.sock:/var/run/docker.sock
    environment:
      - REMOTE_DRIVER=gogs
      - REMOTE_CONFIG=http://gogs:3000?open=true
      # - PUBLIC_MODE=true
The root cause is that Drone assumes its external address is localhost:8000 because that is how it is being accessed from your browser. Drone therefore configures all Gogs webhooks to use localhost:8000/hook as the callback URL.
The problem here is that Gogs and Drone are running in separate containers on separate networks. This means when Gogs tries to send the hook to drone it sends it to localhost:8000 and fails because Drone is on a separate bridge.
Recommended Fix
The recommended fix is to use DNS or an IP address with Drone and Gogs. If you are running a production installation of either system, it is unlikely you will be using localhost, so this seems like a reasonable solution.
For local testing you can also use the local IP address assigned by Docker. You can find your Drone and Gogs IP addresses using Docker inspect:
"Networks": {
"default": {
"IPAddress": "172.18.0.3"
}
}
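The snippet above is an excerpt of docker inspect's JSON output; the address can also be extracted directly (the container name is a guess and depends on your compose project name):

docker inspect --format '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' project_drone_1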
Why not custom hostnames?
Using custom hostnames, such as http://gogs, is problematic because Drone creates ephemeral Docker containers for every build using the default Docker network settings. This means your build environment will use its own isolated network and will not be able to resolve http://gogs.
So even if we configured Drone and Gogs to communicate using custom hostnames, the build environment would be unable to resolve the Gogs hostname to clone your repository.