Register gitlab-runner throws "cannot assign requested address" (ubuntu, docker) - docker-compose

Hello everyone,
I have been fighting with this for 7 days now and am getting nowhere (only frustrated). I really hope someone can help me. Please keep in mind that I am no network expert, since I believe that is where the problem lies.
The problem:
Attempts to register a gitlab-runner result in this error:
(screenshot of the error)
The setup
Everything is installed on a single server in my home network. The layout is:
Internet - Router (FritzBox) 192.168.1.1 - Server 192.168.1.100
Behind the same router:
- Laptop accessing the server
- Other devices
The server runs
Ubuntu 18.04.4 LTS
Docker version 19.03.8, build afacb8b7f0
I got my gitlab and gitlab-runner working a few months ago without https (I figured that, being a one-man team inside my own network, I don't need https). I used docker-compose to run gitlab, postgresql and redis, and 'normal' docker to run a gitlab-runner. That too was a struggle for me, and it took me a while to figure out that I had to use url = "http://192.168.1.100:30080/" to register the runner.
But then I decided to upgrade to https using a self-signed certificate. I did this because I wanted to use GitLab's built-in Docker registry to speed up my builds, and as I understand it, that requires https.
I succeeded with gitlab itself. I can view my repositories, push changes, create issues and whatnot. But, as the title says, I am unable to register a gitlab-runner over https.
docker-compose.yml
Let's start with the docker-compose file, which starts postgres, redis, gitlab and now also the gitlab-runner:
version: '3.7'
services:
postgresql:
restart: always
image: postgres:12-alpine
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "10"
environment:
- POSTGRES_USER=xxxxxxxxxxx
- POSTGRES_PASSWORD=xxxxxxxxxxx
- POSTGRES_DB=xxxxxxxxxxx
volumes:
- /opt/postgresql:/var/lib/postgresql:rw
redis:
restart: always
image: redis:5-alpine
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "10"
gitlab:
image: 'gitlab/gitlab-ce'
restart: always
hostname: 'treffer-technologies.home-webserver.de'
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "10"
links:
- postgresql:postgresql
- redis:redis
environment:
GITLAB_OMNIBUS_CONFIG: |
# postgres
postgresql['enable'] = false
gitlab_rails['db_username'] = "xxxxxxxxxxx"
gitlab_rails['db_password'] = "xxxxxxxxxxx"
gitlab_rails['db_host'] = "postgresql"
gitlab_rails['db_port'] = "5432"
gitlab_rails['db_database'] = "xxxxxxxxxxx"
gitlab_rails['db_adapter'] = 'postgresql'
gitlab_rails['db_encoding'] = 'utf8'
# redis
redis['enable'] = false
gitlab_rails['redis_host'] = 'redis'
gitlab_rails['redis_port'] = '6379'
# nginx
nginx['redirect_http_to_https'] = true
registry_nginx['redirect_http_to_https'] = true
# email
gitlab_rails['smtp_enable'] = true
gitlab_rails['smtp_address'] = "smtp.gmail.com"
gitlab_rails['smtp_port'] = 587
gitlab_rails['smtp_user_name'] = "xxxxxxxxxxx"
gitlab_rails['smtp_password'] = "xxxxxxxxxxx"
gitlab_rails['smtp_domain'] = "xxxxxxxxxxx"
gitlab_rails['smtp_authentication'] = "login"
gitlab_rails['smtp_enable_starttls_auto'] = true
gitlab_rails['smtp_tls'] = false
gitlab_rails['smtp_openssl_verify_mode'] = 'peer'
# other
gitlab_rails['gitlab_shell_ssh_port'] = 30022
# https://docs.gitlab.com/omnibus/settings/ssl.html#lets-encrypt-integration
external_url 'https://treffer-technologies.home-webserver.de:30443'
# registry
registry_external_url 'https://treffer-technologies.home-webserver.de:30090'
ports:
# host:container
# both ports must match the port from external_url above
- "30080:30080"
# the mapped port must match ssh_port specified above.
- "30022:22"
# https
- "30443:30443"
# registry
- "30090:30090"
volumes:
- /opt/gitlab/config:/etc/gitlab:rw
- /opt/gitlab/log:/var/log/gitlab:rw
- /opt/gitlab/data:/var/opt/gitlab:rw
depends_on:
- postgresql
- redis
runner:
image: 'gitlab/gitlab-runner:alpine'
restart: always
logging:
driver: "json-file"
options:
max-size: "10m"
max-file: "10"
volumes:
- /opt/gitlab-runner/config:/etc/gitlab-runner
- /var/run/docker.sock:/var/run/docker.sock
depends_on:
- gitlab
As you can see, the url of my gitlab is https://treffer-technologies.home-webserver.de:30443.
gitlab-runner register
And here is the registration code:
docker run --rm -t -i -v /opt/gitlab-runner/config:/etc/gitlab-runner gitlab/gitlab-runner:alpine --debug register \
--non-interactive \
--executor "docker" \
--docker-image alpine:3 \
--url "https://treffer-technologies.home-webserver.de:30443" \
--registration-token "xxxxxxxxxxx" \
--description "gitlab-runner-docker" \
--tag-list "build,test,deploy" \
--locked="false"
which, when executed, results in this error:
Runtime platform arch=amd64 os=linux pid=6 revision=4c96e5ad
version=12.9.0
Checking runtime mode GOOS=linux uid=0
Running in system-mode.
Trying to load /etc/gitlab-runner/certs/treffer-technologies.home-webserver.de.crt ...
Dialing: tcp treffer-technologies.home-webserver.de:30443 ...
ERROR: Registering runner... failed runner=xxxxxxxx status=couldn't execute
POST against https://treffer-technologies.home-webserver.de:30443/api/v4/runners:
Post https://treffer-technologies.home-webserver.de:30443/api/v4/runners:
dial tcp [2001:16b8:a582:1800:314f:5277:9434:77ad]:30443:
connect: cannot assign requested address
PANIC: Failed to register this runner. Perhaps you are having network problems
According to Supported options for self-signed certificates, I copied the same certificate that I created and use for my gitlab to /opt/gitlab-runner/config/certs/treffer-technologies.home-webserver.de.crt. The content begins with -----BEGIN, so I think it is PEM-encoded.
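For what it's worth, this is how I sanity-check that the copied file really is a PEM certificate (plain openssl, nothing GitLab-specific):
# prints the subject and validity dates if the file is a readable PEM certificate
openssl x509 -in /opt/gitlab-runner/config/certs/treffer-technologies.home-webserver.de.crt \
  -noout -subject -dates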
Firewall
ufw is inactive until this problem is resolved.
Logs
As far as I can tell, the registration process is not reaching my gitlab, since I can find no signs of a request in the gitlab logs. This is why I believe I have a network problem.
Probing gitlab-runner container
Using docker-compose exec runner /bin/sh I found out that:
ping gitlab
PING gitlab (172.22.0.5): 56 data bytes
64 bytes from 172.22.0.5: seq=0 ttl=64 time=0.055 ms
64 bytes from 172.22.0.5: seq=1 ttl=64 time=0.105 ms
64 bytes from 172.22.0.5: seq=2 ttl=64 time=0.150 ms
64 bytes from 172.22.0.5: seq=3 ttl=64 time=0.154 ms
64 bytes from 172.22.0.5: seq=4 ttl=64 time=0.151 ms
^C
--- gitlab ping statistics ---
5 packets transmitted, 5 packets received, 0% packet loss
round-trip min/avg/max = 0.055/0.123/0.154 ms
172.22.0.5 is exactly the IP of the docker-container gitlab, as expected. However, using register against https://gitlab:30443 results in
Dialing: tcp gitlab:30443 ...
ERROR: Registering runner... failed runner=xxxxxx
status=couldn't execute POST against https://gitlab:30443/api/v4/runners: Post https://gitlab:30443/api/v4/runners: dial tcp: lookup gitlab on 8.8.8.8:53: no such host
PANIC: Failed to register this runner. Perhaps you are having network problems
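(Aside, for clarity: if that attempt was run with the standalone docker run command from the registration step above, the container sits on Docker's default bridge, where compose service names such as gitlab do not resolve, hence the lookup going out to 8.8.8.8. One way to make the service name resolve would be to attach the registration container to the compose network; the network name below is a placeholder for the actual compose project name, and the certificate would still need to cover that name.)
docker run --rm -t -i --network <project>_default \
  -v /opt/gitlab-runner/config:/etc/gitlab-runner \
  gitlab/gitlab-runner:alpine register \
  --url "https://gitlab:30443" \
  --registration-token "xxxxxxxxxxx"   # plus the same flags as in the command above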
ping treffer-technologies.home-webserver.de
PING treffer-technologies.home-webserver.de (2001:16b8:a582:1800:314f:5277:9434:77ad): 56 data bytes
ping: sendto: Address not available
Adding the line
172.22.0.5 treffer-technologies.home-webserver.de
to the /etc/hosts of the gitlab-runner container makes the ping work, but the registration still results in
Trying to load /etc/gitlab-runner/certs/treffer-technologies.home-webserver.de.crt ...
Dialing: tcp treffer-technologies.home-webserver.de:30443 ...
ERROR: Registering runner... failed runner=xxxxxxxx status=couldn't execute POST against https://treffer-technologies.home-webserver.de:30443/api/v4/runners: Post https://treffer-technologies.home-webserver.de:30443/api/v4/runners: dial tcp [2001:16b8:a582:1800:314f:5277:9434:77ad]:30443: connect: cannot assign requested address
PANIC: Failed to register this runner. Perhaps you are having network problems
/etc/hosts
of gitlab-runner docker container
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.22.0.4 0181ad73e31f
# manually added to make ping work
# 172.22.0.5: gitlab-container
172.22.0.5 treffer-technologies.home-webserver.de
of host / the server
127.0.0.1 localhost
127.0.1.1 HP-ProDesk-400-G5-Desktop-Mini
# The following lines are desirable for IPv6 capable hosts
::1 ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
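(For illustration only, this is not what I ended up doing: instead of editing /etc/hosts by hand, the same mapping could probably be expressed in the compose file with a network alias on the gitlab service, so that other containers on the implicit default network resolve the public hostname to the gitlab container.)
services:
  gitlab:
    networks:
      default:
        aliases:
          # hypothetical alias, not part of my current setup
          - treffer-technologies.home-webserver.de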
The self-signed certificate
I used this guide. Basically, I ran openssl genrsa and used treffer-technologies.home-webserver.de as the FQDN. The content starts with -----BEGIN.
More information
Thank you for reading all of this. If you want to help and need more information, I will provide it as fast as I can. Thanks :)
edit: entered image description, fixed typos and grammar (I am German), removed the statement that this is my first question (not true, it is my second), added the probing of the gitlab-runner container, changed IPs to reflect the current state after my tinkering

I found a working setup:
After noticing that running register against 192.168.1.100 resulted in a different error:
(screenshot: different errors for different register targets)
I created a self-signed SSL certificate using this guide, which includes 192.168.1.100 alongside treffer-technologies.home-webserver.de. Then I placed it at gitlab-runner/config/certs/ca.crt instead of gitlab-runner/config/certs/treffer-technologies.home-webserver.de.crt.
[ v3_ca ]
subjectAltName = @alternate_names
# added manually
# https://stackoverflow.com/questions/21488845/how-can-i-generate-a-self-signed-certificate-with-subjectaltname-using-openssl
[ alternate_names ]
DNS.1 = treffer-technologies.home-webserver.de
DNS.2 = www.treffer-technologies.home-webserver.de
IP = 192.168.1.100
With that, gitlab-runner picked up the certificate and the registration was successful. No need to edit any hosts files or add additional networks or links to docker. I don't know if this is the proper way to do it, but at least it works for me.
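For reference, a single openssl command can produce a similar certificate with those SANs (a sketch, not the exact commands from the guide; -addext needs OpenSSL 1.1.1 or newer):
openssl req -x509 -newkey rsa:4096 -sha256 -days 365 -nodes \
  -keyout treffer-technologies.home-webserver.de.key -out ca.crt \
  -subj "/CN=treffer-technologies.home-webserver.de" \
  -addext "subjectAltName=DNS:treffer-technologies.home-webserver.de,DNS:www.treffer-technologies.home-webserver.de,IP:192.168.1.100"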

Related

Enabling of SSL on the official postgresql docker image

I have an issue with enabling SSL support on the postgres docker image. The following configuration is used in docker-compose.yml:
version: '3.5'
services:
postgresserver:
image: postgres:14.5
container_name: postgresserver
ports:
- "5432:5432"
environment:
POSTGRES_PASSWORD: my_password
PGPORT: 5432
command: -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/CA.pem -c clientcert=verify-ca
volumes:
- "./certs/myCA.pem:/var/lib/postgresql/CA.pem"
- "./certs/postgresserver.internal.crt:/var/lib/postgresql/server.crt"
- "./certs/postgresserver.internal.key:/var/lib/postgresql/server.key"
networks:
default:
aliases:
- postgresserver.internal
openssl:
image: shamelesscookie/openssl:1.1.1
container_name: openssl
stdin_open: true # docker run -i
tty: true
networks:
default:
name: dummy network
driver: bridge
ipam:
config:
- subnet: 172.177.0.0/16
The files server.crt and server.key contain the server certificate and the private key, signed by my own CA whose certificate is in CA.pem.
According to the official postgres/docker documentation
https://github.com/docker-library/docs/blob/master/postgres/README.md
it should work (Section: Database Configuration: "From the PostgreSQL docs we see that any option available in a .conf file can be set via -c."; see also https://www.postgresql.org/docs/14/app-postgres.html#id-1.9.5.14.6.3 for further details). I have tried to connect using the pre-installed psql client from Windows PowerShell on the host as follows:
& 'C:\Program Files\PostgreSQL\14\bin\psql.exe' "sslmode=require host=localhost port=5432 dbname=test"
This call produced the following output:
psql: error: connection to server at "localhost" (::1), port 5432 failed: server does not support SSL, but SSL was required
The call without the "sslmode=require" switch works like a charm.
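To rule out the -c options simply being ignored, one quick check (my own addition; it assumes the container name postgresserver from the compose file above) is to ask the server directly over the local socket:
# SHOW reports the values the running server actually uses
docker exec -it postgresserver psql -U postgres -c "SHOW ssl;" -c "SHOW ssl_cert_file;"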
I have also tried to use openssl from openssl container as follows:
openssl s_client -starttls postgres -connect postgresserver:5432
This call has produced the following output:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 1 bytes and written 8 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
What might be wrong here?
Alternatively, a working configuration would be highly appreciated. Thanks!

PGHOST for GitLab pipeline with docker:dind for postgres created by docker-compose

I have a docker-compose file that initializes postgres and a service for postgres migration. I want to run tests in a GitLab pipeline against my docker-compose backed postgres service, but I can't connect to pg_db via localhost. Inside my code I use the pgx package. On my local machine there is no trouble using localhost for the PGHOST env variable.
So my main question is: what host should I put in the PGHOST variable for my tests to use for the postgres connection inside the GitLab pipeline?
docker-compose.yml
version: "3.3"
services:
pg_db:
container_name: pg_db
image: postgres:13.2-alpine
environment:
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_SSLMODE=${POSTGRES_SSLMODE}
- POSTGRES_HOST_AUTH_METHOD=${POSTGRES_HOST_AUTH_METHOD}
ports:
- ${POSTGRES_PORT}:5432
restart: always
deploy:
resources:
limits:
cpus: '1'
memory: 4G
networks:
- postgres
- backend
#init db
store-init:
image: x:latest
container_name: store-init
environment:
- PGHOST=pg_db
- PGUSER=${POSTGRES_USER}
- PGPASSWORD=${POSTGRES_PASSWORD}
- PGDATABASE=${POSTGRES_DB}
- PGPORT=${POSTGRES_PORT}
restart: on-failure
depends_on:
- pg_db
networks:
- postgres
- backend
networks:
backend:
postgres:
driver: bridge
And here is a significant part of my gitlab-ci.yml
services:
- docker:dind
stages:
- test
test:
stage: test
image: golang:1.17-alpine3.15
variables:
PGHOST: localhost
before_script:
- apk update && apk add make git openssh g++
- apk add --no-cache docker-compose
- git config --global user.email "$GITLAB_USER_EMAIL" && git config --global user.name "$GITLAB_USER_NAME"
- mkdir -p ~/.ssh && echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa && chmod -R 600 ~/.ssh && ssh-keyscan -t rsa ssh.x >> ~/.ssh/known_hosts
script:
- cp .env.example .env
- docker-compose up -d
- sleep 30 # a temporary line to get the logs
- cat /etc/hosts # debug line
- docker-compose port pg_db 5432 # debug line
- netstat -a # debug line
- docker-compose ps # debug line
- go test -v -timeout 30s ./... -tags=withDB
only:
- merge_request
- dev
- master
The logs I get for
variables:
PGHOST: localhost
$ cp .env.example .env
$ docker-compose up -d
Recreating alp-logger_pg_db_1 ...
Recreating alp-logger_pg_db_1 ... done
store-init is up-to-date
$ sleep 30
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 runner-lxzkchpx-project-304-concurrent-0
$ docker-compose port pg_db 5432
0.0.0.0:5432
$ netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 runner-lxzkchpx-project-304-concurrent-0:50294 static.124.194.21.65.clients.your-server.de:ssh TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
pg_db docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
store-init ./alp-store Up
and the error when connecting to the postgres db:
failed to connect to `host=localhost user=test database=test`: dial error (dial tcp [::1]:5432: connect: cannot assign requested address)
The logs for the debug commands are the same, so I'll skip them. The errors I get for
variables:
PGHOST: pg_db
and for any other named host, like docker:
failed to connect to `host=pg_db user=test database=test`: hostname resolving error (lookup pg_db on 1.1.1.1:53: no such host)
The errors I get for
variables:
PGHOST: 127.0.0.1
failed to connect to `host=127.0.0.1 user=test database=test`: dial error (dial tcp 127.0.0.1:5432: connect: connection refused)
One of the important distinctions between running containers on your local machine and running them in GitLab using docker:dind is that the containers are not available on 'localhost' -- they are available on the docker:dind container.
If you want to talk to this container, in your scenario, the postgres container would be available on docker:5432 (docker being the hostname of the docker:dind container where your postgres container has its port mapping).
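Applied to the job from the question, that would mean pointing PGHOST at the docker service instead of localhost. A sketch of only the changed variables (it assumes the ${POSTGRES_PORT}:5432 mapping publishes port 5432, i.e. POSTGRES_PORT=5432 in .env):
test:
  variables:
    PGHOST: docker   # hostname of the docker:dind service
    PGPORT: "5432"   # host-side port of the pg_db mapping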
Illustration with simple HTTP service container
As a simplified example, if you were to run the container strm/helloworld-http locally with a port mapping, the following works:
docker run -d --rm -p 80:80 strm/helloworld-http
# give it some time to startup
curl http://localhost # this works
However, the same setup in GitLab does not:
myjob:
variables: # these variables are not necessarily required
DOCKER_TLS_CERTDIR: ""
DOCKER_HOST: "tcp://docker:2375"
services:
- docker:dind
script:
- docker run -d --rm -p 80:80 strm/helloworld-http
- sleep 10
- curl http://localhost # Fails!
One fix would be to use the docker hostname instead:
script:
- docker run -d --rm -p 80:80 strm/helloworld-http
- sleep 10
- curl http://docker # works!

Docker container communication with another container on a different host/server

I have two servers (CentOS 8).
On server1 I have a mysql-server container, and on server2 I have the zabbix front-end, i.e. zabbix-web-apache-mysql (container name zabbixfrontend).
I am trying to connect to mysql-server from the zabbixfrontend container and am getting this error:
bash-4.4$ mysql -h <MYSQL_SERVER_IP> -P 3306 -uroot -p
Enter password:
ERROR 2002 (HY000): Can't connect to MySQL server on '<MYSQL_SERVER_IP>' (115)
When I run nc from the zabbixfrontend container to my mysql-server IP, I get a "No route to host" error message.
bash-4.4$ nc -zv <MYSQL_SERVER_IP> 3306
Ncat: Version 7.70 ( https://nmap.org/ncat )
Ncat: No route to host.
NOTE: I can successfully run nc from the host machine (server2) to the mysql-server container.
docker-compose.yml
version: '3.5'
services:
zabbix-web-apache-mysql:
image: zabbix/zabbix-web-apache-mysql:centos-8.0-latest
container_name: zabbixfrontend
#network_mode: host
ports:
- "80:8080"
- "443:8443"
volumes:
- /etc/localtime:/etc/localtime:ro
- /etc/timezone:/etc/timezone:ro
- ./zbx_env/etc/ssl/apache2:/etc/ssl/apache2:ro
- ./usr/share/zabbix/:/usr/share/zabbix/
env_file:
- .env_db_mysql
- .env_web
secrets:
- MYSQL_USER
- MYSQL_PASSWORD
- MYSQL_ROOT_PASSWORD
# zbx_net_frontend:
sysctls:
- net.core.somaxconn=65535
secrets:
MYSQL_USER:
file: ./.MYSQL_USER
MYSQL_PASSWORD:
file: ./.MYSQL_PASSWORD
MYSQL_ROOT_PASSWORD:
file: ./.MYSQL_ROOT_PASSWORD
The output of docker logs zabbixfrontend is below:
** Deploying Zabbix web-interface (Apache) with MySQL database
** Using MYSQL_USER variable from ENV
** Using MYSQL_PASSWORD variable from ENV
********************
* DB_SERVER_HOST: <MYSQL_SERVER_IP>
* DB_SERVER_PORT: 3306
* DB_SERVER_DBNAME: zabbix
********************
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
**** MySQL server is not available. Waiting 5 seconds...
The nc message is telling the truth: No route to host.
This happens because when you deploy your front-end container on the docker bridge network, its IP address belongs to the 172.18.0.0/16 subnet, and you are trying to reach the database via an IP address that belongs to a different subnet (10.0.0.0/16).
On the other hand, when you deploy your front-end container on the host network, you no longer face that problem, because the container is now literally using the IP address of the host machine, 10.0.0.2, and no route needs to be explicitly created to reach 10.0.0.3.
Now the problem you are facing is that you can no longer access the web UI via the browser. This happens because, I assume, you kept the "ports:" option in your docker-compose.yml and tried to access the service on localhost:80/443. The source and destination ports do not need to be specified if you run the container on the host network. The container will just listen directly on the host on the port that's opened inside the container.
Try to run the front-end container with this config and then access it on localhost:8080 and localhost:8443:
...
network_mode: host
# ports:
# - "80:8080"
# - "443:8443"
volumes:
...
Running containers on the host network is not something that I would usually recommend, but since your setup is quite special, with one container running on one docker host and another container running on another, independent docker host, I assume you don't want to create an overlay network and register the two docker hosts to a swarm.
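For completeness, the overlay/swarm route mentioned above would look roughly like this (a sketch only, not tested against this setup; <SERVER1_IP> and <TOKEN> are placeholders):
# on server1
docker swarm init --advertise-addr <SERVER1_IP>
# on server2, using the join token printed by the command above
docker swarm join --token <TOKEN> <SERVER1_IP>:2377
# back on server1: an attachable overlay network both containers can join
docker network create --driver overlay --attachable zbx_net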

docker pgAdmin4 connection refused while connecting to local postgres database

I am trying to connect to my local postgres database using the pgAdmin4 docker container. When I open http://localhost:5050/, log in, and create a new server connection, I get the error "Unable to connect to server: could not connect to server: Connection refused".
Here is my docker-compose.yml file
version: '3.5'
services:
pgadmin:
container_name: pgadmin4
image: dpage/pgadmin4
environment:
PGADMIN_DEFAULT_EMAIL: db@db.com
PGADMIN_DEFAULT_PASSWORD: odoo
volumes:
- pgadmin:/root/.pgadmin
ports:
- "5050:80"
restart: unless-stopped
volumes:
pgadmin:
I am looking for a solution to connect my local postgres database with the pgadmin4 docker container. I am using Ubuntu 20.04.
---- Updated based on Veikko's answer ----
Here is my docker-compose file code https://pastebin.com/VmhZwtaL
and here is postgresql.conf file https://pastebin.com/R7ifFrGR
and pg_hba.conf file https://pastebin.com/yC2zCfBG
You can access your host machine from your docker container with the DNS name host.docker.internal. Replace localhost in your server connection with this name. The name localhost inside your pgAdmin container refers to the docker container itself, not to your host machine.
You can use image qoomon/docker-host to access services on your host computer. Add docker-host service to your docker-compose like this:
version: '3.5'
services:
docker-host:
image: qoomon/docker-host
cap_add: [ 'NET_ADMIN', 'NET_RAW' ]
restart: on-failure
pgadmin:
container_name: pgadmin4
image: dpage/pgadmin4
environment:
...
After this you should be able to access your host's Postgres service with the host name docker-host. Replace localhost with docker-host in your connection string and the connection should work.
If you still have connection problems after this, please make sure you do not have any firewall blocking the traffic, that you have a proper Docker network setup (see docs), and that your postgresql is listening on this address.
The Ubuntu/Linux version of Docker does not currently support the host.docker.internal DNS name that would point containers to the host. That is the easiest way to link to the host in Docker for Mac or Windows. I hope we get this on Linux soon as well.
More information about docker-host can be found in Github repo: https://github.com/qoomon/docker-host
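Note: on newer Docker Engine releases (20.10 and up), Linux can also map host.docker.internal by itself via the special host-gateway value. A minimal compose sketch, assuming the pgadmin service from the question:
pgadmin:
  image: dpage/pgadmin4
  extra_hosts:
    # maps host.docker.internal to the host's gateway IP on Linux (Docker 20.10+)
    - "host.docker.internal:host-gateway"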
I had this problem before, and now I have found the solution.
Just type this command:
docker exec <container_name> cat /etc/hosts
Then it will show something like this:
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.18.0.3 607b00c25f29
Use 172.18.0.3 as your hostname.
Hope this can help you.
First call sudo netstat -nplt to be sure that something is listening on port 5432.
Then call sudo ip addr to find the IP of your host machine.
Then try connecting using the real IP instead of localhost.
I see in your ip addr output:
3: wlp3s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000 link/ether 30:10:b3:e4:39:e2 brd ff:ff:ff:ff:ff:ff inet 192.168.43.190/24 brd 192.168.43.255 scope global dynamic noprefixroute wlp3s0
So your real IP is 192.168.43.190
Create a server with host name/address 172.17.0.1
and make sure PostgreSQL is listening (*).
install_pgadmin4_using_docker
You can try adding this to your pg_hba.conf:
host all all 0.0.0.0/0 trust
or
host all all 0.0.0.0/0 md5
to test.
Then, if this works, you should change the 0.0.0.0/0 netmask to your docker bridge netmask and check it again. Btw, to connect to your localhost (on the host) you need to connect to the docker bridge IP; for me it's 172.17.0.1/32.
EDIT:
Second: postgresql.conf
Uncomment listen_addresses and read what is written there:
listen_addresses = '*' # what IP address(es) to listen on;
# comma-separated list of addresses;
# defaults to 'localhost'; use '*' for all
# (change requires restart)
you need to bind to the IPs that are reachable from docker, so in my case it should be 172.17.0.1

How to connect to Traefik TCP Services with TLS configuration enabled?

I am trying to configure Traefik so that I have access to services via domain names and do not have to use different ports. For example, two MongoDB services, both on the default port, but in different domains: example.localhost and example2.localhost. Only this example works. I mean, other cases probably work too, but I can't connect to them, and I don't understand what the problem is. This is probably not even a problem with Traefik.
I have prepared a repository with an example that works. You just need to generate your own certificate with mkcert. The page at example.localhost returns a 403 Forbidden error, but you should not worry about it; the purpose of this configuration is to show that SSL is working (padlock, green status). So don't focus on the 403.
Only the SSL connection to the mongo service works. I tested it with the Robo 3T program. After selecting SSL, providing the host example.localhost, and selecting the certificate (self-signed or my own), the connection works. And that's the only thing that works that way. Connections to redis (Redis Desktop Manager) and to pgsql (PhpStorm, DBeaver, DbVisualizer) do not work, regardless of whether I provide certificates or not. I do not forward SSL to the services, I only connect to Traefik. I spent long hours on this. I searched the internet. I haven't found the answer yet. Has anyone solved this?
PS. I work on Linux Mint, so my configuration should work in this environment without any problem. I am asking for solutions for Linux.
If you do not want to browse the repository, I attach the most important files:
docker-compose.yml
version: "3.7"
services:
traefik:
image: traefik:v2.0
ports:
- 80:80
- 443:443
- 8080:8080
- 6379:6379
- 5432:5432
- 27017:27017
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config.toml:/etc/traefik/traefik.config.toml:ro
- ./certs:/etc/certs:ro
command:
- --api.insecure
- --accesslog
- --log.level=INFO
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.mongo.address=:27017
- --entrypoints.postgres.address=:5432
- --entrypoints.redis.address=:6379
- --providers.file.filename=/etc/traefik/traefik.config.toml
- --providers.docker
- --providers.docker.exposedByDefault=false
- --providers.docker.useBindPortIP=false
apache:
image: php:7.2-apache
labels:
- traefik.enable=true
- traefik.http.routers.http-dev.entrypoints=http
- traefik.http.routers.http-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.entrypoints=https
- traefik.http.routers.https-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.tls=true
- traefik.http.services.dev.loadbalancer.server.port=80
pgsql:
image: postgres:10
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
labels:
- traefik.enable=true
- traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.pgsql.tls=true
- traefik.tcp.routers.pgsql.service=pgsql
- traefik.tcp.routers.pgsql.entrypoints=postgres
- traefik.tcp.services.pgsql.loadbalancer.server.port=5432
mongo:
image: mongo:3
labels:
- traefik.enable=true
- traefik.tcp.routers.mongo.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.mongo.tls=true
- traefik.tcp.routers.mongo.service=mongo
- traefik.tcp.routers.mongo.entrypoints=mongo
- traefik.tcp.services.mongo.loadbalancer.server.port=27017
redis:
image: redis:3
labels:
- traefik.enable=true
- traefik.tcp.routers.redis.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.redis.tls=true
- traefik.tcp.routers.redis.service=redis
- traefik.tcp.routers.redis.entrypoints=redis
- traefik.tcp.services.redis.loadbalancer.server.port=6379
config.toml
[tls]
[[tls.certificates]]
certFile = "/etc/certs/example.localhost.pem"
keyFile = "/etc/certs/example.localhost-key.pem"
Build & Run
mkcert example.localhost # in ./certs/
docker-compose up -d
Prepare step by step
Install mkcert (run also mkcert -install for CA)
Clone my code
In certs folder run mkcert example.localhost
Start container by docker-compose up -d
Open page https://example.localhost/ and check if it is secure connection
If address http://example.localhost/ is not reachable, add 127.0.0.1 example.localhost to /etc/hosts
Certs:
Public: ./certs/example.localhost.pem
Private: ./certs/example.localhost-key.pem
CA: ~/.local/share/mkcert/rootCA.pem
Test MongoDB
Install Robo 3T
Create new connection:
Address: example.localhost
Use SSL protocol
CA Certificate: rootCA.pem (or Self-signed Certificate)
Test tool:
Test Redis
Install RedisDesktopManager
Create new connection:
Address: example.localhost
SSL
Public Key: example.localhost.pem
Private Key: example.localhost-key.pem
Authority: rootCA.pem
Test tool:
So far:
Can connect to Postgres via IP (info from Traefik)
jdbc:postgresql://172.21.0.4:5432/postgres?sslmode=disable
jdbc:postgresql://172.21.0.4:5432/postgres?sslfactory=org.postgresql.ssl.NonValidatingFactory
Try telnet (the IP changes on every docker restart):
> telnet 172.27.0.5 5432
Trying 172.27.0.5...
Connected to 172.27.0.5.
Escape character is '^]'.
^]
Connection closed by foreign host.
> telnet example.localhost 5432
Trying ::1...
Connected to example.localhost.
Escape character is '^]'.
^]
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad RequestConnection closed by foreign host.
If I connect directly to postgres, the data looks fine. If I connect via Traefik, I get a Bad Request when closing the connection. I have no idea what this means or whether it has to mean anything.
At least for the PostgreSQL issue, it seems that the connection is started in cleartext and then upgraded to TLS:
Docs
Mailing list discussion
Issue on another proxy project
So it is basically impossible to use TLS termination with a proxy if said proxy doesn't support this cleartext handshake + upgrade to TLS function of the protocol.
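One way to observe this behaviour is openssl's STARTTLS support for the postgres protocol (the same kind of probe as the openssl s_client call in the postgres question further up); the client first sends a cleartext SSLRequest and only then upgrades to TLS:
# cleartext request first, then the TLS handshake
openssl s_client -starttls postgres -connect example.localhost:5432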
Update to jose-liber's answer:
SNI routing for postgres with STARTTLS has been added to Traefik in this PR. Now Traefik will listen to the initial bytes sent by postgres, and if it is going to initiate a TLS handshake (note that postgres TLS requests are created as non-TLS first and then upgraded to TLS requests), Traefik will handle the handshake and is then able to receive the TLS headers from postgres, which contain the SNI information it needs to route the request properly. This means that you can use HostSNI("example.com") along with tls to expose postgres databases under different subdomains.
As of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)
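A minimal sketch of what that looks like with the compose file from the question (only the pieces that change; the labels are copied from the question, the image tag comes from the reference above):
traefik:
  image: traefik:v3.0.0-beta2   # v3 beta with postgres STARTTLS SNI routing
pgsql:
  labels:
    - traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
    - traefik.tcp.routers.pgsql.tls=true
    - traefik.tcp.routers.pgsql.entrypoints=postgres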