I have an issue enabling SSL support on the postgres Docker image. The following configuration is used in docker-compose.yml:
version: '3.5'
services:
  postgresserver:
    image: postgres:14.5
    container_name: postgresserver
    ports:
      - "5432:5432"
    environment:
      POSTGRES_PASSWORD: my_password
      PGPORT: 5432
    command: -c ssl=on -c ssl_cert_file=/var/lib/postgresql/server.crt -c ssl_key_file=/var/lib/postgresql/server.key -c ssl_ca_file=/var/lib/postgresql/CA.pem -c clientcert=verify-ca
    volumes:
      - "./certs/myCA.pem:/var/lib/postgresql/CA.pem"
      - "./certs/postgresserver.internal.crt:/var/lib/postgresql/server.crt"
      - "./certs/postgresserver.internal.key:/var/lib/postgresql/server.key"
    networks:
      default:
        aliases:
          - postgresserver.internal
  openssl:
    image: shamelesscookie/openssl:1.1.1
    container_name: openssl
    stdin_open: true # docker run -i
    tty: true
networks:
  default:
    name: dummy network
    driver: bridge
    ipam:
      config:
        - subnet: 172.177.0.0/16
The files server.crt and server.key contain the server certificate and the private key, signed by my own CA, whose certificate is in CA.pem.
According to the official postgres Docker image documentation
https://github.com/docker-library/docs/blob/master/postgres/README.md
it should work (section "Database Configuration": "From the PostgreSQL docs we see that any option available in a .conf file can be set via -c"; see also https://www.postgresql.org/docs/14/app-postgres.html#id-1.9.5.14.6.3 for further details). I have tried to connect using the pre-installed psql client from Windows PowerShell on the host as follows:
& 'C:\Program Files\PostgreSQL\14\bin\psql.exe' "sslmode=require host=localhost port=5432 dbname=test"
This call produced the following output:
psql: error: connection to server at "localhost" (::1), port 5432 failed: server does not support SSL, but SSL was required
The call without "sslmode=require" switch works like a charm.
I have also tried to use openssl from the openssl container as follows:
openssl s_client -starttls postgres -connect postgresserver:5432
This call has produced the following output:
CONNECTED(00000003)
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 1 bytes and written 8 bytes
Verification: OK
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
No ALPN negotiated
Early data was not sent
Verify return code: 0 (ok)
---
What might be wrong here?
Alternatively, a working configuration would be highly appreciated. Thanks!
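The first thing I'd check (a guess, not something the output above confirms) is docker logs postgresserver: PostgreSQL starts without SSL, or not at all, if it rejects the SSL configuration. Two common culprits with this exact setup are (a) clientcert, which is a pg_hba.conf option rather than a server parameter, so -c clientcert=verify-ca may be refused, and (b) key file permissions: the server refuses an ssl_key_file that is group- or world-accessible, and bind-mounted files often arrive as 0644 or 0755. A small sketch of that permission check (the key_perms_ok helper is my own, not a postgres tool):

```shell
# Succeeds only if no group/other permission bits are set (i.e. at most 0600),
# mirroring PostgreSQL's "private key file has group or world access" refusal.
key_perms_ok() {
  [ -z "$(find "$1" -maxdepth 0 -perm /077 2>/dev/null)" ]
}

key_perms_ok ./certs/postgresserver.internal.key \
  || echo "server.key is group/world accessible; chmod 600 it (and make it readable by the postgres user)"
```

If the key is too open, a chmod 600 on the host (or copying the key into place via an entrypoint script) is the usual fix.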
Related
Hi, I have haproxy in docker-compose as below:
haproxy:
  image: haproxy:2.3
  depends_on:
    - my-service
  volumes:
    - ./config/haproxy.cfg:/usr/local/etc/haproxy/haproxy.cfg:ro
    - ./ssl:/usr/local/etc/ssl:ro
  ports:
    - 80:80
haproxy.cfg
frontend https
    bind *:443 ssl crt /usr/local/etc/ssl/cert1.pem
but when I run docker-compose up -d I always get:
unable to stat SSL certificate from file '/etc/ssl/cert1.pem' : No such file or directory.
I do not understand how to pass the certificate to haproxy, or what I am missing here. Can someone help with this?
I have an ssl directory locally, from which I mount the certificates to /usr/local/etc/ssl in the container.
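Two quick diagnostics (assuming the compose service is named haproxy as above): check that the mount actually landed inside the container, and check which crt path the effective config references. The error says /etc/ssl/cert1.pem while the config shown binds /usr/local/etc/ssl/cert1.pem, which suggests a different haproxy.cfg is being picked up; note also that the frontend binds :443 but only port 80 is published.

```shell
# Is the certificate mount visible inside the running container?
docker-compose exec haproxy ls -l /usr/local/etc/ssl/

# Which crt path does the configuration the container actually loaded reference?
docker-compose exec haproxy grep -n 'crt' /usr/local/etc/haproxy/haproxy.cfg
```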
I am trying to run a bamboo server using a docker container and connect it to a postgres db that is running in another container. First I run the postgres db and create an empty database named bamboo, with a user postgres and password postgres.
Then I run these commands from https://hub.docker.com/r/atlassian/bamboo to run the bamboo server:
$> docker volume create --name bambooVolume
$> docker run -v bambooVolume:/var/atlassian/application-data/bamboo --name="bamboo" -d -p 8085:8085 -p 54663:54663 atlassian/bamboo
Then I open localhost:8085, generate a license, and reach the point where I see this error:
Error accessing database: org.postgresql.util.PSQLException: Connection to localhost:5432 refused. Check that the hostname and port are correct and that the postmaster is accepting TCP/IP connections.
What is the problem?
SOLUTION:
Worked with this docker-compose YAML:
version: '2'
services:
  bamboo:
    image: atlassian/bamboo
    container_name: bamboo
    ports:
      - '54663:5436'
      - '8085:8085'
    networks:
      - bamboonet
    volumes:
      - bamboo-data:/var/atlassian/application-data/bamboo
    hostname: bamboo
    environment:
      CATALINA_OPTS: -Xms256m -Xmx1g
      BAMBOO_PROXY_NAME:
      BAMBOO_PROXY_PORT:
      BAMBOO_PROXY_SCHEME:
      BAMBOO_DELAYED_START:
    labels:
      com.blacklabelops.description: "Atlassian Bamboo"
      com.blacklabelops.service: "bamboo"
  db-bamboo:
    image: postgres
    container_name: postgres
    hostname: postgres
    networks:
      - bamboonet
    volumes:
      - bamboo-data-db:/var/lib/postgresql/data
    ports:
      - '5432:5432'
    environment:
      POSTGRES_PASSWORD: password
      POSTGRES_USER: bamboo
      POSTGRES_DB: bamboo
      POSTGRES_ENCODING: UTF8
      POSTGRES_COLLATE: C
      POSTGRES_COLLATE_TYPE: C
      PGDATA: /var/lib/postgresql/data/pgdata
    labels:
      com.blacklabelops.description: "PostgreSQL Database Server"
      com.blacklabelops.service: "postgresql"
volumes:
  bamboo-data:
    external: false
  bamboo-data-db:
    external: false
networks:
  bamboonet:
    driver: bridge
If you don't set a network for your containers, the default bridge mode is used.
I think the problem is that you should use {containerName}:5432 instead of localhost:5432 in your JDBC connection string, because inside a container localhost refers to the container itself rather than the host machine, so you can't reach the DB that way.
jdbc:postgresql://bamboo-pg-db-container:5432/bamboo
I have a docker-compose file that initializes postgres and a service for postgres migration. And I want to run tests in a GitLab pipeline against my docker-compose backed postgres service, but I can't connect to pg_db via localhost. Inside my code I use the pgx package. On my local machine there is no trouble using localhost for the PGHOST env variable.
So my main question is what host to put in PGHOST variable for my tests to use for postgres connection inside gitlab pipeline.
docker-compose.yml
version: "3.3"
services:
pg_db:
container_name: pg_db
image: postgres:13.2-alpine
environment:
- POSTGRES_PASSWORD=${POSTGRES_PASSWORD}
- POSTGRES_USER=${POSTGRES_USER}
- POSTGRES_DB=${POSTGRES_DB}
- POSTGRES_SSLMODE=${POSTGRES_SSLMODE}
- POSTGRES_HOST_AUTH_METHOD=${POSTGRES_HOST_AUTH_METHOD}
ports:
- ${POSTGRES_PORT}:5432
restart: always
deploy:
resources:
limits:
cpus: '1'
memory: 4G
networks:
- postgres
- backend
#init db
store-init:
image: x:latest
container_name: store-init
environment:
- PGHOST=pg_db
- PGUSER=${POSTGRES_USER}
- PGPASSWORD=${POSTGRES_PASSWORD}
- PGDATABASE=${POSTGRES_DB}
- PGPORT=${POSTGRES_PORT}
restart: on-failure
depends_on:
- pg_db
networks:
- postgres
- backend
networks:
backend:
postgres:
driver: bridge
And here is a significant part of my gitlab-ci.yml
services:
  - docker:dind

stages:
  - test

test:
  stage: test
  image: golang:1.17-alpine3.15
  variables:
    PGHOST: localhost
  before_script:
    - apk update && apk add make git openssh g++
    - apk add --no-cache docker-compose
    - git config --global user.email "$GITLAB_USER_EMAIL" && git config --global user.name "$GITLAB_USER_NAME"
    - mkdir -p ~/.ssh && echo "$SSH_PRIVATE_KEY" > ~/.ssh/id_rsa && chmod -R 600 ~/.ssh && ssh-keyscan -t rsa ssh.x >> ~/.ssh/known_hosts
  script:
    - cp .env.example .env
    - docker-compose up -d
    - sleep 30 # a temporary line to get the logs
    - cat /etc/hosts # debug line
    - docker-compose port pg_db 5432 # debug line
    - netstat -a # debug line
    - docker-compose ps # debug line
    - go test -v -timeout 30s ./... -tags=withDB
  only:
    - merge_request
    - dev
    - master
The logs I get for
variables:
  PGHOST: localhost
$ cp .env.example .env
$ docker-compose up -d
Recreating alp-logger_pg_db_1 ...
Recreating alp-logger_pg_db_1 ... done
store-init is up-to-date
$ sleep 30
$ cat /etc/hosts
127.0.0.1 localhost
::1 localhost ip6-localhost ip6-loopback
fe00::0 ip6-localnet
ff00::0 ip6-mcastprefix
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters
172.17.0.2 runner-lxzkchpx-project-304-concurrent-0
$ docker-compose port pg_db 5432
0.0.0.0:5432
$ netstat -a
Active Internet connections (servers and established)
Proto Recv-Q Send-Q Local Address Foreign Address State
tcp 0 0 runner-lxzkchpx-project-304-concurrent-0:50294 static.124.194.21.65.clients.your-server.de:ssh TIME_WAIT
Active UNIX domain sockets (servers and established)
Proto RefCnt Flags Type State I-Node Path
$ docker-compose ps
Name Command State Ports
---------------------------------------------------------------------------
pg_db docker-entrypoint.sh postgres Up 0.0.0.0:5432->5432/tcp
store-init ./alp-store Up
and the error of connecting to postgres db:
failed to connect to `host=localhost user=test database=test`: dial error (dial tcp [::1]:5432: connect: cannot assign requested address)
The logs for the debug commands are the same, so I'll skip them. The errors I get for
variables:
  PGHOST: pg_db
(and for any other named host, such as docker):
failed to connect to `host=pg_db user=test database=test`: hostname resolving error (lookup pg_db on 1.1.1.1:53: no such host)
The errors I get for
variables:
  PGHOST: 127.0.0.1
failed to connect to `host=127.0.0.1 user=test database=test`: dial error (dial tcp 127.0.0.1:5432: connect: connection refused)
One of the important distinctions between running containers on your local machine and running them in GitLab using docker:dind is that the containers are not available on 'localhost' -- they are available on the docker:dind container.
If you want to talk to this container, in your scenario, the postgres container would be available on docker:5432 (docker being the hostname of the docker:dind container where your postgres container has its port mapping).
Illustration with a simple HTTP service container
As a simplified example, if you were to run the container strm/helloworld-http locally with a port mapping, the following works:
docker run -d --rm -p 80:80 strm/helloworld-http
# give it some time to startup
curl http://localhost # this works
However, the same setup in GitLab does not:
myjob:
  variables: # these variables are not necessarily required
    DOCKER_TLS_CERTDIR: ""
    DOCKER_HOST: "tcp://docker:2375"
  services:
    - docker:dind
  script:
    - docker run -d --rm -p 80:80 strm/helloworld-http
    - sleep 10
    - curl http://localhost # Fails!
One fix would be to use the docker hostname instead:
script:
  - docker run -d --rm -p 80:80 strm/helloworld-http
  - sleep 10
  - curl http://docker # works!
Docker Desktop (Windows 10) running in WSL2
postgresql running in WSL2
pgadmin running in Windows 10
I can connect with pgadmin (local machine) to postgresql (local machine, WSL2) with the default settings (localhost:5432).
postgresql.conf
listen_addresses = '*'
port = 5432
When I create a docker container, it will not connect to my local postgresql.
Command used in WSL2:
docker run -d --net=host \
  -e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:password@localhost:5432/mydb \
  -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
  -e HASURA_GRAPHQL_DEV_MODE=true \
  hasura/graphql-engine:v1.3.3
error
"could not connect to server: Connection refused\n\tIs the server running on host \"localhost\" (127.0.0.1) and accepting\n\tTCP/IP connections on port 5432?\n","path":"$","error":"connection error","code":"postgres-error"}
What am I missing?
It turned out I had to use this:
docker run -d -p 8080:8080 \
  -e HASURA_GRAPHQL_DATABASE_URL=postgres://postgres:password@host.docker.internal:5432/mydb \
  -e HASURA_GRAPHQL_ENABLE_CONSOLE=true \
  -e HASURA_GRAPHQL_DEV_MODE=true \
  hasura/graphql-engine:v1.3.3
I thought "host.docker.internal" was only ment for Mac. Seems to work with Docker Desktop Windows10(WSL2) too.
Here is a working solution for me; the important part is the hostname:
version: "3.8"
services:
postgres:
restart: always
image: postgres
container_name: postgres
hostname: postgres
#depends_on:
#sql-server:
#condition: service_healthy
volumes:
- pg_data:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD: postgrespassword
networks:
- backend
sql-api:
restart: always
container_name: api
image: hasura/graphql-engine:v2.2.0
ports:
- 8055:8080
depends_on:
- "postgres"
hostname: sqlapi
environment:
## postgres database to store Hasura metadata
HASURA_GRAPHQL_METADATA_DATABASE_URL: postgres://postgres:postgrespassword#postgres:5432/postgres
## enable the console served by server
HASURA_GRAPHQL_ENABLE_CONSOLE: "true" # set to "false" to disable console
## enable debugging mode. It is recommended to disable this in production
HASURA_GRAPHQL_DEV_MODE: "true"
HASURA_GRAPHQL_ENABLED_LOG_TYPES: startup, http-log, webhook-log, websocket-log, query-log
## uncomment next line to set an admin secret
# HASURA_GRAPHQL_ADMIN_SECRET: myadminsecretkey
networks:
- backend
networks:
backend:
driver: bridge
volumes:
pg_data:
I am trying to configure Traefik so that I can access services via domain names, without having to assign different ports. For example, two MongoDB services, both on the default port, but under different domains: example.localhost and example2.localhost. Only this example works. I mean, the other cases probably work too, but I can't connect to them, and I don't understand what the problem is. This is probably not even a problem with Traefik.
I have prepared a repository with an example that works. You just need to generate your own certificate with mkcert. The page at example.localhost returns a 403 Forbidden error, but you should not worry about that: the purpose of this configuration is to show that SSL is working (padlock, green status). So don't focus on the 403.
Only the SSL connection to the mongo service works. I tested it with the Robo 3T program. After selecting the SSL connection, providing the host as example.localhost, and selecting the certificate for a self-signed (or own) connection, it works. And that is the only thing that works that way. Connections to redis (Redis Desktop Manager) and to pgsql (PhpStorm, DBeaver, DbVisualizer) do not work, regardless of whether I provide certificates or not. I do not forward SSL to the services, I only connect to Traefik. I have spent long hours on this and searched the internet, but I haven't found the answer yet. Has anyone solved this?
PS. I work on Linux Mint, so my configuration should work in this environment without any problem. I would ask for solutions for Linux.
If you do not want to browse the repository, I attach the most important files:
docker-compose.yml
version: "3.7"
services:
traefik:
image: traefik:v2.0
ports:
- 80:80
- 443:443
- 8080:8080
- 6379:6379
- 5432:5432
- 27017:27017
volumes:
- /var/run/docker.sock:/var/run/docker.sock:ro
- ./config.toml:/etc/traefik/traefik.config.toml:ro
- ./certs:/etc/certs:ro
command:
- --api.insecure
- --accesslog
- --log.level=INFO
- --entrypoints.http.address=:80
- --entrypoints.https.address=:443
- --entrypoints.traefik.address=:8080
- --entrypoints.mongo.address=:27017
- --entrypoints.postgres.address=:5432
- --entrypoints.redis.address=:6379
- --providers.file.filename=/etc/traefik/traefik.config.toml
- --providers.docker
- --providers.docker.exposedByDefault=false
- --providers.docker.useBindPortIP=false
apache:
image: php:7.2-apache
labels:
- traefik.enable=true
- traefik.http.routers.http-dev.entrypoints=http
- traefik.http.routers.http-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.entrypoints=https
- traefik.http.routers.https-dev.rule=Host(`example.localhost`)
- traefik.http.routers.https-dev.tls=true
- traefik.http.services.dev.loadbalancer.server.port=80
pgsql:
image: postgres:10
environment:
POSTGRES_DB: postgres
POSTGRES_USER: postgres
POSTGRES_PASSWORD: password
labels:
- traefik.enable=true
- traefik.tcp.routers.pgsql.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.pgsql.tls=true
- traefik.tcp.routers.pgsql.service=pgsql
- traefik.tcp.routers.pgsql.entrypoints=postgres
- traefik.tcp.services.pgsql.loadbalancer.server.port=5432
mongo:
image: mongo:3
labels:
- traefik.enable=true
- traefik.tcp.routers.mongo.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.mongo.tls=true
- traefik.tcp.routers.mongo.service=mongo
- traefik.tcp.routers.mongo.entrypoints=mongo
- traefik.tcp.services.mongo.loadbalancer.server.port=27017
redis:
image: redis:3
labels:
- traefik.enable=true
- traefik.tcp.routers.redis.rule=HostSNI(`example.localhost`)
- traefik.tcp.routers.redis.tls=true
- traefik.tcp.routers.redis.service=redis
- traefik.tcp.routers.redis.entrypoints=redis
- traefik.tcp.services.redis.loadbalancer.server.port=6379
config.toml
[tls]
[[tls.certificates]]
certFile = "/etc/certs/example.localhost.pem"
keyFile = "/etc/certs/example.localhost-key.pem"
Build & Run
mkcert example.localhost # in ./certs/
docker-compose up -d
Prepare step by step
Install mkcert (run also mkcert -install for CA)
Clone my code
In certs folder run mkcert example.localhost
Start container by docker-compose up -d
Open page https://example.localhost/ and check if it is secure connection
If address http://example.localhost/ is not reachable, add 127.0.0.1 example.localhost to /etc/hosts
Certs:
Public: ./certs/example.localhost.pem
Private: ./certs/example.localhost-key.pem
CA: ~/.local/share/mkcert/rootCA.pem
Test MongoDB
Install Robo 3T
Create new connection:
Address: example.localhost
Use SSL protocol
CA Certificate: rootCA.pem (or Self-signed Certificate)
Test Redis
Install RedisDesktopManager
Create new connection:
Address: example.localhost
SSL
Public Key: example.localhost.pem
Private Key: example.localhost-key.pem
Authority: rootCA.pem
So far:
Can connect to Postgres via IP (info from Traefik)
jdbc:postgresql://172.21.0.4:5432/postgres?sslmode=disable
jdbc:postgresql://172.21.0.4:5432/postgres?sslfactory=org.postgresql.ssl.NonValidatingFactory
Try telnet (the IP changes on every docker restart):
> telnet 172.27.0.5 5432
Trying 172.27.0.5...
Connected to 172.27.0.5.
Escape character is '^]'.
^]
Connection closed by foreign host.
> telnet example.localhost 5432
Trying ::1...
Connected to example.localhost.
Escape character is '^]'.
^]
HTTP/1.1 400 Bad Request
Content-Type: text/plain; charset=utf-8
Connection: close
400 Bad RequestConnection closed by foreign host.
If I connect directly to postgres, the data looks fine. If I connect via Traefik, I get a Bad Request when closing the connection. I have no idea what this means, or whether it has to mean anything.
At least for the PostgreSQL issue, it seems that the connection is started in cleartext and then upgraded to TLS:
Docs
Mailing list discussion
Issue on another proxy project
So it is basically impossible to use TLS termination with a proxy if that proxy doesn't support this cleartext-handshake-then-upgrade-to-TLS part of the protocol.
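This cleartext-first handshake is easy to observe by hand, and it also explains the "read 1 bytes and written 8 bytes" in the openssl output further up: the client opens a plain TCP connection and sends an 8-byte SSLRequest message (an int32 length of 8, then the magic code 80877103 = 0x04D2162F), and the server answers with a single byte, 'S' to proceed with TLS or 'N' to refuse. A quick probe of what a server or proxy actually answers (assumes nc is installed; host and port here are examples):

```shell
# Send PostgreSQL's 8-byte SSLRequest and print the server's 1-byte answer:
# 'S' = will negotiate TLS on this connection, 'N' = will not.
# An HTTP error instead means something is still treating the stream as HTTP.
printf '\x00\x00\x00\x08\x04\xd2\x16\x2f' | nc -w 3 example.localhost 5432 | head -c 1
```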
Update to @jose-liber's answer:
SNI routing for postgres with STARTTLS has been added to Traefik in this PR. Traefik now listens to the initial bytes sent by the postgres client and, if it is going to initiate a TLS handshake (note that postgres TLS sessions start as cleartext and are then upgraded), Traefik handles the handshake and can then read the TLS ClientHello, which contains the SNI information it needs to route the request properly. This means that you can use HostSNI(`example.com`) along with tls to expose postgres databases under different subdomains.
As of writing this answer, I was able to get this working with the v3.0.0-beta2 image (Reference)