I want to use both an internal network and an external Traefik network with my containers.
Problem: when I define an internal network, Traefik loses communication with my service.
 traefik_webgateway           internal network
+----------+          +--------------------------+
| traefik  |          |       +------+  +-----+  |
|   +------+----------+---+   | app  |  | api |  |
|   |      |          |   |   +------+  +-----+  |
|   |      |  proxy   |   |                      |
|   |      |          |   |   +-----+  +------+  |
|   |      |          |   |   |auth |  |worker|  |
|   +------+----------+---+   +-----+  +------+  |
|          |          |                          |
+----------+          +--------------------------+
docker-compose.traefik.yml:

services:
  traefik:
    image: traefik:v2.4
    restart: unless-stopped
    ports:
      - 80:80
      - 8080:8080
      - 443:443
    networks:
      - webgateway

networks:
  webgateway:
    driver: bridge
docker-compose.yml:

services:
  proxy:
    networks:
      - internal # <=== this causes traefik to point the healthcheck at the 172 IP instead of the 192 IP (see Edit below)
      - traefik
    labels:
      - traefik.http....
  app:
    networks:
      - internal
  api:
    networks:
      - internal
  auth:
    networks:
      - internal
  worker:
    networks:
      - internal

networks:
  internal:
  traefik:
    external:
      name: traefik_webgateway
I don't want my services to use the external Traefik network, because I want them namespaced for my blue/green deployment.
I was wondering why this is happening and whether there's a solution.
Thanks in advance!
EDIT:
I looked up the proxy container's IP addresses:
docker inspect -f '{{range.NetworkSettings.Networks}} {{.IPAddress}}{{end}}' <container id>
and got two IPs:
172.18.A.A 192.168.B.B
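To see which IP belongs to which network, a variant of the same command that also prints each network's name may help (a sketch; docker inspect accepts Go template syntax, and the network names depend on your Compose project name):

docker inspect -f '{{range $name, $net := .NetworkSettings.Networks}}{{$name}}: {{$net.IPAddress}} {{end}}' <container id>
# e.g. "myproject_internal: 172.18.A.A traefik_webgateway: 192.168.B.B"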
I have a healthcheck that pings /health. On the traefik dashboard:
http://172.18.A.A:8000
Traefik is unable to communicate with this IP. Sometimes it's able to when it picks the other IP: 192.168.B.B.
From within the proxy container, I'm able to ping proxy (it resolves to the "B" IP):
PING proxy (192.168.B.B): 56 data bytes
64 bytes from 192.168.16.4: seq=0 ttl=64 time=0.090 ms
64 bytes from 192.168.16.4: seq=1 ttl=64 time=0.080 ms
64 bytes from 192.168.16.4: seq=2 ttl=64 time=0.065 ms
64 bytes from 192.168.16.4: seq=3 ttl=64 time=0.069 ms
I am able to ping: ping 192.168.B.B
I am unable to ping: ping 172.18.A.A
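This looks like Traefik picking an arbitrary container IP when a service sits on several networks. A commonly suggested fix (a sketch, assuming Traefik v2's Docker provider) is to pin the network Traefik should use via the traefik.docker.network label:

proxy:
  networks:
    - internal
    - traefik
  labels:
    - traefik.docker.network=traefik_webgateway  # use the IP from this network for routing and healthchecks
    - traefik.http....

With that label, Traefik should address the proxy via its 192.168.B.B address on traefik_webgateway rather than the 172.18.A.A address on the internal network, which it cannot reach.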
I have a Postgres deployment whose configuration looks like this:

apiVersion: postgres-operator.crunchydata.com/v1beta1
kind: PostgresCluster
metadata:
  name: hippo
spec:
  image: registry.developers.crunchydata.com/crunchydata/crunchy-postgres:centos8-13.5-0
  postgresVersion: 13
  users:
    - name: hippo
      databases: ["hippo"]
      options: "CREATEDB"
  instances:
    - name: instance1
      dataVolumeClaimSpec:
        accessModes:
          - "ReadWriteOnce"
        resources:
          requests:
            storage: 1Gi
  backups:
    pgbackrest:
      image: registry.developers.crunchydata.com/crunchydata/crunchy-pgbackrest:centos8-2.36-0
      repos:
        - name: repo1
          volume:
            volumeClaimSpec:
              accessModes:
                - "ReadWriteOnce"
              resources:
                requests:
                  storage: 1Gi
And I forward the local port 5432 to it, like so
DB_PORT=5432
PG_CLUSTER_PRIMARY_POD=$(microk8s kubectl get pod -o name \
-l postgres-operator.crunchydata.com/cluster=hippo,postgres-operator.crunchydata.com/role=master)
microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" ${DB_PORT}:${DB_PORT}
And I can then connect via psql. I can list the databases and connect to the hippo database.
rob@rob-Vostro-5402:~$ psql postgresql://hippo:Zw%5EAQuPf%3D%3Bi%3B%3F2%3E1RRbLTLrT@localhost:5432/hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
Type "help" for help.
hippo=> \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges
-----------+----------+----------+-------------+-------------+-----------------------
 hippo     | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =Tc/postgres         +
           |          |          |             |             | postgres=CTc/postgres+
           |          |          |             |             | hippo=CTc/postgres
 postgres  | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 |
 template0 | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf-8 | en_US.utf-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(4 rows)
hippo=> \c hippo
psql (13.7 (Ubuntu 13.7-1.pgdg20.04+1), server 13.5)
SSL connection (protocol: TLSv1.3, cipher: TLS_AES_256_GCM_SHA384, bits: 256, compression: off)
You are now connected to database "hippo" as user "hippo".
However, when I run \dt, I get disconnected.
hippo=> \dt
SSL SYSCALL error: EOF detected
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Failed.
!?>
And the terminal in which I was running the port-forwarding now shows an error.
Forwarding from 127.0.0.1:5432 -> 5432
Forwarding from [::1]:5432 -> 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
Handling connection for 5432
E0625 15:59:16.859963 72918 portforward.go:406] an error occurred forwarding 5432 -> 5432: error forwarding port 5432 to pod 8f58bd2f87d0ef63b969725920c98793f0dd1f41a25dc04bfe1b06a0ad7b58fc, uid : failed to execute portforward in network namespace "/var/run/netns/cni-0f76b252-b44c-017f-e337-b0285117cc4e": read tcp4 127.0.0.1:46248->127.0.0.1:5432: read: connection reset by peer
E0625 15:59:16.860854 72918 portforward.go:234] lost connection to pod
Any help would be much appreciated. Thanks
I'm used to the same brittle behavior when port-forwarding to Postgres, and I resorted to a simple reconnect loop as a workable solution:
while true; do kubectl port-forward "$path" -n "$namespace" "$ports"; done
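Applied to the setup above, that might look like this (a sketch reusing the PG_CLUSTER_PRIMARY_POD variable from the question):

while true; do
  microk8s kubectl port-forward "${PG_CLUSTER_PRIMARY_POD}" 5432:5432
  sleep 1  # brief pause so we don't busy-loop while the pod restarts
done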
I have a Golang server running alongside a Postgres instance inside Docker Compose. For some reason Postgres is refusing the connection. From all of my previous searches, the usual culprits are a typo, not exposing the port, SSL settings, and so on, but I don't have anything like that going on and I'm still having this issue.
version: "3.2"
services:
ingress:
image: jwilder/nginx-proxy
ports:
- "3000:80"
volumes:
- /var/run/docker.sock:/tmp/docker.sock:ro
auth-service:
depends_on:
- rabbitmq
- auth-db
- ingress
build: ./auth
container_name: "auth-service"
ports:
- 3001:3000
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_HOST=auth-db
- POSTGRES_DB=auth-dev
- POSTGRES_PORT=5435
- PORT=3000
- RABBITMQ_USER=guest
- RABBITMQ_PASSWORD=guest
- RABBITMQ_HOST=rabbitmq
- RABBITMQ_PORT=5672
- VIRTUAL_HOST=api.twitchy.dev
- VIRTUAL_PATH=/v1/auth/
deploy:
restart_policy:
condition: on-failure
delay: 5s
max_attempts: 3
window: 120s
# networks:
# - rabbitmq_net
# - default
rabbitmq:
image: rabbitmq:3-management-alpine
container_name: "rabbitmq"
ports:
- 5672:5672
- 15672:15672
volumes:
- rabbitmq_data:/var/lib/rabbitmq/
- rabbitmq_log:/var/log/rabbitmq/
# networks:
# - rabbitmq_net
auth-db:
image: postgres:14.1-alpine
restart: always
container_name: "auth-db"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=auth-dev
ports:
- "5435:5432"
volumes:
- db:/var/lib/postgresql/data
chat-db:
image: postgres:14.1-alpine
restart: always
container_name: "chat-db"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=chat-dev
ports:
- "5434:5432"
volumes:
- db:/var/lib/postgresql/data
# networks:
# rabbitmq_net:
# driver: bridge
volumes:
db:
driver: local
rabbitmq_data:
rabbitmq_log:
This is the error I am getting
auth-service | Retrying connection to database...
auth-service | failed to connect to `host=auth-db user=postgres database=auth-dev`: dial error (dial tcp 172.23.0.3:5435: connect: connection refused)
And my Golang code used to connect to the DB (using pgx):

dbUrl := fmt.Sprintf("postgres://%s:%s@%s:%s/%s?sslmode=disable",
    os.Getenv("POSTGRES_USER"),
    os.Getenv("POSTGRES_PASSWORD"),
    os.Getenv("POSTGRES_HOST"),
    os.Getenv("POSTGRES_PORT"),
    os.Getenv("POSTGRES_DB"))
This is why I am confused:
- The ports match up: I expose 5435 from Postgres, and I connect to 5435.
- The host should be correct: I reference the auth-db service name, and the services are on the same default network, so that should be fine.
- The password and username match up.
- POSTGRES_DB also matches up; the default database should be auth-dev. From the postgres image docs on POSTGRES_DB: "This optional environment variable can be used to define a different name for the default database that is created when the image is first started. If it is not specified, then the value of POSTGRES_USER will be used."
- I have sslmode=disable as well.
Is there anything else that can cause the connection to be refused?
I tried changing the database to template1 and postgres, since they are created by default, but neither works either.
54511e50369c postgres:14.1-alpine "docker-entrypoint.s…" 16 minutes ago Up 16 seconds 0.0.0.0:5435->5432/tcp, :::5435->5432/tcp auth-db
docker exec -it 54511e50369c psql -U postgres
psql (14.1)
Type "help" for help.
postgres=# \l
                              List of databases
   Name    |  Owner   | Encoding |  Collate   |   Ctype    |   Access privileges
-----------+----------+----------+------------+------------+-----------------------
 postgres  | postgres | UTF8     | en_US.utf8 | en_US.utf8 |
 template0 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.utf8 | en_US.utf8 | =c/postgres          +
           |          |          |            |            | postgres=CTc/postgres
(3 rows)
The database is ready when I try to connect (I retry 20 times and restart the service if it crashes, so it should be available).
When you map ports in docker-compose, say like "5435:5432", you are mapping port 5435 on the HOST machine to 5432 on the CONTAINER. However, your db url in the auth-service definition is using the name of the service, auth-db, so you are actually hitting the db container directly, not going through the host machine. Because the db container does not expose 5435, you are unable to connect using port 5435.
If you were to try to connect to the database from your host machine for example, you would probably be successful using port 5435 and localhost.
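In other words, a likely fix (a sketch showing only the relevant variables) is to keep the service-name host but switch to the container port:

auth-service:
  environment:
    - POSTGRES_HOST=auth-db
    - POSTGRES_PORT=5432  # the container's port; 5435 is only the host-side mapping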
How do I run Percona PMM2 with docker-compose?
I can run PMM1 just fine. But PMM2 has absolutely zero documentation available and I can't seem to figure it out.
Sample docker-compose.yml file:

db:
  image: mariadb:10.4.13
  ports:
    - ${DB_PORT}:3306
  volumes:
    - db_data:/var/lib/mysql
  tmpfs:
    - /tmp/mysql-tmp
  restart: always
  environment:
    MYSQL_ROOT_PASSWORD: "${DB_ROOT_PASS}"
    MYSQL_USER: "${DB_USER}"
    MYSQL_PASSWORD: "${DB_PASS}"
pmm-server:
  image: percona/pmm-server:2.7
  ports:
    - 8100:80
  environment:
    SERVER_USER: "${PMM_USER}"
    SERVER_PASSWORD: "${PMM_PASS}"
  restart: always
  volumes:
    - pmm_data:/srv
pmm-client:
  image: perconalab/pmm-client:2.7
  environment:
    PMM_AGENT_SERVER_ADDRESS: pmm-server:443
    PMM_AGENT_SERVER_USERNAME: "${PMM_USER}"
    PMM_AGENT_SERVER_PASSWORD: "${PMM_PASS}"
    PMM_AGENT_SERVER_INSECURE_TLS: 1
    DB_TYPE: mysql
    DB_HOST: "${DB_HOST}"
    DB_PORT: 3306
    DB_USER: root
    DB_PASSWORD: "${DB_ROOT_PASS}"
  restart: always
  depends_on:
    - db
volumes:
  db_data:
  pmm_data:
The DB_* environment variables are from the PMM1 config; I have no idea how to set the DB configuration for PMM2's Docker image. But pmm-client seems to be failing before that: with the above config I get the following log for pmm-client, and I can't understand why I'm getting the error "Agent ID is not provided, halting".
pmm-client | Starting pmm-agent ...
pmm-client | INFO[2020-06-09T20:28:20.963+00:00] Using /usr/local/percona/pmm2/exporters/node_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.963+00:00] Using /usr/local/percona/pmm2/exporters/mysqld_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.963+00:00] Using /usr/local/percona/pmm2/exporters/mongodb_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.963+00:00] Using /usr/local/percona/pmm2/exporters/postgres_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.963+00:00] Using /usr/local/percona/pmm2/exporters/proxysql_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.964+00:00] Using /usr/local/percona/pmm2/exporters/rds_exporter component=main
pmm-client | INFO[2020-06-09T20:28:20.964+00:00] Starting... component=client
pmm-client | INFO[2020-06-09T20:28:20.964+00:00] Starting local API server on http://127.0.0.1:7777/ ... component=local-server/JSON
pmm-client | ERRO[2020-06-09T20:28:20.964+00:00] Agent ID is not provided, halting. component=client
pmm-client | INFO[2020-06-09T20:28:20.966+00:00] Started. component=local-server/JSON
Can anyone point me in the right direction with this?
I can't understand why there isn't any documentation available for it either.
They are releasing this image publicly but no info on how to use it.
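For what it's worth, a sketch of a pmm-client service that registers the agent at startup, assuming the image honors pmm-agent's PMM_AGENT_SETUP and PMM_AGENT_CONFIG_FILE environment variables (check the docs for your exact tag; the "Agent ID is not provided" error suggests the agent starts without ever registering against the server):

pmm-client:
  image: perconalab/pmm-client:2.7
  environment:
    PMM_AGENT_SETUP: 1                            # assumption: register this agent on startup
    PMM_AGENT_CONFIG_FILE: config/pmm-agent.yaml  # assumption: persist the generated agent ID here
    PMM_AGENT_SERVER_ADDRESS: pmm-server:443
    PMM_AGENT_SERVER_USERNAME: "${PMM_USER}"
    PMM_AGENT_SERVER_PASSWORD: "${PMM_PASS}"
    PMM_AGENT_SERVER_INSECURE_TLS: 1
  restart: always
  depends_on:
    - pmm-server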
We have been using single container Docker images for some time without issues on RHEL8. We need to move toward integrating multiple services using docker-compose but have not been successful in even simple attempts.
We are using Mongo (mongo:4.2.3-bionic) and NodeJS (node:alpine).
We created a simple node application which is trying to add a single document to a MongoDB collection. The code for dbwrite.js is:
var MongoClient = require('mongodb').MongoClient;

MongoClient.connect("mongodb://mongo:27017/", function(err, mongodb) {
  if (err) throw err;
  var mongodbo = mongodb.db("test");
  var doc = { "payload": "test doc" };
  mongodbo.collection("test2").insertOne(doc, function(err, res) {
    if (err) throw err;
  });
  mongodb.close();
});
The Dockerfile for dbwrite.js is:
FROM node:alpine
ADD . /
CMD ["node", "dbwrite.js"]
The Mongo container was pulled from Docker Hub, as was the Node container.
The docker-compose.yaml file:

version: '3.1'
services:
  mongo:
    image: mongo:4.2.3-bionic
    container_name: mongo
    restart: always
    ports:
      - 27017:27017
    volumes:
      - ./mongo_db:/data/db
  app:
    image: dbwrite:v0.1
    container_name: dbwrite
If we perform "docker-compose up", the dbwrite container throws an error:
dbwrite | /node_modules/mongodb/lib/topologies/server.js:233
dbwrite | throw err;
dbwrite | ^
dbwrite |
dbwrite | MongoNetworkError: failed to connect to server [mongo:27017] on first connect [Error: connect EHOSTUNREACH 172.22.0.2:27017
dbwrite | at TCPConnectWrap.afterConnect [as oncomplete] (net.js:1137:16) {
dbwrite | name: 'MongoNetworkError',
dbwrite | [Symbol(mongoErrorContextSymbol)]: {}
dbwrite | }]
dbwrite | at Pool.<anonymous> (/node_modules/mongodb/lib/core/topologies/server.js:438:11)
dbwrite | at Pool.emit (events.js:321:20)
dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:561:14
dbwrite | at /node_modules/mongodb/lib/core/connection/pool.js:994:11
dbwrite | at /node_modules/mongodb/lib/core/connection/connect.js:31:7
dbwrite | at callback (/node_modules/mongodb/lib/core/connection/connect.js:264:5)
dbwrite | at Socket.<anonymous> (/node_modules/mongodb/lib/core/connection/connect.js:294:7)
dbwrite | at Object.onceWrapper (events.js:428:26)
dbwrite | at Socket.emit (events.js:321:20)
dbwrite | at emitErrorNT (internal/streams/destroy.js:84:8) {
dbwrite | name: 'MongoNetworkError',
dbwrite | [Symbol(mongoErrorContextSymbol)]: {}
dbwrite | }
dbwrite exited with code 1
Rebuilding the container (doing it the hard way -- I know -- but wanting to keep everything as identical as possible) and replacing the Dockerfile CMD line
CMD ["node", "dbwrite.js"]
with
CMD ["ping", "-c", "20", "mongo"]
yields normal ping responses from "mongo", so I believe the default network was created correctly and DNS is working as expected, yet my Node application gets EHOSTUNREACH.
dbwrite | 64 bytes from 172.22.0.2: seq=15 ttl=64 time=0.072 ms
dbwrite | 64 bytes from 172.22.0.2: seq=16 ttl=64 time=0.080 ms
dbwrite | 64 bytes from 172.22.0.2: seq=17 ttl=64 time=0.067 ms
dbwrite | 64 bytes from 172.22.0.2: seq=18 ttl=64 time=0.121 ms
dbwrite | 64 bytes from 172.22.0.2: seq=19 ttl=64 time=0.097 ms
dbwrite |
dbwrite | --- mongo ping statistics ---
dbwrite | 20 packets transmitted, 20 packets received, 0% packet loss
dbwrite | round-trip min/avg/max = 0.065/0.086/0.121 ms
dbwrite exited with code 0
If we edit the dbwrite.js code, replace "mongo" in the connect() method with "localhost", and execute "node dbwrite.js" from the host (outside a container), the document is written to the collection. The Mongo container log reports that it is listening on 0.0.0.0.
mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] Listening on 0.0.0.0
mongo | 2020-02-10T19:35:26.337+0000 I NETWORK [listener] waiting for connections on port 27017
While I don't have the output captured, previous executions of "docker network inspect" showed both containers and their assigned IPv4 addresses on 172.22.0.x/16. IPAM showed using the default driver "bridge" on subnet 172.22.0.0/16 and a gateway of 172.22.0.1.
Any suggestions about what could be wrong would be greatly appreciated. We are on the verge of downgrading off RHEL8 to see whether that is related to our problem, given that Red Hat so vocally claims NOT to support Docker. It seems like a network security issue, since ICMP ping can traverse the bridge but a TCP socket connection cannot.
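Given that symptom, one thing worth checking on RHEL8 is whether firewalld/nftables is dropping inter-container TCP on the Compose bridge (a sketch; br-XXXXXXXXXXXX is a placeholder for the bridge interface of your Compose network, visible via ip link or docker network inspect):

# see which zones are active and which interfaces they own
sudo firewall-cmd --get-active-zones

# commonly suggested workaround: trust the bridge interface of the Compose network
sudo firewall-cmd --permanent --zone=trusted --add-interface=br-XXXXXXXXXXXX
sudo firewall-cmd --reload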
How do I solve HAProxy not working on an Ubuntu server? Did I miss something? I need a guide here.
Below, Docker Compose runs without a problem on my local MacBook:
lstm-haproxy | listen stats
lstm-haproxy |   bind :1936
lstm-haproxy |   mode http
lstm-haproxy |   stats enable
lstm-haproxy |   timeout connect 10s
lstm-haproxy |   timeout client 1m
lstm-haproxy |   timeout server 1m
lstm-haproxy |   stats hide-version
lstm-haproxy |   stats realm Haproxy\ Statistics
lstm-haproxy |   stats uri /
lstm-haproxy |   stats auth stats:stats
lstm-haproxy | frontend default_port_80
lstm-haproxy |   bind :80
lstm-haproxy |   reqadd X-Forwarded-Proto:\ http
lstm-haproxy |   maxconn 4096
lstm-haproxy |   default_backend default_service
lstm-haproxy | backend default_service
lstm-haproxy |   server lstm_lstm_1 lstm_lstm_1:8008 check inter 2000 rise 2 fall 3
lstm-haproxy |   server lstm_lstm_2 lstm_lstm_2:8008 check inter 2000 rise 2 fall 3
lstm-haproxy | INFO:haproxy:Config check passed
lstm-haproxy | INFO:haproxy:Reloading HAProxy
lstm-haproxy | INFO:haproxy:Restarting HAProxy gracefully
lstm-haproxy | INFO:haproxy:HAProxy is reloading (new PID: 11)
lstm-haproxy | INFO:haproxy:===========END===========
But when I push to my staging server (Ubuntu Server 18.04):
lstm-haproxy | listen stats
lstm-haproxy |   bind :1936
lstm-haproxy |   mode http
lstm-haproxy |   stats enable
lstm-haproxy |   timeout connect 10s
lstm-haproxy |   timeout client 1m
lstm-haproxy |   timeout server 1m
lstm-haproxy |   stats hide-version
lstm-haproxy |   stats realm Haproxy\ Statistics
lstm-haproxy |   stats uri /
lstm-haproxy |   stats auth stats:stats
lstm-haproxy | INFO:haproxy:Launching HAProxy
lstm-haproxy | INFO:haproxy:HAProxy has been launched(PID: 9)
My Docker and docker-compose versions on the MacBook:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.2, build 1110ad01
My Docker and docker-compose versions on the Ubuntu server:
Docker version 18.09.1, build 4c52b90
docker-compose version 1.23.1, build b02f1306
This is my docker-compose.yml:

version: '3'
services:
  lstm:
    restart: always
    build:
      context: .
    environment:
      MAX_REQUEST: 100
      NUM_WORKER: 5
      BIND_ADDR: 0.0.0.0:8008
    command: bash monkey-sync.sh
  lstm-haproxy:
    image: dockercloud/haproxy
    links:
      - lstm
    ports:
      - '8008:80'
    container_name: lstm-haproxy
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
This is my Dockerfile:
FROM python:3.6.1 AS base
RUN pip3 install blablabla
WORKDIR /app
COPY . /app
RUN echo
ENV LC_ALL C.UTF-8
ENV LANG C.UTF-8
EXPOSE 8008
Any guidance would really help me a lot, thanks!
Solved! I needed to set SERVICE_PORTS: 8008 in the lstm service's environment.
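For reference, the fix applied to the compose file above (SERVICE_PORTS is how dockercloud/haproxy discovers which container port to load-balance; everything else stays the same):

lstm:
  restart: always
  build:
    context: .
  environment:
    MAX_REQUEST: 100
    NUM_WORKER: 5
    BIND_ADDR: 0.0.0.0:8008
    SERVICE_PORTS: 8008  # tell dockercloud/haproxy to route to port 8008 on this service
  command: bash monkey-sync.sh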