ECONNREFUSED 127.0.0.1:27017 - Compass <> MongoDB replica set connection challenges - mongodb

I'm using Docker Compose to test a replica set on my local machine and am having trouble getting Compass connected to and working with the local set. Everything has deployed fine and I can see the exposed ports listening on the host machine. I just keep getting ECONNREFUSED.
Here is the Docker Compose file.
services:
  mongo1:
    hostname: mongo1
    container_name: mongo1
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30001:27017
    volumes:
      - md1:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo2:
    hostname: mongo2
    container_name: mongo2
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30002:27017
    volumes:
      - md2:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo3:
    hostname: mongo3
    container_name: mongo3
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30003:27017
    volumes:
      - md3:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongoinit:
    image: mongo
    restart: "no"
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    command: >
      sh -c "mongosh --host mongo1:27017 --eval
      '
      db = (new Mongo("localhost:27017")).getDB("test");
      config = {
        "_id" : "mongo-net",
        "members" : [
          {
            "_id" : 0,
            "host" : "localhost:27017",
            "priority": 1
          },
          {
            "_id" : 1,
            "host" : "localhost:27017",
            "priority": 2
          },
          {
            "_id" : 2,
            "host" : "localhost:27017",
            "priority": 3
          }
        ]
      };
      rs.initiate(config);
      '"
networks:
  mongo-network:
volumes:
  md1:
  md2:
  md3:
The containers deploy fine and the replica set is up and communicating. I can see the defined ports exposed and listening.
My issue is trying to use Compass to connect to the replica set: I get ECONNREFUSED.
What's odd is that I can actually see the client connecting in the primary's logs, but I get no other information about why the connection was refused/dropped.
{
"attr": {
"connectionCount": 7,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "listener",
"id": 22943,
"msg": "Connection accepted",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.032+00:00"
}
}
{
"attr": {
"client": "conn129",
"doc": {
"application": {
"name": "MongoDB Compass"
},
"driver": {
"name": "nodejs",
"version": "4.8.1"
},
"os": {
"architecture": "arm64",
"name": "darwin",
"type": "Darwin",
"version": "22.1.0"
},
"platform": "Node.js v16.5.0, LE (unified)|Node.js v16.5.0, LE (unified)"
},
"remote": "192.168.32.1:61508"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 51800,
"msg": "client metadata",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.034+00:00"
}
}
{
"attr": {
"connectionCount": 6,
"connectionId": 129,
"remote": "192.168.32.1:61508",
"uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
},
"c": "NETWORK",
"ctx": "conn129",
"id": 22944,
"msg": "Connection ended",
"s": "I",
"t": {
"$date": "2022-11-02T18:57:20.044+00:00"
}
}
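A quick way to narrow this down from the host is a direct connection to a single member, bypassing replica-set discovery; the sketch below assumes the 30001:27017 mapping from the compose file above. Because every member was registered as localhost:27017 in rs.initiate, discovery-aware clients such as Compass are likely being redirected to 127.0.0.1:27017, which is not one of the published ports, which would explain the ECONNREFUSED 127.0.0.1:27017:
mongodb://localhost:30001/?directConnection=true
mongosh "mongodb://localhost:30001/?directConnection=true" --eval "rs.status().members.map(m => m.name)"
If that connects, the mapped port is fine and the problem is the member addresses advertised by the replica set configuration rather than the port mappings themselves.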

Related

Containers on the same network not communicating with each other

I have a MongoDB container which I named e-learning,
and I have a Docker image which should connect to the MongoDB container to update my database, but it's not working. I get this error:
Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }
Here's my Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18
WORKDIR /go/src/github.com/worker
COPY go.mod go.sum main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM jrottenberg/ffmpeg:4-alpine
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
ENV LD_LIBRARY_PATH=/usr/local/lib
COPY --from=jrottenberg/ffmpeg / /
COPY app.env /root
COPY --from=0 /go/src/github.com/worker/app .
CMD ["./app"]
My docker-compose file:
version: "3.9"
services:
worker:
image: worker
environment:
- MONGO_URI="mongodb://localhost:27017/"
- MONGO_DATABASE=e-learning
- RABBITMQ_URI=amqp://user:password#rabbitmq:5672/
- RABBITMQ_QUEUE=upload
networks:
- app_network
external_links:
- e-learning
- rabbitmq
volumes:
- worker:/go/src/github.com/worker:rw
networks:
app_network:
external: true
volumes:
worker:
My docker network inspect output:
[
{
"Name": "app_network",
"Id": "f688edf02a194fd3b8a2a66076f834a23fa26cead20e163cde71ef32fc1ab598",
"Created": "2022-06-27T12:18:00.283531947+03:00",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": {},
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": false,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"2907482267e1f6e42544e5e8d852c0aac109ec523c6461e003572963e299e9b0": {
"Name": "rabbitmq",
"EndpointID": "4b46e091e4d5a79782185dce12cb2b3d79131b92d2179ea294a639fe82a1e79a",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
},
"8afd004a981715b8088af53658812215357e156ede03905fe8fdbe4170e8b13f": {
"Name": "e-learning",
"EndpointID": "1c12d592a0ef6866d92e9989f2e5bc3d143602fc1e7ad3d980efffcb87b7e076",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"ad026f7e10c9c1c41071929239363031ff72ad1b9c6765ef5c977da76f24ea31": {
"Name": "video-transformation-worker-1",
"EndpointID": "ce3547014a6856725b6e815181a2c3383d307ae7cf7132e125c58423f335b73f",
"MacAddress": "02:42:ac:14:00:04",
"IPv4Address": "172.20.0.4/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Change MONGO_URI="mongodb://localhost:27017/" to MONGO_URI="mongodb://e-learning:27017/" (working on the assumption that e-learning is the mongo container).
Within a container attached to a bridge network (the default), localhost (127.0.0.1) is the container itself. So your app container is trying to access the database at port 27017 on itself (not on the host or on the db container). The easiest solution is to use the automatic DNS resolution between containers that Docker provides.
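For illustration, a minimal sketch of that environment block, assuming e-learning really is the Mongo container (as the docker network inspect output suggests); the surrounding quotes are dropped here so they do not end up as part of the value in list-style entries:
environment:
  - MONGO_URI=mongodb://e-learning:27017/
  - MONGO_DATABASE=e-learning
  - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
  - RABBITMQ_QUEUE=upload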
I added extra_hosts and changed my Mongo URI to host.docker.internal, and it solved my problem:
version: "3.9"
services:
worker:
image: worker
environment:
- MONGO_URI="mongodb://host.docker.internal:27017/"
- MONGO_DATABASE=e-learning
- RABBITMQ_URI=amqp://user:password#rabbitmq:5672/
- RABBITMQ_QUEUE=upload
networks:
- app_network
external_links:
- e-learning
- rabbitmq
volumes:
- worker:/go/src/github.com/worker:rw
extra_hosts:
- "host.docker.internal:host-gateway"
networks:
app_network:
external: true
volumes:
worker:
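Either way, container-to-container reachability can be sanity-checked from the host before touching the application. A sketch, assuming the external network is named app_network, the database container is e-learning, and mongosh is available in the mongo:5.0 image:
docker run --rm --network app_network mongo:5.0 mongosh "mongodb://e-learning:27017" --eval "db.runCommand({ ping: 1 })"
If the ping succeeds here, the network and DNS are fine and the remaining difference is only in how the application's URI is built.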

Cannot connect with MongoDB Compass to dockerized MongoDB replica set

I have the following docker-compose.yaml file:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network
networks:
  mongo_network:
    driver: bridge
Note that the first service will use the mongo-setup.sh script:
#!/bin/bash
MONGODB_REPLICA_1=mongo_replica_1
MONGODB_REPLICA_2=mongo_replica_2
MONGODB_REPLICA_3=mongo_replica_3
echo "************ [ Waiting for startup ] **************" ${MONGODB_REPLICA_1}
until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
printf '.'
sleep 1
done
echo "************ [ Startup completed ] **************" ${MONGODB_REPLICA_1}
mongosh --host ${MONGODB_REPLICA_1}:27017 <<EOF
var cfg = {
"_id": "dbrs",
"protocolVersion": 1,
"version": 1,
"members": [
{
"_id": 1,
"host": "${MONGODB_REPLICA_1}:27017",
"priority": 3
},
{
"_id": 2,
"host": "${MONGODB_REPLICA_2}:27018",
"priority": 2
},
{
"_id": 3,
"host": "${MONGODB_REPLICA_3}:27019",
"priority": 1
}
],settings: {chainingAllowed: true}
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
When I run docker-compose up -d, it succeeds and when I run docker ps I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c8f6d916e4a mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27018->27018/tcp mongo_replica_2
c59cb625362e mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 0.0.0.0:27017->27017/tcp mongo_replica_1
cb61093e4dd0 mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27019->27019/tcp mongo_replica_3
But when I try to connect to one of the replica set members with MongoDB Compass, I get a connection error.
So the containers all seem to be running fine; why do I get this issue? Please advise.
I could solve the issue by appending the following content to my /etc/hosts file:
127.0.0.1 mongo_replica_1
127.0.0.1 mongo_replica_2
127.0.0.1 mongo_replica_3
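With those /etc/hosts entries in place, a replica-set connection string for Compass would look like the sketch below (member names and ports taken from mongo-setup.sh above):
mongodb://mongo_replica_1:27017,mongo_replica_2:27018,mongo_replica_3:27019/?replicaSet=dbrs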
Could anyone come up with a more elegant way?
I tried to bypass the /etc/hosts solution, so I edited the docker-compose.yaml file and applied extra_hosts:
version: '3.8'
services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']
  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network
  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network
networks:
  mongo_network:
    driver: bridge
Then I edited the mongo-setup.sh to:
#!/bin/bash
echo "************ [ Waiting for startup ] **************"
until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
printf '.'
sleep 1
done
echo "************ [ Startup completed ] **************"
mongosh --host localhost:27017 <<EOF
var cfg = {
"_id": "dbrs",
"protocolVersion": 1,
"version": 1,
"members": [
{
"_id": 1,
"host": "localhost:27017",
"priority": 3
},
{
"_id": 2,
"host": "localhost:27018",
"priority": 2
},
{
"_id": 3,
"host": "localhost:27019",
"priority": 1
}
],settings: {chainingAllowed: true}
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
And I removed the edits I made to the /etc/hosts file. I don't get an error like before, but now I get a timeout instead.
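Two details in the edited files may explain the timeout (hedged observations based only on what is posted): inside mongo_launcher, localhost is the launcher container itself, so mongosh --host localhost:27017 no longer reaches mongo_replica_1, and the wait loop still references ${MONGODB_REPLICA_1}, which is no longer defined. A quick check from the host of whether a single member is reachable and whether a primary was ever elected, using the ports published above:
mongosh "mongodb://localhost:27017/?directConnection=true" --eval "rs.status()"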

Unable to connect to MongoDB container from Flask container

#Docker compose file
version: "3.4" # optional since v1.27.0
services:
flaskblog:
build:
context: flaskblog
dockerfile: Dockerfile
image: flaskblog
restart: unless-stopped
environment:
APP_ENV: "prod"
APP_DEBUG: "False"
APP_PORT: 5000
MONGODB_DATABASE: flaskdb
MONGODB_USERNAME: flaskuser
MONGODB_PASSWORD:
MONGODB_HOSTNAME: mongodbuser
volumes:
- appdata:/var/www
depends_on:
- mongodb
networks:
- frontend
- backend
mongodb:
image: mongo:4.2
#container_name: mongodb
restart: unless-stopped
command: mongod --auth
environment:
MONGO_INITDB_ROOT_USERNAME: admin
MONGO_INITDB_ROOT_PASSWORD:
MONGO_INITDB_DATABASE: flaskdb
MONGODB_DATA_DIR: /data/db2
MONDODB_LOG_DIR: /dev/null
volumes:
- mongodbdata:/data/db2
#- ./mongo-init.js:/docker-entrypoint-initdb.d/mongo-init.js:ro
networks:
- backend
networks:
backend:
driver: bridge
volumes:
mongodbdata:
driver: local
appdata:
driver: local
#Flask container file
FROM python:3.8-slim-buster
#python:3.6-stretch
LABEL MAINTAINER="Shekhar Banerjee"
ENV GROUP_ID=1000 \
USER_ID=1000 \
SECRET_KEY="4" \
EMAIL_USER="dankml.com" \
EMAIL_PASS="nzw"
RUN mkdir /home/logix3
WORKDIR /home/logix3
ADD . /home/logix3/
RUN pip install -r requirements.txt
RUN pip install gunicorn
RUN groupadd --gid $GROUP_ID www
RUN useradd --create-home -u $USER_ID --shell /bin/sh --gid www www
USER www
EXPOSE 5000/tcp
#CMD ["python","run.py"]
CMD [ "gunicorn", "-w", "4", "--bind", "127.0.0.1:5000", "run:app"]
These are my docker-compose file and Dockerfile for an application. The MongoDB and Flask containers work fine individually, but when I try to run them together I get no response on localhost. There are no error messages in the logs or in the containers.
curl on the host gives this:
$ curl -i http://127.0.0.1:5000
curl: (7) Failed to connect to 127.0.0.1 port 5000: Connection refused
Can anyone suggest how I can debug this?
Only the backend network is functional:
[
{
"Name": "mlspace_backend",
"Id": "e7e37cad0058330c99c55ffea2b98281e0c6526763e34550db24431b30030b77",
"Created": "2022-05-15T22:52:25.0281001+05:30",
"Scope": "local",
"Driver": "bridge",
"EnableIPv6": false,
"IPAM": {
"Driver": "default",
"Options": null,
"Config": [
{
"Subnet": "172.20.0.0/16",
"Gateway": "172.20.0.1"
}
]
},
"Internal": false,
"Attachable": true,
"Ingress": false,
"ConfigFrom": {
"Network": ""
},
"ConfigOnly": false,
"Containers": {
"12c50f5f70d18b4c7ffc076177b59ff063a8ff81c4926c8cae0bf9e74dc2fc83": {
"Name": "mlspace_mongodb_1",
"EndpointID": "8278f672d9211aec9b539e14ae4eeea5e8f7aaef95448f44aab7ec1a8c61bb0b",
"MacAddress": "02:42:ac:14:00:02",
"IPv4Address": "172.20.0.2/16",
"IPv6Address": ""
},
"75cde7a34d6b3c4517c536a06349579c6c39090a93a017b2c280d987701ed0cf": {
"Name": "mlspace_flaskblog_1",
"EndpointID": "20489de8841d937f01768170a89f74c37ed049d241b096c8de8424c51e58704c",
"MacAddress": "02:42:ac:14:00:03",
"IPv4Address": "172.20.0.3/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {
"com.docker.compose.network": "backend",
"com.docker.compose.project": "mlspace",
"com.docker.compose.version": "1.23.2"
}
}
]
I checked the logs of the containers: no errors found. The Flask engine seems to be running, but the site won't load and even curl doesn't give any output.
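Two things in the posted files are worth checking here (a hedged sketch, not a confirmed diagnosis): gunicorn is bound to 127.0.0.1:5000, which is reachable only from inside the flaskblog container, and the compose file publishes no ports: mapping for that service, so the host has no route to port 5000 either way. To probe the app from inside the container:
docker-compose exec flaskblog python -c "import urllib.request; print(urllib.request.urlopen('http://127.0.0.1:5000').status)"
If that prints 200, the app itself is up, and the usual next step would be binding gunicorn to 0.0.0.0:5000 and adding a ports: entry such as 5000:5000 to the flaskblog service.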

System.TimeoutException: A timeout occurred after 30000ms - MongoDB

I am trying to compose my application (ASP.NET Core Web API and MongoDB), but encountered an error while trying to connect to the DB:
System.TimeoutException: A timeout occurred after 30000ms selecting a server using CompositeServerSelector{ Selectors = MongoDB.Driver.MongoClient+AreSessionsSupportedServerSelector, LatencyLimitingServerSelector{ AllowedLatencyRange = 00:00:00.0150000 }, OperationsCountServerSelector }. Client view of cluster state is { ClusterId : "1", Type : "Unknown", State : "Disconnected", Servers : [{ ServerId: "{ ClusterId : 1, EndPoint : "Unspecified/localhost:27017" }", EndPoint: "Unspecified/localhost:27017", ReasonChanged: "Heartbeat", State: "Disconnected", ServerVersion: , TopologyVersion: , Type: "Unknown", HeartbeatException: "MongoDB.Driver.MongoConnectionException: An exception occurred while opening a connection to the server.
My appsettings.json:
{
"DatabaseSettings": {
"ConnectionString": "mongodb://localhost:27017",
"DatabaseName": "CatalogDb",
"CollectionName": "Products"
},
"Logging": {
"LogLevel": {
"Default": "Information",
"Microsoft.AspNetCore": "Warning"
}
},
"AllowedHosts": "*"
}
docker-compose.override:
version: '3.4'
services:
  catalogdb:
    container_name: catalogdb
    restart: always
    ports:
      - "27017:27017"
    volumes:
      - mongo_data:/data/db
  eshop.catalog.api:
    container_name: catalog.api
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - "DatabaseSettings:ConnectionString=mongodb://catalogdb:27017"
    depends_on:
      - catalogdb
    ports:
      - "8000:80"
docker-compose:
version: '3.4'
services:
  catalogdb:
    image: mongo
  eshop.catalog.api:
    image: ${DOCKER_REGISTRY-}eshopcatalogapi
    build:
      context: .
      dockerfile: EShop.Catalog.API/Dockerfile
volumes:
  mongo_data:
Solved. I changed the compose command to docker-compose -f .\docker-compose.yml -f .\docker-compose.override.yml up -d and it works now.
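For context, a short hedged note on why this works with the files above: passing both files makes Compose merge them, so the API container receives the override's environment entry, and ASP.NET Core configuration lets that variable take precedence over the localhost value in appsettings.json, pointing the driver at the catalogdb service name instead of the container's own loopback address:
docker-compose -f docker-compose.yml -f docker-compose.override.yml up -d
Inside the container, DatabaseSettings:ConnectionString=mongodb://catalogdb:27017 then overrides the appsettings.json value.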

Consul Client refuses connection to Vault server

I'm trying to create a Vault deployment backed by a Consul cluster of 3 nodes. I have created a cluster of 3 Consul servers, and a Consul client has been connected to the cluster. Now I'm trying to connect a Vault server to the Consul client, but the client always refuses the connection.
2021-12-03T12:59:27.578Z [WARN] storage migration check error: error="Get \"http://consul_c1:8501/v1/kv/vault/core/migration\": dial tcp 192.168.48.3:8501: connect: connection refused"
I built it all with Docker Compose. Here are my Consul server configs:
consul_s1.json
{
"server": true,
"node_name": "consul_s1",
"datacenter": "dc1",
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"bootstrap_expect": 3,
"data_dir": "/consul/data",
"retry_join": ["consul_s2", "consul_s3"],
"log_level": "DEBUG",
"ui": true
}
consul_s2.json
{
"server": true,
"node_name": "consul_s2",
"datacenter": "dc1",
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"bootstrap_expect": 3,
"data_dir": "/consul/data",
"retry_join": ["consul_s1", "consul_s3"],
"log_level": "DEBUG",
"ui": true
}
consul_s3.json
{
"server": true,
"node_name": "consul_s3",
"datacenter": "dc1",
"bind_addr": "0.0.0.0",
"client_addr": "0.0.0.0",
"bootstrap_expect": 3,
"data_dir": "/consul/data",
"retry_join": ["consul_s1", "consul_s2"],
"log_level": "DEBUG",
"ui": true
}
and consul client config is:
consul_c1.json
{
"node_name": "consul_c1",
"datacenter": "dc1",
"bind_addr": "0.0.0.0",
"retry_join": ["consul_s1", "consul_s2", "consul_s3"],
"data_dir": "/consul/data"
}
and configs for vault:
vault_s1.json
{
"backend": {
"consul": {
"address": "consul_c1:8501",
"path": "vault/"
}
},
"listener": {
"tcp":{
"address": "0.0.0.0:8200",
"tls_disable": 1
}
},
"ui": true
}
And here is the docker-compose file:
version: '3.7'
services:
  consul_s1:
    image: consul:1.10.4
    container_name: consul_s1
    restart: always
    volumes:
      - ./consul/consul_s1/config/consul_s1.json:/consul/config/consul_s1.json:ro
    networks:
      - consul
    ports:
      - '8500:8500'
      - '8600:8600/tcp'
      - '8600:8600/udp'
    command: 'agent'
  consul_s2:
    image: consul:1.10.4
    container_name: consul_s2
    restart: always
    volumes:
      - ./consul/consul_s2/config/consul_s2.json:/consul/config/consul_s2.json:ro
    networks:
      - consul
    command: 'agent'
  consul_s3:
    image: consul:1.10.4
    container_name: consul_s3
    restart: always
    volumes:
      - ./consul/consul_s3/config/consul_s3.json:/consul/config/consul_s3.json:ro
    networks:
      - consul
    command: 'agent'
  consul_c1:
    image: consul:1.10.4
    container_name: consul_c1
    restart: always
    ports:
      - 8501:8500
    volumes:
      - ./consul/consul_c1/config/consul_c1.json:/consul/config/consul_c1.json:ro
    networks:
      - consul
    command: 'agent'
  vault:
    image: vault:latest
    container_name: vault_s1
    ports:
      - 8200:8200
    volumes:
      - ./vault/vault_s1/config/vault_s1.json:/vault/config/vault_s1.json
      - ./vault/vault_s1/policies:/vault/policies
      - ./vault/vault_s1/data:/vault/data
      - ./vault/vault_s1/logs:/vault/logs
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
    networks:
      - consul
    command: server -config=/vault/config/vault_s1.json
    cap_add:
      - IPC_LOCK
    depends_on:
      - consul_s1
networks:
  consul:
    driver: bridge
Try setting the "client_addr" configuration on the Consul client. By default it is localhost. The HTTP interface will use this address, and if it's listening only on localhost, then Vault may not be able to communicate with the client, as you have Vault configured to reach out to it on a different network interface.
https://www.consul.io/docs/agent/options#_client
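A minimal sketch of that change, keeping the rest of consul_c1.json as posted; with client_addr set to 0.0.0.0 the client's HTTP API listens on all interfaces inside the container, so Vault can reach it over the compose network:
{
  "node_name": "consul_c1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "retry_join": ["consul_s1", "consul_s2", "consul_s3"],
  "data_dir": "/consul/data"
}
Note also that, inside the compose network, the client serves HTTP on 8500; 8501 is only the host-published port, so the Vault backend address may also need to be consul_c1:8500 rather than consul_c1:8501.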