Consul client refuses connection from Vault server - hashicorp-vault

I'm trying to create a Vault deployment backed by a Consul cluster of 3 nodes. I have created a cluster of 3 Consul servers, and a Consul client is connected to the cluster. Now I'm trying to connect
a Vault server to the Consul client, but the client always refuses the connection.
2021-12-03T12:59:27.578Z [WARN] storage migration check error: error="Get \"http://consul_c1:8501/v1/kv/vault/core/migration\": dial tcp 192.168.48.3:8501: connect: connection refused"
I built everything with Docker Compose. Here are my Consul server configs:
consul_s1.json
{
  "server": true,
  "node_name": "consul_s1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s2", "consul_s3"],
  "log_level": "DEBUG",
  "ui": true
}
consul_s2.json
{
  "server": true,
  "node_name": "consul_s2",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s1", "consul_s3"],
  "log_level": "DEBUG",
  "ui": true
}
consul_s3.json
{
  "server": true,
  "node_name": "consul_s3",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s1", "consul_s2"],
  "log_level": "DEBUG",
  "ui": true
}
and the Consul client config is:
consul_c1.json
{
  "node_name": "consul_c1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "retry_join": ["consul_s1", "consul_s2", "consul_s3"],
  "data_dir": "/consul/data"
}
and the config for Vault:
vault_s1.json
{
  "backend": {
    "consul": {
      "address": "consul_c1:8501",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}
And here is the Docker Compose file:
version: '3.7'
services:
  consul_s1:
    image: consul:1.10.4
    container_name: consul_s1
    restart: always
    volumes:
      - ./consul/consul_s1/config/consul_s1.json:/consul/config/consul_s1.json:ro
    networks:
      - consul
    ports:
      - '8500:8500'
      - '8600:8600/tcp'
      - '8600:8600/udp'
    command: 'agent'
  consul_s2:
    image: consul:1.10.4
    container_name: consul_s2
    restart: always
    volumes:
      - ./consul/consul_s2/config/consul_s2.json:/consul/config/consul_s2.json:ro
    networks:
      - consul
    command: 'agent'
  consul_s3:
    image: consul:1.10.4
    container_name: consul_s3
    restart: always
    volumes:
      - ./consul/consul_s3/config/consul_s3.json:/consul/config/consul_s3.json:ro
    networks:
      - consul
    command: 'agent'
  consul_c1:
    image: consul:1.10.4
    container_name: consul_c1
    restart: always
    ports:
      - 8501:8500
    volumes:
      - ./consul/consul_c1/config/consul_c1.json:/consul/config/consul_c1.json:ro
    networks:
      - consul
    command: 'agent'
  vault:
    image: vault:latest
    container_name: vault_s1
    ports:
      - 8200:8200
    volumes:
      - ./vault/vault_s1/config/vault_s1.json:/vault/config/vault_s1.json
      - ./vault/vault_s1/policies:/vault/policies
      - ./vault/vault_s1/data:/vault/data
      - ./vault/vault_s1/logs:/vault/logs
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
    networks:
      - consul
    command: server -config=/vault/config/vault_s1.json
    cap_add:
      - IPC_LOCK
    depends_on:
      - consul_s1
networks:
  consul:
    driver: bridge

Try setting the "client_addr" configuration on the Consul client. By default it is localhost. The HTTP interface will use this address, and if it's listening only on localhost, then Vault may not be able to communicate with the client, as you have Vault configured to reach out to it on a different network interface.
https://www.consul.io/docs/agent/options#_client
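For example, a minimal sketch of consul_c1.json with client_addr added so the client's HTTP API listens on all interfaces instead of only on loopback (everything else unchanged from the config above):

{
  "node_name": "consul_c1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "retry_join": ["consul_s1", "consul_s2", "consul_s3"],
  "data_dir": "/consul/data"
}

After recreating the consul_c1 container, the client should accept Vault's HTTP requests over the Compose network.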

Related

Is it possible to run Hyperledger Explorer on ARM64, like a Raspberry Pi?

I am trying to see the blocks of Fabric using Hyperledger Explorer on a Raspberry Pi. Every time I run docker-compose up the result is always the same:
Creating explorerdb.mynetwork.com ... done
Creating explorer.mynetwork.com ... done
Attaching to explorerdb.mynetwork.com, explorer.mynetwork.com
explorer.mynetwork.com    | exec /usr/local/bin/docker-entrypoint.sh: exec format error
explorerdb.mynetwork.com  | 2022-12-13 19:59:02.094 UTC [1] LOG:  listening on IPv4 address "0.0.0.0", port 5432
explorerdb.mynetwork.com  | 2022-12-13 19:59:02.096 UTC [1] LOG:  listening on IPv6 address "::", port 5432
explorerdb.mynetwork.com  | 2022-12-13 19:59:02.102 UTC [1] LOG:  listening on Unix socket "/var/run/postgresql/.s.PGSQL.5432"
explorerdb.mynetwork.com  | 2022-12-13 19:59:02.152 UTC [20] LOG:  database system was shut down at 2022-12-13 19:58:39 UTC
explorerdb.mynetwork.com  | 2022-12-13 19:59:02.164 UTC [1] LOG:  database system is ready to accept connections
explorer.mynetwork.com exited with code 1
my docker-compose.yaml
version: '2.1'
volumes:
  pgdata:
  walletstore:
networks:
  mynetwork.com:
    name: el_red
services:
  explorerdb.mynetwork.com:
    image: hyperledger/explorer-db:latest
    container_name: explorerdb.mynetwork.com
    hostname: explorerdb.mynetwork.com
    environment:
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWORD=password
    healthcheck:
      test: "pg_isready -h localhost -p 5432 -q -U postgres"
      interval: 30s
      timeout: 10s
      retries: 5
    volumes:
      - pgdata:/var/lib/postgresql/data
    networks:
      - mynetwork.com
  explorer.mynetwork.com:
    image: hyperledger/explorer:latest
    container_name: explorer.mynetwork.com
    hostname: explorer.mynetwork.com
    environment:
      - DATABASE_HOST=explorerdb.mynetwork.com
      - DATABASE_DATABASE=fabricexplorer
      - DATABASE_USERNAME=hppoc
      - DATABASE_PASSWD=password
      - LOG_LEVEL_APP=debug
      - LOG_LEVEL_DB=debug
      - LOG_LEVEL_CONSOLE=info
      - LOG_CONSOLE_STDOUT=true
      - DISCOVERY_AS_LOCALHOST=false
    volumes:
      - ./config.json:/opt/explorer/app/platform/fabric/config.json
      - ./connection-profile:/opt/explorer/app/platform/fabric/connection-profile
      - ./organizations:/tmp/crypto
      - walletstore:/opt/explorer/wallet
    ports:
      - 8080:8080
    depends_on:
      explorerdb.mynetwork.com:
        condition: service_healthy
    networks:
      - mynetwork.com
my test-network.json
{
  "name": "my-network",
  "version": "1.0.0",
  "client": {
    "tlsEnable": true,
    "adminCredential": {
      "id": "exploreradmin",
      "password": "exploreradminpw"
    },
    "enableAuthentication": true,
    "organization": "Org1MSP",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "peers": {
        "peer0.org1.example.com": {}
      }
    }
  },
  "organizations": {
    "Org1MSP": {
      "mspid": "Org1MSP",
      "adminPrivateKey": {
        "pem": "-----BEGIN PRIVATE KEY-----\nMIGHAgEAMBMGByqGSM49AgEGCCqGSM49AwEHBG0wawIBAQQgtkP3OchnVeSd6c0n\ns/SXp7E3JLiBQZExi1UVXuCQYcahRANCAAQgJLvV9SaRC550c1hyDVfDao1MaxJU\nlvnDq1Yi51/d2d5sLndQ4q33nuAoybKIR3eQIrvE2Wu4wTGQCL2r3t2F\n-----END PRIVATE KEY-----\n"
      },
      "peers": ["peer0.org1.example.com"],
      "signedCert": {
        "path": "/tmp/crypto/peerOrganizations/org1.example.com/users/Admin@org1.example.com/msp/signcerts/cert.pem"
      }
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "tlsCACerts": {
        "path": "/tmp/crypto/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/tls/ca.crt"
      },
      "url": "grpcs://peer0.org1.example.com:7051"
    }
  }
}
I wait for like an hour every time I run docker-compose up but nothing happens. Please help. Thank you so much in advance.

ECONNREFUSED 127.0.0.1:27017 - Compass <> MongoDB Replication Set connection challenges

I'm testing a replica set with Docker Compose on my local machine and am having trouble getting Compass connected and working with the local set. Everything has deployed fine and I can see the exposed ports listening on the host machine. I just keep getting ECONNREFUSED.
Here is the Docker Compose file.
services:
  mongo1:
    hostname: mongo1
    container_name: mongo1
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30001:27017
    volumes:
      - md1:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo2:
    hostname: mongo2
    container_name: mongo2
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30002:27017
    volumes:
      - md2:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongo3:
    hostname: mongo3
    container_name: mongo3
    image: mongo:latest
    networks:
      - mongo-network
    expose:
      - 27017
    ports:
      - 30003:27017
    volumes:
      - md3:/data/db
    restart: always
    command: mongod --replSet mongo-net
  mongoinit:
    image: mongo
    restart: "no"
    depends_on:
      - mongo1
      - mongo2
      - mongo3
    command: >
      sh -c "mongosh --host mongo1:27017 --eval
      '
      db = (new Mongo("localhost:27017")).getDB("test");
      config = {
        "_id" : "mongo-net",
        "members" : [
          {
            "_id" : 0,
            "host" : "localhost:27017",
            "priority": 1
          },
          {
            "_id" : 1,
            "host" : "localhost:27017",
            "priority": 2
          },
          {
            "_id" : 2,
            "host" : "localhost:27017",
            "priority": 3
          }
        ]
      };
      rs.initiate(config);
      '"
networks:
  mongo-network:
volumes:
  md1:
  md2:
  md3:
The containers and replica set deploy fine and are communicating. I can see the defined ports exposed and listening.
My issue is trying to use Compass to connect to the replica set. I get ECONNREFUSED.
What's odd is I can actually see the client connecting in the primary's logs, but I get no other information about why the connection was refused/disconnected.
{
  "attr": {
    "connectionCount": 7,
    "connectionId": 129,
    "remote": "192.168.32.1:61508",
    "uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
  },
  "c": "NETWORK",
  "ctx": "listener",
  "id": 22943,
  "msg": "Connection accepted",
  "s": "I",
  "t": {
    "$date": "2022-11-02T18:57:20.032+00:00"
  }
}
{
  "attr": {
    "client": "conn129",
    "doc": {
      "application": {
        "name": "MongoDB Compass"
      },
      "driver": {
        "name": "nodejs",
        "version": "4.8.1"
      },
      "os": {
        "architecture": "arm64",
        "name": "darwin",
        "type": "Darwin",
        "version": "22.1.0"
      },
      "platform": "Node.js v16.5.0, LE (unified)|Node.js v16.5.0, LE (unified)"
    },
    "remote": "192.168.32.1:61508"
  },
  "c": "NETWORK",
  "ctx": "conn129",
  "id": 51800,
  "msg": "client metadata",
  "s": "I",
  "t": {
    "$date": "2022-11-02T18:57:20.034+00:00"
  }
}
{
  "attr": {
    "connectionCount": 6,
    "connectionId": 129,
    "remote": "192.168.32.1:61508",
    "uuid": "f78c3803-fc7c-4655-9b77-94329bd41a7d"
  },
  "c": "NETWORK",
  "ctx": "conn129",
  "id": 22944,
  "msg": "Connection ended",
  "s": "I",
  "t": {
    "$date": "2022-11-02T18:57:20.044+00:00"
  }
}

Containers on the same network not communicating with each other

I have a MongoDB container which I named e-learning,
and I have a Docker image which should connect to the MongoDB container to update my database, but it's not working. I get this error:
Unknown, Last error: connection() error occurred during connection handshake: dial tcp 127.0.0.1:27017: connect: connection refused }
Here's my Dockerfile:
# syntax=docker/dockerfile:1
FROM golang:1.18
WORKDIR /go/src/github.com/worker
COPY go.mod go.sum main.go ./
RUN go mod download
RUN CGO_ENABLED=0 GOOS=linux go build -a -installsuffix cgo -o app .
FROM jrottenberg/ffmpeg:4-alpine
FROM alpine:latest
RUN apk --no-cache add ca-certificates
WORKDIR /root/
ENV LD_LIBRARY_PATH=/usr/local/lib
COPY --from=jrottenberg/ffmpeg / /
COPY app.env /root
COPY --from=0 /go/src/github.com/worker/app .
CMD ["./app"]
my docker compose file
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://localhost:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
networks:
  app_network:
    external: true
volumes:
  worker:
my docker network inspect output
[
  {
    "Name": "app_network",
    "Id": "f688edf02a194fd3b8a2a66076f834a23fa26cead20e163cde71ef32fc1ab598",
    "Created": "2022-06-27T12:18:00.283531947+03:00",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.20.0.0/16",
          "Gateway": "172.20.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "2907482267e1f6e42544e5e8d852c0aac109ec523c6461e003572963e299e9b0": {
        "Name": "rabbitmq",
        "EndpointID": "4b46e091e4d5a79782185dce12cb2b3d79131b92d2179ea294a639fe82a1e79a",
        "MacAddress": "02:42:ac:14:00:03",
        "IPv4Address": "172.20.0.3/16",
        "IPv6Address": ""
      },
      "8afd004a981715b8088af53658812215357e156ede03905fe8fdbe4170e8b13f": {
        "Name": "e-learning",
        "EndpointID": "1c12d592a0ef6866d92e9989f2e5bc3d143602fc1e7ad3d980efffcb87b7e076",
        "MacAddress": "02:42:ac:14:00:02",
        "IPv4Address": "172.20.0.2/16",
        "IPv6Address": ""
      },
      "ad026f7e10c9c1c41071929239363031ff72ad1b9c6765ef5c977da76f24ea31": {
        "Name": "video-transformation-worker-1",
        "EndpointID": "ce3547014a6856725b6e815181a2c3383d307ae7cf7132e125c58423f335b73f",
        "MacAddress": "02:42:ac:14:00:04",
        "IPv4Address": "172.20.0.4/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]
Change MONGO_URI="mongodb://localhost:27017/" to MONGO_URI="mongodb://e-learning:27017/" (working on the assumption that e-learning is the mongo container).
Within a container attached to a bridge network (the default) localhost (127.0.0.1) is the container itself. So your app container is trying to access the database at port 27017 on itself (not on the host or on the db container). The easiest solution is to use the automatic DNS resolution between containers that docker provides.
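As a minimal sketch, only the worker's environment entry needs to change (assuming e-learning is the name the Mongo container is reachable under on app_network, per the docker network inspect output above):

    environment:
      - MONGO_URI="mongodb://e-learning:27017/"

Docker's embedded DNS resolves the container name e-learning to its address on app_network (172.20.0.2 in the inspect output), so the worker reaches the database container instead of itself.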
I added extra_hosts and changed my Mongo URI to host.docker.internal,
and it solved my problems:
version: "3.9"
services:
  worker:
    image: worker
    environment:
      - MONGO_URI="mongodb://host.docker.internal:27017/"
      - MONGO_DATABASE=e-learning
      - RABBITMQ_URI=amqp://user:password@rabbitmq:5672/
      - RABBITMQ_QUEUE=upload
    networks:
      - app_network
    external_links:
      - e-learning
      - rabbitmq
    volumes:
      - worker:/go/src/github.com/worker:rw
    extra_hosts:
      - "host.docker.internal:host-gateway"
networks:
  app_network:
    external: true
volumes:
  worker:

I had a problem running Kafka and Zookeeper with Docker Compose

this is my docker-compose-single-broker.yml file.
version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      my-network:
        ipv4_address: 172.18.0.100
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.18.0.101
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
    networks:
      my-network:
        ipv4_address: 172.18.0.101
networks:
  my-network:
    name: ecommerce-network # 172.18.0.1 ~
Then I executed the command:
docker-compose -f docker-compose-single-broker.yml up -d
I checked my network with the command:
docker network inspect ecommerce-network
[
  {
    "Name": "ecommerce-network",
    "Id": "f543bd92e299455454bd1affa993d1a4b7ca2c347d576b24d8f559d0ac7f07c2",
    "Created": "2021-05-23T12:42:01.804785417Z",
    "Scope": "local",
    "Driver": "bridge",
    "EnableIPv6": false,
    "IPAM": {
      "Driver": "default",
      "Options": {},
      "Config": [
        {
          "Subnet": "172.18.0.0/16",
          "Gateway": "172.18.0.1"
        }
      ]
    },
    "Internal": false,
    "Attachable": false,
    "Ingress": false,
    "ConfigFrom": {
      "Network": ""
    },
    "ConfigOnly": false,
    "Containers": {
      "cad97d79a92ea8e0f24b000c8255c2db1ebc64865fab3d7cda37ff52a8755f14": {
        "Name": "kafka-docker_kafka_1",
        "EndpointID": "4c867d9d5f4d28e608f34247b102f1ff2811a9bbb2f78d30b2f55621e6ac6187",
        "MacAddress": "02:42:ac:12:00:65",
        "IPv4Address": "172.18.0.101/16",
        "IPv6Address": ""
      },
      "f7df5354b9e114a1a849ea9d558d8543ca5cb02800c5189d9f09ee1b95a517d6": {
        "Name": "kafka-docker_zookeeper_1",
        "EndpointID": "b304581db258dd3da95e15fb658cae0e40bd38440c1f845b09936d9b69d4fb23",
        "MacAddress": "02:42:ac:12:00:64",
        "IPv4Address": "172.18.0.100/16",
        "IPv6Address": ""
      }
    },
    "Options": {},
    "Labels": {}
  }
]
Then I entered the Kafka container and executed the command to look up the topic list.
However, I couldn't get the topic list even though I waited indefinitely.
These are my Kafka container's logs.
What should I do to solve this problem?
Unclear why you think you'll need IP addresses. Remove those.
For example, KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 works out of the box with Docker networking.
The KAFKA_ADVERTISED_HOST_NAME can simply be localhost if you don't plan on connecting from outside that container; otherwise, it can be the Docker service name you've set, kafka.
And you don't need to mount the Docker socket.
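A minimal sketch of docker-compose-single-broker.yml with those suggestions applied (static IPs, the custom network, and the Docker socket mount removed; the kafka service name is advertised, which assumes clients connect only from inside the Compose network):

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      # service names resolve via Docker's built-in DNS on the default Compose network
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper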
Related - Connect to Kafka running in Docker
For local development purposes, you could check out my containerized Kafka that uses only a single image for both Zookeeper and Kafka, if you are interested in it.
You can find it via
https://github.com/howiesynguyen/Java-basedExamples/tree/main/DockerContainerizedKafka4Dev
or
https://hub.docker.com/repository/docker/howiesynguyen/kafka4dev
I tested it with one of my examples of Spring Cloud Stream and it worked for me. Hopefully someone finds it helpful.
Just in case it would be useful to anybody:
I had a pretty similar setup to the question's author, and in my case Kafka didn't see Zookeeper (but I was using a different image, from Debezium). I figured out that the environment variable KAFKA_ZOOKEEPER_CONNECT should be named just ZOOKEEPER_CONNECT.
The solution that worked for me (the network can be omitted, it's not necessary):
version: "3.9"
services:
  zookeeper:
    image: debezium/zookeeper
    ports:
      - 2181:2181
      - 2888:2888
      - 3888:3888
    networks:
      - main
  kafka:
    image: debezium/kafka
    ports:
      - 9092:9092
    environment:
      ZOOKEEPER_CONNECT: zookeeper
    depends_on:
      - zookeeper
    networks:
      - main
networks:
  main:

Sequelize not working in Docker (Node + Postgres)

Even with the containers connected to the same network, when executing the migration with Sequelize, the error 'getaddrinfo ENOTFOUND' is returned.
If I remove the host address in the 'database.js' settings, the migration runs, but the CRUD routines stop working, returning the error "connect ECONNREFUSED 127.0.0.1:5432".
If I point to the 'postgresdb' container in the 'database.js' settings, the CRUD routines execute but the Sequelize migration does not.
Help me please.
Fragment of docker network inspect for the bridge network:
"Containers": {
  "35467ab419c3632c4c0cfe57e972bd94c7de0a5818e37fdae6eb82a25381ceab": {
    "Name": "api",
    "EndpointID": "d54d95b94bec543fa831526d1abb99346045efc4cc5f4425d5d59b200ece3e62",
    "MacAddress": "02:42:ac:1a:00:05",
    "IPv4Address": "172.26.0.5/16",
    "IPv6Address": ""
  },
  "7e028bcd80948fd61d14ff87437b963af31291220b7adbc8861fb98a2171f04a": {
    "Name": "postgresdb",
    "EndpointID": "e1d55e0fe8658a142880da27eea12a47d73f6cce472eeff38322ed4d6a60e8ad",
    "MacAddress": "02:42:ac:1a:00:03",
    "IPv4Address": "172.26.0.3/16",
    "IPv6Address": ""
  },
My docker-compose.yml
version: '3'
networks:
  api:
    driver: bridge
services:
  api:
    container_name: api
    depends_on:
      - postgresdb
      - mongodb
      - redisdb
    links:
      - postgresdb
    environment:
      POSTGRES_HOST: postgresdb
    build: .
    volumes:
      - .:/home/node/api
    command: yarn dev
    networks:
      - api
    ports:
      - 3333:3333
  postgresdb:
    image: postgres:alpine
    container_name: postgresdb
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
      - POSTGRES_DB=postdb
    networks:
      - api
    ports:
      - "5432:5432"
  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - api
    ports:
      - "27017:27017"
  redisdb:
    image: redis
    container_name: redisdb
    networks:
      - api
    ports:
      - "6379:6379"
My database.js file
require('dotenv/config');

module.exports = {
  dialect: process.env.DB_DIALECT,
  host: process.env.POSTGRES_HOST,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  define: {
    timestamps: true,
    underscored: true,
    underscoredAll: true,
  },
};