Sequelize not working in Docker (Node + Postgres)

Even with the containers connected to the same network, running the Sequelize migration returns the error 'getaddrinfo ENOTFOUND'.
If I remove the host address from the 'database.js' settings, the migration runs, but the CRUD routines stop working and return the error "connect ECONNREFUSED 127.0.0.1:5432".
If I point the host at the 'postgresdb' container in the 'database.js' settings, the CRUD routines execute but the Sequelize migration does not.
Help me please.
Fragment of docker network inspect bridge:
"Containers": {
"35467ab419c3632c4c0cfe57e972bd94c7de0a5818e37fdae6eb82a25381ceab": {
"Name": "api",
"EndpointID": "d54d95b94bec543fa831526d1abb99346045efc4cc5f4425d5d59b200ece3e62",
"MacAddress": "02:42:ac:1a:00:05",
"IPv4Address": "172.26.0.5/16",
"IPv6Address": ""
},
"7e028bcd80948fd61d14ff87437b963af31291220b7adbc8861fb98a2171f04a": {
"Name": "postgresdb",
"EndpointID": "e1d55e0fe8658a142880da27eea12a47d73f6cce472eeff38322ed4d6a60e8ad",
"MacAddress": "02:42:ac:1a:00:03",
"IPv4Address": "172.26.0.3/16",
"IPv6Address": ""
},
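Since both containers sit on the same bridge, name resolution can be sanity-checked from inside the api container before debugging Sequelize itself. A hedged one-liner (it assumes the node binary is available in the api image):

docker exec api node -e "require('dns').lookup('postgresdb', (err, addr) => console.log(err || addr))"

If this prints 172.26.0.3, container-to-container DNS is fine and the problem is where each command runs, not the network.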
My docker-compose.yml
version: '3'

networks:
  api:
    driver: bridge

services:
  api:
    container_name: api
    depends_on:
      - postgresdb
      - mongodb
      - redisdb
    links:
      - postgresdb
    environment:
      POSTGRES_HOST: postgresdb
    build: .
    volumes:
      - .:/home/node/api
    command: yarn dev
    networks:
      - api
    ports:
      - 3333:3333

  postgresdb:
    image: postgres:alpine
    container_name: postgresdb
    environment:
      - POSTGRES_USER=docker
      - POSTGRES_PASSWORD=docker
      - POSTGRES_DB=postdb
    networks:
      - api
    ports:
      - "5432:5432"

  mongodb:
    image: mongo
    container_name: mongodb
    networks:
      - api
    ports:
      - "27017:27017"

  redisdb:
    image: redis
    container_name: redisdb
    networks:
      - api
    ports:
      - "6379:6379"
My database.js file
require('dotenv/config');

module.exports = {
  dialect: process.env.DB_DIALECT,
  host: process.env.POSTGRES_HOST,
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  define: {
    timestamps: true,
    underscored: true,
    underscoredAll: true,
  },
};
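The split behavior described above is consistent with the two commands resolving 'postgresdb' from different places: the CRUD code runs inside the api container, where the Compose service name resolves, while sequelize-cli migrations are typically launched on the host, where it does not. A minimal sketch of a workaround, assuming migrations are run from the host against the published port 5432:

require('dotenv/config');

module.exports = {
  dialect: process.env.DB_DIALECT || 'postgres',
  // Inside the api container, docker-compose sets POSTGRES_HOST=postgresdb,
  // which resolves on the shared network. On the host it is unset, so fall
  // back to localhost, which reaches the port published as "5432:5432".
  host: process.env.POSTGRES_HOST || 'localhost',
  username: process.env.POSTGRES_USER,
  password: process.env.POSTGRES_PASSWORD,
  database: process.env.POSTGRES_DB,
  define: {
    timestamps: true,
    underscored: true,
    underscoredAll: true,
  },
};

Alternatively, run the migration inside the container so both paths resolve the same name, e.g. docker-compose exec api yarn sequelize db:migrate (assuming a sequelize script is defined in package.json).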

Related

ASP.NET Core: Where to keep the connection string? In the Docker profile or appsettings.json?

I have an ASP.NET Core application and I keep the connection string in the appsettings.json file.
Should I also add the connection string to the Docker profile as an environment variable?
This is my appsettings.json:
{
  "PersistenceAccess": {
    "ConnectionString": "Server=localhost;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;"
  }
}
And this is the Docker profile in launchSettings.json:
{
  "iisSettings": {
    "windowsAuthentication": false,
    "anonymousAuthentication": true,
    "iisExpress": {
      "applicationUrl": "http://localhost:35847",
      "sslPort": 0
    }
  },
  "profiles": {
    "IIS Express": {
      ..
    },
    "Docker": {
      "commandName": "Docker",
      "launchBrowser": true,
      "launchUrl": "{Scheme}://{ServiceHost}:{ServicePort}",
      "environmentVariables": {
        "PersistenceAccess__ConnectionString": "Server=host.docker.internal;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;"
      },
      "DockerfileRunArguments": "--add-host host.docker.internal:host-gateway",
      "publishAllPorts": true,
      "useSSL": false
    }
  }
}
I also have a Docker-Compose.yml file:
version: '3.4'

services:
  Demo.api:
    image: ${DOCKER_REGISTRY-}demoapi
    build:
      context: .
      dockerfile: Sources/Code/Demo.Api/Dockerfile
    environment:
      - ASPNETCORE_ENVIRONMENT=Development
      - PersistenceAccess__ConnectionString=Server=db;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;
    ports:
      - '8081:80'
    depends_on:
      - db

  db:
    image: postgres
    restart: always
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    logging:
      options:
        max-size: 10m
        max-file: "3"
    ports:
      - '5438:5432'
    volumes:
      - ./postgres-data:/var/lib/postgresql/data
      # copy the sql script to create tables
      - ./sql/create_tables.sql:/docker-entrypoint-initdb.d/create_tables.sql
      # copy the sql script to fill tables
      - ./sql/fill_tables.sql:/docker-entrypoint-initdb.d/fill_tables.sql
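One point worth noting (a general ASP.NET Core configuration fact, offered as context rather than taken from the original post): with the default host builder, environment variables are loaded after appsettings.json, so a value supplied this way overrides the JSON one, and the double underscore maps to the ':' section separator. The compose entry above therefore binds to PersistenceAccess:ConnectionString without the JSON file needing a matching value:

environment:
  # Overrides "PersistenceAccess": { "ConnectionString": ... } from
  # appsettings.json; '__' stands in for ':' in configuration keys.
  - PersistenceAccess__ConnectionString=Server=db;Port=5432;Database=DemoDatabase;User Id=postgres;Password=postgres;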

Cannot connect with MongoDB Compass to dockerized MongoDB replica set

I have the following docker-compose.yaml file:
version: '3.8'

services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']

  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network

  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network

  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network

networks:
  mongo_network:
    driver: bridge
Note that the first service will use the mongo-setup.sh script:
#!/bin/bash

MONGODB_REPLICA_1=mongo_replica_1
MONGODB_REPLICA_2=mongo_replica_2
MONGODB_REPLICA_3=mongo_replica_3

echo "************ [ Waiting for startup ] **************" ${MONGODB_REPLICA_1}

until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

echo "************ [ Startup completed ] **************" ${MONGODB_REPLICA_1}

mongosh --host ${MONGODB_REPLICA_1}:27017 <<EOF
var cfg = {
  "_id": "dbrs",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    {
      "_id": 1,
      "host": "${MONGODB_REPLICA_1}:27017",
      "priority": 3
    },
    {
      "_id": 2,
      "host": "${MONGODB_REPLICA_2}:27018",
      "priority": 2
    },
    {
      "_id": 3,
      "host": "${MONGODB_REPLICA_3}:27019",
      "priority": 1
    }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
When I run docker-compose up -d, it succeeds and when I run docker ps I get:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
0c8f6d916e4a mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27018->27018/tcp mongo_replica_2
c59cb625362e mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 0.0.0.0:27017->27017/tcp mongo_replica_1
cb61093e4dd0 mongo:5.0.8 "/usr/bin/mongod --b…" 11 seconds ago Up 10 seconds 27017/tcp, 0.0.0.0:27019->27019/tcp mongo_replica_3
But when I try to connect to one of the replica set members with MongoDB Compass, I get a connection error (screenshot not included).
So the containers all seem to be running fine; why do I get this issue? Please advise.
I could solve the issue by appending the following content to my /etc/hosts file:
127.0.0.1 mongo_replica_1
127.0.0.1 mongo_replica_2
127.0.0.1 mongo_replica_3
Could anyone suggest a more elegant way?
To bypass the /etc/hosts workaround, I edited the docker-compose.yaml file and applied extra_hosts:
version: '3.8'

services:
  mongo_launcher:
    container_name: mongo_launcher
    image: mongo:5.0.8
    restart: on-failure
    networks:
      - mongo_network
    volumes:
      - ./docker/mongo-setup.sh:/scripts/mongo-setup.sh
    entrypoint: ['sh', '/scripts/mongo-setup.sh']

  mongo_replica_1:
    container_name: mongo_replica_1
    image: mongo:5.0.8
    ports:
      - 27017:27017
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27017',
      ]
    volumes:
      - ./.volumes/mongo/replica1:/data/db
      - ./.volumes/mongo/replica1/configdb:/data/configdb
    networks:
      - mongo_network

  mongo_replica_2:
    container_name: mongo_replica_2
    image: mongo:5.0.8
    ports:
      - 27018:27018
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27018',
      ]
    volumes:
      - ./.volumes/mongo/replica2:/data/db
      - ./.volumes/mongo/replica2/configdb:/data/configdb
    networks:
      - mongo_network

  mongo_replica_3:
    container_name: mongo_replica_3
    image: mongo:5.0.8
    ports:
      - 27019:27019
    extra_hosts:
      - 'localhost:0.0.0.0'
    restart: always
    entrypoint:
      [
        '/usr/bin/mongod',
        '--bind_ip_all',
        '--replSet',
        'dbrs',
        '--dbpath',
        '/data/db',
        '--port',
        '27019',
      ]
    volumes:
      - ./.volumes/mongo/replica3:/data/db
      - ./.volumes/mongo/replica3/configdb:/data/configdb
    networks:
      - mongo_network

networks:
  mongo_network:
    driver: bridge
Then I edited the mongo-setup.sh to:
#!/bin/bash

echo "************ [ Waiting for startup ] **************"

until curl http://${MONGODB_REPLICA_1}:27017/serverStatus\?text\=1 2>&1 | grep uptime | head -1; do
  printf '.'
  sleep 1
done

echo "************ [ Startup completed ] **************"

mongosh --host localhost:27017 <<EOF
var cfg = {
  "_id": "dbrs",
  "protocolVersion": 1,
  "version": 1,
  "members": [
    {
      "_id": 1,
      "host": "localhost:27017",
      "priority": 3
    },
    {
      "_id": 2,
      "host": "localhost:27018",
      "priority": 2
    },
    {
      "_id": 3,
      "host": "localhost:27019",
      "priority": 1
    }
  ],
  settings: { chainingAllowed: true }
};
rs.initiate(cfg, { force: true });
rs.reconfig(cfg, { force: true });
rs.secondaryOk();
db.getMongo().setReadPref('nearest');
db.getMongo().setSecondaryOk();
EOF
And I removed the edits I made to the /etc/hosts file. I no longer get the earlier error, but now the connection times out (screenshot not included).
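One possible alternative worth trying (hedged; this is not from the original thread): Compass performs replica set discovery, so after the initial connection it re-connects to whatever host:port each member advertises, and those names must be resolvable and reachable from the host machine. Connecting to a single member with discovery disabled sidesteps the resolution problem entirely:

mongodb://localhost:27017/?directConnection=true

With directConnection=true the driver talks only to the node you name instead of chasing the advertised replica set members.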

Consul Client refuses connection to Vault server

I'm trying to create a Vault setup backed by a Consul cluster of 3 nodes. I have created a cluster of 3 Consul servers, and a Consul client has been connected to the cluster. Now I'm trying to connect a Vault server to the Consul client, but the client always refuses the connection.
2021-12-03T12:59:27.578Z [WARN] storage migration check error: error="Get \"http://consul_c1:8501/v1/kv/vault/core/migration\": dial tcp 192.168.48.3:8501: connect: connection refused"
I built it all with Docker Compose. Here are my Consul server configs:
consul_s1.json
{
  "server": true,
  "node_name": "consul_s1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s2", "consul_s3"],
  "log_level": "DEBUG",
  "ui": true
}
consul_s2.json
{
  "server": true,
  "node_name": "consul_s2",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s1", "consul_s3"],
  "log_level": "DEBUG",
  "ui": true
}
consul_s3.json
{
  "server": true,
  "node_name": "consul_s3",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "bootstrap_expect": 3,
  "data_dir": "/consul/data",
  "retry_join": ["consul_s1", "consul_s2"],
  "log_level": "DEBUG",
  "ui": true
}
and consul client config is:
consul_c1.json
{
  "node_name": "consul_c1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "retry_join": ["consul_s1", "consul_s2", "consul_s3"],
  "data_dir": "/consul/data"
}
and configs for vault:
vault_s1.json
{
  "backend": {
    "consul": {
      "address": "consul_c1:8501",
      "path": "vault/"
    }
  },
  "listener": {
    "tcp": {
      "address": "0.0.0.0:8200",
      "tls_disable": 1
    }
  },
  "ui": true
}
and here is the docker compose file
version: '3.7'

services:
  consul_s1:
    image: consul:1.10.4
    container_name: consul_s1
    restart: always
    volumes:
      - ./consul/consul_s1/config/consul_s1.json:/consul/config/consul_s1.json:ro
    networks:
      - consul
    ports:
      - '8500:8500'
      - '8600:8600/tcp'
      - '8600:8600/udp'
    command: 'agent'

  consul_s2:
    image: consul:1.10.4
    container_name: consul_s2
    restart: always
    volumes:
      - ./consul/consul_s2/config/consul_s2.json:/consul/config/consul_s2.json:ro
    networks:
      - consul
    command: 'agent'

  consul_s3:
    image: consul:1.10.4
    container_name: consul_s3
    restart: always
    volumes:
      - ./consul/consul_s3/config/consul_s3.json:/consul/config/consul_s3.json:ro
    networks:
      - consul
    command: 'agent'

  consul_c1:
    image: consul:1.10.4
    container_name: consul_c1
    restart: always
    ports:
      - 8501:8500
    volumes:
      - ./consul/consul_c1/config/consul_c1.json:/consul/config/consul_c1.json:ro
    networks:
      - consul
    command: 'agent'

  vault:
    image: vault:latest
    container_name: vault_s1
    ports:
      - 8200:8200
    volumes:
      - ./vault/vault_s1/config/vault_s1.json:/vault/config/vault_s1.json
      - ./vault/vault_s1/policies:/vault/policies
      - ./vault/vault_s1/data:/vault/data
      - ./vault/vault_s1/logs:/vault/logs
    environment:
      - VAULT_ADDR=http://127.0.0.1:8200
    networks:
      - consul
    command: server -config=/vault/config/vault_s1.json
    cap_add:
      - IPC_LOCK
    depends_on:
      - consul_s1

networks:
  consul:
    driver: bridge
Try setting the client_addr configuration option on the Consul client. By default it is localhost. The HTTP interface uses this address, and if it is listening only on localhost, Vault may not be able to communicate with the client, as you have Vault configured to reach it on a different network interface.
https://www.consul.io/docs/agent/options#_client
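A sketch of that change in consul_c1.json (hedged; the remaining fields stay as in the question):

{
  "node_name": "consul_c1",
  "datacenter": "dc1",
  "bind_addr": "0.0.0.0",
  "client_addr": "0.0.0.0",
  "retry_join": ["consul_s1", "consul_s2", "consul_s3"],
  "data_dir": "/consul/data"
}

Note also that the 8501:8500 port mapping only applies from the host; another container on the same Compose network reaches the client on the container port, so the Vault backend address may need to be consul_c1:8500 rather than consul_c1:8501.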

I had a problem running Kafka and ZooKeeper with Docker Compose

This is my docker-compose-single-broker.yml file:
version: '2'

services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
    networks:
      my-network:
        ipv4_address: 172.18.0.100

  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      KAFKA_ADVERTISED_HOST_NAME: 172.18.0.101
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    depends_on:
      - zookeeper
    networks:
      my-network:
        ipv4_address: 172.18.0.101

networks:
  my-network:
    name: ecommerce-network # 172.18.0.1 ~
And I executed the command:
docker-compose -f docker-compose-single-broker.yml up -d
Then I checked the network with:
docker network inspect ecommerce-network
[
    {
        "Name": "ecommerce-network",
        "Id": "f543bd92e299455454bd1affa993d1a4b7ca2c347d576b24d8f559d0ac7f07c2",
        "Created": "2021-05-23T12:42:01.804785417Z",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": {},
            "Config": [
                {
                    "Subnet": "172.18.0.0/16",
                    "Gateway": "172.18.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "cad97d79a92ea8e0f24b000c8255c2db1ebc64865fab3d7cda37ff52a8755f14": {
                "Name": "kafka-docker_kafka_1",
                "EndpointID": "4c867d9d5f4d28e608f34247b102f1ff2811a9bbb2f78d30b2f55621e6ac6187",
                "MacAddress": "02:42:ac:12:00:65",
                "IPv4Address": "172.18.0.101/16",
                "IPv6Address": ""
            },
            "f7df5354b9e114a1a849ea9d558d8543ca5cb02800c5189d9f09ee1b95a517d6": {
                "Name": "kafka-docker_zookeeper_1",
                "EndpointID": "b304581db258dd3da95e15fb658cae0e40bd38440c1f845b09936d9b69d4fb23",
                "MacAddress": "02:42:ac:12:00:64",
"IPv4Address": "172.18.0.100/16",
"IPv6Address": ""
}
},
"Options": {},
"Labels": {}
}
]
Then I entered the Kafka container and executed the command to look up the topic list.
However, I couldn't get the topic list even though I waited indefinitely.
This is my Kafka container's log (screenshot not included).
What should I do to solve this problem?
It's unclear why you think you need fixed IP addresses. Remove those.
For example, KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181 works out of the box with Docker networking.
KAFKA_ADVERTISED_HOST_NAME can simply be localhost if you don't plan on connecting from outside that container; otherwise, it can be the Docker service name you've set, kafka.
And you don't need to mount the Docker socket.
Related - Connect to Kafka running in Docker
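A sketch of the simplified compose file this advice implies (hedged; service and topic names as in the question):

version: '2'
services:
  zookeeper:
    image: wurstmeister/zookeeper
    ports:
      - "2181:2181"
  kafka:
    image: wurstmeister/kafka
    ports:
      - "9092:9092"
    environment:
      # Service names resolve through Docker's embedded DNS; no static IPs.
      KAFKA_ADVERTISED_HOST_NAME: kafka
      KAFKA_CREATE_TOPICS: "test:1:1"
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    depends_on:
      - zookeeper

Containers on the same Compose network then reach the broker as kafka:9092.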
For local development purposes, you could check out my containerized Kafka, which uses a single image for both ZooKeeper and Kafka, if you're interested in it.
You can find it via
https://github.com/howiesynguyen/Java-basedExamples/tree/main/DockerContainerizedKafka4Dev
or
https://hub.docker.com/repository/docker/howiesynguyen/kafka4dev
I tested it with one of my examples of Spring Cloud Stream and it worked for me. Hopefully someone can find it helpful.
Just in case it's useful to anybody:
I had a pretty similar setup to the question's author, and in my case Kafka didn't see ZooKeeper (but I was using a different image, from Debezium). I figured out that the environment variable KAFKA_ZOOKEEPER_CONNECT should be named just ZOOKEEPER_CONNECT.
The solution that worked for me (the network can be omitted; it's not necessary):
version: "3.9"
services:
zookeeper:
image: debezium/zookeeper
ports:
- 2181:2181
- 2888:2888
- 3888:3888
networks:
- main
kafka:
image: debezium/kafka
ports:
- 9092:9092
environment:
ZOOKEEPER_CONNECT: zookeeper
depends_on:
- zookeeper
networks:
- main
networks:
main:

MongoDB data is not seeding using Docker Compose

My directory structure:
root
  - mongo_seed
    - Dockerfile
    - init.json
  - docker-compose.yml
  - Dockerfile
Docker compose file
version: "3"
services:
web:
container_name: "hgbackend"
build: .
image: tahashin/hgbackend:v2
ports:
- "3000:3000"
links:
- mongodb
depends_on:
- mongodb
mongodb:
image: mongo:latest
container_name: "mongodb"
ports:
- "27017:27017"
mongo_seeding:
build: ./mongo_seed .
volumes:
- ./config/db-seed:/data
links:
- mongodb
depends_on:
- mongodb
Dockerfile under the mongo_seed directory:
FROM mongo:latest
COPY init.json /init.json
CMD mongoimport --host mongodb --db alifhala --collection honcollection --type json --file /init.json --jsonArray
MongoDB test data file init.json:
[
  {
    "name": "Joe Smith",
    "email": "jsmith#gmail.com",
    "age": 40,
    "admin": false
  },
  {
    "name": "Jen Ford",
    "email": "jford#gmail.com",
    "age": 45,
    "admin": true
  }
]
After running docker-compose up in Windows PowerShell, the database and collection are not created and the data is not imported. Running a mongo query inside Docker shows only 3 databases: local, admin, and config.
Check this answer:
https://stackoverflow.com/a/48179360/1124364
mongo_seeding:
  build: ./mongo_seed .
Change it to:
mongo_seeding:
  build: mongo_seed/.
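Put together, a hedged sketch of the corrected service (the rest of the compose file unchanged):

mongo_seeding:
  # The build context must be a single path; "./mongo_seed ." is read as one
  # string with a trailing " ." and is not a valid context directory.
  build: ./mongo_seed
  links:
    - mongodb
  depends_on:
    - mongodb

After docker-compose up, docker-compose logs mongo_seeding should show the mongoimport output; the alifhala database only appears once documents have actually been inserted.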