Multiple Mongo Containers - mongodb

I wrote the following docker-compose file:
version: '3.7'
services:
  mongo-db-showcase-db:
    image: mongo:latest
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: '${DB_USER}'
      MONGO_INITDB_ROOT_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '27018:27017'
  frontend-showcase:
    build:
      context: showcase
      dockerfile: Dockerfile-frontend-showcase
    image: showcase-frontend-showcase-image:latest
    ports:
      - '80:80'
  backend-showcase:
    build:
      context: showcase
      dockerfile: Dockerfile-backend-showcase
    image: showcase-backend-showcase-image:latest
    environment:
      DATABASE_URL: 'mongodb://${DB_USER}:${DB_PASSWORD}@mongo-db-showcase-db:27018/'
    ports:
      - '3000:3000'
    links:
      - mongo-db-showcase-db
  mongo-db-admin-manager-db:
    image: mongo:latest
    restart: always
    environment:
      MONGO_INITDB_ROOT_USERNAME: '${DB_USER}'
      MONGO_INITDB_ROOT_PASSWORD: '${DB_PASSWORD}'
    ports:
      - '27017:27017'
  frontend-manager:
    build:
      context: manager-app
      dockerfile: Dockerfile-frontend-manager
    image: manager-frontend-manager-image:latest
    ports:
      - '8080:80'
  backend-manager:
    build:
      context: manager-app
      dockerfile: Dockerfile-backend-manager
    image: manager-backend-manager-image:latest
    environment:
      DATABASE_URL: 'mongodb://${DB_USER}:${DB_PASSWORD}@mongo-db-admin-manager-db:27017/'
    ports:
      - '8000:8000'
    links:
      - mongo-db-admin-manager-db
Here I have two MongoDB containers: one is reachable on port 27017 and the other should be reachable on port 27018. The service connecting to the database on port 27017 connects fine; the other doesn't. The error I get is:
MongooseServerSelectionError: connect ECONNREFUSED 172.20.0.6:27018
at NativeConnection.Connection.openUri (/app/node_modules/mongoose/lib/connection.js:807:32)
at /app/node_modules/mongoose/lib/index.js:342:10
at /app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:32:5
at new Promise (<anonymous>)
at promiseOrCallback (/app/node_modules/mongoose/lib/helpers/promiseOrCallback.js:31:10)
at Mongoose._promiseOrCallback (/app/node_modules/mongoose/lib/index.js:1176:10)
at Mongoose.connect (/app/node_modules/mongoose/lib/index.js:341:20)
at Object.<anonymous> (/app/backend/app.js:26:10)
at Module._compile (internal/modules/cjs/loader.js:999:30)
at Object.Module._extensions..js (internal/modules/cjs/loader.js:1027:10) {
reason: TopologyDescription {
type: 'Unknown',
servers: Map { 'mongo-db-showcase-db:27018' => [ServerDescription] },
stale: false,
compatible: true,
heartbeatFrequencyMS: 10000,
localThresholdMS: 15,
logicalSessionTimeoutMinutes: undefined
}
}
172.20.0.6 is the address of the container.
What could be the problem? Any suggestion is really appreciated.
Thanks in advance to anyone willing to answer.

The port in your connection string should be 27017, because all your containers run inside the same internal Docker network; 27018 is the port you published to the host, so it can only be used to connect from outside.
DATABASE_URL: 'mongodb://${DB_USER}:${DB_PASSWORD}@mongo-db-showcase-db:27017/'
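As a sketch, the backend-showcase service from the question would then look like this (everything else unchanged); the 27018:27017 mapping is still useful, but only for reaching the database from the host:
  backend-showcase:
    build:
      context: showcase
      dockerfile: Dockerfile-backend-showcase
    image: showcase-backend-showcase-image:latest
    environment:
      # container-to-container traffic uses the container port (27017),
      # not the published host port (27018)
      DATABASE_URL: 'mongodb://${DB_USER}:${DB_PASSWORD}@mongo-db-showcase-db:27017/'
    ports:
      - '3000:3000'
    links:
      - mongo-db-showcase-db
From the host itself (for example with a local mongosh or Compass) you would still connect to localhost:27018, because that is the published side of the mapping.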

Related

How to fix "Error: getaddrinfo ENOTFOUND redis" on docker?

I am using Redis with NestJS and I see the following error. I went through different articles (like here) and it looks like I am following the same steps, but I still get this error.
Steps:
I used the docker compose up command.
Made sure that the host in redis.module.ts is the same as the service name in docker-compose.yml, which is redis.
What am I missing here?
Error:
Error: getaddrinfo ENOTFOUND redis
at GetAddrInfoReqWrap.onlookup [as oncomplete] (node:dns:71:26)
Code:
redis.module.ts
import { CacheModule, Module } from '@nestjs/common';
import { ConfigModule, ConfigService } from '@nestjs/config';
import { RedisService } from './redis.service';
import * as redisStore from 'cache-manager-redis-store';
import { envVariables } from '../env.variables';

@Module({
  imports: [
    CacheModule.registerAsync({
      imports: [ConfigModule],
      inject: [ConfigService],
      useFactory: async (configService: ConfigService) => ({
        store: redisStore,
        host: process.env.REDIS_HOST,
        port: configService.get('REDIS_PORT'),
        ttl: configService.get('CACHE_TTL'),
        max: configService.get('MAX_ITEM_IN_CACHE'),
      }),
    }),
  ],
  providers: [RedisService],
  exports: [RedisService],
})
export class RedisModule {}
.env
#REDIS
REDIS_HOST=redis
docker-compose.yml
version: "3.8"
services:
partnersusers:
image: partnersusers
build:
context: .
dockerfile: ./Dockerfile
environment:
- RUN_ENV=dev
- NODE_ENV=development
ports:
- "4000:4000"
networks:
- default
redis:
image: 'redis:alpine'
ports:
- "6379:4000"
networks:
default:
driver: bridge
Error in Docker: (screenshot not included)
I'm not an expert, but I noticed a couple of things in your docker-compose.yml file.
First, your redis service is missing the network assignment:
    networks:
      - default
Without this, your app service won't be able to find it, as it's not on the same network.
Second, Redis runs on port 6379 by default; if you want it to run on port 4000, I believe you will need to specify an env var for it.
Or maybe you just mixed up the order of the port mapping, which should have been 4000:6379 (host_port:container_port).
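A corrected version of your redis service, as a sketch (assuming you want the standard Redis port and the same default network as partnersusers), could look like this:
  redis:
    image: 'redis:alpine'
    ports:
      # host_port:container_port - Redis listens on 6379 inside the container
      - "6379:6379"
    networks:
      - default
Inside the Compose network your NestJS app then reaches it as redis:6379 (service name plus container port), independent of whatever host port you publish.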
this is my working docker-compose.yml for reference:
---
version: '3.8'
services:
  ...
  redis:
    image: redis
    container_name: redis
    hostname: redis
    environment:
      - ALLOW_EMPTY_PASSWORD=yes
    ports:
      - '6379:6379'
    networks:
      - my-net
  redis-commander:
    depends_on:
      - redis
    container_name: redis-commander
    hostname: redis-commander
    image: rediscommander/redis-commander:latest
    restart: always
    environment:
      - REDIS_HOSTS=local:redis:6379 # note: this has to be the port the redis container exposes.
    ports:
      - "8081:8081"
    networks:
      - my-net
networks:
  my-net:
Hope this helps :)

I can't connect to my MongoDB, which I run in docker compose

Here is my docker compose:
version: "3.9"
services:
app:
container_name: app
build:
context: ./..
dockerfile: deployments/Dockerfile
env_file:
- ../configs/config.env
ports:
- ${APP_PORT:-8080}:8080
networks:
- network
restart: always
db:
image: mongo:latest
container_name: mongodb
restart: always
env_file:
- ../configs/config.env
ports:
- ${MONGODB_PORT:-27017}:27017
environment:
MONGO_INITDB_ROOT_USERNAME: ${MONGODB_ROOT_USER:-admin}
MONGO_INITDB_ROOT_PASSWORD: ${MONGODB_ROOT_PASSWORD:-admin}
volumes:
- ../assets/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh
adminer:
image: adminer
container_name: db-adminer
restart: always
ports:
- ${ADMINER_PORT:-17860}:8080
networks:
- network
depends_on:
- db
networks:
network:
driver: bridge
Here is my mongo-init.sh file:
use newdb
db.createUser(
  {
    user: admin,
    pwd: admin,
    roles: [
      {
        role: "readWrite",
        db: "newdb"
      }
    ]
  }
);
Here I am trying to connect to the database, but I get the following error: topology is connected or connecting
client, err := mongo.Connect(ctx, options.Client().ApplyURI(fmt.Sprintf("mongodb://%s:%s@%s:%s",
    cfg.DbUsername, cfg.DbPassword, cfg.DbHost, cfg.DbPort)))
I don't understand what the cause might be, since everything is fine in the env file and all the variables match the authentication parameters in Mongo.
check this out:
services:
  productinfo:
    container_name: product_cntnr
    build: ./api-productinfo-service
    image: arc1999/api-productinfo-service
    ports:
      - '8090:8090'
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
  scraping:
    build: ./api-scraping-service
    image: arc1999/api-scraping-service
    ports:
      - '8080:8080'
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
  db:
    image: mongo:latest
    container_name: mongo_db
    # environment:
    #   MONGO_INITDB_ROOT_USERNAME: root
    #   MONGO_INITDB_ROOT_PASSWORD: rootpassword
    ports:
      - 27017:27017
    volumes:
      - mongodb_data_container:/data/db
    healthcheck:
      test: echo 'db.runCommand("ping").ok'
      interval: 10s
      timeout: 10s
      retries: 5
volumes:
  mongodb_data_container:
func InitDb() {
    host := "mongodb://mongo_db:27017"
    rb := bson.NewRegistryBuilder()
    rb.RegisterTypeMapEntry(bsontype.EmbeddedDocument, reflect.TypeOf(bson.M{}))
    clientOptions := options.Client().ApplyURI(host).SetRegistry(rb.Build())
    // Connect to MongoDB
    client, err := mongo.Connect(context.TODO(), clientOptions)
    if err != nil {
        log.Panicln(err)
    }
    err = client.Ping(context.TODO(), nil)
    if err != nil {
        log.Panicln(err)
    }
    db = client.Database(os.Getenv("MONGO_DB_NAME"))
    fmt.Println("Connected to MongoDB!")
}
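Applying the same idea back to the compose file from the question, a minimal sketch of the relevant pieces could look like this (service names and image taken from the question; the healthcheck is copied from the example above, and the extra networks entry on db is an assumption so it shares a network with app):
  db:
    image: mongo:latest
    container_name: mongodb
    healthcheck:
      # same ping-style healthcheck as in the compose file above
      test: echo 'db.runCommand("ping").ok'
      interval: 10s
      timeout: 10s
      retries: 5
    networks:
      - network  # assumption: db needs to share a network with app to be reachable by name
  app:
    depends_on:
      db:
        condition: service_healthy
    networks:
      - network
The host in the Go connection string would then be the service/container name (db or mongodb), matching how the example above uses mongo_db.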

Docker-compose Postgres connection refused

I'm running a Postgres DB with pgAdmin and Go using docker-compose.
Problem: I can connect from pgAdmin to Postgres, but I cannot establish a connection from Go.
I tried different combinations of the connection string, but it does not work. The string format is the same as here: https://github.com/karlkeefer/pngr, but with a different container name (database).
(ERROR) Connection URL:
backend_1 | 2021/08/08 14:24:40 DB connection: database://main:fugZwypczB94m0LP7CcH@postgres:5432/temp_db?sslmode=disable
backend_1 | 2021/08/08 14:24:40 Unalble to open DB connection: dial tcp 127.0.0.1:5432: connect: connection refused
(URI generation same as here https://github.com/karlkeefer/pngr)
Docker:
version: '3.8'
services:
  backend:
    restart: always
    build:
      context: backend
      target: dev
    volumes:
      - ./backend:/root
    ports:
      - "5000:5000"
    env_file: .env
    depends_on:
      - database
  database:
    build: database
    restart: always
    environment:
      POSTGRES_DB: ${POSTGRES_DB}
      POSTGRES_USER: ${POSTGRES_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      PGDATA: /var/lib/postgresql/data
    volumes:
      - ./database/data:/var/lib/postgresql/data
      - ./logs/databse:/var/log/postgresql
      - ./database/migrations:/docker-entrypoint-initdb.d/migrations
    ports:
      - "5432:5432"
  database-admin:
    image: dpage/pgadmin4:5.5
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PG_ADMIN_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PG_ADMIN_PASSWORD}
      PGADMIN_LISTEN_PORT: 80
    ports:
      - "8080:80"
    volumes:
      - ./database/admin:/var/lib/pgadmin
    links:
      - "database:pgsql-server"
    depends_on:
      - database
volumes:
  database:
  database-admin:
Environment:
POSTGRES_HOST=postgres
POSTGRES_PORT=5432
POSTGRES_DB=temp_db
POSTGRES_USER=main
POSTGRES_PASSWORD=fugZwypczB94m0LP7CcH
PG_ADMIN_EMAIL=admin@temp.com
PG_ADMIN_PASSWORD=ayzi2ta8f1TnX3vKQSN1
PG_ADMIN_PORT=80
Go code:
db, err = sqlx.Open("postgres", str)
where str is built by:
func buildConnectionString() string {
    user := os.Getenv("POSTGRES_USER")
    pass := os.Getenv("POSTGRES_PASSWORD")
    if user == "" || pass == "" {
        log.Fatalln("You must include POSTGRES_USER and POSTGRES_PASSWORD environment variables")
    }
    host := os.Getenv("POSTGRES_HOST")
    port := os.Getenv("POSTGRES_PORT")
    dbname := os.Getenv("POSTGRES_DB")
    if host == "" || port == "" || dbname == "" {
        log.Fatalln("You must include POSTGRES_HOST, POSTGRES_PORT, and POSTGRES_DB environment variables")
    }
    str := fmt.Sprintf("database://%s:%s@%s:%s/%s?sslmode=disable", user, pass, host, port, dbname)
    log.Println("DB connection: " + str)
    return str
}
Thanks in advance!
You reference the database hostname as postgres (POSTGRES_HOST=postgres) which is fine, but the container/service name is database.
Either change the name in your compose.yaml from database to postgres or add an explicit hostname field:
  database:
    build: database
    restart: always
    hostname: postgres # <- add this
You may also want to add a dedicated network for multiple container services to talk to one another (or to prevent others from doing so). To do this, add the following to each service you want on the specific network, e.g.
  database:
    # ...
    networks:
      - mynet
  backend:
    # ...
    networks:
      - mynet
and define the network at the end of your compose.yaml
networks:
  mynet:
    name: my-shared-db-network

Knex & postgres connection error - error: role "admin" does not exist

I am trying to connect to a Postgres DB image using Knex, but I am getting the error:
error: role "admin" does not exist
My knexfile is as below:
module.exports = {
  development: {
    debug: true,
    client: 'pg',
    connection: {
      database: process.env.POSTGRES_DB,
      user: process.env.POSTGRES_USER,
      password: process.env.POSTGRES_PASSWORD
    },
    migrations: {
      directory: __dirname + '/src/migrations'
    },
    seeds: {
      directory: __dirname + '/src/seeds'
    }
  }
};
And my docker-compose file is:
version: '3.1'
services:
  db:
    image: postgres
    restart: always
    volumes:
      - ./docker-data/db-data:/var/lib/postgresql/data
    environment:
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD}
      POSTGRESS_USER: ${POSTGRES_USER}
      POSTGRESS_DB: ${POSTGRES_DB}
    ports:
      - 5432:5432
  pgadmin:
    depends_on:
      - db
    image: dpage/pgadmin4
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: ${PGADMIN_DEFAULT_EMAIL}
      PGADMIN_DEFAULT_PASSWORD: ${PGADMIN_DEFAULT_PASSWORD}
    volumes:
      - ./docker-data/pgadmin-data:/root/.pgadmin
    ports:
      - 8080:80
Environment variables are straightforward:
POSTGRES_USER=admin
POSTGRES_PASSWORD=admin
POSTGRES_DB=configs
PGADMIN_DEFAULT_EMAIL=admin@admin.com
PGADMIN_DEFAULT_PASSWORD=admin
Packages details:
"knex": "^0.21.1",
"pg": "^8.3.0"
I ran docker-compose up and, once it completed, I created migrations and then ran npx knex migrate:latest. This results in the error:
error: role "admin" does not exist
at Parser.parseErrorMessage (/Users/test/packages/config-server/node_modules/pg-protocol/dist/parser.js:278:15)
I am not able to debug and fix. Please help.

SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:5432 when using docker to use sequelize

I'm getting the following error when dockerizing a Node/Postgres app using Sequelize as the ORM backend:
Unhandled rejection SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:5432
app_1 |     at connection.connect.err (/home/app/node_modules/sequelize/lib/dialects/postgres/connection-manager.js:170:24)
These lines of code seem to be the culprit; Docker should not be using these credentials, as they are meant for my local setup.
if (process.env.NODE_ENV === "production") {
  var sequelize = new Sequelize(process.env.DATABASE_URL);
} else {
  // docker is looking at these credentials..... when it should not
  var sequelize = new Sequelize("elifullstack", "eli", "", {
    host: "127.0.0.1",
    dialect: "postgres",
    pool: {
      max: 100,
      min: 0,
      idle: 200000,
      // @note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
      acquire: 1000000,
    },
  });
}
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: ./server
    depends_on:
      - database
    ports:
      - 5000:5000
    environment:
      # database refers to the database server at the bottom called "database"
      - PSQL_HOST=database
      - PSQL_USER=postgres
      - PORT=5000
      - PSQL_NAME=elitypescript
    command: npm run server
  client:
    build: ./client
    image: react_client
    links:
      - app
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 3001:3001
    command: npm run start
    env_file:
      - ./client/.env
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5432
volumes:
  database:
./server/dockerFile
FROM node:10.6.0
COPY . /home/app
WORKDIR /home/app
COPY package.json ./
RUN npm install
EXPOSE 5000
I looked at other similar questions, like the following, but they ultimately did not help solve the issue.
Docker - SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
SequelizeConnectionRefusedError: connect ECONNREFUSED 127.0.0.1:3306
I solved it...
What I did was change this:
host: "127.0.0.1",
to this:
let sequelize;
if (process.env.NODE_ENV === "production") {
  sequelize = new Sequelize(process.env.DATABASE_URL);
} else {
  sequelize = new Sequelize(
    process.env.POSTGRES_DB || "elitypescript",
    process.env.POSTGRES_USER || "eli",
    "",
    {
      host: process.env.PSQL_HOST || "localhost",
      dialect: "postgres",
      pool: {
        max: 100,
        min: 0,
        idle: 200000,
        // @note https://github.com/sequelize/sequelize/issues/8133#issuecomment-359993057
        acquire: 1000000,
      },
    }
  );
}
That way the host is set from the Docker environment variable, like this:
PSQL_HOST: database
and that connects to:
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5432
Edit
# docker-compose.yml
version: "3"
services:
  app:
    build: ./server
    depends_on:
      - database
    ports:
      - 5000:5000
    environment:
      PSQL_HOST: database
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
      POSTGRES_USER: ${POSTGRES_USER:-postgres}
      POSTGRES_DB: ${POSTGRES_DB:-elitypescript}
    command: npm run server
  client:
    build: ./client
    image: react_client
    links:
      - app
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 3001:3001
    command: npm run start
    env_file:
      - ./client/.env
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - database:/var/lib/postgresql/data
    ports:
      - 3030:5432
volumes:
  database: