Docker MongoDB seed notification

I want to deploy a stack with a MongoDB and a web server in Docker Swarm; the DB must be filled with initial data, and I have found a working solution for the seeding itself.
Given that stack services start in an arbitrary order, how can I be sure that the web server will read the initial data correctly?
I probably need some notification system (Redis?), but I am new to MongoDB, so I am looking for well-known solutions to this problem (which I think is pretty common).

I would highly suggest looking at the healthcheck option in docker-compose.yml. You can set the health check command to a MongoDB-specific check; only once MongoDB reports healthy should the web server start sending requests to the MongoDB container.
Have a look at the example file, and adjust the health check to your needs:
version: "3.3"
services:
  mongo:
    image: mongo
    ports:
      - "27017:27017"
    volumes:
      - ./data/mongodb/db:/data/db
    healthcheck:
      test: echo 'db.runCommand("ping").ok' | mongo mongo:27017/test --quiet
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 40s
  webserver:
    image: custom_webserver_image:latest
    volumes:
      - $PWD:/app
    links:
      - mongo
    ports:
      - 8000:8000
    depends_on:
      - mongo
Ref: https://docs.docker.com/compose/compose-file/#healthcheck
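Note that a plain depends_on only waits for the dependency's container to start, not for its health check to pass (and docker stack deploy in Swarm mode ignores depends_on entirely). If you run the stack with plain docker compose up, you can gate startup on health explicitly. A sketch, assuming a recent mongo image (which ships mongosh instead of the legacy mongo shell) and a Compose version supporting the condition form:

```yaml
services:
  mongo:
    image: mongo
    healthcheck:
      # Ping the server; a non-zero exit marks the container unhealthy.
      test: ['CMD-SHELL', 'mongosh --quiet --eval "db.runCommand({ ping: 1 }).ok" || exit 1']
      interval: 10s
      timeout: 10s
      retries: 5
      start_period: 40s
  webserver:
    image: custom_webserver_image:latest
    depends_on:
      mongo:
        condition: service_healthy   # start only after the health check passes
```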

Related

docker-compose - PHP instance seems not to communicate with database service

I'm developing a project based on the GitHub template dunglas/symfony-docker, to which I want to add a Postgres database.
It seems that my docker-compose.yml file is incorrectly configured, because the communication between PHP and Postgres is failing.
Indeed, when I try to perform a Symfony migration, Doctrine returns the following error:
password authentication failed for user "postgres"
When I inspect the PHP logs I notice that PHP is waiting for the database:
php_1 | Still waiting for db to be ready... Or maybe the db is not reachable.
My docker-compose.yml:
version: "3.4"
services:
  php:
    links:
      - database
    build:
      context: .
      target: symfony_php
      args:
        SYMFONY_VERSION: ${SYMFONY_VERSION:-}
        SKELETON: ${SKELETON:-symfony/skeleton}
        STABILITY: ${STABILITY:-stable}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
    healthcheck:
      interval: 10s
      timeout: 3s
      retries: 3
      start_period: 30s
    environment:
      # Run "composer require symfony/orm-pack" to install and configure Doctrine ORM
      DATABASE_URL: ${DATABASE_URL}
      # Run "composer require symfony/mercure-bundle" to install and configure the Mercure integration
      MERCURE_URL: ${CADDY_MERCURE_URL:-http://caddy/.well-known/mercure}
      MERCURE_PUBLIC_URL: https://${SERVER_NAME:-localhost}/.well-known/mercure
      MERCURE_JWT_SECRET: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
  caddy:
    build:
      context: .
      target: symfony_caddy
    depends_on:
      - php
    environment:
      SERVER_NAME: ${SERVER_NAME:-localhost, caddy:80}
      MERCURE_PUBLISHER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
      MERCURE_SUBSCRIBER_JWT_KEY: ${CADDY_MERCURE_JWT_SECRET:-!ChangeMe!}
    restart: unless-stopped
    volumes:
      - php_socket:/var/run/php
      - caddy_data:/data
      - caddy_config:/config
    ports:
      # HTTP
      - target: 80
        published: 80
        protocol: tcp
      # HTTPS
      - target: 443
        published: 443
        protocol: tcp
      # HTTP/3
      - target: 443
        published: 443
        protocol: udp
  ###> doctrine/doctrine-bundle ###
  database:
    image: postgres:${POSTGRES_VERSION:-13}-alpine
    environment:
      POSTGRES_DB: ${POSTGRES_DB:-app}
      # You should definitely change the password in production
      POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-ChangeMe}
      POSTGRES_USER: ${POSTGRES_USER:-symfony}
    volumes:
      - db-data:/var/lib/postgresql/data:rw
      # You may use a bind-mounted host directory instead, so that it is harder to accidentally remove the volume and lose all your data!
      # - ./docker/db/data:/var/lib/postgresql/data:rw
  ###< doctrine/doctrine-bundle ###
volumes:
  php_socket:
  caddy_data:
  caddy_config:
  ###> doctrine/doctrine-bundle ###
  db-data:
  ###< doctrine/doctrine-bundle ###
Extract of my .env file:
POSTGRES_DB=proximityNL
POSTGRES_PASSWORD=postgres
POSTGRES_USER=postgres
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
Can you help me?
Best regards.
UPDATE:
Indeed, I understood on Saturday that it was just necessary to remove the orphan containers and volumes:
docker-compose down --remove-orphans --volumes
When running in a container, 127.0.0.1 refers to the container itself. Docker Compose creates a virtual network where each container has its own IP address, and you can address the containers by their service names.
So your connection string should point to database:5432 instead of 127.0.0.1:5432, like this:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
You use database because that's the service name of your PostgreSQL container in your docker-compose file.
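The host swap described above is mechanical; as an illustration (the URL below is the one from the question, rewritten from a hypothetical 127.0.0.1 form), a one-liner that replaces the localhost address with the compose service name:

```shell
# Rewrite the host part of a Postgres URL from 127.0.0.1 to the
# compose service name "database" (URL is illustrative).
url='postgresql://postgres:postgres@127.0.0.1:5432/proximityNL?serverVersion=13&charset=utf8'
fixed=$(printf '%s' "$url" | sed 's/@127\.0\.0\.1:/@database:/')
echo "$fixed"
```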
In Docker Compose, containers can reach each other via their service names.
So try using the service name in your config:
DATABASE_URL="postgresql://postgres:postgres@database:5432/proximityNL?serverVersion=13&charset=utf8"
and maybe add a link between your php and database services:
services:
  php:
    links:
      - database
This is how I connect a Java app to a MySQL DB.
Docker maps DNS resolution from the Docker host into your containers. See Networking in Compose.
Because of that, your DB URL should look like:
"postgresql://postgres:postgres@database:5432/..."
I understood on Saturday that it was just necessary to remove the orphan containers and volumes:
docker-compose down --remove-orphans --volumes
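If you also want PHP to wait until Postgres actually accepts connections, rather than merely being started, one option is a health check on the database service. A sketch using pg_isready, which ships in the postgres image (the interval numbers are illustrative; $$ escapes $ for Compose so the variables expand inside the container):

```yaml
database:
  image: postgres:${POSTGRES_VERSION:-13}-alpine
  healthcheck:
    # Succeeds once the server accepts connections for this user/db.
    test: ['CMD-SHELL', 'pg_isready -U $${POSTGRES_USER:-symfony} -d $${POSTGRES_DB:-app}']
    interval: 5s
    timeout: 3s
    retries: 10
```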

Nestjs with postgres and redis on docker connection refused

I've dockerized a NestJS app with Postgres and Redis.
How can I fix this issue?
Postgres and Redis are refusing TCP connections.
This is my docker-compose.yml and the console result.
I am using TypeORM and @nestjs/bull for Redis.
Thanks
When using docker-compose, the containers do not need to specify the network they are in, since they can reach each other via their service names (postgres and redis, in your case) because they are on the same default network. See Networking in Compose for more info.
Also, expose doesn't perform any operation and is redundant. Since you specify ports, that is enough to tell Docker which ports of the container are exposed and bound to which ports of the host.
For redis:alpine, the startup command is already redis-server, so it is not necessary to specify it again. The environment you specified is also redundant in this case.
Try the following docker-compose.yaml with all of the above suggestions applied:
version: "3"
services:
  postgres:
    image: postgres:alpine
    restart: always
    ports:
      - 5432:5432
    volumes:
      - ./db_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=root
      - POSTGRES_DB=lighthouse
  redis:
    image: redis:alpine
    restart: always
    ports:
      - 6379:6379
Hope this helps. Cheers! 🍻

Max MongoDB Connections when running with docker-compose

I'm currently running MongoDB and Mongo Express with docker-compose. When I look in Express, it shows I only have 815 available connections. How do I increase this? I tried adding command: maxConns 2000 in the docker-compose file, but it had no impact. I believe MongoDB doesn't limit the number of connections itself, so I assume this is a limitation with docker-compose?
version: '2'
services:
  mongo:
    image: mongo
    restart: always
    ports:
      - 27017:27017
  mongo-express:
    image: mongo-express
    restart: always
    ports:
      - 8081:8081
Unless constrained by maxConns, mongo calculates available connections based on system limits. The UNIX ulimit settings page has guidance on checking and properly configuring ulimit values.
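If you do want to raise the cap explicitly, you can pass the limit to mongod and raise the container's file-descriptor limit, since the available-connections figure is derived from the process's ulimit. A sketch (the numbers are illustrative, not recommendations):

```yaml
services:
  mongo:
    image: mongo
    restart: always
    # Explicit connection cap for mongod.
    command: ["mongod", "--maxConns", "2000"]
    # Raise open-file limits so mongod can actually use that many sockets.
    ulimits:
      nofile:
        soft: 64000
        hard: 64000
    ports:
      - 27017:27017
```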

How do I properly set up my Keystone.js app to run in docker with mongo?

I have built my app which runs fine locally. When I try to run it in docker (docker-compose up) it appears to start, but then throws an error message:
Creating mongodb ... done
Creating webcms ... done
Attaching to mongodb, webcms
...
Mongoose connection "error" event fired with:
MongoError: failed to connect to server [localhost:27017] on first connect
...
webcms exited with code 1
I have read that with Keystone.js you need to configure the Mongo location in the .env file, which I have:
MONGO_URI=mongodb://localhost:27017
Here is my Docker file:
# Use node 9.4.0
FROM node:9.4.0
# Copy source code
COPY . /app
# Change working directory
WORKDIR /app
# Install dependencies
RUN npm install
# Expose API port to the outside
EXPOSE 3000
# Launch application
CMD ["node","keystone"]
...and my docker-compose
version: "2"
services:
# NodeJS app
web:
container_name: webcms
build: .
ports:
- 3000:3000
depends_on:
- mongo
# MongoDB
mongo:
container_name: mongo
image: mongo
volumes:
- ./data:/data/db/mongo
ports:
- 27017:27017
When I run docker ps it confirms that mongo is up and running in a container...
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
f3e06e4a5cfe mongo "docker-entrypoint.s…" 2 hours ago Up 2 hours 0.0.0.0:27017->27017/tcp mongodb
I am either missing some config or I have it configured incorrectly. Could someone tell me what that is?
Any help would be appreciated.
Thanks!
It is not working properly because you are using the wrong host.
Your container does not understand what localhost:27017 is, since that is your computer's address, not the container's address.
It is important to understand that each service runs in its own container with a different IP.
The beauty of docker-compose is that you do not need to know your container's address! It is enough to know the service name:
version: "2"
volumes:
  db-data:
    driver: local
services:
  web:
    build: .
    ports:
      - 3000:3000
    depends_on:
      - mongo
    environment:
      - MONGO_URI=mongodb://mongo:27017
  mongo:
    image: mongo
    volumes:
      - "db-data:/data/db/mongo"
    ports:
      - 27017:27017
Just run docker-compose up and you are all set.
A couple of things that may help:
First. I am not sure what your error logs look like but buried in my error logs was:
...Error: The cookieSecret config option is required when running Keystone in a production environment.Update your app or environment config so this value is supplied to the Keystone constructor....
To solve this problem, in your Keystone entry file (eg: index.js) make sure your Keystone constructor has the cookieSecret parameter set correctly: process.env.NODE_ENV === 'production'
Next. Change the mongo uri from the one Keystone generated (mongoUri: mongodb://localhost/my-keystone) to: mongoUri: 'mongodb://mongo:27017'. Docker needs this because it is the mongo container address. This change should also be reflected in your docker-compose file under the environment variable under MONGO_URI:
environment:
  - MONGO_URI=mongodb://mongo:27017
After these changes your Keystone constructor should look like this:
const keystone = new Keystone({
  adapter: new Adapter(adapterConfig),
  cookieSecret: process.env.NODE_ENV === 'production',
  sessionStore: new MongoStore({ url: 'mongodb://mongo:27017' }),
});
And your docker-compose file, something like this (I used a network instead of links, as Docker has stated that links are a legacy option; I've included mine in case it's useful for anyone else):
version: "3.3"
services:
  mongo:
    image: mongo
    networks:
      - appNetwork
    ports:
      - "27017:27017"
    environment:
      - MONGO_URI=mongodb://mongo:27017
  appservice:
    build:
      context: ./my-app
      dockerfile: Dockerfile
    networks:
      - appNetwork
    ports:
      - "3000:3000"
networks:
  appNetwork:
    external: false
It is better to use MongoDB Atlas if you do not want complications. You can use it locally and in deployment.
Simple steps to get the Mongo URL are available at https://www.mongodb.com/cloud/atlas
Then add an env variable:
CONNECT_TO=mongodb://your_url
For passing the .env to Docker, use:
docker run --publish 8000:3000 --env-file .env --detach --name kb keystoneblog:1.0

CouchDB with docker-compose not reachable from host (but from localhost)

I am setting up CouchDB using docker-compose with the following docker-compose.yml (the following is a minimal example):
version: "3.6"
services:
  couchdb:
    container_name: couchdb
    image: apache/couchdb:2.2.0
    restart: always
    ports:
      - 5984:5984
    volumes:
      - ./test/couchdb/data:/opt/couchdb/data
    environment:
      - 'COUCHDB_USER=admin'
      - 'COUCHDB_PASSWORD=password'
  couchdb_setup:
    depends_on: ['couchdb']
    container_name: couchdb_setup
    image: apache/couchdb:2.2.0
    command: ['/bin/bash', '-x', '-c', 'cat /usr/local/bin/couchdb_setup.sh | tr -d "\r" | bash']
    volumes:
      - ./scripts/couchdb_setup.sh:/usr/local/bin/couchdb_setup.sh:ro
    environment:
      - 'COUCHDB_USER=admin'
      - 'COUCHDB_PASSWORD=password'
The second container executes the mounted script ./scripts/couchdb_setup.sh, which starts with:
until curl -f http://couchdb:5984; do
  sleep 1
done
Now, the issue is that the curl call always returns The requested URL returned error: 502 Bad Gateway. I figured out that CouchDB is only listening on http://localhost:5984 but not on http://couchdb:5984, as is evident when I bash into the couchdb container and issue both curls: for http://localhost:5984 I get the expected response, while http://couchdb:5984 as well as http://<CONTAINER_IP>:5984 (that's http://192.168.32.2:5984, in my case) respond with server 192.168.32.2 is unreachable.
I looked into the configs, especially the [chttpd] settings and the bind_address argument. By default, bind_address is set to any, but I have also tried 0.0.0.0, to no avail.
I'm looking for hints what I did wrong and for advice how to set up CouchDB with docker-compose. Any help is appreciated.
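Independently of the bind-address question, the wait loop in the setup script can be made to fail fast instead of looping forever when the hostname is wrong. A sketch with a bounded retry (the 60-attempt limit is arbitrary; wait_for is a hypothetical helper name):

```shell
# wait_for CMD...: retry CMD once per second, up to 60 attempts,
# returning non-zero if it never succeeds.
wait_for() {
  attempts=0
  until "$@"; do
    attempts=$((attempts + 1))
    if [ "$attempts" -ge 60 ]; then
      echo "service did not become ready" >&2
      return 1
    fi
    sleep 1
  done
}

# Trivial demonstration: a command that succeeds immediately.
wait_for true && echo "ready"
```

In the setup script it would be invoked as wait_for curl -fs http://couchdb:5984, keeping the rest of the script unchanged.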