I have a service in my docker-compose file which depends on another service, but I'd like to negate the condition, i.e. trigger when the dependency is not service_healthy. So, the opposite of the following:
service1:
  depends_on:
    service2:
      condition: service_healthy
So basically I'd like to start service1 when service2 is not healthy.
Secondly, based on the documentation for depends_on, the condition option has been removed and is no longer supported in version 3 of the Compose file format.
So how can the above logic be achieved?
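For reference, the non-negated form is still accepted by the 2.x file formats (it was dropped in v3, and later restored by the Compose Specification); there is no built-in way to negate it in any version. A minimal sketch of the v2.1 syntax, with a hypothetical healthcheck added to service2 (service_healthy requires one):

```yaml
version: '2.1'
services:
  service1:
    image: bash
    depends_on:
      service2:
        condition: service_healthy
  service2:
    image: bash
    # hypothetical healthcheck; replace with a real probe for your service
    healthcheck:
      test: ["CMD", "true"]
      interval: 5s
```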
Here is a workaround in which the main container waits for the other hosts to exit, by pinging them and proceeding once both are offline:
version: '3'
services:
  main:
    image: bash
    depends_on:
      - test01
      - test02
    command: bash -c "sleep 2 && until ! ping -qc1 test01 && ! ping -qc1 test02; do sleep 1; done &>/dev/null"
    networks:
      intra:
        ipv4_address: 172.10.0.254
  test01:
    image: bash
    hostname: test01
    command: bash -c "ip route && sleep 10"
    networks:
      intra:
        ipv4_address: 172.10.0.11
  test02:
    image: bash
    hostname: test02
    command: bash -c "ip route && sleep 20"
    networks:
      intra:
        ipv4_address: 172.10.0.12
networks:
  intra:
    driver: bridge
    ipam:
      config:
        - subnet: 172.10.0.0/24
See also: Docker compose - Start service only when other service had completed
This is the portion of the docker-compose file that has served us well to date. However, I now need to convert this to a single-node replica set (for transactions to work). I don't want any secondary or arbiter, just the primary node. What am I missing to get this working?
mongo:
  image: mongo:4.4.3
  container_name: mongo
  restart: unless-stopped
  environment:
    MONGO_INITDB_ROOT_USERNAME: root
    MONGO_INITDB_ROOT_PASSWORD: myPass
  command: mongod --port 27017
  ports:
    - '27017:27017'
  volumes:
    - ./data/mongodb:/data/db
    - ./data/mongodb/home:/home/mongodb/
    - ./configs/mongodb/mongo-init.sh:/docker-entrypoint-initdb.d/mongo-init.sh:ro
Got it working. I inserted the following into the block in my question above:
hostname: mongodb
volumes:
  - ./data/mongodb/data/log/:/var/log/mongodb/
# the healthcheck avoids the need to initiate the replica set
healthcheck:
  test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u root -p imagiaRoot --quiet) -eq 1
  interval: 10s
  start_period: 30s
I was unable to initiate the replica set via the healthcheck, so I used the bash script below instead. For Windows users: be sure to address your DB with the name of your computer. For example:
mongodb://DESKTOP-QPRKMN2:27017
run-test.sh
#!/bin/bash
echo "Running docker-compose"
docker-compose up -d
echo "Waiting for DB to initialize"
sleep 10
echo "Initiating DB"
docker exec mongo_container mongo --eval "rs.initiate();"
echo "Running tests"
# test result
if go test ./... -v
then
    echo "Test PASSED"
else
    echo "Test FAILED"
fi
# cleanup
docker-compose -f docker-compose.test.yml down
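The fixed `sleep 10` is a guess; if the DB needs longer to come up, the `rs.initiate()` call fails. A hedged alternative (a sketch, not part of the original script) is a generic retry helper that re-runs a command until it succeeds or gives up:

```shell
#!/bin/bash
# Sketch: retry a command up to N times with a fixed delay between
# attempts. Usage: retry <tries> <delay-seconds> <command...>
retry() {
  local tries=$1 delay=$2 n=1
  shift 2
  until "$@"; do
    if [ "$n" -ge "$tries" ]; then
      echo "command failed after $tries attempts: $*" >&2
      return 1
    fi
    n=$((n + 1))
    sleep "$delay"
  done
}
```

In run-test.sh, the `sleep 10` plus `docker exec` pair could then become something like `retry 30 1 docker exec mongo_container mongo --eval "rs.initiate();"` (that mongo invocation is the one already used in the script; whether it exits nonzero on failure depends on the mongo shell version).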
docker-compose file
version: '3.8'
services:
  mongo:
    hostname: $HOST
    container_name: mongo_container
    image: mongo:5.0.3
    volumes:
      - ./test-db.d
    expose:
      - 27017
    ports:
      - "27017:27017"
    restart: always
    command: ["--replSet", "test", "--bind_ip_all"]
This forum post was very helpful: https://www.mongodb.com/community/forums/t/docker-compose-replicasets-getaddrinfo-enotfound/14301/4
Hi everyone. I have an odd problem (who hasn't?).
I have this docker-compose file:
version: '3.4'
services:
  ludustack-web:
    container_name: ludustack-web
    image: ${DOCKER_REGISTRY-}ludustack-web
    build:
      context: .
      dockerfile: LuduStack.Web/Dockerfile
    networks:
      - ludustack-network
    ports:
      - '80:80'
      - '443:443'
    depends_on:
      - 'ludustack-db'
  ludustack-db:
    container_name: ludustack-db
    command: mongod --auth
    image: mongo:latest
    hostname: mongodb
    networks:
      - ludustack-network
    ports:
      - '27017:27017'
    env_file:
      - .env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=${MONGO_INITDB_ROOT_USERNAME}
      - MONGO_INITDB_ROOT_PASSWORD=${MONGO_INITDB_ROOT_PASSWORD}
      - MONGO_INITDB_DATABASE=${MONGO_INITDB_DATABASE}
      - MONGO_REPLICA_SET_NAME=${MONGO_REPLICA_SET_NAME}
    healthcheck:
      test: test $$(echo "rs.initiate().ok || rs.status().ok" | mongo -u $${MONGO_INITDB_ROOT_USERNAME} -p $${MONGO_INITDB_ROOT_PASSWORD} --quiet) -eq 1
      interval: 60s
      start_period: 60s
    command: ["--replSet", "${MONGO_REPLICA_SET_NAME}", "--bind_ip_all"]
networks:
  ludustack-network:
    driver: bridge
The problem is that the web application only waits for the mongodb container to be ready, not the replica set itself. So, when the application starts, it crashes because the replica set is not ready yet. Right after the crash, the replica set logs show it continuing its job.
Any tips on how to make the web application wait the replicaset to be ready?
The application did wait, for 30 seconds. You can increase the timeout by adjusting the serverSelectionTimeoutMS URI option or through language-specific means.
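For example, as a URI option (the host, credentials, replica set name, and 120-second value here are placeholders, not taken from the question):

```
mongodb://root:myPass@mongodb:27017/?replicaSet=rs0&serverSelectionTimeoutMS=120000
```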
I've got the following docker-compose file and it serves up the application on port 80 fine.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
volumes:
  static:
  postgres_data:
Once in there I can log into the admin and add an extra user, which gets saved to the database: I can reload the page and the user is still there. Once I stop the backend docker container, however, that user is gone. Given that Postgres is running in a different container and I'm not bringing it down, I'm unsure how stopping and restarting the backend container causes the data to become unavailable.
Thanks in advance.
EDIT:
I'm bringing up the docker container with the following command.
docker-compose -f docker-compose.prod.yml up -d
I'm bringing down the container by just stopping it in Docker Desktop.
I'm running DJANGO 3 for the backend and I've also tried adding a superuser in the terminal when the container is running:
# python manage.py createsuperuser
Username (leave blank to use 'root'): mikey
Email address:
Password:
Password (again):
This password is too common.
Bypass password validation and create user anyway? [y/N]: y
Superuser created successfully.
Which works, and the user appears while the container is running. However, once again, when I shut the container down via Docker Desktop and then restart it, the user that was just created is gone.
FURTHER EDIT:
settings.py using dotenv "from dotenv import load_dotenv"
DATABASES = {
    "default": {
        "ENGINE": os.getenv("SQL_ENGINE"),
        "NAME": os.getenv("SQL_DATABASE"),
        "USER": os.getenv("SQL_USER"),
        "PASSWORD": os.getenv("SQL_PASSWORD"),
        "HOST": os.getenv("SQL_HOST"),
        "PORT": os.getenv("SQL_PORT"),
    }
}
with the .env.prod file having the following values:
DEBUG=0
DJANGO_ALLOWED_HOSTS=localhost 127.0.0.1 [::1]
SQL_ENGINE=django.db.backends.postgresql
SQL_DATABASE=postgres
SQL_USER=postgres
SQL_PASSWORD=postgres
SQL_HOST=db
SQL_PORT=5432
SOLUTION:
Read the comments to see the diagnosis by other legends, but the updated docker-compose file looks like this. Note the "depends_on" block.
version: '3'
services:
  backend:
    build: ./Django-Backend
    command: gunicorn testing.wsgi:application --bind 0.0.0.0:8000 --log-level debug
    expose:
      - "8000"
    volumes:
      - static:/code/backend/static
    env_file:
      - ./.env.prod
    depends_on:
      - db
  nginx:
    build: ./nginx
    ports:
      - 80:80
    volumes:
      - static:/static
    depends_on:
      - backend
  db:
    image: postgres
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    expose:
      - "5432"
volumes:
  static:
  postgres_data:
FINAL EDIT:
Added the following code to my entrypoint.sh file to ensure Postgres is ready to accept connections before the backend starts.
if [ "$DATABASE" = "postgres" ]
then
    echo "Waiting for postgres..."
    while ! nc -z $SQL_HOST $SQL_PORT; do
        sleep 0.1
    done
    echo "PostgreSQL started"
fi
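The nc loop above waits forever if the database never comes up. A bounded variant (a sketch; it uses bash's built-in /dev/tcp instead of nc, so the timeout and probe method are assumptions, not part of the original entrypoint):

```shell
#!/bin/bash
# Sketch: wait for host:port to accept TCP connections, but give up
# after a timeout instead of looping forever.
# Usage: wait_for_port <host> <port> <timeout-seconds>
wait_for_port() {
    local host=$1 port=$2 timeout=$3
    local deadline=$((SECONDS + timeout))
    # probe inside a subshell; /dev/tcp/<host>/<port> is a bash feature
    while ! (echo -n > "/dev/tcp/$host/$port") 2>/dev/null; do
        if [ "$SECONDS" -ge "$deadline" ]; then
            echo "timed out waiting for $host:$port" >&2
            return 1
        fi
        sleep 0.1
    done
    echo "$host:$port is ready"
}
```

An entrypoint could then call `wait_for_port "$SQL_HOST" "$SQL_PORT" 30` and abort with a clear error instead of hanging.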
I have a few systems where I use docker-compose and there is no problem.
However, I have one here where 'down' doesn't do anything at all.
'up' works perfectly though. This is on MacOS.
The project is nicknamed 'stormy', and here is the script:
version: '3.3'
services:
  rabbitmq:
    container_name: stormy_rabbitmq
    image: rabbitmq:management-alpine
    restart: unless-stopped
    ports:
      - 5672:5672
      - 15672:15672
    expose:
      - 5672
    volumes:
      #- /appdata/stormy/rabbitmq/etc/:/etc/rabbitmq/
      - /appdata/stormy/rabbitmq/data/:/var/lib/rabbitmq/
      - /appdata/stormy/rabbitmq/logs/:/var/log/rabbitmq/
    networks:
      - default
  settings:
    container_name: stormy_settings
    image: registry.gitlab.com/robinhoodcrypto/stormy/settings:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/settings:/appdata/stormy/settings
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
  capture:
    container_name: stormy_capture
    image: registry.gitlab.com/robinhoodcrypto/stormy/capture:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/capture:/appdata/stormy/capture
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
  livestream:
    container_name: stormy_livestream
    image: registry.gitlab.com/robinhoodcrypto/stormy/livestream:latest
    restart: unless-stopped
    volumes:
      - /appdata/stormy/capture:/appdata/stormy/livestream
    external_links:
      - stormy_rabbitmq:rabbitmq
    networks:
      - default
networks:
  default:
    external:
      name: stormy-network
the 'up' script is as follows:
[ ! "$(docker network ls | grep stormy-network)" ] && docker network create stormy-network
echo '*****' | docker login registry.gitlab.com -u 'gitlab+deploy-token-******' --password-stdin
docker-compose down
docker-compose build --pull
docker-compose -p 'stormy' up -d
and the 'down' is simply:
docker-compose down
version:
$ docker-compose -v
docker-compose version 1.24.1, build 4667896b
when I do 'down', here is the output:
$ docker-compose down
Network stormy-network is external, skipping
and I put a verbose log output at: https://pastebin.com/Qnw5J88V
Why isn't 'down' working?
The docker-compose -p option sets the project name which gets included in things like container names and labels; Compose uses it to know which containers belong to which Compose services. You need to specify it on all of the commands that interact with containers (docker-compose up, down, ps, ...); if you're doing this frequently, setting the COMPOSE_PROJECT_NAME environment variable might be easier.
#!/bin/sh
export COMPOSE_PROJECT_NAME=stormy
docker-compose build --pull
docker-compose down
docker-compose up -d
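Equivalently, without the environment variable, the project name can be passed explicitly to every command (same commands as the 'up' script in the question):

```
docker-compose -p stormy build --pull
docker-compose -p stormy down
docker-compose -p stormy up -d
```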
I am trying to build an image and deploy it to a VPS.
I am running the app successfully with
docker-compose up
Then I build it with
docker build -t mystore .
When I try to run it for a test, locally or on the VPS through Docker Cloud:
docker run -p 4000:8000 mystore
The container works fine, but when I hit http://0.0.0.0:4000/
I am getting:
OperationalError at /
could not translate host name "db" to address: Name or service not known
I have changed the postgresql.conf listen_addresses to "*", but nothing changes. The PostgreSQL logs are empty. I am running macOS.
Here is my DATABASE config:
DATABASES = {
    'default': {
        'ENGINE': 'django.db.backends.postgresql_psycopg2',
        'NAME': 'postgres',
        'USER': 'user',
        'PASSWORD': 'password',
        'HOST': 'db',
        'PORT': '5432',
    }
}
This is the Dockerfile
FROM python:3.5
ENV PYTHONUNBUFFERED 1
RUN \
    apt-get -y update && \
    apt-get install -y gettext && \
    apt-get clean
ADD requirements.txt /app/
RUN pip install -r /app/requirements.txt
ADD . /app
WORKDIR /app
EXPOSE 8000
ENV PORT 8000
CMD ["uwsgi", "/app/saleor/wsgi/uwsgi.ini"]
This is the docker-compose.yml file:
version: '2'
services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
  redis:
    image: redis
    ports:
      - '6379:6379'
  celery:
    build:
      context: .
      dockerfile: Dockerfile
    env_file: common.env
    command: celery -A saleor worker --app=saleor.celeryconf:app --loglevel=info
    volumes:
      - .:/app:Z
    links:
      - redis
    depends_on:
      - redis
  search:
    image: elasticsearch:5.4.3
    mem_limit: 512m
    environment:
      - "ES_JAVA_OPTS=-Xms512m -Xmx512m"
    ports:
      - '127.0.0.1:9200:9200'
  web:
    build: .
    command: python manage.py runserver 0.0.0.0:8000
    env_file: common.env
    depends_on:
      - db
      - redis
      - search
    ports:
      - '8000:8000'
    volumes:
      - .:/app:Z
  makemigrations:
    build: .
    command: python manage.py makemigrations --noinput
    volumes:
      - .:/app:Z
  migration:
    build: .
    command: python manage.py migrate --noinput
    volumes:
      - .:/app:Z
You forgot to add links to your web service:
web:
  build: .
  command: python manage.py runserver 0.0.0.0:8000
  env_file: common.env
  depends_on:
    - db
    - redis
    - search
  links: # <- here
    - db
    - redis
    - search
  ports:
    - '8000:8000'
  volumes:
    - .:/app:Z
Check the available networks. There are 3 by default:
$ docker network ls
NETWORK ID          NAME                DRIVER              SCOPE
db07e84f27a1        bridge              bridge              local
6a1bf8c2d8e2        host                host                local
d8c3c61003f1        none                null                local
I've made a simplified version of your docker-compose setup, with only postgres:
version: '2'
services:
  postgres:
    image: postgres
    container_name: postgres
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=password
    ports:
      - '5432:5432'
    networks:
      random:
networks:
  random:
I gave the postgres container the name postgres and called the service postgres, created a network called 'random' (the last lines), and added the postgres service to that network. If you don't specify a network, you will see that docker-compose creates its own self-named network.
After starting docker-compose, you will have 4 networks: a new bridge network called random in addition to the three defaults.
Check in which network your docker compose environment is created by inspecting for example your postgres container:
$ docker inspect postgres
Mine is created in the network 'random':
"Networks": {
    "random": {..
Now start your mystore container in the same network:
$ docker run -p 4000:8000 --network=random mystore
You can check again with docker inspect. To be sure you can exec inside your mystore container and try to ping postgres. They are deployed inside the same network so this should be possible and your container should be able to translate the name postgres to an address.
In your docker-compose.yml, add a network and attach your containers to it, like so.
To each container definition, add:
networks:
  - mynetwork
and then, at the end of the file, add:
networks:
  mynetwork:
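Put together, a minimal sketch of the resulting file (only the db and web services from the question's compose file are shown; the rest follow the same pattern):

```yaml
version: '2'
services:
  db:
    image: postgres
    networks:
      - mynetwork
  web:
    build: .
    depends_on:
      - db
    networks:
      - mynetwork
networks:
  mynetwork:
```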