How do I get a service container to exit once the dependent container has finished?
I have a test suite running in the app_unittestbot container that depends_on a PostgreSQL server (postgres:9.5-alpine) running in a separate container. Once the test suite exits, I want to check its return code and halt the database container. With the docker-compose.yml below, the db service container never halts.
docker-compose.yml
version: '2.1'
services:
  app_postgresql95:
    build: ./postgresql95/
    ports:
      - 54321:5432
  app_unittestbot:
    command: /root/wait-for-it.sh app_postgresql95:5432 --timeout=60 -- nose2 tests
    build: ./unittestbot/
    links:
      - app_postgresql95
    volumes:
      - /app/src:/src
    depends_on:
      - 'app_postgresql95'
You can run docker-compose up --abort-on-container-exit to have Compose stop all the containers if any one of them exits. That will likely solve your use case.
For something a little more resilient, I'd probably split this into two compose files, so that an abort caused by postgresql exiting can't accidentally be registered as a successful test run. Then you'd just run those files in the order you need:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up
docker-compose -f docker-compose.yml down
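Since the original goal was to check the test suite's return code, it may also help to add --exit-code-from (which implies --abort-on-container-exit) so that docker-compose up exits with the status of the test container. A minimal sketch, assuming app_unittestbot is the service defined in docker-compose.test.yml:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up --exit-code-from app_unittestbot
rc=$?                                # exit status of the test suite
docker-compose -f docker-compose.yml down
exit $rc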
Related
I know I can use two different terminals. Here is the example.
I have a create-react-app project, and I want to run
sudo docker compose up
And I want to interact with the test service via the terminal; Jest gives me some options, like a to run all tests or p to filter by a filename pattern.
docker-compose.yml
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    stdin_open: true # docker run -i
    tty: true # docker run -t
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
When I run
sudo docker compose up
I can't interact with the test service.
When I run
sudo docker compose run --rm test
I can interact with Jest.
Is there any way to use only one terminal and interact directly with the test service?
Is there any way to use only one terminal and interact directly with the test service?
No, not in the way you probably expect.
docker-compose up is intended for starting the whole project with all its services. Running it in the foreground will output the logs of all started containers.
In no case will docker-compose up connect you directly to one of the containers.
Instead, use docker-compose run to start a one-off container or docker-compose exec to connect to a running service.
So, to start the project and connect to the test container using one terminal you could do
docker compose up -d # start the services in the background
docker compose run --rm test
Knowing this, you can now further optimize your docker-compose.yml for this:
drop stdin_open and tty since these will be automatically set when using docker-compose run / docker-compose exec
use service profiles so the test service is not started by default but only when you start it explicitly (and interactively) with docker-compose run
if test needs the dev service to be running add a depends_on so it will be started automatically whenever test is started
services:
  test:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm test'
    depends_on:
      - dev # start `dev` whenever `test` is started
    profiles:
      - cli-only # start `test` only when specified explicitly
  dev:
    image: 'node'
    working_dir: '/app'
    volumes:
      - ./:/app
    entrypoint: 'npm start'
With this you can simply run
docker compose run --rm test
to start an interactive terminal connected to test. The dev service will be started automatically if it is not already running, so it's basically the same as above but without the prior docker-compose up -d.
On the other hand, running
docker compose up
would now only start the dev service.
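If you ever do want docker compose up to include the profiled service, the profile can be activated explicitly:
docker compose --profile cli-only up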
I am building a Flask application in Python. I'm using SQLAlchemy to connect to PostgreSQL.
In the Flask application, I'm using this to connect SQLAlchemy to PostgreSQL:
engine = create_engine('postgresql://postgres:[mypassword]@db:5432/employee-manager-db')
And this is my docker-compose.yml
version: '3.8'
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    ports:
      - 8000:8000
    volumes:
      - .:/app
    links:
      - db:db
    depends_on:
      - pgadmin
  db:
    image: postgres:14.5
    restart: always
    volumes:
      - .dbdata:/var/lib/postgresql
    hostname: postgres
    environment:
      POSTGRES_PASSWORD: [mypassword]
      POSTGRES_DB: employee-manager-db
  pgadmin:
    image: 'dpage/pgadmin4'
    restart: always
    environment:
      PGADMIN_DEFAULT_EMAIL: [myemail]
      PGADMIN_DEFAULT_PASSWORD: [mypassword]
    ports:
      - "5050:80"
    depends_on:
      - db
I can do "docker build -t employee-manager ." to build the image. However, when I do "docker run -p 5000:5000 employee-manager" to run the image, I get an error saying
conn = _connect(dsn, connection_factory=connection_factory, **kwasync)
psycopg2.OperationalError: could not translate host name "db" to address: Try again
Does anybody know how to fix this? Thank you so much for your help
Your containers are on different networks and that is why they don't see each other.
When you run docker-compose up, docker-compose creates a separate network and puts all the services defined inside docker-compose.yml on that network. You can see that with docker network ls.
When you run a container with docker run, it is attached to the default bridge network, which is isolated from other networks.
There are several ways to fix this, but this one will serve you in many other scenarios:
Run docker container ls and identify the name or ID of the db container that was started with docker-compose
Then run your container with:
# ID_or_name from the previous point
docker run -p 5000:5000 --network container:<ID_or_name> employee-manager
This attaches the new container to the same network stack as your database container.
Other ways include creating a network manually and defining that network as default in the docker-compose.yml. Then you can use docker run --network <network_name> ... to attach other containers to that network.
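A minimal sketch of that approach (the network name shared-net is just an example):
docker network create shared-net
# docker-compose.yml: make the pre-created network the project's default
networks:
  default:
    name: shared-net
    external: true
# ad-hoc containers can then join the same network:
docker run -p 5000:5000 --network shared-net employee-manager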
docker run doesn't read any of the information in the docker-compose.yml file, and it doesn't see things like the Docker network that Compose automatically creates.
In your case you already have the service fully defined in the docker-compose.yml file, so you can use Compose commands to build and restart it:
docker-compose build
docker-compose up -d # will delete and recreate changed containers
(If the name of the image is important to you – maybe you're pushing to a registry – you can specify image: alongside build:. links: are obsolete and you should remove them. I'd also avoid replacing the image's content with volumes:, since this misses any setup or modification that's done in the Dockerfile and it means you're running untested code if you ever deploy the image without the mount.)
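For instance, a sketch of keeping build: while also naming the image (the registry name here is hypothetical):
services:
  backend:
    build:
      context: .
      dockerfile: Dockerfile
    image: registry.example.com/employee-manager:latest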
I have a schema script /data/cb-create.sh that I have made available on a container volume. When I run docker-compose up, the server is not yet initialized at the time the command is executed, so those commands fail because the server hasn't launched. I never see the Starting Couchbase Server -- Web UI available at http://<ip>:8091 log line before the .sh script runs to initialize the schema. This is my docker compose file. How can I sequence it properly?
version: '3'
services:
  couchbase:
    image: couchbase:community-6.0.0
    deploy:
      replicas: 1
    ports:
      - 8091:8091
      - 8092:8092
      - 8093:8093
      - 8094:8094
      - 11210:11210
    volumes:
      - ./:/data
    command: /bin/bash -c "/data/cb-create.sh"
    container_name: couchbase
volumes:
  kafka-data:
First: you should choose either an entrypoint or a command statement, not both.
One option is to write a small bash script that puts these steps in order: start the server, wait until it is ready, then run the schema script.
Then specify that bash script as the command.
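A minimal sketch of such a wrapper, assuming the official couchbase image's /entrypoint.sh, that curl is available in the image, and that a response on port 8091 means the server is up (verify all three against your image):
#!/bin/bash
set -e
# start Couchbase in the background via the image's own entrypoint
/entrypoint.sh couchbase-server &
# wait until the REST API answers on 8091
until curl -sf http://localhost:8091/pools > /dev/null; do
  echo "waiting for Couchbase to start..."
  sleep 2
done
# server is reachable: run the schema script
/data/cb-create.sh
# keep the container alive on the server process
wait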
I created the following Dockerfile:
FROM postgres
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt
ENTRYPOINT ["docker-entrypoint.sh"]
And docker-compose:
version: '3'
services:
  codes:
    container_name: short_codes
    build:
      context: codes_store
    image: andrey1981spb/short_codes
    ports:
      - 5432:5432
docker-compose up completes successfully. But when I try to enter the container, I get:
"Container ... is not running"
I suppose I have to add some run command to the Dockerfile. But what is that command?
Your container is probably not running because declaring ENTRYPOINT in your Dockerfile resets the CMD inherited from the postgres base image, so the entrypoint script runs with no command and exits immediately.
You also don't need to supply a run command: the entrypoint runs a command on startup, and docker-compose up starts your container automatically.
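If the goal is simply to ship the CSV on top of the stock postgres startup, a minimal sketch is to not redeclare the entrypoint at all (note that files pre-placed in the data directory can interfere with initdb on first start):
FROM postgres
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt
# the base image already defines ENTRYPOINT ["docker-entrypoint.sh"]
# and CMD ["postgres"], so neither needs to be repeated here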
I'm using docker compose to run tests for my application. The configuration looks like:
version: '2'
services:
  web:
    build: .
    image: myapp:web
    ports:
      - "3000:3000"
    depends_on:
      - mongo
    links:
      - mongo
  mongo:
    image: mongo:3.2.6
Right now, when I run docker-compose up, a volume is created automatically (by docker-compose or the mongo image?) which maps the Mongo storage data to a path like: /var/lib/docker/volumes/c297a1c91728cb225a13d6dc1e37621f966067c1503511545d0110025479ea65/_data.
Since I am running tests rather than production code, I'd actually like to avoid this persistence (the mongo data should go away when the docker-compose exits) -- is this possible? If so, what's the best way to do it?
After the containers exit (or you stop them with a down command), clean up the old containers and volumes with
docker-compose rm -v
The -v tells it to also remove the anonymous volumes attached to the containers, like the one the mongo image declares for /data/db; host bind mounts are left untouched.
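Alternatively, a single command does the stop and the cleanup in one step; -v here also removes named volumes declared in the volumes: section:
docker-compose down -v   # stop and remove containers, networks, and volumes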