docker-compose up stuck on "Attaching to ...." - docker-compose

I run the command below in the folder containing docker-compose.yml:
docker-compose up
my "docker-compose.yml" file:
version: '3'
services:
  ubuntu:
    image: "ubuntu:latest"
    tty: true
But the issue is:
Jianfengs-MBP:homedocker jianfengli$ docker-compose up ubuntu
Recreating homedocker_ubuntu_1 ... done
Attaching to homedocker_ubuntu_1
My Docker version info (from About Docker):
Engine: 18.06.1-ce
Compose: 1.22.0

This is the expected behaviour. After attaching, it prints the container's logs; if there are no logs, it just waits for new output.
If you want up to return immediately, run it in detached mode: docker-compose up -d
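For example (a sketch using the ubuntu service defined above), you can start it detached and still get at its output or a shell:
$ docker-compose up -d ubuntu
$ docker-compose logs -f ubuntu   # follow the container's logs without blocking the shell
$ docker-compose exec ubuntu bash # open an interactive shell in the running container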

Related

unknown shorthand flag: 'd' in -d docker compose

I am working with Docker Compose. When I try to run it in the background, it shows the error: unknown shorthand flag: 'd' in -d
I tried it this way:
docker compose -d up
docker-compose.yml
version: '3'
networks:
  loki:
services:
  loki:
    image: grafana/loki:2.5.0
    # volumes:
    #   - ./loki:/loki
    ports:
      - 3100:3100
    networks:
      - loki
  promtail:
    image: grafana/promtail
    volumes:
      - ./promtail:/etc/promtail
      - /var/log/nginx/:/var/log/nginx/
    command: -config.file=/etc/promtail/promtail-config.yml
    ports:
      - 9080:9080
    networks:
      - loki
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    networks:
      - loki
-d is an option of the up subcommand, not a global flag.
If you run docker compose up --help you will see more information.
To solve the problem, run docker compose up -d
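As a quick sketch of where the flag goes (the last two forms are equivalent):
$ docker compose -d up       # wrong: -d is not a global flag
$ docker compose up -d       # correct: the detach flag belongs to the up subcommand
$ docker compose up --detach # correct: long form of -d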
We were using a legacy version of Docker for compatibility purposes; in the older versions it's docker-compose not docker compose. Changing it to be hyphenated resolved this error.
The accepted answer is the right answer for the question asker, and anyone else putting the -d in the wrong place. But this is the top hit for the error message, and I'm sure I'm not the only one getting this error after running:
docker-compose up -d
The accepted answer telling me to run exactly what I ran was pretty confusing. I finally worked out that:
docker-compose is a separate package from docker, at least on Arch Linux, and likely elsewhere, and
If docker-compose isn't installed, Docker thinks this makes sense:
$ docker compose up -d
unknown shorthand flag: 'd' in -d
I think it would be infinitely more sensible to respond with:
docker: 'compose' is not a docker command.
Which is what it says if you don't use -d while not having docker-compose installed. I've now installed docker-compose and things are working, but I thought it was worth the time to hopefully save someone else some trouble if they end up here because they have docker but not docker-compose.
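If you are unsure which variant you have, a quick check (assuming a typical installation) is:
$ docker compose version   # works if the Compose V2 plugin is installed
$ docker-compose --version # works if the standalone docker-compose binary is installed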

Docker-compose: Container is not running

I created the following Dockerfile:
FROM postgres
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt
ENTRYPOINT ["docker-entrypoint.sh"]
And my docker-compose.yml:
version: '3'
services:
  codes:
    container_name: short_codes
    build:
      context: codes_store
    image: andrey1981spb/short_codes
    ports:
      - 5432:5432
docker-compose up completes successfully, but when I try to enter the container, I receive:
"Container ... is not running"
I suppose I have to add some run command to the Dockerfile, but what is this command?
Your container is probably not running because you haven't copied your docker-entrypoint.sh script anywhere into your container.
You also don't need to supply a run command, since the entrypoint runs a command on startup, and docker-compose up starts your container automatically.
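For example, one minimal sketch is to drop the ENTRYPOINT override entirely and rely on the base image's own startup script, assuming the stock postgres behaviour is what you want:
FROM postgres
# Seed file only; no ENTRYPOINT/CMD override, so the base image's
# docker-entrypoint.sh starts the database server as usual.
COPY short_codes.csv /var/lib/postgresql/data/short_codes.txt
If the container still stops, docker-compose logs codes will show why it exited.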

Changing Environment in docker-compose up

I'm new to docker.
Here is my simple docker-compose file.
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=production
To run this, I usually use the docker-compose up command.
Can I change the NODE_ENV variable to something else while running docker-compose up?
For example:
docker-compose up -x NODE_ENV=staging
Use docker-compose run: it lets you manage individual services rather than the complete stack, and is useful for one-off commands.
$ docker-compose run -d -e NODE_ENV=staging web
Ref - https://docs.docker.com/compose/reference/run/
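Note that docker-compose run does not publish the ports declared on the service unless you ask for them, so a fuller sketch would be:
$ docker-compose run -d --service-ports -e NODE_ENV=staging web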
OR
The best way I can see as of now is to export the environment variable in the shell before running docker-compose up, as below:
$ export NODE_ENV=staging && docker-compose up -d
Your docker-compose.yml will then look something like this:
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=${NODE_ENV}
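Alternatively (a sketch, assuming a POSIX shell), you can set the variable just for the one command:
$ NODE_ENV=staging docker-compose up -d
and give it a default in the compose file so an unset variable falls back to production:
environment:
  - NODE_ENV=${NODE_ENV:-production}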

docker-compose exit depends_on service after tests

How do I get a service container to exit once the dependent container has finished?
I have a test suite running in the app_unittestbot container that depends_on a PostgreSQL server (postgres:9.5-alpine) running in a separate container. Once the test suite exits, I want to check its return code and halt the database container. With the docker-compose.yml below, the db service container never halts.
docker-compose.yml
version: '2.1'
services:
  app_postgresql95:
    build: ./postgresql95/
    ports:
      - 54321:5432
  app_unittestbot:
    command: /root/wait-for-it.sh app_postgresql95:5432 --timeout=60 -- nose2 tests
    build: ./unittestbot/
    links:
      - app_postgresql95
    volumes:
      - /app/src:/src
    depends_on:
      - 'app_postgresql95'
You can run docker-compose up --abort-on-container-exit to have compose stop all the containers if any one of them exits. That will likely solve your use case.
For something a little more resilient, I'd probably split this into two compose files so that an abort on postgresql doesn't get accidentally registered as a successful test. Then you'd just run those files in the order you need:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up
docker-compose -f docker-compose.yml down
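To actually propagate the test result, a sketch (assuming the test service is still called app_unittestbot in docker-compose.test.yml) is to let up return that container's exit code and pass it on:
docker-compose -f docker-compose.yml up -d
docker-compose -f docker-compose.test.yml up --exit-code-from app_unittestbot
rc=$?
docker-compose -f docker-compose.yml down
exit $rc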

use nvidia-docker-compose to launch a container, but it exits soon

My docker-compose.yml file:
version: '2'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    ports:
      - "8801:8888"
      - "6001:6008"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/dl-data:/root/dl-data
After nvidia-docker-compose up -d, the container launched but exited soon afterwards.
But when I launch a container with nvidia-docker directly, it works well:
nvidia-docker run -itd -p 6008:6006 -p 8808:8888 -v `pwd`:/root/dl-data --name zl_test
You don't have to use nvidia-docker-compose.
By configuring the nvidia-docker plugin correctly, you can just use docker-compose!
Via the nvidia-docker Git repo (I can confirm it works for me):
Step 1:
Figure out the NVIDIA driver version (it matters). Run:
nvidia-smi
output (abridged):
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                     |
|-------------------------------+----------------------+----------------------+
Step 2:
Create a Docker volume that uses the nvidia-docker plugin. This must be done outside of Compose, as Compose will mangle the volume name if it creates the volume itself.
docker volume create --name=nvidia_driver_367.57 -d nvidia-docker
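To confirm the plugin actually created the volume (an optional check), you can inspect it:
$ docker volume inspect nvidia_driver_367.57  # the Driver field should show nvidia-docker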
Step 3:
In the docker-compose.yml file:
version: '2'
volumes:
  nvidia_driver_367.57: # same name as the volume created above
    external: true # this will use the volume we created above
services:
  cuda:
    command: nvidia-smi
    devices: # this is required
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0 # in general /dev/nvidia#, where # depends on which GPU card you want to use
    image: nvidia/cuda
    volumes:
      - nvidia_driver_367.57:/usr/local/nvidia/:ro
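With that in place, a plain docker-compose up should be enough; the cuda service just runs nvidia-smi, so you should see the same driver table printed from inside the container (a sketch, assuming the file above is saved as docker-compose.yml):
$ docker-compose up cuda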