My docker-compose.yml file:
version: '2'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    ports:
      - "8801:8888"
      - "6001:6008"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/dl-data:/root/dl-data
After nvidia-docker-compose up -d, the container launched but exited soon after.
However, when I launch a container the nvidia-docker way, it works fine:
nvidia-docker run -itd -p 6008:6006 -p 8808:8888 -v `pwd`:/root/dl-data --name zl_test
You don't have to use nvidia-docker-compose.
By configuring the nvidia-docker plugin correctly, you can just use docker-compose!
Via the nvidia-docker git repo (I can confirm it works for me):
Step 1:
Figure out the nvidia driver version (it matters).
Run:
nvidia-smi
output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                 Driver Version: 367.57                    |
|-------------------------------+----------------------+----------------------+
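If you want to script the volume name used in the next step, the driver version can also be queried directly (an optional convenience, not part of the original steps):
nvidia-smi --query-gpu=driver_version --format=csv,noheader
# prints e.g. 367.57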
Step 2:
Create a docker volume that uses the nvidia-docker plugin. This must be done outside of compose, as compose will mangle the volume name if it creates it:
docker volume create --name=nvidia_driver_367.57 -d nvidia-docker
Step 3:
In the docker-compose.yml file:
version: '2'

volumes:
  nvidia_driver_367.57: # same name as the one created above
    external: true # this will use the volume we created above

services:
  cuda:
    command: nvidia-smi
    devices: # this is required
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0 # in general /dev/nvidia#, where # is the index of the GPU card you want to use
    image: nvidia/cuda
    volumes:
      - nvidia_driver_367.57:/usr/local/nvidia/:ro
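To sanity-check the setup (a quick verification, assuming the file above is saved as docker-compose.yml):
docker volume ls    # should list nvidia_driver_367.57
docker-compose up   # the cuda service should print the nvidia-smi table and exit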
I am working with Docker Compose. When trying to run docker compose in the background, it shows the error unknown shorthand flag: 'd' in -d.
I tried it this way:
docker compose -d up
docker-compose.yml
version: '3'

networks:
  loki:

services:
  loki:
    image: grafana/loki:2.5.0
    # volumes:
    #   - ./loki:/loki
    ports:
      - 3100:3100
    networks:
      - loki
  promtail:
    image: grafana/promtail
    volumes:
      - ./promtail:/etc/promtail
      - /var/log/nginx/:/var/log/nginx/
    command: -config.file=/etc/promtail/promtail-config.yml
    ports:
      - 9080:9080
    networks:
      - loki
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    networks:
      - loki
-d is an option of the subcommand up.
If you run docker compose up --help you will get more information.
To solve the problem, run docker compose up -d.
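Side by side:
docker compose -d up   # wrong: -d is parsed as a (nonexistent) global flag
docker compose up -d   # correct: -d (detached mode) is an option of up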
We were using a legacy version of Docker for compatibility purposes; in the older versions it's docker-compose, not docker compose. Changing it to the hyphenated form resolved this error.
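I.e. with the legacy binary the same command becomes:
docker-compose up -d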
The accepted answer is the right answer for the question asker, and anyone else putting the -d in the wrong place. But this is the top hit for the error message, and I'm sure I'm not the only one getting this error after running:
docker-compose up -d
The accepted answer telling me to run exactly what I ran was pretty confusing. I finally worked out that:
docker-compose is a separate package from docker, at least on Arch Linux, and likely elsewhere, and
If docker-compose isn't installed, Docker thinks this makes sense:
$ docker compose up -d
unknown shorthand flag: 'd' in -d
I think it would be infinitely more sensible to respond with:
docker: 'compose' is not a docker command.
Which is what it says if you don't use -d while not having docker-compose installed. I've now installed docker-compose and things are working, but I thought it was worth the time to hopefully save someone else some trouble if they end up here because they have docker but not docker-compose.
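A quick way to check which variant is actually installed:
docker compose version    # succeeds only if the Compose plugin is installed
docker-compose version    # succeeds only if the standalone docker-compose is installed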
I'd like to run some integration tests against a real database, but I fail to start an additional container (for the db), because I need to mount a config file from my repo before it starts up.
This is how I use the database on my local computer (docker-compose):
gremlin-server:
  image: tinkerpop/gremlin-server:3.5
  container_name: 'gremlin-server'
  entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
  networks:
    - graphdb_net
  ports:
    - 8182:8182
  volumes:
    - ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
    - ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-
I guess I cannot use a service container, as the code is not available at the time the service container is started, so it won't pick up my configuration.
That's why I tried to run a container from within my job using --network host (see below). The container seems to be running fine, but I'm still not able to curl it.
- name: Start DB for tests
  run: |
    docker run -d \
      --network host \
      -v ${{ github.workspace }}/dev/conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml \
      -v ${{ github.workspace }}/dev/conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties \
      tinkerpop/gremlin-server:3.5
- name: Test connection
  run: |
    curl "localhost:8182/gremlin?gremlin=g.V().valueMap()"
According to the documentation about the job context, the id of the container network should be available ({{job.container.network}}), but it is empty if you don't use any job-level service or container.
Any ideas what I could try next?
This is what I ended up with: I'm now using docker-compose to run the integration tests (on my local computer as well as on GitHub Actions). I'm just mounting the entire directory/repo into the test container. Pulling the node:14-slim image delays the build by a few seconds, but I guess it's still the best option:
version: "3.2"
services:
gremlin-server:
image: tinkerpop/gremlin-server:3.5
container_name: 'gremlin-server'
entrypoint: ./bin/gremlin-server.sh conf/gremlin-server-config.yaml
networks:
- graphdb_net
ports:
- 8182:8182
volumes:
- ./data/:/opt/gremlin-server/data/
- ./conf/gremlin-server-config.yaml:/opt/gremlin-server/conf/gremlin-server-config.yaml
- ./conf/tinkergraph-empty.properties:/opt/gremlin-server/conf/tinkergraph-empty.properties
- ./conf/initData.groovy:/opt/gremlin-server/scripts/initData.groovy
test:
image: node:14-slim
working_dir: /app
depends_on:
- gremlin-server
networks:
- graphdb_net
volumes:
- ../:/app
environment:
- NEPTUNE_CONNECTION_STRING=ws://gremlin-server:8182
command:
yarn test
networks:
graphdb_net:
driver: bridge
and I'm running them like this in my workflow:
- name: Spin up test environment
  run: |
    docker compose -f dev/docker-compose.yaml pull
    docker compose -f dev/docker-compose.yaml build
- name: Run tests
  run: |
    docker compose -f dev/docker-compose.yaml run test
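If you also want the stack torn down once the tests finish, a cleanup step like this could be appended (my addition, not part of the original workflow):
- name: Tear down test environment
  if: always()
  run: |
    docker compose -f dev/docker-compose.yaml down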
It's based on @DannyB's suggestion and his answer here, so all props go to him.
I have a scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'

services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name        Command                         State    Ports
--------------------------------------------------------------------------
postgres    docker-entrypoint.sh postgres   Up       0.0.0.0:5432->5432/tcp
scrapper    python3                         Exit 0
When running docker-compose up to start db, that container runs on a network that docker-compose creates for the project. As such, a plain docker run ... will not be able to connect to that instance, since it is not on the same network. But you can specify the network with:
docker run --network $network_name
To list the available docker networks, you can run:
docker network ls
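In this case the network will typically be named after the compose project directory; assuming that directory is called scrapper, something like the following should work (the network name is an assumption, so check the docker network ls output first):
docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni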
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
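A minimal sketch of that approach (network and container names are illustrative):
docker network create my_bridge
docker run -d --network my_bridge --name db postgres
docker run -it --network my_bridge scrapper_scrapper scrapy crawl angeloni
# "db" now resolves inside the second container via the bridge's built-in DNS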
New to Docker, and I'm trying to set up Postgres and pgadmin4 to run as a single service on Docker for Mac inside a virtual machine. Everything works, but as soon as I stop the service my data is gone. I'm using a named volume to persist data, but I'm probably doing something wrong. What is it?
Here's my setup:
# create my VM
docker-machine create dbvm
# set the right environment
eval $(docker-machine env dbvm)
Here's my docker-compose.yaml file:
version: '3'

services:
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=my_db
    volumes:
      - pgdata:/pgdata
    ports:
      - 5432:5432
  pgadmin:
    image: fenglc/pgadmin4
    ports:
      - 5050:5050
    volumes:
      - pgadmindata:/pgadmindata

volumes:
  pgdata:
  pgadmindata:
With docker-compose.yaml, I run:
docker stack deploy -c docker-compose.yaml dbstack
I can do everything on this setup, but if I run docker stack rm dbstack, the data is gone afterwards, even though the volumes still exist:
$ docker volume ls
DRIVER VOLUME NAME
local 0c15b0b22c6b850e8768c14045da166253424dda4df8d2e13df75fd54d833412
local 22bab81d9d1de0e07de97363596b096f944752eba617ff574a0ab525239227f5
local 6da6e29fb98ad0f66d7da6a75dc76066ce014b26ea43567c55ed318fda707105
local dbstack_pgadmindata
local dbstack_pgdata
What am I missing?
Unless you have it in some config not shown, I believe you need to map to the default data location inside the container, e.g. pgdata:/var/lib/postgresql/data.
@Idg is partially correct: postgres data lives at /var/lib/postgresql/data, per the Docker Hub readme.
But for it to work with your named volume, you can't use a path on the left side, so the correct value would be:
volumes:
  - pgdata:/var/lib/postgresql/data
Then the postgres data will stay in that named volume, on the node it was created on.
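Alternatively (my suggestion, not part of the answer above), the official postgres image honors a PGDATA environment variable, so you could keep the /pgdata mount point and point postgres at it:
db:
  image: postgres
  environment:
    - PGDATA=/pgdata   # assumes the named volume is still mounted at /pgdata
  volumes:
    - pgdata:/pgdata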
I am having a strange situation where I cannot connect to my running mongo DB from my docker-compose services. My compose file:
version: '3'

services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few pointers on this, but none of them solved my issue. E.g. I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add this to the compose file:
networks:
  default:
    external:
      name: mongonet
Got nothing there either.
I looked into the /etc/hosts file in my app container, and it did not list any DNS entry.
If I do a docker inspect, grab the mongo IP, and add it to my compose file, it works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused, as I believed this to be an out-of-the-box kind of thing; strangely, I can't make it work. I have found examples using internal links (vs. external_links), but that does not work for me, as I have many services that I would like to run like this, and not all of them should run at the same time.
I start my docker compose as this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: how do I successfully link a running mongo container as an external link into my application containers, such that they can connect to it?
My docker PS:
CONTAINER ID   IMAGE   COMMAND                  CREATED             STATUS             PORTS                      NAMES
5cf6e08d6fde   mongo   "docker-entrypoint..."   About an hour ago   Up About an hour   0.0.0.0:27017->27017/tcp   gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the
externally-created containers must be connected to at least one of the
same networks as the service which is linking to them. Links are a
legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'

services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet

networks:
  mongonet: # declared external so compose uses the network created above
    external: true
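To verify the wiring, both containers should show up attached to the same network (a quick check):
docker network inspect mongonet   # the Containers section should list both mongo and the app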