I am working with Docker Compose. When I try to run docker compose in the background, it shows the error unknown shorthand flag: 'd' in -d
I tried it this way:
docker compose -d up
docker-compose.yml
version: '3'
networks:
  loki:
services:
  loki:
    image: grafana/loki:2.5.0
    # volumes:
    #   - ./loki:/loki
    ports:
      - 3100:3100
    networks:
      - loki
  promtail:
    image: grafana/promtail
    volumes:
      - ./promtail:/etc/promtail
      - /var/log/nginx/:/var/log/nginx/
    command: -config.file=/etc/promtail/promtail-config.yml
    ports:
      - 9080:9080
    networks:
      - loki
  grafana:
    image: grafana/grafana
    ports:
      - 3000:3000
    networks:
      - loki
-d is an option of the up subcommand.
If you run docker compose up --help you will get more information.
To solve the problem, run docker compose up -d
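For reference, global flags go between docker and the subcommand, while -d (long form --detach) is an option of up itself; the second command below is the one that reproduces the error:

docker compose up -d    # correct: -d belongs to the up subcommand
docker compose -d up    # wrong: compose has no global -d flag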
We were using a legacy version of Docker for compatibility purposes; in the older versions it's docker-compose, not docker compose. Changing it to the hyphenated form resolved this error.
The accepted answer is the right answer for the question asker, and anyone else putting the -d in the wrong place. But this is the top hit for the error message, and I'm sure I'm not the only one getting this error after running:
docker-compose up -d
The accepted answer telling me to run exactly what I ran was pretty confusing. I finally worked out that:
docker-compose is a separate package from docker, at least on Arch Linux, and likely elsewhere, and
If docker-compose isn't installed, Docker thinks this makes sense:
$ docker compose up -d
unknown shorthand flag: 'd' in -d
I think it would be infinitely more sensible to respond with:
docker: 'compose' is not a docker command.
Which is what it says if you don't use -d while not having docker-compose installed. I've now installed docker-compose and things are working, but I thought it was worth the time to hopefully save someone else some trouble if they end up here because they have docker but not docker-compose.
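If you end up here and are not sure which variant you have, both can be checked directly; only the one that is actually installed will answer:

docker compose version     # the Compose v2 plugin, invoked as "docker compose"
docker-compose version     # the standalone docker-compose binary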
I'm using FastAPI and postgresql to build a simple python api backend. When I try to build the database migration file (using Alembic), this error occurs:
sqlalchemy.exc.OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
As I searched a lot about this problem, it seems to be because of my docker config, but I couldn't figure it out. Here is my docker-compose code:
version: '3.7'
services:
  web:
    build: ./src
    command: sh -c "alembic upgrade head && uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000"
    volumes:
      - ./src/:/usr/src/app/
    ports:
      - 8002:8000
    environment:
      - DATABASE_URL=postgresql://user:pass@db/my_db
  db:
    image: postgres:12.1-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=pass
      - POSTGRES_DB=my_db
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 5432:5432
volumes:
  postgres_data:
What makes me more suspicious is that another project of mine on my local machine (same setup with FastAPI, postgres, and docker) works fine and doesn't have any similar problem.
What should I check and change to fix the problem?
P.S: I'm a beginner in docker.
Here's what's happening:
Your web container is trying to connect to db, but db is not up yet; in simple words, web cannot "see" db.
The short answer: start db first with docker-compose up -d db, then start web with docker-compose up web (or whatever you are running).
Now, the long answer. If you proceed with the short answer, that's fine, but it's cumbersome if you ask me. You could try changing the version to 2.x (e.g. 2.4) and adding depends_on to web, for example:
version: '2.4'
services:
  web:
    ...
    depends_on:
      - db
  db:
    image: ...
    ...
If you just run docker-compose up ..., docker will start db first.
Now, you may hit another problem: postgres may not be ready to receive connections yet. In that case you will have to wait for it to be ready. You can achieve this by retrying the db connection on error (I don't remember the exact exception, you will have to check the docs) or by using something like pg_isready. Assuming this is a dev environment, you can add it by installing postgresql-client in your Dockerfile and changing your command to:
command: >
  sh -c "
  until pg_isready -q -h db; do sleep 1; done
  &&
  alembic upgrade head
  &&
  uvicorn app.main:app --reload --workers 1 --host 0.0.0.0 --port 8000"
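If you prefer to retry from Python rather than installing postgresql-client, a minimal sketch of the retry-on-error approach could look like this (it reuses the DATABASE_URL variable from the compose file; the attempt count and sleep interval are arbitrary):

import os
import time
from sqlalchemy import create_engine, exc

engine = create_engine(os.environ["DATABASE_URL"])
for _ in range(30):
    try:
        with engine.connect():
            break                      # postgres is accepting connections
    except exc.OperationalError:
        time.sleep(1)                  # not ready yet, wait and try again
else:
    raise RuntimeError("database never became ready")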
I have a scrapy application which I'm trying to containerize. Basically, this is my docker-compose.yml file:
version: '3'
services:
  scrapper:
    container_name: scrapper
    build: .
    ports:
      - 80:80
    depends_on:
      - db
    links:
      - db
  db:
    volumes:
      - ./scrapper/sql:/docker-entrypoint-initdb.d
    image: postgres
    container_name: postgres
    restart: always
    ports:
      - 5432:5432
And this is my Dockerfile:
FROM python:3
WORKDIR /usr/app
COPY requirements.txt .
RUN pip3 install -r requirements.txt
COPY . .
But when I try to execute my application using the following command: docker run -it scrapper_scrapper scrapy crawl angeloni, I'm receiving this message:
File "/usr/local/lib/python3.7/site-packages/scrapy/crawler.py", line 88, in crawl
yield self.engine.open_spider(self.spider, start_requests)
psycopg2.OperationalError: could not translate host name "db" to address: Name or service not known
Why is this happening? When I execute the docker-compose ps command, it shows:
Name        Command                          State     Ports
------------------------------------------------------------------------------
postgres    docker-entrypoint.sh postgres    Up        0.0.0.0:5432->5432/tcp
scrapper    python3                          Exit 0
When running docker-compose up to start db, that container runs on a network that docker compose creates for the project. As such, a separate docker run ... will not be able to connect to that instance, since it is not on the same network. But you can attach it with:
docker run --network $network_name
To list the available docker networks, run:
docker network ls
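For this compose file the network is most likely named <project>_default, where the project name defaults to the directory name; judging by the image name scrapper_scrapper, that would be scrapper_default, but check the docker network ls output to be sure:

docker run -it --network scrapper_default scrapper_scrapper scrapy crawl angeloni

Alternatively, docker-compose run scrapper scrapy crawl angeloni runs the one-off command on the project's network automatically.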
I think you have to explicitly define a user network and put your containers on it:
https://docs.docker.com/network/bridge/
Under the section:
User-defined bridges provide automatic DNS resolution between containers.
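A minimal sketch of what that looks like in this compose file (scrape-net is just a placeholder name; every container attached to it can resolve the others by service name):

version: '3'
services:
  scrapper:
    build: .
    networks:
      - scrape-net
  db:
    image: postgres
    networks:
      - scrape-net
networks:
  scrape-net: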
I am running a RethinkDB alongside a .NET Core app using docker-compose.
Is there any way to set up 2 tables and some secondary indexes for RethinkDB?
Can RethinkDB be configured (set up a db and a table) directly with a bash command?
docker-compose
version: "3.3"
services:
  rethink:
    restart: always
    image: rethinkdb:2.3.6
    container_name: rethink0
    ports:   # I want to create a db, a table and a secondary index after setup
      - 8080:8080
    networks:
      - ret-net
  mp:
    build: ./mpserver
    image: mp
    restart: always
    container_name: mp0
    depends_on:
      - rethink
    ports:
      - 8203:8202
    networks:
      - ret-net
networks:
  ret-net:
Your best option is to set up the Python driver and then run the setup commands as a small script:
sudo pip install rethinkdb
import rethinkdb as r
r.connect('localhost',28015).repl()
r.db_create('test').run()
r.db('test').table_create('myTable').run()
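Since the question also asks about secondary indexes, the same driver can create and wait for one (myIndex is just a placeholder name):

r.db('test').table('myTable').index_create('myIndex').run()
r.db('test').table('myTable').index_wait('myIndex').run()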
You can also consider building a docker image that includes this driver; I think the official image does not include it.
I cannot tell you confidently how to build a docker container like this, but based on this description it should be something like:
FROM library/rethinkdb
RUN apt-get update && \
    apt-get install -y python-pip && \
    pip install rethinkdb
... and then you can execute the creation commands from inside the docker container:
docker exec -it <container name> <command>
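For example, assuming the setup commands above are saved as setup_db.py (an assumed file name) and copied into the image or mounted into the container, and using the container_name from the compose file:

docker exec -it rethink0 python setup_db.py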
I am having a strange situation where I cannot connect to my running mongo DB from my docker compose setup. My compose file:
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    external_links:
      - myname:mongo
    environment:
      - MONGO_URL=mongodb://myname:27017/test
I have found a few pieces of information on this, but none of them solved my issue. E.g. I tried:
1)
Create a custom network:
docker network create mongonet
Then start mongo with the --network mongonet flag and add this to the compose file:
networks:
  default:
    external:
      name: mongonet
Got nothing there either.
I looked into the /etc/hosts file in my compose container, and it did not list any DNS entry.
If I do a docker inspect, grab the mongo IP, and add that to my compose file, it works like a charm.
I start mongo like this:
docker run -d -p 27017:27017 -v ~/mongo_data:/data/db mongo
I am really rather confused, as I believed this to be an out-of-the-box kind of thing; strangely, I can't make it work. I have found examples using internal links (vs. external_links), but that does not work for me, as I have many services that I would like to run like this and not all of them should run at the same time.
I start my docker compose like this:
docker-compose up --force-recreate
My versions are:
docker-compose version 1.17.1, build 6d101fb
Docker version 17.05.0-ce, build 89658be
My question: How do I successfully link a running mongo container as an external link into my application containers such that they can connect to them?
My docker PS:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
5cf6e08d6fde mongo "docker-entrypoint..." About an hour ago Up About an hour 0.0.0.0:27017->27017/tcp gallant_feynman
Links are deprecated, use networks instead.
Notes:
If you’re using the version 2 or above file format, the externally-created containers must be connected to at least one of the same networks as the service which is linking to them. Links are a legacy option. We recommend using networks instead.
The network way should work. I think you are missing some pieces. Make sure to give the mongo container a name, and make sure to attach the app container to the network in the compose file:
docker network create mongonet
docker run -d -p 27017:27017 --network mongonet --name mongo -v ~/mongo_data:/data/db mongo
version: '3'
services:
  app:
    image: myimage:latest
    ports:
      - "8080:8080"
    environment:
      - MONGO_URL=mongodb://mongo:27017/test
    networks:
      - mongonet
networks:
  mongonet:
    external: true
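Once both containers are up, you can confirm that they are attached to the same network (and that app can therefore reach the database under the name mongo) with:

docker network inspect mongonet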
My docker-compose.yml file:
version: '2'
services:
  zl:
    image: zl/caffe-torch-gpu:12.27
    ports:
      - "8801:8888"
      - "6001:6008"
    devices:
      - /dev/nvidia0
    volumes:
      - ~/dl-data:/root/dl-data
After nvidia-docker-compose up -d the container launched, but exited soon.
But when I launch a container the nvidia-docker way, it works well:
nvidia-docker run -itd -p 6008:6006 -p 8808:8888 -v `pwd`:/root/dl-data --name zl_test
You don't have to use nvidia-docker-compose.
By configuring the nvidia-docker plugin correctly, you can just use docker-compose!
Via the nvidia-docker git repo:
(I can confirm it works for me)
Step 1:
Figure out nvidia driver version (it matters).
run:
nvidia-smi
output:
+-----------------------------------------------------------------------------+
| NVIDIA-SMI 367.57                    Driver Version: 367.57                 |
|-------------------------------+----------------------+----------------------+
Step 2:
Create a docker volume that uses the nvidia-docker plugin. This must be done outside of compose, as compose will mangle the volume name if it creates it.
docker volume create --name=nvidia_driver_367.57 -d nvidia-docker
Step 3:
In the docker-compose.yml file:

version: '2'

volumes:
  nvidia_driver_367.57:    # same name as the one created above
    external: true         # this will use the volume we created above

services:
  cuda:
    command: nvidia-smi
    devices:               # this is required
      - /dev/nvidiactl
      - /dev/nvidia-uvm
      - /dev/nvidia0       # in general /dev/nvidia#, where # depends on which gpu card is to be used
    image: nvidia/cuda
    volumes:
      - nvidia_driver_367.57:/usr/local/nvidia/:ro
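With the volume created and the file above in place, plain docker-compose should be enough; the cuda service just runs nvidia-smi once and should print the same table as the host:

docker volume ls           # nvidia_driver_367.57 should be listed
docker-compose up cuda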