I have been searching for over a week on this issue with no solution.
I am trying to mount a volume from the docker-compose.yml
Here is my directory structure:
-docker-compose.yml
-api
    -dockerfile
-frontend
    -dockerfile
-models
I want the models directory shared between the api service and the frontend service. First, I attempt to get the models into the container, into the container's /usr/src/models directory along with all of its contents. This command works GREAT:
docker run -it --mount src="$(pwd)/models",target=/usr/src/models,type=bind -p 3000:3000 website_api
An important thing to note is that it produces this when I inspect the docker container in VS Code:
website_api.json:
"Mounts": [
{
"Type": "bind",
"Source": "/home/kevin/source/repos/cropwatch/website/models",
"Target": "/usr/src/models"
}
],
This is inside the JSON file, along with lots of other stuff.
However, when I run docker-compose with a file set up like so:
version: "3.8"
services:
  api:
    container_name: api
    restart: always
    build:
      context: ./
      dockerfile: ./api/dockerfile
    ports:
      - "3000:3000"
      - "3001:3001"
    volumes:
      - type: bind
        source: "./models"
        target: "/usr/src/models"
the Mounts section in the JSON file displays as:
"Mounts": [],
and the /usr/src/models directory in my container is empty...
So these two things do not do the same thing, contrary to what I had believed.
Any ideas as to what I am doing wrong in my docker-compose.yml file?
This should do the job:
tree
.
├── api
│ └── dockerfile
├── docker-compose.yml
└── models
└── someFile
cat docker-compose.yml
version: "3.8"
services:
api:
container_name: api
restart: always
build:
context: ./
dockerfile: ./api/dockerfile
volumes:
- ./models:/usr/src/models
docker-compose up -d
docker exec 5ea0c49003f6 sh -c "ls -la /usr/src/models"
total 8
drwxr-xr-x 2 1000 1000 4096 Aug 3 20:09 .
drwxr-xr-x 1 root root 4096 Aug 3 20:15 ..
-rw-r--r-- 1 1000 1000 0 Aug 3 20:09 someFile
docker container inspect --format '{{.Mounts}}' 5ea0c49003f6
[{bind /home/neo/so-playground/mounts-63236400/models /usr/src/models rw true rprivate}]
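For the record, the long mount syntax from the question is also valid in Compose (file format 3.2+), though some older docker-compose releases were stricter about relative source paths there. A sketch using an absolute path for safety ($PWD is substituted from the shell environment):

```yaml
version: "3.8"
services:
  api:
    build:
      context: ./
      dockerfile: ./api/dockerfile
    ports:
      - "3000:3000"
    volumes:
      - type: bind
        source: $PWD/models   # absolute host path, mirroring the docker run --mount command
        target: /usr/src/models
```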
I have a FastAPI app running in a Docker container. It works well except for one thing: the app doesn't reload on changes. The changes are applied only if I restart the container. But I wonder why it doesn't reload the app when I put the --reload flag in the command?
I understand that Docker itself does not reload on code changes. But the app should, if the --reload flag is in the command.
If I misunderstand, please advise how to achieve what I want. Thanks.
main.py
from typing import Optional
import uvicorn
from fastapi import FastAPI
app = FastAPI()
#app.get("/")
def read_root():
return {"Hello": "World"}
#app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
return {"item_id": item_id, "q": q}
if __name__ == '__main__':
uvicorn.run(app, host="0.0.0.0", port=8000, reload=True)
docker-compose.yml
version: "3"
services:
web:
build: .
restart: always
command: bash -c "uvicorn main:app --host 0.0.0.0 --port 8000 --reload"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
ports:
- "50009:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=test_db
This works for me:
version: "3.9"
services:
people:
container_name: people
build: .
working_dir: /code/app
command: uvicorn main:app --host 0.0.0.0 --reload
environment:
DEBUG: 1
volumes:
- ./app:/code/app
ports:
- 8008:8000
restart: on-failure
This is my directory structure:
.
├── Dockerfile
├── Makefile
├── app
│ └── main.py
├── docker-compose.yml
└── requirements.txt
Make sure the working_dir and the volumes entry ./app:/code/app match.
example run:
docker-compose up --build
...
Attaching to people
people | INFO: Will watch for changes in these directories: ['/code/app']
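The matching requirement boils down to two lines that must agree; a minimal sketch of just those:

```yaml
services:
  people:
    working_dir: /code/app   # uvicorn runs here, so --reload watches this directory
    volumes:
      - ./app:/code/app      # host edits land inside the watched directory
```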
Are you starting the container with docker compose up? This is working for me with hot reload at http://127.0.0.1.
version: "3.9"
services:
bff:
container_name: bff
build: .
working_dir: /code/app
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
environment:
DEBUG: 1
volumes:
- .:/code
ports:
- "80:8000"
restart: on-failure
Also, I don't have your final two lines (if __name__ == etc.) in my app. Not sure if that would change anything.
I found this solution that worked for me, in this answer.
From the watchfiles documentation, change detection relies on file system notifications, and I think that with Docker those events are not emitted when using a volume.
Notify will fall back to file polling if it can't use file system
notifications
So you have to tell watchfiles to force polling; that's what you did in your test Python script with the force_polling parameter, and that's why it works:
for changes in watch('/code', force_polling=True):
Fortunately, the documentation gives us the possibility to force polling via the environment variable WATCHFILES_FORCE_POLLING. Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
I am trying to run a MongoDB container on a DigitalOcean droplet. I created a new dir with mkdir -pv mongodb/data, then inside mongodb I created a new file with touch mongodb.conf and edited it with nano mongodb.conf so that it looks like this:
systemLog:
  destination: file
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
storage:
  journal:
    enabled: true
processManagement:
  fork: true
net:
  bindIp: 0.0.0.0
  port: 27017
setParameter:
  enableLocalhostAuthBypass: false
I then used the same process to create a file docker-compose.yml, which looks like:
version: "3.8"
services:
mongodb:
image: mongo:latest
container_name: mongodb
environment:
- PUID=1000
- PGID=1000
volumes:
- /mongodb/mongod.conf:/etc/mongod.conf
- /mongodb/data:/data/db
command: ["mongod", "--config", "/etc/mongod.conf"]
ports:
- 0.0.0.0:27017:27017
This makes the filesystem look like:
.
├── mongodb
│ ├── data
│ ├── docker-compose.yml
│ └── mongod.conf
When I then try running docker-compose up it gives the error:
mongodb | grep: /etc/mongo/mongod.conf: Is a directory
mongodb | error: unexpected "js-yaml.js" output while parsing config:
mongodb | undefined
mongodb exited with code 1
I have also tried changing the command to command: ["mongod", "-f", "/etc/mongod.conf"], but this gives a similar error:
mongodb | Error opening config file: Is a directory
mongodb | try 'mongod --help' for more information
mongodb exited with code 2
Any help to solve this would be appreciated.
I did it the following way:
version: "3.8"
services:
mongodb:
image: mongo:latest
container_name: mongodb
environment:
- PUID=1000
- PGID=1000
volumes:
- /mongodb/cnf:/etc/mongo/
- /mongodb/data:/data/db
command: ["mongod", "--config", "/etc/mongo/mongod.conf"]
ports:
- 0.0.0.0:27017:27017
Then just put your mongod.conf in /mongodb/cnf on the host machine. Mounting the directory instead of a single file avoids the "Is a directory" error: when the source file does not exist on the host, Docker creates a directory at that path and mounts it instead.
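A separate thing worth checking in the config itself: the official mongo image runs mongod in the foreground as PID 1, so fork: true makes the forked daemon detach while PID 1 exits, stopping the container. A sketch of a container-friendly config (the question's settings, minus forking):

```yaml
systemLog:
  destination: file
  path: "/var/log/mongodb/mongod.log"
  logAppend: true
storage:
  journal:
    enabled: true
# processManagement.fork is intentionally omitted for containers
net:
  bindIp: 0.0.0.0
  port: 27017
setParameter:
  enableLocalhostAuthBypass: false
```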
I need to share data between containers with Docker Compose. Here the shared_data_setup container should seed the shared volume with data to be used during the build of the app container. However, when I run this, the app container's /shared is empty. Is there a way to achieve this?
services:
  # This will setup some seed data to be used in other containers
  shared_data_setup:
    build: ./shared_data_setup/
    volumes:
      - shared:/shared
  app:
    build: ./app/
    volumes:
      - shared:/shared
    depends_on:
      - shared_data_setup
volumes:
  shared:
    driver: local
You need to specify the version of the docker-compose.yml file:
version: "3"
services:
# This will setup some seed data to be used in other containers
shared_data_setup:
build: ./shared_data_setup/
volumes:
- shared:/shared
app:
build: ./app/
volumes:
- shared:/shared
depends_on:
- shared_data_setup
volumes:
shared:
driver: local
Edit: Results:
# Test volume from app
$ docker-compose exec app bash
root@e652cb9e5c46:/# ls -l /shared
total 0
root@e652cb9e5c46:/# touch /shared/test
root@e652cb9e5c46:/# exit
# Test volume from shared_data_setup
$ docker-compose exec shared_data_setup bash
root@b21ead1a7354:/# ls -l /shared
total 0
-rw-r--r-- 1 root root 0 Feb 26 11:23 test
root@b21ead1a7354:/# exit
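Worth noting for the original goal: volumes are attached when a container runs, not while its image is built, so seeding /shared has to happen in the container's command or entrypoint rather than in a Dockerfile RUN step. A sketch, where /seed is a hypothetical directory baked into the image at build time:

```yaml
services:
  shared_data_setup:
    build: ./shared_data_setup/
    command: sh -c "cp -r /seed/. /shared/"   # runs after the named volume is mounted
    volumes:
      - shared:/shared
```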
This is my first time with Docker; I've been working on this problem for two days, and it would make me very happy to find a solution.
I'm running this docker-compose.yml file with docker-compose up:
version: '3.3'
services:
  base:
    networks:
      - brain_storm-network
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    build: "./brain_storm"
  data_base:
    image: mongo
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - '27017:27017'
  api:
    build: "./brain_storm/api"
    volumes:
      - brain_storm-storage:/usr/src/brain_storm
    networks:
      - brain_storm-network
    ports:
      - 5000:5000
    depends_on:
      - data_base
      - base
    restart: on-failure
the base Dockerfile inside ./brain_storm does the following:
FROM brain_storm-base:latest
RUN mkdir -p /usr/src/brain_storm/brain_storm
ADD . /usr/src/brain_storm/brain_storm
and this is the Dockerfile inside brain_storm/api:
FROM brain_storm-base:latest
CMD cd /usr/src/brain_storm \
&& python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
I'm getting this error :
brain_storm_api_1 exited with code 1
api_1 | /usr/local/bin/python: Error while finding module specification for 'brain_storm.api' (ModuleNotFoundError: No module named 'brain_storm')
pwd says I'm in '/' and not in the current directory when running the base Dockerfile, so that might be the problem. But how do I solve it without hard-coding /home/user/brain_storm in the Dockerfile? I want to keep the location of the brain_storm folder general.
How can I make the Dockerfile see and take the files from the current directory (where the Dockerfile is)?
You should probably define the WORKDIR instruction in both your Dockerfiles. WORKDIR sets the working directory of the Docker container at any given point; any RUN, CMD, ADD, COPY, or ENTRYPOINT instruction will be executed in that working directory:
base:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
COPY . .
api:
FROM brain_storm-base:latest
WORKDIR /usr/src/brain_storm
CMD python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
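If you'd rather not bake the path into the image, Compose can also set it per service via working_dir; a sketch reusing the api service from the question:

```yaml
api:
  build: "./brain_storm/api"
  working_dir: /usr/src/brain_storm
  command: python -m brain_storm.api run-server -h 0.0.0.0 -p 5000 -d mongodb://0.0.0.0:27017
```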
I'm trying to create a MongoDB service that runs in a Docker container. My purpose is to persist all the data inside the container on the host machine. For this I have a docker-compose.yml file, whose content is:
version: '3.2'
services:
  mongodb:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: myrootmongousername
      MONGO_INITDB_ROOT_PASSWORD: myrootmongopassword
      MONGO_INITDB_DATABASE: dbnameiwanttocreate
    ports:
      - '27010:27017'
    volumes:
      - 'mongodata:/data'
As you can see, my volumes section declares that all content of the /data folder inside the container should be dropped into the mongodata folder on the host. I created a folder named mongodata at the same level as my docker-compose.yml file.
myprojectfolder
|__docker-compose.yml
|__mongodata
When I do docker-compose up it creates a container and I can connect and so on. However, my mongodata folder is completely empty. That can't be right, because if I go inside the container (docker exec -it <container-id> bash) and explore the /data folder, it is not empty at all.
What is my mistake here?
Thanks!
The following Docker Compose file works for the directory structure below on Ubuntu 18.04. Note that mongodata:/data (without a leading ./) refers to a named volume managed by Docker, not the folder next to your compose file; a relative path like ./mongodata creates a bind mount into that host folder. Also check that Docker has permission to write into the local mongodata folder.
If that doesn't work, paste the output of docker-compose logs and I will update the answer for your specific issue.
version: '3.2'
services:
  mongodb:
    image: mongo:latest
    environment:
      MONGO_INITDB_ROOT_USERNAME: myrootmongousername
      MONGO_INITDB_ROOT_PASSWORD: myrootmongopassword
      MONGO_INITDB_DATABASE: dbnameiwanttocreate
    ports:
      - '27010:27017'
    volumes:
      - './mongodata:/data'
├── docker-compose.yml
└── mongodata
├── configdb
└── db
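If Docker-managed storage is acceptable instead of a visible host folder, the original mongodata:/data mapping also works, but the named volume must then be declared at the top level; a sketch:

```yaml
version: '3.2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - mongodata:/data
volumes:
  mongodata:
```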