Docker-compose FastAPI --reload

I have a FastAPI app running in a Docker container. It works well except for one thing:
the app doesn't reload when the code changes. The changes are only applied if I restart the container. But I wonder why the app doesn't reload when I put the --reload flag in the command?
I understand that Docker itself does not reload on code changes. But the app should, if the --reload flag is in the command.
If I've misunderstood, please advise how to achieve what I want. Thanks.
main.py
from typing import Optional
import uvicorn
from fastapi import FastAPI

app = FastAPI()

@app.get("/")
def read_root():
    return {"Hello": "World"}

@app.get("/items/{item_id}")
def read_item(item_id: int, q: Optional[str] = None):
    return {"item_id": item_id, "q": q}

if __name__ == '__main__':
    uvicorn.run(app, host="0.0.0.0", port=8000, reload=True)
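(As an aside: when launching uvicorn programmatically like this, reload=True only takes effect if the app is passed as an import string rather than as an object; otherwise uvicorn logs a warning and disables the reloader. A minimal sketch of that variant, assuming the file is named main.py:

if __name__ == '__main__':
    # pass "main:app" (module:attribute) instead of the app object,
    # otherwise uvicorn cannot re-import the app after a change
    uvicorn.run("main:app", host="0.0.0.0", port=8000, reload=True)

This only matters when running python main.py directly; the compose command below invokes the uvicorn CLI, which already receives main:app as an import string.)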
docker-compose.yml
version: "3"
services:
web:
build: .
restart: always
command: bash -c "uvicorn main:app --host 0.0.0.0 --port 8000 --reload"
volumes:
- .:/code
ports:
- "8000:8000"
depends_on:
- db
db:
image: postgres
ports:
- "50009:5432"
environment:
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
- POSTGRES_DB=test_db

This works for me:
version: "3.9"
services:
people:
container_name: people
build: .
working_dir: /code/app
command: uvicorn main:app --host 0.0.0.0 --reload
environment:
DEBUG: 1
volumes:
- ./app:/code/app
ports:
- 8008:8000
restart: on-failure
This is my directory structure:
.
├── Dockerfile
├── Makefile
├── app
│   └── main.py
├── docker-compose.yml
└── requirements.txt
Make sure working_dir and the volumes entry (- ./app:/code/app) point at the same container path.
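For completeness, a minimal Dockerfile that would fit this layout; this is a hedged sketch (the base image, Python version, and requirements handling are assumptions, since the original Dockerfile isn't shown):

FROM python:3.10-slim
WORKDIR /code
# install dependencies first so they stay cached across code changes
COPY requirements.txt .
RUN pip install -r requirements.txt
# copy the app package; in development the compose volume mounts over this
COPY ./app /code/app
# no CMD needed here: docker-compose.yml supplies the uvicorn command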
Example run:
docker-compose up --build
...
Attaching to people
people | INFO: Will watch for changes in these directories: ['/code/app']
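To confirm that the reloader actually picks up edits, change a watched file while the stack is running; the exact log wording varies by uvicorn version, but a reload line should appear:

# terminal 1
docker-compose up --build
# terminal 2: trigger a change inside the watched directory
touch app/main.py
# terminal 1 should now log that a change was detected and the server is reloading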

Are you starting the container with docker compose up? This is working for me with hot reload at http://127.0.0.1.
version: "3.9"
services:
bff:
container_name: bff
build: .
working_dir: /code/app
command: uvicorn main:app --host 0.0.0.0 --port 8000 --reload
environment:
DEBUG: 1
volumes:
- .:/code
ports:
- "80:8000"
restart: on-failure
Also, I don't have your final two lines (if __name__ == ..., etc.) in my app. Not sure if that would change anything.

I found the solution that worked for me in this answer.
The watchfiles documentation explains that change detection relies on file system notifications, and I think those events are not emitted inside Docker when the code is mounted as a volume.

Notify will fall back to file polling if it can't use file system notifications

So you have to tell watchfiles to force polling; that's what you did in your test Python script with the force_polling parameter, and that's why it works:
for changes in watch('/code', force_polling=True):
Fortunately, the documentation also gives us the possibility to force polling via the environment variable WATCHFILES_FORCE_POLLING. Add this environment variable to your docker-compose.yml and auto-reload will work:
services:
  fastapi-dev:
    image: myimagename:${TAG:-latest}
    build:
      context: .
    volumes:
      - ./src:/code
      - ./static:/static
      - ./templates:/templates
    restart: on-failure
    ports:
      - "${HTTP_PORT:-8080}:80"
    environment:
      - WATCHFILES_FORCE_POLLING=true
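For reference, a standalone check along the lines of the test script mentioned above; a minimal sketch assuming the watchfiles package is installed in the container and /code is the mounted source directory:

from watchfiles import watch

# force_polling=True makes watchfiles poll file mtimes instead of relying on
# inotify events, which may not fire for files edited through a bind mount
for changes in watch('/code', force_polling=True):
    print(changes)  # a set of (Change, path) tuples on every detected edit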

Related

docker-compose restore mongo database every time collection deletes

I want to be able to recreate some base data from a dump whenever the mongo-data folder is deleted and docker-compose up is called.
The problem I'm facing is that the app container does not have mongo available.
These are my files:
docker-compose.yml
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
Dockerfile:
FROM node:14.15.5
RUN mkdir -p /testapp
WORKDIR /testapp
EXPOSE 3000
ENTRYPOINT ["./entrypoint.sh"]
entrypoint.sh:
#!/bin/bash
sh ./__backup__/db/restore.sh
sh ./__backup__/app/restore.sh
yarn install
yarn start:dev
__backup__/app/restore.sh:
#!/bin/bash
if [[ ! -d '/testapp/uploads' ]]
then
    tar -xvf ./uploads.tar.gz /testapp/
fi
__backup__/db/restore.sh:
#!/bin/bash
until mongo --eval "print(\"waited for connection\")"
do
    sleep 1
done
if [[ ! -d '/testapp/mongo-data' ]]
then
    mongorestore --archive ./db.dump
fi
Is there any way to run these restore.sh files after the mongo service is up, or to run mongo from the app container?
If I understand the question correctly, you want to restore the MongoDB to a certain state every time your app launches, and you're asking if there's a way to do it after the MongoDB container launches.
There's a tool called docker-compose-wait; quoting from its GitHub README, it's a small command-line utility to wait for other docker images to be started while using docker-compose.
It's fairly simple to use: add it to the image, run /wait to wait for the services to be up, and then get on with whatever you want next.
So according to your current setup, your Dockerfile could be like this:
FROM node:14.15.5
## Add the wait script to the image
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.9.0/wait /wait
RUN chmod +x /wait
RUN mkdir -p /testapp
WORKDIR /testapp
ADD . .
EXPOSE 3000
## Launch the wait tool and then your entrypoint.sh
ENTRYPOINT /wait && /testapp/entrypoint.sh
Your entrypoint.sh already calls the restore scripts. In your docker-compose.yml, add an environment variable listing the services to wait for:
version: "3"
services:
app:
build:
context: .
dockerfile: Dockerfile
ports:
- "3000:3000"
volumes:
- .:/testapp
environment:
DB_URL: mongodb://test_mongo/appdb
WAIT_HOSTS: mongo:27017
depends_on:
- mongo
mongo:
image: "mongo:4.4.4"
restart: always
container_name: test_mongo
ports:
- "27017:27017"
- "27018:27018"
volumes:
- ./mongo-data:/data/db
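As an alternative to the wait tool, newer Compose releases support this natively through healthchecks and the long form of depends_on. A hedged sketch (the mongo ping probe and the timings are assumptions; adjust them for your image version):

services:
  app:
    build: .
    depends_on:
      mongo:
        condition: service_healthy  # start app only once the probe passes
  mongo:
    image: "mongo:4.4.4"
    healthcheck:
      test: ["CMD", "mongo", "--eval", "db.adminCommand('ping')"]
      interval: 5s
      timeout: 5s
      retries: 12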

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The connection string that SQLAlchemy is using (as given in .env.web.dev) is: postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
    pip install --upgrade pip && \
    pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means it will be on the default network Compose creates for you. Since the two containers aren't on the same network, they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:; Compose will come up with reasonable names on its own.)
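If you'd rather keep the named network, the alternative fix is to attach celery to it as well; a sketch based on the file above:

  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
    networks:      # now reachable from db and web on db_web
      - db_web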

Docker run not working

When I use docker-compose the application runs perfectly; however, when I use docker run nothing happens.
I have a REST API (Express & MongoDB) behind an nginx proxy_pass.
Docker file:
FROM node:8-alpine
EXPOSE 3000
ARG NODE_ENV
ENV NODE_ENV $NODE_ENV
RUN mkdir /app
WORKDIR /app
ADD package.json yarn.lock /app/
RUN yarn --pure-lockfile
ADD . /app
CMD ["yarn", "start"]
Docker compose:
version: "2"
services:
api:
build: .
environment:
- NODE_ENV=production
command: yarn start
volumes:
- .:/app
ports:
- "3000:3000"
tty: true
depends_on:
- mongodb
restart: always
nginx:
image: nginx
depends_on:
- api
ports:
- "80:80"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
depends_on:
- mongodb
restart: always
mongodb:
image: mongo
ports:
- "27017:27017"
restart: always
When I use docker-compose the application runs perfectly; however, when I use docker run nothing happens.

That is expected, since docker run runs a single image,
as opposed to docker-compose, which runs a multi-container Docker application.
You need all of the containers to be running, started in the right order, for anything to happen.
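For comparison, reproducing what Compose does with plain docker run means creating the network and starting each container on it yourself; a rough sketch (the image name my-api is illustrative):

# build the API image first
docker build -t my-api .
docker network create app-net
docker run -d --name mongodb --network app-net mongo
docker run -d --name api --network app-net -p 3000:3000 \
  -e NODE_ENV=production my-api
docker run -d --name nginx --network app-net -p 80:80 \
  -v "$PWD/nginx/nginx.conf:/etc/nginx/nginx.conf:ro" nginx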

docker-compose: run rails db:setup and rake tasks to init data

I can't find a way to execute the following commands from a docker-compose.yml file:
rails db:setup
rails db:init_data
I tried to do it as follows, and it failed:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ["rails", "db:setup"]
    command: ["rails", "db:init_data"]
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Any idea what's going wrong here? Thank you.
The source code is on GitHub.
You can do one of two things, in my opinion:
Change command: to the following, because two command keys are not allowed in a compose file:
command:
  - /bin/bash
  - -c
  - |
    rails db:setup
    rails db:init_data
Use supervisord: see the supervisord web page.
The solution that worked for me was to remove the CMD command from the Dockerfile, because the command option in docker-compose.yml would have overridden it.
So the Dockerfile will look like this:
FROM ruby:2.5.1
LABEL maintainer="DECATHLON"
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends nodejs
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install
COPY . /usr/src/app/
Then add the command option to the docker-compose file:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command:
      - /bin/bash
      - -c
      - |
        rails db:reset
        rails db:init_data
        rails s -p 3000 -b '0.0.0.0'
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
If the above solution does not work for somebody, there is an alternative:
Create a shell script in the project root and name it entrypoint.sh, for example:
#!/bin/bash
set -e
bundle exec rails db:reset
bundle exec rails db:migrate
exec "$@"
Declare the entrypoint option in the docker-compose file:
version: '3'
services:
  web:
    build: .
    entrypoint:
      - /bin/sh
      - ./entrypoint.sh
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ['./wait-for-it.sh', 'database:5432', '--', 'bundle', 'exec', 'rails', 's', '-p', '3000', '-b', '0.0.0.0']
  database:
    image: postgres:9.6
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
I also use the wait-for-it script to ensure the DB is started before Rails.
Hope this helps. I pushed the modifications to the GitHub repo.

Docker-compose mongoose

I'm new to Docker, and I'm trying the simplest of setups with docker-compose, but I don't succeed in connecting to MongoDB.
My docker-compose.local.yaml file:
version: "2"
services:
posts-api:
build:
dockerfile: Dockerfile.local
context: ./
volumes:
- ".:/app"
ports:
- "6820:6820"
depends_on:
- mongodb
mongodb:
image: mongo:3.5
ports:
- "27018:27018"
command: mongod --port 27018
My Dockerfile:
FROM node:7.8.0
MAINTAINER Livefeed 'project.livefeed@gmail.com'
RUN mkdir /app
VOLUME /app
WORKDIR /app
ADD package.json yarn.lock ./
RUN eval rm -rf node_modules && \
    yarn
ADD server.js .
RUN mkdir config src
ADD config config/
ADD src src/
EXPOSE 6820
EXPOSE 27018
CMD yarn run local
In server.js I try to connect with:
mongoose.connect('mongodb://localhost:27018');
I also tried:
mongoose.connect('mongodb://mongodb:27018');
To run docker-compose:
docker-compose -f docker-compose.local.yaml up --build
And I receive the error:
connection error: { MongoError: failed to connect to server [localhost:27018] on first connect [MongoError: connect ECONNREFUSED 127.0.0.1:27018]
What am I missing?
In server.js, use mongodb instead of localhost:
mongoose.connect('mongodb://mongodb:27018');
Containers on the same network can communicate using their service names.
Bear in mind that each container and your host have their own localhost. Each localhost is a different host: container A, container B, and your host each have their own network interface.
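One way to keep the same code working both inside and outside Compose is to read the connection string from the environment; a minimal sketch (the MONGO_URL variable name is an assumption):

// server.js: default to the compose service name, not localhost
const url = process.env.MONGO_URL || 'mongodb://mongodb:27018';
mongoose.connect(url);

with the variable set in docker-compose.local.yaml:

  posts-api:
    environment:
      MONGO_URL: mongodb://mongodb:27018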
Edit:
Make sure your mongo container actually comes up:
docker-compose logs mongodb
docker-compose ps
Sometimes it doesn't come up because of disk space.
Edit 2:
With newer versions of mongo, you also need to tell it to listen on all interfaces:
command: mongod --port 27018 --bind_ip_all
I think you should add the links option to your config, like this:
ports:
  - "6820:6820"
depends_on:
  - mongodb
links:
  - mongodb
Update, as I promised:
version: '2.1'
services:
  pm2:
    image: keymetrics/pm2-docker-alpine:6
    restart: always
    container_name: pm2
    volumes:
      - ./pm2:/app
    links:
      - redis_db
      - db
    environment:
      REDIS_CONNECTION_STRING: redis://redis_db:6379
  nginx:
    image: firesh/nginx-lua
    restart: always
    volumes:
      - ./nginx:/etc/nginx
      - /var/run/docker.sock:/tmp/docker.sock:ro
    ports:
      - 80:80
    links:
      - pm2
  s3: # mock for development
    image: lphoward/fake-s3:latest
  redis_db:
    container_name: redis_db
    image: redis
    ports:
      - 6379:6379
  db: # for scorebig-syncer
    image: mysql:5.7
    ports:
      - 3306:3306