docker-compose: run rails db:setup and rake tasks to init data

I can't find a way to execute the following commands from a docker-compose.yml file:
rails db:setup
rails db:init_data
I tried to do it as follows, and it failed:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ["rails", "db:setup"]
    command: ["rails", "db:init_data"]
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Any idea what's going wrong here? Thank you.
The source code is on GitHub.

You can do two things in my opinion:
Change command: to the following, because two command: entries are not allowed in a Compose file (the second one silently overrides the first); a one-line variant is shown after this list:
command:
  - /bin/bash
  - -c
  - |
    rails db:setup
    rails db:init_data
Use supervisord to run multiple processes: see the supervisord web page.
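For reference, the same two tasks can also be chained in a single shell string; a minimal sketch (the && chaining is my addition, so the second task only runs if the first succeeds):
command: bash -c "rails db:setup && rails db:init_data"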

The solution that worked for me was to remove the CMD command from the Dockerfile, because the command option in docker-compose.yml overrides CMD.
So the Dockerfile will look like this:
FROM ruby:2.5.1
LABEL maintainer="DECATHLON"
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends nodejs
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install
COPY . /usr/src/app/
Then add the command option to the docker-compose file:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command:
      - /bin/bash
      - -c
      - |
        rails db:reset
        rails db:init_data
        rails s -p 3000 -b '0.0.0.0'
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
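With CMD removed from the Dockerfile, this command list becomes the container's entire startup sequence. To rebuild the image and start the stack:
docker-compose up --build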
If the above solution does not work for somebody, there is an alternative solution:
Create a shell script in the project root and name it entrypoint.sh, for example:
#!/bin/bash
set -e
# Prepare the database before handing control to the main process
bundle exec rails db:reset
bundle exec rails db:migrate
# Run whatever command Compose passes to the container
exec "$@"
Declare the entrypoint option in the docker-compose file:
version: '3'
services:
  web:
    build: .
    entrypoint:
      - /bin/sh
      - ./entrypoint.sh
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ['./wait-for-it.sh', 'database:5432', '--', 'bundle', 'exec', 'rails', 's', '-p', '3000', '-b', '0.0.0.0']
  database:
    image: postgres:9.6
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
I also use the wait-for-it script to ensure the DB is started before the app.
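wait-for-it.sh is not part of the base image; it's a small helper script from the vishnubob/wait-for-it GitHub repo that you drop into the project root, e.g.:
curl -o wait-for-it.sh https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
chmod +x wait-for-it.sh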
Hope this helps. I pushed the modifications to the GitHub repo.

Related

docker-compose replace postgres data

I have a Django app with a Postgres database, which I run locally with docker-compose. I would like to replace the data in my local database with my backup from the production server, to make some optimizations.
My docker-compose.yml looks like:
version: '3.8'
services:
  app:
    build: .
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
    command: >
      bash -c "python manage.py makemigrations --merge
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
  db:
    image: postgres:12
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
    volumes:
      - pg_db:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  pg_db:
When I run docker volume ls I can see my local driver and volume name. Could somebody give me a hint how to load the data from my SQL file into my local database?
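A common approach is to pipe the dump into psql inside the running db container; a minimal sketch, assuming a plain-text pg_dump file at ./backup.sql (the file name is hypothetical):
# start only the database service
docker-compose up -d db
# feed the dump to psql inside the container (-T disables the TTY so stdin redirection works)
docker-compose exec -T db psql -U USER DB < backup.sql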

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The string that SQLAlchemy is using (as given in .env.web.dev) is: postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
pip install --upgrade pip && \
pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
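For illustration, a trimmed-down sketch of the same file relying on the Compose-managed default network (only the relevant services shown; every service resolves the others by service name automatically):
version: '3'
services:
  db:
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    ports:
      - '5432:5432'
  web:
    build:
      context: ../
      dockerfile: Dockerfile.web
    env_file:
      - ./.env.web.dev
    ports:
      - '5000:5000'
    depends_on:
      - db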

How to connect to localhost postgres database from docker container?

I've configured my project for Docker. I have a database that was in use before Docker, and now I want my docker-compose db service to connect to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the container simply doesn't see the database). If I'm doing something nonsensical, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p@ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location where Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
Regards
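To verify that the extra_hosts entry resolves as expected, one quick check (assuming Python is available in the web image, which it should be for a Django app):
docker-compose run --rm web python -c "import socket; print(socket.gethostbyname('db'))"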

How mongorestore DB with docker

I got that docker-compose.yml:
version: '2'
services:
  web:
    build: .
    ports:
      - "3000:3000"
    volumes:
      - .:/code
    depends_on:
      - db1
    links:
      - db1:mongo
  db1:
    image: mongo
Dockerfile:
FROM node:4.4.2
ADD . /code
WORKDIR /code
RUN npm i
CMD node app.js
I store the dump in the project files, so the folder is shared with the container. What should the process of restoring the dump look like? In the web container I don't have access to the mongorestore command...
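One way is to run mongorestore from the db1 container, where the mongo tooling already exists; a sketch, assuming the dump directory is ./dump in the project root (the path is hypothetical). First share it with db1 in docker-compose.yml:
db1:
  image: mongo
  volumes:
    - ./dump:/dump
Then, with the stack running, restore against the local mongod:
docker-compose exec db1 mongorestore /dump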

Docker postgres does not run init file in docker-entrypoint-initdb.d

Based on Docker's Postgres documentation, I can create any *.sql file inside /docker-entrypoint-initdb.d and have it automatically run.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
  restart: always
  build: ./web
  expose:
    - "8000"
  links:
    - postgres:postgres
  volumes:
    - /usr/src/app/static
  env_file: .env
  command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
  restart: always
  build: ./nginx/
  ports:
    - "80:80"
  volumes:
    - /www/static
  volumes_from:
    - web
  links:
    - web:web
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
data:
  restart: always
  build: ./postgres/
  volumes:
    - /var/lib/postgresql
  command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use Postgres in my projects and preload the database.
file: docker-compose.yml
db:
  container_name: db_service
  build:
    context: .
    dockerfile: ./Dockerfile.postgres
  ports:
    - "5432:5432"
  volumes:
    - /var/lib/postgresql/data/
This Dockerfile loads a file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# Convert the binary dump to plain SQL if present; '|| true' keeps the build going when it is missing
RUN [ -e "pg_dump.backup" ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e "pg_dump.sql" ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
If you need to retry loading the dump, you can remove the current database container with the command:
docker-compose rm db
Then you can run docker-compose up to retry loading the database.
If your initialisation requirements are just to create the ronda database, then you could just make use of the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
  restart: always
  build: ./postgres/
  volumes_from:
    - data
  ports:
    - "5432:5432"
  environment:
    POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
Had the same problem with postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious: don't run compose in detached mode. You want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.
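The reason docker-compose rm matters: the postgres image only runs the scripts in /docker-entrypoint-initdb.d when the data directory is empty, i.e. on the very first initialisation. If a volume with old data survives, the init scripts are skipped entirely. With named volumes, the equivalent reset is:
# remove containers AND named volumes, then rebuild and re-init
docker-compose down -v
docker-compose up --build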