docker-compose: replace postgres data

I have a Django app with a Postgres database, which I run locally with docker-compose. I would like to replace the data in my local database with a backup from the production server so I can work on some optimizations.
My docker-compose.yml looks like:
version: '3.8'
services:
  app:
    build: .
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
    command: >
      bash -c "python manage.py makemigrations --merge
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
  db:
    image: postgres:12
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
    volumes:
      - pg_db:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  pg_db:
When I run docker volume ls I can see my local driver and volume name. Could somebody give me a hint on how to load the data from my SQL file into my local database?
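One common approach is to pipe the dump into psql inside the running db container. A minimal sketch, assuming a plain-SQL dump named backup.sql in the current directory and the service, user, and database names from the compose file above:
# Stream the dump into the db service (-T disables pseudo-TTY
# allocation so the stdin redirection works):
docker-compose exec -T db psql -U USER -d DB < backup.sql
If the backup should fully replace the local data, you can remove the named volume first so Postgres initializes from scratch (the exact volume name is the one shown by docker volume ls):
docker-compose down
docker volume rm <project>_pg_db
docker-compose up -d db
docker-compose exec -T db psql -U USER -d DB < backup.sql
Note that for custom-format dumps (created with pg_dump -Fc) you would use pg_restore instead of psql.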

Related

Docker file wait for postgres container

Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo ENOTFOUND ${DB_HOST}
I think the problem is that my Postgres container has not started yet when my project starts, so I can't connect. I can't come up with a way to start my container only after Postgres is ready. I read something about dockerize, but I can't figure out how to apply it.
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
My docker-compose.yml:
version: '3.7'
services:
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - ci-postgres
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    container_name: ci-postgres
    ports:
      - '${DB_PORT}:5432'
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - ci-postgres-data:/data
    networks:
      - ci-network
volumes:
  ci-postgres-data:
networks:
  ci-network:
    driver: bridge
And this is my .env:
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck and links are added so that the web service waits for the db service to be healthy.
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
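If you want to verify the healthcheck from the host, a quick sketch (assuming the container name postgres from the compose file above):
# Show the health status Docker currently reports for the container:
docker inspect --format '{{.State.Health.Status}}' postgres
# Or run the same probe the healthcheck uses, inside the container:
docker exec postgres pg_isready -U postgres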

Why is my flask server unable to speak to the postgres database using docker-compose?

I have posted the relevant files below. Everything builds as expected; however, when trying to use SQLAlchemy to make a call to the database, I invariably get the following error:
OperationalError: (psycopg2.OperationalError) could not translate host name "db" to address: Name or service not known
The connection string that SQLAlchemy is using (as given in .env.web.dev) is: postgres://postgres:postgres@db:5432/spaceofmotion.
What am I doing wrong?
docker-compose.yml:
version: '3'
services:
  db:
    container_name: db
    ports:
      - '5432:5432'
    expose:
      - '5432'
    build:
      context: ./
      dockerfile: Dockerfile.postgres
    networks:
      - db_web
  web:
    container_name: web
    restart: always
    build:
      context: ../
      dockerfile: Dockerfile.web
    ports:
      - '5000:5000'
    env_file:
      - ./.env.web.dev
    networks:
      - db_web
    depends_on:
      - db
      - redis
      - celery
  redis:
    image: 'redis:5.0.7-buster'
    container_name: redis
    command: redis-server
    ports:
      - '6379:6379'
  celery:
    container_name: celery
    build:
      context: ../
      dockerfile: Dockerfile.celery
    env_file:
      - ./.env.celery.dev
    command: celery worker -A a.celery --loglevel=info
    depends_on:
      - redis
  client:
    container_name: react-app
    build:
      context: ../a/client
      dockerfile: Dockerfile.client
    volumes:
      - '../a/client:/src/app'
      - '/src/app/node_modules'
    ports:
      - '3000:3000'
    depends_on:
      - "web"
    environment:
      - NODE_ENV=development
      - HOST_URL=http://localhost:5000
networks:
  db_web:
    driver: bridge
Dockerfile.postgres:
FROM postgres:latest
ENV POSTGRES_DB spaceofmotion
ENV POSTGRES_USER postgres
ENV POSTGRES_PASSWORD postgres
COPY ./spaceofmotion-db.sql /
COPY ./docker-entrypoint-initdb.d/restore-database.sh /docker-entrypoint-initdb.d/
restore-database.sh:
file="/spaceofmotion-db.sql"
psql -U postgres spaceofmotion < "$file"
Dockerfile.web:
FROM python:3.7-slim-buster
RUN apt-get update
RUN apt-get -y install python-pip libpq-dev python-dev && \
pip install --upgrade pip && \
pip install psycopg2
ENV PYTHONDONTWRITEBYTECODE 1
ENV PYTHONUNBUFFERED 1
COPY . /app
WORKDIR /app
RUN pip install -r requirements.txt
CMD ["python", "manage.py", "runserver"]
.env.web.dev:
DATABASE_URL=postgres://postgres:postgres@db:5432/spaceofmotion
... <other config vars> ...
Is this specifically coming from your celery container?
Your db container declares:
networks:
  - db_web
but the celery container has no such declaration; that means that it will be on the default network Compose creates for you. Since the two containers aren't on the same network they can't connect to each other.
There's nothing wrong with using the Compose-managed default network, especially for routine Web applications, and I'd suggest deleting all of the networks: blocks in the entire file. (You also don't need to specify container_name:, since Compose will come up with reasonable names on its own.)
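To confirm which containers actually share a network, you can inspect it from the host (container names as declared above):
# List the networks Compose created for this project:
docker network ls
# Show the containers attached to the db_web network (Compose prefixes
# it with the project name, as shown by `docker network ls`); celery
# will be absent because it isn't declared on that network:
docker network inspect --format '{{range .Containers}}{{.Name}} {{end}}' <project>_db_web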

Docker postgres: User "postgres" has no password assigned

I keep getting:
User "postgres" has no password assigned.
Updated .env:
POSTGRES_PORT=5432
POSTGRES_DB=demo_db2
POSTGRES_USER=postgres
POSTGRES_PASSWORD=password
Even though the Postgres password is set. I'm trying to use the same variables as in the following command:
docker run --name demo4 -e POSTGRES_PASSWORD=password -d postgres
Could this be an issue with volumes? I'm very confused. I ran this command as well:
docker run -it --rm --name demo4 -p 5432:5432 -e POSTGRES_PASSWORD=password -e POSTGRES_USER=postgress postgres:9.4
docker-compose.yml:
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
  database:
    image: postgres:9.6.8-alpine
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
You are missing the inclusion of the .env file...
Docker Compose:
database:
  environment:
    - ENV_VAR=VALUE
or
database:
  env_file:
    - .env
Plain Docker:
docker run options --env ENV_VAR=VALUE ...
or
docker run options --env-file .env ...
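To confirm the variables actually reached the container, a quick check (service name database as in the compose file above):
# Print the POSTGRES_* variables inside the running database container:
docker-compose exec database env | grep POSTGRES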

docker-compose: run rails db:setup and rake tasks to init data

I can't find a way to execute the following commands from a docker-compose.yml file:
rails db:setup
rails db:init_data
I tried to do it as follows, and it failed:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ["rails", "db:setup"]
    command: ["rails", "db:init_data"]
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
Any idea what's going wrong here? Thank you.
The source code is on GitHub.
In my opinion, you can do two things:
Change command: to the following, because two command: keys are not allowed in a compose file:
command:
  - /bin/bash
  - -c
  - |
    rails db:setup
    rails db:init_data
Use the supervisord app (see the supervisord web page).
The solution that worked for me was to remove the CMD command from the Dockerfile, because the command option in docker-compose.yml would have overridden CMD anyway.
So the Dockerfile will look like this:
FROM ruby:2.5.1
LABEL maintainer="DECATHLON"
RUN apt-get update -yqq
RUN apt-get install -yqq --no-install-recommends nodejs
COPY Gemfile* /usr/src/app/
WORKDIR /usr/src/app
RUN bundle install
COPY . /usr/src/app/
Then add the command option to the docker-compose file:
version: '3'
services:
  web:
    build: .
    links:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command:
      - |
        rails db:reset
        rails db:init_data
        rails s -p 3000 -b '0.0.0.0'
  redis:
    image: redis
  database:
    image: postgres
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
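As a side note, one-off tasks like db:setup can also be run in a throwaway container instead of baking them into the long-running command; a sketch using the web service defined above:
# Run the setup tasks once, then start the stack normally:
docker-compose run --rm web rails db:setup
docker-compose run --rm web rails db:init_data
docker-compose up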
If the above solution does not work for somebody, there is an alternative solution:
Create a shell script in the project root and name it entrypoint.sh, for example:
#!/bin/bash
set -e
bundle exec rails db:reset
bundle exec rails db:migrate
# Hand control over to the container's main command:
exec "$@"
Declare the entrypoint option in the docker-compose file:
version: '3'
services:
  web:
    build: .
    entrypoint:
      - /bin/sh
      - ./entrypoint.sh
    depends_on:
      - database
      - redis
    ports:
      - "3000:3000"
    volumes:
      - .:/usr/src/app
    env_file:
      - .env/development/database
      - .env/development/web
    command: ['./wait-for-it.sh', 'database:5432', '--', 'bundle', 'exec', 'rails', 's', '-p', '3000', '-b', '0.0.0.0']
  database:
    image: postgres:9.6
    env_file:
      - .env/development/database
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
I also use the wait-for-it script to ensure the DB is started before the app.
Hope this helps. I pushed the modifications to the GitHub repo.

How to connect to localhost postgres database from docker container?

I've configured my project for Docker. I have a database that was in use before the project was dockerized, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the Docker container simply doesn't see the database). If what I'm doing is nonsense, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location where Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map the volume to your hard disk). So I would use Postgres in Docker as the server DB, as you suggested.
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
Regards
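As a side note, on Docker 20.10+ you can avoid hard-coding the host IP by mapping a hostname to the host's gateway; a sketch, not from the original answer:
extra_hosts:
  - "host.docker.internal:host-gateway"
You may also need Postgres on the host to listen on the Docker bridge interface (listen_addresses in postgresql.conf) and to allow connections from the container's subnet in pg_hba.conf.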