I configured my project to run with Docker. I have a database that was in use before the Docker period, and now I want to connect my docker-compose db service to it. But when I run docker-compose up, the existing database is not used; a new one is created instead (I suspect the container simply doesn't see the database). If I am doing something nonsensical, please let me know. Maybe I should migrate my server DB into the container.
Here is my docker-compose.yml:
services:
  db:
    restart: always
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_PASSWORD=p#ssw0rd
      - POSTGRES_USER=root
    ports:
      - "5432:5432"
    volumes:
      # We'll mount the 'postgres-data' volume into the location where Postgres stores its data:
      - postgres-data:/var/lib/postgresql/data
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    depends_on:
      - db
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
I think the canonical approach is to have your DB engine running in a container while storing the data on persistent storage (map a volume to your hard disk).
So I would use Postgres in Docker as the server DB, as you suggested.
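For instance, a minimal sketch of that mapping, assuming the existing cluster lives at /var/lib/postgresql/12/main on the host (a hypothetical path) and that the image's Postgres major version matches the one that created the data:

# Hypothetical sketch: point the container at the existing on-host data
# directory instead of a named volume. The major version of the postgres
# image must match the version that created the on-disk data files.
docker run --rm \
  -e POSTGRES_PASSWORD=p#ssw0rd \
  -v /var/lib/postgresql/12/main:/var/lib/postgresql/data \
  -p 5432:5432 \
  postgres:12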
If you only want your application to connect to the external database, declare it as an external host:
version: '2'
services:
  web:
    build: .
    command: bash -c "python manage.py collectstatic --noinput && ./manage.py migrate && ./run_gunicorn.sh"
    volumes:
      - .:/code
      - /static:/static
    ports:
      - 443:443
    extra_hosts:
      - "db:192.168.1.2"
  nginx:
    restart: always
    image: nginx:latest
    ports:
      - 80:80
    volumes:
      - ./misc/nginx.conf:/etc/nginx/conf.d/default.conf
      - /static:/static
    depends_on:
      - web
Just be sure your application references the database as db, and replace the IP I put there with your host's IP.
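As a quick sanity check (a sketch; it assumes psql is available inside the web image, which may not be the case):

# From the web container, confirm the "db" alias resolves to the external
# host and that Postgres accepts connections there.
docker-compose exec web getent hosts db
docker-compose exec web psql -h db -U root -d mydb -c 'SELECT 1;'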
Regards
Related
I have been using Docker for a Postgres database as I work on my project. I used this docker-compose file to spin it up:
version: '3'
services:
  postgres:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4: {}
I now want to containerise my back and front ends together with the database. I made this docker-compose file to do so:
version: '3.8'
services:
  frontend:
    build: ./4x4
    ports:
      - "3000:3000"
  backend:
    build: ./server
    ports:
      - "8000:8000"
  db:
    image: postgres
    ports:
      - "4001:5432"
    environment:
      - POSTGRES_DB=4x4-db
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - pgdata-4x4:/var/lib/postgresql/data
volumes:
  pgdata-4x4:
    external: true
However, when I execute docker-compose up on the second file, I do not get access to the same data as with the first one -- the database is blank. If I spin up the first one again, I am back at the old data (i.e. nothing has been overwritten).
I presumed that the same Postgres database would be connected to.
I would appreciate any elucidation.
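One likely explanation (an assumption, since the project directory names are not shown): Compose prefixes named volumes with the project name, so the first file created something like <dir>_pgdata-4x4, while external: true makes the second file look for a volume named exactly pgdata-4x4. A way to check, with hypothetical project names:

# Compose-created volumes carry the project (directory) name as a prefix.
docker volume ls | grep pgdata
# If the old data lives in, say, "oldproj_pgdata-4x4", point the external
# volume at it explicitly in the new compose file:
#   volumes:
#     pgdata-4x4:
#       external: true
#       name: oldproj_pgdata-4x4
docker volume inspect oldproj_pgdata-4x4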
I have a Django app with a Postgres database, which I run locally with docker-compose. I would like to replace my local data in the database with my backup from the production server, to make some optimizations.
My docker-compose.yml looks like:
version: '3.8'
services:
  app:
    build: .
    links:
      - db
    depends_on:
      - db
    volumes:
      - .:/code
    ports:
      - '8000:8000'
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
      - POSTGRES_HOST=db
      - POSTGRES_PORT=5432
    command: >
      bash -c "python manage.py makemigrations --merge
      && python manage.py migrate
      && python manage.py runserver 0.0.0.0:8000"
  db:
    image: postgres:12
    environment:
      - POSTGRES_DB=DB
      - POSTGRES_USER=USER
      - POSTGRES_PASSWORD=PASSWORD
    volumes:
      - pg_db:/var/lib/postgresql/data
    ports:
      - 5432:5432
volumes:
  pg_db:
When I run docker volume ls, I get my local driver and volume name. Could somebody give me a hint on how to load the data from my SQL file into my local database?
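One approach (a sketch, assuming a plain-SQL dump called backup.sql; the service and credential names are taken from the compose file above) is to stream the dump into psql inside the running db container:

# -T disables pseudo-TTY allocation so the stdin redirection works.
docker-compose exec -T db psql -U USER -d DB < backup.sql
# For a custom-format dump (created with pg_dump -Fc), use pg_restore instead:
docker-compose exec -T db pg_restore -U USER -d DB --clean < backup.dump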
I am trying to set up a Dockerized Airflow instance, but whatever I do (so far..) it keeps trying to access some sqlite3 database, and I do not know where that instruction comes from. I point to the Postgres instance everywhere (deemed) possible through AIRFLOW__CORE__SQL_ALCHEMY_CONN, and even AIRFLOW_CONN_METADATA_DB.
A typical error message at startup looks like:
sqlalchemy.exc.OperationalError: (sqlite3.OperationalError) no such table: job
Full docker-compose.yml:
version: '3'
x-airflow-common:
  &airflow-common
  image: apache/airflow:2.0.0
  environment:
    - AIRFLOW__CORE__EXECUTOR=LocalExecutor
    - AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW_CONN_METADATA_DB=postgres+psycopg2://postgres:postgres@db:9501/airflow
    - AIRFLOW__CORE__FERNET_KEY=FB0o_zt4e3Ziq3LdUUO7F2Z95cvFFx16hU8jTeR1ASM=
    - AIRFLOW__CORE__LOAD_EXAMPLES=True
    - AIRFLOW__CORE__LOGGING_LEVEL=INFO
  volumes:
    - /home/x/docker/airflow/dags:/opt/airflow/dags
    - /home/x/docker/airflow/airflow-data/logs:/opt/airflow/logs
    - /home/x/docker/airflow/airflow-data/plugins:/opt/airflow/plugins
    - /home/x/docker/airflow/airflow-data/airflow.cfg:/opt/airflow/airflow.cfg
  depends_on:
    - db
services:
  db:
    image: postgres:12
    #image: postgres:12.1-alpine
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - POSTGRES_DB=airflow
      - POSTGRES_PORT=9501
      - POSTGRES_HOST_AUTH_METHOD=trust
    ports:
      - 9501:9501
    command:
      - -p 9501
  airflow-init:
    << : *airflow-common
    container_name: airflow_init
    entrypoint: /bin/bash
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    command:
      - -c
      - airflow users list || ( airflow db init &&
        airflow users create
        --role Admin
        --username airflow
        --password airflow
        --email airflow@airflow.com
        --firstname airflow
        --lastname airflow )
    restart: on-failure
  airflow-webserver:
    << : *airflow-common
    command: airflow webserver
    ports:
      - 9500:8080
    container_name: airflow_webserver
    environment:
      - AIRFLOW_USERNAME=airflow
      - AIRFLOW_PASSWORD=airflow
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
  airflow-scheduler:
    << : *airflow-common
    command: airflow scheduler
    container_name: airflow_scheduler
    environment:
      - SQL_ALCHEMY_CONN=postgresql://postgres:postgres@db:9501/airflow
      - AIRFLOW_CONN_METADATA_DB=postgres://postgres:postgres@db:9501/airflow
    restart: always
Solved by following this docker-compose.yaml file:
https://github.com/apache/airflow/blob/master/docs/apache-airflow/start/docker-compose.yaml
Instead of trying to tweak the ports of Postgres (and Redis), I used the expose option, which avoids conflicts with other containers on the same host.
So not:
environment:
  POSTGRES_PORT: 9501
ports:
  - 9501:9501
But: run it (internally) on the default port and do not try to publish it externally:
expose:
  - 5432
I am still not sure what the problem was with using the higher ports. It may be some default fallback to sqlite when, for some reason, the configured DB cannot be connected to.
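With the database on its default internal port, the connection string simplifies accordingly (a sketch using the service names above):

# Inside the compose network the db service now listens on the default 5432:
AIRFLOW__CORE__SQL_ALCHEMY_CONN=postgresql+psycopg2://postgres:postgres@db:5432/airflow
# Airflow 2.x can then verify connectivity to the configured metadata DB:
docker-compose exec airflow-webserver airflow db check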
I am trying to set up a Dockerized stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel webspace works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker correctly, there is already a "default" network that does this. It didn't change the error message anyway.
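One way to verify that (a diagnostic sketch; the project prefix below is hypothetical) is to inspect the default network Compose creates and to resolve the db service name from inside the phpmyadmin container:

# Compose attaches every service to "<project>_default" unless told otherwise.
docker network ls
docker network inspect myproject_default
# From the phpmyadmin container, check that "db" resolves (getent assumed
# to be available in the image):
docker-compose exec phpmyadmin getent hosts db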
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in the current case, the network definition was missing:
phpmyadmin:
  image: phpmyadmin/phpmyadmin:latest
  container_name: phpmyadmin
  restart: always
  ports:
    # 8080 is the host port and 80 is the container port
    - 8080:80
  environment:
    - PMA_ARBITRARY=1
    - PMA_HOST=mysql
    - MYSQL_USERNAME=root
    - MYSQL_ROOT_PASSWORD=secret
  depends_on:
    - mysql
  networks:
    # define your network where all containers are connected to each other
    - laravel
  volumes:
    # define the directory path where you shall store your persistent data
    # and config files of phpmyadmin
    - ./docker/phpmyadmin
Maybe your container cannot start because its volume contains incompatible data. This can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may have to create a backup first.
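A sketch of that recovery, assuming the db-data volume from the compose file above (the project prefix and dump filename are hypothetical):

# 1. Dump the data first, if the old container still runs.
docker-compose exec db mysqldump -uroot -psecret laravel > backup.sql
# 2. Stop the stack and remove the incompatible volume (Compose prefixes
#    it with the project name).
docker-compose down
docker volume rm myproject_db-data
# 3. Recreate the db service and re-import the dump.
docker-compose up -d db
docker-compose exec -T db mysql -uroot -psecret laravel < backup.sql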
Hello, I have the following error in my Node project:
(node:51) UnhandledPromiseRejectionWarning: Error: getaddrinfo
ENOTFOUND ${DB_HOST}
I'm thinking the problem is that my Postgres is not yet started when my project starts,
so I cannot come up with a solution for how to start my container only after my Postgres is ready. I read something about dockerize, but I cannot imagine how to apply it.
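For reference, dockerize (the jwilder/dockerize utility; this assumes it is installed in the image) simply blocks until a TCP endpoint accepts connections before executing the real command, e.g.:

# Wait up to 30s for Postgres to accept TCP connections, then start the app;
# the service name and port match the compose file below.
dockerize -wait tcp://ci-postgres:5432 -timeout 30s yarn dev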
My Dockerfile:
FROM node:lts-alpine
RUN mkdir -p /home/node/api/node_modules && chown -R node:node /home/node/api
WORKDIR /home/node/api
COPY ormconfig.json .env package.json yarn.* ./
USER node
RUN yarn
COPY --chown=node:node . .
EXPOSE 4000
CMD ["yarn", "dev"]
My docker-compose:
version: '3.7'
services:
  ci-api:
    build: .
    container_name: ci-api
    volumes:
      - .:/home/node/api
      - /home/node/api/node_modules
    ports:
      - '${SERVER_PORT}:${SERVER_PORT}'
    depends_on:
      - ci-postgres
    networks:
      - ci-network
  ci-postgres:
    image: postgres:12
    container_name: ci-postgres
    ports:
      - '${DB_PORT}:5432'
    environment:
      - ALLOW_EMPTY_PASSWORD=no
      - POSTGRES_USER=${DB_USER}
      - POSTGRES_PASSWORD=${DB_PASS}
      - POSTGRES_DB=${DB_NAME}
    volumes:
      - ci-postgres-data:/data
    networks:
      - ci-network
volumes:
  ci-postgres-data:
networks:
  ci-network:
    driver: bridge
And this is my .env:
SERVER_PORT=4000
DB_HOST=ci-postgres
DB_PORT=5432
DB_USER=spirit
DB_PASS=api
DB_NAME=emasa_ci
You can reference the docker-compose.yml below, in which depends_on, healthcheck and links are added, since the web service depends on the db service.
Reference:
Postgresql Container is not running in docker-compose file - Why is this?
version: "3"
services:
webapp:
build: .
container_name: webapp
ports:
- "5000:5000"
links:
- postgres
depends_on:
postgres:
condition: service_healthy
postgres:
image: postgres:11-alpine
container_name: postgres
ports:
- "5432:5432"
environment:
- POSTGRES_DB=tmp
- POSTGRES_USER=tmp
- POSTGRES_PASSWORD=tmp_password
volumes: # Persist the db data
- database-data:/var/lib/postgresql/data
healthcheck:
test: ["CMD-SHELL", "pg_isready -U postgres"]
interval: 10s
timeout: 5s
retries: 5
volumes:
database-data:
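With this healthcheck in place, Compose starts webapp only once Postgres reports healthy; you can watch the status change yourself (container name taken from the file above):

# Status is "starting" until the first successful pg_isready, then "healthy";
# the depends_on condition gates webapp startup on it.
docker-compose up -d
docker inspect --format '{{.State.Health.Status}}' postgres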