Here is a part of my project structure:
Here is a part of my docker-compose.yml file:
Here is my Dockerfile (which is inside the postgres-passport folder):
I have an init.sql script which should create a user, a database and tables (the user and database are the same as in the docker-compose.yml file).
But when I look into my docker-entrypoint-initdb.d folder it is empty (there is no init.sql file). I use this command:
docker exec latest_postgres-passport_1 ls -l docker-entrypoint-initdb.d/
On my server (Ubuntu) I see:
I need your help: what am I doing wrong? How can I copy a folder with the init.sql script into the container? Postgres tells me:
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
(as it can't find this folder)
All code in text format below:
Full docker-compose.yml:
version: '3'
volumes:
redis_data: {}
proxy_certs: {}
nsq_data: {}
postgres_passport_data: {}
storage_data: {}
services:
# ####################################################################################################################
# Http services
# ####################################################################################################################
back-passport:
image: ${REGISTRY_BASE_URL}/backend:${TAG}
restart: always
expose:
- 9000
depends_on:
- postgres-passport
- redis
- nsq
environment:
ACCESS_LOG: ${ACCESS_LOG}
AFTER_CONFIRM_BASE_URL: ${AFTER_CONFIRM_BASE_URL}
CONFIRM_BASE_URL: ${CONFIRM_BASE_URL}
COOKIE_DOMAIN: ${COOKIE_DOMAIN}
COOKIE_SECURE: ${COOKIE_SECURE}
DEBUG: ${DEBUG}
POSTGRES_URL: ${POSTGRES_URL_PASSPORT}
NSQ_ADDR: ${NSQ_ADDR}
REDIS_URL: ${REDIS_URL}
SIGNING_KEY: ${SIGNING_KEY}
command: "passport"
# ####################################################################################################################
# Background services
# ####################################################################################################################
back-email:
image: ${REGISTRY_BASE_URL}/backend:${TAG}
restart: always
depends_on:
- nsqlookup
environment:
DEFAULT_FROM: ${EMAIL_DEFAULT_FROM}
NSQLOOKUP_ADDR: ${NSQLOOKUP_ADDR}
MAILGUN_DOMAIN: ${MAILGUN_DOMAIN}
MAILGUN_API_KEY: ${MAILGUN_API_KEY}
TEMPLATES_DIR: "/var/templates/email"
command: "email"
# ####################################################################################################################
# Frontend apps
# ####################################################################################################################
front-passport:
image: ${REGISTRY_BASE_URL}/frontend-passport:${TAG}
restart: always
expose:
- 80
# ####################################################################################################################
# Reverse proxy
# ####################################################################################################################
proxy:
image: ${REGISTRY_BASE_URL}/proxy:${TAG}
restart: always
ports:
- 80:80
- 443:443
volumes:
- "proxy_certs:/root/.caddy"
environment:
CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
CLOUDFLARE_API_KEY: ${CLOUDFLARE_API_KEY}
# ACME_AGREE: 'true'
# ####################################################################################################################
# Services (database, event bus etc)
# ####################################################################################################################
postgres-passport:
image: postgres:latest
restart: always
expose:
- 5432
volumes:
- "./postgres-passport:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
environment:
POSTGRES_DB: ${POSTGRES_PASSPORT_DB}
POSTGRES_USER: ${POSTGRES_PASSPORT_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSPORT_PASSWORD}
redis:
image: redis
restart: always
expose:
- 6379
volumes:
- "redis_data:/data"
nsqlookup:
image: nsqio/nsq:v1.1.0
restart: always
expose:
- 4160
- 4161
command: /nsqlookupd
nsq:
image: nsqio/nsq:v1.1.0
restart: always
depends_on:
- nsqlookup
expose:
- 4150
- 4151
volumes:
- "nsq_data:/data"
command: /nsqd --lookupd-tcp-address=nsqlookup:4160 --data-path=/data
# ####################################################################################################################
# Ofelia cron job scheduler for docker
# ####################################################################################################################
scheduler:
image: mcuadros/ofelia
restart: always
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./etc/scheduler:/etc/ofelia"
Dockerfile:
FROM postgres:latest
COPY init.sql /docker-entrypoint-initdb.d/
In your docker-compose.yml file, you say in part:
postgres-passport:
image: postgres:latest
volumes:
- "./postgres-passport:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
So you're running the stock postgres image (the Dockerfile you show never gets called); and whatever's in your local postgres-passport directory, resolved relative to the directory containing the docker-compose.yml file, appears as the /docker-entrypoint-initdb.d directory inside the container.
In the directory tree you show, if you run
cd deploy/latest
docker-compose up
then the ./postgres-passport directory is expected to be in the deploy/latest tree. Since it's not actually there, Docker doesn't complain; it just creates it as an empty directory.
If you're just trying to inject this configuration file, using a volume is a reasonable way to do it; you don't need the Dockerfile. However, you need to give the correct path to the directory you're trying to mount into the container.
postgres-passport:
image: postgres:latest
volumes:
# vvv Change this path vvv
- "../../postgres-passport/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
If you want to use that Dockerfile instead, you need to tell Docker Compose to build the custom image instead of using the standard one. Since you're building the init file into the image, you don't also need a bind-mount of the same file.
postgres-passport:
build: ../../postgres-passport
volumes:
# Only this one
- "./data/postgres_passport_data:/var/lib/postgresql/data"
(You will also need to adjust the COPY statement to match the path layout; just copying the entire local docker-entrypoint-initdb.d directory into the image is probably the most straightforward thing.)
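If you go the build: route, a minimal sketch of that adjusted Dockerfile, assuming init.sql sits in a docker-entrypoint-initdb.d/ subdirectory of postgres-passport (the same directory the bind-mount variant points at):

FROM postgres:latest
# copy every init script from the local directory into the image
COPY docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/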
Related
I have been using Docker for a Postgres database as I work on my project. I used this docker-compose file to spin it up:
version: '3'
services:
postgres:
image: postgres
ports:
- "4001:5432"
environment:
- POSTGRES_DB=4x4-db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- pgdata-4x4:/var/lib/postgresql/data
volumes:
pgdata-4x4: {}
I now want to containerise my back and front ends together with the database. I made this docker-compose file to do so:
version: '3.8'
services:
frontend:
build: ./4x4
ports:
- "3000:3000"
backend:
build: ./server
ports:
- "8000:8000"
db:
image: postgres
ports:
- "4001:5432"
environment:
- POSTGRES_DB=4x4-db
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
volumes:
- pgdata-4x4:/var/lib/postgresql/data
volumes:
pgdata-4x4:
external: true
However, when I execute the command docker-compose up on the second file, I do not access the same data as the first one: the database is blank. If I spin up the first one again, I return to the old data (i.e. nothing is overwritten).
I presumed that the second file would connect to the same Postgres database.
I would appreciate any clarification.
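For reference, a quick way to see which named volumes each stack actually created (a hedged diagnostic; Compose prefixes volume names with the project name, which defaults to the directory name, and <project> below is a placeholder for that prefix):

docker volume ls
docker volume inspect --format '{{ .Mountpoint }}' <project>_pgdata-4x4

A volume declared with external: true, as in the second file, is expected to already exist under the exact name pgdata-4x4, while the first file creates a project-prefixed volume, so the two stacks may well be looking at different volumes.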
I am trying to set up a Docker stack with Laravel, MariaDB, nginx, Redis and phpMyAdmin. The Laravel site works fine, but if I switch to port 10081, as configured in the docker-compose.yml, I am not able to log in with the root account.
It says "mysqli::real_connect(): php_network_getaddresses: getaddrinfo failed: Temporary failure in name resolution".
I already tried to configure a "my-network" which links all of the containers, but if I understand Docker right there is already a "default" network which does this. It didn't change the error message anyway.
Here is my full docker-compose file:
version: "3.8"
services:
redis:
image: redis:6.0-alpine
expose:
- "6380"
db:
image: mariadb:10.4
ports:
- "3307:3306"
environment:
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
MYSQL_DATABASE: laravel
volumes:
- db-data:/var/lib/mysql
nginx:
image: nginx:1.19-alpine
build:
context: .
dockerfile: ./docker/nginx.Dockerfile
restart: always
depends_on:
- php
ports:
- "10080:80"
networks:
- default
environment:
VIRTUAL_HOST: cockpit.example.de
volumes:
- ./docker/nginx.conf:/etc/nginx/nginx.conf:ro
- ./public:/app/public:ro
php:
build:
target: dev
context: .
dockerfile: ./docker/php.Dockerfile
working_dir: /app
env_file: .env
restart: always
expose:
- "9000"
depends_on:
- composer
- redis
- db
volumes:
- ./:/app
- ./docker/www.conf:/usr/local/etc/php-fpm.d/www.conf:ro
links:
- db:mysql
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
ports:
- 10081:80
restart: always
environment:
PMA_HOST : db
MYSQL_USERNAME: root
MYSQL_ROOT_PASSWORD: secret
depends_on:
- db
#user: "109:115"
links:
- db:mysql
node:
image: node:12-alpine
working_dir: /app
volumes:
- ./:/app
command: sh -c "npm install && npm run watch"
composer:
image: composer:1.10
working_dir: /app
#environment:
#SSH_AUTH_SOCK: /ssh-auth.sock
volumes:
- ./:/app
#- "$SSH_AUTH_SOCK:/ssh-auth.sock"
- /etc/passwd:/etc/passwd:ro
- /etc/group:/etc/group:ro
command: composer install --ignore-platform-reqs --no-scripts
volumes:
db-data:
Make sure you have defined all attributes correctly for the phpmyadmin container; in this case the network definition was missing:
phpmyadmin:
image: phpmyadmin/phpmyadmin:latest
container_name: phpmyadmin
restart: always
ports:
# 8080 is the host port and 80 is the docker port
- 8080:80
environment:
- PMA_ARBITRARY=1
- PMA_HOST=mysql
- MYSQL_USERNAME=root
- MYSQL_ROOT_PASSWORD=secret
depends_on:
- mysql
networks:
# define your network where all containers are connected to each other
- laravel
volumes:
# define directory path where you shall store your persistent data and config
# files of phpmyadmin
- ./docker/phpmyadmin
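For that to work, the laravel network also has to be declared at the top level of the compose file, and the database service phpMyAdmin points at (db in the question, mysql in the snippet above) has to join the same network; a minimal sketch, assuming the default bridge driver:

services:
  db:
    # ... image, environment, volumes as before ...
    networks:
      - laravel

networks:
  laravel:
    driver: bridge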
Maybe your container cannot start because its volume contains incompatible data. That can happen if you downgrade the version of the mysql or mariadb image.
You can resolve the problem by removing the volume and importing the database again. You may need to create a backup first.
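A hedged sketch of that cleanup, using the db service and the db-data volume from the compose file above (this deletes the database files, so dump anything you still need first):

# optional backup before dropping the volume
docker-compose exec -T db mysqldump -u root -psecret laravel > backup.sql
# stop everything and remove the named volumes declared in the file
docker-compose down -v
docker-compose up -d db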
I have a few systems where I use docker-compose and there is no problem.
However, I have one here where 'down' doesn't do anything at all.
'up' works perfectly though. This is on macOS.
The project is nicknamed 'stormy', and here is the compose file:
version: '3.3'
services:
rabbitmq:
container_name: stormy_rabbitmq
image: rabbitmq:management-alpine
restart: unless-stopped
ports:
- 5672:5672
- 15672:15672
expose:
- 5672
volumes:
#- /appdata/stormy/rabbitmq/etc/:/etc/rabbitmq/
- /appdata/stormy/rabbitmq/data/:/var/lib/rabbitmq/
- /appdata/stormy/rabbitmq/logs/:/var/log/rabbitmq/
networks:
- default
settings:
container_name: stormy_settings
image: registry.gitlab.com/robinhoodcrypto/stormy/settings:latest
restart: unless-stopped
volumes:
- /appdata/stormy/settings:/appdata/stormy/settings
external_links:
- stormy_rabbitmq:rabbitmq
networks:
- default
capture:
container_name: stormy_capture
image: registry.gitlab.com/robinhoodcrypto/stormy/capture:latest
restart: unless-stopped
volumes:
- /appdata/stormy/capture:/appdata/stormy/capture
external_links:
- stormy_rabbitmq:rabbitmq
networks:
- default
livestream:
container_name: stormy_livestream
image: registry.gitlab.com/robinhoodcrypto/stormy/livestream:latest
restart: unless-stopped
volumes:
- /appdata/stormy/capture:/appdata/stormy/livestream
external_links:
- stormy_rabbitmq:rabbitmq
networks:
- default
networks:
default:
external:
name: stormy-network
the 'up' script is as follows:
[ ! "$(docker network ls | grep stormy-network)" ] && docker network create stormy-network
echo '*****' | docker login registry.gitlab.com -u 'gitlab+deploy-token-******' --password-stdin
docker-compose down
docker-compose build --pull
docker-compose -p 'stormy' up -d
and the 'down' is simply:
docker-compose down
version:
$ docker-compose -v
docker-compose version 1.24.1, build 4667896b
when I do 'down', here is the output:
$ docker-compose down
Network stormy-network is external, skipping
and I put a verbose log output at: https://pastebin.com/Qnw5J88V
Why isn't 'down' working?
The docker-compose -p option sets the project name, which gets included in things like container names and labels; Compose uses it to know which containers belong to which Compose services. You need to specify it on all of the commands that interact with containers (docker-compose up, down, ps, ...). If you're doing this frequently, setting the COMPOSE_PROJECT_NAME environment variable might be easier.
#!/bin/sh
export COMPOSE_PROJECT_NAME=stormy
docker-compose build --pull
docker-compose down
docker-compose up -d
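Alternatively, docker-compose automatically reads a .env file from the project directory, so the project name can live there instead of being exported in every shell (a sketch, assuming the file sits next to docker-compose.yml):

# .env
COMPOSE_PROJECT_NAME=stormy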
Problem
I have an application with Postgres. I want to be able to back up the initial database data so that I don't have to re-enter it on each deployment. However, despite having a named volume set up in my compose file, that isn't happening.
What I'm not sure of is how to have Postgres save its data into the directory associated with the volume. I'm also not sure exactly how to associate a directory with the named volume. What I want is for the Docker host server to be able to see the Postgres data in the named volume's associated directory.
Could someone please provide an explanation/some examples of how to handle this? Right now, even though the volume is associated with the db service in the compose file, nothing is written to the database_volume/ directory. This is what I would like to address.
Code
Here's my Dockerfile:
FROM python:3.6
ARG requirements=requirements/production.txt
ENV DJANGO_SETTINGS_MODULE=sasite.settings.production_test
WORKDIR /app
COPY manage.py /app/
COPY requirements/ /app/requirements/
RUN pip install -r $requirements
COPY config config
COPY sasite sasite
COPY templates templates
COPY logs logs
ADD /scripts/docker-entrypoint.sh /docker-entrypoint.sh
RUN chmod a+x /docker-entrypoint.sh
EXPOSE 8001
ENTRYPOINT ["/docker-entrypoint.sh"]
CMD ["/usr/local/bin/gunicorn", "--config", "config/gunicorn.conf", "--log-config", "config/logging.conf", "-e", "DJANGO_SETTINGS_MODULE=sasite.settings.production_test", "-w", "4", "-b", "0.0.0.0:8001", "sasite.wsgi:application"]
And my docker-compose.yml:
version: "3.2"
services:
app:
restart: always
build:
context: .
dockerfile: Dockerfile.prodtest
args:
requirements: requirements/production.txt
container_name: dj01
environment:
- DJANGO_SETTINGS_MODULE=sasite.settings.production_test
- PYTHONDONTWRITEBYTECODE=1
volumes:
- ./:/app
- /static:/static
- /media:/media
networks:
- main
depends_on:
- db
db:
restart: always
image: postgres:10.1-alpine
container_name: ps01
environment:
POSTGRES_DB: sasite_db
POSTGRES_USER: pguser
POSTGRES_PASSWORD: pguser123
ports:
- "5432:5432"
volumes:
- database_volume:/var/lib/postgresql/data
networks:
- main
nginx:
restart: always
image: nginx
container_name: ng01
volumes:
- ./config/nginx-prodtest.conf:/etc/nginx/conf.d/default.conf:ro
- ./static:/usr/share/nginx/sasite/static
- ./media:/usr/share/nginx/sasite/media
ports:
- "80:80"
- "443:443"
networks:
- main
depends_on:
- app
networks:
main:
volumes:
database_volume:
driver_opts:
type: none
device: ./database_volume
o: bind
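For what it's worth, a bind-backed named volume like this generally needs an absolute device path, and the directory has to exist before docker-compose up; a sketch under those assumptions, using ${PWD} so the path follows wherever the compose file is run from:

volumes:
  database_volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: ${PWD}/database_volume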
I am using a Docker container for my FiWare WireCloud. It works properly, but when I stop my containers with docker-compose down and restart them with docker-compose up, all my data is erased, even though I specified a volume for the PostgreSQL database, and I get the following error:
ERROR: relation "wirecloud_workspace" does not exist at character 370
If I want to make it work again, I have to recreate the whole database from scratch (initdb & createsuperuser).
What I would like to do is save my WireCloud data inside a volume and be able to back it up and reload it. Here is my current docker-compose.yml file in version 3:
version: '3.3'
services:
iot-mongo:
image: mongo:3.2
ports:
- "27017:27017"
volumes:
- ./data/mongo:/data/db
orion:
image: fiware/orion:1.9.0
links:
- iot-mongo
ports:
- "1026:1026"
command: -dbhost iot-mongo
nginx:
restart: always
image: nginx:1.13
ports:
- "80:80"
- "443:443"
volumes:
- ./nginx/nginx.conf:/etc/nginx/nginx.conf:ro
- ./nginx/sites-available:/etc/nginx/sites-available
- ./letsencrypt/well-known:/www/letsencrypt
- /etc/letsencrypt/:/etc/letsencrypt/
- wirecloudwww:/var/www/static
- wirecloudinstance:/opt/wirecloud_instance
links:
- wirecloud:wirecloud
- orion:orion
postgres:
restart: always
image: postgres:latest
ports:
- "5432:5432"
volumes:
- postgresdata:/var/lib/postgresql
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD:
POSTGRES_DB: postgres
PGDATA: /tmp
wirecloud:
restart: always
image: fiware/wirecloud:1.0-composable
links:
- postgres:postgres
volumes:
- wirecloudwww:/var/www/static
- wirecloudinstance:/opt/wirecloud_instance
volumes:
wirecloudwww: {}
wirecloudinstance: {}
postgresdata: {}
I also tried with the docker-compose version 1 file format, like they show in the documentation, but the result is the same.
The problem is the definition of the postgres volume and the PGDATA environment variable. The PGDATA variable is telling PostgreSQL to store data in /tmp, so it is not going to store data inside the volume (you could create a volume on /tmp, but that seems a bit strange). If you remove the PGDATA environment variable, postgres will store its data in /var/lib/postgresql/data. Using this definition for the postgres service should do the trick:
postgres:
restart: always
image: postgres:latest
ports:
- "5432:5432"
volumes:
- postgresdata:/var/lib/postgresql/data
environment:
POSTGRES_USER: postgres
POSTGRES_PASSWORD:
POSTGRES_DB: postgres
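After removing PGDATA, the container should initialize a fresh cluster inside the postgresdata volume on its next start; a sketch of forcing that recreation (assuming the service name postgres from the file above):

docker-compose up -d --force-recreate postgres
docker volume ls   # the postgresdata volume now holds the cluster

The WireCloud initdb/createsuperuser steps will need to be run one last time; after that the data should survive docker-compose down and up.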