After running docker-compose file 1, I want to use one of its named volumes in docker-compose file 2. I tried the below.
What should I be doing instead please? Thanks.
docker-compose file 1 (excerpt) in 'nginx-proxy' folder:
services:
  nginx:
    volumes:
      - certs:/etc/nginx/certs
volumes:
  certs: # this produces a volume named `nginx-proxy_certs`
networks:
  default:
    external:
      name: nginx-proxy
docker-compose file 2 (excerpt) in 'mailserver' folder:
services:
  mailserver:
    volumes:
      - certs:/etc/letsencrypt
volumes:
  certs: # this produces a volume named `mailserver_certs`
networks:
  default:
    external:
      name: nginx-proxy
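One approach (a sketch, assuming the first stack runs under the default project name nginx-proxy, so the volume is actually named nginx-proxy_certs): declare the volume as external in the second file, just as the default network is already declared external, so Compose reuses the existing volume instead of creating mailserver_certs:
services:
  mailserver:
    volumes:
      - certs:/etc/letsencrypt
volumes:
  certs:
    external:
      name: nginx-proxy_certs
networks:
  default:
    external:
      name: nginx-proxy
On newer Compose versions the equivalent spelling is external: true plus a top-level name: nginx-proxy_certs key.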
Related
When using a docker ACI context, the following docker-compose file fails. The mongodb container continuously restarts.
version: "3.9"
services:
  mongodb:
    image: mongo:5.0.6
    env_file: mongo.env
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=changeit
    ports:
      - 27017
    volumes:
      - dbdata:/data/db
volumes:
  dbdata:
    driver: azure_file
    driver_opts:
      share_name: mongodb-data
      storage_account_name: kpncoqyuxumoetuftz
If I don't use the azure_file storage it runs fine (but of course the data isn't persistent).
I am not sure why I can't mount to the default directory /data/db, but to get this to work I had to mount to a different directory and then replace the default command with one that passes that directory as a parameter.
The working version is below:
version: "3.9"
services:
  mongodb:
    image: mongo:5.0.6
    command: ["mongod", "--dbpath=/dbdata"]
    environment:
      - MONGO_INITDB_ROOT_USERNAME=root
      - MONGO_INITDB_ROOT_PASSWORD=changeit
    ports:
      - 27017
    volumes:
      - dbdata:/dbdata
volumes:
  dbdata:
    driver: azure_file
    driver_opts:
      share_name: mongodb-data
      storage_account_name: kpncoqyuxumoetuftz
I have the following simple docker-compose file:
version: "3.7"
services:
  db:
    image: postgres:12
    container_name: project-back_postgres
    environment:
      POSTGRES_PASSWORD: pass
      POSTGRES_USER: user
      POSTGRES_DB: project-back
    volumes:
      - type: bind
        source: ./db-data
        target: /var/lib/postgres/data
    ports:
      - "5432:5432"
My (evidently false) expectation of this volume definition was that the volume contents would be available in the db-data folder. However, the db-data folder does not contain anything and remains empty.
How can I get the db-data folder to contain the Postgres data?
You can try mounting the volume with the short syntax. Note the container path: the official postgres image keeps its data under /var/lib/postgresql/data rather than /var/lib/postgres/data, which is also why your db-data folder stayed empty:
volumes:
  - ./db-data:/var/lib/postgresql/data
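A quick sanity check (using the service name from your file) is to start the database and look at the host folder once Postgres has initialized:
docker-compose up -d db
ls ./db-data
It should now contain files such as PG_VERSION, base/ and pg_wal/.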
Here is a part of my project structure:
Here is a part of my docker-compose.yml file:
Here is my Dockerfile (which is inside the postgres-passport folder):
I have an init.sql script which should create the user, database and tables (the user and db are the same as in the docker-compose.yml file).
But when I look into my docker-entrypoint-initdb.d folder it is empty (there is no init.sql file). I use this command:
docker exec latest_postgres-passport_1 ls -l docker-entrypoint-initdb.d/
On my server (Ubuntu) I see:
I need your help: what am I doing wrong? How can I copy the folder with the init.sql script? Postgres tells me
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
(as it can't find this folder).
All the code in text format is below.
Full docker-compose.yml:
version: '3'

volumes:
  redis_data: {}
  proxy_certs: {}
  nsq_data: {}
  postgres_passport_data: {}
  storage_data: {}

services:
  # ##################################################################################################################
  # Http services
  # ##################################################################################################################
  back-passport:
    image: ${REGISTRY_BASE_URL}/backend:${TAG}
    restart: always
    expose:
      - 9000
    depends_on:
      - postgres-passport
      - redis
      - nsq
    environment:
      ACCESS_LOG: ${ACCESS_LOG}
      AFTER_CONFIRM_BASE_URL: ${AFTER_CONFIRM_BASE_URL}
      CONFIRM_BASE_URL: ${CONFIRM_BASE_URL}
      COOKIE_DOMAIN: ${COOKIE_DOMAIN}
      COOKIE_SECURE: ${COOKIE_SECURE}
      DEBUG: ${DEBUG}
      POSTGRES_URL: ${POSTGRES_URL_PASSPORT}
      NSQ_ADDR: ${NSQ_ADDR}
      REDIS_URL: ${REDIS_URL}
      SIGNING_KEY: ${SIGNING_KEY}
    command: "passport"

  # ##################################################################################################################
  # Background services
  # ##################################################################################################################
  back-email:
    image: ${REGISTRY_BASE_URL}/backend:${TAG}
    restart: always
    depends_on:
      - nsqlookup
    environment:
      DEFAULT_FROM: ${EMAIL_DEFAULT_FROM}
      NSQLOOKUP_ADDR: ${NSQLOOKUP_ADDR}
      MAILGUN_DOMAIN: ${MAILGUN_DOMAIN}
      MAILGUN_API_KEY: ${MAILGUN_API_KEY}
      TEMPLATES_DIR: "/var/templates/email"
    command: "email"

  # ##################################################################################################################
  # Frontend apps
  # ##################################################################################################################
  front-passport:
    image: ${REGISTRY_BASE_URL}/frontend-passport:${TAG}
    restart: always
    expose:
      - 80

  # ##################################################################################################################
  # Reverse proxy
  # ##################################################################################################################
  proxy:
    image: ${REGISTRY_BASE_URL}/proxy:${TAG}
    restart: always
    ports:
      - 80:80
      - 443:443
    volumes:
      - "proxy_certs:/root/.caddy"
    environment:
      CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
      CLOUDFLARE_API_KEY: ${CLOUDFLARE_API_KEY}
      # ACME_AGREE: 'true'

  # ##################################################################################################################
  # Services (database, event bus etc)
  # ##################################################################################################################
  postgres-passport:
    image: postgres:latest
    restart: always
    expose:
      - 5432
    volumes:
      - "./postgres-passport:/docker-entrypoint-initdb.d"
      - "./data/postgres_passport_data:/var/lib/postgresql/data"
    environment:
      POSTGRES_DB: ${POSTGRES_PASSPORT_DB}
      POSTGRES_USER: ${POSTGRES_PASSPORT_USER}
      POSTGRES_PASSWORD: ${POSTGRES_PASSPORT_PASSWORD}

  redis:
    image: redis
    restart: always
    expose:
      - 6379
    volumes:
      - "redis_data:/data"

  nsqlookup:
    image: nsqio/nsq:v1.1.0
    restart: always
    expose:
      - 4160
      - 4161
    command: /nsqlookupd

  nsq:
    image: nsqio/nsq:v1.1.0
    restart: always
    depends_on:
      - nsqlookup
    expose:
      - 4150
      - 4151
    volumes:
      - "nsq_data:/data"
    command: /nsqd --lookupd-tcp-address=nsqlookup:4160 --data-path=/data

  # ##################################################################################################################
  # Ofelia cron job scheduler for docker
  # ##################################################################################################################
  scheduler:
    image: mcuadros/ofelia
    restart: always
    volumes:
      - "/var/run/docker.sock:/var/run/docker.sock:ro"
      - "./etc/scheduler:/etc/ofelia"
Dockerfile:
FROM postgres:latest
COPY init.sql /docker-entrypoint-initdb.d/
In your docker-compose.yml file, you say in part:
postgres-passport:
  image: postgres:latest
  volumes:
    - "./postgres-passport:/docker-entrypoint-initdb.d"
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
So you're running the stock postgres image (the Dockerfile you show never gets called); and whatever's in your local postgres-passport directory, starting from the same directory as the docker-compose.yml file, appears as the /docker-entrypoint-initdb.d directory inside the container.
In the directory tree you show, if you run
cd deploy/latest
docker-compose up
then ./postgres-passport is expected to be inside the deploy/latest tree. Since it's not actually there, Docker doesn't complain; it just creates it as an empty directory.
If you're just trying to inject this configuration file, using a volume is a reasonable way to do it; you don't need the Dockerfile. However, you need to give the correct path to the directory you're trying to mount into the container.
postgres-passport:
  image: postgres:latest
  volumes:
    # vvv Change this path vvv
    - "../../postgres-passport/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
If you want to use that Dockerfile instead, you need to tell Docker Compose to build the custom image instead of using the standard one. Since you're building the init file into the image, you don't also need a bind-mount of the same file.
postgres-passport:
  build: ../../postgres-passport
  volumes:
    # Only this one
    - "./data/postgres_passport_data:/var/lib/postgresql/data"
(You will also need to adjust the COPY statement to match the path layout; just copying the entire local docker-entrypoint-initdb.d directory into the image is probably the most straightforward thing.)
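For reference, a minimal sketch of that adjusted Dockerfile, assuming the init scripts are kept in a docker-entrypoint-initdb.d folder next to it (that folder name is an assumption about your layout):
FROM postgres:latest
# Copy the whole local init-script folder into the image's init directory
COPY docker-entrypoint-initdb.d/ /docker-entrypoint-initdb.d/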
I am using compose file version 3 and below is my code.
I tried to add the variable COMPOSE_CONVERT_WINDOWS_PATHS: 1 under environment, but I still get the error:
ERROR: for db-on-docker-ms_mysql-dev_1 Cannot create container for service mysql-dev: invalid volume specification: '/c/Dockerfile/db-on-docker-ms:/var/lib/mysql under volumes:rw'
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - /c/Dockerfile/db-on-docker-ms:/var/lib/mysql
My Docker Version: 18.09.2
I think you either need to set the COMPOSE_CONVERT_WINDOWS_PATHS environment variable from your command line:
$ export COMPOSE_CONVERT_WINDOWS_PATHS=1
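If you are in PowerShell or cmd rather than a Git Bash / WSL shell (an assumption about your setup), the equivalent would be:
In PowerShell:
$env:COMPOSE_CONVERT_WINDOWS_PATHS = 1
In cmd:
set COMPOSE_CONVERT_WINDOWS_PATHS=1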
Then change the volumes configuration
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - c:\Dockerfile\db-on-docker-ms:/var/lib/mysql
Run docker compose
$ docker-compose up
Or you can attempt to set the volumes like this
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - //c/Dockerfile/db-on-docker-ms:/var/lib/mysql
And run docker compose
$ docker-compose up
Thanks to Misantorp's answer first!
I finally figured out how to do it with Windows containers.
The volumes path should be:
volumes:
  - C:\Dockerfile\db-on-docker-ms:/var/lib/mysql
Run this command in PowerShell to set the environment variable:
$env:COMPOSE_CONVERT_WINDOWS_PATHS = 0
then run:
docker-compose up
This is an extract from the Volume configuration reference in the Docker docs.
version: "3"
services:
  db:
    image: db
    volumes:
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    volumes:
      - data-volume:/var/lib/backup/data
volumes:
  data-volume:
Can I have something similar mapped to a custom directory, instead of /var/lib/docker/volumes/?
Something like this:
version: "3"
services:
  db:
    image: db
    volumes:
      - data-volume:/var/lib/db
  backup:
    image: backup-service
    volumes:
      - data-volume:/var/lib/backup/data
volumes:
  data-volume: /home/user/db
It is possible. See https://github.com/docker/docker/issues/19990#issuecomment-248955005
You will have to create the volume with the Docker CLI.
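Roughly what that linked comment describes (a sketch reusing the /home/user/db path and data-volume name from the question): make the named volume a local volume whose backing device is a bind mount to your directory. You can either create it up front with the Docker CLI and mark it external in the compose file:
docker volume create --driver local \
  --opt type=none \
  --opt o=bind \
  --opt device=/home/user/db \
  data-volume

volumes:
  data-volume:
    external: true
or let docker-compose create it by declaring the same driver options in the compose file:
volumes:
  data-volume:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/db
Either way, the host directory (/home/user/db here) must already exist before the volume is first mounted.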