How do I access values from entrypoint.sh in my docker-compose file? I have declared all the values in entrypoint.sh as shown below and need to access them in my docker-compose file.
I used a Docker volume to copy the entrypoint.sh script into the /usr/local/bin directory of the container.
entrypoint.sh script
MONGO_ROOT_USERNAME=root
MONGO_ROOT_PASSWORD=mongo#123
MONGO_EXPRESS_USERNAME=root
MONGO_EXPRESS_PASSWORD=express#123
docker-compose file
mongo-express:
image: mongo-express
ports:
- 8081:8081
volumes:
- "./docker-scripts/entrypoint.sh:/usr/local/bin"
environment:
ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USERNAME}
ME_CONFIG_BASICAUTH_PASSWORD: ${MONGO_EXPRESS_PASSWORD}
ME_CONFIG_MONGODB_PORT: 27017
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
But when I run docker-compose up -d I get these warnings:
WARNING: The MONGO_ROOT_USERNAME variable is not set. Defaulting to a blank string.
WARNING: The MONGO_ROOT_PASSWORD variable is not set. Defaulting to a blank string.
WARNING: The MONGO_EXPRESS_USERNAME variable is not set. Defaulting to a blank string.
WARNING: The MONGO_EXPRESS_PASSWORD variable is not set. Defaulting to a blank string.
You forgot to export your environment variables:
export MONGO_ROOT_USERNAME=root
export MONGO_ROOT_PASSWORD=mongo#123
export MONGO_EXPRESS_USERNAME=root
export MONGO_EXPRESS_PASSWORD=express#123
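Equivalently, if entrypoint.sh contains only plain VAR=value lines, you can auto-export everything it assigns by sourcing it between set -a / set +a instead of editing each line. A small demonstration of the mechanism (the temp file here stands in for entrypoint.sh):

```shell
# "set -a" marks every variable assigned afterwards for export, so
# sourcing a plain VAR=value file exposes the values to child
# processes such as docker-compose.
cat > /tmp/entrypoint-vars.sh <<'EOF'
MONGO_ROOT_USERNAME=root
MONGO_EXPRESS_USERNAME=root
EOF

set -a                      # auto-export everything assigned from here on
. /tmp/entrypoint-vars.sh
set +a                      # stop auto-exporting

sh -c 'echo "$MONGO_ROOT_USERNAME"'   # prints: root
```

After this, running docker-compose in the same shell would see the variables and the substitution warnings go away.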
Update 1:
First, create a .env file, then add it to your docker-compose file.
.env
MONGO_ROOT_USERNAME=root
MONGO_ROOT_PASSWORD=mongo#123
MONGO_EXPRESS_USERNAME=root
MONGO_EXPRESS_PASSWORD=express#123
Then in docker-compose.yml:
mongo-express:
image: mongo-express
ports:
- 8081:8081
volumes:
- "./docker-scripts/entrypoint.sh:/usr/local/bin"
env_file: # <-- Add this line
- .env # <-- Add this line
environment:
ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USERNAME}
ME_CONFIG_BASICAUTH_PASSWORD: ${MONGO_EXPRESS_PASSWORD}
ME_CONFIG_MONGODB_PORT: 27017
ME_CONFIG_MONGODB_ADMINUSERNAME: ${MONGO_ROOT_USERNAME}
ME_CONFIG_MONGODB_ADMINPASSWORD: ${MONGO_ROOT_PASSWORD}
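One caveat, based on Compose's documented behavior: env_file: injects variables into the container's runtime environment; it does not drive ${...} substitution inside the compose file itself. Substitution values come from your shell environment or from a file literally named .env in the project directory, which Compose reads automatically. So it is the file being named .env (next to docker-compose.yml) that makes the ${MONGO_EXPRESS_USERNAME} references resolve; a minimal sketch:

```yaml
# .env next to docker-compose.yml is read automatically for ${...}
# substitution; env_file only affects the container's runtime env.
mongo-express:
image: mongo-express
environment:
ME_CONFIG_BASICAUTH_USERNAME: ${MONGO_EXPRESS_USERNAME}  # resolved from .env
```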
Related
I would appreciate some help with Docker and Postgres.
I have a local Postgres database and a project on NestJS.
I killed the process that was listening on port 5432.
My Dockerfile
FROM node:16.13.1
WORKDIR /app
COPY package.json ./
COPY yarn.lock ./
RUN yarn install
COPY . .
COPY ./dist ./dist
CMD ["yarn", "start:dev"]
My docker-compose.yml
version: '3.0'
services:
main:
container_name: main
build:
context: .
env_file:
- .env
volumes:
- .:/app
- /app/node_modules
ports:
- 4000:4000
- 9229:9229
command: yarn start:dev
depends_on:
- postgres
restart: always
postgres:
container_name: postgres
image: postgres:12
env_file:
- .env
environment:
PG_DATA: /var/lib/postgresql/data
POSTGRES_HOST_AUTH_METHOD: 'trust'
ports:
- 5432:5432
volumes:
- pgdata:/var/lib/postgresql/data
restart: always
volumes:
pgdata:
.env
DB_TYPE=postgres
DB_HOST=postgres
DB_PORT=5432
DB_USERNAME=hleb
DB_NAME=artwine
DB_PASSWORD=Mypassword
running sudo docker-compose build - NO ERRORS
running sudo docker-compose up --force-recreate - ERROR
ERROR [ExceptionHandler] role "hleb" does not exist.
I've tried multiple suggestions from existing issues but nothing helped.
What am I doing wrong?
Thanks!
Do not use sudo unless you have to.
Use the latest Postgres release if possible.
The PostgreSQL Docker image provides some environment variables that will help you bootstrap your database.
Be aware:
The PostgreSQL image uses several environment variables which are easy to miss. The only variable required is POSTGRES_PASSWORD, the rest are optional.
Warning: the Docker specific variables will only have an effect if you start the container with a data directory that is empty; any pre-existing database will be left untouched on container startup.
When you do not provide the POSTGRES_USER environment variable in the docker-compose.yml file, it will default to postgres.
Your .env file used by Docker Compose does not contain the Docker-specific environment variables.
So amending/extending it to:
POSTGRES_USER=hleb
POSTGRES_DB=artwine
POSTGRES_PASSWORD=Mypassword
should do the trick. If the data directory already exists, you will have to delete the volume and re-create it for this to take effect.
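For instance (a sketch; note that -v removes the named volumes declared in the compose file, so use it only when you intend to wipe the data):

```shell
# Stop the stack and delete its named volumes so Postgres re-runs its
# first-time initialisation with the new POSTGRES_* variables.
docker-compose down -v
docker-compose up --force-recreate
```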
Here is a part of my project structure:
Here is a part of my docker-compose.yml file:
Here is my Dockerfile (which is inside postgres-passport folder):
I have init.sql script which should create user, database and tables (user and db are the same as in docker-compose.yml file)
But when I look into my docker-entrypoint-initdb.d folder it is empty (there is no init.sql file). I use this command:
docker exec latest_postgres-passport_1 ls -l docker-entrypoint-initdb.d/
On my server (Ubuntu) I see:
I need your help: what am I doing wrong? How can I copy the folder with the init.sql script into the container? Postgres tells me:
/usr/local/bin/docker-entrypoint.sh: ignoring /docker-entrypoint-initdb.d/*
(since it cannot find any files in that directory)
All code in text format below:
Full docker-compose.yml:
version: '3'
volumes:
redis_data: {}
proxy_certs: {}
nsq_data: {}
postgres_passport_data: {}
storage_data: {}
services:
# ####################################################################################################################
# Http services
# ####################################################################################################################
back-passport:
image: ${REGISTRY_BASE_URL}/backend:${TAG}
restart: always
expose:
- 9000
depends_on:
- postgres-passport
- redis
- nsq
environment:
ACCESS_LOG: ${ACCESS_LOG}
AFTER_CONFIRM_BASE_URL: ${AFTER_CONFIRM_BASE_URL}
CONFIRM_BASE_URL: ${CONFIRM_BASE_URL}
COOKIE_DOMAIN: ${COOKIE_DOMAIN}
COOKIE_SECURE: ${COOKIE_SECURE}
DEBUG: ${DEBUG}
POSTGRES_URL: ${POSTGRES_URL_PASSPORT}
NSQ_ADDR: ${NSQ_ADDR}
REDIS_URL: ${REDIS_URL}
SIGNING_KEY: ${SIGNING_KEY}
command: "passport"
# ####################################################################################################################
# Background services
# ####################################################################################################################
back-email:
image: ${REGISTRY_BASE_URL}/backend:${TAG}
restart: always
depends_on:
- nsqlookup
environment:
DEFAULT_FROM: ${EMAIL_DEFAULT_FROM}
NSQLOOKUP_ADDR: ${NSQLOOKUP_ADDR}
MAILGUN_DOMAIN: ${MAILGUN_DOMAIN}
MAILGUN_API_KEY: ${MAILGUN_API_KEY}
TEMPLATES_DIR: "/var/templates/email"
command: "email"
# ####################################################################################################################
# Frontend apps
# ####################################################################################################################
front-passport:
image: ${REGISTRY_BASE_URL}/frontend-passport:${TAG}
restart: always
expose:
- 80
# ####################################################################################################################
# Reverse proxy
# ####################################################################################################################
proxy:
image: ${REGISTRY_BASE_URL}/proxy:${TAG}
restart: always
ports:
- 80:80
- 443:443
volumes:
- "proxy_certs:/root/.caddy"
environment:
CLOUDFLARE_EMAIL: ${CLOUDFLARE_EMAIL}
CLOUDFLARE_API_KEY: ${CLOUDFLARE_API_KEY}
# ACME_AGREE: 'true'
# ####################################################################################################################
# Services (database, event bus etc)
# ####################################################################################################################
postgres-passport:
image: postgres:latest
restart: always
expose:
- 5432
volumes:
- "./postgres-passport:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
environment:
POSTGRES_DB: ${POSTGRES_PASSPORT_DB}
POSTGRES_USER: ${POSTGRES_PASSPORT_USER}
POSTGRES_PASSWORD: ${POSTGRES_PASSPORT_PASSWORD}
redis:
image: redis
restart: always
expose:
- 6379
volumes:
- "redis_data:/data"
nsqlookup:
image: nsqio/nsq:v1.1.0
restart: always
expose:
- 4160
- 4161
command: /nsqlookupd
nsq:
image: nsqio/nsq:v1.1.0
restart: always
depends_on:
- nsqlookup
expose:
- 4150
- 4151
volumes:
- "nsq_data:/data"
command: /nsqd --lookupd-tcp-address=nsqlookup:4160 --data-path=/data
# ####################################################################################################################
# Ofelia cron job scheduler for docker
# ####################################################################################################################
scheduler:
image: mcuadros/ofelia
restart: always
volumes:
- "/var/run/docker.sock:/var/run/docker.sock:ro"
- "./etc/scheduler:/etc/ofelia"
Dockerfile:
FROM postgres:latest
COPY init.sql /docker-entrypoint-initdb.d/
In your docker-compose.yml file, you say in part:
postgres-passport:
image: postgres:latest
volumes:
- "./postgres-passport:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
So you're running the stock postgres image (the Dockerfile you show never gets called); and whatever's in your local postgres-passport directory, starting from the same directory as the docker-compose.yml file, appears as the /docker-entrypoint-initdb.d directory inside the container.
In the directory tree you show, if you
cd deploy/latest
docker-compose up
then ./postgres-passport is expected to be in the deploy/latest tree. Since it's not actually there, Docker doesn't complain, but just creates it as an empty directory.
If you're just trying to inject this configuration file, using a volume is a reasonable way to do it; you don't need the Dockerfile. However, you need to give the correct path to the directory you're trying to mount into the container.
postgres-passport:
image: postgres:latest
volumes:
# vvv Change this path vvv
- "../../postgres-passport/docker-entrypoint-initdb.d:/docker-entrypoint-initdb.d"
- "./data/postgres_passport_data:/var/lib/postgresql/data"
If you want to use that Dockerfile instead, you need to tell Docker Compose to build the custom image instead of using the standard one. Since you're building the init file into the image, you don't also need a bind-mount of the same file.
postgres-passport:
build: ../../postgres-passport
volumes:
# Only this one
- "./data/postgres_passport_data:/var/lib/postgresql/data"
(You will also need to adjust the COPY statement to match the path layout; just copying the entire local docker-entrypoint-initdb.d directory into the image is probably the most straightforward thing.)
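As a quick sanity check (a sketch using the example paths above; adjust them to your tree), you can verify that the bind-mount source exists and is non-empty before starting Compose, since Docker silently creates a missing source directory as an empty one:

```shell
# Docker silently creates a missing bind-mount source as an empty dir,
# so verify it exists (and is non-empty) before "docker-compose up".
src=../../postgres-passport/docker-entrypoint-initdb.d
if [ -d "$src" ] && [ -n "$(ls -A "$src" 2>/dev/null)" ]; then
  echo "ok: $src has init scripts"
else
  echo "warning: $src missing or empty; the container will see an empty dir"
fi
```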
Docker is using the variables from my .env file and I keep getting the error:
Unhandled rejection SequelizeConnectionError: role "eli" does not exist
I would like for Postgres to get the variables from the environment set in docker-compose.yml
.env
POSTGRES_PORT=5432
POSTGRES_DB=elitest4
POSTGRES_USER=eli
POSTGRES_PASSWORD=
docker-compose.yml
# docker-compose.yml
version: "3"
services:
app:
build: .
depends_on:
- database
ports:
- 8000:8000
environment:
- POSTGRES_HOST=database
env_file:
- .env
database:
image: postgres:9.6.8-alpine
environment: # Postgres should get these variables, not the ones in the .env file (those are for localhost)
POSTGRES_PASSWORD: password
POSTGRES_USER: user
POSTGRES_DB: db
volumes:
- pgdata:/var/lib/postgresql/pgdata
ports:
- 8002:5432
env_file:
- .env
react_client:
build:
context: ./client
dockerfile: Dockerfile
image: react_client
working_dir: /home/node/app/client
volumes:
- ./:/home/node/app
ports:
- 8001:8001
env_file:
- ./client/.env
volumes:
pgdata:
TL;DR
Try updating the docker-compose service database environment section as follows:
environment:
POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
POSTGRES_USER: ${POSTGRES_USER:-user}
POSTGRES_DB: ${POSTGRES_DB:-db}
Also, if you would like to see how each bound variable ultimately evaluates, you can run the following command to print the "effective" compose file:
$ docker-compose config
This command will print out what your compose file looks like with all variable substitution replaced with its evaluation.
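The ${POSTGRES_PASSWORD:-password} form above follows POSIX shell parameter-expansion semantics, which is also how Compose documents its substitution: the default is used when the variable is unset or empty. A quick demonstration in plain shell:

```shell
# ":-" falls back when the variable is unset OR empty;
# "-" (without the colon) falls back only when it is unset.
unset POSTGRES_USER
echo "${POSTGRES_USER:-user}"    # prints: user
POSTGRES_USER=
echo "${POSTGRES_USER:-user}"    # prints: user (empty counts as unset with :-)
echo "${POSTGRES_USER-user}"     # prints an empty line (set-but-empty)
POSTGRES_USER=eli
echo "${POSTGRES_USER:-user}"    # prints: eli
```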
See the Environment variables in Compose and the Variable substitution sections in the documentation.
Pay close attention to this section:
When you set the same environment variable in multiple files, here’s
the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
In the example below, we set the same environment variable on an Environment file, and the Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
api:
image: 'node:6-alpine'
env_file:
- ./Docker/api/api.env
environment:
- NODE_ENV=production
When you run the container, the environment variable defined in the Compose file takes precedence.
$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any ARG or ENV setting in a Dockerfile evaluates only if there is no Docker Compose entry for environment or env_file.
Specifics for NodeJS containers
If your package.json has a start script like NODE_ENV=test node server.js, that setting overrules any value in your docker-compose.yml file.
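That override happens because a VAR=value prefix on a command line applies to that one command and beats the inherited environment. The same mechanism can be shown in plain shell (no Node required):

```shell
# The environment docker-compose sets is inherited by the process...
export NODE_ENV=production
sh -c 'echo "$NODE_ENV"'                # prints: production
# ...but a per-command assignment (as in "NODE_ENV=test node server.js")
# wins for that command:
NODE_ENV=test sh -c 'echo "$NODE_ENV"'  # prints: test
```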
I have installed MongoDB on Ubuntu (Linux) through the Docker image.
I have set the parameters in the YAML file as shown below to enable authentication for MongoDB. But when I bring the server up with the command docker-compose up -d, I get an error like
"Unsupported config option for 'db' service: 'security'".
How can I resolve the issue?
docker-compose.yaml:
db:
image: mongo:2.6.4
command: ["mongod", "--smallfiles"]
expose:
- "27017"
ports:
- "27017:27017"
security:
keyFile: "/home/ubuntu/dockerlab/keyfile.key"
authorization: "enabled"
security and authorization are not keywords in the docker-compose YAML file, so take them out of there.
If that key file needs copying into the container, you could put something like:
FROM mongo:2.6.4
# COPY sources must be relative to the build context, so place
# keyfile.key next to this Dockerfile first
COPY keyfile.key /tmp/
ENV AUTH yes
in a Dockerfile.
And change in the docker-compose.yml file:
image: mongo:2.6.4
into
build: .
and the command value into
command: ["mongod", "--smallfiles", "--keyFile", "/tmp/keyfile.key"]
Alternatively, you can use a volumes entry in your docker-compose.yml to map keyfile.key into the container and, instead of the ENV in the Dockerfile, add "--auth" to the command sequence. Then you can keep using the image stanza and leave out the Dockerfile altogether:
db:
image: mongo:2.6.4
command: ["mongod", "--smallfiles", "--auth", "--keyFile", "/tmp/keyfile.key"]
expose:
- "27017"
ports:
- "27017:27017"
volumes:
- /home/ubuntu/dockerlab/keyfile.key:/tmp/keyfile.key
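One more gotcha, per MongoDB's documentation: mongod refuses to start with a keyfile whose permissions are too open, so the file should be readable by the owner only. A sketch for generating a suitable keyfile:

```shell
# Generate a random keyfile and restrict its permissions;
# mongod rejects keyfiles that are group- or world-readable.
openssl rand -base64 741 > keyfile.key
chmod 600 keyfile.key
ls -l keyfile.key    # should show -rw-------
```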
Based on Docker's Postgres documentation, I can put any *.sql file inside /docker-entrypoint-initdb.d and have it run automatically.
I have init.sql that contains CREATE DATABASE ronda;
In my docker-compose.yaml, I have
web:
restart: always
build: ./web
expose:
- "8000"
links:
- postgres:postgres
volumes:
- /usr/src/app/static
env_file: .env
command: /usr/local/bin/gunicorn ronda.wsgi:application -w 2 -b :8000
nginx:
restart: always
build: ./nginx/
ports:
- "80:80"
volumes:
- /www/static
volumes_from:
- web
links:
- web:web
postgres:
restart: always
build: ./postgres/
volumes_from:
- data
ports:
- "5432:5432"
data:
restart: always
build: ./postgres/
volumes:
- /var/lib/postgresql
command: "true"
and my postgres Dockerfile,
FROM library/postgres
RUN mkdir -p /docker-entrypoint-initdb.d
COPY init.sql /docker-entrypoint-initdb.d/
Running docker-compose build and docker-compose up work fine, but the database ronda is not created.
This is how I use Postgres in my projects and preload the database.
file: docker-compose.yml
db:
container_name: db_service
build:
context: .
dockerfile: ./Dockerfile.postgres
ports:
- "5432:5432"
volumes:
- /var/lib/postgresql/data/
This Dockerfile loads the file named pg_dump.backup (binary dump) or pg_dump.sql (plain-text dump) if it exists in the root folder of the project.
file: Dockerfile.postgres
FROM postgres:9.6-alpine
ENV POSTGRES_DB DatabaseName
COPY pg_dump.backup .
COPY pg_dump.sql .
# alpine's /bin/sh has no [[ ]]; "|| true" keeps the build going if the file is absent
RUN [ -e pg_dump.backup ] && pg_restore pg_dump.backup > pg_dump.sql || true
# Preload database on init
RUN [ -e pg_dump.sql ] && cp pg_dump.sql /docker-entrypoint-initdb.d/ || true
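A note on those RUN lines: a bare `test && command` chain exits non-zero when the test fails, which aborts a Docker build, and bash's `[[ ]]` is not available in the plain sh that alpine images use for RUN. Guarding with `|| true` and using POSIX `[ ]` avoids both problems; the pattern outside Docker:

```shell
# "test && cmd" exits non-zero when the test fails, which would abort
# a Docker build; "|| true" makes the step succeed either way.
cd "$(mktemp -d)"
[ -e pg_dump.backup ] && echo "found" || true   # file absent: prints nothing
echo "exit status: $?"                          # prints: exit status: 0
touch pg_dump.sql
[ -e pg_dump.sql ] && echo "found"              # prints: found
```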
If you need to retry loading the dump, you can remove the current database container with:
docker-compose rm db
Then run docker-compose up again to reload the database.
If your initialisation requirement is just to create the ronda database, you can simply use the POSTGRES_DB environment variable as described in the documentation.
The bit of your docker-compose.yml file for the postgres service would then be:
postgres:
restart: always
build: ./postgres/
volumes_from:
- data
ports:
- "5432:5432"
environment:
POSTGRES_DB: ronda
On a side note, do not use restart: always for your data container as this container does not run any service (just the true command). Doing this you are basically telling Docker to run the true command in an infinite loop.
Had the same problem with postgres 11.
Some points that helped me:
run:
docker-compose rm
docker-compose build
docker-compose up
The obvious one: don't run Compose in detached mode. You want to see the logs.
After adding the docker-compose rm step to the mix, it finally worked.