How to specify multiple "env_files" for a docker compose service? - docker-compose

Currently I have set up my service like the following:

version: '3'
services:
  gateway:
    container_name: gateway
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./services/gateway:/services/gateway
      - ./packages:/packages
      - ./node_modules:/node_modules
    env_file: .env
    command: yarn run ts-node-dev services/gateway --colors
    ports:
      - 3000:3000
So I have specified one env_file. But now I want to pass multiple .env files.
Unfortunately, the following is not possible:

env_files:
  - .env.secrets
  - .env.development

Is there any way to pass multiple .env files to one service in docker-compose?

You can specify multiple env files on the env_file option (note: without an s).
For instance:

version: '3'
services:
  hello:
    image: alpine
    entrypoint: ["sh"]
    command: ["-c", "env"]
    env_file:
      - a.env
      - b.env

Note that, complementary to @conradkleineespel's answer, if an environment variable is defined in multiple .env files listed under env_file, the value found in the last file in the list overwrites all prior values (tested with a docker-compose file using version: '3.7').
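To make the precedence concrete, here is a minimal sketch using the hello service above (the contents of a.env and b.env, and the variables GREETING and SOURCE, are invented for this example):

$ cat a.env
GREETING=hello
SOURCE=a

$ cat b.env
GREETING=bonjour

$ docker-compose up hello
...
hello_1  | GREETING=bonjour
hello_1  | SOURCE=a

GREETING comes from b.env because it is listed last, while SOURCE survives from a.env since b.env does not redefine it.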

Related

Docker-compose use env_file variables in the command

I'm trying to set up a docker-compose file for a Selenium grid that is able to change the nodes based on an environment variable.
selenoida:
  image: "aerokube/selenoid:latest"
  container_name: selenoid
  network_mode: bridge
  ports:
    - "0.0.0.0:4444:4444"
  volumes:
    - ".:/etc/selenoid"
    - "./target:/output"
    - "/var/run/docker.sock:/var/run/docker.sock"
    - "./target:/opt/selenoid/video"
  environment:
    - "OVERRIDE_VIDEO_OUTPUT_DIR=$PWD/target"
  env_file:
    - variables.env
  command: ["-limit", "$NODES", "-enable-file-upload", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video"]
But it looks like the command is not able to use the variables. I tested with ${NODES} and $NODES.
Any ideas on how to use environment variables set from a file in commands?
The command in exec form will not invoke the shell to expand the variables.
You can try using the command in shell form, or just overwrite the entrypoint with:

entrypoint: ["sh", "-c", "/usr/bin/selenoid -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -enable-file-upload -limit $${NODES}"]
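Why the doubled dollar sign? A minimal sketch that isolates just that part (the demo service and alpine image are chosen only for illustration; variables.env is the file from the question and is assumed to define NODES):

version: '3'
services:
  demo:
    image: alpine
    env_file:
      - variables.env
    # $$ keeps Compose from interpolating; the container's shell expands NODES
    entrypoint: ["sh", "-c", "echo limit is $${NODES}"]

With a single $, Compose substitutes NODES itself, from the shell running docker-compose or from .env, before the container ever starts. With $$, the literal ${NODES} reaches the shell inside the container, which expands it from the variables loaded via env_file.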

Run docker postgresql image twice, first exits

I made 2 yml files. When I run docker-compose -f postgresql.yml up it starts OK, and then when I run docker-compose -f postgresql2.yml up the first one exits with code 0.
Is it even possible to run the same image twice?
My main purpose is to run the same web app source twice, each instance with its own db, on the same server PC.
1 web app source, 2 instances, each with its own db, on one server (maybe that's a clearer definition).
Maybe there is a better approach and I'm doing and thinking about everything the wrong way.
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5432:5432
and this one, with no big difference, postgresql2.yml:
# This configuration is intended for development purpose, it's **your** responsibility to harden it for production
version: '3.8'
services:
  freshhipster-postgresql:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
Just use another service name, freshhipster-postgresql2, in postgresql2.yml. Both files live in the same directory and therefore share the same Compose project name, so a service with the same name is treated as the same service and the second up replaces the first container:
version: '3.8'
services:
  freshhipster-postgresql2:
    image: postgres:13.1
    container_name: postgres2
    volumes:
      - pgdata:/var/lib/postgresql/data_vol2/
    environment:
      - POSTGRES_USER=FreshHipster
      - POSTGRES_PASSWORD=
      - POSTGRES_HOST_AUTH_METHOD=trust
    # If you want to expose these ports outside your dev PC,
    # remove the "127.0.0.1:" prefix
    ports:
      - 5433:5432
volumes:
  pgdata:
    external: true
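With distinct service names the two stacks can run side by side. A minimal sketch of starting and checking them (file names taken from the question):

$ docker-compose -f postgresql.yml up -d
$ docker-compose -f postgresql2.yml up -d
$ docker ps --format '{{.Names}}: {{.Ports}}'

The second up may warn about "orphan containers" because both files share the same project name, but it leaves the first container running; one postgres listens on 5432 and the other on 5433.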

Get docker-compose up to only run certain containers

So I currently can use "docker-compose up test", which only runs my database and my testing scripts. I want to be able to use something like "docker-compose up app" that runs everything besides testing, so I'm not running unnecessary containers. I'm not sure if there's a way, but that's what I was wondering. If possible I'd appreciate some links to projects that already do that and I can figure out the rest. Basically: can I run only certain containers with a single command, without running the others?
YAML:

version: '3'
services:
  webapp:
    build: ./literate-app
    command: nodemon -e vue,js,css start.js
    depends_on:
      - postgres
    links:
      - postgres
    environment:
      - DB_HOST=postgres
    ports:
      - "3000:3000"
    networks:
      - literate-net
  server:
    build: ./readability-server
    command: nodemon -L --inspect=0.0.0.0:5555 server.js
    networks:
      - literate-net
  redis_db:
    image: redis:alpine
    networks:
      - literate-net
  postgres:
    restart: 'always'
    #image: 'bitnami/postgresql:latest'
    volumes:
      - /bitnami
    ports:
      - "5432:5432"
    networks:
      - literate-net
    environment:
      - "FILLA_DB_USER=my_user"
      - "FILLA_DB_PASSWORD=password123"
      - "FILLA_DB_DATABASE=my_database"
      - "POSTGRES_PASSWORD=password123"
    build: './database-creation'
  test:
    image: node:latest
    build: ./test
    working_dir: /literate-app/test
    volumes:
      - .:/literate-app
    command:
      npm run mocha
    networks:
      - literate-net
    depends_on:
      - postgres
    environment:
      - DB_HOST=postgres
networks:
  literate-net:
    driver: bridge
I can run docker-compose up test, which only runs the postgres service. Though I'd like to be able to just run my app without having to run my testing container.
Edit
Thanks to @ideam for the link.
I was able to create an additional YAML file for just testing.
For those that don't want to look it up: simply create a new YAML file like so:

docker-compose.dev.yml

Replace dev with whatever you like, besides override, which docker-compose up picks up automatically unless otherwise specified. A sketch of such a file follows below.
To run the new file, simply call:

docker-compose -f docker-compose.dev.yml up

The -f is a flag for selecting a certain file to run. You can run multiple files to have different environments set up.
Appreciate the help.
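As a minimal sketch, such a dev file (contents assumed here, reusing two services from the question's compose file) lists only the app-related services, so the test container is never started:

# docker-compose.dev.yml -- hypothetical dev-only file
version: '3'
services:
  webapp:
    build: ./literate-app
    ports:
      - "3000:3000"
    networks:
      - literate-net
  postgres:
    build: ./database-creation
    networks:
      - literate-net
networks:
  literate-net:
    driver: bridge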
docker-compose up <service_name> will start only the service you have specified and its dependencies (those specified in the depends_on option).
You may also name multiple services in the docker-compose up command:

docker-compose up <service_name> <service_name>

Note: what does "start the service and its dependencies" mean?
Usually your production services (containers) are attached to each other via the depends_on chain, therefore you can start only the last container of the chain. For example, take the following compose file:
version: '3.7'
services:
  frontend:
    image: efrat19/vuejs
    ports:
      - "80:8080"
    depends_on:
      - backend
  backend:
    image: nginx:alpine
    depends_on:
      - fpm
  fpm:
    image: php:7.2
  testing:
    image: hze∂ƒxhbd
    depends_on:
      - frontend
All the services are chained through the depends_on option, while the testing container sits below the frontend. So when you hit docker-compose up frontend, Docker will run fpm first, then the backend, then the frontend, and it will ignore the testing container, which is not required for running the frontend.
Starting with docker-compose 1.28.0, the new service profiles are made just for that! With profiles you can mark services to be started only in specific profiles:

services:
  webapp:
    # ...
  server:
    # ...
  redis_db:
    # ...
  postgres:
    # ...
  test:
    profiles: ["test"]
    # ...

docker-compose up                  # start only your app services
docker-compose --profile test up   # start app and test services
docker-compose run test            # run the test service
Maybe you want to share your docker-compose.yml for a better answer than this.
For reusing docker-compose configurations, have a look at https://docs.docker.com/compose/extends/#example-use-case, which explains how to combine multiple configuration files to reuse configs for different use cases (test, production, etc.), as sketched below.
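As a brief sketch of that mechanism (the overlay file's contents are assumed here; the file names follow the convention from the linked docs):

# docker-compose.test.yml -- hypothetical overlay carrying only test-specific bits
version: '3'
services:
  test:
    build: ./test
    command: npm run mocha

docker-compose -f docker-compose.yml -f docker-compose.test.yml up

Compose merges the files left to right, with later files adding to or overriding earlier ones, so the base file stays untouched and the overlay carries only what the test run needs.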

Docker is not getting Postgres environment variables

Docker is using the variables from my .env file and I keep getting the error:

Unhandled rejection SequelizeConnectionError: role "eli" does not exist

I would like for Postgres to get the variables from the environment set in docker-compose.yml.
.env
POSTGRES_PORT=5432
POSTGRES_DB=elitest4
POSTGRES_USER=eli
POSTGRES_PASSWORD=
docker-compose.yml

# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
    env_file:
      - .env
  database:
    image: postgres:9.6.8-alpine
    environment: # postgres should be getting these variables, not the variables set in the env file, that's for localhost
      POSTGRES_PASSWORD: password
      POSTGRES_USER: user
      POSTGRES_DB: db
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
    env_file:
      - .env
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
TL;DR
Try updating the docker-compose service database environment section as follows:

environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
  POSTGRES_USER: ${POSTGRES_USER:-user}
  POSTGRES_DB: ${POSTGRES_DB:-db}

Also notice that if you would like to see how each bound variable ultimately evaluates in Compose, you can run the following command to see the "effective" compose file:

$ docker-compose config

This command will print out what your compose file looks like with all variable substitution replaced with its evaluation.
See the Environment variables in Compose and the Variable substitution sections in the documentation.
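As a quick sketch of what that output could look like with the substitution defaults above and the .env from the question in place:

$ docker-compose config
...
  database:
    environment:
      POSTGRES_DB: elitest4
      POSTGRES_PASSWORD: password
      POSTGRES_USER: eli
...

Note that POSTGRES_PASSWORD falls back to the default, because the :- form substitutes the default when the variable is unset or empty, and it is set to an empty value in the question's .env.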
Pay close attention to this section:

When you set the same environment variable in multiple files, here's the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined

In the example below, we set the same environment variable in an environment file and in the Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test

$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - ./Docker/api/api.env
    environment:
      - NODE_ENV=production

When you run the container, the environment variable defined in the Compose file takes precedence:

$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any ARG or ENV setting in a Dockerfile takes effect only if there is no Docker Compose entry for environment or env_file.

Specifics for Node.js containers
If you have a package.json entry for scripts.start like NODE_ENV=test node server.js, then this overrules any setting in your docker-compose.yml file.
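A minimal sketch of that package.json case (file contents assumed for illustration):

{
  "name": "api",
  "scripts": {
    "start": "NODE_ENV=test node server.js"
  }
}

If the container's command runs npm start, the inline NODE_ENV=test is set by the shell at process launch, so it wins over whatever environment or env_file injected, since it is applied last.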

Docker PostgreSQL - Scripts in /docker-entrypoint-initdb.d don't run

So, I have a docker-compose project with this structure:

DockerDev
- docker-compose.yaml
- d-php
  - Dockerfile
  - scripts-apache
- d-postgresql
  - Dockerfile
  - scripts
    - dev_data_setup.sql
- logs
- pgdata
- www

PHP, Redis and ElasticSearch are OK. But PostgreSQL doesn't run dev_data_setup.sql with any of the different solutions for /docker-entrypoint-initdb.d that I found (volume, ADD, COPY, etc.). I tried running an sh script as well and nothing.
Could you look at this docker-compose and Dockerfile and help me? Thanks.
Dockerfile:
FROM postgres:latest
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d
docker-compose.yaml:

version: '2'
services:
  php:
    build: ./d-php/
    hostname: www.domain.com
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
      - ./d-php/scripts-apache2/apache2.conf:/etc/apache2/apache2.conf
      - ./d-php/scripts-apache2/web.conf:/etc/apache2/sites-enabled/web.conf
      - ./d-php/scripts-apache2/webservice.conf:/etc/apache2/sites-enabled/webservice.conf
      - ./logs:/var/log/apache2
    links:
      - db
      - redis
      - elasticsearch
  db:
    build: ./d-postgresql/
    volumes:
      - ./pgdata:/pgdata
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/pgdata
  redis:
    image: redis:latest
  elasticsearch:
    image: elasticsearch:2.4.1
So I found the problem.
First: my SQL script was trying to recreate the postgres user, so the dockedev_db container exited.
Second: I needed to remove everything related to db for docker-compose to run the script again (note that the postgres image only runs /docker-entrypoint-initdb.d scripts when initializing an empty data directory, so the old data had to go).
Thanks for your help.
Your problem is caused by the way you use ADD in your Dockerfile:
FROM postgres:latest
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d
This creates a file called /docker-entrypoint-initdb.d with the content of the dev_data_setup.sql file. What you want is to treat /docker-entrypoint-initdb.d as a directory.
You should change your ADD command to one of the following:
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d/
The trailing slash will treat the dest parameter as a directory. Or use
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d/dev_data_setup.sql
Which will specifically spell out the file name.
Reference: https://docs.docker.com/engine/reference/builder/#/add
If <dest> does not end with a trailing slash, it will be considered a regular file and the contents of <src> will be written at <dest>.
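An easy way to check which of the two you got is to list the path inside the built image (the image tag here is assumed; the build path is the one from the question):

$ docker build -t d-postgresql ./d-postgresql/
$ docker run --rm --entrypoint ls d-postgresql -la /docker-entrypoint-initdb.d

With the broken ADD, the output shows a single regular file (a leading - in the mode bits) instead of a directory listing containing dev_data_setup.sql.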