I am using the https://github.com/andrewmclagan/react-env approach to pass environment variables to the browser. I am able to do this using a .env file, but not through Docker Compose.
I have a docker-compose file, and I need to set the API URL for the Next.js project:
node_nextjs:
  image: "node16_build_custom"
  ports:
    - 3000:3000
  environment:
    - DS_PATH=/home/simha/app/torsha_datasets
    - REACT_APP_API_HOST=http://localhost:8000
  volumes:
    - type: bind
      source: ../CODE/nextjs_frontend
      target: /home/simha/app
  command:
    - sh
    - -c
    - |
      id
      yarn react-env
      pm2-runtime npm -- start
  stdin_open: true # Add this line into your service
  tty: true # Add this line into your service
  networks:
    - nginx_network
I am setting REACT_APP_API_HOST=http://localhost:8000 in the docker-compose file, but yarn react-env does not create the variable inside public/__ENV.js. I only see
window.__ENV = {};
How do I pass the Docker Compose environment variable into the .env file?
I tried
.env
REACT_APP_API_HOST=$REACT_APP_API_HOST
but this does not work.
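One thing that may help narrow this down (not part of the original post): check whether the variable actually reaches the container that runs yarn react-env, since react-env builds __ENV.js from the process environment at the time it runs. A minimal check while the service is up:
# hypothetical check: list the REACT_APP_* variables inside the running service
docker compose exec node_nextjs sh -c 'env | grep ^REACT_APP_'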
Related
Following is my docker-compose.yml file
version: "3.7"
services:
test-build:
image: docker-hardened-ol8-openjdk17
command: tail -f /dev/null
restart: always
volumes:
- "C:/checkouts:/opt/checkouts"
ports:
- 9001:9001
environment:
- JAVA_17_HOME=${JAVA_HOME:?err}
The docker-hardened-ol8-openjdk17 image has Java 17 and the JAVA_HOME environment variable. I need to set the JAVA_17_HOME environment variable to the same value as JAVA_HOME from the image. But when I run docker compose up, it takes the JAVA_HOME value set on my machine (the host).
I read the https://docs.docker.com/compose/environment-variables/ and https://docs.docker.com/compose/reference/envvars/ pages. Even these pages mention that:
Compose uses the variable values from the shell environment in which docker-compose is run.
Is there a way I can specify docker-compose to use the image's environment variable instead of the host machine's?
I could work around this using a Dockerfile along with the docker-compose.yml config file in the same directory. I moved the image part to the Dockerfile, and declared all the environment variables in the Dockerfile.
docker-compose.yml -
version: "3.7"
services:
test-build:
build:
context: .
command: tail -f /dev/null
restart: always
volumes:
- "C:/checkouts:/opt/checkouts"
ports:
- 9001:9001
Dockerfile -
FROM docker-hardened-ol8-openjdk17
ENV JAVA_17_HOME=$JAVA_HOME
ENV BUILD_HOME=/opt/checkouts/
WORKDIR $BUILD_HOME
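If rebuilding the image is undesirable, another option (my sketch, not part of the original answer) is to defer the expansion to the container's shell. The doubled $$ stops Compose from substituting the host's JAVA_HOME, so the image's own value is used:
version: "3.7"
services:
  test-build:
    image: docker-hardened-ol8-openjdk17
    # the container's sh expands $JAVA_HOME from the image's ENV at runtime;
    # note the variable is only visible to processes started from this command
    command: sh -c 'export JAVA_17_HOME="$$JAVA_HOME"; exec tail -f /dev/null'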
I'm trying to use env vars to define the host and credentials for the Traefik dashboard, but Traefik doesn't see them. All of the env vars are present when I verify them inside the docker container.
Everything works well with hardcoded values.
I attempted to use both approaches:
.env file
Declare the environment variables in the docker-compose file (environment section)
All the other services in the docker-compose file can successfully use the vars from the .env file.
What am I doing incorrectly?
docker-compose.yml
version: '3.6'
services:
  reverse-proxy:
    image: traefik:v2.6
    ports:
      - 80:80
      - 443:443
    env_file:
      - "./.env"
    deploy:
      placement:
        constraints: [node.role == manager]
      update_config:
        failure_action: rollback
      labels:
        # Enable traefik for this specific service
        - "traefik.enable=true"
        # global redirect to https
        - "traefik.http.routers.http-catchall.rule=hostregexp(`{host:.+}`)"
        - "traefik.http.routers.http-catchall.entrypoints=http"
        - "traefik.http.routers.http-catchall.middlewares=https-redirect"
        - "traefik.http.middlewares.https-redirect.redirectscheme.scheme=https"
        - "traefik.http.middlewares.https-redirect.redirectscheme.permanent=true"
        # Make Traefik use this domain over HTTPS
        - "traefik.http.routers.traefik-https.rule=Host(`${TRFK_HOST}`)"
        # Allow connections to the traefik api for dashboard support
        - "traefik.http.routers.traefik-https.service=api@internal"
        - "traefik.http.services.traefik-svc.loadbalancer.server.port=9999"
        # Use the Let's Encrypt resolver
        - "traefik.http.routers.traefik-https.tls=true"
        - "traefik.http.routers.traefik-https.tls.certresolver=le"
        # Use the traefik_net network that is declared below
        - "traefik.docker.network=traefik_net"
        # Use basic auth for the traefik dashboard
        - "traefik.http.middlewares.traefik-auth.basicauth.users=${TRFK_USER}:${TRFK_PSWD}"
        - "traefik.http.routers.traefik-https.middlewares=traefik-auth"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - traefik-public-certificates:/certificates
    command:
      - --providers.docker
      - --providers.docker.swarmMode=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.http.address=:80
      - --entrypoints.https.address=:443
      - --certificatesresolvers.le.acme.email=ex@ex.com
      - --certificatesresolvers.le.acme.storage=/certificates/acme.json
      - --certificatesresolvers.le.acme.httpchallenge=true
      - --certificatesresolvers.le.acme.httpchallenge.entrypoint=http
      - --accesslog
      - --log
      - --api
    networks:
      - traefik_net
volumes:
  traefik-public-certificates:
networks:
  traefik_net:
    external: true
.env file
# traefik dashboard auth config
TRFK_USER=user
TRFK_PASSWD=$apr1$ZPapA6iQ$7OzhPqocYY.lotTdGgnoM.
TRFK_HOST=traefik.example.com
The only way that currently works is:
env $(cat .env | grep ^[A-Z] | xargs) docker stack deploy -c docker-stack.yml stack
Is there any other way to make it work?
You seem to be doing it correctly.
env_file: and environment: sections are used for injecting environment variables into the created container.
In this case, however, the environment variables are being expanded directly in the labels of the stack yml, so they are not being passed through to the container; they need to be part of the environment of the docker stack deploy command itself.
As long as you have the call to docker stack deploy wrapped up in a Makefile or bash script of some kind, so that preparing the environment is automated, this is the correct and necessary way. For example:
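A minimal wrapper, assuming the file names from the question (and note that values containing $, such as the htpasswd hash, need single quotes in .env to survive being sourced):
#!/bin/sh
# deploy.sh - export every variable from .env into this shell, then deploy
set -a          # auto-export everything defined while this is on
. ./.env
set +a
exec docker stack deploy -c docker-stack.yml stack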
Currently I have set up my service like the following.
version: '3'
services:
  gateway:
    container_name: gateway
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./services/gateway:/services/gateway
      - ./packages:/packages
      - ./node_modules:/node_modules
    env_file: .env
    command: yarn run ts-node-dev services/gateway --colors
    ports:
      - 3000:3000
So I have specified one env_file. But now I want to pass multiple .env files.
Unfortunately, the following is not possible:
env_files:
  - .env.secrets
  - .env.development
Is there any way to pass multiple .env files to one service in docker-compose?
You can specify multiple env files on the env_file option (without s).
For instance:
version: '3'
services:
  hello:
    image: alpine
    entrypoint: ["sh"]
    command: ["-c", "env"]
    env_file:
      - a.env
      - b.env
Note that, complementary to @conradkleineespel's answer, if an environment variable is defined in multiple .env files listed under env_file, the value found in the last file in the list overwrites all prior ones (tested with a docker-compose file using version: '3.7').
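A quick way to see that ordering in action, with throwaway files whose names match the example above:
printf 'GREETING=hello\n'   > a.env
printf 'GREETING=bonjour\n' > b.env
docker-compose run --rm hello | grep GREETING   # prints GREETING=bonjour, the last file wins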
Docker is using the variables from my .env file and I keep getting the error:
Unhandled rejection SequelizeConnectionError: role "eli" does not exist
I would like for Postgres to get the variables from the environment set in docker-compose.yml
.env
POSTGRES_PORT=5432
POSTGRES_DB=elitest4
POSTGRES_USER=eli
POSTGRES_PASSWORD=
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
    env_file:
      - .env
  database:
    image: postgres:9.6.8-alpine
    environment: # Postgres should get these variables, not the ones in the .env file that's meant for localhost
      POSTGRES_PASSWORD: password
      POSTGRES_USER: user
      POSTGRES_DB: db
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
    env_file:
      - .env
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env
volumes:
  pgdata:
TL;DR
Try updating the docker-compose service database environment section as follows:
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
  POSTGRES_USER: ${POSTGRES_USER:-user}
  POSTGRES_DB: ${POSTGRES_DB:-db}
Also notice that, if you would like to see how each bound variable ultimately evaluates in Compose, you can run the following command to see the "effective" compose file:
$ docker-compose config
This command will print out what your compose file looks like with all variable substitution replaced with its evaluation.
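For instance, with the .env from the question (POSTGRES_USER=eli, POSTGRES_DB=elitest4, and an empty POSTGRES_PASSWORD, which falls back to the :- default), the database service should render roughly like this:
database:
  environment:
    POSTGRES_DB: elitest4
    POSTGRES_PASSWORD: password
    POSTGRES_USER: eli
  image: postgres:9.6.8-alpine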
See the Environment variables in Compose and the Variable substitution sections in the documentation.
Pay close attention to this section:
When you set the same environment variable in multiple files, here's the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
In the example below, we set the same environment variable on an Environment file, and the Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - ./Docker/api/api.env
    environment:
      - NODE_ENV=production
When you run the container, the environment variable defined in the Compose file takes precedence.
$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any ARG or ENV setting in a Dockerfile evaluates only if there is no Docker Compose entry for environment or env_file.
Specifics for NodeJS containers
If your package.json has a start script like NODE_ENV=test node server.js, then this overrules any setting in your docker-compose.yml file; see the sketch below.
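A hedged illustration of that pitfall, using the api service from the example above and a hypothetical start script:
# "scripts": { "start": "NODE_ENV=test node server.js" }  <- this inline assignment wins
# "scripts": { "start": "node server.js" }                <- lets the compose value through
docker-compose exec api node -e "console.log(process.env.NODE_ENV)"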
I was trying to set the password from secrets, but it wasn't being picked up.
The Docker Server version is 17.06.2-ce. I used the command below to create the secret:
echo "abcd" | docker secret create password -
My docker-compose.yml file looks like this:
version: '3.1'
...
  build:
    context: ./test
    dockerfile: Dockerfile
  environment:
    user_name: admin
    eureka_password: /run/secrets/password
  secrets:
    - password
I also have the top-level secrets section:
secrets:
  password:
    external: true
When I hardcode the password in environment it works, but when I go via secrets it doesn't get picked up. I tried changing the compose version to 3.2, but with no luck. Any pointers are highly appreciated!
To elaborate on the original accepted answer, just change your docker-compose.yml file so that it contains this as your entrypoint:
version: "3.7"
services:
server:
image: alpine:latest
secrets:
- test
entrypoint: [ '/bin/sh', '-c', 'export TEST=$$(cat /var/run/secrets/test) ; source /entrypoint.sh' ]
secrets:
test:
external: true
That way you don't need any additional files!
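Hypothetical usage of the stack above (the secret value and the stack name are made up):
printf 'some-secret-value' | docker secret create test -
docker stack deploy -c docker-compose.yml demo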
You need to modify your docker-compose file so the container reads the secret env file from /run/secrets. If you want to set the environment variables via bash, you can override your docker-compose.yaml file as shown below.
You can save the following code as entrypoint_overwrited.sh:
# get your envs files and export envars
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need some specific file, where password is the secret name
# export $(egrep -v '^#' /run/secrets/password| xargs)
# call the dockerfile's entrypoint
source /docker-entrypoint.sh
In your docker-compose.yaml, override the dockerfile and entrypoint keys:
version: '3.1'
#...
  build:
    context: ./test
    dockerfile: Dockerfile
  entrypoint: source /data/entrypoint_overwrited.sh
  tmpfs:
    - /run/secrets
  volumes:
    - /path/your/data/where/is/the/script/:/data/
  environment:
    user_name: admin
    eureka_password: /run/secrets/password
  secrets:
    - password
Using the snippets above, the environment variables user_name and eureka_password will be overwritten whenever your secret env file defines the same variables; the same happens if you add an env_file to the service.
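One detail worth spelling out (my reading of the snippet, not stated explicitly above): the export line only yields variables if the secret file itself contains KEY=value lines, e.g.:
# create the secret as a tiny env file rather than a bare value
echo "eureka_password=abcd" | docker secret create password -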
I found this neat extension to Alejandro's approach: make your custom entrypoint load the *_FILE variables into plain environment variables:
environment:
  MYSQL_PASSWORD_FILE: /run/secrets/my_password_secret
entrypoint: /entrypoint.sh
and then in your entrypoint.sh:
#!/usr/bin/env bash
set -e

file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ "${!var:-}" ]; then
    val="${!var}"
  elif [ "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"
  fi
  export "$var"="$val"
  unset "$fileVar"
}

file_env "MYSQL_PASSWORD"
Then, when the upstream image adds support for _FILE variables, you can drop the custom entrypoint without making changes to your compose file.
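For completeness, such a custom entrypoint usually finishes by handing control back to the image's original startup (the entrypoint name here is an assumption; it differs per image):
# after the file_env calls, run the base image's own entrypoint with the original arguments
exec docker-entrypoint.sh "$@"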
One option is to map your secret directly before you run your command:
entrypoint: "/bin/sh -c 'eureka_password=`cat /run/secrets/password` && echo $eureka_password'"
For example, a MySQL password for Node:
version: "3.7"
services:
app:
image: xxx
entrypoint: "/bin/sh -c 'MYSQL_PASSWORD=`cat /run/secrets/sql-pass` npm run start'"
secrets:
- sql-pass
secrets:
sql-pass:
external: true
Because you are initialising eureka_password with the file path instead of the value.
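A quick way to see this (container name assumed): the variable holds the path, while the secret's content lives in the mounted file:
docker exec my_container sh -c 'echo "$eureka_password"'   # prints /run/secrets/password
docker exec my_container cat /run/secrets/password         # prints the actual secret value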