How to set random string as env variable in docker compose - docker-compose

I tried the following things, but I get ERROR: Invalid interpolation format for "environment" option in service:
version: '3.9'
services:
  service-1:
    image: library/python:3.8
    environment:
      - VAR1=$(head -c 12 /dev/random | base64)
      - VAR2={{ randAlphaNum 16 | b64enc }}
    command: >
      bash -c "export VAR3=$(uuidgen)"

Create a .env file with needed variables.
VAR1=test
VAR2=16
And forward them into the container via docker-compose.yml:
version: '3.9'
services:
  service-1:
    image: library/python:3.8
    environment:
      - VAR1=${VAR1}
      - VAR2=${VAR2}
    command: >
      echo $VAR1 # works
NB: you cannot run a command in a .env file.
If you really want to do it, check this answer
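A minimal sketch of that idea (my assumption, not part of the answers here: the value is generated once by the host shell per docker-compose invocation, and Compose simply interpolates ${VAR1} from that environment):
VAR1=$(head -c 12 /dev/random | base64) docker-compose up
Every container started by that invocation sees the same value; a new value is only produced the next time you run the command.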

.env
VAR1=$(head -c 12 /dev/random | base64)
docker-compose.yaml
services:
  service1:
    image: ubuntu:latest
    container_name: sample_container1
    environment:
      - local_var1=$VAR1 # doesn't work
      - local_var2=${VAR1} # doesn't work
      - local_var4=$$VAR1 # doesn't work
    command: >
      bash -c " export local_var3=$VAR1 # doesn't work
      && echo $local_var3 # doesn't work
      && echo $local_var4 # doesn't work
      && echo $VAR1 # works
      && echo VAR1=$VAR1 # works"
    # each time we call $VAR1 we will get a new random string.
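If the goal is a value generated inside the container instead of on the host, a sketch that sidesteps interpolation entirely is to double the $ so Compose passes a literal $ through to the container's shell (assumed service layout, mirroring the example above):
services:
  service1:
    image: ubuntu:latest
    command: >
      bash -c "VAR1=$$(head -c 12 /dev/random | base64) && echo VAR1=$$VAR1"
Here the command substitution runs at container start, so each container gets its own random string.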

Related

Docker Compose common YAML add service name variable

I would like to send all my service logs to Graylog, but don't want to repeat my logging config.
How can I pass my service name into this YAML merge block?
version: "3"
x-common: &common
restart: always
logging:
driver: "gelf"
options:
gelf-address: "udp://localhost:12201"
tag: "<service_name>"
services:
test:
image: debian:wheezy
command: /bin/sh -c "while true; do date && echo "hello"; sleep 1; done"
<<: *common
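One approach (not from the post above, and assuming the standard Docker log-tag templating that the gelf driver supports) is to skip interpolation and let the logging driver expand the container name at runtime:
x-common: &common
  restart: always
  logging:
    driver: "gelf"
    options:
      gelf-address: "udp://localhost:12201"
      tag: "{{.Name}}"   # resolved by Docker to the container name, which includes the service name
Because the template is evaluated by the Docker daemon per container, the same merge block can be reused by every service.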

Enable logging in postgresql using docker-compose

I am using Postgres as a service in my docker-compose file. I want logging to a log file to be enabled when I do docker-compose up. One way to enable logging is by editing the postgresql.conf file, but that's not useful in this case. One other way is to do something like this:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
but this isn't useful either, because I am not starting it from an image directly but as a docker-compose service. Any idea how I can run docker-compose up with logging enabled in Postgres?
Here is the docker-compose equivalent, passing the -c options through the command key:
version: '3.6'
services:
  postgresql:
    image: postgres:11.5
    container_name: platops_postgres
    volumes: ['platops-data:/var/lib/postgresql/data/', 'postgress-logs:/var/log/postgresql/']
    command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
    environment:
      - POSTGRES_USER=postgresql
      - POSTGRES_PASSWORD=postgresql
    ports: ['5432:5432']
volumes:
  platops-data: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/data/
  postgress-logs: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/logs/
For more information, you can check the postgres container's documentation.
Just like your command with docker run:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
you add the -c logging_collector=on argument, which is passed to the image's ENTRYPOINT ["/sbin/entrypoint.sh"] to enable logging (see its Dockerfile).
In the docker-compose.yml file, use command: like this:
version: "3.7"
services:
database:
image: sameersbn/postgresql:10-2
command: "-c logging_collector=on"
# ......
When the PostgreSQL container starts, it will run /sbin/entrypoint.sh -c logging_collector=on.
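To confirm the setting took effect (assuming the first compose file above, with service and user both named postgresql, and that the container is up), something like this should report on:
docker-compose exec postgresql psql -U postgresql -c "SHOW logging_collector;"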

Embed heredoc in docker-compose yaml file

I would like to embed a HEREDOC in a docker-compose yaml file.
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint: |
/bin/sh << HERE
echo hello
echo goodbye
HERE
When I attempt to run this, I get the following error.
docker-compose -f heredoc.yml run --rm test-cli
Creating network "dspace-compose-v2_default" with the default driver
/bin/sh: 0: Can't open <<
Contrary to the docs, it seems the arguments given to entrypoint aren't passed to '/bin/sh -c' but are instead parsed and converted to an array of arguments (argv).
In fact if you run docker inspect on the example you provided you can see that your command line was converted into an array:
"Entrypoint": [
"/bin/sh",
"<<",
"HERE",
"echo",
"hello",
"echo",
"goodbye",
"HERE"
],
Since the array of arguments isn't interpreted by a shell you can't use stuff like pipes and HEREDOC.
Instead you can use the features YAML gives you for multi-line input and provide an array of arguments:
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint:
- /bin/bash
- '-c'
- |
echo hello
echo goodbye
If you really need HEREDOC you could do:
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint:
- /bin/bash
- '-c'
- |
/bin/sh << HERE
echo hello
echo goodbye
HERE
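Running the corrected file should then execute both statements inside the container, for example:
$ docker-compose -f heredoc.yml run --rm test-cli
hello
goodbye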

Changing Environment in docker-compose up

I'm new to docker.
Here is my simple docker-compose file.
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=production
To run this, I usually use docker-compose up command.
Can I change the NODE_ENV variable to anything while running docker-compose up?
For example:
docker-compose up -x NODE_ENV=staging
Use docker-compose run; it manages individual services rather than the complete stack and is useful for one-off commands.
$ docker-compose run -d -e NODE_ENV=staging web
Ref - https://docs.docker.com/compose/reference/run/
OR
The best way I can see as of now is to use the shell and export the environment variable before doing docker-compose up, as below:
$ export NODE_ENV=staging && docker-compose up -d
where your docker-compose.yml will look something like this:
version: '3.4'
services:
  web:
    image: 'myimage:latest'
    build: .
    ports:
      - "5265:5265"
    environment:
      - NODE_ENV=${NODE_ENV}

docker stack: setting environment variable from secrets

I was trying to set the password from secrets but it wasn't picking it up.
The Docker Server version is 17.06.2-ce. I used the below command to create the secret:
echo "abcd" | docker secret create password -
My docker compose yml file looks like this
version: '3.1'
...
    build:
      context: ./test
      dockerfile: Dockerfile
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
I also have the root-level secrets key:
secrets:
  password:
    external: true
When I hardcode the password in environment it works, but when I go via secrets it isn't picked up. I tried changing the compose version to 3.2, but with no luck. Any pointers are highly appreciated!
To elaborate on the original accepted answer, just change your docker-compose.yml file so that it contains this as your entrypoint:
version: "3.7"
services:
server:
image: alpine:latest
secrets:
- test
entrypoint: [ '/bin/sh', '-c', 'export TEST=$$(cat /var/run/secrets/test) ; source /entrypoint.sh' ]
secrets:
test:
external: true
That way you don't need any additional files! (The doubled $$ stops Compose from interpolating the expression itself, so the $(cat ...) substitution runs inside the container at startup.)
You need to modify the compose file so that the secret is read from /run/secrets. If you want to set the environment variables via bash, you can override your docker-compose.yaml file as shown below.
You can save the following code as entrypoint_overwrited.sh:
# read the secret env files and export their variables
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need one specific file, where password is the secret name:
# export $(egrep -v '^#' /run/secrets/password | xargs)
# finally call the Dockerfile's entrypoint
source /docker-entrypoint.sh
In your docker-compose.yaml, override the dockerfile and entrypoint keys:
version: '3.1'
#...
    build:
      context: ./test
      dockerfile: Dockerfile
    entrypoint: source /data/entrypoint_overwrited.sh
    tmpfs:
      - /run/secrets
    volumes:
      - /path/your/data/where/is/the/script/:/data/
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
Using the snippets above, the environment variables user_name and eureka_password will be overwritten if your secret env file defines the same variables; the same happens if you define an env_file in your service.
I found this neat extension to Alejandro's approach: make your custom entrypoint load the *_FILE variables into their plain counterparts:
environment:
  MYSQL_PASSWORD_FILE: /run/secrets/my_password_secret
entrypoint: /entrypoint.sh
and then in your entrypoint.sh:
#!/usr/bin/env bash
set -e

file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ "${!var:-}" ]; then
    val="${!var}"
  elif [ "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"
  fi
  export "$var"="$val"
  unset "$fileVar"
}

file_env "MYSQL_PASSWORD"
Then, when the upstream image adds support for _FILE variables, you can drop the custom entrypoint without changing your compose file.
One option is to map your secret directly before you run your command:
entrypoint: "/bin/sh -c 'eureka_password=`cat /run/secrets/password` && echo $eureka_password'"
For example, a MySQL password for a Node app:
version: "3.7"
services:
app:
image: xxx
entrypoint: "/bin/sh -c 'MYSQL_PASSWORD=`cat /run/secrets/sql-pass` npm run start'"
secrets:
- sql-pass
secrets:
sql-pass:
external: true
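To try this end-to-end (assuming an already-initialised swarm, since external secrets require one; the secret value and stack name below are placeholders):
echo "s3cret" | docker secret create sql-pass -
docker stack deploy -c docker-compose.yml myapp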
This is because you are initialising eureka_password with the path to the secret file instead of its value; the environment section does not read secret files for you.