Embed heredoc in docker-compose yaml file

I would like to embed a HEREDOC in a docker-compose yaml file.
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint: |
/bin/sh << HERE
echo hello
echo goodbye
HERE
When I attempt to run this, I get the following error.
docker-compose -f heredoc.yml run --rm test-cli
Creating network "dspace-compose-v2_default" with the default driver
/bin/sh: 0: Can't open <<

Contrary to the docs, it seems the arguments given to entrypoint aren't passed to /bin/sh -c but are instead parsed and converted into an array of arguments (argv).
In fact, if you run docker inspect on the example you provided, you can see that your command line was converted into an array:
"Entrypoint": [
"/bin/sh",
"<<",
"HERE",
"echo",
"hello",
"echo",
"goodbye",
"HERE"
],
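For reference, a sketch of pulling just the entrypoint out of docker inspect (the container name is hypothetical here; --format is a standard docker CLI flag):
docker inspect --format '{{json .Config.Entrypoint}}' test-cli
["/bin/sh","<<","HERE","echo","hello","echo","goodbye","HERE"]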
Since the array of arguments isn't interpreted by a shell, you can't use things like pipes and heredocs.
Instead, you can use the features that YAML gives you for multi-line input and provide an array of arguments:
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint:
- /bin/bash
- '-c'
- |
echo hello
echo goodbye
If you really need a heredoc, you could do:
version: "3.7"
services:
test-cli:
image: ubuntu
entrypoint:
- /bin/bash
- '-c'
- |
/bin/sh << HERE
echo hello
echo goodbye
HERE
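With either variant, the run command from the question should now print both lines. A sketch of the expected session, assuming the file is saved as heredoc.yml:
docker-compose -f heredoc.yml run --rm test-cli
hello
goodbye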

Related

How to set random string as env variable in docker compose

I tried the following things but I get ERROR: Invalid interpolation format for "environment" option in service:
version: '3.9'
services:
  service-1:
    image: library/python:3.8
    environment:
      - VAR1=$(head -c 12 /dev/random | base64)
      - VAR2={{ randAlphaNum 16 | b64enc }}
    command: >
      bash -c "export VAR3=$(uuidgen)"
Create a .env file with the needed variables:
VAR1=test
VAR2=16
And forward them into the container in docker-compose.yml:
version: '3.9'
services:
  service-1:
    image: library/python:3.8
    environment:
      - VAR1=${VAR1}
      - VAR2=${VAR2}
    command: >
      echo $VAR1 # works
NB: You cannot run commands in a .env file.
If you really want to do it, check this answer
.env
VAR1=$(head -c 12 /dev/random | base64)
docker-compose.yaml
services:
  service1:
    image: ubuntu:latest
    container_name: sample_container1
    environment:
      - local_var1=$VAR1 # doesn't work
      - local_var2=${VAR1} # doesn't work
      - local_var4=$$VAR1 # doesn't work
    command: >
      bash -c " export local_var3=$VAR1 # doesn't work
      && echo $local_var3 # doesn't work
      && echo $local_var4 # doesn't work
      && echo $VAR1 # works
      && echo VAR1=$VAR1 # works"
# each time we call $VAR1 we will get a new random string.
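If the goal is a string that is random on each container start, a hedged alternative is to generate it inside the container command, escaping the $ as $$ so compose does not try to interpolate it at parse time:
command: >
  bash -c 'VAR3=$$(head -c 12 /dev/random | base64) && echo "VAR3=$$VAR3"'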

Enable logging in postgresql using docker-compose

I am using Postgres as a service in my docker-compose file. I want logging to a log file to be enabled when I do docker-compose up. One way to enable logging is by editing the postgresql.conf file, but that's not useful in this case. Another way is to do something like this
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
but this isn't useful either because I am not starting it from an image but as a docker-compose service. Any idea how I can run docker-compose up with logging enabled in Postgres?
Here is the docker-compose file to run the -c command in compose:
version: '3.6'
services:
  postgresql:
    image: postgres:11.5
    container_name: platops_postgres
    volumes: ['platops-data:/var/lib/postgresql/data/', 'postgress-logs:/var/log/postgresql/']
    command: ["postgres", "-c", "logging_collector=on", "-c", "log_directory=/logs", "-c", "log_filename=postgresql.log", "-c", "log_statement=all"]
    environment:
      - POSTGRES_USER=postgresql
      - POSTGRES_PASSWORD=postgresql
    ports: ['5432:5432']
volumes:
  platops-data: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/data/
  postgress-logs: {}
  # uncomment and set the path of the folder to maintain persistence
  # data-postgresql:
  #   driver: local
  #   driver_opts:
  #     o: bind
  #     type: none
  #     device: /path/of/db/postgres/logs/
For more information, you can check the container's documentation.
Just like your command with docker run:
docker run --name postgresql -itd --restart always sameersbn/postgresql:10-2 -c logging_collector=on
you add the -c logging_collector=on argument to the ENTRYPOINT ["/sbin/entrypoint.sh"] to enable logging (see the Dockerfile).
In the docker-compose.yml file, use command: like this:
version: "3.7"
services:
database:
image: sameersbn/postgresql:10-2
command: "-c logging_collector=on"
# ......
When the PostgreSQL container runs, it will execute the command /sbin/entrypoint.sh -c logging_collector=on.
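To verify that the collector is actually on, a quick sketch (assuming the service is named database as above and a superuser named postgres):
docker-compose exec database psql -U postgres -c 'SHOW logging_collector;'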

How to check if docker-compose's "command" was executed

I've added
command: bash -c './wait-for-it.sh -t 4 -s php:9000 -- bash run-ssh-on-php.sh'
to my docker-compose.yml
php:
  build: docker/php
  user: "$LOCAL_USER_ID:$LOCAL_GROUP_ID"
  depends_on:
    - mysql
    - rabbitmq
    - mail
    - phantomjs
    - data
  volumes_from:
    - data
  ports:
    - "9000:9000"
  environment:
    - SYMFONY_ENV
  command: bash -c './wait-for-it.sh -t 4 -s php:9000 -- bash run-ssh-on-php.sh'
and it seems that it wasn't executed at all. How can I check whether it was? I tried adding touch somefile but nothing was created.
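One way to check (a sketch; the container name depends on your compose project) is to inspect what the container was actually started with and read its logs:
docker inspect --format '{{json .Config.Cmd}}' myproject_php_1
docker-compose logs php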

docker stack: setting environment variable from secrets

I was trying to set the password from secrets but it wasn't picking it up.
Docker Server version is 17.06.2-ce. I used the below command to set the secret:
echo "abcd" | docker secret create password -
My docker-compose yml file looks like this:
version: '3.1'
...
    build:
      context: ./test
      dockerfile: Dockerfile
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
I also have root secrets tag:
secrets:
  password:
    external: true
When I hardcode the password in environment it works, but when I try it via secrets it doesn't get picked up. I tried changing the compose version to 3.2, but with no luck. Any pointers are highly appreciated!
To elaborate on the original accepted answer, just change your docker-compose.yml file so that it contains this as your entrypoint:
version: "3.7"
services:
server:
image: alpine:latest
secrets:
- test
entrypoint: [ '/bin/sh', '-c', 'export TEST=$$(cat /var/run/secrets/test) ; source /entrypoint.sh' ]
secrets:
test:
external: true
That way you don't need any additional files! (Note the $$: Compose uses $$ to pass a literal $ through to the shell, so the command substitution runs inside the container.)
You need to modify docker-compose to read the secret env file from /run/secrets. If you want to set environment variables via bash, you can override your docker-compose.yaml file as shown below.
You can save the following code as entrypoint_overwrited.sh:
#!/bin/bash
# get your env files and export the env vars
export $(egrep -v '^#' /run/secrets/* | xargs)
# if you need some specific file, where password is the secret name:
# export $(egrep -v '^#' /run/secrets/password | xargs)
# call the dockerfile's entrypoint
source /docker-entrypoint.sh
In your docker-compose.yaml, override the dockerfile and entrypoint keys:
version: '3.1'
#...
    build:
      context: ./test
      dockerfile: Dockerfile
    entrypoint: source /data/entrypoint_overwrited.sh
    tmpfs:
      - /run/secrets
    volumes:
      - /path/your/data/where/is/the/script/:/data/
    environment:
      user_name: admin
      eureka_password: /run/secrets/password
    secrets:
      - password
Using the snippets above, the environment variables user_name and eureka_password will be overwritten if your secret env file defines the same env vars; the same happens if you define an env_file in your service.
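For this to work, the secret content itself has to consist of KEY=VALUE lines. A sketch with a hypothetical value:
printf 'eureka_password=abcd' | docker secret create password -
After the export line runs, the container's environment then contains eureka_password=abcd.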
I found this neat extension to Alejandro's approach: make your custom entrypoint load _FILE variables into regular env vars:
environment:
  MYSQL_PASSWORD_FILE: /run/secrets/my_password_secret
entrypoint: /entrypoint.sh
and then in your entrypoint.sh:
#!/usr/bin/env bash
set -e

file_env() {
  local var="$1"
  local fileVar="${var}_FILE"
  local def="${2:-}"
  if [ "${!var:-}" ] && [ "${!fileVar:-}" ]; then
    echo >&2 "error: both $var and $fileVar are set (but are exclusive)"
    exit 1
  fi
  local val="$def"
  if [ "${!var:-}" ]; then
    val="${!var}"
  elif [ "${!fileVar:-}" ]; then
    val="$(< "${!fileVar}")"
  fi
  export "$var"="$val"
  unset "$fileVar"
}

file_env "MYSQL_PASSWORD"
Then, when the upstream image adds support for _FILE variables, you can drop the custom entrypoint without making changes to your compose file.
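For completeness, a sketch of creating the secret that the MYSQL_PASSWORD_FILE example reads (hypothetical value; docker secret requires swarm mode):
printf 's3cr3t' | docker secret create my_password_secret -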
One option is to map your secret directly before you run your command:
entrypoint: "/bin/sh -c 'eureka_password=`cat /run/secrets/password` && echo $eureka_password'"
For example, a MySQL password for Node:
version: "3.7"
services:
app:
image: xxx
entrypoint: "/bin/sh -c 'MYSQL_PASSWORD=`cat /run/secrets/sql-pass` npm run start'"
secrets:
- sql-pass
secrets:
sql-pass:
external: true
This happens because you are initialising eureka_password with a file path instead of its value.

Docker wait for postgresql to be running

I am using PostgreSQL with Django in my project. I've got them in different containers, and the problem is that I need to wait for postgres before running django. At the moment I am doing it with sleep 5 in the command.sh file for the django container. I also found that netcat can do the trick, but I would prefer a way without additional packages. curl and wget can't do this because they do not support the postgres protocol.
Is there a way to do it?
I've spent some hours investigating this problem and I got a solution.
Docker's depends_on only considers service startup, not readiness. As soon as the db is started, service-app tries to connect to it, but the db is not yet ready to receive connections. So you can check the db health status in the app service and wait for the connection. Here is my solution; it solved my problem. :)
Important: I'm using docker-compose version 2.1.
version: '2.1'
services:
  my-app:
    build: .
    command: su -c "python manage.py runserver 0.0.0.0:8000"
    ports:
      - "8000:8000"
    depends_on:
      db:
        condition: service_healthy
    links:
      - db
    volumes:
      - .:/app_directory
  db:
    image: postgres:10.5
    ports:
      - "5432:5432"
    volumes:
      - database:/var/lib/postgresql/data
    healthcheck:
      test: ["CMD-SHELL", "pg_isready -U postgres"]
      interval: 5s
      timeout: 5s
      retries: 5
volumes:
  database:
In this case it's not necessary to create a .sh file.
This will successfully wait for Postgres to start (specifically the command: line below, which relies on bash's built-in /dev/tcp redirection to probe the port, hence the bash -c wrapper). Just replace npm start with whatever command you'd like to happen after Postgres has started.
services:
  practice_docker:
    image: dockerhubusername/practice_docker
    ports:
      - 80:3000
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; npm start'
    depends_on:
      - db
    environment:
      - DATABASE_URL=postgres://postgres:password@db:5432/practicedocker
      - PORT=3000
  db:
    image: postgres
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=password
      - POSTGRES_DB=practicedocker
If you have psql you could simply add the following code to your .sh file:
RETRIES=5

until psql -h $PG_HOST -U $PG_USER -d $PG_DATABASE -c "select 1" > /dev/null 2>&1 || [ $RETRIES -eq 0 ]; do
  echo "Waiting for postgres server, $((RETRIES--)) remaining attempts..."
  sleep 1
done
The simplest solution is a short bash script:
while ! nc -z HOST PORT; do sleep 1; done;
./run-smth-else;
The problem with your solution, tiziano, is that curl is not installed by default and I wanted to avoid installing additional stuff. Anyway, I did what bereal said. Here is the script if anyone needs it.
import socket
import time
import os

port = int(os.environ["DB_PORT"])  # 5432

while True:
    try:
        # create a fresh socket for each attempt; a socket object
        # cannot reliably be reused after a failed connect()
        s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
        s.connect(('myproject-db', port))
        s.close()
        break
    except socket.error:
        time.sleep(0.1)
In your Dockerfile add wait and change your start command to use it:
ADD https://github.com/ufoscout/docker-compose-wait/releases/download/2.7.3/wait /wait
RUN chmod +x /wait
CMD /wait && npm start
Then, in your docker-compose.yml add a WAIT_HOSTS environment variable for your api service:
services:
  api:
    depends_on:
      - postgres
    environment:
      WAIT_HOSTS: postgres:5432
  postgres:
    image: postgres
    ports:
      - "5432:5432"
This has the advantage that it supports waiting for multiple services:
environment:
  WAIT_HOSTS: postgres:5432, mysql:3306, mongo:27017
For more details, please read their documentation.
wait-for-it is a small wrapper script which you can include in your application's image to poll a given host and port until it's accepting TCP connections.
It can be cloned in the Dockerfile with the command below:
RUN git clone https://github.com/vishnubob/wait-for-it.git
docker-compose.yml
version: "2"
services:
web:
build: .
ports:
- "80:8000"
depends_on:
- "db"
command: ["./wait-for-it/wait-for-it.sh", "db:5432", "--", "npm", "start"]
db:
image: postgres
Why not curl?
Something like this:
while ! curl http://$POSTGRES_PORT_5432_TCP_ADDR:$POSTGRES_PORT_5432_TCP_PORT/ 2>&1 | grep '52'
do
  sleep 1
done
It works for me. (The grep matches curl's "(52) Empty reply from server" error, which appears once Postgres accepts the TCP connection, since Postgres doesn't speak HTTP.)
I have managed to solve my issue by adding a health check to the docker-compose definition.
db:
  image: postgres:latest
  ports:
    - 5432:5432
  healthcheck:
    test: "pg_isready --username=postgres && psql --username=postgres --list"
    timeout: 10s
    retries: 20
then in the dependent service you can check the health status:
my-service:
  image: myApp:latest
  depends_on:
    kafka:
      condition: service_started
    db:
      condition: service_healthy
source: https://docs.docker.com/compose/compose-file/compose-file-v2/#healthcheck
If the backend application itself has a PostgreSQL client, you can use the pg_isready command in an until loop. For example, suppose we have the following project directory structure,
.
├── backend
│   └── Dockerfile
└── docker-compose.yml
with a docker-compose.yml
version: "3"
services:
postgres:
image: postgres
backend:
build: ./backend
and a backend/Dockerfile
FROM alpine
RUN apk update && apk add postgresql-client
CMD until pg_isready --username=postgres --host=postgres; do sleep 1; done \
&& psql --username=postgres --host=postgres --list
where the 'actual' command is just a psql --list for illustration. Then running docker-compose build and docker-compose up shows the result of the psql --list command appearing only after pg_isready has logged postgres:5432 - accepting connections, as desired.
By contrast, I have found that the nc -z approach does not work consistently. For example, if I replace the backend/Dockerfile with
FROM alpine
RUN apk update && apk add postgresql-client
CMD until nc -z postgres 5432; do echo "Waiting for Postgres..." && sleep 1; done \
&& psql --username=postgres --host=postgres --list
then docker-compose build followed by docker-compose up gives me a run where the psql command throws a FATAL error that the database system is starting up.
In short, using an until pg_isready loop (as also recommended here) is the preferable approach IMO.
There are a couple of solutions, as other answers mentioned.
But don't make it complicated; just let it fail fast, combined with restart: on-failure. Your service will open a connection to the db and may fail the first time. Just let it fail. Docker will restart your service until it's green. Keep your service simple and business-focused.
version: '3.7'
services:
  postgresdb:
    hostname: postgresdb
    image: postgres:12.2
    ports:
      - "5432:5432"
    environment:
      - POSTGRES_USER=user
      - POSTGRES_PASSWORD=secret
      - POSTGRES_DB=Ceo
  migrate:
    image: hanh/migration
    links:
      - postgresdb
    environment:
      - DATA_SOURCE=postgres://user:secret@postgresdb:5432/Ceo
    command: migrate sql --yes
    restart: on-failure # will restart until it succeeds
Check out restart policies.
None of the other solutions worked for me, except for the following:
version: '3.8'
services:
  postgres:
    image: postgres:latest
    environment:
      - POSTGRES_DB=mydbname
      - POSTGRES_USER=myusername
      - POSTGRES_PASSWORD=mypassword
    healthcheck:
      test: [ "CMD", "pg_isready", "-q", "-d", "mydbname", "-U", "myusername" ]
      interval: 5s
      timeout: 5s
      retries: 5
  otherservice:
    image: otherserviceimage
    depends_on:
      postgres:
        condition: service_healthy
Thanks to this thread: https://github.com/peter-evans/docker-compose-healthcheck/issues/16
Sleeping until pg_isready returns true is unfortunately not always reliable. If your postgres container has at least one initdb script specified, postgres restarts during its bootstrap procedure after it is first started, so it might not be ready yet even though pg_isready has already returned true.
What you can do instead is wait until the docker logs for that instance contain the string PostgreSQL init process complete; ready for start up., and only then proceed with the pg_isready check.
Example:
start_postgres() {
  docker-compose up -d --no-recreate postgres
}

wait_for_postgres() {
  until docker-compose logs | grep -q "PostgreSQL init process complete; ready for start up." \
      && docker-compose exec -T postgres sh -c "PGPASSWORD=\$POSTGRES_PASSWORD PGUSER=\$POSTGRES_USER pg_isready --dbname=\$POSTGRES_DB" > /dev/null 2>&1; do
    printf "\rWaiting for postgres container to be available ... "
    sleep 1
  done
  printf "\rWaiting for postgres container to be available ... done\n"
}

start_postgres
wait_for_postgres
You can use the manage.py command "check" to check if the database is available (and wait 2 seconds if not, and check again).
For instance, if you do this in your command.sh file before running the migration, Django has a valid DB connection while running the migration command:
...
echo "Waiting for db.."
python manage.py check --database default > /dev/null 2> /dev/null
until [ $? -eq 0 ]; do
  sleep 2
  python manage.py check --database default > /dev/null 2> /dev/null
done
echo "Connected."

# Migrate the last database changes
python manage.py migrate
...
PS: I'm not a shell expert, please suggest improvements.
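One possible improvement (a sketch with the same behaviour): let until drive the check directly instead of testing $? by hand:
echo "Waiting for db.."
until python manage.py check --database default > /dev/null 2>&1; do
  sleep 2
done
echo "Connected."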
#!/bin/sh

POSTGRES_VERSION=9.6.11
CONTAINER_NAME=my-postgres-container

# start the postgres container
docker run --rm \
  --name $CONTAINER_NAME \
  -e POSTGRES_PASSWORD=docker \
  -d \
  -p 5432:5432 \
  postgres:$POSTGRES_VERSION

# wait until postgres is ready to accept connections
until docker run \
  --rm \
  --link $CONTAINER_NAME:pg \
  postgres:$POSTGRES_VERSION pg_isready \
  -U postgres \
  -h pg; do sleep 1; done
An example for a Node.js and Postgres API.
#!/bin/bash
# entrypoint.dev.sh

echo "Waiting for postgres to get up and running..."

while ! nc -z postgres_container 5432; do
  # where postgres_container is the host; in my case, it is a Docker container.
  # You can use localhost, for example, in case your database is running locally.
  echo "waiting for postgres listening..."
  sleep 0.1
done

echo "PostgreSQL started"

yarn db:migrate
yarn dev
# Dockerfile
FROM node:12.16.2-alpine
ENV NODE_ENV="development"
RUN mkdir -p /app
WORKDIR /app
COPY ./package.json ./yarn.lock ./
RUN yarn install
COPY . .
CMD ["/bin/sh", "./entrypoint.dev.sh"]
If you want to run it with a single-line command, you can just connect to the container and check if postgres is running:
docker exec -it $DB_NAME bash -c "\
  until psql -h $HOST -U $USER -d $DB_NAME -c 'select 1' > /dev/null 2>&1; \
  do \
    echo 'Waiting for postgres server....'; \
    sleep 1; \
  done; \
  exit; \
"
echo "DB Connected !!"
Inspired by @tiziano's answer and the lack of nc or pg_isready, it seems that in a recent docker python image (python:3.9 here) curl is installed by default, and I have the following check running in my entrypoint.sh:
postgres_ready() {
  $(which curl) http://$DBHOST:$DBPORT/ 2>&1 | grep '52'
}

until postgres_ready; do
  >&2 echo 'Waiting for PostgreSQL to become available...'
  sleep 1
done
>&2 echo 'PostgreSQL is available.'
I tried a lot of methods: a Dockerfile, docker-compose YAML, a bash script. Only the last method helped me: a Makefile.
docker-compose up --build -d postgres
sleep 2
docker-compose up --build -d app