Docker-compose: use env_file variables in the command

I'm trying to set up a docker-compose file for a Selenium grid that can change the number of nodes based on an environment variable.
selenoida:
image: "aerokube/selenoid:latest"
container_name: selenoid
network_mode: bridge
ports:
- "0.0.0.0:4444:4444"
volumes:
- ".:/etc/selenoid"
- "./target:/output"
- "/var/run/docker.sock:/var/run/docker.sock"
- "./target:/opt/selenoid/video"
environment:
- "OVERRIDE_VIDEO_OUTPUT_DIR=$PWD/target"
env_file:
- variables.env
command: ["-limit", "$NODES","-enable-file-upload", "-conf", "/etc/selenoid/browsers.json", "-video-output-dir", "/opt/selenoid/video"]
But it looks like the command is not able to use the variables; I tested with both ${NODES} and NODES.
Any ideas on how to use environment variables set from a file in commands?

The command in exec form will not invoke the shell to expand the variables.
You can try using the command in shell form, or just override the entrypoint with:
entrypoint: ["sh", "-c", "/usr/bin/selenoid -listen :4444 -conf /etc/selenoid/browsers.json -video-output-dir /opt/selenoid/video/ -enable-file-upload -limit $${NODES}"]

Related

How do I run a bash script in a docker container after it starts?

I'm trying to run a bash script after a Postgres container starts which 1) creates a new table within the Postgres DB, and 2) runs a copy command that dumps the contents of a csv file into the newly created table.
Currently, I'm specifying the execution of the script within my docker-compose.yml file using the "command" argument, but I find that it doesn't allow the Postgres container to successfully start. I receive the following information from the log:
Is the server running locally and accepting
connections on Unix domain socket "/var/run/postgresql/.s.PGSQL.5432"?
When I remove the "command" argument everything is fine. Here is what my docker-compose.yml file looks like now:
# docker-compose.yml
version: '3'
services:
  web:
    build: .
    command: bash -c 'while !</dev/tcp/db/5432; do sleep 1; done; uvicorn app.main:app --host 0.0.0.0'
    volumes:
      - .:/app
    expose: # new
      - 8000
    environment:
      - DATABASE_URL=postgresql://fastapi_traefik:fastapi_traefik#db:5432/fastapi_traefik
    depends_on:
      - db
    labels: # new
      - "traefik.enable=true"
      - "traefik.http.routers.fastapi.rule=Host(`fastapi.localhost`)"
  db:
    image: postgres:13-alpine
    volumes:
      - postgres_data:/var/lib/postgresql/data/
      - "/Users/theComputerPerson/:/tmp"
    expose:
      - 5432
    environment:
      - POSTGRES_USER=fastapi_traefik
      - POSTGRES_PASSWORD=fastapi_traefik
      - POSTGRES_DB=fastapi_traefik
    command: /bin/bash -c "/tmp/newtable.sh"
  traefik: # new
    image: traefik:v2.2
    ports:
      - 8008:80
      - 8081:8080
    volumes:
      - "./traefik.dev.toml:/etc/traefik/traefik.toml"
      - "/var/run/docker.sock:/var/run/docker.sock:ro"

volumes:
  postgres_data:
It may be worth noting that I'm trying to customize some of the aspects of this FastAPI project, and to turn your attention to the development files and not the production files. Please let me know if I can provide any additional information in the comments.
You are overriding the default container image startup command.
According to the official PostgreSQL container image page, you can extend the initialization by adding your sh scripts (or even sql files) to the /docker-entrypoint-initdb.d directory.
See https://hub.docker.com/_/postgres.
The caveat of this approach is that these scripts only run when the database is initialized for the first time (i.e. against an empty data directory), so on an existing volume your script would not be executed.
Another approach is to override the default container image command, appending yours in bash style: postgres; /bin/bash -c "/tmp/newtable.sh";
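For example, a minimal sketch of the init-script approach applied to the compose file from the question (assuming newtable.sh sits next to docker-compose.yml; again, scripts in /docker-entrypoint-initdb.d only run the first time the data directory is initialized, so an already-populated postgres_data volume has to be recreated for them to fire):
db:
  image: postgres:13-alpine
  volumes:
    - postgres_data:/var/lib/postgresql/data/
    - ./newtable.sh:/docker-entrypoint-initdb.d/newtable.sh  # replaces the command: override
  environment:
    - POSTGRES_USER=fastapi_traefik
    - POSTGRES_PASSWORD=fastapi_traefik
    - POSTGRES_DB=fastapi_traefik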

How to specify multiple "env_files" for a docker compose service?

Currently I have set up my service like the following.
version: '3'
services:
  gateway:
    container_name: gateway
    image: node:lts-alpine
    working_dir: /
    volumes:
      - ./package.json:/package.json
      - ./tsconfig.json:/tsconfig.json
      - ./services/gateway:/services/gateway
      - ./packages:/packages
      - ./node_modules:/node_modules
    env_file: .env
    command: yarn run ts-node-dev services/gateway --colors
    ports:
      - 3000:3000
So I have specified one env_file. But now I want to pass multiple .env files.
Unfortunately, the following is not possible:
env_files:
  - .env.secrets
  - .env.development
Is there any way to pass multiple .env files to one service in docker-compose?
You can specify multiple env files on the env_file option (without s).
For instance:
version: '3'
services:
  hello:
    image: alpine
    entrypoint: ["sh"]
    command: ["-c", "env"]
    env_file:
      - a.env
      - b.env
Note that, complementary to @conradkleineespel's answer, if an environment variable is defined in multiple .env files listed under env_file, the value found in the last file in the list overwrites all prior ones (tested in a docker-compose file with version: '3.7').
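To illustrate that override order with the service above (file contents invented for the demo):
# a.env
GREETING=hello

# b.env
GREETING=bonjour

# docker-compose run --rm hello
# ...
# GREETING=bonjour   <- b.env is listed last, so its value wins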

Why is testcafe-docker.sh ignoring app-init-delay parameter?

I'm trying to build a docker-compose setup that includes my app as one service and testcafe as another. Both containers are built and initialized, but I can't get testcafe to wait until my app is available before it starts running the tests.
I've tried passing the --app-init-delay 30000 as a parameter to testcafe-docker.sh, but it ignores it.
entrypoint: ["/opt/testcafe/docker/testcafe-docker.sh", "'chromium --no-sandbox'", "--app-init-delay 30000", "e2e"]
I also tried to use the script https://github.com/Eficode/wait-for in the entrypoint or command before calling testcafe-docker.sh. Used in the command it seems to conflict with the entrypoint; used in the entrypoint I do get testcafe to wait, but instead of running the tests it ends with 'Operation timed out'
entrypoint: ['/script/wait-for', 'app:8080 -- "/opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e"']
(Seems that all the parameters of wait-for need to be within the same entry of the array for it to work as an entrypoint script)
This is my docker-compose file
version: "2"
services:
  app:
    container_name: app
    build: ./dist/docker/
    ports:
      - "8080:8080"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
      - ./dist/docker/dependency:/dependency
  testcafe:
    container_name: testcafe
    image: testcafe/testcafe
    depends_on:
      - app
    volumes:
      - ./test/e2e:/e2e
      - ./package.json:/package.json
      - ./package-lock.json:/package-lock.json
      - ./script:/script
    entrypoint: ['/script/wait-for', 'app:8080 -- "/opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e"']
    # entrypoint: ["/opt/testcafe/docker/testcafe-docker.sh", "'chromium --no-sandbox'", "--app-init-delay 30000", "e2e"]
It seems that I'm very close to solving the issue with wait-for, but somehow my entrypoint syntax is incorrect
You can do it simpler:
testcafe:
  container_name: testcafe
  image: testcafe/testcafe
  depends_on:
    - app
  volumes:
    - ./test/e2e:/e2e
    - ./package.json:/package.json
    - ./package-lock.json:/package-lock.json
    - ./script:/script
  entrypoint: ['/script/run.sh']
Create run.sh in the folder script and make it executable:
#!/bin/bash
/script/wait-for app:8080 -t 60 -- /opt/testcafe/docker/testcafe-docker.sh chromium --no-sandbox e2e
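Assuming ./script/run.sh is the host-side path that gets bind-mounted to /script (as in the compose file above), remember to make it executable before bringing the stack up:
chmod +x script/run.sh
docker-compose up --build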

Docker is not getting Postgres environment variables

Docker is using the variables from my .env file and I keep getting the error:
Unhandled rejection SequelizeConnectionError: role "eli" does not
exist
I would like for Postgres to get the variables from the environment set in docker-compose.yml
.env
POSTGRES_PORT=5432
POSTGRES_DB=elitest4
POSTGRES_USER=eli
POSTGRES_PASSWORD=
docker-compose.yml
# docker-compose.yml
version: "3"
services:
  app:
    build: .
    depends_on:
      - database
    ports:
      - 8000:8000
    environment:
      - POSTGRES_HOST=database
    env_file:
      - .env
  database:
    image: postgres:9.6.8-alpine
    environment: # postgres should be getting these variables, not the variables set in the .env file (those are for localhost)
      POSTGRES_PASSWORD: password
      POSTGRES_USER: user
      POSTGRES_DB: db
    volumes:
      - pgdata:/var/lib/postgresql/pgdata
    ports:
      - 8002:5432
    env_file:
      - .env
  react_client:
    build:
      context: ./client
      dockerfile: Dockerfile
    image: react_client
    working_dir: /home/node/app/client
    volumes:
      - ./:/home/node/app
    ports:
      - 8001:8001
    env_file:
      - ./client/.env

volumes:
  pgdata:
TL;DR
Try updating the docker-compose service database environment section as follows:
environment:
  POSTGRES_PASSWORD: ${POSTGRES_PASSWORD:-password}
  POSTGRES_USER: ${POSTGRES_USER:-user}
  POSTGRES_DB: ${POSTGRES_DB:-db}
Also notice that if you would like to see how each bound variable ultimately evaluates in Compose you can run the following command to see the "effective" compose file:
$ docker-compose config
This command will print out what your compose file looks like with all variable substitution replaced with its evaluation.
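With the .env file from the question, for example, the database service in that output should come out roughly like this (abridged and illustrative), because ${VAR:-default} falls back to the default whenever the variable is unset or empty:
database:
  environment:
    POSTGRES_DB: elitest4
    POSTGRES_PASSWORD: password   # .env leaves POSTGRES_PASSWORD empty, so the default applies
    POSTGRES_USER: eli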
See the Environment variables in Compose and the Variable substitution sections in the documentation.
Pay close attention to this section:
When you set the same environment variable in multiple files, here's the priority used by Compose to choose which value to use:
1. Compose file
2. Shell environment variables
3. Environment file
4. Dockerfile
5. Variable is not defined
In the example below, we set the same environment variable on an Environment file, and the Compose file:
$ cat ./Docker/api/api.env
NODE_ENV=test
$ cat docker-compose.yml
version: '3'
services:
  api:
    image: 'node:6-alpine'
    env_file:
      - ./Docker/api/api.env
    environment:
      - NODE_ENV=production
When you run the container, the environment variable defined in the Compose file takes precedence.
$ docker-compose exec api node
> process.env.NODE_ENV
'production'
Having any ARG or ENV setting in a Dockerfile evaluates only if there is no Docker Compose entry for environment or env_file.
Specifics for NodeJS containers
If you have a package.json entry for script:start like NODE_ENV=test node server.js, then this overrules any setting in your docker-compose.yml file.
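For instance, a package.json start script like the following (hypothetical) hard-codes NODE_ENV and would win over whatever Compose injects:
{
  "scripts": {
    "start": "NODE_ENV=test node server.js"
  }
}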

Docker PostgreSQL - Scripts in /docker-entrypoint-initdb.d don't run

So, I have a docker-compose project with this structure:
DockerDev
- docker-compose.yaml
- d-php
  - Dockerfile
  - scripts-apache
- d-postgresql
  - Dockerfile
  - scripts
    - dev_data_setup.sql
- logs
- pgdata
- www
PHP, Redis, and ElasticSearch are OK, but PostgreSQL doesn't run dev_data_setup.sql with any of the different approaches to /docker-entrypoint-initdb.d that I found (volume, ADD, COPY, etc.). I also tried to run an sh script and got nothing.
Could you look at this docker-compose and Dockerfile and help me? Thanks.
Dockerfile:
FROM postgres:latest
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d
docker-compose.yaml:
version: '2'
services:
  php:
    build: ./d-php/
    hostname: www.domain.com
    ports:
      - "80:80"
    volumes:
      - ./www:/var/www/html
      - ./d-php/scripts-apache2/apache2.conf:/etc/apache2/apache2.conf
      - ./d-php/scripts-apache2/web.conf:/etc/apache2/sites-enabled/web.conf
      - ./d-php/scripts-apache2/webservice.conf:/etc/apache2/sites-enabled/webservice.conf
      - ./logs:/var/log/apache2
    links:
      - db
      - redis
      - elasticsearch
  db:
    build: ./d-postgresql/
    volumes:
      - ./pgdata:/pgdata
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/pgdata
  redis:
    image: redis:latest
  elasticsearch:
    image: elasticsearch:2.4.1
So I found the problem.
First: my SQL script was trying to recreate the postgres user, so the dockedev_db container exited.
Second: I needed to remove all images related to db for docker-compose to run the script again.
Thanks for your help.
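In case it helps someone hitting the same thing, a rough sketch of that cleanup (the service name db comes from the compose file above; clearing ./pgdata is destructive, and it is what actually lets the entrypoint scripts run again, since they only execute against an empty data directory):
docker-compose stop db
docker-compose rm -f db
rm -rf ./pgdata        # wipe the bind-mounted data dir so initdb runs again
docker-compose build db
docker-compose up -d db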
Your problem is caused by the way you use ADD in your Dockerfile:
FROM postgres:latest
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d
This creates a file called /docker-entrypoint-initdb.d with the content of the dev_data_setup.sql file. What you want is to treat /docker-entrypoint-initdb.d as a directory.
You should change your ADD command to one of the following:
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d/
The trailing slash will treat the dest parameter as a directory. Or use
ADD ./scripts/dev_data_setup.sql /docker-entrypoint-initdb.d/dev_data_setup.sql
Which will specifically spell out the file name.
Reference: https://docs.docker.com/engine/reference/builder/#/add
If <dest> does not end with a trailing slash, it will be considered a regular file and the contents of <src> will be written at <dest>.
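As a variation on the same fix (not part of the original answer), copying the whole directory also works and sidesteps the trailing-slash question entirely:
COPY ./scripts/ /docker-entrypoint-initdb.d/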