Docker Compose apparently ignores COMPOSE_FILE

I have two docker-compose config yamls that look like this:
docker-compose.yml:
version: '2'
services:
  web: &web
    build: .
    environment:
      FOO: bar
docker-compose.development.override.yml:
version: '2'
services:
  web: &web
    build: .
    environment:
      FOO: biz
When I take a look at the value of $FOO inside the container, I am not seeing the value I expect:
bdares$ COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml
bdares$ docker-compose run --rm web bash
docker$ echo $FOO
bar
When I explicitly set the compose files, I see the value I expect:
bdares$ docker-compose -f ./docker-compose.yml \
> -f ./docker-compose.development.override.yml \
> run --rm web bash
docker$ echo $FOO
biz
This suggests to me that docker-compose is not respecting the COMPOSE_FILE environment variable as claimed here.
What might I be doing wrong?
Version info:
docker-compose version 1.8.0
Docker version 1.11.0

A bare COMPOSE_FILE=... assignment only creates a shell variable; it is not exported into the environment of child processes, so docker-compose never sees it. Either export it:
export COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml
docker-compose run --rm web bash
or set it just for the one command:
COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml docker-compose run --rm web bash
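If you don't want to export the variable in every shell, recent Compose versions also read COMPOSE_FILE from a .env file in the directory you run docker-compose from, for example:
# .env, next to docker-compose.yml
COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml
After that, a plain docker-compose run --rm web bash picks up both files.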

Prevent container startup

I have this docker compose:
myservice:
  restart: "no"
With "no" the service will still start (it just won't be restarted).
How can I prevent the service from starting at all?
Note: the reason I want to do this, for those curious, is that I want to make this flag configurable via an env var:
myservice:
  restart: "${RESTART_SERVICE:-no}"
And then pass the right value to start the service.
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker restarts.
https://docs.docker.com/config/containers/start-containers-automatically/
So a restart policy only applies when a container exits or when Docker itself restarts.
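For reference, the possible values (from the docs linked above) are:
restart: "no"            # never restart automatically (the default)
restart: always          # always restart the container when it stops
restart: on-failure      # restart only on a non-zero exit code
restart: unless-stopped  # like always, except when explicitly stopped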
But you have two options for what you want to do:
First, only start the services you want:
docker-compose up other-service
This doesn't use an env var the way you want (unless you wrap docker-compose up in a script):
if [[ $START == true ]]; then
    docker-compose up
else
    docker-compose up other-service
fi
But as mentioned here and here, you can override the entrypoint, so you can do something like:
services:
  alpine:
    image: alpine:latest
    environment:
      - START=false
    volumes:
      - ./start.sh:/start.sh
    entrypoint: ['sh', '/start.sh']
And a start.sh like:
if [ "$START" = true ]; then
    echo ok # replace with the original entrypoint or command
else
    exit 0
fi
# START=false in the docker-compose
$ docker-compose up
Starting stk_alpine_1 ... done
Attaching to stk_alpine_1
stk_alpine_1 exited with code 0
$ sed -i 's/START=false/START=true/' docker-compose.yml
$ docker-compose up
Starting stk_alpine_1 ... done
Attaching to stk_alpine_1
alpine_1 | ok
stk_alpine_1 exited with code 0

docker-compose environment variables inside container

I have a small python app developed with docker containers.
My setup is:
Dockerfile
FROM python:3
ARG http_proxy
ARG https_proxy
ENV http_proxy ${http_proxy}
ENV https_proxy ${https_proxy}
ENV VIRTUAL_ENV=/opt/venv
ENV PATH="$VIRTUAL_ENV/bin:$PATH"
RUN apt-get update
RUN apt install -y vim screen
RUN \
    echo 'alias py="/opt/venv/bin/python"' >> /root/.bashrc && \
    echo 'alias ls="ls --color=auto"' >> /root/.bashrc && \
    echo 'PS1="\u#\h:\[\e[33m\]\w\[\e[0m\]\$ "' >> /root/.bashrc
RUN python3 -m venv $VIRTUAL_ENV
WORKDIR /app
COPY requirements.txt ./
RUN pip install --no-cache-dir -r requirements.txt
docker-compose.yml
version: '3.8'
x-dev:
  &proxy_conf
  http_proxy: "${HTTP_PROXY}"
  https_proxy: "${HTTPS_PROXY}"
services:
  py_service:
    container_name: ${APP_NAME}
    build:
      context: .
      dockerfile: Dockerfile
      args: *proxy_conf
    image: ${APP_NAME}_img
    volumes:
      - '.:/app'
    restart: always
    command: tail -f /dev/null
.env
HTTP_PROXY=<http_proxy_server_here>
HTTPS_PROXY=<https_proxy_server_here>
APP_NAME=python_app
The problem is that if the proxy server changes I need to rebuild the image, and I don't want that (as a last resort maybe I will do it).
What I'm trying to do is change the proxy environment variables inside the container, but I can't find the file where the environment is stored.
The container OS version is:
[root#5b1b77079e10 ~ >>>] $ cat /etc/os-release
PRETTY_NAME="Debian GNU/Linux 10 (buster)"
NAME="Debian GNU/Linux"
VERSION_ID="10"
VERSION="10 (buster)"
VERSION_CODENAME=buster
ID=debian
HOME_URL="https://www.debian.org/"
SUPPORT_URL="https://www.debian.org/support"
BUG_REPORT_URL="https://bugs.debian.org/"
You should only need to recreate containers, not rebuild the image. I assume you are doing something like this to get everything up initially:
docker-compose build
docker-compose up -d
Then I assume you are updating your .env file; once you do that, you should be able to just do the following for your container to pick up the change:
docker-compose down
docker-compose up -d
You should not need to do a docker-compose build again.
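Note that in the compose file above the proxies are passed only as build args, so their values end up baked into the image via ENV. If you also want recreated containers to pick up new values from .env without any rebuild, one option (a sketch, not part of the original setup) is to pass them at run time as well, since environment: in the compose file overrides ENV values from the image:
services:
  py_service:
    # ... build, image, volumes, etc. unchanged ...
    environment: *proxy_conf   # reuse the same anchor for runtime values
With that in place, the docker-compose down / docker-compose up -d sequence above is enough after editing .env.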

Can't see environment variables set in docker-compose from sbt

I have a dockerfile:
FROM mozilla/sbt:8u212_1.3.4
WORKDIR /app
ADD . /app
RUN sbt compile
CMD sbt run
I have a docker-compose file:
version: '3'
services:
  my-service:
    build: .
    environment:
      - KEY=VALUE
My scala project looks like this:
object Main extends App {
  println(System.getenv("KEY"))
}
but when I run docker-compose up it just prints null, instead of VALUE
First, check whether the variable is set in the container.
Run the container and enter it:
$ docker exec -it <container-id> /bin/bash
# echo $KEY
Maybe the problem is in the program and not in the container.
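Another quick check from the host (a sketch reusing the my-service name from the compose file above; the trailing env overrides the image's CMD for this one-off container):
$ docker-compose run --rm my-service env | grep KEY
If KEY=VALUE shows up there, Compose is passing the variable correctly and the problem is in the application.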
Bye

CircleCI 2.0 testing with docker-compose and code checkout

This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works in that it brings up my service containers (db, es, redis) and builds a new image for my web container. However, my working copy of the code is not inside the freshly built image (so cd myDir always fails).
I figured the following lines in my Dockerfile would make my code available in the image when it is built, but it appears it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM <base-image>
COPY . /opt/app
WORKDIR /opt/app
# ... more commands ...
ENTRYPOINT ["..."]
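As a sanity check that the code really ends up inside the freshly built image, something like this (hypothetical, reusing the web service from the question and the /opt/app working directory from the skeleton above) can be run before the test step:
docker-compose run web ls /opt/app
If that lists your project files, the cd myDir && ./manage.py test step should find them too.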

how to pass host user to Dockerfile when using docker-compose

I'm trying to pass a host variable to a Dockerfile when running docker-compose build
I would like to run
RUN usermod -u $USERID www-data
in an apache-php7 Dockerfile. $USERID being the ID of the current host user.
I would have thought that the following might work:
command line
export USERID=$(id -u); docker-compose build
docker-compose.yml
...
environment:
  - USERID=$USERID
Dockerfile
ENV USERID
RUN usermod -u $USERID www-data
But no luck yet.
For Docker in general, it is not possible to use host environment variables during the build phase; this is by design: if you run docker build and I run docker build with the same Dockerfile (or Docker Hub runs docker build with the same Dockerfile), we should end up with the same image, regardless of our local environments.
While passing in variables at runtime is easy with the docker command line (using -e <var>=<value>), it's a little trickier with docker-compose, because that tool is designed to create self-contained environments.
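For example, with the plain docker CLI the runtime case looks like this (a sketch, not from the original question):
docker run --rm -e USERID=$(id -u) alpine env | grep USERID
which prints something like USERID=1000.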
A simple solution would be to drop the host uid into an environment file before starting the container. That is, assuming you have:
version: "2"
services:
shell:
image: alpine
env_file: docker-compose.env
command: >
env
You can then:
echo HOST_UID=$UID > docker-compose.env; docker-compose up
And the HOST_UID environment variable will be available to your container:
Recreating vartest_shell_1
Attaching to vartest_shell_1
shell_1 | HOSTNAME=17423d169a25
shell_1 | HOST_UID=1000
shell_1 | HOME=/root
vartest_shell_1 exited with code 0
You would then have something like an ENTRYPOINT script that sets up the container environment (creating users, modifying file ownership) so it operates correctly with the given UID.
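A minimal sketch of such an entrypoint script, assuming a Debian-based image where usermod is available and the application files live under /var/www (both of those are assumptions, not details from the question):
#!/bin/sh
# entrypoint.sh (hypothetical): remap www-data to the UID passed in HOST_UID,
# fix file ownership, then hand off to the container's real command.
set -e
if [ -n "$HOST_UID" ]; then
    usermod -u "$HOST_UID" www-data        # reassign www-data to the host UID
    chown -R www-data:www-data /var/www    # assumed app path; adjust as needed
fi
exec "$@"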