I have a Dockerfile:
FROM mozilla/sbt:8u212_1.3.4
WORKDIR /app
ADD . /app
RUN sbt compile
CMD sbt run
I have a docker-compose file:
version: '3'
services:
  my-service:
    build: .
    environment:
      - KEY=VALUE
My scala project looks like this:
object Main extends App {
  println(System.getenv("KEY"))
}
but when I run docker-compose up it just prints null, instead of VALUE
First, check whether the variable is actually set in the container.
Run the container and open a shell inside it:
$ docker exec -it <IDcontainer> /bin/bash
# echo $KEY
If VALUE shows up there, the problem is in the program and not in the container.
Bye
I have this docker compose:
myservice:
  restart: "no"
With "no", the service still starts (it just won't be restarted).
How can I prevent the service from starting at all?
Note: the reason I want to do this, for those curious, is that I want to make this flag configurable via an env var:
myservice:
  restart: "${RESTART_SERVICE:-no}"
And then pass the right value to start the service.
Docker provides restart policies to control whether your containers start automatically when they exit, or when Docker itself restarts.
https://docs.docker.com/config/containers/start-containers-automatically/
So restart policies only apply when containers exit or when Docker restarts.
But you have two options for what you want to do:
First, only start the services you want:
docker-compose up other-service
This doesn't use an env var as you wanted (unless you wrap docker-compose up in a script):
if [[ $START == true ]]; then
  docker-compose up
else
  docker-compose up other-service
fi
But as mentioned here and here, you can override the entrypoint.
So you can do something like:
services:
  alpine:
    image: alpine:latest
    environment:
      - START=false
    volumes:
      - ./start.sh:/start.sh
    entrypoint: ['sh', '/start.sh']
And a start.sh like:
if [ "$START" = true ]; then
  echo ok # replace with the original entrypoint or command
else
  exit 0
fi
# START=false in the docker-compose
$ docker-compose up
Starting stk_alpine_1 ... done
Attaching to stk_alpine_1
stk_alpine_1 exited with code 0
$ sed -i 's/START=false/START=true/' docker-compose.yml
$ docker-compose up
Starting stk_alpine_1 ... done
Attaching to stk_alpine_1
alpine_1 | ok
stk_alpine_1 exited with code 0
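To connect this back to the goal of driving the flag with an env var: compose variable substitution can feed START from the host environment. A sketch reusing the files above (START_SERVICE is an illustrative name):

```yaml
services:
  alpine:
    image: alpine:latest
    environment:
      # falls back to false when START_SERVICE is unset on the host
      - START=${START_SERVICE:-false}
    volumes:
      - ./start.sh:/start.sh
    entrypoint: ['sh', '/start.sh']
```

Then START_SERVICE=true docker-compose up runs the service, and plain docker-compose up exits immediately.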
This is my circle.yml:
version: 2
jobs:
  build:
    working_directory: /app
    docker:
      - image: docker:stable-git
    steps:
      - checkout
      - setup_remote_docker
      - run:
          name: Install dependencies
          command: |
            apk add --no-cache py-pip bash
            pip install docker-compose
      - run:
          name: Start service containers and run tests
          command: |
            docker-compose -f docker-compose.test.yml up -d db es redis
            docker-compose run web bash -c "cd myDir && ./manage.py test"
This works fine in that it brings up my service containers (db, es, redis) and I build a new image for my web container. However, my working code is not inside the freshly built image (so "cd myDir" always fails).
I figure the following lines in my Dockerfile should make my code available when it's built but it appears that it doesn't work like that:
ENV APPLICATION_ROOT /app/
RUN mkdir -p $APPLICATION_ROOT
WORKDIR $APPLICATION_ROOT
ADD . $APPLICATION_ROOT
What am I doing wrong and how can I make my code available inside my test container?
Thanks,
Use COPY. Your Dockerfile should look something like this:
FROM image
COPY . /opt/app
WORKDIR "/opt/app"
(More commands)
ENTRYPOINT
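Applied to the snippet from the question, that would look something like the following sketch (COPY is swapped in for ADD; for plain local files they behave the same, and COPY is the recommended choice):

```dockerfile
ENV APPLICATION_ROOT /app/
# WORKDIR creates the directory if it doesn't exist, so mkdir isn't needed
WORKDIR $APPLICATION_ROOT
COPY . $APPLICATION_ROOT
```

If the code still isn't in the container, make sure the image is actually rebuilt (docker-compose build) before docker-compose run, since run reuses an existing image.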
I tried to use a .env file to make environment variable, but it doesn't work.
These are my steps:
version: "3"
services:
  web:
    image: php-fpm:5.6.30
    env_file:
      - .env
This is .env file
TEST_ENV="HELLO WORLD"
It doesn't work when I start the container:
var_dump(getenv("TEST_ENV")); // output NULL
For me it seems to work. Maybe this can help you:
├── docker-compose.yaml
├── .env
└── myphp
├── Dockerfile
└── script.php
My .env file
TEST_ENV="HELLO WORLD"
My docker-compose.yaml:
version: "3"
services:
  web:
    build: ./myphp
    env_file: .env
So my docker-compose.yaml will build the image myphp. The Dockerfile looks like this:
FROM php:5.6.30-fpm
COPY script.php /var/script.php
My script.php
<?php
var_dump(getenv('TEST_ENV'));
exit;
Then I run docker-compose up -d --build. This builds my image with the PHP script added, and starts a container from that image.
$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
15f0289bfbe8 test_web "docker-php-entryp..." 3 seconds ago Up 1 second 9000/tcp test_web_1
I'm accessing my container
$ docker exec -it 15f0289bfbe8 bash
Then I go to the /var folder where I put my script (see the Dockerfile), execute it, and also print the env var directly:
root@15f0289bfbe8:/var/www/html# cd /var/
root@15f0289bfbe8:/var# ls
backups  cache  lib  local  lock  log  mail  opt  run  script.php  spool  tmp  www
root@15f0289bfbe8:/var# php -f script.php
string(13) ""HELLO WORLD""
root@15f0289bfbe8:/var# echo $TEST_ENV
"HELLO WORLD"
root@15f0289bfbe8:/var#
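Note the extra double quotes in the output above: unlike a shell, docker-compose takes env_file values literally, so the quotes in the .env file become part of the value (which is why PHP reports string(13) instead of string(11)). To get the bare string, write the entry without quotes:

```
TEST_ENV=HELLO WORLD
```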
I have two docker-compose config yamls that look like this:
docker-compose.yml:
version: '2'
services:
  web: &web
    build: .
    environment:
      FOO: bar
docker-compose.development.override.yml:
version: '2'
services:
  web: &web
    build: .
    environment:
      FOO: biz
When I take a look at the value of $FOO inside the container, I am not seeing the value I expect:
bdares$ COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml
bdares$ docker-compose run --rm web bash
docker$ echo $FOO
bar
When I explicitly set the compose files, I see the value I expect:
bdares$ docker-compose -f ./docker-compose.yml \
> -f ./docker-compose.development.override.yml \
> run --rm web bash
docker$ echo $FOO
biz
This suggests to me that docker-compose is not respecting the COMPOSE_FILE environment variable as claimed here.
What might I be doing wrong?
Version info:
docker-compose version 1.8.0
Docker version 1.11.0
Use:
export COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml
docker-compose run --rm web bash
or
COMPOSE_FILE=./docker-compose.yml:./docker-compose.development.override.yml docker-compose run --rm web bash
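The difference is plain shell behavior, not docker-compose: an assignment without export creates a shell variable that child processes such as docker-compose never see. A docker-free illustration (the file names are placeholders):

```shell
FOO=./a.yml:./b.yml                 # shell variable only; not exported
sh -c 'echo "child sees: $FOO"'     # child prints an empty value
export FOO                          # now FOO is in the environment
sh -c 'echo "child sees: $FOO"'     # prints: child sees: ./a.yml:./b.yml
```

Prefixing the assignment to the command, as in the second form above, exports it for that one invocation only.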
I'm trying to pass a host variable to a Dockerfile when running docker-compose build
I would like to run
RUN usermod -u $USERID www-data
in an apache-php7 Dockerfile. $USERID being the ID of the current host user.
I would have thought that the following might work:
commandline
export USERID=$(id -u); docker-compose build
docker-compose.yml
...
environment:
  - USERID=$USERID
Dockerfile
ENV USERID
RUN usermod -u $USERID www-data
But no luck yet.
For Docker in general, it is not possible to use host environment variables during the build phase. This is by design: if you run docker build and I run docker build with the same Dockerfile (or Docker Hub runs docker build with the same Dockerfile), we should end up with the same image, regardless of our local environments.
While passing in variables at runtime is easy with the docker command line (using -e <var>=<value>), it's a little trickier with docker-compose, because that tool is designed to create self-contained environments.
A simple solution would be to drop the host uid into an environment file before starting the container. That is, assuming you have:
version: "2"
services:
  shell:
    image: alpine
    env_file: docker-compose.env
    command: >
      env
You can then:
echo HOST_UID=$UID > docker-compose.env; docker-compose up
And the HOST_UID environment variable will be available to your container:
Recreating vartest_shell_1
Attaching to vartest_shell_1
shell_1 | HOSTNAME=17423d169a25
shell_1 | HOST_UID=1000
shell_1 | HOME=/root
vartest_shell_1 exited with code 0
You would then have something like an ENTRYPOINT script that sets up the container environment (creating users, modifying file ownership) to operate correctly with the given UID.
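That entrypoint could look something like the following sketch (the www-data user and /var/www path are assumptions carried over from the question, and the script name is hypothetical):

```sh
#!/bin/sh
# entrypoint.sh (hypothetical): remap the service user to the host UID
# passed in via docker-compose.env, then run the original command.
if [ -n "$HOST_UID" ]; then
    usermod -u "$HOST_UID" www-data
    chown -R www-data:www-data /var/www
fi
exec "$@"
```

You would wire it in with entrypoint: ["/entrypoint.sh"] in the compose file, keeping the image's normal command so exec "$@" hands control back to it.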