Is it possible to have environment and env_file in Docker Compose?

I am new to Gitlab CI/CD and I have the following issue.
Suppose I have some environment variables in my local setup in an .env file. Something like this:
SOME_URL=https://someurl.com/
SECRET_KEY=verysecretkey
For my setup to work, I need both of these environment variables. The SECRET_KEY is not in the .env for the deployment. It is in the "secrets" in GitLab. If I have something like this in my Docker Compose file:
environment:
  SECRET_KEY: ${SECRET_KEY}
env_file:
  - .env
My two questions are:
In my local setup, will I have in my environment variables both SECRET_KEY and SOME_URL?
In GitLab, am I going to be able to replace SECRET_KEY with secrets?
Thanks for your answers in advance!

The variables defined in GitLab CI/CD are available inside the pipeline (.gitlab-ci.yml). In the job where you execute the docker-compose command, the variable will be substituted and then set in your container thanks to environment: SECRET_KEY: ${SECRET_KEY}. That should be fine.
https://docs.gitlab.com/ee/ci/variables/
https://docs.docker.com/compose/compose-file/#variable-substitution
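Putting the two options together, a minimal compose sketch could look like this (the service name and image are placeholders, not from the question):

```yaml
# Hypothetical docker-compose.yml combining env_file and environment.
# SOME_URL comes from .env via env_file; SECRET_KEY is substituted from
# the shell locally, or from a GitLab CI/CD variable in the pipeline.
services:
  webapp:
    image: my-app:latest   # placeholder image
    env_file:
      - .env
    environment:
      SECRET_KEY: ${SECRET_KEY}
```

Note that values under environment: take precedence over those loaded through env_file:, so even if .env also contained a SECRET_KEY, the substituted value would win.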

Related

docker-compose set environment per-service profile

I recently discovered docker-compose profiles, which seem great for allowing optional local resources for testing
However, it's not clear whether a container can be given a different environment depending on the profile; what is a sensible way (if any) to switch environment variables per service profile?
Perhaps:
- using extends (which appears deprecated, but may work for me anyway: "Extend service in docker-compose 3")
- the profile value is, or can be made, available to the container so it can switch internally
- this was never intended or considered in the design (probe the local connection on startup, volume-mounting tricks, ...)
Specifically, I'm trying to prefer an address and some keys via env var under some testing profile, but prefer a .env file otherwise.
Normal structure
services:
  webapp:
    ...
    env_file:
      - .env
Structure with test profile
services:
  db-service:
    image: db-image
    profiles: ["test"]
    ...
  webapp:
    ...
    environment:
      - DATABASE_HOST=db-service:1234
I can say with certainty that this was never an intended use case for profiles :)
docker-compose has no native way to pass the current profile down to a service. As a workaround you could pass the COMPOSE_PROFILES environment variable to the container, but this does not work when the profiles are specified with the --profile flag on the command line.
You would also have to handle multiple active profiles correctly yourself.
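As a sketch of that workaround (it assumes you select profiles via the COMPOSE_PROFILES variable, not via --profile):

```yaml
# Hypothetical compose file: forward the active profiles into the
# container so the application can branch on them at run time.
services:
  webapp:
    image: my-app:latest   # placeholder image
    environment:
      - COMPOSE_PROFILES=${COMPOSE_PROFILES:-}
```

Started with COMPOSE_PROFILES=test docker-compose up, the container then sees COMPOSE_PROFILES=test; started with --profile test instead, it only sees an empty value.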
The best solution for your specific issue would be to have different services for each profile:
services:
  webapp-prod:
    profiles: ["prod"]
    #...
    env_file:
      - .env
  db-service:
    image: db-image
    profiles: ["test"]
    #...
  webapp-test:
    profiles: ["test"]
    #...
    environment:
      - DATABASE_HOST=db-service:1234
The only downsides are that "the same" service now has different names in different configurations, and that both services need an assigned profile, so neither starts by default; you always have to select a profile.
Also it has some duplicate code for the two service definitions. If you want to share the definition in the file you could use yaml anchors and aliases:
services:
  webapp-prod: &webapp
    profiles: ["prod"]
    #...
    env_file:
      - .env
  webapp-test:
    <<: *webapp
    profiles: ["test"]
    environment:
      - DATABASE_HOST=db-service:1234
  db-service:
    image: db-image
    profiles: ["test"]
    #...
Another alternative could be using multiple compose files:
# docker-compose.yml
services:
  webapp:
    #...
    env_file:
      - .env

# docker-compose.test.yml
services:
  db-service:
    image: db-image
    #...
  webapp:
    environment:
      - DATABASE_HOST=db-service:1234
This way you can start the production service normally, and start the test setup by passing and merging both compose files:
docker-compose up # start the production version
docker-compose -f docker-compose.yml -f docker-compose.test.yml up # start the test version
Arcan's answer has a lot of good ideas.
I think another solution is to just pass a variable next to the --profile flag on your docker commands. For instance, you can set -e TESTING=.env.testing in your docker-compose command and use env_file: ${TESTING:-.env.default} in your file. This gives you a default env file for any run without a profile, and loads the given file when needed.
Since my setup is slightly different (I am adding a single variable to a container in my docker-compose), I did not test whether this works on the env_file: attribute, but I think it should.
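The env-file-path substitution suggested above can be sketched like this (the file names .env.default and .env.testing are made up for illustration):

```yaml
# Hypothetical compose file: the env_file path itself is substituted,
# falling back to .env.default when TESTING is unset or empty.
services:
  webapp:
    image: my-app:latest   # placeholder image
    env_file:
      - ${TESTING:-.env.default}
```

For example, TESTING=.env.testing docker-compose --profile test up would load the testing file, while a plain docker-compose up would load .env.default.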

Apply label to docker compose service depending on environment configuration?

Say I have a docker-compose.yml like so:
version: "2.1"
services:
  web:
    image: foo
  cli:
    image: bar
Upon docker-compose up, depending on the value of an environment variable, I would like to add a specific label to either the web service or the cli service, but never both.
What are some solutions for this?
EDIT: An additional stipulation is that the compose file can have an arbitrary set of services in it (i.e. the set of services is not constant, it is variable).
You might want to split your compose.yml file and add some shell scripting around docker to achieve this.
For example, you could create a bash script that checks your environment variable and passes the appropriate yml files to the docker compose up command it invokes.
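A minimal sketch of such a wrapper (the LABEL_TARGET variable and the override file names are invented for illustration):

```shell
#!/bin/sh
# Hypothetical wrapper: pick a compose override file that carries the
# extra label, based on an environment variable, and hand the file list
# to docker-compose. At most one override file is ever applied.
compose_files() {
  case "$1" in
    web) echo "-f docker-compose.yml -f labels-web.yml" ;;
    cli) echo "-f docker-compose.yml -f labels-cli.yml" ;;
    *)   echo "-f docker-compose.yml" ;;
  esac
}

# Usage (commented out so the sketch has no side effects):
# docker-compose $(compose_files "$LABEL_TARGET") up
```

Each labels-*.yml file would contain only the services: entry that adds the label, so the arbitrary remaining services in the base file are untouched.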

Heroku play scala always runs in Prod mode

When I run "sbt ~run" I can see that the mode is set to Dev as expected. However, when I run "heroku local web" the server runs in Prod mode. Any idea how I can get this set to Dev mode? Do I have to set a variable with the heroku config CLI? My intention is to test with heroku local before pushing to Heroku git.
Have tried this in my Procfile:
web: target/universal/stage/bin/myapp -Dhttps.port=${PORT} -Dhttp.port=disabled -Dhttps.keyStore=conf/generated.keystore -Dlogback.configurationFile=conf/logback.xml -Dapplication.mode=DEV
But still it shows Prod. When server runs, the conf file is set programmatically with "-Dconfig.resource=root-dev.conf".
You'll want to use environment variables for the application.mode in some fashion (after all, that's what environment variables are for: changes specific to an environment). There are a few ways you could do this; one is to create an APPLICATION_MODE env var and use it in your Procfile like this:
-Dapplication.mode=${APPLICATION_MODE:-DEV}
This will set -Dapplication.mode from APPLICATION_MODE and default to DEV if the env var is not set. Then you can set the env var on Heroku like this:
$ heroku config:set APPLICATION_MODE="PROD"
Or you could put the whole -Dapplication.mode=DEV in JAVA_OPTS in your .env file in your local repo like this:
JAVA_OPTS="-Dapplication.mode=DEV"
The heroku local command will pick up .env and load it. In this way you don't need to change anything on Heroku.
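The ${VAR:-default} form used above is ordinary POSIX shell parameter expansion, which you can verify locally:

```shell
# When APPLICATION_MODE is unset or empty, the :- form falls back to DEV.
unset APPLICATION_MODE
echo "mode=${APPLICATION_MODE:-DEV}"    # prints mode=DEV

# When it is set, the set value wins.
APPLICATION_MODE=PROD
echo "mode=${APPLICATION_MODE:-DEV}"    # prints mode=PROD
```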

docker-compose how to pass env variables into file

Is there any chance to pass variables from docker-compose into apache.conf file?
I have Dockerfile with variables
ENV APACHE_SERVER_NAME localhost
ENV APACHE_DOCUMENT_ROOT /var/www/html
I have apache.conf which I copy into /etc/apache2/sites-available/ while building image
ServerName ${APACHE_SERVER_NAME}
DocumentRoot ${APACHE_DOCUMENT_ROOT}
I have docker-compose.yml
environment:
  - APACHE_SERVER_NAME=cms
  - APACHE_DOCUMENT_ROOT=/var/www/html/public
When I run docker-compose, nothing happens and apache.conf in the container is unchanged.
Am I completely wrong and is this impossible, or am I missing a step or some point?
Thank you
Let me explain some small differences among the ways to pass environment variables:
Environment variables available at build time and to the entrypoint at run time:
ENV (Dockerfile): once specified in the Dockerfile, containers built from the image will have these variables available to their entrypoint.
environment: the same as ENV, but set from docker-compose.
docker run -e VAR=value ...: the same as before, but from the CLI.
Environment variables available only at build time:
ARG (Dockerfile): these won't appear in deployed containers.
Environment variables accessible to every container and build:
.env file defined in the working directory where you execute docker run or docker-compose.
env_file: section in the docker-compose.yml file to reference another env file.
Since you're trying to define variables for the apache conf file, maybe you should substitute them from the entrypoint at startup, or just define them in the .env file.
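A minimal sketch of the entrypoint approach (the template path, the render helper, and the use of sed are assumptions, not from the question):

```shell
#!/bin/sh
# Hypothetical entrypoint: render apache.conf from a template at container
# start, so values from docker-compose's `environment:` section take effect.
APACHE_SERVER_NAME="${APACHE_SERVER_NAME:-localhost}"
APACHE_DOCUMENT_ROOT="${APACHE_DOCUMENT_ROOT:-/var/www/html}"

# Replace the literal ${APACHE_SERVER_NAME} and ${APACHE_DOCUMENT_ROOT}
# placeholders read from stdin with the current environment values.
render() {
  sed -e "s|\${APACHE_SERVER_NAME}|${APACHE_SERVER_NAME}|g" \
      -e "s|\${APACHE_DOCUMENT_ROOT}|${APACHE_DOCUMENT_ROOT}|g"
}

# In a real image you would then do something like (paths assumed):
# render < /etc/apache2/sites-available/apache.conf.tpl \
#        > /etc/apache2/sites-available/apache.conf
# exec apache2ctl -D FOREGROUND
```

This works because the rendering happens at run time, after docker-compose has set the container's environment, rather than at build time when the Dockerfile ENV defaults are baked in.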

Why is "ENV variable 'S3_KEY' needs to be set" being thrown on deployment?

I'd like to use heroku_san to deploy multiple environments to heroku. I'm using dragonfly for image handling and S3 for storage. Usually you can add your key and secret for the storage using heroku config:add S3_KEY=… S3_SECRET=… directly.
So I've added these details to the heroku.yml file used by heroku_san:
staging:
  app: app-staging
  config: &default
    BUNDLE_WITHOUT: "development:test"
    S3_KEY: XXXXXXXXXXXXXXXXXX
    S3_SECRET: XXXXXXXXXXXXXXXXXX
    S3_BUCKET: app-staging
but when deploying I'm still getting:
rake aborted!
ENV variable 'S3_KEY' needs to be set - use
heroku config:add S3_KEY=XXXXXXXXX
What am I missing here? Is there a better way then storing this information in a YML file?
There's no need to run heroku config:add manually. Just run heroku_san's config task:
$ rake all heroku:config
Repeat this whenever you update the heroku.yml file.
I was confused about this too, since it's strangely absent from heroku_san's documentation, but the option does appear in the list of rake tasks:
$ rake -T
and in the heroku_san code: https://github.com/fastestforward/heroku_san/blob/master/lib/heroku_san/tasks.rb
A simple solution/work-around:
heroku config:add S3_KEY=XXX S3_SECRET=XXX --app app-staging
Any better ideas?
I think you need to run the command rake all heroku:rack_env. This command will then set the environment configurations for you based on your heroku_san YAML configuration.
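For completeness, the &default anchor in the heroku.yml above is usually paired with an alias so other environments reuse and override the shared config; a sketch (the production app name is a placeholder, and this assumes standard YAML merge-key behavior):

```yaml
staging:
  app: app-staging
  config: &default
    BUNDLE_WITHOUT: "development:test"
    S3_KEY: XXXXXXXXXXXXXXXXXX
    S3_SECRET: XXXXXXXXXXXXXXXXXX
    S3_BUCKET: app-staging
production:
  app: app-production        # placeholder app name
  config:
    <<: *default
    S3_BUCKET: app-production
```

Running rake all heroku:config then pushes each app's merged config to Heroku.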