I recently discovered docker-compose profiles, which seem great for enabling optional local resources for testing.
However, it's not clear whether a container can be given a different environment depending on the active profile. What is a sensible way (if any) to switch environment variables per service profile? Perhaps:

- using extends (which appears deprecated, but may work for me anyway: Extend service in docker-compose 3)
- the profile value is, or can be made, available to the container so it can switch internally
- this was never intended or considered in the design (probe the local connection on startup, volume-mounting tricks, ...)

Specifically, I'm trying to provide an address and some keys via environment variables under a testing profile, but prefer a .env file otherwise.
Normal structure:

services:
  webapp:
    ...
    env_file:
      - .env
Structure with test profile:

services:
  db-service:
    image: db-image
    profiles: ["test"]
    ...
  webapp:
    ...
    environment:
      - DATABASE_HOST=db-service:1234
I can say with certainty that this was never an intended use case for profiles :)
docker-compose has no native way to pass the current profile down to a service. As a workaround you could pass the COMPOSE_PROFILES environment variable to the container, but this does not work when the profiles are specified with the --profile flag on the command line.
You would also have to handle multiple active profiles correctly yourself.
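A sketch of that workaround (it only works when profiles are selected via the COMPOSE_PROFILES variable, e.g. `COMPOSE_PROFILES=test docker-compose up`, not via the --profile flag):

```yaml
services:
  webapp:
    environment:
      # forwards COMPOSE_PROFILES from the shell running docker-compose;
      # unset/empty when profiles are chosen with --profile instead
      - COMPOSE_PROFILES
```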
The best solution for your specific issue would be to have different services for each profile:
services:
  webapp-prod:
    profiles: ["prod"]
    #...
    env_file:
      - .env
  db-service:
    image: db-image
    profiles: ["test"]
    #...
  webapp-test:
    profiles: ["test"]
    #...
    environment:
      - DATABASE_HOST=db-service:1234
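With this layout, each variant is started by selecting its profile on the command line (a sketch; service names are the ones from the example above):

```shell
docker-compose --profile prod up   # starts webapp-prod only
docker-compose --profile test up   # starts webapp-test and db-service
```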
The only downsides are that "the same" service gets a different name per configuration, and that both services need profile(s) assigned, so neither will start by default, i.e. without an active profile.
It also duplicates code between the two service definitions. If you want to share the common definition within the file, you can use YAML anchors and aliases:
services:
  webapp-prod: &webapp
    profiles: ["prod"]
    #...
    env_file:
      - .env
  webapp-test:
    <<: *webapp
    profiles: ["test"]
    environment:
      - DATABASE_HOST=db-service:1234
  db-service:
    image: db-image
    profiles: ["test"]
    #...
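The `<<: *webapp` merge key behaves like a shallow dictionary merge: keys set on webapp-test override the anchored values, everything else is inherited. A rough Python analogy (not Compose itself):

```python
# YAML's "<<" merge key works like a shallow dict merge:
# keys on the aliasing mapping override the anchored ones.
webapp_prod = {
    "profiles": ["prod"],
    "env_file": [".env"],
}

# equivalent of `<<: *webapp` followed by overriding `profiles`
webapp_test = {
    **webapp_prod,
    "profiles": ["test"],
    "environment": ["DATABASE_HOST=db-service:1234"],
}

print(webapp_test["profiles"])  # ['test']  -- overridden
print(webapp_test["env_file"])  # ['.env']  -- inherited from the anchor
```

Note that webapp-test therefore also inherits `env_file: - .env` from the anchor; if the test service should not load .env, the anchor needs to be split so env_file sits outside it.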
Another alternative could be using multiple compose files:
# docker-compose.yml
services:
  webapp:
    #...
    env_file:
      - .env

# docker-compose.test.yml
services:
  db-service:
    image: db-image
    #...
  webapp:
    environment:
      - DATABASE_HOST=db-service:1234
This way you can start the production service normally, and the test instances by passing and merging both compose files:

docker-compose up  # start the production version
docker-compose -f docker-compose.yml -f docker-compose.test.yml up  # start the test version
Arcan's answer has a lot of good ideas.
I think another solution is to just pass a variable alongside the --profile flag on your docker-compose commands. For instance, you can set TESTING=.env.testing in the environment of your docker-compose command and use env_file: ${TESTING:-.env.default} in your compose file. This way a default env file is used for any run without the profile, and the given file is used when needed.
Since my setup is slightly different (I am adding a single variable to a container in my docker-compose), I have not tested whether this works on the env_file: attribute, but I think it should.
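The `${TESTING:-.env.default}` fallback syntax can be modeled in Python to show how Compose resolves it (a simplified sketch of the interpolation rules, not Compose's actual code):

```python
import re

def interpolate(value: str, env: dict) -> str:
    """Simplified model of Compose's ${VAR:-default} substitution."""
    def repl(match):
        var, default = match.group(1), match.group(3) or ""
        val = env.get(var)
        return val if val else default  # ":-" falls back when unset OR empty
    return re.sub(r"\$\{(\w+)(:-([^}]*))?\}", repl, value)

# env_file: ${TESTING:-.env.default}
print(interpolate("${TESTING:-.env.default}", {}))                           # .env.default
print(interpolate("${TESTING:-.env.default}", {"TESTING": ".env.testing"}))  # .env.testing
```

The `:-` form falls back both when the variable is unset and when it is empty; Compose also supports a plain `-` form (`${TESTING-.env.default}`) that only falls back when the variable is unset.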
Related
I'm running Grafana on localdev and I don't want to log in with admin credentials all the time. To that end I have created the following docker-compose:
version: "3"
services:
  grafana:
    image: grafana/grafana:8.3.5
    ports:
      - "3010:3000"
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
    volumes:
      ...
This works for allowing anonymous users to gain access, but it's in view-only / read-only mode. I would like to enable god-mode for the anonymous user per:
https://stackoverflow.com/a/51173858/863651
Is there some environment variable, or something to that effect, that allows me to achieve the desired result? I want to avoid introducing my own defaults.ini just to set org_role = Editor.
Any option from the config file can be overridden by an environment variable. The syntax for the variable name is GF_<CONFIG-SECTION>_<CONFIG-PROPERTY> (note that GF_AUTH_ANONYMOUS_ENABLED also follows this syntax).
So the config file section:

[auth.anonymous]
org_role = Editor
has the environment variable equivalent:
GF_AUTH_ANONYMOUS_ORG_ROLE=Editor
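Put together in the compose file from the question, that becomes (only the relevant keys shown):

```yaml
services:
  grafana:
    image: grafana/grafana:8.3.5
    environment:
      - GF_AUTH_ANONYMOUS_ENABLED=true
      - GF_AUTH_ANONYMOUS_ORG_ROLE=Editor  # anonymous users get the Editor role
```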
I have a docker-compose.yml file that works as expected when I execute docker-compose up in its parent directory.
My problem is that it is an old-version compose file, and I need to integrate its containers into another compose file. The old file has the following structure:
service1:
  ...
service2:
  ...
While the target docker-compose.yml has the following structure:
version: '2.3'
services:
  service1:
    ...
  service2:
    ...
So, my problem is that the old-version file relies on the links parameter, and I don't quite understand its function. I found little documentation online, and all the docs say is that links have been replaced by networks. Good, but what is the function of links? How can I replace them, so that I don't use (soon-to-be) deprecated features?
Links are not required for services to communicate: by default, any service can reach any other service at that service's name.
You can just delete the links section of the old-version docker-compose file and access the services from other containers by their names.
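For example (the links entry and image name here are hypothetical, standing in for the old file's contents):

```yaml
# before: old-style file with explicit links
service1:
  links:
    - service2
service2:
  image: some-image

# after: drop links; service2 is reachable from service1 simply as "service2"
services:
  service1:
    ...
  service2:
    image: some-image
```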
You can optionally define networks in order to control which services are visible to each other, by placing them in the same network. E.g.:
networks:
  my_network_1:
    driver: bridge
  my_network_2:
    driver: bridge
services:
  service_1:
    networks:
      - my_network_1
  service_2:
    networks:
      - my_network_1
  service_3:
    networks:
      - my_network_2
version: '3.8'
services:
  foo:
    ...
    networks:
      - $FOO_NETWORK
networks:
  foo_network:
I am unable to use $FOO_NETWORK under the top-level networks key, i.e. it only accepts a literal value, not an environment variable. How do I make the network name come from an environment variable instead?
Environment variables are for values; you want to use one as a key. As far as I know this isn't supported yet, and I'm not sure it ever will be.
One way you can customise this is to use multiple docker-compose files. Create three files:
one.yml:

version: "3.0"
services:
  test:
    image: nginx

two.yml:

version: "3.0"
services:
  test:
    networks:
      foo: {}
networks:
  foo: {}

three.yml:

version: "3.0"
services:
  test:
    networks:
      bar: {}
networks:
  bar: {}
Now if you run it like this:
docker-compose -f one.yml -f two.yml up
or like this:
docker-compose -f one.yml -f three.yml up
You'll see that the files are merged:
Creating network "network_foo" with the default driver
Recreating network_test_1 ... done
...
Creating network "network_bar" with the default driver
Recreating network_test_1 ... done
You can even spin up all three at once:
docker-compose -f one.yml -f two.yml -f three.yml up
Creating network "network_foo" with the default driver
Creating network "network_bar" with the default driver
Creating network_test_1 ... done
Attaching to network_test_1
test_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
Check out the documentation for more: https://docs.docker.com/compose/extends/
There is also another way, which actually does use a variable to select a network: using existing (external) networks. You'll need an .env file for this:

network=my_network
and in the compose file you do this:

version: "3.8"
services:
  test:
    networks:
      mynet: {}
networks:
  mynet:
    external: true
    name: $network
As you can see, there is an option to provide a name when using an external network. A network with that name must already exist when you start your containers, or you'll get an error. You can use a separate compose file to create networks on a node, or just create them with the CLI. Note that the compose version changed: this feature isn't supported in "3.0".
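Creating the external network up front could look like this (using my_network, the value from the .env file above):

```shell
docker network create my_network      # must exist before `up`
network=my_network docker-compose up  # or rely on the .env file for the value
```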
I can't really make sense of docker-compose's behavior with regard to environment variable files.
I've defined a few variables for a simple echo-server setup with two Flask applications running.
In .env:
FLASK_RUN_PORT=5000
WEB_1_PORT=80
WEB_2_PORT=8001
Then in docker-compose.yml:

version: '3.8'
x-common-variables: &shared_envvars
  FLASK_ENV: development
  FLASK_APP: main.py
  FLASK_RUN_HOST: 0.0.0.0
  COMPOSE_PROJECT_NAME: DOCKER_ECHOES
x-volumes: &com_volumes
  - .:/project  # maps the project root (the gitted directory) to /project in the container so we can live-reload
services:
  web_1:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_1_PORT}:${FLASK_RUN_PORT}"  # flask runs on 5000 (default); docker-compose loads the env vars and allows them to be used this way here
    volumes: *com_volumes
    environment:
      <<: *shared_envvars  # DRY: common stuff defined in a shared section above, merged in via the YAML merge syntax. pretty neat.
      FLASK_NAME: web_1
  web_2:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_2_PORT}:${FLASK_RUN_PORT}"  # flask runs on 5000 in the container, mapped to :8001 on the host
    volumes: *com_volumes
    environment:
      <<: *shared_envvars
      FLASK_NAME: web_2
If I run docker-compose up with the above, everything works as expected.
However, if I simply rename .env to, say, flask.env, and accordingly change both env_file: .env entries to env_file: flask.env, then I get:
(venv) [fv#fv-hpz420workstation flask_echo_docker]$ docker-compose up
WARNING: The WEB_1_PORT variable is not set. Defaulting to a blank string.
WARNING: The FLASK_RUN_PORT variable is not set. Defaulting to a blank string.
WARNING: The WEB_2_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
So obviously the env vars defined in the file were not loaded in that case. I know that, according to the documentation, the environment: section (which I am using) overrides what is loaded by env_file:. But those aren't the same variables, and at any rate, if that were the issue, it wouldn't work the first way either, right?
What's wrong with the above?
Actually, the env_file is loaded AFTER the images have been built. We can verify this: with the code posted above, flask.env has not been loaded at parse/build time, because of the error messages I get (telling me WEB_1_PORT is not set, etc.).
But that alone could mean the file is never loaded. To rule this out, build the images anyway (say, by supplying the missing values on the command line), and then verify that the file is indeed loaded (by logging a value from the Flask application in my case, or a simple print to screen).
This means the content of env_file is available to the running container, but not before (such as when docker-compose parses the file and builds the images).
If variables are to be used within the docker-compose.yml file itself, the file MUST be named .env (or, in newer versions, be passed explicitly with the --env-file command-line option). This is why changing env_file: flask.env back to env_file: .env SEEMED to make it work: the real reason it worked is that my ports were specified in a file with the default name .env, which docker-compose parses anyway. It doesn't matter whether that file is referenced in the docker-compose.yml file or not.
To summarize - TL;DR
- If you need to feed environment variables to docker-compose for substitution when the file is parsed (e.g. at build time), store them in a file named .env in the same directory as docker-compose.yml, or point at a custom file with the --env-file option.
- To provide env vars at container run-time, you can put them in foo.env and then specify env_file: foo.env.
- For run-time variables, another option is to specify them under environment:, if hard-coding them in docker-compose.yml is acceptable. According to the docs (not tested), those override any variables also defined via env_file.
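A minimal layout illustrating the two mechanisms side by side (names taken from the question):

```yaml
# .env       -> read by docker-compose itself for ${...} substitution below
# flask.env  -> injected into the container at run-time via env_file:
services:
  web_1:
    env_file: flask.env                    # container environment at run-time
    ports:
      - "${WEB_1_PORT}:${FLASK_RUN_PORT}"  # values substituted from .env at parse time
```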
I'm trying to share a file from one container to another. An essential detail is that the machine running my docker host does not have explicit access to the file: it pulls a git repo and doesn't know about the internal file organization of the repo (with the single exception of the docker-compose file). Hence the standard <host-path>:<container-path> mapping is not applicable; e.g. this is not possible for me: How to mount a single file in a volume.
Below is the docker-compose file, stripped down for readability. Say service_1 has the file /app/awesome.txt. We then want to mount it into service_2 as /opt/awesome.txt.
# docker-compose.yml
version: '3'
services:
  service_1:
    volumes:
      - shared_vol:/public
      # how to make service_1 map 'awesome.txt' into /public ?
  service_2:
    volumes:
      - shared_vol/awesome.txt:/opt/awesome.txt
volumes:
  shared_vol:
Working solutions that I have, but am not fond of:

- running a script/cmd within service_1 that copies the file into the shared volume: causes a race condition, as service_2 needs the file upon startup
- introducing a third service, which the other two depends_on, that does nothing but put the file in the shared volume
Any help, tips or guidance is most appreciated!
Can you just do something like this?
version: '3.5'
volumes:
  xxx:
services:
  service_1:
    ...
    volumes:
      - xxx:/app
  service_2:
    ...
    volumes:
      - xxx:/opt
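A note on why this can work: when an empty named volume is first mounted at a path that already contains files in the image, Docker copies the image's content at that path into the volume, so service_2 would see /opt/awesome.txt. If the startup race from the question still matters, a healthcheck plus depends_on can gate service_2 (a sketch; the entrypoint, process name, and file paths are hypothetical, and the long depends_on syntax needs compose file format 2.x or the Compose Spec):

```yaml
services:
  service_1:
    volumes:
      - shared_vol:/public
    # hypothetical: publish the file into the shared volume, then start the real process
    entrypoint: ["sh", "-c", "cp /app/awesome.txt /public/ && exec my-main-process"]
    healthcheck:
      test: ["CMD", "test", "-f", "/public/awesome.txt"]
      interval: 2s
  service_2:
    depends_on:
      service_1:
        condition: service_healthy
    volumes:
      - shared_vol:/opt
volumes:
  shared_vol:
```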