version: '3.8'
services:
  foo:
    ...
    networks:
      - $FOO_NETWORK
networks:
  foo_network:
I am unable to use $FOO_NETWORK under the top-level networks key, i.e. it only accepts a literal value, not an ENV variable. How do I make the network name come from an environment variable instead?
Environment variables are for values, but you want to use one as a key. As far as I know this isn't supported yet, and I'm not sure it ever will be.
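To illustrate, here is a minimal sketch of where substitution does and does not apply (docker-compose interpolates values only, so a variable in a key position is taken literally):

services:
  foo:
    networks:
      - ${FOO_NETWORK}   # value position: substituted from the environment
networks:
  $FOO_NETWORK: {}       # key position: NOT substituted, treated as a literal name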
One way you can customise this is to use multiple docker-compose files. Create three files:
one.yml:
version: "3.0"
services:
  test:
    image: nginx
two.yml:
version: "3.0"
services:
  test:
    networks:
      foo: {}
networks:
  foo: {}
three.yml:
version: "3.0"
services:
  test:
    networks:
      bar: {}
networks:
  bar: {}
Now if you run it like this:
docker-compose -f one.yml -f two.yml up
or like this:
docker-compose -f one.yml -f three.yml up
You'll see that the files are merged:
Creating network "network_foo" with the default driver
Recreating network_test_1 ... done
...
Creating network "network_bar" with the default driver
Recreating network_test_1 ... done
You can even spin up all three at once:
docker-compose -f one.yml -f two.yml -f three.yml up
Creating network "network_foo" with the default driver
Creating network "network_bar" with the default driver
Creating network_test_1 ... done
Attaching to network_test_1
test_1 | /docker-entrypoint.sh: /docker-entrypoint.d/ is not empty, will attempt to perform configuration
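If you just want to inspect the merged result without starting anything, docker-compose can print the effective configuration:

docker-compose -f one.yml -f two.yml config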
Check out the documentation for more: https://docs.docker.com/compose/extends/
There is also another way, which actually uses a variable to select a network: using an existing (external) network. You'll need an .env file for this:
network=my_network
and in the compose file you reference it like this:
version: "3.8"
services:
test:
networks:
mynet: {}
networks:
mynet:
external: true
name: $network
As you can see, there is an option to provide a name when using an external network. A network with that name must already exist when you start your containers, or you'll get an error. You can use a separate compose file to create the networks on a node, or just create them with the CLI. Note that the compose version changed: this feature isn't supported in "3.0".
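For example, creating the network up front with the CLI, matching the .env file above:

docker network create my_network
docker-compose up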
I'm trying to define a volume in ddev like this:
Filename: docker-compose.salesforce.yaml
Contents:
version: '3.6'
services:
  web:
    volumes:
      - /Users/dmgig/JWT:/home/dmgig/JWT:rw
But as you can see it uses my user name and I'd like to make it work for anyone.
I've tried to find a home directory variable, but can't find one.
I assumed something like:
version: '3.6'
services:
  web:
    volumes:
      - $HOME/JWT:$HOME_DDEV/JWT:rw
How would I write this so that it uses the correct paths for the host and the ddev machine?
You're trying to mount ~/JWT from the host into ~/JWT inside the container?
The environment variables are evaluated on the host by docker-compose, which doesn't know anything about what's inside the container, so you'll need to use absolute paths for inside the container, as you already did.
I think this might work for you on macOS (on Linux, the host-side home would typically be /home/$USER instead):
version: '3.6'
services:
  web:
    volumes:
      - /Users/$USER/JWT:/home/$USER/JWT:rw
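Since docker-compose substitutes variables anywhere in the string before parsing the mapping, a slightly more portable sketch uses $HOME for the host side (hypothetical: assumes the container user's home is /home/$USER):

version: '3.6'
services:
  web:
    volumes:
      - ${HOME}/JWT:/home/${USER}/JWT:rw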
I have a docker-compose.yml file that works as expected when I execute docker-compose up in its parent directory.
My problem is that it's an old version compose file, and I need to integrate its containers into another compose file. The old file has the following structure:
service1:
  ...
service2:
  ...
While the target docker-compose.yml has the following structure:
version: '2.3'
services:
  service1:
    ...
  service2:
    ...
So, my problem is that the old-version file relies on the links parameter. I don't quite understand its function. I've found little documentation online, and all the docs say is that links have been replaced by networks. Good, but what do links actually do? How can I replace them, so I don't rely on (soon-to-be) deprecated features?
Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
You can just delete the links section of the old-version docker-compose file and access the services from other containers by their names.
You can optionally define networks in order to control which services are available to each other, by placing them in the same network. E.g.:
networks:
  my_network_1:
    driver: bridge
  my_network_2:
    driver: bridge

services:
  service_1:
    networks:
      - my_network_1
  service_2:
    networks:
      - my_network_1
  service_3:
    networks:
      - my_network_2
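With this layout, service_1 and service_2 can reach each other by service name, while service_3 can reach neither. A quick check (hypothetical: assumes curl is installed in the image and service_2 listens on port 80):

docker-compose exec service_1 curl http://service_2:80/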
I recently discovered docker-compose profiles, which seem great for allowing optional local resources for testing.
However, it's not clear whether it's possible to give a container a different environment depending on the profile. What is a sensible way (if any) to switch environment variables by service profile?
Perhaps:
- using extends (which appears to be deprecated, but may work for me anyway; see "Extend service in docker-compose 3")
- the profile value is, or can be made, available to the container so it can switch internally
- this was never intended or considered in the design (probe the local connection on startup, volume-mounting tricks, ...)
Specifically, I'm trying to prefer an address and some keys via environment variables under a testing profile, but fall back to a .env file otherwise.
Normal structure:

services:
  webapp:
    ...
    env_file:
      - .env
Structure with test profile:

services:
  db-service:
    image: db-image
    profiles: ["test"]
    ...
  webapp:
    ...
    environment:
      - DATABASE_HOST=db-service:1234
I can say with certainty that this was never an intended use case for profiles :)
docker-compose has no native way to pass the current profile down to a service. As a workaround you could pass the COMPOSE_PROFILES environment variable into the container, as sketched below. But this does not work when specifying the profiles with the --profile flag on the command line.
Also, you would have to manually handle multiple active profiles correctly.
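A minimal sketch of that workaround (it only takes effect when profiles are selected via the COMPOSE_PROFILES variable, not via --profile):

services:
  webapp:
    environment:
      # forwards COMPOSE_PROFILES from the host shell into the container;
      # it stays unset if the profiles were chosen with --profile instead
      - COMPOSE_PROFILES

Started with, for example:

COMPOSE_PROFILES=test docker-compose up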
The best solution for your specific issue would be to have different services for each profile:
services:
  webapp-prod:
    profiles: ["prod"]
    #...
    env_file:
      - .env
  db-service:
    image: db-image
    profiles: ["test"]
    #...
  webapp-test:
    profiles: ["test"]
    #...
    environment:
      - DATABASE_HOST=db-service:1234
The only downside is that "the same" service now has different names for its different configurations, and both need profile(s) assigned, so neither of them starts by default, i.e. you always have to activate a profile.
It also duplicates some code between the two service definitions. If you want to share the definition within the file, you can use YAML anchors and aliases:
services:
  webapp-prod: &webapp
    profiles: ["prod"]
    #...
    env_file:
      - .env
  webapp-test:
    <<: *webapp
    profiles: ["test"]
    environment:
      - DATABASE_HOST=db-service:1234
  db-service:
    image: db-image
    profiles: ["test"]
    #...
Another alternative could be using multiple compose files:
# docker-compose.yml
services:
  webapp:
    #...
    env_file:
      - .env

# docker-compose.test.yml
services:
  db-service:
    image: db-image
    #...
  webapp:
    environment:
      - DATABASE_HOST=db-service:1234
This way you can start the production version normally, and the test instances by passing and merging both compose files:

docker-compose up # start the production version
docker-compose -f docker-compose.yml -f docker-compose.test.yml up # start the test version
Arcan's answer has a lot of good ideas.
I think another solution is to just pass a variable alongside the --profile flag in your docker commands. You can, for instance, set TESTING=.env.testing in the environment of your docker-compose command and use env_file: ${TESTING:-.env.default} in your file (sketched below). This gives you a default env file for any non-profile runs and uses the given file when needed.
Since I have a slightly different setup, where I add a single variable to a container in my docker-compose, I did not test whether this works with the env_file: attribute, but I think it should.
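A minimal sketch of that idea (the TESTING variable name and the .env.* filenames are just illustrative):

services:
  webapp:
    env_file:
      - ${TESTING:-.env.default}

TESTING=.env.testing docker-compose --profile test up   # test run
docker-compose up                                       # default run, uses .env.default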
I'm trying to share a file from one container to another. An essential detail is that the machine running my docker host does not have explicit access to the file: it pulls a git repo and doesn't know about the internal file organization of the repo (with the single exception of the docker-compose file). Hence, the standard <host-path>:<container-path> mapping is not applicable; e.g. "How to mount a single file in a volume" is not possible for me.
Below is the docker-compose file, stripped down for readability. Say service_1 has the file /app/awesome.txt; we want to mount it into service_2 as /opt/awesome.txt.
# docker-compose.yml
version: '3'
services:
  service_1:
    volumes:
      - shared_vol:/public
      # how to make service_1 map 'awesome.txt' into /public ?
  service_2:
    volumes:
      - shared_vol/awesome.txt:/opt/awesome.txt
volumes:
  shared_vol:
Working solutions that I have, but am not fond of:
- running a script/cmd within service_1 that copies the file into the shared volume: this causes a race condition, as service_2 needs the file upon startup
- introducing a third service, which the other two depends_on, that does nothing but put the file in the shared volume
Any help, tips or guidance is most appreciated!
Can you just do something like this?
version: '3.5'
volumes:
  xxx:
services:
  service_1:
    ...
    volumes:
      - xxx:/app
  service_2:
    ...
    volumes:
      - xxx:/opt
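If this works, it's because of a Docker behavior worth spelling out: when an empty named volume is mounted for the first time at a path where the image already has content (service_1's /app here, containing awesome.txt), Docker copies that content into the volume, so service_2 then sees it under /opt. Note the copy only happens while the volume is empty, so later image changes won't propagate. A quick check, using the names above:

docker-compose up -d
docker-compose exec service_2 ls /opt   # awesome.txt should be listed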
I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can for the past 3 days, I can't get the volume mapping to work or get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following in a Dockerfile:
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:

C:\ProgramData\Docker\volumes>docker volume ls
DRIVER              VOLUME NAME
local               sqldata
Now I've tried probably 60+ different "solutions" based on StackOverflow and the Docker forums, but none of them work. (Note: despite the Azure names below, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:

version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section, but it does not help:

volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql line to every possible slash variant (.., ., ~, whatever), and moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I am wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option, as below:

sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:

environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
Hope this helps someone else, as many of the examples I found on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying that mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:

sudo mkdir /c
sudo mount --bind /mnt/c /c

Then navigate to your project using the new path (/c/your-project-path, not /mnt/c/your-project-path), edit your docker-compose.yml to use relative paths for volumes (like ./src instead of /c/your-project-path/src), and run docker-compose up.
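For instance, a minimal sketch (the service name and paths are illustrative):

services:
  web:
    volumes:
      - ./src:/app/src   # relative to the compose file, which now resolves under /c/...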
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: it simply didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the COMPOSE_CONVERT_WINDOWS_PATHS flag has to be activated. To do so:
Run the command: set COMPOSE_CONVERT_WINDOWS_PATHS=1
Restart Docker
Go to Settings > Shared Drives > Reset credentials, then select the drive and apply
From the command line, kill the containers (docker container rm -f ...)
Re-run the containers
Hope it helps
If your Windows account credentials have been changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company security policy.
Are you sure you really need to map to a specific host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both Windows and Linux. That is the beauty of Docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in git bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:

version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
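After running create_db.sh, you can confirm the volumes exist and that they survive container removal:

docker volume ls      # should list postgres-data and mysql-data
docker-compose down   # removes the containers; the pre-created volumes remain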
By default, it looks like drive sharing is disabled after installing Docker on Windows, so you won't be able to use volumes that are stored on those drives.
Enabling sharing (Docker tray icon > right-click > Settings) fixed it for me; volumes started working fine.
Docker on Windows has some strange behavior, as Windows has limitations with credentials and with the virtual machine that Docker uses (Hyper-V or VirtualBox, depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path looks like this:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
Importantly, you do not need to explicitly create the volume in a volumes: section; docker-compose up will create it (the same goes for docker run).
The strange thing is that it will never show up in docker volume ls, but it is usable, with the same files visible inside the Windows directory and inside the container path /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it probably lost the credentials; reset them via Docker > Settings > Shared Drives > Reset credentials.
I hope it was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click the Settings icon at the top
Click Resources
Click File Sharing
Click the (+) sign and add the path of the local folder in which you want to map the container volume.
It worked for me.