I have a docker-compose.yml file that works as expected when I execute docker-compose up in the directory that contains it.
My problem is that it uses the legacy (version 1) compose format, and I need to integrate its containers into another compose file. The old file has the following structure:
service1:
...
service2:
...
While the target docker-compose.yml has the following structure:
version: '2.3'
services:
service1:
...
service2:
...
So, my problem is that the old-format file relies on the links parameter, and I don't quite understand what its function is. The little documentation I can find online only says that links are replaced by networks. Good, but what do links actually do? And how can I replace them so that I don't depend on a deprecated feature?
Links are not required to enable services to communicate - by default, any service can reach any other service at that service’s name.
You can just delete the links section of the old-format docker-compose file and access the services from other containers by their names.
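As a concrete sketch (with hypothetical image names), the migrated file could look like this, with the old links entry simply dropped:

```yaml
version: '2.3'
services:
  service1:
    image: my-db      # hypothetical image name
  service2:
    image: my-app     # hypothetical image name
    # The v1 file had "links: [service1]" here; it can simply be
    # removed, since service2 can reach service1 at the hostname
    # "service1" on the default network Compose creates.
```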
You can optionally define networks to control which services are reachable from which others, by placing them in the same network. For example:
networks:
my_network_1:
driver: bridge
my_network_2:
driver: bridge
services:
service_1:
networks:
- my_network_1
service_2:
networks:
- my_network_1
service_3:
networks:
- my_network_2
Related
I recently discovered docker-compose profiles, which seem great for allowing optional local resources for testing.
However, it's not clear whether a container can be given a different environment depending on the profile. What is a sensible way (if any) to switch environment variables per service profile?
Perhaps
using extends (which appears deprecated, but may work for me anyway: Extend service in docker-compose 3)
the profile value is, or can be made, available to the container so it can switch internally
this was never intended or considered in the design (probe local connection on startup, volume mounting tricks..)
Specifically, I'm trying to pass an address and some keys via environment variables under a testing profile, but fall back to a .env file otherwise.
Normal structure
services:
webapp:
...
env_file:
- .env
Structure with test profile
services:
db-service:
image: db-image
profiles: ["test"]
...
webapp:
...
environment:
- DATABASE_HOST=db-service:1234
I can say with certainty that this was never an intended use case for profiles :)
docker-compose has no native way to pass the current profile down to a service. As a workaround you could pass the COMPOSE_PROFILES environment variable into the container, but this does not work when the profiles are specified with the --profile flag on the command line.
You would also have to handle multiple active profiles correctly yourself.
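A minimal sketch of that workaround (hypothetical image name): declaring the variable without a value forwards it from the shell that runs docker-compose.

```yaml
services:
  webapp:
    image: my-webapp     # hypothetical image name
    environment:
      # Forwards COMPOSE_PROFILES from the invoking shell into the
      # container; it stays empty when profiles are activated with
      # the --profile command-line flag instead.
      - COMPOSE_PROFILES
```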
The best solution for your specific issue would be to have different services for each profile:
services:
webapp-prod:
profiles: ["prod"]
#...
env_file:
- .env
db-service:
image: db-image
profiles: ["test"]
#...
webapp-test:
profiles: ["test"]
#...
environment:
- DATABASE_HOST=db-service:1234
The only downsides are that "the same" service gets a different name in each configuration, and that both services need an assigned profile, so neither of them starts by default; you always have to activate a profile.
It also duplicates code between the two service definitions. If you want to share the definition within the file, you can use YAML anchors and aliases:
services:
webapp-prod: &webapp
profiles: ["prod"]
#...
env_file:
- .env
webapp-test:
<<: *webapp
profiles: ["test"]
environment:
- DATABASE_HOST=db-service:1234
db-service:
image: db-image
profiles: ["test"]
#...
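One caveat if you go the anchor route: the YAML merge key (<<:) performs a shallow merge, so any key repeated in the overriding service replaces the anchored value wholesale rather than combining with it. A sketch, reusing the webapp anchor from above (.env.test is a hypothetical file name):

```yaml
webapp-test:
  <<: *webapp
  profiles: ["test"]    # replaces the anchored ["prod"] entirely
  env_file:
    - .env.test         # hypothetical; REPLACES the anchored ".env"
                        # entry rather than being appended to it
```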
Another alternative could be using multiple compose files:
# docker-compose.yml
services:
webapp:
#...
env_file:
- .env
# docker-compose.test.yml
services:
db-service:
image: db-image
#...
webapp:
environment:
- DATABASE_HOST=db-service:1234
This way you can start the production service normally, and the test instances by passing and merging both compose files:
docker-compose up # start the production version
docker-compose -f docker-compose.yml -f docker-compose.test.yml up # start the test version
Arcan's answer has a lot of good ideas.
I think another solution is to just pass a variable alongside the --profile flag in your docker-compose commands. For instance, you can set TESTING=.env.testing in the shell running docker-compose and use env_file: ${TESTING:-.env.default} in your file. This gives you a default env file for all non-profile runs while using the given file when needed.
Since I have a slightly different setup (I am adding a single variable to a container in my docker-compose), I did not test whether this works on the env_file attribute, but I think it should.
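A sketch of that idea (hypothetical image and file names), relying on Compose's variable substitution with a default value:

```yaml
services:
  webapp:
    image: my-webapp                  # hypothetical image name
    env_file:
      # Uses .env.default unless TESTING is set in the invoking shell:
      #   TESTING=.env.testing docker-compose --profile test up
      - ${TESTING:-.env.default}
```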
I'm trying to share a file from one container to another. An essential detail is that the machine running my Docker host does not have direct access to the file: it pulls a git repo and knows nothing about the repo's internal file layout (with the single exception of the docker-compose file). Hence the standard <host-path>:<container-path> mapping is not applicable; e.g. this is not possible for me: How to mount a single file in a volume
Below is the docker-compose file, stripped for increased readability. Say service_1 has the file /app/awesome.txt. We then want to mount it into service_2 as /opt/awesome.txt.
# docker-compose.yml
version: '3'
services:
service_1:
volumes:
- shared_vol:/public
# how to make service_1 map 'awesome.txt' into /public ?
service_2:
volumes:
- shared_vol/awesome.txt:/opt/awesome.txt
volumes:
shared_vol:
Working solutions that I have, but am not fond of:
running a script/cmd within service_1 that copies the file into the shared volume: this causes a race condition, as service_2 needs the file at startup
introducing a third service, which the other two depends_on, that does nothing but put the file in the shared volume
Any help, tips or guidance is most appreciated!
Can you just do something like this?
version: '3.5'
volumes:
xxx:
services:
service_1:
...
volumes:
- xxx:/app
service_2:
...
volumes:
- xxx:/opt
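For the record, this works because of a Docker behavior worth spelling out: when a container is created with an empty named volume mounted over a directory that already contains files in the image, Docker first copies those files into the volume. A slightly fleshed-out sketch (hypothetical image names):

```yaml
version: '3.5'
volumes:
  shared_vol:
services:
  service_1:
    image: producer-image   # hypothetical; its image ships /app/awesome.txt
    volumes:
      # On first use, the empty volume is pre-populated with the
      # image's /app contents, including awesome.txt
      - shared_vol:/app
  service_2:
    image: consumer-image   # hypothetical
    depends_on:
      - service_1
    volumes:
      - shared_vol:/opt     # awesome.txt shows up as /opt/awesome.txt
```

Note that this pre-population only happens while the volume is empty; if a newer image ships a changed awesome.txt, the stale copy in the volume wins until the volume is removed.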
I need to copy a php.ini file that I have (with xdebug enabled) to /bitnami/php-fpm/conf/. I am using a Bitnami docker container, and I want to use xdebug to debug the PHP code in my app, so I must enable xdebug in the php.ini file. The bitnami/php-fpm image on the repository has this changelog note:
5.5.30-0-r01 (2015-11-10)
php.ini is now exposed in the volume mounted at /bitnami/php-fpm/conf/ allowing users to change the defaults as per their requirements.
So I am trying to copy my php.ini file to /bitnami/php-fpm/conf/php.ini in the docker-compose.yml. Here is the php-fpm section of the .yml:
php-fpm:
image: bitnami/php-fpm:5.5.26-3
volumes:
- ./app:/app
- php.ini:/bitnami/php-fpm/conf
networks:
- net
volumes:
database_data:
driver: local
networks:
net:
Here is the error I get: ERROR: Named volume "php.ini:/bitnami/php-fpm/conf:rw" is used in service "php-fpm" but no declaration was found in the volumes section.
Any idea how to fix this?
I will assume that your indentation is correct, otherwise you probably wouldn't get that error. Always run your YAML through a lint tool such as http://www.yamllint.com/.
As for your volume mounts: the first one has the correct syntax, but the second one doesn't, so Docker thinks it is a named volume.
Assuming php.ini is in the root directory next to your docker-compose.yml:
volumes:
- ./app:/app
- ./php.ini:/bitnami/php-fpm/conf
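If you would rather not shadow the whole conf directory, mounting just the single file should also work (same assumption that php.ini sits next to docker-compose.yml):

```yaml
php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    # Single-file bind mount; the rest of /bitnami/php-fpm/conf
    # keeps the image's defaults
    - ./php.ini:/bitnami/php-fpm/conf/php.ini
  networks:
    - net
```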
I have a docker-compose.yml with multiple services using the same Dockerfile (django, celery, and many more). When I use docker-compose build, it builds my image multiple times.
This makes my "apply changes and restart" process costly. Is there a good way to build the Dockerfile only once? Should I build only one service and hope it updates all of them?
In my case I have 5 services using the same Dockerfile, simply with different commands, different volumes…
Using the --build flag along with docker-compose up makes Docker build the images for all services. If you want to build only once, you can name the image built by one service and, for the other services, use that newly built image instead of a build section.
version: '3'
services:
wordpress-site-1:
image: wordpress:custom
build:
context: ./wordpress
dockerfile: wordpress.dockerfile
container_name: wordpress-site-1
links:
- mysql-database
depends_on:
- mysql-database
ports:
- 8080:80
wordpress-site-2:
image: wordpress:custom
container_name: wordpress-site-2
links:
- mysql-database
depends_on:
- mysql-database
- wordpress-site-1
ports:
- 8888:80
Note: build and image are used in the first service, while only image is used in the second service.
This sample generates two WordPress containers. One is built from the Dockerfile specified in the build context, and the resulting image is tagged wordpress:custom; the other is created directly from the wordpress:custom image. The container names are different, but both services use the same image: one service builds it, the other consumes it. To be safe, remove any previous wordpress:custom image so that wordpress-site-2 does not use a stale cached image.
Edit 1: Extended the answer to show how two containers are built from the same image. The same container_name cannot be used for both.
I have this docker-compose:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
mongo:
image: mongo
volumes:
- /data/mongodb/db:/data/db
ports:
- "27017:27017"
The volumes, /data/mongodb/db:/data/db, is the first part (/data/mongodb/db) where the data is stored inside the image and the second part (/data/db) where it's stored locally?
It works on production (Ubuntu), but when I run it on my dev machine (Mac) I get:
ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied
Even if I run it as sudo. I've added the /data directory in the "File Sharing" section of the Docker application on the Mac.
Is the idea to use the same docker-compose on both production and development? How do I solve this issue?
Actually it's the other way around (HOST:CONTAINER), /data/mongodb/db is on your host machine and /data/db is in the container.
You have added /data to the shared folders of your dev machine, but you haven't created /data/mongodb/db, which is why you get a permission denied error: Docker does not have the rights to create those folders for you.
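One way around this on a Mac, assuming you don't need the data at that exact absolute path, is to use a relative host path under the project directory, which Docker Desktop shares by default:

```yaml
services:
  mongo:
    image: mongo
    volumes:
      # Relative paths are resolved against the directory containing
      # the compose file, so no extra "File Sharing" entry is needed
      - ./data/mongodb/db:/data/db
    ports:
      - "27017:27017"
```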
I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them.
Here is what the documentation says about volumes:
[...] specify a path on the host machine (HOST:CONTAINER)
So you have it the wrong way around. The first part is the path on the host, e.g. your local machine, and the second is where the volume is mounted within the container.
Regarding your last question, have a look at this article: Using Compose in production.
Since Compose file format version 3.2, you can use the long syntax of the volumes property to specify the type of volume. This lets you create a "bind" mount, which effectively maps a folder on your host into a folder in a container.
Here is an example :
version: "3.2"
services:
mongo:
container_name: mongo
image: mongo
volumes:
- type: bind
source: /data
target: /data/db
ports:
- "42421:27017"
source is the folder on your host and target is the folder in your container.
More information is available here: https://docs.docker.com/compose/compose-file/#long-syntax