Probably simple, but I can't find anything about this. My compose config file (version 3) defines two volumes to be shared with other services:
version: "3"
services:
nginx:
build: docker/nginx
ports:
- "80:80"
volumes:
- config:/etc/nginx/conf.d
- data:/var/http
networks:
- default
container_name: nginx
networks:
default:
volumes:
config:
data:
How do I attach local directories (e.g. d:/nginx/etc, d:/nginx/http) to these volumes, either in the config file or on the docker-compose up command line?
You can try replacing your volumes lines this way:
data:/var/http -> path/to/local/dir:/var/http
Moreover, assuming that you are running Windows, it should look like this:
- //d/nginx/etc:/etc/nginx/conf.d
- //d/nginx/http:/var/http
Then remove the global volumes section.
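Putting it together, the compose file might look like this (a sketch, assuming the host directories d:/nginx/etc and d:/nginx/http exist):
version: "3"
services:
  nginx:
    build: docker/nginx
    ports:
      - "80:80"
    volumes:
      # bind mounts to host directories instead of named volumes
      - //d/nginx/etc:/etc/nginx/conf.d
      - //d/nginx/http:/var/http
    networks:
      - default
    container_name: nginx
networks:
  default: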
I have a few config files that have to be mapped to files inside the container. I want to be able to change these config files on the host and have the changes reflected in the container. They are basically connection-string files that I want to swap without having to rebuild the containers. What I have in my docker-compose.yml is:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: volume
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: volume
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
I can't get this to work... I saw some examples that did not supply the type (or used "bind" instead of volume), but nothing seems to work for me.
If I build the images with docker compose up and then do docker inspect portal I can see that it has: "Mounts": []
My final goal is a docker-compose.yml with a service called portal that mounts two or more files inside the container (NOT copies, so that I can change them on my host at will) as well as a few directories. What is tripping me up is the files that have to be mapped into the container.
I think you need to change type: volume to type: bind
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: bind
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
Also, you can add read_only: true to both of those mounts if you don't want the services to be able to modify parameters.yml or portal.conf.
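For example, one of the entries in the volumes list with read_only set (a fragment of the long syntax shown above):
      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
        read_only: true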
Just mapping should do the job, as long as the files and folders on the left-hand side exist on your local machine:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - ./local/parameters.local.yml:/var/www/portal/s/config/parameters.yml
      - ./portal.conf:/etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
volumes:
  awscreds:
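Then a quick way to confirm the mounts actually took effect (assuming the container name portal from the compose file):
docker compose up -d
docker inspect portal --format '{{json .Mounts}}'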
I'm working with a simple docker-compose file (node alpine). I have three anonymous volumes; this already worked in the past, but now they are no longer created.
I deleted the folders on the host side (Windows) to see whether Docker would recreate the folders and put the files inside, but nothing happens.
version: "3.3"
services:
api:
#restart: always
build:
context: .
image: foo-foo-platform:1.1.0.0
#container_name: foo-foo-platform
env_file: docker-compose-debug.env
labels:
- "traefik.enable=false"
- "traefik.http.routers.api-gw.rule=PathPrefix(`/`)"
- "traefik.http.services.api-gw.loadbalancer.server.port=8090"
networks:
- internal
volumes:
- /mnt/logs:/mnt/logs
- /mnt/cc:/mnt/cc
ports:
- "8084:8084"
networks:
internal:
I have tried to prune volumes with docker volume prune; in any case, none of the volumes listed belongs to this compose file.
I also tried "docker-compose -f docker-compose-debug.yml up --build --force-recreate --renew-anon-volumes".
Note: the "/mnt/logs:/mnt/logs" notation works on Windows.
I can't seem to get Docker/MariaDB to use my named docker volume. The host docker volumes directory is empty, but there is a new container id right next to my named volume that looks like it has all of the MariaDB parts in it. The question is why?
My docker compose file:
version: "3.7"
#
# [Volumes]
#
volumes:
data-mysql:
#
# [Services]
#
services:
mariadb:
volumes:
- data-mysql:/var/lib/mysql
image: linuxserver/mariadb
container_name: mariadb
environment:
- PUID=1000
- GUID=1000
- MYSQL_ROOT_PASSWORD=<snipped>
- TZ=Etc/UTC
ports:
- 3306:3306
restart: unless-stopped
I've tried moving the volumes section before and after the services section, with no difference. When I do a docker-compose up, it does say it's creating the volume mariadb_data-mysql, but when I shut down Docker, there is nothing in the folder.
Thanks for any insight!
Nick
The data folder for the MariaDB image you are using (linuxserver/mariadb) is /config/databases/ and not /var/lib/mysql. Replace this in your docker-compose.yml and it will work.
Also, the order in your docker-compose.yml does not matter: docker-compose will compile it and order everything alphabetically anyway before processing.
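Applied to the compose file above, the volume mapping would become (a sketch following that suggestion):
  mariadb:
    volumes:
      - data-mysql:/config/databases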
This is my Docker compose/stack file. When I deploy on a single node, everything works fine, but when I deploy on multiple nodes I get the following error:
invalid mount config for type "bind": bind source path does not exist
version: '3'
services:
  shinyproxy:
    build: /etc/shinyproxy
    deploy:
      replicas: 3
    user: root:root
    hostname: shinyproxy
    image: shinyproxy-example
    restart: unless-stopped
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    ports:
      - 5000:5000
    networks:
      - proxynetwork
  mysql:
    image: mysql
    deploy:
      replicas: 3
    volumes:
      - /mysqldata:/var/lib/mysql
    environment:
      MYSQL_ROOT_PASSWORD: root_password
      MYSQL_DATABASE: keycloak
      MYSQL_USER: keycloak
      MYSQL_PASSWORD: password
    networks:
      - proxynetwork
  keycloak:
    deploy:
      replicas: 3
    image: jboss/keycloak
    volumes:
      - /etc/letsencrypt/live/ds-gym.de/fullchain.pem:/etc/x509/https/tls.crt
      - /etc/letsencrypt/live/ds-gym.de/privkey.pem:/etc/x509/https/tls.key
      #- /theme/govuk-social-providers/:/opt/jboss/keycloak/themes/govuk-social-providers/
    environment:
      - PROXY_ADDRESS_FORWARDING=true
      - KEYCLOAK_USER=myadmin
      - KEYCLOAK_PASSWORD=mypassword
    ports:
      - 8443:8443
    networks:
      - proxynetwork
networks:
  proxynetwork:
    external: true
I understand that the volume paths are expected to exist on every other node too, but I think that is very bad practice, and my other 2 nodes are just workers anyway. How can I solve this problem? Hopefully there is a solution that allows me to keep the volumes, since I use the same file with docker-compose build to build my images.
Can someone help me?
Thank you :-)
If possible, you could restrict this service to the node that has the required host paths, using placement constraints. However, I'm guessing that's not an option in this use case.
Host-mounted volumes should really not be used in a swarm deployment, as they would cause redundant data in the filesystems between the nodes (all files would need to be present on all nodes).
One solution would be to implement NFS volumes:
volumes:
  example:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/docker/path/to/configs"
This solution requires you to host an NFS server, though. Also keep in mind that this approach is fine for configs, but it should not be used for file systems that need to provide high-performance access.
Regarding your question about keeping your docker-compose file the same across environments: while it is technically possible, most modern projects consist of a base compose file plus an environment-specific override for volumes, networks, images, etc.
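For example, a hypothetical override file (the file name docker-compose.swarm.yml and the volume name mysqldata are assumptions) that swaps the host path for an NFS-backed volume only in the swarm environment:
# docker-compose.swarm.yml (hypothetical override)
services:
  mysql:
    volumes:
      - mysqldata:/var/lib/mysql
volumes:
  mysqldata:
    driver_opts:
      type: "nfs"
      o: "addr=<NFS_SERVER_IP>,nolock,soft,rw"
      device: ":/docker/mysqldata"
You can render the merged configuration with docker-compose -f docker-compose.yml -f docker-compose.swarm.yml config to check the result before deploying.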
In a swarm, your services will be scheduled on any of your available nodes.
I suppose your to-be-mounted directory is on the manager node, so pin the service to the manager node like so:
    deploy:
      placement:
        constraints:
          - node.role == manager
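Applied to, say, the keycloak service from the question, which needs the certificate files that exist only on the manager (a sketch):
  keycloak:
    image: jboss/keycloak
    deploy:
      replicas: 3
      placement:
        constraints:
          - node.role == manager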
I have a Docker Compose file with some services. One of them is the database, whose volumes I would like to back up so I can migrate all the data to another machine.
My docker-compose.yml looks like this
version: '3'
services:
  service1:
    ...
  serviceN:
  db:
    image: postgres:11
    ports:
      - 5432:5432
    networks:
      - postgresnet
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
    volumes:
      - postgresql:/var/lib/postgresql
      - postgresql_data:/var/lib/postgresql/data
volumes:
  postgresql:
  postgresql_data:
networks:
  postgresnet:
    driver: bridge
How could I back up the data in the postgresql and postgresql_data volumes and migrate them to another machine?
The easiest way is to share external volumes between your docker-compose files.
First, create the volume:
docker volume create shared-data
Next, modify your yml:
...
volumes:
  postgresql:
  postgresql_data:
    external:
      name: shared-data
...
Now your postgresql_data is mapped to the external volume, and everything you save there will be visible from outside. Just create the same configuration in another docker-compose.yml and enjoy.
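If you also need to move the existing data to the other machine, a common approach (a sketch; the archive name is a placeholder) is to archive the volume with a throwaway container, copy the archive over, and restore it on the target:
# on the source machine: dump the volume into a tar archive
docker run --rm -v shared-data:/data -v "$(pwd)":/backup alpine \
  tar czf /backup/shared-data.tar.gz -C /data .
# copy shared-data.tar.gz to the other machine (scp, rsync, ...), then restore
docker volume create shared-data
docker run --rm -v shared-data:/data -v "$(pwd)":/backup alpine \
  tar xzf /backup/shared-data.tar.gz -C /data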