I am using a docker-compose file generated by docker-app:
docker-app render | docker-compose -f - up
The docker-app file (shown below) works as expected, but I am not able to use volumes.
With plain docker run I pass the -v parameter like this:
-v /my/custom3399:/etc/mysql/conf.d
-v /storage/mysql/datadir3399:/var/lib/mysql
How do I declare these volumes in the compose file?
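For reference (this sketch is mine, not part of the generated file), the two docker run mounts above would look roughly like this in compose short syntax:
services:
  mysql:
    image: shantanuo/mysql:5.7
    volumes:
      - /my/custom3399:/etc/mysql/conf.d
      - /storage/mysql/datadir3399:/var/lib/mysql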
# vi hello.dockerapp
# This section contains your application metadata.
# Version of the application
version: 0.1.0
# Name of the application
name: hello
# A short description of the application
description:
# Namespace to use when pushing to a registry. This is typically your Hub username.
#namespace: myHubUsername
# List of application maintainers with name and email for each
maintainers:
  - name: root
    email:
# Specify false here if your application doesn't support Swarm or Kubernetes
targets:
  swarm: false
  kubernetes: false
---
# This section contains the Compose file that describes your application services.
version: "3.5"
services:
  mysql:
    image: ${mysql.image.version}
    environment:
      MYSQL_ROOT_PASSWORD: india${mysql.port}
    ports:
      - "${mysql.port}:3306"
---
# This section contains the default values for your application settings.
mysql.image.version: shantanuo/mysql:5.7
mysql.port: 3391
Update:
The setup mentioned above works well, but once I add volumes, I get an error:
version: "3.5"
services:
mysql:
image: ${mysql.image.version}
environment:
MYSQL_ROOT_PASSWORD: india${mysql.port}
ports:
- "${mysql.port}:3306"
volumes:
- type: volume
source: mysql_data
target: /var/lib/mysql
volumes:
mysql_data:
external: true
And the error is:
docker-app render | docker-compose -f - up
Recreating e5c833e2187d_hashi_mysql_1 ... error
ERROR: for e5c833e2187d_hashi_mysql_1 Cannot create container for service mysql: Duplicate mount point: /var/lib/mysql
ERROR: for mysql Cannot create container for service mysql: Duplicate mount point: /var/lib/mysql
ERROR: Encountered errors while bringing up the project.
As mentioned in the comment, the rendered output is as follows:
# /usr/local/bin/docker-app render
version: "3.5"
services:
  mysql:
    environment:
      MYSQL_ROOT_PASSWORD: india3391
    image: shantanuo/mysql:5.7
    ports:
      - mode: ingress
        target: 3306
        published: 3391
        protocol: tcp
    volumes:
      - type: volume
        source: mysql_data
        target: /var/lib/mysql
volumes:
  mysql_data:
    name: mysql_data
    external: true
This issue was resolved once I changed the directory name:
# cd ..
# mv hashi/ hashi123/
# cd hashi123
I am not sure why this worked, but since I am now able to start the server stack, I am posting it as an answer.
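A likely explanation (my own reading, not confirmed in the thread): docker-compose derives its project name from the directory name, so renaming hashi/ to hashi123/ made compose create a brand-new container instead of recreating the old hashi_mysql_1, whose existing /var/lib/mysql mount clashed with the newly declared volume. If that is the cause, removing the stale stack or overriding the project name should work without renaming the directory, for example:
# remove the old stack (and its anonymous volumes), then bring it up again
docker-app render | docker-compose -f - down --volumes
docker-app render | docker-compose -f - up
# or keep the directory name and pick an arbitrary different project name with -p
docker-app render | docker-compose -f - -p hashinew up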
Related
I have a few config files that have to be mapped to files inside the container. I want to be able to change these config files on the host and have that reflected in the container. These are basically connection-string files that I want to swap without having to rebuild the containers. What I have in my docker-compose.yml is:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: volume
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: volume
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
I fail to get this to work. I saw some examples where they did not supply the type (or used "bind" instead of "volume"), but nothing seems to work for me.
If I build the images with docker compose up and then run docker inspect portal, I can see that it has: "Mounts": []
My final plan is to have a docker-compose.yml with a service called portal that mounts two or more files inside the container (NOT copies, so that I can change them on my host at will), as well as a few directories. What is kicking me in the face is the files that have to be mapped into the container.
I think you need to change type: volume to type: bind, since you are mounting host paths rather than named volumes:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - type: bind
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
      - type: bind
        source: ./portal.conf
        target: /etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
Also, you can add read_only: true to both of those mounts if you don't want the services to be able to modify parameters.yml or portal.conf.
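For illustration, one of the entries from the list above would then look like this (same paths as in the question):
      - type: bind
        source: ./local/parameters.local.yml
        target: /var/www/portal/s/config/parameters.yml
        read_only: true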
Just mapping should do the job, as long as the files and folders on the left-hand side exist on your local machine:
services:
  portal:
    container_name: portal
    image: portal
    build:
      context: .
    extra_hosts:
      - "host.docker.internal:host-gateway"
    volumes:
      - ./:/var/www/portal
      - ./local/parameters.local.yml:/var/www/portal/s/config/parameters.yml
      - ./portal.conf:/etc/apache2/sites-available/portal.conf
      - awscreds:/root/.aws:ro
volumes:
  awscreds:
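Either way, once the containers are recreated you can check that the mounts actually made it in (instead of the empty "Mounts": [] from the question) with something like:
docker compose up -d --force-recreate
docker inspect portal --format '{{json .Mounts}}'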
I'm trying to run the Mockoon Docker CLI using docker compose for my mock API, and I have the JSON exported to my directory:
version: "3"
services:
mockoon:
image: mockoon/cli:latest
command: mockoon-cli start
networks:
demo:
ports:
- "8001:8001"
volumes:
- type: bind
source: ./mock.json
target: /www
networks:
demo:
When I run docker-compose up, the container comes up for a few seconds and then shuts down with the following log:
› Error: Missing required flag:
› -d, --data DATA Path(s) or URL(s) to your Mockoon data file(s)
› See more help with --help
Are there any additional settings I should make, or have I missed something in my config?
I have checked the official documentation: you need to specify the --data directory on the docker-compose command line.
So the compose file would look like this:
version: "3"
services:
mockoon:
image: mockoon/cli:latest
command: mockoon-cli start --data <directory> # this is where the change takes place
networks:
demo:
ports:
- "8001:8001"
volumes:
- type: bind
source: ./mock.json
target: /www
networks:
demo:
mockoon cli
The "data" after the --data flag is the alias to the file name, not the folder name. Hence, it should be:
version: "3"
services:
chicargo-mockoon-mock:
platform: linux/amd64
image: mockoon/cli:latest
ports:
- "8001:8001"
volumes:
- type: bind
source: ./your-file-name.json
target: /data
command: mockoon-cli start --data data
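To sanity-check the result (assuming the environment in your Mockoon JSON file listens on port 8001 and defines some route; /users below is just a placeholder):
docker-compose up -d
curl http://localhost:8001/users
docker-compose logs chicargo-mockoon-mock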
Hey guys, I just installed Rundeck 3.4.9 via docker-compose, and after that I can't see the list of installed plugins; the page keeps refreshing endlessly. Same with the project list and the user list.
Here is my docker-compose file:
version: '3'
services:
  rundeck:
    image: rundeck/rundeck:3.4.9
    tty: true
    volumes:
      - data:/home/rundeck/server/data
    ports:
      - 4440:4440
volumes:
  data:
The browser just shows the endless refresh, and the docker-compose logs show no errors, so how can I fix this issue?
In your docker-compose definition, add RUNDECK_GRAILS_URL with the proxy-facing URL for your Rundeck instance and RUNDECK_SERVER_FORWARDED set to a true value.
I made a basic example using NGINX as a reverse proxy.
version: "3"
services:
rundeck:
image: rundeck/rundeck:3.4.9
ports:
- 4440:4440
environment:
RUNDECK_GRAILS_URL: http://your_proxy_exit_url_for_rundeck
RUNDECK_SERVER_FORWARDED: "true"
nginx:
image: nginx:alpine
volumes:
- ./config/nginx.conf:/etc/nginx/conf.d/default.conf:ro
ports:
- 80:80
(NGINX conf file referenced in the volumes section):
server {
    listen 80 default_server;
    server_name rundeck-cl;

    # default rundeck location is the root
    location / {
        # took from default rundeck conf
        proxy_pass http://rundeck:4440;
    }
}
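If Rundeck still generates wrong redirect URLs behind the proxy, you may also need to forward the standard headers from NGINX (not part of the original answer, but these are plain NGINX directives):
    location / {
        proxy_pass http://rundeck:4440;
        # pass the original host and scheme so RUNDECK_SERVER_FORWARDED can take effect
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
    }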
Take a look at this.
I am trying to get the following docker-compose.yaml to run on my QNAP Container Station.
The following part is working, but after the "restart: unless-stopped" the mess begins.
version: '3'
services:
  pihole:
    container_name: pihole
    image: pihole/pihole:latest
    ports:
      - "65003:53/tcp"
      - "65002:53/udp"
      - "65001:67/udp"
      - "65000:80/tcp"
    environment:
      TZ: 'Berlin'
      WEBPASSWORD: 'password'
    # Volumes store your data between container upgrades
    volumes:
      - './etc-pihole/:/etc/pihole/'
      - './etc-dnsmasq.d/:/etc/dnsmasq.d/'
    # Recommended but not required (DHCP needs NET_ADMIN)
    # https://github.com/pi-hole/docker-pi-hole#note-on-capabilities
    cap_add:
      - NET_ADMIN
    restart: unless-stopped
    qnet_dhcp:
      image: alpine
      command: ifconfig eth0
      networks:
        - qnet-dhcp
  qnet_static:
    image: alpine
    command: ifconfig eth0
    networks:
      qnet-static:
        ipv4_address: 192.168.178.2
networks:
  qnet-dhcp:
    driver: qnet
    ipam:
      driver: qnet
      options:
        iface: "eth0"
  qnet-static:
    driver: qnet
    ipam:
      driver: qnet
      options:
        iface: "eth0"
      config:
        - subnet: 192.168.178.0/24
          gateway: 192.168.178.1
I got the network information directly from QNAP (https://qnap-dev.github.io/container-station-api/qnet.html) and tried to verify the file with http://www.yamllint.com/, but the two parts do not work together.
The validation error is: line 24: not valid.
One of your service names is not correctly indented.
Additionally, you have provided an invalid configuration for ipam for the version 3 file. You can only provide options in version 2 according to the docs.
I will truncate the file for brevity.
# you need file version 2 in order to use options in ipam
# the file you copied it from is also using version 2
version: '2'
services:
  pihole:
    ...
  # this one (qnet_dhcp) is the name of another service.
  # In your original code the indentation is incorrect.
  # It should be aligned with the other services.
  qnet_dhcp:
    ...
  qnet_static:
    ...
networks:
  qnet-dhcp:
    ...
    ipam:
      ...
      # as mentioned above,
      # this is only valid in version 2
      options:
        ...
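Concretely, with version: '2' the qnet_dhcp service just needs to sit at the same indentation level as pihole and qnet_static (a sketch using the same values as in the question):
services:
  pihole:
    # ... unchanged ...
  qnet_dhcp:
    image: alpine
    command: ifconfig eth0
    networks:
      - qnet-dhcp
  qnet_static:
    image: alpine
    command: ifconfig eth0
    networks:
      qnet-static:
        ipv4_address: 192.168.178.2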
I am using compose file version 3 and below is my code. I tried to add the variable COMPOSE_CONVERT_WINDOWS_PATHS: 1 under environment, but I still get the error:
ERROR: for db-on-docker-ms_mysql-dev_1 Cannot create container for service mysql-dev: invalid volume specification: '/c/Dockerfile/db-on-docker-ms:/var/lib/mysql under volumes:rw'
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - /c/Dockerfile/db-on-docker-ms:/var/lib/mysql
My Docker Version: 18.09.2
I think you either need to set the COMPOSE_CONVERT_WINDOWS_PATHS environment variable from your command line:
$ export COMPOSE_CONVERT_WINDOWS_PATHS=1
Then change the volumes configuration
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - c:\Dockerfile\db-on-docker-ms:/var/lib/mysql
Run docker compose
$ docker-compose up
Or you can attempt to set the volumes like this
version: '3'
services:
  mysql-dev:
    image: mysql:latest
    environment:
      MYSQL_ROOT_PASSWORD: password
      MYSQL_DATABASE: blogapp
    ports:
      - "3308:3306"
    volumes:
      - //c/Dockerfile/db-on-docker-ms:/var/lib/mysql
And run docker compose
$ docker-compose up
Thanks to Misantorp's answer first!
I finally figured out how to do it with Windows containers.
The volumes path should be:
volumes:
  - C:\Dockerfile\db-on-docker-ms:/var/lib/mysql
Run this in PowerShell:
$env:COMPOSE_CONVERT_WINDOWS_PATHS=0
then run:
docker-compose up