my docker-compose is failing on this line. Why? - docker-compose

I need to copy a php.ini file that I have (with xdebug enabled) to /bitnami/php-fpm/conf/. I am using a bitnami docker container, and I want to use xdebug to debug the php code in my app. Therefore I must enable xdebug in the php.ini file. The /bitnami/php-fpm container on the repository had this comment added to it:
5.5.30-0-r01 (2015-11-10)
php.ini is now exposed in the volume mounted at /bitnami/php-fpm/conf/ allowing users to change the defaults as per their requirements.
So I am trying to copy my php.ini file to /bitnami/php-fpm/conf/php.ini in the docker-compose.yml. Here is the php-fpm section of the .yml:
php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    - php.ini:/bitnami/php-fpm/conf
  networks:
    - net
volumes:
  database_data:
    driver: local
networks:
  net:
Here is the error I get: ERROR: Named volume "php.ini:/bitnami/php-fpm/conf:rw" is used in service "php-fpm" but no declaration was found in the volumes section.
Any idea how to fix this?

I will assume your indentation is correct, otherwise you probably wouldn't get that error. Always run your YAML through a lint tool such as http://www.yamllint.com/.
As for your volume mounts: the first one uses the correct host-path syntax, but the second one doesn't, so Docker treats php.ini as a named volume. A host path must start with /, ./, or ~/ for Compose to recognize it as a bind mount.
Assuming php.ini is in the root directory next to your docker-compose.yml:
volumes:
  - ./app:/app
  - ./php.ini:/bitnami/php-fpm/conf/php.ini
Note that the target includes the file name: bind-mounting a single file onto an existing directory path will fail.
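Putting it together, the whole php-fpm service would look something like this (a sketch assuming php.ini sits next to docker-compose.yml, as above):

```yaml
php-fpm:
  image: bitnami/php-fpm:5.5.26-3
  volumes:
    - ./app:/app
    # host path must start with ./, /, or ~/ so Compose treats it as a bind mount
    - ./php.ini:/bitnami/php-fpm/conf/php.ini
  networks:
    - net
```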

Related

Location of bind-mount volume provided in docker-compose file of mongodb

How do I see the location of a bind-mount volume provided in the docker-compose.yml file? I have created a bind mount to persist the MongoDB data.
It is working fine, i.e. if the container is shut down the data is still present, but I want to know where this location is on my computer.
version: "3"
services:
  eswmongodb:
    image: mongo:latest
    container_name: mongocont
    ports:
      - "27017:27017"
    volumes:
      - "~/mongo/db:/data/db"
if the container is shut, the data is present
There would be no way for you to know this unless you had already found it stored on the host.
The location is the one you specified: ~/mongo/db. Open a terminal and cd to that path.
Keep in mind that on Windows, ~ is a special character and is sometimes hidden in the file explorer. If you're using it to get to your user folder, prefer environment variables instead: https://superuser.com/questions/332871/what-is-the-equivalent-of-linuxs-tilde-in-windows
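For example, on Windows you could replace the tilde with an environment variable that Compose substitutes when it parses the file (a sketch; USERPROFILE is a standard Windows variable, the rest of the service is taken from the question):

```yaml
services:
  eswmongodb:
    image: mongo:latest
    volumes:
      # ${USERPROFILE} expands to e.g. C:\Users\you, avoiding the ~ shortcut
      - "${USERPROFILE}/mongo/db:/data/db"
```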

docker-compose environment variable loading .env, but not env_file from compose file

I can't really make sense of docker-compose's behavior with regard to environment variable files.
I've defined a few variables for a simple echo-server setup with two Flask applications running.
In .env:
FLASK_RUN_PORT=5000
WEB_1_PORT=80
WEB_2_PORT=8001
Then in docker-compose.yml:
version: '3.8'

x-common-variables: &shared_envvars
  FLASK_ENV: development
  FLASK_APP: main.py
  FLASK_RUN_HOST: 0.0.0.0
  COMPOSE_PROJECT_NAME: DOCKER_ECHOES

x-volumes: &com_volumes
  - .:/project  # maps the project root into the container so we can live-reload

services:
  web_1:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_1_PORT}:${FLASK_RUN_PORT}"  # flask runs on 5000 (default); docker-compose loads .env and allows the vars to be used this way here
    volumes: *com_volumes
    environment:
      <<: *shared_envvars  # DRY: common stuff defined in a shared section above, included here via YAML merge syntax. Pretty neat.
      FLASK_NAME: web_1
  web_2:
    env_file: .env
    build:
      dockerfile: dockerfile_flask
      context: .
    ports:
      - "${WEB_2_PORT}:${FLASK_RUN_PORT}"  # flask defaults to 5000 in the container, :8001 on the host
    volumes: *com_volumes
    environment:
      <<: *shared_envvars
      FLASK_NAME: web_2
If I run docker-compose up with the above, everything works as expected.
However, if I simply rename the file .env to, say, flask.env, and accordingly change both env_file: .env entries to env_file: flask.env, then I get:
(venv) [fv#fv-hpz420workstation flask_echo_docker]$ docker-compose up
WARNING: The WEB_1_PORT variable is not set. Defaulting to a blank string.
WARNING: The FLASK_RUN_PORT variable is not set. Defaulting to a blank string.
WARNING: The WEB_2_PORT variable is not set. Defaulting to a blank string.
ERROR: The Compose file './docker-compose.yml' is invalid because:
So obviously the env vars defined in the file were not loaded in that case. I know that, according to the documentation, the environment: section, which I am using, overrides what is loaded via env_file:. But those aren't the same variables. And at any rate, if that were the issue, it shouldn't work the first way either, right?
What's wrong with the above?
Actually, the env_file is loaded AFTER the images have been built. We can verify this. With the code posted above, I can see that the file referenced by env_file has not been loaded at build time, because of the error messages I get (telling me WEB_1_PORT is not set, etc.).
But this could simply mean the file is never loaded. To rule that out, we build the image (say, by providing the missing variables at build time); we can then verify that the file is indeed loaded afterwards (by logging its values in the Flask application in my case, or a simple print to screen, etc.).
This means the content of env_file is available to the running container, but not before (such as when building the image).
If those variables are to be used within the docker-compose.yml file itself, the file MUST be named .env (newer Compose versions also accept docker-compose --env-file <file> to point at a differently named file, but .env is the default). This is why changing env_file: flask.env back to env_file: .env SEEMED to make it work - but the real reason it worked was that my ports were specified in a .env with the default name, which docker-compose parses anyway. It didn't care whether I referenced it in the docker-compose.yml file or not.
To summarize - TL;DR
If you need to feed environment variables to docker-compose for variable substitution in the file itself, store them in a .env next to the docker-compose.yml (or point at another file with docker-compose --env-file <file> in newer Compose versions). No further action is needed.
To provide env vars at container run-time, you can put them in foo.env and then specify env_file: foo.env.
For run-time variables, another option is to specify them under environment:, if hard-coding them in the docker-compose.yml is acceptable. According to the docs (not tested), those will override any variables also defined by env_file.
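The distinction above can be sketched like this (file names other than .env are illustrative):

```yaml
# .env  - read by docker-compose itself for ${...} substitution in this file:
#   WEB_1_PORT=80
#   FLASK_RUN_PORT=5000
#
# runtime.env  - hypothetical name; injected into the container at run-time:
#   FLASK_ENV=development

services:
  web_1:
    build: .
    env_file: runtime.env                  # variables become visible inside the container
    ports:
      - "${WEB_1_PORT}:${FLASK_RUN_PORT}"  # substituted from .env when the file is parsed
```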

docker-compose on Windows volume not working

I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can find for the past 3 days, I can't get the volume mapping to work, i.e. get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following via a Dockerfile
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
local sqldata
Now I've tried probably 60+ different "solutions" based on Stack Overflow and the Docker forums, but none of them work. (Note: despite the names below referencing Azure, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
    # - type: volume
    #   source: sqldata
    #   target: /mssql
    ports:
      - "1433"
I've added a volumes section but it does not help:
volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql to every possible slash variant (.. . ~ whatever), moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I'm wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the real Windows path in the volumes: option, as below:
sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying that mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
Navigate to your project directory using the new path (/c/your-project-path, not /mnt/c/your-project-path), edit your docker-compose.yml to use relative paths for volumes (like ./src instead of /c/your-project-path/src), then run:
docker-compose up
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: it simply didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the flag COMPOSE_CONVERT_WINDOWS_PATHS has to be activated. To do so:
Run the command set COMPOSE_CONVERT_WINDOWS_PATHS=1
Restart Docker
Go to Settings > Shared Drives > Reset credentials, then select the drive and apply
From the command line, remove the containers (docker container rm -f )
Re-run the containers
Hope it helps
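To make the flag stick across shells, it can also go in a .env file next to the docker-compose.yml, which Compose reads automatically (a sketch; the variable itself is documented Compose behavior):

```
# .env
COMPOSE_CONVERT_WINDOWS_PATHS=1
```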
If your Windows account credentials have been changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company's security policy.
Are you sure you really need to map to a specific host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts on both Windows and Linux; that is the beauty of Docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
By default, after installing Docker on Windows, sharing of drives is disabled, so you won't be able to use volumes that are stored on those disks.
Enabling such sharing (Docker icon in the tray > right click > Settings) fixed it for me; volumes started working fine.
Docker on Windows has some strange behavior, as Windows has limitations with credentials and with the virtual machine that Docker uses (Hyper-V or VirtualBox, depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path is:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
Importantly, you do not need to explicitly declare this in the top-level volumes: section; docker-compose up will create the mount (the same holds for docker run).
The strange thing is that it will never show up in
docker volume ls
but it is usable, with the same files visible inside the Windows directory and at the container path /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it has probably lost the credentials; reset them via Docker > Settings > Shared Drives > Reset credentials.
I hope this was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click on Settings icon on top
Click Resources
Click File Sharing
Click on (+) sign and add path of local folder in which you want to map the container volume.
It worked for me.

docker-compose volumes: When are they mounted in the container?

I had assumed that docker-compose volumes are mounted before the container's service is started. Is this the case?
I ask, since I've got a docker-compose.yml that, amongst other things, fires up a parse-server container, and mounts a host directory in the container, which contains some code the service should run (Cloud Code).
I can see the directory is mounted correctly after docker-compose up by shelling into the container; the expected files are in the expected place. But the parse-server instance doesn't trigger the code in the mounted directory (I checked it by adding some garbage; no errors).
Is it possible the volume is being mounted after the parse-server service starts?
This is my docker-compose.yml:
version: "3"
volumes:
  myappdbdata:
  myappconfigdata:
services:
  # MongoDB
  myappdb:
    image: mongo:3.0.8
    volumes:
      - myappdbdata:/data/db
      - myappconfigdata:/data/configdb
  # Parse Server
  myapp-parse-server:
    image: parseplatform/parse-server:2.7.2
    environment:
      - PARSE_SERVER_MASTER_KEY=someString
      - PARSE_SERVER_APPLICATION_ID=myapp
      - VERBOSE=1
      - PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
      - PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
      - PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
    depends_on:
      - myappdb
    ports:
      - 5000:1337
    volumes:
      - ./cloud:/parse-server/cloud
I'm not sure of the answer, as I can't find this information in the docs. But I had problems with volumes when I needed them mounted before the container was fully running; sometimes configuration files were not loaded, for example.
The only way I found to deal with it is to create a Dockerfile, COPY what you want into the image, and use that image for your container.
Hth.
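A minimal sketch of that workaround, using the image and paths from the question above (the tag and destination are assumptions):

```dockerfile
# Bake the cloud code into the image instead of bind-mounting it at run-time
FROM parseplatform/parse-server:2.7.2
COPY ./cloud /parse-server/cloud
```

Build it with docker build and reference the resulting image in docker-compose.yml instead of the stock parse-server image.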
Sadly enough, the biggest issue here was whitespace:
PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
should have been
PARSE_SERVER_CLOUD=/parse-server/cloud/
(no spaces around the =, and a different variable name). Spent 1.5 days chasing this; fml.

How to store MongoDB data with docker-compose

I have this docker-compose:
version: "2"
services:
  api:
    build: .
    ports:
      - "3007:3007"
    links:
      - mongo
  mongo:
    image: mongo
    volumes:
      - /data/mongodb/db:/data/db
    ports:
      - "27017:27017"
For the volume /data/mongodb/db:/data/db - is the first part (/data/mongodb/db) where the data is stored inside the image, and the second part (/data/db) where it's stored locally?
It works on production (Ubuntu), but when I run it on my dev machine (Mac) I get:
ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied
Even if I run it as sudo. I've added the /data directory in the "File Sharing" section of the Docker app on the Mac.
Is the idea to use the same docker-compose file on both production and development? How do I solve this issue?
Actually it's the other way around (HOST:CONTAINER): /data/mongodb/db is on your host machine and /data/db is in the container.
You have added /data to the shared folders of your dev machine, but you haven't created /data/mongodb/db; that's why you get a permission-denied error: Docker doesn't have the rights to create the missing folders.
I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them.
Here is what the documentation says about volumes:
[...] specify a path on the host machine (HOST:CONTAINER)
So you have it the wrong way around. The first part is the path on the host, e.g. your local machine, and the second is where the volume is mounted within the container.
Regarding your last question, have a look at this article: Using Compose in production.
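One way to make the same docker-compose.yml work on both the Ubuntu server and the Mac (a sketch, not taken from the answers above) is to use a named volume instead of an absolute host path, so Docker manages the storage location on each machine:

```yaml
version: "2"
services:
  mongo:
    image: mongo
    volumes:
      - mongo-data:/data/db   # named volume; Docker chooses the host location
    ports:
      - "27017:27017"
volumes:
  mongo-data:
```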
Since Compose file format version 3.2, you can use the long syntax of the volumes property to specify the type of volume. This allows you to declare a "bind" volume, which effectively links a folder on your host to a folder in the container.
Here is an example:
version: "3.2"
services:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - type: bind
        source: /data
        target: /data/db
    ports:
      - "42421:27017"
source is the folder on your host and target the folder in your container.
More information available here: https://docs.docker.com/compose/compose-file/#long-syntax