Docker Compose USB Mount

I'm trying to mount my USB drive to a docker container but running into some issues.
The USB drive auto-mounts via /etc/fstab and is chown'd to pi:pi with permissions of 777 across the board, so there shouldn't be a true permissions issue.
Within my docker-compose.yml, I have the following:
plex:
  image: ghcr.io/linuxserver/plex:bionic
  container_name: plex
  network_mode: host
  environment:
    - PUID=1000
    - PGID=1000
    - VERSION=docker
    - UMASK_SET=022
  volumes:
    - ./volumes/plex/library:/config
    - /media:/media
  restart: unless-stopped
It's the /media:/media line that doesn't seem to be working. If I open a bash shell inside the container, I don't see any of the files I expect to see.
I'm a docker noob but have tried reading a lot of forums and haven't had much luck so any help would be greatly appreciated.
Note that I'm using "docker-compose restart" when bringing up my various containers after making changes.
Thanks.

Wow, I literally found the issue right after I posted this.
I thought "docker-compose restart" forced a re-read of the compose file, but it does not. I had to run "docker-compose up -d" for it to re-read the file, and then it worked.
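For anyone else hitting this, a minimal sketch of the difference (plex is the service name from the compose file above):

docker-compose restart plex   # stops and starts the existing container;
                              # does NOT re-read docker-compose.yml
docker-compose up -d plex     # recreates the container if its configuration
                              # changed, picking up new volume mounts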

Related

Is a new postgres database created upon every docker-compose up?

Let's say I have the following setup in my docker-compose.yml.
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
    ports:
      - 5432:5432
...
where ./database contains some SQL files that initialize the database. Here's my question: is initdb run every single time the stopped postgres container starts running again (via docker-compose up)?
Thus, is it fair to say that every time I restart my postgres container, it builds the entire database from scratch all over again?
My guess is 'yes', as the documentation says:
The default postgres user and database are created in the entrypoint with initdb.
The answer is no. When you stop your container it is not deleted, only stopped, and you can start it again later, the same way your computer does not vanish from your desk when you shut it down. :)
You can even restart it while it is running, just as you would restart your computer.
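In commands, the lifecycle looks like this (a sketch; postgres is the service name from the compose snippet above):

docker-compose stop postgres     # stops the container; nothing is deleted
docker-compose start postgres    # starts the same container again
docker-compose restart postgres  # stop + start in one step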
However, when you remove/delete the container with
docker rm -f containername
or
docker-compose rm
then it is truly deleted, the equivalent of making your computer vanish from your desk.
But even then you can still persist your data with volume mounts. For example, in your compose file, your ./database directory will not be deleted from your host machine even when you delete the containers using it. It is the equivalent of using an external USB drive with your computer: when you make your computer vanish from your desk by deleting it, you still have your USB drive with the data that was on it.
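A quick sketch of that with the compose file above:

docker-compose rm -s -f postgres   # stop and remove the container...
ls ./database                      # ...but the bind-mounted SQL init files
                                   # are still on the host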
So you can persist your database files with the same technique in a volume mount like this:
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
      - ./postgres-data/data:/var/lib/postgresql/data
    ports:
      - 5432:5432
...
This way, when you delete your container(s) and run "docker-compose up" again with the same compose file, postgres will not run its init scripts, because /var/lib/postgresql/data is already populated.
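Roughly what the official postgres entrypoint does (a simplified sketch, not the exact script; the real one keys off a marker file in the data directory):

# pseudo-shell of docker-entrypoint.sh
if [ ! -s "$PGDATA/PG_VERSION" ]; then    # data dir empty -> first run
    initdb ...                            # create the default db and user
    run /docker-entrypoint-initdb.d/*     # your SQL init files
fi                                        # otherwise: skip init entirely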
However, my computer analogy is valid only in this context. Please do not think of containers as mini computers or mini virtual machines; they are not! But that's another discussion.

How to make sure docker-compose will not remove my volume with postgres data

I am running a simple Django web app with docker-compose. I define both a web service and a db service in a docker-compose.yml file:
version: "3.8"
services:
db:
image: postgres
volumes:
- postgres_data:/var/lib/postgresql/data
environment:
- POSTGRES_DB=postgres
- POSTGRES_USER=postgres
- POSTGRES_PASSWORD=postgres
web:
build: .
command: python manage.py runserver 0.0.0.0:8000
ports:
- "8000:8000"
env_file:
- ./.env.dev
depends_on:
- db
volumes:
postgres_data:
I start the service by running:
docker-compose up -d
I can load some data into it with a custom Django command that I wrote for my app. Everything runs fine (with data) on localhost:8000.
However, when I run
docker-compose down
(so without -v) and then again
docker-compose up -d
the database is empty again. The volume was not persisted. From what I read in the docker-compose docs and in several posts here on SO, persisting the volume and reusing it when you start a new container should be the default behavior (which, if I understand correctly, you can disable with the --renew-anon-volumes flag).
However in my case, the volume is not persisted. Or maybe it is, but my data is gone.
By doing docker volume ls I can see that my volume (I'll use the name my_volume here) still exists after the docker-compose down command. However, the CreatedAt value has changed. This makes me think it's a different volume with the same name and my data is already gone, but I don't know how to confirm that.
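One way to check (a sketch; my_volume is the placeholder name from above, and the mountpoint path is the Linux default):

docker volume inspect my_volume
# compare "CreatedAt" and "Mountpoint" before and after 'down';
# on Linux you can also list the data files directly:
sudo ls /var/lib/docker/volumes/my_volume/_data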
This SO answer suggests to mount the volume on /var/lib/postgresql instead of /var/lib/postgresql/data. However, I've seen other resources (like this one) where the opposite is suggested. I've tried both, but neither option works.
Thanks for any advice.
It turns out that the Dockerfile of my app was using an entrypoint that executed the following command: python manage.py flush, which clears all data in the database. As this runs every time the app container starts, it cleared all data. It had nothing to do with docker-compose.
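A hypothetical entrypoint along the lines of what caused this (a sketch, not the actual file; the fix is simply to drop the flush line):

#!/bin/sh
# entrypoint.sh
python manage.py flush --no-input   # <-- wipes every row on each start
python manage.py migrate
exec "$@"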

docker-compose on Windows volume not working

I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can find for the past 3 days I can't get the volume mapping to work, i.e. get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following via a Dockerfile
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
DRIVER              VOLUME NAME
local               sqldata
Now I've tried probably 60+ different "solutions" based on Stack Overflow and the Docker forums, but none of them work. (Note: despite the Azure names below, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section but it does not help:
volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql line to every possible slash variation (.., ., ~, whatever), and moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I'm wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option, as below:
sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
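A quick way to verify the bind mount actually works (a sketch, assuming the container from above is running as azure-db):

docker exec azure-db cmd /c "dir c:\mssql"

The output should match the contents of C:\ProgramData\Docker\volumes\sqldata on the host.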
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
Navigate to your project using the new path (/c/your-project-path instead of /mnt/c/your-project-path).
Edit your docker-compose.yml and use a relative path for the volume (like ./src instead of /c/your-project-path/src).
docker-compose up
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: it simply didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the COMPOSE_CONVERT_WINDOWS_PATHS flag has to be set. To do so:
Run the command "set COMPOSE_CONVERT_WINDOWS_PATHS=1"
Restart Docker
Go to Settings > Shared Drives > Reset credentials, then select the drive and apply
From the command line, remove the containers (docker container rm -f <container-name>)
Re-run the containers
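To make the flag permanent, it can also go in a .env file next to docker-compose.yml (a sketch; Compose reads COMPOSE_* variables from this file automatically):

# .env
COMPOSE_CONVERT_WINDOWS_PATHS=1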
Hope it helps
If your Windows account credentials have changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company's security policy.
Are you sure you really need to map to a specific host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both Windows and Linux. That is the beauty of Docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
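Because the volumes are external, they survive container removal; a quick sketch to confirm:

docker-compose down    # removes the containers; external volumes are kept
docker volume ls       # postgres-data and mysql-data are still listed
docker-compose up -d   # new containers reattach to the same data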
By default, it looks like drive sharing is disabled after installing Docker on Windows, so you won't be able to use volumes (which are stored on disk).
Enabling that sharing, via Docker in tray > right click > Settings, fixed it for me; volumes started working fine.
Docker on Windows has some strange behavior, as Windows has limitations with credentials and with the virtual machine that Docker uses (Hyper-V or VirtualBox, depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path looks like this:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
The important thing is that you do not need to explicitly create the volume in a volumes: section; docker-compose up will create it (the same goes for docker run).
The strange thing is that it will never show up in
docker volume ls
but it will be usable, with the same files inside the Windows directory and inside the container at /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, it probably lost the credentials; reset them via Docker > Settings > Shared Drives > Reset credentials.
I hope it was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click on the Settings icon at the top
Click Resources
Click File Sharing
Click the (+) sign and add the path of the local folder to which you want to map the container volume.
It worked for me.

docker-compose volumes: When are they mounted in the container?

I had assumed that docker-compose volumes are mounted before the container's service is started. Is this the case?
I ask, since I've got a docker-compose.yml that, amongst other things, fires up a parse-server container, and mounts a host directory in the container, which contains some code the service should run (Cloud Code).
I can see the directory is mounted correctly after docker-compose up by shelling into the container; the expected files are in the expected place. But the parse-server instance doesn't trigger the code in the mounted directory (I checked it by adding some garbage; no errors).
Is it possible the volume is being mounted after the parse-server service starts?
This is my docker-compose.yml:
version: "3"
volumes:
myappdbdata:
myappconfigdata:
services:
# MongoDB
myappdb:
image: mongo:3.0.8
volumes:
- myappdbdata:/data/db
- myappconfigdata:/data/configdb
# Parse Server
myapp-parse-server:
image: parseplatform/parse-server:2.7.2
environment:
- PARSE_SERVER_MASTER_KEY=someString
- PARSE_SERVER_APPLICATION_ID=myapp
- VERBOSE=1
- PARSE_SERVER_DATABASE_URI=mongodb://myappdb:27017/dev
- PARSE_SERVER_URL=http://myapp-parse-server:1337/parse
- PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
depends_on:
- myappdb
ports:
- 5000:1337
volumes:
- ./cloud:/parse-server/cloud
I'm not sure of the answer, as I can't find this information in the docs, but I had problems with volumes when I needed them mounted before the container was really running; sometimes the configuration files were not loaded, for example.
The only way I found to deal with it is to create a Dockerfile, copy what you want into the image, and use that image for your container.
Hth.
Sadly enough, the biggest issue here was whitespace:
PARSE_SERVER_CLOUD_CODE_MAIN = /parse-server/cloud/
should have been
PARSE_SERVER_CLOUD=/parse-server/cloud/
Spent 1.5 days chasing this; fml.
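A quick way to catch this class of bug (a sketch; the service name is taken from the compose file above): dump the container's environment and look for variables that kept their stray spaces:

docker-compose exec myapp-parse-server env | grep -i parse
# a malformed entry like 'PARSE_SERVER_CLOUD_CODE_MAIN = ...'
# stands out immediately next to correctly set variables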

How to store MongoDB data with docker-compose

I have this docker-compose:
version: "2"
services:
api:
build: .
ports:
- "3007:3007"
links:
- mongo
mongo:
image: mongo
volumes:
- /data/mongodb/db:/data/db
ports:
- "27017:27017"
For the volume /data/mongodb/db:/data/db: is the first part (/data/mongodb/db) where the data is stored inside the image, and the second part (/data/db) where it's stored locally?
It works in production (Ubuntu), but when I run it on my dev machine (Mac) I get:
ERROR: for mongo Cannot start service mongo: error while creating mount source path '/data/mongodb/db': mkdir /data/mongodb: permission denied
Even if I run it with sudo. I've added the /data directory in the "File Sharing" section of the Docker app on the Mac.
Is the idea to use the same docker-compose file in both production and development? How do I solve this issue?
Actually it's the other way around (HOST:CONTAINER): /data/mongodb/db is on your host machine and /data/db is in the container.
You have added /data to the shared folders of your dev machine, but you haven't created /data/mongodb/db; that's why you get a permission denied error. Docker doesn't have the rights to create those folders.
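Creating the host path up front avoids the error (a sketch for the Mac dev machine):

sudo mkdir -p /data/mongodb/db   # create the mount source yourself
docker-compose up -d             # Docker can now bind-mount it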
I get the impression you need to learn a little bit more about the fundamentals of Docker to fully understand what you are doing. There are a lot of potential pitfalls running Docker in production, and my recommendation is to learn the basics really well so you know how to handle them.
Here is what the documentation says about volumes:
[...] specify a path on the host machine (HOST:CONTAINER)
So you have it the wrong way around. The first part is the path on the host, e.g. your local machine, and the second is where the volume is mounted within the container.
Regarding your last question, have a look at this article: Using Compose in production.
Since Docker Compose file syntax version 3.2, you can use the long syntax of the volume property to specify the type of volume. This allows you to create a "bind" volume, which effectively links a folder on your host to a folder in your container.
Here is an example:
version: "3.2"
services:
  mongo:
    container_name: mongo
    image: mongo
    volumes:
      - type: bind
        source: /data
        target: /data/db
    ports:
      - "42421:27017"
source is the folder on your host and target the folder in your container.
More information available here: https://docs.docker.com/compose/compose-file/#long-syntax
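For comparison, the short-syntax equivalent of the bind mount above would be:

volumes:
  - /data:/data/db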