I use a devcontainer for building and debugging my .NET Core apps. I'd like to share user-secrets between my host machine and the container.
How can I do this if the location of the user secrets depends on the host machine?
Windows: %APPDATA%/Microsoft/UserSecrets
Mac/Linux: $HOME/.microsoft/usersecrets
I tried mounting both locations, but that throws an error.
.devcontainer/devcontainer.json
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "devcontainer",
  "runServices": [],
  "workspaceFolder": "/workspace",
  "forwardPorts": [
    5000,
    5001
  ],
  "remoteEnv": {
    "ASPNETCORE_ENVIRONMENT": "Development",
    "ASPNETCORE_URLS": "https://+:5001;http://+:5000"
  }
}
.devcontainer/docker-compose.yml
version: "3.7"
services:
  devcontainer:
    image: mydevcontainerimage:12345
    volumes:
      - ..:/workspace:cached
      - ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
      - ${HOME}/.microsoft/usersecrets:/root/.microsoft/usersecrets
      # Forwards the local Docker socket to the container.
      - /var/run/docker.sock:/var/run/docker.sock
    command: sleep infinity
docker-compose fails with the following error:
ERROR: Duplicate mount points: [/.microsoft/usersecrets:/root/.microsoft/usersecrets:rw, C:\Users\steven\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:rw]
A solution might be to use a named volume between the host and the container; the docker-compose file then only references that named volume. The creation of the named volume, however, will be specific to the host.
For named volume creation based on a host path, see here.
But as stated here:
The built-in local driver on Windows does not support any options.
And, for example, device=c:\a\path\to\my\folder will not work under Windows.
But, given that the Windows path %APPDATA% expands to something like c:\a\path\to\my\folder, you can rephrase it as /host_mnt/c/a/path/to/my/folder and use that for device:
docker volume create --name my_test_volume --opt type=none --opt device=/host_mnt/c/a/path/to/my/folder --opt o=bind
For other readers: this assumes that C: is made accessible in the Docker settings (Resources / File sharing).
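The docker-compose file can then mount that pre-created volume instead of a host path. A minimal sketch, reusing the image and workspace mount from the compose file in the question and assuming the my_test_volume created above:
version: "3.7"
services:
  devcontainer:
    image: mydevcontainerimage:12345
    volumes:
      - ..:/workspace:cached
      - my_test_volume:/root/.microsoft/usersecrets
      - /var/run/docker.sock:/var/run/docker.sock
    command: sleep infinity
volumes:
  my_test_volume:
    external: true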
Related
For automated testing we can't use a DB Docker container with a defined volume. Just wondering whether an "official" Postgres image with no mounted volume or volume definitions is available.
Or if someone has a Dockerfile that would create a container without any volume definitions, that would be very helpful to see or try to use.
Or is there any way to override a defined volume mount and just use a data file inside the to-be-created Docker container running the DB?
I think you are mixing up volumes and bind mounts.
https://docs.docker.com/storage/
VOLUME Dockerfile command: a volume declared with the VOLUME command in a Dockerfile is created in the Docker area on the host, which is /var/lib/docker/volumes/.
I don't think it is possible to run Docker without it having access to this directory, and it would not be advisable to restrict Docker's permissions on these directories; they are Docker's own directories after all.
So the postgres Dockerfile has this command, for example: https://github.com/docker-library/postgres/blob/master/15/bullseye/Dockerfile
line 186: VOLUME /var/lib/postgresql/data
This means that the /var/lib/postgresql/data directory inside the postgres container will be a volume stored on the host under /var/lib/docker/volumes/somerandomhashorguid....., i.e. in a directory with a random name.
You can also create a volume like this with docker run:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /etc postgres:15.1
This way the /etc directory that is inside the container will be stored on the host in the /var/lib/docker/volumes/somerandomhashorguid.....
This volume solution is needed for containers that need extra IO, because the files of the containers (that are not in volumes) are stored in the writeable layer as per the docs: "Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem."
So you could technically remove the VOLUME command from the postgres Dockerfile, rebuild the image for yourself and use that image to create your postgres container, but it would have worse performance.
Bind mounts are the type of data storage that can be mounted anywhere on the host filesystem. For example, if you run:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /tmp/mypostgresdata:/var/lib/postgresql/data postgres:15.1
(Take note of the -v flag here: there is a colon between the host and the container directory, while in the volume version of this flag above there was no host directory and no colon.)
then you would have a directory /tmp/mypostgresdata created on your Docker host machine, and the container's /var/lib/postgresql/data directory would be mapped there instead of the internal Docker volumes directory /var/lib/docker/volumes/somerandomhashorguid.....
My general rule of thumb would be to use volumes - as in /var/lib/docker/volumes/ - whenever you can, and to deviate only if really necessary. Bind mounts are not flexible enough to keep an image/container portable, and the writable container layer has worse performance than Docker volumes.
You can list Docker volumes with docker volume ls, but you will not see bind-mounted directories there. For those you will need to run docker inspect <containername>.
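A quick sketch of checking both from the CLI (mypostgres is the container name used in the examples above):
docker volume ls                                     # named volumes only
docker inspect -f '{{ json .Mounts }}' mypostgres    # shows volumes and bind mounts for the container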
"You could just copy one of the dockerfiles used by the postgres project, and remove the VOLUME statement. github.com/docker-library/postgres/blob/… –
Nick ODell
Nov 26, 2022 at 18:05"
answered Nick abow.
And that edited Dockerfile would build "almost" Docker Official Image.
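A rough sketch of that approach, assuming the 15/bullseye directory linked above still exists upstream (the image tag postgres-novolume is illustrative):
# build an "almost official" postgres image without the VOLUME declaration
git clone https://github.com/docker-library/postgres.git
cd postgres/15/bullseye
sed -i '/^VOLUME /d' Dockerfile          # drop the VOLUME /var/lib/postgresql/data line
docker build -t postgres-novolume:15 .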
I have a docker-compose.yml which I am using to deploy to a remote host from my local (Mac) machine using docker context. The compose config is as follows:
database:
  image: postgres:14.2
  restart: on-failure
  volumes:
    - ./db-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: db
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
In order to persist data, I have defined a volume ./db-data:/var/lib/postgresql/data. This db-data folder does not exist on my local machine. I want to delete this mount completely because I don't want any of the previously persisted data. I know I can define a new volume directory, but I would like to use the same directory name (db-data). I have tried the following:
docker compose down --volumes --remove-orphans - when I recreate a new container, the previously persisted data still exists
There is no folder called ./db-data in my Mac working directory.
I tried searching for /var/lib/docker on my Mac, but that directory does not exist.
The Docker for Mac app doesn't list any volumes.
There is no db-data on the remote host where the database is deployed.
Running docker inspect <container-id> listed the mount directory for the container. The mount directory resembled an absolute path on my local computer, for example /Users/<user-name>/dir/db-data. When I saw this I assumed it had to be on the local computer because of the /Users/<user-name> prefix, but this path was actually found in the root of the remote machine.
That's because on macOS the directory for Docker volumes lives inside the Docker VM.
Where is /var/lib/docker on Mac/OS X
You would have to follow that to see the volume.
Option 1: (named volume; the volume is identified by its name and stores its data under /var/lib/docker/volumes/nameofthevolume)
# create the volume in advance
$ docker volume create test_vol
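To peek inside such a pre-created volume, a throwaway container works; a sketch (alpine is just a convenient small image):
docker volume inspect test_vol                    # shows the volume's Mountpoint (inside the VM on macOS)
docker run --rm -v test_vol:/vol alpine ls -la /vol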
Option 2: (here the name of the volume, bind-test, does not matter; what matters is which local path, /home/user/test, it binds to, which is persistent. A path like /home/user/somedatafolder is more readable than /var/lib/docker/volumes/somevolumename. Cons: we have to ensure that /home/user/somedatafolder exists.)
# inside a docker-compose file
...
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test
or:
version: '3'
services:
  myservice:
    volumes:
      - ./path:/volume/path
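For completeness, a sketch of a service actually mounting the bind-test named volume from Option 2 (the service name and container path are illustrative):
version: '3'
services:
  myservice:
    volumes:
      - bind-test:/volume/path
volumes:
  bind-test:
    driver: local
    driver_opts:
      type: none
      o: bind
      device: /home/user/test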
The downside of bind mounts is that they place files that are managed by containers, with the uid/gid from the container, inside a path likely used by other users on the host, often with a different uid/gid on the host. The result is permission issues either on the host or inside the container. You need to align the uid/gid between the two to avoid this.
At the end of the day, there isn't a big difference between bind mounts and Docker named volumes.
I tend to prefer keeping persistent data from Docker services in Docker volumes. You can then use tools like docker system df -v to inspect what your application uses.
As for exporting the data, you can use docker cp
docker cp someContainer:/somedir/ .
I've been playing with Docker for the past week and think the container idea is very useful, but despite reading everything I can for the past 3 days I can't get the volume mapping to work, i.e. get docker-compose to use my existing volume.
Docker Version: 18.03.1-ce
docker-compose version 1.21.1, build 7641a569
I created a volume using the following via a Dockerfile
# Reference SQL image
FROM microsoft/mssql-server-windows-developer
# Create directory within SQL container for database files mapped to the volume
VOLUME sqldata:c:/MSSQL
and here it shows:
C:\ProgramData\Docker\volumes>docker volume ls
DRIVER              VOLUME NAME
local               sqldata
Now I've tried probably 60+ different "solutions" based on StackOverflow and Docker forums, but none of them work. (Note: despite the Azure names below, I am simply trying to get this to run locally; Azure is the next hurdle.)
Docker-compose.yaml:
version: '3.4'
services:
  ws:
    image: wsManager
    container_name: azure-wcf
    ports:
      - "80"
    depends_on:
      - db
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - \sqldata:/mssql
      # - type: volume
      #   source: sqldata
      #   target: /mssql
    ports:
      - "1433"
I've added a volumes section, but it does not help:
volumes:
  sqldata:
    external:
      name: sqldata
I changed the - \sqldata:/mssql to every possible slash combination (.. . ~ whatever), moved the yaml file to C:\ProgramData\Docker\volumes - basically any suggestion that showed up in my search results. The dbimage is a SQL Server image whose data I need to persist, but I am wondering what the magic is, as nothing I've tried works. Any help is GREATLY appreciated.
I'm running on Windows 10 Pro build 1803.
Why does this have to be so hard?
Thank you to whoever knows how to make this actually work.
The solution is to reference the true path on Windows using the volumes: option as below:
sqldb:
  image: sqlimage
  container_name: azure-db
  volumes:
    - "C:\\ProgramData\\Docker\\volumes\\sqldata:c:\\mssql"
To persist the data I used the following:
environment:
  - "sa_password=ddsql2017##"
  - "ACCEPT_EULA=Y"
  - 'attach_dbs=[{"dbName":"MyDb","dbFiles":["C:\\MSSQL\\MyDb.mdf","C:\\MSSQL\\MyDb.ldf"]}]'
Hope this helps someone else, as many of the examples I found searching both on SO and elsewhere did not work for me, and in the Docker forums there are a lot of posts saying that mounting volumes does not work on Windows.
For those who are using Ubuntu on WSL:
sudo mkdir /c
sudo mount --bind /mnt/c /c
navigate to your project folder using the new path (/c/your-project-path and not /mnt/c/your-project-path)
edit your docker-compose.yml and use a relative path for the volume (like ./src instead of /c/your-project-path/src) - see the sketch after these steps
docker-compose up
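A minimal sketch of that relative-path volume (the service name and paths are illustrative):
version: '3'
services:
  app:
    build: .
    volumes:
      - ./src:/app/src   # relative to the docker-compose.yml location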
I was struggling with a similar problem when trying to mount a volume to a specific path on my Windows machine: basically it didn't work, so every time I restarted my Docker instance I lost all my DB data.
I finally found out that this is because Docker for Windows by default cannot interpret Windows paths, so the flag COMPOSE_CONVERT_WINDOWS_PATHS has to be activated. To do so:
Run the command "set COMPOSE_CONVERT_WINDOWS_PATHS=1"
Restart Docker
Go to Settings > Shared Drives > Reset credentials and then select drive and then apply
From the command line, kill the containers (docker container rm -f <container-name>)
Re-run the containers
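As a side note, the same flag can also be kept in a .env file next to the docker-compose.yml so it survives new shells (a minimal sketch; .env is the default environment file docker-compose reads):
# .env, in the same directory as docker-compose.yml
COMPOSE_CONVERT_WINDOWS_PATHS=1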
Hope it helps
If your Windows account credentials have been changed, you also have to reset the credentials for shared drives (Settings > Shared Drives > Reset credentials).
In my case, the password was changed by my company security policy.
Are you sure you really need to map to a certain host directory? If not, my solution is to create a volume beforehand and use it in docker-compose.yaml. I use the same scripts for both Windows and Linux. That is the beauty of Docker.
Here is what I did to start both postgres and mysql:
create_db.sh (you can run it in Git Bash or a similar environment on Windows):
docker volume create --name postgres-data -d local
docker volume create --name mysql-data -d local
docker-compose up -d
docker-compose.yaml:
version: '3'
services:
  postgres:
    image: postgres:latest
    environment:
      POSTGRES_DB: datasource
      POSTGRES_USER: postgres
      POSTGRES_PASSWORD: postgres
    ports:
      - 5432:5432
    volumes:
      - postgres-data:/var/lib/postgresql/data
  mysql:
    image: mysql:latest
    environment:
      MYSQL_DATABASE: 'train'
      MYSQL_USER: 'mysql'
      MYSQL_PASSWORD: 'mysql'
      MYSQL_ROOT_PASSWORD: 'mysql'
    ports:
      - 3306:3306
    volumes:
      - mysql-data:/var/lib/mysql
volumes:
  postgres-data:
    external: true
  mysql-data:
    external: true
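To double-check that Compose picked up the pre-created volumes rather than creating new ones, something like this can be run afterwards (a sketch):
docker volume ls                                  # postgres-data and mysql-data should be listed
docker volume inspect postgres-data mysql-data    # shows their mount points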
By default it looks like after installing Docker on Windows, sharing of drives is disabled - so you won't be able to use volumes (that are stored on those drives).
Enabling such sharing, through: Docker in tray - right click - Settings, helped me; volumes started working fine.
Docker on Windows has strange behavior, as Windows has limitations with credentials and also with the virtual machine that Docker is using (Hyper-V, VirtualBox - depending on your Docker version and setup).
Basically, you are correct to map a folder in the volumes: section of your service. The path is:
version: '3.4'
services:
  db:
    image: dbimage:latest
    container_name: azure-db
    volumes:
      - c:/Temp/sqldata:/mssql
The important thing is that you do not need to explicitly create the volume in a volumes section; docker-compose up will create it (the same goes for docker run).
The strange thing is that it will never show up in
docker volume ls
but it will be usable, with the same files inside the Windows directory and inside the container path /mssql.
You can test it with:
docker run --rm -v c:/Temp/sqldata:/data alpine ls /data
or
docker run --rm -v c:/Temp:/data alpine ls /data
If it disappears, Docker has probably lost the credentials; reset them via Docker -> Settings -> Shared Drives -> Reset credentials.
I hope it was clear and covered all the aspects for you.
Launch Docker from your Windows taskbar
Click on the Settings icon on top
Click Resources
Click File Sharing
Click on the (+) sign and add the path of the local folder to which you want to map the container volume.
It worked for me.
It is quite easy to run MongoDB containerised using Docker, though each time you start a new mongodb container, you will get a new, empty database.
What should I do in order to keep the database content between container restarts? I tried to bind an external directory to the container using the -v option, but without any success.
I tried using the ehazlett/mongodb image and it worked fine.
With this image, you can easily specify where mongo stores its data with the DATA_DIR env variable. I am sure it would not be very difficult to do on your image too.
Here is what I did:
mkdir test; docker run -v `pwd`/test:/tmp/mongo -e DATA_DIR=/tmp/mongo ehazlett/mongodb
Notice the `pwd` within the -v: as the server and the client might have different paths, it is important to specify an absolute path.
With this command, I can run mongo as many times as I want and the database will always be stored in the ./test directory I just created.
When using the official Mongo Docker image (version mongo:4.2.2-bionic at the time of writing this answer) with docker-compose, you can achieve persistent data storage using the docker-compose.yml example below.
In the official mongo image, data is stored in the container under the root directory in the folder /data/db by default.
Map this folder to a folder in your local working directory called data (in this example).
Make sure ports are set and mapped, default 27017-27019:27017-27019.
Example of my docker-compose.yml:
version: "3.2"
services:
  mongodb:
    image: mongo:4.2.2-bionic
    container_name: mongodb
    restart: unless-stopped
    ports:
      - 27017-27019:27017-27019
    volumes:
      - ./data:/data/db
Run docker-compose up in the directory where the yml file is located to run the mongodb container with persistent storage. If you do not have the official image yet, it will be pulled from Docker Hub first.
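A quick way to convince yourself the data survives the container (a sketch, run from the same directory):
docker-compose up -d     # first run creates ./data next to the yml file
docker-compose down      # removes the container; ./data stays on the host
docker-compose up -d     # the new container picks up the same ./data again
ls ./data                # database files written by mongod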
Old post, but maybe someone still needs a quick and easy solution...
The easiest way I found is binding a host folder through a volume.
That way you can easily attach existing MongoDB data, and it will live on even after you destroy the container.
Create a volume that points to your folder (it may include an existing db). In my case it's done under Windows, but you can do it on any file system:
docker volume create --opt type=none --opt o=bind --opt device=d:/data/db db
Create/run a Docker container with MongoDB using that volume binding:
docker run --name mongodb -d -p 27017:27017 -v db:/data/db mongo
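To confirm where the db volume points, docker volume inspect can be used (a sketch; the Options field should show o=bind and device=d:/data/db):
docker volume inspect db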