I have a docker-compose.yml which I am using to deploy to a remote host from my local (Mac) machine using docker context. The compose config is as follows:
database:
  image: postgres:14.2
  restart: on-failure
  volumes:
    - ./db-data:/var/lib/postgresql/data
  environment:
    POSTGRES_DB: db
    POSTGRES_USER: postgres
    POSTGRES_PASSWORD: postgres
In order to persist data, I have defined a bind mount ./db-data:/var/lib/postgresql/data. This db-data folder does not exist on my local machine. I want to delete this mount completely because I don't want any of the previously persisted data. I know I can define a new volume directory, but I would like to keep the same directory name (db-data). I have tried the following:
docker compose down --volumes --remove-orphans - when I recreate the container, the previously persisted data still exists
There is no folder called ./db-data in my Mac working directory.
I tried searching /var/lib/docker on my Mac, but that directory does not exist.
The Docker for Mac app doesn't list any volumes.
There is no db-data folder on the remote host where the database is deployed.
Running docker inspect <container-id> listed the mount directory for the container. The mount path resembled an absolute path on my local computer, for example /Users/<user-name>/dir/db-data. Because of the /Users/<user-name> prefix I assumed it had to be on my local machine, but the path was actually found in the root of the remote machine.
That's because the directory for Docker volumes lives inside the Docker VM on macOS.
Where is /var/lib/docker on Mac/OS X
You would have to follow this to see the volume
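For example, one commonly used way (from the linked answer) to get a shell inside the Docker Desktop VM is the community nsenter1 image; a sketch, not an official Docker tool:

docker run -it --rm --privileged --pid=host justincormack/nsenter1
# inside the VM, volume data lives under /var/lib/docker/volumes
ls /var/lib/docker/volumes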
Let's say I have the following setup in my docker-compose.yml.
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
    ports:
      - 5432:5432
...
where ./database contains some SQL files that initialize the database. Here's my question: is initdb run every single time the stopped postgres container starts running again (via $ docker-compose up)?
Thus, is it fair to say that every time I restart my postgres container, it builds the entire database from scratch all over again?
My guess is 'yes', as the documentation says:
The default postgres user and database are created in the entrypoint with initdb.
The answer is no. When you stop your container, it is not deleted, only stopped; you can start it again when it is stopped, the same way your computer does not vanish from your desk when you shut it down :)
You can even restart it while it is running, just as you would with your computer.
However when you remove/delete the container with
docker rm -f containername
or
docker-compose rm
then it is truly deleted, equivalent of making your computer vanish from your desk.
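To make the distinction concrete, a quick sketch of the lifecycle using the service from the question's compose file:

docker-compose stop postgres    # container is stopped, its filesystem is kept
docker-compose start postgres   # the same container resumes; initdb does not run again
docker-compose rm -f postgres   # the stopped container is truly deleted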
But even then you can still persist your data with volume mounts. For example, in your compose file the ./database directory will not be deleted from your host machine even when you delete the containers using it. It is the equivalent of using an external USB drive with your computer: when you make your computer vanish from your desk by deleting it, you still have the USB drive with the data that was on it.
So you can persist your database files with the same technique in a volume mount like this:
services:
  postgres:
    image: postgres:11.6
    env_file:
      - .local.env
    volumes:
      - ./database/:/docker-entrypoint-initdb.d
      - ./postgres-data/data:/var/lib/postgresql/data
    ports:
      - 5432:5432
...
This way, when you delete your container(s) and run docker-compose up again for the same compose file, postgres will not run its init scripts, because the /var/lib/postgresql/data directory is already populated.
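You can see this decision in the logs; a quick check, assuming the service is named postgres as above:

docker-compose up -d
docker-compose logs postgres | grep -i "skipping initialization"
# with a populated data directory you should see something like:
# "PostgreSQL Database directory appears to contain a database; Skipping initialization"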
However, my computer analogy is valid only in this context; please do not think of containers as mini computers or mini virtual machines, because they are not! But that's another discussion.
I use a devcontainer for building and debugging my .NET Core apps. I'd like to share user-secrets between my host machine and the container.
How can I do this if the location of the user secrets depends on the host machine?
Windows: %APPDATA%/Microsoft/UserSecrets
Mac/Linux: $HOME/.microsoft/usersecrets
I tried mounting both locations, but that throws an error.
.devcontainer/devcontainer.json
{
  "dockerComposeFile": "docker-compose.yml",
  "service": "devcontainer",
  "runServices": [],
  "workspaceFolder": "/workspace",
  "forwardPorts": [
    5000,
    5001
  ],
  "remoteEnv": {
    "ASPNETCORE_ENVIRONMENT": "Development",
    "ASPNETCORE_URLS": "https://+:5001;http://+:5000"
  }
}
.devcontainer/docker-compose.yml
version: "3.7"
services:
devcontainer:
image: mydevcontainerimage:12345
volumes:
- ..:/workspace:cached
- ${APPDATA}/Microsoft/UserSecrets/:/root/.microsoft/usersecrets
- ${HOME}/.microsoft/usersecrets:/root/.microsoft/usersecrets
# Forwards the local Docker socket to the container.
- /var/run/docker.sock:/var/run/docker.sock
command: sleep infinity
docker-compose fails with an error:
ERROR: Duplicate mount points: [/.microsoft/usersecrets:/root/.microsoft/usersecrets:rw, C:\Users\steven\AppData\Roaming\Microsoft\UserSecrets:/root/.microsoft/usersecrets:rw]
The solution might be to use a named volume between the host and the container.
Hence, the docker-compose file will only reference that named volume.
The named volume creation will be specific to the host, though.
For named volume creation based on host path, see here
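On Mac/Linux that looks roughly like this (a sketch; my_test_volume is an arbitrary name and the path is the usersecrets location from the question):

docker volume create --name my_test_volume \
  --opt type=none \
  --opt device=$HOME/.microsoft/usersecrets \
  --opt o=bind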
But as stated here
The built-in local driver on Windows does not support any options.
And for example device=c:\a\path\to\my\folder will not work under Windows.
But, given that the Windows path %APPDATA% expands to something like c:\a\path\to\my\folder, you can rephrase it as /host_mnt/c/a/path/to/my/folder and use that for device:
docker volume create --name my_test_volume --opt type=none --opt device=/host_mnt/c/a/path/to/my/folder --opt o=bind
For others: this supposes that C: is made accessible in the Docker settings (Resources / File sharing).
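With the volume created on each host by the appropriate command, the compose file itself can stay host-agnostic and just reference the volume as external; a sketch, assuming the volume was created as my_test_volume above:

version: "3.7"
services:
  devcontainer:
    image: mydevcontainerimage:12345
    volumes:
      - ..:/workspace:cached
      - usersecrets:/root/.microsoft/usersecrets
      - /var/run/docker.sock:/var/run/docker.sock
    command: sleep infinity
volumes:
  usersecrets:
    external: true
    name: my_test_volume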
I have an app that runs as a Docker container and connects to another container running MongoDB. I want to mount the MongoDB volume data/db to Azure Files. I am using a docker-compose file to define my containers. Assuming that I already have a storage account linked to the app, this is how I define my Docker volume in docker-compose:
database:
  image: "mongo:latest"
  container_name: database
  ports:
    - "27017:27017"
  volumes:
    - ${WEBAPP_STORAGE_HOME}/data/db:/data/db
In Unable to mount azure file shares as mongodb volume in azure container instances it was mentioned that mounting data/db is not recommended and won't work. My questions are:
How can I mount my MongoDB files to Azure Files? How do I perform backups of those files? And if I want to restore a backup to the database, would it be possible to just upload the files to Azure Files and see them in my MongoDB?
For your issue, it's just as I said in the other issue you found: mounting the Azure File Share to the Web App will override the existing files in the image. So you need to point MongoDB at a data path inside the Azure File Share once it is mounted.
An example docker-compose file looks like this:
version: '3.7'
services:
  web:
    image: mongo:latest
    ports:
      - "27017:27017"
    volumes:
      - fileshare:/data/mongodb
Then create the Web App with this docker-compose file and set the startup file to mongod --dbpath=/data/mongodb --bind_ip_all. For example, if you use the Azure CLI, the create command looks like this:
az webapp create -g group_name --plan plan_name -n web_name --multicontainer-config-type compose --multicontainer-config-file docker-compose-file --startup-file "mongod --dbpath=/data/mongodb --bind_ip_all"
Finally, you need to set up the file share mount, either in the portal or by following the steps through the CLI in Configure your app with Azure Storage.
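If you prefer to script that last step too, the mount can be added with az webapp config storage-account add; a sketch, where the account name, share name, and key are placeholders, and --custom-id should match the volume name (fileshare) used in the compose file:

az webapp config storage-account add \
  --resource-group group_name \
  --name web_name \
  --custom-id fileshare \
  --storage-type AzureFiles \
  --account-name storage_account_name \
  --share-name share_name \
  --access-key "$STORAGE_KEY" \
  --mount-path /data/mongodb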
I deploy a service on a standard Docker for AWS stack (using this template).
I deploy using docker stack deploy -c docker-compose.yml pos with this compose file:
version: "3.2"
services:
postgres_vanilla:
image: postgres
volumes:
- db-data:/var/lib/postgresql
volumes:
db-data:
driver: "cloudstor:aws"
driver_opts:
size: "6"
ebstype: "gp2"
backing: "relocatable"
I then change some data in the db and force an update of the service with docker service update --force pos_postgres_vanilla
The problem is that the data I changed doesn't persist after the update.
I've noticed that the postgres initdb script runs every time I update, so I assume it's related.
Is there something I'm doing wrong?
The issue was that cloudstor:aws creates the volume with a lost+found directory under it, so when postgres starts it finds that the data directory isn't empty and complains. To fix that I changed the volume to be mounted one directory above the data directory, at /var/lib/postgresql, but that caused postgres not to find the PG_VERSION file, which in turn caused it to run initdb every time the container starts (https://github.com/docker-library/postgres/blob/master/11/docker-entrypoint.sh#L57).
So to work around it, instead of mounting the volume one directory above the data directory, I changed the data directory to be one level below the volume mount by overriding the environment variable PGDATA (to something like /var/lib/postgresql/data/db/).
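In compose terms the workaround looks roughly like this (a sketch; only the mount path and the PGDATA override differ from the original file):

version: "3.2"
services:
  postgres_vanilla:
    image: postgres
    environment:
      PGDATA: /var/lib/postgresql/data/db/
    volumes:
      - db-data:/var/lib/postgresql/data
volumes:
  db-data:
    driver: "cloudstor:aws"
    driver_opts:
      size: "6"
      ebstype: "gp2"
      backing: "relocatable"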
I am new to the docker ecosystem and I am trying to spin up a simple postgres container along with a volume so it persists its data, using a YAML compose file. The file is as follows:
# Use postgres/example user/password credentials
version: '3.3'
services:
  db:
    image: postgres
    environment:
      POSTGRES_DB: recrow
      POSTGRES_USER: recrow
      POSTGRES_PASSWORD: recrow_db_1000
      PGDATA: /var/lib/pgsql/data/pgdata
    volumes:
      - ./pgsql/data:/var/lib/pgsql/data/pgdata
However, upon calling docker-compose -f stack.yml up I get the following error:
fixing permissions on existing directory /var/lib/postgresql/data/pgdata ... initdb: could not change permissions of directory "/var/lib/postgresql/data/pgdata": Operation not permitted
/var/lib/pgsql/data/pgdata is supposed to be a directory relative to the container's root, while ./pgsql/data is a path on the host. I am running the container from an ntfs-3g partition mounted on /mnt/storage. What could be the problem? I am also running docker without root permissions by adding my user to the docker group, and this user also has full access to the aforementioned mount point /mnt/storage.
I'm guessing this is an incompatibility with ntfs-3g. The PostgreSQL image contains an entrypoint script that does some permission changes on container start: https://github.com/docker-library/postgres/blob/972294a377463156c8d61297320c872fc7d370a9/9.6/docker-entrypoint.sh#L32-L38. I found another relevant question at https://askubuntu.com/questions/11840/how-do-i-use-chmod-on-an-ntfs-or-fat32-partition that talks about being able to set permissions at mount time, but not being able to change them via chmod or chown (which is likely the reason for the failure in this case).
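For reference, with ntfs-3g the permissions are fixed when the filesystem is mounted; something like the following (a sketch, the device name is a placeholder) sets ownership for the whole mount, but chmod/chown from inside the container will still fail:

sudo mount -t ntfs-3g -o uid=1000,gid=1000,umask=077 /dev/sdb1 /mnt/storage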
Unfortunately, I think the answer here is that you cannot use ntfs-3g safely for backing Docker host volume mounts.
Following up on @liam-mitchell's note above, that is the answer. Use named volumes, like the following:
services:
  db:
    image: postgres:12-alpine
    volumes:
      - "postgres:/data/postgres"
    environment:
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=postgres
      - PGDATA=/data/postgres
...
volumes:
  postgres:
I work with OpenShift and had the same problem running this official image from Docker Hub.
In my case, the solution was to use the official postgres image from the Red Hat repository; that image has this problem fixed and can be an alternative.
I had the same issue with Docker on WSL2. Setting the :Z flag for the mount and mounting to a Linux directory (/home/*) instead of a Windows file system directory (/mnt/*) worked for me.
my compose:
version: '3.3'
services:
  postgres:
    container_name: dbs2-postgres
    environment:
      - POSTGRES_PASSWORD=mysecretpassword
      - PGDATA=/var/lib/postgresql/data/pgdata
    volumes:
      - './data:/var/lib/postgresql/data:Z'
    image: postgres