Mount Postgres data to Windows host directory - postgresql

I want to ensure my Postgres data (using a Linux-based image) persists even after my Windows host machine restarts.
I tried to follow the steps in How to persist data in a dockerized postgres database using volumes:
docker-compose.yml
volumes:
  - ./postgres_data:/var/lib/postgresql/data
However, I'm getting this error:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Then, I tried to follow the steps in https://forums.docker.com/t/trying-to-get-postgres-to-work-on-persistent-windows-mount-two-issues/12456/5
The suggested method is:
docker volume create --name postgres_data --driver local
docker-compose.yml
services:
  postgres:
    volumes:
      - postgres_data:/var/lib/postgresql/data

volumes:
  postgres_data:
    external: true
However, I'm confused by the command docker volume create --name postgres_data --driver local, as it doesn't mention the exact path on the Windows host machine.
I tried
C:\celery-hello-world>docker volume create postgres_data
postgres_data
C:\celery-hello-world>docker volume inspect postgres_data
[
    {
        "CreatedAt": "2018-02-06T14:54:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/postgres_data/_data",
        "Name": "postgres_data",
        "Options": {},
        "Scope": "local"
    }
]
May I know which Windows directory the volume postgres_data is mounted to?

Today I was asking myself the same question, but I figured it out.
The first thing to note is that inside the postgres container the path is:
/var/lib/postgresql/data
The path you tried to track down is a different one:
C:\celery-hello-world>docker volume inspect postgres_data
[
    {
        "CreatedAt": "2018-02-06T14:54:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/postgres_data/_data",
        "Name": "postgres_data",
        "Options": {},
        "Scope": "local"
    }
]
/var/lib/docker/volumes/postgres_data/_data
I am also using Docker for Windows with Linux containers.
As you can check, normally you will not see this Docker machine:
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
But there's a trick to access it when running Linux containers:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Just run this from your CLI and it'll drop you in a container with
full permissions on the Moby VM. Only works for Moby Linux VM (doesn't
work for Windows Containers). Note this also works on Docker for Mac.
Reference: https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
From there you can navigate to the folder (note that I named my volume postgresql-volume):
cd /var/lib/docker/volumes/postgresql-volume/_data
ls
PG_VERSION pg_dynshmem pg_notify pg_stat_tmp postgresql.auto.conf
base pg_hba.conf pg_replslot pg_subtrans postgresql.conf
global pg_ident.conf pg_serial pg_tblspc postmaster.opts
pg_clog pg_logical pg_snapshots pg_twophase postmaster.pid
pg_commit_ts pg_multixact pg_stat pg_xlog
So, to be clear, the data lives inside the VM provided by Docker for Windows, in the directory above.
Docker for Windows needs Hyper-V enabled for virtualization.
You can also find the disk image location in Docker for Windows: go to Settings -> Advanced -> Disk image location. In my case it's:
"C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx"
Also note that the same could be found in Hyper-V Manager.

Related

Old Postgresql Volume Mounted with Container but not showing the Databases

I deleted an old postgresql container that had 4 databases in it. Those 4 databases were persisted in a volume. Later I made another postgresql container and mounted the old volume (holding the 4 databases) into it. The volume was successfully mounted, but when I exec into this new container, open the psql shell, and type \l, it does not show the previous 4 databases stored in the old volume. I performed the following steps to make the new container and mount the old volume:
Created the new container:
sudo docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Attached the old postgresql volume (having 4 databases) to a new container:
sudo docker container run -itd -v bf6c9d65bce874795c1b3af1071008a6115b3f63400efb8af655bb9a1ae0f08a:/var/lib/postgresql/data postgres
In step 2 the volume was successfully attached to the container; I inspected the container with the following command:
sudo docker container inspect fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b
You can see the volume is correctly mounted in the JSON below:
"Mounts": [
    {
        "Type": "volume",
        "Name": "c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58",
        "Source": "/var/lib/docker/volumes/c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
],
When I enter this new container with the following and inspect the databases, it does not show the databases previously stored in the volume:
sudo docker exec -it fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b bash
Where am I going wrong?
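Note that the volume name in the Mounts output (c792a1d5…) is not the one passed with -v (bf6c9d65…), which suggests the container being inspected is not the one that mounts the old volume. One way to rule out this kind of mix-up is to pin the exact old volume as an external volume in a compose file. A sketch, assuming the service name is illustrative and the volume name is the old volume's full name:

```yaml
version: "3.4"
services:
  postgres:
    image: postgres
    environment:
      - POSTGRES_PASSWORD=somePassword
    ports:
      - "5432:5432"
    volumes:
      # mount the pre-existing volume, referenced by its exact name below
      - old_pg_data:/var/lib/postgresql/data

volumes:
  old_pg_data:
    external: true
    # full name of the old volume that holds the 4 databases
    name: bf6c9d65bce874795c1b3af1071008a6115b3f63400efb8af655bb9a1ae0f08a
```

With external: true, compose will refuse to start rather than silently create a fresh, empty volume under another name.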

Postgresql in a Docker Container on Windows: How to persist data to a local windows folder

I'm trying to run Postgres in a Docker container on Windows. I also want to keep the data in a Windows folder, so I tried this:
mkdir c:\pgdata
PS > docker run --name postgres -v c:\pgdata:/var/lib/postgresql/data -d postgres
d12af76bed7f8078babc0b6d35710dfc02b12d650904ed53ca95bb99984e9b36
This appeared to work, but the container is not running and the log tells a different story:
2019-07-24 23:19:20.861 UTC [77] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-24 23:19:20.861 UTC [77] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
If I remove the volume option, it starts up fine, but then I don't get my database files persisted where I want them. What am I doing wrong here?
You did nothing wrong; just have a look at the full log:
The files belonging to this database system will be owned by user "postgres".
This user must also own the server process.
The database cluster will be initialized with locale "en_US.utf8".
The default database encoding has accordingly been set to "UTF8".
The default text search configuration will be set to "english".
Data page checksums are disabled.
fixing permissions on existing directory /var/lib/postgresql/data ... ok
creating subdirectories ... ok
selecting default max_connections ... 20
selecting default shared_buffers ... 400kB
selecting default timezone ... Etc/UTC
selecting dynamic shared memory implementation ... posix
creating configuration files ... ok
running bootstrap script ...
2019-07-25 01:28:18.301 UTC [77] FATAL: data directory "/var/lib/postgresql/data" has wrong ownership
2019-07-25 01:28:18.301 UTC [77] HINT: The server must be started by the user that owns the data directory.
child process exited with exit code 1
initdb: removing contents of data directory "/var/lib/postgresql/data"
From the above you can see fixing permissions on existing directory /var/lib/postgresql/data ... ok, which is executed by docker-entrypoint.sh to change the ownership from root to postgres. Unfortunately, this only works on a Linux host, not on a Windows host.
Why doesn't it work on Windows? See this discussion; in short, the current implementation is based on CIFS/Samba, which Docker cannot work around.
So I'm afraid you have no way to persist the data to a Windows folder if you insist on using a bind mount.
But if you don't insist, a close solution is to use a named volume, like this:
PS C:\> docker volume create my-vol
my-vol
PS C:\> docker run --name postgres -v my-vol:/var/lib/postgresql/data -d postgres
079d4b5b3f73bc0c4c586cdfee3fdefc8a27cdcd409e857de985bead254cd23f
PS C:\> docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
079d4b5b3f73 postgres "docker-entrypoint.s…" 5 seconds ago Up 2 seconds 5432/tcp postgres
PS C:\> docker volume inspect my-vol
[
    {
        "CreatedAt": "2019-07-25T01:43:01Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/my-vol/_data",
        "Name": "my-vol",
        "Options": {},
        "Scope": "local"
    }
]
Finally, the data will persist in /var/lib/docker/volumes/my-vol/_data. The limitation is that this folder is not on Windows; it's in the Hyper-V machine, since, as you may know, Docker for Windows uses Hyper-V to provide a Linux kernel.
But it may still meet your requirement: even if you remove the current container, the next time you mount the same volume name (here, my-vol), the data will still be in the new container. The named volume is not deleted when the container is deleted; it persists in the Hyper-V virtual machine.
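The same named-volume behaviour can be expressed in docker-compose form (a sketch; the service and volume names are illustrative): compose creates the volume on first up and reuses it on every subsequent run, even after the container is removed.

```yaml
version: '3'
services:
  postgres:
    image: postgres
    ports:
      - "5432:5432"
    volumes:
      # named volume: kept across container removal and `docker-compose down`
      - my-vol:/var/lib/postgresql/data

volumes:
  # stored in the Hyper-V VM under /var/lib/docker/volumes/<project>_my-vol/_data
  my-vol:
```

Only docker-compose down -v (or an explicit docker volume rm) deletes the volume's data.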
On Windows, once you've installed Docker, you also get the docker-compose.exe command, so let's use it for your postgres:
Step 1. Create a folder on your host (Windows: D:\workspace\docker_folder\postgres9.5).
Step 2. Create a docker-compose.yml with the following content:
version: '3'
services:
  postgres9.5:
    container_name: "postgres9.5"
    image: postgres:9.5
    # notice here, D:\workspace should be written as: /d/workspace
    volumes:
      - /d/workspace/docker_folder/postgres9.5:/var/lib/postgresql/data
    command: 'postgres'
    ports:
      - "5432:5432"
    stdin_open: true
    tty: true
    environment:
      - POSTGRES_DB=postgres
      - POSTGRES_USER=postgres
      - POSTGRES_PASSWORD=88888888
Step 3. Run docker-compose up; you will then see the files in your D:\workspace\docker_folder.

ECS + EFS - files never make their way out of the container

I'm trying to do a POC with ECS + EFS (MySQL for a personal site), but file changes to the mounted volume within Docker don't make their way to EFS.
I have it mounted on the container host:
us-east-1a.fs-#####.efs.us-east-1.amazonaws.com:/ on /mnt/mysql-data-docker type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.128,local_lock=none,addr=10.0.0.90)
My task definition (relevant parts) shows:
"mountPoints": [
    {
        "containerPath": "/var/lib/mysql",
        "sourceVolume": "mysql-data",
        "readOnly": null
    }
]
and
"volumes": [
    {
        "host": {
            "sourcePath": "/mnt/mysql-data-docker"
        },
        "name": "mysql-data"
    }
],
I can write a file there, terminate the host, have a new host come up via the scaling group and mount it, and the file is still there, so I know that's working (and EFS shows 12 kB on that filesystem instead of 6 kB).
Looking at the running MySQL container:
[ec2-user@ip-10-0-0-128 ~]$ docker inspect e96a7 | jq '.[0].Mounts'
[
    {
        "Source": "/mnt/mysql-data-docker",
        "Destination": "/var/lib/mysql",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
]
/mnt/mysql-data-docker on the host only shows my test file I verified with. In the container, there's a bunch of stuff in /var/lib/mysql but it never makes its way to the host or to EFS.
Turns out it's because:
If you're using the Amazon ECS-Optimized AMI or Amazon Linux AMI's docker packages, the Docker daemon's mount namespace is unshared from the host's at launch. Some other AMIs might also have this behaviour.
On any of those, a filesystem mount will not propagate to the Docker daemon until the next time it's restarted.
So running sudo service docker restart && sudo start ecs solved it.
When you start up an EC2 instance you can specify a "user data" script that runs at boot time, before Docker starts up.
In your case, something like this should work:
#cloud-boothook
#!/bin/bash
# Install nfs-utils
sudo yum install -y nfs-utils
# Mount EFS, writing the volume to fstab ensures it will automatically mount even on reboot
sudo mkdir -p /mnt/mysql-data-docker
# use tee so the append to /etc/fstab also works if this script is not already running as root
# (a plain `sudo echo ... >> /etc/fstab` would do the redirect as the unprivileged user)
echo "us-east-1a.fs-#####.efs.us-east-1.amazonaws.com:/ /mnt/mysql-data-docker nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" | sudo tee -a /etc/fstab
sudo mount -a

Docker volume details for Windows

I am learning Docker and deploying some sample images from Docker Hub. One of them requires postgresql. When I deploy it without specifying a volume, it works beautifully. When I specify the volume 'path on host', it fails with an inability to fsync properly. My question: when I inspect the volumes, I cannot find where Docker is storing them. I'd like to be able to specify a volume so I can move the data if/when needed. Where does Docker store this on a Windows machine? I tried enabling the volume through Kitematic, but the container became unusable.
> docker volume inspect 0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data",
        "Name": "0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62",
        "Options": {},
        "Scope": "local"
    }
]
I can create a volume through docker but am not sure where it is stored on the harddisk.
If you're on Windows 10 and use Docker for Windows, Docker will create a VM and run it on your local Hyper-V. The volumes you create are then located inside this VM, which is stored in a file called MobyLinuxVM.vhdx (you can check this in Docker's settings).
One way to have your data on your host computer for now is to share a drive in the Docker settings and then map your postgres data folder to your Windows hard drive.
Something like docker run -it -v /c/mypgdata:/var/lib/postgresql/data postgres
Another way would be to create a volume with a specific driver; take a look at the existing volume drivers to see if one can do what you want.
This one could be of interest for you : https://github.com/CWSpear/local-persist
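As a sketch of what that could look like with the local-persist plugin (assuming the plugin is installed first; the volume name and mountpoint are illustrative, and on Docker for Windows the mountpoint is still a path inside the Moby VM):

```yaml
version: '2'
services:
  postgres:
    image: postgres
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
    driver: local-persist
    driver_opts:
      # fixed path where the volume's data is stored, independent of Docker's
      # auto-generated /var/lib/docker/volumes/... directories
      mountpoint: /data/pgdata
```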
You can also enter the MobyLinux VM with this "kind of" hack:
# get a privileged container with access to the Docker daemon
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh
# run a container with full root access to MobyLinuxVM and no seccomp profile (so you can mount stuff)
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
# switch to the host FS
chroot /host
# and then go to the volume you asked for
cd /var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data
Found here : http://docker-saigon.github.io/post/Docker-Beta/

How to set docker mongo data volume

I want to use Dockerizing MongoDB and store the data in a local volume.
But... it failed.
I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mongo latest b11eedbc330f 2 weeks ago 317.4 MB
ubuntu latest 6cc0fc2a5ee3 3 weeks ago 187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But... it did not work.
docker ps shows no mongo daemon:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Trying to get into "mongo" also failed:
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
docker inspect mongo
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
    {
        "Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
        "Created": "2016-02-15T02:19:01.617824401Z",
        "Path": "/entrypoint.sh",
        "Args": [
            "mongod"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 100,
            "Error": "",
            "StartedAt": "2016-02-15T02:19:01.74102535Z",
            "FinishedAt": "2016-02-15T02:19:01.806376434Z"
        },
        "Mounts": [
            {
                "Source": "/Users/hushuming/data",
                "Destination": "/data/db",
                "Mode": "",
                "RW": true
            },
            {
                "Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
                "Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
                "Destination": "/data/configdb",
                "Driver": "local",
                "Mode": "",
                "RW": true
            }
        ],
If I do not set a data volume, the mongo image works! But when setting the data volume, it doesn't. Who can help me?
Try and check docker logs to see what was going on when the container stopped and went into "Exited" mode.
See also if specifying the full path for the volume would help:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98
Unable to create/open lock file: /data/db/mongod.lock
errno:13 Permission denied
Is a mongod instance already running?
terminating 2016-02-15T06:19:17.638+0000
I CONTROL [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), either using boot2docker with VB or a vagrant box with VB.
Nevertheless, I managed to hack the ownership, remounting the /Users shared volume inside boot2docker to uid 999 and gid 999 (which are what mongo docker image uses) and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod crashes due to filesystem type not being supported (mmap not working on vboxsf)
So the actual solution would be to try a DVC: Data Volume Container, because right now the mongodb doc mentions:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this
operation.
So:
the mounting to OSX will not work for MongoDB because of the way that virtualbox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (that you can then backup if you need)
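One common use of that path is taking a backup. A sketch of the idea (names are illustrative): either tar the _data directory directly, or mount the volume into a throwaway container, e.g. docker run --rm -v mongodbdata:/data/db -v "$PWD":/backup alpine tar czf /backup/mongo.tgz -C /data/db . — the tar step itself, demonstrated on a stand-in directory:

```shell
# Stand-in for the volume's _data path reported by `docker volume inspect`
DATA=$(mktemp -d)
echo 'placeholder' > "$DATA/WiredTiger.wt"   # fake data file for the demo

# Archive the directory's contents, then list the archive to verify
tar czf /tmp/mongodbdata-backup.tgz -C "$DATA" .
tar tzf /tmp/mongodbdata-backup.tgz
```

Restoring is the same pattern in reverse: extract the archive into the volume's mountpoint (or into a container that mounts the volume).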
Per Docker Docs:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
docker volume create mongodbdata
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
Via Docker Compose:
version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - ./<your-local-path>:/data/db
/data/db is the location of the data saved on the container.
<your-local-path> is the location on your machine AKA the host machine where the actual database journal files will be saved.
For anyone running MongoDB container on Windows: as described here, there's an issue when you mount volume from Windows host to MongoDB container using path (we call this local volume).
You can overcome the issue using a Docker volume (volume managed by Docker):
docker volume create mongodata
Or using docker-compose as my preference:
version: "3.4"
services:
  ....
  db:
    image: mongo
    volumes:
      - mongodata:/data/db
    restart: unless-stopped

volumes:
  mongodata:
Tested on Windows 10, and it works.
Found a link: VirtualBox Shared Folders are not supported by mongodb.