Old PostgreSQL volume mounted in a new container but not showing the databases - postgresql

I deleted an old PostgreSQL container that had 4 databases in it. Those 4 databases were persisted in a volume. Later I created another PostgreSQL container and mounted the old volume (holding the 4 databases) into it. The volume mounted successfully, but when I exec into the new container, open the psql shell, and type \l, it does not show the 4 databases stored in the old volume. I performed the following steps to create the new container and mount the old volume:
Created the new container:
sudo docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Attached the old PostgreSQL volume (holding the 4 databases) to a new container:
sudo docker container run -itd -v bf6c9d65bce874795c1b3af1071008a6115b3f63400efb8af655bb9a1ae0f08a:/var/lib/postgresql/data postgres
In step 2 the volume was successfully attached to the container; I inspected the container with the following command:
sudo docker container inspect fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b
You can see the volume is mounted in the JSON below:
"Mounts": [
    {
        "Type": "volume",
        "Name": "c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58",
        "Source": "/var/lib/docker/volumes/c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
],
When I exec into this new container with the following command and inspect the databases, it does not show the databases previously stored in the volume:
sudo docker exec -it fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b bash
Where am I going wrong?
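For comparison, here is a hedged sketch of pointing a single new container at an existing named volume: the -v flag takes the volume name (not a container ID), and the port mapping and password belong on the same container that has the mount. Names in angle brackets are placeholders, and postgresql-restored is a made-up container name:

```shell
# List candidate volumes and check which one actually holds a data directory
docker volume ls
docker volume inspect <volume-name>   # "Mountpoint" shows the host path

# Run ONE container that both mounts the old volume and publishes the port
docker run --name postgresql-restored \
  -p 5432:5432 \
  -e POSTGRES_PASSWORD=somePassword \
  -v <volume-name>:/var/lib/postgresql/data \
  -d postgres
```

It may also be worth confirming which volume actually holds the old data: the inspect output above shows volume c792a1d5…, which is not the bf6c9d65… name passed on the command line in step 2.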

Related

Docker volume backup and restore postgres:10 image & named volume

I'm trying to back up and restore a Docker volume. This volume is a named volume attached to a postgres:10 db image. The volume is seeded via a pg_dump/restore. I'm using docker-compose, and here is what it looks like:
services:
  db:
    image: postgres:10
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
    driver: local
  localstack-data:
    driver: local
Here is what the mounted volume looks like on the db service:
"Mounts": [
    {
        "Type": "volume",
        "Name": "postgres-data",
        "Source": "/var/lib/docker/volumes/postgres-data/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    }
]
Now, when I say "backup/restore", I mean I want a backup of the data inside that volume so that after I have messed all the data up, I can simply replace it with the backup and have fresh data. A simple "restore w/ snapshot" type of action. I'm following the documentation found on Docker's site.
To test it:
I add a user to my postgres-data
Stop my container: docker stop $(docker ps -q)
Perform the backup (db_1 is the container name): docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox tar cvf /backup/backup.tar /var/lib/postgresql/data
Delete the user after the backup is complete
Here is example output of the backup command, which completes with NO errors:
tar: removing leading '/' from member names
var/lib/postgresql/data/
var/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_stat/
Once done, the tar file gets created. Now, maybe the error is with the path? In the output I'm seeing lib/postgresql/data nested over and over; I'm not sure whether that is an okay result or not.
Once done, I perform the "restore": docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar xvf /backup/backup.tar --strip 1"
What I expect: To see the deleted user back in the db. Basically, to see the same data that was in the volume when I performed the backup.
What I'm seeing: the data is not getting restored. Still same data. No user restored and any data that has been changed stays changed.
Again, perhaps what I'm doing is the wrong thing or "overkill" for a simple desire to hot swap out used volumes with a fresh, untouched one. Any ideas would be helpful from debugging or a better approach.
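The repeated lib/postgresql/data nesting in the tar listing is consistent with the restore extracting one level too shallow: stripping only one component removes just the leading var, so the archive's contents land under lib/postgresql/data inside the data directory, and the next backup then archives that nested copy as well. Here is a self-contained sketch of the path arithmetic using plain GNU tar on a scratch directory (the tar-demo path is made up for illustration; --strip in the question's busybox command appears to correspond to GNU tar's --strip-components):

```shell
set -e
work="$(pwd)/tar-demo"
rm -rf "$work"

# Simulate the volume contents: /var/lib/postgresql/data with one file.
mkdir -p "$work/var/lib/postgresql/data"
echo "10" > "$work/var/lib/postgresql/data/PG_VERSION"

# Back up the way the question does: members are stored with the
# relative path var/lib/postgresql/data/... (tar drops the leading '/').
( cd "$work" && tar cf backup.tar var/lib/postgresql/data )

# Restore as in the question: stripping only ONE component ('var')
# leaves the files under lib/postgresql/data/ -- one extra nesting
# level per restore, matching the repeated paths in the tar listing.
mkdir -p "$work/restore-strip1"
( cd "$work/restore-strip1" && tar xf "$work/backup.tar" --strip-components=1 )
ls "$work/restore-strip1/lib/postgresql/data/PG_VERSION"

# Fix: strip all four leading components so the files land directly
# in the extraction directory (the data dir, in the real restore).
mkdir -p "$work/restore-strip4"
( cd "$work/restore-strip4" && tar xf "$work/backup.tar" --strip-components=4 )
ls "$work/restore-strip4/PG_VERSION"
```

With this archive layout (members starting at var/), restoring into /var/lib/postgresql/data would need --strip-components=4, or alternatively extracting from / with no stripping at all.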

ECS + EFS - files never make their way out of the container

I'm trying to do a POC with ECS + EFS (MySQL for a personal site), but the file changes to the mounted volume within docker don't make their way to EFS.
I have it mounted on the container host:
us-east-1a.fs-#####.efs.us-east-1.amazonaws.com:/ on /mnt/mysql-data-docker type nfs4 (rw,relatime,vers=4.0,rsize=1048576,wsize=1048576,namlen=255,hard,proto=tcp,timeo=600,retrans=2,sec=sys,clientaddr=10.0.0.128,local_lock=none,addr=10.0.0.90)
My task definition (relevant parts) shows:
"mountPoints": [
    {
        "containerPath": "/var/lib/mysql",
        "sourceVolume": "mysql-data",
        "readOnly": null
    }
]
and
"volumes": [
    {
        "host": {
            "sourcePath": "/mnt/mysql-data-docker"
        },
        "name": "mysql-data"
    }
],
I can write a file there, terminate the host, have the new host come up via scaling group and get mounted, and the file is still there, so I know that's working (and EFS shows 12kb on that FS instead of 6kb).
Looking at the running MySQL container:
[ec2-user@ip-10-0-0-128 ~]$ docker inspect e96a7 | jq '.[0].Mounts'
[
    {
        "Source": "/mnt/mysql-data-docker",
        "Destination": "/var/lib/mysql",
        "Mode": "",
        "RW": true,
        "Propagation": "rprivate"
    }
]
/mnt/mysql-data-docker on the host only shows my test file I verified with. In the container, there's a bunch of stuff in /var/lib/mysql but it never makes its way to the host or to EFS.
Turns out it's because:
If you're using the Amazon ECS-Optimized AMI or Amazon Linux AMI's docker packages, the Docker daemon's mount namespace is unshared from the host's at launch. Some other AMIs might also have this behaviour.
On any of those, a filesystem mount will not propagate to the Docker daemon until the next time it's restarted.
So running sudo service docker restart && sudo start ecs solved it.
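A quick way to check whether the daemon sees the EFS mount after the restart (the path follows the question; the busybox probe container is just an illustration):

```shell
# If mount propagation is working, this should list the test file on EFS
# rather than an empty directory
docker run --rm -v /mnt/mysql-data-docker:/probe busybox ls -la /probe
```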
When you start up an EC2 instance, you can specify a "user data" script that runs at boot time, before Docker starts up.
In your case, something like this should work:
#cloud-boothook
#!/bin/bash
# Install nfs-utils
sudo yum install -y nfs-utils
# Mount EFS, writing the volume to fstab ensures it will automatically mount even on reboot
sudo mkdir -p /mnt/mysql-data-docker
echo "us-east-1a.fs-#####.efs.us-east-1.amazonaws.com:/ /mnt/mysql-data-docker nfs4 nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2" | sudo tee -a /etc/fstab
sudo mount -a

Docker volume details for Windows

I am learning Docker and deploying some sample images from Docker Hub. One of them requires PostgreSQL. When I deploy without specifying a volume, it works beautifully. When I specify the volume 'path on host', it fails with an inability to fsync properly. My question is: when I inspect the volumes, I cannot find where Docker is storing them. I'd like to be able to specify a volume so I can move the data if/when needed. Where does Docker store this on a Windows machine? I tried enabling the volume through Kitematic, but the container became unusable.
> docker volume inspect 0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data",
        "Name": "0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62",
        "Options": {},
        "Scope": "local"
    }
]
I can create a volume through Docker but am not sure where it is stored on the hard disk.
If you're on Windows 10 and use Docker for Windows, Docker creates a VM and runs it on your local Hyper-V. The volumes you create are then located inside this VM, which is stored in a file called MobyLinuxVM.vhdx (you can check it in Docker's settings).
One way to have your data on your host computer for now is to share a drive in the Docker settings and then map your postgres data folder to your Windows hard drive.
Something like docker run -it -v /c/mypgdata:/var/lib/postgresql/data postgres
Another way would be to create a volume with a specific driver, take a look at existing volume drivers if one can do what you want.
This one could be of interest for you : https://github.com/CWSpear/local-persist
You can also enter the MobyLinux VM with this "kind of" hack:
#get a privileged container with access to Docker daemon
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh
#run a container with full root access to MobyLinuxVM and no seccomp profile (so you can mount stuff)
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
#switch to host FS
chroot /host
#and then go to the volume you asked for
cd /var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data
Found here : http://docker-saigon.github.io/post/Docker-Beta/

Migrate data from a data only postgresql docker volume

I have a data-only PostgreSQL container:
docker create -v /var/lib/postgresql/data --name bevdata mdillon/postgis /bin/true
I have a running PostGIS container:
docker run --name bevaddress -e POSTGRES_USER=bevsu -e POSTGRES_DB=bevaddress -P -d --volumes-from bevdata mdillon/postgis
I have made a backup of that database in the bevaddress container, in the directory /var/lib/postgresql/backup.
I think this means that the backup data is in the bevaddress container (the running process) and NOT in the data-only container bevdata, which I think is good.
Now if I docker pull mdillon/postgis to a new version, how can I attach the folder /var/lib/postgresql/backup of container bevaddress so that a new instance and version of mdillon/postgis can access that folder to restore the database?
To the best of my knowledge, you cannot. The file system in your running container only exists for the duration of the run. Without mounting a volume, you have no way to allow a second container access to the backup.
For future backups, you could create a second, volume-only container that mounts /var/lib/postgresql/backup.
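A minimal sketch of that suggestion, reusing the naming scheme from the question (bevbackup and bevaddress2 are made-up names):

```shell
# Volume-only container exposing the backup directory
docker create -v /var/lib/postgresql/backup --name bevbackup mdillon/postgis /bin/true

# Run the database with both the data and the backup volumes attached
docker run --name bevaddress2 -e POSTGRES_USER=bevsu -e POSTGRES_DB=bevaddress \
  -P -d --volumes-from bevdata --volumes-from bevbackup mdillon/postgis
```

A container started later from a newer mdillon/postgis image can then reach the dump with the same --volumes-from bevbackup flag.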

How to set docker mongo data volume

I want to use a Dockerized MongoDB and store the data in a local volume, but it failed.
I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED       VIRTUAL SIZE
mongo        latest   b11eedbc330f   2 weeks ago   317.4 MB
ubuntu       latest   6cc0fc2a5ee3   3 weeks ago   187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But... it does not work.
docker ps shows no mongo daemon:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID   IMAGE   COMMAND   CREATED   STATUS   PORTS   NAMES
Trying to exec into "mongo" failed:
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
docker inspect mongo
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
    {
        "Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
        "Created": "2016-02-15T02:19:01.617824401Z",
        "Path": "/entrypoint.sh",
        "Args": [
            "mongod"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 100,
            "Error": "",
            "StartedAt": "2016-02-15T02:19:01.74102535Z",
            "FinishedAt": "2016-02-15T02:19:01.806376434Z"
        },
        "Mounts": [
            {
                "Source": "/Users/hushuming/data",
                "Destination": "/data/db",
                "Mode": "",
                "RW": true
            },
            {
                "Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
                "Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
                "Destination": "/data/configdb",
                "Driver": "local",
                "Mode": "",
                "RW": true
            }
        ],
If I do not set data volume, mongo image can work!
But, when setting data volume, it can't. Who can help me?
Try checking docker logs to see what was going on when the container stopped and went into "Exited" mode.
See also if specifying the full path for the volume would help:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
2016-02-15T06:19:17.638+0000 I CONTROL [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), either using boot2docker with VB or a vagrant box with VB.
Nevertheless, I managed to hack the ownership, remounting the /Users shared volume inside boot2docker to uid 999 and gid 999 (which are what mongo docker image uses) and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod crashes due to filesystem type not being supported (mmap not working on vboxsf)
So the actual solution would be to try a DVC: Data Volume Container, because right now the mongodb doc mentions:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this
operation.
So:
the mounting to OSX will not work for MongoDB because of the way that virtualbox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (which you can then back up if you need to)
Per Docker Docs:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
docker volume create mongodbdata
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
Via Docker Compose:
version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - ./<your-local-path>:/data/db
/data/db is the location of the data saved on the container.
<your-local-path> is the location on your machine AKA the host machine where the actual database journal files will be saved.
For anyone running MongoDB container on Windows: as described here, there's an issue when you mount volume from Windows host to MongoDB container using path (we call this local volume).
You can overcome the issue using a Docker volume (volume managed by Docker):
docker volume create mongodata
Or using docker-compose as my preference:
version: "3.4"
services:
  ....
  db:
    image: mongo
    volumes:
      - mongodata:/data/db
    restart: unless-stopped
volumes:
  mongodata:
Tested on Windows 10, and it works.
Found a link: VirtualBox Shared Folders are not supported by MongoDB.