I want to Dockerize MongoDB and store its data in a local volume, but it fails. I have the mongo:latest image:
kerydeMacBook-Pro:~ hu$ docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
mongo latest b11eedbc330f 2 weeks ago 317.4 MB
ubuntu latest 6cc0fc2a5ee3 3 weeks ago 187.9 MB
I want to store the mongo data in ~/data, so:
kerydeMacBook-Pro:~ hu$ docker run -p 27017:27017 -v ~/data:/data/db --name mongo -d mongo
f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43
But it doesn't work; docker ps shows no running mongo container:
kerydeMacBook-Pro:~ hu$ docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
Trying to exec into "mongo" fails:
kerydeMacBook-Pro:~ hu$ docker exec -it f57 bash
Error response from daemon: Container f57 is not running
Running docker inspect mongo shows:
kerydeMacBook-Pro:~ hu$ docker inspect mongo
[
    {
        "Id": "f570073fa3104a54a54f39dbbd900a7c9f74938e2e0f3f731ec8a3140a418c43",
        "Created": "2016-02-15T02:19:01.617824401Z",
        "Path": "/entrypoint.sh",
        "Args": [
            "mongod"
        ],
        "State": {
            "Status": "exited",
            "Running": false,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 0,
            "ExitCode": 100,
            "Error": "",
            "StartedAt": "2016-02-15T02:19:01.74102535Z",
            "FinishedAt": "2016-02-15T02:19:01.806376434Z"
        },
        "Mounts": [
            {
                "Source": "/Users/hushuming/data",
                "Destination": "/data/db",
                "Mode": "",
                "RW": true
            },
            {
                "Name": "365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a",
                "Source": "/mnt/sda1/var/lib/docker/volumes/365e687c4e42a510878179962bea3c7699b020c575812c6af5a1718eeaf7b57a/_data",
                "Destination": "/data/configdb",
                "Driver": "local",
                "Mode": "",
                "RW": true
            }
        ],
If I do not set a data volume, the mongo image works. With the data volume set, it doesn't. Can anyone help?
Try checking docker logs to see what was going on when the container stopped and went into the "Exited" state.
See also whether specifying the full path for the volume helps:
docker run -p 27017:27017 -v /home/<user>/data:/data/db ...
The OP adds:
docker logs mongo
exception in initAndListen: 98 Unable to create/open lock file: /data/db/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
2016-02-15T06:19:17.638+0000 I CONTROL  [initandlisten] dbexit: rc: 100
An errno:13 is what issue 30 is about.
This comment adds:
It's a file ownership/permission issue (not related to this docker image), seen either with boot2docker on VirtualBox or with a vagrant box on VirtualBox.
Nevertheless, I managed to hack around the ownership by remounting the /Users shared volume inside boot2docker with uid 999 and gid 999 (which is what the mongo docker image uses), and got it to start:
$ boot2docker ssh
$ sudo umount /Users
$ sudo mount -t vboxsf -o uid=999,gid=999 Users /Users
But... mongod then crashes because the filesystem type is not supported (mmap does not work on vboxsf).
So the actual solution would be to try a DVC (Data Volume Container), because the MongoDB documentation currently notes:
MongoDB requires a filesystem that supports fsync() on directories.
For example, HGFS and Virtual Box’s shared folders do not support this
operation.
So:
mounting an OS X folder will not work for MongoDB because of the way VirtualBox shared folders work.
For a DVC (Data Volume Container), try docker volume create:
docker volume create mongodbdata
Then use it as:
docker run -p 27017:27017 -v mongodbdata:/data/db ...
And see if that works better.
As I mention in the comments:
A docker volume inspect mongodbdata (see docker volume inspect) will give you its path (which you can then back up if you need to).
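If you do want that backup, a minimal sketch with plain tar follows. The /tmp path and file name are hypothetical stand-ins for the real mountpoint, which on Docker for Mac/Windows lives inside the VM:

```shell
# Stand-in for the mountpoint reported by `docker volume inspect
# mongodbdata` (hypothetical path and file, for illustration only):
SRC=/tmp/mongodbdata/_data
mkdir -p "$SRC"
echo "dummy" > "$SRC/collection-0.wt"

# Archive the volume contents. -C makes members relative to $SRC,
# which avoids absolute-path surprises when restoring:
tar czf /tmp/mongodbdata-backup.tar.gz -C "$SRC" .

# Verify what went into the archive:
tar tzf /tmp/mongodbdata-backup.tar.gz
```

Restoring is the reverse: extract the archive with -C pointing at the (stopped) volume's mountpoint.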
Per Docker Docs:
Volumes are the preferred mechanism for persisting data generated by and used by Docker containers.
docker volume create mongodbdata
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
Via Docker Compose:
version: '2'
services:
  mongodb:
    image: mongo:latest
    volumes:
      - ./<your-local-path>:/data/db
/data/db is the location of the data saved inside the container.
<your-local-path> is the location on your machine, i.e. the host machine, where the actual database journal files will be saved.
For anyone running a MongoDB container on Windows: as described here, there is an issue when you mount a volume from the Windows host into a MongoDB container using a path (what we call a local volume here).
You can work around the issue by using a Docker-managed volume:
docker volume create mongodata
Or using docker-compose, which I prefer:
version: "3.4"
services:
  ....
  db:
    image: mongo
    volumes:
      - mongodata:/data/db
    restart: unless-stopped
volumes:
  mongodata:
Tested on Windows 10, and it works.
Found a link: VirtualBox shared folders are not supported by MongoDB.
Related
I'm trying to back up and restore a docker volume. It is a named volume attached to a postgres:10 db image, seeded via pg_dump/pg_restore. I'm using docker-compose, and here is what it looks like:
services:
  db:
    image: postgres:10
    ports:
      - '5432:5432'
    volumes:
      - postgres-data:/var/lib/postgresql/data
volumes:
  postgres-data:
    driver: local
  localstack-data:
    driver: local
Here is what the mounted volume looks like on the db service:
"Mounts": [
    {
        "Type": "volume",
        "Name": "postgres-data",
        "Source": "/var/lib/docker/volumes/postgres-data/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "rw",
        "RW": true,
        "Propagation": ""
    }
]
Now, when I say "backup/restore", I mean I want a backup of the data inside that volume so that, after I have messed the data up, I can simply replace it with the backup and have fresh data: a simple "restore from snapshot" type of action. I'm following the documentation found on docker.com.
To test it:
I add a user to my postgres-data
Stop my container: docker stop $(docker ps -q)
Perform the backup (db_1 is the container name): docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox tar cvf /backup/backup.tar /var/lib/postgresql/data
Delete the user after the backup is complete
Here is an example of the backup command's output, which completes with NO errors:
tar: removing leading '/' from member names
var/lib/postgresql/data/
var/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/PG_VERSION
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_hba.conf
var/lib/postgresql/data/lib/postgresql/data/lib/postgresql/data/pg_stat/
Once done, the tar file is created. Now, maybe the error is in the path? Looking at the output, I see lib/postgresql/data nested over and over; I'm not sure whether that is expected.
Once done, I perform the "restore": docker run --rm --volumes-from db_1 -v $(pwd):/backup busybox sh -c "cd /var/lib/postgresql/data && tar xvf /backup/backup.tar --strip 1"
What I expect: To see the deleted user back in the db. Basically, to see the same data that was in the volume when I performed the backup.
What I'm seeing: the data is not restored. It's still the same data: no user restored, and anything that was changed stays changed.
Again, perhaps what I'm doing is wrong, or overkill for the simple goal of hot-swapping a used volume for a fresh, untouched one. Any ideas would be helpful, whether debugging tips or a better approach.
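The repeated lib/postgresql/data nesting in the backup listing is a clue: tar stored the members as var/lib/postgresql/data/..., and --strip 1 removes only the leading var, so each restore extracts lib/postgresql/data/... nested inside the data directory instead of overwriting it. A minimal local sketch with plain tar, no Docker needed (/tmp/demo is a hypothetical stand-in for the container filesystem):

```shell
# Recreate the layout the busybox backup container sees:
rm -rf /tmp/demo
mkdir -p /tmp/demo/var/lib/postgresql/data
echo "10" > /tmp/demo/var/lib/postgresql/data/PG_VERSION

# 1. Back up, as the docker command does: members are var/lib/postgresql/data/...
cd /tmp/demo
tar cf backup.tar var/lib/postgresql/data

# 2. Simulate data loss after the backup:
rm /tmp/demo/var/lib/postgresql/data/PG_VERSION

# 3. Restore with only one component stripped: tar drops "var" and
#    extracts lib/postgresql/data/... NESTED inside the data dir,
#    leaving the real files untouched:
cd /tmp/demo/var/lib/postgresql/data
tar xf /tmp/demo/backup.tar --strip-components=1

# 4. Stripping all four leading components puts the files back in place:
tar xf /tmp/demo/backup.tar --strip-components=4
cat PG_VERSION    # → 10
```

Under that reading, changing the restore to --strip-components 4 (or creating the archive with -C /var/lib/postgresql/data . so members are already relative) should make the files land where postgres expects them.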
I deleted an old postgresql container that held 4 databases; those databases were persisted in a volume. Later I made another postgresql container and mounted the old volume (holding the 4 databases) into it. The volume mounted successfully, but when I exec into the new container, open the psql shell, and type \l, it does not show the 4 databases stored in the old volume. I performed the following steps to create the new container and mount the volume:
Created the new container:
sudo docker run --name postgresql-container -p 5432:5432 -e POSTGRES_PASSWORD=somePassword -d postgres
Attached the old postgresql volume (holding the 4 databases) to the new container:
sudo docker container run -itd -v bf6c9d65bce874795c1b3af1071008a6115b3f63400efb8af655bb9a1ae0f08a:/var/lib/postgresql/data postgres
In step 2 the volume attached successfully; I inspected the container with the following command:
sudo docker container inspect fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b
You can see the volume is correctly mounted in the JSON below:
"Mounts": [
    {
        "Type": "volume",
        "Name": "c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58",
        "Source": "/var/lib/docker/volumes/c792a1d5214bcb2ef57eab1b07b41871a0fdf0159d0efa545b0c0e71dee7ba58/_data",
        "Destination": "/var/lib/postgresql/data",
        "Driver": "local",
        "Mode": "z",
        "RW": true,
        "Propagation": ""
    }
],
When I exec into this new container and inspect the databases, it does not show the ones previously stored in the volume:
sudo docker exec -it fb1f0924f127c6ae3aa49d2c4ee1331697cc5a4930d6c7eae9944defd6c6ef5b bash
Where am I going wrong?
I want to ensure my Postgres data (using a Linux-based image) persists even after my Windows host machine restarts.
I tried to follow the steps in How to persist data in a dockerized postgres database using volumes:
docker-compose.yml
volumes:
  - ./postgres_data:/var/lib/postgresql/data
However, I'm getting this error:
waiting for server to start....FATAL: data directory "/var/lib/postgresql/data/pgdata" has wrong ownership
HINT: The server must be started by the user that owns the data directory.
stopped waiting
pg_ctl: could not start server
Then I tried to follow the steps in https://forums.docker.com/t/trying-to-get-postgres-to-work-on-persistent-windows-mount-two-issues/12456/5
The suggested method is:
docker volume create --name postgres_data --driver local
docker-compose.yml
services:
  postgres:
    volumes:
      - postgres_data:/var/lib/postgresql/data
volumes:
  postgres_data:
    external: true
However, I'm confused by the command docker volume create --name postgres_data --driver local, as it doesn't mention an exact path on the Windows host machine.
I tried
C:\celery-hello-world>docker volume create postgres_data
postgres_data
C:\celery-hello-world>docker volume inspect postgres_data
[
    {
        "CreatedAt": "2018-02-06T14:54:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/postgres_data/_data",
        "Name": "postgres_data",
        "Options": {},
        "Scope": "local"
    }
]
May I know which Windows directory the postgres_data volume is mounted to?
Today I was asking myself the same question, and I figured it out.
First, note that inside the postgres container the path is:
/var/lib/postgresql/data
The path you tried to track down is a different one:
C:\celery-hello-world>docker volume inspect postgres_data
[
    {
        "CreatedAt": "2018-02-06T14:54:48Z",
        "Driver": "local",
        "Labels": {},
        "Mountpoint": "/var/lib/docker/volumes/postgres_data/_data",
        "Name": "postgres_data",
        "Options": {},
        "Scope": "local"
    }
]
/var/lib/docker/volumes/postgres_data/_data
I am also using Docker for Windows with Linux containers.
As you can check, normally you will not see this docker machine:
docker-machine ls
NAME ACTIVE DRIVER STATE URL SWARM DOCKER ERRORS
But there's a trick to access it when running Linux containers:
docker run -it --rm --privileged --pid=host justincormack/nsenter1
Just run this from your CLI and it'll drop you in a container with
full permissions on the Moby VM. Only works for Moby Linux VM (doesn't
work for Windows Containers). Note this also works on Docker for Mac.
Reference: https://www.bretfisher.com/getting-a-shell-in-the-docker-for-windows-vm/
From there you can navigate to the volume's folder (note that I named my volume postgresql-volume):
cd /var/lib/docker/volumes/postgresql-volume/_data && ls
PG_VERSION pg_dynshmem pg_notify pg_stat_tmp postgresql.auto.conf
base pg_hba.conf pg_replslot pg_subtrans postgresql.conf
global pg_ident.conf pg_serial pg_tblspc postmaster.opts
pg_clog pg_logical pg_snapshots pg_twophase postmaster.pid
pg_commit_ts pg_multixact pg_stat pg_xlog
So, to be clear, it's inside the VM provided by Docker for Windows, in the directory above.
Docker for Windows needs Hyper-V enabled for virtualization.
You can also find the disk image location in Docker for Windows: go to Settings -> Advanced -> Disk image location. In my case it's:
"C:\ProgramData\DockerDesktop\vm-data\DockerDesktop.vhdx"
The same can also be found in Hyper-V Manager.
I am learning docker and deploying some sample images from Docker Hub. One of them requires postgresql. When I deploy without specifying a volume, it works beautifully; when I specify the volume's 'path on host', it fails with an inability to fsync properly. My question: when I inspect the volumes, I cannot find where docker is storing them. I'd like to be able to specify a volume so I can move the data if/when needed. Where does Docker store this on a Windows machine? I tried enabling the volume through Kitematic, but the container became unusable.
> docker volume inspect 0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62
[
    {
        "Driver": "local",
        "Labels": null,
        "Mountpoint": "/var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data",
        "Name": "0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62",
        "Options": {},
        "Scope": "local"
    }
]
I can create a volume through docker but am not sure where it is stored on the hard disk.
If you're on Windows 10 and use Docker for Windows, Docker creates a VM and runs it on your local Hyper-V; the volumes you create are then located inside this VM, which is stored in a file called MobyLinuxVM.vhdx (you can check it in Docker's settings).
One way to have your data on your host computer for now is to share a drive in the Docker settings and then map your postgres data folder to your Windows hard drive.
Something like: docker run -it -v /c/mypgdata:/var/lib/postgresql/data postgres
Another way would be to create a volume with a specific driver; take a look at the existing volume drivers to see whether one does what you want.
This one could be of interest for you: https://github.com/CWSpear/local-persist
You can also enter the MobyLinux VM with this "kind of" hack:
#get a privileged container with access to Docker daemon
docker run --privileged -it --rm -v /var/run/docker.sock:/var/run/docker.sock -v /usr/bin/docker:/usr/bin/docker alpine sh
#run a container with full root access to MobyLinuxVM and no seccomp profile (so you can mount stuff)
docker run --net=host --ipc=host --uts=host --pid=host -it --security-opt=seccomp=unconfined --privileged --rm -v /:/host alpine /bin/sh
#switch to host FS
chroot /host
#and then go to the volume you asked for
cd /var/lib/docker/volumes/0622ff3e0de10e2159fa4fe6b7cd7407c6149067f138b72380a5bbe337df8f62/_data
Found here: http://docker-saigon.github.io/post/Docker-Beta/
This docker-compose.yml:
services:
  database:
    image: mongo:3.2
    ports:
      - "27017"
    command: "mongod --dbpath=/usr/database"
    networks:
      - backend
    volumes:
      - dbdata:/usr/database
volumes:
  dbdata:
results in this error (snipped):
database_1 | 2016-11-28T06:30:29.864+0000 I STORAGE [initandlisten] exception in initAndListen: 98 Unable to create/open lock file: /usr/database/mongod.lock errno:13 Permission denied Is a mongod instance already running?, terminating
Ditto for just trying to run the command in a container using that image directly:
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
But, if I run /bin/bash when starting the container, and THEN start mongo, we're OK:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
root@8aab722fad89:/# mongod --dbpath=/usr/database
Based on the output, the difference seems to be that in the second scenario, the command is run as root.
So, my questions are:
Why does the /bin/bash method work, when the others do not?
How can I replicate that reason, in the docker-compose?
Note: I'm on OSX, since that seems to affect whether you can mount a host directory as a volume for Mongo to use (not that I'm doing that).
To clarify: this image (hub.docker.com/_/mongo) is an official MongoDB docker image from Docker Hub, but NOT an official docker image from MongoDB.
Now to answer your questions,
Why does the /bin/bash method work, when the others do not?
This answer is based on Dockerfile v3.2. First, note that your volume mount flag, -v /usr/database, essentially creates a directory in the container owned by root.
Your command below failed with permission denied because the docker image runs the command as user mongodb (see this dockerfile line), while the directory /usr/database is owned by root.
$ docker run -v /usr/database mongo:3.2 mongod --dbpath=/usr/database
Whereas if you execute /bin/bash and then manually run mongod:
$ docker run -it -v /usr/database mongo:3.2 /bin/bash
you are logged in as root and executing mongod as root, so it has permission to create database files in /usr/database/.
Also, the line below works because it points to /data/db, whose ownership has been corrected for user mongodb (see this dockerfile line):
$ docker run -v db:/data/db mongo:3.2
How can I replicate that reason, in the docker-compose?
The easiest solution is to use command: "mongod --dbpath=/data/db", because that directory's ownership has already been corrected in the Dockerfile.
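Sketched as a compose file (a hedged, trimmed adaptation of the compose file above, keeping its service and volume names; only the command and the mount target change, and the ports/networks entries are omitted for brevity):

```yaml
services:
  database:
    image: mongo:3.2
    # Point mongod at /data/db, whose ownership the image already fixes:
    command: "mongod --dbpath=/data/db"
    volumes:
      - dbdata:/data/db
volumes:
  dbdata:
```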
If you intend to use a host volume, you would probably have to add a mongodb user on your OSX host and change the relevant directories' permissions. Modifying the ownership of a volume mount is outside the scope of docker-compose.
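On a Linux host, that ownership fix can be sketched as follows (assuming uid/gid 999 for the image's mongodb user, per the Dockerfile lines cited above; /tmp/mongo-data is a hypothetical host directory, and the chown only runs when you are root):

```shell
# Hypothetical host directory that would back the container's data path:
mkdir -p /tmp/mongo-data

# The official image runs mongod as user "mongodb", uid/gid 999.
# Hand that uid ownership of the directory (needs root on the host):
if [ "$(id -u)" = "0" ]; then
    chown -R 999:999 /tmp/mongo-data
fi

# Show the resulting owner; mongod can now create its lock file here:
stat -c '%u:%g' /tmp/mongo-data
```

A bind mount like -v /tmp/mongo-data:/data/db would then be writable by mongod, though the vboxsf/fsync caveat discussed earlier still applies to VirtualBox shared folders.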