I've created a Docker PostGIS container with the following command:
docker run -p 5435:5432 -e POSTGRES_PASSWORD=xxxxxxx -e POSTGRES_INITDB_ARGS="-d" mdillon/postgis:9.6
This created a volume for data in /var/lib/docker/volumes/[some_very_long_id]/_data
Now I need to move this volume somewhere else to make backups easier for my outsourcing contractor... and I don't know how to do this. I'm a bit lost, as there seem to be several alternatives, for example data volumes versus filesystem bind mounts.
So what's the correct way to do this today? And how do I move my current data directory to a better place?
Thanks.
You can declare a volume mount when you run your container. For example, you could run your container like this:
docker run -p 5435:5432 -e POSTGRES_PASSWORD=xxxxxxx -e POSTGRES_INITDB_ARGS="-d" \
-v /the/path/you/want/on/your/host:/var/lib/postgresql/data \
mdillon/postgis:9.6
This way the Postgres data directory will live at /the/path/you/want/on/your/host on your host.
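If you also need to move the data you already have into that path, here is one hedged sketch (assuming the original container is stopped first; the volume id placeholder comes from the question, and the container name is hypothetical):
docker stop my_old_postgis   # hypothetical name; stop Postgres so the files are consistent
sudo mkdir -p /the/path/you/want/on/your/host
sudo cp -a /var/lib/docker/volumes/[some_very_long_id]/_data/. /the/path/you/want/on/your/host/
# then start a new container with the -v bind mount shown above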
I didn't check or search deeply, but in your case I would suggest the following steps (a sketch follows below):
Create another container whose data directory is bind-mounted to an outside folder.
Use pg_basebackup to copy all the data from the old container to the new one, or set up replication.
That way, you have the data folder outside the container.
Hopefully this helps your case.
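For example, a minimal sketch of that migration using pg_dumpall rather than pg_basebackup (the container names old_postgis/new_postgis, the port 5436, and the host path /srv/postgres-data are all made up for illustration):
# dump every database from the old container to a file on the host
docker exec old_postgis pg_dumpall -U postgres > /tmp/all.sql
# start a new container whose data directory is bind-mounted on the host
docker run -d --name new_postgis -p 5436:5432 -e POSTGRES_PASSWORD=xxxxxxx \
  -v /srv/postgres-data:/var/lib/postgresql/data mdillon/postgis:9.6
# restore everything into the new container
docker exec -i new_postgis psql -U postgres < /tmp/all.sql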
Related
I'm very new to using docker and I've created a postgres container using
docker run --name mytrainingdb -e POSTGRES_PASSWORD=mysecretpassword -d postgres. Then I connected to it with docker exec -it <container-id> bash and then psql.
Then I stopped the container.
My question is, how do I reconnect to the same database? I tried to run the same docker run command, but it says the name 'mytrainingdb' is already in use, which means it is trying to create the container afresh, which is not what I want. I hope my expectation is right that when I restart my laptop or resume work I can just restart the same container and my data/config will be preserved?
The documentation also mentions that we can link a host directory to a volume of the pg container to make the stored data accessible to us, but I'm OK with Docker managing the storage for that database.
You get an error when you try to re-run the same command because Docker is trying to create a new container with the same name as the previous one, "mytrainingdb". If you close Docker and reopen it, you will still find your container, but it's not running; you can start it again with docker start mytrainingdb, or you can remove it with docker rm mytrainingdb.
However, don't restart Docker just because you want to create a new container with the same name! If you want to start a new container with the same name while your container is still running, you can first stop it with docker stop mytrainingdb and remove it with docker rm mytrainingdb, or you can just do docker rm -f mytrainingdb (this force-removes the running container) and then create a new one, as shown below.
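Put together, the lifecycle described above looks like this:
docker stop mytrainingdb   # stop the running container
docker rm mytrainingdb     # remove it so the name becomes free
# or do both in one step:
docker rm -f mytrainingdb
# now the original run command works again
docker run --name mytrainingdb -e POSTGRES_PASSWORD=mysecretpassword -d postgres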
As for volumes, you just created one by default; its name is a kind of hash, and it is found under /var/lib/docker/volumes/ on the host. That's because images such as PostgreSQL, or databases in general, declare volumes to persist their data. The volume gets created when the container is run and is handy for saving persistent data, whether you start the container with -v or not.
The volume you talked about in your question is called a bind mount: you basically bind a directory or file from the host (outside) to a path inside the container:
docker run -v /hostdir:/containerdir, which in your case would be docker run -v /hostdir:/var/lib/postgresql/data
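If you want to find the hash-named volume Docker created for you, a quick sketch (the inspect format string is just one way to print the mounts):
docker volume ls                                      # lists all volumes, including the hash-named one
docker inspect -f '{{ json .Mounts }}' mytrainingdb   # shows which volume backs /var/lib/postgresql/data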
If you restart Docker or your computer, running containers won't be automatically restarted. You can start your container again with docker start mytrainingdb (see the related question), then connect with your docker exec command.
(One tip: instead of running bash and then psql, you can run psql directly, e.g. docker exec -it mytrainingdb psql --user postgres.)
Your understanding of data persistence is correct, docker will manage the data and it will still be around.
From the postgres image documentation
There are several ways to store data used by applications that run in Docker containers. We encourage users of the postgres images to familiarize themselves with the options available, including:
Let Docker manage the storage of your database data by writing the database files to disk on the host system using its own internal volume management. This is the default and is easy and fairly transparent to the user. The downside is that the files may be hard to locate for tools and applications that run directly on the host system, i.e. outside containers.
You can add the --rm argument so that whenever the container stops, whether you stop it manually or it stops for any other reason (its task is done or it fails), the container is removed automatically.
In your case, you can use this:
docker run --name mytrainingdb --rm -e POSTGRES_PASSWORD=mysecretpassword -d postgres
On my MacBook I have PostgreSQL running in a Docker container and I use a mapped volume to persist the data. This works perfectly locally. However, when I try to do the same on the Ubuntu server, the 'initial' data from the mapped volume is not picked up: Postgres starts up in an 'empty' initial state.
However, when I add a table, and data in that table, in the default postgres database, it IS persistent. So the volume mapping seems to work.
Furthermore, it is interesting to note that I get an error when I try to create a table in a new database. The new database is persistent as well, but the table can't be saved, as an error is thrown:
could not open file "base/16384/2611": No such file or directory
This is expected as the folder base/16384 doesn't exist.
To me this seems like a user/permissions issue, but I have no clue how to fix it.
I tried running the container as root, which didn't help.
Any suggestions?
I'm starting the container either with docker-compose or from the command line using:
docker run --rm --name pg -e POSTGRES_PASSWORD=[password] -d -p 5432:5432 -v /root/docker/volumes/postgres:/var/lib/postgresql/data postgres -c listen_addresses='*'
Instead of moving the actual data folder around, I used pg_dump and pg_restore within the Docker containers, per a suggestion on the Docker forums. This did the trick.
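For reference, a hedged sketch of that pg_dump/pg_restore round trip (the container names pg_old/pg_new and the database name mydb are hypothetical):
docker exec pg_old pg_dump -U postgres -Fc mydb > mydb.dump   # custom-format dump onto the host
docker exec pg_new createdb -U postgres mydb                  # the target database must exist first
docker exec -i pg_new pg_restore -U postgres -d mydb < mydb.dump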
I want to set up my database inside a container using the postgres:9.3 Docker image; however, I want to store my data on an external drive.
I attempted it using the command
docker run -dit -p 5432:5432 -v /mnt/external/docker_volume:/var/lib/postgresql/data --name mydatabase postgres:9.3
The container got created, as the command echoes the container id, but it is not shown as running by docker ps. The above command works for other images, so my gut feeling is that there is a conflict because this image has VOLUME defined in its Dockerfile, but I haven't figured out a way to get around it. Any help will be appreciated!
Apparently, the problem was that my /mnt/external/docker_volume was not empty and the Postgres init script didn't like that. I found this out after running with the -it option and watching the output in the terminal.
I answered my own question. I hope somebody in the future will find this helpful. :)
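For anyone hitting the same thing: two quick commands that surface this kind of startup failure without re-running in the foreground (using the container name from the command above):
docker ps -a            # the container shows up with an Exited status
docker logs mydatabase  # prints the init script's error output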
I'm new to Docker. You can take a look at my last questions here and see that I've been asking questions along this line. I read the docs carefully, and also read several articles on the web (which is pretty difficult given the rapid versioning in Docker), but I still can't get a clear picture of how I'm supposed to use containers and what that means for persistence.
The official postgres image creates a volume in its Dockerfile using this command:
VOLUME /var/lib/postgresql/data
And the readme.md file shows only one example of how to run the image:
docker run --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
When I try that, I can see (with docker inspect some-postgres) that the created volume lives in a random directory on my host, and it seems to "belong" to that particular container.
So here are some questions that may help my understanding:
It looks (from the official postgres image docs) like the expected usage is to use docker run to create the container, and docker start afterwards (this last bit I inferred from the fact that -d and --name are used). This makes sense to me, but it conflicts with a lot of information I've seen saying containers should be ephemeral. If I spin up a new container every time, then the default VOLUME config in the Dockerfile doesn't work for persistence. What's the right way of doing things?
Given the above is correct (that I can run once and start many times), the only reason I see for the VOLUME command in the Dockerfile is I/O performance, because it bypasses the CoW filesystem. Is this right?
Could you please explain clearly what's wrong with this approach compared to the (I think unofficially) recommended way of using a data container? I'd like to know the pros/cons for my specific situation, which is a Node.js intranet application.
Thanks,
Awer
You're correct that you can start the container using docker run and start it again in the future using docker start, assuming you haven't removed the container. You're also correct that Docker containers are supposed to be ephemeral and you shouldn't be in bad shape if the container disappears. What you can do is mount a volume into the container at the storage location of the database:
docker run -v /postgres/storage:/var/lib/postgresql/data --name some-postgres -e POSTGRES_PASSWORD=mysecretpassword -d postgres
Since the official image writes its data to /var/lib/postgresql/data inside the container, mounting that path means that even if you remove the postgres container, all your data will still be there when you start back up. You may need to mount some other areas that control configuration as well, unless you modify and save the container.
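As a hedged demonstration that the data really does survive removal (the host path and the throwaway table are made up; give Postgres a few seconds to finish starting before each exec):
docker run -v /postgres/storage:/var/lib/postgresql/data --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres
docker exec -it some-postgres psql -U postgres -c "CREATE TABLE demo(x int);"
docker rm -f some-postgres   # throw the container away entirely
docker run -v /postgres/storage:/var/lib/postgresql/data --name some-postgres \
  -e POSTGRES_PASSWORD=mysecretpassword -d postgres
docker exec -it some-postgres psql -U postgres -c "\dt"   # demo is still listed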
I want to make two minor improvements to the official MongoDB Docker image so that it starts with --auth enabled and uses a separate data container to store the data. What's the best way to do this?
If all of that is set up, how should I start the shell? Will it be possible for someone without a username and password to access any of the available databases? Which directory should I back up?
EDIT
Apparently, this is not enough:
docker run --name mymongoname1 -v /my/local/dir:/data/db -d -P mongo:latest
OK, so here's a partial answer, because I haven't messed around with Docker auth.
Containerising storage is done with a storage (data-only) container. That's basically a container created with a no-op command, with some volumes assigned.
So for elasticsearch (which I know isn't mongo, but it is at least a NoSQL db) I've been using:
docker create -v /es_data:/es_data --name elasticsearch_data es-base /bin/true
Then:
docker run -d -p 9200:9200 --volumes-from elasticsearch_data elasticsearch-2.1.0
This connects the data container's volumes to my es container. In this example it passes through a host directory, but you don't actually need to anymore, because the container can hold the data in the Docker filesystem. (And then I think you can push the data container around too, but I've not got that far!)
If you run docker ps -a you will see the data container in the Created state. Just be careful, if you're running a cleanup script, not to delete it, because unlike running containers, it can be deleted freely...
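Applying the same pattern back to the original mongo question, a hedged sketch (the names mongo_data/mymongo are hypothetical; the official mongo image passes extra arguments through to mongod, so --auth can be appended, and the tar trick is the usual way to back up a volume):
docker create -v /data/db --name mongo_data mongo:latest /bin/true   # data-only container
docker run -d -p 27017:27017 --volumes-from mongo_data --name mymongo mongo:latest --auth
# back up /data/db by borrowing the volume from another throwaway container:
docker run --rm --volumes-from mongo_data -v $(pwd):/backup busybox tar czf /backup/mongo_backup.tar.gz /data/db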