Saving data in Kubernetes is not persistent, so we should use a volume.
For example, we can mount "/apt" to persist the data in "apt".
Now I want to mount "/", but I get this error.
Error: Error response from daemon: invalid volume specification:
'/var/lib/kubelet/pods/26c39eeb-85d7-11e9-933c-7c8bca006fec/volumes/kubernetes.io~rbd/pvc-d66d9039-853d-11e9-8aa3-7c8bca006fec:/':
invalid mount config for type "bind": invalid specification:
destination can't be '/'
The question is: how can I mount "/" in Kubernetes?
Not completely sure about your environment, but I ran into this issue today because I wanted to browse the entire root filesystem of a container via SSH (WinSCP) to the host. I am using Docker in a Photon OS VM environment. The answer I've come to is: you can't do what you're trying to do, but you may be able to accomplish what you're trying to accomplish. Let's say I create a volume called mysql and then create a new (oversimplified) mysql container using that volume as root:
docker volume create --name mysql
docker run -d --name=mysqldb -v /var/lib/docker/volumes/mysql:/ mysql:5.7
Docker will cry and say I can't mount to root (destination can't be '/'). However, since we know where volumes live on the host (/var/lib/docker/volumes/), we can simply create the container as normal and an arbitrarily-named volume will be placed in that folder. So if your goal is (as mine was) to SSH to the host and browse the files in the root of your container, you CAN do that; you just need to go to the correct arbitrarily-named volume. In my case it is "/var/lib/docker/volumes/12dccb66f2eeaeefe8e1feabb86f3c6def87b091dabeccad2902851caa97f04c/_data", which isn't as pretty as "/var/lib/docker/volumes/mysql", but it gets the job done.
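Rather than guessing among the hash-named directories, you can also ask Docker for a volume's host path directly (using the mysql volume from the example above):

# print the host directory backing the named volume
docker volume inspect -f '{{ .Mountpoint }}' mysql

For a named volume this typically prints /var/lib/docker/volumes/mysql/_data, saving you from hunting through the hash-named directories.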
Hope that helps someone.
Related
For automated testing we can't use a DB Docker container with a defined volume. I'm just wondering whether there is an "official" Postgres image available with no mounted volume or volume definitions.
Or, if someone has a Dockerfile that would create a container without any volume definitions, it would be very helpful to see or try one.
Or is there any way to override a defined volume mount and just keep the data files inside the Docker container that runs the DB?
I think you are mixing up volumes and bind mounts.
https://docs.docker.com/storage/
VOLUME Dockerfile instruction: a volume declared with the VOLUME instruction in a Dockerfile is created in Docker's own area on the host, /var/lib/docker/volumes/.
I don't think it is possible to run Docker without access to this directory, and it would not be advisable to restrict Docker's permissions on it; these are Docker's own directories after all.
The Postgres Dockerfile contains this instruction, for example: https://github.com/docker-library/postgres/blob/master/15/bullseye/Dockerfile
line 186: VOLUME /var/lib/postgresql/data
This means that the /var/lib/postgresql/data directory inside the postgres container will be a volume stored on the host under /var/lib/docker/volumes/somerandomhashorguid....., i.e. in a directory with a random name.
You can also create a volume like this with docker run:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /etc postgres:15.1
This way the /etc directory inside the container will be stored on the host in /var/lib/docker/volumes/somerandomhashorguid.....
This volume solution is needed for containers that need extra IO, because the files of the containers (that are not in volumes) are stored in the writeable layer as per the docs: "Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem."
So you could technically remove the VOLUME instruction from the postgres Dockerfile and rebuild the image for yourself, and use that image to create your postgres container, but it would have worse performance.
Bind mounts are the type of data storage solution that can be mounted to anywhere on the host filesystem. For example if you would run:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /tmp/mypostgresdata:/var/lib/postgresql/data postgres:15.1
(Take note of the -v flag here: there is a colon between the host and the container directory, while in the volume version of this flag above there was no host directory and no colon.)
then a directory /tmp/mypostgresdata would be created on your Docker host machine, and the container's /var/lib/postgresql/data directory would be mapped there instead of to Docker's internal volume directory /var/lib/docker/volumes/somerandomhashorguid.....
My general rule of thumb would be to use volumes - as in /var/lib/docker/volumes/ - whenever you can and deviate only if really necessary. Bind mounts are not flexible enough to make an image/container portable, and the writable container layer has worse performance than Docker volumes.
You can list Docker volumes with docker volume ls, but you will not see bind-mounted directories there. For those you will need to run docker inspect containername
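For example, to see every volume and bind mount of a container in one place (the container name is illustrative):

# .Mounts lists each mount with its type (volume or bind), source and destination
docker inspect -f '{{ json .Mounts }}' mypostgres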
"You could just copy one of the dockerfiles used by the postgres project, and remove the VOLUME statement. github.com/docker-library/postgres/blob/… –
Nick ODell
Nov 26, 2022 at 18:05"
answered Nick abow.
And that edited Dockerfile would build "almost" Docker Official Image.
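A minimal sketch of that approach (the version directory and image tag are illustrative, and the exact position of the VOLUME line may differ between versions):

# grab the official build context, which also contains the entrypoint scripts
git clone https://github.com/docker-library/postgres.git
cd postgres/15/bullseye
# drop the VOLUME instruction so the data stays in the container's writable layer
sed -i '/^VOLUME/d' Dockerfile
# build your own "almost official" image
docker build -t postgres-novolume:15 .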
I have a docker-compose.yml from which I started a couple of services. I added a new volume mapping to one of the services and then tried to restart the container with
docker compose restart <service_name>
but the volume is still not mapped and not available from within the container.
What is the right way to add a volume to a service defined with docker compose?
OK, so it turns out that restart just restarts the existing container and changes nothing about the parameters with which it was created.
In order to have compose take volume mapping changes in the docker-compose.yml file into account, one has to run:
docker compose up --build <service_name>
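For context, a sketch of the kind of docker-compose.yml edit involved (service name, image, and paths are made up):

services:
  api:
    image: myapi:latest
    volumes:
      - ./src:/app/src    # the newly added mapping

Running docker compose up -d <service_name> after such an edit recreates the container with the new mount, whereas restart reuses the container exactly as it was created.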
There might be other solutions, but this is what I ended up doing.
I have a container called "postgres", built with a plain docker command, that has a configured PostgreSQL inside it. Also, I have a docker-compose setup with two services - "api" and "nginx".
How can I add the "postgres" container to my existing docker-compose setup as a service, without rebuilding? The PostgreSQL database is configured manually and filled with data, so rebuilding is a really, really bad option.
I went through the docker-compose documentation, but found no way to do this without a re-build, sadly.
Unfortunately this is not possible.
You don't reference containers in docker-compose; you use images.
You need to create a volume and/or a bind mount to keep your database data.
This is because containers do not persist data on their own: if you have filled one with data and did not attach a bind mount or a volume to it, you will lose everything once the container is removed.
Recommendation:
docker cp
Docker cp will copy the contents from container to host. https://docs.docker.com/engine/reference/commandline/container_cp/
Create a folder to save all your PostgreSQL data (ex: /home/user/postgre_data/);
Copy the contents of your PostgreSQL container's data directory to this folder (see the Docker Hub postgres page for further reference);
Run a new PostgreSQL (same version) container with a bind mount pointing to the new folder;
This will maintain all your data and you will be able to volume or bind mount it to use on docker-compose.
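A minimal sketch of those steps, using the folder from above (the container name "postgres" comes from the question; POSTGRES_PASSWORD and the image tag are illustrative and should match your setup):

# 1. copy the data directory out of the existing container
docker cp postgres:/var/lib/postgresql/data /home/user/postgre_data/
# 2. run a new container of the same version with a bind mount pointing at it
docker run -d --name postgres2 -e POSTGRES_PASSWORD=password \
  -v /home/user/postgre_data/data:/var/lib/postgresql/data postgres:15.1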
Reference of docker-compose volumes: https://docs.docker.com/compose/compose-file/#volumes
Reference of postgres docker image: https://hub.docker.com/_/postgres/
Reference of volumes and bind mounts: https://docs.docker.com/storage/bind-mounts/#choosing-the--v-or---mount-flag
You can save this container as a new image using docker container commit and use that newly created image in your docker-compose file:
docker container commit [OPTIONS] CONTAINER [REPOSITORY[:TAG]]
I however prefer creating images with the use of Dockerfiles and scripts to fill my data etc.
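For example (the image name and tag are made up):

# snapshot the existing "postgres" container as a new image
# (note: commit does not capture data stored in volumes)
docker container commit postgres my-postgres:snapshot

and then reference it in docker-compose.yml alongside "api" and "nginx":

services:
  db:
    image: my-postgres:snapshot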
I am relatively new to Docker. I'd like to set up a postgres database, but I wonder how to make sure that the data isn't lost if I recreate the container.
Then I stumbled upon named volumes (not bind mounts) and how to use them.
But... in a Dockerfile you can't use named volumes, e.g. data:/var/lib etc.
As I understand it, with a Dockerfile it's always an anonymous volume, so every single time I recreated the container it would get its own new volume.
So here comes my question:
Firstly: how do I make sure that, if the container gets updated or recreated, the postgres database inside the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
Secondly: how does this work with a yml file?
Is it possible to point multiple replicas of such a database container at one volume (high-availability mode)?
It would really be great if someone could get me a hint or best practices.
Thank you in advance.
Looking at the Dockerfile for Postgres, you see that it declares a volume instruction:
VOLUME /var/lib/postgresql/data
Every time you run a new Postgres container without specifying a --volume option, Docker automatically creates a new volume. The volume is given a random name.
You can see all volumes by running the command:
docker volume ls
You can also inspect the files stored on the host by the volume, by inspecting the host path using:
docker volume inspect <volume-name>
So when you don't specify the --volume option for the run command, Docker creates volumes for all volumes declared in the Dockerfile. This is mainly a safety net for when you forget to name your volume, so the data isn't lost.
Firstly: how do I make sure that, if the container gets updated or recreated, the postgres database inside the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
If you want Docker to use the same volume, you need to specify the --volume option. Once specified, Docker won't create a new volume; it will simply mount the existing volume onto the folder specified in the docker command.
As a best practice, name your volumes that have valuable data. For example:
docker run --volume postgresData:/var/lib/postgresql/data ...
The first time you run this command, the volume postgresData will be created and will back up /var/lib/postgresql/data on the host. The second time you run it, the same data stored on the host will be mounted into the container.
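A quick sketch demonstrating that behaviour (container names and password are illustrative):

# first run: Docker creates the named volume postgresData
docker run -d --name pg1 -e POSTGRES_PASSWORD=secret \
  --volume postgresData:/var/lib/postgresql/data postgres

# remove the container; the named volume survives
docker rm -f pg1

# second run: the same volume, and thus the same data, is mounted again
docker run -d --name pg2 -e POSTGRES_PASSWORD=secret \
  --volume postgresData:/var/lib/postgresql/data postgres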
Secondly: how does this work with a yml file? Is it possible to point multiple replicas of such a database container at one volume?
Yes, volumes can be shared between multiple containers. You can mount the same volume onto multiple containers, and the containers will use the same files. Docker Compose allows you to do that as well.
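A minimal compose sketch of a named volume attached to a database service (service and volume names are illustrative):

services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - postgresData:/var/lib/postgresql/data

volumes:
  postgresData: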
However, beware that volumes are limited to the host they were created on. When running containers on multiple machines, the volume needs to be accessible from all of them. There are ways/tools to achieve that, but they are a bit complex. This is still a limitation to be addressed in Docker.
I'm using docker-compose in one of my projects. During development I mount my source directory into a volume in one of my docker services for easy development. At the same time, I have a db service (psql) that mounts a named volume for persistent data storage.
I start up my solution and everything works fine:
$ docker-compose up -d
When I check my volumes I see the named volume and an "unnamed" one (the source volume):
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
The problem I experience is that, when I do
$ docker-compose down
...
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
both volumes remain. Every time I run
$ docker-compose down
$ docker-compose up -d
a new volume is created for my source mount
$ docker volume ls
DRIVER VOLUME NAME
local 19181286b19c0c3f5b67d7d1f0e3f237c83317816acbdf4223328fdf46046518
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
I know that this will not happen on my deployment server, since it will not mount the source, but is there a way to not make the mounted source persistent?
You can use the --rm option with docker run. To get the same effect with docker-compose, run docker-compose rm -v after stopping your containers with docker-compose stop.
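In this case that would be:

docker-compose stop
# remove the stopped containers along with their anonymous volumes
docker-compose rm -v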
If you go through the docs about data volumes, it's mentioned that
Data volumes persist even if the container itself is deleted.
So that means deleting a container will not automatically remove the volumes it created, whether named or anonymous.
Now if you read further down to Removing volumes
A Docker data volume persists after a container is deleted. You can create named or anonymous volumes. Named volumes have a specific source from outside the container, for example awesome:/bar. Anonymous volumes have no specific source. When the container is deleted, you should instruct the Docker Engine daemon to clean up anonymous volumes. To do this, use the --rm option, for example:

$ docker run --rm -v /foo -v awesome:/bar busybox top

This command creates an anonymous /foo volume. When the container is removed, the Docker Engine removes the /foo volume but not the awesome volume.
Just remove volumes with the down command:
docker-compose down -v