I am moving my service from Docker to Kubernetes, and I also have to copy over some files from my Docker volume.
I am using a PersistentVolumeClaim and a StorageClass in Kubernetes, and that part is already implemented.
But now I need to copy the contents of the folder /opt/checker/dataFiles to the same mount path on Kubernetes. How best to do it? Is there a better way than manually copying the files into the folder inside the Kubernetes container?
Apparently there isn't; I couldn't find one and ended up doing it manually:
Copy the contents from the Docker volume to your local disk (docker cp container:source_path local_dest_path)
Copy the contents into the Kubernetes volume at the mount path (kubectl cp local_dest_path namespace/pod:final_dest_path)
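For reference, a minimal sketch of those two steps with the path from the question (the container name checker, namespace my-namespace and pod name checker-0 are placeholders, not from the original setup):
# copy the data out of the running Docker container to the local disk
docker cp checker:/opt/checker/dataFiles ./dataFiles
# copy it into the pod at the same mount path (creates /opt/checker/dataFiles in the pod)
kubectl cp ./dataFiles my-namespace/checker-0:/opt/checker/dataFiles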
Related
For automated testing we can't use a DB Docker container with a defined volume. I'm wondering whether there is an "official" Postgres image with no mounted volume or volume definitions.
Or, if someone has a Dockerfile that creates a container without any volume definitions, it would be very helpful to see or try one.
Or is there any way to override a defined volume mount and just keep the data files inside the Docker container that is created to run the DB?
I think you are mixing up volumes and bind mounts.
https://docs.docker.com/storage/
The VOLUME Dockerfile command: a volume declared with the VOLUME command in a Dockerfile is created in Docker's own area on the host, which is /var/lib/docker/volumes/.
I don't think it is possible to run Docker without it having access to this directory, and it would not be advisable to restrict Docker's permissions on it; these are Docker's own directories after all.
The postgres Dockerfile has this command, for example: https://github.com/docker-library/postgres/blob/master/15/bullseye/Dockerfile
line 186: VOLUME /var/lib/postgresql/data
This means that the /var/lib/postgresql/data directory inside the postgres container will be a volume stored on the host under /var/lib/docker/volumes/somerandomhashorguid..... in a directory with a random name.
You can also create a volume like this with docker run:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /etc postgres:15.1
This way the /etc directory that is inside the container will be stored on the host in the /var/lib/docker/volumes/somerandomhashorguid.....
This volume solution is needed for containers that need extra IO, because the files of the containers (that are not in volumes) are stored in the writeable layer as per the docs: "Writing into a container’s writable layer requires a storage driver to manage the filesystem. The storage driver provides a union filesystem, using the Linux kernel. This extra abstraction reduces performance as compared to using data volumes, which write directly to the host filesystem."
So you could technically remove the VOLUME command from the postgres Dockerfile and rebuild the image for yourself, then use that image to create your postgres container, but it would have lower performance.
Bind mounts are the type of data storage that can be mounted anywhere on the host filesystem. For example, if you ran:
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v /tmp/mypostgresdata:/var/lib/postgresql/data postgres:15.1
(Take note of the -v flag here: there is a colon between the host and the container directory, while in the volume version of this flag above there was no host directory and no colon.)
then you would have a directory /tmp/mypostgresdata created on your Docker host machine, and the container's /var/lib/postgresql/data directory would be mapped there instead of Docker's internal volumes directory /var/lib/docker/volumes/somerandomhashorguid.....
My general rule of thumb would be to use volumes - as in /var/lib/docker/volumes/ - whenever you can, and deviate only if really necessary. Bind mounts are not flexible enough to keep an image/container portable, and the writable container layer has lower performance than Docker volumes.
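For instance, a named volume keeps the data in Docker's own area but gives it a stable, reusable name (a small sketch; the volume name pgdata is arbitrary):
# create a named volume and attach it instead of an anonymous one
docker volume create pgdata
docker run --name mypostgres -e POSTGRES_PASSWORD=password -v pgdata:/var/lib/postgresql/data postgres:15.1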
You can list Docker volumes with docker volume ls, but you will not see bind-mounted directories there. For those you will need to run docker inspect containername.
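For example, to see only where a given container's volumes and bind mounts actually live on the host (json is a built-in template function of docker inspect):
docker inspect --format '{{ json .Mounts }}' containername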
"You could just copy one of the dockerfiles used by the postgres project, and remove the VOLUME statement. github.com/docker-library/postgres/blob/… –
Nick ODell
Nov 26, 2022 at 18:05"
answered Nick abow.
And that edited Dockerfile would build "almost" Docker Official Image.
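If you want to try that, a rough sketch (assuming the 15/bullseye variant referenced earlier; the exact path, tag and sed syntax may differ on your system):
git clone https://github.com/docker-library/postgres.git
cd postgres/15/bullseye
sed -i '/^VOLUME/d' Dockerfile          # drop the VOLUME /var/lib/postgresql/data line
docker build -t postgres-novolume:15.1 .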
I modified the mount directory in the docker-compose.yml file. Which command should I use to make the new mount directory take effect?
Should I use docker-compose restart?
In the past, I used the docker compose restart command.
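Worth noting: docker-compose restart only restarts the existing containers and does not re-read changes made to docker-compose.yml, so the containers usually have to be recreated, for example:
docker-compose up -d                    # recreates containers whose configuration changed
docker-compose up -d --force-recreate   # or force recreation explicitly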
I'm trying to deploy a React app on my local machine with Docker Desktop and its Kubernetes cluster, using the Bitnami Apache Helm chart.
I'm following this tutorial.
The tutorial makes you publish the image to a public repo (step 2), and I don't want to do that. It is indeed possible to pass the app files through a persistent volume claim instead.
This is described in the following tutorial.
Step 2 of this second tutorial has you create a pod pointing to a PVC and then asks you to copy the app files there with the command
kubectl cp /myapp/* apache-data-pod:/data/
My issues:
I cannot use the * wildcard or else I get an error. To avoid this I just run
kubectl cp . apache-data-pod:/data/
This command copies the files into the pod, but it creates another data folder inside the already existing data folder in the pod filesystem, so the files end up under /data/data/ instead of directly under /data/.
I tried executing
kubectl cp . apache-data-pod:/
But this copies the files into the root of the pod filesystem, at the same level where the first data folder is.
I need to copy the data directly in <my_pod>:/data/.
How can I achieve such behaviour?
Regards
Use the full path in the command, as shown below, to copy local files to the pod.
If there are multiple containers in the pod, use the following syntax to copy a file from your local machine into the pod:
kubectl cp /<path-to-your-file>/<file-name> <pod-name>:<fully-qualified-file-name> -c <container-name>
Points to remember:
When referring to a file path on the pod, it is always relative to the WORKDIR you have defined in your image.
Unlike on Linux, the base directory does not always start from /; the WORKDIR is the base directory.
When there are multiple containers in the pod, you need to specify the container to use for the copy operation with the -c parameter.
Quick example of kubectl cp: here is the command to copy the index.html file from the pod's /var/www/html to the local /tmp directory.
There is no need to mention the full path when the doc root is the WORKDIR, i.e. the default directory of the image.
kubectl cp apache-pod:index.html /tmp
To make it less confusing, you can always write the full path like this
kubectl cp apache-pod:/var/www/html/index.html /tmp
Also refer to this Stack Overflow question for more information.
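As for the nested data folder from the question, one workaround is to stream a tar archive through kubectl exec instead of kubectl cp, which copies only the directory contents (a sketch, assuming the pod name apache-data-pod from the question and that tar is available in the container image):
# pack the local app directory and unpack it directly into /data inside the pod
tar cf - -C /myapp . | kubectl exec -i apache-data-pod -- tar xf - -C /data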
I am using k3d to run a local Kubernetes cluster.
I have created a cluster using k3d.
Now I want to mount a local directory as a persistent volume.
How can I do this while using k3d?
I know that in minikube I can run
$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
Then, if you mount /data into your Pod using hostPath, you will get your local directory data into the Pod.
Is there any similar technique while using k3d?
According to the answers to this GitHub issue, the feature you're looking for is not available yet.
Here is an idea from this link:
The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do k3d cluster create -v "$HOME/git:/git#agent:*" to get all the repositories on my host present in all agent nodes to be used for hot-reloading.
According to this documentation, one can use the following command with the appropriate flag:
k3d cluster create NAME -v [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]
This command mounts volumes into the nodes
(Format: [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]])
Example:
`k3d cluster create --agents 2 -v /my/path#agent:0,1 -v /tmp/test:/tmp/other#server:0`
Here is also an interesting article on how volumes and storage work in a K3s cluster (with examples).
I think this feature is not yet available
https://github.com/k3d-io/k3d/issues/566
So far we can only mount a volume when we create a new cluster:
k3d cluster create mykube --volume $HOME/go/src/github.com/nginx:/data
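Once the cluster is created with that volume flag, the local directory is visible inside the k3d nodes at /data, so a Pod can reach it through hostPath, as in the minikube case above. A minimal sketch (the pod and volume names are illustrative):
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hostpath-demo
spec:
  containers:
  - name: nginx
    image: nginx
    volumeMounts:
    - name: src
      mountPath: /usr/share/nginx/html
  volumes:
  - name: src
    hostPath:
      path: /data        # the path mounted into the k3d nodes
      type: Directory
EOF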
I'm using docker-compose in one of my projects. During development I mount my source directory as a volume in one of my Docker services for easy development. At the same time, I have a db service (psql) that mounts a named volume for persistent data storage.
I start my solution and everything works fine:
$ docker-compose up -d
When I check my volumes I see both the named and the "unnamed" (source) volume.
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
The problem I experience is that, when I do
$ docker-compose down
...
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
both volumes remain. Every time I run
$ docker-compose down
$ docker-compose up -d
a new volume is created for my source mount
$ docker volume ls
DRIVER VOLUME NAME
local 19181286b19c0c3f5b67d7d1f0e3f237c83317816acbdf4223328fdf46046518
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
I know that this will not happen on my deployment server, since it will not mount the source, but is there a way to not make the mounted source persistent?
You can use the --rm option with docker run. To get the same effect with docker-compose, you can run
docker-compose rm -v after stopping your containers with docker-compose stop
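In practice that sequence could look like this (the -f flag only skips the confirmation prompt):
docker-compose stop
docker-compose rm -v -f      # removes the stopped containers and their anonymous volumes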
If you go through the docs about Data volumes, it's mentioned that
Data volumes persist even if the container itself is deleted.
So that means, stopping a container will not remove the volumes it created, whether named or anonymous.
Now if you read further down to Removing volumes
A Docker data volume persists after a container is deleted. You can
create named or anonymous volumes. Named volumes have a specific
source from outside the container, for example awesome:/bar. Anonymous
volumes have no specific source. When the container is deleted, you
should instruct the Docker Engine daemon to clean up anonymous
volumes. To do this, use the --rm option, for example:
$ docker run --rm -v /foo -v awesome:/bar busybox top
This command creates an anonymous /foo volume. When the container is
removed, the Docker Engine removes the /foo volume but not the awesome
volume.
Just remove volumes with the down command:
docker-compose down -v
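One caveat: according to the Compose docs, down -v removes the named volumes declared in the Compose file as well as the anonymous ones, so myproject_data (the psql data) would be deleted too. If only the leftover anonymous source volume should go, a sketch using the volume ID from the listing above:
docker-compose down
docker volume rm 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72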