k3d: how to use a local directory as a Persistent Volume

I am using k3d to run a local Kubernetes cluster.
I have created a cluster using k3d.
Now I want to mount a local directory as a persistent volume.
How can I do this with k3d?
I know that in minikube I can do:
$ minikube start --mount-string="$HOME/go/src/github.com/nginx:/data" --mount
Then, if you mount /data into your Pod using hostPath, you get your local directory's data inside the Pod.
Is there a similar technique for k3d?

According to the answers to this GitHub issue, the feature you're looking for is not available yet.
Here is one idea from that thread:
The simplest I guess would be to have a pretty generic mount containing all the code, e.g. in my case, I could do k3d cluster create -v "$HOME/git:/git#agent:*" to get all the repositories on my host present in all agent nodes to be used for hot-reloading.
According to this documentation, one can use the following command with the appropriate flag:
k3d cluster create NAME -v [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]
This command mounts volumes into the nodes
(format: [SOURCE:]DEST[#NODEFILTER[;NODEFILTER...]]).
Example:
k3d cluster create --agents 2 -v /my/path#agent:0,1 -v /tmp/test:/tmp/other#server:0
Here is also an interesting article about how volumes and storage work in a K3s cluster (with examples).

I think this feature is not yet available:
https://github.com/k3d-io/k3d/issues/566
So far, we can only mount a volume when we create a new cluster:
k3d cluster create mykube --volume $HOME/go/src/github.com/nginx:/data
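Once the directory is mounted into the node, a hostPath volume can expose it to a Pod. A minimal sketch, assuming the cluster was created with the /data mount above (the Pod and volume names are made up):

apiVersion: v1
kind: Pod
metadata:
  name: nginx-hostpath        # hypothetical name
spec:
  containers:
    - name: nginx
      image: nginx
      volumeMounts:
        - name: local-code
          mountPath: /usr/share/nginx/html
  volumes:
    - name: local-code
      hostPath:
        path: /data           # the path mounted into the k3d node
        type: Directory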

Related

Location of Kubernetes config directory with Docker Desktop on Windows

I am running a local Kubernetes cluster through Docker Desktop on Windows. I'm attempting to modify my kube-apiserver config, and all of the information I've found has said to modify /etc/kubernetes/manifests/kube-apiserver.yaml on the master. I haven't been able to find this file, and am not sure what the proper way is to do this. Is there a different process because the cluster is through Docker Desktop?
Is there a different process because the cluster is through Docker Desktop?
You can get access to kube-apiserver.yaml with a Kubernetes cluster that is running on Docker Desktop, but in a "hacky" way. I've included the explanation below.
For setups that require such reconfiguration, I encourage you to use a different solution, for example minikube.
Minikube has a feature that allows you to pass additional options to the Kubernetes components. You can read more about the --extra-config ExtraOption in this documentation:
Minikube.sigs.k8s.io: Docs: Commands: Start
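For example, a hypothetical invocation that passes an extra flag to the API server could look like this (the flag value is only an illustration):
$ minikube start --extra-config=apiserver.service-node-port-range=1-65535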
As for the reconfiguration of kube-apiserver.yaml with Docker Desktop:
You need to run the following command:
docker run -it --privileged --pid=host debian nsenter -t 1 -m -u -n -i sh
The above command will allow you to run:
vi /etc/kubernetes/manifests/kube-apiserver.yaml
This lets you edit the API server configuration. The Pod running kube-apiserver will be restarted with the new parameters.
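As a sketch, adding a flag in the manifest could look like this (the flag shown is only an illustration, not something this question requires):

# excerpt of /etc/kubernetes/manifests/kube-apiserver.yaml
spec:
  containers:
    - command:
        - kube-apiserver
        # ...existing flags stay as they are...
        - --service-node-port-range=1-65535   # example of an added flag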
You can check the StackOverflow answers below for more reference:
Stackoverflow.com: Answer: Where are the Docker Desktop for Windows kubelet logs located?
Stackoverflow.com: Answer: How to change the default nodeport range on Mac (docker-desktop)?
I used this approach without the $ screen command and was able to reconfigure kube-apiserver on Docker Desktop on Windows.

K8s PersistentVolume - smart way to view data

Using Google Cloud & Kubernetes Engine:
Is there a smart way to view or mount a PersistentVolume (physical storage; in the case of Google, a Persistent Disk) on a local drive, a remote computer, or macOS, or anything else able to view the data on the volume, so that I can back up or just view files?
Maybe using something like FUSE (in my case, osxfuse)?
Obviously I can exec into a container that mounts the volume,
but maybe there are other ways?
I tried to SSH into the node and cd to /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet,
but I get cd: pods: Permission denied.
Regarding sharing a PersistentDisk between VMs, this was discussed here. If you want to use the same PD on many nodes, it will work only in read-only mode.
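For reference, a minimal sketch of mounting an existing PD read-only into a Pod (the disk name and mount path are hypothetical):

apiVersion: v1
kind: Pod
metadata:
  name: pd-reader
spec:
  containers:
    - name: reader
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: pd
          mountPath: /data
          readOnly: true
  volumes:
    - name: pd
      gcePersistentDisk:
        pdName: my-data-disk   # hypothetical PD name
        fsType: ext4
        readOnly: true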
The easiest way to check what's inside the PD is to SSH to the node (as you mentioned), but it requires superuser (sudo) privileges:
- SSH to node
$ sudo su
$ cd /home/kubernetes/containerized_mounter/rootfs/var/lib/kubelet/plugins/kubernetes.io/gce-pd/mounts
$ ls
You will now see a few entries, depending on how many PVCs you have. The folder names are the same as the names you get from kubectl get pv.
$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-53091548-57af-11ea-a629-42010a840131 1Gi RWO Delete Bound default/pvc-postgres standard 42m
Enter it using cd:
$ cd <pvc_name>
in my case:
$ cd gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131
Now you can list all the files inside this PersistentDisk:
...gke-gke-metrics-d24588-pvc-53091548-57af-11ea-a629-42010a840131 # ls
lost+found text.txt
$ cat text.txt
This is test
It's not empty
There is a tutorial on GitHub where a user used sshfs, but on macOS.
===
An alternative way to mount a PD on your local machine is to use NFS. However, you would need to configure it yourself. You could then specify the mount in your Deployment and on your local machine.
More details can be found here.
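As a rough sketch of the Deployment side, the volume definition could look like this (the server address and export path are hypothetical):

# volumes section of a Pod/Deployment spec
volumes:
  - name: shared-data
    nfs:
      server: 10.0.0.5      # hypothetical NFS server address
      path: /exports/data   # hypothetical exported path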
===
To create backups, you can consider Persistent Disk snapshots.
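For example, a snapshot can be taken with gcloud (the disk name and zone are hypothetical):
$ gcloud compute disks snapshot my-data-disk --zone=europe-west3-c --snapshot-names=my-data-disk-snapshot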

dockerized postgresql with volumes

I am relatively new to Docker. I'd like to set up a Postgres database, but I wonder how to make sure that the data isn't lost if I recreate the container.
Then I stumbled upon named volumes (not bind mounts) and how to use them.
But... in a Dockerfile you can't use named volumes, e.g. data:/var/lib etc.
As I understand it, with a Dockerfile it's always an anonymous volume.
So every single time I recreate the container, it gets its own new volume.
So here comes my question:
Firstly: how do I make sure that, if the container gets updated or recreated, the Postgres database within the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
Secondly: how does this work with a yml file?
Is it possible to point multiple replicas of such a database container at one volume (high-availability mode)?
It would be really great if someone could give me a hint or some best practices.
Thank you in advance.
Looking at the Dockerfile for Postgres, you can see that it declares a VOLUME instruction:
VOLUME /var/lib/postgresql/data
Every time you run a new Postgres container without specifying a --volume option, Docker automatically creates a new volume. The volume is given a random name.
You can see all volumes by running the command:
docker volume ls
You can also inspect the files the volume stores on the host, by looking up the host path with:
docker volume inspect <volume-name>
So when you don't specify the --volume option for the run command, Docker creates volumes for all volumes declared in the Dockerfile. This is mainly a safety net: if you forget to name your volume, the data still isn't lost.
Firstly: how do I make sure that, if the container gets updated or recreated, the Postgres database within the new container references the same data and doesn't lose the reference to the previously created anonymous volume?
If you want Docker to use the same volume, you need to specify the --volume option. Once specified, Docker won't create a new volume; it will simply mount the existing volume onto the specified folder in the docker command.
As a best practice, name your volumes that have valuable data. For example:
docker run --volume postgresData:/var/lib/postgresql/data ...
The first time you run this command, the volume postgresData is created and holds /var/lib/postgresql/data on the host. The second time you run it, the same data stored on the host is mounted into the container.
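For illustration, a minimal run-through (container names and password are made up):
$ docker run -d --name pg1 -e POSTGRES_PASSWORD=secret --volume postgresData:/var/lib/postgresql/data postgres
$ docker rm -f pg1
$ docker run -d --name pg2 -e POSTGRES_PASSWORD=secret --volume postgresData:/var/lib/postgresql/data postgres
The second container starts with the database files the first one created, because both mount the same named volume.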
Secondly: how does this work with a yml file? Is it possible to point multiple replicas of such a database container at one volume?
Yes, volumes can be shared between multiple containers. You can mount the same volume onto multiple containers, and the containers will use the same files. Docker Compose allows you to do that as well; see the sketch below.
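A minimal docker-compose.yml sketch (the second service is only an illustration of sharing the volume; running two Postgres instances against the same data directory is not safe):

version: "3"
services:
  db:
    image: postgres
    environment:
      POSTGRES_PASSWORD: secret
    volumes:
      - postgresData:/var/lib/postgresql/data
  backup:
    image: busybox
    command: ["sleep", "3600"]
    volumes:
      - postgresData:/data    # same named volume as above
volumes:
  postgresData: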
However, beware that volumes are limited to the host they were created on. When running containers on multiple machines, the volume needs to be accessible from all of them. There are ways/tools to achieve that,
but they are a bit complex. This is still a limitation to be addressed in Docker.

Is there a way to choose the path of named volumes with the default volume drives in docker?

Is there a way to choose the path of named volumes with the default volume drives in docker?
I know I can bind-mount volumes in each service. I know I can create named volumes outside of services and then share them by mounting them in each service. But I can't find a way to get the data (to be shared) on a path that I select, instead of Docker's /var/lib/docker/volumes/.
Has anyone tried to share a volume as a mount point between two different containers of the same docker-compose file, where the volume is at a location of your choosing?
You cannot do that with the built-in Docker volume plugin. You need to install a third-party plugin, such as the one below:
https://github.com/CWSpear/local-persist
curl -fsSL https://raw.githubusercontent.com/CWSpear/local-persist/master/scripts/install.sh | sudo bash
docker volume create -d local-persist -o mountpoint=/data/images --name=images
docker run -d -v images:/path/to/images/on/one/ one
docker run -d -v images:/path/to/images/on/two/ two
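To consume that volume from a docker-compose file, a sketch could look like this (the service definitions are illustrative):

version: "3"
services:
  one:
    image: one
    volumes:
      - images:/path/to/images/on/one
  two:
    image: two
    volumes:
      - images:/path/to/images/on/two
volumes:
  images:
    external: true    # created outside of Compose with the local-persist driver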

docker-compose mounted volume remain

I'm using docker-compose in one of my projects. During development, I mount my source directory into a volume in one of my Docker services for easy development. At the same time, I have a db service (psql) that mounts a named volume for persistent data storage.
I start my solution and everything works fine:
$ docker-compose up -d
When I check my volumes, I see the named volume and the "unnamed" (source) volume:
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
The problem I experience is that, when I do
$ docker-compose down
...
$ docker volume ls
DRIVER VOLUME NAME
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
both volumes remain. Every time I run
$ docker-compose down
$ docker-compose up -d
a new volume is created for my source mount
$ docker volume ls
DRIVER VOLUME NAME
local 19181286b19c0c3f5b67d7d1f0e3f237c83317816acbdf4223328fdf46046518
local 226ba7af9689c511cb5e6c06ceb36e6c26a75dd9d619360882a1012cdcd25b72
local myproject_data
I know that this will not happen on my deployment server, since it will not mount the source, but is there a way to keep the mounted source volume from persisting?
You can use the --rm option with docker run. To get the same effect with docker-compose, you can run
docker-compose rm -v after stopping your containers with docker-compose stop.
If you go through the docs about data volumes, it's mentioned that:
Data volumes persist even if the container itself is deleted.
So that means stopping a container will not remove the volumes it created, whether named or anonymous.
Now if you read further down, to Removing volumes:
A Docker data volume persists after a container is deleted. You can
create named or anonymous volumes. Named volumes have a specific
source from outside the container, for example awesome:/bar. Anonymous
volumes have no specific source. When the container is deleted, you
should instruct the Docker Engine daemon to clean up anonymous
volumes. To do this, use the --rm option, for example:
$ docker run --rm -v /foo -v awesome:/bar busybox top
This command creates an anonymous /foo volume. When the container is
removed, the Docker Engine removes the /foo volume but not the awesome
volume.
Just remove volumes with the down command:
docker-compose down -v