Sharing existing media folder with pods on Kubernetes

I'm working on my toy project and I want to share an existing folder of media files with pods running on Kubernetes (Docker Desktop's built-in Kubernetes on Windows 10, or microk8s on my home Linux server). What is the best way to do it? I have searched through the docs and there are no examples that use an existing folder with data already in it.

A hostPath volume mounts a file or directory from the host node's filesystem into your Pod. You can create a PV backed by a hostPath so that you can claim it in your pod configuration. For this to work, your existing directory has to be on the node where the pods are going to be created.
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: registry.k8s.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
By default, files and folders created on the underlying host are only writable by root. To be able to write to a hostPath volume, you must either run your process as root in a privileged container or change the file permissions on the host.
For detailed information, refer to this document.
NOTE: Avoiding hostPath volumes whenever possible is a best practice, since they pose numerous security issues.
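As a rough sketch of the PV-plus-PVC approach mentioned above (the names, size and the /data/media path are placeholders, assuming the directory already exists on the node):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: media-pv              # placeholder name
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: manual    # static binding, no dynamic provisioner
  hostPath:
    path: /data/media         # existing directory on the node
    type: Directory
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: media-pvc             # placeholder name
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
The pod then references claimName: media-pvc under volumes instead of using hostPath directly.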

Related

access minikube folder's data from host machine

I'm using minikube for running my Kubernetes deployment:
pvc:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pipeline
spec:
  storageClassName:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
pod:
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test1
  name: test1
spec:
  replicas: 1
  selector:
    matchLabels:
      app: test1
  template:
    metadata:
      labels:
        app: test1
    spec:
      containers:
      - env:
        - name: SHARED_FOLDER_PATH
          value: /data/shared
        image: docker.io/foo/test1:st3
        imagePullPolicy: Always
        name: test1
        ports:
        - containerPort: 8061
          name: protobuf-api
        - containerPort: 8062
          name: webui
        volumeMounts:
        - mountPath: /data/shared
          name: test1
      imagePullSecrets:
      - name: acumos-registry
      volumes:
      - name: test1
        persistentVolumeClaim:
          claimName: pipeline
I have checked that the pod and PVC are running:
$ kubectl describe pv,pvc
Name: pvc-34bbd532-9c55-45cc-ab96-1accd08ded6e
Labels: <none>
Annotations: hostPathProvisionerIdentity: c6eeb812-6b82-4546-bc5a-8917cf0d3d6b
pv.kubernetes.io/provisioned-by: k8s.io/minikube-hostpath
Finalizers: [kubernetes.io/pv-protection]
StorageClass: standard
Status: Bound
Claim: test/pipeline
Reclaim Policy: Delete
Access Modes: RWO
VolumeMode: Filesystem
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /tmp/hostpath-provisioner/test/pipeline
HostPathType:
Events: <none>
What I'm trying to do is access the data in the minikube folder /tmp/hostpath-provisioner/test/pipeline from the host machine. For that purpose, I'm mounting a local volume:
$ minikube mount /tmp/hostpath-provisioner/test/pipeline:/tmp/hostpath-provisioner/test/pipeline
I have checked over ssh that there is data in the minikube folder:
docker@minikube:/tmp/hostpath-provisioner/test/pipeline$ ls -a
. .. classes.json
But I can't see these files in the local folder on the host.
The local mount you created mounts the specified directory from the host into minikube, not from the guest back to the host as you would like.
Depending on your host machine's OS, you will have to set up proper file sharing, using either host folder sharing or a network-based file system.
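For example, one possible pattern (the paths and names here are made up for illustration, not taken from the question) is to do the sharing in the direction minikube supports: mount a host folder into the VM first and have the workload write to that path, so the files end up on the host:
# run on the host first (hypothetical paths):
#   minikube mount /home/me/pipeline-data:/mnt/pipeline-data
apiVersion: v1
kind: Pod
metadata:
  name: pipeline-writer       # placeholder name
spec:
  containers:
  - name: writer
    image: busybox:stable
    command: ["sh", "-c", "date > /out/hello.txt && sleep 3600"]   # writes into the shared path
    volumeMounts:
    - mountPath: /out
      name: shared-out
  volumes:
  - name: shared-out
    hostPath:
      path: /mnt/pipeline-data   # the in-VM side of the minikube mount
      type: Directory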
With a bit of work, one could set up Syncthing between the host and the guest VM for persistent file synchronization.
Grab the latest release of Syncthing for your operating system & unpack it (if you use Debian/Ubuntu you may want to use the Debian repository)
At this point Syncthing will also have set up a folder called Default Folder for you, in a directory called Sync in your home directory (%USERPROFILE% on Windows). You can use this as a starting point, then remove it or add more folders later.
The admin GUI starts automatically and remains available on http://localhost:8384/. Cookies are essential to the correct functioning of the GUI; please ensure your browser accepts them.
On the left is the list of “folders”, or directories to synchronize. You can see the Default Folder was created for you, and it’s currently marked “Unshared” since it’s not yet shared with any other device. On the right is the list of devices. Currently there is only one device: the computer you are running this on.
For Syncthing to be able to synchronize files with another device, it must be told about that device. This is accomplished by exchanging “device IDs”. A device ID is a unique, cryptographically-secure identifier that is generated as part of the key generation the first time you start Syncthing. It is printed in a log, and you can see it in the web GUI by selecting “Actions” (top right) and “Show ID”.
Two devices will only connect and talk to each other if they are both configured with each other’s device ID. Since the configuration must be mutual for a connection to happen, device IDs don’t need to be kept secret. They are essentially part of the public key.
To get your two devices to talk to each other click “Add Remote Device” at the bottom right on both devices, and enter the device ID of the other side. You should also select the folder(s) that you want to share. The device name is optional and purely cosmetic. You can change it later if desired.
Once you click “Save” the new device will appear on the right side of the GUI (although disconnected) and then connect to the new device after a minute or so. Remember to repeat this step for the other device.
At this point the two devices share an empty directory. Adding files to the shared directory on either device will synchronize those files to the other side.
What is Syncthing:
https://syncthing.net/
Installation Guide:
https://docs.syncthing.net/intro/getting-started.html
Latest release of Syncthing:
https://github.com/syncthing/syncthing/releases/tag/v1.18.5
Debian Repo:
https://apt.syncthing.net/
The problem was the firewall. The procedure detailed in the question post, together with the solution proposed in this answer (on Ubuntu), worked for me:
How to mount a Host folder in minikube VM

Why is my Host Path Persistent Volume reachable from all pods?

I'm pretty stuck on this step of learning Kubernetes: PVs and PVCs.
What I'm trying to do here is understand how to handle a shared read-write volume on multiple pods.
What I understood is that a PVC cannot be shared between pods unless an NFS-like storage class has been configured.
I'm still using my hostPath storage class, and I tried the following (on Docker Desktop and on a 3-node microK8s cluster):
This PVC with dynamic hostPath provisioning:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-desktop
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 50Mi
A Deployment with 3 replicated pods writing to the same PVC:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
spec:
  replicas: 3
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: library/busybox:stable
        command: ["/bin/sh"]
        args:
          ["-c", 'while true; do echo "1: $(hostname)" >> /root/index.html; sleep 2; done;']
        volumeMounts:
        - mountPath: /root
          name: vol-desktop
      volumes:
      - name: vol-desktop
        persistentVolumeClaim:
          claimName: pvc-desktop
An Nginx server for serving the volume content:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 1
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx:stable
        volumeMounts:
        - mountPath: /usr/share/nginx/html
          name: vol-desktop
        ports:
        - containerPort: 80
      volumes:
      - name: vol-desktop
        persistentVolumeClaim:
          claimName: pvc-desktop
From what I understood of the documentation, this should not be possible, but in reality everything ran smoothly and my Nginx server served the up-to-date index.html file just fine.
It actually worked both on a single-node cluster and on a multi-node cluster.
What am I not getting here? Why does this work?
Is every pod mounting its own hostPath volume on start?
How can hostPath storage work between multiple nodes?
EDIT: In the multi-node case, a network share had been set up between the same storage path on each machine, which is why everything was replicated successfully. I hadn't understood that the same host path is created on each node that mounts that PVC.
To anyone with the same problem: each node mounting this hostPath PVC will have its own folder created at the PV path.
So without network replication between nodes, only pods on the same node will share the same folder.
This is why it's discouraged on a multi-node cluster: the location of a pod on the cluster is unpredictable.
Thanks!
how to handle a shared read-write volume on multiple pods.
Redesign your application to avoid it. Multiple writers tend to be fragile and difficult to manage safely; you depend on your application correctly performing things like file locking, on the underlying shared-filesystem implementation handling things properly, and on the system being tolerant of any network hiccup that might happen.
The example you give is something that frequently appears in Docker Compose setups: have an application with a mix of backend code and static files, and then try to publish the static files at runtime through a volume to a reverse proxy. Instead, you can build an image that copies the static files at build time:
FROM nginx
ARG app_version=latest
COPY --from=my/app:${app_version} /app/static /usr/share/nginx/html
Have your CI system build this and push it immediately after the backend image is built. The resulting image serves the corresponding static files, but doesn't require a shared volume or any manual management of the volume contents.
For other types of content, consider storing data in a database, or use an object-storage service that maintains its own backing store and can handle the concurrency considerations. Then most of your pods can be totally stateless, and you can manage the data separately (maybe even outside Kubernetes).
How can hostPath storage work between multiple nodes?
It doesn't. It's an instruction to Kubernetes, on whichever node the pod happens to be scheduled on, to mount that host directory into the container. There's no management of any sort of the directory content; if two pods get scheduled on the same node, they'll share the directory, and if not, they won't; and if your pod's Deployment is updated and the pod is deleted and recreated somewhere else, it might not be the same node and might not have the same data.
With some very specific exceptions you shouldn't use hostPath volumes at all. The exceptions are things like log collectors run as DaemonSets, where there is exactly one pod on every node and you're interested in picking up the host-directory content that is different on each node.
In your specific setup either you're getting lucky with where the data producers and consumers are getting colocated, or there's something about your MicroK8s setup that's causing the host directories to be shared. It is not in general reliable storage.
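For completeness, a minimal sketch of the DaemonSet-style exception mentioned above (the image, command and /var/log path are illustrative only, not a real log collector):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-log-reader       # placeholder name
spec:
  selector:
    matchLabels:
      app: node-log-reader
  template:
    metadata:
      labels:
        app: node-log-reader
    spec:
      containers:
      - name: reader
        image: busybox:stable
        command: ["sh", "-c", "tail -F /var/log/host/syslog"]   # reads node-local files
        volumeMounts:
        - mountPath: /var/log/host
          name: varlog
          readOnly: true
      volumes:
      - name: varlog
        hostPath:
          path: /var/log      # different content on every node, which is the point here
          type: Directory
Here there is exactly one pod per node, and the per-node content of the host directory is exactly what you want to read.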

Share local directory with Kind Kubernetes Cluster using hostpath

I want to share my non-empty local directory with kind cluster.
Based on answer here: How to reference a local volume in Kind (kubernetes in docker)
I tried a few variations of the following:
Kind Cluster yaml:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraMounts:
  - hostPath: /Users/xyz/documents/k8_automation/data/manual/
    containerPath: /host_manual
  extraPortMappings:
  - containerPort: 30000
    hostPort: 10000
Pod yaml:
apiVersion: v1
kind: Pod
metadata:
  name: manual
spec:
  serviceAccountName: manual-sa
  containers:
  - name: tools
    image: tools:latest
    imagePullPolicy: Never
    command:
    - bash
    tty: true
    volumeMounts:
    - mountPath: /home/jenkins/agent/data
      name: data
  volumes:
  - name: data
    hostPath:
      path: /host_manual
      type: Directory
---
I see that the directory /home/jenkins/agent/data does exist when the pod gets created. However, the folder is empty.
kind's documentation here: https://kind.sigs.k8s.io/docs/user/configuration/#extra-mounts
It should be the case that whatever is on the local machine at hostPath (/Users/xyz/documents/k8_automation/data/manual/) in extraMounts in the cluster yaml is available to the node at containerPath (/host_manual), which then gets mounted at the container volume mountPath (/home/jenkins/agent/data).
I should add that even if I change the hostPath in the cluster yaml file to a non-existent folder, the empty "data" folder still gets mounted in the container, so I think it's the connection from my local machine to the kind cluster that's the issue.
Why am I not getting the contents of /Users/xyz/documents/k8_automation/data/manual/, with its many files, also available at /home/jenkins/agent/data in the container?
How can I fix this?
Any alternatives if there is no fix?
Turns out the yaml configuration was just fine.
The reason the directory was not showing up in the container was related to Docker settings. And because "kind is a tool for running local Kubernetes clusters using Docker container 'nodes'", that matters.
It seems Docker restricts resource sharing and by default allows only specific directories to be bind-mounted into Docker containers. Once I added the directory I wanted to show up in the container to the list under Preferences -> Resources -> File sharing, it worked!

Creating a pod/container in kubernetes - how to copy a bunch of files into it

Sorry if this is a noob question:
I am creating a pod in a Kubernetes cluster using a pod definition yaml file.
This pod defines just one container. I'd like to ... copy a few files to a particular directory in the container.
sort of like in docker-compose:
volumes:
  - ./testHelpers/certs:/var/private/ssl/certs
Is it possible to do that at this point (the point of defining the pod)?
If not, what could my alternatives be?
PS - I understand that the sample from docker-compose is very different, since it maps a local directory to a directory in the container.
It's better to use volumes in the pod definition.
Initialize the pod before the main container runs, using an init container (see the sketch below).
Apart from this, you can also use a ConfigMap to store certs and other config files you need, and then access them in the container as volumes.
More details here
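The "initialize the pod" option above refers to init containers; a hedged sketch (the certs image name and paths are hypothetical) that copies files from an image into an emptyDir shared with the main container:
apiVersion: v1
kind: Pod
metadata:
  name: certs-demo            # placeholder name
spec:
  initContainers:
  - name: copy-certs
    image: my/certs-image:latest            # hypothetical image that already contains the files
    command: ["sh", "-c", "cp -r /certs/. /work/"]
    volumeMounts:
    - name: certs
      mountPath: /work
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /var/private/ssl/certs && sleep 3600"]
    volumeMounts:
    - name: certs
      mountPath: /var/private/ssl/certs     # the directory mentioned in the question
  volumes:
  - name: certs
    emptyDir: {}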
You should create a ConfigMap; you can do it from files or from a directory.
kubectl create configmap my-config --from-file=configuration/
And then mount the ConfigMap as a directory:
apiVersion: v1
kind: Pod
metadata:
  name: configmap-pod
spec:
  containers:
  - name: test
    image: busybox
    volumeMounts:
    - name: my-config
      mountPath: /etc/config
  volumes:
  - name: my-config
    configMap:
      name: my-config

How to mount local volume to ksonnet component deployed in kubeflow

I am trying to mount a local directory into a component deployed in kubeflow using ksonnet prototype.
There is no way to mount a local directory into a Kubernetes pod (after all, kubeflow and ksonnet just create pods and other Kubernetes resources).
If you want your files to be available in Kubernetes, I can think of two options:
Create a custom docker image, copying the folder you want, and push it to a registry. Kubeflow has parameters to customize the images to be deployed.
Use NFS. That way you could mount the NFS volume on your local machine and also in the pods (see the sketch after this list). To do that you would need to modify the ksonnet code, since it is not implemented in the latest stable version.
If you provide more information about which component you are trying to deploy and which cloud provider you're using, I could help you more.
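For the NFS option, a rough sketch of what the pod-side volume could look like (the server address and export path are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: nfs-example           # placeholder name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sh", "-c", "ls /data && sleep 3600"]
    volumeMounts:
    - name: shared
      mountPath: /data
  volumes:
  - name: shared
    nfs:
      server: 10.0.0.5        # hypothetical NFS server reachable from the cluster
      path: /exports/shared   # hypothetical export, also mounted on your local machine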
If by local directory you mean a local directory on the node, then it is possible to mount a directory from the node's filesystem inside a pod using the hostPath or local volumes feature.
A hostPath volume mounts a file or directory from the host node’s filesystem into your Pod. This is not something that most Pods will need, but it offers a powerful escape hatch for some applications.
A local volume represents a mounted local storage device such as a disk, partition or directory.
Local volumes can only be used as a statically created PersistentVolume. Dynamic provisioning is not supported yet.
Compared to hostPath volumes, local volumes can be used in a durable and portable manner without manually scheduling Pods to nodes, as the system is aware of the volume’s node constraints by looking at the node affinity on the PersistentVolume.
For example:
# hostPath volume example
apiVersion: v1
kind: Pod
metadata:
  name: test-pd
spec:
  containers:
  - image: k8s.gcr.io/test-webserver
    name: test-container
    volumeMounts:
    - mountPath: /test-pd
      name: test-volume
  volumes:
  - name: test-volume
    hostPath:
      # directory location on host
      path: /data
      # this field is optional
      type: Directory
# local volume example (beta in v1.10)
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-pv
spec:
  capacity:
    storage: 100Gi
  # volumeMode field requires BlockVolume Alpha feature gate to be enabled.
  volumeMode: Filesystem
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Delete
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - example-node
GlusterFS is also available as a volume or as a persistent volume (access modes: ReadWriteOnce, ReadOnlyMany, ReadWriteMany).
A glusterfs volume allows a Glusterfs (an open source networked filesystem) volume to be mounted into your Pod. Unlike emptyDir, which is erased when a Pod is removed, the contents of a glusterfs volume are preserved and the volume is merely unmounted. This means that a glusterfs volume can be pre-populated with data, and that data can be “handed off” between Pods. GlusterFS can be mounted by multiple writers simultaneously.
See the GlusterFS example for more details.
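As a hedged illustration (the Endpoints object name and Gluster volume name below are placeholders following the upstream example), a glusterfs volume in a pod could look like this; note that the in-tree glusterfs plugin has been deprecated and removed in newer Kubernetes versions, so check what your cluster supports:
apiVersion: v1
kind: Pod
metadata:
  name: glusterfs-example     # placeholder name
spec:
  containers:
  - name: app
    image: nginx:stable
    volumeMounts:
    - name: glusterfsvol
      mountPath: /mnt/glusterfs
  volumes:
  - name: glusterfsvol
    glusterfs:
      endpoints: glusterfs-cluster   # Endpoints object listing the Gluster servers
      path: kube_vol                 # name of the Gluster volume
      readOnly: true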