How to use Shared Drive as multiple Kubernetes PV in Homelab - kubernetes

I have a homelab:
Windows host running VMware Workstation
1 master node
3 worker nodes
All nodes have the Windows drive mounted and available at /external.
I want to run multiple tools like Jenkins, Nexus, Nessus, etc. and use persistent volumes on the external drive, so that even if I create new EKS clusters the volumes stay there forever and I can reuse them.
So I want to know the best way to use it:
Can I create a single hostPath PV and then have each pod claim, for example, 20GB from it?
Or do I have to create a PV with hostPath for each pod and then claim it in the pod?
So is there a 1:1 relationship between PV and PVC, or can one PV serve multiple claims in different folders?
Also, if I recreate the cluster and create a PV from the same hostPath, will my data still be there?

You can use a local volume instead of hostPath to experiment with StorageClass/PV/PVC. First, you create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then you provision a PersistentVolume on each node; here's an example for one node:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv-1
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: shared
  local:
    path: <path to the shared folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your node name>
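If you want several independent volumes out of the same shared mount, the usual pattern is one PV per subfolder, since a PV can be bound by at most one PVC. A sketch of a second PV (the subfolder and node name are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv-2
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: shared
  local:
    path: <path to the shared folder>/pv-2   # a dedicated subfolder per volume
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your node name>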
And the claim that allows you to mount the provisioned volume in a pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pv-1
spec:
  storageClassName: shared
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
  accessModes:
    - ReadWriteOnce
Here's an example pod that mounts the volume and writes to it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1
spec:
  restartPolicy: Never
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pv-1
  containers:
    - name: busybox-1
      image: busybox
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: shared
          mountPath: /data
      command: ["ash","-c","while :; do echo \"$(date)\tmessage from busybox-1.\" >> /data/message.txt; sleep 1; done"]
For a local volume, the data written requires manual cleanup and deletion by default. That is a positive side effect for you, since you want the content to persist. If you would like to go further and experiment with a CSI-style approach to local volumes, you can use the Local Persistence Volume Static Provisioner.
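As a quick check (a minimal sketch, assuming the manifests above were applied unchanged), you can confirm the PV got bound once the pod is scheduled and that the data landed in the shared folder:
$ kubectl get pv shared-pv-1
$ kubectl get pvc shared-pv-1
$ kubectl exec busybox-1 -- tail -n 3 /data/message.txt
The same file should also show up under the path you configured on the node, which is what lets the data outlive the cluster.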

Related

Migrating ROM, RWO persistent volume claims from in-tree plugin to csi in GKE

I currently have a ROM, RWO persistent volume claim that I regularly use as a read-only volume in a deployment, and that sporadically gets repopulated by a job using it as a read-write volume while the deployment is scaled down to 0. However, since in-tree plugins will be deprecated in future versions of Kubernetes, I'm planning to migrate this process to volumes using CSI drivers.
To clarify my current use of this kind of volume, here's a sample YAML configuration illustrating the basic idea:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: standard
  accessModes:
    - ReadOnlyMany
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
        - name: test
          image: busybox
          # Populate the volume
          command:
            - touch
            - /foo/bar
          volumeMounts:
            - name: test
              mountPath: /foo/
              subPath: foo
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: test
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 0
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
        - name: test
          image: busybox
          command:
            - sh
            - '-c'
            - |
              # Check the volume has been populated
              ls /foo/
              # Prevent the pod from exiting for a while
              sleep 3600
          volumeMounts:
            - name: test
              mountPath: /foo/
              subPath: foo
      volumes:
        - name: test
          persistentVolumeClaim:
            claimName: test
            readOnly: true
So the job populates the volume, and later the deployment is scaled up. However, replacing the storageClassName field standard in the persistent volume claim with singlewriter-standard does not even allow the job to run.
Is this some kind of bug? Is there some workaround when using volumes backed by the CSI driver?
If this is a bug, I'd plan to migrate to CSI drivers later; however, if this is not a bug, how should I migrate my current workflow, since in-tree plugins will eventually be deprecated?
Edit:
The version of the Kubernetes server is 1.17.9-gke.1504. As for the storage classes, they are the default standard and singlewriter-standard storage classes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    components.gke.io/component-name: pdcsi-addon
    components.gke.io/component-version: 0.5.1
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: singlewriter-standard
parameters:
  type: pd-standard
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
The error is not shown on the job but on the pod itself (this happens only for the singlewriter-standard storage class):
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
The message you encountered:
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
is not a bug. The attachdetach-controller is showing this error because it doesn't know which accessMode it should use to mount the volume:
For [ReadOnlyMany, ReadWriteOnce] PV, the external attacher simply does not know if the attachment is going to be consumed as read-only(-many) or as read-write(-once)
-- Github.com: Kubernetes CSI: External attacher: Issues: 153
I encourage you to check the link above for a full explanation.
I currently have a ROM, RWO persistent volume claim that I regularly use as a read only volume in a deployment that sporadically gets repopulated by some job using it as a read write volume
You can combine the steps from the guides below:
Turn on the CSI Persistent disk driver in GKE
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Gce-pd-csi-driver
Create a PVC with pd.csi.storage.gke.io provisioner (you will need to modify YAML definitions with storageClassName: singlewriter-standard):
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Citing the documentation steps (from the ReadOnlyMany guide) that should fulfill the setup you've shown:
Before using a persistent disk in read-only mode, you must format it.
To format your persistent disk:
Create a persistent disk manually or by using dynamic provisioning.
Format the disk and populate it with data. To format the disk, you can:
Reference the disk as a ReadWriteOnce volume in a Pod. Doing this results in GKE automatically formatting the disk, and enables the Pod to pre-populate the disk with data. When the Pod starts, make sure the Pod writes data to the disk.
Manually mount the disk to a VM and format it. Write any data to the disk that you want. For details, see Persistent disk formatting.
Unmount and detach the disk:
If you referenced the disk in a Pod, delete the Pod, wait for it to terminate, and wait for the disk to automatically detach from the node.
If you mounted the disk to a VM, detach the disk using gcloud compute instances detach-disk.
Create Pods that access the volume as ReadOnlyMany as shown in the following section.
-- Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Additional resources:
Github.com: Kubernetes: Design proposals: Storage: CSI
Kubernetes.io: Blog: Container storage interface
Kubernetes-csi.github.io: Docs: Drivers
EDIT
Following the official documentation:
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Please treat it as an example.
Dynamically create a PVC that will be used with ReadWriteOnce accessMode:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwo
spec:
  storageClassName: singlewriter-standard
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 81Gi
Run a Pod with a PVC mounted to it:
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc
spec:
  containers:
    - image: k8s.gcr.io/busybox
      name: busybox
      command:
        - "sleep"
        - "36000"
      volumeMounts:
        - mountPath: /test-mnt
          name: my-volume
  volumes:
    - name: my-volume
      persistentVolumeClaim:
        claimName: pvc-rwo
Run the following commands:
$ kubectl exec -it busybox-pvc -- /bin/sh
$ echo "Hello there!" > /test-mnt/hello.txt
Delete the Pod and wait for the disk to be unmounted. Please do not delete the PVC, because:
When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.
-- Cloud.google.com: Kubernetes Engine: Persistent Volumes: Dynamic provisioning
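For the example above this would be something like (the Pod name comes from the earlier manifest):
$ kubectl delete pod busybox-pvc
$ kubectl get pvc pvc-rwo   # the claim and its underlying disk should still be there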
Get the name (it's in the VOLUME column) of the disk created earlier by running:
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
pvc-rwo   Bound    pvc-11111111-2222-3333-4444-555555555555   81Gi       RWO            singlewriter-standard   52m
Create a PV and PVC with the following definitions:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rox
spec:
  storageClassName: singlewriter-standard
  capacity:
    storage: 81Gi
  accessModes:
    - ReadOnlyMany
  claimRef:
    namespace: default
    name: pvc-rox # <-- important
  gcePersistentDisk:
    pdName: <INSERT HERE THE DISK NAME FROM EARLIER COMMAND>
    # pdName: pvc-11111111-2222-3333-4444-555555555555 <- example
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rox # <-- important
spec:
  storageClassName: singlewriter-standard
  accessModes:
    - ReadOnlyMany
  resources:
    requests:
      storage: 81Gi
You can verify that your disk works in ROX accessMode by checking that the spawned Pods are scheduled on multiple nodes and that all of them have the PVC mounted:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 15
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          volumeMounts:
            - mountPath: /test-mnt
              name: volume-ro
              readOnly: true
      volumes:
        - name: volume-ro
          persistentVolumeClaim:
            claimName: pvc-rox
            readOnly: true
$ kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   15/15   15           15          3m1s
$ kubectl exec -it nginx-6c77b8bf66-njhpm -- cat /test-mnt/hello.txt
Hello there!

how to find my persistent volume location

I tried creating a persistent volume using hostPath. I could bind it to a specific node using node affinity, but I didn't provide that. My persistent volume YAML looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: fast
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data
After this I created a PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
And finally attached it to the pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: thinkingmonster/nettools
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Now, the describe command for the PV or PVC does not tell me on which node the volume /mnt/data was actually created, and I had to SSH into all the nodes to locate it.
The pod, however, is smart enough to be created only on the node where Kubernetes mapped the host directory to the PV.
How can I find out on which node Kubernetes created the persistent volume, without having to SSH into the nodes or check where the pod is running?
It's only when a volume is bound to a claim that it's associated with a particular node. HostPath volumes are a bit different from the regular sort, which makes it a little less clear. When you get the volume claim, the annotations on it should give you a bunch of information, including what you're looking for. In particular, look for the:
volume.kubernetes.io/selected-node: ${NODE_NAME}
annotation on the PVC. You can see the annotations, along with the other computed configuration, by asking the Kubernetes API server for that info:
kubectl get pvc -o yaml -n ${NAMESPACE} ${PVC_NAME}
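If you only want that single annotation (a sketch, assuming the annotation is present on your PVC), a jsonpath query keeps the output short:
kubectl get pvc -n ${NAMESPACE} ${PVC_NAME} -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'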

Kubernetes - Generate files on all the pods

I have a Java API which exports data to Excel and generates a file on the pod where the request is served.
The next request (to download the file) might go to a different pod, and the download fails.
How do I get around this?
How do I generate the file on all the pods? Or how do I make sure the subsequent request goes to the same pod where the file was generated?
I can't give the direct pod URL, as it will not be accessible to clients.
Thanks.
You need to use persistent volumes to share the same files between your containers. You could use node storage mounted into the containers (the easiest way) or a distributed file system like NFS, EFS (AWS), GlusterFS, etc.
If you need the simplest way to share the file and your pods run on the same node, you could use hostPath to store the file and share the volume with the other containers.
Assuming you have a Kubernetes cluster with only one node, and you want to share the path /mnt/data of your node with your pods:
Create a PersistentVolume:
A hostPath PersistentVolume uses a file or directory on the Node to emulate network-attached storage.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
Create a PersistentVolumeClaim:
Pods use PersistentVolumeClaims to request physical storage
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Look at the PersistentVolumeClaim:
kubectl get pvc task-pv-claim
The output shows that the PersistentVolumeClaim is bound to your PersistentVolume, task-pv-volume.
NAME            STATUS   VOLUME           CAPACITY   ACCESSMODES   STORAGECLASS   AGE
task-pv-claim   Bound    task-pv-volume   10Gi       RWO           manual         30s
Create a Deployment with 2 replicas, for example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      volumes:
        - name: task-pv-storage
          persistentVolumeClaim:
            claimName: task-pv-claim
      containers:
        - name: task-pv-container
          image: nginx
          ports:
            - containerPort: 80
              name: "http-server"
          volumeMounts:
            - mountPath: "/mnt/data"
              name: task-pv-storage
Now you can check that inside both containers the path /mnt/data has the same files.
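For example (a quick sketch, assuming the Deployment above):
$ kubectl get pods -l app=nginx
$ kubectl exec -it <one of the pod names> -- ls /mnt/data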
If you have a cluster with more than one node, I recommend looking into the other types of persistent volumes, for example an NFS-backed volume as sketched below.
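A minimal NFS-backed PV/PVC pair might look like this (a sketch only; the server address and export path are placeholders and must match your environment):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-nfs-pv
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteMany
  nfs:
    server: nfs-server.example.com   # placeholder NFS server
    path: /exports/data              # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-nfs-claim
spec:
  storageClassName: ""               # bind to the statically created PV above
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi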
References:
Configure persistent volumes
Persistent volumes
Volume Types

Persistent volume isn't matched with a claim

I created a simple local storage volume. Something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /srv/volumes/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
Then I create a claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
For some unknown reason they don't get matched. What am I doing wrong?
About local storage, it is worth noting that:
Using local storage ties your application to that specific node,
making your application harder to schedule. If that node or local
volume encounters a failure and becomes inaccessible, then that pod
also becomes inaccessible. In addition, many cloud providers do not
provide extensive data durability guarantees for local storage, so you
could lose all your data in certain scenarios.
This is for Kubernetes 1.10. In Kubernetes 1.14 local persistent volumes became GA.
You posted an answer saying that a user is required. Just to clarify: the user you meant is a consumer like a Pod, Deployment, StatefulSet, etc.
So using just a simple pod definition would make your PV become bound:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Now, the problem happens when you delete the pod and try to run another one. In that case, if you or someone else is looking for a solution, it has been described in this GitHub issue.
Hope this clears things up.
You should specify volumeName in your PVC to bind it specifically to the PV that you just created, like so:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: "vol1"
  resources:
    requests:
      storage: 1Gi
Additionally, if you specify storageClassName in your PVC, your PVC will also get bound to a PV matching that specification (though it doesn't guarantee that it will be bound to your "vol1" PV if there is more than one PV for that storage class).
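For example, a claim that targets the local-storage class from your PV (a sketch; capacity and access mode still have to match for binding):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi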
Hope this helps!
I figured it out. I just needed a user. As long as I had a user, everything worked perfectly.

Kubernetes, how to link a PersistentVolume to a volumeClaim

I'm a newbie in the Kubernetes world and I'm trying to figure out how a volumeClaim or volumeClaimTemplates defined in a StatefulSet can be linked to a specific PersistentVolume.
I've followed some tutorials to understand and set up a local PersistentVolume. If I take Elasticsearch as an example, when the StatefulSet starts, the PersistentVolumeClaim is bound to the PersistentVolume.
As you know, for a local PersistentVolume we must define the local path to the storage destination.
For Elasticsearch I've defined something like this:
local:
  path: /mnt/kube_data/elasticsearch
But in a real project there is more than one persistent volume, so I will have more than one folder under /mnt/kube_data. How does Kubernetes select the right persistent volume for a persistent volume claim?
I don't want Kubernetes to put database data in a persistent volume created for another service.
Here is the configuration for Elasticsearch:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-sts
spec:
  serviceName: elasticsearch
  replicas: 1
  [...]
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
          volumeMounts:
            - name: elasticsearch-data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: local-storage
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-elasticsearch
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/elasticsearch
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
---
You need a claimRef in the persistent volume definition carrying the name of the PVC to which you want to bind your PV. The claimRef in the PV should also include the namespace where the PVC resides, because PVs are cluster-scoped while PVCs are namespaced. A PVC with the same name can exist in two different namespaces, hence it is mandatory to provide the namespace along with the PVC name, even when the PVC resides in the default namespace.
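For your example this would look roughly like the snippet below (a sketch; the namespace is assumed to be default, and the PVC name follows the StatefulSet naming pattern <volumeClaimTemplate name>-<StatefulSet name>-<ordinal>):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-elasticsearch
spec:
  storageClassName: local-storage
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  claimRef:
    namespace: default                             # assumed namespace
    name: elasticsearch-data-elasticsearch-sts-0   # PVC created by the volumeClaimTemplate
  local:
    path: /mnt/elasticsearch
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists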
You can refer to the following answer for PV, PVC, and StatefulSet YAML files for local storage:
Is it possible to mount different pods to the same portion of a local persistent volume?
Hope this helps.