Persistent volume isn't matched with a claim - Kubernetes

I created a simple local storage volume. Something like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: vol1
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /srv/volumes/vol1
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - my-node
Then I create a claim:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
For some unknown reason they don't get matched. What am I doing wrong?

About local storage, it is worth noting that:
Using local storage ties your application to that specific node,
making your application harder to schedule. If that node or local
volume encounters a failure and becomes inaccessible, then that pod
also becomes inaccessible. In addition, many cloud providers do not
provide extensive data durability guarantees for local storage, so you
could lose all your data in certain scenarios.
This is for Kubernetes 1.10. In Kubernetes 1.14 local persistent volumes became GA.
You posted an answer saying that a user is required. Just to clarify: the user you meant is a consumer, like a Pod, Deployment, StatefulSet, etc.
So using just a simple pod definition makes your PV become bound:
kind: Pod
apiVersion: v1
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: nginx
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
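To confirm the binding once the pod is scheduled, you can check both objects; each should report a Bound status:
kubectl get pv vol1
kubectl get pvc myclaim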
Now the problem happens when you delete the pod and try to run another one. In that case, if you or someone else is looking for a solution, it has been described in this GitHub issue.
Hope this clears things up.

You should specify volumeName in your PVC to bind it specifically to the PV that you just created, like so:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeName: "vol1"
  resources:
    requests:
      storage: 1Gi
Additionally, if you specify storageClassName in your PVC, your PVC will also get bound to a PV matching that specification (though this doesn't guarantee that it will be bound to your "vol1" PV if there is more than one PV for that storage class).
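For illustration, a minimal sketch of that class-based variant, reusing the local-storage class from the question's PV:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: myclaim
spec:
  storageClassName: local-storage
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi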
Hope this helps!

I figured it out. I just needed a user (a consumer of the claim). As long as I had one, everything worked perfectly.

Related

How to use a Shared Drive as multiple Kubernetes PVs in a Homelab

I have a homelab:
Windows host and VMware Workstation
1 master node
3 worker nodes
All nodes have the Windows drive mounted and available at /external
I want to run multiple tools like Jenkins, Nexus, Nessus, etc. and want to use persistent volumes on the external drive, so that even if I create new EKS clusters the volumes stay there forever and I can reuse them.
So I want to know the best way to use it:
Can I create a single hostPath PV and then have each pod claim, for example, 20GB from it?
Or do I have to create a PV for each pod with hostPath and then claim it in the pod?
So is there a 1:1 relationship between PV and PVC? Or can one PV have multiple claims in different folders?
Also, if I recreate the cluster and create a PV from the same hostPath, will my data still be there?
You can use a local volume instead of hostPath to experiment with SC/PVC/PV. First, you create the StorageClass:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: shared
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
Then you provision a PersistentVolume on each node; here's an example for one node:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: shared-pv-1
spec:
  capacity:
    storage: 20Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: shared
  local:
    path: <path to the shared folder>
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: kubernetes.io/hostname
              operator: In
              values:
                - <your node name>
And the claim that allows you to mount the provisioned volume in a pod:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-pv-1
spec:
  storageClassName: shared
  volumeMode: Filesystem
  resources:
    requests:
      storage: 20Gi
  accessModes:
    - ReadWriteOnce
Here's an example pod that mounts the volume and writes to it:
apiVersion: v1
kind: Pod
metadata:
  name: busybox-1
spec:
  restartPolicy: Never
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: shared-pv-1
  containers:
    - name: busybox-1
      image: busybox
      imagePullPolicy: IfNotPresent
      volumeMounts:
        - name: shared
          mountPath: /data
      command: ["ash","-c","while :; do echo \"$(date)\tmessage from busybox-1.\" >> /data/message.txt; sleep 1; done"]
For local volumes, by default the data written requires manual cleanup and deletion; a positive side effect for you, since you would like the content to persist. If you want to go further and experiment with CSI-like local volumes, you can use the Local Persistence Volume Static Provisioner.
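Once the busybox pod is running, you can verify the setup end to end; with WaitForFirstConsumer, the claim binds only after the pod is scheduled:
kubectl get pvc shared-pv-1
kubectl exec busybox-1 -- tail -n 3 /data/message.txt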

How to find my persistent volume location

I tried creating a persistent volume using hostPath. I could bind it to a specific node using node affinity, but I didn't provide that. My persistent volume YAML looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: fast
spec:
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data
After this I created the PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
And finally attached it to the pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
    - name: myfrontend
      image: thinkingmonster/nettools
      volumeMounts:
        - mountPath: "/var/www/html"
          name: mypd
  volumes:
    - name: mypd
      persistentVolumeClaim:
        claimName: myclaim
Now, the describe command for the PV or PVC does not tell on which node it has actually kept the volume /mnt/data, and I had to SSH into all the nodes to locate it.
And the pod is smart enough to be created only on the node where Kubernetes has mapped the host directory to the PV.
How can I know on which node Kubernetes has created the persistent volume, without having to SSH into the nodes or check where the pod is running?
It's only when a volume is bound to a claim that it's associated with a particular node. HostPath volumes are a bit different from the regular sort, making it a little less clear. When you get the volume claim, the annotations on it should give you a bunch of information, including what you're looking for. In particular, look for the:
volume.kubernetes.io/selected-node: ${NODE_NAME}
annotation on the PVC. You can see the annotations, along with the other computed configuration, by asking the Kubernetes API server for that info:
kubectl get pvc -o yaml -n ${NAMESPACE} ${PVC_NAME}
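To extract just that annotation (assuming it is present; the scheduler sets it for volumes with delayed binding), a jsonpath query works as well:
kubectl get pvc -n ${NAMESPACE} ${PVC_NAME} -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'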

pod has unbound immediate persistentvolumeclaims after deleting namespace

I have configured a Postgres pod with static provisioning of a persistent volume in my local environment. It works fine the first time, but when I delete the namespace and rerun the pod, its status is Pending and it gives me the error:
pod has unbound immediate persistentvolumeclaims
I tried removing the storageClassName from the PersistentVolumeClaim, but that doesn't work.
I also tried changing the storage class from manual to block storage, but I have the same problem.
My YAML file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  namespace: manhattan
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/opt/manhattan/current/pgdata"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
  namespace: manhattan
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: v1
kind: Pod
metadata:
  name: postgres
  namespace: manhattan
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: dbr-postgres
      image: postgres-custome
      tty: true
      volumeMounts:
        - mountPath: "/var/lib/pgsql/9.3/data"
          name: task-pv-storage
  nodeSelector:
    kubernetes.io/hostname: k8s-master
I want my pod to run even after I delete the namespace and rerun the pod.yaml file.
Data will be kept on the Kubernetes node, because hostPath uses the node filesystem to store the data. The problem is that if you have multiple nodes, your pod can start on any other node. To solve this, you can either specify the node where you want your pod to start or set up NFS or GlusterFS on your Kubernetes nodes. This might be the cause of your problem.
There is one more thing I can think of that might be your issue. When you remove a namespace, all the namespaced Kubernetes resources inside it (such as the PVC and the pod) are removed as well, and there is no easy way to recover them. The PV itself is cluster-scoped and survives, but once its claim is deleted it moves to the Released phase and will not bind to a new claim. This means that you have to recreate the PVC and pod in the new namespace and make the PV available again.
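One common workaround, sketched here on the assumption that the default Retain policy left the PV in the Released phase, is to clear the stale claimRef so the PV becomes Available again and can be bound by a newly created PVC:
kubectl patch pv task-pv-volume -p '{"spec":{"claimRef":null}}'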
I solved this issue by setting persistentVolumeReclaimPolicy to Recycle. Now I can rebind the persistent volume even after deleting the namespace and recreating it:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: "/opt/manhattan/current/pgdata"
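After deleting the namespace (and with it the PVC), you can confirm the volume is reusable; with the Recycle policy its contents are scrubbed and it should return to the Available phase:
kubectl get pv task-pv-volume -o jsonpath='{.status.phase}'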

How to specify a storage class for user PVCs in Kubeflow

I am trying to attach a storage class to all the PVC requests created by individual user pods for Jupyter notebooks in Kubeflow.
I tried editing some values and specifying storage_class, but none of it is working; whenever a new PVC comes up, it does not come with a storage class name.
The desired result is that whenever a PVC for user pods comes up, it should have the name of the storage class attached to it. Kindly help with this; I have been stuck since yesterday.
You need to have a default storage class in your cluster, so that if a PVC does not specify any storage class, the default one is selected.
List the StorageClasses in your cluster:
kubectl get storageclass
Mark a StorageClass as default by setting the annotation storageclass.kubernetes.io/is-default-class=true:
kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
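If another class is currently marked as default, you may also need to unset it first, using the same patch with "false" (replace <old-default-class> with the name of that class):
kubectl patch storageclass <old-default-class> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"false"}}}'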
The detailed steps are described in change-default-storage-class.
Based on the documentation:
While PersistentVolumeClaims allow a user to consume abstract storage resources, it is common that users need PersistentVolumes with varying properties, such as performance, for different problems.
Cluster administrators need to be able to offer a variety of PersistentVolumes that differ in more ways than just size and access modes, without exposing users to the details of how those volumes are implemented. For these needs there is the StorageClass resource.
A PersistentVolume (PV) is a piece of storage in the cluster that has been provisioned by an administrator or dynamically provisioned using Storage Classes.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: <name_of_your_StorageClass>
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
A PersistentVolumeClaim (PVC) is a request for storage by a user.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: task-pv-claim
spec:
  storageClassName: <name_of_your_StorageClass>
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
Then you can create a Pod that uses your PVC as a volume (which in turn uses the PV with that StorageClass):
apiVersion: v1
kind: Pod
metadata:
  name: task-pv-pod
spec:
  volumes:
    - name: task-pv-storage
      persistentVolumeClaim:
        claimName: task-pv-claim
  containers:
    - name: task-pv-container
      image: nginx
      ports:
        - containerPort: 80
          name: "http-server"
      volumeMounts:
        - mountPath: "/usr/share/nginx/html"
          name: task-pv-storage
The StorageClass must already exist before you create the PV and PVC; if it doesn't, the default one will be used instead.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: <name_of_your_StorageClass>
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
allowVolumeExpansion: true
mountOptions:
  - debug
volumeBindingMode: Immediate
You can check your StorageClasses with this command:
kubectl get sc

Kubernetes, how to link a PersistentVolume to a volumeClaim

I'm a newbie in the Kubernetes world and I'm trying to figure out how a volumeClaim or volumeClaimTemplates defined in a StatefulSet can be linked to a specific PersistentVolume.
I've followed some tutorials to understand and set up a local PersistentVolume. If I take Elasticsearch as an example, when the StatefulSet starts, the PersistentVolumeClaim is bound to the PersistentVolume.
As you know, for a local PersistentVolume we must define the local path to the storage destination.
For Elasticsearch I've defined something like this:
local:
  path: /mnt/kube_data/elasticsearch
But in a real project, there is more than one persistent volume, so I will have more than one folder under /mnt/kube_data. How does Kubernetes select the right persistent volume for a persistent volume claim?
I don't want Kubernetes to put database data in a persistent volume created for another service.
Here is the configuration for Elasticsearch:
---
apiVersion: apps/v1beta1
kind: StatefulSet
metadata:
  name: elasticsearch-sts
spec:
  serviceName: elasticsearch
  replicas: 1
  [...]
      containers:
        - name: elasticsearch
          image: docker.elastic.co/elasticsearch/elasticsearch:6.4.2
          volumeMounts:
            - name: elasticsearch-data
              mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
    - metadata:
        name: elasticsearch-data
      spec:
        accessModes: [ "ReadWriteOnce" ]
        storageClassName: local-storage
        resources:
          requests:
            storage: 10Gi
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-elasticsearch
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/elasticsearch
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
---
You need claimRef in the persistent volume definition, containing the name of the PVC to which you want to bind your PV. The claimRef in the PV should also include the namespace where the PVC resides, because PVs are cluster-scoped while PVCs are namespaced. The same PVC name can exist in two different namespaces, hence it is mandatory to provide the namespace along with the PVC name, even when the PVC resides in the default namespace.
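For illustration, a minimal sketch of the PV above pre-bound via claimRef, assuming the StatefulSet runs in the default namespace (a volumeClaimTemplate generates PVCs named <template-name>-<statefulset-name>-<ordinal>):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-elasticsearch
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  storageClassName: local-storage
  claimRef:
    # PVC name generated by the volumeClaimTemplate for replica 0
    name: elasticsearch-data-elasticsearch-sts-0
    namespace: default
  local:
    path: /mnt/elasticsearch
  nodeAffinity:
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: node-role.kubernetes.io/master
              operator: Exists
With claimRef set, the control plane reserves this PV for that specific claim, so volumes intended for other services cannot grab it.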
You can refer to the following answer for PV, PVC, and StatefulSet YAML files for local storage:
Is it possible to mount different pods to the same portion of a local persistent volume?
Hope this helps.