Kubernetes: Dynamic Storage Provisioning using host-path

My question is about PersistentVolumeClaim.
I have a one-node cluster set up on AWS EC2.
I am trying to create a storage class using kubernetes.io/host-path as the provisioner.
The YAML file content for the storage class is as follows:
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  namespace: kube-system
  name: my-storage
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "false"
  labels:
    kubernetes.io/cluster-service: "true"
provisioner: kubernetes.io/host-path
The YAML file content for the PersistentVolumeClaim is as follows:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: my-storage
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
When I try to create the storage class and PVC on minikube, it works: the volume is created on minikube under /tmp/hostpath_volume/.
But when I try the same thing on the one-node cluster on AWS EC2, I get the following error:
Failed to create provisioner: Provisioning in volume plugin "kubernetes.io/host-path" is disabled
I can see this error when I run kubectl describe pvc task-pv-claim. Also, since no PV is created, the claim stays in the Pending state.
I found that kube-controller-manager has --enable-dynamic-provisioning and --enable-hostpath-provisioner among its options, but I don't know how to use them.

It seems you might not be running the provisioner itself, so there is nothing to actually do the work of creating the hostPath directory.
Take a look here.
The way this works is that the hostPath provisioner reads from the Kubernetes API and watches for you to create a storage class (which you've done) and a PersistentVolumeClaim (also done).
When those exist, the provisioner (which is running as a pod) will go and execute a mkdir to create the hostPath directory.
Run the following:
kubectl apply -f https://raw.githubusercontent.com/kubernetes-incubator/external-storage/master/docs/demo/hostpath-provisioner/pod.yaml
Then recreate your StorageClass and PVC.
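To address the flags named in the question directly: on a kubeadm-style setup the kube-controller-manager runs as a static pod, so the flags can be added to its manifest. A sketch, assuming the default kubeadm manifest path (note that hostPath provisioning is intended for single-node development and testing only):
# Edit /etc/kubernetes/manifests/kube-controller-manager.yaml
# (the path assumes kubeadm; it may differ on other installations)
# and append to the container's command list:
#   - --enable-dynamic-provisioning=true
#   - --enable-hostpath-provisioner=true
# The kubelet restarts the static pod automatically when the manifest changes.
# Afterwards, verify that the claim binds:
kubectl describe pvc task-pv-claim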

Related

Running multiple pods of go-ethereum with common EFS storage on AWS EKS/kubernetes

I am trying to run a go-ethereum node on AWS EKS; for that I have used StatefulSets with the configuration below.
statefulset.yaml file
Running kubectl apply -f statefulset.yaml creates 2 pods, of which 1 is running and 1 is in the CrashLoopBackOff state.
Pods status
After checking the logs for the second pod, the error I am getting is Fatal: Failed to create the protocol stack: datadir already used by another process.
Error logs
The problem is mainly that the pods use the same directory on the persistent volume to write their geth data (i.e. the pods all write to /data). If I use a subPathExpr and mount each pod's directory to a subdirectory named after the pod (e.g. /data/geth-0), it works fine.
statefulset.yaml with the volume mounted to a subdirectory named after the pod
But my requirement is that all three pods' data is written to the /data directory.
Below is my volume config file.
volume configuration
You need to dynamically provision an access point for each of your stateful pods. First create an EFS storage class that supports dynamic provisioning:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: efs-dyn-sc
provisioner: efs.csi.aws.com
reclaimPolicy: Retain
parameters:
  provisioningMode: efs-ap
  directoryPerms: "700"
  fileSystemId: <get the ID from the EFS console>
Then update your spec to use a volume claim template:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: geth
  ...
spec:
  ...
  template:
    ...
    spec:
      containers:
        - name: geth
          ...
          volumeMounts:
            - name: geth
              mountPath: /data
      ...
  volumeClaimTemplates:
    - metadata:
        name: geth
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: efs-dyn-sc
        resources:
          requests:
            storage: 5Gi
All pods now write to their own /data.
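With a claim template named geth on a StatefulSet named geth, Kubernetes derives one PVC per pod, named <template-name>-<statefulset-name>-<ordinal>. Roughly what you should see (volume names, sizes, and ages are illustrative):
$ kubectl get pvc
NAME          STATUS   VOLUME        CAPACITY   ACCESS MODES   STORAGECLASS   AGE
geth-geth-0   Bound    pvc-0b1c...   5Gi        RWO            efs-dyn-sc     2m
geth-geth-1   Bound    pvc-9f2d...   5Gi        RWO            efs-dyn-sc     1m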
The same directory cannot be reused by multiple instances of go-ethereum, so you have the following options:
Use the same persistent volume for every pod and give each pod its own subdirectory (see the subPathExpr sketch after this list)
Use a separate persistent volume for each pod, so that each pod can use the same /data path
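For the first option, a minimal sketch of the subPathExpr approach the asker already validated, assuming the shared volume is named geth-data; the pod name is injected via the downward API:
containers:
  - name: geth
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    volumeMounts:
      - name: geth-data
        mountPath: /data          # each container still sees /data
        subPathExpr: $(POD_NAME)  # but writes land in a per-pod subdirectory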

Dynamic storage class on GCP Kubernetes

I am trying to set up a k8s cluster on CentOS 7 spun up on GCP. I created 3 master and 2 worker nodes; installation and checks are fine. Now I am trying to create a dynamic storage class that uses a GCP disk, but somehow the claim goes into the Pending state and no error messages are found. Can anyone point me to the correct docs or steps to make sure it works?
[root@master1 ~]# cat slowdisk.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: us-east1-b
[root@master1 ~]# cat pclaim-slow.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
[root@master1 ~]# kubectl get pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Pending                                                     5m58s
[root@master1 ~]# kubectl describe pvc
Name:          claim2
Namespace:     default
StorageClass:
Status:        Pending
Volume:
Labels:        <none>
Annotations:   kubectl.kubernetes.io/last-applied-configuration:
                 {"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"claim2","namespace":"default"},"spec":{"accessModes...
Finalizers:    [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode:    Filesystem
Mounted By:    <none>
Events:        <none>
You haven't mentioned the storage class name. The value storageClassName: standard needs to be added to the PVC manifest file pclaim-slow.yaml.
You are missing the storageClassName in the PVC YAML. Below is the updated pvc.yaml; check the documentation as well.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2G
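With the class added, apply the updated manifest and watch the claim; once the gce-pd provisioner creates the disk, the status should move from Pending to Bound (a quick check, assuming the file name above):
kubectl apply -f pclaim-slow.yaml
kubectl get pvc claim2 --watch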
It's kind of weird that you are not getting any error events from your PersistentVolumeClaim.
You can read Provisioning regional Persistent Disks, which says:
It's in Beta
This is a Beta release of local PersistentVolumes. This feature is not covered by any SLA or deprecation policy and might be subject to backward-incompatible changes.
Also, the pd-standard type, which you have configured, is supposed to be used for drives of at least 200Gi:
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
You can read more regarding Dynamic Persistent Volumes.
If you encounter the error PVC pending Failed to get GCE GCECloudProvider, it means your cluster was not configured to use GCE as the cloud provider. It can be fixed by adding cloud-provider: gce to the controller manager, or by applying this YAML:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud
  extraVolumes:
    - name: cloud-config
      hostPath: /etc/kubernetes/cloud-config
      mountPath: /etc/kubernetes/cloud
      pathType: FileOrCreate
kubeadm upgrade apply --config gce.yaml
Let me know if that helps; if not, I'll try to be more helpful.

How do I use PV and PVC for *reliable* persistent volumes?

I followed the instructions in this post:
how to bound a Persistent volume claim with a gcePersistentDisk?
And when I applied that, my PVC did not bind to the PV; instead I got this error in the event list:
14s 17s 2 test-pvc.155b8df6bac15b5b PersistentVolumeClaim Warning ProvisioningFailed persistentvolume-controller Failed to provision volume with StorageClass "standard": claim.Spec.Selector is not supported for dynamic provisioning on GCE
I found a github posting that suggested something that would fix this:
https://github.com/coreos/prometheus-operator/issues/323#issuecomment-299016953
But unfortunately that made no difference.
Is there a soup-to-nuts doc somewhere telling us exactly how to use PV and PVC to create truly persistent volumes? Specifically, one where you can shut down the PV and PVC and restore them later and get all your content back? Because as it seems right now, if you lose your PVC for whatever reason, you lose the connection to your volume and there is no way to get it back.
The default StorageClass is not compatible with a gcePersistentDisk. Something like this would work:
$ cat <<EOF | kubectl create -f -
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: slow
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
EOF
Then on your PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: "slow"   # specify the storageClass
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You can also set "slow" as the default storageClass, in which case you wouldn't have to specify it on your PVC:
$ kubectl patch storageclass slow -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
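Afterwards you can confirm the default; expected output, roughly (columns vary by kubectl version):
$ kubectl get storageclass
NAME             PROVISIONER            AGE
slow (default)   kubernetes.io/gce-pd   1m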

pod has unbound PersistentVolumeClaims

When I push my deployments, for some reason I'm getting this error on my pods:
pod has unbound PersistentVolumeClaims
Here are my YAML files below.
This is running locally, not on any cloud solution.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
      volumes:
        - name: ckan-home
          persistentVolumeClaim:
            claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim.
When using a storageClass, Kubernetes is going to enable "Dynamic Volume Provisioning", which does not work with the local file system.
To solve your issue:
Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("") (see the sketch after this list)
Remove the StorageClass from your cluster
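A minimal sketch of the second option: the same claim with the storage class explicitly emptied, so it binds statically to a matching PersistentVolume (like the example further down):
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
spec:
  storageClassName: ""   # empty value: skip dynamic provisioning, bind to a static PV
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi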
How do these pieces play together?
When the deployment descriptor is created, it is usually known what kind of storage (amount, speed, ...) the application will need.
To make a deployment versatile, you'd like to avoid a hard dependency on storage. Kubernetes' volume abstraction allows you to provide and consume storage in a standardized way.
The PersistentVolumeClaim is used to state a storage constraint alongside the deployment of an application.
The PersistentVolume offers cluster-wide volume instances ready to be consumed ("bound"). One PersistentVolume will be bound to exactly one claim; but since the pods consuming that claim may run on multiple nodes, the volume may be accessed by multiple nodes.
A PersistentVolume without StorageClass is considered to be static.
"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
Example PersistentVolume
In order to find out how to specify things, you're best advised to take a look at the API reference for your Kubernetes version; the following example is built from the API reference of K8s 1.17:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  hostPath:
    path: "/mnt/data/ckan"
The PersistentVolumeSpec allows us to define multiple attributes.
I chose a hostPath volume, which maps a local directory as the content of the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.
Additional Resources:
Configure PersistentVolume Guide
If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path as described in the documentation:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
To use it on other distributions, use https://github.com/rancher/local-path-provisioner
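For reference, that repository documents a one-manifest install; this is the command from its README:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml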
I ran into this issue, but I realized that I was creating my PVs with the "manual" StorageClass type. Check that the storage class name lines up at every level:
Your pod: which storage class does it expect?
Your PVC definition: volumeClaimTemplates --> storageClassName: "standard"
Your PV: spec --> storageClassName: "standard"
In my case the problem was the wrong name of the PersistentVolume specified in the PersistentVolumeClaim declaration.
But there might be more reasons for it. Make sure that (a worked example follows this list):
The volumeName specified in the PVC matches the PV name
The storageClassName specified in the PVC matches the PV's storageClassName
Sufficient capacity is allocated to your resource
The access modes of your PV and PVC are consistent
The number of PVs matches the number of PVCs
For a detailed explanation, read this article.
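To make the first two checklist items concrete, a minimal statically bound pair; all names and the path here are illustrative:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: data-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data   # illustrative node-local path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  storageClassName: manual   # must match the PV's storageClassName
  volumeName: data-pv        # must match the PV's name
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi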
We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the installed drivers, you can use this command:
kubectl get csidriver
Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was to just enable the 'Compute Engine persistent disk CSI Driver' feature.
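On GKE that driver is a cluster addon; one way to enable it, assuming your own cluster name and zone:
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED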

no persistent volumes available for this claim and no storage class is set

My pvc.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: database-disk
  labels:
    stage: production
    name: database
    app: mysql
spec:
  accessModes:
    - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 2Gi
When I ran kubectl apply -f pvc.yaml, I got the following event:
Normal FailedBinding 12h (x83 over 13h) persistentvolume-controller no persistent volumes available for this claim and no storage class is set
The same PVC worked fine on GKE (Google Kubernetes Engine) but is failing in my local cluster using microk8s.
Did you create any PV in your cluster?
PVs and storage classes on local clusters have to be created manually by the cluster admin (a PV sketch follows the quoted steps).
Check out the Kubernetes documentation for the details:
A cluster administrator creates a PersistentVolume that is backed by physical storage. The administrator does not associate the volume with any Pod.
A cluster user creates a PersistentVolumeClaim, which gets automatically bound to a suitable PersistentVolume.
The user creates a Pod that uses the PersistentVolumeClaim as storage.
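Following those steps for the claim above, a hostPath PV sketch that would satisfy it on a single microk8s node (the name and path are illustrative); alternatively, microk8s bundles a hostpath provisioner that can be enabled with microk8s enable storage:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: database-disk-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: /mnt/data/mysql   # illustrative path on the node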