Dynamic Persistent volume in Kubernetes - persistence

I have a Kubernetes cluster on 1.4.6 and am trying to configure dynamic persistent volumes based on GlusterFS. I have created the GlusterFS cluster and the volume as well.
gluster volume info
Volume Name: volume1
Type: Replicate
Volume ID: xxxxxxxxx
Status: Started
Number of Bricks: 1 x 2 = 2
Transport-type: tcp
Bricks:
Brick1: <host-1>:/gluster-storage
Brick2: <host-2>:/gluster-storage
Options Reconfigured:
performance.readdir-ahead: on
From the Kubernetes side, a StorageClass is created with the storageclass.beta.kubernetes.io/is-default-class annotation set to "true" and the provisioner set to kubernetes.io/glusterfs. With this configuration, when the PVC is created, it stays Pending and never gets bound. Checking the PVs, there is no PV created by the gluster storage driver mentioned in the StorageClass.
Is dynamic provisioning using GlusterFS available in 1.4.6?
Is there any specific configuration that needs to be enabled when the kube-controller-manager is started?
Following are the yml files for reference.
kind: StorageClass
apiVersion: storage.k8s.io/v1beta1
metadata:
  name: slow
  annotations:
    storageclass.beta.kubernetes.io/is-default-class: "true"
provisioner: kubernetes.io/glusterfs
parameters:
  endpoint: "glusterfs-cluster"
  resturl: "<Host IP for Gluster>"
  restauthenabled: "false"
  restuser: ""
  restuserkey: ""
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
  annotations:
    volume.alpha.kubernetes.io/storage-class: default
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
Has anybody done dynamic provisioning using GlusterFS?

I think you should set the volume.alpha.kubernetes.io/storage-class annotation on the PersistentVolumeClaim to slow, the name you gave the StorageClass.
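For reference, the claim would then look something like this (a sketch based on the manifests above; only the annotation value changes):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-claim
  annotations:
    # reference the "slow" StorageClass defined above
    volume.alpha.kubernetes.io/storage-class: slow
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi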

Related

Can I use EBS in EKS on fargate?

I want to use Prometheus in EKS on AWS Fargate. I followed this guide:
https://aws.amazon.com/jp/blogs/containers/monitoring-amazon-eks-on-aws-fargate-using-prometheus-and-grafana/
but I can't create persistent volume claims.
This is prometheus-storageclass.yaml:
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: prometheus
  namespace: prometheus
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
reclaimPolicy: Retain
mountOptions:
  - debug
Can I use aws-ebs in the provisioner field?
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: prometheus-server
  namespace: prometheus
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 16Gi
  storageClassName: prometheus
I apply the StorageClass and the PVC. When I apply the PVC, it stays Pending and I get the following message:
Failed to provision volume with StorageClass "prometheus": error finding candidate zone for pvc: no instances returned
I forgot to create the node group.
eksctl create nodegroup --cluster=myClusterName
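A couple of quick checks after creating the node group (the in-tree aws-ebs provisioner needs EC2 worker nodes to place volumes on; it cannot provision for Fargate-only clusters):
# confirm EC2 worker nodes exist
kubectl get nodes -o wide
# re-check the claim once the node group is up
kubectl describe pvc prometheus-server -n prometheus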

Cannot mount glusterfs volume on GKE

I have deployed a GlusterFS cluster on GCE: three nodes, one volume, and one brick. I need to mount it on a pod deployed on GKE. I have successfully created the endpoint and the PV, but I cannot create the PVC. If I set volumeName to refer to my PV, I get the following error:
2s Warning VolumeMismatch persistentvolumeclaim/my-glusterfs-pvc Cannot bind to requested volume "my-glusterfs-pv": storageClassName does not match
If I don't set volumeName, I get the following error:
3s Warning ProvisioningFailed persistentvolumeclaim/my-glusterfs-pvc Failed to provision volume with StorageClass "standard": invalid AccessModes [ReadWriteMany]: only AccessModes [ReadWriteOnce ReadOnlyMany] are supported
I thought that GlusterFS had native support on GKE; however, I cannot find it on the GCP site:
https://cloud.google.com/kubernetes-engine/docs/concepts/node-images#storage_driver_support
Is it related to the errors I'm getting while deploying the PVC? How could I solve it?
Thanks!
I have not defined any StorageClass. I can only find a Heketi-provisioned GlusterFS StorageClass in the Kubernetes documentation. My YAMLs:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-glusterfs-pv
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  glusterfs:
    path: mypath
    endpoints: my-glusterfs-cluster
    readOnly: false
  persistentVolumeReclaimPolicy: Retain
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv
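Worth noting: on GKE a PVC without a storageClassName gets the default "standard" class applied, which explains both errors above. A claim meant to bind statically to a pre-created PV is usually written with an explicit empty class (a sketch reusing the names from the manifests above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-glusterfs-pvc
spec:
  # an empty string opts out of the default StorageClass and dynamic provisioning
  storageClassName: ""
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
  volumeName: my-glusterfs-pv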
I have deployed Heketi on my GlusterFS cluster and it works correctly. However, I cannot deploy the PVC on GKE. The glusterfs-client is running OK, the Heketi secret is available, and the gluster StorageClass was created without errors, but the PVC remains Pending and I have found this event:
17m Warning ProvisioningFailed persistentvolumeclaim/liferay-glusterfs-pvc Failed to provision volume with StorageClass "gluster-heketi-external": failed to create volume: failed to create volume: see kube-controller-manager.log for details
Storageclass:
kind: StorageClass
# apiVersion: storage.k8s.io/v1beta1
apiVersion: storage.k8s.io/v1
metadata:
  name: gluster-heketi-external
provisioner: kubernetes.io/glusterfs
parameters:
  restauthenabled: "true"
  resturl: "http://my-heketi-node-ip:8080"
  clusterid: "idididididididididid"
  restuser: "myadminuser"
  secretName: "heketi-secret"
  secretNamespace: "my-app"
  volumetype: "replicate:3"
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  annotations:
    volume.beta.kubernetes.io/storage-class: gluster-heketi-external
  # finalizers:
  #   - kubernetes.io/pvc-protection
  name: my-app-pvc
  namespace: my-app
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 100Gi
I have also created the firewall rule needed for Heketi, so the GKE nodes can reach the Heketi resturl and port. I have checked that it is working by launching a telnet to port 8080 while sniffing the traffic on the Heketi node. Everything appears to be OK: I can see the answer and the packets. However, I cannot see any packets when I deploy the PVC. There is nothing in syslog. I can create volumes when the heketi command is run locally.
I have tried to curl the Heketi node from a gluster-client pod and it works correctly:
kubectl exec -it gluster-client-6fqmb -- curl http://[myheketinodeip]:8080/hello
Hello from Heketi

dynamic storage class on gcp kubernetes

I am trying to set up a Kubernetes cluster on CentOS 7 VMs spun up on GCP. I created 3 masters and 2 worker nodes, and the installation and checks are fine. Now I am trying to create a dynamic StorageClass to use GCP disks, but the claim goes into a Pending state and no error messages are found. Can anyone point me to the correct docs or steps to make this work?
[root@master1 ~]# cat slowdisk.yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: kubernetes.io/gce-pd
parameters:
  type: pd-standard
  replication-type: none
  zone: us-east1-b
[root@master1 ~]# cat pclaim-slow.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
[root@master1 ~]# kubectl get pvc
NAME     STATUS    VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim2   Pending                                                     5m58s
[root@master1 ~]# kubectl describe pvc
Name: claim2
Namespace: default
StorageClass:
Status: Pending
Volume:
Labels: <none>
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"name":"claim2","namespace":"default"},"spec":{"accessModes...
Finalizers: [kubernetes.io/pvc-protection]
Capacity:
Access Modes:
VolumeMode: Filesystem
Mounted By: <none>
Events: <none>
You haven't mentioned the storage class name. The value storageClassName: standard needs to be added to the PVC manifest file pclaim-slow.yaml.
You are missing the storageClassName in the PVC YAML. Below is the updated pvc.yaml. Check the documentation as well.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: claim2
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 2G
It's kind of weird that you are not getting any error events from your PersistentVolumeClaim.
You can read Provisioning regional Persistent Disks, which says:
It's in Beta
This is a Beta release of local PersistentVolumes. This feature is not covered by any SLA or deprecation policy and might be subject to backward-incompatible changes.
Also, the pd-standard type, which you have configured, is supposed to be used for drives of at least 200Gi.
Note: To use regional persistent disks of type pd-standard, set the PersistentVolumeClaim.storage attribute to 200Gi or higher. If you need a smaller persistent disk, use pd-ssd instead of pd-standard.
You can read more regarding Dynamic Persistent Volumes.
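If a smaller disk is needed, a StorageClass using pd-ssd could look like this (a sketch based on the manifest above; the name fast is just illustrative and the zone should match your nodes):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast
provisioner: kubernetes.io/gce-pd
parameters:
  # pd-ssd avoids the 200Gi minimum quoted above for regional pd-standard disks
  type: pd-ssd
  zone: us-east1-b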
If you encounter the error PVC pending: Failed to get GCE GCECloudProvider, that means your cluster was not configured to use GCE as the cloud provider. It can be fixed by adding cloud-provider: gce to the controller manager, or just applying the YAML:
apiVersion: kubeadm.k8s.io/v1beta1
kind: ClusterConfiguration
controllerManager:
  extraArgs:
    cloud-provider: gce
    cloud-config: /etc/kubernetes/cloud
  extraVolumes:
    - name: cloud-config
      hostPath: /etc/kubernetes/cloud-config
      mountPath: /etc/kubernetes/cloud
      pathType: FileOrCreate
kubeadm upgrade apply --config gce.yaml
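To verify the flag was picked up on a kubeadm-managed control plane, one option is to inspect the generated static-pod manifest (the path assumes kubeadm defaults):
# the controller-manager manifest should now contain --cloud-provider=gce
grep cloud-provider /etc/kubernetes/manifests/kube-controller-manager.yaml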
Let me know if that helps, if not I'll try to be more helpful.

pod has unbound PersistentVolumeClaims

When I push my deployments, for some reason, I'm getting the error on my pods:
pod has unbound PersistentVolumeClaims
Here are my YAML below:
This is running locally, not on any cloud solution.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
      volumes:
        - name: ckan-home
          persistentVolumeClaim:
            claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim.
When using a storageClass, Kubernetes enables "Dynamic Volume Provisioning", which does not work with the local file system.
To solve your issue:
Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("")
Remove the StorageClass from your cluster
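Once those changes are applied, the binding can be verified quickly (a minimal check; the claim name is taken from the manifests above):
kubectl get pv,pvc
kubectl describe pvc ckan-pv-home-claim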
How do these pieces play together?
At creation of the deployment description it is usually known which kind of storage (amount, speed, ...) the application will need.
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume abstraction allows you to provide and consume storage in a standardized way.
The PersistentVolumeClaim is used to provide a storage constraint alongside the deployment of an application.
The PersistentVolume offers cluster-wide volume instances ready to be consumed ("bound"). One PersistentVolume will be bound to one claim. But since multiple instances of that claim may run on multiple nodes, the volume may be accessed by multiple nodes.
A PersistentVolume without StorageClass is considered to be static.
"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
Example PersistentVolume
In order to find out how to specify things, you're best advised to take a look at the API for your Kubernetes version, so the following example is built from the API reference of K8S 1.17:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  accessModes:
    # accessModes is required on a PersistentVolume and must cover the claim's mode
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data/ckan"
The PersistentVolumeSpec allows us to define multiple attributes.
I chose a hostPath volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.
Additional Resources:
Configure PersistentVolume Guide
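For completeness, a claim that can bind to the example volume above might look like this (a sketch following the advice to drop the storageClass; the names reuse the manifests above):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  # an empty string opts out of dynamic provisioning so the claim can bind to ckan-pv-home
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi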
If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path as described in the docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
To use it on other distributions use https://github.com/rancher/local-path-provisioner
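If the local-path StorageClass is not present yet, the provisioner is typically installed with a single manifest (check the repository README for the current manifest path; this is the layout used there at the time of writing):
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
# confirm the class is available
kubectl get storageclass local-path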
I ran into this issue, but I realized that I was creating my PVs with the "manual" StorageClass type.
YOUR POD: expects what kind of storage class?
YOUR PVC definition: volumeClaimTemplates --> storageClassName: "standard"
YOUR PV definition: spec --> storageClassName: "standard"
In my case the problem was the wrong PersistentVolume name specified in the PersistentVolumeClaim declaration.
But there might be more reasons for it. Make sure that (see the sketch below):
The volumeName specified in the PVC matches the PV name
The storageClassName specified in the PVC matches the PV's storageClassName
Sufficient capacity is allocated for your resource
The access modes of your PV and PVC are consistent
The number of PVs matches the number of PVCs
For a detailed explanation, read this article.
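As a quick illustration of the first two points, a statically bound pair keeps the names and the storageClassName in sync (a minimal sketch; the manual class name, sizes, and hostPath are placeholders):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  # must match the PV: class name, access mode, and a size the PV can satisfy
  storageClassName: manual
  volumeName: my-pv
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi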
We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the drivers installed, you can use this command:
kubectl get csidriver
Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was simply to enable the ‘Compute Engine persistent disk CSI Driver' feature.
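On GKE this can also be done from the command line (a sketch; the cluster name and zone are placeholders):
# enable the managed Compute Engine Persistent Disk CSI Driver add-on
gcloud container clusters update my-cluster \
  --zone us-central1-a \
  --update-addons=GcePersistentDiskCsiDriver=ENABLED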

Kubernetes: PersistentVolume And PersistentVolumeClaim - Sharing Claims

This question is about the behavior of PersistentVolume and PersistentVolumeClaim configurations within Kubernetes. We have read through the documentation and are left with a few lingering questions.
We are using Azure Kubernetes Service to host our cluster and we want to provide a shared persistent storage backend for many of our Pods. We are planning on using PersistentVolumes to accomplish this.
In this scenario, we want to issue a PersistentVolume backed by an AzureFile storage resource. We will deploy Jenkins to our cluster and store the jenkins_home directory in the PersistentVolume so that our instance can survive pod and node failures. We will be running multiple Master Jenkins nodes, all configured with a similar deployment yaml.
We have created all the needed storage accounts and applicable shares ahead of time, as well as the needed secrets.
First, we issued the following PersistentVolume configuration:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins-azure-file-share
  labels:
    usage: jenkins-azure-file-share
spec:
  capacity:
    storage: 100Gi
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  azureFile:
    secretName: azure-file-secret
    shareName: jenkins
    readOnly: false
  mountOptions:
    - dir_mode=0777
    - file_mode=0777
    - uid=1000
    - gid=1000
Following that, we then issued the following PersistentVolumeClaim configuration:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: jenkins-file-claim
  annotations:
    volume.beta.kubernetes.io/storage-class: ""
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 10Gi
  volumeName: "jenkins-azure-file-share"
Next, we use this claim within our deployments in the following manner:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: jenkins-instance-name
spec:
  replicas: 1
  template:
    metadata:
      labels:
        role: jenkins
        app: jenkins-instance-name
    spec:
      containers:
        - name: jenkins-instance-name
          image: ContainerRegistry.azurecr.io/linux/jenkins_master:latest
          ports:
            - name: jenkins-port
              containerPort: 8080
          volumeMounts:
            - name: jenkins-home
              mountPath: /var/jenkins_home
              subPath: "jenkins-instance-name"
      volumes:
        - name: jenkins-home
          persistentVolumeClaim:
            claimName: "jenkins-file-claim"
      imagePullSecrets:
        - name: ImagePullSecret
This is all working as expected. We have deployed multiple Jenkins Masters to our Kubernetes cluster and each one is correctly allocating a new folder on the share specific to each master instance.
Now for my questions
The PersistentVolume is configured with 100Gig of Storage. Does this mean that Kubernetes will only allow a maximum of 100Gig of total storage in this volume?
When the PersistentVolumeClaim is bound to the PersistentVolume, the PersistentVolumeClaim seems to show that it has 100Gig of total storage available, even though the PersistentVolumeClaim was configured for 10Gig of storage:
C:\ashley\scm\kubernetes>kubectl get pv
NAME                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                        STORAGECLASS   REASON   AGE
jenkins-azure-file-share   100Gi      RWX            Retain           Bound    default/jenkins-file-claim                           2d
C:\ashley\scm\kubernetes>kubectl get pvc
NAME                       STATUS   VOLUME                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
jenkins-homes-file-claim   Bound    jenkins-azure-file-share   100Gi      RWX                           2d
Is this just bad output from the get pvc command or am I misinterpreting the output of the get pvc command?
When sharing a PersistentVolumeClaim in this way:
Does each deployment ONLY have access to the configured maximum of 10Gig of storage from the PersistentVolume's 100Gig capacity?
Or, does each deployment have access to its own 10Gig slice of the total 100Gig of storage configured for the PersistentVolume?
With this configuration, what happens when a single PersistentVolumeClaim capacity gets fully utilized? Do all the Deployments using this single PersistentVolumeClaim stop working?
So for the PVC it is definitely the case that it only has 10Gig available with this config. For the PV I assume it is the same, but in this case I don't know for sure; it should be, for consistency. And it stops working if any of these limits is reached, so if you have 11 Jenkins instances running it can fail even though you haven't reached the limit on any single PVC.
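One way to see what a running pod actually has available is to check the mounted filesystem directly (a quick check; the pod name is a placeholder):
kubectl exec -it <jenkins-pod-name> -- df -h /var/jenkins_home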