Adding a Compute Engine Disk to Container Engine as persistent volume - kubernetes

I have a PersistentVolumeClaim that looks like the following:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-config-storage
  namespace: gitlab
  annotations:
    volume.beta.kubernetes.io/storage-class: fast
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
This created a disk in Google Compute Engine. I then deleted the claim and reapplied it, but that created a new disk. I would like to attach the original disk to my claim, since it already contains data I created. Is there a way to force GKE to use a specific disk?

By using a persistent volume claim, you are asking GKE to use a persistent disk, and then always use the same volume.
However, by deleting the claim, you've essentially destroyed it.
Don't delete the claim, ever, if you want to continue using it.
You can attach a claim to multiple pods over its lifetime, and the disk will remain the same. As soon as you delete the claim, it will disappear.
Take a look here for more information.
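If you want the underlying disk to survive even when the claim is deleted, one option (a sketch, assuming you look up the bound PV name first) is to switch the volume's reclaim policy to Retain before touching the claim:
kubectl get pv
kubectl patch pv <pv-name> -p '{"spec":{"persistentVolumeReclaimPolicy":"Retain"}}'
With Retain, deleting the PVC releases the volume but leaves the GCE disk (and its data) in place.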

You can re-attach a GCE disk to a PersistentVolumeClaim by first creating the PersistentVolume. Create a YAML file and set the proper values, e.g.:
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pvc-gitlab-config-storage
spec:
  accessModes:
    - ReadWriteOnce
  capacity:
    storage: 25Gi
  claimRef:
    apiVersion: v1
    kind: PersistentVolumeClaim
    name: gitlab-config-storage
    namespace: gitlab
  gcePersistentDisk:
    pdName: <name_of_the_gke_disk>
  persistentVolumeReclaimPolicy: Delete
  storageClassName: fast
Create this with kubectl apply -f filename.yaml, then re-create your PersistentVolumeClaim with values that match the spec and claimRef. The PVC should find the matching PV and bind to it and to the existing GCE disk.
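For example, a re-created claim matching the PV above might look like this (a sketch based on the claim from the question, adjusted to the PV's access mode and size):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gitlab-config-storage
  namespace: gitlab
spec:
  accessModes:
    - ReadWriteOnce          # must match the PV's access mode
  storageClassName: fast     # must match the PV's storageClassName
  resources:
    requests:
      storage: 25Gi          # must not exceed the PV's capacity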

Related

How can I mount a PV on one node and use that same PV for pods on another node?

I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the node they are scheduled onto, to use that EBS volume. Is this possible?
My approach was to create a PV/PVC that mounts to that volume and then use that PVC in my pod, but I am not sure whether it mounts on the same host the pod comes up on or on a different host.
YAML for Storage Class
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: local-path
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: Immediate
allowVolumeExpansion: true
reclaimPolicy: Delete
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-pv
  labels:
    type: local
spec:
  capacity:
    storage: 200Mi
  storageClassName: local-path
  claimRef:
    namespace: redis
    name: data-redis-0
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/mnt2/data/redis"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 200Mi
  storageClassName: local-path
Now when I try to schedule a pod, the storage only gets mounted on that same node.
Since you are using a local path, you cannot do this.
There are different access modes for a PVC: ReadWriteMany, ReadWriteOnce, and ReadOnlyMany.
A PersistentVolumeClaim (PVC) is a request for storage by a user. It
is similar to a Pod. Pods consume node resources and PVCs consume PV
resources. Pods can request specific levels of resources (CPU and
Memory). Claims can request specific size and access modes (e.g., they
can be mounted ReadWriteOnce, ReadOnlyMany or ReadWriteMany, see
AccessModes).
Read more at: https://kubernetes.io/docs/concepts/storage/persistent-volumes/
Yes, you can mount multiple pods to a single PVC, but in that case you have to use ReadWriteMany. Most people use NFS or EFS for this type of use case (see the sketch below).
EBS is ReadWriteOnce, so it won't be possible to use EBS in your case. You have to use either NFS or EFS.
You can also use GlusterFS backed by EBS volumes. GlusterFS supports ReadWriteMany and will be faster than EFS because it is backed by block storage (SSD).
For ReadWriteMany you can also check out: https://min.io/
Find access mode details here: https://kubernetes.io/docs/concepts/storage/persistent-volumes/#access-modes
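As a sketch of the NFS approach (the server address and export path below are placeholders, not something that exists in your cluster):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: redis-nfs-pv
spec:
  capacity:
    storage: 200Mi
  accessModes:
    - ReadWriteMany           # NFS supports shared read-write access from many nodes
  nfs:
    server: nfs.example.com   # placeholder NFS server
    path: /exports/redis      # placeholder export path
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-redis-0
spec:
  accessModes:
    - ReadWriteMany
  storageClassName: ""        # bind to the statically created PV above
  resources:
    requests:
      storage: 200Mi
Every pod that references this claim can then mount the same NFS share, regardless of the node it lands on.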
I have attached an EBS volume to one of the nodes in my cluster, and I want any pods that come up, irrespective of the node they are scheduled onto, to use that EBS volume. Is this possible?
No. An EBS volume can only be attached to at most one EC2 instance, and correspondingly, one Kubernetes node. In Kubernetes terminology, it only allows the ReadWriteOnce access mode.
It looks like the volume you're trying to create is the backing store for a Redis instance. If the volume will only be attached to one pod at a time, then this isn't a problem on its own, but you need to let Kubernetes manage the volume for you. Then the cluster will know to detach the EBS volume from the node it's currently on and reattach it to the node with the new pod. Setting this up is a cluster-administration problem and not something you as a programmer can do, but it should be set up for you in environments like Amazon's EKS managed Kubernetes.
In this environment:
Don't create a StorageClass; this is cluster-level configuration.
Don't manually create a PersistentVolume; the cluster will create it for you.
You should be able to use the default storageClass: in your PersistentVolumeClaim.
You probably should use a StatefulSet to create the PersistentVolumeClaim for you.
So for example:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: redis                  # StatefulSets require a governing service name
  selector:
    matchLabels:
      app: redis
  volumeClaimTemplates:               # automatically creates PersistentVolumeClaims
    - metadata:
        name: data-redis
      spec:
        accessModes: [ReadWriteOnce]  # data won't be shared between pods
        resources:
          requests:
            storage: 200Mi
        # default storageClassName:
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
        - name: redis
          image: redis                # example image
          volumeMounts:
            - name: data-redis
              mountPath: /data

How to copy PVC between different storage classes?

I know about snapshots and have tested volume cloning, and it works when the storage class is the same.
But what if I have two storage classes, one for fast SSD and a second for cold HDD storage over the network, and I want to periodically back up to cold storage? How do I do that?
This is not a thing Kubernetes supports since it would be entirely up to your underlying storage. The simple version would be a pod that mounts both and runs rsync I guess?
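A sketch of that rsync approach: a Job that mounts both claims (the names fast-pvc and cold-pvc are placeholders for your two PVCs) and copies the data across. Wrapped in a CronJob, it would give you the periodic backup you're after.
apiVersion: batch/v1
kind: Job
metadata:
  name: pvc-backup
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: rsync
          image: alpine:3.19        # assumes rsync can be installed at runtime
          command: ["sh", "-c", "apk add --no-cache rsync && rsync -a /source/ /backup/"]
          volumeMounts:
            - name: source
              mountPath: /source
            - name: backup
              mountPath: /backup
      volumes:
        - name: source
          persistentVolumeClaim:
            claimName: fast-pvc     # placeholder: claim on the fast SSD class
        - name: backup
          persistentVolumeClaim:
            claimName: cold-pvc     # placeholder: claim on the cold-storage class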
Cloning is supported with a different Storage Class
You need to use CSI Provisioning and apply something like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: clone-of-pvc-1
  namespace: myns
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: cloning
  resources:
    requests:
      storage: 5Gi
  dataSource:
    kind: PersistentVolumeClaim
    name: pvc-1
Full documentation

Can I combine StorageClass with PersistentVolume in GKE?

I'm fairly new to Kubernetes and find it difficult to get this working from the documentation. The Kubernetes docs say that a StorageClass contains the fields provisioner, parameters, and reclaimPolicy, which are used when a PersistentVolume belonging to the class needs to be dynamically provisioned. However, can I use a StorageClass with a PV (not dynamically allocated) to specify high-performance disk allocation such as SSD?
Without a StorageClass it worked fine for me.
The following is my manifest:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: gke-pv
  labels:
    app: test
spec:
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: gce-disk
    fsType: ext4
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: gke-pvc
  labels:
    app: test
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: ssd-sc
  resources:
    requests:
      storage: 2Gi
  selector:
    matchLabels:
      app: test
You need a StorageClass if the storage is to be provisioned dynamically.
If you are provisioning PersistentVolumes yourself, that is called static storage provisioning, and you don't need a StorageClass in that scenario.
The problem here is that statically provisioned PersistentVolumes don't have a StorageClass. However, GKE clusters are created with a standard StorageClass that is the default, so the PVC falls back to it and tries to dynamically provision a new volume.
The solution is to have the PVC request an empty storage class, which forces it to look at the statically provisioned PVs.
So you'd use a sequence like this to create a PV and then get it bound to a PVC:
Manually provision the SSD disk:
gcloud compute disks create --size=10GB --zone=[YOUR ZONE] --type=pd-ssd already-created-ssd-disk
Then apply a PV object that uses the statically provisioned disk, like so:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ssd-for-k8s-volume
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  gcePersistentDisk:
    pdName: already-created-ssd-disk
    fsType: ext4
Then, you can claim it with a PVC like this:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-demo
spec:
  storageClassName: ""
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
You could also use labels to refine which PVs are selected, of course, for example if you have some that are SSD and others that are regular spinning metal.
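For instance, a claim that only considers PVs you've labelled as SSD might look like this (a sketch; the disktype label is an assumption, something you would add to your own PVs, not a label GKE sets for you):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ssd-only
spec:
  storageClassName: ""        # still opt out of dynamic provisioning
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
  selector:
    matchLabels:
      disktype: ssd           # hypothetical label on your SSD PVs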
Note that the idea of using a StorageClass for static provisioning isn't really the right thing, since StorageClass is tied to how you describe storage for dynamic provisioning.

pod has unbound PersistentVolumeClaims

When I push my deployments, for some reason I'm getting this error on my pods:
pod has unbound PersistentVolumeClaims
Here are my YAML files below:
This is running locally, not on any cloud solution.
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  annotations:
    kompose.cmd: kompose convert
    kompose.version: 1.16.0 ()
  creationTimestamp: null
  labels:
    io.kompose.service: ckan
  name: ckan
spec:
  replicas: 1
  strategy: {}
  template:
    metadata:
      creationTimestamp: null
      labels:
        io.kompose.service: ckan
    spec:
      containers:
        - image: slckan/docker_ckan
          name: ckan
          ports:
            - containerPort: 5000
          resources: {}
          volumeMounts:
            - name: ckan-home
              mountPath: /usr/lib/ckan/
              subPath: ckan
      volumes:
        - name: ckan-home
          persistentVolumeClaim:
            claimName: ckan-pv-home-claim
      restartPolicy: Always
status: {}
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: ckan-pv-home-claim
  labels:
    io.kompose.service: ckan
spec:
  storageClassName: ckan-home-sc
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  volumeMode: Filesystem
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: ckan-home-sc
provisioner: kubernetes.io/no-provisioner
mountOptions:
  - dir_mode=0755
  - file_mode=0755
  - uid=1000
  - gid=1000
You have to define a PersistentVolume providing disk space to be consumed by the PersistentVolumeClaim.
When a storageClass is used, Kubernetes enables "Dynamic Volume Provisioning", which does not work with the local file system.
To solve your issue:
Provide a PersistentVolume fulfilling the constraints of the claim (a size >= 100Mi)
Remove the storageClass from the PersistentVolumeClaim or provide it with an empty value ("")
Remove the StorageClass from your cluster
How do these pieces play together?
When the deployment's state description is created, it is usually known what kind of storage (amount, speed, ...) the application will need.
To make a deployment versatile you'd like to avoid a hard dependency on storage. Kubernetes' volume-abstraction allows you to provide and consume storage in a standardized way.
The PersistentVolumeClaim is used to provide a storage-constraint alongside the deployment of an application.
The PersistentVolume offers cluster-wide volume instances ready to be consumed ("bound"). One PersistentVolume will be bound to one claim. But since multiple pods using that claim may run on multiple nodes, the volume may need to be accessible from multiple nodes.
A PersistentVolume without StorageClass is considered to be static.
"Dynamic Volume Provisioning" alongside with a StorageClass allows the cluster to provision PersistentVolumes on demand.
In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
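For illustration, a dynamic-provisioning StorageClass on a cloud backend could look roughly like this (a sketch using the in-tree GCE PD provisioner; it is not applicable to a purely local setup like the one in the question):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd   # a provisioner that supports dynamic provisioning
parameters:
  type: pd-ssd                      # provision SSD-backed disks
reclaimPolicy: Delete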
Example PersistentVolume
In order to find out how to specify things, you're best advised to take a look at the API for your Kubernetes version, so the following example is built from the API reference of K8S 1.17:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ckan-pv-home
  labels:
    type: local
spec:
  capacity:
    storage: 100Mi
  hostPath:
    path: "/mnt/data/ckan"
The PersistentVolumeSpec allows us to define multiple attributes.
I chose a hostPath volume which maps a local directory as content for the volume. The capacity allows the resource scheduler to recognize this volume as applicable in terms of resource needs.
Additional Resources:
Configure PersistentVolume Guide
If you're using the Rancher k3s Kubernetes distribution, set storageClassName to local-path as described in the docs:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: local-path-pvc
  namespace: default
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 2Gi
To use it on other distributions use https://github.com/rancher/local-path-provisioner
I ran into this issue, but I realized that I was creating my PVs with the "manual" StorageClass type.
Check what kind of storage class your pod expects:
Your PVC definition (volumeClaimTemplates) --> storageClassName: "standard"
Your PV --> spec --> storageClassName: "standard"
The two have to match.
In my case the problem was the wrong PersistentVolume name specified in the PersistentVolumeClaim declaration.
But there might be more reasons for it. Make sure that (see the sketch after this list):
The volumeName specified in the PVC matches the PV name
The storageClassName specified in the PVC matches the PV's storageClassName
Sufficient capacity is allocated to your resource
The access modes of your PV and PVC are consistent
The number of PVs matches the number of PVCs
For a detailed explanation, read this article.
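A minimal sketch illustrating the first two checks (the names my-pv and my-pvc are made up for the example):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: my-pv
spec:
  storageClassName: manual
  capacity:
    storage: 1Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: /mnt/data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: my-pvc
spec:
  volumeName: my-pv            # must match the PV name
  storageClassName: manual     # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce            # must be consistent with the PV
  resources:
    requests:
      storage: 1Gi             # must not exceed the PV's capacity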
We faced a very similar issue today. For us the problem was that there was no CSI driver installed on the nodes. To check the drivers installed, you can use this command:
kubectl get csidriver
Our managed Kubernetes clusters (v1.25) run in Google Cloud, so for us the solution was simply to enable the "Compute Engine persistent disk CSI Driver" feature.

Can one persistent volume be consumed by several persistent volume claims?

Is it correct to assume that one PV can be consumed by several PVCs, and that each pod instance needs its own PVC binding? I'm asking because I created a PV and then a PVC with different size requirements, such as:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8sdisk
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-xxxxxx
    fsType: ext4
---
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: couchbase-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
But when I use the PVC with the pod, it shows as 200GB available instead of the 5GB.
I'm sure I'm mixing things, but could not find a reasonable explanation.
When you have a PVC, it will look for a PV that satisfies its requirements, but unless the volume and claim use a multi-access mode (and only a limited set of backends support that, e.g. NFS; details at http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes), the PV will not be shared by multiple PVCs. Furthermore, the size in the PVC is not intended as a quota on the amount of data saved to the volume during the pod's life, but as a way to match a big-enough PV, and that's it.
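A quick way to see that binding behaviour with the manifests above (a sketch; output columns may differ slightly between Kubernetes versions):
kubectl get pv k8sdisk
kubectl get pvc couchbase-pvc
The claim requested 5Gi but binds to the whole 200Gi PV, so the CAPACITY column for the bound claim reports the PV's size rather than the 5Gi request.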