Hi, I have a problem in Kubernetes. I install a deployment with Helm that consists of 2 pods which have to be placed on two nodes, and I want the persistent volumes used by the pods to be on the same nodes the pods are deployed on. Is this feasible with Helm? Thanks
I think you can use Local Persistent Volumes.
See more: local-pv, local-pv-comparison.
Usage of Local Persistent Volumes:
The local volumes must still first be set up and mounted on the local
node by an administrator. The administrator needs to mount the local
volume into a configurable “discovery directory” that the local volume
manager recognizes. Directories on a shared file system are supported,
but they must be bind-mounted into the discovery directory.
This local volume manager monitors the discovery directory, looking
for any new mount points. The manager creates a PersistentVolume
object with the appropriate storageClassName, path, nodeAffinity, and
capacity for any new mount point that it detects. These
PersistentVolume objects can eventually be claimed by
PersistentVolumeClaims, and then mounted in Pods.
After a Pod is done using the volume and deletes the
PersistentVolumeClaim for it, the local volume manager cleans up the
local mount by deleting all files from it, then deleting the
PersistentVolume object. This triggers the discovery cycle: a new
PersistentVolume is created for the volume and can be reused by a new
PersistentVolumeClaim.
Local volume can be requested in exactly the same way as any other
PersistentVolume type: through a PersistentVolumeClaim. Just specify
the appropriate StorageClassName for local volumes in the
PersistentVolumeClaim object, and the system takes care of the rest!
In your case I would manually create the StorageClass and PersistentVolume and use them during the chart installation.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 50Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: example-local-storage
  local:
    path: /mnt/disks/v1
  # local PVs must declare node affinity for the node that hosts the disk
  nodeAffinity:
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - node-1   # placeholder: the node where /mnt/disks/v1 exists
---
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: example-local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer   # recommended for local volumes
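For reference, the claim that your chart ends up rendering just needs to request that class; a rough sketch of such a PVC (the claim name is illustrative, your chart templates its own):
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-local-claim   # illustrative name
spec:
  accessModes:
  - ReadWriteOnce
  storageClassName: example-local-storage
  resources:
    requests:
      storage: 50Gi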
Then override the default PVC storageClassName configuration during helm install, like this:
$ helm install --name chart-name <chart> --set persistence.storageClass=example-local-storage
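Equivalently, assuming the chart reads the class from a persistence.storageClass value (as the --set flag above implies), a values file could carry the same override:
# values-local.yaml (hypothetical file name)
persistence:
  storageClass: example-local-storage
and install with $ helm install --name chart-name <chart> -f values-local.yaml.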
Take a look: using-local-pv, pod-local-pv, kubernetes-1.19-lv.
Related
My GKE deployment consists of N pods (possibly on different nodes) and a shared volume, which is dynamically provisioned by pd.csi.storage.gke.io and is a Persistent Disk in GCP. I need to initialize this disk with data before the pods go live.
My problem is I need to set accessModes to ReadOnlyMany and be able to mount it to all pods across different nodes in read-only mode, which I assume effectively would make it impossible to mount it in write mode to the initContainer.
Is there a solution to this issue? The answer to this question suggests a good solution for the case where each pod has its own disk mounted, but I need to have one disk shared among all pods since my data is quite large.
With GKE 1.21 and later, you can enable the managed Filestore CSI driver in your clusters. You can enable the driver for new clusters
gcloud container clusters create CLUSTER_NAME \
--addons=GcpFilestoreCsiDriver ...
or update existing clusters:
gcloud container clusters update CLUSTER_NAME \
--update-addons=GcpFilestoreCsiDriver=ENABLED
Once you've done that, create a storage class (or have your platform admin do it):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
name: filestore-example
provisioner: filestore.csi.storage.gke.io
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true
parameters:
tier: standard
network: default
After that, you can use PVCs and dynamic provisioning:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
name: podpvc
spec:
accessModes:
- ReadWriteMany
storageClassName: filestore-example
resources:
requests:
storage: 1Ti
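Since the claim is ReadWriteMany, pods on different nodes can mount it simultaneously; a minimal sketch of a consumer pod (pod and container names are illustrative), mounting it read-only as in your case:
apiVersion: v1
kind: Pod
metadata:
  name: reader               # illustrative name
spec:
  containers:
  - name: app
    image: busybox
    command: ["sleep", "3600"]
    volumeMounts:
    - name: shared-data
      mountPath: /data
      readOnly: true         # consumers only read the shared Filestore volume
  volumes:
  - name: shared-data
    persistentVolumeClaim:
      claimName: podpvc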
...I need to have one disk shared among all pods
You can try Filestore. First you create a Filestore instance and save your data on a Filestore volume. Then you install the Filestore CSI driver on your cluster. Finally, you share the data with the pods that need to read it by using a PersistentVolume that refers to the Filestore instance and volume above.
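As a rough sketch, a pre-provisioned PersistentVolume for an existing Filestore instance could look like this (location, instance name, share name and IP below are placeholders; the volumeHandle format follows the GKE Filestore CSI documentation for existing instances):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: existing-filestore-pv        # illustrative name
spec:
  storageClassName: ""
  capacity:
    storage: 1Ti
  accessModes:
  - ReadOnlyMany
  persistentVolumeReclaimPolicy: Retain
  csi:
    driver: filestore.csi.storage.gke.io
    # format: "modeInstance/<location>/<instance-name>/<share-name>"
    volumeHandle: "modeInstance/us-central1-c/my-instance/my-share"
    volumeAttributes:
      ip: 10.0.0.2                   # placeholder: the Filestore instance IP
      volume: my-share               # placeholder: the share name
A PVC with accessModes ReadOnlyMany and volumeName: existing-filestore-pv can then be mounted read-only by all the pods.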
I have a Kubernetes cron job in AWS EKS that requires a persistent volume, so this is roughly what I have:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: pvc-{{$.Release.Name}}-tmp
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 10Gi
Then it's mounted to a cronjob (the mount part is correct, as the following shows)
All are deployed with Helm, and a fresh deployment times out because the PVC remains in the Pending state with the message waiting for the first consumer to be created before binding. If during the deployment I create a new job based on the cron job, the PVC is immediately bound, and this and all subsequent deployments work as expected.
Is it possible to either make a PVC bind "eagerly", without a pod that requires it or, preferably, not to wait for it to get bound during the chart installation?
What is the storage class that you use? A storage class has a volumeBindingMode attribute that controls how and when the PV is dynamically created and bound.
volumeBindingMode can be either Immediate or WaitForFirstConsumer.
To check the storage class you can run kubectl get storageclass or kubectl describe storageclass. The default storage class is used if none is specified in the PVC definition.
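For eager binding, you would use a class with volumeBindingMode: Immediate; a minimal sketch for EBS-backed volumes on EKS (the class name is illustrative, and the in-tree kubernetes.io/aws-ebs provisioner is just one option alongside the EBS CSI driver):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: gp2-immediate             # illustrative name
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
volumeBindingMode: Immediate      # the PV is provisioned and bound as soon as the PVC is created
Note that keeping WaitForFirstConsumer and simply not blocking the release on PVC readiness (e.g. not using helm's --wait) is often the safer choice, since Immediate can provision the volume in a zone where no suitable node exists.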
References:
https://kubernetes.io/docs/concepts/storage/storage-classes/#volume-binding-mode
What do I need?
A deployment with 2 pods which read from the SAME volume (PV). The volume must be shared between the pods in RW mode.
Note: I already have a Rook Ceph cluster with a defined StorageClass "rook-cephfs" which allows this capability. This SC also has a Retain policy.
This is what I did:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: data-nginx
spec:
accessModes:
- "ReadWriteMany"
resources:
requests:
storage: "10Gi"
storageClassName: "rook-cephfs"
---
apiVersion: apps/v1
kind: Deployment
metadata:
name: nginx
spec:
replicas: 2
selector:
matchLabels:
app: nginx
template:
metadata:
labels:
app: nginx
spec:
serviceAccountName: default
containers:
- name: nginx
image: nginx:latest
imagePullPolicy: Always
ports:
- name: http
containerPort: 80
volumeMounts:
- name: pvc-data
mountPath: /data
volumes:
- name: pvc-data
persistentVolumeClaim:
claimName: data-nginx
It works! Both nginx containers share the volume.
Problem:
If I delete all the resources (except the PV) and recreate them, a NEW PV is created instead of the old one being reused. So, basically, the new volume is empty.
The OLD PV gets the status "Released" instead of "Available".
I realized that if I apply a patch to the PV to remove the claimRef.uid:
kubectl patch pv $PV_NAME --type json -p '[{"op": "remove", "path": "/spec/claimRef/uid"}]'
and then redeploy, it works.
But I don't want to do this manual step. I need this automated.
I also tried the same configuration with a StatefulSet and got the same problem.
Any solution?
Make sure to use reclaimPolicy: Retain in your StorageClass. It tells Kubernetes to keep the PV (and the data on it) after the claim is deleted, instead of deleting it.
Ref: https://kubernetes.io/docs/tasks/administer-cluster/change-pv-reclaim-policy/
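If you are defining the class yourself, a minimal sketch with Retain might look like this (the provisioner name and any CephFS parameters depend on your Rook installation and are assumptions here; keep whatever your existing rook-cephfs class already defines):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: rook-cephfs
provisioner: rook-ceph.cephfs.csi.ceph.com   # assumed provisioner name; check your existing class
reclaimPolicy: Retain                        # keep the PV and its data when the PVC is deleted
# keep the parameters / secret references your existing rook-cephfs class already uses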
But I don't want to do this manual step. I need this automated.
Based on the official documentation, it is unfortunately not possible to automate this. First, look at the Reclaim Policy:
PersistentVolumes that are dynamically created by a StorageClass will have the reclaim policy specified in the reclaimPolicy field of the class, which can be either Delete or Retain. If no reclaimPolicy is specified when a StorageClass object is created, it will default to Delete.
So, we have two supported options for the reclaim policy: Delete or Retain.
The Delete option is not for you because:
for volume plugins that support the Delete reclaim policy, deletion removes both the PersistentVolume object from Kubernetes, as well as the associated storage asset in the external infrastructure, such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume. Volumes that were dynamically provisioned inherit the reclaim policy of their StorageClass, which defaults to Delete. The administrator should configure the StorageClass according to users' expectations; otherwise, the PV must be edited or patched after it is created.
The Retain option allows for manual reclamation of the resource:
When the PersistentVolumeClaim is deleted, the PersistentVolume still exists and the volume is considered "released". But it is not yet available for another claim because the previous claimant's data remains on the volume. An administrator can manually reclaim the volume with the following steps.
Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or Cinder volume) still exists after the PV is deleted.
Manually clean up the data on the associated storage asset accordingly.
Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the storage asset definition.
Let me put you in context. I have a pod with a configuration that looks close to this:
spec:
nodeSets:
- name: default
count: 3
volumeClaimTemplates:
- metadata:
name: elasticsearch-data
spec:
accessModes:
- ReadWriteOnce
resources:
requests:
storage: 5Gi
storageClassName: repd-ssd-xfs
I also have my StorageClass
apiVersion: ...
kind: StorageClass
metadata:
name: repd-ssd-xfs
parameters:
type: pd-ssd
fsType: xfs
replication-type: regional-pd
zones: us-central1-a, us-central1-b, us-central1-f
reclaimPolicy: Retain
volumeBindingMode: Immediate
I delete the namespace of the pod and then apply it again, and I notice that the pod binds to a new PVC/PV; the PV it was using before is left in the Released state. My question is: is there any way to tell the pod to use my old volume? The StorageClass policy is Retain, but does that mean I can still use a PV whose status is Released?
You can explicitly specify the persistent volume claim name in the pod spec if it's a deployment or a standalone pod like the code below:
volumes:
- name: task-pv-storage
persistentVolumeClaim:
claimName: task-pv-claim
However, if it's a StatefulSet, it will automatically attach to the same PVC every time the pod restarts. Hope this helps.
In addition to the answer provided by @shashank tyagi:
Have a look at the documentation Persistent Volumes and at the section Retain you can find:
When the PersistentVolumeClaim is deleted, the PersistentVolume still
exists and the volume is considered “released”. But it is not yet
available for another claim because the previous claimant’s data
remains on the volume. An administrator can manually reclaim the
volume with the following steps.
Delete the PersistentVolume. The associated storage asset in external infrastructure (such as an AWS EBS, GCE PD, Azure Disk, or
Cinder volume) still exists after the PV is deleted.
Manually clean up the data on the associated storage asset accordingly.
Manually delete the associated storage asset, or if you want to reuse the same storage asset, create a new PersistentVolume with the
storage asset definition.
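To illustrate that last step for your case, a rough sketch of a PersistentVolume that points back at the retained disk (the disk name below is a placeholder, and for a regional disk you may also need the matching zone topology on the PV):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elasticsearch-data-recovered   # illustrative name
spec:
  storageClassName: repd-ssd-xfs
  capacity:
    storage: 5Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  gcePersistentDisk:
    pdName: existing-disk              # placeholder: the name of the retained PD in GCP
    fsType: xfs
A claim can then be pointed at it explicitly via spec.volumeName: elasticsearch-data-recovered, so the old data becomes visible again.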
It could be helpful to check the documentation Persistent volumes with Persistent Disks and this example How to set ReclaimPolicy for PersistentVolumeClaim.
UPDATE Have a look at the article Persistent Volume Claim for StatefulSet.
I am very new to Linux, Docker and Kubernetes. I need to set up an on-premises POC to showcase BDC.
What I have installed.
1. Ubuntu 19.10
2. Kubernetes Cluster
3. Docker
4. NFS
5. Settings and prerequisites but obviously missing stuff.
This has been done with stitched-together tutorials. I am stuck on "azdata bdc create". Error below.
Scheduling error on pod PVC (screenshot).
Some more information (screenshots): NFS information and storage class info.
More info 2019-12-20: PVs & PVCs are bound on the NFS side.
Dynamic Volume Provisioning, together with a StorageClass, allows the cluster to provision PersistentVolumes on demand. In order to make that work, the given storage provider must support provisioning - this allows the cluster to request the provisioning of a "new" PersistentVolume when an unsatisfied PersistentVolumeClaim pops up.
First make sure you have defined your StorageClass properly. You have defined the nfs-dynamic class, but it is not set as the default storage class, which is why your claims cannot bind volumes through it. You have two options:
1. Mark it as the default storage class by executing the command below:
$ kubectl patch storageclass <your-class-name> -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'
2. The other option is to specify, in the PVC configuration file, the StorageClass you have defined.
Here is an example configuration of such a file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
name: myclaim
spec:
accessModes:
- ReadWriteOnce
volumeMode: Filesystem
resources:
requests:
storage: 8Gi
storageClassName: slow
selector:
matchLabels:
release: "stable"
matchExpressions:
- {key: environment, operator: In, values: [dev]}
Simply set storageClassName: nfs-dynamic in your claim (instead of slow in the example above).
Then make sure you have followed steps from this instruction: nfs-kubernetes.