We are deploying Vault as a Helm-based deployment, and we have configured/enabled the server dataStorage mountPath as well.
After deployment, a new PVC is created and bound to a local-storage PV:
data-vault-0 Bound local-pv-f69fbb0 10Gi RWO local-storage 69s
local-pv-f69fbb0 10Gi RWO Delete Bound vault/data-vault-0 local-storage 2m55s
But when we delete the pod vault-0 and it comes back, we lose all the data (secrets, configurations, policies) we created in Vault, even though the PV and PVC remain intact after deleting the pod.
vault-0 server config
Mlock: supported: true, enabled: false
Recovery Mode: false
Storage: file
Version: Vault v1.11.3, built 2022-08-26T10:27:10Z
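For context, the persistence side was enabled roughly like this (a values.yaml sketch assuming the official hashicorp/vault chart; the storageClass value is the local-storage class from above):

server:
  dataStorage:
    enabled: true              # creates the data-vault-0 PVC
    size: 10Gi
    mountPath: /vault/data     # where the file backend writes
    storageClass: local-storage
  standalone:
    enabled: true
    config: |
      storage "file" {
        path = "/vault/data"   # must match dataStorage.mountPath above
      }

If the storage stanza points somewhere other than the mounted path, Vault writes to the pod's ephemeral filesystem, which would explain data disappearing on restart even though the PV and PVC survive.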
I am getting the following error while expanding a PVC (kubectl edit pvc csi-expand-pvc), and I have no idea why:
Event occurred" object="default/csi-expand-pvc" kind="PersistentVolumeClaim" apiVersion="v1" type="Warning" reason="ExternalExpanding" message="Ignoring the PVC: didn't find a plugin capable of expanding the volume; waiting for an external controller to process this PVC.
Here is the spec for my PVC.
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: csi-expand-pvc
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
  storageClassName: ab-storage
#kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
csi-expand-pvc Bound pvc-675765658 100Gi RWO ab-storage 33m
The feature gates ExpandInUsePersistentVolumes and ExpandCSIVolumes are enabled.
allowVolumeExpansion is also set to true on the storage class.
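For reference, that setting sits on the StorageClass object itself; a minimal sketch (the provisioner name below is a placeholder, not taken from the question):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: ab-storage
provisioner: example.csi.vendor.com   # placeholder - your actual CSI driver name
allowVolumeExpansion: true            # without this, resize requests are rejected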
setup details
kubeadm version: &version.Info{Major:"1", Minor:"23", GitVersion:"**v1.23.5**", GitCommit:"c285e781331a3785a7f436042c65c5641ce8a9e9", GitTreeState:"clean", BuildDate:"2022-03-16T15:57:37Z", GoVersion:"go1.17.8", Compiler:"gc", Platform:"linux/amd64"}
quay.io/k8scsi/csi-provisioner:v2.1.2
quay.io/k8scsi/csi-attacher:v3.1.0
quay.io/k8scsi/csi-snapshotter:v4.0.0
quay.io/k8scsi/csi-resizer:v1.1.0
As per the above error, the existing storage provisioner is not capable of expanding the PVC.
Check the storage provisioner details in the description of the storage class your PVC is using.
Then check whether the current provisioner has the capability to expand the volume.
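A quick way to do both checks (ab-storage is the StorageClass from the question; the resizer deployment name will differ per cluster):

# which provisioner backs the StorageClass, and is expansion allowed?
kubectl get sc ab-storage -o jsonpath='{.provisioner}{"\n"}{.allowVolumeExpansion}{"\n"}'

# is an external csi-resizer sidecar actually running for that driver?
kubectl get pods -A | grep csi-resizer
kubectl describe sc ab-storage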
We are trying to configure local storage in Rancher, and the storage provisioner was configured successfully.
But when I create a PVC using the local-storage StorageClass, it goes into Pending state with the error below:
Normal ExternalProvisioning 4m31s (x62 over 19m) persistentvolume-controller waiting for a volume to be created, either by external provisioner "rancher.io/local-path" or manually created by system administrator
Normal Provisioning 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 External provisioner is provisioning volume for claim "local-path-storage/test-pod-pvc-local"
Warning ProvisioningFailed 3m47s (x7 over 19m) rancher.io/local-path_local-path-provisioner-5f8f96cb66-8s9dj_f1bdad61-eb48-4a7a-918c-6827e75d6a27 failed to provision volume with StorageClass "local-path": configuration error, no node was specified
[root@n01-deployer local]#
sc configuration
[root@n01-deployer local]# kubectl edit sc local-path
# Please edit the object below. Lines beginning with a '#' will be ignored,
# and an empty file will abort the edit. If an error occurs while saving this file will be
# reopened with the relevant failures.
#
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"local-path"},"provisioner":"rancher.io/local-path","reclaimPolicy":"Delete","volumeBindingMode":"Immediate"}
  creationTimestamp: "2022-02-07T16:12:58Z"
  name: local-path
  resourceVersion: "1501275"
  uid: e8060018-e4a8-47f9-8dd4-c63f28eef3f2
provisioner: rancher.io/local-path
reclaimPolicy: Delete
volumeBindingMode: Immediate
PVC configuration
[root@n01-deployer local]# cat pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: local-path-storage
  name: test-pod-pvc-local-1
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: local-path
  resources:
    requests:
      storage: 5Gi
  volumeMode: Filesystem
I have mounted the local volume on all the worker nodes, but my PVC is still not getting created. Can someone please help me solve this issue?
The key to your problem was updating PSP.
I would like to add something about PSP:
According to this documentation and this blog:
As of Kubernetes version 1.21, PodSecurityPolicy (beta) is deprecated. The Kubernetes project aims to shut the feature down in version 1.25.
However, I haven't found any information about this in Rancher's case (the documentation is up to date).
Rancher ships with two default Pod Security Policies (PSPs): the restricted and unrestricted policies.
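If you want to see which PSP the provisioner pod was actually admitted under, the admission controller records it as an annotation on the pod (a sketch; the pod name is the one from the events above and the namespace is assumed to be local-path-storage):

# list the PSPs in the cluster
kubectl get psp

# read the PSP the pod was admitted with
kubectl -n local-path-storage get pod local-path-provisioner-5f8f96cb66-8s9dj -o yaml | grep kubernetes.io/psp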
See also:
The benefits of Pod Security Policy
Secure Kubernetes cluster PSP
Pod Security Policies
I deployed a PVC, which dynamically created a PV.
After that I deleted the PVC and now my PV looks like below:
PS Kubernetes> kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
pvc-1b59942c-eb26-4603-b78e-7054d9418da6 2G RWX Retain Released default/db-pvc hostpath 26h
When I recreate my PVC, that creates a new PV.
Is there a way to reattach the existing PV to my PVC?
Is there a way to do it automatically ?
I tried to attach the PV to my PVC using the "volumeName" option, but it did not work:
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
db-pvc Pending pvc-1b59942c-eb26-4603-b78e-7054d9418da6 0 hostpath 77s
When a PVC is deleted, the PV stays in the "Released" state with the claimRef uid of the deleted PVC.
To reuse a PV, you need to delete the claimRef to make it go to the "Available" state.
You may either edit the PV and manually delete the claimRef section, or run a patch command like the one below:
kubectl patch pv pvc-1b59942c-eb26-4603-b78e-7054d9418da6 --type json -p '[{"op": "remove", "path": "/spec/claimRef"}]'
Subsequently, you recreate the PVC.
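Once the PV is back in the "Available" state, a claim along these lines should bind straight to it (a sketch built from the values shown above; the request must not exceed the PV's 2G and the access mode must match its RWX):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: db-pvc
spec:
  storageClassName: hostpath
  volumeName: pvc-1b59942c-eb26-4603-b78e-7054d9418da6   # pin the claim to the released PV
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2G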
If you are on GKE and your PV is still present, you can create the PVC using:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-1b59942c-eb26-4603-b78e-7054d9418da6
spec:
  storageClassName: default
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 2G
It seems that while deleting the PV and PVC I messed something up. I can delete the PVC without issues, but I cannot delete the PV that has pv-protection on it. While deleting the PV and PVC earlier, I pressed CTRL+C since it was taking time to delete, and I also deleted the StorageClass before deleting the PVC. I don't remember the storage class that was used for creating the PVC.
In this post, it says that updating the pvc-protection finalizer to null will help remove the PVC, but I need to delete the PV, which has pv-protection. Below is the describe output of the PV.
~/github/vault-operator# kubectl describe pv pv-hostpath
Name: pv-hostpath
Labels: type=local
Annotations: pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pv-protection]
StorageClass: manual
Status: Terminating (lasts <invalid>)
Claim: poc-namespace/pvc-hostpath
Reclaim Policy: Retain
Access Modes: RWO
Capacity: 1Gi
Node Affinity: <none>
Message:
Source:
Type: HostPath (bare host directory volume)
Path: /kube
HostPathType:
Events: <none>
~/github/vault-operator# kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/pv-hostpath 1Gi RWO Retain Terminating poc-namespace/pvc-hostpath manual 11d
The question is: how can I delete the PV that was not deleted properly, and what could be my issue?
In this scenario, you have a PVC (poc-namespace/pvc-hostpath) that is preventing your PV from being deleted. Delete the PVC, and you can delete the PV.
Generally speaking, the reclaim policy of a dynamically provisioned PV is Delete by default, so when you delete the PVC, the PV bound to it is deleted automatically as well.
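Separately, if a PV ever stays stuck in Terminating after its claim is gone, a commonly used workaround (not part of the answer above) is to clear the pv-protection finalizer by hand, once you are sure nothing still references the volume:

# remove the finalizer so the pending delete can complete
kubectl patch pv pv-hostpath --type merge -p '{"metadata":{"finalizers":null}}'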
Your storageClass probably was this one (or similar), from rancher. It is a hostPath based one, meaning it maps your container volume to the host machine.
I am trying to deploy a PersistentVolume for 3 pods to work on, and I want to use the cluster's node storage, i.e. not external storage like an EBS spin-off.
To achieve the above I did the following experiments:
1) I applied only the PVC resource defined below:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
This spins up storage via the default StorageClass, which in my case was a DigitalOcean volume. So it created a 1Gi volume.
2) Created a PV resource and a PVC resource like below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
    - ReadWriteOnce
  hostPath:
    path: "/mnt/data"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  creationTimestamp: null
  labels:
    io.kompose.service: pv1
  name: pv1
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
status: {}
After this, I see my claim is bound.
pavan@p1:~$ kubectl get pvc
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
pv1 Bound task-pv-volume 10Gi RWO manual 2m5s
pavan@p1:~$ kubectl get pv
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 118m
pavan@p1:~$ kubectl describe pvc
Name: pv1
Namespace: default
StorageClass: manual
Status: Bound
Volume: task-pv-volume
Labels: io.kompose.service=pv1
Annotations: kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"PersistentVolumeClaim","metadata":{"annotations":{},"creationTimestamp":null,"labels":{"io.kompose.service":"mo...
pv.kubernetes.io/bind-completed: yes
pv.kubernetes.io/bound-by-controller: yes
Finalizers: [kubernetes.io/pvc-protection]
Capacity: 10Gi
Access Modes: RWO
VolumeMode: Filesystem
Mounted By: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning ProvisioningFailed 28s (x8 over 2m2s) persistentvolume-controller storageclass.storage.k8s.io "manual" not found
Below are my questions that I am hoping to get answers/pointers to:
The above warning says the storage class could not be found; do I need to create one? If so, can you tell me why and how, or give any pointer? (Somehow this link misses stating that: https://kubernetes.io/docs/tasks/run-application/run-single-instance-stateful-application/)
Notice the PV has a storage capacity of 10Gi and the PVC a requested capacity of 1Gi, but the PVC was still bound with 10Gi capacity. Can't I share the same PV capacity with other PVCs?
For question 2): if I have to create different PVs for different PVCs with the required capacity, do I have to create a StorageClass as well? Or the same storage class and use selectors to select the corresponding PV?
I was trying to reproduce all behavior to answer all your questions. However, I don't have access to DigitalOcean, so I tested it on GKE.
The above warning says the storage class could not be found; do I need to create one?
According to the documentation and best practices, it is highly recommended to create a StorageClass and later create PVs / PVCs based on it. However, there is something called "manual provisioning", which is what you did in this case.
Manual provisioning is when you manually create a PV first, and then a PVC with a matching spec.storageClassName field. Examples:
If you create a PVC with no default StorageClass in the cluster, no matching PV, and no storageClassName parameter (afaik kubeadm does not provide a default StorageClass), the PVC will be stuck Pending with the event: no persistent volumes available for this claim and no storage class is set.
If you create a PVC on a cluster that has a default StorageClass but you omit the storageClassName parameter, it will be created based on the default StorageClass.
If you create a PVC with a storageClassName parameter that has no matching StorageClass (somewhere in the cloud, Minikube or kubeadm), the PVC will also be Pending, with the warning: storageclass.storage.k8s.io "manual" not found.
However, if you then create a PV with the same storageClassName parameter, the PVC will be bound in a while.
Example:
$ kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Available manual 4s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Pending manual 4m12s
...
kubectl get pv,pvc
NAME CAPACITY ACCESS MODES RECLAIM POLICY STATUS CLAIM STORAGECLASS REASON AGE
persistentvolume/task-pv-volume 10Gi RWO Retain Bound default/pv1 manual 9s
NAME STATUS VOLUME CAPACITY ACCESS MODES STORAGECLASS AGE
persistentvolumeclaim/pv1 Bound task-pv-volume 10Gi RWO manual 4m17s
The disadvantage of manual provisioning is that you have to create a PV for each PVC. If you use a StorageClass, you can just create the PVC.
If so, can you tell me why and how, or give any pointer?
You can use the documentation examples or check here. As you are using a cloud provider with a default StorageClass, you can export it to YAML with:
$ kubectl get sc -oyaml >> storageclass.yaml
Or if you have more than one, you have to specify which one. StorageClass names can be obtained with $ kubectl get sc.
Later you can refer to the K8s API to customize your StorageClass.
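A minimal StorageClass manifest to start from could look like this (a sketch; the provisioner must be whatever CSI driver your cluster actually runs - dobs.csi.digitalocean.com is an assumption for DigitalOcean):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: my-block-storage                   # hypothetical name
provisioner: dobs.csi.digitalocean.com     # assumption - replace with your cluster's driver
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
allowVolumeExpansion: true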
Notice the PV has a storage capacity of 10Gi and the PVC a requested capacity of 1Gi, but the PVC was still bound with 10Gi capacity?
You manually created a PV with 10Gi and the PVC requested 1Gi. As PVC and PV bind 1:1, the PVC searched for a PV which meets all its conditions and bound to it. The PVC (pv1) requested 1Gi and the PV (task-pv-volume) met those requirements, so Kubernetes bound them. Unfortunately, much of the space is wasted in this case.
Can't I share the same PV capacity with other PVCs?
Unfortunately, you cannot bind 2 PVCs to 1 PV, as the relationship between PVC and PV is 1:1, but you can configure many pods/deployments to use the same PVC.
I can advise you to look at this Stack Overflow case, as it explains the AccessMode specifics very well.
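As a quick illustration of the "many pods, one PVC" point, both pods below mount the same claim (a sketch; pod names are hypothetical, and with ReadWriteOnce both pods have to be scheduled to the same node):

apiVersion: v1
kind: Pod
metadata:
  name: shared-pvc-pod-a      # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: pv1        # the PVC from the question
---
apiVersion: v1
kind: Pod
metadata:
  name: shared-pvc-pod-b      # hypothetical
spec:
  containers:
    - name: app
      image: busybox
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
  volumes:
    - name: shared
      persistentVolumeClaim:
        claimName: pv1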
If I have to create different PVs for different PVCs with the required capacity, do I have to create a StorageClass as well? Or the same storage class and use selectors to select the corresponding PV?
As I mentioned before, if you manually create a PV with a specific size and bind a PVC to it that requests less, the extra space will be wasted. So you have to create the PV and PVC with the same resource request, or let the StorageClass size the storage based on the PVC request.
Yes, you have to create a storage class, check, but I guess DigitalOcean provides a default storage class; you can check it with:
kubectl get storageclasses
You can share one PV, but only with read-only access; if you need write access for all pods, you have to create multiple PVs, check
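For the read-only sharing case, the claim (and the PV backing it) has to list ReadOnlyMany; a sketch, with hypothetical names:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: shared-ro-claim        # hypothetical
spec:
  storageClassName: manual
  accessModes:
    - ReadOnlyMany             # many nodes may mount the volume read-only
  resources:
    requests:
      storage: 1Gi

Pods then reference it with persistentVolumeClaim.readOnly: true (or readOnly: true on the volumeMount).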