I am practicing Kubernetes storage. I don't understand why the PersistentVolumes created in step 2 have different storage sizes from the PersistentVolumeClaims the tutorial configures in step 3.
For example, in nfs-0001.yaml and nfs-0002.yaml the storage sizes are 2Gi and 5Gi:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0001
spec:
  capacity:
    storage: 2Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0001

apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-0002
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteOnce
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    server: 172.17.0.7
    path: /exports/data-0002
The examples in step 3 are pvc-mysql.yaml and pvc-http.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-mysql
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: claim-http
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
And when I check the PV and PVC:
master $ kubectl get pvc
NAME          STATUS   VOLUME     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
claim-http    Bound    nfs-0001   2Gi        RWO,RWX                       17m
claim-mysql   Bound    nfs-0002   5Gi        RWO,RWX                       17m
master $ kubectl get pv
NAME       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                 STORAGECLASS   REASON   AGE
nfs-0001   2Gi        RWO,RWX        Recycle          Bound    default/claim-http                            19m
nfs-0002   5Gi        RWO,RWX        Recycle          Bound    default/claim-mysql                           19m
Neither 1Gi nor 3Gi shows up in the terminal.
Question:
Where are the 1Gi and 3Gi?
If they are not used, is it safe to put an arbitrary number in storage in the PersistentVolumeClaim YAML?
You need to understand the difference between PVs and PVCs. A PVC is a declaration of storage that should at some point become available for an application to use; it is not the actual size of the volume allocated.
PVs are the actual volumes, already allocated on disk and ready to use. To use a PV, a user creates a PersistentVolumeClaim, which is simply a request for a PV. A claim must specify an access mode and a storage capacity; once a claim is created, a PV is automatically bound to it.
In your case, you have PVs of 2Gi and 5Gi, and you have created two PVCs requesting 3Gi and 1Gi with accessMode ReadWriteOnce, which means each PV can be bound to only one PVC.
The capacity of each available PV is larger than (or equal to) what was requested, so each PVC was bound to a larger PV: the 1Gi claim got the 2Gi volume and the 3Gi claim got the 5Gi volume.
PVC.spec.resources.requests.storage is the user's request for storage: "I want a 10 GiB volume." PV.spec.capacity is the actual size of the PV. A PVC can bind to a bigger PV when no smaller matching PV is available, so the user can actually get more than they asked for.
Similarly, dynamic provisioning typically works in bigger chunks. If a user asks for 0.5 GiB in a PVC, they will get a 1 GiB PV, because that is the smallest volume AWS can provision.
There is nothing wrong with that. Still, you should not put a random number in the PVC size; it should be calculated according to your application's needs and scaling.
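As a rough sketch of that dynamic-provisioning behaviour, assuming a cluster with the in-tree AWS EBS provisioner (the StorageClass name fast-ebs and claim name tiny-claim are made up for this example):

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ebs                    # hypothetical name, not from the question
provisioner: kubernetes.io/aws-ebs
parameters:
  type: gp2
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: tiny-claim                  # hypothetical name
spec:
  storageClassName: fast-ebs
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 512Mi                # asks for 0.5 GiB; the dynamically provisioned
                                    # PV will still be 1Gi, the smallest EBS volume size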
Related
I have 3 deployments, a-depl, b-depl, c-depl. Now each of these 3 deployments has a db deployment: a-db-depl, b-db-depl, c-db-depl.
Now I want to persist each of these dbs. Do I need to create a single PV for all or a PV for each of the deployments?
I know that PV <-> PVC is a 1-to-1 relation, but I don't know about Depl <-> PV.
Can someone please help?
As of now, I have no clue, so I am using a single PV for all of the db deployments:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: mongo-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Retain
  hostPath:
    path: "/mnt/data/mongo"
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongo-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
At any given time a PV can be bound to only one PVC, so for each of your PVCs you need to create a corresponding PV. To automate PV creation you can create a StorageClass and refer to that StorageClass in your PVCs; the StorageClass can then dynamically provision a PV for each PVC.
Whether multiple deployments can use the same PVC or PV depends on the accessModes of the PVC or PV (see the sketch after the list below):
ReadOnlyMany - the volume can be mounted read-only by many nodes
ReadWriteMany - the volume can be mounted as read-write by many nodes
ReadWriteOnce - the volume can be mounted as read-write by a single node
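A minimal sketch of the sharing case (the deployment name a-db-depl mirrors the question's naming; the image and mount path are assumptions), where a deployment mounts the mongo-pv-claim defined above; a second deployment could reference the same claimName only because the claim and volume are ReadWriteMany:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: a-db-depl                   # hypothetical, mirrors the question's naming
spec:
  replicas: 1
  selector:
    matchLabels:
      app: a-db
  template:
    metadata:
      labels:
        app: a-db
    spec:
      containers:
        - name: mongo
          image: mongo:6
          volumeMounts:
            - name: data
              mountPath: /data/db
      volumes:
        - name: data
          persistentVolumeClaim:
            claimName: mongo-pv-claim   # other deployments may reuse this claim only
                                        # with a ReadWriteMany-capable backend (e.g. NFS)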
How does one run multiple replicas of a pod and have each pod use its own storage volume?
Use a StatefulSet resource, which is specifically tailored to applications whose instances must be treated as non-fungible individuals, each with a stable name and state.
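A minimal sketch of that pattern, with assumed names (web, www) and storage class (standard); the volumeClaimTemplates section makes the StatefulSet create a separate PVC, and therefore a separate volume, for each replica:

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                         # hypothetical name
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: nginx
          image: nginx:1.25
          volumeMounts:
            - name: www
              mountPath: /usr/share/nginx/html
  volumeClaimTemplates:             # one PVC per replica: www-web-0, www-web-1, www-web-2
    - metadata:
        name: www
      spec:
        accessModes: ["ReadWriteOnce"]
        storageClassName: standard  # assumed storage class
        resources:
          requests:
            storage: 1Gi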
I'm trying to create a PersistentVolumeClaim, giving it a specific volumeName to use.
I use this code:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  namespace: zipkin
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: nfs-provisioner
  resources:
    requests:
      storage: 0.1Gi
  volumeName: "demo"
If I remove volumeName, the PVC is correctly bound; otherwise it remains in Pending status.
Why?
volumeName is the name of the PersistentVolume you want to use.
On GKE a PVC can automatically create a PV that it will bind to, or you can specify an existing PV's name using volumeName.
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-ciro
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: standard
  resources:
    requests:
      storage: 0.1Gi
  volumeName: demo
pv.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: demo
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: standard
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: /tmp
    server: 172.17.0.2
And the output will be:
$ kubectl get pv
NAME   CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM              STORAGECLASS   REASON   AGE
demo   5Gi        RWO            Recycle          Bound    default/pvc-ciro   standard                13s
$ kubectl get pvc
NAME       STATUS   VOLUME   CAPACITY   ACCESS MODES   STORAGECLASS   AGE
pvc-ciro   Bound    demo     5Gi        RWO            standard       8s
You can read more details in the Kubernetes documentation regarding Persistent Volumes.
I am trying to create multiple PVs and PVCs (one PVC for each PV) in a single namespace and it is not allowing me to do so. Is this expected behavior? I am using NFS.
NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM                                                  STORAGECLASS   REASON   AGE
nfs-office-tools-service-pv   70Gi       RWX            Retain           Bound       office-tools-service-ns/nfs-office-tools-service-pv   manual                  4d
nfs-perfqa-jenkins-pv         20Gi       RWX            Retain           Available                                                          manual                  8m
nfs-perfqa-pv                 2Gi        RWX            Retain           Bound       perfqa/nfs-perfqa-pvc                                  manual                  17d
When I create a new PVC for the newly created PV, it gives an error as below:
Below are the YAMLs for the PV and PVC:
PV.yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
  namespace: perfqa
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
PVC.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-perfqa-jenkins-pvc
  namespace: default
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteMany
  resources:
    requests:
      storage: 20Gi
Your cluster has a ResourceQuota or LimitRange with requests.storage set to 2Gi, so you cannot create a PVC with 20Gi.
First of all, note that a PersistentVolume is defined at the cluster level, not at the namespace level.
The correct PV definition is as below:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-perfqa-jenkins-pv
spec:
  storageClassName: manual
  capacity:
    storage: 20Gi
  accessModes:
    - ReadWriteMany
  hostPath:
    path: "/nfs_share/docker/test/jenkins"
There is no issue with the PV; it is created and available:
nfs-perfqa-jenkins-pv 20Gi RWX Retain Available
Also check for a ResourceQuota in the default namespace. You might have set the max storage limit to 2Gi (see the sketch below).
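A rough sketch of how to check for such a quota, and what a storage-limiting ResourceQuota looks like (the quota name storage-quota is hypothetical):

# list and inspect quotas in the namespace where the PVC is being created
kubectl get resourcequota -n default
kubectl describe resourcequota -n default

# a ResourceQuota like this would cap total PVC requests in the namespace at 2Gi
apiVersion: v1
kind: ResourceQuota
metadata:
  name: storage-quota               # hypothetical name
  namespace: default
spec:
  hard:
    requests.storage: 2Gi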
What specific changes need to be made to the yaml below in order to get the PersistentVolumeClaim to bind to the PersistentVolume?
An EC2 instance in the same VPC subnet as the Kubernetes worker nodes has an IP of 10.0.0.112 and has been configured to act as an NFS server on the /nfsfileshare path.
Creating the PersistentVolume
We created a PersistentVolume pv01 with pv-volume-network.yaml:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv01
spec:
  capacity:
    storage: 5Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  storageClassName: slow
  mountOptions:
    - hard
    - nfsvers=4.1
  nfs:
    path: "/nfsfileshare"
    server: "10.0.0.112"
and by typing:
kubectl create -f pv-volume-network.yaml
Then when we type kubectl get pv pv01, the pv01 PersistentVolume shows a STATUS of "Available".
Creating the PersistentVolumeClaim
Then we created a PersistentVolumeClaim named my-pv-claim with pv-claim.yaml:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: manual
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
And by typing:
kubectl create -f pv-claim.yaml
STATUS is Pending
But then when we type kubectl get pvc my-pv-claim, we see that the STATUS is Pending, and it remained Pending for as long as we continued to check back.
Note this question is different from this other question, because this problem persists even with quotes around the NFS IP and the path.
Why is this PVC not binding to the PV? What specific changes need to be made to resolve this?
I diagnosed the problem by typing kubectl describe pvc my-pv-claim and looking in the Events section of the results.
Then, based on the reported Events, I was able to fix this by changing storageClassName: manual to storageClassName: slow.
The problem was that the PVC's storageClassName did not match the class specified in the PV, which it must do for the claim to bind.
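For reference, a sketch of pv-claim.yaml after that change; only storageClassName differs from the original:

kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: my-pv-claim
spec:
  storageClassName: slow            # must match the PV's storageClassName
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi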
Is it correct to assume that one PV can be consumed by several PVCs and each pod instance needs one binding of PVC? I'm asking because I created a PV and then a PVC with different size requirements such as:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: k8sdisk
  labels:
    type: amazonEBS
spec:
  capacity:
    storage: 200Gi
  accessModes:
    - ReadWriteOnce
  awsElasticBlockStore:
    volumeID: vol-xxxxxx
    fsType: ext4
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: couchbase-pvc
  labels:
    type: amazonEBS
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
But when I use the PVC with the pod, it shows as 200GB available instead of the 5GB.
I'm sure I'm mixing things up, but I could not find a reasonable explanation.
When you have a PVC, it looks for a PV that satisfies its requirements, but unless the volume and claim are in a multi-access mode (and only a limited set of backends supports that, e.g. NFS; see http://kubernetes.io/docs/user-guide/persistent-volumes/#access-modes for details), a PV will not be shared by multiple PVCs. Furthermore, the size in the PVC is not intended as a quota on the amount of data written to the volume during the pod's life; it is only a way to match a big enough PV, and that's it.
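To make that concrete, here is a minimal sketch of a pod using the claim above (the pod name, image, and mount path are assumptions); running df -h on the mount inside the container reports roughly the PV's full 200Gi, not the 5Gi requested, because the claim size is only a matching criterion, not a quota:

apiVersion: v1
kind: Pod
metadata:
  name: pvc-size-test               # hypothetical name
spec:
  containers:
    - name: shell
      image: busybox:1.36
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /data          # `df -h /data` here shows ~200Gi, the whole EBS volume
  volumes:
    - name: data
      persistentVolumeClaim:
        claimName: couchbase-pvc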