0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims - kubernetes

As the documentation states:
For each VolumeClaimTemplate entry defined in a StatefulSet, each Pod
receives one PersistentVolumeClaim. In the nginx example above, each
Pod receives a single PersistentVolume with a StorageClass of
my-storage-class and 1 GiB of provisioned storage. If no StorageClass
is specified, then the default StorageClass will be used. When a Pod
is (re)scheduled onto a node, its volumeMounts mount the
PersistentVolumes associated with its PersistentVolumeClaims. Note
that the PersistentVolumes associated with the Pods' PersistentVolumeClaims
are not deleted when the Pods or StatefulSet are deleted. This
must be done manually.
The part I'm interested in is this: "If no StorageClass is specified, then the default StorageClass will be used."
I create a StatefulSet like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  namespace: ches
  name: ches
spec:
  serviceName: ches
  replicas: 1
  selector:
    matchLabels:
      app: ches
  template:
    metadata:
      labels:
        app: ches
    spec:
      serviceAccountName: ches-serviceaccount
      nodeSelector:
        ches-worker: "true"
      volumes:
      - name: data
        hostPath:
          path: /data/test
      containers:
      - name: ches
        image: [here I have the repo]
        imagePullPolicy: Always
        securityContext:
          privileged: true
        args:
        - server
        - --console-address
        - :9011
        - /data
        env:
        - name: MINIO_ACCESS_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: access-key
        - name: MINIO_SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: ches-keys
              key: secret-key
        ports:
        - containerPort: 9000
          hostPort: 9011
        resources:
          limits:
            cpu: 100m
            memory: 200Mi
        volumeMounts:
        - name: data
          mountPath: /data
      imagePullSecrets:
      - name: edge-storage-token
  volumeClaimTemplates:
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi
Of course, I have already created the secrets, imagePullSecrets, etc., and I have labeled the node as ches-worker.
When I apply the YAML file, the pod stays in Pending status and kubectl describe pod ches-0 -n ches gives the following error:
Warning  FailedScheduling  6s  default-scheduler  0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims. preemption: 0/1 nodes are available: 1 Preemption is not helpful for scheduling
Am I missing something here?

You need to create a PV in order to get a PVC bound. If you want PVs to be created automatically from PVCs, you need a provisioner installed in your cluster.
First create a PV with at least the amount of space needed by your PVC, for example as sketched below. Then you can apply your deployment YAML, which contains the PVC.
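A minimal hostPath PV that would satisfy the 1Gi claim above could look like this (a sketch only; the PV name and host path are illustrative and must be adapted to your node):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: ches-data-pv            # illustrative name
spec:
  capacity:
    storage: 1Gi                # at least the size requested by the PVC
  accessModes:
  - ReadWriteOnce               # must match the PVC's access mode
  hostPath:
    path: /data/test            # illustrative path on the node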

K3s, when installed, also ships a storage class and marks it as the default.
Check with kubectl get storageclass:
NAME         PROVISIONER             RECLAIMPOLICY   VOLUMEBINDINGMODE      ALLOWVOLUMEEXPANSION   AGE
local-path   rancher.io/local-path   Delete          WaitForFirstConsumer   false                  8s
A vanilla Kubernetes cluster, on the other hand, does not come with a default storage class.
In order to solve the problem:
Download the rancher.io/local-path storage class:
kubectl apply -f https://raw.githubusercontent.com/rancher/local-path-provisioner/master/deploy/local-path-storage.yaml
Check with kubectl get storageclass
Make this storage class (local-path) the default:
kubectl patch storageclass local-path -p '{"metadata": {"annotations":{"storageclass.kubernetes.io/is-default-class":"true"}}}'

Related

microk8s-hostpath does not create PV for a claim

I am trying to use the MicroK8s storage addon, but my PVC and pod are stuck at Pending and I don't know what is wrong. I am also using the "registry" addon, which uses the same storage, and that one works without a problem.
FYI:
I have already restarted MicroK8s multiple times and even completely deleted and reinstalled it, but the problem remained.
Yaml files:
# =================== pvc.yaml
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: wws-registry-claim
  spec:
    volumeName: registry-pvc
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: microk8s-hostpath
# =================== deployment.yaml (just spec section)
spec:
  serviceName: registry
  replicas: 1
  selector:
    matchLabels:
      io.kompose.service: registry
  template:
    metadata:
      labels:
        io.kompose.service: registry
    spec:
      containers:
      - image: {{ .Values.image }}
        name: registry-master
        ports:
        - containerPort: 28015
        - containerPort: 29015
        - containerPort: 8080
        resources:
          requests:
            cpu: {{ .Values.request_cpu }}
            memory: {{ .Values.request_memory }}
          limits:
            cpu: {{ .Values.limit_cpu }}
            memory: {{ .Values.limit_memory }}
        volumeMounts:
        - mountPath: /data
          name: rdb-local-data
        env:
        - name: RUN_ENV
          value: 'kubernetes'
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
      volumes:
      - name: rdb-local-data
        persistentVolumeClaim:
          claimName: wws-registry-claim
Cluster info:
$ kubectl get pvc -A
NAMESPACE            NAME                 STATUS    VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS        AGE
container-registry   registry-claim       Bound     pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            microk8s-hostpath   56m
default              wws-registry-claim   Pending   registry-pvc                               0                         microk8s-hostpath   23m

$ kubectl get pv -A
NAME                                       CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                               STORAGECLASS        REASON   AGE
pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca   20Gi       RWX            Delete           Bound    container-registry/registry-claim   microk8s-hostpath            56m

$ kubectl get pods -n kube-system
NAME                                    READY   STATUS    RESTARTS   AGE
coredns-9b8997588-vk5vt                 1/1     Running   0          57m
hostpath-provisioner-7b9cb5cdb4-wxcp6   1/1     Running   0          57m
metrics-server-v0.2.1-598c8978c-74krr   2/2     Running   0          57m
tiller-deploy-77855d9dcf-4cvsv          1/1     Running   0          46m
$ kubectl -n kube-system logs hostpath-provisioner-7b9cb5cdb4-wxcp6
I0322 12:31:31.231110 1 controller.go:293] Starting provisioner controller 87fc12df-8b0a-11eb-b910-ee8a00c41384!
I0322 12:31:31.231963 1 controller.go:893] scheduleOperation[lock-provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.235618 1 leaderelection.go:154] attempting to acquire leader lease...
I0322 12:31:31.237785 1 leaderelection.go:176] successfully acquired lease to provision for pvc container-registry/registry-claim
I0322 12:31:31.237841 1 controller.go:893] scheduleOperation[provision-container-registry/registry-claim[dfef8e65-0618-4980-8b3c-e6e9efc5b0ca]]
I0322 12:31:31.239011 1 hostpath-provisioner.go:86] creating backing directory: /var/snap/microk8s/common/default-storage/container-registry-registry-claim-pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca
I0322 12:31:31.239102 1 controller.go:627] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" created
I0322 12:31:31.244798 1 controller.go:644] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" for claim "container-registry/registry-claim" saved
I0322 12:31:31.244813 1 controller.go:680] volume "pvc-dfef8e65-0618-4980-8b3c-e6e9efc5b0ca" provisioned for claim "container-registry/registry-claim"
I0322 12:31:33.243345 1 leaderelection.go:196] stopped trying to renew lease to provision for pvc container-registry/registry-claim, task succeeded
$ kubectl get sc
NAME                PROVISIONER            AGE
microk8s-hostpath   microk8s.io/hostpath   169m
$ kubectl get sc -o yaml
apiVersion: v1
items:
- apiVersion: storage.k8s.io/v1
  kind: StorageClass
  metadata:
    annotations:
      kubectl.kubernetes.io/last-applied-configuration: |
        {"apiVersion":"storage.k8s.io/v1","kind":"StorageClass","metadata":{"annotations":{},"name":"microk8s-hostpath"},"provisioner":"microk8s.io/hostpath"}
    creationTimestamp: "2021-03-22T12:31:25Z"
    name: microk8s-hostpath
    resourceVersion: "2845"
    selfLink: /apis/storage.k8s.io/v1/storageclasses/microk8s-hostpath
    uid: e94b5653-e261-4e1f-b646-e272e0c8c493
  provisioner: microk8s.io/hostpath
  reclaimPolicy: Delete
  volumeBindingMode: Immediate
kind: List
metadata:
  resourceVersion: ""
  selfLink: ""
Microk8s inspect:
$ microk8s.inspect
Inspecting services
Service snap.microk8s.daemon-cluster-agent is running
Service snap.microk8s.daemon-flanneld is running
Service snap.microk8s.daemon-containerd is running
Service snap.microk8s.daemon-apiserver is running
Service snap.microk8s.daemon-apiserver-kicker is running
Service snap.microk8s.daemon-proxy is running
Service snap.microk8s.daemon-kubelet is running
Service snap.microk8s.daemon-scheduler is running
Service snap.microk8s.daemon-controller-manager is running
Service snap.microk8s.daemon-etcd is running
Copy service arguments to the final report tarball
Inspecting AppArmor configuration
Gathering system information
Copy processes list to the final report tarball
Copy snap list to the final report tarball
Copy VM name (or none) to the final report tarball
Copy disk usage information to the final report tarball
Copy memory usage information to the final report tarball
Copy server uptime to the final report tarball
Copy current linux distribution to the final report tarball
Copy openSSL information to the final report tarball
Copy network configuration to the final report tarball
Inspecting kubernetes cluster
Inspect kubernetes cluster
WARNING: Docker is installed.
Add the following lines to /etc/docker/daemon.json:
{
"insecure-registries" : ["localhost:32000"]
}
and then restart docker with: sudo systemctl restart docker
Building the report tarball
Report tarball is at /var/snap/microk8s/1671/inspection-report-20210322_143034.tar.gz
I found the problem. Since the hostpath-provisioner takes care of creating the PV, we should not set the volumeName field in our PVC YAML file. When I removed that field, the provisioner could create a PV and bind my PVC to it, and now my pod has started.
Now my PVC is:
apiVersion: v1
kind: List
items:
- apiVersion: v1
  kind: PersistentVolumeClaim
  metadata:
    name: wws-registry-claim
  spec:
    # volumeName: registry-pvc
    accessModes:
    - ReadWriteOnce
    resources:
      requests:
        storage: 1Gi
    storageClassName: microk8s-hostpath
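After applying the fixed claim, the PVC should bind to a dynamically provisioned volume. A quick way to confirm (the volume name and age shown are illustrative):
$ kubectl get pvc wws-registry-claim
NAME                 STATUS   VOLUME      CAPACITY   ACCESS MODES   STORAGECLASS        AGE
wws-registry-claim   Bound    pvc-<uid>   1Gi        RWO            microk8s-hostpath   1m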

Statefulset with replicas : 1 pod has unbound immediate PersistentVolumeClaims

I'm trying to set up an Elasticsearch cluster in my single-node cluster (Docker Desktop on Windows).
For this, I have created the PV as follows (working):
apiVersion: v1
kind: PersistentVolume
metadata:
name: elastic-pv-data
labels:
type: local
spec:
storageClassName: elasticdata
accessModes:
- ReadWriteOnce
capacity:
storage: 20Gi
hostPath:
path: "/mnt/data/elastic"
Then here is the configuration:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: esnode
spec:
  selector:
    matchLabels:
      app: es-cluster # has to match .spec.template.metadata.labels
  serviceName: elasticsearch
  replicas: 2
  updateStrategy:
    type: RollingUpdate
  template:
    metadata:
      labels:
        app: es-cluster
    spec:
      securityContext:
        fsGroup: 1000
      initContainers:
      - name: init-sysctl
        image: busybox
        imagePullPolicy: IfNotPresent
        securityContext:
          privileged: true
        command: ["sysctl", "-w", "vm.max_map_count=262144"]
      containers:
      - name: elasticsearch
        resources:
          requests:
            memory: 1Gi
        securityContext:
          privileged: true
          runAsUser: 1000
          capabilities:
            add:
            - IPC_LOCK
            - SYS_RESOURCE
        image: docker.elastic.co/elasticsearch/elasticsearch-oss:7.7.1
        env:
        - name: ES_JAVA_OPTS
          valueFrom:
            configMapKeyRef:
              name: es-config
              key: ES_JAVA_OPTS
        readinessProbe:
          httpGet:
            scheme: HTTP
            path: /_cluster/health?local=true
            port: 9200
          initialDelaySeconds: 5
        ports:
        - containerPort: 9200
          name: es-http
        - containerPort: 9300
          name: es-transport
        volumeMounts:
        - name: es-data
          mountPath: /usr/share/elasticsearch/data
  volumeClaimTemplates:
  - metadata:
      name: es-data
    spec:
      storageClassName: elasticdata
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 3Gi
The result is that only one Pod has its PVC bound to the PV; the other one gets an error loop: "0/1 nodes are available: 1 pod has unbound immediate PersistentVolumeClaims".
Here is the kubectl get pv,pvc result:
NAME                               CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM                      STORAGECLASS   REASON   AGE
persistentvolume/elastic-pv-data   20Gi       RWO            Retain           Bound    default/es-data-esnode-0   elasticdata             14m

NAME                                     STATUS   VOLUME            CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/es-data-esnode-0   Bound    elastic-pv-data   20Gi       RWO            elasticdata    13m
If I understood correctly, I should have a second PersistentVolumeClaim with the identifier es-data-esnode-1.
Is there something I'm missing or don't understand correctly?
Thanks for your help.
(I have skipped the non-relevant parts here: ConfigMap, LoadBalancer, ...)
Let me add a few details to what was already said both in comments and in Jonas's answer.
Inferring from the comments, you've not defined a StorageClass named elasticdata. If it doesn't exist, you cannot reference it in your PV and PVC.
Take a quick look at how hostPath is used to define a PersistentVolume and how it is referenced in a PersistentVolumeClaim. There you can see that the example uses storageClassName: manual. The Kubernetes docs don't say it explicitly, but if you take a look at the OpenShift docs, they say very clearly that:
A Pod that uses a hostPath volume must be referenced by manual
(static) provisioning.
It's not just some value used to bind the PVC request to this specific PV. So if the elasticdata StorageClass hasn't been defined, you shouldn't use it here.
Second thing: as Jonas already stated in his comment, there is one-to-one binding between a PVC and a PV, so even though your PV still has enough capacity, it has already been claimed by a different PVC and is no longer available. As you can read in the official docs:
A PVC to PV binding is a one-to-one mapping, using a ClaimRef which is
a bi-directional binding between the PersistentVolume and the
PersistentVolumeClaim.
Claims will remain unbound indefinitely if a matching volume does not
exist. Claims will be bound as matching volumes become available. For
example, a cluster provisioned with many 50Gi PVs would not match a
PVC requesting 100Gi. The PVC can be bound when a 100Gi PV is added to
the cluster.
And vice versa: if there is just one 100Gi PV, it won't be able to satisfy a request from two PVCs claiming 50Gi each. Note that in the kubectl get pv,pvc result you posted, both the PV and the PVC have a capacity of 20Gi, although each PVC created from the PVC template requests only 3Gi.
You are not working with any dynamic storage provisioner here, so you need to manually provide as many PersistentVolumes as your use case needs.
By the way, instead of using hostPath I would rather recommend a local volume with a properly defined StorageClass, as sketched below. It has a few advantages over hostPath. Additionally, an external static provisioner can be run separately for improved management of the local volume lifecycle.
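A minimal sketch of such a setup, assuming a node named worker-1 and a local disk path /mnt/disks/es-0 (both illustrative):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner   # local volumes support static provisioning only
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: es-local-pv-0
spec:
  capacity:
    storage: 3Gi
  accessModes:
  - ReadWriteOnce
  storageClassName: local-storage
  local:
    path: /mnt/disks/es-0          # illustrative; must exist on the node
  nodeAffinity:                    # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - worker-1               # illustrative node name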
When using a StatefulSet with volumeClaimTemplates, it will create a PersistentVolumeClaim for each replica. So if you use replicas: 2, two different PersistentVolumeClaims will be created, es-data-esnode-0 and es-data-esnode-1.
Each PersistentVolumeClaim will be bound to a unique PersistentVolume, so in the case of two PVCs you would need two different PersistentVolumes. But this is not easy to do using volumeClaimTemplates and hostPath volumes in a desktop setup.
For what reason do you need replicas: 2 in this case? It is usually used to provide better availability, e.g. using more than one node. But for a local setup in a desktop environment, a single replica on the single node should usually be fine. I think the easiest solution for you is to use replicas: 1.
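If you do want to keep replicas: 2, you would instead need a second PersistentVolume with the same storageClassName so that es-data-esnode-1 can bind; a sketch (the PV name and path are illustrative):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: elastic-pv-data-1         # illustrative name
  labels:
    type: local
spec:
  storageClassName: elasticdata
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 20Gi
  hostPath:
    path: "/mnt/data/elastic-1"   # illustrative; distinct from the first PV's path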

How to set pvc with statefulset in kubernetes?

On GKE, I set up a StatefulSet resource as:
---
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /usr/share/redis
      volumes:
      - name: redis-data
        persistentVolumeClaim:
          claimName: redis-data-pvc
I want to use a PVC, so I created this one (this step was done before the StatefulSet deployment):
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: redis-data-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
When I check the resource in Kubernetes:
kubectl get pvc
NAME             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-pvc   Bound    pvc-6163d1f8-fb3d-44ac-a91f-edef1452b3b9   10Gi       RWO            standard       132m
The default Storage Class is standard.
kubectl get storageclass
NAME                 PROVISIONER
standard (default)   kubernetes.io/gce-pd
But when I check the StatefulSet's deployment status, something is always wrong:
# Describe its pod details
...
Events:
  Type     Reason            Age                From               Message
  ----     ------            ----               ----               -------
  Warning  FailedScheduling  22s                default-scheduler  persistentvolumeclaim "redis-data-pvc" not found
  Warning  FailedScheduling  17s (x2 over 20s)  default-scheduler  pod has unbound immediate PersistentVolumeClaims (repeated 2 times)
  Normal   Created           2s (x2 over 3s)    kubelet            Created container redis
  Normal   Started           2s (x2 over 3s)    kubelet            Started container redis
  Warning  BackOff           0s (x2 over 1s)    kubelet            Back-off restarting failed container
Why can't it find the redis-data-pvc name?
What you have done should work. Make sure that the PersistentVolumeClaim and the StatefulSet are located in the same namespace.
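To verify, you can list the claim in the namespace the StatefulSet is deployed to (the namespace name here is a placeholder):
kubectl get pvc redis-data-pvc --namespace <statefulset-namespace>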
That said, here is an easier solution that also lets you scale up to more replicas more easily:
When using StatefulSet and PersistentVolumeClaim, use the volumeClaimTemplates: field in the StatefulSet instead.
The volumeClaimTemplates: field will be used to create unique PVCs for each replica; their names get a unique suffix, e.g. -0, where the number is the ordinal used for the replicas in a StatefulSet.
So instead, use a StatefulSet manifest like this:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: redis
spec:
  serviceName: "redis"
  selector:
    matchLabels:
      app: redis
  updateStrategy:
    type: RollingUpdate
  replicas: 3
  template:
    metadata:
      labels:
        app: redis
    spec:
      containers:
      - name: redis
        image: redis
        resources:
          limits:
            memory: 2Gi
        ports:
        - containerPort: 6379
        volumeMounts:
        - name: redis-data
          mountPath: /usr/share/redis
  volumeClaimTemplates:   # this will be used to create the PVCs
  - metadata:
      name: redis-data
    spec:
      accessModes: [ "ReadWriteOnce" ]
      resources:
        requests:
          storage: 10Gi
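Applied with replicas: 3, this yields one claim per replica, named <template-name>-<statefulset-name>-<ordinal>; a listing would look roughly like this (volume names and ages are illustrative):
kubectl get pvc
NAME                 STATUS   VOLUME    CAPACITY   ACCESS MODES   STORAGECLASS   AGE
redis-data-redis-0   Bound    pvc-...   10Gi       RWO            standard       1m
redis-data-redis-1   Bound    pvc-...   10Gi       RWO            standard       1m
redis-data-redis-2   Bound    pvc-...   10Gi       RWO            standard       1m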

Migrating ROX, RWO persistent volume claims from in-tree plugin to CSI in GKE

I currently have a ROX, RWO persistent volume claim that I regularly use as a read-only volume in a deployment, and that sporadically gets repopulated by a job using it as a read-write volume while the deployment is scaled down to 0. However, since in-tree plugins will be deprecated in future versions of Kubernetes, I'm planning to migrate this process to volumes backed by CSI drivers.
To clarify my current use of this kind of volume, here is a sample YAML configuration file showing the basic idea:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: standard
  accessModes:
  - ReadOnlyMany
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
      - name: test
        image: busybox
        # Populate the volume
        command:
        - touch
        - /foo/bar
        volumeMounts:
        - name: test
          mountPath: /foo/
          subPath: foo
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: test
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 0
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: busybox
        command:
        - sh
        - '-c'
        - |
          # Check the volume has been populated
          ls /foo/
          # Prevent the pod from exiting for a while
          sleep 3600
        volumeMounts:
        - name: test
          mountPath: /foo/
          subPath: foo
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: test
          readOnly: true
So the job populates the volume, and later the deployment is scaled up. However, replacing the storageClassName field standard in the persistent volume claim with singlewriter-standard does not even allow the job to run.
Is this some kind of bug? Is there a workaround using volumes backed by the CSI driver?
If this is a bug, I'd plan to migrate to CSI drivers later; however, if this is not a bug, how should I migrate my current workflow, since in-tree plugins will eventually be deprecated?
Edit:
The version of the Kubernetes server is 1.17.9-gke.1504. As for the storage classes, they are the default standard and singlewriter-standard storage classes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    components.gke.io/component-name: pdcsi-addon
    components.gke.io/component-version: 0.5.1
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: singlewriter-standard
parameters:
  type: pd-standard
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
The error is shown not on the Job but on the Pod itself (this happens only with the singlewriter-standard storage class):
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
The message you encountered:
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
is not a bug. The attachdetach-controller shows this error because it doesn't know with which accessMode it should mount the volume:
For [ReadOnlyMany, ReadWriteOnce] PV, the external attacher simply does not know if the attachment is going to be consumed as read-only(-many) or as read-write(-once)
-- Github.com: Kubernetes CSI: External attacher: Issues: 153
I encourage you to check the link above for a full explanation.
I currently have a ROX, RWO persistent volume claim that I regularly use as a read only volume in a deployment that sporadically gets repopulated by some job using it as a read write volume
You can combine the steps from the guides below:
Turn on the CSI Persistent disk driver in GKE
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Gce-pd-csi-driver
Create a PVC with pd.csi.storage.gke.io provisioner (you will need to modify YAML definitions with storageClassName: singlewriter-standard):
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Citing the documentation's steps to take (from the ReadOnlyMany guide) that should fulfill the setup you've shown:
Before using a persistent disk in read-only mode, you must format it.
To format your persistent disk:
Create a persistent disk manually or by using dynamic provisioning.
Format the disk and populate it with data. To format the disk, you can:
Reference the disk as a ReadWriteOnce volume in a Pod. Doing this results in GKE automatically formatting the disk, and enables the Pod to pre-populate the disk with data. When the Pod starts, make sure the Pod writes data to the disk.
Manually mount the disk to a VM and format it. Write any data to the disk that you want. For details, see Persistent disk formatting.
Unmount and detach the disk:
If you referenced the disk in a Pod, delete the Pod, wait for it to terminate, and wait for the disk to automatically detach from the node.
If you mounted the disk to a VM, detach the disk using gcloud compute instances detach-disk.
Create Pods that access the volume as ReadOnlyMany as shown in the following section.
-- Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Additional resources:
Github.com: Kubernetes: Design proposals: Storage: CSI
Kubernetes.io: Blog: Container storage interface
Kubernetes-csi.github.io: Docs: Drivers
EDIT
Following the official documentation:
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Please treat it as an example.
Dynamically create a PVC that will be used with ReadWriteOnce accessMode:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwo
spec:
  storageClassName: singlewriter-standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 81Gi
Run a Pod with a PVC mounted to it:
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
    - "sleep"
    - "36000"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pvc-rwo
Run following commands:
$ kubectl exec -it busybox-pvc -- /bin/sh
/ # echo "Hello there!" > /test-mnt/hello.txt
Delete the Pod and wait for the drive to be unmounted. Please do not delete the PVC, as deleting it means:
When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.
-- Cloud.google.com: Kubernetes Engine: Persistent Volumes: Dynamic provisioning
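For example, deleting only the Pod (the PVC and its underlying disk stay intact):
$ kubectl delete pod busybox-pvc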
Get the name of the earlier created disk (it's in the VOLUME column) by running:
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
pvc-rwo   Bound    pvc-11111111-2222-3333-4444-555555555555   81Gi       RWO            singlewriter-standard   52m
Create a PV and PVC with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rox
spec:
  storageClassName: singlewriter-standard
  capacity:
    storage: 81Gi
  accessModes:
  - ReadOnlyMany
  claimRef:
    namespace: default
    name: pvc-rox # <-- important
  gcePersistentDisk:
    pdName: <INSERT HERE THE DISK NAME FROM EARLIER COMMAND>
    # pdName: pvc-11111111-2222-3333-4444-555555555555 <- example
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rox # <-- important
spec:
  storageClassName: singlewriter-standard
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 81Gi
You can test whether your disk is in ROX accessMode by checking that the spawned Pods are scheduled on multiple nodes and all of them have the PVC mounted:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 15
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /test-mnt
          name: volume-ro
          readOnly: true
      volumes:
      - name: volume-ro
        persistentVolumeClaim:
          claimName: pvc-rox
          readOnly: true
$ kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   15/15   15           15          3m1s

$ kubectl exec -it nginx-6c77b8bf66-njhpm -- cat /test-mnt/hello.txt
Hello there!

Kubernetes Minikube with local persistent storage

I am currently trying to deploy the following on Minikube. I used the configuration files below to use a hostPath as persistent storage on the Minikube node.
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  capacity:
    storage: "20Gi"
  accessModes:
  - "ReadWriteOnce"
  hostPath:
    path: /data
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "orientdb-pv-claim"
spec:
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "20Gi"
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
      name: orientdbservice
      labels:
        run: orientdbservice
        test: orientdbservice
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:latest
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          value: "rootpwd"
        ports:
        - containerPort: 2480
          name: orientdb
        volumeMounts:
        - name: orientdb-config
          mountPath: /data/orientdb/config
        - name: orientdb-databases
          mountPath: /data/orientdb/databases
        - name: orientdb-backup
          mountPath: /data/orientdb/backup
      volumes:
      - name: orientdb-config
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
      - name: orientdb-databases
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
      - name: orientdb-backup
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
---
apiVersion: v1
kind: Service
metadata:
  name: orientdbservice
  labels:
    run: orientdbservice
spec:
  type: NodePort
  selector:
    run: orientdbservice
  ports:
  - protocol: TCP
    port: 2480
    name: http
which results in the following:
#kubectl get pv
NAME                                       CAPACITY   ACCESSMODES   RECLAIMPOLICY   STATUS      CLAIM                       STORAGECLASS   REASON   AGE
pv-volume                                  20Gi       RWO           Retain          Available                                                       4h
pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO           Delete          Bound       default/orientdb-pv-claim   standard                4h

#kubectl get pvc
NAME                STATUS   VOLUME                                     CAPACITY   ACCESSMODES   STORAGECLASS   AGE
orientdb-pv-claim   Bound    pvc-cd14d593-78fc-11e7-a46d-1277ec3dd2b5   20Gi       RWO

#kubectl get svc
NAME              CLUSTER-IP   EXTERNAL-IP   PORT(S)          AGE
orientdbservice   10.0.0.16    <nodes>       2480:30552/TCP   4h

#kubectl get pods
NAME                              READY   STATUS              RESTARTS   AGE
orientdbservice-458328598-zsmw5   0/1     ContainerCreating   0          4h
#kubectl describe pod orientdbservice-458328598-zsmw5
Events:
  FirstSeen  LastSeen  Count  From               SubObjectPath  Type     Reason       Message
  ---------  --------  -----  ----               -------------  ----     ------       -------
  4h         1m        37     kubelet, minikube                 Warning  FailedMount  Unable to mount volumes for pod "orientdbservice-458328598-zsmw5_default(392b1298-78ff-11e7-a46d-1277ec3dd2b5)": timeout expired waiting for volumes to attach/mount for pod "default"/"orientdbservice-458328598-zsmw5". list of unattached/unmounted volumes=[orientdb-databases]
  4h         1m        37     kubelet, minikube                 Warning  FailedSync   Error syncing pod
I see the following error:
Unable to mount volumes for pod, timeout expired waiting for volumes to attach/mount for pod
Is there something incorrect in the way I am creating the PersistentVolume and PersistentVolumeClaim on my node?
minikube version: v0.20.0
Appreciate all the help
Your configuration is fine.
Tested under minikube v0.24.0, v0.25.0 and v0.26.1 without any problem.
Keep in mind that minikube is under active development and, especially if you're on Windows, is, as its authors say, experimental software.
Update to a newer version of minikube and redeploy it. This should solve the problem.
You can check for updates with the minikube update-check command, which results in something like this:
$ minikube update-check
CurrentVersion: v0.25.0
LatestVersion: v0.26.1
To upgrade minikube, simply run minikube delete, which deletes your current minikube installation, and download the new release as described:
$ minikube delete
There is a newer version of minikube available (v0.26.1). Download it here:
https://github.com/kubernetes/minikube/releases/tag/v0.26.1
To disable this notification, run the following:
minikube config set WantUpdateNotification false
Deleting local Kubernetes cluster...
Machine deleted.
For some reason the provisioner k8s.io/minikube-hostpath in minikube doesn't work.
So:
Delete the default storage class: kubectl delete storageclass standard
Create the following storage class:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: standard
provisioner: docker.io/hostpath
reclaimPolicy: Retain
Also, in your volume mounts you have one PVC bound to one PV, so instead of multiple volumes just use one volume and mount it with different subPaths; that will create three subdirectories (backup, config and databases) under your host's /data directory:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: orientdbservice
spec:
  #replicas: 1
  template:
    metadata:
      name: orientdbservice
      labels:
        run: orientdbservice
        test: orientdbservice
    spec:
      containers:
      - name: orientdbservice
        image: orientdb:latest
        env:
        - name: ORIENTDB_ROOT_PASSWORD
          value: "rootpwd"
        ports:
        - containerPort: 2480
          name: orientdb
        volumeMounts:
        - name: orientdb
          mountPath: /data/orientdb/config
          subPath: config
        - name: orientdb
          mountPath: /data/orientdb/databases
          subPath: databases
        - name: orientdb
          mountPath: /data/orientdb/backup
          subPath: backup
      volumes:
      - name: orientdb
        persistentVolumeClaim:
          claimName: orientdb-pv-claim
Now deploy your YAML: kubectl create -f yourorientdb.yaml
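To confirm the claim is bound and the Pod starts, you can check with the following (illustrative commands based on the names above):
$ kubectl get pvc orientdb-pv-claim
$ kubectl get pods -l run=orientdbservice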