Kubernetes pod fails with "Unable to attach or mount volumes"

After upgrading Kubernetes from 1.18.13 to 1.19.5, I randomly get the error below for some pods. After some time the pod fails to start (it's a simple pod that doesn't belong to a deployment):
Warning FailedMount 99s kubelet Unable to attach or mount volumes: unmounted volumes=[red-tmp data logs docker src red-conf], unattached volumes=[red-tmp data logs docker src red-conf]: timed out waiting for the condition
On 1.18 we didn't have this issue, and during the upgrade K8s didn't show any errors or incompatibility messages.
There are no additional logs from any other K8s components (I tried increasing the kubelet verbosity level).
Disk space and other host metrics such as load average and RAM are fine.
There is no network storage, only local data.
PVs and PVCs are created before the pods, and we don't change them.
I tried higher K8s versions, but no luck.
We have pretty standard setup without any special customizations:
CNI: Flannel
CRI: Docker
Only one node as master and worker
16 cores and 32G RAM
Example of pod definition:
apiVersion: v1
kind: Pod
metadata:
  labels:
    app: provision
    ver: latest
  name: provision
  namespace: red
spec:
  containers:
  - args:
    - wait
    command:
    - provision.sh
    image: app-tests
    imagePullPolicy: IfNotPresent
    name: provision
    volumeMounts:
    - mountPath: /opt/app/be
      name: src
    - mountPath: /opt/app/be/conf
      name: red-conf
    - mountPath: /opt/app/be/tmp
      name: red-tmp
    - mountPath: /var/lib/app
      name: data
    - mountPath: /var/log/app
      name: logs
    - mountPath: /var/run/docker.sock
      name: docker
  dnsConfig:
    options:
    - name: ndots
      value: "2"
  dnsPolicy: ClusterFirst
  enableServiceLinks: false
  restartPolicy: Never
  volumes:
  - hostPath:
      path: /opt/agent/projects/app-backend
      type: Directory
    name: src
  - name: red-conf
    persistentVolumeClaim:
      claimName: conf
  - name: red-tmp
    persistentVolumeClaim:
      claimName: tmp
  - name: data
    persistentVolumeClaim:
      claimName: data
  - name: logs
    persistentVolumeClaim:
      claimName: logs
  - hostPath:
      path: /var/run/docker.sock
      type: Socket
    name: docker
PV:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: red-conf
  labels:
    namespace: red
spec:
  accessModes:
  - ReadWriteMany
  capacity:
    storage: 2Gi
  hostPath:
    path: /var/lib/docker/k8s/red-conf
  persistentVolumeReclaimPolicy: Retain
  storageClassName: red-conf
  volumeMode: Filesystem
PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: conf
  namespace: red
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 2Gi
  storageClassName: red-conf
  volumeMode: Filesystem
  volumeName: red-conf
The tmp, data, and logs PVs have the same setup as conf, apart from the path. They use separate folders:
/var/lib/docker/k8s/red-tmp
/var/lib/docker/k8s/red-data
/var/lib/docker/k8s/red-logs
Currently I don't have any clue how to diagnose the issue :(
Would be glad to get advice. Thanks in advance.

You must be using local volumes. Follow the link below to understand how to create a storage class, PVs, and PVCs when you use local volumes:
https://kubernetes.io/blog/2019/04/04/kubernetes-1.14-local-persistent-volumes-ga/
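For reference, a minimal sketch of what that guide describes: a StorageClass with no provisioner, plus a local PV pinned to a node via nodeAffinity (the path and node name below are placeholders):
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: local-storage
provisioner: kubernetes.io/no-provisioner
volumeBindingMode: WaitForFirstConsumer
---
apiVersion: v1
kind: PersistentVolume
metadata:
  name: example-local-pv
spec:
  capacity:
    storage: 2Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Retain
  storageClassName: local-storage
  local:
    path: /mnt/disks/ssd1      # placeholder: directory on the node
  nodeAffinity:                # required for local volumes
    required:
      nodeSelectorTerms:
      - matchExpressions:
        - key: kubernetes.io/hostname
          operator: In
          values:
          - my-node            # placeholder: the node that owns the path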

I recommend starting the troubleshooting by reviewing the VolumeAttachment objects to see which node the PV is tied to; perhaps your volume is still linked to a node that was evicted and replaced by a new one.
You can use this command to check your PV name and status:
kubectl get pv
Then, to review which node holds the volume attachment, you can use the following command:
kubectl get volumeattachment
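For illustration, the output looks roughly like this (names invented); the NODE column is what you compare against your current nodes:
NAME                  ATTACHER                PV         NODE     ATTACHED   AGE
csi-0a1b2c3d4e5f      pd.csi.storage.gke.io   red-conf   node-1   true       12d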
Once you have the name of your PV and the node it is attached to, you can check whether the PV is tied to the correct node or to a previous node that is not working or was removed. An evicted node's workloads get scheduled onto a new available node from the pool; to see which nodes are ready and running, you can use this command:
kubectl get nodes
If you find that your PV is tied to a node that no longer exists, delete the VolumeAttachment with the following command:
kubectl delete volumeattachment [csi-volumeattachment_name]
If you need to review a detailed guide for this troubleshooting, you can follow this link.

Related

Why local persistent volumes not visible in EKS?

To test whether I can get self-written software deployed on Amazon using Docker images, I have a test EKS cluster.
I have written a small test script that reads and writes a file to see if I understand how to deploy. I have successfully deployed it in minikube, using three replicas. The replicas all use a shared directory on my local file system, and in minikube that directory is mounted into the pods with a volume.
The next step was to deploy it in the EKS cluster. However, I cannot get it working in EKS. The problem is that the pods don't see the contents of the mounted directory.
This does not completely surprise me, since in minikube I first had to create a mount to a local directory on the server. I have not done anything similar on the EKS cluster.
My question is what I should do to make this work (if it is possible at all).
I use this YAML file to create a pod in EKS:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: "pv-volume"
spec:
  storageClassName: local-storage
  capacity:
    storage: "1Gi"
  accessModes:
  - "ReadWriteOnce"
  hostPath:
    path: /data/k8s
    type: DirectoryOrCreate
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: local-storage
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
So what I expect is that I have a local directory, /data/k8s, that is visible in the pods as path /config.
When I apply this yaml, I get a pod that gives an error message that makes clear the data in the /data/k8s directory is not visible to the pod.
Kubectl gives me this info after creation of the volume and claim:
[rdgon#NL013-PPDAPP015 probeer]$ kubectl get pv,pvc
NAME                                                        CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM              STORAGECLASS   REASON   AGE
persistentvolume/pv-volume                                  1Gi        RWO            Retain           Available                                              15s
persistentvolume/pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            Delete           Bound       default/pv-claim   gp2                     9s

NAME                             STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/pv-claim   Bound    pvc-156edfef-d272-4df6-ae16-09b12e1c2f03   1Gi        RWO            gp2            15s
This seems to indicate everything is OK. But apparently the filesystem of the master node, where I run the YAML file to create the volume, is not the location where the pods look when they access the /config dir.
On EKS, there is no storage class named 'local-storage' by default.
There is only a 'gp2' storage class, which is also used when you don't specify a storageClassName.
The 'gp2' storage class creates a dedicated EBS volume and attaches it to your Kubernetes node when required, so it doesn't use a local folder. You also don't need to create the PV manually, just the PVC:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "pv-claim"
spec:
  storageClassName: gp2
  accessModes:
  - "ReadWriteOnce"
  resources:
    requests:
      storage: "500M"
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    persistentVolumeClaim:
      claimName: pv-claim
If you want a folder on the node itself, you can use a 'hostPath' volume; you don't need a PV or PVC for that:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
Note that this is a bad idea: the data will be lost if another node starts up and your pod is moved to the new node.
If it's for configuration only, you can also use a ConfigMap and put the files directly in your Kubernetes manifests:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ruud-config
data:
  ruud.properties: |
    my ruud.properties file content...
---
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: "/config"
  volumes:
  - name: cmount
    configMap:
      name: ruud-config
Please check whether the PV got created and is bound to the PVC by running the commands below:
kubectl get pv
kubectl get pvc
This will show whether the objects were created properly.
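If the PVC is stuck in Pending rather than Bound, describing the objects usually surfaces the reason in the Events section (resource names taken from the question above):
kubectl describe pvc pv-claim
kubectl describe pv pv-volume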
The local path you refer to is not valid. Try:
apiVersion: v1
kind: Pod
metadata:
  name: ruudtest
spec:
  containers:
  - name: ruud
    image: MYIMAGE
    volumeMounts:
    - name: cmount
      mountPath: /config
  volumes:
  - name: cmount
    hostPath:
      path: /data/k8s
      type: DirectoryOrCreate # <-- You need this since the directory may not exist on the node.

Kubernetes PersistentVolume on local machine, share data

I would like to spin up a pod on my local machine. Inside the pod is a single container with a .jar file in it. That jar file can take in files, process them, and then output them. I would like to create a PersistentVolume and attach it to the pod so the container can access the files.
My Dockerfile:
FROM openjdk:11
WORKDIR /usr/local/dat
COPY . .
ENTRYPOINT ["java", "-jar", "./tool/DAT.jar"]
(Please note that the folder used inside the container is /usr/local/dat)
My PersistentVolume.yml file:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dat-volume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 150Mi
  storageClassName: hostpath
  hostPath:
    path: /home/zoltanvilaghy/WORK/ctp/shared
My PersistentVolumeClaim.yml file:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: dat-pvc
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Mi
  storageClassName: hostpath
  volumeName: dat-volume
My Pod.yml file:
apiVersion: v1
kind: Pod
metadata:
  name: dat-tool-pod
  labels:
    name: dat-tool-pod
spec:
  containers:
  - name: dat-tool
    image: dat_docker
    imagePullPolicy: Never
    args: ["-in", "/usr/local/dat/shared/input/Archive", "-out", "/usr/local/dat/shared/output/Archive2", "-da"]
    volumeMounts:
    - mountPath: /usr/local/dat/shared
      name: dat-volume
  restartPolicy: Never
  volumes:
  - name: dat-volume
    persistentVolumeClaim:
      claimName: dat-pvc
If all worked well, after attaching the PersistentVolume (and putting the Archive folder inside the shared/input folder), the jar file would be able to process the files according to the given arguments and output them to the shared/output folder.
Instead, I get an error saying that the folder cannot be found. Unfortunately, the container exits after the error, so I can't look around inside it to check the file structure. Can somebody help me identify the problem?
Edit: output of kubectl get sc,pvc,pv:
NAME                                             PROVISIONER          RECLAIMPOLICY   VOLUMEBINDINGMODE   ALLOWVOLUMEEXPANSION   AGE
storageclass.storage.k8s.io/hostpath (default)   docker.io/hostpath   Delete          Immediate           false                  20d

NAME                            STATUS   VOLUME       CAPACITY   ACCESS MODES   STORAGECLASS   AGE
persistentvolumeclaim/dat-pvc   Bound    dat-volume   150Mi      RWO            hostpath       4m52s

NAME                          CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS   CLAIM             STORAGECLASS   REASON   AGE
persistentvolume/dat-volume   150Mi      RWO            Retain           Bound    default/dat-pvc   hostpath                4m55s
Assuming your SC/PVC/PV are all correct, here's how you can test:
apiVersion: v1
kind: Pod
metadata:
  name: dat-tool-pod
  labels:
    name: dat-tool-pod
spec:
  containers:
  - name: dat-tool
    image: busybox
    imagePullPolicy: IfNotPresent
    command: ["ash","-c","sleep 7200"]
    volumeMounts:
    - mountPath: /usr/local/dat/shared
      name: dat-volume
  restartPolicy: Never
  volumes:
  - name: dat-volume
    persistentVolumeClaim:
      claimName: dat-pvc
After the pod is created, you can kubectl exec -it dat-tool-pod -- ash and cd /usr/local/dat/shared. There you can check the directories/files (including permissions) to understand why your program complains about a missing directory or files.
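For example (an illustrative session; the exact listing depends on your volume contents):
$ kubectl exec -it dat-tool-pod -- ash
/ # ls -la /usr/local/dat/shared
/ # id
Comparing the uid/gid reported by id with the file ownership shown by ls often explains permission-related failures.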
For anyone else experiencing this problem, here is what helped me find a solution:
https://github.com/docker/for-win/issues/7023
(And actually the link inside the first comment in this issue.)
So my setup was a Windows 10 machine, using WSL2 to run Docker containers and a Kubernetes cluster on my machine. No matter where I put the folder I wanted to share with my pod, it didn't appear inside the pod. So, based on the link above, I created my folder under /mnt/wsl, at /mnt/wsl/shared.
Supposedly, this /mnt/wsl folder is where Docker Desktop starts looking for the folders that you want to share. I changed my PersistentVolume.yml to the following:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: dat-volume
spec:
  accessModes:
  - ReadWriteOnce
  capacity:
    storage: 150Mi
  storageClassName: hostpath
  hostPath:
    path: /run/desktop/mnt/host/wsl/shared
My understanding is that /run/desktop/mnt/host/wsl is the same as /mnt/wsl, and so I could finally pass files between my Pod and my machine.

Migrating ROM, RWO persistent volume claims from in-tree plugin to csi in GKE

I currently have a ROM, RWO persistent volume claim that I regularly use as a read-only volume in a deployment, and that sporadically gets repopulated by a job using it as a read-write volume while the deployment is scaled down to 0. However, since in-tree plugins will be deprecated in future versions of Kubernetes, I'm planning to migrate this process to volumes using CSI drivers.
To clarify how I currently use this kind of volume, here is a sample YAML configuration showing the basic idea:
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: test
spec:
  storageClassName: standard
  accessModes:
  - ReadOnlyMany
  - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi
---
apiVersion: batch/v1
kind: Job
metadata:
  name: test
spec:
  template:
    spec:
      containers:
      - name: test
        image: busybox
        # Populate the volume
        command:
        - touch
        - /foo/bar
        volumeMounts:
        - name: test
          mountPath: /foo/
          subPath: foo
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: test
      restartPolicy: Never
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: test
  name: test
spec:
  replicas: 0
  selector:
    matchLabels:
      app: test
  template:
    metadata:
      labels:
        app: test
    spec:
      containers:
      - name: test
        image: busybox
        command:
        - sh
        - '-c'
        - |
          # Check the volume has been populated
          ls /foo/
          # Prevent the pod from exiting for a while
          sleep 3600
        volumeMounts:
        - name: test
          mountPath: /foo/
          subPath: foo
      volumes:
      - name: test
        persistentVolumeClaim:
          claimName: test
          readOnly: true
So the job populates the volume, and later the deployment is scaled up. However, replacing the storageClassName value standard in the PersistentVolumeClaim with singlewriter-standard does not even allow the job to run.
Is this some kind of bug? Is there some workaround when using volumes backed by the CSI driver?
If this is a bug, I'd plan to migrate to CSI drivers later; however, if this is not a bug, how should I migrate my current workflow, since in-tree plugins will eventually be deprecated?
Edit:
The version of the Kubernetes server is 1.17.9-gke.1504. As for the storage classes, they are the default standard and singlewriter-standard storage classes:
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
    kubernetes.io/cluster-service: "true"
  name: standard
parameters:
  type: pd-standard
provisioner: kubernetes.io/gce-pd
reclaimPolicy: Delete
volumeBindingMode: Immediate
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  annotations:
    components.gke.io/component-name: pdcsi-addon
    components.gke.io/component-version: 0.5.1
    storageclass.kubernetes.io/is-default-class: "true"
  labels:
    addonmanager.kubernetes.io/mode: EnsureExists
  name: singlewriter-standard
parameters:
  type: pd-standard
provisioner: pd.csi.storage.gke.io
reclaimPolicy: Delete
volumeBindingMode: WaitForFirstConsumer
The error is shown not on the job but on the pod itself (this applies only to the singlewriter-standard storage class):
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
The message you encountered:
Warning FailedAttachVolume attachdetach-controller AttachVolume.Attach failed for volume "..." : CSI does not support ReadOnlyMany and ReadWriteOnce on the same PersistentVolume
is not a bug. The attachdetach-controller is showing this error as it doesn't know in which accessMode it should mount the volume:
For [ReadOnlyMany, ReadWriteOnce] PV, the external attacher simply does not know if the attachment is going to be consumed as read-only(-many) or as read-write(-once)
-- Github.com: Kubernetes CSI: External attacher: Issues: 153
I encourage you to check the link above for a full explanation.
I currently have a ROM, RWO persistent volume claim that I regularly use as a read only volume in a deployment that sporadically gets repopulated by some job using it as a read write volume
You can combine the steps from below guides:
Turn on the CSI Persistent disk driver in GKE
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Gce-pd-csi-driver
Create a PVC with pd.csi.storage.gke.io provisioner (you will need to modify YAML definitions with storageClassName: singlewriter-standard):
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Citing the documentation on steps to take (from ReadOnlyMany guide) that should fulfill the setup you've shown:
Before using a persistent disk in read-only mode, you must format it.
To format your persistent disk:
Create a persistent disk manually or by using dynamic provisioning.
Format the disk and populate it with data. To format the disk, you can:
Reference the disk as a ReadWriteOnce volume in a Pod. Doing this results in GKE automatically formatting the disk, and enables the Pod to pre-populate the disk with data. When the Pod starts, make sure the Pod writes data to the disk.
Manually mount the disk to a VM and format it. Write any data to the disk that you want. For details, see Persistent disk formatting.
Unmount and detach the disk:
If you referenced the disk in a Pod, delete the Pod, wait for it to terminate, and wait for the disk to automatically detach from the node.
If you mounted the disk to a VM, detach the disk using gcloud compute instances detach-disk.
Create Pods that access the volume as ReadOnlyMany as shown in the following section.
-- Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Additional resources:
Github.com: Kubernetes: Design proposals: Storage: CSI
Kubernetes.io: Blog: Container storage interface
Kubernetes-csi.github.io: Docs: Drivers
EDIT
Following the official documentation:
Cloud.google.com: Kubernetes Engine: How to: Persistent volumes: Readonlymany disks
Please treat it as an example.
Dynamically create a PVC that will be used with ReadWriteOnce accessMode:
pvc.yaml
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rwo
spec:
  storageClassName: singlewriter-standard
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 81Gi
Run a Pod with a PVC mounted to it:
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: busybox-pvc
spec:
  containers:
  - image: k8s.gcr.io/busybox
    name: busybox
    command:
    - "sleep"
    - "36000"
    volumeMounts:
    - mountPath: /test-mnt
      name: my-volume
  volumes:
  - name: my-volume
    persistentVolumeClaim:
      claimName: pvc-rwo
Run the following commands (note the second one runs inside the pod's shell):
$ kubectl exec -it busybox-pvc -- /bin/sh
/ # echo "Hello there!" > /test-mnt/hello.txt
Delete the Pod and wait for the drive to be unmounted. Please do not delete the PVC, because deleting it:
When you delete a claim, the corresponding PersistentVolume object and the provisioned Compute Engine persistent disk are also deleted.
-- Cloud.google.com: Kubernetes Engine: Persistent Volumes: Dynamic provisioning
Get the name of the earlier created disk (it's in the VOLUME column) by running:
$ kubectl get pvc
NAME      STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS            AGE
pvc-rwo   Bound    pvc-11111111-2222-3333-4444-555555555555   81Gi       RWO            singlewriter-standard   52m
Create a PV and PVC with the following definition:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-rox
spec:
  storageClassName: singlewriter-standard
  capacity:
    storage: 81Gi
  accessModes:
  - ReadOnlyMany
  claimRef:
    namespace: default
    name: pvc-rox # <-- important
  gcePersistentDisk:
    pdName: <INSERT HERE THE DISK NAME FROM EARLIER COMMAND>
    # pdName: pvc-11111111-2222-3333-4444-555555555555 <- example
    fsType: ext4
    readOnly: true
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: pvc-rox # <-- important
spec:
  storageClassName: singlewriter-standard
  accessModes:
  - ReadOnlyMany
  resources:
    requests:
      storage: 81Gi
You can verify that your disk is mounted in ROX accessMode when the spawned Pods are scheduled on multiple nodes and all of them mount the PVC:
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: nginx
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 15
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        volumeMounts:
        - mountPath: /test-mnt
          name: volume-ro
          readOnly: true
      volumes:
      - name: volume-ro
        persistentVolumeClaim:
          claimName: pvc-rox
          readOnly: true
$ kubectl get deployment nginx
NAME    READY   UP-TO-DATE   AVAILABLE   AGE
nginx   15/15   15           15          3m1s
$ kubectl exec -it nginx-6c77b8bf66-njhpm -- cat /test-mnt/hello.txt
Hello there!

how to find my persistent volume location

I tried creating a persistent volume using the host path. I could bind it to a specific node using node affinity, but I didn't provide that. My persistent volume YAML looks like this:
apiVersion: v1
kind: PersistentVolume
metadata:
  name: jenkins
  labels:
    type: fast
spec:
  capacity:
    storage: 1Gi
  accessModes:
  - ReadWriteOnce
  persistentVolumeReclaimPolicy: Recycle
  hostPath:
    path: /mnt/data
After this I created the PVC:
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: myclaim
spec:
  accessModes:
  - ReadWriteOnce
  volumeMode: Filesystem
  resources:
    requests:
      storage: 1Gi
And finally attached it to the pod:
apiVersion: v1
kind: Pod
metadata:
  name: mypod
spec:
  containers:
  - name: myfrontend
    image: thinkingmonster/nettools
    volumeMounts:
    - mountPath: "/var/www/html"
      name: mypd
  volumes:
  - name: mypd
    persistentVolumeClaim:
      claimName: myclaim
Now the describe command for the PV or PVC does not say on which node the volume /mnt/data is actually kept,
and I had to SSH into all the nodes to locate it.
And the pod is smart enough to be created only on the node where Kubernetes mapped the host directory to the PV.
How can I find out on which node Kubernetes has created the persistent volume, without having to SSH into the nodes or check where the pod is running?
It's only when a volume is bound to a claim that it's associated with a particular node. HostPath volumes are a bit different from the regular sort, making it a little less clear. When you get the volume claim, the annotations on it should give you a bunch of information, including what you're looking for. In particular, look for the:
volume.kubernetes.io/selected-node: ${NODE_NAME}
annotation on the PVC. You can see the annotations, along with the other computed configuration, by asking the Kubernetes API server for that info:
kubectl get pvc -o yaml -n ${NAMESPACE} ${PVC_NAME}
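If you only want that single annotation, a jsonpath query keeps it to one line (a sketch; note the escaped dots in the annotation key):
kubectl get pvc ${PVC_NAME} -n ${NAMESPACE} \
  -o jsonpath='{.metadata.annotations.volume\.kubernetes\.io/selected-node}'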

Setup with Kubernetes hostPath but file doesn't show up in container

I'm trying to set up hostPath to share a file between pods.
I'm following this guide: Configure a Pod to Use a PersistentVolume for Storage.
Here are the configuration files for the PV, PVC, and pod.
PV:
kind: PersistentVolume
apiVersion: v1
metadata:
  name: task-pv-volume
  labels:
    type: local
spec:
  storageClassName: manual
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  hostPath:
    path: "/tmp/data"
PVC:
kind: PersistentVolumeClaim
apiVersion: v1
metadata:
  name: task-pv-claim
spec:
  storageClassName: manual
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 3Gi
POD:
kind: Pod
apiVersion: v1
metadata:
  name: task-pv-pod
spec:
  volumes:
  - name: task-pv-storage
    persistentVolumeClaim:
      claimName: task-pv-claim
  containers:
  - name: task-pv-container
    image: nginx
    ports:
    - containerPort: 80
      name: "http-server"
    volumeMounts:
    - mountPath: "/usr/share/nginx/html"
      name: task-pv-storage
I'm adding a file to /tmp/data but I can't see it in the container.
When I check the status of the PV, PVC, and pod, the result is as follows:
Can someone give me a clue why I can't see the file? Any suggestion or command on how to debug this kind of issue is welcome.
MINIKUBE specific
I was getting the same error when running a local cluster on minikube for Mac.
Minikube actually creates a VM and then runs your containers on it. So the hostPath actually refers to paths inside that VM, not on your local machine. That is why all mounts show up as empty folders.
Solution:
Map your local path into minikube's VM under the same name. That way you can refer to it as-is in your Kubernetes manifests.
minikube mount <source directory>:<target directory>
In this case:
minikube mount /tmp/data:/tmp/data
This should do the trick.
SOURCE: https://minikube.sigs.k8s.io/docs/handbook/mount/
I think I figured it out.
HostPath is only suitable for a one-node cluster. My cluster has 2 nodes, so the physical storage the PV uses is on another computer.
When I first went through the documentation, I didn't pay attention to this:
"You need to have a Kubernetes cluster that has only one Node"
Did you put a local file in /tmp/data in the first place?