I'm having an issue where a container I'd like to run doesn't appear to be getting started on my cluster.
I've tried searching around for possible solutions, but there's surprisingly little information out there to assist with this issue or anything of its nature.
Here's the most I could gather:
$ kubectl describe pods/elasticsearch
Name: elasticsearch
Namespace: default
Image(s): my.image.host/my-project/elasticsearch
Node: /
Labels: <none>
Status: Pending
Reason:
Message:
IP:
Replication Controllers: <none>
Containers:
elasticsearch:
Image: my.image.host/my-project/elasticsearch
Limits:
cpu: 100m
State: Waiting
Ready: False
Restart Count: 0
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
Mon, 19 Oct 2015 10:28:44 -0500 Mon, 19 Oct 2015 10:34:09 -0500 12 {scheduler } failedScheduling no nodes available to schedule pods
I also see this:
$ kubectl get pod elasticsearch -o wide
NAME READY STATUS RESTARTS AGE NODE
elasticsearch 0/1 Pending 0 5s
I guess I'd like to know: what prerequisites exist so that I can be confident that my container is going to run in Container Engine? What do I need to do in this scenario to get it running?
Here's my yml file:
apiVersion: v1
kind: Pod
metadata:
name: elasticsearch
spec:
containers:
- name: elasticsearch
image: my.image.host/my-project/elasticsearch
ports:
- containerPort: 9200
resources:
volumeMounts:
- name: elasticsearch-data
mountPath: /usr/share/elasticsearch
volumes:
- name: elasticsearch-data
gcePersistentDisk:
pdName: elasticsearch-staging
fsType: ext4
Here's some more output about my node:
$ kubectl get nodes
NAME LABELS STATUS
gke-elasticsearch-staging-00000000-node-yma3 kubernetes.io/hostname=gke-elasticsearch-staging-00000000-node-yma3 NotReady
You only have one node in your cluster and its status is NotReady, so you won't be able to schedule any pods. You can try to determine why your node isn't ready by looking at /var/log/kubelet.log on that node. You can also add new nodes to your cluster (scale the cluster size up to 2) or delete the node (it will be automatically replaced by the instance group manager) to see if either of those options gets you a working node.
It appears that the scheduler couldn't see any nodes in your cluster. You can run kubectl get nodes and gcloud compute instances list to confirm whether you have any nodes in the cluster. Did you correctly specify the number of nodes (--num-nodes) when creating the cluster?
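If it helps, here is a rough sketch of the checks and the resize both answers describe. The cluster name elasticsearch-staging is only a guess based on your node name, you may need to pass your zone with --zone, and depending on your gcloud version the resize flag may be --size instead of --num-nodes:
$ kubectl get nodes                 # confirm how many nodes Kubernetes sees and their status
$ gcloud compute instances list     # confirm the underlying GCE instances actually exist
$ gcloud container clusters resize elasticsearch-staging --num-nodes=2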
Related
I've got microservices deployed on GKE with Helm v3; all apps/Helm releases had been running nicely for months, but yesterday, for some reason, the pods were re-created:
kubectl get pods -l app=myapp
NAME READY STATUS RESTARTS AGE
myapp-75cb966746-grjkj 1/1 Running 1 14h
myapp-75cb966746-gz7g7 1/1 Running 0 14h
myapp-75cb966746-nmzzx 1/1 Running 1 14h
The helm3 history myapp output shows it was updated 2 days ago (40+ hours), not yesterday, so I exclude the possibility that someone simply ran helm3 upgrade ...; it seems more likely that someone ran kubectl rollout restart deployment/myapp. Any thoughts on how I can check why the pods were restarted? I'm not sure how to verify it. PS: the logs from kubectl logs deployment/myapp only go back 3 hours.
Just for reference, I'm not asking about kubectl logs -p myapp-75cb966746-grjkj; there is no problem with that. I want to know what happened to the 3 pods that were there 14 hours ago and were simply deleted/replaced, and how to check that.
There are also no events on the cluster:
MacBook-Pro% kubectl get events
No resources found in myns namespace.
As for describing the deployment, all there is to see is that the deployment was first created a few months ago:
CreationTimestamp: Thu, 22 Oct 2020 09:19:39 +0200
and that the last update was >40 hours ago:
lastUpdate: 2021-04-07 07:10:09.715630534 +0200 CEST m=+1.867748121
Here is the full describe output if someone wants it:
MacBook-Pro% kubectl describe deployment myapp
Name: myapp
Namespace: myns
CreationTimestamp: Thu, 22 Oct 2020 09:19:39 +0200
Labels: app=myapp
Annotations: deployment.kubernetes.io/revision: 42
lastUpdate: 2021-04-07 07:10:09.715630534 +0200 CEST m=+1.867748121
meta.helm.sh/release-name: myapp
meta.helm.sh/release-namespace: myns
Selector: app=myapp,env=myns
Replicas: 3 desired | 3 updated | 3 total | 3 available | 0 unavailable
StrategyType: RollingUpdate
MinReadySeconds: 5
RollingUpdateStrategy: 25% max unavailable, 1 max surge
Pod Template:
Labels: app=myapp
Annotations: kubectl.kubernetes.io/restartedAt: 2020-10-23T11:21:11+02:00
Containers:
myapp:
Image: xxx
Port: 8080/TCP
Host Port: 0/TCP
Limits:
cpu: 1
memory: 1G
Requests:
cpu: 1
memory: 1G
Liveness: http-get http://:myappport/status delay=45s timeout=5s period=10s #success=1 #failure=3
Readiness: http-get http://:myappport/status delay=45s timeout=5s period=10s #success=1 #failure=3
Environment Variables from:
myapp-myns Secret Optional: false
Environment:
myenv: myval
Mounts:
/some/path from myvol (ro)
Volumes:
myvol:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: myvol
Optional: false
Conditions:
Type Status Reason
---- ------ ------
Progressing True NewReplicaSetAvailable
Available True MinimumReplicasAvailable
OldReplicaSets: <none>
NewReplicaSet: myapp-75cb966746 (3/3 replicas created)
Events: <none>
First things first, I would check the nodes on which the Pods were running.
If a Pod is restarted (which means its RESTART COUNT is incremented), it usually means that the Pod had an error and that error caused the Pod to crash.
In your case though, the Pods were completely recreated, which means (as you said) that someone could have used a rollout restart, or that the deployment was scaled down and then back up (both manual operations).
The most common case in which Pods are recreated automatically is that the node (or nodes) the Pods were running on had a problem. If a node becomes NotReady, even for a short time, the Kubernetes scheduler will create new Pods on other nodes in order to match the desired state (number of replicas and so on).
Old Pods on a NotReady node go into the Terminating state and are forcibly terminated as soon as the NotReady node becomes Ready again (if they are still up and running at that point).
This is described in detail in the documentation (https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-lifetime):
If a Node dies, the Pods scheduled to that node are scheduled for deletion after a timeout period. Pods do not, by themselves, self-heal. If a Pod is scheduled to a node that then fails, the Pod is deleted; likewise, a Pod won't survive an eviction due to a lack of resources or Node maintenance. Kubernetes uses a higher-level abstraction, called a controller, that handles the work of managing the relatively disposable Pod instances.
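To check whether that is what happened here, a minimal sketch (<node-name> is a placeholder for one of your node names); the Conditions and Events sections of the describe output show whether the node recently went NotReady:
kubectl get nodes
kubectl describe node <node-name>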
I suggest you run kubectl describe deployment <deployment-name> and kubectl describe pod <pod-name>.
In addition, kubectl get events will show cluster-level events and may help you understand what happened.
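A hedged sketch of what that might look like for the names in your question (the namespace and ReplicaSet name are taken from your describe output); note that events are only retained for a limited time, often around an hour by default, which may be why your kubectl get events came back empty:
kubectl get events -n myns --sort-by=.metadata.creationTimestamp
kubectl describe replicaset myapp-75cb966746 -n myns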
You can use
kubectl describe pod your_pod_name
where, under Containers.your_container_name.lastState, you get the time and the reason why the container was last terminated (for example, due to an error or due to being OOMKilled).
doc reference:
kubectl explain pod.status.containerStatuses.lastState
KIND: Pod
VERSION: v1
RESOURCE: lastState <Object>
DESCRIPTION:
Details about the container's last termination condition.
ContainerState holds a possible state of container. Only one of its members
may be specified. If none of them is specified, the default one is
ContainerStateWaiting.
FIELDS:
running <Object>
Details about a running container
terminated <Object>
Details about a terminated container
waiting <Object>
Details about a waiting container
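For a scripted check, something like the following pulls just that field (your_pod_name is a placeholder, and the index 0 assumes a single-container pod):
kubectl get pod your_pod_name -o jsonpath='{.status.containerStatuses[0].lastState}'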
An example from one of my containers, which terminated due to an error in the application:
Containers:
my_container:
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Tue, 06 Apr 2021 16:28:57 +0300
Finished: Tue, 06 Apr 2021 16:32:07 +0300
To get the previous logs of your container (the restarted one), you may use the --previous flag on the pod, like this:
kubectl logs your_pod_name --previous
A Linux container pod, using Docker images from Azure Container Registry, keeps restarting with restartPolicy set to Always. The pod description is below.
kubectl describe pod example-pod
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 11 Jun 2020 03:27:11 +0000
Finished: Thu, 11 Jun 2020 03:27:12 +0000
...
Back-off restarting failed container
This pod is created with a secret to access the ACR registry repository.
The reason is that the pod completes execution successfully with exit code 0; however, it should keep listening on a particular port. The Microsoft documentation covers this on the Container Group Runtime page, under the header "Container continually exits and restarts".
The deployment-example.yml file content is as below:
apiVersion: apps/v1
kind: Deployment
metadata:
name: example-deployment
namespace: development
labels:
app: example
spec:
replicas: 1
selector:
matchLabels:
app: example
template:
metadata:
labels:
app: example
spec:
containers:
- name: example
image: contentocr.azurecr.io/example:latest
#command: ["ping -t localhost"]
imagePullPolicy: Always
ports:
- name: http-port
containerPort: 3000
imagePullSecrets:
- name: regpass
restartPolicy: Always
nodeSelector:
agent: linux
---
apiVersion: v1
kind: Service
metadata:
name: example
namespace: development
labels:
app: example
spec:
ports:
- name: http-port
port: 3000
targetPort: 3000
selector:
app: example
type: LoadBalancer
Output of kubectl get events is as below.
3m39s Normal Scheduled pod/example-deployment-5dc964fcf8-gbm5t Successfully assigned development/example-deployment-5dc964fcf8-gbm5t to aks-agentpool-18342716-vmss000000
2m6s Normal Pulling pod/example-deployment-5dc964fcf8-gbm5t Pulling image "contentocr.azurecr.io/example:latest"
2m5s Normal Pulled pod/example-deployment-5dc964fcf8-gbm5t Successfully pulled image "contentocr.azurecr.io/example:latest"
2m5s Normal Created pod/example-deployment-5dc964fcf8-gbm5t Created container example
2m49s Normal Started pod/example-deployment-5dc964fcf8-gbm5t Started container example
2m20s Warning BackOff pod/example-deployment-5dc964fcf8-gbm5t Back-off restarting failed container
6m6s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-2fdt5
3m39s Normal SuccessfulCreate replicaset/example-deployment-5dc964fcf8 Created pod: example-deployment-5dc964fcf8-gbm5t
6m6s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m39s Normal ScalingReplicaSet deployment/example-deployment Scaled up replica set example-deployment-5dc964fcf8 to 1
3m38s Normal EnsuringLoadBalancer service/example Ensuring load balancer
3m34s Normal EnsuredLoadBalancer service/example Ensured load balancer
The Dockerfile entry point is ENTRYPOINT ["npm", "start"] with CMD ["tail -f /dev/null/"].
It runs locally because the CI="true" flag is implicitly assigned there. However, in docker-compose you need to set stdin_open: true or tty: true, and in the Kubernetes deployment file you need to define an environment variable named CI with the value "true".
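For example, in the pod template of the deployment above it might look like this (a sketch; only the env section is new, everything else mirrors your spec):
      containers:
      - name: example
        image: contentocr.azurecr.io/example:latest
        imagePullPolicy: Always
        env:
        - name: CI
          value: "true"
        ports:
        - name: http-port
          containerPort: 3000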
The below command solved my problem:
az aks update -n aks-nks-k8s-cluster -g aks-nks-k8s-rg --attach-acr aksnksk8s
After executing the above command, the following will be displayed:
Add ROLE Propagation done [###############] 100.0000%
and then
Running.., followed by the response after some time.
Here:
aks-nks-k8s-cluster: the cluster name I created and am using
aks-nks-k8s-rg: the resource group I created and am using
aksnksk8s: the container registry I created and am using
Pod containers are not ready and get stuck in the Waiting state over and over, every single time, after they run sh commands (/bin/sh as well).
As an example, all the pod containers shown at https://v1-17.docs.kubernetes.io/docs/tasks/configure-pod-container/configure-pod-configmap/#define-container-environment-variables-with-data-from-multiple-configmaps just go to "Completed" status after executing the sh command, or, if I set "restartPolicy: Always", they sit in the "Waiting" state with the reason CrashLoopBackOff.
(Containers work fine if I do not set any command on them.)
If I use the sh command within a container, after creating it I can confirm using "kubectl logs" that the env variable was set correctly.
The expected behaviour is to get the pod's containers running after they execute the sh command.
I cannot find references regarding this particular problem and I need a little help if possible; thank you very much in advance!
Please disregard the image choice: I tried different images and the problem happens either way.
environment: Kubernetes v 1.17.1 on qemu VM
yaml:
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
data:
how: very
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: nginx
ports:
- containerPort: 88
command: [ "/bin/sh", "-c", "env" ]
env:
# Define the environment variable
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
name: special-config
# Specify the key associated with the value
key: how
restartPolicy: Always
describe pod:
kubectl describe pod dapi-test-pod
Name: dapi-test-pod
Namespace: default
Priority: 0
Node: kw1/10.1.10.31
Start Time: Thu, 21 May 2020 01:02:17 +0000
Labels: <none>
Annotations: cni.projectcalico.org/podIP: 192.168.159.83/32
kubectl.kubernetes.io/last-applied-configuration:
{"apiVersion":"v1","kind":"Pod","metadata":{"annotations":{},"name":"dapi-test-pod","namespace":"default"},"spec":{"containers":[{"command...
Status: Running
IP: 192.168.159.83
IPs:
IP: 192.168.159.83
Containers:
test-container:
Container ID: docker://63040ec4d0a3e78639d831c26939f272b19f21574069c639c7bd4c89bb1328de
Image: nginx
Image ID: docker-pullable://nginx#sha256:30dfa439718a17baafefadf16c5e7c9d0a1cde97b4fd84f63b69e13513be7097
Port: 88/TCP
Host Port: 0/TCP
Command:
/bin/sh
-c
env
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 21 May 2020 01:13:21 +0000
Finished: Thu, 21 May 2020 01:13:21 +0000
Ready: False
Restart Count: 7
Environment:
SPECIAL_LEVEL_KEY: <set to the key 'how' of config map 'special-config'> Optional: false
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-zqbsw (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-zqbsw:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-zqbsw
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13m default-scheduler Successfully assigned default/dapi-test-pod to kw1
Normal Pulling 12m (x4 over 13m) kubelet, kw1 Pulling image "nginx"
Normal Pulled 12m (x4 over 13m) kubelet, kw1 Successfully pulled image "nginx"
Normal Created 12m (x4 over 13m) kubelet, kw1 Created container test-container
Normal Started 12m (x4 over 13m) kubelet, kw1 Started container test-container
Warning BackOff 3m16s (x49 over 13m) kubelet, kw1 Back-off restarting failed container
You can use this manifest; the command ["/bin/sh", "-c"] says "run a shell, and execute the following instructions". The args are then passed as commands to the shell. Multiline args make it simple and easy to read. Your pod will display its environment variables and also start the NGINX process without stopping:
apiVersion: v1
kind: ConfigMap
metadata:
name: special-config
data:
how: very
---
apiVersion: v1
kind: Pod
metadata:
name: dapi-test-pod
spec:
containers:
- name: test-container
image: nginx
ports:
- containerPort: 88
command: ["/bin/sh", "-c"]
args:
- env;
nginx -g 'daemon off;';
env:
# Define the environment variable
- name: SPECIAL_LEVEL_KEY
valueFrom:
configMapKeyRef:
# The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
name: special-config
# Specify the key associated with the value
key: how
restartPolicy: Always
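A quick way to verify (the filename is just an example; use whatever file you saved the manifest to): the log should show the environment variables, including SPECIAL_LEVEL_KEY=very, and the pod should stay Running because nginx keeps a foreground process alive.
kubectl apply -f dapi-test-pod.yaml
kubectl logs dapi-test-pod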
This happens because the process in the container you are running has completed and the container shuts down, so Kubernetes marks the pod as completed.
If a command is defined in the Docker image as part of CMD, or if you've added your own command as you have done, the container shuts down after the command has completed. It's the same reason why, when you run Ubuntu using plain Docker, it starts up and then shuts down directly afterwards.
For pods, and their underlying Docker containers, to continue running, you need to start a process that keeps running. In your case, running the env command completes right away.
If you set the pod to restart Always, then Kubernetes will keep trying to restart it until it has reached its back-off threshold.
One-off commands like the one you're running are useful for utility-type things, i.e. do one thing, then get rid of the pod.
For example:
kubectl run tester --generator run-pod/v1 --image alpine --restart Never --rm -it -- /bin/sh -c env
To run something longer, start a process that continues running.
For example:
kubectl run tester --generator run-pod/v1 --image alpine -- /bin/sh -c "sleep 30"
That command will run for 30 seconds, so the pod will also run for 30 seconds. It will also use the default restart policy of Always. So after 30 seconds the process completes, Kubernetes marks the pod as completed and then restarts it to do the same thing again.
Generally, pods will start a long-running process, like a web server. For Kubernetes to know whether that pod is healthy, so it can do its high-availability magic and restart it if it crashes, it can use readiness and liveness probes.
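For completeness, a minimal sketch of what such probes look like on a container spec; the /healthz path and port 80 are placeholders, not something your image necessarily serves:
livenessProbe:
  httpGet:
    path: /healthz
    port: 80
  initialDelaySeconds: 5
  periodSeconds: 10
readinessProbe:
  httpGet:
    path: /healthz
    port: 80
  periodSeconds: 5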
Minikube version v0.24.1
kubernetes version 1.8.0
The problem that I am facing is that I have several statefulsets created in minikube, each with one pod.
Sometimes when I start up minikube, my pods start up initially and then keep being restarted by Kubernetes. They go from the creating-container state, to running, to terminating, over and over.
Now, I've seen Kubernetes kill and restart things before when it detects disk pressure, memory pressure, or some other condition like that, but that's not the case here, as those flags are not raised and the only message in the pod's event log is "Need to kill pod".
What's most confusing is that this issue doesn't happen all the time, and I'm not sure how to trigger it. My minikube setup will work for a week or more without this happening, then one day I'll start minikube up and the pods for my statefulsets just keep restarting. So far the only workaround I've found is to delete my minikube instance and set it up again from scratch, but obviously this is not ideal.
Below is a sample of one of the statefulsets whose pod keeps getting restarted. As seen in the events, Kubernetes deletes the pod and starts it again, and this happens repeatedly. I'm unable to figure out why it keeps doing that and why it only gets into this state sometimes.
$ kubectl describe statefulsets mongo --namespace=storage
Name: mongo
Namespace: storage
CreationTimestamp: Mon, 08 Jan 2018 16:11:39 -0600
Selector: environment=test,role=mongo
Labels: name=mongo
Annotations: kubectl.kubernetes.io/last-applied-configuration={"apiVersion":"apps/v1beta1","kind":"StatefulSet","metadata":{"annotations":{},"labels":{"name":"mongo"},"name":"mongo","namespace":"storage"},"...
Replicas: 1 desired | 1 total
Pods Status: 1 Running / 0 Waiting / 0 Succeeded / 0 Failed
Pod Template:
Labels: environment=test
role=mongo
Containers:
mongo:
Image: mongo:3.4.10-jessie
Port: 27017/TCP
Command:
mongod
--replSet
rs0
--smallfiles
--noprealloc
Environment: <none>
Mounts:
/data/db from mongo-persistent-storage (rw)
mongo-sidecar:
Image: cvallance/mongo-k8s-sidecar
Port: <none>
Environment:
MONGO_SIDECAR_POD_LABELS: role=mongo,environment=test
KUBERNETES_MONGO_SERVICE_NAME: mongo
Mounts: <none>
Volumes: <none>
Volume Claims:
Name: mongo-persistent-storage
StorageClass:
Labels: <none>
Annotations: volume.alpha.kubernetes.io/storage-class=default
Capacity: 5Gi
Access Modes: [ReadWriteOnce]
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulDelete 23m (x46 over 1h) statefulset delete Pod mongo-0 in StatefulSet mongo successful
Normal SuccessfulCreate 3m (x62 over 1h) statefulset create Pod mongo-0 in StatefulSet mongo successful
After some more digging, there seems to have been a bug affecting statefulsets that creates multiple controller revisions for the same statefulset:
https://github.com/kubernetes/kubernetes/issues/56355
This issue seems to have been fixed; the fix was backported to Kubernetes 1.8 and included in 1.9, but minikube does not yet ship a fixed version. A workaround, if your system enters this state, is to list the controller revisions like so:
$ kubectl get controllerrevisions --namespace=storage
NAME CONTROLLER REVISION AGE
mongo-68bd5cbcc6 StatefulSet/mongo 1 19h
mongo-68bd5cbcc7 StatefulSet/mongo 1 7d
and delete the duplicate controller revisions for each statefulset:
$ kubectl delete controllerrevisions mongo-68bd5cbcc6 --namespace=storage
Or simply use Kubernetes version 1.9 or above, which includes the fix.
I have a Kubernetes setup installed on my Ubuntu machine. I'm trying to set up an NFS volume and mount it into a container according to this document: http://kubernetes.io/v1.1/examples/nfs/.
NFS service and pod configurations:
kind: Service
apiVersion: v1
metadata:
name: nfs-server
spec:
ports:
- port: 2049
selector:
role: nfs-server
---
apiVersion: v1
kind: Pod
metadata:
name: nfs-server
labels:
role: nfs-server
spec:
containers:
- name: nfs-server
image: jsafrane/nfs-data
ports:
- name: nfs
containerPort: 2049
securityContext:
privileged: true
Pod configuration to mount the NFS volume:
apiVersion: v1
kind: Pod
metadata:
name: nfs-web
spec:
containers:
- name: web
image: nginx
ports:
- name: web
containerPort: 80
volumeMounts:
# name must match the volume name below
- name: nfs
mountPath: "/usr/share/nginx/html"
volumes:
- name: nfs
nfs:
# FIXME: use the right hostname
server: 192.168.3.201
path: "/"
When I run kubectl describe pod nfs-web I get the following output, which mentions that it was unable to mount the NFS volume. What could be the reason for that?
Name: nfs-web
Namespace: default
Image(s): nginx
Node: 192.168.1.114/192.168.1.114
Start Time: Sun, 06 Dec 2015 08:31:06 +0530
Labels: <none>
Status: Pending
Reason:
Message:
IP:
Replication Controllers: <none>
Containers:
web:
Container ID:
Image: nginx
Image ID:
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment Variables:
Conditions:
Type Status
Ready False
Volumes:
nfs:
Type: NFS (an NFS mount that lasts the lifetime of a pod)
Server: 192.168.3.201
Path: /
ReadOnly: false
default-token-nh698:
Type: Secret (a secret that should populate this volume)
SecretName: default-token-nh698
Events:
FirstSeen LastSeen Count From SubobjectPath Reason Message
───────── ──────── ───── ──── ───────────── ────── ───────
36s 36s 1 {scheduler } Scheduled Successfully assigned nfs-web to 192.168.1.114
36s 2s 5 {kubelet 192.168.1.114} FailedMount Unable to mount volumes for pod "nfs-web_default": exit status 32
36s 2s 5 {kubelet 192.168.1.114} FailedSync Error syncing pod, skipping: exit status 32
I had the same problem, and I solved it by installing nfs-common on every Kubernetes node.
apt-get install -y nfs-common
My nodes had been installed without nfs-common. Kubernetes asks each node to mount the NFS export into a specific directory so that it is available to the pod. As mount.nfs was not found, the mounting process failed.
Good luck!
It looks like volumes.nfs.server=192.168.3.201 is incorrectly configured on your client. It should be set to the ClusterIP address of your nfs-server Service.
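To find that address, something like the following should work (assuming the Service lives in the default namespace; the second form, which prints just the IP, needs a kubectl version with jsonpath output support):
$ kubectl get service nfs-server
$ kubectl get service nfs-server -o jsonpath='{.spec.clusterIP}'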
I had the same issue with an NFS server that only allowed root mounts.
Fixed by either:
a. allowing non-root users to mount NFS (on the server),
or
b. adding the following to the PersistentVolume:
mountOptions:
- nfsvers=4.1
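For context, a minimal sketch of where that option goes in a full PersistentVolume; the name, capacity, and access mode are placeholders, and the server and path mirror the question above (on older clusters the equivalent was the volume.beta.kubernetes.io/mount-options annotation rather than a first-class field):
apiVersion: v1
kind: PersistentVolume
metadata:
  name: nfs-pv
spec:
  capacity:
    storage: 5Gi
  accessModes:
    - ReadWriteMany
  mountOptions:
    - nfsvers=4.1
  nfs:
    server: 192.168.3.201
    path: "/"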
I fixed this issue by installing nfs-utils on the worker nodes.
In my case the issue was that I hadn't declared the host server of the NFS export in the /etc/exports file. After adding an entry there for my host server, the volume worked correctly.
If you modify the file in any way, you also need to restart the service:
sudo systemctl restart nfs-kernel-server
An example of an entry in the /etc/exports file:
/var/nfs/home 192.111.222.333(rw,sync,no_subtree_check)
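If you would rather not restart the whole service, re-exporting should also work (assuming the standard nfs-kernel-server tooling):
sudo exportfs -ra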
In my case, the issue was that the folder defined in the hostPath volume had not been created on the node. Once the folder was created on the worker node, the issue was resolved.
Warning FailedMount 3m18s kubelet Unable to attach or mount volumes: unmounted volumes=[temp-volume], unattached volumes=[nfsvol-vre-data temp1-volume consumer1-serviceaccount-token-sdfsdf nfsvol]: timed out waiting for the condition
Warning FailedMount 71s (x10 over 5m20s) kubelet MountVolume.SetUp failed for volume "temp-volume" : hostPath type check failed: /tmp/folder is not a directory
Warning FailedMount 63s kubelet Unable to attach or mount volumes: unmounted volumes=[temp-volume], unattached volumes=[nfsvol nfsvol-vre-data temp1-volume consumer1-serviceaccount-token-sdfsdf]: timed out waiting for the condition
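If you would rather have Kubernetes create the folder for you, an alternative is to set the hostPath type to DirectoryOrCreate; a sketch using the volume name and path from the events above:
volumes:
- name: temp-volume
  hostPath:
    path: /tmp/folder
    type: DirectoryOrCreate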
You need to execute the following on each master and worker node:
sudo yum install nfs-utils -y