I am using the readiness probe feature in my k8s pod. The probe checks whether the file /tmp/healthy is present; if it is not, the probe marks the pod NOT READY. I have observed that if I re-create /tmp/healthy, the pod does not come back to the READY state. Does this mean that in the pod lifecycle there is only a path from READY --> NOT READY, but not vice versa?
/home/ravi/for_others/ric/stub>kubectl describe pod -n myns deployment-eterm
Name: deployment-eterm
Namespace: myns
Priority: 0
Status: Running
IP: 192.168.252.87
IPs:
IP: 192.168.252.87
Controlled By: ReplicaSet/deployment-myns-eterm-micro-6b896556c8
Containers:
Liveness: exec [/bin/sh -c /tmp/liveliness-status-collector.sh] delay=60s timeout=1s period=2s #success=1 #failure=5
Readiness: exec [/bin/sh -c cat /tmp/healthy] delay=60s timeout=1s period=10s #success=1 #failure=1
Environment Variables from:
configmap-myns-eterm-env-micro ConfigMap Optional: false
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-57jpz (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m11s default-scheduler Successfully assigned myns/deployment-eterm to mnode
Normal Pulled 9m10s kubelet, mnode Container image "porics.microlab.com:5000/eterm:latest" already present on machine
Normal Created 9m10s kubelet, mnode Created container container-myns-eterm
Normal Started 9m10s kubelet, mnode Started container container-myns-eterm
Normal Pulled 9m10s kubelet, mnode Container image "porics.microlab.com:5000/config-proxy:latest" already present on machine
Normal Created 9m10s kubelet, mnode Created container container-myns-configproxy
Normal Started 9m10s kubelet, mnode Started container container-myns-configproxy
Warning Unhealthy 3s (x3 over 23s) kubelet, mnode Readiness probe failed: cat: /tmp/healthy: No such file or directory
/home/ravi/for_others/ric/stub>
The statement below is not true:
Does it mean that, in the pod lifecycle, there is only one path, READY --> NOT READY, but not vice versa?
You can run the experiment below to see the pod transition in both directions.
Create a pod with the following manifest file:
apiVersion: v1
kind: Pod
metadata:
  labels:
    test: readiness
  name: readiness-test
spec:
  nodeName: kube-master
  containers:
  - name: liveness
    image: bash
    command: ['bash','-c', 'while true;do echo "$(date): creating /tmp/healthy"; touch /tmp/healthy; sleep 10; echo "$(date): deleting /tmp/healthy";rm /tmp/healthy ;sleep 10;done']
    readinessProbe:
      exec:
        command:
        - cat
        - /tmp/healthy
      initialDelaySeconds: 5
      periodSeconds: 2
The above manifest will cause the pod to transition between ready and not-ready every 10 seconds.
kubectl get pod -w
NAME READY STATUS RESTARTS AGE
readiness-test 0/1 Running 0 79s
readiness-test 1/1 Running 0 83s
readiness-test 0/1 Running 0 97s
readiness-test 1/1 Running 0 103s
readiness-test 0/1 Running 0 117s
readiness-test 1/1 Running 0 2m3s
readiness-test 0/1 Running 0 2m17s
readiness-test 1/1 Running 0 2m23s
readiness-test 0/1 Running 0 2m37s
readiness-test 1/1 Running 0 2m43s
readiness-test 0/1 Running 0 2m57s
readiness-test 1/1 Running 0 3m3s
readiness-test 0/1 Running 0 3m17s
readiness-test 1/1 Running 0 3m23s
readiness-test 0/1 Running 0 3m37s
readiness-test 1/1 Running 0 3m43s
readiness-test 0/1 Running 0 3m57s
readiness-test 1/1 Running 0 4m3s
You can see the readiness status changing:
while true; do kubectl get pod readiness-test -o jsonpath='{.status.containerStatuses[*].ready}{"\n"}';sleep 2 ;done
true
true
true
true
true
true
false
false
true
true
true
true
true
true
false
false
true
true
true
true
true
true
false
false
true
true
true
true
true
false
false
false
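The same applies to the deployment in the question: readiness is re-evaluated every periodSeconds, so once the probe command succeeds again the pod returns to Ready on its own. A minimal sketch, assuming the container that owns the probe is container-myns-eterm (the pod and container names are taken from the describe output above; adjust as needed):
# Recreate the file the readiness probe checks for
kubectl exec -n myns deployment-eterm -c container-myns-eterm -- touch /tmp/healthy
# Within one probe period (10s here) the pod should go back to Ready
kubectl get pod -n myns deployment-eterm -w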
Related
I have a MetalLB load balancer, a k8s cluster (one master and one worker) v1.18.5, Helm 3.7, and NFS dynamic volume provisioning using Helm. I spin up a JupyterHub instance with Helm. Within a minute everything is set up, but when I use the external IP to open JupyterHub in my browser, nothing loads. Here is my kubectl get all:
pod/continuous-image-puller-4l5gj 1/1 Running 0 23s
pod/hub-6c9cb48df8-k5t4w 1/1 Running 0 23s
pod/nfs-subdir-external-provisioner-789697969b-hqp46 1/1 Running 0 23h
pod/nginx2-669c86457c-hc5mv 1/1 Running 0 35h
pod/proxy-66cb767659-svwbv 1/1 Running 0 23s
pod/user-scheduler-6d4698dd59-wqw9l 1/1 Running 0 23s
pod/user-scheduler-6d4698dd59-zk4c7 1/1 Running 0 23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.111.196.55 <none> 8081/TCP 23s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39h
service/nginx2 LoadBalancer 10.106.241.85 10.0.3.240 80:30746/TCP 32h
service/proxy-api ClusterIP 10.109.211.71 <none> 8001/TCP 23s
service/proxy-public LoadBalancer 10.111.233.85 10.0.3.241 80:31336/TCP 23s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/continuous-image-puller 1 1 1 1 1 <none> 23s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 1/1 1 1 23s
deployment.apps/nfs-subdir-external-provisioner 1/1 1 1 23h
deployment.apps/nginx2 1/1 1 1 35h
deployment.apps/proxy 1/1 1 1 23s
deployment.apps/user-scheduler 2/2 2 2 23s
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-6c9cb48df8 1 1 1 23s
replicaset.apps/nfs-subdir-external-provisioner-789697969b 1 1 1 23h
replicaset.apps/nginx2-669c86457c 1 1 1 35h
replicaset.apps/proxy-66cb767659 1 1 1 23s
replicaset.apps/user-scheduler-6d4698dd59 2 2 2 23s
NAME READY AGE
statefulset.apps/user-placeholder 0/0 23s
Also, below is my storage class for reference: kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client cluster.local/nfs-subdir-external-provisioner Delete Immediate true 23h
I will not paste the config file as it is very large; basically, what I did was:
helm show values jupyterhub/jupyterhub > /tmp/jupyterhub.yaml
(after changing some values)
helm install jupyterhub jupyterhub/jupyterhub --values /tmp/jupyterhub.yaml
The only things I changed were the security key (hex, as mentioned on the website), writing nfs-client wherever it said storageClass and storageClassName, and perhaps the storage size (1Gi/2Gi). That's all. The LoadBalancer works fine, because I ran nginx and I can easily open it in my browser. So I decided to check the JupyterHub pods, first getting the pod name using kubectl get pods:
NAME READY STATUS RESTARTS AGE
continuous-image-puller-4l5gj 1/1 Running 0 20m
hub-6c9cb48df8-k5t4w 1/1 Running 0 20m
nfs-subdir-external-provisioner-789697969b-hqp46 1/1 Running 0 23h
nginx2-669c86457c-hc5mv 1/1 Running 0 35h
proxy-66cb767659-svwbv 1/1 Running 0 20m
user-scheduler-6d4698dd59-wqw9l 1/1 Running 0 20m
user-scheduler-6d4698dd59-zk4c7 1/1 Running 0 20m
root@master:/home/ubuntu#
and then ran kubectl describe pod hub-6c9cb48df8-k5t4w -n default, which gave me this:
Name: hub-6c9cb48df8-k5t4w
Namespace: default
Priority: 0
Node: worker/10.0.0.126
Start Time: Sat, 27 Nov 2021 10:21:43 +0000
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=6c9cb48df8
release=jupyterhub
Annotations: checksum/config-map: f746d7e563a064e9158fe6f7f59bdbd463ed24ad7a927d75a1f18c022c3afeaf
checksum/secret: 926186a1b18e5cb9aa5b8c0a177f379299bcf0f05ac4de17d1958422054d15e5
cni.projectcalico.org/podIP: 192.168.171.97/32
cni.projectcalico.org/podIPs: 192.168.171.97/32
Status: Running
IP: 192.168.171.97
IPs:
IP: 192.168.171.97
Controlled By: ReplicaSet/hub-6c9cb48df8
Containers:
hub:
Container ID: docker://1d5e3a812f9712f6d59c09d855b034e2f6bc3e058bad4932db87145ec09f70d1
Image: jupyterhub/k8s-hub:1.2.0
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:e4770285aaf7230b930643986221757c2cc2e9420f5e21ac892582c96a57ce1c
Port: 8081/TCP
Host Port: 0/TCP
Args:
jupyterhub
--config
/usr/local/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
State: Running
Started: Sat, 27 Nov 2021 10:21:45 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
Readiness: http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jupyterhub
POD_NAMESPACE: default (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'> Optional: false
Mounts:
/srv/jupyterhub from pvc (rw)
/usr/local/etc/jupyterhub/config/ from config (rw)
/usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/usr/local/etc/jupyterhub/secret/ from secret (rw)
/usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-zd25x (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub
Optional: false
pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-zd25x:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-zd25x
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: hub.jupyter.org/dedicated=core:NoSchedule
hub.jupyter.org_dedicated=core:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21m default-scheduler Successfully assigned default/hub-6c9cb48df8-k5t4w to worker
Normal Pulled 21m kubelet, worker Container image "jupyterhub/k8s-hub:1.2.0" already present on machine
Normal Created 21m kubelet, worker Created container hub
Normal Started 21m kubelet, worker Started container hub
Warning Unhealthy 21m (x3 over 21m) kubelet, worker Readiness probe failed: Get http://192.168.171.97:8081/hub/health: dial tcp 192.168.171.97:8081: connect: connection refused
So I know that the pod is unhealthy. But I do not have any other details to debug this. Any help on how to fix or debug this would be highly appreciated.
Thank you!
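A few generic things worth checking in a situation like this (a debugging sketch using the pod and service names from the output above, not a definitive fix):
# Hub application logs; the readiness failures above happened while the hub was still starting
kubectl logs hub-6c9cb48df8-k5t4w -n default
# The proxy pod is what the proxy-public external IP routes to, so check it too
kubectl logs proxy-66cb767659-svwbv -n default
# Confirm the LoadBalancer service actually has endpoints behind it
kubectl get endpoints proxy-public -n default
kubectl describe svc proxy-public -n default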
I have an OpenShift cluster. I have created a custom application and am trying to deploy it using Helm charts. When I deploy it on OpenShift using 'oc new-app', the deployment works perfectly fine, but when I deploy it using a Helm chart, it does not work.
Following is the output of 'oc get all' ----
[root@worker2 ~]#
[root@worker2 ~]# oc get all
NAME READY STATUS RESTARTS AGE
pod/chart-acme-85648d4645-7msdl 1/1 Running 0 3d7h
pod/chart1-acme-f8b65b78d-k2fb6 1/1 Running 0 3d7h
pod/netshoot 1/1 Running 0 3d10h
pod/sample1-buildachart-5b5d9d8649-qqmsf 0/1 CrashLoopBackOff 672 2d9h
pod/sample2-686bb7f969-fx5bk 0/1 CrashLoopBackOff 674 2d9h
pod/vjobs-npm-96b65fcb-b2p27 0/1 CrashLoopBackOff 817 47h
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/chart-acme LoadBalancer 172.30.174.208 <pending> 80:30222/TCP 3d7h
service/chart1-acme LoadBalancer 172.30.153.36 <pending> 80:30383/TCP 3d7h
service/sample1-buildachart NodePort 172.30.29.124 <none> 80:32375/TCP 2d9h
service/sample2 NodePort 172.30.19.24 <none> 80:32647/TCP 2d9h
service/vjobs-npm NodePort 172.30.205.30 <none> 80:30476/TCP 47h
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/chart-acme 1/1 1 1 3d7h
deployment.apps/chart1-acme 1/1 1 1 3d7h
deployment.apps/sample1-buildachart 0/1 1 0 2d9h
deployment.apps/sample2 0/1 1 0 2d9h
deployment.apps/vjobs-npm 0/1 1 0 47h
NAME DESIRED CURRENT READY AGE
replicaset.apps/chart-acme-85648d4645 1 1 1 3d7h
replicaset.apps/chart1-acme-f8b65b78d 1 1 1 3d7h
replicaset.apps/sample1-buildachart-5b5d9d8649 1 1 0 2d9h
replicaset.apps/sample2-686bb7f969 1 1 0 2d9h
replicaset.apps/vjobs-npm-96b65fcb 1 1 0 47h
[root@worker2 ~]#
[root@worker2 ~]#
As per the output above, you can see that the 'vjobs-npm' deployment's pod is in 'CrashLoopBackOff'.
Below is the output of 'oc describe pod' ----
[root@worker2 ~]#
[root@worker2 ~]# oc describe pod vjobs-npm-96b65fcb-b2p27
Name: vjobs-npm-96b65fcb-b2p27
Namespace: vjobs-testing
Priority: 0
Node: worker0/192.168.100.109
Start Time: Mon, 31 Aug 2020 09:30:28 -0400
Labels: app.kubernetes.io/instance=vjobs-npm
app.kubernetes.io/name=vjobs-npm
pod-template-hash=96b65fcb
Annotations: openshift.io/scc: restricted
Status: Running
IP: 10.131.0.107
IPs:
IP: 10.131.0.107
Controlled By: ReplicaSet/vjobs-npm-96b65fcb
Containers:
vjobs-npm:
Container ID: cri-o://c232849eb25bd96ae9343ac3ed1539d492985dd8cdf47a5a4df7d3cf776c4cf3
Image: quay.io/aditya7002/vjobs_local_build_new:latest
Image ID: quay.io/aditya7002/vjobs_local_build_new@sha256:87f18e3a24fc7043a43a143e96b0b069db418ace027d95a5427cf53de56feb4c
Port: 80/TCP
Host Port: 0/TCP
State: Running
Started: Mon, 31 Aug 2020 09:31:23 -0400
Last State: Terminated
Reason: Error
Exit Code: 137
Started: Mon, 31 Aug 2020 09:30:31 -0400
Finished: Mon, 31 Aug 2020 09:31:22 -0400
Ready: False
Restart Count: 1
Liveness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:http/ delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from vjobs-npm-token-vw6f7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
vjobs-npm-token-vw6f7:
Type: Secret (a volume populated by a Secret)
SecretName: vjobs-npm-token-vw6f7
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 68s default-scheduler Successfully assigned vjobs-testing/vjobs-npm-96b65fcb-b2p27 to worker0
Normal Killing 44s kubelet, worker0 Container vjobs-npm failed liveness probe, will be restarted
Normal Pulling 14s (x2 over 66s) kubelet, worker0 Pulling image "quay.io/aditya7002/vjobs_local_build_new:latest"
Normal Pulled 13s (x2 over 65s) kubelet, worker0 Successfully pulled image "quay.io/aditya7002/vjobs_local_build_new:latest"
Normal Created 13s (x2 over 65s) kubelet, worker0 Created container vjobs-npm
Normal Started 13s (x2 over 65s) kubelet, worker0 Started container vjobs-npm
Warning Unhealthy 4s (x4 over 64s) kubelet, worker0 Liveness probe failed: Get http://10.131.0.107:80/: dial tcp 10.131.0.107:80: connect: connection refused
Warning Unhealthy 1s (x7 over 61s) kubelet, worker0 Readiness probe failed: Get http://10.131.0.107:80/: dial tcp 10.131.0.107:80: connect: connection refused
[root@worker2 ~]#
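From the events above, the container is killed roughly 50 seconds after starting because the liveness probe (delay=0s, period=10s, #failure=3) gets connection refused on port 80. Two common causes are that the application does not actually listen on port 80 inside the container, or that it needs more startup time than the probe allows. A hedged sketch of relaxing the probes in the chart's deployment template (the exact file, port, and timings are assumptions to adapt to your application):
# templates/deployment.yaml (fragment): give the app time to start before probing,
# and make sure containerPort matches the port the app really listens on
        ports:
        - name: http
          containerPort: 80        # must match the application's listen port
          protocol: TCP
        livenessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 60  # was 0s in the failing pod
          periodSeconds: 10
          failureThreshold: 3
        readinessProbe:
          httpGet:
            path: /
            port: http
          initialDelaySeconds: 15
          periodSeconds: 10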
I tried to install rook-ceph on Kubernetes following this guide:
https://rook.io/docs/rook/v1.3/ceph-quickstart.html
git clone --single-branch --branch release-1.3 https://github.com/rook/rook.git
cd rook/cluster/examples/kubernetes/ceph
kubectl create -f common.yaml
kubectl create -f operator.yaml
kubectl create -f cluster.yaml
When I check all the pods
$ kubectl -n rook-ceph get pod
NAME READY STATUS RESTARTS AGE
csi-cephfsplugin-9c2z9 3/3 Running 0 23m
csi-cephfsplugin-provisioner-7678bcfc46-s67hq 5/5 Running 0 23m
csi-cephfsplugin-provisioner-7678bcfc46-sfljd 5/5 Running 0 23m
csi-cephfsplugin-smmlf 3/3 Running 0 23m
csi-rbdplugin-provisioner-fbd45b7c8-dnwsq 6/6 Running 0 23m
csi-rbdplugin-provisioner-fbd45b7c8-rp85z 6/6 Running 0 23m
csi-rbdplugin-s67lw 3/3 Running 0 23m
csi-rbdplugin-zq4k5 3/3 Running 0 23m
rook-ceph-mon-a-canary-954dc5cd9-5q8tk 1/1 Running 0 2m9s
rook-ceph-mon-b-canary-b9d6f5594-mcqwc 1/1 Running 0 2m9s
rook-ceph-mon-c-canary-78b48dbfb7-z2t7d 0/1 Pending 0 2m8s
rook-ceph-operator-757d6db48d-x27lm 1/1 Running 0 25m
rook-ceph-tools-75f575489-znbbz 1/1 Running 0 7m45s
rook-discover-gq489 1/1 Running 0 24m
rook-discover-p9zlg 1/1 Running 0 24m
$ kubectl -n rook-ceph get pod -l app=rook-ceph-osd-prepare
No resources found in rook-ceph namespace.
Then I did some other operations:
$ kubectl taint nodes $(hostname) node-role.kubernetes.io/master:NoSchedule-
$ kubectl -n rook-ceph-system delete pods rook-ceph-operator-757d6db48d-x27lm
Create file system
$ kubectl create -f filesystem.yaml
Check again
$ kubectl get pods -n rook-ceph -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-9c2z9 3/3 Running 0 135m 192.168.0.53 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-s67hq 5/5 Running 0 135m 10.1.2.6 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-sfljd 5/5 Running 0 135m 10.1.2.5 kube3 <none> <none>
csi-cephfsplugin-smmlf 3/3 Running 0 135m 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-dnwsq 6/6 Running 0 135m 10.1.1.6 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-rp85z 6/6 Running 0 135m 10.1.1.5 kube2 <none> <none>
csi-rbdplugin-s67lw 3/3 Running 0 135m 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-zq4k5 3/3 Running 0 135m 192.168.0.53 kube3 <none> <none>
rook-ceph-crashcollector-kube2-6d95bb9c-r5w7p 0/1 Init:0/2 0 110m <none> kube2 <none> <none>
rook-ceph-crashcollector-kube3-644c849bdb-9hcvg 0/1 Init:0/2 0 110m <none> kube3 <none> <none>
rook-ceph-mon-a-canary-954dc5cd9-6ccbh 1/1 Running 0 75s 10.1.2.130 kube3 <none> <none>
rook-ceph-mon-b-canary-b9d6f5594-k85w5 1/1 Running 0 74s 10.1.1.74 kube2 <none> <none>
rook-ceph-mon-c-canary-78b48dbfb7-kfzzx 0/1 Pending 0 73s <none> <none> <none> <none>
rook-ceph-operator-757d6db48d-nlh84 1/1 Running 0 110m 10.1.2.28 kube3 <none> <none>
rook-ceph-tools-75f575489-znbbz 1/1 Running 0 119m 10.1.1.14 kube2 <none> <none>
rook-discover-gq489 1/1 Running 0 135m 10.1.1.3 kube2 <none> <none>
rook-discover-p9zlg 1/1 Running 0 135m 10.1.2.4 kube3 <none> <none>
I can't see any rook-ceph-osd-* pod.
And the rook-ceph-mon-c-canary-78b48dbfb7-kfzzx pod is always Pending.
If I install the toolbox as described at
https://rook.io/docs/rook/v1.3/ceph-toolbox.html
$ kubectl create -f toolbox.yaml
$ kubectl -n rook-ceph exec -it $(kubectl -n rook-ceph get pod -l "app=rook-ceph-tools" -o jsonpath='{.items[0].metadata.name}') bash
Inside the container, check the ceph status
[root@rook-ceph-tools-75f575489-znbbz /]# ceph -s
unable to get monitor info from DNS SRV with service name: ceph-mon
[errno 2] error connecting to the cluster
This is running on Ubuntu 16.04.6.
I deployed again:
$ kubectl -n rook-ceph get pod -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
csi-cephfsplugin-4tww8 3/3 Running 0 3m38s 192.168.0.52 kube2 <none> <none>
csi-cephfsplugin-dbbfb 3/3 Running 0 3m38s 192.168.0.53 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-8kt96 5/5 Running 0 3m37s 10.1.2.6 kube3 <none> <none>
csi-cephfsplugin-provisioner-7678bcfc46-kq6vv 5/5 Running 0 3m38s 10.1.1.6 kube2 <none> <none>
csi-rbdplugin-4qrqn 3/3 Running 0 3m39s 192.168.0.53 kube3 <none> <none>
csi-rbdplugin-dqx9z 3/3 Running 0 3m39s 192.168.0.52 kube2 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-7f57t 6/6 Running 0 3m39s 10.1.2.5 kube3 <none> <none>
csi-rbdplugin-provisioner-fbd45b7c8-9zwhb 6/6 Running 0 3m39s 10.1.1.5 kube2 <none> <none>
rook-ceph-mon-a-canary-954dc5cd9-rgqpg 1/1 Running 0 2m40s 10.1.1.7 kube2 <none> <none>
rook-ceph-mon-b-canary-b9d6f5594-n2pwc 1/1 Running 0 2m35s 10.1.2.8 kube3 <none> <none>
rook-ceph-mon-c-canary-78b48dbfb7-fv46f 0/1 Pending 0 2m30s <none> <none> <none> <none>
rook-ceph-operator-757d6db48d-2m25g 1/1 Running 0 6m27s 10.1.2.3 kube3 <none> <none>
rook-discover-lpsht 1/1 Running 0 5m15s 10.1.1.3 kube2 <none> <none>
rook-discover-v4l77 1/1 Running 0 5m15s 10.1.2.4 kube3 <none> <none>
Describe pending pod
$ kubectl describe pod rook-ceph-mon-c-canary-78b48dbfb7-fv46f -n rook-ceph
Name: rook-ceph-mon-c-canary-78b48dbfb7-fv46f
Namespace: rook-ceph
Priority: 0
Node: <none>
Labels: app=rook-ceph-mon
ceph_daemon_id=c
mon=c
mon_canary=true
mon_cluster=rook-ceph
pod-template-hash=78b48dbfb7
rook_cluster=rook-ceph
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/rook-ceph-mon-c-canary-78b48dbfb7
Containers:
mon:
Image: rook/ceph:v1.3.4
Port: 6789/TCP
Host Port: 0/TCP
Command:
/tini
Args:
--
sleep
3600
Environment:
CONTAINER_IMAGE: ceph/ceph:v14.2.9
POD_NAME: rook-ceph-mon-c-canary-78b48dbfb7-fv46f (v1:metadata.name)
POD_NAMESPACE: rook-ceph (v1:metadata.namespace)
NODE_NAME: (v1:spec.nodeName)
POD_MEMORY_LIMIT: node allocatable (limits.memory)
POD_MEMORY_REQUEST: 0 (requests.memory)
POD_CPU_LIMIT: node allocatable (limits.cpu)
POD_CPU_REQUEST: 0 (requests.cpu)
ROOK_CEPH_MON_HOST: <set to the key 'mon_host' in secret 'rook-ceph-config'> Optional: false
ROOK_CEPH_MON_INITIAL_MEMBERS: <set to the key 'mon_initial_members' in secret 'rook-ceph-config'> Optional: false
ROOK_POD_IP: (v1:status.podIP)
Mounts:
/etc/ceph from rook-config-override (ro)
/etc/ceph/keyring-store/ from rook-ceph-mons-keyring (ro)
/var/lib/ceph/crash from rook-ceph-crash (rw)
/var/lib/ceph/mon/ceph-c from ceph-daemon-data (rw)
/var/log/ceph from rook-ceph-log (rw)
/var/run/secrets/kubernetes.io/serviceaccount from default-token-65xtn (ro)
Conditions:
Type Status
PodScheduled False
Volumes:
rook-config-override:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: rook-config-override
Optional: false
rook-ceph-mons-keyring:
Type: Secret (a volume populated by a Secret)
SecretName: rook-ceph-mons-keyring
Optional: false
rook-ceph-log:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/rook-ceph/log
HostPathType:
rook-ceph-crash:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/rook-ceph/crash
HostPathType:
ceph-daemon-data:
Type: HostPath (bare host directory volume)
Path: /var/lib/rook/mon-c/data
HostPathType:
default-token-65xtn:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-65xtn
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning FailedScheduling 22s (x3 over 84s) default-scheduler 0/3 nodes are available: 1 node(s) had taint {node-role.kubernetes.io/master: }, that the pod didn't tolerate, 2 node(s) didn't match pod affinity/anti-affinity, 2 node(s) didn't satisfy existing pods anti-affinity rules.
Test mount
Create an nginx.yaml file:
apiVersion: v1
kind: Pod
metadata:
  name: nginx
spec:
  containers:
  - name: nginx
    image: nginx:1.7.9
    ports:
    - containerPort: 80
    volumeMounts:
    - name: www
      mountPath: /usr/share/nginx/html
  volumes:
  - name: www
    flexVolume:
      driver: ceph.rook.io/rook
      fsType: ceph
      options:
        fsName: myfs
        clusterNamespace: rook-ceph
Deploy it and describe the pod details:
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 9m28s default-scheduler Successfully assigned default/nginx to kube2
Warning FailedMount 9m28s kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www default-token-fnb28], unattached volumes=[www default-token-fnb28]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
Warning FailedMount 6m14s (x2 over 6m38s) kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www], unattached volumes=[default-token-fnb28 www]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
Warning FailedMount 4m6s (x23 over 9m13s) kubelet, kube2 Unable to attach or mount volumes: unmounted volumes=[www], unattached volumes=[www default-token-fnb28]: failed to get Plugin from volumeSpec for volume "www" err=no volume plugin matched
rook-ceph-mon-x pods have the following affinity:
spec:
  affinity:
    podAntiAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
      - labelSelector:
          matchLabels:
            app: rook-ceph-mon
        topologyKey: kubernetes.io/hostname
which doesn't allow running two rook-ceph-mon pods on the same node.
Since you seem to have 3 nodes (1 master and 2 workers), 2 mon pods get created, one on kube2 and one on kube3. kube1 is the master node, tainted as unschedulable, so rook-ceph-mon-c cannot be scheduled there.
To solve it you can:
add one more worker node
remove the NoSchedule taint with kubectl taint nodes kube1 node-role.kubernetes.io/master:NoSchedule-
change the mon count to a lower value (see the sketch below)
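A minimal sketch of the last option, editing cluster.yaml before applying it (this assumes the Rook v1.3 CephCluster fields mon.count and mon.allowMultiplePerNode, as used in the example manifests; one mon is acceptable for a test cluster but not for production):
apiVersion: ceph.rook.io/v1
kind: CephCluster
metadata:
  name: rook-ceph
  namespace: rook-ceph
spec:
  # ... the rest of cluster.yaml stays unchanged ...
  mon:
    count: 1                     # fits on the two schedulable worker nodes
    allowMultiplePerNode: false  # or set true to co-locate mons on one node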
NAME READY STATUS RESTARTS AGE
coredns-5644d7b6d9-289qz 1/1 Running 0 76m
coredns-5644d7b6d9-ssbb2 1/1 Running 0 76m
etcd-k8s-master 1/1 Running 0 75m
kube-apiserver-k8s-master 1/1 Running 0 75m
kube-controller-manager-k8s-master 1/1 Running 0 75m
kube-proxy-2q9k5 1/1 Running 0 71m
kube-proxy-dz9pk 1/1 Running 0 76m
kube-scheduler-k8s-master 1/1 Running 0 75m
tiller-deploy-7b875fbf86-8nxmk 0/1 ContainerCreating 0 17m
weave-net-nzb67 2/2 Running 0 75m
weave-net-t8kmk 2/2 Running 0 71m
I installed Kubernetes v1.16.2, but when installing Tiller using a new service account it gets stuck at ContainerCreating. I have tried all the usual solutions, such as RBAC, removing the Tiller role and redoing it, reinstalling Kubernetes, etc.
The output of kubectl describe is as follows.
[02:32:50] root@k8s-master$ kubectl describe pods tiller-deploy-7b875fbf86-8nxmk --namespace kube-system
Name: tiller-deploy-7b875fbf86-8nxmk
Namespace: kube-system
Priority: 0
Node: worker-node1/172.17.0.1
Start Time: Thu, 24 Oct 2019 14:12:45 -0400
Labels: app=helm
name=tiller
pod-template-hash=7b875fbf86
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/tiller-deploy-7b875fbf86
Containers:
tiller:
Container ID:
Image: gcr.io/kubernetes-helm/tiller:v2.15.1
Image ID:
Ports: 44134/TCP, 44135/TCP
Host Ports: 0/TCP, 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Liveness: http-get http://:44135/liveness delay=1s timeout=1s period=10s #success=1 #failure=3
Readiness: http-get http://:44135/readiness delay=1s timeout=1s period=10s #success=1 #failure=3
Environment:
TILLER_NAMESPACE: kube-system
TILLER_HISTORY_MAX: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from tiller-token-rr2jg (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
tiller-token-rr2jg:
Type: Secret (a volume populated by a Secret)
SecretName: tiller-token-rr2jg
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled <unknown> default-scheduler Successfully assigned kube-system/tiller-deploy-7b875fbf86-8nxmk to worker-node1
Warning FailedCreatePodSandBox 61s (x5 over 17m) kubelet, worker-node1 Failed create pod sandbox: rpc error: code = DeadlineExceeded desc = context deadline exceeded
Normal SandboxChanged 60s (x5 over 17m) kubelet, worker-node1 Pod sandbox changed, it will be killed and re-created.
[~]
FailedCreatePodSandBox
means that worker-node1 (172.17.0.1) does not have CNI installed or configured, and this is a frequently asked question. Whatever mechanism you used to install Kubernetes did not do a robust job, or else you missed a step along the way.
I also have pretty high confidence that the kubelet logs on worker-node1 are filled with error messages, if you actually look at them.
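A quick way to confirm this on worker-node1 (generic checks, not tied to any particular installer):
# kubelet logs usually name the CNI problem explicitly
journalctl -u kubelet --no-pager | tail -n 50
# a configured CNI leaves its config here; an empty directory confirms the diagnosis
ls /etc/cni/net.d/
# the CNI plugin binaries normally live here
ls /opt/cni/bin/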
So I have this unhealthy cluster partially working in the datacenter. This is probably the 10th time I have rebuilt it from the instructions at https://kubernetes.io/docs/setup/independent/high-availability/
I can apply some pods to this cluster and it seems to work, but eventually it starts slowing down and crashing, as you can see below. Here is the scheduler manifest:
apiVersion: v1
kind: Pod
metadata:
  creationTimestamp: null
  labels:
    component: kube-scheduler
    tier: control-plane
  name: kube-scheduler
  namespace: kube-system
spec:
  containers:
  - command:
    - kube-scheduler
    - --bind-address=127.0.0.1
    - --kubeconfig=/etc/kubernetes/scheduler.conf
    - --authentication-kubeconfig=/etc/kubernetes/scheduler.conf
    - --authorization-kubeconfig=/etc/kubernetes/scheduler.conf
    - --leader-elect=true
    image: k8s.gcr.io/kube-scheduler:v1.14.2
    imagePullPolicy: IfNotPresent
    livenessProbe:
      failureThreshold: 8
      httpGet:
        host: 127.0.0.1
        path: /healthz
        port: 10251
        scheme: HTTP
      initialDelaySeconds: 15
      timeoutSeconds: 15
    name: kube-scheduler
    resources:
      requests:
        cpu: 100m
    volumeMounts:
    - mountPath: /etc/kubernetes/scheduler.conf
      name: kubeconfig
      readOnly: true
  hostNetwork: true
  priorityClassName: system-cluster-critical
  volumes:
  - hostPath:
      path: /etc/kubernetes/scheduler.conf
      type: FileOrCreate
    name: kubeconfig
status: {}
$ kubectl -n kube-system get pods
NAME READY STATUS RESTARTS AGE
coredns-fb8b8dccf-42psn 1/1 Running 9 88m
coredns-fb8b8dccf-x9mlt 1/1 Running 11 88m
docker-registry-dqvzb 1/1 Running 1 2d6h
kube-apiserver-kube-apiserver-1 1/1 Running 44 2d8h
kube-apiserver-kube-apiserver-2 1/1 Running 34 2d7h
kube-controller-manager-kube-apiserver-1 1/1 Running 198 2d2h
kube-controller-manager-kube-apiserver-2 0/1 CrashLoopBackOff 170 2d7h
kube-flannel-ds-amd64-4mbfk 1/1 Running 1 2d7h
kube-flannel-ds-amd64-55hc7 1/1 Running 1 2d8h
kube-flannel-ds-amd64-fvwmf 1/1 Running 1 2d7h
kube-flannel-ds-amd64-ht5wm 1/1 Running 3 2d7h
kube-flannel-ds-amd64-rjt9l 1/1 Running 4 2d8h
kube-flannel-ds-amd64-wpmkj 1/1 Running 1 2d7h
kube-proxy-2n64d 1/1 Running 3 2d7h
kube-proxy-2pq2g 1/1 Running 1 2d7h
kube-proxy-5fbms 1/1 Running 2 2d8h
kube-proxy-g8gmn 1/1 Running 1 2d7h
kube-proxy-wrdrj 1/1 Running 1 2d8h
kube-proxy-wz6gv 1/1 Running 1 2d7h
kube-scheduler-kube-apiserver-1 0/1 CrashLoopBackOff 198 2d2h
kube-scheduler-kube-apiserver-2 1/1 Running 5 18m
nginx-ingress-controller-dz8fm 1/1 Running 3 2d4h
nginx-ingress-controller-sdsgg 1/1 Running 3 2d4h
nginx-ingress-controller-sfrgb 1/1 Running 1 2d4h
$ kubectl -n kube-system describe pod kube-scheduler-kube-apiserver-1
Containers:
kube-scheduler:
Container ID: docker://c04f3c9061cafef8749b2018cd66e6865d102f67c4d13bdd250d0b4656d5f220
Image: k8s.gcr.io/kube-scheduler:v1.14.2
Image ID: docker-pullable://k8s.gcr.io/kube-scheduler@sha256:052e0322b8a2b22819ab0385089f202555c4099493d1bd33205a34753494d2c2
Port: <none>
Host Port: <none>
Command:
kube-scheduler
--bind-address=127.0.0.1
--kubeconfig=/etc/kubernetes/scheduler.conf
--authentication-kubeconfig=/etc/kubernetes/scheduler.conf
--authorization-kubeconfig=/etc/kubernetes/scheduler.conf
--leader-elect=true
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 1
Started: Tue, 28 May 2019 23:16:50 -0400
Finished: Tue, 28 May 2019 23:19:56 -0400
Ready: False
Restart Count: 195
Requests:
cpu: 100m
Liveness: http-get http://127.0.0.1:10251/healthz delay=15s timeout=15s period=10s #success=1 #failure=8
Environment: <none>
Mounts:
/etc/kubernetes/scheduler.conf from kubeconfig (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubeconfig:
Type: HostPath (bare host directory volume)
Path: /etc/kubernetes/scheduler.conf
HostPathType: FileOrCreate
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: :NoExecute
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Created 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Created container kube-scheduler
Normal Started 4h56m (x104 over 37h) kubelet, kube-apiserver-1 Started container kube-scheduler
Warning Unhealthy 137m (x71 over 34h) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
Normal Pulled 132m (x129 over 37h) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine
Warning BackOff 128m (x1129 over 34h) kubelet, kube-apiserver-1 Back-off restarting failed container
Normal SandboxChanged 80m kubelet, kube-apiserver-1 Pod sandbox changed, it will be killed and re-created.
Warning Failed 76m kubelet, kube-apiserver-1 Error: context deadline exceeded
Normal Pulled 36m (x7 over 78m) kubelet, kube-apiserver-1 Container image "k8s.gcr.io/kube-scheduler:v1.14.2" already present on machine
Normal Started 36m (x6 over 74m) kubelet, kube-apiserver-1 Started container kube-scheduler
Normal Created 32m (x7 over 74m) kubelet, kube-apiserver-1 Created container kube-scheduler
Warning Unhealthy 20m (x9 over 40m) kubelet, kube-apiserver-1 Liveness probe failed: Get http://127.0.0.1:10251/healthz: dial tcp 127.0.0.1:10251: connect: connection refused
Warning BackOff 2m56s (x85 over 69m) kubelet, kube-apiserver-1 Back-off restarting failed container
I feel like I am overlooking a simple option or configuration, but I can't find it, and after days of dealing with this problem and reading documentation I am at my wits' end.
The load balancer is a TCP load balancer and seems to be working as expected as I can query the cluster from my desktop.
Any suggestions or troubleshooting tips are definitely welcome at this time.
Thank you.
The problem with our configuration was that a well-intentioned technician decided to eliminate one of the rules on the Kubernetes master firewall, which prevented the master from looping back to the ports it needed to probe. This caused all kinds of weird issues and misdiagnosed problems, which was definitely the wrong direction. After we allowed all ports on the servers, Kubernetes was back to its normal behavior.
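If you hit similar symptoms, two quick checks on the master itself can confirm whether the probe ports are reachable over loopback (generic commands, not tied to any particular firewall tool):
# the liveness probe target from the scheduler manifest above
curl -v http://127.0.0.1:10251/healthz
# look for REJECT or DROP rules that could affect loopback or control-plane ports
iptables -L -n | grep -E 'REJECT|DROP'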