I am a Kubernetes newbie and I am running out of ideas for solving pods being stuck in the ContainerCreating status. I am working on a sample application from AWS (https://docs.aws.amazon.com/eks/latest/userguide/getting-started.html#eks-guestbook); the sample is very similar to the official guestbook sample (https://kubernetes.io/docs/tutorials/stateless-application/guestbook/).
Many thanks to anyone who can give guidance on finding the root cause:
Why do I get the connection refused error, and what does port 50051 do? Thanks.
$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
default guestbook-8k9pp 0/1 ContainerCreating 0 15h
default guestbook-b2n49 0/1 ContainerCreating 0 15h
default guestbook-gtjnj 0/1 ContainerCreating 0 15h
default redis-master-rhwnt 0/1 ContainerCreating 0 15h
default redis-slave-b284x 0/1 ContainerCreating 0 15h
default redis-slave-vnlj4 0/1 ContainerCreating 0 15h
kube-system aws-node-jkfg8 0/1 CrashLoopBackOff 273 1d
kube-system aws-node-lpvn9 0/1 CrashLoopBackOff 273 1d
kube-system aws-node-nmwzn 0/1 Error 274 1d
kube-system kube-dns-64b69465b4-ftlm6 0/3 ContainerCreating 0 4d
kube-system kube-proxy-cxdj7 1/1 Running 0 1d
kube-system kube-proxy-g2js4 1/1 Running 0 1d
kube-system kube-proxy-rhq6v 1/1 Running 0 1d
$ kubectl describe pod guestbook-8k9pp
Name: guestbook-8k9pp
Namespace: default
Node: ip-172-31-91-242.ec2.internal/172.31.91.242
Start Time: Wed, 31 Oct 2018 04:59:11 -0800
Labels: app=guestbook
Annotations: <none>
Status: Pending
IP:
Controlled By: ReplicationController/guestbook
Containers:
guestbook:
Container ID:
Image: k8s.gcr.io/guestbook:v3
Image ID:
Port: 3000/TCP
Host Port: 0/TCP
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-jb75l (ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
default-token-jb75l:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-jb75l
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 11m (x19561 over 13h) kubelet, ip-172-31-91-242.ec2.internal Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 74s (x19368 over 13h) kubelet, ip-172-31-91-242.ec2.internal Failed create pod sandbox: rpc error: code = Unknown desc = NetworkPlugin cni failed to set up pod "guestbook-8k9pp_default" network: rpc error: code = Unavailable desc = all SubConns are in TransientFailure, latest connection error: connection error: desc = "transport: Error while dialing dial tcp 127.0.0.1:50051: connect: connection refused"
The Kubernetes cluster that I created is on AWS EKS; I created it manually through the EKS console.
I have created a second cluster with the official VPC sample for EKS clusters (https://amazon-eks.s3-us-west-2.amazonaws.com/cloudformation/2018-08-30/amazon-eks-vpc-sample.yaml), and it seems to be working now.
So the problem should be in the VPC configuration. Once I figure out what actually went wrong, I will post the info here. Thank you.
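For anyone wondering about port 50051: it is the local gRPC endpoint of the AWS VPC CNI plugin's IPAM daemon (ipamd), which runs inside the aws-node pods, and the CNI binary dials 127.0.0.1:50051 whenever a pod sandbox needs an IP. With aws-node in CrashLoopBackOff that daemon is down, hence the connection refused. A minimal debugging sketch (the pod name is taken from the listing above; substitute your own):
$ kubectl -n kube-system logs aws-node-jkfg8              # current ipamd output, if any
$ kubectl -n kube-system logs aws-node-jkfg8 --previous   # output of the last crashed container
$ kubectl -n kube-system describe pod aws-node-jkfg8      # the Events section shows why it keeps restarting
$ kubectl -n kube-system describe daemonset aws-node      # verify the CNI image and environment settings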
I had a similar problem. Same error message, but much simpler set of Pods.
Running kubectl get pods --all-namespaces revealed that one particular node had pods in CrashLoopBackOff.
I scaled my nodes in and then out again (effectively re-creating that node), and the problem seems to have gone away.
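A minimal sketch of how to spot the broken node before recycling it (the node name is a placeholder):
$ kubectl get pods --all-namespaces -o wide                                 # the NODE column shows where each failing pod runs
$ kubectl get pods --all-namespaces --field-selector spec.nodeName=<node>   # everything scheduled on the suspect node
$ kubectl describe node <node>                                              # check Conditions and Events before scaling it away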
Related
This should be a simple task: I simply want to run the Kubernetes Dashboard on a clean install of Kubernetes on a Raspberry Pi cluster.
What I've done:
Set up the initial cluster (hostname, static IP, cgroups, swap space, install and configure Docker, install Kubernetes, set up the Kubernetes network, and join the nodes)
I have flannel installed
I have applied the dashboard
Bunch of random testing trying to figure this out
Obviously, as seen below, the container in the dashboard pod is not working because it cannot access kubernetes-dashboard-csrf. I have no idea why it cannot be accessed; my only thought is that I missed a step when setting up the cluster. I've followed about six different guides without success, prioritizing the official guide. I have also seen quite a few people with the same or similar issues, but most have not posted a resolution. Thanks!
Nodes: kubectl get nodes
NAME STATUS ROLES AGE VERSION
gus3 Ready <none> 346d v1.23.1
juliet3 Ready <none> 346d v1.23.1
shawn4 Ready <none> 346d v1.23.1
vick4 Ready control-plane,master 346d v1.23.1
All Pods: kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
kube-system coredns-74ff55c5b-7j2xg 1/1 Running 27 346d
kube-system coredns-74ff55c5b-cb2x8 1/1 Running 27 346d
kube-system etcd-vick4 1/1 Running 2 169m
kube-system kube-apiserver-vick4 1/1 Running 2 169m
kube-system kube-controller-manager-vick4 1/1 Running 2 169m
kube-system kube-flannel-ds-gclmp 1/1 Running 0 11m
kube-system kube-flannel-ds-hshjv 1/1 Running 0 12m
kube-system kube-flannel-ds-kdd4w 1/1 Running 0 11m
kube-system kube-flannel-ds-wzhkt 1/1 Running 0 10m
kube-system kube-proxy-4t25v 1/1 Running 26 346d
kube-system kube-proxy-b6vbx 1/1 Running 26 346d
kube-system kube-proxy-jgj4s 1/1 Running 27 346d
kube-system kube-proxy-n65sl 1/1 Running 26 346d
kube-system kube-scheduler-vick4 1/1 Running 2 169m
kubernetes-dashboard dashboard-metrics-scraper-5b8896d7fc-99wfk 1/1 Running 0 77m
kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p 0/1 CrashLoopBackOff 18 77m
Resources: kubectl get all -n kubernetes-dashboard
NAME READY STATUS RESTARTS AGE
pod/dashboard-metrics-scraper-5b8896d7fc-99wfk 1/1 Running 0 79m
pod/kubernetes-dashboard-897c7599f-qss5p 0/1 CrashLoopBackOff 19 79m
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/dashboard-metrics-scraper ClusterIP 172.20.0.191 <none> 8000/TCP 79m
service/kubernetes-dashboard ClusterIP 172.20.0.15 <none> 443/TCP 79m
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/dashboard-metrics-scraper 1/1 1 1 79m
deployment.apps/kubernetes-dashboard 0/1 1 0 79m
NAME DESIRED CURRENT READY AGE
replicaset.apps/dashboard-metrics-scraper-5b8896d7fc 1 1 1 79m
replicaset.apps/kubernetes-dashboard-897c7599f 1 1 0 79m
Notice CrashLoopBackOff
Pod Details: kubectl describe pods kubernetes-dashboard-897c7599f-qss5p -n kubernetes-dashboard
Name: kubernetes-dashboard-897c7599f-qss5p
Namespace: kubernetes-dashboard
Priority: 0
Node: shawn4/192.168.10.71
Start Time: Fri, 17 Dec 2021 18:52:15 +0000
Labels: k8s-app=kubernetes-dashboard
pod-template-hash=897c7599f
Annotations: <none>
Status: Running
IP: 172.19.1.75
IPs:
IP: 172.19.1.75
Controlled By: ReplicaSet/kubernetes-dashboard-897c7599f
Containers:
kubernetes-dashboard:
Container ID: docker://894a354e40ca1a95885e149dcd75415e0f186ead3f2e05ec0787f4b1c7a29622
Image: kubernetesui/dashboard:v2.4.0
Image ID: docker-pullable://kubernetesui/dashboard@sha256:526850ae4ea9aba360e72b6df69fd3126b129d446efe83ac5250282b85f95b7f
Port: 8443/TCP
Host Port: 0/TCP
Args:
--auto-generate-certificates
--namespace=kubernetes-dashboard
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Fri, 17 Dec 2021 20:10:19 +0000
Finished: Fri, 17 Dec 2021 20:10:49 +0000
Ready: False
Restart Count: 19
Liveness: http-get https://:8443/ delay=30s timeout=30s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/certs from kubernetes-dashboard-certs (rw)
/tmp from tmp-volume (rw)
/var/run/secrets/kubernetes.io/serviceaccount from kubernetes-dashboard-token-wq9m8 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kubernetes-dashboard-certs:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-certs
Optional: false
tmp-volume:
Type: EmptyDir (a temporary directory that shares a pod's lifetime)
Medium:
SizeLimit: <unset>
kubernetes-dashboard-token-wq9m8:
Type: Secret (a volume populated by a Secret)
SecretName: kubernetes-dashboard-token-wq9m8
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node-role.kubernetes.io/master:NoSchedule
node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning BackOff 21s (x327 over 79m) kubelet Back-off restarting failed container
Logs: kubectl logs -f -n kubernetes-dashboard kubernetes-dashboard-897c7599f-qss5p
2021/12/17 20:10:19 Starting overwatch
2021/12/17 20:10:19 Using namespace: kubernetes-dashboard
2021/12/17 20:10:19 Using in-cluster config to connect to apiserver
2021/12/17 20:10:19 Using secret token for csrf signing
2021/12/17 20:10:19 Initializing csrf token from kubernetes-dashboard-csrf secret
panic: Get "https://172.20.0.1:443/api/v1/namespaces/kubernetes-dashboard/secrets/kubernetes-dashboard-csrf": dial tcp 172.20.0.1:443: i/o timeout
goroutine 1 [running]:
github.com/kubernetes/dashboard/src/app/backend/client/csrf.(*csrfTokenManager).init(0x400055fae8)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:41 +0x350
github.com/kubernetes/dashboard/src/app/backend/client/csrf.NewCsrfTokenManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/csrf/manager.go:66
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).initCSRFKey(0x40001fc080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:502 +0x8c
github.com/kubernetes/dashboard/src/app/backend/client.(*clientManager).init(0x40001fc080)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:470 +0x40
github.com/kubernetes/dashboard/src/app/backend/client.NewClientManager(...)
/home/runner/work/dashboard/dashboard/src/app/backend/client/manager.go:551
main.main()
/home/runner/work/dashboard/dashboard/src/app/backend/dashboard.go:95 +0x1dc
If you need any more information please ask!
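One hedged way to narrow this down: the panic shows the dashboard timing out while dialing the kubernetes service ClusterIP (172.20.0.1:443), which usually points at the pod network or kube-proxy on that node rather than at the dashboard itself. Assuming a busybox image is acceptable, a throwaway pod can test that path (the IP is the one from the stack trace above; even an unauthorized response would prove connectivity, while an i/o timeout would not):
$ kubectl run net-test -i --rm --restart=Never --image=busybox -- wget -T 5 -O- --no-check-certificate https://172.20.0.1:443/version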
UPDATE 12/29/21:
Fixed this issue by reinstalling the cluster to the newest versions of Kubernetes and Ubuntu.
Turned out there were several issues:
I was using Ubuntu Buster which is deprecated.
My client/server Kubernetes versions were +/-0.3 out of sync
I was following outdated instructions
I reinstalled the whole cluster following the official Kubernetes guide and, with a few snags along the way, it works!
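For anyone checking for the same client/server skew, a couple of hedged read-only checks (flag availability varies slightly by kubectl version):
$ kubectl version --short      # client vs. server version at a glance
$ kubectl get nodes -o wide    # kubelet version per node in the VERSION column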
I have a MetalLB load balancer, a k8s cluster (one master and one worker) at v1.18.5, Helm 3.7, and NFS dynamic volume provisioning installed using Helm. I spin up a JupyterHub instance with Helm. Within a minute everything is set up, but when I use the external IP to open JupyterHub in my browser, nothing loads. Here is my kubectl get all:
pod/continuous-image-puller-4l5gj 1/1 Running 0 23s
pod/hub-6c9cb48df8-k5t4w 1/1 Running 0 23s
pod/nfs-subdir-external-provisioner-789697969b-hqp46 1/1 Running 0 23h
pod/nginx2-669c86457c-hc5mv 1/1 Running 0 35h
pod/proxy-66cb767659-svwbv 1/1 Running 0 23s
pod/user-scheduler-6d4698dd59-wqw9l 1/1 Running 0 23s
pod/user-scheduler-6d4698dd59-zk4c7 1/1 Running 0 23s
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S) AGE
service/hub ClusterIP 10.111.196.55 <none> 8081/TCP 23s
service/kubernetes ClusterIP 10.96.0.1 <none> 443/TCP 39h
service/nginx2 LoadBalancer 10.106.241.85 10.0.3.240 80:30746/TCP 32h
service/proxy-api ClusterIP 10.109.211.71 <none> 8001/TCP 23s
service/proxy-public LoadBalancer 10.111.233.85 10.0.3.241 80:31336/TCP 23s
NAME DESIRED CURRENT READY UP-TO-DATE AVAILABLE NODE SELECTOR AGE
daemonset.apps/continuous-image-puller 1 1 1 1 1 <none> 23s
NAME READY UP-TO-DATE AVAILABLE AGE
deployment.apps/hub 1/1 1 1 23s
deployment.apps/nfs-subdir-external-provisioner 1/1 1 1 23h
deployment.apps/nginx2 1/1 1 1 35h
deployment.apps/proxy 1/1 1 1 23s
deployment.apps/user-scheduler 2/2 2 2 23s
NAME DESIRED CURRENT READY AGE
replicaset.apps/hub-6c9cb48df8 1 1 1 23s
replicaset.apps/nfs-subdir-external-provisioner-789697969b 1 1 1 23h
replicaset.apps/nginx2-669c86457c 1 1 1 35h
replicaset.apps/proxy-66cb767659 1 1 1 23s
replicaset.apps/user-scheduler-6d4698dd59 2 2 2 23s
NAME READY AGE
statefulset.apps/user-placeholder 0/0 23s
Also, below is my storage class for reference: kubectl get sc
NAME PROVISIONER RECLAIMPOLICY VOLUMEBINDINGMODE ALLOWVOLUMEEXPANSION AGE
nfs-client cluster.local/nfs-subdir-external-provisioner Delete Immediate true 23h
I will not paste the config file as it is very large; basically, what I did was:
helm show values jupyterhub/jupyterhub > /tmp/jupyterhub.yaml
(after changing some values)
helm install jupyterhub jupyterhub/jupyterhub --values /tmp/jupyterhub.yaml
The only things I changed were the security key (hex, as mentioned on the website), writing nfs-client wherever it said storageClass or storageClassName, and perhaps altering the storage size (1Gi/2Gi). That's all. The LoadBalancer works fine, because I ran nginx and can easily open it in my browser. So I decided to check the JupyterHub pods, first getting the pod's name using kubectl get pods:
NAME READY STATUS RESTARTS AGE
continuous-image-puller-4l5gj 1/1 Running 0 20m
hub-6c9cb48df8-k5t4w 1/1 Running 0 20m
nfs-subdir-external-provisioner-789697969b-hqp46 1/1 Running 0 23h
nginx2-669c86457c-hc5mv 1/1 Running 0 35h
proxy-66cb767659-svwbv 1/1 Running 0 20m
user-scheduler-6d4698dd59-wqw9l 1/1 Running 0 20m
user-scheduler-6d4698dd59-zk4c7 1/1 Running 0 20m
root@master:/home/ubuntu#
and then running kubectl describe pod hub-6c9cb48df8-k5t4w -n default, which gave me this:
Name: hub-6c9cb48df8-k5t4w
Namespace: default
Priority: 0
Node: worker/10.0.0.126
Start Time: Sat, 27 Nov 2021 10:21:43 +0000
Labels: app=jupyterhub
component=hub
hub.jupyter.org/network-access-proxy-api=true
hub.jupyter.org/network-access-proxy-http=true
hub.jupyter.org/network-access-singleuser=true
pod-template-hash=6c9cb48df8
release=jupyterhub
Annotations: checksum/config-map: f746d7e563a064e9158fe6f7f59bdbd463ed24ad7a927d75a1f18c022c3afeaf
checksum/secret: 926186a1b18e5cb9aa5b8c0a177f379299bcf0f05ac4de17d1958422054d15e5
cni.projectcalico.org/podIP: 192.168.171.97/32
cni.projectcalico.org/podIPs: 192.168.171.97/32
Status: Running
IP: 192.168.171.97
IPs:
IP: 192.168.171.97
Controlled By: ReplicaSet/hub-6c9cb48df8
Containers:
hub:
Container ID: docker://1d5e3a812f9712f6d59c09d855b034e2f6bc3e058bad4932db87145ec09f70d1
Image: jupyterhub/k8s-hub:1.2.0
Image ID: docker-pullable://jupyterhub/k8s-hub@sha256:e4770285aaf7230b930643986221757c2cc2e9420f5e21ac892582c96a57ce1c
Port: 8081/TCP
Host Port: 0/TCP
Args:
jupyterhub
--config
/usr/local/etc/jupyterhub/jupyterhub_config.py
--upgrade-db
State: Running
Started: Sat, 27 Nov 2021 10:21:45 +0000
Ready: True
Restart Count: 0
Liveness: http-get http://:http/hub/health delay=300s timeout=3s period=10s #success=1 #failure=30
Readiness: http-get http://:http/hub/health delay=0s timeout=1s period=2s #success=1 #failure=1000
Environment:
PYTHONUNBUFFERED: 1
HELM_RELEASE_NAME: jupyterhub
POD_NAMESPACE: default (v1:metadata.namespace)
CONFIGPROXY_AUTH_TOKEN: <set to the key 'hub.config.ConfigurableHTTPProxy.auth_token' in secret 'hub'> Optional: false
Mounts:
/srv/jupyterhub from pvc (rw)
/usr/local/etc/jupyterhub/config/ from config (rw)
/usr/local/etc/jupyterhub/jupyterhub_config.py from config (rw,path="jupyterhub_config.py")
/usr/local/etc/jupyterhub/secret/ from secret (rw)
/usr/local/etc/jupyterhub/z2jh.py from config (rw,path="z2jh.py")
/var/run/secrets/kubernetes.io/serviceaccount from hub-token-zd25x (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: hub
Optional: false
secret:
Type: Secret (a volume populated by a Secret)
SecretName: hub
Optional: false
pvc:
Type: PersistentVolumeClaim (a reference to a PersistentVolumeClaim in the same namespace)
ClaimName: hub-db-dir
ReadOnly: false
hub-token-zd25x:
Type: Secret (a volume populated by a Secret)
SecretName: hub-token-zd25x
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: hub.jupyter.org/dedicated=core:NoSchedule
hub.jupyter.org_dedicated=core:NoSchedule
node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 21m default-scheduler Successfully assigned default/hub-6c9cb48df8-k5t4w to worker
Normal Pulled 21m kubelet, worker Container image "jupyterhub/k8s-hub:1.2.0" already present on machine
Normal Created 21m kubelet, worker Created container hub
Normal Started 21m kubelet, worker Started container hub
Warning Unhealthy 21m (x3 over 21m) kubelet, worker Readiness probe failed: Get http://192.168.171.97:8081/hub/health: dial tcp 192.168.171.97:8081: connect: connection refused
So I know that the pod is unhealthy. But I do not have any other details to debug this. Any help on how to fix or debug this would be highly appreciated.
Thank you!
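Since the pod is Ready and the only warning is the readiness probe failing briefly at startup, a few hedged next steps to narrow this down, using the names from the output above (port-forwarding bypasses MetalLB entirely, so if that works the problem is in the LoadBalancer path):
$ kubectl logs deploy/hub                         # JupyterHub's own log; config errors show up here
$ kubectl logs deploy/proxy                       # configurable-http-proxy log; look for routing errors
$ kubectl get svc proxy-public -o wide            # confirm the external IP and node port
$ kubectl port-forward svc/proxy-public 8080:80   # then browse http://localhost:8080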
All new containers are stuck in the "Pending" status. It does not seem to be a resource issue, since total cluster utilization is about 10% CPU and 30% memory.
How do I get more insights into the issue?
$ kubectl get pod
NAME READY STATUS RESTARTS AGE
cq-iam-boarding-77fd94dc94-8pc6f 1/1 Running 0 30h
cq-iam-demo-cloud-6b99f6544d-9v7j7 1/1 Running 0 30h
cq-iam-mpm-dev-8c6cc58fd-fczlw 1/1 Running 0 30h
cq-iam-proxy-86854cc78d-49gfw 0/1 Terminating 0 7h42m
cq-iam-proxy-86854cc78d-dqlz8 0/1 Terminating 0 7h36m
cq-iam-proxy-86854cc78d-m7zs2 0/1 Pending 0 5h22m
cq-launchpad-app-7b57c478b9-gqcxj 1/1 Running 0 13h
cq-management-api-7c689c7846-q9fz2 1/1 Running 0 29h
cq-opa-api-8458db697c-75rzd 1/1 Running 0 30h
cq-settings-app-6874885794-mspj9 1/1 Running 0 29h
node-debugger-aks-nodepool1-31127038-vmss000000-czt8s 0/1 Pending 0 8h
$ kubectl top pods
NAME CPU(cores) MEMORY(bytes)
cq-iam-boarding-77fd94dc94-8pc6f 2m 482Mi
cq-iam-demo-cloud-6b99f6544d-9v7j7 2m 507Mi
cq-iam-mpm-dev-8c6cc58fd-fczlw 2m 443Mi
cq-launchpad-app-7b57c478b9-gqcxj 0m 2Mi
cq-management-api-7c689c7846-q9fz2 1m 88Mi
cq-opa-api-8458db697c-75rzd 1m 17Mi
cq-settings-app-6874885794-mspj9 1m 2Mi
$ kubectl describe pod cq-iam-proxy-86854cc78d-m7zs2
Name: cq-iam-proxy-86854cc78d-m7zs2
Namespace: dev
Priority: 0
Node: aks-nodepool1-31127038-vmss000000/
Labels: app=cq-iam-proxy
pod-template-hash=86854cc78d
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/cq-iam-proxy-86854cc78d
Containers:
cq-iam-proxy:
Image: xxx.azurecr.io/karneval/cq-iam-proxy:1.0.14
Port: 80/TCP
Host Port: 0/TCP
Environment:
CQ_HOSTNAME: dev.hvt.zone
key1: TODO
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pl6p4 (ro)
Conditions:
Type Status
PodScheduled True
Volumes:
default-token-pl6p4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pl6p4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
Check the status of nodepool1:
The node pool is all good and running.
There are three nodes, which are all green (memory, disk, readiness).
Can you show the logs of the pod?
This is what I get when I print the pod logs:
$ kubectl logs cq-iam-proxy-86854cc78d-m7zs2
Error from server (NotFound): the server could not find the requested resource ( pods/log cq-iam-proxy-86854cc78d-m7zs2)
Please include the events of pods in Terminating status. There may be a clue there:
$ kubectl describe pod cq-iam-proxy-86854cc78d-49gfw
Name: cq-iam-proxy-86854cc78d-49gfw
Namespace: dev
Priority: 0
Node: aks-nodepool1-31127038-vmss000000/
Labels: app=cq-iam-proxy
pod-template-hash=86854cc78d
Annotations: <none>
Status: Terminating (lasts 2d18h)
Termination Grace Period: 30s
IP:
IPs: <none>
Controlled By: ReplicaSet/cq-iam-proxy-86854cc78d
Containers:
cq-iam-proxy:
Image: xxx.azurecr.io/karneval/cq-iam-proxy:1.0.14
Port: 80/TCP
Host Port: 0/TCP
Environment:
CQ_HOSTNAME: dev.hvt.zone
key1: TODO
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-pl6p4 (ro)
Conditions:
Type Status
PodScheduled True
Volumes:
default-token-pl6p4:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-pl6p4
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events: <none>
There are no events there? Is there anything in the logs of those two pods?
$ kubectl logs cq-iam-proxy-86854cc78d-dqlz8
Error from server (NotFound): the server could not find the requested resource ( pods/log cq-iam-proxy-86854cc78d-dqlz8)
This seems like a problem with the application itself.
It does not seem to be a problem with the application itself. I ran these two commands:
$ kubectl run --image=busybox myapp -- false
$ kubectl run --image=busybox myapp2 -- false
myapp was able to start
myapp2 is in pending mode (same as the other applications)
myapp 0/1 CrashLoopBackOff 5 11m
myapp2 0/1 Pending 0 9m26s
$ kubectl describe pod myapp
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 11m default-scheduler Successfully assigned dev/myapp to aks-nodepool1-31127038-vmss000001
Normal Created 11m (x4 over 11m) kubelet Created container myapp
Normal Started 11m (x4 over 11m) kubelet Started container myapp
Normal Pulling 10m (x5 over 11m) kubelet Pulling image "busybox"
Normal Pulled 10m (x5 over 11m) kubelet Successfully pulled image "busybox"
Warning BackOff 95s (x47 over 11m) kubelet Back-off restarting failed container
$ kubectl describe pod myapp2
...
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 10m default-scheduler Successfully assigned dev/myapp2 to aks-nodepool1-31127038-vmss000000
The only difference between myapp and myapp2 is that they have been scheduled on different nodes:
myapp was successfully started on node aks-nodepool1-31127038-vmss000001
myapp2 does not start on node aks-nodepool1-31127038-vmss000000
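A hedged way to reproduce this on demand is to bypass the scheduler and pin a throwaway pod to the suspect node (nodetest is a made-up name; --overrides takes raw JSON):
$ kubectl run nodetest --image=busybox --restart=Never --overrides='{"apiVersion":"v1","spec":{"nodeName":"aks-nodepool1-31127038-vmss000000"}}' -- sleep 3600
$ kubectl get pod nodetest -o wide    # stays Pending/ContainerCreating if that node's kubelet or runtime is stuck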
After two weeks the cluster healed itself.
The node nodepool1-31127038-vmss000000 was problematic and would get stuck starting a container.
Next time I encounter this problem I will play with these commands to heal the node:
kubectl cordon my-node # Mark my-node as unschedulable
kubectl drain my-node # Drain my-node in preparation for maintenance
kubectl uncordon my-node # Mark my-node as schedulable
kubectl top node my-node # Show metrics for a given node
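Before cordoning, a few read-only checks can show whether the node itself reports a problem (a sketch; my-node is a placeholder as above):
kubectl describe node my-node                                        # Conditions, taints, and recent node events
kubectl get events -A --field-selector involvedObject.kind=Node      # node-level events across namespaces
kubectl get pods -A -o wide --field-selector spec.nodeName=my-node   # everything scheduled on that node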
I followed the Argo Workflows Getting Started documentation. Everything went smoothly until I ran the first sample workflow as described in 4. Run Sample Workflows. The workflow just gets stuck in the Pending state:
vagrant@master:~$ argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml
Name: hello-world-z4lbs
Namespace: default
ServiceAccount: default
Status: Pending
Created: Thu May 14 12:36:45 +0000 (now)
vagrant@master:~$ argo list
NAME STATUS AGE DURATION PRIORITY
hello-world-z4lbs Pending 27m 0s 0
Here it was mentioned that taints on the master node may be the problem, so I untainted the master node:
vagrant@master:~$ kubectl taint nodes --all node-role.kubernetes.io/master-
node/master untainted
taint "node-role.kubernetes.io/master" not found
taint "node-role.kubernetes.io/master" not found
Then I deleted the pending workflow and resubmitted it, but it got stuck in the pending state again.
The details of the newly submitted workflow that is also stuck:
vagrant@master:~$ kubectl describe workflow hello-world-8kvmb
Name: hello-world-8kvmb
Namespace: default
Labels: <none>
Annotations: <none>
API Version: argoproj.io/v1alpha1
Kind: Workflow
Metadata:
Creation Timestamp: 2020-05-14T13:57:44Z
Generate Name: hello-world-
Generation: 1
Managed Fields:
API Version: argoproj.io/v1alpha1
Fields Type: FieldsV1
fieldsV1:
f:metadata:
f:generateName:
f:spec:
.:
f:arguments:
f:entrypoint:
f:templates:
f:status:
.:
f:finishedAt:
f:startedAt:
Manager: argo
Operation: Update
Time: 2020-05-14T13:57:44Z
Resource Version: 16780
Self Link: /apis/argoproj.io/v1alpha1/namespaces/default/workflows/hello-world-8kvmb
UID: aa82d005-b7ac-411f-9d0b-93f34876b673
Spec:
Arguments:
Entrypoint: whalesay
Templates:
Arguments:
Container:
Args:
hello world
Command:
cowsay
Image: docker/whalesay:latest
Name:
Resources:
Inputs:
Metadata:
Name: whalesay
Outputs:
Status:
Finished At: <nil>
Started At: <nil>
Events: <none>
While trying to get the workflow-controller logs I get the following error:
vagrant@master:~$ kubectl logs -n argo -l app=workflow-controller
Error from server (BadRequest): container "workflow-controller" in pod "workflow-controller-6c4787844c-lbksm" is waiting to start: ContainerCreating
The details for the corresponding workflow-controller pod:
vagrant@master:~$ kubectl -n argo describe pods/workflow-controller-6c4787844c-lbksm
Name: workflow-controller-6c4787844c-lbksm
Namespace: argo
Priority: 0
Node: node-1/192.168.50.11
Start Time: Thu, 14 May 2020 12:08:29 +0000
Labels: app=workflow-controller
pod-template-hash=6c4787844c
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: ReplicaSet/workflow-controller-6c4787844c
Containers:
workflow-controller:
Container ID:
Image: argoproj/workflow-controller:v2.8.0
Image ID:
Port: <none>
Host Port: <none>
Command:
workflow-controller
Args:
--configmap
workflow-controller-configmap
--executor-image
argoproj/argoexec:v2.8.0
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from argo-token-pz4fd (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
argo-token-pz4fd:
Type: Secret (a volume populated by a Secret)
SecretName: argo-token-pz4fd
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SandboxChanged 7m17s (x4739 over 112m) kubelet, node-1 Pod sandbox changed, it will be killed and re-created.
Warning FailedCreatePodSandBox 2m18s (x4950 over 112m) kubelet, node-1 (combined from similar events): Failed to create pod sandbox: rpc error: code = Unknown desc = failed to set up sandbox container "1bd1fd11dfe677c749b4a1260c29c2f8cff0d55de113d154a822e68b41f9438e" network for pod "workflow-controller-6c4787844c-lbksm": networkPlugin cni failed to set up pod "workflow-controller-6c4787844c-lbksm_argo" network: stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/
I am running Argo 2.8:
vagrant@master:~$ argo version
argo: v2.8.0
BuildDate: 2020-05-11T22:55:16Z
GitCommit: 8f696174746ed01b9bf1941ad03da62d312df641
GitTreeState: clean
GitTag: v2.8.0
GoVersion: go1.13.4
Compiler: gc
Platform: linux/amd64
I have checked the cluster status and it looks OK:
vagrant@master:~$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 95m v1.18.2
node-1 Ready <none> 92m v1.18.2
node-2 Ready <none> 92m v1.18.2
As to the K8s cluster installation, I created it using Vagrant as described here, the only differences being:
libvirt as provider
newer version of Ubuntu: generic/ubuntu1804
newer version of Calico: v3.14
Any idea why the workflows get stuck in the pending state and how to fix it?
Workflows start in the Pending state and then are moved through their steps by the workflow-controller pod (which is installed in the cluster as part of Argo).
The workflow-controller pod is stuck in ContainerCreating. kubectl describe po {workflow-controller pod} reveals a Calico-related network error.
As mentioned in the comments, it looks like a common Calico error. Once you clear that up, your hello-world workflow should execute just fine.
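Once the CNI is healthy you can verify it like this and simply resubmit; the controller should leave ContainerCreating on its own (the label selector is the one used by the standard Calico manifests):
$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide   # should be Running on every node
$ kubectl -n argo get pods                                         # workflow-controller and argo-server should go Running
$ argo submit --watch https://raw.githubusercontent.com/argoproj/argo/master/examples/hello-world.yaml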
Note from OP: Further debugging confirms the Calico problem (Calico nodes are not in the running state):
vagrant@master:~$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
argo argo-server-84946785b-94bfs 0/1 ContainerCreating 0 3h59m
argo workflow-controller-6c4787844c-lbksm 0/1 ContainerCreating 0 3h59m
kube-system calico-kube-controllers-74d45555dd-zhkp6 0/1 CrashLoopBackOff 56 3h59m
kube-system calico-node-2n9kt 0/1 CrashLoopBackOff 72 3h59m
kube-system calico-node-b8sb8 0/1 Running 70 3h56m
kube-system calico-node-pslzs 0/1 CrashLoopBackOff 67 3h56m
kube-system coredns-66bff467f8-rmxsp 0/1 ContainerCreating 0 3h59m
kube-system coredns-66bff467f8-z4lbq 0/1 ContainerCreating 0 3h59m
kube-system etcd-master 1/1 Running 2 3h59m
kube-system kube-apiserver-master 1/1 Running 2 3h59m
kube-system kube-controller-manager-master 1/1 Running 2 3h59m
kube-system kube-proxy-k59ks 1/1 Running 2 3h59m
kube-system kube-proxy-mn96x 1/1 Running 1 3h56m
kube-system kube-proxy-vxj8b 1/1 Running 1 3h56m
kube-system kube-scheduler-master 1/1 Running 2 3h59m
For the Calico CrashLoopBackOff: kubeadm uses the default interface eth0 to bootstrap the cluster.
But the eth0 interface is used by Vagrant (for SSH).
You could configure the kubelet to use a private IP address instead of eth0.
You'll have to do that on each node and then run vagrant reload.
sudo vim /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
#Add the Environment line in 10-kubeadm.conf and replace your_node_ip
Environment="KUBELET_EXTRA_ARGS=--node-ip=your_node_ip"
Hope it helps
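If you prefer not to vagrant reload, a hedged alternative is to restart the kubelet on each node after editing the drop-in, and then confirm the result (the calico-node label is the one used by the standard manifests):
$ sudo systemctl daemon-reload && sudo systemctl restart kubelet
$ kubectl get nodes -o wide                                        # INTERNAL-IP should now be the private address
$ kubectl -n kube-system get pods -l k8s-app=calico-node -o wide   # calico-node should settle into Running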
I followed Istio's official documentation to set up Istio for the sample bookinfo app with minikube, but I'm getting an Unable to connect to the server: net/http: TLS handshake timeout error. These are the steps that I have followed (I have kubectl & minikube installed):
minikube start
curl -L https://git.io/getLatestIstio | sh -
cd istio-1.0.3
export PATH=$PWD/bin:$PATH
kubectl apply -f install/kubernetes/helm/istio/templates/crds.yaml
kubectl apply -f install/kubernetes/istio-demo-auth.yaml
kubectl get pods -n istio-system
This is the terminal output I'm getting
$ kubectl get pods -n istio-system
NAME READY STATUS RESTARTS AGE
grafana-9cfc9d4c9-xg7bh 1/1 Running 0 4m
istio-citadel-6d7f9c545b-lwq8s 1/1 Running 0 3m
istio-cleanup-secrets-69hdj 0/1 Completed 0 4m
istio-egressgateway-75dbb8f95d-k6xj2 1/1 Running 0 4m
istio-galley-6d74549bb9-mdc97 0/1 ContainerCreating 0 4m
istio-grafana-post-install-xz9rk 0/1 Completed 0 4m
istio-ingressgateway-6bd4957bc-vhbct 1/1 Running 0 4m
istio-pilot-7f8c49bbd8-x6bmm 0/2 Pending 0 4m
istio-policy-6c65d8cff4-hx2c7 2/2 Running 0 4m
istio-security-post-install-gjfj2 0/1 Completed 0 4m
istio-sidecar-injector-74855c54b9-nnqgx 0/1 ContainerCreating 0 3m
istio-telemetry-65cdd46d6c-rqzfw 2/2 Running 0 4m
istio-tracing-ff94688bb-hgz4h 1/1 Running 0 3m
prometheus-f556886b8-chdxw 1/1 Running 0 4m
servicegraph-778f94d6f8-9xgw5 1/1 Running 0 3m
$ kubectl describe pod istio-galley-6d74549bb9-mdc97
Error from server (NotFound): pods "istio-galley-5bf4d6b8f7-8s2z9" not found
Pod describe output:
$ kubectl -n istio-system describe pod istio-galley-6d74549bb9-mdc97
Name: istio-galley-6d74549bb9-mdc97
Namespace: istio-system
Node: minikube/172.17.0.4
Start Time: Sat, 03 Nov 2018 04:29:57 +0000
Labels: istio=galley
pod-template-hash=1690826493
Annotations: scheduler.alpha.kubernetes.io/critical-pod=
sidecar.istio.io/inject=false
Status: Pending
IP:
Controlled By: ReplicaSet/istio-galley-5bf4d6b8f7
Containers:
validator:
Container ID:
Image: gcr.io/istio-release/galley:1.0.0
Image ID:
Ports: 443/TCP, 9093/TCP
Host Ports: 0/TCP, 0/TCP
Command: /usr/local/bin/galley
validator --deployment-namespace=istio-system
--caCertFile=/etc/istio/certs/root-cert.pem
--tlsCertFile=/etc/istio/certs/cert-chain.pem
--tlsKeyFile=/etc/istio/certs/key.pem
--healthCheckInterval=2s
--healthCheckFile=/health
--webhook-config-file
/etc/istio/config/validatingwebhookconfiguration.yaml
State: Waiting
Reason: ContainerCreating
Ready: False
Restart Count: 0
Requests:
cpu: 10m
Liveness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Readiness: exec [/usr/local/bin/galley probe --probe-path=/health --interval=4s] delay=4s timeout=1s period=4s #success=1 #failure=3
Environment: <none>
Mounts:
/etc/istio/certs from certs (ro)
/etc/istio/config from config (ro)
/var/run/secrets/kubernetes.io/serviceaccount from istio-galley-service-account-token-9pcmv(ro)
Conditions:
Type Status
Initialized True
Ready False
PodScheduled True
Volumes:
certs:
Type: Secret (a volume populated by a Secret)
SecretName: istio.istio-galley-service-account
Optional: false
config:
Type: ConfigMap (a volume populated by a ConfigMap)
Name: istio-galley-configuration
Optional: false
istio-galley-service-account-token-9pcmv:
Type: Secret (a volume populated by a Secret)
SecretName: istio-galley-service-account-token-9pcmv
Optional: false
QoS Class: Burstable
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 1m default-scheduler Successfully assigned istio-galley-5bf4d6b8f7-8t8qz to minikube
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "config"
Normal SuccessfulMountVolume 1m kubelet, minikube MountVolume.SetUp succeeded for volume "istio-galley-service-account-token-9pcmv"
Warning FailedMount 27s (x7 over 1m) kubelet, minikube MountVolume.SetUp failed for volume "certs" : secrets "istio.istio-galley-service-account" not found
After some time:
$ kubectl describe pod istio-galley-6d74549bb9-mdc97
Unable to connect to the server: net/http: TLS handshake timeout
So I waited for the istio-sidecar-injector and istio-galley containers to get created, but running kubectl get pods -n istio-system again, or any other kubectl command, gives the Unable to connect to the server: net/http: TLS handshake timeout error.
Please help me with this issue.
PS: I'm running minikube on Ubuntu 16.04.
Thanks in advance.
Looks like you are running into this and this: the secret istio.istio-galley-service-account is missing in your istio-system namespace. You can try the workaround as described:
Install as outlined in the docs: https://istio.io/docs/setup/kubernetes/minimal-install/. The missing secret is created by the Citadel pod, which isn't running due to the --set security.enabled=false flag; setting that to true starts Citadel and the secret is created.
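A couple of hedged checks before and after the workaround (the istio=citadel label is what the 1.0.x manifests use and may differ in other releases):
$ kubectl -n istio-system get secret istio.istio-galley-service-account   # NotFound confirms the missing secret
$ kubectl -n istio-system get pods -l istio=citadel                       # Citadel must be running to create it
$ kubectl -n istio-system logs -l istio=citadel --tail=50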
Problem resolved when I ran minikube start --memory=4048. Maybe it was a memory issue.
When using either istio-demo.yaml or istio-demo-auth.yaml, you'll find that a minimum of 4 GB of RAM is required to run Istio (particularly when you also deploy its sample app, BookInfo). This is true whether you're running Minikube or Docker Desktop, and it is one of the gotchas that Meshery identifies and helps those deploying Istio or other service meshes to circumvent.
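A minimal sketch of that fix (the numbers are examples; anything at or above the ~4 GB noted above should do):
$ minikube delete
$ minikube start --memory=4096 --cpus=4
$ kubectl describe node minikube | grep -A 8 "Allocated resources"   # confirm there is headroom once the istio-system pods settle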