I am running a CronJob in Kubernetes. The CronJob started but never exited; the status of the pod stays Running.
Below are the logs:
kubectl get pods
cronjob-1623253800-xnwwx 1/1 Running 0 13h
When I describe the job, I notice the following:
kubectl describe job cronjob-1623300120
Name: cronjob-1623300120
Namespace: cronjob
Selector: xxxxx
Labels: xxxxx
Annotations: <none>
Controlled By: CronJob/cronjob
Parallelism: 1
Completions: 1
Start Time: Thu, 9 Jun 2021 10:12:03 +0530
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: app=cronjob
controller-xxxx
job-name=cronjob-1623300120
Containers:
plannercronjob:
Image: xxxxxxxxxxxxx
Port: <none>
Host Port: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 13h job-controller Created pod: cronjob-1623300120
I noticed Pods Statuses: 1 Running / 0 Succeeded / 0 Failed. Does this mean that the job is marked Succeeded when the code returns zero (and Failed otherwise)? Is that correct?
When I enter the pod using an exec command:
kubectl exec --stdin --tty cronjob-1623253800-xnwwx -n cronjob -- /bin/bash
root@cronjob-1623253800-xnwwx:/# ps ax | grep python
1 ? Ssl 0:01 python -m sfit.src.app
18 pts/0 S+ 0:00 grep python
I found that the python process is still running. Is this a deadlock in my code, or something else?
The pod describe output:
Name: cronjob-1623302220-xnwwx
Namespace: default
Priority: 0
Node: aks-agentpool-xxxxvmss000000/10.240.0.4
Start Time: Thu, 9 Jun 2021 10:47:02 +0530
Labels: app=cronjob
controller-uid=xxxxxx
job-name=cronjob-1623302220
Annotations: <none>
Status: Running
IP: 10.244.1.30
IPs:
IP: 10.244.1.30
Controlled By: Job/cronjob-1623302220
Containers:
plannercronjob:
Container ID: docker://xxxxxxxxxxxxxxxx
Image: xxxxxxxxxxx
Image ID: docker-xxxx
Port: <none>
Host Port: <none>
State: Running
Started: Thu, 9 Jun 2021 10:47:06 +0530
Ready: True
Restart Count: 0
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-97xzv (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
default-token-97xzv:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-97xzv
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 13h default-scheduler Successfully assigned cronjob/cronjob-1623302220-xnwwx to aks-agentpool-xxx-vmss000000
Normal Pulling 13h kubelet, aks-agentpool-xxx-vmss000000 Pulling image "xxxx.azurecr.io/xxx:1.1.1"
Normal Pulled 13h kubelet, aks-agentpool-xxx-vmss000000 Successfully pulled image "xxx.azurecr.io/xx:1.1.1"
Normal Created 13h kubelet, aks-agentpool-xxx-vmss000000 Created container cronjob
Normal Started 13h kubelet, aks-agentpool-xxx-vmss000000 Started container cronjob
@KrishnaChaurasia: I ran the Docker image on my machine. There is an error in my Python code, and locally the container exits with that error. But in Kubernetes it does not exit and the pod does not stop:
docker run xxxxx/cronjob:1
File "/usr/local/lib/python3.8/site-packages/azure/core/pipeline/transport/_requests_basic.py", line 261, in send
raise error
azure.core.exceptions.ServiceRequestError: <urllib3.connection.HTTPSConnection object at 0x7f113f6480a0>: Failed to establish a new connection: [Errno -2] Name or service not known
echo $?
1
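For context: a Job's pod is marked Succeeded only when its main process exits with code 0, and Failed on a non-zero exit (subject to restartPolicy and backoffLimit); a process that never returns keeps the pod Running indefinitely. A minimal sketch that demonstrates this, with a hypothetical name and image:

apiVersion: batch/v1
kind: Job
metadata:
  name: exit-demo                         # hypothetical name
spec:
  backoffLimit: 0
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: busybox                    # any image with a shell
        command: ["sh", "-c", "exit 0"]   # exit 0 -> Succeeded; a non-zero exit -> Failed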
If you are seeing that your pod is always running and never completes, try adding startingDeadlineSeconds.
https://medium.com/@hengfeng/what-does-kubernetes-cronjobs-startingdeadlineseconds-exactly-mean-cc2117f9795f
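A minimal sketch of where that field goes, with an assumed schedule. Per the linked article, startingDeadlineSeconds only bounds how late a missed schedule may still be started; the activeDeadlineSeconds line below is a separate, additional knob that kills a run that never finishes, which may be closer to what this question needs:

apiVersion: batch/v1beta1            # batch/v1 from Kubernetes 1.21 on
kind: CronJob
metadata:
  name: cronjob                      # name taken from the question
spec:
  schedule: "*/10 * * * *"           # assumed schedule
  startingDeadlineSeconds: 200       # how late a missed run may still be started
  jobTemplate:
    spec:
      activeDeadlineSeconds: 3600    # kills the job if it is still running after an hour
      template:
        spec:
          restartPolicy: Never
          containers:
          - name: plannercronjob     # container name taken from the question
            image: xxxx.azurecr.io/xxx:1.1.1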
I am a beginner in Kubernetes and was trying to deploy my Flask application following this guide: https://medium.com/analytics-vidhya/build-a-python-flask-app-and-deploy-with-kubernetes-ccc99bbec5dc
I have successfully built a Docker image and pushed it to Docker Hub: https://hub.docker.com/repository/docker/beatrix1997/kubernetes_flask_app
But I am having trouble debugging a pod.
This is my yaml file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetesflaskapp-deploy
  labels:
    app: kubernetesflaskapp
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetesflaskapp
  template:
    metadata:
      labels:
        app: kubernetesflaskapp
    spec:
      containers:
      - name: kubernetesflaskapp
        image: beatrix1997/kubernetes_flask_app
        ports:
        - containerPort: 5000
And this is the description of the pod:
Name: kubernetesflaskapp-deploy-5764bbbd44-8696k
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Fri, 20 May 2022 11:26:33 +0100
Labels: app=kubernetesflaskapp
pod-template-hash=5764bbbd44
Annotations: <none>
Status: Running
IP: 172.17.0.12
IPs:
IP: 172.17.0.12
Controlled By: ReplicaSet/kubernetesflaskapp-deploy-5764bbbd44
Containers:
kubernetesflaskapp:
Container ID: docker://d500dc15e389190670a9273fea1d70e6bd6ab2e7053bd2480d114ad6150830f1
Image: beatrix1997/kubernetes_flask_app
Image ID: docker-pullable://beatrix1997/kubernetes_flask_app@sha256:1bfa98229f55b04f32a6b85d72860886abcc0f17295b14e173151a8e4b0f0334
Port: 5000/TCP
Host Port: 0/TCP
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 20 May 2022 11:58:38 +0100
Finished: Fri, 20 May 2022 11:58:38 +0100
Ready: False
Restart Count: 11
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-zq8n7 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-zq8n7:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 33m default-scheduler Successfully assigned default/kubernetesflaskapp-deploy-5764bbbd44-8696k to minikube
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 14.783413947s
Normal Pulled 33m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.243534487s
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.373217701s
Normal Pulling 32m (x4 over 33m) kubelet Pulling image "beatrix1997/kubernetes_flask_app"
Normal Created 32m (x4 over 33m) kubelet Created container kubernetesflaskapp
Normal Pulled 32m kubelet Successfully pulled image "beatrix1997/kubernetes_flask_app" in 1.239794774s
Normal Started 32m (x4 over 33m) kubelet Started container kubernetesflaskapp
Warning BackOff 3m16s (x138 over 33m) kubelet Back-off restarting failed container
I am using ubuntu as my OS if it matters at all.
Any help would be appreciated!
Many thanks!
I would check the following:
Check if your Docker image works in Docker; you can run it with the docker run command (official doc here).
If it doesn't work, check what is wrong in your app first.
If it does, try checking the readiness and liveness probes (official documentation here); a minimal sketch follows this list.
You can find more hints about failing pods here.
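For reference, a sketch of what such probes could look like for the Deployment above; port 5000 is taken from the manifest, but the /health path is an assumption and must be a route the Flask app actually serves:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubernetesflaskapp-deploy
spec:
  replicas: 1
  selector:
    matchLabels:
      app: kubernetesflaskapp
  template:
    metadata:
      labels:
        app: kubernetesflaskapp
    spec:
      containers:
      - name: kubernetesflaskapp
        image: beatrix1997/kubernetes_flask_app
        ports:
        - containerPort: 5000
        readinessProbe:
          httpGet:
            path: /health            # assumed endpoint
            port: 5000
          initialDelaySeconds: 5
          periodSeconds: 10
        livenessProbe:
          httpGet:
            path: /health            # assumed endpoint
            port: 5000
          initialDelaySeconds: 15
          periodSeconds: 20

Note that probes only report health; given the Reason: Completed / Exit Code: 0 in the describe output above, the container itself is exiting, which the pod logs should explain.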
The error can be due to an issue in the application, as the reported reason is "Back-off restarting failed container". Please paste the following logs in the question for further clarification:
kubectl logs -n <NS> <pod-name>
If the container keeps crashing, kubectl logs <pod-name> --previous shows the logs of the previous (crashed) instance.
I built a k8s cluster on my virtual machines (CentOS 7) with VirtualBox:
kubectl get nodes -o wide
NAME STATUS ROLES AGE VERSION INTERNAL-IP EXTERNAL-IP OS-IMAGE KERNEL-VERSION CONTAINER-RUNTIME
k8s-master Ready control-plane,master 8d v1.21.2 192.168.0.186 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker01 Ready <none> 8d v1.21.2 192.168.0.187 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
k8s-worker02 Ready <none> 8d v1.21.2 192.168.0.188 <none> CentOS Linux 7 (Core) 3.10.0-1160.31.1.el7.x86_64 docker://20.10.7
I ran some pods in the default namespace with a ReplicaSet several days ago.
They all worked fine at first, and then I shut down the VMs.
Today, after I restarted the VMs, I found that they are not working properly anymore:
kubectl get all
NAME READY STATUS RESTARTS AGE
pod/dnsutils 1/1 Running 3 5d13h
pod/kubapp-6qbfz 0/1 Running 0 5d13h
pod/kubapp-d887h 0/1 Running 0 5d13h
pod/kubapp-z6nw7 0/1 Running 0 5d13h
NAME DESIRED CURRENT READY AGE
replicaset.apps/kubapp 3 3 0 5d13h
Then I deleted the ReplicaSet and re-created it to recreate the pods.
I ran this command to get more information:
[root@k8s-master ch04]# kubectl describe po kubapp-z887v
Name: kubapp-d887h
Namespace: default
Priority: 0
Node: k8s-worker02/192.168.0.188
Start Time: Fri, 23 Jul 2021 15:55:16 +0000
Labels: app=kubapp
Annotations: cni.projectcalico.org/podIP: 10.244.69.244/32
cni.projectcalico.org/podIPs: 10.244.69.244/32
Status: Running
IP: 10.244.69.244
IPs:
IP: 10.244.69.244
Controlled By: ReplicaSet/kubapp
Containers:
kubapp:
Container ID: docker://fc352ce4c6a826f2cf108f9bb9a335e3572509fd5ae2002c116e2b080df5ee10
Image: evalle/kubapp
Image ID: docker-pullable://evalle/kubapp@sha256:560c9c50b1d894cf79ac472a9925dc795b116b9481ec40d142b928a0e3995f4c
Port: <none>
Host Port: <none>
State: Running
Started: Fri, 23 Jul 2021 15:55:21 +0000
Ready: False
Restart Count: 0
Readiness: exec [ls /var/ready] delay=0s timeout=1s period=10s #success=1 #failure=3
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-m9rwr (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
kube-api-access-m9rwr:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 30m default-scheduler Successfully assigned default/kubapp-d887h to k8s-worker02
Normal Pulling 30m kubelet Pulling image "evalle/kubapp"
Normal Pulled 30m kubelet Successfully pulled image "evalle/kubapp" in 4.049160061s
Normal Created 30m kubelet Created container kubapp
Normal Started 30m kubelet Started container kubapp
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
I don't know what happened or how to fix it.
So here I am, asking you guys for help.
I am a k8s newbie, please give me a hand.
EDIT: Thanks for paul-becotte's help and recommendation. I am posting the definition of the pod:
apiVersion: apps/v1
kind: ReplicaSet
metadata:
  # here is the name of the ReplicaSet (RS)
  name: kubapp
spec:
  replicas: 3
  # what pods the RS is operating on
  selector:
    matchLabels:
      app: kubapp
  # the pod template for creating new pods
  template:
    metadata:
      labels:
        app: kubapp
    spec:
      containers:
      - name: kubapp
        image: evalle/kubapp
        readinessProbe:
          exec:
            command:
            - ls
            - /var/ready
There is an example YAML definition at https://github.com/Evalle/k8s-in-action/blob/master/Chapter_4/kubapp-rs.yaml.
I don't know where to find the Dockerfile of the image evalle/kubapp,
and I don't know whether it has the /var/ready directory.
Look at your event:
Warning Unhealthy 11s (x182 over 30m) kubelet Readiness probe failed: ls: cannot access /var/ready: No such file or directory
Your readiness probe is failing: it looks like it is checking for the existence of a file at /var/ready.
Your next step is to ask "does that make sense? Is my container actually going to write a file at /var/ready when it's ready?" If so, look at the logs from your pod and figure out why it's not writing the file. If it's NOT the correct check, look at the YAML you used to create your pod/deployment/replicaset and replace that check with something that does make sense, as sketched below.
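For illustration, a sketch of such a replacement inside the ReplicaSet's pod template, assuming the app exposes an HTTP endpoint; the path and port here are hypothetical, since neither is known from the question:

    spec:
      containers:
      - name: kubapp
        image: evalle/kubapp
        readinessProbe:
          httpGet:
            path: /healthz           # hypothetical path
            port: 8080               # hypothetical port
          periodSeconds: 10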
I am trying to set up a Hyperledger Fabric network on Kubernetes by using this.
I am at the step where I am trying to create channels. I run the command argo submit output.yaml -v, where output.yaml is the output of the command helm template channel-flow/ -f samples/simple/network.yaml -f samples/simple/crypto-config.yaml, but with spec.securityContext added as follows:
...
spec:
  securityContext:
    runAsNonRoot: true
    # runAsUser: 8737 (I commented this out because I don't know my user ID; not sure if this could cause a problem)
  entrypoint: channels
...
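A note on the commented-out runAsUser: with runAsNonRoot: true, the kubelet must be able to verify that the container runs as non-root; if the image runs as root or as a non-numeric user, the container is rejected with CreateContainerConfigError. Setting an explicit non-root UID removes the ambiguity; the 8737 below is simply the value from the commented line, and any non-zero UID the image can run as would do:

spec:
  securityContext:
    runAsNonRoot: true
    runAsUser: 8737   # any non-root (non-zero) UID
  entrypoint: channels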
My Argo workflow ends up getting stuck in the Pending state. I say this because I check my orderer and peer logs and see no movement in them.
I referenced Argo sample workflows stuck in the pending state and started by getting the argo logs:
[user@vmmock3 fabric-kube]$ kubectl logs -n argo -l app=workflow-controller
time="2021-05-31T05:02:41.145Z" level=info msg="Get leases 200"
time="2021-05-31T05:02:41.150Z" level=info msg="Update leases 200"
time="2021-05-31T05:02:46.162Z" level=info msg="Get leases 200"
time="2021-05-31T05:02:46.168Z" level=info msg="Update leases 200"
time="2021-05-31T05:02:51.179Z" level=info msg="Get leases 200"
time="2021-05-31T05:02:51.185Z" level=info msg="Update leases 200"
time="2021-05-31T05:02:56.193Z" level=info msg="Get leases 200"
time="2021-05-31T05:02:56.199Z" level=info msg="Update leases 200"
time="2021-05-31T05:03:01.213Z" level=info msg="Get leases 200"
time="2021-05-31T05:03:01.219Z" level=info msg="Update leases 200"
I try to describe the workflow controller pod:
[user@vmmock3 fabric-kube]$ kubectl -n argo describe pod workflow-controller-57fcfb5df8-qvn74
Name: workflow-controller-57fcfb5df8-qvn74
Namespace: argo
Priority: 0
Node: hlf-pool1-8rnem/10.104.0.8
Start Time: Tue, 25 May 2021 13:44:56 +0800
Labels: app=workflow-controller
pod-template-hash=57fcfb5df8
Annotations: <none>
Status: Running
IP: 10.244.0.158
IPs:
IP: 10.244.0.158
Controlled By: ReplicaSet/workflow-controller-57fcfb5df8
Containers:
workflow-controller:
Container ID: containerd://78c7f8dcb0f3a3b861293559ae0a11b92ce6843065e6f9459556a6b7099c8961
Image: argoproj/workflow-controller:v3.0.5
Image ID: docker.io/argoproj/workflow-controller@sha256:740dca63b11168490d9cc7b2d1b08c1364f4a4064e1d9b7a778ca2ab12a63158
Ports: 9090/TCP, 6060/TCP
Host Ports: 0/TCP, 0/TCP
Command:
workflow-controller
Args:
--configmap
workflow-controller-configmap
--executor-image
argoproj/argoexec:v3.0.5
--namespaced
State: Running
Started: Mon, 31 May 2021 13:08:11 +0800
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Mon, 31 May 2021 12:59:05 +0800
Finished: Mon, 31 May 2021 13:03:04 +0800
Ready: True
Restart Count: 1333
Liveness: http-get http://:6060/healthz delay=90s timeout=1s period=60s #success=1 #failure=3
Environment:
LEADER_ELECTION_IDENTITY: workflow-controller-57fcfb5df8-qvn74 (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from argo-token-hflpb (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
argo-token-hflpb:
Type: Secret (a volume populated by a Secret)
SecretName: argo-token-hflpb
Optional: false
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Warning Unhealthy 7m44s (x3994 over 5d23h) kubelet Liveness probe failed: Get "http://10.244.0.158:6060/healthz": dial tcp 10.244.0.158:6060: connect: connection refused
Warning BackOff 3m46s (x16075 over 5d22h) kubelet Back-off restarting failed container
Could this failure be why my argo workflow is stuck in the pending state? How should I go about troubleshooting this?
EDIT: Output of kubectl get pods --all-namespaces (FYI, these are running on DigitalOcean):
[user@vmmock3 fabric-kube]$ kubectl get pods --all-namespaces
NAMESPACE NAME READY STATUS RESTARTS AGE
argo argo-server-5695555c55-867bx 1/1 Running 1 6d19h
argo minio-58977b4b48-r2m2h 1/1 Running 0 6d19h
argo postgres-6b5c55f477-7swpp 1/1 Running 0 6d19h
argo workflow-controller-57fcfb5df8-qvn74 0/1 CrashLoopBackOff 1522 6d19h
default hlf-ca--atlantis-58bbd79d9d-x4mz4 1/1 Running 0 21h
default hlf-ca--karga-547dbfddc8-7w6b5 1/1 Running 0 21h
default hlf-ca--nevergreen-7ffb98484c-nlg4j 1/1 Running 0 21h
default hlf-orderer--groeifabriek--orderer0-0 1/1 Running 0 21h
default hlf-peer--atlantis--peer0-0 2/2 Running 0 21h
default hlf-peer--karga--peer0-0 2/2 Running 0 21h
default hlf-peer--nevergreen--peer0-0 2/2 Running 0 21h
kube-system cilium-2kjfz 1/1 Running 3 26d
kube-system cilium-operator-84bdd6f7b6-kp9vb 1/1 Running 1 6d20h
kube-system cilium-operator-84bdd6f7b6-pkkf9 1/1 Running 1 6d20h
kube-system coredns-55ff57f948-jb5jc 1/1 Running 0 6d20h
kube-system coredns-55ff57f948-r2q4g 1/1 Running 0 6d20h
kube-system csi-do-node-4r9gj 2/2 Running 0 26d
kube-system do-node-agent-sbc8b 1/1 Running 0 26d
kube-system kube-proxy-hpsc7 1/1 Running 0 26d
I will answer your question partially, as I cannot promise that everything else will work fine; however, I know how to fix the issue with the argo workflow-controller pod.
Answer
In short, you need to update Argo Workflows to a newer version (at least 3.0.6, ideally 3.0.7, which is available), because this looks like a bug in version 3.0.5.
How I got there
First I installed Argo version 3.0.5 (which is not production ready)
and ended up with workflow-controller pod restarts:
kubectl get pods -n argo
NAME READY STATUS RESTARTS AGE
argo-server-645cf8bc47-sbnqv 1/1 Running 0 9m7s
workflow-controller-768565d958-9lftf 1/1 Running 2 9m7s
curl-pod 1/1 Running 0 6m47s
And the same liveness probe failed:
kubectl describe pod workflow-controller-768565d958-9lftf -n argo
Name: workflow-controller-768565d958-9lftf
Namespace: argo
Priority: 0
Node: worker1/10.186.0.3
Start Time: Tue, 01 Jun 2021 14:25:00 +0000
Labels: app=workflow-controller
pod-template-hash=768565d958
Annotations: <none>
Status: Running
IP: 10.244.1.151
IPs:
IP: 10.244.1.151
Controlled By: ReplicaSet/workflow-controller-768565d958
Containers:
workflow-controller:
Container ID: docker://4b797b57ae762f9fc3f7acdd890d25434a8d9f6f165bbb7a7bda35745b5f4092
Image: argoproj/workflow-controller:v3.0.5
Image ID: docker-pullable://argoproj/workflow-controller@sha256:740dca63b11168490d9cc7b2d1b08c1364f4a4064e1d9b7a778ca2ab12a63158
Ports: 9090/TCP, 6060/TCP
Host Ports: 0/TCP, 0/TCP
Command:
workflow-controller
Args:
--configmap
workflow-controller-configmap
--executor-image
argoproj/argoexec:v3.0.5
State: Running
Started: Tue, 01 Jun 2021 14:33:00 +0000
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Tue, 01 Jun 2021 14:29:00 +0000
Finished: Tue, 01 Jun 2021 14:33:00 +0000
Ready: True
Restart Count: 2
Liveness: http-get http://:6060/healthz delay=90s timeout=1s period=60s #success=1 #failure=3
Environment:
LEADER_ELECTION_IDENTITY: workflow-controller-768565d958-9lftf (v1:metadata.name)
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from kube-api-access-ts9zf (ro)
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
Volumes:
kube-api-access-ts9zf:
Type: Projected (a volume that contains injected data from multiple sources)
TokenExpirationSeconds: 3607
ConfigMapName: kube-root-ca.crt
ConfigMapOptional: <nil>
DownwardAPI: true
QoS Class: BestEffort
Node-Selectors: kubernetes.io/os=linux
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 8m57s default-scheduler Successfully assigned argo/workflow-controller-768565d958-9lftf to worker1
Normal Pulled 57s (x3 over 8m56s) kubelet Container image "argoproj/workflow-controller:v3.0.5" already present on machine
Normal Created 57s (x3 over 8m56s) kubelet Created container workflow-controller
Normal Started 57s (x3 over 8m56s) kubelet Started container workflow-controller
Warning Unhealthy 57s (x6 over 6m57s) kubelet Liveness probe failed: Get "http://10.244.1.151:6060/healthz": dial tcp 10.244.1.151:6060: connect: connection refused
Normal Killing 57s (x2 over 4m57s) kubelet Container workflow-controller failed liveness probe, will be restarted
I also tested this endpoint with a pod in the same namespace based on the curlimages/curl image, which has curl built in.
Here's the pod.yaml:
apiVersion: v1
kind: Pod
metadata:
  namespace: argo
  labels:
    app: curl
  name: curl-pod
spec:
  containers:
  - image: curlimages/curl
    name: curl-pod
    command: ['sh', '-c', 'while true; do sleep 30; done']   # sleep needs a duration argument
  dnsPolicy: ClusterFirst
  restartPolicy: Always
kubectl exec -it curl-pod -n argo -- curl http://10.244.1.151:6060/healthz
Which resulted in the same error:
curl: (7) Failed to connect to 10.244.1.151 port 6060: Connection refused
The next step was trying a newer version (3.1.0-rc and then 3.0.7), and it succeeded!
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 27m default-scheduler Successfully assigned argo/workflow-controller-74b4b5455d-skb2f to worker1
Normal Pulling 27m kubelet Pulling image "argoproj/workflow-controller:v3.0.7"
Normal Pulled 27m kubelet Successfully pulled image "argoproj/workflow-controller:v3.0.7" in 15.728042003s
Normal Created 27m kubelet Created container workflow-controller
Normal Started 27m kubelet Started container workflow-controller
And checked it with curl:
kubectl exec -it curl-pod -n argo -- curl 10.244.1.169:6060/healthz
ok
The problem is due to the usage of an old version of the workflow-controller. If you follow the docs, they install an old version of the workflow-controller, which ends up causing a lot of issues. Use the commands below instead,
or find the latest release here:
kubectl create namespace argo
kubectl apply -n argo -f https://github.com/argoproj/argo-workflows/releases/download/v3.3.6/install.yaml
We are using client-go to create Kubernetes Jobs and Deployments. Today, in one of our clusters (Kubernetes v1.18.19), I encountered the weird problem below.
Pods of a Kubernetes Job are always stuck in Pending status, without any reason. kubectl describe pod shows no events. Creating Jobs from the host (via kubectl) works fine, and those pods eventually become Running.
What surprises me is that creating Deployments works: their pods eventually get Running! It fails only for Kubernetes Jobs. Why? How do I fix this? What can I do? I have spent hours on this but made no progress.
kubeconfig used by client-go:
mounted from the host machine, path: /root/.kube/config
kubectl describe job shows:
Name: unittest
Namespace: default
Selector: controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
Labels: job-id=unittest
Annotations: <none>
Parallelism: 1
Completions: 1
Start Time: Sat, 19 Jun 2021 00:20:12 +0800
Pods Statuses: 1 Running / 0 Succeeded / 0 Failed
Pod Template:
Labels: controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
job-name=unittest
Containers:
unittest:
Image: ubuntu:18.04
Port: <none>
Host Port: <none>
Command:
echo hello
Environment: <none>
Mounts: <none>
Volumes: <none>
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal SuccessfulCreate 21m job-controller Created pod: unittest-tt5b2
kubectl describe on the target pod shows:
Name: unittest-tt5b2
Namespace: default
Priority: 0
Node: <none>
Labels: controller-uid=f3cec901-c0f4-4098-86d7-f9a7d1fe6cd1
job-name=unittest
Annotations: <none>
Status: Pending
IP:
IPs: <none>
Controlled By: Job/unittest
Containers:
unittest:
Image: ubuntu:18.04
Port: <none>
Host Port: <none>
Command:
echo hello
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-72g27 (ro)
Volumes:
default-token-72g27:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-72g27
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events: <none>
kubectl get events shows:
55m Normal ScalingReplicaSet deployment/job-scheduler Scaled up replica set job-scheduler-76b7465d74 to 1
19m Normal ScalingReplicaSet deployment/job-scheduler Scaled up replica set job-scheduler-74f8896f48 to 1
58m Normal SuccessfulCreate job/unittest Created pod: unittest-pp665
49m Normal SuccessfulCreate job/unittest Created pod: unittest-xm6ck
17m Normal SuccessfulCreate job/unittest Created pod: unittest-tt5b2
I fixed the issue.
We use a custom scheduler for NPU devices and the default scheduler for GPU devices. For GPU devices, the scheduler name is "default-scheduler", not "default". I passed "default" for those Jobs, which caused the pods to get stuck in Pending.
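For reference, the scheduler is selected per pod via schedulerName in the pod template; a pod whose schedulerName matches no scheduler that is actually running stays Pending with no events, which is exactly the symptom above. A sketch against the unittest Job from the question (the rest of the spec is assumed):

apiVersion: batch/v1
kind: Job
metadata:
  name: unittest
spec:
  template:
    spec:
      schedulerName: default-scheduler   # must name a scheduler that really runs in the cluster
      restartPolicy: Never
      containers:
      - name: unittest
        image: ubuntu:18.04
        command: ["echo", "hello"]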
Below is my Pod manifest:
apiVersion: v1
kind: Pod
metadata:
  name: pod-debian-container
spec:
  containers:
  - name: pi
    image: debian
    command: ["/bin/echo"]
    args: ["Hello, World."]
And below is the output of the "describe" command for this Pod:
C:\Users\so.user\Desktop>kubectl describe pod/pod-debian-container
Name: pod-debian-container
Namespace: default
Priority: 0
Node: minikube/192.168.49.2
Start Time: Mon, 15 Feb 2021 21:47:43 +0530
Labels: <none>
Annotations: <none>
Status: Running
IP: 10.244.0.21
IPs:
IP: 10.244.0.21
Containers:
pi:
Container ID: cri-o://f9081af183308f01bf1de6108b2c988e6bcd11ab2daedf983e99e1f4d862981c
Image: debian
Image ID: docker.io/library/debian@sha256:102ab2db1ad671545c0ace25463c4e3c45f9b15e319d3a00a1b2b085293c27fb
Port: <none>
Host Port: <none>
Command:
/bin/echo
Args:
Hello, World.
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Mon, 15 Feb 2021 21:56:49 +0530
Finished: Mon, 15 Feb 2021 21:56:49 +0530
Ready: False
Restart Count: 6
Environment: <none>
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-sxlc9 (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-sxlc9:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-sxlc9
Optional: false
QoS Class: BestEffort
Node-Selectors: <none>
Tolerations: node.kubernetes.io/not-ready:NoExecute op=Exists for 300s
node.kubernetes.io/unreachable:NoExecute op=Exists for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 15m default-scheduler Successfully assigned default/pod-debian-container to minikube
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.1633901s
Normal Pulled 15m kubelet Successfully pulled image "debian" in 11.4271866s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.0252907s
Normal Pulled 14m kubelet Successfully pulled image "debian" in 11.1897469s
Normal Started 14m (x4 over 15m) kubelet Started container pi
Normal Pulling 13m (x5 over 15m) kubelet Pulling image "debian"
Normal Created 13m (x5 over 15m) kubelet Created container pi
Normal Pulled 13m kubelet Successfully pulled image "debian" in 9.1170801s
Warning BackOff 5m25s (x31 over 15m) kubelet Back-off restarting failed container
Warning Failed 10s kubelet Error: ErrImagePull
And below is another output:
C:\Users\so.user\Desktop>kubectl get pod,job,deploy,rs
NAME READY STATUS RESTARTS AGE
pod/pod-debian-container 0/1 CrashLoopBackOff 6 15m
Below are my questions:
I can see that the Pod is running, but the container inside it is crashing. I can't understand why, because I can see that the Debian image is pulled successfully.
As you can see in the "kubectl get pod,job,deploy,rs" output, RESTARTS equals 6; is it the Pod that has restarted 6 times, or the container?
Why did 6 restarts happen? I didn't specify anything about restarts in my spec.
This looks like a liveness problem related to the CrashLoopBackOff. Have you considered taking a look at this blog? It explains very well how to debug the problem.
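Also worth noting, given the Reason: Completed / Exit Code: 0 in the describe output above: a Pod's restartPolicy defaults to Always, so a container that just echoes and exits is restarted again and again, and the kubelet backs off between restarts and reports CrashLoopBackOff even though nothing failed. A sketch of a run-to-completion variant of the manifest:

apiVersion: v1
kind: Pod
metadata:
  name: pod-debian-container
spec:
  restartPolicy: Never   # or OnFailure; the default Always restarts even after exit code 0
  containers:
  - name: pi
    image: debian
    command: ["/bin/echo"]
    args: ["Hello, World."]

This also answers the restart count: it is the container inside the Pod that has been restarted 6 times, not the Pod object itself.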