I created a pod using the following command:
kubectl run bb --image=busybox --generator=run-pod/v1 --command -- sh -c "echo hi"
The pod keeps getting restarted repeatedly:
bb 1/1 Running 1 7s
bb 0/1 Completed 1 8s
bb 0/1 CrashLoopBackOff 1 9s
bb 0/1 Completed 2 22s
bb 0/1 CrashLoopBackOff 2 23s
bb 0/1 Completed 3 53s
The exit code is 0:
k describe pod bb
...
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 29 Aug 2019 22:58:36 +0000
Finished: Thu, 29 Aug 2019 22:58:36 +0000
Ready: False
Restart Count: 7
Thanks
kubectl run defaults to setting the restartPolicy to Always, so the container is restarted even though it exits successfully. (Without the --generator=run-pod/v1 flag, older kubectl versions would also set up a Deployment to manage the pod.)
--restart='Always': The restart policy for this Pod. Legal values
[Always, OnFailure, Never]. If set to 'Always' a deployment is created,
if set to 'OnFailure' a job is created, if set to 'Never', a regular pod
is created. For the latter two --replicas must be 1. Default 'Always',
for CronJobs `Never`.
If you change the command to:
kubectl run bb \
--image=busybox \
--generator=run-pod/v1 \
--restart=Never \
--command -- sh -c "echo hi"
The pod will be created with restartPolicy: Never and won't be restarted once it completes.
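To double-check which policy ended up on the pod, you can read it straight from the pod spec (a quick sanity check using jsonpath output):
kubectl get pod bb -o jsonpath='{.spec.restartPolicy}'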
Outside of kubectl run
All pod specs include a restartPolicy, which defaults to Always, so it must be set explicitly if you want different behaviour:
spec:
  template:
    spec:
      containers:
      - name: something
      restartPolicy: Never
If you are looking to run a task to completion, try a Job instead.
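For completeness, a minimal Job running the same one-off echo might look like this (a sketch, not taken from the question; the name hello-once is made up):
apiVersion: batch/v1
kind: Job
metadata:
  name: hello-once
spec:
  backoffLimit: 4
  template:
    spec:
      containers:
      - name: hello
        image: busybox
        command: ["sh", "-c", "echo hi"]
      restartPolicy: Never   # Jobs accept Never or OnFailure, but not Always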
Please look at the Last State reason, which is Completed.
Terminated: Indicates that the container completed its execution and has stopped running. A container enters this state when it has successfully completed execution or when it has failed for some reason. Either way, a reason and exit code are displayed, as well as the container's start and finish time. Before a container enters Terminated, its preStop hook (if any) is executed.
...
State: Terminated
Reason: Completed
Exit Code: 0
Started: Wed, 30 Jan 2019 11:45:26 +0530
Finished: Wed, 30 Jan 2019 11:45:26 +0530
...
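As a side note on the preStop hook mentioned above, this is roughly where it sits in a pod spec (a minimal sketch; the names here are made up):
apiVersion: v1
kind: Pod
metadata:
  name: prestop-demo
spec:
  containers:
  - name: main
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    lifecycle:
      preStop:
        exec:
          command: ["sh", "-c", "echo cleaning up"]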
Please see more details here. You can also try something like the following, which will show the difference:
kubectl run bb --image=busybox --generator=run-pod/v1 --command -- sh -c "sleep 1000"
Related
My understanding is that the AGE shown for a pod when using kubectl get pod shows the time that the pod has been running since the last restart. So, for the pod shown below, my understanding is that it initially restarted 14 times, but hasn't restarted in the last 17 hours. Is this correct, and where is a Kubernetes reference that explains this?
Hope you're enjoying your Kubernetes journey!
In fact, the AGE header shown by kubectl get pod tells you how long ago the pod was created and has been running. But do not confuse the pod with the container:
The "RESTARTS" header is actually tied to the '.status.containerStatuses[0].restartCount' field of the pod manifest. That means this header counts the restarts not of the pod, but of the container inside the pod.
Here is an example:
I just deployed a new pod:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-f4cql 2/2 Running 0 9m38s
If I check the YAML configuration of this pod, we can see that the "status" section contains the said "restartCount" field:
❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
...
status:
...
containerStatuses:
...
- containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
...
name: test-bg
ready: true
restartCount: 0
...
So, to demonstrate what I'm saying, I'm going to connect into my pod and kill the main process my pod is running:
❯ k exec -it test-bg-7d57d546f4-f4cql -- bash
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ ps aux
USER PID %CPU %MEM VSZ RSS TTY STAT START TIME COMMAND
1000 1 0.0 0.0 5724 3256 ? Ss 03:20 0:00 bash -c source /tmp/entrypoint.bash
1000 22 1.5 0.1 2966140 114672 ? Sl 03:20 0:05 java -jar test-java-bg.jar
1000 41 3.3 0.0 5988 3592 pts/0 Ss 03:26 0:00 bash
1000 48 0.0 0.0 8588 3260 pts/0 R+ 03:26 0:00 ps aux
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ kill 22
I have no name!@test-bg-7d57d546f4-f4cql:/tmp$ command terminated with exit code 137
and after this, if I re-execute the "kubectl get pod" command, I get this:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-f4cql 2/2 Running 1 11m
Then, if I go back to my YAML config, we can see that the restartCount field is actually linked to my container and not to my pod.
❯ k get po test-bg-7d57d546f4-f4cql -o yaml
apiVersion: v1
kind: Pod
metadata:
...
spec:
...
status:
...
containerStatuses:
...
- containerID: docker://3f53f140f775416644ea598d554e9b8185e7dd005d6da1940d448b547d912798
...
name: test-bg
ready: true
restartCount: 1
...
So, to conclude, the RESTARTS header gives you the restartCount of the container, not of the pod, while the AGE header gives you the age of the pod.
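If you want both pieces of information side by side, a custom-columns query works too (a sketch; note that creationTimestamp prints the raw creation time rather than a pretty-printed age):
kubectl get pod <pod-name> -o custom-columns=NAME:.metadata.name,RESTARTS:.status.containerStatuses[0].restartCount,CREATED:.metadata.creationTimestamp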
This time, if I delete the pod:
❯ k delete pod test-bg-7d57d546f4-f4cql
pod "test-bg-7d57d546f4-f4cql" deleted
we can see that the restartCount is back to 0, since it's a brand new pod with a brand new age:
NAME READY STATUS RESTARTS AGE
test-bg-7d57d546f4-bnvxx 2/2 Running 0 23s
test-bg-7d57d546f4-f4cql 2/2 Terminating 2 25m
For your example, it means that the container restarted 14 times, but the pod was deployed 17 hours ago.
I can't find the exact documentation for this, but (as explained here: https://kubernetes.io/docs/concepts/workloads/_print/#working-with-pods):
"Note: Restarting a container in a Pod should not be confused with restarting a Pod. A Pod is not a process, but an environment for running container(s). A Pod persists until it is deleted."
Hope this has helped you better understand.
Here is a little tip from https://kubernetes.io/docs/reference/kubectl/cheatsheet/:
kubectl get pods --sort-by='.status.containerStatuses[0].restartCount'
(to sort your pods by their restartCount number :p)
Bye
OMG, they added a new feature in Kubernetes (I don't know since when), but look:
NAME READY STATUS RESTARTS AGE
pod/nginx-deployment-74d589986c-ngvgc 1/1 Running 0 21h
pod/postgres 0/1 CrashLoopBackOff 7 (3m16s ago) 14m
now you can also see how long ago the container last restarted (3m16s ago in my example)!
Here is my Kubernetes version:
❯ kubectl version --short
Client Version: v1.22.5
Server Version: v1.23.5
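If your kubectl or cluster is older and doesn't show the "(3m16s ago)" column, the same information can be derived from the container status, which records when the last terminated instance finished (assuming the container has restarted at least once; replace <pod-name> with your pod):
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.finishedAt}'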
Why does kubectl run dask --image daskdev/dask fail?
# starting the container with docker to make sure it basically works
➜ ~ docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@5b34ce038eb3:/# python
Python 3.8.0 (default, Nov 6 2019, 21:49:08)
[GCC 7.3.0] :: Anaconda, Inc. on linux
Type "help", "copyright", "credits" or "license" for more information.
>>> import dask
>>>
>>> exit()
(base) root@5b34ce038eb3:/# exit
exit
# now trying to fire up the container on a minikube cluster
➜ ~ kubectl run dask --image daskdev/dask
pod/dask created
# let's see what's going on with the Pod
➜ ~ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
dask 0/1 CrashLoopBackOff 1 13s
dask 0/1 Completed 2 24s
dask 0/1 CrashLoopBackOff 2 38s
# not sure why the logs look like something is missing
➜ ~ kubectl logs dask --tail=100
+ '[' '' ']'
+ '[' -e /opt/app/environment.yml ']'
+ echo 'no environment.yml'
+ '[' '' ']'
+ '[' '' ']'
+ exec
no environment.yml
So basically, if you check the result of kubectl describe pod dask, you will see that the last state was Terminated with Exit Code 0, which literally means your container was launched successfully, did its job, and finished successfully. What else would you expect to happen with the pod?
In addition, when you create a pod using kubectl run dask --image daskdev/dask, it is created with restartPolicy: Always by default!
Always means that the container will be restarted even if it exited with a zero exit code (i.e. successfully).
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Fri, 02 Apr 2021 15:06:00 +0000
Finished: Fri, 02 Apr 2021 15:06:00 +0000
Ready: False
Restart Count: 3
Environment: <none>
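If you just want the container to run once and stay in Completed instead of being restarted, a quick alternative (assuming a reasonably recent kubectl, where run creates a bare pod) is:
kubectl run dask --image=daskdev/dask --restart=Never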
There is no /opt/app/environment.yml in your container. If I'm not mistaken, you should first configure it with prepare.sh. Please check more here - DASK section.
# docker run --rm -it --entrypoint bash daskdev/dask:latest
(base) root@431d69bb9a80:/# ls -la /opt/app/
total 12
drwxr-xr-x 2 root root 4096 Mar 27 15:43 .
drwxr-xr-x 1 root root 4096 Mar 27 15:43 ..
not sure why the logs look like something is missing
➜ ~ kubectl logs dask --tail=100
...
exec no environment.yml
There is already a prepared Dask Helm chart. Use it; it works fine:
helm repo add dask https://helm.dask.org/
helm repo update
helm install raffael-dask-release dask/dask
NAME: raffael-dask-release
LAST DEPLOYED: Fri Apr 2 15:43:38 2021
NAMESPACE: default
STATUS: deployed
REVISION: 1
TEST SUITE: None
NOTES:
Thank you for installing DASK, released at name: raffael-dask-release.
This release includes a Dask scheduler, 3 Dask workers, and 1 Jupyter servers.
The Jupyter notebook server and Dask scheduler expose external services to
which you can connect to manage notebooks, or connect directly to the Dask
cluster. You can get these addresses by running the following:
export DASK_SCHEDULER="127.0.0.1"
export DASK_SCHEDULER_UI_IP="127.0.0.1"
export DASK_SCHEDULER_PORT=8080
export DASK_SCHEDULER_UI_PORT=8081
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_PORT:8786 &
kubectl port-forward --namespace default svc/raffael-dask-release-scheduler $DASK_SCHEDULER_UI_PORT:80 &
export JUPYTER_NOTEBOOK_IP="127.0.0.1"
export JUPYTER_NOTEBOOK_PORT=8082
kubectl port-forward --namespace default svc/raffael-dask-release-jupyter $JUPYTER_NOTEBOOK_PORT:80 &
echo tcp://$DASK_SCHEDULER:$DASK_SCHEDULER_PORT -- Dask Client connection
echo http://$DASK_SCHEDULER_UI_IP:$DASK_SCHEDULER_UI_PORT -- Dask dashboard
echo http://$JUPYTER_NOTEBOOK_IP:$JUPYTER_NOTEBOOK_PORT -- Jupyter notebook
NOTE: It may take a few minutes for the LoadBalancer IP to be available. Until then, the commands above will not work for the LoadBalancer service type.
You can watch the status by running 'kubectl get svc --namespace default -w raffael-dask-release-scheduler'
NOTE: It may take a few minutes for the URLs above to be available if any EXTRA_PIP_PACKAGES or EXTRA_CONDA_PACKAGES were specified,
because they are installed before their respective services start.
NOTE: The default password to login to the notebook server is `dask`. To change this password, refer to the Jupyter password section in values.yaml, or in the README.md.
If you still want to create the pod manually, use the manifest below. The main idea is to set restartPolicy: Never.
apiVersion: v1
kind: Pod
metadata:
  name: dask-tesssssst
  labels:
    foo: bar
spec:
  restartPolicy: Never
  containers:
  - image: daskdev/dask:latest
    imagePullPolicy: Always
    name: dask-tesssssst
Please check the official Dask KubeCluster documentation for more examples; the last one I took directly from there.
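For reference, applying that manifest and watching it should leave the pod in Completed without further restarts (a sketch, assuming you saved it as dask-pod.yaml):
kubectl apply -f dask-pod.yaml
kubectl get pod dask-tesssssst -w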
I have been playing around with minikube, and after a set of operations the output of kubectl get pod -w is like this:
nginx 1/1 Running 2 10m
nginx 1/1 Running 3 10m
nginx 0/1 Completed 2 10m
nginx 0/1 CrashLoopBackOff 2 11m
nginx 1/1 Running 3 11m
nginx 1/1 Running 3 12m
I don't understand the count shown at lines 3 and 4. What does the restart count convey exactly?
About the CrashLoopBackOff Status:
A CrashLoopBackOff means that you have a pod starting, crashing, starting again, and then crashing again.
Failed containers that are restarted by the kubelet are restarted with an exponential back-off delay (10s, 20s, 40s …) capped at five minutes, which is reset after ten minutes of successful execution.
CrashLoopBackOff events occur for different reasons, most of the cases being related to the following:
- The application inside the container keeps crashing
- Some parameter of the pod or container has been configured incorrectly
- An error made during the deployment
Whenever you face a CrashLoopBackOff, do a kubectl describe to investigate:
kubectl describe pod POD_NAME --namespace NAMESPACE_NAME
user@minikube:~$ kubectl describe pod ubuntu-5d4bb4fd84-8gl67 --namespace default
Name: ubuntu-5d4bb4fd84-8gl67
Namespace: default
Priority: 0
Node: minikube/192.168.39.216
Start Time: Thu, 09 Jan 2020 09:51:03 +0000
Labels: app=ubuntu
pod-template-hash=5d4bb4fd84
Status: Running
Controlled By: ReplicaSet/ubuntu-5d4bb4fd84
Containers:
ubuntu:
Container ID: docker://c4c0295e1e050b5e395fc7b368a8170f863159879821dd2562bc2938d17fc6fc
Image: ubuntu
Image ID: docker-pullable://ubuntu@sha256:250cc6f3f3ffc5cdaa9d8f4946ac79821aafb4d3afc93928f0de9336eba21aa4
State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 Jan 2020 09:54:37 +0000
Finished: Thu, 09 Jan 2020 09:54:37 +0000
Last State: Terminated
Reason: Completed
Exit Code: 0
Started: Thu, 09 Jan 2020 09:53:05 +0000
Finished: Thu, 09 Jan 2020 09:53:05 +0000
Ready: False
Restart Count: 5
Mounts:
/var/run/secrets/kubernetes.io/serviceaccount from default-token-xxxst (ro)
Conditions:
Type Status
Initialized True
Ready False
ContainersReady False
PodScheduled True
Volumes:
default-token-xxxst:
Type: Secret (a volume populated by a Secret)
SecretName: default-token-xxxst
Optional: false
QoS Class: BestEffort
Tolerations: node.kubernetes.io/not-ready:NoExecute for 300s
node.kubernetes.io/unreachable:NoExecute for 300s
Events:
Type Reason Age From Message
---- ------ ---- ---- -------
Normal Scheduled 7m16s default-scheduler Successfully assigned default/ubuntu-5d4bb4fd84-8gl67 to minikube
Normal Created 5m59s (x4 over 6m52s) kubelet, minikube Created container ubuntu
Normal Started 5m58s (x4 over 6m52s) kubelet, minikube Started container ubuntu
Normal Pulling 5m17s (x5 over 7m5s) kubelet, minikube Pulling image "ubuntu"
Normal Pulled 5m15s (x5 over 6m52s) kubelet, minikube Successfully pulled image "ubuntu"
Warning BackOff 2m2s (x24 over 6m43s) kubelet, minikube Back-off restarting failed container
The Events section will provide you with a detailed explanation of what happened.
RestartCount represents the number of times the container inside a pod has been restarted; it is based on the number of dead containers that have not yet been removed (i.e. it is calculated from dead containers).
The -w on the command is the watch flag, and the various headers are as listed below:
$ kubectl get pods -w
NAME READY STATUS RESTARTS AGE
nginx 1/1 Running 0 21m
To get more detailed output, use the -o wide flag:
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 1 21h 10.244.2.36 worker-node-2 <none> <none>
So the READY field represents the containers inside the pod and can be seen in detail with the describe pod command. Refer to the Pod Lifecycle documentation.
$ kubectl describe pod nginx| grep -i -A6 "Conditions"
Conditions:
Type Status
Initialized True
Ready True
ContainersReady True
PodScheduled True
The RESTARTS field is tracked under Restart Count; grep it from the pod description as below.
$ kubectl describe pod nginx | grep -i "Restart"
Restart Count: 0
So as a test, we now try to restart the above container and see which fields are updated.
We find the node where our container is running and kill the container from the node using the docker command; it should be restarted automatically by Kubernetes.
$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 0 21h 10.244.2.36 worker-node-2 <none> <none>
ubuntu@worker-node-2:~$ sudo docker ps -a | grep -i nginx
4c8e2e6bf67c nginx "nginx -g 'daemon of…" 22 hours ago Up 22 hours
ubuntu@worker-node-2:~$ sudo docker kill 4c8e2e6bf67c
4c8e2e6bf67c
The pod STATUS changes to Error
The READY count goes to 0/1
ubuntu@cluster-master:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 0/1 Error 0 21h 10.244.2.36 worker-node-2 <none> <none>
Once the pod recovers the failed container:
The READY count is 1/1 again
The STATUS changes back to Running
The RESTARTS count is incremented by 1
ubuntu@cluster-master:~$ kubectl get pods -o wide
NAME READY STATUS RESTARTS AGE IP NODE NOMINATED NODE READINESS GATES
nginx 1/1 Running 1 21h 10.244.2.36 worker-node-2 <none> <none>
Check the restart with the describe command as well:
$ kubectl describe pods nginx | grep -i "Restart"
Restart Count: 1
The values in your output are not inconsistent; that is how a pod with a restartPolicy of Always works: it keeps trying to bring back the failed container, backing off with increasing CrashLoopBackOff delays.
Refer to the Pod state examples:
Pod is running and has one Container. Container exits with success.
Log completion event.
If restartPolicy is:
Always: Restart Container; Pod phase stays Running.
OnFailure: Pod phase becomes Succeeded.
Never: Pod phase becomes Succeeded.
List the restarted pods across all namespaces:
kubectl get pods -A |awk '$5 != "0" {print $0}'
I have a Kubernetes Job that has, for instance, parallelism set to 4. When this job is created, I might want to scale this out to, say, 8. But it seems like editing the Job and setting parallelism to 8 doesn't actually create more pods in the Job.
Am I missing something? Or is there no way to scale out a Job?
So as per the Job documentation, you can still scale a Job by running the following command:
kubectl scale job my-job --replicas=[VALUE]
A simple test shows that this option still works as expected, but it is deprecated and will be removed in a future version:
kubectl scale job is DEPRECATED and will be removed in a future
version.
The ability to use kubectl scale jobs is deprecated. All other scale
operations remain in place, but the ability to scale jobs will be
removed in a future release.
The reason is: Deprecate kubectl scale job
Use the Job YAML below as an example to create a job:
apiVersion: batch/v1
kind: Job
metadata:
  name: test-job
spec:
  template:
    spec:
      containers:
      - name: pi
        image: perl
        command: ["perl", "-Mbignum=bpi", "-wle", "print bpi(2010)"]
      restartPolicy: Never
  completions: 1000
  parallelism: 5
Now let's test the behavior:
kubectl describe jobs.batch test-job
Parallelism: 5
Completions: 1000
Start Time: Fri, 17 May 2019 16:58:36 +0200
Pods Statuses: 5 Running / 21 Succeeded / 0 Failed
kubectl get pods | grep test-job | grep Running
test-job-98mlv 1/1 Running 0 13s
test-job-fs2hb 1/1 Running 0 8s
test-job-l8n6v 1/1 Running 0 16s
test-job-lbh46 1/1 Running 0 13s
test-job-m8btl 1/1 Running 0 2s
Changing parallelism with kubectl scale:
kubectl scale jobs.batch test-job --replicas=10
kubectl describe jobs.batch test-job
Parallelism: 10
Completions: 1000
Start Time: Fri, 17 May 2019 16:58:36 +0200
Pods Statuses: 10 Running / 87 Succeeded / 0 Fail
kubectl get pods | grep test-job | grep Running
test-job-475zf 1/1 Running 0 10s
test-job-5k45h 1/1 Running 0 14s
test-job-8p99v 1/1 Running 0 22s
test-job-jtssp 1/1 Running 0 4s
test-job-ltx8f 1/1 Running 0 12s
test-job-mwnqb 1/1 Running 0 16s
test-job-n7t8b 1/1 Running 0 20s
test-job-p4bfs 1/1 Running 0 18s
test-job-vj8qw 1/1 Running 0 18s
test-job-wtjdl 1/1 Running 0 10s
And the last step, which I believe will be the most interesting for you: you can always edit your job using the kubectl patch command:
kubectl patch job test-job -p '{"spec":{"parallelism":15}}'
kubectl describe jobs.batch test-job
Parallelism: 15
Completions: 1000
Start Time: Fri, 17 May 2019 16:58:36 +0200
Pods Statuses: 15 Running / 175 Succeeded / 0 Failed
kubectl get pods | grep test-job | grep Running | wc -l
15
kubectl scale doesn't support the Job resource anymore. Here is the solution that works for me:
kubectl edit job [JOB_NAME]
Set the parallelism field to the value appropriate for you.
It will create new pods or terminate existing ones.
There's a scale command:
kubectl scale job my-job --replicas=[VALUE]
From docs:
kubectl scale causes the number of concurrently-running Pods to
change. Specifically, it changes the value of parallelism to the
[VALUE] you specify.
kubectl scale job, deprecated since 1.10, has been removed in 1.15.
Just use kubectl edit job to edit the config YAML.
A bit of an old question, but since it has no accepted answer, here's the one I believe is correct:
Since scale --replicas=... does not work for jobs anymore, the workaround is:
oc patch -n [namespace] job.batch/[jobname] -p '{"spec":{"parallelism":0}}'
or:
oc patch job -n [namespace] [jobname] -p '{"spec":{"parallelism":0}}'
kubectl patch cronjob [cronjob-name] -p "{\"spec\":{\"jobTemplate\":{\"spec\":{\"parallelism\":0, \"completions\":0}}}}" -n [namespace]
NOTE: On Windows, the outer quotes have to be double quotes (with the inner quotes escaped as above). For OpenShift (the oc command), single quotes are accepted.
Any idea to view the log files of a crashed pod in kubernetes?
My pod lists its state as "CrashLoopBackOff" after starting the replicationController. I searched the available docs and couldn't find anything.
Assuming that your pod still exists:
kubectl logs <podname> --previous
$ kubectl logs -h
-p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
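If the pod has more than one container, add -c to pick which container's previous logs you want (the container name here is a placeholder):
kubectl logs <podname> -c <container-name> --previous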
In many cases, kubectl logs <podname> --previous is returning:
Error from server (BadRequest): previous terminated container "<container-name>" in pod "<pod-name>" not found
So you can try to check the namespace's events (kubectl get events ..) like @alltej showed.
If you don't find the reason for the error with kubectl logs / get events and you can't view it with an external logging tool, I would suggest:
1 ) Check on which node that pod was running on with:
$kubectl get -n <namespace> pod <pod-name> -o=custom-columns=NAME:.metadata.name,STATUS:.status.phase,NODE:.spec.nodeName
NAME STATUS NODE
failed-pod-name Pending dns-of-node
(If you remove the <pod-name> you can see other pods in the namespace).
2 ) SSH into that node and:
A ) Search for the failed pod's container name in /var/log/containers/, dump its .log file, and search for errors - in most cases the cause of the error will be displayed there, along with the actions / events that took place before the error.
B ) If the previous step doesn't help, try searching for the latest system-level errors by running: sudo journalctl -u kubelet -n 100 --no-pager.
The kubectl logs command only works if the pod is up and running. If it is not, you can use the kubectl get events command.
kubectl get events -n <your_app_namespace> --sort-by='.metadata.creationTimestamp'
By default it does not sort the events, hence the --sort-by flag.
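To narrow the events down to the crashing pod itself, a field selector helps (replace the pod name; involvedObject.name is a standard event field):
kubectl get events -n <your_app_namespace> --field-selector involvedObject.name=<pod-name> --sort-by='.metadata.creationTimestamp'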
There was a bug in Kubernetes that prevented obtaining logs for pods in the CrashLoopBackOff state. It looks like it was fixed. Here is the issue on GitHub with additional information.
As discussed on another StackOverflow question, I wrote an open source tool to do this
The main difference with the other answers is that this is triggered automatically when a pod crashes, so it can help avoid scenarios where you start debugging this much later on and the pod itself no longer exists and logs can't be fetched.
If the pod does not exist anymore:
kubectl describe pod {RUNTIME_NAME_OF_POD}
In the output you should have an "Events" section, which contains the error messages that prevented the pod from starting.
Container failures could be due to resource limits being reached:
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: OOMKilled
Exit Code: 137
Started: Wed, 18 Jan 2023 11:28:14 +0530
Finished: Wed, 18 Jan 2023 11:28:18 +0530
Ready: False
Restart Count: 13
OR
The application ended due to an error
State: Waiting
Reason: CrashLoopBackOff
Last State: Terminated
Reason: Error
Exit Code: 2
Started: Wed, 18 Jan 2023 11:50:59 +0530
Finished: Wed, 18 Jan 2023 11:51:03 +0530
Ready: False
Debugging container failure:
Look at the pod status, which will contain the above status information:
sugumar$ kubectl get pod POD_NAME -o yaml
sugumar$ kubectl get events -w | grep POD_NAME_STRING
For default container logs
sugumar$ kubectl logs -f POD_NAME
For a specific container (to find the reason for the application failure):
sugumar$ kubectl logs -f POD_NAME --container CONTAINER_NAME
Look at the events:
sugumar$ kubectl describe deployment DEPLOYMENT_NAME
sugumar$ kubectl describe pod POD_NAME