How to see logs of terminated pods - kubernetes

I am running Selenium hubs and my pods are getting terminated frequently. I would like to look at the logs of the pods that were terminated. How do I do that?
NAME READY STATUS RESTARTS AGE
chrome-75-0-0e5d3b3d-3580-49d1-bc25-3296fdb52666 0/2 Terminating 0 49s
chrome-75-0-29bea6df-1b1a-458c-ad10-701fe44bb478 0/2 Terminating 0 23s
chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 0/2 ContainerCreating 0 7s
kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
$ kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 --previous
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found

Running kubectl logs -p fetches logs only for pods that still exist at the API level, so once a pod object has been deleted its logs are no longer available through this command.
As mentioned in other answers, the best way is to have your logs centralized via logging agents or directly pushing these logs into an external service.
Alternatively and given the logging architecture in Kubernetes, you might be able to fetch the logs directly from the log-rotate files in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.
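For example, on a typical node the container runtime writes per-pod log files that you may still be able to read directly. This is only a sketch; the exact paths depend on your distribution and container runtime, the pod UID and container ID are placeholders, and the files may already have been rotated away:
# On the node that hosted the pod:
ls /var/log/pods/<namespace>_<pod-name>_<pod-uid>/
sudo tail -n 100 /var/log/containers/<pod-name>_<namespace>_<container-name>-<container-id>.log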

From the Kubernetes docs:
Examples
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
# Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
# Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
Options
-c, --container="": Print the logs of this container
-f, --follow[=false]: Specify if the logs should be streamed.
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
-p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
--since=0: Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.
--since-time="": Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.
--tail=-1: Lines of recent log file to display. Defaults to -1, showing all log lines.
--timestamps[=false]: Include timestamps on each line in the log output
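As a small usage sketch combining these options for the pods in the question (the pod and container names are placeholders, and -p only helps while the pod object still exists):
# Last 100 lines of the previously terminated container, with timestamps
kubectl logs -p -c <container-name> --tail=100 --timestamps <pod-name>
# Everything the current container logged in the last 10 minutes
kubectl logs --since=10m -c <container-name> <pod-name>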
That is just a simple way of doing it. But in production, I would send the logs of all the pods to a central log management system such as ELK, by deploying a log-shipping client such as Fluent Bit on the Kubernetes cluster as a DaemonSet. It keeps sending logs to ELK, where I can filter things based on the namespace, pod, container, or any other label.
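As an illustration, one common way to deploy Fluent Bit as a DaemonSet is via its official Helm chart; this is only a sketch, and the namespace plus the output configuration pointing at your Elasticsearch endpoint are assumptions you would adapt:
helm repo add fluent https://fluent.github.io/helm-charts
helm repo update
# Installs Fluent Bit as a DaemonSet; set its outputs (via chart values) to your ELK stack
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace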

kubectl get event -o custom-columns=NAME:.metadata.name -n <namespace> --no-headers
Use the above command to get the list of recently terminated pods in your namespace, then use
kubectl logs -f pod-name -n <namespace> -p
to see the terminated pod's logs
P.S.: Because events are only retained for a limited time (one hour by default), the above command will only show pods that were terminated within roughly the last hour.
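If you only care about terminations, you can also filter the events by reason; this is a sketch, and the reason values (Killing, Evicted) depend on why the pods went away:
kubectl get events -n <namespace> --field-selector reason=Killing
kubectl get events -n <namespace> --field-selector reason=Evicted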

You can try the --previous flag on the logs command, i.e.
kubectl --namespace namespace logs pod_name --previous
This will dump the pod's logs (stdout) for a previous instantiation of a container, according to the Kubernetes docs.

A combination of the --previous flag and a container name, for a container that was terminated with reason: CrashLoopBackOff:
First find the pod in its namespace; its status is CrashLoopBackOff:
kubectl get pods -n namespace_name
NAME READY STATUS RESTARTS AGE
crashing_pod_name 0/9 Init:CrashLoopBackOff 17 (105s ago) 63m
Then use describe to find out the name of the container that failed:
kubectl describe pod -n namespace_name crashing_pod_name
Find the name of the container that terminated with reason CrashLoopBackOff; see the illustrative excerpt after the logs command below.
Then list logs:
kubectl logs -n namespace_name crashing_pod_name -c failing_container_name --previous
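In the describe output, the failing container is typically the one whose Last State shows Terminated. An illustrative (not exact) excerpt might look like:
Init Containers:
  failing_container_name:
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
    Restart Count:  17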

Related

Does `kubectl log deployment/name` get all pods or just one pod?

I need to see the logs of all the pods in a deployment with N worker pods
When I do kubectl logs deployment/name --tail=0 --follow, the command syntax makes me assume that it will tail all pods in the deployment.
However, when I run it I don't see all the output I expect until I manually view the logs for all N pods in the deployment.
Does kubectl log deployment/name get all pods or just one pod?
If you run kubectl logs with a deployment, it will return the logs of only one pod from the deployment.
However, you can accomplish what you are trying to achieve by using the -l flag to return the logs of all pods matching a label.
For example, let's say you create a deployment using:
kubectl create deployment my-dep --image=nginx --replicas=3
Each of the pods gets a label app=my-dep, as seen here:
$ kubectl get pods -l app=my-dep
NAME READY STATUS RESTARTS AGE
my-dep-6d4ddbf4f7-8jnsw 1/1 Running 0 6m36s
my-dep-6d4ddbf4f7-9jd7g 1/1 Running 0 6m36s
my-dep-6d4ddbf4f7-pqx2w 1/1 Running 0 6m36s
So, if you want to get the combined logs of all pods in this deployment you can use this command:
kubectl logs -l app=my-dep
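If the combined output is hard to read, kubectl can prefix each line with its source, and for many replicas you may need to raise the concurrent request limit; the flag values below are just examples:
kubectl logs -l app=my-dep --prefix --all-containers=true --tail=50
kubectl logs -l app=my-dep --max-log-requests=10 -f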
Only one pod seems to be the answer.
I went to "How do I get logs from all pods of a Kubernetes replication controller?" and it seems that the command kubectl logs deployment/name only shows one pod of N.
Also, when you execute kubectl logs on a deployment, it prints a message to the console saying the output is for one pod only (not all the pods).

kubectl logs deploy/my-deployment does not show logs from all pods

What is the purpose of kubectl logs deploy/my-deployment shown at https://kubernetes.io/docs/reference/kubectl/cheatsheet/#interacting-with-deployments-and-services?
I would think it will show me logs from all the pods deployed as part of the my-deployment object. However, even though I have 2 pods in my deployment, that command shows logs from only one of them.
If your deployment has multiple pod replicas, then kubectl logs deployment/... will just pick one on its own.
Here is an example:
kubectl get pods -n kube-system | grep coredns
coredns-78fcd69978-dqf95 1/1 Running 0 42h
coredns-78fcd69978-vgvf2 1/1 Running 0 42h
kubectl logs deployment/coredns -n kube-system
Found 2 pods, using pod/coredns-78fcd69978-vgvf2
As you can see from the documentation you linked:
kubectl logs deploy/my-deployment # dump Pod logs for a Deployment (single-container case)
kubectl logs deploy/my-deployment -c my-container # dump Pod logs for a Deployment (multi-container case)
kubectl logs deploy/my-deployment is used when you have just one container, so in your case it is probably taking the first one. If you have multiple containers, you have to specify one with the -c option.
If you want to have logs from multiple pods, you can use Stern
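A rough Stern usage sketch (the pod query is a regular expression, and the namespace and duration here are placeholders):
# Tail all pods whose names match "my-dep", across all their containers
stern my-dep -n my-namespace --since 1h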
As the documentation provided shows, when there are multiple Pods, the command below displays logs from only one Pod, which it picks on its own at that point in time:
kubectl logs deploy/my-deployment
If there are multiple containers, you can specify one by using "-c" and the container name:
kubectl logs deploy/my-deployment -c my-container
Following the Stern documentation, you can instead get logs from multiple pods, and from multiple containers within a pod, at the same time.
This should work:
kubectl -n <namespace> logs -l <label_selector> --all-containers=true -f --tail=25

How to check the log of a service deployed in a pod of kubernetes, where the pod is at evicted state

Normally, to view the logs of the service running in a pod, we use the below command:
kubectl logs -f <pod_name>
but for an evicted pod, it doesn't work.
All I can do is see the overall pod health by running the below command:
kubectl describe po <evicted_pod_name>
You can use the below command:
kubectl logs my-pod -c my-container --previous
This dumps pod container logs (stdout, multi-container case) for a previous instantiation of a container
or
kubectl logs my-pod --previous
This dumps pod logs (stdout) for a previous instantiation of a container
Alternatively, you could also log in to the node where the pod was scheduled and use docker ps -a to get the container ID and docker logs <container-id> to get the logs.
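On nodes that run containerd or CRI-O instead of Docker, the equivalent node-level commands would use crictl; this is a sketch and assumes crictl is installed on the node:
crictl ps -a | grep <pod_name>   # list containers, including exited ones
crictl logs <container-id>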

using kubectl delete command to remove core-dns pod blocked / No activity

I found my coredns pod throwing this error: Readiness probe failed: Get http://172.30.224.7:8080/health: net/http: request canceled (Client.Timeout exceeded while awaiting headers). I deleted the pod using this command:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
but the command keeps waiting with no response. How can I tell the progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal hangs or stays blocked. When I check the browser UI of the Kubernetes dashboard, the pod still exists. How do I force delete it, or fix it the right way?
You are deleting a pod that is monitored by a deployment controller. That's why, when you delete one of the pods, the controller creates another to keep the number of pods equal to the replica count. If you really want to delete coredns (not recommended), delete the deployment instead of the pods:
$ kubectl delete deployment coredns -n kube-system
Answering another part of your question:
but the command keeps waiting with no response. How can I tell the
progress of the deletion? This is the output:
[root@ops001 ~]# kubectl delete pod coredns-89764d78c-mbcbz -n kube-system
pod "coredns-89764d78c-mbcbz" deleted
and the terminal stays blocked...
When you're deleting a Pod and you want to see what's going on under the hood, you can additionally provide the -v flag and specify the desired verbosity level, e.g.:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system -v 8
If there is some issue with the deletion of a specific Pod, it should tell you the details.
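If the Pod then stays stuck in Terminating, a force deletion can be attempted as a last resort; use it with care, since it only removes the API object without waiting for the kubelet to confirm:
kubectl delete pod coredns-89764d78c-mbcbz -n kube-system --grace-period=0 --force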
I totally agree with @P Ekambaram's comment:
if coredns is not started, you need to check logs and find out why it
is not getting started – P Ekambaram
You can always delete the whole coredns Deployment and re-deploy it, but generally you shouldn't do that. Looking at the Pod logs:
kubectl logs coredns-89764d78c-mbcbz -n kube-system
should also tell you some details explaining why it doesn't work properly. I would say that deleting the whole coredns Deployment is a last-resort measure.

How to debug why my pods are pending in GCE

I'm trying to get a pod running on GCE. The pod has an init container, and is created by me applying a manifest with a deployment that creates 1 replica of the pod.
When I look at my workloads on the cloud console, I can see that under 'Active revisions' my deployment is in the state of 'Pods are pending', and under 'Managed pods', the status is 'PodsInitializing'.
The container logs are empty, and the audit logs contain a single entry for the creation of the deployment.
My pods seem to be stuck in the above state, and I'm not really sure why. How do I go about debugging that?
Edit:
kubectl get pods --namespace=my-namespace
Outputs:
NAME READY STATUS RESTARTS AGE
my-pod-v77jm 0/1 Init:0/1 0 55m
But when I run:
kubectl describe pod my-pod-v77jm
I get
Error from server (NotFound): pods "my-pod-v77jm" not found
If you have access to kube-api via kubectl:
Use describe to see details about the pod and containers:
kubectl describe pod myPod --namespace mynamespace
To view container logs (including init containers)
kubectl logs myPod --namespace mynamespace -c initContainerName
You can get more information about pod statuses and how to debug init containers in the Kubernetes documentation.
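Note that the describe call in the question most likely failed only because the namespace flag was omitted. With the namespace set you can also read the init container status directly; this sketch reuses the names from the question:
kubectl describe pod my-pod-v77jm --namespace=my-namespace
kubectl get pod my-pod-v77jm --namespace=my-namespace -o jsonpath='{.status.initContainerStatuses[*].state}'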