Kubernetes: view log of failed container in a deployment

I created a deployment in which the container always fails. I noticed that a new container is automatically created because of the restart policy, but then I am unable to check the log of the failed container. Is there a way to check the log?

You can use the kubectl logs --previous flag:
--previous If true, print the logs for the previous instance of the container in a pod if it exists.
Example:
kubectl logs my-pod-crashlooping --container my-container --previous
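For a pod managed by a Deployment, first find the pod's name, then ask for its previous logs. A minimal sketch, assuming the Deployment's pods carry the label app=my-app (the label and names here are illustrative):
# list the pods the deployment manages
kubectl get pods -l app=my-app
# the pod name stays the same across restarts, so --previous shows the crashed run
kubectl logs <pod-name-from-above> --container my-container --previous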

Related

Kubernetes Pods failed hours ago, how to debug a terminated pod

I have a deployment of pods which failed 22h ago. How often does Kubernetes rotate its logs?
Is there any possibility of viewing the logs of the deployment from 22 hours ago?
Thanks
I think we cannot retrieve logs from a pod that is not in a ready state.
We can get the logs of the container inside the pod by logging into the worker node where the pod was running:
docker ps -a | grep <pod name>
docker logs <container name/id from above output>
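On clusters that use containerd rather than Docker as the container runtime, the docker CLI is not available on the node; the equivalent sketch uses crictl (same idea, assuming crictl is installed on the node):
crictl ps -a | grep <pod name>
crictl logs <container id from above output>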
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
Kubernetes does NOT provide built-in log rotation.
Check official Debug Running Pods documentation:
If your container has previously crashed, you can access the previous
container's crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
In my opinion you are not asking just about logs on the pod; you are more interested in a full debug. Your starting point is again the official documentation, Troubleshoot Applications - Debugging Pods, and the first check is kubectl describe pods ${POD_NAME}
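As a supplement, the Events that describe prints at the bottom can also be pulled directly and sorted by time; a hedged sketch (standard kubectl flags, the pod name is illustrative):
# list events for one pod, oldest first
kubectl get events --field-selector involvedObject.name=my-pod-crashlooping --sort-by=.lastTimestamp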
All I wrote above is great; however, sometimes the only way to get the logs is the approach described in confused genius's answer.

How to check the log of a service deployed in a Kubernetes pod, when the pod is in the Evicted state

Normally, to view the logs of the service running in a pod, we use the command below:
kubectl logs -f <pod_name>
but for an evicted pod, it doesn't work.
All I can see is the overall pod health, by running the command below:
kubectl describe po <evicted_pod_name>
You can use the command below:
kubectl logs my-pod -c my-container --previous
This dumps the pod's container logs (stdout, multi-container case) for a previous instantiation of a container
or
kubectl logs my-pod --previous
This dumps pod logs (stdout) for a previous instantiation of a container
Alternatively, you could also log in to the node where the pod was scheduled and use docker ps to get the container ID and docker logs <container ID> to get the logs.
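For an Evicted pod specifically, the eviction reason survives in the pod status even though the containers are gone; a hedged sketch (standard kubectl field selectors and jsonpath, the pod name is a placeholder):
# list pods that ended in the Failed phase (evicted pods land here)
kubectl get pods --field-selector=status.phase=Failed
# print the eviction message recorded in the pod status
kubectl get pod <evicted_pod_name> -o jsonpath='{.status.message}'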

How to view the log of a crashed pod of a deployment

I know I can view logs of a crash pod by using
kubectl logs --previous
But if a pod belongs to a deployment, when it crashes, a new pod with a different name is going to be created.
I can no longer know the crashed pod name.
Where can I find the log of the crashed pod?
And how can I know if/when/why the pod crashed?
If a Deployment-managed Pod crashes, the same Pod will restart, and you can look at its logs using kubectl logs --previous the same as before.
If you manually kubectl delete pod something a Deployment manages, you'll lose its logs and the Deployment will create a new one; but you have to explicitly do that. If a pod fails, it will be the same pod restarting (or sitting in CrashLoopBackOff state).
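You can watch this happen; a quick sketch using nothing beyond standard kubectl:
# on each crash the RESTARTS column increments, but the pod NAME stays the same
kubectl get pods -w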
If you can't get the logs, then try the command below to find out why the pod failed to start:
kubectl describe pod <pod-name>
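The crash reason and exit code are also available in structured form; a hedged sketch (standard kubectl jsonpath; index 0 assumes a single-container pod):
# show the reason and exit code of the last terminated container
kubectl get pod <pod-name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'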

How to see logs of terminated pods

I am running selenium hubs and my pods are getting terminated frequently. I would like to look at the logs of the pods which are terminated. How to do it?
NAME READY STATUS RESTARTS AGE
chrome-75-0-0e5d3b3d-3580-49d1-bc25-3296fdb52666 0/2 Terminating 0 49s
chrome-75-0-29bea6df-1b1a-458c-ad10-701fe44bb478 0/2 Terminating 0 23s
chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 0/2 ContainerCreating 0 7s
kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
kubectl logs chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5 --previous
Error from server (NotFound): pods "chrome-75-0-8929d8c8-1f7b-4eba-96f2-918f7a0d77f5" not found
Running kubectl logs -p will fetch logs from existing resources at the API level. This means that terminated pods' logs will be unavailable using this command.
As mentioned in other answers, the best way is to have your logs centralized via logging agents or directly pushing these logs into an external service.
Alternatively and given the logging architecture in Kubernetes, you might be able to fetch the logs directly from the log-rotate files in the node hosting the pods. However, this option might depend on the Kubernetes implementation as log files might be deleted when the pod eviction is triggered.
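If you do go down the node route, a hedged sketch of where the kubelet keeps container logs on most current setups (the exact layout varies by runtime and distribution):
# per-pod log directories managed by the kubelet
ls /var/log/pods/<namespace>_<pod-name>_<pod-uid>/<container-name>/
# flattened symlink view that node-level logging agents usually scrape
ls /var/log/containers/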
From kubernetes docs:
Examples
# Return snapshot logs from pod nginx with only one container
kubectl logs nginx
# Return snapshot of previous terminated ruby container logs from pod web-1
kubectl logs -p -c ruby web-1
# Begin streaming the logs of the ruby container in pod web-1
kubectl logs -f -c ruby web-1
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
Options
-c, --container="": Print the logs of this container
-f, --follow[=false]: Specify if the logs should be streamed.
--limit-bytes=0: Maximum bytes of logs to return. Defaults to no limit.
-p, --previous[=false]: If true, print the logs for the previous instance of the container in a pod if it exists.
--since=0: Only return logs newer than a relative duration like 5s, 2m, or 3h. Defaults to all logs. Only one of since-time / since may be used.
--since-time="": Only return logs after a specific date (RFC3339). Defaults to all logs. Only one of since-time / since may be used.
--tail=-1: Lines of recent log file to display. Defaults to -1, showing all log lines.
--timestamps[=false]: Include timestamps on each line in the log output
That is just a simple way of doing it. But in production, I would send all the logs of all the pods to a central log management system such as ELK, by deploying a log-shipping client on the Kubernetes cluster as a DaemonSet, such as Fluent Bit, which will keep sending logs to ELK, where I am able to filter things based on the namespace, pod, container, or any other label.
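A minimal sketch of that setup, assuming Helm is available and using the upstream Fluent Bit chart (the namespace is illustrative, and the output configuration pointing at your ELK endpoint still has to be supplied via chart values):
# add the upstream Fluent Bit chart repository
helm repo add fluent https://fluent.github.io/helm-charts
# install Fluent Bit as a DaemonSet so every node ships its container logs
helm install fluent-bit fluent/fluent-bit --namespace logging --create-namespace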
kubectl get event -o custom-columns=NAME:.metadata.name -n <namespace> --no-headers
use the above command to get the list of terminated pods in your namespace and use
kubectl logs -f pod-name -n <namespace> -p
to see the terminated pod's logs
P.S.: The above command to fetch terminated pod details will only give you the pods which were terminated up to 1 hour ago.
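A hedged refinement of the same idea, filtering events by reason (these reason values are standard, but which ones appear depends on how the pods were terminated):
# only show events about containers being killed or pods being evicted
kubectl get events -n <namespace> --field-selector reason=Killing
kubectl get events -n <namespace> --field-selector reason=Evicted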
You can try the --previous flag on the logs command,
i.e.:
kubectl --namespace namespace logs pod_name --previous
This will dump the pod logs (stdout) for a previous instantiation of a container, according to the kubernetes docs
A combination of the --previous flag and a container name, for a container that was terminated with reason CrashLoopBackOff:
First find the pod in a namespace; its status is CrashLoopBackOff:
kubectl get pods -n namespace_name
NAME READY STATUS RESTARTS AGE
crashing_pod_name 0/9 Init:CrashLoopBackOff 17 (105s ago) 63m
Then use describe to find out the name of the container that failed:
kubectl describe pod -n namespace_name crashing_pod_name
Find the name of the container that was terminated with reason CrashLoopBackOff.
Then list its logs:
kubectl logs -n namespace_name crashing_pod_name -c failing_container_name --previous
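To spot the failing container without reading through the describe output, a small sketch (standard kubectl jsonpath; for the Init:CrashLoopBackOff case above, query initContainerStatuses instead of containerStatuses):
# print each container's name and restart count; the crashing one has the high count
kubectl get pod -n namespace_name crashing_pod_name -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.restartCount}{"\n"}{end}'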

Automatic restart of a Kubernetes pod

I have a Kubernetes cluster on Google Cloud Platform. The Kubernetes cluster contains a deployment which has one pod. The pod has two containers. I have observed that the pod has been replaced by a new pod and the entire data is wiped out. I am not able to identify the reason behind it.
I have tried the below two commands:
kubectl logs [podname] -c [containername] --previous
Result: previous terminated container [containername] in pod [podname] not found
kubectl get pods
Result: I see that the number of restarts for my pod equals 0.
Is there anything I could do to get the logs from my old pod?
Try the command below to see the pod info:
kubectl describe po
There is not much chance you will retrieve this information, but try the following:
1) If you know your failed container's id, try to find old logs here:
/var/lib/docker/containers/<container id>/<container id>-json.log
2) Look at kubelet's logs:
journalctl -u kubelet
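A hedged sketch for narrowing those kubelet logs down (the pod name and time window are illustrative):
# search the last day of kubelet logs for mentions of the pod
journalctl -u kubelet --since "1 day ago" | grep <pod name>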