Kubernetes Pods failed hours ago, how to debug a terminated pod

I have a deployment of pods which failed 22h ago. How often does Kubernetes rotate its logs?
Is there any possibility to view the logs of the deployment from 22 hours ago?
Thanks

I think we cannot retrieve logs from a pod that is not in a ready state.
We can get the logs of the container inside the pod by logging into the worker node where the pod was running:
docker ps -a | grep <pod name>
docker logs <container name/id from the output above>
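On nodes that run containerd or CRI-O instead of the Docker engine, a similar check is possible with crictl (a sketch, assuming crictl is installed on the node; this is not part of the original answer):
crictl ps -a | grep <pod name>                 # list all containers, including exited ones
crictl logs <container id from the output above>   # print that container's logs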

1. You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
2. Kubernetes itself does not rotate container logs; rotation is handled at the node level, e.g. by the container runtime, the kubelet's log-rotation settings, or an external tool (see the sketch after this list). Check the official Debug Running Pods documentation:
If your container has previously crashed, you can access the previous
container's crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
3. In my opinion you are asking not just about pod logs; you are more interested in a full debug. Your starting point is again the official documentation, Troubleshoot Applications - Debugging Pods, and you should start checking with:
kubectl describe pods ${POD_NAME}
4. All I wrote above is great, however sometimes the only way to get the logs is @confused genius's answer.
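Since the question also asks how often logs are rotated: on newer clusters with a CRI runtime, rotation is controlled by the kubelet configuration. A minimal sketch, assuming the config file sits at its usual path (the location varies by distribution):
grep -E 'containerLogMaxSize|containerLogMaxFiles' /var/lib/kubelet/config.yaml
# typical defaults: containerLogMaxSize: 10Mi  (rotate once a container's log file reaches this size)
#                   containerLogMaxFiles: 5    (rotated files kept per container)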

Related

debugging a bad k8s deployment

I have a deployment that fails within one second, and the logs are destroyed as the deployment does a rollback.
Is there anything similar to logs -f that works before a deployment has started and waits until it starts?
Check previous logs with kubectl logs -p <pod-name> to spot application issues.
Also, check the exit code of your container with:
kubectl describe pod <pod-name> | grep "Exit Code"
Finally, if it is a scheduling problem, check the event log of the corresponding ReplicaSet:
kubectl describe replicaset <replicaset-name>
(the ReplicaSet's name starts with the Deployment's name)
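A minimal sketch tying the three checks together (the pod and ReplicaSet names below are hypothetical placeholders):
POD=my-app-5d7f8c9b4-abcde                        # hypothetical pod name
kubectl logs -p "$POD"                            # logs of the previous (crashed) container
kubectl describe pod "$POD" | grep "Exit Code"    # exit code of the terminated container
kubectl describe replicaset my-app-5d7f8c9b4      # ReplicaSet events (scheduling problems)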

Deployment stops after creation of a few resources

I am instantiating a deployment from Helm. A few pods are getting created, but the deployment stops right after creating them. Although I cannot share much info on the deployment as it is related to my company, how can I debug this kind of issue? The created pods have no problem, as seen from their logs and events.
To debug your application you should first of all:
Check the pod's logs using kubectl logs <pod-name>
Check the events using kubectl get events ...
Sometimes if a pod crashes you cannot find the logs or events, so you need to add a flag to the logs command:
kubectl logs <pod-name> --previous=true
I hope that helps you resolve your issue.
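If the raw event list is noisy, a time-sorted view usually makes the failure sequence easier to follow (a sketch; the namespace is a placeholder and the flag is optional):
kubectl get events --sort-by=.metadata.creationTimestamp -n <namespace>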

how to see the kubernetes container service log with a restarting pod

Now my kubernetes (v1.15.x) deployment keeps restarting all the time. From the log output in the kubernetes dashboard I could not see anything useful. Now I want to log into the pod and check the log from the log dir of my service. But the pod keeps restarting all the time and I have no chance to log into it.
Is there any way to log into the restarting pod, or to dump or view a file inside the pod? I want to find out why the pod restarts all the time.
If you are running on GKE and logging is enabled, you get all container logs by default in the Stackdriver Logging dashboard.
For now, you can run kubectl describe pod <pod name> to check the status code of the container which exited. The status code might help you understand the reason for the restart: was it an application error or an OOM kill?
You can also use the --previous flag to get the logs of the restarted pod.
Example:
kubectl logs <POD name> --previous
In the case of --previous, your pod needs to still exist inside the cluster (restarted, not deleted).
@HarshManvar is right, but I would like to provide you with some more options:
Debugging with an ephemeral debug container: ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
Debugging via a shell on the node: if none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
These two methods are useful when checking logs or exec'ing into the container would not be efficient.
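A minimal sketch of both approaches (the pod, container and node names are placeholders; kubectl debug needs a cluster version with ephemeral containers available):
kubectl debug -it <pod-name> --image=busybox --target=<container-name>   # ephemeral debug container targeting the troubled container's process namespace
kubectl debug node/<node-name> -it --image=busybox                       # debugging pod on the node itself; the node's filesystem is mounted at /host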

Automatic restart of a Kubernetes pod

I have a Kubernetes cluster on Google Cloud Platform. The Kubernetes cluster contains a deployment which has one pod. The pod has two containers. I have observed that the pod has been replaced by a new pod and the entire data is wiped out. I am not able to identify the reason behind it.
I have tried the below two commands:
kubectl logs [podname] -c [containername] --previous
Result: previous terminated container [containername] in pod [podname] not found
kubectl get pods
Result: I see that the number of restarts for my pod equals 0.
Is there anything I could do to get the logs from my old pod?
Try the command below to see the pod info:
kubectl describe po
There is not much chance you will retrieve this information, but try the following:
1) If you know the id of your failed container, look for its old logs here:
/var/lib/docker/containers/<container id>/<container id>-json.log
2) Look at the kubelet's logs:
journalctl -u kubelet
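To narrow the kubelet log down to the relevant time window and pod (a sketch; the time range and pod name are placeholders):
journalctl -u kubelet --since "24 hours ago" | grep <pod name>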

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How could I get more details about old pods? I would like to see:
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
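A slightly richer variant of the same idea, pulling the reason and timestamp from the standard Event fields (a sketch, not part of the original answer):
kubectl get event -o custom-columns=NAME:.metadata.name,REASON:.reason,LAST:.lastTimestamp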
As far as I know, you cannot get the Pod details once the Pod is deleted. May I ask what the use case is?
Example:
If a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false,
you will have a Pod with status Terminated: Error.
If a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true,
you will have a Pod with status Terminated: Completed.
If a container in a Pod restarts, the Pod stays alive and you can get the logs of the previous container (only the immediately previous one) using:
kubectl logs --container <container name> --previous=true <pod name>
If you are upgrading your app and creating Pods with Deployments, then when you update the Deployment (say, with a new image), the old Pod is terminated and a new Pod is created. You can get the Pod details from the Deployment's YAML; if you want the details of the previous Pod, look at the spec section of the previous revision of the Deployment (see the sketch below).
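A minimal sketch of looking at a previous revision of a Deployment (the Deployment name and revision number are placeholders):
kubectl rollout history deployment/<deployment-name>                 # list recorded revisions
kubectl rollout history deployment/<deployment-name> --revision=2    # show the pod template (spec) of an earlier revision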
You can try kubectl logs --previous to list the logs of the previous container instance of a pod:
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
There is a way to find out why pods were deleted and who deleted them: set the event TTL for the cluster to something greater than the default 1h and search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help in all cases of old pods.
kubectl get pods -a
This will give you the list of running pods as well as the terminated pods, in case that is what you are searching for.
If you want to see all the previously deleted pods and you are trying to fetch the previous pods:
Command line:
kubectl get pods
This will give you all the pod details, because every service has one or more pods and they have unique IP addresses.
Here you can check the lifecycle of pods and which phases a pod can be in:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
and you can see a previous pod's logs by typing the command:
kubectl logs <pod-name> --previous