debugging a bad k8s deployment - kubernetes

I have a deployment that fails within one second, and the logs are destroyed when the deployment rolls back.
Is there anything similar to logs -f that works before a deployment has started and waits until it starts?

Check previous logs with kubectl logs -p <pod-name> to spot application issues.
Also, check the exit code of your container with:
kubectl describe pod <pod-name> | grep "Exit Code"
Finally, if it is a scheduling problem, check out the event log of the corresponding ReplicaSet:
kubectl describe replicaset <name-of-replicaset>
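As a quick sketch of the exit-code check above: the Exit Code field can be pulled out of the describe output with grep and awk. The sample text below only imitates kubectl's describe format; treat the exact layout as an assumption, not a guaranteed interface.

```shell
# Sketch: pull the container exit code out of `kubectl describe pod` output.
# The sample below imitates the describe format; the exact field layout is
# an assumption.
exit_code_of() {
  printf '%s\n' "$1" | grep 'Exit Code' | awk '{print $NF}'
}

sample='    Last State:     Terminated
      Reason:       Error
      Exit Code:    137'

exit_code_of "$sample"   # prints 137
```

In practice you would pipe the real output in, e.g. kubectl describe pod <pod-name> | exit_code_of "$(cat)".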

Related

What are "snapshot logs" and how do they differ from "standard(?) logs" in Kubernetes?

I was looking for a way to stream the logs of all pods of a specific deployment of mine.
So, some days ago I found this SO answer, which gave me a magical command:
kubectl logs -f deployment/<my-deployment> --all-containers=true
However, I've just discovered, after a lot of time spent debugging, that this command actually shows the logs of just one pod, not all the pods of the deployment.
So I went to kubectl's official documentation and found nothing relevant on the topic, just the following phrase above the example that uses a deployment, as a kind of selector, for log streaming:
...
# Show logs from a kubelet with an expired serving certificate
kubectl logs --insecure-skip-tls-verify-backend nginx
# Return snapshot logs from first container of a job named hello
kubectl logs job/hello
# Return snapshot logs from container nginx-1 of a deployment named nginx
kubectl logs deployment/nginx -c nginx-1
So why is it that the first example says "Show logs" and the other two say "Return snapshot logs"?
Is it because of this "snapshot" that I can't retrieve logs from all the pods of the deployment?
I've searched a lot for more deep documentation on streaming logs with kubectl but couldn't find any.
To return the logs of all pods of a deployment, you can use the same selector as the deployment itself. First retrieve the deployment's selector:
kubectl get deployment <name> -o jsonpath='{.spec.selector}' --namespace <name>
Then retrieve the logs using that same selector:
kubectl logs --selector <key1=value1,key2=value2> --namespace <name>
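The two steps above can be glued together. The helper below converts the matchLabels JSON that jsonpath returns into the key1=value1,key2=value2 form that kubectl logs --selector expects; it is a sketch that assumes the plain {"matchLabels":{...}} shape and deliberately avoids jq.

```shell
# Sketch: convert the selector JSON returned by
#   kubectl get deployment <name> -o jsonpath='{.spec.selector}'
# into a label selector string. Assumes the simple matchLabels shape.
selector_from_json() {
  printf '%s' "$1" \
    | sed 's/.*"matchLabels":{\([^}]*\)}.*/\1/' \
    | tr -d '"' | tr ':' '='
}

selector_from_json '{"matchLabels":{"app":"nginx","tier":"frontend"}}'
# prints app=nginx,tier=frontend
```

With a real cluster the two commands then compose as:
kubectl logs --selector "$(selector_from_json "$(kubectl get deployment <name> -o jsonpath='{.spec.selector}')")" --namespace <name>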

Kubernetes Pods failed hours ago, how to debug a terminated pod

I have a deployment whose pods failed 22h ago. How often does Kubernetes rotate its logs?
Is there any possibility to view the deployment's logs from 22 hours ago?
Thanks
I think we cannot retrieve logs from a pod that is no longer present.
We can get the logs of the container inside the pod by logging into the worker node where the pod was running:
docker ps -a | grep <pod-name>
docker logs <container-name-or-id-from-the-output-above>
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
Kubernetes does NOT provide built-in log rotation.
Check official Debug Running Pods documentation:
If your container has previously crashed, you can access the previous
container's crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
In my opinion you are not asking about logs on a pod; you are more interested in full debugging. Your starting point is again the official documentation, Troubleshoot Applications - Debugging Pods. And start checking with kubectl describe pods ${POD_NAME}
All of the above is great, but sometimes the only way to get the logs is the node-level approach described in the earlier answer.
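If the pod still exists and you only need a window from roughly 22 hours back, kubectl logs also accepts --since-time with an RFC3339 timestamp. A sketch of building that timestamp, assuming GNU date for the relative-time syntax:

```shell
# Build an RFC3339 timestamp 22 hours in the past (GNU date syntax assumed).
since=$(date -u -d '22 hours ago' +%Y-%m-%dT%H:%M:%SZ)
echo "$since"
```

Then: kubectl logs <pod-name> --since-time="$since" — though this only helps if the kubelet still has those log lines on disk.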

Deployment stops after creation of few resources

I am instantiating a deployment from Helm. A few pods get created, but the deployment stops right after creating them. Although I cannot share much info on the deployment as it is related to my company, how can I debug this kind of issue? The created pods show no problems in their logs and events.
To debug your application you should first of all:
Check the pod logs using kubectl logs <pod-name>
Check the events using kubectl get events .....
Sometimes, if a pod crashes, you cannot find the logs or events, so you need to add a flag to the logs command:
kubectl logs <pod-name> --previous=true
I hope that helps you resolve your issue.

Tailing few lines from huge logs of kubectl logs -f

kubectl logs -f pod shows all logs from the beginning, which becomes a problem when the log is huge and we have to wait a few minutes for the last line. It gets even worse over a remote connection. Is there a way to tail only the last 100 lines of the logs and follow them?
In a cluster, the best practice is to gather all logs in a single point through an aggregator and analyze them with a dedicated tool. For that reason, the log command in Kubernetes is quite basic.
Anyway kubectl logs -h shows some options useful for you:
# Display only the most recent 20 lines of output in pod nginx
kubectl logs --tail=20 nginx
# Show all logs from pod nginx written in the last hour
kubectl logs --since=1h nginx
Some tools with your requirements (and more) are available on github, some of which are:
https://github.com/boz/kail
https://github.com/wercker/stern
Try kubectl logs -f <pod-name> --tail=10
To fetch tail lines from logs of a pod with multi containers.
kubectl logs <pod name> --all-containers=true --tail=10
To fetch tail lines from the logs of all pods within an application:
kubectl logs --selector app=<your application> --tail=10
(e.g. if your application has 3 pods, the output of the above command can be 30 lines, 10 from each pod)
You can use this approach to get the first 10 lines:
kubectl logs my-pod-name -n my-ns | head -n 10
You can also follow logs as they are written if you are testing something:
kubectl logs my-pod-name --follow
This will work just like running tail -f in bash or other shells.
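The --tail flag behaves like the familiar shell tail. A purely local demonstration of the same trimming, where seq stands in for a pod's log stream (no cluster needed):

```shell
# Local demonstration of what --tail does: keep only the last N lines.
# seq stands in for a pod's log stream here.
seq 1 200 | tail -n 100 | head -n 1   # prints 101 (first of the last 100 lines)
```

Against a real pod the equivalent is kubectl logs <pod-name> --tail=100 -f.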

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name. That pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller, and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How I could get more details about old pods? I would like to see
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
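To see what the cut in that pipeline recovers: event names typically look like <pod-name>.<suffix>, so splitting on the first dot yields the pod name. The sample names below are made up to mimic the custom-columns output:

```shell
# Split made-up event names on "." to recover the pod names, exactly as the
# `cut -d "." -f1` in the command above does.
printf '%s\n' \
  'my-pod-7d9f8.17a2b3c4d5e6f708' \
  'other-pod-abc12.17a2b3c4d5e6f709' \
  | cut -d '.' -f1
# prints:
# my-pod-7d9f8
# other-pod-abc12
```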
As far as I know you cannot get the Pod details once the Pod is deleted. May I ask what the use case is?
Example:
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false
you will have a Pod with status terminated:Error
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true
you will have a Pod with status terminated:Completed
if a container in a Pod restarts: the Pod stays alive and you can get the logs of the previous container (only the most recent previous one) using
kubectl logs --container <container-name> --previous=true <pod-name>
if you are doing an upgrade of your app and you are creating Pods using Deployments: when the Deployment is updated (say with a new image), the old Pod will be terminated and a new Pod will be created. You can get the Pod details from the Deployment's YAML. If you want details of the previous Pod, look at the "spec" section of the previous Deployment's YAML.
You can try kubectl logs <pod-name> --previous to list the logs of a previously stopped pod.
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
There is a way to find out why pods were deleted and who deleted them.
Set the event TTL in Kubernetes to be greater than the default 1h, then search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help in all cases of old pods.
kubectl get pods -a
which gives you the list of running pods plus the terminated ones, in case that is what you are searching for.
If you are trying to fetch details of previously deleted pods, note that
kubectl get pods
only shows the currently existing pods with their details; every service has one or more pods and each pod has a unique IP address, but once a pod is deleted and replaced, it no longer appears there.
Here you can check the lifecycle of pods and what phases a pod has:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
and you can see the previous pod's logs by typing the command:
kubectl logs <pod-name> --previous