Automatic restart of a Kubernetes pod

I have a Kubernetes cluster on Google Cloud Platform. The Kubernetes cluster contains a deployment which has one pod. The pod has two containers. I have observed that the pod has been replaced by a new pod and the entire data is wiped out. I am not able to identify the reason behind it.
I have tried the below two commands:
kubectl logs [podname] -c [containername] --previous
Result: previous terminated container [containername] in pod [podname] not found
kubectl get pods
Result: I see that the number of restarts for my pod equals 0.
Is there anything I could do to get the logs from my old pod?

Try the command below to see the pod info and recent events:
kubectl describe po
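As a minimal sketch (the pod name is a placeholder), describe the pod and focus on the Events section and each container's Last State, which record why a container was last terminated:
# Full description of the pod
kubectl describe pod <podname>
# Just the events (scheduling problems, OOM kills, failed probes, back-offs)
kubectl describe pod <podname> | grep -A 15 "Events:"
# Just the last termination state of each container (Reason, Exit Code)
kubectl describe pod <podname> | grep -A 5 "Last State"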

There is not much chance you will retrieve this information, but try the following:
1) If you know your failed container ID, try to find the old logs here:
/var/lib/docker/containers/<container id>/<container id>-json.log
2) Look at the kubelet's logs:
journalctl -u kubelet
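As a rough sketch (assuming the node still runs Docker and you can SSH into it), you can look up old container IDs for the pod and then read their JSON logs and the kubelet journal:
# On the node: list (including exited) containers whose name contains the pod name
docker ps -a --filter "name=<podname>"
# Read the raw JSON log for a container ID found above
sudo tail -n 200 /var/lib/docker/containers/<container id>/<container id>-json.log
# Filter the kubelet journal for entries mentioning the pod
journalctl -u kubelet | grep <podname>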

Related

In a Google Cloud Kubernetes cluster my pods sometimes all restart, how do I find the reason for the restart?

From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there somewhere in Google Cloud where I can get that information, or a kubectl command to run? It happens every couple of months or so, maybe less frequently than that.
Use the methods below to check the reason for a pod restart:
Use kubectl describe deployment <deployment_name> and kubectl describe pod <pod_name>, which contain the relevant information in their Events section.
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Warning BackOff 40m kubelet, gke-xx Back-off restarting failed container
# ..
Here you can see that the pod is being restarted because its container keeps failing (back-off restarting the failed container). That is the specific issue to troubleshoot.
Check the logs using: kubectl logs <pod_name>
To get the previous logs of your container (the restarted one), you may use the --previous flag on the pod, like this:
kubectl logs your_pod_name --previous
You can also write a final message to /dev/termination-log, and this will show up in the pod's status as described in the docs.
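A minimal sketch of the termination message mechanism (the message and names are made up for illustration): have the container write its last words to /dev/termination-log before exiting, then read them back from the pod's status.
# Inside the failing container, before it exits:
#   sh -c 'echo "config file missing" > /dev/termination-log; exit 1'
# After the restart, read the message recorded for the previous container:
kubectl get pod <pod-name> -o go-template='{{range .status.containerStatuses}}{{.lastState.terminated.message}}{{end}}'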
Attaching a troubleshooting doc for reference.
It's also a good thing to check your cluster and node-pool operations.
Check the cluster operation in cloud shell and run the command:
gcloud container operations list
Check the age of the nodes with the command
kubectl get nodes
Check and analyze how your deployment reacts to operations such as a cluster upgrade, node-pool upgrade, and node-pool auto-repair. You can check Cloud Logging for cluster or node-pool upgrades using the queries below:
Please note you have to add your cluster and node-pool name in the queries.
Control plane (master) upgraded:
resource.type="gke_cluster"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal")
(protoPayload.metadata.operationType="UPGRADE_MASTER"
OR protoPayload.response.operationType="UPGRADE_MASTER")
resource.labels.cluster_name=""
Node-pool upgraded:
resource.type="gke_nodepool"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal")
protoPayload.metadata.operationType="UPGRADE_NODES"
resource.labels.cluster_name=""
resource.labels.nodepool_name=""
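If you prefer the CLI over the Logs Explorer, roughly the same node-pool query can be run with gcloud logging read (a sketch; fill in your cluster and node-pool names and adjust --freshness to the window you care about):
gcloud logging read '
  resource.type="gke_nodepool"
  log_id("cloudaudit.googleapis.com/activity")
  protoPayload.metadata.operationType="UPGRADE_NODES"
  resource.labels.cluster_name="<cluster-name>"
  resource.labels.nodepool_name="<nodepool-name>"
' --limit=20 --freshness=30d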

Kubernetes Pods failed hours ago, how to debug a terminated pod

I have a deployment whose pods failed 22h ago. How often does Kubernetes rotate its logs?
Is there any possibility to view the logs of the deployment from 22 hours ago?
Thanks
I don't think you can retrieve logs from a pod that is no longer on the cluster.
You can, however, get the logs of the container inside the pod by logging into the worker node where the pod was running:
docker ps -a | grep <pod name>
docker logs <container name/id from the output above>
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
Kubernetes does NOT provide built-in log rotation.
Check official Debug Running Pods documentation:
If your container has previously crashed, you can access the previous
container's crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
In my opinion you are not really asking about pod logs; you are more interested in full debugging. Your starting point is again the official documentation, Troubleshoot Applications - Debugging Pods. Start checking with kubectl describe pods ${POD_NAME}
All of the above is useful; however, sometimes the only way to get the logs is the node-level approach in #confused genius's answer.

How to see the Kubernetes container service log with a restarting pod

My Kubernetes (v1.15.x) deployment keeps restarting all the time. From the log output in the Kubernetes dashboard I could not see anything useful. Now I want to log into the pod and check the logs in my service's log directory, but the pod keeps restarting and I have no chance to log into it.
Is there any way to log into the restarting pod, dump some files, or see the files in the pod? I want to find out why the pod restarts all the time.
If you are running on GKE and logging is enabled, all container logs go into the Stackdriver (Cloud Logging) dashboard by default.
For now, you can run kubectl describe pod <pod name> to check the exit code of the container that exited. The exit code can help you understand the reason for the restart: is it an application error, or was the container OOMKilled?
You can also use the --previous flag to get the logs of the restarted pod.
Example:
kubectl logs <POD name> --previous
Note that for --previous to work, the pod still needs to exist inside the cluster.
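If you only want the exit code and reason (for example Error vs OOMKilled) without reading the whole describe output, a jsonpath query along these lines should work (the pod name is a placeholder):
kubectl get pod <POD name> -o jsonpath='{range .status.containerStatuses[*]}{.name}{"\t"}{.lastState.terminated.reason}{"\t"}{.lastState.terminated.exitCode}{"\n"}{end}'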
#HarshManvar is right but I would like to provide you with some more options:
Debugging with an ephemeral debug container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
Debugging via a shell on the node: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
The two methods above are useful when checking the logs or exec-ing into the container alone is not enough; see the sketches below.
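Sketches of both approaches, assuming a kubectl recent enough to support kubectl debug (the image and names are placeholders):
# Attach an ephemeral debug container to the crashing pod; --target shares the app container's process namespace
kubectl debug -it <pod-name> --image=busybox:1.36 --target=<app-container> -- sh
# Or debug at the node level: start an interactive pod on the node (the host filesystem is mounted under /host)
kubectl debug node/<node-name> -it --image=ubuntu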

How to list Kubernetes recently deleted pods?

Is there a way to get some details about a Kubernetes pod that was deleted (stopped, replaced by a new version)?
I am investigating a bug. I have logs with my pod name, but that pod does not exist anymore; it was replaced by another one (with a different configuration). The new pod resides in the same namespace, replication controller, and service as the old one.
Commands like
kubectl get pods
kubectl get pod <pod-name>
work only with current pods (live or stopped).
How could I get more details about old pods? I would like to see:
when they were created
which environment variables they had when created
why and when they were stopped
As of today, kubectl get pods -a is deprecated, and as a result you cannot get deleted pods.
What you can do though, is to get a list of recently deleted pod names - up to 1 hour in the past unless you changed the ttl for kubernetes events - by running:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
You can then investigate further issues within your logging pipeline if you have one in place.
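To also see when and why pods were killed (within the event TTL window), a slightly richer event listing is possible; a sketch:
kubectl get events --field-selector involvedObject.kind=Pod -o custom-columns=TIME:.lastTimestamp,POD:.involvedObject.name,REASON:.reason,MESSAGE:.message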
As far as I know you cannot get the Pod details once the Pod is deleted. May I know what the use case is?
Example:
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/false
you will have a Pod with status terminated: Error
if a Pod is created using kubectl run busybox-test-pod-status --image=busybox --restart=Never -- /bin/true
you will have a Pod with status terminated: Completed
if a container in a Pod restarts: the Pod will be alive and you can get the logs of previous container (only the previous container) using
kubectl logs --container <container name> --previous=true <pod name>
If you are upgrading your app and creating Pods using Deployments: when you update the Deployment (say, with a new image), the old Pod will be terminated and a new Pod will be created. You can get the Pod details from the Deployment's YAML. If you want the details of the previous Pod, you have to look at the "spec" section of the previous Deployment revision.
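If the old Pod came from an earlier Deployment revision, its pod template (image, environment variables, and so on) is still recorded in the rollout history; a sketch with placeholder names:
# List the recorded revisions of the Deployment
kubectl rollout history deployment/<deployment-name>
# Show the full pod template of one old revision
kubectl rollout history deployment/<deployment-name> --revision=2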
You can try kubectl logs --previous to list the logs of a previously stopped pod
http://kubernetes.io/docs/user-guide/kubectl/kubectl_logs/
You may also want to check out these debugging tips
http://kubernetes.io/docs/user-guide/debugging-pods-and-replication-controllers/
There is a way to find out why pods were deleted and who deleted them.
The only way to find out is to set the event TTL in Kubernetes to something greater than the default 1h and search through the events:
kubectl get event -o custom-columns=NAME:.metadata.name | cut -d "." -f1
If your container has previously crashed, you can access the previous container’s crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
There is this flag:
-a, --show-all=false: When printing, show all resources (default hide terminated pods.)
But this may not help with pods that have already been deleted.
kubectl get pods -a
This lists the running pods as well as the terminated pods, in case that is what you are searching for.
If you want to see previously deleted pods and you are trying to fetch details about them:
Command line:
kubectl get pods
This gives you the details of the current pods; every service has one or more pods, and each pod has a unique IP address.
Here you can check the lifecycle of pods and the phases a pod goes through:
https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle
And you can see a previous pod's logs by typing the command:
kubectl logs <pod-name> --previous

Error while creating pods in Kubernetes

I have installed Kubernetes in Ubuntu server using instructions here. I am trying to create pods using kubectl run hello-minikube --image=gcr.io/google_containers/echoserver:1.4 --hostport=8000 --port=8080 as listed in the example. However, when I do kubectl get pod I get the status of the container as pending. I further did kubectl describe pod for debugging and I see the message:
FailedScheduling pod (hello-minikube-3383150820-1r4f7) failed to fit in any node fit failure on node (minikubevm): PodFitsHostPorts.
I am further trying to delete this pod with kubectl delete pod hello-minikube-3383150820-1r4f7, but when I then do kubectl get pod I see another pod with the prefix "hello-minikube-3383150820-" that I haven't created. Does anyone know how to fix this problem? Thank you in advance.
The PodFitsHostPorts predicate is failing because you have something else on your nodes using port 8000. You might be able to find what it is by running kubectl describe svc.
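To see which pods are claiming host ports on your nodes (and whether something else already holds 8000), a jsonpath listing like the following may help (a sketch; it assumes the ports are declared in the pod specs):
kubectl get pods --all-namespaces -o jsonpath='{range .items[*]}{.metadata.namespace}{"\t"}{.metadata.name}{"\t"}{.spec.containers[*].ports[*].hostPort}{"\n"}{end}' | grep 8000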
kubectl run creates a deployment object (you can see it with kubectl describe deployments) which makes sure that you always keep the intended number of replicas of the pod running (in this case 1). When you delete the pod, the deployment controller automatically creates another for you. If you want to delete the deployment and the pods it keeps creating, you can run kubectl delete deployments hello-minikube.