My Kubernetes (v1.15.x) deployment keeps restarting all the time. From the log output in the Kubernetes dashboard I could not see anything useful. I want to log into the pod and check the logs in my service's log directory, but the pod keeps restarting and I never get the chance to log in.
Is there any way to log into the restarting pod, dump some files, or see the files in the pod? I want to find out why the pod restarts all the time.
If you are running on GKE and logging is enabled, all container logs are collected by default into the Stackdriver (Cloud Logging) dashboard.
You can also run kubectl describe pod <pod name> to check the exit status code of the container that terminated. The status code can help you understand the reason for the restart, e.g. whether it exited with an error or was OOMKilled.
In addition, you can use the --previous flag to get the logs of the restarted pod.
Example:
kubectl logs <POD name> --previous
Note that for --previous to work, the pod still needs to exist inside the cluster.
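If you prefer to read the container status fields directly instead of scanning the describe output, something like this should also work (a sketch; the jsonpath assumes a single-container pod):
kubectl get pod <POD name> -o jsonpath='{.status.containerStatuses[0].restartCount}'
kubectl get pod <POD name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated}'
The second command prints the exit code and reason (for example Error or OOMKilled) of the last terminated instance of the container.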
@HarshManvar is right, but I would like to provide you with some more options:
Debugging with an ephemeral debug container: Ephemeral containers are useful for interactive troubleshooting when kubectl exec is insufficient because a container has crashed or a container image doesn't include debugging utilities, such as with distroless images.
Debugging via a shell on the node: If none of these approaches work, you can find the host machine that the pod is running on and SSH into that host.
These two methods can be useful when checking the logs or exec'ing into the container is not enough; a rough sketch of both is shown below.
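A sketch of both options, assuming a reasonably recent cluster (ephemeral containers are not available on 1.15) and using busybox as a throwaway debug image:
# Attach an ephemeral debug container to the crashing pod
kubectl debug -it <POD name> --image=busybox:1.36 --target=<container name>
# Or start a debug pod on the node itself (the node's filesystem is mounted under /host)
kubectl debug node/<node name> -it --image=busybox:1.36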
Related
From time to time all my pods restart and I'm not sure how to figure out why it's happening. Is there someplace in Google Cloud where I can get that information, or a kubectl command to run? It happens every couple of months or so, maybe less frequently.
Use the methods below to check the reason for the pod restart:
kubectl describe deployment <deployment_name> and kubectl describe pod <pod_name> contain the relevant information, for example:
# Events:
# Type Reason Age From Message
# ---- ------ ---- ---- -------
# Warning BackOff 40m kubelet, gke-xx Back-off restarting failed container
# ..
Here you can see that the pod keeps being restarted because its container keeps failing (Back-off restarting failed container); that is the particular issue we need to troubleshoot.
Check the logs using: kubectl logs <pod_name>
To get the previous logs of your container (the restarted one), use the --previous flag, like this:
kubectl logs your_pod_name --previous
You can also have the container write a final message to /dev/termination-log, and it will show up as described in the docs.
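For example, if the container writes its last words to /dev/termination-log before exiting, you can read the message back roughly like this (a sketch; the exact field path depends on whether you want the current or the previous instance of the container):
kubectl get pod <pod_name> -o jsonpath='{.status.containerStatuses[0].lastState.terminated.message}'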
Attaching a troubleshooting doc for reference.
It's also a good thing to check your cluster and node-pool operations.
Check the cluster operations in Cloud Shell by running the command:
gcloud container operations list
Check the age of the nodes with the command:
kubectl get nodes
Check and analyze how your deployment reacts to operations such as a cluster upgrade, node-pool upgrade, and node-pool auto-repair. You can check in Cloud Logging whether your cluster or node-pools were upgraded using the queries below.
Please note you have to add your cluster and node-pool names to the queries.
Control plane (master) upgraded:
resource.type="gke_cluster"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateCluster" OR "UpdateClusterInternal")
(protoPayload.metadata.operationType="UPGRADE_MASTER"
OR protoPayload.response.operationType="UPGRADE_MASTER")
resource.labels.cluster_name=""
Node-pool upgraded:
resource.type="gke_nodepool"
log_id("cloudaudit.googleapis.com/activity")
protoPayload.methodName:("UpdateNodePool" OR "UpdateClusterInternal")
protoPayload.metadata.operationType="UPGRADE_NODES"
resource.labels.cluster_name=""
resource.labels.nodepool_name=""
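If you prefer the CLI over the Logs Explorer, roughly the same filter can be passed to gcloud logging read. A sketch for the node-pool upgrade query (cluster and node-pool names are placeholders you need to fill in):
gcloud logging read 'resource.type="gke_nodepool" AND protoPayload.metadata.operationType="UPGRADE_NODES" AND resource.labels.cluster_name="<cluster name>" AND resource.labels.nodepool_name="<nodepool name>"' --limit=10 --freshness=30d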
I have a deployment whose pods failed 22h ago. How often does Kubernetes rotate its logs?
Is there any possibility to view the deployment's logs from 22 hours ago?
Thanks
I think we cannot retrieve logs from a pod that is not in a ready state.
We can get the logs of the container inside the pod by logging into the worker node where the pod was running:
docker ps -a | grep <pod name>
docker logs <container name/id from above output>
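On worker nodes that use containerd (or CRI-O) instead of Docker, the equivalent with crictl would be roughly:
crictl ps -a | grep <pod name>
crictl logs <container id from above output>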
You can use kubectl logs --previous to retrieve logs from a previous instantiation of a container.
Kubernetes does NOT provide built-in log rotation.
Check official Debug Running Pods documentation:
If your container has previously crashed, you can access the previous
container's crash log with:
kubectl logs --previous ${POD_NAME} ${CONTAINER_NAME}
In my opinion you are not really asking about logs on the pod; you are more interested in a full debug. Your starting point is again the official documentation, Troubleshoot Applications - Debugging Pods. Start checking with kubectl describe pods ${POD_NAME}
All I wrote above is great; however, sometimes the only way to get the logs is @confused genius's answer.
How to get more details about what is actually the problem?
kubectl logs foo-app-5695559f9c-ntrqf
Error from server (BadRequest): container "foo" in pod "foo-app-5695559f9c-ntrqf"
is waiting to start: trying and failing to pull image
I would like to see the HTTP traffic between Kubernetes and the container registry.
If a container has not started, then there are no container logs from that pod to view, as appears to be the case.
To get more information about the pod or why the container may not be starting, you can use kubectl describe pod which should show you both the pod status and the events relevant to the given pod:
kubectl describe pod <pod-name> --namespace <namespace>
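You can also list just the events for that pod, which is often quicker than reading the whole describe output; a sketch:
kubectl get events --namespace <namespace> --field-selector involvedObject.name=<pod-name> --sort-by=.lastTimestamp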
The most common error is an access issue to the registry. Make sure you have an imagePullSecrets set for the registry that you're trying to pull from.
See: How to pull image from a private registry.
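If you don't have one yet, a minimal sketch of creating such a secret with kubectl (the secret name regcred and all credential values are placeholders; the secret then has to be referenced under spec.imagePullSecrets in your pod or deployment):
kubectl create secret docker-registry regcred --docker-server=<registry host> --docker-username=<user> --docker-password=<password> --namespace <namespace>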
If your image pull secret is correct and you are able to reach the container registry from your Kubernetes cluster, what I would do in this case is use the container runtime (Docker, containerd) that my Kubernetes cluster is using to pull the image and see what is causing the issue; that gives more detailed logs and can be run in debug mode.
For Docker, set "debug": true in the daemon.json configuration.
For containerd, set:
[debug]
level = "debug"
in /etc/containerd/config.toml.
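Then, as a sketch, restart the runtime and try pulling the image manually on the node to see the detailed error (the image reference is a placeholder):
# Docker runtime
sudo systemctl restart docker
docker pull <registry host>/<image>:<tag>
# containerd runtime
sudo systemctl restart containerd
crictl pull <registry host>/<image>:<tag>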
I have a Kubernetes cluster on Google Cloud Platform. The cluster contains a deployment which has one pod, and the pod has two containers. I have observed that the pod was replaced by a new pod and all of its data was wiped out. I am not able to identify the reason behind it.
I have tried the below two commands:
kubectl logs [podname] -c [containername] --previous
Result: previous terminated container [containername] in pod [podname] not found
kubectl get pods
Result: I see that the number of restarts for my pod equals 0.
Is there anything I could do to get the logs from my old pod?
Try the command below to see the pod info:
kubectl describe po
There is not much chance you will retrieve this information, but try the following:
1) If you know your failed container ID, try to find its old logs here:
/var/lib/docker/containers/<container id>/<container id>-json.log
2) Look at the kubelet's logs:
journalctl -u kubelet
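Since the failure was roughly 22 hours ago, you can also narrow the kubelet logs by time, for example:
journalctl -u kubelet --since "22 hours ago"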
We have a k8s cluster. I am trying to access logs from inside a container, but kubectl won't work there. Where would the logs be stored in k8s?
We do not have systemd and found in the docs that:
If systemd is not present, they write to .log files in the /var/log directory. System components inside containers always write to the /var/log directory, bypassing the default logging mechanism.
But I could not find any logs there. So how can I get access, from inside the pod, to the logs that I would normally get with kubectl logs?
How does default logging work in k8s without any logging mechanism set up?
PS: I did go through other similar posts and had no luck with those.
If the application does not log to a file, it may be logging to stdout (which kubectl logs <pod name> should show).
You can try docker logs <name or ID of the container>
If the /var/log directory does not persist in a volume mounted in the container, it will be lost when the pod restarts or moves in the cluster as the /var/log directory will be ephemeral. Check if the pod has restarted or moved in the cluster.
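A quick way to check both the restart count and the node the pod is currently scheduled on:
kubectl get pod <pod name> -o wide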
Find if the pod uses any volumes for persistent storage of /var/log by doing:
kubectl get pod <pod name> -o yaml | grep -i volume
kubectl get persistentvolumes --all-namespaces
kubectl get persistentvolumeclaims --all-namespaces
If the logs you want to access are available via kubectl logs, then it means they are ultimately written to stdout or stderr. Docker and the kubelet work on top of the standard output streams, processing these logs in their own fashion (i.e. via logging plugins). When your process writes something to stdout, it is obviously not stored anywhere on the container's local filesystem.
That said, you can configure your app to log to files, but mind that you then need to handle log rotation, cleanup etc., or your container's filesystem will grow perpetually. If you can't have both in parallel, you do lose the logs from docker/kubernetes logs, which is not so nice. If that is the case, you can run a process in a sidecar that reads the log files from a mounted volume and sends them to stdout/stderr.
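A minimal sketch of such a sidecar, assuming the application writes to /var/log/app/app.log; all names, images and paths here are placeholders:
apiVersion: v1
kind: Pod
metadata:
  name: app-with-log-sidecar   # placeholder name
spec:
  containers:
  - name: app
    image: <your app image>    # placeholder image
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  - name: log-tailer
    # streams the log file to its own stdout, so "kubectl logs <pod> -c log-tailer" works
    image: busybox:1.36
    args: ["/bin/sh", "-c", "tail -n+1 -F /var/log/app/app.log"]
    volumeMounts:
    - name: logs
      mountPath: /var/log/app
  volumes:
  - name: logs
    emptyDir: {}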
The real question is why you need to access the logs inside the pod. Knowing that, maybe there is a better way to achieve what you need (i.e. pipe them first through some parser process).
I understand that the application just logs to a file/directory inside the container.
Could you use
kubectl exec -it <podname> -- bash
or
kubectl exec -it <podname> -- sh
to enter the container/pod and check the logs inside the container.
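If you just want to read a specific log file without opening an interactive shell, a one-off exec also works (the path is a placeholder):
kubectl exec <podname> -- tail -n 100 /path/to/your/logfile.log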