How to access logs of pods in Kubernetes after their deletion

We have a CentOS-based infrastructure for Kubernetes and are also using OpenShift on top of it. We have terminated a pod and now it is no longer visible on the master controller. However, we would like to analyze its logs. Can we still access them? How?

Containers, together with their logs, get deleted when you issue kubectl delete pod <pod-name>. You can use something like Fluentd or logspout to pipe your logs to, say, an ELK or an EFK stack.
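As a hedged sketch of the Fluentd approach, the manifest below is a minimal DaemonSet that tails container logs on each node and ships them to an assumed Elasticsearch service at elasticsearch.logging:9200; the names, namespace, and image tag here are illustrative, not a definitive setup:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: fluentd
  namespace: kube-system
spec:
  selector:
    matchLabels:
      app: fluentd
  template:
    metadata:
      labels:
        app: fluentd
    spec:
      containers:
      - name: fluentd
        image: fluent/fluentd-kubernetes-daemonset:v1-debian-elasticsearch
        env:
        - name: FLUENT_ELASTICSEARCH_HOST
          value: "elasticsearch.logging"   # assumed ES service; adjust to your stack
        - name: FLUENT_ELASTICSEARCH_PORT
          value: "9200"
        volumeMounts:
        - name: varlog
          mountPath: /var/log              # node log directory read by the tail plugin
      volumes:
      - name: varlog
        hostPath:
          path: /var/log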

It looks like the container is removed (docker rm) once the kubectl delete of the pod is done, and the log files are gone with it. One way around this is using Fluentd or something similar for log aggregation.

If you have access to the Kubernetes Dashboard, you can access logs for deleted/completed pods in the desired namespace.
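If the Dashboard is installed but not exposed externally, one common way to reach it is through kubectl proxy; the namespace in the URL below is the default for recent Dashboard releases and may differ in your cluster:

kubectl proxy
# then browse to:
# http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/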

Related

deployed a service on k8s but it is not showing any pods even when it failed

I have deployed a k8s service, however it's not showing any pods. This is what I see:
kubectl get deployments
It should be created in the default namespace.
kubectl get nodes (this shows me nothing)
How do I troubleshoot a failed deployment? The test-control-plane is the one deployed by kind; this is the k8s cluster I'm using.
kubectl get nodes
If the above command shows nothing, there are no nodes in your cluster, so where will your workload run?
You need at least one worker node in the K8s cluster so the deployment can schedule pods on it and run the application.
You can check the worker nodes using the same command:
kubectl get nodes
You can debug further and check the reason for the issue using:
kubectl describe deployment <name of your deployment>
To find out what really went wrong, first follow the steps described by Harsh Manvar in his answer. Perhaps obtaining that information will help you find the problem. If not, check the logs of your deployment: list your pods, see which ones did not start properly, then check their logs.
You can also use kubectl describe on the pods to see in more detail what went wrong. Since you are using kind, I include a list of known errors for you.
You can also see this visual guide on troubleshooting Kubernetes deployments and 5 Tips for Troubleshooting Kubernetes Deployments.
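Putting both answers together, a typical debug sequence looks like the following; the deployment name web and its app=web label are illustrative:

kubectl get nodes                          # confirm at least one node is Ready
kubectl get deployment web                 # compare READY vs desired replicas
kubectl describe deployment web            # check Conditions and Events
kubectl get pods -l app=web                # list the pods the deployment created
kubectl describe pod <pod-name>            # scheduling or image-pull errors
kubectl logs <pod-name>                    # application-level failures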

How to debug a kubernetes cluster?

As the question shows, I have very little knowledge about Kubernetes. Following a tutorial, I made a Kubernetes cluster to run a web app on a local server using Minikube. I have applied the Kubernetes components and they are running, but the web server does not respond to HTTP requests. My problem is that the whole system I have created is like a black box for me, and I have literally no idea how to open it and see where the problem is. Can you explain how I can debug such implementations in a wise way? Thanks.
use a tool like https://github.com/kubernetes/dashboard
You can install kubectl and kubernetes-dashboard in a k8s cluster (https://kubernetes.io/docs/tasks/tools/install-kubectl/), and then use the kubectl command to query information about a pod or container, or use the kubernetes-dashboard web UI to query information about the cluster.
For more information, please refer to https://kubernetes.io/
kubectl get pods
will show you all your pods and their status; a quick check to make sure that everything is at least running.
If there are pods that are unhealthy, then
kubectl describe pod <pod name>
will give some more information.. eg image not found etc
kubectl logs <pod name> --all-containers
is often the next step; use -f to follow the logs as you exercise your API.
It is possible to attach most IDE debuggers to images running in a pod, but instructions differ depending on the language and IDE used...
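For the specific symptom of a web server not answering HTTP on Minikube, a few more checks are worth running; the service name web-server is illustrative:

kubectl get svc web-server                   # check the service type and ports
kubectl get endpoints web-server             # empty ENDPOINTS means the selector matches no pods
kubectl port-forward svc/web-server 8080:80  # bypass NodePort/ingress and test via localhost:8080
minikube service web-server --url            # print the URL minikube exposes for the service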

Why can't I delete heapster and kubernetes-dashboard on gke namespace=kube-system

I want to have full control of what I do with my single-node cluster (savings...lol), but somehow I can't: even if I delete the deployment, it respawns.
As mentioned in another answer, you cannot delete them directly via the Kubernetes API; however, you can delete them indirectly via the Google Container Engine API.
To remove the dashboard, run gcloud container clusters update $CLUSTER_NAME --update-addons=KubernetesDashboard=DISABLED.
To disable heapster you need to disable monitoring using gcloud container clusters update $CLUSTER_NAME --monitoring-service=none (it may actually require disabling another add-on too, I can't recall at the moment).
See https://cloud.google.com/sdk/gcloud/reference/container/clusters/update for the commands referenced above.
Heapster is configured as a cluster addon. The addon manager will reconcile it to its preconfigured state if you change or delete it.
You are stuck with it.
Even if you delete the heapster pod, it restarts automatically. I managed to stop it by scaling the deployment down to zero, as shown below:
kubectl scale --replicas=0 deployment/heapster-v1.6.0-beta.1 --namespace=kube-system
You can find the exact name of the heapster deployment in the output of the command below:
kubectl get deployments --namespace=kube-system
By the way, you can find more options to reduce resource usage here:
https://cloud.google.com/kubernetes-engine/docs/how-to/small-cluster-tuning

pods keep creating themselves even though I deleted all deployments

I am running k8s on AWS, and I updated the deployment of nginx, which normally works fine, but this time the nginx deployment won't show up in "kubectl get deployments".
I want to kill all the pods related to nginx, but they keep recreating themselves. I deleted all deployments with "kubectl delete --all deployments"; the other pods got terminated, but not nginx.
I have no idea how to stop the pods from recreating.
Any idea where to start?
Check the deployment, replication controller and replica set, and remove them:
kubectl get deploy,rc,rs
In modern Kubernetes, there is also an annotation kubernetes.io/created-by on the Pod showing its "owner", but I can't lay my hands on the documentation link right now. However, I found a pastebin containing a concrete example of the annotation's contents.
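Note that in current Kubernetes versions the owner is recorded in the pod's metadata.ownerReferences field rather than an annotation; a quick way to inspect it (pod name illustrative):

kubectl get pod nginx-abc123 -o jsonpath='{.metadata.ownerReferences[*].kind}/{.metadata.ownerReferences[*].name}'
# e.g. ReplicaSet/nginx-6d4cf56db6; delete the owning ReplicaSet or its
# Deployment, not the pod itself, to stop the recreation loop.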

Google (Stackdriver) Logging fails after Kubernetes rolling-update

When performing a kubectl rolling-update of a replication controller in Kubernetes (Google Container Engine), the Google (Stackdriver) Logging agent doesn't pick up the newly deployed pod. The log is stuck at the last message produced by the old pod.
Consequently, the logs for the replication controller are out-of-date until we do a manual restart (i.e. kubectl scale and kubectl delete) of the pod and the logs are updated again.
Can anybody else confirm that behaviour? Is there a workaround?
I can try to repro the behavior, but first can you try running kubectl logs <pod-name> on the newly created pod after doing the rolling-update to verify that the new version of your app was producing logs at all?
This sounds more likely to be an application problem than an infrastructure problem, but if you can confirm that it is an infra problem I'd love to get to the bottom of it.
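As a concrete version of that check, something like the following, with the pod name taken from the post-update listing, confirms whether the new pods emit logs at all:

kubectl get pods                     # find the pod created by the rolling-update
kubectl logs <new-pod-name>          # should show fresh application output
kubectl logs -f <new-pod-name>       # stream to confirm logging continues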