Kubectl get events says there are no resources - kubernetes

I am using Azure Kubernetes Service (managed service). kubectl get events -namespace abc says there are no resources.
I used to get the events all the time on the same cluster, and suddenly it returns that there are no resources. Can someone help out?
Remark: This is a cluster that currently has a lot of traffic and should have events.

Try deleting a pod, then check
kubectl get events -w
in that namespace; you will see some events, so most likely there simply was nothing going on when you checked. Both the control plane components and the kubelet emit events to the API server as they perform actions such as pod creation and deletion, ReplicaSet creation, HPA scaling, etc.
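For example, a quick way to confirm that events are being reported (a minimal sketch; <some-pod-name> is a placeholder for any pod in that namespace):
# In one terminal, watch events in the namespace from the question
kubectl get events -n abc -w
# In another terminal, delete a pod; its controller will recreate it and you should
# see Killing, Scheduled, Pulling/Pulled and Started events show up in the watch
kubectl delete pod <some-pod-name> -n abc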

It probably means there are no events. Right now I see only one event in the kube-system namespace. You will most likely see some events in that namespace:
kubectl get events -n kube-system
which will confirm everything is fine.

Have a look at Timeline of kubernetes events. Events are only retained for a certain amount of time, so maybe there are simply no recent events in that particular namespace.
Also, as 4c74356b41 suggests, check the kube-system namespace; you will most probably see events there.
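As a quick check (a hedged sketch; event retention is controlled by the API server's --event-ttl flag, which defaults to one hour, and on a managed offering like AKS you cannot change it), you can sort whatever events remain by timestamp to see how far back they go:
# List events ordered by when they last occurred
kubectl get events -n kube-system --sort-by=.lastTimestamp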

The 'namespace' parameter should be prefixed with two hyphens. The right command is
kubectl get events --namespace abc
OR
kubectl get events -n abc
kubectl get events is misleading here: when the flag syntax is wrong, it simply reports "No resources found in default namespace." instead of complaining about the bad argument.

Related

How to monitor pod preemption events

I have a bunch of Rancher clusters I take care of, and on some of them developers use PriorityClasses to ensure that some of the more important workloads get scheduled. The three PriorityClasses are in the three-digit range, so they will not interfere with the default ones. However, at present none of the PriorityClasses is set as the default, and preemptionPolicy is not set either, so it defaults to PreemptLowerPriority.
None of the Rancher, Longhorn, Prometheus, Grafana, etc. workloads have priorityClassName set.
Long story short, I believe this causes havoc on the cluster when resources are in short supply.
Before I take my opinion to the developers, I would like to collect some data to back up my story.
The question: how do I detect whether a pod was terminated due to preemption?
I tried to google the subject but couldn't find anything. I was hoping kube-state-metrics would have something, but I didn't find anything.
Any help would be greatly appreciated.
You can try to look for convincing data, such as the pod's termination reason, with the help of kubectl.
You can see the logs from a container's previous run using the following command:
kubectl logs podname -c containername --previous
You can also use the following command to check the lifecycle events the kubelet sent to the API server about the pod:
kubectl describe pod podname
Finally, you can also write a final message to /dev/termination-log, and this will show up as described in the docs.
To use kubectl commands with Rancher, kindly refer to this documentation page.
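If you specifically want to catch preemptions, one hedged option (assuming your scheduler records the usual Preempted event on the victim pod, as the stock kube-scheduler does) is to filter events by reason:
# List recent events whose reason is Preempted, across all namespaces
kubectl get events --all-namespaces --field-selector reason=Preempted
# Events expire quickly (the default TTL is one hour), so stream them if you need to catch it live
kubectl get events --all-namespaces --field-selector reason=Preempted -w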

Memory-efficient way to monitor changes to Kubernetes objects in a namespace

I am working with Kubernetes and kubectl commands, and I am able to get a list of namespaces and then the resources inside those namespaces. The question is: is there an efficient way to monitor all resources (CRDs especially) in a certain namespace for changes? I know I could do this:
kubectl get myobjecttype -n <user-account-1>
and then check timestamps with a separate command, but that seems resource-taxing.
You might be looking for the Kubernetes Watch API.
In fact, you make a List request (see the API reference, e.g. for Pods) and add the watch=1 query parameter to get a continuous stream of changes to the specified resources.
kubectl also supports watches with the -w/--watch flag:
kubectl get myobjecttype -n <user-account-1> -w
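As a rough sketch of what the raw watch looks like (the path below uses core v1 Pods purely as an illustration; your CRD will live under its own API group/version, and user-account-1 is the namespace placeholder from the question):
# Stream changes as newline-delimited JSON events straight from the API server
kubectl get --raw "/api/v1/namespaces/user-account-1/pods?watch=1"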

How to get status history/lineage for Kubernetes pods

I was wondering if there is a kubectl command to quickly get the history of all STATUS values for a given pod.
For example, let's say a pod my-test-pod went from ContainerCreating to Running to OOMKilled to Terminating.
I was wondering if there is a command that experts use to get this lineage. Appreciate a nudge.
Using kubectl get events you can only see events from roughly the last hour. If you want to persist events for a longer duration, you can use eventrouter. The event router serves as an active watcher of Event resources in the Kubernetes system; it takes those events and pushes them to a user-specified sink. This is useful for a number of different purposes, but most notably for long-term behavioral analysis of the workloads running on your Kubernetes cluster.
Use kubectl get events, or kubectl describe pod, which shows the pod's events at the bottom. However, events are only kept for a little while, so this is not a permanent history; for that you would need webhooks or a tool like Prometheus.
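For what is still retained, a hedged way to pull just one pod's events (my-test-pod is the example name from the question) is:
# Show only the events that reference this pod, oldest first
kubectl get events --field-selector involvedObject.name=my-test-pod --sort-by=.lastTimestamp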

How to know when a Service is created?

In Kubernetes, you can listen for events using kubectl get events. This works for other resources, but I would like to know when a Service is created and destroyed.
When I run kubectl describe my-service I get Events: <none>.
How can I know when a service was created?
Every API object has a creation timestamp in its metadata section, though that doesn't tell you when it was edited. For that you might want an audit webhook or something like Brigade.
When a service gets created, you should see it listed when you run the command below:
kubectl get svc
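To read the creation time directly (a small sketch; my-service is the placeholder name from the question):
# Print just the Service's creation timestamp from its metadata
kubectl get svc my-service -o jsonpath='{.metadata.creationTimestamp}'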

How can I access the pod when it become CrashLoopBackOff?

Right now I have some pods deployed on my Kubernetes cluster, but sometimes my image has bugs that prevent the pod from starting correctly.
For example:
nats-1 0/1 CrashLoopBackOff 121 10h
I also cannot see any errors in kubectl logs.
So is there any way to access this pod? Or is there any tool or technique that would allow me to enter the container?
Thanks a lot all! :)
You can use kubectl describe to get the events; it sometimes shows some errors there. Otherwise, you can make the deployment/pod run a command like sleep 3600 to keep the container alive, so you can exec into it and investigate further.
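As a rough sketch of that approach (assuming the pod is managed by a Deployment named nats; the names and the patch below are illustrative, not exact):
# Temporarily override the container's command so it stays up instead of crashing
kubectl patch deployment nats --type=json \
  -p='[{"op":"add","path":"/spec/template/spec/containers/0/command","value":["sleep","3600"]}]'
# Then open a shell in the new pod and look around
kubectl exec -it <new-pod-name> -- sh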
Edited after clarification:
You could go onto the worker node the pod is scheduled on (kubectl get pod <pod-name> -o wide shows which one) and access the node's syslog or the pod's logs there. That should show you more detailed information about what happened.
But #ho-man's approach is very valid and less cumbersome.