My software has a dashboard that shows system information (CPU, RAM, disk). But when it runs in Kubernetes pods, the dashboard shows the worker node's resources rather than the pod's. How should I approach this? As far as I have researched, a pod does not expose only its own assigned resources.
There are multiple ways; pick whichever you prefer:
You can use k9s, which is a very easy way to check all the details.
Or if you want to check them manually
Go into the pod's shell: kubectl exec -it pod_name -n namespace -- /bin/bash
Run cat /sys/fs/cgroup/cpu/cpuacct.usage for cpu usage
Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage
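Note that these paths assume cgroup v1; on nodes using cgroup v2 (the default on newer distributions) the equivalent files inside the container are:
Run cat /sys/fs/cgroup/cpu.stat for CPU usage (the usage_usec field is cumulative CPU time in microseconds)
Run cat /sys/fs/cgroup/memory.current for memory usage in bytes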
or
The Kubernetes Dashboard is not deployed by default; you can install it with the command below:
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/v2.7.0/aio/deploy/recommended.yaml
Use kubectl proxy to enable access to the dashboard; it will only be accessible from the machine where you executed the command. For more details please check this link.
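With the recommended.yaml deployment above, the dashboard is then typically reachable (per the Dashboard docs; verify the path for your version) at:
kubectl proxy
http://localhost:8001/api/v1/namespaces/kubernetes-dashboard/services/https:kubernetes-dashboard:/proxy/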
I have a bunch of Rancher clusters I take care of, and on some of them developers use PriorityClasses to ensure that some of the more important workloads get scheduled. The three PriorityClasses are in the 3-digit range so they will not interfere with the default ones. However, at present none of the PriorityClasses is set as the default, and neither is the preemptionPolicy set, so it defaults to PreemptLowerPriority.
None of the Rancher, Longhorn, Prometheus, Grafana, etc. workloads have priorityClassName set.
Long story short, I believe this causes havoc on the cluster when resources are in short supply.
Before I take my opinion to the developers I would like to collect some data to back up my story.
The question: how do I detect whether a pod was terminated due to preemption?
I tried to google the subject but couldn't find anything. I was hoping kube-state-metrics would have something, but I didn't find anything there either.
Any help would be greatly appreciated.
You can try to look for convincing data, like the pod termination reason, with the help of kubectl.
You can see the last restart logs of a container using the following command:
kubectl logs podname -c containername --previous
You can also use the following command to check the lifecycle events sent by the kubelet to the apiserver about the pod.
kubectl describe pod podname
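If your cluster's scheduler records a Preempted event on the victim pod (recent versions do; treat this as an assumption to verify on your version), you can also filter events for it directly:
kubectl get events --all-namespaces --field-selector reason=Preempted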
Finally, you can also write a final message to /dev/termination-log, and this will show up as described in the docs.
To use kubectl commands with Rancher, kindly refer to this documentation page.
Memory and CPU resources of a container can be tracked using Prometheus. But can we track the I/O of a container? Are there any metrics available?
If you are using Docker containers, you can check the data with the docker stats command (as P... mentioned in the comment). Here you can find more information about this command.
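docker stats also reports block and network I/O per container, which covers the I/O part of the question; a quick sketch (the container ID is a placeholder for yours):
docker stats --no-stream --format "table {{.Name}}\t{{.CPUPerc}}\t{{.MemUsage}}\t{{.NetIO}}\t{{.BlockIO}}" <container_id>
If you scrape cAdvisor with Prometheus, the container_fs_reads_bytes_total / container_fs_writes_bytes_total and container_network_receive_bytes_total / container_network_transmit_bytes_total series cover disk and network I/O as well.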
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of the pod from the cgroup.
Go into the running container's shell: kubectl exec -it pod_name -- /bin/bash
cd /sys/fs/cgroup/cpu and for CPU usage run cat cpuacct.usage
cd /sys/fs/cgroup/memory and for memory usage run cat memory.usage_in_bytes
For more look at this similar question.
Here you can find another interesting question. You should know that containers inside pods partially share /proc with the host system, including the paths with memory and CPU information.
See also this article about Memory inside Linux containers.
I have a Play Framework based Java application deployed in Kubernetes. One of the pods died due to out of memory / a memory leak. Locally, I can use some utilities to monitor JVM heap usage. I am new to Kubernetes.
I would appreciate it if you could tell me how to check the heap usage history of my application in a Kubernetes pod that got killed. kubectl get events on this killed pod will give the event history, but I want to check the object-wise heap usage history on that dead pod. Thanks much.
You can install addons or external tools like Prometheus or metrics-server.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
You can define queries:
For CPU usage as a percentage of the CPU limit (metric and label names vary by cluster version; older clusters expose container_name and kube_pod_container_resource_limits_cpu_cores instead):
100 * sum(rate(container_cpu_usage_seconds_total{container!="", pod="<pod name>"}[5m])) by (namespace, pod, container) / on (namespace, pod, container) sum(kube_pod_container_resource_limits{resource="cpu"}) by (namespace, pod, container)
For memory usage as a percentage of the memory limit:
100 * sum(container_memory_working_set_bytes{container!="", pod="<pod name>"}) by (namespace, pod, container) / on (namespace, pod, container) sum(kube_pod_container_resource_limits{resource="memory"}) by (namespace, pod, container)
Take a look: prometheus-pod-memory-usage.
You can visualize such metrics using Grafana - take a look at how to set it up with Prometheus - grafana-prometheus-setup.
Metrics-server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
You can execute:
$ kubectl top pod <your-pod-name> --namespace=your-namespace --containers
This command gives you both the CPU usage and the memory usage for a given pod and its containers.
See how to install metrics-server first: metrics-server-installation.
Otherwise, if you want to check CPU/memory usage without installing any third-party tool, you can get a pod's memory and CPU usage from the cgroup.
Go into the shell of the running container: kubectl exec -it pod_name -- /bin/bash
cd /sys/fs/cgroup/cpu and for CPU usage run cat cpuacct.usage
cd /sys/fs/cgroup/memory and for memory usage run cat memory.usage_in_bytes
Remember that memory usage is in bytes.
Take a look: memory-usage-kubernetes.
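Since the question is specifically about JVM heap, note that container-level metrics won't show object-wise heap usage. If the JDK tools are present in your container image (an assumption; many slim images omit them) and the JVM runs as PID 1, you can sample the live heap from inside the pod, for example:
kubectl exec -it <pod_name> -- jcmd 1 GC.class_histogram
kubectl exec -it <pod_name> -- jmap -histo 1
For history after the pod has died, the data has to have been exported while it was running, e.g. by a Prometheus JMX or Micrometer exporter publishing jvm_memory_used_bytes.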
I am trying to see how much memory and CPU is utilized by a kubernetes pod. I ran the following command for this:
kubectl top pod podname --namespace=default
I am getting the following error:
W0205 15:14:47.248366 2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
I saw the sample output of this command which shows CPU as 250m. How is this to be interpreted?
Do we get the same output if we enter the pod and run the linux top command?
CHECK WITHOUT METRICS SERVER or ANY THIRD PARTY TOOL
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can get the memory and CPU usage of the pod from the cgroup.
Go into the pod's shell: kubectl exec -it pod_name -n namespace -- /bin/bash
Run cat /sys/fs/cgroup/cpu/cpuacct.usage for cpu usage
Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage
Make sure you have added a resources section (requests and limits) to the deployment, so that the limits are written into the cgroup and the container respects the limits set at the pod level.
NOTE: memory.usage_in_bytes is reported in bytes and cpuacct.usage is cumulative CPU time in nanoseconds; these values change frequently with the pod's usage.
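As a small sketch (run inside the container, assuming cgroup v1 where these paths exist), you can sample cpuacct.usage twice to turn it into an average CPU percentage over a 5-second window:
start=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)   # cumulative CPU time in nanoseconds
sleep 5
end=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)
echo "$(( (end - start) / 50000000 ))% of one core"   # (ns used over 5s) / 5e9 * 100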
kubectl top pod <pod-name> -n <namespace> --containers
FYI, this is on v1.16.2
Use k9s for a super easy way to check all your resources' cpu and memory usage.
As described in the docs, you should install metrics-server
250m means 250 milliCPU. The CPU resource is measured in CPU units; 1 CPU, in Kubernetes, is equivalent to:
1 AWS vCPU
1 GCP Core
1 Azure vCore
1 Hyperthread on a bare-metal Intel processor with Hyperthreading
Fractional values are allowed. A Container that requests 0.5 CPU is
guaranteed half as much CPU as a Container that requests 1 CPU. You
can use the suffix m to mean milli. For example 100m CPU, 100
milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not
allowed.
CPU is always requested as an absolute quantity, never as a relative
quantity; 0.1 is the same amount of CPU on a single-core, dual-core,
or 48-core machine.
No. kubectl top pod podname shows metrics for a given pod, while Linux top and free run inside a container and report metrics based on Linux system reporting, using the information stored in the virtual filesystem /proc/; they are not aware of the cgroup where the container runs.
There are more details on these links:
Why top and free inside containers don't show the correct container memory
Kubernetes top vs Linux top
A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.
kubectl describe PodMetrics <pod_name>
replace <pod_name> with the pod name you get by using
kubectl get pod
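Note that this relies on metrics-server (or another Metrics API implementation) being installed; you can also list the PodMetrics objects for a whole namespace:
kubectl get PodMetrics -n <namespace>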
You need to run the metrics server to make the commands below work with correct data:
kubectl get hpa
kubectl top node
kubectl top pods
Without metric server:
Go into the pod by running the command below:
kubectl exec -it pods/{pod_name} -- sh
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You will get the memory usage of the pod in bytes.
Not sure why this isn't mentioned here yet:
To see all pods with time alive - kubectl get pods --all-namespaces
To see memory and CPU - kubectl top pods --all-namespaces
As Heapster is deprecated and will not see any future releases, you should install metrics-server instead.
You can install metrics-server in following way:
Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git
Edit the deploy/1.8+/metrics-server-deployment.yaml file and extend the container's command section with the following flags:
- command:
- /metrics-server
- --metric-resolution=30s
- --kubelet-insecure-tls
- --kubelet-preferred-address-types=InternalIP
Run the following command: kubectl apply -f deploy/1.8+
It will install all the requirements you need for metrics server.
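Note that the project has since moved to kubernetes-sigs/metrics-server; with newer releases you can skip the clone and apply the published components.yaml manifest directly (you may still need --kubelet-insecure-tls in the deployment args if your kubelets use self-signed certificates):
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml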
For more info, please have a look at my following answer:
How to Enable KubeAPI server for HPA Autoscaling Metrics
If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:
Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
Per-node memory usage percentage:
100 * (
sum(container_memory_usage_bytes{container!=""}) by (node)
/ on(node)
kube_node_status_capacity{resource="memory"}
)
Per-node CPU usage percentage:
100 * (
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
/ on(node)
kube_node_status_capacity{resource="cpu"}
)
An alternative approach, without having to install the metrics server.
It requires crictl to be installed on the worker nodes where the pods run. There is a Kubernetes task for this in the official docs.
Once you have installed it properly, you can use the commands below. (I had to use sudo in my case, but it may not be required depending on how your Kubernetes cluster was installed.)
Find the container ID of the pod: sudo crictl ps
Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>
Sample output for reference:
CONTAINER CPU % MEM DISK INODES
873f04b6cef94 0.50 54.16MB 28.67kB 8
To check the usage of individual pods in Kubernetes, type the following commands in a terminal:
$ docker ps | grep <pod_name>
This will give you the list of running containers in Kubernetes.
Then check CPU and memory utilization using:
$ docker stats <container_id>
CONTAINER_ID NAME CPU% MEM USAGE/LIMIT MEM% NET_I/O BLOCK_I/O PIDS
You need to deploy Heapster or the metrics server to see the CPU and memory usage of the pods.
You can use the Metrics API as defined here:
For example:
kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq
{
"kind": "PodMetrics",
"apiVersion": "metrics.k8s.io/v1beta1",
"metadata": {
"name": "nginx-7fb5bc5df-b6pzh",
"namespace": "default",
"selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
"creationTimestamp": "2021-06-14T07:54:31Z"
},
"timestamp": "2021-06-14T07:53:54Z",
"window": "30s",
"containers": [
{
"name": "nginx",
"usage": {
"cpu": "33239n",
"memory": "13148Ki"
}
},
{
"name": "git-repo-syncer",
"usage": {
"cpu": "0",
"memory": "6204Ki"
}
}
]
}
Where nginx-7fb5bc5df-b6pzh is pod's name.
Pay attention: CPU is measured in nanoCPU (the "n" suffix), where 1×10^9 nanoCPU = 1 CPU.
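The same Metrics API also exposes aggregates for all pods in a namespace and for nodes, for example:
kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods | jq
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes | jq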
I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:
Select the cluster and open the Workloads/Pods section, find the pod you want to monitor (you can reach the pod through any other grouping in the Workloads section)
Right-click on the pod and select "Terminal"
Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.
Hope it helps
In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.
If you exec into your pod, using sh or bash, you can run the top command which will give you some stats about resource utilisation that updates every few moments.
Metrics are available only if the metrics server is enabled or a third-party solution like Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by the cgroup/container, and /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all processes in the cgroup/container.
Also don't forget another beast called QoS, which can be Guaranteed, Burstable, or BestEffort. If your pod is Burstable or BestEffort, it is more likely to be OOM-killed under node memory pressure, even if it has not breached its own CPU or memory limits.
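You can check which QoS class Kubernetes assigned to a pod with:
kubectl get pod <pod_name> -o jsonpath='{.status.qosClass}'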
Kubernetes is FUN!!!