Checking Kubernetes pod CPU and memory

I am trying to see how much memory and CPU are utilized by a Kubernetes pod. I ran the following command for this:
kubectl top pod podname --namespace=default
I am getting the following error:
W0205 15:14:47.248366 2767 top_pod.go:190] Metrics not available for pod default/podname, age: 190h57m1.248339485s
error: Metrics not available for pod default/podname, age: 190h57m1.248339485s
What do I do about this error? Is there any other way to get CPU and memory usage of the pod?
I saw the sample output of this command, which shows CPU as 250m. How should this be interpreted?
Do we get the same output if we enter the pod and run the Linux top command?

CHECK WITHOUT METRICS SERVER OR ANY THIRD-PARTY TOOL
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can read it from the pod's cgroup.
Open a shell in the pod: kubectl exec -it pod_name -n namespace -- /bin/bash
Run cat /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage (cumulative CPU time in nanoseconds)
Run cat /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage (in bytes)
Make sure you have added a resources section (requests and limits) to the deployment, so that usage is accounted per cgroup and the container respects the limits set at pod level.
NOTE: memory usage is reported in bytes and CPU time in nanoseconds; these values change frequently with the pod's load.
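As a minimal sketch (assuming cgroup v1 and a hypothetical pod named my-pod — adjust the name and namespace to your environment), you can sample cpuacct.usage twice to turn the cumulative counter into an approximate usage rate:
# Sample cumulative CPU time (nanoseconds) twice, one second apart,
# and print the approximate usage in millicores. Assumes cgroup v1.
kubectl exec my-pod -n default -- sh -c '
  a=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)
  sleep 1
  b=$(cat /sys/fs/cgroup/cpu/cpuacct.usage)
  echo "approx CPU usage: $(( (b - a) / 1000000 )) millicores"
'
On cgroup v2 nodes the paths differ (cpu.stat and memory.current under /sys/fs/cgroup), so check which hierarchy your nodes use.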

kubectl top pod <pod-name> -n <namespace> --containers
FYI, this is on v1.16.2

Use k9s for a super easy way to check all your resources' CPU and memory usage.

As described in the docs, you should install metrics-server
250m means 250 milliCPU. The CPU resource is measured in CPU units; in Kubernetes, one CPU unit is equivalent to:
1 AWS vCPU
1 GCP Core
1 Azure vCore
1 Hyperthread on a bare-metal Intel processor with Hyperthreading
Fractional values are allowed. A Container that requests 0.5 CPU is
guaranteed half as much CPU as a Container that requests 1 CPU. You
can use the suffix m to mean milli. For example 100m CPU, 100
milliCPU, and 0.1 CPU are all the same. Precision finer than 1m is not
allowed.
CPU is always requested as an absolute quantity, never as a relative
quantity; 0.1 is the same amount of CPU on a single-core, dual-core,
or 48-core machine.
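To tie this back to the 250m in the question: the same millicore notation is used when you set requests and limits, for example (the deployment name web below is only a placeholder):
# Request a quarter of a core (250m) and cap at half a core (500m)
# for a hypothetical deployment named "web".
kubectl set resources deployment web --requests=cpu=250m,memory=64Mi --limits=cpu=500m,memory=128Mi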
No. kubectl top pod podname shows metrics for a given pod. Linux top and free run inside a container and report metrics based on Linux system reporting from the virtual filesystem /proc/; they are not aware of the cgroup the container runs in.
There are more details on these links:
Why top and free inside containers don't show the correct container memory
Kubernetes top vs Linux top

A quick way to check CPU/Memory is by using the following kubectl command. I found it very useful.
kubectl describe PodMetrics <pod_name>
replace <pod_name> with the pod name you get by using
kubectl get pod

You need to run the metrics server for the commands below to return correct data:
kubectl get hpa
kubectl top node
kubectl top pods
Without the metrics server:
Go into the pod by running the command below:
kubectl exec -it pods/{pod_name} -- sh
cat /sys/fs/cgroup/memory/memory.usage_in_bytes
You will get the pod's memory usage in bytes.
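As a small convenience (a sketch assuming cgroup v1 and a POSIX shell in the image), you can convert that value to MiB in one line:
# Print the pod's current memory usage in MiB (cgroup v1 path).
kubectl exec pods/{pod_name} -- sh -c 'echo "$(( $(cat /sys/fs/cgroup/memory/memory.usage_in_bytes) / 1024 / 1024 )) MiB"'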

Not sure why it's not here
To see all pods with time alive - kubectl get pods --all-namespaces
To see memory and CPU - kubectl top pods --all-namespaces

As Heapster is deprecated and will not receive any future releases, you should install metrics-server instead.
You can install metrics-server in the following way:
Clone the metrics-server GitHub repo: git clone https://github.com/kubernetes-incubator/metrics-server.git
Edit the deploy/1.8+/metrics-server-deployment.yaml file and add the following section just after the command section:
- command:
  - /metrics-server
  - --metric-resolution=30s
  - --kubelet-insecure-tls
  - --kubelet-preferred-address-types=InternalIP
Run the following command: kubectl apply -f deploy/1.8+
It will install everything the metrics server needs.
For more info, please have a look at my following answer:
How to Enable KubeAPI server for HPA Autoscaling Metrics
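After applying the manifests, you can check (a generic sanity check, nothing specific to this install method) that the metrics API is registered and serving data; it can take a minute or two before kubectl top returns numbers:
# Confirm the metrics.k8s.io APIService is available, then try kubectl top.
kubectl get apiservices | grep metrics.k8s.io
kubectl top nodes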

If you use Prometheus operator or VictoriaMetrics operator for Kubernetes monitoring, then the following PromQL queries can be used for determining per-container, per-pod and per-node resource usage:
Per-container memory usage in bytes:
sum(container_memory_usage_bytes{container!~"POD|"}) by (namespace,pod,container)
Per-container CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!~"POD|"}[5m])) by (namespace,pod,container)
Per-pod memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)
Per-pod CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (namespace,pod)
Per-node memory usage in bytes:
sum(container_memory_usage_bytes{container!=""}) by (node)
Per-node CPU usage in CPU cores:
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
Per-node memory usage percentage:
100 * (
sum(container_memory_usage_bytes{container!=""}) by (node)
/ on(node)
kube_node_status_capacity{resource="memory"}
)
Per-node CPU usage percentage:
100 * (
sum(rate(container_cpu_usage_seconds_total{container!=""}[5m])) by (node)
/ on(node)
kube_node_status_capacity{resource="cpu"}
)
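If you want to pull one of these numbers from a script instead of the Prometheus UI, the Prometheus HTTP API can be queried directly. A sketch (the prometheus:9090 address is an assumption — substitute your own endpoint, and jq must be installed):
# Query per-pod memory usage via the Prometheus HTTP API and print
# "namespace/pod bytes" lines.
curl -s 'http://prometheus:9090/api/v1/query' \
  --data-urlencode 'query=sum(container_memory_usage_bytes{container!=""}) by (namespace,pod)' \
  | jq -r '.data.result[] | "\(.metric.namespace)/\(.metric.pod) \(.value[1])"'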

An alternative approach without having to install the metrics server.
It requires crictl to be installed on the worker nodes where the pods run. There is a Kubernetes task for this defined in the official docs.
Once you have installed it properly you can use the commands below. (I had to use sudo in my case, but it may not be required depending on your Kubernetes cluster install.)
Find the container ID of the pod: sudo crictl ps
Use stats to get CPU and RAM: sudo crictl stats <CONTAINERID>
Sample output for reference:
CONTAINER       CPU %   MEM       DISK      INODES
873f04b6cef94   0.50    54.16MB   28.67kB   8
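To go from a pod name straight to its stats on the node, something like the following sketch works (my-pod is a placeholder; run it on the node that hosts the pod):
# Resolve the pod sandbox ID from the pod name, list its containers,
# then print CPU/memory stats for each one.
POD_ID=$(sudo crictl pods --name my-pod -q)
for cid in $(sudo crictl ps --pod "$POD_ID" -q); do
  sudo crictl stats "$cid"
done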

To check the usage of individual pods in Kubernetes, type the following commands in a terminal (this assumes Docker is the container runtime on the node):
$ docker ps | grep <pod_name>
This will give you the list of running containers for that pod.
Then check CPU and memory utilization using:
$ docker stats <container_id>
CONTAINER ID   NAME   CPU %   MEM USAGE / LIMIT   MEM %   NET I/O   BLOCK I/O   PIDS
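If you'd rather not grep by name, containers started by the kubelet under the Docker runtime carry pod labels you can filter on (this only applies when Docker is the node's runtime; <pod_name> is your pod):
# One stats snapshot for every container belonging to the pod,
# matched via the io.kubernetes.pod.name label set by the kubelet.
docker ps --filter "label=io.kubernetes.pod.name=<pod_name>" -q | xargs docker stats --no-stream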

You need to deploy Heapster or the metrics server to see the CPU and memory usage of the pods.

You can use the Metrics API directly, as defined here:
For example:
kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh | jq
{
  "kind": "PodMetrics",
  "apiVersion": "metrics.k8s.io/v1beta1",
  "metadata": {
    "name": "nginx-7fb5bc5df-b6pzh",
    "namespace": "default",
    "selfLink": "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh",
    "creationTimestamp": "2021-06-14T07:54:31Z"
  },
  "timestamp": "2021-06-14T07:53:54Z",
  "window": "30s",
  "containers": [
    {
      "name": "nginx",
      "usage": {
        "cpu": "33239n",
        "memory": "13148Ki"
      }
    },
    {
      "name": "git-repo-syncer",
      "usage": {
        "cpu": "0",
        "memory": "6204Ki"
      }
    }
  ]
}
Where nginx-7fb5bc5df-b6pzh is the pod's name.
Pay attention: CPU is measured in nanoCPUs (the "n" suffix), where 1x10^9 nanoCPUs = 1 CPU.
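To pull just the per-container numbers out of that response (same pod name as above; jq required):
# Print "container cpu=... mem=..." for each container in the PodMetrics object.
kubectl -n default get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/nginx-7fb5bc5df-b6pzh \
  | jq -r '.containers[] | "\(.name) cpu=\(.usage.cpu) mem=\(.usage.memory)"'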

I know this is an old thread, but I just found it trying to do something similar. In the end, I found I can just use the Visual Studio Code Kubernetes plugin. This is what I did:
Select the cluster and open the Workloads/Pods section, find the pod you want to monitor (you can reach the pod through any other grouping in the Workloads section)
Right-click on the pod and select "Terminal"
Now you can either cat the files described above or use the "top" command to monitor CPU and memory in real-time.
Hope it helps

In case you are using minikube, you can enable the metrics-server addon; this will show the information in the dashboard.

If you exec into your pod, using sh or bash, you can run the top command, which will give you some stats about resource utilisation that update every few moments.

Metrics are available only if the metrics server is enabled or a third-party solution like Prometheus is configured. Otherwise you need to look at /sys/fs/cgroup/cpu/cpuacct.usage for CPU usage, which is the total CPU time consumed by the cgroup/container, and /sys/fs/cgroup/memory/memory.usage_in_bytes for memory usage, which is the total memory consumed by all processes in the cgroup/container.
Also don't forget another beast called QoS, whose classes are Guaranteed, Burstable and BestEffort. Under node memory pressure, a Burstable or BestEffort pod can be evicted or OOM-killed before Guaranteed pods, even if it has not breached its own CPU or memory threshold.
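You can check which QoS class a pod was assigned directly from its status (plain kubectl; just substitute your pod name):
# Print the pod's QoS class: Guaranteed, Burstable or BestEffort.
kubectl get pod <pod_name> -o jsonpath='{.status.qosClass}'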
Kubernetes is FUN!!!

Related

How to get iostats of a container running in a pod on Kubernetes?

Memory and CPU resources of a container can be tracked using Prometheus. But can we track the I/O of a container? Are there any metrics available?
If you are using Docker containers you can check the data with the docker stats command (as P... mentioned in the comment). Here you can find more information about this command.
If you want to check a pod's CPU/memory usage without installing any third-party tool, you can get it from the pod's cgroup.
Open a shell in the pod: kubectl exec -it pod_name -- /bin/bash
For CPU usage, go to /sys/fs/cgroup/cpu and run cat cpuacct.usage
For memory usage, go to /sys/fs/cgroup/memory and run cat memory.usage_in_bytes
For more look at this similar question.
Here you can find another interesting question. You should know that
Containers inside pods partially share /proc with the host system, including paths with memory and CPU information.
See also this article about Memory inside Linux containers.
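For the I/O part of the question specifically, the block I/O cgroup exposes cumulative counters as well. A sketch assuming cgroup v1 (on cgroup v2 the equivalent file is io.stat):
# Cumulative bytes read/written per block device for the container's cgroup (cgroup v1).
kubectl exec pod_name -- cat /sys/fs/cgroup/blkio/blkio.throttle.io_service_bytes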

jvm heap usage history in a killed Kubernetes pod

I have a Play Framework based Java application deployed in Kubernetes. One of the pods died due to an out-of-memory condition/memory leak. Locally, I can use some utilities to monitor JVM heap usage. I am new to Kubernetes.
I'd appreciate it if you could tell me how to check the heap usage history of my application in a Kubernetes pod which got killed. kubectl get events on this killed pod will give the event history, but I want to check object-wise heap usage history on that dead pod. Thanks much.
You can install addons or external tools like Prometheus or metrics-server.
Prometheus is an open-source systems monitoring and alerting toolkit originally built at SoundCloud.
You can define queries:
For CPU percentage
avg((sum (rate (container_cpu_usage_seconds_total {container_name!="" ,pod="<Pod name>" } [5m])) by (namespace , pod, container ) / on (container , pod , namespace) ((kube_pod_container_resource_limits_cpu_cores >0)*300))*100)
For Memory percentage
avg((avg (container_memory_working_set_bytes{pod="<pod name>"}) by (container_name , pod ))/ on (container_name , pod)(avg (container_spec_memory_limit_bytes>0 ) by (container_name, pod))*100)
Take a look: prometheus-pod-memory-usage.
You can visualize such metrics using Grafana - take a look how to set it up with Prometheus - grafana-prometheus-setup.
Metrics-server is a scalable, efficient source of container resource metrics for Kubernetes built-in autoscaling pipelines.
Metrics Server collects resource metrics from Kubelets and exposes them in Kubernetes apiserver through Metrics API for use by Horizontal Pod Autoscaler and Vertical Pod Autoscaler. Metrics API can also be accessed by kubectl top, making it easier to debug autoscaling pipelines.
You can execute:
$ kubectl top pod <your-pod-name> --namespace=your-namespace --containers
This command will give you both the CPU usage and the memory usage for a given pod and its containers.
See how to install metrics-server first: metrics-server-installation.
Otherwise, if you want to check CPU/memory usage without installing any third-party tool, you can get it from the pod's cgroup.
Go to the shell of the running container: kubectl exec -it pod_name -- /bin/bash
For CPU usage, go to /sys/fs/cgroup/cpu and run cat cpuacct.usage
For memory usage, go to /sys/fs/cgroup/memory and run cat memory.usage_in_bytes
Remember that memory usage is in bytes.
Take a look: memory-usage-kubernetes.
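For JVM heap specifically (as opposed to container memory), you can also ask the JVM itself, provided the JDK tools ship in the image — an assumption, since many slim images contain only a JRE:
# Print GC/heap statistics for the JVM; assumes the Java process is PID 1
# inside the container and jstat (part of the JDK) is available in the image.
kubectl exec <your-pod-name> --namespace=your-namespace -- jstat -gc 1
For heap usage history that survives the pod's death you still need something that scrapes and stores these numbers continuously, e.g. Prometheus with a JMX exporter, since a one-off exec only works against a live pod.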

Kubectl status nodes provides different responses for equivalent clusters

I have recently started using kubectl krew (v0.3.4), which I later used to install the "status" plugin (v0.4.1).
I'm currently managing different clusters and checking the nodes' status. Most of the clusters answer something exactly like:
Node/[NodeName], created 25d ago linux Oracle Linux Server 7.8
(amd64), kernel 4.1.12-124.36.4.el7uek.x86_64, kubelet v1.18.2, kube-proxy v1.18.2
cpu: 0.153/7 (2%)
mem: 4.4GB/7.1GB (63%)
ephemeral-storage: 2.2GB
There is one cluster that answers, for some reason:
Node/[nodeName], created 11d ago
linux Oracle Linux Server 7.8 (amd64), kernel 4.1.12-124.26.5.el7uek.x86_64, kubelet v1.18.2, kube-proxy v1.18.2
cpu: 5, mem: 7.1GB, ephemeral-storage: 2.2GB
(Let me clarify that I'm trying to automate some resource checking, and the way resources are displayed differently is quite annoying; plus, the used vs total resources is exactly what I need!)
I am absolutely unable to locate the status plugin repo, and I have no idea where to go with this issue. kubectl version says that both clusters have the same server version, I'm executing the kubectl status command from my local machine in both cases and... I am completely out of ideas.
Does anyone know why this might be happening, or where I can go to look for answers?
To display used and total resources you can use kubectl top
Display Resource (CPU/Memory/Storage) usage.
The top command allows you to see the resource consumption for nodes or pods.
This command requires Metrics Server to be correctly configured and working on the server.
Available Commands:
node Display Resource (CPU/Memory/Storage) usage of nodes
pod Display Resource (CPU/Memory/Storage) usage of pods
Usage:
kubectl top [flags] [options]
You can also have a look at Tools for Monitoring Resources inside Kubernetes docs.
As for doing the same using Kubernetes Python Client you can use:
from kubernetes.config import load_kube_config
from kubernetes.client import CustomObjectsApi
load_kube_config()
cust = CustomObjectsApi()
cust.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'nodes') # All node metrics
cust.list_cluster_custom_object('metrics.k8s.io', 'v1beta1', 'pods') # All Pod Metrics

The metrics of kubectl top nodes is not correct?

I'm trying to get the CPU/memory usage of the k8s cluster nodes via the metrics-server API, but I found that the values returned by metrics-server are lower than the actually used CPU/memory.
The output of the kubectl top command: kubectl top nodes
The following is the output of the free command, from which you can see that the memory usage is greater than 90%.
Why the difference is so high?
kubectl top nodes is reflecting the actual usage of your Kubernetes Nodes.
For example:
Your node has 60GB memory and you actually use 30GB so it will be 50% of usage.
But you can request for example:
100 MB and have a limit 200MB memory.
This doesn't mean you consume only 0.16% (100 / 60000) of the memory; requests and limits describe your configuration, not actual consumption.
I know this is an old topic, but I think the problem still remains.
To answer simply, the kubectl top command shows ONLY the actual resource usage, and it is not related to the requests/limits configuration in your manifests.
For example:
you could observe a usage of 400m / 1Gi (CPU/memory) for a specific node while the total requests/limits are 1.5 / 4Gi (CPU/memory).
You might conclude there are enough available resources to schedule more pods, but in practice scheduling will fail.
Requests/limits directly impact node resources (resource reservation), but that does not mean they are completely used (which is what kubectl top nodes shows).
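To see what the scheduler actually accounts for (the sum of requests and limits rather than live usage), you can inspect the node's allocated resources; the grep window below is just a convenience and may need adjusting:
# Show the "Allocated resources" section of a node: total CPU/memory requests
# and limits, which is what the scheduler uses for placement decisions.
kubectl describe node <node-name> | grep -A 10 "Allocated resources"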

Disk Utilization stats per POD in K8

I was looking for ways to get details on disk utilization (mainly writes and deletes) at a per-pod level. I did get Google advice such as cAdvisor/Heapster etc., but none of them talk about disk usage profiling from a pod perspective.
Any help on this is greatly appreciated.
TIA!
Assuming the pods are running a Linux variant, you can do:
kubectl exec -it <pod> -- cat /proc/1/io
This returns info on the main process's I/O, as defined here.
You could then write a script to run the above command (or use the Kubernetes API) for each pod of interest, for example:
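A minimal sketch of such a script (assuming the default namespace and that each image ships cat):
# Print /proc/1/io (cumulative read/write counters of the main process)
# for every pod in the default namespace.
for pod in $(kubectl get pods -n default -o name); do
  echo "== ${pod} =="
  kubectl exec -n default "${pod#pod/}" -- cat /proc/1/io 2>/dev/null
done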