Different units for Kubernetes resource memory on pods

My question is related to Kubernetes and the units of the metrics used for the HPA (autoscaling).
When I run the command
kubectl describe hpa my-autoscaler
I get (a part of more information) this:
...
Metrics: ( current / target )
resource memory on pods: 318067507200m / 1000Mi
resource cpu on pods (as a percentage of request): 1% (1m) / 80%
...
In this example, looking at the metrics for resource memory on pods, you can see that the unit for the current value is m, which is the "milli" suffix (as described in the official documentation), while the unit used for the target value is Mi, which is "mebi" (mebibytes).
Is there any problem with the usage of different units?
Thanks!

No, they are just different multipliers. The actual code is using a raw number of bytes under the hood.
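As a worked example, both values above reduce to plain bytes:
318067507200m = 318067507200 / 1000 bytes = 318067507.2 bytes ≈ 303.3Mi
1000Mi = 1000 * 1024 * 1024 bytes = 1048576000 bytes
So the HPA is effectively comparing roughly 303Mi of current usage against a 1000Mi target, regardless of the suffixes it displays.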

Related

Check the maximum usage of kubernetes pod

I want to check the maximum and average resource usage of a Kubernetes Pod, but I could not find any relevant information. I also checked Lens (third-party software), but it only shows the current usage, and usage/limits only for the past 1 hour.
How to find the maximum usage of Pod?
If you don't specify resource limits in your Pod's YAML file and the namespace has a LimitRange defined, the Pod will take the LimitRange's default values, for example:
Create the Pod. The output shows that the Pod's container has a memory
request of 256 MiB and a memory limit of 512 MiB. These are the
default values specified by the LimitRange
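For reference, a LimitRange along these lines is what produces those defaults (a minimal sketch; the object name is illustrative):
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range
spec:
  limits:
  - default:
      memory: 512Mi
    defaultRequest:
      memory: 256Mi
    type: Container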
You have more information here.
There is also an article here, if you want to specify your pod limits manually.
PS: if you enable Prometheus in your Lens, you can visualize your different metrics (pod usage and limits for CPU, memory, network, and filesystem).
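If Prometheus is scraping the kubelet/cAdvisor metrics, a hedged PromQL sketch for the maximum and average memory of a single pod over the last day (namespace, pod name and window are placeholders):
max_over_time(container_memory_usage_bytes{namespace="default", pod="my-pod", container=""}[24h])
avg_over_time(container_memory_usage_bytes{namespace="default", pod="my-pod", container=""}[24h])
The container="" matcher keeps only the pod-level cgroup record so that per-container values are not double counted.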
You can also check the quotas configured in the namespace with:
kubectl describe quota
Or within a different namespace:
kubectl describe quota --namespace=<your-namespace>

container_memory_usage_bytes by deployment name

Given a kubernetes cluster with:
prometheus
node-exporter
kube-state-metrics
I'd like to use the metric container_memory_usage_bytes but select by deployment_name instead of pod.
Selectors like container_memory_usage_bytes{pod_name=~"foo-.+"} for deployment_name=foo work well as long as there is no other deployment such as deployment_name=foo-bar.
I'd like to achieve the same with the metric kube_pod_container_resource_limits_memory_bytes.
Is there a way to achieve this?
TL;DR
There is no straightforward way to query Prometheus by deployment name.
You can query memory usage of a specific deployment by using deployment's labels.
Used query:
sum(
kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}
)
by (label_app)
There is an awesome article which explains the concepts behind this query. I encourage you to read it:
Medium.com: Amimahloof: Kubernetes promql prometheus cpu aggregation walkthrough
I've included an explanation with example below.
The selector mentioned in the question:
container_memory_usage_bytes{pod_name=~"foo-.+"}
.+ - match any string but not an empty string
with pods like:
foo-12345678-abcde - will match (deployment foo)
foo-deployment-98765432-zyxzy - will match (deployment foo-deployment)
As shown above, the selector matches the pods of both deployments.
For more reference:
Prometheus.io: Docs: Prometheus: Querying: Basics
As mentioned earlier, you can use labels from your deployment to pinpoint the resource used by your specific deployment.
Assuming that:
There are 2 deployments in memory-testing namespace:
ubuntu with 3 replicas
ubuntu-additional with 3 replicas
The above deployments have labels matching their names (the labels could be different):
app: ubuntu
app: ubuntu-additional
Kubernetes cluster version 1.18.X
Why do I specify Kubernetes version?
Kubernetes 1.16 will remove the duplicate pod_name and container_name metric labels from cAdvisor metrics. For the 1.14 and 1.15 release all pod, pod_name, container and container_name were available as a grace period.
Github.com: Kubernetes: Metrics Overhaul
This means that on those older versions you will need to substitute labels, as sketched below:
pod with pod_name
container with container_name
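For example, the per-pod selector used later in this answer would then look roughly like this (a sketch, not verified against an old cluster):
container_memory_usage_bytes{namespace="memory-testing", pod_name=~"ubuntu.*", container_name=""}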
To deploy Prometheus and other tools to monitor the cluster I used: Github.com: Coreos: Kube-prometheus
The pods in the ubuntu deployment are configured to generate artificial load (stress-ng). This is done to show how to avoid a situation where the reported resource usage is doubled.
The resources used by pods in memory-testing namespace:
$ kubectl top pod --namespace=memory-testing
NAME                                 CPU(cores)   MEMORY(bytes)
ubuntu-5b5d6c56f6-cfr9g              816m         280Mi
ubuntu-5b5d6c56f6-g6vh9              834m         278Mi
ubuntu-5b5d6c56f6-sxldj              918m         288Mi
ubuntu-additional-84bdf9b7fb-b9pxm   0m           0Mi
ubuntu-additional-84bdf9b7fb-dzt72   0m           0Mi
ubuntu-additional-84bdf9b7fb-k5z6w   0m           0Mi
If you were to query Prometheus with the query below:
container_memory_usage_bytes{namespace="memory-testing", pod=~"ubuntu.*"}
You would get output similar to the one below (trimmed to a single pod for brevity; by default it would show all pods with ubuntu in the name in the memory-testing namespace):
container_memory_usage_bytes{endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 308559872
container_memory_usage_bytes{container="POD",endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74/312980f90e6104d021c12c376e83fe2bfc524faa4d4cee6553182d0fa2e007a1",image="k8s.gcr.io/pause:3.2",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_POD_ubuntu-5b5d6c56f6-cfr9g_memory-testing_b96dea39-b388-471e-a789-8c74b1670c74_0",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 782336
container_memory_usage_bytes{container="ubuntu",endpoint="https-metrics",id="/kubepods/besteffort/podb96dea39-b388-471e-a789-8c74b1670c74/1b93889a3e7415ad3fa040daf89f3f6bc77e569d85069de518267666ede8e21c",image="ubuntu#sha256:55cd38b70425947db71112eb5dddfa3aa3e3ce307754a3df2269069d2278ce47",instance="192.168.0.117:10250",job="kubelet",metrics_path="/metrics/cadvisor",name="k8s_ubuntu_ubuntu-5b5d6c56f6-cfr9g_memory-testing_b96dea39-b388-471e-a789-8c74b1670c74_0",namespace="memory-testing",node="node1",pod="ubuntu-5b5d6c56f6-cfr9g",service="kubelet"} 307777536
At this point you need to choose which metric to use. In this example I used the first one. For more of a deep dive, please take a look at these articles:
Blog.freshtracks.io: A deep dive into kubernetes metrics part 3 container resource metrics
Ianlewis.org: Almighty pause container
If we were to aggregate these metrics with sum (QUERY) by (pod), we would in fact double the reported resource usage.
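A common alternative to picking the container="" record (the approach used in the main query) is to drop the pod-level and pause-container records and sum only the application containers; a sketch:
sum(container_memory_usage_bytes{namespace="memory-testing", container!="", container!="POD"}) by (pod)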
Dissecting the main query:
container_memory_usage_bytes{namespace="memory-testing", container=""}
The above query will output a used-memory record for each pod. The container="" matcher is used to get only the one record (mentioned before) which does not carry a container label.
kube_pod_labels{label_app=~"ubuntu.*"}
The above query will output the pods and their labels for which label_app matches the regexp ubuntu.*
kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}
The above query will match the pod label from kube_pod_labels with the pod label of container_memory_usage_bytes and add label_app to each of the resulting records.
sum (kube_pod_labels{label_app=~"ubuntu.*"} * on (pod) group_right(label_app) container_memory_usage_bytes{namespace="memory-testing", container=""}) by (label_app)
The above query will sum the records by label_app.
With that you have a query that sums the used memory by a label (and, in effect, by Deployment).
As for:
I'd like to achieve the same with the metric
kube_pod_container_resource_limits_memory_bytes.
You can use the query below to get the memory limit for the deployment tagged with labels as in the previous example, assuming that each pod in a deployment has the same limits:
kube_pod_labels{label_app="ubuntu-with-limits"} * on (pod) group_right(label_app) kube_pod_container_resource_limits_memory_bytes{namespace="memory-testing", pod=~".*"}
You can apply aggregations like avg(), min(), max() on this query to get a single number that will be your memory limit:
avg(kube_pod_labels{label_app="ubuntu-with-limits"} * on (pod) group_right(label_app) kube_pod_container_resource_limits_memory_bytes{namespace="memory-testing", pod=~".*"}) by (label_app)
Your memory limits can vary if you use VPA. In that situation you could show all of them simultaneously or use avg() to get the average across the whole "deployment".
As a workaround to the above solutions you could try to work with a regexp like the one below:
container_memory_usage_bytes{pod=~"^ubuntu-.{6,10}-.{5}"}
The following PromQL query should return per-deployment memory usage in Kubernetes:
sum(
label_replace(
container_memory_usage_bytes{pod!=""},
"deployment", "$1", "pod", "(.+)-[^-]+-[^-]+"
)
) by (namespace,deployment)
The query works in the following way:
The container_memory_usage_bytes{pod!=""} time series selector selects all the time series with the name container_memory_usage_bytes and with non-empty pod label. We need to filter out time series without pod label, since such time series account for cgroups hierarchy, which isn't needed in this query. See this answer for more details on this.
The inner label_replace() extracts deployment name from pod label and puts it to deployment label. It expects that pod names are constructed with the following pattern: <deployment>-<some_suffix_1>-<some_suffix_2>.
The outer sum() sums pod memory usage per each group with identical namespace and deployment labels.
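If you also need the limits metric mentioned in the question, and assuming your kube-state-metrics still exposes it under that name (newer releases rename it to kube_pod_container_resource_limits{resource="memory"}), the same label_replace trick should carry over, summed across containers and replicas:
sum(
  label_replace(
    kube_pod_container_resource_limits_memory_bytes{pod!=""},
    "deployment", "$1", "pod", "(.+)-[^-]+-[^-]+"
  )
) by (namespace,deployment)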

How to calculate the percentage utilization of a particular pod in Kubernetes with metrics-server?

I want to calculate the CPU and memory utilization percentage of an individual pod in Kubernetes. For that, I am using the metrics-server API.
From the metrics server, I get the utilization with this command:
kubectl top pods --all-namespaces
kube-system   coredns-5644d7b6d9-9whxx          2m    6Mi
kube-system   coredns-5644d7b6d9-hzgjc          2m    7Mi
kube-system   etcd-manhattan-master             10m   53Mi
kube-system   kube-apiserver-manhattan-master   23m   257Mi
But I want the percentage utilization of an individual pod, both CPU% and MEM%.
From this top output it is not clear what total amount of CPU and memory these values should be measured against.
I don't want to use the Prometheus operator. I saw one formula for it:
sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)
Can I calculate it with the metrics-server API?
I thought to calculate it like this:
CPU% = ((2+2+10+23) / total CPU millicores) * 100
MEM% = ((6+7+53+257) / allocatable memory) * 100
Please tell me if I am right or wrong, because I didn't see any standard formula for calculating pod utilization in the Kubernetes documentation.
Unfortunately kubectl top pods provides only quantity values and not percentages.
Here is a good explanation of how to interpret those values.
It is currently not possible to list pod resource usage as percentages with a kubectl top command.
You could still choose Grafana with Prometheus, but you already stated that you don't want to use it (however, maybe another member of the community with a similar problem will, so I am mentioning it here).
EDIT:
Your formulas are correct. They will calculate how much CPU/memory is being consumed by all Pods relative to the total CPU/memory you have.
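To get the denominators for those formulas, a sketch (the node name is a placeholder):
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'
or, for a single node:
kubectl describe node <node-name> | grep -A 5 Allocatable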
I hope it helps.

How to calculate percentage of specific pod CPU usage on each node?

I have a Fluentd daemonset on a k8s cluster with 3 nodes. I would like to get a value, as a percentage, that tells me how much CPU the fluentd pod occupies on each node at this moment.
What would be the way to do it in Prometheus?
Thanks.
You would want to use the container_cpu_usage_seconds_total query in Prometheus.
Like so:
sum (rate (container_cpu_usage_seconds_total{}[5m])) by (container_name)
This will return the CPU usage of all the pods by container name in the system.
You can apply some filters as well to fine-tune the output. For example:
sum (rate (container_cpu_usage_seconds_total{container_name=~"fluentd.*"}[5m])) by (container_name)
The above query will return the CPU usage of pods whose container name starts with fluentd.
You can divide the usage of those pods by the total CPU cores of your cluster to find the usage as a percentage, like so:
sum (rate (container_cpu_usage_seconds_total{container_name=~"fluentd.*"}[5m])) / sum (machine_cpu_cores{}) * 100
Finally, in order to get the percentage of total CPU cores used by a specific container_name on a specific node, you would add an additional filter, instance="INSTANCE_NAME":
sum (rate (container_cpu_usage_seconds_total{container_name=~"fluentd.*", instance="INSTANCE_NAME"}[5m])) / sum (machine_cpu_cores{}) * 100
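If you want the percentage relative to that single node's cores rather than the whole cluster, a sketch that also filters the denominator (the same caveat about container_name vs container in the note below applies):
sum (rate (container_cpu_usage_seconds_total{container_name=~"fluentd.*", instance="INSTANCE_NAME"}[5m])) / sum (machine_cpu_cores{instance="INSTANCE_NAME"}) * 100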
N.B.: Depending on the K8S version, the labels returned by the container_cpu_usage_seconds_total metric may vary. On some systems the name of the container is held in the container_name label, whereas on others it is container.
The following query should return per-node CPU usage as a percentage for pods with a container name that starts with fluentd:
100 * (
sum(rate(container_cpu_usage_seconds_total{container=~"fluentd.*"}[5m])) by (node)
/ on (node)
kube_node_status_capacity{resource="cpu"}
)
The container_cpu_usage_seconds_total metric is exported by cadvisor - see these docs.
The kube_node_status_capacity metric is exported by kube-state-metrics. See these docs.
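A hedged memory analogue of the same per-node percentage (assumes the cAdvisor working-set metric carries the same node label and that kube-state-metrics exposes kube_node_status_capacity{resource="memory"}):
100 * (
sum(container_memory_working_set_bytes{container=~"fluentd.*"}) by (node)
/ on (node)
kube_node_status_capacity{resource="memory"}
)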

Are the metrics of kubectl top nodes not correct?

I am trying to get the CPU/memory usage of the k8s cluster nodes via the metrics-server API, but I found that the values returned by metrics-server are lower than the actual CPU/memory usage.
The output of the kubectl top command: kubectl top nodes
The following is the output of the free command, from which you can see that the memory usage is greater than 90%.
Why is the difference so high?
kubectl top nodes is reflecting the actual usage of your Kubernetes Nodes.
For example:
Your node has 60 GB of memory and you actually use 30 GB, so the usage will be 50%.
But you can request, for example:
100 MB and have a memory limit of 200 MB.
This doesn't mean you consume only 0.16% (100 / 60000) of the memory; requests and limits are just the amounts declared in your configuration.
I know this is an old topic, but I think the problem still remains.
To answer simply, the kubectl top command shows ONLY the actual resource usage, and it is not related to the request/limit configurations in your manifests.
For example:
you could observe a usage of 400m / 1Gi (CPU/memory) on a specific node while the total requests/limits are 1.5 CPU / 4Gi.
kubectl top would show enough available resources to schedule more pods, but in practice scheduling will not work because the requests are already reserved.
Requests/limits directly impact node resources (resource reservation), but that does not mean they are completely used (which is what kubectl top nodes shows).
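To see both sides for a node, a quick sketch (the node name is a placeholder):
kubectl describe node <node-name>   # the "Allocated resources" section shows the summed requests/limits (reservation)
kubectl top node <node-name>        # the actual usage as measured by metrics-server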