Check the maximum usage of a Kubernetes pod - kubernetes

I want to check the maximum and average resource usage of a Kubernetes Pod, and I tried to find it but cannot get any relevant information. I also checked Lens (third-party software), but it only shows the current usage, and usage/limits only for the past hour.
How do I find the maximum usage of a Pod?

If you don't specify the resource limits in your Pod creation YAML file, it will take these default values:
Create the Pod. The output shows that the Pod's container has a memory request of 256 MiB and a memory limit of 512 MiB. These are the default values specified by the LimitRange.
You have more info here.
There is an article here if you want to specify your pods' limits manually.
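For illustration, here is a minimal sketch of a LimitRange that would produce those defaults (the object name is assumed):
apiVersion: v1
kind: LimitRange
metadata:
  name: mem-limit-range      # name assumed for illustration
spec:
  limits:
  - default:                 # limit applied when a container specifies none
      memory: 512Mi
    defaultRequest:          # request applied when a container specifies none
      memory: 256Mi
    type: Container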
PS: if you enable Prometheus in your Lens, you can visualize your different metrics (pod usage and limits for CPU, memory, network, and filesystem).
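If Prometheus is enabled, a hedged sketch of queries for peak usage of a single pod over a window (the pod name is a placeholder, and label names may differ in your setup, e.g. pod_name instead of pod on older stacks):
# peak memory over the last 24 hours
max_over_time(container_memory_working_set_bytes{pod="my-pod"}[24h])
# peak CPU rate over the last 24 hours (Prometheus subquery syntax)
max_over_time(rate(container_cpu_usage_seconds_total{pod="my-pod"}[5m])[24h:])
Replacing max_over_time with avg_over_time gives the average over the same window.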

kubectl describe quota
Or within a different namespace:
kubectl describe quota --namespace=<your-namespace>
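Note that kubectl describe quota reports against a ResourceQuota object, so one must exist in the namespace; a minimal sketch (name and values assumed):
apiVersion: v1
kind: ResourceQuota
metadata:
  name: compute-quota        # name assumed for illustration
spec:
  hard:
    requests.cpu: "2"
    requests.memory: 2Gi
    limits.cpu: "4"
    limits.memory: 4Gi
Keep in mind this shows aggregated requests/limits for the namespace, not a pod's actual peak usage.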

Related

How to find metrics about CPU/MEM for the pod running on a Kubernetes cluster on Prometheus

I have Prometheus set up via Helm from Terraform, and it is configured to connect to my Kubernetes cluster. I open Prometheus but am not sure which metric to choose from the list to view the CPU/memory of running pods/jobs.
Here are all the running pods, listed with this command (test1 is the kube namespace):
kubectl -n test1 get pods
[screenshot of running pods]
When I am on Prometheus, I see many metrics related to CPU, but I am not sure which one to choose:
[screenshot of the metrics list]
I tried to choose one, but its namespace is prometheus and it comes from prometheus-node-exporter; I don't see my cluster or my namespace test1 anywhere here.
[screenshot]
Could you please help me? Thank you very much in advance.
UPDATE: [screenshot]
I need to concentrate on this specific namespace, normally with the command:
kubectl get pods --all-namespaces | grep hermatwin
I see the first line with namespace = jobs; I think this is the namespace.
No result when I set the calendar to last Friday:
UPDATE (April 20): [screenshot]
I tried to select 2 days, with the starting date on last Saturday, 17 April, but I don't see any result:
And if I remove the (namespace="jobs") condition, I don't see any result either:
I tried to rerun the job (simulation jobs) just now and executed the Prometheus query while the job was still running, but I don't get any result :-( Here you can see my jobs were running.
I don't get any result:
When using a simple filter, just container_cpu_usage_seconds_total, I can see namespace="jobs".
node_cpu_seconds_total is a metric from node-exporter, the exporter that provides machine statistics; its metrics are prefixed with node_. You need metrics from cAdvisor, which produces container-related metrics prefixed with container_:
container_cpu_usage_seconds_total
container_cpu_load_average_10s
container_memory_usage_bytes
container_memory_rss
Here are some basic queries to get you started. Be aware that they may require tweaking (you may have different label names):
CPU Utilisation Per Pod
sum(irate(container_cpu_usage_seconds_total{container!="POD", container=~".+"}[2m])) by (pod)
RAM Usage Per Pod
sum(container_memory_usage_bytes{container!="POD", container=~".+"}) by (pod)
In/Out Traffic Rate Per Pod
Beware that pods in host network mode (not isolated) show the traffic rate for the whole node. The * 8 converts bytes to bits for convenience (Mbit/s, Gbit/s, etc.).
# incoming
sum(irate(container_network_receive_bytes_total[2m])) by (pod) * 8
# outgoing
sum(irate(container_network_transmit_bytes_total[2m])) by (pod) * 8
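To narrow these down to a single namespace (such as the jobs namespace mentioned above), a hedged variant; the label is usually namespace, but some setups use kubernetes_namespace instead:
sum(irate(container_cpu_usage_seconds_total{namespace="jobs", container!="POD", container=~".+"}[2m])) by (pod)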

DataDog metric for Kubernetes PersistentVolume usage or remaining space

Is there a DataDog metric to report the space used or remaining in a GCP PersistentVolume? I have found disk usage metrics for the container itself, but not for a PersistentVolume.
I am working in GoogleCloudPlatform.
Found the answer:
The metric kubernetes.kubelet.volume.stats.used_bytes will allow you to get the disk usage on PersistentVolumes. This can be achieved using the tag persistentvolumeclaim on this metric.
You can view this metric and tag combination in your account here: https://app.datadoghq.com/metric/summary?filter=kubernetes.kubelet.volume.stats.used_bytes&metric=kubernetes.kubelet.volume.stats.used_bytes
Documentation on this metric can be found here: https://docs.datadoghq.com/agent/kubernetes/data_collected/#kubelet
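For example, a hedged sketch of a query that breaks usage down per claim (scoped to all PVCs here; adjust the scope and tags to your environment):
avg:kubernetes.kubelet.volume.stats.used_bytes{*} by {persistentvolumeclaim}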

How to calculate Percentage utilization of particular pod in kubernetes with metrics server?

I want to calculate the CPU and memory percentage of resource utilization of an individual pod in Kubernetes. For that, I am using the Metrics Server API.
From the Metrics Server, I get the utilization with this command:
kubectl top pods --all-namespaces
NAMESPACE     NAME                              CPU(cores)   MEMORY(bytes)
kube-system   coredns-5644d7b6d9-9whxx          2m           6Mi
kube-system   coredns-5644d7b6d9-hzgjc          2m           7Mi
kube-system   etcd-manhattan-master             10m          53Mi
kube-system   kube-apiserver-manhattan-master   23m          257Mi
But I want the percentage utilization of an individual pod, both CPU% and MEM%.
From this output of the top command, it is not clear out of how much total CPU and memory these amounts are consumed.
I don't want to use the Prometheus operator. I saw one formula for it:
sum (rate (container_cpu_usage_seconds_total{image!=""}[1m])) by (pod_name)
Can I calculate it with the Metrics Server API?
I thought to calculate it like this:
CPU% = ((2+2+10+23)/ Total CPU MILLICORES)*100
MEM% = ((6+7+53+257)/AllocatableMemory)* 100
Please tell me if I am right or wrong, because I didn't see any standard formula for calculating pod utilization in the Kubernetes documentation.
Unfortunately, kubectl top pods provides only quantity values and not percentages.
Here is a good explanation of how to interpret those values.
It is currently not possible to list pod resource usage in percentages with a kubectl top command.
You could still choose Grafana with Prometheus, but you already stated that you don't want to use it (however, another member of the community with a similar problem might, so I am mentioning it here).
EDIT:
Your formulas are correct. They will calculate how much CPU/memory is being consumed by all Pods relative to the total CPU/memory you have.
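To get the denominators for those formulas, a hedged sketch that reads the allocatable totals with kubectl (note that CPU may be reported in whole cores rather than millicores, and memory in Ki):
# allocatable CPU and memory per node
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.allocatable.cpu}{"\t"}{.status.allocatable.memory}{"\n"}{end}'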
I hope it helps.

Different unit for Kubernetes resource memory on pods

My question is related to Kubernetes and the units of the metrics used for the HPA (autoscaling).
When I run the command
kubectl describe hpa my-autoscaler
I get (among other information) this:
...
Metrics: ( current / target )
resource memory on pods: 318067507200m / 1000Mi
resource cpu on pods (as a percentage of request): 1% (1m) / 80%
...
In this example, looking at the metrics for resource memory on pods, you can see that the unit of the current value is m, which is "milli" (as described in the official documentation), but the unit used for the target value is Mi, which is "mebi".
Is there any problem with the usage of different units?
Thanks!
No, they are just different multipliers. The actual code is using a raw number of bytes under the hood.
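As a worked example: 318067507200m is 318067507200 / 1000 = 318,067,507.2 bytes, which is roughly 303 Mi, comfortably below the 1000Mi target.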

The metrics of kubectl top nodes are not correct?

I am trying to get the CPU/memory usage of the k8s cluster nodes via the metrics-server API, but I found that the values returned by metrics-server are lower than the actual CPU/memory used.
The output of the kubectl top command: kubectl top nodes
The following is the output of the free command, from which you can see that the memory usage is greater than 90%.
Why is the difference so high?
kubectl top nodes is reflecting the actual usage of your Kubernetes Nodes.
For example:
Your node has 60 GB of memory and you actually use 30 GB, so it will show 50% usage.
But you can, for example, request 100 MB and have a 200 MB memory limit.
This doesn't mean you only consume 0.16% (100 / 60000) of the memory; requests and limits describe your configuration, not the actual usage.
I know this is an old topic, but I think the problem still remains.
To answer simply, the kubectl top command shows ONLY the actual resource usage, and it is not related to the request/limit configuration in your manifests.
For example:
you could observe 400m:1Gi (CPU/memory) usage on a specific node while the total requests/limits are 1.5:4Gi (CPU/memory).
You would observe enough available resources to schedule, but in reality it will not work.
Requests/limits directly impact node resources (resource reservation), but that does not mean they are completely used (which is what kubectl top nodes shows).
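To see both views side by side for one node, a hedged sketch (the node name is a placeholder):
# actual usage measured by metrics-server
kubectl top node my-node
# reserved requests/limits summed by the scheduler
kubectl describe node my-node | grep -A 8 "Allocated resources"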