I want to monitor the current/target CPU utilization at the deployment/HPA level using Prometheus. GCP's Kubernetes monitoring shows these metrics on the Stackdriver dashboard, but I could not find out how they are tracked.
The following link contains the list of HPA metrics exposed, which does not include the current/target CPU utilization:
https://github.com/kubernetes/kube-state-metrics/blob/1dfe6681e9/docs/horizontalpodautoscaler-metrics.md
I think you can take a look at cAdvisor. cAdvisor is part of the kubelet service and acts as a monitoring agent for the performance and resource usage of the containers on a particular node. By default, cAdvisor exposes container statistics as Prometheus metrics, available via the kubelet's /metrics/cadvisor endpoint. I guess you can use the container_cpu_load_average_10s metric to fetch the CPU utilization of each container of the relevant Pod/Deployment.
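As a sketch, assuming the cAdvisor and kube-state-metrics metrics are being scraped into Prometheus, a query along these lines approximates current CPU utilization as a percentage of the requested CPU, per pod (the namespace value is a placeholder, and metric/label names vary between versions — older kube-state-metrics releases use kube_pod_container_resource_requests_cpu_cores, and older cAdvisor versions label the pod as pod_name):

```promql
# Current CPU usage as a percentage of the CPU request, per pod.
100 *
  sum by (pod) (rate(container_cpu_usage_seconds_total{namespace="my-namespace", container!=""}[5m]))
/
  sum by (pod) (kube_pod_container_resource_requests{namespace="my-namespace", resource="cpu"})
```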
Related
I'm trying to find a suitable expression for an alert that fires when pod utilization reaches 80% for either memory or CPU, across all the pods that exist in a namespace. I would appreciate it if someone could help me achieve that.
The Kubernetes ecosystem includes two complementary add-ons for aggregating and reporting valuable monitoring data from your cluster: Metrics Server and kube-state-metrics.
Metrics Server collects resource usage statistics from the kubelet on each node and provides aggregated metrics through the Metrics API. Metrics Server stores only near-real-time metrics in memory, so it is primarily valuable for spot checks of CPU or memory usage, or for periodic querying by a full-featured monitoring service that retains data over longer timespans.
kube-state-metrics is a service that makes cluster state information easily consumable. Whereas Metrics Server exposes metrics on resource usage by pods and nodes, kube-state-metrics listens to the control plane API server for data on the overall status of Kubernetes objects (nodes, pods, Deployments, etc.), as well as the resource limits and allocations for those objects. It then generates metrics from that data and exposes them in Prometheus format on its own /metrics endpoint.
Once you have installed Metrics Server, you can use the following command to get the metrics:
kubectl top pod <pod-name> -n <namespace> --containers
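For the Prometheus alerting part of the question, a hedged sketch of two alert expressions might look like the following (this assumes cAdvisor and kube-state-metrics metrics are available in Prometheus, that limits are set on the containers, and that your versions use these metric names — older kube-state-metrics releases expose kube_pod_container_resource_limits_cpu_cores and ..._memory_bytes instead):

```promql
# Fires when a pod's CPU usage exceeds 80% of its CPU limit.
sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))
  / sum by (namespace, pod) (kube_pod_container_resource_limits{resource="cpu"})
  > 0.80

# Fires when a pod's memory usage exceeds 80% of its memory limit.
sum by (namespace, pod) (container_memory_working_set_bytes{container!=""})
  / sum by (namespace, pod) (kube_pod_container_resource_limits{resource="memory"})
  > 0.80
```

Each expression would go in the expr field of a Prometheus alerting rule; filter by namespace in the label selectors if you only want one namespace.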
I'd like to see how my services behave on Kubernetes so I can optimize my code and set good values for requests/limits on both CPU and memory.
To do that I have tried kubectl top, but it only gives me the current usage.
kubectl top pod podname
How do I get the init, min and max usage?
If it is not possible to get all those values, is there any way to get max usage?
To see these stats, you may want to use one of the following monitoring tools:
cAdvisor
Container Advisor is a great monitoring tool that provides
container-level metrics and exposes resource usage and performance
data from running containers. It provides quick insight into CPU
usage, memory usage, and network receive/transmit of running
containers. cAdvisor is embedded into the kubelet, hence you can
scrape the kubelet to get container metrics, store the data in a
persistent time-series store like Prometheus/InfluxDB, and then
visualize it via Grafana.
Metrics Server
Metrics Server is a cluster-wide aggregator of resource usage data and
collects basic metrics like CPU and memory usage for Kubernetes nodes,
pods, and containers. It’s used by Horizontal Pod Autoscaler and the
Kubernetes dashboard itself, and users can access these metrics
directly by using the kubectl top command. Metrics Server replaces
Heapster, which has been deprecated in newer versions of Kubernetes,
as the primary metrics aggregator in the cluster.
Node Exporter
Node Exporter is the Prometheus exporter for hardware and operating
system metrics. It allows you to monitor node-level metrics such as
CPU, memory, filesystem space, network traffic, and other monitoring
metrics, which Prometheus scrapes from a running node exporter
instance. You can then visualize these metrics in Grafana.
Kube-State-Metrics
Kube-state-metrics is an add-on agent that listens to the Kubernetes
API server. It generates metrics about the state of the Kubernetes
objects inside the cluster like deployments, replica sets, nodes, and
pods.
Metrics generated by kube-state-metrics are different from resource
utilization metrics, which focus primarily on CPU, memory, and
network usage. kube-state-metrics exposes critical metrics about the
condition of your Kubernetes cluster:
Resource requests and limits
Number of objects: nodes, pods, namespaces, services, deployments
Number of pods in a running/terminated/failed state
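As an illustration of that last item, the pod-state information can be queried like this (a sketch, assuming kube-state-metrics is installed and scraped by Prometheus):

```promql
# Number of pods in each phase (Running, Pending, Failed, ...),
# per namespace, from kube-state-metrics.
sum by (namespace, phase) (kube_pod_status_phase)
```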
Prometheus
Prometheus is a free software application used for event monitoring
and alerting. It records real-time metrics in a time-series database
built on an HTTP pull model, with flexible queries and real-time
alerting.
You can visualize Prometheus monitoring data with Grafana
and its dashboard collection.
You can find detailed instructions on how to use them together in the Monitor Your Kubernetes Cluster With Prometheus and Grafana guide.
I want to know the share of resources (CPU, memory) used per Kubernetes pod,
and I want to know what the baseline for that share is.
This is hard to do using kubectl alone (or at least I don't know how). What we usually do is have Prometheus scrape the kubelet and kube-state-metrics, and then use Grafana to calculate those values. The following metrics should allow you to calculate yours:
CPU cores:
kube_node_status_allocatable_cpu_cores - available cores
kube_pod_container_resource_requests_cpu_cores - requested cores per container
container_cpu_usage_seconds_total - used cores per container
Memory:
kube_node_status_allocatable_memory_bytes - available memory
kube_pod_container_resource_requests_memory_bytes - requested memory by container
container_memory_usage_bytes - used memory by container
You can filter those by label (i.e. by pod name or namespace) and calculate all kinds of things based on them.
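For example, a sketch of one such calculation, using the (older-style) metric names listed above — each pod's share of the cluster's total allocatable memory; the namespace value is a placeholder:

```promql
# Fraction of total cluster-allocatable memory requested by each pod.
sum by (pod) (kube_pod_container_resource_requests_memory_bytes{namespace="my-namespace"})
  / scalar(sum(kube_node_status_allocatable_memory_bytes))
```

Swap the requests metric for container_memory_usage_bytes to get actual usage rather than the requested share.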
Try kubectl top and related commands.
You can use Metrics Server, which is the cluster-wide aggregator of resource usage data. It is the source of container resource metrics for Kubernetes' built-in autoscaling pipelines. It collects resource metrics from kubelets and exposes them through the Kubernetes API server; you can also access the Metrics API via kubectl top.
Another solution is to use Prometheus, as suggested by @jankantert.
There are three levels of metrics collection to consider in Kubernetes - Node, Pod and the Application that runs in the pod.
For Node and Application metrics I have solutions that work wonderfully, but I am stuck on pod metrics.
I have tried cAdvisor and kube-state-metrics, but neither gives me what I want. kube-state-metrics only gives information that is already known, like pod CPU limits and requests. cAdvisor doesn't attach pod labels to container metrics, so I have no means of knowing which pod is misbehaving.
Given a pod, I'd like to know its CPU, memory, and storage usage, both with respect to the pod itself and with respect to the node it is scheduled on.
I am using Prometheus to collect metrics via the Prometheus Operator CRDs.
Can anyone help suggest an open source metrics exporter that would do the job I mentioned above?
The standard metrics collector is Heapster. It comes preinstalled with many vendors, such as GKE. With Heapster installed, you can simply run kubectl top pods to see CPU/memory metrics on the client side. You can plug it into a sink to store the results for archival. (Note that Heapster has since been deprecated in favor of Metrics Server.)
https://github.com/kubernetes/heapster
I defined my deployment resources:
resources:
  limits:
    cpu: 900m
    memory: 2500Mi
Now, via http://localhost:8001/api,
how can I get the maximum usage of memory and CPU (in order to define requests and limits well)?
Usually, you will need to implement some monitoring solution for your K8s cluster to store historical metrics.
If your Kubernetes deployment runs on GKE, you can use Stackdriver for that, and if you have opted for Stackdriver Premium, you will see your historical metrics there. If it's your own Kubernetes deployment, Prometheus/Grafana is a popular choice.
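With Prometheus in place, the historical maximum can be read off with a subquery — a sketch, assuming Prometheus 2.7+ (for subquery support) and cAdvisor metrics; "my-app" is a placeholder pod-name pattern:

```promql
# Peak working-set memory of the matching pods over the last 7 days,
# sampled at 5-minute resolution.
max_over_time(
  sum by (pod) (container_memory_working_set_bytes{pod=~"my-app.*", container!=""})[7d:5m]
)
```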
By default, pods run with unbounded CPU and memory limits. This means that any pod in the system can consume as much CPU and memory as is available on the node that executes it.
Reference:
https://kubernetes.io/docs/tasks/administer-cluster/cpu-memory-limit/
Tools like kubectl top can only show your pods' resource usage at the current moment. You need a monitoring service.
kube-state-metrics -> prometheus -> grafana-kubernetes-app is a popular solution to monitor metrics of self deployed k8s cluster.
kube-state-metrics generates metrics about the state of Kubernetes objects, such as nodes, pods, and deployments.
You just need to track the metrics kube_pod_container_resource_requests_cpu_cores and kube_pod_container_resource_requests_memory_bytes, as the documentation specifies.
Prometheus collects metrics from kube-state-metrics every few seconds and builds up time-series data.
grafana-kubernetes-app charts them easily. You can find the maximum cpu usage of a given pod then.
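As a sketch of such a maximum-usage query (assuming Prometheus 2.7+ subquery support and cAdvisor metrics; "my-pod" is a placeholder):

```promql
# Highest 5-minute-average CPU usage (in cores) of a pod over the last day.
max_over_time(
  sum by (pod) (rate(container_cpu_usage_seconds_total{pod="my-pod", container!=""}[5m]))[1d:5m]
)
```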
If you are running on GKE:
Nowadays the Kubernetes console has advanced, and you can see all statistics in the console UI.
Go to https://console.cloud.google.com/kubernetes/list
and, under Workloads, choose the deployment.
A little late to the party, but I created Kube Eagle for exactly this purpose: https://github.com/google-cloud-tools/kube-eagle