I deployed Prometheus and Grafana into my cluster.
When I open the dashboards I don't get data for pod CPU usage.
When I check the Prometheus UI, it shows pods 0/0 up, even though I have many pods running in my cluster.
What could be the reason? I have node exporter running on all of my nodes.
I am getting this log output from kube-state-metrics:
I0218 14:52:42.595711 1 builder.go:112] Active collectors: configmaps,cronjobs,daemonsets,deployments,endpoints,horizontalpodautoscalers,jobs,limitranges,namespaces,nodes,persistentvolumeclaims,persistentvolumes,poddisruptionbudgets,pods,replicasets,replicationcontrollers,resourcequotas,secrets,services,statefulsets
I0218 14:52:42.595735 1 main.go:208] Starting metrics server: 0.0.0.0:8080
Here is my Prometheus config file:
https://gist.github.com/karthikeayan/41ab3dc4ed0c344bbab89ebcb1d33d16
I'm able to hit and get data for:
http://localhost:8080/api/v1/nodes/<my_worker_node>/proxy/metrics/cadvisor
As mentioned by karthikeayan in the comments:
OK, I found something interesting in the values.yaml comments: prometheus.io/scrape: Only scrape pods that have a value of true. When I remove this relabel_config from the Kubernetes ConfigMap, I get the data in the Prometheus UI. Unfortunately the ConfigMap doesn't have the comments; I believe Helm removes the comments before deploying it.
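For reference, the relabel rule in question typically looks like the sketch below (this reflects the common Helm chart convention, not the exact gist); it keeps only pods annotated with prometheus.io/scrape: "true", so annotating the pods is an alternative to removing the rule:

relabel_configs:
  # keep only pods that carry the prometheus.io/scrape: "true" annotation
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: true

And the matching pod-side annotations (the port value is just an example):

metadata:
  annotations:
    prometheus.io/scrape: "true"
    prometheus.io/port: "8080"   # the container's metrics port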
And just for clarification:
kube-state-metrics vs. metrics-server
The metrics-server is a project that has been inspired by Heapster and is implemented to serve the goals of the Kubernetes Monitoring Pipeline. It is a cluster-level component which periodically scrapes metrics from all Kubernetes nodes served by the Kubelet through the Summary API. The metrics are aggregated, stored in memory and served in Metrics API format. The metrics-server stores the latest values only and is not responsible for forwarding metrics to third-party destinations.
kube-state-metrics is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it. And just like the metrics-server, it too is not responsible for exporting its metrics anywhere.
Having kube-state-metrics as a separate project also enables access to these metrics from monitoring systems such as Prometheus.
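As a rough illustration of the difference (the service name and namespace depend on how kube-state-metrics was installed):

# Metrics Server: current resource usage, served through the Metrics API / kubectl top
kubectl top pod -n kube-system

# kube-state-metrics: object-state metrics exposed on its /metrics endpoint
kubectl port-forward -n kube-system svc/kube-state-metrics 8080:8080 &
curl -s localhost:8080/metrics | grep kube_deployment_status_replicas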
Related
I am currently trying to set up a horizontal pod autoscaler for my application running inside Kubernetes. The HPA relies on external metrics that are fetched from Prometheus by a Prometheus adapter (https://github.com/kubernetes-sigs/prometheus-adapter).
The metrics are fetched by the adapter and made available to the Kubernetes metrics API successfully, but the metricLabels map is empty, making it impossible for the HPA to associate the correct metrics with the correct pod.
E.g. a query to the metrics API:
kubectl get --raw "/apis/external.metrics.k8s.io/v1beta1/namespaces/<namespace>/batchCommandsActive_totalCount/"
{"kind":"ExternalMetricValueList","apiVersion":"external.metrics.k8s.io/v1beta1","metadata":{},"items":[{"metricName":"batchCommandsActive_totalCount",**"metricLabels":{}**,"timestamp":"2023-02-10T11:38:48Z","value":"0"}]}
Those metrics should have three labels associated to them (hostname, localnode and path) in order for the correct pod to retrieve them.
Here is an extract of the Prometheus adapter configmap that defines the queries made to Prometheus by the Prometheus adapter
- seriesQuery: '{__name__="batchCommandsActive_totalCount",hostname!="",localnode!="",path!=""}'
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (name)
  resources:
    namespaced: false
Thanks for your help!
So far, no answer from Stack Overflow or tutorials (e.g. https://github.com/kubernetes-sigs/prometheus-adapter/blob/master/docs/walkthrough.md) has helped with my problem.
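A plausible explanation, sketched here as a guess rather than a confirmed fix: the series carry no name label, so sum(...) by (name) collapses the result and drops hostname, localnode and path. Grouping by the labels that should be preserved ought to let the adapter surface them in metricLabels, roughly:

- seriesQuery: '{__name__="batchCommandsActive_totalCount",hostname!="",localnode!="",path!=""}'
  metricsQuery: sum(<<.Series>>{<<.LabelMatchers>>}) by (hostname, localnode, path)
  resources:
    namespaced: false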
I would like to see how my services behave on Kubernetes so I can optimize my code and set good request/limit values for both CPU and memory.
To do that I have tried kubectl top, but it only gives me the current usage.
kubectl top pod podname
How do I get the init, min and max usage?
If it is not possible to get all those values, is there any way to get max usage?
In order to see stats you may want to use one of these monitoring tools:
cAdvisor
Container Advisor is a great monitoring tool that provides
container-level metrics and exposes resource usage and performance
data from running containers. It provides quick insight into CPU
usage, memory usage, and network receive/transmit of running
containers. cAdvisor is embedded into the kubelet, hence you can
scrape the kubelet to get container metrics, store the data in a
persistent time-series store like Prometheus/InfluxDB, and then
visualize it via Grafana.
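For example, once Prometheus scrapes cAdvisor through the kubelet, per-pod usage can be queried with something like the following (label names such as pod vs. pod_name vary between Kubernetes versions):

# CPU used by each pod, in cores, averaged over the last 5 minutes
sum by (namespace, pod) (rate(container_cpu_usage_seconds_total{container!=""}[5m]))

# current working-set memory of each pod, in bytes
sum by (namespace, pod) (container_memory_working_set_bytes{container!=""})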
Metrics Server
Metrics Server is a cluster-wide aggregator of resource usage data and
collects basic metrics like CPU and memory usage for Kubernetes nodes,
pods, and containers. It’s used by Horizontal Pod Autoscaler and the
Kubernetes dashboard itself, and users can access these metrics
directly by using the kubectl top command. Metrics Server replaces
Heapster, which has been marked as deprecated in newer versions of
Kubernetes, as the primary metrics aggregator in the cluster.
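The same data that backs kubectl top can also be read from the Metrics API directly, for example:

kubectl get --raw "/apis/metrics.k8s.io/v1beta1/nodes"
kubectl get --raw "/apis/metrics.k8s.io/v1beta1/namespaces/default/pods"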
Node Exporter
Node Exporter is the Prometheus exporter for hardware and operating
system metrics. It allows you to monitor node-level metrics such as
CPU, memory, filesystem space, network traffic, and other monitoring
metrics, which Prometheus scrapes from a running node exporter
instance. You can then visualize these metrics in Grafana.
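Typical node-level queries against node exporter metrics look roughly like this (metric names assume a reasonably recent node exporter):

# node CPU utilisation in percent
100 - (avg by (instance) (rate(node_cpu_seconds_total{mode="idle"}[5m])) * 100)

# free filesystem space per mount point, excluding tmpfs and overlay
node_filesystem_avail_bytes{fstype!~"tmpfs|overlay"}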
Kube-State-Metrics
Kube-state-metrics is an add-on agent that listens to the Kubernetes
API server. It generates metrics about the state of the Kubernetes
objects inside the cluster like deployments, replica sets, nodes, and
pods.
Metrics generated by kube-state-metrics are different from resource utilization metrics, which are primarily geared towards CPU, memory, and network usage. Kube-state-metrics exposes critical metrics about the condition of your Kubernetes cluster:
Resource requests and limits
Number of objects: nodes, pods, namespaces, services, deployments
Number of pods in a running/terminated/failed state
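For instance (metric names differ slightly between kube-state-metrics v1 and v2; these follow the v2 naming):

# pods per phase across the cluster
sum by (phase) (kube_pod_status_phase)

# CPU requested per namespace
sum by (namespace) (kube_pod_container_resource_requests{resource="cpu"})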
Prometheus
Prometheus is a free software application used for event monitoring
and alerting. It records real-time metrics in a time series database
built using an HTTP pull model, with flexible queries and real-time
alerting.
You can visualize Prometheus monitoring data with Grafana
and its dashboard collection.
You can find detailed instructions on how to use them together in the Monitor Your Kubernetes Cluster With Prometheus and Grafana guide.
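With the metrics in Prometheus, the max usage asked about above can be approximated with max_over_time, for example (the pod name, namespace and 24h lookback are placeholders):

# peak working-set memory of each container in the pod over the last 24 hours
max_over_time(container_memory_working_set_bytes{namespace="default", pod="podname", container!="", container!="POD"}[1d])

# peak 5-minute-average CPU of the pod over the last 24 hours (subquery, needs Prometheus >= 2.7)
max_over_time(sum(rate(container_cpu_usage_seconds_total{namespace="default", pod="podname"}[5m]))[1d:5m])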
I have been trying to implement Kubernetes HPA using metrics from Kafka-exporter. HPA supports Prometheus, so we tried writing the metrics to a Prometheus instance. From there, we are unclear on the steps to follow. Is there an article that explains this in detail?
I followed https://medium.com/google-cloud/kubernetes-hpa-autoscaling-with-kafka-metrics-88a671497f07
for the same in GCP, where we used Stackdriver, and the implementation worked like a charm. But we are struggling with the on-premise setup, as Stackdriver needs to be replaced by Prometheus.
In order to scale based on custom metrics, Kubernetes needs to query an API that exposes those metrics. That API needs to implement the custom metrics interface.
So for Prometheus, you need to set up an API that exposes Prometheus metrics through the custom metrics API. Luckily, there already is an adapter.
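A minimal sketch of installing it with the community Helm chart (the Prometheus service URL below is an assumption; point it at your own instance):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm install prometheus-adapter prometheus-community/prometheus-adapter \
  --set prometheus.url=http://prometheus-server.monitoring.svc \
  --set prometheus.port=80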
When I implemented Kubernetes HPA using metrics from Kafka-exporter, I had a few setbacks, which I solved by doing the following:
1. I deployed the kafka-exporter container as a sidecar to the pods I wanted to scale. I found that the HPA scales the pod it gets the metrics from.
2. I used annotations to make Prometheus scrape the metrics from the pods with the exporter.
3. Then I verified that the kafka-exporter metrics are reaching Prometheus. If they are not there, you can't advance further.
4. I deployed the Prometheus adapter using its Helm chart. The adapter "translates" Prometheus metrics into the Custom Metrics API, which makes them visible to the HPA.
5. I made sure that the metrics are visible in Kubernetes by executing kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1 from one of the master nodes.
6. I created an HPA with the matching metric name (a sketch follows below).
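A rough sketch of step 6, assuming the adapter exposes a metric named kafka_consumergroup_lag from the exporter sidecar (the metric name, Deployment name and target value are illustrative):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: consumer                  # the Deployment running the kafka-exporter sidecar
  minReplicas: 1
  maxReplicas: 10
  metrics:
    - type: Pods
      pods:
        metric:
          name: kafka_consumergroup_lag
        target:
          type: AverageValue
          averageValue: "100"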
Here is a complete guide explaining how to implement Kubernetes HPA using Metrics from Kafka-exporter
Please comment if you have more questions
I've read some pages about monitoring Kubernetes, and I found kubernetes_sd_config (within Prometheus), metrics-server (which took the place of Heapster) and kube-state-metrics. All of them can provide metrics, but what's the difference?
Does kubernetes_sd_config (within prometheus) provide all the data those I can get using metrics-server and kube-state-metrics?
Is kubernetes_sd_config just enough for monitoring?
Is metrics-server just for providing data (less than kubernetes_sd_config) to internal components (such as the HPA controller)?
Is kube-state-metrics just for the objects (pod, deployment...) in k8s?
What is the target of each of them?
1 Metrics-server is a cluster-level component which periodically scrapes container CPU and memory usage metrics from all Kubernetes nodes served by the Kubelet through the Summary API.
The Kubelet exports a "summary" API that aggregates stats from all pods.
$ kubectl proxy &
Starting to serve on 127.0.0.1:8001
$ NODE=$(kubectl get nodes -o=jsonpath="{.items[0].metadata.name}")
$ curl localhost:8001/api/v1/nodes/${NODE}/proxy/stats/summary
Use-Cases:
Horizontal Pod Autoscaler.
The kubectl top command.
2 kube-state-metrics is focused on generating completely new metrics from Kubernetes' object state (e.g. metrics based on deployments, replica sets, etc.). It holds an entire snapshot of Kubernetes state in memory and continuously generates new metrics based off of it.
Use-Cases
Count the number of Kubernetes objects.
How many namespaces are there?
sysdig-k8s-state-metrics provides further information.
3 Prometheus Node Exporter gets the host-level metrics and exposes them to Prometheus.
Use-Cases
User and Kernel Space level information.
Lastly, kubernetes_sd_config is the Prometheus configuration section that defines how scrape targets are discovered from the Kubernetes API.
You can decide in the config what kind of information you want to gather and from whom.
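A minimal sketch of a scrape job using kubernetes_sd_configs (the node role here; pod, service, endpoints and ingress roles exist as well, and the TLS/auth details depend on the cluster):

scrape_configs:
  - job_name: kubernetes-nodes
    kubernetes_sd_configs:
      - role: node                 # one discovered target per node (the kubelet)
    scheme: https
    tls_config:
      ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
      insecure_skip_verify: true
    bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token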
There are three levels of metrics collection to consider in Kubernetes - Node, Pod and the Application that runs in the pod.
For Node and Application metrics I have solutions that work wonderfully, but I am stuck on pod metrics.
I have tried cAdvisor and kube-state-metrics, but neither gives me what I want. Kube-state-metrics only gives information that is already known, like pod CPU limits and requests. cAdvisor doesn't attach pod labels to container names, so I have no means of knowing which pod is misbehaving.
Given a pod, I'd like to know its CPU, memory and storage usage, both with respect to the pod itself and with respect to the node it is scheduled on.
I am using Prometheus to collect metrics via the Prometheus Operator CRDs.
Can anyone help suggest an open source metrics exporter that would do the job I mentioned above?
The standard metrics collector is Heapster. It comes preinstalled with many vendors, such as GKE. With Heapster installed, you can just run kubectl top pods to see CPU/memory metrics on the client side. You can plug it into a sink to store the results for archival.
https://github.com/kubernetes/heapster