How can I check the version of cAdvisor included in kubelet? - kubernetes

I have multiple kubernetes clusters, whose versions are 1.13, 1.16, 1.19.
I'm trying to monitor the total number of threads, so I need the metric "container_threads".
But for clusters at version 1.16 or lower, the container_threads metric looks wrong.
On 1.16 the metric value is always 0, and on 1.13 no container_threads metric exists at all.
I know that the metric is from cadvisor which is included in kubelet.
I want to find out from which version onwards cAdvisor exposes container_threads.
I know how to check the kubelet version with "kubelet --version",
but I don't know how to find the version of cAdvisor.
Does anyone know about it?
Thanks!

There is no dedicated command to find the version of cAdvisor. However, its metrics can be accessed using commands like kubectl top.
For the latest standalone version of cAdvisor, you can use the official cAdvisor image from Google hosted on Docker Hub.
Note that cAdvisor's web UI was marked deprecated as of Kubernetes 1.10, and the interface was scheduled to be completely removed in 1.12.
So if you run Kubernetes 1.12 or later, the UI has been removed. However, the metrics are still there, since cAdvisor is part of the kubelet binary.
The kubelet binary exposes all its runtime metrics and all the cAdvisor metrics at the /metrics endpoint using the Prometheus exposition format.
Note: cAdvisor doesn’t store metrics for long-term use, so if you want that functionality, you’ll need to look for a dedicated monitoring tool.

The cAdvisor version can be found through the cadvisor_version_info metric, which is exposed on the /metrics endpoint of your cAdvisor service.
I believe the metric was added in cAdvisor v0.18.0.
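As a sketch of how to check this in practice: you can fetch the kubelet's embedded cAdvisor metrics through the API server proxy and look for cadvisor_version_info, whose cadvisorVersion label carries the version. The node name and all label values below are placeholders, not taken from a real cluster:

```shell
# Against a live cluster (NODE_NAME is a placeholder):
#   kubectl get --raw /api/v1/nodes/NODE_NAME/proxy/metrics/cadvisor | grep cadvisor_version_info

# The matching line follows the Prometheus exposition format; this sample
# (with made-up label values) shows how to pull the version out of it:
line='cadvisor_version_info{cadvisorRevision="de723a0",cadvisorVersion="v0.39.2",kernelVersion="5.4.0",osVersion="Ubuntu 20.04"} 1'
echo "$line" | grep -o 'cadvisorVersion="[^"]*"'
```

On a 1.13 cluster where container_threads is missing, this still tells you which cAdvisor version is vendored into that kubelet.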

Related

Kubernetes - access Metric Server

In the official Kubernetes documentation:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
We can see the following:
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics server monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation.
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
In order to verify that I can "make use of custom metrics", I ran:
kubectl get metrics-server
and got the result: error: the server doesn't have a resource type "metrics-server"
What can I do to verify that "Metrics server monitoring needs to be deployed in the cluster" is satisfied?
Thank you
Under the hood, kubectl sends an API request to a particular endpoint on the Kubernetes API server. A number of resource types are predefined in kubectl, but for endpoints that kubectl does not know about, you can use the --raw flag to send the request to the API server directly.
In your case, you can check the built-in metrics with this command:
> kubectl get --raw /apis/metrics.k8s.io
{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}
You will get a JSON response back from kubectl. You can then follow the groupVersion paths in the response to query your target resources. In my case, to get the actual metrics, I need this command:
> kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
This endpoint exposes the built-in metrics: CPU and memory. If you want to use custom metrics, you will need to install Prometheus, the Prometheus adapter, and the exporter corresponding to your application. To verify the custom metrics setup, you can query the following endpoint:
> kubectl get --raw /apis/custom.metrics.k8s.io
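If you want to pull the preferred group version out of the discovery response programmatically (for instance, to build the follow-up query), a sed one-liner over the JSON works; the response body below is the same one shown above. jq, if installed, would be cleaner:

```shell
# Equivalent with jq (if available):
#   kubectl get --raw /apis/metrics.k8s.io | jq -r .preferredVersion.groupVersion
resp='{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}'

# Extract preferredVersion.groupVersion from the JSON body:
echo "$resp" | sed -n 's/.*"preferredVersion":{"groupVersion":"\([^"]*\)".*/\1/p'
```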

Prometheus (node_exporter) issue when update from GKE 1.15 to 1.16

I have been using Prometheus and Grafana on Kubernetes in Google GKE for many months. For example, in Grafana I used to monitor container_cpu_usage_seconds_total.
But since I upgraded my GKE nodes from 1.15 to 1.16, I have lost the container_* metrics.
To test it, I created a new cluster with version 1.15, installed Prometheus from the Google Marketplace, and upgraded GKE step by step until the issue appeared. Again, the container_* monitoring stopped at version 1.16.
Here you can see container_cpu_usage_seconds_total; it stopped when I upgraded the nodes. There are 3 nodes.
Am I the only one with this issue? Has anyone found a solution?
Thanks for your help :)
Valentin
I found what was going wrong.
With Docker or Kubernetes, node-exporter doesn't send pod metrics (container_*).
cAdvisor must be installed (in the Google Marketplace bundle, cAdvisor is included in the node-exporter image).
Since Kubernetes 1.16, the cAdvisor scrape configuration is wrong; you have to edit the configuration to solve the issue.
All the details are in this post: Prometheus not receiving metrics from cadvisor in GKE
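For reference, a commonly used Prometheus scrape configuration reaches cAdvisor through the API server proxy rather than a direct kubelet port, which keeps working across these version changes. Treat the snippet below as a sketch to adapt to your setup (service account paths assume in-cluster Prometheus), not a drop-in fix for the Marketplace bundle:

```yaml
- job_name: kubernetes-cadvisor
  scheme: https
  tls_config:
    ca_file: /var/run/secrets/kubernetes.io/serviceaccount/ca.crt
  bearer_token_file: /var/run/secrets/kubernetes.io/serviceaccount/token
  kubernetes_sd_configs:
    - role: node
  relabel_configs:
    # Send every scrape through the API server proxy...
    - target_label: __address__
      replacement: kubernetes.default.svc:443
    # ...and rewrite the path to the per-node cAdvisor endpoint.
    - source_labels: [__meta_kubernetes_node_name]
      regex: (.+)
      target_label: __metrics_path__
      replacement: /api/v1/nodes/${1}/proxy/metrics/cadvisor
```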

Installation of heapster does not show metrics

I'm trying to get additional metrics like CPU and memory displayed on the Kubernetes Dashboard. Based on various forums, it looks like you have to install Heapster in the kube-system namespace.
I installed Heapster; however, I'm not seeing any metrics on the dashboard, and when I visit the URL it returns a 404.
How do I show additional heapster metrics on Kubernetes Dashboard?
Heapster is deprecated in favor of metrics-server, which provides the same functionality you are looking for, i.e. CPU and memory usage in the dashboard.
If you are using Kubernetes 1.8+, you can install it using
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
For more information, check https://github.com/kubernetes-sigs/metrics-server
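Once the manifest is applied, you can verify the deployment; these commands run against a live cluster, so they are not runnable offline (names assume the default manifest above):

```shell
# Is the metrics-server Deployment up?
kubectl -n kube-system get deployment metrics-server

# Is the aggregated metrics API registered and answering?
kubectl get apiservice v1beta1.metrics.k8s.io
kubectl top nodes
```

If kubectl top returns node usage rather than an error, the dashboard should be able to show CPU and memory graphs as well.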

Are Kube-state-metrics new or well formatted metrics?

I am fairly new to Kubernetes and have a question concerning kube-state-metrics. When I simply monitor Kubernetes using Prometheus, I obtain a set of metrics from cAdvisor, the nodes (node exporter), the pods, etc. When I include kube-state-metrics, I seem to obtain more "relevant" metrics. Does kube-state-metrics scrape "new" information from Kubernetes, or does it rather reformat the metrics Kubernetes already provides (from the nodes, etc. I mentioned earlier)?
The two are basically unrelated. cAdvisor gives you low-level stats about the containers, like how much RAM and CPU they are using. kube-state-metrics (KSM) gives you information from the Kubernetes API, like Pod object status. They are useful for different things, and you probably want both.

Getting container resource metrics from kubernetes cluster

I am exploring the client-go library for collecting resource metrics from a Kubernetes cluster. I am most interested in collecting the container metrics from all the pods.
But according to the docs, https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/, it seems we can only get pod- or node-level metrics, not container-level metrics.
Is there a way I can collect container-level metrics (the way the Docker API gives metrics for a container)?
You should deploy an external metrics server such as the Prometheus Adapter (an example of a full metrics pipeline mentioned in the official doc you linked).
The quickest way to achieve this is via the kube-prometheus repository, which is part of the prometheus-operator project.
It already includes kube-state-metrics, which generates custom metrics of interest (pod metrics); an example of a container-related one is kube_pod_container_resource_requests_cpu_cores.
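Note also that the resource Metrics API itself already reports usage per container: each PodMetrics object carries a containers list with a usage block per container. A sketch using a hand-written, abbreviated sample response (the pod name and values are made up):

```shell
# Against a live cluster (POD_NAME is a placeholder):
#   kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/default/pods/POD_NAME
# returns a PodMetrics object; an abbreviated, made-up example body:
podmetrics='{"kind":"PodMetrics","metadata":{"name":"web-0"},"containers":[{"name":"app","usage":{"cpu":"12m","memory":"48Mi"}}]}'

# Pull out the first container's CPU usage:
echo "$podmetrics" | sed -n 's/.*"usage":{"cpu":"\([^"]*\)".*/\1/p'
```

In client-go terms, the same data is available through the k8s.io/metrics typed client, which decodes these PodMetrics objects for you.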