Installation of heapster does not show metrics - kubernetes

I'm trying to get additional metrics like CPU and memory displayed on the Kubernetes Dashboard. Based on various forums, it looks like you have to install Heapster in the kube-system namespace.
I installed Heapster, but I'm not seeing any metrics on the dashboard, and when I visit its URL it returns a 404.
How do I show additional Heapster metrics on the Kubernetes Dashboard?

Heapster is deprecated in favor of metrics-server, which provides the same functionality you are looking for, i.e. CPU and memory usage in the dashboard.
If you are using Kubernetes 1.8+ you can install it with:
kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/download/v0.3.7/components.yaml
For more information, see https://github.com/kubernetes-sigs/metrics-server
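Once the manifest is applied, you can verify it is working before checking the dashboard (resource names below assume the stock manifest above; adjust if you customized it):

kubectl -n kube-system get deployment metrics-server
kubectl top nodes

kubectl top nodes should start returning CPU/memory figures a minute or two after the metrics-server pod becomes ready; the dashboard graphs appear once metrics are flowing.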


How can I check the version of cAdvisor included in kubelet?

I have multiple Kubernetes clusters, on versions 1.13, 1.16, and 1.19.
I'm trying to monitor the total number of threads, so I need the container_threads metric.
But on clusters at version 1.16 or lower, container_threads looks wrong:
on 1.16 the metric value is always 0, and on 1.13 the container_threads metric doesn't exist at all.
I know the metric comes from cAdvisor, which is included in kubelet.
I want to find out from which version cAdvisor actually provides container_threads.
I know how to check the kubelet version (kubelet --version),
but I don't know how to find the version of cAdvisor.
Does anyone know about it?
Thanks!
There is no dedicated command to find the version of cAdvisor, although its metrics can be accessed with commands like $ kubectl top.
To run the latest version of cAdvisor standalone, you can use the official cAdvisor Docker image from Google, hosted on Docker Hub.
For more information about the cAdvisor UI overview and processes, head over to the cAdvisor section. Note that cAdvisor's UI was marked deprecated as of Kubernetes version 1.10, and the interface was scheduled to be completely removed in version 1.12.
If you run Kubernetes version 1.12 or later, the UI has been removed. However, the metrics are still there since cAdvisor is part of the kubelet binary.
The kubelet binary exposes all its runtime metrics and all the cAdvisor metrics at the /metrics endpoint using the Prometheus exposition format.
Note: cAdvisor doesn’t store metrics for long-term use, so if you want that functionality, you’ll need to look for a dedicated monitoring tool.
The cAdvisor version can be found through the cadvisor_version_info metric, which is exposed on the /metrics endpoint of your cAdvisor service.
I believe the metric was added in cAdvisor v0.18.0.
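If you just want to read that metric from a running node, one way is via the API server's node proxy. A sketch (replace <node-name> with one of your node names; this assumes you have permission to access the node proxy):

kubectl get --raw /api/v1/nodes/<node-name>/proxy/metrics/cadvisor | grep cadvisor_version_info

On recent kubelets the cAdvisor metrics are served under /metrics/cadvisor; on older versions they were part of /metrics itself, so check both paths if the first returns nothing.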

Kubernetes - access Metrics Server

In the official Kubernetes documentation:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
We can see the following:
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics server monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation.
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
In order to verify I can "make use of custom metrics", I ran:
kubectl get metrics-server
And got the result: error: the server doesn't have a resource type "metrics-server"
How can I verify that the metrics server is deployed in the cluster, as the documentation requires?
Thank you
Under the hood, kubectl sends an API request to a particular endpoint on the Kubernetes API server. A number of resource types come predefined with kubectl, but for endpoints that kubectl doesn't know about, you can use the --raw flag to send the request to the API server directly.
In your case, you can check the built-in metrics with this command:
> kubectl get --raw /apis/metrics.k8s.io
{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}
You will get a JSON response from kubectl. You can then follow the paths in the response to query your target resources. In my case, to get the actual metrics, I need to use this command:
> kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
This metrics endpoint covers the built-in metrics, i.e. CPU and memory. If you want to use custom metrics, you will need to install Prometheus, the Prometheus Adapter, and an exporter appropriate for your application. For custom metrics verification, you can query the following endpoint:
> kubectl get --raw /apis/custom.metrics.k8s.io
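If you just want a quick yes/no on whether the metrics server is deployed, checking the registered APIService is simpler than walking the raw endpoints:

> kubectl get apiservice v1beta1.metrics.k8s.io

If the AVAILABLE column shows True, the Metrics API is being served and commands such as kubectl top will work.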

Live monitoring of containers, nodes and cluster

We are using a k8s cluster for one of our applications. The cluster is owned by another team and we don't have full control over it. We are trying to find metrics around resource utilization (CPU and memory), details about running containers/pods/nodes, and so on. We need to find out how many containers are running in parallel. The problem is that they have exposed monitoring of the cluster via Prometheus, but with Prometheus we are not getting live data, and it has no info about running containers.
My question is: what API is available in a k8s cluster by default that can give us all of this? We don't want to read data from another client like Prometheus or anything else; we want to read metrics directly from the cluster so that the data is not stale. Any suggestions?
As you mentioned, you will need metrics-server (or Heapster on older clusters) to get that information.
You can confirm whether your metrics server is running with kubectl top nodes/pods, or simply by checking whether a heapster or metrics-server pod is present in the kube-system namespace, as shown below.
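For example (the pattern simply matches either component; exact pod names in your cluster may differ):

kubectl top nodes
kubectl -n kube-system get pods | grep -E 'heapster|metrics-server'

If kubectl top returns usage numbers rather than an error, the metrics pipeline is in place.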
The command above will also show you the information you are looking for. I won't go into details, as here you can find a lot of clues and ways of looking at cluster resource usage. You should probably also take a look at cAdvisor, which should already be present in the cluster; it exposes a web UI that exports live information about all the containers on the machine.
Other than that, there are commercial ways of achieving what you are looking for, for example SignalFx and similar projects, but these will probably require the cluster administrator's involvement.

Getting container resource metrics from kubernetes cluster

I am exploring the client-go library for collecting resource metrics for a Kubernetes cluster. I am particularly keen on collecting container metrics from all the pods.
But according to the docs, https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/, it seems we can only get pod- or node-level metrics, not container-level metrics.
Is there a way I can collect container-level metrics (the way the Docker API gives metrics for a container)?
You should deploy an external metrics server such as the Prometheus Adapter (an example of the full metrics pipeline solution mentioned in the official doc you linked).
The quickest way to get it is via the kube-prometheus repository, which is part of the prometheus-operator project.
It already includes kube-state-metrics, which generates the kind of custom metrics you are interested in (pod metrics). An example of a container-related one is kube_pod_container_resource_requests_cpu_cores.
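As a side note: if per-container CPU and memory usage is all you need (rather than arbitrary custom metrics), the core Metrics API served by metrics-server may already be enough, since each PodMetrics object contains a containers list with per-container usage. A quick way to inspect this, assuming metrics-server is installed (<namespace> and <pod-name> are placeholders):

kubectl get --raw /apis/metrics.k8s.io/v1beta1/namespaces/<namespace>/pods/<pod-name>

The same objects are accessible programmatically through the k8s.io/metrics client libraries, which plug into client-go.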

How can I retrieve the memory utilization of a pod in kubernetes via kubectl?

Inside a namespace, I have created a pod whose spec includes memory limit and memory request parameters. Once it is up and running, how can I get the memory utilization of the pod, so that I can figure out whether it is within the specified limit or not? The kubectl top command returns a services-related error.
kubectl top pod <pod-name> -n <fed-name> --containers
FYI, this is on v1.16.2
You need to install the metrics server to get the metrics. Follow the thread below:
Error from server (NotFound): podmetrics.metrics.k8s.io "mem-example/memory-demo" not found
kubectl top pod POD_NAME --containers
shows metrics for a given pod and its containers.
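The output has one row per container; for example (the numbers here are purely illustrative):

POD           NAME          CPU(cores)   MEMORY(bytes)
memory-demo   memory-demo   12m          162Mi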
If you want to see graphs of memory and CPU utilization, you can view them through the Kubernetes dashboard.
A better solution is to install a metrics server along with Prometheus and Grafana in your cluster. Prometheus will scrape the metrics, which Grafana can then display as graphs.
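One common way to set up that stack, assuming Helm is available in your cluster (the release name monitoring below is arbitrary):

helm repo add prometheus-community https://prometheus-community.github.io/helm-charts
helm repo update
helm install monitoring prometheus-community/kube-prometheus-stack

The kube-prometheus-stack chart bundles Prometheus, Grafana, and the standard exporters, so the scraped metrics show up in Grafana with prebuilt dashboards.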
This might be useful.
Instead of building ad-hoc metric snapshots, a much better way is to install and work with third-party data-collector programs, which, if managed well, give you a great solution for monitoring systems plus a neat Grafana UI (or similar) you can work with. One of them is Prometheus, which comes highly recommended.
Using such plug-and-play systems, you not only get a robust monitoring pipeline; the consumption of the data, and hence the reaction to problems, is also far better managed than when relying on top alone.