Getting container resource metrics from kubernetes cluster

I am exploring the client-go library for collecting resource metrics from a Kubernetes cluster. I am mainly interested in collecting the container metrics from all the pods.
But according to the documentation, https://kubernetes.io/docs/tasks/debug-application-cluster/resource-usage-monitoring/, it seems we can only get pod- or node-level metrics, not container-level metrics.
Is there a way I can collect container-level metrics (the way the Docker API gives metrics for a container)?

You should deploy an external metrics provider such as the Prometheus Adapter (an example of the full metrics pipeline solutions mentioned in the official doc you linked).
The quickest way to get there is the kube-prometheus repository, which is part of the prometheus-operator project.
It already includes kube-state-metrics, which generates the metrics you are interested in (pod metrics), including container-level ones such as kube_pod_container_resource_requests_cpu_cores.
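If you want to consume per-container numbers from client-go itself, note that the resource metrics API (metrics.k8s.io, served by metrics-server or a compatible adapter) returns PodMetrics objects that carry a per-container breakdown. A minimal sketch, assuming the k8s.io/metrics client library is available; the kubeconfig path and namespace are placeholders:

package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/tools/clientcmd"
	metricsclient "k8s.io/metrics/pkg/client/clientset/versioned"
)

func main() {
	// Placeholder kubeconfig path; use rest.InClusterConfig() when running inside a pod.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	mc, err := metricsclient.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Each PodMetrics item contains per-container usage in .Containers.
	podMetrics, err := mc.MetricsV1beta1().PodMetricses("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, pod := range podMetrics.Items {
		for _, c := range pod.Containers {
			fmt.Printf("pod=%s container=%s cpu=%s memory=%s\n",
				pod.Name, c.Name, c.Usage.Cpu().String(), c.Usage.Memory().String())
		}
	}
}

This only gives point-in-time CPU and memory usage; for request/limit series like the kube-state-metrics one above, or for historical data, you still want the Prometheus stack.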

Related

Kubernetes - access Metric Server

In the official Kubernetes documentation:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
We can see the following:
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics server monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation.
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
In order to verify I can "make use of custom metrics", I ran:
kubectl get metrics-server
And got the result: error: the server doesn't have a resource type "metrics-server"
May I ask what I can do to verify that "Metrics server monitoring needs to be deployed in the cluster", please?
Thank you
What kubectl actually does behind the scenes is send an API request to a particular endpoint of the Kubernetes API server. kubectl knows about a set of predefined resource types, but for endpoints not covered by those types you can use the --raw flag to send the request to the API server directly.
In your case, you can check the built-in metrics with this command:
> kubectl get --raw /apis/metrics.k8s.io
{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}
You will get the JSON response from kubectl. You can then follow the groupVersion in the response to query your target resources. In my case, in order to get the actual metrics, I need to use this command:
> kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
This endpoint refers to the built-in resource metrics, which are CPU and memory. If you want to use custom metrics, you will need to install Prometheus, the Prometheus Adapter, and the exporter that corresponds to your application. To verify the custom metrics API, you can query the following endpoint:
> kubectl get --raw /apis/custom.metrics.k8s.io
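If you prefer to run the same check programmatically, a minimal client-go sketch of the raw request (with a placeholder kubeconfig path) could look like this:

package main

import (
	"context"
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Equivalent of `kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods`.
	// If this fails with a NotFound or ServiceUnavailable error, the metrics
	// API is not being served, i.e. metrics-server is not deployed or not healthy.
	body, err := clientset.Discovery().RESTClient().
		Get().
		AbsPath("/apis/metrics.k8s.io/v1beta1/pods").
		DoRaw(context.TODO())
	if err != nil {
		panic(err)
	}
	fmt.Println(string(body))
}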

Prometheus Adapter configuration for kubernetes metrics

I installed prometheus-adapter with Helm.
Now I don't know how to configure prometheus-adapter so that my Kubernetes cluster can communicate with an external server where Prometheus is installed.
Where and how can I connect prometheus-adapter to Prometheus?
I want to use data from Prometheus for my external metrics in Kubernetes.
First, you'll need to deploy the Prometheus Operator.
This walkthrough assumes that Prometheus is deployed in the prom namespace. Most of the sample commands and files are namespace-agnostic, but there are a few commands or pieces of configuration that rely on that namespace. If you're using a different namespace, simply substitute that in for prom when it appears.
Note that if you are deploying on a non-x86_64 (amd64) platform, you'll need to change the image field in the Deployment to be the appropriate image for your platform.
Make sure you are using the default adapter configuration, which should work with a standard Prometheus Operator setup; but if you've got custom relabelling rules, or your labels weren't exactly namespace and pod, you may need to edit the configuration in the ConfigMap. The configuration walkthrough provides an overview of how configuration works.
Make sure that you have registered the API with the API aggregator (part of the main Kubernetes API server).
Try fetching the discovery information for it:
$ kubectl get --raw /apis/custom.metrics.k8s.io/v1beta1
Since you've set up Prometheus to collect your app's metrics, you should see a pods/http_requests resource show up. This represents the http_requests_total metric, converted into a rate and aggregated to have one datapoint per pod. Notice that this is exposed through the same API that a HorizontalPodAutoscaler configured for custom metrics would use.
The API is registered as custom.metrics.k8s.io/v1beta1, and you can find more information about aggregation at Concepts: Aggregation.
You can find more information in these instructions.
Let me know if it helps.
If you just want prometheus-adapter to communicate with Prometheus, you need to pass the Prometheus service URL to prometheus-adapter (for example, the Helm chart exposes a prometheus.url value for this), so that prometheus-adapter knows where to pull the metrics from.
The default Prometheus service URL is http://prometheus.svc:9090. You need to figure out what your Prometheus service URL is.

Are Kube-state-metrics new or well formatted metrics?

I am fairly new to Kubernetes and had a question concerning kube-state-metrics. When I simply monitor Kubernetes using Prometheus, I obtain a set of metrics from cAdvisor, the nodes (node exporter), the pods, etc. When I include kube-state-metrics, I seem to obtain more "relevant" metrics. Does kube-state-metrics allow scraping "new" information from Kubernetes, or is it rather "formatted" metrics built from the initial Kubernetes metrics (from the nodes, etc. that I mentioned earlier)?
The two are basically unrelated. cAdvisor gives you low-level stats about the containers, like how much RAM and CPU they are using. KSM (kube-state-metrics) gives you information from the Kubernetes API, like the Pod object status. Both are useful for different things, and you probably want both.

Service Level metrics Prometheus in k8

I would like to see Kubernetes Service-level metrics in Grafana from the underlying Prometheus server.
For instance:
1) If I have 3 application pods exposed through a Service, I would like to see Service-level metrics for CPU, memory & network I/O pressure, total # of requests, and # of requests failed.
2) Also, if I have a group of pods (replicas) related to an application which doesn't have a Service on top of them, I would like to see the aggregated metrics of the pods related to that application in a single view on Grafana.
What would be the Prometheus queries to achieve this?
Service-level metrics for CPU, memory & network I/O pressure
If you have Prometheus installed on your Kubernetes cluster, all those statistics are already being collected by Prometheus. There are many good articles about how to install and use Kubernetes with Prometheus; check that one as an example.
Here is an example of a request to fetch container memory usage:
container_memory_usage_bytes{image="CONTAINER:VERSION"}
Total # of requests, # of requests failed
Those are service-level metrics, and to collect them you need to use a Prometheus exporter created specifically for your service. Check the list of exporters, find the one you need for your service, and follow its instructions.
If you cannot find an exporter for your application, you can write one yourself; there is official documentation about it, and a small sketch follows below.
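As a rough illustration only (the metric name, label values, and port below are made up for the example), instrumenting an application with the Go client library and exposing the counters on /metrics for Prometheus to scrape could look like this:

package main

import (
	"net/http"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/promauto"
	"github.com/prometheus/client_golang/prometheus/promhttp"
)

// Counter with a status label, so total and failed requests can be
// distinguished (and summed) in PromQL.
var requestsTotal = promauto.NewCounterVec(prometheus.CounterOpts{
	Name: "myapp_http_requests_total",
	Help: "Total number of handled HTTP requests.",
}, []string{"status"})

func handler(w http.ResponseWriter, r *http.Request) {
	if err := doWork(r); err != nil {
		requestsTotal.WithLabelValues("failed").Inc()
		http.Error(w, err.Error(), http.StatusInternalServerError)
		return
	}
	requestsTotal.WithLabelValues("success").Inc()
	w.Write([]byte("ok"))
}

// doWork stands in for the real business logic.
func doWork(r *http.Request) error { return nil }

func main() {
	http.HandleFunc("/", handler)
	http.Handle("/metrics", promhttp.Handler()) // endpoint scraped by Prometheus
	http.ListenAndServe(":8080", nil)
}

With that in place, a query like sum(rate(myapp_http_requests_total[5m])) by (status) gives the per-status request rate, which covers the total/failed requests part of the question.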
application which doesn't have a Service on top of them, would like to see the aggregated metrics of the pods related to that application in a single view on Grafana
It is possible to combine any graphs in a single view in Grafana using Dashboards and Panels. Check the official documentation; all those topics are covered in detail and are easy to understand.
Aggregation can be done by Prometheus itself using its aggregation operators.
All metrics from Kubernetes have labels, so you can group by them:
sum(http_requests_total) by (application, group), where application and group are labels.
Also, here is the official Prometheus documentation about how to add Prometheus to Grafana as a data source.

Collecting app-level metrics from Kubernetes containers

According to the Kubernetes Custom Metrics Proposal, containers can expose their app-level metrics in Prometheus format to be collected by Heapster.
Could anyone elaborate: if metrics are pulled by Heapster, does that mean that after the container terminates, the metrics for the last interval are lost? Can an app push metrics to Heapster instead?
Or is there a recommended approach to collecting metrics from moderately short-lived containers running in Kubernetes?
Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general-purpose monitoring (for which, as you mention, pushing metrics is sometimes critical).
There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
You may want to look at handling this by sending your metrics directly from your job to a Prometheus pushgateway. This is the precise use case it was created for:
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
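For a Go job, pushing to the gateway with the client_golang push package is only a few lines. A minimal sketch; the Pushgateway URL, job name, and metric name are placeholders for your environment:

package main

import (
	"log"

	"github.com/prometheus/client_golang/prometheus"
	"github.com/prometheus/client_golang/prometheus/push"
)

func main() {
	// Example metric recorded by a short-lived batch job.
	completionTime := prometheus.NewGauge(prometheus.GaugeOpts{
		Name: "myjob_last_completion_timestamp_seconds",
		Help: "Unix timestamp of the last completed run.",
	})
	completionTime.SetToCurrentTime()

	// Push once before the container exits, so the sample outlives the pod
	// and Prometheus can scrape it from the Pushgateway later.
	if err := push.New("http://pushgateway.monitoring.svc:9091", "myjob").
		Collector(completionTime).
		Push(); err != nil {
		log.Fatalf("could not push to Pushgateway: %v", err)
	}
}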
Prometheus developer here. If you want to monitor the metrics of applications running on Kubernetes, the approach is to have Prometheus scrape the application directly. Prometheus can auto-discover Kubernetes apps, see http://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
There is no point in involving Heapster if you're using Prometheus, as Prometheus can do everything it does more directly.