How to get network information from metrics server - kubernetes

I know how to get CPU and memory information (limit, request, usage) from metrics-server.
But I don't know how to get network information (memory distribution, network traffic in KBps, network utilization) from metrics-server.
Does metrics-server simply not provide network information?
Is there another way?
I need to develop this in Python.

Metrics Server does not export these kinds of metrics, but the Prometheus Node Exporter does. Node Exporter runs as a systemd service (or as a DaemonSet on Kubernetes) and gathers the metrics of your system each time Prometheus scrapes it.
The Prometheus Node Exporter exposes a wide variety of hardware- and kernel-related metrics.
You can check the full list of collectors here. You may also want to check this article about monitoring the network with Prometheus.
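Since the question mentions Python, here is a minimal sketch of reading those network metrics once Node Exporter is scraped by Prometheus, via the Prometheus HTTP API; the Prometheus URL is an assumption, and requests is the only dependency:

import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # assumed endpoint

def query_prometheus(promql):
    # Run an instant PromQL query and return the result vector.
    resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": promql})
    resp.raise_for_status()
    return resp.json()["data"]["result"]

# Per-interface receive throughput in KBps, averaged over 5 minutes.
# node_network_receive_bytes_total is a standard Node Exporter metric.
for sample in query_prometheus("rate(node_network_receive_bytes_total[5m]) / 1024"):
    print(sample["metric"].get("device"), sample["value"][1], "KBps")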

Related

Kubernetes pod metrics through kubernetes-client library

I want to fetch the Kubernetes CPU utilization and memory usage data points for the last 7 days using a kubernetes-client library for Node.js.
I'm using GoDaddy's kubernetes-client library.
https://github.com/godaddy/kubernetes-client
Which function should I call to get pod metrics?
You shouldn't try to query things like CPU utilization and memory usage from the Kubernetes API.
Instead, you should query cAdvisor for that. It has a Go client you can use to query the metrics directly, but it's more conventional to install Prometheus on your cluster and query the metrics from there.
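As a hedged sketch of that last suggestion: once Prometheus is scraping cAdvisor, a range query against its HTTP API returns historical data points (the same HTTP call works from Node.js; Python is used here for brevity). The Prometheus URL, namespace and step are assumptions:

import time
import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # assumed endpoint

end = time.time()
start = end - 7 * 24 * 3600  # the last 7 days

resp = requests.get(
    f"{PROMETHEUS_URL}/api/v1/query_range",
    params={
        # container_cpu_usage_seconds_total is a cAdvisor metric;
        # depending on your setup the label may be pod or pod_name
        "query": 'sum(rate(container_cpu_usage_seconds_total{namespace="default"}[5m])) by (pod)',
        "start": start,
        "end": end,
        "step": "1h",  # one data point per hour
    },
)
resp.raise_for_status()
for series in resp.json()["data"]["result"]:
    print(series["metric"].get("pod"), len(series["values"]), "samples")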

Kubernetes - Monitoring pod IO

I would like to monitor the IO my pod is doing. Using commands like 'kubectl top pods/nodes', I can monitor CPU and memory. But I am not sure how to monitor the IO my pod is doing, especially disk IO.
Any suggestions?
Since you have already used the kubectl top command, I assume you have Metrics Server installed. For a more advanced monitoring solution I would suggest cAdvisor, Prometheus or Elasticsearch.
For getting started with Prometheus you can check this article.
Elasticsearch has System diskio and Docker diskio metric sets. You can easily deploy it using a Helm chart.
Part 3 of the series about Kubernetes monitoring is especially focused on monitoring container metrics using cAdvisor, although the whole series is worth checking.
Let me know if this helps.
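If you go the cAdvisor + Prometheus route, disk IO shows up as the container_fs_reads_bytes_total / container_fs_writes_bytes_total counters. A minimal sketch, assuming Prometheus is reachable at the URL below:

import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # assumed endpoint

# Bytes written per second by each pod, averaged over 5 minutes.
query = 'sum(rate(container_fs_writes_bytes_total[5m])) by (pod)'
resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
resp.raise_for_status()
for sample in resp.json()["data"]["result"]:
    print(sample["metric"].get("pod"), float(sample["value"][1]), "bytes/s written")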

Getting custom metrics from Kubernetes pods

I was looking into Kubernetes Heapster and Metrics Server for getting metrics from the running pods. But the issue is that I need some custom metrics which might vary from pod to pod, and apparently Heapster only provides CPU- and memory-related metrics. Is there any tool already out there which provides the functionality I want, or do I need to build one from scratch?
What you're looking for is application- and infrastructure-specific metrics. For this, the TICK stack could be helpful! Specifically, Telegraf can be set up to gather detailed infrastructure metrics like memory and CPU pressure, or even the resources used by individual Docker containers, network and IO metrics, and so on. But it can also scrape Prometheus metrics from pods, as in the sketch below. These metrics are then shipped to InfluxDB and visualized using either Chronograf or Grafana.
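To make the "scrape Prometheus metrics from pods" part concrete, here is a minimal sketch of an application exposing custom metrics in Prometheus format with the official prometheus_client library; the metric names and port are illustrative, and Telegraf's Prometheus input (or Prometheus itself) can then scrape the endpoint:

import random
import time
from prometheus_client import Counter, Gauge, start_http_server

REQUESTS = Counter("myapp_requests_total", "Total requests handled")
QUEUE_DEPTH = Gauge("myapp_queue_depth", "Items waiting in the work queue")

if __name__ == "__main__":
    start_http_server(8000)  # metrics served at :8000/metrics
    while True:
        REQUESTS.inc()
        QUEUE_DEPTH.set(random.randint(0, 10))  # stand-in for real app state
        time.sleep(1)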
Not sure if this is still open.
I would classify metrics into three types.
Events or logs - system and application events which are sent to logs. These are non-deterministic.
Metrics - CPU and memory utilization on the node where the app is hosted. These are deterministic and collected periodically.
APM (Application Performance Monitoring) metrics - application-level metrics like requests received vs. failed vs. responded, etc.
Not all platforms do everything. ELK, for instance, does both metrics and log monitoring but does not do APM. Some of these tools have plugins for collector daemons that gather performance metrics from the node.
APM is a completely different area, as it requires developer tooling to provide the metrics, as Spring Boot does with Actuator and Node.js does with appmetrics. This carries request-level data. StatsD is an open-source library which applications can use to send APM metrics to StatsD agents installed on the node, as in the example below.
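As a hedged illustration of that StatsD path, here is what sending APM-style counters and timings looks like with the open-source statsd Python package; the agent address and metric names are assumptions:

import statsd

client = statsd.StatsClient("localhost", 8125)  # StatsD agent on the node

client.incr("myapp.requests.received")  # counter: one more request seen
client.incr("myapp.requests.failed")    # counter: one more failure
client.timing("myapp.response_ms", 42)  # timer: response took 42 ms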
AWS offers CloudWatch agents for log shipping, and X-Ray for distributed tracing, which can be used for APM.

Service-level metrics from Prometheus in k8s

I would like to see k8s service-level metrics in Grafana from the underlying Prometheus server.
For instance:
1) If I have 3 application pods exposed through a Service, I would like to see service-level metrics for CPU, memory & network I/O pressure, the total # of requests and the # of failed requests.
2) Also, if I have a group of pods (replicas) belonging to an application which doesn't have a Service on top of them, I would like to see the aggregated metrics of those pods in a single view in Grafana.
What would be the Prometheus queries to achieve this?
Service level metrics for CPU, memory & network I/O pressure
If you have Prometheus installed on your Kubernetes cluster, all those statistics are already being collected by Prometheus. There are many good articles about how to install and use Kubernetes + Prometheus; try that one, as an example.
Here is an example of a query to fetch container memory usage:
container_memory_usage_bytes{image="CONTAINER:VERSION"}
Total # of requests,# of requests failed
Those are service-level metrics, and to collect them you need to use a Prometheus exporter created especially for your service. Check the list of exporters, find the one you need for your service and follow its instructions.
If you cannot find an exporter for your application, you can write one yourself; here is the official documentation about writing exporters.
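A minimal sketch of such a hand-written exporter, using prometheus_client's custom collector interface: collect() runs on every scrape, translating your service's state into metrics on demand. fetch_queue_depth() is a hypothetical stand-in for a real call into your service:

import time
from prometheus_client import start_http_server
from prometheus_client.core import GaugeMetricFamily, REGISTRY

def fetch_queue_depth():
    return 3.0  # placeholder: ask your service for its real queue depth

class MyServiceCollector:
    def collect(self):
        g = GaugeMetricFamily("myservice_queue_depth", "Jobs waiting in the service queue")
        g.add_metric([], fetch_queue_depth())
        yield g

if __name__ == "__main__":
    REGISTRY.register(MyServiceCollector())
    start_http_server(9101)  # expose /metrics for Prometheus to scrape
    while True:
        time.sleep(60)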
application which doesn"t have Service on top of them would like to see the aggregated metrics of the pods related to that application in a single view on grafana
It is possible to combine any graphs in a single view in Grafana using Dashboards and Panels. Check the official documentation; those topics are covered in detail and are easy to understand.
Aggregation can be done by Prometheus itself with its aggregation operators.
All metrics from Kubernetes have labels, so you can group by them:
sum(http_requests_total) by (application, group), where application and group are labels.
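For the second case (replicas with no Service), one workable sketch is to select the pods by name, since pods created by a Deployment share a name prefix; the Prometheus endpoint and the "myapp" prefix are assumptions:

import requests

PROMETHEUS_URL = "http://prometheus.example.com:9090"  # assumed endpoint

# Aggregate CPU across every replica of the "myapp" Deployment,
# whose pods are named myapp-<hash>-<suffix>.
query = 'sum(rate(container_cpu_usage_seconds_total{pod=~"myapp-.*"}[5m]))'
resp = requests.get(f"{PROMETHEUS_URL}/api/v1/query", params={"query": query})
print(resp.json()["data"]["result"])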
Also, here is the official Prometheus instruction about how to add Prometheus to Grafana as a data source.

Collecting app-level metrics from Kubernetes containers

According to the Kubernetes Custom Metrics Proposal, containers can expose their app-level metrics in Prometheus format to be collected by Heapster.
Could anyone elaborate: if metrics are pulled by Heapster, does that mean the metrics for the last interval are lost when the container terminates? Can the app push metrics to Heapster instead?
Or is there a recommended approach to collecting metrics from moderately short-lived containers running in Kubernetes?
Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical).
There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
You may want to look at handling this by sending your metrics directly from your job to a Prometheus Pushgateway. This is the precise use case it was created for:
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
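A minimal push sketch with the official prometheus_client library, assuming a Pushgateway at the address below (push_to_gateway replaces the metrics stored for the given job on each call):

from prometheus_client import CollectorRegistry, Gauge, push_to_gateway

registry = CollectorRegistry()
g = Gauge(
    "job_last_success_unixtime",
    "Last time the batch job finished successfully",
    registry=registry,
)
g.set_to_current_time()

# Pushgateway address is an assumption; use your deployment's address.
push_to_gateway("pushgateway.example.com:9091", job="my_batch_job", registry=registry)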
Prometheus developer here. If you want to monitor the metrics of applications running on Kubernetes, the approach is to have Prometheus scrape the application directly. Prometheus can auto-discover Kubernetes apps, see http://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
There is no point in involving Heapster if you're using Prometheus, as Prometheus can do everything it does more directly.