Pod incoming request size in Prometheus - Kubernetes

I have an AKS cluster where I have deployed some microservices. I have a Prometheus system connected to that and I'm able to get a lot of metrics.
One metric that is missing for me is the size of the requests that go into each of the deployed microservices.
Do you know how to get that metric in the environment described above?

Related

k8s resources overview planning

We are planning on delivering small k8s clusters to clients with our application on top.
Currently we are struggling to see what resources we actually need. On average we are running 20-30 pods in the system.
While getting resource requests and limits per deployment is not hard, it is hard to get a full view of all requests and limits for all pods that are running in the cluster, at least in an automated way.
Is there a prebuilt dashboard in Grafana or some kind of kubectl command that would collect all of the requests and limits for all pods running in the k8s cluster?
The result should be a "nice" report of all resource requirements.
Since we are delivering a "static" cluster to clients, there are no HPA rules in our clusters.
So far we have done a manual check for each pod and written the results into an Excel table, which is neither time efficient nor repeatable.
Hi skolko, you can use Prometheus for monitoring your Kubernetes cluster. There are various options available, like monitoring individual deployments, monitoring the entire cluster, and monitoring each pod individually. Follow this document for setting up Prometheus monitoring for Kubernetes, and this document for an overview of the metrics available for monitoring.
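As a starting point, even without Prometheus, kubectl alone can dump requests and limits for every pod; a minimal sketch (the column names are my own invention):

  # one row per pod, requests and limits side by side
  kubectl get pods --all-namespaces -o 'custom-columns=NAMESPACE:.metadata.namespace,POD:.metadata.name,CPU_REQ:.spec.containers[*].resources.requests.cpu,MEM_REQ:.spec.containers[*].resources.requests.memory,CPU_LIM:.spec.containers[*].resources.limits.cpu,MEM_LIM:.spec.containers[*].resources.limits.memory'

  # or, if kube-state-metrics is installed, per-namespace totals in PromQL
  # (older kube-state-metrics versions use names like kube_pod_container_resource_requests_cpu_cores instead)
  sum by (namespace) (kube_pod_container_resource_requests{resource="cpu"})
  sum by (namespace) (kube_pod_container_resource_limits{resource="memory"})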

Best practices when trying to implement custom Kubernetes monitoring system

I have two Kubernetes clusters representing dev and staging environments.
Separately, I am also deploying a custom DevOps dashboard which will be used to monitor these two clusters. On this dashboard I will need to show information such as:
RAM/HD Space/CPU usage of each deployed Pod in each environment
Pod health (as in if it has too many container restarts etc)
Pod uptime
All these stats have to be at the cluster level and preferably also per namespace. As in, if I query for a particular namespace, I have to get all the resource usage of that namespace.
So the webservice layer of my dashboard will send a service request to the master node of my respective cluster in order to fetch this information.
Another thing I need is to implement real time notifications in my DevOps dashboard. Every time a container fails, I need to catch that event and notify relevant personnel.
I have been reading around and two things that pop up a lot are Prometheus and Metric Server. Do I need both or will one do? I set up Prometheus on a local cluster but I can't find any endpoints it exposes which could be called by my dashboard service. I'm also trying to set up Prometheus AlertManager but so far it hasn't worked as expected. Trying to fix it now. Just wanted to check if these technologies have the capabilities to meet my requirements.
Thanks!
I don't know why you are considering your own custom monitoring system; the Prometheus Operator provides all the functionality you mentioned.
You would end up with just your own Grafana dashboard with all the required information.
If you need custom notifications, you can set them up in Alertmanager by creating the right prometheusrules.monitoring.coreos.com resources; you can find a lot of preconfigured PrometheusRules in kubernetes-mixin.
Using labels and namespaces in Alertmanager, you can set up a route to notify the person responsible for a given deployment.
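For illustration, a minimal PrometheusRule sketch covering the "too many container restarts" case (the names are my own; kube_pod_container_status_restarts_total comes from kube-state-metrics, and the release label must match your Prometheus ruleSelector):

  apiVersion: monitoring.coreos.com/v1
  kind: PrometheusRule
  metadata:
    name: pod-restart-alerts
    namespace: monitoring
    labels:
      release: prometheus          # assumption: matches the operator's ruleSelector
  spec:
    groups:
      - name: pod-health
        rules:
          - alert: PodRestartingTooOften
            # fires when a container restarted more than 3 times in the last hour
            expr: increase(kube_pod_container_status_restarts_total[1h]) > 3
            for: 5m
            labels:
              severity: warning
            annotations:
              summary: "Pod {{ $labels.namespace }}/{{ $labels.pod }} is restarting too often"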
Do I need both or will one do? Yes, you need both: Prometheus collects, stores and aggregates metrics over time, while metrics-server exposes the current resource usage of your nodes and pods through the Kubernetes Metrics API (this is what kubectl top and the HPA consume).
If you have problems with Prometheus, Alertmanager and so on, consider using the Helm chart as an entry point.
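On the "endpoints which could be called by my dashboard service" point: Prometheus ships with an HTTP query API under /api/v1. A hedged sketch of the per-namespace queries (the service name and port are assumptions; the metrics come from cAdvisor via the kubelet):

  # per-namespace memory working set, as an instant query
  curl -G http://prometheus-server.monitoring.svc:9090/api/v1/query \
    --data-urlencode 'query=sum by (namespace) (container_memory_working_set_bytes)'

  # per-namespace CPU usage over the last 5 minutes
  curl -G http://prometheus-server.monitoring.svc:9090/api/v1/query \
    --data-urlencode 'query=sum by (namespace) (rate(container_cpu_usage_seconds_total[5m]))'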
Prometheus + Grafana are a pretty standard setup.
Installing kube-prometheus or prometheus-operator via Helm will give you Grafana, Alertmanager, node-exporter and kube-state-metrics by default, all set up for Kubernetes metrics.
Configure Alertmanager to do something with the alerts. SMTP is usually the first thing set up, but I would recommend some sort of event manager if this is a service people need to rely on.
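A minimal alertmanager.yml sketch for the SMTP case (all hostnames and addresses below are placeholders):

  global:
    smtp_smarthost: 'smtp.example.com:587'
    smtp_from: 'alertmanager@example.com'
    smtp_auth_username: 'alertmanager'
    smtp_auth_password: 'changeme'
  route:
    receiver: team-email        # default receiver for all alerts
  receivers:
    - name: team-email
      email_configs:
        - to: 'oncall@example.com'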
Although a dashboard isn't part of your requirements, this will inform how you can connect to Prometheus as a data source. There is documentation on adding a Prometheus data source to Grafana.
There are a number of prebuilt dashboards available to add to Grafana, including some to visualise Alertmanager.
Your external service won't be querying the metric sources directly; it will be querying the collected data stored in Prometheus inside your cluster. To access the API externally you will need to set up an external path to the Prometheus service. This can be configured via an ingress controller in the Helm deployment:
prometheus.ingress.enabled: true
You can do the same for the Alertmanager API and Grafana if needed.
alertmanager.ingress.enabled: true
grafana.ingress.enabled: true
You could use Grafana outside the cluster as your dashboard via the same Prometheus ingress if it proves useful.
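For example, with Helm (a sketch; this assumes the prometheus-operator chart, nowadays published as prometheus-community/kube-prometheus-stack, and a release name of my choosing):

  helm install monitoring prometheus-community/kube-prometheus-stack \
    --namespace monitoring --create-namespace \
    --set prometheus.ingress.enabled=true \
    --set alertmanager.ingress.enabled=true \
    --set grafana.ingress.enabled=true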

Deploy service dynamically according to load with Google Kubernetes Engine

I'm currently working on an application deployed with Google Kubernetes Engine. I want to be able to change the behavior of a service if the load on my application reaches a certain point. The idea is to deploy a similar service which consumes fewer resources, so that my application can still work under a bigger load.
Is it possible with Google Kubernetes Engine?
Yes, it can be done with the HPA and custom metrics in Prometheus. We are using this setup to autoscale our deployments based on requests per minute.
Prometheus scrapes this metric from the application and the Prometheus adapter makes it available to Kubernetes.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
https://github.com/DirectXMan12/k8s-prometheus-adapter
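For reference, a hedged sketch of the HPA side (autoscaling/v2; the deployment name and the http_requests_per_minute metric name are assumptions, the latter being whatever your prometheus-adapter config exposes):

  apiVersion: autoscaling/v2
  kind: HorizontalPodAutoscaler
  metadata:
    name: myapp
  spec:
    scaleTargetRef:
      apiVersion: apps/v1
      kind: Deployment
      name: myapp
    minReplicas: 2
    maxReplicas: 10
    metrics:
      - type: Pods
        pods:
          metric:
            name: http_requests_per_minute   # assumed: served by prometheus-adapter
          target:
            type: AverageValue
            averageValue: "500"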

How do you monitor kubernetes nodes deployed using kops?

We have some Kubernetes clusters that have been deployed using kops in AWS.
We really like using the upstream/official images.
We have been wondering whether there is a good way to monitor the systems without installing software directly on the hosts. Are there Docker containers that can extract the information from the host? I think that we are mainly concerned with:
Disk space (this seems to be passed through to Docker via df)
Host CPU utilization
Host memory utilization
Is this host/node-level information already available through Heapster?
Not really a question about kops, but a question about operating Kubernetes. kops stops at the point of having a functional k8s cluster. You have networking, DNS, and nodes have joined the cluster. From there your world is your oyster.
There are many different options for monitoring with k8s. If you are a small team I usually recommend offloading monitoring and logging to a provider.
If you are a larger team or have more specific needs then you can look at such options as Prometheus and others. Poke around in the https://github.com/kubernetes/charts repository, as I know there is a Prometheus chart there.
As with any deployment of any form of infrastructure you are going to need Logging, Monitoring, and Metrics. Also, do not forget to monitor the monitoring ;)
I am using https://prometheus.io/, it goes naturally with Kubernetes.
The Kubernetes API already exposes a bunch of metrics in Prometheus format, https://github.com/kubernetes/ingress-nginx also exposes Prometheus metrics (enable-vts-status: "true"), and you can also install https://github.com/prometheus/node_exporter as a DaemonSet to monitor CPU, disk, etc.
I install one Prometheus inside the cluster to monitor internal metrics and one outside the cluster to monitor LBs and URLs.
Both send alerts to the same https://github.com/prometheus/alertmanager, which MUST be outside the cluster.
It took me about a week to configure everything properly.
It was worth it.
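For reference, the node_exporter DaemonSet mentioned above can be as small as this sketch (image tag, namespace and annotations are assumptions; the prometheus.io/* annotations only take effect if your scrape config honours them):

  apiVersion: apps/v1
  kind: DaemonSet
  metadata:
    name: node-exporter
    namespace: monitoring
  spec:
    selector:
      matchLabels:
        app: node-exporter
    template:
      metadata:
        labels:
          app: node-exporter
        annotations:
          prometheus.io/scrape: "true"
          prometheus.io/port: "9100"
      spec:
        hostNetwork: true    # report the node's own network, not the pod's
        hostPID: true
        containers:
          - name: node-exporter
            image: quay.io/prometheus/node-exporter:v1.3.1
            ports:
              - name: metrics
                containerPort: 9100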

Collecting app-level metrics from Kubernetes containers

According to the Kubernetes Custom Metrics Proposal, containers can expose their app-level metrics in Prometheus format to be collected by Heapster.
Could anyone elaborate: if metrics are pulled by Heapster, does that mean that after the container terminates the metrics for the last interval are lost? Can an app push metrics to Heapster instead?
Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes?
Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical).
There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
You may want to look at handling this by sending your metrics directly from your job to a Prometheus pushgateway. This is the precise use case it was created for:
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
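Pushing is just an HTTP POST of text-format metrics; a minimal sketch from a shell (the job name, metric and Pushgateway address are placeholders):

  # push one metric for job "batch_job", grouped by instance
  echo "my_batch_duration_seconds 42" | \
    curl --data-binary @- http://pushgateway.example.org:9091/metrics/job/batch_job/instance/some_instance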
Prometheus developer here. If you want to monitor the metrics of applications running on Kubernetes, the approach is to have Prometheus scrape the application directly. Prometheus can auto-discover Kubernetes apps, see http://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
There is no point in involving Heapster if you're using Prometheus, as Prometheus can do everything it does more directly.
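A hedged sketch of such a scrape config, using pod annotations to opt workloads in (the prometheus.io/* annotation names are a common community convention, not something built into Prometheus):

  scrape_configs:
    - job_name: kubernetes-pods
      kubernetes_sd_configs:
        - role: pod
      relabel_configs:
        # only scrape pods annotated with prometheus.io/scrape: "true"
        - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
          action: keep
          regex: "true"
        # carry namespace and pod name over as labels
        - source_labels: [__meta_kubernetes_namespace]
          target_label: namespace
        - source_labels: [__meta_kubernetes_pod_name]
          target_label: pod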