Kubernetes autoscale based on internal application parameter - kubernetes

Given a Kubernetes cluster that runs an application in a pod, is there any way to expose an internal parameter of the application (e.g., socket buffer size, concurrent requests in the application, number of items in an application queue, …) and then ask the Kubernetes horizontal/vertical pod autoscaler to scale up or down based on the value of that internal application parameter?

Sure, HPA supports custom metrics. You can push your custom metrics to Prometheus and configure HPA to scale up on those metrics.
There is a good design proposal on how to use HPA custom metrics with Prometheus to scale pods. You can refer to the link below for more details.
https://github.com/kubernetes/community/blob/master/contributors/design-proposals/instrumentation/custom-metrics-api.md
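As a sketch, once an adapter (e.g. prometheus-adapter) exposes your application metric through the custom metrics API, the HPA can target it directly. All names below (`myapp`, `app_queue_length`, the target value) are hypothetical:

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp              # the deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Pods
    pods:
      metric:
        name: app_queue_length     # custom metric your app exports to Prometheus
      target:
        type: AverageValue
        averageValue: "100"        # scale out when the per-pod average exceeds 100
```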

Related

Configure Kubernetes HPA based on active sessions

Is there a way to configure Kubernetes Horizontal Pod Autoscaler based on the sessions which the pod has? For example, we have an application which stores user sessions. I have configured HPA to trigger based on CPU. But the issue is when the HPA scales down, the active session also gets disrupted since the pod is deleted. Is there a custom metric or a way we can configure this?
HPA can make scaling decisions based on custom or externally provided metrics and works automatically after initial configuration. All you need to do is define the MIN and MAX number of replicas.
Once configured, the Horizontal Pod Autoscaler controller is in charge of checking the metrics and then scaling your replicas up or down accordingly. By default, HPA checks metrics every 15 seconds.
To check metrics, HPA depends on another Kubernetes resource known as the Metrics Server. The Metrics Server provides standard resource usage measurement data by capturing data from “kubernetes.summary_api” such as CPU and memory usage for nodes and pods. It can also provide access to custom metrics (that can be collected from an external source) like the number of active sessions on a load balancer indicating traffic volume.
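Such an external metric could be wired into the HPA spec roughly like this (a sketch: the metric name `active_sessions` and the target value are assumptions, and an adapter must expose the metric through the external metrics API):

```yaml
metrics:
- type: External
  external:
    metric:
      name: active_sessions      # hypothetical metric reported by the load balancer
    target:
      type: AverageValue
      averageValue: "500"        # aim for roughly 500 sessions per replica
```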
Try session affinity, which provides a best-effort attempt to send requests from a particular client to the same backend for as long as the backend is healthy and has capacity, according to the configured balancing mode.
When you use session affinity, we recommend the RATE balancing mode rather than UTILIZATION. Session affinity works best if you set the balancing mode to requests per second (RPS).
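On the Kubernetes side, client-IP session affinity can be declared on the Service itself; a minimal sketch (the service name and ports are hypothetical):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: myapp                  # hypothetical service name
spec:
  selector:
    app: myapp
  sessionAffinity: ClientIP    # route a given client IP to the same pod
  sessionAffinityConfig:
    clientIP:
      timeoutSeconds: 10800    # stickiness window (the default is 3 hours)
  ports:
  - port: 80
    targetPort: 8080
```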
Please go through Kubernetes HPA for more information.

Is it possible to start or stop pods based on some events?

Is it possible to start or stop Kubernetes pods based on some events, like a Kafka event?
For example, if there is an event that some work is complete, I want to bring a pod down (or up) based on that. In my case, the minimum replicas of the pods keep running even though they are not required for most of the day.
Pods with Horizontal Pod Autoscaling based on custom metrics are the option you are looking for.
You would instrument your code with custom Prometheus metrics. In your case, that means publishing a metric in Prometheus that reports the number of messages available for processing at a point in time, and then using that custom metric to scale the pods.
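A sketch of what the HPA metrics block could look like for queue-based scaling, assuming a Prometheus metric (here the hypothetical `kafka_messages_ready`) is exposed via the external metrics API:

```yaml
metrics:
- type: External
  external:
    metric:
      name: kafka_messages_ready   # hypothetical: messages waiting to be processed
      selector:
        matchLabels:
          topic: work-items        # hypothetical topic label
    target:
      type: AverageValue
      averageValue: "30"           # target ~30 pending messages per replica
```

Note that a plain HPA always keeps at least `minReplicas` pods running; scaling all the way to zero needs additional tooling.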

Service-level metrics with Prometheus in Kubernetes

I would like to see Kubernetes service-level metrics in Grafana from the underlying Prometheus server.
For instance:
1) If I have 3 application pods exposed through a service, I would like to see service-level metrics for CPU, memory & network I/O pressure, total # of requests, and # of requests failed.
2) Also, if I have a group of pods (replicas) related to an application which doesn't have a Service on top of them, I would like to see the aggregated metrics of those pods in a single view on Grafana.
What would be the Prometheus queries to achieve this?
Service level metrics for CPU, memory & network I/O pressure
If you have Prometheus installed on your Kubernetes cluster, all those statistics are being already collected by Prometheus. There are many good articles about how to install and how to use Kubernetes+Prometheus, try to check that one, as an example.
Here is an example of a query to fetch container memory usage:

```
container_memory_usage_bytes{image="CONTAINER:VERSION"}
```
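Aggregated per-application queries could look like the following sketch (the label is assumed to be `pod`; depending on the Prometheus/cAdvisor version it may be `pod_name`, and `myapp-.*` is a hypothetical pod-name pattern):

```
# CPU usage (cores) summed over all pods of the app
sum(rate(container_cpu_usage_seconds_total{pod=~"myapp-.*"}[5m]))

# Memory usage summed over all pods of the app
sum(container_memory_usage_bytes{pod=~"myapp-.*"})

# Network bytes received per second
sum(rate(container_network_receive_bytes_total{pod=~"myapp-.*"}[5m]))
```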
Total # of requests,# of requests failed
Those are service-level metrics, and for collecting them, you need to use Prometheus Exporter created especially for your service. Check the list with exporters, find one which you need for your service and follow its instruction.
If you cannot find an Exporter for your application, you can write it yourself, here is an official documentation about it.
application which doesn't have Service on top of them would like to see the aggregated metrics of the pods related to that application in a single view on grafana
It is possible to combine any graphs in a single view in Grafana using Dashboards and Panels. Check the official documentation; those topics are covered in detail and are easy to understand.
Aggregation can be done by Prometheus itself by aggregation operations.
All metrics from Kubernetes have labels, so you can group by them:

```
sum(http_requests_total) by (application, group)
```

where `application` and `group` are labels.
Also, here is an official Prometheus instruction about how to add Prometheus to Grafana as a Datasource.

custom metrics for horizontal pod autoscaler in OpenShift

I am using OpenShift v3, which uses Kubernetes version 1.2. I am exploring the autoscaling feature in more depth.
Currently, it says only CPU metrics are supported.
Is there a way pods in OpenShift can be scaled based on memory or other metrics data collected from Heapster?
As you mentioned, OpenShift v3 uses multiple components of Kubernetes.
The official documentation of both Kubernetes and OpenShift
talks about autoscaling on CPU. (Kubernetes 1.2 adds alpha support for scaling based on application-specific metrics like QPS.)
Autoscaling on memory wasn't released in the initial version of horizontal pod autoscaling because it does not work in the right way.
Memory consumption of pods usually never shrinks, so, adding a new pod will not decrease memory consumption of the old pods.
That's why Kubernetes isn't supporting autoscaling on memory usage at the moment.
They are talking about it as a possible feature:
[future] Autoscale pods based on metrics different than CPU (e.g. memory, network traffic, qps). This includes scaling based on a custom/application metric.
or other metrics data collected from heapster
From the announcements of Kubernetes 1.12, this should be now (Q4 2018) supported (albeit still in Beta).
Arbitrary / Custom Metrics in the Horizontal Pod Autoscaler is moving to a second beta (autoscaling/v2beta2) to test some additional feature enhancements.
This reworked Horizontal Pod Autoscaler functionality includes support for custom metrics and status conditions.
See kubernetes feature 117 and commit 9d84a49, and the new Horizontal Pod Autoscaler Walkthrough page update.
It introduces the notion of labels.
Autoscaling on more specific metrics
Many metrics pipelines allow you to describe metrics either by name or by a set of additional descriptors called labels.
For all non-resource metric types (pod, object, and external, described below), you can specify an additional label selector which is passed to your metric pipeline.
For instance, if you collect a metric http_requests with the verb label, you can specify the following metric block to scale only on GET requests:
```yaml
type: Object
object:
  metric:
    name: http_requests
    selector:
      matchLabels:
        verb: GET
```
This selector uses the same syntax as the full Kubernetes label selectors.
The monitoring pipeline determines how to collapse multiple series into a single value, if the name and selector match multiple series.
The selector is additive, and cannot select metrics that describe objects that are not the target object (the target pods in the case of the Pods type, and the described object in the case of the Object type).
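In context, a complete autoscaler using such an Object metric might look like this sketch (the object names, target value, and the Ingress reference are assumptions modeled on the walkthrough):

```yaml
apiVersion: autoscaling/v2beta2
kind: HorizontalPodAutoscaler
metadata:
  name: myapp-hpa                    # hypothetical
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: myapp
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Object
    object:
      describedObject:
        apiVersion: extensions/v1beta1
        kind: Ingress
        name: main-route             # the object the metric describes
      metric:
        name: http_requests
        selector:
          matchLabels:
            verb: GET                # only GET traffic drives scaling
      target:
        type: Value
        value: "10k"
```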

Collecting app-level metrics from Kubernetes containers

According to the Kubernetes Custom Metrics Proposal, containers can expose their app-level metrics in Prometheus format to be collected by Heapster.
Could anyone elaborate, if metrics are pulled by Heapster that means after the container terminates metrics for the last interval are lost? Can app push metrics to Heapster instead?
Or, is there a recommended approach to collect metrics from moderately short-lived containers running in Kubernetes?
Not to speak for the original author's intent, but I believe that proposal is primarily focused on custom metrics that you want to use for things like scheduling and autoscaling within the cluster, not for general purpose monitoring (for which as you mention, pushing metrics is sometimes critical).
There isn't a single recommended pattern for what to do with custom metrics in general. If your environment has a preferred monitoring stack or vendor, a common approach is to run a second container in each pod (a "sidecar" container) to push relevant metrics about the main container to your monitoring backend.
You may want to look at handling this by sending your metrics directly from your job to a Prometheus pushgateway. This is the precise use case it was created for:
The Prometheus Pushgateway exists to allow ephemeral and batch jobs to expose their metrics to Prometheus. Since these kinds of jobs may not exist long enough to be scraped, they can instead push their metrics to a Pushgateway. The Pushgateway then exposes these metrics to Prometheus.
Prometheus developer here. If you want to monitor the metrics of applications running on Kubernetes, the approach is to have Prometheus scrape the application directly. Prometheus can auto-discover Kubernetes apps, see http://prometheus.io/docs/operating/configuration/#<kubernetes_sd_config>
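A minimal scrape configuration using Kubernetes service discovery might look like this sketch (the job name is hypothetical, and the annotation-based keep rule is a common convention, not a requirement):

```yaml
scrape_configs:
- job_name: kubernetes-pods          # hypothetical job name
  kubernetes_sd_configs:
  - role: pod                        # discover every pod in the cluster
  relabel_configs:
  # scrape only pods annotated prometheus.io/scrape: "true"
  - source_labels: [__meta_kubernetes_pod_annotation_prometheus_io_scrape]
    action: keep
    regex: "true"
```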
There is no point in involving Heapster if you're using Prometheus, as Prometheus can do everything it does more directly.