Google Kubernetes Engine external autoscaling

I need to create a custom autoscaler in Google Kubernetes Engine.
I am developing software that writes time-series data to a custom metric in Google Cloud Monitoring, and I want the Kubernetes autoscaler to use this data to manage the number of replicas.
I have read a lot of documentation, but I still have some doubts:
Is it mandatory to deploy the Custom Metrics Stackdriver Adapter? If so, why?
What is the difference between "target total value" and "target average value", and how do they work?
I saw that it is possible to use more than one metric; how does the autoscaler behave with two or more metrics?
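For reference, a sketch of what such an HPA could look like with the autoscaling/v2 API, assuming the Custom Metrics Stackdriver Adapter is deployed (the adapter is what serves the external metrics API that the HPA controller queries, which is why it is required; the metric and Deployment names below are hypothetical):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa        # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app          # hypothetical Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: External
    external:
      metric:
        name: custom.googleapis.com|my_queue_depth   # hypothetical custom metric
      target:
        type: AverageValue     # divide the metric value by the current replica count
        averageValue: "100"
```

On the other questions: a target of type Value compares the metric against the total across all pods, while AverageValue divides the metric by the current replica count before comparing. When several metrics are listed, the HPA computes a desired replica count for each metric separately and uses the largest.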

Sizing resources with Kubernetes for GIS application

I am thinking of using the stack GeoServer, PostGIS, OpenLayers, and ReactJS for my GIS project. I also plan to deploy this solution on Kubernetes on AWS.
Questions:
Assume the traffic will be 100 requests/s, growing to 1000 requests/s.
What are the minimum resources (vCPU, RAM) for:
- Each Kubernetes node
- Each GeoServer pod
- PostGIS
Is there any formula I can apply to get that result?
Thank you in advance
Lp Ccmu
Not really. It all depends on the footprint of all the different components of your specific application. I suggest you start small, gather a lot of metrics, and adjust.
Either grow or shrink depending on what you see in your metrics, and make use of Kubernetes autoscaling tools like HPAs and the cluster autoscaler.
You can gather metrics using the AWS tools or something like Prometheus. There are many resources available on the web about using Prometheus to gather Kubernetes metrics.
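In practice, "start small and adjust" means putting explicit resource requests and limits on each workload, which the HPA and cluster autoscaler then work against. A purely illustrative starting point for a GeoServer Deployment (the image tag and all resource values are assumptions to tune against your metrics, not recommendations):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: geoserver             # hypothetical name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: geoserver
  template:
    metadata:
      labels:
        app: geoserver
    spec:
      containers:
      - name: geoserver
        image: kartoza/geoserver:2.20.4   # hypothetical image/tag
        resources:
          requests:           # what the scheduler reserves per pod
            cpu: "500m"
            memory: 1Gi
          limits:             # hard cap; the pod is throttled/OOM-killed beyond this
            cpu: "2"
            memory: 2Gi
```

Starting with modest requests and generous limits, then tightening both as real usage data accumulates, is a common way to converge on the right sizing.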
Not really. For GeoServer it depends on the type of data and the size and complexity of the data sets, as well as the styling you are applying.
You can integrate APM with Elasticsearch and Kibana to have the React app send metrics about API endpoint requests and page hits to monitor traffic. Based on that data, you can adjust your deployment's resources.
See this post about running a GIS stack on Kubernetes:
https://link.medium.com/r645NGwpejb

Is GKE included by default in the Anthos solution? Getting Anthos metrics

I have a cluster with 7 nodes and many services, nodes, etc. on Google Cloud Platform. I'm trying to get some metrics with Stackdriver Legacy, so in the Google Cloud Console -> Stackdriver -> Metrics Explorer I can see the full set of Anthos metrics listed, but when I try to create a chart based on those metrics, no data is shown; the only response I get in the panel is "no data is available for the selected time frame", even after changing the time frame.
Is it right to think that with Anthos metrics I can retrieve information about my cronjobs, pods, and services, such as failed initializations and job failures? And if so, can I do it with Stackdriver Legacy, or do I need to upgrade to Stackdriver Kubernetes Engine Monitoring?
The Anthos solution includes what's called GKE On-Prem. I'd take a look at the instructions for using logging and monitoring on GKE On-Prem. Stackdriver monitors GKE On-Prem clusters in a similar way to cloud-based GKE clusters.
However, there's a note saying that currently Stackdriver collects only cluster logs and system component metrics; the full Kubernetes monitoring experience will be available in a future release.
You can also check that you've met all the configuration requirements.

Deploy a service dynamically according to load with Google Kubernetes Engine

I'm currently working on an application deployed with Google Kubernetes Engine. I want to be able to change the behavior of a service if the load on my application reaches a certain point. The idea is to deploy a similar service which consumes fewer resources, so that my application can still work under a bigger load.
Is this possible with Google Kubernetes Engine?
Yes, it can be done with an HPA and custom metrics from Prometheus. We are using this setup to autoscale our deployments based on requests per minute.
Prometheus scrapes this metric from the application, and the Prometheus adapter makes it available to Kubernetes.
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale/
https://github.com/DirectXMan12/k8s-prometheus-adapter
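A sketch of that setup on the HPA side (the metric name is hypothetical and assumes the Prometheus adapter has been configured to expose it through the custom metrics API):

```yaml
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-service-hpa      # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-service        # hypothetical Deployment
  minReplicas: 1
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: http_requests_per_minute   # hypothetical metric served by the adapter
      target:
        type: AverageValue
        averageValue: "600"  # scale so each pod handles ~600 requests/minute
```

A Pods-type metric is averaged across the current replicas, so the HPA adds pods whenever the per-pod request rate exceeds the target average.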

Service-level metrics from Prometheus in Kubernetes

I would like to see Kubernetes service-level metrics in Grafana from the underlying Prometheus server.
For instance:
1) If I have 3 application pods exposed through a Service, I would like to see service-level metrics for CPU, memory & network I/O pressure, the total number of requests, and the number of failed requests.
2) Also, if I have a group of pods (replicas) belonging to an application which doesn't have a Service on top of them, I would like to see the aggregated metrics of those pods in a single view in Grafana.
What would be the Prometheus queries to achieve this?
Service level metrics for CPU, memory & network I/O pressure
If you have Prometheus installed on your Kubernetes cluster, all those statistics are already being collected by Prometheus. There are many good articles about how to install and use Kubernetes with Prometheus.
Here is an example of a query to fetch container memory usage:
container_memory_usage_bytes{image="CONTAINER:VERSION"}
Total # of requests,# of requests failed
Those are service-level metrics; to collect them, you need a Prometheus exporter created specifically for your service. Check the list of exporters, find the one you need, and follow its instructions.
If you cannot find an exporter for your application, you can write one yourself; there is official documentation about that.
application which doesn't have a Service on top of them would like to see the aggregated metrics of the pods related to that application in a single view on grafana
It is possible to combine any graphs in a single view in Grafana using dashboards and panels. Check the official documentation; all those topics are pretty detailed and easy to understand.
Aggregation can be done by Prometheus itself using aggregation operators.
All metrics from Kubernetes have labels, so you can group by them:
sum(http_requests_total) by (application, group), where application and group are labels.
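As an illustration, per-application aggregates could look like the following (the myapp pod-name pattern is an assumption about your naming; the metric names are the standard cAdvisor metrics scraped from Kubernetes nodes):

```
# Total CPU usage (cores) across all pods of the application
sum(rate(container_cpu_usage_seconds_total{pod=~"myapp-.*"}[5m]))

# Total memory working set across the same pods
sum(container_memory_working_set_bytes{pod=~"myapp-.*"})

# Network receive throughput (bytes per second)
sum(rate(container_network_receive_bytes_total{pod=~"myapp-.*"}[5m]))
```

Each query can be its own Grafana panel, giving the single aggregated view per application that you asked about, even without a Service in front of the pods.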
Also, there is an official Prometheus instruction on how to add Prometheus to Grafana as a data source.

Is it possible to set up Stackdriver Monitoring for Kubernetes on Google Compute Engine?

I'm running Kubernetes myself on Google Compute Engine (not Google Container Engine). Google Container Engine has built-in integration with Stackdriver Monitoring and I'm wondering if it's possible to set this up for a Kubernetes cluster on Google Compute Engine.
Specifically, I'd like to see more than just cpu, disk, etc. I want to see Kubernetes data like pod scheduling failures, pod counts, etc.
It is not possible to configure Stackdriver exactly the same way that it is done in GKE.
However, you can set ENABLE_CLUSTER_MONITORING to google in config-default.sh to enable Heapster and Google Cloud Monitoring.
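Concretely, when bringing up the cluster with the open-source kube-up.sh scripts, this variable lives in cluster/gce/config-default.sh and can also be overridden from the environment (a sketch based on those scripts):

```shell
# In kubernetes/cluster/gce/config-default.sh (default shown):
ENABLE_CLUSTER_MONITORING="${KUBE_ENABLE_CLUSTER_MONITORING:-google}"

# Or override at cluster creation time:
KUBE_ENABLE_CLUSTER_MONITORING=google ./cluster/kube-up.sh
```

With the "google" backend enabled, Heapster pushes cluster metrics to Google Cloud Monitoring, which gets you pod-level data beyond the plain CPU/disk metrics of the VMs.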