GKE Cluster autoscaler profile for older cluster - kubernetes

Now in GKE there is a new tab when creating a new K8s cluster:
Automation - Set cluster-level criteria for automatic maintenance, autoscaling, and auto-provisioning. Edit the node pool for automation like auto-scaling, auto-upgrades, and repair.
It has two options - Balanced (default) & Optimize utilization (beta).
Can't we set this for an older cluster? Is there any workaround?
We are running old GKE version 1.14 and we want to auto-scale the cluster when resource utilization of the existing nodes reaches 70%.
Currently, we have 2 different node pools - only one has auto node provisioning enabled, but during peak hours, if the HPA scales Pods, a new node takes some time to join the cluster and sometimes existing nodes start crashing due to resource pressure.

You can set the autoscaling profile by going into:
GCP Cloud Console (Web UI) -> Kubernetes Engine -> CLUSTER-NAME -> Edit -> Autoscaling profile
This option is available on GKE version 1.14.10-gke.50.
You can also run:
gcloud beta container clusters update CLUSTER-NAME --autoscaling-profile optimize-utilization
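If you later want to revert to the default, the same command accepts the balanced profile (replace CLUSTER-NAME with your cluster):
gcloud beta container clusters update CLUSTER-NAME --autoscaling-profile balanced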
The official documentation states:
You can specify which autoscaling profile to use when making such decisions. The currently available profiles are:
balanced: The default profile.
optimize-utilization: Prioritize optimizing utilization over keeping spare resources in the cluster. When selected, the cluster autoscaler scales down the cluster more aggressively: it can remove more nodes, and remove nodes faster. This profile has been optimized for use with batch workloads that are not sensitive to start-up latency. We do not currently recommend using this profile with serving workloads.
-- Cloud.google.com: Kubernetes Engine: Cluster autoscaler: Autoscaling profiles
This setting (optimize-utilization) might not be the best option for serving workloads. It will try to scale down (remove nodes) more aggressively, which reduces the amount of spare resources available in your cluster and makes it more vulnerable to workload spikes.
Answering the part of the question:
We are running old GKE version 1.14 and we want to auto-scale the cluster when resource utilization of the existing nodes reaches 70%.
As stated in the documentation:
Cluster autoscaler increases or decreases the size of the node pool automatically, based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:
If Pods are unschedulable because there are not enough nodes in the node pool, cluster autoscaler adds nodes, up to the maximum size of the node pool.
-- Cloud.google.com: Kubernetes Engine: Cluster autoscaler: How cluster autoscaler works
You can't directly scale the cluster based on a percentage of resource utilization (70%).
The autoscaler acts on the cluster's inability to schedule Pods on the currently existing nodes.
You can scale the number of replicas of your Deployment by CPU usage with the Horizontal Pod Autoscaler. These Pods could have a buffer to handle an increased amount of traffic; after a specific threshold they could spawn new Pods, for which the CA (Cluster Autoscaler) would request a new node (if the new Pods are unschedulable). This buffer would be the mechanism that prevents sudden spikes the application couldn't otherwise handle.
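For illustration, a minimal HPA manifest targeting 70% CPU utilization could look roughly like this (the Deployment name is a placeholder, and the percentage is measured against the Pods' CPU requests, not node utilization):
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa                       # placeholder name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app                         # placeholder Deployment to scale
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 70     # percentage of the Pods' CPU requests
When the new replicas cannot be scheduled on the existing nodes, the cluster autoscaler adds a node (up to the node pool's maximum size).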
The buffer part and over-provisioning are explained in detail in:
Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke: Autoscaler and over-provisioning
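As a rough sketch of the over-provisioning pattern from that guide (all names and sizes below are placeholders): a low-priority "pause" Deployment reserves spare capacity, real workloads preempt it during a spike, and the evicted buffer Pods then trigger the cluster autoscaler to bring up a replacement node ahead of demand.
apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning                 # placeholder name
value: -10                               # lower than the default (0), so these Pods are evicted first
globalDefault: false
description: "Placeholder priority class for over-provisioning Pods."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: overprovisioning                 # placeholder name
spec:
  replicas: 2                            # size of the buffer
  selector:
    matchLabels:
      app: overprovisioning
  template:
    metadata:
      labels:
        app: overprovisioning
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: k8s.gcr.io/pause:3.1
        resources:
          requests:
            cpu: 500m                    # CPU reserved by each buffer Pod
            memory: 512Mi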
There is extensive documentation about running cost-effective apps on GKE:
Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke
I encourage you to check the above link, as it contains a lot of tips and insights on scaling, over-provisioning, workload spikes, HPA, VPA, etc.
Additional resources:
Cloud.google.com: Kubernetes Engine: Node auto provisioning

Related

GKE node pool doesn't scale up

I have a GKE cluster which doesn't scale up when a particular deployment needs more resources.
I've checked the cluster autoscaler logs and it has entries with this error:
no.scale.up.nap.pod.zonal.resources.exceeded. The documentation for this error says:
Node auto-provisioning did not provision any node group for the Pod in
this zone because doing so would violate resource limits.
I don't quite understand which resource limits are mentioned in the documentation and why they prevent the node pool from scaling up.
If I scale the cluster up manually, the deployment's pods are scheduled and everything works as expected, so it doesn't seem to be a problem with project quotas.
Limits for clusters that you define are enforced based on the total CPU and memory resources used across your cluster, not just auto-provisioned pools.
If you are not using node auto-provisioning (NAP), disable the node auto-provisioning feature for the cluster.
If you are using NAP, update the cluster-wide resource limits defined in NAP for the cluster.
As a workaround, you can also try specifying the machine type explicitly in the workload spec. Make sure to use a machine family supported by GKE node auto-provisioning.
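For example (the cluster name, limits, and machine family below are placeholders, not values from the question), the NAP cluster-wide limits can be raised with:
gcloud container clusters update CLUSTER-NAME --enable-autoprovisioning --min-cpu 1 --max-cpu 64 --min-memory 1 --max-memory 256
and, assuming your GKE version supports the machine-family selector for NAP, a machine family can be requested from the workload spec with a node selector:
  nodeSelector:
    cloud.google.com/machine-family: n2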

Specifying memory in Kubernetes pods for deployment of Docker image

I am exploring setting up a Kubernetes cluster and deploying into it using Jenkins via a CI/CD pipeline. While exploring, I found that we don't need to define the worker node where our pods should be deployed; the Kubernetes master takes care of choosing which worker machine a pod is deployed to. We only need to define how much memory that pod needs in its definition.
My confusion is this: we have already assigned and configured the Kubernetes cluster for deployment, and each node has its own memory according to how the AWS EC2 instances were created (I am planning to use AWS EC2 - Ubuntu 16.04 LTS).
So why do we need to define memory for the pod again? Is that the proper way to deploy pods?
I have only just started in the CI/CD pipeline world.
Specifying memory and CPU in the pod specification is completely optional. Still, there are a couple of reasons to specify memory and CPU at the pod level:
As explained here, if you don't specify CPU/memory, the pod/container can consume all resources on its node and potentially affect other pods/containers running on that node.
Each application should specify the memory and CPU it needs to run. This information is used by Kubernetes when scheduling the pod onto a node in the cluster that has enough resources available, and it leads to better scheduling decisions.
It enables the Horizontal Pod Autoscaler (HPA) to scale the pods when resource consumption goes beyond a certain threshold. The details are explained in this doc. Unless a memory/CPU request is specified, you cannot calculate that the pod is running at 80% of that metric and should be scaled out to more replicas.
You can also set defaults at the namespace level (with a LimitRange) and only override them for specific applications; details here.
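For illustration (the names and values below are placeholders), requests and limits are set per container in the Pod spec:
apiVersion: v1
kind: Pod
metadata:
  name: my-app                           # placeholder name
spec:
  containers:
  - name: app
    image: nginx:1.21                    # placeholder image
    resources:
      requests:                          # used by the scheduler to pick a node, and by the HPA as its baseline
        cpu: 250m
        memory: 256Mi
      limits:                            # hard cap: CPU is throttled, memory overuse gets the container OOM-killed
        cpu: 500m
        memory: 512Mi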

Kubernetes automatic shutdown after some idle time

Does Kubernetes or Helm support shutting down pods when they have been idle for more than a given threshold time?
This would be very useful in a development environment, to free up room for other processes and save cost.
Kubernetes has the ability to autoscale your application in a cluster: it can start additional pods when the load is increasing and terminate excess pods when the load is decreasing.
It is possible to downscale the application to zero pods, but, in this case, you will have a delay serving the first request while a pod is starting.
This functionality relies on performance metrics provided by the Heapster application, which must be running in the cluster. From a practical standpoint, autoscaling doesn't happen instantly, because it takes some time for the performance metrics to reach the configured threshold.
The Kubernetes feature mentioned above, called HPA (Horizontal Pod Autoscaler), is described in this document.
If you are running your cluster on GCP with GKE, you can go further and automatically start additional nodes for your cluster when you need more computing capacity, and shut down nodes when they are no longer running application pods.
More information about this functionality can be found by following the link.
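On GKE, enabling node autoscaling for a node pool looks roughly like this (cluster name, pool name, and bounds are placeholders):
gcloud container clusters update CLUSTER-NAME --node-pool POOL-NAME --enable-autoscaling --min-nodes 0 --max-nodes 5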
If you decide to give it a try, you might find this information useful:
Creating a Container cluster in GKE
70% cheaper Kubernetes cluster on AWS
How to build a Kubernetes Horizontal Pod Autoscaler using custom metrics

How do multiple replicas/pods scale Kubernetes?

From what I understand, using multiple replicas as well as auto-scaling is supposed to help in the case that lots of people visit your website and make calls to services provided by your Kubernetes cluster.
How do the replicas help with scaling?
Aren't these extra pods all just running on the same computer with constant resources? That would mean that they're all limited by a constant amount of CPU and memory.
Kubernetes has a couple of scaling mechanisms, the Horizontal Pod Autoscaler being the basic one, but not the only one.
With HPA you can spin up additional Pods according to some metrics (most commonly CPU and memory). At some point you will hit a moment when your cluster nodes do not have enough resources to satisfy the resource requirements of your Pods (you will have Pods in Pending state due to a lack of nodes available for scheduling).
At that point the Cluster Autoscaler can kick in and, for example, scale an AWS ASG (or some other cloud node pool) to add a new node to the cluster and make space for the pending Pod(s).
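Also, the replicas are not necessarily on the same computer: the scheduler places them on any node with enough free resources, and (going beyond the answer above) you can explicitly encourage spreading them across nodes with Pod anti-affinity in the Deployment's Pod template, for example (placeholder label):
    affinity:
      podAntiAffinity:
        preferredDuringSchedulingIgnoredDuringExecution:
        - weight: 100
          podAffinityTerm:
            labelSelector:
              matchLabels:
                app: my-app                          # placeholder label shared by the replicas
            topologyKey: kubernetes.io/hostname      # prefer a different node for each replica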

GKE cluster autoscaler vs Autoscaling in Managed instance groups

I am using Google Container Engine. Now I want autoscaling functionality in my cluster. As per the documentation, the GKE autoscaler is in beta release. I can also enable autoscaling in the instance group that manages the cluster nodes.
The cluster autoscaler adds/removes nodes so that all scheduled pods have a place to run, whereas the instance group adds/removes nodes based on different policies like average CPU utilization.
I think that by adjusting the pods' CPU limits and target CPU utilization for pods in the Kubernetes autoscaler, Managed Instance Group autoscaling could also be used to resize a GKE cluster.
So my question is: what should I use?
Short answer - don't use the GCE MIG autoscaling feature. It just won't work properly with your cluster.
See details in this FAQ:
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#should-i-use-a-cpu-usage-based-node-autoscaler-with-kubernetes
(read the question linked above and the next two)
As per the GCP docs:
"Caution: Do not enable Compute Engine autoscaling for managed instance groups for your cluster nodes. GKE's cluster autoscaler is separate from Compute Engine autoscaling. This can lead to node pools failing to scale up or scale down as the Compute Engine autoscaler will be in conflict with GKE's cluster autoscaler"
More details:
https://cloud.google.com/kubernetes-engine/docs/concepts/cluster-autoscaler