GKE node pool doesn't scale up - kubernetes

I have a GKE cluster which doesn't scale up when a particular deployment needs more resources.
I've checked the cluster autoscaler logs and it has entries with this error:
no.scale.up.nap.pod.zonal.resources.exceeded. The documentation for this error says:
Node auto-provisioning did not provision any node group for the Pod in
this zone because doing so would violate resource limits.
I don't quite understand which resource limits the documentation refers to, or why they prevent the node pool from scaling up.
If I scale the cluster up manually, the deployment's pods are scheduled and everything works as expected, so it doesn't seem to be a problem with project quotas.

The resource limits that you define for node auto-provisioning are enforced based on the total CPU and memory used across your whole cluster, not just by the auto-provisioned pools.
If you are not using node auto-provisioning (NAP), disable the NAP feature for the cluster.
If you are using NAP, raise the cluster-wide resource limits defined in the NAP configuration (see the sketch below).
As a workaround, you can also specify the machine type explicitly in the workload spec. Make sure to use a machine family that GKE node auto-provisioning supports.
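A minimal sketch of both fixes, assuming a cluster named my-cluster; the CPU/memory ceilings are illustrative and should match your real capacity needs:

    # Raise the cluster-wide ceilings NAP may provision up to
    gcloud container clusters update my-cluster \
        --enable-autoprovisioning \
        --min-cpu 1 --max-cpu 100 \
        --min-memory 1 --max-memory 400

    # Workaround: pin the workload to a supported machine family
    # via a nodeSelector in the Deployment's Pod template
    spec:
      nodeSelector:
        cloud.google.com/machine-family: n2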

GKE Cluster autoscaler profile for older cluster

In GKE there is now a new tab while creating a new K8s cluster:
Automation - Set cluster-level criteria for automatic maintenance, autoscaling, and auto-provisioning. Edit the node pool for automation like auto-scaling, auto-upgrades, and repair.
It has two options - Balanced (default) & Optimize utilization (beta).
Can't we set this for an older cluster? Is there any workaround?
We are running old GKE version 1.14 and want the cluster to auto-scale when the existing nodes reach 70% resource utilization.
Currently we have 2 different pools - only one has node auto-provisioning enabled. But during peak hours, when the HPA scales the Pods, a new node takes some time to join the cluster, and sometimes the existing nodes start crashing due to resource pressure.
You can set the autoscaling profile by going into:
GCP Cloud Console (Web UI) -> Kubernetes Engine -> CLUSTER-NAME -> Edit -> Autoscaling profile
This was verified on GKE version 1.14.10-gke.50.
You can also run:
gcloud beta container clusters update CLUSTER-NAME --autoscaling-profile optimize-utilization
The official documentation states:
You can specify which autoscaling profile to use when making such decisions. The currently available profiles are:
balanced: The default profile.
optimize-utilization: Prioritize optimizing utilization over keeping spare resources in the cluster. When selected, the cluster autoscaler scales down the cluster more aggressively: it can remove more nodes, and remove nodes faster. This profile has been optimized for use with batch workloads that are not sensitive to start-up latency. We do not currently recommend using this profile with serving workloads.
-- Cloud.google.com: Kubernetes Engine: Cluster autoscaler: Autoscaling profiles
This setting (optimize-utilization) may not be the best option for serving workloads. It tries to scale down (remove nodes) more aggressively, which reduces the amount of spare resources available in your cluster and can make it more vulnerable to workload spikes.
Answering the part of the question:
we are running old GKE version 1.14 we want to auto-scale cluster when 70% of resource utilization of existing nodes.
As stated in the documentation:
Cluster autoscaler increases or decreases the size of the node pool automatically, based on the resource requests (rather than actual resource utilization) of Pods running on that node pool's nodes. It periodically checks the status of Pods and nodes, and takes action:
If Pods are unschedulable because there are not enough nodes in the node pool, cluster autoscaler adds nodes, up to the maximum size of the node pool.
-- Cloud.google.com: Kubernetes Engine: Cluster autoscaler: How cluster autoscaler works
You can't directly scale the cluster based on a percentage of resource utilization (70%).
The autoscaler acts on the cluster's inability to schedule Pods on the currently existing nodes.
You can scale the number of replicas of your Deployment by CPU usage with the Horizontal Pod Autoscaler. These Pods could have a buffer to handle increased amounts of traffic; after a specific threshold they would spawn new Pods, for which the CA (Cluster Autoscaler) would request a new node (if the new Pods are unschedulable). This buffer is the mechanism that prevents sudden spikes the application couldn't otherwise manage.
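As an illustration, a minimal HPA manifest targeting 70% average CPU utilization (measured relative to the Pods' requests); the Deployment name service-a and the replica bounds are assumptions, and autoscaling/v2beta2 is available on 1.14:

    apiVersion: autoscaling/v2beta2
    kind: HorizontalPodAutoscaler
    metadata:
      name: service-a-hpa
    spec:
      scaleTargetRef:
        apiVersion: apps/v1
        kind: Deployment
        name: service-a
      minReplicas: 2        # keeps a small buffer of replicas running
      maxReplicas: 10
      metrics:
      - type: Resource
        resource:
          name: cpu
          target:
            type: Utilization
            averageUtilization: 70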
The buffer part and over-provisioning are explained in detail in:
Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke: Autoscaler and over-provisioning
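The over-provisioning pattern from that guide boils down to running low-priority placeholder Pods that reserve headroom and get evicted as soon as real workloads need the space. A minimal sketch, with a hypothetical priority class name and placeholder sizing:

    # Low-priority class so placeholder Pods are evicted first
    apiVersion: scheduling.k8s.io/v1
    kind: PriorityClass
    metadata:
      name: overprovisioning
    value: -10
    globalDefault: false
    description: "Placeholder Pods reserving spare cluster capacity"
    ---
    # Deployment of pause containers that hold the reserved headroom
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: overprovisioning
    spec:
      replicas: 2
      selector:
        matchLabels:
          app: overprovisioning
      template:
        metadata:
          labels:
            app: overprovisioning
        spec:
          priorityClassName: overprovisioning
          containers:
          - name: pause
            image: k8s.gcr.io/pause:3.1
            resources:
              requests:
                cpu: 500m        # headroom each placeholder reserves
                memory: 512Mi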
There is extensive documentation about running cost-effective apps on GKE:
Cloud.google.com: Solutions: Best practices for running cost effective kubernetes applications on gke
I encourage you to check the above link, as there are a lot of tips and insights on scaling, over-provisioning, workload spikes, HPA, VPA, etc.
Additional resources:
Cloud.google.com: Kubernetes Engine: Node auto provisioning

AWS EKS Cluster Autoscaler - Scale-In Policy

I have a CA (Cluster Autoscaler) deployed on EKS, following this post. My understanding was that the CA only scales the cluster when needed, i.e. if there are 3 nodes with a capacity of 8 pods each and a 9th pod comes up, the CA would provision a 4th node to run that 9th pod. What I see instead is that the CA continuously terminates & creates a node, randomly chosen from within the cluster, disturbing other pods & nodes.
How can I tell EKS (without defining minimum nodes or disabling the scale-in policy in the ASG) not to kill a node that has at least 1 pod running on it? Any suggestion would be appreciated.
You cannot use Pods as the unit; the CA works in terms of CPU and memory resources.
If the cluster does not have enough CPU or memory, it adds a new node.
You have to tune the resource requests in your pod definitions, or move your nodes to an instance type with more or fewer resources, depending on how many pods you want on each node.
Or you can play with the scale-down-utilization-threshold parameter (see the sketch after the link):
https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#what-are-the-parameters-to-ca
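The threshold is passed as a command-line flag on the cluster-autoscaler container itself. A fragment of the CA Deployment as a sketch - the image tag, ASG name, and the 0.7 value are illustrative assumptions (the FAQ lists the default as 0.5):

    spec:
      containers:
      - name: cluster-autoscaler
        image: k8s.gcr.io/autoscaling/cluster-autoscaler:v1.21.0
        command:
        - ./cluster-autoscaler
        - --cloud-provider=aws
        - --nodes=2:10:my-node-group-asg               # hypothetical min:max:ASG-name
        - --scale-down-utilization-threshold=0.7       # node becomes a scale-down candidate below 70% utilization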

Specifying memory in Kubernetes pods for deployment of Docker image

I am exploring the implementation of a Kubernetes cluster and deployment into it using Jenkins via a CI/CD pipeline. While exploring, I found that we don't need to define which worker node our pods are deployed to; the Kubernetes master takes care of choosing a worker node with free capacity for the deployment. We only need to define how much memory the pod needs in its definition.
Here is my confusion: we have already assigned and configured the Kubernetes cluster for deployment, and each of those nodes has its own memory, fixed when the AWS EC2 instance was created (I am planning to use AWS EC2 - Ubuntu 16.04 LTS).
So why do we need to define memory for the pod again? Is that the proper way to deploy pods?
I have only just started in the CI/CD pipeline world.
Specifying memory and CPU in the pod specification is completely optional. Still, there are a couple of reasons to specify them at the pod level:
As explained here, if you don't specify CPU/memory, the pod/container can consume all resources on that node and potentially affect other pods/containers running on that node.
Each application should specify the memory and CPU it needs to run. This information is used by Kubernetes when scheduling the pod onto one of the cluster nodes where enough resources are available, and it leads to better scheduling decisions.
It enables the Horizontal Pod Autoscaler (HPA) to scale the pods when resource consumption goes beyond a certain threshold. The details are explained in this doc. Unless a memory/CPU request is specified, you cannot calculate that the pod is running at 80% of that metric and should be scaled out to two replicas.
You can also set a default at the namespace level and then override it only for specific applications, details here.
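A minimal sketch of both: per-container requests/limits, and a namespace-level default via a LimitRange. All names and numbers are illustrative assumptions:

    # Per-container requests/limits inside the pod specification
    apiVersion: v1
    kind: Pod
    metadata:
      name: demo-app
    spec:
      containers:
      - name: app
        image: nginx:1.21
        resources:
          requests:        # what the scheduler reserves for this container on a node
            cpu: 250m
            memory: 256Mi
          limits:          # hard ceiling the container cannot exceed
            cpu: 500m
            memory: 512Mi
    ---
    # Namespace-level defaults, applied when a container specifies nothing
    apiVersion: v1
    kind: LimitRange
    metadata:
      name: default-resources
      namespace: my-namespace
    spec:
      limits:
      - type: Container
        defaultRequest:    # default requests
          cpu: 100m
          memory: 128Mi
        default:           # default limits
          cpu: 200m
          memory: 256Mi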

Kubernetes Horizontal Pod Autoscaler not utilising node resources

I am currently running Kubernetes 1.9.7 and successfully using the Cluster Autoscaler and multiple Horizontal Pod Autoscalers.
However, I recently started noticing the HPA would favour newer pods when scaling down replicas.
For example, I have 1 replica of service A running on a node alongside several other services. This node has plenty of available resource. During load, the target CPU utilisation for service A rose above the configured threshold, therefore the HPA decided to scale it to 2 replicas. As there were no other nodes available, the CA spun up a new node on which the new replica was successfully scheduled - so far so good!
The problem is, when the target CPU utilisation drops back below the configured threshold, the HPA decides to scale down to 1 replica. I would expect to see the new replica on the new node removed, therefore enabling the CA to turn off that new node. However, the HPA removed the existing service A replica that was running on the node with plenty of available resources. This means I now have service A running on a new node, by itself, that can't be removed by the CA even though there is plenty of room for service A to be scheduled on the existing node.
Is this a problem with the HPA or the Kubernetes scheduler? Service A has now been running on the new node for 48 hours and still hasn't been rescheduled despite there being more than enough resources on the existing node.
After scouring my cluster configuration, I came to a conclusion as to why this was happening.
Service A was configured to run on a public subnet, and the new node created by the CA was public. The existing node running the original replica of Service A was private, which led the HPA to remove that replica.
I'm not sure how Service A was scheduled onto this node in the first place, but that is a different issue.
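For reference, constraining a workload to nodes in a particular subnet is typically expressed as a node selector in the Pod spec. A sketch, where the label key/value is a hypothetical example of how nodes might be tagged per subnet:

    spec:
      nodeSelector:
        subnet-type: public   # hypothetical node label distinguishing public/private subnet nodes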

How does multiple replicas/pods scale Kubernetes?

From what I understand, using multiple replicas as well as auto-scaling is supposed to help in the case that lots of people visit your website and make calls to services provided by your Kubernetes cluster.
How do the replicas help with scaling?
Aren't these extra pods all just running on the same computer with constant resources? That would mean that they're all limited by a constant amount of CPU and memory.
Kubernetes has a couple of scaling mechanisms, the Horizontal Pod Autoscaler being the basic, but not the only, one.
With the HPA you can spin up additional Pods according to some metrics (most commonly CPU and memory). At some point you will hit a moment when your cluster nodes do not have enough resources to satisfy the resource requirements of your Pods (you will have Pods in the Pending state due to a lack of nodes available for scheduling).
At that point the Cluster Autoscaler can kick in and, e.g., scale an AWS ASG (or some other cloud node pool) to add a new node to the cluster and make space for the pending Pod(s).
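A quick way to see both mechanisms from the command line; the Deployment name web and the thresholds are assumptions:

    # HPA: scale the Deployment between 2 and 10 replicas, targeting 70% CPU
    kubectl autoscale deployment web --cpu-percent=70 --min=2 --max=10

    # Pods stuck in Pending are what trigger the Cluster Autoscaler to add a node
    kubectl get pods --all-namespaces --field-selector=status.phase=Pending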