How to scale out a Kubernetes Cluster based on allocatable resources?

I am looking for a way to add a new worker node to my Kubernetes cluster when its allocatable resources fall below a specified minimum.
To scale out a cluster automatically I found the cluster-autoscaler, but the documentation says it only scales out if:
there are pods that failed to run in the cluster due to insufficient resources
Therefore a new node is started only after a pod is in the Pending state. Starting a new node takes about 2 minutes, and I would like to avoid a pod waiting in Pending for those 2 minutes.
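One approach often paired with the cluster-autoscaler for exactly this situation (not mentioned in the question, so treat this as a hedged sketch) is a low-priority placeholder Deployment that reserves spare capacity: real workloads preempt the placeholder pods, which then go Pending and trigger scale-up before your own pods have to wait. All names, sizes and the priority value below are illustrative.

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: overprovisioning            # illustrative name
value: -10                          # below the default priority of 0, so real pods preempt these
globalDefault: false
description: "Placeholder pods that reserve headroom for the cluster-autoscaler."
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: capacity-reservation        # illustrative name
spec:
  replicas: 1                       # size this to the headroom you want kept free
  selector:
    matchLabels:
      app: capacity-reservation
  template:
    metadata:
      labels:
        app: capacity-reservation
    spec:
      priorityClassName: overprovisioning
      containers:
      - name: pause
        image: registry.k8s.io/pause:3.9   # does nothing, only holds the requested resources
        resources:
          requests:
            cpu: "1"
            memory: 1Gi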

Related

How does k8s manage containers using more cpu than requested without limits?

I'm trying to understand what happens when a container is configured with a CPU request but no limit, and it tries to use more CPU than requested while its node is fully utilized, but another node has available resources.
Will k8s keep the container throttled on its current node, or will it be moved to another node with available resources? Do we know how and when k8s decides to move the container when it is throttled in such a case?
I would appreciate any extra resources to read on this matter, as I couldn't find anything that goes into detail for this specific scenario.
Q1) What happens when a container is configured with a CPU request and without a limit?
ANS:
If you do not specify a CPU limit
If you do not specify a CPU limit for a Container, then one of these situations applies:
The Container has no upper bound on the CPU resources it can use. The Container could use all of the CPU resources available on the Node where it is running.
The Container is running in a namespace that has a default CPU limit, and the Container is automatically assigned the default limit. Cluster administrators can use a LimitRange to specify a default value for the CPU limit.
If you specify a CPU limit but do not specify a CPU request
If you specify a CPU limit for a Container but do not specify a CPU request, Kubernetes automatically assigns a CPU request that matches the limit. Similarly, if a Container specifies its own memory limit, but does not specify a memory request, Kubernetes automatically assigns a memory request that matches the limit.
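For illustration, a container with a CPU request but no CPU limit might be declared like this (the names, image and values are placeholders):

apiVersion: v1
kind: Pod
metadata:
  name: cpu-request-only            # illustrative name
spec:
  containers:
  - name: app
    image: nginx                    # placeholder image
    resources:
      requests:
        cpu: "500m"                 # used by the scheduler for placement
        memory: 256Mi
      # no "limits" section: the container may use any spare CPU on the node,
      # unless a namespace LimitRange assigns a default limit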
Q2) What happens when it tries to use more CPU than requested while the node is fully utilized, but there is another node with available resources?
ANS:
The Kubernetes scheduler is a control plane process which assigns Pods to Nodes. The scheduler determines which Nodes are valid placements for each Pod in the scheduling queue according to constraints and available resources. The scheduler then ranks each valid Node and binds the Pod to a suitable Node. Multiple different schedulers may be used within a cluster; kube-scheduler is the reference implementation. See scheduling for more information about scheduling and the kube-scheduler component.
Scheduling, Preemption and Eviction
In Kubernetes, scheduling refers to making sure that Pods are matched to Nodes so that the kubelet can run them. Preemption is the process of terminating Pods with lower Priority so that Pods with higher Priority can schedule on Nodes. Eviction is the process of terminating one or more Pods on Nodes.
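As a hedged illustration of the Priority mentioned above (class name and value are made up), a PriorityClass and a Pod that uses it could look like this; when nodes are full, the scheduler may preempt lower-priority pods to place such a pod:

apiVersion: scheduling.k8s.io/v1
kind: PriorityClass
metadata:
  name: high-priority               # illustrative name
value: 1000000                      # higher value = scheduled and preempts ahead of lower values
globalDefault: false
description: "For pods that may preempt lower-priority pods when nodes are full."
---
apiVersion: v1
kind: Pod
metadata:
  name: important-app               # illustrative name
spec:
  priorityClassName: high-priority
  containers:
  - name: app
    image: nginx                    # placeholder image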
Q3) Will k8s keep the container throttled in its current node or will it be moved to another node with available resources?
ANS:
Pod Disruption
Pod disruption is the process by which Pods on Nodes are terminated either voluntarily or involuntarily.
Voluntary disruptions are started intentionally by application owners or cluster administrators. Involuntary disruptions are unintentional and can be triggered by unavoidable issues like Nodes running out of resources, or by accidental deletions.
Voluntary and involuntary disruptions
Pods do not disappear until someone (a person or a controller) destroys them, or there is an unavoidable hardware or system software error.
We call these unavoidable cases involuntary disruptions to an application.
Examples are:
a hardware failure of the physical machine backing the node
cluster administrator deletes VM (instance) by mistake
cloud provider or hypervisor failure makes VM disappear
a kernel panic
the node disappears from the cluster due to cluster network partition
eviction of a pod due to the node being out-of-resources.
In short: Kubernetes does not live-migrate a running container. A container that uses more CPU than it requested on a fully utilized node simply competes for CPU on that node (weighted by its request); it only ends up on another node if the Pod is evicted or deleted and recreated by its controller, at which point the scheduler may pick a node with free resources.
Suggestion:
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints.
Command:
kubectl taint nodes node1 key1=value1:NoSchedule
Example:
kubectl taint nodes node1 node.kubernetes.io/disk-pressure:NoSchedule
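The taint only keeps non-tolerating pods off the node; a pod that should still be scheduled there needs a matching toleration, roughly like this (mirroring the key1=value1 taint from the first command; names are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: tolerant-pod                # illustrative name
spec:
  tolerations:
  - key: "key1"
    operator: "Equal"
    value: "value1"
    effect: "NoSchedule"
  containers:
  - name: app
    image: nginx                    # placeholder image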

Pods are not rescheduled to the failed node when it comes back up

My situation is: I have a Kubernetes cluster with 5 nodes. Initially, pods are distributed across all 5 nodes. Sometimes we need to restart a particular node's server. That node then goes down and its pods are created on other nodes. But once the failed/downed node comes back up, no pods are created on it automatically, because the replica count has already been reached. We need every node to run at least 1 pod. Could you please help with this?
We need every node to run at least 1 pod
Instead of running a Deployment, you can run a DaemonSet if you want a minimum of one pod to run on each node.
That way, whenever a new node joins the cluster, it will have one replica of your pod running.
Read more about DaemonSets: https://kubernetes.io/docs/concepts/workloads/controllers/daemonset/
You can also spread replicas across nodes when running a Deployment, but it is trickier to manage, using affinity or pod topology spread constraints.
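A minimal DaemonSet sketch (name and image are placeholders) that runs exactly one pod on every node, including nodes that join later:

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: one-per-node                # illustrative name
spec:
  selector:
    matchLabels:
      app: one-per-node
  template:
    metadata:
      labels:
        app: one-per-node
    spec:
      containers:
      - name: app
        image: my-registry/my-app:1.0   # placeholder image
        resources:
          requests:
            cpu: 100m
            memory: 128Mi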

What if I delete a node in GKE?

I have set up GKE with free-trial access.
Here is a screenshot of the cluster (screenshot omitted).
I have already set up a VM instance in GCE, so my Kubernetes cluster has fewer resources; I set it up for testing. I want to know what will happen if I delete 1 node out of the 3.
My pods are currently running distributed across all 3 nodes.
If I delete one node, will GKE create a new node, or will my running pods be deployed onto the other 2 nodes, making them heavily loaded?
How do I know it is HA, and how do I test scale up and scale down?
Please help clarify these questions.
If I delete one node, will GKE create a new node, or will my running pods be deployed onto the other 2 nodes, making them heavily loaded?
GKE manages the nodes using the node pool configuration.
If your GKE node pool is set to 3 nodes and you manually remove one instance, GKE will automatically create a new node in the cluster.
Your pods might get moved to another node if there is space left there; otherwise they will go to the Pending state and wait for the new node to join the GKE cluster.
If you want to reduce the number of nodes in GKE, you have to reduce the minimum count in the GKE node pool.
If you want to test scale up and down, you can enable autoscaling on the node pool and increase the pod count on the cluster; GKE will automatically add nodes. Make sure you have set the min and max node counts correctly in the node pool section for autoscaling.
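For example, autoscaling on an existing node pool can be enabled with a gcloud command along these lines (cluster, pool, zone and bounds are placeholders; check the gcloud reference for your version):

gcloud container clusters update my-cluster \
  --node-pool my-pool \
  --enable-autoscaling \
  --min-nodes 1 \
  --max-nodes 5 \
  --zone us-central1-a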
When you delete a node, its pods are also deleted. Depending on your Deployment, e.g. with a pod scale of 3, one node will hold 2 pods and the other 1. Whether your app suffers or not depends on the actual traffic.

Question about concept on Kubernetes pod assignment to nodes

I am quite a beginner in Kubernetes and would like to ask about some concepts related to Kubernetes pod assignment.
Suppose there is a Deployment to be made with a requirement of 3 replicas.
(1)
Assume that there are 4 nodes, each being a different physical server with different CPU and memory.
When the Deployment is made, how would Kubernetes assign the pods to the nodes? Will there be a scenario where it puts multiple pods on the same server while another server has no pod assigned (due to resource considerations)?
(2)
Assume there are 4 nodes (on 4 identical physical servers), and 1 pod is created on each of the 4 nodes.
Suppose that now one of the nodes goes down. How would Kubernetes handle this? Will it recreate the pod on one of the other 3 nodes, based on which one has more available resources?
Thank you for any advice in advance.
There's a brief discussion of the Kubernetes Scheduler in the Kubernetes documentation. Generally scheduling is fairly opaque, but you also tend to aim for fairly well-loaded nodes; the important thing from your application's point of view is to set appropriate resource requests in your pod specifications. As long as there's enough room on each node to meet the resource requests, it usually doesn't matter to you which node gets picked.
In the scenario you describe, (1) it is possible that two replicas will be placed on the same node and so two nodes will go unused. That's especially true if the nodes aren't identical and they have resource constraints: if your pods require 4 GB of RAM, but you have some nodes that have less than that (after accounting for system pods and daemon set pods), the pods can't get scheduled there.
If a node fails (2) Kubernetes will automatically reschedule the pods running on that node if possible. "Fail" is a broad case, and can include a node being intentionally stopped to be upgraded or replaced. In this latter case you have some control over the cluster's behavior; see Disruptions in the documentation.
Many environments will run a cluster autoscaler. This can cause nodes to come and go automatically: if you try to schedule a pod and it won't fit, the autoscaler will allocate a new node, and if a node is under 50% utilization, it will be removed (and its pods rescheduled). In your first scenario you might start with only one node, but when the pod replicas don't all fit, the autoscaler would create a new node and once it's available the excess pods could be scheduled there.
Kubernetes will try to deploy pods to multiple nodes for better availability and resiliency, based on the resource availability of the nodes. So if some node does not have enough capacity to host a pod, more than one replica of the pod may be scheduled onto the same node.
Kubernetes will reschedule pods from a failed node to other available nodes that have enough capacity to host them. In this process, again, if there are not enough nodes with capacity for all the replicas, more than one replica may end up on the same node.
You can read more on the scheduling algorithm here.
You can influence the scheduler with node and pod affinity and anti-affinity.
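For example, a hedged sketch of preferred pod anti-affinity that spreads a Deployment's replicas across nodes (labels and names are illustrative):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # illustrative name
spec:
  replicas: 3
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 100
            podAffinityTerm:
              labelSelector:
                matchLabels:
                  app: my-app
              topologyKey: kubernetes.io/hostname   # prefer distinct nodes
      containers:
      - name: app
        image: nginx                # placeholder image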

Kubernetes Horizontal Pod Autoscaler not utilising node resources

I am currently running Kubernetes 1.9.7 and successfully using the Cluster Autoscaler and multiple Horizontal Pod Autoscalers.
However, I recently started noticing the HPA would favour newer pods when scaling down replicas.
For example, I have 1 replica of service A running on a node alongside several other services. This node has plenty of available resources. Under load, the target CPU utilisation for service A rose above the configured threshold, so the HPA decided to scale it to 2 replicas. As there were no other nodes available, the CAS spun up a new node on which the new replica was successfully scheduled - so far so good!
The problem is, when the target CPU utilisation drops back below the configured threshold, the HPA decides to scale down to 1 replica. I would expect to see the new replica on the new node removed, therefore enabling the CAS to turn off that new node. However, the HPA removed the existing service A replica that was running on the node with plenty of available resources. This means I now have service A running on a new node, by itself, that can't be removed by the CAS even though there is plenty of room for service A to be scheduled on the existing node.
Is this a problem with the HPA or the Kubernetes scheduler? Service A has now been running on the new node for 48 hours and still hasn't been rescheduled despite there being more than enough resources on the existing node.
After scouring my cluster configuration, I came to a conclusion as to why this was happening.
Service A was configured to run on a public subnet and the new node created by the CA was public. The existing node running the original replica of Service A was private, which led the HPA to remove this replica.
I'm not sure how Service A was scheduled onto this node in the first place, but that is a different issue.