Horizontal Pod Autoscaler with the ArangoDB Kubernetes Operator - kubernetes

Would it be possible to use the Kubernetes Horizontal Pod Autoscaler with the ArangoDB Kubernetes Operator?

First, it would help to specify your need in more detail: do you want to scale the operator itself, or your Arango cluster (kind: ArangoDeployment)?
In any case, the Kubernetes HPA documentation says:
The Horizontal Pod Autoscaler automatically scales the number of pods
in a replication controller, deployment or replica set based on
observed CPU utilization (or, with custom metrics support, on some
other application-provided metrics). Note that Horizontal Pod
Auto-scaling does not apply to objects that can’t be scaled, for
example, DaemonSets.
This means you can only scale a Deployment, ReplicaSet, StatefulSet, or ReplicationController.
To autoscale the operator itself, follow these steps:
$ kubectl get deploy
NAME                                     DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
arango-deployment-operator               2         2         2            2           19m
arango-deployment-replication-operator   2         2         2            2           19m
Then autoscale this deployment (adjust the threshold values and the deployment name to match yours):
$ kubectl autoscale deployment arango-deployment-operator --cpu-percent=10 --min=1 --max=10
horizontalpodautoscaler.autoscaling/arango-deployment-operator autoscaled
If you are looking to autoscale the ArangoDB cluster itself, such as the DB-Servers or Coordinators, that won't be possible out of the box, because those pods are part of the arangodeployments.database.arangodb.com custom resource, and that CRD is not supported by the HPA.
You can scale DB-Servers and Coordinators up and down manually by changing the counts in the ArangoDeployment, as described in this documentation, for example:
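A minimal sketch of such a manual change (the field names follow the kube-arangodb examples; older operator versions use apiVersion database.arangodb.com/v1alpha, and the deployment name here is a placeholder):

apiVersion: database.arangodb.com/v1
kind: ArangoDeployment
metadata:
  name: example-arangodb-cluster
spec:
  mode: Cluster
  dbservers:
    count: 5        # raise or lower to scale DB-Servers
  coordinators:
    count: 3        # raise or lower to scale Coordinators

After editing the counts, re-apply the manifest with kubectl apply and the operator will add or remove members to match.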
Hope this helps.

Related

does GKE autopilot auto scale both pods and nodes?

When I change replicas: x in my .yaml file I can see GKE Autopilot boot pods up or down depending on the value, but what happens if the load on my deployment gets too big? Will it autoscale the number of pods and nodes to handle the traffic and then reduce back to the value specified in replicas when the request load drops again?
I'm basically asking how Autopilot horizontal autoscaling works,
and how do I get a minimum of 2 pod replicas that can horizontally autoscale in Autopilot?
GKE Autopilot will not scale the replica count beyond what you specified by default. This is the default behavior of Kubernetes in general.
If you want automatic scaling you have to use a Horizontal Pod Autoscaler (HPA), which is supported in Autopilot.
If you deploy an HPA to scale your workload up and down, Autopilot will scale the nodes up and down automatically, and that is transparent to you because the nodes are managed by Google.
GKE only autoscales nodes by default; you have to take care of scaling your own deployment with an HPA.
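For the "minimum of 2 pod replicas" part, a minimal sketch of an HPA manifest (assuming the autoscaling/v2 API available on current GKE versions; my-app and the 70% CPU target are placeholders):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2          # never scale below 2 replicas
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70

With this in place, Autopilot provisions or removes nodes as the HPA changes the replica count.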
Autopilot: GKE provisions and manages the cluster's underlying
infrastructure, including nodes and node pools, giving you an
optimized cluster with a hands-off experience.
We need to configure both scaling options for our deployments ourselves: VPA and HPA.
Pre-configured: Autopilot handles all the scaling and configuring of your nodes.
Default: You configure Horizontal Pod autoscaling (HPA) and Vertical Pod autoscaling (VPA).
GKE will manage scaling your nodes up and down in the node pools; without worrying about the infrastructure, you just have to deploy your application with HPA and VPA autoscaling.
You can read more about the options here: https://cloud.google.com/kubernetes-engine/docs/concepts/autopilot-overview#comparison

Can a node autoscaler automatically start an extra pod when replica count is 1 & minAvailable is also 1?

Our autoscaling (horizontal and vertical) works pretty fine, except that downscaling is somehow not working (yes, we checked the usual suspects like https://github.com/kubernetes/autoscaler/blob/master/cluster-autoscaler/FAQ.md#i-have-a-couple-of-nodes-with-low-utilization-but-they-are-not-scaled-down-why).
Since we want to save resources and have pods that are not ultra-sensitive, we are setting the following:
Deployment:
  replicas: 1
PodDisruptionBudget:
  minAvailable: 1
HorizontalPodAutoscaler:
  minReplicas: 1
  maxReplicas: 10
But it now seems that this is the reason the autoscaler is not scaling down the nodes (even though the node is only about 30% utilized on CPU and memory, and we have other nodes with more than enough memory and CPU to take these pods).
Is it possible in general that the auto scaler starts an extra pod on the free node and removes the old pod from the old node?
Is it possible in general that the auto scaler starts an extra pod on the free node and removes the old pod from the old node?
Yes, that should be possible in general, but in order for the cluster autoscaler to remove a node, it must be possible to move all pods running on the node somewhere else.
According to the docs there are a few types of pods that cannot be moved:
Pods with restrictive PodDisruptionBudget.
Kube-system pods that:
are not run on the node by default
don't have a pod disruption budget set or their PDB is too restrictive (since CA 0.6).
Pods that are not backed by a controller object (so not created by a deployment, replica set, job, stateful set etc.).
Pods with local storage.
Pods that cannot be moved elsewhere due to various constraints (lack of resources, non-matching node selectors or affinity, matching anti-affinity, etc.)
Pods that have the following annotation set:
cluster-autoscaler.kubernetes.io/safe-to-evict: "false"
You could check the cluster autoscaler logs, they may provide a hint to why no scale in happens:
kubectl -n kube-system logs -f deployment.apps/cluster-autoscaler
Without more information about your setup it is hard to guess what is going wrong, but unless you are using local storage, node selectors, or affinity/anti-affinity rules, Pod disruption budgets are a likely candidate. Even if you are not using them explicitly, they can still prevent node scale-in if there are pods in the kube-system namespace that are missing pod disruption budgets (see this answer for an example of such a scenario in GKE).
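As an illustration of the PDB angle: with replicas: 1 and minAvailable: 1 the budget never allows an eviction, so the node hosting that pod can never be drained. A minimal sketch that stays disruption-tolerant while still letting the autoscaler move pods is to keep at least two replicas and allow one disruption at a time (names are placeholders):

apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: my-app-pdb
spec:
  maxUnavailable: 1       # the autoscaler may evict one pod at a time
  selector:
    matchLabels:
      app: my-app

combined with minReplicas: 2 in the HorizontalPodAutoscaler (or replicas: 2 in the Deployment), so an eviction never takes the application fully offline.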

Kubernetes - Set Pod replication criteria based on memory and cpu usage

I am a newbie to the Kubernetes world. Please excuse me if I am getting anything wrong.
I understand that pod replication is handled by k8s itself. We can also set CPU and memory limits for pods. But is it possible to change the replication criteria based on memory and CPU usage? For example, I want a pod to replicate when its memory/CPU usage reaches 70%.
Can we do it using metrics collected by Prometheus etc.?
You can use the Horizontal Pod Autoscaler. From the docs:
The Horizontal Pod Autoscaler automatically scales the number of Pods
in a replication controller, deployment, replica set or stateful set
based on observed CPU utilization (or, with custom metrics support, on
some other application-provided metrics). Note that Horizontal Pod
Autoscaling does not apply to objects that can't be scaled, for
example, DaemonSets.
The Horizontal Pod Autoscaler is implemented as a Kubernetes API
resource and a controller. The resource determines the behavior of the
controller. The controller periodically adjusts the number of replicas
in a replication controller or deployment to match the observed
average CPU utilization to the target specified by user
An example from the docs:
The following command will create a Horizontal Pod Autoscaler that maintains between 1 and 10 replicas of the Pods. HPA will increase and decrease the number of replicas to maintain an average CPU utilization across all Pods of 50%.
kubectl autoscale deployment php-apache --cpu-percent=50 --min=1 --max=10
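Since the question also asks about memory, here is a minimal sketch of the declarative form (assuming the autoscaling/v2 API; my-app is a placeholder) that scales on both CPU and memory at 70% utilization. Note that utilization is measured against the pods' resource requests, so requests must be set on the Deployment:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 1
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70     # scale out when average CPU passes 70% of requests
  - type: Resource
    resource:
      name: memory
      target:
        type: Utilization
        averageUtilization: 70     # same idea for memory

For Prometheus-based metrics you would additionally need an adapter (for example prometheus-adapter) that exposes them through the custom.metrics.k8s.io API.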

How to setup auto scale Kubernetes cluster in hybrid mode

I need to create a K8s autoscaling setup for a Spark application which will be running both on premise and on AWS as Docker images. By scale, I mean scaling nodes up and down, from on premise to the AWS cloud, using the cluster autoscaler or by other means.
I have browsed many articles, e.g. on how to set up a K8s cluster on AWS and on HPA & CA scaling, but could not find concrete directions to follow.
I am looking for any direction that can help me understand where I should start and which steps to follow to set up such a K8s cluster.
Regarding Cluster Autoscaler:
Cluster Autoscaler is a tool that automatically adjusts the size of the Kubernetes cluster when one of the following conditions is true:
- there are pods that failed to run in the cluster due to insufficient resources,
- there are nodes in the cluster that have been underutilized for an extended period of time and their pods can be placed on other existing nodes.
The cluster autoscaler on Azure dynamically scales Kubernetes worker nodes. It runs as a deployment in your cluster.
This README will help you get cluster autoscaler running on your Azure Kubernetes cluster.
Regarding HPA:
The Horizontal Pod Autoscaler automatically scales the number of pods in a replication controller, deployment or replica set based on observed CPU utilization or other custom metrics. HPA normally fetches metrics from a series of aggregated APIs:
- metrics.k8s.io
- custom.metrics.k8s.io
- external.metrics.k8s.io
Metrics-server needs to be deployed separately (it serves the resource metrics API); if you wish to scale on something more than CPU and memory you also need an adapter for the custom or external metrics APIs. More info can be found here and here.
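For example, on a plain cluster metrics-server is commonly installed from its release manifest (verify the URL and version against your own setup before applying):

kubectl apply -f https://github.com/kubernetes-sigs/metrics-server/releases/latest/download/components.yaml
kubectl top nodes    # returns node CPU/memory once metrics-server is serving metrics.k8s.io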
How to make it work?
HPA is supported by kubectl by default:
kubectl create - creates a new autoscaler
kubectl get hpa - lists your autoscalers
kubectl describe hpa - gets a detailed description of autoscalers
kubectl delete - deletes an autoscaler
Example:
kubectl autoscale rs foo --min=2 --max=5 --cpu-percent=80 creates an autoscaler for the replica set foo, with the target CPU utilization set to 80% and the number of replicas between 2 and 5.
Here is detailed documentation on how to use the kubectl autoscale command.

Autoscaling in Google Container Engine

I understand the Container Engine is currently in alpha and not yet complete.
From the docs I assume there is no auto-scaling of pods (e.g. depending on CPU load) yet, correct? I'd love to be able to configure a replication controller to automatically add pods (and VM instances) when the average CPU load reaches a defined threshold.
Is this somewhere on the near future roadmap?
Or is it possible to use the Compute Engine Autoscaler for this? (if so, how?)
As we work towards a Beta release, we're definitely looking at integrating the Google Compute Engine AutoScaler.
There are actually two different kinds of scaling:
Scaling up/down the number of worker nodes in the cluster depending on # of containers in the cluster
Scaling pods up and down.
Since Kubernetes is an OSS project as well, we'd also like to add a Kubernetes native autoscaler that can scale replication controllers. It's definitely something that's on the roadmap. I expect we will actually have multiple autoscaler implementations, since it can be very application specific...
Kubernetes autoscaling: http://kubernetes.io/docs/user-guide/horizontal-pod-autoscaling/
kubectl command: http://kubernetes.io/docs/user-guide/kubectl/kubectl_autoscale/
Example:
kubectl autoscale deployment foo --min=2 --max=5 --cpu-percent=80
You can autoscale your deployment by using kubectl autoscale.
Autoscaling is when you want the number of pods to be modified automatically as demand changes.
kubectl autoscale deployment task2deploy1 --cpu-percent=50 --min=1 --max=10
kubectl get deployment task2deploy1
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
task2deploy1   1         1         1            1           49s
As resource consumption increases, the number of pods will increase and may exceed the number you specified in your deployment.yaml file, but it will never exceed the maximum number of pods specified in the kubectl autoscale command.
kubectl get deployment task2deploy1
NAME           DESIRED   CURRENT   UP-TO-DATE   AVAILABLE   AGE
task2deploy1   7         7         7            3           4m
Similarly, as resource consumption decreases, the number of pods will go down, but never below the minimum number of pods specified in the kubectl autoscale command.
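To follow the autoscaler's decisions while the load changes, you can watch the HPA object (using the same task2deploy1 example):

kubectl get hpa task2deploy1 --watch
kubectl describe hpa task2deploy1    # shows scaling events and the current vs. target metrics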