Is there a way to set max/min replicas on Kubernetes statefulset? - kubernetes

For ReplicaSets I see there is a way to use a Horizontal Pod Autoscaler (HPA) and set a max/min value for the number of replicas allowed. Is there a similar feature for StatefulSets, since they also let you specify the number of replicas to deploy initially? For example, how would I tell Kubernetes to limit the number of pods it can deploy for a given StatefulSet?

I have posted a community wiki answer for better visibility.
Jonas put it well in the comments:
First sentence in the documentation:
"The Horizontal Pod Autoscaler automatically scales the number of Pods in a replication controller, deployment, replica set or stateful set"
In summary, it is possible to set min/max replicas for a StatefulSet using an HPA. In this documentation you will learn how the HPA works, how to use it, what is supported, etc. The only objects the HPA does not work with are those that can't be scaled, for example DaemonSets.
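For example, here is a minimal sketch of an HPA that keeps a StatefulSet between 2 and 5 replicas (the StatefulSet name web and the CPU target are placeholders, and older clusters may need apiVersion autoscaling/v2beta2 instead of autoscaling/v2):

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: web-hpa              # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: StatefulSet        # the HPA can target any scalable object, including StatefulSets
    name: web                # hypothetical StatefulSet name
  minReplicas: 2             # lower bound enforced by the HPA
  maxReplicas: 5             # upper bound enforced by the HPA
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80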
See also this related question.

Related

kubernetes / prometheus custom metric for horizontal autoscaling

I'm wondering about an approach we need to take for our server setup. We have pods that are short-lived. They are started with 3 pods at a minimum, and each server waits for a single request, handles it, and then the pod is destroyed. I'm not sure of the mechanism by which the pod is destroyed, but my question is not about that part anyway.
There is an "active session count" metric that I am envisioning. Each of these pods could make a REST call to some "metrics" pod that we would create for our cluster. The metrics pod would expose sessionStarted and sessionEnded endpoints, which would increment/decrement the activeSessions metric in Kubernetes. That metric would be what is used for horizontal autoscaling of the number of pods needed.
Since having a pod "up" counts as zero active sessions, the custom event that starts a session would update the metrics server's session count with a REST call and then decrement it again on session end (the pod being up does not indicate whether or not it has an active session).
Is it correct to think that I need this metrics server (and need to write it myself)? Or is there something Prometheus exposes where this type of metric is already supported, REST clients and all (for various languages), that could modify this metric?
Looking for guidance and confirmation that I'm on the right track. Thanks!
It's impossible to give only one way to solve this, and your question is somewhat opinion-based. However, there is a useful similar question on Stack Overflow; please check the comments there, as they can give you some tips. If nothing works, you will probably have to write it yourself; there is no exact solution on Kubernetes's side.
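If you do end up writing such a metrics service and exposing the counter to Prometheus, and assuming the metric is made available to the HPA through something like the Prometheus Adapter under a name such as active_sessions (the metric name, Deployment name and targets below are all hypothetical), the autoscaling side could look roughly like this:

apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: consumer-hpa            # hypothetical name
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: consumer              # hypothetical Deployment name
  minReplicas: 3                # matches the "3 pods at a minimum" requirement
  maxReplicas: 20
  metrics:
  - type: Pods
    pods:
      metric:
        name: active_sessions   # assumed name served by the Prometheus Adapter
      target:
        type: AverageValue
        averageValue: "1"       # aim for roughly one active session per pod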
Please also take Apache Flink into consideration. It has a Reactive Mode that works in combination with Kubernetes:
Reactive Mode allows to run Flink in a mode, where the Application Cluster is always adjusting the job parallelism to the available resources. In combination with Kubernetes, the replica count of the TaskManager deployment determines the available resources. Increasing the replica count will scale up the job, reducing it will trigger a scale down. This can also be done automatically by using a Horizontal Pod Autoscaler.

How to increase or decrease number of pods in Kubernetes deployment

I have a requirement where, based on some input value, I need to decide the number of active pods. Let's say that in the beginning the number would be 1, so we need to start one pod; after some time, if the number goes to 3, I need to start 2 more pods.
The next day the number could go back to 1, so I need to remove 2 pods accordingly and keep 1 active. How can this be achieved in Kubernetes?
There are a few ways to achieve this. The most obvious and manual one is to use kubectl scale [--resource-version=version] [--current-replicas=count] --replicas=COUNT (-f FILENAME | TYPE NAME) as per this document or this tutorial. You can also consider taking advantage of Kubernetes autoscaling (the Horizontal Pod Autoscaler and the Cluster Autoscaler), described in this article and this document.
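For the manual route there is also a declarative variant: keep spec.replicas in the manifest, change it to the desired number and re-apply it (the Deployment name and image below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: worker                # hypothetical name
spec:
  replicas: 3                 # change this value (e.g. 1 -> 3 -> 1) and re-apply to scale
  selector:
    matchLabels:
      app: worker
  template:
    metadata:
      labels:
        app: worker
    spec:
      containers:
      - name: worker
        image: example.com/worker:latest   # placeholder image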

High total CPU request but low total usage (kubernetes resources)

I have a bunch of pods in a cluster that together request almost all (7.35/8) of the available CPU resources on a node, even though their actual total usage is almost nothing (0.34/8).
The pod that currently requests the most only requests 210m, which I guess is not an outrageous amount; I would also like to enforce some sensible minimum request size for all pods in the cluster. Of course the requests add up when there are lots of pods.
It seems I could easily scale the requests down by a factor of 10 and leave the limits where they are to begin with.
But is there something else that I should look into instead before doing that - reducing the replica count, etc.?
Also, it looks a bit strange that the pods are not distributed more evenly between the nodes.
Your request values seem overestimated.
You need time and metrics to find the right requests/limits for your workload.
Keep in mind that if you change those values, your pods will restart.
Also, it's normal to find some unbalanced nodes in your cluster. Kubernetes will never remove a running pod if you don't ask it to.
For example, if you create a cluster with 3 nodes, fill those 3 nodes with pods and then add another 3 nodes, the new nodes will stay empty.
You can set up a HorizontalPodAutoscaler on your cluster to adapt the number of pods to your workload.
That way your workload will be spread among the nodes with a correct balance (if you use the default scheduling policy).
I suggest the following:
Resource allocation: Based on historical values, set your requests to meaningful values with a buffer. Also, to have guaranteed pod resource allocation, it may be a good idea to set the request and the limit to the same value, but that means your pod cannot burst to use additional resources. One more thing to note is that scheduling happens only against the requested values, so if the node has no resources left and your pod tries to burst towards its limit, it may be killed and rescheduled.
Resource quotas: Check Kubernetes Resource Quotas to set sensible namespace-level quotas that keep developers from over-provisioning resources.
Affinity/anti-affinity: Check the concept of anti-affinity to have your replicas, or different pods, spread across your cluster. You can ensure, for example, that one host or availability zone has only one replica of your pod (which helps with HA), or spread different pods onto different nodes - check this video. A rough sketch of the last two points follows this list.
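As a rough sketch of the quota and anti-affinity ideas (the namespace, quota values and app label are made up for illustration):

apiVersion: v1
kind: ResourceQuota
metadata:
  name: team-quota            # hypothetical name
  namespace: team-a           # hypothetical namespace
spec:
  hard:
    requests.cpu: "4"         # total CPU all pods in the namespace may request
    requests.memory: 8Gi
    limits.cpu: "8"
    limits.memory: 16Gi
---
# Fragment for the pod template of a Deployment/StatefulSet:
# at most one replica per node thanks to the anti-affinity rule.
affinity:
  podAntiAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
    - labelSelector:
        matchLabels:
          app: my-app                         # hypothetical pod label
      topologyKey: kubernetes.io/hostname     # use topology.kubernetes.io/zone for one replica per zone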
There are good answers already but I would like to add some more info.
It is very important to have a good strategy for calculating how many resources each container needs.
Optimally, your pods should use exactly the amount of resources you requested, but that's almost impossible to achieve. If the usage is lower than your request, you are wasting resources. If it's higher, you are risking performance issues. Consider a 25% margin up and down the request value as a good starting point. Regarding limits, arriving at a good setting takes trying and adjusting; there is no optimal value that fits everyone, as it depends on many factors related to the application itself, the demand model, the tolerance for errors, etc.
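As a made-up illustration of that margin: if a container's observed steady-state usage is around 200m CPU and 256Mi of memory, a starting point could be:

resources:
  requests:
    cpu: 200m          # close to the observed usage
    memory: 256Mi
  limits:
    cpu: 250m          # roughly 25% headroom above the request
    memory: 320Mi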
Kubernetes best practices: Resource requests and limits is a very good guide explaining the idea behind these mechanisms with a detailed explanation and examples.
Also, Managing Resources for Containers will provide you with the official docs regarding:
Requests and limits
Resource types
Resource requests and limits of Pod and Container
Resource units in Kubernetes
How Pods with resource requests are scheduled
How Pods with resource limits are run, etc
Just in case you'll need a reference.

Kubernetes scale up pod with dynamic environment variable

I am wondering if there is a way to set environment variables dynamically when scaling, depending on high load.
Let's imagine that we have a Kubernetes service called Generic Consumer, which at the beginning has 4 pods. First of all, I would like 75% of the pods to have the env variable Gold and 25% Platinum. Is that possible? (The percentages could be replaced by static numbers, for example 3 pods Gold, 1 Platinum.)
Second question: if the Platinum pods are under high load, is there a way to configure Kubernetes/charts to scale only the Platinum pods and then scale them back down after the load subsides?
So far I have come up with creating 2 separate YAML files with different env variables and replica counts.
Obviously, the whole purpose of this is to prioritize some topics.
I have used this as a reference: https://www.confluent.io/blog/prioritize-messages-in-kafka.
So in the above example, Generic Consumer would be the Kafka consumer, which would use the env variable to get the bucket config:
configs.put(ConsumerConfig.PARTITION_ASSIGNMENT_STRATEGY_CONFIG,
        BucketPriorityAssignor.class.getName());
configs.put(BucketPriorityConfig.TOPIC_CONFIG, "orders-per-bucket");
configs.put(BucketPriorityConfig.BUCKETS_CONFIG, "Platinum, Gold");
configs.put(BucketPriorityConfig.ALLOCATION_CONFIG, "70%, 30%");
configs.put(BucketPriorityConfig.BUCKET_CONFIG, "Platinum");
consumer = new KafkaConsumer<>(configs);
If you have any alternatives, please let me know!
As was mentioned in the comment section, the most versatile option (and probably the best for your scenario with prioritization) is to keep two separate Deployments with gold and platinum labels.
Regarding the first question, as David Maze pointed out, the pods of a Deployment are identical; you cannot have a few pods with one label and a few others with a different one. Even if you created them manually (3 pods with gold and 1 with platinum), you would not be able to use the HPA.
The two-Deployment option lets you make adjustments depending on the situation. For example, you would be able to scale one Deployment using the HPA and the other with the VPA, or both with the HPA. It would also help you keep a budget, i.e. for gold users you might allow a maximum of 5 pods to run simultaneously, while for platinum you could set this maximum to 10.
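A minimal sketch of that setup, assuming the consumer reads its bucket from an env variable (the variable name BUCKET, the image and all object names are placeholders); only the gold variant is shown, and the platinum one would differ only in the names, the env value and the replica bounds:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: generic-consumer-gold       # hypothetical name
spec:
  replicas: 3                       # initial "75%" share
  selector:
    matchLabels:
      app: generic-consumer
      tier: gold
  template:
    metadata:
      labels:
        app: generic-consumer
        tier: gold
    spec:
      containers:
      - name: consumer
        image: example.com/generic-consumer:latest   # placeholder image
        env:
        - name: BUCKET              # assumed env variable read by the consumer
          value: "Gold"
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: generic-consumer-gold
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: generic-consumer-gold
  minReplicas: 3
  maxReplicas: 5                    # per-tier budget, e.g. 10 for the platinum Deployment
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80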
You could also consider Istio Traffic Management for routing requests, but in my opinion the method with two separate Deployments is more suitable.

What to set .spec.replicas to for autoscaled deployments in kubernetes?

When creating a kubernetes deployment I set .spec.replicas to my minimum desired number of replicas. Then I create a horizontal pod autoscaler with minimum and maximum replicas.
The easiest way to do the next deployment is to use this same lower bound. When combining it with autoscaling, should I set replicas to the minimum desired, as used before, or should I get the current number of replicas and start from there? The latter would involve an extra roundtrip to the API, so avoiding it would be preferable if it's not needed.
There are two interpretations of your question:
1. You have an existing Deployment object and you want to update it - "deploy a new version of your app".
In this case you don't need to change either the replicas field in the Deployment object (it's managed by the Horizontal Pod Autoscaler) or the Horizontal Pod Autoscaler configuration. It will work out of the box; it's enough to change the important bits of the Deployment spec.
See the rolling update documentation for more details.
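For illustration, assuming the update is applied declaratively with kubectl apply: the HPA documentation also recommends removing spec.replicas from a manifest once an HPA manages the object, so that re-applying the manifest does not reset the replica count (the names and numbers below are placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                      # hypothetical name
spec:
  # spec.replicas intentionally omitted: the HPA below owns the replica count,
  # so re-applying this manifest will not override it
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: example.com/my-app:v2    # bump this to roll out a new version
---
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: my-app
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: my-app
  minReplicas: 2
  maxReplicas: 10
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 80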
2. You have an existing Deployment object and you want to create a second one with the same application.
If you create a separate application, it may have different load characteristics, so it's probable that the desired number of replicas will be different. In any case the HPA will adjust it relatively quickly, so IMO setting the initial number of replicas to the same number is not needed.