Consensus algorithm: what will happen if an odd cluster becomes even because of a node failure?

A consensus algorithm (e.g. Raft) requires the cluster to contain an odd number of nodes to avoid the split-brain problem.
Say I have a cluster of 5 nodes. What happens if only one node fails? The cluster now has 4 nodes, which breaks the odd-number rule; will the cluster continue to behave correctly?
One solution is to drop one more node so the cluster contains only 3 nodes, but what if the previously failed node comes back? Then the cluster has 4 nodes again, and we have to bring the dropped node back in order to keep the cluster odd.
Do implementations of consensus algorithms handle this problem automatically, or do I have to do it in my application code (for example, drop a node)?

Yes, the cluster will continue to work normally. A cluster of N nodes, where N is odd (N = 2k + 1), can tolerate k node failures. As long as a majority of nodes is alive, the cluster works normally. If one node fails and you still have a majority, everything is fine; only when you lose a majority of nodes do you have a problem.
There is no reason to force the cluster to have an odd number of nodes, and implementations don't treat this as a problem, so they don't handle it (e.g. by dropping nodes).
You can run a consensus algorithm on an even number of nodes, but it usually makes more sense to keep the number odd.
A 3-node cluster can tolerate 1 node failure (the majority is 2 nodes).
A 4-node cluster can tolerate 1 node failure (the majority is 3 nodes).
A 5-node cluster can tolerate 2 node failures (the majority is 3 nodes).
A 6-node cluster can tolerate 2 node failures (the majority is 4 nodes).
I hope this makes it clearer why an odd cluster size makes more sense: it tolerates the same number of node failures with fewer nodes in the cluster.
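If it helps, here is a minimal Python sketch of the majority arithmetic behind that table (nothing consensus-specific, just the math):

# A cluster of n nodes needs floor(n/2) + 1 votes for a majority,
# so it tolerates n - (floor(n/2) + 1) = floor((n - 1) / 2) failures.
for n in range(3, 8):
    majority = n // 2 + 1
    tolerated = n - majority
    print(f"{n}-node cluster: majority = {majority}, tolerates {tolerated} failure(s)")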

Related

OpenSearch: Data node costs

I don't understand the costs of having 1 data node vs having 2 or more data nodes.
Will I have the same cost regardless of the number of nodes?
If I have 2 data nodes, does that mean I will have double the cost of the instances?
Thanks
Depends on the instance size: i3.2xlarge would be ~2x more expensive than i3.xlarge.
If you use one instance size then yes, 2 nodes will be roughly 2x more expensive than 1 node, but you'll get more resilience (if one node goes down, your cluster can still take updates and serve data) and the ability to do rolling restarts.
Though, OpenSearch needs an odd number of nodes for master election to work reliably, so 3 smaller nodes might be better than 2 larger ones.

How to configure Kubernetes cluster autoscaler to scale down only?

I'd like to run the kubernetes cluster autoscaler so that unneeded nodes will be removed automatically, but I don't want the autoscaler to add nodes automatically. I prefer to handle scaling up myself. Is this possible?
I found maxNodesTotal, but I worry the semantics of setting this to 0 might mean all my nodes will go away. I also found scaleDownEnabled, but no corresponding option for scaling up.
The Kubernetes Cluster Autoscaler (CA) attempts to scale up whenever it identifies pending pods that are waiting to be scheduled but request more resources (CPU/RAM) than any available node can serve.
You can use the parameter maxNodesTotal to limit the maximum number of nodes the CA is allowed to spin up.
For example, if you don't want your cluster to consist of any more than 3 nodes during peak utilization, then you would set maxNodesTotal to 3.
There are different considerations that you should be aware of in terms of cost savings, performance and availability.
I will try to list some related to cost savings and efficient utilization, as I suspect you might be more interested in that aspect.
Make sure you size your pods consistently with their actual utilization, because scale-up is triggered by pods' resource requests, not by actual pod resource utilization.
Also, bigger pods are less likely to fit together on the same node, and in addition the CA won't be able to scale down any semi-utilized nodes, resulting in wasted spend.
Since you tagged this question with EKS, I will assume you are on AWS. On AWS the ASG (Auto Scaling Group) for each NodeGroup has a Max setting that is honoured by the cluster autoscaler. You can set this to prevent scaling above the set number of nodes. If the Min and Max on the ASG are the same value, then the autoscaler will never scale up or down. If the Min and Max are different, then the autoscaler can scale both up and down between those numbers of nodes. This is not exactly "never scale up", but it limits the upper end.
If you have multiple NodeGroups (ASGs), then each one can have different Min and Max nodes values.
You can also configure the cluster autoscaler itself in different ways. For example, you can set the utilization threshold: if a node's utilization falls below this threshold, the cluster autoscaler considers the node for scale-down. See the FAQ.
The FAQ entry above that one may also apply. You can add an annotation to any node you do not want the cluster autoscaler to consider for scale-down. Set: kubectl annotate node <nodename> cluster-autoscaler.kubernetes.io/scale-down-disabled=true, or annotate the nodes as they are created. You can do this with entries in your AWS node group setup.
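If you want to apply that annotation from a script rather than node by node, here is a minimal Python sketch using the official kubernetes client; the protect-from-scale-down label is a hypothetical convention of this example, not something the autoscaler itself knows about:

# Assumes `pip install kubernetes` and a reachable kubeconfig.
from kubernetes import client, config

config.load_kube_config()  # use config.load_incluster_config() when running inside the cluster
v1 = client.CoreV1Api()

ANNOTATIONS = {"cluster-autoscaler.kubernetes.io/scale-down-disabled": "true"}

# Annotate every node carrying the (hypothetical) protection label.
for node in v1.list_node(label_selector="protect-from-scale-down=true").items:
    v1.patch_node(node.metadata.name, {"metadata": {"annotations": ANNOTATIONS}})
    print(f"annotated {node.metadata.name}")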

Can we have more than 1024 nodes in Couchbase?

Disclaimer: I have just started with NoSQL.
As per my understanding, with multiple nodes the 1024 vBuckets will be divided evenly among the available nodes.
Say, in a 2-node system, 512 vBuckets will reside on each node.
Similarly, with 4 nodes, 256 vBuckets will reside on each node.
Extrapolating the same distribution, how will the system behave if a 1025th node is added to the cluster?
Couchbase has a fixed number of vBuckets; there will always be 1024. This also means that the maximum number of nodes a Couchbase cluster could have is 1024, which is roughly 10x bigger than the biggest clusters we have seen so far. (Yes, some clients have clusters with ~100 nodes in them.)
The advantage of sharding data into 1024 vBuckets is that you won't ever need to reshard your data (an expensive operation in MongoDB, for instance). It also makes Couchbase very easy to scale out (we just need to move some vBuckets to the new node) and very easy to recover from a node failure (we just need to guarantee the correct number of replicas of each vBucket).
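To make the distribution the question describes concrete, here is a small Python sketch of the arithmetic (illustrative only, not how Couchbase internally assigns vBuckets):

VBUCKETS = 1024

def vbuckets_per_node(n_nodes):
    # Each node owns roughly 1024 / n vBuckets; when 1024 is not divisible
    # by n, `extra` nodes own one additional vBucket.
    base, extra = divmod(VBUCKETS, n_nodes)
    return base, extra

for n in (2, 4, 10, 1024):
    base, extra = vbuckets_per_node(n)
    print(f"{n} nodes: ~{base} vBuckets per node ({extra} nodes hold one extra)")
# At 1024 nodes every node owns exactly one vBucket, which is why the
# cluster cannot usefully grow to a 1025th node.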

How does Databricks do autoscaling for a cluster?

I have a Databricks cluster set up with autoscaling up to 12 nodes.
I have often observed Databricks scaling the cluster from 6 to 8, then 8 to 11, and then 11 to 14 nodes.
So my queries:
1. Why is it picking 2-3 nodes to add in one go?
2. Why is autoscaling triggered when I see few active jobs and no heavy processing on the cluster? CPU usage is pretty low.
3. While autoscaling, why does it leave the notebook in a waiting state?
4. Why does it take up to 8-10 minutes to autoscale?
Thanks
I am trying to investigate why Databricks is autoscaling the cluster when it's not needed.
When you create a cluster, you can either provide a fixed number of workers for the cluster or provide a minimum and maximum number of workers for the cluster.
When you provide a fixed size cluster, Databricks ensures that your cluster has the specified number of workers. When you provide a range for the number of workers, Databricks chooses the appropriate number of workers required to run your job. This is referred to as autoscaling.
With autoscaling, Databricks dynamically reallocates workers to account for the characteristics of your job. Certain parts of your pipeline may be more computationally demanding than others, and Databricks automatically adds additional workers during these phases of your job (and removes them when they’re no longer needed).
Autoscaling makes it easier to achieve high cluster utilization, because you don’t need to provision the cluster to match a workload. This applies especially to workloads whose requirements change over time (like exploring a dataset during the course of a day), but it can also apply to a one-time shorter workload whose provisioning requirements are unknown. Autoscaling thus offers two advantages:
Workloads can run faster compared to a constant-sized under-provisioned cluster.
Autoscaling clusters can reduce overall costs compared to a statically-sized cluster.
Databricks offers two types of cluster node autoscaling: standard and optimized.
How autoscaling behaves
Autoscaling behaves differently depending on whether it is optimized or standard and whether applied to an interactive or a job cluster.
Optimized
Scales up from min to max in 2 steps.
Can scale down even if the cluster is not idle by looking at shuffle file state.
Scales down based on a percentage of current nodes.
On job clusters, scales down if the cluster is underutilized over the last 40 seconds.
On interactive clusters, scales down if the cluster is underutilized over the last 150 seconds.
Standard
Starts by adding 4 nodes. Thereafter, scales up exponentially, but can take many steps to reach the max.
Scales down only when the cluster is completely idle and has been underutilized for the last 10 minutes.
Scales down exponentially, starting with 1 node.
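The autoscale range itself is just part of the cluster definition. As a rough sketch, a cluster spec for the Databricks Clusters API would carry an autoscale block like the one below (the name, runtime label and instance type are illustrative placeholders, not recommendations):

import json

cluster_spec = {
    "cluster_name": "autoscaling-demo",    # hypothetical name
    "spark_version": "11.3.x-scala2.12",   # example runtime label
    "node_type_id": "i3.xlarge",           # example worker instance type
    "autoscale": {
        "min_workers": 6,                  # never shrink below this
        "max_workers": 12,                 # never grow beyond this
    },
    # A fixed-size cluster would instead set "num_workers": <N>
}

print(json.dumps(cluster_spec, indent=2))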

Hierarchical quorums in Zookeeper

I am trying to understand hierarchical quorums in ZooKeeper. The documentation here gives an example, but I am still not quite sure I understand it. My question is: if I have a two-node ZooKeeper cluster (I know it is not recommended, but let's consider it for the sake of this example),
server.1 and
server.2,
can I have hierarchical quorums as follows:
group.1=1:2
weight.1=2
weight.2=2
With the above configuration:
Even if one node goes down, do I still have enough votes to maintain a quorum? Is this a correct statement?
What is the ZooKeeper quorum value here (2, for two nodes, or 3, for 4 votes)?
In a second example, say I have:
group.1=1:2
weight.1=2
weight.2=1
In this case, if server.2 goes down, should I still have sufficient votes (weight 2) to maintain a quorum?
As far as I understand from the documentation, when you assign weights to the nodes, the majority is no longer counted in nodes but in weight: a set of servers forms a quorum only if their combined weight is strictly more than half of the group's total weight. For example, if 3 out of 10 nodes hold 70 percent of the total weight, it is enough to have just those three nodes active. Hence:
In your first example you don't have that majority. Both nodes have weight 2, so the total weight is 4 and a quorum needs more than 2. If one node goes down, only weight 2 (exactly half) remains, so quorum is not achieved; both nodes must be active, and the quorum value is 3 in terms of weight, not 2 nodes.
In the second example the total weight is 3, so a quorum needs more than 1.5. The node with weight 2 meets that on its own, so quorum is maintained even if server.2 (with weight 1) goes down.
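Here is a tiny Python sketch of that weighted-majority check, using the two example configurations from the question (just the arithmetic, not ZooKeeper's actual election code):

def has_quorum(weights, alive):
    # Quorum requires strictly more than half of the group's total weight.
    total = sum(weights.values())
    return 2 * sum(weights[s] for s in alive) > total

example1 = {"server.1": 2, "server.2": 2}
example2 = {"server.1": 2, "server.2": 1}

print(has_quorum(example1, {"server.1"}))  # False: weight 2 is exactly half of 4
print(has_quorum(example2, {"server.1"}))  # True: weight 2 > 1.5 (half of 3)
print(has_quorum(example2, {"server.2"}))  # False: weight 1 < 1.5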