In HA Kubernetes clusters we configure multiple control planes (master nodes), but how do those control planes keep their data in sync? When we create a pod with a kubectl command, the request goes through the cloud load balancer to one of the control planes. I want to understand how the other control planes sync their data with the one that received the new request.
First of all, please note that the API Server is the only component that talks directly to etcd.
Every change made to the Kubernetes cluster (e.g. kubectl create) creates an appropriate entry in the etcd database, and everything you get back from a kubectl get command is stored in etcd.
In this article you can find a detailed explanation of the communication between the API Server and etcd.
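As a quick illustration of that point, the sketch below reads the keys the API Server writes, straight from etcd with the official Go client. It assumes the default kube-apiserver key prefix of /registry and an etcd endpoint reachable without TLS on 127.0.0.1:2379 (a real cluster normally requires the apiserver's etcd client certificates), so treat it as a minimal sketch rather than something to run against production:

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	// Assumes an etcd endpoint reachable without TLS on 127.0.0.1:2379;
	// a real cluster usually requires the apiserver's etcd client certs.
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// kube-apiserver stores objects under the /registry prefix by default,
	// e.g. /registry/pods/<namespace>/<name>.
	resp, err := cli.Get(ctx, "/registry/pods/", clientv3.WithPrefix(), clientv3.WithKeysOnly())
	if err != nil {
		panic(err)
	}
	for _, kv := range resp.Kvs {
		fmt.Println(string(kv.Key))
	}
}
```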
Etcd uses the Raft protocol for leader election, and that leader handles all client requests which need cluster consensus (requests that do not require consensus can be processed by any cluster member):
etcd is built on the Raft consensus algorithm to ensure data store consistency across all nodes in a cluster—table stakes for a fault-tolerant distributed system.
Raft achieves this consistency via an elected leader node that manages replication for the other nodes in the cluster, called followers. The leader accepts requests from the clients, which it then forwards to follower nodes. Once the leader has ascertained that a majority of follower nodes have stored each new request as a log entry, it applies the entry to its local state machine and returns the result of that execution—a ‘write’—to the client. If followers crash or network packets are lost, the leader retries until all followers have stored all log entries consistently.
More information about etcd and raft consensus algorithm can be found in this documentation.
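To make the majority rule from the quoted passage concrete, here is a purely illustrative Go sketch (not etcd's real code) of when a leader may commit an entry: it counts its own copy plus follower acknowledgements and commits once a quorum of the voting members has persisted the entry.

```go
package main

import "fmt"

// cluster models only the size of the voting membership; nothing else
// from Raft (terms, logs, elections) is represented here.
type cluster struct {
	size int // total number of voting members
}

// quorum is the smallest majority, e.g. 2 of 3, 3 of 5.
func (c cluster) quorum() int { return c.size/2 + 1 }

// canCommit reports whether an entry acknowledged by followerAcks
// followers (plus the leader's own copy) may be applied.
func (c cluster) canCommit(followerAcks int) bool {
	return followerAcks+1 >= c.quorum()
}

func main() {
	c := cluster{size: 3}
	fmt.Println(c.canCommit(0)) // false: only the leader has the entry
	fmt.Println(c.canCommit(1)) // true: leader + 1 follower = 2 of 3
}
```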
To my knowledge, etcd uses Raft as a consensus and leader selection algorithm to maintain a leader that is in charge of keeping the ensemble of etcd nodes in sync with data changes within the etcd cluster. Among other things, this allows etcd to recover from node failures in the cluster where etcd runs.
But what about etcd managing other clusters, i.e. clusters other than the one where etcd runs?
For example, say we have an etcd cluster and, separately, a DB (e.g. MySQL or Redis) cluster consisting of master (read/write) node(s) and (read-only) replicas. Can etcd manage node roles for this other cluster?
More specifically:
Can etcd elect a leader for clusters other than the one running etcd and make that information available to other clusters and nodes?
To make this more concrete, using the example above, say a master node in the MySQL DB cluster goes down. Note again that the master and replicas for the MySQL DB are running on a different cluster from the nodes running and hosting etcd data.
Does etcd provide capabilities to detect this type of node failure on clusters other than its own automatically? If yes, how is this done in practice (e.g. for a MySQL DB or any other cluster where nodes can take on different roles)?
After detecting such failure, can etcd re-arrange node roles in this separate cluster (i.e. designate new master and replicas), and would it use the Raft leader selection algorithm for this as well?
Once it has done so, can etcd also notify the client (application) nodes that depend on this DB so they can reconfigure accordingly?
Finally, does any of the above require Kubernetes? Or can etcd manage external clusters all on its own?
In case it helps, here's a similar question for ZooKeeper.
etcd's master election is strictly for electing a leader for etcd itself.
Other clusters, however, can use a distributed, strongly consistent key-value store (such as etcd) to implement their own failure detection and leader election, and to allow clients of that cluster to respond.
Etcd doesn't manage clusters other than its own. It's not magic awesome sauce.
If you want to use etcd to manage a mysql cluster, you will need a component which manages the mysql nodes and stores cluster state in etcd. Clients can watch for changes in that cluster state and adjust.
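A minimal sketch of such a component, using etcd's official Go client and its concurrency package; the key prefix /mysql-cluster/primary, the node name mysql-node-1, and the plaintext endpoint are made-up assumptions for illustration. Each candidate node would run a small manager like this, and clients would observe the election key to learn who the current primary is.

```go
package main

import (
	"context"
	"fmt"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
	"go.etcd.io/etcd/client/v3/concurrency"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// The session's lease acts as the failure detector: if this manager
	// process dies, the lease expires and its claim on the key disappears.
	sess, err := concurrency.NewSession(cli, concurrency.WithTTL(10))
	if err != nil {
		panic(err)
	}
	defer sess.Close()

	// Each candidate MySQL node runs a manager like this one and campaigns
	// under the same (hypothetical) key prefix.
	e := concurrency.NewElection(sess, "/mysql-cluster/primary")
	if err := e.Campaign(context.Background(), "mysql-node-1"); err != nil {
		panic(err)
	}
	fmt.Println("won the election; promote mysql-node-1 to primary here")

	// Clients (or the other managers) observe leadership changes and
	// repoint their connections accordingly.
	for resp := range e.Observe(context.Background()) {
		if len(resp.Kvs) > 0 {
			fmt.Println("current primary:", string(resp.Kvs[0].Value))
		}
	}
}
```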
I want to understand the possible impact of a master node failure in a k8s cluster that has only one master node with an internal etcd store.
As per my understanding, all deployed workload containers (including stateless workloads and StatefulSets with persistent volume claims) running on worker nodes would keep running until any container needs to be recreated, since they have no direct functional dependency on the master node and the etcd store for their core functions, and the unavailability of the master node only affects control-plane operations for the cluster.
Is my understanding correct? If not, could you please explain the impact of the master node failure on my workload running on that cluster?
I understand that the best way to achieve HA for a k8s cluster is to set up a multi-master cluster, possibly also externalizing the etcd stores to decouple them. This question is about understanding the exact impact of a master node failure so I can make an informed decision before configuring a multi-master cluster.
Etcd operates on a quorum system, so as long as the cluster sees a majority it will continue operating. If the failed node was the current leader, the others will trigger an election after the heartbeat timeout.
For kube-apiserver, it's a horizontally scaled service, so losing a node is not interesting, just like any other webapp. Some (most) controllers are singletons, but they run on every control plane node and use kube-apiserver for leader election, so as with etcd, if the leader dies then a few seconds later another copy will grab the leader lock and take over.
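For reference, this is roughly how such control-plane singletons take the leader lock through kube-apiserver: a sketch using client-go's leaderelection helper with a Lease lock. The lease name example-controller, the identity string, and the timings are illustrative assumptions, not what kube-controller-manager ships with.

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
	"k8s.io/klog/v2"
)

func main() {
	cfg, err := rest.InClusterConfig()
	if err != nil {
		klog.Fatal(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Lease object acts as the lock; whichever replica keeps renewing it
	// is the active copy, the rest stay on hot standby.
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "example-controller", Namespace: "kube-system"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "replica-on-this-node"},
	}

	leaderelection.RunOrDie(context.Background(), leaderelection.LeaderElectionConfig{
		Lock:            lock,
		LeaseDuration:   15 * time.Second, // roughly the failover window after a leader dies
		RenewDeadline:   10 * time.Second,
		RetryPeriod:     2 * time.Second,
		ReleaseOnCancel: true,
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) {
				klog.Info("acquired the lock, running controller loops")
			},
			OnStoppedLeading: func() {
				klog.Info("lost the lock, another replica takes over")
			},
		},
	})
}
```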
When I have one etcd cluster separate from the masters, with one etcd node per region (three regions), do the masters write to every etcd node at the same time, or is there one active node with the other nodes on standby? Do the nodes exchange data between themselves?
The main mechanism by which etcd stores key-value data across a Kubernetes cluster is based on the Raft consensus algorithm. It is a comprehensive way of distributing configuration, state, and metadata information within a cluster and of monitoring for any changes to that data.
Assuming that the master node hosts all the core components, it plays the role of the main contributor to managing the etcd database, and the etcd leader is responsible for keeping a consistent state across the other etcd members according to the distributed consensus algorithm based on a quorum model.
However, a single-master configuration does not make the cluster resilient to every possible outage; a multi-master setup is therefore a more effective way to achieve high availability for the etcd storage, as it provides a consistent set of etcd member replicas distributed across separate nodes.
In order to ensure data reliability, it is important to periodically back up the etcd cluster via the built-in etcdctl command-line tool or by taking a snapshot of the volume where the etcd storage is located.
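The same backup can also be taken programmatically; here is a minimal sketch using the Go client's snapshot API, assuming a plaintext endpoint on 127.0.0.1:2379 and a local output file named etcd-backup.db (a secured cluster needs the etcd client certificates, exactly as etcdctl does).

```go
package main

import (
	"context"
	"io"
	"os"
	"time"

	clientv3 "go.etcd.io/etcd/client/v3"
)

func main() {
	cli, err := clientv3.New(clientv3.Config{
		Endpoints:   []string{"127.0.0.1:2379"},
		DialTimeout: 5 * time.Second,
	})
	if err != nil {
		panic(err)
	}
	defer cli.Close()

	// Stream a point-in-time copy of the keyspace to a local file.
	rc, err := cli.Snapshot(context.Background())
	if err != nil {
		panic(err)
	}
	defer rc.Close()

	f, err := os.Create("etcd-backup.db")
	if err != nil {
		panic(err)
	}
	defer f.Close()

	if _, err := io.Copy(f, rc); err != nil {
		panic(err)
	}
}
```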
You might be able to find more specific information about etcd in the relevant GitHub project documentation.
For HA and quorum I will install three master/etcd nodes in three different data centers.
But I want to configure one node to never become a leader; it should only act as a follower for etcd quorum.
Is this possible?
I believe that, today, this is not a supported option and is not recommended.
What you want is a 3-node control plane (including etcd) where one of the nodes participates in leader election but never becomes leader and shouldn't store data. You are looking for some kind of arbiter feature like the one that exists in a MongoDB HA cluster.
Such an arbiter feature is not supported in etcd. You might need to raise a PR to get that addressed.
The controller manager and scheduler always connect to the local apiserver. You might want to route those calls to the apiserver on the active master. You might need to open another PR with the Kubernetes community to get that addressed.
Here is an example of clustered-durable-subscription and here is clustered-static-discovery. In clustered-static-discovery we connect to only one server (the cluster automatically connects to the other servers using the cluster configuration).
As per the doc:
Normally durable subscriptions exist on a single node and can only have one subscriber at any one time, however, with ActiveMQ Artemis it's possible to create durable subscription instances with the same name and client-id on different nodes of the cluster, and consume from them simultaneously. This allows the work of processing messages from a durable subscription to be spread across the cluster in a similar way to how JMS Queues can be load balanced across the cluster
Do I need to add additional configuration for the static cluster, or will the durable subscription work fine with a static cluster without setting the client ID and subscription name on every node? (As I mentioned, with the static cluster we only make a connection to one node.)
The "static" part of the "clustered-static-discovery" really only refers to cluster node discovery (as the name suggests). Once the cluster nodes are discovered and the cluster is formed then the cluster will behave the same as if the discovery were dynamic (e.g. using UDP multicast). In other words, a clustered durable subscription should work the same no matter what mechanism was used on the server-side for cluster node discovery.