Kubernetes: change multi-master nodes to a single master node on failure

Is there any way to convert/change multi-master nodes (3 masters, HA & LB) to a single master in a stacked etcd configuration?
With 3 master nodes, the cluster only tolerates 1 failure, right?
So if 2 of these master nodes go down, the control plane won't work.
What I need to do is convert these 3 masters to a single master. Is there any way to do this to minimize the downtime of the control plane (in case the other 2 masters need some time to come back up)?
The test I've done:
I've tried to restore an etcd snapshot to a completely different environment with a new setup of 1 master & 2 workers, and it seems to work fine: the status of the other 2 master nodes is NotReady, the 2 worker nodes are Ready, and requests to the api-server work normally.
But if I restore the etcd snapshot to the original environment, after resetting the last master node with kubeadm reset, the cluster seems to be broken: the status of the 2 workers is NotReady, and it looks like they have different certificates.
Any suggestion on how to make this work?
UPDATE: apparently I can restore the etcd snapshot directly without running "kubeadm reset"; even after a reset, as long as the certificates are updated, the cluster can be restored successfully.
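For reference, a minimal sketch of the restore I'm describing, assuming etcdctl v3, a snapshot at /backup/etcd-snapshot.db, and kubeadm's default paths (the file names and MASTER1_IP are illustrative):

# stop the static control-plane pods so nothing writes to etcd during the restore
mv /etc/kubernetes/manifests /etc/kubernetes/manifests.bak

# restore the snapshot into a fresh data directory (the target dir must be empty)
ETCDCTL_API=3 etcdctl snapshot restore /backup/etcd-snapshot.db \
  --name master1 \
  --initial-cluster master1=https://MASTER1_IP:2380 \
  --initial-advertise-peer-urls https://MASTER1_IP:2380 \
  --data-dir /var/lib/etcd

# if kubeadm reset was run, regenerate the control-plane certificates and kubeconfigs first
kubeadm init phase certs all
kubeadm init phase kubeconfig all

# bring the static pods back
mv /etc/kubernetes/manifests.bak /etc/kubernetes/manifests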
BUT now I've run into a different issue. After restoring the etcd snapshot everything works fine, so basically I want to add a new control plane node to this cluster. The current node status is:
master1   Ready
master2   NotReady
master3   NotReady
Before I add the new CP, I removed the 2 failed master nodes from the cluster. After removing them I tried to join the new CP to the cluster, and the join process got stuck at:
[etcd] Waiting for the new etcd member to join the cluster. This can take up to 40s
[kubelet-check] Initial timeout of 40s passed.
Now the original master node is broken again and I can't access the api-server. Do you guys have any idea what's going wrong?
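For context, a sketch of what removing the failed control-plane nodes can look like in a stacked-etcd cluster; besides deleting the Kubernetes node objects, the dead members also have to be removed from etcd itself (the pod name, node names, member ID, and certificate paths below assume kubeadm defaults and are illustrative):

kubectl delete node master2 master3

# list the etcd members via the surviving etcd static pod
kubectl exec -n kube-system etcd-master1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member list

# remove the members that belonged to master2 and master3
kubectl exec -n kube-system etcd-master1 -- etcdctl \
  --endpoints=https://127.0.0.1:2379 \
  --cacert=/etc/kubernetes/pki/etcd/ca.crt \
  --cert=/etc/kubernetes/pki/etcd/server.crt \
  --key=/etc/kubernetes/pki/etcd/server.key \
  member remove <MEMBER_ID>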

Related

Slave nodes without master running

I have 1 master node and 2 slave nodes on JBoss EAP 7.
All 3 nodes are currently running. In this situation:
- If the master node goes down suddenly, will the slave nodes still be able to run on their own and work normally? What is the disadvantage of running the slave nodes alone in this scenario? Is this possible?
- If one of the slave nodes goes down while the master is down, can that slave node be brought back up independently and work normally?
If the DC (master node) goes down, the slave nodes will still be up & running.
The problem with the slave nodes is that they cannot be restarted unless the master node (DC) is up. Bear in mind that the DC holds the whole DOMAIN configuration. That's why it is very important to consider the HA of the Domain Controller from the beginning.

Does kubernetes restore the worker node if worker node dies?

I am creating a kubernetes cluster which includes 1 master node (M1) and 2 worker nodes (W1 and W2).
I'm using a deployment to create pods with a replica count of 5.
If a pod dies, kubernetes re-creates it, so the count remains 5.
Let's suppose the worker node W2 dies for some reason.
In this case, will kubernetes create a new node, or just run all the replicas on the remaining node W1?
If I want to restore the dead node automatically, how can I do that?
This mostly depends on how you deployed things. Most cloud-integrated installers and hosted providers (GKE, EKS, AKS, Kops) use a node group of some kind so a fully failed node (machine terminated) would be replaced at that level. If the node is up but jammed, that would generally be solved by cluster-autoscaler starting a new node for you. Some installers that don't make per-cloud assumptions (Kubespray, etc) leave this up to you to handle yourself.
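As for the pod side of the question, the rescheduling of the 5 replicas onto the surviving node can be observed directly with kubectl; a minimal sketch, assuming the deployment's pods carry an illustrative label app=my-app:

kubectl get nodes                        # W2 shows NotReady after it dies
kubectl get pods -o wide -l app=my-app   # after the eviction timeout (roughly 5 minutes
                                         # by default) the replicas that were on W2 are
                                         # recreated on W1, so the count stays at 5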

Behaviour of Multi node Kubernetes cluster with a single master when the master goes down?

What would be the behavior of a multi-node kubernetes cluster if it only has a single master node and that node goes down?
The control plane would be unavailable. Existing pods would continue to run; however, calls to the API wouldn't work, so you wouldn't be able to make any changes to the state of the system. Additionally, self-repair mechanisms like pods being restarted on failure would not happen, since that functionality lives in the control plane as well.
You wouldn't be able to create or query kubernetes objects (pods, deployments, etc.) since the required control plane components (api-server and etcd) are not running.
Existing pods on the worker nodes will keep running. If a pod crashes, the kubelet on that node will restart it as well.
If a worker node goes down while the master is down, even the pods created by controllers like a deployment/replicaset won't be re-scheduled to a different node, since the controller-manager (a control plane component) is not running.
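To illustrate "calls to the API wouldn't work": with the single master powered off, any kubectl command against it simply fails to connect (the server address below is illustrative):

kubectl get pods
# The connection to the server 10.0.0.10:6443 was refused - did you specify the right host or port?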

kubernetes - can we create 2 node master-only cluster with High availability

I am new to Kubernetes and clustering.
I would like to bring up a High Availability, master-only Kubernetes cluster (worker nodes are not needed).
I have 2 instances/servers running the Kubernetes daemons, and different kinds of pods are running on both nodes.
Now I would like to somehow create the cluster so that if one of the hosts (host 2) goes down, all the pods from that host move to the other host (host 1).
Once host 2 comes back up, the pods should float back.
Please let me know if there is any way I can achieve this.
Since your requirement is to have a 2-node, master-only cluster with HA capabilities, unfortunately there is no straightforward way to achieve it.
The reason is that a 2-node, master-only cluster deployed by kubeadm has only 2 etcd pods (one on each node). This gives you no fault tolerance: if one of the nodes goes down, the etcd cluster loses quorum and the remaining k8s master won't be able to operate.
Now, if you were OK with having an external etcd cluster where you can maintain an odd number of etcd members, then yes, you can have a 2-node k8s cluster and still have HA capabilities.
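A minimal sketch of what pointing kubeadm at such an external etcd cluster can look like, assuming the v1beta3 kubeadm config API; the etcd endpoints, load balancer address, and client certificate paths are illustrative:

cat > kubeadm-config.yaml <<EOF
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
controlPlaneEndpoint: "LOAD_BALANCER_DNS:6443"
etcd:
  external:
    endpoints:
      - https://etcd0.example.com:2379
      - https://etcd1.example.com:2379
      - https://etcd2.example.com:2379
    caFile: /etc/kubernetes/pki/etcd/ca.crt
    certFile: /etc/kubernetes/pki/apiserver-etcd-client.crt
    keyFile: /etc/kubernetes/pki/apiserver-etcd-client.key
EOF

sudo kubeadm init --config kubeadm-config.yaml --upload-certs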
It is possible for a master node to also serve as a worker node, however it is not advisable in production environments, mainly for performance reasons.
By default, kubeadm configures the master node so that no workload can run on it; only the regular nodes added later are able to handle workloads. But you can easily override this default behaviour.
In order to enable workloads to be scheduled on the master node as well, you need to remove the following taint from it, which is added by default:
kubectl taint nodes --all node-role.kubernetes.io/master-
To install and configure a multi-master kubernetes cluster you can follow this tutorial. It describes a scenario with 3 master nodes, but you can easily customize it to your needs; the join step for the additional masters is sketched below.
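The part of that setup not shown above is joining the additional control-plane nodes; a sketch, where the address, token, hash, and certificate key are placeholders printed by kubeadm init --upload-certs on the first master:

# on each additional control-plane node
sudo kubeadm join LOAD_BALANCER_DNS:6443 --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash> \
  --control-plane --certificate-key <key>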

Kubernetes cluster without kubernetes Master

I have a kubernetes cluster working fine. I have one master node and 5 worker nodes, and all of them are running pods. When all the nodes are up and the kubernetes master goes down / is powered off, will the worker nodes keep working as normal?
If the master node is down and one of the worker nodes also goes down and then comes back online after some time, will the pods automatically be started on that worker while the master node is still down?
When all the nodes are up and the kubernetes master goes down / is powered off, will the worker nodes keep working as normal?
Yes, they will keep working in their last state.
If the master node is down and one of the worker nodes also goes down and then comes back online after some time, will the pods automatically be started on that worker while the master node is still down?
No.
As you can read in the Kubernetes Components section of the documentation:
Master components provide the cluster’s control plane. Master components make global decisions about the cluster (for example, scheduling), and detecting and responding to cluster events (starting up a new pod when a replication controller’s ‘replicas’ field is unsatisfied).