Kubernetes node without master - kubernetes

The cluster consists of one master and one worker node. If the master is down and the worker is restarted, no workloads (deployments) are started on boot. Is it possible, and if so how, to make the worker resume its last state without the master?
Kubernetes 1.18.3
The worker node has kubelet, kubectl and kubeadm installed.

Ideally you should have more than one node (typically an odd number like 3 or 5) serving as master, accessible from the worker nodes via a load balancer.
The state is stored in ETCD which is accessed by worker nodes via the API Server. So without master nodes running there is no way for workers to know the desired state.
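For reference, a rough sketch of how kubeadm bootstraps such an HA control plane (the load balancer address and the token/hash/key values are placeholders you would substitute):

# on the first master, pointing the cluster at the load balancer in front of the API servers
kubeadm init --control-plane-endpoint "LOAD_BALANCER_DNS:6443" --upload-certs
# on each additional master
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash> --control-plane --certificate-key <key>
# on each worker
kubeadm join LOAD_BALANCER_DNS:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>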
Although it's not recommended, you can use static Pods as a potential solution here. Static Pods are managed directly by the kubelet daemon on a specific node, without the API server observing them. Unlike Pods that are managed by the control plane (for example, through a Deployment), the kubelet itself watches each static Pod and restarts it if it crashes.
The caveat is that, since static Pods do not depend on the API server, they cannot be managed with kubectl or other Kubernetes API clients.
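A minimal sketch of a static Pod, assuming the kubeadm default staticPodPath of /etc/kubernetes/manifests; the name and image are only placeholders:

# /etc/kubernetes/manifests/static-web.yaml (placed on the worker node itself)
apiVersion: v1
kind: Pod
metadata:
  name: static-web
  labels:
    role: myrole
spec:
  containers:
  - name: web
    image: nginx:1.19    # any image the node can pull or already has cached
    ports:
    - containerPort: 80

The kubelet picks this file up on boot and keeps the Pod running even while the API server is unreachable.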

Related

Worker node in kops cluster goes to not ready state as load increases

I deployed my frontend and backend applications in my kops cluster on AWS EC2 with a master size of t2.medium. When I increase the load on my applications, both of my worker nodes go into a NotReady state and the pods change to Pending.
How can I resolve this issue? My cluster is in production at the moment.
You should first run kubectl get events -n default to see why the nodes go into NotReady.
Usually your cluster is overloaded. Try using the cluster autoscaler to dynamically manage your cluster capacity. Also ensure you have proper resource requests on your Pods.
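A minimal sketch of resource requests and limits on a container (names and values are only illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
  - name: app
    image: my-backend:1.0    # placeholder image
    resources:
      requests:
        cpu: "250m"          # what the scheduler reserves on a node for this container
        memory: "256Mi"
      limits:
        cpu: "500m"          # hard caps enforced at runtime
        memory: "512Mi"

With requests set, the scheduler keeps Pods in Pending instead of overcommitting a node, and the cluster autoscaler can use them to decide when to add capacity.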

Controlling pods kubelet vs. controller in control plane

I'm a little confused. I've been ramping up on Kubernetes and reading about all the different objects: ReplicaSet, Deployment, Service, Pod, etc.
In the documentation it mentions that the kubelet manages liveness and readiness checks which are defined in our ReplicaSet manifests.
Reference: https://kubernetes.io/docs/tasks/configure-pod-container/configure-liveness-readiness-startup-probes/
If this is the case does the kubelet also manage the replicas? Or does that stay with the controller?
Or do I have it all wrong and it's the kubelet that is creating and managing all these resources on a pod?
Thanks in advance.
Basically, the kubelet is the "node agent" that runs on each node. It gets notified through the kube-apiserver, then starts containers through the container runtime; it works in terms of PodSpecs. It ensures that the containers described in those PodSpecs are running and healthy.
The flow of kubelet tasks is like: kube apiserver <--> kubelet <--> CRI
To check whether a Pod is running and healthy, the kubelet uses the liveness probe; if the probe fails, the kubelet restarts the container.
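A minimal sketch of an HTTP liveness probe (the path and port here match nginx; point them at your own app's health endpoint):

apiVersion: v1
kind: Pod
metadata:
  name: liveness-demo
spec:
  containers:
  - name: web
    image: nginx:1.19
    livenessProbe:
      httpGet:
        path: /              # health endpoint to poll
        port: 80
      initialDelaySeconds: 5 # wait before the first probe
      periodSeconds: 10      # probe every 10s; on repeated failures the kubelet restarts the container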
The kubelet does not maintain replicas; replicas are maintained by a ReplicaSet. As the Kubernetes docs say: "A ReplicaSet's purpose is to maintain a stable set of replica Pods running at any given time. As such, it is often used to guarantee the availability of a specified number of identical Pods."
See the ReplicaSet documentation for more.
For more info you can see: kubelet
When starting your journey with Kubernetes it is important to understand its main components for both Control Planes and Worker Nodes.
Based on your question we will focus on two of them:
kube-controller-manager:
Logically, each controller is a separate process, but to reduce
complexity, they are all compiled into a single binary and run in a
single process.
Some types of these controllers are:
Node controller: Responsible for noticing and responding when nodes go down.
Job controller: Watches for Job objects that represent one-off tasks, then creates Pods to run those tasks to completion.
Endpoints controller: Populates the Endpoints object (that is, joins Services & Pods).
Service Account & Token controllers: Create default accounts and API access tokens for new namespaces.
kubelet:
An agent that runs on each node in the cluster. It makes sure that
containers are running in a Pod.
The kubelet takes a set of PodSpecs that are provided through various
mechanisms and ensures that the containers described in those PodSpecs
are running and healthy. The kubelet doesn't manage containers which
were not created by Kubernetes.
So answering your question:
If this is the case does the kubelet also manage the replicas? Or does
that stay with the controller?
No, replication can be managed by a ReplicationController, a ReplicaSet or, preferably, a Deployment. The kubelet runs on Nodes and makes sure that the Pods are running according to their PodSpecs.
You can find the synopses for kubelet and kube-controller-manager in the linked docs.
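For illustration, a minimal Deployment (names and image are placeholders): the Deployment's ReplicaSet keeps 3 Pods in existence, while each node's kubelet only runs and health-checks the containers of the Pods scheduled onto that node.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 3                # maintained by the controller, not by the kubelet
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
      - name: web
        image: nginx:1.19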
EDIT:
There is one exception however in a form of Static Pods:
Static Pods are managed directly by the kubelet daemon on a specific
node, without the API server observing them. Unlike Pods that are
managed by the control plane (for example, a Deployment); instead, the
kubelet watches each static Pod (and restarts it if it fails).
Note, however, that this does not give you multiple replicas.

Are the master and worker nodes the same node in case of a single node cluster?

I started a minikube cluster (single node cluster) on my local machine with the command:
minikube start --driver=virtualbox
Now, when I execute the command:
kubectl get nodes
it returns:
NAME       STATUS   ROLES    AGE     VERSION
minikube   Ready    master   2m59s   v1.19.0
My question is: since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
The answer to your question is yes: in your case the master node is itself a worker node.
Cluster: the group of VMs or physical machines.
Master: the node where the control plane components are installed, such as etcd, the controller-manager and the api-server, which are needed to control the state of the whole cluster. As a best practice, in big production clusters you should never use a master node to schedule application workloads.
Worker node: a plain VM where the docker and kubernetes packages are installed but not the control plane components. Normally worker nodes are used to handle your application workloads.
If you have only one machine on which you configure Kubernetes, it becomes a single-node cluster and that machine acts as both master and worker.
I hope this helps you understand.
since the cluster has only one node and according to the previous command it is a master node, what is the worker node? Are the master and worker nodes the same node in case of a single node cluster?
Yes, using Minikube, you only use a single node. And your workload is scheduled to execute on the same node.
Typically, taints and tolerations are used on master nodes to prevent workloads from being scheduled onto those nodes.
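For example, on a kubeadm-built cluster you can inspect (and, on a single-node setup, remove) that taint like this; minikube normally does not set it, which is why workloads schedule onto its only node out of the box:

kubectl describe node <node-name> | grep Taints
# typically shows node-role.kubernetes.io/master:NoSchedule on a master, or <none>
kubectl taint nodes <node-name> node-role.kubernetes.io/master-
# the trailing "-" removes the taint, allowing regular Pods to be scheduled there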

How is High Availability Master selected?

So I just started with Kubernetes and wanted to know: if I create multiple masters, how is the scheduling of pods done, and if a master goes down, what happens to the worker nodes connected to it?
How is High Availability Master selected?
The etcd database underneath is where most of the high availability comes from. It uses an implementation of the raft protocol for consensus.
etcd requires a quorum of N/2 + 1 instances to be available for Kubernetes to be able to write updates to the cluster. If fewer than a quorum are available, etcd goes into read-only mode, which means nothing new can be scheduled.
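A quick worked example of the quorum rule (quorum = floor(N/2) + 1):

3 etcd members -> quorum 2, tolerates 1 failed member
5 etcd members -> quorum 3, tolerates 2 failed members
2 etcd members -> quorum 2, tolerates 0 failures (even member counts add no fault tolerance, hence the odd numbers)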
kube-apiserver will run on multiple nodes in active/active mode. All instances use the same etcd cluster so present the same data. The worker nodes will need some way to load balance / failover to the available apiservers. The failover requires a component outside of kubernetes, like HAProxy or a load balancer device (like AWS provides).
kube-scheduler will run on multiple master nodes and should access the local instance of kube-apiserver. The scheduler will elect a leader that locks the data it manages. The current leader information can be found in the endpoint:
kubectl -n kube-system get endpoints kube-scheduler \
-o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
kube-controller-manager will run on multiple master nodes and should access the local instance of kube-apiserver. The controllers will elect a leader that locks the data it manages. Leader information can be found in the endpoint:
kubectl -n kube-system get endpoints kube-controller-manager \
-o jsonpath='{.metadata.annotations.control-plane\.alpha\.kubernetes\.io/leader}'
if the master goes down what happens to the worker nodes connected to it?
They continue running in their current state. No new Pods will be scheduled and no changes to the existing state of the cluster will be pushed out. Your Pods will continue to run until they fail in a way the local kubelet can't recover from.

How to start with kubernetes?

I have two IPs, one for a master node and one for a worker node. I need to deploy some services using these. I don't know anything about Kubernetes. What are a master node and a worker node?
How do I start?
You should start from the very basics.
Kubernetes concept page is your starting point.
The Kubernetes Master is a collection of three processes that run on a
single node in your cluster, which is designated as the master node.
Those processes are: kube-apiserver, kube-controller-manager and
kube-scheduler.
Each individual non-master node in your cluster runs
two processes: kubelet, which communicates with the Kubernetes Master.
kube-proxy, a network proxy which reflects Kubernetes networking
services on each node.
Regarding your question in the comment: read Organizing Cluster Access Using kubeconfig Files. Make sure you have the kubeconfig file in the right place.
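A quick sketch, assuming a kubeadm-built cluster where the admin kubeconfig lives at /etc/kubernetes/admin.conf on the master:

mkdir -p $HOME/.kube
sudo cp /etc/kubernetes/admin.conf $HOME/.kube/config    # copy it from the master node
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get nodes                                        # kubectl reads ~/.kube/config by default
# or point kubectl at a specific file instead:
export KUBECONFIG=/path/to/your/kubeconfig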