Stop scheduling pods on Kubernetes master

For testing purposes, I enabled pod scheduling on the Kubernetes master node with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
Now I have added a worker node to the cluster and I would like to stop scheduling pods on the master. How do I do that?

You simply taint the node again.
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
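You can then verify that the taint is back (a sketch, assuming the master node is named master, as above):
$ kubectl describe node master | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule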

Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Even with a taint placed on the master node, you can specify a toleration for a pod in the PodSpec, and the pod will be able to schedule onto the master node:
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
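For context, a minimal pod manifest with that toleration in place might look like this (the pod name and image are placeholders):
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
  - name: demo
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule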
To learn more, see Taints and Tolerations.

Related

Create same master and worker node in Kubernetes

I am preparing a dev environment and want to make a single host act as both master and worker node for Kubernetes.
How can I achieve my goal?
The difference between a master node and a worker node is that "regular pods cannot be scheduled on a master node because of a taint".
You just need to remove the node-role.kubernetes.io/master:NoSchedule taint so that pods can be scheduled on that (master) node.
The command is as follows:
kubectl taint nodes <masternodename> node-role.kubernetes.io/master:NoSchedule-
The master node is responsible for running several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. [1]
The worker nodes are the parts of the Kubernetes cluster that actually execute the containers and applications on them. [1]
Worker nodes are generally more powerful than master nodes because they have to run hundreds of containers. However, master nodes hold more significance because they manage the distribution of workload and the state of the cluster. [1]
By removing the taint you will be able to schedule pods on that node.
You should first check the current taint by running:
kubectl describe node <nodename> | grep Taints
If the node in question is a master node, remove that taint by running:
kubectl taint node <mastername> node-role.kubernetes.io/master:NoSchedule-
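As a sketch, assuming the master node is named master-0, the whole sequence might look like this:
$ kubectl describe node master-0 | grep Taints
Taints: node-role.kubernetes.io/master:NoSchedule
$ kubectl taint node master-0 node-role.kubernetes.io/master:NoSchedule-
node/master-0 untainted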
References:
[1] - What is Kubernetes cluster? What are worker and master nodes?
See also:
Creating a cluster with kubeadm
These four similar questions:
Master tainted - no pods can be deployed
Remove node-role.kubernetes.io/master:NoSchedule taint
Allow scheduling of pods on Kubernetes master?
Are the master and worker nodes the same node in case of a single node cluster?
Taints and Tolerations
You have to remove the NoSchedule taint from the MASTER node.
I just spun up a kubeadm node and the taint key is node-role.kubernetes.io/control-plane, not node-role.kubernetes.io/master.
So I did the following (sydney is the node name):
$ kubectl describe node sydney | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
$ kubectl taint nodes sydney node-role.kubernetes.io/control-plane:NoSchedule-
node/sydney untainted
$ kubectl describe node sydney | grep Taints
Taints: <none>

Kubernetes cordon node command

If I cordon a node, I make it unschedulable, so this command applies a taint on the node to mark it NoSchedule with a Kubernetes-specific key. But when I create a taint with the NoSchedule effect and another key/value, e.g. env=production, and create a toleration on a pod to match this key and the NoSchedule effect, the pod still won't be scheduled on this node. Why is that?
Maybe the cordon command somehow internally marks the node as unschedulable and does not only apply a taint?
P.S. After running kubectl uncordon <node> the toleration worked.
You are right that cordon does more than apply a taint: it sets the node's spec.unschedulable field to true and adds the node.kubernetes.io/unschedulable:NoSchedule taint. Your toleration for env=production does not tolerate that taint, and a pod must tolerate all of a node's taints to be scheduled. So once you have cordoned the node, Kubernetes won't schedule pods on it unless and until you uncordon it:
kubectl uncordon <node>
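A quick sketch of what cordon leaves behind on a node (the node name worker-1 is a placeholder):
$ kubectl cordon worker-1
node/worker-1 cordoned
$ kubectl describe node worker-1 | grep Taints
Taints: node.kubernetes.io/unschedulable:NoSchedule
$ kubectl get node worker-1 -o jsonpath='{.spec.unschedulable}'
true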

Is there a toleration which lets a workload get deployed to every node including master in the Kubernetes cluster

Is there a toleration which lets workloads get deployed to every node, including master nodes, in the cluster, regardless of any taints that any node has?
This toleration will deploy your workload onto every node, including master nodes, in your Kubernetes cluster, regardless of any taints on any node:
tolerations:
- operator: Exists
If you want to deploy to every node, why not use a DaemonSet instead?
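As a sketch, a minimal DaemonSet carrying the tolerate-everything toleration could look like this (the name and image are placeholders):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: everywhere-agent
spec:
  selector:
    matchLabels:
      app: everywhere-agent
  template:
    metadata:
      labels:
        app: everywhere-agent
    spec:
      tolerations:
      - operator: Exists
      containers:
      - name: agent
        image: nginx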

Kubernetes - Enable automatic pod rescheduling on taint/toleration

In the following scenario:
Pod X has a toleration for a taint
However, node A with such a taint does not exist
Pod X gets scheduled on a different node B in the meantime
Node A with the proper taint becomes Ready
Here, Kubernetes does not trigger automatic rescheduling of pod X onto node A, since it is already running properly on node B. Is there a way to enable that automatic rescheduling to node A?
Natively, probably not, unless you:
change the taint on node B to NoExecute (it may already be set):
NoExecute - the pod will be evicted from the node (if it is already running on the node), and will not be scheduled onto the node (if it is not yet running on the node).
update the toleration of the pod
That is:
You can put multiple taints on the same node and multiple tolerations on the same pod.
The way Kubernetes processes multiple taints and tolerations is like a filter: start with all of a node’s taints, then ignore the ones for which the pod has a matching toleration; the remaining un-ignored taints have the indicated effects on the pod. In particular,
if there is at least one un-ignored taint with effect NoSchedule, then Kubernetes will not schedule the pod onto that node.
If that is not possible, then using node affinity could help (but that differs from taints).
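As a sketch of the taint-based approach (the node names and the dedicated key/values are hypothetical): give node B a NoExecute taint that pod X does not tolerate, so it is evicted, while pod X keeps a toleration that matches only node A's taint:
kubectl taint nodes nodeA dedicated=special:NoSchedule
kubectl taint nodes nodeB dedicated=other:NoExecute
tolerations:
- key: dedicated
  operator: Equal
  value: special
  effect: NoSchedule
After the eviction from node B, the scheduler can only place pod X on node A, whose taint it tolerates.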

Argo Workflow distribution on KOPS cluster

Using the KOPS tool, I deployed a cluster with:
1 Master
2 slaves
1 Load Balancer
Now I am trying to deploy an Argo Workflow, but I don't know the process. Will it be installed on a worker node or on the master of the k8s cluster I built? How does it work?
Basically, if anyone can describe the functional flow or steps of deploying an Argo Workflow on Kubernetes, it would be nice. First, I need to understand where it is deployed: on the master or on a worker node?
Usually, kops creates a Kubernetes cluster with taints on the master node that prevent regular pods from being scheduled on it.
However, there was an issue with some cluster network implementations, and sometimes you get a cluster without taints on the master.
You can change taints on the master node by running the following commands:
add taints (no pods on master):
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
remove taints (allow pods to be scheduled on the master):
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to know whether the taints are applied to the master node or not, run the following command:
kubectl get node node-master --export -o yaml
(Note: the --export flag has been deprecated and removed in recent kubectl versions; plain kubectl get node node-master -o yaml shows the same spec.)
Find the spec: section. If the taints are present, you should see something like this:
...
spec:
  externalID: node-master
  podCIDR: 192.168.0.0/24
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
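Alternatively, a one-liner that prints just the taints (a sketch using jsonpath):
kubectl get node node-master -o jsonpath='{.spec.taints}'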