Create same master and worker node in Kubernetes - kubernetes

I am preparing a dev environment and want a single host to act as both the master and the worker node for Kubernetes.
How can I achieve this?

The relevant difference between a master node and a worker node is that regular pods cannot be scheduled on a master node because of a taint.
You just need to remove the node-role.kubernetes.io/master:NoSchedule taint so that pods can be scheduled on that (master) node.
The command is:
kubectl taint nodes <masternodename> node-role.kubernetes.io/master:NoSchedule-
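Note that on clusters created with newer kubeadm releases the taint key is node-role.kubernetes.io/control-plane instead of node-role.kubernetes.io/master, so a defensive variant (the node name is a placeholder, and the || true simply ignores the error when a key is not present) is:
# the trailing "-" means "remove this taint"
kubectl taint nodes <masternodename> node-role.kubernetes.io/master:NoSchedule- || true
kubectl taint nodes <masternodename> node-role.kubernetes.io/control-plane:NoSchedule- || true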

The master node is responsible for running several Kubernetes processes that are absolutely necessary to run and manage the cluster properly. [1]
The worker nodes are the part of the Kubernetes cluster that actually runs the containers and applications. [1]
Worker nodes are generally more powerful than master nodes because they have to run hundreds of containers and workloads. However, master nodes hold more significance because they manage the distribution of workload and the state of the cluster. [1]
By removing the taint you will be able to schedule pods on that node.
First, check the current taints by running:
kubectl describe node <nodename> | grep Taints
If the master NoSchedule taint is present, remove it by running:
kubectl taint node <mastername> node-role.kubernetes.io/master:NoSchedule-
References:
[1] - What is Kubernetes cluster? What are worker and master nodes?
See also:
Creating a cluster with kubeadm,
These four similar questions:
Master tainted - no pods can be deployed
Remove node-role.kubernetes.io/master:NoSchedule taint,
Allow scheduling of pods on Kubernetes master?
Are the master and worker nodes the same node in case of a single node cluster?
Taints and Tolerations.

You have to remove the NoSchedule taint from the MASTER node.
I just spun up a kubeadm cluster and the taint key there is node-role.kubernetes.io/control-plane, not node-role.kubernetes.io/master.
So I did the following (sydney is the node name):
$ kubectl describe node sydney | grep Taints
Taints: node-role.kubernetes.io/control-plane:NoSchedule
$ kubectl taint nodes sydney node-role.kubernetes.io/control-plane:NoSchedule-
node/sydney untainted
$ kubectl describe node sydney | grep Taints
Taints: <none>

Related

Can pods be deployed on k3s nodes with roles control-plane,etcd,master

I have followed this tutorial https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it I end up with a running k3s cluster with 3 nodes
kubectl get nodes
NAME      STATUS   ROLES                       AGE     VERSION
master1   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master2   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master3   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a worker role?
In other tutorials, the end result is master and worker roles on different nodes, so I am not even sure how I managed to get this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19, I am using 1.23.
Nodes can carry taints so that pods are not scheduled on them. With most Kubernetes distributions today you can safely remove these taints; once they are gone, the scheduler no longer excludes the control-plane nodes when placing workloads.
To see if a node has taints, run kubectl describe node <node_name> and look for the Taints field.
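If you prefer a single overview of the taints on every node, a jsonpath query along these lines (a small sketch using only standard kubectl) also works:
# print each node name followed by its taints, one node per line
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.spec.taints}{"\n"}{end}'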
Additionally, you can give workloads tolerations so that their pods ignore taints. See the Kubernetes docs on Taints and Tolerations.
This is necessary for single-node clusters, which otherwise could not run any workloads at all. Distributions like k3s or microk8s are built for easy single-node setups, which is why the taints are off by default.
I'm only guessing here, but roles seem to be just an abstraction over how your k8s distribution handles taints and tolerations. The master role does not necessarily mean that the node is tainted against normal workloads.

How to simulate nodeNotReady for a node in Kubernetes

My Ceph cluster is running on AWS in a 3-master, 3-worker configuration. When I do kubectl get nodes it shows all the nodes in the Ready state.
Is there any way I can manually simulate a NodeNotReady error for a node?
Just stop the kubelet service on the node that you want to see as NodeNotReady.
If you just want NodeNotReady you can delete the CNI you have installed.
Run kubectl get all -n kube-system, find the DaemonSet of your CNI and delete it, or simply reverse the installation: kubectl delete -f link_to_your_CNI_yaml
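For example, with a Calico-based cluster the lookup and deletion could look like the sketch below; the DaemonSet name and namespace are assumptions that differ per CNI and install method:
# list DaemonSets in kube-system to find the CNI (calico-node, kube-flannel-ds, weave-net, ...)
kubectl -n kube-system get daemonsets
# then delete the one that belongs to your CNI, e.g. for a manifest-based Calico install:
kubectl -n kube-system delete daemonset calico-node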
You could also try to overwhelm the node with too many pods (resources). You can also share your main goal so we can adjust the answer.
Regarding the answer from P Ekambaram: you can just SSH to a node and then stop the kubelet.
To do that in kops you can just:
ssh -A admin@Node_PublicDNS_name
systemctl stop kubelet
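To watch the node flip to NotReady after stopping the kubelet (it takes until the node-monitor-grace-period expires), and then bring it back, something like this works:
# from a machine with kubectl access, watch the node STATUS change
kubectl get nodes -w
# on the node itself, once you are done, restore it
systemctl start kubelet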
EDIT:
Another way is to overload the Node, which will cause System OOM encountered events and result in the Node entering the NotReady state.
This is just one way to achieve it:
SSH into the Node you want to push into NotReady
Install stress
Run stress: stress --cpu 8 --io 4 --hdd 10 --vm 4 --vm-bytes 1024M --timeout 5m (you can adjust the values of course)
Wait until the Node crashes.
After you stop the stress, the Node should return to a healthy state automatically.
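To confirm what actually happened, you can inspect the node's conditions (MemoryPressure, Ready, etc.) and the recorded events; NODE_NAME is a placeholder:
# show the node's condition block
kubectl describe node NODE_NAME | grep -A 10 Conditions
# show events recorded for that node (System OOM, NodeNotReady, ...)
kubectl get events -A --field-selector involvedObject.name=NODE_NAME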
Not sure what the purpose of simulating NotReady is.
If the purpose is to stop any new pods from being scheduled, you can use kubectl cordon NODE_NAME. This will add the unschedulable taint to the node and prevent new pods from being scheduled there.
If the purpose is to evict existing pods, you can use kubectl drain NODE_NAME.
In general you can play with taints and tolerations to achieve the goals above, and you can do much more with them; a minimal sequence is sketched below.
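A minimal cordon/drain/uncordon cycle looks like this (NODE_NAME is a placeholder):
# stop new pods from being scheduled on the node
kubectl cordon NODE_NAME
# evict the existing pods (DaemonSet pods are skipped, emptyDir data is discarded)
kubectl drain NODE_NAME --ignore-daemonsets --delete-emptydir-data
# make the node schedulable again when you are done
kubectl uncordon NODE_NAME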
Now, the NotReady status comes from the node.kubernetes.io/not-ready taint (see the Kubernetes docs on taint based evictions), which is set by the NodeController:
In version 1.13, the TaintBasedEvictions feature is promoted to beta and enabled by default, hence the taints are automatically added by the NodeController.
Therefore, if you manually set that taint with kubectl taint node NODE_NAME node.kubernetes.io/not-ready=:NoExecute, the NodeController will reset it automatically!
So if you want to actually see the NotReady status, let the NodeController set the taint for you (for example by stopping the kubelet) rather than setting it by hand.
Lastly, if you want to take down the networking on a particular node (a custom NoExecute taint also evicts the CNI's DaemonSet pods, since they do not tolerate it), you can taint it like this: kubectl taint node NODE_NAME dedicated/not-ready=:NoExecute
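To undo that custom taint later, run the same key and effect with a trailing dash:
# remove the custom taint so the evicted DaemonSet pods can come back
kubectl taint node NODE_NAME dedicated/not-ready:NoExecute-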

How to remove NotReady nodes from kubernetes cluster automatically

I'm running a Kubernetes cluster on bare-metal servers and my cluster nodes keep getting added and removed regularly. But when a node is removed, Kubernetes does not remove it automatically from the node list, and kubectl get nodes keeps showing NotReady nodes. Is there any automated way to achieve this? I want behavior for nodes similar to what Kubernetes does for pods.
To remove a node follow the below steps
Run on Master
# kubectl cordon <node-name>
# kubectl drain <node-name> --force --ignore-daemonsets --delete-emptydir-data
# kubectl delete node <node-name>
You can use this little bash command, or set it as a cron-job.
kubectl delete node $(kubectl get nodes | grep NotReady | awk '{print $1;}')
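If you want to run this on a schedule, a small wrapper script avoids errors on runs where every node is Ready; the file name, kubeconfig path, and cron schedule below are assumptions, so adjust them to your setup (e.g. a cron entry like */5 * * * * /usr/local/bin/clean-notready-nodes.sh):
#!/usr/bin/env bash
# clean-notready-nodes.sh - delete every node currently reported as NotReady
set -euo pipefail
export KUBECONFIG=/root/.kube/config   # assumption: point this at your kubeconfig

# collect the names of NotReady nodes (also matches NotReady,SchedulingDisabled)
nodes=$(kubectl get nodes --no-headers | awk '$2 ~ /NotReady/ {print $1}')

# nothing to do if every node is Ready
[ -z "$nodes" ] && exit 0

# intentionally unquoted so each node name becomes its own argument
kubectl delete node $nodes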

Stop scheduling pods on kubernetes master

For testing purposes, I have enabled pod scheduling on the Kubernetes master node with the following command:
kubectl taint nodes --all node-role.kubernetes.io/master-
Now I have added a worker node to the cluster and I would like to stop scheduling pods on the master. How do I do that?
You simply taint the node again (here master is the node name):
kubectl taint nodes master node-role.kubernetes.io/master=:NoSchedule
Taints and tolerations work together to ensure that pods are not scheduled onto inappropriate nodes. One or more taints are applied to a node; this marks that the node should not accept any pods that do not tolerate the taints. Tolerations are applied to pods, and allow (but do not require) the pods to schedule onto nodes with matching taints.
Even with a taint placed on the master node, you can specify a toleration for a pod in the PodSpec, and the pod will be able to schedule onto the master node:
tolerations:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
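For context, a complete (hypothetical) Pod manifest using that toleration could look like the sketch below; the pod name and image are placeholders, and note that the toleration only allows scheduling on the master, it does not force it there (you would need a nodeSelector or node affinity for that):
# hypothetical example pod that tolerates the master taint
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: toleration-demo
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: node-role.kubernetes.io/master
    effect: NoSchedule
EOF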
To learn more, see Taints and Tolerations.

Argo Workflow distribution on KOPS cluster

Using the kOps tool, I deployed a cluster with:
1 Master
2 slaves
1 Load Balancer
Now I am trying to deploy Argo Workflows, but I don't know the process. Will it be installed on a worker node or on the master of the k8s cluster I built? How does it work?
Basically, if anyone can describe the functional flow or the steps of deploying an Argo workflow on Kubernetes, it would be nice. First, I need to understand where it is deployed: on the master or on a worker node?
Usually, kops creates a Kubernetes cluster with taints on the master node that prevent regular pods from being scheduled on it.
However, there have been issues with some cluster network implementations, and sometimes you end up with a cluster without taints on the master.
You can change the taints on the master node by running the following commands:
add taints (no pods on master):
kubectl taint node kube-master node-role.kubernetes.io/master:NoSchedule
remove taints (allow pods to be scheduled on the master):
kubectl taint nodes --all node-role.kubernetes.io/master-
If you want to know whether the taints are applied to the master node or not, run the following command (the --export flag used in the original answer has since been removed from kubectl):
kubectl get node node-master -o yaml
Find the spec: section. If the taints are present, you should see something like this:
...
spec:
  externalID: node-master
  podCIDR: 192.168.0.0/24
  taints:
  - effect: NoSchedule
    key: node-role.kubernetes.io/master
...
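If you only want to see the taints rather than the whole node object, a jsonpath query keeps the output short:
# prints the taints array, or nothing if the node has no taints
kubectl get node node-master -o jsonpath='{.spec.taints}'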