Kubernetes: taints and tolerations

We use Kubernetes on AWS, and I applied taints to one node using the kubectl command. When I restart that node, the taints are lost.
So I wanted to know: do Kubernetes taints persist across a reboot or not?

Taints applied to a node via kubectl will not persist across a reboot, but depending on your Kubernetes install method you can register the node with the kubelet's --register-with-taints flag to add taints that persist across restarts.
Kubernetes Node Docs
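As a minimal sketch, assuming a kubeadm-style install where extra kubelet flags are read from /etc/default/kubelet (both the file path and the dedicated=backend taint are assumptions for illustration):

# /etc/default/kubelet -- the exact path varies by distribution
KUBELET_EXTRA_ARGS=--register-with-taints=dedicated=backend:NoSchedule

After a kubelet restart, the taint is applied whenever the node registers itself with the API server, so it is not lost when the node comes back.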

Related

Select all pods running on a K8s node for Cilium Policy

I am trying to use a Cilium Egress Gateway Policy in my K8s cluster. I want to apply the policy to all pods scheduled on node X. How can I do that?
Using the podSelector field, I can pick pods via matchLabels. There is also a special label, io.kubernetes.pod.namespace, for selecting pods in a namespace. But I don't know how to filter on the pod's scheduled node (spec.nodeName).
Another possible solution is to write a DaemonSet that gets all pods on its node and then calls the api-server to add a label with the nodeName. But I need guidance on how to write such a DaemonSet, and on whether it is even secure to keep api-server credentials on the node.
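For reference, a minimal sketch of the selector shape being discussed, assuming Cilium's cilium.io/v2 CiliumEgressGatewayPolicy API (the namespace, CIDR, and gateway node name are placeholders):

apiVersion: cilium.io/v2
kind: CiliumEgressGatewayPolicy
metadata:
  name: example-egress-policy
spec:
  selectors:
  - podSelector:
      matchLabels:
        io.kubernetes.pod.namespace: default  # the special namespace label mentioned above
  destinationCIDRs:
  - 0.0.0.0/0
  egressGateway:
    nodeSelector:
      matchLabels:
        kubernetes.io/hostname: gateway-node  # placeholder gateway node

Note that podSelector only matches pod labels, so there is no built-in way to match on spec.nodeName here, which is exactly the limitation described above.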

Can pods be deployed on k3s nodes with roles control-plane,etcd,master

I have followed this tutorial https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it I end up with a running k3s cluster with 3 nodes:
kubectl get nodes
NAME      STATUS   ROLES                       AGE     VERSION
master1   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master2   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master3   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a role of Worker?
In other tutorials, the end result is master and worker roles on different nodes, so I am not even sure how I managed to get this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19; I am using 1.23.
Nodes have taints so that pods don't get scheduled on them. With most Kubernetes distributions today you can safely remove these taints; once they are gone, the scheduler will no longer skip the control-plane nodes when placing regular workloads.
To see whether a node has taints, run kubectl describe node <node_name> and look for the Taints field.
Additionally, you can give workloads tolerations so that their pods ignore matching taints. See the Kubernetes docs on Taints and Tolerations.
This matters most for single-node clusters, which otherwise couldn't schedule anything at all. Distributions like k3s or microk8s make single-node clusters easy to set up, which is why the taints are off by default.
I'm only guessing here, but roles seem to be just an abstraction over how your Kubernetes distribution handles taints and tolerations. The master role doesn't necessarily mean that the node is tainted against normal workloads.
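As a sketch of both steps (the taint key below is the conventional kubeadm control-plane taint; on k3s it may not be present at all):

kubectl describe node master1 | grep -i taints
# remove the taint if it is set; the trailing '-' deletes it
kubectl taint nodes master1 node-role.kubernetes.io/control-plane:NoSchedule-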

kubernetes do not schedule anything unless specified

I am adding a node to the cluster, but I don't want pods to be scheduled on it by default. I only want the services which are explicitly specified to run on this node to be scheduled there.
You can add a taint to the node and add a toleration for that taint in the pod spec of the pods you want scheduled on that node, as sketched below.
https://kubernetes.io/docs/concepts/scheduling-eviction/taint-and-toleration/
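As a minimal sketch (the dedicated=special-service taint is a placeholder):

kubectl taint nodes <node_name> dedicated=special-service:NoSchedule

Then, in the pod spec of the workloads that are allowed on that node:

tolerations:
- key: dedicated
  operator: Equal
  value: special-service
  effect: NoSchedule

Note that a toleration only allows scheduling on the tainted node; to make sure those pods land there and nowhere else, combine it with a nodeSelector or node affinity.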

Is there a toleration which lets a workload get deployed to every node including master in the kubernetes cluster

Is there a toleration which lets workloads get deployed to every node, including master nodes, regardless of any taints a node may have?
The following toleration schedules your workload onto every node in your Kubernetes cluster, including master nodes, regardless of any taints:
tolerations:
- operator: Exists
If you want to deploy to every node, why not use a DaemonSet instead?
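A minimal DaemonSet sketch combining both suggestions (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: example-agent
spec:
  selector:
    matchLabels:
      app: example-agent
  template:
    metadata:
      labels:
        app: example-agent
    spec:
      tolerations:
      - operator: Exists  # an empty key with Exists matches every taint
      containers:
      - name: agent
        image: example/agent:latest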

Kubernetes: Deploy daemon set to all nodes except for master node

I have Kubernetes version 1.5 running with two worker nodes and one master node. I would like to deploy fluentd as a daemon set onto all nodes except the master node (the master node spams warning messages because it can't find the logs). How can I avoid deploying to the master node?
So, to make a pod not schedule on the master node, you need to add the following:
nodeSelector:
kubernetes.io/role: node
This will make the pod schedule only on nodes carrying that label. The above example shows the default label for a node in a kops-provisioned cluster. Please verify the key and value if you provisioned the cluster with a different provider.
You can add a label to your worker nodes and use that label in a node selector for the daemon set, so it only deploys on the nodes that have that label.
Inversely, you can define a negative selector to assign the daemon set to the nodes that don't have a given label; in your case, the nodes that don't carry the master's label (see the sketch below).
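A sketch of such a negative selector using node affinity, assuming a cluster recent enough to support the affinity field and a master carrying the node-role.kubernetes.io/master label (verify the key with kubectl get nodes --show-labels, since it differs between provisioners; plain nodeSelector cannot express negation):

affinity:
  nodeAffinity:
    requiredDuringSchedulingIgnoredDuringExecution:
      nodeSelectorTerms:
      - matchExpressions:
        - key: node-role.kubernetes.io/master
          operator: DoesNotExist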
You're looking for the Taints and Tolerations feature. With it you can mark a given node as "tainted" in a particular way, preventing pods from being scheduled on it unless they carry a toleration matching that taint.