I'd like to prepare multiple YAML files customizing the arguments of flannel (a DaemonSet) and run, on each node, the flannel pod whose YAML matches the condition expressed by that node's label. Can I label a worker node before it joins the Kubernetes master?
You can specify --node-labels when your kubelet starts, which will apply the labels to the node, but ONLY during registration.
This will not work if your kubelet is starting up and the node is already a member of the cluster.
Kubelet Docs
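For example, a minimal sketch for a kubeadm-based worker; the label key/value is just a placeholder and the file path may differ on your distro:

# /etc/default/kubelet (or /etc/sysconfig/kubelet on RPM-based systems)
KUBELET_EXTRA_ARGS=--node-labels=flannel-profile=variant-a

# then join the cluster as usual; the label is applied at registration
kubeadm join <master-ip>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>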
I have followed this tutorial https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it I end up with a running k3s cluster with 3 nodes
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
master2 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
master3 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a worker role?
In other tutorials, the end result is master and worker roles on different nodes, so I am not even sure how I managed to get this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19; I am using 1.23.
Control-plane nodes normally carry taints so that ordinary pods don't get scheduled on them. With most Kubernetes distributions today you can safely remove these taints; once they are gone, the scheduler will consider the control-plane nodes for regular workloads as well.
To see if a node has taints run kubectl describe node <node_name> and look for the taints field.
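For example (the node name and taint key will vary by distribution):

kubectl describe node master1 | grep -i taints
# Taints:  node-role.kubernetes.io/control-plane:NoSchedule   (or <none> on a default k3s install)

# removing a taint is the same command with a trailing "-"
kubectl taint nodes master1 node-role.kubernetes.io/control-plane:NoSchedule-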
Additionally you can give workloads tolerations, so their pods will ignore taints. See more in the Kubernetes docs about Taints and tolerations.
This is necessary for single-node clusters, which otherwise wouldn't work. Distributions like k3s or microk8s are designed to make single-node clusters easy to set up, which is why the taints are off by default.
I'm only guessing here, but roles seem to be just an abstraction over how your k8s distribution handles taints and tolerations. The master role doesn't necessarily mean that the node is tainted against normal workloads.
Is there a toleration which lets workloads get deployed to every node in the cluster, including master nodes, regardless of any taints that any node has?
This toleration will let your workload be deployed onto every node in your Kubernetes cluster, including the master nodes, regardless of any taints on any of the nodes.
tolerations:
- operator: Exists
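For context, this sits under the pod spec of a Deployment or Pod manifest, roughly like this (the names and image are placeholders):

spec:
  template:
    spec:
      tolerations:
      - operator: Exists
      containers:
      - name: my-app
        image: my-app:latest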
If you want to deploy to every node, why not use a DaemonSet instead?
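A rough sketch of that approach, combined with the blanket toleration above (the name and image are placeholders):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: my-agent
spec:
  selector:
    matchLabels:
      app: my-agent
  template:
    metadata:
      labels:
        app: my-agent
    spec:
      tolerations:
      - operator: Exists
      containers:
      - name: my-agent
        image: my-agent:latest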
I have two IPs, a master node and a worker node. I need to deploy some services using these. I don't know anything about Kubernetes; what are the master node and the worker node?
How do I start?
You should start from the very basics.
The Kubernetes Concepts page is your starting point.
The Kubernetes Master is a collection of three processes that run on a single node in your cluster, which is designated as the master node. Those processes are: kube-apiserver, kube-controller-manager and kube-scheduler.
Each individual non-master node in your cluster runs two processes: kubelet, which communicates with the Kubernetes Master, and kube-proxy, a network proxy which reflects Kubernetes networking services on each node.
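You can see these components for yourself once you have cluster access; on a kubeadm-style cluster the control-plane processes run as pods in the kube-system namespace (exact pod names vary by distribution):

kubectl get nodes
kubectl get pods -n kube-system
# typically shows kube-apiserver-*, kube-controller-manager-*, kube-scheduler-* and kube-proxy-*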
Regarding your question in the comment: read Organizing Cluster Access Using kubeconfig Files. Make sure you have the kubeconfig file in the right place.
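For example (these paths are the conventional defaults; yours may differ):

# kubectl reads $KUBECONFIG, falling back to ~/.kube/config
export KUBECONFIG=$HOME/.kube/config
kubectl config view     # check which clusters, users and contexts are configured
kubectl cluster-info    # verify you can reach the API server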
We use Kubernetes on the AWS cloud, and for one node I applied taints using the kubectl command. When I restart that node, the taints are lost.
So I wanted to know: do Kubernetes taints persist or not?
Taints applied to a node via kubectl will not persist across a reboot, but depending on your Kubernetes install method you can manually register the node with the --register-with-taints kubelet flag to add taints that persist across restarts.
Kubernetes Node Docs
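A minimal sketch, assuming you control the kubelet's startup flags (the taint key/value below is only an example):

# e.g. in KUBELET_EXTRA_ARGS or your node bootstrap configuration
--register-with-taints=dedicated=special-workload:NoSchedule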
I have Kubernetes running on version 1.5 with two nodes and one master node. I would like to deploy fluentd as a DaemonSet onto all nodes except the master node (the master node spams warning messages as it can't find logs). How can I avoid deploying to the master node?
So to keep a pod from being scheduled on a master node, you need to add the following:
nodeSelector:
  kubernetes.io/role: node
This will make the pod schedule only on worker nodes. The above example shows the default label for a node in a kops-provisioned cluster. Please verify the key and value if you have provisioned the cluster with a different provider.
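You can check which labels your nodes actually carry before relying on this (the label column here matches the kops example):

kubectl get nodes --show-labels
kubectl get nodes -L kubernetes.io/role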
You can apply a label to your worker nodes and use that label in a selector for the DaemonSet, which will then only deploy on the nodes that have that label.
Conversely, you can define a negative selector (node affinity with a NotIn or DoesNotExist operator) to assign the DaemonSet to nodes that don't have a given label; in your case, the nodes that don't have the master's label.
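A sketch of the label-based approach (the label key/value and node names are arbitrary examples):

kubectl label nodes node1 node2 role=logging

# then in the DaemonSet pod spec:
nodeSelector:
  role: logging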
You're looking for the Taints and Tolerations feature. Using it you can mark a given node as "tainted" in a particular way, preventing pods from being scheduled on that node unless they carry a toleration matching that taint.
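For example, tainting the master so that pods without a matching toleration (such as the fluentd DaemonSet) avoid it; the taint key and value here are arbitrary:

kubectl taint nodes master1 dedicated=master:NoSchedule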