Can pods be deployed on k3s nodes with roles control-plane,etcd,master - kubernetes

I have followed this tutorial https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it I end up with a running k3s cluster with 3 nodes:
kubectl get nodes
NAME STATUS ROLES AGE VERSION
master1 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
master2 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
master3 Ready control-plane,etcd,master 7d20h v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a role of worker?
In other tutorials, the end result is master and worker roles on different nodes, so I am not even sure how I ended up with this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19 and I am using 1.23.

Control-plane nodes usually carry taints so that regular pods are not scheduled on them. With most Kubernetes distributions today you can safely get rid of these taints; once they are removed, the scheduler will no longer ignore the control-plane nodes when placing workloads.
To see whether a node has taints, run kubectl describe node <node_name> and look for the Taints field.
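For example, a quick way to check and, if needed, remove such a taint (the taint key shown is the common default and may differ on your cluster):
# Show the taints on one of the nodes (node name taken from the question above)
kubectl describe node master1 | grep -i taints
# If the control-plane taint is present, the trailing "-" removes it
kubectl taint nodes master1 node-role.kubernetes.io/control-plane:NoSchedule-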
Additionally, you can give workloads tolerations so their pods will ignore taints. See more in the Kubernetes docs about taints and tolerations.
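For instance, a pod spec snippet like this (assuming the standard control-plane taint key) tolerates that specific taint and can therefore be scheduled onto tainted control-plane nodes:
tolerations:
- key: "node-role.kubernetes.io/control-plane"   # common default key; may differ on your cluster
  operator: "Exists"
  effect: "NoSchedule"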
This is necessary for single-node clusters, which otherwise wouldn't work at all. Distributions like k3s or microk8s are designed to make single-node clusters easy to set up, so that's why the taints are off by default.
I'm only guessing here, but roles seem to be just an abstraction over how your Kubernetes distribution handles taints and tolerations. The master role doesn't necessarily mean that the node is tainted against normal workloads.

Related

Can a worker node in Kubernetes run two different pods?

In Kubernetes we can run pods on worker nodes, and pods share the node's resources and IP address, but what if we run two different pods on the same worker node? Does that mean that both pods will have different IP addresses?
To answer the main question: yes. A node can and does run different pods. Even if you have only one Deployment, you can run
kubectl describe nodes my-node
Or even
kubectl get pods --all-namespaces
to see some of the pods that Kubernetes itself uses for its control plane on each node.
As for the second question, it really depends on your deployment. I'd recommend reading about kube-proxy, which is a pod running on every node (which also answers your first question) and is in charge of the networking layer and communication within the cluster:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-proxy/
Each pod will have its own IP address, even on the same node (see the example below), and there are ways to communicate with pods directly:
https://superuser.openstack.org/articles/review-of-pod-to-pod-communications-in-kubernetes/
https://kubernetes.io/docs/concepts/cluster-administration/networking/
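For example, a quick way to check this yourself (assuming you already have a few pods deployed):
# The -o wide output adds IP and NODE columns, so you can verify that
# two pods scheduled on the same node still have different IP addresses.
kubectl get pods -o wide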

Is there a toleration which lets a workload get deployed to every node, including masters, in the Kubernetes cluster?

Is there a toleration which lets workloads get deployed to every node, including master nodes, regardless of any taints that any node has?
This toleration will let your workload be scheduled onto every node, including master nodes, regardless of any taints on any of the nodes:
tolerations:
- operator: Exists
If you want to deploy to every node, why not use a DaemonSet instead?
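A minimal sketch of such a DaemonSet (the name and image are placeholders), combined with the catch-all toleration above so it also lands on tainted master nodes:
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-agent         # placeholder name
spec:
  selector:
    matchLabels:
      app: node-agent
  template:
    metadata:
      labels:
        app: node-agent
    spec:
      tolerations:
      - operator: Exists   # tolerate every taint, so tainted masters are included
      containers:
      - name: agent
        image: busybox     # placeholder image
        command: ["sleep", "3600"]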

Why can I not get master node information in fully-managed Kubernetes?

Hi everyone.
Please teach me why the kubectl get nodes command does not return master node information in a fully-managed Kubernetes cluster.
I have a Kubernetes cluster in GKE. When I type the kubectl get nodes command, I get the information below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
gke-istio-test-01-pool-01-030fc539-c6xd Ready <none> 3m13s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k Ready <none> 3m18s v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685 Ready <none> 3m18s v1.13.11-gke.14
$
Of course, I can get the worker node information, and it matches what the GKE web console shows.
By the way, I have another Kubernetes cluster built from three Raspberry Pis with kubeadm. When I type kubectl get nodes against this cluster, I get the result below.
$ kubectl get nodes
NAME STATUS ROLES AGE VERSION
master Ready master 262d v1.14.1
node01 Ready <none> 140d v1.14.1
node02 Ready <none> 140d v1.14.1
$
This result includes master node information.
I'm curious why I cannot get the master node information in a fully-managed Kubernetes cluster.
I understand that the advantage of a fully-managed service is that we don't have to manage the management layer ourselves. I want to know how to create a Kubernetes cluster in which the master node information is not displayed.
I tried to create a cluster with "the hard way", but couldn't find any information that could serve as a hint.
Lastly, I'm still learning English, so please correct me if my wording is wrong.
It's a good question!
The key is kubelet component of the Kubernetes.
Managed Kubernetes offerings run the control-plane components on their masters, but those machines don't run a kubelet, so they never register as nodes. You can easily achieve the same on your DIY cluster.
The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.
https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
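To illustrate the mechanism (this is not GKE's actual configuration, just the self-registration behaviour described above), a control-plane machine whose kubelet is started with registration disabled, or that runs no kubelet at all, will simply never appear in kubectl get nodes:
# Hypothetical kubelet invocation on a DIY control-plane machine;
# with registration disabled, no Node object is ever created for it.
kubelet --register-node=false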
Because there are no nodes with that role. The control plane for GKE is hosted within their own magic system, not on your own nodes.

Difference between daemonsets and deployments

In Kelsey Hightower's Kubernetes Up and Running, he gives two commands :
kubectl get daemonSets --namespace=kube-system kube-proxy
and
kubectl get deployments --namespace=kube-system kube-dns
Why does one use daemonSets and the other deployments?
And what's the difference?
Kubernetes Deployments manage stateless services running on your cluster (as opposed to, for example, StatefulSets, which manage stateful services). Their purpose is to keep a set of identical pods running and upgrade them in a controlled way. For example, you define how many replicas (pods) of your app you want to run in the Deployment definition, and Kubernetes will spread that many replicas of your application over the nodes. If you say 5 replicas over 3 nodes, then some nodes will have more than one replica of your app running.
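A minimal sketch of such a Deployment (the name and image are placeholders):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app             # placeholder name
spec:
  replicas: 5              # 5 identical pods; the scheduler spreads them over the available nodes
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: my-app
        image: nginx:1.25  # placeholder image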
DaemonSets manage groups of replicated Pods. However, DaemonSets attempt to adhere to a one-Pod-per-node model, either across the entire cluster or a subset of nodes. A Daemonset will not run more than one replica per node. Another advantage of using a Daemonset is that, if you add a node to the cluster, then the Daemonset will automatically spawn a pod on that node, which a deployment will not do.
DaemonSets are useful for deploying ongoing background tasks that you need to run on all or certain nodes, and which do not require user intervention. Examples of such tasks include storage daemons like ceph, log collection daemons like fluentd, and node monitoring daemons like collectd.
Let's take the example you mentioned in your question: why is kube-dns a Deployment and kube-proxy a DaemonSet?
The reason is that kube-proxy is needed on every node in the cluster to program iptables rules, so that every node can reach every pod no matter which node it resides on. Hence, when we make kube-proxy a DaemonSet and another node is added to the cluster at a later time, kube-proxy is automatically spawned on that node.
Kube-dns's responsibility is to resolve a service name to its IP, and only one replica of kube-dns is enough to do that. Hence we make kube-dns a Deployment, because we don't need kube-dns on every node.

Kubernetes: taints and toleration

We use Kubernetes on the AWS cloud, and I applied taints to one node using the kubectl command. When I restart that node, the taints are lost.
So I wanted to know: do Kubernetes taints persist or not?
Taints applied to a node via kubectl will not persist across a reboot, but, depending on how your Kubernetes cluster is installed, you can register the node's kubelet with the --register-with-taints flag to add taints that survive restarts.
Kubernetes Node Docs
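For example (the taint key, value, and effect here are placeholders), starting the kubelet with this flag re-applies the taint every time the node comes back up:
# Hypothetical kubelet flag; use your own key=value:effect.
kubelet --register-with-taints=dedicated=special-workloads:NoSchedule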