Why can't I get master node information in a fully-managed Kubernetes cluster? - kubernetes

Hello everyone.
Could someone explain why the kubectl get nodes command does not return master node information in a fully-managed Kubernetes cluster?
I have a Kubernetes cluster in GKE. When I run the kubectl get nodes command, I get the information below.
$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
gke-istio-test-01-pool-01-030fc539-c6xd   Ready    <none>   3m13s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k   Ready    <none>   3m18s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685   Ready    <none>   3m18s   v1.13.11-gke.14
$
Of course, I can get the worker node information. It matches what the GKE web console shows.
I also have another Kubernetes cluster, built with three Raspberry Pis and kubeadm. When I run kubectl get nodes against that cluster, I get the result below.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   262d   v1.14.1
node01   Ready    <none>   140d   v1.14.1
node02   Ready    <none>   140d   v1.14.1
$
This result includes the master node.
I'm curious why I cannot see the master node in a fully-managed Kubernetes cluster.
I understand that the advantage of a fully-managed service is that we don't have to manage the control plane ourselves. Still, I would like to know how to create a Kubernetes cluster in which the master node information is not displayed.
I tried creating a cluster following "Kubernetes the Hard Way", but couldn't find anything that could serve as a hint.
Lastly, I'm still learning English, so please correct me if my wording is off.

It's a good question!
The key is the kubelet component of Kubernetes.
Managed Kubernetes offerings run the control plane components on master machines, but those machines don't run the kubelet, so they are never registered as nodes. You can easily achieve the same on your DIY cluster.
The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.
https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
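To make this concrete, here is a minimal sketch of how a DIY master can be kept out of kubectl get nodes, assuming a systemd-managed kubelet; the node name is a placeholder and the exact place to put the flag depends on how your distro wires kubelet arguments:
# Option A: don't run a kubelet on the master at all; run kube-apiserver,
# kube-controller-manager and kube-scheduler directly (e.g. as systemd units),
# and the master simply never registers as a Node.

# Option B: run the kubelet but disable self-registration.
$ sudo systemctl edit kubelet          # add --register-node=false to the kubelet arguments
$ sudo systemctl restart kubelet
$ kubectl delete node <master-name>    # drop the Node object if it was registered earlier
$ kubectl get nodes                    # the master no longer appears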

Because there are no nodes with that role. The control plane for GKE is hosted on Google's own managed infrastructure, not on your own nodes.

Related

Can pods be deployed on k3s nodes with roles control-plane,etcd,master

I have followed this tutorial: https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it, I end up with a running k3s cluster with 3 nodes:
kubectl get nodes
NAME      STATUS   ROLES                       AGE     VERSION
master1   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master2   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master3   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a worker role?
In other tutorials, the end result is master and worker roles on different nodes, so I am not even sure how I managed to get this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19; I am using 1.23.
Control plane nodes usually carry taints so that regular pods don't get scheduled on them. With most Kubernetes distributions today you can safely remove these taints; the scheduler will then consider the control plane nodes for ordinary workloads as well.
To see whether a node has taints, run kubectl describe node <node_name> and look at the Taints field.
Additionally, you can give workloads tolerations so that their pods ignore the taints. See the Kubernetes docs about Taints and tolerations.
This matters for single-node clusters, which otherwise couldn't run any workloads at all. Distributions like k3s or microk8s are designed to make single-node clusters easy to set up, which is why the taints are off by default.
I'm only guessing here, but roles seem to be just a label reflecting how your k8s distribution handles taints and tolerations. The master role doesn't necessarily mean that the node is tainted against normal workloads.
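For reference, a short sketch of both knobs, assuming a kubeadm-style cluster whose control plane nodes carry the usual node-role.kubernetes.io taints (k3s servers don't by default); the node and pod names are placeholders:
# Inspect taints on a node
$ kubectl describe node <node_name> | grep -i taints

# Remove the control-plane taint so regular workloads can be scheduled there
# (on older clusters the taint key is node-role.kubernetes.io/master)
$ kubectl taint nodes <node_name> node-role.kubernetes.io/control-plane:NoSchedule-

# Or keep the taint and give a specific workload a matching toleration
$ kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: demo
spec:
  containers:
  - name: app
    image: nginx
  tolerations:
  - key: "node-role.kubernetes.io/control-plane"
    operator: "Exists"
    effect: "NoSchedule"
EOF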

Kubernetes pod can't communicate with other pods on the same node

We are using Kubernetes 1.21.7, Istio 1.11.4, and Flannel 0.14.0.
kubectl get nodes
NAME     STATUS   ROLES                  AGE    VERSION
k8s-d0   Ready    control-plane,master   204d   v1.21.7
k8s-d1   Ready    <none>                 204d   v1.21.7
k8s-d2   Ready    <none>                 204d   v1.21.7
If pod-a and pod-b are on the same node, for example k8s-d1, they can't communicate (using curl, for example). But if I force the pods onto different nodes, they communicate just fine.
This issue only occurs in the "istio-system" namespace, but it does not seem to be an Istio bug (I already tried opening an issue here, without success).
I figured out what was missing:
modprobe br_netfilter
echo "br_netfilter" >> /etc/modules-load.d/modules.conf
At some point I restarted those nodes and br_netfilter didn't load automatically. Now that it is listed in /etc/modules-load.d/modules.conf, it is loaded on boot.
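For anyone hitting the same symptom, a quick way to check this on each node (a sketch; the sysctl file name is arbitrary):
$ lsmod | grep br_netfilter                    # the module should be listed
$ sysctl net.bridge.bridge-nf-call-iptables    # should print 1 once br_netfilter is loaded

# Persist the sysctl too, so bridged pod-to-pod traffic keeps passing through iptables after a reboot
$ echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf
$ sudo sysctl --system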
Thank you for your support.

Kubernetes v1.20 endpoints resource cannot view controller manager or scheduler

I want to check the leader election of the core components, but the information displayed differs between Kubernetes versions installed from binaries.
Was this information removed in Kubernetes v1.20+? Or is there another way to view the leader election of the core components?
The Kubernetes configuration parameters are identical in both cases; only the binary executables were replaced.
Kubernetes v1.20.8 or Kubernetes v1.20.2
$ kubectl get endpoints -n kube-system
No resources found in kube-system namespace.
Kubernetes v1.19.12
$ kubectl get endpoints -n kube-system
NAME                      ENDPOINTS   AGE
kube-controller-manager   <none>      9m12s
kube-scheduler            <none>      9m13s
I found the cause of the problem.
The difference between the two versions is the default value of --leader-elect-resource-lock:
Kubernetes v1.20.8 or Kubernetes v1.20.2
--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "leases")
Kubernetes v1.19.12
--leader-elect-resource-lock string The type of resource object that is used for locking during leader election. Supported options are 'endpoints', 'configmaps', 'leases', 'endpointsleases' and 'configmapsleases'. (default "endpointsleases")
When I don't set --leader-elect-resource-lock in the controller-manager or scheduler in v1.20.8, the default value is leases, so I can use the following command to view the component leaders.
$ kubectl get leases -n kube-system
NAME                      HOLDER                                          AGE
kube-controller-manager   master01_dec12376-f89e-4721-92c5-a20267a483b8   45h
kube-scheduler            master02_c0c373aa-1642-474d-9dbd-ec41c4da089d   45h
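To dig further into a single election, a couple of follow-ups (a sketch; where you put the flag depends on how your binaries are launched):
# Describe one lease: the output includes the holder identity and the renew time
$ kubectl describe lease kube-controller-manager -n kube-system

# Alternatively, pin the lock type explicitly to get the pre-1.20 endpoints objects back
# (still listed as a supported option in the help text above):
#   --leader-elect-resource-lock=endpointsleases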

How to start & stop a Kubernetes 1.8.5 cluster?

Question
What are the commands to start/stop a K8S cluster? After finishing the installation following Using kubeadm to Create a Cluster, I restarted the CentOS server, and the K8S cluster is not running after the restart.
The Fedora (Single Node) guide lists services to restart, but no such services are installed via kubeadm:
Failed to restart etcd.service: Unit not found.
Failed to restart kube-apiserver.service: Unit not found.
Failed to restart kube-controller-manager.service: Unit not found.
Environment
CentOS 7 on VirtualBox, K8S 1.8.5.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE   VERSION
master   Ready    master   36m   v1.8.5
node01   Ready    <none>   35m   v1.8.5
node02   Ready    <none>   35m   v1.8.5
You are using kubeadm to initialize and administer the k8s cluster. As I understand it, kubeadm uses the following approach.
Systemd manages only the kubelet service on each node.
The kubelet creates and manages the k8s control plane components (kube-apiserver, kube-controller-manager, kube-scheduler and etcd) as static pods.
The kubelet reads their manifest files from /etc/kubernetes/manifests.
So if you want to stop the control plane components, you just need to move these manifest files to another directory; moving them back starts the components again.
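A sketch of that approach on a kubeadm master; the paths are the kubeadm defaults and the holding directory name is arbitrary:
# The static pod manifests (typically etcd.yaml, kube-apiserver.yaml,
# kube-controller-manager.yaml, kube-scheduler.yaml) live here:
$ ls /etc/kubernetes/manifests

# "Stop" the control plane: the kubelet tears down a static pod once its manifest is gone
$ sudo mkdir -p /etc/kubernetes/manifests.stopped
$ sudo mv /etc/kubernetes/manifests/*.yaml /etc/kubernetes/manifests.stopped/

# "Start" it again: put the manifests back and the kubelet recreates the pods
$ sudo mv /etc/kubernetes/manifests.stopped/*.yaml /etc/kubernetes/manifests/

# The kubelet itself is the only piece managed by systemd
$ sudo systemctl status kubelet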

kubernetes: how to make sure that no user pods are run on master

I'm currently using Kubernetes for our staging environment, and because it is only a small one, I'm using a single node both as the master and for running my application pods.
When we switch over to production, there will be more than one node: at least one for the master and one bigger node for the application pods. Do I have to make sure that all my pods run on a node other than the master, or does Kubernetes take care of that automagically?
If you look at the output of kubectl get nodes, you'll see something like:
~ kubectl get nodes
NAME                     STATUS                     AGE   VERSION
test-master              Ready,SchedulingDisabled   23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-f635   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-fzu7   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-vc1p   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
The SchedulingDisabled status ensures that no pods are scheduled onto that node, and each of your HA master nodes should have it by default.
It is possible to set other nodes to SchedulingDisabled as well by using kubectl cordon.
You can add the --register-schedulable=false parameter to the kubelet running on your master.
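For completeness, a minimal sketch of cordoning a node (the node name is a placeholder); note that cordoning only blocks new scheduling, it does not evict pods that are already running:
$ kubectl cordon <node-name>      # the node now shows STATUS Ready,SchedulingDisabled
$ kubectl get nodes               # verify the status
$ kubectl uncordon <node-name>    # make the node schedulable again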