GKE does not automatically start a new node after I delete a node from the GKE cluster - kubernetes

I created a cluster:
gcloud container clusters create test
so there will be 3 nodes:
kubectl get nodes
NAME                                  STATUS    ROLES     AGE   VERSION
gke-test-default-pool-cec920a8-9cgz   Ready     <none>    23h   v1.9.7-gke.5
gke-test-default-pool-cec920a8-nh0s   Ready     <none>    23h   v1.9.7-gke.5
gke-test-default-pool-cec920a8-q83b   Ready     <none>    23h   v1.9.7-gke.5
Then I delete a node from the cluster:
kubectl delete node gke-test-default-pool-cec920a8-9cgz
node "gke-test-default-pool-cec920a8-9cgz" deleted
No new node is created.
Then I delete all the nodes. Still no new node is created.
kubectl get nodes
No resources found.
Am I doing something wrong? I assumed GKE would automatically bring up a new node if a node died.

After running kubectl delete node gke-test-default-pool-cec920a8-9cgz, also run gcloud compute instances delete gke-test-default-pool-cec920a8-9cgz.
This actually deletes the VM (kubectl delete node only "disconnects" the node from the cluster). Because the node pool still declares three nodes, GCP will recreate the VM and it will automatically rejoin the cluster.
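A minimal sketch of the full sequence (the --zone value, us-central1-a, is an assumption here; use whatever zone your node pool actually runs in):
kubectl delete node gke-test-default-pool-cec920a8-9cgz
gcloud compute instances delete gke-test-default-pool-cec920a8-9cgz --zone us-central1-a
The managed instance group behind the node pool notices the missing instance, creates a replacement VM, and its kubelet registers a fresh node with the cluster.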

Kubernetes is a system for managing workloads, not machines. A Kubernetes Node object merely reflects the state of the underlying infrastructure.
As such, Node objects are managed automatically by Kubernetes. kubectl delete node simply removes the serialized object from Kubernetes' etcd storage; it does nothing to the VM on the GCE side where the node is hosted. kubectl delete node is therefore not the way to remove a node from a managed cluster: the node pool carries the desired (declared) state, and that state cannot be altered by kubectl delete node.
If you want to remove a node, resize the node pool, i.e. the underlying instance group.
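For example, to scale the cluster from the question down to two nodes, something like the following should work (default-pool is the name GKE gives the initial node pool; adjust it if yours differs):
gcloud container clusters resize test --node-pool default-pool --num-nodes 2
Because this changes the declared size of the underlying managed instance group, the removal is permanent and no replacement VM is created.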

Related

Can pods be deployed on k3s nodes with roles control-plane,etcd,master

I have followed this tutorial https://vmguru.com/2021/04/how-to-install-rancher-on-k3s/
At the end of it I have a running k3s cluster with 3 nodes:
kubectl get nodes
NAME      STATUS   ROLES                       AGE     VERSION
master1   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master2   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
master3   Ready    control-plane,etcd,master   7d20h   v1.23.5+k3s1
The cluster is using the embedded etcd datastore.
I am confused because I am able to deploy workloads to this cluster. I thought I could only deploy workloads to nodes with a worker role?
In other tutorials the end result is master and worker roles on different nodes, so I am not even sure how I ended up with this combination of roles. Has something changed in the k3s distribution, perhaps? The author used 1.19 and I am using 1.23.
Control-plane nodes are normally tainted so that ordinary pods are not scheduled onto them. With most Kubernetes distributions today you can safely remove these taints; once they are gone, the scheduler no longer excludes the control-plane nodes and will place regular workloads on them.
To see whether a node has taints, run kubectl describe node <node_name> and look at the Taints field.
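For example, on the cluster from the question (master1 is one of the node names listed above):
kubectl describe node master1 | grep -i taints
If a taint is present and you want to remove it, the usual pattern is kubectl taint with a trailing minus; the exact taint key depends on the distribution, node-role.kubernetes.io/master:NoSchedule is only a common example:
kubectl taint nodes master1 node-role.kubernetes.io/master:NoSchedule-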
Additionally, you can give workloads tolerations so that their pods ignore specific taints. See the Kubernetes docs on Taints and Tolerations.
Leaving the control plane schedulable is necessary for single-node clusters, which otherwise couldn't run any workloads at all. Distributions like k3s or microk8s are designed to make single-node clusters easy to set up, which is why the taints are off by default.
I'm only guessing here, but roles seem to be just an abstraction over how your Kubernetes distribution handles taints and tolerations. The master role doesn't necessarily mean that the node is tainted against normal workloads.
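A rough way to check this guess: the ROLES column is built from node labels with the node-role.kubernetes.io/ prefix, which you can inspect directly, for example:
kubectl get node master1 --show-labels
On the k3s cluster above you should see labels such as node-role.kubernetes.io/master=true while the Taints field stays empty, which is why the node reports those roles yet still accepts ordinary workloads.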

Running kubectl inside a worker node

I SSHed into a worker node inside the cluster and ran kubectl there. I created a PV, a PVC and a Deployment. I read in the documentation that a PV is a cluster-wide object. My question is: what happens in this case? In other words, does running kubectl inside a worker node have the same effect as running it from the master node?
Short answer: yes. kubectl connects to whatever API server is configured in its kubeconfig, and that API server controls the whole cluster, so it doesn't matter which machine you run it from.
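If you want to verify which endpoint a given kubectl is talking to, a quick sketch:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
kubectl cluster-info
Run from the worker node and from the master, both should print the same API server address, which is why the PV, PVC and Deployment you created are visible cluster-wide either way.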

Why can't I get master node information in a fully-managed Kubernetes cluster?

Hi everyone,
please tell me why the kubectl get nodes command does not return master node information in a fully-managed Kubernetes cluster.
I have a Kubernetes cluster in GKE. When I type the kubectl get nodes command, I get the information below.
$ kubectl get nodes
NAME                                      STATUS   ROLES    AGE     VERSION
gke-istio-test-01-pool-01-030fc539-c6xd   Ready    <none>   3m13s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-d74k   Ready    <none>   3m18s   v1.13.11-gke.14
gke-istio-test-01-pool-01-030fc539-j685   Ready    <none>   3m18s   v1.13.11-gke.14
$
Of course, I can get the worker node information. This information matches the GKE web console.
By the way, I have another Kubernetes cluster, built with three Raspberry Pis and kubeadm. When I type the kubectl get nodes command against this cluster, I get the result below.
$ kubectl get nodes
NAME     STATUS   ROLES    AGE    VERSION
master   Ready    master   262d   v1.14.1
node01   Ready    <none>   140d   v1.14.1
node02   Ready    <none>   140d   v1.14.1
$
This result includes the master node information.
I'm curious why I cannot get the master node information in a fully-managed Kubernetes cluster.
I understand that the advantage of a fully-managed service is that we don't have to manage the management layer ourselves. I want to know how to create a Kubernetes cluster in which the master node information is not displayed.
I tried to create a cluster "the hard way", but couldn't find any information that could be a hint.
Lastly, I'm still learning English, so please correct me if I'm wrong.
It's a good question!
The key is the kubelet component of Kubernetes.
Managed Kubernetes offerings run the control plane components on masters, but they don't run the kubelet there, so those machines are never registered as Node objects in your cluster. You can easily achieve the same on your DIY cluster.
The kubelet is the primary “node agent” that runs on each node. It can register the node with the apiserver using one of: the hostname; a flag to override the hostname; or specific logic for a cloud provider.
https://kubernetes.io/docs/reference/command-line-tools-reference/kubelet/
When the kubelet flag --register-node is true (the default), the kubelet will attempt to register itself with the API server. This is the preferred pattern, used by most distros.
https://kubernetes.io/docs/concepts/architecture/nodes/#self-registration-of-nodes
Because there are no nodes with that role. The control plane for GKE is hosted within Google's own managed infrastructure, not on your own nodes.
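A rough way to see this on a GKE cluster (a sketch; it filters nodes by the master role label that kubeadm-style clusters apply):
kubectl get nodes -l node-role.kubernetes.io/master
On GKE this should come back with "No resources found", because no kubelet ever registers a master Node object with the API server, while on the Raspberry Pi cluster above it would list the master node.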

Pods are restarted automatically on the node which was added to the existing kubeadm cluster

I recently added a node to an existing kubeadm cluster using
kubeadm join --token (TOKEN) (MASTER IP):6443 --discovery-token-ca-cert-hash (HASH)
The node joined successfully and it is listed in kubectl get nodes.
Now pods are being assigned to the node, but those pods restart automatically, and it seems they also cannot communicate with pods on the other nodes.

kubernetes: how to make sure that no user pods are run on master

I'm currently using Kubernetes for our staging environment, and because it is only a small one, I'm using a single node both as the master and for running my application pods.
When we switch over to production there will be more than one node: at least one for the master and one bigger node for the application pods. Do I have to make sure that all my pods run on a node other than the master, or does Kubernetes take care of that automagically?
If you look at the output of kubectl get nodes, you'll see something like:
~ kubectl get nodes
NAME                     STATUS                     AGE   VERSION
test-master              Ready,SchedulingDisabled   23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-f635   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-fzu7   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
test-minion-group-vc1p   Ready                      23h   v1.6.0-alpha.0.1862+59cfdfb8dba60e
The SchedulingDisabled status ensures that no pods are scheduled onto that node, and each of your HA master nodes should have it by default.
It is possible to set other nodes to SchedulingDisabled as well by using kubectl cordon.
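A minimal sketch using one of the node names from the output above:
kubectl cordon test-minion-group-f635
New pods will no longer be scheduled there (the node shows SchedulingDisabled), and you can reverse it with:
kubectl uncordon test-minion-group-f635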
Alternatively, you can add the --register-schedulable=false parameter to the kubelet running on your master.