I have set up a Kubernetes cluster with kubeadm on 2 bare-metal machines. The cluster works well.
the kubeadm command I used:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
When I do performance testing, I need to run more pods on the worker node, and I ran into a problem:
the PodCIDR on my worker node is PodCIDR: 10.244.1.0/24, so I can run 254 pods at most, even though I have changed max-pods to 500.
So my question is: how can I change the PodCIDR of my node?
I have tried editing /etc/kubernetes/manifests/kube-controller-manager.yaml to set
- --node-cidr-mask-size=16, and restarted the kubelet, but it has no effect.
PodCIDR is managed by the CNI, so it depends on which CNI you're using. You might need to change the CNI config and restart every node.
Here is the reference: https://capstonec.com/help-i-need-to-change-the-pod-cidr-in-my-kubernetes-cluster/
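For example, something like this shows what each node was actually allocated and where the CNI keeps its network config (a sketch assuming Flannel, since 10.244.0.0/16 is its default; the ConfigMap name and namespace vary by CNI and version). Note that --node-cidr-mask-size only applies when a node is first allocated a CIDR, which is likely why editing it had no effect on the already-allocated /24:
# Show the PodCIDR the controller-manager allocated to each node
kubectl get nodes -o custom-columns=NAME:.metadata.name,PODCIDR:.spec.podCIDR
# For Flannel, the cluster-wide network config lives in a ConfigMap
# (kube-flannel-cfg in kube-system is common, but check your installation)
kubectl -n kube-system get configmap kube-flannel-cfg -o yaml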
I have a Kubernetes lab environment for studying an online course.
I missed a step in the installation instructions and didn't change the criSocket setting.
How can I change this setting and keep the rest of the cluster configuration?
I don't want to regenerate the default cluster config, as I did when I installed Kubernetes:
kubeadm config print init-defaults | tee ClusterConfiguration.yaml
The cluster contains 1 control plane node and 3 worker nodes.
cri-socket is a setting for kubelet.
If you have already done the specific setup for the CRI you want to use, I guess you can switch to the other CRI by editing /var/lib/kubelet/kubeadm-flags.env.
After stopping the kubelet, add/modify --container-runtime-endpoint=... in that file and restart the kubelet. The kubelet will then use the new CRI specified there.
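Something like this, for example (a sketch; the containerd socket path below is just an assumption, use the endpoint of the CRI you actually run):
# Stop the kubelet before touching its kubeadm-generated flags
sudo systemctl stop kubelet
# /var/lib/kubelet/kubeadm-flags.env holds a single KUBELET_KUBEADM_ARGS line;
# add or change the endpoint there, e.g.:
#   KUBELET_KUBEADM_ARGS="... --container-runtime-endpoint=unix:///run/containerd/containerd.sock"
sudo vi /var/lib/kubelet/kubeadm-flags.env
# Restart the kubelet so it picks up the new CRI endpoint
sudo systemctl start kubelet
As far as I know, kubeadm also records the socket per node in the kubeadm.alpha.kubernetes.io/cri-socket annotation, so you may want to update that with kubectl annotate --overwrite to keep kubeadm's view consistent.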
This article may help you: https://dev.to/stack-labs/how-to-switch-container-runtime-in-a-kubernetes-cluster-1628
I am upgrading a kube-aws v1.15.5 cluster to the next version, 1.16.8.
Use Case:
I want to keep the same node labels for the master and worker nodes as I'm using in v1.15.
When I tried to upgrade the cluster to v1.16, the kubelet --node-labels flag is restricted from setting 'node-role' labels.
If I keep the node label "node-role.kubernetes.io/master", the kubelet fails to start after the upgrade. If I remove the label, the kubectl get nodes output shows <none> as the role for the upgraded node.
How do I reproduce?
Before the upgrade I took a backup with 'cp /etc/sysconfig/kubelet /etc/sysconfig/kubelet-bkup', removed "-role" from the live file, and once the upgrade completed I restored the kubelet sysconfig by replacing the edited file with 'mv /etc/sysconfig/kubelet-bkup /etc/sysconfig/kubelet'. Now I can see the node role as Master/Worker even after a kubelet service restart.
The problem I'm facing now:
I performed the upgrade on the existing cluster successfully. The cluster runs in AWS on the kube-aws model, so the ASG spins up a new node whenever the Cluster Autoscaler triggers it.
But the new node fails to join the cluster, since the node label "node-role.kubernetes.io/master" still exists in the code base.
How can I add the node role dynamically during the ASG scaling process? Any solution would be appreciated.
Note:
kubeadm, kubelet, kubectl - v1.16.8
I have sorted out the issue. I created a Python script that watches node events. Whenever the ASG spins up a new node, after it joins the cluster the node has an empty role "", and the Python script then dynamically adds the appropriate label to the node.
I have also built a Docker image based on that Python script, and it runs as a pod. The pod is deployed into the cluster and does the job of labelling the new nodes.
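The core of it is just "find nodes without a role label and label them". A minimal sketch of the same logic with kubectl (not the actual Python watcher; node-role.kubernetes.io/worker is an example label, use whatever your tooling expects), run periodically or wired into a watch:
# Label every node that carries neither the master nor the worker role yet
for node in $(kubectl get nodes \
    -l '!node-role.kubernetes.io/master,!node-role.kubernetes.io/worker' \
    -o jsonpath='{.items[*].metadata.name}'); do
  kubectl label node "$node" node-role.kubernetes.io/worker= --overwrite
done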
See my solution on GitHub:
https://github.com/kubernetes/kubernetes/issues/91664
I have also published it as a Docker image, which is publicly available:
https://hub.docker.com/r/shaikjaffer/node-watcher
Thanks,
Jaffer
I have been setting up a multi-node Kubernetes cluster using kubeadm. The setup includes 1 master node and 1 worker node. I created the VMs using Vagrant.
I followed the docs,
https://kubernetes.io/docs/setup/independent/install-kubeadm/
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm
Created 2 VMs using Vagrant.
IPs: master - 192.168.33.10, worker - 192.168.1.21 (both on a host-only network)
I have experienced 2 scenarios:
Case 1:
Ran kubeadm init --pod-network-cidr=10.244.0.0/16 successfully with all pods running.
Installed "Canal" pod network add on.
Followed all the instructions given at the end of the successful kubeadm init command.
SSHed into the 2nd VM, ran the kubeadm join .. command, and I am stuck at "[preflight] Running pre-flight checks".
Case 2:
Did the same process with the flag --apiserver-advertise-address=192.168.33.10
Successfully ran the command kubeadm init --apiserver-advertise-address=192.168.33.10
But when I ran the command kubectl get nodes, it only showed the master node (I expected the worker node to show too).
Kindly help me understand how I can complete this setup. Thank you.
I have a GitHub repository which does exactly what you want. I am pretty sure you will get the idea from it. If anything is not clear, please comment or update the original post.
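As a quick check first: the master (192.168.33.10) and worker (192.168.1.21) sit on different subnets, which by itself can explain the hang if there is no route between them. Running the join with verbose output usually shows which pre-flight step it is stuck on (the token and hash placeholders below come from your kubeadm init output):
# On the worker: verify the API server is reachable on the host-only network
nc -vz 192.168.33.10 6443
# Then join with verbosity to see where it hangs
sudo kubeadm join 192.168.33.10:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash> --v=5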
I am trying to set up horizontal pod autoscaling in GKE. I found no proper documentation on reducing --horizontal-pod-autoscaler-sync-period to 5 seconds on the kube-controller-manager.
The link below says it is possible to change the flags:
https://kubernetes.io/docs/reference/command-line-tools-reference/kube-controller-manager/
Are there any proper implementation steps for this?
You are not able to do this on GKE, EKS, and other managed clusters.
In order to change/add flags in kube-controller-manager, you need access to the /etc/kubernetes/manifests/ directory on the master node and must be able to modify the parameters in /etc/kubernetes/manifests/kube-controller-manager.yaml.
GKE, EKS, and other managed clusters are run entirely by their providers, and you don't get access to the master nodes.
But you can create a cluster with kubeadm init and configure/change it as you like.
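For example, on a kubeadm-built cluster it is just an edit of the static pod manifest on the control-plane node (a sketch; the kubelet recreates the controller-manager automatically when the file changes):
# Add the flag under spec.containers[0].command in
# /etc/kubernetes/manifests/kube-controller-manager.yaml:
#   - kube-controller-manager
#   - --horizontal-pod-autoscaler-sync-period=5s
#   - ...existing flags...
sudo vi /etc/kubernetes/manifests/kube-controller-manager.yaml
# Verify the new flag is in place after the pod is recreated
kubectl -n kube-system get pod -l component=kube-controller-manager -o yaml | grep sync-period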
You can stop your Minikube cluster and start it with your extra config:
minikube start --extra-config 'controller-manager.horizontal-pod-autoscaler-sync-period=5s'
For more details, you can go through https://minikube.sigs.k8s.io/docs/handbook/config/#modifying-kubernetes-defaults
When I run the kubeadm init command, all pods are running except the coredns pods. When I describe those pods, they show something like "cni initialization failed".
Do I need a network plugin to be installed before running kubeadm init?
No, the network add-on is only added after kubeadm init; the documentation is explicit on this topic: https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
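For example (Flannel is just one option; the manifest URL below is the one the Flannel project publishes at the time of writing, so double-check it against your chosen add-on's docs):
# After kubeadm init, install a pod-network add-on; coredns stays Pending until a CNI is in place
kubectl apply -f https://github.com/flannel-io/flannel/releases/latest/download/kube-flannel.yml
# Watch coredns come up once the network is ready
kubectl -n kube-system get pods -w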