How to change the Kubernetes worker node IP address correctly?

I have tried changing the NIC IP of the worker node directly. It seems that the master node automatically updates the IP information of the worker node, and it does not appear to have any negative impact on the Kubernetes cluster. Is this the simple and correct way to change the worker node IP, or are there other important steps that I have missed?

I created a mini cluster using kubeadm with two Ubuntu 18.04 VMs on one public network.
Indeed, changing the IP address of the worker node doesn't affect the cluster at all, as long as the new IP address doesn't conflict with the --pod-network-cidr range.
The kubelet is responsible for this, and it can use several options:
The kubelet is the primary "node agent" that runs on each node. It can
register the node with the apiserver using one of: the hostname; a
flag to override the hostname; or specific logic for a cloud provider.
For instance, if you decide to change the hostname of a worker node, it will become unreachable.
There are two ways to change the IP address properly:
Re-join the worker node to the cluster with the new IP (after it has already been changed)
Configure the kubelet to advertise a specific IP address.
The latter option can be done as follows:
modify /etc/systemd/system/kubelet.service.d/10-kubeadm.conf by adding KUBELET_EXTRA_ARGS=--node-ip=<NEW_IP_ADDRESS>
run sudo systemctl daemon-reload, since the config file was changed
run sudo systemctl restart kubelet.service
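A minimal sketch of those steps; 10.240.0.15 stands in for the node's new address (a placeholder):
# line to add inside the [Service] section of /etc/systemd/system/kubelet.service.d/10-kubeadm.conf:
#   Environment="KUBELET_EXTRA_ARGS=--node-ip=10.240.0.15"
sudo systemctl daemon-reload             # reload unit files after the edit
sudo systemctl restart kubelet.service   # restart the kubelet so it advertises the new IP
kubectl get nodes -o wide                # INTERNAL-IP should now show the new address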
Useful links:
Specify internal ip for worker nodes (it's a bit dated on how it's done; it should be done as described above, but the idea is the same).
CLI tools - Kubelet

Related

How to select a specific network interface when joining a node in Kubernetes?

I have a single master cluster with 3 worker nodes. The master node has one network interface of 10Gb capacity and all worker nodes have two interfaces: 10Gb and 40Gb interface. They are all connected via a switch.
By default, Kubernetes binds to the default network interface, eth0, which is the 10Gb interface on the worker nodes. How do I specify the 40Gb interface when joining?
The kubeadm init command has an --apiserver-advertise-address argument, but this is for the apiserver. Is there any equivalent option for the worker nodes so the communication between master and workers (and between workers) is carried over the 40Gb link?
Please note that this is a bare-metal on-prem installation with OSS Kubernetes v1.20.
You can use the --hostname-override flag to override the default kubelet behavior. By default, the kubelet's node name equals the machine's hostname, and its node IP defaults to the address of the interface that carries the default route (the default gateway).
For more details please visit this issue.
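In the same spirit as the --node-ip steps from the first answer on this page, a sketch of pointing the kubelet at the 40Gb interface's address before joining; it assumes a deb-based kubeadm install where /etc/default/kubelet is sourced, and the address 10.40.0.2 plus the join parameters are placeholders:
echo 'KUBELET_EXTRA_ARGS=--node-ip=10.40.0.2' | sudo tee /etc/default/kubelet   # placeholder: address of the 40Gb NIC
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>   # values from 'kubeadm token create --print-join-command'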
There is nothing specific; you would have to manage this at the routing level. If you're using BGP internally, it would usually handle this automatically because the faster link will have a more favorable metric, but if you're using a simpler static routing setup you may need to tweak things.
Pods live on internal virtual adapters so they don't listen on any physical interface (for all CNIs I know of anyway, except the AWS one).
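If you do need to tweak a static setup, it helps to first check which link traffic to a given node actually takes; 10.10.0.12 stands in for another node's address and ens2f0 for the 40Gb interface (both placeholders):
ip route get 10.10.0.12                      # shows the interface and route currently used to reach that node
sudo ip route add 10.10.0.12/32 dev ens2f0   # example of pinning traffic to that node onto the 40Gb link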

Move kubernetes (kubespray) nodes to another IP range

I installed a Kubernetes cluster using Kubespray on my internal network, 192.168.0.0/24.
Now I need more nodes and these nodes will be located on other networks.
So I will set up a VPN between the current nodes and the new nodes.
The problem is that I cannot find any information specifically related to Kubespray on how to change the internal IPs of the nodes in order to "move them onto the VPN".
I think that after moving the nodes onto the VPN, it's just a matter of installing the new nodes in the cluster and I'm set.
So: using Kubespray (or manually, if not possible via Kubespray directly), how can I change the internal IPs of the nodes in order to move them onto the VPN?
Kubespray has supported kubeadm for cluster creation since v2.3 and has deprecated non-kubeadm deployment starting from v2.8.
I assume that you can use kubeadm with your Kubespray installation.
I see two ways to achieve your goal, both from the Kubernetes side:
By using the ifconfig command:
run kubeadm reset on the node you want to reconfigure
run ifconfig <network interface> <IP address>
run kubeadm join in order to add the node again with the new IP
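A sketch of that sequence; the interface name eth0, the address 10.8.0.10, and the join parameters are placeholders:
sudo kubeadm reset                                     # remove the node's current cluster state
sudo ifconfig eth0 10.8.0.10 netmask 255.255.255.0     # assign the new (VPN) address
sudo kubeadm join <control-plane-endpoint>:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>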
By editing the kubelet configuration:
run systemctl status kubelet to find out which systemd drop-in file the kubelet is started with (usually /etc/systemd/system/kubelet.service.d/10-kubeadm.conf)
edit it by adding KUBELET_EXTRA_ARGS=--node-ip=<IP_ADDRESS>
run systemctl daemon-reload
run systemctl restart kubelet
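Either way, the result can be verified from the control plane; worker-1 is a placeholder node name:
kubectl get nodes -o wide                          # the INTERNAL-IP column should show the new address
kubectl describe node worker-1 | grep InternalIP   # confirms the address the node advertises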
Please let me know if that helped.

Initiate a Kubernetes cluster with Public IPs

I have some VMs on top of a private cloud (OpenStack). While trying to create a cluster on the master node, it initiates the cluster on its private IP by default. When I tried to initiate a cluster based on the public IP of the master node, using the --apiserver-advertise-address=publicIP flag, it gives an error.
The initiation phase stops as below:
[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
[kubelet-check] Initial timeout of 40s passed.
I've noticed that I do not see the public IP of the VM from inside it (running "ip addr"), but the VMs are reachable via their public IPs.
Is there a way to set up a Kubernetes cluster on top of the "public IPs" of nodes at all?
Private IP addresses are used for communication between instances, and public addresses are used for communication with networks outside the cloud, including the Internet. So it's recommended to set up the cluster only on private addresses.
When you launch an instance, it is automatically assigned a private IP address that stays the same until you explicitly terminate the instance. Rebooting an instance has no effect on the private IP address.
A pool of floating IP addresses, configured by the cloud administrator, is available in OpenStack Compute. The project quota defines the maximum number of floating IP addresses that you can allocate to the project.
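For completeness, a sketch of allocating and attaching a floating IP with the OpenStack CLI; the pool name, server name, and address are placeholders:
openstack floating ip create public                            # allocate a floating IP from the 'public' pool
openstack server add floating ip my-k8s-master 203.0.113.10    # attach it to the instance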
This error is likely caused by:
The kubelet is not running
The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
systemctl status kubelet
journalctl -xeu kubelet
Try adding the floating IPs of the machines to the /etc/hosts file on the master node from which you want to deploy the cluster, and run the installation again.
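A sketch of such /etc/hosts entries; the floating IPs and hostnames are placeholders:
# /etc/hosts on the node from which the cluster is deployed
203.0.113.10  k8s-master
203.0.113.11  k8s-worker-1
203.0.113.12  k8s-worker-2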

Hostname and IP address modification of 'k8s-master' in kubernetes

Is it mandatory to set the hostname 'k8s-master' for the master node in Kubernetes?
Can we change the IP addresses of the master node and child nodes after a successful installation of Kubernetes?
In general, the master node name is taken from the hostname of the machine on which you bootstrap the Kubernetes cluster; moreover, the master node name can be set during cluster creation through the kubeadm install tool:
kubeadm init --node-name=<custom-node-name>
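For example (k8s-master-01 is a placeholder name):
sudo kubeadm init --node-name k8s-master-01   # registers the control-plane node under this name
kubectl get nodes                             # afterwards, the node appears with the chosen name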
The solution provided by #valerius257 for Kubernetes master node IP replacement, in a comment from #char, works fine and has already been checked in my environment. The answer is retained for any further contributors' research.

Kubernetes wireguard flannel overlay network on VMs blocked by kubefirewall

I’m wondering if anyone has been able to get Kubernetes running properly over the Wireguard VPN.
I created a 2-node cluster on 2 VMs linked by WireGuard. The master node with the full control plane works fine and can accept worker nodes over the WireGuard interface. I set the node IP for the kubelet to the WireGuard IP and also set the iface argument for flannel to use the WireGuard interface instead of the default. This seems to work well so far.
The problem arises when I try to join the worker node into the cluster via the join command.
Note that I also set the kubelet node IP on the worker node to the WireGuard IP.
On join, all traffic to the node is dropped by the "Kubernetes firewall". By that I mean that if you check iptables after issuing the join command on the worker node, you will see the KUBE-FIREWALL chain, which drops all marked packets. The chain is standard, as it's the same on the master, but I presume the piece I'm missing is what to do to get traffic flowing on the worker node after joining the master node.
I'm unable to even ping google.com or communicate with the master over the WireGuard tunnel. Pods can't be scheduled either. I have manually deleted the KUBE-FIREWALL rule as a test, which then allows pods to be scheduled and regular traffic to flow on the worker node, but the kubelet quickly recreates the rule after around a minute.
I’m thinking a route needs to be created before the join or something along those lines.
Has anyone tried this before? I would really appreciate any suggestions.
After getting some help, I figured out that the issue was WireGuard-related. Specifically, running wg-quick as a service apparently creates an ip rule that routes ALL outgoing traffic via the wg0 interface, except WireGuard's own secured background channel. This causes issues when trying to connect a worker to the cluster, so simply creating and starting the wg0 interface manually with something like the commands below works:
ip link add dev wg0 type wireguard       # create the WireGuard interface
ip addr add 10.0.0.4/24 dev wg0          # assign this node's tunnel address
wg addconf wg0 /etc/wireguard/wg0.conf   # load keys and peers from the existing config
ip link set wg0 up                       # bring the tunnel up (without wg-quick's catch-all ip rule)
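A quick way to check the tunnel before running kubeadm join; 10.0.0.1 is assumed here to be the master's wg0 address (a placeholder):
wg show wg0            # confirm the peer handshake
ping -c 3 10.0.0.1     # placeholder: the master's WireGuard address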