How to set/change the internal IP of a node in a Kubernetes cluster? - kubernetes

I would like to know how the internal IP of a node in a Kubernetes cluster is set. In my environment it takes the node's network IP as its internal IP. What happens if that IP address changes because the same node is reconnected from a different network? I also want to know how to set the internal IP of a node to 127.0.0.1 for running the cluster on a standalone server, which would help me ship the entire cluster as an OVA image. Does my question make sense? I would like to know your opinions on this.
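For context, the internal IP a node reports is normally whatever address kubelet detects (or is told) when it registers the node; on kubeadm-style installs it can be pinned with kubelet's --node-ip flag. A minimal sketch, assuming a Debian/Ubuntu kubeadm node where /etc/default/kubelet supplies KUBELET_EXTRA_ARGS (the 127.0.0.1 value is purely illustrative, and a loopback internal IP only makes sense for a genuinely single-node, self-contained setup):
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=127.0.0.1
# restart kubelet and verify the address the node registers
systemctl restart kubelet
kubectl get nodes -o wide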

Related

Does Kubernetes need to assign real IP addresses?

I am trying to understand Kubernetes and how it works under the hood. As I understand it each pod gets its own IP address. What I am not sure about is what kind of IP address that is.
Is it something that the network admins at my company need to hand out? Or is it an internal kind of IP address that is not addressable on the full network?
I have read about network overlays (like Project Calico) and I assume they play a role in this, but I can't seem to find a page that explains the connection. (I think my question is too remedial for the internet.)
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
Kubernetes clusters
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
The thing with Kubernetes is that it is not a service like e.g. a Virtual Machine, but a cluster that has its own networking functionality and management, including IP address allocation and network routing.
Your nodes may be virtual or physical machines, but they are registered with the node controller, e.g. for health checks and, most relevant here, for IP address management.
The node controller is a Kubernetes master component which manages various aspects of nodes.
The node controller has multiple roles in a node’s life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).
Cluster Architecture - Nodes
IP address management
Kubernetes Networking depends on the Container Network Interface (CNI) plugin your cluster is using.
A CNI plugin is responsible for ... It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.
It is common that each node is assigned a CIDR range of IP addresses that the node then assigns to the pods scheduled on it.
The GKE network overview describes well how this works on GKE.
Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network.
Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default).
Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster.
Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network.
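If you want to see the node and pod addressing described above on your own cluster, commands along these lines should work on most setups (the node name is a placeholder):
kubectl get nodes -o wide                                   # INTERNAL-IP column per node
kubectl get node <node-name> -o jsonpath='{.spec.podCIDR}'  # pod CIDR assigned to that node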
Kubernetes Pods receive a real IP address, much like Docker containers do thanks to the bridge network interface; the really hard part to understand is the Pod-to-Pod connection between different nodes, and that is the black magic performed by kube-proxy with the help of iptables/nftables/IPVS (depending on which mode you are running on the node).
The story is different for IP addresses assigned to a Service of the ClusterIP kind: that is in fact a virtual IP used to transparently redirect traffic to the endpoints as needed.
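You can see that virtual-IP-to-endpoints mapping on a live cluster by comparing a Service with its Endpoints object (my-service is a placeholder name):
kubectl get service my-service     # the virtual ClusterIP
kubectl get endpoints my-service   # the real pod IPs that traffic is redirected to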
Kubernetes networking can look difficult to understand, but we are lucky: Tim Hockin gave a really good talk named Life of a Packet that provides a clear overview of how it works.

Kubernetes wireguard flannel overlay network on VMs blocked by kubefirewall

I'm wondering if anyone has been able to get Kubernetes running properly over the WireGuard VPN.
I created a 2-node cluster on 2 VMs linked by WireGuard. The master node with the full control plane works fine and can accept worker nodes over the WireGuard interface. I set the node IP for kubelet to the WireGuard IP and also set the iface argument for flannel to use the WireGuard interface instead of the default. This seems to work well so far.
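For anyone trying to reproduce this, the two settings described above are roughly the following (the WireGuard address and the wg0 interface name come from this setup; the kubelet file path assumes a kubeadm install on Debian/Ubuntu):
# kubelet: register the node with its WireGuard address
# /etc/default/kubelet
KUBELET_EXTRA_ARGS=--node-ip=<wireguard-ip>
# flannel: add --iface=wg0 to the kube-flannel container args in kube-flannel.yml, e.g.
#   args: ["--ip-masq", "--kube-subnet-mgr", "--iface=wg0"]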
The problem arises when I try to join the worker node into the cluster via the join command.
Note that I also edited the node ip of kubelet to be the wireguard ip on the worker node.
On join, all traffic to the node is dropped by the "Kubernetes firewall". By that I mean that if you check iptables after issuing the join command on the worker node, you will see a KUBE-FIREWALL chain which drops all marked packets. The chain is standard (it is the same on the master), but I presume the piece I'm missing is whatever gets traffic flowing on the worker node after it joins the master node.
I'm unable to even ping google.com or communicate with the master over the WireGuard tunnel. Pods can't be scheduled either. As a test I manually deleted the KUBE-FIREWALL rule, which then allows pods to be scheduled and regular traffic to flow on the worker node, but kubelet recreates the rule after around a minute.
I’m thinking a route needs to be created before the join or something along those lines.
Has anyone tried this before? I would really appreciate any suggestions.
After getting some help I figured out that the issue was WireGuard-related. Specifically, running wg-quick as a service apparently creates an ip rule that routes ALL outgoing traffic via the wg0 interface, except the WireGuard secured channel itself. This causes issues when trying to join a worker to the cluster, so simply creating and starting the wg0 interface manually with something like the following works:
ip link add dev wg0 type wireguard        # create the WireGuard interface
ip addr add 10.0.0.4/24 dev wg0           # assign the tunnel address
wg addconf wg0 /etc/wireguard/wg0.conf    # load the key/peer configuration
ip link set wg0 up                        # bring the interface up

Kubernetes cannot access app via POD IP

I set up a cluster with 2 machines, which are not in the same local subnet but can connect to each other; machine A is master + node and machine B is a node. I then used flannel (subnet 172.16.0.0/16) as the network plugin. After deploying apps, I ran into a problem: I can access an app via its pod IP on machine A, but I cannot access the same app via pod IP on machine B, and curl reports "No route to host 172.16.0.x".
I think there are no route rules to the other machine, but I don't know how to configure the network. Could anyone help explain if I am missing something important? Thank you very much.
I used this kubernetes/contrib ansible script to deploy the cluster, and did not change any configuration related to flannel.
You can use a Service of type: NodePort to access the pod over all of the nodes' IPs.
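For example, a sketch using kubectl expose, assuming a Deployment named my-app serving on port 8080 (names and ports are placeholders):
kubectl expose deployment my-app --type=NodePort --port=8080 --target-port=8080
kubectl get service my-app        # note the allocated nodePort (30000-32767 range)
# the app is then reachable on <any-node-ip>:<nodePort>, even when direct pod-to-pod
# routing between the two machines is broken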

Can't get to GCE instance from k8s pods on the same subnet

I have a cluster with container range 10.101.64.0/19 on a network A and subnet SA with range 10.101.0.0/18. On the same subnet there is a VM in GCE with IP 10.101.0.4, and it can be pinged just fine from within the cluster, e.g. from a node with 10.101.0.3. However, if I go to a pod on this node, which got the address 10.101.67.191 (which is expected - this node assigns addresses from 10.101.67.0/24 or so), I don't get a meaningful answer from the VM I want to access from this pod. Using tcpdump on ICMP, I can see that when I ping that VM from the pod, the ping gets there, but I never receive the reply in the pod. It seems like the VM is just throwing it away.
Any idea how to resolve this? Some routes or firewall rules? I am using the same topology in the default subnet created by Kubernetes, where this works, but I cannot find anything relevant that could explain this (there are some routes and firewall rules that could influence it, but I wasn't successful when trying to mimic them in my subnet).
I think it is a firewall issue.
I've already provided the solution here on Stack Overflow.
It may help to solve your case.
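In case that link goes stale: the usual fix is a VPC firewall rule that allows traffic from the cluster's pod range into the network the VM sits on, roughly like this (the rule name is a placeholder; the network name A and the 10.101.64.0/19 range are taken from the question):
gcloud compute firewall-rules create allow-from-pods \
  --network A \
  --source-ranges 10.101.64.0/19 \
  --allow tcp,udp,icmp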

Defining the CIDR address and advertise-address in a Kubernetes installation

I am trying to install Kubernetes on my on-premises Ubuntu 16.04 server, referring to the following documentation:
https://medium.com/#Grigorkh/install-kubernetes-on-ubuntu-1ac2ef522a36
After installing kubelet, kubeadm, and kubernetes-cni, I found that I have to initialize kubeadm with the following command:
kubeadm init --pod-network-cidr=10.244.0.0/16 --apiserver-advertise-address=10.133.15.28 --kubernetes-version stable-1.8
Here I am totally confused about why we are setting the CIDR and the API server advertise address. Let me list my confusions about Kubernetes here:
Why are we specifying the CIDR and --apiserver-advertise-address here?
How can I find these two addresses for my server?
And why is flannel used in the Kubernetes installation?
I am new to this containerization and Kubernetes world.
Why are we specifying the CIDR and --apiserver-advertise-address here?
And why is flannel used in the Kubernetes installation?
Kubernetes uses the Container Network Interface (CNI) to create a special virtual network inside your cluster for communication between pods.
Here is some explanation of the "why" from the documentation:
Kubernetes imposes the following fundamental requirements on any networking implementation (barring any intentional network segmentation policies):
all containers can communicate with all other containers without NAT
all nodes can communicate with all containers (and vice-versa) without NAT
the IP that a container sees itself as is the same IP that others see it as
Kubernetes applies IP addresses at the Pod scope - containers within a Pod share their network namespaces - including their IP address. This means that containers within a Pod can all reach each other’s ports on localhost. This does imply that containers within a Pod must coordinate port usage, but this is no different than processes in a VM. This is called the “IP-per-pod” model.
So, flannel is one of the CNI plugins which can be used to create the network that connects all your pods, and the CIDR option defines a subnet for that network. There are many alternative CNI plugins with similar functions.
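As a concrete illustration: flannel's default pod network is 10.244.0.0/16, which is exactly why kubeadm init above is given --pod-network-cidr=10.244.0.0/16; the two must match. The manifest URL below is the one commonly used at the time and may have moved since, so treat it as an assumption and check flannel's documentation for the current location:
# after kubeadm init, install the flannel CNI plugin
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml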
If you want more details about how networking works in Kubernetes, you can read the link above or, for example, here.
How can I find these two addresses for my server?
The API server advertise address has to be single and static. That address is used by all components to communicate with the API server. Unfortunately, Kubernetes has no support for multiple API server addresses per master.
But you can still use as many addresses on your server as you want; only one of them can be defined as --apiserver-advertise-address. The only requirement is that it has to be accessible from all nodes in your cluster.
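To find a suitable value on the server itself, you can list its addresses and pick the one the other nodes can reach (10.133.15.28 in the command above was presumably chosen this way):
ip -4 addr show        # list the server's IPv4 addresses
ip route get 8.8.8.8   # shows the source address used for outbound traffic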