Networking in K3S HA Cluster on Proxmox - kubernetes

I have my K3S HA cluster deployed on Proxmox VE.
In my cluster, I have 3 master nodes running the control plane, etcd, and kube-vip, plus 3 worker nodes.
I have a question about how to set up networking in this cluster properly. Currently, the nodes communicate with each other over a private network (private VLAN) and with the outside world via a secondary public IP (two network interfaces). That means each node has one IP on the private network and one public IP.
In my setup, I have configured kube-vip as a single entry point, with an additional public IP address routing to the Nginx Ingress, so right now I am spending 7 public IP addresses (3x master, 3x worker, 1x kube-vip). That's why I want to hide the cluster somehow.
What is the right way to do networking in this cluster? The main goal is that I don't want to spend another public IP address on every new worker.
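One way to avoid spending a public IP on every node is to bind K3s itself to the private VLAN and expose only the kube-vip VIP publicly. A minimal sketch of /etc/rancher/k3s/config.yaml on a server node, assuming the private VLAN is 10.0.10.0/24, the private interface is ens19, and 203.0.113.10 is the public VIP (all values are illustrative placeholders, not a verified setup):

    # /etc/rancher/k3s/config.yaml (server node) - placeholder values only
    node-ip: 10.0.10.11          # address on the private VLAN, used for node-to-node traffic
    flannel-iface: ens19         # keep the default CNI on the private interface
    tls-san:
      - 203.0.113.10             # the kube-vip VIP, so the API server certificate covers it
    # worker nodes would set only node-ip (and flannel-iface) and need no public address of their own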

Related

In Private GKE Cluster achieve dedicated public IP as source IP for each pod for outgoing traffic

Requirement: With private GKE (version 1.21.11-gke.1100), each pod is required to have a dedicated public IP as its source IP when reaching the internet. It is not required for ingress, only for egress.
Solution tried: Cloud NAT. It works partially. Meaning: suppose we have 10 pods and each of them is made to run on a distinct node. Cloud NAT does not assign a unique IP to each pod even when Minimum ports per VM instance is set to the maximum possible value of 57344.
Experiment done: 10 NAT gateway IPs are assigned to the NAT gateway. 8 pods are created, each running on a dedicated node. Cloud NAT assigned only 3 Cloud NAT IPs instead of 8, even though there are 10 IPs available.
Cloud NAT is configured as below:
Manual NAT IP address assignment: true
Dynamic port allocation: disabled
Minimum ports per VM instance: 57344 (this decides how many VMs can be assigned to the same Cloud NAT IP)
Endpoint-Independent Mapping: disabled
Instead of converting to a public GKE cluster, is there an easier way of achieving this goal?
Has anyone ever done such a setup that is proven to work?
You can create the NAT gateway instance and forward the traffic from there.
Here is a Terraform script to create it: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples
https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
If you are looking to use Cloud NAT with routes, you can check out this: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#private-clusters
Terraform code for the NAT: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/terraform/network.tf#L84
Demo architecture: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#demo-architecture
That's expected behavior, because that's what NAT does. Network Address Translation will always hide the private IP address of whatever is behind it (in this case a Pod or Node IP) and will forward traffic to the internet using the public NAT IP. Return traffic comes back to the public NAT IP, which knows which Pod to route the traffic back to.
In other words, there is no way, using managed Cloud NAT, to ensure that each pod in your cluster gets a unique public IP on egress.
The only ways I can see to solve this are to:
Create a public GKE cluster with 10 nodes (following your example) and, using taints, tolerations and a node selector, run each pod on a dedicated node; this way, when the pod egresses to the internet, it uses that node's public IP (see the sketch after this list).
Create a multi-NIC GCE instance, deploy some proxy on it (HAProxy for example) and configure it to route egress traffic out of a different interface for each of the pods behind it (note that a multi-NIC instance can only have 8 interfaces).
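For the first option, pinning a pod to its own dedicated node is plain Kubernetes scheduling. A hedged sketch, assuming the node has been tainted with dedicated=egress-1:NoSchedule and labeled egress-pool=node-1 (both names are made up for illustration):

    # Illustrative only: a pod pinned to one tainted, labeled node
    apiVersion: v1
    kind: Pod
    metadata:
      name: egress-pod-1
    spec:
      nodeSelector:
        egress-pool: node-1        # matches a label placed on the dedicated node
      tolerations:
        - key: dedicated
          operator: Equal
          value: egress-1
          effect: NoSchedule       # allows scheduling onto the tainted node
      containers:
        - name: app
          image: nginx             # placeholder workload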

Google Kubernetes Engine NAT routing outgoing IP doesn't work

I want to connect a GKE (Google Kubernetes Engine) cluster to MongoDB Atlas, but I need to allowlist the IPs of my nodes. Sometimes I have 3 nodes, sometimes I have 10, and sometimes nodes go down and get re-created - the constant change means there is no single IP.
I have tried to create a NAT on GCP following this guide: https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
I also want to allowlist my cluster's IP in the Google Maps APIs so I can use, for example, the Directions API.
This is a common situation, since there may be many other third-party APIs I want to use that only accept incoming requests from certain IPs, besides Atlas or Google Maps.
How can I achieve this?
A private GKE cluster means the nodes do not have public IP addresses, but you mentioned that "the actual outbound transfer goes from the node's IP instead of the NAT's".
So it looks like you have a public GKE cluster, and you have to use the same NAT option to get a single outbound egress IP.
If you are using an ingress, there is a single entry point for incoming requests to the cluster, but if your nodes have public IPs, Pods will use the node's IP for outgoing requests unless you use NAT or similar.
With NAT in place there is a single outbound IP, so requests going out of Pods won't carry the node's IP; they will use the NAT IP instead.
How to set up the NAT gateway:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
This Terraform is ready for GKE clusters; you just have to run the example, passing in the project ID and the other variables.
The above Terraform example will create the NAT for you, and you can verify the Pods' egress IP as soon as the NAT is in place. You mostly won't need any changes to the NAT Terraform script.
GitHub link: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway
If you are not familiar with Terraform, you can follow this article to set up the NAT, which will stop the SNAT to the node IP for Pods: https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a
A private GKE cluster means the nodes do not have public IP addresses. If the service on the other end is receiving packets from the node's own IP, then you have a public cluster.
You can find further explanation in this document.
If you want a static, public IP for the entire GKE cluster, you should consider Ingress for External Load Balancing. You can find instructions on how to configure it here.
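As a rough illustration of that approach, an external GKE Ingress can be bound to a reserved global static IP through an annotation. The sketch below assumes an address reserved under the name my-static-ip and a Service called web on port 80 (both are placeholder names):

    # Illustrative GKE Ingress using a pre-reserved global static IP
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: web-ingress
      annotations:
        kubernetes.io/ingress.global-static-ip-name: my-static-ip   # address reserved beforehand in GCP
    spec:
      defaultBackend:
        service:
          name: web                # placeholder Service name
          port:
            number: 80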

Network bottleneck in Kubernetes from my DNS-provider?

Let's say I have a Kubernetes setup that consists of the following:
3 Control Planes
2 Worker Nodes
The two Worker Nodes ensure my apps can always be deployed on more than one machine, and the three Control Planes ensure I always have something that can manage the Worker Nodes - redundancy everywhere.
Now, the bottleneck:
When my DNS-provider forwards mysite.com to a machine, it does so to my public IP.
This hits my router, and I need to forward that request to my cluster... but which machine do I forward that to?
I think I am missing something here.
If I have ingress set up, it allows me to take anything that resembles mysite.com/somepath and forward it to a load balancer, but how do I get from my router to my ingress?
Don't I need to point the router to the cluster by an IP-address?
And when that node is down, my cluster can't be accessed, right?
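For reference, the host/path routing described above is what an Ingress resource declares; a minimal sketch, where mysite.com, /somepath and the Service name my-app are placeholders (how the ingress controller itself is reachable from the router is the separate, open part of the question):

    # Illustrative Ingress routing mysite.com/somepath to a Service
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: mysite
    spec:
      rules:
        - host: mysite.com
          http:
            paths:
              - path: /somepath
                pathType: Prefix
                backend:
                  service:
                    name: my-app   # placeholder Service name
                    port:
                      number: 80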

Kubernetes masters and nodes in different subnets

We want to set up a Kubernetes cluster with 3 masters inside an isolated SDDC cloud and the worker nodes inside the private network. The connection from the private network to the cloud goes through a load balancer, and all required ports to the Kubernetes masters inside the SDDC are open. We are using Calico for networking. When the workers join, the tunl0 interface is created without an IP and without routing. What's the right way to set up the connection in this case?
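When masters and workers sit in different subnets, Calico's behaviour usually hinges on how calico-node detects its node IP and whether IPIP encapsulation is enabled across subnets. A hedged sketch of the relevant calico-node DaemonSet environment variables (the interface pattern is a placeholder, and this is not necessarily the fix for the tunl0 issue described above):

    # Illustrative snippet from the calico-node container's env section
    env:
      - name: IP_AUTODETECTION_METHOD
        value: "interface=ens.*"     # placeholder: match the NIC that should carry Calico traffic
      - name: CALICO_IPV4POOL_IPIP
        value: "CrossSubnet"         # use IPIP only between nodes in different subnets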

Does Kubernetes need to assign real IP addresses?

I am trying to understand Kubernetes and how it works under the hood. As I understand it, each pod gets its own IP address. What I am not sure about is what kind of IP address that is.
Is it something that the network admins at my company need to hand out? Or is it an internal kind of IP address that is not addressable on the full network?
I have read about network overlays (like Project Calico) and I assume they play a role in this, but I can't seem to find a page that explains the connection. (I think my question is too remedial for the internet.)
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
Kubernetes clusters
Is the IP address of a Pod a full IP address on my network (just like a Virtual Machine would have)?
The thing with Kubernetes is that it is not a service like, e.g., a Virtual Machine, but a cluster that has its own networking functionality and management, including IP address allocation and network routing.
Your nodes may be virtual or physical machines, but they are registered in the NodeController, e.g. for health checks and, most commonly, for IP address management.
The node controller is a Kubernetes master component which manages various aspects of nodes.
The node controller has multiple roles in a node’s life. The first is assigning a CIDR block to the node when it is registered (if CIDR assignment is turned on).
Cluster Architecture - Nodes
IP address management
Kubernetes Networking depends on the Container Network Interface (CNI) plugin your cluster is using.
A CNI plugin is responsible for ... It should then assign the IP to the interface and setup the routes consistent with the IP Address Management section by invoking appropriate IPAM plugin.
It is common for each node to be assigned a CIDR range of IP addresses, which the node then assigns to the pods scheduled on it.
The GKE network overview describes well how this works on GKE.
Each node has an IP address assigned from the cluster's Virtual Private Cloud (VPC) network.
Each node has a pool of IP addresses that GKE assigns Pods running on that node (a /24 CIDR block by default).
Each Pod has a single IP address assigned from the Pod CIDR range of its node. This IP address is shared by all containers running within the Pod, and connects them to other Pods running in the cluster.
Each Service has an IP address, called the ClusterIP, assigned from the cluster's VPC network.
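To make the per-node range concrete, the CIDR shows up on the Node object itself; a fragment of a Node manifest might look like this (names and addresses are made-up examples):

    # Illustrative Node object fragment showing the per-node pod range
    apiVersion: v1
    kind: Node
    metadata:
      name: worker-1
    spec:
      podCIDR: 10.244.1.0/24         # pods on this node get their IPs from this range
      podCIDRs:
        - 10.244.1.0/24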
Kubernetes Pods receive a real IP address, much like Docker containers do thanks to the bridge network interface: the genuinely hard part to understand is the Pod-to-Pod connection between different nodes, and that's the black magic performed via kube-proxy with the help of iptables/nftables/IPVS (depending on which mode you are running on the node).
The IP addresses assigned to a Service of the ClusterIP kind are a different story: that is in fact a virtual IP, used to transparently redirect to the endpoints as needed.
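As a concrete example of that virtual IP, a ClusterIP Service is just a small manifest; the name, label and ports below are placeholders:

    # Illustrative Service of type ClusterIP; the cluster allocates the virtual IP itself
    apiVersion: v1
    kind: Service
    metadata:
      name: my-app
    spec:
      type: ClusterIP                # the default, shown explicitly for clarity
      selector:
        app: my-app                  # placeholder label selecting the backing Pods
      ports:
        - port: 80                   # port served on the ClusterIP
          targetPort: 8080           # container port on the endpoints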
Kubernetes networking can look difficult to understand, but we are lucky because Tim Hockin gave a really good talk named Life of a Packet that gives a clear overview of how it works.