Google Kubernetes Engine NAT routing outgoing IP doesn't work

I want to connect a GKE (Google Kubernetes Engine) cluster to MongoDB Atlas, which requires me to whitelist (allow) the IPs of my nodes. But sometimes I have 3 nodes, sometimes 10, and nodes are regularly torn down and re-created, so this constant churn means there is no single stable IP.
I have tried to create a NAT on GCP following this guide: https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
I also want to whitelist my cluster's IP for the Google Maps APIs so I can use, for example, the Directions API.
This is a common situation, since besides Atlas or Google Maps there are many other third-party APIs I may want to use that only accept incoming requests from specific IPs.
How can I achieve this?

A private GKE cluster means the nodes do not have public IP addresses, but you mentioned that "the actual outbound transfer goes from the node's IP instead of the NAT's".
It looks like you have a public GKE cluster, so you have to use the NAT option to get a single outbound egress IP.
An Ingress only gives you a single entry point for incoming requests to the cluster; if your nodes have public IPs, Pods will still use the node's IP for outgoing requests unless you use NAT or something similar.
Once NAT is in place you will have your single outbound IP: requests going out of Pods won't carry the node's IP, they will use the NAT IP instead.
How to set up the NAT gateway:
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
Here is a Terraform example ready for GKE clusters; you just have to run it, passing the project ID and the other variables.
The above Terraform example will create the NAT for you, and you can verify the Pods' egress IP as soon as the NAT is set up. You mostly won't need any changes to the NAT Terraform script.
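To verify which IP the Pods actually egress with once the NAT is in place, a quick check (the image and the "what is my IP" service are arbitrary choices):

```
# Run a throwaway pod and print the public IP an external service sees
kubectl run egress-check --rm -it --restart=Never \
  --image=curlimages/curl --command -- curl -s https://checkip.amazonaws.com
```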
GitHub link: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway
If you are not familiar with Terraform, you can follow this article to set up the NAT, which stops SNAT for Pods so their traffic goes through Cloud NAT: https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a
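If you'd rather not use Terraform at all, a minimal gcloud sketch of the same Cloud NAT setup (the region, network and resource names are placeholders). Note that, as the article above describes, on a public cluster you also need to stop SNAT for the Pod range so Pod traffic becomes eligible for Cloud NAT:

```
# Reserve the static egress IP you will whitelist
gcloud compute addresses create nat-egress-ip --region=us-central1

# Create a Cloud Router and a Cloud NAT that uses the reserved IP
gcloud compute routers create nat-router --network=default --region=us-central1
gcloud compute routers nats create gke-nat \
  --router=nat-router --region=us-central1 \
  --nat-external-ip-pool=nat-egress-ip \
  --nat-all-subnet-ip-ranges
```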

A private GKE cluster means the nodes do not have public IP addresses. If the service on the other end is receiving packets from the node's own IP, then you have a public cluster.
You can find further explanation in this document.
If you want a static, public IP for the entire GKE cluster, you should consider Ingress for External Load Balancing. You can find instructions on how to configure it here.
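For the inbound side, a minimal sketch of an Ingress pinned to a reserved global static IP (the address name, service name and port are placeholders):

```
# Reserve a global static IP
gcloud compute addresses create web-static-ip --global

# Create an Ingress that uses it via the GKE annotation
kubectl apply -f - <<EOF
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    kubernetes.io/ingress.global-static-ip-name: web-static-ip
spec:
  defaultBackend:
    service:
      name: web-service
      port:
        number: 80
EOF
```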

Related

In Private GKE Cluster achieve dedicated public IP as source IP for each pod for outgoing traffic

Requirement: With a private GKE cluster (version 1.21.11-gke.1100), each pod is required to have a dedicated public IP as its source IP when reaching the internet. This is not required for ingress, only for egress.
Solution tried: Cloud NAT. It works partially. Suppose we have 10 pods and each of them is made to run on a distinct node: Cloud NAT does not assign a unique IP to each pod even when Minimum ports per VM instance is set to the maximum possible value of 57344.
Experiment done: 10 NAT IPs are assigned to the NAT gateway and 8 pods are created, each running on a dedicated node. Cloud NAT assigned only 3 of the NAT IPs instead of 8, even though 10 IPs are available.
Cloud NAT is configured as below:
Manual NAT IP address assignment: true
Dynamic port allocation: disabled
Minimum ports per VM instance: 57344 (this decides how many VMs can be assigned to the same Cloud NAT IP)
Endpoint-Independent Mapping: disabled
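For reference, roughly the same configuration expressed as gcloud flags (the router, NAT and reserved address names are placeholders):

```
gcloud compute routers nats create pod-egress-nat \
  --router=nat-router --region=us-central1 \
  --nat-external-ip-pool=nat-ip-1,nat-ip-2,nat-ip-3,nat-ip-4,nat-ip-5,nat-ip-6,nat-ip-7,nat-ip-8,nat-ip-9,nat-ip-10 \
  --min-ports-per-vm=57344 \
  --no-enable-dynamic-port-allocation \
  --no-enable-endpoint-independent-mapping \
  --nat-all-subnet-ip-ranges
```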
Instead of converting to a Public GKE cluster, is there an easier way of achieving this goal?
Has anyone ever done such a setup which is proved to work?
You can create the NAT gateway instance and forward the traffic from there.
Here is a Terraform script to create it: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/master/examples
https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
If you are looking to use Cloud NAT with routes, you can check out this: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#private-clusters
Terraform code for the NAT: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/terraform/network.tf#L84
Demo architecture: https://github.com/GoogleCloudPlatform/gke-private-cluster-demo/blob/master/README.md#demo-architecture
That's expected behavior, because that's what NAT does. Network Address Translation will always hide the private IP address of whatever is behind it (in this case a Pod or node IP) and will forward traffic to the internet using the public NAT IP. Return traffic comes back to the public NAT IP, which knows which Pod to route it back to.
In other words, there is no way, using managed Cloud NAT, to ensure each pod in your cluster gets a unique public IP on egress.
The only options I can see to solve this are to:
Create a public GKE cluster with 10 nodes (following your example) and, using taints, tolerations and a node selector, run each pod on a dedicated node; this way, when a pod egresses to the internet, it will use its node's public IP (a minimal sketch follows after this list).
Create a multi-NIC GCE instance, deploy a proxy on it (HAProxy, for example) and configure it to route egress traffic through a different interface for each of the pods behind it (note that a multi-NIC instance can have at most 8 interfaces).
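A minimal sketch of the first option, pinning one pod to one dedicated node (the node name, taint key, labels and image are placeholders); repeat per pod/node pair:

```
# Taint and label one node so only the matching pod lands on it
kubectl taint nodes node-1 dedicated=pod-1:NoSchedule
kubectl label nodes node-1 dedicated=pod-1

# Pod that tolerates the taint and selects that node
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: egress-pod-1
spec:
  nodeSelector:
    dedicated: pod-1
  tolerations:
  - key: dedicated
    operator: Equal
    value: pod-1
    effect: NoSchedule
  containers:
  - name: app
    image: nginx
EOF
```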

Forwarding all Kubernetes traffic through a single node

I have a Kubernetes cluster with multiple nodes in two different subnets (x and y). I have an IPsec VPN tunnel setup between my x subnet and an external network. Now my problem is that the pods that get scheduled in the nodes on the y subnet can't send requests to the external network because they're in nodes not covered by the VPN tunnel. Creating another VPN to cover the y subnet isn't possible right now. Is there a way in k8s to force all pods' traffic to go through a single source? Or any clean solution even if outside of k8s?
Posting this as a community wiki, feel free to edit and expand.
There is no built-in functionality in Kubernetes that can do this. However, there are two available options which can help to achieve the required setup:
Istio
If the external services are well known, then it's possible to use an Istio egress gateway. We are interested in this use case:
Another use case is a cluster where the application nodes don’t have
public IPs, so the in-mesh services that run on them cannot access the
Internet. Defining an egress gateway, directing all the egress traffic
through it, and allocating public IPs to the egress gateway nodes
allows the application nodes to access external services in a
controlled way.
Antrea egress
There's another solution which can be used: Antrea Egress. Its use cases are:
You may be interested in using this capability if any of the following apply:
A consistent IP address is desired when specific Pods connect to
services outside of the cluster, for source tracing in audit logs, or
for filtering by source IP in external firewall, etc.
You want to force outgoing external connections to leave the cluster
via certain Nodes, for security controls, or due to network topology
restrictions.
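A minimal sketch of an Antrea Egress resource, assuming Antrea is the CNI with the Egress feature enabled; the labels are placeholders and the egressIP should be an address on a node in the x subnet (the apiVersion varies with the Antrea release, e.g. v1alpha2 on older versions):

```
kubectl apply -f - <<EOF
apiVersion: crd.antrea.io/v1beta1
kind: Egress
metadata:
  name: egress-via-subnet-x
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: needs-vpn
  egressIP: 10.0.1.50
EOF
```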

Kubernetes IP egress addressing

A question I have trouble finding an answer to is this:
When a K8s pod connects to an external service over the Internet, what IP address does that external service see the pod traffic coming from?
I would like to know the answer in two distinct cases:
there is a site-to-site VPN between the K8s cluster and the remote service
there is no such VPN, the access is over the public Internet.
Let me also add the assumption that the K8s cluster is running on AWS (not EKS; it is customer-managed).
Thanks for answering.
When the traffic leaves the pod and goes out, it usually undergoes NAT on the K8s node, so in most cases the traffic will arrive with the node's IP address as the source. You can manipulate this process by (re-)configuring the ip-masq-agent, which can allow you not to NAT this traffic, but then it is up to you to make sure the traffic can be routed on the Internet, for example by using a cloud-native NAT solution (Cloud NAT in the case of GCP, NAT Gateway in AWS).
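A minimal sketch of such an ip-masq-agent configuration, assuming the agent is already deployed in kube-system and reads a ConfigMap named ip-masq-agent; the CIDR is a placeholder for the destination network whose traffic you do not want NATed on the node:

```
kubectl create configmap ip-masq-agent -n kube-system --from-literal=config="$(cat <<'EOF'
# Destinations listed here are NOT masqueraded, so packets keep the pod IP
nonMasqueradeCIDRs:
  - 192.168.0.0/16
masqLinkLocal: false
resyncInterval: 60s
EOF
)"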

How to assign a single static source IP address for all pods of a service or deployment in kubernetes?

Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there an inbuilt mechanism in kubernetes which can enable a service deployed in a kube-cluster to obtain a static IP address?
There's no mechanism in Kubernetes for this yet.
Other possible solutions:
If the nodes of the cluster are in a private network behind a NAT, then just add your network's default gateway to the PG's whitelist.
If the whitelist can accept a CIDR instead of single IPs (like 86.34.0.0/24, for example), then add your cluster's network CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, then it gets more complicated:
a naive way would be to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters of just a few nodes.
if you have access to administer your network, then even though the nodes have public IPs, you can set up a NAT for the network anyway that targets only packets with the PG's IP as a destination.
if you don't have administrative access to the network, then another way is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, similarly to the point above (see the sketch after this list). This introduces a single point of failure though. In order to make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running: see below). Instead of using their node's IP to communicate with the PG, the replicas would share a VIP using keepalived, and that VIP would be added to the PG's whitelist (you can have a look at easy-keepalived and either try to use it directly or learn from it how it does things). This requires high privileges on the cluster: you need to be able to grant the pods of your proxy the NET_ADMIN and NET_RAW capabilities so they can add iptables rules and set up a VIP.
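A minimal sketch of the iptables NAT on such a proxy machine; the PG address 203.0.113.20 and the proxy's static public IP 198.51.100.10 are placeholders:

```
# On the proxy machine: enable forwarding and rewrite the source of traffic headed to the PG
sysctl -w net.ipv4.ip_forward=1
iptables -t nat -A POSTROUTING -d 203.0.113.20 -j SNAT --to-source 198.51.100.10

# On the cluster side, route traffic destined for the PG via this machine, e.g.:
# ip route add 203.0.113.20/32 via <proxy-private-ip>
```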
Update:
While waiting for builds and deployments over the last few days, I've polished my old VIP-iptables scripts, which I used to use as a replacement for external load-balancers on bare-metal clusters, so now they can also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin; some allow it with special pod annotations. However, most CNI plugins also involve a NAT when talking to the internet, so the pod IP being static on the internal network is kind of moot; what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT are set up". This is usually up to the tool you used to deploy Kubernetes (or OpenShift, in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.

Set static IP for outgoing requests

From my GKE cluster I need to access a service outside of it. This service restricts access by IP, allowing just one IP. So I have to set up a NAT or something like that, but I'm not really sure that setting up an external gateway/NAT on my GKE cluster is the right solution. Can you help me, please?
You can achieve this by configuring a NAT Gateway.
Here's a guide: https://github.com/johnlabarge/gke-nat-example
The key steps to note are that you'll need to recreate your GKE cluster to apply a network tag to the nodes, and then use that tag in your GCP route. (You cannot just apply the route to all nodes, as it would then be applied to your NAT gateway instance(s) as well.) A sketch of such a tagged route follows below.
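A minimal sketch of that tagged route; the instance name, zone, tag and priority are placeholders and assume a NAT gateway instance like the one from the guide above:

```
# Send default-route traffic from tagged GKE nodes through the NAT gateway instance
gcloud compute routes create gke-egress-via-nat \
  --network=default \
  --destination-range=0.0.0.0/0 \
  --next-hop-instance=nat-gateway \
  --next-hop-instance-zone=us-central1-a \
  --tags=gke-node-pool \
  --priority=800
```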
The other point to note (perhaps obviously) is that you cannot route all traffic through the NAT gateway unless you route all incoming traffic through it as well. I use it just for outbound traffic to a specific set of IPs which need a stable source.
You can also use kubeip in order to assign IP addresses (see their blog post).