How to IP whitelist a Kubernetes cluster

So I have a Kubernetes cluster running in Google Cloud, and from pods inside the cluster I need to access an external DB that has IP whitelisting configured. It seems I need a static, shared IP for the cluster's outgoing traffic; what's the best approach?
Setting up a service IP seems irrelevant, as that's for inbound traffic. I looked into Cloud NAT and it seems promising, but I'm not exactly sure how to set it up. Any docs/tutorials would be helpful, thanks!

According to the docs, when traffic leaves a Kubernetes cluster in GKE it gets SNATed with the IP of the node, so you could whitelist the IPs of all the GKE cluster nodes.
Here are some best practices for connecting to external services from a Kubernetes cluster, and an example of connecting to Cloud SQL from Google Kubernetes Engine.
And here is an example setup of Cloud NAT on GKE.
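If it helps, here is a minimal sketch of what that Cloud NAT setup can look like with gcloud. The resource names (nat-egress-ip, nat-router, nat-config), the region, and the default network are placeholder assumptions, and this assumes the cluster nodes have no external IPs (e.g. a private GKE cluster), since Cloud NAT only applies to instances without public addresses:

```
# Reserve a static external IP; this is the address you whitelist on the DB side
gcloud compute addresses create nat-egress-ip --region=us-central1

# Create a Cloud Router in the same region and VPC network as the cluster
gcloud compute routers create nat-router \
    --network=default \
    --region=us-central1

# Create the Cloud NAT config and pin it to the reserved IP
gcloud compute routers nats create nat-config \
    --router=nat-router \
    --region=us-central1 \
    --nat-external-ip-pool=nat-egress-ip \
    --nat-all-subnet-ip-ranges
```

With this in place, outbound traffic from the pods should leave the VPC with the reserved IP, so only that single address needs to be whitelisted on the DB.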

Related

How to forward traffic to an on-premise Kubernetes cluster

I'm trying to understand how traffic can be forwarded to an on-premise Kubernetes cluster.
It's clear to me that with a public Cloud provider, the underlying Cloud infrastructure can automatically manage and forward traffic to a managed Kubernetes distribution such as EKS, GKE, or AKS by assigning a LoadBalancer IP to a Kubernetes Service. Then, after a few seconds, this service receives an external IP and is reachable from the outside world.
On the other hand, in an on-premise Kubernetes cluster, a service of type LoadBalancer stays Pending forever unless you assign a node IP. But what if you want to assign a different IP from a private IP range? To tackle this in my homelab, I've deployed MetalLB inside my K3s cluster. MetalLB is configured to use a private IP range of my network, let's say 10.0.0.0/24. Now services of type LoadBalancer can consume an address from this range, e.g. my Ingress Controller can receive 10.0.2.3 as its external IP.
I can't understand what MetalLB is doing under the hood: how does MetalLB "listen" on an address from that range and forward traffic into my cluster? Can this be achieved without MetalLB? I've tried setting an externalIP directly on a service of type LoadBalancer, but it never managed to claim that specific IP without MetalLB.
In addition, I'm aware that this can also be achieved with a "physical" load-balancer solution, such as NGINX or HAProxy, that sits in front of the cluster. To my understanding, this technically does the same thing as MetalLB: with such a solution configured, an address can be listened on and traffic forwarded to the cluster. But my question here is, can this be achieved without those technologies? Can a Kubernetes cluster listen on an external address and accept traffic without an intermediate solution, maybe through firewall rules and port-forwarding?
Your time is highly appreciated!
This comes down to core networking concepts such as NAT. You can have two networks, one local CIDR and one external CIDR. To expose the services, you can NAT the local CIDR to the external CIDR and configure the required firewall rules so that your cluster can serve the public; a rough port-forwarding sketch is shown below.
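As an illustration of the firewall-rules-and-port-forwarding idea from the question, here is a minimal iptables sketch for a Linux gateway box sitting in front of the cluster. The external address 203.0.113.10, the node IP 10.0.0.21, and the NodePort 30080 are all hypothetical placeholders:

```
# IP forwarding must be enabled on the gateway:
#   sysctl -w net.ipv4.ip_forward=1

# Forward traffic that arrives on the gateway's external address to a NodePort
iptables -t nat -A PREROUTING -d 203.0.113.10 -p tcp --dport 80 \
    -j DNAT --to-destination 10.0.0.21:30080

# Masquerade the forwarded traffic so return packets go back through the gateway
iptables -t nat -A POSTROUTING -d 10.0.0.21 -p tcp --dport 30080 \
    -j MASQUERADE
```

This is essentially what a load balancer in front of the cluster automates for you, but with a single hard-coded node IP there is no health checking or failover, which is what MetalLB or an external load balancer adds.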

How to do client-side load balancing of a GRPC service hosted inside a Kubernetes cluster from outside a Kubernetes cluster?

Scenario: We have a client outside a K8s cluster trying to access a GRPC service hosted inside a K8s cluster. Both the client and the service are part of the same VNET in Azure. We would like to use client-side load balancing for accessing this GRPC service.
Setup of our K8s cluster: Our K8s cluster is hosted inside an Azure VNET and uses Azure CNI networking model, so this means the pods in our cluster have the IP addresses from the VNET's IP address space. Please note we are not using AKS and are self-hosting the K8s cluster, but this whole question should not depend on this in my opinion.
Questions:
We would like to use client-side load balancing for accessing this GRPC service. If both our client and server were inside the K8s cluster, we could have used a K8s headless service to get the list of IP addresses. But since the client is outside the K8s cluster, we are looking for a way to retrieve the pod IP addresses from outside the cluster.
Can the K8s cluster create DNS records in a DNS server hosted outside the cluster, so that the client which is outside the K8s cluster can get the list of IP addresses from it?
Thanks for your help!
I found that I could solve the issue by using ExternalDNS. After wiring up my cluster with ExternalDNS (linked to an Azure Private DNS zone), I created a headless service and found that, on deployment of this service, DNS records were created in the Azure Private DNS zone. I was able to get the list of pod IP addresses just by doing a DNS lookup of the service's DNS name.
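For reference, a minimal sketch of such a headless service is below. The service name, selector, port, and hostname are placeholders, and it assumes ExternalDNS is already deployed with the Azure Private DNS provider and is watching Services:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: grpc-backend
  annotations:
    # Hostname that ExternalDNS publishes to the private DNS zone
    external-dns.alpha.kubernetes.io/hostname: grpc-backend.internal.example.com
spec:
  clusterIP: None            # headless: DNS resolves to the individual pod IPs
  selector:
    app: grpc-backend
  ports:
    - port: 50051
      targetPort: 50051
EOF

# From a client in the same VNET, resolving the name should return all pod IPs:
nslookup grpc-backend.internal.example.com
```

Since the cluster uses Azure CNI, the pod IPs are VNET addresses, so the external client can dial them directly and do client-side load balancing across the returned records.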

GKE cluster egress traffic coming out the nodes rather than the LB service

I'm new to GKE and K8s so please bear with me and my silliness. I currently have a GKE cluster that has two nodes in the default node pool, and the cluster is exposed via a LoadBalancer-type service.
These nodes are tasked with calling a Compute Engine instance via HTTP. I have a firewall rule set in GCP to deny ingress traffic to the GCE instance, except for traffic coming from the GKE cluster.
The issue is that the traffic isn't coming from the LoadBalancer service's IP but rather from the nodes themselves, so whitelisting the service's IP has no effect and I have to whitelist the IPs of the nodes instead. This is not ideal, since each time a new node is created I have to change the firewall rule. I understood that once you have a service set up in the cluster, all traffic would be directed through the IP of the service, so why is this happening? What am I doing wrong? Please let me know if you need more details, and thanks in advance.
YAML of the service (screenshot): https://i.stack.imgur.com/XBZmE.png
When you create a service on GKE and expose it to the internet, a load balancer is created. This load balancer manages only the ingress traffic (traffic from the internet to your GKE cluster).
When your pod initiates a connection, the traffic is not handled by the load balancer but by the node that hosts the pod, if the node has a public IP. (Instead of denying traffic to the GCE instance, simply remove the nodes' public IPs; it's easier and safer!)
If you want to control the IP of egress traffic originating from your pods, you have to set up Cloud NAT for your GKE cluster.
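If you want to confirm which address your pod egress actually uses before and after changing anything, a quick check like the following works (curlimages/curl and ifconfig.me are just convenient examples, not requirements):

```
# Run a throwaway pod and ask an external echo service which source IP it sees
kubectl run egress-check --rm -it --restart=Never \
    --image=curlimages/curl -- curl -s https://ifconfig.me
```

With node public IPs in place you'll see a node's external IP; behind Cloud NAT you'll see the NAT IP, which is the one to allow in the firewall rule.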

Kubernetes IP conflict

I have the following context: there is a GCP VPC Interconnect to the company network, where there are a lot of services with IP addresses in the range 10.7.114.0/23. Also, in GCP I have Kubernetes pods that consume the mentioned on-premise 10.7.114.0/23 services.
The Kubernetes range is 10.194.0.0/19, and when I try to access the on-premise 10.7.114.0/23 services from a pod, the traffic is blocked. A traceroute reaches 10.194.0.1 and is blocked after that.
Do you have any idea how to solve this problem?

Wrong IP from GCP kubernetes load balancer to app engine's service

I'm having some trouble with an nginx pod inside a Kubernetes cluster located on GCP, which should be able to access a service located on App Engine.
I have set firewall rules in App Engine to deny all and only allow some IPs, but the IP which hits my App Engine service isn't the IP of my nginx load balancer but instead the IP of one of the nodes of the cluster.
An image is better than 1000 words, so here's a diagram of our architecture:
The problem is: the IP which hits App Engine's firewall is IP A, whereas I thought it would be IP B. IP A changes every time I kill/create the cluster. If it were IP B, I could easily allow this IP in App Engine's firewall rules, as I've made it static. Does anyone have an idea how to get IP B instead of IP A?
Thanks
The IP address assigned to your nginx "load balancer" is (likely) not an IP owned or managed by your Kubernetes cluster. Services of type LoadBalancer in GKE use Google Cloud Load Balancers. These are an external abstraction which terminates inbound connections in Google's front-end infrastructure and passes traffic to the individual k8s nodes in the cluster for onward delivery to your k8s-hosted service.
Pods in a Kubernetes cluster will, by default, route egress traffic out of the cluster using the configuration of their host node. In GKE, this route corresponds to the gateway of the VPC in which the cluster (and, by extension, Compute Engine instances) exists. The public IP of cluster nodes will change as they are added and removed from the pool.
A workaround uses a dedicated instance with a static external IP to process egress traffic leaving your VPC (i.e. egress from your cluster). Google has a tutorial for this purpose here: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
There are k8s-native solutions, but these will be unsuitable in a GKE context at present due to the inability to maintain any node with a non-ephemeral public IP.
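For what it's worth, a very rough sketch of the NAT-gateway workaround mentioned above might look like this. The names, region/zone, network tag, and route priority are placeholder assumptions, and the linked tutorial covers the parts omitted here (notably configuring MASQUERADE on the gateway instance and keeping control-plane traffic off the route):

```
# Reserve a static external IP: this becomes "IP B", the one you allow in App Engine
gcloud compute addresses create nat-gateway-ip --region=us-central1

# Create the NAT gateway instance with IP forwarding enabled
gcloud compute instances create nat-gateway \
    --zone=us-central1-a \
    --can-ip-forward \
    --address=nat-gateway-ip \
    --tags=nat-gateway

# Route egress traffic from tagged cluster nodes through the gateway instance
gcloud compute routes create gke-egress-via-nat \
    --destination-range=0.0.0.0/0 \
    --next-hop-instance=nat-gateway \
    --next-hop-instance-zone=us-central1-a \
    --priority=800 \
    --tags=gke-node-network-tag
```

Note that Cloud NAT, as mentioned in the first answer above, achieves the same result as a managed service and is usually the simpler option today.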