How to create port forwarding from a Google Kubernetes Engine cluster to an external IP address?

I use an external Docker registry with my GKE cluster to pull containers. This registry has a security access list, which is basically a list of clients' public IP addresses. However, GKE creates nodes with ephemeral IP addresses, which makes it inconvenient to add each new IP address to the access list.
How can I create a proxy or port forwarding in Google Cloud so that all requests to my external registry are forwarded via one external IP?

You should use Cloud NAT. It will act as the proxy you describe, and you can use the NAT's addresses in the ACLs of the container registry. Also check out this tutorial on setting up Cloud NAT with GKE.
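As a minimal sketch (the region, network, and resource names below are placeholders, not taken from the question), the Cloud NAT side can be created with gcloud:

    # Reserve a static external IP so the registry ACL entry never changes
    gcloud compute addresses create nat-ip --region=us-central1

    # A Cloud Router is required to host the NAT configuration
    gcloud compute routers create nat-router \
        --network=default --region=us-central1

    # Create the NAT itself, using only the reserved address for egress
    gcloud compute routers nats create nat-config \
        --router=nat-router --region=us-central1 \
        --nat-external-ip-pool=nat-ip \
        --nat-all-subnet-ip-ranges

With that in place, the single reserved address is what you whitelist on the registry side. Note that traffic only egresses through Cloud NAT when the nodes themselves have no external IPs, i.e. a private cluster.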

Related

What will happen if AWS Fargate tasks are provisioned in a private subnet with VPC endpoints and a NAT Gateway enabled?

Firstly, I have Fargate tasks in private subnets of a VPC and enabled a NAT Gateway so they can reach ECR to pull images and reach other on-premise servers via the internet. It works perfectly. Later I set up VPC endpoints for ECR (api & dkr), S3, Secrets and logs and removed the NAT Gateway; communication with AWS services still works, but communication with the on-premise servers breaks. So I re-enabled the NAT Gateway, and my application works perfectly with the on-premise servers again. What I am still unclear about is whether the communication with AWS services (ECR, S3, Secrets and CloudWatch) happens via the internet or via the private network through the VPC endpoints. Please suggest how I can debug these communications.
Thank you for your advice in advance.
I followed "Use a private subnet with internet access", and I can ssh into the tasks with no VPC endpoints and the NAT Gateway enabled. I cannot ssh when I try the VPC endpoints method, as the communication happens via PrivateLink. I still cannot ssh with the VPC endpoints method and the NAT Gateway enabled.
(I think I should be able to ssh, as the NAT Gateway is enabled now.)
The VPC endpoints you are creating are specifically "Interface Endpoints". When you create an interface endpoint, AWS adds an elastic network interface (ENI) to your specified subnets and assigns it a private IP address from your subnet's address space. In general, you'll also tell AWS to add a DNS entry for that ENI which resolves the service's domain name to that private IP (instead of the public IP). You can disable this, but doing so largely defeats the purpose.
This effectively means that any time you resolve the hostname for that service, it should resolve to your ENI's IP address and the traffic should go over PrivateLink. However, it is important to note that you need to configure your CLI/SDK for the region your ENI is in. Otherwise, it may use the generic DNS entry (which may point to us-east-1 specifically). That will still resolve just fine (thanks to your NAT Gateway), but if you are running in another region, your traffic may route unexpectedly over the internet.
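A quick way to debug which path is being used is to resolve the service hostnames from inside the VPC and check whether they return the ENI's private IP. For example (the region and the account ID in the dkr hostname are illustrative):

    # Resolve the ECR endpoints from a host or task inside the VPC.
    # A private address (e.g. 10.x.x.x) means PrivateLink is in use;
    # a public address means traffic egresses via the NAT Gateway.
    nslookup api.ecr.us-east-1.amazonaws.com
    nslookup 123456789012.dkr.ecr.us-east-1.amazonaws.com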
All of this is independent of SSH. Remember, VPC interface endpoints are only used to create private IP addresses that can be used to route to AWS services. If you are trying to SSH into a Fargate task, that task just needs to be routable. In this particular case, your Fargate tasks are running in your VPC and are apparently directly routable; no NAT Gateway or interface endpoints should be necessary to reach them.

Google Kubernetes Engine NAT routing outgoing IP doesn't work

I want to connect a GKE (Google Kubernetes Engine) cluster to MongoDB Atlas, and for that I need to whitelist the IPs of my nodes. But sometimes I have 3 nodes, sometimes 10, and nodes keep going down and being re-created; with constant change there is no single IP.
I have tried to create a NAT on GCP following this guide: https://medium.com/google-cloud/using-cloud-nat-with-gke-cluster-c82364546d9e
I also want to whitelist my cluster's IP for the Google Maps APIs, so I can use the Directions API, for example.
This is a common situation, since there may be many other third-party APIs I want to use that accept incoming requests from certain IPs only, besides Atlas or Google Maps.
How can I achieve this?
A private GKE cluster means the nodes do not have public IP addresses, but you mentioned that "the actual outbound transfer goes from the node's IP instead of the NAT's".
It looks like you have a public GKE cluster; you still have to use the same NAT option to get a single outbound egress IP.
Using an Ingress gives you a single entry point for incoming requests to the cluster, but if your nodes have public IPs, pods will use the node's IP for outgoing requests unless you use NAT or something similar.
With NAT you get your single outbound IP: requests going out of the pods won't carry the node's IP; they will use the NAT IP instead.
How to set up the NAT gateway
https://registry.terraform.io/modules/GoogleCloudPlatform/nat-gateway/google/latest/examples/gke-nat-gateway
Here is a Terraform example ready for GKE clusters; you just have to run it, passing the project ID and other variables.
The above Terraform example will create the NAT for you, and you can verify the pods' egress IP as soon as the NAT is set up (a quick check is sketched below). You mostly won't require any changes to the NAT Terraform script.
GitHub link: https://github.com/GoogleCloudPlatform/terraform-google-nat-gateway/tree/v1.2.3/examples/gke-nat-gateway
If you are not familiar with Terraform, you can follow this article to set up the NAT, which will stop SNAT for pods: https://rajathithanrajasekar.medium.com/google-cloud-public-gke-clusters-egress-traffic-via-cloud-nat-for-ip-whitelisting-7fdc5656284a
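Once the NAT is in place, one quick way to verify the egress IP that third parties (Atlas, Google Maps) will see is to curl an IP-echo service from a throwaway pod; ifconfig.me is used here purely as an example:

    # Run a one-off pod and print the public IP the outside world sees.
    # If NAT is working, this prints the NAT IP, not a node's IP.
    kubectl run egress-test --rm -it --restart=Never \
        --image=curlimages/curl --command -- curl -s https://ifconfig.me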
A private GKE cluster means the nodes do not have public IP addresses. If the service on the other end is receiving packets from a node's own IP, then you have a public cluster.
You can find further explanation in this document.
If you want a static, public IP for the entire GKE cluster, you should consider Ingress for External Load Balancing. You can find instructions on how to configure it here.
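For the inbound direction, a sketch of giving the Ingress a stable address (the resource name here is an example):

    # Reserve a global static IP for the external HTTP(S) load balancer
    gcloud compute addresses create my-ingress-ip --global

    # Then reference it from the Ingress manifest via the annotation:
    #   kubernetes.io/ingress.global-static-ip-name: my-ingress-ip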

Calling an external service that requires a whitelisted IP from a Kubernetes application

I have a Kubernetes cluster running several different applications... one of my PHP applications calls an external service that requires the caller's IP address to be whitelisted with the service. Since this is a Kubernetes cluster, the IP address can change: I could have the IP address currently running my application whitelisted, but it may not stay that way. Is there a "best practice" for whitelisting an IP from a Kubernetes cluster?
To achieve this, you need to add the IP addresses of your Kubernetes nodes to the whitelist of your external service. When you call something external from a pod, your request goes through the node's interface and carries the node's external IP. If your nodes have no external IPs and sit behind a router, you need to add the IP address of your router instead. Alternatively, you can configure some kind of proxy, add the proxy's IP to the whitelist, and route every call to the external service through that proxy.
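As a sketch of the proxy variant: if an egress proxy is reachable from the cluster (the host and port below are hypothetical), outbound calls can be pointed at it via the standard proxy environment variables, which curl and most HTTP clients honor:

    # Inside a pod, route outbound HTTPS calls through the whitelisted proxy.
    # proxy.internal.example.com:3128 is a placeholder for your proxy.
    export HTTPS_PROXY=http://proxy.internal.example.com:3128
    curl -s https://external-service.example.com/api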

Google Cloud Kubernetes Engine external IP

I have MongoDB hosted in Mongo Atlas, and for security reasons I have whitelisted the IPs that can access it.
Now that I have set up a Kubernetes cluster in Google Cloud, how can I let it access this MongoDB service, given that I don't have a fixed IP for my cluster instances, which get spawned on demand?
There is no way to get static node IPs with GKE. You need to use a NAT gateway: you can configure a GCE VM to act as the NAT gateway for all outbound traffic from your cluster.
There have been multiple requests for a GCP-native NAT feature, and I believe that feature is on the way. In the meantime, a GCE VM acting as a NAT gateway is your best bet.
EDIT: you can now use Google Cloud NAT to assign a single (or multiple) static IPs to your cluster (or other Google resources).

Google Container Engine: assign static IP to nodes for outbound traffic

I am using Google Container Engine to launch a cluster that connects to remote services (in a different data center / provider). The containers that connect may not have a Kubernetes Service associated with them and don't need external inbound IP addresses. However, I want to set up firewall rules on the remote machines and have a known subnet that the nodes will stay within when I expand or shrink the cluster, or when a node goes down and is rebuilt.
Looking at Google Networks, they appear to cover internal networks (e.g. 10.128.0.0, etc). The external IP feature lets me reserve single static IP addresses but not a range, and I don't see how to apply one to a node; applying it to a load balancer won't change the outbound IP address.
Is there a way I can reserve a block of IP addresses for my cluster to use in my firewall rules on my remote servers? Or is there some other solution I'm missing for this kind of thing?
The proper solution for this is to use a VPN to connect the two networks. Google Cloud VPN lets you create the Google side of this connection.
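A rough Classic VPN sketch for the Google side (all names, addresses, and ranges below are placeholders; the peer address and shared secret come from your remote provider):

    # Reserve an address and create the VPN gateway
    gcloud compute addresses create vpn-ip --region=us-central1
    gcloud compute target-vpn-gateways create vpn-gw \
        --network=default --region=us-central1

    # Forwarding rules for ESP and IKE traffic to the gateway
    gcloud compute forwarding-rules create vpn-esp \
        --region=us-central1 --ip-protocol=ESP \
        --address=vpn-ip --target-vpn-gateway=vpn-gw
    gcloud compute forwarding-rules create vpn-udp500 \
        --region=us-central1 --ip-protocol=UDP --ports=500 \
        --address=vpn-ip --target-vpn-gateway=vpn-gw
    gcloud compute forwarding-rules create vpn-udp4500 \
        --region=us-central1 --ip-protocol=UDP --ports=4500 \
        --address=vpn-ip --target-vpn-gateway=vpn-gw

    # The route-based tunnel to the remote data center
    gcloud compute vpn-tunnels create dc-tunnel \
        --region=us-central1 --target-vpn-gateway=vpn-gw \
        --peer-address=203.0.113.10 --shared-secret=SECRET \
        --ike-version=2 \
        --local-traffic-selector=0.0.0.0/0 \
        --remote-traffic-selector=0.0.0.0/0

    # Route traffic for the remote subnet through the tunnel
    gcloud compute routes create dc-route \
        --network=default --destination-range=192.168.0.0/24 \
        --next-hop-vpn-tunnel=dc-tunnel \
        --next-hop-vpn-tunnel-region=us-central1

With the tunnel up, the remote firewall can whitelist the cluster's internal subnet ranges instead of chasing public node IPs.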