I have MongoDB hosted in MongoDB Atlas, where, for security reasons, I have whitelisted the IPs that can access it.
Now that I have set up a Kubernetes cluster in Google Cloud, how can I give it access to this MongoDB service, since I don't have a fixed IP for my cluster instances, which get spawned on demand?
There is no way to get static IPs for GKE nodes. You need to use a NAT gateway: you can configure a GCE VM to act as the NAT gateway for all outbound traffic from your cluster.
There have been multiple requests for a GCP-native NAT feature, and I believe that feature is on the way. In the meantime, a GCE VM acting as a NAT gateway is your best bet.
EDIT: You can now use Google Cloud NAT to assign a single static IP (or multiple static IPs) to your cluster (or other Google Cloud resources).
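As a rough sketch of the Cloud NAT approach (the region, network, and resource names below are illustrative, not from the question — these are provisioning commands, so adapt them to your project):

```shell
# Reserve a static external IP; this is the address you whitelist in Atlas.
gcloud compute addresses create nat-ip --region=us-central1

# Create a Cloud Router in the VPC that hosts the cluster.
gcloud compute routers create nat-router \
    --network=default --region=us-central1

# Create the NAT gateway, using the reserved address for all egress traffic.
gcloud compute routers nats create nat-gateway \
    --router=nat-router --region=us-central1 \
    --nat-external-ip-pool=nat-ip \
    --nat-all-subnet-ip-ranges
```

Note that Cloud NAT only applies to instances without external IPs, so for this to take effect your GKE cluster should be a private cluster (nodes with no public addresses).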
Related
This is going to be more of a conceptual question.
I'm fairly new to Kubernetes and VPCs, and I'm currently studying in order to take part in designing a Kubernetes Cluster on GCP (Google Cloud Platform), and my role in that would be to address our security concerns.
Recently, I've been introduced to the concept of a "Private Kubernetes Cluster", which runs in a VPC and only allows traffic from allowed agents inside the VPC, with the control plane being accessible through a bastion host, for instance.
The thing is, I'm not sure whether doing this means completely air-gapping the cluster, blocking any access from the internet outside the VPC, or whether I can still use it to serve public web services, such as websites and APIs, whilst using the VPC to secure the control plane.
Any insights on that? I would also appreciate some documentation and related articles.
I still haven't got to the implementation part, since I'm trying to make sure I know what I'm doing beforehand.
Edit: According to the documentation, I am able to expose some of my cluster's nodes by using Cloud NAT. But would this defeat the purpose of even having a private cluster?
The thing is, I'm not sure if doing this would mean completely
air-gapping the Cluster, blocking any access from the internet outside
of the VPC or if I'm still able to use this to serve public web
services, such as websites and APIs, whilst using the VPC to secure
the control plane.
Yes, you will be able to host your web application, and you can expose it with a LoadBalancer even if your cluster is private.
In a public cluster, the worker nodes have external/public IPs, while in a private cluster the worker nodes have no public IPs.
You can create a Service of type LoadBalancer, or use an Ingress, to expose the application.
If outbound access to a public API is required, you can use a NAT gateway. You can configure your firewall rules to allow egress traffic only to the specific public API endpoint you want to access.
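For example, a minimal Service of type LoadBalancer might look like this (the name and ports are placeholders):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: web-frontend          # placeholder name
spec:
  type: LoadBalancer
  selector:
    app: web-frontend         # must match your pod labels
  ports:
    - port: 80                # port exposed by the load balancer
      targetPort: 8080        # container port (placeholder)
```

On GKE this provisions an external load balancer with its own public IP, independent of the (private) node IPs, which is why inbound serving still works in a private cluster.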
Edit: According to the documentation, I am able to expose some of my
cluster's nodes by using Cloud NAT. But would this defeat the purpose
of even having a private cluster?
Yes, that's right. The main advantage of a private GKE cluster, as I see it, is that the nodes do not have any public/external IP addresses, so they can't be reached from outside and are only accessible from within the VPC network. This helps protect the cluster from unauthorized access and reduces the attack surface of your applications.
Refer to the GitHub repository for Terraform and other details.
I have a Kubernetes cluster with several nodes, and it connects to a SQL server outside of the cluster. How can I whitelist these (potentially changing) nodes on the SQL server's firewall, without having to whitelist each node's external IP independently?
Is there a clean solution for this? Perhaps some intra-cluster tooling to route all requests through a single node?
You would have to use a NAT. It is possible, but fiddly (we do this weekly in order to connect to a hosted service to make backups, and the hosted service only whitelists a specific IP).
We used Terraform for spinning up a cluster, then deploying our backup job to it so it could connect to the hosted service, and since it was going via the NAT IP, the remote host would allow the connection.
We used Cloud NAT via Terraform (as we were on GKE): https://registry.terraform.io/modules/terraform-google-modules/cloud-nat/google/latest
Though there are surely similar options for whichever Kubernetes provider you are using. If you are running bare-metal, you'll need to do the routing yourself.
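A sketch of using that module (the project ID, region, and resource names are placeholders; the `nat_ips` input pins all egress to a reserved static address, which is what you then whitelist on the SQL server):

```hcl
# Reserve a static IP so the remote host can whitelist a stable address.
resource "google_compute_address" "nat_ip" {
  name   = "nat-egress-ip"
  region = "us-central1"
}

resource "google_compute_router" "router" {
  name    = "nat-router"
  network = "default"
  region  = "us-central1"
}

module "cloud_nat" {
  source     = "terraform-google-modules/cloud-nat/google"
  project_id = "my-project"        # placeholder
  region     = "us-central1"
  router     = google_compute_router.router.name
  nat_ips    = [google_compute_address.nat_ip.self_link]
}
```

Check the module's registry page for the current version and the exact input names it expects.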
A question I have trouble finding an answer to is this:
When a K8s pod connects to an external service over the internet, what IP address does that external service see the pod's traffic coming from?
I would like to know the answer in two distinct cases:
there is a site-to-site VPN between the K8s cluster and the remote service
there is no such VPN, the access is over the public Internet.
Let me also add the assumption that the K8s cluster is running on AWS (not EKS; it is customer-managed).
Thanks for answering.
When the traffic leaves the pod and goes out, it usually undergoes NAT on the K8s node, so in most cases the traffic will arrive with the node's IP address as its source. You can manipulate this process by (re)configuring the ip-masq-agent, which can be set not to NAT this traffic; but then it is up to you to make sure the traffic can be routed on the internet, for example by using a cloud-native NAT solution (Cloud NAT in GCP, NAT Gateway in AWS). So in the VPN case, the remote service sees either the node IP or the pod IP, depending on whether the VPN-reachable ranges are excluded from masquerading; over the public internet it sees the node's public IP (or the NAT gateway's IP, if the node has no public address).
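As an illustration, the ip-masq-agent is typically configured through a ConfigMap like the following (the CIDRs are placeholders — traffic to these ranges would leave the node without being masqueraded, e.g. ranges reachable over a site-to-site VPN):

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    nonMasqueradeCIDRs:
      - 10.0.0.0/8        # placeholder: pod/VPN ranges routed without NAT
      - 172.16.0.0/12
    masqLinkLocal: false
    resyncInterval: 60s
```

Traffic to any destination not listed in `nonMasqueradeCIDRs` keeps being SNATed to the node's address.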
I use an external Docker registry with my GKE cluster to pull containers. This registry has a security access list (basically a list of clients' public IP addresses). However, the GKE cluster creates nodes with ephemeral IP addresses, which makes it inconvenient to add each new IP address to the access list.
How can I create a proxy or port forwarding in Google Cloud to route all requests through one external IP so they can reach my external registry?
You should use Cloud NAT. It will act as the proxy in your diagram, and you can use the NAT's addresses in the ACLs of the container registry. Also check out this tutorial on setting up Cloud NAT with GKE.
I have a Cloud SQL (MySQL) instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container to the pod alongside your application, through which traffic is passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands on, this codelab will walk you through taking an app from local to on a Kubernetes Cluster.
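A rough sketch of the sidecar pattern (the image tags, project, and instance connection name are placeholders — follow the linked instructions for the currently recommended setup and authentication):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: app
          image: gcr.io/my-project/my-app:latest    # placeholder
          # The app connects to MySQL at 127.0.0.1:3306 via the sidecar.
        - name: cloud-sql-proxy
          image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.11.0
          args:
            - "--port=3306"
            - "my-project:us-central1:my-instance"  # placeholder connection name
          securityContext:
            runAsNonRoot: true
```

Because the proxy authenticates with IAM and tunnels to Cloud SQL's managed endpoint, no IP whitelisting is needed at all with this approach.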
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance's external IP to the whitelist.