We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection from within your pod. That is possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection to a specific, limited VPC, and then routing that traffic to the pod, which can sit in a dedicated IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between the VPCs with VPC peering:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with a dedicated pod IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool with a nodeSelector (see the sketch after these steps):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
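For those last two steps, a rough sketch (assuming a VPC-native cluster and a recent gcloud/GKE version that supports --pod-ipv4-range; the names my-subnet, partner-pod-range, partner-pool, my-cluster, partner-client and the CIDR/zone are placeholders):

# Add a small secondary range to the subnet for the partner-facing pods
gcloud compute networks subnets update my-subnet \
    --region=europe-west1 \
    --add-secondary-ranges=partner-pod-range=10.100.0.0/24

# Create a node pool whose pods draw their IPs from that smaller range
gcloud container node-pools create partner-pool \
    --cluster=my-cluster \
    --zone=europe-west1-b \
    --num-nodes=1 \
    --pod-ipv4-range=partner-pod-range

# Pin the partner-facing deployment to the new pool via the node pool label
kubectl patch deployment partner-client --patch \
  '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"partner-pool"}}}}}'

The partner's VPN peer then only needs to accept the small secondary range rather than the cluster-wide 10.244.0.0/14.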
This is going to be more of a conceptual question.
I'm fairly new to Kubernetes and VPCs, and I'm currently studying in order to take part in designing a Kubernetes Cluster on GCP (Google Cloud Platform), and my role in that would be to address our security concerns.
Recently, I've been introduced to the concept of a "Private Kubernetes Cluster", which runs on a VPC and only allows traffic from permitted agents and from inside the VPC, with the control plane being reachable through a bastion host, for instance.
The thing is, I'm not sure if doing this would mean completely air-gapping the Cluster, blocking any access from the internet outside of the VPC or if I'm still able to use this to serve public web services, such as websites and APIs, whilst using the VPC to secure the control plane.
Any insights on that? I would also appreciate some documentation and related articles.
I still haven't got to the implementation part, since I'm trying to make sure I know what I'm doing beforehand.
Edit: According to the documentation, I am able to expose some of my cluster's nodes by using Cloud NAT. But would this defeat the purpose of even having a private cluster?
The thing is, I'm not sure if doing this would mean completely air-gapping the Cluster, blocking any access from the internet outside of the VPC or if I'm still able to use this to serve public web services, such as websites and APIs, whilst using the VPC to secure the control plane.
Yes, you will be able to host your web application, and you can expose it with a LoadBalancer even if your cluster is private.
With a public cluster, your worker nodes have external/public IPs, while in a private cluster the worker nodes have no public IPs.
You can create a Service of type LoadBalancer or use an Ingress to expose the application.
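For example, a minimal Service of type LoadBalancer (a sketch; the name web and the ports are placeholders for your own deployment):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: web
spec:
  type: LoadBalancer    # GCP provisions an external LB even when the nodes have no public IPs
  selector:
    app: web            # must match your deployment's pod labels
  ports:
  - port: 80            # port exposed by the load balancer
    targetPort: 8080    # port the pods listen on
EOF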
If the pods need to reach a public API, you can use a NAT gateway (Cloud NAT) for egress, and configure your firewall rules to allow egress traffic only to the specific public API endpoint you want to access.
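A rough sketch of that egress setup with gcloud (the names my-vpc, nat-router, nat-config, the region and the endpoint address 203.0.113.10 are placeholders):

# Cloud Router + Cloud NAT so the private nodes can make outbound connections
gcloud compute routers create nat-router \
    --network=my-vpc --region=us-central1
gcloud compute routers nats create nat-config \
    --router=nat-router --region=us-central1 \
    --auto-allocate-nat-external-ips \
    --nat-all-subnet-ip-ranges

# Allow egress to the one public API endpoint over HTTPS
# (pair this with a lower-priority deny-all egress rule to actually lock egress down)
gcloud compute firewall-rules create allow-egress-partner-api \
    --network=my-vpc --direction=EGRESS --action=ALLOW \
    --rules=tcp:443 --destination-ranges=203.0.113.10/32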
Edit: According to the documentation, I am able to expose some of my cluster's nodes by using Cloud NAT. But would this defeat the purpose of even having a private cluster?
Yes, that's right. The main advantage of a private GKE cluster, as I see it, is that the nodes have no public/external IP addresses, so they can't be reached from outside and are only accessible from within the VPC network. This helps protect the cluster from unauthorized access and also reduces the attack surface of your applications. Note that Cloud NAT only provides outbound (egress) connectivity, so it does not expose the nodes to inbound traffic from the internet.
Refer to the GitHub repository for Terraform examples and other details.
This is my first question on Stack Overflow:
We are using Google Cloud's managed Kubernetes (GKE).
A customer specifically requested a VPN Tunnel to scrape a single service in our Cluster (I know ingress would be more suited for this).
Since VPN routing is IP based and Kubernetes changes service IPs, I can only configure the VPN for the whole IP range of services.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict the access? Or is it restricted and I need netpols to unrestrict it?
Incoming VPN traffic can either be terminated at the service itself, or at the ingress - as far as I see it. Termination at the ingress would probably be better though.
I hope this is not too confusing, thank you so much in advance.
An external load balancer would be ideal here, as you mentioned, but if you must use GCP Cloud VPN then you can restrict access into your GKE cluster (and your GCP VPC in general) by using GCP firewall rules together with GKE internal load balancers (HTTP(S) or TCP).
As a general picture, it looks something like this:
Second, we need to add two firewall rules to the dedicated networks (project-a-network and project-b-network) we created. Go to Networking -> Networks and click the project-[a|b]-network. Click "Add firewall rule". The first rule we create allows SSH traffic from the public so that we can SSH into the instances we just created. The second rule allows icmp traffic (ping uses the icmp protocol) between the two networks.
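As a concrete sketch of the firewall part for your case (all names, tags and ranges below are placeholders; 192.168.100.0/24 stands in for the customer's VPN-side range, and the internal LB is assumed to expose only the one service on port 443):

# Allow the customer's VPN range to reach only the port the internal LB exposes
gcloud compute firewall-rules create allow-vpn-scrape \
    --network=my-vpc --direction=INGRESS --action=ALLOW \
    --rules=tcp:443 \
    --source-ranges=192.168.100.0/24 \
    --target-tags=gke-my-cluster-node    # placeholder: the tag on the nodes backing the ILB

# Deny everything else coming from the VPN range (lower priority than the allow rule)
gcloud compute firewall-rules create deny-vpn-rest \
    --network=my-vpc --direction=INGRESS --action=DENY \
    --rules=all --source-ranges=192.168.100.0/24 --priority=65000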
The scenario:
I have two K8s clusters. One is on-prem, the other is hosted in AWS. I could use Istio to make communication painless and do things like balloon capacity in AWS, but I'm getting hung up on trying to connect them. Reading the documentation, it looks like I need a VPN deployed inside of K8s if I want to have encrypted tunnels so that each internal network can talk to the other side. They're both non-overlapping 10-dots so I have that part done.
Is that correct or am I missing something on how to connect the two K8s clusters?
Having Istio in your cluster is independent of setting up basic communication in between your two clusters. There are a few options that I can think of here:
VPN between some nodes in both clusters like you mentioned.
BGP peering with Calico and your existing infrastructure (a minimal sketch follows after this list).
A router in between your two clusters that understands the internal cluster IPs (this could be done with BGP or static routes).
Kubernetes Federation. V1 is in alpha and V2 is in the implementation phase as of this writing. Not prod ready yet IMO.
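For the Calico option, a minimal sketch of a global BGPPeer resource (this assumes Calico is already the CNI in the cluster; the peer address and AS number are placeholders for your existing router or route reflector):

calicoctl apply -f - <<EOF
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: onprem-router
spec:
  peerIP: 192.168.1.1    # placeholder: address of your on-prem router / route reflector
  asNumber: 64512        # placeholder: that router's AS number
EOF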
OK, I figured out I was basically doing it wrong. Since Istio uses (mutual) TLS, I don't need the VPN for encryption, just for connectivity; a VPN would be overkill since it would be encrypting already-encrypted traffic. I just need some sort of connectivity between the clusters, which we can provide over the existing link, and I can use EIPs if I don't have that.
I have a Cloud MySQL instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container into the pod with your application that allows for traffic to be passed to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same)
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
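A minimal sketch of that sidecar inside a Deployment (the app image, the proxy image tag and the instance connection name my-project:us-central1:my-instance are placeholders; credentials via Workload Identity or a mounted service-account key are left out):

kubectl apply -f - <<EOF
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels: {app: my-app}
  template:
    metadata:
      labels: {app: my-app}
    spec:
      containers:
      - name: my-app
        image: gcr.io/my-project/my-app:latest      # placeholder application image
        env:
        - name: DB_HOST
          value: "127.0.0.1"                         # the app talks to the proxy on localhost
        - name: DB_PORT
          value: "3306"
      - name: cloud-sql-proxy                        # the sidecar
        image: gcr.io/cloud-sql-connectors/cloud-sql-proxy:2.8.0   # pick a current tag
        args:
        - "--port=3306"
        - "my-project:us-central1:my-instance"       # placeholder instance connection name
EOF

With the proxy in place, no IP whitelisting is needed because the connection to Cloud SQL is authenticated and tunneled by the proxy itself.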
I am using Google Cloud Platform, so my solution was to add the Google Compute Engine VM instance External IP to the whitelist.
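If you do go the whitelisting route, the node/VM external IPs can be listed like this (note they change whenever nodes are recreated, which is why the proxy approach above is usually preferred):

# The EXTERNAL-IP column shows each node's public address
kubectl get nodes -o wide

# Or straight from Compute Engine
gcloud compute instances list \
    --format="table(name, networkInterfaces[0].accessConfigs[0].natIP)"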
I have a Kubernetes cluster (1.3.2) in GKE and I'd like to connect VMs and services from my Google project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the injected environment variables and DNS...but the ClusterIP is not reachable outside of the cluster (obviously).
Is there something I'm missing? An important component to this is that this is meant to be a service "public" to the project, such that I don't know which VMs on the project will want to connect to the service (this could rule out loadBalancerSourceRanges). I understand the endpoints which the service actually wraps are internal IPs I can hit, but the only good way to get to those IPs is through the Kube API or kubectl, both of which are not prod-ideal ways of hitting my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
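A minimal sketch of such a route (10.11.240.0/20 stands in for your cluster's service IP range, and gke-node-1 and its zone are placeholders for one of your cluster's nodes):

gcloud compute routes create gke-service-bastion \
    --network=default \
    --destination-range=10.11.240.0/20 \
    --next-hop-instance=gke-node-1 \
    --next-hop-instance-zone=us-central1-a \
    --priority=1000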
If that management overhead isn't something you want to do going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint, and make sure that you have a live bastion route to at least N nodes at any given time.
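A rough sketch of that watcher as a shell loop, using the same placeholder names as above (a real controller would watch the API rather than poll, and would look up each node's actual zone):

while true; do
  # Pick the first node that reports Ready
  NODE=$(kubectl get nodes -o \
    jsonpath='{range .items[*]}{.metadata.name}{" "}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}' \
    | awk '$2=="True"{print $1; exit}')

  # Re-point the bastion route if it no longer targets a healthy node
  CURRENT=$(gcloud compute routes describe gke-service-bastion \
    --format='value(nextHopInstance)' 2>/dev/null)
  if [[ -n "$NODE" && "$CURRENT" != *"$NODE"* ]]; then
    gcloud compute routes delete gke-service-bastion --quiet 2>/dev/null
    gcloud compute routes create gke-service-bastion \
      --network=default --destination-range=10.11.240.0/20 \
      --next-hop-instance="$NODE" --next-hop-instance-zone=us-central1-a   # placeholder zone
  fi
  sleep 60
done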
GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.
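For reference, once internal load balancing is available in your GKE version, exposing a Service on an internal-only load balancer is a single annotation (a sketch; the names and ports are placeholders):

kubectl apply -f - <<EOF
apiVersion: v1
kind: Service
metadata:
  name: my-internal-svc
  annotations:
    cloud.google.com/load-balancer-type: "Internal"   # request an internal TCP/UDP LB
spec:
  type: LoadBalancer
  selector:
    app: my-app
  ports:
  - port: 80
    targetPort: 8080
EOF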