I have a Kubernetes cluster on Google Kubernetes Engine. I want to assign a static IP for all outgoing traffic of a cluster.
I already have reserved external IPs but I can't assign them to a cluster with the GCP console.
I found a solution to do it with the CLI:
Static outgoing IP in Kubernetes
but it targets the individual VM, and I would need to set it again each time I deploy, so it doesn't really target the cluster.
Can anybody provide any pointers? Thanks.
GKE currently doesn't have an option to create the cluster with all of its nodes using a reserved public IP; the advanced networking options in the console don't expose anything for this.
You will have to use the gcloud commands you mentioned, which are easy to put in a script.
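A rough sketch of such a script, with placeholder node name, zone and address; check the access config's actual name first with gcloud compute instances describe, and keep in mind that GKE can recreate nodes (upgrades, autoscaling), so this has to be reapplied whenever the node VM changes:

```
NODE=gke-my-cluster-default-pool-1234abcd-wxyz   # placeholder: a node VM
ZONE=europe-west1-b                              # placeholder
RESERVED_IP=203.0.113.10                         # placeholder: your reserved address

# Drop the ephemeral external IP...
gcloud compute instances delete-access-config "$NODE" --zone "$ZONE" \
  --access-config-name "External NAT"

# ...and re-add an access config bound to the reserved address.
gcloud compute instances add-access-config "$NODE" --zone "$ZONE" \
  --access-config-name "External NAT" --address "$RESERVED_IP"
```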
Alternatively, you can do it in the console by editing the instance(s) and changing the external IP under 'Network interfaces'.
I agree with the previous answer that you can't do this directly on the cluster, but you can use another component to get what you're looking for: a NAT gateway with a fixed public IP.
For more resilience, you can even deploy gateways in multiple zones for redundancy, so your cluster's outgoing traffic always goes through a gateway.
I won't explain how it works here, because Google already provides a tutorial for exactly this: https://cloud.google.com/solutions/using-a-nat-gateway-with-kubernetes-engine
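If a managed gateway is acceptable, Cloud NAT can serve the same purpose; here is a minimal sketch with placeholder names and region. Note that Cloud NAT only handles egress for nodes that have no external IP of their own (for example in a private cluster):

```
# Placeholder names/region; assumes the nodes live in the default network.
gcloud compute addresses create nat-egress-ip --region us-central1
gcloud compute routers create nat-router --network default --region us-central1
gcloud compute routers nats create nat-config \
  --router nat-router --region us-central1 \
  --nat-external-ip-pool nat-egress-ip \
  --nat-all-subnet-ip-ranges
```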
Enjoy.
We're integrating with a new partner that requires us to use VPN when communicating with them (over HTTPS). We're running all of our services in a (non-private) Google Kubernetes Engine (GKE) cluster and it's only a single pod that needs to communicate with the partner's API.
The problem we face is that our partner's VPN provider won't allow us to use the private IP-range provided by GKE, 10.244.0.0/14, because the subnet is too large.
Preferably, we don't want to deploy something outside our GKE cluster, like a Compute Engine instance, that is somehow used to proxy our traffic (we will of course do it if this is the only/best way to proceed). We're hoping that, perhaps, it'll be possible to create a new node pool in the same cluster with a different (smaller) subnet, but so far we haven't found a way to do this. We've also looked briefly at CloudVPN, but if we understand it correctly, it only works with private GKE clusters.
Question:
What's the recommended way to obtain a smaller subnet/IP-range for a pod in an existing (public) GKE cluster to allow it to communicate with a third-party API over VPN?
The problem I see is that you would have to maintain the VPN connection inside your pod; that's possible, but it looks like an antipattern.
I would recommend using Cloud VPN in a separate GCP project (for cost separation and security) to establish the connection from a specific, limited VPC, and then route that traffic to the pod, which can sit in a specific IP range as you mentioned.
Take a look at the docs on how to create the VPN:
https://cloud.google.com/network-connectivity/docs/vpn/concepts/overview
Route traffic between the VPCs:
https://cloud.google.com/vpc/docs/vpc-peering
Create the node pool with its own pod IP range: https://cloud.google.com/sdk/gcloud/reference/container/node-pools/create
Assign your deployment to that node pool (a sketch of these last two steps follows below):
https://kubernetes.io/docs/concepts/scheduling-eviction/assign-pod-node/#nodeselector
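A rough sketch of those last two steps, with placeholder names throughout; it assumes a VPC-native cluster and a gcloud release that supports per-node-pool pod ranges:

```
# Create a node pool whose pods get their own small secondary range.
# A /27 plus --max-pods-per-node 8 keeps the advertised range tiny.
gcloud container node-pools create vpn-pool \
  --cluster my-cluster --zone europe-west1-b \
  --num-nodes 1 \
  --max-pods-per-node 8 \
  --create-pod-ipv4-range name=vpn-pods,range=/27

# Pin the partner-facing deployment to that pool via GKE's built-in node label.
kubectl patch deployment partner-client -p \
  '{"spec":{"template":{"spec":{"nodeSelector":{"cloud.google.com/gke-nodepool":"vpn-pool"}}}}}'
```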
This is my first question on Stack Overflow:
We are using Google Kubernetes Engine.
A customer specifically requested a VPN tunnel to scrape a single service in our cluster (I know an Ingress would be better suited for this).
Since VPNs are IP based and Kubernetes changes these IPs, I can only configure the VPN for the whole service IP range.
I'm worried that the customer will get full access to all services if I do so.
I have been searching for days on how to treat incoming VPN traffic, but haven't found anything.
How can I restrict that access? Or is it already restricted, and do I need network policies to open it up?
As far as I can see, incoming VPN traffic can be terminated either at the service itself or at the ingress; terminating at the ingress would probably be better though.
I hope this is not too confusing. Thank you so much in advance.
As you mentioned, an external load balancer would be ideal here, but if you must use GCP Cloud VPN, then you can restrict access into your GKE cluster (and your GCP VPC in general) by using VPC firewall rules together with GKE internal load balancers (HTTP(S) or TCP).
As a general picture, the setup looks something like this.
Then, add two firewall rules to the dedicated networks (project-a-network and project-b-network): go to Networking -> Networks and click project-[a|b]-network, then click "Add firewall rule". The first rule allows SSH traffic from the public internet so that you can SSH into the instances you just created. The second rule allows ICMP traffic (ping uses the ICMP protocol) between the two networks.
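For your case the rules would instead target the partner's VPN range and the one service. A rough sketch, where the service labels, network, source range, tags and the GKE internal-LB annotation are assumptions to adapt:

```
# 1) Expose only the one service through an internal TCP load balancer.
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Service
metadata:
  name: scrape-target-internal
  annotations:
    networking.gke.io/load-balancer-type: "Internal"   # GKE internal TCP LB
spec:
  type: LoadBalancer
  selector:
    app: scrape-target        # placeholder: your service's pod label
  ports:
  - port: 443
    targetPort: 8443
EOF

# 2) Allow only the partner's VPN range to reach the nodes on that port.
gcloud compute firewall-rules create allow-partner-scrape \
  --network my-vpc \
  --direction INGRESS \
  --source-ranges 203.0.113.0/28 \
  --allow tcp:443 \
  --target-tags gke-my-cluster-node
```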
Consider a microservice X which is containerized and deployed in a kubernetes cluster. X communicates with a Payment Gateway PG. However, the payment gateway requires a static IP for services contacting it as it maintains a whitelist of IP addresses which are authorized to access the payment gateway. One way for X to contact PG is through a third party proxy server like QuotaGuard which will provide a static IP address to service X which can be whitelisted by the Payment Gateway.
However, is there a built-in mechanism in Kubernetes that can enable a service deployed in a cluster to obtain a static IP address?
There's no built-in mechanism in Kubernetes for this yet.
Other possible solutions:
If the cluster's nodes are in a private network behind a NAT, just add the public address your network NATs to (your gateway's external IP) to the PG's whitelist.
If the whitelist accepts CIDRs as well as single IPs (for example 86.34.0.0/24), add your cluster network's CIDR to the whitelist.
If every node of the cluster has a public IP and you can't add a CIDR to the whitelist, it gets more complicated:
A naive approach is to add every node's IP to the whitelist, but that doesn't scale beyond tiny clusters with just a few nodes.
If you can administer your network, then even though the nodes have public IPs, you can still set up a NAT for the network that only targets packets whose destination is the PG's IP.
If you don't have administrative access to the network, another option is to allocate a machine with a static IP somewhere and make it act as a proxy using iptables NAT, much like above. This introduces a single point of failure, though. To make it highly available, you could deploy it on a Kubernetes cluster with a few (2-3) replicas (this can be the same cluster where X is running; see below). Instead of using their nodes' IPs to communicate with the PG, the replicas would share a VIP, managed with keepalived, and that VIP would be added to the PG's whitelist (you can have a look at easy-keepalived and either use it directly or learn from how it does things). This requires high privileges on the cluster: you need to be able to grant the proxy pods the NET_ADMIN and NET_RAW capabilities so that they can add iptables rules and set up the VIP (a sketch of that capability grant follows).
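Here is a minimal sketch of what granting those capabilities to the proxy pods could look like; the image and names are placeholders, and the actual iptables/keepalived logic is left to the proxy image itself:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: egress-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: egress-proxy
  template:
    metadata:
      labels:
        app: egress-proxy
    spec:
      hostNetwork: true              # the shared VIP lives on the node's interface
      containers:
      - name: proxy
        image: example/egress-proxy:latest   # placeholder: keepalived + iptables image
        securityContext:
          capabilities:
            add: ["NET_ADMIN", "NET_RAW"]    # needed to add iptables rules and set up the VIP
EOF
```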
Update:
While waiting for builds and deployments over the last few days, I've polished my old VIP/iptables scripts, which I used to use as a replacement for external load balancers on bare-metal clusters, so they can now also be used to provide an egress VIP as described in the last point of my original answer. You can give them a try: https://github.com/morgwai/kevip
There are two answers to this question. For the pod IP itself, it depends on your CNI plugin; some allow it via special pod annotations. However, most CNI plugins also apply NAT when talking to the internet, so the pod IP being static on the internal network is somewhat moot: what you care about is the public IP the connection ends up coming from. So the second answer is "it depends on how your node networking and NAT are set up". This is usually determined by the tool you used to deploy Kubernetes (or OpenShift in your case, I guess). With Kops it's pretty easy to tweak the VPC routing table.
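As one example of the CNI-specific route, Calico (with calico-ipam) lets you request a fixed pod IP through an annotation; a minimal sketch, where the pod name, image and address are placeholders:

```
kubectl apply -f - <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: fixed-ip-pod
  annotations:
    # Calico-specific and requires calico-ipam; other CNIs have different knobs (or none).
    cni.projectcalico.org/ipAddrs: '["10.244.7.10"]'
spec:
  containers:
  - name: app
    image: nginx
EOF
```

Even with this in place, the address the payment gateway sees is still whatever your nodes or NAT present to the internet.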
The scenario:
I have two K8s clusters. One is on-prem, the other is hosted in AWS. I could use Istio to make communication painless and do things like balloon capacity in AWS, but I'm getting hung up on trying to connect them. Reading the documentation, it looks like I need a VPN deployed inside of K8s if I want encrypted tunnels so that each internal network can talk to the other side. The clusters use non-overlapping 10.x ranges, so I have that part done.
Is that correct or am I missing something on how to connect the two K8s clusters?
Having Istio in your cluster is independent of setting up basic communication between your two clusters. There are a few options that I can think of here:
VPN between some nodes in both clusters, like you mentioned.
BGP peering with Calico and your existing infrastructure (a sketch of this option follows the list).
A router in between your two clusters that understands the internal cluster IPs (this could be done with BGP or static routes).
Kubernetes Federation. V1 is in alpha and V2 is in the implementation phase as of this writing. Not prod ready yet IMO.
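As an illustration of the Calico option, this is roughly what declaring a BGP peering to an existing router looks like; the resource name, peer address and AS number are placeholders:

```
calicoctl apply -f - <<'EOF'
apiVersion: projectcalico.org/v3
kind: BGPPeer
metadata:
  name: onprem-edge-router
spec:
  peerIP: 192.0.2.1        # placeholder: your existing router's address
  asNumber: 64512          # placeholder: that router's AS number
EOF
```

With the peering in place, the router learns the pod/service routes and can forward traffic between the two clusters without a VPN in the data path.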
OK, I figured out I was basically doing it wrong. Since Istio uses TLS, I don't need the VPN for encryption, just for connectivity; a VPN would be overkill because it would be encrypting already-encrypted traffic. I just need some sort of connectivity between the clusters, which we can provide over the existing link, and I can use EIPs if I don't have that.
I have a Cloud SQL (MySQL) instance which allows traffic only from whitelisted IPs. How do I determine which IP I need to add to the ruleset to allow traffic from my Kubernetes service?
The best solution is to use the Cloud SQL Proxy in a sidecar pattern. This adds an additional container to your application's pod that passes traffic through to Cloud SQL.
You can find instructions for setting it up here. (It says it's for GKE, but the principles are the same.)
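A rough sketch of the sidecar pattern; the app image, deployment name and instance connection name are placeholders, and this uses the legacy v1 proxy invocation:

```
kubectl apply -f - <<'EOF'
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app
spec:
  replicas: 1
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
      - name: app
        image: example/my-app:latest          # placeholder application image
        env:
        - name: DB_HOST
          value: "127.0.0.1"                  # the proxy listens on localhost
        - name: DB_PORT
          value: "3306"
      - name: cloud-sql-proxy
        image: gcr.io/cloudsql-docker/gce-proxy:1.33.2   # legacy v1 proxy; pick a current tag
        command: ["/cloud_sql_proxy",
                  "-instances=MY_PROJECT:MY_REGION:MY_INSTANCE=tcp:3306"]
EOF
```

With the proxy in the pod, the connection to Cloud SQL is authorized via IAM, so no IP whitelisting is needed at all.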
If you prefer something a little more hands-on, this codelab will walk you through taking an app from running locally to running on a Kubernetes cluster.
I am using Google Cloud Platform, so my solution was to add the Compute Engine VM instance's external IP to the whitelist.
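If you need to look that external IP up from the command line, something like this should work (node name and zone are placeholders):

```
# Prints the node VM's external (NAT) IP so it can be whitelisted.
gcloud compute instances describe gke-my-cluster-node-1234 \
  --zone us-central1-a \
  --format='get(networkInterfaces[0].accessConfigs[0].natIP)'
```

Keep in mind that node external IPs change when nodes are recreated, so the proxy or NAT approaches above are more durable.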