Right now I'm setting up a Kubernetes cluster with Azure Kubernetes Service (AKS).
I'm using the "bring your own subnet" feature and kubenet as the network mode.
As you can see in the diagram, on the left side is an example VM.
In the middle is a load balancer I set up in the cluster, which directs incoming traffic to all pods with the label "webserver"; this works fine.
On the right side is an example node of the cluster.
My problem is the outgoing traffic of the nodes. As you would expect, if you SSH into a VM in subnet 1 from a node in subnet 2, the connection uses the node's IP, the .198 (red line).
I would like to route the traffic through the load balancer, so that the incoming SSH connection at the VM in subnet 1 has a source address of .196 (green line).
Reason: we have a central firewall. To open ports, I have to specify the IP address the packets come from. For this case, I would like to route the traffic through one central load balancer so that only one IP has to be allowed through the firewall; otherwise, every packet would have the source IP of its node.
Is this possible?
I have tried to look this use case up in the Azure docs, but most of the time they talk about public IPs, which I am not using in this case.
Related
I have a Kubernetes cluster with multiple nodes in two different subnets (x and y). I have an IPsec VPN tunnel set up between my x subnet and an external network. Now my problem is that the pods that get scheduled on the nodes in the y subnet can't send requests to the external network, because they're on nodes not covered by the VPN tunnel. Creating another VPN to cover the y subnet isn't possible right now. Is there a way in k8s to force all pods' traffic to go through a single source? Or any clean solution, even if outside of k8s?
Posting this as a community wiki; feel free to edit and expand.
There is no built-in functionality in Kubernetes that can do this. However, there are two available options that can help achieve the required setup:
Istio
If the external services are well known, it's possible to use an Istio egress gateway. We are interested in this use case (a manifest sketch follows the quote):
Another use case is a cluster where the application nodes don't have public IPs, so the in-mesh services that run on them cannot access the Internet. Defining an egress gateway, directing all the egress traffic through it, and allocating public IPs to the egress gateway nodes allows the application nodes to access external services in a controlled way.
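As a minimal sketch of that setup, based on the egress gateway pattern from the Istio docs (external.example.com is a placeholder, and this assumes the default istio-egressgateway deployment is installed in istio-system):

```yaml
# ServiceEntry: make the external host known to the mesh
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: external-svc
spec:
  hosts:
  - external.example.com
  ports:
  - number: 80
    name: http
    protocol: HTTP
  resolution: DNS
---
# Gateway: bind the egress gateway to the external host
apiVersion: networking.istio.io/v1beta1
kind: Gateway
metadata:
  name: egress-gateway
spec:
  selector:
    istio: egressgateway          # default label on the istio-egressgateway pods
  servers:
  - port:
      number: 80
      name: http
      protocol: HTTP
    hosts:
    - external.example.com
---
# VirtualService: route sidecar traffic to the gateway, then out to the host
apiVersion: networking.istio.io/v1beta1
kind: VirtualService
metadata:
  name: via-egress-gateway
spec:
  hosts:
  - external.example.com
  gateways:
  - mesh                          # traffic originating from sidecars
  - egress-gateway
  http:
  - match:
    - gateways: [mesh]
      port: 80
    route:
    - destination:
        host: istio-egressgateway.istio-system.svc.cluster.local
        port:
          number: 80
  - match:
    - gateways: [egress-gateway]
      port: 80
    route:
    - destination:
        host: external.example.com
        port:
          number: 80
```

The first HTTP route sends in-mesh traffic to the gateway; the second forwards it from the gateway to the external host, so all egress to that host leaves from the gateway nodes.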
Antrea Egress
There's another solution that can be used: Antrea Egress. Its use cases are as follows (a minimal manifest sketch follows the quoted list):
You may be interested in using this capability if any of the following apply:
A consistent IP address is desired when specific Pods connect to services outside of the cluster, for source tracing in audit logs, or for filtering by source IP in external firewall, etc.
You want to force outgoing external connections to leave the cluster via certain Nodes, for security controls, or due to network topology restrictions.
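For illustration, a minimal Egress resource might look like this (the label and IP are placeholders; older Antrea releases expose this as crd.antrea.io/v1alpha2, and the Egress feature must be enabled):

```yaml
apiVersion: crd.antrea.io/v1beta1
kind: Egress
metadata:
  name: egress-fixed-source
spec:
  appliedTo:
    podSelector:
      matchLabels:
        app: external-caller    # placeholder: pods whose egress source IP should be fixed
  egressIP: 10.0.10.100         # placeholder: SNAT source IP, hosted on one of the Nodes
```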
I'm wondering if anyone can help with my issue; here's the setup:
We have two separate Kubernetes clusters in GKE, running v1.17, and they each sit in a separate project
We have set up VPC peering between the two projects
On cluster 1, we have 'service1', which is exposed by an internal HTTPS load balancer; we don't want this to be public
On cluster 2, we intend to access 'service1' via the internal load balancer, and this should happen over the VPC peering connection between the two projects
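For reference, 'service1' is exposed with something along these lines (simplified; the exact annotation depends on GKE version, and the host and port are placeholders):

```yaml
# v1.17-era Ingress API; the gce-internal class provisions an internal HTTP(S) LB
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: service1
  annotations:
    kubernetes.io/ingress.class: "gce-internal"
spec:
  rules:
  - host: service1.domain.com
    http:
      paths:
      - path: /*
        backend:
          serviceName: service1
          servicePort: 443
```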
Here's the issue:
When I'm connected via SSH on a GKE node on cluster 2, I can successfully run a curl request to https://service1.domain.com running on cluster 1 and get the expected response, so traffic is definitely routing from cluster 2 > cluster 1. However, when I run the same curl command from a pod running on a GKE node, the request times out.
I have done as much troubleshooting as I can, including telnet, traceroute, etc., and I'm really stuck on why this might be. If anyone can shed light on the difference here, that would be great.
I did wonder whether pod networking is somehow forwarding traffic over the cluster's public IP rather than over the VPC peering connection.
So it seems you're not using a "VPC-native" cluster, and what you need is "IP masquerading".
From this document:
"A GKE cluster uses IP masquerading so that destinations outside of the cluster only receive packets from node IP addresses instead of Pod IP addresses. This is useful in environments that expect to only receive packets from node IP addresses."
You can use ip-masq-agent or k8s-custom-iptables. After this it will work, since it will be as if you're making the call from the node rather than from inside the pod.
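For illustration, the agent's behavior is driven by a ConfigMap in kube-system (a minimal sketch; the CIDRs are placeholders). The key point for this problem is that the peered VPC's range must not appear in nonMasqueradeCIDRs, so that traffic toward it leaves SNAT'ed to the node IP:

```yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ip-masq-agent
  namespace: kube-system
data:
  config: |
    # Destinations in these CIDRs still see the pod IP as the source;
    # traffic to every other destination is masqueraded to the node IP.
    nonMasqueradeCIDRs:
    - 10.4.0.0/14        # placeholder: this cluster's own pod range
    masqLinkLocal: false
    resyncInterval: 60s
```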
As mentioned in one of the answers, IP aliases (VPC-native) should work out of the box. If you're using a route-based GKE cluster rather than a VPC-native one, you need to use custom routes.
As per this document:
By default, VPC Network Peering with GKE is supported when used with IP aliases. If you don't use IP aliases, you can export custom routes so that GKE containers are reachable from peered networks.
This is also explained in this document:
If you have GKE clusters without VPC native addressing, you might have multiple static routes to direct traffic to VM instances that are hosting your containers. You can export these static routes so that the containers are reachable from peered networks.
The problem you're facing seems similar to the one mentioned in this SO question; perhaps your pods are using IPs outside of the VPC range and for that reason cannot reach the peered VPC?
UPDATE: In Google Cloud, I tried to access the service from another cluster that had VPC-native networking enabled, which I believe allows pods to use the VPC routing and possibly the internal IPs.
Problem solved :-)
I have a k8s cluster on DigitalOcean using Traefik 1.7 as the ingress controller.
Our domain points to the load balancer IP created by Traefik.
All incoming requests go through the load balancer IP and are routed by Traefik to the proper service.
Now I want to perform HTTP requests from my services to an external system that only accepts registered IPs.
Can I give them the load balancer's IP and make all outbound requests go through it? Or do I need to give them all the nodes' public IPs?
thanks
You can do either.
But the best solution would be to make all the traffic go through the load balancer, assuming it is some proxy server with tunnelling capabilities, and to open communication from the load balancer IP on your external system. Because, imagine: right now you might have a dozen nodes running 100 microservices, and you open your external system's security group to allow traffic from that dozen.
But in the next few months you might go from 12 to 100 nodes, and then you have the overhead of updating your external system's security group every time you add a node in DigitalOcean.
But you can also try a different approach: add a standalone proxy server and route traffic through it from your pods, something like this: Kubernetes outbound calls to an external endpoint with IP whitelisting.
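As a rough sketch of the pod side of that approach (the proxy Service name, port, and image are placeholders; this relies on clients honoring the standard proxy variables rather than on transparent proxying):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: app
spec:
  containers:
  - name: app
    image: registry.example.com/app:1.0    # placeholder image
    env:
    - name: HTTP_PROXY                     # honored by most HTTP client libraries
      value: http://egress-proxy.default.svc.cluster.local:3128
    - name: HTTPS_PROXY
      value: http://egress-proxy.default.svc.cluster.local:3128
    - name: NO_PROXY                       # keep in-cluster calls direct
      value: .svc.cluster.local,.cluster.local
```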
Just a note: these aren't the only options; there are several ways to achieve this. Another approach would be to associate a NAT IP with all your nodes and keep every node behind a private network. It all depends on how you want to set things up and the purpose of the system you are building.
Hope this helps.
Unfortunately, Ingress resources only handle inbound traffic; they can't be used for outbound requests.
So you would need to provide all the nodes' public IPs.
Another idea: if you use a forward proxy (e.g. nginx, HAProxy), you can limit the nodes where the forward proxy pods are scheduled by setting a nodeSelector.
By doing so, I think you can limit which nodes' public IP addresses the outbound traffic leaves from.
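As a hedged sketch of that idea (the node label, image, and port are placeholders; label the chosen nodes first, e.g. kubectl label node <name> egress=true):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: forward-proxy
spec:
  replicas: 2
  selector:
    matchLabels:
      app: forward-proxy
  template:
    metadata:
      labels:
        app: forward-proxy
    spec:
      nodeSelector:
        egress: "true"       # placeholder label: proxy pods only run on these nodes
      containers:
      - name: proxy
        image: haproxy:2.8   # any forward proxy (nginx, HAProxy, squid) would do
        ports:
        - containerPort: 3128
```

Then you only register the public IPs of the labeled nodes with the external system.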
Egress packets from a k8s cluster to cluster-external services have the node's IP as their source IP. So you can register the k8s nodes' IPs in the external system to allow egress packets from the k8s cluster.
https://kubernetes.io/docs/tutorials/services/source-ip/ says egress packets from k8s get source-NAT'ed with the node's IP:
Source NAT: replacing the source IP on a packet, usually with a node’s IP
The following can be used to control egress packets from a k8s cluster:
k8s network policies
Calico CNI egress policies
Istio egress gateway
the nirmata/kube-static-egress-ip GitHub project
kube-static-egress-ip provides a solution with which a cluster operator can define an egress rule so that a set of pods' outbound traffic to a specified destination is always SNAT'ed with a configured static egress IP. kube-static-egress-ip provides this functionality in a Kubernetes-native way using custom resources.
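For illustration, a StaticEgressIP resource looks roughly like this; treat the apiVersion and field names as assumptions drawn from the project's README and verify them against the repo's CRD:

```yaml
apiVersion: staticegressips.nirmata.io/v1alpha1   # assumption: confirm against the repo
kind: StaticEgressIP
metadata:
  name: static-egress
spec:
  rules:
  - egressip: 203.0.113.10     # placeholder: the fixed SNAT source IP
    cidr: 198.51.100.0/24      # placeholder: destination CIDR the rule applies to
    service-name: frontend     # placeholder: traffic from this Service's pods is matched
```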
I have a Kubernetes cluster (1.3.2) in GKE, and I'd like to connect to VMs and services from my Google project, which shares the same network as the cluster.
Is there a way for a VM that's internal to the subnet but not internal to the cluster itself to connect to the service without hitting the external IP?
I know there's a ton of things you can do to unambiguously determine the IP and port of services, such as the environment variables and DNS... but the ClusterIP is not reachable outside of the cluster (obviously).
Is there something I'm missing? An important component of this is that it is meant to be a service "public" to the project, such that I don't know which VMs on the project will want to connect to it (this could rule out loadBalancerSourceRanges). I understand the endpoint the service actually wraps is an internal IP I can hit, but the only good way to get to that IP is through the Kube API or kubectl, both of which are not prod-ideal ways of hitting my service.
Check out my more thorough answer here, but the most common solution to this is to create bastion routes in your GCP project.
In the simplest form, you can create a single GCE Route to direct all traffic with a destination IP in your cluster's service IP range to land on one of your GKE nodes. If that SPOF scares you, you can create several routes pointing to different nodes, and traffic will round-robin between them.
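As a sketch, such a route could be declared with Deployment Manager like this (the project, zone, instance, and CIDR are placeholders; gcloud compute routes create achieves the same thing imperatively):

```yaml
# config.yaml, deployed with: gcloud deployment-manager deployments create bastion --config config.yaml
resources:
- name: gke-service-bastion-route
  type: compute.v1.route
  properties:
    network: https://www.googleapis.com/compute/v1/projects/my-project/global/networks/default
    destRange: 10.3.240.0/20    # placeholder: the cluster's service IP range
    priority: 1000
    nextHopInstance: https://www.googleapis.com/compute/v1/projects/my-project/zones/us-central1-a/instances/gke-node-1
```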
If that management overhead isn't something you want to do going forward, you could write a simple controller in your GKE cluster to watch the Nodes API endpoint, and make sure that you have a live bastion route to at least N nodes at any given time.
GCP internal load balancing was just released as alpha, so in the future, kube-proxy on GCP could be implemented using that, which would eliminate the need for bastion routes to handle internal services.
I am using Google Container Engine to launch a cluster that connects to remote services (in a different data center / provider). The containers that are connecting may not have a Kubernetes service associated with them and don't need external inbound IP addresses. However, I want to set up firewall rules on the remote machines and have a known subnet that the nodes will be within when I expand/reduce the cluster or if a node goes down and is rebuilt.
Looking at Google networks, they appear to be related to internal networks (e.g. 10.128.0.0, etc.). An external IP lets me set up a single static IP address but not a range, and I don't see how to apply that to a node; applying it to a load balancer won't change the outbound IP address.
Is there a way I can reserve a block of IP addresses for my cluster to use in my firewall rules on my remote servers? Or is there some other solution I'm missing for this kind of thing?
The proper solution for this is to use a VPN to connect the two networks. Google Cloud VPN allows you to create this on the Google side.