I'm having an issue with load balancing persistent TCP connections to my Kubernetes replicas.
I have Unity3D clients outside of the Kubernetes cluster.
My cluster is a bare-metal cluster with MetalLB installed, composed of 3 nodes: 1 master and 2 workers.
As I have read there are two approaches:
1) The client connects to all replicas, and each time it needs to send a request it does so on a random connection out of those it has previously established. Periodically, it refreshes its connections (in case autoscaling happened or some of the persistent connections died).
The problem here is that I'm not sure how to access all replicas externally; headless services cannot be exposed outside the cluster.
2) A service mesh? I have vaguely read/understood that a mesh might establish persistent TCP connections on your behalf. So something like this:
unity3d client <----persistent connection ---> controller <---persistent connection----> replicas
However, I'm not sure how to accomplish this, and I'm not sure what will happen if the controller itself fails: will all the clients get their connections dropped? As I see it, it comes down to the same issue as in 1), which is allowing a client to hold persistent TCP connections to multiple different replicas at the same time.
Part of this question comes as a follow-up to this: https://learnk8s.io/kubernetes-long-lived-connections
In order to enable external traffic to your cluster you need an ingress gateway. Your ingress gateway could be the standard NGINX ingress controller, a gateway provided by a mesh like the Istio Gateway, or a more specialized edge gateway like Ambassador, Traefik, Kong, Gloo, etc.
There are at least two ways you can perform load balancing in K8s:
Using a Service resource, which is just a set of iptables rules managed by the kube-proxy process. This is L4 load balancing only; it has no awareness of L7 application protocols like HTTP/2 or gRPC. Depending on your case, this type of LB might not be ideal for long-lived connections, as connections will rarely be closed.
Using the L7 load balancing offered by any of the ingress controllers, which skips the iptables routing (by using a headless Service) and allows for more advanced load balancing algorithms.
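For reference, a headless Service is just a regular Service with clusterIP set to None, so DNS returns the individual Pod IPs instead of a single virtual IP. A minimal sketch (name, labels and port are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-backend            # placeholder name
spec:
  clusterIP: None             # "headless": no virtual IP, no kube-proxy/iptables load balancing
  selector:
    app: my-backend           # placeholder label
  ports:
    - port: 7777              # placeholder TCP port
      targetPort: 7777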
In order to benefit from the latter case you still need to ensure that connections are eventually terminated which is often done from the client to the proxy (while reusing connections from the proxy to the upstream). I'm not familiar with Unity3D connections but if terminating them is not an option you won't be able to do much load balancing after all.
When the controller fails, connections will be dropped and your client could either gracefully re-attempt the connection or panic. It depends on how you code it.
My architecture looks like this:
Here, the HTTPS requests first go to Route 53 for DNS resolution, which resolves to the Network Load Balancer (NLB). The NLB then directs the traffic to HAProxy pods running inside a Kubernetes cluster.
The HAProxy servers are required to read a specific request header and, based on its value, route the traffic to a backend. To keep things simple, I have kept a single K8s backend cluster, but assume that there is more than one such backend cluster running.
Considering this architecture:
What is the best place to perform TLS termination? Should we do it at the NLB or at the HAProxy layer?
What are the advantages and disadvantages of each scenario?
As you are using an NLB you can also achieve end-to-end HTTPS (TLS passthrough); however, that forces the service behind it (HAProxy here) to terminate TLS itself.
You can terminate at the LB level if you have multiple LBs backed by clusters; leveraging AWS Certificate Manager (ACM) with the LB is an easy way to manage certificates across multiple setups.
There is no guarantee that someone who gets into your network won't be able to exploit a bug capable of intercepting traffic between services. The Software Defined Network (SDN) in your VPC is secure and protects against spoofing, but there is no absolute guarantee.
So there is an advantage to using TLS/SSL inside the VPC as well.
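If you do terminate at the NLB, a rough sketch of what the Service in front of the HAProxy pods could look like with an ACM certificate (the ARN, names and ports are placeholders, and the exact annotations depend on your cloud-provider/controller version):
apiVersion: v1
kind: Service
metadata:
  name: haproxy                                         # placeholder name
  annotations:
    service.beta.kubernetes.io/aws-load-balancer-type: "nlb"
    # ACM certificate the NLB uses to terminate TLS (placeholder ARN)
    service.beta.kubernetes.io/aws-load-balancer-ssl-cert: "arn:aws:acm:us-east-1:111122223333:certificate/example"
    service.beta.kubernetes.io/aws-load-balancer-ssl-ports: "443"
    # traffic from the NLB to the pods is then plain TCP
    service.beta.kubernetes.io/aws-load-balancer-backend-protocol: "tcp"
spec:
  type: LoadBalancer
  selector:
    app: haproxy                                        # placeholder label
  ports:
    - name: https
      port: 443
      targetPort: 80                                    # placeholder HAProxy port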
I've seen scenarios where requests from one workload, sent to a ClusterIP service for another workload with no affinities set, only get routed to a subset of the associated pods. The Endpoints object for this service does show all of the pod IPs.
I did a little experiment to figure out what is happening.
Experiment
I set up minikube to have a "router" workload with 3 replicas sending requests to a "backend" workload also with 3 pods. The router just sends a request to the service name like http://backend.
I sent 100 requests to the router service via http://$MINIKUBE_IP:$NODE_PORT, since it's exposed as a NodePort service. Then I observed which backend pods actually handled requests. I repeated this test multiple times.
In most cases, only 2 backend pods handled any requests, with the occasional case where all 3 did. I didn't see any run where all requests went to a single backend in these experiments, though I have seen it happen before when running other tests in AKS.
This led me to the theory that the router is keeping a persistent connection to the backend pod it connects to. Given there are 3 routers and 3 backends, there's an 11% chance all 3 routers "stick" to a single backend, a 67% chance that between the 3 routers, they stick to 2 of the backends, and a 22% chance that each router sticks to a different backend pod (1-to-1).
Here's one possible combination of router-to-backend connections (out of 27 possible): for example, router 1 and router 2 both stick to backend A while router 3 sticks to backend B.
Disabling HTTP Keep-Alive
If I use a Transport disabling HTTP Keep-Alives in the router's http client, then any requests I make to the router are uniformly distributed between the different backends on every test run as desired.
client := http.Client{
    Transport: &http.Transport{
        DisableKeepAlives: true,
    },
}
resp, err := client.Get("http://backend")
So the theory seems accurate. But here's my question:
How does the router using HTTP KeepAlive / persistent connections actually result in a single connection between one router pod and one backend pod?
There is a kube-proxy in the middle, so I'd expect any persistent connections to be between the router pod and kube-proxy as well as between kube-proxy and the backend pods.
Also, when the router does a DNS lookup, it's going to find the Cluster IP of the backend service every time, so how can it "stick" to a Pod if it doesn't know the Pod IP?
Using Kubernetes 1.17.7.
This excellent article covers your question in detail.
Kubernetes Services indeed do not load balance long-lived TCP connections.
Under the hood, Services (in most cases) use iptables to distribute connections between pods. But iptables wasn't designed as a load balancer; it's a firewall, and it isn't capable of high-level load balancing.
As a weak substitute, iptables can create (or not create) a connection to a certain target with some probability, and thus can be used as an L3/L4 balancer. This mechanism is what kube-proxy employs to somewhat imitate load balancing.
Does iptables use round-robin?
No, iptables is primarily used for firewalls, and it is not designed to do load balancing.
However, you could craft a smart set of rules that could make iptables behave like a load balancer.
And this is precisely what happens in Kubernetes.
If you have three Pods, kube-proxy writes the following rules:
select Pod 1 as the destination with a likelihood of 33%. Otherwise, move to the next rule
choose Pod 2 as the destination with a probability of 50%. Otherwise, move to the following rule
select Pod 3 as the destination (no probability)
What happens when you use keep-alive with a Kubernetes Service?
Let's imagine that front-end and backend support keep-alive.
You have a single instance of the front-end and three replicas for the backend.
The front-end makes the first request to the backend and opens the TCP connection.
The request reaches the Service, and one of the Pods is selected as the destination.
The backend Pod replies and the front-end receives the response.
But instead of closing the TCP connection, it is kept open for subsequent HTTP requests.
What happens when the front-end issues more requests?
They are sent to the same Pod.
Isn't iptables supposed to distribute the traffic?
It is.
There is a single TCP connection open, and the iptables rules were only invoked the first time, when the connection was established.
One of the three Pods was selected as the destination.
Since all subsequent requests are channelled through the same TCP connection, iptables isn't invoked anymore.
Also, it's not quite correct to say that kube-proxy is in the middle.
It isn't - kube-proxy by itself doesn't handle any traffic.
All it does is create iptables rules.
It's iptables that actually forwards, distributes, does the DNAT, etc.
Similar question here.
We have a gRPC application deployed in a cluster (v 1.17.6) with Istio (v 1.6.2) setup. The cluster has istio-ingressgateway setup as the edge LB, with SSL termination. The istio-ingressgateway is fronted by an AWS ELB (classic LB) in passthrough mode. This setup is fully functional and the traffic flows as intended, in general. So the setup looks like:
ELB => istio-ingressgateway => virtual service => app service => [(envoy)pods]
We are running load tests on this setup using GHZ (ghz.sh), running external to the application cluster. From the tests we’ve run, we have observed that each of the app containers seems to get about 300 RPS routed to it, no matter the configuration of the GHZ test. For reference, we have tried various combos of --concurrency and --connection settings for the tests. This ~300 RPS is lower than what we expect from the app and, hence, requires a lot more Pods to provide the required throughput.
We are really interested in understanding the details of the physical connection (gRPC/HTTP2) setup in this case, all the way from the ELB to the app/envoy, and the details of the load balancing being done. Of particular interest is the case when the same client (e.g. GHZ) opens up multiple connections (specified via the --connection option). We have looked at Kiali and it doesn’t give us the appropriate visibility.
Questions:
How can we get visibility into the physical connections being set up from the ingress gateway to the pod/proxy?
How is the “per request gRPC” load balancing happening?
What options might exist to optimize the various components involved in this setup?
Thanks.
1. How can we get visibility into the physical connections being set up from the ingress gateway to the pod/proxy?
If Kiali doesn't show what exactly you need, maybe you could try with Jaeger?
Jaeger is an open source end to end distributed tracing system, allowing users to monitor and troubleshoot transactions in complex distributed systems.
There is istio documentation about Jaeger.
Additionally Prometheus and Grafana might be helpful here, take a look here.
2. How is the “per request gRPC” load balancing happening?
As mentioned here
By default, the Envoy proxies distribute traffic across each service’s load balancing pool using a round-robin model, where requests are sent to each pool member in turn, returning to the top of the pool once each service instance has received a request.
If you want to change the default round-robin model you can use a DestinationRule for that (see the sketch below). Destination rules let you customize Envoy’s traffic policies when calling the entire destination service or a particular service subset, such as your preferred load balancing model, TLS security mode, or circuit breaker settings.
There is istio documentation about that.
More about load balancing in envoy here.
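As a sketch (not from your setup; host and name are placeholders), a DestinationRule that switches the app's backend pool from the default round-robin to least-connections could look roughly like this:
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: app-destination-rule     # placeholder name
spec:
  host: app-service              # placeholder: the Kubernetes Service name of the app
  trafficPolicy:
    loadBalancer:
      simple: LEAST_CONN         # alternatives include ROUND_ROBIN (the default) and RANDOM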
3. What options might exist to optimize the various components involved in this setup?
I'm not sure if there is anything more to optimize in the Istio components; maybe some custom configuration in a DestinationRule?
Additional Resources:
itnext.io
medium.com
programmaticponderings.com
I've been looking into Kubernetes networking, more specifically, how to serve HTTPS users most efficiently.
I was watching this talk: https://www.youtube.com/watch?v=0Omvgd7Hg1I and from 22:18 he explains what the problem is with a load balancer that is not pod-aware. Now, the way they solve this in Kubernetes is by letting the nodes also act as a 'router', letting a node pass the request on to another node (explained at 22:46). This does not seem very efficient, but SoundCloud (https://developers.soundcloud.com/blog/how-soundcloud-uses-haproxy-with-kubernetes-for-user-facing-traffic) actually seems to do something similar to this, just with NodePorts. They say that the overhead costs less than creating a better load balancer.
From what I have read, an option might be using an ingress controller: making sure that there is no more than one ingress controller per node, and routing the traffic only to the specific nodes that have an ingress controller. That way there will not be any traffic re-routing needed. However, this does add another layer of routing.
This information is all from 2017, so my question is: is there any pod aware load balancer out there, or is there some other method that does not involve sending the http request and response over the network twice?
Thank you in advance,
Hendrik
EDIT:
A bit more information about my use case:
There is a bare-metal setup with Kubernetes. The firewall load balances the incoming traffic between two HAProxy instances. These HAProxy instances do SSL termination and forward the traffic to a few sites. This includes an Exchange setup, a few internal IIS sites and an nginx server for a static web app. The idea is to move the app servers into Kubernetes.
Now my main problem is how to get the requests from HAProxy into kubernetes. I see a few options:
Use the SoundCloud setup. The infrastructure could stay almost the same, the HAProxy server can still operate the way they do now.
I could use an ingress controller on EACH node in the Kubernetes cluster and have the firewall load balance between the nodes. I believe it is possible to forward traffic from the ingress controller to servers outside the cluster, e.g. Exchange.
Some magic load balancer that I do not know about that is pod aware and able to operate outside of the kubernetes cluster.
Option 1 and 2 are relatively simple and quite close in how they work, but they do come with a performance penalty. This is the case when the node that the request gets forwarded to by the firewall does not have the required pod running, or if another pod is doing less work. The request will get forwarded to another node, thus using the network twice.
Is this just the price you pay when using Kubernetes, or is there something that I am missing?
How traffic reaches pods depends on whether a managed cluster is used.
Almost all cloud providers can forward traffic in a cloud-native way in their managed K8s clusters. First, you can create a managed cluster with some special network settings (e.g. a VPC-native cluster on GKE). Then, the only thing you need to do is to create a LoadBalancer-typed Service to expose your workload. You can also create Ingresses for your L7 workloads; they are going to be handled by the provided ingress controllers (e.g. the ALB on AWS).
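A minimal sketch of such a LoadBalancer Service (name, labels and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: my-app                # placeholder name
spec:
  type: LoadBalancer          # the cloud provider provisions an external load balancer
  selector:
    app: my-app               # placeholder label
  ports:
    - port: 443
      targetPort: 8443        # placeholder container port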
In an on-premise cluster without any cloud provider (OpenStack or vSphere), the only built-in way to expose workloads is a NodePort-typed Service. That doesn't mean you can't improve it.
If your cluster is behind reverse proxies (the SoundCloud case), setting externalTrafficPolicy: Local on Services stops traffic from being forwarded among worker nodes: when traffic is received through a NodePort, it is forwarded to local Pods, or dropped if the Pods reside on other nodes. The reverse proxy will mark those NodePorts as unhealthy in its backend health checks and stop forwarding traffic to them. Another choice is to use topology-aware service routing; in this case, local Pods have priority, and traffic is still forwarded between nodes when no local Pod matches.
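A sketch of a NodePort Service with that policy set (name, labels and ports are placeholders):
apiVersion: v1
kind: Service
metadata:
  name: web                       # placeholder name
spec:
  type: NodePort
  externalTrafficPolicy: Local    # traffic is only delivered to Pods on the receiving node, never forwarded on
  selector:
    app: web                      # placeholder label
  ports:
    - port: 80
      targetPort: 8080            # placeholder container port
      nodePort: 30080             # placeholder; the reverse proxy health-checks and targets this port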
For an ingress controller in on-prem clusters, it is a little different. You may have some worker nodes that have an EIP or public IP. To expose HTTP(S) services, an ingress controller is usually deployed on those worker nodes through a DaemonSet with hostNetwork, such that clients access the ingress controller via the well-known ports and the EIPs of those nodes. These worker nodes typically don't accept other workloads (e.g. infra nodes in OpenShift), so one more hop on the Pod network is needed. You can also deploy the ingress controller on all worker nodes alongside other workloads, so that traffic could be forwarded to a closer Pod if the ingress controller supports topology-aware service routing, although most cannot do that yet.
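A rough sketch of that deployment pattern (all names, labels and the image are placeholders; the ingress controller you use ships its own official manifests):
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: edge-ingress-controller               # placeholder name
spec:
  selector:
    matchLabels:
      app: edge-ingress-controller
  template:
    metadata:
      labels:
        app: edge-ingress-controller
    spec:
      hostNetwork: true                        # bind directly to ports 80/443 on the node
      nodeSelector:
        node-role.example.com/edge: "true"     # placeholder: only nodes that have an EIP/public IP
      containers:
        - name: controller
          image: example/ingress-controller:latest   # placeholder image
          ports:
            - containerPort: 80
            - containerPort: 443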
Hope it helps!
We have two types of services that we run on AWS EKS:
external-facing services which we expose through an application-level load balancer using aws-alb-ingress-controller
internal-facing services which we use both directly through the service name (for EKS applications) and through an internal application-level load balancer, also using aws-alb-ingress-controller (for non-EKS applications)
I would like to understand the performance implications of choosing Nodeport, ClusterIP or Headless Service for both the external and internal services. I have the setup working with all three options.
If I understand the networking correctly, it seems that a Headless Service requires fewer hops and would hence be (slightly) faster? This article however seems to suggest that a Headless Service would not be properly load balanced when called directly. Is this correct? And would this still hold when called through the external (or internal) ALB?
Is there any difference in performance for NodePort vs ClusterIP?
Finally, what is the most elegant/performant way of using internal services from outside of the cluster (where we don't have access to the Kubernetes DNS) but within the same VPC? Would it be to use ClusterIP and specify the IP address in the service definition so it remains stable? Or are there better options?
I've put more detailed info on each of the connection forwarding types and how the services are forwarded down under the headings below, for context to my answers.
If I understand the networking correctly, it seems that a Headless Service requires fewer hops and would hence be (slightly) faster?
Not substantially faster. The "extra hop" is the packet traversing local lookup tables, which it traverses anyway, so it's not a noticeable difference. The destination pod is still going to be the same number of actual network hops away.
If you have thousands of services that each run on a single pod and could be headless, then you might use that to limit the number of iptables NAT rules and speed up rule processing (see iptables v ipvs below).
Is < a headless service not load balanced > correct? And would this still hold when called through the external (or internal) ALB?
Yes, it is correct: the client (or ALB) would need to implement the load balancing across the Pod IPs.
Is there any difference in performance for NodePort vs ClusterIP?
A NodePort has a possible extra network hop from the entry node to the node running the pod. Using the ClusterIP directly assumes the ClusterIP ranges are routed to the correct node (and routed at all).
If you happen to be using a Service of type LoadBalancer, this behaviour can be changed by setting .spec.externalTrafficPolicy to Local (https://kubernetes.io/docs/concepts/services-networking/service/#aws-nlb-support), which means traffic will only be directed to pods on the local node.
Finally, what is the most elegant/performant way of using internal services from outside of the cluster
I would say use the AWS ALB Ingress Controller with the alb.ingress.kubernetes.io/target-type: ip annotation. The k8s config from the cluster will be pushed out to the ALB via the ingress controller, and the ALB will address pods directly without traversing any connection forwarding or extra hops. All cluster reconfig will be automatically pushed out.
There is a little bit of latency for config to get to the ALB compared to cluster kube-proxy reconfiguration. Something like a rolling deployment might not be as seamless, as the updates arrive after a pod is already gone. The ALBs are equipped to handle the outage themselves, eventually.
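A sketch of an Ingress using that annotation (host, paths and service names are placeholders; the annotation keys are those of the aws-alb-ingress-controller):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: my-app                                         # placeholder name
  annotations:
    kubernetes.io/ingress.class: alb
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: ip          # ALB target groups point at the Pod IPs directly
spec:
  rules:
    - http:
        paths:
          - path: /*
            backend:
              serviceName: my-app                      # placeholder Service
              servicePort: 80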
Kubernetes Connection Forwarding
There is a kube-proxy process running on each node which manages how and where connections are forwarded. There are 3 options for how kube-proxy does that: userspace proxy, iptables or IPVS. Most clusters will be on iptables and that will cater for the vast majority of use cases.
Userspace proxy
The forwarding is via a process that runs in userspace to terminate and forward the connections. It's slow. It's unlikely you are using it; don't use it.
iptables
iptables forwards connections in kernel via NAT, which is fast. This is the most common setup and will cover 90% of use cases. New connections are shared evenly between all pods backing a service.
IPVS
Runs in kernel; it is fast and scalable. If you shift a lot of traffic to a large number of apps this might improve the forwarding performance. It also supports different service load balancing modes (see the configuration sketch after this list):
- rr: round-robin
- lc: least connection (smallest number of open connections)
- dh: destination hashing
- sh: source hashing
- sed: shortest expected delay
- nq: never queue
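As a sketch, the mode and scheduler can be selected in the kube-proxy configuration (field names follow the kubeproxy.config.k8s.io API; treat this as illustrative):
apiVersion: kubeproxy.config.k8s.io/v1alpha1
kind: KubeProxyConfiguration
mode: "ipvs"        # switch kube-proxy from the default iptables mode to IPVS
ipvs:
  scheduler: "lc"   # least connection; "rr" (round-robin) is the default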
Access to services
My explanations are iptables based as I haven't done much detailed work with ipvs clusters yet. I'm gonna handwave the ipvs complexity away and say it's basically the same as iptables, just with faster rule processing as the number of rules increases on huge clusters (i.e number of pods/services/network policies).
I'm also ignoring the userspace proxy in the description, due to the overhead just don't use it.
The basic thing to understand is that a "Service ClusterIP" is a virtual construct in the cluster that only exists as rules for where the traffic should go. Every node maintains this rule mapping of all ClusterIP/port to PodIP/port (via kube-proxy).
Nodeport
The ALB routes to any node; the node/NodePort forwards the connection to a pod handling the service. This could be a remote pod, which would involve sending traffic back out over the "wire".
ALB > wire > Node > Kernel Forward to SVC ( > wire if remote node ) > Pod
ClusterIP
Using the ClusterIP for direct access depends on the Service cluster IP ranges being routed to the correct node. Sometimes they aren't routed at all.
ALB > wire > Node > Kernel Forward to SVC > Pod
The "Kernel Forward to SVC" step can be skipped with an ALB annotation without using a headless service.
Headless Service
Again, Pod IPs aren't always addressable from outside the cluster depending on the network setup. You should be fine on EKS.
ALB > wire > Node > Pod
Note
I'll suffix this by saying requests are probably looking at < 1 ms of additional latency if a connection is forwarded to another node in a VPC. Enhanced-networking instances are at the low end of that. Inter-availability-zone comms might be a tad higher than intra-AZ. If you happened to have a geographically separated cluster it might increase the importance of controlling traffic flow, for example having a tunnelled Calico network that actually jumped over a number of real networks.
what is the most elegant/performant way of using internal services from outside of the cluster (where we don't have access to the Kubernetes DNS) but within the same VPC?
To achieve this, I think you should have a look at a service mesh, for example Istio (https://istio.io). It handles your internal service calls itself so that the call doesn't have to go through Kubernetes DNS. Please have a look at Istio's docs (https://istio.io/docs) for more info.
Also, you can have a look at Istio on EKS (https://aws.amazon.com/blogs/opensource/getting-started-istio-eks)
A headless Service will not have any load balancing at the L4 layer, but if you use it behind an ALB you are getting load balancing at the L7 layer.
NodePort internally uses a ClusterIP, and your request may randomly be routed to a pod on another host when it could have been routed to a pod on the same host (avoiding that extra hop out to the network). NodePort is generally a bad idea for production usage.
IMHO the best way to access internal services from outside of the cluster will be using an Ingress.
You can use nginx as the ingress controller: deploy the nginx ingress controller on your cluster and expose it via a LoadBalancer-type service using an ALB. Then you can configure path- or host-based routing using the Ingress API to route traffic between backend Kubernetes services.
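A sketch of such an Ingress with host- and path-based routing (hosts, paths and service names are placeholders):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: internal-services               # placeholder name
  annotations:
    kubernetes.io/ingress.class: nginx
spec:
  rules:
    - host: api.example.internal        # placeholder host
      http:
        paths:
          - path: /orders
            backend:
              serviceName: orders       # placeholder Service
              servicePort: 80
          - path: /users
            backend:
              serviceName: users        # placeholder Service
              servicePort: 80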