Ingress Controller Layer 7 load balancing - Kubernetes

Services, as defined by the Service API, allow for a basic level of Layer 3/4 load balancing.
Ingress is the most useful if you want to expose multiple services under the same IP address, and these services all use the same L7 protocol (typically HTTP).
If the ingress controller routes the request to the Service, and the Service only does basic Layer 3/4 round-robin load balancing across Pods, where is the Layer 7 load balancing here? Is the Ingress just doing Layer 7 routing, not load balancing?
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
  name: web-ingress
spec:
  rules:
  - host: kubernetes.foo.bar
    http:
      paths:
      - backend:
          serviceName: appsvc
          servicePort: 80
        path: /app

Ingress's main job is L7 routing; the backend Services do the actual Pod-level load balancing. However, some Ingress controllers can be configured with load-balancing policy settings, such as the load-balancing algorithm, backend weight schemes, rate limiting, etc.
https://kubernetes.io/docs/concepts/services-networking/ingress/#loadbalancing
See the features supported by Traefik beyond plain L7 routing here:
https://docs.traefik.io/v1.7/configuration/backends/kubernetes/#general-annotations
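For example, with the NGINX ingress controller the load-balancing algorithm can be selected per Ingress through an annotation. A minimal sketch, assuming the ingress-nginx controller is installed (the annotation name and the supported values depend on the controller and its version):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: web-ingress
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    # hedged example: ingress-nginx accepts algorithms such as round_robin and ewma
    nginx.ingress.kubernetes.io/load-balance: "ewma"
spec:
  rules:
  - host: kubernetes.foo.bar
    http:
      paths:
      - path: /app
        backend:
          serviceName: appsvc
          servicePort: 80
In ingress-nginx the same setting can also be applied cluster-wide through the controller's ConfigMap (the load-balance key) instead of per Ingress.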

Related

Expose a service of type ClusterIP through Ingress

Is it possible to expose a service of type ClusterIP through an Ingress?
apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-service
  ports:
  - name: my-service-port
    port: 4001
    targetPort: 4001
---
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: app-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: my.example.com
    http:
      paths:
      - path: /my-service
        backend:
          serviceName: my-service
          servicePort: 4001
I know the service can be exposed with type NodePort, but that may cost one more NAT hop. Could someone show me the fastest way to reach an internal service from the internet when running in the cloud?
No, a ClusterIP is only reachable from within the cluster. An Ingress is essentially just a set of Layer 7 forwarding rules; it does not handle the Layer 4 requirements of exposing the internals of your cluster to the outside world. At least one NAT step is required.
For Ingress to work, though, you need to have at least one Service involved that exposes your workload externally, so NodePort or LoadBalancer. Your ingress controller and the infrastructure of your cluster will determine which of the two service types you need to use.
In the case of Nginx ingress, you need to have a single LoadBalancer service which the ingress will use to bridge traffic from outside the cluster to inside it. After that, you can use clusterIP services for each of your workloads.
In your above example, as long as the nginx ingress controller is correctly configured (with a loadbalancer), then the config you are using should work fine.
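As a rough sketch, that single LoadBalancer Service in front of the nginx ingress controller could look like the following (the name, namespace and selector labels are assumptions and depend on how the controller was installed):
apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
  namespace: ingress-nginx                  # assumed controller namespace
spec:
  type: LoadBalancer                        # the cloud provider allocates the external IP
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumed controller Pod labels
  ports:
  - name: http
    port: 80
    targetPort: 80
  - name: https
    port: 443
    targetPort: 443
All the workload Services behind the Ingress can then stay ClusterIP, as in the question's manifest.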
In short: yes.
Now for the longer answer...
First things first, let's have a look at what the official documentation says:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster.
[...]
An Ingress controller is responsible for fulfilling the Ingress, usually with a load balancer...
What's confusing here is the term load balancer. In the definition above, it refers to the classic, well-known load balancer from the web world.
That one has nothing to do with Kubernetes!
So, back to the definition: to use an Ingress and make it work, we need a component called an ingress controller, and that component happens to be a load balancer! That's it.
However, you have to keep in mind that there is a difference between a load balancer in the outside world and a Kubernetes Service of type: LoadBalancer.
So in summary (and in order to redirect traffic from the outside world to your ClusterIP service):
Do you need a load balancer to make your kind: Ingress work? Yes, that is the ingress controller deployed in your cluster.
Do you need a Kubernetes Service of type: LoadBalancer or type: NodePort to make your kind: Ingress work? Definitely not! A Service of type: ClusterIP works just fine!

HAProxy as ingress controller

What kind of load balancing is the HAProxy ingress controller capable of?
Can it do load balancing at the Pod level, or does it load-balance at the Node level?
Thanks
Yaniv
An Ingress provides load balancing, name-based virtual hosting and SSL/TLS termination. Yes, it does load balancing at the Pod level, across the Pods backing each Service. Here is a sample Ingress manifest.
Example:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1   # backed by the service1 Pods
          servicePort: 4200
      - path: /bar
        backend:
          serviceName: service2   # backed by the service2 Pods
          servicePort: 8080
As mentioned in the official documentation:
The ingress controller gives you the ability to:
Use only one IP address and port and direct requests to the correct pod based on the Host header and request path
Secure communication with built-in SSL termination
Apply rate limits for clients while optionally whitelisting IP addresses
Select from among any of HAProxy's load-balancing algorithms
Get superior Layer 7 observability with the HAProxy Stats page and Prometheus metrics
Set maximum connection limits to backend servers to prevent overloading services
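For example, the load-balancing algorithm can typically be selected with an annotation on the Ingress. A minimal sketch for the haproxytech ingress controller (the annotation name and accepted values are assumptions and vary between HAProxy controllers and versions):
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sample-ingress
  annotations:
    # ask HAProxy to use leastconn instead of the default roundrobin algorithm
    haproxy.org/load-balance: "leastconn"
spec:
  rules:
  - host: foo.bar.com
    http:
      paths:
      - path: /foo
        backend:
          serviceName: service1
          servicePort: 4200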
Also I recommend the following resources:
HAProxy Kubernetes Ingress Controller
L7 routing is one of the core features of Ingress, allowing incoming requests to be routed to the exact pods that can serve them based on HTTP characteristics such as the requested URL path. Other features include terminating TLS, using multiple domains, and, most importantly, load balancing traffic.
GitHub
I hope it helps.

Difference between Kubernetes Service and Ingress

I want to create a load balancer for 4 http server pods.
I have one mysql pod too.
Everything works fine; I have created a LoadBalancer service for HTTP, and another service for MySQL.
I have read that I should create an Ingress too, but I do not understand what an Ingress is, since everything already works with Services.
What is the value-add of an Ingress?
Thanks
Since you have a single service serving HTTP, your current solution using a LoadBalancer service works fine. Now imagine you have multiple HTTP-based services that you want to make externally available on different routes. You would have to create a LoadBalancer service for each of them, and by default you would get a different IP address for each. Instead, you can use an Ingress, which sits in front of these services and does the routing.
Example ingress manifest:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: test-ingress
  annotations:
    nginx.ingress.kubernetes.io/rewrite-target: /
spec:
  rules:
  - http:
      paths:
      - path: /cart
        backend:
          serviceName: cart
          servicePort: 80
      - path: /payment
        backend:
          serviceName: payment
          servicePort: 80
Here you have two different HTTP services exposed by an Ingress on a single IP address. You don't need a LoadBalancer per service when using an Ingress.
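For completeness, each of those backends can be an ordinary ClusterIP Service; only the ingress controller itself needs to be reachable from outside. A sketch for the cart backend (the selector label and container port are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: cart
spec:
  type: ClusterIP        # the default; no external IP is needed
  selector:
    app: cart            # assumed label on the cart Pods
  ports:
  - port: 80             # port referenced by the Ingress above
    targetPort: 8080     # assumed container port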
A Service of type LoadBalancer relies on a third-party load balancer and IP-provisioning mechanism somewhere that deals with getting Layer 3 traffic (IP) from outside to the Nodes on some high-numbered NodePort.
An Ingress relies on a third-party ingress controller to accept Layer 3 traffic, open it up at Layer 7 (e.g. terminate TLS) and do protocol-specific routing (e.g. by HTTP FQDN/path) to some other Service (probably of type ClusterIP) inside the cluster.
If all of your services should be exposed directly, without any further filtering or other options, a LoadBalancer and no Ingress might be the right choice... but LoadBalancers don't do much on their own... they just expose the Service to the outside world, with very little in the way of traffic shaping, A/B testing, etc.
However, if you want to put multiple services behind a single IP/VIP/certificate, or you want to direct traffic in less common ways (based on headers, client type, percentage weighting, etc.), you'd probably want an Ingress (which itself would be exposed by a LoadBalancer Service).

Why is there an ADDRESS for the ingress-service? What's the use of that ADDRESS?

I deployed my cluster on GKE with an ingress controller.
I used Helm to install the following:
the ingress controller
the LoadBalancer Service (which also created a load balancer on GCP)
I also deployed the Ingress object (config below).
Then I observed the following status...
The Ingress Controller is exposed (By Load Balancer Service) with two endpoints: 35.197.XX.XX:80, 35.197.XX.XX:443
These two endpoints are exposed by the Cloud load balancer.
I have no problem with it.
However, when I execute kubectl get ing ingress-service -o wide, it prints out the following info.
NAME              HOSTS           ADDRESS       PORTS     AGE
ingress-service   k8s.XX.com.tw   34.87.XX.XX   80, 443   5h50m
I really don't understand the purpose of the IP under the ADDRESS column.
I can also see that Google adds some extra info about the load balancer IP to the end of my Ingress config for me.
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  ... (omitted)
spec:
  rules:
  - host: k8s.XX.com.tw
    http:
      paths:
      - backend:
          serviceName: client-cluster-ip-service
          servicePort: 3000
        path: /?(.*)
      - backend:
          serviceName: server-cluster-ip-service
          servicePort: 5000
        path: /api/?(.*)
  tls:
  - hosts:
    - k8s.XX.com.tw
    secretName: XX-com-tw
status:
  loadBalancer:
    ingress:
    - ip: 34.87.XX.XX
According to Google's doc, this (34.87.XX.XX) looks like an external IP, but I can't access it with http://34.87.XX.XX
My question is: since we already have an external IP (35.197.XX.XX) to receive the traffic, why do we need this ADDRESS for the ingress-service?
Is it an internal or external IP address?
What is this ADDRESS bound to?
What exactly is this ADDRESS used for?
Can anyone shed some light? Thanks a lot!
If you simply go and take a look at the documentation, you will have your answer.
What is an Ingress resource: https://kubernetes.io/docs/concepts/services-networking/ingress/#what-is-ingress
So, following the doc:
Ingress exposes HTTP and HTTPS routes from outside the cluster to services within the cluster. Traffic routing is controlled by rules defined on the Ingress resource.
To be more precise, on a cloud provider the Ingress will create a load balancer to expose the service to the internet. The documentation on the subject, specific to GKE: https://cloud.google.com/kubernetes-engine/docs/tutorials/http-balancer
That explains why you have an external IP for the Ingress.
What you should do now:
If you don't want to expose HTTP and/or HTTPS ports, just delete the Ingress resource; you are not using it, so it's pretty much useless.
If you are serving HTTP/HTTPS, change your service type to NodePort and leave the management of the load balancer to the Ingress.
My opinion is that, as you are deploying the ingress controller yourself, you should pick the second option and leave the management of the load balancer to it. For the Ingress of the ingress controller, don't define rules, just the default backend pointing at the NodePort service; the rules should be defined in a specific Ingress for each app and be managed by the ingress controller.
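A sketch of what switching one of the backends to NodePort could look like, reusing the service name from the config above (the selector label and ports are assumptions):
apiVersion: v1
kind: Service
metadata:
  name: client-cluster-ip-service
spec:
  type: NodePort           # exposed on a node port as suggested above
  selector:
    app: client            # assumed Pod label
  ports:
  - port: 3000
    targetPort: 3000
The Ingress rules stay unchanged; only the Service type changes.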

Default Load Balancing in Kubernetes

I've recently started working with Kubernetes clusters. The flow of network calls for a given Kubernetes service in our cluster is something like the following:
External Non-K8S Load Balancer -> Ingress Controller -> Ingress Resource -> Service -> Pod
For a given service, there are two replicas. By looking at the logs of the containers in the replicas, I can see that calls are being routed to different pods. As far as I can see, we haven't explicitly set up any load-balancing policies anywhere for our services in Kubernetes.
I've got a few questions:
1) Is there a default load-balancing policy for K8S? I've read about kube-proxy and random routing. It definitely doesn't appear to be round-robin.
2) Is there an obvious way to specify load balancing rules in the Ingress resources themselves? On a per-service basis?
Looking at one of our Ingress resources, I can see that the 'loadBalancer' property is empty:
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  annotations:
    ingress.kubernetes.io/rewrite-target: /
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"extensions/v1beta1","kind":"Ingress","metadata":{"annotations":{"ingress.kubernetes.io/rewrite-target":"/","nginx.ingress.kubernetes.io/rewrite-target":"/"},"name":"example-service-ingress","namespace":"member"},"spec":{"rules":[{"host":"example-service.x.x.x.example.com","http":{"paths":[{"backend":{"serviceName":"example-service-service","servicePort":8080},"path":""}]}}]}}
    nginx.ingress.kubernetes.io/rewrite-target: /
  creationTimestamp: "2019-02-13T17:49:29Z"
  generation: 1
  name: example-service-ingress
  namespace: x
  resourceVersion: "59178"
  selfLink: /apis/extensions/v1beta1/namespaces/x/ingresses/example-service-ingress
  uid: b61decda-2fb7-11e9-935b-02e6ca1a54ae
spec:
  rules:
  - host: example-service.x.x.x.example.com
    http:
      paths:
      - backend:
          serviceName: example-service-service
          servicePort: 8080
status:
  loadBalancer:
    ingress:
    - {}
I should specify - we're using an on-prem Kubernetes cluster, rather than one in the cloud.
Cheers!
The "internal load balancing" between Pods of a Service has already been covered in this question from a few days ago.
Ingress isn't really doing anything special (unless you've been hacking in the NGINX config it uses) - it will use the same Service rules as in the linked question.
If you want or need fine-grained control of how pods are routed to within a service, it is possible to extend Kubernetes' features - I recommend you look into the traffic management features of Istio, as one of its features is to be able to dynamically control how much traffic different pods in a service receive.
I see two options that can be used with k8s:
Use Istio's traffic management and create a DestinationRule. It currently supports three load-balancing modes:
Round robin
Random
Weighted least request
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
...
spec:
  ...
  subsets:
  - name: test
    ...
    trafficPolicy:
      loadBalancer:
        simple: ROUND_ROBIN
Use lb_type in the Envoy proxy with Ambassador on k8s. More info about Ambassador is at https://www.getambassador.io.
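As a sketch, Ambassador exposes Envoy's load-balancing policy per Mapping. The resource and field names below follow Ambassador's v2 API and are assumptions that may differ between versions (some policies also require Ambassador's endpoint resolver); the backend Service name is hypothetical:
apiVersion: getambassador.io/v2
kind: Mapping
metadata:
  name: example-service-mapping
spec:
  prefix: /example/
  service: example-service:8080      # hypothetical backend Service
  load_balancer:
    policy: round_robin              # other policies include least_request, ring_hash, maglev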