I use MetalLB and the nginx-ingress controller to provide internet access to my apps.
I see that in most configurations, the service is set to ClusterIP, as the ingress will send traffic there.
My question is: does this end up with double load balancing, that is, one from MetalLB to my ingress, and another from my ingress to the pods via ClusterIP?
If so, is this how it is supposed to be, or is there a better way?
MetalLB doesn't receive and forward any traffic, so "from MetalLB to my ingress" doesn't really make sense. MetalLB just configures Kubernetes services with an external IP and tells your surrounding infrastructure where to find it. Still, with your setup there will be double load-balancing:
Traffic reaches your cluster and is load-balanced across your nginx pods. Nginx handles the request and forwards it to the application, which results in a second round of load balancing.
But this makes total sense, because if you're using an ingress-controller, you don't want all incoming traffic to go through the same pod.
Using an ingress-controller with MetalLB can be done, and it can improve stability while performing updates on your application, but it's not required.
MetalLB is a solution that implements Kubernetes services of type LoadBalancer when there is no cloud provider to do that for you.
So if you don't need a layer 7 load-balancing mechanism, then instead of using a service of type ClusterIP with an ingress-controller, you can just use a service of type LoadBalancer. MetalLB will give that service an external IP from your pool and announce it to its peers.
In that case, when traffic reaches the cluster it will only be load-balanced once.
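For illustration, a minimal service of that kind might look like this (the name, selector and ports are placeholders):

apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: LoadBalancer   # MetalLB assigns an external IP from its pool
  selector:
    app: my-app        # placeholder label for your application pods
  ports:
    - port: 80         # port served on the external IP
      targetPort: 8080 # port your pods actually listen on

Traffic arriving on the external IP is then distributed across the matching pods by kube-proxy, i.e. load-balanced exactly once.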
Related
Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world. I am trying to access my NGINX service from outside the cluster to get the NGINX output in a web browser.
For that, I have created a deployment and service for NGINX as shown below:
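Something like this, simplified (the names are just placeholders):

apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment
spec:
  replicas: 2
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx        # stock NGINX image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: nginx-service
spec:
  selector:
    app: nginx
  ports:
    - port: 80
      targetPort: 80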
As per my search, I found that we have the options below for exposing a service to the outside world:
MetalLB
Ingress NGINX
Some Helm resources
I would like to learn about all three of these approaches, plus any others, as it will help me learn new things.
GOAL
Exposing a Service from a bare-metal (kubeadm-built) Kubernetes cluster to the outside world.
How can I make my service have its own public IP that is accessible from outside the cluster?
You need to set up MetalLB to get an external IP address for LoadBalancer-type services. It will give the service an IP address from your local network.
Then you can do port mapping (configured in your router) of incoming traffic on ports 80 and 443 to that external service IP address.
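For example, with current MetalLB versions (v0.13+) the pool of local addresses is declared with two small resources roughly like these (the address range is just an example; pick free IPs in your LAN):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: home-pool
  namespace: metallb-system
spec:
  addresses:
    - 192.168.1.240-192.168.1.250   # example range on the local network
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: home-l2
  namespace: metallb-system
spec:
  ipAddressPools:
    - home-pool                     # announce addresses from the pool above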
I have done a similar setup; you can check it out in detail here:
https://developerdiary.me/lets-build-low-budget-aws-at-home/
You need to deploy an ingress controller in your cluster so that it gives you an entry point where your applications can be accessed. Traditionally, in a cloud-native environment it would automatically provision a LoadBalancer for you that reads the rules you define inside your Ingress object and routes requests to the appropriate service.
One of the most commonly used ingress controllers is the NGINX Ingress Controller. There are multiple ways to deploy it (manifests, helm, operators). In the case of bare-metal clusters, there are multiple considerations, which you can read about here.
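For example, the helm route is usually just the following (repo URL as documented by the ingress-nginx project):

helm repo add ingress-nginx https://kubernetes.github.io/ingress-nginx
helm repo update
helm install ingress-nginx ingress-nginx/ingress-nginx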
MetalLB is still in beta, so it's your choice whether you want to use it. If you don't have a hard requirement to expose the ingress controller as a LoadBalancer, you can expose it as a NodePort Service, which will be accessible on all the nodes in your cluster. You can then map that NodePort Service in your DNS so that the ingress rules are evaluated.
For a sample microservice-based architecture deployed on Google Kubernetes Engine, I need help validating my understanding:
We know services are supposed to load-balance traffic across the pods of a ReplicaSet.
When we create an nginx ingress controller and ingress definitions to route to each service, a load balancer is also set up automatically.
I had read somewhere that creating an nginx ingress controller means an nginx controller (deployment) and a LoadBalancer-type service get created behind the scenes. I am not sure if this is true.
It seems load balancing is being done by the services, while URL-based routing is being done by the ingress controller.
So why do we need a load balancer? It is not meant to load-balance across multiple instances; it will just forward all the traffic to the nginx reverse proxy that was created, and nginx will route requests based on URL.
Please correct me if I am wrong in my understanding.
A Service of type LoadBalancer and an Ingress are both ways to reach your application externally, although they work in different ways.
Service:
In Kubernetes, a Service is an abstraction which defines a logical set of Pods and a policy by which to access them (sometimes this pattern is called a micro-service). The set of Pods targeted by a Service is usually determined by a selector.
There are several types of Services; one of them is the LoadBalancer type, which lets you expose your application externally by assigning an external IP to your service. Each LoadBalancer service gets its own new external IP.
The load balancing will be handled by kube-proxy.
Ingress:
An API object that manages external access to the services in a cluster, typically HTTP.
Ingress may provide load balancing, SSL termination and name-based virtual hosting.
When you set up an ingress (e.g. nginx-ingress), a Service of type LoadBalancer is created for the ingress-controller pods, a load balancer is automatically created at your cloud provider, and a public IP is assigned to the nginx-ingress service.
This load balancer/public IP will be used for incoming connections to all your services, and nginx-ingress will be responsible for handling the incoming connections.
For example:
Suppose you have 10 services of LoadBalancer type: this will result in 10 new public IPs being created, and you need to use the corresponding IP for the service you want to reach.
But if you use an ingress, only 1 IP will be created, and the ingress will be responsible for routing each incoming connection to the correct service based on the path/URL you defined in the ingress configuration. With ingress you can:
Use regex in the path to define the service to route to;
Use SSL/TLS;
Inject custom headers;
Redirect requests to a default service if one of the services fails (default-backend);
Create whitelists based on IPs;
Etc...
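For example, a single ingress like this serves two services behind one IP (the hostname and service names are made up; current networking.k8s.io/v1 syntax):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
    - host: myapp.example.com
      http:
        paths:
          - path: /api
            pathType: Prefix
            backend:
              service:
                name: api-service   # requests to /api/* go here
                port:
                  number: 80
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-service   # everything else goes here
                port:
                  number: 80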
An important note about load balancing with Ingress:
GCE/AWS load balancers do not provide weights for their target pools. This was not an issue with the old LB kube-proxy rules which would correctly balance across all endpoints.
With the new functionality, the external traffic is not equally load balanced across pods, but rather equally balanced at the node level (because GCE/AWS and other external LB implementations do not have the ability for specifying the weight per node, they balance equally across all target nodes, disregarding the number of pods on each node).
An ingress controller's pods (nginx, for example) need to be exposed outside the Kubernetes cluster as the entry point for all north-south traffic coming into the cluster. One way to do that is via a LoadBalancer. You could use NodePort as well, though it's not recommended for production, or you could just deploy the ingress controller directly on the host network on a host with a public IP. Having a load balancer also gives you the ability to load-balance the traffic across multiple replicas of the ingress controller pods.
When you use an ingress controller, the traffic comes from the load balancer to the ingress controller and then goes to the backend pod IPs based on the rules defined in the ingress resource. This bypasses the Kubernetes service and the load balancing (done by kube-proxy at layer 4) that the service offers. Internally, the ingress controller discovers all the pod IPs from the Kubernetes service's endpoints and routes traffic directly to the pods.
It seems load balancing is being done by services, and URL-based routing is being done by the ingress controller.
Services do balance the traffic between pods, but in Google Kubernetes Engine they aren't accessible from outside the cluster by default (ClusterIP type). You can create services of LoadBalancer type, but each service will get its own IP address (Network Load Balancer), so it can get expensive. Also, if you have one application with several services, it's much better to use an Ingress object, which provides a single entry point. When you create an Ingress object, the Ingress controller (on GKE, the built-in one) creates a Google Cloud HTTP(S) load balancer. An Ingress object, in turn, can be associated with one or more Service objects.
Then you can get the assigned load balancer IP from the Ingress object:
kubectl get ingress ingress-name --output yaml
As a result, the applications in your pods become accessible from outside the Kubernetes cluster:
LoadBalancerIP/url1 -> service1 -> pods
LoadBalancerIP/url2 -> service2 -> pods
This is more a design question than an issue. At our company, we have deployed our own Kubernetes infrastructure and we are trying to use Ingresses and the NGINX ingress controller to expose our services externally, but since it is not a cloud environment such as GCP or AWS, we can't use a service of type "LoadBalancer". Should we just expose our ingress controller through a service of type "NodePort"? Is that the normal way to go for production environments (non-cloud)?
From what I've read in another post, one suitable recommendation is to use NodePort, and manually point yet another external load balancer to the port on your Kubernetes nodes.
It just seems that exposing the ingress controller through this mechanism is somehow not very practical or robust (e.g. you don’t know what port your service is going to be allocated, and the port might get re-allocated at some point, etc.)
Is there any other mechanism maybe to expose the ingress controller to the external world?
The LoadBalancer service approach is one way to do it, but behind the scenes it's nothing more than a NodePort on the cluster.
Even if you use a service that creates an LB at a cloud provider, the LB needs a target port to communicate with the cluster.
When using an nginx-ingress that will mostly handle web requests, it's common practice to put the ingress in front of a NodePort service.
So with this I think using NodePort services is a good idea to do what you want ;)
This is my opinion, I'm interested if anyone else has another way to do it.
You can specify the port via nodePort in the service. Then it would not be random.
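For example (30080 is arbitrary, but it must fall within the cluster's node port range, 30000-32767 by default):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx
spec:
  type: NodePort
  selector:
    app: ingress-nginx   # placeholder selector for the controller pods
  ports:
    - port: 80
      targetPort: 80
      nodePort: 30080    # fixed instead of randomly allocated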
I'm not sure how load balancing works with Ingress.
If I understand correctly, what happens is actually something like this:
I fail to see how the load balancing is performed.
What is wrong in the above scheme that I have drawn?
Can you help me rectify it?
Notes:
- The following answer tells me that the Ingress controller itself is of type 'loadbalancer': Ingress service type
- I use kind ClusterIP because I don't want to expose the load balancer to the outside world. The following article does not support this claim, as there the load balancer would be provided by the service:
https://medium.com/google-cloud/kubernetes-nodeport-vs-loadbalancer-vs-ingress-when-should-i-use-what-922f010849e0
The ClusterIP services themselves perform load balancing. The naming can be confusing, as LoadBalancer services are not the only services that involve load balancing - LoadBalancer actually means something more like 'cloud provider, please create an external load balancer and point it at this service'. Kubernetes ClusterIP services also load-balance across Pods on different Nodes using kube-proxy. If you don't want Kubernetes to do load balancing, then you have to specifically disable it by creating a headless service.
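For example, a headless service is just a normal service with clusterIP set to None, so a DNS lookup of its name returns the individual pod IPs instead of one load-balanced virtual IP (names are illustrative):

apiVersion: v1
kind: Service
metadata:
  name: my-headless-service
spec:
  clusterIP: None    # headless: no virtual IP, no kube-proxy load balancing
  selector:
    app: my-app
  ports:
    - port: 80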
It seems like the first scheme you drew is correct, but I think you are getting confused by the terminology, particularly the difference between an Ingress and an ingress controller.
Ingress is a type of resource in k8s (like Service, Deployment, ReplicaSet, etc.). We use an Ingress if we want to expose some services to the external world, bound to some path and host (e.g. myapp.com/api -> my-api-service).
The job of the ingress controller is to handle creation/update/deletion of Ingress resources and implement all the functionality needed for ingress. Under the hood, the ingress controller is a simple deployment exposed as a LoadBalancer or NodePort service, depending on where k8s is deployed. The ingress controller then forwards each received request on to a pod of the service whose host and path match one of the deployed Ingress resources.
I got curious and thought: why would I need an Ingress for load balancing on layer 7 if the only thing it does is forward the traffic to a Service that implements the load balancing on layer 4?
Most Ingress controller implementations I looked up talk to the Kubernetes API server to keep track of all the Pods associated with a Service. Instead of forwarding traffic to the Service, they skip the intermediary and forward directly to the Pods. Since an ingress controller operates on layer 7, it enables more application-oriented load balancing.
I have a question related to Kubernetes networking.
I have a microservice (say numcruncherpod) running in a pod which serves requests via port 9000, and I have created a corresponding Service of type NodePort (numcrunchersvc); the node port on which this service is exposed is 30900.
My cluster has 3 nodes with the following IPs:
192.168.201.70
192.168.201.71
192.168.201.72
I will be routing traffic to my cluster via a reverse proxy (nginx). As I understand it, in nginx I need to specify the IPs of all these cluster nodes to route traffic to the cluster. Is my understanding correct?
My worry is that since nginx has no knowledge of the cluster, it might not be a good judge of which cluster node the traffic should be sent to. So is there a better way to route traffic to my Kubernetes cluster?
PS: I am not running the cluster on any cloud platform.
This answer is a little late, and a little long, so I ask for forgiveness before I begin. :)
For people not running Kubernetes clusters on cloud providers, there are 4 distinct options for exposing services running inside the cluster to the world outside:
1. Service of type NodePort. This is the simplest and the default. Kubernetes assigns a random port to your service. Every node in the cluster listens for traffic to this particular port and then forwards it to one of the pods backing that service. This is usually handled by kube-proxy, which leverages iptables and load-balances using a round-robin strategy. Typically, since the UX of this setup is not pretty, people often add an external "proxy" server such as HAProxy, Nginx or httpd to listen for traffic on a single IP and forward it to one of these backends. This is the setup you, OP, described.
2. Service of type ExternalIP. A step up from the above: this is identical to the NodePort service, except it also gets Kubernetes to add an additional rule on all Kubernetes nodes that says "all traffic that arrives for destination IP == <the external IP> must also be forwarded to the pods". This basically allows you to specify any arbitrary IP as the "external IP" for the service (a short sketch follows after this list). As long as traffic destined for that IP reaches one of the nodes in the cluster, it will be routed to the correct pod. Getting that traffic to any of the nodes, however, is your responsibility as the cluster administrator. The advantage here is that you no longer have to run an haproxy/nginx setup if you specify the IP of one of the physical interfaces of one of your nodes (for example, one of your master nodes). Additionally, you cut down the number of hops by one.
3. Service of type LoadBalancer. This service type brings bare-metal clusters to parity with cloud providers. A fully functioning LoadBalancer provider is able to select an IP from a pre-defined pool, automatically assign it to your service and advertise it to the network, assuming it is configured correctly. This is the most "seamless" experience you'll have when it comes to Kubernetes networking on bare metal. Most LoadBalancer provider implementations use BGP to talk and advertise to an upstream L3 router. MetalLB and kube-router are the two FOSS projects that fit this niche.
4. Kubernetes Ingress. If your requirement is limited to L7 applications, such as REST APIs, HTTP microservices, etc., you can set up a single Ingress provider (nginx is one such provider) and then configure Ingress resources for all your microservices instead of Service resources. You deploy your Ingress provider and make sure it has an externally available and routable IP (you can pin it to a master node and use the physical interface IP of that node, for example). The advantage of using Ingress over Services is that Ingress objects understand HTTP microservices natively, so you can do smarter health checking, routing and management.
Often people combine one of (1), (2), (3) with (4), since the first three are L4 (TCP/UDP) and (4) is L7. So things like URL path/domain-based routing, SSL termination, etc. are handled by the ingress provider, while IP lifecycle management and routing are taken care of by the service layer.
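As a small sketch of option (2) above, an ExternalIP service is just an ordinary service with the externalIPs field set (the address is made up; it must be one that actually reaches a node):

apiVersion: v1
kind: Service
metadata:
  name: my-service
spec:
  selector:
    app: my-app
  ports:
    - port: 80
      targetPort: 8080
  externalIPs:
    - 10.0.0.50   # e.g. the IP of a physical interface on one of your nodes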
For your use case, the ideal setup would involve:
A deployment for your microservice, with health endpoints on your pod
An Ingress provider, so that you can tweak/customize your routing/load-balancing, as well as use it for SSL termination, domain matching, etc.
(optional): Use a LoadBalancer provider to front your Ingress provider, so that you don't have to manually configure your Ingress's networking.
Correct. You can route traffic to any or all of the K8s minions. The K8s network layer will forward it to the appropriate minion if necessary.
If you are running only a single pod for example, nginx will most likely round-robin the requests. When the requests hit a minion which does not have the pod running on it, the request will be forwarded to the minion that does have the pod running.
If you run 3 pods, one on each minion, the request will be handled by whatever minion gets the request from nginx.
If you run more than one pod on each minion, the requests will be round-robined across the minions, and then round-robined across the pods on each minion.