I have an on-premises Kubernetes installation and it seems to be working fine.
I am now trying to install MetalLB so that I can use Services of type LoadBalancer.
Our network admin gave me the IP range 11.240.15.192/27, which can be used for the Kubernetes cluster's load-balancing services.
My cluster runs on 11.211.220.X and I have one master and three worker nodes.
My question is: what do I need to provide as the IP range in the MetalLB ConfigMap?
Do I need to physically attach those IPs to any of the nodes before MetalLB can use them to hand out IP addresses for my services?
These questions never seem to get answered: all the setups I've found either use Minikube or an installation in a local network where the 192.168.X.X range is fully available.
When I assigned 11.240.15.192-11.240.15.223 in the ConfigMap and created a Service of type LoadBalancer, its external IP stayed in Pending state for a while.
I then applied changes to the service manually, as follows:
...
spec:
  type: LoadBalancer
  externalIPs:
  - 11.240.15.192
I still couldn't connect to my sample nginx deployment on port 80.
Then, to experiment, I changed externalIPs to one of the Kubernetes node IP addresses, and now I can access the nginx index page. This raises a big concern, since I only have three worker nodes and could probably run only three services on port 80, using up the IP address of each node.
Can someone please guide me on where exactly I need to make changes so that I can use the whole range of IP addresses?
I guess you don't have to assign the external IP; it will be assigned automatically, in sequence, from the pool you configured.
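For reference, a minimal sketch of what that pool could look like with the older ConfigMap-based MetalLB configuration, using the range from the question (the pool name is arbitrary, and recent MetalLB releases configure pools through IPAddressPool custom resources instead):
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default-pool          # arbitrary pool name
      protocol: layer2
      addresses:
      - 11.240.15.192/27          # the /27 handed out by the network admin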
I understand that, in Cloud scenarios, a LoadBalancer resource refers to and provisions an external layer 4 load balancer. There is some proprietary integration by which the cluster configures this load balancing. OK.
On-prem we have no such integration so we create our own load balancer outside of the cluster which distributes traffic to all nodes - effectively, in our case to the ingress.
I see no need for the cluster to know anything about this external load balancer.
What is a LoadBalancer then in an on-prem scenario? Why do I have them? Why do I need one? What does it do? What happens when I create one? What role does the LoadBalancer resource play in ingress traffic? What effect does the LoadBalancer have outside the cluster? Is a LoadBalancer just a way to get a new IP address? How, why and where does the IP point, and where does the IP come from?
All questions refer to the Service of type LoadBalancer inside the cluster, not my load balancer outside the cluster, of which the cluster has no knowledge.
As pointed out in the comments, a Kubernetes Service of type LoadBalancer cannot be used by default with on-prem setups. You can use MetalLB to set up a service of that type in an on-prem environment.
Kubernetes does not offer an implementation of network load balancers (Services of type LoadBalancer) for bare-metal clusters. [...] If you’re not running on a supported IaaS platform (GCP, AWS, Azure…), LoadBalancers will remain in the “pending” state indefinitely when created. [...] MetalLB aims to redress this imbalance by offering a network load balancer implementation that integrates with standard network equipment, so that external services on bare-metal clusters also “just work” as much as possible.
You can, for example, use BGP mode to advertise the service's IP to your router; read more on that in the docs.
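A rough sketch of what that could look like with the ConfigMap-based MetalLB configuration; the peer address, ASNs and address range below are placeholders you would replace with values from your network team:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    peers:
    - peer-address: 10.0.0.1      # your router, placeholder
      peer-asn: 64501             # the router's ASN, placeholder
      my-asn: 64500               # the ASN MetalLB speaks with, placeholder
    address-pools:
    - name: default
      protocol: bgp
      addresses:
      - 192.0.2.0/24              # pool to advertise, placeholder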
The project is still in beta but is promoted as production ready and used by several bigger companies.
Edit
Regarding your question in the comments:
Can I just broadcast the MAC address of my node and manually add the IP I am broadcasting to the LoadBalancer service via kubectl edit?
Yes, that would work too. That's basically what MetalLB does: announcing the IP and updating the service.
Why do you need software for it, then? Imagine having 500 hosts that come and go, with thousands of Services of type LoadBalancer that also come and go. You need automation here.
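For reference, the manual variant (whether via kubectl edit or a one-line patch) could look roughly like this; the service name and address are placeholders, and announcing the IP on the network is still up to you:
kubectl patch service my-service \
  -p '{"spec": {"externalIPs": ["198.51.100.10"]}}'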
Why does Kubernetes need to know this IP?
It doesn't. If you don't use an external IP, the service is still usable via its NodePort; see for example the istio docs (with a few more details added by me):
$ kubectl get svc istio-ingressgateway -n istio-system
NAME TYPE CLUSTER-IP EXTERNAL-IP PORT(S)
istio-ingressgateway LoadBalancer 192.12.129.119 <pending> [...],80:32123/TCP,443:30994/TCP,[...]
Here the external IP is not set and stays in <pending>. You can still use the service by pointing your traffic to <Node-IP>:32123 for plain HTTP and to <Node-IP>:30994 for HTTPS. As you can see above, those ports are mapped to 80 and 443.
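In other words, a quick sketch (with <Node-IP> standing in for the address of any cluster node):
curl http://<Node-IP>:32123/       # plain HTTP via the NodePort
curl -k https://<Node-IP>:30994/   # HTTPS via the NodePort (-k skips certificate checks)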
If the external IP is set, you can direct traffic directly to ports 80 and 443 on the external load balancer. kube-proxy will create an iptables chain with your external IP as the destination, which basically leads from the external IP over the service IP, with a load-balancing configuration, to a pod IP.
To investigate that, set up a service of type LoadBalancer, make sure it has an external IP, connect to the host and run the iptables-save command (e.g. iptables-save | less). Search for the external IP and follow the chain until you end up at the pod.
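A rough sketch of that investigation (the external IP is a placeholder, and the exact chain names depend on your kube-proxy version and mode):
# dump the NAT table and look for the rule matching the service's external IP
sudo iptables-save -t nat | grep 198.51.100.10
# that usually points at a KUBE-SVC-... chain; inspect it next
sudo iptables-save -t nat | grep 'KUBE-SVC-'
# each KUBE-SVC-... chain load-balances over KUBE-SEP-... chains,
# and each KUBE-SEP-... chain DNATs to a single pod IP and port
sudo iptables-save -t nat | grep 'KUBE-SEP-'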
Help me figure this out.
I have a bare-metal Kubernetes cluster with three nodes; each node has a public IP.
I have installed MetalLB and an Ingress Controller.
It is not clear to me which IP I should point domains and subdomains to so that they can be handled by the Ingress Controller.
Do I need to define up front on which node the Ingress Controller will be launched?
Do I need to install the Ingress Controller, then look at which worker node it landed on, and point all domains or subdomains there?
What happens if, after restarting the cluster, the Ingress Controller is deployed on another node?
All the tutorials I've seen show how it works locally or with a cloud load balancer.
Help me understand how this should work correctly.
Usually, when you install MetalLB, you configure a pool of addresses which can be used to assign new IPs to LoadBalancer services whenever they are created. Such IP addresses need to be available; they cannot be created out of nothing, of course. They could be leased from your hosting provider, for example.
If instead you have a private bare-metal cluster which serves only your LAN, you could just select a private range of IP addresses which is not otherwise in use.
Then, once MetalLB is running, what happens is the following:
Someone or something creates a LoadBalancer service (a Helm chart, a user with a YAML definition or kubectl commands, etc.)
The newly created service needs an external IP. MetalLB will select one address from the configured range and assign it to that service
MetalLB will start to announce, using standard protocols, that the IP address can now be reached by contacting the cluster. It can work either in layer 2 mode (one node of the cluster holds that additional IP address) or in BGP mode (true load balancing across all nodes of the cluster)
From that point, you can reach the new service by contacting this newly assigned IP address (which is NOT the IP of any of the cluster nodes), as shown in the sketch right after this list
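A sketch of what that looks like once an address has been assigned (the service name and both addresses are made up):
$ kubectl get svc my-service
NAME         TYPE           CLUSTER-IP     EXTERNAL-IP    PORT(S)
my-service   LoadBalancer   10.96.23.148   198.51.100.7   80:31380/TCP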
Usually, the Ingress Controller will just bring its own LoadBalancer service (which will grab an external IP address from MetalLB), and then you can reach the Ingress Controller from that IP.
As for your other questions, you don't need to worry about which node the Ingress Controller is running on or similar; that is handled automatically.
The only thing you may want to do is to make the domain names which you want to serve point to the external IP address assigned to the Ingress Controller.
Some docs:
MetalLB explanations
Bitnami MetalLB chart
LoadBalancer service docs
As an alternative (especially when you want "static" IP addresses), I should mention HAProxy, installed external to the Kubernetes cluster on a bare server / VM / LXC container / etc., and configured to send all incoming 80/443 traffic to the NodePort of the ingress controller on all Kubernetes workers (if no ingress pod is running on that worker, the traffic will be forwarded by Kubernetes); a sketch of such a configuration follows below.
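For illustration, a minimal sketch of such an HAProxy configuration in TCP (passthrough) mode; the worker addresses and the ingress controller's NodePorts (30080/30443 here) are placeholders for your own values:
defaults
    mode tcp
    timeout connect 5s
    timeout client  30s
    timeout server  30s

frontend http_in
    bind *:80
    default_backend ingress_http

backend ingress_http
    balance roundrobin
    server worker1 10.0.0.11:30080 check
    server worker2 10.0.0.12:30080 check
    server worker3 10.0.0.13:30080 check

frontend https_in
    bind *:443
    default_backend ingress_https

backend ingress_https
    balance roundrobin
    server worker1 10.0.0.11:30443 check
    server worker2 10.0.0.12:30443 check
    server worker3 10.0.0.13:30443 check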
Of course, nowadays IP addresses are also "cattle", not "pets" anymore, so MetalLB is more of a "kubernetish" solution, but who knows...
This is the link describing HAProxy solution (I am not affiliated with the author):
https://itnext.io/bare-metal-kubernetes-with-kubeadm-nginx-ingress-controller-and-haproxy-bb0a7ef29d4e
I'm new to Kubernetes.
We have 50 IP addresses, and each IP address has a request limit. The limit is a value kept in a database. We want the load balancer to choose the IP address based on which one has the most of its limit remaining in the database. Can Kubernetes do that?
Firstly, I advise you to read the official documentation about networking in Kubernetes; you can find it here: kubernetes-networking. In particular, read about Services. The built-in load balancing in Kubernetes never checks application-specific databases.
An abstract way to expose an application running on a set of Pods as a network service. With Kubernetes you don't need to modify your application to use an unfamiliar service discovery mechanism. Kubernetes gives Pods their own IP addresses and a single DNS name for a set of Pods, and can load-balance across them.
An example, using the ClusterIP Service type:
Kubernetes assigns a stable, reliable IP address to each newly-created Service (the ClusterIP) from the cluster's pool of available Service IP addresses. Kubernetes also assigns a hostname to the ClusterIP, by adding a DNS entry. The ClusterIP and hostname are unique within the cluster and do not change throughout the lifecycle of the Service. Kubernetes only releases the ClusterIP and hostname if the Service is deleted from the cluster's configuration. You can reach a healthy Pod running your application using either the ClusterIP or the hostname of the Service.
Take a look at how this works in GKE: GKE-IP-allocation.
You can also specify your own cluster IP address as part of a Service creation request; set the .spec.clusterIP field.
The IP address that you choose must be a valid IPv4 or IPv6 address from within the service-cluster-ip-range CIDR range that is configured for the API server. If you try to create a Service with an invalid clusterIP address value, the API server will return a 422 HTTP status code to indicate that there's a problem.
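A minimal sketch of such a request (the name, selector, ports and address are placeholders; the clusterIP must fall inside your cluster's service CIDR):
apiVersion: v1
kind: Service
metadata:
  name: my-app
spec:
  type: ClusterIP
  clusterIP: 10.96.0.50        # placeholder, must be within service-cluster-ip-range
  selector:
    app: my-app
  ports:
  - port: 80                   # port exposed by the Service
    targetPort: 8080           # port the Pods listen on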
To sum up: the Kubernetes load balancer never takes a deep dive into your app. To connect to your app, you need to create a Service. Kubernetes assigns a stable, reliable IP address to each newly created Service, through which you can access your app from within or from outside the cluster. You can also manually assign an IP per Service.
I have installed my Kubernetes cluster on Jelastic. Now I have tried to define a Service of type LoadBalancer and would like it to be provided with an external IP. The external IP is currently marked as pending. What should I do to make it non-pending? Do I have to provide the worker nodes with an external IPv4 address?
In my current setup, my worker nodes have no IPv4 because I put an nginx load-balancer in front of the cluster:
The IPv4 address is set on the nginx node. Is that a problem? If I want to access my LoadBalancer service inside of my Kubernetes cluster, what should I do?
For the LoadBalancer service type to work, the cloud provider must implement the relevant APIs.
With regard to Jelastic, as per their docs (https://docs.jelastic.com/kubernetes-exposing-services/), they don't support it:
Jelastic PaaS does not support the LoadBalancer service type currently.
In Jelastic, public IP addresses have to be attached to worker nodes.
Every worker node has an ingress controller instance running (based on nginx/haproxy/traefik) with HTTP/HTTPS listeners that can forward traffic to the required service.
You just have to bind your domain as a CNAME to the environment FQDN, and then every worker node can accept requests in round-robin DNS mode.
Does this scenario work for you, or do you have a specific requirement to use an external load balancer?
By default, when public IPs are not attached to worker instances, the traffic goes through the Shared Load Balancer.
P.S. If you install the Certification Manager add-on to your K8s cluster, you can also issue free Let's Encrypt certificates.
I have 5 VPS with a public network interface for each, for which I have configured a VPN.
3 nodes are Kubernetes masters where I have set the Kubelet --node-ip flag as their private IP address.
One of the 3 nodes has an HAProxy load balancer for the Kubernetes masters, listening on the private IP, so that all the nodes use the private IP address of the load balancer in order to join the cluster.
2 nodes are Kubernetes workers where I didn't set the Kubelet --node-ip flag so that their node IP is the public address.
The cluster is healthy and I have deployed my application and its dependencies.
Now I'd like to access the app from the Internet, so I've deployed an edge router and created a Kubernetes Service of type LoadBalancer.
The service is created fine, but it never takes the worker nodes' public IP addresses as EXTERNAL-IP.
Assigning the IP addresses manually works, but obviously I want that to be automatic.
I have read about the MetalLB project, but it doesn't seem to fit my case, as it is supposed to have a range of IP addresses to distribute, while here I have one public IP address per node, and they are not in the same range.
So how can I configure Kubernetes so that my Service of type LoadBalancer automatically gets the public IP addresses as EXTERNAL-IP?
I can finally answer my own question, in two parts.
Without an external Load Balancer
Firstly, in order to solve the problem from my question, the only way I found which worked quite well was to set the externalIPs of my LoadBalancer Service to the IP addresses of the Kubernetes worker nodes.
Those nodes were running Traefik and therefore had it listening on ports 80 and 443.
After that, I created as many DNS A records as I have Kubernetes worker nodes, each pointing to the respective worker node's public IP address. This setup makes the DNS server return the list of IP addresses in a random order, and the web browser then takes care of trying the first IP address, then the second one if the first is down, and so on.
The downside of this is that when you want to drain a node for maintenance, or when it crashes, the web browser wastes time trying to reach it until it moves on to the next IP address.
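For reference, a rough sketch of what the Service in that first setup could look like; the selector and the addresses are placeholders for the Traefik pods and the worker nodes' public IPs:
apiVersion: v1
kind: Service
metadata:
  name: traefik
spec:
  type: LoadBalancer
  selector:
    app: traefik               # placeholder selector for the Traefik pods
  ports:
  - name: http
    port: 80
  - name: https
    port: 443
  externalIPs:
  - 203.0.113.11               # worker node 1 public IP, placeholder
  - 203.0.113.12               # worker node 2 public IP, placeholder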
So here comes the second option: an external load balancer.
With an external Load Balancer
I took another VPS where I installed HAProxy and configured SSL passthrough of the Kubernetes API port so that it load-balances the traffic to the master nodes without terminating it.
With this solution, I removed the externalIPs field from my Service and installed MetalLB with a single IP address, configured with this manifest:
apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: staging-public-ips
      protocol: layer2
      addresses:
      - 1.2.3.4/32
When the LoadBalancer Service is created, MetalLB assigns this IP address and calls the Kubernetes APIs accordingly.
This solved my issue of integrating my Kubernetes cluster with GitLab.
WARNING: MetalLB assigns the IP address only once, so if you have a second LoadBalancer Service, it will remain in Pending state forever until you give a new IP address to MetalLB.