Real IP (Domains and Subdomains) on Bare Metal Cluster with MetalLB and Ingress - kubernetes

Help me figure this out.
I have a bare metal Kubernetes cluster with three nodes; each node has a public IP.
I have installed MetalLB and an Ingress Controller.
It is not clear to me which IP I should point domains and subdomains at so that they can be handled by the Ingress Controller.
Do I need to decide in advance which node the Ingress Controller will run on?
Should I install the Ingress Controller, check which worker node it lands on, and then point all domains and subdomains at that node?
What happens if, after a cluster restart, the Ingress Controller is deployed on another node?
All the tutorials I've seen show how it works locally or with a cloud load balancer.
Help me understand how this should work correctly.

Usually, when you install MetalLB, you configure a pool of addresses which can be assigned to LoadBalancer services whenever they are created. Such IP addresses need to be available; they cannot be created out of nothing, of course. They could be leased from your hosting provider, for example.
If instead you have a private bare metal cluster which serves only your LAN, you could just select a private range of IP addresses which are not in use.
Then, once MetalLB is running, what happens is the following:
Someone / something creates a LoadBalancer service (a Helm chart, a user applying a definition or running commands, etc.)
The newly created service needs an external IP. MetalLB will select one address from the configured range (see the sample configuration after this list) and assign it to that service.
MetalLB will start to announce, using standard protocols, that the IP address can now be reached by contacting the cluster. It can work either in Layer 2 mode (one node of the cluster holds that additional IP address) or in BGP mode (true load balancing across all nodes of the cluster).
From that point, you can reach the new service by contacting this newly assigned IP address (which is NOT the IP of any of the cluster nodes).
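As a minimal sketch, this is roughly what such a pool looks like with the CRD-based configuration used by recent MetalLB releases (the pool name is made up, and 192.0.2.240/28 is just a placeholder for whatever range you actually lease or reserve):

apiVersion: metallb.io/v1beta1
kind: IPAddressPool
metadata:
  name: public-pool              # hypothetical name
  namespace: metallb-system
spec:
  addresses:
  - 192.0.2.240/28               # placeholder: the range leased from your provider
---
apiVersion: metallb.io/v1beta1
kind: L2Advertisement
metadata:
  name: public-l2                # hypothetical name
  namespace: metallb-system
spec:
  ipAddressPools:
  - public-pool                  # announce addresses from the pool above in Layer 2 mode

Older MetalLB versions use a ConfigMap instead of these CRDs, but the idea is the same: you tell MetalLB which addresses it is allowed to hand out.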
Usually, the Ingress Controller will simply ship with a LoadBalancer service (which will grab an external IP address from MetalLB), and then you can reach the Ingress Controller at that IP.
As for your other questions, you don't need to worry about which node the Ingress Controller is running on or similar; it is handled automatically.
The only thing you may want to do is make the domain names you want to serve point to the external IP address assigned to the Ingress Controller.
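For example (the hostnames, service names and IP below are illustrative, not from your setup): if MetalLB assigns 192.0.2.241 to the ingress controller's LoadBalancer service, you create DNS A records for example.com and app.example.com pointing at 192.0.2.241, and then route each host with ordinary Ingress resources:

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-com              # hypothetical name
spec:
  ingressClassName: nginx        # assumes the NGINX ingress controller class
  rules:
  - host: example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: website        # hypothetical backend service
            port:
              number: 80
  - host: app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: app            # hypothetical backend service
            port:
              number: 80

Because clients always reach the MetalLB-assigned address rather than a node address, it does not matter which node the controller pod ends up scheduled on, even after a cluster restart.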
Some docs:
MetalLB explanations
Bitnami MetalLB chart
LoadBalancer service docs

As an alternative (especially when you want "static" IP addresses) I should mention HAProxy, installed outside the Kubernetes cluster on a bare server/VM/LXC container/etc. and configured to send all incoming 80/443 traffic to the NodePort of the ingress controller on all Kubernetes workers (if no ingress pod is running on a given worker, Kubernetes will forward the traffic).
Of course, nowadays ip addresses are also "cattle", not "pets" anymore, so MetalLB is more of a "kubernetish" solution, but who knows ...
This is the link describing HAProxy solution (I am not affiliated with the author):
https://itnext.io/bare-metal-kubernetes-with-kubeadm-nginx-ingress-controller-and-haproxy-bb0a7ef29d4e
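In that setup the ingress controller is exposed through a NodePort service with fixed ports, and the external HAProxy simply forwards ports 80/443 to those node ports on every worker. A rough sketch of such a service (names, labels and port numbers are illustrative, not taken from the linked article):

apiVersion: v1
kind: Service
metadata:
  name: ingress-nginx            # hypothetical name
  namespace: ingress-nginx
spec:
  type: NodePort
  selector:
    app.kubernetes.io/name: ingress-nginx   # assumes the controller pods carry this label
  ports:
  - name: http
    port: 80
    targetPort: 80
    nodePort: 30080              # HAProxy forwards incoming :80 traffic to this port on every worker
  - name: https
    port: 443
    targetPort: 443
    nodePort: 30443              # HAProxy forwards incoming :443 traffic to this port on every worker

The fixed nodePort values keep the HAProxy backend configuration static, regardless of which workers actually run an ingress pod.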

Related

How to forward traffic to an on-premise Kubernetes cluster

I'm trying to understand how traffic can be forwarded to an on-premise Kubernetes cluster.
It's clear to me that with a public cloud provider, the underlying cloud infrastructure can automatically manage and forward traffic to a Kubernetes distribution, such as EKS, GKE or AKS, by assigning a LoadBalancer IP to a Kubernetes Service. Then, after a few seconds, this service receives an external IP and becomes reachable from the outside world.
On the other hand, in an on-premise Kubernetes cluster, a service of type LoadBalancer stays Pending forever unless you assign a node IP; but what if you want to assign a different IP, from a private IP range? To tackle this, in my homelab I've deployed MetalLB inside my K3s cluster. MetalLB is configured to use a private IP range of my network, let's say 10.0.0.0/24. Now services of type LoadBalancer can consume an address from this range, e.g. my Ingress Controller can receive 10.0.2.3 as its external IP.
I can't understand what MetalLB is doing under the hood. How does MetalLB "listen" on an address of the range and forward traffic to my cluster? Can this be achieved without MetalLB? I've tried setting an externalIP directly on a service of type LoadBalancer, but the service never managed to claim that specific IP without MetalLB.
In addition, I'm aware that this can also be achieved with a "physical" load-balancer solution, such as NGINX or HAProxy, that sits in front of the cluster. To my understanding, technically this does the same thing as MetalLB: with such a solution configured, an address can be listened on and traffic forwarded to the cluster. But my question here is: can this be achieved without those technologies? Can a Kubernetes cluster listen on an external address and accept traffic without an intermediate solution? Maybe through firewall rules and port-forwarding?
Your time is highly appreciated!
This involves some core networking concepts like NAT. You can have two networks, one local CIDR and one external CIDR. To expose the services, you can NAT the local CIDR to the external CIDR and configure the required firewall rules to make your cluster serve the public.

Kubernetes: DNS server, Ingress controller and MetalLB communications

I'm unable to wrap my head around the concepts of interconnectivity among DNS, the Ingress controller, MetalLB and kube-proxy.
I know what these resources/tools/services are for and understand the concepts of them individually, but I'm unable to form a picture of them working in tandem.
For example, in a bare metal setup, a client accesses my site, https://mytestsite.com. I still have doubts about how the request effectively lands on the right pod, and where the above-mentioned services/resources/tools come into the picture, and at what stage.
For example: how does DNS talk to my MetalLB, how does a client reach the MetalLB address hosting my application, how does the LB in turn speak to the Ingress Controller, and finally, where does kube-proxy come into play here?
I went through the K8s official documentation and a few others as well, but I'm still kind of stumped. The following article is really good, but I'm unable to stitch the pieces together.
https://www.disasterproject.com/kubernetes-with-external-dns/
Kindly redirect me to the correct forum, if it is not the right place, thanks.
The ingress controller creates a service of type LoadBalancer that serves as the entry point into the cluster. In a public cloud environment, a load balancer like ELB on AWS would create the counterpart and set the externalIP of that service to its own IP. It is like a service of type NodePort, but it also has an external IP, which corresponds to the actual IP of that counterpart.
In a bare metal environment, no external load balancer will be created, so the external IP would stay in <Pending> state forever. Here, for example, is the service of the Istio ingress controller:
$ kubectl get svc istio-ingressgateway -n istio-system
NAME                   TYPE           CLUSTER-IP       EXTERNAL-IP   PORT(S)
istio-ingressgateway   LoadBalancer   192.12.129.119   <Pending>     [...],80:32123/TCP,443:30994/TCP,[...]
In that state you would need to call http://<node-ip>:32123 to reach the http port 80 of the ingress controller service, which would then be forwarded to your pod (more on that in a bit).
When you're using MetalLB, it will update the service with an external IP so you can call http://<ip> instead. MetalLB will also announce that IP, e.g. via BGP, so others know where to send traffic when someone calls that IP.
I haven't used external-dns and only scanned the article, but I guess you can use it to also have a DNS record created, so someone can reach your service by its domain, not only by its IP. So you can call http://example.com instead.
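I haven't verified it myself, but the usual pattern with external-dns is to annotate the service (or rely on the Ingress host fields) so that the record is created automatically; something along these lines, where the service name, label and hostname are all made up:

apiVersion: v1
kind: Service
metadata:
  name: my-ingress               # hypothetical service name
  annotations:
    # external-dns watches for this annotation and creates a DNS record
    # pointing example.com at the external IP assigned by MetalLB
    external-dns.alpha.kubernetes.io/hostname: example.com
spec:
  type: LoadBalancer
  selector:
    app: my-ingress              # hypothetical pod label
  ports:
  - port: 80
    targetPort: 80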
This is basically why you run MetalLB and how it interacts with your ingress controller: the ingress controller creates an entry point into the cluster, and MetalLB configures it and attracts traffic to it.
Up to this point, the call to http://example.com can reach your cluster, but it still needs to reach the actual application running in a pod inside the cluster. That's kube-proxy's job.
You read a lot about services of different types and all that kind of stuff, but in the end it all boils down to iptables rules: kube-proxy creates a bunch of those rules, which form a chain.
SSH into any Kubernetes worker, run iptables-save | less and search for the external IP that MetalLB configured on your ingress controller's service. You'll find a chain with your external IP as the destination, which basically leads from the external IP, over the service IP with a load-balancing configuration, to a pod IP.
In the end the whole chain would look something like this:
http://example.com
-> http://<some-ip> (domain translated to ip)
-> http://<node-ip>:<node-port> (ingress-controller service)
---
-> http://<cluster-internal-ip>:<some-port> (service of your application)
-> http://<other-cluster-internal-ip>:<some-port> (ip of one of n pods)
where the --- line shows the switch from cluster-external to cluster-internal traffic. The cluster-internal IP will be from the configured service CIDR and the other cluster-internal IP will be from the configured pod CIDR.
Note that there are different ways to configure cluster-internal traffic routing and to run kube-proxy, and some parts are even a bit simplified here, but this should give you a good enough understanding of the overall concept.
Also see this answer on the question 'What is a Kubernetes LoadBalancer On-Prem', that might provide additional input.

Service external IP pending on kubernetes hosted on jelastic

I have installed my kubernetes cluster on Jelastic. Now, I tried to define a service of LoadBalancer type and would like it to be provided with an external IP. The external IP is currently marked as pending. What should I do to make it non-pending? Do I have to provide the worker nodes with an external IPv4?
In my current setup, my worker nodes have no IPv4 because I put an nginx load balancer in front of the cluster.
The IPv4 is set on the nginx node. Is that a problem? If I want to access my LoadBalancer service inside my Kubernetes cluster, what should I do?
For the LoadBalancer service type to work, the cloud provider must implement the relevant APIs.
With regard to Jelastic, as per their docs (https://docs.jelastic.com/kubernetes-exposing-services/), they don't support it:
Jelastic PaaS does not support the LoadBalancer service type currently.
In Jelastic, Public IP addresses have to be attached to worker nodes.
Every worker node has an ingress controller instance running (based on nginx/haproxy/traefik) with HTTP/HTTPS listeners that can forward traffic to the required service.
You just have to bind your domain as a CNAME to the Environment FQDN, and then every worker node can accept requests in round-robin DNS mode.
Does this scenario work for you, or do you have a specific requirement to use an external load balancer?
By default, when Public IPs are not attached to worker instances, the traffic goes through the Shared Load Balancer.
P.S. If you install the Certification Manager add-on to your K8s cluster, you can also issue free Let's Encrypt certificates.

MetalLB load balancer IP address range

I have a Kubernetes installation on-premise and it seems to be working fine.
I am now trying to install MetalLb to use load-balancer service.
Our network guy gave me IP ranges of 11.240.15.192/27 which can be used for Kubernetes cluster load-balancing service.
My cluster runs on 11.211.220.X and I have one master and three worker nodes.
My question is, what do I need to provide as the IP range in the ConfigMap of the MetalLB load balancer?
Do I need to physically attach those IPs to any of the nodes before MetalLB can use them to hand out IP addresses for my services?
These questions never seem to be answered. All the setups either use Minikube or an installation in a local network where the 192.168.X.X range is fully available.
When I assigned 11.240.15.192-11.240.15.223 in the ConfigMap and created a service of type LoadBalancer, the external IP was still in Pending state for a while.
I then applied changes manually to the service as follows:
...
spec:
  type: LoadBalancer
  externalIPs:
  - 11.240.15.192
It still couldn't connect to my sample nginx deployment on port 80.
Then, to experiment, I changed externalIPs to one of the Kubernetes node IP addresses, and now I can access the nginx index page. This raises a big concern, since I only have three worker nodes and can probably run only three services on port 80, using up the IP address of each node.
Can someone please guide me on where exactly I need to make changes so that I can use the whole range of IP addresses?
I guess you don't have to assign the external IP manually. It will be assigned automatically, and in sequence, from the pool you configured.
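For reference, with the ConfigMap-based setup the question describes, a Layer 2 configuration for that range would look roughly like this (newer MetalLB releases use CRDs instead, but the idea is the same):

apiVersion: v1
kind: ConfigMap
metadata:
  namespace: metallb-system
  name: config
data:
  config: |
    address-pools:
    - name: default
      protocol: layer2
      addresses:
      - 11.240.15.192/27       # the range handed out by your network team

You don't attach these addresses to any node yourself; in Layer 2 mode the MetalLB speaker answers ARP for whichever address a LoadBalancer service is given. Leave externalIPs unset and let MetalLB pick an address from the pool.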

Multiple Host Kubernetes Ingress Controller

I've been studying Kubernetes for a few weeks now, and using the kube-lego NGINX examples (https://github.com/jetstack/kube-lego) have successfully deployed services to Kubernetes cluster using Rancher on DigitalOcean.
I've deployed sample static sites, Wordpress, Laravel, Craft CMS, etc. All of which use custom Namespaces, Deployment, Secrets, Containers with external registries, Services, and Ingress Definitions.
Using the example (lego) NGINX Ingress Controller setup, I'm able to apply DNS to the exposed IP address of my K8s cluster, and have the resulting sites appear.
What I don't know, though, is how to allow multiple hosts to have Ingress Controllers service the same deployments, and thus provide HA ingress to the cluster (by applying an external load balancer service, or geo-IP, or what have you).
Rancher (stable) allows me to add multiple hosts, I've spun up 3 to 5 at a time, and Kubernetes is configured and deployed across all Hosts. Furthermore, I'll define many replicas and/or deployments (listed above) and they will be spread over the cluster and accessible as would be expected. I've even specified multiple replicas of the Ingress Controller, but of course they all get scheduled on the same host, giving me only one IP address of Ingress.
So how do I allow multiple hosts (each with their own public facing IP address) to allow ingress into the cluster? I've also read about setting up multiple Ingress Controllers, but then you must specify what deployment/services are being serviced by what Ingress Controller, which then totally defeats the purpose.
Maybe I'm missing something, but if K8s multi-host is supposed to provide HA, and the Host with the Ingress Controller goes down, then the service will be rescheduled on the other Hosts, but the IP address that everything is pointing to will be dead, and thus an outage. Any way to have multiple IP Addresses to the same set of deployment/services?
I investigated my setup a bit more today, and I think I found out why I was having difficulty. The "LoadBalancer" is often mentioned as for use with Cloud Providers (in both docs, and what #fiunchinho describes). I was using it with a Rancher setup, which auto creates an HA-Proxy LoadBalancer ingress for you on the hosts.
By default, it will just schedule it on one of the hosts. You can specify that you want it scheduled globally by providing an annotation of io.rancher.scheduler.global: "true".
Like so:
annotations:
  # Create load balancers on every host in the environment
  io.rancher.scheduler.global: "true"
http://rancher.com/docs/rancher/v1.6/en/rancher-services/load-balancer/
I preferred LoadBalancer over NodePort because I wanted the ability to send port 80 (and in the future port 443) to any of the Nodes, and have them successfully fulfil my request by inspecting the Host header, and directing as-needed.
These LBs can also be set up in the Rancher UI under the "Infrastructure Stack" menu. I have successfully removed the single LB, and re-added one with the "Always run one instance of this container on every host" option enabled.
After this was configured, I could make a request to any of the Hosts for any of the Ingresses, and get a response, no matter what host the container was scheduled on.
So cool!
The ingress controller is deployed like any regular pod. That means that you can have as many replicas as you'd like, which will be spread among all your nodes.
You need a Service object that groups all the pods for the ingress controller.
Then you just need to expose that Service outside the cluster. You can do that using a LoadBalancer service if you are on a cloud provider, or you can just use a NodePort service.
The point is that the service will balance the traffic that your ingress controller receives between all the pods that are running on different kubernetes nodes. If one of the nodes goes down, it doesn't really matter, because there are other nodes containing ingress controller pods.
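As a rough sketch of that idea (the image, labels and ports below are illustrative placeholders): run several replicas of the controller and group them behind a single NodePort service, so any node can accept traffic on the node port regardless of where the controller pods actually land.

apiVersion: apps/v1
kind: Deployment
metadata:
  name: ingress-controller       # hypothetical name
spec:
  replicas: 3                    # several controller pods; pod anti-affinity can force them onto different nodes
  selector:
    matchLabels:
      app: ingress-controller
  template:
    metadata:
      labels:
        app: ingress-controller
    spec:
      containers:
      - name: controller
        image: registry.example.com/ingress-controller:latest   # placeholder image
        ports:
        - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: ingress-controller
spec:
  type: NodePort
  selector:
    app: ingress-controller
  ports:
  - port: 80
    targetPort: 80
    nodePort: 30080              # reachable on this port on every node, regardless of pod placement

Pointing DNS (or an external balancer) at several node IPs on that port gives you multiple entry points to the same set of ingress pods.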