HAProxy: redirect requests from an HAProxy load balancer to another load balancer

I have a GCP external load balancer that is only exposed on port 443, and an HAProxy load balancer that is exposed on many ports. How can I redirect the traffic coming into the HAProxy load balancer to the other load balancer so that I don't have to specify the backends?
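One way to do this, assuming the goal is simply to pass everything through, is to declare the GCP load balancer as the only server in a catch-all backend, so no per-service backends are needed. A minimal sketch; the frontend/backend names, the bind port, and the address 203.0.113.10 are placeholders for your own values:

    # Catch-all frontend: forward all TCP traffic to the GCP load balancer
    frontend fe_all
        bind *:8443            # placeholder: whichever port(s) HAProxy exposes
        mode tcp
        default_backend be_gcp_lb

    backend be_gcp_lb
        mode tcp
        # Placeholder address for the GCP load balancer's frontend
        server gcp_lb 203.0.113.10:443 check

In mode tcp the TLS stream is passed through untouched, so the GCP load balancer still does its own termination on port 443.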

Related

SSL termination for Kubernetes NGINX Ingress load balancer exposing an EMQX broker in GCP

I am currently trying to do SSL termination for an EMQX broker deployed in GKE.
The implementation of the EMQX broker exposed by an NGINX Ingress L4 load balancer was successful. I am able to display the dashboard and connect to the broker via the LB IP.
I've tried creating an NGINX Ingress pointing to the broker and the L4 load balancer, but I can't add SSL to it via a Google-managed certificate.
I've also tried creating a Google TCP/UDP load balancer, but only the dashboard is displayed and I can't connect to the broker, maybe because the HTTP-to-TCP traffic is not pointed at the correct port? I'm not sure.
I thought that implementing an L7 load balancer pointing to the backend service created by the Ingress (which in turn points to the L4 load balancer's ports) might be an option, but I couldn't make it work.
Has anyone been able to implement this architecture and can provide me with an example of it? Basically I want to connect to the broker via WSS with a custom domain, using Kubernetes with a Google-managed certificate.
Thanks.
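For what it's worth, one direction that matches this goal is to terminate TLS on a GCE (L7) Ingress with a Google-managed certificate and route it to the broker's WSS listener; GCP's HTTPS load balancer supports WebSocket upgrades. A minimal sketch, assuming the EMQX service is named emqx and uses the default WSS port 8084; the hostname and certificate name are placeholders, and the service must be exposed in a way the GCE Ingress can reach (NodePort or container-native NEGs):

    apiVersion: networking.gke.io/v1
    kind: ManagedCertificate
    metadata:
      name: emqx-cert
    spec:
      domains:
        - broker.example.com          # placeholder custom domain
    ---
    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: emqx-wss
      annotations:
        kubernetes.io/ingress.class: "gce"
        networking.gke.io/managed-certificates: "emqx-cert"
    spec:
      defaultBackend:
        service:
          name: emqx                  # placeholder service name
          port:
            number: 8084              # EMQX default WSS listener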

Load balancing in front of Traefik edge router

Looking at OpenShift's HAProxy or the Traefik project: https://docs.traefik.io/.
I can see the Traefik ingress controller is deployed as a DaemonSet.
It routes traffic to the correct services/endpoints using virtual hosts.
Assuming I have a Kubernetes cluster with several nodes.
How can I avoid having a single point of failure?
Should I have a load balancer (or DNS load balancing), in front of my nodes?
If yes, does it mean that:
The load balancer will send traffic to one node of the k8s cluster
Traefik will send the request to one of the endpoints/pods, where this pod could be located on a different k8s node?
Does it mean there would be a level of indirection?
I am also wondering if the F5 cluster mode feature could avoid such indirection?
EDIT: when used with the F5 Ingress resource
You can have a load balancer (an F5 BIG-IP or a software load balancer) in front of the Traefik pods. When a client request comes in, it will be sent to one of the Traefik pods by the load balancer. Once the request is in a Traefik pod, Traefik sends it on to the IPs of the Kubernetes workload pods based on the ingress rules, getting those pod IPs from the Kubernetes Endpoints API. You can configure L7 load balancing in Traefik for your workload pods.
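As a concrete illustration of such an ingress rule, a minimal host-based Ingress that Traefik could pick up (the host app1.example.com and the service name my-service are placeholders):

    apiVersion: networking.k8s.io/v1
    kind: Ingress
    metadata:
      name: app1
      annotations:
        kubernetes.io/ingress.class: "traefik"
    spec:
      rules:
        - host: app1.example.com      # virtual host used for routing
          http:
            paths:
              - path: /
                pathType: Prefix
                backend:
                  service:
                    name: my-service  # workload pods behind this service
                    port:
                      number: 80

Traefik watches these rules and the matching Endpoints, so it forwards directly to pod IPs even when those pods sit on other nodes.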
Using a software reverse proxy such as nginx and exposing it via a load balancer introduces an extra network hop from the load balancer to the nginx ingress pod.
Looking at the F5 docs, the BIG-IP controller can also be used as an ingress controller, and I think using it that way you can avoid the extra hop.

How does an external load balancer learn of istio ingress gateways

When using an external load balancer with Istio ingress gateways (multiple replicas spread across different nodes), how does it identify which Istio ingress gateway it can hit? I can manually access nodeip:nodeport/endpoint for any node, but how is an external load balancer expected to know about all the nodes?
Is this manually configured, or does the load balancer consume this info from an API?
Is there a recommended strategy for bypassing an external load balancer, e.g. round-robin across a DNS record that is aware of the node IPs/ports?
The root of this question is: how do we avoid a single point of failure? Using multiple Istio ingress gateway replicas achieves this in Istio, but then the external load balancer / load balancer cluster needs to know the replicas. Is this automated, a manual config, or is there a single virtual endpoint that the external load balancer hits?
External load balancers are generally configured to do health checks on your set of nodes (over a /healthz endpoint or some other method) and to balance the incoming traffic using an LB algorithm, sending the packets they receive to one of the healthy nodes over the service's NodePort.
In fact, that's mostly the reason why NodePort-type services exist in the first place: they don't have much use by themselves, but they are the intermediate step between the ClusterIP and LoadBalancer service types.
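To make that concrete, a minimal sketch of a NodePort service in front of the gateway pods, assuming the pods carry the label istio: ingressgateway (the default in standard Istio installs); the names and port numbers are placeholders:

    apiVersion: v1
    kind: Service
    metadata:
      name: ingressgateway
    spec:
      type: NodePort
      selector:
        istio: ingressgateway
      ports:
        - name: http
          port: 80          # service port inside the cluster
          targetPort: 8080  # container port on the gateway pods
          nodePort: 31380   # opened on every node; the external LB targets this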
How does the load balancer know about the nodes? It depends heavily on the load balancer. As an example, if you use MetalLB in BGP mode, you need to add your nodes as peers to your external BGP router (either manually or in an automated way). MetalLB takes care of advertising the IPs of the LoadBalancer-type services to the router. This means that the router effectively becomes the load balancer of your cluster.
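For illustration, a sketch of MetalLB's legacy ConfigMap configuration for BGP mode (newer MetalLB releases use CRDs instead); the peer address, ASNs, and address pool below are placeholders:

    apiVersion: v1
    kind: ConfigMap
    metadata:
      namespace: metallb-system
      name: config
    data:
      config: |
        peers:
        - peer-address: 10.0.0.1      # the external BGP router
          peer-asn: 64501
          my-asn: 64500
        address-pools:
        - name: default
          protocol: bgp
          addresses:
          - 203.0.113.0/24            # pool for LoadBalancer service IPs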
There are also a number of enterprise-grade commercial Kubernetes load balancers out there, such as F5 BIG-IP.
Enable ClusterIP for the service rather than NodePort. Any LB can be used along with the ingress, but it depends on the platform you are using: bare metal, OpenShift, IBM Cloud, Google Cloud, and so on. Once the ingress controller (MetalLB, NGINX, Traefik) is able to communicate, any LB such as an F5 GTM or LTM can be set up in front.

Forwarding traffic from a DigitalOcean Load Balancer to a Kubernetes Service not working

I created a kubernetes service that is exposed via type: NodePort. I can access the service in my browser if I enter http://PublicDropletIp:31433.
Now I want to use a DigitalOcean Load Balancer to forward traffic from port 80 to the service. So I set a rule for the Load Balancer to forward http/80 to Droplet http/31433.
Unfortunately this doesn't work. If I enter the load balancer IP in the browser I get: 503 Service Unavailable.
Does anyone know how I can expose the service so that the Load Balancer can forward traffic to it?
I had this same issue and ended up on this thread. If anyone else is looking, I resolved it by configuring the firewall on my server.
To answer the question above, the firewall should be configured to accept TCP connections from the load balancer's IP on port 31433.
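For example, with ufw on the droplet (the load balancer IP 203.0.113.20 is a placeholder; the port matches the NodePort above):

    # Allow the load balancer to reach the NodePort on this droplet
    sudo ufw allow proto tcp from 203.0.113.20 to any port 31433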

Load balancing subdomains in Azure Service Fabric

I have two web apps sitting in the FrontEnd node type of a Service Fabric cluster. One app is listening on port 7000, the other on port 8000, but both ultimately sit behind the same load balancer with public IP 1.2.3.4.
I would like to configure the DNS to point both app1.mydomain.com and app2.mydomain.com to IP 1.2.3.4, but have the load balancer route app1 subdomain requests (port 80 or 443) to port 7000 and app2 to port 8000.
Is this possible, or do I need to set up two load balancers with two public IPs?
It seems I can add multiple IP addresses to the same load balancer and associate different rules with each IP address added.
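A rough sketch of that approach with the Azure CLI; the resource group, load balancer, frontend, public IP, and pool names are all placeholders. Each app's DNS record then points at its own frontend IP:

    # Add a second frontend IP to the existing load balancer
    az network lb frontend-ip create \
      --resource-group myRG --lb-name myLB \
      --name app2Frontend --public-ip-address app2PublicIP

    # Route port 443 on that frontend to app2's port 8000
    az network lb rule create \
      --resource-group myRG --lb-name myLB \
      --name app2Rule --protocol Tcp \
      --frontend-ip-name app2Frontend \
      --frontend-port 443 --backend-port 8000 \
      --backend-pool-name FrontEndPool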