Requests Queue during Failover - haproxy

I would like to set up HAProxy + Keepalived on an OVH server.
When I move the failover IP with the OVH API, the move takes ~5 minutes.
So during those 5 minutes, all requests that are sent fail.
That's why I would like to know how to cache or queue these requests.
How can I do that?
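While the failover IP is being moved, requests to it simply fail (as you describe), so HAProxy behind it never sees them and cannot queue them; any buffering has to happen on the caller's side or on a machine whose address does not change. Purely as an illustration of the client-side option (this is not an HAProxy feature), here is a minimal Go sketch that keeps retrying an idempotent request with backoff until the IP answers again; the URL, timeouts, and 6-minute deadline are made-up values.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

// retryGet re-sends an idempotent GET until it succeeds or the overall
// deadline expires. The deadline is chosen to outlast the ~5 min IP move.
func retryGet(url string, deadline time.Duration) (*http.Response, error) {
	client := &http.Client{Timeout: 10 * time.Second}
	backoff := 2 * time.Second
	start := time.Now()
	for {
		resp, err := client.Get(url)
		if err == nil {
			return resp, nil
		}
		if time.Since(start) > deadline {
			return nil, fmt.Errorf("gave up after %v: %w", deadline, err)
		}
		time.Sleep(backoff)
		if backoff < 30*time.Second {
			backoff *= 2 // exponential backoff, capped at 30 s
		}
	}
}

func main() {
	// Hypothetical endpoint behind the OVH failover IP.
	resp, err := retryGet("http://203.0.113.10/health", 6*time.Minute)
	if err != nil {
		fmt.Println(err)
		return
	}
	defer resp.Body.Close()
	fmt.Println("back online:", resp.Status)
}
```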

Related

Delayed Unauthorized responses from AKS

We use AKS's kube-apiserver for leader election from a VM that is external to the k8s cluster but in the same VNET. We use a k8s client from the client-go package. The client tries to get/update the lease object every 2 seconds. We observe occasional failures (every few hours) caused by delayed "Unauthorized" responses. The client-go HTTP client refreshes creds/tokens when it receives an Unauthorized response. However, when the Unauthorized response is extremely delayed (sometimes up to 30 seconds!), the timeouts of the HTTP client or the leader-election mechanism kick in.
We could tune the HTTP client and leader-election timeouts but we do need a fast failover (e.g., up to 30 sec), so it would be great to eliminate these delays.
What is the reason these Unauthorized responses get so delayed?
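For reference, below is roughly what a client-go leader-election loop like the one described looks like; the lease name, namespace, identity, kubeconfig path, and durations are placeholders rather than the actual configuration, and the 2-second RetryPeriod matches the get/update cadence mentioned above. A lease update that stalls behind a slow Unauthorized-then-retry round trip eats into RenewDeadline, which is why 30-second delays are enough to trip the failover.

```go
package main

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
	"k8s.io/client-go/tools/clientcmd"
	"k8s.io/client-go/tools/leaderelection"
	"k8s.io/client-go/tools/leaderelection/resourcelock"
)

func runElection(ctx context.Context, cfg *rest.Config) {
	client := kubernetes.NewForConfigOrDie(cfg)

	// The Lease object the contenders compete for (hypothetical name/namespace).
	lock := &resourcelock.LeaseLock{
		LeaseMeta:  metav1.ObjectMeta{Name: "my-lease", Namespace: "default"},
		Client:     client.CoordinationV1(),
		LockConfig: resourcelock.ResourceLockConfig{Identity: "external-vm-1"},
	}

	leaderelection.RunOrDie(ctx, leaderelection.LeaderElectionConfig{
		Lock:          lock,
		LeaseDuration: 30 * time.Second, // how long a silent leader is tolerated
		RenewDeadline: 20 * time.Second, // a renew slower than this forfeits leadership
		RetryPeriod:   2 * time.Second,  // the ~2 s get/update cadence from the question
		Callbacks: leaderelection.LeaderCallbacks{
			OnStartedLeading: func(ctx context.Context) { /* start leader work */ },
			OnStoppedLeading: func() { /* step down */ },
		},
	})
}

func main() {
	// Hypothetical kubeconfig path on the VM outside the cluster.
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config")
	if err != nil {
		panic(err)
	}
	runElection(context.Background(), cfg)
}
```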

Kubernetes options request Load Balancer latency of 10 seconds - how to debug?

I have an issue with the backend responding to OPTIONS requests.
Sometimes the server responds to the OPTIONS request in a few ms, but often it takes 10 seconds or more. I have a feeling the load balancer / ingress is holding it back, since the backend server does not behave this way locally and it just uses a Node.js app.use(cors()).
The main question is: how can I debug where this goes wrong?
I can see incoming requests, but I cannot tell for sure whether the delay is in the load balancer itself or in the load balancer waiting for a response from the server.
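One way to narrow it down (assuming you can also reach the backend directly, e.g. via a port-forward) is to time the phases of the same preflight request on both paths: if the request is written quickly but the first response byte only takes ~10 s when going through the load balancer / ingress, the wait is in front of the backend rather than in Node.js. A rough Go sketch using net/http/httptrace; the URL and CORS headers are placeholders.

```go
package main

import (
	"fmt"
	"net/http"
	"net/http/httptrace"
	"time"
)

func main() {
	// Replace with the load-balancer URL, then with the backend reached directly.
	req, err := http.NewRequest(http.MethodOptions, "https://api.example.com/endpoint", nil)
	if err != nil {
		panic(err)
	}
	// Headers a browser would send for a CORS preflight.
	req.Header.Set("Origin", "https://app.example.com")
	req.Header.Set("Access-Control-Request-Method", "GET")

	start := time.Now()
	trace := &httptrace.ClientTrace{
		ConnectDone: func(network, addr string, err error) {
			fmt.Printf("%8v  connected to %s (err=%v)\n", time.Since(start), addr, err)
		},
		WroteRequest: func(info httptrace.WroteRequestInfo) {
			fmt.Printf("%8v  request written (err=%v)\n", time.Since(start), info.Err)
		},
		GotFirstResponseByte: func() {
			fmt.Printf("%8v  first response byte\n", time.Since(start))
		},
	}
	req = req.WithContext(httptrace.WithClientTrace(req.Context(), trace))

	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Printf("%8v  status %s\n", time.Since(start), resp.Status)
}
```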

Kubernetes load balance HTTP/1.1 requests

As we know, HTTP/1.1 uses persistent connections by default, which are long-lived. Any Service in Kubernetes, for example one in ClusterIP mode, is an L4-based load balancer.
Suppose I have a Service backing a web server with 3 Pods; I am wondering whether HTTP/1.1 requests can be distributed across the 3 Pods.
Could anybody help clarify this?
This webpage addresses your question perfectly: https://learnk8s.io/kubernetes-long-lived-connections
In the spirit of StackOverflow, let me summarize the webpage here:
TLDR: Kubernetes doesn't load balance long-lived connections, and some Pods might receive more requests than others.
Kubernetes Services do not exist. There's no process listening on the IP address and port of a Service.
The Service IP address is used only as a placeholder that will be translated by iptables rules into the IP addresses of one of the destination pods using cleverly crafted randomization.
Connections from clients (whether from inside or outside the cluster) are established directly with the Pods; hence an HTTP/1.1 persistent connection is maintained between the client and one specific Pod until it is closed by either side.
Thus, all requests sent over a single persistent connection are routed to that single Pod (the one selected by the iptables rules when the connection was established) and are not load-balanced across the other Pods.
Additional info:
Per RFC 2616, section 8.1.3 (https://www.w3.org/Protocols/rfc2616/rfc2616-sec8.html#sec8.1.3), a proxy server sitting between a client and a server must maintain HTTP/1.1 persistent connections separately: one from the client to itself and one from itself to the server.
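To make the consequence concrete, here is a small Go sketch (the Service URL is hypothetical). With the default transport, every request rides the one persistent connection and therefore keeps hitting the same Pod; disabling keep-alives forces a new TCP connection per request, and each new connection gets a fresh DNAT decision from the Service's iptables rules, so requests can spread across the 3 Pods again, at the cost of a handshake per request. This is just one way to illustrate the trade-off, not the only mitigation the article discusses.

```go
package main

import (
	"fmt"
	"net/http"
	"time"
)

func main() {
	// Reuses one persistent connection: all requests land on the same Pod.
	keepAlive := &http.Client{Timeout: 5 * time.Second}

	// Opens a new connection per request: each request can land on a
	// different Pod, because every connection is DNATed independently.
	perRequest := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{DisableKeepAlives: true},
	}

	for name, c := range map[string]*http.Client{"keep-alive": keepAlive, "per-request": perRequest} {
		for i := 0; i < 3; i++ {
			resp, err := c.Get("http://my-service.default.svc.cluster.local/")
			if err != nil {
				fmt.Println(name, "request failed:", err)
				continue
			}
			resp.Body.Close()
			fmt.Println(name, "got", resp.Status)
		}
	}
}
```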

Connect to On Premises Service Fabric Cluster

I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When connecting to the cluster I have used the IP address of one of the nodes; doing that, I can connect via PowerShell using Connect-ServiceFabricCluster nodename:19000 and I can reach the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if the cluster were hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is for my on-premises cluster. I tried connecting to my sample cluster with Connect-ServiceFabricCluster sampleCluster.domain.local:19000, but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help; I'll paste the relevant contents here in case it becomes unavailable.
Reverse Proxy — When you provision a Service Fabric cluster, you have the option of installing the Reverse Proxy on each of the nodes in the cluster. It performs service resolution on the client's behalf and forwards the request to the correct node that contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer does not know which nodes contain the requested service, the client libraries would have to wrap requests in a retry loop to resolve service endpoints. Using the Reverse Proxy addresses this issue, since it runs on every node and knows exactly which nodes each service is running on. Clients outside the cluster can reach services running inside the cluster via the Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between those nodes. Health probes are necessary too, so that the load balancer knows which nodes are viable targets for traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080, with a TCP probe on port 19080 every 5 seconds and an unhealthy threshold of 2.
2. LBRule on TCP/19000, with a TCP probe on port 19000 every 5 seconds and an unhealthy threshold of 2.
To make this forward-facing, what you need to add is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path and checks for a 200 response.
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again by putting something like API Management in front to further reverse-proxy it over SSL. What a mess, but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.

HAProxy load balance

I am trying to use HAProxy to load-balance 2 virtual machines. To avoid confusion: yes, I have 2 virtual machines, and on both of them I have installed these components: HAProxy, Keepalived, and the web application. Furthermore, I have configured a floating IP.
The basic flow I want to achieve is: the master HAProxy takes all requests coming in on the floating IP and load-balances the traffic, while Keepalived checks whether the master server is online; if the master goes down, I want to direct traffic to the HAProxy on my second VM.
My question is: how do I direct traffic to the backup VM's HAProxy if the master fails?