Azure load balancer traffic distribution - server

I have 2 VMs with Windows Server 2016 on Azure. I want to set up a load balancer in front of both VMs so that every request to the VMs comes through the load balancer, and the load balancer distributes it to healthy backend instances.
My question is: what is the default behaviour of traffic distribution in Azure LB?
How can we distribute traffic in a round-robin fashion?
Please assist.

The default is effectively round robin: Azure LB uses a hash-based distribution that spreads new connections evenly across the healthy backend instances. The only quirk is that you need to set up health probes for this to work; if a probe detects a failure, the LB won't distribute traffic to the failed host(s).
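For example, with Azure PowerShell (Az module) the probe and rule could be set up roughly like this; it's only a sketch, and the resource group, names, and ports are placeholders, with the frontend IP configuration and backend pool assumed to have been created beforehand:

# Assumes $feIp and $bePool were created earlier with
# New-AzLoadBalancerFrontendIpConfig and New-AzLoadBalancerBackendAddressPoolConfig
$probe = New-AzLoadBalancerProbeConfig -Name 'HealthProbe' -Protocol Tcp -Port 80 `
    -IntervalInSeconds 5 -ProbeCount 2

$rule = New-AzLoadBalancerRuleConfig -Name 'HttpRule' -Protocol Tcp `
    -FrontendPort 80 -BackendPort 80 `
    -FrontendIpConfiguration $feIp -BackendAddressPool $bePool -Probe $probe

New-AzLoadBalancer -ResourceGroupName 'my-rg' -Name 'my-lb' -Location 'westeurope' `
    -FrontendIpConfiguration $feIp -BackendAddressPool $bePool `
    -Probe $probe -LoadBalancingRule $rule

Then add both VMs' NICs to the backend pool, and the LB will spread traffic across whichever of them the probe reports as healthy.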

Related

K3s highly available cluster on the cloud

I am trying to create a cluster using k3s with n nodes, where all the nodes are servers only and there are no agent/worker nodes (by default, a k3s server also works as an agent). I am trying to add a load balancer that can work for both the API server and the client side, meaning it can also serve port 443 (webserver). I need a hybrid solution for the load balancer, as I want to support both on-prem and cloud.
I was looking at MetalLB, but it doesn't support the cloud; here it is mentioned that MetalLB is not compatible with cloud providers. I am a bit confused about where to add the load balancer.
Below I am adding an image of the architecture I want to build; I'm not sure where I should put the load balancer.
Could someone please guide me in the right direction so I can complete the flow?
Thanks,

Service Fabric Load Balancer not forwarding traffic correctly

I have an ASP.NET website that connects to a set of WCF services in a Service Fabric cluster behind an internal load balancer. The service connection strings in the website point to the address of the internal load balancer. There are three nodes in the cluster and three copies of the backend services.
When I manually restart one of the nodes, I find that the website fails to load correctly because the load balancer still seems to be forwarding requests to the service on the restarting node. Shouldn't the load balancer forward requests to the two other available services? Does anyone know what's going on here?

How can I achieve an active/passive setup across multiple kubernetes clusters?

We have 2 Kubernetes clusters hosted in different data centers, and we're deploying the applications to both of these clusters. We have an external load balancer which is outside the clusters, but the load balancer only accepts static IPs. We don't have control over the clusters, and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream with the load-balanced application endpoints as targets and give them different weights, but this doesn't give us active/passive or active/failover. Is there a way we can configure a Kong/nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream; that gives you an active/passive setup. As mentioned in this nice guide about HAProxy:
backup meaning it won't participate in the load balance unless both the nodes above have failed their health check (more on that later). This configuration is referred to as active-passive since the backup node is just sitting there passively doing nothing. This enables you to economize by having the same backup system for different application servers.
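A minimal HAProxy backend sketch of that idea, with placeholder hostnames, port, and health-check path (TLS backends would additionally need check-ssl on the server lines):

backend k8s_clusters
    option httpchk GET /healthz
    # active cluster
    server dc1 cluster1.example.com:80 check
    # passive cluster: only used once dc1 fails its health check
    server dc2 cluster2.example.com:80 check backup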
Hope it helps!

Accessing API from outside the on-premises Service Fabric cluster

I am very new to Service Fabric. We are developing an API to run inside a Service Fabric cluster. In production we have a three-virtual-machine cluster. In DEV & UAT, we connect to the API directly by server name, as it is a single-machine deployment. I want to run the API on all 3 nodes and introduce an API gateway running on top. The gateway will do a bit of load balancing as well. Again, the gateway API will run on a single node, and from outside I don't know which node it is running on. How should I communicate with the gateway?
Thank you in advance.
Regards,
Zubi Rabbi
Introduce an external Load Balancer (like Azure Load Balancer) on top of your cluster, to receive and forward traffic to (healthy) cluster services.
I recommend running your gateway on all nodes, so it doesn't matter which node you talk to; this increases availability and performance.
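As a sketch, a stateless gateway service can be placed on every node by creating it with an instance count of -1; the application and service type names below are placeholders:

# Placeholder names; run from a PowerShell session connected to the cluster
New-ServiceFabricService -ApplicationName 'fabric:/MyApp' `
    -ServiceName 'fabric:/MyApp/Gateway' -ServiceTypeName 'GatewayType' `
    -Stateless -PartitionSchemeSingleton -InstanceCount -1

With an instance on every node, the external load balancer can probe and forward to any node and always reach the gateway.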

Connect to On Premises Service Fabric Cluster

I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When connecting to the cluster I have used the IP address of one of the nodes. Doing that, I can connect via PowerShell using Connect-ServiceFabricCluster nodename:19000 and I can connect to the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if I were hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is locally. I tried connecting to my sample cluster with Connect-ServiceFabricCluster sampleCluster.domain.local:19000, but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help; I'll paste the relevant contents in case it becomes unavailable.
Reverse Proxy: When you provision a Service Fabric cluster, you have the option of installing a Reverse Proxy on each of the nodes of the cluster. It performs the service resolution on the client's behalf and forwards the request to the correct node which contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer will not know which nodes contain the requested service, the client libraries would have to wrap their requests in a retry loop to resolve service endpoints. Using the Reverse Proxy addresses this issue, since it runs on each node and knows exactly which nodes the service is running on. Clients outside the cluster can reach the services running inside the cluster via the Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
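To illustrate the addressing: with the reverse proxy enabled (it listens on port 19081 by default), a client can reach a service by name through any node. The node, application, and service names here are placeholders:

# The reverse proxy resolves which node actually hosts the service
Invoke-WebRequest -Uri 'http://node1.domain.local:19081/MyApp/MyApiService/api/values'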
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between those nodes. Health probes are necessary too, so that the load balancer knows which nodes are viable options for sending traffic to.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds with a 2 count error threshold.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds with a 2 count error threshold.
What you need to add to make this forward-facing is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path and checks for a 200 response.
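With Azure PowerShell, adding that probe and rule to an existing load balancer could look roughly like this; the resource group, load balancer name, backend port, and probe path are placeholders:

$lb = Get-AzLoadBalancer -ResourceGroupName 'my-rg' -Name 'my-lb'

# HTTP probe that expects a 200 from the service's health path
Add-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name 'AppProbe' -Protocol Http -Port 8080 `
    -RequestPath '/health' -IntervalInSeconds 5 -ProbeCount 2

# Forward frontend port 80 to the service's HTTP port on the backend
Add-AzLoadBalancerRuleConfig -LoadBalancer $lb -Name 'AppRule' -Protocol Tcp `
    -FrontendPort 80 -BackendPort 8080 `
    -FrontendIpConfiguration $lb.FrontendIpConfigurations[0] `
    -BackendAddressPool $lb.BackendAddressPools[0] `
    -Probe (Get-AzLoadBalancerProbeConfig -LoadBalancer $lb -Name 'AppProbe')

Set-AzLoadBalancer -LoadBalancer $lb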
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again to using something like API Management to further reverse proxy it to SSL. What a mess but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.