Service Fabric Load Balancer not forwarding traffic correctly - azure-service-fabric

I have an ASP.NET website that connects to a set of WCF services in a Service Fabric cluster behind an internal load balancer. The service connection strings in the website point to the address of the internal load balancer. There are three nodes in the cluster and three copies of the backend services.
When I manually restart one of the nodes, the website fails to load correctly because the load balancer still seems to forward requests to the service on the restarting node. Shouldn't the load balancer forward requests to the two other available services? Does anyone know what's going on here?

Related

ECS with Route 53 Service discovery

According to AWS documentation:
You can configure Service Discovery for an ECS Service that is behind
a Load Balancer, but Service Discovery traffic is always routed to the
Task and not the Load Balancer.
If this is the case, how does load balancing happen here?
Also, without the load balancer, how does service discovery work? Will traffic be routed to random container instances?
TL;DR Yes, the traffic will be sent to random instances.
When you use ECS Service Discovery, you have two options for discovering your services. One is via Route 53 DNS, which in the case of ECS Service Discovery uses a Multivalue Routing Policy, so your client application receives up to eight healthy endpoints, selected at random.
The other option is the Cloud Map DiscoverInstances API, which returns up to 100 endpoints for a given service name, selected at random.
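For illustration, here is a minimal boto3 sketch of that second option, picking one healthy endpoint at random on the client side; the namespace and service names are hypothetical:

# Minimal sketch: resolve ECS service endpoints through the Cloud Map
# DiscoverInstances API, then pick one at random on the client side.
import random

import boto3

client = boto3.client("servicediscovery")

response = client.discover_instances(
    NamespaceName="internal.example.local",  # hypothetical namespace
    ServiceName="orders",                    # hypothetical service
    HealthStatus="HEALTHY",
    MaxResults=10,
)

# Each instance carries its registered attributes, including its IP and port.
instance = random.choice(response["Instances"])
attrs = instance["Attributes"]
endpoint = f'{attrs["AWS_INSTANCE_IPV4"]}:{attrs.get("AWS_INSTANCE_PORT", "80")}'
print(f"Sending request to {endpoint}")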

Amazon ECS service access and load balancing in microservice architecture

Can someone explain the load balancing mechanism in AWS ECS to me? I understand how inter-service communication is handled within a Kubernetes cluster: load balancing is applied automatically when a defined internal Service is accessed, which means Container/Pod scalability comes built in:
when Pod-1A from within Service-A calls a Pod from within a different Service-B (service-to-service communication), the call is automatically load balanced across the Pods of Service-B.
So with the service registry in Kubernetes, we simply define Services, and communication is automatically load balanced across the available Pods within those Services.
Assuming that Pods correspond to Tasks and Services correspond to Services in AWS ECS, how is this load balancing mechanism handled within ECS? Do we really have to attach an Elastic Load Balancer at the task/pod level manually, unlike in Kubernetes (that is, define a load balancer for every service to make that service and its tasks with their containers scalable)?
Edit:
What is the reason, in AWS ECS, to define a service that instantiates
multiple replicas of a Task when no load balancer has been defined?
Will traffic be routed to the same Task replica (container) all the
time (i.e., no scaling at all)?
Please note, this is not about access from external IP addresses, where an ingress controller is needed. I am talking about microservices where each service exposes its own HTTP API to communicate with other services within the cluster (an internal microservice application); typically an API gateway (the ingress controller) handles external traffic.
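The previous answer already hints at how this works in ECS without a load balancer: service discovery registers each task's IP in DNS, and the caller spreads load only through the records it happens to pick. A minimal Python sketch of that client-side behavior, with a hypothetical Cloud Map DNS name and port:

# Resolve a service-discovery DNS name to the IPs of its task replicas
# (a multivalue policy returns up to eight healthy records), then pick one.
import random
import socket

def resolve_task_endpoints(name, port):
    """Return every (ip, port) pair the service name currently resolves to."""
    infos = socket.getaddrinfo(name, port, proto=socket.IPPROTO_TCP)
    return sorted({(info[4][0], info[4][1]) for info in infos})

endpoints = resolve_task_endpoints("orders.internal.example.local", 8080)
ip, port = random.choice(endpoints)  # crude client-side "load balancing"
print(f"Calling task replica at {ip}:{port}")

So multiple task replicas are all reachable, but without an ELB the spreading of traffic is only as good as the caller's DNS handling.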

Can I use a GCP HTTPS Load Balancer to route between a bucket backend and a Kubernetes service?

I want to understand my load balancing options in a scenario where I want to use a single HTTPS Load Balancer on GCP to serve some static content from a bucket and dynamic content using a combination of a React front end and an Express backend on Kubernetes.
Additional info:
I have a domain name registered outside of Google Domains
I want to serve all content over HTTPS
I'm not starting with anything big, just a more or less hobby-type project that will attract very little traffic in the near future.
I don't mind serving my React front end and Express backend from App Engine if that helps simplify things. However, in that case I would like to understand: if I still want something on Kubernetes, will I be able to communicate between App Engine and Kubernetes without hassle using internal IPs? And how would I load balance that traffic?
Any kind of network blueprint in the public domain that will guide me will be helpful.
I did quite a bit of reading on NodePort/LoadBalancer/Ingress, which has left me confused. From what I understand, a Service of type LoadBalancer operates at TCP L4 rather than with HTTP(S) traffic, so it is probably not suitable for my use case.
Ingress provisions a dedicated load balancer of its own, on which I cannot define my own routes to a backend bucket etc., which means I may need a minimum of two load balancers (and two IPs)?
NodePort exposes a port on all nodes, which means I would need to handle load balancing myself even if my HTTPS load balancer routing can somehow help.
Any guidance/pointers will be much appreciated!
EDIT: Found some information on Network Endpoint Groups (NEGs) while researching. Looks promising; will investigate. Any thoughts about taking this route? https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg
EDIT: Was able to get this working using a combination of NEGs and Nginx reverse proxies.
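For anyone following the same path, here is a rough sketch of the standalone-NEG half of that setup, expressed with the official Kubernetes Python client; the Service name, ports, and label selector are hypothetical, not the asker's actual configuration:

# Annotate a Service so GKE creates a standalone Network Endpoint Group that
# an HTTPS load balancer backend service can point at directly.
from kubernetes import client, config

config.load_kube_config()

service = client.V1Service(
    metadata=client.V1ObjectMeta(
        name="web-backend",  # hypothetical
        annotations={
            # Ask GKE to create a standalone NEG for port 80.
            "cloud.google.com/neg": '{"exposed_ports": {"80": {}}}'
        },
    ),
    spec=client.V1ServiceSpec(
        selector={"app": "web"},
        ports=[client.V1ServicePort(port=80, target_port=8080)],
        type="ClusterIP",
    ),
)

client.CoreV1Api().create_namespaced_service(namespace="default", body=service)
# The resulting NEG can then be attached as a backend of the HTTPS load
# balancer, alongside a backend bucket for the static content.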
To resolve your concerns, start by choosing the right load balancer:
A network load balancer (Layer 4, for applications that rely on the TCP/SSL protocol) forwards load to your systems based on incoming IP protocol data such as address, port, and protocol type.
The network load balancer is a pass-through load balancer, so your backends receive the original client request. It doesn't do any Transport Layer Security (TLS) offloading or proxying; traffic is routed directly to your VMs. This also means you can terminate TLS on your own backends, in whichever regions suit your needs.
An HTTP(S) load balancer is a proxy-based Layer 7 load balancer that sits between your clients and your application. The internal variant is regional and puts your services behind a private load balancing IP address that is accessible only in the load balancer's region of your VPC network; the external variant, which is what a public website needs, is global, and HTTPS and SSL proxy load balancers terminate TLS in locations distributed around the globe.
If you want to accept HTTPS requests from your clients, you have the option to use Google-managed SSL certificates or certificates that you manage yourself.
Technical Details
When you create an Ingress object, the GKE Ingress controller configures a GCP HTTP(S) load balancer according to the rules in the Ingress manifest and the associated Service manifests. The client sends a request to the HTTP(S) load balancer. The load balancer is an actual proxy: it chooses a node and forwards the request to that node's NodeIP:NodePort combination. The node uses its iptables NAT table to choose a Pod (kube-proxy manages the iptables rules on the node), so traffic ends up at a healthy Pod for the Service specified in your rules.
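As a concrete illustration of that flow, here is a minimal Ingress built with the Kubernetes Python client; the host, Service name, and port are hypothetical:

# Create an Ingress object; the GKE Ingress controller turns this into an
# HTTP(S) load balancer whose routing rules mirror the ones below.
from kubernetes import client, config

config.load_kube_config()

ingress = client.V1Ingress(
    metadata=client.V1ObjectMeta(name="web-ingress"),
    spec=client.V1IngressSpec(
        rules=[
            client.V1IngressRule(
                host="www.example.com",  # hypothetical
                http=client.V1HTTPIngressRuleValue(
                    paths=[
                        client.V1HTTPIngressPath(
                            path="/",
                            path_type="Prefix",
                            backend=client.V1IngressBackend(
                                service=client.V1IngressServiceBackend(
                                    name="web-backend",
                                    port=client.V1ServiceBackendPort(number=80),
                                )
                            ),
                        )
                    ]
                ),
            )
        ]
    ),
)

client.NetworkingV1Api().create_namespaced_ingress(namespace="default", body=ingress)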
Per buckets documentation:
An HTTP(S) load balancer can direct traffic from specified URLs to either a backend bucket or a backend service.
The bucket should be public when used behind the load balancer (see Creating buckets).
During load balancer set-up you can choose both a backend service and a backend bucket. You can find more information in the docs.
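As a sketch of that split, using the google-cloud-compute Python client; the project, resource names, and the already-existing backend service are all hypothetical:

# Route /static/* to a backend bucket and everything else to the backend
# service in front of the cluster, via a backend bucket plus a URL map.
from google.cloud import compute_v1

project = "my-project"  # hypothetical
backend_service = f"projects/{project}/global/backendServices/web-backend"

# Backend bucket serving the static content from a publicly readable bucket.
bucket = compute_v1.BackendBucket(name="static-assets", bucket_name="my-static-bucket")
compute_v1.BackendBucketsClient().insert(project=project, backend_bucket_resource=bucket)

# URL map: dynamic traffic to the backend service, /static/* to the bucket.
url_map = compute_v1.UrlMap(
    name="web-map",
    default_service=backend_service,
    host_rules=[compute_v1.HostRule(hosts=["www.example.com"], path_matcher="main")],
    path_matchers=[
        compute_v1.PathMatcher(
            name="main",
            default_service=backend_service,
            path_rules=[
                compute_v1.PathRule(
                    paths=["/static/*"],
                    service=f"projects/{project}/global/backendBuckets/static-assets",
                )
            ],
        )
    ],
)
compute_v1.UrlMapsClient().insert(project=project, url_map_resource=url_map)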
Please also take a look at these two tutorials (here and here) on how to build an application using Cloud Storage.
Hope this helps.
Additional resources:
Load balancers, Controllers

Connect to On Premises Service Fabric Cluster

I've followed the steps from Microsoft to create a multi-node on-premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When connecting to the cluster I have used the IP address of one of the nodes: doing that, I can connect via PowerShell using Connect-ServiceFabricCluster nodename:19000, and I can reach the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if the cluster were hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is on my local network. I tried connecting to my sample cluster with Connect-ServiceFabricCluster sampleCluster.domain.local:19000, but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported on-premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help, I'll paste relevant contents in the event of it becoming unavailable.
Reverse Proxy — When you provision a Service Fabric cluster, you have the option of installing the Reverse Proxy on each of the nodes in the cluster. It performs service resolution on the client's behalf and forwards the request to the correct node that contains the application. In the majority of cases, services running on Service Fabric run on only a subset of the nodes. Since the load balancer does not know which nodes contain the requested service, the client libraries would have to wrap requests in a retry loop to resolve service endpoints. Using the Reverse Proxy addresses this issue, since it runs on every node and knows exactly which nodes the service is running on. Clients outside the cluster can reach services running inside the cluster via the Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
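To make the quoted point concrete: with the reverse proxy enabled, any node answers on the reverse proxy port (19081 by default) and resolves the service for you, so the client needs no retry loop. A minimal sketch, with a hypothetical cluster address and a hypothetical MyApp/MyService endpoint:

# Call a service through the Service Fabric reverse proxy. URI scheme:
# http://<cluster or node address>:<reverse proxy port>/<App>/<Service>/<path>
import requests

CLUSTER = "samplecluster.domain.local"  # hypothetical; any node address works too
REVERSE_PROXY_PORT = 19081              # the default reverse proxy port

url = f"http://{CLUSTER}:{REVERSE_PROXY_PORT}/MyApp/MyService/api/values"
response = requests.get(url, timeout=10)
response.raise_for_status()
print(response.json())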
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between the nodes running it. Health probes are necessary too, so that the load balancer knows which nodes are viable targets for traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds with a 2 count error threshold.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds with a 2 count error threshold.
What you need to add to make this forward-facing is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path and tests for a 200 response.
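Sketched with the azure-mgmt-network models; the resource IDs, rule/probe names, and the service's HTTP port 8080 are hypothetical:

# An HTTP probe plus a rule forwarding public port 80 to the service port.
from azure.mgmt.network.models import LoadBalancingRule, Probe, SubResource

LB_ID = ("/subscriptions/<sub>/resourceGroups/<rg>/providers/"
         "Microsoft.Network/loadBalancers/sf-lb")  # hypothetical

# Probe that expects a 200 from the service's health path on each node.
web_probe = Probe(
    name="AppPortProbe",
    protocol="Http",
    port=8080,                 # the service's HTTP port on each node
    request_path="/health",
    interval_in_seconds=5,
    number_of_probes=2,
)

# Forward port 80 to the service port, gated by the probe above.
web_rule = LoadBalancingRule(
    name="AppPortLBRule",
    protocol="Tcp",
    frontend_port=80,
    backend_port=8080,
    frontend_ip_configuration=SubResource(
        id=f"{LB_ID}/frontendIPConfigurations/LoadBalancerIPConfig"),
    backend_address_pool=SubResource(
        id=f"{LB_ID}/backendAddressPools/LoadBalancerBEAddressPool"),
    probe=SubResource(id=f"{LB_ID}/probes/AppPortProbe"),
)
# Append both to the load balancer resource and PUT it back with
# NetworkManagementClient.load_balancers.begin_create_or_update(...).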
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again by putting something like API Management in front to further reverse proxy it over SSL. What a mess, but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.

Not able to call a web service hosted in Service Fabric

I've published an OWIN-hosted web service to my remote cluster. I'm using a custom port, 4444, created during cluster creation, and I can see the AppPort rule for 4444. I'm also able to remote into one of the VMs and invoke the service locally. However, I'm still not able to call it remotely; it hangs for a while and doesn't return anything.
Start with this guide and make sure you have the Azure Load Balancer configured properly: https://azure.microsoft.com/en-us/documentation/articles/service-fabric-connect-and-communicate-with-services/#service-fabric-in-azure
The trick is to make sure that when the load balancer sends traffic on a particular port to a node in the cluster, there is a service instance on that node listening on that port. By default, the load balancer simply sends traffic to all nodes, so you either have to make sure you have a service instance listening on each node, or have a load balancer probe actively checking which nodes do have a service instance listening on that port.
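A quick way to test that premise from any machine that can reach the nodes, with hypothetical node IPs and the question's port 4444:

# Check which nodes actually have a listener on the service port.
import socket

NODES = ["10.0.0.4", "10.0.0.5", "10.0.0.6"]  # hypothetical node IPs
PORT = 4444

for node in NODES:
    try:
        with socket.create_connection((node, PORT), timeout=3):
            print(f"{node}:{PORT} is listening")
    except OSError as exc:
        print(f"{node}:{PORT} unreachable: {exc}")

# If some nodes refuse the connection, either run the service on every node
# (InstanceCount = -1) or give the load balancer an HTTP probe so it skips
# the nodes without a listener.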