There are two mod cluster load balancers running in my network and I want to exclude one from picking up my jboss application server nodes.
I want the nodes to be served exclusively by one of the balancers. How do I achieve this?
I solved this problem by changing the multicast ip:port used by the load balancer and the JBoss application servers.
The multicast address was left at its default on every instance, which is why both load balancers were picking up my nodes. By setting a distinct multicast ip:port combination on one of the load balancers and on the application servers that should register with it, I was able to restrict those application servers to that one load balancer.
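Concretely, the change amounts to picking a non-default advertise group on both sides. A minimal sketch, assuming a JBoss AS 7+/WildFly standalone-ha.xml and an Apache httpd mod_cluster front end (the address/port values are illustrative, not required values):

```xml
<!-- standalone-ha.xml, in the ha socket-binding-group:
     non-default multicast group for mod_cluster advertisements -->
<socket-binding name="modcluster" multicast-address="224.0.1.106" multicast-port="23365"/>
```

And on the httpd side, the balancer that should pick up these nodes advertises on the same group:

```apache
# mod_cluster manager virtual host
ServerAdvertise On
AdvertiseGroup 224.0.1.106:23365
```

The second balancer, left on the default 224.0.1.105:23364, no longer sees these nodes.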
Related
I want to understand my load balancing options in a scenario where I use a single HTTPS Load Balancer on GCP to serve some static content from a bucket and dynamic content from a React front end and Express backend running on Kubernetes.
Additional info:
I have a domain name registered outside of Google Domains
I want to serve all content over https
I'm not starting with anything big. Just getting started with a more or less hobby type project which will attract very little traffic in the near future.
I don't mind serving my React front end and Express backend from App Engine if that helps simplify this somehow. However, in that case I'd like to understand: if I still want something on Kubernetes, will I be able to communicate between App Engine and Kubernetes without hassle using internal IPs? And how would I load balance that traffic?
Any kind of network blueprint in the public domain that will guide me will be helpful.
I did quite a bit of reading on NodePort/LoadBalancer/Ingress, which has left me confused. From what I understand, a Service of type LoadBalancer doesn't handle HTTP(S) traffic; it operates at TCP L4, so it's probably not suitable for my use case.
Ingress provisions a dedicated load balancer of its own, on which I cannot put my own routes to a backend bucket etc., which means I may need a minimum of two load balancers, and two IPs?
NodePort exposes a port on all nodes, which means I need to handle load balancing myself, even if my HTTPS load balancer's routing can somehow help.
Any guidance/pointers will be much appreciated!
EDIT: Found some information on Network Endpoint Groups (NEGs) while researching. Looks promising; will investigate. Any thoughts about taking this route? https://cloud.google.com/kubernetes-engine/docs/how-to/standalone-neg
EDIT: Was able to get this working using a combination of NEGs and Nginx reverse proxies.
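For anyone landing here, the NEG route can be sketched as follows. A standalone NEG is requested with a Service annotation; the names and ports below are assumptions for illustration, not from the original setup:

```yaml
# Hypothetical Service exposing port 80 through a standalone NEG
apiVersion: v1
kind: Service
metadata:
  name: express-backend
  annotations:
    cloud.google.com/neg: '{"exposed_ports": {"80": {}}}'
spec:
  type: ClusterIP
  selector:
    app: express-backend
  ports:
  - port: 80
    targetPort: 3000   # port the Express container listens on
```

The NEG that GKE creates can then be attached as a backend of the same external HTTPS load balancer that fronts the backend bucket, so a single load balancer and a single IP serve both the static and the dynamic content.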
To address your concerns, please start with:
Choosing the right load balancer:
A network load balancer (Layer 4 load balancing, or a proxy for applications that rely on the TCP/SSL protocol) forwards load into your systems based on incoming IP protocol data, such as address, port, and protocol type.
The network load balancer is a pass-through load balancer, so your backends receive the original client request. The network load balancer doesn't do any Transport Layer Security (TLS) offloading or proxying. Traffic is directly routed to your VMs.
With a network load balancer, TLS is terminated on your backends, which can be located in regions appropriate to your needs.
An internal HTTP(S) load balancer is a proxy-based, regional Layer 7 load balancer that enables you to run and scale your services behind a private load balancing IP address that is accessible only in the load balancer's region in your VPC network.
HTTPS and SSL Proxy load balancers terminate TLS in locations that are distributed globally.
An HTTP(S) load balancer acts as a proxy between your clients and your application. If you want to accept HTTPS requests from your clients, you have the option to use Google-managed SSL certificates (Beta) or certificates that you manage yourself.
Technical Details
When you create an Ingress object, the GKE Ingress controller configures a GCP HTTP(S) load balancer according to the rules in the Ingress manifest and the associated Service manifests. The client sends a request to the HTTP(S) load balancer. The load balancer is an actual proxy; it chooses a node and forwards the request to that node's NodeIP:NodePort combination. The node uses its iptables NAT table to choose a Pod (kube-proxy manages the iptables rules on the node), so traffic is routed to a healthy Pod for the Service specified in your rules.
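As an illustration of the rules the controller reads, a minimal Ingress might look like this (the service name and path are hypothetical):

```yaml
# The GKE Ingress controller turns this into an HTTP(S) load balancer
# whose URL map mirrors these path rules.
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-ingress
spec:
  rules:
  - http:
      paths:
      - path: /api
        pathType: Prefix
        backend:
          service:
            name: express-backend   # a Service of type NodePort (or NEG-backed)
            port:
              number: 80
```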
Per buckets documentation:
An HTTP(S) load balancer can direct traffic from specified URLs to either a backend bucket or a backend service.
The bucket should be public when used with the load balancer; see Creating buckets.
During load balancer set-up you can choose a backend service and a backend bucket. You can find more information in the docs.
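A hedged sketch of that set-up with the gcloud CLI (all resource names, the bucket and the path are placeholders):

```shell
# Expose a Cloud Storage bucket as a backend bucket of the load balancer
gcloud compute backend-buckets create static-assets-bucket \
    --gcs-bucket-name=my-static-assets

# Route /static/* to the bucket, everything else to the backend service
gcloud compute url-maps add-path-matcher my-lb-url-map \
    --path-matcher-name=static-matcher \
    --default-service=my-backend-service \
    --backend-bucket-path-rules="/static/*=static-assets-bucket"
```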
Please also take a look at these two tutorials (here and here) on how to build an application using Cloud Storage.
Hope this helps.
Additional resources:
Load balancers, Controllers
I have an ASP.NET website that connects to a set of WCF services in a Service Fabric cluster behind an internal load balancer. The service connection strings in the website point to the address of the internal load balancer. There are three nodes in the cluster and three copies of the backend services.
When I manually restart one of the nodes, the website fails to load correctly because the load balancer seems to still forward requests to the service on the restarting node. Shouldn't the load balancer forward requests to the two other available services? Does anyone know what's going on here?
We have 2 Kubernetes clusters hosted in different data centers and we're deploying our applications to both. We have an external load balancer outside the clusters, but it only accepts static IPs. We don't have control over the clusters and we can't provision a static IP. How can we go about this?
We've also tried Kong as an API gateway. We were able to create an upstream with the load-balanced application endpoints as targets, assigning different weights, but this doesn't give us active/passive or active/failover behavior. Is there a way we can configure a Kong/Nginx upstream to achieve this?
Consider using HAProxy, where you can configure your passive cluster as a backup upstream to get an active/passive setup working. As mentioned in this nice guide about HAProxy:
backup meaning it won’t participate in the load balance unless both
the nodes above have failed their health check (more on that later).
This configuration is referred to as active-passive since the backup
node is just sitting there passively doing nothing. This enables you
to economize by having the same backup system for different
application servers.
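A minimal haproxy.cfg sketch of that idea (addresses, ports and the health-check path are placeholders):

```
frontend fe_main
    bind *:80
    default_backend be_clusters

backend be_clusters
    option httpchk GET /healthz
    # passive cluster only receives traffic when the active one fails its check
    server active_cluster  10.0.1.10:80 check
    server passive_cluster 10.0.2.10:80 check backup
```

The `backup` keyword on the second `server` line is what makes this active/passive rather than weighted load balancing.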
Hope it helps!
I have 2 VMs with Windows Server 2016 on Azure. I want to set up a load balancer in front of both VMs so that every request to the VMs comes through the load balancer, which distributes it to healthy backend instances.
My question is: what is the default traffic distribution behaviour of the Azure LB, and how can we distribute traffic round robin?
Please assist.
The default is round robin. The only quirk is that you must set up health probes for this to work; if a probe detects a failure, the LB won't distribute traffic to the failed host(s).
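For reference, a probe plus a load-balancing rule that references it can be created with the Azure CLI roughly like this (resource names are placeholders; check your CLI version for exact flag spellings):

```shell
az network lb probe create -g my-rg --lb-name my-lb -n http-probe \
    --protocol Http --port 80 --path /

az network lb rule create -g my-rg --lb-name my-lb -n http-rule \
    --protocol Tcp --frontend-port 80 --backend-port 80 \
    --frontend-ip-name my-frontend --backend-pool-name my-pool \
    --probe-name http-probe
```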
I need to cater for 20,000 connected users in a 4-node ejabberd cluster. How would you distribute incoming connections over multiple ejabberd nodes?
To load balance XMPP TCP/IP traffic, you simply need to put a TCP/IP load balancer in front of the nodes. From HAProxy to Amazon ELB or F5 BIG-IP, choose your favorite one.
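With HAProxy, for example, spreading client connections over the four ejabberd nodes is a few lines of TCP-mode configuration (addresses are placeholders; `leastconn` suits long-lived XMPP connections better than round robin):

```
listen xmpp
    bind *:5222
    mode tcp
    balance leastconn
    server ejabberd1 10.0.0.1:5222 check
    server ejabberd2 10.0.0.2:5222 check
    server ejabberd3 10.0.0.3:5222 check
    server ejabberd4 10.0.0.4:5222 check
```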
A way to load balance without introducing a SPOF is to use multiple SRV records.
If the clients you are providing the service to support it (i.e. they perform DNS queries for _xmpp-client._tcp.yourdomain), then you get load balancing (with "weights" inside the same priority group) and failover too (by assigning a lower priority to a failover group).
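To see the selection logic the SRV mechanism implies, here is a small sketch of RFC 2782 target selection; the record values in the comments and data are illustrative, not a recommendation:

```python
import random

# Zone-file form of the illustrative records (priority weight port target):
#   _xmpp-client._tcp.example.com. 3600 IN SRV 5  60 5222 xmpp1.example.com.
#   _xmpp-client._tcp.example.com. 3600 IN SRV 5  40 5222 xmpp2.example.com.
#   _xmpp-client._tcp.example.com. 3600 IN SRV 10  0 5222 backup.example.com.

def pick_srv_target(records):
    """Pick one SRV record per RFC 2782: lowest priority number first,
    then a weighted random choice within that priority group.
    `records` is a list of (priority, weight, port, target) tuples."""
    best = min(priority for priority, _, _, _ in records)
    group = [r for r in records if r[0] == best]
    total = sum(weight for _, weight, _, _ in group)
    if total == 0:
        return random.choice(group)
    point = random.uniform(0, total)
    running = 0
    for rec in group:
        running += rec[1]
        if point <= running:
            return rec
    return group[-1]
```

A client following this logic only ever contacts the priority-10 backup target when every priority-5 target has failed, which is exactly the failover behavior described above.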