I have no prior experience with Wildfly or JBOSS before implementing this scenario.
I have WildFly 10 running in domain mode.
Two hosts, each running one WildFly server, connected to a single datasource.
Server1 -Master- Domain Controller
Server2 - Slave
The datasource is configured under the "DEFAULT" profile.
The deployment is under the "FULL" profile.
I now need to add load balancing into the equation, but I only want to use WildFly. I have read the following article on setting up a static load balancer as a reverse proxy: https://docs.jboss.org/author/display/WFLY10/Using+Wildfly+as+a+Load+Balancer
I have a 3rd Server that I want to configure as the Load Balancer.
Do I configure this as a "SLAVE" in the domain but add it to the LOAD-BALANCER profile on the Domain Controller? When I do this, it cannot find or connect to the Master (Server1)!
Please can someone tell me the basic setup I need to have on this server to be in a position to follow the steps in the above article and configure it as a reverse proxy/static load balancer?
Many Thanks
If you wish to use WildFly as a load balancer with a mod_cluster/static load-balancing configuration, then you don't need to include the server that will act as the load balancer in the cluster/domain. You can start the load-balancer server separately. The WildFly 10 distribution already ships an example file, standalone-load-balancer.xml (inside docs\examples\configs), which can be used directly.
This file contains the minimum configuration required to use WildFly 10.1 as a load balancer.
Once the server is up with that file, it automatically discovers the worker nodes that are participating in the cluster (provided the multicast address and ports are working and reachable on the network).
Also note that all worker nodes should have different node names; if some nodes run on the same machine and are not started with distinct node names, they may get rejected by the load-balancer server.
Below is the command to start a WildFly server with a specific node name:
standalone.bat -Djboss.node.name=<specify the node name here>
The basic setup will be like below:
[1] Start two separate nodes (WildFly instances) with the standalone-ha.xml/standalone-full-ha.xml configuration and some web app deployed (e.g. cluster-demo.war). Please note that the deployment descriptor of the web application must have the <distributable/> tag inside it, otherwise the cluster will not form after the two worker nodes start (see the web.xml sketch after step [2]).
[2] After the 1st step succeeds, you should see the message "received new cluster view" in the console log of the worker nodes.
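For reference, a minimal web.xml sketch for such a worker deployment (the servlet version and descriptor boilerplate are just an assumption; the important part is the <distributable/> element, which tells WildFly to replicate the app's sessions across the cluster):
<!-- web.xml inside cluster-demo.war: mark the application as distributable -->
<web-app xmlns="http://xmlns.jcp.org/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://xmlns.jcp.org/xml/ns/javaee http://xmlns.jcp.org/xml/ns/javaee/web-app_3_1.xsd"
         version="3.1">
    <distributable/>
</web-app>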
[3] LOAD BALANCER CONFIGURATION
[3.1] DYNAMIC LOAD BALANCER (using the mod_cluster configuration)
Start the third WildFly instance with the standalone-load-balancer.xml configuration.
If the load balancer detects all the worker nodes, you will see the log message "registering node <nodeName>" in the console log of the load-balancer server.
[3.2] STATIC LOAD BALANCER CONFIGURATION
CLI:
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler1:add()
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host111/:add(host=localhost, port=9080)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=remote-host222/:add(host=localhost, port=10080)
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler1/host=host11:add(outbound-socket-binding=remote-host111, scheme=http, instance-id=cluster-demoroute, path=/cluster-demo)
/subsystem=undertow/configuration=handler/reverse-proxy=my-handler1/host=host22:add(outbound-socket-binding=remote-host222, scheme=http, instance-id=cluster-demoroute, path=/cluster-demo)
/subsystem=undertow/server=default-server/host=default-host/location=/cluster-demo:add(handler=my-handler1)
Configuration file replacement:
[A] Add the following reverse-proxy tag inside the undertow subsystem's handlers tag:
<reverse-proxy name="my-handler1">
<host name="host11" outbound-socket-binding="remote-host111" path="/cluster-demo" instance-id="cluster-demoroute"/>
<host name="host22" outbound-socket-binding="remote-host222" path="/cluster-demo" instance-id="cluster-demoroute"/>
</reverse-proxy>
[B] Add a location tag inside the undertow subsystem, under server=default-server/host=default-host:
<location name="/cluster-demo" handler="my-handler1"/>
[C] Add the following inside the socket-binding-group tag:
<outbound-socket-binding name="remote-host111">
<remote-destination host="localhost" port="9080"/>
</outbound-socket-binding>
<outbound-socket-binding name="remote-host222">
<remote-destination host="localhost" port="10080"/>
</outbound-socket-binding>
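Once the handler, hosts and outbound socket bindings above are in place, a quick sanity check against the balancer is possible (assuming the balancer listens on the default HTTP port 8080 and the two workers from the example run on localhost:9080 and localhost:10080):
# requests go to the balancer and are proxied to the two backends behind /cluster-demo
curl -v http://localhost:8080/cluster-demo/
curl -v http://localhost:8080/cluster-demo/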
Please confirm whether these are true, or point to the official AWS documentation that describes how to use dynamic port mapping with an NLB and run multiple copies of the same task on an ECS EC2 instance. I am not using Fargate.
ECS+NLB does NOT support dynamic port mapping, hence
ECS+NLB can only allow 1 task (docker container) per EC2 instance in an ECS service
This is because:
The AWS ECS Developer Guide - Creating a Load Balancer only mentions that an ALB can use dynamic ports, and does not mention the NLB.
Application Load Balancers offer several features that make them attractive for use with Amazon ECS services:
* Application Load Balancers allow containers to use dynamic host port mapping (so that multiple tasks from the same service are allowed per container instance).
The ECS task creation page clearly states that dynamic ports are for the ALB.
Network Load Balancer for inter-service communication quotes a response from AWS support:
"However, I would like to point out that there is currently an ongoing issue with the NLB functionality with ECS, mostly seen with dynamic port mapping where the container is not able to stabilize due to health check errors, I believe the error you're seeing is related to that issue. I can only recommend that you use the ALB for now, as the NLB is still quite new so it's not fully compatible with ECS yet."
Updates
I found a document stating that the NLB supports dynamic ports. However, if I switch the ALB to an NLB, the ECS service does not work. When I log into an EC2 instance, an ECS agent is running but no Docker container is running.
If someone has managed to make ECS (EC2 launch type) + NLB work, please provide step-by-step instructions on how it was done.
Amazon ECS Developer Guide - Service Load Balancing - Load Balancer Types - NLB
Network Load Balancers support dynamic host port mapping. For example, if your task's container definition specifies port 80 for an NGINX container port, and port 0 for the host port, then the host port is dynamically chosen from the ephemeral port range of the container instance (such as 32768 to 61000 on the latest Amazon ECS-optimized AMI). When the task is launched, the NGINX container is registered with the Network Load Balancer as an instance ID and port combination, and traffic is distributed to the instance ID and port corresponding to that container. This dynamic mapping allows you to have multiple tasks from a single service on the same container instance.
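For what it's worth, here is a rough sketch of what that paragraph describes (names such as my-task, my-cluster and the target group ARN are placeholders, and this is not a verified working setup): the task definition uses bridge networking with host port 0, and the service is attached to an instance-type NLB target group.
# fragment of the container definition in my-task.json (hostPort 0 => dynamic host port):
#   "networkMode": "bridge",
#   "portMappings": [ { "containerPort": 80, "hostPort": 0, "protocol": "tcp" } ]
aws ecs register-task-definition --cli-input-json file://my-task.json
aws ecs create-service --cluster my-cluster --service-name my-service \
    --task-definition my-task --desired-count 4 \
    --load-balancers targetGroupArn=<nlb-target-group-arn>,containerName=nginx,containerPort=80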
I've followed the steps from Microsoft to create a Multi-Node On-Premises Service Fabric cluster. I've deployed a stateless app to the cluster and it seems to be working fine. When I have been connecting to the cluster I have used the IP Address of one of the nodes. Doing that, I can connect via Powershell using Connect-ServiceFabricCluster nodename:19000 and I can connect to the Service Fabric Explorer website (http://nodename:19080/explorer/index.html).
The examples online suggest that if I hosted in Azure I could connect to http://mycluster.eastus.cloudapp.azure.com:19000 and it would resolve; however, I can't work out what the equivalent is locally. I tried connecting to my sample cluster: Connect-ServiceFabricCluster sampleCluster.domain.local:19000 but that returns:
WARNING: Failed to contact Naming Service. Attempting to contact Failover Manager Service...
WARNING: Failed to contact Failover Manager Service, Attempting to contact FMM...
False
WARNING: No such host is known
Connect-ServiceFabricCluster : No cluster endpoint is reachable, please check if there is connectivity/firewall/DNS issue.
Am I missing something in my setup? Should there be a central DNS entry somewhere that allows me to connect to the cluster? Or am I trying to do something that isn't supported On-Premises?
Yup, you're missing a load balancer.
This is the best resource I could find to help; I'll paste the relevant contents here in case it becomes unavailable.
Reverse Proxy — When you provision a Service Fabric cluster, you have an option of installing Reverse Proxy on each of the nodes on the cluster. It performs the service resolution on the client’s behalf and forwards the request to the correct node which contains the application. In majority of the cases, services running on the Service Fabric run only on the subset of the nodes. Since the load balancer will not know which nodes contain the requested service, the client libraries will have to wrap the requests in a retry-loop to resolve service endpoints. Using Reverse Proxy will address the issue since it runs on each node and will know exactly on what nodes is the service running on. Clients outside the cluster can reach the services running inside the cluster via Reverse Proxy without any additional configuration.
Source: Azure Service Fabric is amazing
I have an Azure Service Fabric resource running, but the same rules apply. As the article states, you'll need a reverse proxy/load balancer not only to resolve which nodes are running the API, but also to balance the load between the nodes running it. Health probes are necessary too, so that the load balancer knows which nodes are viable targets for traffic.
As an example, Azure creates 2 rules off the bat:
1. LBHttpRule on TCP/19080 with a TCP probe on port 19080 every 5 seconds and an error threshold of 2.
2. LBRule on TCP/19000 with a TCP probe on port 19000 every 5 seconds and an error threshold of 2.
What you need to add to make this forward-facing is a rule that forwards port 80 to your service's HTTP port. The health probe can then be an HTTP probe that hits a path and checks for a 200 response.
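In Azure, for example, adding such a rule and probe might look roughly like this (a sketch only; the resource group, load balancer, frontend and backend pool names are assumptions based on a typical Service Fabric template, and port 8080 stands in for your service's HTTP port):
# HTTP probe that expects a 200 from the service's health path on each node
az network lb probe create -g my-rg --lb-name my-sf-lb --name AppHttpProbe \
    --protocol Http --port 8080 --path /api/health
# rule forwarding public port 80 to the service port, gated by the probe above
az network lb rule create -g my-rg --lb-name my-sf-lb --name AppHttpRule \
    --protocol Tcp --frontend-port 80 --backend-port 8080 \
    --frontend-ip-name LoadBalancerIPConfig --backend-pool-name LoadBalancerBEAddressPool \
    --probe-name AppHttpProbe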
Once you get into the cluster, you can resolve the services normally and SF will take care of availability.
In Azure-land, this is abstracted again by using something like API Management to further reverse proxy it over SSL. What a mess, but it works.
Once your load balancer is set up, you'll have a single IP to hit for management, publishing, and regular traffic.
I am trying to configure a wildfly server with a load balancer for learning purposes. Here's what I got:
Three VMs, only accessible by their IPs.
One is 152.238.224.58 - my load balancer.
Another is 152.238.224.59 - my first backend server.
The last one is 152.238.224.60 - my second backend server.
I find the wildfly documentation to be rather poor, but after watching Stuart Douglas's explanation on how the load balancer works, I currently have my first VM running a cluster of servers. Load balancing works, but everything is on the same VM (the first one). What I'd rather have is the load balancer acting as a proxy for the two backend servers.
I've tried the method described on the Wildfly documentation but didn't manage to make it work.
What would I need to do to have the first VM load-balance across the two other VMs? To go even further, how difficult would it be to have the first VM act as a load balancer between VM-2 and VM-3, where VM-2 and VM-3 are themselves clusters (would they then have their own load balancers?)?
Thanks a lot for any indication.
From WildFly version 10.1 there is a load-balancer profile as part of the WildFly installation. Just use it. I'm providing sample steps here (based on my demo scripts for MS Azure).
Load balancer
Use the standalone-load-balancer.xml profile for the load balancer. WildFly 10.1 has the profile within the examples. WildFly 11 has it as a standard profile in the configuration directory.
WILDFLY_HOME=/path/to/wildfly
# MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
MY_IP=152.238.224.58
# Skip following command in WildFly 11
cp $WILDFLY_HOME/docs/examples/configs/standalone-load-balancer.xml \
$WILDFLY_HOME/standalone/configuration/
# run the load balancer profile
$WILDFLY_HOME/bin/standalone.sh -b $MY_IP -bprivate $MY_IP -c standalone-load-balancer.xml
This script uses the public network for communication between the worker nodes and the load balancer. If you want to use a private network (highly recommended), then set the correct IP address of the balancer on the private interface (-bprivate).
Worker nodes
Run the server with the HA (or Full HA) profile, which includes the mod_cluster component. If UDP multicast is working in your environment, the workers should work out of the box without any change. If that's not the case, then configure the IP address of the load balancer statically.
WILDFLY_HOME=/path/to/wildfly
MY_IP=$(ip route get 8.8.8.8 | awk '{print $NF; exit}')
# Configure static load balancer IP address.
# This is necessary when UDP multicast doesn't work in your environment.
LOAD_BALANCER_IP=152.238.224.58
$WILDFLY_HOME/bin/jboss-cli.sh <<EOT
embed-server -c=standalone-ha.xml
/subsystem=modcluster/mod-cluster-config=configuration:write-attribute(name=advertise,value=false)
/socket-binding-group=standard-sockets/remote-destination-outbound-socket-binding=proxy1:add(host=$LOAD_BALANCER_IP,port=8090)
/subsystem=modcluster/mod-cluster-config=configuration:list-add(name=proxies,value=proxy1)
EOT
# start the worker node with the HA profile
$WILDFLY_HOME/bin/standalone.sh -c standalone-ha.xml -b $MY_IP -bprivate $MY_IP
Again, to make it safe, you should configure MY_IP as an address from the private network.
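To check that balancing works end to end, you can hit the balancer a few times and see which worker answers (a sketch assuming a distributable web app, e.g. the cluster-demo.war mentioned in another answer, is deployed on both workers and the balancer listens on the default HTTP port 8080):
# a few requests through the balancer; they should be served by the workers on .59 and .60
for i in 1 2 3 4; do curl -s http://152.238.224.58:8080/cluster-demo/; done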
There are two mod_cluster load balancers running in my network, and I want to exclude one of them from picking up my JBoss application server nodes.
I want the nodes to be served exclusively by one of the balancers. How do I achieve this?
I solved this problem by changing the multicast IP:port on the load balancer and the JBoss application servers.
The multicast address was set to the default for all instances, which is why both load balancers were picking up my nodes. By setting the multicast address to a specific IP:port combination on one of the load balancers and on the application servers, I was able to restrict the application servers to that one load balancer.
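On the worker side this boils down to changing the mod_cluster advertise socket binding. A minimal sketch for a WildFly/EAP node running standalone-ha.xml (the address and port values here are only examples; the balancer you want to keep must advertise on the same group, e.g. via AdvertiseGroup in an httpd-based mod_cluster setup):
# move the modcluster socket binding to a non-default multicast group, then reload
$JBOSS_HOME/bin/jboss-cli.sh --connect <<EOT
/socket-binding-group=standard-sockets/socket-binding=modcluster:write-attribute(name=multicast-address,value=224.0.5.10)
/socket-binding-group=standard-sockets/socket-binding=modcluster:write-attribute(name=multicast-port,value=23365)
:reload
EOT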
I am new to JBoss. Basically I have managed to cluster 2 nodes with:
Node 1: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.51.28 -Djboss.messaging.ServerPeerID=1
Node 2: run.bat -c all -g DefaultPartition -u 230.0.0.4 -b 10.67.50.21 -Djboss.messaging.ServerPeerID=2
I know that if I configure an Apache load balancer (mod_jk) to sit in front of the cluster, the client simply punches in the IP of the Apache server, and Apache will redirect the traffic to the nodes.
But I do not want to have an Apache server in front of the cluster. So how does my client access the cluster? Do I need to configure something in JBoss, or is it a MUST to have a load balancer for the client to access the cluster?
MANY thanks in advance....
Apache is not strictly needed to perform failover, but you will need some infrastructure-level component to redirect requests to the other server when the first one is down.
To achieve failover with JBoss, the default is to use several JBoss nodes (in cluster mode, to replicate session data) with an HTTP-level network infrastructure in front that routes each request to the correct JBoss instance. Several routing strategies can be used, e.g. load balancing sessions across the available nodes (the default used by most Java EE systems), or one node taking all the load with an automatic IP switch when the environment detects that the node is down.
The first one is provided by mod_jk and is probably the simpler option at a reasonable price (a minimal sketch follows below).
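For completeness, a minimal mod_jk workers.properties sketch for the two nodes above (the AJP port 8009 is the JBoss default; this is only relevant if you later decide to put Apache in front after all):
# one AJP worker per JBoss node, plus a load-balancer worker in front of them
worker.list=loadbalancer
worker.node1.type=ajp13
worker.node1.host=10.67.51.28
worker.node1.port=8009
worker.node2.type=ajp13
worker.node2.host=10.67.50.21
worker.node2.port=8009
worker.loadbalancer.type=lb
worker.loadbalancer.balance_workers=node1,node2
worker.loadbalancer.sticky_session=1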
To achieve high availability you will need a fully redundant infrastructure (routers, switches, etc.) and several reverse proxies (the Apache nodes) behind a hardware HA load balancer.
If you only have 2 JBoss nodes, how will a request going to the down node be rerouted to the failover node?
If it helps, re-brand the Apache node as a "failover request router"...