How to configure a load balancer in a Fuse cluster? - jbossfuse

I have set up a cluster in JBoss Fuse 6.3.0 with nodes server01 and server02. Can anyone suggest how to configure a load balancer in the Fuse cluster?
So far I have done the following to achieve clustering on JBoss Fuse.
1. I set up Fuse on 2 different servers, then joined them together to form a fabric cluster.
2. I edited the file "org.apache.karaf.management.cfg" under etc/ and changed the RMI-related ports:
rmiRegistryPort=1199
rmiServerPort=445
There are also a few more ports you need to adjust. In org.apache.karaf.shell.cfg, change:
sshPort
and lastly, inside system.properties:
org.osgi.service.http.port
activemq.port
activemq.jmx.url
3. Then, getting back to setting up the fabric cluster, start JBoss Fuse on server one by going into bin/ and executing fuse. After it starts up, create a fabric by entering the following command:
fabric:create --wait-for-provisioning
4. This spins up a fabric on a container called root. Now start JBoss Fuse on server two by going into bin/ and executing fuse. Instead of creating a fabric, we are going to join the existing one by entering the following command (syntax: fabric:join [options] zookeeperUrl [containerName]):
fabric:join --zookeeper-password admin 192.168.0.1:2181 root1
5. Then go to your Fuse command-line console on server1 and type:
config:edit io.fabric8.zookeeper
config:proplist
This will give you your zookeeper details:
JBossFuse:karaf#root> config:proplist
service.pid = io.fabric8.zookeeper
zookeeper.password = ZKENC=YWRtaW4=
zookeeper.url = 192.168.0.1:2181
fabric.zookeeper.pid = io.fabric8.zookeeper
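As an aside, the ZKENC= prefix just marks a Base64-encoded value, so the stored password decodes back to the one passed to fabric:join. A quick sanity check from any shell:

```shell
# decode the value stored in zookeeper.password (ZKENC=YWRtaW4=)
printf 'YWRtaW4=' | base64 -d
# prints: admin
```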
6. And now I have successfully created a fabric across the 2 servers.
- If you type container-list in the command line, you should see that we now have 2 working servers:
JBossFuse:karaf#root> container-list
[id] [version] [connected] [profiles] [provision status]
root* 1.0 true fabric, fabric-ensemble-0000-1, jboss-fuse-full success
root1 1.0 true fabric
7. Now, if you log in to the Fuse management console, under Runtime -> Containers you will see the 2 root containers from both servers:
-root
-root1
But now my question is: how do I achieve load balancing between server1 and server2? I created my fabric profile, shared it with the 2 nodes, and deployed a CXF-RS web service to the profile, but requests are not going to server2.
Can anyone suggest where I need to configure the load balancer for Fuse server1 and server2?
Thanks,
Prakash

You can use the HTTP Gateway to load-balance HTTP endpoints in JBoss Fuse Fabric mode.
For more info, see:
https://access.redhat.com/documentation/en-us/red_hat_jboss_fuse/6.3/html/fabric_guide/gateway
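As a minimal sketch (the child-container name "gateway" is illustrative): add the gateway-http profile to a container in the fabric, and it will discover the CXF endpoints registered in the fabric's ZooKeeper registry and proxy requests across the containers that host them.

```
# on the server1 Fuse console; "gateway" is an assumed container name
fabric:container-create-child --profile gateway-http root gateway
# clients then call the gateway's address instead of an individual server;
# check the gateway-http profile's configuration for the listen port
```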

Related

Load balancing in JBoss with mod_cluster

Got a general question about load balancing setup in JBoss (7.1.1.Final). I'm trying to setup a clustered JBoss instance with a master and slave node and I'm using the demo app here (https://docs.jboss.org/author/display/AS72/AS7+Cluster+Howto) to prove the load balancing/session replication. I've basically followed through to just before the 'cluster configuration' section.
I've got the app deployed to the master and slave nodes and if I hit their individual IPs directly I can access the application fine. According to the JBoss logs and admin console the slave has successfully connected to the master. However, if I put something in the session on the slave, take the slave offline, the master cannot read the item that the slave put in the session.
This is where I need some help with the general setup. Do I have to have a separate apache httpd instance sat in front of JBoss to do the load balancing? I thought there was a load balancing capability built into JBoss that wouldn't need the separate server, or am I just completely wrong? If I don't need apache, please could you point me in the direction of instructions to setup the JBoss load balancing?
Thanks.
Yes, you need Apache or some other software or hardware that performs load balancing of the HTTP requests; JBoss Application Server does not provide this functionality by itself.
For session replication to work properly, you should check that both the server configuration and the application configuration are well defined.
The server must have the replication cache enabled (you can use the standalone-ha.xml or standalone-full-ha.xml file as the initial configuration).
Configuring the application to replicate the HTTP session is done by adding the <distributable/> element to web.xml.
You can see a full example at http://blog.akquinet.de/2012/06/21/clustering-in-jboss-as7eap-6/
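To illustrate both halves (paths are the AS7 defaults): start each node with an HA profile so the replication cache is available, and mark the application as distributable in WEB-INF/web.xml:

```xml
<!-- WEB-INF/web.xml: <distributable/> turns on HTTP session replication -->
<web-app xmlns="http://java.sun.com/xml/ns/javaee" version="3.0">
    <distributable/>
</web-app>
```

Each node is then started with bin/standalone.sh -c standalone-ha.xml; depending on your network, you may also need matching JGroups settings so the nodes discover each other.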

Prevent deployment to entry node, only deploy to other nodes

I have a free OpenShift account with the default 3 gears. On this I have installed the WildFly 8.1 image using the OpenShift web console. I set the minimal and maximal scaling to 3.
What happens now is that OpenShift will create 3 JBoss WildFly instances:
One on the entry node (which is also running HAProxy)
One on an auxiliary node
One on another auxiliary node
The weird thing is that the JBoss WildFly instance on the entry node is by default disabled in the load balancer config (haproxy.conf). BUT, OpenShift is still deploying the war archive to it whenever I commit in the associated git repo.
What's extra problematic here is that because of the incredibly low number of max user processes (250 via ulimit -u), this JBoss WildFly instance on the entry node cannot even startup. During startup JBoss WildFly will throw random 'java.lang.OutOfMemoryError: unable to create new native thread' (and no, memory is fine, it's the OS process limit).
As a result, the deployment process will hang.
So to summarize:
A JBoss WildFly instance is created on the entry node, but disabled in the load balancer
JBoss WildFly in its default configuration cannot startup on the entry node, not even with a trivial war.
The deployer process attempts to deploy to JBoss WildFly on the entry node, despite it being disabled in the load balancer
Now my question:
How can I modify the deployer process (including the gear start command) to not attempt to deploy to the JBoss WildFly instance on the entry node?
When an app scales from 2 gears to 3, HAProxy stops routing traffic to your application on the head gear and routes it to the two other gears. This ensures that HAProxy gets as much CPU as possible, since the application on your head gear (where HAProxy is running) is no longer serving requests.
The out-of-memory message you're seeing might not be an actual out-of-memory issue but a bug relating to ulimit: https://bugzilla.redhat.com/show_bug.cgi?id=1090092.
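For reference, in the gear's haproxy.cfg it is the head-gear entry that carries the disabled flag; it looks roughly like this (addresses made up):

```
server gear-1 127.0.250.1:8080 check disabled   # head gear, out of rotation
server gear-2 127.1.14.2:8080 check
server gear-3 127.2.88.3:8080 check
```

The deployer does not consult this file, which is why the war is still pushed to the head gear even though no traffic reaches it.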

Deploying different ear files in different clusters of same weblogic server domain

Hi, I am new to this forum as well as to WebLogic Server. My requirement is that I have an application that runs on a cluster with an admin server and three managed servers MS1, MS2, MS3. Currently my application has two parts (or logic paths), both of which are in a single ear file. Part 1 always occupies one server, say MS1, and the rest run on the other two, MS2 & MS3. I want to divide my code into two different ears, part1 and part2, with part1_ear deployed on MS1 and part2_ear deployed on MS2 and MS3, all running under the same admin server:
ear1 deployed in ----->MS1
ear2 deployed in ----->MS2 &MS3
All running under the same admin server.
Can this be done? If not, other suggestions are also welcome, but I can have only one admin server and 3 clusters.
Yes, when you deploy your .ear files you can target specific servers in a cluster. Your deployments don't have to go to all servers in a cluster.
Also, if you really only want one server in the cluster to handle some specific event, you might want to look into Singleton Services.
Have you had experience deploying applications in WebLogic before?
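As a sketch, targeting is set per deployment, either in the admin console or with the weblogic.Deployer tool (hostname, port, and credentials below are placeholders):

```
# deploy part1 only to MS1
java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
     -deploy -name part1 -source part1_ear.ear -targets MS1
# deploy part2 to MS2 and MS3
java weblogic.Deployer -adminurl t3://adminhost:7001 -username weblogic -password welcome1 \
     -deploy -name part2 -source part2_ear.ear -targets MS2,MS3
```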

Jboss Server with same port on the same machine

Can we run more than one instance of JBoss Server on the same port on the same machine? If yes, how?
Thanks
Amar
Of course, the only way to have two services listening on the same port is to make sure that they bind to different IP addresses. If you consider it acceptable to configure multiple addresses on the same interface, simply start each instance of JBoss with the flag "-b <address>".
Yes you can. All you need is to also run an Apache server instance and use it as a load balancer to a JBoss cluster, with the mod_proxy or mod_proxy_ajp plugin balancing between multiple JBoss instances. To spin up multiple instances of JBoss 5 or JBoss 6 on Windows, use my script here (but you will have to enhance the configuration yourself to enable clustering and the Apache load balancer). Also, my launch script requires you to download components from the YAJSW service-wrapper project.
I frequently run multiple JBoss servers as a cluster, and I always run an Apache server on ports 80 and 443 that load-balances to the JBoss instances. Here is an example post from my blog.
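A minimal httpd fragment for that setup might look like this (module loading omitted, addresses and paths assumed):

```
<Proxy "balancer://jbosscluster">
    BalancerMember "http://192.168.1.1:8080"
    BalancerMember "http://192.168.1.2:8080"
</Proxy>
ProxyPass        "/app" "balancer://jbosscluster/app"
ProxyPassReverse "/app" "balancer://jbosscluster/app"
```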
Yes, you can do it if your machine has several network interfaces (IP addresses) and you bind each JBoss instance to a different IP. For example, if your machine has two network interfaces, 192.168.1.1 and 192.168.1.2, you could run each instance with the commands:
./run.sh -c instance1 -b 192.168.1.1
./run.sh -c instance2 -b 192.168.1.2
But the most common case is running several instances on the same machine using different ports for each instance; you can achieve that with JBoss Port Bindings.
Look for detailed info on this JBoss page: Configuring Multiple JBoss Instances On One Machine.

Redeploy/Failover for Glassfish cluster on EC2?

I have a Tapestry application (WAR, no EJB) that ...
... I want to deploy on 2 EC2 small instances (for failover).
... uses Spring Security
... is stateful (very small session state)
... should be deployed on Glassfish 3.1 (seems to have best cluster support?)
... and has an elastic load balancer with sticky session in front of it
How can I configure a cluster to achieve minimal ('no') interruptions for the user experience in case A) a node fails and B) I deploy a new version?
Everything is explained here: http://download.oracle.com/docs/cd/E18930_01/html/821-2426/docinfo.html#scrolltoc
Basically, you set up a DAS (= master), which controls nodes with instances on them. You could do all of this on the same machine (1 DAS, 1 node with multiple instances), although it would be a good idea to have at least 2.
You should then have at least one load balancer (Apache, a hardware load balancer, whatever).
A) If a node fails, the load balancer can redirect all traffic to the other node.
B)
deploy the application, disabled, with the new version (see "application versioning")
mark server A as unavailable
enable the new version on server A
mark server A as available and server B as unavailable
enable the new version on server B
mark server B as available
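The versioned-deployment steps above can be sketched with asadmin and GlassFish 3.1 application versioning (cluster and application names are placeholders):

```
# deploy the new version alongside the old one, disabled
asadmin deploy --target mycluster --enabled=false --name myapp:2.0 myapp.war
# enabling a version automatically disables the previously active version
asadmin enable --target mycluster myapp:2.0
```

Marking individual servers unavailable is done at the load balancer; enabling per instance (rather than per cluster) lets you roll the new version out one server at a time.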