Simple distributed queue in a JBoss cluster?

I am working on a server that acts as the web service and UI front end for another server. Both servers are clustered. One of the features of the UI is tasks that users work on. These tasks are queued up in ActiveMQ on the back-end server, and items are fetched through the front-end server. I want to build a simple in-memory queue so that I can feed these items to the UI as fast as possible, and I want to avoid having to configure another ActiveMQ server. My current approach is to just distribute the queue using Infinispan, but this feels inefficient. Is there a better way using something already included in JBoss?

You can just use standard JMS, and it will go over the HornetQ broker built into JBoss EAP 6, so there is no separate messaging server to install or configure; non-persistent messages are simply held in memory.
There is a good article for setting up a clustered queue in AS 7 (EAP 6 should be the same) here:
http://blog.akquinet.de/2012/11/24/clustering-of-the-messaging-subsystem-hornetq-in-jboss-as7-and-eap-6/
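For example, producing to such a queue from the front end is just plain JMS. Below is a minimal sketch, assuming the HornetQ subsystem binds a connection factory at java:/ConnectionFactory and a queue at java:/jms/queue/tasks (both JNDI names are placeholders for whatever your configuration actually defines):

import javax.annotation.Resource;
import javax.ejb.Stateless;
import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;

// Minimal JMS 1.1 producer; the mappedName values are placeholders.
@Stateless
public class TaskQueueSender {

    @Resource(mappedName = "java:/ConnectionFactory")
    private ConnectionFactory connectionFactory;

    @Resource(mappedName = "java:/jms/queue/tasks")
    private Queue taskQueue;

    public void send(String taskPayload) throws Exception {
        Connection connection = connectionFactory.createConnection();
        try {
            Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
            MessageProducer producer = session.createProducer(taskQueue);
            producer.send(session.createTextMessage(taskPayload));
        } finally {
            connection.close();
        }
    }
}

The consuming side can be an MDB or a plain JMS consumer on the front-end nodes; with the clustered setup described in the article above, HornetQ distributes messages between the nodes for you.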
In order to monitor the number of items in the queue, you can use JMX:
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
<hornetq-server>
<clustered>true</clustered>
<jmx-management-enabled>true</jmx-management-enabled>
<!-- rest of config here -->
</hornetq-server>
</subsystem>
Once JMX is enabled, you can use HornetQ-specific code to read the queue length. This question gives an example of that: How to find a HornetQ queue length
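As an alternative to the HornetQ management classes, plain remote JMX also works. Here is a minimal sketch; the host, port, remoting-jmx URL and the ObjectName pattern are assumptions, so verify the exact MBean name your HornetQ version registers (for example by browsing with JConsole), and note that the JBoss client libraries are needed on the classpath for the remoting-jmx protocol:

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// Reads the MessageCount attribute of a JMS queue over remote JMX.
// URL, port and ObjectName below are placeholders to adapt to your server.
public class QueueDepthCheck {
    public static void main(String[] args) throws Exception {
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://localhost:9999");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ObjectName queue = new ObjectName(
                    "org.hornetq:module=JMS,type=Queue,name=\"tasksQueue\"");
            Long depth = (Long) mbsc.getAttribute(queue, "MessageCount");
            System.out.println("Messages in queue: " + depth);
        } finally {
            connector.close();
        }
    }
}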
It might also be worth noting that JBoss EAP 7 is going to switch from HornetQ to ActiveMQ Artemis.

Related

Configuring the HTTP thread pool size in JBoss EAP 7

I cannot find any documentation on configuring how many requests JBoss EAP 7 can process simultaneously. There is an HTTP connector and thread pool configuration for version 6.4, but version 7 does not have it:
Make the HTTP web connector use this thread pool
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/administration_and_configuration_guide/sect-connector_configuration
So how do I configure it so that, for example, only 300 requests can be processed at a time and the rest have to wait their turn, so that too many simultaneous requests won't kill the server? I know that my application is efficient enough to serve up to 300 requests; beyond that, problems may occur.
JBoss EAP 7 uses Undertow as the default web container. In Undertow, all listeners (AJP/HTTP/HTTPS) use the default worker provided by the IO subsystem, and this worker instance manages the listeners' IO threads. The IO threads are responsible for handling incoming requests, and the IO subsystem worker exposes the following options for tuning them.
You could try the following; task-max-threads caps the worker's task (request-processing) threads, so setting it to 300 would limit processing to roughly 300 concurrent requests:
<subsystem xmlns="urn:jboss:domain:io:2.0">
    <worker name="default" task-max-threads="128"/>
    <buffer-pool name="default"/>
</subsystem>

How to configure JDK8 JMX port to allow VisualVM to monitor all threads

I need to monitor a JDK 8 JVM on a production Red Hat 6 Linux server that allows me no direct access. I'm able to connect to the JMX port using VisualVM.
I can see the Monitor tab: CPU usage; heap and metaspace; loaded and unloaded classes; total number of threads.
However, I can't see dynamic data on the Threads tab. It shows me the total number of threads and daemon threads, but no real-time status information by thread name.
I don't know how to advise the owners of the server how to configure the JDK so I can see dynamic data on the Threads tab.
It works fine on my local machine. I can see the status of every thread by name, with color coded state information, if I point VisualVM to a Java process running locally.
What JVM setting makes dynamic thread data available on the JMX port?
Update:
I should point out that I'm using JBoss 6.x.
If I look at my local JBoss 6.x standalone.xml configuration, I see the following subsystem entry for JMX:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
<expose-resolved-model/>
<expose-expression-model/>
<remoting-connector/>
</subsystem>
I can see all dynamic thread information when running on my local machine.
I'm asking the owners of the production instance if the standalone.xml includes this subsystem. I'm hoping they will say that theirs is different. If it is, perhaps modifying the XML will make the data I need available.
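One way to check, independently of VisualVM, whether the remote JVM actually exposes per-thread data is to query the ThreadMXBean over the same connector. Below is a minimal sketch; the host, port and remoting-jmx URL are assumptions based on the <remoting-connector/> above, the JBoss client jar is needed on the classpath, and credentials may be required:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

// If this prints thread names and states, per-thread data is exposed over JMX
// and VisualVM should be able to populate its Threads tab from the same connection.
public class RemoteThreadProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port for the production server's remoting/JMX endpoint.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://prod-host:9999");
        JMXConnector connector = JMXConnectorFactory.connect(url, null);
        try {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
                if (info != null) {
                    System.out.println(info.getThreadName() + " -> " + info.getThreadState());
                }
            }
        } finally {
            connector.close();
        }
    }
}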

JBoss AS 7 Load Balancing with Server Failover

I have two JBoss server instances running on, for example, 127.0.0.1 and 127.0.0.2.
I have implemented JBoss load balancing, but I am not sure how to achieve server failover. I do not have a web server to monitor the heartbeat, so using mod_cluster is out of the question. Is there any way I can achieve failover using only the two available servers?
Any help would be appreciated. Thanks.
JBoss clustering automatically provides JNDI and EJB failover and also HTTP session replication.
If your JBoss AS nodes are in a cluster then the failover should just work.
The documentation refers to an older version of JBoss (5.1), but it has clear descriptions of how JBoss clustering works.
You could spin up another instance to serve as your domain controller, and the two instances you already have would be your hosts. Then you could go through the domain controller, and it will do the work for you. However, I haven't seen instances going down too often; it is usually servers that do, and it looks like you are using just one server (I might be wrong) for both instances, so I would consider splitting them up.

JBoss 7.1.1 and the EJB 3.1 Timer Service

I am thinking about porting a Spring/Quartz-based application to EJB 3.1 to see if EJB has improved. I am having problems understanding how fail-over works with the Schedule Timer Service. In Quartz, there are database tables that clustered Quartz instances share; if one node in your cluster crashes, jobs still get executed on the other nodes.
I have been looking at how the Timer Service persists things, and it appears to use the file system of the server the timer was created on. Is this true? I do not see how that could be the intended design, as it would render the Timer Service unusable in a cluster because it would not support failover.
So I must be missing something. Can anyone help me out with this?
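For reference, the EJB 3.1 equivalent of such a Quartz job is an automatic timer like the sketch below (the class name and schedule are illustrative, and persistent = true is already the default). How and where that persistence happens, and whether it fails over, is exactly what the answers discuss:

import javax.ejb.Schedule;
import javax.ejb.Singleton;

// Persistent automatic timer; the container decides where timer state is stored.
@Singleton
public class NightlyJob {

    // Illustrative schedule: every night at 02:00.
    @Schedule(hour = "2", minute = "0", persistent = true)
    public void run() {
        // business logic formerly in the Quartz job
    }
}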
The EJB timer service is simply not as advanced as Quartz (with or without Spring).
EJB timers are persisted to a location the spec leaves undefined. It may happen to be the file system, but it could also be the Windows registry if you happen to be running on Windows, or an LDAP server, or whatever.
There was an issue about this on the EJB spec JIRA for some time, and it was discussed on the spec mailing list, but then it was abruptly dropped and closed because no one bothered to reply (perhaps because a lot of people were on vacation at the time). It's one of the lamest reasons to close an issue if you ask me, but I guess the spec lead sometimes must resort to such measures.
Anyway, in JBoss AS persistence happens via an embedded relational datasource, which in turn writes to the file system. Via proprietary configuration you can point this datasource to any remote DB. Fail-over would have to come from proprietary JBoss functionality as well. Although EJB forbids lots of things for the sake of potential clustering, there is no explicit clustering support in the spec, and thus EJB timers specifically are not cluster-aware.
Not sure if this was available at the time of the question, but you can use the 'cluster-ha-singleton' quickstart for this. It allows you to create a singleton timer that is invoked from a single cluster node; if the chosen node fails, a new node is elected to run the singleton (and therefore the timers):
http://www.jboss.org/quickstarts/eap/cluster-ha-singleton/
It mentions EAP, but it runs fine for me on AS 7.2.0; the jars are already included in /modules/org/jboss/.

Alternatives to JMS for queuing

We have a REST web service that receives requests from external systems and makes updates to our DB accordingly. I'm looking to implement a caching/queuing solution for the requests that come in, as we've had some DB server challenges lately, and have lost some messages when the DB server went down.
Before I start putting together a simple persistent file-based queue, I want to see if there are any good alternatives to JMS, as its use is restricted in our environment.
Current platforms:
JBoss 4.3
Richfaces 3.3
Spring 3.0.5
RESTEasy
** UPDATES **
Per skaffman's question below, here are my requirements for clustering, transactions, etc.:
Clustering: Our web and app servers are all clustered, so the queue(s) will need to be able to process items from all cluster nodes. However, our commits are essentially atomic, so ordering and synchronization issues are extremely minimal. Thread and cluster safety is not really a factor. Separate/independent queues on each cluster node would be sufficient.
Transactions: Again, due to the atomic nature of our data, transactional needs are minimal/not required outside of each individual request.
Security: Moderate concern, but I would anticipate that to be handled by our regular security on the Web Service. I wouldn't anticipate anything reading or writing to the queue(s) other than the web-app itself. That would only be necessary in instances of high volume or when the DB is unavailable.
Thanks,
Mike
For one project we did use a queue (HornetQ), but it was embedded in the WAR and deployable on Tomcat because the customer did not want WebLogic or JBoss application servers. If your restrictive policy extends to your application architecture as well, such a solution would be forbidden.
For another project we did not use any JMS implementation; we built the asynchronous processing with a message database and the Service Activator of the Spring Integration framework to consume the events.
That way any message publisher just inserts a row into a DB table, and the Service Activator picks up the event and calls any other service (Spring, web service, etc.).
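A minimal plain-JDBC sketch of that pattern follows (Java 7 syntax for brevity). It is not the actual Spring Integration wiring; the table name, columns and handler interface are assumptions chosen for illustration:

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import javax.sql.DataSource;

// "Message table as queue": the publisher inserts a row, a poller hands
// unprocessed rows to a handler (the role the Service Activator plays above).
// Assumed table: MESSAGE_QUEUE(ID, PAYLOAD, PROCESSED).
public class MessageTableQueue {

    /** Hypothetical callback; in Spring Integration this is the service-activator method. */
    public interface MessageHandler {
        void handle(String payload);
    }

    private final DataSource dataSource;

    public MessageTableQueue(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    /** Publisher side: persist the incoming request as a row. */
    public void publish(String payload) throws Exception {
        try (Connection c = dataSource.getConnection();
             PreparedStatement ps = c.prepareStatement(
                     "INSERT INTO MESSAGE_QUEUE (PAYLOAD, PROCESSED) VALUES (?, 0)")) {
            ps.setString(1, payload);
            ps.executeUpdate();
        }
    }

    /** Consumer side: poll unprocessed rows and dispatch them. */
    public void drain(MessageHandler handler) throws Exception {
        try (Connection c = dataSource.getConnection();
             PreparedStatement select = c.prepareStatement(
                     "SELECT ID, PAYLOAD FROM MESSAGE_QUEUE WHERE PROCESSED = 0 ORDER BY ID");
             ResultSet rs = select.executeQuery()) {
            while (rs.next()) {
                long id = rs.getLong("ID");
                String payload = rs.getString("PAYLOAD");
                handler.handle(payload);
                try (PreparedStatement update = c.prepareStatement(
                        "UPDATE MESSAGE_QUEUE SET PROCESSED = 1 WHERE ID = ?")) {
                    update.setLong(1, id);
                    update.executeUpdate();
                }
            }
        }
    }
}

Since the asker noted that separate, independent queues per cluster node are acceptable, each node can simply run its own poller on a schedule without any cross-node coordination.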