How to configure JDK8 JMX port to allow VisualVM to monitor all threads - jboss

I need to monitor a JDK 8 JVM on a production Red Hat 6 Linux server that allows me no direct access. I'm able to connect to the JMX port using VisualVM.
I can see the Monitor tab: CPU usage, heap and metaspace, loaded and unloaded classes, and the total number of threads.
However, I can't see dynamic data on the Threads tab. It shows me the total number of threads and daemon threads, but no real-time status information by thread name.
I don't know what to advise the owners of the server to change in the JDK configuration so that dynamic data shows up on the Threads tab.
It works fine on my local machine: if I point VisualVM at a locally running Java process, I can see the status of every thread by name, with color-coded state information.
What JVM setting makes dynamic thread data available on the JMX port?
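On a plain JVM, the Threads tab is fed by the platform ThreadMXBean over the same remote JMX connection as everything else, which is normally enabled with -Dcom.sun.management.jmxremote.port=<port> plus the matching .authenticate and .ssl flags. As a sanity check against what the server actually exposes, here is a minimal sketch of what VisualVM does under the hood; the host and port are placeholders:

import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;
import javax.management.MBeanServerConnection;
import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class ThreadProbe {
    public static void main(String[] args) throws Exception {
        // Placeholder host/port: use whatever the server's JMX port actually is.
        JMXServiceURL url = new JMXServiceURL(
                "service:jmx:rmi:///jndi/rmi://prod-host:9010/jmxrmi");
        try (JMXConnector connector = JMXConnectorFactory.connect(url)) {
            MBeanServerConnection mbsc = connector.getMBeanServerConnection();
            ThreadMXBean threads = ManagementFactory.newPlatformMXBeanProxy(
                    mbsc, ManagementFactory.THREAD_MXBEAN_NAME, ThreadMXBean.class);
            // The same data the Threads tab renders: one ThreadInfo per live
            // thread, with its name and state.
            for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
                if (info != null) { // a thread may have died between the two calls
                    System.out.println(info.getThreadName() + " -> " + info.getThreadState());
                }
            }
        }
    }
}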
Update:
I should point out that I'm using JBoss 6.x.
If I look at my local JBoss 6.x standalone.xml configuration, I see the following subsystem entry for JMX:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
<expose-resolved-model/>
<expose-expression-model/>
<remoting-connector/>
</subsystem>
I can see all dynamic thread information when running on my local machine.
I'm asking the owners of the production instance if the standalone.xml includes this subsystem. I'm hoping they will say that theirs is different. If it is, perhaps modifying the XML will make the data I need available.
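If the production instance only exposes JMX through the <remoting-connector/> shown above, rather than through a plain JMX port, the connection URL is different and the client needs the JBoss client libraries. A minimal sketch, assuming the default native management port of 9999 and a placeholder hostname:

import javax.management.remote.JMXConnector;
import javax.management.remote.JMXConnectorFactory;
import javax.management.remote.JMXServiceURL;

public class RemotingProbe {
    public static void main(String[] args) throws Exception {
        // "prod-host" is a placeholder; 9999 is the default native management port.
        // jboss-client.jar from the server's bin/client directory must be on the classpath.
        JMXServiceURL url = new JMXServiceURL("service:jmx:remoting-jmx://prod-host:9999");
        try (JMXConnector connector = JMXConnectorFactory.connect(url, null)) {
            System.out.println("Connected: " + connector.getConnectionId());
        }
    }
}

VisualVM should be able to use the same service URL, provided it is started with jboss-client.jar on its classpath.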

Related

Configuring HTTP thread size in JBoss EAP 7

I cannot find any documentation on configuring how many requests JBoss EAP 7 can process simultaneously. I see something like an HTTP connector and thread pool for version 6.4, but the 7 version is missing that:
Make the HTTP web connector use this thread pool
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/administration_and_configuration_guide/sect-connector_configuration
So how do I configure it so that, for example, only 300 requests can be processed at one time and the rest have to wait their turn, so that too many simultaneous requests won't kill the server? I know that my application is efficient enough to serve up to 300 requests; beyond that, problems may occur.
JBoss EAP 7 uses Undertow as the default web container. In Undertow, by default, all listeners use the default worker provided by the IO subsystem. This worker instance manages the IO threads for the listeners (AJP/HTTP/HTTPS).
The IO threads are responsible for handling incoming requests, and the IO subsystem worker provides the following options to tune this further.
You could try the following:
<subsystem xmlns="urn:jboss:domain:io:2.0">
<worker name="default" task-max-threads="128"/>
<buffer-pool name="default"/>
</subsystem>
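For the specific 300-request ceiling asked about, the same attribute can be set to 300. A sketch, assuming the stock worker name of "default" (requests beyond the task pool generally queue up in the worker rather than being rejected):

<subsystem xmlns="urn:jboss:domain:io:2.0">
    <worker name="default" task-max-threads="300"/>
    <buffer-pool name="default"/>
</subsystem>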

Simple distributed queue in a jboss cluster?

I am working on a server that acts as the web service and UI front end in front of another server. Both servers are clustered. One of the features of the UI is tasks that users work on. These tasks are queued up on activemq on the backend server and items are fetched through the frontend server. I want to build a simple in-memory queue so that I can feed these items to the UI as fast as possible. I want to avoid having to configure another activemq server. My current approach is to just distribute the queue using Infinispan, but this feels inefficient. Is there a better way using something already included in JBoss?
You can just use standard JMS, and it will go over the HornetQ broker built into JBoss EAP 6; by default, messages are simply held in memory.
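If it helps to see what "standard JMS" amounts to here, a minimal in-container send might look like the sketch below; the queue's JNDI name is hypothetical and must match a <jms-queue> entry in your messaging subsystem:

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.Queue;
import javax.jms.Session;
import javax.naming.InitialContext;

public class TaskSender {
    public void send(String item) throws Exception {
        InitialContext ctx = new InitialContext();
        // Default in-container connection factory on EAP 6; the queue name
        // "java:/jms/queue/tasks" is hypothetical.
        ConnectionFactory cf = (ConnectionFactory) ctx.lookup("java:/ConnectionFactory");
        Queue queue = (Queue) ctx.lookup("java:/jms/queue/tasks");
        Connection conn = cf.createConnection();
        try {
            Session session = conn.createSession(false, Session.AUTO_ACKNOWLEDGE);
            session.createProducer(queue).send(session.createTextMessage(item));
        } finally {
            conn.close();
        }
    }
}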
There is a good article for setting up a clustered queue in AS 7 (EAP 6 should be the same) here:
http://blog.akquinet.de/2012/11/24/clustering-of-the-messaging-subsystem-hornetq-in-jboss-as7-and-eap-6/
In order to monitor the number of items in the queue, you can use JMX:
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
<hornetq-server>
<clustered>true</clustered>
<jmx-management-enabled>true</jmx-management-enabled>
<!-- rest of config here -->
</hornetq-server>
</subsystem>
Once JMX is enabled, you can use HornetQ-specific code to see the queue length. This question gives an example of that: How to find a HornetQ queue length
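As a sketch of what that HornetQ-specific code can look like over a plain MBeanServerConnection (the ObjectName pattern is HornetQ 2.x's default for JMS queues, and the queue name "tasks" is hypothetical):

import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class QueueDepth {
    // "mbsc" can come from a JMXConnector, as in the connection examples above.
    static long messageCount(MBeanServerConnection mbsc) throws Exception {
        // Default HornetQ 2.x ObjectName for a JMS queue; "tasks" is hypothetical.
        ObjectName queue = new ObjectName("org.hornetq:module=JMS,type=Queue,name=\"tasks\"");
        return (Long) mbsc.getAttribute(queue, "MessageCount");
    }
}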
Also worth noting: JBoss EAP 7 is going to switch from HornetQ to ActiveMQ Artemis.

Load balancing in JBoss with mod_cluster

Got a general question about load balancing setup in JBoss (7.1.1.Final). I'm trying to setup a clustered JBoss instance with a master and slave node and I'm using the demo app here (https://docs.jboss.org/author/display/AS72/AS7+Cluster+Howto) to prove the load balancing/session replication. I've basically followed through to just before the 'cluster configuration' section.
I've got the app deployed to the master and slave nodes, and if I hit their individual IPs directly I can access the application fine. According to the JBoss logs and admin console, the slave has successfully connected to the master. However, if I put something in the session on the slave and then take the slave offline, the master cannot read the item that the slave put in the session.
This is where I need some help with the general setup. Do I have to have a separate Apache httpd instance sitting in front of JBoss to do the load balancing? I thought there was a load balancing capability built into JBoss that wouldn't need the separate server, or am I just completely wrong? If I don't need Apache, could you please point me in the direction of instructions for setting up the JBoss load balancing?
Thanks.
Yes, you need Apache or some other software or hardware that performs load balancing of the HTTP requests; JBoss Application Server does not provide this functionality.
For session replication to work properly, you should check that both the server configuration and the application configuration are correct.
The server must have the cache for session replication enabled (you can use the standalone-ha.xml or standalone-full-ha.xml file for the initial configuration).
Configuring the application to replicate the HTTP session is done by adding the <distributable/> element to its web.xml.
You can see a full example at http://blog.akquinet.de/2012/06/21/clustering-in-jboss-as7eap-6/
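For reference, the application-side change is a single element in web.xml (shown here against the Servlet 3.0 schema; adjust the version to match your deployment):

<web-app xmlns="http://java.sun.com/xml/ns/javaee"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://java.sun.com/xml/ns/javaee
                             http://java.sun.com/xml/ns/javaee/web-app_3_0.xsd"
         version="3.0">
    <!-- Tells the container to replicate HTTP sessions across the cluster -->
    <distributable/>
</web-app>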

Prevent deployment to entry node, only deploy to other nodes

I have a free OpenShift account with the default 3 gears. On this I have installed the WildFly 8.1 image using the OpenShift web console. I set the minimum and maximum scaling to 3.
What happens now is that OpenShift will create 3 JBoss WildFly instances:
One on the entry node (which is also running HAProxy)
One on an auxiliary node
One on another auxiliary node
The weird thing is that the JBoss WildFly instance on the entry node is by default disabled in the load balancer config (haproxy.conf). BUT, OpenShift still deploys the war archive to it whenever I commit to the associated git repo.
What's extra problematic here is that because of the incredibly low limit on user processes (250, via ulimit -u), this JBoss WildFly instance on the entry node cannot even start up. During startup, JBoss WildFly will throw random 'java.lang.OutOfMemoryError: unable to create new native thread' errors (and no, memory is fine; it's the OS process limit).
As a result, the deployment process will hang.
So to summarize:
A JBoss WildFly instance is created on the entry node, but disabled in the load balancer
JBoss WildFly in its default configuration cannot start up on the entry node, not even with a trivial war.
The deployer process attempts to deploy to JBoss WildFly on the entry node, despite it being disabled in the load balancer
Now my question:
How can I modify the deployer process (including the gear start command) to not attempt to deploy to the JBoss WildFly instance on the entry node?
When an app scales from 2 gears to 3, HAProxy stops routing traffic to your application on the head gear and routes it to the two other gears. This ensures that HAProxy gets as much CPU as possible, since the application on your head gear (where HAProxy is running) is no longer serving requests.
The out-of-memory message you're seeing might not be an actual out-of-memory issue but a bug relating to ulimit: https://bugzilla.redhat.com/show_bug.cgi?id=1090092.

LoadRunner suggested approach to monitoring cloud-based JBoss infrastructure

My project has a technical platform consisting of a cloud-based setup with JBoss nodes running on Linux VMs and, further below, databases connected to these.
Obviously I can configure each JBoss instance to accept remote monitoring via JMX and use VisualVM to monitor them. But as the number of JBoss instances (combined app server and web app server) increases, the monitoring gets out of hand, as there are a lot of nodes to monitor. I have been thinking about using our JBoss Operations Network (JON) and perhaps monitoring at that abstraction level, but is there a way to configure LoadRunner to monitor through JON, for example?
General question:
Does anybody have experience monitoring a cloud-based JBoss infrastructure through LoadRunner, or do you monitor through something like JON instead when running the load test?
Do all of the monitoring in SiteScope:
Base operating system monitors through SiteScope
JMX monitoring in SiteScope
(Alternate route) SNMP agents for JBoss and your OS, through SiteScope
When you go to run the test, connect to your SiteScope instance from your LoadRunner/Performance Center controller and pull in the SiteScope stats. As an alternative to SiteScope, Business Availability Center can also be used.