Configuring HTTP thread pool size in JBoss EAP 7 - jboss

I cannot find any documentation on configuring how many requests JBoss EAP 7 can process simultaneously. I see something like the HTTP connector and thread pool for version 6.4, but version 7 is missing that:
Make the HTTP web connector use this thread pool
https://access.redhat.com/documentation/en-us/jboss_enterprise_application_platform/6.3/html/administration_and_configuration_guide/sect-connector_configuration
So how do I configure it so that, for example, only 300 requests can be processed at one time and the rest have to wait their turn, so that too many simultaneous requests won't kill the server? I know that my application is efficient enough to serve up to 300 requests; beyond that, problems may occur.

JBoss EAP 7 uses Undertow as the default web container. In Undertow, by default, all listeners use the default worker provided by the IO subsystem. This worker instance manages the I/O threads for the listeners (AJP/HTTP/HTTPS).
The I/O threads are responsible for handling incoming requests. The IO subsystem worker provides the following options to tune this further.
You could try the following:
<subsystem xmlns="urn:jboss:domain:io:2.0">
    <worker name="default" task-max-threads="128"/>
    <buffer-pool name="default"/>
</subsystem>
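The same attribute can also be set through the JBoss management CLI instead of editing the XML by hand; a sketch, assuming the default worker and the 300-request limit from the question:

```
/subsystem=io/worker=default:write-attribute(name=task-max-threads, value=300)
reload
```

Note that task-max-threads caps the blocking task pool that executes request work, while the io-threads attribute on the same worker controls the number of non-blocking I/O threads.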

Related

wildfly 14 + pooled-connection-factory min-pool-size

In the WildFly activemq subsystem, I have defined min-pool-size as 40 in a pooled-connection-factory. Artemis is deployed externally, with its IP and port defined in WildFly's standalone-full-ha.xml. After starting the Artemis and WildFly servers, how can I verify that WildFly has created connections (based on the min-pool-size property) once the server has started?
I believe the pool in WildFly is filled lazily, so I don't believe that just starting WildFly will necessarily cause a connection to be created. Something actually needs to use the pool before connections appear. Confirming that the application using the pool behaves correctly is the best way to know whether the pool is working as expected.
Also, I don't think the pooled-connection-factory actually exposes any metrics via the WildFly management interfaces, which is probably why you're not seeing anything for it.

Does gRPC server spin up a new thread for each request?

I tried profiling a gRPC Java server, and I mainly see the following thread pools:
grpc-default-executor threads: one created for each incoming request.
grpc-default-worker-ELG threads: presumably these listen for incoming gRPC requests and hand them off to the "grpc-default-executor" threads.
Overall, is the gRPC Java server Netty-style or Jetty/Tomcat-style? Or can it be configured to run both ways?
gRPC Java server is exposed closer to Jetty/Tomcat style, except that it is asynchronous. That is, in normal Servlets each request consumes a thread until it is complete. While newer Servlet versions let you detach from the dedicated thread and continue work asynchronously (freeing the thread for other use) that is more uncommon. In gRPC you are free to work in either style. Note that gRPC uses a cachedThreadPool by default to reuse threads; on server-side it's a good idea to replace the default executor with your own, generally fixed-size, pool via ServerBuilder.executor().
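The effect of swapping the default cached pool for a fixed-size one can be sketched with plain java.util.concurrent; the class and method names here are invented for the demo, and the ServerBuilder.executor() call is shown only in a comment since it needs the grpc-java dependency:

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicInteger;

public class FixedPoolDemo {
    // Runs `tasks` short jobs on a pool of `poolSize` threads and returns
    // the highest number of jobs that were ever running at once: a fixed
    // pool bounds concurrency the way a cachedThreadPool does not.
    public static int maxConcurrent(int tasks, int poolSize) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(poolSize);
        AtomicInteger running = new AtomicInteger();
        AtomicInteger max = new AtomicInteger();
        CountDownLatch done = new CountDownLatch(tasks);
        for (int i = 0; i < tasks; i++) {
            pool.execute(() -> {
                int now = running.incrementAndGet();
                max.accumulateAndGet(now, Math::max);
                try { Thread.sleep(5); } catch (InterruptedException ignored) {}
                running.decrementAndGet();
                done.countDown();
            });
        }
        done.await();
        pool.shutdown();
        return max.get();
    }

    public static void main(String[] args) throws Exception {
        // With gRPC you would hand the same kind of pool to the server:
        // Server server = ServerBuilder.forPort(8080)
        //         .executor(Executors.newFixedThreadPool(300)) // bounded, not cached
        //         .addService(...)
        //         .build();
        System.out.println("max concurrent = " + maxConcurrent(100, 8));
    }
}
```

No matter how many requests arrive, at most poolSize of them run application code at once; the rest queue inside the executor.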
Internally gRPC Java uses the Netty-style. That means fully non-blocking. You may use ServerBuilder.directExecutor() to run on the Netty threads. Although in that case you may want to specify the NettyServerBuilder.bossEventLoopGroup(), workerEventLoopGroup(), and for compatibility channelType().
As far as I know, you can specify directExecutor() when building the gRPC server/client, which ensures all work is done on the I/O thread and so threads are shared. This is not the default for safety reasons, as you need to be very careful about what you do on the I/O thread (for example, you should never block there).
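The difference between a direct executor and a pooled one can be shown with the stdlib alone; `Runnable::run` below is the same idea as gRPC's directExecutor() (the class name is invented for the demo):

```java
import java.util.concurrent.CountDownLatch;
import java.util.concurrent.Executor;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.atomic.AtomicReference;

public class DirectExecutorDemo {
    // A "direct" executor: run the task inline on whatever thread submitted
    // it. With gRPC's directExecutor() that would be the Netty I/O thread,
    // which is why blocking in callbacks is forbidden in that mode.
    static final Executor DIRECT = Runnable::run;

    // Returns true if the executor ran the task on the submitting thread.
    public static boolean runsOnCallerThread(Executor ex) throws Exception {
        Thread caller = Thread.currentThread();
        AtomicReference<Thread> worker = new AtomicReference<>();
        CountDownLatch done = new CountDownLatch(1);
        ex.execute(() -> { worker.set(Thread.currentThread()); done.countDown(); });
        done.await();
        return worker.get() == caller;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newSingleThreadExecutor();
        System.out.println("direct executor on caller thread: " + runsOnCallerThread(DIRECT));
        System.out.println("pooled executor on caller thread: " + runsOnCallerThread(pool));
        pool.shutdown();
    }
}
```

With the direct executor the callback reuses the caller's (I/O) thread; with a pool it hops to a worker thread, which is slower per call but safe to block in.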

How to configure JDK8 JMX port to allow VisualVM to monitor all threads

I need to monitor a JDK 8 JVM on a production Red Hat 6 Linux server that allows me no direct access. I'm able to connect to the JMX port using VisualVM.
I can see the Monitor tab: CPU usage; heap and metaspace; loaded and unloaded classes; total number of threads.
However, I can't see dynamic data on the Threads tab. It shows me total number of threads and daemon threads, but no real time status information by thread name.
I don't know how to advise the owners of the server how to configure the JDK so I can see dynamic data on the Threads tab.
It works fine on my local machine. I can see the status of every thread by name, with color coded state information, if I point VisualVM to a Java process running locally.
What JVM setting makes dynamic thread data available on the JMX port?
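If the server owners can open a standard JMX port (rather than only JBoss's remoting connector), the usual JVM flags are the following. A sketch only: the port number is arbitrary, and disabling authentication and SSL like this is acceptable only on a locked-down network:

```
-Dcom.sun.management.jmxremote
-Dcom.sun.management.jmxremote.port=9010
-Dcom.sun.management.jmxremote.rmi.port=9010
-Dcom.sun.management.jmxremote.authenticate=false
-Dcom.sun.management.jmxremote.ssl=false
-Djava.rmi.server.hostname=<server-ip>
```

VisualVM pointed at that port gets full ThreadMXBean access, which is what drives the per-thread view on the Threads tab.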
Update:
I should point out that I'm using JBoss 6.x.
If I look at my local JBoss 6.x standalone.xml configuration, I see the following subsystem entry for JMX:
<subsystem xmlns="urn:jboss:domain:jmx:1.3">
    <expose-resolved-model/>
    <expose-expression-model/>
    <remoting-connector/>
</subsystem>
I can see all dynamic thread information when running on my local machine.
I'm asking the owners of the production instance if the standalone.xml includes this subsystem. I'm hoping they will say that theirs is different. If it is, perhaps modifying the XML will make the data I need available.

Simple distributed queue in a jboss cluster?

I am working on a server that acts as the web service and UI front end in front of another server. Both servers are clustered. One of the features of the UI is tasks that users work on. These tasks are queued up on activemq on the backend server and items are fetched through the frontend server. I want to build a simple in-memory queue so that I can feed these items to the UI as fast as possible. I want to avoid having to configure another activemq server. My current approach is to just distribute the queue using Infinispan, but this feels inefficient. Is there a better way using something already included in JBoss?
You can just use standard JMS, and it will go over the HornetQ broker built into JBoss EAP 6. By default it will use Infinispan in-memory.
There is a good article on setting up a clustered queue in AS 7 (EAP 6 should be the same) here:
http://blog.akquinet.de/2012/11/24/clustering-of-the-messaging-subsystem-hornetq-in-jboss-as7-and-eap-6/
In order to monitor the number of items in the queue, you can use JMX:
<subsystem xmlns="urn:jboss:domain:messaging:1.4">
    <hornetq-server>
        <clustered>true</clustered>
        <jmx-management-enabled>true</jmx-management-enabled>
        <!-- rest of config here -->
    </hornetq-server>
</subsystem>
Once JMX is enabled, you can use HornetQ-specific code to see the queue length. This question gives an example of that: How to find a HornetQ queue length
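Reading a queue depth over JMX boils down to one getAttribute call. The sketch below uses a standard JVM MBean so it is runnable as-is; the HornetQ ObjectName in the comment is a hypothetical example, since the exact name depends on your configuration:

```java
import java.lang.management.ManagementFactory;
import javax.management.MBeanServerConnection;
import javax.management.ObjectName;

public class JmxReadDemo {
    // Generic single-attribute read; works against any MBeanServerConnection,
    // local or remote (a remote one comes from JMXConnectorFactory.connect).
    public static Object readAttribute(MBeanServerConnection conn,
                                       String objectName,
                                       String attribute) throws Exception {
        return conn.getAttribute(new ObjectName(objectName), attribute);
    }

    public static void main(String[] args) throws Exception {
        // Demonstrated locally with a standard JVM MBean. For the HornetQ
        // queue you would connect to the server's JMX endpoint and read the
        // queue's MessageCount attribute from an ObjectName along the lines
        // of (hypothetical, configuration-dependent):
        //   org.hornetq:module=Core,type=Queue,address="jms.queue.MyQueue",name="jms.queue.MyQueue"
        Object threads = readAttribute(ManagementFactory.getPlatformMBeanServer(),
                "java.lang:type=Threading", "ThreadCount");
        System.out.println("live threads = " + threads);
    }
}
```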
Also might be worth noting: JBoss EAP 7 is going to switch from HornetQ to ActiveMQ.

Did netty.3.5.7 begin initializing worker pool with 200 threads?

Can anyone confirm whether Netty 3.5.7 introduced a change that causes an NIO thread pool of 200 threads to be created?
We have a webapp running in Tomcat 7, and I've noticed that at some point a new block of 200 NIO threads appears, all labeled "New I/O Worker #". I've verified that with 3.5.6 this thread pool is not initialized with 200 threads, only a boss thread. As soon as I replaced the jar with 3.5.7, I had 200 NIO threads plus the boss thread.
If this change was introduced in 3.5.7, is it possible to control the pool size with some external configuration? I ask because we don't use Netty directly; it's pulled in by a third-party JAR.
Thanks,
Bob
Netty no longer starts workers lazily, because of the synchronization overhead that entailed. I guess that could be what you are seeing.
The only fix is to set the worker count when creating the Nio*ChannelFactory. 200 is way too high anyway.
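Setting the worker count means using the three-argument Nio*ChannelFactory constructor, which takes it as the last parameter. Illustrative only, since it needs the Netty 3.5.x jar on the classpath, and the count of 4 is an arbitrary example:

```java
import java.util.concurrent.Executors;
import org.jboss.netty.bootstrap.ServerBootstrap;
import org.jboss.netty.channel.socket.nio.NioServerSocketChannelFactory;

// Cap the NIO worker pool explicitly instead of taking the default.
ServerBootstrap bootstrap = new ServerBootstrap(
        new NioServerSocketChannelFactory(
                Executors.newCachedThreadPool(),  // boss executor
                Executors.newCachedThreadPool(),  // worker executor
                4));                              // worker count
```

Since the third-party JAR creates the factory for you, this only helps if that library exposes a hook or setting for the worker count; otherwise you would need to ask its maintainers for one.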