JBoss Performance Metrics - jboss

With JBoss 4 and 5 I could access the following metrics through the web-console (see sample screenshot). Is this information still available in JBoss 7? If so, where?
free Memory
total Memory
max Memory
max Threads
min Spare Threads
max Spare Threads
current Thread Count
current Thread Busy
max Processing Time
processing Time
request Count
error Count
bytes Received
bytes Sent
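If it helps, the JVM-level figures in that list (free/total/max memory and thread counts) can still be read from the standard platform MBeans that JBoss AS 7 exposes, and the connector-level counters (request count, error count, bytes sent/received, processing time) should be available as runtime attributes of the web subsystem's connector in the management model (e.g. via jboss-cli or the admin console). A minimal sketch of the JVM side, using only the standard java.lang.management API and nothing JBoss-specific:
import java.lang.management.ManagementFactory;
import java.lang.management.MemoryMXBean;
import java.lang.management.MemoryUsage;
import java.lang.management.ThreadMXBean;

public class JvmMetricsProbe {
    public static void main(String[] args) {
        Runtime rt = Runtime.getRuntime();
        System.out.println("free memory  : " + rt.freeMemory());
        System.out.println("total memory : " + rt.totalMemory());
        System.out.println("max memory   : " + rt.maxMemory());

        MemoryMXBean mem = ManagementFactory.getMemoryMXBean();
        MemoryUsage heap = mem.getHeapMemoryUsage();
        System.out.println("heap used/committed/max : "
                + heap.getUsed() + "/" + heap.getCommitted() + "/" + heap.getMax());

        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        System.out.println("live threads : " + threads.getThreadCount());
        System.out.println("peak threads : " + threads.getPeakThreadCount());
    }
}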

Related

NiFi: poor performance of ConsumeKafkaRecord_2_0 and ConsumeKafka_2_0

I'm trying to load messages from a relatively large topic (billion+ records, more than 100 GiB, single partition) using Apache NiFi (nifi-1.11.4-RC1, OpenJDK 8, RHEL 7), but performance seems to be far too low:
1248429 messages (276.2 MB) per 5 minutes for ConsumeKafka_2_0 and 295 batches (282.5 MB) for ConsumeKafkaRecord_2_0, i.e. only 4161 messages (920 KB) per second.
Results of kafka-consumer-perf-test.sh (same node, same consumer group and same topic) are far more impressive:
263.4 MB (1190937 records) per second. That is too big a difference to explain by any reasonable overhead.
I've configured the cluster according to Best practices for setting up a high performance NiFi installation, but throughput didn't increase.
Each node has 256 GB RAM and 20 cores; Maximum Timer Driven Thread Count is set to 120, but the NiFi GUI shows only 1 or 2 active threads, CPU load is almost zero, and so is the disk queue.
I've tested several flows, but even ConsumeKafka_2_0 with an auto-terminated 'success' relationship shows the same speed.
Is it possible to increase the performance of these processors? It looks like some artificial limit or throttle, because I couldn't find any bottleneck...
Help, please, I'm completely stuck!
UPD1:
# JVM memory settings
java.arg.2=-Xms10240m
java.arg.3=-Xmx10240m
Scheduling Strategy : Timer driven
Concurrent Tasks : 64
Run Schedule : 0 sec
Execution : All nodes
Maximum Timer Driven Thread Count : 120
Maximum Event Driven Thread Count : 20
UPD2:
When I consume a topic with many partitions, or several topics together with one ConsumeKafka_2_0 processor, or when I use several processors with different consumer groups on the same topic, total throughput increases accordingly.
So Maximum Timer Driven Thread Count and Concurrent Tasks aren't the primary culprits. The problem is somewhere in task scheduling, or in the processor itself.
We've had success increasing ConsumeKafka throughput by changing the processor's yield duration from 1 to 0 seconds and increasing the socket's buffer size to 1 MB.
receive.buffer.bytes=1048576
You may find other things to try here:
https://blog.newrelic.com/engineering/kafka-best-practices/
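In NiFi, receive.buffer.bytes can be added as a dynamic property on the ConsumeKafka_2_0 processor so that it is passed through to the underlying Kafka client. For a sanity check outside NiFi, the same setting can be applied to a plain Java consumer; a minimal sketch, where the broker address, group id and topic name are placeholders:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class BufferedConsumerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");   // placeholder
        props.put("group.id", "perf-test");                 // placeholder
        props.put("key.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("value.deserializer",
                "org.apache.kafka.common.serialization.ByteArrayDeserializer");
        props.put("receive.buffer.bytes", "1048576");        // 1 MB socket buffer, as above
        props.put("max.poll.records", "10000");              // larger batches per poll

        try (KafkaConsumer<byte[], byte[]> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));  // placeholder
            while (true) {
                ConsumerRecords<byte[], byte[]> records = consumer.poll(Duration.ofMillis(500));
                System.out.println("polled " + records.count() + " records");
            }
        }
    }
}
If this plain consumer gets close to the kafka-consumer-perf-test.sh throughput with the 1 MB buffer, the remaining gap is more likely on the NiFi scheduling side (yield duration, run duration, concurrent tasks) than in the Kafka client itself.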

MongoDB pod consuming memory even though it is in an idle state

When inserting data into MongoDB its memory usage increases; then
the database is dropped and the connections are closed, but memory usage still continues to increase.
I have already configured the WiredTiger cache to 700 MB.
As you can see in the graph in the screenshot attached below, data insertion and deletion take place every 30 minutes and consume at most 10 minutes, after which the connection closes. But as the graph shows, memory usage continues to increase until it reaches its maximum limit, and then the Kubernetes pod starts showing trouble.
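For reference, a WiredTiger cache cap of roughly 700 MB is normally set in mongod.conf (or with the --wiredTigerCacheSizeGB flag); a minimal sketch, assuming a YAML-style config file. Note that this limits only WiredTiger's internal cache: the total resident memory of the mongod process also includes per-connection and aggregation overhead plus filesystem-cache effects, which is one common reason a pod's reported memory keeps climbing even after the cache limit is reached.
storage:
  wiredTiger:
    engineConfig:
      cacheSizeGB: 0.7   # ~700 MB internal cache; does not cap total process memory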

You have reached maximum pool size for given partition

Can somebody please explain to me what the cause of this error is:
You have reached maximum pool size for given partition
In the latest 2.1.x version, you do not get this exception any more.
You merely wait until a new connection becomes available.
But I will explain it anyway. To increase multiprocessor scalability, the pool is split into partitions, and several threads work together on a single partition.
Each partition has a queue, and when the limit of connections for this queue is reached, the exception is thrown. But again, this is no longer the case in the latest version.
So the best approach to fix this issue is to upgrade to the latest version and set a limit on the maximum number of connections. It would be helpful if you added more information to your question, but I suppose that you use OrientGraphFactory, which in the latest version has a maximum connection limit equal to the number of CPU cores.
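As a rough sketch of that approach with OrientGraphFactory (the connection URL, credentials and pool sizes below are placeholders, and the exact API may differ between OrientDB versions):
import com.tinkerpop.blueprints.impls.orient.OrientGraph;
import com.tinkerpop.blueprints.impls.orient.OrientGraphFactory;

public class PooledGraphSketch {
    public static void main(String[] args) {
        // setupPool(min, max) bounds the number of pooled connections explicitly
        OrientGraphFactory factory =
                new OrientGraphFactory("remote:localhost/mydb", "admin", "admin")
                        .setupPool(1, 32);
        try {
            OrientGraph graph = factory.getTx();  // borrows a transactional instance from the pool
            try {
                // ... work with the graph ...
            } finally {
                graph.shutdown();                 // returns the instance to the pool
            }
        } finally {
            factory.close();                      // releases the pool itself
        }
    }
}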

JBoss ActiveMQ 6.1.0 queue message processing slows down after 10000 messages

Below is the configuration:
2 JBoss application nodes
5 listeners on each application node with 50 threads each; the listeners support clustering and are set up as active-active, so they run on both app nodes
Each listener simply gets the message and logs the information into the database
50000 messages are posted into ActiveMQ using JMeter.
Here is the observation from the first execution:
All 50000 messages are consumed in approx. 22 mins:
messages 0-10000 consumed in approx. 1 min
messages 10000-20000 consumed in approx. 2 mins
messages 20000-30000 consumed in approx. 4 mins
messages 30000-40000 consumed in approx. 6 mins
messages 40000-50000 consumed in approx. 8 mins
So the message consumption time keeps increasing as more messages are processed.
Second execution, without restarting any of the servers:
50000 messages consumed in approx. 53 mins!
But after deleting the data folder of ActiveMQ and restarting ActiveMQ,
performance improves again, only to degrade as more data enters the queue!
I tried multiple configurations in activemq.xml, but with no success...
Has anybody faced a similar issue and found a solution? Let me know. Thanks.
I've seen similar slowdowns in our production systems when pending message counts go high. If you're flooding the queues, the MQ process can't keep all the pending messages in memory and has to go to disk to serve a message. Performance can fall off a cliff in these circumstances. Increase the memory given to the MQ server process.
It also looks as though the disk storage layout is not particularly efficient - perhaps each message is stored as a file in a single directory? This can make access times rise as traversing the directory takes longer.
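"Increase the memory given to the MQ server process" covers both the JVM heap and the broker's system-usage limits. A hedged sketch of raising the latter on an embedded BrokerService; the same limits can be set in activemq.xml, and the numbers are examples only:
import org.apache.activemq.broker.BrokerService;

public class BrokerUsageSketch {
    public static void main(String[] args) throws Exception {
        BrokerService broker = new BrokerService();
        // Example limits only; size them to the host and the JVM heap.
        broker.getSystemUsage().getMemoryUsage().setLimit(1024L * 1024 * 1024);       // 1 GB for in-flight messages
        broker.getSystemUsage().getStoreUsage().setLimit(20L * 1024 * 1024 * 1024);   // 20 GB persistent store
        broker.getSystemUsage().getTempUsage().setLimit(10L * 1024 * 1024 * 1024);    // 10 GB temp/spool space
        broker.start();
        broker.waitUntilStopped();
    }
}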
50000 messages in > 20 mins seems like very low performance.
The following configuration works well for me (these are just pointers; you may already have tried some of them, but see if they work for you):
1) Server and queue/topic policy entry
// server (org.apache.activemq.broker.BrokerService)
server.setDedicatedTaskRunner(false);
// queue policy entry (org.apache.activemq.broker.region.policy.PolicyEntry)
policyEntry.setMemoryLimit(queueMemoryLimit); // e.g. 32 MB
policyEntry.setOptimizedDispatch(true);
policyEntry.setLazyDispatch(true);
policyEntry.setReduceMemoryFootprint(true);
policyEntry.setProducerFlowControl(true);
policyEntry.setPendingQueuePolicy(new StorePendingQueueMessageStoragePolicy());
2) If you are using KahaDB for persistence, use the per-destination adapter (MultiKahaDBPersistenceAdapter). This keeps the storage folders separate for each destination and reduces synchronization effort. Also, if you are not worried about abrupt server restarts (due to any technical reason), you can reduce the disk sync effort with:
kahaDBPersistenceAdapter.setEnableJournalDiskSyncs(false);
3) Try increasing the memory usage, temp usage and store usage limits at the server level.
4) If possible, increase prefetchSize in the prefetch policy. This will improve performance but also increases the memory footprint of consumers.
5) If possible, use transactions in consumers. This will help to reduce the message acknowledgement handling and disk sync efforts by the server (see the sketch after this list).
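A minimal sketch of points 4 and 5 with the plain ActiveMQ/JMS client API; the broker URL, queue name and prefetch value are placeholders. The consumer uses a transacted session and commits only after the database write, so acknowledgements are batched into the transaction:
import javax.jms.Connection;
import javax.jms.Message;
import javax.jms.MessageConsumer;
import javax.jms.Session;
import org.apache.activemq.ActiveMQConnectionFactory;

public class TransactedConsumerSketch {
    public static void main(String[] args) throws Exception {
        ActiveMQConnectionFactory factory =
                new ActiveMQConnectionFactory("tcp://localhost:61616");   // placeholder URL
        factory.getPrefetchPolicy().setQueuePrefetch(1000);               // point 4: larger prefetch

        Connection connection = factory.createConnection();
        connection.start();
        // point 5: transacted session; the acknowledgement mode argument is ignored
        Session session = connection.createSession(true, Session.SESSION_TRANSACTED);
        MessageConsumer consumer = session.createConsumer(session.createQueue("TEST.QUEUE"));

        while (true) {
            Message message = consumer.receive(1000);
            if (message == null) {
                break;                // nothing left to consume
            }
            // ... log the message into the database here ...
            session.commit();         // acknowledges the message and ends the transaction
        }

        session.close();
        connection.close();
    }
}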
Point 5 mentioned by @hemant1900 solved the problem :) Thanks.
5) If possible use transactions in consumers. This will help to reduce
the message acknowledgement handling and disk sync efforts by server.
The problem was in my code. I had not used a transaction to persist the data in the consumer, which is bad programming anyway... I know :(
But I didn't expect that it could have caused this issue.
Now 50000 messages are getting processed in less than 2 mins.

Diagnosing Akka: number of active and available threads?

In my application, I sometimes get into a state where work isn't getting done by my workers, but the CPU and disk are basically sitting idle. I'd like to be able to regularly log (e.g. to statsd or similar) the number of active worker threads and the maximum number of worker threads for each scheduler. Then if we have problems, we can check the logs and cross-reference to determine whether they coincided with our thread pools being completely full.
I can't seem to find any methods to determine, at the current moment in time, the total thread pool size and the number of running threads in each pool. Where should I look?
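As far as I know there is no public Akka API that reports a dispatcher's current active thread count directly, so a common workaround is to rely on the fact that dispatcher threads carry the dispatcher id in their names and count them (and their states) via the standard ThreadMXBean. A rough sketch; the dispatcher id string is an assumption and should match your configuration:
import java.lang.management.ManagementFactory;
import java.lang.management.ThreadInfo;
import java.lang.management.ThreadMXBean;

public class DispatcherThreadProbe {
    /** Counts live and runnable threads whose names contain the given dispatcher id. */
    public static void logDispatcherThreads(String dispatcherId) {
        ThreadMXBean threads = ManagementFactory.getThreadMXBean();
        int total = 0;
        int runnable = 0;
        for (ThreadInfo info : threads.getThreadInfo(threads.getAllThreadIds())) {
            if (info == null || !info.getThreadName().contains(dispatcherId)) {
                continue;
            }
            total++;
            if (info.getThreadState() == Thread.State.RUNNABLE) {
                runnable++;
            }
        }
        System.out.println(dispatcherId + ": " + runnable + " runnable / " + total + " live");
    }

    public static void main(String[] args) {
        // "akka.actor.default-dispatcher" is the default id; replace it with the
        // id of your worker dispatcher as configured in application.conf.
        logDispatcherThreads("akka.actor.default-dispatcher");
    }
}
Logging these two numbers to statsd on a timer gives the cross-reference described above; the configured ceiling itself (e.g. fixed-pool-size or parallelism-max) comes from the dispatcher configuration rather than from runtime inspection.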