Distributed JMeter max number of servers - Kubernetes

I use distributed JMeter for load tests in Kubernetes. To maximize the number of concurrent threads, I use several JMeter server instances.
My test plan has 2,000 users, so with 5 JMeter servers I get 10,000 concurrent users. Each user sends 1 request per second to Kafka. This runs without any problems.
But if I increase the number of server instances to 10, JMeter throws a lot of errors when sending requests and cannot sustain the required rate of requests per second.
Is there a way to use more than 5 server instances with JMeter (my cluster has 24 vCPUs and 192 GB RAM)?

The theoretical maximum number of slaves is very high: you can have as many as 2,147,483,647, which is a little bit more than 5.
So my expectation is that the problem is somewhere else, e.g. a maximum number of connections per IP is defined (see the example below) or the broker is running out of resources.
We cannot give meaningful advice without more details. In the meantime you can check:
jmeter.log files for the master and the slaves
response data
Kafka logs
resource consumption on the Kafka broker(s) side, which can be done using e.g. the JMeter PerfMon Plugin
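For illustration, if the connections-per-IP hypothesis is the culprit (for example because all JMeter pods reach the broker through a single egress IP), the relevant Kafka broker settings in server.properties would look roughly like this. The property names are standard Kafka broker settings; the values and IP addresses are made up:

# server.properties on each Kafka broker -- illustrative values only
# the default is 2147483647; a lower value here caps how many connections one client IP can open
max.connections.per.ip=2147483647
# or override the limit only for specific client IPs (hypothetical addresses):
max.connections.per.ip.overrides=10.0.0.5:1000,10.0.0.6:1000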

Related

ActiveMQ Artemis latency issue

I have a cluster with 6 nodes of ActiveMQ Artemis (version 2.27.1): I work with 3 servers, each hosting a master and a slave node.
As part of my test, I send roughly 10,000 messages per minute into a specific queue and watch the consumer latency results on my OpenSearch dashboard.
When I start my nodes in pairs (a master and its corresponding slave on each iteration), it works normally and I don't see any latency.
But when I start the nodes in the order officially recommended by the Artemis documentation, or in any other order, I see latency.
Here are some results of my test, showing the difference between the scenarios:
[image: artemis-latency-test-results]

Druid Tuning Configuration

I am a beginner with Druid and Kafka.
I want to build a real-time interactive data pipeline from Kafka into Druid. I am still confused about which tuning parameters I should change in this configuration.
Thanks in advance.
It depends:
on the size of your Kafka messages
on your producer's capacity and the rate of messages produced to Kafka
on your Kafka servers, partitioning and lots of other factors
on your Druid deployment model (single server or cluster)
and so on.
But the most important thing I see missing here is your task count, i.e. the amount of parallel processing (and since your source is Kafka, that means parallel consumers). Increase it and make sure your Druid host (or your MiddleManager host if it is a cluster) has adequate cores for your tasks. Also make sure you have increased the total number of available tasks in your MiddleManager:
druid.worker.capacity
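For illustration (the values are made up and should be sized to your cores and Kafka partition count), the worker capacity lives in the MiddleManager's runtime.properties:

druid.worker.capacity=8

and the parallelism of the Kafka ingestion itself is set by taskCount in the supervisor spec's ioConfig, for example:

"ioConfig": {
  "topic": "your-topic",
  "taskCount": 4,
  "replicas": 1,
  "taskDuration": "PT1H"
}

Each running task occupies one worker slot, so druid.worker.capacity should be at least taskCount * replicas, with some headroom for compaction or other tasks.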

Consuming # subscription via MQTT causes huge queue on lost connection

I am using Artemis 2.14 and Java 14.0.2 on two Ubuntu 18.04 VMs with 4 cores and 16 GB RAM. My producers send approximately 2,000 messages per second to approximately 5,500 different topics.
When I connect via the MQTT.FX client with certificate-based authorization and subscribe to #, the MQTT.FX client dies after some time, and in the web console I see a queue under # with my client ID that won't be cleared by Artemis. This queue seems to grow until the RAM is 100% used. After some time my Artemis broker restarts itself.
Is this behaviour of Artemis normal? How can I tell Artemis to clean up "zombie" queues after some time?
I already tried these configuration parameters in different ways, but nothing works:
confirmationWindowSize=0
clientFailureCheckPeriod=30000
consumerWindowSize=0
Auto-created queues are automatically removed by default when:
consumer-count is 0
message-count is 0
This is done so that no messages are inadvertently deleted.
However, you can set the corresponding auto-delete-queues-message-count address-setting in broker.xml to -1 to skip the message-count check. You can also adjust auto-delete-queues-delay to configure a delay if needed.
It's worth noting that if you create a subscription like # (which is fairly dangerous) you need to be prepared to consume the messages as quickly as they are produced to avoid accumulation of messages in the queue. If accumulation is unavoidable then you should configure the max-size-bytes and address-full-policy according to your needs so the broker doesn't get overwhelmed. See the documentation for more details on that.
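Putting those settings together, the relevant address-setting in broker.xml could look roughly like this (illustrative values; tune the match and the limits to your own addresses and memory budget):

<address-settings>
   <address-setting match="#">
      <!-- skip the message-count check when auto-deleting queues -->
      <auto-delete-queues-message-count>-1</auto-delete-queues-message-count>
      <!-- wait 60 seconds after the last consumer disconnects before deleting -->
      <auto-delete-queues-delay>60000</auto-delete-queues-delay>
      <!-- illustrative 100 MB cap per address, then page to disk -->
      <max-size-bytes>104857600</max-size-bytes>
      <address-full-policy>PAGE</address-full-policy>
   </address-setting>
</address-settings>

With these values an auto-created queue is removed 60 seconds after its last consumer disconnects, regardless of how many messages it still holds, and an address that grows past 100 MB starts paging to disk instead of filling the heap.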

Kafka performance reduced when adding more consumers or producers

I have 3 servers with a 10 Gb connection between them. I run a Kafka cluster on 2 of the servers and generate test load from the third server.
When I run a single Java producer (on the third server, which is not in the Kafka cluster), sending 1 million messages takes 3 seconds. But when I run a second Java producer (with a different topic), both producers take 6 seconds to send their messages.
I am sure the network connection is not the bottleneck (it is 10 Gb).
So why does this happen, and how can I solve it (I want both producers to take 3 seconds)?
Sounds like you are getting a consistent 333,333 messages/sec out of a two-node Kafka cluster, with ZooKeeper running on the same 2 machines as your 2 Kafka brokers. You don't say what size these messages are, what kind of disks you are using, how much memory you have, whether you are publishing with acks=all, or what programming language you are using (I assume Java), but that actually sounds like a good, consistent result that is probably disk-IO bound on the brokers or CPU bound on your single client machine.
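As a rough illustration of the client-side knobs that usually matter in this kind of test, here is a minimal Java producer sketch that times 1 million sends (broker addresses and topic name are placeholders, and the tuning values are only examples); acks, linger.ms, batch.size and compression are typically the settings that move the needle when a single client machine drives the load:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class ThroughputProbe {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "broker1:9092,broker2:9092"); // placeholder addresses
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.ACKS_CONFIG, "1");               // "all" is safer but slower
        props.put(ProducerConfig.LINGER_MS_CONFIG, "10");         // let the producer batch instead of sending one by one
        props.put(ProducerConfig.BATCH_SIZE_CONFIG, "65536");     // bytes per batch
        props.put(ProducerConfig.COMPRESSION_TYPE_CONFIG, "lz4"); // less network/disk traffic for some CPU cost

        long start = System.currentTimeMillis();
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            for (int i = 0; i < 1_000_000; i++) {
                producer.send(new ProducerRecord<>("test-topic", Integer.toString(i), "payload-" + i));
            }
            producer.flush(); // make sure everything is on the wire before stopping the clock
        }
        System.out.println("Elapsed ms: " + (System.currentTimeMillis() - start));
    }
}

If two such producers on the same machine each halve the observed rate, the single client box (CPU, serialization, compression) is a more likely bottleneck than the 10 Gb link.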

Kafka recommended system configuration

I'm expecting our inflow into Kafka to rise to around 2 TB/day over a period of time. I'm planning to set up a Kafka cluster with 2 brokers (each running on a separate system). What is the recommended hardware configuration for handling 2 TB/day?
As a baseline you could look here: https://docs.confluent.io/4.1.1/installation/system-requirements.html#hardware
You need to know the number of messages you get per second/hour, because this will determine the size of your cluster. For storage, SSDs are not strictly necessary because the system buffers the data in RAM first. Still, you may need reasonably fast disks to ensure that flushing the log to disk does not slow your system down.
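As a rough back-of-the-envelope check (averages only, your peaks will be higher): 2 TB/day is about 2,000,000 MB / 86,400 s ≈ 23 MB/s of sustained ingest. With a replication factor of 2 the cluster writes roughly twice that in total, so each of the 2 brokers handles on the order of 23 MB/s of writes plus replication traffic on the network. Size disks, network and retention for the peak rate and the retention period, not for the daily average.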
I would also recommend using 3 Kafka brokers and 3 or 4 ZooKeeper servers.