Apache Storm with Kafka spout bottleneck

I'm trying to achieve maximum performance with my Storm setup. I'm sending tens of thousands of messages through Kafka, to be received by the Storm topology.
When I look at the Storm UI, I notice that all the messages are going to a single executor rather than being load-balanced across all the executors (see the attached screenshot).
Is there a reason for this, and how can I load-balance the Kafka messages?
Storm UI Screenshot

Since you have 3 partitions, try creating the Kafka spout with a parallelism hint of 3 and the HBase bolt with a parallelism hint of 3. Use partial key grouping on the HBase bolt to load-balance the messages across the bolt tasks on the basis of a key, along the lines of the sketch below.
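A minimal sketch of that wiring, assuming the storm-kafka-client spout and placeholder names (MyHBaseBolt, the "rowKey" field, broker and topic) that you would replace with your own:

    import org.apache.storm.kafka.spout.KafkaSpout;
    import org.apache.storm.kafka.spout.KafkaSpoutConfig;
    import org.apache.storm.topology.TopologyBuilder;
    import org.apache.storm.tuple.Fields;

    // Placeholder broker/topic; use your own cluster and topic here.
    KafkaSpoutConfig<String, String> spoutConfig =
            KafkaSpoutConfig.builder("localhost:9092", "my-topic").build();

    TopologyBuilder builder = new TopologyBuilder();

    // One spout executor per Kafka partition: 3 partitions -> parallelism hint 3.
    builder.setSpout("kafka-spout", new KafkaSpout<>(spoutConfig), 3);

    // Partial key grouping spreads each key over a pair of bolt tasks,
    // balancing load while still routing by key.
    builder.setBolt("hbase-bolt", new MyHBaseBolt(), 3)
           .partialKeyGrouping("kafka-spout", new Fields("rowKey"));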

Related

Kafka Streams vs Flink

I have used Flink to send data from a source to a sink.
My Flink app consumes data from Kafka and sends it to the destination.
The destination is also a Kafka topic, just with a different topic name.
Flink is only used for delivery, without any business logic.
In this case, I think switching from Flink to Kafka Streams would increase throughput, because Flink contributes nothing except delivering data from source to sink. Also, since both my source and sink are Kafka, I think Kafka Streams would be faster for simply delivering the data.
I would appreciate any opinion on my question.
Thanks.
There's no guarantee one will be faster than the other. You still need to do JVM and network tuning.
Either will work, but the limitation of Kafka Streams is that the data must remain in the same Kafka cluster. Flink has no such limitation.
Or you can simply use MirrorMaker for moving data between Kafka topics of different clusters.
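For the pure delivery case described in the question, the Kafka Streams side is only a few lines either way. A minimal pass-through sketch, assuming placeholder topic names "source-topic" and "destination-topic" and a local broker; it copies records as raw bytes with no business logic:

    import java.util.Properties;
    import org.apache.kafka.common.serialization.Serdes;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;
    import org.apache.kafka.streams.StreamsConfig;
    import org.apache.kafka.streams.kstream.Consumed;
    import org.apache.kafka.streams.kstream.Produced;

    Properties props = new Properties();
    props.put(StreamsConfig.APPLICATION_ID_CONFIG, "passthrough-app");
    props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    StreamsBuilder builder = new StreamsBuilder();

    // Read bytes from the source topic and forward them untouched to the destination.
    builder.stream("source-topic", Consumed.with(Serdes.ByteArray(), Serdes.ByteArray()))
           .to("destination-topic", Produced.with(Serdes.ByteArray(), Serdes.ByteArray()));

    KafkaStreams streams = new KafkaStreams(builder.build(), props);
    streams.start();

As noted above, for a job this thin neither framework guarantees more throughput; brokers, serialization, and the network dominate, which is why the tuning matters more than the choice of framework.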

How do I test that exactly-once semantics are working in my Kafka Streams application?

I have a Kafka Streams DSL application, and we have a requirement for exactly-once processing. For that I have added the configuration
streamConfig.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, "exactly_once");
I am using Kafka 2.7.
I have 2 questions:
What is the difference between exactly_once and exactly_once_beta?
How do I test this functionality to be sure my messages are getting processed only once?
Thanks!
exactly_once_beta is an improvement over exactly_once. While exactly_once uses a transactional producer for each stream task (a combination of sub-topology and input partition), exactly_once_beta uses a transactional producer for each stream thread of a Kafka Streams client.
Every producer comes with separate memory buffers, a separate thread, and separate network connections, which might limit scaling the number of input partitions (i.e., the number of tasks). A high number of producers might also cause more load on the brokers. Hence, exactly_once_beta has better scaling characteristics. You can find more details in KIP-447.
Note that exactly_once will be deprecated and exactly_once_beta will be renamed to exactly_once_v2 in Apache Kafka 3.0. See KIP-732 for more details.
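As a concrete illustration, a minimal sketch of enabling the guarantee on Kafka Streams 2.7 (the application id and bootstrap servers are placeholders; exactly_once_beta additionally requires brokers on 2.5 or newer):

    import java.util.Properties;
    import org.apache.kafka.streams.StreamsConfig;

    Properties streamConfig = new Properties();
    streamConfig.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app");
    streamConfig.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

    // EXACTLY_ONCE uses one transactional producer per task;
    // EXACTLY_ONCE_BETA uses one per stream thread (brokers must be >= 2.5).
    streamConfig.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE_BETA);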
For tests you can get inspiration from the tests in the Apache Kafka repo:
https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/EosIntegrationTest.java
https://github.com/apache/kafka/blob/trunk/streams/src/test/java/org/apache/kafka/streams/integration/EOSUncleanShutdownIntegrationTest.java
https://github.com/apache/kafka/blob/trunk/tests/kafkatest/tests/streams/streams_eos_test.py
Basically, you need to create a failover scenario and verify that messages are not produced multiple times to the output topics. Note that messages may be processed multiple times, but the results in the output topics must appear as if they were only processed once. You can find a pretty good talk about exactly-once semantics that also explains the failover scenarios here: https://www.confluent.io/kafka-summit-london18/dont-repeat-yourself-introducing-exactly-once-semantics-in-apache-kafka/
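One way to do that check, as a rough sketch: after killing and restarting an instance mid-run, read the output topic with read_committed isolation and count how often each key appears. The topic name "output-topic", the group id, and the String serdes below are assumptions for illustration:

    import java.time.Duration;
    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import java.util.Properties;
    import org.apache.kafka.clients.consumer.ConsumerConfig;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.clients.consumer.ConsumerRecords;
    import org.apache.kafka.clients.consumer.KafkaConsumer;
    import org.apache.kafka.common.serialization.StringDeserializer;

    Properties verifierProps = new Properties();
    verifierProps.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
    verifierProps.put(ConsumerConfig.GROUP_ID_CONFIG, "eos-verifier");
    verifierProps.put(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG, "earliest");
    // Only read committed (i.e. successfully transacted) records.
    verifierProps.put(ConsumerConfig.ISOLATION_LEVEL_CONFIG, "read_committed");
    verifierProps.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    verifierProps.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);

    try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(verifierProps)) {
        consumer.subscribe(Collections.singletonList("output-topic"));
        Map<String, Integer> countsPerKey = new HashMap<>();
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(10));
        for (ConsumerRecord<String, String> record : records) {
            countsPerKey.merge(record.key(), 1, Integer::sum);
        }
        // For a pass-through style test, each input key should show up exactly once.
        countsPerKey.forEach((key, count) -> System.out.println(key + " -> " + count));
    }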

Multiple Flink pipelines for the same Kafka topic

Background
We have a Kafka topic with a steady stream of data. To process it we have a stateless Flink pipeline that consumes that topic and writes to another topic.
From time to time we have bursts of data that our Flink pipeline is not configured to handle. We don't want to configure our Flink pipeline and cluster to always support the maximum load we can have; we want to scale dynamically according to the load (budget reasons $$$).
Solutions we thought of
One way to do so is to add/remove nodes to the Flink cluster and change the parallelism of the Flink pipeline operators. This would require stopping the Flink job with a savepoint, reconfiguring the parallelism, and restarting with the new parallelism.
This would be great, but we cannot allow ourselves the downtime it produces. We have to scale up/down without downtime.
If we used regular Kafka consumers, it would be as simple as adding a consumer (assuming we have enough Kafka partitions), and Kafka would redistribute the topic partitions among all the consumers.
The Flink Kafka consumer manages the partition assignment and the offsets on its own, which enables exactly-once semantics (which we don't need). The drawback is that a single Flink job always uses all the topic partitions.
We thought we could create another instance of Flink that would subscribe to the same topic with the same group and let Kafka distribute the partitions between them. But for that we would need the Flink Kafka consumer to let Kafka manage which partitions are assigned to which consumer.
What are we looking for
We couldn't find a library that contains such a consumer, or a configuration option in the existing consumer. We could write it ourselves (not that difficult), but if there is an existing solution we'd rather use it.
Are we missing something? Are we misunderstanding something? Is there a better solution?
Thanks!
The most straightforward approach, since you said that at worst you'll need double the capacity, would be to modify your topology so it can write Kafka messages it can't process quickly enough to a second, overflow Kafka topic. Both the input and output Kafka topic names would be configurable. Maybe you would have a threshold backlog delay that automatically triggers this writing, or maybe you would have a flag in the topology that you can set externally while the topology is running. That's a design detail you can work through, and it has operational implications.
This gives you a Flink topology that can handle some maximum number of messages in a timely fashion while writing the rest of the messages that it can't handle to a second Kafka topic. You can then run a second instance of the same Flink topology that reads from that secondary topic and writes, if necessary, to a third topic. If the writing to the overflow topic happens very early in the topology's processing, you can chain several of these instances together via Kafka with minimal latency and without having to reconfigure and restart any topologies.
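A rough sketch of that routing, assuming the legacy FlinkKafkaConsumer/FlinkKafkaProducer connectors and placeholder topic names; the overflow decision here is a simple flag read when the job is built, whereas a real deployment would use a runtime signal such as a backlog metric or a broadcast control stream:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.datastream.DataStream;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    Properties kafkaProps = new Properties();
    kafkaProps.setProperty("bootstrap.servers", "localhost:9092");
    kafkaProps.setProperty("group.id", "main-pipeline");

    DataStream<String> input = env.addSource(
            new FlinkKafkaConsumer<>("input-topic", new SimpleStringSchema(), kafkaProps));

    // Simplified overflow switch: when enabled, records are diverted to the overflow
    // topic very early, before any heavy processing.
    final boolean overflowEnabled = Boolean.getBoolean("pipeline.overflow.enabled");

    DataStream<String> toProcess = input.filter(value -> !overflowEnabled);
    DataStream<String> toOverflow = input.filter(value -> overflowEnabled);

    toOverflow.addSink(
            new FlinkKafkaProducer<>("overflow-topic", new SimpleStringSchema(), kafkaProps));

    toProcess
            .map(String::toUpperCase)   // placeholder for the real stateless processing
            .addSink(new FlinkKafkaProducer<>("output-topic", new SimpleStringSchema(), kafkaProps));

    env.execute("main-pipeline-with-overflow");

A second copy of the same job, configured with "overflow-topic" as its input, can then drain the backlog without touching the primary pipeline.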

Is using Kafka as an input source for Flink a performance bottleneck?

Flink can read from a Kafka topic; is that a performance bottleneck that makes Flink slower overall?
Kafka partitions can scale horizontally to accommodate higher throughput.
A single Kafka partition can only be consumed by one Flink consumer task at a time.
So, if you have only 1 Kafka partition and N+1 Flink executors, then you will have N idle tasks, which could be a bottleneck, sure, but that is a trade-off of having total ordering within a Kafka topic, not necessarily a Flink problem.
Otherwise, you would create your Kafka topics with tens to hundreds of partitions, and Flink would be fine consuming them.
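In that case a common pattern is simply to match the source parallelism to the partition count. A small sketch, assuming a topic "events" with 12 partitions and the legacy FlinkKafkaConsumer connector:

    import java.util.Properties;
    import org.apache.flink.api.common.serialization.SimpleStringSchema;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
    import org.apache.flink.streaming.connectors.kafka.FlinkKafkaConsumer;

    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    Properties props = new Properties();
    props.setProperty("bootstrap.servers", "localhost:9092");
    props.setProperty("group.id", "events-reader");

    env.addSource(new FlinkKafkaConsumer<>("events", new SimpleStringSchema(), props))
       .setParallelism(12)   // one consumer subtask per partition; extra subtasks would sit idle
       .print();

    env.execute("events-reader");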
It depends. If, for example, you have a slow network connection between your Kafka cluster and your Flink cluster, then that will become a performance bottleneck.

How to make Spark Streaming read from Kafka asynchronously

I have one Kafka partition and one Spark Streaming application, on one server with 10 cores. When Spark Streaming gets one message from Kafka, the subsequent processing takes 5 seconds (that is my code). I have found that Spark Streaming reads Kafka messages very slowly, and I'm guessing that when Spark reads a message it waits until that message has been processed, so the reading and the processing are synchronized.
I was wondering whether I can make the Spark reading asynchronous, so that reading from Kafka isn't held back by the subsequent processing. Then Spark would consume data from Kafka very quickly, and I could focus on the slow data processing inside Spark. By the way, I'm using the foreachRDD function.
You can increase the number of partitions in Kafka; that should improve the parallelism. You can also try the direct Kafka approach, which really improves performance when your app is reading from Kafka; see the sketch below.
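A rough sketch of the direct approach with the spark-streaming-kafka-0-10 integration (KafkaUtils.createDirectStream); the broker address, topic name "my-topic", and group id are placeholders. With N Kafka partitions, each micro-batch RDD gets N partitions that Spark processes in parallel, so reading is no longer serialized behind the 5-second processing:

    import java.util.Collections;
    import java.util.HashMap;
    import java.util.Map;
    import org.apache.kafka.clients.consumer.ConsumerRecord;
    import org.apache.kafka.common.serialization.StringDeserializer;
    import org.apache.spark.SparkConf;
    import org.apache.spark.streaming.Durations;
    import org.apache.spark.streaming.api.java.JavaInputDStream;
    import org.apache.spark.streaming.api.java.JavaStreamingContext;
    import org.apache.spark.streaming.kafka010.ConsumerStrategies;
    import org.apache.spark.streaming.kafka010.KafkaUtils;
    import org.apache.spark.streaming.kafka010.LocationStrategies;

    SparkConf conf = new SparkConf().setAppName("direct-kafka-example");
    JavaStreamingContext jssc = new JavaStreamingContext(conf, Durations.seconds(5));

    Map<String, Object> kafkaParams = new HashMap<>();
    kafkaParams.put("bootstrap.servers", "localhost:9092");
    kafkaParams.put("key.deserializer", StringDeserializer.class);
    kafkaParams.put("value.deserializer", StringDeserializer.class);
    kafkaParams.put("group.id", "spark-direct-group");
    kafkaParams.put("auto.offset.reset", "latest");

    JavaInputDStream<ConsumerRecord<String, String>> stream =
            KafkaUtils.createDirectStream(
                    jssc,
                    LocationStrategies.PreferConsistent(),
                    ConsumerStrategies.<String, String>Subscribe(
                            Collections.singletonList("my-topic"), kafkaParams));

    // The processing still happens inside foreachRDD, as in the question,
    // but it is now spread across the RDD's partitions.
    stream.foreachRDD(rdd -> rdd.foreach(record ->
            System.out.println(record.key() + " -> " + record.value())));

    jssc.start();
    jssc.awaitTermination();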