I have a Kafka architecture program and want to stream different partitions in parallel on each consumer instance. Do I use mapNPar, or should I create different consumer instances and then fork them and join them in a for/flatMap?
Which one is recommended to use:
1. A single Kafka stream consuming from multiple topics
2. Different Kafka streams consuming from different topics (I've used this one already with no issues encountered)
Is it possible to achieve #1? And if yes, what are the implications?
And if I use 'EXACTLY_ONCE' settings, what kind of complexities will it bring?
Kafka version: 2.2.0-cp2
Is it possible to achieve #1 (single Kafka stream consuming from multiple topics)?
Yes, you can use StreamsBuilder#stream(Collection<String> topics).
If the data that you want to process is spread across multiple topics, and these multiple topics constitute one single source, then you can use this, but not if you want to process those topics in parallel.
It is like one consumer subscribing to all these topics, which also means one thread consuming all of them. When you call poll(), it returns ConsumerRecords from all the subscribed topics, not just one topic.
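A minimal sketch of that, assuming two hypothetical topics topic-a and topic-b whose records share the same key/value types (one source node reads them all):

import java.util.Arrays;
import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "multi-topic-app"); // hypothetical app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

StreamsBuilder builder = new StreamsBuilder();
// One source node subscribed to both topics: records from all of them
// flow through the same downstream processors, on the same tasks.
KStream<String, String> merged =
        builder.<String, String>stream(Arrays.asList("topic-a", "topic-b"));
merged.foreach((key, value) -> System.out.println(key + " -> " + value));

new KafkaStreams(builder.build(), props).start();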
In Kafka Streams, there is a term called Topology, which is basically an acyclic graph of sources, processors, and sinks. A topology can contain sub-topologies.
Sub-topologies can then be executed as independent stream tasks through parallel threads (Reference).
Since each sub-topology can have its own source, which can be a topic, if you want parallel processing of these topics then you have to break up your graph into sub-topologies, as in the sketch below.
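By contrast, giving each topic its own source creates disconnected sub-topologies, whose tasks can run on separate stream threads. A sketch under the same assumptions (hypothetical topic names and app id):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "parallel-topics-app"); // hypothetical app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, 2); // enough threads to run both sub-topologies in parallel

StreamsBuilder builder = new StreamsBuilder();
// Two disconnected sources -> two sub-topologies -> independent stream tasks.
builder.<String, String>stream("topic-a")
       .foreach((k, v) -> System.out.println("A: " + v));
builder.<String, String>stream("topic-b")
       .foreach((k, v) -> System.out.println("B: " + v));

new KafkaStreams(builder.build(), props).start();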
If I use 'EXACTLY_ONCE' settings, what kind of complexities will it bring?
When messages reach the sink processor in a topology, their source offsets must be committed, where a source can be a single topic or a collection of topics.
Whether it is multiple topics or one topic, the producer needs to send the consumed offsets to the transaction, which is basically a Map<TopicPartition, OffsetAndMetadata> that is committed when the messages are produced.
So, I think it should not introduce any complexities whether it is a single topic with 10 partitions or 10 topics with 1 partition each, because offsets are tracked at the TopicPartition level, not at the topic level.
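Enabling this in Kafka Streams is a single config switch; Streams sends that offset map to the transaction internally. A minimal sketch (StreamsConfig.EXACTLY_ONCE matches the 2.2.0 version mentioned above; the application id and broker address are assumptions):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-app"); // hypothetical app id
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Turns on transactions: consumed offsets are committed to the transaction
// together with the produced records, per TopicPartition.
props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);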
I have 2 instances of my Kafka Streams application consuming from 2 partitions of a single topic.
Will a single partition's data be in only one application instance, or in both? And say one application instance goes down, will I have issues? How will interactive queries solve this?
Do I need to use a GlobalKTable?
Each Kafka Streams application instance will be mapped to one or more partitions, based on how many partitions the input topics have.
If you run 2 instances against an input topic with 2 partitions, each instance will consume from one partition. If one instance goes down, Kafka Streams will rebalance the workload onto the remaining instance, and it will consume from both partitions.
You can refer to the architecture in detail here: https://docs.confluent.io/current/streams/architecture.html
I have two Kafka topics with two partitions each. Their messages are keyed by the same param, id: Integer.
I have two instances of a Kafka Streams application, so each of them would be assigned two partitions (tasks), one per topic.
Now, imagine that the partition holding messages with id = 1 from topic A is assigned to KStreams app instance A, while the partition holding messages with id = 1 from topic B is assigned to app instance B. How can a join of those two KStreams ever work if the data from the topics may not be co-located (as would happen in this example for key id = 1)?
There are ways to do it. If storage is not an issue, or the message frequency is low, you can use a GlobalKTable for one of the topics. It will cost more memory, as all of that topic's partitions will be synced to all instances of the Streams app, but the join then works locally, as sketched below.
https://docs.confluent.io/current/streams/concepts.html#globalktable
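A minimal sketch of the GlobalKTable approach, assuming topic-b is the smaller topic that gets fully replicated to every instance (topic names, types, and the join logic are assumptions):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.GlobalKTable;
import org.apache.kafka.streams.kstream.KStream;
import org.apache.kafka.streams.kstream.Produced;

StreamsBuilder builder = new StreamsBuilder();
KStream<Integer, String> streamA =
        builder.stream("topic-a", Consumed.with(Serdes.Integer(), Serdes.String()));

// Every instance materializes ALL partitions of topic-b locally, so the join
// works no matter which instance owns a given partition of topic-a.
GlobalKTable<Integer, String> tableB =
        builder.globalTable("topic-b", Consumed.with(Serdes.Integer(), Serdes.String()));

streamA.join(tableB,
        (key, value) -> key,                       // look up topic-b's table by the shared id
        (valueA, valueB) -> valueA + "|" + valueB) // hypothetical join result
       .to("joined-output", Produced.with(Serdes.Integer(), Serdes.String()));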
The other way is to use Kafka Streams interactive queries to discover the data on other stream instances.
https://kafka.apache.org/10/documentation/streams/developer-guide/interactive-queries.html
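A sketch of the discovery step, assuming the table side was materialized into a store named "table-b-store" and that streams is the running KafkaStreams instance (both assumptions; Streams only exposes the metadata, the RPC forwarding is yours to build):

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.state.HostInfo;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.apache.kafka.streams.state.StreamsMetadata;

Integer key = 1;                                     // example id to look up
HostInfo thisHost = new HostInfo("localhost", 8080); // this instance's advertised endpoint (assumption)

// Ask Streams which instance hosts the state for this key.
StreamsMetadata metadata =
        streams.metadataForKey("table-b-store", key, Serdes.Integer().serializer());

if (thisHost.equals(metadata.hostInfo())) {
    // The key is local: query the state store directly.
    ReadOnlyKeyValueStore<Integer, String> store =
            streams.store("table-b-store", QueryableStoreTypes.<Integer, String>keyValueStore());
    String value = store.get(key);
} else {
    // The key lives on metadata.hostInfo(): forward the lookup there over
    // your own RPC layer; Kafka Streams only exposes the metadata.
}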
For KStream joins, you need the same number of partitions for both topics as well as the same partitioning strategy. That way, every task reads the matching partitions of both topics, and records with the same key end up co-located (see the sketch after the link below).
A nice reference blog for partitioning: https://medium.com/@anyili0928/what-i-have-learned-from-kafka-partition-assignment-strategy-799fdf15d3ab
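A sketch of a windowed KStream-KStream join under those co-partitioning requirements (topic names, types, and the window size are assumptions):

import java.time.Duration;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.kstream.Consumed;
import org.apache.kafka.streams.kstream.JoinWindows;
import org.apache.kafka.streams.kstream.Joined;
import org.apache.kafka.streams.kstream.KStream;

StreamsBuilder builder = new StreamsBuilder();
KStream<Integer, String> a =
        builder.stream("topic-a", Consumed.with(Serdes.Integer(), Serdes.String()));
KStream<Integer, String> b =
        builder.stream("topic-b", Consumed.with(Serdes.Integer(), Serdes.String()));

// Valid only because topic-a and topic-b are co-partitioned: same partition
// count and same partitioner, so equal keys land in the same task.
KStream<Integer, String> joined = a.join(
        b,
        (va, vb) -> va + "|" + vb,             // hypothetical join result
        JoinWindows.of(Duration.ofMinutes(5)), // join records within 5 minutes of each other
        Joined.with(Serdes.Integer(), Serdes.String(), Serdes.String()));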
I am trying to understand the architecture of the Kafka Streams API and came across this in the documentation:
An application's processor topology is scaled by breaking it into multiple tasks
What are the criteria for breaking up the processor topology into tasks? Is it just the number of partitions in the stream/topic, or something more?
Tasks can then instantiate their own processor topology based on the assigned partitions
Can someone explain what the above means with an example? If tasks are created only for the purpose of scaling, shouldn't they all have the same topology?
Tasks are atomic parallel units of processing.
A topology is divided into sub-topologies (sub-topologies are "connected components" that forward data in memory; different sub-topologies are connected via topics). For each sub-topology, the number of input topic partitions determines the number of tasks that are created. If there are multiple input topics, the maximum number of partitions over all topics determines the number of tasks.
If you want to know the sub-topologies of your Kafka Streams application, you can call Topology#describe(): the returned TopologyDescription can either be printed via toString(), or you can traverse the sub-topologies and their corresponding DAGs.
A Kafka Streams application has one topology that may have one or more sub-topologies. You can find a topology with 2 sub-topologies in the article Data Reprocessing with the Streams API in Kafka: Resetting a Streams Application.
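A quick way to see the split, with hypothetical topic names (two disconnected sources produce two sub-topologies):

import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.Topology;

StreamsBuilder builder = new StreamsBuilder();
builder.stream("topic-a").to("out-a"); // sub-topology 0
builder.stream("topic-b").to("out-b"); // sub-topology 1

Topology topology = builder.build();
// Prints "Sub-topology: 0" and "Sub-topology: 1", each with its own
// sources, processors, and sinks.
System.out.println(topology.describe());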
Currently I have one Kafka topic.
Now I need to run multiple consumers so that messages can be read and processed in parallel.
Is this possible?
I am using Python and the pykafka library.
consumer = topic.get_simple_consumer(consumer_group=b"charlie",
                                     auto_commit_enable=True)
This is taking the same messages in both consumers. I need to process each message only once.
You need to use BalancedConsumer instead of SimpleConsumer:
# Consumers in the same consumer_group divide the topic's partitions among themselves
consumer = topic.get_balanced_consumer(consumer_group=b"charlie",
                                       auto_commit_enable=True)
You should also ensure that the topic you're consuming has at least as many partitions as the number of consumers you're instantiating.
Generally you need multiple partitions and multiple consumers to do this, or something like Parallel Consumer (PC) to sub-divide a single partition.
However, it's recommended to have at least 3 partitions and at least three consumers running in a group, to utilise high availability. You can again use PC to process all these partitions, sub-divided by key, in parallel.
PC solves this directly, by sub-partitioning the input partitions by key and processing each key in parallel.
It also tracks per-record acknowledgement. Check out Parallel Consumer on GitHub: https://github.com/confluentinc/parallel-consumer (it's open source, BTW, and I'm the author).
Yes, you can have multiple consumers reading from the same topic in parallel, provided you use the same consumer group id. Note that the number of partitions of the topic should be greater than or equal to the number of consumers; otherwise some of the consumers will not be assigned any partitions and won't fetch any data.
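For reference, a minimal sketch of this with the Java client (group id, topic name, and broker address are assumptions); run several copies of it and the broker splits the topic's partitions among them:

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        // Same group id across instances => partitions are shared, not duplicated.
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "charlie");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic")); // hypothetical topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("partition=%d offset=%d value=%s%n",
                            record.partition(), record.offset(), record.value());
                }
            }
        }
    }
}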