I have been trying to understand how, in Kafka Streams, a StreamThread switches between the execution of the tasks it is assigned, but could not find the answer online.
Thread pools are well understood in Java: we know we should not block in our code, as this can quickly lead to thread starvation. In other words, a task executes until it finishes, and then the thread picks up whatever else has been submitted to the thread pool.
Hence, in the same spirit, I am wondering: given that tasks read data that continuously arrives through their input partitions, which technically never ends, how does a StreamThread switch between tasks?
This information can be helpful in deciding how many tasks we are prepared to pack onto each StreamThread, depending on what we know about our workload.
The Kafka StreamThread is not analogous to a thread pool. A StreamThread processes one record at a time, from the first Task in the topology down through the completion of the Topology for that one Record.
Once the Record has been processed, a new input Record is fetched from the Input Topics.
Only one StreamThread may process records from any given partition; however, one StreamThread may process multiple partitions. Therefore it is useless to have more StreamThreads than input partitions.
I would recommend over-allocating the number of partitions so that you can increase the number of stream threads as you scale up.
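For illustration only, a rough sketch (an assumption on my part, not something from the original question) of over-provisioning partitions up front with the AdminClient; the topic name, partition count and replication factor are placeholders:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

import java.util.Collections;
import java.util.Properties;

public class OverProvisionPartitionsSketch {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder

        try (AdminClient admin = AdminClient.create(props)) {
            // Create the input topic with 12 partitions even if you start with,
            // say, 2 StreamThreads: the extra partitions leave headroom to raise
            // num.stream.threads (or add instances) later without repartitioning.
            NewTopic input = new NewTopic("input-topic", 12, (short) 1); // placeholder name/counts
            admin.createTopics(Collections.singleton(input)).all().get();
        }
    }
}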
Is it possible for a Kafka consumer to pick up messages only after a time defined in the message, and how can we achieve this in Kafka?
I found a related question, but it didn't help. As I see it, Kafka is based on sequential reads from the file system and can only be used to read topics straight through, preserving message ordering. Am I right?
The same is possible with RabbitMQ.
If I understand the question, you would need to consume the data, deserialize it, and inspect the time field. Then append the record to some priority-queue data structure and have a background timer thread check whether events from this queue should be processed further, so that the Kafka consumer itself is not blocked.
The only downside to this approach is that you then need to worry about processing and committing "shorter time" events that are read by the consumer while you are still waiting on previously consumed "longer time" ones. Otherwise, a restart of your client will drop all events from the in-memory queue and start consuming after the last committed record.
You might be able to work around this using a persistent "outbox pattern" database table, or otherwise by tracking offsets and processed records manually and seeking past any duplicates.
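To make the idea concrete, here is a rough sketch of that in-memory delay approach, assuming the "process at" time can be parsed out of the message payload; the topic name, consumer configs and the extractTimeField helper are placeholders, not a definitive implementation:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.DelayQueue;
import java.util.concurrent.Delayed;
import java.util.concurrent.TimeUnit;

public class DelayedProcessingSketch {

    // Wraps a record together with the "process after" time carried in the message.
    static class DelayedRecord implements Delayed {
        final ConsumerRecord<String, String> record;
        final long processAtMillis; // assumed to come from a time field in the payload

        DelayedRecord(ConsumerRecord<String, String> record, long processAtMillis) {
            this.record = record;
            this.processAtMillis = processAtMillis;
        }

        @Override
        public long getDelay(TimeUnit unit) {
            return unit.convert(processAtMillis - System.currentTimeMillis(), TimeUnit.MILLISECONDS);
        }

        @Override
        public int compareTo(Delayed other) {
            return Long.compare(getDelay(TimeUnit.MILLISECONDS), other.getDelay(TimeUnit.MILLISECONDS));
        }
    }

    public static void main(String[] args) {
        DelayQueue<DelayedRecord> delayQueue = new DelayQueue<>();

        // Background thread: takes records only once their delay has expired,
        // so the Kafka consumer itself is never blocked by the waiting.
        Thread worker = new Thread(() -> {
            while (!Thread.currentThread().isInterrupted()) {
                try {
                    DelayedRecord ready = delayQueue.take(); // blocks until a record is due
                    System.out.println("Processing: " + ready.record.value());
                } catch (InterruptedException e) {
                    Thread.currentThread().interrupt();
                }
            }
        });
        worker.start();

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "delayed-consumer");        // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("events")); // placeholder topic
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    delayQueue.put(new DelayedRecord(record, extractTimeField(record.value())));
                }
                // Note: offsets auto-committed here mean a restart drops whatever is
                // still waiting in the in-memory queue, as discussed above.
            }
        }
    }

    // Hypothetical helper: parse the "process at" timestamp out of the message payload.
    static long extractTimeField(String value) {
        return System.currentTimeMillis(); // placeholder
    }
}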
Let's imagine a simple message-processing pipeline, like the one in the image below:
A group of consumers listens to a topic, picks messages one by one, does some sort of processing and sends them over to the next topic.
Some messages crash the consumer or make it hang forever (in which case a liveness probe kills the consumer after a timeout).
In this case the consumer is not able to commit the offset, so the problematic message gets picked up by another consumer and crashes it as well.
Ideally we want to move the message to a dead letter topic after N such attempts.
This can be achieved by introducing a shared storage:
But this creates coupling between the services and introduces a single point of failure (SPOF): the shared database.
I'm looking for ideas on how to work around this with stateless services.
If this approach fits your context (that's something you should judge, as I'm only offering a suggestion), please consider decoupling consumption from processing.
In your case, the consumer is stopped not because it was unable to read from Kafka, or because the Kafka broker was unable to provide messages, but because the processing of the message was too slow and/or unsuccessful.
The consumer was, in fact, correctly receiving the messages. It was the processing of them that caused it to be declared dead.
First of all, see the KafkaConsumer javadoc section regarding this (just above the constructor summary). The second option is the one quoted here:
2. Decouple Consumption and Processing
Another alternative is to have one or more consumer threads that do
all data consumption and hands off ConsumerRecords instances to a
blocking queue consumed by a pool of processor threads that actually
handle the record processing. This option likewise has pros and cons:
PRO: This option allows independently scaling the number of consumers
and processors. This makes it possible to have a single consumer that
feeds many processor threads, avoiding any limitation on partitions.
CON: Guaranteeing order across the processors requires particular care
as the threads will execute independently an earlier chunk of data may
actually be processed after a later chunk of data just due to the luck
of thread execution timing. For processing that has no ordering
requirements this is not a problem.
CON: Manually committing the position becomes harder as it requires
that all threads co-ordinate to ensure that processing is complete for
that partition.
Essentially, it works like this: the consumer keeps reading and hands the responsibility for the processing and process-timeout management to the processor threads.
Error handling of the message processing would be the responsibility of the processor threads as well. For example, if a timeout expires or an exception occurs, the processor will send the message to your defined "dead" queue, or perform whatever handling of this you wish, without involving the consumer. Regardless of the processor threads' success or failure, the consumer will continue its job and will never be considered dead for not calling poll() within the specified timeout.
You should control the number of messages the consumer retrieves in its poll call so as not to saturate the processors. It's a balance between how fast the processors finish their job, how many messages the consumer retrieves (max.poll.records) at each iteration, and the specified timeout for the consumer.
Decoupled workflow
The first element to mention is the queue (with a limited size, which you should also manage so it doesn't get too full and cause an OOM).
This queue would be the link between the consumer and the processor threads, essentially a buffer that dynamically grows or shrinks depending on the workload at any given time. It would absorb overloads, a bit like a dam or barrier, to draw an analogy.
                                          ----->WORKERTHREAD1
KAFKA <------> CONSUMER ----> QUEUE -----|
                                          ----->WORKERTHREAD2
What you get is a second lag mechanism, this time at the queue level:
1. Kafka Consumer LAG (the messages still to be read from the partition/topic)
2. Queue LAG (received messages still need to be processed)
                                                 --->WORKERTHREAD1
KAFKA <--(LAG)--> CONSUMER ----> QUEUE --(LAG)--|
                                                 --->WORKERTHREAD2
The queue could be some kind of synchronized queue, such as a ConcurrentLinkedQueue, for example. Or you could manage the synchronization yourself with a customized queue.
Essentially, the duties are divided, and the consumer is given the easiest one (as it's the most crucial).
Responsibilities:
Consumer
consume-->send to queue
Workers
read from queue|-->[manage timeout]
               |==>PROCESS MESSAGE ==> send to topic
               |-->[handle failed messages]
You should also handle the case where the processor threads die or deadlock, but such mechanisms are usually already implemented in most thread-pool variants.
I suggest the workers share a single KafkaProducer; the producer is thread safe, and since the output topic would be the same for the whole group, this would also improve performance. Also, from the KafkaProducer javadoc:
The producer is thread safe and sharing a single producer instance
across threads will generally be faster than having multiple
instances.
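Putting the pieces together, here is a rough sketch of the decoupled workflow under the assumptions above; the topic names, queue size, pool size and the process step are placeholders I made up, not part of the original question:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

public class DecoupledConsumerSketch {

    public static void main(String[] args) {
        // Bounded buffer between the consumer and the workers (the "dam").
        BlockingQueue<ConsumerRecord<String, String>> queue = new ArrayBlockingQueue<>(1000);

        // Single producer shared by all workers, as the producer javadoc suggests.
        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps);

        // Worker pool: processing, timeout management and error handling live here.
        ExecutorService workers = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 2; i++) {
            workers.submit(() -> {
                while (!Thread.currentThread().isInterrupted()) {
                    try {
                        ConsumerRecord<String, String> record = queue.take(); // blocks when empty
                        try {
                            String result = process(record.value()); // assumed processing step
                            producer.send(new ProducerRecord<>("output-topic", record.key(), result));
                        } catch (Exception processingFailure) {
                            // Failed or timed-out messages go to a "dead" topic (placeholder name),
                            // without ever involving the consumer thread.
                            producer.send(new ProducerRecord<>("dead-letter-topic", record.key(), record.value()));
                        }
                    } catch (InterruptedException e) {
                        Thread.currentThread().interrupt();
                    }
                }
            });
        }

        // Consumer thread only reads and hands off; it is never blocked by processing,
        // except when the bounded queue is full (back-pressure).
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092"); // placeholder
        consumerProps.put("group.id", "decoupled-consumer");      // placeholder
        consumerProps.put("max.poll.records", "100");             // tune against worker speed
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    queue.put(record); // blocks if the queue is full
                }
            }
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }

    // Hypothetical processing step.
    static String process(String value) {
        return value.toUpperCase();
    }
}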
In summary, each consumer thread feeds n processor threads. Some variants could be:
- 1 consumer - 1 worker (no processing parallelization, just division of duties)
- 1 consumer - 2 workers
- 1 consumer - 4 workers
- 2 consumers - 4 workers (2 for each)
- 2 consumers - 8 workers (4 for each)
...
Read the pros and cons of this mechanism carefully in the javadoc, and judge whether it could be a solution for your specific case.
In my opinion, there's a PRO that isn't reflected in the docs, and it's the root of this answer/suggestion:
Consumption shouldn't be affected by processing. This approach avoids any consumer thread being considered dead due to slow processing of the messages, and offers an extra "safety window" thanks to the queue. I'm not saying that if, for example, all processors fail for every message, or the queue hits its maximum size, the consumer will happily continue as if nothing were wrong; it will in fact eventually be stopped by processing, but much, much later and for bigger reasons that couldn't have been avoided. This approach buys some extra time, or an extra shield, before that happens. Just as a dam can fail if it can't hold any more water.
Well, I hope you take this as a suggestion, and that it is helpful somehow. It may avoid most of the dead-consumer issues you're having. If well managed, it's a good approach for a 24/7 real-time data workflow.
Hi, I am trying to get a bit more of an understanding of the Kafka Streams threading model, and I am looking at this example in the Confluent docs: https://docs.confluent.io/current/streams/architecture.html#example
I understand that this example is for a single 'Kafka Streams app' that, in the first diagram, is deployed on a single machine and allowed to use two threads (configurable). It splits itself across the two threads, leading to 3 separate 'tasks' that, I think, do the same thing as each other; they are just parallelized. That much I think I understand.
My question is: what if I deploy a second, totally different 'Kafka Streams app' with its own unique client id on that same machine and in the same JVM? Will this second Kafka Streams app be able to share the same two threads as the first, or does the first app monopolise the threads it is allowed to use?
Another way of asking this might be: is the minimum number of threads necessary equal to the number of separate Kafka Streams apps running on the machine?
Threads are owned by KafkaStreams instances. Thus, if you create and start multiple KafkaStreams instances, each instance has its own threads -- they are not shared.
Btw: the number of tasks is independent of the number of KafkaStreams instances and the number of threads. The number of tasks depends on the number of partitions of your input topic as well as the structure of your topology DAG.
Also, the number of tasks effectively limits the overall parallelism. Each task is executed by exactly one thread. If you have more threads than tasks, some threads will be idle as there is no task that can be assigned to them.
One more thing: from a parallelism point of view, there is no difference between starting one KafkaStreams instance configured with 3 threads and starting three KafkaStreams instances with one thread each. All available tasks will be evenly distributed over all available threads.
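A small sketch to illustrate the point, assuming a trivial pass-through topology; the application ids, topics, serdes and thread counts below are placeholders of my own choosing:

import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

import java.util.Properties;

public class StreamsParallelismSketch {

    static KafkaStreams buildApp(String appId, int numThreads) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, appId);               // placeholder
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(StreamsConfig.NUM_STREAM_THREADS_CONFIG, numThreads);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic"); // placeholder topology
        return new KafkaStreams(builder.build(), props);
    }

    public static void main(String[] args) {
        // Option A: one instance with three threads.
        buildApp("my-app", 3).start();

        // Option B (equivalent parallelism): three instances with one thread each,
        // all sharing the same application.id so they form one group:
        // for (int i = 0; i < 3; i++) buildApp("my-app", 1).start();

        // A second, unrelated streams app with a different application.id in the
        // same JVM gets its own threads; threads are never shared between
        // KafkaStreams instances.
        buildApp("another-app", 2).start();
    }
}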
We have a problem like this:
We have jobs consistently coming from a queue.
These jobs are of different types, and different types of jobs are processed at different speeds. We process these jobs concurrently, so we don't require jobs to be processed in order; strictly speaking, we don't need a FIFO queue. In fact, blocking one type of job because of the slowness of another type is not acceptable.
If we could create a topic for every type of job, the problem would be solved. But the problem is that we have very many job types, potentially 10k of them.
Currently we are preparing to build a "queue" system for this problem, perhaps on top of a database. But using a DB as a queue is considered an anti-pattern by a lot of people.
Is there any message queue system that can handle 10k topics (or subtopics)?
One of the requirements is that the consumer side needs to change the topics it subscribes to constantly. So if takes a lot of
Thanks for your advice.
What would be the better approach when implementing a Kafka consumer?
The objective is to read from Kafka and write to a DB. Millions of rows.
Approach 1:
Per partition, per consumer: wait for the message to be consumed (i.e. written to the DB), then proceed to the next one in the polling loop.
Approach 2:
Per partition, per consumer: send the record to a worker thread or thread pool to be written to the DB, commit the offset later, and keep on polling. Offset management needs to be taken care of. In this approach, don't wait for the message to be written to the DB; just keep on polling and pass the message to a worker thread.
Any insights on both of them?
Thanks
Approach 1:
This approach is applicable only if you can estimate the message processing time; otherwise it is not recommended.
Problem: The main problem with this approach is keeping the consumer alive. If you wait for the messages to be completely processed before calling poll() again, you have to make sure the consumer stays alive until it calls poll(), because Kafka maintains a property named "session.timeout.ms". The broker/cluster acts on the value of this property: if the consumer is unable to call poll() again within the "session.timeout.ms" window, the broker will mark the consumer dead and kick it out of the group. Then, when the consumer finishes the message processing and calls poll() again, it is considered a new joiner and is again given the set of records starting from the offset as it was before. With this scenario in mind, the consumer will be stuck in an infinite loop in which it never advances its offset.
Possible solution 1: To use this approach you need a well-chosen value for the property "session.timeout.ms", with the following side effects:
1: Value too low: The consumer will be marked dead as described above and will never advance its offset; messages will be processed, but every time it finishes a batch it will receive the previous messages plus the new messages again.
2: Value too high: The broker will be very late in detecting a genuine consumer failure, which will result in record duplication and will affect the overall throughput.
Possible solution 2: (Only valid for version 0.10.1.0 and later) The official fix by Kafka in release 0.10.1.0.
In this approach, two notable entities are introduced: a new property, "max.poll.interval.ms", which sets the maximum delay between client calls to poll(), and a background thread that is responsible for keeping the consumer alive. So, when the consumer calls poll() and then gets busy with message processing, the internal background thread keeps the heartbeat going and, as a result, the consumer stays alive. However, this background thread only does so while the "max.poll.interval.ms" timeout has not expired: it waits for the consumer to call poll() within the "max.poll.interval.ms" window and, if that doesn't happen, it sends a leave-group request and dies itself as well.
Again, the tricky part of this solution is finding a suitable value for "max.poll.interval.ms" (very important: this is the time for which the background thread will keep the heartbeat alive without poll() being called explicitly).
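For illustration, a minimal consumer sketch showing where these properties are set; the values, topic, group id and the writeToDatabase step are only placeholders and would need tuning for your actual workload:

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class Approach1ConfigSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "db-writer");               // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringDeserializer");

        // Heartbeats are sent by the background thread, so this only needs to cover
        // short network hiccups, not the message processing time.
        props.put(ConsumerConfig.SESSION_TIMEOUT_MS_CONFIG, "10000");    // illustrative value
        // Upper bound on the time between two poll() calls, i.e. the time the
        // main loop may spend writing one batch to the database.
        props.put(ConsumerConfig.MAX_POLL_INTERVAL_MS_CONFIG, "300000"); // illustrative value
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "100");        // limit the batch size accordingly

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                for (ConsumerRecord<String, String> record : records) {
                    writeToDatabase(record); // must finish well within max.poll.interval.ms
                }
            }
        }
    }

    // Hypothetical blocking DB write.
    static void writeToDatabase(ConsumerRecord<String, String> record) {
        // ...
    }
}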
Approach 2: Using a worker thread is a good idea, but then you have to maintain an internal queue or some validation for received messages, which can be complex, and you also need to use manual commits instead of auto-commits. For more information about commits, see this and search for the heading "Commits and Offsets".
Problem: The main problem with this approach is keeping track of which messages have been received and which have been processed successfully. As your consumer receives a message, it passes it to the respective worker thread, commits the offset, and moves on to receive more messages. During this process you have to take care of the following issues:
What if a message is received and its offset committed, but later, for whatever reason, the worker thread fails to process the message? How do you get that message again?
What if messages are received by the consumer but there are no free worker threads to process them?
Solution: There are different ways to resolve the above issues. One way is to use an internal queue to hold the messages, together with manual commits that are sent only when a worker thread reports successful processing of a message. However, a very careful implementation is required, because it can lead to complex code and can also result in memory-management or threading issues.
Suggestion: Depending on your requirements, you can use either approach and implement fixes for the possible issues described above. However, I would recommend that a more robust solution is to use partition pause/resume. In a very abstract way, your consumer should do the following steps (a rough sketch follows the list):
1: poll() for messages.
2: Pause all the respective topics/partitions.
3: Assigned messages to worker threads and wait for their processing.
4: Keep calling poll(); as the partitions are paused, no extra messages will be received, while the consumer is kept alive. (Make sure no new topic is subscribed to during this time.)
5: Once all worker threads have reported message processing success/failure, commit the offsets accordingly.
6: Resume all the partitions.
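A rough sketch of those steps, assuming blocking DB writes and a fixed worker pool; the topic name, group id, pool size and the writeToDatabase helper are placeholders:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class PauseResumeSketch {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("group.id", "db-writer");               // placeholder
        props.put("enable.auto.commit", "false");         // offsets are committed manually
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        ExecutorService workers = Executors.newFixedThreadPool(4);

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("input-topic")); // placeholder
            while (true) {
                // 1: poll for messages.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(500));
                if (records.isEmpty()) {
                    continue;
                }

                // 2: pause all currently assigned partitions.
                consumer.pause(consumer.assignment());

                // 3: hand the records over to the worker threads.
                List<Future<?>> results = new ArrayList<>();
                for (ConsumerRecord<String, String> record : records) {
                    results.add(workers.submit(() -> writeToDatabase(record)));
                }

                // 4: keep polling while paused so the consumer stays alive;
                //    paused partitions return no records.
                while (!results.stream().allMatch(Future::isDone)) {
                    consumer.poll(Duration.ofMillis(100));
                }

                // 5: all workers have reported back; commit the offsets of this batch.
                //    (A failed Future could instead route its record to a dead-letter topic.)
                consumer.commitSync();

                // 6: resume the partitions and go back to step 1.
                consumer.resume(consumer.assignment());
            }
        }
    }

    // Hypothetical blocking DB write.
    static void writeToDatabase(ConsumerRecord<String, String> record) {
        // ...
    }
}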
Note: There may be better ways or other possible solutions depending on your scenario and requirements. This is just an idea, one of the possible solutions.