As Kafka has a topic-based pub-sub architecture, how can I handle the One-to-One and Group Messaging parts of a web application using Kafka?
I am using a Spring Boot + Angular stack and a Dockerized Kafka server.
I'll write another answer here.
Based on my experience building a chat service: you only need one topic for all the messages, combined with a well-designed message body.
public class Message {
    private String from; // sender's user id
    private String to;   // recipient's user id, or a group id for group chat
}
Then you can create, say, 100 partitions for this topic and start with two consumers (50 partitions per consumer in the beginning).
If your system later hits a bottleneck, you can easily scale out with more consumers to handle the load, up to one consumer per partition.
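As a rough sketch of the producer side (the topic name "chat-messages" and the inlined JSON value are assumptions of mine, not from the question): keying each record by the to id makes Kafka hash all messages for the same user or group to the same partition, which keeps each conversation ordered.

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ChatProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Keying by the "to" id hashes every message for one user/group
            // to the same partition, preserving per-conversation order.
            producer.send(new ProducerRecord<>("chat-messages", "user-42",
                    "{\"from\":\"user-7\",\"to\":\"user-42\",\"content\":\"hi\"}"));
        }
    }
}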
How do you distribute the messages on the consumer side? I used to send the messages to a mobile app, so every app held a long-lived connection to the server, and the server pushed messages to the app over that channel. For group chat, I kept a Redis cache of all the active users in each group, so I could quickly look up which users belong to a group and send them the messages.
One more thing: keep Kafka stateless, i.e., decoupled from your business logic; it should only act as a messaging system that transfers messages. If you couple your business logic to Kafka, e.g., creating a topic per One-to-One conversation and deleting it when the conversation ends, Kafka will become very messy.
By One-to-One, I suppose you mean one producer and one consumer, i.e., using it as a queue.
This is certainly possible with Kafka. You can have one consumer subscribe to a topic and restrict others by not giving them authorization. See Authorization in Kafka.
Note that once a message is consumed, it is not deleted; rather, its offset is committed so that the same consumer will not consume it again.
By Group Messaging, I suppose you mean one producer > multiple consumers, or multiple producers > multiple consumers.
This is also possible: a producer can produce messages to a topic, and multiple consumers can consume them.
If all the consumers have the same group id, then each consumer in the group gets only a subset of messages.
If they have different group ids then each consumer will get all messages.
Multiple producers also can produce to the same topic.
A consumer can also subscribe to multiple topics.
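To make those rules concrete, here is a minimal sketch (topic and group names are invented for the example). Run two copies with the same group.id and each gets a subset of the partitions, like a queue; give each copy its own group.id and each receives every message, like classic pub-sub.

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class GroupConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        // Same group.id on two instances -> each gets a subset of partitions (queue).
        // Different group.id per instance -> each gets every message (pub-sub).
        props.put("group.id", args.length > 0 ? args[0] : "group-a");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("chat-messages"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    System.out.printf("group=%s partition=%d value=%s%n",
                            props.get("group.id"), record.partition(), record.value());
                }
            }
        }
    }
}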
OK, it's a very broad question, so I'll try to cover some basic information.
Kafka topics are divided into a number of partitions. Partitions allow you to parallelize a topic by splitting the data in a particular topic across multiple brokers — each partition can be placed on a separate machine to allow for multiple consumers to read from a topic in parallel.
So if you use multiple partitions, you can have multiple consumers consuming the topic in parallel.
Consumers are organized into consumer groups for a given topic: each consumer within the group reads from a unique set of partitions, and the group as a whole consumes all messages from the entire topic.
Basically, if you have only one group, a message will not be processed twice within that consumer group, because each record is delivered to only one consumer in the group.
If you need two consumer groups, think about why you need two: do the consumers in the two groups handle different logic?
There is more to it; please check the official documentation, or ask a narrower question.
Related
We have multiple consumers (separate microservices) for our topic, and each event we publish on the topic is intended for only one of those microservices, i.e., only one consumer at a time.
Can someone suggest the best approach to implement this?
E.g., I have partitions 0 and 1 in my Kafka topic, which is being consumed by CG-A and CG-B.
I am publishing something like this:
record-1 for CG-A, then record-2 for CG-B, then record-3 for CG-A.
How do I make sure that CG-A consumes record-1 from its offset?
Producers and consumers are completely decoupled. Your producer cannot send records "to a consumer".
Consumers always read all records from the topic partitions they've been assigned, regardless of what processes produced into them.
If only certain records are meant for certain consumer groups, then that's processing logic unique to your own applications, applied after consumption from Kafka, i.e., add conditional statements to filter those events.
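As a minimal sketch of that filtering (the convention of carrying the intended group name in the record key is invented here; a header or a payload field would work the same way), CG-A simply skips records addressed elsewhere:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class CgaConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "CG-A");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(List.of("events"));
            while (true) {
                for (ConsumerRecord<String, String> record : consumer.poll(Duration.ofMillis(500))) {
                    // The record key carrying the intended group is an invented
                    // convention; the producer must set it on every record.
                    if (!"CG-A".equals(record.key())) {
                        continue; // addressed to another group: skip it
                    }
                    System.out.println("CG-A handling " + record.value());
                }
            }
        }
    }
}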
One application (the producer) publishes messages, and these messages are consumed by another application (with multiple consumers). The producer sends data with a country field, and we will have multiple consumers in our application, each subscribing to a specific country.
From what I have been reading so far, we have 2 approaches for filtering messages:
1. Filter data on the consumer side: the producer adds the country in a message header. Each consumer receives all the data and filters out the country it needs by checking the message header. Not sure if we can/should have multiple consumers with different filters on different countries, or just one consumer that filters for the list of countries while we do the aggregation by country ourselves.
2. One topic with a separate partition per country: we would have a custom partitioner on the producer so it can send each message to a specific partition, and consumers would be directed to the right partition to consume the country-specific messages.
My question is: should we choose option 1 or option 2? We expect to receive hundreds of messages every few seconds.
In my experience, the first approach is typically used.
The second option is problematic. What if you add a new country? You would need to add a partition to the topic, which is possible but not straightforward, and you would also need to change the logic on both the producer and the consumer side. Furthermore, if consumers simply subscribe to the topic, then on failure the partitions are automatically reassigned to the surviving consumers in the group; with manual per-country partition assignment, you would have to handle failures in your own code.
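For the first approach, the producer side could look roughly like this (topic name, payload, and the "country" header name are all assumptions of this sketch):

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CountryProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            ProducerRecord<String, String> record =
                    new ProducerRecord<>("orders", null, "{\"amount\":42}");
            // Tag each record so consumers can filter cheaply without parsing the value.
            record.headers().add("country", "DE".getBytes(StandardCharsets.UTF_8));
            producer.send(record);
        }
        // Each consumer then checks record.headers().lastHeader("country")
        // and skips records for countries it does not handle.
    }
}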
Another approach is to have a topic per country.
One more approach is to publish all the data into one topic and then distribute it to other topics (one per consumer) with a Kafka Streams application. If the requirements change, you only change the implementation of the Kafka Streams app.
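A rough sketch of that Streams fan-out (names invented; it assumes the country code travels in the record key, and the per-country output topics must already exist, since Streams does not create topics used for dynamic routing):

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class CountrySplitter {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "country-splitter"); // invented name
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> all = builder.stream("events");
        // Route each record to a per-country topic, e.g. key "DE" -> "events-DE".
        all.to((key, value, recordContext) -> "events-" + key);

        new KafkaStreams(builder.build(), props).start();
    }
}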
I'm currently designing an application which will have hundreds of log-compacted topics. Each topic is related to a failover group and should have a dynamic (i.e., changeable on demand) set of producers and consumers.
For example, let's say I have 3 failover instances related to topic T1. Each of those failover instances should have the same data/state (eventually consistent), and each of the instances may consume and produce messages on that topic.
As I understand it, I need to assign a different group ID to each consumer/producer in order to have every instance read the topic entirely.
Given that the number of readers and writers for a topic is not fixed, though, how is it possible to avoid reading one's own messages on that topic?
Sure, I could add a source ID to each message and discard a message when the consumer figures out that it produced that message itself, but I'd rather avoid the data transfer entirely.
Producers and consumers are independent processes. If you subscribe to the same topic that's being produced to without some extra processing logic, you'll end up with an infinite loop.
You also cannot have more active consumers in a group than partitions, so the dynamic consumer count will be limited by that.
need to assign a different group ID to each consumer/producer in order to have every instance read the topic entirely
Not necessarily. You've mentioned you have compacted topics, so I assume you are using Kafka Streams. In the Streams API, you can set num.standby.replicas to copy state store data across instances of the same application.id.
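The relevant configuration is only a couple of lines; roughly like this fragment (the application id is invented):

import java.util.Properties;
import org.apache.kafka.streams.StreamsConfig;

Properties props = new Properties();
props.put(StreamsConfig.APPLICATION_ID_CONFIG, "failover-group-t1"); // instances sharing this id form one group
props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// Maintain two warm copies of each state store on other instances of
// the same application.id, so a standby can take over quickly on failure.
props.put(StreamsConfig.NUM_STANDBY_REPLICAS_CONFIG, 2);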
As part of a design decision at my client's site, the components (microservices) involved in the HTTP request-response flow are allowed to produce messages on a Kafka topic, but not allowed to consume messages from a Kafka topic.
Such components (microservices) can read and write the database, talk to other components, and produce messages on a topic, but cannot consume messages from a Kafka topic.
Instead, the design suggests writing separate utilities that consume messages from Kafka topics and store them in a database. Components (microservices) involved in the request-response flow then read that information from the database.
What are the design flaws if such components (microservices) consume Kafka topics directly? Why does the design suggest writing separate utilities to consume the Kafka topics and store the data in a database, so that the components can read that information from the database?
Kafka topics are divided into partitions, and for each consumer group, the partitions are distributed among the various consumers in that group. Each consumer is responsible for consuming the messages in the partitions it gets assigned.
Presumably, your request handling machines are clustered and load balanced. There are two ways you might have those machines subscribe to Kafka topics, and both of those ways are broken:
You could put your request handling machines in different consumer groups. In this case, each one will have to consume all of the messages. That is probably not what you want: typically you want each consumer to pull from the queue and each message to be processed only once. Also, the consumers will be out of sync and will process the messages at different rates.
You could put your request handling machines in the same consumer group. In this case, each one will only have access to the partitions it is assigned, so every machine will see a different message stream. This, of course, is also not what you want: clients would get different results depending on which machine the load balancer directed them to.
If you want all of your request handling machines to pull from the same queue of messages across the whole topic, then they need to communicate with a single consumer that is assigned all the partitions.
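A sketch of that single dispatcher (topic name invented): with assign() rather than subscribe(), this one process explicitly takes every partition, no group rebalancing is involved, and it would then poll and hand work to the request-handling machines itself.

import java.util.ArrayList;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class SingleDispatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // assign() pins this single process to every partition of the topic,
            // so it sees the whole topic; no group.id or rebalancing is needed.
            List<TopicPartition> all = new ArrayList<>();
            consumer.partitionsFor("requests").forEach(
                    p -> all.add(new TopicPartition(p.topic(), p.partition())));
            consumer.assign(all);
            // ... poll here and dispatch work to the request handlers.
        }
    }
}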
If one Kafka consumer application is reading messages from Kafka, the other is not able to read them, and vice versa.
We are running two independent applications: one processes the messages, and the other reads them and puts them into a database.
A message that has been processed by the first application is not available in the second application.
Without seeing the code I can only guess ... :-)
I think you have a topic with only one partition and both consumer applications are in the same consumer group. In that case, only one consumer gets the messages from the topic's single partition.
If you want both applications to receive messages from the same topic, you need to put them into different consumer groups (the group.id consumer property).
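Concretely, the fix is one property in each application's consumer configuration; a fragment like this (group names invented):

import java.util.Properties;

// First application: processes the messages.
Properties processorProps = new Properties();
processorProps.put("bootstrap.servers", "localhost:9092");
processorProps.put("group.id", "message-processor");

// Second application: writes the messages to the database.
Properties dbWriterProps = new Properties();
dbWriterProps.put("bootstrap.servers", "localhost:9092");
dbWriterProps.put("group.id", "db-writer");

// Because the group ids differ, each application independently
// receives every message from the topic.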