Modelling a Kafka cluster

I have an API endpoint that accepts events with a specific user ID and some other data. I want those events broadcast to some external locations, and I wanted to explore using Kafka as a solution for that.
I have the following requirements:
1) Events with the same UserID should be delivered in order to the external locations.
2) Events should be persisted.
3) If a single external location is failing, that shouldn't delay delivery to other locations.
Initially, from some reading I did, it felt like I want to have N consumers where N is the number of external locations I want to broadcast to. That should fulfill requirement (3). I also probably want one producer, my API, that will push events to my Kafka cluster. Requirement (2) should come in automatically with Kafka.
I was more confused regarding how to model the internal Kafka cluster side of things. Again, from the reading I did, it sounds like it's a bad practice to have millions of topics, so having a single topic for each userID is not an option. The other option I read about is having one partition for each userID (let's say M partitions). That would allow requirement (1) to happen out of the box, if I understand correctly. But that would also mean I have M brokers, is that correct? That also sounds unreasonable.
What would be the best way to fulfill all requirements? As a start, I plan on hosting this with a local Kafka cluster.

You are correct that one topic per user is not ideal.
Partition count is not dependent on broker count (a single broker can host many partitions), so the partition-based approach is the better design.
As for requirement (3), that a failing external location shouldn't delay delivery to other locations: that is standard consumer-group behavior, not topic/partition design. Give each external location its own consumer group and it will track its own offsets, so one slow or failing location won't hold the others back.
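A minimal consumer-side sketch of that idea, with hypothetical topic and group names ("user-events", "location-a", "location-b"): one copy of this process runs per external location, each passing its own group.id, so each location tracks its own offsets independently.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class LocationForwarder {
    public static void main(String[] args) {
        // One copy of this process per external location, each with its own group.id,
        // e.g. "location-a", "location-b". Because the groups are independent, a slow
        // or failing location only falls behind on its own offsets.
        String groupId = args[0];

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", groupId);
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // record.key() is assumed to be the UserID, so per-user ordering holds per partition
                    forwardToExternalLocation(groupId, record.key(), record.value());
                }
            }
        }
    }

    private static void forwardToExternalLocation(String location, String userId, String payload) {
        // placeholder for the actual delivery to the external location
        System.out.printf("%s <- user %s: %s%n", location, userId, payload);
    }
}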

Related

Kafka balancing load between multiple tenants

I'm considering Kafka as one of several technologies to serve as a message broker for worker nodes that will eventually send push notifications to users. An important constraint is that I don't want one tenant to monopolize resources such that it inserts a million notification messages and prevents other tenants from receiving their notifications in a reasonable time. In other words, I want each tenant to have their messages processed at about the same rate. My options seem to be either creating a topic for each tenant or a partition for each tenant. Both seem problematic and/or frowned upon.
Creating a topic for each tenant seems like a logistical nightmare. Every time a new tenant gets added to the application, the consumers would somehow have to be notified to subscribe to the new topic.
Creating a partition for each tenant doesn't seem quite as bad, but it seems to be frowned upon. However, based on my understanding of how load is distributed between partitions and consumers, if multiple tenants shared the same partition there is a possibility that one tenant's messages would get stuck behind another's, which is not how I want to balance the load.
What is my best option? Is there a third possibility I'm not considering? Is Kafka not the right tool for the job?
Thanks!
If you let multiple "tenants" share a partition, your fear of one tenant hijacking a partition might come true. In that case, you may have no choice other than to create a topic per tenant. How could you handle the administration of that?
You could set auto.create.topics.enable to true so that a tenant could create a topic just by sending a message to it.
Registering dynamically created topics with consumers is not complicated if your topic names follow a pattern. Your consumers should subscribe to the topics that match a given pattern:
public void subscribe(java.util.regex.Pattern pattern)
Subscribe to all topics matching specified pattern to get dynamically assigned
partitions. The pattern matching will be done periodically against topics
existing at the time of check.
How quickly consumers can detect new topics is configurable using metadata.max.age.ms (default is 5 minutes).
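For example, a minimal sketch of a consumer subscribing by pattern (the "tenant-" topic prefix, group name, and broker address are assumptions, and metadata.max.age.ms is lowered so newly created tenant topics are picked up quickly):

import java.time.Duration;
import java.util.Collection;
import java.util.Properties;
import java.util.regex.Pattern;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class TenantPatternConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "notification-workers");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("metadata.max.age.ms", "30000"); // pick up newly created tenant topics within ~30s

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            // subscribe to every topic matching the tenant naming pattern
            consumer.subscribe(Pattern.compile("tenant-.*"), new ConsumerRebalanceListener() {
                @Override public void onPartitionsRevoked(Collection<TopicPartition> partitions) { }
                @Override public void onPartitionsAssigned(Collection<TopicPartition> partitions) { }
            });
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    // record.topic() tells you which tenant this message belongs to
                    process(record.topic(), record.value());
                }
            }
        }
    }

    private static void process(String tenantTopic, String value) {
        // placeholder for the actual notification delivery
        System.out.println(tenantTopic + ": " + value);
    }
}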
If you are going to create thousands of topics, though, you might want to check the performance impact first.
One solution that I can think of, assuming you are using AWS:
[topic1] --> [kafka consumer] --> [s3://bucket/tenant1] --> Listener --> Lambda (non-Java)
                              --> [s3://bucket/tenant2] --> Listener --> Lambda (non-Java)
                              --> [s3://bucket/tenant3] --> Listener --> Lambda (non-Java)
On S3, create a folder per tenant and configure an S3 event listener at the tenant-folder level.
On the topic, have a Kafka consumer that dumps batches of tenant messages into the corresponding tenant folder (so some files will contain 1 message, some 100).
Since Kafka is very fast (on the order of 20k 800-byte messages/sec can be dequeued), all you have to do is implement the S3-triggered Lambda (in Go/Python/Node.js, not Java) and get the work done.
You may object that under high load the overall throughput will drop significantly because we are now writing to S3 (which averages around 300 messages/sec). But remember that you are writing in batches: by the time you complete the first write, enough messages have accumulated in the topic that they all go into one file in the next iteration of the S3 write. So my wild guess is that overall throughput may decrease, but not drastically.
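A rough sketch of the consumer/dumper side of this idea, assuming the AWS SDK v2 for Java and placeholder names for the topic, bucket, and key layout (the message key is assumed to carry the tenant ID):

import java.time.Duration;
import java.util.Collections;
import java.util.HashMap;
import java.util.Map;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import software.amazon.awssdk.core.sync.RequestBody;
import software.amazon.awssdk.services.s3.S3Client;
import software.amazon.awssdk.services.s3.model.PutObjectRequest;

public class TenantS3Dumper {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "s3-dumper");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        S3Client s3 = S3Client.create();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("topic1")); // message key = tenant id
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
                // group the whole polled batch by tenant: one S3 object per tenant per batch
                Map<String, StringBuilder> perTenant = new HashMap<>();
                for (ConsumerRecord<String, String> record : records) {
                    perTenant.computeIfAbsent(record.key(), t -> new StringBuilder())
                             .append(record.value()).append('\n');
                }
                for (Map.Entry<String, StringBuilder> entry : perTenant.entrySet()) {
                    // lands under s3://notifications-bucket/<tenant>/, where the folder listener fires
                    String key = entry.getKey() + "/" + UUID.randomUUID() + ".txt";
                    s3.putObject(PutObjectRequest.builder()
                                    .bucket("notifications-bucket")
                                    .key(key)
                                    .build(),
                                 RequestBody.fromString(entry.getValue().toString()));
                }
            }
        }
    }
}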

Kafka: multiple consumers in the same group

Let's say I have a Kafka cluster with several topics spread over several partitions. Also, I have a cluster of applications acting as clients for Kafka. Each application in that cluster has a client that is subscribed to the same set of topics, which is identical across the whole cluster. Also, each of these clients shares the same Kafka group ID.
Now, speaking of commit mode: I really do not want to specify offsets manually, but I do not want to use autocommit either, because I need to do some handling after I receive my data from Kafka.
With this setup, I expect the "same data received by different consumers" problem to occur, because I do not specify an offset before reading (consuming), and I read data concurrently from different clients.
Now, my question: what are the solutions to get rid of multiple reads? Several options come to my mind:
1) Exclusive (sequential) Kafka access. Until one consumer has committed its read, no other consumer accesses Kafka.
2) Somehow specify the offset before each read. I do not even know how to do that given that a read might fail (and the offset will not be committed); we would need some complicated distributed offset storage.
I'd like to ask people experienced with Kafka to recommend something to achieve the behavior I need.
Every partition is consumed by only one client within a consumer group: another client with the same group ID won't be assigned that partition, so concurrent reads of the same data won't occur...
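For the commit-mode part of the question, a minimal sketch (topic name, group name, and handling logic are placeholders): disable auto-commit and call commitSync() only after your own handling has finished, so a failure before the commit means those records are redelivered to whichever consumer in the group is assigned the partition next.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class ManualCommitWorker {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "my-shared-group");   // same group across the whole application cluster
        props.put("enable.auto.commit", "false");   // commit only after handling succeeds
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("my-topic"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    handle(record.value());         // your post-receive handling
                }
                consumer.commitSync();              // mark this batch as done for the group
            }
        }
    }

    private static void handle(String value) {
        // placeholder for application-specific handling
        System.out.println(value);
    }
}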

Apache Kafka isolate different producers

I'm working on a project where different producers (each one representing a different customer) can send events to my service.
This service is responsible for receiving those events and storing them in an intermediate Kafka topic; later we fetch and process those events.
The problem is that one customer can flood events and affect the processing of other customers' events. I'm trying to find the best way to create a level of isolation between different customers.
So far, I was able to solve this by creating a different topic for each customer.
Although this solution temporarily solved the issue, it seems that Kafka is not designed to handle a huge number of topics (100k+) well. As our number of producers (customers) grew, we started to experience that a controlled restart of a single broker takes up to a few hours.
Can anyone suggest a better way to create a level of isolation between producers?
You can take a look at Kafka quotas, which are enforced at the broker level. By configuring each producer with a different user / client-id, you can achieve some level of limiting (so that one producer does not flood the others).
See https://kafka.apache.org/documentation.html#design_quotas
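On the producer side this amounts to little more than giving each customer its own client.id; the quota values themselves (e.g. producer_byte_rate per client) are configured on the brokers as described in the linked documentation. A minimal sketch with hypothetical names:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class CustomerProducerFactory {
    // One producer per customer, each with a distinct client.id, so broker-side
    // quotas can throttle that customer independently of the others.
    public static KafkaProducer<String, String> forCustomer(String customerId) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("client.id", "customer-" + customerId);
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        return new KafkaProducer<>(props);
    }

    public static void main(String[] args) {
        try (KafkaProducer<String, String> producer = forCustomer("acme")) {
            producer.send(new ProducerRecord<>("events", "acme", "some event payload"));
        }
    }
}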
With the number (100k+) that you mentioned, I think you will probably need to solve this issue in the service that sits in front of Kafka.
Kafka can most probably (without knowing exact numbers) handle the load that you throw at it, but there is a limit to the number of partitions per broker that can be handled in a performant way. As usual there are no fixed limits for this, but I'd say the number of partitions per broker is more in the lower 4-figures, so unless you have a fairly large cluster you probably have many more than that. This can lead to longer restart times as all these partitions have to be recovered. What you could try is to experiment with the num.recovery.threads.per.data.dir parameter and set this higher, which could bring your restart times down.
I'd recommend consolidating topics to get the number down, though, and implementing some sort of flow control in the service that your customers talk to; maybe add a load balancer to be able to scale that service.

Correlating in Kafka and dynamic topics

I am building a correlated system using Kafka. Suppose there's a service A that performs data processing, and there are thousands of clients B that submit jobs to it. Bs are short-lived: they appear on the network, push their data to A, and then two important things happen:
B will immediately receive a status from A;
B will then either drop out completely, stay online to receive further status updates, or sporadically pop back on to check the status.
(This is not dissimilar to grid computing or MPI.)
Both points should be achieved using the well-known concept of a correlationId: B possesses a unique ID (a UUID in my case), which it sends to A in the headers; A, in turn, uses it as a Reply-To topic to send status updates to. This means topics have to be created on the fly; they can't be predetermined.
I have auto.create.topics.enable switched on, and it indeed creates topics dynamically, but existing consumers are not aware of them and need to be restarted [to fetch topic metadata, I suppose, if I understood the docs right]. I also checked the consumer's metadata.max.age.ms setting, but it doesn't seem to help, even when I set it to a very low value.
As far as I've read, this is as yet unanswered (kafka filtering/Dynamic topic creation, kafka consumer to dynamically detect topics added, Can a Kafka producer create topics and partitions?) or answered unsatisfactorily.
As there are hundreds of As and thousands of Bs, I can't possibly use shared topics or anything like that, lest I overload my network. I could use Kafka's AdminTools, or whatever they're called, to pre-create topics, but I find that somewhat silly (even though I've seen real-life examples of people using them to talk to the Zookeeper and Kafka infrastructure itself).
So the question is: is there a way to dynamically create Kafka topics such that both consumer and producer become aware of them without being restarted? And, in the worst case, will AdminTools really help, and on which side must I use them: A or B?
Kafka 0.11, Java 8
UPDATE
Creating topics with AdminClient doesn't help for whatever reason; consumers still throw LEADER_NOT_AVAILABLE when I try to subscribe.
OK, so I'll answer my own question.
1) Creating topics with AdminClient works only if it is done before the corresponding consumers are created (a sketch is below).
2) I changed my topology, taking 1) into account and introducing an exchange of correlation IDs in message headers (same as in JMS). I also had to implement certain topology-management methodologies, grouping Bs into containers.
It should be noted that, as many people have said, this only works when Bs are in single-consumer groups and listen to topics with 1 partition.
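For reference, a minimal sketch of point 1): pre-creating the reply topic with AdminClient before the corresponding consumer is created (the "reply-" naming, broker address, and single-partition/replication-factor-1 settings are assumptions):

import java.util.Collections;
import java.util.Properties;
import java.util.UUID;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.NewTopic;

public class ReplyTopicBootstrap {
    public static String createReplyTopic() throws Exception {
        String correlationId = UUID.randomUUID().toString();
        String topic = "reply-" + correlationId;

        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // 1 partition, replication factor 1: a single-consumer reply topic
            NewTopic replyTopic = new NewTopic(topic, 1, (short) 1);
            admin.createTopics(Collections.singleton(replyTopic)).all().get(); // block until created
        }
        // only now create the consumer for 'topic' and subscribe to it
        return topic;
    }
}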
To get some idea of the work I'm into, you might have a look at the middleware framework I've been working on: https://github.com/ikonkere/magic.
Creating an unbounded number of topics is not recommended. I'd advise redesigning your topology/system.
I've thought of making dynamic topics myself, but then realized that ZooKeeper will eventually fail as it runs out of memory due to stale topics (imagine how many topics could have been created a year from now). Maybe this could work if you make sure you have some upper bound on the topics ever created. Overall, an administrative headache.
If you look up using Kafka for request/response, you will find others also saying it is awkward to do so (Does Kafka support request response messaging).

apache-kafka with 100 millions of topics

I'm trying to replace RabbitMQ with Apache Kafka, and while planning I bumped into several conceptual planning problems.
First, we are using RabbitMQ with a per-user queue policy, meaning each user uses one queue. This suits our needs because each user represents some job to be done for that particular user, and if that user causes a problem, the queue will never cause a problem for other users, because the queues are separated. ("Problem" meaning: messages in the queue are dispatched to the users via HTTP requests. If a user refuses to receive a message (server down, perhaps?), it goes back into a retry queue, which results in no loss of messages (unless the queue goes down).)
Now, Kafka is fault-tolerant and failure-safe because it writes to disk.
And that's exactly why I am trying to bring Kafka into our structure.
But there are problems with my planning.
First, I was thinking of creating as many topics as there are users, meaning each user would have their own topic. (What problem will this cause? My max estimate is that I will have around 1~5 million topics.)
Second, if I decide to go for topics based on operation and partition by a random hash of user IDs: if there is a problem with one user not consuming messages at the moment, will all the users in that partition have to wait? What would be the best way to structure this situation?
So, in conclusion: 1~5 million users, and we do not want one user to block a large number of other users from being processed. Having a topic per user would solve this issue, but it seems like there might be an issue with ZooKeeper if such a large number gets in (is this true?).
What would be the best solution for structuring this, considering scalability?
First, I was thinking of creating as many topics as there are users, meaning each user would have their own topic (what problem will this cause? My max estimate is that I will have around 1~5 million topics)
I would advise against modeling like this.
Google around for "kafka topic limits", and you will find the relevant considerations for this subject. I think you will find you won't want to make millions of topics.
Second, if I decide to go for topics based on operation and partition by a random hash of user IDs
Yes, have a single topic for these messages and then route them based on the relevant field, like user_id or conversation_id. That field can be carried on the message and used as the ProducerRecord key, which determines which partition in the topic the message is destined for. I would not include the operation in the topic name, but in the message itself.
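For example (topic name, broker address, and payload are placeholders), the producer side could look like this, with user_id as the record key so all of a user's messages land in the same partition and therefore stay in order:

import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class EventPublisher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            String userId = "user-42";                // the user_id field from the message
            String payload = "example event payload"; // the operation lives inside the payload, not the topic
            // key = user_id: the default partitioner hashes the key, so every message
            // for this user goes to the same partition and is therefore ordered
            producer.send(new ProducerRecord<>("user-events", userId, payload));
        }
    }
}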
if there is a problem with one user not consuming messages at the moment, will all the users in that partition have to wait? What would be the best way to structure this situation?
This depends on how the users are consuming messages. You could set up a timeout, after which the message is routed to some "failed" topic. Or send messages to users UDP-style, without acks. There are many ways to model this, and it's tough to offer advice without knowing how your consumers are forwarding messages to your clients.
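As one concrete sketch of the timeout idea (topic names, the 5-second deadline, and the delivery call are all placeholders): consume, attempt delivery with a deadline, and reroute to a "failed" topic when it can't be delivered, so one stuck user doesn't block the rest of the partition.

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import java.util.concurrent.CompletableFuture;
import java.util.concurrent.ExecutionException;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class DeliveryWorker {
    public static void main(String[] args) {
        Properties consumerProps = new Properties();
        consumerProps.put("bootstrap.servers", "localhost:9092");
        consumerProps.put("group.id", "delivery-workers");
        consumerProps.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        consumerProps.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Properties producerProps = new Properties();
        producerProps.put("bootstrap.servers", "localhost:9092");
        producerProps.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        producerProps.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps);
             KafkaProducer<String, String> producer = new KafkaProducer<>(producerProps)) {
            consumer.subscribe(Collections.singletonList("user-events"));
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    try {
                        // attempt delivery with a deadline
                        CompletableFuture.runAsync(() -> deliverToUser(record.key(), record.value()))
                                         .get(5, TimeUnit.SECONDS);
                    } catch (TimeoutException | ExecutionException | InterruptedException e) {
                        // delivery failed or timed out: park it on the "failed" topic instead of blocking
                        producer.send(new ProducerRecord<>("user-events-failed", record.key(), record.value()));
                    }
                }
            }
        }
    }

    private static void deliverToUser(String userId, String payload) {
        // placeholder: e.g. an HTTP push to wherever the user is connected
    }
}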
Also, if you are using Kafka Streams, make note of the StreamPartitioner interface. This interface appears in KStream and KTable methods that materialize messages to a topic and may be useful in chat applications where you have clients idling on a specific TCP connection.