I am new to Kafka and Micronaut and I do not understand the usage of @KafkaKey. What I found on the internet is:
The Kafka key can be specified by providing a parameter annotated with
@KafkaKey. If no such parameter is specified the record is sent with a null key.
So what exactly does it mean? How will it affect me if I do not use it?
The most important effect of Kafka message keys is partitioning. For example, if the key chosen was a user id, then all data for a given user would be sent to the same partition. If you don't specify a key, Kafka uses a round-robin strategy to distribute messages.
Kafka preserves order within a partition. When you specify a key, every message with that key is bound to the particular partition associated with it. Since the order of messages within a partition is preserved, specifying a key lets you preserve message order. This is particularly useful if you are working with state machines.
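To make this concrete, here is a minimal Micronaut sketch of a producer interface using @KafkaKey (the interface name, topic names, and method names are made up for illustration):

import io.micronaut.configuration.kafka.annotation.KafkaClient;
import io.micronaut.configuration.kafka.annotation.KafkaKey;
import io.micronaut.configuration.kafka.annotation.Topic;

@KafkaClient
public interface UserEventClient {

    // The userId parameter becomes the record key, so all events for the
    // same user land on the same partition and keep their relative order.
    @Topic("user-events")
    void publishEvent(@KafkaKey String userId, String event);

    // No @KafkaKey parameter: the record is sent with a null key and
    // Kafka spreads these messages across partitions.
    @Topic("audit-log")
    void publishAudit(String event);
}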
I am trying to debug an issue, for which I am trying to prove that each distinct key only goes to 1 partition if the cluster is not rebalancing.
So I was wondering, for a given topic, is there a way to determine which partition a key is sent to?
As explained in the DefaultPartitioner source code:
You need the byte[] keyBytes (assuming the key isn't null); then, using org.apache.kafka.common.utils.Utils, you can run the following:
Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
For string or JSON keys, the key bytes are the UTF-8 encoding of the string, and the Utils class has helper functions to get that.
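For example, a small standalone sketch for a string key (the key and partition count are invented; the partition count must match the topic's actual number of partitions):

import java.nio.charset.StandardCharsets;
import org.apache.kafka.common.utils.Utils;

public class PartitionForKey {
    public static void main(String[] args) {
        String key = "user-42";   // illustrative key
        int numPartitions = 12;   // illustrative partition count of the topic
        byte[] keyBytes = key.getBytes(StandardCharsets.UTF_8);
        // The same computation the DefaultPartitioner performs for a keyed record
        int partition = Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
        System.out.println("key '" + key + "' maps to partition " + partition);
    }
}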
For Avro, such as Confluent-serialized values, it's a bit more complicated (a magic byte, then a schema ID, then the data). See Wire format.
In the Kafka Streams API, you have a ProcessorContext available in your Processor#init, which you can store a reference to and then access in your Processor#process method, e.g. ctx.recordMetadata().get().partition() (recordMetadata() returns an Optional).
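A minimal sketch of that pattern, assuming the newer org.apache.kafka.streams.processor.api interfaces (the class name is made up):

import org.apache.kafka.streams.processor.api.Processor;
import org.apache.kafka.streams.processor.api.ProcessorContext;
import org.apache.kafka.streams.processor.api.Record;

public class PartitionLoggingProcessor implements Processor<String, String, String, String> {
    private ProcessorContext<String, String> context;

    @Override
    public void init(ProcessorContext<String, String> context) {
        this.context = context; // keep the reference for use in process()
    }

    @Override
    public void process(Record<String, String> record) {
        // recordMetadata() is empty for records that weren't read directly from a topic
        context.recordMetadata().ifPresent(meta ->
                System.out.println("key=" + record.key() + " came from partition " + meta.partition()));
        context.forward(record);
    }
}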
only goes to 1 partition
This isn't a guarantee. Hashes can collide.
It makes more sense to say that a given key isn't in more than one partition.
if the cluster is not rebalancing
Rebalancing doesn't change which partition a record is in, either; it only reassigns partitions among the consumers in the group.
When you send a message, the partition is determined by the following class:
https://github.com/apache/kafka/blob/trunk/clients/src/main/java/org/apache/kafka/clients/producer/internals/DefaultPartitioner.java
If you want to change the logic, implement the org.apache.kafka.clients.producer.Partitioner interface and set the producer config 'partitioner.class'.
Reference document:
https://kafka.apache.org/documentation/#producerconfigs
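A sketch of what such a partitioner could look like (the class name and routing rule are invented for illustration):

import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class FirstLetterPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        if (keyBytes == null) {
            return 0; // fallback for unkeyed records in this sketch
        }
        // Invented rule: keys starting with "A" are pinned to partition 0,
        // everything else is hashed the way the DefaultPartitioner would.
        if (key instanceof String && ((String) key).startsWith("A")) {
            return 0;
        }
        return Utils.toPositive(Utils.murmur2(keyBytes)) % numPartitions;
    }

    @Override
    public void close() { }
}

It would then be registered on the producer with something like props.put(ProducerConfig.PARTITIONER_CLASS_CONFIG, FirstLetterPartitioner.class.getName());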
So I have a Kafka topic with multiple partitions, and I'm producing messages to it. I want my messages to be partitioned based on user id. I can achieve this either by using the user id as the message key or by writing a custom partitioner. How do I figure out which is the right solution, and what are the pros and cons?
As you know, using the user id as the key guarantees that messages with the same user id are always delivered to the same partition, but you can't decide which partition that is: the default partitioner computes a hash of the key modulo the number of partitions to pick the destination partition.
If your application needs messages with a specific user id to go to a specific partition (e.g. you want user ids beginning with "A" to go to partition 0), you need to write a custom partitioner, much like the FirstLetterPartitioner sketch above.
If you have no such restrictions, I think the default partitioner with the user id as key works fine for you.
In either case, you get information about the partition both after sending and on receiving.
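For instance (assuming an already configured KafkaProducer<String, String> named producer and a subscribed consumer; the topic and variable names are illustrative):

import java.time.Duration;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.clients.producer.RecordMetadata;

// Producer side: the returned RecordMetadata tells you where the record landed.
RecordMetadata meta = producer.send(
        new ProducerRecord<>("user-events", userId, payload)).get();
System.out.println("written to partition " + meta.partition());

// Consumer side: every ConsumerRecord carries the partition it was read from.
for (ConsumerRecord<String, String> rec : consumer.poll(Duration.ofSeconds(1))) {
    System.out.println("read from partition " + rec.partition());
}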
All, forgive me, I am just learning Apache Kafka. While reading the Kafka documentation, I came across a phrase called the semantic partition function.
As the documentation says:
Producers publish data to the topics of their choice. The producer is
responsible for choosing which record to assign to which partition
within the topic. This can be done in a round-robin fashion simply to
balance load or it can be done according to some semantic partition
function (say based on some key in the record). More on the use of
partitioning in a second!
What does "semantic partition function" mean in Kafka? So far I haven't found anything more about it in the documentation. Could someone please explain it in more detail? Thanks.
When the producer doesn't specify a key for messages, the round-robin fashion is used. When a key is specified, the DefaultPartitioner simply computes a hash of the key (modulo the number of partitions). If you want, you can use your own partitioner class. The documentation just means that the "semantic" for choosing the destination partition is up to you: you can develop the "function" (really a partitioner class) yourself. For example, instead of using the Kafka key of the message, you could have a payload, say a JSON document, with some data, and use one of those fields to compute the right destination partition.
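A sketch of such a "semantic" partitioner (everything here is invented for illustration; it assumes the value is a plain string whose first field looks like "region=EU;..."):

import java.nio.charset.StandardCharsets;
import java.util.Map;
import org.apache.kafka.clients.producer.Partitioner;
import org.apache.kafka.common.Cluster;
import org.apache.kafka.common.utils.Utils;

public class RegionPartitioner implements Partitioner {

    @Override
    public void configure(Map<String, ?> configs) { }

    @Override
    public int partition(String topic, Object key, byte[] keyBytes,
                         Object value, byte[] valueBytes, Cluster cluster) {
        int numPartitions = cluster.partitionsForTopic(topic).size();
        // Ignore the record key; route on a field extracted from the payload instead.
        String payload = new String(valueBytes, StandardCharsets.UTF_8);
        String region = payload.split(";")[0]; // e.g. "region=EU"
        return Utils.toPositive(Utils.murmur2(region.getBytes(StandardCharsets.UTF_8))) % numPartitions;
    }

    @Override
    public void close() { }
}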
All of the examples of Kafka producers show the ProducerRecord's key/value pair as not only being the same type (all examples show <String, String>), but the same value. For example:
producer.send(new ProducerRecord<String, String>("someTopic", Integer.toString(i), Integer.toString(i)));
But in the Kafka docs, I can't seem to find where the key/value concept (and its underlying purpose/utility) is explained. In traditional messaging (ActiveMQ, RabbitMQ, etc.) I've always fired a message at a particular topic/queue/exchange. But Kafka is the first broker that seems to require key/value pairs instead of just a regular ol' string message.
So I ask: What is the purpose/usefulness of requiring producers to send KV pairs?
Kafka uses the abstraction of a distributed log that consists of partitions. Splitting a log into partitions allows the system to scale out.
Keys are used to determine the partition within a log to which a message gets appended, while the value is the actual payload of the message. The examples are actually not very "good" in this regard; usually you would have a complex type as value (like a tuple type or a JSON document or similar) and you would extract one field as the key.
See: http://kafka.apache.org/intro#intro_topics and http://kafka.apache.org/intro#intro_producers
In general the key and/or value can be null, too. If the key is null, a partition is selected in a round-robin fashion. If the value is null, it can have special "delete" semantics in case you enable log compaction instead of the log retention policy for a topic (http://kafka.apache.org/documentation#compaction).
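To illustrate both points (assuming a KafkaProducer<String, String> named producer; the topic, key, and JSON fields are made up):

import org.apache.kafka.clients.producer.ProducerRecord;

// A more typical call than the String/String examples: the value is a
// serialized complex object and one of its fields doubles as the key.
String userId = "user-42";
String payloadJson = "{\"userId\":\"user-42\",\"action\":\"login\"}";
producer.send(new ProducerRecord<>("user-events", userId, payloadJson));

// A null value (a "tombstone") marks the key for deletion once log
// compaction is enabled on the topic.
producer.send(new ProducerRecord<>("user-events", userId, null));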
Late addition... Specifying the key so that all messages with the same key go to the same partition is very important for proper ordering of message processing if you will have multiple consumers in a consumer group on a topic.
Without a key, two related messages could end up in different partitions and be processed by different consumers in the group out of order.
Another interesting use case
We could use the key attribute in Kafka topics for sending user_ids and then plug in a consumer to fetch streaming events (events stored in the value attributes). This could allow you to process a user's full event-sequence history for creating features in your machine learning models.
I still have to find out if this is possible or not. Will keep updating my answer with further details.
I am trying to send a message to the KafkaProducer using a ProducerRecord.
new ProducerRecord<>(topicName, messageKey, message)
This uses the DefaultPartitioner, which will use a hash of the key to ensure that all messages for the same key go to the same partition.
What is the difference between this and using a custom partitioner? I assume a custom partitioner can also be used to send messages to the same partition based on the key.
The default partitioning strategy is
If a partition is specified in the record, use it
If no partition is specified but a key is present choose a partition based on a hash of the key
If no partition or key is present choose a partition in a round-robin fashion
(This is pulled from the DefaultPartitioner source code)
A custom partitioner just lets you set your own strategy. So you could, for example, assign partitions randomly, or, if you somehow have prior knowledge of how large each partition will get, assign them based on that. The "default" part of DefaultPartitioner mostly refers to the round-robin strategy it falls back to. I'd imagine in most/all situations options 1 and 2 would be considered the norm.
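The three branches of the default strategy above map directly onto the ProducerRecord constructors (assuming a configured KafkaProducer<String, String> named producer and a String payload; the names are illustrative):

import org.apache.kafka.clients.producer.ProducerRecord;

// 1. Partition specified explicitly in the record: it is used as-is.
producer.send(new ProducerRecord<>("user-events", 3, "user-42", payload));

// 2. No partition, but a key: the DefaultPartitioner hashes the key.
producer.send(new ProducerRecord<>("user-events", "user-42", payload));

// 3. No partition and no key: records are spread round-robin.
producer.send(new ProducerRecord<>("user-events", payload));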