Aug 2019 - Kafka Consumer Lag programmatically - apache-kafka

Is there any way we can programmatically find the lag of a Kafka consumer?
I don't want to install external Kafka Manager tools and check the lag on a dashboard.
We can list all the consumer groups and check the lag for each group.
Currently we do have a command to check the lag, and it requires the relative path to where Kafka resides.
Spring-Kafka, kafka-python, Kafka Admin client or JMX - is there any way we can code this and find out the lag?
We were careless and didn't monitor the process; the consumer was in a zombie state and the lag went to 50,000, which resulted in a lot of chaos.
Only when an issue arises do we think of these cases; we were monitoring the script, but we didn't know it would end up as a zombie process.
Any thoughts are extremely welcome!!

You can get this using kafka-python. Run it on each broker, or loop through a list of brokers; it will give the consumer lag for all topic partitions.
import socket

# Protocol classes below live under kafka.protocol.* in current kafka-python releases;
# adjust the import paths if your installed version differs.
from kafka import BrokerConnection, KafkaConsumer, TopicPartition
from kafka.protocol.admin import ListGroupsRequest_v1, DescribeGroupsRequest_v1
from kafka.protocol.group import MemberAssignment

BOOTSTRAP_SERVERS = '{}'.format(socket.gethostbyname(socket.gethostname()))

client = BrokerConnection(BOOTSTRAP_SERVERS, 9092, socket.AF_INET)
client.connect_blocking()

# Ask the broker for all consumer groups it knows about.
list_groups_request = ListGroupsRequest_v1()
future = client.send(list_groups_request)
while not future.is_done:
    for resp, f in client.recv():
        f.success(resp)

for group in future.value.groups:
    if group[1] == 'consumer':
        # print(group[0])
        list_members_in_groups = DescribeGroupsRequest_v1(groups=[(group[0])])
        future = client.send(list_members_in_groups)
        while not future.is_done:
            for resp, f in client.recv():
                # print(resp)
                f.success(resp)
        (error_code, group_id, state, protocol_type, protocol, members) = future.value.groups[0]
        if len(members) != 0:
            for member in members:
                (member_id, client_id, client_host, member_metadata, member_assignment) = member
                member_topics_assignment = []
                for (topic, partitions) in MemberAssignment.decode(member_assignment).assignment:
                    member_topics_assignment.append(topic)
                for topic in member_topics_assignment:
                    consumer = KafkaConsumer(
                        bootstrap_servers=BOOTSTRAP_SERVERS,
                        group_id=group[0],
                        enable_auto_commit=False
                    )
                    consumer.topics()
                    for p in consumer.partitions_for_topic(topic):
                        tp = TopicPartition(topic, p)
                        consumer.assign([tp])
                        committed = consumer.committed(tp)
                        consumer.seek_to_end(tp)
                        last_offset = consumer.position(tp)
                        if last_offset is not None and committed is not None:
                            lag = last_offset - committed
                            print("group: {} topic: {} partition: {} lag: {}".format(group[0], topic, p, lag))
                    consumer.close(autocommit=False)

Yes, we can get consumer lag in kafka-python. I'm not sure if this is the best way to do it, but it works.
Currently we specify our consumers manually. You can also get the consumers from kafka-python, but it only gives the list of active consumers, so if one of your consumers is down it may not show up in the list.
First, establish the client connection:
from kafka import BrokerConnection
from kafka.protocol.commit import *
import socket

# This takes in only one broker at a time, so to use multiple brokers
# loop through each one, passing the broker ip and port.
def establish_broker_connection(server, port, group):
    '''
    Client connection to each broker for getting consumer offset info
    '''
    bc = BrokerConnection(server, port, socket.AF_INET)
    bc.connect_blocking()
    fetch_offset_request = OffsetFetchRequest_v3(group, None)
    future = bc.send(fetch_offset_request)
    return future, bc  # both are needed in the next step
Next we need to get the current offset for each topic the consumer is subscribed to. Pass the above future and bc here.
from kafka import SimpleClient
from kafka.protocol.offset import OffsetRequest, OffsetResetStrategy
from kafka.common import OffsetRequestPayload

# Comma-separated broker list, e.g. "broker1:port1,broker2:port2"
BOOTSTRAP_SERVERS = 'broker1:port1,broker2:port2'

def _get_client_connection():
    '''
    Client connection to the cluster for getting topic info
    '''
    client = SimpleClient(BOOTSTRAP_SERVERS)
    return client

def get_latest_offset_for_topic(topic):
    '''
    Get the latest offset for a topic
    '''
    client = _get_client_connection()
    partitions = client.topic_partitions[topic]
    offset_requests = [OffsetRequestPayload(topic, p, -1, 1) for p in partitions.keys()]
    offsets_responses = client.send_offset_request(offset_requests)
    latest_offset = offsets_responses[0].offsets[0]
    return latest_offset  # latest offset for the topic

def get_current_offset_for_consumer_group(future, bc):
    '''
    Get current offset info for a consumer group
    '''
    while not future.is_done:
        for resp, f in bc.recv():
            f.success(resp)
    # future.value.topics gives all the topics as a list of (topic, partitions) pairs.
    for topic in future.value.topics:
        latest_offset = get_latest_offset_for_topic(topic[0])
        for partition in topic[1]:
            offset_difference = latest_offset - partition[1]
offset_difference gives the difference between the last offset produced to the topic and the last offset (message) consumed by your consumer.
If you are not getting a current offset for a consumer on a topic, it usually means that consumer is down.
So you can raise alerts or send mail if the offset difference goes above a threshold you choose, or if you get empty offsets for your consumer.

The Java client exposes the lag for its consumers over JMX; in this example we have 5 partitions...
Spring Boot can publish these metrics to Micrometer.
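For completeness, the same fetch-manager metrics that appear in JMX can also be read in-process via KafkaConsumer.metrics(). A minimal sketch, not taken from the answer above; it assumes a consumer that is actively polling, and the exact per-partition metric names vary between client versions:
import org.apache.kafka.clients.consumer.KafkaConsumer
import scala.collection.JavaConverters._

// Print the fetch-manager lag metrics of an already-polling consumer.
// metricValue() requires kafka-clients 1.0 or newer.
def printLagMetrics(consumer: KafkaConsumer[String, String]): Unit =
  consumer.metrics().asScala.foreach { case (name, metric) =>
    if (name.group == "consumer-fetch-manager-metrics" && name.name.contains("records-lag")) {
      println(s"${name.name} ${name.tags.asScala} = ${metric.metricValue}")
    }
  }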

I'm writing the code in Scala but I use only the native Java API from KafkaConsumer and KafkaProducer.
You only need to know the name of the consumer group and the topics.
It's possible to avoid a pre-defined topic list, but then you will get the lag only for consumer groups that exist and whose state is stable (not rebalancing), which can be a problem for alerting.
So all that you really need to know and use are:
KafkaConsumer.committed - returns the latest committed offset for a TopicPartition
KafkaConsumer.assign - do not use subscribe, because it causes a consumer-group rebalance. You definitely do not want your monitoring process to influence the subject of its monitoring.
KafkaConsumer.endOffsets - returns the latest produced offset
Consumer group lag is the difference between the latest committed and the latest produced offsets.
import java.util.{Properties, UUID}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.producer.KafkaProducer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.{StringDeserializer, StringSerializer}
import scala.collection.JavaConverters._
import scala.util.Try

case class TopicPartitionInfo(topic: String, partition: Long, currentPosition: Long, endOffset: Long) {
  val lag: Long = endOffset - currentPosition

  override def toString: String =
    s"topic=$topic,partition=$partition,currentPosition=$currentPosition,endOffset=$endOffset,lag=$lag"
}

case class ConsumerGroupInfo(consumerGroup: String, topicPartitionInfo: List[TopicPartitionInfo]) {
  override def toString: String =
    s"ConsumerGroup=$consumerGroup:\n${topicPartitionInfo.mkString("\n")}"
}

object ConsumerLag {

  def consumerGroupInfo(bootStrapServers: String, consumerGroup: String, topics: List[String]): ConsumerGroupInfo = {
    val properties = new Properties()
    properties.put("bootstrap.servers", bootStrapServers)
    properties.put("auto.offset.reset", "latest")
    properties.put("group.id", consumerGroup)
    properties.put("key.deserializer", classOf[StringDeserializer])
    properties.put("value.deserializer", classOf[StringDeserializer])
    properties.put("key.serializer", classOf[StringSerializer])
    properties.put("value.serializer", classOf[StringSerializer])
    properties.put("client.id", UUID.randomUUID().toString)

    val kafkaProducer = new KafkaProducer[String, String](properties)
    val kafkaConsumer = new KafkaConsumer[String, String](properties)
    val assignment = topics
      .map(topic => kafkaProducer.partitionsFor(topic).asScala)
      .flatMap(partitions => partitions.map(p => new TopicPartition(p.topic, p.partition)))
      .asJava
    kafkaConsumer.assign(assignment)

    ConsumerGroupInfo(consumerGroup,
      kafkaConsumer.endOffsets(assignment).asScala
        .map { case (tp, latestOffset) =>
          TopicPartitionInfo(tp.topic,
            tp.partition,
            // TODO warn if null; null means the consumer group does not exist
            Try(kafkaConsumer.committed(tp)).map(_.offset).getOrElse(0),
            latestOffset)
        }
        .toList
    )
  }

  def main(args: Array[String]): Unit = {
    println(
      consumerGroupInfo(
        bootStrapServers = "kafka-prod:9092",
        consumerGroup = "not-exist",
        topics = List("events", "anotherevents")
      )
    )

    println(
      consumerGroupInfo(
        bootStrapServers = "kafka:9092",
        consumerGroup = "consumerGroup1",
        topics = List("events", "another-events")
      )
    )
  }
}

If anyone is looking for consumer lag in Confluent Cloud, here is a simple script:
from kafka import KafkaConsumer, TopicPartition
from kafka.admin import KafkaAdminClient

BOOTSTRAP_SERVERS = "<>.aws.confluent.cloud"
CCLOUD_API_KEY = "{{ ccloud_apikey }}"
CCLOUD_API_SECRET = "{{ ccloud_apisecret }}"
ENVIRONMENT = "dev"
CLUSTERID = "dev"
CACERT = "/usr/local/lib/python{{ python3_version }}/site-packages/certifi/cacert.pem"

def main():
    client = KafkaAdminClient(bootstrap_servers=BOOTSTRAP_SERVERS,
                              ssl_cafile=CACERT,
                              security_protocol='SASL_SSL',
                              sasl_mechanism='PLAIN',
                              sasl_plain_username=CCLOUD_API_KEY,
                              sasl_plain_password=CCLOUD_API_SECRET)
    for group in client.list_consumer_groups():
        if group[1] == 'consumer':
            consumer = KafkaConsumer(
                bootstrap_servers=BOOTSTRAP_SERVERS,
                ssl_cafile=CACERT,
                group_id=group[0],
                enable_auto_commit=False,
                api_version=(0, 10),
                security_protocol='SASL_SSL',
                sasl_mechanism='PLAIN',
                sasl_plain_username=CCLOUD_API_KEY,
                sasl_plain_password=CCLOUD_API_SECRET
            )
            # Committed offsets for every topic-partition in this group
            list_members_in_groups = client.list_consumer_group_offsets(group[0])
            for (topic, partition) in list_members_in_groups:
                consumer.topics()
                tp = TopicPartition(topic, partition)
                consumer.assign([tp])
                committed = consumer.committed(tp)
                consumer.seek_to_end(tp)
                last_offset = consumer.position(tp)
                if last_offset is not None and committed is not None:
                    lag = last_offset - committed
                    print("group: {} topic: {} partition: {} lag: {}".format(group[0], topic, partition, lag))
            consumer.close(autocommit=False)

if __name__ == '__main__':
    main()

Related

kafka consumer is not able to produce output

I have written a Kafka consumer in Scala. When I run the consumer it shows nothing on the console.
I have used the below code:
val topicProducer = "testOutput"
val props = new Properties()
props.put("bootstrap.servers", "host:9092,host:9092")
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
props.put("group.id", "test")
val kafkaConsumer = new KafkaConsumer[String, String](props)
val topic = Array("test").toList
kafkaConsumer.subscribe(topic)
val results = kafkaConsumer.poll(2000)
for ((record) <- results) {
  producer.send(new ProducerRecord(topicProducer, "key", "Value=" + record.key() + " Record Key=" + record.value() + "append"))
}
You also need to specify the auto.offset.reset property so that your consumer is able to consume the messages from the beginning (equivalent to --from-beginning in the command-line consumer):
props.put("auto.offset.reset", "earliest");
According to Kafka docs:
auto.offset.reset
What to do when there is no initial offset in ZooKeeper or if an
offset is out of range:
smallest : automatically reset the offset to the smallest offset
largest : automatically reset the offset to the largest offset
anything else: throw exception to the consumer
EDIT:
Alternatively, if you are using the old consumer API, then instead of --bootstrap-server host:9092 use the ZooKeeper parameter --zookeeper host:2181.
If this does not solve the issue, then try deleting /brokers in ZooKeeper and restarting the Kafka nodes:
bin/zookeeper-shell <zk-host>:2181
rmr /brokers

Spark Streaming from Kafka topic throws offset out of range with no option to restart the stream

I have a streaming job running on Spark 2.1.1, polling Kafka 0.10. I am using the Spark KafkaUtils class to create a DStream, and everything works fine until data ages out of the topic because of the retention policy. My problem comes when I stop my job to make some changes: if any data has aged out of the topic, I get an error saying that my offsets are out of range.

I have done a lot of research, including looking at the Spark source code, and I see lots of comments like those in this issue, SPARK-19680 - basically saying that data should not be lost silently, so auto.offset.reset is ignored by Spark. My big question, though, is: what can I do now? My topic will not poll in Spark - it dies on startup with the offsets exception. I don't know how to reset the offsets so my job will just get started again.

I have not enabled checkpoints since I read that they are unreliable for this use. I used to have a lot of code to manage offsets, but it appears that Spark ignores requested offsets if there are any committed, so I am currently managing offsets like this:
val stream = KafkaUtils.createDirectStream[String, T](
  ssc,
  PreferConsistent,
  Subscribe[String, T](topics, kafkaParams))

stream.foreachRDD { (rdd, batchTime) =>
  val offsets = rdd.asInstanceOf[HasOffsetRanges].offsetRanges
  Log.debug("processing new batch...")
  val values = rdd.map(x => x.value())
  val incomingFrame: Dataset[T] = SparkUtils.sparkSession.createDataset(values)(consumer.encoder()).persist
  consumer.processDataset(incomingFrame, batchTime)
  stream.asInstanceOf[CanCommitOffsets].commitAsync(offsets)
}
ssc.start()
ssc.awaitTermination()
As a workaround I have been changing my group ids but that is really lame. I know this is expected behavior and should not happen, I just need to know how to get the stream running again. Any help would be appreciated.
Here is a block of code I wrote to get by this until a real solution is introduced to spark-streaming-kafka. It basically resets the offsets for the partitions that have aged out, based on the OffsetResetStrategy you set. Just give it the same Map of params, _params, that you provide to KafkaUtils. Call this before calling KafkaUtils.create****Stream() from your driver.
final OffsetResetStrategy offsetResetStrategy = OffsetResetStrategy.valueOf(
    _params.get(ConsumerConfig.AUTO_OFFSET_RESET_CONFIG).toString().toUpperCase(Locale.ROOT));
if (OffsetResetStrategy.EARLIEST.equals(offsetResetStrategy) || OffsetResetStrategy.LATEST.equals(offsetResetStrategy)) {
    LOG.info("Going to reset consumer offsets");
    final KafkaConsumer<K, V> consumer = new KafkaConsumer<>(_params);

    LOG.debug("Fetching current state");
    final List<TopicPartition> parts = new LinkedList<>();
    final Map<TopicPartition, OffsetAndMetadata> currentCommitted = new HashMap<>();
    for (String topic : this.topics()) {
        List<PartitionInfo> info = consumer.partitionsFor(topic);
        for (PartitionInfo i : info) {
            final TopicPartition p = new TopicPartition(topic, i.partition());
            final OffsetAndMetadata m = consumer.committed(p);
            parts.add(p);
            currentCommitted.put(p, m);
        }
    }
    final Map<TopicPartition, Long> beginning = consumer.beginningOffsets(parts);
    final Map<TopicPartition, Long> ending = consumer.endOffsets(parts);

    LOG.debug("Finding what offsets need to be adjusted");
    final Map<TopicPartition, OffsetAndMetadata> newCommit = new HashMap<>();
    for (TopicPartition part : parts) {
        final OffsetAndMetadata m = currentCommitted.get(part);
        final Long begin = beginning.get(part);
        final Long end = ending.get(part);
        if (m == null || m.offset() < begin) {
            LOG.info("Adjusting partition {}-{}; OffsetAndMeta={} Beginning={} End={}",
                part.topic(), part.partition(), m, begin, end);
            final OffsetAndMetadata newMeta;
            if (OffsetResetStrategy.EARLIEST.equals(offsetResetStrategy)) {
                newMeta = new OffsetAndMetadata(begin);
            } else if (OffsetResetStrategy.LATEST.equals(offsetResetStrategy)) {
                newMeta = new OffsetAndMetadata(end);
            } else {
                newMeta = null;
            }
            LOG.info("New offset to be {}", newMeta);
            if (newMeta != null) {
                newCommit.put(part, newMeta);
            }
        }
    }
    consumer.commitSync(newCommit);
    consumer.close();
}
auto.offset.reset=latest/earliest is applied only when the consumer starts for the first time.
There is a Spark JIRA to resolve this issue; until then we have to live with workarounds:
https://issues.apache.org/jira/browse/SPARK-19680
Try
auto.offset.reset=latest
Or
auto.offset.reset=earliest
earliest: automatically reset the offset to the earliest offset
latest: automatically reset the offset to the latest offset
none: throw exception to the consumer if no previous offset is found for the consumer's group
anything else: throw exception to the consumer.
One more thing that affects which offset values correspond to the smallest and largest configs is the log retention policy. Imagine you have a topic with retention configured to 1 hour. You produce 10 messages, and then an hour later you post 10 more messages. The largest offset will still remain the same, but the smallest one won't be 0, because Kafka will already have removed the first 10 messages, so the smallest available offset will be 10.
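To see this effect from code, you can compare the beginning and end offsets of a partition. A small sketch (the broker address and topic name are placeholders; beginningOffsets/endOffsets are standard KafkaConsumer calls):
import java.util.{Arrays, Properties}
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.serialization.StringDeserializer

val props = new Properties()
props.put("bootstrap.servers", "localhost:9092")    // placeholder
props.put("key.deserializer", classOf[StringDeserializer])
props.put("value.deserializer", classOf[StringDeserializer])

val consumer = new KafkaConsumer[String, String](props)
val tp = new TopicPartition("my-retained-topic", 0) // hypothetical topic
val parts = Arrays.asList(tp)

// Once retention has deleted old segments, the beginning offset moves
// forward, so "earliest"/"smallest" no longer means offset 0.
val earliest = consumer.beginningOffsets(parts).get(tp).longValue
val latest = consumer.endOffsets(parts).get(tp).longValue
println(s"earliest=$earliest latest=$latest available=${latest - earliest}")
consumer.close()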
This problem was solved in Structured Streaming by setting "failOnDataLoss" = "false". It is unclear why there is no such option in the Spark DStream framework.
This is a BIG question for Spark developers!
In our projects, we tried to solve this problem by resetting the offsets to earliest + 5 minutes... it helps in most cases.

Batching not working in Kafka Producer with Scala

I am writing a producer in Scala and I want to do batching. The way batching should work is that it holds the messages in a queue until the queue is full and then posts all of them together on the topic. But somehow it's not working: the moment I start sending messages, it posts them one by one. Does anyone know how to use batching in the Kafka producer?
val kafkaStringSerializer = "org.apache.kafka.common.serialization.StringSerializer"
val batchSize: java.lang.Integer = 163840

val props = new Properties()
props.put("key.serializer", kafkaStringSerializer)
props.put("value.serializer", kafkaStringSerializer)
props.put("batch.size", batchSize)
props.put("bootstrap.servers", "localhost:9092")

val producer = new KafkaProducer[String, String](props)

val TOPIC = "topic"
val inlineMessage = "adsdasdddddssssssssssss"

for (i <- 1 to 10) {
  val record: ProducerRecord[String, String] = new ProducerRecord(TOPIC, inlineMessage)
  val futureResponse: Future[RecordMetadata] = producer.send(record)
  futureResponse.isDone
  println("Future Response ==========>" + futureResponse.get().serializedValueSize())
}
You have to set linger.ms in your props.
By default it is zero, meaning the message is sent immediately if possible.
You can increase it (for example to 100) so that batching occurs - this means higher latency, but higher throughput.
batch.size is a maximum: if you reach it before linger.ms has passed, data will be sent without waiting any longer.
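For illustration, the only change to the question's properties would be along these lines (a sketch; the 100 ms value is just an example, not a recommendation):
import java.util.Properties

val kafkaStringSerializer = "org.apache.kafka.common.serialization.StringSerializer"
val batchSize: java.lang.Integer = 163840
val lingerMs: java.lang.Integer = 100  // example value

val props = new Properties()
props.put("key.serializer", kafkaStringSerializer)
props.put("value.serializer", kafkaStringSerializer)
props.put("bootstrap.servers", "localhost:9092")
props.put("batch.size", batchSize)
// Wait up to 100 ms for more records before sending each batch:
// slightly higher latency, better batching and throughput.
props.put("linger.ms", lingerMs)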
To view the batches actually sent, you will need to configure your logging (batching is done on a background thread, and you will not be able to see what batches are formed through the producer API - you can't send or receive batches, only send a record and receive its response; communication with the broker via batches is done internally).
First, if not already done, bind a log4j properties file (-Dlog4j.configuration=file:path/to/log4j.properties):
log4j.rootLogger=WARN, stderr
log4j.logger.org.apache.kafka.clients.producer.internals.Sender=TRACE, stderr
log4j.appender.stderr=org.apache.log4j.ConsoleAppender
log4j.appender.stderr.layout=org.apache.log4j.PatternLayout
log4j.appender.stderr.layout.ConversionPattern=[%d] %p %m (%c)%n
log4j.appender.stderr.Target=System.err
For example, I will receive
TRACE Sent produce request to 2: (type=ProduceRequest, magic=1, acks=1, timeout=30000, partitionRecords=({test-1=[(record=LegacyRecordBatch(offset=0, Record(magic=1, attributes=0, compression=NONE, crc=2237306008, CreateTime=1502444105996, key=0 bytes, value=2 bytes))), (record=LegacyRecordBatch(offset=1, Record(magic=1, attributes=0, compression=NONE, crc=3259548815, CreateTime=1502444106029, key=0 bytes, value=2 bytes)))]}), transactionalId='' (org.apache.kafka.clients.producer.internals.Sender)
Which is a batch of 2 records. A batch contains records sent to the same broker.
Then, play with batch.size and linger.ms to see the difference. Note that a record carries some overhead, so a batch.size of 1000 will not contain 10 messages of size 100.
Note that I did not find documentation that states all the loggers and what they do (like log4j.logger.org.apache.kafka.clients.producer.internals.Sender). You can enable DEBUG/TRACE on the rootLogger and find the data you want, or explore the code.
You are producing the data to the Kafka server synchronously: the moment you call producer.send followed by futureResponse.get, it returns only after the data is stored on the Kafka server.
Store the responses in a separate list, and call futureResponse.get outside the for loop.
With the default configuration, Kafka supports batching; see linger.ms and batch.size.
List<Future<RecordMetadata>> responses = new ArrayList<>();
for (int i = 1; i <= 10; i++) {
    ProducerRecord<String, String> record = new ProducerRecord<>(TOPIC, inlineMessage);
    Future<RecordMetadata> response = producer.send(record);
    responses.add(response);
}
for (Future<RecordMetadata> response : responses) {
    response.get(); // verify whether the message was sent to the broker
}

Why doesn't a new consumer group id start from the beginning

I have a kafka 0.10 cluster with several topics that have messages produced to them.
When I subscribe to the topics with a KafkaConsumer and a new group Id I get no records returned, but if I subscribe to the topics with a ConsumerRebalanceListener that seeks to the beginning with the same group Id, then I get the records in the topic.
@Grab('org.apache.kafka:kafka-clients:0.10.0.0')
import org.apache.kafka.clients.consumer.KafkaConsumer
import org.apache.kafka.clients.consumer.ConsumerRecords
import org.apache.kafka.clients.consumer.ConsumerRecord
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener
import org.apache.kafka.common.TopicPartition
import org.apache.kafka.common.PartitionInfo

Properties props = new Properties()
props.with {
    put("bootstrap.servers", "***********:9091")
    put("group.id", "script-test-noseek")
    put("enable.auto.commit", "true")
    put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer")
    put("session.timeout.ms", 30000)
}
KafkaConsumer consumer = new KafkaConsumer(props)

def topicMap = [:]
consumer.listTopics().each { topic, partitioninfo ->
    topicMap[topic] = 0
}

topicMap.each { topic, count ->
    def stopTime = new Date().time + 30_000
    def stop = false
    println "Starting topic: $topic"
    consumer.subscribe([topic])
    //consumer.subscribe([topic], new CRListener(consumer: consumer))
    while (!stop) {
        ConsumerRecords<String, String> records = consumer.poll(5_000)
        topicMap[topic] += records.size()
        consumer.commitAsync()
        if (new Date().time > stopTime || records.size() == 0) {
            stop = true
        }
    }
    consumer.unsubscribe()
}

def total = 0
println "------------------- Results -----------------------"
topicMap.each { k, v ->
    if (v > 0) {
        println "Topic: ${k.padRight(64, ' ')} Records: ${v}"
    }
    total += v
}
println "==================================================="
println "Total: ${total}"
def dummy = "Process End"

class CRListener implements ConsumerRebalanceListener {
    KafkaConsumer consumer
    void onPartitionsAssigned(java.util.Collection partitions) {
        consumer.seekToBeginning(partitions)
    }
    void onPartitionsRevoked(java.util.Collection partitions) {
        consumer.commitSync()
    }
}
The code is Groovy 2.4.x. And I've masked the bootstrap server.
If I uncomment the consumer subscribe line with the listener, it does what I expect it to do. But as it is I get no results.
Assume that I change the group Id for each run, just so as to not be picking up where another execution leaves off.
I can't see what I'm doing wrong. Any help would be appreciated.
If you use a new consumer group id and want to read the whole topic from the beginning, you need to specify the parameter "auto.offset.reset=earliest" in your properties (the default value is "latest").
Properties props = new Properties()
props.with {
    // all other values...
    put("auto.offset.reset", "earliest")
}
On consumer start-up the following happens:
look for a (valid) committed offset for the used group.id
if a (valid) offset is found, resume from there
if no (valid) offset is found, set the offset according to auto.offset.reset

Kafka partition key not working properly

I'm struggling with how to use the partition key mechanism properly. My logic is: set the partition number to 3, then create three partition keys "0", "1", "2", then use these partition keys to create three KeyedMessages such as
KeyedMessage(topic, "0", message)
KeyedMessage(topic, "1", message)
KeyedMessage(topic, "2", message)
After this, I create a producer instance to send out all the KeyedMessages.
I expect each KeyedMessage to go to a different partition according to its partition key, which means
KeyedMessage(topic, "0", message) go to Partition 0
KeyedMessage(topic, "1", message) go to Partition 1
KeyedMessage(topic, "2", message) go to Partition 2
I'm using Kafka-web-console to watch the topic status, but the result is not what I'm expecting. The KeyedMessages still go to partitions randomly; sometimes two KeyedMessages end up in the same partition even though they have different partition keys.
To make my question clearer, I would like to post some Scala code I currently have. I'm using Kafka 0.8.2-beta and Scala 2.10.4.
Here is the producer code; I didn't use a custom partitioner.class:
val props = new Properties()

val codec = if (compress) DefaultCompressionCodec.codec else NoCompressionCodec.codec

props.put("compression.codec", codec.toString)
props.put("producer.type", if (synchronously) "sync" else "async")
props.put("metadata.broker.list", brokerList)
props.put("batch.num.messages", batchSize.toString)
props.put("message.send.max.retries", messageSendMaxRetries.toString)
props.put("request.required.acks", requestRequiredAcks.toString)
props.put("client.id", clientId.toString)

val producer = new Producer[AnyRef, AnyRef](new ProducerConfig(props))

def kafkaMesssage(message: Array[Byte], partition: Array[Byte]): KeyedMessage[AnyRef, AnyRef] = {
  if (partition == null) {
    new KeyedMessage(topic, message)
  } else {
    new KeyedMessage(topic, partition, message)
  }
}

def send(message: String, partition: String = null): Unit =
  send(message.getBytes("UTF8"), if (partition == null) null else partition.getBytes("UTF8"))

def send(message: Array[Byte], partition: Array[Byte]): Unit = {
  try {
    producer.send(kafkaMesssage(message, partition))
  } catch {
    case e: Exception =>
      e.printStackTrace
      System.exit(1)
  }
}
And here is how I use the producer: create a producer instance, then use this instance to send three messages. Currently I create the partition key as an Integer, then convert it to a byte array:
val testMessage = UUID.randomUUID().toString
val testTopic = "sample1"
val groupId_1 = "testGroup"

print("starting sample broker testing")
val producer = new KafkaProducer(testTopic, "localhost:9092")

val numList = List(0, 1, 2)
for (a <- numList) {
  // Create a partition key as Byte Array
  var key = java.nio.ByteBuffer.allocate(4).putInt(a).array()
  // Here I give an Array[Byte] key
  // so the second "send" function of the producer will be called
  producer.send(testMessage.getBytes("UTF8"), key)
}
Not sure whether my logic is incorrect or I didn't understand the partition key mechanism correctly. Anyone who could provide some sample code or an explanation would be great!
People often assume partitioning is a way to separate business data into business categories, but this is not the right way to look at partitions.
Partitioning directly influences two things:
-performance (each partition can be consumed in parallel with other partitions)
-message order (the order of messages is guaranteed only at the partition level)
I will give an example of how we create partitions:
You have a topic, say MyMessagesToWorld
You would like to transfer this topic (all MyMessagesToWorld) to some consumer.
You "weigh" the whole "mass" of MyMessagesToWorld and find that it is 10 kg.
You have following "business" categories in "MyMessagesToWorld ":
-messages to dad (D)
-messages to mom (M)
-messages to sis (S)
-messages to grandma (G)
-messages to teacher (T)
-messages to girl friend (F)
You think about who your consumers are and find that they are gnomes, each able to consume about 1 kg of messages per hour.
You can employ up to 2 such gnomes.
1 gnome needs 10 hours to consume 10 kg of messages; 2 gnomes need 5 hours.
So you decide to employ all available gnomes to save time.
To create 2 "channels" for these 2 gnomes, you create 2 partitions of this topic in Kafka. If you envision more gnomes, create more partitions.
You have 6 business categories and 2 sequential independent consumers - gnomes (consumer threads).
What to do?
Kafka's approach is the following:
Suppose you have 2 Kafka instances in the cluster.
(The same example works if you have more instances in the cluster.)
You set the partition number to 2 in Kafka, for example (using Kafka 0.8.2.1):
You define your topic in Kafka, telling it that you have 2 partitions for that topic:
kafka-topics.sh(.bat) --create --zookeeper localhost:2181 --replication-factor 2 --partitions 2 --topic MyMessagesToWorld
Now the topic MyMessagesToWorld has 2 partitions: P(0) and P(1).
You chose the number 2 (partitions) because you know you have (envision) only 2 consuming gnomes.
You may add more partitions later, when more consumer gnomes are employed.
Do not confuse a Kafka consumer with such a gnome.
A Kafka consumer can employ N gnomes (N parallel threads).
Now you create KEYs for your messages.
You need KEYs to distribute your messages among the partitions.
Keys will be the letters of the "business categories" that you defined before:
D, M, S, G, T, F; you think such letters are fine as IDs.
But in general, anything may be used as a key:
(complex objects, byte arrays, anything...)
If you create NO partitioner, the default one will be used.
The default partitioner is a bit stupid.
It takes the hashcode of each KEY and divides it by the number of available partitions; the "remainder" defines the partition number for that key.
Example:
KEY M, hashcode=12345, partition for M = 12345 % 2 = 1
As you can imagine, with such a partitioner you have, in the best case, 3 business categories landing in each partition.
In the worst case you can have all business categories landing in 1 partition.
If you had 100,000 business categories, such an algorithm would distribute them statistically well.
But with only a few categories, you may not get a very fair distribution.
So, you can rewrite the partitioner and distribute your business categories a bit more wisely.
Here is an example:
This partitioner distributes business categories equally between the available partitions.
public class CustomPartitioner {

    private static Map<String, Integer> keyDistributionTable = new HashMap<String, Integer>();
    private static AtomicInteger sequence = new AtomicInteger();
    private ReentrantLock lock = new ReentrantLock();

    public int partition(ProducerRecord<String, Object> record, Cluster cluster) {
        String key = record.key();
        int seq = figureSeq(key);
        List<PartitionInfo> availablePartitions = cluster.availablePartitionsForTopic(record.topic());
        if (availablePartitions.size() > 0) {
            int part = seq % availablePartitions.size();
            return availablePartitions.get(part).partition();
        } else {
            List<PartitionInfo> partitions = cluster.partitionsForTopic(record.topic());
            int numPartitions = partitions.size();
            // no partitions are available, give a non-available partition
            return seq % numPartitions;
        }
    }

    private int figureSeq(String key) {
        int sequentualNumber = 0;
        if (keyDistributionTable.containsKey(key)) {
            sequentualNumber = keyDistributionTable.get(key);
        } else { // synchronized region
            // used only for new keys, so high waiting time for the monitor is expected only on start
            lock.lock();
            try {
                if (keyDistributionTable.containsKey(key)) {
                    sequentualNumber = keyDistributionTable.get(key);
                } else {
                    int seq = sequence.incrementAndGet();
                    keyDistributionTable.put(key, seq);
                    sequentualNumber = seq;
                }
            } finally {
                lock.unlock();
            }
        }
        return sequentualNumber;
    }
}
The default partitioner hashes the key (here, a byte array) and uses the result modulo numPartitions to get an integer between 0 and numPartitions-1 inclusive. That resulting integer is what determines the partition to which the message is written, not the literal value of the key as you're assuming.
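To see why the byte-array keys in the question scatter, here is a small sketch of the hash-then-modulo idea (an illustration of the arithmetic only, not Kafka's actual partitioner class):
val numPartitions = 3

// Roughly what the default partitioner does: hash the key object and
// take the remainder modulo the partition count.
def partitionFor(key: AnyRef): Int =
  math.abs(key.hashCode) % numPartitions

// A String key hashes by content, so the same key always maps to the
// same partition (though in general not key "N" -> partition N).
println(partitionFor("2"))
println(partitionFor("2")) // same value every time

// Array[Byte] does not override hashCode, so two arrays with identical
// contents usually have different identity hash codes -- which is why
// the byte-array keys in the question land on partitions effectively at random.
val k1 = java.nio.ByteBuffer.allocate(4).putInt(0).array()
val k2 = java.nio.ByteBuffer.allocate(4).putInt(0).array()
println(partitionFor(k1) == partitionFor(k2)) // usually false
Keying with Strings, or using a partitioner that hashes the array contents (such as the ByteArrayPartitioner mentioned in the next answer), gives a stable key-to-partition mapping.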
Had the same issue - just switch to the ByteArrayParitioner:
props.put("partitioner.class", "kafka.producer.ByteArrayPartitioner")