Apache Kafka: broker leader -1 (topic received from Orion via Cygnus)

I'm working with Apache Kafka, receiving topics from Orion Context Broker via Cygnus (FIWARE Lab).
I'm receiving 10 topics, and I can see data arriving in the consumer console for 8 of them.
But for the other 2 topics, no data ever arrives, and there is no error (the consumer is just empty). If I try to add a test line to one of those topics via the console producer, I get this error:
ERROR Error when sending message to topic sensors_presence2_sensors with key: null, value: 4 bytes with error: Batch Expired (org.apache.kafka.clients.producer.internals.ErrorLoggingCallback)
So I ran the describe command and got this:
Topic:sensors_presence2_sensors PartitionCount:1 ReplicationFactor:1 Configs:
Topic: sensors_presence2_sensors Partition: 0 Leader: -1 Replicas: 2 Isr:
I'm just starting with Kafka, so for the moment I have a single broker (id 0) and haven't configured any partitions myself. But why is my leader -1? That broker does not even exist. How can I change that? I didn't choose the configuration for my topics; they were created automatically from Cygnus (Orion Context Broker) with an OrionKafkaSink.
An example of one of the 8 topics that work well:
Topic:sensors_presence1_sensors PartitionCount:1 ReplicationFactor:1 Configs:
Topic:sensors_presence1_sensors Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Thanks
Edit: the Cygnus logs show that the data is correctly sent to Kafka:
time=2016-03-02T11:07:09.504UTC | lvl=INFO | trans=1456915468-194-0000000039 | srv=egmmqtt | subsrv=egmmqttpath | function=persistAggregation | comp=Cygnus | msg=com.telefonica.iot.cygnus.sinks.OrionKafkaSink[279] : [kafka-sink] Persisting data at OrionKafkaSink. Topic (sensors_presence2_sensors), Data (...

The describe output shows Replicas: 2 and an empty Isr. That means a broker with id 2 was active at the time that topic was created, and that same broker (id=2) is not active now. Because of that, the Isr (in-sync replicas) list shows empty.
There is no way to get Replicas: 2 when your Kafka cluster has only ever had one node (broker id 0), so that broker must have existed at some point. Bring broker 2 back up and everything will work well.
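If bringing broker 2 back up is not possible, an alternative worth trying is to reassign the stranded partition onto the broker you do have. This is only a sketch (the ZooKeeper address localhost:2181 and the file name reassign.json are assumptions); also note that a reassignment may not be able to complete while the only existing replica (broker 2) is offline, so reviving broker 2, even temporarily, remains the most reliable fix.
reassign.json (moves the single partition of the topic onto broker 0):
{"version":1, "partitions":[{"topic":"sensors_presence2_sensors","partition":0,"replicas":[0]}]}
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file reassign.json --execute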
Hope this helps!

Related

Explain why the Metricbeat Kafka partition metric has a higher count than the consumer metric

The problem
Hi, I am trying to visualize Kafka lag using Grafana. I have been logging Kafka lag with Metricbeat and doing the math myself, since Metricbeat does not support logging Kafka lag in the version I am using (it has been implemented recently). Instead of using max(partition.offset.newest) - max(consumergroup.offset) to calculate the lag, I am using sum(partition.offset.newest) - sum(consumergroup.offset) filtered on a particular kafka.topic.name. However, the sums do not tally; upon further investigation, I found that even the counts do not tally! The count for partition offsets is 30 per 10s while the count for consumergroup offsets is 12 per 10s. I expected the count for both to be the same.
I do not understand why Metricbeat logs the partition metricset more often than the consumergroup metricset. At first I thought it was because of my Metricbeat configuration, where I have 2 host groups defined, which might have caused events to be logged multiple times. However, after changing my configuration, the count just dropped by half.
TL;DR
Why are the Metricbeat counts for partition and consumergroup different?
Setup
Kafka 2 brokers
Kafka topic partitions:
Topic: xxx PartitionCount:3 ReplicationFactor:2 Configs:
Topic: xxx Partition: 0 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: xxx Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: xxx Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
Metricbeat config (modules.d/kafka.yml):
- module: kafka
#metricsets:
# - partition
# - consumergroup
period: 10s
hosts: ["xxx.yyy:9092"]
Versions
Kafka 2.11-0.11.0.0
Elasticsearch-7.2.0
Kibana-7.2.0
Metricbeat-7.2.0
After much debugging I have figured out what was wrong:
For some reason, my Kafka broker 1 has only producer metrics and no consumer metrics; connecting to broker 2 solved this problem. Connecting to both brokers adds both metrics together.
Lucene uses fuzzy matching, so my data had some other consumer groups in it as well. For exact word matching, use kafka.partition.topic.keyword: 'xxx' instead. This made the ratio of my Kafka producer offsets to consumer offsets 2:1.
Metricbeat logs the replicas as well, so I needed to filter with NOT kafka.partition.partition.is_leader: false to get only the partition leaders. This made the consumer-to-partition ratio 1:1.
After these 3 steps are done, I can use the formula sum(partition.offset.newest) - sum(consumergroup.offset) to get the lag.
However, I still do not know why broker 1 doesn't have the consumer information.

Kafka configuration min.insync.replicas not working

It's early days in my Kafka learning, and I am checking out every Kafka property/concept on my local machine.
So I came across the property min.insync.replicas, and here is my understanding. Please correct me if I've misunderstood anything.
Once a message is sent to a topic, the message must be written to at least min.insync.replicas replicas.
min.insync.replicas also includes the leader.
If the number of available live brokers (indirectly, in-sync replicas) is less than the specified min.insync.replicas, then the producer will raise an exception, failing to publish the message.
Following are the steps I followed to create the above scenario:
Started 3 brokers locally with broker ids 0, 1 and 2.
Created the topic insync and set min.insync.replicas to 2 using the following command:
sudo ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic insync --config min.insync.replicas=2
Describing the topic resulted in the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
At this point, I made sure the property I provided was picked up by Kafka.
I started sending messages and consuming them from the terminal using the following commands:
Producer: ./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --producer.config ../config/producer.properties
Consumer: ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic insync
At this point, I was able to send and receive messages successfully.
Brought down 2 brokers (0 and 2) and described the topic, which resulted in the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 1 Replicas: 2,0,1 Isr: 1
At this point, there is only 1 in-sync replica (Isr: 1).
Then I tried to produce messages and it worked: I was able to send messages from the console producer and I could see those messages in the console consumer.
My Kafka version: kafka_2.10-0.10.0.0
Following are the producer properties:
bootstrap.servers=localhost:9092
compression.type=none
batch.size=20
acks=all
I expected the producer to fail with NotEnoughReplicasException, as mentioned in this:
public class NotEnoughReplicasException extends RetriableException
Number of insync replicas for the partition is lower than min.insync.replicas
but it worked normally.
Am I missing something? How can I create the scenario?
*************** EDIT **********************
Instead of producing messages from the console producer, I tried to generate messages from Java code. This time, I got the expected exception in the Kafka broker, although I expected it in the producer (the Java code). As this experiment raised more questions, I've posted another question.
Is acks set to "all"? If not, try setting it to all.
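If it is already set in producer.properties, note that the console producer also has its own --request-required-acks option, and it is worth checking (for example via --help) whether that option, rather than the properties file, is controlling acks in your version. A hedged sketch of forcing it on the command line (where -1 means "all"):
./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --request-required-acks -1 --producer.config ../config/producer.properties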
I believe that error is for a transactional producer; you may need to add this config:
transactional.id=TID-TEST
If it is still not working, please check the replication factor and min insync replicas for the internal topic __transaction_state.
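To check that internal topic, the usual describe command works; a sketch, assuming ZooKeeper on localhost:2181:
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __transaction_state
Compare the ReplicationFactor and Configs shown there with the number of brokers you still have running.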

Delete unused Kafka partition

I performed a reassignment of a topic within my cluster. There were some problems along the way (running out of disk space on one of the target brokers), but I managed to fix them and the process completed successfully.
It seems, however, that while one of the partitions was reassigned to other brokers, the data was not removed from the source broker's disk. And since the partition is quite big, I'd like it gone.
For obvious reasons I do not want to log into a shell and rm -rf the directory. What steps could I take to debug why the data was not deleted, and then to "encourage" the cluster to perform a cleanup?
For a while I thought that the retention policy might kick in and delete the data, but it's set to run every 10 minutes and it's been over a day since the reassignment finished.
The replicas are as follows:
# bin/kafka-topics.sh -describe --zookeeper 1.2.3.4 --topic topic-name
Topic:topic-name PartitionCount:4 ReplicationFactor:3 Configs:retention.ms=3153600000000,compression.type=lz4
Topic: topic-name Partition: 0 Leader: 1014 Replicas: 1014,1012,1002 Isr: 1012,1002,1014
Topic: topic-name Partition: 1 Leader: 1007 Replicas: 1007,1006,1003 Isr: 1006,1007,1003 <--- this is the partition
Topic: topic-name Partition: 2 Leader: 1013 Replicas: 1013,1008,1001 Isr: 1013,1008,1001
Topic: topic-name Partition: 3 Leader: 1011 Replicas: 1011,1016,1010 Isr: 1010,1011,1016
And here we can see that broker 1008 holds two partitions: 2 (which it should) and 1 (which it should not; this is the one we need gone).
/data_disk_0/kafka-logs# cat meta.properties | grep broker.id
broker.id=1008
/data_disk_0/kafka-logs# du -h --max-depth=1 . | grep topic-name-1
295G ./topic-name-2
292G ./topic-name-1
Edit: Curiously, all files in the topic directory (/data_disk_0/kafka-logs/topic-name-1/*) are opened by Kafka (checked with lsof). I don't know whether it's default behaviour for Kafka to open all files in its data dir regardless of their status, or whether it means these files are still somehow in use.
It is not possible to delete individual partitions from a topic in Kafka. A partition is backed by files on disk, and Kafka assigns data to each partition depending on the key. Let's say partition 2 has data for the key AAA; if the AAA key is no longer produced, then you might see partition 2 go unused.
Take a look at this video:
https://developer.confluent.io/learn-kafka/apache-kafka/partitions/#:~:text=Kafka%20Partitioning&text=Partitioning%20takes%20the%20single%20topic,many%20nodes%20in%20the%20cluster.
The only way is to delete the topic and create it again with the correct number of partitions.
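Regarding the leftover replica on broker 1008 specifically, one thing worth checking is whether the controller actually considers the reassignment finished, since the old replica is normally deleted only once the move is marked complete. A sketch, assuming the JSON file used for the reassignment is still available (the name reassignment.json is an assumption):
./kafka-reassign-partitions.sh --zookeeper 1.2.3.4 --reassignment-json-file reassignment.json --verify
If --verify reports the reassignment as still in progress, that would explain why the old topic-name-1 directory on broker 1008 was never cleaned up.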

How to influence the Kafka partition leader election process

I am going to set up a Kafka cluster for our intensive messaging system.
Currently we have set up two Kafka clusters, one based in London (LD) as primary and another based in New York (NY) as DR (backup), and we have written Java clients to replicate data from LD to NY.
Since Kafka has built-in features such as partitioning and replication for scalability, high availability and failover, we want to create a single bigger cluster comprising the servers in both London and New York.
But...
We are having problems with connectivity between the NY and LD servers; the network speed is really bad.
I have performed some tests.
producer config:
- acks=1 (requires acknowledgement from the partition leader only)
- sending async
When producers in London send messages to brokers in LD, the throughput is 100,000 msg/sec with a message size of 100 bytes => 10 MB/sec.
When producers in London send messages to brokers in NY, the throughput is 10 msg/sec with a message size of 100 bytes => 1 KB/sec.
So...
I am looking for a way to make sure the producers/consumers take advantage of locality, meaning that clients on the same network send messages to the nearest broker.
Let's say: clients in LD will talk to LD-based brokers.
(I understand that write/read requests only go to the partition leader.)
Any suggestion would be highly appreciated.
From what I understood, your current structure is:
1 broker located in NY.
1 broker located in LD.
n topics (I am going to assume the number of topics is 1).
n partitions on the topic (I am going to assume the number of partitions is 2).
Both partitions replicated across the brokers.
You want to make the broker located in LD the leader of all the partitions, so that all the producers interact with that broker and the broker located in NY is used for replication. If that is the case, you can do the following:
Check the configuration of your topic:
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic stream-log
Topic:<topic-name> PartitionCount:2 ReplicationFactor:2 Configs:
Topic: stream-log Partition: 0 Leader: 0 Replicas: 0,1 Isr: 0,1
Topic: stream-log Partition: 1 Leader: 1 Replicas: 1,0 Isr: 1,0
And assuming:
LD Broker ID: 0
NY Broker ID: 1
You can observe that the leader of partition 1 is broker 1 (NY). We want to modify that; to do so it is necessary to reassign the partitions:
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file manual_assign.json --execute
The contents of the JSON file:
{"partitions": [
{"topic": "<topic-name>", "partition": 0, "replicas": [0,1]},
{"topic": "<topic-name>", "partition": 1, "replicas": [0,1]}
],
"version":1
}
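Before triggering the election, you can optionally confirm the reassignment completed by running the same tool in verify mode (a sketch reusing the same JSON file and the assumed ZooKeeper address):
./kafka-reassign-partitions.sh --zookeeper localhost:2181 --reassignment-json-file manual_assign.json --verify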
Finally, to force Kafka to update the leader, run:
./kafka-preferred-replica-election.sh --zookeeper localhost:2181
The last command will affect all the topics you have created if you do not specify a list of topics; that should not be a problem, but keep it in mind.
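If you do want to restrict the election to this topic only, the tool accepts a JSON file via --path-to-json-file; a minimal sketch (the file name election.json is an assumption):
./kafka-preferred-replica-election.sh --zookeeper localhost:2181 --path-to-json-file election.json
where election.json contains:
{"partitions": [{"topic": "<topic-name>", "partition": 0}, {"topic": "<topic-name>", "partition": 1}]}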
It is worth having a look at this guide, which explains something similar. And if you are curious you can check the official documentation of the tools here.

Reading from multiple Kafka brokers with Flink

I want to read from multiple Kafka brokers with Flink.
I have a cluster of 3 machines for Kafka, with the following topic:
Topic:myTopic PartitionCount:3 ReplicationFactor:1 Configs:
Topic: myTopic Partition: 0 Leader: 2 Replicas: 2 Isr: 2
Topic: myTopic Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: myTopic Partition: 2 Leader: 1 Replicas: 1 Isr: 1
From Flink I execute the following code:
Properties properties = new Properties();
properties.setProperty("bootstrap.servers", "x.x.x.x:9092,x.x.x.x:9092,x.x.x.x:9092");
properties.setProperty("group.id", "flink");
DataStream<T> stream = env.addSource(new FlinkKafkaConsumer09<>("myTopic", new SimpleStringSchema(), properties));
stream.map(....);
env.execute();
I launch the same job 3 times.
If I execute this code with one broker it works well, but with 3 brokers (on 3 different machines) only one partition is read.
In this question, the solution proposed was:
to create separate instances of the FlinkKafkaConsumer for each cluster (that's what you are already doing), and then union the resulting streams
It's not working in my case.
So my questions are:
Am I missing something?
If we add a new machine to the Kafka cluster, do we need to change Flink's code to add a consumer for the new broker? Or can this be handled automatically at runtime?
It seems you've misunderstood the concept of Kafka's distributed streams.
A Kafka topic consists of several partitions (3 in your case). Each consumer can consume one or more of these partitions. If you start 3 instances of your app with the same group.id, each consumer will indeed read data from just one broker – Kafka tries to distribute the load evenly, so it ends up as one partition per consumer.
I recommend reading more about this topic, especially the concept of consumer groups, in the Kafka documentation.
In any case, FlinkKafkaConsumer09 can run in multiple parallel instances, each of which will pull data from one or more Kafka partitions, so you don't need to worry about creating more instances of the consumer yourself. A single consumer instance can pull records from all of the partitions.
I have no idea why you're starting the job 3 times instead of once with parallelism set to 3; that would solve your problem:
DataStream<T> stream = env
    .addSource(new FlinkKafkaConsumer09<>("myTopic", new SimpleStringSchema(), properties))
    .setParallelism(3);