Kafka uncommitted messages - apache-kafka

Let's say the partition has 4 replicas (1 leader, 3 followers) and all are currently in sync. min.insync.replicas is set to 3 and request.required.acks is set to all (-1).
The producer sends a message to the leader, and the leader appends it to its log. After that, two of the replicas crash before they can fetch this message. The one remaining replica successfully fetches the message and appends it to its own log.
The leader, after a certain timeout, will send an error (NotEnoughReplicas, I think) to the producer since the min.insync.replicas condition is not met.
My question is: what will happen to the message that was appended to the logs of the leader and the one surviving replica?
Will it be delivered to consumers when the crashed replicas come back online and the broker starts accepting and committing new messages (i.e. the high watermark is advanced in the log)?

If fewer than min.insync.replicas replicas are in sync and the producer uses acks=all, then the message is not committed and consumers will not receive it, even after the crashed replicas come back and are added to the ISR list again. You can test this in the following way.
Start two brokers with min.insync.replicas = 2
$ ./bin/kafka-server-start.sh ./config/server-1.properties
$ ./bin/kafka-server-start.sh ./config/server-2.properties
Create a topic with 1 partition and RF=2. Make sure both brokers are in the ISR list.
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --create --topic topic1 --partitions 1 --replication-factor 2
Created topic "topic1".
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1,2
Run the console consumer and console producer. Make sure the producer uses acks=-1.
$ ./bin/kafka-console-consumer.sh --new-consumer --bootstrap-server kafka-1:9092,kafka-2:9092 --topic topic1
$ ./bin/kafka-console-producer.sh --broker-list kafka-1:9092,kafka-2:9092 --topic topic1 --request-required-acks -1
Produce some messages. Consumer should receive them.
Kill one of the brokers (I killed the broker with id=2). Check that the ISR list is reduced to one broker.
$ ./bin/kafka-topics.sh --zookeeper zookeeper-1 --describe --topic topic1
Topic:topic1 PartitionCount:1 ReplicationFactor:2 Configs:
Topic: topic1 Partition: 0 Leader: 1 Replicas: 1,2 Isr: 1
Try to produce again. In the producer you should see several
Error: NOT_ENOUGH_REPLICAS
errors (one per retry) and finally
Messages are rejected since there are fewer in-sync replicas than required.
The consumer will not receive these messages.
Restart the killed broker and try to produce again.
The consumer will receive these messages, but not those that you sent while one of the replicas was down.
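For completeness, here is a minimal Java producer sketch (broker addresses and topic taken from the test above; the retry count is arbitrary) of how the same rejection surfaces to an application instead of the console producer:
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.errors.NotEnoughReplicasException;

public class AcksAllProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "kafka-1:9092,kafka-2:9092");
        props.put(ProducerConfig.ACKS_CONFIG, "all"); // equivalent to -1
        props.put(ProducerConfig.RETRIES_CONFIG, 3);  // each retry fails again while the ISR is too small
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG,
                "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("topic1", "hello"), (metadata, exception) -> {
                // Once retries are exhausted the callback receives the rejection.
                if (exception instanceof NotEnoughReplicasException) {
                    System.err.println("Rejected, ISR below min.insync.replicas: " + exception.getMessage());
                } else if (exception != null) {
                    exception.printStackTrace();
                }
            });
            producer.flush();
        }
    }
}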

From my understanding, the high watermark will not advance until both failed follower brokers have recovered and caught up.
See this blog post for more details: http://www.confluent.io/blog/hands-free-kafka-replication-a-lesson-in-operational-simplicity/

Error observed
Messages are rejected since there are fewer in-sync replicas than required.
To resolve this I had to increase the replication factor, and it worked.

Related

Kafka messages are stored in cluster/ensemble but aren't retrieved by consumer

I have a 2-server ZooKeeper + Kafka setup with the following config duplicated across the two servers:
Kafka config
broker.id=1 #2 for the second server
listeners=PLAINTEXT://0.0.0.0:9092
advertised.listeners=PLAINTEXT://server1_ip:9092
zookeeper.connect=server1_ip:2181,server2_ip:2181
num.partitions=3
offsets.topic.replication.factor=2 #how many servers should every topic be replicated to
ZooKeeper config
dataDir=/tmp/zookeeper
clientPort=2181
maxClientCnxns=200
admin.enableServer=false
server.1=server1_ip:2888:3888
server.2=server2_ip:2888:3888
initLimit=20
syncLimit=10
Successfully created a topic using:
/usr/local/kafka/bin/kafka-topics.sh --create --zookeeper server1_ip:2181,server2_ip:2181 --replication-factor 2 --partitions 3 --topic replicatedtest
Doing a describe on the topic using:
/usr/local/kafka/bin/kafka-topics.sh --bootstrap-server=server1_ip:9092,server2_ip:9092 --describe --topic replicatedtest
shows the following:
Topic: replicatedtest PartitionCount: 3 ReplicationFactor: 2 Configs: segment.bytes=1073741824
Topic: replicatedtest Partition: 0 Leader: 2 Replicas: 2,1 Isr: 2,1
Topic: replicatedtest Partition: 1 Leader: 1 Replicas: 1,2 Isr: 1,2
Topic: replicatedtest Partition: 2 Leader: 2 Replicas: 2,1 Isr: 2,1
At this point everything looks good. However, when I push messages using the following:
/usr/local/kafka/bin/kafka-console-producer.sh --broker-list server1_ip:9092,server2_ip:9092 --topic replicatedtest
>Hello
>Hi
and call the consumer script using:
/usr/local/kafka/bin/kafka-console-consumer.sh --bootstrap-server server1_ip:9092,server2_ip:9092 --topic replicatedtest --from-beginning
The consumer just stalls.
When I check whether these messages exist via an admin UI (KafkaMagic), they do come up. So it looks like the messages are stored successfully, but for some reason the consumer script can't get to them.
Any ideas?
Many thanks in advance!
==Edit==
Added a 3rd server. Changed the log level to TRACE in tools-log4j.properties; this is what the consumer script outputs:
https://docs.google.com/document/d/12nfML7M2a5QyXQswIZ_QVGuNqkc2DTRLPKqvRfDWKDY/edit?usp=sharing
Some corrections:
offsets.topic.replication.factor does not set the default replication factor for created topics; it is for the internal __consumer_offsets topic that stores the offsets of your consumer groups.
I have never heard of or seen a setup with 2 ZooKeepers. The recommendation is an odd number of nodes, 1, 3, or 5 at most, where 3 is usually more than enough.
For brokers as well, the recommended setup is at least 3 replicas with a minimum in-sync replica count of 2.
For further assistance, please provide the server logs, set the consumer log level to debug, run a consumer-groups describe, and run the console consumer with --group test1 for easier investigation.
Update: according to the docs you provided, the error is:
The group coordinator is not available
"I faced a similar issue. The problem was that when you start your Kafka broker there is a property associated with the offsets topic replication factor, which defaults to 3."
Can you do a topic --list and make sure the __consumer_offsets topic exists?
If not, please create it, restart the brokers/ZooKeepers, and try to consume again:
kafka-topics --bootstrap-server node1:9092 --partitions 50 --replication-factor 3 --create --topic __consumer_offsets
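As a programmatic cross-check, here is a minimal Java AdminClient sketch (bootstrap address assumed; internal topics are hidden from listings unless explicitly requested):
import java.util.Properties;
import java.util.Set;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListTopicsOptions;

public class CheckOffsetsTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "server1_ip:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // listInternal(true) includes internal topics such as __consumer_offsets.
            Set<String> topics = admin.listTopics(new ListTopicsOptions().listInternal(true))
                                      .names().get();
            System.out.println("__consumer_offsets present: " + topics.contains("__consumer_offsets"));
        }
    }
}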
Looks like the issue happened because I started off with one node and then decided to move to a cluster/ensemble setup that includes the original node. __consumer_offsets apparently needed to be reset in this case. This is what I ended up doing to solve the issue:
stop zookeeper and kafka on all 3 servers
systemctl stop kafka
systemctl stop zookeeper
delete existing topic data
rm -rf /tmp/kafka-logs
rm -rf /tmp/zookeeper/version-2
delete __consumer_offsets (calling the same delete command on each zookeeper instance might not be necessary)
/usr/local/kafka/bin/zookeeper-shell.sh server1_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
/usr/local/kafka/bin/zookeeper-shell.sh server2_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
/usr/local/kafka/bin/zookeeper-shell.sh server3_ip:2181 <<< "deleteall /brokers/topics/__consumer_offsets"
restart servers
systemctl start zookeeper
systemctl start kafka
recreate __consumer_offsets
/usr/local/kafka/bin/kafka-topics.sh --zookeeper server1_ip:2181,server2_ip:2181,server3_ip:2181 --create --topic __consumer_offsets --partitions 50 --replication-factor 3
Solution was based on: https://community.microstrategy.com/s/article/Kafka-cluster-health-check-fails-with-the-error-Group-coordinator-lookup-failed-The-coordinator-is-not-available?language=en_US

Is the leader part of the ISR list?

If I set min.insync.replicas to 1, will the leader wait for a follower to fetch this record, or will it send an ack to the producer, counting itself as part of the ISR list?
If you set min.insync.replicas to 1, it is sufficient for the leader alone to acknowledge receipt of the data. The leader will not wait for any followers to acknowledge the data.
Maybe it is worth mentioning that min.insync.replicas only comes into effect when your producer configuration has acks set to all (or -1).
The minimum allowed value for min.insync.replicas is 1, so even if the replication factor of the topic is set to 1 you can't get below 1.
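For reference, a minimal producer sketch showing where that contract is configured on the client side (broker address and topic name are placeholders):
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class AcksDemo {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        // acks=all (or -1): the leader only acknowledges while at least
        // min.insync.replicas replicas (itself included) are in sync.
        // With acks=1 the leader alone acknowledges and min.insync.replicas
        // is not enforced for this producer.
        props.put("acks", "all");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.send(new ProducerRecord<>("test", "value"));
        }
    }
}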
Is the leader part of ISR list?
Yes, the leader is part of the ISR list. You can see this when calling the command line tool kafka-topics. In the result you will notice that the same broker number will show up as "Leader" and in "ISR":
./bin/kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic test
Topic: test PartitionCount: 1 ReplicationFactor: 3
Topic: test Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3

Kafka - Troubleshooting NotEnoughReplicasException

I started seeing the following error
[2020-06-12 20:09:01,324] ERROR [ReplicaManager broker=3] Error processing append operation on partition __consumer_offsets-10 (kafka.server.ReplicaManager)
org.apache.kafka.common.errors.NotEnoughReplicasException: The size of the current ISR Set(3) is insufficient to satisfy the min.isr requirement of 2 for partition __consumer_offsets-10
My setup has three brokers and all brokers are up. A couple of things I did before this error started to pop up:
I configured min.insync.replicas to be 2 on all the brokers. I created a topic with replication factor 3 and started producing messages from a producer with acks = 1 while two brokers were down. I then brought up all the brokers and started the consumer.
How do I go about troubleshooting this error?
The consumer is also NOT able to see these messages (not sure why; the messages are supposed to be treated as "committed" since one broker was up when the producer was running).
A couple of facts:
It is interesting to see that rebalancing hasn't happened yet w.r.t. the preferred leader strategy.
$ kafka-topics --zookeeper 127.0.0.1:2181 --topic stock-prices --describe
Topic: stock-prices PartitionCount: 3 ReplicationFactor: 3 Configs: min.insync.replicas=2
Topic: stock-prices Partition: 0 Leader: 1 Replicas: 1,3,2 Isr: 1,2,3
Topic: stock-prices Partition: 1 Leader: 1 Replicas: 2,1,3 Isr: 1,2,3
Topic: stock-prices Partition: 2 Leader: 1 Replicas: 3,2,1 Isr: 1,2,3
Here's your problem:
You have set min.insync.replicas=2, which means you need at least two in-sync brokers to publish a message to the topic. If you take down 2 brokers, only one is left, which means your min.insync.replicas requirement is not fulfilled.
This has nothing to do with the consumers; it is about the brokers. When you set acks=1, your producer gets an acknowledgement as soon as the message is published to one broker (it does not confirm that all the replicas have been created).
So the problem is that your producer is acknowledged as soon as a single broker (the leader) receives the message, but the leader cannot replicate it since there aren't enough brokers up to sync.
One way to avoid this is to set acks=all, so your producer won't get an acknowledgement until the in-sync replicas have the message. It will retry until at least 2 in-sync replicas are online.
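If you want to verify which min.insync.replicas value a topic is actually running with (including __consumer_offsets, where the error above originated), one option is the AdminClient; a sketch with an assumed bootstrap address:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.Config;
import org.apache.kafka.common.config.ConfigResource;

public class ShowMinIsr {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "__consumer_offsets");
            Config config = admin.describeConfigs(Collections.singleton(topic)).all().get().get(topic);
            // Prints the effective value, whether set per topic or inherited
            // from the broker-wide min.insync.replicas default.
            System.out.println(config.get("min.insync.replicas"));
        }
    }
}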

Kafka consumer not consuming from beginning

I have Kafka set up on my local machine and have started ZooKeeper and a single broker server.
Now I have a single topic with the following description:
~/Documents/backups/kafka_2.12-2.2.0/data/kafka$ kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic edu-topic --describe
Topic:edu-topic PartitionCount:3 ReplicationFactor:1 Configs:
Topic: edu-topic Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Topic: edu-topic Partition: 1 Leader: 0 Replicas: 0 Isr: 0
Topic: edu-topic Partition: 2 Leader: 0 Replicas: 0 Isr: 0
I have a producer which produced some messages before the consumer was started, as follows:
~/Documents/backups/kafka_2.12-2.2.0/data/kafka$ kafka-console-producer.sh --broker-list 127.0.0.1:9092 --topic edu-topic
>book
>pen
>pencil
>marker
>
and when I start the consumer with the --from-beginning option, it does not show all the messages produced by the producer:
~/Documents/backups/kafka_2.12-2.2.0/data/kafka$ kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic edu-topic --group edu-service --from-beginning
However, it does show newly added messages.
What am I doing wrong here? Any help?
--from-beginning: If the consumer does not already have an established offset to consume from, start with the earliest message present in the log rather than the latest message.
The Kafka consumer uses --from-beginning only the very first time; if you retry (which I suspect you did), it will start from where it left off. You can consume the messages again with any of the options below:
Reset the consumer group offsets using the below:
kafka-streams-application-reset.sh --application-id edu-service --input-topics edu-topic --bootstrap-servers localhost:9092 --zookeeper 127.0.0.1:2181
then retry again from the beginning
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic edu-topic --group edu-service --from-beginning
Use a new consumer group id, which will start consuming from the starting point:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --topic edu-topic --group new-edu-service --from-beginning
You can also specify an offset instead, to consume the next N messages from a partition:
kafka-console-consumer.sh --bootstrap-server localhost:9092 --offset 0 --partition 0 --topic edu-topic
--offset <String: consume offset>: The offset id to consume from (a non-negative number), or 'earliest' which means from beginning, or 'latest' which means from end (default: latest)
--partition <Integer: partition>: The partition to consume from. Consumption starts from the end of the partition unless '--offset' is specified.
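All of the above maps onto the Java consumer through auto.offset.reset, which, like --from-beginning, only applies when the group has no committed offset yet; a minimal sketch using the topic and a fresh group name from this question:
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class FromBeginningConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "new-edu-service"); // a group with no committed offsets yet
        props.put("auto.offset.reset", "earliest"); // ignored once the group has committed offsets
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("edu-topic"));
            ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(5));
            for (ConsumerRecord<String, String> record : records) {
                System.out.println(record.value());
            }
        }
    }
}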
Because you are using an existing consumer group. --from-beginning only works for a new consumer group whose group name has not been recorded on the Kafka cluster yet.
To re-consume from the start, you can either:
Start a new consumer group (change the group name) with the flag --from-beginning
Reset the offsets of this consumer group. I haven't tried it yet, but you can test it here.
The flag
--from-beginning
will affect the behavior of your group consumer the first time it is started/created, or when the stored (last committed) offset has expired (or perhaps when you reset the stored offset).
Otherwise the group consumer will just continue from the stored (last committed) offset.
Please consider getting more details from the manual.
Just add
--from-beginning
But do know that messages from the beginning will not be in global order if you have used multiple partitions for the same topic.
Order is only guaranteed at the partition level (within the same partition).
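A minimal sketch of that guarantee in the Java producer (key names made up for illustration): records with the same key hash to the same partition and are therefore read back in the order they were sent.
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedOrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Same key => same partition => "book" is always consumed before "pen".
            producer.send(new ProducerRecord<>("edu-topic", "stationery", "book"));
            producer.send(new ProducerRecord<>("edu-topic", "stationery", "pen"));
            // Different key: may land on another partition, so its order
            // relative to the two records above is not guaranteed.
            producer.send(new ProducerRecord<>("edu-topic", "other", "marker"));
        }
    }
}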

When primary Kafka Broker dies ISR doesn't expand to maintain replication

I am testing the resilience of Kafka (Apache; kafka_2.12-1.1.0). What I expect is that the ISR of a topic should expand itself (i.e. replicate to an available node) whenever a node crashes. I spent 4 days googling for possible solutions, but to no avail.
I have a 3-node cluster and created 3 brokers and 3 ZooKeepers on it (1 node = 1 broker + 1 ZooKeeper) using Docker (wurstmeister).
I updated the below in server.properties:
offsets.topic.replication.factor=3
transaction.state.log.replication.factor=3
transaction.state.log.min.isr=2
min.insync.replicas=2
default.replication.factor=3
I started all brokers, waited a minute, and created a topic with replication factor 3 and min.insync.replicas 2:
bin/kafka-topics.sh --create --zookeeper 172.31.31.142:2181,172.31.26.102:2181,172.31.17.252:2181 --config 'min.insync.replicas=2' --replication-factor 3 --partitions 1 --topic test2
When I describe the topic I see the data below:
bash-4.4# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test2
Topic:test2 PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: test2 Partition: 0 Leader: 2 Replicas: 2,3,1 Isr: 2,3,1
So far so good. Now I start consumers, followed by producers. When consumption is at full throttle I kill broker #2. Now when I describe the same topic I see the below ([Edit-1]):
bash-4.4# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic test2
Topic:test2 PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: test2 Partition: 0 Leader: 3 Replicas: 2,3,1 Isr: 3,1
bash-4.4# bin/kafka-topics.sh --describe --zookeeper zookeeper:2181 --topic __consumer_offsets
Topic:__consumer_offsets PartitionCount:50 ReplicationFactor:3 Configs:segment.bytes=104857600,cleanup.policy=compact,compression.type=producer
Topic: __consumer_offsets Partition: 0 Leader: 1 Replicas: 1,2,3 Isr: 1,3
Topic: __consumer_offsets Partition: 1 Leader: 3 Replicas: 2,3,1 Isr: 1,3
.. .. ..
[end of edit-1]
I let the Kafka producer and consumer continue for a couple of minutes. Q1: why does Replicas still show 2 when broker 2 is down?
Now I added 2 more brokers to the cluster. While the producer and consumers continue I keep observing the ISR; the number of in-sync replicas doesn't increase, it stays at 3,1 only. Q2: why is the ISR not expanding even though 2 more brokers are available?
Then I stopped the producer and consumer, waited a couple of minutes, and re-ran the describe command; still the same result. When does the ISR expand its replication? If there are 2 more nodes available, why did the ISR not replicate?
I create my producer as follows:
props.put("acks", "all");
props.put("retries", 4);
props.put("batch.size", new Integer(args[2]));// 60384
props.put("linger.ms", new Integer(args[3]));// 1
props.put("buffer.memory", args[4]);// 33554432
props.put("bootstrap.servers", args[6]);// host:port,host:port,host:port etc
props.put("max.request.size", "10485760");// 1048576
and my consumer as follows:
props.put("group.id", "testgroup");
props.put("enable.auto.commit", "true");
props.put("auto.commit.interval.ms", args[2]);// 1000
props.put("session.timeout.ms", "30000");
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.ByteArrayDeserializer");
props.put("max.partition.fetch.bytes", args[3]);// 52428800
props.put("fetch.max.bytes", args[4]);// 1048576
props.put("fetch.message.max.bytes", args[5]);// 1048576
props.put("bootstrap.servers", args[6]);
props.put("max.poll.records", args[7]);
props.put("max.poll.interval.ms", "30000");
props.put("auto.offset.reset", "latest");
In a separate experiment, when I removed another broker I started seeing errors that the total number of in-sync replicas was less than the minimum required. Surprisingly, in this state the producer is not blocked, but I see the error in the broker's server.log. No new messages are getting enqueued. Q4: shouldn't the producer be blocked, instead of the error being thrown on the broker side? Or is my understanding wrong?
Any help please?
If I understand correctly, Kafka does not auto-rebalance when you add brokers. A down replica will not be reassigned unless you use the partition reassignment tool (kafka-reassign-partitions.sh).
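For what it's worth, on Kafka 2.4+ the same reassignment can also be triggered programmatically; a sketch, assuming a hypothetical replacement broker 4 (the question's 1.1.0 cluster would need the kafka-reassign-partitions.sh tool with a JSON plan instead):
import java.util.Arrays;
import java.util.HashMap;
import java.util.Map;
import java.util.Optional;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitionReassignment;
import org.apache.kafka.common.TopicPartition;

public class MoveReplica {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Replace dead broker 2 in the replica set of test2-0 with broker 4.
            Map<TopicPartition, Optional<NewPartitionReassignment>> moves = new HashMap<>();
            moves.put(new TopicPartition("test2", 0),
                    Optional.of(new NewPartitionReassignment(Arrays.asList(4, 3, 1))));
            admin.alterPartitionReassignments(moves).all().get();
        }
    }
}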
It's not clear what the differences between your environments are, but it looks like you didn't really kill a broker if it's still listed as a leader.
If you had two brokers down with min ISR of 2, then yes, you'll see errors. The producer should still be able to reach at least one broker, though, so I don't think it'll be completely blocked unless you set acks to all. The errors at the broker end are more related to placing replicas.
Recap of the meaning of replica: all partition replicas are replicas, even the leader; in other words, 2 replicas means you have the leader and one follower.
When you describe the topic, for your only partition you see "Replicas: 2,3,1 Isr: 3,1", which means that when the topic was created the leader partition was assigned to broker 2 (the first in the replicas list) and the followers were assigned to brokers 3 and 1; broker 2 is now the "preferred leader" for that partition.
This assignment is not going to change by itself (the leader may change, but not the "preferred leader"), so your followers will not move to other brokers; only the leader role can be given to another in-sync replica. (There is a property, auto.leader.rebalance.enable, which when set to true will allow the leader role to go back to the preferred leader once it is up again; otherwise the leader role will be kept by the newly elected leader.)
Next time, try to kill the leader broker and you will see that a new leader will be elected and used, but "Replicas: 2,3,1" will stay.
And if you set replication-factor=3, acks=all, and min.insync.replicas=2, you can produce as long as 2 replicas acknowledge the writes (the leader and one follower), but you will get logs on the broker if it is not possible to maintain 3 ISRs...
Hope this helps...
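A footnote on the auto.leader.rebalance.enable remark: on Kafka 2.4+ you can also trigger a preferred-leader election on demand from the AdminClient; a sketch with an assumed bootstrap address:
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.common.ElectionType;
import org.apache.kafka.common.TopicPartition;

public class PreferredLeaderElection {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        try (AdminClient admin = AdminClient.create(props)) {
            // Hand leadership of test2-0 back to the preferred leader
            // (the first broker in the replicas list), provided it is in sync.
            admin.electLeaders(ElectionType.PREFERRED,
                    Collections.singleton(new TopicPartition("test2", 0)))
                 .partitions().get();
        }
    }
}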