During planned application maintenance activities, there is a need to drain all the messages in a Kafka topic.
In MQ, we can monitor the queue depth and start the maintenance activities once all the messages are consumed. In Kafka, do we have a similar mechanism to find out whether all messages in a topic have been consumed, so that it is safe to shut down the producer and consumer?
Using the following command you can monitor the LAG of your consumer group; once the lag is 0, there are no more messages in the topic to consume:
./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group count_errors --describe
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG OWNER
count_errors logs 2 2908278 2908278 0 consumer-1_/10.8.0.55
count_errors logs 3 2907501 2907501 0 consumer-1_/10.8.0.43
count_errors logs 4 2907541 2907541 0 consumer-1_/10.8.0.177
count_errors logs 1 2907499 2907499 0 consumer-1_/10.8.0.115
count_errors logs 0 2907469 2907469 0 consumer-1_/10.8.0.126
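If you want a maintenance script to block until the group is fully drained, a minimal shell sketch along these lines is possible. It assumes the group and broker address from the command above, and that LAG is the 6th column as in this output format; the column layout differs between Kafka versions, so adjust the awk field if needed:
while true; do
  # Sum the LAG column (6th field here) across all partitions; skip non-data lines.
  total_lag=$(./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 \
    --group count_errors --describe 2>/dev/null \
    | awk '$6 ~ /^[0-9]+$/ { sum += $6 } END { print sum + 0 }')
  [ "$total_lag" -eq 0 ] && break
  echo "Total lag is $total_lag; waiting..."
  sleep 10
done
Once the loop exits, it should be safe to stop the consumers, provided the producers have already been stopped so no new messages arrive.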
I have a Kafka console consumer in version 1.1.0 that I use to get messages from Kafka.
When I use the kafka-console-consumer.sh script with the option --max-messages, it seems like it is committing wrong offsets.
I've created a topic and a consumer group and read some messages:
/kafka_2.11-1.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.23:9092 --describe --group my-consumer-group
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
test.offset 1 374 374 0 - - -
test.offset 0 0 375 375 - - -
Then I read 10 messages like this:
/kafka_2.11-1.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.23:9092 --topic test.offset --timeout-ms 1000 --max-messages 10 --consumer.config /kafka_2.11-1.1.0/config/consumer.properties
1 var_1
3 var_3
5 var_5
7 var_7
9 var_9
11 var_11
13 var_13
15 var_15
17 var_17
19 var_19
Processed a total of 10 messages
But now the offsets show that it read all the messages in the topic:
/kafka_2.11-1.1.0/bin/kafka-consumer-groups.sh --bootstrap-server 192.168.1.23:9092 --describe --group my-consumer-group
Note: This will not show information about old Zookeeper-based consumers.
Consumer group 'my-consumer-group' has no active members.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
test.offset 1 374 374 0 - - -
test.offset 0 375 375 0 - - -
And now when I want to read more messages, I get an error that there are no more messages in the topic:
/kafka_2.11-1.1.0/bin/kafka-console-consumer.sh --bootstrap-server 192.168.1.23:9092 --topic test.offset --timeout-ms 1000 --max-messages 10 --consumer.config /kafka_2.11-1.1.0/config/consumer.properties
[2020-02-28 08:27:54,782] ERROR Error processing message, terminating consumer process: (kafka.tools.ConsoleConsumer$)
kafka.consumer.ConsumerTimeoutException
at kafka.consumer.NewShinyConsumer.receive(BaseConsumer.scala:98)
at kafka.tools.ConsoleConsumer$.process(ConsoleConsumer.scala:129)
at kafka.tools.ConsoleConsumer$.run(ConsoleConsumer.scala:84)
at kafka.tools.ConsoleConsumer$.main(ConsoleConsumer.scala:54)
at kafka.tools.ConsoleConsumer.main(ConsoleConsumer.scala)
Processed a total of 0 messages
What am I doing wrong? Why did the offset move to the last message in the topic instead of advancing by just 10 messages?
This is about the auto-commit feature of the Kafka consumer. As mentioned in this link:
The easiest way to commit offsets is to allow the consumer to do it
for you. If you configure enable.auto.commit=true, then every five
seconds the consumer will commit the largest offset your client
received from poll(). The five-second interval is the default and is
controlled by setting auto.commit.interval.ms. Just like everything
else in the consumer, the automatic commits are driven by the poll
loop. Whenever you poll, the consumer checks if it is time to commit,
and if it is, it will commit the offsets it returned in the last poll.
So in your case, when your consumer polls, it receives up to 500 messages (the default value of max.poll.records), and after 5 seconds it commits the largest offset returned from the last poll (375 in your case), even though you specified --max-messages as 10.
--max-messages: The maximum number of messages to consume before exiting. If not set, consumption is continual.
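One possible workaround, if you want the committed offset to advance by only the messages actually printed: cap each poll() at the same count with max.poll.records, so the auto-commit (which commits what the last poll returned) cannot run ahead of --max-messages. A sketch against the same broker and topic as above; verify the behavior on your version:
# max.poll.records limits how many records a single poll() returns,
# so the offsets committed should not overshoot the 10 printed messages.
/kafka_2.11-1.1.0/bin/kafka-console-consumer.sh \
  --bootstrap-server 192.168.1.23:9092 \
  --topic test.offset \
  --timeout-ms 1000 \
  --max-messages 10 \
  --consumer-property max.poll.records=10 \
  --consumer.config /kafka_2.11-1.1.0/config/consumer.properties
Alternatively, setting enable.auto.commit=false in consumer.properties stops the console consumer from committing anything at all, at the cost of the offsets never advancing.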
During the last Kafka maintenance, which required a rolling restart of the brokers, we witnessed a reset of consumer group offsets for certain partitions.
At 11:14 am, everything is fine for the consumer group and we don't see any consumer lag:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0 105130857 105130857 0 st-...
...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 6 78591770 78591770 0 st-...
However, 5 minutes later, during the rolling restart of the brokers, one partition's offset was reset, leaving a consumer lag of millions of events.
$ bin/kafka-consumer-groups --bootstrap-server XXX:9093,XXX... --command-config secrets.config --group st-xx --describe
Note: This will not show information about old Zookeeper-based consumers.
[2019-08-26 12:44:13,539] WARN Connection to node -5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
[2019-08-26 12:44:13,707] WARN [Consumer clientId=consumer-1, groupId=st-xx] Connection to node -5 could not be established. Broker may not be available. (org.apache.kafka.clients.NetworkClient)
Consumer group 'st-xx' has no active members.
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 0 105132096 105132275 179
...
xxxxxxxxxxxxxxxxxxxxxxxxxxxxxx 6 15239401 78593165 63353764 ...
In the last two hours, the offset for the partition hasn't recovered and we now need to patch it manually. We had similar issues during the last rolling restart of brokers.
Has anyone seen something like this before? The only clue we could find is this ticket; however, we run Kafka version 1.0.1-kafka3.1.0.
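(For reference, one way to patch a committed offset manually is the --reset-offsets option of kafka-consumer-groups, available since Kafka 0.11 and usable only while the group has no active members. A sketch with placeholder topic, partition, and offset values:)
# Omit --execute (or use --dry-run) to preview the change first.
bin/kafka-consumer-groups --bootstrap-server XXX:9093 \
  --command-config secrets.config \
  --group st-xx --topic <topic>:6 \
  --reset-offsets --to-offset 78593165 --execute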
While the Kafka consumer application is up and running, we are able to use kafka-consumer-groups.sh to describe and retrieve the offset status.
However, if the application goes down, the command just displays that the group is REBALANCING.
Is there a way to see the lag of a particular consumer group even if the application is not up and running?
For example, I would like this output:
GROUP|TOPIC|PARTITION|CURRENT-OFFSET|LOG-END-OFFSET|LAG
hrly_ingest_grp|src_hrly|4|63832846|63832846|0
hrly_ingest_grp|src_hrly|2|38372346|38372346|0
hrly_ingest_grp|src_hrly|0|58642250|58642250|0
hrly_ingest_grp|src_hrly|5|96295762|96295762|0
hrly_ingest_grp|src_hrly|3|50602337|50602337|0
hrly_ingest_grp|src_hrly|1|29288993|29288993|0
You can use kt (Kafka tool) - https://github.com/fgeller/kt
Command to query offset and lag will be as follow:
kt group -group groupName -topic topicName -partitions all
Even when the consumer application is down, this command will show the offset of each consumer in that group:
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-group
Output:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
topic3 0 241019 395308 154289 consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1 consumer2
topic2 1 520678 803288 282610 consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1 consumer2
topic3 1 241018 398817 157799 consumer2-e76ea8c3-5d30-4299-9005-47eb41f3d3c4 /127.0.0.1 consumer2
topic1 0 854144 855809 1665 consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1 consumer1
topic2 0 460537 803290 342753 consumer1-3fc8d6f1-581a-4472-bdf3-3515b4aee8c1 /127.0.0.1 consumer1
topic3 2 243655 398812 155157 consumer4-117fe4d3-c6c1-4178-8ee9-eb4a3954bee0 /127.0.0.1 consumer4
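If you want the pipe-separated form requested above, the describe output can be reshaped with awk. A sketch that assumes the column layout shown here (TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG ...), which varies between Kafka versions:
group=hrly_ingest_grp
# Keep only data rows (numeric partition field) and print
# GROUP|TOPIC|PARTITION|CURRENT-OFFSET|LOG-END-OFFSET|LAG.
bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group "$group" \
  | awk -v g="$group" '$2 ~ /^[0-9]+$/ { print g "|" $1 "|" $2 "|" $3 "|" $4 "|" $5 }'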
As of the 0.8.1.1 release, Kafka provides the option to store offsets in Kafka itself, instead of ZooKeeper (see this).
I'm not able to figure out how to check the details of the consumed offsets, as the current tools only provide offset checks for ZooKeeper-based consumers (I'm referring to this).
If there are any tools available to check consumer offsets, please let me know.
I'm using Kafka 0.8.2 with offsets stored in Kafka. This tool works well for me:
./kafka-run-class.sh kafka.tools.ConsumerOffsetChecker \
  --topic your-topic \
  --group your-consumer-group \
  --zookeeper localhost:2181
You get all the information you need: topic size, consumer lag, owner.
The following single command gives enough detail:
kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-second-application
You will get details like this:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
first_topic 0 4 4 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first_topic 1 3 3 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first_topic 2 3 3 0 consumer-1-7cb31cf3-1621-4635-8f95-6ae85215b31b /10.200.237.53 consumer-1
first-topic 0 4 4 0 - - -
I'm using Kafka 2.1 and the kafka-consumer-groups command, which gives useful details like the current offset, log-end offset, lag, etc. The simplest command syntax is
kafka-consumer-groups.sh \
--bootstrap-server localhost:29092 \
--describe --group <consumer group name>
And the sample output looks like this:
TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
your.topic 1 17721650 17721673 23 consumer-159-beb9050b /1.2.3.4 consumer-159
your.topic 3 17718700 17718719 19 consumer-159-beb9050b /1.2.3.4 consumer-159
your.topic 0 17721700 17721717 17 consumer-159-beb9050b /1.2.3.4 consumer-159
HTH
I am under the impression that, with two brokers and sync turned on, my Kafka setup should keep working even if one of the brokers fails.
To test it, I made a new topic named topicname. Its description is as follows:
Topic:topicname PartitionCount:1 ReplicationFactor:1 Configs:
Topic: topicname Partition: 0 Leader: 0 Replicas: 0 Isr: 0
Then I ran producer.sh and consumer.sh in the following way:
bin/kafka-console-producer.sh --broker-list localhost:9092,localhost:9095 sync --topic topicname
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --topic topicname --from-beginning
While both brokers were running, I saw that messages were being received properly by the consumer, but when I killed one of the broker instances with the kill command, the consumer stopped showing me any new messages. Instead it showed the following error message:
WARN [ConsumerFetcherThread-console-consumer-57116_ip-<internalipvalue>-1438604886831-603de65b-0-0], Error in fetch Name: FetchRequest; Version: 0; CorrelationId: 865; ClientId: console-consumer-57116; ReplicaId: -1; MaxWait: 100 ms; MinBytes: 1 bytes; RequestInfo: [topicname,0] -> PartitionFetchInfo(9,1048576). Possible cause: java.nio.channels.ClosedChannelException (kafka.consumer.ConsumerFetcherThread)
[2015-08-03 12:29:36,341] WARN Fetching topic metadata with correlation id 1 for topics [Set(topicname)] from broker [id:0,host:<hostname>,port:9092] failed (kafka.client.ClientUtils$)
java.nio.channels.ClosedChannelException
at kafka.network.BlockingChannel.send(BlockingChannel.scala:100)
at kafka.producer.SyncProducer.liftedTree1$1(SyncProducer.scala:73)
at kafka.producer.SyncProducer.kafka$producer$SyncProducer$$doSend(SyncProducer.scala:72)
at kafka.producer.SyncProducer.send(SyncProducer.scala:113)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:58)
at kafka.client.ClientUtils$.fetchTopicMetadata(ClientUtils.scala:93)
at kafka.consumer.ConsumerFetcherManager$LeaderFinderThread.doWork(ConsumerFetcherManager.scala:66)
at kafka.utils.ShutdownableThread.run(ShutdownableThread.scala:60)
I had a similar problem; setting the producer config "topic.metadata.refresh.interval.ms" to -1 (or whatever value is suitable for you) solved the issue for me.
In my case, I had 3 brokers (a multi-broker setup on my local machine) and created the topic with 3 partitions and a replication factor of 2.
Test setup:
Before the producer config change:
With 3 brokers running, I killed one of the brokers after the producer started. The local ZooKeeper updated the ISR and topic metadata info (removed the down broker as leader), but the producer did not pick it up (maybe due to the default 10-minute refresh time), so message sends ended up failing with send exceptions.
After the producer config change (-1 in my case):
With 3 brokers running, I killed one of the brokers after the producer started. The local ZooKeeper updated the ISR info (removed the down broker as leader), the producer refreshed the new ISR/topic metadata info, and message sends did not fail.
A value of -1 makes it refresh the topic metadata on each failed attempt, so you may want to reduce the refresh time to something reasonable instead.
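For reference, a minimal sketch of the old (Scala) producer properties with this override; the broker list is a placeholder, and this setting does not exist in the newer Java producer (see metadata.max.age.ms below):
# Old producer config (Kafka 0.8.x): refresh topic metadata on every failed send.
metadata.broker.list=localhost:9092,localhost:9095
topic.metadata.refresh.interval.ms=-1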
I think there are two things that can make your consumer stop working after a broker goes down in a Kafka HA cluster:
--replication-factor should be bigger than 1 for your topic, so every topic partition can have at least one backup.
The replication factor for Kafka's internal topics should also be bigger than 1:
offsets.topic.replication.factor = 3
transaction.state.log.replication.factor = 3
transaction.state.log.min.isr = 2
These two modifications kept my producer and consumer working after broker shutdowns (5 brokers, with every broker going down once).
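To verify that the internal topics actually carry the intended replication, you can describe them; a sketch assuming ZooKeeper-based tooling (__consumer_offsets is the internal offsets topic, and the ZooKeeper address is a placeholder):
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic __consumer_offsets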
You can see in the topic description that you posted that your topic has only a single replica.
With a single replica there is no fault tolerance, and if broker 0 (the broker that contains the replica) goes away, the topic will be unavailable.
Create a topic with more replicas (with --replication-factor 3) to have fault tolerance in case of crashes.
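For example, assuming the same ZooKeeper-based tooling used in the question:
# Creates a new topic with 3 replicas per partition.
bin/kafka-topics.sh --zookeeper localhost:2181 --create \
  --topic topicname --partitions 1 --replication-factor 3
Note this only works for a new topic; changing the replication factor of an existing topic requires a partition reassignment instead.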
I ran into the same problem even when using a topic with a replication factor of 2.
Setting the following property on the producer worked for me:
"metadata.max.age.ms" (Kafka 0.8.2.1)
Otherwise, my producer waited 1 minute by default to fetch the new leader and start contacting it.
For a topic with replication factor N, Kafka tolerates up to N-1 server failures. E.g., a replication factor of 3 will allow you to handle up to 2 server failures.