We have 3 Kafka brokers on Linux RHEL 7.6 (3 Linux machines).
The Kafka version is 2.7.x.
The broker IDs are 1010, 1011, 1012.
From kafka-topics --describe we can see the following:
Topic: __consumer_offsets Partition: 0 Leader: none Replicas: 1011,1010,1012 Isr: 1010
Topic: __consumer_offsets Partition: 1 Leader: 1012 Replicas: 1012,1011,1010 Isr: 1012,1011
Topic: __consumer_offsets Partition: 2 Leader: 1011 Replicas: 1010,1012,1011 Isr: 1011,1012
Topic: __consumer_offsets Partition: 3 Leader: none Replicas: 1011,1012,1010 Isr: 1010
Topic: __consumer_offsets Partition: 4 Leader: 1011 Replicas: 1012,1010,1011 Isr: 1011
Topic: __consumer_offsets Partition: 5 Leader: none Replicas: 1010,1011,1012 Isr: 1010
From the ZooKeeper CLI we can see that broker ID 1010 is not registered:
[zk: localhost:2181(CONNECTED) 10] ls /brokers/ids
[1011, 1012]
and in state-change.log we can see the following:
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-6 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-9 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-8 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-11 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-10 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-46 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-45 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-48 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-47 as the local replica for the partition is in an offline log directory (state.change.logger)
[2021-12-16 14:15:36,170] WARN [Broker id=1010] Ignoring LeaderAndIsr request from controller 1010 with correlation id 485 epoch 323 for partition __consumer_offsets-49 as the local replica for the partition is in an offline log directory (state.change.logger)
From ls -ltr we can see that controller.log and state-change.log have not been updated since Dec 16:
-rwxr-xr-x 1 root kafka 343477146 Dec 16 14:15 controller.log
-rwxr-xr-x 1 root kafka 207911766 Dec 16 14:15 state-change.log
-rw-r--r-- 1 root kafka 68759461 Dec 16 14:15 kafkaServer-gc.log.6.current
-rwxr-xr-x 1 root kafka 6570543 Dec 17 09:42 log-cleaner.log
-rw-r--r-- 1 root kafka 524288242 Dec 20 00:39 server.log.10
-rw-r--r-- 1 root kafka 524289332 Dec 20 01:37 server.log.9
-rw-r--r-- 1 root kafka 524288452 Dec 20 02:35 server.log.8
-rw-r--r-- 1 root kafka 524288625 Dec 20 03:33 server.log.7
-rw-r--r-- 1 root kafka 524288395 Dec 20 04:30 server.log.6
-rw-r--r-- 1 root kafka 524288237 Dec 20 05:27 server.log.5
-rw-r--r-- 1 root kafka 524289136 Dec 20 06:25 server.log.4
-rw-r--r-- 1 root kafka 524288142 Dec 20 07:25 server.log.3
-rw-r--r-- 1 root kafka 524288187 Dec 20 08:21 server.log.2
-rw-r--r-- 1 root kafka 524288094 Dec 20 10:52 server.log.1
-rw-r--r-- 1 root kafka 323361 Dec 20 19:50 kafkaServer-gc.log.0.current
-rw-r--r-- 1 root kafka 323132219 Dec 20 19:50 server.log
-rwxr-xr-x 1 root kafka 15669106 Dec 20 19:50 kafkaServer.out
What we have done so far:
we restarted all 3 ZooKeeper servers
we restarted all Kafka brokers
but Kafka broker 1010 still appears with Leader: none, and it is still missing from the ZooKeeper data.
Additional info:
[zk: localhost:2181(CONNECTED) 11] get /controller
{"version":1,"brokerid":1011,"timestamp":"1640003679634"}
cZxid = 0x4900000b0c
ctime = Mon Dec 20 12:34:39 UTC 2021
mZxid = 0x4900000b0c
mtime = Mon Dec 20 12:34:39 UTC 2021
pZxid = 0x4900000b0c
cversion = 0
dataVersion = 0
aclVersion = 0
ephemeralOwner = 0x27dd7cf43350080
dataLength = 57
numChildren = 0
from kafka01
more meta.properties
#
#Tue Nov 16 07:45:36 UTC 2021
cluster.id=D3KpekCETmaNveBJzE6PZg
version=0
broker.id=1010
Other relevant details:
On the topics data disk we have the following files (in addition to the topic partition directories):
-rw-r--r-- 1 root kafka 91 Nov 16 07:45 meta.properties
-rw-r--r-- 1 root kafka 161 Dec 15 16:04 cleaner-offset-checkpoint
-rw-r--r-- 1 root kafka 13010 Dec 15 16:20 replication-offset-checkpoint
-rw-r--r-- 1 root kafka 1928 Dec 17 09:42 recovery-point-offset-checkpoint
-rw-r--r-- 1 root kafka 80 Dec 17 09:42 log-start-offset-checkpoint
Would deleting one or more of the above files help with our issue?
All that you've shown is that broker 1010 isn't healthy and that you probably have unclean leader election disabled.
ls /brokers/ids shows you the running, healthy brokers from Zookeeper's point of view.
Meanwhile, the data in the /brokers/topics znodes refers to a broker listed in the replica set that is not running, or at least not reporting back to Zookeeper, which you'd see in server.log.
If you have another broker, you can use the partition reassignment tool to manually remove or replace broker 1010 in the replica set of each partition it hosts, across all topics; that rewrites the stale replica information in Zookeeper and should force a new leader election.
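A minimal sketch of such a reassignment, assuming you want to drop broker 1010 from partition 0 of __consumer_offsets and keep 1011 and 1012 (add one entry per affected partition and topic; the broker address and file path are placeholders):

# describe the desired replica sets in a JSON file
cat > /tmp/reassign.json <<'EOF'
{
  "version": 1,
  "partitions": [
    { "topic": "__consumer_offsets", "partition": 0, "replicas": [1011, 1012] }
  ]
}
EOF

# start the reassignment (on Kafka 2.7 the tool also still accepts --zookeeper)
kafka-reassign-partitions.sh --bootstrap-server kafka02:9092 \
  --reassignment-json-file /tmp/reassign.json --execute

# check whether it has completed
kafka-reassign-partitions.sh --bootstrap-server kafka02:9092 \
  --reassignment-json-file /tmp/reassign.json --verify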
You shouldn't delete checkpoint files, but you can delete old, rotated log files after you've determined they're not needed anymore
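For example, a cautious cleanup of rotated application logs older than 14 days might look like this (the directory and retention period are assumptions for this sketch; run the print form first and review before deleting anything):

# list candidates first
find /opt/kafka/logs -name 'server.log.*' -mtime +14 -print
# then delete them once reviewed
find /opt/kafka/logs -name 'server.log.*' -mtime +14 -delete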
Related
I have this snippet in my log4j configuration file:
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.MaxFileSize=50MB
log4j.appender.kafkaAppender.MaxBackupIndex=4
log4j.appender.kafkaAppender.File=/var/log/kafka/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n
And in /var/log/kafka, indeed I see the server.log file. However, under my path /opt/kafka/logs, I see this below:
(server.log.2021-02-04-00 and continuing on...)
-rw-r--r-- 1 kafka kafka 942 Mar 5 08:57 server.log.2021-03-05-08
-rw-r--r-- 1 kafka kafka 942 Mar 5 09:57 server.log.2021-03-05-09
-rw-r--r-- 1 kafka kafka 2361 Mar 5 10:57 server.log.2021-03-05-10
-rw-r--r-- 1 kafka kafka 942 Mar 5 11:57 server.log.2021-03-05-11
-rw-r--r-- 1 kafka kafka 942 Mar 5 12:57 server.log.2021-03-05-12
-rw-r--r-- 1 kafka kafka 942 Mar 5 13:57 server.log.2021-03-05-13
-rw-r--r-- 1 kafka kafka 942 Mar 5 14:57 server.log.2021-03-05-14
-rw-r--r-- 1 kafka kafka 942 Mar 5 15:57 server.log.2021-03-05-15
-rw-r--r-- 1 kafka kafka 942 Mar 5 16:57 server.log.2021-03-05-16
-rw-r--r-- 1 kafka kafka 2361 Mar 5 17:57 server.log.2021-03-05-17
(all the way to server.log.2021-03-15-20)
How can I get these logs to delete properly? Why does it seem to create a log file per hour? Why would Kafka be logging to another path?
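For comparison, the log4j.properties that ships in Kafka's config directory defines its own kafkaAppender roughly like this (exact contents vary by version, so treat this as a sketch); if the broker is started with that file instead of your custom one, DailyRollingFileAppender with an hourly DatePattern will produce one server.log file per hour under $KAFKA_HOME/logs (e.g. /opt/kafka/logs) and never delete the old ones:

log4j.appender.kafkaAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n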
This is an example of how to create a new topic named test_test with 10 partitions:
kafka-topics.sh --create --zookeeper zookeeper01:2181 --replication-factor 3 --partitions 10 --topic test_test
Created topic "test_test".
[root@kafka01 kafka-data]# \ls -ltr | grep test_test
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-8
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-5
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-2
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-0
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-7
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-4
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-1
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-9
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-6
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-3
Now we want to add 10 more partitions to the topic test_test.
How can we add partitions to the existing 10?
You can run this command:
./bin/kafka-topics.sh --alter --bootstrap-server localhost:9092 --topic test_test --partitions 20
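To confirm the change, a describe of the same topic (same bootstrap server as in the command above) should now report 20 partitions:

./bin/kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic test_test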
By the way, there are two things to consider when changing partitions:
Decreasing the number of partitions is not allowed
If you add more partitions to a topic, key-based ordering of the messages can no longer be guaranteed
Note: If your Kafka version is older than 2.2, you must use the --zookeeper parameter instead of --bootstrap-server.
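In that case the same change would look roughly like this, using the ZooKeeper address the topic was created with:

./bin/kafka-topics.sh --alter --zookeeper zookeeper01:2181 --topic test_test --partitions 20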
Moreover, you should take into consideration that adding partitions triggers a rebalance, which makes all of this topic's consumers unavailable for a period of time.
A rebalance is the process of re-assigning partitions to consumers; it happens when new partitions are added, a new consumer joins, or a consumer leaves (which may happen due to an exception, network problems, or a deliberate exit).
In order to preserve reading consistency, during a rebalance the consumer group entirely stops receiving messages until the new partition assignment has taken place.
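If you want to see the effect of a rebalance, you can describe the consumer group before and after adding the partitions and compare the partition assignment (the group name is a placeholder):

./bin/kafka-consumer-groups.sh --bootstrap-server localhost:9092 --describe --group my-consumer-group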
This relatively short answer explains rebalance very well.
Usually after a Kafka cluster scratch installation I see these files under /data/kafka-logs (the Kafka broker data directory, where all topics should be located):
ls -ltr
-rw-r--r-- 1 kafka hadoop 0 Jan 9 10:07 cleaner-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 57 Jan 9 10:07 meta.properties
drwxr-xr-x 2 kafka hadoop 4096 Jan 9 10:51 _schemas-0
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 replication-offset-checkpoint
But on some other Kafka scratch installations we saw that the /data/kafka-logs folder is empty.
Does this indicate a problem?
Note: we have not created any topics yet.
I'm not sure when each checkpoint file is created (though they track log-cleaner and replication offsets), but I assume that meta.properties is created at broker startup.
Otherwise, you would see one folder per topic partition; for example, it looks like you had one topic created, _schemas.
If you only see one partition folder across multiple brokers, then the replication factor for that topic is set to 1.
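As a quick sanity check you can describe that topic and compare its replica list with the number of brokers (the broker address is a placeholder):

kafka-topics.sh --describe --bootstrap-server localhost:9092 --topic _schemas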
I am getting the warning below when trying to produce to (or consume from) a Kafka topic:
[2019-12-12 17:33:53,049] WARN [Producer clientId=console-producer] 1 partitions have leader brokers without a matching listener, including [wallet-V4_1_2-transactions-0] (org.apache.kafka.clients.NetworkClient)
On further investigation, I could see that the number of in-sync replicas is now 0.
Unfortunately, the partition count and replication factor were kept at 1 due to resource constraints.
bin/kafka-topics --bootstrap-server 192.158.30.74:9092 --topic wallet-V4_1_2-transactions --describe
Topic:wallet-V4_1_2-transactions PartitionCount:1 ReplicationFactor:1 Configs:
Topic: wallet-V4_1_2-transactions Partition: 0 Leader: none Replicas: 2 Isr:
In the segment directory, I can see there is an additional leader-epoch-checkpoint file:
root@8436f52ec04c:/var/lib/kafka/data/wallet-V4_1_2-transactions-0# ls -lart
total 656
-rwxrwxrwx 1 nobody nogroup 10485756 Dec 4 06:50 00000000000000003235.timeindex
-rwxrwxrwx 1 nobody nogroup 10 Dec 4 06:50 00000000000000003235.snapshot
-rwxrwxrwx 1 nobody nogroup 18 Dec 10 04:49 leader-epoch-checkpoint
drwxrwxrwx 2 nobody nogroup 4096 Dec 10 04:49 .
-rwxrwxrwx 1 nobody nogroup 636034 Dec 10 11:55 00000000000000003235.log
-rwxrwxrwx 1 nobody nogroup 10485760 Dec 10 11:55 00000000000000003235.index
drwxrwxrwx 106 nobody nogroup 12288 Dec 15 06:22 ..
As mentioned here, the consumer/producer should not be impacted by this.
Other topics have recovered from the Leader failure and I am able to query them.
However, this topic still gives the error:
[2019-12-15 09:54:12,303] WARN [Consumer clientId=consumer-1, groupId=console-consumer-83858] 1 partitions have leader brokers without a matching listener, including [wallet-V4_1_2-transactions-0] (org.apache.kafka.clients.NetworkClient)
How can I recover the already-published messages from Kafka and preserve this topic?
I am trying to understand the Kafka data logs. I can see them under the directory set in log.dirs, in directories named "topicname-partitionnumber". However, I would like to know what the different files captured under them are. Below is the screenshot for a sample log.
In Kafka, each partition has its own directory under log.dirs. Each partition is split into segments.
A segment is just a collection of messages. Instead of writing all messages into a single file, Kafka splits them into chunks of segments.
Whenever Kafka writes to a partition, it writes to the active segment. Each segment has a defined size limit. When the segment size limit is reached, Kafka closes that segment and opens a new one that becomes active. One partition can have one or more segments based on the configuration.
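The roll-over point is controlled by configuration; for example, at the broker level (the values shown are the usual defaults, so check your own server.properties):

# server.properties
log.segment.bytes=1073741824   # start a new segment after roughly 1 GiB
log.roll.hours=168             # or after 7 days, whichever comes first

The same limits can be overridden per topic with the segment.bytes and segment.ms topic configs.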
Each segment consists of three files: a .log, an .index and a .timeindex file.
There are three types of file for each Kafka topic partition:
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000000.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000000.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000000.timeindex
The 00000000000000000000 prefix of the log and index files is the name of the segment. It represents the offset of the first record written in that segment. For example, if there are 2 segments, i.e. segment 1 containing message offsets 0 and 1 and segment 2 containing message offsets 2 and 3, the files look like this:
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000000.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000000.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000000.timeindex
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000002.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000002.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000002.timeindex
The .log file stores the offset, the physical position of the message and the timestamp, along with the message content. When reading messages from Kafka at a particular offset, finding that offset in a huge log file would be an expensive task.
That's where the .index file becomes useful: it stores the offsets and the physical positions of the messages in the log file.
The .timeindex file provides the same kind of lookup, but based on the timestamps of the messages.
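You can dump these index mappings with the kafka-dump-log.sh tool that ships with Kafka; for instance (the path is just an example, point it at one of your own partition directories):

kafka-dump-log.sh --files /data/kafka-logs/test_test-0/00000000000000000000.index

The same tool also accepts .timeindex and .log files.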
The .log files are the segment files, i.e. the files the data is actually written to, named by the earliest contained message offset. The latest of those is the active segment, meaning the one that messages are currently appended to.
The .index files are the corresponding mappings from offset to position in the segment file; the .timeindex files are mappings from timestamp to offset.
"Below is the screenshot for a sample log"
You should add your screenshot and sample log; then we could give you a specific answer matching your expectations.
Before that, I can only give you some common knowledge:
E.g. on my CentOS machine, for the folder:
/root/logs/kafka/kafka.log/storybook_add-0
storybook_add is the topic name
(in the code, the real topic name is storybook-add)
It contains:
[root@xxx storybook_add-0]# ll
total 8
-rw-r--r-- 1 root root 10485760 Aug 28 16:44 00000000000000000023.index
-rw-r--r-- 1 root root 700 Aug 28 16:45 00000000000000000023.log
-rw-r--r-- 1 root root 10485756 Aug 28 16:44 00000000000000000023.timeindex
-rw-r--r-- 1 root root 9 Aug 28 16:44 leader-epoch-checkpoint
00000000000000000023.log: the log file, which stores the real data, i.e. the Kafka messages
00000000000000000023.index: the index file
00000000000000000023.timeindex: the time index file
00000000000000000023 is called the segment name.
Why 23?
Because the first message stored in 00000000000000000023.log has offset 23,
i.e. this partition had already received 23 messages (offsets 0 through 22) before this segment.
What does the message data look like?
We can see that by inspecting the file's content:
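One way to do that is with the kafka-dump-log.sh tool bundled with Kafka (the path below is the example path from above; --print-data-log includes the record payloads in the output):

kafka-dump-log.sh --print-data-log --files /root/logs/kafka/kafka.log/storybook_add-0/00000000000000000023.log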
For further basic concepts and the internal logic of Kafka storage, I recommend reading this article:
A Practical Introduction to Kafka Storage Internals