This is an example of how to create a new topic named test_test with 10 partitions:
kafka-topics.sh --create --zookeeper zookeeper01:2181 --replication-factor 3 --partitions 10 --topic test_test
Created topic "test_test".
[root@kafka01 kafka-data]# \ls -ltr | grep test_test
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-8
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-5
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-2
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-0
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-7
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-4
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-1
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-9
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-6
drwxr-xr-x 2 kafka hadoop 4096 Mar 22 16:53 test_test-3
Now we want to add 10 more partitions to the topic test_test.
How do we add additional partitions to the existing 10 partitions?
You can run this command:
./bin/kafka-topics.sh --alter --bootstrap-server localhost:9092 --topic test_test --partitions 20
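If you would rather do this from code than from the CLI, below is a minimal sketch using the Java AdminClient; the bootstrap server is taken from the command above, and error handling is kept to a minimum:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewPartitions;

import java.util.Collections;
import java.util.Properties;

public class IncreasePartitions {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Grow test_test from 10 to 20 partitions; shrinking is not supported.
            admin.createPartitions(
                    Collections.singletonMap("test_test", NewPartitions.increaseTo(20))
            ).all().get();
        }
    }
}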
By the way, there are two things to consider when changing partitions:
Decreasing the number of partitions is not allowed.
If you add more partitions to a topic, key-based ordering of messages can no longer be guaranteed.
Note: If your Kafka version is older than 2.2, you must use the --zookeeper parameter instead of --bootstrap-server.
Moreover, you should take into consideration that adding partitions triggers a rebalance, which makes all of this topic's consumers unavailable for a period of time.
A rebalance is the process of re-assigning partitions to consumers. It happens when new partitions are added, a new consumer joins, or a consumer leaves the group (which may happen due to an exception, network problems, or an initiated exit).
In order to preserve reading consistency, during a rebalance the consumer group entirely stops receiving messages until the new partition assignment has taken place.
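For illustration only, a Java consumer can observe these rebalances by registering a ConsumerRebalanceListener when it subscribes. This is a minimal sketch; the group id is a made-up example, and the callbacks just log what happened:

import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

import java.time.Duration;
import java.util.Collection;
import java.util.Collections;
import java.util.Properties;

public class RebalanceAwareConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "test_test-group"); // made-up group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("test_test"), new ConsumerRebalanceListener() {
                @Override
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // Called before a rebalance: commit offsets / flush state here.
                    System.out.println("Revoked: " + partitions);
                }

                @Override
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // Called once the new assignment (including any new partitions) is in place.
                    System.out.println("Assigned: " + partitions);
                }
            });

            while (true) {
                consumer.poll(Duration.ofMillis(500)); // rebalance callbacks fire inside poll()
            }
        }
    }
}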
Usually, after a Kafka cluster scratch installation, I see these files under /data/kafka-logs (the Kafka broker log directory, where all topics should be located):
ls -ltr
-rw-r--r-- 1 kafka hadoop 0 Jan 9 10:07 cleaner-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 57 Jan 9 10:07 meta.properties
drwxr-xr-x 2 kafka hadoop 4096 Jan 9 10:51 _schemas-0
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka hadoop 17 Jan 10 07:39 replication-offset-checkpoint
But on some other Kafka scratch installations we saw that the folder /data/kafka-logs is empty.
Does this indicate a problem?
Note: we have not created the topics yet.
I'm not sure when each checkpoint file is created (though they track log cleaner and replication offsets), but I assume that meta.properties is created at broker startup.
Otherwise, you would see one folder per topic partition; for example, it looks like you had one topic created, _schemas.
If you only see one partition folder across multiple brokers, then the replication factor for that topic is set to 1.
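One way to verify this is to describe the topic programmatically. Below is a minimal sketch with the Java AdminClient (the bootstrap server is a placeholder) that prints each partition's replicas for _schemas, so the replication factor is visible:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.TopicDescription;

import java.util.Collections;
import java.util.Properties;

public class CheckReplication {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            TopicDescription description = admin.describeTopics(Collections.singletonList("_schemas"))
                    .all().get().get("_schemas");
            // One line per partition; the length of the replica list is the replication factor.
            description.partitions().forEach(p ->
                    System.out.printf("partition %d replicas=%s%n", p.partition(), p.replicas()));
        }
    }
}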
I'm new to Kafka and trying out a few small use cases for my new application. The use case is basically:
Kafka producer —> Kafka consumer —> Flume Kafka source —> Flume HDFS sink.
When consuming (step 2), below is the sequence of steps:
1. consumer.poll(1.0)
1.a. Produce to multiple topics (multiple Flume agents are listening)
1.b. producer.poll()
2. flush() every 25 msgs
3. commit() every msg (async commit = false)
Question 1: Is this sequence of actions right?
Question 2: Will this cause any data loss, given that the flush is every 25 msgs and the commit is for every msg?
Question 3: What is the difference between poll() for the producer and poll() for the consumer?
Question 4: What happens when messages are committed but not flushed?
I would really appreciate it if someone could help me understand, with offset examples between producer and consumer, how poll, flush, and commit work.
Thanks in advance!!
Let us first understand Kafka in short:
What is a Kafka producer:
t.turner@devs:~/developers/softwares/kafka_2.12-2.2.0$ bin/kafka-console-producer.sh --broker-list 100.102.1.40:9092,100.102.1.41:9092 --topic company_wallet_db_v3-V3_0_0-transactions
>{"created_at":1563415200000,"payload":{"action":"insert","entity":{"amount":40.0,"channel":"INTERNAL","cost_rate":1.0,"created_at":"2019-07-18T02:00:00Z","currency_id":1,"direction":"debit","effective_rate":1.0,"explanation":"Voucher,"exchange_rate":null,expired","id":1563415200,"instrument":null,"instrument_id":null,"latitude":null,"longitude":null,"other_party":null,"primary_account_id":2,"receiver_phone":null,"secondary_account_id":362,"sequence":1,"settlement_id":null,"status":"success","type":"voucher_expiration","updated_at":"2019-07-18T02:00:00Z","primary_account_previous_balance":0.0,"secondary_account_previous_balance":0.0}},"track_id":"a011ad33-2cdd-48a5-9597-5c27c8193033"}
[2019-07-21 11:53:37,907] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 7 : {company_wallet_db_v3-V3_0_0-transactions=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
You can ignore the warning. It appears because Kafka could not find the topic, and it auto-creates the topic.
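For comparison, the console producer above is roughly equivalent to the following minimal Java producer sketch. The broker list and topic name come from the command; the string serializers and the abbreviated payload are assumptions:

import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

import java.util.Properties;

public class WalletTransactionProducer {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put("bootstrap.servers", "100.102.1.40:9092,100.102.1.41:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        // Abbreviated stand-in for the JSON payload shown above.
        String json = "{\"created_at\":1563415200000,\"payload\":{...}}";

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // send() is asynchronous; get() blocks until the broker acknowledges the record.
            producer.send(new ProducerRecord<>("company_wallet_db_v3-V3_0_0-transactions", json)).get();
            // flush() blocks until all buffered records have been sent.
            producer.flush();
        }
    }
}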
Let us see how Kafka has stored this message:
The producer creates a directory for the topic in the broker server at /kafka-logs (for Apache Kafka) or /kafka-cf-data (for the Confluent version):
drwxr-xr-x 2 root root 4096 Jul 21 08:53 company_wallet_db_v3-V3_0_0-transactions-0
cd into this directory and then list the files. You will see the .log file that stores the actual data:
-rw-r--r-- 1 root root 10485756 Jul 21 08:53 00000000000000000000.timeindex
-rw-r--r-- 1 root root 10485760 Jul 21 08:53 00000000000000000000.index
-rw-r--r-- 1 root root 8 Jul 21 08:53 leader-epoch-checkpoint
drwxr-xr-x 2 root root 4096 Jul 21 08:53 .
-rw-r--r-- 1 root root 762 Jul 21 08:53 00000000000000000000.log
If you open the log file, you will see:
^#^#^#^#^#^#^#^#^#^#^Bî^#^#^#^#^B<96>T<88>ò^#^#^#^#^#^#^#^#^Al^S<85><98>k^#^#^Al^S<85><98>kÿÿÿÿÿÿÿÿÿÿÿÿÿÿ^#^#^#^Aö
^#^#^#^Aè
{"created_at":1563415200000,"payload":{"action":"insert","entity":{"amount":40.0,"channel":"INTERNAL","cost_rate":1.0,"created_at":"2019-07-18T02:00:00Z","currency_id":1,"direction":"debit","effective_rate":1.0,"explanation":"Voucher,"exchange_rate":null,expired","id":1563415200,"instrument":null,"instrument_id":null,"latitude":null,"longitude":null,"other_party":null,"primary_account_id":2,"receiver_phone":null,"secondary_account_id":362,"sequence":1,"settlement_id":null,"status":"success","type":"voucher_expiration","updated_at":"2019-07-18T02:00:00Z","primary_account_previous_balance":0.0,"secondary_account_previous_balance":0.0}},"track_id":"a011ad33-2cdd-48a5-9597-5c27c8193033"}^#
Let us understand how the consumer would poll and read records:
What is Kafka Poll:
Kafka maintains a numerical offset for each record in a partition.
This offset acts as a unique identifier of a record within that
partition, and also denotes the position of the consumer in the
partition. For example, a consumer which is at position 5 has consumed
records with offsets 0 through 4 and will next receive the record with
offset 5. There are actually two notions of position relevant to the
user of the consumer: The position of the consumer gives the offset of
the next record that will be given out. It will be one larger than the
highest offset the consumer has seen in that partition. It
automatically advances every time the consumer receives messages in a
call to poll(long).
So, poll takes a duration as input, waits up to that duration for new records in the partition log (the 00000000000000000000.log file above), and returns them to the consumer.
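As a minimal sketch of such a poll loop (the group id is an assumption; the broker and topic come from the examples above), note that newer Java clients take a Duration rather than the long shown in the Javadoc quote:

import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class PollExample {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "100.102.1.40:9092");
        props.put("group.id", "wallet-consumer"); // assumed group id
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("company_wallet_db_v3-V3_0_0-transactions"));
            while (true) {
                // Wait up to 1 second for new records, then hand them to the application.
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("offset=%d value=%s%n", record.offset(), record.value());
                }
            }
        }
    }
}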
When are messages removed:
Kafka takes care of removing messages for you, based on the retention settings.
There are two ways:
Time-based: the default is 7 days. It can be altered using log.retention.ms=1680000
Size-based: it can be set like log.retention.bytes=10487500
Now let us look at the consumer:
t.turner@devs:~/developers/softwares/kafka_2.12-2.2.0$ bin/kafka-console-consumer.sh --bootstrap-server 100.102.1.40:9092 --topic company_wallet_db_v3-V3_0_0-transactions --from-beginning
{"created_at":1563415200000,"payload":{"action":"insert","entity":{"amount":40.0,"channel":"INTERNAL","cost_rate":1.0,"created_at":"2019-07-18T02:00:00Z","currency_id":1,"direction":"debit","effective_rate":1.0,"explanation":"Voucher,"exchange_rate":null,expired","id":1563415200,"instrument":null,"instrument_id":null,"latitude":null,"longitude":null,"other_party":null,"primary_account_id":2,"receiver_phone":null,"secondary_account_id":362,"sequence":1,"settlement_id":null,"status":"success","type":"voucher_expiration","updated_at":"2019-07-18T02:00:00Z","primary_account_previous_balance":0.0,"secondary_account_previous_balance":0.0}},"track_id":"a011ad33-2cdd-48a5-9597-5c27c8193033"}
^CProcessed a total of 1 messages
The above command, with --from-beginning, instructs the consumer to read from offset 0. Kafka assigns this console consumer a group_id and maintains the last offset that this group_id has read, so it can push newer messages to this consumer group.
What is Kafka Commit:
Commit is a way to tell Kafka which messages the consumer has successfully processed. It can be thought of as updating the lookup between group-id and current_offset + 1.
You can manage this using the commitAsync() or commitSync() methods of the consumer object.
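A minimal sketch of manual commits, assuming a consumer built and subscribed as in the poll sketch above with enable.auto.commit=false (the process() handler is hypothetical):

import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

import java.time.Duration;

public class CommitExample {
    static void consumeAndCommit(KafkaConsumer<String, String> consumer) {
        ConsumerRecords<String, String> records = consumer.poll(Duration.ofSeconds(1));
        records.forEach(record -> process(record.value()));

        // Synchronous commit: blocks until Kafka has stored group-id -> last processed offset + 1.
        consumer.commitSync();

        // Non-blocking alternative (normally you pick one of the two):
        consumer.commitAsync((offsets, exception) -> {
            if (exception != null) {
                exception.printStackTrace();
            }
        });
    }

    private static void process(String value) {
        // Placeholder for the application's processing logic.
    }
}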
Reference: https://kafka.apache.org/10/javadoc/org/apache/kafka/clients/consumer/KafkaConsumer.html
I am trying to understand the Kafka data logs. I can see the logs under the directory set in log.dirs, named "Topicname_partitionnumber". However, I would like to know what the different files captured under it are. Below is the screenshot for a sample log.
In Kafka, each partition has its own directory under log.dirs. Each partition is split into segments.
A segment is just a collection of messages. Instead of writing all messages into a single file, Kafka splits them into chunks called segments.
Whenever Kafka writes to a partition, it writes to the active segment. Each segment has a defined size limit. When the segment size limit is reached, Kafka closes that segment and opens a new one that becomes active. One partition can have one or more segments, depending on the configuration.
Each segment contains three files: segment.log, segment.index and segment.timeindex.
There are three types of file for each Kafka topic partition:
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000000.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000000.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000000.timeindex
The 00000000000000000000 in front of the log and index files is the name of the segment. It represents the offset of the first record written in that segment. For example, if there are 2 segments, i.e. segment 1 containing message offsets 0 and 1 and segment 2 containing message offsets 2 and 3, the files look like this:
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000000.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000000.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000000.timeindex
-rw-r--r-- 1 kafka hadoop 10485760 Dec 3 23:57 00000000000000000002.index
-rw-r--r-- 1 kafka hadoop 148814230 Oct 11 06:50 00000000000000000002.log
-rw-r--r-- 1 kafka hadoop 10485756 Dec 3 23:57 00000000000000000002.timeindex
The .log file stores the offset, the physical position of the message, and the timestamp, along with the message content. When reading messages from Kafka at a particular offset, finding that offset in a huge log file would be an expensive task.
That's where the .index file becomes useful. It stores the offsets and physical positions of the messages in the log file.
The .timeindex file does the same, but based on the timestamp of the messages.
The .log files are the segment files, i.e. the files the data is actually written to, named by the earliest contained message offset. The latest of those is the active segment, meaning the one that messages are currently appended to.
The .index files are the corresponding mappings from offset to position in the segment file. The .timeindex files are mappings from timestamp to offset.
Below is the screenshot for a sample log
You should add your screenshot and sample log; then we could give you a specific answer.
Before that, I can only give you some general knowledge.
For example, on my CentOS machine, for the folder:
/root/logs/kafka/kafka.log/storybook_add-0
storybook_add is the topic name
(in code, the real topic name is storybook-add)
It contains:
[root@xxx storybook_add-0]# ll
total 8
-rw-r--r-- 1 root root 10485760 Aug 28 16:44 00000000000000000023.index
-rw-r--r-- 1 root root 700 Aug 28 16:45 00000000000000000023.log
-rw-r--r-- 1 root root 10485756 Aug 28 16:44 00000000000000000023.timeindex
-rw-r--r-- 1 root root 9 Aug 28 16:44 leader-epoch-checkpoint
00000000000000000023.log: the log file; it stores the real data, i.e. the Kafka messages
00000000000000000023.index: the index file
00000000000000000023.timeindex: the time index file
00000000000000000023 is called the segment name.
Why is it 23? Because the first message stored in 00000000000000000023.log has offset 23; Kafka had already received 23 messages in this partition before this segment.
What does the message data look like? We can see it by viewing the log file's content.
For further basic concepts and the logic of Kafka, I recommend reading this article:
A Practical Introduction to Kafka Storage Internals
I have a two-node Kafka cluster with 48 GB of disk allotted to each node.
The server.properties is set to retain logs up to 48 hours or log segments up to 1 GB. Here it is:
log.retention.hours=48
log.retention.bytes=1073741824
log.segment.bytes=1073741824
I have 30 partitions for a topic. Here are the disk usage stats for one of these partitions:
-rw-r--r-- 1 root root 1.9M Apr 14 00:06 00000000000000000000.index
-rw-r--r-- 1 root root 1.0G Apr 14 00:06 00000000000000000000.log
-rw-r--r-- 1 root root 0 Apr 14 00:06 00000000000000000000.timeindex
-rw-r--r-- 1 root root 10M Apr 14 12:43 00000000000001486744.index
-rw-r--r-- 1 root root 73M Apr 14 12:43 00000000000001486744.log
-rw-r--r-- 1 root root 10M Apr 14 00:06 00000000000001486744.timeindex
As you can clearly see, we have a log segment of 1 GB. But as per my understanding, it should have already been deleted. Also, it's been more than 48 hours since these logs were rolled by Kafka. Thoughts?
In your case, you set log.retention.bytes and log.segment.bytes to the same value, which means there is never a candidate for a deletable segment, so no deletion happens.
The algorithm is (see the sketch after this list):
First, calculate the difference between the total log size and log.retention.bytes. In your case, the difference is 73 MB (73 MB + 1 GB - 1 GB).
Iterate over all the non-active log segments and compare each one's size with the diff:
If diff > log segment size, mark this segment deletable and decrement the diff by the segment size.
Otherwise, mark this segment undeletable and try the next log segment.
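Here is a simplified sketch of that size-based check. It is illustrative only, not the actual Kafka source; the sizes in main() mirror your listing (one closed 1 GB segment, a 73 MB active segment, retention.bytes = 1 GB):

import java.util.ArrayList;
import java.util.List;

public class SizeRetentionSketch {
    // Returns the sizes of the non-active segments that would be marked deletable.
    static List<Long> deletableSegments(long[] nonActiveSegmentSizes, long totalLogSize, long retentionBytes) {
        long diff = totalLogSize - retentionBytes;
        List<Long> deletable = new ArrayList<>();
        for (long segmentSize : nonActiveSegmentSizes) {
            if (diff > segmentSize) {
                // The whole segment fits into the amount to reclaim: mark it deletable.
                deletable.add(segmentSize);
                diff -= segmentSize;
            }
            // Otherwise the segment stays, and we try the next one.
        }
        return deletable;
    }

    public static void main(String[] args) {
        long GB = 1024L * 1024 * 1024;
        long MB = 1024L * 1024;
        // diff = (1 GB + 73 MB) - 1 GB = 73 MB, which is not > 1 GB, so nothing is deletable.
        System.out.println(deletableSegments(new long[]{GB}, GB + 73 * MB, GB));
    }
}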
Answering my own question:
Let's say that log.retention.hours is 24 hours and log.retention.bytes and log.segment.bytes are both set to 1 GB. When the size of the log segment reaches 1 GB (call this the Old Log), a new log segment is created (call this the New Log). The Old Log is then deleted 24 hours after the New Log is created.
In my case, the New Log was created about 25 hours before I posted this question. When I dynamically changed the retention.ms value for the topic (which is maintained by ZooKeeper, not the Kafka cluster, and therefore does not require a Kafka restart) to 24 hours, my old logs were immediately deleted.
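For reference, the same kind of dynamic retention.ms change can also be made from code with the Java AdminClient (incrementalAlterConfigs requires clients and brokers 2.3+). This is only a sketch; the topic name and bootstrap server are placeholders:

import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.AlterConfigOp;
import org.apache.kafka.clients.admin.ConfigEntry;
import org.apache.kafka.common.config.ConfigResource;

import java.util.Collection;
import java.util.Collections;
import java.util.Map;
import java.util.Properties;

public class SetTopicRetention {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker

        try (AdminClient admin = AdminClient.create(props)) {
            ConfigResource topic = new ConfigResource(ConfigResource.Type.TOPIC, "my-topic"); // placeholder topic
            // Set retention.ms to 24 hours; the change takes effect without a broker restart.
            AlterConfigOp setRetention =
                    new AlterConfigOp(new ConfigEntry("retention.ms", "86400000"), AlterConfigOp.OpType.SET);
            Map<ConfigResource, Collection<AlterConfigOp>> updates =
                    Collections.singletonMap(topic, Collections.singletonList(setRetention));
            admin.incrementalAlterConfigs(updates).all().get();
        }
    }
}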