I installed Kafka on a Linux server. I defined a topic with a few partitions. I know that each partition is mapped to a physical file on disk, but I don't know where it is.
Where are the partition files saved?
In your config/server.properties you'll find a section on "Log Basics". The property log.dirs defines where your logs/partitions will be stored on disk.
By default on Linux they are stored in /tmp/kafka-logs. If you navigate to this folder you will see something like this:
recovery-point-offset-checkpoint
replication-offset-checkpoint
topic-0
msg-0
msg-1
This means you have two topics: topic, which has 1 partition, and msg, which has 2.
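For example, you could confirm the configured location and then list the partition directories like this (a sketch; the path and topic names will differ on your machine):
grep "^log.dirs" config/server.properties
ls /tmp/kafka-logs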
As Ludd noted, you can find the location in the config/server.properties file by looking for log.dirs.
Try running this command:
bin/kafka-topics.sh --zookeeper localhost:2181 --describe --topic test
You will get output like this:
Topic:test Partition: 0 Leader: 1 Replicas: 1,2,0 Isr: 1,2,0
Now go to the config directory and look at the broker configuration:
cat server.properties
and search for broker.id.
If broker.id matches the leader number, then that partition of the topic is stored on that broker.
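For example, from the Kafka installation directory (a sketch):
grep "^broker.id" config/server.properties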
I'm trying to delete a topic-level configuration, retention.bytes, that was applied to my topic. But when I try to delete the config with the command described in the Kafka documentation, I get the following message:
kafka-0:/opt/bitnami/kafka/bin$ kafka-configs.sh --bootstrap-server kafka:9092 --entity-type topics --entity-name foo --alter --delete-config retention.bytes
Invalid config(s): retention.bytes
I've already dug into Kafka's source code, but the only thing it mentions is that it throws this error if "any of the configs to be deleted does not exist". However, when I describe my topic, I can see the config there:
kafka-0:/opt/bitnami/kafka/bin$ kafka-topics.sh --bootstrap-server kafka:9092 --describe --topic foo
Topic: foo PartitionCount: 1 ReplicationFactor: 3 Configs: cleanup.policy=compact,delete,flush.ms=1000,segment.bytes=1073741824,retention.ms=7776000000,flush.messages=10000,max.message.bytes=1000012,retention.bytes=1073741824
Topic: foo Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 2,1,0
So obviously it does exist...
Could anyone pinpoint what the problem could be here?
I needed to do the same thing today and hit what looks like the same issue.
After some tinkering, I figured it out.
First, I was able to add retention.bytes (I increased it 10 times compared to the default 10737418240). After that, I was able to delete it, and it fell back to the default.
So I think you can only delete it if you override it yourself. Otherwise the topic does not have that config and uses the default value, which means you cannot make the retention size unlimited.
Edit:
Overriding the default policy with -1 will enable unlimited retention (do so at your own risk).
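A minimal sketch of that sequence, reusing the topic name foo and bootstrap server from the question (the override value here is just an example):
kafka-configs.sh --bootstrap-server kafka:9092 --entity-type topics --entity-name foo --alter --add-config retention.bytes=107374182400
kafka-configs.sh --bootstrap-server kafka:9092 --entity-type topics --entity-name foo --alter --delete-config retention.bytes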
It's early days for me in learning Kafka, and I am checking out every Kafka property/concept on my local machine.
So I came across the property min.insync.replicas, and here is my understanding. Please correct me if I've misunderstood anything.
Once a message is sent to a topic, the message must be written to at least min.insync.replicas replicas.
min.insync.replicas also includes the leader.
If the number of available live brokers (indirectly, in-sync replicas) is less than the specified min.insync.replicas, then the producer will raise an exception and fail to publish the message.
Following are the steps I followed to create the above scenario:
Started 3 brokers locally with broker ids 0, 1 and 2.
Created the topic insync and set min.insync.replicas to 2 using the following command:
sudo ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic insync --config min.insync.replicas=2
Describing the topic resulted in the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
At this point, I made sure the property I provided was picked up by Kafka.
I started sending messages and consuming them from the terminal using the following commands:
Producer: ./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --producer.config ../config/producer.properties
Consumer: ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic insync
At this point, I was able to send and receive messages successfully.
Brought down 2 brokers (0 and 2); describing the topic then resulted in the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 1 Replicas: 2,0,1 Isr: 1
At this point, the in-sync replicas are down to just one (Isr: 1).
Then I tried to produce messages and it worked. I was able to send messages from the console producer and I could see those messages in the console consumer.
My Kafka version: kafka_2.10-0.10.0.0
Following are the producer properties:
bootstrap.servers=localhost:9092
compression.type=none
batch.size=20
acks=all
I expected the producer to fail with NotEnoughReplicasException, as mentioned in this:
public class NotEnoughReplicasException extends RetriableException
Number of insync replicas for the partition is lower than min.insync.replicas
but it worked normally.
Am I missing something? How can I create the scenario?
*************** EDIT **********************
Instead of producing the messages from the console producer, I tried to generate messages from Java code. This time, I got the expected exception in the Kafka broker, although I expected it in the producer (Java code). As this experiment raised more questions, I've posted another question.
Is acks set to "all"? If not, try setting it to all.
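You can also request full acknowledgement from the console producer directly, as a sketch (assuming your version's kafka-console-producer.sh supports the --request-required-acks flag; -1 is equivalent to acks=all):
./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --request-required-acks -1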
I believe that error is for a transactional producer; you may need to add this config:
transactional.id=TID-TEST
If it is still not working, please check the replication factor and min.insync.replicas for the internal topic __transaction_state.
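For example, you could describe that internal topic (a sketch, assuming your broker version actually creates __transaction_state; adjust the ZooKeeper address):
./kafka-topics.sh --zookeeper localhost:2181 --describe --topic __transaction_state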
I performed a reassignment of a topic within my cluster. There were some problems throughout (running out of disk space on one of the target brokers), but I managed to fix it and the process completed successfully.
It seems, however, that while one of the partitions was reassigned to other brokers, the data was not removed from the source broker's disk. And since the partition is quite big, I'd like it gone.
For obvious reasons I do not want to log in to a shell and rm -rf the directory. What steps could I take to debug why the data was not deleted, and then how could I "encourage" the cluster to perform a cleanup?
For a while I thought that the retention policy might kick in and delete the data, but it's set to run every 10 minutes and it's been over a day since the reassignment finished.
The replicas are as follows:
# bin/kafka-topics.sh --describe --zookeeper 1.2.3.4 --topic topic-name
Topic:topic-name PartitionCount:4 ReplicationFactor:3 Configs:retention.ms=3153600000000,compression.type=lz4
Topic: topic-name Partition: 0 Leader: 1014 Replicas: 1014,1012,1002 Isr: 1012,1002,1014
Topic: topic-name Partition: 1 Leader: 1007 Replicas: 1007,1006,1003 Isr: 1006,1007,1003 <--- this is the partition
Topic: topic-name Partition: 2 Leader: 1013 Replicas: 1013,1008,1001 Isr: 1013,1008,1001
Topic: topic-name Partition: 3 Leader: 1011 Replicas: 1011,1016,1010 Isr: 1010,1011,1016
And here we can see that broker 1008 holds two partitions: 2 (which it should) and 1 (which it should not; this is the one we need gone).
/data_disk_0/kafka-logs# cat meta.properties | grep broker.id
broker.id=1008
/data_disk_0/kafka-logs# du -h --max-depth=1 . | grep topic-name-1
295G ./topic-name-2
292G ./topic-name-1
Edit: Curiously, all files in the topic directory (/data_disk_0/kafka-logs/topic-name-1/*) are held open by Kafka (checked with lsof). I don't know whether it's default behaviour for Kafka to open all files in its data dir regardless of their status, or whether it means these files are still being used somehow.
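One way to see which replicas the broker itself still reports in its log dirs (a sketch; kafka-log-dirs.sh ships with newer Kafka releases, and the bootstrap address here is assumed):
bin/kafka-log-dirs.sh --bootstrap-server localhost:9092 --describe --broker-list 1008 --topic-list topic-name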
It is not possible to delete partitions from a topic in Kafka. A partition is a file; Kafka assigns data to each partition depending on the key. Let's say partition 2 has data for the key AAA, but the AAA key is no longer produced; then you might see partition 2 unused.
Take a look at this video:
https://developer.confluent.io/learn-kafka/apache-kafka/partitions/#:~:text=Kafka%20Partitioning&text=Partitioning%20takes%20the%20single%20topic,many%20nodes%20in%20the%20cluster.
The only way is to delete the topic and create it again with the correct number of partitions.
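A sketch of that, reusing the ZooKeeper address and topic name from the question (the ZooKeeper port is assumed; note that this permanently deletes the topic's data, and delete.topic.enable must be true on the brokers):
bin/kafka-topics.sh --zookeeper 1.2.3.4:2181 --delete --topic topic-name
bin/kafka-topics.sh --zookeeper 1.2.3.4:2181 --create --topic topic-name --partitions 4 --replication-factor 3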
We are running a 6-node cluster of Kafka 0.11.0. We have set a global as well as a per-topic retention in bytes, neither of which is being applied. There are no errors that I can see in the logs; nothing is being deleted by size (the time-based retention does seem to be working).
See relevant configs below:
./config/server.properties :
# global retention 75GB or 60 days, segment size 512MB
log.retention.bytes=75000000000
log.retention.check.interval.ms=60000
log.retention.hours=1440
log.cleanup.policy=delete
log.segment.bytes=536870912
topic configuration (30GB):
[tstumpges@kafka-02 kafka]$ bin/kafka-topics.sh --zookeeper zk-01:2181/kafka --describe --topic stg_logtopic
Topic:stg_logtopic PartitionCount:12 ReplicationFactor:3 Configs:retention.bytes=30000000000
Topic: stg_logtopic Partition: 0 Leader: 4 Replicas: 4,5,6 Isr: 4,5,6
Topic: stg_logtopic Partition: 1 Leader: 5 Replicas: 5,6,1 Isr: 5,1,6
...
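The per-topic override can also be confirmed directly with kafka-configs.sh (a sketch, using the same ZooKeeper path as above):
bin/kafka-configs.sh --zookeeper zk-01:2181/kafka --describe --entity-type topics --entity-name stg_logtopic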
And disk usage shows 910GB for one partition!
[tstumpges@kafka-02 kafka]$ sudo du -s -h /data1/kafka-data/*
82G /data1/kafka-data/stg_logother3-2
155G /data1/kafka-data/stg_logother2-9
169G /data1/kafka-data/stg_logother1-6
910G /data1/kafka-data/stg_logtopic-4
I can see there are plenty of segment log files (512MB each) in the partition directory... what is going on?!
Thanks in advance,
Thunder
Found the answer to this via the Kafka users mailing list. We were apparently hitting Kafka bug KAFKA-6030 (Integer overflow in log cleaner cleanable ratio computation).
Upgrading to v1.0.0 has fixed this for us!
I created a topic in my Kafka cluster with the following command:
/opt/kafka/bin/kafka-topics.sh --zookeeper kaf1:2181,kaf2:2181,kaf3:2181 --create --topic mytopic --partitions 10 --replication-factor 2 --config retention.bytes=1074000000 --config delete.retention.ms=6000 --config segment.bytes=105000000
So, if I understand the documentation correctly, I have a topic with 10 partitions, replicated 2 times across my 3 Kafka hosts.
Next, each Kafka host should retain 1 GB of data. Each segment has a size of 100 MB, and all old logs should be deleted after 1 minute.
Now, when I do a du -h on the logs directory on one of my Kafka hosts, I get this:
1,2G ./mytopic-2
1,1G ./mytopic-8
1,2G ./mytopic-9
1,1G ./mytopic-6
1,1G ./mytopic-3
1,1G ./mytopic-0
1,2G ./mytopic-4
7,6G .
I thought I would get 1 GB for the whole directory, not 1 GB per partition.
So my question is simple: is the topic configuration applied per partition or to the whole topic?
Thanks.
Please see the picture below (the distribution of partitions across the cluster nodes might be different):