Kafka: dynamically query configurations - apache-kafka

Is there a way to access the configuration values in server.properties without direct access to that file itself?
I thought that:
kafka-configs.sh --describe --entity-type topics --zookeeper localhost:2181
might give me what I want, but I did not see the values set in server.properties, just the following ('ddos' is a topic I created with kafka-topics.sh):
Configs for topics:ddos are
Configs for topics:__consumer_offsets are segment.bytes=104857600,cleanup.policy=compact
I was thinking I'd also see globally configured options, like this from the default configuration I have:
log.retention.hours=168
Thanks in advance.

Since Kafka 0.11, you can use the AdminClient describeConfigs() API to retrieve the configuration of brokers.
For example, here is skeleton code to retrieve the configuration of broker 0:
import java.io.FileInputStream;
import java.util.*;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.DescribeConfigsResult;
import org.apache.kafka.common.config.ConfigResource;

Properties adminProps = new Properties();
adminProps.load(new FileInputStream("admin.properties"));
try (AdminClient admin = AdminClient.create(adminProps)) {
    // Broker resources are addressed by broker id, here "0"
    ConfigResource cr = new ConfigResource(ConfigResource.Type.BROKER, "0");
    DescribeConfigsResult dcr = admin.describeConfigs(Collections.singleton(cr));
    System.out.println(dcr.all().get());
}
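To pull out a single value, such as the log.retention.hours setting from the question, you can index into the result map. A minimal sketch, meant to live inside the try block above (it also needs the Config and ConfigEntry imports from org.apache.kafka.clients.admin):
// dcr.all().get() yields a Map<ConfigResource, Config>
Config brokerConfig = dcr.all().get().get(cr);
ConfigEntry retention = brokerConfig.get("log.retention.hours");
System.out.println(retention.name() + " = " + retention.value());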

Related

Changing the Dynamic Default Broker Config using Java AdminClient

Currently I am changing the default broker configurations in my kafka cluster using the kafka-configs.sh script.
./kafka-configs.sh --bootstrap-server <bootstrap_server> --entity-type brokers --entity-default --alter --add-config max.connections=100
The above command sets the default value of the max.connections configuration to 100 on all brokers in the cluster. I would like to achieve the same through Java.
I tried using the alterConfigs method of the AdminClient class. With it I am able to set the configuration value, but only at the level of an individual broker.
Because of this I would have to execute alterConfigs for each and every broker in the cluster, which is not scalable.
Could anyone help me change the default broker configuration using the AdminClient class, similar to what I was doing with the shell script?
Thank you.
You could use the code below to set configs at broker-default level:
import java.util.*;
import org.apache.kafka.clients.admin.*;
import org.apache.kafka.common.config.ConfigResource;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
// An empty entity name targets the cluster-wide broker default (the equivalent of --entity-default)
ConfigResource configResource = new ConfigResource(ConfigResource.Type.BROKER, "");
ConfigEntry entry = new ConfigEntry("max.connections", String.valueOf(100));
AlterConfigOp op = new AlterConfigOp(entry, AlterConfigOp.OpType.SET);
Map<ConfigResource, Collection<AlterConfigOp>> configs = new HashMap<>(1);
configs.put(configResource, Collections.singletonList(op));
try (Admin admin = AdminClient.create(props)) {
    admin.incrementalAlterConfigs(configs).all().get();
}
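Note that incrementalAlterConfigs() (added in Kafka 2.3) only touches the entries you pass in, whereas the older alterConfigs() replaced the entire configuration of the resource, which is why the latter has been deprecated.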

What is entity-name and entity-type in Apache Kafka?

I'm recently learning Apache Kafka. In the quick start tutorial, I found the following command:
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'producer_byte_rate=1024,consumer_byte_rate=2048' --entity-name clientA --entity-type clients
But I can't find the explanation of these two required parameters: "entity-name" and "entity-type" in the documentation. Could someone elaborate what these are for? Where can I set it?
If you check out the source, you can see:
val entityType = parser.accepts("entity-type", "Type of entity (topics/clients)")
....
val entityName = parser.accepts("entity-name", "Name of entity (topic name/client id)")
So it's a switch: you can configure either topics or clients, referencing them by topic name or client ID respectively.
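Later Kafka versions extend the same switch with more entity types, such as brokers and users. For example, the same tool can target a topic instead of a client (a hedged example, assuming a topic named my-topic exists):
bin/kafka-configs.sh --zookeeper localhost:2181 --alter --add-config 'retention.ms=1000' --entity-name my-topic --entity-type topics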

How can I get the group.id of a topic in command line in Kafka?

I installed Kafka on my server and want to learn how to use it. I found sample code written in Scala; below is part of it:
def createConsumerConfig(zookeeper: String, groupId: String): ConsumerConfig = {
  val props = new Properties()
  props.put("zookeeper.connect", zookeeper)
  props.put("group.id", groupId)
  props.put("auto.offset.reset", "largest")
  props.put("zookeeper.session.timeout.ms", "400")
  props.put("zookeeper.sync.time.ms", "200")
  props.put("auto.commit.interval.ms", "1000")
  val config = new ConsumerConfig(props)
  config
}
but I don't know how to find the group id on my server.
The group id is something you define yourself for your consumer by providing a string id for it. All consumers started with the same id "cooperate" and read topics in a coordinated way, with each consumer instance handling a subset of the messages in a topic. Providing a non-existent group id makes Kafka treat it as a new consumer group, creating a new entry in ZooKeeper where committed offsets are stored.
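To make this concrete, here is a minimal sketch using the modern Java consumer (the newer equivalent of the Scala consumer in the question); the broker address, group name, and topic are all placeholders:
import java.util.*;
import org.apache.kafka.clients.consumer.KafkaConsumer;

Properties props = new Properties();
props.put("bootstrap.servers", "localhost:9092");
props.put("group.id", "my-group"); // any string you choose; this is the group id
props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
consumer.subscribe(Collections.singletonList("mytopic"));
Start a second instance with the same group.id and the two will split the partitions between them; give it a different group.id and each will read the whole topic.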
You can open a ZooKeeper shell and list the path where Kafka stores consumers' offsets, like this:
./bin/zookeeper-shell.sh localhost:2181
ls /consumers
You'll get a list of all groups.
EDIT: I missed the part where you said that you're setting this up yourself; I thought you wanted to list the consumer groups of an existing cluster.
Lundahl is right, this is a property that you define, which is used to coordinate consumer threads so that they don't consume "each other's" messages (each consumes a subset). If you, for example, use 2 consumers with different groups, they'll each consume the whole topic.
/kafkadir/kafka-consumer-groups.sh --all-topics --bootstrap-server hostname:port --list
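If you would rather list groups from code than from the shell, newer clients also expose this through the AdminClient (a sketch, assuming a client recent enough to have listConsumerGroups(), added around Kafka 2.0; the bootstrap address is a placeholder):
import java.util.Properties;
import org.apache.kafka.clients.admin.*;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "hostname:port");
try (AdminClient admin = AdminClient.create(props)) {
    // Prints the id of every consumer group known to the cluster
    for (ConsumerGroupListing g : admin.listConsumerGroups().all().get()) {
        System.out.println(g.groupId());
    }
}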

Kafka 0.8, is it possible to create topic with partition and replication using java code?

In Kafka 0.8beta a topic can be created using a command like below as mentioned here
bin/kafka-create-topic.sh --zookeeper localhost:2181 --replica 2 --partition 3 --topic test
The above command will create a topic named "test" with 3 partitions and 2 replicas per partition.
Can I do the same thing using Java?
So far what I have found is that using Java we can create a producer, as seen below:
Producer<String, String> producer = new Producer<String, String>(config);
producer.send(new KeyedMessage<String, String>("mytopic", msg));
This will create a topic named "mytopic" with the number of partitions specified by the "num.partitions" attribute and start producing.
But is there a way to define the partitions and replication as well? I couldn't find any such example. If not, does that mean we always need to create a topic with the desired partitions and replication beforehand and then use the producer to produce messages to it? For example, would it be possible to create "mytopic" the same way but with a different number of partitions (overriding the num.partitions attribute)?
Note: My answer covers Kafka 0.8.1+, i.e. the latest stable version available as of April 2014.
Yes, you can create a topic programmatically via the Kafka API. And yes, you can specify the desired number of partitions as well as the replication factor for the topic.
Note that the recently released Kafka 0.8.1+ provides a slightly different API than Kafka 0.8.0 (which was used by Biks in his linked reply). I added a code example for creating a topic in Kafka 0.8.1+ to my reply to the question How Can we create a topic in Kafka from the IDE using API that Biks referred to above.
import java.util.Properties;
import kafka.admin.AdminUtils;
import kafka.utils.ZKStringSerializer$;
import kafka.utils.ZkUtils;
import org.I0Itec.zkclient.ZkClient;
import org.I0Itec.zkclient.ZkConnection;

String zkConnect = "localhost:2181";
ZkClient zkClient = new ZkClient(zkConnect, 10 * 1000, 8 * 1000, ZKStringSerializer$.MODULE$);
ZkUtils zkUtils = new ZkUtils(zkClient, new ZkConnection(zkConnect), false);
Properties topicConfig = new Properties(); // optional per-topic config overrides
AdminUtils.createTopic(zkUtils, topic.getTopicName(), topic.getPartitionCount(),
        topic.getReplicationFactor(), topicConfig);
zkClient.close();
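For completeness: on Kafka 0.11 and later you can do the same thing without touching ZooKeeper, via the AdminClient. A minimal sketch, with the topic name, partition count, and replication factor as placeholders:
import java.util.*;
import org.apache.kafka.clients.admin.*;

Properties props = new Properties();
props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
try (AdminClient admin = AdminClient.create(props)) {
    // name, number of partitions, replication factor
    NewTopic topic = new NewTopic("mytopic", 3, (short) 2);
    admin.createTopics(Collections.singleton(topic)).all().get();
}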

Is there a way to delete all the data from a topic or delete the topic before every run?

Is there a way to delete all the data from a topic or delete the topic before every run?
Can I modify the KafkaConfig.scala file to change the logRetentionHours property? Is there a way the messages gets deleted as soon as the consumer reads it?
I am using producers to fetch data from somewhere and send it to a particular topic, where a consumer consumes it. Can I delete all the data from that topic on every run? I want only new data in the topic every time. Is there a way to reinitialize the topic somehow?
As I mentioned here Purge Kafka Queue:
Tested in Kafka 0.8.2 with the quick-start example: first, add one line to the server.properties file under the config folder:
delete.topic.enable=true
then, you can run this command:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
I don't think it is supported yet. Take a look at this JIRA issue, "Add delete topic support".
To delete manually:
Shutdown the cluster
Clean the Kafka log dir (specified by the log.dir attribute in the Kafka config file) as well as the ZooKeeper data
Restart the cluster
For any given topic what you can do is:
Stop Kafka
Clean the Kafka logs for the specific partitions. Kafka stores its log files in the format "logDir/topic-partition", so for a topic named "MyTopic" the log for partition id 0 will be stored in /tmp/kafka-logs/MyTopic-0, where /tmp/kafka-logs is specified by the log.dir attribute
Restart Kafka
This is NOT a recommended approach, but it should work.
In the Kafka broker config file, the log.retention.hours.per.topic attribute is used to define the number of hours to keep a log file before deleting it for some specific topic
Also, is there a way the messages gets deleted as soon as the consumer reads it?
From the Kafka documentation:
The Kafka cluster retains all published messages—whether or not they have been consumed—for a configurable period of time. For example, if the log retention is set to two days, then for the two days after a message is published it is available for consumption, after which it will be discarded to free up space. Kafka's performance is effectively constant with respect to data size, so retaining lots of data is not a problem.
In fact, the only metadata retained on a per-consumer basis is the position of the consumer in the log, called the "offset". This offset is controlled by the consumer: normally a consumer will advance its offset linearly as it reads messages, but in fact the position is controlled by the consumer and it can consume messages in any order it likes. For example, a consumer can reset to an older offset to reprocess.
For finding the start offset to read from, the Kafka 0.8 SimpleConsumer example says:
Kafka includes two constants to help, kafka.api.OffsetRequest.EarliestTime() finds the beginning of the data in the logs and starts streaming from there, kafka.api.OffsetRequest.LatestTime() will only stream new messages.
You can also find the example code there for managing the offset at your consumer end.
public static long getLastOffset(SimpleConsumer consumer, String topic, int partition,
                                 long whichTime, String clientName) {
    TopicAndPartition topicAndPartition = new TopicAndPartition(topic, partition);
    Map<TopicAndPartition, PartitionOffsetRequestInfo> requestInfo =
            new HashMap<TopicAndPartition, PartitionOffsetRequestInfo>();
    requestInfo.put(topicAndPartition, new PartitionOffsetRequestInfo(whichTime, 1));
    kafka.javaapi.OffsetRequest request = new kafka.javaapi.OffsetRequest(
            requestInfo, kafka.api.OffsetRequest.CurrentVersion(), clientName);
    OffsetResponse response = consumer.getOffsetsBefore(request);
    if (response.hasError()) {
        System.out.println("Error fetching offset data from the broker. Reason: "
                + response.errorCode(topic, partition));
        return 0;
    }
    long[] offsets = response.offsets(topic, partition);
    return offsets[0];
}
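For example, to fetch the latest offset (a one-line sketch; the topic, partition, and client name are placeholders):
long latest = getLastOffset(consumer, "mytopic", 0, kafka.api.OffsetRequest.LatestTime(), "myClient");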
Tested with Kafka 0.10:
1. Stop ZooKeeper and the Kafka server.
2. Go to the 'kafka-logs' folder; there you will see a list of Kafka topic folders. Delete the folder with the topic name.
3. Go to the 'zookeeper-data' folder and delete the data inside it.
4. Start ZooKeeper and the Kafka server again.
Note: if you delete the topic folder(s) inside kafka-logs but not the data inside the zookeeper-data folder, you will see that the topics are still there.
Below are scripts for emptying and deleting a Kafka topic assuming localhost as the zookeeper server and Kafka_Home is set to the install directory:
The script below will empty a topic by setting its retention time to 1 second and then removing the configuration:
#!/bin/bash
echo "Enter name of topic to empty:"
read topicName
/$Kafka_Home/bin/kafka-configs --zookeeper localhost:2181 --alter --entity-type topics --entity-name $topicName --add-config retention.ms=1000
sleep 5
/$Kafka_Home/bin/kafka-configs --zookeeper localhost:2181 --alter --entity-type topics --entity-name $topicName --delete-config retention.ms
To fully delete topics you must stop the applicable Kafka broker(s) and remove the topic's directories from the Kafka log dir (default: /tmp/kafka-logs), and then run this script to remove the topic from ZooKeeper. To verify that it has been deleted from ZooKeeper, the output of ls /brokers/topics should no longer include the topic:
#!/bin/bash
echo "Enter name of topic to delete from zookeeper:"
read topicName
/$Kafka_Home/bin/zookeeper-shell localhost:2181 <<EOF
rmr /brokers/topics/$topicName
ls /brokers/topics
quit
EOF
As a dirty workaround, you can adjust per-topic runtime retention settings, e.g. bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my_topic --config retention.bytes=1 (retention.bytes=0 might also work).
After a short while Kafka should free the space. Not sure if this has any implications compared to re-creating the topic.
P.S. Better to bring the retention settings back once Kafka is done with the cleaning.
You can also use retention.ms to control how long historical data is retained.
We tried pretty much what the other answers describe, with a moderate level of success.
What really worked for us (Apache Kafka 0.8.1) is invoking the admin class directly:
sh kafka-run-class.sh kafka.admin.DeleteTopicCommand --topic yourtopic --zookeeper localhost:2181
For brew users
If you're using brew like me and wasted a lot of time searching for the infamous kafka-logs folder, fear no more. (And please do let me know if this works for you across different versions of Homebrew, Kafka, etc. :) )
You're probably going to find it under:
Location:
/usr/local/var/lib/kafka-logs
How to actually find that path
(this is also helpful for basically every app you install through brew)
1) brew services list
kafka started matbhz
/Users/matbhz/Library/LaunchAgents/homebrew.mxcl.kafka.plist
2) Open and read that plist you found above
3) Find the line defining the server.properties location and open that file; in my case:
/usr/local/etc/kafka/server.properties
4) Look for the log.dirs line:
log.dirs=/usr/local/var/lib/kafka-logs
5) Go to that location and delete the logs for the topics you wish
6) Restart Kafka with brew services restart kafka
All data about topics and their partitions is stored under /tmp/kafka-logs/ in the format topic-partitionNumber, so if you want to delete a topic newTopic, you can:
stop kafka
delete the files: rm -rf /tmp/kafka-logs/newTopic-*
As of Kafka 2.3.0, there is an alternate way to soft-delete a topic's data (the old approaches are deprecated).
Update retention.ms to 1 second (1000 ms), then, after a minute or so, set it back to the default, i.e. 7 days (168 hours, 604,800,000 ms).
Soft deletion (retention.ms=1000, using kafka-configs.sh):
bin/kafka-configs.sh --zookeeper 192.168.1.10:2181 --alter --entity-name kafka_topic3p3r --entity-type topics --add-config retention.ms=1000
Completed Updating config for entity: topic 'kafka_topic3p3r'.
Setting back to the default of 7 days (retention.ms=604800000):
bin/kafka-configs.sh --zookeeper 192.168.1.10:2181 --alter --entity-name kafka_topic3p3r --entity-type topics --add-config retention.ms=604800000
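Note that newer Kafka releases deprecate (and eventually remove) the --zookeeper flag for kafka-configs.sh in favour of --bootstrap-server, so on recent clusters pass the broker address instead.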
Simplest way without restarting the servers (I am using this with AWS MSK seamlessly):
cd kafka_2.12-2.6.2/bin
Topic Deletion:
Please replace $topic_name:
./kafka-topics.sh \
--bootstrap-server $kafka_bootstrap_servers \
--command-config client.properties \
--delete \
--topic $topic_name
Here is the client.properties file:
kafka_2.12-2.6.2/bin/client.properties
ssl.truststore.location=/usr/lib/jvm/java-11-openjdk-amd64/lib/security/cacerts
security.protocol=SASL_SSL
sasl.mechanism=AWS_MSK_IAM
sasl.jaas.config=software.amazon.msk.auth.iam.IAMLoginModule required;
sasl.client.callback.handler.class=software.amazon.msk.auth.iam.IAMClientCallbackHandler
max.request.size=104857600
Topic Data Deletion:
Option A:
./kafka-delete-records.sh \
--bootstrap-server $kafka_bootstrap_servers \
--command-config client.properties \
--offset-json-file ./delete-records.json
This is the cleanest way to delete the data immediately rather than waiting for Kafka to do it as a background job. But there is a one-time extra effort in specifying all the partitions for the particular topic in the delete JSON file.
Here is the delete-records.json content:
{
"partitions": [
{
"topic": $topic_name,
"partition": 0,
"offset": -1
},
{
"topic": $topic_name,
"partition": 1,
"offset": -1
},
{
"topic": $topic_name,
"partition": 2,
"offset": -1
}
],
"version": 1
}
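(An offset of -1 tells kafka-delete-records.sh to delete everything up to the current end of the partition, i.e. the high watermark.)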
Option B:
Step1:
./kafka-configs.sh \
--bootstrap-server $kafka_bootstrap_servers \
--command-config client.properties \
--alter \
--entity-type topics \
--add-config retention.ms=1 \
--entity-name $topic_name
Now wait a couple of minutes to let Kafka delete the data from the topic, then come back and revert to the default 7-day retention.
Step2:
./kafka-configs.sh \
--bootstrap-server $kafka_bootstrap_servers \
--command-config client.properties \
--alter \
--entity-type topics \
--add-config retention.ms=604800000 \
--entity-name $topic_name
Stop ZooKeeper and Kafka.
In server.properties, change the log.retention.hours value. You can comment out log.retention.hours and add log.retention.ms=1000, which keeps records on the Kafka topic for only one second.
Start ZooKeeper and Kafka.
Check on the consumer console. When I opened the console for the first time, the record was there. But when I opened the console again, the record was removed.
Later on, you can set the value of log.retention.hours back to your desired figure.
I use the utility below to clean up after my integration test runs.
It uses the latest AdminZkClient API; the older API has been deprecated.
import java.util.Properties
import javax.inject.Inject
import kafka.zk.{AdminZkClient, KafkaZkClient}
import org.apache.kafka.common.utils.Time

class ZookeeperUtils @Inject() (config: AppConfig) {

  val testTopic = "users_1"
  val zkHost = config.KafkaConfig.zkHost
  val sessionTimeoutMs = 10 * 1000
  val connectionTimeoutMs = 60 * 1000
  val isSecure = false
  val maxInFlightRequests = 10
  val time: Time = Time.SYSTEM

  def cleanupTopic(config: AppConfig) = {
    val zkClient = KafkaZkClient.apply(zkHost, isSecure, sessionTimeoutMs, connectionTimeoutMs, maxInFlightRequests, time)
    val zkUtils = new AdminZkClient(zkClient)

    // Temporarily shrink the retention settings so the topic is purged quickly
    val pp = new Properties()
    pp.setProperty("delete.retention.ms", "10")
    pp.setProperty("file.delete.delay.ms", "1000")
    zkUtils.changeTopicConfig(testTopic, pp)
    // zkUtils.deleteTopic(testTopic)

    println("Waiting for topic to be purged. Then reset to retain records for the run")
    Thread.sleep(60000L)

    // Restore more generous retention settings afterwards
    val resetProps = new Properties()
    resetProps.setProperty("delete.retention.ms", "3000000")
    resetProps.setProperty("file.delete.delay.ms", "4000000")
    zkUtils.changeTopicConfig(testTopic, resetProps)
  }
}
There is a delete-topic option, but it only marks the topic for deletion; the topic is actually removed later. Since this can take unpredictably long, I prefer the retention.ms approach.
do:
cd /path/to/kafkaInstallation/kafka-server
bin/kafka-topics.sh --bootstrap-server localhost:9092 --delete --topic name_of_kafka_topic
then you can recreate it using:
bin/kafka-topics.sh --create --bootstrap-server localhost:9092 --replication-factor 1 --partitions 1 --topic name_of_kafka_topic
If you are manually deleting a topic from a Kafka cluster, you might want to check this out: https://github.com/darrenfu/bigdata/issues/6
A vital step that most solutions miss is deleting /config/topics/<topic_name> in ZooKeeper.
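For example, from a ZooKeeper shell, mirroring the scripts above (<topic_name> stays a placeholder; older shells use rmr, newer ones use deleteall):
./bin/zookeeper-shell.sh localhost:2181
rmr /config/topics/<topic_name>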
I use this script:
#!/bin/bash
topics=`kafka-topics --list --zookeeper zookeeper:2181`
for t in $topics; do
for p in retention.ms retention.bytes segment.ms segment.bytes; do
kafka-topics --zookeeper zookeeper:2181 --alter --topic $t --config ${p}=100
done
done
sleep 60
for t in $topics; do
for p in retention.ms retention.bytes segment.ms segment.bytes; do
kafka-topics --zookeeper zookeeper:2181 --alter --topic $t --delete-config ${p}
done
done
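The idea behind the script: shrinking retention.ms, retention.bytes, segment.ms, and segment.bytes forces the log cleaner to roll and discard segments almost immediately; deleting the overrides a minute later returns every topic to the broker defaults.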
There are two solutions to clean up topic data:
1. Change the ZooKeeper dataDir path ("dataDir=/dataPath") to some other value, delete the Kafka logs folder, and restart the ZooKeeper and Kafka servers.
2. Run zkCleanup.sh from the ZooKeeper server.