I'm using kafka_2.9.2-0.8.1.1 with zookeeper 3.4.6.
Is there a way to change broker configuration settings dynamically? Specifically, I want to change controlled.shutdown.enable
bin/kafka-topics.sh --zookeeper zookeeper01.mysite.com --config controlled.shutdown.enable=true --alter
but I get the error
Missing required argument "[topic]"
No, you can't change broker configs dynamically.
There are two kinds of configurations related to the brokers: broker configs and per-topic configs.
Since per-topic configs are managed by a Zookeeper cluster, you can change those with kafka-topics.sh on the fly.
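For example, a per-topic override such as retention.ms can be applied on a running cluster (the topic name and value here are just placeholders, and the ZooKeeper host is taken from your question):
bin/kafka-topics.sh --zookeeper zookeeper01.mysite.com --alter --topic my-topic --config retention.ms=3600000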
controlled.shutdown.enable is, however, a broker config which can only be set in the server.properties file and requires a broker restart when it is changed.
This issue was also discussed in Kafka JIRA:
[KAFKA-1229] Reload broker config without a restart
You can now, from Kafka 1.1 onwards: see Dynamic Broker Config.
In your case, something like:
bin/kafka-configs.sh --bootstrap-server localhost:9092 \
--entity-type brokers --entity-name 0 --alter \
--add-config controlled.shutdown.enable=true
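To confirm the change took effect, you could then describe the broker's dynamic config (same bootstrap server and broker id as above):
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --describe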
Hi, I want to reduce the retention period of Kafka.
How can I reduce the retention at runtime, so that I don't need to restart the Kafka service?
Note: I want to set the retention at the global level of Kafka, not at the topic level.
As the Kafka documentation describes in the Broker Configs section, under the sub-topic
Updating Default Topic Configuration:
Default topic configuration options used by brokers may be updated
without broker restart. The configs are applied to topics without a
topic config override for the equivalent per-topic config. One or more
of these configs may be overridden at cluster-default level used by
all brokers.
You can dynamically change Kafka topic default configs at the cluster level. For the retention period you can change the configs below.
log.retention.ms
log.retention.minutes
log.retention.hours
You can see the full config list in the documentation.
But again, according to the documentation, the update mode of the log.retention.hours and log.retention.minutes configurations is read-only, while log.retention.ms is cluster-wide.
So as stated in 3.1.1 Updating Broker Configs
From Kafka version 1.1 onwards, some of the broker configs can be
updated without restarting the broker. See the Dynamic Update Mode
column in Broker Configs for the update mode of each broker config.
read-only: Requires a broker restart for update
per-broker: May be updated dynamically for each broker
cluster-wide: May be updated dynamically as a cluster-wide default. May also be updated as a per-broker value for testing.
So you can only change log.retention.ms dynamically.
Config update command for all brokers:
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --add-config log.retention.ms=3600000
Output:
Completed updating default config for brokers in the cluster.
To verify that the config was updated at the cluster level, run the following describe command:
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --describe
Output:
Default configs for brokers in the cluster are:
log.retention.ms=3600000 sensitive=false synonyms={DYNAMIC_DEFAULT_BROKER_CONFIG:log.retention.ms=3600000}
If you need to remove or reset the config again, run
bin/kafka-configs.sh --bootstrap-server localhost:9092 --entity-type brokers --entity-default --alter --delete-config log.retention.ms
In the latest version of Kafka (using KRaft), which is currently 3.3.2, you can change the retention of a specific topic using the following command.
/bin/kafka-configs.sh --bootstrap-server <the-ip-of-the-broker>:9092 --topic <specified-topic> --alter --add-config retention.ms=<retention-ms>
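To double-check the override afterwards, a describe with the same placeholders should work:
/bin/kafka-configs.sh --bootstrap-server <the-ip-of-the-broker>:9092 --topic <specified-topic> --describe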
In order to delete all of the data in a topic, I set its retention.ms config to 1000.
./bin/kafka-topics.sh --zookeeper $KAFKAZKHOSTS --alter --topic <topic> --config retention.ms=1000
This worked fine. All the data was deleted after a very short wait.
Before altering the config, the retention.ms was not set on the topic and so the server default property log.retention.hours=168 was the previous retention policy. (log.retention.minutes and log.retention.ms had not been set in the server properties).
Now I would like to remove the retention.ms config from this topic completely and go back to using the server level config.
Commands like
./bin/kafka-topics.sh --zookeeper $KAFKAZKHOSTS --alter --topic <topic> --config retention.ms=
or
./bin/kafka-topics.sh --zookeeper $KAFKAZKHOSTS --alter --topic <topic> --config retention.ms=null
throw an error.
I know that the delete option for kafka-topics.sh actually deletes the entire topic, so I'm not going to play around with that.
Question: How do I completely remove a topic level config so that the topic reverts to using the server default?
To remove a topic configuration override, you can use the kafka-configs.sh tool. For example:
./bin/kafka-configs.sh --zookeeper <zookeeper> --alter \
--entity-type topics --entity-name <topic> --delete-config retention.ms
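On newer Kafka versions, where the tools talk to the brokers instead of ZooKeeper, the equivalent should be something along these lines (the broker address is a placeholder):
./bin/kafka-configs.sh --bootstrap-server <broker>:9092 --alter \
--entity-type topics --entity-name <topic> --delete-config retention.ms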
Another, more traditional solution which I have used is manually deleting the data folder for the particular topic.
First, stop the Kafka server.
Go to /var/lib/kafka/data (wherever your Kafka data is stored, as specified by you at installation time) and remove the topic's partition directory:
rm -rf /var/lib/kafka/data/yourTopicName-0
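As a rough sketch of those steps (the systemd service name is an assumption; adjust it to however your broker is managed):
# stop the broker before touching its data directory
sudo systemctl stop kafka
# remove the partition directory for the topic (partition 0 here, as above)
rm -rf /var/lib/kafka/data/yourTopicName-0
# start the broker again
sudo systemctl start kafka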
When a new Kafka topic is created automatically, the default number of partitions for that topic will be 1, since the configuration is num.partitions=1.
Is there any way to increase this property using a command or script, without editing the server.properties file?
To update the property itself you will have to modify server.properties, but you can increase the partitions of an existing topic using the Kafka admin scripts, as below:
bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name --partitions <number_of_partitions>
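You can then verify the new partition count with a describe (same placeholders as above):
bin/kafka-topics.sh --zookeeper zk_host:port/chroot --describe --topic my_topic_name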
You could make a script called create-topic.sh:
./bin/kafka-topics.sh --create --zookeeper <ZK_HOST> --topic $1 --partitions <DEFAULT_NUM_PARTITIONS>
and force everyone to only make topics via this script:
./create-topic.sh <TOPIC_NAME>
This isn't a fantastic solution, but you're severely limited if you really can't change server.properties.
Kafka version 1.1 added the dynamic broker configuration feature, but updating the num.partitions config is not supported.
I was trying to get the config for one of the Kafka clusters we have. After making a config change through Puppet, I want to know whether Kafka has reloaded the config, or whether we need to restart the service for that.
I have tried ./kafka-configs.sh --describe --zookeeper my-zookeeper:2181 --entity-type brokers, but I only get empty output.
I have also tried to find the config by browsing inside the ZooKeeper nodes, but I found nothing.
Is there any way to retrieve which config is being used?
Here's the full working command to list all configs for the broker with id=1:
./bin/kafka-configs.sh --bootstrap-server <broker-host>:9092 --entity-type brokers --entity-name 1 --all --describe
As suggested by @LijuJohn, I found the config in the server.log file. Thanks a lot!
Since Kafka 2.5.0 (see issue here), you can now specify the --all flag to list all configs (not just the dynamic ones) when using ./kafka-configs.sh
Have you tried with the parameter --entity-name 0, where 0 is the id of the broker?
This is required at least for my cluster.
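So, based on your original command, something like the following (broker id 0 is just an example; use an id from your cluster):
./kafka-configs.sh --describe --zookeeper my-zookeeper:2181 --entity-type brokers --entity-name 0
Note that without --all this only lists dynamically set overrides, so the output can still be empty if none have been set.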
On CentOS 7, Kafka version 3.3.2:
You should be able to find all the configurations via:
bin/kafka-configs.sh --describe --bootstrap-server <advertised.listeners>:9092 --entity-type brokers --entity-name <broker-id> --all
#example:
bin/kafka-configs.sh --describe --bootstrap-server localhost:9092 --entity-type brokers --entity-name 0 --all
Note:
Both <advertised.listeners> and <broker-id> can be found in config/server.properties.
I am trying to tweak two config params of a Kafka topic dynamically (without a restart), i.e. flush.messages and flush.ms, to restrict the number of writes to disk, as disk seems to be the bottleneck in our case. But these configuration changes are not getting applied to the topic: flush.ms has been set to 10000 ms but it still writes to disk every 5000 ms. Any idea why this config param isn't being picked up by the topic?
Kafka version used - 0.8.1.1
Command(s) used:
bin/kafka-topics.sh --zookeeper zkHost:zkPort --topic topicName --alter --config flush.messages=200000
bin/kafka-topics.sh --zookeeper zkHost:zkPort --topic topicName --alter --config flush.ms=20000
How can I change the log flush interval if this doesn't work?