Kafka "num.partitions" setting in service.properties does not take effect - apache-kafka

We run Kafka in a Docker container. Topics are created automatically when we produce to or consume from a topic that does not exist yet. We want 3 partitions for these topics, so we set
num.partitions=3
in the file /etc/kafka/server.properties inside the Kafka container. However, it does not take effect. After applying the setting and restarting the container, subscribing or publishing to non-existent topics still creates them with only one partition.
We tried this on containers created from the image confluentinc/cp-kafka:5.1.0 and also on containers created from the image confluentinc/cp-enterprise-kafka:5.3.1, and the behavior was the same.
We tested creating a topic manually with the command:
kafka-topics --create --topic my_topic --zookeeper zookeeper:2181 --replication-factor 1 --partitions 3
This correctly created the topic with three partitions. But we need Kafka to create multi-partition topics automatically.
What could cause this problem? Or, how can we make Kafka auto-create multi-partition topics?
We do not have any dynamic configs. This is verified by running the following commands:
kafka-configs --bootstrap-server kafka:9092 --entity-type brokers --entity-default --describe
kafka-configs --bootstrap-server kafka:9092 --entity-type brokers --entity-name 0 (or other ids) --describe
Those commands return empty results.
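(For reference, the partition count of an auto-created topic can be checked with, for example, kafka-topics --describe --topic my_topic --zookeeper zookeeper:2181, which reports PartitionCount:1 instead of the expected 3.)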

This answer comes kinda late, but I've been struggling with the same thing using the docker image: confluentinc/cp-enterprise-kafka:5.3.4.
The solution for me was adding a new environment variable in my docker-compose:
KAFKA_NUM_PARTITIONS: 3 (or however many partitions you want)
This automatically adds the num.partitions property to the kafka.properties file under the /etc/kafka/ directory.
Modifying num.partitions directly in /etc/kafka/server.properties didn't work for me either.
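If you are not using docker-compose, the same environment variable can be passed to docker run; a minimal sketch (the image tag and the other variables, such as KAFKA_ZOOKEEPER_CONNECT, are placeholders for whatever your setup needs):
docker run -d --name kafka -e KAFKA_NUM_PARTITIONS=3 -e KAFKA_ZOOKEEPER_CONNECT=zookeeper:2181 ... confluentinc/cp-enterprise-kafka:5.3.4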

Docker containers are ephemeral, meaning that once you remove and recreate them, any changes you applied inside are lost.
If you want to override the default settings, you have to mount your own properties file:
Create a server.properties file on your machine
Fill it in with the properties that you need (including the original ones)
Mount this file in order to replace the original one inside the container:
docker run ... -v /path/to/custom/server.properties:/etc/kafka/server.properties ...
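To start from the container's original settings (step 2 above), one option is to copy the file out of a running container first, assuming the container is named kafka:
docker cp kafka:/etc/kafka/server.properties /path/to/custom/server.properties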

Related

Kafka topic creation: Timed out waiting for node assignment

I am trying to set up the following on a single remote AWS EC2 machine.
1 Kafka Zookeeper
3 Kafka brokers
1 Kafka topic
I have done the following:
I have created two copies of the server.properties file and named them server_1.properties and server_2.properties, where I changed the following values:
broker.id=0 to broker.id=1 and broker.id=2 respectively
advertised.listeners=PLAINTEXT://your.host.name:9092 to advertised.listeners=PLAINTEXT://3.72.250.103:9092
advertised.listeners=PLAINTEXT://3.72.250.103:9093
advertised.listeners=PLAINTEXT://3.72.250.103:9094 in the three config files
changed log.dirs=/tmp/kafka-logs to
log.dirs=/tmp/kafka-logs_1 and log.dirs=/tmp/kafka-logs_2 in the respective files
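Putting those changes together, the per-broker settings would look roughly like the sketch below (assuming broker 0 keeps port 9092; unchanged defaults are omitted):
# server.properties
broker.id=0
advertised.listeners=PLAINTEXT://3.72.250.103:9092
log.dirs=/tmp/kafka-logs
# server_1.properties
broker.id=1
advertised.listeners=PLAINTEXT://3.72.250.103:9093
log.dirs=/tmp/kafka-logs_1
# server_2.properties
broker.id=2
advertised.listeners=PLAINTEXT://3.72.250.103:9094
log.dirs=/tmp/kafka-logs_2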
All three brokers start up fine, and
bin/zookeeper-shell.sh localhost:2181 ls /brokers/ids
shows all three brokers
But when I try to create a topic like:
bin/kafka-topics.sh --bootstrap-server localhost:9092,localhost:9093,localhost:9094 --create --partitions 5 --replication-factor 1 --topic cars
the command times out with the error message: Timed out waiting for node assignment
What have I tried:
I tried running only a single broker - that worked fine
I tried creating the topic with --bootstrap-server localhost:9092 only - that did not work
I tried changing listeners=PLAINTEXT://:9092 to listeners=PLAINTEXT://3.72.250.103:9092 (and similarly for the 2nd and 3rd brokers) - I was not even able to start the brokers with this configuration.
I am not sure what to try next.

Test if Kafka is ready and topics are reachable from the CLI

I was looking around and couldn't find anything other than kafka-topics --list.
I'm running Kafka in a K8s environment and have an init container that creates a couple of topics. I want my main container to start only once the topics are created and "subscribable". I believe kafka-topics --list only reaches Zookeeper, because my pod still shows error messages about the topic.
I did try kafka-console-consumer, but even if the topic is not present it doesn't exit with status 1. It does exit with a non-zero status if the bootstrap server is not reachable. I'm looking for behaviour like the following:
kafka-console-consumer --bootstrap-server correct-bootstrap-server:9092 --topic correct-topic --timeout-ms 100
exits with 0 (this one works)
kafka-console-consumer --bootstrap-server wrong-bootstrap-server:9092 --topic wrong-topic --timeout-ms 100
exits with a non-zero exit code (this one works too)
kafka-console-consumer --bootstrap-server correct-bootstrap-server:9092 --topic wrong-topic --timeout-ms 100
should exit with a non-zero exit code (this one doesn't work - it exits with code 0)
Thanks.
It is not totally trivial to make sure from the CLI that a Kafka topic is "ready" - many things can go wrong.
We had the same issue, and the approach we are currently taking involves several calls to the kafka-topics CLI:
Make sure the topic exists: kafka-topics.sh --describe --topic FOO
Check that all partitions have a leader: kafka-topics.sh --describe --topic FOO --unavailable-partitions (output should be empty)
Check that all partitions are fully replicated: kafka-topics.sh --describe --topic FOO --under-replicated-partitions (output should be empty)
That still doesn't make it 100% certain that the topic is "ready", but it works for us.
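Strung together, those checks might look something like the sketch below. The broker address and the --bootstrap-server flag are assumptions about your setup (older kafka-topics versions take --zookeeper instead), and the exact exit-code behaviour of --describe for a missing topic varies between versions:
#!/usr/bin/env bash
# check-topic-ready.sh TOPIC - exits non-zero if the topic does not look "ready"
set -euo pipefail
TOPIC="$1"
BROKER="localhost:9092"   # adjust to your bootstrap server

# 1. The topic must exist (describe fails or prints nothing otherwise)
kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" > /dev/null

# 2. Every partition must have a leader (no unavailable partitions)
[ -z "$(kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" --unavailable-partitions)" ]

# 3. Every partition must be fully replicated
[ -z "$(kafka-topics.sh --bootstrap-server "$BROKER" --describe --topic "$TOPIC" --under-replicated-partitions)" ]

echo "topic $TOPIC looks ready"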
kafka-topics can list under-replicated, offline and under-min-isr partitions.
The best bet is to check that your topic is not under-replicated. If it isn't, it should be ready.
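For example (the broker address is a placeholder; the --under-min-isr-partitions option requires a reasonably recent Kafka):
kafka-topics.sh --bootstrap-server localhost:9092 --describe --topic FOO --under-replicated-partitions
kafka-topics.sh --bootstrap-server localhost:9092 --describe --under-min-isr-partitions
Empty output from both means the topic is not under-replicated and no partition is below its minimum ISR.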

Increasing the default number of partitions in a Kafka cluster

When a new Kafka topic is created automatically, the default number of partitions for that topic is 1, because of the configuration num.partitions=1.
Is there any way to increase this property using a command or script, without editing the server.properties file?
To update the property itself you will have to modify server.properties, but you can increase the partitions of an existing topic using the Kafka admin scripts, as below:
bin/kafka-topics.sh --zookeeper zk_host:port/chroot --alter --topic my_topic_name --partitions <number_of_partitions>
You could make a script called create-topic.sh:
./bin/kafka-topics.sh --create --zookeeper <ZK_HOST> --topic $1 --partitions <DEFAULT_NUM_PARTITIONS> --replication-factor <REPLICATION_FACTOR>
and force everyone to only make topics via this script:
./create-topic.sh <TOPIC_NAME>
This isn't a fantastic solution, but you're severely limited if you really can't change server.properties.
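A slightly fuller sketch of such a wrapper, with the placeholders made explicit (the Zookeeper address and the defaults are assumptions you would replace; note that --create with --zookeeper also needs --replication-factor):
#!/usr/bin/env bash
# create-topic.sh TOPIC - create a topic with an enforced default partition count
set -euo pipefail
TOPIC="$1"
ZK_HOST="zk_host:2181"      # placeholder: your Zookeeper connection string
DEFAULT_NUM_PARTITIONS=3    # placeholder: the default you want to enforce
REPLICATION_FACTOR=1        # placeholder: adjust for your cluster

./bin/kafka-topics.sh --create \
  --zookeeper "$ZK_HOST" \
  --topic "$TOPIC" \
  --partitions "$DEFAULT_NUM_PARTITIONS" \
  --replication-factor "$REPLICATION_FACTOR"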
Kafka version 1.1 added the dynamic broker configuration feature, but updating the num.partitions config dynamically is not supported.

Not able to delete topics from Kafka

Even after enabling delete.topic.enable=true in the server config,
deletion of topics is not working. I am getting the following error when recreating the topic:
Topic 'test' already exists.
[2017-05-23 06:47:05,757] ERROR
org.apache.kafka.common.errors.TopicExistsException: Topic 'test' already exists.
You cannot delete a topic while it is being consumed. Use bin/kafka-consumer-groups.sh or simply ps aux | grep Consumer to find any consumers that may be blocking the operation.
Then remove the metadata in Zookeeper:
./bin/zookeeper-shell.sh localhost:2181
rmr /brokers/topics/mytopic
rmr /admin/delete_topics/mytopic
If you use a recent Kafka (~v0.10), then after enabling the delete.topic.enable=true option you have to:
Restart Kafka
Delete topic:
kafka-topics.sh --zookeeper localhost:2181 --topic mytopic --delete
Check that it was marked for deletion:
kafka-topics.sh --zookeeper localhost:2181 --list
mytopic - marked for deletion
And wait a bit.
And if you use an old version of Kafka, try deleting the topic from a zookeeper-shell.
If Zookeeper is a standalone instance (not on localhost), marking topics for deletion won't delete them properly.
One suggestion would be to use Zookeeper Exhibitor and delete the topic from the admin and brokers znodes.
Exhibitor gives a UI to visualise how the topics and Kafka brokers are arranged.

Delete topic in Kafka 0.8.1.1

I need to delete the topic test in Apache Kafka 0.8.1.1.
As expressed in the documentation here, I have executed:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic test
However, this results in the following message:
Command must include exactly one action: --list, --describe, --create or --alter
How can I delete this topic?
Deleting topics doesn't always work in 0.8.1.1.
Deletion should work in the next release, 0.8.2:
kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
Topic your_topic_name is marked for deletion.
Note: This will have no impact if delete.topic.enable is not set to true.
You may also pass in a bootstrap server instead of zookeeper:
kafka-topics.sh --bootstrap-server kafka:9092 --delete --topic your_topic_name
Is it possible to delete a topic?
Jira KAFKA-1397
It seems that the deletion command was not officially documented in Kafka 0.8.1.x because of a known bug (https://issues.apache.org/jira/browse/KAFKA-1397).
Nevertheless, the command was still shipped in the code and can be executed as:
bin/kafka-run-class.sh kafka.admin.DeleteTopicCommand --zookeeper localhost:2181 --topic test
In the meantime, the bug got fixed and the deletion command is now officially available from Kafka 0.8.2.0 as:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
Add the line below to ${kafka_home}/config/server.properties:
delete.topic.enable=true
Restart the Kafka server with the new config:
${kafka_home}/bin/kafka-server-start.sh ${kafka_home}/config/server.properties
Delete the topics you wish to:
${kafka_home}/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic daemon12
Andrea is correct - we can do it using the command line.
And we can also do it programmatically:
import org.I0Itec.zkclient.ZkClient;
import kafka.utils.ZkUtils;
ZkClient zkClient = new ZkClient("localhost:2181", 10000);
zkClient.deleteRecursive(ZkUtils.getTopicPath("test2"));
Actually, I do not recommend deleting topics on Kafka 0.8.1.1. I can delete a topic with this method, but if you check the Zookeeper log, the deletion messes things up.
Steps to Delete 1 or more Topics in Kafka
# To delete topics in Kafka, the delete option needs to be enabled on the Kafka server.
1. Go to {kafka_home}/config/server.properties
2. Uncomment delete.topic.enable=true
# To delete one topic in Kafka, enter the following command:
kafka-topics.sh --delete --zookeeper localhost:2181 --topic <your_topic_name>
# To delete more than one topic from Kafka
(good for testing purposes, where I created multiple topics and had to delete them for different scenarios)
Stop the Kafka server and Zookeeper
Go to the folder where the logs are stored (defined in their config files) and delete the kafka-logs and zookeeper folders manually, as in the example after this list
Restart Zookeeper and the Kafka server and try to list topics:
bin/kafka-topics.sh --list --zookeeper localhost:2181
If no topics are listed, then all topics have been deleted successfully. If topics are listed, the delete was not successful; try the above steps again or restart your computer.
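Assuming the default data directories on Linux (adjust the paths to whatever log.dirs and dataDir point to in your configs), that manual cleanup might look like:
rm -rf /tmp/kafka-logs /tmp/zookeeper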
You can delete a specific Kafka topic (for example, test) from the Zookeeper shell (zookeeper-shell.sh). Use the command below to delete the topic:
rmr {path of the topic}
example:
rmr /brokers/topics/test
These steps will delete all topics and data:
Stop Kafka-server and Zookeeper-server
Remove the data directories of both services; by default on Windows they are
C:/tmp/kafka-logs and C:/tmp/zookeeper.
Then start Zookeeper-server and Kafka-server again.
The command:
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic test
unfortunately only marks the topic for deletion.
Deletion does not actually happen.
That causes trouble when testing scripts which prepare a Kafka configuration.
Related threads:
Purge Kafka Queue
Is there a way to delete all the data from a topic or delete the topic before every run?
As mentioned in the docs here:
Topic deletion option is disabled by default. To enable it set the server config
delete.topic.enable=true
Kafka does not currently support reducing the number of partitions for a topic or changing the replication factor.
Make sure delete.topic.enable=true
Adding to the above answers: one also has to delete the metadata associated with the topic under the Zookeeper consumer offsets path.
bin/zookeeper-shell.sh zookeeperhost:port
rmr /consumers/<sample-consumer-1>/offsets/<deleted-topic>
Otherwise the lag will show as negative in Zookeeper-based Kafka monitoring tools.
First, run this command to delete your topic:
$ bin/kafka-topics.sh --delete --bootstrap-server localhost:9092 --topic <topic_name>
List the active topics to check that the deletion completed:
$ bin/kafka-topics.sh --list --bootstrap-server localhost:9092
If you have issues deleting the topics, try to delete the topic using:
$KAFKA_HOME/bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic your_topic_name
Then, to verify the deletion, go to the Kafka logs directory, which is normally under /tmp/kafka-logs/, and delete the your_topic_name directory with rm -rf your_topic_name.
Remember to monitor the whole process with a Kafka management tool like Kafka Tool.
The process above removes the topics without a Kafka server restart.
There is actually a solution without touching those bin/kafka-*.sh: If you have installed kafdrop, then simply do:
curl -XPOST http://your-kafdrop-domain/topic/THE-TOPIC-YOU-WANT-TO-DELETE/delete
bin/kafka-topics.sh --delete --zookeeper localhost:2181 --topic <topic-name>
Step 1: Make sure Zookeeper and Kafka are running and that you are connected.
Step 2: To delete the Kafka topic, run the kafka-topics script with the Zookeeper address and port, --topic with the name of your topic, and --delete; it simply deletes the topic.
# Delete the Kafka topic
bin/kafka-topics.sh --zookeeper 127.0.0.1:2181 --topic name_of_topic --delete
This worked for me:
kafka-topics --delete --bootstrap-server localhost:9092 --topic user-commands
I am using the Confluent CLI and ran this in the directory where I installed it.
Recent Kafka versions are removing the Zookeeper dependency. Therefore, you should instead reference the brokers (through --bootstrap-server):
kafka-topics \
--bootstrap-server localhost:9092,localhost:9093,localhost:9094 \
--delete \
--topic topic_for_deletion
For Confluent Cloud, use:
$ confluent kafka topic delete my_topic