Unable to delete Kafka topic

I am using Kafka, ZooKeeper, and kafka-manager to manage my clusters.
I have a 3-node cluster, and on every node I have had delete.topic.enable=true set from the very beginning.
Now when I try to delete a topic, it shows the following:
topicxyz - marked for deletion
but the topic is never actually deleted.
I also tried deleting it from kafka-manager, which says:
Yikes! KeeperErrorCode = NodeExists for /admin/delete_topics/topicxyz
Error logs:
kafka-manager:
[error] k.m.ApiError$ - error : KeeperErrorCode = NodeExists for /admin/delete_topics/topicxyz
org.apache.zookeeper.KeeperException$NodeExistsException: KeeperErrorCode = NodeExists for /admin/delete_topics/topicxyz
at org.apache.zookeeper.KeeperException.create(KeeperException.java:119) ~[org.apache.zookeeper.zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.KeeperException.create(KeeperException.java:51) ~[org.apache.zookeeper.zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.zookeeper.ZooKeeper.create(ZooKeeper.java:783) ~[org.apache.zookeeper.zookeeper-3.4.6.jar:3.4.6-1569965]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:721) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl$11.call(CreateBuilderImpl.java:704) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
at org.apache.curator.RetryLoop.callWithRetry(RetryLoop.java:108) ~[org.apache.curator.curator-client-2.10.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.pathInForeground(CreateBuilderImpl.java:701) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.protectedPathInForeground(CreateBuilderImpl.java:477) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:467) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
at org.apache.curator.framework.imps.CreateBuilderImpl.forPath(CreateBuilderImpl.java:447) ~[org.apache.curator.curator-framework-2.10.0.jar:na]
[info] k.m.a.KafkaManagerActor - Updating internal state...
Kafka itself logs no errors. ZooKeeper's stdout log shows only warnings, and its stderr log says Invalid config, exiting abnormally.
Kafka version: kafka_2.12-0.10.2.0
Topic description:
$ bin/kafka-topics.sh --describe --zookeeper localhost:2181 --topic topicxyz
Topic:topicxyz PartitionCount:1 ReplicationFactor:1 Configs:
Topic: topicxyz Partition: 0 Leader: -1 Replicas: 3 Isr:
Please help.

I am not sure which Kafka version you are using, but topic deletion had a bug in earlier releases. Refer here & here.

This is sometimes caused by a corrupt ZooKeeper node under /admin/delete_topics. Open the ZooKeeper client and delete the misbehaving /admin/delete_topics/your_topic_name entry.
Depending on the client version, it will go something like this:
bin/zkCli.sh -server 127.0.0.1:2181
ls /admin/delete_topics
ls /brokers/topics
rmr /admin/delete_topics/your_topic_name
You should now be able to use Kafka Manager or kafka-topics to delete your topics, as sketched below. You can also manually remove your topic by deleting the "/brokers/topics/your_topic_name" entry, but I find that is unnecessary after removing the misbehaving "delete_topics" entry.
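With the stuck node gone, a normal delete should go through (a sketch, assuming a local ZooKeeper and the topic name from the question):
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic topicxyz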

Related

Not Able to Increase Replication Factor of Kafka Topic

We have a 3-node Kafka/ZooKeeper cluster, with Kafka and ZooKeeper communicating over SSL.
We are currently using Apache Kafka 2.5 and ZooKeeper 3.5.7. We are trying to increase the replication factor of Kafka topics using the method below:
To increase the number of replicas for a given topic you have to:
Specify the extra replicas in a custom reassignment JSON file. For example, you could create increase-replication-factor.json and put this content in it:
{"version":1,
"partitions":[
{"topic":"signals","partition":0,"replicas":[0,1,2]},
{"topic":"signals","partition":1,"replicas":[0,1,2]},
{"topic":"signals","partition":2,"replicas":[0,1,2]}
]}
Use the file with the --execute option of the kafka-reassign-partitions tool
For example:
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --execute --command-config zookeeper_client.properties
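To check when the reassignment has completed, the same tool can be run with --verify and the same file (a sketch, reusing the question's connection options):
$ kafka-reassign-partitions --zookeeper localhost:2182 --reassignment-json-file increase-replication-factor.json --verify --command-config zookeeper_client.properties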
But we are facing a problem while running kafka-reassign-partitions: the connection to ZooKeeper fails with the error below:
Save this to use as the --reassignment-json-file option during rollback
Partitions reassignment failed due to KeeperErrorCode = NoAuth for
/admin/reassign_partitions
org.apache.zookeeper.KeeperException$NoAuthException: KeeperErrorCode = NoAuth
for /admin/reassign_partitions
at org.apache.zookeeper.KeeperException.create(KeeperException.java:120)
at org.apache.zookeeper.KeeperException.create(KeeperException.java:54)
at kafka.zookeeper.AsyncResponse.maybeThrow(ZooKeeperClient.scala:564)
at kafka.zk.KafkaZkClient.createRecursive(KafkaZkClient.scala:1644)
at kafka.zk.KafkaZkClient.createPartitionReassignment(KafkaZkClient.scala:871)
We don't have any ACLs created on the topic, yet we still get this issue.

Error while fetching metadata in Kafka: {test=LEADER_NOT_AVAILABLE}?

I get the error after running these simple commands:
I started the ZooKeeper and Kafka servers,
then I executed the command:
./kafka-topics --create --zookeeper localhost:2181 --replication-factor 1 --partitions 1 --topic test
and then executed the command:
./kafka-console-producer --broker-list localhost:9092 --topic test
I get a list of WARN messages like:
[2019-12-08 21:36:13,024] WARN [Producer clientId=console-producer] Error while fetching metadata with correlation id 37 : {test=LEADER_NOT_AVAILABLE} (org.apache.kafka.clients.NetworkClient)
What did I do wrong?
Thanks
If your broker has auto.create.topics.enable set to true, this error will be transient and you should be able to produce messages without any further errors.
It happens because the producer asks for metadata about the topic it wants to write to, but that topic doesn't exist in the cluster yet, so the partition leader (where the producer wants to write) doesn't exist yet either.
If you retry, the broker will have created the topic and the command will work fine.
If that configuration is set to false, the broker doesn't create the topic automatically on the first request from a client, so you have to create it upfront.
Finally, though it's not your case, the above error can even happen when the topic exists but, for example, the broker that is leader for the specific topic partition is down and a new leader election is in progress.
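Either way, you can confirm whether the topic exists and currently has a leader with a describe (a sketch, assuming a local ZooKeeper as in the question):
./kafka-topics --describe --zookeeper localhost:2181 --topic test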
I wanted to add a comment, but it seems I can't. Just go through this link. Someone had a similar problem, and it seems the cause may not be what you have done but something different.
Link: https://grokbase.com/t/kafka/users/134qvay38q/leadernotavailable-exception

Kafka configuration min.insync.replicas not working

These are my early days of learning Kafka, and I am checking out every Kafka property/concept on my local machine.
So I came across the property min.insync.replicas, and here is my understanding. Please correct me if I've misunderstood anything.
Once a message is sent to a topic, the message must be written to at least min.insync.replicas replicas.
min.insync.replicas also includes the leader.
If the number of available live brokers (and, indirectly, of in-sync replicas) is less than the specified min.insync.replicas, the producer will raise an exception and fail to publish the message.
Following are the steps I followed to create the above scenario:
Started 3 brokers locally, with broker IDs 0, 1, and 2
Created the topic insync with min.insync.replicas set to 2,
using the following command:
sudo ./kafka-topics.sh --create --zookeeper localhost:2181 --replication-factor 3 --partitions 1 --topic insync --config min.insync.replicas=2
Describing the topic showed the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 2 Replicas: 2,0,1 Isr: 1,2,0
At this point, I made sure the property I provided was picked up by Kafka.
I started sending messages and consuming them from the terminal using the following commands:
Producer: ./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --producer.config ../config/producer.properties
Consumer: ./kafka-console-consumer.sh --zookeeper localhost:2181 --topic insync
At this point, I was able to send and receive messages successfully.
Brought down 2 brokers (0 and 2) and described the topic again, which showed the following:
Topic:insync PartitionCount:1 ReplicationFactor:3 Configs:min.insync.replicas=2
Topic: insync Partition: 0 Leader: 1 Replicas: 2,0,1 Isr: 1
At this point, there is just 1 in-sync replica (Isr: 1).
Then I tried to produce messages, and it worked: I was able to send messages from the console producer and see them in the console consumer.
My Kafka version: kafka_2.10-0.10.0.0
following are the producer properties:
bootstrap.servers=localhost:9092
compression.type=none
batch.size=20
acks=all
I expected the producer to fail with NotEnoughReplicasException, as mentioned in this:
public class NotEnoughReplicasException
extends RetriableException
Number of insync replicas for the partition is lower than min.insync.replicas
but it worked normally.
Am I missing something? How can I create the scenario?
*************** EDIT **********************
Instead of producing messages from the console producer, I tried to generate messages from Java code. This time, I got the expected exception in the Kafka broker, although I expected it in the producer (the Java code). As this experiment raises more questions, I've posted another question.
Is acks set to "all"? If not, try setting it to all.
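For the console producer, acks can also be passed directly on the command line instead of through a properties file (a sketch; -1 is the numeric form of "all"):
./kafka-console-producer.sh --broker-list localhost:9092 --topic insync --request-required-acks -1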
I believe that error is for the transactional producer; you may need to add this config:
transactional.id=TID-TEST
If it still doesn't work, check the replication factor and min insync replicas of the internal topic __transaction_state.
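You can inspect that internal topic with a describe (a sketch, assuming a local ZooKeeper and a broker version new enough to support transactions):
./kafka-topics.sh --describe --zookeeper localhost:2181 --topic __transaction_state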

Kafka consumer with new API not working

I found something very weird with Kafka.
I have a producer with 3 brokers:
bin/kafka-console-producer.sh --broker-list localhost:9093, localhost:9094, localhost:9095 --topic topic
Then I try to run a consumer with the new API:
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093,localhost:9094,localhost:9095 --topic topic --from-beginning
I got nothing! But if I use the old API:
bin/kafka-console-consumer.sh --zookeeper localhost:2181 --from-beginning --topic topic
I got my messages!
What am I doing wrong?
PS: I am using Kafka 0.10
I eventually resolved my problem thanks to this similar post: Kafka bootstrap-servers vs zookeeper in kafka-console-consumer
I believe it was a bug / wrong configuration of mine leading to a problem between ZooKeeper and Kafka.
SOLUTION:
First, be sure topic deletion is enabled in the server.properties file of each of your brokers:
# Switch to enable topic deletion or not, default value is false
delete.topic.enable=true
Then delete the topic:
bin/kafka-topics.sh --zookeeper localhost:2181 --delete --topic myTopic
Remove all of your brokers' log directories (the log.dirs locations, under /tmp by default).
EDIT: I faced the problem again, and I also had to remove the ZooKeeper log files in /tmp/zookeeper/version-2/.
Finally, delete the topic under /brokers/topics in ZooKeeper, as follows:
$ kafka/bin/zookeeper-shell.sh localhost:2181
Connecting to localhost:2181
Welcome to ZooKeeper!
JLine support is disabled
rmr /brokers/topics/mytopic
Then restart your brokers and create your topic again.
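Recreating it would look something like this (a sketch, assuming the original three-broker setup; adjust partitions and replication factor to your needs):
bin/kafka-topics.sh --zookeeper localhost:2181 --create --replication-factor 3 --partitions 1 --topic myTopic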
After fighting a while with the same problem: specify --partition and the new-API console consumer works (but hangs...). I have CDH 5.12 + Kafka 0.11 (from the parcel).
UPD:
I also found out that Kafka 0.11 (versioned as 3.0.0 in the CDH parcel) does not consume messages correctly. After downgrading to Kafka 0.10 it became OK, and --partition is not needed any more.
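For reference, consuming a single partition from the beginning looks something like this (a sketch; --offset earliest requires --partition):
bin/kafka-console-consumer.sh --bootstrap-server localhost:9093 --topic topic --partition 0 --offset earliest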
I had the same problem, but I was using a single broker instance for my "cluster", and I was getting this error:
/var/log/messages
[2018-04-04 22:29:39,854] ERROR [KafkaApi-20] Number of alive brokers '1' does not meet the required replication factor '3' for the offsets topic (configured via 'offsets.topic.replication.factor'). This error can be ignored if the cluster is starting up and not all brokers are up yet. (kafka.server.KafkaApis)
I just added the setting offsets.topic.replication.factor=1 to my server configuration file and restarted, and it started to work fine.
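The corresponding line in server.properties would be (only appropriate for a single-broker setup, since the internal offsets topic then cannot have more than one replica):
# single broker: the offsets topic can only have one replica
offsets.topic.replication.factor=1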

Kafka: how to delete a topic with no leader

I can't delete / reassign / make any changes to topics without a leader.
To reproduce:
Create a topic with ReplicationFactor=1
Shut down the only broker hosting it
Use kafka-topics --delete to delete the topic
The delete process never ends (I have waited for more than 6 months, and it is starting to hurt)
Describe output for the topic:
Topic:topic_73 PartitionCount:1 ReplicationFactor:1
Configs:unclean.leader.election.enable=true
Topic: topic_73 Partition: 0 Leader: -1 Replicas: 755 Isr:
Broker 755 can never come back.
How can I fix this?
You can try this script to clear the metadata from ZooKeeper and delete the Kafka logs directly:
./clear.sh /path-to-dir-containing-kafka sampletopic /path-to-kafka-logs
The content of clear.sh is as below:
#!/bin/bash
# This script is for the situation where the leader is set to -1 and there is no ISR.
# Usage: ./clear.sh <dir-containing-kafka> <topic> <kafka-log-dir>
ZK_HOST=$(hostname)
ROOT_DIR=$1
TOPIC=$2
KAFKA_LOG_DIR=$3
# Make sure the Kafka service is stopped while running this.
# Remove the topic's on-disk log segments.
rm -rf "${KAFKA_LOG_DIR}/${TOPIC}"*
# Ask Kafka to delete the topic (this may only mark it for deletion).
"${ROOT_DIR}/kafka/bin/kafka-topics.sh" --zookeeper "${ZK_HOST}:2181" --topic "${TOPIC}" --delete
# Remove the topic metadata and the stuck deletion marker from ZooKeeper.
"${ROOT_DIR}/kafka/bin/zookeeper-shell.sh" "${ZK_HOST}:2181" rmr "/brokers/topics/${TOPIC}"
"${ROOT_DIR}/kafka/bin/zookeeper-shell.sh" "${ZK_HOST}:2181" rmr "/admin/delete_topics/${TOPIC}"
Make sure to chmod +x clear.sh before using it.
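To confirm the cleanup worked, you can list what remains in ZooKeeper (a sketch, using the same zookeeper-shell pattern as the script; adjust the path to your install):
kafka/bin/zookeeper-shell.sh localhost:2181 ls /brokers/topics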