Kafka _schemas topic, when deleted, gets recreated without cleanup.policy - apache-kafka

I am using Confluent Platform version 5.3.2, and I can see that the _schemas topic is created with cleanup.policy set to compact. Without this setting the Schema Registry will not start. If I delete the _schemas topic, it gets recreated automatically, but without the cleanup.policy, and because of that the Schema Registry fails to start on the next restart.
How can I make the _schemas topic get cleanup.policy=compact when it is deleted and recreated automatically?

Stop the registry. Delete the topic. Create the topic manually with the appropriate settings (compact, high replication, one partition). Never delete the topic again.
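A minimal sketch of that manual creation, assuming a three-broker cluster and a broker reachable at localhost:9092 (both placeholders for your environment); on older brokers you would pass --zookeeper instead of --bootstrap-server:
# Recreate the Schema Registry storage topic with the settings it expects
./kafka-topics.sh --bootstrap-server localhost:9092 --create --topic _schemas \
  --partitions 1 --replication-factor 3 --config cleanup.policy=compact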
Or upgrade your Schema Registry version, and hopefully the topic will be created correctly.

Related

Cannot delete topic successfully

As far as I know, from version 1.0.x onward, delete.topic.enable=true is the default configuration, meaning that if I execute the delete command it should just work.
However, I tried deleting the topic and it is still there; it was not deleted (I can still describe the topic on all brokers).
I read in some forums that I need to restart the brokers and ZooKeeper for the deletion to take effect.
Is there any other way to delete the topic without restarting ZooKeeper and the brokers?
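For reference, the usual sequence looks like this (host names are placeholders, and whether --zookeeper or --bootstrap-server applies depends on your broker version):
# Request deletion through the admin tooling
./kafka-topics.sh --zookeeper localhost:2181 --delete --topic <topic_name>
# If the topic lingers, check whether it is only marked for deletion
./zookeeper-shell.sh localhost:2181 ls /admin/delete_topics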

Delete __Consumer_offset topic from Kafka

I'm trying to delete the Kafka topic __Consumer_offset as it is causing a lot of confusion for my brokers.
When I do so, it says this topic can't be marked for deletion.
I'm using the ZooKeeper CLI to delete it, e.g. rmr /brokers/topic __consumer_offset, but it is not working!
__consumer_offsets is a Kafka internal topic and is not allowed to be deleted through the delete-topic command. It contains information about committed offsets for each topic:partition for each group of consumers (group ID). If you want to wipe it out entirely you have to delete the ZooKeeper dataDir location. That implies you lose all the metadata.
Also, if you just want to get rid of the existing consumer groups, you can reset their offsets or consider deleting the groups.
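For instance (the group ID, topic and broker address are placeholders, and the exact flags depend on your Kafka version):
# Remove an unused consumer group entirely
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --delete --group <group_id>
# Or reset its committed offsets for a topic instead of deleting the group
./kafka-consumer-groups.sh --bootstrap-server localhost:9092 --group <group_id> \
  --topic <topic_name> --reset-offsets --to-earliest --execute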
AFAIK you cannot delete that topic. It is an internal topic and should not be deleted manually.
If you must, then you will have to manually clean/remove your data directory. When you deploy Kafka brokers and ZooKeeper, they create data directories.
Note: by removing the data directory you will lose all topics and related data, so this is not a feasible option in production.

Kafka Connect - Delete Connector with configs?

I know how to delete a Kafka connector, as mentioned here: Kafka Connect - How to delete a connector
But I am not sure whether it also deletes/erases that connector's configs, offsets and status from the *.storage.topic topics for that worker.
For example:
Let's say I delete a connector with the connector name "connector-abc-1.0.0", and the Kafka Connect worker was started with the following config:
offset.storage.topic=<topic.name>.internal.offsets
config.storage.topic=<topic.name>.internal.configs
status.storage.topic=<topic.name>.internal.status
Now, after a DELETE call for that connector, will it erase all records from the above internal topics for that specific connector?
So that I can create a new connector with the "same name" on the same worker but a different config (different offset.start or connector.class)?
When you delete a connector, the offsets are retained in the offsets topic.
If you recreate the connector with the same name, it will re-use the offsets from the previous execution (even if the connector was deleted in between).
Since Kafka is append-only, the only way the messages in those Connect topics would be removed is if a record were published with the connector name as the message key and null as the value.
You could inspect those topics with the console consumer, including --property print.key=true so you can see the keys, and keep the consumer running while you delete a connector.
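For example, something along these lines, reusing a topic name from the worker config above (the broker address is a placeholder):
# Watch the Connect offsets topic, printing keys so the connector name is visible
./kafka-console-consumer.sh --bootstrap-server localhost:9092 \
  --topic <topic.name>.internal.offsets --from-beginning \
  --property print.key=true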
You can PUT a new config at /connectors/{name}/config, but any specific offsets that are used are dependent upon the actual connector type (sink / source); for example, there is the internal Kafka __consumer_offsets topic, used by Sink connectors, as well as the offset.storage.topic, optionally used by source connectors.
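For example, assuming a Connect worker whose REST interface listens on the default localhost:8083 and reusing the connector name from the question (the JSON body is just a placeholder for your real config):
# Replace the connector's configuration in place instead of deleting it
curl -X PUT -H "Content-Type: application/json" \
  --data '{"connector.class": "...", "tasks.max": "1"}' \
  http://localhost:8083/connectors/connector-abc-1.0.0/config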
"same name" on same worker but different config(different offset.start or connector.class)?
I'm not sure changing connector.class would be a good idea with the above in mind since it'd change the connector behavior completely. offset.start isn't a property I'm aware of, so you'll need to see the documentation of that specific connector class to know what it does.

Kafka topic not able to assign leaders after creation

I was using a Kafka topic, and its metadata as well, in my application. I hard-deleted the topic from the ZooKeeper shell by deleting the directories corresponding to that topic. After creating the topic again, I described it and found that no leaders have been assigned to the newly created topic. In the consumer, I can see repeated logs printing LEADER_NOT_AVAILABLE. Any idea what I am doing wrong? Or is there perhaps some metadata related to the Kafka topic that I also need to delete and am unaware of? Thanks in advance!
Deleting topics in Kafka hasn't been straightforward until recently. In general, you shouldn't attempt to delete Kafka topics by deleting metadata in Zookeeper. You should always use the included command line utilities.
First you need to make sure that deleting topics is enabled in the server.properties file on all brokers, and do a rolling restart if needed:
delete.topic.enable=true
After you restart the brokers to enable topic deletion, you should issue the delete command using the command line utilities:
./kafka-topics.sh --zookeeper <zookeeper_host>:2181 --delete --topic <topic_name>
If at this point it's still stuck, run these two commands from the ZooKeeper shell to remove all remaining metadata for that particular topic:
rmr /brokers/topics/<topic_name>
rmr /admin/delete_topics/<topic_name>
A few more details here:
https://medium.com/@contactsunny/manually-delete-apache-kafka-topics-424c7e016ff3

Kafka Streams Application Reset Tool not deleting internal topics

I am facing problems using the Application Reset Tool in Kafka Streams. Kafka Streams creates internal topics for my streams with a cleanup.policy config that defaults to delete for repartition topics and compact for changelog topics. But I am noticing that in one of our development environments the topics do not have these configs, and as such the reset tool does not delete these topics. Even when I manually alter the config for these topics, it reverts back to no config for some of them. Any idea what might be happening? It's weird behavior that just started on its own. I am running Kafka 1.0.
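As a first check (a sketch, assuming the stock Kafka 1.0 tooling, a ZooKeeper at localhost:2181, and the usual <application.id>-<store-name>-changelog naming as a placeholder for the actual topic), you could compare what overrides the topic actually carries and re-apply the expected policy:
# Show the per-topic overrides currently set on an internal Streams topic
./kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name <application.id>-<store-name>-changelog --describe
# Re-apply the compaction policy expected for a changelog topic
./kafka-configs.sh --zookeeper localhost:2181 --entity-type topics \
  --entity-name <application.id>-<store-name>-changelog --alter \
  --add-config cleanup.policy=compact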