Is there any way to delete a topic in kafka-node?

Is there any way to delete a topic in kafka-node? I have searched the documentation, and there are steps for removing a topic from a consumer using kafka-node, but I want to delete the topic itself on the broker, not just remove it from the consumer.

No, there is no way to delete a topic using kafka-node. There's probably a way to do it via node-zookeeper-client, but that's non-standard: you'd have to look at how the standard Kafka delete-topic command does it and then roll your own deletion code.
But regardless, you can't do it via kafka-node.
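For what it's worth, on older, ZooKeeper-based clusters "rolling your own" amounted to doing what kafka-topics --delete does: creating a znode under /admin/delete_topics, which the controller picks up to start the actual deletion. A minimal Java sketch of that idea (connection string and topic name are placeholders, and it assumes delete.topic.enable=true on the brokers; the same approach ports to node-zookeeper-client from Node):

```java
import java.util.concurrent.CountDownLatch;
import org.apache.zookeeper.CreateMode;
import org.apache.zookeeper.ZooDefs;
import org.apache.zookeeper.ZooKeeper;

public class MarkTopicForDeletion {
    public static void main(String[] args) throws Exception {
        CountDownLatch connected = new CountDownLatch(1);
        // Connect to the same ZooKeeper ensemble the brokers use
        ZooKeeper zk = new ZooKeeper("localhost:2181", 10_000,
                event -> connected.countDown());
        connected.await();

        // The old kafka-topics --delete command works by creating this znode;
        // the controller notices it and performs the actual deletion.
        zk.create("/admin/delete_topics/my-topic", new byte[0],
                ZooDefs.Ids.OPEN_ACL_UNSAFE, CreateMode.PERSISTENT);
        zk.close();
    }
}
```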

Related

Automatically discover topics when using topics.regex configuration for Kafka Connector

When I'm using the topics.regex config option for a Kafka Connect sink (in this particular case the Confluent S3 sink), everything works as expected when the sink is first started: it discovers all matching topics and starts consuming messages from them. However, I also want to be able to create some topics later. I can't find anywhere in the documentation what the expected behaviour is here, but I would like the connector to automatically start consuming new topics whose names match the provided regex. Is this possible at all? If not, what would be the best way to implement some automation for this?
You should not need to do anything: the connector will find a new topic automatically if it matches the regex. It is not immediate, though; it might take a few minutes, since I believe it is driven by the consumer's metadata refresh, which defaults to 5 minutes. I never used this directly with the S3 connector you mention, but it worked fine for me with other connectors, and I don't think there should be any difference.
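The refresh in question is the consumer's metadata.max.age.ms setting, which defaults to 300000 ms (5 minutes), matching the delay described above. If that is too slow, a sketch of a worker-level override, assuming a distributed Connect worker (the consumer. prefix forwards the property to the sink tasks' consumers):

```properties
# connect-distributed.properties (worker configuration)
# Forwarded to every sink task's consumer; lowers the metadata refresh
# from the default 300000 ms so new topics matching topics.regex are
# discovered within about a minute instead of five.
consumer.metadata.max.age.ms=60000
```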

How to delete Kafka topics created programmatically

I am creating Kafka topics programmatically. When the application ends, I have to delete the created Kafka topics. I am using adminClient.deleteTopics(TOPIC_LIST), but the topics do not actually get deleted; instead they are just "marked for deletion". Does anyone know how I can delete a topic permanently using Java?
The method you're using is correct.
There's no method available to wait for the data on disk to be permanently removed, unless you decide to use something like JSch (a Java SSH library) and manually delete the log directories on the brokers.
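For reference, a minimal sketch of that call, blocking on the returned future so that any error surfaces (the broker address and topic names are placeholders). One common cause of topics staying "marked for deletion" is delete.topic.enable=false on the brokers, which was the default before Kafka 1.0, so that setting is worth checking:

```java
import java.util.Arrays;
import java.util.List;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class DeleteTopics {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        List<String> topicsToDelete = Arrays.asList("topic-a", "topic-b"); // placeholders

        try (AdminClient admin = AdminClient.create(props)) {
            // all() returns a future that completes once the deletion has been
            // accepted; blocking on it surfaces errors such as deletion being
            // disabled on the brokers.
            admin.deleteTopics(topicsToDelete).all().get();
        }
    }
}
```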

In Kafka, how to handle deleted rows from a source table that are already reflected in the Kafka topic?

I am using a JDBC source connector with mode timestamp+incrementing to fetch a table from Postgres, using Kafka Connect. Updates to the data are reflected in the Kafka topic, but deleting records has no effect. So, my questions are:
Is there some way to handle deleted records?
How do I handle records that are deleted in the table but still present in the Kafka topic?
The recommendation is to do one of the following:
1) Adjust your source database to be append/update-only as well, marking deletions via a boolean or timestamp column that is filtered out when Kafka Connect queries the table (see the sketch after this list). If your database is running out of space, you can then delete old records, which should already have been processed by Kafka.
2) Use CDC tools to capture delete events immediately, rather than missing them in a periodic table scan. Debezium is a popular option for Postgres.
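A hypothetical sketch of option 1 with the Confluent JDBC source connector. The deleted flag, table name, and connection details are all assumptions, and the custom query is wrapped in a subselect because the connector appends its own WHERE clause for the timestamp+incrementing mode:

```json
{
  "name": "postgres-source",
  "config": {
    "connector.class": "io.confluent.connect.jdbc.JdbcSourceConnector",
    "connection.url": "jdbc:postgresql://localhost:5432/mydb",
    "mode": "timestamp+incrementing",
    "timestamp.column.name": "updated_at",
    "incrementing.column.name": "id",
    "query": "SELECT * FROM (SELECT * FROM my_table WHERE deleted = false) AS live_rows",
    "topic.prefix": "my_table"
  }
}
```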
A Kafka topic can be seen as an "append-only" log. It keeps all messages for as long as you like, but Kafka is not built to delete individual messages from a topic.
In the scenario you are describing, it is common for the downstream application (consuming the topic) to handle the information about a deleted record.
As an alternative, you could set the cleanup.policy of your topic to compact, which means it will eventually keep only the latest value for each key. If you now define the key of a message as the primary key of the Postgres table, your topic will eventually delete the record when you produce a message with the same key and a null value (a "tombstone") into the topic. However, I am not sure whether your connector is flexible enough to do this.
Depending on what you do with the data in the Kafka topic, this may still not solve your problem, as the downstream application will still read both records: the original one and the null-value message marking the deletion.
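To illustrate the tombstone idea, a minimal Java sketch (broker address, topic name, and key are placeholders; it assumes the topic already uses cleanup.policy=compact and string-serialized keys):

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.serialization.StringSerializer;

public class Tombstone {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // Null value = tombstone: compaction will eventually remove all
            // records sharing this key (here, the row's primary key "42").
            producer.send(new ProducerRecord<>("my_table", "42", null));
        }
    }
}
```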

How to migrate a Kafka topic to log compaction?

We have a Kafka topic whose cleanup.policy is currently delete. Messages that have been produced to this topic have no keys. I'm able to alter the configuration of this topic, and afterwards it won't accept any new messages without a key, which is reasonable and desired.
I'm wondering what Kafka is going to do with the old keyless messages, though. Are they going to be treated as if they all shared one key, or are they simply not affected by the new cleanup policy?
Are there best practices for migrating? I'm not able to find anything about this. Is this an unusual use case?
Thanks in advance
I've run some tests in my Kafka cluster and am answering this for future readers:
Messages without a key are going to be deleted.
If you don't add new messages, you might end up with some of the old messages remaining in the partitions, because they sit in the last (active) segment, which the log cleaner never compacts. They are going to be deleted once you add new messages and the segment rolls.
I think I'll introduce a new compacted topic and republish my data to it. This forces all consumers to switch to the new topic, but that is OK in my case.
Good luck, future me
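For the republish route, a minimal sketch of creating the new compacted topic with the Java AdminClient (broker address, topic name, partition count, and replication factor are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.NewTopic;
import org.apache.kafka.common.config.TopicConfig;

public class CreateCompactedTopic {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        // New topic with compaction enabled from the start
        NewTopic topic = new NewTopic("my-topic-compacted", 6, (short) 3)
                .configs(Collections.singletonMap(
                        TopicConfig.CLEANUP_POLICY_CONFIG,
                        TopicConfig.CLEANUP_POLICY_COMPACT));

        try (AdminClient admin = AdminClient.create(props)) {
            admin.createTopics(Collections.singletonList(topic)).all().get();
        }
    }
}
```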

Can I delete a Kafka partition? (version 0.10.0.1)

We have a requirement that demands deleting/purging the data of any given partition within a topic. I am using Kafka 0.10.0.1. Is there any way I can delete the entire content of a partition on demand? If yes, then how? I see that we can use log compaction to post a null message for a key and delete it, but other than that, is there any way to achieve deletion?
Kafka does not currently support reducing the number of partitions for a topic, so no out-of-the-box tool is offered to remove a partition directly.
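For completeness, and noting that this does not apply to 0.10.0.1: newer brokers (0.11.0 and later) support deleting records up to a given offset, and the Java AdminClient later gained a deleteRecords method for it. A minimal sketch for those newer clusters (broker address, topic, partition, and offset are placeholders):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.RecordsToDelete;
import org.apache.kafka.common.TopicPartition;

public class PurgePartition {
    public static void main(String[] args) throws Exception {
        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");

        try (AdminClient admin = AdminClient.create(props)) {
            // Purge everything in partition 0 of "my-topic" before offset 42.
            admin.deleteRecords(Collections.singletonMap(
                    new TopicPartition("my-topic", 0),
                    RecordsToDelete.beforeOffset(42L)
            )).all().get();
        }
    }
}
```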