I am using the Java Spring Boot framework and am trying to prevent our consumer from creating topics in Kafka by setting config properties. The configurations are:
On the broker side:
auto.create.topics.enable=true
On the consumer side:
auto.create.topics.enable=false
For the consumer we set topic auto-creation to false, while on the broker it is true. The above configs are not working for us. If there is any other way to achieve the same result, we can discuss it.
auto.create.topics.enable is not a consumer config. It needs to be allow.auto.create.topics, which is only a valid option in kafka-clients version 2.3+.
There may be other Spring-related settings; refer to the latest comment thread here: Disable auto topic creation from Spring Kafka Consumer
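For reference, a minimal sketch with plain kafka-clients 2.3+ (bootstrap address, group id, and topic name are placeholders). With Spring Boot the same flag should be passable through the spring.kafka.consumer.properties.* pass-through, e.g. spring.kafka.consumer.properties.allow.auto.create.topics=false:

```java
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.serialization.StringDeserializer;

public class NoAutoCreateConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "my-group");                // placeholder
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class.getName());
        // kafka-clients 2.3+: stop this consumer's metadata requests from auto-creating topics
        props.put(ConsumerConfig.ALLOW_AUTO_CREATE_TOPICS_CONFIG, "false");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("existing-topic")); // placeholder
            consumer.poll(Duration.ofMillis(100));
        }
    }
}
```

Note that this only stops the consumer itself from triggering topic creation; whether other clients can still create topics is governed by the broker-side auto.create.topics.enable.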
I use KafkaAdminClient to delete Kafka topics in my Java project. I delete my topics, then I produce a message to a new topic, and Kafka creates my old topics again.
I have "allow.auto.create.topics" : "false" configuration on consumer instance.
You must set auto.create.topics.enable=false in the configuration of the broker. It also depends on how your producer is implemented; in the case of Kafka Streams this property is not applied.
I just ran into the same issue and this is how I solved it in my situation:
Make sure you no longer have any active subscriptions to that topic, i.e. unsubscribe before deleting.
You also have to close the producer that produced messages to that topic.
Apparently, if you don't do both, the clients hold a connection to the topic and Kafka doesn't let it get removed for good.
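A minimal sketch of that order of operations with plain kafka-clients (address and topic name are placeholders; the consumer and producer objects stand for whatever clients your application already has):

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.admin.AdminClient;
import org.apache.kafka.clients.admin.AdminClientConfig;

public class TopicCleanup {
    public static void main(String[] args) throws Exception {
        // 1. Quiesce the clients first, otherwise their metadata refreshes
        //    can re-create the topic right after it is deleted:
        //      consumer.unsubscribe();   // or consumer.close()
        //      producer.close();

        Properties props = new Properties();
        props.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder

        // 2. Only then delete the topic and block until the operation completes.
        try (AdminClient admin = AdminClient.create(props)) {
            admin.deleteTopics(Collections.singleton("old-topic")).all().get();
        }
    }
}
```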
How can I enable the Kafka source connector idempotency feature?
I know that in Confluent we can override producer configs with producer.* properties in the worker configuration, but what about Kafka itself? Is it the same?
After setting these configs, where can I see the applied configs for my Connect worker?
Confluent doesn't modify the base Kafka Connect properties.
For configuration of the producers used by Kafka source tasks and the consumers used by Kafka sink tasks, the same parameters can be used, but they need to be prefixed with producer. and consumer. respectively.
Starting with 2.3.0, client configuration overrides can be configured individually per connector by using the prefixes producer.override. and consumer.override. for Kafka sources or Kafka sinks respectively.
https://kafka.apache.org/documentation/#connect_running
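As a sketch of what that looks like in practice (values are illustrative, not a recommendation): worker-wide overrides go in the worker properties, while per-connector overrides (2.3.0+) go in the connector's own config and must be permitted by the worker's connector.client.config.override.policy.

```properties
# Worker config (e.g. connect-distributed.properties): applies to every source task's producer
producer.enable.idempotence=true
producer.acks=all

# Worker config: allow per-connector overrides (2.3.0+; default policy is None)
connector.client.config.override.policy=All

# Connector config: override for this connector only (2.3.0+)
producer.override.enable.idempotence=true
```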
However, Kafka Connect sources aren't idempotent (KAFKA-7077 & KIP-308).
After setting these configs, where can I see the applied configs for my Connect worker?
In the logs; the ProducerConfig or ConsumerConfig values should be shown when the tasks start.
I am using Apache NiFi version 1.10.0. I have put some data into Kafka from NiFi using the PublishKafka_2_0 processor. I have three Kafka brokers running alongside NiFi. I am getting the data from NiFi, but the topic that gets created has a replication factor of 1 and 1 partition.
How can I change the default replication factor and partition count used when PublishKafka creates a new topic? In other words, I want the processor to create new topics with partitions=3 and replication-factor=3 instead of 1.
I understand that this can be changed after the topic is created but I would like it to be done dynamically at creation.
If I understand your setup correctly, you are relying on the client side for topic creation, i.e. topics are created when NiFi attempts to produce/consume/fetch metadata for a non-existent topic. In this scenario, Kafka will use num.partitions and default.replication.factor settings for a new topic that are defined in broker config. (Kafka defaults to 1 for both.) Currently, updating these values in server.properties is the only way to control auto-created topics' configuration.
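Assuming three brokers, the change would look like this in each broker's server.properties (these are the standard broker settings and take effect after a broker restart):

```properties
# server.properties on each broker: defaults applied to auto-created topics
num.partitions=3
default.replication.factor=3
```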
KIP-487 is being worked on to allow producers to control topic creation (as opposed to a server-side, one-size-fits-all verdict), but even in that implementation there is no plan for a client to control the number of partitions or the replication factor.
I am using the Kafka client library that comes with Kafka 0.11.0.1. I noticed that the KafkaConsumer no longer needs ZooKeeper to be configured. Does that mean the ZooKeeper server will automatically be located through the Kafka bootstrap servers?
Since Kafka 0.9, the KafkaConsumer implementation stores offset commits and consumer group information in the Kafka brokers themselves. This eliminates the ZooKeeper dependency on the client side and increases the scalability of the consumers.
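A minimal sketch of a 0.11-era consumer (addresses, group id, and topic are placeholders): only bootstrap.servers is configured, there is no zookeeper.connect on the client, and committed offsets go to the internal __consumer_offsets topic on the brokers.

```java
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRecord;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class NoZkConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        // Only broker addresses are needed; the client never talks to ZooKeeper.
        props.put("bootstrap.servers", "broker1:9092,broker2:9092"); // placeholders
        props.put("group.id", "demo-group");                         // placeholder
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props)) {
            consumer.subscribe(Collections.singletonList("some-topic")); // placeholder
            ConsumerRecords<String, String> records = consumer.poll(1000); // 0.11-era poll(long)
            for (ConsumerRecord<String, String> record : records) {
                System.out.printf("%s -> %s%n", record.key(), record.value());
            }
        }
    }
}
```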
I'm a new user of Apache Kafka and I'm still getting to know the internals.
In my use case, I need to increase the number of partitions of a topic dynamically from the Kafka Producer client.
I found other similar questions about increasing the number of partitions, but they use the ZooKeeper configuration. My KafkaProducer has only the Kafka broker config, but not the ZooKeeper config.
Is there any way I can increase the number of partitions of a topic from the Producer side? I'm running Kafka version 0.10.0.0.
As of Kafka 0.10.0.1 (latest release): As Manav said, it is not possible to increase the number of partitions from the Producer client.
Looking ahead (next releases): In an upcoming version of Kafka, clients will be able to perform some topic management actions, as outlined in KIP-4. Much of the KIP-4 functionality is already completed and available in Kafka's trunk; the code in trunk as of today allows clients to create and delete topics. Unfortunately, for your use case, increasing the number of partitions is not possible yet; it is in scope for KIP-4 (see Alter Topics Request) but not completed.
TL;DR: The next versions of Kafka will allow you to increase the number of partitions of a Kafka topic, but this functionality is not yet available.
It is not possible to increase the number of partitions from the Producer client.
Is there a specific use case for why you cannot use the broker to achieve this?
My KafkaProducer has only the Kafka broker config, but not the ZooKeeper config.
I don't think any client will let you change the broker config. At most, you can read the server-side config.
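For completeness, the broker-side route at 0.10.x is the kafka-topics tool (ZooKeeper address and topic name are placeholders; partitions can only be increased, never decreased, and the key-to-partition mapping of existing data changes):

```sh
bin/kafka-topics.sh --zookeeper localhost:2181 --alter --topic my-topic --partitions 3
```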
Your producer can provide different keys for its ProducerRecords; the default partitioner hashes the key, so records with different keys typically land in different partitions of the existing topic. For example, to spread records over two partitions, use keys "abc" and "xyz".
This can be done in version 0.9 as well.
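A minimal sketch of that keyed-records approach (bootstrap address and topic name are placeholders); note that keys spread records over the partitions the topic already has, they do not add partitions:

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerRecord;

public class KeyedProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092"); // placeholder
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // The default partitioner hashes the key, so these two records
            // usually (not necessarily) go to different existing partitions.
            producer.send(new ProducerRecord<>("my-topic", "abc", "payload-1"));
            producer.send(new ProducerRecord<>("my-topic", "xyz", "payload-2"));
        }
    }
}
```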