We have a Spring Boot application that uses spring-kafka. We want to disable the Kafka producer (KafkaTemplate) when a config property is updated. I have tried a conditional bean and an ApplicationContext refresh.
Is there a way to gracefully shut down the KafkaProducer with spring-kafka?
You can call destroy() on the DefaultKafkaProducerFactory and it will close the (singleton) producer, but the next time some code calls createProducer() another one will be created; there's currently no way to prevent that. You would need to subclass the factory and throw an exception when you don't want a producer to be created.
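As a rough sketch of that subclassing approach (assuming a spring-kafka version whose DefaultKafkaProducerFactory exposes reset(); the class name and the enable/disable flag are hypothetical), it could look something like this:

```java
import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;

import org.apache.kafka.clients.producer.Producer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

// Hypothetical factory that can be "switched off": when disabled, it closes the
// cached singleton producer and refuses to create a new one.
public class SwitchableProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    private final AtomicBoolean enabled = new AtomicBoolean(true);

    public SwitchableProducerFactory(Map<String, Object> configs) {
        super(configs);
    }

    // Call this when the config property flips to "disabled".
    public void disable() {
        this.enabled.set(false);
        reset(); // closes the cached singleton producer
    }

    public void enable() {
        this.enabled.set(true);
    }

    @Override
    public Producer<K, V> createProducer() {
        if (!this.enabled.get()) {
            throw new IllegalStateException("Kafka producer is currently disabled");
        }
        return super.createProducer();
    }
}
```

Any KafkaTemplate built on this factory would then fail fast on send() while the flag is off, instead of silently recreating a producer.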
We have a Spring Boot application producing messages to an AWS MSK Kafka cluster. Every now and then our MSK cluster gets an automatic security update (or similar), and afterwards our KafkaTemplate producer loses its connection to the cluster, so all sends end up timing out. The producer doesn't recover from this automatically and keeps trying to send messages. Subsequent idempotent sends throw an exception:
org.apache.kafka.common.errors.ClusterAuthorizationException: The producer is not authorized to do idempotent sends
Restarting the producer application fixes the issue. Our producer is a very simple application that uses KafkaTemplate to send messages, without any custom retry logic.
One suggestion was to add a producer reset call to the error handler, but testing that solution is hard because there seems to be no reliable way to reproduce the issue.
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/core/ProducerFactory.html#reset()
Any ideas why this happens and what is the best way to fix it?
We have an open issue to close the producer on any timeout...
https://github.com/spring-projects/spring-kafka/issues/2251
Contributions are welcome.
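In the meantime, a sketch of the reset-on-send-failure idea mentioned in the question might look like the following (assuming spring-kafka 3.x, where KafkaTemplate.send() returns a CompletableFuture; the class and method names are illustrative):

```java
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;
import org.springframework.stereotype.Component;

// Minimal sketch: if a send fails (e.g. times out after an MSK security update),
// reset the producer factory so the next send creates a fresh producer.
@Component
public class ResettingSender {

    private final KafkaTemplate<String, String> template;
    private final ProducerFactory<String, String> producerFactory;

    public ResettingSender(KafkaTemplate<String, String> template,
                           ProducerFactory<String, String> producerFactory) {
        this.template = template;
        this.producerFactory = producerFactory;
    }

    public void send(String topic, String payload) {
        template.send(topic, payload).whenComplete((result, ex) -> {
            if (ex != null) {
                // Discard the cached producer; a new one is created lazily on the next send.
                producerFactory.reset();
            }
        });
    }
}
```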
My Spring Boot application does two tasks:
Initialise the cache
Start a Kafka consumer
I already have an ApplicationEvent for cache initialisation, and I want the consumer to start listening for messages only after cache initialisation has completed.
If you are using @KafkaListener, give it an id and set autoStartup to false.
When you are ready, use the KafkaListenerEndpointRegistry bean to start the container.
registry.getMessageListenerContainer("myId").start().
If you are using a listener container bean, set its autoStartup to false and start() it when ready.
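A minimal sketch of the @KafkaListener approach (the CacheInitializedEvent, listener class, topic and id are hypothetical placeholders for your own names):

```java
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.context.ApplicationEvent;
import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

// Hypothetical event your cache-initialisation code publishes when it finishes.
class CacheInitializedEvent extends ApplicationEvent {
    CacheInitializedEvent(Object source) {
        super(source);
    }
}

@Component
public class DeferredConsumer {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    // The container does not start with the application context...
    @KafkaListener(id = "myId", topics = "myTopic", autoStartup = "false")
    public void listen(String message) {
        // process the message
    }

    // ...it is started only once the cache has been initialised.
    @EventListener
    public void onCacheInitialized(CacheInitializedEvent event) {
        registry.getMessageListenerContainer("myId").start();
    }
}
```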
In a Kafka Connect SinkTask implementation, if there is an unavoidable exception and I invoke the stop() method from my code, does the connector stop altogether?
Only the task that encountered the exception will stop.
The Connect cluster can have multiple connectors, which won't stop, and a single connector can have other tasks (depending on your configuration) that, for example, are assigned different, clean data that can still be processed.
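For illustration, a sketch of a sink task that fails only itself: rather than calling stop() directly (stop() is a lifecycle callback the framework invokes), the usual pattern is to throw a ConnectException from put(); the framework then marks that task as FAILED and calls stop() for cleanup, while other tasks and connectors keep running. Class and helper names below are hypothetical:

```java
import java.util.Collection;
import java.util.Map;

import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class MySinkTask extends SinkTask {

    @Override
    public String version() {
        return "1.0";
    }

    @Override
    public void start(Map<String, String> props) {
        // open resources here
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        try {
            writeToSink(records); // hypothetical delivery to the target system
        }
        catch (Exception e) {
            // Unrecoverable error: fail only this task, not the whole connector.
            throw new ConnectException("Unrecoverable error while writing records", e);
        }
    }

    @Override
    public void stop() {
        // close resources here; called by the framework after the task fails or shuts down
    }

    private void writeToSink(Collection<SinkRecord> records) {
        // ... write records to the external system ...
    }
}
```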
I'm using Kafka with the ZooKeeper that comes with the Kafka bundle. I'm also using Spring Cloud Stream and the Kafka binder.
I wanted to see what happens if ZooKeeper is down while the Kafka broker is running. I sent some items to the Kafka topic via a Spring Cloud Stream source. To my surprise, instead of getting an exception, Spring Cloud Stream reported success; however, my items never reached the Kafka topic. Is this the expected behavior, or a bug? Is there something I can configure to get an exception back when ZooKeeper is down?
Try the Kafka producer sync = true property.
I'm afraid we can still reach the broker for metadata, but sending a record is done asynchronously by default.
So, to fully surface network problems, we have to switch to sync mode.
http://docs.spring.io/spring-cloud-stream/docs/Chelsea.SR2/reference/htmlsingle/index.html#_kafka_producer_properties
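For example, assuming the source's output binding is named output, the binder-specific producer property would look something like this in application.properties:

```properties
# Make the Kafka binder producer wait for the send result,
# so network/broker failures surface as exceptions instead of being swallowed.
spring.cloud.stream.kafka.bindings.output.producer.sync=true
```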
I am a newbie to Kafka, working on a Spring Kafka POC. Our Kafka servers are Kerberized, and with all the required configuration we are able to access the Kerberized Kafka server. Now we have another requirement: we also have to consume topics from non-Kerberized (plain) Kafka servers. Can we do this in a single application by creating another KafkaConsumer with its own listener?
Yes; just define a different consumer factory bean for the second consumer.
If you are using Spring Boot's auto configuration, you will have to manually declare both because the auto configuration is disabled if a user-defined bean is discovered.
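A minimal sketch of such a configuration (broker addresses, group ids and the Kerberos settings are placeholders; adjust the security properties to your cluster):

```java
import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.consumer.ConsumerConfig;
import org.apache.kafka.common.serialization.StringDeserializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.config.ConcurrentKafkaListenerContainerFactory;
import org.springframework.kafka.core.ConsumerFactory;
import org.springframework.kafka.core.DefaultKafkaConsumerFactory;

@Configuration
public class KafkaConsumerConfig {

    // Factory for the Kerberized cluster.
    @Bean
    public ConsumerFactory<String, String> kerberizedConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "kerberized-broker:9093");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "secure-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put("security.protocol", "SASL_PLAINTEXT");
        props.put("sasl.kerberos.service.name", "kafka");
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kerberizedListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(kerberizedConsumerFactory());
        return factory;
    }

    // Factory for the plain (non-Kerberized) cluster.
    @Bean
    public ConsumerFactory<String, String> plainConsumerFactory() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, "plain-broker:9092");
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "plain-group");
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        return new DefaultKafkaConsumerFactory<>(props);
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> plainListenerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory =
                new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(plainConsumerFactory());
        return factory;
    }
}
```

Each @KafkaListener then picks its factory via the containerFactory attribute, e.g. @KafkaListener(topics = "plainTopic", containerFactory = "plainListenerFactory").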