Handling graceful shutdown in Spring Boot and Kafka - apache-kafka

How do we handle graceful shutdown of Kafka consumers in a Spring Boot application?
We are using @KafkaListener and managing offsets on the consumer side. Is there any way to handle graceful shutdown in Kafka?
We have configured graceful shutdown in the application.properties file (server.shutdown=graceful). It works for normal REST calls, but we need the same graceful behavior for the Kafka consumer.
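Spring Kafka's listener containers already stop in an orderly way when the application context closes, but you can make the hand-off explicit. A minimal sketch, assuming Spring Kafka on Spring Boot 3 (class name is illustrative; use javax.annotation.PreDestroy on Boot 2.x):

    import jakarta.annotation.PreDestroy;
    import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
    import org.springframework.stereotype.Component;

    @Component
    public class KafkaListenerShutdown {

        private final KafkaListenerEndpointRegistry registry;

        public KafkaListenerShutdown(KafkaListenerEndpointRegistry registry) {
            this.registry = registry;
        }

        @PreDestroy
        public void stopListeners() {
            // stop() blocks while each container finishes its in-flight records
            // and commits offsets, bounded by the container's shutdownTimeout.
            registry.stop();
        }
    }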

Related

Why is KafkaTemplate losing connection to Kafka after AWS MSK cluster update?

We have a Spring Boot application producing messages to an AWS MSK Kafka cluster. Every now and then our MSK cluster gets an automatic security update (or similar), and afterwards our KafkaTemplate producer loses its connection to the cluster, so all sends end up timing out. The producer doesn't recover from this automatically and keeps trying to send messages. Subsequent idempotent sends throw an exception:
org.apache.kafka.common.errors.ClusterAuthorizationException: The producer is not authorized to do idempotent sends
Restarting the producer application fixes the issue. Our producer is a very simple application that uses KafkaTemplate to send messages, without any custom retry logic.
One suggestion was to add a producer reset call to the error handler, but testing that solution is hard, as there seems to be no reliable way to reproduce the issue.
https://docs.spring.io/spring-kafka/api/org/springframework/kafka/core/ProducerFactory.html#reset()
Any ideas why this happens and what is the best way to fix it?
We have an open issue to close the producer on any timeout...
https://github.com/spring-projects/spring-kafka/issues/2251
Contributions are welcome.
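In the meantime, a hedged sketch of the reset-on-error suggestion above (class and field names are illustrative; assumes Spring Kafka 3.x, where send() returns a CompletableFuture):

    import org.apache.kafka.common.errors.ClusterAuthorizationException;
    import org.springframework.kafka.core.KafkaTemplate;
    import org.springframework.kafka.core.ProducerFactory;

    public class ResettingSender {

        private final KafkaTemplate<String, String> template;
        private final ProducerFactory<String, String> producerFactory;

        public ResettingSender(KafkaTemplate<String, String> template,
                               ProducerFactory<String, String> producerFactory) {
            this.template = template;
            this.producerFactory = producerFactory;
        }

        public void send(String topic, String payload) {
            template.send(topic, payload).whenComplete((result, ex) -> {
                if (ex != null && ex.getCause() instanceof ClusterAuthorizationException) {
                    // Close all cached producers; the next send creates a fresh one.
                    producerFactory.reset();
                }
            });
        }
    }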

How to manage Flink application when Kafka broker is unavailable?

I have a Flink application running in production which writes data to a Kafka topic owned by an external vendor.
We were notified by the vendor that they would be migrating their cluster, so there will be downtime during which the Kafka brokers will not be available.
My question is: what will happen to the Flink application's data while the topic is unavailable for writes? Can I let my Flink application keep running, or should I stop it and wait until the brokers are back up?
The task will fail if it can't connect to the Kafka sink. What happens after the failure depends on your Task Failure Recovery strategy.
If you don't want to keep an eye on when Kafka will be available again, a fixed-delay strategy with infinite retries and a long delay, or an exponential-delay strategy, may be your best option so you don't overload your infrastructure with unnecessary restarts.
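A minimal sketch of the exponential-delay option, configured programmatically (all backoff values are illustrative, not tuned):

    import org.apache.flink.api.common.restartstrategy.RestartStrategies;
    import org.apache.flink.api.common.time.Time;
    import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;

    public class JobSetup {
        public static void main(String[] args) throws Exception {
            StreamExecutionEnvironment env =
                    StreamExecutionEnvironment.getExecutionEnvironment();

            // Back off 10s, 20s, 40s ... capped at 10 minutes between restarts;
            // the backoff resets after an hour without failures.
            env.setRestartStrategy(RestartStrategies.exponentialDelayRestart(
                    Time.seconds(10),  // initial backoff
                    Time.minutes(10),  // max backoff
                    2.0,               // backoff multiplier
                    Time.hours(1),     // reset threshold
                    0.1));             // jitter factor

            // ... build the rest of the pipeline, then call env.execute()
        }
    }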

Triggering a Kubernetes job for a Kafka message

I have a Kubernetes service that only does work when it consumes a message from a Kafka queue. The queue rarely receives messages, and running the service as a job triggered whenever a message arrives would save resources.
I see that Kubernetes has this functionality for AMQP-type message services: https://kubernetes.io/docs/tasks/job/coarse-parallel-processing-work-queue/
Is there a way to adapt this for Kafka, given that Kafka does not support AMQP? I'd switch to a different messaging system, but other services that also read from this queue require Kafka.
That Kafka consumer Service is all you really need. If you want to save resources, it can be paired with the KEDA autoscaler so that it scales up and down depending on load or consumer-group lag, as sketched below.
Or you can use a serverless platform such as Knative to trigger workloads from Kafka (or other messaging system) events.
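For illustration, a minimal KEDA ScaledObject that scales a consumer deployment on consumer-group lag, including scale-to-zero when the topic is idle (all names and thresholds are placeholders):

    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: kafka-consumer-scaler
    spec:
      scaleTargetRef:
        name: my-consumer-deployment   # placeholder deployment name
      minReplicaCount: 0               # scale to zero when the topic is idle
      maxReplicaCount: 5
      triggers:
        - type: kafka
          metadata:
            bootstrapServers: kafka:9092   # placeholder broker address
            consumerGroup: my-group        # placeholder consumer group
            topic: my-topic                # placeholder topic
            lagThreshold: "10"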
"Kafka does not support AMQP"
Kafka Connect should be able to bridge AMQP to Kafka; Apache Camel, for example, has connectors for both.

Kafka Streams application stops working after no messages have been read for a while

I have noticed that my Kafka Streams application stops working when it has not read new messages from the Kafka topic for a while. It is the third time I have seen this happen.
No messages have been produced to the topic for 5 days. My Kafka Streams application, which also hosts a spark-java web server, is still responsive. However, the messages I produce to the Kafka topic are no longer being read by Kafka Streams. When I restart the application, all messages are fetched from the broker.
How can I make my Kafka Streams application more resilient to this kind of scenario? It feels as if Kafka Streams has an internal "timeout" after which it closes the connection to the Kafka broker when no messages have been received. I could not find such a setting in the documentation.
I use Kafka 1.1.0 and Kafka Streams 1.0.0
Kafka Streams does not have an internal timeout that controls when to permanently close a connection to the Kafka broker; the broker, on the other hand, does have a timeout for closing idle client connections. But Streams will keep trying to reconnect once it has processed result data ready to be sent to the brokers. So I'd suspect your observed issue has some other cause.
Could you share a sketch of your application topology and the config properties you use, so I can better understand the issue?
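While diagnosing, one hedged way to make silent stream-thread failures visible is to register the exception and state hooks that exist in this Streams version (handler bodies are illustrative):

    import java.util.Properties;
    import org.apache.kafka.streams.KafkaStreams;
    import org.apache.kafka.streams.StreamsBuilder;

    public class StreamsMonitoring {
        public static void main(String[] args) {
            StreamsBuilder builder = new StreamsBuilder();
            // ... define the topology here ...
            Properties props = new Properties(); // application.id, bootstrap.servers, ...

            KafkaStreams streams = new KafkaStreams(builder.build(), props);

            // Log stream-thread deaths instead of letting them pass silently.
            streams.setUncaughtExceptionHandler((thread, throwable) ->
                    System.err.println("Stream thread " + thread.getName()
                            + " died: " + throwable));

            // React when the whole instance enters ERROR, e.g. exit so an
            // orchestrator can restart the application.
            streams.setStateListener((newState, oldState) -> {
                if (newState == KafkaStreams.State.ERROR) {
                    System.exit(1);
                }
            });

            streams.start();
        }
    }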

Spring cloud stream not throwing exception when zookeeper is down

I'm using Kafka with the ZooKeeper that comes with the Kafka bundle. I'm also using Spring Cloud Stream with the Kafka binder.
I wanted to see what happens if ZooKeeper is down while the Kafka broker is running. I sent some items to the Kafka topic via a Spring Cloud Stream source. To my surprise, instead of getting an exception, Spring Cloud Stream reported success; however, my items never made it into the Kafka topic. Is this the expected behavior, or a bug? Is there something I can configure so that I get an exception back when ZooKeeper is down?
Try the Kafka producer sync=true property.
I'm afraid the broker can still be reached for metadata, but the actual record send is performed asynchronously by default.
So, to fully track down the network problems, we have to switch to sync mode.
http://docs.spring.io/spring-cloud-stream/docs/Chelsea.SR2/reference/htmlsingle/index.html#_kafka_producer_properties
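Concretely, with the Kafka binder this is a per-binding producer property in application.properties; a sketch for a binding named "output" (the binding name is a placeholder):

    # Block on each send so broker-side failures surface as exceptions
    # instead of being lost in the async path ("output" is a placeholder).
    spring.cloud.stream.kafka.bindings.output.producer.sync=true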