start kafka consumer once event is completed - apache-kafka

My Spring Boot application does two tasks:
Initialise a cache
Start a Kafka consumer
I already have an ApplicationEvent for cache initialisation, and I want the consumer to start listening for messages only once cache initialisation has completed.

If you are using @KafkaListener, give it an id and set autoStartup to false.
When you are ready, use the KafkaListenerEndpointRegistry bean to start the container:
registry.getListenerContainer("myId").start();
If you are using a listener container bean instead, set its autoStartup to false and call start() on it when ready.
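As a minimal sketch of the @KafkaListener approach, assuming a custom CacheInitializedEvent is published when the cache is ready (the event type, listener id and topic name are placeholders):

import org.springframework.context.event.EventListener;
import org.springframework.kafka.annotation.KafkaListener;
import org.springframework.kafka.config.KafkaListenerEndpointRegistry;
import org.springframework.stereotype.Component;

@Component
public class OrderListener {

    // Registered with the registry but not started with the application context
    @KafkaListener(id = "myId", topics = "orders", autoStartup = "false")
    public void listen(String message) {
        // process the message
    }
}

@Component
class ListenerStarter {

    private final KafkaListenerEndpointRegistry registry;

    ListenerStarter(KafkaListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    // Start the listener container only after the cache-initialisation event fires
    @EventListener
    public void onCacheInitialized(CacheInitializedEvent event) {
        registry.getListenerContainer("myId").start();
    }
}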

Related

Kafka AdminClient DEFAULT_API_TIMEOUT_MS_CONFIG property not working as expected

In my Java application, I created an Admin client for my Kafka producer and used it to list topics:
Admin admin1 = Admin.create(properties);
ListTopicsResult listTopicResult = admin1.listTopics();
When the producer was not able to connect to the broker, the thread got stuck at the listTopics step.
To avoid this I added the DEFAULT_API_TIMEOUT_MS_CONFIG property and set it to 2000 ms, so that a timeout should occur after 2 s if the producer cannot connect to the broker:
properties.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 2000);
But this does not work as expected: even with DEFAULT_API_TIMEOUT_MS_CONFIG set to 2000 ms, no timeout happened and the thread stayed stuck.
Is there another way to achieve the timeout, or does it need to be configured differently?
Also, can using AdminClient instead of Admin have any impact?
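For reference, a self-contained sketch of the setup described above (the broker address is a placeholder). Note that listTopics() itself returns immediately; the thread actually blocks when the returned KafkaFuture is resolved, so bounding that get() is one way to guarantee the caller is eventually released:

import java.util.Properties;
import java.util.Set;
import java.util.concurrent.TimeUnit;
import org.apache.kafka.clients.admin.Admin;
import org.apache.kafka.clients.admin.AdminClientConfig;
import org.apache.kafka.clients.admin.ListTopicsResult;

public class TopicLister {
    public static void main(String[] args) throws Exception {
        Properties properties = new Properties();
        properties.put(AdminClientConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder
        // Upper bound for admin operations that do not set their own timeout
        properties.put(AdminClientConfig.DEFAULT_API_TIMEOUT_MS_CONFIG, 2000);

        try (Admin admin = Admin.create(properties)) {
            ListTopicsResult listTopicsResult = admin.listTopics();
            // The blocking happens here, not in listTopics(); the bounded get()
            // caps the wait on the calling side regardless of client-side timeouts
            Set<String> topics = listTopicsResult.names().get(5, TimeUnit.SECONDS);
            System.out.println(topics);
        }
    }
}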

Pausing Kafka consumer partitions through a farm

We are pointing our Kafka consumer at a farm of partitions for load balancing.
When we use a REST controller endpoint to pause the Kafka consumer, the service only pauses a few of the partitions, not all of them. We want all the partitions to be paused, but we are unable to pause them all even with repeated calls. How would you suggest we accomplish this? Hazelcast?
Thanks
consumer.pause() only pauses the instance on which it is called, not the entire consumer group.
A load balancer cannot target all of the REST endpoints that wrap your consumers, since requests are routed randomly between instances, so yes, you would need some sort of external shared state. ZooKeeper would be a better option unless you already have a Hazelcast cluster. Apache Curator is a high-level ZooKeeper client that can be used for this; for example, a shared counter could hold 0 for the paused state and non-zero for not paused.
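A rough sketch of that approach using Curator's SharedCount recipe (the ZooKeeper address, znode path, topic name, and the 0-means-paused convention are all assumptions). Each instance watches the shared counter and pauses or resumes its own consumer from inside its poll loop, so the KafkaConsumer is only touched by the thread that owns it:

import java.time.Duration;
import java.util.List;
import java.util.Properties;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.curator.framework.CuratorFramework;
import org.apache.curator.framework.CuratorFrameworkFactory;
import org.apache.curator.framework.recipes.shared.SharedCount;
import org.apache.curator.framework.recipes.shared.SharedCountListener;
import org.apache.curator.framework.recipes.shared.SharedCountReader;
import org.apache.curator.framework.state.ConnectionState;
import org.apache.curator.retry.ExponentialBackoffRetry;
import org.apache.kafka.clients.consumer.KafkaConsumer;

public class PausableConsumer {

    private final AtomicBoolean paused = new AtomicBoolean(false);

    public void run(Properties consumerProps) throws Exception {
        CuratorFramework curator = CuratorFrameworkFactory
                .newClient("zookeeper:2181", new ExponentialBackoffRetry(1000, 3));
        curator.start();

        // Every instance watches the same counter: 0 = paused, non-zero = running
        SharedCount pauseFlag = new SharedCount(curator, "/consumers/pause-flag", 1);
        pauseFlag.addListener(new SharedCountListener() {
            @Override
            public void countHasChanged(SharedCountReader reader, int newCount) {
                paused.set(newCount == 0);
            }

            @Override
            public void stateChanged(CuratorFramework client, ConnectionState newState) {
                // connection-state changes are ignored in this sketch
            }
        });
        pauseFlag.start();

        try (KafkaConsumer<String, String> consumer = new KafkaConsumer<>(consumerProps)) {
            consumer.subscribe(List.of("my-topic"));
            while (true) {
                // pause()/resume() are applied by the poll loop, which owns the consumer
                if (paused.get()) {
                    consumer.pause(consumer.assignment());
                } else {
                    consumer.resume(consumer.paused());
                }
                consumer.poll(Duration.ofMillis(500))
                        .forEach(record -> System.out.println(record.value()));
            }
        }
    }
}

The REST endpoint that handles the pause request then only has to flip the shared counter; every instance sees the change and pauses its own partitions.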

Stop the kafka connector

In a Kafka Connect SinkTask implementation, if there is an unavoidable exception and I invoke the stop() method from my code, does the connector stop altogether?
Only the task that encountered the exception will stop.
The Connect cluster can have multiple connectors, which won't stop, and a single connector can have other tasks, depending on your configuration, that would for example be assigned different, clean data that can still be processed.
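To illustrate the failure scope, here is a sketch of a SinkTask (the write() helper is hypothetical) that fails itself by throwing a ConnectException from put(); this marks only that task as FAILED, while the other tasks and connectors in the cluster keep running:

import java.util.Collection;
import java.util.Map;
import org.apache.kafka.connect.errors.ConnectException;
import org.apache.kafka.connect.sink.SinkRecord;
import org.apache.kafka.connect.sink.SinkTask;

public class MySinkTask extends SinkTask {

    @Override
    public String version() {
        return "1.0";
    }

    @Override
    public void start(Map<String, String> props) {
        // open connections to the target system here
    }

    @Override
    public void put(Collection<SinkRecord> records) {
        for (SinkRecord record : records) {
            try {
                write(record); // hypothetical delivery to the sink
            } catch (Exception e) {
                // Unavoidable failure: this stops only the current task
                throw new ConnectException("Unrecoverable error writing record", e);
            }
        }
    }

    private void write(SinkRecord record) {
        // placeholder for the actual sink logic
    }

    @Override
    public void stop() {
        // called by the framework when the task is stopped; release resources here
    }
}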

When the verticle will undeploy in vertx

I am seeing the statement below in the Vert.x documentation, and I want to understand whether a verticle is automatically undeployed when processing is complete, or whether we need to call the undeploy method explicitly.
Automatic clean-up in verticles
If you’re creating consumers and producer from inside verticles, those consumers and producers will be automatically closed when the verticle is undeployed.
Thanks.
This statement indicates that when a verticle is undeployed, Vert.x resources created inside it, such as event bus consumers, event bus producers, or HTTP servers, will be closed automatically.
As for when verticles are undeployed: they are undeployed when you do it manually, or automatically if you started them via the Vert.x command line or the Launcher class.
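A small sketch (assuming the Vert.x 4 API) of deploying and then manually undeploying a verticle; the event bus consumer created inside it is closed automatically when the verticle is undeployed:

import io.vertx.core.AbstractVerticle;
import io.vertx.core.Vertx;

public class CleanupExample {

    public static class MyVerticle extends AbstractVerticle {
        @Override
        public void start() {
            // Created inside the verticle, so it is closed automatically on undeploy
            vertx.eventBus().consumer("my.address",
                    message -> System.out.println("Received: " + message.body()));
        }
    }

    public static void main(String[] args) {
        Vertx vertx = Vertx.vertx();
        vertx.deployVerticle(new MyVerticle())
                .onSuccess(deploymentId -> {
                    // Explicit undeploy; the consumer above is cleaned up for us
                    vertx.undeploy(deploymentId)
                            .onSuccess(v -> System.out.println("Undeployed " + deploymentId));
                });
    }
}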

spring kafka shutting down producer gracefully during runtime

We have a Spring Boot application that uses spring-kafka. We want to disable the Kafka producer (KafkaTemplate) when a config property is updated. I have tried a conditional bean and refreshing the application context.
Is there a way to gracefully shut down a Kafka producer with spring-kafka?
You can call destroy() on the DefaultKafkaProducerFactory and it will close the (singleton) producer, but the next time some code calls createProducer() another one will be created; there's currently no way to prevent that. You would need to subclass the factory and throw an exception when you don't want a producer to be created.
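A sketch of that subclassing approach (the enable/disable toggle and how it is wired to your config property are assumptions):

import java.util.Map;
import java.util.concurrent.atomic.AtomicBoolean;
import org.apache.kafka.clients.producer.Producer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;

public class ToggleableProducerFactory<K, V> extends DefaultKafkaProducerFactory<K, V> {

    private final AtomicBoolean enabled = new AtomicBoolean(true);

    public ToggleableProducerFactory(Map<String, Object> configs) {
        super(configs);
    }

    // Close the shared producer gracefully and block creation of new ones
    public void disable() {
        this.enabled.set(false);
        try {
            destroy();
        } catch (Exception ex) {
            throw new IllegalStateException("Failed to close the producer", ex);
        }
    }

    public void enable() {
        this.enabled.set(true);
    }

    @Override
    public Producer<K, V> createProducer() {
        if (!this.enabled.get()) {
            throw new IllegalStateException("Kafka producer is currently disabled");
        }
        return super.createProducer();
    }
}

Calling disable() when the property changes closes the current singleton producer; any later send through the KafkaTemplate then fails fast instead of silently creating a new producer.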