I have two Kafka consumers, defined in two microservices with different group ids, consuming from one producer. One of the microservices is not consuming the messages from the producer.
@Component
@ConditionalOnProperty(value = "XXX", havingValue = "true")
public class EventListener {

    @KafkaListener(id = "XX", topics = "#{'${topic}'}")
    public void consumeMessageEvent(Event messageEvent, ConsumerRecord<String, ?> record) {
    }
}
I'm new to Kafka. I have created a Kafka consumer with Spring Boot (the spring-kafka dependency). In my app I have used consumerFactory and producerFactory beans for config, and I have created the Kafka consumer like below.
@RetryableTopic(
        attempts = "3",
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        autoCreateTopics = "false")
@KafkaListener(topics = "myTopic", groupId = "myGroupId")
public void consume(@Payload(required = false) String message) {
    processMessage(message);
}
My configs are as below:
@Bean
public ConsumerFactory<String, Object> consumerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("kafka.consumer.bootstrap.servers"));
    config.put(ConsumerConfig.GROUP_ID_CONFIG, env.getProperty("kafka.consumer.group"));
    config.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    config.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    return new DefaultKafkaConsumerFactory<>(config);
}

@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.DEBUG);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    return factory;
}

@Bean
public ProducerFactory<String, String> producerFactory() {
    Map<String, Object> config = new HashMap<>();
    config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, env.getProperty("kafka.consumer.bootstrap.servers"));
    config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
    return new DefaultKafkaProducerFactory<>(config);
}

@Bean
public KafkaTemplate<String, String> kafkaTemplate() {
    return new KafkaTemplate<>(producerFactory());
}
So I want to consume in parallel, since I may get more messages. What I found about consuming topics in parallel is that I need to create multiple partitions for a topic and a consumer for each partition. Let's say I have 10 partitions for my topic; then I can have 10 consumers in the same consumer group, each reading one partition. I understand this behavior, but my concern is how to create several consumers in my application.
Do I have to write multiple Kafka consumers using @KafkaListener with the same functionality? In that case, do I have to write the method below X times if I need X identical consumers?
@RetryableTopic(
        attempts = "3",
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        autoCreateTopics = "false")
@KafkaListener(topics = "myTopic", groupId = "myGroupId")
public void consume(@Payload(required = false) String message) {
    processMessage(message);
}
What are the options or configs that I need to achieve parallel consumption with multiple consumers?
Thank you in advance.
The @KafkaListener annotation has this option:
/**
 * Override the container factory's {@code concurrency} setting for this listener. May
 * be a property placeholder or SpEL expression that evaluates to a {@link Number}, in
 * which case {@link Number#intValue()} is used to obtain the value.
 * <p>SpEL {@code #{...}} and property placeholders {@code ${...}} are supported.
 * @return the concurrency.
 * @since 2.2
 */
String concurrency() default "";
See more in docs: https://docs.spring.io/spring-kafka/reference/html/#kafka-listener-annotation
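For example, under the question's assumption of a 10-partition topic, the single annotated method can be scaled with the concurrency attribute; a minimal sketch, so there is no need to duplicate the method:

@RetryableTopic(
        attempts = "3",
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        autoCreateTopics = "false")
@KafkaListener(topics = "myTopic", groupId = "myGroupId", concurrency = "10")
public void consume(@Payload(required = false) String message) {
    // with concurrency = "10", the container starts 10 consumer threads in the
    // same group, so each thread is assigned one of the 10 partitions
    processMessage(message);
}

The same effect can be applied to all listeners at once by calling factory.setConcurrency(10) on the ConcurrentKafkaListenerContainerFactory.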
I am consuming Kafka events using @KafkaHandler at the method level (@KafkaListener at the class level).
I have seen a lot of examples where an Acknowledgment argument is available, on which the acknowledge() method can be called to commit consumption of the event. However, I am not able to get the Acknowledgment object populated when I include it as an argument to my method. How do I commit manually when using a @KafkaHandler? Is it possible at all?
Code example:
@Service
@KafkaListener(topics = "mytopic", groupId = "mygroup")
public class TestListener {

    @KafkaHandler
    public void consumeEvent(MyEvent event, Acknowledgment ack) throws Exception {
        //... processing
        ack.acknowledge(); // ack is not available
    }
}
Using Spring Boot and spring-kafka.
You must configure the listener container with AckMode.MANUAL or AckMode.MANUAL_IMMEDIATE to get this functionality.
However, it's generally better to let the container take care of committing the offset with AckMode.RECORD or AckMode.BATCH (default).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets
EDIT
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.ack-mode=MANUAL
@SpringBootApplication
public class So68844554Application {

    public static void main(String[] args) {
        SpringApplication.run(So68844554Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so68844554").partitions(1).replicas(1).build();
    }
}

@Component
@KafkaListener(id = "so68844554", topics = "so68844554")
class Foo {

    @KafkaHandler
    void listen(String in, Acknowledgment ack) {
        System.out.println(in);
        ack.acknowledge();
    }
}
% kafka-consumer-groups --bootstrap-server localhost:9092 --describe --group so68844554
Consumer group 'so68844554' has no active members.
GROUP TOPIC PARTITION CURRENT-OFFSET LOG-END-OFFSET LAG CONSUMER-ID HOST CLIENT-ID
so68844554 so68844554 0 2 2 0 - - -
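If you configure the listener container factory yourself instead of using Boot's spring.kafka.listener.ack-mode property, the same setting can be applied programmatically. A minimal sketch, assuming an existing ConsumerFactory bean:

@Bean
public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {
    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // MANUAL makes the Acknowledgment argument available to @KafkaHandler methods;
    // offsets are committed when acknowledge() is called
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.MANUAL);
    return factory;
}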
I have two Kafka listeners like below:
#KafkaListener(topics = "foo1, foo2", groupId = foo.id, id = "foo")
public void fooTopics(#Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
//processing
}
#KafkaListener(topics = "Bar1, Bar2", groupId = bar.id, id = "bar")
public void barTopics(#Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
//processing
same application is running on two instances like inc1 and inc2. is there a way if i can assign foo listener to inc1 and bar listener to inc2. and if one instance is going down both the listener(foo and bar) assign to the running instance.
You can use the @KafkaListener property autoStartup, introduced in version 2.2.
When an instance dies, you can start the listener in the other instance like so:
@Autowired
private KafkaListenerEndpointRegistry registry;

...

@KafkaListener(topics = "foo1, foo2", groupId = "foo.id", id = "foo", autoStartup = "false")
public void fooTopics(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
    //processing
}

// Start-up condition
registry.getListenerContainer("foo").start();
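How the surviving instance learns that its peer is down is outside spring-kafka. A minimal sketch, assuming @EnableScheduling is active and a hypothetical isPeerInstanceDown() check that you would implement yourself (shared database, peer health endpoint, etc.):

@Component
public class ListenerFailover {

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @Scheduled(fixedDelay = 10000)
    public void takeOverIfPeerDown() {
        // isPeerInstanceDown() is a hypothetical health check you must supply
        if (isPeerInstanceDown()) {
            MessageListenerContainer container = registry.getListenerContainer("bar");
            if (container != null && !container.isRunning()) {
                container.start(); // take over the peer's listener
            }
        }
    }

    private boolean isPeerInstanceDown() {
        return false; // placeholder for your own detection logic
    }
}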
I need to read messages from topic1 completely and then read messages from topic2. I will be receiving messages on these topics once every day. I managed to stop reading messages from topic2 before reading all the messages in topic1, but this happens only once, when the server is started. Can someone help me with this scenario?
ListenerConfig code
@EnableKafka
@Configuration
public class ListenerConfig {

    @Value("${spring.kafka.bootstrap-servers}")
    private String bootstrapServers;

    @Bean
    public Map<String, Object> consumerConfigs() {
        Map<String, Object> props = new HashMap<>();
        props.put(ConsumerConfig.BOOTSTRAP_SERVERS_CONFIG, bootstrapServers);
        props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
        props.put(ConsumerConfig.GROUP_ID_CONFIG, "batch");
        props.put(ConsumerConfig.MAX_POLL_RECORDS_CONFIG, "5");
        return props;
    }

    @Bean
    public ConsumerFactory<String, String> consumerFactory() {
        return new DefaultKafkaConsumerFactory<>(consumerConfigs());
    }

    @Bean
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);
        return factory;
    }

    @Bean("kafkaListenerContainerTopic1Factory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerTopic1Factory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.getContainerProperties().setIdleEventInterval(60000L);
        factory.setBatchListener(true);
        return factory;
    }

    @Bean("kafkaListenerContainerTopic2Factory")
    public ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerTopic2Factory() {
        ConcurrentKafkaListenerContainerFactory<String, String> factory = new ConcurrentKafkaListenerContainerFactory<>();
        factory.setConsumerFactory(consumerFactory());
        factory.setBatchListener(true);
        return factory;
    }
}
Listener code
@Service
public class Listener {

    private static final Logger LOG = LoggerFactory.getLogger(Listener.class);

    @Autowired
    private KafkaListenerEndpointRegistry registry;

    @KafkaListener(id = "first-listener", topics = "topic1", containerFactory = "kafkaListenerContainerTopic1Factory")
    public void receive(@Payload List<String> messages,
                        @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
                        @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
        for (int i = 0; i < messages.size(); i++) {
            LOG.info("received first='{}' with partition-offset='{}'",
                    messages.get(i), partitions.get(i) + "-" + offsets.get(i));
        }
    }

    @KafkaListener(id = "second-listener", topics = "topic2", containerFactory = "kafkaListenerContainerTopic2Factory", autoStartup = "false")
    public void receiveRel(@Payload List<String> messages,
                           @Header(KafkaHeaders.RECEIVED_PARTITION_ID) List<Integer> partitions,
                           @Header(KafkaHeaders.OFFSET) List<Long> offsets) {
        for (int i = 0; i < messages.size(); i++) {
            LOG.info("received second='{}' with partition-offset='{}'",
                    messages.get(i), partitions.get(i) + "-" + offsets.get(i));
        }
    }

    @EventListener
    public void eventHandler(ListenerContainerIdleEvent event) {
        LOG.info("Inside event");
        this.registry.getListenerContainer("second-listener").start();
    }
}
Kindly help me resolve this, as this cycle should happen every day: reading topic1 messages completely and then reading messages from topic2.
You are already using an idle event listener to start the second listener; it should also stop the first listener.
When the second listener goes idle, stop it.
You should check which container the event is for to decide which container to stop and/or start.
Then, using a TaskScheduler, schedule a start() of the first listener at the next time you want it to start.
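A minimal sketch of such an event handler, assuming the listener ids first-listener and second-listener from the question and a TaskScheduler bean in the context (the 24-hour restart delay is an assumption; pick whatever start time you need):

@Autowired
private KafkaListenerEndpointRegistry registry;

@Autowired
private TaskScheduler scheduler;

@EventListener
public void onIdle(ListenerContainerIdleEvent event) {
    if ("first-listener".equals(event.getListenerId())) {
        // topic1 is drained: stop its listener and start topic2's
        registry.getListenerContainer("first-listener").stop();
        registry.getListenerContainer("second-listener").start();
    }
    else if ("second-listener".equals(event.getListenerId())) {
        // topic2 is drained: stop it and schedule tomorrow's topic1 run
        registry.getListenerContainer("second-listener").stop();
        scheduler.schedule(() -> registry.getListenerContainer("first-listener").start(),
                Instant.now().plus(Duration.ofHours(24)));
    }
}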
A topic in Kafka is an abstraction where a stream of records is published. Streams are naturally unbounded, so they have a start but no defined end. For your case, you first need to clearly define what the end of your topic1 and your topic2 is, so that you can stop/resume your consumers when needed. Maybe you know how many messages you will process for each topic, so you can use position or committed to stop one consumer and resume the other one at that moment. Or, if you are using a streaming framework, such frameworks usually have a session window where the framework groups elements by sessions of activity. You can also prefer to put that logic on the application side so that you don't need to stop/start any consumer threads.
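For example, one way to detect that a consumer has caught up (one possible definition of "end") is to compare its position on each assigned partition with the current end offsets. A minimal sketch on the plain consumer API; when to call it and how to react are application decisions:

// true when the consumer has caught up on all of its assigned partitions
boolean reachedEnd(Consumer<String, String> consumer) {
    Set<TopicPartition> assignment = consumer.assignment();
    Map<TopicPartition, Long> endOffsets = consumer.endOffsets(assignment);
    return assignment.stream()
            .allMatch(tp -> consumer.position(tp) >= endOffsets.get(tp));
}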
I know I can find out which partition a record comes from, but I wonder: is there any way to dynamically get which partitions are assigned to the consumers at a specific moment? Maybe I need to implement some listener to detect and follow partition assignment info?
I am using spring-kafka 1.3.2 with ConcurrentKafkaListenerContainerFactory and @KafkaListener.
Yes, you can do:
@Bean
public ConsumerAwareRebalanceListener rebalanceListener() {
    return new ConsumerAwareRebalanceListener() {

        @Override
        public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
            // the partitions assigned to this consumer are available here
        }
    };
}
And then add it, for example, to the ConcurrentKafkaListenerContainerFactory:
@Bean
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<Object, Object> factory = new ConcurrentKafkaListenerContainerFactory<>();
    ContainerProperties props = factory.getContainerProperties();
    props.setConsumerRebalanceListener(rebalanceListener());
    return factory;
}
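Since the rebalance listener is invoked on every rebalance, the assignments it reports stay current as consumers join or leave the group.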
I did it in a different way, by using the KafkaListenerEndpointRegistry:
for (MessageListenerContainer messageListenerContainer : kafkaListenerEndpointRegistry.getListenerContainers()) {
    List<KafkaMessageListenerContainer> containers = ((ConcurrentMessageListenerContainer) messageListenerContainer).getContainers();
    List<TopicPartition> topicPartitions = containers.stream()
            .flatMap(kafkaMessageListenerContainer -> kafkaMessageListenerContainer.getAssignedPartitions().stream())
            .collect(Collectors.toList());
    partitions.addAll(topicPartitions.stream().map(TopicPartition::partition).collect(Collectors.toList()));
}