Re-consuming messages from a Kafka log-compacted topic - apache-kafka

I have a Spring application with a Kafka consumer using a @KafkaListener annotation. The topic being consumed is log compacted, and we might face a scenario where we must re-consume the topic's messages. What's the best way to achieve this programmatically? We don't control the Kafka topic configuration.

@KafkaListener(...)
public void listen(String in, @Header(KafkaHeaders.CONSUMER) Consumer<?, ?> consumer) {
    System.out.println(in);
    if (this.resetNeeded) {
        consumer.seekToBeginning(consumer.assignment());
        this.resetNeeded = false;
    }
}
If you want to reset when the listener is idle (no records), you can enable idle events and perform the seeks by listening for a ListenerContainerIdleEvent in an ApplicationListener or @EventListener method.
The event has a reference to the consumer.
EDIT
@SpringBootApplication
public class So58769796Application {

    public static void main(String[] args) {
        SpringApplication.run(So58769796Application.class, args);
    }

    @KafkaListener(id = "so58769796", topics = "so58769796")
    public void listen1(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println("One:" + key + ":" + value);
    }

    @KafkaListener(id = "so58769796a", topics = "so58769796")
    public void listen2(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println("Two:" + key + ":" + value);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so58769796")
                .compact()
                .partitions(1)
                .replicas(1)
                .build();
    }

    boolean reset;

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template) {
        return args -> {
            template.send("so58769796", "foo", "bar");
            System.out.println("Hit enter to rewind");
            System.in.read();
            this.reset = true;
        };
    }

    @EventListener
    public void listen(ListenerContainerIdleEvent event) {
        System.out.println(event);
        if (this.reset && event.getListenerId().startsWith("so58769796-")) {
            event.getConsumer().seekToBeginning(event.getConsumer().assignment());
            this.reset = false; // clear the flag so we only rewind once per request
        }
    }

}
and
spring.kafka.listener.idle-event-interval=5000
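Outside Boot, the same idle interval can be set on the container factory instead; a minimal sketch, assuming the usual ConcurrentKafkaListenerContainerFactory bean setup (names here are illustrative):

@Bean
ConcurrentKafkaListenerContainerFactory<String, String> kafkaListenerContainerFactory(
        ConsumerFactory<String, String> consumerFactory) {

    ConcurrentKafkaListenerContainerFactory<String, String> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory);
    // publish ListenerContainerIdleEvents after 5 seconds with no records
    factory.getContainerProperties().setIdleEventInterval(5000L);
    return factory;
}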
EDIT2
Here's another technique - in this case we rewind each time the app starts (and on demand)...
@SpringBootApplication
public class So58769796Application implements ConsumerSeekAware {

    public static void main(String[] args) {
        SpringApplication.run(So58769796Application.class, args);
    }

    @KafkaListener(id = "so58769796", topics = "so58769796")
    public void listen(String value, @Header(KafkaHeaders.RECEIVED_MESSAGE_KEY) String key) {
        System.out.println(key + ":" + value);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so58769796")
                .compact()
                .partitions(1)
                .replicas(1)
                .build();
    }

    @Bean
    public ApplicationRunner runner(KafkaTemplate<String, String> template,
            KafkaListenerEndpointRegistry registry) {

        return args -> {
            template.send("so58769796", "foo", "bar");
            System.out.println("Hit enter to rewind");
            System.in.read();
            registry.getListenerContainer("so58769796").stop();
            registry.getListenerContainer("so58769796").start();
        };
    }

    @Override
    public void onPartitionsAssigned(Map<TopicPartition, Long> assignments, ConsumerSeekCallback callback) {
        assignments.keySet().forEach(tp -> callback.seekToBeginning(tp.topic(), tp.partition()));
    }

}

Related

Kafka headers with type of string

Kafka header values are byte arrays, but for some reason I need a String value for one of the headers. Is it possible to manage this somehow instead of handling it in the listener?
The framework takes care of the conversion automatically:
@SpringBootApplication
public class So71941853Application {

    public static void main(String[] args) {
        SpringApplication.run(So71941853Application.class, args);
    }

    @KafkaListener(id = "so71941853", topics = "so71941853")
    void listen(String in, @Header("hdr") String foo) {
        System.out.println(in + " - " + foo);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so71941853").partitions(1).replicas(1).build();
    }

    @Bean
    ApplicationRunner runner(KafkaTemplate<String, String> template) {
        template.setDefaultTopic("so71941853");
        return args -> {
            template.send(new GenericMessage<>("foo", Collections.singletonMap("hdr", "bar".getBytes())));
        };
    }

}
foo - bar

Dynamic Merge of Infinite Reactor streams

Use case:
There is a module that listens for events in synchronous mode. In the same module, the events are converted to a Flux with an EmitterProcessor and exposed as an infinite stream of events. An upstream module can then subscribe to these event streams. The problem is how to dynamically merge these streams into one and subscribe to them as a single stream. A simple example: say there are N sensors; we can register these sensors dynamically and listen to their measurements as a single merged stream of data. Here is the code sample written to mock this behavior.
Create a callback and start listening for events
public interface CallBack {

    void callBack(int name);

    void done();

}

@Slf4j
@RequiredArgsConstructor
public class CallBackService {

    private CallBack callBack;

    private final Function<Integer, Integer> func;

    public void register(CallBack intf) {
        this.callBack = intf;
    }

    public void startServer() {
        log.info("Callback started..");
        IntStream.range(0, 10).forEach(i -> {
            callBack.callBack(func.apply(i));
            try {
                Thread.sleep(3000);
            }
            catch (InterruptedException e) {
                e.printStackTrace();
            }
        });
        log.info("Callback finished..");
        callBack.done();
    }

}
Convert the events to streams using the emitter processor
@Slf4j
public class EmitterService implements CallBack {

    private EmitterProcessor<Integer> emitterProcessor;

    public EmitterService() {
        emitterProcessor = EmitterProcessor.create();
    }

    public EmitterProcessor<Integer> getEmitter() {
        return emitterProcessor;
    }

    @Override
    public void callBack(int name) {
        log.info("callback {} invoked", name);
        //fluxSink.next(name);
        emitterProcessor.onNext(name);
    }

    public void done() {
        //fluxSink.complete();
        emitterProcessor.onComplete();
    }

}

public class WrapperService {

    EmitterService service1;

    ExecutorService service2;

    public Flux<Integer> startService(Function<Integer, Integer> func) {
        CallBackService service = new CallBackService(func);
        service1 = new EmitterService();
        service.register(service1);
        service2 = Executors.newSingleThreadExecutor();
        service2.submit(service::startServer);
        return service1.getEmitter();
    }

    public void shutDown() {
        service1.getEmitter().onComplete();
        service2.shutdown();
    }

}
Subscribe to the events
@Slf4j
public class MainService {

    public static void main(String[] args) throws InterruptedException {
        TopicProcessor<Integer> stealer = TopicProcessor.<Integer>builder().share(true).build();
        CountDownLatch latch = new CountDownLatch(20);
        WrapperService n1 = new WrapperService();
        WrapperService n2 = new WrapperService();
        // n1.startService(i -> i).mergeWith(n2.startService(i -> i * 2)).subscribe(stealer);
        n1.startService(i -> i).subscribe(stealer);
        n2.startService(i -> i * 2).subscribe(stealer);
        stealer.subscribeOn(Schedulers.boundedElastic())
                .subscribe(x -> {
                    log.info("Stole=>{}", x);
                    latch.countDown();
                    log.info("Latch count=>{}", latch.getCount());
                });
        latch.await();
        n1.shutDown();
        n2.shutDown();
        stealer.shutdown();
    }

}
I tried to use TopicProcessor with no success. In the above code the subscription happens for the first source; for the second source there is no subscription. However, if I use n1.startService(i->i).mergeWith(n2.startService(i->i*2)).subscribe(stealer); the subscription works, but there is no dynamic behavior in that case: the subscriber has to be changed every time a new source is added.
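For reference, a minimal sketch of one possible approach (my assumption, not from the original post; it uses the Sinks API from Reactor 3.4+, which replaced the now-deprecated EmitterProcessor and TopicProcessor): emit each new stream into a sink of streams and flatten with flatMap, so sources can be registered after the subscription is made.

import java.time.Duration;
import reactor.core.publisher.Flux;
import reactor.core.publisher.Sinks;

public class DynamicMergeSketch {

    public static void main(String[] args) throws InterruptedException {
        // A sink that carries whole streams; flatMap subscribes to each
        // inner Flux as it arrives and merges the elements into one stream.
        Sinks.Many<Flux<Integer>> sources = Sinks.many().multicast().onBackpressureBuffer();
        Flux<Integer> merged = sources.asFlux().flatMap(flux -> flux);

        merged.subscribe(x -> System.out.println("Stole=>" + x));

        // Streams can be registered at any time, even after subscribing.
        sources.tryEmitNext(Flux.interval(Duration.ofMillis(200)).map(Long::intValue).take(5));
        Thread.sleep(500);
        sources.tryEmitNext(Flux.interval(Duration.ofMillis(200)).map(i -> i.intValue() * 2).take(5));

        Thread.sleep(2000); // let the demo finish
    }

}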

spring-kafka Request Reply: Different Types for Request and Reply

The documentation for ReplyingKafkaTemplate, which provides Request-Reply support (introduced in Spring-Kafka 2.1.3), suggests that different types may be used for the Request and Reply:
ReplyingKafkaTemplate<K, V, R>
where the parameterised type K designates the message Key, V the Value (i.e. the Request), and R the Reply.
So far so good. But the corresponding supporting classes for implementing the server side of Request-Reply don't seem to support different types for V and R. The documentation suggests using a KafkaListener with an added @SendTo annotation, which behind the scenes uses a configured replyTemplate on the MessageListenerContainer. But AbstractKafkaListenerEndpoint only supports a single type for the listener as well as the replyTemplate:
public abstract class AbstractKafkaListenerEndpoint<K, V>
        implements KafkaListenerEndpoint, BeanFactoryAware, InitializingBean {
    ...
    /**
     * Set the {@link KafkaTemplate} to use to send replies.
     * @param replyTemplate the template.
     * @since 2.0
     */
    public void setReplyTemplate(KafkaTemplate<K, V> replyTemplate) {
        this.replyTemplate = replyTemplate;
    }
    ...
}
hence V and R need to be the same type.
The example used in the documentation indeed uses String for both Request and Reply.
Am I missing something, or is this a design flaw in the Spring-Kafka Request-Reply support that should be reported and corrected?
This is fixed in the 2.2 release.
For earlier versions, simply inject a raw KafkaTemplate (with no generics).
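For example, a minimal sketch of that pre-2.2 workaround (the bean wiring here is illustrative): the raw type deliberately drops the generics so the same template can send a reply type that differs from the request type.

@Bean
@SuppressWarnings({ "rawtypes", "unchecked" })
public KafkaTemplate replyTemplate(ProducerFactory pf,
        ConcurrentKafkaListenerContainerFactory factory) {

    // raw KafkaTemplate: not tied to V, so it can send R replies
    KafkaTemplate template = new KafkaTemplate(pf);
    factory.setReplyTemplate(template);
    return template;
}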
EDIT
@SpringBootApplication
public class So53151961Application {

    public static void main(String[] args) {
        SpringApplication.run(So53151961Application.class, args);
    }

    @KafkaListener(id = "so53151961", topics = "so53151961")
    @SendTo
    public Bar handle(Foo foo) {
        System.out.println(foo);
        return new Bar(foo.getValue().toUpperCase());
    }

    @Bean
    public ReplyingKafkaTemplate<String, Foo, Bar> replyingTemplate(ProducerFactory<String, Foo> pf,
            ConcurrentKafkaListenerContainerFactory<String, Bar> factory) {

        ConcurrentMessageListenerContainer<String, Bar> replyContainer =
                factory.createContainer("so53151961-replyTopic");
        replyContainer.getContainerProperties().setGroupId("so53151961.reply");
        return new ReplyingKafkaTemplate<>(pf, replyContainer);
    }

    @Bean
    public KafkaTemplate<String, Bar> replyTemplate(ProducerFactory<String, Bar> pf,
            ConcurrentKafkaListenerContainerFactory<String, Bar> factory) {

        KafkaTemplate<String, Bar> kafkaTemplate = new KafkaTemplate<>(pf);
        factory.setReplyTemplate(kafkaTemplate);
        return kafkaTemplate;
    }

    @Bean
    public ApplicationRunner runner(ReplyingKafkaTemplate<String, Foo, Bar> template) {
        return args -> {
            ProducerRecord<String, Foo> record = new ProducerRecord<>("so53151961", null, "key", new Foo("foo"));
            RequestReplyFuture<String, Foo, Bar> future = template.sendAndReceive(record);
            System.out.println(future.get(10, TimeUnit.SECONDS).value());
        };
    }

    @Bean
    public NewTopic topic() {
        return new NewTopic("so53151961", 1, (short) 1);
    }

    @Bean
    public NewTopic reply() {
        return new NewTopic("so53151961-replyTopic", 1, (short) 1);
    }

    public static class Foo {

        public String value;

        public Foo() {
        }

        public Foo(String value) {
            this.value = value;
        }

        public String getValue() {
            return this.value;
        }

        public void setValue(String value) {
            this.value = value;
        }

        @Override
        public String toString() {
            return "Foo [value=" + this.value + "]";
        }

    }

    public static class Bar {

        public String value;

        public Bar() {
        }

        public Bar(String value) {
            this.value = value;
        }

        public String getValue() {
            return this.value;
        }

        public void setValue(String value) {
            this.value = value;
        }

        @Override
        public String toString() {
            return "Bar [value=" + this.value + "]";
        }

    }

}
spring.kafka.producer.value-serializer=org.springframework.kafka.support.serializer.JsonSerializer
spring.kafka.consumer.value-deserializer=org.springframework.kafka.support.serializer.JsonDeserializer
spring.kafka.consumer.enable-auto-commit=false
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.properties.spring.json.trusted.packages=com.example
result
Foo [value=foo]
Bar [value=FOO]

Spring multiple imapAdapter

I am a novice in Spring, and I don't like code duplication.
I wrote one ImapAdapter that works fine:
@Component
public class GeneralImapAdapter {

    private Logger logger = LoggerFactory.getLogger(getClass());

    @Autowired
    private EmailReceiverService emailReceiverService;

    @Bean
    @InboundChannelAdapter(value = "emailChannel", poller = @Poller(fixedDelay = "10000", taskExecutor = "asyncTaskExecutor"))
    public MessageSource<javax.mail.Message> mailMessageSource(MailReceiver imapMailReceiver) {
        return new MailReceivingMessageSource(imapMailReceiver);
    }

    @Bean
    @Value("imaps://<login>:<pass>@<url>:993/inbox")
    public MailReceiver imapMailReceiver(String imapUrl) {
        ImapMailReceiver imapMailReceiver = new ImapMailReceiver(imapUrl);
        imapMailReceiver.setShouldMarkMessagesAsRead(true);
        imapMailReceiver.setShouldDeleteMessages(false);
        // other setters here
        return imapMailReceiver;
    }

    @ServiceActivator(inputChannel = "emailChannel", poller = @Poller(fixedDelay = "10000", taskExecutor = "asyncTaskExecutor"))
    public void emailMessageSource(javax.mail.Message message) {
        emailReceiverService.receive(message);
    }

}
But I want about 20 adapters like that; the only difference is the imapUrl.
How can I do that without code duplication?
Use multiple application contexts, configured with properties.
This sample is an example; it uses XML for its configuration, but the same techniques apply with Java configuration.
If you need them to feed into a common emailReceiverService; make the individual adapter contexts child contexts; see the sample readme for pointers about how to do that.
EDIT:
Here's an example, with the service (and channel) in a shared parent context...
@Configuration
@EnableIntegration
public class MultiImapAdapter {

    public static void main(String[] args) throws Exception {
        AnnotationConfigApplicationContext parent = new AnnotationConfigApplicationContext(MultiImapAdapter.class);
        parent.setId("parent");
        String[] urls = { "imap://foo", "imap://bar" };
        List<ConfigurableApplicationContext> children = new ArrayList<ConfigurableApplicationContext>();
        int n = 0;
        for (String url : urls) {
            AnnotationConfigApplicationContext child = new AnnotationConfigApplicationContext();
            child.setId("child" + ++n);
            children.add(child);
            child.setParent(parent);
            child.register(GeneralImapAdapter.class);
            StandardEnvironment env = new StandardEnvironment();
            Properties props = new Properties();
            // populate properties for this adapter
            props.setProperty("imap.url", url);
            PropertiesPropertySource pps = new PropertiesPropertySource("imapprops", props);
            env.getPropertySources().addLast(pps);
            child.setEnvironment(env);
            child.refresh();
        }
        System.out.println("Hit enter to terminate");
        System.in.read();
        for (ConfigurableApplicationContext child : children) {
            child.close();
        }
        parent.close();
    }

    @Bean
    public MessageChannel emailChannel() {
        return new DirectChannel();
    }

    @Bean
    public EmailReceiverService emailReceiverService() {
        return new EmailReceiverService();
    }

}
and
@Configuration
@EnableIntegration
public class GeneralImapAdapter {

    @Bean
    public static PropertySourcesPlaceholderConfigurer pspc() {
        return new PropertySourcesPlaceholderConfigurer();
    }

    @Bean
    @InboundChannelAdapter(value = "emailChannel", poller = @Poller(fixedDelay = "10000"))
    public MessageSource<javax.mail.Message> mailMessageSource(MailReceiver imapMailReceiver) {
        return new MailReceivingMessageSource(imapMailReceiver);
    }

    @Bean
    @Value("${imap.url}")
    public MailReceiver imapMailReceiver(String imapUrl) {
        // ImapMailReceiver imapMailReceiver = new ImapMailReceiver(imapUrl);
        // imapMailReceiver.setShouldMarkMessagesAsRead(true);
        // imapMailReceiver.setShouldDeleteMessages(false);
        // // other setters here
        // return imapMailReceiver;

        // mocked receiver so the example runs without a real IMAP server
        MailReceiver receiver = mock(MailReceiver.class);
        Message message = mock(Message.class);
        when(message.toString()).thenReturn("Message from " + imapUrl);
        Message[] messages = new Message[] { message };
        try {
            when(receiver.receive()).thenReturn(messages);
        }
        catch (MessagingException e) {
            e.printStackTrace();
        }
        return receiver;
    }

}
and
@MessageEndpoint
public class EmailReceiverService {

    @ServiceActivator(inputChannel = "emailChannel")
    public void handleMessage(javax.mail.Message message) {
        System.out.println(message);
    }

}
Hope that helps.
Notice that you don't need a poller on the service activator - use a DirectChannel and the service will be invoked on the poller executor thread - no need for another async handoff.

how to set queue/message durability to false in Spring AMQP using annotations?

I wrote a sample Spring AMQP producer that sends messages to a RabbitMQ server, and I consume those messages with a MessageListener, also using Spring AMQP. Here, I want to set the queue and message durability to false. Could anyone help me with how to set the "durable" flag to false using annotations?
Here is the sample code
@Configuration
public class ProducerConfiguration {

    protected final String queueName = "hello.queue";

    @Bean
    public RabbitTemplate rabbitTemplate() {
        RabbitTemplate template = new RabbitTemplate(connectionFactory());
        template.setRoutingKey(this.queueName);
        template.setQueue(this.queueName);
        return template;
    }

    @Bean
    public ConnectionFactory connectionFactory() {
        CachingConnectionFactory connectionFactory = new CachingConnectionFactory("localhost");
        connectionFactory.setUsername("guest");
        connectionFactory.setPassword("guest");
        return connectionFactory;
    }

}

public class Producer {

    public static void main(String[] args) throws Exception {
        new Producer().send();
    }

    public void send() {
        ApplicationContext context = new AnnotationConfigApplicationContext(ProducerConfiguration.class);
        RabbitTemplate rabbitTemplate = context.getBean(RabbitTemplate.class);
        for (int i = 1; i <= 10; i++) {
            rabbitTemplate.convertAndSend(i);
        }
    }

}
Thanks in advance.
@Configuration
public class Config {

    @Bean
    public ConnectionFactory connectionFactory() {
        return new CachingConnectionFactory();
    }

    @Bean
    public Queue foo() {
        // the second constructor argument is 'durable' - false here
        return new Queue("foo", false);
    }

    @Bean
    public RabbitAdmin rabbitAdmin() {
        return new RabbitAdmin(connectionFactory());
    }

}
The RabbitAdmin will declare the queue the first time the connection is opened. Note that you can't change a queue from durable to non-durable; you have to delete it first.
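For the message half of the question, a minimal sketch (assuming Spring AMQP's MessagePostProcessor hook, applied to the Producer loop above): mark each outgoing message NON_PERSISTENT so the broker does not write it to disk.

rabbitTemplate.convertAndSend(i, message -> {
    // mark the message itself as non-durable (not persisted by the broker)
    message.getMessageProperties().setDeliveryMode(MessageDeliveryMode.NON_PERSISTENT);
    return message;
});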