How does Spring Cloud Stream reactive processing work? - spring-cloud

How do I achieve reactive message processing in Spring Cloud Stream? I read about Spring Cloud Function and that I should use it for reactive processing, so I created a sample one:
@Bean
public Consumer<Flux<Message<Loan>>> loanProcess() {
    return loanMessages ->
        loanMessages
            .flatMap(loanMessage -> Mono.fromCallable(() -> {
                if (loanMessage.getPayload().getStatus() == null) {
                    log.error("Empty status");
                    throw new RuntimeException("Loan status is empty");
                }
                return "Good";
            }))
            .doOnError(throwable -> log.error("Exception occurred: {}", throwable))
            .subscribe(status -> log.info("Message processed correctly: {}", status));
}
Afterwards I started thinking about what the difference is between the above function and a class with @StreamListener that uses Reactor types:
@StreamListener(Sink.INPUT)
public void loanReceived(Message<Loan> message) {
    Mono.just(message)
        .flatMap(loanMessage -> Mono.fromCallable(() -> {
            if (loanMessage.getPayload().getStatus() == null) {
                log.error("Empty status");
                throw new RuntimeException("Loan status is empty");
            }
            log.info("Correct message");
            return "Correct message received";
        }))
        .doOnError(throwable -> log.error("Exception occurred: {}", throwable.getClass()))
        .subscribe(status -> log.info("Message processed correctly: {}", status));
}
Additionally, in Spring WebFlux I understand that there are a few Netty threads that handle request processing (running in an event loop). However, I cannot find documentation on how the threading model works in Spring Cloud Stream.
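(Not an answer about the binder's internal threading, just an illustration: scheduling inside the reactive pipeline itself can be controlled explicitly with Reactor operators. A minimal sketch; Schedulers.boundedElastic() is only an assumed choice for potentially blocking work.)
@Bean
public Consumer<Flux<Message<Loan>>> loanProcess() {
    return loanMessages -> loanMessages
            // hand downstream work to a Reactor scheduler instead of the binder's consumer thread;
            // boundedElastic() is an assumption suitable for blocking calls
            .publishOn(Schedulers.boundedElastic())
            .flatMap(loanMessage -> Mono.fromCallable(() -> {
                if (loanMessage.getPayload().getStatus() == null) {
                    throw new RuntimeException("Loan status is empty");
                }
                return "Good";
            }))
            .doOnError(throwable -> log.error("Exception occurred: {}", throwable))
            .subscribe(status -> log.info("Message processed correctly: {}", status));
}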

Related

IntegrationFlow for Kafka Message error while configureListenerContainer

I am trying to use an IntegrationFlow for Kafka to pass messages received from Kafka to a channel.
Below is my working code:
@Bean
public MessageChannel fromKafka() {
    return new DirectChannel();
}

@Bean
public IntegrationFlow topic1ListenerFromKafkaFlow1() throws Exception {
    /* return IntegrationFlows
            .from(Kafka.messageDrivenChannelAdapter(consumerFactory(),
                    KafkaMessageDrivenChannelAdapter.ListenerMode.record, kafkaTopic)
                    .configureListenerContainer(c -> c.ackMode(AbstractMessageListenerContainer.AckMode.MANUAL)
                            .id("topic1ListenerContainer"))
                    .recoveryCallback(new ErrorMessageSendingRecoverer(messageFromKafka(),
                            new RawRecordHeaderErrorMessageStrategy()))
                    .retryTemplate(new RetryTemplate())
                    .filterInRetry(true))
            .filter(Message.class, m ->
                    m.getHeaders().get(KafkaHeaders.RECEIVED_MESSAGE_KEY, Integer.class) < 101,
                    f -> f.throwExceptionOnRejection(true))
            .<String, String>transform(String::toUpperCase)
            .channel(c -> c.queue("listeningFromKafkaResults1"))
            .get(); */
    return IntegrationFlows
            .from(Kafka.messageDrivenChannelAdapter(listener(), KafkaMessageDrivenChannelAdapter.ListenerMode.record))
            .channel("fromKafka")
            .get();
}

@Bean("listenerkafka")
public KafkaMessageListenerContainer<String, String> listener() throws Exception {
    ContainerProperties properties = new ContainerProperties(kafkaTopic1);
    properties.setGroupId("kafka-test");
    return new KafkaMessageListenerContainer<>(consumerFactory, properties);
}

@ServiceActivator(inputChannel = "fromKafka", outputChannel = "somechannel")
public Message<CreatRequest> fromKafka(Message<?> msg) throws JsonProcessingException {
    CreatRequest creatRequest = objectMapper.readValue(msg.getPayload().toString(), CreatRequest.class);
    Message<CreatRequest> message = MessageBuilder.withPayload(creatRequest).build();
    logger.info("Inside fromKafka " + message);
    return message;
}
The issue I am facing is that the commented-out code inside topic1ListenerFromKafkaFlow1 doesn't work.
Here I am not able to use c.ackMode(AbstractMessageListenerContainer.AckMode.MANUAL), as it produces a compile-time error: ackMode not recognised.
Can you please point out where I am going wrong?
Also, I need to run this flow in another thread and not in the main thread.
Use the Kafka message-driven channel adapter instead:
https://docs.spring.io/spring-integration/docs/current/reference/html/kafka.html#kafka-inbound
However, with two adapters on the same channel the requests will be round-robin distributed between them. If you want both to receive the message, you need a PublishSubscribeChannel.
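As an aside on the compile error itself (a hedged sketch, not part of the answer above): in recent spring-kafka versions the AckMode enum moved from AbstractMessageListenerContainer to ContainerProperties, so the container spec would be configured along these lines, version permitting:
// Sketch only: assumes spring-kafka 2.3+, where AckMode lives on ContainerProperties.
@Bean
public IntegrationFlow topic1ListenerFromKafkaFlow1() {
    return IntegrationFlows
            .from(Kafka.messageDrivenChannelAdapter(consumerFactory(),
                    KafkaMessageDrivenChannelAdapter.ListenerMode.record, kafkaTopic)
                    .configureListenerContainer(c -> c
                            .ackMode(ContainerProperties.AckMode.MANUAL)   // was AbstractMessageListenerContainer.AckMode
                            .id("topic1ListenerContainer")))
            .channel("fromKafka")
            .get();
}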

Can't handle bad request using doOnError WebFlux

I want to send a DTO object to the server. The server has a @Valid annotation, and when it receives an invalid DTO it should send back the validation errors and something like HttpStatus.BAD_REQUEST, but when HttpStatus.BAD_REQUEST is returned, doOnError just ignores it.
POST request from the client:
BookDTO bookDTO = BookDTO
        .builder()
        .author(authorTf.getText())
        .title(titleTf.getText())
        .publishDate(LocalDate.parse(publishDateDp.getValue().toString()))
        .owner(userAuthRepository.getUser().getLogin())
        .fileData(file.readAllBytes())
        .build();

webClient.post()
        .uri(bookAdd)
        .contentType(MediaType.APPLICATION_JSON)
        .bodyValue(bookDTO)
        .retrieve()
        .bodyToMono(Void.class)
        .doOnError(exception -> log.error("Error on server - [{}]", exception.getMessage()))
        .onErrorResume(WebClientResponseException.class, throwable -> {
            if (throwable.getStatusCode() == HttpStatus.BAD_REQUEST) {
                // My log doesn't contain this error, but the server still has errors from bindingResult
                log.error("BAD_REQUEST!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!");
                return Mono.empty();
            }
            return Mono.error(throwable);
        })
        .block();
Server part:
@PostMapping(value = "/add", consumes = {MediaType.APPLICATION_JSON_VALUE})
public HttpStatus savingBook(@RequestBody @Valid BookDTO bookDTO, BindingResult bindingResult) {
    List<FieldError> errors = bindingResult.getFieldErrors();
    if (bindingResult.hasErrors()) {
        for (FieldError error : errors) {
            log.info("Client post uncorrected data [{}]", error.getDefaultMessage());
        }
        return HttpStatus.BAD_REQUEST;
    } else {
        libraryService.addingBookToDB(bookDTO);
    }
    return null;
}
doOnError is a so-called side-effect operator that can be used for instrumentation (e.g. logging the error) before the onError signal is propagated downstream.
To handle errors you could use onErrorResume. For example, the following code handles the WebClientResponseException and returns Mono.empty() instead.
...
.retrieve()
.doOnError(ex -> log.error("Error on server: {}", ex.getMessage()))
.onErrorResume(WebClientResponseException.class, ex -> {
    if (ex.getStatusCode() == HttpStatus.BAD_REQUEST) {
        return Mono.empty();
    }
    return Mono.error(ex);
})
...
As an alternative, as @Toerktumlare mentioned in his comment, if you want to handle the HTTP status you could use the onStatus method of the WebClient:
...
.retrieve()
.onStatus(HttpStatus.BAD_REQUEST::equals, res -> Mono.empty())
...
Update
While working with block() it's important to understand how reactive signals are transformed:
onNext(T) -> T in case of Mono and List<T> for Flux
onError -> exception
onComplete -> null, in case flow completes without onNext
Here is a full example using WireMock for tests
class WebClientErrorHandlingTest {

    private WireMockServer wireMockServer;

    @BeforeEach
    void init() {
        wireMockServer = new WireMockServer(wireMockConfig().dynamicPort());
        wireMockServer.start();
        WireMock.configureFor(wireMockServer.port());
    }

    @Test
    void test() {
        stubFor(post("/test")
                .willReturn(aResponse()
                        .withHeader(HttpHeaders.CONTENT_TYPE, MediaType.APPLICATION_JSON_VALUE)
                        .withStatus(400)
                )
        );

        WebClient webClient = WebClient.create("http://localhost:" + wireMockServer.port());

        Mono<Void> request = webClient.post()
                .uri("/test")
                .retrieve()
                .bodyToMono(Void.class)
                .doOnError(e -> log.error("Error on server - [{}]", e.getMessage()))
                .onErrorResume(WebClientResponseException.class, e -> {
                    if (e.getStatusCode() == HttpStatus.BAD_REQUEST) {
                        log.info("Ignoring error: {}", e.getMessage());
                        return Mono.empty();
                    }
                    return Mono.error(e);
                });

        Void response = request.block();
        assertNull(response);
    }
}
The response is null because we had just a complete signal (Mono.empty()) that was transformed to null by applying block().

Kafka reactor - How to disable KAFKA consumer being autostarted?

Below is my KAFKA consumer
#Bean("kafkaConfluentInboundReceiver")
#ConditionalOnProperty(value = "com.demo.kafka.core.inbound.confluent.topic-name",
matchIfMissing = false)
public KafkaReceiver<String, Object> kafkaInboundReceiver() {
ReceiverOptions<String, Object> receiverOptions = ReceiverOptions.create(inboundConsumerConfigs());
receiverOptions.schedulerSupplier(() -> Schedulers
.fromExecutorService(applicationContext.getBean("inboundKafkaExecutorService", ExecutorService.class)));
receiverOptions.maxCommitAttempts(kafkaProperties.getKafka().getCore().getMaxCommitAttempts());
return KafkaReceiver.create(receiverOptions.addAssignListener(Collection::iterator)
.subscription(Collections.singleton(
kafkaProperties.getKafka()
.getCore().getInbound().getConfluent()
.getTopicName()))
.commitInterval(Duration.ZERO).commitBatchSize(0));
}
My KAFKA consumer is getting started automatically. However, I want to disable the KAFKA consumer from being autostarted.
I got to know that in Spring Kafka we can do something like this:
factory.setAutoStartup(start);
However, I am not sure how to achieve (control) the auto start/stop behavior in Kafka reactor. I want to have something like below.
Introducing a property to handle the auto start/stop behavior:
@Value("${consumer.autostart:true}")
private boolean start;
Using the above property, I should be able to set the KAFKA auto-start flag in Kafka reactor, something like this:
return KafkaReceiver.create(receiverOptions.addAssignListener(Collection::iterator)
        .subscription(Collections.singleton(
                kafkaProperties.getKafka()
                        .getCore().getInbound().getConfluent()
                        .getTopicName()))
        .commitInterval(Duration.ZERO).commitBatchSize(0)).setAutoStart(start);
Note: .setAutoStart(start);
Is this doable in Kafka reactor? If so, how do I do it?
Update:
protected void inboundEventHubListener(String topicName, List<String> allowedValues) {
    Scheduler scheduler = Schedulers.fromExecutorService(kafkaExecutorService);
    kafkaEventHubInboundReceiver
            .receive()
            .publishOn(scheduler)
            .groupBy(receiverRecord -> {
                try {
                    return receiverRecord.receiverOffset().topicPartition();
                } catch (Throwable throwable) {
                    log.error("exception in groupBy", throwable);
                    return Flux.empty();
                }
            })
            .flatMap(partitionFlux -> partitionFlux.publishOn(scheduler)
                    .map(record -> {
                        // blocking call to trigger processing of a message
                        processMessage(record, topicName, allowedValues).block(Duration.ofSeconds(60L));
                        return record;
                    })
                    .concatMap(message -> {
                        log.info("Received message after processing offset: {} partition: {} ",
                                message.offset(), message.partition());
                        return message.receiverOffset()
                                .commit()
                                .onErrorContinue((t, o) -> log.error(
                                        String.format("exception raised while commit offset %s", o), t));
                    }))
            .onErrorContinue((t, o) -> {
                try {
                    if (null != o) {
                        ReceiverRecord<String, Object> record = (ReceiverRecord<String, Object>) o;
                        ReceiverOffset offset = record.receiverOffset();
                        log.debug("failed to process message: {} partition: {} and message: {} ",
                                offset.offset(), record.partition(), record.value());
                    }
                    log.error(String.format("exception raised while processing message %s", o), t);
                } catch (Throwable inner) {
                    log.error("encountered error in onErrorContinue", inner);
                }
            })
            .subscribeOn(scheduler)
            .subscribe();
}
Can I do something like this?
kafkaEventHubInboundReceiverObj = kafkaEventHubInboundReceiver.....subscribeOn(scheduler);
if(consumer.autostart) {
kafkaEventHubInboundReceiverObj.subscribe();
}
With reactor-kafka there is no concept of "auto start"; you are in complete control.
The consumer is not "started" until you subscribe to the Flux returned from receiver.receive().
Simply delay the flux.subscribe() until you are ready to consume data.
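A minimal sketch of that idea, assuming a boolean property like the one above and keeping the Disposable so the consumer can also be stopped later (the method and field names are illustrative only):
@Value("${consumer.autostart:true}")
private boolean autoStart;

private Disposable subscription;   // handle to the running consumer, null until started

public void startIfEnabled() {
    Flux<ReceiverRecord<String, Object>> inbound = kafkaEventHubInboundReceiver.receive();
    if (autoStart) {
        // the consumer only starts polling once the Flux is subscribed
        subscription = inbound.subscribe(record -> log.info("offset {}", record.offset()));
    }
}

public void stop() {
    if (subscription != null) {
        subscription.dispose();    // cancels the subscription and stops consumption
    }
}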

No handlers for address while using eventBus in communicating between verticles of a springboot project

I developed a project with Spring Boot and used Vert.x as an asynchronous reactive toolkit. My ServerVerticle creates an HTTP server which receives HTTP requests from an Angular app. When a request arrives, ServerVerticle sends a message over the eventBus to another verticle which holds a service instance (for connecting to the repository). I tested it with Postman and I get a "No handlers for address" error as a bad request.
Here is my ServerVerticle:
HttpServerResponse res = routingContext.response();
res.setChunked(true);
EventBus eventBus = vertx.eventBus();
eventBus.request(InstrumentsServiceVerticle.FETCH_INSTRUMENTS_ADDRESS, "", result -> {
    if (result.succeeded()) {
        res.setStatusCode(200).write((Buffer) result.result().body()).end();
    } else {
        res.setStatusCode(400).write(result.cause().toString()).end();
    }
});
My InstrumentsServiceVerticle is as follows:
static final String FETCH_INSTRUMENTS_ADDRESS = "fetch.instruments.service";

// Reuse the Vert.x Mapper :)
private final ObjectMapper mapper = Json.mapper;
private final InstrumentService instrumentService;

public InstrumentsServiceVerticle(InstrumentService instrumentService) {
    this.instrumentService = instrumentService;
}

private Handler<Message<String>> fetchInstrumentsHandler() {
    return msg -> vertx.<String>executeBlocking(future -> {
        try {
            future.complete(mapper.writeValueAsString(instrumentService.getInstruments()));
        } catch (JsonProcessingException e) {
            logger.error("Failed to serialize result " + InstrumentsServiceVerticle.class.getName());
            future.fail(e);
        }
    },
    result -> {
        if (result.succeeded()) {
            msg.reply(result.result());
        } else {
            msg.reply(result.cause().toString());
        }
    });
}

@Override
public void start() throws Exception {
    super.start();
    vertx.eventBus().<String>consumer(FETCH_INSTRUMENTS_ADDRESS).handler(fetchInstrumentsHandler());
}
I deployed both verticles in the Spring Boot application starter.
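For reference, a hedged sketch of one way the deployment from the Spring Boot starter might look; the wiring and ordering are assumptions. Deploying InstrumentsServiceVerticle first, and only deploying ServerVerticle in its completion handler, ensures the FETCH_INSTRUMENTS_ADDRESS consumer is registered on the event bus before any HTTP request can reach it.
// Sketch only; bean wiring and the deployment order are assumptions.
Vertx vertx = Vertx.vertx();
vertx.deployVerticle(new InstrumentsServiceVerticle(instrumentService), ar -> {
    if (ar.succeeded()) {
        // the consumer for FETCH_INSTRUMENTS_ADDRESS is now registered
        vertx.deployVerticle(new ServerVerticle());
    } else {
        log.error("Failed to deploy InstrumentsServiceVerticle", ar.cause());
    }
});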

Spring Cloud Stream - How to implement retry using reactive API?

I am using Spring Cloud Stream with the RabbitMQ binder and the reactive API. The official documentation provides information on how to implement retry using a DLQ, which works great, but I can't get it to work using reactive streams. Using the code below, I only see an error logged but no retry attempts are performed, as if the original message was acknowledged.
@StreamListener
@Output(Channels.STATUS)
public Flux<WriteResponse> handle(@Input(WRITE_DATA) Flux<Message<WriteRequestDTO>> stream) {
    return stream
            .doOnNext(m -> log.info("Received {} request", WRITE_DATA))
            .doOnNext(m -> handleDeadQueue(m.getHeaders()))
            .map(this::validateDestination)
            .map(writerService::write)
            .doOnNext(m -> log.info("Write response: {}", m));
}
private WriteRequest validateDestination(WriteRequestDTO request) {
    if (!destinationService.isValidDestination(request)) {
        String message = String.format("Can't process write request '%s': destination doesn't exist", request);
        throw new AmqpRejectAndDontRequeueException(message);
    }
    return request;
}

private void handleDeadQueue(Map<String, Object> headers) {
    if (headers.containsKey("x-death")) {
        List<Map<String, Object>> deathList = (List<Map<String, Object>>) headers.get("x-death");
        if (deathList != null && !deathList.isEmpty()) {
            Map<String, Object> death = deathList.get(0);
            if (death != null) {
                Long numberOfDeaths = Long.valueOf(death.get("count").toString());
                if (numberOfDeaths >= properties.getDlqMaxAttempts()) {
                    // giving up - don't send to DLX
                    throw new ImmediateAcknowledgeAmqpException("Failed after " + numberOfDeaths + " attempts");
                }
            }
        }
    }
}
Is there an example of how retry should be implemented using the reactive API?
What's the general approach to error handling using the reactive API?
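Not from the original post, but one commonly used Reactor-level sketch: scope the work for each message to its own inner Mono and apply retryWhen there, so a failure (and its retries) only affects that message. The retry policy and the fallback below are assumptions, reusing the validateDestination and writerService names from the question.
// Sketch only; assumes Reactor 3.4+ for reactor.util.retry.Retry.
return stream
        .doOnNext(m -> log.info("Received {} request", WRITE_DATA))
        .flatMap(m -> Mono.fromCallable(() -> writerService.write(validateDestination(m.getPayload())))
                .retryWhen(Retry.backoff(3, Duration.ofSeconds(1)))       // assumed policy: 3 attempts with backoff
                .onErrorResume(e -> {
                    log.error("Giving up on message", e);
                    return Mono.empty();                                  // or route the failed message to a DLQ here
                }));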