Spring Boot Kafka request-reply scenario

I am implementing a POC of a request/reply scenario in order to move our event-based microservice stack to using Kafka.
There are two options in Spring, and I wonder which one is better to use: ReplyingKafkaTemplate or Spring Cloud Stream.
The first is ReplyingKafkaTemplate, which can easily be configured so that each instance has a dedicated reply topic:
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, provider.getReplyChannelName().getBytes()));
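For reference, a minimal client-side sketch of this option (assuming a configured ReplyingKafkaTemplate<String, ConcatModel, ConcatReply> bean named template; the topic name here is illustrative):
// Sketch only: template is an assumed ReplyingKafkaTemplate bean, "concat-request" stands in for ${kafka.topic.concat-request}
ProducerRecord<String, ConcatModel> record = new ProducerRecord<>("concat-request", request);
record.headers().add(new RecordHeader(KafkaHeaders.REPLY_TOPIC, provider.getReplyChannelName().getBytes()));
RequestReplyFuture<String, ConcatModel, ConcatReply> future = template.sendAndReceive(record);
ConsumerRecord<String, ConcatReply> response = future.get(10, TimeUnit.SECONDS); // blocks until the reply arrives or times out
ConcatReply reply = response.value();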
The consumer should not need to know the reply topic name; it just listens on a topic and replies to whatever reply topic is given:
@KafkaListener(topics = "${kafka.topic.concat-request}")
@SendTo
public ConcatReply listen(ConcatModel request) {
    .....
}
The second option is a combination of StreamListener, Spring Integration and IntegrationFlows. A gateway has to be configured and the reply topics have to be filtered:
@MessagingGateway
public interface StreamGateway {
    @Gateway(requestChannel = START, replyChannel = FILTER, replyTimeout = 5000, requestTimeout = 2000)
    String process(String payload);
}

@Bean
public IntegrationFlow headerEnricherFlow() {
    return IntegrationFlows.from(START)
        .enrichHeaders(HeaderEnricherSpec::headerChannelsToString)
        .enrichHeaders(headerEnricherSpec -> headerEnricherSpec.header(Channels.INSTANCE_ID, instanceUUID))
        .channel(Channels.REQUEST)
        .get();
}

@Bean
public IntegrationFlow replyFiltererFlow() {
    return IntegrationFlows.from(GatewayChannels.REPLY)
        // keep only replies stamped with this instance's UUID
        .filter(Message.class, message -> instanceUUID.equals(message.getHeaders().get(Channels.INSTANCE_ID)))
        .channel(FILTER)
        .get();
}
Building the reply:
@StreamListener(Channels.REQUEST)
@SendTo(Channels.REPLY)
public Message<?> process(Message<String> request) {
    .....
}
Specifying the reply channel is mandatory, so the received replies are filtered by instance ID, which is a kind of workaround (and might bloat the network). On the other hand, a DLQ scenario is enabled by adding:
consumer:
enableDlq: true
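For context, enableDlq is a Kafka binder consumer property, so the full path in application.yml looks roughly like this (the binding name is illustrative):
spring:
  cloud:
    stream:
      kafka:
        bindings:
          concat-request:   # illustrative binding name
            consumer:
              enableDlq: true
              dlqName: concat-request-dlq   # optional; defaults to error.<destination>.<group>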
Using Spring Cloud Stream looks promising in terms of interoperability with RabbitMQ and other features, but it does not officially support the request/reply scenario out of the box. The issue is still open, and not rejected either: https://github.com/spring-cloud/spring-cloud-stream/issues/1800
Any suggestions are welcome.

Spring Cloud Stream is not designed for request/reply; it can be done, but it is not straightforward and you have to write code.
With @KafkaListener the framework takes care of everything for you.
If you want it to work with RabbitMQ too, you can annotate the same method with @RabbitListener as well.
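A minimal sketch of such a dual-annotated handler (the queue name and the doConcat helper are hypothetical):
@KafkaListener(topics = "${kafka.topic.concat-request}")
@RabbitListener(queues = "concat-request")  // hypothetical queue name
@SendTo
public ConcatReply listen(ConcatModel request) {
    return doConcat(request); // hypothetical helper; the same handler serves both brokers
}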

Related

Error handling in Spring Cloud Kafka Streams

I'm using Spring Cloud Stream with Kafka Streams. Let's say I have a processor which is a Function that converts a KStream of Strings to a KStream of CityProgrammes. It invokes an API to find the city by name, and another transformation which finds any events near that city.
Now the problem is that if any error happens during the transformation, the whole application stops. I want to send that one particular message to a DLQ and move along. I've been reading for days and everyone suggests handling errors within the called services, but that is nonsense in my opinion, plus I still need to return a KStream: how do I do that within a catch?
I also looked at the UncaughtExceptionHandler, but it is not aware of the message and is only able to restart the processing, which won't skip the invalid message.
This might sound like an A-B problem, so the question rephrased: how do I keep a KStream flowing when an exception occurs, while sending the invalid item to a DLQ?
When it comes to the application-level errors you have, it is up to the application itself how the error is handled. Kafka Streams and the Spring Cloud Stream binder mainly support deserialization and serialization errors at the framework level. Although that is the case, I think your scenario can be handled. If you are using Kafka Client prior to 2.8, here is an SO answer I gave before on something similar: https://stackoverflow.com/a/66749750/2070861
If you are using Kafka Streams 2.8, here is an idea that you can use. However, the code below should only be used as a starting point; adjust it to your use case. Read up on how branching works in Kafka Streams 2.8; the branching API was significantly refactored in 2.8 compared to prior versions.
@Bean
public Function<KStream<?, String>, KStream<?, Foo>> convert() {
    Foo[] foo = new Foo[1]; // single-element holder so the lambda can write to it
    return input -> {
        final Map<String, ? extends KStream<?, String>> branches =
            input.split(Named.as("foo-"))
                .branch((key, value) -> {
                    try {
                        foo[0] = new Foo(); // your API call for the CityProgramme conversion here, possibly
                        return true;
                    }
                    catch (Exception e) {
                        // send the failed record to the DLT programmatically
                        Message<?> message = MessageBuilder.withPayload(value).build();
                        streamBridge.send("to-my-dlt", message);
                        return false;
                    }
                }, Branched.as("bar"))
                .defaultBranch();
        final KStream<?, String> kStream = branches.get("foo-bar");
        return kStream.map((key, value) -> new KeyValue<>("", foo[0]));
    };
}
The default branch is ignored in this code because it only contains the records that threw exceptions; those were already handled by the catch block above, which sends them to a DLT programmatically. Finally, we take the good records, map them to a new KStream, and send it through the outbound.
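Note that "to-my-dlt" as used with StreamBridge is a binding name; without any configuration StreamBridge uses the name itself as the destination. If you want the records to land on a differently named topic, a mapping along these lines in application.properties should do it (the topic name is illustrative):
spring.cloud.stream.bindings.to-my-dlt.destination=my-dlt-topic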

Artemis message routing

I'm using ActiveMQ Artemis 2.17.0 and I'm facing routing issues.
I've implemented a plugin that logs before messages are routed, and I see that some messages are routed from topic.private.abc.task.V1 to topic.abc.rawmessage.V1.
There is no divert set up, and topics and queues are created dynamically by the producers and consumers. There is a setup that maps the destination clustered.*.> to virtual topics:
private TransportConfiguration getServerTransportConfiguration() {
    Map<String, Object> extraProps = new HashMap<>();
    extraProps.put("virtualTopicConsumerWildcards", "clustered.*.>;2");
    Map<String, Object> params = new HashMap<>();
    params.put("scheme", "tcp");
    params.put("port", port);
    params.put("host", hostname);
    return new TransportConfiguration("org.apache.activemq.artemis.core.remoting.impl.netty.NettyAcceptorFactory", params, "netty-acceptor", extraProps);
}
Both topic.private.abc.task.V1 and topic.abc.rawmessage.V1 are valid topics but they are not supposed to be linked.
What could explain that behavior?
Here is the plugin code:
@Override
public void beforeMessageRoute(Message message, RoutingContext context, boolean direct, boolean rejectDuplicates) throws ActiveMQException {
    Map<String, Object> map = new HashMap<>();
    map.put("RoutingContext", new RoutingContextLogView(context));
    try {
        logger.info(mapper.writeValueAsString(map));
    } catch (JsonProcessingException e) {
        // writeValueAsString throws a checked exception, so it has to be handled here
        logger.warn("Could not serialize routing context", e);
    }
    ActiveMQServerPlugin.super.beforeMessageRoute(message, context, direct, rejectDuplicates);
}
public class RoutingContextLogView {

    private RoutingContext routingContext;

    public RoutingContextLogView(RoutingContext routingContext) {
        this.routingContext = routingContext;
    }

    public String getAddress() {
        return routingContext.getAddress() != null ? routingContext.getAddress().toString() : null;
    }

    public String getPreviousAddress() {
        return routingContext.getPreviousAddress() != null ? routingContext.getPreviousAddress().toString() : null;
    }

    public String getRoutingType() {
        return routingContext.getRoutingType() != null ? routingContext.getRoutingType().name() : null;
    }

    public String getPreviousRoutingType() {
        return routingContext.getPreviousRoutingType() != null ? routingContext.getPreviousRoutingType().name() : null;
    }
}
Despite the odd logging, the flow followed by the message seems to be OK (i.e. the message is produced to topic.abc.rawmessage.V1 and consumed from topic.abc.rawmessage.V1). I'm just wondering why there is message routing at all, and why the previousAddress in the RoutingContext is wrong.
The RoutingContext object, which is used internally by the broker, is reusable. This is done for performance reasons to prevent having to re-create the RoutingContext for every routing operation no matter what. As one might guess, routing messages is a very common operation in the broker so it pays to optimize it as much as possible. Reusing the RoutingContext means fewer objects are created and thrown away which means less garbage needs to be cleaned up which means fewer pauses and better overall performance by the broker.
The fact that the previousAddress is different here from the address where the current message is going to be routed is not a problem. It just means that the context won't be re-used for this routing operation and therefore will be cleared. As the name suggests, the beforeMessageRoute method is invoked before any routing logic is performed (e.g. clearing the RoutingContext). If you inspect the RoutingContext using afterMessageRoute then you should see that it was cleared and populated with the proper details.
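For example, a sketch of the corresponding afterMessageRoute hook (the signature including RoutingStatus is from recent Artemis 2.x releases; double-check it against your broker version):
@Override
public void afterMessageRoute(Message message, RoutingContext context, boolean direct, boolean rejectDuplicates, RoutingStatus result) throws ActiveMQException {
    // By now the context has been cleared and re-populated for this routing operation,
    // so address and previousAddress should reflect the current message.
    logger.info("after route: address=" + context.getAddress() + ", previousAddress=" + context.getPreviousAddress() + ", result=" + result);
}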
Message "sending" and message "routing" (both of which have plugin hooks) are related but distinct operations. A message is "sent" in response to a client operation. Sends always result in a route. However, not all routes are the results of sends. A message can be routed due to internal broker operations which do not involve a send (e.g. moving messages around a cluster, expiring a message, cancelling an undeliverable message to a dead-letter address, using a divert, etc.).
I would caution you against inspecting internal broker state (which can be subtle and nuanced) and assuming a problem exists when everything else indicates that the broker is functioning normally. In this case you said that you were "facing routing issues" and that "some messages are routed from topic.private.abc.task.V1 to topic.abc.rawmessage.V1" when, in fact, there was no routing issue and messages were not actually being routed from topic.private.abc.task.V1 to topic.abc.rawmessage.V1. From what I can see, everything is in fact functioning normally.

How to inject KafkaTemplate in Quarkus

I'm trying to inject a KafkaTemplate to send a single message. I'm developing a small function that lies outside the reactive approach.
I can only find examples that use @Incoming and @Outgoing from SmallRye, but I don't need a KafkaStream.
I tried with Kafka-CDI but I'm unable to inject the SimpleKafkaProducer.
Any ideas?
Regarding Clement's answer: it seems like the right direction, but when executing orders.send("hello"); I receive this error:
(vert.x-eventloop-thread-3) Unhandled exception: java.lang.IllegalStateException: Stream not yet connected
I'm consuming from my topic via the command line; Kafka is up and running, and if I produce manually I can see the consumed messages.
It seems related to this sentence from the doc:
To use an Emitter for the stream hello, you need a @Incoming("hello") somewhere in your code (or in your configuration).
I have this code in my class:
@Incoming("orders")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
    log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
    return msg.ack();
}
Maybe I've forgotten some configuration?
So, you just need to use an Emitter:
@Inject
@Stream("orders") // Emit on the channel 'orders'
Emitter<String> orders;
// ...
orders.send("hello");
And in your application.properties, declare:
## Orders topic (WRITE)
mp.messaging.outgoing.orders.type=io.smallrye.reactive.messaging.kafka.Kafka
mp.messaging.outgoing.orders.topic=orders
mp.messaging.outgoing.orders.bootstrap.servers=localhost:9092
mp.messaging.outgoing.orders.key.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.value.serializer=org.apache.kafka.common.serialization.StringSerializer
mp.messaging.outgoing.orders.acks=1
To avoid the Stream not yet connected exception, do as the doc suggests:
To use an Emitter for the stream hello, you need a @Incoming("hello") somewhere in your code (or in your configuration).
Assuming you have something like this in your application.properties:
# Orders topic (READ)
smallrye.messaging.source.orders-r-topic.type=io.smallrye.reactive.messaging.kafka.Kafka
smallrye.messaging.source.orders-r-topic.topic=orders
smallrye.messaging.source.orders-r-topic.bootstrap.servers=0.0.0.0:9092
smallrye.messaging.source.orders-r-topic.key.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.value.deserializer=org.apache.kafka.common.serialization.StringDeserializer
smallrye.messaging.source.orders-r-topic.group.id=my-group-id
Add something like this:
#Incoming("orders-r-topic")
public CompletionStage<Void> consume(KafkaMessage<String, String> msg) {
log.info("Received message (topic: {}, partition: {}) with key {}: {}", msg.getTopic(), msg.getPartition(), msg.getKey(), msg.getPayload());
return msg.ack();
}
Since Clement's answer, the @Stream annotation has been deprecated; the @Channel annotation must be used instead.
You can use an Emitter provided by the quarkus-smallrye-reactive-messaging-kafka dependency to produce messages to a Kafka topic.
A simple Kafka producer implementation:
@ApplicationScoped // assumption: scope annotation added so the bean is injectable via CDI
public class MyKafkaProducer {

    @Inject
    @Channel("my-topic")
    Emitter<String> myEmitter;

    public void produce(String message) {
        myEmitter.send(message);
    }
}
And the following configuration must be added to the application.properties file:
mp.messaging.outgoing.my-topic.connector=smallrye-kafka
mp.messaging.outgoing.my-topic.bootstrap.servers=localhost:9092
mp.messaging.outgoing.my-topic.value.serializer=org.apache.kafka.common.serialization.StringSerializer
This will produce string-serialized messages to a Kafka topic named my-topic.
Note that by default the name of the channel is also the name of the Kafka topic to which the data will be produced. This behavior can be changed through the configuration. The supported configuration attributes are described in the SmallRye Reactive Messaging documentation.
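As a usage sketch, the producer can then be injected wherever a single message needs to be sent, e.g. from a JAX-RS resource (the class and path here are hypothetical):
@Path("/orders") // hypothetical resource
public class OrderResource {

    @Inject
    MyKafkaProducer producer;

    @POST
    public Response create(String body) {
        producer.produce(body); // fire-and-forget send to the "my-topic" channel
        return Response.accepted().build();
    }
}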

Any pointers for creating Kafka endpoint support?

I have a fairly immediate need to support Kafka integration testing using the Citrus Framework. I was thinking of taking the existing JMS module as an example/framework and using Spring Kafka. Any pointers or gotchas that I should be aware of? Assuming I'm successful, I am willing to donate the module back to the project.
Here is an example of how you can use a Kafka Camel component with Citrus:
@Bean
public CamelContext camelKafkaAdapterContext() throws Exception {
    SpringCamelContext context = new SpringCamelContext();
    context.addRouteDefinition(new RouteDefinition()
        .from("kafka:localhost:9092?topic=test&zookeeperHost=localhost&zookeeperPort=2181&serializerClass=kafka.serializer.StringEncoder")
        .to("log:com.consol.citrus.camel?level=DEBUG")
        .to("seda:kafka-buffer"));
    return context;
}

@Bean
public CamelEndpoint kafkaEndpoint(CamelContext camelContext) {
    CamelEndpoint endpoint = new CamelEndpoint();
    endpoint.getEndpointConfiguration().setCamelContext(camelContext);
    endpoint.getEndpointConfiguration().setEndpointUri("seda:kafka-buffer");
    return endpoint;
}
You first define a Camel context which will be started when you run any test with Citrus. Once it is instantiated, this Camel component will read from the configured topic and send all messages into a buffer seda:kafka-buffer (seda is used only as an example). You can then use a Citrus CamelEndpoint to read messages from that buffer inside any test:
receive(action -> action.endpoint(kafkaEndpoint)
    .messageType(MessageType.JSON)
    .payload(...));
Note, this is just an example I have assembled. I haven't tested this exact setup, but it should work once you configure the Camel context correctly.

CometD Subscription Listeners

I'm having a problem processing subscription requests from clients and carrying out some processing based on those requests. I'd like to be able to invoke a method and carry out some processing when an incoming subscription request is received on the server. I've had a look at the following CometD documentation and tried the example outlined in "Subscription Configuration Support", but I'm not having much luck.
http://www.cometd.org/documentation/2.x/cometd-java/server/services/annotated
I've already created the Bayeux server using a Spring bean, and I'm able to publish data to other channels I've created on the server side. Any help or additional info on the topic would be appreciated!
The code example I’m using:
#Service("CometDSubscriptionListener")
public class CometDSubscriptionListener {
private final String channel = "/subscription";
private static final Logger logger = Logger.getLogger(CometDSubscriptionListener.class);
private Heartbeat heartbeat;
#Inject
private BayeuxServer bayeuxserver;
#Session
private ServerSession sender;
public CometDSubscriptionListener(BayeuxServer bayeuxserver){
logger.info("CometDSubscriptionListener constructor called");
}
#Subscription(channel)
public void processClientRequest(Message message)
{
logger.info("Received request from client for channel " + channel);
PublishData();
}
Have a look at the documentation for annotated services, and also at the CometD concepts.
If I read your question correctly, you want to perform some logic when clients subscribe to a channel, not when messages arrive on that channel.
You're confusing the meaning of the @Subscription annotation, so read the links above, which should clarify its semantics.
To do what I understand you want to do, you need this:
@Service
public class CometDSubscriptionListener
{
    ...

    @Listener(Channel.META_SUBSCRIBE)
    public void processSubscription(ServerSession remote, ServerMessage message)
    {
        // Which channel does the client want to subscribe to?
        String channel = (String)message.get(Message.SUBSCRIPTION_FIELD);
        // Do your logic here
    }
}
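As a follow-up, a hedged sketch of acting on the subscription, e.g. delivering an initial message to just the subscribing session (this assumes the @Session-injected sender from the question's code; the four-argument deliver is from the CometD 2.x API referenced above, and newer versions drop the message id parameter):
@Listener(Channel.META_SUBSCRIBE)
public void processSubscription(ServerSession remote, ServerMessage message)
{
    String channel = (String)message.get(Message.SUBSCRIPTION_FIELD);
    if ("/subscription".equals(channel)) {
        // Deliver a one-off message to this subscriber only (not broadcast on the channel)
        remote.deliver(sender, channel, "initial-data", null);
    }
}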