I followed the Quarkus guide Using Apache Kafka with Reactive Messaging to create a sample and try it out. I changed the message flow to this:
When a post is saved, fire a CDI event.
Receive the CDI event and send it to a Kafka topic.
Read the data from the Kafka topic and expose it to the client as SSE.
The Kafka messaging config, part of application.properties:
# Consume data from Kafka
mp.messaging.incoming.activities.connector=smallrye-kafka
mp.messaging.incoming.activities.value.deserializer=io.vertx.kafka.client.serialization.JsonObjectDeserializer
# Produce data to Kafka
mp.messaging.outgoing.activitiesOut.connector=smallrye-kafka
mp.messaging.outgoing.activitiesOut.topic=activities
mp.messaging.outgoing.activitiesOut.value.serializer=io.vertx.kafka.client.serialization.JsonObjectSerializer
The event handling class for the CDI event and the reactive messages:
@ApplicationScoped
public class ActivityStreams {

    ReplaySubject<JsonObject> replaySubject;
    Flowable<JsonObject> flowable;

    @PostConstruct
    public void init() {
        replaySubject = ReplaySubject.create();
        flowable = replaySubject.share().toFlowable(BackpressureStrategy.BUFFER);
    }

    public void onActivityCreated(@ObservesAsync Activity activity) {
        replaySubject.onNext(JsonObject.mapFrom(activity));
    }

    @Outgoing("activitiesOut")
    public Publisher<JsonObject> onReceivedActivityCreated() {
        return flowable;
    }

    @Incoming("activities")
    @Outgoing("my-data-stream")
    @Broadcast
    public Activity onActivityReceived(JsonObject data) {
        Activity activity = data.mapTo(Activity.class);
        activity.setOccurred(LocalDateTime.now());
        return activity;
    }
}
When I tried to expose it as SSE, it did not work as expected.
#Path("/activities")
#ApplicationScoped
public class ActivityResource {
#Inject
#Channel("my-data-stream")
public Publisher<Activity> stream;
#GET
#Produces(MediaType.SERVER_SENT_EVENTS)
#SseElementType(MediaType.APPLICATION_JSON)
Publisher<Activity> eventStream(){
return stream;
}
}
In the console logging, I saw the message sent to the activities topic, but nothing went any further toward the SSE. And when I accessed the SSE endpoint with curl, it always returned a Not Found status.
curl -v -N -H "Accept:text/event-stream" http://localhost:8080/activities --connect-timeout 60
...
HTTP/1.1 404 Not Found
The complete sample code is here.
Related
On Spring Boot 2.6.4, this method is deprecated:
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, consumerFactory());
    // deprecated
    factory.setErrorHandler(new GlobalErrorHandler());
    return factory;
}
The global error handler class:
public class GlobalErrorHandler implements ConsumerAwareErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalErrorHandler.class);

    @Override
    public void handle(Exception thrownException, ConsumerRecord<?, ?> data, Consumer<?, ?> consumer) {
        // my custom global logic (e.g. notify ops team via Slack)
    }
}
What is the replacement sample for this? The docs say I should use setCommonErrorHandler, but how do I implement the CommonErrorHandler interface when there is no abstract method to override there?
The point is, I have to send a Slack notification to the ops team based on a certain condition (the message type, which is available in the Kafka message header).
This is not blocking, just an annoying deprecation message though.
Thanks
See the Spring for Apache Kafka documentation; legacy error handlers are replaced with CommonErrorHandler implementations.
What's New?
https://docs.spring.io/spring-kafka/docs/current/reference/html/#x28-eh
The legacy GenericErrorHandler and its sub-interface hierarchies for record and batch listeners have been replaced by a new single interface CommonErrorHandler with implementations corresponding to most legacy implementations of GenericErrorHandler. See Container Error Handlers for more information.
Container Error Handlers
https://docs.spring.io/spring-kafka/docs/current/reference/html/#error-handlers
Starting with version 2.8, the legacy ErrorHandler and BatchErrorHandler interfaces have been superseded by a new CommonErrorHandler. These error handlers can handle errors for both record and batch listeners, allowing a single listener container factory to create containers for both types of listener. CommonErrorHandler implementations to replace most legacy framework error handler implementations are provided and the legacy error handlers deprecated. The legacy interfaces are still supported by listener containers and listener container factories; they will be deprecated in a future release.
I was facing exactly the same problem, so I replaced the ConsumerAwareErrorHandler implementation with CommonErrorHandler and implemented handleRecord as described in the docs, and it works!
public class GlobalErrorHandler implements CommonErrorHandler {

    private static final Logger log = LoggerFactory.getLogger(GlobalErrorHandler.class);

    @Override
    public void handleRecord(
            Exception thrownException,
            ConsumerRecord<?, ?> record,
            Consumer<?, ?> consumer,
            MessageListenerContainer container) {
        log.warn("Global error handler for message: {}", record.value().toString());
    }
}
In the KafkaConfig class:
@Bean(value = "kafkaListenerContainerFactory")
public ConcurrentKafkaListenerContainerFactory<Object, Object> kafkaListenerContainerFactory(
        ConcurrentKafkaListenerContainerFactoryConfigurer configurer) {
    var factory = new ConcurrentKafkaListenerContainerFactory<Object, Object>();
    configurer.configure(factory, consumerFactory());
    factory.setCommonErrorHandler(new GlobalErrorHandler());
    return factory;
}
I want to do batch processing. In my use case, the Kafka producer sends messages one by one; I want to read them as a list in the consumer application. I can do that with the Spring Kafka library: Spring Kafka batch listener.
Is there any way to do this with the quarkus-smallrye-reactive-messaging-kafka library?
I tried the example below but got an error.
ERROR [io.sma.rea.mes.provider] (vert.x-eventloop-thread-3) SRMSG00200: The method org.MyConsumer#aggregate has thrown an exception: java.lang.ClassCastException: class org.TestConsumer cannot be cast to class io.smallrye.mutiny.Multi (org.TestConsumer is in unnamed module of loader io.quarkus.bootstrap.classloading.QuarkusClassLoader @6f2c0754; io.smallrye.mutiny.Multi is in unnamed module of loader io.quarkus.bootstrap.classloading.QuarkusClassLoader @4c1638b)
application.properties:
kafka.bootstrap.servers=hosts
mp.messaging.connector.smallrye-kafka.group.id=KafkaQuick
mp.messaging.connector.smallrye-kafka.auto.offset.reset=earliest
mp.messaging.incoming.test-consumer.connector=smallrye-kafka
mp.messaging.incoming.test-consumer.value.deserializer=org.TestConsumerDeserializer
TestConsumerDeserializer:
public class TestConsumerDeserializer extends JsonbDeserializer<TestConsumer> {

    public TestConsumerDeserializer() {
        // pass the class to the parent.
        super(TestConsumer.class);
    }
}
MyConsumer:
@ApplicationScoped
public class MyConsumer {

    @Incoming("test-consumer")
    //@Outgoing("aggregated-channel")
    public void aggregate(Multi<Message<TestConsumer>> in) {
        System.out.println(in);
    }
}
Batch support has been added to the Quarkus Kafka connector.
See https://quarkus.io/guides/kafka#receiving-kafka-records-in-batches.
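For reference, a minimal sketch of that batch mode, reusing the test-consumer channel from the question (the batch=true property is the switch documented in the guide; the class name is an assumption):

import java.util.List;

import javax.enterprise.context.ApplicationScoped;

import org.eclipse.microprofile.reactive.messaging.Incoming;

// application.properties addition:
// mp.messaging.incoming.test-consumer.batch=true
@ApplicationScoped
public class MyBatchConsumer {

    // With batch mode enabled, the connector delivers all records
    // of a Kafka poll as a single list.
    @Incoming("test-consumer")
    public void consume(List<TestConsumer> payloads) {
        System.out.println("batch size: " + payloads.size());
    }
}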
I don't understand the reason for the ClassCastException in the question.
But I found solutions for reading bulk/batch messages using quarkus-smallrye-reactive-messaging-kafka.
Solution 1:
#Incoming("test-consumer-topic")
#Outgoing("aggregated-channel")
public Multi<List<TestConsumer>> aggregate(Multi<TestConsumer> in) {
return in.groupItems().intoLists().every(Duration.ofSeconds(5));
}
#Incoming("aggregated-channel")
public void test(List<TestConsumer> test) {
System.out.println("size: "+ test.size());
}
Solution 2:
#Incoming("test-consumer-topic")
#Outgoing("events-persisted")
public Multi<Message<TestConsumer>> processPayloadStream(Multi<Message<TestConsumer>> messages) {
return messages
.groupItems().intoLists().of(4)
.emitOn(Infrastructure.getDefaultWorkerPool())
.flatMap(messages1 -> {
persist(messages1);
return Multi.createFrom().items(messages1.stream());
}).emitOn(Infrastructure.getDefaultExecutor());
}
public void persist(List<Message<TestConsumer>> messages){
System.out.println("messages size:"+ messages.size());
}
#Incoming("events-persisted")
public CompletionStage<Void> messageAcknowledging(Message<TestConsumer> message){
return message.ack();
}
Note: these use the application.properties config from the question.
References:
Support subscribing with Multi<Message<>>...
Get Bulk polled message from kafka
Are there any example projects showing how to use Kafka with Micronaut? I am having problems getting it to work.
I have the following producer:
@KafkaClient
interface AppClient {
    @Topic("topic-name")
    void sendMessage(@KafkaKey String id, Event event)
}
and listener:
@KafkaListener(
    groupId = "group-id",
    offsetReset = OffsetReset.EARLIEST
)
class AppListener {
    @Topic("topic-name")
    void onMessage(Event event) {
        // do stuff
    }
}
My application.yml contains:
kafka:
bootstrap:
servers: localhost:2181
and application-test.yml (is this right, and should it be in the same directory as application.yml? Also, I am unsure how the embedded server should be used):
kafka:
# embedded:
# enabled: true
# topics: promo-api-promotions
bootstrap:
servers: localhost:9092
My test looks like:
@MicronautTest
class AppSpec extends Specification {

    @Shared
    @AutoCleanup
    EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    @Shared
    private AppClient appClient = server.applicationContext.getBean(AppClient)

    def 'The upload endpoint is called'() {
        // test here
        appClient.sendMessage(id, event)
        // other test stuff
    }
}
The main problems I am having are:
My consumer is not consuming from my topic. I can see the producer creates the topic in Kafka and the client group is created, but the offset stays at 0.
I am having problems when the test starts up: it looks as if two instances of the client are created, and therefore the MBean registration fails (also, if I try to use the embedded Kafka, I get a different message about port 9092 already being in use, because it tries to start the server twice):
javax.management.InstanceAlreadyExistsException:
kafka.consumer:type=app-info,id=app-kafka-client-app-listener
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
Managed to fix the second problem - the object passed into the listener did not have a @JsonCreator. I found this out by using the Jackson object mapper to construct the object from its JSON while playing around.
If anyone else has the same problem - make sure that the object model works with Jackson before going any further!
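For illustration, a minimal sketch of an event model that Jackson can deserialize (the field names here are assumptions):

import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class Event {

    private final String id;
    private final String payload;

    // Without a @JsonCreator (or a no-arg constructor plus setters),
    // Jackson cannot instantiate the object and the listener never sees it.
    @JsonCreator
    public Event(@JsonProperty("id") String id,
                 @JsonProperty("payload") String payload) {
        this.id = id;
        this.payload = payload;
    }

    public String getId() { return id; }
    public String getPayload() { return payload; }
}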
You should add the embedded configuration kafka.embedded.enabled to a configuration map and pass it to the ApplicationContext.run method.
Map<String, Object> config = Collections.
unmodifiableMap(new HashMap<String, Object>() {
{
put(AbstractKafkaConfiguration.EMBEDDED, true);
put(AbstractKafkaConfiguration.EMBEDDED_TOPICS, "test_topic");
}
});
try (ApplicationContext ctx = ApplicationContext.run(config)) {
The consumer consumes from Kafka on another thread, so you have to wait a while until your AppListener catches up.
You can see a short example in KafkaProducerListenerTest
Remember the Kafka dependencies described in the Micronaut doc: Embedding Kafka
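Regarding the waiting mentioned above, one option is to poll in the test until the listener has seen the message. A minimal sketch using Awaitility, where the listener's getReceivedEvents() accessor is an assumption:

import static org.awaitility.Awaitility.await;

import java.time.Duration;

// in the test, after appClient.sendMessage(id, event):
await().atMost(Duration.ofSeconds(10))
       .until(() -> !listener.getReceivedEvents().isEmpty()); // hypothetical accessor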
I am trying to use the event filter to reduce the number of topics the application uses, using the new feature available in the new Spring Cloud Stream release (Chelsea.RC1). The message is created with the correct header; however, inspecting the contents of the message in the queue, the message does not contain the header, only the body with the payload.
public void sendEnroll(EnrollCommand data) {
    // MessageChannel
    outputEnroll.send(MessageBuilder
            .withPayload(data)
            .setHeader("brand", "MASTERCARD")
            .setHeader("operation", Operation.ENROLL)
            .build());
}
Consumer
@Service
@EnableBinding(Channel.class)
public class EnrollConsumer {

    @Autowired
    private EnrollService service;

    @StreamListener(target = Channel.INPUT_ENROLL, condition = "headers['brand']=='MASTERCARD'")
    public void enrollConsumer(@Payload String command) {
        System.out.println(command);
        //service.enrollment(command);
    }
}
In the consumer service, it gives the following warning:
WARN -kafka-listener-1 o.s.c.s.b.DispatchingStreamListenerMessageHandler:62 - Cannot find a #StreamListener matching for message with id: 7baae934-7484-a7fd-91b0-ba906558bb13
You have to map your custom headers:
spring.cloud.stream.kafka.binder.headers = brand,operation
That information is present in the documentation.
I have a similar question to this post:
Consume message only once from Topic per listeners running in cluster
When I tried using a queue to publish messages and added an item listener in two different JVMs, both of them received every message. I want each message to be received only once in a clustered/distributed environment.
Here's my code snippet:
Publishing of the message:
getQueue().add("some sample message");
I have the same listener configured in two different JVMs, which goes like this:
public class HazelcastQueueListener implements ItemListener<String> {

    public HazelcastQueueListener() {
        HazelcastInstance instance = HazelcastClient.newHazelcastClient(HazelClientConfig.getClientConfig());
        IQueue<String> queue1 = instance.getQueue("SAMPLEQUEUE");
        queue1.addItemListener(this, false);
    }

    public static void main(String[] args) {
        HazelcastQueueListener listener = new HazelcastQueueListener();
    }

    @Override
    public void itemAdded(ItemEvent<String> arg0) {
        if (arg0 != null) {
            System.out.println("Item coming out of queue 1: " + arg0);
        } else {
            System.out.println("null");
        }
    }

    @Override
    public void itemRemoved(ItemEvent<String> arg0) {
        // not used
    }
}
You have to poll the queue, like a standard Java BlockingQueue, in order to consume an item only once:
String item = queue1.take();
AFAIK, Hazelcast doesn't support asynchronous consumption of queue items. The ItemListener doesn't consume the item; it only notifies that an item is available.
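A minimal sketch of such a polling consumer, using the queue name from the question (imports per Hazelcast 3.x; the loop structure is an assumption):

import com.hazelcast.client.HazelcastClient;
import com.hazelcast.core.HazelcastInstance;
import com.hazelcast.core.IQueue;

public class HazelcastQueueConsumer {

    public static void main(String[] args) throws InterruptedException {
        HazelcastInstance instance = HazelcastClient.newHazelcastClient();
        IQueue<String> queue = instance.getQueue("SAMPLEQUEUE");

        // take() blocks until an item is available; each item is handed to
        // exactly one taker, so a message is consumed once across all JVMs.
        while (!Thread.currentThread().isInterrupted()) {
            String item = queue.take();
            System.out.println("Consumed: " + item);
        }
    }
}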