Kafka Streams dynamic routing (ProducerInterceptor might be a solution?) - apache-kafka

I'm working with Apache Kafka and I've been experimenting with the Kafka Streams functionality.
What I'm trying to achieve is very simple, at least in words, and it can be done easily with the regular plain Consumer/Producer approach:
Read from a dynamic list of topics
Do some processing on the message
Push the message to another topic whose name is computed from the message content
Initially I thought I could create a custom Sink or inject some kind of endpoint resolver in order to programmatically define the topic name for each single message, but ultimately I couldn't find any way to do that.
So I dug into the code and found the ProducerInterceptor class, which is (quoting from the JavaDoc):
A plugin interface that allows you to intercept (and possibly mutate)
the records received by the producer before they are published to the
Kafka cluster.
And its onSend method:
This is called from KafkaProducer.send(ProducerRecord) and
KafkaProducer.send(ProducerRecord, Callback) methods, before key and
value get serialized and partition is assigned (if partition is not
specified in ProducerRecord).
It seemed like the perfect solution for me as I can effectively return a new ProducerRecord with the topic name I want.
However, apparently there's a bug (I've opened an issue on their JIRA: KAFKA-4691): that method is actually called after the key and value have already been serialized.
Bummer, as I don't think doing an additional deserialization at this point is acceptable.
So my question to the more experienced and knowledgeable users: what would be an efficient and elegant way of achieving this? Any input, ideas, or suggestions are welcome.
Thanks in advance for your help/comments/suggestions/ideas.
Below are some code snippets of what I've tried:
public static void main(String[] args) throws Exception {
    StreamsConfig streamingConfig = new StreamsConfig(getProperties());

    StringDeserializer stringDeserializer = new StringDeserializer();
    StringSerializer stringSerializer = new StringSerializer();
    MyObjectDeserializer myObjectDeserializer = new MyObjectDeserializer();

    TopologyBuilder topologyBuilder = new TopologyBuilder();
    topologyBuilder.addSource("SOURCE", stringDeserializer, myObjectDeserializer, Pattern.compile("input-.*"))
                   .addProcessor("PROCESS", MyCustomProcessor::new, "SOURCE");

    System.out.println("Starting PurchaseProcessor Example");
    KafkaStreams streaming = new KafkaStreams(topologyBuilder, streamingConfig);
    streaming.start();
    System.out.println("Now started PurchaseProcessor Example");
}
private static Properties getProperties() {
    Properties props = new Properties();
    .....
    .....
    props.put(StreamsConfig.producerPrefix(ProducerConfig.INTERCEPTOR_CLASSES_CONFIG), "com.test.kafka.streams.OutputTopicRouterInterceptor");
    return props;
}
OutputTopicRouterInterceptor onSend implementation:
@Override
public ProducerRecord<String, MyObject> onSend(ProducerRecord<String, MyObject> record) {
    MyObject obj = record.value();
    String topic = computeTopicName(obj);
    ProducerRecord<String, MyObject> newRecord = new ProducerRecord<String, MyObject>(topic, record.partition(), record.timestamp(), record.key(), obj);
    return newRecord;
}

Related

Create message listeners dynamically for the topics

I was analysing a problem around creating a generic consumer library that can be deployed in multiple microservices (all of them Spring based). The requirement is to listen to around 15-20 topics. If we use the annotation-based Kafka listener, we need to add more code for each microservice. Is there any way we can create the consumers dynamically based on some XML file, where each consumer can have this data injected:
topic
groupid
partition
filter (if any)
With annotations, the design is very rigid. The only way I can think of is to create message listeners after parsing the XML config, so that each topic has its own ConcurrentMessageListenerContainer.
Is there any better alternative approach available using Spring?
P.S.: I am a little new to Spring & Kafka. Please let me know if there is any confusion in explaining the requirements.
Thanks,
Rajasekhar
Maybe you can use topic patterns. Take a look at the consumer properties. For example, the listener
@KafkaListener(topicPattern = "topic1|topic2")
will listen to topic1 and topic2.
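As a side note, the pattern does not have to be hard-coded: Spring resolves property placeholders in @KafkaListener attributes, so the pattern can come from external configuration. A small sketch, with a hypothetical property name:
@KafkaListener(topicPattern = "${app.kafka.topic-pattern}")
public void consume(String message) {
    // handle the message
}
with, e.g., app.kafka.topic-pattern=topic1|topic2 defined in application.properties.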
If you need to create a listener dynamically, extra care must be taken, because you must also shut it down.
I would use an approach similar to Spring's KafkaListenerAnnotationBeanPostProcessor. This post processor is responsible for processing @KafkaListener annotations.
Here is a proposal of how it could work:
public class DynamicEndpointRegistrar {

    private BeanFactory beanFactory;
    private KafkaListenerContainerFactory<?> containerFactory;
    private KafkaListenerEndpointRegistry endpointRegistry;
    private MessageHandlerMethodFactory messageHandlerMethodFactory;

    public DynamicEndpointRegistrar(BeanFactory beanFactory,
            KafkaListenerContainerFactory<?> containerFactory,
            KafkaListenerEndpointRegistry endpointRegistry,
            MessageHandlerMethodFactory messageHandlerMethodFactory) {
        this.beanFactory = beanFactory;
        this.containerFactory = containerFactory;
        this.endpointRegistry = endpointRegistry;
        this.messageHandlerMethodFactory = messageHandlerMethodFactory;
    }

    public void registerMethodEndpoint(String endpointId, Object bean, Method method,
            Properties consumerProperties, String... topics) throws Exception {
        KafkaListenerEndpointRegistrar registrar = new KafkaListenerEndpointRegistrar();
        registrar.setBeanFactory(beanFactory);
        registrar.setContainerFactory(containerFactory);
        registrar.setEndpointRegistry(endpointRegistry);
        registrar.setMessageHandlerMethodFactory(messageHandlerMethodFactory);

        MethodKafkaListenerEndpoint<Integer, String> endpoint = new MethodKafkaListenerEndpoint<>();
        endpoint.setBeanFactory(beanFactory);
        endpoint.setMessageHandlerMethodFactory(messageHandlerMethodFactory);
        endpoint.setId(endpointId);
        endpoint.setGroupId(consumerProperties.getProperty(ConsumerConfig.GROUP_ID_CONFIG));
        endpoint.setBean(bean);
        endpoint.setMethod(method);
        endpoint.setConsumerProperties(consumerProperties);
        endpoint.setTopics(topics);

        registrar.registerEndpoint(endpoint);
        registrar.afterPropertiesSet();
    }
}
You should then be able to register a listener dynamically. E.g.
DynamicEndpointRegistrar dynamicEndpointRegistrar = ...;
MyConsumer myConsumer = ...; // create an instance of your consumer
Properties properties = ...; // consumer properties
// the method that should be invoked
// (the method that's normally annotated with KafkaListener)
Method method = MyConsumer.class.getDeclaredMethod("consume", String.class);
dynamicEndpointRegistrar.registerMethodEndpoint("endpointId", myConsumer, method, properties, "topic");
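For completeness, MyConsumer in this sketch is just a plain bean exposing the method that would otherwise carry the @KafkaListener annotation; a minimal, hypothetical version could look like this:
public class MyConsumer {

    // the method looked up via getDeclaredMethod("consume", String.class) above
    public void consume(String message) {
        System.out.println("Received: " + message);
    }
}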

How should I define Flink's Schema to read Protocol Buffer data from Pulsar

I am using Pulsar-Flink to read data from Pulsar in Flink. I am having difficulty when the data format is Protocol Buffer.
On the GitHub project page, Pulsar-Flink uses SimpleStringSchema. However, that does not appear to support Protocol Buffer. Does anyone know how to deal with this data format? How should I define the schema?
StreamExecutionEnvironment see = StreamExecutionEnvironment.getExecutionEnvironment();
Properties props = new Properties();
props.setProperty("topic", "test-source-topic");
FlinkPulsarSource<String> source = new FlinkPulsarSource<>(serviceUrl, adminUrl, new SimpleStringSchema(), props);
DataStream<String> stream = see.addSource(source);
// chain operations on dataStream of String and sink the output
// end method chaining
see.execute();
FYI, I am writing Scala code, so an explanation for Scala (not Java) would be really helpful. But of course, any kind of advice is welcome, including Java.
You should implement your own DeserializationSchema. Let's assume that you have a protobuf message Address and have generated the respective Java class. Then the schema should look like the following:
public class ProtoDeserializer implements DeserializationSchema<Address> {

    @Override
    public TypeInformation<Address> getProducedType() {
        return TypeInformation.of(Address.class);
    }

    @Override
    public Address deserialize(byte[] message) throws IOException {
        return Address.parseFrom(message);
    }

    @Override
    public boolean isEndOfStream(Address nextElement) {
        return false;
    }
}
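You can then plug this schema into the source in place of SimpleStringSchema. A minimal sketch, assuming the same constructor shape and variables (serviceUrl, adminUrl, props) as in the question:
// the source now produces Address objects instead of raw strings
FlinkPulsarSource<Address> source = new FlinkPulsarSource<>(serviceUrl, adminUrl, new ProtoDeserializer(), props);
DataStream<Address> stream = see.addSource(source);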

Flink Data Stream Conversion and Expose to REST end point

I have a Spring Boot application integrating with Apache Flink. I want to read data from a Kafka system and expose it via a REST endpoint.
Below is my simple code:
@GetMapping("/details/{personName}")
public String getPersonDetails() throws Exception {
    StreamExecutionEnvironment env = LocalStreamEnvironment.getExecutionEnvironment();
    Properties properties = new Properties();
    properties.setProperty("bootstrap.servers", "localhost:9092");
    properties.setProperty("group.id", "group_id");
    FlinkKafkaConsumer<String> consumer = new FlinkKafkaConsumer<>("test-topic-1",
            new SimpleStringSchema(), properties);
    consumer.setStartFromEarliest();
    DataStream<String> stream = env.addSource(consumer);
    stream.map(new MapFunction<String, String>() {
        private static final long serialVersionUID = 1L;

        @Override
        public String map(String value) throws Exception {
            logger.info(value);
            return value;
        }
    }).print();
    env.execute();
    return "hello world";
}
My problems are as follows.
My Kafka topic returns String values like these:
"id":"1","PersonName":"John","address":"Bristol","weight":"34", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
{"id":"2","PersonName":"Mann","address":"Bristol","weight":"88", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
{"id":"3","PersonName":"Chris","address":"Leeds","weight":"12", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
{"id":"4","PersonName":"John","address":"Bristol","weight":"44", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
{"id":"5","PersonName":"John","address":"NewPort","weight":"26", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
{"id":"6","PersonName":"Mann","address":"Bristol","weight":"89", "country":"UK","timeStamp":"2020-08-08T10:25:42"}
How can I apply filters, convert the result into JSON, and return it? For example, if the input from the REST call is "John", I want to group those records, sum the weight values, and return the result as JSON (only the name and the weight).
Second problem,
I can't stop the execution environment. Are there any alternatives? I checked the Flink documentation but didn't find anything for my situation.
Third problem,
I wanted the environment to be eagerly loaded; I tried using a static block, but that also takes more time.
NFRs:
I have massive data in Kafka, so I want scalability and fast processing.
It sounds like you might need to spend more time reviewing the Flink documentation. But in a nutshell...
Add a MapFunction that parses the string as JSON, extracts the name and weight, and outputs that as a Tuple2<String, Integer> or some custom Java class (a rough sketch of this step follows after these steps).
Do a keyBy(name field), followed by a ProcessFunction that sums the weight and saves it in state.
Use QueryableState to expose the state (the summed weights) to code that's running as part of your program's main() method.
In your main method, implement a REST handler that uses the QueryableStateClient to get the weight for a given name.
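A rough sketch of the first two steps, assuming Jackson's ObjectMapper for the JSON parsing and the field names from the sample records above (PersonName, weight):
DataStream<Tuple2<String, Integer>> nameAndWeight = stream
        .map(new MapFunction<String, Tuple2<String, Integer>>() {
            private transient ObjectMapper mapper;

            @Override
            public Tuple2<String, Integer> map(String value) throws Exception {
                if (mapper == null) {
                    mapper = new ObjectMapper(); // created lazily so the function stays serializable
                }
                JsonNode node = mapper.readTree(value);
                // weight arrives as a string ("34"); asInt() parses it
                return Tuple2.of(node.get("PersonName").asText(), node.get("weight").asInt());
            }
        });

// key the stream by name; a ProcessFunction can then sum the weights into queryable state
KeyedStream<Tuple2<String, Integer>, String> byName = nameAndWeight.keyBy(t -> t.f0);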

apache flink with Kafka: InvalidTypesException

I have following code:
Properties properties = new Properties();
properties.setProperty(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, MyCustomClassDeserializer.class.getName());

FlinkKafkaConsumer<MyCustomClass> kafkaConsumer = new FlinkKafkaConsumer(
        "test-kafka-topic",
        new SimpleStringSchema(),
        properties);

final StreamExecutionEnvironment streamEnv = StreamExecutionEnvironment.getExecutionEnvironment();

DataStream<MyCustomClass> kafkaInputStream = streamEnv.addSource(kafkaConsumer);

DataStream<String> stringStream = kafkaInputStream
        .map(new MapFunction<MyCustomClass, String>() {
            @Override
            public String map(MyCustomClass message) {
                logger.info("--- Received message : " + message.toString());
                return message.toString();
            }
        });

streamEnv.execute("Published messages");
MyCustomClassDeserializer is implemented to convert a byte array to a Java object.
When I run this program locally, I get the error:
Caused by: org.apache.flink.api.common.functions.InvalidTypesException: Input mismatch: Basic type expected.
And I get this for code line:
.map(new MapFunction<MyCustomClass,String>() {
Not sure why I get this?
So, you have a deserializer that returns a POJO, yet you are telling Flink that it should deserialize the record from byte[] to String by using SimpleStringSchema.
See the problem now? :)
I don't think you should use custom Kafka deserializers in FlinkKafkaConsumer in general. What you should aim for instead is to create a custom class that implements Flink's DeserializationSchema. It should be much better in terms of type safety and testability.
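A minimal sketch of that suggestion; the parse call below is a hypothetical placeholder for whatever byte[]-to-MyCustomClass logic the existing Kafka deserializer contains:
public class MyCustomClassSchema implements DeserializationSchema<MyCustomClass> {

    @Override
    public MyCustomClass deserialize(byte[] message) throws IOException {
        // hypothetical helper: reuse the parsing logic from MyCustomClassDeserializer here
        return MyCustomClassParser.parse(message);
    }

    @Override
    public boolean isEndOfStream(MyCustomClass nextElement) {
        return false;
    }

    @Override
    public TypeInformation<MyCustomClass> getProducedType() {
        return TypeInformation.of(MyCustomClass.class);
    }
}
And then, instead of SimpleStringSchema:
FlinkKafkaConsumer<MyCustomClass> kafkaConsumer = new FlinkKafkaConsumer<>(
        "test-kafka-topic",
        new MyCustomClassSchema(),
        properties);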

Apache Kafka : Check existence of message in a Topic

I have a situation where I need to check whether a particular message already exists in a topic or not; I need absolutely no duplicates in the topic.
Can anyone suggest a graceful way of doing this, rather than consuming all the messages and checking against them?
I do not consider myself an expert in Kafka, but I think what you intend to do is "against" the essence of Kafka.
However, I came up with a solution using the Kafka Streams library for Java. Basically, the process is the following:
Map every message into a new key-value where the key is a combination of the previous key and its value: (key1, message1) -> (key1-message1, message1)
Group the messages using the keys; as a result of this operation you obtain a KGroupedStream.
Apply a reduce function, modifying the value to some custom value such as the string "Duplicated value".
Convert the resulting KTable after the reduce into a KStream and push it into a new Kafka Topic.
There are quite a few assumptions in the previous explanation, so I am going to provide some code to shed some light:
KStreamBuilder builder = new KStreamBuilder();
KStream<String, String> resources = builder.stream("topic-where-the-messages-are-sent");

KeyValueMapper<String, String, KeyValue<String, String>> kvMapper = new KeyValueMapper<String, String, KeyValue<String, String>>() {
    public KeyValue<String, String> apply(String key, String value) {
        return new KeyValue<String, String>(key + "-" + value, value);
    }
};

Reducer<String> reducer = new Reducer<String>() {
    public String apply(String value1, String value2) {
        return "Duplicated message";
    }
};

resources.map(kvMapper)
         .groupByKey()
         .reduce(reducer, "test-store-name")
         .toStream()
         .to("unique-message-output");

KafkaStreams streams = new KafkaStreams(builder, props);
streams.start();
Bear in mind that this is probably not an optimal solution, and maybe you would not consider it a "graceful" way of solving your problem.
I hope it helps.