Kafka Streams: Define multiple Kafka Streams using Spring Cloud Stream for each set of topics

I am trying to do a simple POC with Kafka Streams, but I am getting an exception while starting the application. I am using Spring Kafka and Kafka Streams 2.5.1 with Spring Boot 2.3.5.
Kafka Streams configuration:
@Configuration
public class KafkaStreamsConfig {

    private static final Logger log = LoggerFactory.getLogger(KafkaStreamsConfig.class);

    @Bean
    public Function<KStream<String, String>, KStream<String, String>> processAAA() {
        return input -> input.peek((key, value) -> log
                .info("AAA Cloud Stream Kafka Stream processing : {}", input.toString().length()));
    }

    @Bean
    public Function<KStream<String, String>, KStream<String, String>> processBBB() {
        return input -> input.peek((key, value) -> log
                .info("BBB Cloud Stream Kafka Stream processing : {}", input.toString().length()));
    }

    @Bean
    public Function<KStream<String, String>, KStream<String, String>> processCCC() {
        return input -> input.peek((key, value) -> log
                .info("CCC Cloud Stream Kafka Stream processing : {}", input.toString().length()));
    }

    /*
    @Bean
    public KafkaStreams kafkaStreams(KafkaProperties kafkaProperties) {
        final Properties props = new Properties();
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, kafkaProperties.getBootstrapServers());
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "groupId-1");
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, JsonSerde.class);
        props.put(JsonDeserializer.VALUE_DEFAULT_TYPE, JsonNode.class);
        final KafkaStreams kafkaStreams = new KafkaStreams(kafkaStreamTopology(), props);
        kafkaStreams.start();
        return kafkaStreams;
    }

    @Bean
    public Topology kafkaStreamTopology() {
        final StreamsBuilder streamsBuilder = new StreamsBuilder();
        streamsBuilder.stream(Arrays.asList(AAATOPIC, BBBInputTOPIC, CCCInputTOPIC));
        return streamsBuilder.build();
    }
    */
}
The application.yaml is configured as below. The idea is that I have 3 input and 3 output topics; the component takes input from an input topic and sends output to an output topic.
spring:
  application.name: consumerapp-1
  cloud:
    function:
      definition: processAAA;processBBB;processCCC
    stream:
      kafka.binder:
        brokers: 127.0.0.1:9092
        autoCreateTopics: true
        auto-add-partitions: true
      kafka.streams.binder:
        configuration:
          commit.interval.ms: 1000
          default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
          default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
      bindings:
        processAAA-in-0:
          destination: aaaInputTopic
        processAAA-out-0:
          destination: aaaOutputTopic
        processBBB-in-0:
          destination: bbbInputTopic
        processBBB-out-0:
          destination: bbbOutputTopic
        processCCC-in-0:
          destination: cccInputTopic
        processCCC-out-0:
          destination: cccOutputTopic
The exception thrown is:
Caused by: java.lang.IllegalArgumentException: Trying to prepareConsumerBinding public abstract void org.apache.kafka.streams.kstream.KStream.to(java.lang.String,org.apache.kafka.streams.kstream.Produced) but no delegate has been set.
at org.springframework.util.Assert.notNull(Assert.java:201)
at org.springframework.cloud.stream.binder.kafka.streams.KStreamBoundElementFactory$KStreamWrapperHandler.invoke(KStreamBoundElementFactory.java:134)
Can anyone help me with Kafka Streams / Spring Kafka code samples for processing with multiple input and output topics?
Update: 21-Jan-2021
After removing all the kafkaStreams and kafkaStreamTopology bean configuration, I am getting the message below in an infinite loop. Message consumption is still not working. I have checked the subscriptions in application.yaml against the @Bean function definitions; they all look fine to me, but I still get this cross-wiring error. I have replaced the application.properties with the application.yaml above.
2021-01-21 14:12:43,336 WARN org.apache.kafka.clients.consumer.internals.ConsumerCoordinator [consumerapp-1-75eec5e5-2772-4999-acf2-e9ef1e69f100-StreamThread-1] [Consumer clientId=consumerapp-1-75eec5e5-2772-4999-acf2-e9ef1e69f100-StreamThread-1-consumer, groupId=consumerapp-1] We received an assignment [cccParserTopic-0] that doesn't match our current subscription Subscribe(bbbParserTopic); it is likely that the subscription has changed since we joined the group. Will try re-join the group with current subscription

I have managed to solve the problem, and I am writing this for the benefit of others.
If you want to include multiple streams in a single app jar, the key is to define a separate application ID for each of your streams. I knew this all along, but I was not aware of how to define it; the answer is something I managed to dig out of the SCSt documentation. Below is how the application.yaml can be defined:
spring:
  application.name: kafkaMultiStreamConsumer
  cloud:
    function:
      definition: processAAA; processBBB; processCCC  # needed for the imperative @StreamListener style
    stream:
      kafka:
        binder:
          brokers: 127.0.0.1:9092
          min-partition-count: 3
          replication-factor: 2
          transaction:
            transaction-id-prefix: transaction-id-2000
          autoCreateTopics: true
          auto-add-partitions: true
        streams:
          binder:
            functions:
              # needed for the functional style
              processBBB:
                application-id: SampleBBBapplication
              processAAA:
                application-id: SampleAAAapplication
              processCCC:
                application-id: SampleCCCapplication
            configuration:
              commit.interval.ms: 1000
              default.key.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
              default.value.serde: org.apache.kafka.common.serialization.Serdes$StringSerde
      bindings:
        # Below is for imperative-style programming using
        # the @StreamListener and @SendTo annotations in the .java classes.
        inputAAA:
          destination: aaaInputTopic
        outputAAA:
          destination: aaaOutputTopic
        inputBBB:
          destination: bbbInputTopic
        outputBBB:
          destination: bbbOutputTopic
        inputCCC:
          destination: cccInputTopic
        outputCCC:
          destination: cccOutputTopic
        # Functional-style programming using Function<KStream...>. Use either one of the two styles,
        # as both are not required. If you use both it is OK, but only one of them works;
        # from what I have seen, @StreamListener is always the one triggered.
        # Below is the functional style.
        processAAA-in-0:
          destination: aaaInputTopic
          group: processAAA-group
        processAAA-out-0:
          destination: aaaOutputTopic
          group: processAAA-group
        processBBB-in-0:
          destination: bbbInputTopic
          group: processBBB-group
        processBBB-out-0:
          destination: bbbOutputTopic
          group: processBBB-group
        processCCC-in-0:
          destination: cccInputTopic
          group: processCCC-group
        processCCC-out-0:
          destination: cccOutputTopic
          group: processCCC-group
Once the above is defined, we now need to define the individual Java classes where the stream processing logic is implemented.
Your Java class can be something like below; create similar classes for the other 2 (or N) streams as per your requirement. One example is AAASampleStreamTask.java:
@Component
@EnableBinding(AAASampleChannel.class) // One channel interface corresponding to the in-topic and out-topic
public class AAASampleStreamTask {

    private static final Logger log = LoggerFactory.getLogger(AAASampleStreamTask.class);

    @StreamListener(AAASampleChannel.INPUT)
    @SendTo(AAASampleChannel.OUTPUT)
    public KStream<String, String> processAAA(KStream<String, String> input) {
        input.foreach((key, value) -> log.info("Annotation AAA *Sample* Cloud Stream Kafka Stream processing {}", String.valueOf(System.currentTimeMillis())));
        ...
        // do other business logic
        ...
        return input;
    }

    /**
     * Use either the above or the below. The style below is the latest, starting from SCSt 3.0 if I am not
     * wrong. These are 2 different styles of consuming Kafka Streams using SCSt. If we have
     * both, then the above gets priority, as per my observation.
     */
    /*
    @Bean
    public Function<KStream<String, String>, KStream<String, String>> processAAA() {
        return input -> input.peek((key, value) -> log.info(
                "Functional AAA *Sample* Cloud Stream Kafka Stream processing : {}", String.valueOf(System.currentTimeMillis())));
        ...
        // do other business logic
        ...
    }
    */
}
The channel interface is required only if you want to go with the imperative style of programming; it is not needed for the functional style.
AAASampleChannel.java
public interface AAASampleChannel {

    String INPUT = "inputAAA";
    String OUTPUT = "outputAAA";

    @Input(INPUT)
    KStream<String, String> inputAAA();

    @Output(OUTPUT)
    KStream<String, String> outputAAA();
}

Looks like you are mixing Spring Cloud Stream and Spring Kafka in the application. When using the binder, you don't need to directly define components required by Spring Kafka such as KafkaStreams and Topology; they are created by SCSt implicitly. Can you remove the following beans and try again?
@Bean
public KafkaStreams kafkaStreams(KafkaProperties kafkaProperties) {
and
@Bean
public Topology kafkaStreamTopology() {
If you are still facing issues, please share a small reproducible sample so that we can triage it further.

Related

Routing event types (Avro SpecificRecordBase) to the right Consumer from one topic in reactive programming

I use
spring-cloud-stream:3.2.2
spring-cloud-stream-binder-kafka:3.2.5
spring-cloud-stream-binder-kafka-streams:3.2.5
I want to write a Kafka consumer using reactive programming. I work with an Avro schema registry.
In my case I have multiple event types in one topic. My consumer consumes all types, but I want to write one consumer per event type.
In your documentation I found some information concerning routing. In reactive mode I can use routing-expression in application.yml only, but it's not working for me.
Can you help me? I have tried several things, but I can't find why it's not working.
My two consumers consume all event types, not specific ones.
My two consumers:
@Bean
public Consumer<FirstRankPaymentAgreed> testAvroConsumer() {
    return firstRankPaymentAgreed -> {
        log.error("test reception event {} ", firstRankPaymentAgreed.getState().getCustomerOrderId());
    };
}

@Bean
public Consumer<CustomerOrderValidated> devNull() {
    return o -> {
        log.error("devNull ");
    };
}
My application.yml (I tried a lot of simple tests):
spring:
  cloud:
    stream:
      function:
        routing:
          enabled: true
        definition: testAvroConsumer;devNull
        # routing-expression: "'true'.equals('true') ? devNull : testAvroConsumer;"
        # routing-expression: "payload['type'] == 'CustomerOrderValidated' ? devNull : testAvroConsumer;"
      bindings:
        testAvroConsumer-in-0:
          destination: tempo-composer-event
        devNull-in-0:
          destination: tempo-composer-event
      kafka:
        binder:
          brokers: localhost:9092
          auto-create-topics: false
          consumer-properties:
            value:
              subject:
                name:
                  strategy: io.confluent.kafka.serializers.subject.TopicRecordNameStrategy
            key.deserializer: org.apache.kafka.common.serialization.StringDeserializer
            value.deserializer: io.confluent.kafka.serializers.KafkaAvroDeserializer
            schema.registry.url: http://localhost:8081
            specific.avro.reader: true
    function:
      # routing-expression: "'true'.equals('true') ? devNull : testAvroConsumer;"
      # routing-expression: "payload['type'] == 'CustomerOrderValidated' ? devNull : testAvroConsumer;"
      definition: testAvroConsumer;devNull
Routing and reactive don't really mix well.
Unlike imperative functions, which play the role of a message handler (invoked each time there is a message), reactive functions are initialization functions that connect the user's Flux/Mono with the system. They are only invoked once, during the startup of the application. After that the stream is processed by the reactive API, and s-c-stream as a framework plays no additional role (as if it didn't exist in the first place).
So the RoutingFunction mixed with reactive acts as if it were reactive. The expression is evaluated only once, during startup, and from that point the function is selected and the entire stream is forwarded to that function.
Consider changing your functions to imperative.
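With imperative consumers, a minimal routing setup could look like the sketch below (an illustrative assumption based on the Spring Cloud Stream routing documentation, not taken from this thread): when routing is enabled, every message arrives on the functionRouter-in-0 binding and the routing-expression picks the target function per message. The eventType header used here is hypothetical; any header or payload property that identifies the event type would do.
spring:
  cloud:
    stream:
      function:
        routing:
          enabled: true
      bindings:
        functionRouter-in-0:
          destination: tempo-composer-event
    function:
      routing-expression: "headers['eventType'] == 'CustomerOrderValidated' ? 'devNull' : 'testAvroConsumer'"
With this in place, testAvroConsumer and devNull stay plain imperative Consumer beans, and the expression is evaluated for each incoming message rather than once at startup.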
I found an ugly solution, but it works. I have an interface EventHandler<?>; all handler classes implement this interface, and each handler fixes the generic type to the right Avro type. I then ask Spring to find the right handler:
var beanNames = context.getBeanNamesForType(
        ResolvableType.forClassWithGenerics(EventHandler.class, message.getPayload().getClass()));
if (beanNames.length > 0) {
    var bean = (EventHandler) context.getBean(beanNames[0]);
    // ... dispatch the payload to the matched handler (the rest of the snippet was not shown in the post)
}

How to compare two Kafka tables?

Kafka Stream Config
@Configuration
@EnableKafkaStreams
@EnableKafka
public class KafkaConfig {

    @Bean(name = KafkaStreamsDefaultConfiguration.DEFAULT_STREAMS_CONFIG_BEAN_NAME)
    KafkaStreamsConfiguration kstreamConfig() {
        Map<String, Object> props = new HashMap<>();
        props.put(APPLICATION_ID_CONFIG, "MY_GROUP_ID");
        props.put(BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        props.put(DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass().getName());
        return new KafkaStreamsConfiguration(props);
    }
}
// My logic
KTable<String, String> table1 = streamsBuilder.table("TOPIC1");
KTable<String, String> table2 = streamsBuilder.table("TOPIC2");
Now I have two tables. I need to compare them, and the data that is not present in table1 should be sent to another Kafka topic.
Call StreamsBuilderFactoryBean#getKafkaStreams() to get a KafkaStreams instance.
Then call streams.store(StoreQueryParameters.fromNameAndType("<store-name>", QueryableStoreTypes.keyValueStore())) to get both stores, which you can iterate and compare using all().
For any differences you want to send to another topic, use a KafkaTemplate producer.
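A rough sketch of that approach, assuming the two tables are materialized as named stores and use String serdes throughout (the store names, the target topic, and the class below are illustrative, not from the original answer):
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.KeyValue;
import org.apache.kafka.streams.StoreQueryParameters;
import org.apache.kafka.streams.state.KeyValueIterator;
import org.apache.kafka.streams.state.QueryableStoreTypes;
import org.apache.kafka.streams.state.ReadOnlyKeyValueStore;
import org.springframework.kafka.config.StreamsBuilderFactoryBean;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.stereotype.Component;

@Component
public class TableDiffPublisher {

    private final StreamsBuilderFactoryBean factoryBean;
    private final KafkaTemplate<String, String> kafkaTemplate;

    public TableDiffPublisher(StreamsBuilderFactoryBean factoryBean, KafkaTemplate<String, String> kafkaTemplate) {
        this.factoryBean = factoryBean;
        this.kafkaTemplate = kafkaTemplate;
    }

    // Assumes the tables were materialized with explicit store names, e.g.
    // streamsBuilder.table("TOPIC1", Materialized.as("table1-store")) and
    // streamsBuilder.table("TOPIC2", Materialized.as("table2-store")).
    public void publishMissingEntries() {
        KafkaStreams streams = factoryBean.getKafkaStreams();
        ReadOnlyKeyValueStore<String, String> table1 = streams.store(
                StoreQueryParameters.fromNameAndType("table1-store", QueryableStoreTypes.keyValueStore()));
        ReadOnlyKeyValueStore<String, String> table2 = streams.store(
                StoreQueryParameters.fromNameAndType("table2-store", QueryableStoreTypes.keyValueStore()));

        // Iterate table2 and forward every entry whose key is missing from table1.
        try (KeyValueIterator<String, String> iterator = table2.all()) {
            while (iterator.hasNext()) {
                KeyValue<String, String> entry = iterator.next();
                if (table1.get(entry.key) == null) {
                    kafkaTemplate.send("table-diff-topic", entry.key, entry.value);
                }
            }
        }
    }
}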

Kafka Streams with Spring Cloud Stream Kafka: join and selectKey

Could you please help me configure a Spring Cloud Stream app based on Kafka? I'm facing an issue with the selectKey operation.
Let me explain what I'm trying to achieve:
2 incoming topics: Person and RefGenre.
Person contains the key of RefGenre (in its value):
public class Person {
    String nom;
    String prenom;
    String codeGenre; // <-- here is the key of the second topic, RefGenre
}
So I'm using the selectKey operator to prepare my stream before the join operation.
A new topic is created by selectKey (my-app-KSTREAM-KEY-SELECT-0000000004-repartition), and then a serialization issue happens:
Exception in thread "my-app-3c57b31c-28e5-4199-b07d-87f8940425ab-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: ClassCastException while producing data to topic my-app-KSTREAM-KEY-SELECT-0000000004-repartition. A serializer (key: org.apache.kafka.common.serialization.StringSerializer / value: statefull.serde.PersonWithGenreSerde) is not compatible to the actual key or value type (key type: java.lang.String / value type: statefull.model.Person). Change the default Serdes in StreamConfig or provide correct Serdes via method parameters (for example if using the DSL, #to(String topic, Produced<K, V> produced) with Produced.keySerde(WindowedSerdes.timeWindowedSerdeFrom(String.class))).
Where can I specify the serde for this repartition topic, and can I specify the name of this "internal" topic?
@Bean
public BiFunction<KStream<String, Person>, KTable<String, ReferentielGenre>, KStream<Long, PersonWithGenre>> joinKtable() {
    return (persons, referentielGenres) ->
            persons.selectKey((k, v) -> v.getCodeGenre())
                    .join(referentielGenres,
                            (person, genre) -> new PersonWithGenre(person.getNom(), person.getPrenom(), genre),
                            Joined.with(Serdes.String(), new PersonWithGenreSerde(), null));
}
here is the full code of my not working job : https://github.com/YohanAlard/joinkstream
Is there a better way to handle this use case?
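A hedged sketch of one possible fix, assuming codeGenre is a String and that a PersonSerde exists for the Person type (both assumptions, not confirmed in the thread): the serdes passed to Joined.with are the ones applied to the repartition topic created for the join, so the value serde there must match the left stream's value type (Person, not the join result), and the optional name argument controls how the internal topic is named.
@Bean
public BiFunction<KStream<String, Person>, KTable<String, ReferentielGenre>, KStream<String, PersonWithGenre>> joinKtable() {
    return (persons, referentielGenres) -> persons
            .selectKey((k, v) -> v.getCodeGenre())
            .join(referentielGenres,
                    (person, genre) -> new PersonWithGenre(person.getNom(), person.getPrenom(), genre),
                    // Key/value serdes here describe the stream *before* the join (String key, Person value),
                    // which is what gets written to the repartition topic; "person-by-genre" becomes part of
                    // the repartition topic name instead of KSTREAM-KEY-SELECT-....
                    Joined.with(Serdes.String(), new PersonSerde(), null, "person-by-genre"));
}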

Using Kafka with Micronaut

Are there any example projects showing how to use Kafka with Micronaut? I am having problems getting it to work.
I have the following producer:
@KafkaClient
interface AppClient {

    @Topic("topic-name")
    void sendMessage(@KafkaKey String id, Event event)
}
and listener:
@KafkaListener(
    groupId = "group-id",
    offsetReset = OffsetReset.EARLIEST
)
class AppListener {

    @Topic("topic-name")
    void onMessage(Event event) {
        // do stuff
    }
}
My application.yml contains:
kafka:
  bootstrap:
    servers: localhost:2181
and application-test.yml (is this right, and should it be in the same directory as application.yml? I am also unsure how the embedded server should be used):
kafka:
  # embedded:
  #   enabled: true
  #   topics: promo-api-promotions
  bootstrap:
    servers: localhost:9092
My test looks like:
@MicronautTest
class AppSpec extends Specification {

    @Shared
    @AutoCleanup
    EmbeddedServer server = ApplicationContext.run(EmbeddedServer)

    @Shared
    private AppClient appClient = server.applicationContext.getBean(AppClient)

    def 'The upload endpoint is called'() {
        // test here
        appClient.sendMessage(id, event)
        // other test stuff
    }
}
The main problems I am having are:
1. My consumer is not consuming from my topic. I can see that the producer creates the topic in Kafka and that the client group is created, but the offset stays at 0.
2. When the test starts up, it looks as if two instances of the client are created and therefore the MBean registration fails (also, if I try to use the embedded Kafka, I get a different message about port 9092 already being in use, because it tries to start the server up twice):
javax.management.InstanceAlreadyExistsException:
kafka.consumer:type=app-info,id=app-kafka-client-app-listener
at com.sun.jmx.mbeanserver.Repository.addMBean(Repository.java:437)
at com.sun.jmx.interceptor.DefaultMBeanServerInterceptor.registerWithRepository(DefaultMBeanServerInterceptor.java:1898)
I managed to fix the second problem: the object passed into the listener did not have a @JsonCreator. I found this out by using the Jackson object mapper to construct the object from its JSON while playing around.
If anyone else has the same problem: make sure that the object model works with Jackson before going any further!
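For illustration, a listener payload that Jackson can rebuild might look like the sketch below (the class and field names are assumptions; the point is the @JsonCreator/@JsonProperty constructor):
import com.fasterxml.jackson.annotation.JsonCreator;
import com.fasterxml.jackson.annotation.JsonProperty;

public class Event {

    private final String id;
    private final String type;

    // Jackson needs an annotated constructor (or a no-arg constructor plus setters)
    // to deserialize the JSON the listener receives back into an Event.
    @JsonCreator
    public Event(@JsonProperty("id") String id, @JsonProperty("type") String type) {
        this.id = id;
        this.type = type;
    }

    public String getId() { return id; }

    public String getType() { return type; }
}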
You should add the embedded configuration kafka.embedded.enabled to a configuration map and pass it to the ApplicationContext.run method.
Map<String, Object> config = Collections.unmodifiableMap(new HashMap<String, Object>() {
    {
        put(AbstractKafkaConfiguration.EMBEDDED, true);
        put(AbstractKafkaConfiguration.EMBEDDED_TOPICS, "test_topic");
    }
});

try (ApplicationContext ctx = ApplicationContext.run(config)) {
The consumer consumes from Kafka on another thread, so you have to wait for a while until your AppListener catches up.
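A minimal sketch of such a wait, assuming Awaitility 4.x is on the test classpath and that AppListener collects what it receives into a list named received (both assumptions; shown in Java syntax, a Groovy closure works the same way in the Spock test above):
// Poll until the listener thread has actually processed the event, instead of asserting immediately.
Awaitility.await()
        .atMost(Duration.ofSeconds(10))
        .until(() -> !appListener.received.isEmpty());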
You can see a short example in KafkaProducerListenerTest
Remember the Kafka dependencies described in the Micronaut doc: Embedding Kafka

Spring Kafka - access offsetsForTimes to start consuming from specific offset

I have a fairly straightforward Kafka consumer:
MessageListener<String, T> messageListener = record -> {
    doStuff(record.value());
};
startConsumer(messageListener);

protected void startConsumer(MessageListener<String, T> messageListener) {
    ConcurrentMessageListenerContainer<String, T> container = new ConcurrentMessageListenerContainer<>(
            consumerFactory(this.brokerAddress, this.groupId),
            containerProperties(this.topic, messageListener));
    container.start();
}
I can consume messages without any issue.
Now, I have the requirement to seek from a specific offset based on the result of a call to offsetsForTimes on the Kafka Consumer.
I understand that I can seek to a certain position using the ConsumerSeekAware interface:
@Override
public void onPartitionsAssigned(Map<TopicPartition, Long> assignments,
        ConsumerSeekCallback callback) {
    assignments.forEach((t, o) -> callback.seek(t.topic(), t.partition(), ?????));
}
The problem now, is that I do not have access to the Kafka Consumer inside the callback, therefore I have no way to call offsetsForTimes.
Is there any other way to achieve this?
Use a ConsumerAwareRebalanceListener to do the initial seeks (introduced in 2.0).
The current version is 2.2.0.
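A rough sketch of that approach, set on the ContainerProperties built by containerProperties(...) above before the container is started (the one-hour lookback and the surrounding setup are illustrative assumptions; offsetsForTimes can return null for partitions with no matching record, hence the null check):
containerProperties.setConsumerRebalanceListener(new ConsumerAwareRebalanceListener() {

    @Override
    public void onPartitionsAssigned(Consumer<?, ?> consumer, Collection<TopicPartition> partitions) {
        // For every newly assigned partition, find the first offset with a timestamp >= startTimestamp
        // and seek there before the container starts delivering records.
        long startTimestamp = Instant.now().minus(Duration.ofHours(1)).toEpochMilli();
        Map<TopicPartition, Long> request = partitions.stream()
                .collect(Collectors.toMap(tp -> tp, tp -> startTimestamp));
        consumer.offsetsForTimes(request).forEach((tp, offsetAndTimestamp) -> {
            if (offsetAndTimestamp != null) {
                consumer.seek(tp, offsetAndTimestamp.offset());
            }
        });
    }
});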
How to test a ConsumerAwareRebalanceListener?