I have the following setup in application.yml for my Spring Cloud Stream Kafka application:
spring:
  cloud:
    function:
      definition: userBinding
    stream:
      kafka:
        binder:
          broker: localhost:9092
          replicationFactor: 1
        bindings:
          userBinding-in-0:
            destination: user
and the following consumer function to be called:
@Bean
public Consumer<Message<UserModel>> userBinding() {
    return message -> {
        System.out.println("Received: " + message);
    };
}
For some reason, as soon as I start the application, it automatically creates a new topic named userBinding-in-0 instead of consuming messages from the user topic!
Can you please let me know what I'm missing or setting up incorrectly?
The destination is a common property (not Kafka-specific) and shouldn't be under the kafka node.
spring:
  cloud:
    stream:
      bindings:
        userBinding-in-0:
          destination: user
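For completeness, here is a minimal sketch of the full file with that change applied: the common destination lives under spring.cloud.stream.bindings, while Kafka-specific binder settings stay under spring.cloud.stream.kafka.binder (note that the documented binder property is brokers, plural; the values simply mirror the question).
spring:
  cloud:
    function:
      definition: userBinding
    stream:
      bindings:
        userBinding-in-0:
          destination: user
      kafka:
        binder:
          brokers: localhost:9092
          replicationFactor: 1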
In my application, Kafka messages are consumed from multiple topics via @StreamListener, but I want to add a functional consumer bean for one topic. Is it possible to consume some topics with @StreamListener and others with a functional consumer bean in the same application?
When I tried, topics were created only for the @StreamListener bindings, not for the functional consumer bean.
application.yml
spring:
  cloud:
    function:
      definition: printString
    stream:
      kafka:
        binder:
          brokers: localhost:9092
          zkNodes: localhost:2181
          autoCreateTopics: true
          autoAddPartitions: true
      bindings:
        printString-in-0:
          destination: function-input-topic
          group: abc
          consumer:
            maxAttempts: 1
            partitioned: true
            concurrency: 2
        xyz-input:
          group: abc
          destination: xyz-input-input
          consumer:
            maxAttempts: 1
            partitioned: true
            concurrency: 2
A topic was created for the xyz-input binding but not for function-input-topic.
consumer bean
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.stereotype.Component;

@Component
public class xyz {

    @Bean
    Consumer<String> printString() {
        return System.out::print;
    }
}
KafkaConfig interface
public interface KafkaConfig {

    @Input("xyz-input")
    SubscribableChannel inbound();
}
No, we tried that in the past, but there are subtle issues when the two programming models collide. It is quite simple to refactor any StreamListener into the functional style (mostly by removing code), and the app actually becomes much simpler.
Just get rid of the KafkaConfig interface altogether, get rid of @EnableBinding, and you should be fine.
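As a rough sketch of that refactoring (the xyzInput function name and its binding name are illustrative, not taken from the original code), the @Input("xyz-input") channel becomes a second functional bean alongside printString:
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class FunctionalConsumers {

    // Same consumer as in the question
    @Bean
    public Consumer<String> printString() {
        return System.out::print;
    }

    // Replaces the KafkaConfig @Input("xyz-input") channel (name is illustrative)
    @Bean
    public Consumer<String> xyzInput() {
        return payload -> System.out.println("xyz-input received: " + payload);
    }
}
Both functions would then be declared with spring.cloud.function.definition: printString;xyzInput, and the second binding mapped with spring.cloud.stream.bindings.xyzInput-in-0.destination: xyz-input-input.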
My question is how to manage multiple instances with Spring Cloud Stream Kafka.
Let me explain: in a Spring Cloud Stream microservices context (Eureka, Config Server, Kafka), I want to have 2 instances of the same microservice. When I change a configuration in my Git repository, the Config Server (via a webhook) pushes a message into the Kafka topic.
If I use the same group-id in my microservice, only one of the two instances will receive the notification and reload its Spring context.
But I need to refresh all instances...
So, to do that, I have configured a unique group-id: ${spring.application.name}.bus.${hostname}
It works well, but the problem is that each time I start a new instance of my service, it creates a new consumer group in Kafka. Now I have a lot of unused consumer groups.
(Screenshot of the Kafka consumer groups for one microservice: https://i.stack.imgur.com/6jIzx.png)
Here is the Spring Cloud Stream configuration of my service:
spring:
  cloud:
    bus:
      destination: sys.spring-cloud-bus.refresh
      enabled: true
      refresh:
        enabled: true
      env:
        enabled: true
      trace:
        enabled: false
    stream:
      bindings:
        # Override spring cloud bus configuration with a specific binder named "bus"
        springCloudBusInput:
          binder: bus
          destination: sys.spring-cloud-bus.refresh
          content-type: application/json
          group: ${spring.application.name}.bus.${hostname}
        springCloudBusOutput:
          binder: bus
          destination: sys.spring-cloud-bus.refresh
          content-type: application/json
          group: ${spring.application.name}.bus.${hostname}
      binders:
        bus:
          type: kafka
          defaultCandidate: false
          environment:
            spring:
              cloud:
                stream:
                  kafka:
                    binder:
                      brokers: kafka-dev.hcuge.ch:9092
      kafka:
        streams:
          bindings:
            springCloudBusInput:
              consumer:
                startOffset: latest # Reset offset to the latest value to avoid consuming configserver notifications on startup
                resetOffsets: true
How can I avoid creating so many consumer groups? Should I remove old consumer groups in Kafka?
I think my solution is not the best way to do it, so if you have a better option, I'm interested ;)
Thank you
If you don't provide a group, bus will use a random group anyway.
The broker will eventually remove the unused groups according to its offsets.retention.minutes property (currently 7 days by default).
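So, assuming the goal is simply that every instance receives the refresh event, a minimal sketch would be to drop the group property from the bus bindings and let each instance join an auto-generated anonymous group, which the broker then cleans up per offsets.retention.minutes:
spring:
  cloud:
    stream:
      bindings:
        springCloudBusInput:
          binder: bus
          destination: sys.spring-cloud-bus.refresh
          content-type: application/json
          # no group: each instance gets its own anonymous consumer group,
          # so every instance receives the refresh notification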
I need some help integrating Kafka with Spring Cloud Stream. The application is very simple, with 2 parts (run as separate Java processes):
A consumer - puts a request into RequestTopic and gets a response from ResponseTopic
A producer - gets the request from RequestTopic and puts the response back into ResponseTopic
I have created RequestSenderChannel and ResponseReceiverChannel interfaces for the consumer, and RequestReceiverChannel and ResponseSenderChannel for the producer application. Both of them share the same YAML file.
As per the documentation, spring.cloud.stream.bindings.<binding-name>.destination should specify the topic to which the message is sent or from which it is received.
But when I run the application, it creates topics named 'RequestSender', 'RequestReceiver', 'ResponseSender' and 'ResponseReceiver' in Kafka.
My assumption was: since destination in the YAML file specifies only two topics, 'RequestTopic' and 'ResponseTopic', it should have created those topics.
But it creates Kafka topics for the attributes specified at 'spring.cloud.stream.bindings' in the YAML file.
Can someone please point out the issue in the configuration/code?
public interface RequestReceiverChannel {
    String requestReceiver = "RequestReceiver";

    @Input(requestReceiver)
    SubscribableChannel pathQueryRequest();
}

public interface RequestSenderChannel {
    String RequestSender = "RequestSender";

    @Output(RequestSender)
    MessageChannel pathQueryRequestSender();
}

public interface ResponseReceiverChannel {
    String ResponseReceiver = "ResponseReceiver";

    @Input(ResponseReceiver)
    SubscribableChannel pceResponseServiceReceiver();
}

public interface ResponseSenderChannel {
    String ResponseSender = "ResponseSender";

    @Output(ResponseSender)
    MessageChannel pceResponseService();
}
The YAML configuration file
spring:
  cloud:
    stream:
      defaultBinder: kafka
      bindings:
        RequestSender:
          binder: kafka
          destination: RequestTopic
          content-type: application/protobuf
          group: consumergroup
        ResponseSender:
          binder: kafka
          destination: ResponseTopic
          content-type: application/protobuf
          group: consumergroup
        RequestReceiver:
          binder: kafka
          destination: RequestTopic
          content-type: application/protobuf
          group: consumergroup
        ResponseReceiver:
          binder: kafka
          destination: ResponseTopic
          content-type: application/protobuf
          group: consumergroup
      kafka:
        bindings:
          RequestTopic:
            consumer:
              autoCommitOffset: false
          ResponseTopic:
            consumer:
              autoCommitOffset: false
        binder:
          brokers: ${SERVICE_KAFKA_HOST:localhost}
          zkNodes: ${SERVICE_ZOOKEEPER_HOST:127.0.0.1}
          defaultZkPort: ${SERVICE_ZOOKEEPER_PORT:2181}
          defaultBrokerPort: ${SERVICE_KAFKA_PORT:9092}
By doing spring.cloud.stream.bindings.<binding-name>.destination=foo you are expressing a desire to map the binding specified by <binding-name> (e.g., RequestSender) to a broker destination named foo. If such a destination does not exist, it will be auto-provisioned.
So there are no issues.
That said, we've just released Horsham.RELEASE (part of the Hoxton.RELEASE release train), and we are moving away from the annotation-based model you are currently using in favor of a significantly simpler functional model. You can read more about it in our release blog, which also provides links to four posts where we elaborate and provide more examples of the functional programming paradigm.
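Purely as an illustration of that functional model (the processRequest name and String payloads are hypothetical, not taken from the code in the question), the producer side of this request/response flow could be expressed as a single Function bean:
import java.util.function.Function;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class RequestProcessor {

    // Consumes from the topic bound to processRequest-in-0 (e.g. RequestTopic)
    // and publishes the returned value to processRequest-out-0 (e.g. ResponseTopic).
    @Bean
    public Function<String, String> processRequest() {
        return request -> "processed: " + request;
    }
}
The bindings would then be configured with spring.cloud.function.definition: processRequest, spring.cloud.stream.bindings.processRequest-in-0.destination: RequestTopic and spring.cloud.stream.bindings.processRequest-out-0.destination: ResponseTopic.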
Spring Cloud Stream Kafka, KTable as input not working
Sink.java
public interface EventSink {

    @Input("inputTable")
    KTable<?, ?> inputTable();
}
MessageReceiver.java
@EnableBinding(EventSink.class)
public class MessageReceiver {

    @StreamListener
    public void process(@Input("inputTable") KTable<String, Event> kTable) {
        // The code below is just for illustration; I need to do a lot of things after getting this KTable
        kTable.toStream()
              .foreach((key, value) -> System.out.println(value));
    }
}
application.yml
server:
  port: 8083
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            application-id: kafka-stream-demo
            configuration:
              default:
                key:
                  serde: org.apache.kafka.common.serialization.Serdes$StringSerde
                value:
                  serde: org.springframework.kafka.support.serializer.JsonSerde
          bindings:
            inputTable:
              materialized-as: event_store
        binder:
          brokers: localhost:9092
      bindings:
        inputTable:
          destination: nscevent
          group: nsceventGroup
I'm getting the error below:
Exception in thread "kafka-stream-demo-1e64cf93-de19-4185-bee4-8fc882275010-StreamThread-1" org.apache.kafka.streams.errors.StreamsException: Deserialization exception handler is set to fail upon a deserialization error. If you would rather have the streaming pipeline continue after a deserialization error, please set the default.deserialization.exception.handler appropriately.
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:80)
at org.apache.kafka.streams.processor.internals.RecordQueue.addRawRecords(RecordQueue.java:97)
at org.apache.kafka.streams.processor.internals.PartitionGroup.addRawRecords(PartitionGroup.java:117)
at org.apache.kafka.streams.processor.internals.StreamTask.addRecords(StreamTask.java:677)
at org.apache.kafka.streams.processor.internals.StreamThread.addRecordsToTasks(StreamThread.java:943)
at org.apache.kafka.streams.processor.internals.StreamThread.runOnce(StreamThread.java:831)
at org.apache.kafka.streams.processor.internals.StreamThread.runLoop(StreamThread.java:767)
at org.apache.kafka.streams.processor.internals.StreamThread.run(StreamThread.java:736)
Caused by: java.lang.IllegalStateException: No type information in headers and no default type provided
at org.springframework.util.Assert.state(Assert.java:73)
at org.springframework.kafka.support.serializer.JsonDeserializer.deserialize(JsonDeserializer.java:370)
at org.apache.kafka.streams.processor.internals.SourceNode.deserializeValue(SourceNode.java:63)
at org.apache.kafka.streams.processor.internals.RecordDeserializer.deserialize(RecordDeserializer.java:66)
... 7 more
Can somebody please advise what the issue is? With KStream as input it works, but not with KTable.
Thanks in advance
KTable is always converted using the native Serde feature of Kafka Streams. Framework-level conversion is not done on KTable (although there is an issue open to add it). Since you are using a custom type for the value, you need to specify a proper Serde instead of using the default String serde. You can add these to the configuration:
spring.cloud.stream.kafka.streams.binder.configuration:
  default.value.serde: org.springframework.kafka.support.serializer.JsonSerde
  spring.json.value.default.type: RawAccounting
See also: KTable does not auto-convert at the input channel.
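Mapped into the application.yml layout used in the question, that would look roughly like this (com.example.Event is a placeholder; substitute the fully qualified class name of your Event value type):
spring:
  cloud:
    stream:
      kafka:
        streams:
          binder:
            configuration:
              default:
                value:
                  serde: org.springframework.kafka.support.serializer.JsonSerde
              spring:
                json:
                  value:
                    default:
                      type: com.example.Event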
I've built a producer Spring Cloud Stream app with Kafka as the binder. Here is the application.yml:
spring:
  cloud:
    stream:
      instanceCount: 1
      bindings:
        output:
          destination: topic-sink
          producer:
            partitionSelectorClass: com.partition.CustomPartition
            partitionCount: 1
...
I have two instances (the same app running on a single JVM) as consumers. Here is the application.yml:
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: topic-sink
          group: hdfs-sink
          consumer:
            partitioned: true
...
My understanding of Kafka groups is that messages will be consumed only once by consumers in the same group. Let's say the producer app produces messages A, B and C, and there are two consumer apps in the same group; message A will be read by consumer 1 and messages B and C will be read by consumer 2. However, my consumers are consuming the same messages. Are my assumptions wrong?
I got the solution, thanks Arek. This is for 1 partition and 1 consumer.
I'm sharing the solution for the producer/consumer in a Spring Cloud Stream app.
Producer:
spring:
  cloud:
    stream:
      instanceCount: 1
      bindings:
        output:
          destination: topic-sink
          producer:
            partitionSelectorClass: com.partition.CustomPartition
            partitionCount: 1
Consumer:
spring:
  cloud:
    stream:
      instanceIndex: 0 # between 0 and instanceCount - 1
      instanceCount: 1
      bindings:
        input:
          destination: topic-sink
          group: hdfs-sink
          consumer:
            partitioned: true
      kafka:
        binder:
          autoAddPartitions: true
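As a follow-up sketch (not part of the original answer): to actually run two partitioned consumer instances, the producer would declare partitionCount: 2, both consumers would set instanceCount: 2, and each instance would get its own instanceIndex, for example:
# consumer instance 1
spring:
  cloud:
    stream:
      instanceIndex: 0
      instanceCount: 2
      bindings:
        input:
          destination: topic-sink
          group: hdfs-sink
          consumer:
            partitioned: true
# consumer instance 2 uses the same configuration with instanceIndex: 1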