In a Spring Boot application deployed as part of a stream, I use properties from a YAML file:
spring:
  kafka:
    producer:
      retries: 8
Is it possible to show and manage these properties in the UI of Spring Cloud Data Flow, like this:
The binder-specific properties are not currently exposed, as Spring Cloud Data Flow isn't aware of (and doesn't need to be aware of) the underlying middleware in use.
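That said, properties SCDF doesn't surface in the UI can still be forwarded to the application as plain deployment properties. A sketch, assuming the stream is named mystream and the app is named mysink (both placeholders):

```
stream deploy mystream --properties "app.mysink.spring.kafka.producer.retries=8"
```

SCDF passes app.&lt;app-name&gt;.* properties through to the corresponding application at deployment time; they just won't appear as first-class fields in the UI.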
Related
We have a Spring Boot microservice that, as well as exposing HTTP endpoints, uses Spring Cloud Bus to pick up refresh events (from RabbitMQ) and also has a Spring Cloud Stream sink that picks up custom messages from another RabbitMQ topic.
After updating to Spring Boot 2.4.1 and Spring Cloud 2020.0.0 everything seemed to be working until we discovered Spring Cloud Bus was no longer picking up events.
Looking into this, it turned out some of the Spring Cloud Bus internal channels were not getting created.
This wasn't happening in another service that didn't have the stream functionality, so we tried disabling the stream sink, and the bus functionality then started working.
So there was obviously some sort of interference between the old-style stream model and the newer Spring Cloud Bus.
After updating our sink to use the new functional model I still had issues, and eventually got both to work by including the following lines in our application.yml:
spring:
  cloud:
    stream:
      bindings.mySink-in-0.destination: mytopic
      function.definition: busConsumer;mySink
So I have the following questions:
Did I miss something or should there be better documentation on how stream / bus can affect each other and the migration to 2020.0.0?
Does my current configuration look correct?
It doesn't seem right to have to include busConsumer here; shouldn't its auto-configuration be able to combine it with any other stream configuration?
What's the difference between spring.cloud.stream.function.definition and spring.cloud.function.definition? I've seen both in the documentation, and Spring Cloud Bus also seems to set spring.cloud.function.definition=busConsumer.
In org.springframework.cloud.stream.function.FunctionConfiguration, it searches for @EnableBinding:
if (ObjectUtils.isEmpty(applicationContext.getBeanNamesForAnnotation(EnableBinding.class)))
If the annotation is found, functional binding is disabled and the following is logged:
logger.info("Functional binding is disabled due to the presense of #EnableBinding annotation in your configuration");
After the upgrade, we needed to convert our listener classes to the functional model in order to activate functional binding. Once that was done, the cloud bus consumer binding was created too.
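A minimal sketch of such a conversion, assuming Spring Cloud Stream 3.x on the classpath; the bean name mySink and the String payload type are placeholders that must match the binding name (mySink-in-0) and function.definition entry used in application.yml:

```java
import java.util.function.Consumer;

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class MySinkConfiguration {

    // Functional replacement for an old @EnableBinding/@StreamListener sink.
    // Spring Cloud Stream binds this bean to the mySink-in-0 input channel.
    @Bean
    public Consumer<String> mySink() {
        return payload -> System.out.println("Received: " + payload);
    }
}
```

With the functional model, the framework derives the binding name from the bean name, which is why the yaml above references mySink-in-0 rather than a channel declared in an interface.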
Has anyone tried using both Spring Cloud Function and Spring Cloud Stream together? Is there any reason this shouldn't be done? We currently use Spring Cloud Function, but there are certain cases where we need synchronous Kafka publishing, and it seems the only way to do that is with Spring Cloud Stream.
Thanks, Anne
Using Spring for Apache Kafka or Spring AMQP, I can achieve message pub/sub. Spring Cloud Bus uses Kafka/RabbitMQ to do approximately the same thing; what's the difference between them?
Spring Cloud Bus is an abstraction built on top of Spring Cloud Stream (and hence Kafka and RabbitMQ). It is not general purpose, but is built for sending administrative commands to multiple nodes of a service at once, for example sending a refresh (from Spring Cloud Commons) to all nodes of the user service. There is only one channel, whereas in Spring Cloud Stream there are many. Think of it as a distributed Spring Boot Actuator.
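As an illustration, with the busrefresh actuator endpoint exposed (it was named bus-refresh before Spring Cloud 2020.0), a single POST to any one node triggers a refresh on every node listening on the bus; the host and port below are placeholders:

```
curl -X POST http://localhost:8080/actuator/busrefresh
```

With plain Spring for Apache Kafka or Spring AMQP you would have to build this fan-out and the command semantics yourself.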
I have an application already implemented using Spring Cloud Stream (SCS) with 3 components: 1 source @EnableBinding(Source.class), 1 processor @EnableBinding(Processor.class), and 1 sink @EnableBinding(Sink.class), which communicate using the Apache Kafka binder.
As part of the configuration for these components, I'm using several properties from Spring Cloud Stream, such as the topics to use, the number of partitions, the serializers, the max poll, etc.:
spring:
  application:
    name: myapp
  cloud:
    stream:
      bindings:
        output:
          destination: topic1
          producer:
            partitionCount: 5
            partitionKeyExpression: headers.kafka_messageKey
      kafka:
        binder:
          brokers: 10.138.128.62
          defaultBrokerPort: 9092
          zkNodes: 10.138.128.62
          defaultZkPort: 2181
          requiredAcks: -1
          replicationFactor: 1
          autoCreateTopics: true
          autoAddPartitions: true
        bindings:
          output:
            producer:
              configuration:
                key.serializer: org.apache.kafka.common.serialization.StringSerializer
                value.serializer: org.apache.kafka.common.serialization.ByteArraySerializer
All these properties are defined in an external 'application.yml' file that I point to when running the component:
java -jar mycomponent.jar --spring.config.location=/conf/application.yml
Currently, I orchestrate those 3 components "manually", but I would like to use Spring Cloud Data Flow (SCDF) to create a stream and be able to operate them much better.
Based on the SCDF documentation, any SCS application can be used straightforwardly as an application in a stream definition. Properties for the application can also be provided through an external properties file. However, when I provide my 'application.yml' properties file, it doesn't work:
stream deploy --name mystream --definition "mysource | myprocessor | mysink" --deploy --propertiesFile /conf/application.yml
After some research, I realized that the documentation states that any property for any application must be passed in this format:
app.<app-name>.<property-name>=<value>
So I have some questions:
Do I have to add that "app." prefix to all my existing properties?
Is there any way I can provide something like "--spring.config.location" to my application in SCDF?
If I already provide a "spring.application.name" property in the application.yml, how does it impact SCDF, as I also provide an application name when defining the stream?
If I already provide a "server.port" property in the application.yml, how does it impact SCDF? Will SCDF pick it as the port to use for the application or will it just ignore it?
Thanks in advance for your support.
Do I have to add that "app." to all my existing properties?
Yes. You can have something like this:
app:
  app-name:
    spring:
      cloud:
        ...
Is there any way I can provide something like "--spring.config.location" to my application in SCDF?
For a deployed stream, only --propertiesFile can provide properties at runtime. But you can still use application-specific properties like:
stream deploy mystream --properties "app.*.spring.config.location=configFile"
or different config files for each app, using the app.<app-name> prefix.
This way, all the apps that are deployed would get these properties.
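For instance, each app in the stream can point at its own config file; the app names and paths below are placeholders:

```
stream deploy mystream --properties "app.mysource.spring.config.location=/conf/source.yml,app.mysink.spring.config.location=/conf/sink.yml"
```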
If I already provide a "spring.application.name" property in the application.yml, how does it impact SCDF, as I also provide an application name when defining the stream?
Do you use spring.application.name explicitly in your application for some reason? I expect there will be some impact on the metrics collector if you change spring.application.name.
If I already provide a "server.port" property in the application.yml, how does it impact SCDF? Will SCDF pick it as the port to use for the application or will it just ignore it?
It works the same way as Spring Boot's property-source precedence. The server.port in your application's application.yml gets lower precedence than the property sources set via stream definition/deployment properties.
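So, for example, a port set as a deployment property wins over the one in application.yml; the app name and port value below are illustrative:

```
stream deploy mystream --properties "app.mysink.server.port=9100"
```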
Is there a way in Spring to configure Redis queue listeners using annotations?
I would like something like Annotation-based SQS Queue Listener from Spring Cloud for AWS, but using Redis as a queue.
Looking at the documentation, I can't find anything that fits well.
Is this feature already implemented in Spring, or do I need to implement it on my own?
Spring Cloud Stream has support for Redis.
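A minimal sketch, assuming the spring-cloud-stream-binder-redis artifact is on the classpath; the destination name is a placeholder:

```yaml
spring:
  cloud:
    stream:
      bindings:
        input:
          destination: myqueue
```

Note that the Redis binder was dropped from the official Spring Cloud Stream release train early on, so for new code a supported binder (RabbitMQ, Kafka) or Spring Data Redis pub/sub is a safer choice.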