Is there a simple way to pause Spring Boot AMQP and resume it after some time?
I'm using Spring Boot version 2.6.
If it is a @RabbitListener method, give it an id attribute and stop/start it via the RabbitListenerEndpointRegistry @Bean.
registry.getListenerContainer("myListenerId").stop();
If you created the listener container yourself then it can be stopped and started directly.
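For example, a minimal sketch, assuming a queue named "orders"; the listener id and bean names here are purely illustrative:

import org.springframework.amqp.rabbit.annotation.RabbitListener;
import org.springframework.amqp.rabbit.listener.RabbitListenerEndpointRegistry;
import org.springframework.stereotype.Component;
import org.springframework.stereotype.Service;

@Component
class OrderListener {

    // The id makes this container addressable through the registry.
    @RabbitListener(id = "myListenerId", queues = "orders")
    public void onMessage(String payload) {
        // process the message
    }
}

@Service
class ListenerControlService {

    private final RabbitListenerEndpointRegistry registry;

    ListenerControlService(RabbitListenerEndpointRegistry registry) {
        this.registry = registry;
    }

    public void pause() {
        // Consumption stops; messages simply accumulate in the queue until resume().
        registry.getListenerContainer("myListenerId").stop();
    }

    public void resume() {
        registry.getListenerContainer("myListenerId").start();
    }
}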
I am creating a Spring Boot application using Temporal as a workflow engine and Kafka topics.
In order to validate that the whole setup is working, I am writing integration test cases using Spring Boot Test, embedded Kafka, and Temporal. But I am not finding any option or way to run an in-memory workflow engine.
Maybe you can try this distribution of Temporal that runs as a single process:
https://github.com/DataDog/temporalite
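If you go that route, here is a rough sketch of pointing the Temporal Java SDK at such a locally running server from a test. The 127.0.0.1:7233 target is Temporal's default gRPC endpoint, and the exact factory methods can vary between SDK versions, so treat this as an assumption rather than the one required setup:

import io.temporal.client.WorkflowClient;
import io.temporal.serviceclient.WorkflowServiceStubs;
import io.temporal.serviceclient.WorkflowServiceStubsOptions;

public class LocalTemporalClientFactory {

    // Assumes a single-process Temporal server (e.g. temporalite) is already
    // listening on the default gRPC port 7233 before the tests run.
    public static WorkflowClient newLocalClient() {
        WorkflowServiceStubs service = WorkflowServiceStubs.newInstance(
                WorkflowServiceStubsOptions.newBuilder()
                        .setTarget("127.0.0.1:7233")
                        .build());
        return WorkflowClient.newInstance(service);
    }
}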
We have a Spring Boot microservice that, as well as exposing HTTP endpoints, uses Spring Cloud Bus to pick up refresh events (from RabbitMQ) and also has a Spring Cloud Stream sink that picks up custom messages from another Rabbit topic.
After updating to Spring Boot 2.4.1 and Spring Cloud 2020.0.0 everything seemed to be working, until we discovered that Spring Cloud Bus was no longer picking up events.
Looking into this, it turned out that some of the Spring Cloud Bus internal channels were not being created.
This wasn't happening in another service that didn't have the stream functionality, so we tried disabling the stream sink, and the bus functionality then started working.
So there was obviously some sort of interference between the old-style stream model and the newer Spring Cloud Bus.
After updating our sink to use the new functional model I still had issues, and eventually got both to work by including the following lines in our application.yml:
spring:
  cloud:
    stream:
      bindings.mySink-in-0.destination: mytopic
      function.definition: busConsumer;mySink
So I have the following questions:
Did I miss something, or should there be better documentation on how stream and bus can affect each other and on the migration to 2020.0.0?
Does my current configuration look correct?
It doesn't seem right to have to include busConsumer here - should its auto-configuration not be able to 'combine it in' with any other stream config?
What's the difference between spring.cloud.stream.function.definition and spring.cloud.function.definition? I've seen both in the documentation, and Spring Cloud Bus also seems to set spring.cloud.function.definition=busConsumer.
In org.springframework.cloud.stream.function.FunctionConfiguration, it searches for @EnableBinding:
if (ObjectUtils.isEmpty(applicationContext.getBeanNamesForAnnotation(EnableBinding.class)))
If it is found, functional binding is disabled; see this log message:
logger.info("Functional binding is disabled due to the presense of #EnableBinding annotation in your configuration");
After the upgrade, we need to convert our listener classes to the functional interface style in order to activate functional binding. Once that is done, the cloud bus consumer binding will be created too.
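For illustration, a minimal sketch of that functional style; the binding name mySink-in-0 from the question is derived from the bean name, while the payload type here is just an assumption:

import java.util.function.Consumer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
class SinkConfiguration {

    // Listed in function.definition (busConsumer;mySink); Spring Cloud Stream
    // derives the binding name mySink-in-0 from this bean name.
    @Bean
    public Consumer<String> mySink() {
        return payload -> {
            // handle the custom message from the rabbit topic
        };
    }
}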
I'm pretty new to Kafka Streams. Right now I'm trying to understand the basic principles of this system.
This is a quote from the following article https://www.confluent.io/blog/introducing-kafka-streams-stream-processing-made-simple/
You just use the library in your app, and start as many instances of the app as you like, and Kafka will partition up and balance the work over these instances.
Right now it is not clear to me how this works. Where will the business logic (the computation tasks) of Kafka Streams be executed? Will it be executed inside my application, or is it just a client for the Kafka cluster that only prepares tasks which are then executed on the Kafka cluster?
If not, how do I properly scale the computation power of my Kafka Streams application? Is it possible to execute it inside YARN or something like that?
In that case, is it a good idea to implement the Kafka Streams application as an embedded component of the core application (let's say a web application in my case), or should it be implemented as a separate service and deployed to YARN/Mesos (if that is possible) separately from the main web application?
Also, how do I prepare a Kafka Streams application to be deployed with YARN/Mesos application management frameworks?
Your stream processing code runs inside your application; it is not running in the Kafka cluster.
You can deploy any way you like: YARN, Mesos, Kubernetes, a WAR, Chef, whatever. The idea is to embed it directly into your application to avoid setting up a separate processing cluster.
You don't need to prepare Kafka Streams for a particular deployment method; it is completely agnostic to how it gets deployed. For YARN/Mesos you would deploy it like any other Java application within the framework.
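As a concrete illustration that the processing runs in your own JVM, here is a minimal Kafka Streams program; the topic names and bootstrap address are placeholders:

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;
import org.apache.kafka.streams.kstream.KStream;

public class WordLengthApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "word-length-app");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());

        // The topology is executed by threads inside this process, not by the brokers.
        StreamsBuilder builder = new StreamsBuilder();
        KStream<String, String> input = builder.stream("input-topic");
        input.mapValues(value -> String.valueOf(value.length()))
             .to("output-topic");

        KafkaStreams streams = new KafkaStreams(builder.build(), props);
        streams.start();

        // Running more instances with the same application.id makes Kafka
        // rebalance the input partitions across them.
        Runtime.getRuntime().addShutdownHook(new Thread(streams::close));
    }
}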
I have a Spring Boot application in which I configured Spring Cloud Stream Kafka. When I start the application, on container startup it looks for running ZooKeeper and Kafka server instances, and if it doesn't find them it terminates the application container.
My requirement is a switch-like setting so that I can make this configurable: when I want my application to check for running ZooKeeper and Kafka instances I can set it to true, and when I don't I can set it to false, and in both cases the application container should still start.
Is there a way in Spring to configure Redis queue listeners using annotations?
I would like something like the annotation-based SQS queue listener from Spring Cloud for AWS, but using Redis as the queue.
Looking at the documentation, I can't find anything that fits well.
Is this feature already implemented in Spring, or do I need to implement it on my own?
Spring Cloud Stream has support for Redis.
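A rough sketch of what the annotation-driven Spring Cloud Stream consumer model looks like; note that only early Spring Cloud Stream releases shipped a Redis binder (e.g. spring-cloud-stream-binder-redis), so whether this works for you depends on your version and should be treated as an assumption:

import org.springframework.cloud.stream.annotation.EnableBinding;
import org.springframework.cloud.stream.annotation.StreamListener;
import org.springframework.cloud.stream.messaging.Sink;

// Binds Sink.INPUT to the destination configured under
// spring.cloud.stream.bindings.input.destination; the Redis binder must be on the classpath.
@EnableBinding(Sink.class)
public class RedisQueueListener {

    @StreamListener(Sink.INPUT)
    public void handle(String payload) {
        // process the message
    }
}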