I have one publisher publishing messages on a topic, and two subscribers, S1 and S2, receiving them. When my publisher sends a message and both subscribers are up, they both receive it. However, when my subscribers are down while the publisher sends a message, they do not receive that message after they come back up. How can my subscribers receive messages that were sent while they were down?
Note: I am using Spring Boot.
MessageProducer.java
@RestController
@RequestMapping("/rest/produce")
public class MessageProducer {

    private static final Logger LOG = LoggerFactory.getLogger(MessageProducer.class);

    @Autowired
    public JmsTemplate jmsTemplate;

    @GetMapping("/{message}")
    public void run(@PathVariable("message") final String message) throws Exception {
        LOG.info("============= Sending " + message);
        sendMessage(message);
    }

    public void sendMessage(String payload) {
        this.jmsTemplate.convertAndSend("example", payload);
    }
}
application.properties - (MessageProducer)
spring.qpidjms.remoteURL=amqp://127.0.0.1:5672
spring.qpidjms.username=admin
spring.qpidjms.password=admin
activemq.broker-url=tcp://localhost:61616
server.port=8888
spring.jms.pub-sub-domain=true
MessageConsumer.java
@Component
public class MessageConsumer {

    private static final Logger LOG = LoggerFactory.getLogger(MessageConsumer.class);

    @JmsListener(destination = "example")
    public void processMsg(String message) {
        LOG.info("============= Received: " + message);
    }
}
Main initiator class for the MessageConsumer (ignore the class name)
@SpringBootApplication
@EnableJms
public class QpidJMSSpringBootHelloWorld {

    public static void main(String[] args) {
        SpringApplication.run(QpidJMSSpringBootHelloWorld.class, args);
    }
}
The second consumer is the same as the first; only the port number differs in application.properties.
application.properties (MessageConsumer-1, S1)
spring.qpidjms.remoteURL=amqp://127.0.0.1:5672
spring.qpidjms.username=admin
spring.qpidjms.password=admin
activemq.broker-url=tcp://localhost:61616
server.port=9999
spring.jms.pub-sub-domain=true
application.properties (S2)
spring.qpidjms.remoteURL=amqp://127.0.0.1:5672
spring.qpidjms.username=admin
spring.qpidjms.password=admin
activemq.broker-url=tcp://localhost:61616
server.port=9990
spring.jms.pub-sub-domain=true
Messages sent to a multicast address (i.e. a JMS topic) are routed to all existing multicast queues (i.e. JMS subscriptions). If no subscriptions exist then the messages are discarded. This is the fundamental semantics of multicast routing (i.e. JMS publish-subscribe).
If you want messages for a subscriber to be stored when the subscriber is not connected then the subscriber must create a durable subscription before any messages which it wants are sent. Once the durable subscription is created the messages sent to the topic will be stored in that subscription even if the subscriber is not connected.
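In Spring Boot, a minimal sketch of such a durable subscription might look like this (the factory bean, client IDs, and subscription name below are assumptions, not part of the original code). Each subscriber needs its own client ID and subscription name so the broker can store messages for it while it is offline:

@Bean
public DefaultJmsListenerContainerFactory jmsListenerContainerFactory(ConnectionFactory connectionFactory) {
    DefaultJmsListenerContainerFactory factory = new DefaultJmsListenerContainerFactory();
    factory.setConnectionFactory(connectionFactory);
    factory.setPubSubDomain(true);        // topic semantics
    factory.setSubscriptionDurable(true); // keep messages while the subscriber is down
    factory.setClientId("subscriber-s1"); // assumed ID; must be unique per subscriber (e.g. "subscriber-s2" for S2)
    return factory;
}

and on the listener:

@JmsListener(destination = "example", subscription = "s1-example-subscription") // assumed subscription name
public void processMsg(String message) {
    LOG.info("============= Received: " + message);
}

With this in place, messages published while S1 is down are held in its durable subscription and delivered once it reconnects.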
Related
I'm a beginner with Kafka Streams. I created two sample modules, one for "Order" and the other for "Payment".
In the "Order" project I use Kafka to send messages to "order-topic";
then in "Payment" I use a Kafka listener to receive the value and send it on to another topic (e.g. payment-topic).
I want to use Kafka Streams in the "Order" module to read values from "payment-topic". I define it in the Order application class in the "Order" module like this:
@SpringBootApplication
@EnableKafka
@EnableKafkaStreams
public class OrderServiceApplication {

    // this topic will receive all values that were processed
    public static final String OUTPUT_TOPIC_NAME = "ordered";

    // these topics receive the values sent by the payment and stock services
    public static final String INPUT_ORDER_TOPIC_NAME = "order-topic-result";
    public static final String INPUT_STOCK_TOPIC_NAME = "stock-topic-result";

    public static void main(String[] args) {
        SpringApplication.run(OrderServiceApplication.class, args);
    }

    @Bean
    public KStream<String, String> readStream(StreamsBuilder kStreamBuilder) {
        KStream<String, String> input = kStreamBuilder.stream(INPUT_ORDER_TOPIC_NAME);
        KStream<String, String> output = input.filter((key, value) -> value.length() > 2);
        output.to(OUTPUT_TOPIC_NAME);
        return output;
    }
}
but readStream does not work. Please help me: how do I get this method to execute automatically after a value is sent to this topic?
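One thing worth checking, assuming Spring Boot's Kafka Streams auto-configuration: @EnableKafkaStreams requires a streams application id before the topology is built and started. A minimal application.properties sketch, with assumed values:

# assumed values -- adjust to your environment
spring.kafka.streams.application-id=order-service-streams
spring.kafka.streams.bootstrap-servers=localhost:9092
spring.kafka.streams.properties.default.key.serde=org.apache.kafka.common.serialization.Serdes$StringSerde
spring.kafka.streams.properties.default.value.serde=org.apache.kafka.common.serialization.Serdes$StringSerde

Once the topology starts, readStream is called only once at startup to build the topology; after that, the filter runs automatically for every record arriving on the input topic, which is the "execute automatically" behavior asked about.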
I am consuming Kafka events using @KafkaHandler at the method level (@KafkaListener at the class level).
I have seen many examples where an Acknowledgment argument is available, on which the acknowledge() method can be called to commit consumption of the event; however, I am not able to get the Acknowledgment object populated when including it as an argument to my method. How do I commit manually when using a @KafkaHandler? Is it possible at all?
Code example:
@Service
@KafkaListener(topics = "mytopic", groupId = "mygroup")
public class TestListener {

    @KafkaHandler
    public void consumeEvent(MyEvent event, Acknowledgment ack) throws Exception {
        // ... processing
        ack.acknowledge(); // ack is not available
    }
}
Using Spring Boot and spring-kafka.
You must configure the listener container with AckMode.MANUAL or AckMode.MANUAL_IMMEDIATE to get this functionality.
However, it's generally better to let the container take care of committing the offset with AckMode.RECORD or AckMode.BATCH (default).
https://docs.spring.io/spring-kafka/docs/current/reference/html/#committing-offsets
EDIT
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.listener.ack-mode=MANUAL
@SpringBootApplication
public class So68844554Application {

    public static void main(String[] args) {
        SpringApplication.run(So68844554Application.class, args);
    }

    @Bean
    public NewTopic topic() {
        return TopicBuilder.name("so68844554").partitions(1).replicas(1).build();
    }
}
@Component
@KafkaListener(id = "so68844554", topics = "so68844554")
class Foo {

    @KafkaHandler
    void listen(String in, Acknowledgment ack) {
        System.out.println(in);
        ack.acknowledge();
    }
}
% kafka-consumer-groups --bootstrap-server localhost:9092 --describe -group so68844554
Consumer group 'so68844554' has no active members.
GROUP       TOPIC       PARTITION  CURRENT-OFFSET  LOG-END-OFFSET  LAG  CONSUMER-ID  HOST  CLIENT-ID
so68844554  so68844554  0          2               2               0    -            -     -
I have two Kafka listeners like below:
#KafkaListener(topics = "foo1, foo2", groupId = foo.id, id = "foo")
public void fooTopics(#Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
//processing
}
#KafkaListener(topics = "Bar1, Bar2", groupId = bar.id, id = "bar")
public void barTopics(#Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
//processing
The same application is running on two instances, inc1 and inc2. Is there a way to assign the foo listener to inc1 and the bar listener to inc2, and, if one instance goes down, have both listeners (foo and bar) assigned to the running instance?
You can use the @KafkaListener property autoStartup, introduced in version 2.2.
When an instance dies, you can start the corresponding listener on the other instance like so:
@Autowired
private KafkaListenerEndpointRegistry registry;

...

@KafkaListener(topics = "foo1, foo2", groupId = "foo.id", id = "foo", autoStartup = "false")
public void fooTopics(@Header(KafkaHeaders.RECEIVED_TOPIC) String topic, String message, Acknowledgment acknowledgment) {
    // processing
}

// Start-up condition:
registry.getListenerContainer("foo").start();
I have a web service where, via an HTTP GET method, the user requests a person object. This person is sent to a JMS queue and then, with the help of Spring Integration, I send it to a fake email address (https://papercut.codeplex.com/). I have written the code with the Spring Integration Java DSL. I would like to ask:
Is there a more flexible way to send the email message?
If an exception is thrown, how can the mail be redelivered with the help of Spring Integration? (e.g. 5 times; if it still cannot be sent, the exception is handled and the program stops)
Here is my code:
Web Service
public Person findById(Integer id) {
    Person person = jpaPersonRepository.findOne(id);
    jmsTemplate.convertAndSend("testQueue", person);
    return person;
}
Java Configuration
@Configuration
@EnableIntegration
@ComponentScan
public class JavaConfig {

    private static final String DEFAULT_BROKER_URL = "tcp://localhost:61616";
    private static final String DEFAULT_QUEUE = "testQueue";

    @Bean
    public ActiveMQConnectionFactory connectionFactory() {
        ActiveMQConnectionFactory connectionFactory = new ActiveMQConnectionFactory();
        connectionFactory.setBrokerURL(DEFAULT_BROKER_URL);
        return connectionFactory;
    }

    @Bean
    public JmsTemplate jmsTemplate() {
        JmsTemplate template = new JmsTemplate();
        template.setConnectionFactory(this.connectionFactory());
        template.setDefaultDestinationName(DEFAULT_QUEUE);
        return template;
    }

    @Bean
    public DefaultMessageListenerContainer defaultMessageListenerContainer() {
        DefaultMessageListenerContainer defaultMessageListenerContainer = new DefaultMessageListenerContainer();
        defaultMessageListenerContainer.setDestinationName(DEFAULT_QUEUE);
        defaultMessageListenerContainer.setConnectionFactory(this.connectionFactory());
        return defaultMessageListenerContainer;
    }

    @Bean(name = "inputChannel")
    public DirectChannel directChannel() {
        return new DirectChannel();
    }

    @Bean
    public IntegrationFlow orders() {
        return IntegrationFlows
                .from(Jms.messageDrivenChannelAdapter(defaultMessageListenerContainer()))
                .transform(new ObjectToStringTransformer())
                .enrichHeaders(p -> p.header(MailHeaders.TO, "Papercut0@test.com"))
                .handle(Mail.outboundAdapter("127.0.0.1")
                                .credentials("test", "test").port(25)
                                .javaMailProperties(p -> p.put("mail.debug", "true")),
                        e -> e.id("sendMailEndpoint"))
                .get();
    }
}
Is there a more flexible way to send the email message?
Sorry, the question isn't clear. Your code is already quite short, and Mail.outboundAdapter() with its fluent API gives you what you need. What should be more flexible?
If an exception is thrown, how can the mail be redelivered with the help of Spring Integration?
For this purpose Spring Integration provides RequestHandlerRetryAdvice, and Mail.outboundAdapter() can be configured with it as follows:
@Bean
public Advice retryAdvice() {
    RequestHandlerRetryAdvice advice = new RequestHandlerRetryAdvice();
    RetryTemplate retryTemplate = new RetryTemplate();
    SimpleRetryPolicy retryPolicy = new SimpleRetryPolicy();
    retryPolicy.setMaxAttempts(5);
    retryTemplate.setRetryPolicy(retryPolicy);
    advice.setRetryTemplate(retryTemplate);
    advice.setRecoveryCallback(new ErrorMessageSendingRecoverer(emailErrorChannel()));
    return advice;
}
...
.handle(Mail.outboundAdapter("127.0.0.1")
        .credentials("test", "test").port(25)
        .javaMailProperties(p -> p.put("mail.debug", "true")),
    e -> e.id("sendMailEndpoint")
        .advice(retryAdvice())) // HERE IS THE TRICK!
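The emailErrorChannel() passed to the ErrorMessageSendingRecoverer above is assumed to be a channel bean you define yourself; a minimal sketch:

@Bean
public MessageChannel emailErrorChannel() {
    return new DirectChannel(); // failed messages arrive here as ErrorMessages after retries are exhausted
}

You can then subscribe your own handler to that channel to log the failure or stop the application.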
See its JavaDocs and Reference Manual on the matter.
I am trying to create a Spring Boot application with Spring Cloud Stream and Kafka integration. I created a sample Topic in Kafka with 1 partition and have published to the topic from the Spring Boot application created based on the directions given here
http://docs.spring.io/spring-cloud-stream/docs/1.0.2.RELEASE/reference/htmlsingle/index.html
and
https://blog.codecentric.de/en/2016/04/event-driven-microservices-spring-cloud-stream/
Spring Boot App -
@SpringBootApplication
public class MyApplication {

    private static final Log logger = LogFactory.getLog(MyApplication.class);

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }
}
Kafka Producer Class
@Service
@EnableBinding(Source.class)
public class MyProducer {

    private static final Log logger = LogFactory.getLog(MyProducer.class);

    @Bean
    @InboundChannelAdapter(value = Source.OUTPUT, poller = @Poller(fixedDelay = "10000", maxMessagesPerPoll = "1"))
    public MessageSource<TimeInfo> timerMessageSource() {
        TimeInfo t = new TimeInfo(new Timestamp(new Date().getTime()) + "", "Label");
        MessageBuilder<TimeInfo> m = MessageBuilder.withPayload(t);
        return () -> m.build();
    }

    public static class TimeInfo {

        private String time;
        private String label;

        public TimeInfo(String time, String label) {
            super();
            this.time = time;
            this.label = label;
        }

        public String getTime() {
            return time;
        }

        public String getLabel() {
            return label;
        }
    }
}
All is working well except for when I want to handle exceptions.
If the Kafka topic goes down, I can see the ConnectionRefused exception being thrown in the app's log files, but the built-in retry logic seems to keep retrying continuously without stopping!
No exception is thrown at all for me to handle and do further exception processing. I have read through the producer options and the binder options for Kafka in the Spring Cloud Stream documentation above, and I cannot see any customization option that would propagate this exception all the way up for me to capture.
I am new to Spring Boot / Spring Cloud Stream / Spring Integration (which seems to be the underlying implementation of the Cloud Stream project).
Is there anything else I can do to get this exception cascaded up to my Spring Cloud Stream app?