Spring Cloud Stream + Spring Retry: how to add a recovery callback and disable the logic that sends to the DLQ?

I'm using Spring Cloud Stream with the RabbitMQ binder.
In my @StreamListener I want to apply retry logic for specific exceptions using a RetryTemplate. After retries are exhausted, or a non-retriable error is thrown, I would like a recovery callback that saves a new record with an error message to my Postgres DB and finishes with the message (moves on to the next one).
Here is what I have so far:
@StreamListener(Sink.INPUT)
public void saveUser(User user) {
    User savedUser = userService.saveUser(user); // could throw exceptions
    log.info(">>>>>>User is created successfully: {}", savedUser);
}
@StreamRetryTemplate
public RetryTemplate myRetryTemplate() {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setBackOffPolicy(new ExponentialBackOffPolicy());
    Map<Class<? extends Throwable>, Boolean> retryableExceptions = new HashMap<>();
    retryableExceptions.put(ConnectionException.class, true);
    retryTemplate.registerListener(new RetryListener() {

        @Override
        public <T, E extends Throwable> boolean open(RetryContext context,
                RetryCallback<T, E> callback) {
            return true;
        }

        @Override
        public <T, E extends Throwable> void close(RetryContext context,
                RetryCallback<T, E> callback, Throwable throwable) {
            // could add recovery logic here, e.g. save to the DB why a certain user was not saved
            log.info("retries exhausted");
        }

        @Override
        public <T, E extends Throwable> void onError(RetryContext context,
                RetryCallback<T, E> callback, Throwable throwable) {
            log.error("Error on retry", throwable);
        }
    });
    retryTemplate.setRetryPolicy(
            new SimpleRetryPolicy(properties.getRetriesCount(), retryableExceptions, true));
    return retryTemplate;
}
In my properties I only have these (no DLQ configuration):
spring.cloud.stream.bindings.input.destination = user-topic
spring.cloud.stream.bindings.input.group = user-consumer
And after retries are exhausted I get this log output:
2020-06-01 20:05:58.674 INFO 18524 --- [idge-consumer-1] o.s.a.r.c.CachingConnectionFactory : Attempting to connect to: [localhost:56722]
2020-06-01 20:05:58.685 INFO 18524 --- [idge-consumer-1] o.s.a.r.c.CachingConnectionFactory : Created new connection: rabbitConnectionFactory.publisher#319c51b0:0/SimpleConnection#2a060201 [delegate=amqp://guest@127.0.0.1:56722/, localPort= 50728]
2020-06-01 20:05:58.697 INFO 18524 --- [idge-consumer-1] c.e.i.o.b.c.RetryConfiguration : retry finish
2020-06-01 20:05:58.702 ERROR 18524 --- [127.0.0.1:56722] o.s.a.r.c.CachingConnectionFactory : Channel shutdown: channel error; protocol method: #method<channel.close>(reply-code=404, reply-text=NOT_FOUND - no exchange 'DLX' in vhost '/', class-id=60, method-id=40)
After the RetryListener's close method is triggered, I can see that the listener tries to connect to the DLX, presumably to publish an error message. I don't want it to do that, nor do I want to see this error message in the log each time.
So my questions are:
1) Where do I add a RecoveryCallback for my retryTemplate? Presumably I could put my recovery logic (saving the error to the DB) in RetryListener#close, but there should definitely be a more appropriate way to do that.
2) How do I configure the RabbitMQ binder not to send messages to the DLQ; maybe I could override some method? Currently, after retries are exhausted (or a non-retriable error occurs) the listener tries to send a message to the DLX and logs an error that it couldn't find it. I don't need any messages to be sent to a DLQ within the scope of my application; I only need to save them to the DB.

There is currently no mechanism to provision a custom recovery callback.
Set republishToDlq to false (it used to default to false). The default was changed to true, which is wrong when autoBindDlq is false (the default); I will open an issue for that.
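With the binding in the question, that consumer property (assuming the default input channel name) would be:
spring.cloud.stream.rabbit.bindings.input.consumer.republishToDlq = false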
Then, when retries are exhausted, the exception will be thrown back to the container; you can use a ListenerContainerCustomizer to add a custom ErrorHandler.
However, the data you get there will be a ListenerExecutionFailedException with the raw (unconverted) Spring AMQP Message in its failedMessage property, not your User object.
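A minimal sketch of that customizer, assuming a hypothetical errorRepository bean (and its saveError method) that persists the failure to Postgres:
@Bean
public ListenerContainerCustomizer<AbstractMessageListenerContainer> containerCustomizer(
        ErrorRepository errorRepository) { // hypothetical repository bean
    return (container, destinationName, group) -> container.setErrorHandler(t -> {
        if (t instanceof ListenerExecutionFailedException) {
            ListenerExecutionFailedException lefe = (ListenerExecutionFailedException) t;
            // the raw (unconverted) AMQP message body is a byte[]
            String body = new String(lefe.getFailedMessage().getBody());
            errorRepository.saveError(body, t.getCause()); // hypothetical method
        }
        // reject without requeue so the broker drops the message instead of redelivering it
        throw new AmqpRejectAndDontRequeueException(t);
    });
}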
EDIT
You can add a listener to the binding's error channel...
@SpringBootApplication
@EnableBinding(Sink.class)
public class So62137618Application {

    public static void main(String[] args) {
        SpringApplication.run(So62137618Application.class, args);
    }

    @StreamListener(Sink.INPUT)
    public void listen(String in) {
        System.out.println(in);
        throw new RuntimeException("test");
    }

    @ServiceActivator(inputChannel = "user-topic.user-consumer.errors")
    public void errors(ErrorMessage errorMessage) {
        MessagingException ex = (MessagingException) errorMessage.getPayload();
        System.out.println("Retries exhausted for "
                + new String((byte[]) ex.getFailedMessage().getPayload()));
    }

}
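(The error channel name follows the convention destination.group.errors, hence user-topic.user-consumer.errors for the binding above.)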

Related

Spring Data MongoDB change stream with multiple application instances

I have a Spring Boot application with Spring Data MongoDB where I connect to a Mongo change stream to save the changes to an audit collection. My application is running multiple instances (2) and will be scaled up to n instances when the load increases. When records are created in the original collection ("my-collection"), the listeners are triggered in all running instances and create duplicate records. Following is my setup:
build.gradle
…
// spring data mongodb version 3.1.5
implementation 'org.springframework.boot:spring-boot-starter-data-mongodb'
…
Listener config
@Configuration
@Slf4j
public class MongoChangeStreamListenerConfig {

    @Bean
    MessageListenerContainer changeStreamListenerContainer(
            MongoTemplate template,
            MyEntityAuditListener auditListener,
            ErrorHandler errorHandler) {
        MessageListenerContainer messageListenerContainer =
                new MongoStreamListenerContainer(template, errorHandler);
        ChangeStreamRequest<MyEntity> request =
                ChangeStreamRequest.builder(auditListener)
                        .collection("my-collection")
                        .filter(newAggregation(match(where("operationType").in("insert", "update", "replace"))))
                        .fullDocumentLookup(FullDocument.UPDATE_LOOKUP)
                        .build();
        messageListenerContainer.register(request, MyEntity.class, errorHandler);
        log.info("mongo stream listener is registered");
        return messageListenerContainer;
    }

    @Bean
    ErrorHandler getLoggingErrorHandler() {
        return new ErrorHandler() {
            @Override
            public void handleError(Throwable throwable) {
                log.error("error in creating audit records", throwable);
            }
        };
    }
}
Listener container
public class MongoStreamListenerContainer extends DefaultMessageListenerContainer {

    public MongoStreamListenerContainer(MongoTemplate template, ErrorHandler errorHandler) {
        super(template, Executors.newFixedThreadPool(15), errorHandler);
    }

    @Override
    public boolean isAutoStartup() {
        return true;
    }
}
ChangeListener
/**
 * This class listens to the MongoDB change stream and processes changes. onMessage is
 * triggered when a record is added, updated or replaced in MongoDB.
 */
@Component
@Slf4j
@RequiredArgsConstructor
public class MyEntityAuditListener
        implements MessageListener<ChangeStreamDocument<Document>, MyEntity> {

    @Override
    public void onMessage(Message<ChangeStreamDocument<Document>, MyEntity> message) {
        var update = message.getBody();
        log.info("db change event received");
        if (update != null) {
            log.info("creating audit entries for id {}", update.getId());
            // This executes in all the instances, creating duplicate records
        }
    }
}
Is there a way to control the execution so it runs on one instance at a given time and shares the load between nodes? It would be really nice to know if there is a config in Spring Data MongoDB to control this flow.
Also, I have checked the following post on Stack Overflow, and I am not sure how to use this with Spring Data:
Mongo Change Streams running multiple times (kind of): Node app running multiple instances
Any help or tip to resolve this issue is highly appreciated. Thank you very much in advance.

How to handle errors occurring during the processing of data in Kafka Streams

I am writing a Java application using Spring Cloud Stream with Kafka Streams. Here is the functional method snippet I'm using:
@Bean
public Function<KStream<String, String>, KStream<String, String>> process() {
    return input ->
            input.transform(
                    () ->
                            new Transformer<String, String, KeyValue<String, String>>() {
                                ProcessorContext context;

                                @Override
                                public void init(ProcessorContext context) {
                                    this.context = context;
                                }

                                @Override
                                public void close() {}

                                @Override
                                public KeyValue<String, String> transform(String key, String value) {
                                    String result = fetch_data_from_database(key, value);
                                    return new KeyValue<>(key, result);
                                }
                            });
}
fetch_data_from_database() can throw an Exception.
How can I stop the processing of the inbound KStream (the offset should not get committed) in case of an exception from fetch_data_from_database(), and make it retry processing the same offset's data?
In this case, you need to retry the logic on your own. For that, you can use Spring's RetryTemplate. This answer has the details about how to use the RetryTemplate within Kafka Streams. It does not use the low-level Processor API as you have, but it's the same idea. Wrap your database call within a retry template and customize the retries based on your requirements. Any upstream processing will be paused until the retries are exhausted.
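As a minimal sketch, that idea applied to the question's Transformer could look like the following; the retry count and back-off period are illustrative assumptions, and in real code you would build the RetryTemplate once (e.g. in init()) rather than per record:
@Override
public KeyValue<String, String> transform(String key, String value) {
    RetryTemplate retryTemplate = new RetryTemplate();
    retryTemplate.setRetryPolicy(new SimpleRetryPolicy(5)); // up to 5 attempts (assumed)
    FixedBackOffPolicy backOff = new FixedBackOffPolicy();
    backOff.setBackOffPeriod(1000L); // 1 second between attempts (assumed)
    retryTemplate.setBackOffPolicy(backOff);
    // the stream thread blocks here, so the offset is not committed while retrying
    String result = retryTemplate.execute(retryContext -> fetch_data_from_database(key, value));
    return new KeyValue<>(key, result);
}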

Kafka Connect: if the task is stopped, are offsets committed?

I have code as below in a Kafka Connect SinkTask implementation. For critical exceptions I want to invoke stop() in the task and stop it.
I need to know: if I go to stop(), does Kafka Connect commit the offsets in the Kafka topic's partitions? That is, if I handle my exception, log it, perform some activities based on the exception, and invoke stop(), are the topic's offsets committed because there was no exception from put()? I am assuming that because put() did not run successfully, or did not complete for a batch (based on max poll records), the offsets should not be committed, so that when I retry, no records will be missed. Can anyone confirm whether the offsets are committed or not?
Code:
public class ExampleSinkTask extends SinkTask {

    @Override
    public void start(Map<String, String> map) {
        ....
    }

    private void doSomeCriticalOperation() {
        // call stop if critical operation fails.
        ...
        // Exception handled, close the task (allow user interference in such cases)
        stop();
    }

    @Override
    public void put(Collection<SinkRecord> collection) {
        ..
        doSomeCriticalOperation();
        ..
    }

    @Override
    public void stop() {
        log.info("*** Stopping Kafka task completely ***");
        System.exit(0);
    }
}
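For reference, the conventional way to make Connect redeliver a batch without committing its offsets is to throw from put() rather than stopping the task; a minimal sketch (RetriableException is org.apache.kafka.connect.errors.RetriableException):
@Override
public void put(Collection<SinkRecord> collection) {
    try {
        doSomeCriticalOperation();
    } catch (Exception e) {
        // offsets for this batch are not committed; the framework pauses and
        // retries the same records instead of the task exiting
        throw new RetriableException(e);
    }
}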

Best way to handle incoming messages with XMPP

Is there a workaround to get Spring to handle incoming messages from XMPP? I have tried many different configurations to get an inbound-channel-adapter to respond to incoming XMPP messages, and nothing happens. I know that they show up at the Spring Integration layer (I can see that in the logs), but they are ignored. Is there any way to get them into my application layer? I hope to avoid needing to make changes to Spring Integration itself if I can.
Here is my integration configuration:
<int-xmpp:inbound-channel-adapter id="gcmIn"
channel="gcmInChannel"
xmpp-connection="gcmConnection"
auto-startup="true"
/>
<bean id="inboundBean" class="example.integration.GcmInputHandler"/>
<int:service-activator input-channel="gcmInChannel" output-channel="nullChannel" ref="inboundBean" method="handle"/>
Using the outbound-channel-adapter works fine; I can send messages over GCM easily. But inbound does nothing, even though I know the messages are coming in.
Thanks
Not a very clean one: you would need to override ChatMessageListeningEndpoint, which drops all empty-body messages.
This then needs to be used as the inbound channel adapter in your config.
In addition, you need to register the GCM packet extension with the Smack ProviderManager, otherwise you lose the JSON message.
I am working on a sample project, so if you need more help let me know and I will post a link as soon as it works in an understandable way.
Here is a sample GCM input adapter:
public class GcmMessageListeningEndpoint extends ChatMessageListeningEndpoint {

    private static final Logger LOG = LoggerFactory.getLogger(GcmMessageListeningEndpoint.class);

    @Setter
    protected PacketListener packetListener = new GcmPacketListener();

    protected XmppHeaderMapper headerMapper = new DefaultXmppHeaderMapper();

    public GcmMessageListeningEndpoint(XMPPConnection connection) {
        super(connection);
        ProviderManager.addExtensionProvider(GcmPacketExtension.GCM_ELEMENT_NAME, GcmPacketExtension.GCM_NAMESPACE,
                new PacketExtensionProvider() {
                    @Override
                    public PacketExtension parseExtension(XmlPullParser parser) throws Exception {
                        String json = parser.nextText();
                        return new GcmPacketExtension(json);
                    }
                });
    }

    @Override
    public void setHeaderMapper(XmppHeaderMapper headerMapper) {
        super.setHeaderMapper(headerMapper);
        this.headerMapper = headerMapper;
        if (this.headerMapper == null) throw new IllegalArgumentException("Null XmppHeaderMapper isn't supported!");
    }

    public String getComponentType() {
        return "xmpp:inbound-channel-adapter-gcm";
    }

    @Override
    protected void doStart() {
        Assert.isTrue(this.initialized, this.getComponentName() + " [" + this.getComponentType() + "] must be initialized");
        this.xmppConnection.addPacketListener(this.packetListener, null);
    }

    @Override
    protected void doStop() {
        if (this.xmppConnection != null) {
            this.xmppConnection.removePacketListener(this.packetListener);
        }
    }

    class GcmPacketListener implements PacketListener {

        @Override
        public void processPacket(Packet packet) throws NotConnectedException {
            if (packet instanceof org.jivesoftware.smack.packet.Message) {
                org.jivesoftware.smack.packet.Message xmppMessage = (org.jivesoftware.smack.packet.Message) packet;
                Map<String, ?> mappedHeaders = headerMapper.toHeadersFromRequest(xmppMessage);
                sendMessage(MessageBuilder.withPayload(xmppMessage).copyHeaders(mappedHeaders).build());
            } else {
                LOG.warn("Unsupported Packet {}", packet);
            }
        }
    }
}
And here is the new configuration for the inbound channel adapter (remove the XML one):
@Bean
public GcmMessageListeningEndpoint inboundAdapter(XMPPConnection connection, MessageChannel gcmInChannel) {
    GcmMessageListeningEndpoint endpoint = new GcmMessageListeningEndpoint(connection);
    endpoint.setOutputChannel(gcmInChannel);
    return endpoint;
}

GWT: logging to the client UI from the server side

I have created a GWT app in which I have a VerticalPanel where I log details.
Client-side logging is done using a logger; sample code:
public static VerticalPanel customLogArea = new VerticalPanel();
public static Logger rootLogger = Logger.getLogger("");

logerPanel.setTitle("Log");
scrollPanel.add(customLogArea);
logerPanel.add(scrollPanel);

if (LogConfiguration.loggingIsEnabled()) {
    rootLogger.addHandler(new HasWidgetsLogHandler(customLogArea));
}
And I'm updating my vertical log panel using this code:
rootLogger.log(Level.INFO, "Already Present in Process Workspace\n");
But now my question is: I have to log server-side details to my vertical log panel as well.
My server-side GreetingServiceImpl code is:
public boolean createDirectory(String fileName)
        throws IllegalArgumentException {
    Boolean result = false;
    try {
        rootLogger.log(Level.INFO,
                "I want to log this to my UI vertical log Panel");
        System.out.println("log this to UI");
        File dir = new File("D:/GenomeSamples/" + fileName);
        if (!dir.exists()) {
            result = dir.mkdir();
        }
    } catch (Exception e) {
        e.printStackTrace();
    }
    return result;
}
Now I want to log System.out statements to my UI from here. How can I achieve this? Currently, rootLogger.log(Level.INFO, "I want to log this to my UI vertical log Panel"); logs to the Eclipse console, but how do I log this to my UI on the client side?
Please let me know if anything is wrong with this question.
If I understood you right, you want to see your server log entries in the web interface. Of course, the Java logger and printStackTrace() won't help you there: your GWT code is compiled to JavaScript and has nothing to do with the console and log files. Besides, your server can't "push" log entries to the client; it's up to the client to make requests. So if you want to track new log entries and move them to the client, you need to poll the server for new entries. And yet another problem: you may have many clients polling your servlet, so you should keep this multi-threading in mind.
This is how I see a probable implementation (it's just a concept and may contain some errors and misspellings):
Remote interface:
public interface GreetingService extends RemoteService {
    List<String> getLogEntries();
    boolean createDirectory(String fileName) throws IllegalArgumentException;
}
Remote Servlet:
public class GreetingServiceImpl extends RemoteServiceServlet implements GreetingService {

    public static final String LOG_ENTRIES = "LogEntries";

    public List<String> getLogEntries() {
        List<String> entries = getEntriesFromSession();
        List<String> copy = new ArrayList<String>(entries.size());
        copy.addAll(entries);
        // prevent loading the same entries twice
        entries.clear();
        return copy;
    }

    public boolean createDirectory(String fileName) throws IllegalArgumentException {
        Boolean result = false;
        try {
            log("I want to log this to my UI vertical log Panel");
            log("log this to UI");
            File dir = new File("D:/GenomeSamples/" + fileName);
            if (!dir.exists()) {
                result = dir.mkdir();
            }
        } catch (Exception e) {
            log("Exception occurred: " + e.getMessage());
        }
        return result;
    }

    private List<String> getEntriesFromSession() {
        HttpSession session = getThreadLocalRequest().getSession();
        List<String> entries = (List<String>) session.getAttribute(LOG_ENTRIES);
        if (entries == null) {
            entries = new ArrayList<String>();
            session.setAttribute(LOG_ENTRIES, entries);
        }
        return entries;
    }

    private void log(String message) {
        getEntriesFromSession().add(message);
    }
}
Simple implementation of polling (GWT client side):
final Timer t = new Timer() {
    @Override
    public void run() {
        greetingAsyncService.getLogEntries(new AsyncCallback<List<String>>() {
            public void onSuccess(List<String> entries) {
                // put entries into your vertical panel
            }
            public void onFailure(Throwable caught) {
                // handle exceptions
            }
        });
    }
};
// Schedule the timer to run once per second.
t.scheduleRepeating(1000);

greetingAsyncService.createDirectory(fileName, new AsyncCallback<Boolean>() {
    public void onSuccess(Boolean result) {
        // no need to poll anymore
        t.cancel();
    }
    public void onFailure(Throwable caught) {
        // handle exceptions
    }
});
As you can see, I have used the session to keep log entries, because the session is client-specific, so different clients will receive different logs. It's up to you to decide what to use; you may create your own Logger class that tracks users itself and gives the appropriate logs to the appropriate clients.
You may also want to save the level of your messages (INFO, ERROR, etc.) and then display messages in different colors (red for ERROR, for instance). To do so, you need to store not a List<String> but some custom class of your own.
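For example, a minimal sketch of such a class (the name and fields are illustrative assumptions):
// Hypothetical DTO replacing List<String>: carries the level so the client
// can render ERROR entries in red, INFO in black, and so on.
public class LogEntry implements Serializable {

    private String level;   // e.g. "INFO", "ERROR"
    private String message;

    public LogEntry() {
        // no-arg constructor required by GWT-RPC serialization
    }

    public LogEntry(String level, String message) {
        this.level = level;
        this.message = message;
    }

    public String getLevel() { return level; }

    public String getMessage() { return message; }
}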
You'd create a logging servlet that has the same methods as your logging framework to send log messages to your server via RPC.
Here are some sample RPC log methods you can use:
public interface LogService extends RemoteService {
    public void logException(String logger, String priority, String message, String error, StackTraceElement[] stackTrace, String nativeStack);
}

public interface LogServiceAsync {
    public void logException(String logger, String priority, String message, String error, StackTraceElement[] stackTrace, String nativeStack, AsyncCallback<Void> callback);
}

public class LogServiceImpl extends RemoteServiceServlet implements LogService {

    public void logException(String loggerName, String priority, String logMessage, String errorMessage, StackTraceElement[] stackTrace, String nativeStack) {
        Logger logger = getLogger(loggerName);
        Level level = getLevel(priority);
        // Create a Throwable to log
        Throwable caught = new Throwable();
        if (errorMessage != null && stackTrace != null) {
            caught = new Throwable(errorMessage);
            caught.setStackTrace(stackTrace);
        }
        // do stuff with the other passed arguments (optional)
        logger.log(level, logMessage, caught);
    }
}
Although those implementations are very nice, forget about timers and repeated server queries; we have something better now.
It's possible to push data from the server to the client using Atmosphere, which supports WebSockets.