How to avoid creation of multiple method interceptors for a method with cache annotations when using Spring Data JPA and Spring Data Redis

In my application I am using two data modules (spring-boot-starter-data-jpa and spring-boot-starter-data-redis).
I have a method annotated with @CachePut to store data in the cache. The method is called once, but the actual CachePut operation happens twice. When I debug, I can see two proxies created that intercept the same method.
When I use only the spring-boot-starter-data-redis module it works as expected, because only one proxy is created to intercept the method carrying the @CachePut annotation.
However, our application needs both data modules (spring-boot-starter-data-jpa for DB-related work and spring-boot-starter-data-redis for caching). As soon as I add spring-boot-starter-data-jpa, the cache operation is executed twice (due to the multiple proxies).
Is there any way to disable proxy creation with @EnableCaching for the JPA module and enable it only for the Redis module?
Code Snippets
Configuration class:
@Configuration
@EnableCaching
public class MyConfiguration {

    @Bean
    public RedisCacheManager cacheManager(RedisConnectionFactory connectionFactory, @NotNull RedisCacheProperties properties) {
        ....
    }
}
Dependencies
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-redis</artifactId>
    <version>1.5.9.RELEASE</version>
</dependency>
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-jpa</artifactId>
    <version>1.5.9.RELEASE</version>
</dependency>
CacheProcessor (Interface and Implementation classes)
public interface MyCacheProcessor {
    MyData store(final MyData myData);
}

public class MyCacheProcessorImpl implements MyCacheProcessor {

    @CachePut(cacheNames = {"my-data"}, key = "#myData.id")
    public MyData store(final MyData myData) {
        log.debug("storing in redis"); // This is printed only once (but the actual cache put operation happens twice)
        return myData;
    }
}
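As an aside, the double interception can be confirmed by inspecting the advisor chain on the injected bean via Spring AOP's Advised interface. This is a hedged diagnostic sketch, not part of the original application:

import org.springframework.aop.Advisor;
import org.springframework.aop.framework.Advised;

// Hypothetical diagnostic: if the injected MyCacheProcessor is a Spring proxy,
// list its advisors to see whether two cache interceptors are attached.
public static void dumpAdvisors(MyCacheProcessor processor) {
    if (processor instanceof Advised) {
        for (Advisor advisor : ((Advised) processor).getAdvisors()) {
            System.out.println(advisor.getAdvice().getClass().getName());
        }
    }
}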
YAML configs
spring:
  redis:
    timeout: 500
    host: devserver
    port: 26379
  datasource:
    url: jdbc:h2:mem:testdb;Mode=Oracle;DB_CLOSE_ON_EXIT=FALSE
    platform: h2
redis.cache.enabled: true
application:
  redis:
    cache:
      my-data:
        ttl: 15m
        serializer:
          type: org.springframework.data.redis.serializer.Jackson2JsonRedisSerializer
          handles: com.my.sample.model.MyData
I expect the cache operation to be executed only once even if I use both data modules; currently it is executed twice.
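One knob I am aware of is pinning the operation to a specific CacheManager bean via the cacheManager attribute of @CachePut; a minimal sketch follows (the bean name "cacheManager" is an assumption based on MyConfiguration above), though it is not clear to me that this alone addresses the duplicate proxying:

// Hedged sketch: pin the @CachePut operation to the Redis-backed CacheManager.
@CachePut(cacheNames = {"my-data"}, key = "#myData.id", cacheManager = "cacheManager")
public MyData store(final MyData myData) {
    return myData;
}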
Any guidance would be appreciated.

Related

Dynamic destination in Spring Cloud Stream from Azure Event Hub to Kafka

I'm trying to use Spring Cloud Stream to process messages sent to an Azure Event Hub instance. Those messages should be routed to a tenant-specific topic on a Kafka cluster, determined at runtime based on message content. For development purposes, I'm running Kafka locally via Docker.
I've done some research about bindings not known at configuration time and have found that dynamic destination resolution might be exactly what I need for this scenario.
However, the only way I can get my solution working is with StreamBridge. I would rather use the dynamic destination header spring.cloud.stream.sendto.destination; that way the processor could be written as a Function<> instead of a Consumer<> (it is not properly a sink). My main concern about the StreamBridge approach is that, since the final solution will be deployed with Spring Cloud Data Flow, I'm afraid I will have trouble configuring the streams.
Moving on to the code, this is the processor function (I stripped away the unrelated parts):
private static final String OUTPUT_DESTINATION_TEMPLATE = "%s.gateway-report";
private static final String STREAM_DESTINATION_HEADER = "spring.cloud.stream.sendto.destination";
private static final String TENANT_ID_HEADER = "tenant-id";

@Bean
public Function<Message<String>, Message<String>> routeMessageToTenantDestination(TenantGatewayDeviceService gatewayDeviceService) {
    return msg -> {
        final String tenantId = "test";
        final String destination = String.format(OUTPUT_DESTINATION_TEMPLATE, tenantId);
        return MessageBuilder.withPayload(msg.getPayload())
                .setHeader(STREAM_DESTINATION_HEADER, destination)
                .setHeader(TENANT_ID_HEADER, tenantId)
                .build();
    };
}
and this is my application.yml
spring:
  cloud:
    stream:
      bindings:
        routeMessageToTenantDestination-in-0:
          binder: kafka-evthub
          destination: gateway-report
          group: report-processor
      dynamic-destinations:
      binders:
        kafka-ioc:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: localhost:29092
        kafka-evthub:
          type: kafka
          environment:
            spring.cloud.stream.kafka.binder:
              brokers: xxxxxxxxxxx.servicebus.windows.net:9093
              configuration:
                sasl:
                  jaas:
                    config: org.apache.kafka.common.security.plain.PlainLoginModule required username="$ConnectionString" password="Endpoint=sb://xxxxxxxxxxx.servicebus.windows.net/;SharedAccessKeyName=*******;SharedAccessKey=********";
                  mechanism: PLAIN
                security.protocol: SASL_SSL
      default-binder: kafka-ioc
My relevant dependencies in pom.xml
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream-binder-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>
This is the exception I get each time the function fires
2022-01-20 10:56:18.848 ERROR 2258917 --- [container-0-C-1] o.s.integration.handler.LoggingHandler : org.springframework.messaging.MessageHandlingException: error occurred in message handler [... stripped away ...]
at org.springframework.integration.support.utils.IntegrationUtils.wrapInHandlingExceptionIfNecessary(IntegrationUtils.java:191)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:65)
at org.springframework.integration.dispatcher.AbstractDispatcher.tryOptimizedDispatch(AbstractDispatcher.java:115)
at org.springframework.integration.dispatcher.UnicastingDispatcher.doDispatch(UnicastingDispatcher.java:133)
at org.springframework.integration.dispatcher.UnicastingDispatcher.dispatch(UnicastingDispatcher.java:106)
at org.springframework.integration.channel.AbstractSubscribableChannel.doSend(AbstractSubscribableChannel.java:72)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:317)
at org.springframework.integration.channel.AbstractMessageChannel.send(AbstractMessageChannel.java:272)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:187)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:166)
at org.springframework.messaging.core.GenericMessagingTemplate.doSend(GenericMessagingTemplate.java:47)
at org.springframework.messaging.core.AbstractMessageSendingTemplate.send(AbstractMessageSendingTemplate.java:109)
at org.springframework.integration.endpoint.MessageProducerSupport.sendMessage(MessageProducerSupport.java:208)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.sendMessageIfAny(KafkaMessageDrivenChannelAdapter.java:385)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter.access$300(KafkaMessageDrivenChannelAdapter.java:79)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:442)
at org.springframework.integration.kafka.inbound.KafkaMessageDrivenChannelAdapter$IntegrationRecordMessageListener.onMessage(KafkaMessageDrivenChannelAdapter.java:416)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.lambda$onMessage$0(RetryingMessageListenerAdapter.java:125)
at org.springframework.retry.support.RetryTemplate.doExecute(RetryTemplate.java:329)
at org.springframework.retry.support.RetryTemplate.execute(RetryTemplate.java:255)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:119)
at org.springframework.kafka.listener.adapter.RetryingMessageListenerAdapter.onMessage(RetryingMessageListenerAdapter.java:42)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeOnMessage(KafkaMessageListenerContainer.java:2588)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeOnMessage(KafkaMessageListenerContainer.java:2569)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeRecordListener(KafkaMessageListenerContainer.java:2483)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.doInvokeWithRecords(KafkaMessageListenerContainer.java:2405)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeRecordListener(KafkaMessageListenerContainer.java:2284)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeListener(KafkaMessageListenerContainer.java:1958)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.invokeIfHaveRecords(KafkaMessageListenerContainer.java:1353)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.pollAndInvoke(KafkaMessageListenerContainer.java:1344)
at org.springframework.kafka.listener.KafkaMessageListenerContainer$ListenerConsumer.run(KafkaMessageListenerContainer.java:1236)
at java.base/java.util.concurrent.Executors$RunnableAdapter.call(Executors.java:515)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.NullPointerException
at org.springframework.cloud.stream.function.StreamBridge.resolveDestination(StreamBridge.java:276)
at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.doSendMessage(FunctionConfiguration.java:604)
at org.springframework.cloud.stream.function.FunctionConfiguration$FunctionToDestinationBinder$1.handleMessageInternal(FunctionConfiguration.java:597)
at org.springframework.integration.handler.AbstractMessageHandler.handleMessage(AbstractMessageHandler.java:56)
... 32 more
I've tried different things, for instance manually creating the destination topic, or configuring an explicit destination binding with the same name assigned to the header (not a definitive solution, just for testing), but I keep getting this exception. I've also tried to provide a NewDestinationBindingCallback<>, and I can see from a log statement that the framework enters the method, but I nevertheless keep getting the same error.
This also happens with the other approach for integrating Spring Cloud Stream with Event Hubs, namely the azure-spring-cloud-stream-binder-eventhubs library.
As I said previously, I've found a workaround relying on StreamBridge, but that solution seems less desirable to me and I would like to understand what I'm missing.
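For reference, the StreamBridge workaround looks roughly like this; this is a hedged sketch with illustrative names, since the actual workaround code is not shown in the post:

// Hedged sketch of the StreamBridge-based workaround; tenant resolution is
// simplified here exactly as in the Function version above.
@Bean
public Consumer<Message<String>> routeMessageToTenantDestination(StreamBridge streamBridge) {
    return msg -> {
        final String tenantId = "test";
        final String destination = String.format(OUTPUT_DESTINATION_TEMPLATE, tenantId);
        streamBridge.send(destination, msg.getPayload()); // binding is created on the fly
    };
}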
EDIT: I made a small step forward and managed to make it work by downgrading the Spring Boot starter parent version from 2.6.2 to 2.4.4
<parent>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-parent</artifactId>
    <version>2.4.4</version>
    <relativePath/> <!-- lookup parent from repository -->
</parent>
and setting
<properties>
    <spring-cloud.version>2020.0.2</spring-cloud.version>
</properties>
instead of 2021.0.0 in pom.xml, as found in the sample provided by sobychacko. However, this seems like a regression, unless something is missing in my configuration; what is needed to make this work with the most recent version?
Not sure what exactly is causing the issues you have. I just created a basic sample app demonstrating the sendto.destination header and verified that the app works as expected. It is a multi-binder application with two Kafka clusters connected. The function consumes from the first cluster and then, using the sendto header, produces the output to the second cluster. Compare the code/config in this sample with your app and see what is missing.
I see references to StreamBridge in the stack trace you shared. However, when using the sendto.destination header, it shouldn't go through StreamBridge.

Changes to mongo collection not visible across sessions

This is my Mongo config:
@Configuration
public class MongoConfig {

    @Bean
    public MongoCustomConversions customConversions() {
        return new MongoCustomConversions(Arrays.asList(new OffsetDateTimeReadConverter(), new OffsetDateTimeWriteConverter()));
    }

    @Bean
    public ValidatingMongoEventListener validatingMongoEventListener() {
        return new ValidatingMongoEventListener(validator());
    }

    @Bean
    public LocalValidatorFactoryBean validator() {
        return new LocalValidatorFactoryBean();
    }
}
and:
spring:
  data:
    mongodb:
      uri: mongodb://localhost:27017/my-database
I have noticed that whatever changes I make to my collection in the Spring Boot service (whether I use a repository or MongoOperations, save or find) are visible only during the lifetime of the Spring Boot service and are NOT visible from the mongo command-line interface. Likewise, documents I add via the mongo command line are NOT visible to the Spring Boot service.
To my knowledge I have only one instance of MongoDB running; only one is visible in Task Manager. I double-checked the names of the database and collection, too.
What could be the reason?
I have found that the problem was caused by embedded MongoDB:
<dependency>
    <groupId>de.flapdoodle.embed</groupId>
    <artifactId>de.flapdoodle.embed.mongo</artifactId>
    <scope>test</scope>
</dependency>
Despite the scope being test, the embedded server also ran for the main configuration. I could observe this behaviour only in Eclipse, not in IntelliJ.
I could solve, or at least circumvent, the problem by excluding the embedded auto-configuration:
import org.springframework.boot.autoconfigure.mongo.embedded.EmbeddedMongoAutoConfiguration;

@SpringBootApplication(exclude = EmbeddedMongoAutoConfiguration.class)
public class MyApplication { ... }

Using Spring Boot auto configuration of MongoDB with Camel, how to know what application.properties

I'm trying to add a Camel route for MongoDB to a working Spring Boot project. I'm using Mongo with Spring Boot autoconfigure, and it worked pretty easily.
I was confused about how to specify the bean that Spring Boot generates, but I finally found an answer to a related question on SO saying the name of the bean is "mongo". So I changed my route to .to("mongodb:mongo?....
Now Spring is trying to connect with default parameters (localhost, port 27017, etc.). So how do I figure out which properties to set in application.properties for the connection parameters? The documentation isn't being helpful here.
[Edit: I managed to figure this out. The below works now.]
Here are the Maven dependencies I added:
<dependency>
    <groupId>org.apache.camel</groupId>
    <artifactId>camel-mongodb</artifactId>
    <version>${camel-version}</version>
</dependency>
<dependency>
    <groupId>org.apache.camel.springboot</groupId>
    <artifactId>camel-mongodb-starter</artifactId>
    <version>${camel-version}</version>
</dependency>
And here are the additions to my application.properties file
spring.data.mongodb.host=<IP>
spring.data.mongodb.port=27017
spring.data.mongodb.database=dev
spring.data.mongodb.username=test
spring.data.mongodb.password=password
And the Camel route:
package Order;

import org.apache.camel.builder.RouteBuilder;
import org.springframework.stereotype.Component;

@Component
public class OrderRouter extends RouteBuilder {

    @Override
    public void configure() {
        // Process message
        from("jms:topic:order")
            .log("JMS Message: ${body}")
            .choice()
                .when().jsonpath("$.[?(@.type=='partial')]")
                    .to("mongodb:mongo?database=dev&collection=order&operation=insert");
    }
}
Does this mean I need to define a bean when connecting with Camel? Looking at the documentation, it seems a bean should be generated automatically by adding camel-mongodb-starter along with the application.properties entries:
https://camel.apache.org/components/latest/mongodb-component.html#_spring_boot_auto_configuration
I found the Spring bean name, but only by looking around for examples...
spring.data.mongodb
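If the starter's auto-configuration does not supply it, the client bean that the mongodb:mongo endpoint resolves by name could also be declared explicitly. A minimal sketch, hypothetical and assuming the 3.x mongo-java-driver API:

// Hypothetical sketch: declare the client bean that the Camel endpoint
// "mongodb:mongo" looks up by name; host and port are placeholders.
@Bean(name = "mongo")
public com.mongodb.MongoClient mongo() {
    return new com.mongodb.MongoClient("localhost", 27017);
}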

How to enable mongo connection pool monitoring with spring-data-mongodb in XML?

I am using spring-data-mongodb 1.10.12 with the mongo-java-driver 3.6.4. I recently upgraded from a lower driver version, and now my Mongo connection pool monitoring is broken because there is no ConnectionPoolStatisticsMBean registered. According to the documentation for that driver version: "JMX connection pool monitoring is disabled by default. To enable it add a com.mongodb.management.JMXConnectionPoolListener instance via MongoClientOptions."
However, the clientOptionsType in the spring-data-mongo XML schema does not allow setting that value, unless I am missing something. Is there any way, with spring-data-mongodb, to turn on connection pool monitoring through XML?
Here is my XML for the Mongo beans:
<mongo:mongo-client id="mongo"
                    host="${mongo.hostname:#{null}}"
                    replica-set="${mongo.replica.set:#{null}}"
                    port="${mongo.port}"
                    credentials="'${mongo.username}:${mongo.password}@${mongo.auth.db.name}?uri.authMechanism=${mongo.auth.mechanism:SCRAM-SHA-1}'">
    <mongo:client-options connections-per-host="${mongo.connections-per-host:40}"
                          threads-allowed-to-block-for-connection-multiplier="${mongo.threads-blocked-per-connection:3}"
                          connect-timeout="${mongo.connection-timeout:10000}"
                          max-wait-time="${mongo.maxWaitTime:120000}"
                          socket-keep-alive="${mongo.socketKeepAlive:true}"
                          socket-timeout="${mongo.socketTimeout:0}"
                          read-preference="${mongo.read.preference:PRIMARY_PREFERRED}"
                          write-concern="${mongo.write.concern:ACKNOWLEDGED}"/>
</mongo:mongo-client>
and my pom dependencies
<properties>
    <mongo-version>3.6.4</mongo-version>
    <spring-data-version>1.10.12.RELEASE</spring-data-version>
</properties>
<dependencies>
    <dependency>
        <groupId>org.mongodb</groupId>
        <artifactId>mongo-java-driver</artifactId>
        <version>${mongo-version}</version>
    </dependency>
    <dependency>
        <groupId>org.springframework.data</groupId>
        <artifactId>spring-data-mongodb</artifactId>
        <version>${spring-data-version}</version>
    </dependency>
</dependencies>
It is true that there is no way, through the spring-data-mongodb schema, to add a connection pool listener, but the folks who maintain the repo suggested a solution: use a BeanPostProcessor to alter the MongoClientOptions before they are passed to the Mongo client, like so (the enclosing class name below is illustrative):
public class JmxPoolListenerPostProcessor implements BeanPostProcessor {

    @Override
    public Object postProcessAfterInitialization(Object bean, String beanName) throws BeansException {
        if (bean instanceof MongoClientOptions) {
            // Rebuild the options with the JMX pool listener attached
            return MongoClientOptions.builder((MongoClientOptions) bean)
                    .addConnectionPoolListener(new JMXConnectionPoolListener())
                    .build();
        }
        return bean;
    }
}
Doing so successfully registered ConnectionPoolStatisticsMBeans for me.
I tackled the very same challenge. In my case, the Spring configuration was originally done in XML. I managed to combine the XML configuration with Java configuration, because Java configuration gives you more flexibility to configure the MongoClientOptions:
@Configuration
public class MongoClientWrapper {

    @Bean
    public MongoClient mongo() {
        // credentials:
        MongoCredential credential = MongoCredential.createCredential("user", "auth-db", "password".toCharArray());

        MongoClientOptions options = MongoClientOptions.builder()
                .addConnectionPoolListener(new MyConnectionPoolListener())
                .build();

        return new MongoClient(
                new ServerAddress("localhost", 27017), // replica-set
                Arrays.asList(credential),
                options);
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongo(), database);
    }

    ...
}
Hope this helps someone...
In my project, adding the BeanPostProcessor was useless, because the MongoClientOptions bean was not automatically instantiated.
I had to create the bean manually to add a connection pool listener in my environment:
@Bean
public MongoClientOptions myMongoClientOptions() {
    return MongoClientOptions.builder()
            .addConnectionPoolListener(new JMXConnectionPoolListener())
            .build();
}

Spring Cloud Stream unable to detect message router

I'm trying to set up a simple Spring Cloud Stream Sink but keep running into the following error.
I've tried several binders and they all give the same error.
"SEVERE","logNameSource":"org.springframework.boot.diagnostics.LoggingFailureAnalysisReporter","message":"
***************************
APPLICATION FAILED TO START
***************************
Description:
Parameter 0 of method binderAwareRouterBeanPostProcessor in org.springframework.cloud.stream.config.BindingServiceConfiguration required a bean of type '[ Lorg.springframework.integration.router.AbstractMappingMessageRouter;' that could not be found.
Action:
Consider defining a bean of type '[ Lorg.springframework.integration.router.AbstractMappingMessageRouter;' in your configuration.
I'm trying to use a simple Sink to log an incoming message from a Kafka topic:
@EnableBinding(Sink.class)
public class ReadEMPMesage {

    private static Logger logger = LoggerFactory.getLogger(ReadEMPMesage.class);

    public ReadEMPMesage() {
        System.out.println("In constructor");
    }

    @StreamListener(Sink.INPUT)
    public void loggerSink(String ccpEvent) {
        logger.info("Received" + ccpEvent);
    }
}
and my configuration is as follows
# Test consumer properties
spring.kafka.consumer.auto-offset-reset=earliest
spring.kafka.consumer.group-id=testEmbeddedKafkaApplication
spring.kafka.consumer.key-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
spring.kafka.consumer.value-deserializer=org.apache.kafka.common.serialization.ByteArrayDeserializer
# Binding properties
spring.cloud.stream.bindings.output.destination=testEmbeddedOut
spring.cloud.stream.bindings.input.destination=testEmbeddedIn
spring.cloud.stream.bindings.output.producer.headerMode=raw
spring.cloud.stream.bindings.input.consumer.headerMode=raw
spring.cloud.stream.bindings.input.group=embeddedKafkaApplication
and my pom
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-stream-kafka</artifactId>
</dependency>
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-stream</artifactId>
</dependency>
TL;DR: check your version of Spring Boot and try upgrading it a few minor revs.
I ran into this problem on a project after upgrading from Spring Cloud Dalston.RELEASE to Spring Cloud Edgware.SR4. It was strange because other projects worked fine, but there was a single one that didn't.
After further investigation I realized that the troublemaker project was using Spring Boot 1.5.3.RELEASE while the others were using 1.5.9.RELEASE.
After upgrading Spring Boot to 1.5.9.RELEASE, things started working.