How to inject Spring AOP advice for a MongoDB call?

I am new to Spring AOP, but I have a case where I need to implement AOP advice for a MongoDB call (a MongoDB update). I have tried different approaches, but I keep getting a 'Pointcut is not well formed' error or the warning 'no match for this type name: arg string [Xlint:invalidAbsoluteTypeName]' (even if I give the absolute name of the argument). Can anyone help with how to inject advice for a MongoDB update call?
@Aspect
@Component
public class DBStatsLoggerAspect {

    private static final Logger log = LoggerFactory.getLogger(DBStatsLoggerAspect.class);

    private static final Document reqStatsCmdBson = new Document("getLastRequestStatistics", 1);

    private DbCallback<Document> requestStatsDbCallback = new DbCallback<Document>() {
        @Override
        public Document doInDB(MongoDatabase db) throws MongoException, DataAccessException {
            return db.runCommand(reqStatsCmdBson);
        }
    };

    @After("execution(public * com.mongodb.client.MongoCollection.*(..)) && args(org.bson.conversions.Bson.filter,..)")
    public void requestStatsLoggerAdvice(JoinPoint joinPoint) {
        MongoTemplate mongoTemplate = (MongoTemplate) joinPoint.getTarget();
        log.info(mongoTemplate.execute(requestStatsDbCallback).toJson());
    }
}
The actual db call where I need to inject the advice (filter and updatePart are both of type org.bson.conversions.Bson, and 'collection' is a com.mongodb.client.MongoCollection):
Document result = collection.findOneAndUpdate(filter, updatePart, new FindOneAndUpdateOptions().upsert(false));

I am not a Spring or MongoDB user, just an AOP expert, but from what I see I am wondering about two things:
You are intercepting execution(public * com.mongodb.client.MongoCollection.*(..)), so joinPoint.getTarget() is of type MongoCollection. Why do you think you can cast it to MongoTemplate? That would only work if your MongoCollection happened to be a MongoTemplate subclass. To me this looks like a bug.
MongoCollection is not a Spring component but a third-party class. Spring AOP can only intercept calls on Spring components, which it does by creating dynamic proxies for those components and adding aspect interceptors to said proxies. So no matter how correct or incorrect your pointcut is, it will never trigger.
What you can do instead is switch from Spring AOP to full-blown AspectJ. The standard way to do this is to activate AspectJ load-time weaving (LTW), as sketched below.
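For illustration, a minimal sketch of activating LTW in a Spring application. This assumes the AspectJ weaving agent is on the JVM command line (e.g. -javaagent:/path/to/aspectjweaver.jar) and that the aspect is declared in a META-INF/aop.xml; the configuration class name is illustrative:
import org.springframework.context.annotation.Configuration;
import org.springframework.context.annotation.EnableLoadTimeWeaving;

@Configuration
@EnableLoadTimeWeaving(aspectjWeaving = EnableLoadTimeWeaving.AspectJWeaving.ENABLED)
public class LoadTimeWeavingConfig {
    // With LTW active, the aspect can be woven into third-party classes such as
    // com.mongodb.client.MongoCollection, which plain Spring AOP cannot proxy.
}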

Related

View Single Record in Spring Boot and Mongo

I'm trying to develop a simple CRUD application using Spring and MongoDB.
While developing the view-single-record function I get no error, but it returns null when I try it in Postman.
Could you please help me find what is wrong with my code?
Controller
@GetMapping("/patient/{id}")
public Optional<Patients> findTicketById(@PathVariable("id") @NotNull String id) {
    System.out.println(id);
    return patientRepository.findById(id);
}
Repository
@Repository
public interface PatientRepository extends MongoRepository<Patients, Long> {
    Optional<Patients> findById(String id);
}
You can use Optional.ifPresentOrElse; for usage examples, see:
Functional style of Java 8's Optional.ifPresent and if-not-Present?
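For illustration, a sketch of how the controller could apply it (this assumes Java 9+ for Optional.ifPresentOrElse; the names are taken from the question):
@GetMapping("/patient/{id}")
public ResponseEntity<Patients> findTicketById(@PathVariable("id") @NotNull String id) {
    Optional<Patients> patient = patientRepository.findById(id);
    // Act on presence/absence, e.g. for logging
    patient.ifPresentOrElse(
            p -> System.out.println("Found patient " + id),
            () -> System.out.println("No patient with id " + id));
    // Return 200 with the body, or 404 when the Optional is empty
    return patient.map(ResponseEntity::ok)
            .orElseGet(() -> ResponseEntity.notFound().build());
}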

Is there any way to force Spring not to use/create the '_class' field in the mapping?

The thing is, on our production servers we have a mapping for Elasticsearch with dynamic set to strict. Currently we use a REST-level client to communicate with Elasticsearch, but we would like to migrate to spring-data-elasticsearch.
Unfortunately, it seems Spring Data forces you to use either _class or @TypeAlias, which also interferes with the mapping itself. Is there any way to use spring-data without _class or @TypeAlias?
OK, I have found a workaround for it.
Be careful using it when your Elasticsearch model uses inheritance, since the _class field is what allows subtypes to be reconstructed.
To solve this problem, create a class like this:
public class CustomMappingEsConverter extends MappingElasticsearchConverter {

    public CustomMappingEsConverter(MappingContext<? extends ElasticsearchPersistentEntity<?>, ElasticsearchPersistentProperty> mappingContext,
                                    GenericConversionService conversionService) {
        super(mappingContext, conversionService);
    }

    @Override
    public Document mapObject(@Nullable Object source) {
        Document target = Document.create();
        if (source != null) {
            this.write(source, target);
        }
        target.remove("_class"); // workaround: strip the _class field before it reaches Elasticsearch
        return target;
    }
}
And register the bean:
@Configuration
public class MappingEsConfiguration {

    @Bean
    @Primary
    public CustomMappingEsConverter customMappingElasticsearchConverter(MappingContext<? extends ElasticsearchPersistentEntity<?>, ElasticsearchPersistentProperty> mappingContext,
                                                                        GenericConversionService genericConversionService) {
        return new CustomMappingEsConverter(mappingContext, genericConversionService);
    }
}
After these changes I was able to use Spring Data without the additional _class field.
Currently this is not possible. There is an open issue for that.
Edit 25.04.2021: this feature will be available from the next version (4.3) on.
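For reference, a sketch of what the 4.3+ approach might look like. This assumes the writeTypeHint attribute added to @Document in spring-data-elasticsearch 4.3; the entity itself is hypothetical, so verify against your version:
import org.springframework.data.elasticsearch.annotations.Document;
import org.springframework.data.elasticsearch.annotations.WriteTypeHint;

// Hypothetical entity: writeTypeHint = WriteTypeHint.FALSE suppresses the _class field for this index
@Document(indexName = "products", writeTypeHint = WriteTypeHint.FALSE)
public class Product {
    private String id;
    private String name;
    // getters and setters omitted
}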

Axon Framework - Configuring Multiple EventStores in Axon Configuration

We have a use case wherein each aggregate root should have a different event store. We have used the following configuration, where currently we have only one event store configured, as below:
@Configuration
@EnableDiscoveryClient
public class AxonConfig {

    private static final String DOMAIN_EVENTS_COLLECTION_NAME = "coll-capture.domainEvents";
    // private static final String DOMAIN_EVENTS_COLLECTION_NAME_TEST =
    //         "coll-capture.domainEvents-test";

    @Value("${mongodb.database}")
    private String databaseName;

    @Value("${spring.application.name}")
    private String appName;

    @Bean
    public RestTemplate restTemplate() {
        CloseableHttpClient httpClient = HttpClientBuilder.create().build();
        HttpComponentsClientHttpRequestFactory clientHttpRequestFactory =
                new HttpComponentsClientHttpRequestFactory(httpClient);
        return new RestTemplate(clientHttpRequestFactory);
    }

    @Bean
    @Profile({"uat", "prod"})
    public CommandRouter springCloudHttpBackupCommandRouter(DiscoveryClient discoveryClient,
                                                            Registration localInstance,
                                                            RestTemplate restTemplate,
                                                            @Value("${axon.distributed.spring-cloud.fallback-url}") String messageRoutingInformationEndpoint) {
        return new SpringCloudHttpBackupCommandRouter(discoveryClient,
                localInstance,
                new AnnotationRoutingStrategy(),
                serviceInstance -> appName.equalsIgnoreCase(serviceInstance.getServiceId()),
                restTemplate,
                messageRoutingInformationEndpoint);
    }

    @Bean
    public Repository<TestEnquiry> testEnquiryRepository(EventStore eventStore) {
        return new EventSourcingRepository<>(TestEnquiry.class, eventStore);
    }

    @Bean
    public Repository<Test2Enquiry> test2enquiryRepository(EventStore eventStore) {
        return new EventSourcingRepository<>(Test2Enquiry.class, eventStore);
    }

    @Bean
    public EventStorageEngine eventStorageEngine(MongoClient client) {
        MongoTemplate mongoTemplate = new DefaultMongoTemplate(client, databaseName)
                .withDomainEventsCollection(DOMAIN_EVENTS_COLLECTION_NAME);
        return new MongoEventStorageEngine(mongoTemplate);
    }
}
Now we want to configure "DOMAIN_EVENTS_COLLECTION_NAME_TEST" (just as an example) in an EventStorageEngine as well. How can we achieve support for multiple event stores, and select for the tracking processors which collection they should be part of?
If you are going the route of segregating the event streams, then combining them from an event-handling perspective could indeed become a necessity. Especially when you have several bounded contexts, segregating the event streams into distinct storage solutions is reasonable.
If you want to define which message source (event store) is used by a TrackingEventProcessor, you will have to deal with the EventProcessingConfigurer. More specifically, you should invoke the EventProcessingConfigurer#registerTrackingEventProcessor(String, Function<Configuration, StreamableMessageSource<TrackedEventMessage<?>>>) method. The first String parameter is the name of the processor you want to configure as being "tracking". The second parameter defines a Function which gives you the message source to be used by this TrackingEventProcessor (TEP). It is here that you should provide the event store you want this TEP to ingest events from, as sketched below.
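A minimal sketch of such a registration inside a Spring @Configuration class (the processor name and the testEventStore bean are hypothetical):
@Autowired
public void configureProcessors(EventProcessingConfigurer configurer,
                                EmbeddedEventStore testEventStore) { // hypothetical second store
    configurer.registerTrackingEventProcessor(
            // name of the processing group to run as a tracking processor
            "test-processor",
            // this TEP will stream its events from the segregated test store
            configuration -> testEventStore
    );
}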
Pairing them up at a later stage could also occur of course, which is also supported by Axon Framework. This boils down to a specific form of StreamableMessageSource implementation.
More specifically, you can use the MultiStreamableMessageSource, where you can connect any number of StreamableMessageSources together.
Note that Axon's EmbeddedEventStore is in essence an implementation of a StreamableMessageSource. Once you have built the MultiStreamableMessageSource, you will have to specify it as the messageSource for your TrackingEventProcessors, of course.
As a last note, know that this solution can only be used with TrackingEventProcessors, as those are the only Event Processors provided by Axon that ingest a StreamableMessageSource as the source for their events.
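For completeness, a sketch of combining two event stores this way (the bean wiring is illustrative; verify the builder API against your Axon version):
@Bean
public MultiStreamableMessageSource combinedMessageSource(EmbeddedEventStore mainEventStore,
                                                          EmbeddedEventStore testEventStore) {
    // Combine both stores into one streamable source a single TEP can track
    return MultiStreamableMessageSource.builder()
            .addMessageSource("main", mainEventStore)
            .addMessageSource("test", testEventStore)
            .build();
}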

How to set MongoDB socket-keep-alive in a Spring Boot application?

In Spring Boot, if we want to connect to MongoDB, we can either create a configuration class for MongoDB or put the datasource in application.properties.
I am following the second way:
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin
I am getting this error:
"Timeout while receiving message; nested exception is com.mongodb.MongoSocketReadTimeoutException: Timeout while receiving message"
I get this error if I have not used my app for 6-7 hours and then try to hit any controller to retrieve data from MongoDB. After one or two tries I am able to get the data.
Question - Is this the normal behavior of MongoDB?
So, in my case it is closing the socket after some particular number of hours.
I read some blogs where it was written that you can set socket-keep-alive so the connection pool will not be closed.
In a Spring Boot MongoDB connection, we can pass options in the URI like this:
spring.data.mongodb.uri = mongodb://mongodb0.example.com:27017/admin/?replicaSet=test&connectTimeoutMS=300000
So I want to set the socket-keep-alive option for spring.data.mongodb.uri, like replicaSet here, but I searched the official site and could not find any such option.
You can achieve this by providing a MongoClientOptions bean. Spring Boot's MongoAutoConfiguration will pick this MongoClientOptions bean up and use it further on:
@Bean
public MongoClientOptions mongoClientOptions() {
    return MongoClientOptions.builder()
            .socketKeepAlive(true)
            .build();
}
Also note that the socket-keep-alive option is deprecated (and defaults to true) since mongo-driver version 3.5 (used by spring-data-mongodb since its version 2.0.0).
Alternatively, you can pass this option using a MongoClientOptionsFactoryBean:
@Bean
public MongoClientOptions mongoClientOptions() {
    try {
        final MongoClientOptionsFactoryBean bean = new MongoClientOptionsFactoryBean();
        bean.setSocketKeepAlive(true);
        bean.afterPropertiesSet();
        return bean.getObject();
    } catch (final Exception e) {
        throw new BeanCreationException(e.getMessage(), e);
    }
}
Here is an example of this configuration, extending AbstractMongoConfiguration:
@Configuration
public class DataportalApplicationConfig extends AbstractMongoConfiguration {

    // @Value: inject property values into components
    @Value("${spring.data.mongodb.uri}")
    private String uri;

    @Value("${spring.data.mongodb.database}")
    private String database;

    @Override
    protected String getDatabaseName() {
        return database;
    }

    /**
     * Configure the MongoClient with the uri.
     *
     * @return MongoClient.class
     */
    @Override
    public MongoClient mongoClient() {
        // Seed the builder with the options bean defined above, so the
        // configured socketKeepAlive actually takes effect
        return new MongoClient(new MongoClientURI(uri, MongoClientOptions.builder(mongoClientOptions())));
    }
}

Problems while connecting to two MongoDBs via Spring

I'm trying to connect to two different MongoDBs with Spring (1.5.2; we included Spring in an internal framework, therefore it is not the latest version yet), and this already works partially but not fully. More precisely, I found a strange behavior, which I will describe below after showing my setup.
So this is what I have done so far:
Project structure
backend
config
domain
customer
internal
repository
customer
internal
service
In config I have my Mongo configurations.
I created one base class which extends AbstractMongoConfiguration. This class holds fields for database, host, etc., which are filled with the properties from an application.yml. It also holds a couple of methods for creating a MongoClient and a SimpleMongoDbFactory.
Furthermore, there are two custom configuration classes, one for each MongoDB; both extend the base class (a sketch of which follows).
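For context, a minimal sketch of what such a base class might look like. This is hypothetical, reconstructed from the description above, since the question does not show it:
// Hypothetical reconstruction of the base class described above
public abstract class BaseMongoConfig extends AbstractMongoConfiguration {

    // Filled from application.yml via @ConfigurationProperties on the subclasses
    private String host;
    private int port;
    private String database;
    private String username;
    private String password;

    // setters omitted for brevity

    @Override
    protected String getDatabaseName() {
        return database;
    }

    protected ServerAddress getAddress() {
        return new ServerAddress(host, port);
    }

    protected List<MongoCredential> getCredentials() {
        return Collections.singletonList(
                MongoCredential.createCredential(username, database, password.toCharArray()));
    }

    protected MongoClient getMongoClient(ServerAddress address, List<MongoCredential> credentials) {
        return new MongoClient(address, credentials);
    }

    protected SimpleMongoDbFactory getSimpleMongoDbFactory(MongoClient client, String databaseName) {
        return new SimpleMongoDbFactory(client, databaseName);
    }
}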
Here is how they are coded:
Primary Connection
@Primary
@EntityScan(basePackages = "backend.domain.customer")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.customer"},
        mongoTemplateRef = "customerDataMongoTemplate")
@ConfigurationProperties(prefix = "customer.mongodb")
public class CustomerDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "customerDataMongoTemplate";

    @Override
    @Bean(name = CustomerDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
The second configuration class looks pretty similar. Here it is:
@EntityScan(basePackages = "backend.domain.internal")
@Configuration
@EnableMongoRepositories(
        basePackages = {"backend.repository.internal"},
        mongoTemplateRef = InternalDataMongoConnection.TEMPLATE_NAME
)
@ConfigurationProperties(prefix = "internal.mongodb")
public class InternalDataMongoConnection extends BaseMongoConfig {

    public static final String TEMPLATE_NAME = "internalDataMongoTemplate";

    @Override
    @Bean(name = InternalDataMongoConnection.TEMPLATE_NAME)
    public MongoTemplate mongoTemplate() {
        MongoClient client = getMongoClient(getAddress(), getCredentials());
        SimpleMongoDbFactory factory = getSimpleMongoDbFactory(client, getDatabaseName());
        return new MongoTemplate(factory);
    }
}
As you can see, I use @EnableMongoRepositories to define which repository should use which connection.
My repositories are defined just as described in the Spring documentation.
However, here is one example, located in the package backend.repository.customer:
public interface ContactHistoryRepository extends MongoRepository<ContactHistoryEntity, String> {
    ContactHistoryEntity findById(String id);
}
The problem is that with this setup my backend always uses only the primary connection. Interestingly, when I remove the bean name for the MongoTemplate (just @Bean), the backend then uses the secondary connection (InternalDataMongoConnection). This is true for all defined repositories.
My question is: how can I get my backend to really take care of both connections? Did I perhaps miss another parameter/configuration?
Since this is a pretty extensive post, I apologise if I forgot to mention something. Please ask for missing information in the comments.
I found the answer.
In my package structure there was an empty configuration class (from a colleague) with the annotations @Configuration and @EnableMongoRepositories. This triggered the automatic wiring process of Spring Data and therefore led to the problems I reported above.
I simply deleted the class and now it works as it should!