Use multiple Mongo DBs in the same application for the same model & same Repository

I need to implement a Spring Boot + MongoDB application where there are 2 Mongo DBs which have exactly the same database name & collections. Based on the user making a request, I need to choose whether to fetch data from DB1 or DB2 (the only difference is the host IP in the Mongo URI).
E.g. I need some way to create 2 MongoTemplates, like mTempA & mTempB, in my repository & based on some condition use either of the templates to execute a query, as below:
@Repository
public class MyCustomRepository {

    private Logger logger = LoggerFactory.getLogger(MyCustomRepository.class);

    @Autowired
    private MongoTemplateA mongoTemplateA; // Need to know if this is possible & how
    @Autowired
    private MongoTemplateB mongoTemplateB; // Need to know if this is possible & how

    public List<MyModel> findByCriteria(MyRequest request) {
        List<MyModel> result;
        // Query query = <build query based on request>
        if (request.getUserType().equals("A")) {
            result = mongoTemplateA.find(query, MyModel.class);
        } else {
            result = mongoTemplateB.find(query, MyModel.class);
        }
        logger.debug("Result fetched with {} records", result.size());
        return result;
    }
}
I don't want to have 2 separate repositories (classes or interfaces) or different models. I just want to have 2 different MongoTemplates injected into a single repo.
Is this possible? If yes, please give some example code.
I have followed the tutorial below:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot

As rightly pointed out by @Lucia, below is how it can be done:
Have 2 different configuration classes:
@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateA")
public class MongoConfigA {
    // Configuration class for DB 1 access
}

@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateB")
public class MongoConfigB {
    // Configuration class for DB 2 access
}
Add a class which will read the custom MongoDB properties from application.properties:
@ConfigurationProperties(prefix = "mongodb")
public class MultipleMongoProperties {

    private MongoProperties adb = new MongoProperties();
    private MongoProperties bdb = new MongoProperties();

    public MongoProperties getAdb() {
        return adb;
    }

    public MongoProperties getBdb() {
        return bdb;
    }
}
Add a configuration class to create the MongoTemplates:
@Configuration
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {

    @Autowired
    private MultipleMongoProperties mongoProperties;

    @Bean(name = "mongoTemplateA")
    @Primary
    public MongoTemplate mongoTemplateA() {
        return new MongoTemplate(aDbFactory(this.mongoProperties.getAdb()));
    }

    @Bean(name = "mongoTemplateB")
    public MongoTemplate mongoTemplateB() {
        return new MongoTemplate(bDbFactory(this.mongoProperties.getBdb()));
    }

    @Bean
    @Primary
    public MongoDbFactory aDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }

    @Bean
    public MongoDbFactory bDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }
}
Add the below declarations to your service/repository:
@Autowired
@Qualifier("mongoTemplateA")
private MongoTemplate mongoTemplateA;

@Autowired
@Qualifier("mongoTemplateB")
private MongoTemplate mongoTemplateB;
Add the below properties to your application.properties:
mongodb.adb.uri=mongodb://user:pass@myhost1:27017/adb
mongodb.bdb.uri=mongodb://user:pass@myhost2:27017/bdb
If you have a Mongo replica set, the URI can be set as:
mongodb.adb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/adb?replicaSet=rsName
mongodb.bdb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/bdb?replicaSet=rsName
Based on your logic, use either of the templates.
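Putting it together, the repository from the question can then look like the sketch below (the query construction is elided, as in the original question):

@Repository
public class MyCustomRepository {

    private final Logger logger = LoggerFactory.getLogger(MyCustomRepository.class);

    @Autowired
    @Qualifier("mongoTemplateA")
    private MongoTemplate mongoTemplateA;

    @Autowired
    @Qualifier("mongoTemplateB")
    private MongoTemplate mongoTemplateB;

    public List<MyModel> findByCriteria(MyRequest request) {
        Query query = new Query(); // build criteria based on the request (elided)
        // Pick the template based on the user type from the request
        MongoTemplate template = request.getUserType().equals("A") ? mongoTemplateA : mongoTemplateB;
        List<MyModel> result = template.find(query, MyModel.class);
        logger.debug("Result fetched with {} records", result.size());
        return result;
    }
}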
Though, there are a few catches:
Notice the @Primary annotation; one bean needs to be marked as primary. I haven't found any solution for the case where no template is marked as primary.
If either of the Mongo DBs is down & the application is started/restarted, the application will not start/deploy. To avoid this, @Autowired needs to be changed to @Autowired(required = false), as shown after this list.
If either of the Mongo DBs is down & the application is already running, it automatically uses the 2nd Mongo DB (the one which is not down). So, even if you want to use DB A, if it's down, requests are processed with DB B & vice-versa.
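For the startup caveat, the relaxed injection points would look like this (the same declarations as above, just with required = false):

@Autowired(required = false)
@Qualifier("mongoTemplateA")
private MongoTemplate mongoTemplateA;

@Autowired(required = false)
@Qualifier("mongoTemplateB")
private MongoTemplate mongoTemplateB;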

Related

SpringBoot: Create document in mongodb on startup if not exists

I have a small service on Spring Boot with MongoDB as a DB.
I need to be able to create a small collection with one document (very basic: id, name, status) on startup. An analog of SQL's CREATE TABLE IF NOT EXISTS, but for Mongo. How do I do that?
I tried to initialize values in the document attributes, but it didn't help.
Currently, the collection and the document appear only if I use the API to add them.
You may want to use something like ApplicationRunner or CommandLineRunner, which can be defined as a bean.
Example:
@SpringBootApplication
public class MyApplication {

    public static void main(String[] args) {
        SpringApplication.run(MyApplication.class, args);
    }

    @Bean
    public CommandLineRunner initialize(MyRepository myRepository) {
        return args -> {
            // Insert elements into myRepository
        };
    }
}
Both CommandLineRunner and ApplicationRunner are functional interfaces, so we can use a lambda for them. Spring Boot will execute them at the startup of the application.
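For the "create if it does not exist" part, the runner body can first check whether the seed document is already there. A minimal sketch, assuming a Spring Data repository MyRepository for a hypothetical MyDocument entity with id, name, and status fields:

@Bean
public CommandLineRunner initialize(MyRepository myRepository) {
    return args -> {
        // Only insert the seed document if it is not already present,
        // analogous to CREATE TABLE IF NOT EXISTS
        if (!myRepository.existsById("initial-id")) {
            myRepository.save(new MyDocument("initial-id", "initial-name", "NEW"));
        }
    };
}

Saving through the repository also creates the collection implicitly on the first insert.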
You can also leverage Spring's internal event mechanism.
When your application is ready, Spring publishes the event ApplicationReadyEvent.
You can listen to this event and initialize your collection:
@Component
public class DataInit implements ApplicationListener<ApplicationReadyEvent> {

    private final MyRepository myRepository;

    public DataInit(MyRepository myRepository) {
        this.myRepository = myRepository;
    }

    @Override
    public void onApplicationEvent(ApplicationReadyEvent event) {
        // init data
    }
}
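The // init data body can reuse the same existence check shown in the CommandLineRunner sketch above.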

Uploading retrieved files in a FTP server to mongodb collection using spring boot

I have implemented a Spring Boot application to retrieve files from an FTP server and download them into my local directory.
Following is the code which I used to do that.
@Configuration
public class FTPConfiguration {

    @ServiceActivator(inputChannel = "ftpMGET")
    @Bean
    public FtpOutboundGateway getFiles() {
        FtpOutboundGateway gateway = new FtpOutboundGateway(sf(), "mget", "payload");
        gateway.setAutoCreateDirectory(true);
        gateway.setLocalDirectory(new File("./downloads/"));
        gateway.setFileExistsMode(FileExistsMode.REPLACE_IF_MODIFIED);
        gateway.setFilter(new AcceptOnceFileListFilter<>());
        gateway.setOutputChannelName("fileResults");
        return gateway;
    }

    @Bean
    public MessageChannel fileResults() {
        DirectChannel channel = new DirectChannel();
        channel.addInterceptor(tap());
        return channel;
    }

    @Bean
    public WireTap tap() {
        return new WireTap("logging");
    }

    @ServiceActivator(inputChannel = "logging")
    @Bean
    public LoggingHandler logger() {
        LoggingHandler logger = new LoggingHandler(LoggingHandler.Level.INFO);
        logger.setLogExpressionString("'Files:' + payload");
        return logger;
    }

    @Bean
    public DefaultFtpSessionFactory sf() {
        DefaultFtpSessionFactory sf = new DefaultFtpSessionFactory();
        sf.setHost("localhost");
        sf.setPort(2121);
        sf.setUsername("anonymous");
        sf.setPassword("");
        return sf;
    }

    @MessagingGateway(defaultRequestChannel = "ftpMGET", defaultReplyChannel = "fileResults")
    public interface GateFile {
        List<File> mget(String directory);
    }
}
Now I need to upload these files to my MongoDB automatically when I run this program.
Can anyone please help me or guide me on the steps I should follow?
Please take a look at the MongoDB support in Spring Integration: https://docs.spring.io/spring-integration/docs/current/reference/html/mongodb.html#mongodb-outbound-channel-adapter
You probably need to think about how to turn the files of that MGET result into POJOs and send them to that MongoDB channel adapter for storing as documents in some collection.
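A minimal sketch of such an outbound adapter, assuming a channel named mongoStore that receives the mapped POJOs (the channel and collection names here are made up; MongoDbStoringMessageHandler is the Spring Integration class behind the MongoDB outbound channel adapter, and depending on the Spring Data MongoDB version its constructor may take a MongoDbFactory instead):

@Bean
@ServiceActivator(inputChannel = "mongoStore")
public MessageHandler mongoOutboundAdapter(MongoDatabaseFactory mongoDbFactory) {
    // Stores each message payload as a document in the given collection
    MongoDbStoringMessageHandler handler = new MongoDbStoringMessageHandler(mongoDbFactory);
    handler.setCollectionNameExpression(new LiteralExpression("ftpFiles"));
    return handler;
}

A transformer subscribed between fileResults and mongoStore would then map each downloaded File to a POJO before it reaches the adapter.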

Using Spring Batch to write to a Cassandra Database

As of now, I'm able to connect to Cassandra via the following code:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;

public static Session connection() {
    Cluster cluster = Cluster.builder()
            .addContactPoints("IP1", "IP2")
            .withCredentials("user", "password")
            .withSSL()
            .build();
    Session session = null;
    try {
        session = cluster.connect("database_name");
        session.execute("CQL Statement");
    } finally {
        IOUtils.closeQuietly(session);
        IOUtils.closeQuietly(cluster);
    }
    return session;
}
The problem is that I need to write to Cassandra in a Spring Batch project. Most of the starter kits seem to use a JdbcBatchItemWriter to write to a MySQL database from a chunk. Is this possible? It seems that a JdbcBatchItemWriter cannot connect to a Cassandra database.
The current ItemWriter code is below:
@Bean
public JdbcBatchItemWriter<Person> writer() {
    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<Person>();
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Person>());
    writer.setSql("INSERT INTO people (first_name, last_name) VALUES (:firstName, :lastName)");
    writer.setDataSource(dataSource);
    return writer;
}
Spring Data Cassandra provides repository abstractions for Cassandra that you should be able to use in conjunction with the RepositoryItemWriter to write to Cassandra from Spring Batch.
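A minimal sketch of that combination, assuming a Spring Data Cassandra repository PersonRepository (a hypothetical name) for the Person entity:

@Bean
public RepositoryItemWriter<Person> writer(PersonRepository repository) {
    // Delegates each chunk of items to the repository's save method
    RepositoryItemWriter<Person> writer = new RepositoryItemWriter<>();
    writer.setRepository(repository);
    writer.setMethodName("save");
    return writer;
}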
It is possible to extend Spring Batch to support Cassandra by customising ItemReader and ItemWriter.
ItemWriter example:
public class CassandraBatchItemWriter<T> implements ItemWriter<T>, InitializingBean {

    protected static final Log logger = LogFactory.getLog(CassandraBatchItemWriter.class);

    private final Class<T> aClass;

    @Autowired
    private CassandraTemplate cassandraTemplate;

    public CassandraBatchItemWriter(final Class<T> aClass) {
        this.aClass = aClass;
    }

    @Override
    public void afterPropertiesSet() throws Exception { }

    @Override
    public void write(final List<? extends T> items) throws Exception {
        logger.debug("Write operation is performing, the size is " + items.size());
        if (!items.isEmpty()) {
            logger.info("Deleting in a batch performing...");
            cassandraTemplate.deleteAll(aClass);
            logger.info("Inserting in a batch performing...");
            cassandraTemplate.insert(items);
        } else {
            logger.debug("Items list is empty...");
        }
    }
}
Then you can inject it as a @Bean through @Configuration:
@Bean
public ItemWriter<Company> writer() {
    final CassandraBatchItemWriter<Company> writer = new CassandraBatchItemWriter<Company>(Company.class);
    return writer;
}
Full source code can be found in the GitHub repo: Spring-Batch-with-Cassandra

How to configure collections other than fs.files fs.chunks in GridFsTemplate in MongoDB

GridFsTemplate inserts data in the collections fs.files and fs.chunks.
Is there any way to make GridFsTemplate use my own collections?
E.g. in my project, I need to dump all files in attachments.files and attachments.chunks.
I am able to do it using plain GridFS like this:

DB mongoDB = mongoDbFactory.getDb();
GridFS fileStore = new GridFS(mongoDB, "attachment");

but I would like to do it by @Autowired GridFsTemplate, perform operations like gridFsTemplate.store(file, filename), and it should dump data in my own collections.
To be able to autowire it with a custom bucket, you can define a bean for it in the Mongo client configuration. It goes something like this:
Create a configuration class:
@Configuration
public class MongoConnectionConfig extends AbstractMongoClientConfiguration {

    // store this in the application.properties file: 'mongo.file-system-bucket.attachment=attachment'
    @Value("${mongo.file-system-bucket.attachment}")
    private String bucketAttachment;

    // store this in the application.properties file: 'mongo.file-system-bucket.image=image'
    @Value("${mongo.file-system-bucket.image}")
    private String bucketImage;

    @Autowired
    private MappingMongoConverter mongoConverter;

    // required by AbstractMongoClientConfiguration; the name here is hypothetical
    @Override
    protected String getDatabaseName() {
        return "myDatabase";
    }

    @Bean(name = "gridFsTemplateAttachments")
    public GridFsTemplate gridFsTemplateAttachments() throws Exception {
        return new GridFsTemplate(mongoDbFactory(), mongoConverter, bucketAttachment);
    }

    @Bean(name = "gridFsTemplateImages")
    public GridFsTemplate gridFsTemplateImages() throws Exception {
        return new GridFsTemplate(mongoDbFactory(), mongoConverter, bucketImage);
    }
}
Now, to use these beans, you can autowire them in your DAO and work with different buckets as per your requirements.
E.g.:
@Repository
public class AssetTransactionsDAO implements AssetTransactionsInterface {

    // Field names match the bean names above, so the right GridFsTemplate is injected
    @Autowired
    private GridFsTemplate gridFsTemplateImages;

    @Autowired
    private GridFsTemplate gridFsTemplateAttachments;

    @Override
    public String insertAttachment(AttachementModel attachment) {
        // do your stuff: gridFsTemplateAttachments.store(.....)
        return "";
    }
}

#EnableMongoAuditing for MongoDB on Cloud Foundry / mongolab

My setup works on my local machine but not when I deploy it to Cloud Foundry/mongolab.
The config is very similar to the docs.
My local Spring config:
@Configuration
@Profile("dev")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringMongoConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "myDb";
    }

    @Override
    public Mongo mongo() throws Exception {
        return new MongoClient("localhost");
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
This is the Cloud Foundry setup:
@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    private Cloud getCloud() {
        CloudFactory cloudFactory = new CloudFactory();
        return cloudFactory.getCloud();
    }

    @Bean
    public MongoDbFactory mongoDbFactory() {
        Cloud cloud = getCloud();
        MongoServiceInfo serviceInfo = (MongoServiceInfo) cloud.getServiceInfo(cloud.getCloudProperties().getProperty("cloud.services.mongo.id"));
        String serviceID = serviceInfo.getId();
        return cloud.getServiceConnector(serviceID, MongoDbFactory.class, null);
    }

    @Override
    protected String getDatabaseName() {
        Cloud cloud = getCloud();
        return cloud.getCloudProperties().getProperty("cloud.services.mongo.id");
    }

    @Override
    public Mongo mongo() throws Exception {
        Cloud cloud = getCloud();
        return new MongoClient(cloud.getCloudProperties().getProperty("cloud.services.mongo.connection.host"));
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
And the error I'm getting when I try to save a document on Cloud Foundry is:
OUT ERROR: org.springframework.data.support.IsNewStrategyFactorySupport - Unexpected error
OUT java.lang.IllegalArgumentException: Unsupported entity com.foo.model.project.Project! Could not determine IsNewStrategy.
OUT at org.springframework.data.mongodb.core.MongoTemplate.insert(MongoTemplate.java:739)
OUT at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
OUT at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
Any ideas? Is it my config file etc..?
Thanks in advance
Niclas
This is usually caused when the Mongo mapping metadata for your entities is not obtained because the entities are not scanned at application startup. By default, AbstractMongoConfiguration uses the package of the actual configuration class to look for @Document annotated classes at startup.
The exception message makes me assume that SpringCloudMongoDBConfiguration is not located in any of the super packages of com.foo.model.project. There are two solutions to this:
Stick to the convenience of putting application configuration classes into the root package of your application. This will cause your application packages to be scanned for domain classes, metadata to be obtained, and the is-new detection to work as expected.
Manually hand the package containing the domain classes to the infrastructure by overriding AbstractMongoConfiguration.getMappingBasePackage(), as shown in the sketch below.
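A minimal sketch of the second option, based on the cloud configuration from the question (getMappingBasePackage() is the AbstractMongoConfiguration hook; in newer versions the equivalent is getMappingBasePackages()):

@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    // Point the mapping subsystem at the package holding the @Document classes
    @Override
    protected String getMappingBasePackage() {
        return "com.foo.model";
    }

    // ... the rest of the configuration stays as above
}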
The reason you might see the configuration working in the local environment is that the mapping metadata might be obtained there through a non-persisting operation (e.g. a query), with everything else proceeding from that.