My setup works locally but not when I deploy it to Cloud Foundry/MongoLab.
The config is very similar to the docs.
My local Spring config:
@Configuration
@Profile("dev")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringMongoConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "myDb";
    }

    @Override
    public Mongo mongo() throws Exception {
        return new MongoClient("localhost");
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
This is the Cloud Foundry setup:
@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    private Cloud getCloud() {
        CloudFactory cloudFactory = new CloudFactory();
        return cloudFactory.getCloud();
    }

    @Bean
    public MongoDbFactory mongoDbFactory() {
        Cloud cloud = getCloud();
        MongoServiceInfo serviceInfo = (MongoServiceInfo) cloud.getServiceInfo(cloud.getCloudProperties().getProperty("cloud.services.mongo.id"));
        String serviceID = serviceInfo.getId();
        return cloud.getServiceConnector(serviceID, MongoDbFactory.class, null);
    }

    @Override
    protected String getDatabaseName() {
        Cloud cloud = getCloud();
        return cloud.getCloudProperties().getProperty("cloud.services.mongo.id");
    }

    @Override
    public Mongo mongo() throws Exception {
        Cloud cloud = getCloud();
        return new MongoClient(cloud.getCloudProperties().getProperty("cloud.services.mongo.connection.host"));
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
And the error I'm getting when I try to save a document in Cloud Foundry is:
OUT ERROR: org.springframework.data.support.IsNewStrategyFactorySupport - Unexpected error
OUT java.lang.IllegalArgumentException: Unsupported entity com.foo.model.project.Project! Could not determine IsNewStrategy.
OUT at org.springframework.data.mongodb.core.MongoTemplate.insert(MongoTemplate.java:739)
OUT at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
OUT at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
Any ideas? Is it my config file, etc.?
Thanks in advance
Niclas
This is usually caused when the Mongo mapping metadata for the entities is not obtained because they are not scanned at application startup. By default, AbstractMongoConfiguration uses the package of the actual configuration class to look for @Document-annotated classes at startup.
The exception message makes me assume that SpringCloudMongoDBConfiguration is not located in any of the super-packages of com.foo.model.project. There are two solutions to this:
Stick to the convention of putting application configuration classes into the root package of your application. This will cause your application packages to be scanned for domain classes, the metadata to be obtained, and the is-new detection to work as expected.
Manually hand the package containing the domain classes to the infrastructure by overriding AbstractMongoConfiguration.getMappingBasePackage() (see the sketch below).
The reason you might see the configuration working in the local environment is that the mapping metadata might be obtained through a non-persisting operation (e.g. a query), and everything else proceeds from there.
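A minimal sketch of the second option, assuming the domain classes live under com.foo.model (note that, depending on the Spring Data MongoDB version, the method to override is getMappingBasePackage() returning a String or getMappingBasePackages() returning a Collection):
@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    // Hand the package containing the @Document classes to the mapping
    // infrastructure so their metadata is picked up at startup.
    @Override
    protected String getMappingBasePackage() {
        return "com.foo.model";
    }

    // ... mongoDbFactory(), mongo(), mongoTemplate() and the auditor bean as shown above
}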
Related
I have been working on an application with Spring WebFlux and reactive MongoDB. There I used MongoDB Atlas as the database and it worked fine.
Recently I had to introduce Mongo custom conversions to handle ZonedDateTime objects.
@Configuration
public class MongoReactiveConfiguration extends AbstractReactiveMongoConfiguration {

    @Override
    public MongoCustomConversions customConversions() {
        ZonedDateTimeReadConverter zonedDateTimeReadConverter = new ZonedDateTimeReadConverter();
        ZonedDateTimeWriteConverter zonedDateTimeWriteConverter = new ZonedDateTimeWriteConverter();
        List<Converter<?, ?>> converterList = new ArrayList<>();
        converterList.add(zonedDateTimeReadConverter);
        converterList.add(zonedDateTimeWriteConverter);
        return new MongoCustomConversions(converterList);
    }

    @Override
    protected String getDatabaseName() {
        return "stlDB";
    }
}
However, now I can no longer connect to MongoDB Atlas; it ignores the property spring.data.mongodb.uri and tries to connect to a local server with the default configuration.
I tried
@EnableAutoConfiguration(exclude = {MongoReactiveAutoConfiguration.class})
but then it ignored the above conversions as well. Is there any other configuration to override in AbstractReactiveMongoConfiguration to ignore the default server IP and port?
I had the same issue and could not find a solution other than configuring converters differently, without extending AbstractReactiveMongoConfiguration:
@Configuration
public class MongoAlternativeConfiguration {

    @Bean
    public MongoCustomConversions mongoCustomConversions() {
        return new MongoCustomConversions(
                Arrays.asList(
                        new ZonedDateTimeReadConverter(),
                        new ZonedDateTimeWriteConverter()));
    }
}
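For reference, the converters referenced above can look roughly like this (a sketch; storing ZonedDateTime as a java.util.Date in UTC is an assumption, not something stated in the original post):
import java.time.ZoneOffset;
import java.time.ZonedDateTime;
import java.util.Date;
import org.springframework.core.convert.converter.Converter;

// Each class would normally live in its own file.
public class ZonedDateTimeReadConverter implements Converter<Date, ZonedDateTime> {
    @Override
    public ZonedDateTime convert(Date source) {
        // Mongo stores dates as java.util.Date; re-attach a zone when reading
        return source.toInstant().atZone(ZoneOffset.UTC);
    }
}

public class ZonedDateTimeWriteConverter implements Converter<ZonedDateTime, Date> {
    @Override
    public Date convert(ZonedDateTime source) {
        // Convert to an Instant so Mongo can persist it as a native date
        return Date.from(source.toInstant());
    }
}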
I need to implement a Spring Boot - MongoDB application where there are 2 Mongo DBs that have exactly the same database name and collections. Based on the user making a request, I need to choose whether to fetch data from DB1 or DB2 (the only difference is the host IP in the Mongo URI).
E.g. I need some way to create 2 MongoTemplates, like mTempA and mTempB, in my repository and, based on some condition, use either of the templates to execute a query as below:
@Repository
public class MyCustomRepository {

    private Logger logger = LoggerFactory.getLogger(MyCustomRepository.class);

    @Autowired
    private MongoTemplateA mongoTemplateA; // Need to know if this is possible & how

    @Autowired
    private MongoTemplateB mongoTemplateB; // Need to know if this is possible & how

    public List<MyModel> findByCriteria(MyRequest request) {
        List<MyModel> result;
        //Query query = <build query based on request>
        if (request.getUserType().equals("A")) {
            result = mongoTemplateA.find(query, MyModel.class);
        } else {
            result = mongoTemplateB.find(query, MyModel.class);
        }
        logger.debug("Result fetched with {} records", result.size());
        return result;
    }
}
I don't want to have 2 separate repos (classes or interfaces) or different models. I just want 2 different MongoTemplates injected into a single repo.
Is this possible? If yes, please give some example code.
I have followed the tutorial below:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot
As rightly pointed out by @Lucia, below is how it can be done:
Have 2 different configuration classes:
@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateA")
public class MongoConfigA {
    // Configuration class for DB 1 access
}

@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateB")
public class MongoConfigB {
    // Configuration class for DB 2 access
}
Add a class that reads the custom Mongo DB properties from application.properties:
@ConfigurationProperties(prefix = "mongodb")
public class MultipleMongoProperties {

    private MongoProperties adb = new MongoProperties();
    private MongoProperties bdb = new MongoProperties();

    public MongoProperties getAdb() {
        return adb;
    }

    public MongoProperties getBdb() {
        return bdb;
    }
}
Add a configuration class to create mongoTemplates:
@Configuration
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {

    @Autowired
    private MultipleMongoProperties mongoProperties = new MultipleMongoProperties();

    @Bean(name = "mongoTemplateA")
    @Primary
    public MongoTemplate mongoTemplateA() {
        return new MongoTemplate(aDbFactory(this.mongoProperties.getAdb()));
    }

    @Bean(name = "mongoTemplateB")
    public MongoTemplate mongoTemplateB() {
        return new MongoTemplate(bDbFactory(this.mongoProperties.getBdb()));
    }

    @Bean
    @Primary
    public MongoDbFactory aDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }

    @Bean
    public MongoDbFactory bDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }
}
Add the declarations below to your service/repository:
@Autowired
@Qualifier("mongoTemplateA")
private MongoTemplate mongoTemplateA;

@Autowired
@Qualifier("mongoTemplateB")
private MongoTemplate mongoTemplateB;
Add the properties below to your application.properties:
mongodb.adb.uri=mongodb://user:pass@myhost1:27017/adb
mongodb.bdb.uri=mongodb://user:pass@myhost2:27017/bdb
If you have a Mongo replica set, the URIs can be set as:
mongodb.adb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/adb?replicaSet=rsName
mongodb.bdb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/bdb?replicaSet=rsName
Based on your logic, use either of the templates.
Though, there are a few catches:
Notice the @Primary annotation; one bean needs to be marked as primary. I haven't found any solution that works if no template is marked as primary.
If either of the Mongo DBs is down and the application is started/restarted, the application will not start/deploy. To avoid this, @Autowired needs to be changed to @Autowired(required = false).
If either of the Mongo DBs is down and the application is already running, it automatically uses the other Mongo DB (the one that is not down). So, even if you want to use DB A, if it's down, requests are processed with DB B and vice versa.
As of now, I'm able to connect to Cassandra via the following code:
import com.datastax.driver.core.Cluster;
import com.datastax.driver.core.Session;
import org.apache.commons.io.IOUtils;

public static Session connection() {
    Cluster cluster = Cluster.builder()
            .addContactPoints("IP1", "IP2")
            .withCredentials("user", "password")
            .withSSL()
            .build();
    Session session = null;
    try {
        session = cluster.connect("database_name");
        session.execute("CQL Statement");
    } finally {
        IOUtils.closeQuietly(session);
        IOUtils.closeQuietly(cluster);
    }
    return session;
}
The problem is that I need to write to Cassandra in a Spring Batch project. Most of the starter kits seem to use a JdbcBatchItemWriter to write to a MySQL database from a chunk. Is this possible? It seems that a JdbcBatchItemWriter cannot connect to a Cassandra database.
The current ItemWriter code is below:
@Bean
public JdbcBatchItemWriter<Person> writer() {
    JdbcBatchItemWriter<Person> writer = new JdbcBatchItemWriter<Person>();
    writer.setItemSqlParameterSourceProvider(new BeanPropertyItemSqlParameterSourceProvider<Person>());
    writer.setSql("INSERT INTO people (first_name, last_name) VALUES (:firstName, :lastName)");
    writer.setDataSource(dataSource);
    return writer;
}
Spring Data Cassandra provides repository abstractions for Cassandra that you should be able to use in conjunction with the RepositoryItemWriter to write to Cassandra from Spring Batch.
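A minimal sketch of that approach, assuming a Spring Data repository such as PersonRepository extends CrudRepository<Person, String> (the repository name and ID type are hypothetical):
@Bean
public RepositoryItemWriter<Person> writer(PersonRepository repository) {
    // Delegates each chunk to the repository's save method
    RepositoryItemWriter<Person> writer = new RepositoryItemWriter<>();
    writer.setRepository(repository);
    writer.setMethodName("save");
    return writer;
}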
It is possible to extend Spring Batch to support Cassandra by customising ItemReader and ItemWriter.
ItemWriter example:
public class CassandraBatchItemWriter<Company> implements ItemWriter<Company>, InitializingBean {

    protected static final Log logger = LogFactory.getLog(CassandraBatchItemWriter.class);

    private final Class<Company> aClass;

    @Autowired
    private CassandraTemplate cassandraTemplate;

    public CassandraBatchItemWriter(final Class<Company> aClass) {
        this.aClass = aClass;
    }

    @Override
    public void afterPropertiesSet() throws Exception { }

    @Override
    public void write(final List<? extends Company> items) throws Exception {
        logger.debug("Write operation is performing, the size is " + items.size());
        if (!items.isEmpty()) {
            logger.info("Deleting in a batch performing...");
            cassandraTemplate.deleteAll(aClass);
            logger.info("Inserting in a batch performing...");
            cassandraTemplate.insert(items);
        } else {
            logger.debug("Items list is empty...");
        }
    }
}
Then you can register it as a @Bean in a @Configuration class:
@Bean
public ItemWriter<Company> writer(final DataSource dataSource) {
    final CassandraBatchItemWriter<Company> writer = new CassandraBatchItemWriter<Company>(Company.class);
    return writer;
}
Full source code can be found in the GitHub repo: Spring-Batch-with-Cassandra
Using MongoDB with a Spring Data MongoDB backend. Using Mongo repositories too.
This is my current configuration:
/** MONGO CLIENT *****************************************************/

@Override
protected String getDatabaseName() {
    return db;
}

@Override
public Mongo mongo() throws Exception {
    /* I'm too dumb to automate this, so I just switch it manually */
    return new Fongo("meh").getMongo(); // Using it for unit tests
    //return new MongoClient(url, port); // Using it for IT
}

@Override
protected Collection<String> getMappingBasePackages() {
    return Arrays.asList("com.foo");
}

/** BEANS ************************************************************/

@Bean
public Jackson2RepositoryPopulatorFactoryBean repositoryPopulator() {
    Resource foo1 = (Resource) new ClassPathResource("collections/foo1.json");
    Resource foo2 = (Resource) new ClassPathResource("collections/foo2.json");
    Jackson2RepositoryPopulatorFactoryBean factory = new Jackson2RepositoryPopulatorFactoryBean();
    factory.setResources(new Resource[] { foo1, foo2 });
    return factory;
}
The repository populator is what I added, and it's what gives me trouble.
When I compile and test my project, I get a DuplicateKeyException because, I guess, the repository populator triggers more than once.
These are the annotations I use on my test classes:
@RunWith(SpringRunner.class)
@SpringBootTest
@AutoConfigureMockMvc
Is my application configured correctly? What's a reasonable solution to stop the repository populator from triggering multiple times?
Solution based on this guide (in Spanish, sorry): https://www.paradigmadigital.com/dev/tests-integrados-spring-boot-fongo
The Fongo configuration needs to be separated from the Mongo configuration.
The Fongo configuration must be placed under test/.
Just take the example code from the guide as a base (and use MongoConfiguration.java too; my current config is wrong) and you will be fine.
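As a rough sketch of what the separated test configuration can look like (class names and packages here are illustrative, following the guide's approach of keeping the Fongo config under src/test/java):
@Configuration
@EnableMongoRepositories(basePackages = "com.foo")
public class FongoTestConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "test-db";
    }

    @Override
    public Mongo mongo() {
        // In-memory Mongo used only by tests, so the production
        // configuration can keep a real MongoClient
        return new Fongo("in-memory").getMongo();
    }

    @Override
    protected Collection<String> getMappingBasePackages() {
        return Arrays.asList("com.foo");
    }
}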
My motivation is to easily find out, during maintenance of a large Spring Data JPA project, which repository method generated a given SQL statement.
I have CustomerRepository as in GitHub spring-data-examples.
I changed CustomizableTraceInterceptor to:
public @Bean CustomizableTraceInterceptor interceptor() {
    CustomizableTraceInterceptor interceptor = new CustomizableTraceInterceptor();
    interceptor.setHideProxyClassNames(true);
    interceptor.setEnterMessage("Entering $[targetClassShortName].$[methodName]()");
    return interceptor;
}
I would like to see in the log:
Entering CustomerRepository.save()
but instead I am getting:
Entering SimpleJpaRepository.save()
Thanks a lot for your help.
I solved the problem by extending CustomizableTraceInterceptor:
import org.springframework.aop.framework.AopProxyUtils;
import org.springframework.aop.interceptor.CustomizableTraceInterceptor;
import org.springframework.data.jpa.repository.support.SimpleJpaRepository;

public class MethodTraceInterceptor extends CustomizableTraceInterceptor {

    @Override
    protected Class<?> getClassForLogging(Object target) {
        Class<?> classForLogging = super.getClassForLogging(target);
        if (SimpleJpaRepository.class.equals(classForLogging)) {
            Class<?>[] interfaces = AopProxyUtils.proxiedUserInterfaces(target);
            if (interfaces.length > 0) {
                return interfaces[0];
            }
        }
        return classForLogging;
    }
}
but I find it strange that such overriding is necessary. Ideally, Spring trace interceptors should resolve the class for logging correctly, even for Spring Data repositories.
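For completeness, the custom interceptor can then be registered in place of the stock one, mirroring the original bean definition (the advisor/pointcut wiring stays the same and is not shown):
public @Bean MethodTraceInterceptor interceptor() {
    MethodTraceInterceptor interceptor = new MethodTraceInterceptor();
    interceptor.setHideProxyClassNames(true);
    interceptor.setEnterMessage("Entering $[targetClassShortName].$[methodName]()");
    return interceptor;
}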