Java configuration of MongoDB with latest Spring/Spring Boot release version

I want to create a MongoTemplate bean using Spring Boot 2.3.4.RELEASE. Earlier I had configured it with 2.0.5.RELEASE as below.
@Bean
public MongoDbFactory mongoDbFactory() {
    return new SimpleMongoDbFactory(new MongoClientURI("mongodb://localhost:27017/test"));
}

@Bean
public MongoTemplate mongoTemplate() {
    System.out.println("check");
    MongoTemplate mongoTemplate = new MongoTemplate(mongoDbFactory());
    return mongoTemplate;
}
I upgraded the Spring Boot version to 2.3.4.RELEASE. Because SimpleMongoDbFactory has been deprecated since 2.0.0, I am unable to use this configuration. I tried to implement it in the following way for the newer version:
@Bean
public MongoDatabaseFactory mongoDbFactory() {
    //return new SimpleMongoClientDatabaseFactory("mongodb://localhost:27017/test");
    return new SimpleMongoClientDatabaseFactory(new MongoClientURI("mongodb://localhost:27017/MGXPI413"));
}

@Bean
public MongoTemplate mongoTemplate() {
    MongoTemplate template = new MongoTemplate(mongoDbFactory());
    return template;
}
I do not want to downgrade the version. I have searched a lot about this but could not find the correct configuration. Please let me know how to do it.
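For reference, a minimal sketch of what this can look like on Spring Boot 2.3.x, assuming the same local connection string as above: SimpleMongoClientDatabaseFactory accepts the connection string directly (as a String or a ConnectionString), so MongoClientURI is no longer needed.

import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoDatabaseFactory;
import org.springframework.data.mongodb.core.MongoTemplate;
import org.springframework.data.mongodb.core.SimpleMongoClientDatabaseFactory;

@Configuration
public class MongoConfig {

    @Bean
    public MongoDatabaseFactory mongoDbFactory() {
        // The factory takes the connection string itself; the database name
        // ("test" here) comes from the URI path segment.
        return new SimpleMongoClientDatabaseFactory("mongodb://localhost:27017/test");
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }
}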

Related

Configuring the Batch Configurer to include transaction with Spring Data MongoDb in Spring Batch

In our application, we implemented Spring Data MongoDB transactions by following this guide:
https://www.baeldung.com/spring-data-mongodb-transactions
However, we keep facing the issue below. I am unsure how I should override the BatchConfigurer.
I have tried to follow this guide, but it uses an earlier version of Spring Data MongoDB, and some of its classes are deprecated:
https://dzone.com/articles/spring-batch-goodies-with-mongodb
Error:
Description:
The bean 'transactionManager', defined in class path resource [org/springframework/batch/core/configuration/annotation/SimpleBatchConfiguration.class], could not be registered. A bean with that name has already been defined in class path resource [com/pragnamic/common/infrastructure/MongoDbConfig.class] and overriding is disabled.
Action:
Consider renaming one of the beans or enabling overriding by setting spring.main.allow-bean-definition-overriding=true
MongoDbConfig
@Configuration
public class MongoDbConfig extends AbstractMongoClientConfiguration {

    private final List<Converter<?, ?>> converters = new ArrayList<>();

    @Value("${spring.data.mongodb.uri}")
    private String uri;

    @Value("${spring.data.mongodb.database}")
    private String database;

    @Bean
    MongoTransactionManager transactionManager(MongoDatabaseFactory mongoDatabaseFactory) {
        return new MongoTransactionManager(mongoDatabaseFactory);
    }

    @Override
    protected String getDatabaseName() {
        return database;
    }

    @Override
    public MongoClient mongoClient() {
        ConnectionString connectionString = new ConnectionString(uri);
        MongoClientSettings mongoClientSettings = MongoClientSettings.builder()
                .applyConnectionString(connectionString)
                .build();
        return MongoClients.create(mongoClientSettings);
    }

    @Bean
    public MongoCustomConversions customConversions() {
        converters.add(new DomainObjectIdWriterConverter());
        return new MongoCustomConversions(converters);
    }
}
We are also using the latest version of Spring Data MongoDB:
<dependency>
    <groupId>org.springframework.boot</groupId>
    <artifactId>spring-boot-starter-data-mongodb</artifactId>
</dependency>
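One way out of the name clash (a sketch, not from the original thread): rename the @Bean method in MongoDbConfig from transactionManager to something like mongoTransactionManager, so it no longer collides with the transactionManager bean that SimpleBatchConfiguration registers, and then hand it to Spring Batch by overriding DefaultBatchConfigurer.getTransactionManager():

import org.springframework.batch.core.configuration.annotation.DefaultBatchConfigurer;
import org.springframework.context.annotation.Configuration;
import org.springframework.data.mongodb.MongoTransactionManager;
import org.springframework.transaction.PlatformTransactionManager;

@Configuration
public class MongoBatchConfigurer extends DefaultBatchConfigurer {

    private final MongoTransactionManager mongoTransactionManager;

    public MongoBatchConfigurer(MongoTransactionManager mongoTransactionManager) {
        this.mongoTransactionManager = mongoTransactionManager;
    }

    @Override
    public PlatformTransactionManager getTransactionManager() {
        // Spring Batch asks the BatchConfigurer for its transaction manager,
        // so step transactions run against MongoDB instead of a JDBC DataSource.
        return mongoTransactionManager;
    }
}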

Exception while creating a ProcessEngine with PostgreSQL

I’m trying to update the Camunda DMN table programmatically and deploy it again after the update.
But while creating a process engine, I am getting an exception about the H2 driver, even though my project uses a PostgreSQL database for the Camunda tables.
ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneInMemProcessEngineConfiguration()
        .buildProcessEngine();
org.camunda.bpm.engine.repository.Deployment deployment = processEngine.getRepositoryService()
        .createDeployment()
        .addString(fileName, Dmn.convertToString(dmnModelInstance))
        .name("Deployment after update")
        .deploy();
java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.h2.Driver
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.initializeDriver(UnpooledDataSource.java:221)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.doGetConnection(UnpooledDataSource.java:200)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.doGetConnection(UnpooledDataSource.java:196)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.getConnection(UnpooledDataSource.java:93)
at org.apache.ibatis.datasource.pooled.PooledDataSource.popConnection(PooledDataSource.java:385)
at org.apache.ibatis.datasource.pooled.PooledDataSource.getConnection(PooledDataSource.java:89)
at org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl.initDatabaseType(ProcessEngineConfigurationImpl.java:1300)
createStandaloneInMemProcessEngineConfiguration() defaults to an in-memory H2 database, which is why the engine tries to load org.h2.Driver even though your application uses PostgreSQL. You need to create the DataSource bean explicitly, or declare the datasource attributes in your bootstrap.yml or application.properties file.
@Configuration
public class ExampleProcessEngineConfiguration {

    @Bean
    public DataSource dataSource() {
        // Use a JNDI data source or read the properties from
        // env or a properties file.
        // Note: the following shows only a simple data source
        // for an in-memory H2 database.
        SimpleDriverDataSource dataSource = new SimpleDriverDataSource();
        dataSource.setDriverClass(org.h2.Driver.class);
        dataSource.setUrl("jdbc:h2:mem:camunda;DB_CLOSE_DELAY=-1");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }

    @Bean
    public SpringProcessEngineConfiguration processEngineConfiguration() {
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource());
        config.setTransactionManager(transactionManager());
        config.setDatabaseSchemaUpdate("true");
        config.setHistory("audit");
        config.setJobExecutorActivate(true);
        return config;
    }

    @Bean
    public ProcessEngineFactoryBean processEngine() {
        ProcessEngineFactoryBean factoryBean = new ProcessEngineFactoryBean();
        factoryBean.setProcessEngineConfiguration(processEngineConfiguration());
        return factoryBean;
    }

    @Bean
    public RepositoryService repositoryService(ProcessEngine processEngine) {
        return processEngine.getRepositoryService();
    }

    @Bean
    public RuntimeService runtimeService(ProcessEngine processEngine) {
        return processEngine.getRuntimeService();
    }

    @Bean
    public TaskService taskService(ProcessEngine processEngine) {
        return processEngine.getTaskService();
    }

    // more engine services and additional beans ...
}
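Since the goal here is PostgreSQL rather than H2, the dataSource() bean above can point at Postgres instead. A sketch with a hypothetical URL and placeholder credentials; the org.postgresql.Driver dependency must be on the classpath:

@Bean
public DataSource dataSource() {
    SimpleDriverDataSource dataSource = new SimpleDriverDataSource();
    dataSource.setDriverClass(org.postgresql.Driver.class);
    dataSource.setUrl("jdbc:postgresql://localhost:5432/camunda"); // hypothetical URL
    dataSource.setUsername("camunda"); // placeholder credentials
    dataSource.setPassword("camunda");
    return dataSource;
}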

I am getting error Table 'test.batch_job_instance' doesn't exist

I am new to Spring Batch. I have configured my job with an in-memory repository, but it still seems to be using the DB to persist the job metadata.
My Spring Batch configuration is:
@Configuration
public class BatchConfiguration {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobBuilderFactory jobBuilder;

    @Bean
    public JobLauncher jobLauncher() throws Exception {
        SimpleJobLauncher job = new SimpleJobLauncher();
        job.setJobRepository(getJobRepo());
        job.afterPropertiesSet();
        return job;
    }

    @Bean
    public PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository getJobRepo() throws Exception {
        return new MapJobRepositoryFactoryBean(getTransactionManager()).getObject();
    }

    @Bean
    public Step step1(JdbcBatchItemWriter<Person> writer) throws Exception {
        return stepBuilderFactory.get("step1")
                .<Person, Person>chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(writer)
                .repository(getJobRepo())
                .build();
    }

    @Bean
    public Job job(@Qualifier("step1") Step step1) throws Exception {
        return jobBuilder.get("myJob").start(step1).repository(getJobRepo()).build();
    }
}
How do I resolve the above issue?
If you are using Spring Boot, a simple property in your application.properties will solve the issue:
spring.batch.initialize-schema=ALWAYS
For a non-Spring Boot setup: this error shows up when a DataSource bean is declared in the batch configuration. To work around the problem, I added an embedded datasource, since I didn't want to create those tables in the application database:
@Bean
public DataSource mysqlDataSource() {
    // create your application datasource here
}

@Bean
@Primary
public DataSource batchEmbeddedDatasource() {
    // in-memory datasource required by Spring Batch
    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    return builder.setType(EmbeddedDatabaseType.H2)
            .addScript("classpath:schema-drop-h2.sql")
            .addScript("classpath:schema-h2.sql")
            .build();
}
The initialization scripts can be found inside the spring-batch-core-xxx.jar under the org.springframework.batch.core package. Note that I used an in-memory database, but the solution is also valid for other database systems.
For those who face the same problem with a MySQL database on CentOS (or most Unix-based systems): table names are case-sensitive on Linux. Setting lower_case_table_names=1 solved the problem. You can find the official documentation here.
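For reference, the setting belongs in the MySQL server configuration (commonly /etc/my.cnf; the path varies by distribution) and takes effect after a server restart:

[mysqld]
lower_case_table_names=1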
For those using versions greater than Spring Boot 2.5, this worked inside application.properties:
spring.batch.jdbc.initialize-schema = ALWAYS
This solved my case:
spring.batch.jdbc.initialize-schema=ALWAYS

Configuration of chained transaction manager in SDN4 after migration from SDN3

At the moment I am trying to migrate from SDN3 to SDN4. In my project I use two databases, Neo4j and MySQL, so I end up with a chained transaction manager. However, after the migration I have a problem with its configuration. Before the migration I had this:
@Bean(name = "transactionManager")
@Autowired
public PlatformTransactionManager neo4jTransactionManager(
        LocalContainerEntityManagerFactoryBean entityManagerFactory, GraphDatabaseService graphDatabaseService)
        throws Exception {
    JtaTransactionManager neoTransactionManager = new JtaTransactionManagerFactoryBean(graphDatabaseService)
            .getObject();
    neoTransactionManager.setRollbackOnCommitFailure(true);
    neoTransactionManager.setAllowCustomIsolationLevels(true);
    JpaTransactionManager mysqlTransactionManager = new JpaTransactionManager(entityManagerFactory.getObject());
    return new ChainedTransactionManager(mysqlTransactionManager, neoTransactionManager);
}
Now I have something like this:
@Bean(name = "transactionManager")
@Autowired
public PlatformTransactionManager neo4jTransactionManager(
        LocalContainerEntityManagerFactoryBean entityManagerFactory, Neo4jTransactionManager neo4jTransactionManager)
        throws Exception {
    Neo4jTransactionManager neoTransactionManager = neo4jTransactionManager;
    JpaTransactionManager mysqlTransactionManager = new JpaTransactionManager(entityManagerFactory.getObject());
    return new ChainedTransactionManager(mysqlTransactionManager, neoTransactionManager);
}
However, the project could not be deployed on the server because of this exception:
Caused by: org.springframework.beans.factory.BeanCreationException: Could not autowire method: public org.springframework.transaction.PlatformTransactionManager com.project.config.ApplicationConfig.neo4jTransactionManager(org.springframework.orm.jpa.LocalContainerEntityManagerFactoryBean,org.springframework.data.neo4j.transaction.Neo4jTransactionManager) throws java.lang.Exception; nested exception is org.springframework.beans.factory.NoSuchBeanDefinitionException: No qualifying bean of type [org.springframework.data.neo4j.transaction.Neo4jTransactionManager] found for dependency: expected at least 1 bean which qualifies as autowire candidate for this dependency. Dependency annotations: {}
When the mentioned part of the configuration is commented out, the project deploys properly, but there is then, obviously, an exception about a missing transaction when saving to the MySQL database.
How should I configure this chained transaction manager in SDN4? It is hard to find any examples, because SDN4 is quite recent, and I really need to run Neo4j in standalone mode, so the migration seems to be a good idea.
With this configuration I managed to successfully deploy my application:
@Bean(name = "transactionManager")
@Autowired
public PlatformTransactionManager neo4jTransactionManager(
        LocalContainerEntityManagerFactoryBean entityManagerFactory,
        Session session) throws Exception {
    Neo4jTransactionManager neoTransactionManager = new Neo4jTransactionManager(session);
    JpaTransactionManager mysqlTransactionManager = new JpaTransactionManager(entityManagerFactory.getObject());
    return new ChainedTransactionManager(mysqlTransactionManager, neoTransactionManager);
}
I also had to add this element to my config:
@Override
@Bean
@Scope(value = "session", proxyMode = ScopedProxyMode.TARGET_CLASS)
public Session getSession() throws Exception {
    return super.getSession();
}

@EnableMongoAuditing for MongoDB on Cloud Foundry / MongoLab

My setup works locally but not when I deploy it to Cloud Foundry / MongoLab.
The config is very similar to the docs.
My local Spring config:
@Configuration
@Profile("dev")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringMongoConfiguration extends AbstractMongoConfiguration {

    @Override
    protected String getDatabaseName() {
        return "myDb";
    }

    @Override
    public Mongo mongo() throws Exception {
        return new MongoClient("localhost");
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
This is the Cloud Foundry setup:
@Configuration
@Profile("cloud")
@EnableMongoAuditing
@EnableMongoRepositories(basePackages = "com.foo.model")
public class SpringCloudMongoDBConfiguration extends AbstractMongoConfiguration {

    private Cloud getCloud() {
        CloudFactory cloudFactory = new CloudFactory();
        return cloudFactory.getCloud();
    }

    @Bean
    public MongoDbFactory mongoDbFactory() {
        Cloud cloud = getCloud();
        MongoServiceInfo serviceInfo = (MongoServiceInfo) cloud.getServiceInfo(cloud.getCloudProperties().getProperty("cloud.services.mongo.id"));
        String serviceID = serviceInfo.getId();
        return cloud.getServiceConnector(serviceID, MongoDbFactory.class, null);
    }

    @Override
    protected String getDatabaseName() {
        Cloud cloud = getCloud();
        return cloud.getCloudProperties().getProperty("cloud.services.mongo.id");
    }

    @Override
    public Mongo mongo() throws Exception {
        Cloud cloud = getCloud();
        return new MongoClient(cloud.getCloudProperties().getProperty("cloud.services.mongo.connection.host"));
    }

    @Bean
    public MongoTemplate mongoTemplate() {
        return new MongoTemplate(mongoDbFactory());
    }

    @Bean
    public AuditorAware<User> myAuditorProvider() {
        return new SpringSecurityAuditorAware();
    }
}
And the error I'm getting when I try to save a document in Cloud Foundry is:
OUT ERROR: org.springframework.data.support.IsNewStrategyFactorySupport - Unexpected error
OUT java.lang.IllegalArgumentException: Unsupported entity com.foo.model.project.Project! Could not determine IsNewStrategy.
OUT at org.springframework.data.mongodb.core.MongoTemplate.insert(MongoTemplate.java:739)
OUT at org.springframework.web.method.support.InvocableHandlerMethod.doInvoke(InvocableHandlerMethod.java:221)
OUT at org.springframework.web.servlet.mvc.method.AbstractHandlerMethodAdapter.handle(AbstractHandlerMethodAdapter.java:85)
Any ideas? Is it my config file, etc.?
Thanks in advance,
Niclas
This is usually caused by the Mongo mapping metadata for your entities not being picked up at application startup. By default, AbstractMongoConfiguration uses the package of the actual configuration class to look for @Document annotated classes at startup.
The exception message makes me assume that SpringCloudMongoDBConfiguration is not located in any of the super-packages of com.foo.model.project. There are two solutions to this:
1. Stick to the convention of putting application configuration classes into the root package of your application. This will cause your application packages to be scanned for domain classes, the metadata to be obtained, and the is-new detection to work as expected.
2. Manually hand the package containing the domain classes to the infrastructure by overriding AbstractMongoConfiguration.getMappingBasePackage(), as sketched below.
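A minimal sketch of the second option, assuming the domain classes live under com.foo.model:

@Override
protected String getMappingBasePackage() {
    // Scan this package (and its sub-packages) for @Document classes at startup.
    return "com.foo.model";
}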
The reason you might see the configuration working in the local environment is that the mapping metadata might first be obtained through a non-persisting operation (e.g. a query), with everything else proceeding from there.