For a set of multiple databases, I have successfully configured the JDBC/JPA setup.
@Db1 @Bean DataSource dataSourceDb1();
@Db1 @Bean AbstractEntityManagerFactoryBean entityManagerFactoryDb1(@Db1 DataSource);
@Db1 @Bean TransactionManager transactionManagerDb1(@Db1 DataSource);
@Db2 @Bean DataSource dataSourceDb2();
@Db2 @Bean AbstractEntityManagerFactoryBean entityManagerFactoryDb2(@Db2 DataSource);
@Db2 @Bean TransactionManager transactionManagerDb2(@Db2 DataSource);
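(Here @Db1 and @Db2 are custom qualifier annotations; a minimal sketch of one, assuming the usual @Qualifier meta-annotation approach:)
import java.lang.annotation.ElementType;
import java.lang.annotation.Retention;
import java.lang.annotation.RetentionPolicy;
import java.lang.annotation.Target;
import org.springframework.beans.factory.annotation.Qualifier;

// Marker qualifier distinguishing Db1 beans from Db2 beans (Db2 mirrors this).
@Qualifier
@Target({ElementType.FIELD, ElementType.METHOD, ElementType.PARAMETER, ElementType.TYPE})
@Retention(RetentionPolicy.RUNTIME)
public @interface Db1 {
}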
As you can see, all configurations are separated per database.
Now I want to configure MyBatis for each database as well.
I know I can produce an SqlSessionFactory like this:
@Db1
@Bean
@ConfigurationProperties("application.db1.mybatis.configuration")
org.apache.ibatis.session.Configuration mybatisConfigurationDb1() {
    return new org.apache.ibatis.session.Configuration();
}

@Db1
@Bean
public SqlSessionFactory sqlSessionFactoryDb1(
        @Db1 final DataSource dataSource,
        @Db1 final org.apache.ibatis.session.Configuration configuration) throws Exception {
    final SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
    factoryBean.setDataSource(dataSource);
    factoryBean.setConfiguration(configuration);
    factoryBean.setVfs(SpringBootVFS.class); // TODO: is this required, and does it work as expected?
    return factoryBean.getObject();
}
My question is: how can I make the above MyBatis configuration for @Db1 use the @Db1 transactionManagerDb1?
And, in the same way, make another MyBatis configuration for @Db2 use the @Db2 transactionManagerDb2?
A SqlSessionFactory will use the transaction manager associated with its DataSource. MyBatis uses Spring's native transaction management, so you can configure DataSources and transaction managers as you normally would in Spring. The trick is to create multiple SqlSessionFactories and associate mappers with them.
The basic idea is to make two SqlSessionFactories and give them different names:
// configure DataSources and transaction managers as normal, then...
@Bean(name = "SessionFactory1")
public SqlSessionFactory sqlSessionFactory1(DataSource...) {
    ...
}

@Bean(name = "SessionFactory2")
public SqlSessionFactory sqlSessionFactory2(DataSource...) {
    ...
}
Then use the @MapperScan annotation to attach different mappers to each factory:
@MapperScan(basePackages = "foo.bar.mapper1", sqlSessionFactoryRef = "SessionFactory1")
@MapperScan(basePackages = "foo.bar.mapper2", sqlSessionFactoryRef = "SessionFactory2")
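Putting this together for the question's setup, a minimal sketch of the Db1 side (the mapper package foo.bar.mapper1 is an assumption, and the Db2 configuration would mirror this):
import javax.sql.DataSource;
import org.apache.ibatis.session.SqlSessionFactory;
import org.mybatis.spring.SqlSessionFactoryBean;
import org.mybatis.spring.annotation.MapperScan;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
@MapperScan(basePackages = "foo.bar.mapper1", sqlSessionFactoryRef = "SessionFactory1")
public class Db1MyBatisConfig {

    // No MyBatis-specific transaction wiring is needed: mappers bound to
    // SessionFactory1 join Spring transactions started by whichever
    // transaction manager drives the same DataSource (here the question's
    // transactionManagerDb1, because both are built on dataSourceDb1).
    @Bean(name = "SessionFactory1")
    public SqlSessionFactory sqlSessionFactory1(@Db1 final DataSource dataSource) throws Exception {
        final SqlSessionFactoryBean factoryBean = new SqlSessionFactoryBean();
        factoryBean.setDataSource(dataSource);
        return factoryBean.getObject();
    }
}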
As far as I can tell from the examples in the Spring Batch reference doc, objects like job/step/reader/writer are all marked as @Bean, like the following:
@Bean
public Job footballJob() {
    return this.jobBuilderFactory.get("footballJob")
            .listener(sampleListener())
            ...
            .build();
}

@Bean
public Step sampleStep(PlatformTransactionManager transactionManager) {
    return this.stepBuilderFactory.get("sampleStep")
            .transactionManager(transactionManager)
            .<String, String>chunk(10)
            .reader(itemReader())
            .writer(itemWriter())
            .build();
}
I have a scenario where the server side receives requests and runs jobs concurrently (different job names, or the same job name with different JobParameters). The idea is to new up a job object (including steps/readers/writers) in the concurrent threads, so I probably will not declare the job method as @Bean and will instead new a job each time.
There is also a difference in how parameters are passed to an object like a reader. When using @Bean, parameters must be put into, e.g., JobParameters and late-bound into the object using @StepScope, like the following example:
@StepScope
@Bean
public FlatFileItemReader<Foo> flatFileItemReader(@Value(
        "#{jobParameters['input.file.name']}") String name) {
    return new FlatFileItemReaderBuilder<Foo>()
            .name("flatFileItemReader")
            .resource(new FileSystemResource(name))
            .build();
}
When not using @Bean, I can just pass the parameter directly, with no need to put data into JobParameters, like the following:
public FlatFileItemReader<Foo> flatFileItemReader(String name) {
    return new FlatFileItemReaderBuilder<Foo>()
            .name("flatFileItemReader")
            .resource(new FileSystemResource(name))
            .build();
}
A simple test shows that it works without @Bean. But I want to confirm formally:
1. Is using @Bean on job/step/reader/writer mandatory or not?
2. If it is not mandatory, when I new up an object like a reader, do I need to call afterPropertiesSet() manually?
Thanks!
1. Is using @Bean on job/step/reader/writer mandatory or not?
No, it is not mandatory to declare batch artefacts as beans. But you would want to at least declare the Job as a bean to benefit from Spring's dependency injection (like injecting the job repository reference into the job, etc.) and to be able to do something like:
ApplicationContext context = new AnnotationConfigApplicationContext(MyJobConfig.class);
Job job = context.getBean(Job.class);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
jobLauncher.run(job, new JobParameters());
2. If it is not mandatory, when I new up an object like a reader, do I need to call afterPropertiesSet() manually?
I guess that by "when I new a object like reader" you mean creating a new instance manually. In that case yes, if the object is not managed by Spring, you need to call that method yourself. If the object is declared as a bean, Spring will call the afterPropertiesSet() method automatically. Here is a quick sample:
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
@Configuration
public class TestAfterPropertiesSet {

    @Bean
    public MyBean myBean() {
        return new MyBean();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(TestAfterPropertiesSet.class);
        MyBean myBean = context.getBean(MyBean.class);
        myBean.sayHello();
    }

    static class MyBean implements InitializingBean {

        @Override
        public void afterPropertiesSet() throws Exception {
            System.out.println("MyBean.afterPropertiesSet");
        }

        public void sayHello() {
            System.out.println("Hello");
        }
    }
}
This prints:
MyBean.afterPropertiesSet
Hello
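Applied to the reader from the question, a hedged sketch of manual use outside the container (Foo, the column name, and the input path are placeholders):
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.core.io.FileSystemResource;

public class ManualReaderExample {

    public static void main(String[] args) throws Exception {
        FlatFileItemReader<Foo> reader = new FlatFileItemReaderBuilder<Foo>()
                .name("flatFileItemReader")
                .resource(new FileSystemResource("input.csv")) // placeholder path
                .delimited()
                .names(new String[] {"value"})                 // placeholder column
                .targetType(Foo.class)
                .build();

        reader.afterPropertiesSet();         // Spring won't call this for you
        reader.open(new ExecutionContext()); // the ItemStream lifecycle is manual too
        Foo item;
        while ((item = reader.read()) != null) {
            System.out.println(item.getValue());
        }
        reader.close();
    }

    public static class Foo {
        private String value;

        public String getValue() { return value; }

        public void setValue(String value) { this.value = value; }
    }
}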
I need to implement a Spring Boot - MongoDB application where there are two Mongo DBs that have exactly the same database name and collections. Based on the user making the request, I need to choose whether to fetch data from DB1 or DB2 (the only difference is the host/IP in the Mongo URI).
E.g. I need some way to create two MongoTemplates, like mTempA and mTempB, in my repository and, based on some condition, use either template to execute the query, as below:
@Repository
public class MyCustomRepository {

    private Logger logger = LoggerFactory.getLogger(MyCustomRepository.class);

    @Autowired
    private MongoTemplate mongoTemplateA; // Need to know if this is possible & how

    @Autowired
    private MongoTemplate mongoTemplateB; // Need to know if this is possible & how

    public List<MyModel> findByCriteria(MyRequest request) {
        List<MyModel> result;
        Query query = new Query(); // build query based on request
        if (request.getUserType().equals("A")) {
            result = mongoTemplateA.find(query, MyModel.class);
        } else {
            result = mongoTemplateB.find(query, MyModel.class);
        }
        logger.debug("Result fetched with {} records", result.size());
        return result;
    }
}
I don't want to have two separate repos (classes or interfaces) or different models. I just want two different MongoTemplates injected into a single repo.
Is this possible? If yes, please give some example code.
I have followed the tutorial below:
https://dzone.com/articles/multiple-mongodb-connectors-with-spring-boot
As rightly pointed out by @Lucia, below is how it can be done:
Have two different configuration classes:
@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateA")
public class MongoConfigA {
    // Configuration class for DB 1 access
}

@Configuration
@EnableMongoRepositories(basePackages = "com.snk.repository", mongoTemplateRef = "mongoTemplateB")
public class MongoConfigB {
    // Configuration class for DB 2 access
}
Add a class that binds the custom Mongo connection properties from application.properties:
@ConfigurationProperties(prefix = "mongodb")
public class MultipleMongoProperties {

    private MongoProperties adb = new MongoProperties();
    private MongoProperties bdb = new MongoProperties();

    public MongoProperties getAdb() {
        return adb;
    }

    public MongoProperties getBdb() {
        return bdb;
    }
}
Add a configuration class to create the MongoTemplates:
@Configuration
@EnableConfigurationProperties(MultipleMongoProperties.class)
public class MultipleMongoConfig {

    @Autowired
    private MultipleMongoProperties mongoProperties;

    @Bean(name = "mongoTemplateA")
    @Primary
    public MongoTemplate mongoTemplateA() {
        return new MongoTemplate(aDbFactory(this.mongoProperties.getAdb()));
    }

    @Bean(name = "mongoTemplateB")
    public MongoTemplate mongoTemplateB() {
        return new MongoTemplate(bDbFactory(this.mongoProperties.getBdb()));
    }

    @Bean
    @Primary
    public MongoDbFactory aDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }

    @Bean
    public MongoDbFactory bDbFactory(final MongoProperties mongo) {
        return new SimpleMongoDbFactory(new MongoClientURI(mongo.getUri()));
    }
}
Add the below declarations to your service/repository:
@Autowired
@Qualifier("mongoTemplateA")
private MongoTemplate mongoTemplateA;

@Autowired
@Qualifier("mongoTemplateB")
private MongoTemplate mongoTemplateB;
Add the below properties to your application.properties:
mongodb.adb.uri=mongodb://user:pass@myhost1:27017/adb
mongodb.bdb.uri=mongodb://user:pass@myhost2:27017/bdb
If you have a Mongo replica set, the URIs can be set as:
mongodb.adb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/adb?replicaSet=rsName
mongodb.bdb.uri=mongodb://user:pass@myhost1,myhost2,myhost3/bdb?replicaSet=rsName
Based on your logic, use either of the templates.
There are a few catches, though:
Notice the @Primary annotation: one bean needs to be marked as primary. I haven't found any solution that works without marking one of the templates as primary.
If either Mongo DB is down when the application is started/restarted, the application will not start/deploy. To avoid this, @Autowired needs to be changed to @Autowired(required = false).
If either Mongo DB is down while the application is already running, it automatically uses the second Mongo DB (the one that is up). So even if you want to use DB A, if it's down, requests are processed against DB B, and vice versa.
I'm trying to update the Camunda DMN table programmatically and deploy it again after the update.
But while creating a process engine, I get an exception for the H2 driver, even though my project uses a PostgreSQL database for the Camunda tables.
ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneInMemProcessEngineConfiguration().buildProcessEngine();

org.camunda.bpm.engine.repository.Deployment deployment = processEngine.getRepositoryService()
        .createDeployment()
        .addString(fileName, Dmn.convertToString(dmnModelInstance))
        .name("Deployment after update").deploy();
java.sql.SQLException: Error setting driver on UnpooledDataSource. Cause: java.lang.ClassNotFoundException: org.h2.Driver
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.initializeDriver(UnpooledDataSource.java:221)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.doGetConnection(UnpooledDataSource.java:200)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.doGetConnection(UnpooledDataSource.java:196)
at org.apache.ibatis.datasource.unpooled.UnpooledDataSource.getConnection(UnpooledDataSource.java:93)
at org.apache.ibatis.datasource.pooled.PooledDataSource.popConnection(PooledDataSource.java:385)
at org.apache.ibatis.datasource.pooled.PooledDataSource.getConnection(PooledDataSource.java:89)
at org.camunda.bpm.engine.impl.cfg.ProcessEngineConfigurationImpl.initDatabaseType(ProcessEngineConfigurationImpl.java:1300)
You need to create the datasource bean explicitly, or declare the datasource attributes in your bootstrap.yml or application.properties file.
@Configuration
public class ExampleProcessEngineConfiguration {

    @Bean
    public DataSource dataSource() {
        // Use a JNDI data source or read the properties from
        // env or a properties file.
        // Note: The following shows only a simple data source
        // for an in-memory H2 database.
        SimpleDriverDataSource dataSource = new SimpleDriverDataSource();
        dataSource.setDriverClass(org.h2.Driver.class);
        dataSource.setUrl("jdbc:h2:mem:camunda;DB_CLOSE_DELAY=-1");
        dataSource.setUsername("sa");
        dataSource.setPassword("");
        return dataSource;
    }

    @Bean
    public PlatformTransactionManager transactionManager() {
        return new DataSourceTransactionManager(dataSource());
    }

    @Bean
    public SpringProcessEngineConfiguration processEngineConfiguration() {
        SpringProcessEngineConfiguration config = new SpringProcessEngineConfiguration();
        config.setDataSource(dataSource());
        config.setTransactionManager(transactionManager());
        config.setDatabaseSchemaUpdate("true");
        config.setHistory("audit");
        config.setJobExecutorActivate(true);
        return config;
    }

    @Bean
    public ProcessEngineFactoryBean processEngine() {
        ProcessEngineFactoryBean factoryBean = new ProcessEngineFactoryBean();
        factoryBean.setProcessEngineConfiguration(processEngineConfiguration());
        return factoryBean;
    }

    @Bean
    public RepositoryService repositoryService(ProcessEngine processEngine) {
        return processEngine.getRepositoryService();
    }

    @Bean
    public RuntimeService runtimeService(ProcessEngine processEngine) {
        return processEngine.getRuntimeService();
    }

    @Bean
    public TaskService taskService(ProcessEngine processEngine) {
        return processEngine.getTaskService();
    }

    // more engine services and additional beans ...
}
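Alternatively, if you want to keep the purely programmatic approach from the question, you can point the standalone configuration at PostgreSQL instead of the default in-memory H2 (which is what triggers the org.h2.Driver ClassNotFoundException). A hedged sketch; the URL and credentials are placeholders, and the PostgreSQL JDBC driver must be on the classpath:
// createStandaloneInMemProcessEngineConfiguration() defaults to H2;
// configure the JDBC settings explicitly to use PostgreSQL instead.
ProcessEngine processEngine = ProcessEngineConfiguration
        .createStandaloneProcessEngineConfiguration()
        .setJdbcDriver("org.postgresql.Driver")
        .setJdbcUrl("jdbc:postgresql://localhost:5432/camunda") // placeholder
        .setJdbcUsername("camunda")                             // placeholder
        .setJdbcPassword("secret")                              // placeholder
        .setDatabaseSchemaUpdate(ProcessEngineConfiguration.DB_SCHEMA_UPDATE_TRUE)
        .buildProcessEngine();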
I am new to Spring Batch. I have configured my job with an in-memory repository, but it still seems to be using the DB to persist the job metadata.
My Spring Batch configuration is:
@Configuration
public class BatchConfiguration {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobBuilderFactory jobBuilder;

    @Bean
    public JobLauncher jobLauncher() throws Exception {
        SimpleJobLauncher job = new SimpleJobLauncher();
        job.setJobRepository(getJobRepo());
        job.afterPropertiesSet();
        return job;
    }

    @Bean
    public PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository getJobRepo() throws Exception {
        return new MapJobRepositoryFactoryBean(getTransactionManager()).getObject();
    }

    @Bean
    public Step step1(JdbcBatchItemWriter<Person> writer) throws Exception {
        return stepBuilderFactory.get("step1")
                .<Person, Person>chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(writer).repository(getJobRepo())
                .build();
    }

    @Bean
    public Job job(@Qualifier("step1") Step step1) throws Exception {
        return jobBuilder.get("myJob").start(step1).repository(getJobRepo()).build();
    }
}
How do I resolve the above issue?
If you are using Spring Boot, a simple property in your application.properties will solve the issue:
spring.batch.initialize-schema=ALWAYS
For a non-Spring-Boot setup: this error shows up when a datasource bean is declared in the batch configuration. To work around the problem I added an embedded datasource, since I didn't want to create those tables in the application database:
@Bean
public DataSource mysqlDataSource() {
    // create your application datasource here
}

@Bean
@Primary
public DataSource batchEmbeddedDatasource() {
    // in-memory datasource required by Spring Batch
    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    return builder.setType(EmbeddedDatabaseType.H2)
            .addScript("classpath:schema-drop-h2.sql")
            .addScript("classpath:schema-h2.sql")
            .build();
}
The initialization scripts can be found inside the spring-batch-core-xxx.jar under the org.springframework.batch.core package. Note that I used an in-memory database, but the solution is also valid for other database systems.
For those who face the same problem with a MySQL database on CentOS (and most Unix-based systems): table names are case-sensitive on Linux. Setting lower_case_table_names=1 solved the problem. See the official MySQL documentation for details.
For those using versions greater than Spring Boot 2.5, this worked inside application.properties:
spring.batch.jdbc.initialize-schema = ALWAYS
This solved my case:
spring.batch.jdbc.initialize-schema=ALWAYS
Use Case:
During JBoss server startup, one permanent database connection is already made using a Spring Data JPA configuration (XML-based approach).
Now, when the application is already up and running, the requirement is to connect to multiple databases whose connection strings are dynamic and only available at run time.
How can this be achieved using Spring Data JPA?
One way to switch out your data source is to define a "runtime" repository that is configured with the "runtime" data source. But this will make client code aware of the different repos:
package com...runtime.repository;

public interface RuntimeRepo extends JpaRepository<OBJECT, ID> { ... }

@Configuration
@EnableJpaRepositories(
        transactionManagerRef = "runtimeTransactionManager",
        entityManagerFactoryRef = "runtimeEmfBean")
@EnableTransactionManagement
public class RuntimeDatabaseConfig {

    @Bean
    public DataSource runtimeDataSource() {
        DriverManagerDataSource rds = new DriverManagerDataSource();
        // set up driver, username, password, url
        return rds;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean runtimeEmfBean() {
        LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource(runtimeDataSource());
        // set up JpaVendorAdapter, jpaProperties
        return factoryBean;
    }

    @Bean
    public PlatformTransactionManager runtimeTransactionManager() {
        JpaTransactionManager jtm = new JpaTransactionManager();
        jtm.setEntityManagerFactory(runtimeEmfBean().getObject());
        return jtm;
    }
}
I have combined the code to save space; you would define the Java config and the repo interface in separate files, but within the same package.
To make client code agnostic of the repo type, implement your own repo factory, autowire the repo factory into the client code, and have the repo factory check the application state before returning the particular repo implementation, as in the sketch below.
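A hedged sketch of that factory idea (RuntimeRepo, DefaultRepo, OBJECT/ID, and the state check are placeholders for your actual repos, entity types, and switching logic):
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.data.jpa.repository.JpaRepository;
import org.springframework.stereotype.Component;

@Component
public class RepoFactory {

    @Autowired
    private RuntimeRepo runtimeRepo;  // repo backed by the runtime data source

    @Autowired
    private DefaultRepo defaultRepo;  // repo backed by the startup data source

    // Client code autowires RepoFactory and calls this instead of depending
    // on a concrete repository type.
    public JpaRepository<OBJECT, ID> currentRepo() {
        return runtimeDataSourceAvailable() ? runtimeRepo : defaultRepo;
    }

    private boolean runtimeDataSourceAvailable() {
        // placeholder: check application state, a config flag, request
        // context, etc.
        return false;
    }
}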