Trying to create some end-to-end tests for a Spring Batch application. The application itself works great, but the tests fail with an SQL error because the Spring Batch metadata tables are not being initialized: org.postgresql.util.PSQLException: ERROR: relation "batch_job_instance" does not exist
I have this code in src/test/resources/application.properties:
spring.datasource.initialize=true
spring.datasource.initialization-mode=always
spring.datasource.platform=postgresql
spring.batch.initialize-schema=always
This is the same configuration I have in `src/main/resources/application.properties`, where it works.
This is the code I have for ApplicationTest:
@RunWith(SpringRunner.class)
@ContextConfiguration(classes = {
        TestConfiguration.class,
        JobCompletionNotificationListener.class,
        BatchConfiguration.class
})
@SpringBatchTest
public class ApplicationTests {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void testJob() throws Exception {
        JobExecution jobExecution = jobLauncherTestUtils.launchJob();
    }
}
I have a specific TestConfiguration to provide the DataSource bean:
@Configuration
@PropertySource("application.properties")
public class TestConfiguration {

    @Autowired
    private Environment env;

    @Bean
    public DataSource dataSource() {
        DriverManagerDataSource dataSource = new DriverManagerDataSource();
        dataSource.setDriverClassName(env.getProperty("spring.datasource.driverClassname"));
        dataSource.setUrl(env.getProperty("spring.datasource.url"));
        dataSource.setUsername(env.getProperty("spring.datasource.username"));
        dataSource.setPassword(env.getProperty("spring.datasource.password"));
        return dataSource;
    }
}
I was expecting all tables to be created (internal Batch tables and the tables defined in schema-all.sql).
But I get the following error: org.postgresql.util.PSQLException: ERROR: relation "batch_job_instance" does not exist.
I don't understand why everything works automagically in the main application but not in the test.
If a Spring test is missing the BatchDataSourceInitializer that Spring Boot auto-configures in the actual application, and you don't want to write a full @SpringBootTest, you can selectively add the Spring Batch auto-configuration with the annotation
@ImportAutoConfiguration(BatchAutoConfiguration.class)
This will then provide the initializer for the injected DataSource.
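For example, a minimal sketch of the test class from the question with the auto-configuration imported:

@RunWith(SpringRunner.class)
@SpringBatchTest
@ImportAutoConfiguration(BatchAutoConfiguration.class)
@ContextConfiguration(classes = {
        TestConfiguration.class,
        JobCompletionNotificationListener.class,
        BatchConfiguration.class
})
public class ApplicationTests {

    @Autowired
    private JobLauncherTestUtils jobLauncherTestUtils;

    @Test
    public void testJob() throws Exception {
        // BatchAutoConfiguration runs the schema initializer against the
        // injected DataSource, so batch_job_instance & co. exist before launch
        JobExecution jobExecution = jobLauncherTestUtils.launchJob();
    }
}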
We have a Spring Boot REST API (Spring Boot 2.3.0.RELEASE) that uses Spring Cloud Sleuth (version 2.2.3.RELEASE).
At some point, we use the trace id from Sleuth as data. The trace id is fetched by autowiring the Tracer bean and then accessing the current span. Let's say we defined a bean SimpleCorrelationBean with:
@Component
public class SimpleCorrelationBean {

    @Autowired
    private Tracer tracer;

    public String getCorrelationId() {
        return tracer.currentSpan().context().traceIdString();
    }
}
This seems to work perfectly when running the Spring Boot application, but when we try to access tracer.currentSpan() in the unit tests, it is null. It looks like Spring Cloud Sleuth is not creating any span while running tests.
I think it has something to do with the application context that is set up during the unit test, but I don't know how to enable Spring Cloud Sleuth for the test application context.
Below is a simple test class. The error occurs in simpleTest1, because tracer.currentSpan() is null; in simpleTest2, no error occurs.
@ExtendWith({ RestDocumentationExtension.class, SpringExtension.class })
@SpringBootTest(classes = MusicService.class)
@WebAppConfiguration
@ActiveProfiles("unit-test")
@ComponentScan(basePackageClasses = datacast2.data.JpaConfig.class)
public class SimpleTest {

    private static final Logger logger = LoggerFactory.getLogger(SimpleTest.class);

    @Autowired
    private WebApplicationContext context;

    private MockMvc mockMvc;

    @Autowired
    private FilterChainProxy springSecurityFilterChain;

    @Autowired
    private SimpleCorrelationBean simpleCorrelationBean;

    @Autowired
    private Tracer tracer;

    @BeforeEach
    public void setup(RestDocumentationContextProvider restDocumentation) throws Exception {
        this.mockMvc = MockMvcBuilders.webAppContextSetup(this.context)
                .apply(documentationConfiguration(restDocumentation))
                .addFilter(springSecurityFilterChain).build();
    }

    @Test
    public void simpleTest1() throws Exception {
        try {
            String correlationId = simpleCorrelationBean.getCorrelationId();
        } catch (Exception e) {
            logger.error("This seems to fail.", e);
        }
    }

    @Test
    public void simpleTest2() throws Exception {
        // It looks like Spring Cloud Sleuth is not creating a span, so we create one ourselves
        Span newSpan = this.tracer.nextSpan().name("simpleTest2");
        try (Tracer.SpanInScope ws = this.tracer.withSpanInScope(newSpan.start())) {
            String correlationId = simpleCorrelationBean.getCorrelationId();
        } finally {
            newSpan.finish();
        }
    }
}
The question is: how do I enable Spring Cloud Sleuth for MockMvc during unit tests?
The issue here is that MockMvc is created manually instead of relying on auto-configuration. In some cases a custom MockMvc configuration may be necessary, but at least on my version of Spring Boot (2.7.6) there is no need to configure MockMvc manually, even with Spring Security and Spring Security Test in use. I could not figure out how to enable tracing when configuring MockMvc manually, though.
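A minimal sketch of the auto-configured setup (static imports from MockMvcRequestBuilders and MockMvcResultMatchers are assumed, and "/" is a placeholder endpoint):

@SpringBootTest(classes = MusicService.class)
@AutoConfigureMockMvc
@ActiveProfiles("unit-test")
public class SimpleTest {

    // The auto-configured MockMvc registers the full filter chain, including
    // Spring Security's filters and Sleuth's tracing filter, so a span is
    // active while the request is being handled.
    @Autowired
    private MockMvc mockMvc;

    @Test
    public void tracedRequest() throws Exception {
        mockMvc.perform(get("/")).andExpect(status().isOk());
    }
}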
I am new to Spring Batch. I have configured my job with an in-memory job repository, but it still seems to be using the database to persist job metadata.
My Spring Batch configuration is:
@Configuration
public class BatchConfiguration {

    @Autowired
    private StepBuilderFactory stepBuilderFactory;

    @Autowired
    private JobBuilderFactory jobBuilder;

    @Bean
    public JobLauncher jobLauncher() throws Exception {
        SimpleJobLauncher job = new SimpleJobLauncher();
        job.setJobRepository(getJobRepo());
        job.afterPropertiesSet();
        return job;
    }

    @Bean
    public PlatformTransactionManager getTransactionManager() {
        return new ResourcelessTransactionManager();
    }

    @Bean
    public JobRepository getJobRepo() throws Exception {
        return new MapJobRepositoryFactoryBean(getTransactionManager()).getObject();
    }

    @Bean
    public Step step1(JdbcBatchItemWriter<Person> writer) throws Exception {
        return stepBuilderFactory.get("step1")
                .<Person, Person>chunk(10)
                .reader(reader())
                .processor(processor())
                .writer(writer).repository(getJobRepo())
                .build();
    }

    @Bean
    public Job job(@Qualifier("step1") Step step1) throws Exception {
        return jobBuilder.get("myJob").start(step1).repository(getJobRepo()).build();
    }
}
How can I resolve this issue?
If you are using Spring Boot, a simple property in your application.properties will solve the issue:
spring.batch.initialize-schema=ALWAYS
For a non-Spring Boot setup: this error shows up when a DataSource bean is declared in the batch configuration. To work around the problem I added an embedded datasource, since I didn't want to create those tables in the application database:
@Bean
public DataSource mysqlDataSource() {
    // create your application datasource here
}

@Bean
@Primary
public DataSource batchEmbeddedDatasource() {
    // in-memory datasource required by Spring Batch
    EmbeddedDatabaseBuilder builder = new EmbeddedDatabaseBuilder();
    return builder.setType(EmbeddedDatabaseType.H2)
            .addScript("classpath:schema-drop-h2.sql")
            .addScript("classpath:schema-h2.sql")
            .build();
}
The initialization scripts can be found inside the spring-batch-core-xxx.jar, under the org.springframework.batch.core package. Note that I used an in-memory database, but the solution is valid for other database systems as well.
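If Boot's initializer is not available, you can also run those bundled scripts yourself; a minimal sketch, assuming an H2 DataSource and Spring's DataSourceInitializer (swap the script name for your platform):

@Bean
public DataSourceInitializer batchSchemaInitializer(DataSource dataSource) {
    // schema-h2.sql ships inside spring-batch-core-xxx.jar
    ResourceDatabasePopulator populator = new ResourceDatabasePopulator();
    populator.addScript(new ClassPathResource("org/springframework/batch/core/schema-h2.sql"));
    DataSourceInitializer initializer = new DataSourceInitializer();
    initializer.setDataSource(dataSource);
    initializer.setDatabasePopulator(populator);
    return initializer;
}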
For those who face the same problem with a MySQL database on CentOS (or most Unix-based systems): table names are case-sensitive on Linux. Setting lower_case_table_names=1 solved the problem.
See the official MySQL documentation on lower_case_table_names for details.
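For reference, the setting goes into the MySQL server configuration file (commonly /etc/my.cnf, though the location depends on the installation) and takes effect after a server restart; on MySQL 8 it can only be set when the data directory is initialized:

[mysqld]
lower_case_table_names=1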
For those using versions greater than Spring Boot 2.5, this worked inside application.properties:
spring.batch.jdbc.initialize-schema = ALWAYS
This solved my case:
spring.batch.jdbc.initialize-schema=ALWAYS
I am trying to use my JPA repositories to save test data into H2, to then be used by a Spring Batch integration test.
Here is my integration test:
@SpringBootTest(webEnvironment = SpringBootTest.WebEnvironment.NONE, classes = Batch.class)
public class MessageDigestMailingStepIT extends AbstractBatchIntegrationTest {

    @Autowired
    @Qualifier("messagesDigestMailingJob")
    private Job messagesDigestMailingJob;

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private JobRepository jobRepository;

    @Autowired
    private UserAccountRepository userAccountRepository;

    @Autowired
    private MessageRepository messageRepository;

    private JobLauncherTestUtils jobLauncherTestUtils;

    @Before
    public void setUp() {
        this.jobLauncherTestUtils = new JobLauncherTestUtils();
        this.jobLauncherTestUtils.setJobLauncher(jobLauncher);
        this.jobLauncherTestUtils.setJobRepository(jobRepository);
        this.jobLauncherTestUtils.setJob(messagesDigestMailingJob);
    }

    @Test
    @Transactional
    public void shouldSendMessageDigestAndUpdateNotificationSent() {
        UserAccount userAccount = DomainFactory.createUserAccount("me@example.com");
        userAccountRepository.save(userAccount);
        JobParameters jobParameters = new JobParametersBuilder().addDate("execution_date", new Date()).toJobParameters();
        jobLauncherTestUtils.launchStep("messagesDigestMailingStep", jobParameters);
        //Assertions
    }
}
Notice the @Transactional on the test method. Unfortunately, Spring Batch uses its own transactions, and my use of @Transactional clashes with them.
Here is the error message I get:
java.lang.IllegalStateException: Existing transaction detected in JobRepository. Please fix this and try again (e.g. remove @Transactional annotations from client).
Can someone please advise how to insert test data so that it is available to a Spring Batch integration test?
Edit: for good measure, here is the definition of the AbstractBatchIntegrationTest class:
@AutoConfigureTestEntityManager
@AutoConfigureJson
@AutoConfigureJsonTesters
@RunWith(SpringJUnit4ClassRunner.class)
@ActiveProfiles(Profiles.TEST)
@ComponentScan(basePackages = {"com.bignibou.it.configuration", "com.bignibou.configuration"})
public abstract class AbstractBatchIntegrationTest {
}
Edit: I have decided to rely only on the @Sql annotation, as follows:
@Sql(scripts = "insert_message.sql", executionPhase = Sql.ExecutionPhase.BEFORE_TEST_METHOD)
@Sql(scripts = "clean_database.sql", executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
@Test
public void shouldSendMessageDigestAndUpdateNotificationSent() {
    ...
Remove @Transactional from the test so that the UserAccount is immediately persisted to the database. Then use @Sql with ExecutionPhase.AFTER_TEST_METHOD to execute a clean-up script (or inlined statements) that manually undoes the changes performed during the test.
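For example, the inlined-statement variant could look like this (the table names message and user_account are assumptions based on the entities in the question):

@Test
@Sql(statements = {"DELETE FROM message", "DELETE FROM user_account"},
        executionPhase = Sql.ExecutionPhase.AFTER_TEST_METHOD)
public void shouldSendMessageDigestAndUpdateNotificationSent() {
    // no @Transactional here: the save commits immediately, so the data is
    // visible to the transactions Spring Batch opens for the step
    UserAccount userAccount = DomainFactory.createUserAccount("me@example.com");
    userAccountRepository.save(userAccount);
    ...
}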
Use case:
During JBoss server startup, one permanent database connection is already made using Spring Data JPA configuration (XML-based approach).
Now, when the application is already up and running, the requirement is to connect to multiple databases whose connection strings are dynamic and only available at runtime.
How can this be achieved using Spring Data JPA?
One way to switch out your data source is to define a "runtime" repository that is configured with the "runtime" data source. But this will make client code aware of the different repos:
package com...runtime.repository;

public interface RuntimeRepo extends JpaRepository<OBJECT, ID> { ... }

@Configuration
@EnableJpaRepositories(
        transactionManagerRef = "runtimeTransactionManager",
        entityManagerFactoryRef = "runtimeEmfBean")
@EnableTransactionManagement
public class RuntimeDatabaseConfig {

    @Bean
    public DataSource runtimeDataSource() {
        DriverManagerDataSource rds = new DriverManagerDataSource();
        // setup driver, username, password, url
        return rds;
    }

    @Bean
    public LocalContainerEntityManagerFactoryBean runtimeEmfBean() {
        LocalContainerEntityManagerFactoryBean factoryBean = new LocalContainerEntityManagerFactoryBean();
        factoryBean.setDataSource(runtimeDataSource());
        // setup JpaVendorAdapter, jpaProperties
        return factoryBean;
    }

    @Bean
    public PlatformTransactionManager runtimeTransactionManager() {
        JpaTransactionManager jtm = new JpaTransactionManager();
        // note: pass the EntityManagerFactory itself, not the factory bean
        jtm.setEntityManagerFactory(runtimeEmfBean().getObject());
        return jtm;
    }
}
I have combined the code to save space; you would define the Java config and the repo interface in separate files, but within the same package.
To make client code agnostic of the repo type, implement your own repo factory, autowire that factory into client code, and have it check the application state before returning the particular repo implementation, as in the sketch below.
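A minimal sketch of that factory idea (RepoFactory, DefaultRepo and ApplicationState are hypothetical names, not Spring APIs):

@Component
public class RepoFactory {

    @Autowired
    private DefaultRepo defaultRepo;    // repository bound to the startup data source

    @Autowired
    private RuntimeRepo runtimeRepo;    // repository bound to the runtime data source

    @Autowired
    private ApplicationState state;     // hypothetical holder of the current runtime state

    // Client code asks the factory for a repository and never learns
    // which data source backs the instance it receives.
    public JpaRepository<OBJECT, ID> currentRepo() {
        return state.isRuntimeMode() ? runtimeRepo : defaultRepo;
    }
}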
I am trying to use MongoDB, Morphia and Spring, and test it, so I started using Embedded Mongo.
When I had only one DAO to persist, I did not have any problem with my tests. However, in some cases I need to use more than one DAO, and in those cases my injected Datastore gives me a problem: addr already in use.
My Spring Test Database Configuration is this:
@Configuration
public class DatabaseMockConfig {

    private static final int PORT = 12345;

    private MongodConfigBuilder configBuilder;
    private MongodExecutable mongodExecutable;
    private MongodProcess mongodProcess;

    @Bean
    @Scope("prototype")
    public MongodExecutable getMongodExecutable() {
        return this.mongodExecutable;
    }

    @Bean
    @Scope("prototype")
    public MongodProcess mongodProcess() {
        return this.mongodProcess;
    }

    @Bean
    public IMongodConfig getMongodConfig() throws UnknownHostException, IOException {
        if (this.configBuilder == null) {
            configBuilder = new MongodConfigBuilder().version(Version.Main.PRODUCTION).net(new Net(PORT, Network.localhostIsIPv6()));
        }
        return this.configBuilder.build();
    }

    @Autowired
    @Bean
    @Scope("prototype")
    public Datastore datastore(IMongodConfig mongodConfig) throws IOException {
        MongodStarter starter = MongodStarter.getDefaultInstance();
        this.mongodExecutable = starter.prepare(mongodConfig);
        this.mongodProcess = mongodExecutable.start();
        MongoClient mongoClient = new MongoClient("localhost", PORT);
        return new Morphia().createDatastore(mongoClient, "morphia");
    }

    @Autowired
    @Bean
    @Scope("prototype")
    public EventDAO eventDAO(final Datastore datastore) {
        return new EventDAO(datastore);
    }

    @Autowired
    @Bean
    @Scope("prototype")
    public EditionDAO editionDAO(final Datastore datastore) {
        return new EditionDAO(datastore);
    }
}
And my DAO classes are similar to this:
@Repository
public class EventDAO {

    private final BasicDAO<Event, ObjectId> basicDAO;

    @Autowired
    public EventDAO(final Datastore datastore) {
        this.basicDAO = new BasicDAO<>(Event.class, datastore);
    }
    ...
}
My test class is similar to this:
@ContextConfiguration(classes = AppMockConfig.class)
@RunWith(SpringJUnit4ClassRunner.class)
public class EventDAOTest {

    @Autowired
    private EventDAO eventDAO;

    @Autowired
    private MongodExecutable mongodExecutable;

    @Autowired
    private MongodProcess mongodProcess;

    @Rule
    public ExpectedException expectedEx = ExpectedException.none();

    @After
    public void tearDown() {
        this.mongodProcess.stop();
        this.mongodExecutable.stop();
    }
    ...
}
I use prototype scope to work around the singleton problem and to make sure that my mock database is clean when each test starts; afterwards I stop the mongod process and the mongod executable.
However, since I need to use more than one DAO, I receive this error:
org.springframework.beans.factory.UnsatisfiedDependencyException: Error creating bean with name 'editionDAO' defined in class br.com.mymusicapp.spring.DatabaseMockConfig: Unsatisfied dependency expressed through constructor argument with index 0 of type [org.mongodb.morphia.Datastore]: :
Error creating bean with name 'datastore' defined in class br.com.mymusicapp.spring.DatabaseMockConfig: Bean instantiation via factory method failed; nested exception is org.springframework.beans.BeanInstantiationException: Failed to instantiate [org.mongodb.morphia.Datastore]:
Factory method 'datastore' threw exception; nested exception is java.io.IOException: Could not start process: ERROR: listen(): bind() failed errno:98 Address already in use for socket: 0.0.0.0:12345
2015-01-04T01:05:04.128-0200 [initandlisten] ERROR: addr already in use
I know what the error means; I just do not know how to design my configuration to solve it. As a last option I am considering installing a local MongoDB just for tests, but I think there could be a better solution.
That is based on the embedded mongod by flapdoodle, right?
If you want to run multiple tests in parallel (this could be changed via JUnit annotations, but parallel runs are probably faster), you cannot use a single, hardcoded port. Instead, let the embedded process select an available port automatically.
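With flapdoodle that could look like the following sketch; Network.getFreeServerPort() comes from de.flapdoodle.embed.process, and the datastore bean then has to read the chosen port back from the config instead of using the constant:

@Bean
public IMongodConfig getMongodConfig() throws IOException {
    // let the OS pick a free port instead of hardcoding 12345,
    // so parallel test runs do not collide
    return new MongodConfigBuilder()
            .version(Version.Main.PRODUCTION)
            .net(new Net(Network.getFreeServerPort(), Network.localhostIsIPv6()))
            .build();
}

@Autowired
@Bean
@Scope("prototype")
public Datastore datastore(IMongodConfig mongodConfig) throws IOException {
    MongodStarter starter = MongodStarter.getDefaultInstance();
    this.mongodExecutable = starter.prepare(mongodConfig);
    this.mongodProcess = mongodExecutable.start();
    // connect to the port that was actually assigned
    MongoClient mongoClient = new MongoClient("localhost", mongodConfig.net().getPort());
    return new Morphia().createDatastore(mongoClient, "morphia");
}

Since each prototype-scoped Datastore then starts its own mongod on its own port, the two DAOs should no longer collide.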