Spring Batch item reading - spring-batch

I'm using JpaPagingItemReaderBuilder to query a DB, and the result is being inserted into another DB.
The query returns results with no issue, but I'm getting an error with the return type of the reader in the processor. You can check my code and the error below.
Can someone please give me some insight on this and why I'm not able to process the result?
Here is my code:
@Bean
public Step sampleStep(){
return stepBuilderFactory.get("sampleStep")
.<FCR_HDR,FCR_HDR>chunk(5)
.reader(itemReader())
.processor(processor())
//.writer(i -> i.stream().forEach(j -> System.out.println(j)))
//.writer(i -> i.forEach(j -> System.out.println(j)))
.writer(jpaItemWriter())
.build();
}
@Bean
public Job sampleJob(){
return jobBuilderFactory.get("sampleJob")
.incrementer(new RunIdIncrementer())
.start(sampleStep())
.build();
}
@Bean
public FcrItemProcessor processor() {
return new FcrItemProcessor();
}
@Bean
@StepScope
public JpaPagingItemReader<FCR_HDR> itemReader(/*@Value("${query}") String query*/){
return new JpaPagingItemReaderBuilder<FCR_HDR>()
.name("db2Reader")
.entityManagerFactory(localContainerEntityManagerFactoryBean.getObject())
.queryString("select f.fcr_ref,f.num_subbills from FCR_HDR f where f.fcr_ref in ('R2G0130185','R2G0128330')")
//.queryString(qry)
.pageSize(3)
.build();
}
@Bean
@StepScope
public JpaItemWriter jpaItemWriter(){
JpaItemWriter writer = new JpaItemWriter();
writer.setEntityManagerFactory(emf);
return writer;
}
}
public class FcrItemProcessor implements ItemProcessor<FCR_HDR,FCR_HDR> {
private static final Logger log = LoggerFactory.getLogger(FcrItemProcessor.class);
@Nullable
@Override
public FCR_HDR process(FCR_HDR fcr_hdr) throws Exception {
final String fcrNo = fcr_hdr.getFcr_ref();
final String numsubbills = fcr_hdr.getNum_subbills();
final FCR_HDR transformFcr = new FCR_HDR();
transformFcr.setFcr_ref(fcrNo);
transformFcr.setNum_subbills(numsubbills);
log.info("Converting (" + fcr_hdr + ") into (" + transformFcr + ")");
return transformFcr;
}
}
Error:
java.lang.ClassCastException: [Ljava.lang.Object; cannot be cast to com.electronicfcr.efcr.model.FCR_HDR

Since you configure the following query in the JpaPagingItemReader:
.queryString("select f.fcr_ref,f.num_subbills from FCR_HDR f where f.fcr_ref in ('R2G0130185','R2G0128330')")
The query is JPQL, which is processed by the JPA provider, and JPA returns an Object[] for each row when you select individual mapped columns from the mapped entity.
Change it to:
.queryString("select f from FCR_HDR f where f.fcr_ref in ('R2G0130185','R2G0128330')")
so that it returns the mapped entity class (i.e. FCR_HDR), which should solve your problem.
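For reference, here is a sketch of the reader bean with only the query string changed; everything else is kept exactly as in your configuration:

@Bean
@StepScope
public JpaPagingItemReader<FCR_HDR> itemReader() {
    return new JpaPagingItemReaderBuilder<FCR_HDR>()
            .name("db2Reader")
            .entityManagerFactory(localContainerEntityManagerFactoryBean.getObject())
            .queryString("select f from FCR_HDR f where f.fcr_ref in ('R2G0130185','R2G0128330')")
            .pageSize(3)
            .build();
}

With the entity itself selected, each item handed to the chunk is an FCR_HDR instance, so your ItemProcessor<FCR_HDR, FCR_HDR> receives the type it expects and the ClassCastException goes away.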

Related

How to read data from 2 collections and save data from both collections into a 3rd collection in Spring Batch

I am using Spring Batch to read data from MongoDB using a MongoItemReader bean. Suppose I want to read data from 2 different collections in the same job instance. Is this possible?
@Bean
@StepScope
public MongoItemReader<Object> reader() throws UnexpectedInputException, ParseException, Exception {
DataReader dataReader = new DataReader();
return dataReader.read();
}
@Bean
public DataItemProcessor processor() {
return new DataItemProcessor();
}
@Bean
public MongoItemWriter<DestinationCollectionModelClass> writer() {
MongoItemWriter<DestinationCollectionModelClass> writer = new MongoItemWriter<>();
writer.setCollection("collection_name_where_data_is_saved");
writer.setTemplate(mongoTemplate);
return writer;
}
@Bean
public Step step1(MongoItemWriter<DestinationModelClass> writer) throws UnexpectedInputException, ParseException, Exception {
return stepBuilderFactory.get("step1")
// TODO: P3 chunk size configurable
.<Object, DestinationModelClass>chunk(100)
.reader(dataReader())
.processor(processor())
.writer(writer())
.build();
}
Below is my class DataReader.java
public class DataReader extends MongoItemReader {
@Autowired
private MongoTemplate mongoTemplate;
@Override
public MongoItemReader<Object> read() throws Exception, UnexpectedInputException, ParseException {
List<Object> mongoItemReaderList = new ArrayList<>();
Map<String, Direction> sorts = new HashMap<>();
sorts.put("_id", Direction.ASC);
MongoItemReader<Object> collectionOneReader = new MongoItemReader<>();
collectionOneReader.setTemplate(mongoTemplate);
collectionOneReader.setTargetType(CollectionOneModelClass.class);
collectionOneReader.setQuery("{}");
collectionOneReader.setSort(sorts);
MongoItemReader<Object> collectionTwoReader = new MongoItemReader<>();
collectionTwoReader.setTemplate(mongoTemplate);
collectionTwoReader.setTargetType(CollectionTwoModelClass.class);
collectionTwoReader.setQuery("{}");
collectionTwoReader.setSort(sorts);
mongoItemReaderList.add(collectionOneReader);
mongoItemReaderList.add(collectionTwoReader);
MongoItemReader<Object> readerObject = (MongoItemReader<Object>) mongoItemReaderList;
return readerObject;
}
}
Below is my DataItemProcessor.java
public class DataItemProcessor implements ItemProcessor<Object, DestinationModelClass> {
public DataItemProcessor() {}
@Override
public DestinationModelClass process(Object phi) throws Exception {
DestinationModelClass hbd = new DestinationModelClass();
if(phi instanceof CollectionOneModelClass) {
//Processing code if Object is an instance of CollectionOneModelClass
}
if(phi instanceof CollectionTwoModelClass) {
//Processing code if Object is an instance of CollectionTwoModelClass
}
return hbd;
}
}
You can't have two readers in the same chunk-oriented step. What you can do is use the driving query pattern, which, in your case, could be implemented as follows:
Item Reader: reads items from collection 1
Item Processor: enriches items from collection 2
Item Writer: writes enriched items to collection 3
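A minimal sketch of that pattern, reusing the model class names from your code (the _id-based lookup in the processor is an assumption about how the two collections are related):

@Bean
public MongoItemReader<CollectionOneModelClass> collectionOneReader(MongoTemplate mongoTemplate) {
    Map<String, Sort.Direction> sorts = new HashMap<>();
    sorts.put("_id", Sort.Direction.ASC);
    MongoItemReader<CollectionOneModelClass> reader = new MongoItemReader<>();
    reader.setTemplate(mongoTemplate);
    reader.setTargetType(CollectionOneModelClass.class);
    reader.setQuery("{}");
    reader.setSort(sorts);
    return reader;
}

public class EnrichingItemProcessor implements ItemProcessor<CollectionOneModelClass, DestinationModelClass> {
    private final MongoTemplate mongoTemplate;

    public EnrichingItemProcessor(MongoTemplate mongoTemplate) {
        this.mongoTemplate = mongoTemplate;
    }

    @Override
    public DestinationModelClass process(CollectionOneModelClass item) throws Exception {
        // look up the related document in collection 2; getId() is a hypothetical accessor
        CollectionTwoModelClass related = mongoTemplate.findById(item.getId(), CollectionTwoModelClass.class);
        DestinationModelClass merged = new DestinationModelClass();
        // copy/merge fields from item and related into merged here
        return merged;
    }
}

Your existing MongoItemWriter bean can then stay as the writer for the third collection.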

Spring Boot + Hibernate + JPA + Postgres Multi tenant App unable to persist entity

I am building a multi-tenant SaaS application using a single database with multiple schemas, one schema per client. I am using Spring Boot 2.1.5 and Hibernate 5.3.10 with a compatible Spring Data JPA, and Postgres 11.2.
I have followed this blog post: https://dzone.com/articles/spring-boot-hibernate-multitenancy-implementation.
I tried debugging the code; below are my findings:
* For the default schema provided in the datasource configuration, Hibernate properly validates the schema. It creates the tables/entities in the default schema that are missing or new.
* The tenant identifier is properly resolved and Hibernate builds a session using this tenant.
I have uploaded the code to the repo below:
https://github.com/naveentulsi/multitenant-lithium
Some important classes are included here.
@Component
@Log4j2
public class MultiTenantConnectionProviderImpl implements
MultiTenantConnectionProvider {
@Autowired
DataSource dataSource;
@Override
public Connection getAnyConnection() throws SQLException {
return dataSource.getConnection();
}
@Override
public void releaseAnyConnection(Connection connection) throws SQLException {
connection.close();
}
@Override
public Connection getConnection(String tenantIdentifier) throws SQLException {
final Connection connection = getAnyConnection();
try {
if (!StringUtils.isEmpty(tenantIdentifier)) {
String setTenantQuery = String.format(AppConstants.SCHEMA_CHANGE_QUERY, tenantIdentifier);
connection.createStatement().execute(setTenantQuery);
final ResultSet resultSet = connection.createStatement().executeQuery("select current_schema()");
if(resultSet != null){
final String string = resultSet.getString(1);
log.info("Current Schema" + string);
}
System.out.println("Statement execution");
} else {
connection.createStatement().execute(String.format(AppConstants.SCHEMA_CHANGE_QUERY, AppConstants.DEFAULT_SCHEMA));
}
} catch (SQLException se) {
throw new HibernateException(
"Could not change schema for connection [" + tenantIdentifier + "]",
se
);
}
return connection;
}
@Override
public void releaseConnection(String tenantIdentifier, Connection connection) throws SQLException {
try {
String Query = String.format(AppConstants.DEFAULT_SCHEMA, tenantIdentifier);
connection.createStatement().executeQuery(Query);
} catch (SQLException se) {
throw new HibernateException(
"Could not change schema for connection [" + tenantIdentifier + "]",
se
);
}
connection.close();
}
@Override
public boolean supportsAggressiveRelease() {
return true;
}
@Override
public boolean isUnwrappableAs(Class unwrapType) {
return false;
}
@Override
public <T> T unwrap(Class<T> unwrapType) {
return null;
}
}
@Configuration
@EnableJpaRepositories
public class ApplicationConfiguration implements WebMvcConfigurer {
@Autowired
JpaProperties jpaProperties;
@Autowired
TenantInterceptor tenantInterceptor;
@Override
public void addInterceptors(InterceptorRegistry registry) {
registry.addInterceptor(tenantInterceptor);
}
@Bean
public DataSource dataSource() {
return DataSourceBuilder.create().username(AppConstants.USERNAME).password(AppConstants.PASS)
.url(AppConstants.URL)
.driverClassName("org.postgresql.Driver").build();
}
@Bean
public JpaVendorAdapter jpaVendorAdapter() {
return new HibernateJpaVendorAdapter();
}
@Bean
public LocalContainerEntityManagerFactoryBean entityManagerFactory(DataSource dataSource, MultiTenantConnectionProviderImpl multiTenantConnectionProviderImpl, CurrentTenantIdentifierResolver currentTenantIdentifierResolver) {
Map<String, Object> properties = new HashMap<>();
properties.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
properties.put("hibernate.hbm2ddl.auto", "update");
properties.put("hibernate.ddl-auto", "update");
properties.put("hibernate.jdbc.lob.non_contextual_creation", "true");
properties.put("show-sql", "true");
properties.put("hikari.maximum-pool-size", "3");
properties.put("hibernate.default_schema", "master");
properties.put("maximum-pool-size", "2");
if (dataSource instanceof HikariDataSource) {
((HikariDataSource) dataSource).setMaximumPoolSize(3);
}
properties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
properties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProviderImpl);
properties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolver);
properties.put(Environment.FORMAT_SQL, true);
LocalContainerEntityManagerFactoryBean em = new LocalContainerEntityManagerFactoryBean();
em.setDataSource(dataSource);
em.setPackagesToScan("com.saas");
em.setJpaVendorAdapter(jpaVendorAdapter());
em.setJpaPropertyMap(properties);
return em;
}
}
@Component
public class TenantResolver implements CurrentTenantIdentifierResolver {
private static final ThreadLocal<String> TENANT_IDENTIFIER = new ThreadLocal<>();
public static void setTenantIdentifier(String tenantIdentifier) {
TENANT_IDENTIFIER.set(tenantIdentifier);
}
public static void reset() {
TENANT_IDENTIFIER.remove();
}
@Override
public String resolveCurrentTenantIdentifier() {
String currentTenant = TENANT_IDENTIFIER.get() != null ? TENANT_IDENTIFIER.get() : AppConstants.DEFAULT_SCHEMA;
return currentTenant;
}
@Override
public boolean validateExistingCurrentSessions() {
return true;
}
}
On successful injection of the tenant id by the TenantResolver, the entity manager should be able to store entities in the corresponding tenant schema in the database. That is, if we create an entity object and persist it, it should be saved in that schema. But in my case, entities are not getting saved into any schema other than the default one.
Update 1: I was able to do multi-tenant schema switching using MySQL 8.0.12. I am still not able to do it with Postgres.
You should be using AbstractRoutingDataSource to achieve this; it does all the magic behind the scenes. There are many examples online, and you can find one at https://www.baeldung.com/spring-abstract-routing-data-source
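A minimal sketch of that approach (TenantContext is a hypothetical thread-local holder playing the same role as your TenantResolver, and defaultDataSource() is a hypothetical bean for the master schema):

public class TenantRoutingDataSource extends AbstractRoutingDataSource {
    @Override
    protected Object determineCurrentLookupKey() {
        // the key returned here selects the target DataSource for the current request/thread
        return TenantContext.getTenantIdentifier();
    }
}

@Bean
public DataSource dataSource() {
    TenantRoutingDataSource routing = new TenantRoutingDataSource();
    Map<Object, Object> targets = new HashMap<>();
    // register one DataSource per tenant schema here (construction omitted)
    routing.setTargetDataSources(targets);
    routing.setDefaultTargetDataSource(defaultDataSource());
    routing.afterPropertiesSet();
    return routing;
}

Note that this routes at the DataSource level rather than through Hibernate's MultiTenantConnectionProvider, so each tenant schema needs its own DataSource (or connection pool) behind the router.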
In your class ApplicationConfiguration.java:
You have to remove the line properties.put("hibernate.default_schema", "master");. Whenever you change the schema, the switch does take effect, but each time this property is applied it sets the default schema back again.
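For illustration, the property map in entityManagerFactory() would then look like this, with everything else unchanged:

Map<String, Object> properties = new HashMap<>();
properties.put("hibernate.dialect", "org.hibernate.dialect.PostgreSQLDialect");
properties.put("hibernate.hbm2ddl.auto", "update");
properties.put("hibernate.jdbc.lob.non_contextual_creation", "true");
// "hibernate.default_schema" is intentionally omitted so it cannot override the tenant schema
properties.put(Environment.MULTI_TENANT, MultiTenancyStrategy.SCHEMA);
properties.put(Environment.MULTI_TENANT_CONNECTION_PROVIDER, multiTenantConnectionProviderImpl);
properties.put(Environment.MULTI_TENANT_IDENTIFIER_RESOLVER, currentTenantIdentifierResolver);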
I hope you got the answer
Thank you all
Take care!

Spring Batch: AsyncItemProcessor and AsyncItemWriter

1) I have a large file (> 100k lines) that needs to be processed. I have a lot of business validation and checks against external systems for each line item. The code is being migrated from a legacy app, and I just put this business logic into the AsyncItemProcessor, which also persists the data into the DB. Is it good practice to create/save records in the ItemProcessor (in lieu of the ItemWriter)?
2) The code is:
@Configuration
@EnableAutoConfiguration
@ComponentScan(basePackages = "com.liquidation.lpid")
@EntityScan(basePackages = "com.liquidation.lpid.entities")
@EnableTransactionManagement
public class SimpleJobConfiguration {
@Autowired
public JobRepository jobRepository;
@Autowired
private StepBuilderFactory stepBuilderFactory;
@Autowired
@Qualifier("myFtpSessionFactory")
private SessionFactory myFtpSessionFactory;
@Autowired
public JobBuilderFactory jobBuilderFactory;
@Bean
public ThreadPoolTaskExecutor lpidItemTaskExecutor() {
ThreadPoolTaskExecutor tExec = new ThreadPoolTaskExecutor();
tExec.setCorePoolSize(10);
tExec.setMaxPoolSize(10);
tExec.setAllowCoreThreadTimeOut(true);
return tExec;
}
@BeforeStep
public void beforeStep(StepExecution stepExecution){
String name = stepExecution.getStepName();
System.out.println("name: " + name);
}
@Bean
public SomeItemWriterListener someItemWriterListener(){
return new SomeItemWriterListener();
};
@Bean
@StepScope
public FlatFileItemReader<FieldSet> lpidItemReader(@Value("#{stepExecutionContext['fileResource']}") String fileResource) {
System.out.println("itemReader called !!!!!!!!!!! for customer data" + fileResource);
FlatFileItemReader<FieldSet> reader = new FlatFileItemReader<FieldSet>();
reader.setResource(new ClassPathResource("/data/stage/"+ fileResource));
reader.setLinesToSkip(1);
DefaultLineMapper<FieldSet> lineMapper = new DefaultLineMapper<FieldSet>();
DelimitedLineTokenizer tokenizer = new DelimitedLineTokenizer();
reader.setSkippedLinesCallback(new LineCallbackHandler() {
public void handleLine(String line) {
if (line != null) {
tokenizer.setNames(line.split(","));
}
}
});
lineMapper.setLineTokenizer(tokenizer);
lineMapper.setFieldSetMapper(new PassThroughFieldSetMapper());
lineMapper.afterPropertiesSet();
reader.setLineMapper(lineMapper);
return reader;
}
@Bean
public ItemWriter<FieldSet> lpidItemWriter() {
return new LpidItemWriter();
}
@Autowired
private MultiFileResourcePartitioner multiFileResourcePartitioner;
@Bean
public Step masterStep() {
return stepBuilderFactory.get("masterStep")
.partitioner(slaveStep().getName(), multiFileResourcePartitioner)
.step(slaveStep())
.gridSize(4)
.taskExecutor(lpidItemTaskExecutor())
.build();
}
@Bean
public ItemProcessListener<FieldSet,String> processListener(){
return new LpidItemProcessListener();
}
@SuppressWarnings("unchecked")
@Bean
public Step slaveStep() {
return stepBuilderFactory.get("slaveStep")
.<FieldSet,FieldSet>chunk(5)
.faultTolerant()
.listener(new ChunkListener())
.reader(lpidItemReader(null))
.processor(asyncItemProcessor())
.writer(asyncItemWriter()).listener(someItemWriterListener()).build();
}
@Bean
public AsyncItemWriter<FieldSet> asyncItemWriter(){
AsyncItemWriter<FieldSet> asyncItemProcessor = new AsyncItemWriter<>();
asyncItemProcessor.setDelegate(lpidItemWriter());
try {
asyncItemProcessor.afterPropertiesSet();
} catch (Exception e) {
e.printStackTrace();
}
return asyncItemProcessor;
}
@Bean
public ItemProcessor<FieldSet, FieldSet> processor() {
return new lpidCheckItemProcessor();
}
@Bean
public AsyncItemProcessor<FieldSet, FieldSet> asyncItemProcessor() {
AsyncItemProcessor<FieldSet, FieldSet> asyncItemProcessor = new AsyncItemProcessor<FieldSet, FieldSet>();
asyncItemProcessor.setDelegate(processor());
asyncItemProcessor.setTaskExecutor(lpidItemTaskExecutor());
try {
asyncItemProcessor.afterPropertiesSet();
} catch (Exception e) {
// TODO Auto-generated catch block
e.printStackTrace();
}
return asyncItemProcessor;
}
@Bean
public Job job() throws Exception {
return jobBuilderFactory.get("job").incrementer(new RunIdIncrementer()).start(masterStep()).build();
}
}
The ItemWriter runs before the ItemProcessor has completed. My understanding is: for every chunk, the item reader reads the data, the item processor churns through each item, and at the end of the chunk the item writer gets called (which, in my case, does not do anything since the item processor persists the data). But the item writer gets called before the item processor has completed, and my job never completes. What am I doing incorrectly here? (I looked at previous issues around this, and the solution was to wrap the writer in an AsyncItemWriter, which I am doing.)
Thanks
Sundar

IndexMissingException: [News] missing

I have a project in Spring Boot where I utilize Elasticsearch. I save my data to my primary database in Postgres and use Elasticsearch (ES) for searching. I have it configured and added some data, and I can see that my data is in Elasticsearch as well as in Postgres. But when I try to run a custom query through searchQuery it returns an error:
2016-12-09 20:03:09.586 ERROR 1704 --- [pool-2-thread-1] o.s.s.s.TaskUtils$LoggingErrorHandler : Unexpected error occurred in scheduled task.
org.elasticsearch.indices.IndexMissingException: [provenNews] missing
at org.elasticsearch.cluster.metadata.MetaData.convertFromWildcards(MetaData.java:868)
at org.elasticsearch.cluster.metadata.MetaData.concreteIndices(MetaData.java:685)
at org.elasticsearch.action.search.type.TransportSearchTypeAction$BaseAsyncAction.<init>(TransportSearchTypeAction.java:113)
at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.<init>(TransportSearchDfsQueryThenFetchAction.java:75)
at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction$AsyncAction.<init>(TransportSearchDfsQueryThenFetchAction.java:68)
at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction.doExecute(TransportSearchDfsQueryThenFetchAction.java:65)
at org.elasticsearch.action.search.type.TransportSearchDfsQueryThenFetchAction.doExecute(TransportSearchDfsQueryThenFetchAction.java:55)
at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:75)
Here is my code:
Configuration class:
@Configuration
@EnableTransactionManagement
@EnableElasticsearchRepositories(basePackages = "org.news.proven.repository")
public class ProjectConfiguration {
@Bean
public HibernateJpaSessionFactoryBean sessionFactory() {
return new HibernateJpaSessionFactoryBean();
}
@Bean
public ElasticsearchTemplate elasticsearchTemplate() {
return new ElasticsearchTemplate(getNodeClient());
}
private static NodeClient getNodeClient() {
return (NodeClient) nodeBuilder().clusterName(UUID.randomUUID().toString()).local(true).node()
.client();
}
}
My class that calls the searchQuery:
@Service
public class NewsSearchServiceBean implements NewsSearchService {
@Autowired
private ProvenNewsRepository newsSearchRepository;
@Autowired
private ElasticsearchTemplate elasticsearchTemplate;
public Page<ProvenNews> search(String query, int page)
{
SearchQuery searchQuery = new NativeSearchQueryBuilder().withQuery(QueryBuilders.multiMatchQuery(query)
.field("title", 0.6f) //boosting title
.field("newsText", 0.4f)
.type(MultiMatchQueryBuilder.Type.BEST_FIELDS)
.slop(50)
.fuzziness(Fuzziness.ONE) //80 % of mispelling have an edit distance 1 (Damerau-Levenshtein edit distance)
)
.withPageable(new PageRequest(page, 15))
.build();
Page<ProvenNews> result = elasticsearchTemplate.queryForPage(searchQuery, ProvenNews.class);//newsSearchRepository.search(searchQuery);
return result;
// return newsSearchRepository.findByNewsTextAndTitle(query,query,new PageRequest(page, 10, Direction.DESC, "newsDate"));
}
My repository:
public interface ProvenNewsRepository extends ElasticsearchCrudRepository<ProvenNews, Long> {
public Page<ProvenNews> findByNewsTextAndTitle(String newsText, String Title, Pageable page);
}
Any advice and assistance will be appreciated.

Spring Batch: File not being read

I am trying to create an application that uses the spring-batch-excel extension to read Excel files uploaded through a web interface by its users, in order to parse the Excel file for addresses.
When the code runs, there is no error, but all I get is the following in my log, even though I have log/syso statements throughout my Processor and Writer (these are never called, and all I can imagine is that it's not properly reading the file and is returning no data to process/write). And yes, the file has data, several thousand records in fact.
Job: [FlowJob: [name=excelFileJob]] launched with the following parameters: [{file=Book1.xlsx}]
Executing step: [excelFileStep]
Job: [FlowJob: [name=excelFileJob]] completed with the following parameters: [{file=Book1.xlsx}] and the following status: [COMPLETED]
Below is my JobConfig
@Configuration
@EnableBatchProcessing
public class AddressExcelJobConfig {
@Bean
public BatchConfigurer configurer(EntityManagerFactory entityManagerFactory) {
return new CustomBatchConfigurer(entityManagerFactory);
}
@Bean
Step excelFileStep(ItemReader<AddressExcel> excelAddressReader,
ItemProcessor<AddressExcel, AddressExcel> excelAddressProcessor,
ItemWriter<AddressExcel> excelAddressWriter,
StepBuilderFactory stepBuilderFactory) {
return stepBuilderFactory.get("excelFileStep")
.<AddressExcel, AddressExcel>chunk(1)
.reader(excelAddressReader)
.processor(excelAddressProcessor)
.writer(excelAddressWriter)
.build();
}
@Bean
Job excelFileJob(JobBuilderFactory jobBuilderFactory,
@Qualifier("excelFileStep") Step excelAddressStep) {
return jobBuilderFactory.get("excelFileJob")
.incrementer(new RunIdIncrementer())
.flow(excelAddressStep)
.end()
.build();
}
}
Below is my AddressExcelReader
The late binding works fine, there is no error. I have tried loading the resource given the file name, in addition to creating a new ClassPathResource and FileSystemResource. All are giving me the same results.
@Component
@StepScope
public class AddressExcelReader implements ItemReader<AddressExcel> {
private PoiItemReader<AddressExcel> itemReader = new PoiItemReader<AddressExcel>();
@Override
public AddressExcel read()
throws Exception, UnexpectedInputException, ParseException, NonTransientResourceException {
return itemReader.read();
}
public AddressExcelReader(@Value("#{jobParameters['file']}") String file, StorageService storageService) {
//Resource resource = storageService.loadAsResource(file);
//Resource testResource = new FileSystemResource("upload-dir/Book1.xlsx");
itemReader.setResource(new ClassPathResource("/upload-dir/Book1.xlsx"));
itemReader.setLinesToSkip(1);
itemReader.setStrict(true);
itemReader.setRowMapper(excelRowMapper());
}
public RowMapper<AddressExcel> excelRowMapper() {
BeanWrapperRowMapper<AddressExcel> rowMapper = new BeanWrapperRowMapper<>();
rowMapper.setTargetType(AddressExcel.class);
return rowMapper;
}
}
Below is my AddressExcelProcessor
@Component
public class AddressExcelProcessor implements ItemProcessor<AddressExcel, AddressExcel> {
private static final Logger log = LoggerFactory.getLogger(AddressExcelProcessor.class);
@Override
public AddressExcel process(AddressExcel item) throws Exception {
System.out.println("Converting " + item);
log.info("Convert {}", item);
return item;
}
}
Again, this is never coming into play (no logs are generated). And if it matters, this is how I'm launching my job from a FileUploadController, via a @PostMapping("/") handler for the file upload, which first stores the file, then runs the job:
@PostMapping("/")
public String handleFileUpload(@RequestParam("file") MultipartFile file, RedirectAttributes redirectAttributes) {
storageService.store(file);
try {
JobParameters jobParameters = new JobParametersBuilder()
.addString("file", file.getOriginalFilename().toString()).toJobParameters();
jobLauncher.run(job, jobParameters);
} catch (JobExecutionAlreadyRunningException | JobRestartException | JobInstanceAlreadyCompleteException
| JobParametersInvalidException e) {
e.printStackTrace();
}
redirectAttributes.addFlashAttribute("message",
"You successfully uploaded " + file.getOriginalFilename() + "!");
return "redirect:/";
}
And last but not least,
Here is my AddressExcel POJO
import lombok.Data;
@Data
public class AddressExcel {
private String address1;
private String address2;
private String city;
private String state;
private String zip;
public AddressExcel() {}
}
UPDATE (10/13/2016)
From Nghia Do's comments, I also created my own RowMapper instead of using the BeanWrapper to see if that was the issue. Still the same results.
public class AddressExcelRowMapper implements RowMapper<AddressExcel> {
@Override
public AddressExcel mapRow(RowSet rs) throws Exception {
AddressExcel temp = new AddressExcel();
temp.setAddress1(rs.getColumnValue(0));
temp.setAddress2(rs.getColumnValue(1));
temp.setCity(rs.getColumnValue(2));
temp.setState(rs.getColumnValue(3));
temp.setZip(rs.getColumnValue(4));
return temp;
}
}
It seems all I needed was to add the following to my ItemReader:
itemReader.afterPropertiesSet();
itemReader.open(new ExecutionContext());
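For completeness, a sketch of the reader's constructor from above with those two calls added at the end (afterPropertiesSet() throws a checked exception, hence the throws clause; the wrapper is not registered as an ItemStream, so the delegate has to be opened manually):

public AddressExcelReader(@Value("#{jobParameters['file']}") String file, StorageService storageService) throws Exception {
    itemReader.setResource(new ClassPathResource("/upload-dir/Book1.xlsx"));
    itemReader.setLinesToSkip(1);
    itemReader.setStrict(true);
    itemReader.setRowMapper(excelRowMapper());
    // without these calls the delegate is never initialized or opened, so read() returns
    // null immediately and the step completes without ever invoking the processor or writer
    itemReader.afterPropertiesSet();
    itemReader.open(new ExecutionContext());
}

An arguably cleaner alternative is to have the wrapper implement ItemStream and delegate open/update/close, so the step manages the delegate's lifecycle for you.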