I am working with Spring Batch 3.0.7.RELEASE version
About Skip I have the following:
<batch:chunk reader="personErrorSkipFileItemReader"
             processor="personErrorSkipItemProcessor"
             writer="personErrorSkipFileItemWriter"
             commit-interval="100"
             skip-limit="11">
    <batch:skippable-exception-classes>
        <batch:include class="org.springframework.batch.item.file.FlatFileParseException"/>
        <batch:include class="com.manuel.jordan.batch.exception.PersonBatchException"/>
    </batch:skippable-exception-classes>
</batch:chunk>
It works as expected.
For logging and test purposes I need to retrieve the following data: the skipped items (for example 8 of 11), broken down by exception type, such as:
By FlatFileParseException 5
By PersonBatchException 3
Even better if it can show in which step and area (reader, processor, writer) each skipped item was thrown.
Working through JobExecution, I have the following:
List<Throwable> exceptions = jobExecution.getAllFailureExceptions();
logger.info("exceptions size: {}", exceptions.size());
for (Throwable throwable : exceptions) {
    logger.error("Throwable Class: {}", throwable.getClass().getSimpleName());
    logger.error("{}", throwable.getMessage());
}

List<Throwable> exceptions_ = jobExecution.getFailureExceptions();
logger.info("exceptions_ size: {}", exceptions_.size());
for (Throwable throwable : exceptions_) {
    logger.error("Throwable Class: {}", throwable.getClass().getSimpleName());
    logger.error("{}", throwable.getMessage());
}
But both collections have sizes greater than 1, and they only contain the exceptions when the Job completes as FAILED.
Because 8 < 11, the Job completes as COMPLETED and both lists return size 0.
Implement SkipListener. It has three callback methods:
1: onSkipInRead
2: onSkipInProcess
3: onSkipInWrite
http://docs.spring.io/spring-batch/apidocs/org/springframework/batch/core/SkipListener.html
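Those callbacks are exactly the hook you need: each one tells you the phase (read, process, write) and hands you the exception, so you can tally both at once. A minimal sketch of a counting listener, assuming a Person item type (the class name and map keys are illustrative, not from your config):

import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import java.util.concurrent.atomic.AtomicInteger;
import org.springframework.batch.core.SkipListener;

// Person is the assumed item type flowing through your reader/processor.
public class CountingSkipListener implements SkipListener<Person, Person> {

    private final Map<String, AtomicInteger> counts = new ConcurrentHashMap<String, AtomicInteger>();

    private void count(String phase, Throwable t) {
        String key = phase + ":" + t.getClass().getSimpleName();
        counts.putIfAbsent(key, new AtomicInteger());
        counts.get(key).incrementAndGet();
    }

    @Override
    public void onSkipInRead(Throwable t) {
        count("read", t); // e.g. FlatFileParseException lands here
    }

    @Override
    public void onSkipInProcess(Person item, Throwable t) {
        count("process", t); // e.g. PersonBatchException from the processor
    }

    @Override
    public void onSkipInWrite(Person item, Throwable t) {
        count("write", t);
    }

    public Map<String, AtomicInteger> getCounts() {
        return counts;
    }
}

Register the bean inside the tasklet with <batch:listeners><batch:listener ref="countingSkipListener"/></batch:listeners> and log getCounts() once the step completes, e.g. from a StepExecutionListener. Note that StepExecution also exposes aggregate getReadSkipCount(), getProcessSkipCount() and getWriteSkipCount(), but not a per-exception-class breakdown.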
Related
I am raising a custom exception to test failure handling in my structured streaming job, as below. I see the query gets terminated, but I cannot understand why the driver script does not fail with a non-zero exit code.
streamingDF.writeStream
  .trigger(Trigger.ProcessingTime(10000L))
  .foreachBatch { (batchDF: DataFrame, batchId: Long) =>
    val transformedDF: DataFrame = DoSomeProcessing(batchDF)
    if (batchId == 1) {
      throw new Exception("Custom Exception as batchId is 1")
    }
    // ... write transformedDF to the sink ...
  }
  .start()
I get the trace below on my console, but the driver script does not exit and no new logs are printed to the console.
Exception in thread "main" org.apache.spark.sql.streaming.StreamingQueryException: Custom Exception as batchId is 1
=== Streaming Query ===
Identifier: [id = 6f4c3b4c-bc30-46fe-93ef-8378c23380ab, runId = 1241cb37-493b-4882-ab28-9df8a8c6fb1a]
Current Committed Offsets: ...
Current Available Offsets: ...
Current State: ACTIVE
Thread State: RUNNABLE
Logical Plan:
RepartitionByExpression [timestamp#12], 10
...
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:295)
at org.apache.spark.sql.execution.streaming.StreamExecution$$anon$1.run(StreamExecution.scala:189)
Caused by: java.lang.Exception: Custom Exception as batchId is 1
at MySteamingApp$$anonfun$startSparkStructuredStreaming$1.apply(MySteamingApp.scala:61)
at MySteamingApp$$anonfun$startSparkStructuredStreaming$1.apply(MySteamingApp.scala:57)
at org.apache.spark.sql.execution.streaming.sources.ForeachBatchSink.addBatch(ForeachBatchSink.scala:35)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5$$anonfun$apply$17.apply(MicroBatchExecution.scala:534)
at org.apache.spark.sql.execution.SQLExecution$$anonfun$withNewExecutionId$1.apply(SQLExecution.scala:78)
at org.apache.spark.sql.execution.SQLExecution$.withSQLConfPropagated(SQLExecution.scala:125)
at org.apache.spark.sql.execution.SQLExecution$.withNewExecutionId(SQLExecution.scala:73)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch$5.apply(MicroBatchExecution.scala:532)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.org$apache$spark$sql$execution$streaming$MicroBatchExecution$$runBatch(MicroBatchExecution.scala:531)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply$mcV$sp(MicroBatchExecution.scala:198)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1$$anonfun$apply$mcZ$sp$1.apply(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProgressReporter$class.reportTimeTaken(ProgressReporter.scala:351)
at org.apache.spark.sql.execution.streaming.StreamExecution.reportTimeTaken(StreamExecution.scala:58)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution$$anonfun$runActivatedStream$1.apply$mcZ$sp(MicroBatchExecution.scala:166)
at org.apache.spark.sql.execution.streaming.ProcessingTimeExecutor.execute(TriggerExecutor.scala:56)
at org.apache.spark.sql.execution.streaming.MicroBatchExecution.runActivatedStream(MicroBatchExecution.scala:160)
at org.apache.spark.sql.execution.streaming.StreamExecution.org$apache$spark$sql$execution$streaming$StreamExecution$$runStream(StreamExecution.scala:279)
... 1 more
I think this is down to how task failures are configured; have a look at spark.task.maxFailures:
spark.task.maxFailures (default: 4): Number of failures of any particular task before giving up on the job. The total number of failures spread across different tasks will not cause the job to fail; a particular task has to fail this number of attempts. Should be greater than or equal to 1. Number of allowed retries = this value - 1.
Further, have a look at: Is there a way to dynamically stop Spark Structured Streaming?
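If the goal is specifically a non-zero exit code from the driver, one option is to catch the StreamingQueryException that awaitTermination() rethrows and exit explicitly. A hedged sketch in Java (the helper class and method names are illustrative, not part of your code):

import org.apache.spark.sql.streaming.StreamingQuery;
import org.apache.spark.sql.streaming.StreamingQueryException;

public final class DriverExit {

    // Blocks until the query stops; forces a non-zero exit code if it failed.
    public static void awaitOrExit(StreamingQuery query) {
        try {
            query.awaitTermination();
        } catch (StreamingQueryException e) {
            e.printStackTrace(); // surface the root cause in the driver log
            System.exit(1);      // non-zero exit code for the driver script
        }
    }
}

In Scala the equivalent would be wrapping query.awaitTermination() in a try and calling sys.exit(1) on failure.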
I am designing a Spring Batch job which reads multiple CSV files. I have used partitioning to read each file in chunks and process it to decrypt a certain column in the CSV. Before decrypting, if I encounter any validation error, I throw a custom exception.
Now what I want is: if the processing finds any validation error in the first line, the other lines should not be processed, and the job should end.
How can I achieve this? I tried to implement ProcessorListener too, but it has no StepExecution object with which I could call setTerminateOnly() or set ExitStatus=FAILED.
Also note that I have multiple threads accessing the file at different lines. I want to kill all threads in the event of the first encountered error.
Thanks in advance
So, I identified that running multiple asynchronous concurrent threads (Spring Batch partitioning) was the real issue. Though one of the threads threw an Exception, the other threads kept running in parallel and finished executing to the end.
At the end, the Job FAILED overall and there was no output processed, but it still consumed time processing the rest of the data.
Well, the solution is as simple as it gets: we just need to stop the Job when an error is encountered during processing.
The Custom Processor
public class MultiThreadedFlatFileItemProcessor implements ItemProcessor<BinFileVO, BinFileVO>, JobExecutionListener {

    private JobExecution jobExecution;
    private RSADecrypter decrypter;

    public RSADecrypter getDecrypter() {
        return decrypter;
    }

    public void setDecrypter(RSADecrypter decrypter) {
        this.decrypter = decrypter;
    }

    /**
     * This method is used to process the encrypted data.
     * @param item
     */
    @Override
    public BinFileVO process(BinFileVO item) throws JobException {
        if (null != item.getEncryptedText() && !item.getEncryptedText().isEmpty()) {
            String decrypted = decrypter.getDecryptedText(item.getEncryptedText());
            if (null != decrypted && !decrypted.isEmpty()) {
                if (decrypted.matches("[0-9]+")) {
                    if (decrypted.length() >= 12 && decrypted.length() <= 19) {
                        item.setEncryptedText(decrypted);
                    } else {
                        // stop the whole job before failing this item
                        this.jobExecution.stop();
                        throw new JobException(PropertyLoader.getValue(ApplicationConstants.DECRYPTED_CARD_NO_LENGTH_INVALID), item.getLineNumber());
                    }
                }
            } else {
                this.jobExecution.stop();
                throw new JobException(PropertyLoader.getValue(ApplicationConstants.EMPTY_ENCRYPTED_DATA), item.getLineNumber());
            }
        }
        return item;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        this.jobExecution = jobExecution;
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
    }
}
The Job xml config
<?xml version="1.0" encoding="UTF-8"?>
<beans xmlns="http://www.springframework.org/schema/beans"
       .....>

    <!-- JobRepository and JobLauncher are configuration/setup classes -->
    <bean id="jobRepository" class="org.springframework.batch.core.repository.support.MapJobRepositoryFactoryBean" />
    <bean id="jobLauncher" class="org.springframework.batch.core.launch.support.SimpleJobLauncher">
        <property name="jobRepository" ref="jobRepository" />
    </bean>

    <!-- Job Details -->
    <job id="simpleMultiThreadsReaderJob" xmlns="http://www.springframework.org/schema/batch">
        <step id="step">
            <partition step="step1" partitioner="partitioner">
                <handler grid-size="5" task-executor="taskExecutor"/>
            </partition>
        </step>
        <listeners>
            <listener ref="decryptingItemProcessor"/>
        </listeners>
    </job>

    <step id="step1" xmlns="http://www.springframework.org/schema/batch">
        <tasklet>
            <chunk reader="itemReader" writer="itemWriter" processor="decryptingItemProcessor" commit-interval="500"/>
            <listeners>
                <listener ref="customItemProcessorListener" />
            </listeners>
        </tasklet>
    </step>

    <!-- Processor Details -->
    <bean id="decryptingItemProcessor" class="com.test.batch.io.MultiThreadedFlatFileItemProcessor">
        <property name="decrypter" ref="rsaDecrypter" />
    </bean>

    <!-- RSA Decrypter class -->
    <bean id="rsaDecrypter" class="test.batch.secure.rsa.client.RSADecrypter"/>

    <!-- Partitioner Details -->
    <bean class="org.springframework.batch.core.scope.StepScope" />
    <bean id="partitioner" class="com.test.batch.partition.FlatFilePartitioner" scope="step">
        <property name="resource" ref="inputFile"/>
    </bean>

    <bean id="taskExecutor" class="org.springframework.scheduling.concurrent.ThreadPoolTaskExecutor">
        <property name="corePoolSize" value="10"/>
    </bean>

    <!-- Step will need a transaction manager -->
    <bean id="transactionManager" class="org.springframework.batch.support.transaction.ResourcelessTransactionManager" />

    ........
    .................
</beans>
Here are the logs
2016-09-01 06:32:40 INFO SimpleJobRepository:273 - Parent JobExecution is stopped, so passing message on to StepExecution
2016-09-01 06:32:43 INFO ThreadStepInterruptionPolicy:60 - Step interrupted through StepExecution
2016-09-01 06:32:43 INFO AbstractStep:216 - Encountered interruption executing step: Job interrupted status detected.
; org.springframework.batch.core.JobInterruptedException
2016-09-01 06:32:45 ERROR CustomJobListener:163 - exception :At line No. 1 : The decrypted card number is less than 12 or greater than 19 in length
2016-09-01 06:32:45 ERROR CustomJobListener:163 - exception :Job interrupted status detected.
2016-09-01 06:32:45 INFO SimpleJobLauncher:135 - Job: [FlowJob: [name=simpleMultiThreadsReaderJob]] completed with the following parameters: [{outputFile=/usr/local/pos/bulktokenization/csv/outputs/cc_output_EDWError_08162016.csv, partitionFile=/usr/local/pos/bulktokenization/csv/partitions/, inputFile=C:\usr\local\pos\bulktokenization\csv\inputs\cc_input_EDWError_08162016.csv, fileName=cc_input_EDWError_08162016}] and the following status: [FAILED]
2016-09-01 06:32:45 INFO BatchLauncher:122 - Exit Status : FAILED
2016-09-01 06:32:45 INFO BatchLauncher:123 - Time Taken : 8969
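Since the question mentioned setTerminateOnly(): an alternative to stopping the whole JobExecution is to have the processor implement StepExecutionListener, capture the StepExecution in beforeStep(), and flag it there. A hedged sketch (isValid() and the message are illustrative; BinFileVO and JobException come from the code above):

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;
import org.springframework.batch.item.ItemProcessor;

public class TerminatingItemProcessor implements ItemProcessor<BinFileVO, BinFileVO>, StepExecutionListener {

    private StepExecution stepExecution;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        this.stepExecution = stepExecution;
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null; // keep the step's own exit status
    }

    @Override
    public BinFileVO process(BinFileVO item) throws Exception {
        if (!isValid(item)) {                 // isValid() is illustrative
            stepExecution.setTerminateOnly(); // ask Spring Batch to interrupt the step
            throw new JobException("validation failed", item.getLineNumber());
        }
        return item;
    }

    private boolean isValid(BinFileVO item) {
        return item.getEncryptedText() != null && !item.getEncryptedText().isEmpty();
    }
}

It would be registered as a step listener, the same way decryptingItemProcessor is registered as a job listener above; setTerminateOnly() makes the framework raise JobInterruptedException at the next chunk boundary, which is the same interruption visible in these logs.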
If we throw a custom exception in the Processor, Spring Batch will terminate and mark the job failed unless you set it up as a 'skippable' exception. You have not mentioned where you perform the validation step; are you doing it in the Processor or the Reader? Let me know, because that is where Spring Batch decides.
In my project, when we want to stop the job by throwing a custom exception, we put the validation logic in a Tasklet or Processor and throw the exception as below:
private AccountInfoEntity getAccountInfo(Long partnerId) {
    if (partnerId != null) {
        .....
        return ....;
    } else {
        throw new ReportsException("XXXXX");
    }
}
I have a standalone Spark 1.4.1 job that runs on a Red Hat box, which I submit via spark-submit, and that sometimes hangs during insertion of data from an RDD. I have auto-commit turned off on the connection and commit the transactions in batches of insertions. What the logs show me before it hangs:
16/03/25 14:00:05 INFO Executor: Finished task 3.0 in stage 138.0 (TID 915). 1847 bytes result sent to driver
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=0 lim=1847 cap=1
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=0 lim=1847 cap=1847
16/03/25 14:00:05 DEBUG TaskSchedulerImpl: parentName: , name: TaskSet_138, runningTasks: 1
16/03/25 14:00:05 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.118 ms) AkkaMessage(StatusUpdate(915,FINISHED,java.nio.HeapByteBuffer[pos=621 li
16/03/25 14:00:05 INFO TaskSetManager: Finished task 3.0 in stage 138.0 (TID 915) in 7407 ms on localhost (23/24)
16/03/25 14:00:05 TRACE DAGScheduler: Checking for newly runnable parent stages
16/03/25 14:00:05 TRACE DAGScheduler: running: Set(ResultStage 138)
16/03/25 14:00:05 TRACE DAGScheduler: waiting: Set()
16/03/25 14:00:05 TRACE DAGScheduler: failed: Set()
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] received message AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;#7ed52306,BlockManagerId(driver, local
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: Received RPC message: AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;#7ed52306,BlockManagerId(driver, localhos
16/03/25 14:00:10 DEBUG AkkaRpcEnv$$anonfun$actorRef$lzycompute$1$1$$anon$1: [actor] handled message (0.099 ms) AkkaMessage(Heartbeat(driver,[Lscala.Tuple2;#7ed52306,BlockManagerId(dri
And then it just repeats the last 3 lines with this intermittently:
16/03/25 14:01:04 TRACE HeartbeatReceiver: Checking for hosts with no recent heartbeats in HeartbeatReceiver.
The kicker is that I can't take a look at the web UI due to some firewall issues on these machines. What I noticed is that this issue was more prevalent when I was inserting in batches of 1000 than in batches of 100. This is the Scala code that looks to be the culprit:
// records should have up to INSERT_BATCH_SIZE entries
private def insertStuff(records: Seq[(String, (String, Stuff1, Stuff2, Stuff3))]) {
  if (!records.isEmpty) {
    // get statement used for insertion (instantiated in an array of statements)
    val stmt = stuffInsertArray(/* stuff */)
    logger.info("Starting insertions on stuff" + table + " for " + time + " with " + records.length + " records")
    try {
      records.foreach(record => {
        // get vals from record
        ...
        // perform sanity checks
        if (/* validate stuff */) {
          // log stuff because it didn't validate
        } else {
          stmt.setInt(1, /* stuff */)
          stmt.setLong(2, /* stuff */)
          ...
          stmt.addBatch()
        }
      })
      // check if connection is still valid
      if (!connInsert.isValid(VALIDATE_CONNECTION_TIMEOUT)) {
        logger.error("Insertion connection is not valid while inserting stuff.")
        throw new RuntimeException(s"Insertion connection not valid while inserting stuff.")
      }
      logger.debug("Stuff insertion executing batch...")
      stmt.executeBatch()
      logger.debug("Stuff insertion execution complete. Committing...")
      // commit insert batch. Either INSERT_BATCH_SIZE insertions planned or the last batch to be done
      insertCommit() // this does the commit and resets some counters
      logger.debug("Stuff insertion commit complete.")
    } catch {
      case e: Exception => throw new RuntimeException(s"insertStuff exception ${e.getMessage}")
    }
  }
}
And here's how it gets called:
// stuffData is an RDD
stuffData.foreachPartition(recordIt => {
  // new instance of the object whose member function we're currently in
  val obj = new Obj(clusterInfo)
  recordIt.grouped(INSERT_BATCH_SIZE).foreach(records => obj.insertStuff(records))
})
I put in all the extra logging and connection checking just to isolate the issue, but since I write for every batch of insertions, the logs get convoluted. If I serialize the insertions, the issue still persists. Any idea why the last task (out of 24) doesn't finish? Thanks.
I am getting an exception in Spring Batch code that I suspect is due to some bad configuration. First I will give context and then describe the problem I am having.
I am using Spring Batch 2.2.6.RELEASE
I have a job defined like this (simplified excerpts that I consider are the relevant ones):
....
<batch:job id="job1">
<batch:step id="step1">
<batch:tasklet ref="myTasklet1"/>
</batch:step>
<batch:step id="step2" >
<batch:tasklet ref="myTasklet2"/>
</batch:step>
<batch:step id="step3">
<batch:tasklet>
<batch:chunk reader="myReader" processor="myProcessor" writer="myCompositeWriter" commit-interval="10" />
</batch:tasklet>
<batch:listeners>
<batch:listener ref="myWriter2" />
</batch:listeners>
</batch:step>
<batch:step id="step4" >
<batch:tasklet ref="myTasklet4"/>
</batch:step>
</batch:job>
...
<bean id="myCompositeWriter " class="org.springframework.batch.item.support.CompositeItemWriter">
<property name="delegates">
<list>
<ref bean="myWriter1" />
<ref bean="myWriter2" />
</list>
</property>
</bean>
<bean id="myWriter2" class="my.test.MyWriter2" scope="step" />
...
The simplified writer2 is as follows:
public class MyWriter2 implements ItemWriter<Object>, StepExecutionListener {

    private ExecutionContext jobContext;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        JobExecution jobExecution = stepExecution.getJobExecution();
        this.jobContext = jobExecution.getExecutionContext();
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return null;
    }

    @Override
    public void write(List<? extends Object> items) {
        try {
            // database insertion
        } catch (Exception e) {
            // add exception to context for later notifications
            jobContext.put("writer2_error", e);
        }
    }
}
Some requirements:
Need to access the jobContext from all the tasklets and from writer2. Accessing the job context from the tasklets is straightforward. writer2 implements StepExecutionListener and is registered as a listener on step3 to be able to access it.
writer2 inserts data into a database. This operation may fail, but it should allow the job to continue execution, and if everything else works fine the job should end successfully. That is the reason why the insertion exceptions are all caught.
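For reference, the 'straightforward' tasklet access looks like this; a minimal sketch (MyTasklet is illustrative, and the writer2_error key matches the writer below):

import org.springframework.batch.core.StepContribution;
import org.springframework.batch.core.scope.context.ChunkContext;
import org.springframework.batch.core.step.tasklet.Tasklet;
import org.springframework.batch.item.ExecutionContext;
import org.springframework.batch.repeat.RepeatStatus;

public class MyTasklet implements Tasklet {

    @Override
    public RepeatStatus execute(StepContribution contribution, ChunkContext chunkContext) {
        // navigate from the chunk context up to the job-level ExecutionContext
        ExecutionContext jobContext = chunkContext.getStepContext()
                .getStepExecution()
                .getJobExecution()
                .getExecutionContext();
        Object writerError = jobContext.get("writer2_error"); // set by MyWriter2 below
        // ... react to the stored exception if present ...
        return RepeatStatus.FINISHED;
    }
}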
The problem:
If the write operation in writer2 fails, the exception is caught, but the job still fails after step3.
In the Spring Batch Admin console the statuses of steps 1, 2 and 3 are COMPLETED, the step4 status is NONE, the job status and exit code are FAILED, and the following exception is shown:
org.springframework.batch.core.JobExecutionException: Flow execution ended unexpectedly
    at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:141)
    at org.springframework.batch.core.job.AbstractJob.execute(AbstractJob.java:301)
    at org.springframework.batch.core.launch.support.SimpleJobLauncher$1.run(SimpleJobLauncher.java:134)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1145)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:615)
    at java.lang.Thread.run(Thread.java:745)
Caused by: org.springframework.batch.core.job.flow.FlowExecutionException: Ended flow=job1 at state=step3 with exception
    at org.springframework.batch.core.job.flow.support.SimpleFlow.resume(SimpleFlow.java:160)
    at org.springframework.batch.core.job.flow.support.SimpleFlow.start(SimpleFlow.java:130)
    at org.springframework.batch.core.job.flow.FlowJob.doExecute(FlowJob.java:135)
    ... 5 more
Caused by: java.util.EmptyStackException
    at org.codehaus.jettison.util.FastStack.peek(FastStack.java:39)
    at org.codehaus.jettison.mapped.MappedXMLStreamWriter.setNewValue(MappedXMLStreamWriter.java:121)
    at org.codehaus.jettison.mapped.MappedXMLStreamWriter.makeCurrentJSONObject(MappedXMLStreamWriter.java:113)
    at org.codehaus.jettison.mapped.MappedXMLStreamWriter.writeStartElement(MappedXMLStreamWriter.java:241)
    at com.thoughtworks.xstream.io.xml.StaxWriter.startNode(StaxWriter.java:162)
    at com.thoughtworks.xstream.io.xml.AbstractXmlWriter.startNode(AbstractXmlWriter.java:37)
    at com.thoughtworks.xstream.io.WriterWrapper.startNode(WriterWrapper.java:33)
    at com.thoughtworks.xstream.io.path.PathTrackingWriter.startNode(PathTrackingWriter.java:44)
    at com.thoughtworks.xstream.io.ExtendedHierarchicalStreamWriterHelper.startNode(ExtendedHierarchicalStreamWriterHelper.java:17)
    at com.thoughtworks.xstream.converters.collections.AbstractCollectionConverter.writeItem(AbstractCollectionConverter.java:62)
    at com.thoughtworks.xstream.converters.collections.MapConverter.marshal(MapConverter.java:57)
    at com.thoughtworks.xstream.core.AbstractReferenceMarshaller.convert(AbstractReferenceMarshaller.java:65)
    at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:78)
    at com.thoughtworks.xstream.core.TreeMarshaller.convertAnother(TreeMarshaller.java:63)
    at com.thoughtworks.xstream.core.Tree
If no exceptions occur then the job ends successfully.
Any ideas why this could be failing?
Thanks.
I am trying to skip all the exceptions during the batch run using the following config:
<chunk reader="aaaFileReader" writer="aaaDBWriter"
commit-interval="100" skip-limit="100000">
<skippable-exception-classes>
<include class="java.lang.Exception" />
<exclude
class="org.springframework.jdbc.CannotGetJdbcConnectionException" />
</skippable-exception-classes>
</chunk>
<listeners>
<listener ref="aaabatchFailureListener" />
</listeners>
And I handle the exception in my listener. But when Spring Batch actually encounters an exception, it is not being skipped and the batch run ends in a FAILED state. The actual exception is a FlatFileParseException. How do I skip the FlatFileParseException?
Here is the log:
15:18:21.257 [main] DEBUG o.s.b.repeat.support.RepeatTemplate - Handling fatal exception explicitly (rethrowing first of 1): org.springframework.batch.core.step.skip.NonSkippableReadException: Non-skippable exception during read
15:18:21.257 [main] ERROR o.s.batch.core.step.AbstractStep - Encountered an error executing the step
org.springframework.batch.core.step.skip.NonSkippableReadException: Non-skippable exception during read
at org.springframework.batch.core.step.item.FaultTolerantChunkProvider.read(FaultTolerantChunkProvider.java:81) ~[spring-batch-core.jar:na]
at org.springframework.batch.core.step.item.SimpleChunkProvider$1.doInIteration(SimpleChunkProvider.java:106) ~[spring-batch-core.jar:na]
at org.springframework.batch.repeat.support.RepeatTemplate.getNextResult(RepeatTemplate.java:367) ~[spring-batch-infrastructure.jar:na]
at org.springframework.batch.repeat.support.RepeatTemplate.executeInternal(RepeatTemplate.java:215) ~[spring-batch-infr
Caused by: org.springframework.batch.item.file.FlatFileParseException: Parsing error at line: 5, input=[0254285458908060150983101150983 AK00055002035201401081044000804CK5861 00Twist,Oliver AT&T 20121208 ]
at org.springframework.batch.
You can add the FlatFileParseException class to your batch job config, for example:
<batch:chunk reader="customImportReader" writer="customImporter" processor="customProcessor" commit-interval="1" skip-limit="10">
<batch:skippable-exception-classes>
<batch:include class="org.springframework.batch.item.file.FlatFileParseException" />
</batch:skippable-exception-classes>
</batch:chunk>
As per the Spring Batch documentation, some exceptions do not qualify as skippable.
In your case it is clear from the logs that org.springframework.batch.item.file.FlatFileParseException is not being treated as a skippable exception, hence the rethrown org.springframework.batch.core.step.skip.NonSkippableReadException.
Read more in the Configuring Skip Logic section, which says:
For any exception encountered, the skippability will be determined by the nearest superclass in the class hierarchy. Any unclassified exception will be treated as 'fatal'.
Read more about NonSkippableReadException, which says:
Fatal exception to be thrown when a read operation could not be skipped.
Create a custom file reader and override the doRead() method to always throw your custom exception:
public class CustomFlatFileItemReader<T> extends FlatFileItemReader<T> {

    @Override
    protected T doRead() throws Exception {
        T itemRead = null;
        try {
            itemRead = super.doRead();
        } catch (FlatFileParseException e) {
            // wrap the parse failure in our own exception type
            throw new MyException(e.getMessage(), e);
        }
        return itemRead;
    }
}
Override your job skip policy to always skip your custom exception as below:
.skipPolicy((Throwable t, int skipCount) -> {
    if (t instanceof BatchServiceException)
        return true;
    else
        return false;
})
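For context, a hedged sketch of where such a policy plugs in with the Java builder API (stepBuilderFactory, MyItem and the reader/writer beans are assumptions, not from the original post):

Step step = stepBuilderFactory.get("importStep")
        .<MyItem, MyItem>chunk(100)
        .reader(customFlatFileItemReader)  // the CustomFlatFileItemReader above
        .writer(itemWriter)
        .faultTolerant()                   // enables skip/retry configuration
        .skipPolicy((Throwable t, int skipCount) -> t instanceof BatchServiceException)
        .build();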