I have a Spring Batch application which runs a DB reading/writing job on a cron schedule. I want to cover the scenario where the application may be stopped for 4 hours, and when it next runs it should use the start timestamp of the last job to determine the data it processes.
I have this code which registers the distinct job parameters, setting a new 'jobID' parameter each time.
@Override
@Scheduled(cron = "0 0 */1 * * ?")
public JobExecution startJob(CdmrJob cdmrJob, CdmrJobType jobType) { // enum/wrapper types implied by the getJobBeanName()/name() calls
    // look up the job via the spring context or factory
    Job job = applicationContext.getBean(cdmrJob.getJobBeanName(), Job.class);
    String jobId = String.valueOf(System.currentTimeMillis());
    JobParameters param = new JobParametersBuilder()
            .addString("jobName", job.getName(), true)
            .addDate("jobDate", new Date(), false)
            .addString("jobUuid", UUID.randomUUID().toString(), false)
            .addString("jobID", jobId, true)
            .addString("jobType", jobType.name(), true)
            .toJobParameters();
    // get the last job's start time if present..
    getLastJobExecution(cdmrJob, jobType);
    log.info("startJob({}) {}", cdmrJob, StructuredArguments.entries(param.getParameters()));
    JobExecution jobExecution = null;
    try {
        jobExecution = jobLauncher.run(job, param);
....
and then I have this code which uses the 'jobRepository' to search for the last instance of the matching job
@Override
public JobExecution getLastJobExecution(CdmrJob cdmrJob, CdmrJobType jobType) {
    // match the job name and type
    Map<String, JobParameter> parameters = new HashMap<>();
    parameters.put("jobName", new JobParameter(cdmrJob.getBatchJobName(), true));
    parameters.put("jobType", new JobParameter(jobType.name(), true));
    JobParameters jobParameters = new JobParameters(parameters);
    JobExecution jobExecution = jobRepository.getLastJobExecution(cdmrJob.getBatchJobName(), jobParameters);
    log.info("getLastJobExecution({}) -> {}", jobParameters, jobExecution);
    return jobExecution;
}
The problem is that the 'jobExecution' is always null, since it seems 'jobRepository.getLastJobExecution' can't match the subset of job parameters.
When I set the 'jobID' parameter to be non-identifying, the 'getLastJobExecution()' method matches and returns an execution, but then Spring Batch complains that a completed job cannot be restarted.
Without resorting to writing a JDBC SQL query, is there an approach via the batch API to run this kind of job-parameter-subset search?
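For illustration, this is roughly the kind of lookup I am hoping is possible, sketched here against JobExplorer rather than JobRepository (assuming a JobExplorer bean is injected; the 100-instance window and method name are arbitrary, this is not my real code):
// Sketch only: jobName would be cdmrJob.getBatchJobName(), jobType would be jobType.name()
public JobExecution findLastJobExecution(String jobName, String jobType) {
    return jobExplorer.getJobInstances(jobName, 0, 100).stream()
            // expand each recent instance into its executions
            .flatMap(instance -> jobExplorer.getJobExecutions(instance).stream())
            // keep only executions whose 'jobType' parameter matches
            .filter(execution -> jobType.equals(execution.getJobParameters().getString("jobType")))
            // newest matching execution by create time
            .max(Comparator.comparing(JobExecution::getCreateTime))
            .orElse(null);
}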
I have seen these questions
Querying Spring Batch JobExecution with Batch Param values
Spring Batch Job taking previous execution parameters
Hi, I have a Kafka consumer application that uses Spring Kafka. It consumes messages batch-wise, but before that it was consuming sequentially. When I consumed sequentially I used the annotation below:
@RetryableTopic(
        attempts = "3",
        backoff = @Backoff(delay = 1000, multiplier = 2.0),
        autoCreateTopics = "false",
        topicSuffixingStrategy = TopicSuffixingStrategy.SUFFIX_WITH_INDEX_VALUE,
        exclude = {CustomNonRetryableException.class})
In my code I throw CustomNonRetryableException whenever I don't need to retry an exception scenario. For other exceptions it will retry 3 times.
But when I switched to batch processing, I removed the above code and used the kafkaListenerContainerFactory below.
@Bean
public ConcurrentKafkaListenerContainerFactory<String, Object> kafkaListenerContainerFactory() {
    ConcurrentKafkaListenerContainerFactory<String, Object> factory =
            new ConcurrentKafkaListenerContainerFactory<>();
    factory.setConsumerFactory(consumerFactory());
    factory.getContainerProperties().setCommitLogLevel(LogIfLevelEnabled.Level.DEBUG);
    factory.getContainerProperties().setMissingTopicsFatal(false);
    factory.setBatchListener(true);
    factory.getContainerProperties().setAckMode(ContainerProperties.AckMode.BATCH);
    factory.setCommonErrorHandler(new DefaultErrorHandler(new DeadLetterPublishingRecoverer(kafkaTemplate(),
            (r, e) -> {
                return new TopicPartition(r.topic() + "-dlt", r.partition());
            }), new FixedBackOff(1000L, 2L)));
    return factory;
}
Now what I'm trying to do is apply that CustomNonRetryableException class so that it won't retry 3 times. I want only the scenarios that throw CustomNonRetryableException to be retried one time and then sent to the DLT topic. How can I achieve this?
It's a bug; see this answer: Spring Boot Kafka Batch DefaultErrorHandler addNotRetryableExceptions?
It is fixed and will be available in the next release.
Also see the note in that answer about the preferred way to handle errors when using a batch listener, so that only the failed record is retried, instead of the whole batch.
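As a rough sketch of that preferred approach (the topic name, listener method and process() helper below are placeholders, not code from the question): the batch listener throws BatchListenerFailedException with the index of the failing record, so the DefaultErrorHandler commits the earlier records and retries only from the failed one; once the fixed release is out, the non-retryable exception can also be registered on the handler.
// Sketch only: a batch listener that tells the DefaultErrorHandler which record failed.
@KafkaListener(topics = "my-topic", containerFactory = "kafkaListenerContainerFactory")
public void listen(List<ConsumerRecord<String, Object>> records) {
    for (int i = 0; i < records.size(); i++) {
        try {
            process(records.get(i)); // hypothetical business logic
        } catch (Exception e) {
            // Records before index i are committed; the handler retries from this record only.
            throw new BatchListenerFailedException("failed to process record", e, i);
        }
    }
}

// And, once the fixed version is available, register the exception as not retryable
// ('recoverer' and 'factory' refer to the DeadLetterPublishingRecoverer and container factory from the question):
DefaultErrorHandler handler = new DefaultErrorHandler(recoverer, new FixedBackOff(1000L, 2L));
handler.addNotRetryableExceptions(CustomNonRetryableException.class);
factory.setCommonErrorHandler(handler);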
I want to test that messages from a Pub/Sub topic are parsed correctly into the protobuf structure.
The issue is that PubsubIO.Read is an unbounded source and the test does not terminate on its own.
One option I tried is to terminate the pipeline manually by setting blockOnRun=false and calling pipeline.cancel(), but in this case the PAssert checks do not fire and any failing test passes.
What is the correct way to test elements of an unbounded PCollection with PAssert?
@Test
public void TestThatPublishedMessagesAreParsedCorrectly() throws IOException {
    MyMessage testMessage = TestUtils.makeNewMessage();
    String subscriptionName = initPubSubTopicWithMessages(testMessage);
    Pipeline pipeline = createTestPipeline(getPubSubEmulatorRoot());
    PCollection<MyMessage> messages = pipeline.apply(
            PubsubIO
                    .readProtos(MyMessage.class)
                    .fromSubscription(subscriptionName));
    PAssert.that(messages).containsInAnyOrder(testMessage);
    PipelineResult result = pipeline.run();
    result.waitUntilFinish(Duration.standardSeconds(5));
    result.cancel();
}
I am using a Spring Batch KafkaItemReader in a job which is executed on a fixed delay of 10 seconds. Once the job with a chunk size of 1000 is completed, the Spring scheduler re-submits the same job again after a delay of 10 seconds. I am observing that the KafkaItemReader is always including the last offset's record in the subsequent job executions. Suppose in the first job execution records are processed from offsets 1-1000; in the next job execution I expect the KafkaItemReader to pick up records from offset 1001. But in the next execution, the KafkaItemReader picks up from offset 1000 (which is already processed).
Adding code blocks
// The job is submitted by the scheduled task scheduler with the configuration below
<task:scheduled-tasks>
    <task:scheduled ref="runScheduler" method="run" fixed-delay="5000"/>
</task:scheduled-tasks>
//Job Parameters for each submission
String dateParam = new Date().toString();
JobParameters param =
        new JobParametersBuilder().addString("date", dateParam).toJobParameters();
// Below is the kafkaItemReader configuration
@Bean
public KafkaItemReader<String, String> kafkaItemReader() {
    Properties props = new Properties();
    props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "");
    props.put(ConsumerConfig.GROUP_ID_CONFIG, "");
    props.put(CommonClientConfigs.SECURITY_PROTOCOL_CONFIG, "SSL");
    props.put(SslConfigs.SSL_TRUSTSTORE_LOCATION_CONFIG, "");
    props.put(SslConfigs.SSL_TRUSTSTORE_PASSWORD_CONFIG, "");
    props.put(SslConfigs.SSL_KEYSTORE_LOCATION_CONFIG, "");
    props.put(SslConfigs.SSL_KEYSTORE_PASSWORD_CONFIG, "");
    props.put(SslConfigs.SSL_KEY_PASSWORD_CONFIG, "");
    props.put(ConsumerConfig.KEY_DESERIALIZER_CLASS_CONFIG, StringDeserializer.class);
    props.put(ConsumerConfig.VALUE_DESERIALIZER_CLASS_CONFIG, JsonDeserializer.class);
    Map<TopicPartition, Long> partitionOffset = new HashMap<>();
    return new KafkaItemReaderBuilder<String, String>()
            .partitions(0)
            .consumerProperties(props)
            .name("customers-reader")
            .saveState(true)
            .pollTimeout(Duration.ofSeconds(10))
            .topic("")
            .partitionOffsets(partitionOffset)
            .build();
}
@Bean
public Step kafkaStep(StepBuilderFactory stepBuilderFactory, ItemWriter testItemWriter, KafkaItemReader kafkaItemReader) throws Exception {
    return stepBuilderFactory.get("kafkaStep")
            .chunk(10)
            .reader(kafkaItemReader)
            .writer(testItemWriter)
            .build();
}

@Bean
public Job kafkaJob(Step kafkaStep, JobBuilderFactory jobBuilderFactory) throws Exception {
    return jobBuilderFactory.get("kafkaJob").incrementer(new RunIdIncrementer())
            .start(kafkaStep)
            .build();
}
Am I missing some config that is causing this behaviour? I don't see this behaviour if I stop and re-run the application; it picks up the offset properly in that case.
You are running a new job instance on each schedule (by using a different date as an identifying job parameter), but your reader is a singleton bean. This means it will be reused for each run without being reinitialized with the correct offset. You can make it step-scoped to have a new instance of the reader for each run:
@Bean
@StepScope
public KafkaItemReader<String, String> kafkaItemReader() {
    ...
}
This will give you the same behaviour as if you restart the application, which you said fixes the issue.
I'm following the docs: Spring Batch Integration combined with Spring Integration AWS for polling AWS S3.
But the batch execution per file is not working in some situations.
The AWS S3 polling is working correctly: when I put a new file in the bucket, or when I start the application and there are already files in the bucket, the application syncs them to the local directory:
@Bean
public S3SessionFactory s3SessionFactory(AmazonS3 pAmazonS3) {
    return new S3SessionFactory(pAmazonS3);
}

@Bean
public S3InboundFileSynchronizer s3InboundFileSynchronizer(S3SessionFactory pS3SessionFactory) {
    S3InboundFileSynchronizer synchronizer = new S3InboundFileSynchronizer(pS3SessionFactory);
    synchronizer.setPreserveTimestamp(true);
    synchronizer.setDeleteRemoteFiles(false);
    synchronizer.setRemoteDirectory("remote-bucket");
    //synchronizer.setFilter(new S3PersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "simpleMetadataStore"));
    return synchronizer;
}

@Bean
@InboundChannelAdapter(value = IN_CHANNEL_NAME, poller = @Poller(fixedDelay = "30"))
public S3InboundFileSynchronizingMessageSource s3InboundFileSynchronizingMessageSource(
        S3InboundFileSynchronizer pS3InboundFileSynchronizer) {
    S3InboundFileSynchronizingMessageSource messageSource = new S3InboundFileSynchronizingMessageSource(pS3InboundFileSynchronizer);
    messageSource.setAutoCreateLocalDirectory(true);
    messageSource.setLocalDirectory(new FileSystemResource("files").getFile());
    //messageSource.setLocalFilter(new FileSystemPersistentAcceptOnceFileListFilter(new SimpleMetadataStore(), "fsSimpleMetadataStore"));
    return messageSource;
}

@Bean("s3filesChannel")
public PollableChannel s3FilesChannel() {
    return new QueueChannel();
}
I followed the tutorial and created the FileMessageToJobRequest; I won't put the code here because it's the same as in the docs.
So I created the IntegrationFlow and FileMessageToJobRequest beans:
@Bean
public IntegrationFlow integrationFlow(
        S3InboundFileSynchronizingMessageSource pS3InboundFileSynchronizingMessageSource) {
    return IntegrationFlows.from(pS3InboundFileSynchronizingMessageSource,
            c -> c.poller(Pollers.fixedRate(1000).maxMessagesPerPoll(1)))
            .transform(fileMessageToJobRequest())
            .handle(jobLaunchingGateway())
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .get();
}

@Bean
public FileMessageToJobRequest fileMessageToJobRequest() {
    FileMessageToJobRequest fileMessageToJobRequest = new FileMessageToJobRequest();
    fileMessageToJobRequest.setFileParameterName("input.file.name");
    fileMessageToJobRequest.setJob(delimitedFileJob);
    return fileMessageToJobRequest;
}
So I think the problem is in the JobLaunchingGateway.
If I create it like this:
@Bean
public JobLaunchingGateway jobLaunchingGateway() {
    SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
    simpleJobLauncher.setJobRepository(jobRepository);
    simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
    JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
    return jobLaunchingGateway;
}
Case 1 (Bucket is empty when the application starts):
I upload a new file to AWS S3;
The polling works and the file appears in the local directory;
But the transform/job isn't fired;
Case 2 (Bucket already has one file when the application starts):
The job is launched:
2021-01-12 13:32:34.451 INFO 1955 --- [ask-scheduler-1] o.s.b.c.l.support.SimpleJobLauncher : Job: [SimpleJob: [name=arquivoDelimitadoJob]] launched with the following parameters: [{input.file.name=files/FILE1.csv}]
2021-01-12 13:32:34.524 INFO 1955 --- [ask-scheduler-1] o.s.batch.core.job.SimpleStepHandler : Executing step: [delimitedFileJob]
If I add a second file to S3, the job isn't launched, as in case 1.
Case 3 (Bucket has more than one file):
The files are synchronized correctly to the local directory;
But the job is only executed once, for the last file.
So, following the docs, I changed my gateway to:
@Bean
@ServiceActivator(inputChannel = IN_CHANNEL_NAME, poller = @Poller(fixedRate = "1000"))
public JobLaunchingGateway jobLaunchingGateway() {
    SimpleJobLauncher simpleJobLauncher = new SimpleJobLauncher();
    simpleJobLauncher.setJobRepository(jobRepository);
    simpleJobLauncher.setTaskExecutor(new SyncTaskExecutor());
    //JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(jobLauncher());
    JobLaunchingGateway jobLaunchingGateway = new JobLaunchingGateway(simpleJobLauncher);
    //jobLaunchingGateway.setOutputChannel(replyChannel());
    jobLaunchingGateway.setOutputChannel(s3FilesChannel());
    return jobLaunchingGateway;
}
With this new gateway implementation, if I put a new file in S3 the application reacts but doesn't transform it, giving the error:
Caused by: java.lang.IllegalArgumentException: The payload must be of type JobLaunchRequest. Object of class [java.io.File] must be an instance of class org.springframework.batch.integration.launch.JobLaunchRequest
And if there are two files in the bucket when the app starts, FILE1.csv and FILE2.csv, the job runs correctly for FILE1.csv but gives the error above for FILE2.csv.
What's the correct way to implement something like this?
Just to be clear: I want to receive thousands of CSV files in this bucket and read and process them with Spring Batch, but I also need to pick up every new file from S3 as soon as possible.
Thanks in advance.
The JobLaunchingGateway indeed expects only a JobLaunchRequest as a payload.
Since you have that @InboundChannelAdapter(value = IN_CHANNEL_NAME, poller = @Poller(fixedDelay = "30")) on the S3InboundFileSynchronizingMessageSource bean definition, it is really wrong to then have @ServiceActivator(inputChannel = IN_CHANNEL_NAME) for that JobLaunchingGateway without the FileMessageToJobRequest transformer in between.
Your integrationFlow looks OK to me, but then you really need to remove that @InboundChannelAdapter from the S3InboundFileSynchronizingMessageSource bean and fully rely on the c.poller() configuration.
Another way is to leave that @InboundChannelAdapter, but then start the IntegrationFlow from the IN_CHANNEL_NAME, not from a MessageSource (see the sketch below).
Since you have several pollers against the same S3 source, and both of them are based on the same local directory, it is not a surprise to see so many unexpected situations.
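A minimal sketch of that second option, reusing the bean names from the question and assuming IN_CHANNEL_NAME resolves to (or is auto-created as) a plain DirectChannel; the transformer stays in front of the gateway, which is exactly what the "payload must be of type JobLaunchRequest" error is about:
@Bean
public IntegrationFlow integrationFlow(JobLaunchingGateway jobLaunchingGateway) {
    // Start from the channel fed by the @InboundChannelAdapter instead of from the MessageSource,
    // keeping FileMessageToJobRequest between the File payload and the gateway.
    return IntegrationFlows.from(IN_CHANNEL_NAME)
            .transform(fileMessageToJobRequest())
            .handle(jobLaunchingGateway)
            .log(LoggingHandler.Level.WARN, "headers.id + ': ' + payload")
            .get();
}
With the flow wired this way, the @ServiceActivator annotation and the setOutputChannel(s3FilesChannel()) call on the jobLaunchingGateway() bean are no longer needed, since the flow itself subscribes the gateway after the transformer.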
I have a Spring Batch job which processes data in the database whose processed column is set to the flag N. There is a slave step and a master step. The master step is the partitioner, so there will be 10 partitioned slave steps running concurrently. Now the problem is that the partition step starts, but it skips the slave steps and reports success right away.
I already have another similar partition step working correctly. All the setup is the same, just a different step name, a different repository in the item reader, and different logic in the item processor, etc.
I will provide pseudo code:
// item reader
itemreader(@Value("#{stepExecutionContext['to']}") long to,
           @Value("#{stepExecutionContext['from']}") long from,
           @Value("#{stepExecutionContext['id']}") long id) {
    logger("partition id: {} process from: {} to: {}", id, from, to);
    // logic: read the chunk from 'from' to 'to'
}
//item processor and writer, not much to say, just business logic.
// partitioner
public Map<String, ExecutionContext> partition(int gridSize) {
    Map<String, ExecutionContext> map = new HashMap<>();
    int from = 1;
    int range = 10;
    for (int i = 0; i < gridSize; i++) {
        ExecutionContext context = new ExecutionContext();
        context.put("id", i);
        context.put("from", from);
        context.put("to", from + range);
        from += range;
        map.put(partitionKey + i, context);
    }
    return map;
}
// partition step
public Step partitionStep() {
    return this.stepBuilderFactory.get("step1.master")
            .partitioner("step1", partitioner)
            .step(step1())
            .gridSize(10)
            .taskExecutor(taskExecutor)
            .build();
}

// step1
public Step step1() {
    return this.stepBuilderFactory.get("step1")
            .<Pojo, Pojo>chunk(1)
            .reader(itemreader(null, null, null))
            .processor(itemprocessor())
            .writer(itemwriter())
            .build();
}

// job
public Job partitionJob() {
    return this.jobBuilderFactory.get("partitionJob")
            .start(partitionStep())
            .build();
}
I expected the logger in the item reader to print out the information and start processing, because this is how it works with the other partition step I have.
In my DB, the batch_step_execution table shows 1 master step (partition step) and 10 slave steps (partitioned steps), which is what I expected, but for the slave steps the read count is 0, which should not be the case, because in the batch_step_execution_context table the partitioner info shows up correctly, like:
"id":0,"from":1,"to":10
The itemreader should read from 1 to 10 and pass the items to the itemprocessor, and then the itemwriter should save them.
I wonder what happened; all the info is saved in the Spring Batch meta tables, so why are the slave steps still skipped? The map from the partitioner isn't empty at all.
Need help.