What does the scheduleJob method of the Quartz scheduler exactly do?

I wrote a simple example using a Quartz job and scheduler.
In this example a trigger fires every two seconds and the job prints a message to the console.
The class that implements Job is this:
MyJob.java
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class MyJob implements Job {

    public MyJob() {
    }

    public void execute(JobExecutionContext context) throws JobExecutionException {
        System.out.println("My job is running");
    }
}
In another class I have a main method that creates a trigger and schedules it together with a job based on MyJob:
public static void main(String[] args) throws Exception {
    JobDetail job = JobBuilder.newJob(MyJob.class)
            .withIdentity("dummyJobName", "group1")
            .build();

    Trigger trigger = TriggerBuilder.newTrigger()
            .withIdentity("dummyTriggerName", "group")
            .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                    .withIntervalInSeconds(2)
                    .repeatForever())
            .build();

    Scheduler scheduler = new StdSchedulerFactory().getScheduler();
    scheduler.start();
    scheduler.scheduleJob(job, trigger);
}
The example works, but my problem is understanding what the scheduleJob method of the Quartz scheduler actually does. I tried to read the implementation, but I couldn't make sense of the code. Can someone tell me what that method does, and how the job and the trigger are related inside it?
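For reference, the gist of scheduleJob(JobDetail, Trigger) can be sketched like this. This is a simplified outline based on reading the open-source Quartz code, not the literal implementation; resources and notifySchedulerThread are internals of QuartzScheduler, and details vary between versions and JobStore implementations:

public Date scheduleJob(JobDetail jobDetail, Trigger trigger) throws SchedulerException {
    OperableTrigger trig = (OperableTrigger) trigger;

    // 1. Relate the trigger to the job: the trigger merely stores the job's key.
    if (trig.getJobKey() == null) {
        trig.setJobKey(jobDetail.getKey());
    }

    // 2. Compute the first fire time; a trigger that will never fire is an error.
    Date firstFireTime = trig.computeFirstFireTime(null);
    if (firstFireTime == null) {
        throw new SchedulerException("Based on configured schedule, the given trigger will never fire.");
    }

    // 3. Persist job and trigger together in the JobStore (RAM- or JDBC-backed)
    //    and signal the scheduler thread.
    resources.getJobStore().storeJobAndTrigger(jobDetail, trig);
    notifySchedulerThread(firstFireTime.getTime());
    return firstFireTime;
}

In short, scheduleJob itself executes nothing: it links the trigger to the job via the job's key, stores both in the JobStore, and wakes the scheduler thread, which instantiates the Job class and calls execute() each time the trigger fires.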

Related

X-Ray configuration for Spring Batch Job

X-Ray is integrated into my service and everything works fine when some endpoints are triggered from other services.
The Spring Batch job is used to process some data and push part of it to an SNS topic. This job is launched via SimpleJobLauncher.
The issue is that during the push to SNS from my Spring Batch job the following exception is thrown: SegmentNotFoundException: No segment in progress.
Based on the documentation it looks like I need to pass the trace ID to the job:
https://docs.aws.amazon.com/xray/latest/devguide/xray-sdk-java-multithreading.html
Does anyone know what is the best way to integrate X-Ray with Spring Batch? And what would be the cleanest solution?
I solved this issue in the following way:
I passed the segment name, trace ID, and parent ID to my job via job parameters when launching it (the X_RAY_* keys are my own constants):
Entity segment = AWSXRay.getGlobalRecorder().getTraceEntity();
asyncJobLauncher.run(
        myJob,
        new JobParametersBuilder()
                .addLong(JOB_UNIQUENESS_KEY, System.nanoTime())
                .addString(X_RAY_NAME_ID_KEY, segment.getName())
                .addString(X_RAY_TRACE_ID_KEY, segment.getTraceId().toString())
                .addString(X_RAY_PARENT_ID_KEY, segment.getParentId())
                .toJobParameters()
);
I implemented a job listener that creates a new X-Ray segment when a job starts:
@Slf4j
@Component
@RequiredArgsConstructor
public class XRayJobListener implements JobExecutionListener {

    @Value("${spring.application.name}")
    private String appName;

    @Override
    public void beforeJob(@NonNull JobExecution jobExecution) {
        AWSXRayRecorder recorder = AWSXRay.getGlobalRecorder();
        String name = Objects.requireNonNullElse(
                jobExecution.getJobParameters().getString(X_RAY_NAME_ID_KEY),
                appName
        );
        Optional<String> traceIdOpt =
                Optional.ofNullable(jobExecution.getJobParameters().getString(X_RAY_TRACE_ID_KEY));
        TraceID traceID = traceIdOpt
                .map(TraceID::fromString)
                .orElseGet(TraceID::create);
        String parentId = jobExecution.getJobParameters().getString(X_RAY_PARENT_ID_KEY);
        recorder.beginSegment(name, traceID, parentId);
    }

    @Override
    public void afterJob(@NonNull JobExecution jobExecution) {
        AWSXRay.getGlobalRecorder().endSegment();
    }
}
And this listener is added to the configuration of my job:
@Bean
public Job myJob(
        JobBuilderFactory jobBuilderFactory,
        Step myStep1,
        Step myStep2,
        XRayJobListener xRayJobListener
) {
    return jobBuilderFactory
            .get("myJob")
            .incrementer(new RunIdIncrementer())
            .listener(xRayJobListener)
            .start(myStep1)
            .next(myStep2)
            .build();
}

Should Job/Step/Reader/Writer all be beans?

As far as I can see from the examples in the Spring Batch reference doc, objects like the job/step/reader/writer are all declared as @Bean, like the following:
@Bean
public Job footballJob() {
    return this.jobBuilderFactory.get("footballJob")
            .listener(sampleListener())
            ...
            .build();
}

@Bean
public Step sampleStep(PlatformTransactionManager transactionManager) {
    return this.stepBuilderFactory.get("sampleStep")
            .transactionManager(transactionManager)
            .<String, String>chunk(10)
            .reader(itemReader())
            .writer(itemWriter())
            .build();
}
I have a scenario where the server side receives requests and runs jobs concurrently (different job names, or the same job name with different JobParameters). The idea is to create a new job object (including steps/readers/writers) in each concurrent thread, so I probably will not declare the job method as @Bean and will instead new up a job each time, roughly as sketched below.
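For illustration, building such a throwaway job per request might look roughly like this with the Spring Batch 4 builder APIs. This is a sketch only: jobRepository and transactionManager are assumed to be injected from elsewhere, and the step is a trivial pass-through:

import org.springframework.batch.core.Job;
import org.springframework.batch.core.Step;
import org.springframework.batch.core.job.builder.JobBuilder;
import org.springframework.batch.core.repository.JobRepository;
import org.springframework.batch.core.step.builder.StepBuilder;
import org.springframework.batch.item.file.FlatFileItemReader;
import org.springframework.batch.item.file.builder.FlatFileItemReaderBuilder;
import org.springframework.batch.item.file.mapping.PassThroughLineMapper;
import org.springframework.core.io.FileSystemResource;
import org.springframework.transaction.PlatformTransactionManager;

public class OnTheFlyJobFactory {

    private final JobRepository jobRepository;
    private final PlatformTransactionManager transactionManager;

    public OnTheFlyJobFactory(JobRepository jobRepository, PlatformTransactionManager transactionManager) {
        this.jobRepository = jobRepository;
        this.transactionManager = transactionManager;
    }

    public Job buildJob(String jobName, String inputFile) throws Exception {
        FlatFileItemReader<String> reader = new FlatFileItemReaderBuilder<String>()
                .name("flatFileItemReader")
                .resource(new FileSystemResource(inputFile))
                .lineMapper(new PassThroughLineMapper())
                .build();
        // Not a Spring-managed bean, so invoke the init callback manually.
        reader.afterPropertiesSet();

        Step step = new StepBuilder(jobName + "-step")
                .repository(jobRepository)
                .transactionManager(transactionManager)
                .<String, String>chunk(10)
                .reader(reader)
                .writer(items -> items.forEach(System.out::println))
                .build();

        return new JobBuilder(jobName)
                .repository(jobRepository)
                .start(step)
                .build();
    }
}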
There is also a difference in how parameters are passed to objects like the reader. When using @Bean, parameters must be put into, e.g., JobParameters and late-bound into the object using @StepScope, like the following example:
@StepScope
@Bean
public FlatFileItemReader<Foo> flatFileItemReader(
        @Value("#{jobParameters['input.file.name']}") String name) {
    return new FlatFileItemReaderBuilder<Foo>()
            .name("flatFileItemReader")
            .resource(new FileSystemResource(name))
            .build();
}
When not using @Bean, I can just pass the parameter directly, with no need to put data into JobParameters, like the following:
public FlatFileItemReader<Foo> flatFileItemReader(String name) {
    return new FlatFileItemReaderBuilder<Foo>()
            .name("flatFileItemReader")
            .resource(new FileSystemResource(name))
            .build();
}
A simple test shows that it works without @Bean, but I want to confirm formally:
1. Is using @Bean on the job/step/reader/writer mandatory or not?
2. If it is not mandatory, when I create an object like a reader myself, do I need to call afterPropertiesSet() manually?
Thanks!
1. Is using @Bean on the job/step/reader/writer mandatory or not?
No, it is not mandatory to declare batch artefacts as beans. But you would want to at least declare the Job as a bean to benefit from Spring's dependency injection (like injecting the job repository reference into the job, etc.) and to be able to do something like:
ApplicationContext context = new AnnotationConfigApplicationContext(MyJobConfig.class);
Job job = context.getBean(Job.class);
JobLauncher jobLauncher = context.getBean(JobLauncher.class);
jobLauncher.run(job, new JobParameters());
2. If it is not mandatory, when I create an object like a reader myself, do I need to call afterPropertiesSet() manually?
I guess that by creating an object like a reader yourself you mean creating a new instance manually. In that case yes: if the object is not managed by Spring, you need to call that method yourself. If the object is declared as a bean, Spring will call the afterPropertiesSet() method automatically. Here is a quick sample:
import org.springframework.beans.factory.InitializingBean;
import org.springframework.context.ApplicationContext;
import org.springframework.context.annotation.AnnotationConfigApplicationContext;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;

@Configuration
public class TestAfterPropertiesSet {

    @Bean
    public MyBean myBean() {
        return new MyBean();
    }

    public static void main(String[] args) throws Exception {
        ApplicationContext context = new AnnotationConfigApplicationContext(TestAfterPropertiesSet.class);
        MyBean myBean = context.getBean(MyBean.class);
        myBean.sayHello();
    }

    static class MyBean implements InitializingBean {

        @Override
        public void afterPropertiesSet() throws Exception {
            System.out.println("MyBean.afterPropertiesSet");
        }

        public void sayHello() {
            System.out.println("Hello");
        }
    }
}
This prints:
MyBean.afterPropertiesSet
Hello

Repository.save() inside a scheduled method is not working

I have a repository.save() call inside a scheduled method, but it is not saving anything to the database.
Following is my scheduler:
@Component
@Transactional
@Slf4j
public class WomConditionActionJob {

    @Autowired
    private Environment env;

    @Autowired
    private ECCRepository errorConditionCountRepository;

    @Autowired
    private WOCRepository workOrderConditionRepository;

    @Autowired
    private PSRepository pauseStatusRepository;

    @Scheduled(fixedDelayString = "${wCATrigger.polling.frequency}", initialDelayString = "${wCATrigger.initial.delay}")
    public void execute() {
        try {
            final PauseStatus pause = pauseStatusRepository.findByPSName(PSName.PAUSE);
            pauseCondition(pause, threshold); // threshold is defined elsewhere in the real class
        } catch (Exception e) {
            log.error("Exception occurred {}", e);
        }
    }

    private void pauseCondition(final PauseStatus pause, final Integer threshold) {
        WOTCondition wotCondition = workOrderConditionRepository.findById(1).get();
        wotCondition.setPauseStatus(pause);
        wotCondition.setIsUserAction(Boolean.FALSE);
        workOrderConditionRepository.save(wotCondition);
        conditionCount.setErrorCount(0); // conditionCount is defined elsewhere in the real class
        errorConditionCountRepository.save(conditionCount);
    }
}
I tried using saveAndFlush(), but then I got the following error:
[pool-2-thread-1]|ERROR|[o.s.s.s.TaskUtils$LoggingErrorHandler.handleError(96)]|Unexpected error occurred in scheduled task.
org.springframework.transaction.UnexpectedRollbackException: Transaction rolled back because it has been marked as rollback-only
at org.springframework.transaction.support.AbstractPlatformTransactionManager.processRollback(AbstractPlatformTransactionManager.java:873)
at org.springframework.transaction.support.AbstractPlatformTransactionManager.commit(AbstractPlatformTransactionManager.java:710)
Adding this solved my issue:
@Transactional(propagation = Propagation.REQUIRES_NEW)
It starts a fresh transaction for the scheduled method, so the save no longer participates in an outer transaction that has already been marked rollback-only.
Example:
@Scheduled(cron = "0/5 * * * * *")
@Transactional(propagation = Propagation.REQUIRES_NEW)
@Override
public void scheduleJob() {
    Message message = new Message();
    message.setMessageId(UUID.randomUUID().toString());
    message.setAccountId("accountId");
    message.setSent(2L);
    message.setFailed(2L);
    message.setDelivered(2L);
    // saves or updates the message_report table
    messageRepository.save(message);
}

Spring Batch: How to get the JobExecution object in a process listener

I have a requirement in my project: whatever exception occurs in the ItemProcessor needs to be stored in the JobExecution context, and at the end of the JobExecution a mail should be sent for the exceptional records. But how do I get the JobExecution object in the process listener?
I tried using @BeforeStep in the process listener, but the JobExecution object was null. Is there any way to get the JobExecution context in the process listener?
I found a solution in Spring Batch for the above issue: declare the process listener with @JobScope and access the job execution in the listener class. The code is below.
@Bean
@JobScope
public CaliberatedProcessorListener calibratedProcessorListener() {
    return new CaliberatedProcessorListener();
}

public class CaliberatedProcessorListener<T, S> implements ItemProcessListener<T, S> {

    @Value("#{jobExecution}")
    public JobExecution jobExecution;

    @Override
    public void beforeProcess(T calibratedProessorInPut) {
        // do nothing
    }

    @Override
    public void afterProcess(T calibratedProessorInput, S calibratedProessorOutPut) {
        // do nothing
    }

    @Override
    public void onProcessError(T item, Exception calibratedProcessorEx) {
        FtpEmailData ftpEmailData = (FtpEmailData) jobExecution.getExecutionContext().get("calDeviceBatchInfo");
        ftpEmailData.getExceptionList().add(new CalibratedDeviceException(calibratedProcessorEx.getMessage()));
    }
}
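As a side note, another common approach (a sketch under the assumption that the listener is registered on the step, so Spring Batch actually invokes the callback) is to also implement StepExecutionListener and keep a reference to the JobExecution from beforeStep:

import org.springframework.batch.core.ExitStatus;
import org.springframework.batch.core.ItemProcessListener;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.StepExecution;
import org.springframework.batch.core.StepExecutionListener;

public class MyProcessListener<T, S> implements ItemProcessListener<T, S>, StepExecutionListener {

    private JobExecution jobExecution;

    @Override
    public void beforeStep(StepExecution stepExecution) {
        // Called by Spring Batch before the step runs, when the listener
        // is registered on that step.
        this.jobExecution = stepExecution.getJobExecution();
    }

    @Override
    public ExitStatus afterStep(StepExecution stepExecution) {
        return stepExecution.getExitStatus();
    }

    @Override
    public void beforeProcess(T item) {
        // do nothing
    }

    @Override
    public void afterProcess(T item, S result) {
        // do nothing
    }

    @Override
    public void onProcessError(T item, Exception e) {
        // "lastProcessError" is a hypothetical key for illustration.
        jobExecution.getExecutionContext().put("lastProcessError", e.getMessage());
    }
}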

How to continually run a Spring Batch job

What is the best way to continually run a Spring Batch job? Do we need to write a shell script that loops and starts the job at predefined intervals? Or is there a way within Spring Batch itself to configure a job so that it repeats at either
1) pre-defined intervals
2) after the completion of each run
Thanks
If you want to launch your jobs periodically, you can combine Spring Scheduler and Spring Batch. Here is a concrete example: Spring Scheduler + Batch Example.
If you want to re-launch your job continually (are you sure?!), you can configure a JobExecutionListener on your job. Then, in the listener's afterJob(JobExecution jobExecution) method, you can relaunch the job, as sketched below.
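A minimal sketch of that idea, assuming an injected JobLauncher and Job (and ignoring the obvious risk of an endless loop):

import org.springframework.batch.core.BatchStatus;
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobExecution;
import org.springframework.batch.core.JobExecutionListener;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;

public class RelaunchingJobListener implements JobExecutionListener {

    private final JobLauncher jobLauncher;
    private final Job job;

    public RelaunchingJobListener(JobLauncher jobLauncher, Job job) {
        this.jobLauncher = jobLauncher;
        this.job = job;
    }

    @Override
    public void beforeJob(JobExecution jobExecution) {
        // nothing to do
    }

    @Override
    public void afterJob(JobExecution jobExecution) {
        if (jobExecution.getStatus() == BatchStatus.COMPLETED) {
            try {
                // Fresh JobParameters each run: Spring Batch refuses to start
                // the same job instance twice once it has completed.
                jobLauncher.run(job, new JobParametersBuilder()
                        .addLong("run.id", System.nanoTime())
                        .toJobParameters());
            } catch (Exception e) {
                // log and give up; retrying here would loop forever
            }
        }
    }
}

Note that with the default synchronous JobLauncher the relaunch happens on the same call stack, so in practice you would hand the launch off to an asynchronous launcher or task executor.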
I did something like this for importing emails, which I have to check periodically:
@SpringBootApplication
@EnableScheduling
public class ImportBillingFromEmailBatchRunner
{
    private static final Logger LOG = LoggerFactory.getLogger(ImportBillingFromEmailBatchRunner.class);

    public static void main(String[] args)
    {
        SpringApplication app = new SpringApplication(ImportBillingFromEmailBatchRunner.class);
        app.run(args);
    }

    @Bean
    BillingEmailCronService billingEmailCronService()
    {
        return new BillingEmailCronService();
    }
}
So the BillingEmailCronService takes care of the continuation:
public class BillingEmailCronService
{
    private static final Logger LOG = LoggerFactory.getLogger(BillingEmailCronService.class);

    @Autowired
    private JobLauncher jobLauncher;

    @Autowired
    private JobExplorer jobExplorer;

    @Autowired
    private JobRepository jobRepository;

    @Autowired
    private JobBuilderFactory jobBuilderFactory;

    @Autowired
    @Qualifier(BillingBatchConfig.QUALIFIER)
    private Step fetchBillingFromEmailsStep;

    @Scheduled(fixedDelay = 5000)
    public void run()
    {
        LOG.info("Processing emails with invoices...");
        try
        {
            Job job = createNewJob();
            JobParameters jobParameters = new JobParameters();
            jobLauncher.run(job, jobParameters);
        }
        catch (...)
        {
            // handle each exception (elided in the original)
        }
    }
}
Implement your createNewJob logic and try it out.
One easy way would be to configure a cron job in Unix, which will run the application at a specified interval.
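For example, a crontab entry along these lines (all paths are placeholders) would launch the packaged application every 15 minutes:

*/15 * * * * /usr/bin/java -jar /opt/batch/my-batch-app.jar >> /var/log/my-batch-app.log 2>&1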