Schedule a trigger every minute; if the job is still running, stand by and wait for the next trigger - quartz-scheduler

I need to schedule a trigger to fire every minute. If the job is still running when the next minute comes around, the trigger should not fire and should wait another minute before checking again; once the job has finished, the trigger should fire.
Thanks

In Quartz 2, you'll want to use the DisallowConcurrentExecution attribute on your job class. Then make sure you give the job an identity, using something like JobBuilder.Create&lt;MyJob&gt;().WithIdentity("SomeJobKey"), as DisallowConcurrentExecution uses the job's key to determine whether an instance of that job is already running.
[DisallowConcurrentExecution]
public class MyJob : IJob
{
    public void Execute(IJobExecutionContext context)
    {
        // ... do work; a second instance with the same job key will not
        //     start until this execution completes
    }
}
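For completeness, here is a minimal sketch of wiring that job up in Quartz.NET 2; the scheduler setup and the identifiers such as "SomeJobKey" and "SomeTriggerKey" are illustrative, not from the original post:
// Minimal sketch, assuming Quartz.NET 2.x.
IScheduler scheduler = new StdSchedulerFactory().GetScheduler();
scheduler.Start();

IJobDetail job = JobBuilder.Create<MyJob>()
    .WithIdentity("SomeJobKey")   // the key DisallowConcurrentExecution is scoped to
    .Build();

ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("SomeTriggerKey")
    .WithSimpleSchedule(x => x.WithIntervalInMinutes(1).RepeatForever())
    .StartNow()
    .Build();

scheduler.ScheduleJob(job, trigger);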

I didn't find anything about Monitor.Enter or something like that, thanks anyway.
The other answer is that the job should implement the 'StatefulJob' interface. As a StatefulJob, another instance will not run as long as one is already running.
Thanks again

IStatefulJob is the key here. Creating your own locking mechanisms may cause problems with the scheduler, as you are then interfering with its threading.
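For the older Quartz.NET 1.x API that this thread also refers to, a hedged sketch of the IStatefulJob variant looks like this (IStatefulJob is a marker interface extending IJob; the class name is illustrative):
// Sketch for Quartz.NET 1.x: marking the job stateful prevents concurrent runs
// of the same JobDetail.
public class MyJob : IStatefulJob
{
    public void Execute(JobExecutionContext context)
    {
        // do work; another instance of this JobDetail will not start
        // until this execution finishes
    }
}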

If you're using Quartz.NET, you can do something like this in your Execute method:
// Quartz creates a new job instance for each execution, so the lock object
// must be static to be shared across executions.
static readonly object execution_lock = new object();

public void Execute(JobExecutionContext context) {
    if (!Monitor.TryEnter(execution_lock, 1)) {
        return; // a previous execution is still running, so skip this one
    }
    try {
        // do work
    } finally {
        Monitor.Exit(execution_lock);
    }
}
I pulled this off the top of my head, so maybe some names are wrong, but that's the idea: take a lock on some object while you're executing, and if the lock is already held when the next execution starts, a previous job is still running and you simply return.
EDIT: the Monitor class is in the System.Threading namespace

If you are using the Spring Quartz integration, you can set the 'concurrent' property to 'false' on MethodInvokingJobDetailFactoryBean:
<bean id="positionFeedFileProcessorJobDetail" class="org.springframework.scheduling.quartz.MethodInvokingJobDetailFactoryBean">
    <property name="targetObject" ref="xxxx" />
    <property name="targetMethod" value="xxxx" />
    <property name="concurrent" value="false" /> <!-- This will not run the job if the previous method is not yet finished -->
</bean>

Related

Spring Batch JSR 352: how to prevent a partitioned job from leaving threads alive which prevent the process from ending

Let me explain how my app is set up. I have a standalone, command-line-started app that runs a main which in turn calls start on a JobOperator, passing the appropriate params. I understand that start is an async call, so once I call start, unless I block somehow in my main, the process exits.
The problem I have run into is that when I run a partitioned job, it appears to leave a few threads alive, which prevents the process from ending. When I run a non-partitioned job, the process ends normally once the job has completed.
Is this normal and/or expected behavior? Is there a way to tell the partition threads to die? It seems that the partition threads are blocked waiting on something once the job has completed, when they should not be.
I know that I could poll for the batch status in main and possibly end it there, but as I stated in another question, this adds a ton of chatter to the DB and is not ideal.
An example of my job spec
<job id="partitionTest" xmlns="http://xmlns.jcp.org/xml/ns/javaee" version="1.0">
<step id="onlyStep">
<partition>
<plan partitions="2">
<properties partition="0">
<property name="partitionNumber" value="1"></property>
</properties>
<properties partition="1">
<property name="partitionNumber" value="2"></property>
</properties>
</plan>
</partition>
<chunk item-count="2">
<reader id="reader" ref="DelimitedFlatFileReader">
<properties>
<!-- Reads in from file Test.csv -->
<property name="fileNameAndPath" value="#{jobParameters['inputPath']}/CSVInput#{partitionPlan['partitionNumber']}.csv" />
<property name="fieldNames" value="firstName, lastName, city" />
<property name="fullyQualifiedTargetClass" value="com.test.transactionaltest.Member" />
</properties>
</reader>
<processor ref="com.test.partitiontest.Processor" />
<writer ref="FlatFileWriter" >
<properties>
<property name="appendOn" value="true"/>
<property name="fileNameAndPath" value="#{jobParameters['outputPath']}/PartitionOutput.txt" />
<property name="fullyQualifiedTargetClass" value="com.test.transactionaltest.Member" />
</properties>
</writer>
</chunk>
</step>
</job>
Edit:
OK, after reading a bit more about this issue and looking into the Spring Batch code, it appears there is a bug (at least in my opinion) in JsrPartitionHandler. Specifically, the handle method creates a ThreadPoolTaskExecutor locally, but that thread pool is never cleaned up properly. A shutdown/destroy should be called before that method returns in order to perform the cleanup; otherwise the threads are left alive while the executor goes out of scope.
Please correct me if I am wrong here, but that definitely seems to be the problem.
I am going to try to make a change for it and see how it plays out. I'll update after I have done some testing.
I have confirmed this issue to be a bug (still in my opinion, at the moment) in the Spring Batch core lib.
I have created a ticket over at the Spring Batch JIRA site. There is a simple Java project attached to the ticket that confirms the issue I am seeing. If anyone else runs into the problem, they should refer to that ticket.
I have found a temporary workaround that just uses a wait/notify scheme, and once it is in place the pooled threads seem to shut down. I'll add each of the classes/code and try to explain what I did.
In the main thread/class, this code lived in the main method (or a method called from main):
while (!ThreadNotifier.instance(this).getNotify()) {
    try {
        synchronized (this) {
            System.out.println("WAIT THREAD IS =======" + Thread.currentThread().getName());
            wait();
        }
    } catch (InterruptedException e) {
        // TODO Auto-generated catch block
        e.printStackTrace();
    }
}
This is the ThreadNotifier class
// Simple singleton that hands the main thread's monitor object to the job
// listener so it can be notified when the job completes.
public class ThreadNotifier {

    private static ThreadNotifier tn = null;
    private boolean notification = false;
    private Object o;

    private ThreadNotifier(Object o) {
        this.o = o;
    }

    public static ThreadNotifier instance(Object o) {
        if (tn == null) {
            tn = new ThreadNotifier(o);
        }
        return tn;
    }

    public void setNotify(boolean value) {
        notification = value;
        synchronized (o) {
            System.out.println("NOTIFY THREAD IS =======" + Thread.currentThread().getName());
            o.notify();
        }
    }

    public boolean getNotify() {
        return notification;
    }
}
And lastly, this is the job listener that I used to send the notification back:
public class PartitionWorkAround implements JobListener {

    @Override
    public void beforeJob() throws Exception {
        // TODO Auto-generated method stub
    }

    @Override
    public void afterJob() throws Exception {
        ThreadNotifier.instance(null).setNotify(true);
    }
}
This is the best I could come up with until the issue is fixed. For reference, I used what I learned about guarded blocks to figure out a way to do this.
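As an aside (not part of the original workaround), the same guarded-block idea can be expressed with a java.util.concurrent.CountDownLatch, which avoids hand-rolled wait/notify; the class and method names below are illustrative:
import java.util.concurrent.CountDownLatch;

// Illustrative alternative: main() awaits the latch instead of calling wait()/notify().
public class JobCompletionLatch {

    private static final CountDownLatch LATCH = new CountDownLatch(1);

    // Called from main() after JobOperator.start(...) to block until the job ends.
    public static void awaitCompletion() throws InterruptedException {
        LATCH.await();
    }

    // Called from the JobListener's afterJob() to release the main thread.
    public static void signalCompletion() {
        LATCH.countDown();
    }
}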

Spring Batch JSR 352: manage processor skip outside/before the skip listener

I am trying to find a way to manage a skip scenario in the process listener (it could be the read or write listener as well). What I have found is that the skip listener seems to be executed after the process listener's onProcessError method. This means that I might be handling the error in some way without knowing that it is an exception that will be skipped.
Is there some way to know, outside the skip listener, that a particular exception is being skipped? Something that could be pulled into the process listener or possibly elsewhere.
The best approach I found was to add a property to the step and then wire the step context in where I needed it.
<step id="firstStep">
<properties> <property name="skippableExceptions" value="java.lang.IllegalArgumentException"/> </properties>
</step>
This is not a perfect solution, but the skippable exceptions only seem to be set in StepFactoryBean and the Tasklet, and they are not directly accessible.
The code in my listeners:
@Inject
StepContext stepContext;

// ...

Properties p = stepContext.getProperties();
String exceptions = p.getProperty("skippableExceptions");
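To illustrate how that property might be consumed, here is a hedged sketch of an ItemProcessListener that checks the configured exception types in onProcessError; the class name and the comma-separated value format are assumptions, not from the original post:
import java.util.Properties;
import javax.batch.api.chunk.listener.ItemProcessListener;
import javax.batch.runtime.context.StepContext;
import javax.inject.Inject;

public class SkipAwareProcessListener implements ItemProcessListener {

    @Inject
    StepContext stepContext;

    @Override
    public void beforeProcess(Object item) throws Exception { }

    @Override
    public void afterProcess(Object item, Object result) throws Exception { }

    @Override
    public void onProcessError(Object item, Exception ex) throws Exception {
        // Read the step-level property and bail out early if the exception is
        // one of the types the step is configured to skip.
        Properties p = stepContext.getProperties();
        String exceptions = p.getProperty("skippableExceptions");
        if (exceptions != null) {
            for (String className : exceptions.split(",")) {
                if (Class.forName(className.trim()).isInstance(ex)) {
                    return; // skippable: let the skip listener deal with it
                }
            }
        }
        // otherwise treat it as a genuine processing failure
    }
}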

Spring Batch pause and then continue

I'm writing a job that will read x number of rows from an Excel file, and then I'd like it to pause for an hour before it continues with the next x rows.
How do I do this?
I have a job.xml file which contains the following. The subscriptionDiscoverer fetches the file and passes it over to the processor. The subscriptionWriter should write another file when the processor is done.
<job id="subscriptionJob" xmlns="http://www.springframework.org/schema/batch" incrementer="jobParamsIncrementer">
<validator ref="jobParamsValidator"/>
<step id="readFile">
<tasklet>
<chunk reader="subscriptionDiscoverer" processor="subscriptionProcessor" writer="subscriptionWriter" commit-interval="1" />
</tasklet>
</step>
</job>
Is there some kind of timer I could use, or some kind of flow structure? It's a large file of about 160,000 rows that needs to be processed.
I hope someone has a solution they would like to share.
Thank you!
I'm thinking of two possible approaches for you to start with:
Stop the job, then restart it (after an hour) at the last position. You can start by taking a look at how to change the BatchStatus to signal your intent to stop the job. See http://docs.spring.io/spring-batch/2.0.x/cases/pause.html, or look at how Spring Batch Admin implements its way of communicating the PAUSE flag (http://docs.spring.io/spring-batch-admin/reference/reference.xhtml). You may need to implement some persistence to store the position (row number) so the job knows where to start processing again. You can use a scheduler as well to restart the job.
-or-
Add a ChunkListener and implement the following in afterChunk(ChunkContext context): check whether x number of rows has been read so far, and if so, apply your pause mechanism (e.g., a simple Thread.sleep, or look for a more robust way of pausing the step). To check the number of rows read, you may use StepExecution.getReadCount() from ChunkContext.getStepContext().getStepExecution(); a sketch follows below.
Do note that afterChunk is called outside the transaction as indicated in the javadoc:
Callback after the chunk is executed, outside the transaction.
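To make approach 2 concrete, here is a hedged sketch of such a ChunkListener. It assumes a recent Spring Batch ChunkListener signature (3.x), and the row threshold and the crude Thread.sleep pause are illustrative, as suggested above:
import org.springframework.batch.core.ChunkListener;
import org.springframework.batch.core.scope.context.ChunkContext;

public class PauseAfterRowsChunkListener implements ChunkListener {

    private static final int ROWS_BEFORE_PAUSE = 10000;         // "x" rows, illustrative
    private static final long PAUSE_MILLIS = 60L * 60L * 1000L; // one hour

    private long nextPauseAt = ROWS_BEFORE_PAUSE;

    @Override
    public void beforeChunk(ChunkContext context) { }

    @Override
    public void afterChunk(ChunkContext context) {
        // afterChunk runs outside the chunk transaction, so sleeping here
        // does not hold a transaction open.
        int readCount = context.getStepContext().getStepExecution().getReadCount();
        if (readCount >= nextPauseAt) {
            nextPauseAt += ROWS_BEFORE_PAUSE;
            try {
                Thread.sleep(PAUSE_MILLIS); // crude pause; a scheduled stop/restart is more robust
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }
    }

    @Override
    public void afterChunkError(ChunkContext context) { }
}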

Quartz.net tracking and misfiring

I have a few questions regarding Quartz.NET.
What is it that keeps track of whether there has been a misfire situation in Quartz.NET?
What happens in the following scenarios:
If a job runs but cannot finish due to some bug, does that count as a misfire or not?
What happens if I republish the solution; is the tracking reset?
Is there a way to receive information on what the scheduler has done and not been able to do?
I have the following code in my Run method:
IJobDetail dailyUserMailJob = new JobDetailImpl("DailyUserMailJob", null, typeof(Jobs.TestJob));
ITrigger trigger = TriggerBuilder.Create()
.WithIdentity("trigger1", "group1")
.WithCronSchedule("0 0 4 1 * ?", x => x.WithMisfireHandlingInstructionFireAndProceed())
.Build();
this.Scheduler.ScheduleJob(dailyUserMailJob, trigger);
this.Scheduler.Start();
The job is supposed to run on the first of every month at 4 am.
When testing, I have set the system clock so that the job is missed for one month. According to the documentation, when using WithMisfireHandlingInstructionFireAndProceed the job should be run as the first thing that happens, but it doesn't. Is there something wrong with the code, or could there be some other reason the job is not run when using WithMisfireHandlingInstructionFireAndProceed()?
If a job is missed, there is logic to bring it back. However, there is a "window" on how far back to go.
<add key="quartz.jobStore.misfireThreshold" value="60000"/>
You can increase this value.
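For reference, a hedged sketch of supplying the same threshold programmatically in Quartz.NET 2.x instead of via the config file (the one-hour value is just an example; requires System.Collections.Specialized, Quartz and Quartz.Impl):
// Sketch: pass the misfire threshold to the scheduler factory in code.
var properties = new NameValueCollection();
properties["quartz.jobStore.misfireThreshold"] = "3600000"; // 1 hour, in milliseconds

ISchedulerFactory factory = new StdSchedulerFactory(properties);
IScheduler scheduler = factory.GetScheduler();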
If you have an AdoJobStore, misfires are persisted. Thus, if the power goes out, you can recover from misfires when restarting.
If you have a RAMJobStore and the power goes out, everything was in memory to begin with, so you won't get misfire handling; the scheduler state is lost along with the memory.
If you use SQL Server (AdoJobStore) and put a Profiler/Trace on it, you'll see the engine poll for misfires, with a "go back this far in time" window based on the misfireThreshold.
See this link for more detailed info (it includes a note on withMisfireHandlingInstructionFireAndProceed):
http://nurkiewicz.blogspot.com/2012/04/quartz-scheduler-misfire-instructions.html

Quartz.Net - delay a simple trigger to start

I have a few jobs set up in Quartz to run at set intervals. The problem, though, is that when the service starts it tries to start all the jobs at once... is there a way to add a delay to each job using the .xml config?
Here are 2 job trigger examples:
<simple>
    <name>ProductSaleInTrigger</name>
    <group>Jobs</group>
    <description>Triggers the ProductSaleIn job</description>
    <misfire-instruction>SmartPolicy</misfire-instruction>
    <volatile>false</volatile>
    <job-name>ProductSaleIn</job-name>
    <job-group>Jobs</job-group>
    <repeat-count>RepeatIndefinitely</repeat-count>
    <repeat-interval>86400000</repeat-interval>
</simple>
<simple>
    <name>CustomersOutTrigger</name>
    <group>Jobs</group>
    <description>Triggers the CustomersOut job</description>
    <misfire-instruction>SmartPolicy</misfire-instruction>
    <volatile>false</volatile>
    <job-name>CustomersOut</job-name>
    <job-group>Jobs</job-group>
    <repeat-count>RepeatIndefinitely</repeat-count>
    <repeat-interval>43200000</repeat-interval>
</simple>
As you can see, there are 2 triggers: the first repeats every day, the second twice a day.
My issue is that I want either the first or the second job to start a few minutes after the other... (because in the end they are both accessing the same API, and I don't want to overload it with requests).
Is there a repeat-delay or priority property? I can't find any documentation saying so.
I know you are doing this via XML, but in code you can set StartTimeUtc to delay the trigger by, say, 30 seconds, like this:
trigger.StartTimeUtc = DateTime.UtcNow.AddSeconds(30);
This isn't exactly a perfect answer for your XML file - but via code you can use the StartAt extension method when building your trigger.
/* calculate the next time you want your job to run - in this case top of the next hour */
var hourFromNow = DateTime.UtcNow.AddHours(1);
var topOfNextHour = new DateTime(hourFromNow.Year, hourFromNow.Month, hourFromNow.Day, hourFromNow.Hour, 0, 0);

/* build your trigger and call 'StartAt' */
ITrigger trigger = TriggerBuilder.Create()
    .WithIdentity("Delayed Job")
    .WithSimpleSchedule(x => x.WithIntervalInSeconds(60).RepeatForever())
    .StartAt(new DateTimeOffset(topOfNextHour))
    .Build();
You've probably already seen this by now, but it's possible to chain jobs, though it's not supported out of the box.
http://quartznet.sourceforge.net/faq.html#howtochainjobs