I have an application with a job handler in which many jobs run in parallel under the Quartz scheduler.
The problem is that when any job throws an exception back to the scheduler, that job is not re-triggered afterwards; all the other jobs keep working fine, unaffected.
I have tried the refireImmediately() method on the JobExecutionException (e2), but it is not working here.
I am using the SimpleTrigger class to manage all the jobs.
Kindly assist.
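One likely cause (an assumption, since the full job code isn't shown): refireImmediately() is only the getter on JobExecutionException. To ask the scheduler to re-run the job, you have to call setRefireImmediately(true) on the exception before throwing it, roughly like this sketch (doWork() is a hypothetical placeholder for the job's business logic):

```java
import org.quartz.Job;
import org.quartz.JobExecutionContext;
import org.quartz.JobExecutionException;

public class RetryingJob implements Job {
    @Override
    public void execute(JobExecutionContext context) throws JobExecutionException {
        try {
            doWork(); // hypothetical business logic
        } catch (Exception e) {
            JobExecutionException jee = new JobExecutionException(e);
            // Request that the scheduler fire this job again immediately.
            jee.setRefireImmediately(true);
            throw jee;
        }
    }

    private void doWork() { /* ... */ }
}
```

Be careful with unconditional re-fires: if the job fails deterministically, this loops forever, so a retry counter in the JobDataMap is usually added alongside it.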
Some jobs are scheduled using either a SimpleTrigger or a CronTrigger, and I now want to unschedule and delete them. A job can be running or may have already completed its execution. Deleting an unscheduled or already-executed job has no ill effect, but what happens to a currently running job if it is unscheduled with unscheduleJob() or deleted directly with deleteJob() in Quartz?
And if a running job would be halted mid-execution when unscheduleJob() or deleteJob() is called, is there any way to let the job complete its current execution before unscheduling or deleting it, to avoid any malfunctioning or bad data?
I tried checking for conflicting jobs and also made use of a SchedulerListener, but didn't get any useful information.
Thanks in advance!
I am creating multiple SlingJobs on the fly using org.apache.sling.commons.scheduler.Scheduler OSGi service in AEM.
i.e. scheduler.schedule(Runnable, ScheduleOptions);
I have a requirement that these Sling jobs run only once, so I am using ScheduleOptions.AT(Date date, int times, long period) ScheduleOptions Docs
and passing times=1 as a parameter.
(Also, what is the period parameter?)
The Job successfully runs only once.
My question is: am I supposed to keep track of this job by name and unschedule it using Scheduler.unschedule(String jobName) after it has finished running?
Will completed Sling jobs that are not unscheduled consume memory on the AEM server?
Will these completed but not-unscheduled jobs cause my AEM server to slow down and later require some purge activity as maintenance?
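For reference, the call described above can be sketched as below. On the period question: it is the interval in seconds between repeated executions when times > 1; with times = 1 the job fires once and the period is effectively irrelevant, though it must still be a positive value. The job name here is a made-up example:

```java
import java.util.Date;
import org.apache.sling.commons.scheduler.ScheduleOptions;
import org.apache.sling.commons.scheduler.Scheduler;

public class OneShotScheduling {
    public static void scheduleOnce(Scheduler scheduler, Runnable task, Date when) {
        // times = 1 -> fire once; period = 1 second (unused for a single run).
        ScheduleOptions options = scheduler.AT(when, 1, 1)
                .name("my-one-shot-job"); // hypothetical stable name for later unscheduling
        scheduler.schedule(task, options);
    }
}
```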
According to https://sling.apache.org/documentation/bundles/apache-sling-eventing-and-job-handling.html#scheduled-jobs
Internally the scheduled Jobs use the Commons Scheduler Service. But in addition they are persisted (by default below /var/eventing/scheduled-jobs) and survive therefore even server restarts. When the scheduled time is reached, the job is automatically added as regular Sling Job through the JobManager.
I had a problem with scheduled jobs before (they were triggered on a daily basis). When the server was restarted, the old scheduled jobs were not un-persisted, and a new job doing the same action was scheduled (the job was scheduled in the @Activate method). As a result, I got several jobs doing the same action at the scheduled time, so I had to unschedule them in the @Deactivate method.
You can run an experiment and make sure there are no duplicated jobs under /var/eventing/scheduled-jobs
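The pattern described above can be sketched roughly like this (a hedged sketch: the component, job name, and cron expression are assumptions; the point is scheduling under a fixed name in @Activate and unscheduling it in @Deactivate so restarts don't accumulate duplicates):

```java
import org.apache.sling.commons.scheduler.Scheduler;
import org.osgi.service.component.annotations.Activate;
import org.osgi.service.component.annotations.Component;
import org.osgi.service.component.annotations.Deactivate;
import org.osgi.service.component.annotations.Reference;

@Component(immediate = true)
public class DailyJobComponent {

    private static final String JOB_NAME = "my-daily-job"; // hypothetical name

    @Reference
    private Scheduler scheduler;

    @Activate
    protected void activate() {
        // EXPR builds cron-based options; the stable name lets us unschedule later.
        scheduler.schedule((Runnable) this::runJob,
                scheduler.EXPR("0 0 2 * * ?").name(JOB_NAME));
    }

    @Deactivate
    protected void deactivate() {
        scheduler.unschedule(JOB_NAME);
    }

    private void runJob() { /* daily work */ }
}
```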
Why is Rundeck not launching scheduled Spark jobs while the previous job is still executing?
Rundeck skips the jobs set to launch during the execution of the previous job, and only after that execution completes does it launch a new job based on the schedule.
But I want to launch a scheduled job even if the previous job is still executing.
Check your workflow strategy; here is an explanation of it:
https://www.rundeck.com/blog/howto-controlling-how-and-where-rundeck-jobs-execute
You can design a workflow based on the "Parallel" strategy to launch the jobs simultaneously on your node.
Example using the parallel strategy with a parent job.
Example jobs:
Job one, Job two and Parent Job (using parallel strategy).
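The parent-job layout above could look roughly like this as a Rundeck job definition in YAML (a hedged sketch: job names and the group are assumptions, and fields omitted here are filled in by Rundeck on import):

```yaml
# Hypothetical parent job that runs "Job one" and "Job two" in parallel.
- name: Parent Job
  group: examples
  description: Launches the two child jobs simultaneously.
  sequence:
    keepgoing: false
    strategy: parallel      # run the steps below at the same time
    commands:
      - jobref:
          group: examples
          name: Job one
      - jobref:
          group: examples
          name: Job two
```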
I am using the Quartz scheduler to execute jobs. When I schedule jobs for a future time, they get triggered at the right time but immediately go into the failed state, without anything appearing in the scheduler logs.
I could not find the root cause, but the issue was solved by pointing the scheduler to a freshly created Quartz database.
The likely reason is that the original database had become corrupted in some way.
If I have a job and from that job I create some threads, what happens when I call scheduler.shutdown(true)?
Will the scheduler wait for all of my threads to finish or not?
Quartz 1.8.1 API docs:
Halts the Scheduler's firing of Triggers, and cleans up all resources associated with the Scheduler.
Parameters:
waitForJobsToComplete - if true the scheduler will not allow this method to return until all currently executing jobs have completed.
Quartz neither knows nor cares about any threads spawned by your job; it simply waits for the job's execute() method to return. If your job spawns new threads and then exits, then as far as Quartz is concerned, it has finished.
If your job needs to wait for its spawned threads to complete, use something like an ExecutorService (see the javadoc for java.util.concurrent), which lets the job thread wait for its spawned threads to finish. If you are using raw Java threads, use Thread.join().
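A minimal sketch of that pattern, in plain Java with no Quartz dependency: the body of execute() would submit its work to an ExecutorService and block until everything completes, so scheduler.shutdown(true), which waits for execute() to return, effectively waits for the spawned threads too:

```java
import java.util.List;
import java.util.concurrent.Callable;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class WaitingJobBody {
    // Runs the given tasks on worker threads and returns only when all are done.
    public static int runAndWait(List<Callable<Integer>> tasks) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        try {
            int sum = 0;
            // invokeAll blocks until every task has finished (or failed).
            for (Future<Integer> f : pool.invokeAll(tasks)) {
                sum += f.get(); // get() rethrows any exception from the task
            }
            return sum;
        } finally {
            pool.shutdown(); // the "job" is only finished once every task has run
        }
    }

    public static void main(String[] args) throws Exception {
        List<Callable<Integer>> tasks = List.of(() -> 1, () -> 2, () -> 3);
        System.out.println(runAndWait(tasks)); // prints 6
    }
}
```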