We are using Airflow 1.7.1.3 with the CeleryExecutor. The Airflow scheduler is set up as a systemd service with --num_runs set to 10, so that it stops and is restarted after every 10th run (as suggested here).
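Roughly, the relevant part of the systemd unit looks like this (a minimal sketch; paths, user, and environment settings are omitted, and Restart=always is what brings the scheduler back after it exits):
[Unit]
Description=Airflow scheduler

[Service]
ExecStart=/usr/bin/env airflow scheduler --num_runs 10
Restart=always
RestartSec=5s

[Install]
WantedBy=multi-user.target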
We noticed that roughly every 9th scheduler loop takes considerably longer (about 160 seconds or more) compared to a regular loop of ~16 seconds. According to the logs, this is the loop in which the scheduler fills the DagBag by refreshing all the DAGs. This time increases as the number of DAGs/tasks in our Airflow installation grows.
Most of our tasks are very small and take just a few seconds to run, but they get stuck in the "undefined" state and do not get queued while the scheduler is busy "filling up the DagBag". In the meantime the Celery workers sit idle. We have tried the following:
increased celeryd_concurrency (which gave us the ability to send more tasks to the workers)
increased non_pooled_task_slot_count (so that more tasks can get queued)
also increased parallelism and dag_concurrency (see the example airflow.cfg sketch below)
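For context, these settings live in airflow.cfg; the values below are illustrative rather than our exact production numbers:
[celery]
# example: more task slots per Celery worker
celeryd_concurrency = 32

[core]
# example: max task instances running across the whole installation
parallelism = 64
# example: max running task instances per DAG
dag_concurrency = 32
# example: slots for tasks that do not use a pool
non_pooled_task_slot_count = 128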
All these measures allow more tasks to be launched, but only if the scheduler actually queues them, which it does not do effectively while it is in that refresh stage. Here are the timings for each scheduler loop:
[2016-11-07 23:18:28,106] {jobs.py:680} INFO - Starting the scheduler
[2016-11-07 23:21:26,515] {jobs.py:744} INFO - Loop took: 16.422769 seconds
[2016-11-07 23:21:46,186] {jobs.py:744} INFO - Loop took: 16.058172 seconds
[2016-11-07 23:22:02,800] {jobs.py:744} INFO - Loop took: 14.410493 seconds
[2016-11-07 23:22:21,310] {jobs.py:744} INFO - Loop took: 16.275255 seconds
[2016-11-07 23:22:41,470] {jobs.py:744} INFO - Loop took: 17.93543 seconds
[2016-11-07 23:22:59,176] {jobs.py:744} INFO - Loop took: 15.484449 seconds
[2016-11-07 23:23:17,455] {jobs.py:744} INFO - Loop took: 16.130971 seconds
[2016-11-07 23:23:35,948] {jobs.py:744} INFO - Loop took: 16.311113 seconds
[2016-11-07 23:23:55,043] {jobs.py:744} INFO - Loop took: 16.830728 seconds
[2016-11-07 23:26:57,044] {jobs.py:744} INFO - Loop took: 179.613778 seconds
[2016-11-07 23:27:09,328] {jobs.py:680} INFO - Starting the scheduler
[2016-11-07 23:29:57,988] {jobs.py:744} INFO - Loop took: 16.881139 seconds
[2016-11-07 23:30:17,584] {jobs.py:744} INFO - Loop took: 17.021958 seconds
[2016-11-07 23:30:36,062] {jobs.py:744} INFO - Loop took: 16.148552 seconds
[2016-11-07 23:30:56,975] {jobs.py:744} INFO - Loop took: 18.532384 seconds
[2016-11-07 23:31:16,214] {jobs.py:744} INFO - Loop took: 16.907037 seconds
[2016-11-07 23:31:39,060] {jobs.py:744} INFO - Loop took: 15.637057 seconds
[2016-11-07 23:31:56,231] {jobs.py:744} INFO - Loop took: 15.003683 seconds
[2016-11-07 23:32:13,618] {jobs.py:744} INFO - Loop took: 15.215657 seconds
[2016-11-07 23:32:35,738] {jobs.py:744} INFO - Loop took: 19.938704 seconds
[2016-11-07 23:35:33,905] {jobs.py:744} INFO - Loop took: 176.030812 seconds
[2016-11-07 23:35:45,908] {jobs.py:680} INFO - Starting the scheduler
Questions:
Is --num_runs still required in version 1.7.1.3 (as mentioned in Common Pitfalls: https://cwiki.apache.org/confluence/display/AIRFLOW/Common+Pitfalls)? Do we still have to restart the scheduler after every n runs?
Would increasing the max_threads value (to launch multiple scheduler threads) help? I think the default is 2.
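For reference, this is the setting we are considering changing (section and option name as we understand them for this Airflow version; the value is only an example):
[scheduler]
# example: run more scheduler threads than the default of 2
max_threads = 4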
Thanks for any help.
Related
I am processing messages from IBM MQ with a Scala program. It was working fine but stopped working without any code change.
This timeout occurs from time to time, without any specific pattern.
I run the application like this:
spark-submit --conf spark.streaming.driver.writeAheadLog.allowBatching=true --conf spark.streaming.driver.writeAheadLog.batchingTimeout=15000 --class com.ibm.spark.streaming.mq.SparkMQExample --master yarn --deploy-mode client --num-executors 1 $jar_file_loc lots of args here >> script.out.log 2>> script.err.log < /dev/null
I tried two properties:
spark.streaming.driver.writeAheadLog.batchingTimeout 15000
spark.streaming.driver.writeAheadLog.allowBatching true
See error:
2021-12-14 14:13:05 WARN ReceivedBlockTracker:90 - Exception thrown while writing record: BatchAllocationEvent(1639487580000 ms,AllocatedBlocks(Map(0 -> Queue()))) to the WriteAheadLog.
java.util.concurrent.TimeoutException: Futures timed out after [5000 milliseconds]
at scala.concurrent.impl.Promise$DefaultPromise.ready(Promise.scala:223)
at scala.concurrent.impl.Promise$DefaultPromise.result(Promise.scala:227)
at org.apache.spark.util.ThreadUtils$.awaitResult(ThreadUtils.scala:220)
at org.apache.spark.streaming.util.BatchedWriteAheadLog.write(BatchedWriteAheadLog.scala:84)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.writeToLog(ReceivedBlockTracker.scala:238)
at org.apache.spark.streaming.scheduler.ReceivedBlockTracker.allocateBlocksToBatch(ReceivedBlockTracker.scala:118)
at org.apache.spark.streaming.scheduler.ReceiverTracker.allocateBlocksToBatch(ReceiverTracker.scala:209)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:248)
at org.apache.spark.streaming.scheduler.JobGenerator$$anonfun$3.apply(JobGenerator.scala:247)
at scala.util.Try$.apply(Try.scala:192)
at org.apache.spark.streaming.scheduler.JobGenerator.generateJobs(JobGenerator.scala:247)
at org.apache.spark.streaming.scheduler.JobGenerator.org$apache$spark$streaming$scheduler$JobGenerator$$processEvent(JobGenerator.scala:183)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:89)
at org.apache.spark.streaming.scheduler.JobGenerator$$anon$1.onReceive(JobGenerator.scala:88)
at org.apache.spark.util.EventLoop$$anon$1.run(EventLoop.scala:49)
2021-12-14 14:13:05 INFO ReceivedBlockTracker:57 - Possibly processed batch 1639487580000 ms needs to be processed again in WAL recovery
2021-12-14 14:13:05 INFO JobScheduler:57 - Added jobs for time 1639487580000 ms
2021-12-14 14:13:05 INFO JobGenerator:57 - Checkpointing graph for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updating checkpoint data for time 1639487580000 ms
rdd is empty
2021-12-14 14:13:05 INFO JobScheduler:57 - Starting job streaming job 1639487580000 ms.0 from job set of time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updated checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO JobScheduler:57 - Finished job streaming job 1639487580000 ms.0 from job set of time 1639487580000 ms
2021-12-14 14:13:05 INFO JobScheduler:57 - Total delay: 5.011 s for time 1639487580000 ms (execution: 0.001 s)
2021-12-14 14:13:05 INFO CheckpointWriter:57 - Submitted checkpoint of time 1639487580000 ms to writer queue
2021-12-14 14:13:05 INFO BlockRDD:57 - Removing RDD 284 from persistence list
2021-12-14 14:13:05 INFO PluggableInputDStream:57 - Removing blocks of RDD BlockRDD[284] at receiverStream at JmsStreamUtils.scala:64 of time 1639487580000 ms
2021-12-14 14:13:05 INFO BlockManager:57 - Removing RDD 284
2021-12-14 14:13:05 INFO JobGenerator:57 - Checkpointing graph for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updating checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO DStreamGraph:57 - Updated checkpoint data for time 1639487580000 ms
2021-12-14 14:13:05 INFO CheckpointWriter:57 - Submitted checkpoint of time 1639487580000 ms to writer queue
Any kind of information would be useful. Thank you!
I'm having a problem with a script that executes a parfor-loop; I hope you can help me with it.
I didn't have this problem before, and I don't think I changed anything that could cause it.
The problem is that the parfor-loop restarts after the parallel pool with 4 workers starts and the first 4 iterations execute. This happens once, and then all the iterations execute normally, as they should.
Here is my code, simplified in order to show this problem:
parfor loopVariable = 1 : 21
    fprintf('%s - Running iteration %i/%i \n', datestr(datetime), loopVariable, 21)
    *statements*
end
And this is the output I get; note that the first 4 iterations are repeated:
Starting parallel pool (parpool) using the 'local' profile ...
connected to 4 workers.
04-May-2020 11:43:21 - Running iteration 1/21
04-May-2020 11:43:21 - Running iteration 2/21
04-May-2020 11:43:21 - Running iteration 4/21
04-May-2020 11:43:21 - Running iteration 7/21
Analyzing and transferring files to the workers ...done.
04-May-2020 15:01:12 - Running iteration 7/21
04-May-2020 15:01:12 - Running iteration 1/21
04-May-2020 15:01:12 - Running iteration 2/21
04-May-2020 15:01:12 - Running iteration 4/21
04-May-2020 15:24:29 - Running iteration 3/21
04-May-2020 16:21:16 - Running iteration 6/21
04-May-2020 16:12:52 - Running iteration 13/21
04-May-2020 16:20:32 - Running iteration 10/21
04-May-2020 18:34:27 - Running iteration 12/21
04-May-2020 18:39:20 - Running iteration 9/21
04-May-2020 20:33:04 - Running iteration 5/21
04-May-2020 20:50:08 - Running iteration 11/21
04-May-2020 21:07:43 - Running iteration 8/21
04-May-2020 22:42:34 - Running iteration 15/21
05-May-2020 01:09:18 - Running iteration 14/21
04-May-2020 23:05:16 - Running iteration 18/21
04-May-2020 23:53:35 - Running iteration 19/21
05-May-2020 01:50:12 - Running iteration 17/21
05-May-2020 04:40:23 - Running iteration 16/21
05-May-2020 01:52:47 - Running iteration 21/21
05-May-2020 03:34:10 - Running iteration 20/21
I don't know if this is relevant, but I'm running the script remotely using:
nohup matlab -nodisplay -nosplash -r scriptFile -logfile outputFile.txt < /dev/null &
Thanks in advance for the help.
The segment of code you show is correct. Are you sure the problem isn't in some other part of the code, or in some of the *statements*?
>> parfor loopVariable = 1 : 21
fprintf('%s - Running iteration %i/%i \n', datestr(datetime), loopVariable, 21)
end
Starting parallel pool (parpool) using the 'local' profile ...
Connected to the parallel pool (number of workers: 4).
05-May-2020 11:10:56 - Running iteration 1/21
05-May-2020 11:10:56 - Running iteration 6/21
05-May-2020 11:10:56 - Running iteration 5/21
05-May-2020 11:10:56 - Running iteration 15/21
05-May-2020 11:10:56 - Running iteration 19/21
05-May-2020 11:10:56 - Running iteration 2/21
05-May-2020 11:10:56 - Running iteration 8/21
05-May-2020 11:10:56 - Running iteration 7/21
05-May-2020 11:10:56 - Running iteration 13/21
05-May-2020 11:10:56 - Running iteration 17/21
05-May-2020 11:10:56 - Running iteration 3/21
05-May-2020 11:10:56 - Running iteration 10/21
05-May-2020 11:10:56 - Running iteration 9/21
05-May-2020 11:10:56 - Running iteration 14/21
05-May-2020 11:10:56 - Running iteration 18/21
05-May-2020 11:10:56 - Running iteration 4/21
05-May-2020 11:10:56 - Running iteration 12/21
05-May-2020 11:10:56 - Running iteration 11/21
05-May-2020 11:10:56 - Running iteration 16/21
05-May-2020 11:10:56 - Running iteration 21/21
05-May-2020 11:10:56 - Running iteration 20/21
I'm new to Airflow and I tried to manually trigger a job through the UI. When I did that, the scheduler kept logging that it is failing jobs without a heartbeat, as follows:
[2018-05-28 12:13:48,248] {jobs.py:1662} INFO - Heartbeating the executor
[2018-05-28 12:13:48,250] {jobs.py:1672} INFO - Heartbeating the scheduler
[2018-05-28 12:13:48,259] {jobs.py:368} INFO - Started process (PID=58141) to work on /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:48,264] {jobs.py:1742} INFO - Processing file /Users/gkumar6/airflow/dags/tutorial.py for tasks to queue
[2018-05-28 12:13:48,265] {models.py:189} INFO - Filling up the DagBag from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:48,275] {jobs.py:1754} INFO - DAG(s) ['tutorial'] retrieved from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:48,298] {models.py:341} INFO - Finding 'running' jobs without a recent heartbeat
[2018-05-28 12:13:48,299] {models.py:345} INFO - Failing jobs without heartbeat after 2018-05-28 06:38:48.299278
[2018-05-28 12:13:48,304] {jobs.py:375} INFO - Processing /Users/gkumar6/airflow/dags/tutorial.py took 0.045 seconds
[2018-05-28 12:13:49,266] {jobs.py:1627} INFO - Heartbeating the process manager
[2018-05-28 12:13:49,267] {dag_processing.py:468} INFO - Processor for /Users/gkumar6/airflow/dags/tutorial.py finished
[2018-05-28 12:13:49,271] {dag_processing.py:537} INFO - Started a process (PID: 58149) to generate tasks for /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:49,272] {jobs.py:1662} INFO - Heartbeating the executor
[2018-05-28 12:13:49,283] {jobs.py:368} INFO - Started process (PID=58149) to work on /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:49,288] {jobs.py:1742} INFO - Processing file /Users/gkumar6/airflow/dags/tutorial.py for tasks to queue
[2018-05-28 12:13:49,289] {models.py:189} INFO - Filling up the DagBag from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:49,300] {jobs.py:1754} INFO - DAG(s) ['tutorial'] retrieved from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:49,326] {models.py:341} INFO - Finding 'running' jobs without a recent heartbeat
[2018-05-28 12:13:49,327] {models.py:345} INFO - Failing jobs without heartbeat after 2018-05-28 06:38:49.327218
[2018-05-28 12:13:49,332] {jobs.py:375} INFO - Processing /Users/gkumar6/airflow/dags/tutorial.py took 0.049 seconds
[2018-05-28 12:13:50,279] {jobs.py:1627} INFO - Heartbeating the process manager
[2018-05-28 12:13:50,280] {dag_processing.py:468} INFO - Processor for /Users/gkumar6/airflow/dags/tutorial.py finished
[2018-05-28 12:13:50,283] {dag_processing.py:537} INFO - Started a process (PID: 58150) to generate tasks for /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:50,285] {jobs.py:1662} INFO - Heartbeating the executor
[2018-05-28 12:13:50,296] {jobs.py:368} INFO - Started process (PID=58150) to work on /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:50,301] {jobs.py:1742} INFO - Processing file /Users/gkumar6/airflow/dags/tutorial.py for tasks to queue
[2018-05-28 12:13:50,302] {models.py:189} INFO - Filling up the DagBag from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:50,312] {jobs.py:1754} INFO - DAG(s) ['tutorial'] retrieved from /Users/gkumar6/airflow/dags/tutorial.py
[2018-05-28 12:13:50,338] {models.py:341} INFO - Finding 'running' jobs without a recent heartbeat
[2018-05-28 12:13:50,339] {models.py:345} INFO - Failing jobs without heartbeat after 2018-05-28 06:38:50.339147
[2018-05-28 12:13:50,344] {jobs.py:375} INFO - Processing /Users/gkumar6/airflow/dags/tutorial.py took 0.048 seconds
And the status of the job in the UI is stuck at "running". Is there something I need to configure to solve this issue?
It seems that this is not a "failing jobs" problem but a logging problem. Here's what I found when I tried to fix it.
Does this message indicate that there's something wrong that I should be concerned about?
No.
"Finding 'running' jobs" and "Failing jobs..." are INFO level logs
generated from find_zombies function of heartbeat utility. So there will be logs generated every
heartbeat interval even if you don't have any failing jobs
running.
How do I turn it off?
The logging_level option in airflow.cfg does not control the scheduler logging.
There is a hard-coded value in airflow/settings.py:
LOGGING_LEVEL = logging.INFO
You could change this to:
LOGGING_LEVEL = logging.WARN
Then restart the scheduler and the problem will be gone.
I think that for point 2, if you just change logging_level = INFO to WARN in airflow.cfg, you won't get the INFO logs; you don't need to modify the settings.py file.
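For example, in airflow.cfg (the section is assumed to be [core] here; it may differ between Airflow versions):
[core]
# suppress INFO-level scheduler messages
logging_level = WARN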
Is this normal?
info t=2016-04-28T09:57:34Z Cinnamon.AppSystem.get_default() started in 51020 ms
info t=2016-04-28T09:57:52Z AppletManager.init() started in 13658 ms
info t=2016-04-28T09:57:52Z Cinnamon took 69321 ms to start
I am using the Quartz scheduler to schedule my job, and I have used a CronTrigger. The problem is that the trigger is getting fired more than once. Here is my code to set up the cron scheduler:
SchedulerFactory schFactory = new StdSchedulerFactory();
Scheduler sched = null;
CronTrigger cronTrigger = null;
try {
    sched = schFactory.getScheduler();
    JobDetail jobDetail = new JobDetail("job1", "group1", SchedulerPBGC.class);
    String cronTimerStr = "* 16 15 * * ? *";
    LOG.warn("CRON TRIGGER FORMAT FOR PROCESSING PB GC DATA:" + cronTimerStr);
    cronTrigger = new CronTrigger("SchedTrigger", "Group1", cronTimerStr);
    sched.scheduleJob(jobDetail, cronTrigger);
    sched.start();
    LOG.warn("SCHEDULER REGISTERED FOR PROCESSING PB GC DATA : TIME :" + cronTimerStr);
} catch (SchedulerException se) {
    LOG.error("SchedulerException Message::" + se.getLocalizedMessage());
}
Here my scheduler executes the job 10 times, as you can see in the logs:
2012-06-20 15:16:50,001 DefaultQuartzScheduler_Worker-1 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:50,001 DefaultQuartzScheduler_Worker-1 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:50,001 DefaultQuartzScheduler_Worker-1 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:51,001 DefaultQuartzScheduler_Worker-2 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:51,001 DefaultQuartzScheduler_Worker-2 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:51,001 DefaultQuartzScheduler_Worker-2 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:52,001 DefaultQuartzScheduler_Worker-3 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:52,001 DefaultQuartzScheduler_Worker-3 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:52,001 DefaultQuartzScheduler_Worker-3 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:53,001 DefaultQuartzScheduler_Worker-4 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:53,001 DefaultQuartzScheduler_Worker-4 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:53,001 DefaultQuartzScheduler_Worker-4 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:54,001 DefaultQuartzScheduler_Worker-5 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:54,001 DefaultQuartzScheduler_Worker-5 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:54,001 DefaultQuartzScheduler_Worker-5 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:55,001 DefaultQuartzScheduler_Worker-6 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:55,001 DefaultQuartzScheduler_Worker-6 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:55,001 DefaultQuartzScheduler_Worker-6 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:56,001 DefaultQuartzScheduler_Worker-7 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:56,001 DefaultQuartzScheduler_Worker-7 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:56,001 DefaultQuartzScheduler_Worker-7 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:57,001 DefaultQuartzScheduler_Worker-8 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:57,001 DefaultQuartzScheduler_Worker-8 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:57,001 DefaultQuartzScheduler_Worker-8 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:58,001 DefaultQuartzScheduler_Worker-9 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:58,001 DefaultQuartzScheduler_Worker-9 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:58,001 DefaultQuartzScheduler_Worker-9 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED-----------------------
2012-06-20 15:16:59,001 DefaultQuartzScheduler_Worker-10 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION STARTED-----------------------
2012-06-20 15:16:59,001 DefaultQuartzScheduler_Worker-10 WARN test.SchedulerPBGC - The value of NO_OF_BEFORE_DAY_TO_RUN must be less then zero ..to start the scheduler
2012-06-20 15:16:59,001 DefaultQuartzScheduler_Worker-10 WARN test.SchedulerPBGC - ----------PB GC SCHEDULER EXECUTION COMPLETED--------------------
How can I make the cron trigger fire only once? Or how can I stop the scheduler from executing the job more than once?
Any suggestions?
Thanks,
Gunjan Shah.
I got the solution.
The cron expression I used was: String cronTimerStr = "* 16 15 * * ? *";
Because the seconds field is *, Quartz fires the trigger every second during the 15:16 minute, so within that one minute it fires up to 60 times, each execution picked up by a worker thread.
I set the seconds field to zero.
So the new expression is "0 16 15 * * ? *".
Now it works fine.
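For reference, here is a minimal sketch of the corrected setup, using the same Quartz 1.x API as in the question (SchedulerPBGC is the job class from the question; only the cron expression changes):
import java.text.ParseException;

import org.quartz.CronTrigger;
import org.quartz.JobDetail;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SchedulerFactory;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulePBGCJob {
    public static void main(String[] args) {
        SchedulerFactory schFactory = new StdSchedulerFactory();
        try {
            Scheduler sched = schFactory.getScheduler();
            JobDetail jobDetail = new JobDetail("job1", "group1", SchedulerPBGC.class);
            // Seconds field is 0, so the trigger fires once at 15:16:00
            // instead of once per second during the whole 15:16 minute.
            String cronTimerStr = "0 16 15 * * ? *";
            CronTrigger cronTrigger = new CronTrigger("SchedTrigger", "Group1", cronTimerStr);
            sched.scheduleJob(jobDetail, cronTrigger);
            sched.start();
        } catch (SchedulerException | ParseException e) {
            // handle or log the scheduling error
            e.printStackTrace();
        }
    }
}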
Thanks,
Gunjan Shah.