Job Execution And Persistence - quartz-scheduler

I have used JDBCJobStore to persist jobs in the database. My job is stored successfully, but it is not executed. Here is my quartz.properties file:
org.quartz.scheduler.instanceName = MieScheduler
org.quartz.threadPool.threadCount = 5
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.MSSQLDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.scheduler.skipUpdateCheck = true
org.quartz.jobStore.dataSource = myDS
org.quartz.dataSource.myDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL=jdbc:mysql://localhost:3306/SCHEDULER_DB
org.quartz.dataSource.myDS.user=root
org.quartz.dataSource.myDS.password=user
org.quartz.dataSource.myDS.maxConnections=8
org.quartz.plugin.jobInitializer.class = org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-config.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
I can see records in the QRTZ_SIMPLE_TRIGGERS table, but the TIMES_TRIGGERED column value is never updated, indicating that the job is not executed. How can I get past this issue?

The first thing I would check is whether the scheduler has actually been started. You can check this in the debugger, or you can enable remote JMX access to your Quartz scheduler instance and use jconsole (or any Quartz scheduler GUI) to inspect the scheduler's runtime properties, including its current state.
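For example, Quartz can export its MBeans for JMX inspection with a single quartz.properties entry (the standard JVM remote-JMX system properties, if remote access is needed, are assumed to be configured separately):

```properties
# Expose the scheduler's runtime state (started/standby, executing jobs, etc.) via JMX
org.quartz.scheduler.jmx.export = true
```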
Starting the scheduler:
If you are using Spring, the scheduler can be started automatically by setting the Spring SchedulerFactoryBean's autoStartup property to true.
If you are instantiating the scheduler manually, do not forget to call the start() method on the scheduler instance.
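A minimal sketch of the manual case, using the standard Quartz API (the factory reads quartz.properties from the classpath):

```java
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerStarter {
    public static void main(String[] args) throws SchedulerException {
        // Builds the scheduler from quartz.properties on the classpath
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        // Without this call, jobs and triggers are persisted but never fired
        scheduler.start();
    }
}
```

If start() is never called, the symptom is exactly what the question describes: rows appear in the QRTZ_ tables, but TIMES_TRIGGERED stays at zero.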

Related

Disable InboundChannelAdapter / Poller with autoStartup false

In Spring Integration, I want to disable a poller by setting autoStartup=false on the InboundChannelAdapter. But with the following setup, none of my pollers fire on either Tomcat instance 1 or Tomcat instance 2. I have two Tomcat instances with the same code deployed, and I want the pollers disabled on one of the instances, since I do not want the same job polling on both Tomcat instances concurrently.
Here is the InboundChannelAdapter:
@Bean
@InboundChannelAdapter(value = "irsDataPrepJobInputWeekdayChannel",
        poller = @Poller(cron = "${batch.job.schedule.cron.weekdays.irsDataPrepJobRunner}", maxMessagesPerPoll = "1"),
        autoStartup = "${batch.job.schedule.cron.weekdays.irsDataPrepJobRunner.autoStartup}")
public MessageSource<JobLaunchRequest> pollIrsDataPrepWeekdayJob() {
    return () -> new GenericMessage<>(requestIrsDataPrepWeekdayJob());
}
The property files are as follows. Property file for Tomcat instance 1:
# I wish for this job to run on Tomcat instance 1
batch.job.schedule.cron.riStateAgencyTransmissionJobRunner=0 50 14 * * *
# since autoStartup defaults to true, I do not provide:
#batch.job.schedule.cron.riStateAgencyTransmissionJobRunner.autoStartup=true
# I do NOT wish for this job to run on Tomcat instance 1
batch.job.schedule.cron.weekdays.irsDataPrepJobRunner.autoStartup=false
# need to supply as poller has a cron placeholder
batch.job.schedule.cron.weekdays.irsDataPrepJobRunner=0 0/7 * * * 1-5
Property file for Tomcat instance 2:
# I wish for this job to run on Tomcat instance 2
batch.job.schedule.cron.weekdays.irsDataPrepJobRunner=0 0/7 * * * 1-5
# since autoStartup defaults to true, I do not provide:
#batch.job.schedule.cron.weekdays.irsDataPrepJobRunner.autoStartup=true
# I do NOT wish for this job to run on Tomcat instance 2
batch.job.schedule.cron.riStateAgencyTransmissionJobRunner.autoStartup=false
# need to supply as poller has a cron placeholder
batch.job.schedule.cron.riStateAgencyTransmissionJobRunner=0 50 14 * * *
The properties files are passed as a VM option, e.g. "-Druntime.scheduler=dev1". I cannot disable the poller on one of the JVMs using "-" as the cron expression -- something similar to the ask here: Poller annotation with cron expression should support a special disable character
My goal of being able to trigger the job manually from either Tomcat instance 1 or Tomcat instance 2 works. My problem with the setup above is that none of the pollers fire according to their cron expressions.
Consider investigating a leader election pattern: https://docs.spring.io/spring-integration/docs/current/reference/html/messaging-endpoints.html#leadership-event-handling.
This way you have those endpoints in a non-started state by default and in the same role. The election will choose a leader and start only that one.
Of course, there has to be some shared external service to control leadership.
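A hedged sketch of that approach using Spring Integration's lock-registry-based leader initiator (the "cluster" role name, candidate id, and bean wiring here are illustrative assumptions; the adapters to be controlled would keep autoStartup=false and be registered under the same role):

```java
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.integration.leader.DefaultCandidate;
import org.springframework.integration.support.leader.LockRegistryLeaderInitiator;
import org.springframework.integration.support.locks.LockRegistry;

@Configuration
public class LeaderConfig {

    @Bean
    public LockRegistryLeaderInitiator leaderInitiator(LockRegistry lockRegistry) {
        // Both Tomcat instances compete for a shared lock (e.g. a JDBC-backed
        // LockRegistry). The winner receives an OnGrantedEvent for the "cluster"
        // role, which starts every endpoint registered under that role; the
        // other instance keeps its endpoints stopped until leadership changes.
        return new LockRegistryLeaderInitiator(lockRegistry,
                new DefaultCandidate("tomcat-instance", "cluster"));
    }
}
```

Endpoints are associated with the role either via the role attribute in XML or programmatically through SmartLifecycleRoleController, so neither instance needs per-instance autoStartup properties.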

Can I configure Quartz to prevent jobs from running on a particular node?

I'm using Quartz 1.6 in a clustered environment and am having some resource issues. Consequently, I want to prevent certain nodes from running any background/scheduled jobs.
Quartz is currently configured using the org.quartz.impl.jdbcjobstore.JobStoreTX, and all my nodes launch the scheduler on startup.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.MSSQLDelegate
org.quartz.jobStore.useProperties = true
org.quartz.jobStore.dataSource = QUARTZ
org.quartz.jobStore.isClustered = false
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.QUARTZ.jndiURL = java:/AppDB
Is there some configuration parameter that I could pass to disable a node from processing any background jobs? Or to specify which node(s) I want jobs to run on?
I thought I could "trick" Quartz by setting my threadCount to 0, but unfortunately that doesn't work.
Are there other config parameters I could use instead?

(Unknown queue MSG=requested queue not found)

I am trying to run WIEN2k jobs with the Torque job scheduler (Maui scheduler) on Ubuntu 14.04. The app installed correctly without any error messages, but when I add the bash script I encounter this error: "qsub: submit error (Unknown queue MSG=requested queue not found)". I read about this error on this website.
Can anyone help me, please?
This is my PBS queue manager output after using the qmgr -c "p s" command:
Create queues and set their attributes.
Create and define queue batch
create queue batch
set queue batch queue_type = Execution
set queue batch max_running = 20
set queue batch resources_max.ncpus = 20
set queue batch resources_max.nodes = 20
set queue batch resources_default.ncpus = 1
set queue batch resources_default.nodect = 1
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 76790:53:51
set queue batch enabled = True
set queue batch started = True
set server attributes.
set server scheduling = True
set server acl_hosts = seconduser
set server acl_hosts += firstuser
set server managers = root@localhost
set server managers += secfir@localhost
set server operators = root@localhost
set server operators += secfir@localhost
set server default_queue = batch
set server log_events = 511
set server mail_from = adm
set server scheduler_iteration = 600
set server node_check_rate = 150
set server tcp_timeout = 300
set server job_stat_rate = 45
set server poll_jobs = True
set server mom_job_sync = True
set server keep_completed = 0
set server next_job_number = 47
set server moab_array_compatible = True
set server nppcu = 1
Try running:
qmgr -c "p s"
Look through the config for queue names. If there is a routing queue, try submitting your job with qsub -q queuename jobfile. If a routing queue doesn't exist, you can try the execution queues. Otherwise, it may be best to ask the cluster admin, since they have the power to kill your jobs if you don't submit them properly.
I just had the same problem and fixed it. The fullest way to do a qsub is (described here):
qsub -q queuename scriptname
Your queue name can be found using the command described in the previous answer, as you did:
qmgr -c "p s"
The result for yours (and mine as well) is that the queue name is "batch". Multiple lines indicate this, such as "Create and define queue batch" and "create queue batch", but most decisively:
set server default_queue = batch
So, submit your job script with
qsub -q batch scriptname

quartz scheduler clustering

I have a 2-node HA setup. Node 1 is active and node 2 is standby.
I have built one application and used the Quartz API to do clustering. I have created all the tables in the DB.
Now, do I need to run the module on both nodes, or just on node 1, so that when node 1 goes down the application automatically starts on node 2?
Should the trigger and job names be the same or different when running the module on both nodes?
Quartz.properties:
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
# Using RAMJobStore
# If using RAMJobStore, be sure to comment out
# org.quartz.jobStore.tablePrefix, org.quartz.jobStore.driverDelegateClass, org.quartz.jobStore.dataSource
#org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
# Using JobStoreTX
# Be sure to run the appropriate script (under docs/dbTables) first to create the database tables
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# Using DriverDelegate
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
# New conf for clustering
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.instanceName = MyClusteredScheduler
# Configuring JDBCJobStore with the table prefix
org.quartz.jobStore.tablePrefix = QRTZ_
# Using datasource
org.quartz.jobStore.dataSource = qzDS
# Define the datasource to use
org.quartz.dataSource.qzDS.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.qzDS.URL = jdbc:oracle:thin:@10.172.16.147:1521:emadb0
org.quartz.dataSource.qzDS.user = BLuser
org.quartz.dataSource.qzDS.password = BLuser
org.quartz.dataSource.qzDS.maxConnections = 30
According to http://quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering, in particular:
Clustering currently only works with the JDBC-Jobstore (JobStoreTX or JobStoreCMT), and essentially works by having each node of the cluster share the same database.
Load-balancing occurs automatically, with each node of the cluster firing jobs as quickly as it can. When a trigger's firing time occurs, the first node to acquire it (by placing a lock on it) is the node that will fire it.
You should start the scheduler on all your nodes; the fastest one will fire the trigger, and the others will know about it.

find if a job is running in Quartz 1.6

I would like to clarify details of the scheduler.getCurrentlyExecutingJobs() method in Quartz 1.6. I have a job that should have only one instance running at any given moment. It can be triggered to "run now" from a UI, but if an instance of this job is already running, nothing should happen.
This is how I check whether there is a job running that interests me:
List<JobExecutionContext> currentJobs = scheduler.getCurrentlyExecutingJobs();
for (JobExecutionContext jobCtx : currentJobs) {
    String jobName = jobCtx.getJobDetail().getName();
    String groupName = jobCtx.getJobDetail().getGroup();
    if (jobName.equalsIgnoreCase("job_I_am_looking_for_name") &&
            groupName.equalsIgnoreCase("job_group_I_am_looking_for_name")) {
        // found it!
        logger.warn("the job is already running - do nothing");
    }
}
Then, to test this, I have a unit test that tries to schedule two instances of this job, one after the other. I was expecting to see the warning when trying to schedule the second job; however, I'm getting this exception instead:
org.quartz.ObjectAlreadyExistsException: Unable to store Job with name:
'job_I_am_looking_for_name' and group: 'job_group_I_am_looking_for_name',
because one already exists with this identification.
When I run this unit test in debug mode, with a breakpoint on this line:
List<JobExecutionContext> currentJobs = scheduler.getCurrentlyExecutingJobs();
I see that the list is empty, so the scheduler does not see this job as running; yet it still fails to schedule it again, which tells me the job was indeed running at the time...
Am I missing some finer points with this scheduler method?
Thanks!
Marina
For the benefit of others, I'm posting an answer to the issue I was having. I got help from the Terracotta forum's Zemian Deng: posting on Terracotta's forum
Here is the recap:
The actual checking of the running jobs was working fine; it was just a timing issue in the unit tests, of course. I added some sleep in the job and tweaked the unit tests to schedule the second job while the first one was still running, and verified that I could indeed find the first job still running.
The exception I was getting occurred because I was trying to schedule a new job with the same name, rather than triggering the job already stored in the scheduler. The following code worked exactly as I needed:
List<JobExecutionContext> currentJobs = scheduler.getCurrentlyExecutingJobs();
for (JobExecutionContext jobCtx : currentJobs) {
    String jobName = jobCtx.getJobDetail().getName();
    String groupName = jobCtx.getJobDetail().getGroup();
    if (jobName.equalsIgnoreCase("job_I_am_looking_for_name") && groupName.equalsIgnoreCase("job_group_I_am_looking_for_name")) {
        // found it!
        logger.warn("the job is already running - do nothing");
        return;
    }
}
// check whether this job is already stored in the scheduler
JobDetail emailJob = scheduler.getJobDetail("job_I_am_looking_for_name", "job_group_I_am_looking_for_name");
if (emailJob == null) {
    // this job is not in the scheduler yet
    // create a JobDetail object for my job
    emailJob = jobFactory.getObject();
    emailJob.setName("job_I_am_looking_for_name");
    emailJob.setGroup("job_group_I_am_looking_for_name");
    scheduler.addJob(emailJob, true);
}
// this job is in the scheduler and is not running right now - run it now
scheduler.triggerJob("job_I_am_looking_for_name", "job_group_I_am_looking_for_name");
Thanks!
Marina