Can I configure Quartz to prevent jobs from running on a particular node? - quartz-scheduler

I'm using Quartz 1.6 in a clustered environment and am having some resource issues. Consequently, I want to prevent certain nodes from running any background/scheduled jobs.
Quartz is currently configured using the org.quartz.impl.jdbcjobstore.JobStoreTX, and all my nodes launch the scheduler on startup.
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.MSSQLDelegate
org.quartz.jobStore.useProperties = true
org.quartz.jobStore.dataSource = QUARTZ
org.quartz.jobStore.isClustered = false
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.dataSource.QUARTZ.jndiURL = java:/AppDB
Is there some configuration parameter that I could pass to disable a node from processing any background jobs? Or to specify which node(s) I want jobs to run on?
I thought I could "trick" Quartz by setting my threadCount to 0, but unfortunately that doesn't work.
Are there other config parameters I could use instead?
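Quartz 1.x does not appear to offer a per-node "don't fire jobs" setting for a clustered JDBC job store; the workaround usually suggested is simply not starting the scheduler on nodes that should stay passive. Below is a minimal sketch of that idea, assuming a node-level JVM system property (quartz.node.runJobs is a made-up name, not a Quartz setting):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class SchedulerBootstrap {
    public static void main(String[] args) throws SchedulerException {
        // Build the scheduler from quartz.properties as usual
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        // Hypothetical JVM flag, e.g. -Dquartz.node.runJobs=true on worker nodes only
        if (Boolean.getBoolean("quartz.node.runJobs")) {
            scheduler.start();   // this node will acquire and fire triggers
        } else {
            // Never started: this node can still schedule and inspect jobs through the
            // shared job store, but its thread pool will not execute anything
            System.out.println("Quartz scheduler left unstarted on this node");
        }
    }
}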

Related

Airflow CeleryExecutor With AWS SQS

I'm trying to cluster my Airflow setup, and I'm using this article to do so. I configured my airflow.cfg file to use the CeleryExecutor, pointed my sql_alchemy_conn at the PostgreSQL database running on the same master node, set the broker_url to use SQS (I didn't set the access_key_id or secret_key since it's running on an EC2 instance and doesn't need them), and set the celery_result_backend to my PostgreSQL server as well. I saved my airflow.cfg changes, ran airflow initdb, and then ran airflow scheduler, and I'm getting this error from the scheduler:
[2018-06-07 21:07:33,420] {celery_executor.py:101} ERROR - Error syncing the celery executor, ignoring it:
[2018-06-07 21:07:33,421] {celery_executor.py:102} ERROR - Can't load plugin: sqlalchemy.dialects:psycopg2
Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/airflow/executors/celery_executor.py", line 83, in sync
    state = async.state
  File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 433, in state
    return self._get_task_meta()['status']
  File "/usr/local/lib/python3.6/site-packages/celery/result.py", line 372, in _get_task_meta
    return self._maybe_set_cache(self.backend.get_task_meta(self.id))
  File "/usr/local/lib/python3.6/site-packages/celery/backends/base.py", line 344, in get_task_meta
    meta = self._get_task_meta_for(task_id)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 53, in _inner
    return fun(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 122, in _get_task_meta_for
    session = self.ResultSession()
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/__init__.py", line 99, in ResultSession
    **self.engine_options)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 59, in session_factory
    engine, session = self.create_session(dburi, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 45, in create_session
    engine = self.get_engine(dburi, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/celery/backends/database/session.py", line 42, in get_engine
    return create_engine(dburi, poolclass=NullPool)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/__init__.py", line 424, in create_engine
    return strategy.create(*args, **kwargs)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/strategies.py", line 57, in create
    entrypoint = u._get_entrypoint()
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/engine/url.py", line 156, in _get_entrypoint
    cls = registry.load(name)
  File "/usr/local/lib/python3.6/site-packages/sqlalchemy/util/langhelpers.py", line 221, in load
    (self.group, name))
sqlalchemy.exc.NoSuchModuleError: Can't load plugin: sqlalchemy.dialects:psycopg2
Here is my airflow.cfg file:
[core]
# The home folder for airflow, default is ~/airflow
airflow_home = /home/ec2-user/airflow
# The folder where your airflow pipelines live, most likely a
# subfolder in a code repository
# This path must be absolute
dags_folder = /home/ec2-user/airflow/dags
# The folder where airflow should store its log files
# This path must be absolute
base_log_folder = /home/ec2-user/airflow/logs
# Airflow can store logs remotely in AWS S3 or Google Cloud Storage. Users
# must supply an Airflow connection id that provides access to the storage
# location.
remote_log_conn_id =
encrypt_s3_logs = False
# Logging level
logging_level = INFO
# Logging class
# Specify the class that will specify the logging configuration
# This class has to be on the python classpath
# logging_config_class = my.path.default_local_settings.LOGGING_CONFIG
logging_config_class =
# Log format
log_format = [%%(asctime)s] {%%(filename)s:%%(lineno)d} %%(levelname)s - %%(message)s
simple_log_format = %%(asctime)s %%(levelname)s - %%(message)s
# The executor class that airflow should use. Choices include
# SequentialExecutor, LocalExecutor, CeleryExecutor, DaskExecutor
#executor = SequentialExecutor
executor = CeleryExecutor
# The SqlAlchemy connection string to the metadata database.
# SqlAlchemy supports many different database engine, more information
# their website
#sql_alchemy_conn = sqlite:////home/ec2-user/airflow/airflow.db
sql_alchemy_conn = postgresql+psycopg2://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1
# The SqlAlchemy pool size is the maximum number of database connections
# in the pool.
sql_alchemy_pool_size = 5
# The SqlAlchemy pool recycle is the number of seconds a connection
# can be idle in the pool before it is invalidated. This config does
# not apply to sqlite.
sql_alchemy_pool_recycle = 3600
# The amount of parallelism as a setting to the executor. This defines
# the max number of task instances that should run simultaneously
# on this airflow installation
parallelism = 32
# The number of task instances allowed to run concurrently by the scheduler
dag_concurrency = 16
# Are DAGs paused by default at creation
dags_are_paused_at_creation = True
# When not using pools, tasks are run in the "default pool",
# whose size is guided by this config element
non_pooled_task_slot_count = 128
# The maximum number of active DAG runs per DAG
max_active_runs_per_dag = 16
# Whether to load the examples that ship with Airflow. It's good to
# get started, but you probably want to set this to False in a production
# environment
load_examples = True
# Where your Airflow plugins are stored
plugins_folder = /home/ec2-user/airflow/plugins
# Secret key to save connection passwords in the db
fernet_key = ibwZ5uSASmZGphBmwdJ4BIhd1-5WZXMTTgMF9u1_dGM=
# Whether to disable pickling dags
donot_pickle = False
# How long before timing out a python file import while filling the DagBag
dagbag_import_timeout = 30
# The class to use for running task instances in a subprocess
task_runner = BashTaskRunner
# If set, tasks without a `run_as_user` argument will be run with this user
# Can be used to de-elevate a sudo user running Airflow when executing tasks
default_impersonation =
# What security module to use (for example kerberos):
security =
# Turn unit test mode on (overwrites many configuration options with test
# values at runtime)
unit_test_mode = False
# Name of handler to read task instance logs.
# Default to use file task handler.
task_log_reader = file.task
# Whether to enable pickling for xcom (note that this is insecure and allows for
# RCE exploits). This will be deprecated in Airflow 2.0 (be forced to False).
enable_xcom_pickling = True
# When a task is killed forcefully, this is the amount of time in seconds that
# it has to cleanup after it is sent a SIGTERM, before it is SIGKILLED
killed_task_cleanup_time = 60
[cli]
# In what way should the cli access the API. The LocalClient will use the
# database directly, while the json_client will use the api running on the
# webserver
api_client = airflow.api.client.local_client
endpoint_url = http://localhost:8080
[api]
# How to authenticate users of the API
auth_backend = airflow.api.auth.backend.default
[operators]
# The default owner assigned to each new operator, unless
# provided explicitly or passed via `default_args`
default_owner = Airflow
default_cpus = 1
default_ram = 512
default_disk = 512
default_gpus = 0
[webserver]
# The base url of your website as airflow cannot guess what domain or
# cname you are using. This is used in automated emails that
# airflow sends to point links to the right web server
base_url = http://localhost:8080
# The ip specified when starting the web server
web_server_host = 0.0.0.0
# The port on which to run the web server
web_server_port = 8080
# Paths to the SSL certificate and key for the web server. When both are
# provided SSL will be enabled. This does not change the web server port.
web_server_ssl_cert =
web_server_ssl_key =
# Number of seconds the gunicorn webserver waits before timing out on a worker
web_server_worker_timeout = 120
# Number of workers to refresh at a time. When set to 0, worker refresh is
# disabled. When nonzero, airflow periodically refreshes webserver workers by
# bringing up new ones and killing old ones.
worker_refresh_batch_size = 1
# Number of seconds to wait before refreshing a batch of workers.
worker_refresh_interval = 30
# Secret key used to run your flask app
secret_key = temporary_key
# Number of workers to run the Gunicorn web server
workers = 4
# The worker class gunicorn should use. Choices include
# sync (default), eventlet, gevent
worker_class = sync
# Log files for the gunicorn webserver. '-' means log to stderr.
access_logfile = -
error_logfile = -
# Expose the configuration file in the web server
expose_config = False
# Set to true to turn on authentication:
# http://pythonhosted.org/airflow/security.html#web-authentication
authenticate = False
# Filter the list of dags by owner name (requires authentication to be enabled)
filter_by_owner = False
# Filtering mode. Choices include user (default) and ldapgroup.
# Ldap group filtering requires using the ldap backend
#
# Note that the ldap server needs the "memberOf" overlay to be set up
# in order to user the ldapgroup mode.
owner_mode = user
# Default DAG view. Valid values are:
# tree, graph, duration, gantt, landing_times
dag_default_view = tree
# Default DAG orientation. Valid values are:
# LR (Left->Right), TB (Top->Bottom), RL (Right->Left), BT (Bottom->Top)
dag_orientation = LR
# Puts the webserver in demonstration mode; blurs the names of Operators for
# privacy.
demo_mode = False
# The amount of time (in secs) webserver will wait for initial handshake
# while fetching logs from other worker machine
log_fetch_timeout_sec = 5
# By default, the webserver shows paused DAGs. Flip this to hide paused
# DAGs by default
hide_paused_dags_by_default = False
# Consistent page size across all listing views in the UI
page_size = 100
[email]
email_backend = airflow.utils.email.send_email_smtp
[smtp]
# If you want airflow to send emails on retries, failure, and you want to use
# the airflow.utils.email.send_email_smtp function, you have to configure an
# smtp server here
smtp_host = localhost
smtp_starttls = True
smtp_ssl = False
# Uncomment and set the user/pass settings if you want to use SMTP AUTH
# smtp_user = airflow
# smtp_password = airflow
smtp_port = 25
smtp_mail_from = airflow@example.com
[celery]
# This section only applies if you are using the CeleryExecutor in
# [core] section above
# The app name that will be used by celery
celery_app_name = airflow.executors.celery_executor
# The concurrency that will be used when starting workers with the
# "airflow worker" command. This defines the number of task instances that
# a worker will take, so size up your workers based on the resources on
# your worker box and the nature of your tasks
celeryd_concurrency = 16
# When you start an airflow worker, airflow starts a tiny web server
# subprocess to serve the workers local log files to the airflow main
# web server, who then builds pages and sends them to users. This defines
# the port on which the logs are served. It needs to be unused, and open
# visible from the main web server to connect into the workers.
worker_log_server_port = 8793
# The Celery broker URL. Celery supports RabbitMQ, Redis and experimentally
# a sqlalchemy database. Refer to the Celery documentation for more
# information.
#broker_url = sqla+mysql://airflow:airflow@localhost:3306/airflow
broker_url = sqs://
# Another key Celery setting
#celery_result_backend = db+mysql://airflow:airflow@localhost:3306/airflow
celery_result_backend = db+psycopg2://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1
# Celery Flower is a sweet UI for Celery. Airflow has a shortcut to start
# it `airflow flower`. This defines the IP that Celery Flower runs on
flower_host = 0.0.0.0
# This defines the port that Celery Flower runs on
flower_port = 5555
# Default queue that tasks get assigned to and that worker listen on.
default_queue = default
# Import path for celery configuration options
celery_config_options = airflow.config_templates.default_celery.DEFAULT_CELERY_CONFIG
[dask]
# This section only applies if you are using the DaskExecutor in
# [core] section above
# The IP address and port of the Dask cluster's scheduler.
cluster_address = 127.0.0.1:8786
[scheduler]
# Task instances listen for external kill signal (when you clear tasks
# from the CLI or the UI), this defines the frequency at which they should
# listen (in seconds).
job_heartbeat_sec = 5
# The scheduler constantly tries to trigger new tasks (look at the
# scheduler section in the docs for more information). This defines
# how often the scheduler should run (in seconds).
scheduler_heartbeat_sec = 5
# after how much time should the scheduler terminate in seconds
# -1 indicates to run continuously (see also num_runs)
run_duration = -1
# after how much time a new DAGs should be picked up from the filesystem
min_file_process_interval = 0
dag_dir_list_interval = 300
# How often should stats be printed to the logs
print_stats_interval = 30
child_process_log_directory = /home/ec2-user/airflow/logs/scheduler
# Local task jobs periodically heartbeat to the DB. If the job has
# not heartbeat in this many seconds, the scheduler will mark the
# associated task instance as failed and will re-schedule the task.
scheduler_zombie_task_threshold = 300
# Turn off scheduler catchup by setting this to False.
# Default behavior is unchanged and
# Command Line Backfills still work, but the scheduler
# will not do scheduler catchup if this is False,
# however it can be set on a per DAG basis in the
# DAG definition (catchup)
catchup_by_default = True
# This changes the batch size of queries in the scheduling main loop.
# This depends on query length limits and how long you are willing to hold locks.
# 0 for no limit
max_tis_per_query = 0
# Statsd (https://github.com/etsy/statsd) integration settings
statsd_on = False
statsd_host = localhost
statsd_port = 8125
statsd_prefix = airflow
# The scheduler can run multiple threads in parallel to schedule dags.
# This defines how many threads will run.
max_threads = 2
authenticate = False
[ldap]
# set this to ldaps://<your.ldap.server>:<port>
uri =
user_filter = objectClass=*
user_name_attr = uid
group_member_attr = memberOf
superuser_filter =
data_profiler_filter =
bind_user = cn=Manager,dc=example,dc=com
bind_password = insecure
basedn = dc=example,dc=com
cacert = /etc/ca/ldap_ca.crt
search_scope = LEVEL
[mesos]
# Mesos master address which MesosExecutor will connect to.
master = localhost:5050
# The framework name which Airflow scheduler will register itself as on mesos
framework_name = Airflow
# Number of cpu cores required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_cpu = 1
# Memory in MB required for running one task instance using
# 'airflow run <dag_id> <task_id> <execution_date> --local -p <pickle_id>'
# command on a mesos slave
task_memory = 256
# Enable framework checkpointing for mesos
# See http://mesos.apache.org/documentation/latest/slave-recovery/
checkpoint = False
# Failover timeout in milliseconds.
# When checkpointing is enabled and this option is set, Mesos waits
# until the configured timeout for
# the MesosExecutor framework to re-register after a failover. Mesos
# shuts down running tasks if the
# MesosExecutor framework fails to re-register within this timeframe.
# failover_timeout = 604800
# Enable framework authentication for mesos
# See http://mesos.apache.org/documentation/latest/configuration/
authenticate = False
# Mesos credentials, if authentication is enabled
# default_principal = admin
# default_secret = admin
[kerberos]
ccache = /tmp/airflow_krb5_ccache
# gets augmented with fqdn
principal = airflow
reinit_frequency = 3600
kinit_path = kinit
keytab = airflow.keytab
[github_enterprise]
api_rev = v3
[admin]
# UI to hide sensitive variable fields when set to True
hide_sensitive_variable_fields = True
I'm not too sure what's going on here. Is there additional setup I need to do on Celery or anything else? I'm also confused about how it knows which SQS queue to use on AWS. Does it create a new queue itself, or do I need to create the queue on AWS and put that URL somewhere?
See this answer: https://stackoverflow.com/a/39967889/5191221
Taken from there, replace:
celery_result_backend = postgresql+psycopg2://username:password@192.168.1.2:5432/airflow
with something like:
celery_result_backend = db+postgresql://username:password@192.168.1.2:5432/airflow
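Applied to the configuration in the question, the result backend line would then look something like this (keeping the asker's own host, database name, and $password placeholder):
celery_result_backend = db+postgresql://postgres:$password@localhost/datalake_airflow_cluster_v1_master1_database_1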

launching ipyparallel cluster across multiple nodes using MPI

I am trying to start an ipyparallel cluster using MPI.
My ipcluster_config.py has the following lines modified:
c.MPILauncher.mpi_cmd = ['mpiexec']
c.MPIControllerLauncher.controller_args = ['--ip=*']
c.MPILauncher.mpi_args = ["-machinefile", "~/mpi_hosts"]
The ipcontroller_config.py is configured as follows:
c.HubFactory.engine_ip = '*'
c.HubFactory.ip = '*'
c.HubFactory.client_ip = '*'
However, when I launch the cluster using the command
ipcluster start --profile mpi -n 2
it fails with the following message:
Engines shutdown early, they probably failed to connect.
You can set this by adding "--ip='*'" to your ControllerLauncher.controller_args
I'm not sure how to debug this further.
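For reference, the ~/mpi_hosts machinefile passed with -machinefile is just a list of hosts, one per line; the hostnames below are placeholders, and the syntax for per-host process counts differs between MPI implementations (e.g. "node1 slots=2" for Open MPI, "node1:2" for MPICH):
node1
node2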

(Unknown queue MSG=requested queue not found)

I am trying to run wien2k jobs with the Torque job scheduler (Maui scheduler) on Ubuntu 14.04. The application is installed correctly without any error messages, but when I submit the bash script I get this error: "qsub: submit error (Unknown queue MSG=requested queue not found)". I read about this error on this website.
Can anyone help me, please?
Here is my PBS queue manager output after running the qmgr -c "p s" command:
Create queues and set their attributes.
Create and define queue batch
create queue batch
set queue batch queue_type = Execution
set queue batch max_running = 20
set queue batch resources_max.ncpus = 20
set queue batch resources_max.nodes = 20
set queue batch resources_default.ncpus = 1
set queue batch resources_default.nodect = 1
set queue batch resources_default.nodes = 1
set queue batch resources_default.walltime = 76790:53:51
set queue batch enabled = True
set queue batch started = True
set server attributes.
set server scheduling = True
set server acl_hosts = seconduser
set server acl_hosts += firstuser
set server managers = root@localhost
set server managers += secfir@localhost
set server operators = root@localhost
set server operators += secfir@localhost
set server default_queue = batch
set server log_events = 511
set server mail_from = adm
set server scheduler_iteration = 600
set server node_check_rate = 150
set server tcp_timeout = 300
set server job_stat_rate = 45
set server poll_jobs = True
set server mom_job_sync = True
set server keep_completed = 0
set server next_job_number = 47
set server moab_array_compatible = True
set server nppcu = 1
Try running:
qmgr -c "p s"
Look through the output for queue names. If there is a routing queue, try submitting your job with qsub -q queuename jobfile. If a routing queue doesn't exist, you can try the execution queues. Otherwise, it may be best to ask the cluster admin, since they have the power to kill your jobs if you don't submit them properly.
I just had the same problem and fixed it. The most complete way to do a qsub is (described here):
qsub -q queuename scriptname
Your queue name can be found using the command described in the previous answer, as you did:
qmgr -c "p s"
The result for yours (and mine as well) is that the queue name is "batch". Several lines indicate this, such as "Create and define queue batch" and "create queue batch", but most conclusively:
set server default_queue = batch
So, submit your job script with
qsub -q batch scriptname
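Alternatively, the queue can be requested from inside the job script itself with a PBS directive, so the -q flag is not needed on the qsub command line. A minimal sketch (the resource requests and the final command are placeholders):
#!/bin/bash
#PBS -q batch
#PBS -l nodes=1:ppn=1
#PBS -l walltime=01:00:00
# Torque starts the script in the home directory, so change to the submit directory first
cd $PBS_O_WORKDIR
# run the wien2k calculation here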

Job Execution And Persistence

I have used JDBCJobStore to persist jobs in a database. My job is stored successfully, but it is never executed. Here is my quartz.properties file:
org.quartz.scheduler.instanceName = MieScheduler
org.quartz.threadPool.threadCount = 5
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.MSSQLDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.scheduler.skipUpdateCheck= true
org.quartz.jobStore.dataSource = myDS
org.quartz.dataSource.myDS.driver=com.mysql.jdbc.Driver
org.quartz.dataSource.myDS.URL=jdbc:mysql://localhost:3306/SCHEDULER_DB
org.quartz.dataSource.myDS.user=root
org.quartz.dataSource.myDS.password=user
org.quartz.dataSource.myDS.maxConnections=8
org.quartz.plugin.jobInitializer.class =org.quartz.plugins.xml.XMLSchedulingDataProcessorPlugin
org.quartz.plugin.jobInitializer.fileNames = quartz-config.xml
org.quartz.plugin.jobInitializer.failOnFileNotFound = true
I can see records in the QRTZ_SIMPLE_TRIGGERS table, but the TIMES_TRIGGERED column is never updated, which indicates that the job is not being executed. How can I get past this issue?
The first thing I would check is that the scheduler is actually started. You can check this in the debugger, or you can enable remote JMX access to your Quartz scheduler instance and use jconsole (or any "quartz scheduler gui") to inspect the Quartz scheduler's runtime properties, including its current state.
Starting the scheduler:
If you are using Spring, the scheduler can be automatically started by setting the Spring SchedulerFactoryBean's autoStartup property to true.
If you are instantiating the scheduler manually, you must not forget to call the start method on the scheduler instance.
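For example, a minimal manual start-up against the quartz.properties shown above might look like the following sketch (the class name is illustrative):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

public class QuartzStartup {
    public static void main(String[] args) throws SchedulerException {
        // StdSchedulerFactory picks up quartz.properties from the classpath by default
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();
        // Without this call the jobs can still end up stored in the database,
        // but no trigger is ever fired and TIMES_TRIGGERED stays at 0
        scheduler.start();
    }
}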

quartz scheduler clustering

I have a 2-node HA setup. Node 1 is active and node 2 is standby.
I have built one application and used the Quartz API to do clustering. I have created all the Quartz tables in the database.
Do I need to run the module on both nodes, or just on node 1, so that when node 1 goes down the application automatically starts on node 2?
Should the trigger and job names be the same or different when running the module on both nodes?
quartz.properties:
org.quartz.threadPool.class = org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount = 10
org.quartz.threadPool.threadPriority = 5
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread = true
# Using RAMJobStore
# if using RAMJobStore, please be sure that you comment out
# org.quartz.jobStore.tablePrefix, org.quartz.jobStore.driverDelegateClass, org.quartz.jobStore.dataSource
#org.quartz.jobStore.class = org.quartz.simpl.RAMJobStore
# Using JobStoreTX
# Be sure to run the appropriate script (under docs/dbTables) first to create the database/tables
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
# Using DriverDelegate
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
# New conf for Clustering
org.quartz.jobStore.isClustered = true
org.quartz.jobStore.clusterCheckinInterval = 20000
org.quartz.scheduler.instanceId = AUTO
org.quartz.scheduler.instanceName = MyClusteredScheduler
# Configuring JDBCJobStore with the Table Prefix
org.quartz.jobStore.tablePrefix = QRTZ_
# Using datasource
org.quartz.jobStore.dataSource = qzDS
# Define the datasource to use
org.quartz.dataSource.qzDS.driver = oracle.jdbc.driver.OracleDriver
org.quartz.dataSource.qzDS.URL = jdbc:oracle:thin:@10.172.16.147:1521:emadb0
org.quartz.dataSource.qzDS.user = BLuser
org.quartz.dataSource.qzDS.password = BLuser
org.quartz.dataSource.qzDS.maxConnections = 30
According to:
http://quartz-scheduler.org/documentation/quartz-2.x/configuration/ConfigJDBCJobStoreClustering
in particular:
Clustering currently only works with the JDBC-Jobstore (JobStoreTX or JobStoreCMT), and essentially works by having each node of the cluster share the same database.
Load-balancing occurs automatically, with each node of the cluster firing jobs as quickly as it can. When a trigger's firing time occurs, the first node to acquire it (by placing a lock on it) is the node that will fire it.
You should start the scheduler on all of your nodes; the fastest node will fire each trigger, and the others will see that it has already been handled.
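As a concrete sketch of that, each node can run the same bootstrap code against the shared clustered job store, registering the job only if it does not already exist, so using the same job and trigger names on every node is fine (Quartz 2.x API assumed; class and key names are illustrative):

import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.JobKey;
import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.SimpleScheduleBuilder;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class ClusteredNode {
    public static void main(String[] args) throws SchedulerException {
        // Both nodes load the same clustered quartz.properties (isClustered = true)
        Scheduler scheduler = new StdSchedulerFactory().getScheduler();

        // Register the job once; a node that starts later sees it already exists in the DB
        JobKey jobKey = new JobKey("sampleJob", "cluster");
        if (!scheduler.checkExists(jobKey)) {
            JobDetail job = JobBuilder.newJob(SampleJob.class).withIdentity(jobKey).build();
            Trigger trigger = TriggerBuilder.newTrigger()
                    .withIdentity("sampleTrigger", "cluster")
                    .withSchedule(SimpleScheduleBuilder.repeatMinutelyForever())
                    .build();
            scheduler.scheduleJob(job, trigger);
        }

        // Start the scheduler on every node; only one node acquires each firing
        scheduler.start();
    }

    public static class SampleJob implements Job {
        public void execute(JobExecutionContext context) {
            System.out.println("Fired on instance " + context.getFireInstanceId());
        }
    }
}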