How to keep job schedule data in database for Quartz Scheduler - quartz-scheduler

Currently, we store our job scheduling data in a database table, which allows business users to alter the schedule through a customised screen. We are planning to migrate our scheduling framework to Quartz. I have gone through the Quartz documentation, but it does not cover this requirement. Basically, if the schedule is changed, subsequent runs should follow the new schedule, without restarting the application.

You can reschedule an existing Quartz trigger using the Scheduler.rescheduleJob(TriggerKey, Trigger) method:
http://www.quartz-scheduler.org/api/2.3.0/org/quartz/Scheduler.html#rescheduleJob-org.quartz.TriggerKey-org.quartz.Trigger-
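A minimal sketch of that approach, assuming a trigger named "myTrigger" in group "myGroup" and that the new cron expression has already been read from your schedule table (those names are placeholders, not anything Quartz requires):

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerKey;

public class RescheduleExample {
    // Applies a new cron expression to an existing trigger without restarting
    // the scheduler. Trigger/group names and the cron value are assumptions.
    public static void applyNewSchedule(Scheduler scheduler, String newCron) throws Exception {
        TriggerKey key = TriggerKey.triggerKey("myTrigger", "myGroup");
        Trigger oldTrigger = scheduler.getTrigger(key);

        // Build a replacement trigger with the same identity but the new schedule.
        Trigger newTrigger = oldTrigger.getTriggerBuilder()
                .withSchedule(CronScheduleBuilder.cronSchedule(newCron))
                .build();

        // Subsequent firings follow the new schedule; no restart needed.
        scheduler.rescheduleJob(key, newTrigger);
    }
}
```

You could call this from the customised screen's save action, or poll the table periodically and invoke it whenever the stored cron expression changes.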

Related

Is there a way to automate the monitoring and termination of AWS ECS tasks that are silently progressing?

I've been using AWS Fargate for quite a while and have been a big fan of the service.
This week, I created a monitoring dashboard that details the latest runtimes of my containers, and the timestamp watermark of each of my tables (the MAX date updated value). I have SNS topics set up to email me whenever a container exits with code 1.
However, I encountered a tricky issue today that slipped past these safeguards because of what I suspect was a deadlock situation related to a Postgres RDS instance.
I have several tasks running at different points in the day on a scheduler (usually every X or Y hours). Most of these tasks will perform some business logic calculations and insert / update an RDS instance.
One of my tasks (when checking the CloudWatch logs later) was stuck making an update to a table and just hung there waiting. My guess is that a user (perhaps me) was manually running a small update statement against the same table, triggering some sort of lock.
Because I have my tasks set on a recurring basis, the same task had another container provisioned a few hours later, attempted to update the same table, and also hung.
I only noticed this issue because my monitoring dashboard showed that the date updated watermark was still a few days in the past, even though I hadn't gotten any alerts or notifications for errors during my container run time. By this time, I had 3 containers all running, each stuck on the same update to the same table.
After I logged into the ECS console, I saw that my cluster had 3 task instances running - all the same task, all stuck making the same insert.
So my questions are:
1. Is there a way to specify a maximum runtime for these tasks (i.e. if the task doesn't finish within 2 hours, terminate with an exit code of 1)?
2. What is the best way to prevent this type of "silent failure" in the future? I've added application logic that queries my RDS instance for blocked process IDs and skips the update if it finds any, but are there more graceful ways of detecting and handling this issue?
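As far as I know, ECS has no built-in per-task runtime maximum, but you can enforce one inside the container by wrapping the work in a watchdog that exits with code 1 on overrun, which would then fire your existing SNS alert. A stdlib sketch (the two-hour limit and the task body are assumptions):

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;
import java.util.concurrent.TimeUnit;
import java.util.concurrent.TimeoutException;

public class TaskWatchdog {
    // Runs `task` with a hard deadline; returns 0 on success, 1 on timeout or failure.
    public static int runWithTimeout(Runnable task, long timeout, TimeUnit unit) {
        ExecutorService executor = Executors.newSingleThreadExecutor();
        Future<?> future = executor.submit(task);
        try {
            future.get(timeout, unit);
            return 0;
        } catch (TimeoutException e) {
            future.cancel(true);   // interrupt the hung work
            return 1;              // surfaces as container exit code 1
        } catch (Exception e) {
            return 1;
        } finally {
            executor.shutdownNow();
        }
    }

    public static void main(String[] args) {
        // Hypothetical task body; replace with the real business logic + RDS update.
        int exitCode = runWithTimeout(() -> { /* business logic */ },
                2, TimeUnit.HOURS);
        System.exit(exitCode);
    }
}
```

For the database side specifically, setting Postgres's `statement_timeout` on the session is a simpler guard: any single statement that blocks longer than the limit is cancelled by the server instead of hanging forever.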

Apache Airflow scheduling with a time bound and triggering

I'm using Airflow with the Celery executor. I'm planning to add user interaction for a task, so that a BranchOperator in a DAG can decide which branch to take. It currently works by running a continuous loop that checks a value in the database, but I don't feel this is a good approach. Is there an alternative?
I also want to wait for this interaction only up to a particular time, and stop otherwise. Is that possible in Airflow? And if so, can this time bound be changed dynamically?
Thank you in advance.
You shouldn't be using a BranchOperator for this. If you want to proceed in your DAG based on some value in the db, you should use a Sensor. There are off-the-shelf sensors in Airflow, and you can also use those as a basis for creating your own. Sensors basically poll for a certain criterion and time out after a configurable period, which from your question seems to be exactly what you need.

Quartz scheduler - external Trigger configuration through AdoJobStore and Clustering

While exploring AdoJobStore (database job stores in general), I came across topics like clustering, load balancing and sharing jobs' work data state across multiple applications.
But I don't think I found a JobStore topic that covers my scenario.
I need to run Quartz jobs in a Windows service, and I need to be able to change the configuration of triggers in another application (an admin panel in a web application) and have Quartz in my Windows service apply those changes automatically (Quartz tracks the changes and applies them).
Is this possible using the AdoJobStore/clustering mechanism? I mean in terms of the JobStore's features, i.e. through the Quartz scheduler API, not by using SQL to change data in the Quartz tables directly or any other workaround (in line with Quartz's Best Practices doc).
The Quartz.NET scheduler can be accessed remotely, independently of job stores. Since you already have a web app you can add a reference to the remote scheduler and use the API to administer jobs, triggers etc.

Filling Job detail values in the JDBC datastore

I'm learning Quartz and I've gone through the tutorials using the RAMJobStore. Now I want to move to the JDBC job store. I've created the database and configured it, but the scheduler does not start. What values do I have to populate the database with?
You don't need to populate the database yourself; in fact it is highly recommended NOT to write directly to the Quartz tables: http://quartz-scheduler.org/documentation/best-practices.
Just configure the scheduler programmatically by adding triggers and job details, as described in Example 1: http://quartz-scheduler.org/documentation/quartz-2.2.x/examples/Example1.
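For reference, a typical quartz.properties for a JDBC job store looks roughly like this. The data source name, driver, URL and credentials below are placeholders for a Postgres setup; the delegate class depends on your database:

```properties
org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreTX
org.quartz.jobStore.driverDelegateClass = org.quartz.impl.jdbcjobstore.StdJDBCDelegate
org.quartz.jobStore.tablePrefix = QRTZ_
org.quartz.jobStore.dataSource = myDS

org.quartz.dataSource.myDS.driver = org.postgresql.Driver
org.quartz.dataSource.myDS.URL = jdbc:postgresql://localhost:5432/quartz
org.quartz.dataSource.myDS.user = quartz
org.quartz.dataSource.myDS.password = secret
```

With this in place, you create jobs and triggers through the API as usual and Quartz writes the table rows itself.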

Configuring Quartz.net Tasks

I want to be able to set up one or more jobs/triggers when my app runs, with the list of jobs and triggers coming from a db table. I DO NOT care about persisting the jobs to the db for restart or tracking purposes; basically, I just want to use the db table as an init device. Obviously I can do this by writing the code myself, but I am wondering if there is some way to use the SQLJobStore to get this functionality without the overhead of keeping the db updated throughout the life of the app using the scheduler.
Thanks for your help!
Eric
The job store's main purpose is to store the scheduler's state, so there is no built-in way to do what you want. You can always write directly to the tables, and this will give you the results you want, but it isn't really the best way to do it.
The recommended way to do this would be to write some code that reads the data from your table and then connects to the scheduler using remoting to schedule the jobs.
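A sketch of that read-then-schedule approach, shown here in Java Quartz syntax (the Quartz.NET API is closely analogous). The APP_SCHEDULE table, its column names, and the MyJob class are all assumptions for illustration:

```java
import java.sql.Connection;
import java.sql.ResultSet;
import java.sql.Statement;

import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;

public class DbDrivenInit {
    // Placeholder job class; replace with your real work.
    public static class MyJob implements Job {
        public void execute(JobExecutionContext ctx) { /* task body */ }
    }

    // Reads (job_name, cron_expression) rows from a hypothetical APP_SCHEDULE
    // table and registers each one with an already-started scheduler.
    public static void scheduleFromTable(Scheduler scheduler, Connection conn) throws Exception {
        try (Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT job_name, cron_expression FROM APP_SCHEDULE")) {
            while (rs.next()) {
                String name = rs.getString("job_name");
                String cron = rs.getString("cron_expression");

                JobDetail job = JobBuilder.newJob(MyJob.class)
                        .withIdentity(name)
                        .build();
                Trigger trigger = TriggerBuilder.newTrigger()
                        .withIdentity(name + "-trigger")
                        .withSchedule(CronScheduleBuilder.cronSchedule(cron))
                        .build();
                scheduler.scheduleJob(job, trigger);
            }
        }
    }
}
```

Run this once at startup against a RAMJobStore-backed scheduler and you get the "db table as an init device" behaviour without any of the persistence overhead of SQLJobStore.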