QuartzScheduler.NET - Adding new job definitions while scheduler is running - quartz-scheduler

Can anyone please say whether Quartz will allow you to add additional job types once the scheduler is up and running?
I expect that our implementation of Quartz will be an ASP.NET service using the RAM store. It is likely that new jobs will be written over time, and we will want to add these jobs to the scheduler without having to shut the service down.

Yes, so long as the classes related to the new job types make it into the classpath, and your class loader will discover them. Quartz does nothing to "preload" or "prediscover" job classes - it just loads them as they're referenced.
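To illustrate, here is a rough sketch using the Java Quartz 2.x fluent API (the job class and names are made up for the example; Quartz.NET exposes an equivalent set of job and trigger builders). The point is that the job class is only loaded when it is scheduled, not when the scheduler starts:

```java
import org.quartz.*;
import org.quartz.impl.StdSchedulerFactory;

public class DynamicJobRegistration {

    // Assumed example job class; any class on the classpath that implements Job will do.
    public static class NewlyDeployedJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("New job type executing");
        }
    }

    public static void main(String[] args) throws SchedulerException {
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();

        // The scheduler has been running for a while; the new job type is registered now.
        JobDetail job = JobBuilder.newJob(NewlyDeployedJob.class)
                .withIdentity("newlyDeployedJob", "dynamic")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("newlyDeployedJobTrigger", "dynamic")
                .startNow()
                .withSchedule(SimpleScheduleBuilder.simpleSchedule()
                        .withIntervalInMinutes(5)
                        .repeatForever())
                .build();

        // The job class is resolved at this point, not when the scheduler was started.
        scheduler.scheduleJob(job, trigger);
    }
}
```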

Related

Terminate all ECS tasks before new deployment

If an ECS task runs in multiple instances, the rolling deployment methods usually terminate tasks one by one and replace them with a newer version (a newer task definition), in order to maximize uptime.
But we have some containers that may make schema upgrades on first-run (as an example), and here we never want old versions of a container to run at the same time as newer versions.
Another example is coupled data formats, that a new front-end talks to a new API. We'd rather have a moment of downtime than have old front-ends talk to new APIs, or new front-ends talk to old APIs.
ECS doesn't seem to provide a deployment strategy that terminates all instances of a task before starting to roll out new ones.
Before I roll my own bash script that reduces the task count to 0 prior to an update and restores it to its original value afterwards, does anyone know if AWS provides a way to cleanly drain a service of tasks before rolling out a new task definition?
In a similar scenario I would do it as follows.
I would add a CodeDeploy step to my pipeline (if you are using infrastructure as code) and set the traffic shifting to "All-at-once"; this doc can point you in the right direction.
If I could not work with the infrastructure as code, I would create a Lambda triggered by the pipeline one step before the deploy. This Lambda would interact with ECS and reset the tasks so they pick up the new version of the application. In this situation some downtime will be felt, at least 2~5 minutes, depending on how your application is optimized. You can read a little more about the lib that will allow you to do this here.
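For reference, a hedged sketch of that "scale to zero, wait, then deploy" approach using the AWS SDK for Java v2, which is what a Lambda or pipeline step could run. The cluster, service, task definition names, and desired count below are placeholders, not values from the question:

```java
import software.amazon.awssdk.services.ecs.EcsClient;
import software.amazon.awssdk.services.ecs.model.DescribeServicesRequest;
import software.amazon.awssdk.services.ecs.model.UpdateServiceRequest;

public class AllAtOnceDeploy {
    public static void main(String[] args) {
        try (EcsClient ecs = EcsClient.create()) {
            // 1. Scale the service down to zero so no old-version tasks remain running.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster("my-cluster")
                    .service("my-service")
                    .desiredCount(0)
                    .build());

            // 2. Wait until the old tasks have drained and stopped.
            ecs.waiter().waitUntilServicesStable(DescribeServicesRequest.builder()
                    .cluster("my-cluster")
                    .services("my-service")
                    .build());

            // 3. Point the service at the new task definition and scale back up.
            ecs.updateService(UpdateServiceRequest.builder()
                    .cluster("my-cluster")
                    .service("my-service")
                    .taskDefinition("my-task:42")
                    .desiredCount(2)
                    .build());
        }
    }
}
```

The downtime window mentioned above corresponds to the gap between steps 1 and 3.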

How to create a cron job in a Kubernetes deployed app without duplicates?

I am trying to find a solution to run a cron job in a Kubernetes-deployed app without unwanted duplicates. Let me describe my scenario, to give you a little bit of context.
I want to schedule jobs that execute once at a specified date. More precisely: creating such a job can happen anytime and its execution date will be known only at that time. The job that needs to be done is always the same, but it needs parametrization.
My application is running inside a Kubernetes cluster, and I cannot assume that there will always be only one instance of it running at any moment in time. Therefore, creating the said job will lead to multiple executions of it, because every one of my application instances will spawn it. However, I want to guarantee that a job runs exactly once in the whole cluster.
I tried to find solutions for this problem and came up with the following ideas.
Create a local file and check if it is already there when starting a new job. If it is there, cancel the job.
Not possible in my case, since the duplicate jobs might run on other machines!
Utilize the Kubernetes CronJob API.
I cannot use this feature because I have to create cron jobs dynamically from inside my application. I cannot change the cluster configuration from a pod running inside that cluster. Maybe there is a way, but it seems to me there has to be a better solution than giving the application access to the cluster it is running in.
Would you be so kind as to point me in a direction where I might find a solution?
I am using a managed Kubernetes Cluster on Digital Ocean (Client Version: v1.22.4, Server Version: v1.21.5).
After thinking about a solution for a rather long time I found it.
The solution is to take the scheduling of the jobs to a central place. It is as easy as building a job web service that exposes endpoints to create jobs. An instance of a backend creating a job at this service will also provide a callback endpoint in the request which the job web service will call at the execution date and time.
The endpoint in my case links back to the calling backend server which carries the logic to be executed. It would be rather tedious to make the job service execute the logic directly since there are a lot of dependencies involved in the job. I keep a separate database in my job service just to store information about whom to call and how. Addressing the startup after crash problem becomes trivial since there is only one instance of the job web service and it can just re-create the jobs normally after retrieving them from the database in case the service crashed.
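To make the idea concrete, here is a minimal sketch of the scheduling core of such a job service. All class and method names are my own invention; persistence of the registered jobs and the reconciliation discussed below are deliberately left out:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;
import java.time.Instant;
import java.util.concurrent.Executors;
import java.util.concurrent.ScheduledExecutorService;
import java.util.concurrent.TimeUnit;

public class CallbackJobService {
    private final ScheduledExecutorService scheduler = Executors.newScheduledThreadPool(1);
    private final HttpClient http = HttpClient.newHttpClient();

    // Called when a backend registers a job: remember when to fire and whom to call back.
    public void scheduleJob(Instant executeAt, String callbackUrl, String payload) {
        long delayMs = Math.max(0, Duration.between(Instant.now(), executeAt).toMillis());
        scheduler.schedule(() -> fireCallback(callbackUrl, payload), delayMs, TimeUnit.MILLISECONDS);
    }

    private void fireCallback(String callbackUrl, String payload) {
        HttpRequest request = HttpRequest.newBuilder(URI.create(callbackUrl))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(payload))
                .build();
        try {
            HttpResponse<String> response = http.send(request, HttpResponse.BodyHandlers.ofString());
            // A non-2xx response should be recorded so a reconciliation step can retry it later.
            System.out.println("Callback returned " + response.statusCode());
        } catch (Exception e) {
            System.err.println("Callback failed, needs reconciliation: " + e.getMessage());
        }
    }
}
```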
Do not forget to take care of failing jobs. If your backends are not reachable to take the callback for some reason, there must be some reconciliation mechanism in place that prevents this failure from going unnoticed.
A little note I want to add: in case you also want to scale the job service horizontally, you run into very similar problems again. However, if you think about what the actual work to be done in that service is, you realize that it is very lightweight. I am not sure horizontal scaling will ever be a requirement, since the service only makes requests at specified times and does not execute heavy work.

Using Akka MicroKernel for standalone scheduler module

We are planning to create a scheduler module. There is an external service which provides all the necessary tasks for the scheduler, and the scheduler must invoke these tasks at intervals. The scheduler has various use cases, and the scheduling would change drastically and dynamically based on data from the data store. We do not want to include this functionality as part of the system we are currently building (in the Play framework), but rather have a standalone process to execute the scheduling.
Considering the above-mentioned use case, would Akka MicroKernel serve the purpose, or should we think of deploying the scheduler in some other container or application server (if so, which one would you prefer)?
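For what it's worth, the scheduling loop itself looks the same whether it runs under the MicroKernel or as a plain standalone main class. A hedged sketch using the Akka classic scheduler (Akka 2.6-style Java API; the polling logic and names are placeholders, not part of the question):

```java
import java.time.Duration;
import akka.actor.ActorSystem;

public class StandaloneScheduler {
    public static void main(String[] args) {
        ActorSystem system = ActorSystem.create("task-scheduler");

        // Periodically ask the external service for task definitions and run the ones that are due.
        system.scheduler().scheduleWithFixedDelay(
                Duration.ZERO,
                Duration.ofMinutes(1),
                () -> {
                    // placeholder: fetch tasks from the external service and invoke the due ones
                },
                system.dispatcher());
    }
}
```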

How to load/bootstrap ongoing jobs with a Quartz JDBCJobStore

I am working to migrate from Quartz 1.6 to 2.1 and use a JDBCJobStore. Previously, the jobs were loaded via an XML file when the webapp started. The scheduler is now running using the JDBCJobStore, but I don't understand how to add the jobs that need to run on an ongoing basis (not one-off jobs) to the database.
My first thought is to create a servlet which runs on startup and adds the jobs to the database. But my concern is that this will be executed every time I restart the app, and the jobs will get duplicated.
Thanks,
steve
The jobs won't disappear from the database when you do a restart. So within your servlet, when it starts up, check whether the jobs already exist before adding any. When you create your jobs you can give them identities. Using those identities and some Quartz methods, you can check whether they already exist.
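In code, the check described above looks roughly like this (Quartz 2.x Java API; the job name, group, and cron expression are just examples):

```java
import org.quartz.*;

public class StartupJobLoader {

    public void registerIfAbsent(Scheduler scheduler) throws SchedulerException {
        JobKey jobKey = JobKey.jobKey("nightlyReport", "recurring");

        // Skip registration if the job already survived the restart in the JDBC store.
        if (scheduler.checkExists(jobKey)) {
            return;
        }

        JobDetail job = JobBuilder.newJob(ReportJob.class)
                .withIdentity(jobKey)
                .build();
        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("nightlyReportTrigger", "recurring")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 2 * * ?"))
                .build();

        scheduler.scheduleJob(job, trigger);
    }

    // Example recurring job; substitute your own implementation.
    public static class ReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            // recurring work goes here
        }
    }
}
```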
It sounds like the memory-based scheduler is a better fit for these fixed jobs. You can create more than one scheduler (one in-memory, one JDBC-backed) if that makes sense for your application.

Spring schedulers in a load balanced environment

I have multiple quartz cron jobs in a load balanced environment. Currently these jobs are running on each node, which is not desirable. I want a node to run only a particular scheduler and if the node crashes, another node should run the scheduler intended for the node that crashed.
How can this be done with Spring 2.5.6 and a Tomcat load balancer?
I think there are a few aspects to this question.
Firstly, Quartz has API methods for pausing and resuming the Scheduler, or even individual triggers and jobs
e.g.
http://www.jarvana.com/jarvana/view/opensymphony/quartz/1.6.1/quartz-1.6.1-javadoc.jar!/org/quartz/Scheduler.html#standby()
I would create a Spring bean with a reference to the Quartz scheduler or trigger, and a simple isMasterNode boolean member for storing state. I'd then expose two [restricted-access] web service calls, makeMaster and makeSlave, which call Scheduler.start() (to leave standby) or Scheduler.standby(), respectively.
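A minimal sketch of such a bean, assuming Quartz's standby()/start() pair for the pause/resume part; the class and method names here are my own, not a standard API:

```java
import org.quartz.Scheduler;
import org.quartz.SchedulerException;

public class SchedulerMasterToggle {
    private final Scheduler scheduler;
    private volatile boolean masterNode;

    public SchedulerMasterToggle(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    // Exposed via a restricted web service call, e.g. makeMaster
    public void makeMaster() throws SchedulerException {
        scheduler.start();   // leaves standby and resumes firing triggers
        masterNode = true;
    }

    // Exposed via a restricted web service call, e.g. makeSlave
    public void makeSlave() throws SchedulerException {
        scheduler.standby(); // stops firing triggers but keeps the scheduler alive
        masterNode = false;
    }

    public boolean isMasterNode() {
        return masterNode;
    }
}
```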
Finally, the big question is how, and with what, you determine that another node has 'crashed'.
If you're using a hardware load balancer to manage this, you could configure it to call the 'makeMaster' WS on the new 'primary' node, which in turn calls Scheduler.start() or similar.
hth