I have multiple quartz cron jobs in a load balanced environment. Currently these jobs are running on each node, which is not desirable. I want a node to run only a particular scheduler and if the node crashes, another node should run the scheduler intended for the node that crashed.
How can this be done with Spring 2.5.6 and a Tomcat load balancer?
I think there are a few aspects to this question.
Firstly, Quartz has API methods for pausing and resuming the Scheduler, or even individual triggers and jobs, e.g.:
http://www.jarvana.com/jarvana/view/opensymphony/quartz/1.6.1/quartz-1.6.1-javadoc.jar!/org/quartz/Scheduler.html#standby()
I would create a spring bean with a reference to the Quartz scheduler or trigger, and a simple isMasterNode boolean member for storing state. I'd then expose 2 [restricted-access] web service calls: makeMaster and makeSlave, which will call Scheduler.resume() or standby/pause, respectively.
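A minimal sketch of what such a bean could look like (the class name SchedulerRoleService and the web-service wiring are my own illustration, not a standard API; note that in Quartz 1.x it is Scheduler.start() that brings a scheduler back out of standby):

import org.quartz.Scheduler;
import org.quartz.SchedulerException;

// Illustrative bean wrapping the Quartz Scheduler; expose makeMaster/makeSlave
// through whatever (restricted-access) web service layer you use.
public class SchedulerRoleService {

    private final Scheduler scheduler;   // injected by Spring
    private volatile boolean masterNode; // simple state flag

    public SchedulerRoleService(Scheduler scheduler) {
        this.scheduler = scheduler;
    }

    // Called on the node that should run the jobs.
    public void makeMaster() throws SchedulerException {
        scheduler.start();   // brings the scheduler out of standby
        masterNode = true;
    }

    // Called on nodes that should stay passive.
    public void makeSlave() throws SchedulerException {
        scheduler.standby(); // triggers stop firing until start() is called again
        masterNode = false;
    }

    public boolean isMasterNode() {
        return masterNode;
    }
}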
Finally, the big question is how, and with what, you determine that another node has 'crashed'.
If you're using a hardware loadbalancer to manage this, you could configure it to call the 'makeMaster' WS on the new 'primary' node, which in turn calls Scheduler.resume() or similar.
hth
In the Quarkus framework, how can I schedule a job to execute in only one pod rather than running in all pods? I tried concurrentExecution = SKIP, but that didn't help.
I want to run the job in only one pod of a multi-instance application.
From the Quarkus guide: https://quarkus.io/guides/scheduler-reference#concurrent_execution
"Note that only executions within the same application instance are considered. This feature is not intended to work across the cluster."
So I suppose you have to move to Quartz to get cluster support out of the box, or create your own synchronization method (e.g. using a database, a file, etc.).
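For the database route, a minimal sketch of what such a guard could look like (the table name, class name and H2-style SQL are assumptions for illustration; in Quarkus you would call this at the top of the @Scheduled method and return early if the lock was not acquired):

import java.sql.Connection;
import java.sql.PreparedStatement;
import java.sql.SQLException;
import javax.sql.DataSource;

// Illustrative "run once per cluster" guard: the instance that manages to update
// the lock row wins and does the work; all other instances skip this run.
// Assumed table: scheduler_lock(name VARCHAR PRIMARY KEY, locked_until TIMESTAMP)
public class DatabaseRunOnceGuard {

    private final DataSource dataSource; // the database shared by all instances

    public DatabaseRunOnceGuard(DataSource dataSource) {
        this.dataSource = dataSource;
    }

    // Returns true only for the single instance that acquired the lock.
    public boolean tryAcquire(String jobName, int lockSeconds) {
        String sql = "UPDATE scheduler_lock "
                   + "SET locked_until = DATEADD('SECOND', ?, CURRENT_TIMESTAMP) "
                   + "WHERE name = ? AND locked_until < CURRENT_TIMESTAMP";
        try (Connection con = dataSource.getConnection();
             PreparedStatement ps = con.prepareStatement(sql)) {
            ps.setInt(1, lockSeconds);
            ps.setString(2, jobName);
            return ps.executeUpdate() == 1; // exactly one instance wins the update
        } catch (SQLException e) {
            throw new IllegalStateException("Could not check scheduler lock", e);
        }
    }
}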
I am trying to find a solution to run a cron job in a Kubernetes-deployed app without unwanted duplicates. Let me describe my scenario, to give you a little bit of context.
I want to schedule jobs that execute once at a specified date. More precisely: creating such a job can happen anytime and its execution date will be known only at that time. The job that needs to be done is always the same, but it needs parametrization.
My application is running inside a Kubernetes cluster, and I cannot assume that there will always be only one instance of it running at any moment in time. Therefore, creating the said job will lead to multiple executions of it, because all of my application instances will spawn it. However, I want to guarantee that a job runs exactly once in the whole cluster.
I tried to find solutions for this problem and came up with the following ideas.
Create a local file and check if it is already there when starting a new job. If it is there, cancel the job.
Not possible in my case, since the duplicate jobs might run on other machines!
Utilize the Kubernetes CronJob API.
I cannot use this feature because I have to create cron jobs dynamically from inside my application. I cannot change the cluster configuration from a pod running inside that cluster. Maybe there is a way, but it seems to me there has to be a better solution than giving the application access to the cluster it is running in.
Would you please be so kind as to give me some directions in which I might find a solution?
I am using a managed Kubernetes Cluster on Digital Ocean (Client Version: v1.22.4, Server Version: v1.21.5).
After thinking about a solution for a rather long time, I found one.
The solution is to take the scheduling of the jobs to a central place. It is as easy as building a job web service that exposes endpoints to create jobs. An instance of a backend creating a job at this service will also provide a callback endpoint in the request which the job web service will call at the execution date and time.
The endpoint in my case links back to the calling backend server, which carries the logic to be executed. It would be rather tedious to make the job service execute the logic directly, since there are a lot of dependencies involved in the job. I keep a separate database in my job service just to store information about whom to call and how. Addressing the startup-after-crash problem becomes trivial: since there is only one instance of the job web service, it can simply re-create the jobs from its database after a restart.
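A minimal sketch of what such a job service endpoint could look like (the class names, the /jobs path and the request fields are illustrative; persistence of the job is omitted for brevity):

import java.time.Instant;
import org.springframework.scheduling.TaskScheduler;
import org.springframework.web.bind.annotation.PostMapping;
import org.springframework.web.bind.annotation.RequestBody;
import org.springframework.web.bind.annotation.RestController;
import org.springframework.web.client.RestTemplate;

// Illustrative central job service: backends register a callback URL and an
// execution time; this single service fires the callback when the time comes.
@RestController
public class JobController {

    private final TaskScheduler taskScheduler;          // e.g. a ThreadPoolTaskScheduler bean
    private final RestTemplate restTemplate = new RestTemplate();

    public JobController(TaskScheduler taskScheduler) {
        this.taskScheduler = taskScheduler;
    }

    public record JobRequest(String callbackUrl, Instant executeAt, String payload) {}

    @PostMapping("/jobs")
    public void createJob(@RequestBody JobRequest request) {
        // In the real service the job would also be persisted here, so it can
        // be re-registered after a restart of the job service.
        taskScheduler.schedule(
            () -> restTemplate.postForEntity(request.callbackUrl(), request.payload(), Void.class),
            request.executeAt());
    }
}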
Do not forget to take care of failing jobs. If your backends are unreachable for some reason and cannot take the callback, there must be some reconciliation mechanism in place that prevents this failure from going unnoticed.
A little note I want to add: in case you also want to scale the job service horizontally, you run into very similar problems again. However, if you think about what the actual work to be done in that service is, you realize that it is very lightweight. I am not sure horizontal scaling will ever be a requirement, since the service only issues requests at specified times and does not execute heavy work.
Current Setup
We have a Kubernetes cluster with 3 pods running a Spring Boot application. We run a job every 12 hours using the Spring Boot scheduler to fetch some data and cache it. (There is also a queue setup, but I will not go into those details, as my query concerns the setup before we get to the queue.)
Problem
Because we have 3 pods and the scheduler is at the application level, we make 3 calls for the data set; each pod gets a response, the pod that processes and caches it first becomes the master, and the other 2 pods replicate the data from that instance.
I see this as a problem because we will increase the number of jobs to fetch more data sets, which will multiply the number of calls made.
I am not on the DevOps side and have limited Azure knowledge, hence I need some help from the community.
Need
What options are available to improve this? I want to separate out the cron schedule so it runs only once, not once per pod.
1 - Can I keep the cron job at the cluster level? I have read about it here: https://kubernetes.io/docs/concepts/workloads/controllers/cron-jobs/
Will this solve the problem?
2 - I googled and found that another option is to run a CronJob which will schedule a Job to completion. Will that help? I am not sure what it really means.
Thanks in advance for taking the time to read this.
Based on my understanding of your problem, it looks like you have (at least) the following two choices:
If you continue to have the scheduling logic within your Spring Boot main app, then you may want to explore something like ShedLock, which helps make sure your scheduled job executes only once, via an external lock provider like MySQL, Redis, etc., when the app code is running on multiple nodes (or Kubernetes pods in your case).
If you can separate out the scheduler-specific app code into its own executable process (i.e. code that can run in a separate set of pods from your main application pods), then you can leverage a Kubernetes CronJob to schedule a Kubernetes Job that internally creates pods and runs your application logic. The benefit of this approach is that you can use native Kubernetes CronJob parameters like concurrency (and a few others) to ensure the job runs only once per scheduled time, through a single pod.
With approach (1), you get to couple your scheduler code with your main app and run them together in the same pods.
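A minimal sketch of approach (1) with ShedLock and a JDBC lock provider (the class names, cron expression and lock durations are my own assumptions; ShedLock also needs its lock table created in the shared database):

import javax.sql.DataSource;
import net.javacrumbs.shedlock.core.LockProvider;
import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.jdbc.core.JdbcTemplate;
import org.springframework.scheduling.annotation.EnableScheduling;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

// Approach (1): ShedLock guards the existing @Scheduled method so that only one
// pod executes it per run; the lock lives in a table in the shared database.
@Configuration
@EnableScheduling
@EnableSchedulerLock(defaultLockAtMostFor = "PT30M")
class SchedulingConfig {

    @Bean
    LockProvider lockProvider(DataSource dataSource) {
        return new JdbcTemplateLockProvider(new JdbcTemplate(dataSource));
    }
}

@Component
class DataSetRefreshJob {

    @Scheduled(cron = "0 0 */12 * * *") // every 12 hours
    @SchedulerLock(name = "refreshDataSets", lockAtLeastFor = "PT5M")
    public void refreshDataSets() {
        // fetch the data set and populate the cache; runs on exactly one pod
    }
}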
With approach (2), you'd have to separate the code that runs in the scheduler from the overall application code, containerize it into its own image, and then configure the Kubernetes CronJob schedule with this new image, referring to the official guide example and Kubernetes CronJob best practices (authored by me, but you can find other examples).
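For approach (2), the containerized code could be as small as a standalone entrypoint that does the work once and exits so the Job pod completes; setting concurrencyPolicy: Forbid on the CronJob additionally prevents overlapping runs. A purely illustrative sketch:

// Illustrative standalone entrypoint: the Kubernetes CronJob starts a pod from
// this image, the process refreshes the data once, and then exits.
public class DataSetRefreshMain {

    public static void main(String[] args) {
        // Placeholder for the real "fetch data set and cache it" logic that
        // currently runs inside the Spring Boot scheduler.
        System.out.println("Refreshing data set...");

        // Exiting normally (status 0) marks the Kubernetes Job as succeeded;
        // a non-zero exit lets the Job retry according to its backoffLimit.
    }
}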
Both approaches have their own merits and demerits, so you can evaluate them and pick what suits your needs best.
I am building a data pipeline for batch processing. And I find that Spring Cloud Data Flow is a quite attractive framework to use. Without much knowledge in SCDF and Kubernetes, I am not sure whether it is possible to conditionally launch a Spring Cloud Task on a specific machine.
Suppose I have two physical servers for running the batch process (Server A and Server B). By default, I would like my Spring Cloud Task to be launched on Server A. If Server A is shut down, the task should be deployed on Server B. Can Kubernetes / SCDF handle this kind of mechanism? I am wondering whether the nodeSelector is the thing I should look into.
Yes, you can pass deployment.nodeSelector as a deployment property when launching the task.
The deployment.nodeSelector is a Kubernetes deployment property and hence, you need to pass something like this:
task launch mytask --properties "deployer.<taskAppName>.kubernetes.deployment.nodeSelector=foo1:bar1,foo2:bar2"
You can check the list of supported Kubernetes deployer properties here
Is there any redis jobStore able to support a quartz cluster?
Have anybody been able to build that?
On the other hand, what exactly is a Quartz cluster? I mean, is it possible to have two services running with the same quartz.properties file pointing to the same Redis?
EDIT
I've tried with this Redis job store, but it seems it doesn't support Quartz clustering:
JobStore class 'net.joelinn.quartz.jobstore.RedisJobStore' props could not be configured. [See nested exception: java.lang.NoSuchMethodException: No setter for property 'isClustered']
quartz.properties:
org.quartz.scheduler.instanceName=office-scheduler-service
org.quartz.scheduler.instanceId=AUTO
org.quartz.jobStore.isClustered=true
org.quartz.jobStore.clusterCheckinInterval=20000
# thread-pool
org.quartz.threadPool.class=org.quartz.simpl.SimpleThreadPool
org.quartz.threadPool.threadCount=2
org.quartz.threadPool.threadsInheritContextClassLoaderOfInitializingThread=true
org.quartz.jobStore.class = net.joelinn.quartz.jobstore.RedisJobStore
org.quartz.jobStore.host = redisbo
org.quartz.jobStore.misfireThreshold = 60000
You don't need to configure clustering; please check the source code, it is already clustered.
The Quartz JDBC documentation explains how Quartz handles executing jobs in a cluster of application nodes. RedisJobStore extends that to utilize Redis storage, and it works in cluster mode (a Quartz cluster, not a Redis cluster) by default, without requiring you to enable it.
Basically Quartz uses a shared database to record which scheduler instance is currently working on a job, as opposed to direct node communication among application schedulers. When a scheduler instance picks up a job, it safely registers its instance id with the running job and persists it in the database. This support by the job store is evident in the schema used by RedisJobStore, indicated by the blocked_by fields.
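So, as a sketch, each of the two services would simply start a scheduler from the same properties file (with the isClustered and clusterCheckinInterval lines removed, since RedisJobStore has no such setters); because both point at the same Redis, the job store coordinates which instance runs each job:

import org.quartz.Scheduler;
import org.quartz.SchedulerException;
import org.quartz.impl.StdSchedulerFactory;

// Each service instance runs this same bootstrap against the shared Redis;
// RedisJobStore keeps track of which instance has picked up which job.
public class SchedulerBootstrap {

    public static void main(String[] args) throws SchedulerException {
        StdSchedulerFactory factory = new StdSchedulerFactory("quartz.properties");
        Scheduler scheduler = factory.getScheduler();
        scheduler.start();
    }
}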