Prevent cron jobs from being a single point of failure - server

I have many cron jobs running on a server, including a DB backup (daily) and sending notifications to users (hourly).
Currently I have 5 API servers, and the cron jobs are set up on one of them.
I want to prevent the cron jobs from being a single point of failure. What happens if the machine on which the cron jobs are set up crashes?
Any suggestions, please?
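One common pattern, sketched below under assumptions the question doesn't state: put the same cron entry on all five API servers, but guard the actual work with a shared lock so only one server runs it at a time. The sketch assumes a shared PostgreSQL instance (the question doesn't say which database is used); the class name, lock key, and JDBC URL are placeholders, and any other distributed lock (Redis, ZooKeeper, a locks table) would play the same role.

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;

/**
 * Hypothetical sketch: the same cron entry runs on every API server,
 * but only the server that wins a shared advisory lock does the work.
 * Requires the PostgreSQL JDBC driver on the classpath.
 */
public class GuardedCronTask {

    private static final long LOCK_KEY = 42L; // arbitrary application-wide lock id

    public static void main(String[] args) throws Exception {
        try (Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://shared-db:5432/app", "app", "secret"); // placeholder URL
             Statement st = conn.createStatement();
             ResultSet rs = st.executeQuery(
                     "SELECT pg_try_advisory_lock(" + LOCK_KEY + ")")) {

            rs.next();
            if (rs.getBoolean(1)) {
                runDailyBackup();   // only the lock winner gets here
            } else {
                System.out.println("Another server holds the lock, skipping this run.");
            }
        } // session-level advisory lock is released when the connection closes
    }

    private static void runDailyBackup() {
        // placeholder for the real work (DB backup, notifications, ...)
    }
}
```

With this pattern, losing any one server no longer matters: whichever machine acquires the lock does the work, and the lock is released automatically when its connection closes.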

Related

Better job scheduler

We are trying to implement a few batch jobs using Spring Batch. Our application is batch-heavy; currently the jobs are shell scripts, and we are trying to move them to Spring Batch. We are looking for a scheduler with monitoring.
We are evaluating schedulers such as Spring Cloud Data Flow, Airflow, and Argo.
We are checking the feasibility of running these jobs on both Kubernetes and OpenShift.
We are not sure which one is a good fit - can someone suggest which we should go for?
Things we expect:
List batch jobs with the stage they are in, along with logs
Monitor jobs (Dynatrace/Prometheus)
Complex cron scheduling (flexibility similar to Unix cron jobs)
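On the last point only: if cron-style flexibility is the main worry when moving off shell scripts, plain Spring scheduling can already drive a Spring Batch job with standard cron expressions, independently of which orchestrator is chosen. A minimal sketch with made-up class and job names (nothing here comes from the question):

```java
import org.springframework.batch.core.Job;
import org.springframework.batch.core.JobParametersBuilder;
import org.springframework.batch.core.launch.JobLauncher;
import org.springframework.scheduling.annotation.Scheduled;
import org.springframework.stereotype.Component;

/**
 * Hypothetical sketch: launch a Spring Batch job on a cron schedule.
 * Assumes @EnableScheduling is declared on some configuration class.
 */
@Component
public class NightlyJobScheduler {

    private final JobLauncher jobLauncher;
    private final Job reportJob;   // any configured Spring Batch job

    public NightlyJobScheduler(JobLauncher jobLauncher, Job reportJob) {
        this.jobLauncher = jobLauncher;
        this.reportJob = reportJob;
    }

    // second minute hour day-of-month month day-of-week
    @Scheduled(cron = "0 30 2 * * *")   // every day at 02:30
    public void launch() throws Exception {
        jobLauncher.run(reportJob,
                new JobParametersBuilder()
                        .addLong("runAt", System.currentTimeMillis()) // unique parameters per run
                        .toJobParameters());
    }
}
```

This covers only the scheduling requirement; the listing and monitoring expectations are where tools like Spring Cloud Data Flow, Airflow, or Argo would come in.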

Hangfire Postgres - re-enqueue job when server dies

In our project we are using .NET Core, Hangfire, and Postgres. We have some medium-duration jobs (~10-15 min) that we schedule on Hangfire.
The issue is that from time to time the Hangfire server that is processing a job dies and a new one is started. Obviously the job that was being processed needs re-enqueuing, as it will never be completed otherwise. Hangfire seems to know that the server that was executing the job is dead - nonetheless, other servers won't pick up the job automatically and we have to re-enqueue it manually, which is not great.
Is there a way to get Hangfire to re-enqueue processing jobs when the server that was executing them is dead?
Thanks a lot!
Donato

PgAgent jobs not executing on remote server

I don't understand why this isn't working. I set up a pgAgent job to send a NOTIFY from the database every hour.
The steps
The schedule
It turns out the problem was that Heroku doesn't support pgAgent and the database was running on Heroku. I ended up working around it by scheduling the tasks with Windows Task Scheduler - it's not the best solution, but it does the job I needed...

Jenkins: trigger a job from another job that runs on an offline node

Is there any way to do the following?
I have 2 jobs. One job, which runs on an offline node, has to trigger the second one. Are there any plugins in Jenkins that can do this? I know that TeamCity has a way of achieving this, but I think Jenkins is more restrictive.
When you configure your node, you can set Availability to "Take this slave on-line when in demand and off-line when idle".
Set Usage to "Leave this machine for tied jobs only".
Finally, configure the job to be executed only on that node.
This way, when the job goes into the queue and cannot execute (because the node is offline), Jenkins will try to bring the node online. After the job is finished, the node will go back offline.
This of course relies on Jenkins being configured so that it is able to start this node.
One instance will always be turned on, and the main job can run on it. I have also created a job that looks in the DB and, if there are no running instances in the DB, prepares a node. A third job cleans up my environment after the tests have run.

Quartz Scheduler using database

I am using Quartz to schedule cron jobs in my web application, with an Oracle database to store the jobs and related info. When I add jobs to the database, I need to restart the server/application (Tomcat) for the new jobs to get scheduled. How can I add jobs to the database and have them scheduled without restarting the server?
I assume you mean you are using JDBCJobStore? In that case it is not ideal to make direct changes to the database tables storing the job data. However, I suppose you could set up a separate job that runs every X minutes/hours, checks whether there are new jobs in the database that need to be scheduled, and schedules them as usual.
Add jobs via the Scheduler API.
http://www.quartz-scheduler.org/docs/best_practices.html
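For reference, a minimal sketch of what "add jobs via the Scheduler API" can look like with a cron trigger; the job class, names, and cron expression are illustrative only. The simplest safe setup is to call this from inside the web application itself (for example from an admin endpoint), so the already-running scheduler registers the job; running it as a separate process against the same tables would additionally require Quartz clustering to be enabled.

```java
import org.quartz.CronScheduleBuilder;
import org.quartz.Job;
import org.quartz.JobBuilder;
import org.quartz.JobDetail;
import org.quartz.JobExecutionContext;
import org.quartz.Scheduler;
import org.quartz.Trigger;
import org.quartz.TriggerBuilder;
import org.quartz.impl.StdSchedulerFactory;

public class AddJobAtRuntime {

    // Illustrative job class; the real job classes already live in the web app.
    public static class NightlyReportJob implements Job {
        @Override
        public void execute(JobExecutionContext context) {
            System.out.println("Running nightly report...");
        }
    }

    // Registers a cron job on an already-running Scheduler.
    // With JDBCJobStore, scheduleJob() writes the job and trigger to the
    // database tables for you - no direct inserts, no restart.
    public static void addNightlyReport(Scheduler scheduler) throws Exception {
        JobDetail job = JobBuilder.newJob(NightlyReportJob.class)
                .withIdentity("nightlyReport", "reports")
                .build();

        Trigger trigger = TriggerBuilder.newTrigger()
                .withIdentity("nightlyReportTrigger", "reports")
                .withSchedule(CronScheduleBuilder.cronSchedule("0 0 1 * * ?")) // 01:00 daily
                .build();

        scheduler.scheduleJob(job, trigger);
    }

    public static void main(String[] args) throws Exception {
        // In the web application this would be the scheduler that is
        // already configured with JDBCJobStore in quartz.properties.
        Scheduler scheduler = StdSchedulerFactory.getDefaultScheduler();
        scheduler.start();
        addNightlyReport(scheduler);
    }
}
```

Because the scheduler persists the job and trigger through its job store, the new jobs are picked up without restarting Tomcat.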