I don't understand why this isn't working. I set up a pgAgent job to send a NOTIFY from the database every hour.
[Screenshot: the job's steps]
[Screenshot: the job's schedule]
Turns out the problem was that Heroku doesn't support pgAgent, and the database was running on Heroku. I ended up working around it by scheduling the tasks with Windows Task Scheduler - it's not the best solution, but it does the job I needed it to do...
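In case it helps anyone, the workaround boils down to a script that Task Scheduler invokes on an hourly trigger. Here's a rough sketch of that idea, assuming psycopg2 is installed and a DATABASE_URL environment variable points at the Heroku Postgres instance (the channel name hourly_tick is just a placeholder):

```python
# notify_job.py - invoked hourly by Windows Task Scheduler.
# Assumes psycopg2 is installed and DATABASE_URL points at the Heroku Postgres database.
import os
import psycopg2

def send_notify():
    conn = psycopg2.connect(os.environ["DATABASE_URL"])
    conn.autocommit = True  # NOTIFY is only delivered on commit
    with conn.cursor() as cur:
        # "hourly_tick" is a hypothetical channel name; clients LISTEN on it.
        cur.execute("NOTIFY hourly_tick, 'scheduled ping'")
    conn.close()

if __name__ == "__main__":
    send_notify()
```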
Related
I have a periodic task that uses a crontab schedule to run every day at 1:01 AM:
run_every = crontab(hour=1, minute=1)
Once I get my server up and running, is that enough to trigger the task to run once a day? Or do I also need to use a database scheduler?
Yes, it should be enough: Celery beat has its own state file, which suffices to run everything as you require.
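For reference, a minimal beat configuration along those lines might look like this (the app name, broker URL, and task body are placeholders, not from your setup):

```python
# tasks.py - start the scheduler with `celery -A tasks beat`
# and a worker with `celery -A tasks worker`.
from celery import Celery
from celery.schedules import crontab

# Broker URL is a placeholder; use whatever broker your project runs.
app = Celery("tasks", broker="redis://localhost:6379/0")

app.conf.beat_schedule = {
    "daily-1-01-am": {
        "task": "tasks.daily_job",
        "schedule": crontab(hour=1, minute=1),  # every day at 1:01 AM
    },
}

@app.task
def daily_job():
    print("running the daily task")
```

Celery beat records last-run times in its local state file (celerybeat-schedule by default), so a database scheduler is only needed if you want to edit schedules at runtime.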
I developed a job in Talend, built it, and automated it to run via the Windows batch file from the build below.
On execution, the start batch file invokes the dimtableinsert job and then, after it finishes, invokes fact_dim_combine. The process takes just minutes to run in Talend Open Studio, but when I invoke the batch file via the Task Scheduler it takes hours to finish.
Time taken:
Manual -- 5 minutes
Automated (invoking the Windows batch file) -- 4 hours
Can someone please tell me what is wrong with this automation process?
The delay in execution is likely a latency issue. Talend might be installed on the same server as the database instance, so whenever you execute the job from Talend it completes as expected. But if the scheduler is installed on a different server, calling the job through the scheduler will take longer to insert the data.
Make sure your scheduler and database instance are on the same server.
Execute the job directly in the Windows terminal and check whether you have the same issue.
The easiest way to know what is taking so much time is to add some logs to your job.
First, add some tWarn components at the start and end of each subjob (dimtableinsert and fact_dim_combine) to find out which one takes longest.
Then add more logs before/after the components inside the jobs.
This way you should have a better idea of what is responsible for the slowdown (DB access, writing of files, etc.).
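If you'd rather measure from outside the job, another option is to have Task Scheduler invoke a small wrapper that logs how long each step takes. A rough sketch of that idea (not Talend-specific; the batch file paths are placeholders):

```python
# run_with_timing.py - wrapper for Task Scheduler that logs when each step
# starts and ends, so the slow part shows up in the log file.
import logging
import subprocess
import time

logging.basicConfig(
    filename="talend_job_timing.log",
    level=logging.INFO,
    format="%(asctime)s %(message)s",
)

# Placeholder paths - point these at the launch scripts from your builds.
STEPS = [
    ("dimtableinsert", r"C:\jobs\dimtableinsert\dimtableinsert_run.bat"),
    ("fact_dim_combine", r"C:\jobs\fact_dim_combine\fact_dim_combine_run.bat"),
]

for name, script in STEPS:
    start = time.monotonic()
    logging.info("starting %s", name)
    subprocess.run(script, shell=True, check=True)  # run the step, fail loudly
    logging.info("finished %s in %.1f s", name, time.monotonic() - start)
```

Comparing the logged durations between a manual run and a scheduled run should show whether one step, or everything, slows down under the Task Scheduler account.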
In our project we are using .NET Core, Hangfire and Postgres. We have some medium-duration jobs (~10-15 min) that we schedule with Hangfire.
The issue is that from time to time the Hangfire server that is processing a job might die and a new one is started. Obviously the job that was being processed needs re-enqueuing, as it will never be completed otherwise. Hangfire seems to know that the server which was executing the job is dead; nonetheless, other servers won't pick up the job automatically, and we have to re-enqueue it by hand, which is not great.
Is there a way to get Hangfire to re-enqueue processing jobs when the server that was executing them is dead?
Thanks a lot!
Donato
I'm trying to schedule a function to run periodically and delete records from my Google Cloud SQL (PostgreSQL) database. I want it to run a couple of times a day, and each run will take under 10 minutes. What options do I have to schedule this function?
Thanks
Ravi
Your best option will be to use Cloud Scheduler to schedule a job that publishes to a Pub/Sub topic. Then, have a Cloud Function subscribed to this topic so it gets triggered by the message sent.
You can configure the Cloud Scheduler job to run as a daily routine, x times a day.
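For the function itself, a minimal sketch of a Pub/Sub-triggered Python Cloud Function might look like the following; the environment variables, table name, and retention window are placeholders, and it assumes the psycopg2 driver and the standard Cloud SQL unix-socket path:

```python
# main.py - Pub/Sub-triggered Cloud Function that prunes old rows
# from a Cloud SQL (PostgreSQL) instance.
import os
import psycopg2

def cleanup(event, context):
    """Entry point; the Pub/Sub message in `event` is ignored."""
    conn = psycopg2.connect(
        # Cloud Functions reach Cloud SQL through a unix socket at this path.
        host=f"/cloudsql/{os.environ['INSTANCE_CONNECTION_NAME']}",
        dbname=os.environ["DB_NAME"],
        user=os.environ["DB_USER"],
        password=os.environ["DB_PASS"],
    )
    with conn, conn.cursor() as cur:
        # Hypothetical table and column: delete records older than 30 days.
        cur.execute(
            "DELETE FROM events WHERE created_at < now() - interval '30 days'"
        )
        print(f"deleted {cur.rowcount} rows")
    conn.close()
```

You would deploy it with a Pub/Sub trigger (gcloud functions deploy cleanup --trigger-topic <your-topic>) and point the Cloud Scheduler job at that same topic.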
Try pgAgent
pgAgent is a job scheduling agent for Postgres databases, capable of running multi-step batch or shell scripts and SQL tasks on complex schedules.
pgAgent is distributed independently of pgAdmin. You can download pgAgent from the download area of the pgAdmin website.
I have many cron jobs running on a server, including a DB backup (daily) and sending notifications to users (hourly).
Currently I have 5 API servers, and the cron jobs are set up on one of them.
I want to protect the cron jobs from a single point of failure: what happens if the machine the cron jobs are set up on crashes?
Any suggestions, please?