How to set up highly available active-passive PostgreSQL with a scheduled job running on the server - postgresql

I am new to PostgreSQL. I went through the Postgres documentation, which provides details for the active-passive server setup.
What I want is for the Spring Boot scheduler to run only on the server that is in active mode; in passive mode the scheduler should not run.
As soon as the active server goes down and the passive one takes over as the new active, the jobs should start running on the new server.
I have just 2 jobs: one runs every 5 minutes and one runs every day at 1 AM.
Need help in achieving this.
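One common way to achieve this (a sketch, not something from the question itself) is to let the database arbitrate which node runs the jobs, for example with the ShedLock library: both nodes schedule the jobs, but only the node that acquires a lock row in PostgreSQL actually executes them, so after a failover the surviving node simply takes over. Class and lock names below are hypothetical, and ShedLock additionally requires its shedlock table to exist in the database:

    // Sketch using ShedLock (net.javacrumbs.shedlock); names are illustrative.
    import javax.sql.DataSource;
    import net.javacrumbs.shedlock.core.LockProvider;
    import net.javacrumbs.shedlock.provider.jdbctemplate.JdbcTemplateLockProvider;
    import net.javacrumbs.shedlock.spring.annotation.EnableSchedulerLock;
    import net.javacrumbs.shedlock.spring.annotation.SchedulerLock;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.scheduling.annotation.EnableScheduling;
    import org.springframework.scheduling.annotation.Scheduled;
    import org.springframework.stereotype.Component;

    @Configuration
    @EnableScheduling
    @EnableSchedulerLock(defaultLockAtMostFor = "30m")
    class SchedulingConfig {
        @Bean
        LockProvider lockProvider(DataSource dataSource) {
            // The lock rows live in the active database, so only the node
            // connected to the current primary can win them.
            return new JdbcTemplateLockProvider(dataSource);
        }
    }

    @Component
    class Jobs {
        @Scheduled(cron = "0 */5 * * * *") // every 5 minutes
        @SchedulerLock(name = "fiveMinuteJob", lockAtMostFor = "4m")
        public void fiveMinuteJob() { /* ... */ }

        @Scheduled(cron = "0 0 1 * * *") // every day at 1 AM
        @SchedulerLock(name = "dailyJob", lockAtMostFor = "50m")
        public void dailyJob() { /* ... */ }
    }

Because the lock is stored in PostgreSQL itself, no extra leader-election component is needed: whichever application node can write to the active database runs the jobs.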

Related

Quartz scheduler stops working after some time in IIS

When I publish my application to the IIS server, the Quartz scheduler stops working after some time. On my local machine's IIS server it works fine.
I need to perform some functionality every day at 11:55 PM.
By default, IIS recycles an application pool after some idle time. I guess this is your problem: the application is simply shut down if nobody uses it.
While it is possible to make an application pool in IIS run forever, it is still better not to schedule background tasks from web applications.
Use a Windows service or simply the Windows Task Scheduler for scheduling (see the sketch after the list below).
There are a couple of good solutions for scheduling background tasks with C# in .NET:
Topshelf
Hangfire
Here is a nice topic about using both solutions: "Setting up windows service with Topshelf and HangFire".
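For the plain Windows Task Scheduler route, a one-line sketch (the task name and script path are made up) that registers a daily task at 11:55 PM:

    schtasks /create /tn "NightlyJob" /tr "C:\jobs\run-nightly.bat" /sc daily /st 23:55

The task then runs on schedule regardless of whether the application pool is alive.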

AWS Fargate vs Batch vs ECS for a once-a-day batch process

I have a batch process, written in PHP and embedded in a Docker container. Basically, it loads data from several web services, does some computation on the data (for about 1 hour), and posts the computed data to another web service; then the container exits (with a return code of 0 if OK, 1 if something failed during the process). During the process, some logs are written to STDOUT or STDERR. The batch must be triggered once a day.
I was wondering what the best AWS service is to schedule, execute, and monitor my batch process:
At the very beginning, I used an EC2 machine with a crontab: there is no high-availability feature there, so I decided to switch to a more PaaS-like approach.
Then I used Elastic Beanstalk for Docker, with a non-functional web server (only there to reply to the health check) and a crontab inside the container to wake up my batch command once a day. With an autoscaling rule of min=1, max=1, I had HA (if the container or the VM crashed, it was restarted by AWS).
But now, to be more efficient, I decided to move to ECS, with an approach where I do not need EC2 instances awake 23 hours a day for nothing. So I tried Fargate:
With Fargate I defined my task (Fargate launch type, not the EC2 type) and configured everything on it.
I created a cluster to run my task: I can run my task by hand, one time, so I know all the settings are correct.
Now, going deeper into Fargate, I want my task executed once a day.
It seems to work fine when I use the Scheduled Task feature of ECS: the container starts on time, the process runs, then the container stops. But CloudWatch is missing some metrics: CPUReservation and CPUUtilization are not reported. Also, there is no way to know whether the batch exited with code 0 or 1 (every execution stops with status "STOPPED"), so I can't raise a CloudWatch alarm when the container execution fails.
I also used the "Services" feature of Fargate, but it can't handle a batch process, because the container is restarted every time it stops. This is normal, since the container does not run any daemon, and there is no way to schedule a service. I want my container to be active only when it needs to work (once a day, for at most 1 hour). On the other hand, with a service the missing metrics are correctly reported in CloudWatch.
Here is my question: what is the most suitable AWS managed service to trigger a container once a day, let it run to do its task, and provide reporting to track execution (CPU usage, batch duration), including an alarm (SNS) when the task fails?
We had the same issue with identifying failed jobs. I suggest you take a look at AWS Batch, where logs for FAILED jobs are available in CloudWatch Logs; take a look here.
One more thing you should consider is the total cost of ownership of whatever solution you eventually choose. Fargate, in this regard, is quite expensive.
This may be too late for your project, but I still thought it could benefit others.
Have you had a look at AWS Step Functions? You can define a workflow that starts tasks on ECS/Fargate (or jobs on EKS, for that matter), waits for the results, and raises alarms/sends emails.
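As a sketch of that idea (all ARNs, names, and subnets below are placeholders, not values from the question): the synchronous ecs:runTask.sync integration waits until the Fargate task stops and fails the state when the task errors, which a Catch clause can route to an SNS topic. The state machine itself can then be triggered once a day by a CloudWatch Events/EventBridge cron rule.

    {
      "StartAt": "RunDailyBatch",
      "States": {
        "RunDailyBatch": {
          "Type": "Task",
          "Resource": "arn:aws:states:::ecs:runTask.sync",
          "Parameters": {
            "LaunchType": "FARGATE",
            "Cluster": "arn:aws:ecs:eu-west-1:111111111111:cluster/batch-cluster",
            "TaskDefinition": "arn:aws:ecs:eu-west-1:111111111111:task-definition/php-batch",
            "NetworkConfiguration": {
              "AwsvpcConfiguration": {
                "Subnets": ["subnet-00000000"],
                "AssignPublicIp": "ENABLED"
              }
            }
          },
          "Catch": [
            { "ErrorEquals": ["States.ALL"], "Next": "NotifyFailure" }
          ],
          "End": true
        },
        "NotifyFailure": {
          "Type": "Task",
          "Resource": "arn:aws:states:::sns:publish",
          "Parameters": {
            "TopicArn": "arn:aws:sns:eu-west-1:111111111111:batch-alerts",
            "Message": "Daily batch failed"
          },
          "End": true
        }
      }
    }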

Acumatica scheduler

This might be a strange question. Is there any way to make the scheduler stay active perpetually? I have a couple of instances: one on a server for testing and a development instance on my laptop. I set up some business events in both instances, and they fire accurately as designed. My question comes from the fact that the scheduler seems to stall if no one logs into the instance. Once I log into the instance with any ID, the scheduler restarts, runs for about 12 hours, and then stalls again. I thought it was only the test instance on the server, but I took a couple of days off and my laptop instance also stalled. Is there a setting to overcome this? I know the assumption is that there will be users in the system in production, but what about over a weekend or holidays?
The schedule is run from the IIS worker process (w3wp) of the assigned application pool. Normally the worker process is started when the first web request is received.
If you restart the test instance on the server, or your laptop instance, you may experience this delay until someone logs in.
However, you can set the worker process to start automatically whenever an Application Pool starts.
Check your IIS configuration, look for the Application Pool assigned to your Acumatica instance and edit its Advanced Settings.
There you can change StartMode to AlwaysRunning.
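The same setting can be scripted; a sketch with appcmd, assuming IIS 7.5 or later and a pool named "AcumaticaPool" (a placeholder):

    %windir%\system32\inetsrv\appcmd set apppool "AcumaticaPool" /startMode:AlwaysRunning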
Your app pool might also be getting recycled. Check the following posts, which can help you:
How to know who kills my threads
https://serverfault.com/questions/333907/what-should-i-do-to-make-sure-that-iis-does-not-recycle-my-application

How to load-balance jobs using Spring Batch when different nodes have different times?

We have a lot of batch jobs to handle.
The problem is that we have 7 different nodes with the same application deployed (we use JBoss AS 7.1.1 as the application server), and we use Spring Batch with the Quartz scheduler to schedule jobs. It works just fine.
But one of our nodes has a different time than the others (e.g., suppose we have 3 nodes A, B, C; when it is 12:00:00 on C, it is 11:58:00 on A and B), and all these nodes are maintained by the client.
So when any trigger fires (we use cron triggers), the job runs on a single node only.
Now, when at a specific time (say 12:00) we need to fire more than one job, all of them run on that single node, because its clock reaches the trigger time before the other nodes' clocks do (12:00 happens on C before A and B).
I was wondering whether there is any mechanism where we use a centralized time reference to trigger all batch processes (i.e., do not trigger a batch process when it is 12 o'clock on C, but run the batch job when it is 12 o'clock in the DB)?
Thanks in advance :).
Spring Batch provides facilities to launch jobs via messages in the spring-batch-integration module. I'd recommend managing the scheduling from a central point and having it send messages to the servers, to be picked up based on each server's availability to run the job. This would also address the time-synchronization issue, since the scheduling piece would be handled in one central place.
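A sketch of that idea with the spring-batch-integration module (the channel name and wiring are hypothetical): the central scheduler publishes a JobLaunchRequest to a shared message channel, and each node runs a JobLaunchingGateway that launches the job when it picks the message up.

    // Sketch using spring-batch-integration; channel and class names are illustrative.
    import org.springframework.batch.core.Job;
    import org.springframework.batch.core.JobParametersBuilder;
    import org.springframework.batch.core.launch.JobLauncher;
    import org.springframework.batch.integration.launch.JobLaunchRequest;
    import org.springframework.batch.integration.launch.JobLaunchingGateway;
    import org.springframework.context.annotation.Bean;
    import org.springframework.context.annotation.Configuration;
    import org.springframework.integration.annotation.ServiceActivator;

    @Configuration
    class WorkerConfig {
        // On every node: consume JobLaunchRequest messages from the shared
        // "jobRequests" channel and launch the job on whichever node gets one.
        @Bean
        @ServiceActivator(inputChannel = "jobRequests")
        JobLaunchingGateway jobLaunchingGateway(JobLauncher jobLauncher) {
            return new JobLaunchingGateway(jobLauncher);
        }
    }

    class CentralScheduler {
        // On the central scheduling node only: the single trusted clock decides
        // when this request is built and sent to the "jobRequests" channel.
        JobLaunchRequest buildRequest(Job job) {
            return new JobLaunchRequest(job,
                new JobParametersBuilder()
                    .addLong("run.id", System.currentTimeMillis())
                    .toJobParameters());
        }
    }

Backing the "jobRequests" channel with a broker such as JMS or RabbitMQ ensures that exactly one node consumes each request.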
Ask your client to synchronize the servers using NTP. All of your servers should have the same time, period. You will have a bunch of other problems if you let your servers stay out of sync with each other.

Quartz job doesn't restart after instance fail

I have Quartz 1.8.5 running in a clustered environment (2 nodes, persistence, clustered, JobStoreCMT).
Now I schedule several jobs to run every day at a specific hour.
I set requests recovery to true for every one of these jobs (jobDetail.setRequestsRecovery(true)).
I can see that the flag is set to 1 in the QRTZ_JOB_DETAILS table.
What I want is that when a node fails (the JBoss server is restarted, for example), the other, alive node restarts the failed job. But this doesn't happen.
What am I doing wrong / not doing?
Thanks.
Have you tried updating to the latest Quartz? Version 2.1.6 is already out.
Otherwise, what you're doing seems to be right.
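One thing worth double-checking: for recovery to fire, both nodes must run with clustering enabled in quartz.properties, so that the surviving node's cluster check-in detects the dead instance and re-runs its recoverable jobs. A sketch of the relevant settings (values are illustrative; the data source settings are omitted):

    org.quartz.scheduler.instanceId = AUTO
    org.quartz.jobStore.class = org.quartz.impl.jdbcjobstore.JobStoreCMT
    org.quartz.jobStore.isClustered = true
    org.quartz.jobStore.clusterCheckinInterval = 20000

With isClustered = true and a unique AUTO instance id per node, a node that misses its check-in is considered failed, and its requests-recovery jobs are re-fired on another node.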