This might be a strange question. Is there any way to make the scheduler stay active perpetually? I have a couple of instances, one on a server for testing and a development instance on my laptop. I set up some business events in both instances that fire accurately as designed. My question comes from the fact that the scheduler seems to stall if no one logs into the instance. Once I log in to the instance with any ID, the scheduler restarts and runs for about 12 hours, then stalls again. I thought it was only the test instance on the server, but I took a couple of days off and my laptop instance also stalled. Is there a setting to overcome this? I know the assumption is that there will be users in the system in production, but what about over a weekend or holidays?
The scheduler is run from the IIS worker process (w3wp) of the assigned Application Pool. Normally the worker process is started when the first web request is received.
If you restart the test instance on the server or the instance on your laptop, you may experience this delay until someone logs in.
However, you can set the worker process to start automatically whenever an Application Pool starts.
In your IIS configuration, find the Application Pool assigned to your Acumatica instance and edit its Advanced Settings.
There you can change StartMode to AlwaysRunning.
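For example, assuming your pool is named "YourAcumaticaAppPool" (a placeholder), you can either set Start Mode in the Advanced Settings dialog or run something like this from an elevated command prompt:

    %windir%\system32\inetsrv\appcmd.exe set apppool "YourAcumaticaAppPool" /startMode:AlwaysRunning

This corresponds to the startMode attribute on the pool's entry in applicationHost.config.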
Your app pool might be getting recycled. The following posts may help you:
How to know who kills my threads
https://serverfault.com/questions/333907/what-should-i-do-to-make-sure-that-iis-does-not-recycle-my-application
I have a backend service in NestJS that is deployed on Vercel.
I need several schedulers, so I have used the @nestjs/schedule lib, which is super easy to use.
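For context, my schedulers are declared roughly like this (simplified, with illustrative names, and with ScheduleModule.forRoot() imported in the app module):

    import { Injectable } from '@nestjs/common';
    import { Cron, CronExpression } from '@nestjs/schedule';

    @Injectable()
    export class TasksService {
      // Fires every hour, but only while this Node.js process stays alive.
      @Cron(CronExpression.EVERY_HOUR)
      handleHourlyJob() {
        // ...the scheduled work...
      }
    }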
Locally, everything works perfectly.
For some reason, the only thing that is not working in my production environment is those schedulers. Everything else is working - endpoints, database access, etc.
Does anyone have an idea why? Is it something with my deployment? Maybe Vercel has some issue with that? Maybe this schedule library requires something that Vercel doesn't provide?
I am clueless..
A cold boot is the process of starting a computer from a shutdown or powerless state and bringing it up to a normal working condition.
This means that code deployed in a serverless manner runs when its endpoint is called. The platform you are using spins up a virtual machine to execute your code and keeps that machine running for a certain period of time in case another API hit arrives; it's cheaper and easier for them to keep the machine running for, say, 60 seconds to 5 minutes than to shut it down when the function execution ends and redeploy it on every call.
So in your case, most likely what is happening is that the machine you are setting the cron on is killed after a period of time. Cron-style schedules are tied to the specific machine they run on, and if the machine is shut down, the cron dies with it. The only case where the cron would run is if it was triggered at a point in time before the machine was shut down.
Certain cloud providers give you the option to keep the machines alive. I remember Google Cloud used to take the approach that if a serverless function is called frequently, it shifts from a cold boot to a hot start, which doesn't kill the machine entirely; if you have steady traffic, the machines stay alive.
From quick research, Vercel isn't the best at handling crons, due to the nature of its infrastructure, which is likely what you are running into here. In general, crons aren't meant for serverless functions. You can run the scheduled work through queues, for example, or another third-party service; check out this link by Vercel.
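As a rough sketch of that approach, assuming you expose the scheduled work as a plain HTTP endpoint and trigger it from outside the process (a third-party cron service, a queue consumer, or Vercel's own cron feature if your plan includes it), the NestJS side could look something like this (route path and names are hypothetical):

    import { Controller, Get } from '@nestjs/common';

    @Controller('internal')
    export class ScheduledTasksController {
      // An external trigger calls this endpoint instead of relying on an
      // in-process @Cron job that dies when the serverless instance is killed.
      @Get('run-nightly-job')
      async runNightlyJob() {
        // ...do the work the @Cron handler used to do...
        return { ok: true };
      }
    }

You would also want to protect such an endpoint (for example with a shared-secret header) so it cannot be triggered by arbitrary callers.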
I am using an Azure Batch account to run sqlpackage.exe in order to move databases from one server to another. A task that started 6 days ago was suddenly restarted and began again from the beginning after 4 days of running (extremely large databases). The task had run uninterrupted up until then and should have continued to run for about another 1-2 days.
The PowerShell script that contains all the logic handles all the exceptions that could occur during the execution. Also, the retry count for the task was set to 0, so it would not retry if it failed.
Unfortunately, I did not have diagnostic settings configured, so I could only look at the metrics, and there was a short period when there wasn't any node.
What can be the causes of this behavior, i.e. the task restarting while the node is still running?
Thanks
Unfortunately, there is no way to give a definitive answer to this question. You will need to dig into the compute node (interactively log in) and check system logs to give you details on why the node restarted. There is no guarantee that a compute node will have 100% uptime as there may be hardware faults or other service interruptions.
In general, it's best practice to have long running tasks checkpoint progress combined with a retry policy. Programs that can reload state can pick up at the time of the checkpoint when the Batch service automatically reschedules the task execution. Please see the Batch best practices guide for more information.
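As a minimal sketch of the checkpoint idea (shown here in TypeScript purely for illustration; in a real Batch task you would persist the checkpoint to durable storage such as a blob container rather than the node's local disk, and the names below are assumptions):

    import * as fs from 'fs';

    const CHECKPOINT_FILE = 'completed-databases.json'; // assumed name/location

    function loadCompleted(): Set<string> {
      return fs.existsSync(CHECKPOINT_FILE)
        ? new Set<string>(JSON.parse(fs.readFileSync(CHECKPOINT_FILE, 'utf8')))
        : new Set<string>();
    }

    function markCompleted(done: Set<string>, db: string): void {
      done.add(db);
      fs.writeFileSync(CHECKPOINT_FILE, JSON.stringify([...done]));
    }

    async function migrateAll(databases: string[], migrateOne: (db: string) => Promise<void>) {
      const done = loadCompleted();
      for (const db of databases) {
        if (done.has(db)) continue;   // skip databases finished before a restart
        await migrateOne(db);         // e.g. invoke sqlpackage for this database
        markCompleted(done, db);      // record progress so a retried task resumes here
      }
    }

Combined with a retry policy (maxTaskRetryCount greater than 0), a rescheduled or retried task then redoes only the unfinished databases instead of starting from the beginning.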
When I did the upgrade of Concourse from 3.4.0 to 3.5.0, all running jobs suddenly changed their state from running to errored. I can see the string 'no workers' at the start of their logs now. Starting the jobs manually, or having them triggered by the next changes, didn't have any problem.
The upgrade of concourse itself was successful.
I was watching what BOSH did at the time and I saw that this change of job states took place all at once while either the web or the db VM was being upgraded (I don't know which one). I am pretty sure that the worker VMs had not yet been touched by BOSH.
Is there a way to avoid this behavior?
We have one db, one web VM and six workers.
With only one web VM it's possible that it was out of service long enough for all workers to expire. Workers continuously heartbeat, and if they miss two heartbeats (which takes 1 minute by default) they'll stall. They should come back after the deploy is finished, but if scheduling happened before they heartbeated again, that would cause those errors.
We have an intranet application hosted on IIS 8.0.
We have some web methods that need to be executed at certain times.
So we have used the Quartz scheduler to schedule the jobs that execute these web methods. In the Application_Start event of global.asax, we have written the code to start the scheduler.
To keep the scheduler up and running, the Application Pool should always be running, so we have set the property startMode="AlwaysRunning"; also, the application should be started, so we have set the application property preloadEnabled="True".
We are recycling the application pool every 1740 minutes (29 hours, the default).
Here the question is:
Suppose I have a job scheduled at 3:00 AM and my app pool is in a running state.
I last browsed the application at 6:00 PM the day before the scheduled time.
As per the recycling interval, my app pool was recycled at 2:00 AM, and between then and 3:00 AM my application was not pinged, but my app pool was in a running state.
When the application pool is recycled, it will be started again (because of the property startMode="AlwaysRunning"), but the process ID of that worker process will be different.
Because of the app pool recycle, Quartz did not execute the job as scheduled. If I browse the application after the app pool has been recycled, then Quartz executes the jobs on schedule again.
Can anyone help me on this at the earliest?
Thanks in anticipation.
If your IIS 8.0 is running on the Server 2012 OS, you will need to turn on the 'Application Initialization' feature.
Please visit this link for more information.
Only after that feature is added will the property preloadEnabled="True" take effect.
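With the feature installed, the relevant applicationHost.config entries look roughly like this (pool and site names are placeholders, other attributes omitted):

    <applicationPools>
        <add name="MyAppPool" startMode="AlwaysRunning" />
    </applicationPools>
    <sites>
        <site name="MySite">
            <application path="/" applicationPool="MyAppPool" preloadEnabled="true" />
        </site>
    </sites>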
Please let me know if you are facing this issue on any other OS.
Hope this helps.
I'm dealing with a very strange problem now.
Since I started queueing over 1,000 jobs at once, Gearman hasn't been working properly.
The problem is that, when I submit the jobs in background mode, I can see the jobs are correctly queued on the monitoring page (Gearman monitor),
but the queue is drained right after (within a few seconds) without the jobs being delivered to the worker.
In the end, the jobs are never executed by the worker; they just disappear from the queue (job server).
So I tried rebooting the server entirely and reinstalling Gearman as well as the PHP library. (I'm using one CentOS and one Ubuntu machine with the PHP Gearman library; the versions are 0.34 and 1.0.2.)
But no luck yet... the job server just misbehaves as I explained above.
What should I do for now?
Can I check the workers' state, or monitor the whole process from queueing the jobs to delivering them to the worker?
When I tried gearmand with an option like 'gearmand -vvvv', it never printed anything on the screen while I registered a worker with the server and ran a job with client code (PHP).
Any comment will be appreciated.
For your information, I'm not considering a persistent queue using MySQL or SQLite for now, because it sometimes causes performance issues with slow execution.