I have an MVC web site, a storage queue, and a WebJob. Users can request the generation of a set of reports by clicking a button on the web page, which inserts a message into the storage queue. In the past, the WebJob ran continuously and processed those requests fine. But the demand and the size of the reports have grown to the point where the WebJob is slowing down the web app. I would like to keep placing the request message in the queue, but delay processing of all requests until the evening, when the web app is mostly idle. That would let me keep using the WebJob code and QueueTrigger functionality without wasting resources by moving to a dedicated Worker Role, etc. The reports don't need to be generated immediately, so a delay is acceptable.
I don't see a built-in way to set a time window on processing. The only thing I have found is a pair of PowerShell cmdlets for starting and stopping WebJobs (Start-AzureWebsiteJob / Stop-AzureWebsiteJob). So I was thinking I could create a scheduled PowerShell job that runs at midnight, starts the WebJob, lets it run, and then runs again early in the morning and stops it.
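Something like this, scheduled at midnight and again early in the morning (a rough sketch; the site and job names are placeholders):

# Midnight: start the continuous WebJob so it drains the queue overnight.
Start-AzureWebsiteJob -Name "my-site" -JobName "reports-webjob" -JobType Continuous

# Early morning: stop it before traffic picks up.
Stop-AzureWebsiteJob -Name "my-site" -JobName "reports-webjob"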
Does anyone know of a better option than this? Anything more "official" that perhaps I could not find?
One possible solution would be to hide the messages in the queue for a certain amount of time when they are inserted.
If you're using the AddMessage method, you can specify this timespan value in the initialVisibilityDelay parameter.
What this will do is ensure that the messages are not immediately visible in the queue to be picked up by the WebJob; they become visible only once this timespan elapses.
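For example (a sketch using the classic Microsoft.WindowsAzure.Storage SDK; the queue name, message body, and the 8 PM cutoff are assumptions):

using System;
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

// Placeholder: swap in your real storage connection string.
var account = CloudStorageAccount.Parse("UseDevelopmentStorage=true");
CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("report-requests");

// Hide the message until 8 PM (today, or tomorrow if 8 PM has already passed).
DateTimeOffset now = DateTimeOffset.Now;
var eightPm = new DateTimeOffset(now.Year, now.Month, now.Day, 20, 0, 0, now.Offset);
if (eightPm <= now)
{
    eightPm = eightPm.AddDays(1);
}

queue.AddMessage(
    new CloudQueueMessage("generate-reports"),
    timeToLive: null,                       // keep the 7-day default
    initialVisibilityDelay: eightPm - now); // invisible until the evening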
Will such a solution work for you?
Maybe I didn't fully understand your question, but couldn't you use a "Triggered" WebJob that runs on a CRON schedule? You can then limit it to specific hours:
0 * 20-22 * * *
This example will run every minute from 8 PM through 10:59 PM (hours 20 to 22; the fields are second, minute, hour, day, month, day of week).
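For a triggered WebJob deployed to App Service, that schedule goes in a settings.job file next to the job's executable (note the site needs Always On enabled for the schedule to fire reliably):

{
  "schedule": "0 * 20-22 * * *"
}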
I want to set up an ECS task that schedules various other application tasks.
The "tasks" this task will schedule will mostly involve calling restful endpoints in another load balanced service.
I know there are other ways to do this, such as using CloudWatch to trigger a Lambda. However, that seems overly complex for what I need.
I was planning to just make a very simple, lightweight Alpine-based image with a crontab to do the triggering of the RESTful calls.
This all seems easy enough. The only concern I have is that I would want to prevent, as far as possible, having multiple instances of this task running, even if only for a short period of time.
If my CI/CD pipeline triggers an update to this cron task, there may be a short period of time where the old and the new task are running simultaneously.
There is therefore a small chance that a cron job could be triggered twice.
What I would like to do, is to have ECS stop the currently running task completely, before attempting to start the new one.
This seems to be contrary to the normal way it wants to work, where it will ensure the new task is up, and healthy before stopping the old one.
Is this possible, and if so, how do I configure it?
It's not a problem if my crons don't run for a period of time, but it could be a problem if any get triggered more than once.
Instead of using an ECS service (which makes sure a particular number of tasks is always running and deploys via a rolling or blue/green strategy, which is not what you desire), how about using the StopTask and RunTask APIs to control when a task is stopped and started? That gives you complete control.
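A sketch with the AWS SDK for .NET (the cluster and task family names are placeholders). Note that StopTask only signals the task to stop; for a strict no-overlap guarantee you would poll DescribeTasks until the old task reports STOPPED before calling RunTask:

using Amazon.ECS;
using Amazon.ECS.Model;

var client = new AmazonECSClient();

// Find any task still running from the cron task's family and stop it first.
var running = await client.ListTasksAsync(new ListTasksRequest
{
    Cluster = "my-cluster",
    Family = "cron-task"
});
foreach (var taskArn in running.TaskArns)
{
    await client.StopTaskAsync(new StopTaskRequest
    {
        Cluster = "my-cluster",
        Task = taskArn,
        Reason = "replaced by new revision"
    });
}

// Only then launch the new revision, so two copies never overlap.
await client.RunTaskAsync(new RunTaskRequest
{
    Cluster = "my-cluster",
    TaskDefinition = "cron-task", // latest ACTIVE revision of the family
    Count = 1
});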
Instead of using scheduled tasks, you could create an ECS service and use scheduled scaling to scale the desired service count to 1 and back down to zero.
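For example, with Application Auto Scaling scheduled actions (a sketch using the AWS CLI; the cluster/service names and the 20:00 UTC start time are placeholders):

# One-time setup: allow the service's DesiredCount to move between 0 and 1.
aws application-autoscaling register-scalable-target \
  --service-namespace ecs \
  --resource-id service/my-cluster/cron-service \
  --scalable-dimension ecs:service:DesiredCount \
  --min-capacity 0 --max-capacity 1

# Scale up to one task at 20:00 UTC (add a matching scale-down action later).
aws application-autoscaling put-scheduled-action \
  --service-namespace ecs \
  --resource-id service/my-cluster/cron-service \
  --scalable-dimension ecs:service:DesiredCount \
  --scheduled-action-name start-cron \
  --schedule "cron(0 20 * * ? *)" \
  --scalable-target-action MinCapacity=1,MaxCapacity=1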
When a user registers, an email should be sent to the user after 20 seconds. Is it possible to code this with sleep() in Moodle?
sleep(20);
if (!send_confirmation_email($user)) {
print_error('noemail','core_email');
}
The sleep will be fairly poor UX and will block the session. An adhoc task is the right way to go, as DerKanzler said.
As of https://tracker.moodle.org/browse/MDL-66925 you can run the adhoc tasks in keep-alive mode and they will process continuously as a pseudo-daemon:
php admin/cli/adhoc_task.php --keep-alive=60 --execute
If you still wanted the email to be sent roughly 20 seconds into the future, when you use the Task API to queue the task you can set the future time that it should run:
https://docs.moodle.org/dev/Task_API#Set_a_task_to_run_at_a_future_time
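For instance (a sketch; the send_confirmation task class and its plugin are hypothetical, but the Task API calls are real):

// Hypothetical adhoc task class defined in your plugin,
// e.g. local/myplugin/classes/task/send_confirmation.php.
$task = new \local_myplugin\task\send_confirmation();
$task->set_custom_data((object) ['userid' => $user->id]);

// Ask the Task API not to run it before 20 seconds from now.
$task->set_next_run_time(time() + 20);
\core\task\manager::queue_adhoc_task($task);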
Either way, it sounds like a terrible idea to force a user either to wait in their browser for 20 seconds or to sit reloading their email client for 20 seconds. I'd strongly recommend against it.
If you want to do it the 'hacky' way and patch some core files, sleep() is indeed the easiest way to go.
If you are writing a plugin, you can have a look at the Task API, especially at defining adhoc tasks. Note that the task will not execute until your cron job fires, so you would have to shorten your cron interval. Besides that, there is currently no option to do this with the Moodle API.
I have a WCF Windows Workflow (4.5) Workflow Service hosted under IIS and using AppFabric 1.1. The workflow instances are long-running (up to about a week), but much of the time is spent in Delay activities.
This seemed to work fine at first, but when running multiple instances of the workflow at the same time (two or more instances trigger this), some of them just never wake up once they've been unloaded from memory during the Delay step. When I look at the logs, the errors I find all look like this:
System.OperationCanceledException: The execution of InstancePersistenceCommands has been canceled because the InstanceHandle was freed.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activities.Dispatcher.DurableInstanceManager.WaitAndHandleStoreEventsCallback(IAsyncResult result)
Unfortunately, I'm not finding any useful information on that error message.
The SuspensionExceptionName and SuspensionReason fields in the AppFabric Persisted Instances Table show System.NullReferenceException: Object reference not set to an instance of an object. But this doesn't happen inside my workflow, only outside.
Additional Info:
I'm running the activity as fire-and-forget (a receive activity, no send)
My workflow calls into other WCF services to fetch data.
I am running this on Server 2012 R2, IIS 8 (not Azure)
Workflow persistence is working. I can reset IIS or reboot; it's just when I run two instances that it has problems.
I'm definitely not hitting any kind of throttling limits. While the workflow deals with a few MB of data, this issue happens at just two instances.
Any idea what might be happening here?
Edit:
I realized I had found more information on how the issue behaves and never added it to the question. When the delay issue happens, it acts a lot like a static variable being written by two threads.
Here's a visualization:
WF1 Start ---->Do Stuff--->Sleep------------*1----->Cancelled Exception at some point
------WF 2 Start---->Do Stuff------->Sleep->Wake up---------*2------>More Stuff---->End Successfully
*1 - When WF Instance 1 Should Wake up (Same time as WF 2 wakes)
*2 - When WF Instance 2 Should have woken up (Seems to be ignored)
Before anyone asks... I got rid of every static variable, method, class in my code. Nothing is static anymore.
I've been struggling with similar issues for quite a while. I use WF4 and I see similar errors when a workflow instance is in a long delay.
I don't know what the cause of the problem is, but I have a work around that you might find helpful.
In my case, the errors I get are from Workflow Management Service and say:
Failed to invoke service management endpoint at 'net.pipe://.svc' to activate service '/Alerts/Workflows/.xamlx'. Exception: 'Access is denied.'
These errors start happening sometime between 6 and 30 hours after the instance goes into a long delay.
I have found that if I create a new instance of the workflow when the first instance is in delay and the errors are happening, then Workflow Management Service is able to resume interacting with the first sleeping instance.
So, I made a new workflow whose sole purpose is to periodically launch and then kill instances of the workflow that contains the long delay.
It actually gets a bit more complicated to make this work. I wanted the new workflow to also go to sleep between the times when it creates and kills an instance of the first workflow, but that going to sleep caused the new workflow's instance to suffer the same problem as the first workflow. So I modified the new workflow to do the following:
-- delay for some rather short period, such as 30 minutes
-- create an instance of the first workflow
-- wait a minute
-- kill the just-created instance of the first workflow
-- create a new instance of this new error-preventing workflow
-- terminate
Since doing this, I no longer get the Access is Denied error from the Workflow Management Service!
Hope this helps
It turns out my first answer was not correct, but I believe this answer is right and solves the issue ChrisG is having.
My workaround did not actually work; it took a while for the problem to resurface. 29 hours, to be precise: the default time it takes for an app pool to recycle.
So for me, the solution was to stop my app pool from recycling. When an app pool recycles while a workflow instance is in a Delay activity, the Workflow Management Service is not able to wake the instance up and throws Access is Denied errors. If you create a new instance of the workflow after the app pool has recycled, the first instance will pick up where it left off, but it sometimes still has problems, which is what I believe is happening to ChrisG.
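For reference, these are the two IIS settings involved (a sketch; "WorkflowPool" is a placeholder for your app pool name):

REM Disable idle shutdown and the periodic (default 29-hour) recycle.
%windir%\system32\inetsrv\appcmd set apppool "WorkflowPool" /processModel.idleTimeout:00:00:00
%windir%\system32\inetsrv\appcmd set apppool "WorkflowPool" /recycling.periodicRestart.time:00:00:00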
ChrisG, looking at your visualization, is it possible that an app pool is recycling during the time WF1 is sleeping? I believe that is the cause of the exception. If you then launch a new WF instance after *2 has passed (and if an app pool recycle happened prior to *1), that will wake up both WF1 and WF2, but WF1 won't work properly (at least in my experience).
Also, this happens after iisresets and server reboots. To handle those, you need IIS 7 or later, which allows the web application (as well as the web site) hosting the xamlx files to autostart after an iisreset or server reboot. This option is not available in IIS 6. See http://www.postseek.com/meta/991815402b369e71ce925cde47ac907d for details
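The relevant applicationHost.config entries look roughly like this (the pool, site, and application path are placeholders):

<applicationPools>
  <add name="WorkflowPool" startMode="AlwaysRunning" />
</applicationPools>
<sites>
  <site name="Default Web Site">
    <application path="/Alerts" applicationPool="WorkflowPool"
                 serviceAutoStartEnabled="true" />
  </site>
</sites>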
Hope this helps!
I have recently been thinking about a possible architecture for a simple task reminder system. A user schedules a task, and a reminder in the form of SMS/email/Android notification needs to be sent to all stakeholders some x minutes before the task is scheduled to be performed (much the same way Google Calendar works). The problem is sending the reminder at that precise point in time. Here are the two possible approaches I can think of:
Cron: I can set up a cron job to run every minute. It will scan the table for notifications that need to be sent in the next minute and simply send them. But precision is lost, as there is always the chance of a +/-1 minute error.
Work queues: I can simply put a message with the appropriate delay in a queue at the time the task is scheduled. Workers will send the notification as and when they receive the message, and I can add as many workers as I want if load starts to affect the real-time behavior. There are still a few issues. How do I choose the appropriate work queue? I have evaluated RabbitMQ and Beanstalkd. While RabbitMQ follows the standard AMQP protocol and is widely suggested, it doesn't provide delay functionality out of the box. There are ways to simulate this using dead-letter exchanges, but that will not work in my case because the delay needs to be variable. Beanstalkd supports delays, but the problem is that the beanstalkd queue resides entirely in memory, which I don't like (but can live with). Any possible alternatives?
Third approach: ??????. I am sure a simple desktop notification tool uses neither of the two. What technology do such tools use to achieve the same thing?
We had the same scenario, and we use Redis for long schedules; right now we have reminders scheduled up to 2 years out. You can use a Sorted Set where the timestamp is the score.
We use Beanstalkd delayed jobs for the kind of reminders that we know are relatively short term (a couple of hours) and have no cancellations, since removing a delayed message from beanstalkd requires retaining the job id in a database for later removal, and that is not viable.
Although you mention the memory limit, we use persistence on both Redis and Beanstalkd.
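A sketch of the sorted-set approach with StackExchange.Redis (the key name, payload format, and one-second polling interval are all assumptions):

using System;
using System.Threading;
using StackExchange.Redis;

var redis = ConnectionMultiplexer.Connect("localhost");
var db = redis.GetDatabase();

// Schedule: the member is the reminder payload, the score is its due time.
long dueAt = DateTimeOffset.UtcNow.AddMinutes(30).ToUnixTimeSeconds();
db.SortedSetAdd("reminders", "task:42:sms", dueAt);

// Worker loop: fetch everything whose score (due time) has passed.
while (true)
{
    long now = DateTimeOffset.UtcNow.ToUnixTimeSeconds();
    RedisValue[] due = db.SortedSetRangeByScore("reminders", 0, now);
    foreach (RedisValue reminder in due)
    {
        // Remove before sending so a second worker can't send the same one.
        if (db.SortedSetRemove("reminders", reminder))
            Console.WriteLine($"sending reminder: {reminder}");
    }
    Thread.Sleep(TimeSpan.FromSeconds(1));
}

Cancelling a reminder is then just a SortedSetRemove of its member, which is what makes variable delays and cancellations easy here.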
I have to move an SAP background job (an ABAP report for A/P) into Cronacle, and I can't figure out how to stop the job in SAP so I can start running it on the Cronacle schedule.
The job runs in SAP from the user SAPSYS every morning at 7:15 AM, but if you look it up with SM37 there is no time scheduled for it and it's not triggered by an event; also, its status is SCHEDULED.
I had our Cronacle team search by job number, but they couldn't find any scripts pointing to that job. If you look at the finished job, it shows that it's scheduled daily for 7:15 AM. Also, there are no predecessor or successor jobs listed. Is it possible it's being started from another job? How do I find out without deleting this one?
Some suggestions:
If you don't want to delete the scheduled job, try renaming it and see if it still runs.
Make sure that the user you are using for SM37 has full authorization for background administration.
A previous job can schedule, release, or create a new job. Look at what runs before the problematic job.
Look deeply at the dev traces. They sometimes hint at what is going on in the system.
In addition to a previous job creating the new job explicitly, it is also possible that the job is created manually by an ABAP program that is scheduled in another job. Doing a where-used on the function module JOB_OPEN and looking for Z* or Y* programs may give you a hint.
Another thing: is this scheduled job ever actually executed (i.e., are there any previous FINISHED jobs with the same name)? A scheduled job will not run unless it is first released, so if it never runs it may be obsolete.
Thanks for the responses! It turned out to be a case of newbie ignorance. When using SM37 to view the job, I neglected to extend the search date to the next day. I don't know why it doesn't show the released job for the current day, but extending the range to the next day showed it. That's a lesson I won't forget!