I want to build a simple application that does the following:
Make an HTTP call to a specific endpoint.
If the endpoint doesn't respond, trigger a scheduled task on a remote machine (restarting the service that exposes the endpoint).
But once built, how do I run such an application?
Should I make it a continuously running application that just performs its logic every X minutes?
Should I make it a scheduled task that runs the application every X minutes?
Or is there a completely different way of doing it? If all options are viable, that's fine as well, I don't want to start a "which do you feel is better" debate. I just want to dodge any land mines I might have missed.
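For what it's worth, here is a minimal sketch of the check itself in Python (standard library only); the endpoint URL, remote host, and task name are placeholders, and schtasks /Run /S is one common way to fire a pre-existing scheduled task on another machine:

import subprocess
import urllib.request

ENDPOINT = "http://example.internal/health"  # placeholder URL

def endpoint_alive(timeout: float = 10.0) -> bool:
    # Any 2xx response counts as healthy; timeouts, connection failures,
    # and HTTP errors all surface as OSError subclasses.
    try:
        with urllib.request.urlopen(ENDPOINT, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except OSError:
        return False

def restart_remote_service() -> None:
    # Runs a scheduled task on the remote machine; host and task name
    # are placeholders.
    subprocess.run(
        ["schtasks", "/Run", "/S", "remote-host", "/TN", r"\RestartMyService"],
        check=True,
    )

if __name__ == "__main__":
    if not endpoint_alive():
        restart_remote_service()

Written this way the program does one check and exits, so it drops straight into the scheduled-task option; the continuously running option would wrap the same check in a sleep loop.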
Related
I have a Windows PowerShell script that needs to run 24/7.
Occasionally Windows requires an automated update and reboot. Is there any way I can detect when this is about to happen? I'd like to ensure the script does an orderly shutdown rather than being abruptly terminated.
I'm not looking for a week's notice of a reboot, but five minutes' warning would be long enough to ensure the script closes various database tables and does some basic housekeeping, rather than simply falling over in an ugly heap.
UPDATE - I know there are a lot of web articles describing how to detect when an update and/or reboot is pending, but none (that I can find) actually pin it down to a time. Some updates/reboots remain pending for hours or days. I'm looking for a flag or notification that 'this server will reboot within the next ten minutes', or something similar.
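For the orderly-shutdown half, here is a hedged sketch in Python (it assumes the third-party pywin32 package, and cleanup() is a stand-in for your own housekeeping). The well-known RebootRequired registry flag only says a reboot is pending, with no time attached, so the sketch also registers a console control handler, which buys the process a few seconds to clean up once the shutdown actually begins. Note that services receive CTRL_SHUTDOWN_EVENT, while an interactive console script gets CTRL_LOGOFF_EVENT instead:

import winreg
import win32api
import win32con

def reboot_pending() -> bool:
    # True if Windows Update has flagged a reboot; gives no ETA.
    try:
        winreg.OpenKey(
            winreg.HKEY_LOCAL_MACHINE,
            r"SOFTWARE\Microsoft\Windows\CurrentVersion"
            r"\WindowsUpdate\Auto Update\RebootRequired",
        ).Close()
        return True
    except FileNotFoundError:
        return False

def cleanup():
    pass  # stand-in for closing tables, flushing buffers, etc.

def on_console_event(event):
    # Called by Windows on logoff, shutdown, or console close; returning
    # True tells the OS the event was handled.
    if event in (win32con.CTRL_SHUTDOWN_EVENT,
                 win32con.CTRL_LOGOFF_EVENT,
                 win32con.CTRL_CLOSE_EVENT):
        cleanup()
        return True
    return False

win32api.SetConsoleCtrlHandler(on_console_event, True)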
I want to set up an ECS task to schedule various other application tasks.
The "tasks" this task will schedule will mostly involve calling RESTful endpoints in another load-balanced service.
I know there are other ways to do this, such as using CloudWatch to trigger a Lambda. However, that seems overly complex for what I need.
I was planning to just make a very simple, lightweight Alpine-based image with a crontab to trigger the RESTful calls.
This all seems easy enough. My only concern is that I want to prevent, as far as possible, having multiple instances of this task running, even if only for a short period of time.
If my CI/CD pipeline triggers an update to this cron task, there may be a short period of time where the old and new tasks are running simultaneously.
There may therefore be a small chance that a cron task could be triggered twice.
What I would like to do, is to have ECS stop the currently running task completely, before attempting to start the new one.
This seems contrary to the way ECS normally wants to work, where it ensures the new task is up and healthy before stopping the old one.
Is this possible, and if so, how do I configure it?
It's not a problem if my crons don't run for a period of time, but it could be a problem if any get triggered more than once.
Instead of using an ECS service (which makes sure a particular number of tasks is always running, and deploys via a rolling or blue/green strategy, which is not what you desire), how about using the StopTask and RunTask APIs to control when a task is stopped and started? That gives you complete control.
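A rough boto3 sketch of that idea (the cluster name and task definition family are placeholders): list whatever copies of the cron task are still running, stop them, wait until they are really gone, and only then start the new one.

import boto3

ecs = boto3.client("ecs")
CLUSTER = "my-cluster"   # placeholder
FAMILY = "cron-runner"   # placeholder task definition family

def replace_cron_task(new_task_def: str) -> None:
    # Find and stop every running copy of the old task first...
    running = ecs.list_tasks(cluster=CLUSTER, family=FAMILY)["taskArns"]
    for arn in running:
        ecs.stop_task(cluster=CLUSTER, task=arn,
                      reason="deploying new cron task")
    if running:
        ecs.get_waiter("tasks_stopped").wait(cluster=CLUSTER, tasks=running)
    # ...then start exactly one copy of the new revision.
    ecs.run_task(cluster=CLUSTER, taskDefinition=new_task_def, count=1)

Calling something like this from the CI/CD pipeline, instead of updating a service, guarantees the stop completes before the start begins.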
Instead of using scheduled tasks, you could create an ECS service and use scheduled scaling to scale the desired service count to 1 and back down to zero.
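A hedged boto3 sketch of that variant (the resource id and the cron expressions are placeholders): register the service's DesiredCount with Application Auto Scaling, then add two scheduled actions, one scaling up to 1 just before the cron window and one scaling back to 0 after it.

import boto3

aas = boto3.client("application-autoscaling")
RESOURCE = "service/my-cluster/cron-service"  # placeholder

aas.register_scalable_target(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE,
    ScalableDimension="ecs:service:DesiredCount",
    MinCapacity=0,
    MaxCapacity=1,
)
aas.put_scheduled_action(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE,
    ScalableDimension="ecs:service:DesiredCount",
    ScheduledActionName="cron-up",
    Schedule="cron(55 8 * * ? *)",  # placeholder time (UTC)
    ScalableTargetAction={"MinCapacity": 1, "MaxCapacity": 1},
)
aas.put_scheduled_action(
    ServiceNamespace="ecs",
    ResourceId=RESOURCE,
    ScalableDimension="ecs:service:DesiredCount",
    ScheduledActionName="cron-down",
    Schedule="cron(15 9 * * ? *)",  # placeholder time (UTC)
    ScalableTargetAction={"MinCapacity": 0, "MaxCapacity": 0},
)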
I'm trying to call a function when my macOS application is in any state, including terminated. Here is what I'm trying to accomplish:
Schedule a function (much like DispatchQueue.main.asyncAfter()) to run daily at a given time (let's say 9AM). I would like to add a feature to my application that allows a user to pick a time of day, and have an Alamofire POST request run at that time every day.
I have tried using a RunLoop and, more recently, Grand Central Dispatch:
DispatchQueue.main.asyncAfter(wallDeadline: .now() + .seconds(60)) {
    // fire the Alamofire POST request here; this closure only runs
    // if the app is still alive when the wall-clock deadline passes
}
I can easily accomplish this with a timer while the application is running, but I have yet to find a way to do it in the background, or when the app isn't running.
This may be pretty heavy to implement (i.e., not straightforward), but if you want a task to run even if your app is terminated, you might need to consider writing your own LaunchAgent.
The trick here would be for the agent to be able to interact with your application (retrieving or sending shared information).
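As an illustration only, here is a small Python sketch that installs such a per-user agent; the label, the script it launches, and the 9:00 time are placeholders, and it's the agent, not the app, that launchd keeps scheduling:

import pathlib
import plistlib
import subprocess

# launchd runs this agent daily at the StartCalendarInterval time,
# whether or not the main application is running.
agent = {
    "Label": "com.example.dailypost",                      # placeholder
    "ProgramArguments": ["/usr/local/bin/daily_post.sh"],  # placeholder
    "StartCalendarInterval": {"Hour": 9, "Minute": 0},
}
path = (pathlib.Path.home() / "Library" / "LaunchAgents"
        / "com.example.dailypost.plist")
path.write_bytes(plistlib.dumps(agent))
subprocess.run(["launchctl", "load", str(path)], check=True)

The placeholder script would then perform the POST, or poke the app through shared state as suggested above.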
I have a WCF Windows Workflow (4.5) Workflow Service hosted under IIS and using AppFabric 1.1. The workflow instances are long-running (up to about a week), but much of the time is spent in Delay activities.
This seemed to work fine at first, but when running multiple instances of the workflow at the same time (two or more instances trigger it), some of them just never wake up once they've been unloaded from memory during the Delay step. When I look at the logs, the errors I find all look like this:
System.OperationCanceledException: The execution of InstancePersistenceCommands has been canceled because the InstanceHandle was freed.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activities.Dispatcher.DurableInstanceManager.WaitAndHandleStoreEventsCallback(IAsyncResult result)
Unfortunately, I'm not finding any useful information on that error message.
The SuspensionExceptionName and SuspensionReason fields in the AppFabric Persisted Instances Table show System.NullReferenceException: Object reference not set to an instance of an object. But this doesn't happen inside my workflow, only outside.
Additional Info:
I'm running the activity as fire-and-forget (receive activity, no send).
My workflow calls into other WCF services to fetch data.
I am running this on Server 2012 R2, IIS 8 (not Azure).
Workflow persistence is working. I can reset IIS, reboot... it's just when I run 2 instances that it has problems.
I'm definitely not hitting any kind of throttling limits. While the workflow deals with a few MB of data, this issue happens at 2+ instances.
Any idea what might be happening here?
Edit:
I realized I had found more information on how the issue behaves and never added it to the question. When the delay issue happens, it acts a lot like a static variable being written by two threads.
Here's a visualization:
WF1 Start ---->Do Stuff--->Sleep------------*1----->Cancelled Exception at some point
------WF 2 Start---->Do Stuff------->Sleep->Wake up---------*2------>More Stuff---->End Successfully
*1 - When WF Instance 1 Should Wake up (Same time as WF 2 wakes)
*2 - When WF Instance 2 Should have woken up (Seems to be ignored)
Before anyone asks... I got rid of every static variable, method, and class in my code. Nothing is static anymore.
I've been struggling with similar issues for quite a while. I use WFW4 and I find similar errors when a workflow instance is in a long delay.
I don't know what the cause of the problem is, but I have a work around that you might find helpful.
In my case, the errors I get are from Workflow Management Service and say:
Failed to invoke service management endpoint at 'net.pipe://.svc' to activate service '/Alerts/Workflows/.xamlx'. Exception: 'Access is denied.'
These errors start happening sometime between 6 and 30 hours after the instance goes into a long delay.
I have found that if I create a new instance of the workflow when the first instance is in delay and the errors are happening, then Workflow Management Service is able to resume interacting with the first sleeping instance.
So, I made a new workflow whose sole purpose is to periodically launch and then kill instances of the workflow that contains the long delay.
It actually gets a bit more complicated to make this work. I wanted this new workflow to also go to sleep between the times when it creates and kills an instance of the first workflow. But that sleep caused the instance of the new workflow to suffer the same problem as the first workflow. So I modified the new workflow so it does the following:
-- delay for some rather short period, such as 30 minutes
-- create an instance of the first workflow
-- wait a minute
-- kill the just-created instance of the first workflow
-- create a new instance of this new error-preventing workflow
-- terminate
Since doing this, I no longer get the Access is Denied error from the Workflow Management Service!
Hope this helps
Turns out my first answer was not correct, but I believe this answer is right and solves the issue ChrisG is having.
My workaround did not actually work; it just took a while for the problem to resurface. 29 hours, to be precise: the default time it takes for an app pool to recycle.
So for me, the solution was to stop my app pool from recycling. When an app pool recycles while a workflow instance is in a Delay activity, the Workflow Management Service is not able to wake the instance up and throws Access is Denied errors. If you create a new instance of the workflow after the app pool has recycled, the first instance will pick up where it left off, but sometimes it still has problems, which is what I believe is happening to ChrisG.
ChrisG, looking at your visualization: is it possible that an app pool recycle is happening while WF1 is sleeping? I believe that is the cause of the exception. If you then launch a new WF instance after *2 has passed (and if an app pool recycle happened prior to *1), that will wake up both WF1 and WF2, but WF1 won't work properly (at least in my experience).
Also, this happens after iisresets and server reboots. To handle those, you need IIS 7 or later, which allows the web application (as well as the web site) hosting the .xamlx files to autostart after an iisreset or server reboot; this option is not available in IIS 6. See http://www.postseek.com/meta/991815402b369e71ce925cde47ac907d for details.
Hope this helps!
I made an app that I'm deploying as an EAR on WAS 8.5. It constantly checks a DataQueue and transfers whatever message it finds to an MQ queue. While testing it, I realized that if I start it, the application stays in the 'starting' state indefinitely (since it's an endless loop that checks the queue). Even without the loop, the DataQueue's read() blocks until it finds a message, which also keeps the app's startup from completing.
Reflecting on it, I realize that an EAR (with WARs, JARs, etc.) is an app that waits for requests (if not always, then most of the time). So if it runs an endless loop, the EAR never finishes starting.
Maybe there's another way to deploy this application on WAS. Is there a way to deploy it so that it behaves like a background process that does everything I mentioned above?
There are two solutions to this:
1. Use MDBs, and make sure you receive the message on a message listener thread. That way the threading is taken care of entirely by WAS.
2. Manage the threads yourself; here is an article about developing multi-threaded applications in WAS: http://wpcertification.blogspot.in/2010/09/developing-multi-threaded-application.html