I am trying to schedule my application to run every hour. I have done this numerous times before but this time is proving problematic.
The task runs and reports that it completed, but it doesn't launch my application. It doesn't even display the message I have set; I even changed the action to only display the message, and it still doesn't work. I've changed the user that runs the task, even though all of our accounts are admins on the machine.
What could be wrong? What other debug steps could I take? Seeing as the task says it completes, and it has an event saying it has launched the application, surely it should have worked?
Contrary to the person who requested a close above, it turned out it WAS a programming issue.
The task ran correctly when the security option was set to "Run only when user is logged on" but wouldn't run when set to "Run whether user is logged on or not". This was due to my using a mapped drive location in code; mapped drives exist only in an interactive logon session. I changed the mapped drive to the absolute path and it now works successfully.
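For illustration, a minimal sketch of the change; the drive letter, share, and file names are invented for the example:

// Mapped drive letters (e.g. Z:\) exist only in an interactive logon session,
// so they resolve under "Run only when user is logged on" but not under
// "Run whether user is logged on or not".
// string reportPath = @"Z:\Reports\daily.csv";         // fails with no logon session
string reportPath = @"\\fileserver\Reports\daily.csv";  // UNC path works either way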
I have read that "Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting"
But I had to add explicit waits or sleeps in my test scripts to make them all pass.
Can anyone help me understand this waiting behavior?
Read at: http://www.protractortest.org/#/
Automatic Waiting:
You no longer need to add waits and sleeps to your test. Protractor can automatically execute the next step in your test the moment the webpage finishes pending tasks, so you don’t have to worry about waiting for your test and webpage to sync.
Right, I find this description as confusing as you do. I think it describes an ideal world with no network delays or timeouts, and no animations or layout issues.
The description originates from the following:
Protractor runs an extra command before performing any action on the browser to ensure that the application being tested has stabilized. This extra command is an async script which asks Angular to respond when the application is done with all timeouts and asynchronous requests, and ready for the test to resume.
Now, what does that "application is ready" statement mean? It basically means that there are no pending requests, promises, or "macro tasks" inside the running Angular application (source for Angular testability).
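As a side note, this is the same stabilization step you can trigger by hand with browser.waitForAngular(), e.g. when debugging a flaky spec:

// Explicitly run the stabilization step Protractor normally runs for you;
// it resolves once Angular reports no pending $http requests or $timeout tasks.
browser.waitForAngular();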
From what I understand, this covers most of the timing and waiting issues, but if there is pending JS code executing outside of Angular, or there are pending animations or other UI-related changes, this may still affect your test stability: for instance, an element might not be visible or clickable yet, an input may not be enabled yet, and so on.
And in practice this is not enough to keep the feedback from end-to-end tests stable and helpful: in our project, for example, we often find ourselves adding browser.wait() calls here and there to tackle occasionally failing tests. Here is a set of things that helped us to tackle this flakiness:
Protractor flakiness
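For illustration, this is the kind of explicit wait we end up adding; the selector, timeout, and failure message are all made up for the example:

// Wait up to 5 seconds for the button to become clickable before clicking it,
// covering animations and UI states that Angular's testability hook misses.
const EC = protractor.ExpectedConditions;
const saveButton = element(by.css('.save-button')); // hypothetical selector

browser.wait(EC.elementToBeClickable(saveButton), 5000,
    'Save button never became clickable');
saveButton.click();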
I have a project which lets the user select from the ADFs (Area Description Files) existing on a Tango-enabled device, to allow them to correctly localize in a number of different spaces.
My code (Unity 5.5, C#, Farandole SDK) essentially performs manual Tango startup with a null AreaDescription as the entry flow. If the user then selects an ADF, I'm calling TangoApplication.Shutdown() then TangoApplication.Startup(newArea).
In Eisa, this works. In Farandole, I get a permissions failure.
If, using Farandole, I explicitly request permissions (after the Shutdown) and wait for the permissions response to come back before calling Startup, the system appears to re-localise against the new ADF, but Tango re-registers its callbacks every time through Startup without unregistering them, meaning my callbacks get called multiple times for each ADF I switch to.
What is the correct process to switch between ADFs? Should a shutdown be required before calling Startup, and if so, what is the correct way to shutdown the TangoApplication to avoid multiple callbacks?
I am interested in this answer too. The way I would do it is to reload the scene with the new ADF, just as it is done in the AreaLearning example, so that TangoManager and TangoPoseController are reset.
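A minimal sketch of that approach, assuming Unity 5.5's SceneManager; the class, field, and scene name are my own invention, and the reloaded scene's startup code would read the stashed UUID and pass the matching AreaDescription to TangoApplication.Startup:

using UnityEngine.SceneManagement;

public static class AdfSwitcher
{
    // Static field survives the scene reload and tells the fresh scene
    // which ADF to localize against.
    public static string SelectedAdfUuid;

    public static void SwitchToAdf(string adfUuid)
    {
        SelectedAdfUuid = adfUuid;
        // Reloading the scene resets TangoManager and TangoPoseController,
        // avoiding the manual Shutdown/Startup pairing and the duplicated
        // callback registrations.
        SceneManager.LoadScene("AreaLearningScene"); // hypothetical scene name
    }
}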
I have a WCF Windows Workflow (4.5) Workflow Service hosted under IIS and using AppFabric 1.1. The workflow instances are long-running (up to about a week), but much of the time is spent in Delay activities.
This seemed to work fine at first, but when running multiple instances of the workflow at the same time (two or more instances trigger this), some of them just never wake up once they've been unloaded from memory during the Delay step. When I look at the logs, the errors I find all look like this:
System.OperationCanceledException: The execution of InstancePersistenceCommands has been canceled because the InstanceHandle was freed.
at System.Runtime.AsyncResult.End[TAsyncResult](IAsyncResult result)
at System.ServiceModel.Activities.Dispatcher.DurableInstanceManager.WaitAndHandleStoreEventsCallback(IAsyncResult result)
Unfortunately, I'm not finding any useful information on that error message.
The SuspensionExceptionName and SuspensionReason fields in the AppFabric Persisted Instances Table show System.NullReferenceException: Object reference not set to an instance of an object. But this doesn't happen inside my workflow, only outside.
Additional Info:
I'm running the workflow as fire-and-forget (a Receive activity, no Send)
My workflow calls into other WCF services to fetch data.
I am running this on Server 2012 R2, IIS 8 (not Azure)
Workflow persistence is working. I can reset IIS or reboot... it's just when I run two instances that it has problems.
I'm definitely not hitting any kind of throttling limits. While the workflow deals with a few MB of data, this issue happens at just 2+ instances.
Any idea what might be happening here?
Edit:
I realized I had found more information on how the issue behaves and never added it to the question. When the delay issue happens, it behaves a lot like a static variable being written by two threads.
Here's a visualization:
WF1 Start ---->Do Stuff--->Sleep------------*1----->Cancelled Exception at some point
------WF 2 Start---->Do Stuff------->Sleep->Wake up---------*2------>More Stuff---->End Successfully
*1 - When WF Instance 1 Should Wake up (Same time as WF 2 wakes)
*2 - When WF Instance 2 Should have woken up (Seems to be ignored)
Before anyone asks... I got rid of every static variable, method, and class in my code. Nothing is static anymore.
I've been struggling with similar issues for quite a while. I use WFW4 and I find similar errors when a workflow instance is in a long delay.
I don't know what the cause of the problem is, but I have a workaround that you might find helpful.
In my case, the errors I get are from Workflow Management Service and say:
Failed to invoke service management endpoint at 'net.pipe://.svc' to activate service '/Alerts/Workflows/.xamlx'. Exception: 'Access is denied.'
These errors start happening sometime between 6 and 30 hours after the instance goes into a long delay.
I have found that if I create a new instance of the workflow while the first instance is in its delay and the errors are happening, then the Workflow Management Service is able to resume interacting with the first, sleeping instance.
So, I made a new workflow whose sole purpose is to periodically launch and then kill instances of the workflow that contains the long delay.
It actually gets a bit more complicated to make this work. I wanted this new workflow to also go to sleep between the times when it creates and kills an instance of the first workflow, but that sleep causes the new workflow's instance to suffer the same problem as the first workflow. So I modified the new workflow to do the following:
-- delay for some rather short period, such as 30 minutes
-- create an instance of the first workflow
-- wait a minute
-- kill the just-created instance of the first workflow
-- create a new instance of this new error-preventing workflow
-- terminate
Since doing this, I no longer get the 'Access is denied' error from the Workflow Management Service!
Hope this helps
It turns out my first answer was not correct, but I believe this answer is right, and solves the issue ChrisG is having.
My workaround did not actually work; it just took a while for the problem to resurface. 29 hours, to be precise: the default time it takes for an IIS app pool to recycle.
So for me, the solution was to stop my app pool from recycling. When an app pool recycles while a workflow instance is in a Delay activity, the Workflow Management Service is not able to wake the instance up and throws 'Access is denied' errors. If you create a new instance of the workflow after the app pool has recycled, the first instance will pick up where it left off, but it sometimes still has problems, which is what I believe is happening to ChrisG.
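For reference, both relevant knobs can be zeroed out with appcmd; "WorkflowPool" is a placeholder for your actual app pool name:

REM Disable the periodic recycle (defaults to 1740 minutes = 29 hours)
%windir%\system32\inetsrv\appcmd.exe set apppool "WorkflowPool" /recycling.periodicRestart.time:00:00:00
REM Disable the idle timeout so the pool isn't torn down between requests
%windir%\system32\inetsrv\appcmd.exe set apppool "WorkflowPool" /processModel.idleTimeout:00:00:00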
ChrisG, looking at your visualization, is it possible that an app pool recycle happens during the time WF1 is sleeping? I believe that is the cause of the exception. If you then launch a new WF instance after *2 has passed (and if an app pool recycle happened prior to *1), that will wake up both WF1 and WF2, but WF1 won't work properly (at least in my experience).
Also, this happens after an iisreset or a server reboot. To handle those, you need IIS 7 or later, which allows the web application (as well as the web site) hosting the xamlx files to auto-start after an iisreset or server reboot. This option is not available in IIS 6. See http://www.postseek.com/meta/991815402b369e71ce925cde47ac907d for details
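If it helps, I believe the auto-start switch is the serviceAutoStartEnabled attribute on the <application> element in applicationHost.config (IIS 7.5 and later); the site name and application path below are placeholders:

<sites>
  <site name="Default Web Site">
    <!-- serviceAutoStartEnabled warms the application back up after an
         iisreset or reboot instead of waiting for the first request. -->
    <application path="/Alerts" serviceAutoStartEnabled="true" />
  </site>
</sites>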
Hope this helps!
I have set up a couple of daily tasks that update a SQL table and then send out an email with a CSV attached. Five of the scheduled tasks complete successfully only if the first task runs successfully. How would I set up Task Scheduler to run the sequential tasks only if the first task completed successfully?
The reasoning behind the request is that the first script sometimes finishes in a few minutes, while on other days it can take over an hour to complete. Any suggestions?
Thank you
It can be done! See here
http://blogs.msdn.com/b/davethompson/archive/2011/10/25/running-a-scheduled-task-after-another.aspx
In summary, though: say you have a task called Ping and you want a task called Pong to run after it.
Create a task called Pong
Create an 'On an event' trigger
Select Custom and edit the XML to be something like this:
<QueryList>
  <Query Id="0" Path="Microsoft-Windows-TaskScheduler/Operational">
    <Select Path="Microsoft-Windows-TaskScheduler/Operational">*[EventData[@Name='TaskSuccessEvent'][Data[@Name='TaskName']='\Ping']]</Select>
  </Query>
</QueryList>
I don't think what you want is possible with the Windows Task Scheduler alone. I would propose that you start the scripts that depend on the first one from within the first script itself, once it has done its work. That way you can be sure it has finished successfully.
Also, the title of your question is somewhat misleading; something like "Creating dependencies in Task Scheduler" would fit better.
If the task that takes a varying amount of time writes a Windows Event Log entry with an Event ID specific to its successful completion, you should be able to make your other tasks use the 'On an event' trigger type with the associated Log, Source, and Event ID.
If it doesn't, the other proposals are probably the only options left.
We have run into this same need several times. The two ways we have created this 'dependency' type of functionality are:
1. Set the schedule to run, say, every 30 minutes. In the startup of your app, check whether the dependency has completed; if not, exit, otherwise do your processing.
2. When there have been multiple dependencies, we created an app that managed those. Each process that needed to run depending on the others would be launched from the new Controller App (CA). The CA is scheduled to run every 30 minutes (or whatever makes sense for your process) and controls the multiple apps by checking the dependencies and running the next app; a sketch follows below. We don't leave the CA running; we spawn the process to run and exit. The next time the CA launches, it checks the dependencies and takes the needed action, or exits until launched again.
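A minimal sketch of that Controller App in C#, assuming the first task drops a marker file when it succeeds; every path here is invented for the example:

using System.Diagnostics;
using System.IO;

class ControllerApp
{
    static void Main()
    {
        // The first task writes this marker file when it completes successfully.
        if (!File.Exists(@"C:\Jobs\flags\update-table.done"))
            return; // dependency not met yet; the next scheduled run will re-check

        // Spawn the dependent job and exit without waiting on it, so the CA
        // itself never runs long.
        Process.Start(@"C:\Jobs\SendCsvEmail.exe");
    }
}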
I have a long running operation that is run as a Job.
Problem is, as long as the job runs in the background, I'm denied access to my workspace (a GMF diagram, if relevant). Under the Progress view, I can see a pending job saying "waiting user operation".
Is there a flag or priority that needs to be changed in order to make the job non-blocking?
Thanks!
It is the developer that decides whether a job is blocking or not, usually based on whether the internal structures are in a sane state during the execution of the job, which scheduling rule the job holds, etc.
There is no way to stop a job in the general case. Even if you can cancel the job, it is still the developer of the job who decides if and when to check for the cancel state...
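A minimal sketch of the difference in Eclipse Jobs API terms; whether other work is blocked comes down to the scheduling rule the job holds:

import org.eclipse.core.resources.ResourcesPlugin;
import org.eclipse.core.runtime.IProgressMonitor;
import org.eclipse.core.runtime.IStatus;
import org.eclipse.core.runtime.Status;
import org.eclipse.core.runtime.jobs.Job;

public class LongRunningOperation {
    public static void scheduleNonBlocking() {
        Job job = new Job("Long running operation") {
            @Override
            protected IStatus run(IProgressMonitor monitor) {
                // ... the actual long-running work goes here ...
                return Status.OK_STATUS;
            }
        };
        // With no scheduling rule set, the job runs without locking the
        // workspace. The commented-out line below would hold the WHOLE
        // workspace for the job's duration - that is what makes it blocking.
        // job.setRule(ResourcesPlugin.getWorkspace().getRoot());
        job.schedule();
    }
}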