Since yesterday, Task Scheduler has been causing some issues that I was hoping to get some assistance with.
I have a task "TASK" set to run daily each morning and then repeat every 30 minutes; its action launches a batch script from a directory on the C: drive. The script works fine when run on its own, and the task runs it fine too, unless the task is set to "After triggered, repeat every X." In that case it gives the error: "An error has occurred for task TASK. Error message: The selected task "{0}" no longer exists. To see the current tasks, click Refresh."
I have tried wiping all tasks from Task Scheduler and recreating them from scratch, clearing the tasks out of the registry, and exporting and reimporting the tasks. The issue only occurs when a task is set to repeat after it is triggered.
Figured it out myself. The error occurred because the original start date was set to a point before my attempts to run the tasks manually for testing. Strange.
Solution: set the task's start date to a point in the future.
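In case it helps anyone else, a minimal sketch of that fix using the ScheduledTasks PowerShell module might look like the block below; 'TASK' is just the example name from above and the date/time is arbitrary, so adjust both to your setup.

```powershell
# Sketch only: push the trigger's start boundary into the future so the
# repetition schedule is recalculated from a future date.
$task = Get-ScheduledTask -TaskName 'TASK'
$task.Triggers[0].StartBoundary = (Get-Date).Date.AddDays(1).AddHours(7).ToString('s')
$task | Set-ScheduledTask
```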
I created a classic release pipeline in Azure DevOps that looks like the following (this was just to test and verify I could re-create an issue I was having with another pipeline):
Each of the deployment group jobs has "Run this job" set to "Only when all previous jobs have succeeded". It works great unless one of the PowerShell scripts fails and I want to redeploy. For example, let's say the PowerShell Script task for "Start Maintenance Mode" fails, and I redeploy choosing the option to only "Redeploy to All jobs and deployment group targets that are either in failed, cancelled or skipped state". If I do that, it skips the PowerShell task in "Do Something, Anything" (as expected), then runs the failed PowerShell task for "Start Maintenance Mode" (as expected, and it succeeds this time), but then it skips the PowerShell task for "Stop Maintenance Mode" (not expected, since it was skipped last time and should be run during the redeploy). It shows "The job was skipped due to an unsatisfied condition.", with no further detail beyond that:
I've played around with custom conditions using variable expressions to try to get it to work, but I've had no luck. What am I missing/not understanding? What needs to be done to get it to redeploy and work correctly so that it actually runs all of the tasks that were skipped previously?
I can see this issue: because the "Do Something, Anything" job was skipped, the "Only when all previous jobs have succeeded" condition on the "Stop Maintenance Mode" job is not met.
As a workaround, you could follow the steps below:
1. Set a pipeline variable runYes to false.
2. Add a PowerShell task as the last step of the "Start Maintenance Mode" job and set it to run "Only when all previous tasks have succeeded".
3. Have that PowerShell task call the REST API Definitions - Update to set the pipeline variable runYes to true (see the sketch after these steps).
4. Set a custom condition on the "Stop Maintenance Mode" job using the variable expression eq($(runYes), 'true').
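For step 3, a rough sketch of the Definitions - Update call from the PowerShell task could look like the block below. The organization, project, definition id and personal access token are placeholders (none of them come from the question), and api-version 5.1 is just an example.

```powershell
# Placeholder values - replace with your own.
$org   = "yourOrganization"
$proj  = "yourProject"
$defId = 123
$pat   = "yourPersonalAccessToken"

$auth = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }
$url  = "https://vsrm.dev.azure.com/$org/$proj/_apis/release/definitions/$defId" + "?api-version=5.1"

# Read the release definition, flip the variable, and write the definition back.
$definition = Invoke-RestMethod -Uri $url -Headers $auth -Method Get
$definition.variables.runYes.value = "true"
Invoke-RestMethod -Uri $url -Headers $auth -Method Put -ContentType "application/json" `
                  -Body ($definition | ConvertTo-Json -Depth 100)
```

If the PowerShell task runs inside the release itself, you may be able to use $(System.AccessToken) instead of a PAT, provided the agent job is allowed to access the OAuth token.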
That way, the "Stop Maintenance Mode" job will run only when the "Start Maintenance Mode" job has succeeded. An easier alternative is to choose to redeploy to "All jobs and all deployment group targets", so that all deployment jobs run.
I have a scheduled parallel DataStage (11.7) job.
This job has a Hive Connector with a Before and an After statement.
The Before statement runs OK, but the After statement stays in a running state for several hours (in the Hue log I can see the job finished in 1 hour), and I have to abort it manually in DataStage Director.
Is there a way to "program an abort"?
For example, I want to schedule an interruption of the running job every morning at 6.
I hope I was clear :)
Even though you can kill the job - as per the other responses - using dsjob to stop it, this may have no effect, because the After statement has been issued synchronously; the job is waiting for it to finish, and (probably) not processing kill signals and the like in the meantime. You would be better advised to work out why the After command is taking so long, and to address that.
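If you still want to schedule a stop despite that caveat, the command the other responses refer to is dsjob -stop, which you could run from cron (or any scheduler) at 06:00, after sourcing dsenv. A hedged sketch, with placeholder project and job names and the typical engine path:

```
# Placeholder project/job names; the dsjob location varies by installation.
/opt/IBM/InformationServer/Server/DSEngine/bin/dsjob -stop YourProject YourHiveJob
```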
I have been scratching my head for the last 2 days because of this issue. The error is intermittent on the production server: sometimes the task scheduler works and sometimes it does not.
The same settings work in the development server.
I also checked the execution policy on both servers and it looks the same.
In your second screenshot, you can choose "Stop the existing instance" in the last drop-down list ("If the task is already running, then the following rule applies"). Then the retry option might trigger your task again correctly.
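For reference, those two options correspond to the following elements in the task's exported XML (the restart interval and count here are only example values):

```xml
<Settings>
  <!-- "If the task is already running, then the following rule applies" -->
  <MultipleInstancesPolicy>StopExisting</MultipleInstancesPolicy>
  <!-- "If the task fails, restart every ..." (example values) -->
  <RestartOnFailure>
    <Interval>PT5M</Interval>
    <Count>3</Count>
  </RestartOnFailure>
</Settings>
```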
I have a job that uses the Kafka Connector stage to read a Kafka queue and then load the data into a database. The job runs in continuous mode, so it never finishes on its own, since it keeps monitoring the Kafka queue in real time.
For unexpected reasons (server issues, job issues, etc.) the job may terminate with a failure; in general that happens after about 300 hours of running. So, to keep the job alive, I have to check the job status manually and then do a Reset and Run to get it running again.
The problem is that several hours can pass between the job's termination and my manual Reset and Run, which is critical. So I'm looking for a way to eliminate the manual interaction and reduce that gap by automating the job invocation.
I tried using Control-M to run the job daily, but with no success: the first day Control-M called the job, it ran fine, but the next day, when Control-M attempted to start the job again, the attempt failed (since the job was already running). Besides, DataStage will never report back to Control-M that the job concluded successfully, since the job's continuous nature won't allow that.
That said, I would like to hear any ideas that could shed some light on this.
The first thing that came to mind is to create an intermediate sequence and schedule it in Control-M; this new sequence would then call the continuous job asynchronously using a command-line stage.
For the case where just this one job terminates unexpectedly and you want it restarted as soon as possible, have you considered calling the job from a sequence? The sequence could be set up to loop, running this job repeatedly.
The sequence starts the job and waits for it to finish. When the job finishes, the sequence loops and starts the job again. You could also add conditions on job exit (for example, if the job aborted, then based on that end status you could reset the job before re-running it).
This would not handle the condition where the DataStage engine itself was shut down (such as for maintenance, or possibly due to an error), in which case all jobs end, including your new sequence. The same applies to a server reboot or other situations where someone may have inadvertently stopped your sequence. For those cases (such as a DataStage engine stop), your team would need to have a process in place for jobs/sequences that need to be started up following a DataStage or system outage.
For the outage scenario, you could create a monitor script (regardless of whether you run the job solo or from a sequence) that sleeps/loops at 5-10 minute intervals, checks the status of your job using the dsjob command, and, if the job is not running, starts that job/sequence (also via the dsjob command). You can decide whether that script starts at DataStage startup or machine startup, or run it from Control-M or another scheduler.
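As a rough sketch, the dsjob calls such a monitor script would typically wrap are the ones below; the project and job names are placeholders, and your wrapper would branch on the status that -jobinfo reports.

```
# Placeholder project/job names.
dsjob -jobinfo YourProject YourKafkaJob                 # report the current job status
dsjob -run -mode RESET -wait YourProject YourKafkaJob   # reset the job if it ended in an aborted state
dsjob -run YourProject YourKafkaJob                     # start it again (no -wait, so the caller returns immediately)
```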
I set up a workflow in Airflow and one of the jobs failed. After I fixed the problem, I want to rerun the failed task and continue the workflow. The details are as below:
As above, I prepared to rerun the task "20_validation" and pressed the 'Run' button, as shown below:
But the problem is that when the task '20_validation' finished, the downstream tasks did not start. What should I do?
Use the clear button directly below the run button you drew the box around in your screenshot.
This will clear the failed task state and all the tasks after (since downstream is selected to the right), causing them all to be run/rerun.
Since the upstream task failed, the downstream tasks shouldn't have been run anyway, so it shouldn't cause successful tasks to rerun.
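If you prefer the command line over the UI, a roughly equivalent invocation of Airflow's clear command is sketched below; the DAG id and the date window are placeholders, and on Airflow 1.x the command is "airflow clear" rather than "airflow tasks clear".

```
# -t: task id/regex, -d: include downstream tasks, -s/-e: execution date window (placeholders).
airflow tasks clear your_dag_id -t 20_validation -d -s 2021-01-01 -e 2021-01-02
```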