I created a classic release pipeline in Azure DevOps that looks like the following (this was just to test and verify I could re-create an issue I was having with another pipeline):
Each of the deployment group jobs is configured to "Run this job" "Only when all previous jobs have succeeded". It works great unless one of the PowerShell scripts fails and I want to redeploy. For example, let's say the PowerShell Script task for "Start Maintenance Mode" fails, and I redeploy with the option to only "Redeploy to All jobs and deployment group targets that are either in failed, cancelled or skipped state". If I do that, it skips the PowerShell task in "Do Something, Anything" (as expected), then runs the previously failed PowerShell task for "Start Maintenance Mode" (as expected, and it succeeds this time), but then it skips the PowerShell task for "Stop Maintenance Mode" (not expected, since it was skipped last time and should run during the redeploy). It shows "The job was skipped due to an unsatisfied condition.", with no further detail beyond that:
I've played around with custom conditions using variable expressions to try to get it to work, but I've had no luck. What am I missing/not understanding? What needs to be done to get it to redeploy and work correctly so that it actually runs all of the tasks that were skipped previously?
I can reproduce this issue. It occurs because the "Do Something, Anything" job was skipped, so the condition for the "Stop Maintenance Mode" job is not met.
As a workaround, you could follow the steps below:
1. Set a pipeline variable runYes to false.
2. Add a PowerShell task as the last step of the "Start Maintenance Mode" job and set it to run "Only when all previous tasks have succeeded".
3. Have that PowerShell task call the REST API Definitions - Update to set the pipeline variable runYes to true.
4. Set a custom condition for the "Stop Maintenance Mode" job using the variable expression eq($(runYes), 'true').
This way, the "Stop Maintenance Mode" job will run only when the "Start Maintenance Mode" job succeeds. An easier alternative is to redeploy to "All jobs and all deployment group targets", so that all deployment jobs run.
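The REST call that flips runYes can be sketched in shell (a minimal sketch, not the exact script from the answer: it assumes curl and jq are available, AZDO_PAT holds a PAT with permission to edit release definitions, and the organization, project, and definition ID are placeholders; because the Update call takes the whole definition body, the usual pattern is GET, patch the variable, then PUT):

```shell
# Placeholders - substitute your own organization, project, and definition ID.
ORG="myorg"; PROJECT="myproject"; DEF_ID="12"
BASE="https://vsrm.dev.azure.com/${ORG}/${PROJECT}/_apis/release/definitions/${DEF_ID}"
echo "${BASE}?api-version=6.0"   # endpoint used by both calls below

# GET the current definition, set runYes to true, and PUT the result back:
# curl -s -u ":${AZDO_PAT}" "${BASE}?api-version=6.0" \
#   | jq '.variables.runYes.value = "true"' \
#   | curl -s -u ":${AZDO_PAT}" -X PUT "${BASE}?api-version=6.0" \
#       -H "Content-Type: application/json" -d @-
```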
Related
We're using SonarQube for tests, and it uses one token. As long as only one pipeline is running, everything is fine, but if I run several pipelines (all of which have E2E tests as their final jobs), they all fail, because they keep requesting a token that expires as soon as it is used by one pipeline (job). Would it be possible to have all pipelines pause at job "x" if they detect some other pipeline already running job "x"? The jobs have the same names across all pipelines. Yes, I know this is solved by just running one pipeline at a time, but that's not what my devs want to do.
The best way to make jobs run one by one is to set demands so that your agent jobs run on a specific self-hosted agent. As below, add a user-defined capability to the self-hosted agent, and then require the job to run on that agent by setting a matching demand on the agent job.
In this way, the agent jobs will only run on this one agent, so builds run one at a time, each waiting until the previous one completes.
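If the pipelines are YAML-based, the same demand can be expressed in the pool section (a sketch; SingletonAgent is a placeholder for whatever user-defined capability you set on the agent):

```yaml
pool:
  name: Default                    # your self-hosted agent pool
  demands:
  - SingletonAgent -equals true    # user-defined capability set on the agent
```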
Besides, you can control whether a job should run by defining approvals and checks. For example, use the Invoke REST API check to call a REST API such as Builds - List, and define the success criteria as "build count is zero"; the next build then starts only once no other build is in progress.
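A sketch of such a check (organization and project are placeholders, and the exact success-criteria syntax should be verified against the Invoke REST API documentation, which references the JSON response via root):

```
URL:              https://dev.azure.com/{organization}/{project}/_apis/build/builds?statusFilter=inProgress&api-version=6.0
Success criteria: eq(root['count'], 0)
```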
Currently I have a OneBranch DevOps pipeline that fails every now and then while restoring packages. Usually it fails because of some transient error like a socket exception or timeout. Re-trying the job usually fixes the issue.
Is there a way to configure a job or task to retry?
Azure DevOps now supports the retryCountOnTaskFailure setting on a task to do just this.
See this page for further information:
https://learn.microsoft.com/en-us/azure/devops/release-notes/2021/pipelines/sprint-195-update
Update:
Automatic retries for a task have since been added, and by the time you read this the feature should be available for use.
It can be used as follows:
- task: <name of task>
  retryCountOnTaskFailure: <max number of retries>
  ...
Here are a few things to note when using retries:
The failing task is retried immediately.
There is no assumption about the idempotency of the task. If the task has side-effects (for instance, if it created an external resource partially), then it may fail the second time it is run.
There is no information about the retry count made available to the task.
A warning is added to the task logs indicating that it has failed before it is retried.
All of the attempts to retry a task are shown in the UI as part of the same task node.
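A concrete example for the package-restore scenario above (a sketch; the task and its inputs are illustrative):

```yaml
steps:
- task: NuGetCommand@2
  retryCountOnTaskFailure: 3     # retry up to 3 times on transient failures
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'
```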
Original answer:
There is no way of doing that with native tasks. However, if your step is a script, you can put the retry logic inside it.
For instance, in Bash:
n=0                       # attempt counter
until [ "$n" -ge 5 ]      # give up after 5 attempts
do
  command && break        # substitute your command here; break on success
  n=$((n+1))
  sleep 15                # wait 15 seconds before retrying
done
However, there is no native way of doing this for regular tasks. Automatic task retries are on the roadmap, so this could change in the near future.
I want to know how to kill an Azure Pipeline task (or any level of execution - run, stage, job, etc.), so that I am not blocked waiting for an errant pipeline to finish executing or timeout.
For example, canceling the pipeline does not stop it immediately if a condition is configured incorrectly. If the condition resolves to true the task will execute even if the pipeline is cancelled.
This is really painful if your org/project only has 1 agent. :(
How can I kill (not cancel) an errant Azure Pipeline run, stage, job, or task?
For the hosted agent, there is no way to kill the pipeline directly, since you cannot access the machine it runs on.
As a workaround, you can shorten the time a task keeps running after the job is cancelled by setting a smaller "Build job cancel timeout in minutes":
For example, I created a pipeline with a task that would otherwise keep running for 60 minutes after the job is cancelled; with "Build job cancel timeout in minutes" set to 2, the pipeline is cancelled completely after 2 minutes.
For a private agent, you can run services.msc, look for "VSTS Agent (name of your agent)", right-click the entry, and choose Restart.
In Azure DevOps, I'm trying to make it so that a release job (not a YAML pipeline job) only runs if a specific prior job has failed. One of the pre-defined conditions for running a job is "Only when a previous job has failed", but this is not appropriate because it includes all prior jobs, rather than just the last job (or better yet, a job of the user's choosing).
Please note that this question focuses on tasks within a job and is not the answer that I am looking for.
According to the documentation for conditions under "How can I trigger a job if a previous job succeeded with issues?", I can access the result of a previous job -
eq(dependencies.A.result,'SucceededWithIssues')
Looking at the logs for a previous release, the AGENT_JOBNAME for the job that I wish to check is Post Deployment Tests, so that would mean my condition should look like this. However, Azure DevOps won't even let me save my release -
not(eq(dependencies.Post Deployment Tests.result, 'succeeded'))
Job condition for job "Swap Slots Back on Post Deployment Test Failure" in stage "Dev" is invalid: Unexpected symbol: 'Deployment'.
I've tried to wrap the job name in quotes, but I get similar errors -
not(eq(dependencies.'Post Deployment Tests'.result, 'succeeded'))
not(eq(dependencies."Post Deployment Tests".result, 'succeeded'))
I've also tried to reference my job using underscores, which does allow me to save the release but then results in an error at run time -
not(eq(dependencies.Post_Deployment_Tests.result, 'succeeded'))
Unrecognized value: 'dependencies'.
How can I achieve my goal of conditionally running a job only if a specific prior job has failed?
How can I achieve my goal of conditionally running a job only if a specific prior job has failed?
1. You should know that YAML pipelines and Classic pipelines use different technologies.
See the feature availability comparison: they have different feature sets. They also behave quite differently with respect to conditions, even though all three (YAML, Classic Build, Classic Release) support job conditions.
The eq(dependencies.'Post Deployment Tests'.result, 'succeeded') expression you're trying is not supported in the Classic (UI) release pipeline; the dependencies context is YAML-only.
The Unrecognized value: 'dependencies' error is therefore expected behavior, because job dependencies are not supported in release pipelines. See this old ticket.
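For comparison, in a YAML pipeline the desired behavior can be expressed directly (a sketch; job names are illustrative):

```yaml
jobs:
- job: Post_Deployment_Tests
  steps:
  - script: echo "run post-deployment tests"
- job: Swap_Slots_Back
  dependsOn: Post_Deployment_Tests
  condition: failed('Post_Deployment_Tests')   # run only if that specific job failed
  steps:
  - script: echo "swap slots back"
```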
Since yesterday my task scheduler has been causing some issues here that I was hoping to find some assistance with if possible.
I have a task "TASK" set to run daily each morning and then repeat every 30 minutes, with the action of launching a batch script from a directory on the C: drive. The script works correctly when run on its own. When I create a task for the script, it runs fine, unless it is set to "After triggered, repeat every X." In that case it gives the error message: "An error has occurred for task TASK. Error message: The selected task "{0}" no longer exists. To see the current tasks, click Refresh."
I have attempted wiping all tasks from Task Scheduler and recreating them from scratch, wiping the registry of tasks, and exporting and re-importing tasks. The issue only occurs when a task is set to repeat after triggering.
Figured it out myself. The error arose because the task's original start date was set to a time before my attempts to run it manually for testing. Strange.
Solution: Set next start date to the future.
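If you prefer the command line, the start date can be pushed into the future with schtasks (a sketch; the task name, date, and time are placeholders, and the date format depends on your locale):

```
schtasks /Change /TN "TASK" /SD 01/01/2026 /ST 06:00
```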