Basically I want two tasks: I want the second task (Task B) to look at the status of the first task (Task A).
All the examples I see use YAML, but within the Release section for setting up deployments everything is done through a user interface.
If I use Agent.JobStatus in Task A or Task B, it shows the job status of whatever we are currently in. I figure I need to capture the value between Task A and Task B (not within either one); how does one capture that? Either I can't find it, or I'm not understanding something.
I have put it in the agent job variable expression for Task B, hoping it would pick up the last job status, but it is null.
in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')
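In YAML the equivalent pattern would presumably look something like this (the script paths are made up): Task A is allowed to continue on error, and Task B uses a custom condition that reads Agent.JobStatus, which turns into SucceededWithIssues once an earlier step has failed.

steps:
# Task A: continue on error so the job keeps going and reaches Task B.
- script: ./task-a.sh
  displayName: Task A
  continueOnError: true

# Task B: the custom condition reads the running job status.
- script: ./task-b.sh
  displayName: Task B
  condition: in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')

In the classic Release editor this should map to checking "Continue on error" on Task A and choosing "Custom condition" in Task B's control options.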
I have a configuration file for an Azure pipeline that is scheduled through the UI to run Mon to Fri. The file has different stages and each stage calls a different template. What I want to do is run different stages/templates on different days of the week.
I tried to save different schedules through the triggers UI, but they need to be applied to the entire file.
I was also reading this https://learn.microsoft.com/en-us/azure/devops/pipelines/process/scheduled-triggers?view=azure-devops&tabs=yaml but again, the schedule would be applied to the entire file.
Is there a way to apply a different schedule to each step?
No, there is no "out-of-the-box" way to do that. I think you may try to:
Divide your build into several pipelines and schedule them separately.
Or
Add a planning step that detects the day and sets an execution step variable like:
echo "##vso[task.setvariable variable=run.step1;]YES"
Set variables in scripts
Then use it in the conditions:
and(succeeded(), eq(variables['run.step1'], 'YES'))
Specify conditions
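Putting those two pieces together, a minimal sketch (assuming a Linux agent; the step contents are illustrative):

steps:
# Planning step: flag which steps should run today (date +%u: 1 = Monday ... 7 = Sunday).
- bash: |
    DOW=`date +%u`
    if [ "$DOW" = "1" ]; then
      echo "##vso[task.setvariable variable=run.step1;]YES"
    fi
  displayName: Plan today's steps

# Runs only on days the planning step flagged.
- script: echo "running step 1"
  displayName: Step 1
  condition: and(succeeded(), eq(variables['run.step1'], 'YES'))

On a Windows agent you would do the same detection in PowerShell, e.g. with (Get-Date).DayOfWeek.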
I have built a pipeline with 4 tasks:
Task 1: Build a VM
Task 2: Add a Data Disk
Task 3: Add a Second Data Disk
Task 4: Add a Third Data Disk
However, if I only want Task 1 and Task 2 to execute, how can I skip Tasks 3 and 4? For example, the user may only want one data disk. I know the tasks can be disabled manually, but is there a way to automate this based on a variable?
Every stage, job and task has a condition property. You can use a condition expression to decide which tasks to run and when, and you can reference variables in such expressions. By promoting these variables to "queue-time variables" you can let a user control them.
Make sure you prepend each condition with succeeded() to ensure the previous steps completed successfully.
condition: and(succeeded(), gt(variables.Disks, 2))
See:
Expressions
Specify Conditions
Define Variables - Allow at queue time
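Putting it together, a hedged sketch in YAML (the scripts are stand-ins for the real tasks; Disks is assumed to be marked "settable at queue time"):

variables:
- name: Disks
  value: 1          # user overrides this when queueing the run

steps:
- script: ./build-vm.sh
  displayName: Task 1 - Build a VM
- script: ./add-data-disk.sh 1
  displayName: Task 2 - Add a Data Disk
  condition: and(succeeded(), ge(variables.Disks, 1))
- script: ./add-data-disk.sh 2
  displayName: Task 3 - Add a Second Data Disk
  condition: and(succeeded(), ge(variables.Disks, 2))
- script: ./add-data-disk.sh 3
  displayName: Task 4 - Add a Third Data Disk
  condition: and(succeeded(), ge(variables.Disks, 3))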
Instead of manually redeploying a stage, I want an automatic way to redeploy it (it can currently only be done manually).
My stage includes some disk operations, which sometimes fail on the first attempt but usually succeed on the second.
I am currently re-running the task group in another job in the same stage.
The second job executes only if the first one fails.
But this marks the stage as failed, because out of the two jobs the first one failed.
In my case, though, both jobs are the same, and I can't find a way to redeploy the same stage.
Your options are basically to keep doing what you have, or to replace the failing steps with a custom PowerShell/Bash script that knows how to retry.
Edit: You could improve your existing solution a little by putting your second attempt in the same job as your first attempt. That way you wouldn't get a failed stage. You can put conditions on steps, not just jobs.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml
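A sketch of that edit (the script name is illustrative): both attempts live in one job, and the retry only fires when the job status shows the first attempt had problems.

jobs:
- job: DiskOps
  steps:
  # First attempt: continueOnError keeps the job alive so the retry step can run.
  - script: ./disk-operations.sh
    displayName: Disk operations (attempt 1)
    continueOnError: true

  # Second attempt: runs only when the first attempt left the job partially succeeded.
  - script: ./disk-operations.sh
    displayName: Disk operations (attempt 2)
    condition: eq(variables['Agent.JobStatus'], 'SucceededWithIssues')

The stage then ends up "partially succeeded" rather than failed when the retry saves the run. Newer agents also support a retryCountOnTaskFailure property on individual tasks, which may remove the need for the duplicate step entirely.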
I have several Azure DevOps release pipelines. I have a step which copies all my deployment scripts to a unique, one-time Azure Storage blob container, named after the release variable System.JobId. This is so that I can guarantee that whenever I run a release, I am always using up-to-date scripts and it shouldn't be possible to refer to old copies, regardless of any unpredictable error conditions. The container is deleted at the end of a release.
The documentation for System.JobId states:
A unique identifier for a single attempt of a single job.
However, intermittently and seemingly at random, my pipeline fails on the blob copy step with:
2020-03-30T19:28:55.6860345Z [2020/03/30 19:28:55][ERROR] [path-to-my-file]: No input is received when user needed to make a choice among several given options.
I can't directly see that this is because the blob already exists, but when I navigate to the storage account, I can see that a container exists and it contains all the files I would expect it to. Most other times when the pipeline runs successfully, the container is gone at this point.
Why does it seem that sometimes the JobId is reused, when the documentation says it is unique? I am not redeploying the same release stage on multiple attempts to trigger this problem. Each occurrence is a new release.
Why does it seem that sometimes the JobId is reused, when the documentation says it is unique?
Sorry for the misleading wording in our documentation.
The ID is unique only within the scope of a single pipeline, not across the complete system.
Take a simple example: there are two pipelines, p1 and p2, and each pipeline has two jobs, job1 and job2. Within pipeline p1, the job ID of job1 is unique and no other job in that pipeline has the same value.
But at the same time, the corresponding job in pipeline p2 has the same job ID value as the one in pipeline p1. It also keeps the same value even after re-running pipeline p1.
To use your words, this job ID value is reused in every new execution of the pipeline, including re-runs.
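Given that, if the container really must be unique per run, one workaround is to include an identifier that does change on every run in its name. A rough sketch for a YAML pipeline (the service connection and storage account names are made up; in a classic release, Release.ReleaseId would play the same role as Build.BuildId):

steps:
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # Build.BuildId changes on every run; System.JobId alone does not.
      # Container names must be lowercase; DevOps GUIDs already are.
      CONTAINER="scripts-$(Build.BuildId)-$(System.JobId)"
      az storage container create --name "$CONTAINER" --account-name mystorageacct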
I have a Task Group that I created out of a set of build tasks. I am able to edit the tasks quite well, but I now realise I will need to add another parameter to the task group. How do I go about doing that?
Task group parameters are automatically created based on the variables used in the tasks. If you reference a new variable in a task within the task group, a matching parameter will appear automatically.
In addition to the accepted answer: if you want to add parameters that are not directly referenced by tasks within the task group (e.g. for use in a config file token replacement task), you can export your task group, edit the .json file, then import it back in. The parameters are in an inputs array near the end of the file. You can also hide parameters if you only want to use them internally to the task group, by setting a default value and adding a 'visibleRule' property; see this article for details: https://medium.com/objectsharp/how-to-hide-a-task-group-parameter-b95f7c85870c
Note that importing will create a new task group rather than updating your current one. If you want to update the existing task group in place, you can use this REST API:
https://learn.microsoft.com/en-us/rest/api/azure/devops/distributedtask/taskgroups/update?view=azure-devops-rest-5.1
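For illustration, a hypothetical entry in that inputs array (all field values are made up; the visibleRule hides the parameter unless another input named ShowAdvanced is set to true):

"inputs": [
  {
    "name": "MyInternalParam",
    "label": "My internal param",
    "defaultValue": "some-internal-value",
    "required": false,
    "type": "string",
    "helpMarkDown": "Used only inside the task group.",
    "visibleRule": "ShowAdvanced = true"
  }
]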