I have an Azure DevOps pipeline and I want to run it nightly with two different agent pools, one dev and one prod.
This is the pipeline with the default dev agent pool:
In the schedule settings there is no option to assign a different agent pool to the runs:
I saw this answer (a solution using YAML settings), but I couldn't find a way to use it in my pipeline (my pipeline is defined in the Azure DevOps UI).
Since you use classic GUI pipelines, you could define two different jobs that run on different agent pools. This way you have a single pipeline whose jobs run depending on your schedule.
When using YAML syntax, you could define different stages to accomplish the same result.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml
Create a new Stage. The first stage's job will use one pool and the second stage will use a different pool. They can then be scheduled or triggered independently. You can also clone the first stage to save you the time of duplicating the tasks.
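A minimal YAML sketch of that layout might look like this; the pool names Dev-Pool and Prod-Pool and the cron schedule are placeholders, not values from the question:

```yaml
# Sketch only: 'Dev-Pool' and 'Prod-Pool' are placeholder pool names.
schedules:
  - cron: '0 2 * * *'          # nightly at 02:00 UTC
    displayName: Nightly run
    branches:
      include:
        - main
    always: true               # run even if there were no code changes

stages:
  - stage: Dev
    pool: Dev-Pool             # this stage's jobs run on the dev agent pool
    jobs:
      - job: Run
        steps:
          - script: echo "Running on the dev pool"

  - stage: Prod
    dependsOn: []              # no dependency on Dev, so both stages can run in the same scheduled run
    pool: Prod-Pool            # this stage's jobs run on the prod agent pool
    jobs:
      - job: Run
        steps:
          - script: echo "Running on the prod pool"
```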
I am using a Windows self-hosted agent for my Azure DevOps pipelines. Currently the pipelines are executed sequentially. If more than one pipeline is triggered from different ADO projects, the others have to wait in the queue to get the agent. In order to execute pipelines in parallel, I learned from some tutorials that we can increase the paid parallel jobs for self-hosted agents under the billing section of Organization settings. Is my understanding correct? If so, what precautionary steps do I need to take? Do we have any control over when the pipelines are executed in parallel?
Thanks.
In order to run self-hosted parallel jobs, you need to purchase parallel jobs and register several self-hosted agents.
For parallel jobs, you can register any number of self-hosted agents in your organization. If you want to run 3 jobs in parallel, then you must register at least 3 self-hosted agents in one agent pool. DevOps charges based on the number of jobs you want to run at a time, not the number of agents registered. There are no time limits on self-hosted jobs. For private projects, you can have one job and one additional job for each active Visual Studio Enterprise subscriber who is a member of your organization.
For details on purchasing parallel jobs, please refer to Buy parallel jobs.
For how to control the use of parallel jobs, please refer to the following:
For a classic pipeline, you can specify when to run each job through Dependencies and the Run this job option under Additional options in the agent job. The pipeline will then run the jobs in sequence according to your settings.
For a YAML pipeline, you can specify the conditions under which a job should run with dependsOn and condition.
For example:
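Here is a minimal sketch with placeholder job names A and B:

```yaml
jobs:
  - job: A
    steps:
      - script: echo "Job A"

  - job: B
    dependsOn: A                 # B waits for A instead of running in parallel with it
    condition: succeeded('A')    # and runs only if A succeeded
    steps:
      - script: echo "Job B"
```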
For more info about conditions, please refer to Specify conditions
If you don't specify a specific order, the jobs will run in parallel based on the parallel jobs you purchased.
I don't know if my experience will help, but I'll try. I started a new job and we use self-hosted TFS / Azure DevOps. I am changing our build process to create 3 product SKUs (it uses conditional compilation). Let's call them Good, Better, and Best.
I edited the Build definition. First I switched to the Variables tab. I created a Process variable named SKUs and set it to Good,Better,Best. The commas are important.
Next I switched to the Tasks tab. I located the agent phase (mine was called Phase 1) and selected it. On the right, under Parallelism, I selected Multi-configuration. In the Multipliers text field I entered SKUs, and I set Maximum number of agents to 3.
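For anyone on YAML pipelines, the equivalent of Multi-configuration is a matrix strategy; here is a sketch using the same SKU names:

```yaml
jobs:
  - job: BuildSKUs
    strategy:
      matrix:                   # one parallel job per entry, like the SKUs multiplier
        Good:
          SKU: Good
        Better:
          SKU: Better
        Best:
          SKU: Best
      maxParallel: 3            # same as 'Maximum number of agents'
    steps:
      - script: echo "Building SKU $(SKU)"
```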
What I don't yet know is the TFS back-end administration and options that the company purchased beforehand.
I am using Azure DevOps Server 2020 and I have a release pipeline which has around 21 Copy Files tasks in it to copy the output of multiple microservices to different target paths; this takes around 23 minutes to complete.
I want to optimize the release pipeline and save some time, so I am thinking of running all the copy tasks simultaneously.
Under the copy tasks, in the Control Options section, I see a Run this task option where we can define custom conditions, but I am not sure which custom conditions I need to define so that all my copy tasks get executed in parallel.
Could anyone please let me know what custom conditions will allow all the copy tasks to be executed in one go?
Currently it is not possible to have tasks run in parallel. It has been raised as a suggestion here, but the feature hasn't been implemented.
Just as TheWinterCoder pointed out, it is currently not possible to have tasks run in parallel.
As a workaround, though, you could divide the copy tasks among several different jobs and make the jobs run in parallel:
This requires you to have multiple agents available in the local agent pool:
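Expressed in YAML, the split might look like the sketch below; the job names, pool, and paths are placeholders rather than anything from the question:

```yaml
jobs:
  - job: CopyGroup1
    pool: Default               # assumes at least two agents registered in this pool
    steps:
      - task: CopyFiles@2
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)/serviceA'   # placeholder path
          TargetFolder: '\\server\share\serviceA'                      # placeholder path

  - job: CopyGroup2               # no dependsOn, so both jobs can run at the same time
    pool: Default
    steps:
      - task: CopyFiles@2
        inputs:
          SourceFolder: '$(System.DefaultWorkingDirectory)/serviceB'   # placeholder path
          TargetFolder: '\\server\share\serviceB'                      # placeholder path
```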
I have a release pipeline composed of several stages.
The TFS server has only one worker (agent).
When I start a release, the worker runs the stages in an arbitrary order.
The problem is: if someone else starts another pipeline, the worker sometimes picks up the other pipeline before getting back to the next stage of mine.
Is there a way to lock the worker to the entire release pipeline?
When running a pipeline, a job is the unit of scale. This means each job can potentially run on a different agent.
Each job runs on an agent. A job represents an execution boundary of a set of steps. All of the steps run together on the same agent.
In addition,
A stage is a logical boundary in the pipeline. It can be used to mark separation of concerns (for example, Build, QA, and production).
More information: Azure Pipelines - Key concepts.
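To make those terms concrete, here is a minimal YAML sketch of the hierarchy; the stage and job names are arbitrary:

```yaml
stages:
  - stage: Build                # logical boundary, e.g. separation of concerns
    jobs:
      - job: Compile            # unit of scale: may be picked up by any available agent
        steps:
          - script: echo "All steps in this job run on the same agent"

  - stage: QA
    jobs:
      - job: Test               # a different job, possibly on a different agent
        steps:
          - script: echo "This job may run on another agent"
```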
You should also have a look at the Pipeline run sequence. It clearly explains how the entire pipeline process works.
Whenever Azure Pipelines needs to run a job, it will ask the pool for an agent.
If the availability of the agent is the issue, you might want to add more agents to the pool. If needed, you can even run multiple agents on the same machine.
Although multiple agents can be installed per machine, we strongly suggest to only install one agent per machine. Installing two or more agents may adversely affect performance and the result of your pipelines.
I have a question on Azure DevOps Release pipelines. My pipeline workflow is multi-stage where the build triggers the QA stage which then triggers the UAT stage which then triggers the PROD stage.
I use pipeline variables to manage each stage and require pre-approval on the UAT and PROD stages so that a change does not instantly get deployed to every stage sequentially.
My question is how to handle the case where I have multiple servers in an environment. I see that each environment should be treated as a stage but right now, I am treating each server in an environment as a stage where the tasks are run in parallel. This works for the first stage (QA), but becomes ugly for UAT since each server then requires pre-approval instead of the environment.
I have pipeline variables that specify paths for files to be dropped on servers as well. At a server-per-stage level this works, but not for multiple servers in a stage.
My pipeline currently looks like the picture below with UAT1 and UAT2 each requiring approval. How do I handle multiple servers for the QA and UAT stages, and later PROD?
My issue was that I was using a stage to represent a single server in an environment (i.e. QA, UAT, or PROD) instead of bundling the tasks performed per server into a task group and then use multiple task groups within the stage.
My pipeline now looks like the image below.
Within the stage, there is a task group per server.
The common tasks per server are contained in the task group.
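For readers on YAML pipelines, the analogous pattern to a task group is a step template reused once per server; the file name, server names, and paths below are hypothetical:

```yaml
# deploy-server.yml — hypothetical step template holding the common per-server tasks
parameters:
  - name: server
    type: string
  - name: dropPath
    type: string

steps:
  - script: echo "Deploying to ${{ parameters.server }}, dropping files at ${{ parameters.dropPath }}"
```

```yaml
# The stage instantiates the template once per server, so one approval guards the whole stage.
stages:
  - stage: UAT
    jobs:
      - job: Deploy
        steps:
          - template: deploy-server.yml
            parameters:
              server: UAT1
              dropPath: '\\uat1\drop'   # placeholder path
          - template: deploy-server.yml
            parameters:
              server: UAT2
              dropPath: '\\uat2\drop'   # placeholder path
```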
I am working with 9 repositories, each with its own separate Azure pipeline. However, during the deployment stage, there are some jobs/steps that depend on the status of a job from a separate deployment pipeline. If the resource (e.g. a resource group) has not yet been deployed, the job/step depending on it will naturally fail. I cannot simply use dependsOn because the dependency is in another pipeline.
Questions:
Is there a way for me to check, monitor, or wait (with a specified period of time) for a particular resource to be deployed?
Are these pipelines in need of restructuring?
I've seen the documentation for multi-repo pipelines. However, each pipeline must be deployable on its own, without waiting for another pipeline to finish.
If you use a YAML-based pipeline, you may consider pipeline triggers. They allow you to trigger one pipeline after another. You may also apply a stages filter:
In this sprint, we added support for 'stages' as a filter for pipeline resources in YAML. With this filter, you don't need to wait for the entire CI pipeline to be completed to trigger your CD pipeline. You can now choose to trigger your CD pipeline upon completion of a specific stage in your CI pipeline.
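A minimal sketch of such a pipeline resource with a stage filter; the source pipeline and stage names are assumptions, not taken from the question:

```yaml
resources:
  pipelines:
    - pipeline: infra                   # local alias for the upstream pipeline
      source: Deploy-ResourceGroup      # hypothetical name of the pipeline that deploys the resource group
      trigger:
        stages:
          - DeployResourceGroup         # hypothetical stage; this pipeline fires when that stage completes
```

With this in place, the downstream pipeline starts as soon as the named stage finishes, rather than waiting for the entire upstream run.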