In an Azure DevOps YAML pipeline, can we assign multiple agent pools to the same stage? For example, we have VM-based [vmpool] and Docker-based [dockerpool] build agents, and they belong to separate pools. Some of our pipeline stages can run in either pool, whereas other stages need to run in a specific pool. So I'm looking for a way to assign multiple pools to the stages that can run in both.
Secondly, can we define a precedence for the stages, i.e. first check for available VMs in the vmpool and, if no VMs are free to schedule, then schedule on the dockerpool?
Going through the docs, I couldn't find any helpful information on this.
You can use a template for each pool if you want to run the same steps across multiple pools in Azure Pipelines. A single set of steps can be defined in one file and used in multiple places in another file.
Please refer to the doc: steps.template
For example:
# File: steps/build.yml
steps:
- script: npm install
- script: npm test
Across multiple pools:
# File: azure-pipeline.yml
stages:
- stage: stage1
  jobs:
  - job: run_in_pool_1
    pool:
      name: vmpool
    steps:
    - template: steps/build.yml # Template reference
  - job: run_in_pool_2
    pool:
      name: dockerpool
    steps:
    - template: steps/build.yml # Template reference
If you want to define a precedence for the stages, e.g. check for available VMs in the vmpool first, you can use demands to make sure a job only runs on agents with the required capabilities. Please refer to: Demands
for example:
pool:
  name: MyPool
  demands:
  - myCustomCapability # exists check for myCustomCapability
  - Agent.Version -equals 2.144.0 # equals check for Agent.Version 2.144.0
I have the following problem: I have specified a pipeline yaml which runs on a self-hosted agent for the first stage. For the second stage I haven't declared an agent. It should default to a Microsoft-hosted agent (and does for all other pipelines with the same syntax and use-case). However, when initializing the agent at execution time, the chosen agent is of the self-hosted variety.
I've tried creating a fork, deleting the repo and re-initializing it. Both to no avail.
When I specify an agent for the stage, like so (as described in this ms documentation):
pool:
  vmImage: 'ubuntu-latest'
I get the following message:
There was a resource authorization issue: "The pipeline is not valid. Could not find a pool with name Azure Pipelines. The pool does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
When I click authorize resources, I get the message that it was successful. However, the authorization issue keeps returning.
azure-pipeline.yaml
resources:
  repositories:
  - repository: templates
    type: git
    name: azure-devops-reusable-tasks
variables:
- group: mijnverzekeringsvoorwaarden-api-service_ONT # for deployments (build scripts only target ONT)
- template: add-variable-branch-name.yml@templates
name: 1.0.0$(branchName)-$(Build.BuildId)
stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: Default
      demands: companyName.Docker
    steps:
    - template: maven.yml@templates
    - publish: $(System.DefaultWorkingDirectory)
      artifact: mijnverzekeringsvoorwaarden-api-service
    - template: publish-surefire-test-results.yml@templates
    - template: publish-jacoco-code-coverage.yml@templates
    - template: git-set-version-tag.yml@templates
- stage: Scan
  dependsOn: Build
  jobs:
  - job: Scan
    displayName: Dependency scan
    steps:
    - template: owasp-scan.yml@templates
And this is the template in question being called for the second stage. The one that is supposed to run on a MS-hosted machine:
owasp-scan.yml@templates
steps:
- download: current
- task: dependency-check-build-task@6
  displayName: Owasp dependency check
  inputs:
    projectName: $(Build.Repository.Name)
    scanPath: $(Pipeline.Workspace)
    format: 'HTML'
Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
Update:
Added a screenshot of the agent pools and agents in the Microsoft-hosted pool.
Tried referencing the agent pool with the name given in the screenshot, like so:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'ubuntu-latest'
This gives the following error:
##[error]No agent found in pool Hosted Ubuntu 1604 which satisfies the following demand: ubuntu-latest. All demands: ubuntu-latest, Agent.Version -gtVersion 2.188.2
When using:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'Hosted Ubuntu 1604 2'
I get: Encountered error(s) while parsing pipeline YAML: /azure-pipelines.yml (Line: 37, Col: 20): Invalid demand 'Hosted Ubuntu 1604 2'. The demand should be in the format '<NAME>' to test existence, and '<NAME> -equals <VALUE>' to test for a specific value. For example, 'VISUALSTUDIO' or 'agent.os -equals Windows_NT'.
Update 2:
I compared projects. Another project did have the Azure Pipelines agent pool. I approved this project (the one with the issue) to also have access to this pool. When I explicitly define it in the YAML, it uses this (cloud-hosted) pool. However, when I provide no pool information, it keeps defaulting to the self-hosted variant. To reiterate, this only happens for this particular pipeline.
Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
You could try to change the value in the Default agent pool for YAML to Azure Pipelines in the UI:
The problem has been circumvented. When explicitly defining the agent from the agent pool, with correct nomenclature, it does pick an agent from this pool.
However, this does not explain why this behavior is exhibited for this pipeline, while all other pipelines pick an agent from the same pool without defining it; those default to this same pool.
In this specific case the hack is defining the pool like so:
pool:
  name: 'Hosted Ubuntu 1604'
I've got a single self-hosted agent. It's used as a kind of deployment agent.
All release versions of our software get built by this agent and then copied to a network location.
Question: Is there a way I can utilize both the agent from the 'azure-pipelines' Microsoft hosted pool and my own self-hosted pool in my pipelines?
EDIT
Unfortunately this is not possible at the moment.
This is why you should upvote the feature request:
https://developercommunity.visualstudio.com/t/allow-agent-pools-to-contain-microsoft-hosted-and/396893
This is not possible. There is a ticket on the developer community asking for similar functionality, but it is already closed.
There is another ticket, Allow agent pools to contain Microsoft hosted and self-hosted agents, which refers to a similar case; it is open but MS is silent there.
Which benefits do you want to achieve?
Basically, you can use several agent pools in one build/release definition: you just split your definition into several jobs and assign the needed agent pool to the corresponding job, as in the sketch below.
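A minimal sketch of that split, assuming a Microsoft-hosted pool for the build and a self-hosted pool named MyDeploymentPool (that pool name is a placeholder, not taken from the question):
jobs:
- job: Build
  pool:
    vmImage: 'ubuntu-latest'   # Microsoft-hosted agent
  steps:
  - script: echo "build on a Microsoft-hosted agent"
- job: Deploy
  dependsOn: Build
  pool:
    name: MyDeploymentPool     # placeholder name for the self-hosted pool
  steps:
  - script: echo "deploy/copy on the self-hosted agent"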
If you want to dynamically assign different pools from one pipeline to run the same build steps, we can not do that (as Krzysztof mentioned).
You can do a hacky thing and use multiple jobs/stages, where the jobs/stages use different pools, and skip one of them depending on whether it is a release version. Note that the pipeline skeleton below is not tested.
variables:
  # Conditional insertion needs ${{ if }}; eq() does not support wildcards,
  # so test the branch with startsWith instead of 'release/*'
  ${{ if startsWith(variables['Build.SourceBranch'], 'refs/heads/release/') }}:
    release_build: 'True'
stages:
- stage: normal
  condition: ne(variables['release_build'], 'True')
  pool:
    vmImage: 'windows-latest'
  jobs:
  - job: Builds
    steps:
    - template: build.yaml
- stage: release
  condition: eq(variables['release_build'], 'True')
  pool: My-agent
  jobs:
  - job: Builds
    steps:
    - template: build.yaml
I am exploring Azure Pipelines as code and would like to understand how to make use of "deploymentMode" for validating and deploying ARM templates for each Azure environment.
I already have release pipelines created in Azure DevOps via the visual designer for deployment tasks, with one main ARM template and multiple parameter JSON files corresponding to each environment in Azure. Each of those pipelines has two stages: one for validation of the ARM templates and a second for deployment.
I am now trying to convert those release pipelines to YAML and would like to create one YAML file consolidating the deployment validation tasks (deploymentMode: 'Validation') for each environment first, followed by the actual deployment (deploymentMode: 'Incremental').
1) Is this the right strategy for running a multi-environment release cycle with Azure DevOps pipelines as code?
2) Will the YAML have two stages (one for validation and another for deployment), with each stage having many tasks (one task per environment)?
3) Do I need to create each Azure environment first in the 'Environments' section under Pipelines and configure the virtual machine for managing the deployment of the various environments via the YAML file?
Thanks.
According to your requirements, you could configure virtual machines for each Azure environment under Azure Pipelines -> Environments. Then you can reference the environments in the YAML code.
Here are the steps; you can refer to them.
Step 1: Configure a virtual machine for each Azure environment.
Note: If the virtual machines are under the same environment, you need to add tags to each virtual machine. Tags can be used to distinguish virtual machines in the same environment.
Step 2: Create the YAML file and add multiple stages (e.g. a validation stage and a deployment stage). Each stage can use the environments and contain multiple tasks.
Here is an example:
trigger:
- master
stages:
- stage: validation
  jobs:
  - deployment: validation
    displayName: validation ARM
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
- stage: deployment
  jobs:
  - deployment: deployment
    displayName: deploy
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
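For the elided task inputs, here is a hedged sketch of what the validation task could look like (the service connection, resource group, location, and template file names are placeholders, not values from the original pipelines):
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'MyServiceConnection'  # placeholder
    subscriptionId: '$(subscriptionId)'                    # placeholder
    action: 'Create Or Update Resource Group'
    resourceGroupName: 'my-resource-group'                 # placeholder
    location: 'West Europe'                                # placeholder
    templateLocation: 'Linked artifact'
    csmFile: 'azuredeploy.json'                            # placeholder
    csmParametersFile: 'azuredeploy.parameters.dev.json'   # placeholder
    deploymentMode: 'Validation'
The deployment stage would then use the same inputs with deploymentMode: 'Incremental'.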
Here are the docs about using multiple stages and virtual machines.
Hope this helps.
When using multistage pipelines from yaml in Azure Pipelines and every stage is deploying resources to a separate environment, I'd like to use a dedicated service connection for each stage. In my case every stage is making use of the same deployment jobs, i.e. yaml templates. So I'm using a lot of variables that have specific values dependent on the environment. This works fine, except for the service connection.
Ideally, the variable that contains the service connection name, is added to the stage level like this:
stages:
- stage: Build
  # (Several build-stage specific jobs here)
- stage: DeployToDEV
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_DEV' # This seems like a logical solution
  jobs:
  # This job would ideally reside in a yaml template
  - job: DisplayDiagnostics
    pool:
      vmImage: 'Ubuntu-16.04'
    steps:
    - checkout: none
    - task: AzurePowerShell@4
      inputs:
        azureSubscription: $(AzureServiceConnection)
        scriptType: inlineScript
        inline: |
          Get-AzContext
        azurePowerShellVersion: LatestVersion
- stage: DeployToTST
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_TST' # Same variable, different value
  jobs:
  # (Same contents as DeployToDEV stage)
When this code snippet is executed, it results in the error message:
There was a resource authorization issue: "The pipeline is not valid. Job DisplayDiagnostics: Step AzurePowerShell input ConnectedServiceNameARM references service connection $(AzureServiceConnection) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
So, it probably can't expand the variable AzureServiceConnection soon enough when the run is started. But if that's indeed the case, then what's the alternative solution to make use of separate service connections for every stage?
One option that works for sure is setting the service connection name directly to all tasks, but that would involve duplicating identical yaml tasks for every stage, which I obviously want to avoid.
Does anyone have a clue on this? Thanks in advance!
Currently you cannot pass a variable as a serviceConnection.
Apparently the service connection name is picked up on push/commit, and whatever is there at that point is used.
E.g. if you have a $(variable), it will pick up the literal $(variable) instead of the value.
The workaround I have used so far is to use a template for the steps at each stage and pass the serviceConnection in as a parameter, as in the example and template sketch below.
Refer to https://github.com/venura9/azure-devops-yaml/blob/master/azure-pipelines.yml for a sample implementation. You are more than welcome to pull request with updates.
- stage: DEV
  displayName: 'DEV(CD)'
  condition: and(succeeded('BLD'), eq(variables['Build.SourceBranch'], 'refs/heads/develop'))
  dependsOn:
  - BLD
  variables:
    stage: 'dev'
  jobs:
  - job: Primary_AustraliaSouthEast
    pool:
      vmImage: $(vmImage)
    steps:
    - template: 'pipelines/infrastructure/deploy.yml'
      parameters: {type: 'primary', spn: 'SuperServicePrincipal', location: 'australiasoutheast'}
    - template: 'pipelines/application/deploy.yml'
      parameters: {type: 'primary', spn: 'SuperServicePrincipal'}
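On the template side, the parameter is expanded at compile time, so it can be used where a runtime variable cannot. A minimal sketch of what such a deploy.yml could look like (the task and its inputs are illustrative, not taken from the linked repository):
# pipelines/infrastructure/deploy.yml (hypothetical contents)
parameters:
  type: 'primary'
  spn: ''       # service connection name, passed in from the stage
  location: ''
steps:
- task: AzurePowerShell@4
  displayName: Show Azure context (${{ parameters.type }})
  inputs:
    # template expressions are resolved at compile time, so this works
    azureSubscription: ${{ parameters.spn }}
    scriptType: inlineScript
    inline: |
      Get-AzContext
    azurePowerShellVersion: LatestVersion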
I have a pipeline with 2 stages - a build/test stage, and a Teardown stage that cleans up external resources after the build/test stage. The teardown stage depends on some state information that gets generated in the build/test stage. I'm trying to use Azure hosted agents to do this. The problem is that the way I have it now, each stage deploys a new agent, so I lose the state I need for the teardown stage.
My pipeline looks something like this:
trigger:
- master
stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - task: InstallSomeTool@1
    - script: invoke someTool
    - script: run some test
- stage: Teardown_stage
  condition: always()
  jobs:
  - job: Teardown_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke SomeTool --cleanup
The teardown stage fails because it's a brand new agent that knows nothing about the state created by the previous invoke someTool script.
I'm trying to do it this way because the Build stage creates some resources externally that I want to be cleaned up every time, even if the Build stage fails.
Is it possible to have an Azure hosted build agent persist between pipeline stages?
No, you can't. Hosted agents are all randomly assigned by the server; you cannot use any script or command to request a specific one.
Since you said that the Build_stage creates some resources externally, you want those cleaned up every time.
In fact, you can run this cleanup command as the last step in the Build_stage, as sketched below. Done this way, it does not matter whether you use a hosted or a private agent.
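A minimal sketch of that approach, reusing the placeholder commands from the question: keep the cleanup as the final step of the build job and give it condition: always(), so it runs on the same agent (with the same state) even when an earlier step fails.
stages:
- stage: Build_stage
  jobs:
  - job: Build_job
    pool:
      vmImage: 'ubuntu-latest'
    steps:
    - script: invoke someTool
    - script: run some test
    # runs even if the steps above fail, on the same agent
    - script: invoke SomeTool --cleanup
      condition: always()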