Azure pipeline defaults to wrong agent - azure-devops

I have the following problem: I have specified a pipeline YAML that runs on a self-hosted agent for the first stage. For the second stage I haven't declared an agent; it should default to a Microsoft-hosted agent (and does for all other pipelines with the same syntax and use case). However, when the agent is initialized at execution time, the chosen agent is of the self-hosted variety.
I've tried creating a fork, and deleting the repo and re-initializing it. Both to no avail.
When I specify an agent for the stage, like so (as described in the MS documentation):
pool:
  vmImage: 'ubuntu-latest'
I get the following message:
There was a resource authorization issue: "The pipeline is not valid. Could not find a pool with name Azure Pipelines. The pool does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
When I click "Authorize resources", I get a message that it was successful. However, the authorization issue keeps returning.
azure-pipeline.yaml
resources:
  repositories:
    - repository: templates
      type: git
      name: azure-devops-reusable-tasks

variables:
  - group: mijnverzekeringsvoorwaarden-api-service_ONT # for deployments (build scripts only target ONT)
  - template: add-variable-branch-name.yml@templates

name: 1.0.0$(branchName)-$(Build.BuildId)
stages:
  - stage: Build
    displayName: Build
    jobs:
      - job: Build
        displayName: Build
        pool:
          name: Default
          demands: companyName.Docker
        steps:
          - template: maven.yml@templates
          - publish: $(System.DefaultWorkingDirectory)
            artifact: mijnverzekeringsvoorwaarden-api-service
          - template: publish-surefire-test-results.yml@templates
          - template: publish-jacoco-code-coverage.yml@templates
          - template: git-set-version-tag.yml@templates
  - stage: Scan
    dependsOn: Build
    jobs:
      - job: Scan
        displayName: Dependency scan
        steps:
          - template: owasp-scan.yml@templates
And this is the template in question, called in the second stage: the one that is supposed to run on a Microsoft-hosted machine:
owasp-scan.yml
steps:
  - download: current
  - task: dependency-check-build-task@6
    displayName: OWASP dependency check
    inputs:
      projectName: $(Build.Repository.Name)
      scanPath: $(Pipeline.Workspace)
      format: 'HTML'
Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
Update:
Added a screenshot of the agent pools and the agents in the Microsoft-hosted pool.
Tried referencing the agent pool by the name shown in the screenshot, like so:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'ubuntu-latest'
This gives the following error:
##[error]No agent found in pool Hosted Ubuntu 1604 which satisfies the following demand: ubuntu-latest. All demands: ubuntu-latest, Agent.Version -gtVersion 2.188.2
When using:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'Hosted Ubuntu 1604 2'
I get: Encountered error(s) while parsing pipeline YAML: /azure-pipelines.yml (Line: 37, Col: 20): Invalid demand 'Hosted Ubuntu 1604 2'. The demand should be in the format '<NAME>' to test existence, and '<NAME> -equals <VALUE>' to test for a specific value. For example, 'VISUALSTUDIO' or 'agent.os -equals Windows_NT'.
Update 2:
I compared projects. Another project did have the Azure Pipelines agent pool. I approved this project (the one with the issue) to also have access to this pool. When I explicitly define it in the YAML, it uses this (cloud-hosted) pool. However, when I provide no pool information it keeps defaulting to the self-hosted variant. To reiterate, this only happens for this particular pipeline.

Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
You could try changing the value of the "Default agent pool for YAML" setting to "Azure Pipelines" in the UI:

The problem has been circumvented: when the agent pool is defined explicitly, with the correct nomenclature, the pipeline does pick an agent from that pool.
However, this does not explain why this behavior is exhibited for this pipeline, while all other pipelines pick an agent from the same pool without defining it; those default to this (the same) pool.
In this specific case the workaround is defining the pool like so:
pool:
  name: 'Hosted Ubuntu 1604'
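Applied to the pipeline above, the workaround would look roughly like this sketch (the pool name is taken from the screenshot mentioned earlier and may differ in your organization):

```yaml
- stage: Scan
  dependsOn: Build
  jobs:
    - job: Scan
      displayName: Dependency scan
      pool:
        name: 'Hosted Ubuntu 1604'   # explicit hosted pool name; no vmImage, so no extra demand
      steps:
        - template: owasp-scan.yml@templates
```

Note that specifying only `name:` avoids the `ubuntu-latest` demand that caused the earlier "No agent found in pool" error.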

Related

use files checked out from previous job in another job in an Azure pipeline

I have a pipeline I created in Azure DevOps that builds an Angular application and runs some tests on it. I separated the pipeline into two jobs, Build and Test. The Build job completes successfully. The Test job checks out the code from Git again even though the Build job already did. The Test job needs the files created in the Build job, such as the npm packages, in order to run successfully.
Here is my YAML file:
trigger:
  - develop

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
  system.debug: false

stages:
  - stage: Client
    pool:
      name: Windows
    jobs:
      - job: Build
        displayName: Build Angular
        steps:
          - template: templates/angularprodbuild.yml
      - job: Test
        displayName: Run Unit and Cypress Tests
        dependsOn: Build
        steps:
          - template: templates/angularlinttest.yml
          - template: templates/angularunittest.yml
          - template: templates/cypresstest.yml
My agent pool is declared at the stage level, so both jobs should use the same agent. I also added dependsOn to the Test job to ensure the same agent would be used. After checking the logs, the same agent is in fact used.
How can I get the Test job to use the files that were created in the Build job instead of checking out the code again? I'm using Angular 11 and Azure DevOps Server 2020, if that helps.
If you are using a self-hosted agent, by default, none of the workspace is cleaned between two consecutive jobs. As a result, you can do incremental builds and deployments, provided that tasks are implemented to make use of that.
So we could use `- checkout: none` in the next job to skip checking out the same code again:
- job: Test
  displayName: Run Unit and Cypress Tests
  dependsOn: Build
  steps:
    - checkout: none
    - template: templates/angularlinttest.yml
But just as Bo Søborg Petersen said, dependsOn does not ensure that the same agent is used. You need to add a User Capability to that specific build agent, and then in the build definition put that capability as a demand:
pool:
  name: string
  demands: string | [ string ]
Please check this document How to send TFS build to a specific agent or server for some more info.
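A concrete sketch of that schema, assuming you added a user capability named `IsBuildServer` to the agent (the capability name and agent name below are hypothetical, chosen just for illustration):

```yaml
pool:
  name: Windows                     # the self-hosted pool from the question
  demands:
    - IsBuildServer                 # hypothetical user capability; must exist on the agent
    # alternatively, pin the job to a single agent by name:
    # - Agent.Name -equals MyBuildAgent01
```

With a demand that only one agent satisfies, both jobs are forced onto the same machine, which is what makes the shared-workspace approach reliable.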
In the Test job, we could then use predefined variables such as $(System.DefaultWorkingDirectory) to access the files for Node and npm.
On the other hand, if you are using a hosted agent, we need to use the PublishBuildArtifacts task to publish the artifact, so that we can use the DownloadBuildArtifacts task to download the artifacts in the next job:
jobs:
  - job: Build
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
      - script: npm test
      - task: PublishBuildArtifacts@1
        inputs:
          pathtoPublish: '$(System.DefaultWorkingDirectory)'
          artifactName: WebSite

  # download the artifact and deploy it only if the build job succeeded
  - job: Deploy
    pool:
      vmImage: 'ubuntu-16.04'
    steps:
      - checkout: none # skip checking out the default repository resource
      - task: DownloadBuildArtifacts@0
        displayName: 'Download Build Artifacts'
        inputs:
          artifactName: WebSite
          downloadPath: $(System.DefaultWorkingDirectory)
You could check the official documents and examples for more details.
Assume that the agent is cleaned between jobs; to access the files, you then need to create an artifact during the Build job and download it during the Test job.
Also, dependsOn does not ensure that the same agent is used, only that the second job runs after the first job.
You can also set the second job to not check out the code with `- checkout: none`.

Can I use 2 agent pools in my azure pipelines?

I've got a single self-hosted agent. It's used as a kind of deployment agent: all release versions of our software get built by this agent and then copied to a network location.
Question: is there a way I can utilize both an agent from the Microsoft-hosted 'Azure Pipelines' pool and my own self-hosted pool in my pipelines?
EDIT
Unfortunately this is not possible at the moment.
This is why you should upvote the feature request:
https://developercommunity.visualstudio.com/t/allow-agent-pools-to-contain-microsoft-hosted-and/396893
This is not possible. There is a ticket on developer community asking for similar functionality but it is already closed.
There is another ticket Allow agent pools to contain Microsoft hosted and self-hosted agents which refer to similar case, it is open but MS is silent there.
Which benefits do you want to achieve?
Basically, you can use several agent pools in one build/release definition: you just split the definition into several jobs and assign the needed agent pool to the corresponding job.
If you want to dynamically assign different pools from one pipeline to run the same build steps, that is not possible (as Krzysztof mentioned).
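The job-level split described above could be sketched like this (pool and job names are placeholders, not from the question):

```yaml
jobs:
  - job: BuildOnHosted
    pool:
      vmImage: 'windows-latest'     # Microsoft-hosted agent
    steps:
      - script: echo building on a hosted agent

  - job: DeployOnSelfHosted
    dependsOn: BuildOnHosted
    pool: MySelfHostedPool          # placeholder name of the self-hosted pool
    steps:
      - script: echo deploying from the self-hosted agent
```

Each job gets its own pool, so the build can run on a hosted agent while the copy-to-network-location step runs on the self-hosted one.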
You can do a hacky thing and use multiple jobs/stages that use different pools. You just need to skip a stage depending on whether it is a release version. Note that this pipeline skeleton is not tested.
variables:
  # note: eq() does not support wildcards, so match the branch prefix instead
  ${{ if startsWith(variables['Build.SourceBranch'], 'refs/heads/release/') }}:
    release_build: True

stages:
  - stage: normal
    condition: eq(variables['release_build'], False)
    pool:
      vmImage: 'windows-latest'
    jobs:
      - job: Builds
        steps:
          - template: build.yaml
  - stage: release
    condition: eq(variables['release_build'], True)
    pool: My-agent
    jobs:
      - job: Builds
        steps:
          - template: build.yaml

Conditional Approval gate in deployment jobs in azure pipelines

Since conditional approval doesn't work in Azure YAML pipelines, I've been trying a workaround using two environments in the deployment stage, shown in the YAML below.
Using a condition on the job and a variable, I want to check whether approval is required or not.
But when I run the pipeline, I see it's still asking for approval even though the condition is not satisfied for the deployment job that requires approval. After approval, though, the job that required approval is skipped as expected. I don't understand why it is asking for approval.
Are approvals executed for a stage before job conditions are evaluated?
Did I miss something in the YAML?
trigger:
  - none

variables:
  - group: pipelinevariables
  # Agent VM image name
  - name: vmImageName
    value: 'ubuntu-latest'

stages:
  - stage: Deploy
    displayName: Deploy stage
    jobs:
      - deployment: DeployWebWithoutApprval
        displayName: deploy Web App without approval
        condition: and(succeeded(), ne(variables.DEV_APPROVAL_REQUIRED, 'true'))
        pool:
          vmImage: $(vmImageName)
        # creates an environment if it doesn't exist
        environment: 'app-dev'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo No approval
      - deployment: DeployWebWithApprval
        displayName: deploy Web App with approval
        dependsOn: DeployWebWithoutApprval
        condition: and(eq(dependencies.DeployWebWithoutApprval.result, 'Skipped'), eq(variables.DEV_APPROVAL_REQUIRED, 'true'))
        pool:
          vmImage: $(vmImageName)
        # creates an environment if it doesn't exist
        environment: 'app-dev-with-approval'
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo requires approval
Update:
This works if I define two stages with the same set of conditions, but that would show two stages on the build details page, which we don't want.
Another question:
Can we conditionally insert a stage template based on a variable value from a variable group?
stages:
  - ${{ if eq(variables['Policy_Approval_Required'], 'true') }}:
Inserting a template conditionally is supported; you can check the following link: https://github.com/microsoft/azure-pipelines-agent/issues/1749. Check the following example:
- ${{ if eq(variables['Build.Reason'], 'PullRequest') }}:
    - template: sharedstep.yml@templates
      parameters:
        value: true
I have had the exact same issue with approval gates and conditions. It is unfortunately not supported as of yet, but it has been reported to Microsoft (here). There is also this issue.
It seems to be an issue with the order in which approvals and conditions are evaluated.
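The two-stage workaround mentioned in the question's update could be sketched like this (untested; it reuses the environment names and variable from the question, and stage-level conditions are evaluated before the stage's environment approval is requested):

```yaml
stages:
  - stage: DeployWithoutApproval
    condition: ne(variables.DEV_APPROVAL_REQUIRED, 'true')
    jobs:
      - deployment: DeployWeb
        environment: 'app-dev'                 # environment without an approval check
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo No approval

  - stage: DeployWithApproval
    condition: eq(variables.DEV_APPROVAL_REQUIRED, 'true')
    jobs:
      - deployment: DeployWeb
        environment: 'app-dev-with-approval'   # environment with an approval check
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo requires approval
```

The downside, as noted above, is that both stages show up on the build details page even though only one ever runs.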

Azure DevOps Pipeline As Code - Validate ARM Template

I am exploring Azure Pipelines as code and would like to understand how to make use of "deploymentMode" for validating and deploying ARM templates for each Azure environment.
I already have release pipelines created in Azure DevOps via the visual designer for deployment tasks, with one main ARM template and multiple parameter JSON files corresponding to each environment in Azure. Each of those pipelines has two stages: one for validation of the ARM templates and a second for deployment.
I am now converting those release pipelines to YAML and would like to create one YAML file consolidating the deployment validation tasks (deploymentMode: 'Validation') for each environment first, followed by the actual deployment (deploymentMode: 'Incremental').
1) Is this the right strategy for a multi-environment release cycle?
2) Will the YAML have two stages (one for validation and another for deployment), with each stage having many tasks (one task per environment)?
3) Do I need to create each Azure environment first in the 'Environments' section under Pipelines, and configure the virtual machine for managing the deployment of the various environments via the YAML file?
Thanks.
According to your requirements, you could configure virtual machines for each Azure environment under Azure Pipelines -> Environments, and then reference the environments in the YAML code.
Here are the steps; you could refer to them.
Step 1: Configure a virtual machine for each Azure environment.
Note: if the virtual machines are under the same environment, you need to add tags to each virtual machine. Tags can be used to distinguish virtual machines in the same environment.
Step 2: Create the YAML file and add multiple stages (e.g. a validation stage and a deployment stage) to it. Each stage can use the environments and contain multiple tasks.
Here is an example:
trigger:
  - master

stages:
  - stage: validation
    jobs:
      - deployment: validation
        displayName: validation ARM
        environment:
          name: testmachine
          resourceType: VirtualMachine
          tags: tag
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureResourceManagerTemplateDeployment@3
                  ...
                - task:
                  ...
  - stage: deployment
    jobs:
      - deployment: deployment
        displayName: deploy
        environment:
          name: testmachine
          resourceType: VirtualMachine
          tags: tag
        strategy:
          runOnce:
            deploy:
              steps:
                - task: AzureResourceManagerTemplateDeployment@3
                  ...
                - task:
                  ...
Here are the docs about using multiple stages and virtual machines.
Hope this helps.

How to use different Service Connection for every stage in Azure Pipelines?

When using multistage YAML pipelines in Azure Pipelines, where every stage deploys resources to a separate environment, I'd like to use a dedicated service connection for each stage. In my case every stage makes use of the same deployment jobs, i.e. YAML templates, so I'm using a lot of variables with environment-specific values. This works fine, except for the service connection.
Ideally, the variable that contains the service connection name is added at the stage level, like this:
stages:
  - stage: Build
    # (Several build-stage specific jobs here)
  - stage: DeployToDEV
    dependsOn: Build
    condition: succeeded()
    variables:
      AzureServiceConnection: 'AzureSubscription_DEV' # This seems like a logical solution
    jobs:
      # This job would ideally reside in a yaml template
      - job: DisplayDiagnostics
        pool:
          vmImage: 'Ubuntu-16.04'
        steps:
          - checkout: none
          - task: AzurePowerShell@4
            inputs:
              azureSubscription: $(AzureServiceConnection)
              scriptType: inlineScript
              inline: |
                Get-AzContext
              azurePowerShellVersion: LatestVersion
  - stage: DeployToTST
    dependsOn: Build
    condition: succeeded()
    variables:
      AzureServiceConnection: 'AzureSubscription_TST' # Same variable, different value
    jobs:
      # (Same contents as DeployToDEV stage)
When this code snippet is executed, it results in the error message:
There was a resource authorization issue: "The pipeline is not valid. Job DisplayDiagnostics: Step AzurePowerShell input ConnectedServiceNameARM references service connection $(AzureServiceConnection) which could not be found. The service connection does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
So it probably can't expand the variable AzureServiceConnection early enough when the run is started. But if that's indeed the case, what's the alternative for using separate service connections for every stage?
One option that works for sure is setting the service connection name directly on all tasks, but that would involve duplicating identical YAML tasks for every stage, which I obviously want to avoid.
Anyone have a clue on this? Thanks in advance!
Currently you cannot pass a variable as a serviceConnection.
The service connection name is picked up at compile time, and whatever is there is used literally: e.g. if you have $(variable), it will pick up the string $(variable) instead of the value.
The workaround I have used so far is to use a template for the steps at each stage and pass a different parameter with the serviceConnection.
Refer to https://github.com/venura9/azure-devops-yaml/blob/master/azure-pipelines.yml for a sample implementation; you are more than welcome to pull request with updates.
- stage: DEV
  displayName: 'DEV(CD)'
  condition: and(succeeded('BLD'), eq(variables['Build.SourceBranch'], 'refs/heads/develop'))
  dependsOn:
    - BLD
  variables:
    stage: 'dev'
  jobs:
    - job: Primary_AustraliaSouthEast
      pool:
        vmImage: $(vmImage)
      steps:
        - template: 'pipelines/infrastructure/deploy.yml'
          parameters: { type: 'primary', spn: 'SuperServicePrincipal', location: 'australiasoutheast' }
        - template: 'pipelines/application/deploy.yml'
          parameters: { type: 'primary', spn: 'SuperServicePrincipal' }
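For completeness, the called template would declare the service connection as a parameter and consume it via a compile-time template expression, roughly like this (a sketch, not taken from the linked repo; the task echoes the question's AzurePowerShell usage):

```yaml
# pipelines/infrastructure/deploy.yml (sketch)
parameters:
  - name: spn          # service connection name, passed per stage
    type: string
  - name: type
    type: string
    default: 'primary'

steps:
  - task: AzurePowerShell@4
    inputs:
      # template expression, resolved at compile time, so the literal
      # connection name reaches the task (unlike $(variable) macros)
      azureSubscription: ${{ parameters.spn }}
      scriptType: inlineScript
      inline: |
        Get-AzContext
      azurePowerShellVersion: LatestVersion
```

This works because `${{ }}` expressions are expanded before resource authorization runs, which is exactly the point where the `$(AzureServiceConnection)` macro failed.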