I want to create a parameter in YAML deploy pipeline to let user mention the build id they want to pass for deployment while running manually.
How can I use that specific build id passed as parameter during deployment inside deployment pipeline?
Deployment pipeline resource definition is:
resources:
  pipelines:
  - pipeline: build
    source: build_pipeline_name
    trigger:
      branches:
      - master
Choosing from Resources is not an option due to access restriction on the Environments we are using in pipeline.
If you want to download just a specific artifact, you won't be able to do this using just the resource, as you cannot parameterize resources. However, if this is your goal, you can parameterize this task:
parameters:
- name: runId
  type: number

# Download an artifact named 'WebApp' from a specific build run to 'bin' in $(Build.SourcesDirectory)
steps:
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    artifact: 'WebApp'
    path: '$(Build.SourcesDirectory)/bin'
    project: 'FabrikamFiber'
    pipeline: 12
    runVersion: 'specific'
    runId: ${{ parameters.runId }}
However, I'm not sure if I understood you.
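As written, the snippet above always demands a runId. A possible refinement, sketched here under the assumption that your organization supports ${{ if }}/${{ else }} template expressions, is to give the parameter a default of 0 and fall back to the latest run when no specific build id is entered at queue time:

```yaml
parameters:
- name: runId
  type: number
  default: 0   # 0 means "no specific run chosen"

steps:
- task: DownloadPipelineArtifact@2
  inputs:
    source: 'specific'
    artifact: 'WebApp'
    path: '$(Build.SourcesDirectory)/bin'
    project: 'FabrikamFiber'
    pipeline: 12
    # Resolved at compile time, so the task only receives runId when one was given
    ${{ if eq(parameters.runId, 0) }}:
      runVersion: 'latest'
    ${{ else }}:
      runVersion: 'specific'
      runId: ${{ parameters.runId }}
```

Since parameters are expanded at compile time, this works for manual runs where the user fills in the build id in the run dialog.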
I have the following problem: I have specified a pipeline yaml which runs on a self-hosted agent for the first stage. For the second stage I haven't declared an agent. It should default to a Microsoft-hosted agent (and does for all other pipelines with the same syntax and use-case). However, when initializing the agent at execution time, the chosen agent is of the self-hosted variety.
I've tried creating a fork, deleting the repo and re-initializing it. Both to no avail.
When I specify an agent for the stage, like so (as described in this ms documentation):
pool:
  vmImage: 'ubuntu-latest'
I get the following message:
There was a resource authorization issue: "The pipeline is not valid. Could not find a pool with name Azure Pipelines. The pool does not exist or has not been authorized for use. For authorization details, refer to https://aka.ms/yamlauthz."
When I click authorize resources, I get the message that it was successful. However, the authorization issue keeps returning.
azure-pipeline.yaml
resources:
  repositories:
  - repository: templates
    type: git
    name: azure-devops-reusable-tasks

variables:
- group: mijnverzekeringsvoorwaarden-api-service_ONT # for deployments (build scripts only target ONT)
- template: add-variable-branch-name.yml@templates

name: 1.0.0$(branchName)-$(Build.BuildId)
stages:
- stage: Build
  displayName: Build
  jobs:
  - job: Build
    displayName: Build
    pool:
      name: Default
      demands: companyName.Docker
    steps:
    - template: maven.yml@templates
    - publish: $(System.DefaultWorkingDirectory)
      artifact: mijnverzekeringsvoorwaarden-api-service
    - template: publish-surefire-test-results.yml@templates
    - template: publish-jacoco-code-coverage.yml@templates
    - template: git-set-version-tag.yml@templates
- stage: Scan
  dependsOn: Build
  jobs:
  - job: Scan
    displayName: Dependency scan
    steps:
    - template: owasp-scan.yml@templates
And this is the template in question being called in the second stage, the one that is supposed to run on a Microsoft-hosted machine:
owasp-scan.yml@templates
steps:
- download: current
- task: dependency-check-build-task@6
  displayName: Owasp dependency check
  inputs:
    projectName: $(Build.Repository.Name)
    scanPath: $(Pipeline.Workspace)
    format: 'HTML'
Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
Update:
Added a screenshot of the agent pools and the agents in the Microsoft-hosted pool.
Tried referencing the agent pool with the name given in the screenshot, like so:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'ubuntu-latest'
This gives the following error:
##[error]No agent found in pool Hosted Ubuntu 1604 which satisfies the following demand: ubuntu-latest. All demands: ubuntu-latest, Agent.Version -gtVersion 2.188.2
When using:
pool:
  name: 'Hosted Ubuntu 1604'
  vmImage: 'Hosted Ubuntu 1604 2'
I get: Encountered error(s) while parsing pipeline YAML: /azure-pipelines.yml (Line: 37, Col: 20): Invalid demand 'Hosted Ubuntu 1604 2'. The demand should be in the format '<NAME>' to test existence, and '<NAME> -equals <VALUE>' to test for a specific value. For example, 'VISUALSTUDIO' or 'agent.os -equals Windows_NT'.
Update 2:
I compared projects. Another project did have the Azure Pipelines agent pool. I approved this project (the one which has the issue) to also have access to this pool. When I explicitly define it in the yaml, it uses this (cloud hosted) pool. However, when I provide no pool information it keeps defaulting to the self-hosted variant. To re-iterate, this only happens for this particular pipeline.
Any insight as to why it is defaulting to the wrong pool for this particular pipeline?
You could try to change the value of the Default agent pool for YAML to Azure Pipelines in the UI.
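The YAML equivalent of that UI setting is to name the Microsoft-hosted pool explicitly on the stage or job. A minimal sketch ('Azure Pipelines' is the standard name of the Microsoft-hosted pool; adjust the image to your needs):

```yaml
pool:
  name: 'Azure Pipelines'    # the Microsoft-hosted pool
  vmImage: 'ubuntu-latest'   # image to run on within that pool
```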
The problem has been circumvented. When explicitly defining the agent from the agent pool, with correct nomenclature, it does pick an agent from this pool.
However, this does not explain why this behavior is exhibited for this pipeline, while all other pipelines pick an agent from the same pool without defining this. These default to this (the same) pool.
In this specific case the hack is defining the pool like so:
pool:
  name: 'Hosted Ubuntu 1604'
I have a pipeline I created in Azure DevOps that builds an Angular application and runs some tests on it. I separated the pipeline into two jobs, Build and Test. The Build job completes successfully. The Test job checks out the code from Git again even though the Build job already did it. The Test job needs the files created in the Build job, such as the npm packages, in order to run successfully.
Here is my YAML file:
trigger:
- develop

variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm
  system.debug: false

stages:
- stage: Client
  pool:
    name: Windows
  jobs:
  - job: Build
    displayName: Build Angular
    steps:
    - template: templates/angularprodbuild.yml
  - job: Test
    displayName: Run Unit and Cypress Tests
    dependsOn: Build
    steps:
    - template: templates/angularlinttest.yml
    - template: templates/angularunittest.yml
    - template: templates/cypresstest.yml
My agent pool is declared at the stage level so both jobs would be using the same agent. Also I added a dependsOn to the Test job to ensure the same agent would be used. After checking logs, the same agent is in fact used.
How can I get the Test job to use the files that were created in the Build job and not checkout the code again? I'm using Angular 11 and Azure DevOps Server 2020 if that helps.
If you are using a self-hosted agent, by default, the workspace is not cleaned in between two consecutive jobs. As a result, you can do incremental builds and deployments, provided that tasks are implemented to make use of that.
So, we could use - checkout: none in the Test job to skip checking out the same code again:
- job: Test
  displayName: Run Unit and Cypress Tests
  dependsOn: Build
  steps:
  - checkout: none
  - template: templates/angularlinttest.yml
But just as Bo Søborg Petersen said, dependsOn does not ensure that the same agent is used. You need to add a User Capability to that specific build agent, and then, in the build definition, put that capability as a demand:
pool:
  name: string
  demands: string | [ string ]
Please check this document How to send TFS build to a specific agent or server for some more info.
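As a concrete illustration of the schema above (the capability name myCustomCapability is hypothetical; you would first add it as a User Capability on the target agent):

```yaml
pool:
  name: Default                              # your self-hosted pool
  demands: myCustomCapability -equals true   # routes the job to the agent carrying this capability
```

With this demand on both jobs, the Test job is guaranteed to land on the same agent as the Build job, so the workspace files survive between them.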
In the test job, we could use predefined variables like $(System.DefaultWorkingDirectory) to access the files for Node and npm.
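For example, a Test job that reuses the workspace might look like this sketch (the npm test command is an assumption; substitute your own scripts):

```yaml
- job: Test
  dependsOn: Build
  steps:
  - checkout: none   # keep the files the Build job left in the workspace
  - script: npx ng test --watch=false
    workingDirectory: $(System.DefaultWorkingDirectory)
    displayName: Run unit tests against the Build job's output
```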
On the other hand, if you are using a hosted agent, we need to use the PublishBuildArtifacts task to publish the artifact to Azure Pipelines, so that we can use the DownloadBuildArtifacts task to download the artifacts in the next job:
jobs:
- job: Build
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - script: npm test
  - task: PublishBuildArtifacts@1
    inputs:
      pathtoPublish: '$(System.DefaultWorkingDirectory)'
      artifactName: WebSite

# download the artifact and deploy it only if the build job succeeded
- job: Deploy
  dependsOn: Build
  condition: succeeded()
  pool:
    vmImage: 'ubuntu-16.04'
  steps:
  - checkout: none # skip checking out the default repository resource
  - task: DownloadBuildArtifacts@0
    displayName: 'Download Build Artifacts'
    inputs:
      artifactName: WebSite
      downloadPath: $(System.DefaultWorkingDirectory)
You could check Official documents and examples for some more details.
Assuming the agent is cleaned between jobs, to access the files you need to create an artifact during the Build job and then download it during the Test job.
Also, DependsOn does not ensure that the same agent is used, only that the second job runs after the first job.
You can also set the second job to not check out the code with - checkout: none.
We are using Azure DevOps for our CI/CD. Typically, all CI pipelines are written as Azure YAML files, while the release jobs have to be created on the DevOps portal (using the GUI). One of the general principles we want to follow is to have everything as code.
Questions:
Can Azure release pipelines be created as code (YAML, etc.)?
I spent some time on it and it seems support is limited. Please correct me if I am wrong here.
Release pipelines have numerous features like approvals, auto triggers, release triggers, etc. Is this possible with release pipelines in YAML?
Azure deployments can be configured as code. You can add multiple release triggers (pipeline, pull request, etc.). Approvals can be configured per environment (https://www.programmingwithwolfgang.com/deployment-approvals-yaml-pipeline/); then reference the environment in your pipeline.
The example below is triggered when its own yaml code changes and when the Build pipeline completes.
trigger:
  branches:
    include:
    - myBranch
  paths:
    include:
    - '/Deployment/azure-deploy.yml'

resources:
  pipelines:
  - pipeline: BuildPipeline
    project: myProjectName
    source: 'myBuildPipeline'
    trigger:
      enabled: true
jobs:
- deployment: Deploy
  displayName: Deploy
  environment: $(environment)
  pool:
    vmImage: 'windows-latest'
  strategy:
    runOnce:
      deploy:
        steps:
        - task: AzureRmWebAppDeployment@4
          displayName: Deploy Web App
          inputs:
            ConnectionType: 'AzureRM'
            azureSubscription: $(azureSubscription)
            appType: 'webApp'
            appSettings: '-SETTING-1 "$(mySetting1)"'
            WebAppName: '$(myAppName)'
            package: '$(Pipeline.Workspace)/**/*.zip'
I am exploring Azure Pipelines as Code and would like to understand how to make use of "deploymentMode" for validating and deploying ARM templates in each Azure environment.
I already have release pipelines created in Azure DevOps via the visual designer for the deployment tasks, with one main ARM template and multiple parameter JSON files corresponding to each environment in Azure. Each of those pipelines has two stages: one for validation of the ARM templates and a second for the deployment.
I am now trying to convert those release pipelines to Azure Pipelines as Code in YAML format and would like to create one YAML file consolidating the deployment validation tasks (deploymentMode: 'Validation') for each environment first, followed by the actual deployment (deploymentMode: 'Incremental').
1) Is this the right strategy for carrying out Azure DevOps Pipelines as code for a multi-environment release cycle?
2) Will the YAML have two stages (one for validation and another for deployment), with each stage having many tasks (one task per environment)?
3) Do I need to create each Azure environment first in the 'Environments' section under Pipelines and configure the virtual machines for managing the deployment of the various environments via the YAML file?
Thanks.
According to your requirements, you could configure virtual machines for each Azure environment under Azure Pipelines -> Environments. Then you could reference the environments in the YAML code.
Here are the steps, you could refer to them.
Step1: Configure virtual machine for each Azure Environments.
Note: If the virtual machines are under the same environment, you need to add tags for each virtual machine. Tags can be used to distinguish virtual machines in the same environment.
Step2: You could create the Yaml file and add multiple stages (e.g. validation stage and deployment stage) in it. Each stage can use the environments and contain multiple tasks.
Here is an example:
trigger:
- master

stages:
- stage: validation
  jobs:
  - deployment: validation
    displayName: validation ARM
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
- stage: deployment
  jobs:
  - deployment: deployment
    displayName: deploy
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
Here are the docs about using multiple stages and virtual machines.
Hope this helps.
When using multistage pipelines from yaml in Azure Pipelines and every stage is deploying resources to a separate environment, I'd like to use a dedicated service connection for each stage. In my case every stage is making use of the same deployment jobs, i.e. yaml templates. So I'm using a lot of variables that have specific values dependent on the environment. This works fine, except for the service connection.
Ideally, the variable that contains the service connection name, is added to the stage level like this:
stages:
- stage: Build
  # (Several build-stage specific jobs here)

- stage: DeployToDEV
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_DEV' # This seems like a logical solution
  jobs:
  # This job would ideally reside in a yaml template
  - job: DisplayDiagnostics
    pool:
      vmImage: 'Ubuntu-16.04'
    steps:
    - checkout: none
    - task: AzurePowerShell@4
      inputs:
        azureSubscription: $(AzureServiceConnection)
        scriptType: inlineScript
        inline: |
          Get-AzContext
        azurePowerShellVersion: LatestVersion

- stage: DeployToTST
  dependsOn: Build
  condition: succeeded()
  variables:
    AzureServiceConnection: 'AzureSubscription_TST' # Same variable, different value
  jobs:
  # (Same contents as DeployToDEV stage)
When this code snippet is executed, it results in the error message:
There was a resource authorization issue: "The pipeline is not valid.
Job DisplayDiagnostics: Step AzurePowerShell input
ConnectedServiceNameARM references service connection
$(AzureServiceConnection) which could not be found. The service
connection does not exist or has not been authorized for use. For
authorization details, refer to https://aka.ms/yamlauthz.
So, it probably can't expand the variable AzureServiceConnection soon enough when the run is started. But if that's indeed the case, then what's the alternative solution to make use of separate service connections for every stage?
One option that works for sure is setting the service connection name directly to all tasks, but that would involve duplicating identical yaml tasks for every stage, which I obviously want to avoid.
Anyone has a clue on this? Thanks in advance!
Currently you cannot pass a variable as a serviceConnection.
Apparently the service connection name is resolved when the YAML is compiled on push/commit, and whatever literal value is there is what gets picked up.
E.g. if you have a $(variable), it will pick up the literal string $(variable) instead of its value.
The workaround I have used so far is to use a template for the steps at each stage and pass the service connection in as a different parameter per stage.
Refer: https://github.com/venura9/azure-devops-yaml/blob/master/azure-pipelines.yml for a sample implementation. you are more than welcome to pull request with updates.
- stage: DEV
  displayName: 'DEV(CD)'
  condition: and(succeeded('BLD'), eq(variables['Build.SourceBranch'], 'refs/heads/develop'))
  dependsOn:
  - BLD
  variables:
    stage: 'dev'
  jobs:
  - job: Primary_AustraliaSouthEast
    pool:
      vmImage: $(vmImage)
    steps:
    - template: 'pipelines/infrastructure/deploy.yml'
      parameters: {type: 'primary', spn: 'SuperServicePrincipal', location: 'australiasoutheast'}
    - template: 'pipelines/application/deploy.yml'
      parameters: {type: 'primary', spn: 'SuperServicePrincipal'}