I have an Azure DevOps pipeline which runs npm run based on some runtime parameters.
Is there a possibility to trigger, for example, 3 jobs in the same pipeline, each job with different runtime parameters?
Thank you
Is there a possibility to trigger, for example, 3 jobs in the same pipeline, each job with different runtime parameters?
Yes. You can define an object-type parameter and use an each expression to iterate over its values.
Here is an example:
parameters:
- name: tests
  type: object
  default: [test1, test2, test3]

jobs:
- ${{ each test in parameters.tests }}:
  - job:
    steps:
    - script: echo ${{ test }}
Result: the loop expands into three jobs, one for each value in the tests parameter.
For more detailed info, you can refer to this doc: Loop through parameters
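If you later need to reference the generated jobs (for example in a dependsOn), a variant that names each job after its loop value may help. A minimal sketch, assuming the same tests parameter as above:

jobs:
- ${{ each test in parameters.tests }}:
  # Each job gets a unique, predictable name derived from the loop value
  - job: ${{ test }}
    displayName: Run ${{ test }}
    steps:
    - script: echo ${{ test }}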
I have an Azure DevOps deployment YAML pipeline that creates an Azure App Service and deploys code to it. The actual pipeline is more complex, but I am simplifying it for this question.
Currently my pipeline can successfully deploy to a specific Azure subscription (service connection) with resource names defined in variables.
I need to parametrize the pipeline so that it can deploy to several different environments (i.e., different Azure subscriptions) using multiple service connections. Each environment has a different Azure resource naming convention.
Is there any way to read the values of pipeline variables from an XML or JSON file? That way I could keep one config file per environment and store them as part of my repository.
Is this the right approach for configuring a multi-environment deployment pipeline?
You can use variable templates. Another interesting link: Learn more about variable reuse with templates.
Here I have this flat folder structure (for the clarity of the sample):
.
| deploy-app.job.yaml
| deploy-app.pipeline.yaml
| variables.dev.yaml
| variables.prod.yaml
So here we're trying to run the reusable job deploy-app.job.yaml with different variable sets.
I've defined some variables in each variables.{env}.yaml file:
# variables.dev.yaml
variables:
  vmImage: ubuntu-20.04
  serviceConnection: dev-service-connection

# variables.prod.yaml
variables:
  vmImage: ubuntu-20.04
  serviceConnection: prod-service-connection
The deploy-app.job.yaml file accepts a parameter that allows injecting a variable template:
# deploy-app.job.yaml
parameters:
- name: envVariablesTemplate
  type: string

jobs:
- deployment: deploy
  variables:
  # Inject the variable template here
  - template: ${{ parameters.envVariablesTemplate }}
  pool:
    # Use the variable from the template
    vmImage: ${{ variables.vmImage }}
  strategy:
    runOnce:
      deploy:
        steps:
        - task: AzureCLI@2
          displayName: Hello from azure cli
          inputs:
            # Use the variable from the template
            azureSubscription: ${{ variables.serviceConnection }}
            scriptType: pscore
            scriptLocation: inlineScript
            inlineScript: echo 'Hello from azure cli'
In the main pipeline, I can create different stages and inject the desired variables:
# deploy-app.pipeline.yaml
stages:
- stage: dev
  condition: succeeded()
  jobs:
  - template: ./deploy-app.job.yaml
    parameters:
      envVariablesTemplate: ./variables.dev.yaml

- stage: prod
  dependsOn: dev
  condition: succeeded()
  jobs:
  - template: ./deploy-app.job.yaml
    parameters:
      envVariablesTemplate: ./variables.prod.yaml
Based on your needs, you can add more variable templates and adopt a naming convention for them; it really depends on the complexity of your pipelines.
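For example, sticking to the variables.{env}.yaml naming convention, an object parameter can drive both the stages and the template paths, so adding an environment only means adding a file. A minimal sketch (the environments parameter is my assumption, not part of the sample above):

# deploy-app.pipeline.yaml (sketch)
parameters:
- name: environments
  type: object
  default: [dev, prod]

stages:
- ${{ each env in parameters.environments }}:
  - stage: ${{ env }}
    jobs:
    - template: ./deploy-app.job.yaml
      parameters:
        # Template path derived from the naming convention
        envVariablesTemplate: ./variables.${{ env }}.yaml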
By using XML transformation, you can perform this operation. Check the link below for the complete steps:
https://www.dragonspears.com/blog/how-to-handle-continuous-deployment-across-multiple-environments
Create different stages.
Create a build transformation using the XML transformer.
Another option is to utilize 'XML variable substitution.' Sensitive information can be stored and secured within Azure Pipelines rather than in plain-text transformation files.
The same steps described here are covered in the link above.
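As a rough illustration of the 'XML variable substitution' option, the Azure App Service deploy task can replace appSettings and connectionStrings values in config files with pipeline variables of the same name. A minimal sketch, where the service connection, app name, and package path are placeholders:

steps:
- task: AzureRmWebAppDeployment@4
  inputs:
    azureSubscription: 'my-service-connection'           # placeholder
    WebAppName: 'my-web-app'                             # placeholder
    packageForLinux: '$(Pipeline.Workspace)/drop/*.zip'  # placeholder path
    # Substitute matching appSettings/connectionStrings entries in *.config
    # with pipeline variables (e.g. secret variables) of the same name
    enableXmlVariableSubstitution: true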
How do you create a multi-stage pipeline whose stage/job names are derived from a parameter, where the stages first run in parallel and one final stage waits for all previous stages?
Here's what I've tried so far:
The multi-stage pipeline runs several stages in parallel, one per entry in a tool parameter, with dependsOn passed as a parameter. Running it in parallel for each tool, with each stage waiting on the previous stage for that tool, works smoothly.
Main template: all-wait-for-all
- ${{ each tool in parameters.Tools }}:
  - template: ../stages/all-wait-for-all.yml
    parameters:
      Tool: ${{ tool }}
stages/all-wait-for-all.yml
parameters:
- name: Tool
  type: string

stages:
- stage: ALL_WAIT_${{ parameters.Tool }}
  dependsOn:
  - PREPARE_STAGE
  - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ parameters.Tool }}
Now there should be one stage that runs only once, not per tool, but only after the individual tool stages are done. It can't be hardcoded, as there are various tools. So I hoped defining the individual wait stages in a prepare job would work out:
Main template: prepare-stage
- script: |
    toolJson=$(echo '${{ convertToJson(parameters.Tools) }}')
    tools=$(echo "$toolJson" | jq '.[]' | xargs)
    stage="ALL_WAIT"
    for tool in $tools; do
      stageName="${stage}_${tool}"
      stageWaitArray+=($stageName)
    done
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES]${stageWaitArray}"
    echo "##vso[task.setvariable variable=WAIT_ON_STAGES;isOutput=true]${stageWaitArray}"
  displayName: "Define wait stages"
  name: WaitStage
stages/one-waits-for-all.yml
stages:
- stage: ONE_WAITS
  dependsOn:
  - $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ]
which results in the error below:
Stage ONE_WAITS depends on unknown stage $[ stageDependencies.PREPARE_STAGE.PREPARE_JOB.outputs['WaitStage.WAIT_ON_STAGES'] ].
As I understand it, dependsOn cannot take dynamic $[ ] or macro $( ) expressions, which are evaluated at runtime. You can only use template expressions ${{ }}, which are evaluated at queue time.
I guess I was overthinking the solution; eventually it turned out to be pretty obvious.
The first template can be called in a loop from the main template, executing once per tool. The second template is called once and waits on the previous stages for all tools; the job/stage prefix is known, and only the tool-name suffix was unknown. So just add the stage names in a loop directly in dependsOn:
Here you go:
stages:
- stage: ONE_WAITS
  dependsOn:
  - PREPARE_STAGE
  - ${{ each tool in parameters.Tools }}:
    - OTHER_TEMPLATE_EXECUTED_FOR_ALL_TOOLS_${{ tool }}
    - ALL_WAIT_${{ tool }}
If I create a parameter, I can set its value when I run a pipeline manually, but when the pipeline runs automatically, it uses the default value. When a pipeline runs automatically (say, in response to a push to a repo), is there any way to pass it the value of a parameter?
This is the yaml file I was playing around with. The goal is to be able to control which tests get run in the pipeline.
parameters:
- name: testFiles
  type: string
  default: cypress/integration/first.spec.js

trigger:
- azure-test

pool:
  vmImage: ubuntu-latest

steps:
- task: NodeTool@0
  inputs:
    versionSpec: "10.x"
  displayName: "Install Node.js"
- script: npm install
  displayName: "npm install"
- script: npx cypress run --record --key [record key removed] --spec ${{ parameters.testFiles }}
  displayName: "run cypress"
When a pipeline runs automatically (say, in response to a push to a repo), is there any way to pass it the value of a parameter?
When a pipeline is triggered automatically, it only uses the default parameter values, so we can achieve this requirement by changing the parameter's default value.
Based on my tests, I have found a convenient method: you can use an if expression to check the trigger method (manual or CI) and set the parameter's value accordingly.
Note: we cannot use if expressions when defining parameters, so we need a variable to pass the value.
You could refer to this ticket.
Here is my example:
trigger:
- master

variables:
  ${{ if eq( variables['Build.Reason'], 'IndividualCI' ) }}:
    buildVersion: $(BUILD.SOURCEVERSIONMESSAGE)
  ${{ if ne( variables['Build.Reason'], 'IndividualCI' ) }}:
    buildVersion: cypress/integration/first.spec.js

parameters:
- name: testFiles
  type: string
  default: $(buildVersion)

pool:
  vmImage: ubuntu-latest

steps:
- script: echo ${{ parameters.testFiles }}
  displayName: 'Run a one-line script'
The variable $(Build.Reason) is used to determine the trigger method.
The variable $(BUILD.SOURCEVERSIONMESSAGE) contains the message of the triggering commit.
Here are the steps:
When you push changes to the repo, you add a commit message; the message is the test file path.
The CI-triggered pipeline picks up this message and sets it as the parameter's default value.
In this case, it is similar to manually running the pipeline and setting the parameter's value.
When you manually run the pipeline, it uses the defined default value. You can also modify the value when you manually run the pipeline.
Is it possible to create a wrapper pipeline in Azure DevOps that simply runs two or more independent pipelines (in parallel) and does nothing else?
I have a problem to solve, and the scenario looks like this:
In my project, I have, say, 9 teams, each designing a separate sanity test script. All of them have their own existing sanity pipeline, i.e. 9 sanity pipelines.
The plan is to have only one master/wrapper pipeline which in turn calls the 9 child sanity pipelines.
When the master pipeline is run by a release engineer or IT area lead to get a report, the child pipelines should run in parallel.
Also, I do not want the master pipeline to be too lengthy. I simply want to mention the name of the child pipeline in an individual job (with params, maybe) and have it run; easily configurable. So I was thinking of using the following in my master pipeline:

resources:
  pipelines:
  - pipeline: Sanity1
    source: P00xxx-Sanity1-Pipeline
  - pipeline: Sanity2
    source: P00xxx-Sanity2-Pipeline

This list should be easily expandable.
Then how, in jobs -> job -> steps, can I run a pipeline using its alias, e.g. Sanity1? Any example code snippet?
Another approach would be to leverage templates. The wrapper pipeline can call a template which contains all the desired tasks, and the template jobs can be set up to run in parallel as part of the pipeline.
Here's a blog post about this
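A minimal sketch of that idea, assuming each team extracts its sanity jobs into a job template in the wrapper's repository (the file names are hypothetical); jobs with no dependsOn between them run in parallel:

# wrapper-pipeline.yaml (sketch)
trigger: none

jobs:
# Each template contributes one or more jobs; no dependsOn, so they run in parallel
- template: templates/sanity-team1.jobs.yaml
- template: templates/sanity-team2.jobs.yaml
- template: templates/sanity-team3.jobs.yaml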
You can use PowerShell and the REST API (Builds - Queue). Add a PowerShell step and compose any run sequence. Here you can find different examples of queueing builds:
Build Pipeline using powershell
Trigger another build exist in project in Azure Devops
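A hedged sketch of that approach as an inline PowerShell step, queueing a child pipeline by its build definition id via the Builds - Queue REST API (the definition id 42 is a placeholder, and the build service account needs permission to queue builds):

steps:
- pwsh: |
    # Queue a build of definition id 42 (placeholder) in the current project
    $body = '{"definition": {"id": 42}}'
    $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/build/builds?api-version=6.0"
    Invoke-RestMethod -Uri $url -Method Post `
      -Headers @{ Authorization = "Bearer $(System.AccessToken)" } `
      -ContentType 'application/json' -Body $body
  displayName: Queue child pipeline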
According to your description, you can try using parameters and conditions to set up the pipeline.
You can try the following YAML sample:
trigger:
- none

parameters:
- name: pipeline1
  displayName: Gradle sample #PipelineName
  type: boolean
  default: false
- name: pipeline2
  displayName: groovy-spring-boot-restdocs-example.git #PipelineName
  type: boolean
  default: false
- name: pipeline3
  displayName: Gradle sample-CI #PipelineName
  type: boolean
  default: false

pool:
  vmImage: 'windows-latest'

steps:
- ${{ if eq(parameters.pipeline1, true) }}:
  - task: TriggerPipeline@1
    inputs:
      serviceConnection: 'TestBuild'
      project: '966ef694-1a7d-4c35-91f3-41b8c5363c48'
      Pipeline: 'Build'
      buildDefinition: 'Gradle sample' #PipelineName
      Branch: 'master'
- ${{ if eq(parameters.pipeline2, true) }}:
  - task: TriggerPipeline@1
    inputs:
      serviceConnection: 'TestBuild'
      project: '966ef694-1a7d-4c35-91f3-41b8c5363c48'
      Pipeline: 'Build'
      buildDefinition: 'groovy-spring-boot-restdocs-example.git' #PipelineName
      Branch: 'master'
...
Explanation:
I use the Trigger Azure DevOps Pipeline task from the Trigger Azure DevOps Pipeline extension to trigger the child pipelines.
The parameters list the pipeline names, and the if conditions determine whether each pipeline has been selected.
Result:
When you run the pipeline, you can select a checkbox for each child pipeline.
If a pipeline name has been selected, the corresponding task will run and trigger the corresponding pipeline.
This should make the selection interface clearer.
We are looking to create a pipeline to update our multi-tenant Azure environment, and we need to perform some actions during the update per tenant. To accomplish this, we would like to create a job per tenant so we can process tenants in parallel, and I want to use a runtime parameter to pass the tenants to update to my pipeline, as follows:
parameters:
- name: tenants
  type: object
The value of the tenants parameter might look something like this:
- Name: "customer1"
Someotherproperty: "some value"
- Name: "customer2"
Someotherproperty: "some other value"
To generate the jobs, we do something like this:
stages:
- stage:
  jobs:
  - job: Update_Tenant
    strategy:
      matrix:
        ${{ each tenant in parameters.tenants }}:
          ${{ tenant.Name }}:
            name: ${{ tenant.Name }}
            someproperty: ${{ tenant.Someotherproperty }}
      maxParallel: 2
    steps:
    - checkout: none
    - script: echo $(name).$(someproperty)
Now what we need is some way to fill this tenants parameter. I tried a few solutions:
Ideally, I would like to put a build stage before the Update_Tenant stage that calls a REST API to get the tenants, and expand the tenants parameter when the Update_Tenant stage starts, but this is not supported AFAIK, since parameter expansion is done when the pipeline starts.
A less ideal but still workable option would have been to create a variables yaml template file containing the tenants, include it in my pipeline, and use the ${{ variables.Tenants }} syntax to reference them. However, for some reason, variables can only be strings.
The only solution I can currently think of is to create a pipeline that calls a REST API to get the tenants to update, and then uses the Azure DevOps API to queue the actual update process with the correct parameter value. But this feels like a bit of a clunky workaround.
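For reference, a sketch of what that workaround might look like: the Runs REST API accepts a templateParameters property, so a fetched tenant list could be passed straight into the runtime parameter. The pipeline id and the hardcoded tenant JSON below are placeholders; in reality you'd fetch them from your own API:

steps:
- pwsh: |
    # Placeholder: in reality, fetch this list from your tenants API
    $tenants = '[{"Name": "customer1", "Someotherproperty": "some value"}]'
    $body = @{ templateParameters = @{ tenants = $tenants } } | ConvertTo-Json -Depth 5
    $url = "$(System.CollectionUri)$(System.TeamProject)/_apis/pipelines/42/runs?api-version=6.0-preview.1"
    Invoke-RestMethod -Uri $url -Method Post `
      -Headers @{ Authorization = "Bearer $(System.AccessToken)" } `
      -ContentType 'application/json' -Body $body
  displayName: Queue tenant update run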
Now my question is, are there any (better?) alternatives to accomplish what I want to do?
Maybe this can help. I was able to use an external source (a .txt file) to fill an array variable in Azure Pipelines.
Working example
# Create a variable
- bash: |
    arrVar=()
    for image in `cat my_images.txt`; do
      arrVar+=$image
      arrVar+=","
    done
    echo "##vso[task.setvariable variable=list_images]$arrVar"

# Use the variable
# "$(list_images)" is replaced by the contents of the `list_images` variable by Azure Pipelines
# before handing the body of the script to the shell.
- bash: |
    echo my pipeline variable is $(list_images)
Sources (there is also an example for a matrix there):
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/variables?view=azure-devops&tabs=yaml%2Cbatch#set-a-job-scoped-variable-from-a-script
Other sources
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/runtime-parameters?view=azure-devops&tabs=script
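For reference, the matrix example mentioned in the sources can be driven at runtime from a script: one job emits the matrix as JSON in an output variable, and a second job consumes it. A minimal sketch (the tenant names are illustrative):

jobs:
- job: generator
  steps:
  - bash: |
      # Build the matrix JSON here, e.g. from a file or a REST call
      echo "##vso[task.setvariable variable=legs;isOutput=true]{'tenant1':{'name':'customer1'},'tenant2':{'name':'customer2'}}"
    name: mtrx
- job: update
  dependsOn: generator
  strategy:
    # Expand the matrix from the generator job's output variable at runtime
    matrix: $[ dependencies.generator.outputs['mtrx.legs'] ]
  steps:
  - script: echo $(name)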
To accomplish this, we would like to create a job per tenant, so we can process tenants in parallel.
Apart from a rolling deployment strategy, you can also check out Strategies and Matrix.
You can try something like this, unless you have to use runtime parameters:
jobs:
- job: Update
  strategy:
    matrix:
      tenant1:
        Someotherproperty1: '1.1'
        Someotherproperty2: '1.2'
      tenant2:
        Someotherproperty1: '2.1'
        Someotherproperty2: '2.2'
      tenant3:
        Someotherproperty1: '3.1'
        Someotherproperty2: '3.2'
    maxParallel: 3
  steps:
  - checkout: none
  - script: echo $(Someotherproperty1).$(Someotherproperty2)
    displayName: 'Echo something'