I have a setup where I want the CI to build on each pull request to our Bitbucket Cloud repository. In the same pipeline I also have three different stages that I would like to trigger manually when we want to build the artefact to deploy to our environments.
The problem is that the pull request trigger no longer fires after I added stages to the build. This is what the configuration looks like:
pr:
  branches:
    include:
      - '*'

pool:
  vmImage: 'macos-latest'

stages:
  - stage: CI
    displayName: 'Continuous build'
    jobs:
      - job: C1
        steps:
          - template: azure-pipelines-ios.yml
            parameters:
              environment: 'ci'
          - task: PublishBuildArtifacts@1
  - stage: Test
    displayName: 'Building for Test'
    jobs:
      - job: T1
        steps:
          - template: azure-pipelines-ios.yml
            parameters:
              environment: 'test'
          - task: PublishBuildArtifacts@1
  - stage: Stage
    displayName: 'Building for Stage'
    jobs:
      - job: S1
        steps:
          - template: azure-pipelines-ios.yml
            parameters:
              environment: 'stage'
          - task: PublishBuildArtifacts@1
I would like to trigger the CI stage build on each pull request. How do I do that?
If you want to skip the other stages, you should use conditions:
pr:
  branches:
    include:
      - '*'

pool:
  vmImage: 'macos-latest'

stages:
  - stage: CI
    displayName: 'Continuous build'
    condition: eq(variables['Build.Reason'], 'PullRequest')
    jobs:
      - job: C1
        steps:
          - script: echo "Hello $(System.StageName)"
  - stage: Test
    displayName: 'Building for Test'
    condition: ne(variables['Build.Reason'], 'PullRequest')
    jobs:
      - job: T1
        steps:
          - script: echo "Hello $(System.StageName)"
  - stage: Stage
    displayName: 'Building for Stage'
    condition: ne(variables['Build.Reason'], 'PullRequest')
    jobs:
      - job: S1
        steps:
          - script: echo "Hello $(System.StageName)"
How can I ensure that all stages of my pipeline run in the same working directory?
I have a pipeline that looks like this:
resources:
  repositories:
    - repository: AzureRepoDatagovernance
      type: git
      name: DIF_data_governance
      ref: develop

trigger:
  branches:
    include:
      - main
  paths:
    include:
      - terraform/DIF

variables:
  - group: PRD_new_resources
  - name: initial_deployment
    value: false

pool: $(agent_pool_name)

stages:
  - stage: VariableCheck
    jobs:
      - job: VariableMerge
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - ${{ if eq(variables.initial_deployment, 'false') }}:
            - task: PythonScript@0
              inputs:
                scriptSource: filePath
                scriptPath: DIF-devops/config/dynamic_containers.py
                pythonInterpreter: /usr/bin/python3
                arguments: --automount-path $(System.DefaultWorkingDirectory)/DIF_data_governance/data_ingestion_framework/$(env)/AutoMount_Config.json --variables-path $(System.DefaultWorkingDirectory)/DIF-devops/terraform/DIF/DIF.tfvars.json
              displayName: "Adjust container names in variables.tf.json"
  - stage: Plan
    jobs:
      - job: Plan
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - script: |
              cd $(System.DefaultWorkingDirectory)$(terraform_folder_name) && ls -lah
              terraform init
              terraform plan -out=outfile -var-file=DIF.tfvars.json
            displayName: "Plan infrastructure changes to $(terraform_folder_name) environment"
  - stage: ManualCheck
    jobs:
      - job: ManualCheck
        pool: server
        steps:
          - task: ManualValidation@0
            timeoutInMinutes: 5
            displayName: "Validate the configuration changes"
  - stage: Apply
    jobs:
      - job: Apply
        steps:
          - checkout: self
          - checkout: AzureRepoDatagovernance
          - script: |
              cd $(System.DefaultWorkingDirectory)$(terraform_folder_name) && ls -lah
              terraform apply -auto-approve "outfile"
            displayName: "Apply infrastructure changes to $(terraform_folder_name) environment"
How can I make sure that all 4 stages use the same working directory, so I can check out just once and every stage has access to the work done by previous jobs? I know that my pipeline has some flaws that will need to be polished.
This is not possible. Each Azure DevOps stage has its own working directory, because it runs as a separate agent job; only the steps inside a given job share the same working directory.
If you need to pass code or artifacts between stages, you should use the native Publish Pipeline Artifacts and Download Pipeline Artifacts tasks.
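A minimal sketch of that pattern, assuming a first stage that writes files to $(Build.ArtifactStagingDirectory) and a later stage that needs them (the stage, job, and artifact names are only illustrative):
stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - script: echo "produce output" > $(Build.ArtifactStagingDirectory)/output.txt
          # Publish the files so later stages can retrieve them.
          - task: PublishPipelineArtifact@1
            inputs:
              targetPath: '$(Build.ArtifactStagingDirectory)'
              artifact: 'build-output'
  - stage: Deploy
    dependsOn: Build
    jobs:
      - job: Deploy
        steps:
          # Download the artifact into this job's own workspace.
          - task: DownloadPipelineArtifact@2
            inputs:
              artifact: 'build-output'
              path: '$(Pipeline.Workspace)/build-output'
          - script: cat $(Pipeline.Workspace)/build-output/output.txt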
I've looked at the instructions here https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops and set up an environment called test. However, when I put the environment: test line in the pipeline below I get an "unexpected value" error. Where do I need to put environment: test?
pr:
  branches:
    include:
      - '*'

trigger:
  branches:
    include:
      - master

pool:
  vmImage: ubuntu-latest

stages:
  - stage: Build
    jobs:
      - job: Build
        steps:
          - template: templates/build.yml
  - stage: Release
    condition: and(succeeded('Build'), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - job: DeployDev
        environment: test
        variables:
You need to change your ordinary job into a deployment job:
jobs:
  - deployment: DeployDev
    environment: test
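A fuller sketch of what the Release stage could look like with a deployment job (the runOnce strategy and the echo step are placeholders, not your actual deployment logic):
- stage: Release
  condition: and(succeeded('Build'), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
  jobs:
    - deployment: DeployDev      # deployment jobs accept the environment keyword
      environment: test
      strategy:
        runOnce:
          deploy:
            steps:
              - script: echo "deploying to test"   # placeholder deployment step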
I want to work through the array and run the job as many times as there are items in the array, but I need the next job to depend on the last job.
How do I get the value of the last element in the array into the dependsOn parameter of the next job? Is it even possible? If not, how do I make the next job depend on the success of the last run of the previous job?
parameters:
  - name: solutionName
    type: object
    default: ['entry1', 'entry2']

variables:
  - group: 'Key Vault Test'
  - name: Var1
    value: 'lol'

trigger:
  - main

jobs:
  - ${{each Solution in parameters.SolutionName}}:
    - job: Job_${{Solution}}
      displayName: ${{Solution}}
      pool:
        vmImage: ubuntu-latest
      steps:
        - script: |
            echo $(Var1)
            echo ${{Solution}}
            echo ${{join(' ',parameters.SolutionName)}}
  - job: Job_2
    dependsOn: HOW_DO_I_PUT_JOB_NAME_IF_I_DONT_KNOW_IT
    displayName: Job2
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: |
          echo 'Second Job'
You can use another each loop to define the list of dependencies in the second job:
jobs:
  - ${{each Solution in parameters.SolutionName}}:
    - job: Job_${{Solution}}
      displayName: ${{Solution}}
      pool:
        vmImage: ubuntu-latest
      steps:
        - script: |
            echo $(Var1)
            echo ${{Solution}}
            echo ${{join(' ',parameters.SolutionName)}}
  - job: Job_2
    dependsOn:
      - ${{each Solution in parameters.SolutionName}}:
        - Job_${{Solution}}
    displayName: Job2
    pool:
      vmImage: ubuntu-latest
    steps:
      - script: |
          echo 'Second Job'
Note that the 'last' job from that first each loop is not a definite thing; those jobs all run in parallel, so any of them might finish first or last. To make your second job wait for them all to finish, it needs an explicit dependency on each of them.
${{parameters.SolutionName}} is an object, which is not accepted by the dependsOn keyword (it accepts a string or a list of strings). To specify the job names, you can depend on the previous jobs like this:
- job: Job_2
  dependsOn:
    - Job_${{parameters.SolutionName[0]}} # points to the 1st job
    - Job_${{parameters.SolutionName[1]}} # points to the 2nd job
  displayName: Job2
  pool:
    vmImage: ubuntu-latest
  steps:
    - script: |
        echo 'Second Job'
Or you can put Job_2 into a new stage:
stages:
  - stage: stage1
    jobs:
      - ${{each Solution in parameters.SolutionName}}:
        - job: Job_${{Solution}}
          displayName: ${{Solution}}
          pool:
            vmImage: ubuntu-latest
          steps:
            - script: |
                echo $(Var1)
                echo ${{Solution}}
                echo ${{join(' ',parameters.SolutionName)}}
  - stage: stage2
    dependsOn: stage1 # depends on the stage instead
    jobs:
      - job: Job_2
        displayName: Job2
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: |
              echo 'Second Job'
The deploy stage of the pipeline fails without an error after the build stage completes successfully.
Enabling system diagnostics does not give any additional information (see the screenshot below).
The following pipeline YAML file was used:
trigger:
  - master

resources:
  - repo: self

variables:
  vmImageName: 'ubuntu-latest'

stages:
  - stage: Build
    displayName: Build stage
    jobs:
      - job: Build
        displayName: Build
        pool:
          vmImage: $(vmImageName)
        steps:
          - task: CmdLine@2
            inputs:
              script: |
                ls -la
  - stage: Deploy
    displayName: Deploy Notebook Instance Stage
    dependsOn: Build
    jobs:
      - deployment: Deploy
        displayName: Deploy
        pool:
          vmImage: $(vmImageName)
        environment: 'myenv.default'
        strategy:
          runOnce:
            deploy:
              steps:
                - task: CmdLine@2
                  inputs:
                    script: echo Some debug text!
I used your script, changing only the environment since I don't have myenv.default, and everything runs fine.
Please double-check your environment setting.
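For reference, environment accepts either a plain environment name or an <environment>.<resource> pair; a minimal sketch with placeholder names:
- deployment: Deploy
  pool:
    vmImage: $(vmImageName)
  # Reference the environment by name alone...
  environment: myenv
  # ...or, when targeting a specific resource registered in the environment
  # (for example a VM or Kubernetes resource), use the
  # <environment>.<resource> form, e.g. 'myenv.default'.
  strategy:
    runOnce:
      deploy:
        steps:
          - script: echo Deploying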
I am trying to get the new stage-scoped output variables to work. Here is my stripped-down example:
stages:
  - stage: firstStage
    jobs:
      - job: varSetJob
        pool:
          vmImage: 'windows-latest'
        steps:
          - task: PowerShell@2
            inputs:
              targetType: 'inline'
              script: |
                Write-Output ("##vso[task.setvariable variable=DeployEnvironment;isOutput=true]AnEnvironment")
                Write-Output ("vso[task.setvariable variable=DeployEnvironment;isOutput=true]AnEnvironment")
            name: varStep
          - script: echo $(varStep.deployEnvironment)
            name: show
  - stage: secondStage
    dependsOn: firstStage
    variables:
      - name: DeployEnvironmentstage
        value: $[ stageDependencies.firstStage.varSetJob.outputs['varStep.DeployEnvironment'] ]
    jobs:
      - job: showvar
        pool:
          vmImage: 'windows-latest'
        steps:
          - script: echo $(DeployEnvironmentstage)
            name: show
This pipeline fails to start the second stage, and no logs are produced even when running in diagnostic mode.
I've checked the Azure DevOps version and it is on the latest sprint version.
Has anyone had this working yet?
Try putting the variables under the job instead:
- stage: secondStage
  dependsOn: firstStage
  jobs:
    - job: showvar
      pool:
        vmImage: 'windows-latest'
      variables:
        - name: DeployEnvironmentstage
          value: $[ stageDependencies.firstStage.varSetJob.outputs['varStep.DeployEnvironment'] ]
      steps:
        - script: echo $(DeployEnvironmentstage)
          name: show