I'm a bit confused with how to set this workflow up using pull requests.
I have in place an existing multi-stage YAML build pipeline that in summary does the following:
Runs a build from any branch
Runs a deployment job to a Development Environment/Resource, if the source branch is feature/* or release/* and the build succeeds
Runs a deployment job to a UAT Environment/Resource, if the source branch is release/*
Runs a deployment job to Live/Production Environment/Resource, if the source branch is master
So off the back of CI this workflow seems to work fine; the correct stages run depending on the branch, etc.
I then decided that branch policies and pull requests might be a better option for code quality, rather than letting any of the main branches be committed to directly, so I started to rework the YAML to account for this - mainly by changing the trigger to
trigger: none
This now works correctly in line with a branch policy; the build only gets kicked off when a pull request onto develop or master is opened.
However, this is where I'm a bit confused about how this is supposed to work versus how I think it works...
Firstly - is it not possible to trigger the multi-stage YAML off the back of pull requests (using Azure Repos) ? In my head all I want to do is introduce the pull request and branch policies but keep the multi-stage deployments to environments as is.
However, the deployment job stages all get skipped now - but this might be to do with my conditions within the YAML, which are as follows:
- stage: 'Dev'
  displayName: 'Development Deployment'
  dependsOn: 'Build'
  condition: |
    and
    (
      succeeded(),
      eq(variables['Build.Reason'], 'PullRequest'),
      ne(variables['System.PullRequest.PullRequestId'], 'Null'),
      or
      (
        startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/'),
        startsWith(variables['Build.SourceBranch'], 'refs/heads/release/')
      )
    )
  jobs:
  - deployment: Deploy
    pool:
      name: 'Development Server Agent Pool'
    variables:
      Parameters.WebsitePhysicalPath: '%SystemDrive%\inetpub\wwwroot\App'
      Parameters.VirtualPathForApplication: ''
      Parameters.VirtualApplication: ''
    environment: 'Development.Resource-Name'
.....
Is there something I am missing?
Or do I have to remove the multi-stage deployments from YAML and revert back to using Release Pipelines for pull requests (maybe with approval gates??)
Thanks in advance!
It looks like the issue is with the conditions within the YAML.
If the pipeline is triggered by a PR, the value of variables['Build.SourceBranch'] will be refs/pull/<PR id>/merge. The expression startsWith(variables['Build.SourceBranch'], 'refs/heads/feature/') in the condition above will therefore be false, which causes the stage to be skipped. See build variables for more information.
You can try using variables['System.PullRequest.SourceBranch'] instead, which evaluates to the source branch of the PR. Check system variables for more information. See below:
condition: |
  and
  (
    succeeded(),
    eq(variables['Build.Reason'], 'PullRequest'),
    ne(variables['System.PullRequest.PullRequestId'], ''),
    or
    (
      startsWith(variables['System.PullRequest.SourceBranch'], 'refs/heads/feature/'),
      startsWith(variables['System.PullRequest.SourceBranch'], 'refs/heads/release/')
    )
  )
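A side note on the trigger question above: for Azure Repos, pull request validation builds are configured through the build-validation branch policy rather than in the YAML (as far as I know, the pr: keyword only applies to GitHub and Bitbucket Cloud repositories). So, assuming the branch policy is in place on develop and master, the multi-stage deployments can stay in this one pipeline as-is; the YAML only needs:

trigger: none  # CI disabled; the build-validation branch policy starts the pipeline for PRs

together with the PR-aware stage conditions shown above.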
I've two Azure DevOps CI pipelines
DataPipeline\Windows - Build
DataPipeline\TestPipeline
The second pipeline is triggered via a pipeline resource. I've used this doc as a reference.
When a pull request is created (e.g. with dev as the source branch and main as the target branch, not merged yet), both of these pipelines are triggered (by default configuration). The second pipeline may pick up old artefacts (that is a separate issue). A second run of DataPipeline\TestPipeline is then triggered automatically as soon as DataPipeline\Windows - Build completes for a commit on the dev branch. How is this possible? I've specified the branch name in the resources trigger. Is something wrong with the YAML configuration?
My expected behaviour is that when a pull request is created for branch xyz (not main), DataPipeline\TestPipeline shouldn't be triggered again after DataPipeline\Windows - Build completes.
DataPipeline\TestPipeline.yml
trigger: none

resources:
  pipelines:
  - pipeline: DataPipeline
    source: 'DataPipeline\Windows - Build'
    branch: main
    trigger:
      branches:
        include:
        - main

name: TestPipeline_$(variable)

jobs:
# template-based job
I am trying to build a YAML release pipeline in Azure DevOps.
For that I have created multiple branches to keep environment-specific files.
I created 4 pipelines for release as well.
Problem: Whenever I am making any changes in any branch, all the branch pipeline starts running. If I run an individual pipeline it works fine.
# Starter pipeline
# Start with a minimal pipeline that you can customize to build and deploy your code.
# Add steps that build, run tests, deploy, and more:
# https://aka.ms/yaml

trigger:
- acc

pool:
  name: 'Agent'

steps:
- task: Kubernetes@1
  displayName: 'Deploy on K8s Cluster'
  inputs:
    connectionType: 'Azure Resource Manager'
    azureSubscriptionEndpoint: 'vs-aks-sc'
    azureResourceGroup: 'azwe-rg-01'
    kubernetesCluster: 'azwe-aks-01'
    command: 'apply'
    arguments: '-f $(System.DefaultWorkingDirectory)/kubernetes/acc.yaml'
Problem: Whenever I am making any changes in any branch, all the branch pipeline starts running.
If you just want to run the corresponding pipeline for the branch you modified, you need to make sure each pipeline is set up with the YAML file in its corresponding branch, and set the correct branch trigger.
For example, for the acc branch:
We need to create a YAML file on the branch Feature/TestSample-acc with the branch trigger in the YAML file:
trigger:
  branches:
    include:
    - Feature/TestSample-acc
Then create a pipeline from an existing Azure Pipelines YAML file:
New pipeline -> Azure Repos Git (YAML) -> Select your repository -> Existing Azure Pipelines YAML file.
Now this pipeline is only triggered by modifications on the branch Feature/TestSample-acc.
You can set up the same thing for the other branches, for example:
trigger:
  branches:
    include:
    - Feature/TestSample-dev
Besides, if you do not want to control the trigger via the YAML file, you can override it with the UI trigger settings.
This option is disabled by default, so that the trigger is controlled directly in the YAML file.
If you enable it, you just need to add a branch filter for the one branch you want.
If I have not understood your question correctly, please feel free to let me know what you want to achieve.
You should check the CI trigger settings of the pipeline to only allow it to trigger on the branches you want.
Change CI Trigger
The answer by "Leo Liu-MSFT" is correct.
BUT you also need to make sure that on each branch the YAML file has a different name, otherwise a commit to a single branch triggers more than one pipeline build.
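To illustrate (these file names are hypothetical): give each branch its own distinctly named pipeline file and bind one pipeline to each, e.g. azure-pipelines-acc.yml committed on the acc branch:

# azure-pipelines-acc.yml, on the Feature/TestSample-acc branch only
trigger:
  branches:
    include:
    - Feature/TestSample-acc

with a separate azure-pipelines-dev.yml on the dev branch whose trigger includes only Feature/TestSample-dev. If every branch carried an identically named file, each pipeline bound to that path would read the pushed branch's copy of the file when evaluating triggers, so one commit could start several pipelines.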
With the classic Azure DevOps release pipeline our release flow was very easy to setup.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours), before it deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy if the previous was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines the same flow seems to be very difficult to do, at least when it comes to making it a single deployment pipeline. The first part is fine: we can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30am, while still retaining the ability to also release it manually? If I created a separate test environment pipeline, it could work if it had no triggers, but a schedule.
Same with UAT: as we also need the flexibility to manually run UAT deployments, it would also need to go into its own pipeline. Releases to our staging and production environments we "gate" with manual approvals, which is fine.
While this technically could work, splitting the deployment pipeline into multiple pipelines makes it really difficult to manage "a release". Not to mention that it kind of goes against the whole multi-stage pipeline principle if we create a separate pipeline per stage.
But with this being so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitations. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We could add Azure DevOps Multi-Stage Pipelines Approval Strategies in the YAML build.
Steps:
Open the Environments tab and click New environment, then click Approvals and checks to add an approval. My environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploying to the TEST environment
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar

- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We could also configure a scheduled trigger and use it in the multi-stage pipeline.
Note: the schedule trigger and the approval checks both take effect at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
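As a concrete sketch of the nightly 0:30 release described in the question (cron schedules run in UTC; the branch name is an assumption):

schedules:
- cron: '30 0 * * *'    # every night at 00:30 UTC
  displayName: Nightly test/UAT release
  branches:
    include:
    - master            # assumed release branch
  always: false         # only run if there are new changes since the last scheduled run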
For manual jobs, you can use the Create release button to create and deploy a release manually. Do note that sometimes this can create a conflict with the schedule. Also, to "hold back a release", put an approver on the release, and then, when approving, defer the release,
noting that the deferral time is in UTC and defaults to tomorrow; you can change it to any time after now.
I have the following YML file for my pipeline:
trigger: none
stages:
# Other stages here...
- stage: Release
jobs:
- deployment: Staging
environment: staging
strategy:
runOnce:
deploy:
steps:
- download: none
- task: DownloadBuildArtifacts#0
# ...
- task: AzureRmWebAppDeployment#4
displayName: Deploy in staging
# ...
- deployment: Production
environment: prod
dependsOn: Staging
strategy:
runOnce:
deploy:
steps:
- download: none
- task: AzureAppServiceManage#0
displayName: Swap stg-prod slots
# ...
Based on this, to give more context, my thinking is to have 2 stages: the first one is to build my application, the second one is to release in staging (QA) and to production next.
The environment "prod" though, has a check (or approval, whatever you want to call it).
I'm not sure if I'm encountering a bug or not, but what is happening is that when stage 1 completes (the build phase), the release phase of stage 2 is blocked and waiting for approval, even though "staging" does not have any check enabled (only prod does).
The easiest workaround is to create different stages, one for staging and one for production, but that does not match my expected behaviour. I'm expecting the deployment for the "staging" job to complete successfully, and then the "production" job to wait for approval.
Do you have any suggestion regarding this? Is this a bug?
Checks (approvals) for a deployment job are blocking the entire stage
Sorry for any inconvenience.
This behavior is by design at the moment.
As the documentation states:
Approvals in multi-stage YAML pipelines
We continue to improve multi-stage YAML pipelines, and we now let you add manual approvals to these pipelines. Infrastructure owners can protect their environments and seek manual approvals before a stage in any pipeline deploys to them.
This feature is designed around stages, not environments, so it blocks the whole stage.
In my test I could reproduce the issue just as you describe. Your request is reasonable though; personally, I agree this feature should be designed around environments.
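In the meantime, a minimal sketch of the split-stage workaround mentioned in the question (environment names taken from the question, deployment steps elided) looks like this:

stages:
- stage: DeployStaging
  jobs:
  - deployment: Staging
    environment: staging      # no checks, so this stage runs automatically
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploy to staging

- stage: DeployProduction
  dependsOn: DeployStaging
  jobs:
  - deployment: Production
    environment: prod         # the approval check now pauses only this stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo swap stg-prod slots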
You could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21 ), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
Hope this helps.
I have a travis script deploying to different S3 buckets based on 2 conditions:
1. the branch name
2. the $TRAVIS_BRANCH env variable
# ... travis stuff
deploy:
  - provider: s3
    # ... other config
    bucket: my-staging-bucket
    on:
      repo: MyOrg/my-repo
      branch: staging
      condition: $TRAVIS_BRANCH = staging
  - provider: s3
    # ... other config
    bucket: my-prod-bucket
    on:
      repo: MyOrg/my-repo
      branch: production
      condition: $TRAVIS_BRANCH = production
It's working as expected:
When I deploy to staging, the first config successfully builds and deploys and I'm given appropriate messaging in Travis' job log.
It also tries to deploy to production and is stopped by the on: conditions, again providing messaging that indicates as much. The resulting log messages look like so, the first two lines indicating a successful deployment to staging and the last indicating that no deployment to production happened.
-Preparing deploy
-Deploying application
-Skipping a deployment with the s3 provider because a custom condition was not met
This is consistent when the situation is reversed:
-Skipping a deployment with the s3 provider because this branch is not permitted: production
-Skipping a deployment with the s3 provider because a custom condition was not met
...
-Preparing deploy
-Deploying application
This has led to some confusion amongst the team, as the messaging appears to be a false negative, indicating the deployment failed when it's actually functioning as intended. What I would like to do is set up Travis so that it only runs the deployment script appropriate for that branch and env variable combination.
Is there a way to do that? I was under the impression this was the method for conditional deployment.
If there's no way to prevent both deploy jobs from running, is there a way to at least suppress the messaging in the job log?
The best way to do this would be to use Travis' stages and jobs features. Stages are groups of jobs. Jobs inside stages run in parallel. Stages run in sequence, one after the other. Entire stages can be conditional, and stages can also contain conditional jobs. Jobs in a stage can be deploy jobs too (i.e. the entire deploy: in your travis.yml can be nested inside a conditional stage). Most importantly for your goals, conditional stages and their included jobs are silently skipped if the condition is not met.
This is very different to the standard deploy: matrix that you already have, i.e. your current deploy step contains 2 deployments, and so you get the message that it is skipping a deployment.
Instead, you can change that into separate deploy stages with conditional jobs.
The downside to using stages like this is that each stage runs in its own VM and so you can't share data from one stage to the next. (i.e build artifacts from previous stages do not propagate to subsequent stages). You can get around this by sharing the build results of a lengthy compile stage via S3, for example.
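A hypothetical sketch of that S3 hand-off (the bucket name and script paths are assumptions; AWS credentials would live in encrypted environment variables):

jobs:
  include:
    - stage: compile
      script:
        - bash scripts/compile.sh
        # upload the build output so later stages (which run on fresh VMs) can fetch it
        - pip install --user awscli
        - aws s3 cp build/output.tar.gz s3://my-artifact-bucket/$TRAVIS_BUILD_NUMBER/
    - stage: test
      script:
        # download the artifact produced by the compile stage
        - pip install --user awscli
        - aws s3 cp s3://my-artifact-bucket/$TRAVIS_BUILD_NUMBER/output.tar.gz .
        - bash scripts/test.sh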
More information can be found here:
https://docs.travis-ci.com/user/build-stages
I have a working example here in my github: https://github.com/brianonn/travis-test
jobs:
  include:
    - stage: compile
      script: bash scripts/compile.sh
    - stage: test
      script: bash scripts/test.sh
    - stage: deploy-staging
      if: branch = staging
      name: "Deploy to staging S3"
      script: skip
      deploy:
        provider: script
        script: bash scripts/deploy.sh staging
        on:
          branch: staging
          condition: $TRAVIS_BRANCH = staging
    - stage: deploy-prod
      if: branch = production
      name: "Deploy to production S3"
      script: skip
      deploy:
        provider: script
        script: bash scripts/deploy.sh production
        on:
          branch: production
          condition: $TRAVIS_BRANCH = production
This produces a Travis job log that is specific to each of staging and production.