Question
Is there some way to set per-label permissions in GitHub?
Background / Context
I'm working on CI/CD pipelines, built with GitHub Actions, for a project whose code is hosted under a GitHub Enterprise account.
I'm hoping to control the process through labels on a PR. The key manual points of this process are outlined below:
When someone creates a PR to master, the CI pipeline kicks off, builds an image, and pushes it to AWS ECR. NB: a lot of other stuff also happens here (e.g. linting, vulnerability scanning, automated testing), but none of it needs manual involvement. If anything fails, the PR is rejected and no image is pushed to ECR.
The QA team apply the ApprovedForUAT label, which triggers a check that there are no competing labels (e.g. other PRs already labelled ApprovedForUAT or DeployedToUAT; if there are, it fails until this is corrected), then kicks off the CD pipeline to deploy that image to our UAT environment (a minimal sketch of this check is included after these steps).
Once the image is successfully deployed to UAT, the CD pipeline removes the ApprovedForUAT label and applies the DeployedToUAT label.
Once manual testing is completed, the QA team apply the PassedUAT or FailedUAT label as appropriate; a pipeline checks that the DeployedToUAT label is present to ensure this status update is valid. If things fail, the image is removed from ECR.
The release manager then applies the ApprovedForProd label (which can only be applied if the PassedUAT label is already present). This causes the CD pipeline to run at a predetermined time to update production with the new image, remove the ApprovedForProd label, and close the pull request as complete.
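For illustration, the competing-label check could be a small GitHub Actions workflow along these lines. This is only a minimal sketch: it assumes the gh CLI is available on the runner and that the default GITHUB_TOKEN is allowed to list PRs; the job name is made up.

on:
  pull_request:
    types: [labeled]

jobs:
  check-competing-labels:
    if: github.event.label.name == 'ApprovedForUAT'
    runs-on: ubuntu-latest
    steps:
      - name: Fail if another open PR is already approved or deployed
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }}
          THIS_PR: ${{ github.event.pull_request.number }}
        run: |
          # Count open PRs other than this one that carry a competing label.
          competing=$(gh pr list --repo "$GITHUB_REPOSITORY" --state open \
            --json number,labels \
            --jq "[.[] | select(.number != $THIS_PR)
                       | select([.labels[].name] | any(. == \"ApprovedForUAT\" or . == \"DeployedToUAT\"))] | length")
          if [ "$competing" -gt 0 ]; then
            echo "Another PR is already ApprovedForUAT or DeployedToUAT; correct that first."
            exit 1
          fi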
Desired Permissions
Only the following roles should be able to apply the corresponding workflow labels:
QA Team
ApprovedForUAT
PassedUAT
FailedUAT
Release Manager
ApprovedForProd
GitHub Actions
DeployedToUAT
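As far as I can tell, labels themselves don't carry permissions, so one possible workaround is a workflow on the labeled event that checks the team membership of whoever applied the label and strips it if they're not allowed. This is only a rough sketch: the org name, team slugs, secret name, and label-to-team mapping below are all assumptions, and the token needs read:org (plus repo access to edit labels).

on:
  pull_request:
    types: [labeled]

jobs:
  enforce-label-permissions:
    runs-on: ubuntu-latest
    steps:
      - name: Check that the actor may apply this label
        env:
          GH_TOKEN: ${{ secrets.ORG_READ_TOKEN }}   # assumed PAT with read:org and repo access
          LABEL: ${{ github.event.label.name }}
          ACTOR: ${{ github.actor }}
          PR: ${{ github.event.pull_request.number }}
        run: |
          # Hypothetical label -> team mapping (DeployedToUAT is only applied by the pipeline itself).
          case "$LABEL" in
            ApprovedForUAT|PassedUAT|FailedUAT) TEAM="qa-team" ;;
            ApprovedForProd)                    TEAM="release-managers" ;;
            *) exit 0 ;;   # not a workflow label, nothing to enforce
          esac
          # The membership endpoint returns 404 (non-zero exit) if the actor is not an active member.
          if ! gh api "orgs/my-org/teams/$TEAM/memberships/$ACTOR" > /dev/null; then
            gh pr edit "$PR" --repo "$GITHUB_REPOSITORY" --remove-label "$LABEL"
            echo "$ACTOR is not allowed to apply $LABEL"
            exit 1
          fi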
Related
I am looking for a way to add a simple custom task with a button to the release pipelines of Azure DevOps (not the new YAML-based pipelines).
What I basically need to do is pause the deployment and provide a button for a user to click that will bring them to a separate HTML page I am hosting in an S3 bucket. The button link would be dynamic (set at deployment time).
I create a small JSON file with CloudFormation stack details and save it during the first agent job. From there I set a pipeline release variable:
resultsUrl=$(jq -r .resultsUrl deploy-$(Release.ReleaseId)-$(Release.EnvironmentName)-stackInfo.json)
echo "##vso[task.setvariable variable=deploymentResultsUrl;]$resultsUrl"
Right now I have the Manual Intervention task in place, but since that is an agentless job, it does not pick up the environment variables I set in the previous agent job. The variable would need to be set at the time the release is triggered, but I won't know its value then.
I know we can extend the UI for the new YAML-based pipelines (I have already authored a few in-house extensions), but I need this to be in the "classic" release pipeline. That is where all (hundreds) of our deployment definitions live.
I'll do my best to explain my problem.
In standard practice, I have an Azure DevOps pipeline that creates a Terraform payload, invokes the Terraform API, and lets Terraform do its deployment based off the payload. I do this via "Build Validation": whenever something is PR'd into my branch, the pipeline runs to make sure I'm deploying proper Terraform infrastructure and, in the process, deploys said resources if the pipeline run succeeds.
Meaning, the current workflow is:
Incoming PR -> Build Validation starts -> Pipeline runs -> Pipeline run succeeds -> Accept the PR and do a merge
However, the team I'm working with now wants the following:
Incoming PR -> Accept the PR and do a merge -> Build Validation starts -> Pipeline runs -> Pipeline run succeeds
Basically, they want to actually review the incoming PR, accept and merge it, and ONLY THEN have the actual pipeline/deployment process start. And I'm not sure how to perform this step. Looking into CI triggers, I couldn't find what I need. Any help appreciated.
As you said, you need to use the CI trigger.
Assuming the merge is to the master branch and you want to run the pipeline after the merge, add the trigger to the YAML:
trigger:
- master
I was looking for the same thing earlier. Unfortunately, Azure does not offer anything like this.
I think the easiest solution is to set up a protected intermediate branch, e.g. pre-master, then make PRs towards this one and disallow manually issued merges to master. Then, as proposed by others, set a trigger on pre-master, after which the pipeline commits to master.
You can then complete the ping-pong by defining a trigger on master that aligns pre-master afterwards.
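A minimal sketch of the pre-master leg under those assumptions (branch names pre-master/master, and a build service account allowed to push to master; the git identity below is a placeholder):

trigger:
- pre-master

steps:
- checkout: self
  persistCredentials: true   # keep the repo credentials so the job can push
- script: |
    # Promote the validated pre-master commit to master.
    git config user.email "build@example.com"   # placeholder identity
    git config user.name "Build Service"
    git fetch origin master pre-master
    git checkout -B master origin/master
    git merge --ff-only origin/pre-master
    git push origin master
  displayName: Promote pre-master to master

The master-triggered pipeline that aligns pre-master again would just do the mirror image (merge master into pre-master and push).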
We have an Azure DevOps YAML multi-stage pipeline where code is built and then deployed to a sequence of environments. Deployment is achieved using Terraform.
i.e.
Build -> Test -> Deploy to DEV -> Deploy to TEST -> ...
This pipeline is used for both CI/CD builds and also PR builds. For the PR build, the only part of the deployment stages that runs is terraform plan on the TF scripts.
For PRs, the pipeline is configured to cancel the ongoing build if changes are pushed to that PR.
The problem we're seeing is that when changes are pushed to the PR and the ongoing build is cancelled, sometimes that cancellation happens during the terraform plan step. This occasionally means that the blob lease taken by terraform plan is not released. From that point onwards, manual intervention is required (break the blob lease) in order for the deployment stages to run successfully.
I believe we can switch off the setting which causes the ongoing PR build to be cancelled if changes are pushed.
But I wondered if there was a way of marking a pipeline step as critical, i.e. it should run to completion even if the build is cancelled?
There are other ways of cancelling a pipeline build and there must be other tasks/steps which should not be cancelled part-way through. Such a critical-task setting would cover these situations too.
Not sure if you solved this, but I had the exact same issue. Adding condition: always() to my task forced it to complete when DevOps cancelled the pipeline after additional changes were pushed.
See https://learn.microsoft.com/en-us/azure/devops/pipelines/process/conditions?view=azure-devops&tabs=yaml:
jobs:
- job: Foo
  steps:
  - script: echo Hello!
    condition: always() # this step will always run, even if the pipeline is canceled
I'm afraid you won't be able to achieve this using just YAML. What you can do will require some additional effort (and in some cases quite a lot):
replace the Terraform scripts with Bicep, for instance - to me the syntax is quite similar, and what you gain here is that there is no Terraform state
add, as your very first step, a check whether your state is locked and break the lease if needed (see the sketch after this answer)
I understand that you would like to hear something better than this, but at the moment there is no way to mark a task as non-cancellable. However, this sounds like a cool feature and a candidate for a feature request.
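For the second option, a rough sketch of what such a first step could look like with the Azure CLI; the service connection, storage account, container, and blob names are assumptions and would need to match wherever your Terraform state lives:

steps:
- task: AzureCLI@2
  displayName: Break a stale lease on the Terraform state blob, if any
  inputs:
    azureSubscription: my-service-connection   # assumed service connection name
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      # If a previously cancelled run left the state blob leased, break the lease
      # so that terraform init/plan can acquire it again.
      status=$(az storage blob show --auth-mode login \
        --account-name mytfstate --container-name tfstate --name terraform.tfstate \
        --query properties.lease.status -o tsv)
      if [ "$status" = "locked" ]; then
        az storage blob lease break --auth-mode login \
          --account-name mytfstate --container-name tfstate --blob-name terraform.tfstate
      fi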
I'm having a hard time getting a grasp on this to be honest.
Right now my lab project is as follows:
PR to master -> Triggers Pre-Build Pipeline as condition to merge the code ->
On merge Infrastructure pipe runs only if any changes happen in my Infrastructure folder ->
On merge I want to run my deploy pipeline to deploy my web app to Azure.
The pipes in question do the things they ought to, i.e.
The pre-build pipe builds, publishes the artifact, runs unit tests, and validates ARM templates.
The infra pipe deploys the necessary infrastructure for my web app, such as the resource group, App Service plan, App Service, and key vault.
The deploy pipe downloads the artifact produced by the pre-build pipe, deploys it to a staging slot, and swaps it into the production slot.
What I can't seem to get to work is the pipeline chaining through dependencies: if changes happen to both infra and web app code in master, I want the infra pipe to run first and the deploy pipe only if it succeeds.
If I merge only app code I want only the deploy pipe to run regardless if the infra pipe ran or not.
If I merge only infra code I want only the infra pipe to run.
If I merge both app and infra code I want both infra and deploy pipe to run in specific order.
I feel this shouldn't be all that hard to accomplish, but I've spent way too much time trying to solve this to no avail. Anyone able to help? :)
Edit:
Hey, sorry #HughLin-MSFT, I've been trying to work around this a bit since we're trying to avoid running scripts left and right. :)
I saw you have Build Queuing planned in an upcoming release so for now I think we might have to wait for that.
If I were to merge my deploy and infra pipe, can I use:
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*
At stage level and somehow skip a stage instead?
I've seen multiple articles mention "Continue if skipped", but I can't find any information on how to actually skip a stage.
For the first and second cases, you just need to set Path filters in Triggers; the pipeline then only triggers when a file at the specified path is changed. Please refer to this.
For the third case, you can try to add two agent jobs in the infra pipe, add the Trigger Azure DevOps Pipeline task to the second agent job to trigger the deploy pipe, and then set "Only when all previous jobs have succeeded" in the "Run this job" drop-down box for job 2. In addition, you need to add a PowerShell task before the Trigger Azure DevOps Pipeline task and use a script to detect whether there is app code: run job 2 if there is, and cancel job 2 if not.
Update:
First, you can create a new pipeline and create a variable: changedcode.
Use the Builds - Get REST API to get the commit, then get the changed code folder with the Commits - Get Changes REST API.
Assign the changed code folder name as the value of the changedcode variable.
Set custom conditions for the agent jobs: run the Infra job only if the changedcode variable value contains Infra, and in the Infra job use the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task to trigger the Infra pipeline. The same is true for the Deploy job; the only difference is the custom condition expression.
Here is a sample structure in YAML (the first job sets changedcode as an output variable, and the later jobs read it in their conditions):

jobs:
- job: GetChanges
  steps:
  - powershell: |
      # Get the build's commit with the Builds - Get rest api, read the changed
      # folders with the Commits - Get Changes rest api, then set the variable, e.g.:
      # Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]Infra"
    name: setvar
- job: Infra
  dependsOn: GetChanges
  condition: contains(dependencies.GetChanges.outputs['setvar.changedcode'], 'Infra')
  steps:
  - powershell: |
      # Queue the Infra pipeline with the Builds - Queue rest api or the Trigger Azure DevOps Pipeline task
- job: Deploy
  dependsOn: GetChanges
  # same pattern; only the condition expression differs
  condition: contains(dependencies.GetChanges.outputs['setvar.changedcode'], 'deploy')
  steps:
  - powershell: |
      # Queue the Deploy pipeline with the Builds - Queue rest api or the Trigger Azure DevOps Pipeline task
Our current pipeline deploys a new instance of our app for every new branch created on azure repos, just like review apps on Heroku or Gitlab.
The creation part went smoothly, but I'm not sure what to do with the orphaned resources and deployment once the branch is deleted (hopefully by an accepted PR).
Deleting them manually is not an option, and I can't find a trigger for branch deletion in the documentation.
The only option I can see right now is to create a scheduled job for the master branch (since it always exists) with a bash script that compares the list of deployed apps with the existing branches and cleans up the resources (a rough sketch of this follows below).
Is it my only option, or is there another way without a fairly complex, all-access, destroy machine?
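For what it's worth, that scheduled-job fallback doesn't have to be very big. A rough sketch, assuming the review apps live in resource groups named after their branch (the naming convention, schedule, and service connection name are assumptions):

schedules:
- cron: "0 3 * * *"
  displayName: Nightly review-app cleanup
  branches:
    include:
    - master
  always: true

steps:
- checkout: self
  persistCredentials: true   # so the script can list remote branches
- task: AzureCLI@2
  displayName: Delete review apps whose branch is gone
  inputs:
    azureSubscription: my-service-connection   # assumed
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      git fetch --prune origin
      # Every resource group named review-<branch> whose branch no longer exists gets deleted.
      for rg in $(az group list --query "[?starts_with(name, 'review-')].name" -o tsv); do
        branch="${rg#review-}"
        if ! git show-ref --verify --quiet "refs/remotes/origin/$branch"; then
          az group delete --name "$rg" --yes --no-wait
        fi
      done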
So I did a little investigation: dumping all environment variables to Notepad++ and using the compare plugin, I realized that when a PR is accepted, two env variables are different.
First off, during a push the variable "BUILD_REASON" is set to "IndividualCI", with "BUILD_SOURCEBRANCH" set to "refs/heads/feature/******". When a pull request is initiated, "BUILD_REASON" changes to "PullRequest" and "BUILD_SOURCEBRANCH" to "refs/pull/***".
Finally, when a PR is accepted, the variables change to "BUILD_REASON" = "IndividualCI" and "BUILD_SOURCEBRANCH" = "refs/heads/master".
Once I figured this out, I could create a stage with the following condition:
- stage: CleanUp
  displayName: 'CleanUp'
  dependsOn: Test
  condition: and(succeeded(), in(variables['Build.Reason'], 'IndividualCI'), in(variables['Build.SourceBranchName'], 'master'))
The above stage will be triggered when the PR is accepted, so it can clean up resources created during the PR :-) I haven't tested it all the way, but it seems to do the job.
You can use a webhook in Azure DevOps to watch the pull request for updates. When the pull request status changes to completed, fire a script that deletes the resources used for the PR.