Azure DevOps: Can we identify Builds triggered from a changeset number?

I have certain builds set for Continuous Integration, i.e., build for every check-in.
I have an automated method to perform code merges and check-ins; now I want to get the list of builds triggered for a particular changeset. Is there any way to get this information?

I would use the REST API to check for builds that were run:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds
will return all builds, which you can then go through to check for more details. You can also apply more filters to the request (for example, based on the build definition).
The build specifics you could then get via:
GET https://dev.azure.com/{organization}/{project}/_apis/build/builds/<buildid>
This will return information like:
"triggerInfo": {
"ci.sourceBranch": "refs/heads/master",
"ci.sourceSha": "0fcb5a27ca2f73561dde0a066a1ec1781128fa81",
"ci.message": ""
},
...
"sourceBranch": "refs/heads/master",
"sourceVersion": "0fcb5a27ca2f73561dde0a066a1ec1781128fa81",
for builds queued from a git repository or
{
    ...
    "sourceBranch": "$/Build Test",
    "sourceVersion": "93",
    ...
}
for TFVC repositories. It would actually also contain trigger info, but I don't have a build at hand that was triggered automatically from TFVC.
The sourceVersion in Git will be the commit hash, whereas in TFVC it's the changeset number.
More details on the REST API can be found in the Microsoft Docs.
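For illustration, here is a minimal PowerShell sketch of that approach: list recent builds, then filter client-side on sourceVersion (as far as I can tell, the builds list API has no sourceVersion filter). The organization, project, PAT, and changeset values are placeholder assumptions.

$org       = "your-organization"           # placeholder
$project   = "your-project"                # placeholder
$pat       = "your-personal-access-token"  # placeholder PAT
$changeset = "93"                          # the TFVC changeset to look for

# Basic auth header built from the PAT
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

# List recent builds, then keep those whose sourceVersion matches the changeset
$builds = (Invoke-RestMethod -Headers $headers -Uri "https://dev.azure.com/$org/$project/_apis/build/builds?api-version=5.1").value
$builds | Where-Object { $_.sourceVersion -eq $changeset } |
    Select-Object id, buildNumber, sourceBranch, sourceVersion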

Related

Run Jenkins stage after pipeline success independently

So I have a Groovy Jenkinsfile script that runs on my Jenkins server for pull requests. On my repos I have a protected master branch. I want my tests and other stages to run only on certain branches, and when the correct branch is detected I want the pipeline to attempt to merge the pull request automatically. The issue I'm getting is that I need the pipeline job to report status success before this stage works. For example:
pipeline {
    agent none
    options {...}
    environment {...}
    stages {
        stage("1") {...}
        stage("2") {...}
        stage('Merge Pull Request') {
            when {
                allOf {
                    not { branch 'master' }
                    expression { env.CHANGE_TITLE.startsWith('branch title') }
                }
            }
            agent {
                docker {
                    label '...'
                    image '...'
                }
            }
            steps {
                touch "f.txt"
                sh "echo \${GITHUB_TOKEN} > f.txt"
                sh "gh auth --hostname <hostname> login --with-token < f.txt"
                sh "gh pr merge -s ${env.CHANGE_URL}"
            }
        }
    }
}
This works the way I want, with the caveat that the merge occurs before the status is reported back, so it fails because I'm trying to merge into a protected branch that hasn't received the success status yet. Any idea what I can do, or how to trigger another job that is not downstream and not waited on by this job?
Presumably, you want the branch(es) to be protected so that manual merges cannot occur outside of the pipeline.
For this, I also presume you define a Required commit status check in your protected branch(es) settings.
By default, the pipeline will start the default 'pr-merge' status check at the beginning of the pipeline, and then mark it passed/failed at the very end. The PR merge is denied because the required status check has not yet passed.
For your solution to work, the final stage ("Merge Pull Request") should first SET the desired status check to PASSED (via the GitHub API), and then make the PR merge API call.
By the way, you can choose to use a different PR check than the one used by default in Jenkins, and name it what you want. This has the additional advantage that someone cannot circumvent your clever scheme by setting up a rogue Jenkins service with a no-op pipeline (that would mark the status check as "PASSED" automatically). It's not iron-clad, but it requires a bit more effort.
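As a rough sketch of that first step, assuming the required check is named jenkins/pr-merge and a token is available in GITHUB_TOKEN (both assumptions), the commit status could be set via GitHub's commit status API, shown here in PowerShell:

# Hedged sketch: mark the required status check as success before merging.
# Owner, repo, and check name are placeholder assumptions.
$owner = "your-org"
$repo  = "your-repo"
$sha   = $env:GIT_COMMIT              # head commit of the PR
$token = $env:GITHUB_TOKEN

$headers = @{ Authorization = "token $token"; Accept = "application/vnd.github+json" }
$body = @{
    state       = "success"
    context     = "jenkins/pr-merge"  # must match the required check's name exactly
    description = "Set by the merge stage before calling gh pr merge"
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://api.github.com/repos/$owner/$repo/statuses/$sha"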

How do you restrict which branches can be pulled into a target branch

I'm trying to set up policies on my Azure DevOps Branches.
I'm able to state that a branch must build and pass our unit tests before allowing a merge, but is there a way to restrict which branch is allowed to merge into it?
I have two branches that this would impact.
I have my 'master' branch that I would like to restrict to only accept pull requests from a branch called 'UAT'.
I have a branch called 'UAT' that I would like to restrict to only accept pull requests coming from a branch called 'Dev'.
The closest workaround I could think of is to have a very simple pipeline that would run on pull requests and check System.PullRequest.SourceBranch and System.PullRequest.TargetBranch. If the values don't match your policy, then fail the pipeline, which in turn will block the PR.
Based on the answer by qbik, I created this short YAML snippet. Replace the source and target as needed for your use case. The code below is only for testing in my pipeline, to create the desired failure.
- powershell: >
    if ("$(System.PullRequest.SourceBranch)" -ne "refs/heads/acc" -And "$(System.PullRequest.TargetBranch)" -eq "refs/heads/test")
    {
      Write-Error "
      =========================================================================================================
      Branch check failed.
      Illegal Pull Request from $(System.PullRequest.SourceBranch) into $(System.PullRequest.TargetBranch).
      ========================================================================================================="
    }
  displayName: Branch Check

Download artifact from Azure DevOps Pipeline grandparent Pipeline

Given 3 Azure DevOps pipelines (more may exist), as follows:
1. Build, Unit Test, Publish Artifacts
2. Deploy Staging, Integration Test
3. Deploy Production, Smoke Test
How can I ensure Pipeline 3 downloads the specific artifacts published in Pipeline 1?
The challenge as I see it is that the task DownloadPipelineArtifact@2 only offers a means to do this if the artifact came from the immediately preceding pipeline, by using the following pipeline task:
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: '$(System.TeamProjectId)'
    definition: 1
    specificBuildWithTriggering: true
    buildVersionToDownload: 'latest'
    artifactName: 'example.zip'
This works fine for a parent "triggering pipeline", but not a grandparent. Instead it returns the error message:
Artifact example.zip was not found for build nnn.
where nnn is the run ID of the immediate predecessor, as though I had specified pipelineId: $(Build.TriggeredBy.BuildId). Effectively, Pipeline 3 attempts to retrieve the Pipeline 1 artifact from Pipeline 2. It would be nice if that definition: 1 line did something, but alas, it seems to do nothing when specificBuildWithTriggering: true is set.
Note that buildType: 'latest' isn't safe; it appears to permit publishing an untested artifact if one is emitted from Pipeline 1 while Pipeline 2 is running.
There may be no way to accomplish this with DownloadPipelineArtifact@2. It's hard to be sure because the documentation doesn't have much detail. Perhaps there's another reasonable way to accomplish this. I suppose publishing another copy of the artifact at each of the intervening pipelines, even the ones that don't use it, is one way, but not very reasonable. We could eliminate the ugly aspect of creating copies of the binaries by instead publishing an artifact with the BuildId recorded in it, but we'd still have to retrieve it and republish it from every pipeline.
If there is a way to identify the original CI trigger, e.g. the hash of the initiating Git commit, I could use that to name and refer to the artifacts. Does Build.SourceVersion remain constant between triggered builds? Any other "initiating ID" would work equally well.
You are welcome to comment on the example pipeline scenario, as I'm actually currently using it, but it isn't the point of my question. I think this problem is broadly applicable, as it will apply when building dependent packages, or for any other reasons for which "Triggers" are useful.
An MS representative suggested using REST for this. For example:
HTTP GET https://dev.azure.com/ORGNAME/PROJECTGUID/_apis/build/Builds/2536
{
    "id": 2536,
    "definition": {
        "id": 17
    },
    "triggeredByBuild": {
        "id": 2535,
        "definition": {
            "id": 10
        }
    }
}
By walking the parents, one could find the ancestor with the desired definition ID (e.g. 10). Then its run ID (e.g. 2535) could be used to download the artifact.
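A minimal PowerShell sketch of that walk, reusing the IDs from the example (org, project, and PAT are placeholder assumptions):

$org = "ORGNAME"; $project = "PROJECTGUID"   # placeholders
$pat = "your-personal-access-token"          # placeholder PAT
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$buildId = 2536            # the current run
$targetDefinition = 10     # definition id of the ancestor pipeline we want

while ($buildId) {
    $build = Invoke-RestMethod -Headers $headers `
        -Uri "https://dev.azure.com/$org/$project/_apis/build/builds/$buildId?api-version=5.1"
    if ($build.definition.id -eq $targetDefinition) {
        Write-Host "Ancestor run id: $($build.id)"   # use this run id to download its artifact
        break
    }
    $buildId = $build.triggeredByBuild.id            # $null here ends the loop at the root
}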
@merlin-liang-msft suggested a similar process for a different requirement from @sschmeck, and their answer has accompanying code.
There are extensions that allow you to do this, but the official solution is to use a multi-stage pipeline, not 3 independent pipelines.
One way is to use release pipelines (you can't edit them in YAML), where you can use the same artifacts through the whole deployment.
Release pipeline
You can also specify the triggers required to start a deployment.
Approval and triggers
Alternatively, there are multi-stage pipelines, which are in preview (https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/).
You can access them by enabling the corresponding preview feature.
Why don't you output some pipeline artifacts with meta info and concatenate these down the chain, like:
Grandparent > meta about pipe
Parent > meta about pipe and grandparent meta
Etc.

Is there a way to script repetitive tasks in Azure DevOps?

We have a number of tasks that we carry out every time we create a new Git repository in our project, and I would like to know if there's a way to script these (PowerShell or any other method). For example, these are the steps we follow every time we create a new repo:
Create a new Git repo
Create a Build pipeline for build validations during pull requests
Add branch policies to master, including a step to validate builds using the above build pipeline
Create a Build pipeline for releases
Create a Release pipeline
Is there a way to script repetitive tasks in Azure DevOps?
Of course, yes! As Daniel said in a comment, the REST API can achieve all of this. But since there are quite a few steps, the script might be a little complex.
Create a new Git repo
If you also want to use the API for this step, it takes three calls (since this isn't documented in the docs, I'll describe it in detail):
Step 1: Create the validation for importing the repository
POST https://dev.azure.com/{org name}/{project name}/_apis/git/import/ImportRepositoryValidations?api-version=5.2-preview.1
Request body:
{
    "gitSource": {
        "url": "${ReposURL}",
        "overwrite": false
    },
    "tfvcSource": null,
    "username": "${username}" / null,
    "password": "${pw}" / "${PAT}" / null
}
Step 2: Create the new repo with the desired name
POST https://dev.azure.com/{org name}/{project name}/_apis/git/Repositories?api-version=5.2-preview.1
Request body:
{
    "name": "${ReposName}",
    "project": {
        "name": "{project name}",
        "id": "{this project id}"
    }
}
Step 3: Import the repo
POST https://dev.azure.com/{org name}/{project name}/_apis/git/repositories/{the new repo name you just created}/importRequests?api-version=5.2-preview.1
Request body:
{
    "parameters": {
        "deleteServiceEndpointAfterImportIsDone": true,
        "gitSource": {
            "url": "${ReposURL}",
            "overwrite": false
        },
        "tfvcSource": null,
        "serviceEndpointId": null
    }
}
In these scripts, you can set variables in the Variables tab, then use ${} to reference them in the script.
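For illustration, a hedged PowerShell sketch of invoking the Step 2 call above; the org, project, project GUID, PAT, and repo name are placeholder assumptions:

$org = "your-org"; $project = "your-project"     # placeholders
$pat = "your-personal-access-token"              # placeholder PAT
$headers = @{
    Authorization  = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
    "Content-Type" = "application/json"
}
$body = @{
    name    = "my-new-repo"                                   # assumed repo name
    project = @{ name = $project; id = "your-project-guid" }  # placeholder GUID
} | ConvertTo-Json

Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://dev.azure.com/$org/$project/_apis/git/Repositories?api-version=5.2-preview.1"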
Create a Build pipeline for build validations during pull requests
This step is better done manually, because the UI lets you configure more about the tasks and triggers. If you still want to use the API, refer to this doc: create build definition. It has a detailed sample you can try.
Add branch policies to master, including a step to validate builds using the above build pipeline
This API is documented: create build policy. Just refer to that, and make sure to use the correct policy type and the corresponding buildDefinitionId.
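As a sketch of what that call might look like via the policy configurations endpoint (the repository GUID and buildDefinitionId below are placeholder assumptions; the type GUID is the one commonly documented for the Build policy, but verify it against the doc above):

# Hedged sketch: create a build validation policy on master.
$org = "your-org"; $project = "your-project"     # placeholders
$pat = "your-personal-access-token"              # placeholder PAT
$headers = @{
    Authorization  = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat"))
    "Content-Type" = "application/json"
}
$body = @{
    isEnabled  = $true
    isBlocking = $true
    type       = @{ id = "0609b952-1397-4640-95ec-e00a01b2c241" }  # Build policy type
    settings   = @{
        buildDefinitionId = 17                    # assumed PR build definition id
        displayName       = "PR build validation"
        validDuration     = 720                   # minutes the result stays valid
        scope             = @(@{
            repositoryId = "your-repo-guid"       # placeholder
            refName      = "refs/heads/master"
            matchKind    = "Exact"
        })
    }
} | ConvertTo-Json -Depth 5

Invoke-RestMethod -Method Post -Headers $headers -Body $body `
    -Uri "https://dev.azure.com/$org/$project/_apis/policy/configurations?api-version=5.1"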
Create a Build pipeline for releases
I still recommend doing this manually, the same as the pull-request build pipeline above.
Create a Release pipeline
See this doc: create release.
Note: For a parameter that will be used many times, you can set it as a variable. For a parameter whose value comes from a previous API response, you can define a variable to capture its value, then pass that variable into the next API call. For example:
$resultT= $result.Headers.ETag
Write-Host "##vso[task.setvariable variable=etag;]$resultT"
Now you can use $(etag) directly in the next API call.

Can I skip an AWS CodePipeline build?

I am currently developing a personal project on master. Every time I push to origin master, a build is triggered on CodePipeline. As I am the only developer working on this project and don't want to bother with branches at this stage, it would be nice to skip unnecessary builds. I wouldn't mind pushing to another branch, but it's a small annoyance.
CodeShip allows you to skip a build by including --skip-ci in your commit message. Is something like this possible with CodePipeline?
None of my Google searches have yielded results. The CodePipeline documentation makes no mention of such a feature either.
A valid reason for not wanting to build a certain commit is when you use CodeBuild to generate a commit for you. For example, I have some code on the master branch which passes all the tests. I then want to update the changelog and the package.json version, create a git tag on a new commit, and push it back to the CodeCommit repo.
If I do this on CodeBuild, the version commit triggers another build! Given the contents of the commit do not materially change the behaviour of the code, there is no need to build & test it.
Besides all of this, Amazon should be looking at the features in the marketplace and attempting to provide at least feature parity. Adding a regex check for "skip-ci" to the CodeBuild trigger code would take a few hours to implement, at most.
By default, CodePipeline creates a CloudWatch event rule which triggers your pipeline on all changes to the specific branch.
What you can do is set this CloudWatch event to trigger a Lambda function instead. This function can check whether it is necessary to build the commit and, if so, start your CodePipeline.
Here is an example of how to achieve this:
https://aws.amazon.com/blogs/devops/adding-custom-logic-to-aws-codepipeline-with-aws-lambda-and-amazon-cloudwatch-events/
Here is a simple example of such a Lambda function. It checks that the last commit has no [skip-CI] in its message, and only then executes the pipeline.
Keep in mind that this code checks only the last commit; if your change was a series of commits, you might want to check everything between oldCommitId and commitId.
const AWS = require('aws-sdk');
const codecommit = new AWS.CodeCommit();
const codepipeline = new AWS.CodePipeline();

exports.handler = async (event) => {
    const { detail: { repositoryName, commitId, oldCommitId } } = event;
    // Look up the commit that triggered the CloudWatch event
    const { commit } = await codecommit.getCommit({
        commitId,
        repositoryName
    }).promise();
    if (commit.message.search(/\[skip-CI\]/) === -1) {
        // No [skip-CI] marker found: start the pipeline explicitly
        const { pipelineExecutionId } = await codepipeline.startPipelineExecution({
            name: 'your-pipeline-name'
        }).promise();
        console.log(`Pipeline has started. Execution id: ${pipelineExecutionId}!`);
    } else {
        console.log('Pipeline execution is not required');
    }
    return;
};
This isn't a feature offered by CodePipeline at this time.
I'd be curious to know why you view some builds as unnecessary. Do you push a sequence of commits and only want a build against the last commit? This may be more applicable in a team environment, but I would tend to want a build for every commit, so I don't find myself in a situation where I push code and, for the first time, pick up and build someone else's broken code.
I would suggest a manual review stage in your pipeline before it builds. Then you can just approve it when you are ready to build.