Azure DevOps - send email notification after test results have been published

We have a nightly scheduled pipeline that runs tests and publishes the results to a test run. We can see the URL to the test run, as this is generated by the PublishTestResults@2 task. I'd like to extend this functionality by emailing a set of users the DevOps link to the test run.
Here's how the publish task currently looks:
steps:
# Publish test results (to show the test details in JUnit format)
- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '*.xml'
    searchFolder: '$(Build.SourcesDirectory)/cypress/reports/junit'
    mergeTestResults: true
    testRunTitle: 'Publish Test Results'
  condition: succeededOrFailed()
  continueOnError: true
Are there any recommended approaches to doing this?

Option 1: Notifications
If you just need a link to the pipeline run itself, you can configure a notification.
Option 2: Email Report Extension
If you want more control than notifications offer, the Email Report Extension from Microsoft DevLabs generates a customizable report, sent to a list of recipients, that contains:
Overall Test Summary: mirrors the Tests tab in the pipeline.
Test Run Summary: information about the individual test runs in the pipeline, if any.
Test Failures: failed tests, their stack traces (configurable in the task input), and associated work items.
Commits/Changeset information.
Phases/Environments information.
Task information: the task(s) that ran in the pipeline - name, duration, and any error logs on failure.
You will need to provide credentials for the SMTP server the email should be sent through.
Option 3: Other extension
There are a number of third-party email-sending extensions for Azure DevOps.
Option 4: Custom script
There is, of course, also the option to create a bash/PowerShell script to send the email; here is a simple PowerShell example: How to send email with PowerShell
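As a bash variant of the same idea, here is a minimal sketch that builds a link to the run's Tests tab from the pipeline's predefined variables and sends it with curl's SMTP support. The SMTP host, addresses, credentials, and the exact URL shape are assumptions; substitute your own, and note the task log also prints the exact test-run link.

```shell
# Placeholder defaults so the sketch runs outside a pipeline; inside a
# job these come from predefined variables of the same names.
COLLECTION_URI="${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI:-https://dev.azure.com/myorg/}"
PROJECT="${SYSTEM_TEAMPROJECT:-MyProject}"
BUILD_ID="${BUILD_BUILDID:-123}"

# Assumed link to the Tests tab of this pipeline run.
RUN_URL="${COLLECTION_URI}${PROJECT}/_build/results?buildId=${BUILD_ID}&view=ms.vss-test-web.build-test-results-tab"

mail_body="Subject: Nightly test results

Test run published: ${RUN_URL}"

printf '%s\n' "$mail_body"

# Uncomment to actually send (requires an SMTP server and credentials):
# printf '%s\n' "$mail_body" | curl --ssl-reqd \
#   --url 'smtp://smtp.example.com:587' \
#   --user 'user:password' \
#   --mail-from 'ci@example.com' \
#   --mail-rcpt 'team@example.com' \
#   --upload-file -
```

Run this in a Bash@3 step with condition: succeededOrFailed() so it fires even when tests fail.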


Azure Devops pipeline started failing on self hosted agent on Check build quality task
steps:
- task: mspremier.BuildQualityChecks.QualityChecks-task.BuildQualityChecks@6
  displayName: 'Check build quality'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed
    coverageType: lines
    coverageThreshold: 50
    buildPlatform: '$(BuildPlatform)'
  enabled: false
  timeoutInMinutes: 24
I tried all the task versions.
I tried all the coverage types and added timeout minutes too, but it's not working.
I tried on a Microsoft-hosted agent (win 2019) and it worked.
Until last week it was working on the self-hosted 2019 agent.
Happy to provide more information if needed.
Also, I would like to understand: if we are already doing a Sonar code scan, does this task add any value?
If you're running this check on a PR without any code changes (i.e., only adding this build check), the code coverage check will fail by default, as there's no additional coverage added.
Have a look at the treat0of0as100 option listed here: https://github.com/MicrosoftPremier/VstsExtensions/blob/master/BuildQualityChecks/en-US/overview.md
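If that is the cause, the option is set as a task input. A sketch based on the question's task follows; verify the exact input name against the linked overview:

```yaml
- task: mspremier.BuildQualityChecks.QualityChecks-task.BuildQualityChecks@6
  displayName: 'Check build quality'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed
    coverageType: lines
    coverageThreshold: 50
    treat0of0as100: true  # treat "0 of 0 lines covered" as 100% coverage
```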

How to stop matrix builds on the first error

I have a CI pipeline setup for release and debug builds:
trigger:
  batch: true
  branches:
    include:
    - "master"
    - "main"
    - "feature/*"
    - "hotfix/*"
strategy:
  matrix:
    'Release':
      buildConfiguration: 'Release'
    'Debug':
      buildConfiguration: 'Debug'
Both are run regardless of errors:
I want to change this behaviour so that when one job fails the other job also stops - saving me build minutes.
Is this possible?
There is no built-in method to easily and automatically cancel all in-progress jobs if any matrix job fails.
As a workaround, you can try the following method:
Add a script task (such as PowerShell or Bash) as the last step of the matrix job.
Set the script task to run only when any of the previous tasks has failed (condition: failed()).
In the script task, call the REST API "Builds - Update Build" to cancel the current build.
This way, when any task in the job fails, the script task will run and call the REST API to cancel the whole build.
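The REST call from that workaround can be sketched in bash. The org/project/build values below are placeholder defaults; inside a job they come from the predefined variables of the same names, and $(System.AccessToken) must be made available to the step.

```shell
# Build the "Builds - Update Build" URL for the current run.
COLLECTION_URI="${SYSTEM_TEAMFOUNDATIONCOLLECTIONURI:-https://dev.azure.com/myorg/}"
PROJECT="${SYSTEM_TEAMPROJECT:-MyProject}"
BUILD_ID="${BUILD_BUILDID:-123}"

url="${COLLECTION_URI}${PROJECT}/_apis/build/builds/${BUILD_ID}?api-version=6.0"
body='{"status":"cancelling"}'
echo "$url"

# Uncomment inside a pipeline job to actually cancel the build:
# curl -s -X PATCH "$url" \
#   -H "Authorization: Bearer $SYSTEM_ACCESSTOKEN" \
#   -H "Content-Type: application/json" \
#   -d "$body"
```

Give the step condition: failed() so it only runs when an earlier task in the job has failed.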
Of course, if you really want a built-in method (for example, a jobs.job.strategy.fail-fast option), I recommend reporting a feature request on Developer Community. That will let you interact with the appropriate engineering team and makes it easier for them to collect and categorize suggestions.

Azure devops jmeter load test - how to access your jmeter summary reports

Usually one would create one or more Linux VMs and run one or more JMeter masters/slaves. Then you can collect the output of the thread group's summary report listener, which contains fields like average, min, max, std. deviation, 95th percentile, etc.
When you run your JMeter project in DevOps under "Load tests" -> New -> "Apache JMeter Test", it does output some standard info under charts, summary, and logs, but this is not the output from your summary report listener; it must come from some other listener. It has the total average response time (not the response time per API call, which I need) and doesn't have std. deviation, 95th percentile, etc., which I get when I run the project manually in JMeter myself. The DevOps JMeter tool does produce jmeter.logs and DefaultCTLAttributes.csv, but neither of these contains my summary data.
How do I get the DevOps JMeter tool to output my summary report listener's data?
To get the JMeter reports available as an Azure DevOps pipeline tab, you can also use the extension https://marketplace.visualstudio.com/items?itemName=LakshayKaushik.PublishHTMLReports&targetId=c2bac9a7-71cb-49a9-84a5-acfb8db48105&utm_source=vstsproduct&utm_medium=ExtHubManageList with htmlType='JMeter'.
The post https://lakshaykaushik2506.medium.com/running-jmeter-load-tests-and-publishing-jmeter-report-within-azure-devops-547b4b986361 provides the details with a sample pipeline.
Based on my test, I could reproduce this situation. The test results (jmeter.logs and DefaultCTLAttributes.csv) in Test Plans -> Load test indeed don't contain those fields (e.g., min, max, std. deviation).
It seems that there is no option to create a summary that contains these data points.
As a workaround, you could run the JMeter test in the pipeline.
For example:
steps:
- task: CmdLine@2
  inputs:
    script: |
      cd JmeterPath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      jmeter -t Path\Jmeter.jmx -n -l Path\report.jtl
- task: CmdLine@2
  inputs:
    script: |
      cd Jmeterpath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      jmeter -g Path/report.jtl -o OutPutPath
Since the hosted agents don't have JMeter installed, you need to run the pipeline on self-hosted agents.
Then you can get the charts in the generated HTML file, which contains this information.
If you want to publish this file to Azure DevOps, you could use the Publish Build Artifacts task.
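For example, a minimal Publish Build Artifacts step (the path matches the OutPutPath folder from the jmeter -g command above; adjust it to your setup):

```yaml
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'OutPutPath'  # the -o output folder from the jmeter -g step
    ArtifactName: 'jmeter-dashboard'
```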
On the other hand, you can report your needs to our UserVoice website.
Hope this helps.
You can also use the extension called Taurus, available at:
https://marketplace.visualstudio.com/items?itemName=AlexandreGattiker.jmeter-tasks
You can also use the following pipeline template:
https://github.com/Azure-Samples/jmeter-aci-terraform
It leverages Azure Container Instances as JMeter agents. It publishes a JMeter dashboard (with those metrics you need) as a build artifact.

New Azure DevOps pipeline using ASP.NET yaml template failing

I have a GitHub repository with a .NET Core 3.0 website solution in it. In Azure DevOps, I went through the wizard to create a new pipeline linked to that repository using the ASP.NET Core template on the Configure step of the wizard. This is what my YAML looks like:
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/dotnet-core
trigger:
- develop
pool:
  vmImage: 'ubuntu-latest'
variables:
  buildConfiguration: 'Release'
steps:
- script: dotnet build --configuration $(buildConfiguration)
  displayName: 'dotnet build $(buildConfiguration)'
When I try to manually run the pipeline to test it, this is the output I get every time:
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[error]Provisioning request delayed or failed to send 5 time(s). This is over the limit of 3 time(s).
Pool: Azure Pipelines
Image: ubuntu-latest
Started: Yesterday at 10:04 PM
Duration: 10h 54m 5s
Job preparation parameters
ContinueOnError: False
TimeoutInMinutes: 60
CancelTimeoutInMinutes: 5
Expand:
MaxConcurrency: 0
########## System Pipeline Decorator(s) ##########
Begin evaluating template 'system-pre-steps.yml'
Evaluating: eq('true', variables['system.debugContext'])
Expanded: eq('true', Null)
Result: False
Evaluating: resources['repositories']['self']
Expanded: Object
Result: True
Evaluating: not(containsValue(job['steps']['*']['task']['id'], '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Expanded: not(containsValue(Object, '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Result: True
Evaluating: resources['repositories']['self']['checkoutOptions']
Result: Object
Finished evaluating template 'system-pre-steps.yml'
********************************************************************************
Template and static variable resolution complete. Final runtime YAML document:
steps:
- task: 6d15af64-176c-496d-b583-fd2ae21d4df4@1
  inputs:
    repository: self
I thought ubuntu-latest might no longer be a valid vmImage, so I tried changing it to ubuntu-18.04 and got the same result. The Microsoft-hosted agents documentation says either should be valid.
Do I have something wrong with my YAML file? I have set up pipelines before with the old no-YAML interface with no issues, so I am a little confused.
I think nowadays it should look like this:
trigger:
- develop
jobs:
- job: buildjob
  variables:
    buildConfiguration: 'Release'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: dotnet build --configuration $(buildConfiguration)
    displayName: 'dotnet build $(buildConfiguration)'
Although the docs say you can omit jobs if you only have a single job, I don't see anything wrong with your YAML other than the fact that you use steps directly (which, again, should be fine).
It looks like there is nothing wrong with your YAML file or its format.
Since you are using a GitHub repository with a .NET Core 3.0 website, please pay attention when you create the pipeline: make sure you have selected GitHub, not Azure Repos Git.
Also, as you mentioned that you have set up pipelines before with the old no-YAML interface with no issues, you could set up your pipeline with the classic editor first.
There is also a "View YAML" option there.
You could follow that format and content to create a YAML template, which may do the trick.
I was looking through my account and noticed my Agent Pools settings looked a little suspect on the project. It showed I had 11 available agents online, even though I was on the free plan with a private repository, so there should have been only one.
I ended up deleting my Azure DevOps organization and creating a new one. Now the YAML configuration I initially posted works fine.

Specify runtime parameter in a pipeline task

We have a requirement to somehow pass a dynamic runtime parameter to a pipeline task.
For example, the APPROVAL parameter below would be different for each run of the task.
This APPROVAL parameter holds the change and release number, so that the task can tag the Terraform resources it creates for audit purposes.
I've been searching the web for a while with no luck finding a solution. Is this possible in a Concourse pipeline, and is it best practice?
- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  params:
    ENV: dev
    APPROVAL: test
    CHANNEL: Developement
    GITLAB_KEY: ((gitlab_key))
    REGION: eu-west-2
    TF_FOLDER: terraform/squid
  input_mapping:
    ci: ci
    tf: squid
  output_mapping:
    plan: plan
  tags:
  - dev
From https://concourse-ci.org/tasks.html:
ideally tasks are pure functions: given the same set of inputs, it should either always succeed with the same outputs or always fail.
A dynamic parameter would break that contract and produce different outputs from the same set of inputs. Could you possibly make APPROVAL an input? Then you'd maintain your build traceability. If it's a (file) input, you could then load it into a variable:
APPROVAL=$(cat <filename>)
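For example, if an upstream step writes the change/release number into a file inside a resource directory that the task declares as an input, the task script reads it instead of taking it as a param. The resource name "approval" and file name "number" below are made up for illustration:

```shell
# Simulate a Concourse file input: a resource directory named "approval"
# containing a file "number" with the change/release number in it.
mkdir -p approval
printf 'CHG0012345\n' > approval/number

# Inside the task script, load the value from the input:
APPROVAL=$(cat approval/number)
echo "$APPROVAL"
```

Because the value now arrives as an input rather than a param, the task stays a pure function of its inputs and you keep build traceability.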