Azure DevOps JMeter load test - how to access your JMeter summary reports

Usually one would create one or more Linux VMs and run one or more JMeter masters/slaves. Then you can collect the output of the thread group's Summary Report listener, which contains fields like average, min, max, std. deviation, 95th percentile, etc.
When you run your JMeter project in DevOps under "Load tests" -> New -> "Apache JMeter Test", it does output some standard info under charts, summary and logs, but this is not the output from your Summary Report listener; it must be the output from some other report listener. It does have total average response time (not the per-API-call response time I need), and it doesn't have std. deviation, 95th percentile, etc., which I get when I run the project manually in JMeter myself. The DevOps JMeter tool does produce jmeter.logs and DefaultCTLAttributes.csv, but neither of these contains my summary data.
How do I get the DevOps JMeter tool to output my Summary Report listener's data?

To get the JMeter reports available as an Azure DevOps pipeline tab, you can also use the extension https://marketplace.visualstudio.com/items?itemName=LakshayKaushik.PublishHTMLReports with htmlType='JMeter'.
The post https://lakshaykaushik2506.medium.com/running-jmeter-load-tests-and-publishing-jmeter-report-within-azure-devops-547b4b986361 provides the details with a sample pipeline.
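A minimal sketch of wiring that task into a pipeline, assuming the task name and input names from the extension's listing (verify them against the extension docs; the report path is illustrative):
steps:
- task: publishhtmlreport@1
  inputs:
    htmlType: 'Jmeter'
    JmeterReportsPath: '$(Build.ArtifactStagingDirectory)/jmeter-report'  # folder containing the generated HTML dashboard
This publishes the dashboard as a tab on the pipeline run, so the per-request statistics (min, max, percentiles) are browsable without downloading artifacts.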

Based on my test, I could reproduce this situation. The test results (jmeter.logs and DefaultCTLAttributes.csv) in Test Plans -> Load test indeed don't contain those fields (e.g. min, max, std. deviation).
It seems that there is no option to create a summary that contains these data points.
As a workaround, you could run the JMeter test in a pipeline.
For example:
steps:
- task: CmdLine@2
  inputs:
    script: |
      cd JmeterPath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      # run the test plan in non-GUI mode and write the results to a .jtl file
      jmeter -n -t Path\Jmeter.jmx -l Path\report.jtl
- task: CmdLine@2
  inputs:
    script: |
      cd JmeterPath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      # generate the HTML dashboard report from the .jtl results file
      jmeter -g Path\report.jtl -o OutPutPath
Since the hosted agents don't have JMeter installed, you need to run the pipeline on a self-hosted agent.
You can then get the charts in the generated HTML dashboard, which contains this information.
If you want to publish this file to Azure DevOps, you could use the Publish Build Artifacts task.
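For example, a minimal sketch (the path must match the -o output folder from the report-generation step above):
- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'OutPutPath'            # the dashboard folder produced by jmeter -g ... -o
    ArtifactName: 'jmeter-html-report'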
On the other hand, you can report your needs to our UserVoice website.
Hope this helps.

You can also use the Taurus extension, available at:
https://marketplace.visualstudio.com/items?itemName=AlexandreGattiker.jmeter-tasks

You can also use the following pipeline template:
https://github.com/Azure-Samples/jmeter-aci-terraform
It leverages Azure Container Instances as JMeter agents. It publishes a JMeter dashboard (with those metrics you need) as a build artifact.

Related

How to change behavior of Azure DevOps to report tests run with the DataTestMethod attribute as separate tests

I have an MSTest test that uses the DataTestMethod attribute to dynamically generate a matrix of values to test a function with. I could describe it generally like this:
[DataTestMethod]
[DynamicData(nameof(DynamicTestData), DynamicDataSourceType.Property)]
public void Run_test_on_function_xzy(int input, int expected)
{
    // Run the test using the two input values.
}
For the purposes of discussion, say DynamicTestData returns 10 value pairs, which results in 10 tests being run.
Now, on the Azure DevOps side, when I run the tests in an Azure Pipeline, Azure DevOps reports only one test result, not 10. Is there a way I can modify this behavior in MSTest or Azure DevOps to report a result for each subtest at the root level?
In the build summary page we can see the test run, and expanding it shows the individual test results. We cannot report the result for each subtest at the root level; the root level shows the test run, not the test results.

Azure DevOps - send email notification after test results have been published

We have a nightly scheduled pipeline that runs tests and publishes the results to a test run. We can see the URL to the test run, as this is generated by the PublishTestResults@2 task. I'd like to extend this functionality by emailing a set of users the DevOps link to the test run.
Here's how the publish task currently looks:
steps:
# Publish test results (to show the test details in JUnit format)
- task: PublishTestResults@2
  displayName: 'Publish test results'
  inputs:
    testResultsFormat: 'JUnit'
    testResultsFiles: '*.xml'
    searchFolder: '$(Build.SourcesDirectory)/cypress/reports/junit'
    mergeTestResults: true
    testRunTitle: 'Publish Test Results'
  condition: succeededOrFailed()
  continueOnError: true
Are there any recommended approaches to doing this?
Option 1: Notifications
If you just need a link to the pipeline run itself, you can configure a notification.
Option 2: Email Report Extension
If you want more control than notifications offer, the Email Report extension from Microsoft DevLabs generates a (customizable) report sent to a list of recipients. It contains:
Overall test summary: this info mirrors the Tests tab in the pipeline.
Test run summary: information about the individual test runs in the pipeline, if any.
Test failures: failing tests, their stack traces (configurable in the task input) and associated work items.
Commits/changeset information
Phases/environments information
Task information: the task(s) that ran in the pipeline - name, duration and any error logs on failure.
You will need to provide credentials for the SMTP server the email should be sent through.
Option 3: Other extensions
There are a number of third-party email-sending extensions for Azure DevOps.
Option 4: Custom script
There is of course also the option to create a Bash/PowerShell script to send the email; here is a simple PowerShell example: How to send email with PowerShell
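As a rough sketch of that last option (SMTP server, addresses and subject are placeholders; Send-MailMessage is the classic cmdlet, though Microsoft now marks it obsolete):
- task: PowerShell@2
  displayName: 'Email link to test results'
  condition: succeededOrFailed()
  inputs:
    targetType: 'inline'
    script: |
      # Build a link to this run's Tests tab and mail it (all names illustrative)
      $url = "$(System.CollectionUri)$(System.TeamProject)/_build/results?buildId=$(Build.BuildId)&view=ms.vss-test-web.build-test-results-tab"
      Send-MailMessage -SmtpServer 'smtp.example.com' `
        -From 'pipeline@example.com' -To 'team@example.com' `
        -Subject "Nightly test results $(Build.DefinitionName) $(Build.BuildNumber)" `
        -Body "Test results: $url"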

Specify runtime parameter in a pipeline task

We have a requirement to somehow pass a dynamic runtime parameter to a pipeline task.
For example, the parameter APPROVAL below would be different for each run of the task.
This APPROVAL parameter holds the change and release number, so that the task can tag it on the Terraform resources created, for audit purposes.
I've been searching the web for a while but with no luck finding a solution. Is this possible in a Concourse pipeline, and is it best practice?
- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  params:
    ENV: dev
    APPROVAL: test
    CHANNEL: Developement
    GITLAB_KEY: ((gitlab_key))
    REGION: eu-west-2
    TF_FOLDER: terraform/squid
  input_mapping:
    ci: ci
    tf: squid
  output_mapping:
    plan: plan
  tags:
    - dev
From https://concourse-ci.org/tasks.html:
ideally tasks are pure functions: given the same set of inputs, it should either always succeed with the same outputs or always fail.
A dynamic parameter would break that contract and produce different outputs from the same set of inputs. Could you possibly make APPROVAL an input? Then you'd maintain your build traceability. If it's a (file) input, you could then load it into a variable:
APPROVAL=$(cat <filename>)
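A hedged sketch of what that could look like (the resource and file names are made up; the task's config would also need to declare the extra input):
- get: approval             # a resource that yields one approval file per run
- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  input_mapping:
    approval: approval
  params:
    ENV: dev
Then, inside the task's run script, APPROVAL=$(cat approval/approval.txt) (file name illustrative) restores the value while keeping the input recorded for traceability.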

Download artifact from Azure DevOps Pipeline grandparent Pipeline

Given 3 Azure DevOps pipelines (more may exist), as follows:
Pipeline 1: Build, Unit Test, Publish Artifacts
Pipeline 2: Deploy Staging, Integration Test
Pipeline 3: Deploy Production, Smoke Test
How can I ensure Pipeline 3 downloads the specific artifacts published in Pipeline 1?
The challenge as I see it is that the task DownloadPipelineArtifact@2 only offers a means to do this if the artifact came from the immediately preceding pipeline, by using the following pipeline task:
- task: DownloadPipelineArtifact@2
  inputs:
    buildType: 'specific'
    project: '$(System.TeamProjectId)'
    definition: 1
    specificBuildWithTriggering: true
    buildVersionToDownload: 'latest'
    artifactName: 'example.zip'
This works fine for a parent "triggering pipeline", but not a grandparent. Instead it returns the error message:
Artifact example.zip was not found for build nnn.
where nnn is the run ID of the immediate predecessor, as though I had specified pipelineId: $(Build.TriggeredBy.BuildId). Effectively, Pipeline 3 attempts to retrieve the Pipeline 1 artifact from Pipeline 2. It would be nice if that definition: 1 line did something, but alas, it seems to do nothing when specificBuildWithTriggering: true is set.
Note that buildType: 'latest' isn't safe; it appears it permits publishing an untested artifact, if emitted from Pipeline 1 while Pipeline 2 is running.
There may be no way to accomplish this with DownloadPipelineArtifact@2. It's hard to be sure because the documentation doesn't have much detail. Perhaps there's another reasonable way to accomplish this... I suppose publishing another copy of the artifact at each of the intervening pipelines, even the ones that don't use it, is one way, but not very reasonable. We could eliminate the ugly aspect of creating copies of the binaries by instead publishing an artifact with the BuildId recorded in it, but we'd still have to retrieve it and republish it from every pipeline.
If there is a way to identify the original CI trigger, e.g. to find the hash of the initiating Git commit, I could use that to name and refer to the artifacts. Does Build.SourceVersion remain constant between triggered builds? Any other "initiating ID" would work equally well.
You are welcome to comment on the example pipeline scenario, as I'm actually currently using it, but it isn't the point of my question. I think this problem is broadly applicable, as it will apply when building dependent packages, or for any other reasons for which "Triggers" are useful.
An MS representative suggested using REST for this. For example:
HTTP GET https://dev.azure.com/ORGNAME/PROJECTGUID/_apis/build/Builds/2536

{
  "id": 2536,
  "definition": {
    "id": 17
  },
  "triggeredByBuild": {
    "id": 2535,
    "definition": {
      "id": 10
    }
  }
}
By walking the parents, one could find the ancestor with the desired definition ID (e.g. 10). Then its run ID (e.g. 2535) could be used to download the artifact.
@merlin-liang-msft suggested a similar process for a different requirement from @sschmeck, and their answer has accompanying code.
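A hedged sketch of that walk as an inline pipeline step (untested; the target definition ID 10 and the variable name are illustrative, and it relies on the triggeredByBuild field shown above plus the job access token):
- task: PowerShell@2
  displayName: 'Find ancestor run of definition 10'
  inputs:
    targetType: 'inline'
    script: |
      $headers = @{ Authorization = "Bearer $(System.AccessToken)" }
      $buildId = "$(Build.TriggeredBy.BuildId)"    # immediate parent run
      while ($buildId) {
        $uri = "$(System.CollectionUri)$(System.TeamProjectId)/_apis/build/Builds/${buildId}?api-version=6.0"
        $build = Invoke-RestMethod -Uri $uri -Headers $headers
        if ($build.definition.id -eq 10) { break } # found the desired ancestor
        $buildId = $build.triggeredByBuild.id      # walk one level up
      }
      # expose the run ID for a later download step
      Write-Host "##vso[task.setvariable variable=ancestorRunId]$buildId"
The resulting $(ancestorRunId) could then feed DownloadPipelineArtifact@2 with buildVersionToDownload: 'specific' and pipelineId: $(ancestorRunId), instead of specificBuildWithTriggering: true.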
There are extensions that allow you to do this, but the official solution is to use a multi-stage pipeline rather than 3 independent pipelines.
One way is using release pipelines (which you can't code/edit in YAML), where you can use the same artifacts through the whole deployment and specify the required approvals and triggers to start each deployment.
Alternatively, there are multi-stage pipelines, which are in preview (https://devblogs.microsoft.com/devops/whats-new-with-azure-pipelines/). You can access them by enabling them under your preview features.
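A minimal multi-stage shape (names illustrative), where a single pipeline carries the artifact through every stage:
stages:
- stage: Build
  jobs:
  - job: build
    steps:
    - script: echo building   # build + unit tests here
    - publish: '$(Build.ArtifactStagingDirectory)/example.zip'
      artifact: example
- stage: Staging
  dependsOn: Build
  jobs:
  - job: deploy_staging
    steps:
    - download: current       # artifacts from this same run, no ancestor lookup needed
      artifact: example
- stage: Production
  dependsOn: Staging
  jobs:
  - job: deploy_production
    steps:
    - download: current
      artifact: example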
Why don't you output some pipeline artifacts with meta info and concatenate these along the chain, like:
Grandparent > meta about pipe
Parent > meta about pipe and grandparent meta
Etc.

Azure Pipeline with SonarQube task over HTTPS

I added a SonarQube task to my Azure build pipeline; in order to log in to my SonarQube server, I need to run a command that uses an SSL trust store.
My pipeline looks like this:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com
      sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit
I am not sure whether the property "sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit" is correct.
I got the error: API GET '/api/server/version' failed, error was: {"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE"}
PS: my project is an Angular project.
Any solutions?
This issue is related to how the prepare task works. Even if you add the certificate to the Java trust store, the task that sets the configuration uses a different runtime (not Java, at least) to communicate with the server; that's why you still get that certificate error.
To resolve this issue, you could try to set a global variable, NODE_EXTRA_CA_CERTS, and point it to a copy of the root cert stored locally in a directory. See this article.
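For example, a sketch of setting that variable at the pipeline level (the certificate path is illustrative; pipeline variables are exposed to tasks as environment variables, which is how Node picks NODE_EXTRA_CA_CERTS up):
variables:
  NODE_EXTRA_CA_CERTS: '$(Build.SourcesDirectory)/certs/root-ca.pem'  # copy of the root cert kept in the repo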
Check the related ticket for some more details.
Hope this helps.