How to change the behavior of Azure DevOps to report tests run with the DataTestMethod attribute as separate tests - azure-devops

I have an MSTest test that uses the DataTestMethod attribute to dynamically generate a matrix of values to test a function with. I could describe it generally like this:
Example:
[DataTestMethod]
[DynamicData(nameof(DynamicTestData), DynamicDataSourceType.Property)]
public void Run_test_on_function_xzy(int input, int expected)
{
    // Run the test using the two input values.
}
For the purposes of discussion, let's say DynamicTestData returns 10 values, which results in 10 tests being run.
Now, on the Azure DevOps side, when I run the tests in an Azure Pipeline, Azure DevOps reports only one test result, not 10. Is there a way I can modify this behavior in MSTest or Azure DevOps to report a result for each subtest at the root level?

Check the pic below: on the build summary page we can see the test run, and expanding it we can see the test results. We cannot report the result for each subtest at the root level; the root level shows the test run instead of the test results.
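One thing that can at least make each data row identifiable in the published results is giving every row its own display name via MSTest's DynamicDataDisplayName hook. A minimal sketch, assuming MSTest v2; the data source contents and the GetDisplayName method below are hypothetical:

using System.Collections.Generic;
using System.Linq;
using System.Reflection;
using Microsoft.VisualStudio.TestTools.UnitTesting;

[TestClass]
public class FunctionXzyTests
{
    // Hypothetical data source returning 10 (input, expected) pairs.
    public static IEnumerable<object[]> DynamicTestData =>
        Enumerable.Range(1, 10).Select(i => new object[] { i, i * 2 });

    // MSTest calls this for every data row to build its display name,
    // so each row is listed under a distinct name in the .trx results.
    public static string GetDisplayName(MethodInfo method, object[] data) =>
        $"{method.Name}(input: {data[0]}, expected: {data[1]})";

    [DataTestMethod]
    [DynamicData(nameof(DynamicTestData), DynamicDataSourceType.Property,
        DynamicDataDisplayName = nameof(GetDisplayName))]
    public void Run_test_on_function_xzy(int input, int expected)
    {
        Assert.AreEqual(expected, input * 2); // placeholder assertion
    }
}

Each row then carries its own name in the test report, though this does not by itself change how Azure DevOps groups the results at the root level.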

Related

How to pull code from different branches at runtime and pass parameter to NUnit.xml file?

We recently moved from Java (TestNG) to C#/.NET (NUnit), and at the same time migrated from Jenkins to TeamCity. Currently we are facing some challenges while configuring the new build pipeline in TeamCity.
Scenario 1: Our project has multiple branches; we generally pull code from different Git branches and then trigger the automation.
In Jenkins we used to create a build parameter (a list); when a user wanted to execute the job, they selected the branch name from the list of build parameters, Git pulled the code from the selected branch, and then the execution was triggered.
Can you please help with how to implement a similar process in TeamCity? How do we configure the default value in the list parameter?
Scenario 2: In Jenkins, build parameters used to be passed to TestNG.xml, e.g. browser and environment. When the user selected the browser and environment from the build parameters and the execution was triggered, TestNG pulled those values and initiated the regression run.
How should we create build parameters (browser, environment) and pass those values to NUnit / the config file?
Thanks
Raghu
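A minimal sketch of one common pattern for the second scenario, assuming the TeamCity build parameters are exposed to the build step as environment variables (the names BROWSER and TEST_ENVIRONMENT below are hypothetical) and are read once per run in an NUnit setup fixture:

using NUnit.Framework;

[SetUpFixture]
public class TestRunConfiguration
{
    // Defaults are hypothetical; they are overridden by the build parameters when present.
    public static string Browser { get; private set; } = "chrome";
    public static string TargetEnvironment { get; private set; } = "staging";

    [OneTimeSetUp]
    public void ReadBuildParameters()
    {
        // TeamCity parameters defined with the env. prefix are passed to the
        // build step's process environment, so plain lookups are enough here.
        Browser = System.Environment.GetEnvironmentVariable("BROWSER") ?? Browser;
        TargetEnvironment = System.Environment.GetEnvironmentVariable("TEST_ENVIRONMENT") ?? TargetEnvironment;
    }
}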

Azure DevOps Pipeline using old connection string

I have an Azure DevOps pipeline which is failing to run because it seems to be using an old connection string.
The pipeline is for a C# project, where a FileTransform task updates an appsettings.json file with variables set on the pipeline.
The variables were recently updated to use a new connection string; however, when running a Console.PrintLn before using it and viewing the output on the pipeline, it shows an outdated value.
Many updates similar to this have been run in the past without issue.
I've also recently added a PowerShell task to echo the value of the variables loaded while the pipeline is running, and that does display the new value.
I've checked the order of precedence of variables and there shouldn't be any other variables being used.
There is no CacheTask being used in this pipeline.
Does anyone have any advice to remedy this? It seems that the pipeline itself is just ignoring the variables set on the pipeline.
There is a problem with the recent File Transform task version v1.208.0.
It shows a warning message and does not update the variable values correctly.
Warning example:
Resource file haven't been set, can't find loc string for key: JSONvariableSubstitution
Refer to this ticket: File transform task failing to transform files, emitting "Resource file haven't been set" warnings
The issue is with the task itself rather than the pipeline configuration. Many users have the same issue.
Workaround:
You can switch to the File Transform task version 2 to update the appsettings.json file.
Here is an example: remove the content of the XML Transformation rules field and set the JSON file path.
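For reference, a minimal YAML sketch of that workaround; the package path below is hypothetical and should point at your published package or folder:

- task: FileTransform@2
  inputs:
    folderPath: '$(Pipeline.Workspace)/drop/WebApp.zip'  # hypothetical package/folder path
    xmlTransformationRules: ''                           # leave the XML Transformation rules field empty
    jsonTargetFiles: '**/appsettings.json'               # JSON file(s) to substitute variables into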

Azure DevOps JMeter load test - how to access your JMeter summary reports

Usually one would create one or more Linux VMs and run one or more JMeter masters/slaves. Then you can collect the output of the thread group's Summary Report listener, which contains fields like average, min, max, std. deviation, 95th percentile, etc.
When you run your JMeter project in DevOps under "Load tests" -> New -> "Apache JMeter Test", it does output some standard info under charts, summary and logs, but this is not the output from your Summary Report listener; it must be the output of some other report listener. It does have the total average response time (not the response time per API call, which I need), and doesn't have the std. deviation, 95th percentile, etc. that I get when I run the project manually in JMeter myself. The DevOps JMeter tool does produce jmeter.logs and DefaultCTLAttributes.csv, but neither of these contains my summary data.
How do I get the DevOps JMeter tool to output my Summary Report listener data?
To get the JMeter reports available as an Azure DevOps Pipeline tab, you can also use the extension https://marketplace.visualstudio.com/items?itemName=LakshayKaushik.PublishHTMLReports&targetId=c2bac9a7-71cb-49a9-84a5-acfb8db48105&utm_source=vstsproduct&utm_medium=ExtHubManageList with htmlType='JMeter'.
The post https://lakshaykaushik2506.medium.com/running-jmeter-load-tests-and-publishing-jmeter-report-within-azure-devops-547b4b986361 provides the details with a sample pipeline.
Based on my test, I could reproduce this situation. The test results (jmeter.logs and DefaultCTLAttributes.csv) in Test Plans -> Load test indeed don't contain those fields (e.g. min, max, std. deviation).
It seems that there is no option to create a summary that contains these data points.
As a workaround, you could run the JMeter test in the pipeline.
For example:
steps:
- task: CmdLine@2
  inputs:
    script: |
      cd JmeterPath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      jmeter -t Path\Jmeter.jmx -n -l Path\report.jtl
- task: CmdLine@2
  inputs:
    script: |
      cd JmeterPath\apache-jmeter-5.3\apache-jmeter-5.3\bin
      jmeter -g Path\report.jtl -o OutPutPath
Since the hosted agents don't have JMeter installed, you need to run the pipeline on self-hosted agents.
Then you can get the charts in the generated HTML report; this HTML file contains the information you need.
If you want to publish this report to Azure DevOps, you could use the Publish Build Artifacts task.
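For example, a minimal sketch; the artifact name is arbitrary, and OutPutPath refers to the same output folder passed to jmeter -o in the commands above:

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: 'OutPutPath'          # the HTML report folder generated by jmeter -g ... -o
    ArtifactName: 'jmeter-html-report'   # hypothetical artifact name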
On the other hand, you can report your needs to our UserVoice website.
Hope this helps.
You can also use the extension called Taurus, available at:
https://marketplace.visualstudio.com/items?itemName=AlexandreGattiker.jmeter-tasks
You can also use the following pipeline template:
https://github.com/Azure-Samples/jmeter-aci-terraform
It leverages Azure Container Instances as JMeter agents. It publishes a JMeter dashboard (with those metrics you need) as a build artifact.

Printing the Console output in the Azure DevOps Test Run task

I am doing some initial one-off setup using the [BeforeTestRun] hook for my SpecFlow tests. This checks some users to make sure they exist and creates them with specific roles and permissions if they don't, so the automated tests can use them. The function that does this prints a lot of useful information via Console.WriteLine.
When I run the tests on my local system I can see the output from this hook function on the main feature file, and the output of each scenario under it. But when I run the tests via an Azure DevOps pipeline, I am not sure where to find the output of [BeforeTestRun] because it is not bound to a particular test scenario. The console of the Run Tests task has no information about this.
Can someone please help me get this output shown somewhere so I can act accordingly?
I tried to use System.Diagnostics.Debug.Print, System.Diagnostics.Debug.WriteLine and System.Diagnostics.Trace.WriteLine, but nothing seems to work in the pipeline console.
[BeforeTestRun]
public static void BeforeRun()
{
    Console.WriteLine(
        "Before Test run analyzing the users and their needed properties for performing automation run");
}
I want my output to be visible somewhere so I can act on that information if needed.
It's not possible for the console logs.
The product currently does not support printing console logs for passing tests and we do not currently have plans to support this in the near future.
(Source: https://developercommunity.visualstudio.com/content/problem/631082/printing-the-console-output-in-the-azure-devops-te.html)
However, there's another way:
Your build will have an attachment with the file extension .trx. This is an XML file and contains an Output element for each test (see also https://stackoverflow.com/a/55452011):
<TestRun id="[omitted]" name="[omitted] 2020-01-10 17:59:35" runUser="[omitted]" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times creation="2020-01-10T17:59:35.8919298+01:00" queuing="2020-01-10T17:59:35.8919298+01:00" start="2020-01-10T17:59:26.5626373+01:00" finish="2020-01-10T17:59:35.9209479+01:00" />
  <Results>
    <UnitTestResult testName="TestMethod1">
      <Output>
        <StdOut>Test</StdOut>
      </Output>
    </UnitTestResult>
  </Results>
</TestRun>
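If you want to pull that output out of the attachment programmatically, here is a minimal sketch; the file name results.trx is hypothetical and stands for the downloaded attachment:

using System;
using System.Xml.Linq;

class TrxStdOutReader
{
    static void Main()
    {
        // Namespace used by .trx files, as shown in the snippet above.
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        XDocument doc = XDocument.Load("results.trx"); // hypothetical path to the attachment

        foreach (XElement result in doc.Descendants(ns + "UnitTestResult"))
        {
            XElement stdOut = result.Element(ns + "Output")?.Element(ns + "StdOut");
            if (stdOut != null)
                Console.WriteLine($"{result.Attribute("testName")?.Value}: {stdOut.Value}");
        }
    }
}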

How to get Deployment Risk (Bluemix Devops Insights) Gate pass?

I set up a Bluemix DevOps pipeline with a DevOps Insights Gate node included. The unit test results (Mocha format) and coverage results (Istanbul format) have been uploaded in the test jobs (using the grunt-idra3 npm plugin, the same as the tutorial did ⇒ github url).
However, my gate job still fails, even though the unit tests show a 100% pass rate.
It would be much appreciated if someone could help me.
Snapshot of DevOps Insights ⇒
All unit tests passed, but the "decision for Unit Test" is still red/failed ⇒
Details of the policy & rules:
policy "Standard Mocha Test Policy"
Rule-1: Functional verification test,
Rule type: Functional verification test,
Results file format: xUnit,
Percent Passes: 100%
Rule-2: Istanbul Coverage Rule,
Rule type: Code Coverage,
Result file format: istanbul,
Minimum code coverage required: 80%
Rule-3: Mocha Unit Test Rule,
Rule type: Unit Test,
Results file format: xUnit,
Percent Passes: 100%
There seems to be a mismatch between the format specified in Rule (xUnit) and the format of the actual test results (Mocha).
Please update the rule to select "Mocha" format for Unit Tests. Then rerun the gate.
After spending almost 3 weeks on this, I finally got the DevOps Gate job all green. Thanks @Vijay Aggarwal, and everyone else who helped on this issue.
Here is what actually happened and how it was finally solved.
[Root Cause]
DevOps Insights is "environment sensitive" in the decision phase (though not in the result display). In my case, I put "STAGING" into the "Environment Name" property of the Gate job, so DevOps Insights did not properly evaluate all the test results I uploaded in both the Staging phase and the Build phase.
DevOps rules are "result format sensitive" too, so people must be careful when choosing the "reporter" for Mocha or Istanbul. In my case, I defined the gulp file as follows, but incorrectly set the result type to "mocha" in the policy rule definition.
gulp.task("test", ["pre-test"], function() {
return gulp.src(["./test/**/*.js"], {read: false})
.pipe(mocha({
reporter: "mocha-junit-reporter",
reporterOptions: {
mochaFile: './testResult/testResult-summary.xml'
}
}));
[How it was solved]
Keep the "Environment Name" field empty for the Gate job.
On the rule definition page (inside the DevOps policy page), make sure the format type of the unit test results is "xUnit".
Screenshot when the DevOps Gate finally passed