I set up a Bluemix DevOps pipeline with a DevOps Insights Gate node included. Unit test results (Mocha format) and coverage results (Istanbul format) have been uploaded in the test jobs (using the grunt-idra3 npm plugin, the same way the tutorial does ⇒github url).
However, my gate job still fails, even though the unit tests show a 100% pass rate.
Much appreciated if someone can help me.
Snapshot of DevOps Insights ⇒
All unit tests passed, but the "decision for Unit Test" is still marked red/failed ⇒
Detail of the policy & rules:
Policy: "Standard Mocha Test Policy"
- Rule 1: Functional verification test
  - Rule type: Functional verification test
  - Results file format: xUnit
  - Percent passes: 100%
- Rule 2: Istanbul Coverage Rule
  - Rule type: Code coverage
  - Results file format: Istanbul
  - Minimum code coverage required: 80%
- Rule 3: Mocha Unit Test Rule
  - Rule type: Unit test
  - Results file format: xUnit
  - Percent passes: 100%
There seems to be a mismatch between the format specified in the rule (xUnit) and the format of the actual test results (Mocha).
Please update the rule to select the "Mocha" format for unit tests, then rerun the gate.
After spending almost 3 weeks on this, I finally got the DevOps Gate job all green. Thanks @Vijay Aggarwal, and everyone else who helped with this issue.
Here is what actually happened and how it was finally solved.
[Root Cause]
DevOps Insights is "environment sensitive" in the decision phase (though not in result display). In my case, I put "STAGING" into the "Environment Name" property of the Gate job, so DevOps Insights did not properly evaluate all the test results I uploaded in both the Staging phase and the Build phase.
DevOps rules are "result format sensitive" too, so you must be careful when choosing the "reporter" for Mocha or Istanbul. In my case, I defined the gulp file as follows, but incorrectly set the result type to "mocha" in the policy rule definition.
gulp.task("test", ["pre-test"], function() {
return gulp.src(["./test/**/*.js"], {read: false})
.pipe(mocha({
reporter: "mocha-junit-reporter",
reporterOptions: {
mochaFile: './testResult/testResult-summary.xml'
}
}));
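The "pre-test" dependency referenced above is not shown in the post; presumably it is the usual gulp-istanbul instrumentation step. A minimal sketch under that assumption (the source paths are made up):

const istanbul = require("gulp-istanbul");

// Instrument the source files so the mocha run above also produces
// Istanbul coverage data; hookRequire() makes require() load the
// instrumented copies.
gulp.task("pre-test", function() {
  return gulp.src(["./src/**/*.js"])
    .pipe(istanbul())
    .pipe(istanbul.hookRequire());
});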
[How it was solved]
1. Keep the "Environment Name" field empty for the Gate job.
2. On the rule definition page (inside the DevOps policy page), make sure the format type of the unit test results is "xUnit".
Screenshot when the DevOps Gate finally passed:
Please observe:
[Error]Failed to create ref refs/tags/dryrun-master-CI_64.0.0.25914-noat-test at 330dd52a89ed97f5dcd216bcf89e04b864247053.
[Error]Failed to create ref refs/tags/dryrun-master-CI_64.0.0.25914-noat-test at 330dd52a89ed97f5dcd216bcf89e04b864247053.
Created ref refs/tags/dryrun-master-CI_64.0.0.25914-noat-test at 330dd52a89ed97f5dcd216bcf89e04b864247053.
Running the build with diagnostics enabled does not actually produce any more output in this step.
The build URL is https://dev.azure.com/Ceridian-dryrun/SharpTop/_build/results?buildId=1672629&view=logs&j=ca395085-040a-526b-2ce8-bdc85f692774&t=9ff468ea-e6fc-49e0-b3ce-f8332e9d6e3d, but I doubt it can be viewed by anyone.
I tried to reproduce it on a small repo, but apparently only this particular build is affected.
How is one supposed to troubleshoot this? I am more than willing to inspect the source code of that task, but it is not among the tasks found in https://github.com/microsoft/azure-pipelines-tasks/tree/master/Tasks. So, what can we do here?
Another weird thing is the duration of the step: it took 5 minutes to fail.
Our structure for a release in Azure DevOps is to:
1. Deploy our app to our DEV environment.
2. Kick off my Selenium (Visual Studio) tests against that environment.
3. If they pass, move to our TEST environment.
4. If they fail, hard stop.
We want to add a new piece of functionality that starts the same as above, except instead of a hard stop: 5) if the default step fails, continue to the next step; 6) the new detailed testing starts (it turns on a screen recorder).
The new detailed step has 'Agent job' settings/parameters; I have the "Run this job" section set to "Only when a previous job has failed".
My results have been that if the previous/default/basic testing passed, the detailed step is skipped, as expected.
But if the previous step fails, the following new detailed step does not kick off.
Is it possible that the step is set up so that if it fails, it hard stops and does not even evaluate the next step?
Or is it because the previous step says 'partially succeeded'? Is that basically seen as not a failure?
Yes, this is correct, because failed is equivalent to the eq(variables['Agent.JobStatus'], 'Failed') status, while partially succeeded is eq(variables['Agent.JobStatus'], 'SucceededWithIssues').
Please check here.
You may try a custom condition like:
in(variables['Agent.JobStatus'], 'Failed', 'SucceededWithIssues')
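For a YAML pipeline, a rough job-level equivalent might look like the sketch below; the job names DefaultTests and DetailedTests are made up for illustration:

jobs:
- job: DefaultTests
  steps:
  - script: echo "run the default Selenium tests here"

- job: DetailedTests
  dependsOn: DefaultTests
  # run only when the default job failed or partially succeeded
  condition: in(dependencies.DefaultTests.result, 'Failed', 'SucceededWithIssues')
  steps:
  - script: echo "turn on the screen recorder and run the detailed tests here"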
As an addition to the solution, a piece I missed was that on the 'detailed' job, the 'Trigger even when the selected stages partially succeed' option also needed to be checked, in addition to the custom condition above.
I have an MSTest test that uses the DataTestMethod attribute to dynamically generate a matrix of values to test a function with. I could describe it generally like this:
Example:
[DataTestMethod]
[DynamicData(nameof(DynamicTestData), DynamicDataSourceType.Property)]
public void Run_test_on_function_xzy(int input, int expected)
{
    // Run the test using the two input values.
}
For the purpose of discussion, I'll say DynamicTestData returns 10 value pairs, which results in 10 tests being run.
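A minimal sketch of what such a property might look like (the generated values here are made up):

using System.Collections.Generic;

// Hypothetical data source: each yielded object[] becomes one subtest,
// mapped onto the test method's (input, expected) parameters.
public static IEnumerable<object[]> DynamicTestData
{
    get
    {
        for (int i = 0; i < 10; i++)
        {
            yield return new object[] { i, i * 2 };
        }
    }
}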
Now, on the Azure DevOps side, when I run the tests in an Azure Pipeline, Azure DevOps reports only one test result, not 10. Is there a way I can modify this behavior in MSTest or Azure DevOps to report a result for each subtest at the root level?
Azure DevOps reports only one test result, not 10. Is there a way I can modify this behavior in MSTest or Azure DevOps to report a result for each subtest at the root level?
Check the pic below: on the build summary page we can see the test run, and by expanding it we can see the individual test results. We cannot report the result for each subtest at the root level; the root level shows the test run instead of the individual test results.
A certain task generates a ##[warning] and has a warning status.
It causes the final status of the stage to be an orange exclamation mark.
I want to suppress this so that the stage shows as succeeded (green check).
Is there a way to achieve this?
I've looked at the options on the task itself, but it only has ContinueOnError.
*Edit:
I'm talking about the Azure App Configuration extension.
I've even delved into the path of updating the build result via the REST API,
but unfortunately the PATCH method doesn't seem to update the build result.
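A sketch of what such an attempt might look like, using the Builds - Update Build endpoint; the organization, project, build id, and PAT variable are all placeholders:

# Hypothetical sketch: trying to PATCH the build result (placeholders throughout).
# As noted above, the "result" field does not actually get updated this way.
curl -X PATCH \
  -u ":$AZURE_DEVOPS_PAT" \
  -H "Content-Type: application/json" \
  -d '{"result": "succeeded"}' \
  "https://dev.azure.com/{organization}/{project}/_apis/build/builds/{buildId}?api-version=6.0"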
Update from OP
It's a known limitation/bug of the task:
Azure DevOps Extension: Azure AppConfiguration - Partial Complete
Currently, by default, the build result is "Failed" if anything failed to compile/build, "Partially Succeeded" if there are any unit test failures or a task with ContinueOnError checked fails, and "Succeeded" otherwise.
It causes the final status of the stage to be an orange exclamation mark.
According to your description, the task showing as "Partially succeeded" may be because you checked the "Continue on error" option.
Continue on error (partially successful)
Select this option if you want subsequent tasks in the same job to possibly run even if this task fails. The build or deployment will be no better than partially successful. Whether subsequent tasks run depends on the Run this task setting.
Please refer to this document for more info: Task control options
We have a requirement to somehow pass a dynamic runtime parameter to a pipeline task.
For example, the APPROVAL parameter below would be different for each run of the task.
This APPROVAL parameter carries the change and release number so that the task can tag the Terraform resources it creates for audit purposes.
I've been searching the web for a while with no luck in finding a solution. Is this possible in a Concourse pipeline, and is there a best practice for it?
- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  params:
    ENV: dev
    APPROVAL: test
    CHANNEL: Developement
    GITLAB_KEY: ((gitlab_key))
    REGION: eu-west-2
    TF_FOLDER: terraform/squid
  input_mapping:
    ci: ci
    tf: squid
  output_mapping:
    plan: plan
  tags:
  - dev
From https://concourse-ci.org/tasks.html:
ideally tasks are pure functions: given the same set of inputs, it should either always succeed with the same outputs or always fail.
A dynamic parameter would break that contract and produce different outputs from the same set of inputs. Could you possibly make APPROVAL an input? Then you'd maintain your build traceability. If it's a (file) input, you could then load it into a variable:
APPROVAL=$(cat <filename>)
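A minimal sketch of that approach, assuming an extra input named approval-info carrying an approval.txt file; both names are made up, and plan-terraform.yaml would also need to declare the input:

- task: plan-terraform
  file: ci/concourse-jobs/pipelines/tasks/terraform/plan-terraform.yaml
  params:
    ENV: dev
    CHANNEL: Developement
    GITLAB_KEY: ((gitlab_key))
    REGION: eu-west-2
    TF_FOLDER: terraform/squid
  input_mapping:
    ci: ci
    tf: squid
    approval-info: approval-info   # extra input carrying the change/release number
  output_mapping:
    plan: plan
  tags:
  - dev

Then, inside the task script, the value is read from the file input instead of a static param:

APPROVAL=$(cat approval-info/approval.txt)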