Importing results via the Xray API updates each test's labels - NUnit

I am using the Xray API to import an NUnit TestResult.xml results file into Jira. The scenarios being run exist in Jira as Test (Xray) issues. Each NUnit scenario has a tag that matches the Jira test key, plus other tags that are not relevant to the Jira test. When the results are imported, a new execution is created and the matching tests are added to it. The problem I am facing is that the Jira tests are updated with new labels (the ones present in NUnit).
Is it possible to disable the editing of labels for tests in Jira, so that the tests are added to the execution as-is and only their status is changed?
Steps:
Run any NUnit test containing scenarios that can be matched to Jira tests.
Import TestResults.xml using the "rest/raven/1.0/import/execution/nunit/multipart" endpoint.
A new execution is created in Jira, and existing tests are matched by their key and added to the execution.
Notice that the execution's tests in Jira are updated with labels taken from the TestResults.xml file, plus additional labels generated from the test name and any error screenshot names.
My info.json file:
{
  "fields": {
    "project": {
      "key": "SB"
    },
    "summary": "Automatic result import from automation run",
    "issuetype": {
      "name": "Test Execution"
    }
  }
}
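For reference, this is roughly how the two files are sent to that endpoint; a minimal C# sketch, assuming a placeholder Jira base URL and basic-auth credentials. The "file" and "info" part names follow Xray's multipart import documentation, so verify them against your Xray version.
using System;
using System.IO;
using System.Net.Http;
using System.Net.Http.Headers;
using System.Text;
using System.Threading.Tasks;

class XrayImport
{
    static async Task Main()
    {
        using var client = new HttpClient();

        // Placeholder credentials -- replace with a real user or token for your Jira instance.
        var credentials = Convert.ToBase64String(Encoding.ASCII.GetBytes("user:password"));
        client.DefaultRequestHeaders.Authorization = new AuthenticationHeaderValue("Basic", credentials);

        // The NUnit results file and the info.json shown above are sent as multipart form parts.
        using var form = new MultipartFormDataContent();
        form.Add(new ByteArrayContent(File.ReadAllBytes("TestResults.xml")), "file", "TestResults.xml");
        form.Add(new ByteArrayContent(File.ReadAllBytes("info.json")), "info", "info.json");

        // "https://jira.example.com" is a placeholder for your Jira base URL.
        var response = await client.PostAsync(
            "https://jira.example.com/rest/raven/1.0/import/execution/nunit/multipart", form);
        Console.WriteLine($"{(int)response.StatusCode}: {await response.Content.ReadAsStringAsync()}");
    }
}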
SpecFlow scenario that is executed:
@Regression @SB_110325
@Web @ResponsiveDesktop
Scenario: Favorites for Log in user
  Given Home page is open
  And I login successfully
Updated Jira test after the result was imported:
Notice that only Regression_pack was the original label.
Update: Currently this is not possible. I have reported it to the developers and an improvement task has been created.
If other people need this implemented, they can vote for it here: External link, so that it gets picked up and implemented by the developers.

Botium CLI tests for rasa chatbot passing for any random convo (that should not pass)

I'm trying to set up Botium CLI with my Rasa chatbot for automated integration testing and dialog flow tests. However, Botium passes tests that do not describe a conversation flow that would even be possible with my chatbot.
I'm using it with botium-connector-rasa, and this is my botium.json config file:
{
  "botium": {
    "Capabilities": {
      "PROJECTNAME": "change me later",
      "CONTAINERMODE": "rasa",
      "RASA_MODE": "DIALOG_AND_NLU",
      "RASA_ENDPOINT_URL": "http://localhost:8080/"
    },
    "Sources": {},
    "Envs": {}
  }
}
When I run botium-cli with --convos pointing to my folder of .convo.txt files, it passes tests that should have failed.
.convo.txt file:
Test case 02: Robots' hell
# me
random question
# bot
random answer
Command used for running the tests:
botium-cli run --config botium.json --convos ./convos/
The output is this
What is going on? Why is Botium passing my random tests when it should have failed them?
I've tried talking to the bot using the emulator: if I run botium-cli emulator it works properly and I can communicate with my chatbot as expected.
The issue was in the syntax of the .convo.txt files.
I just had to remove the space between the # and me/bot. The example convo above should instead look like this:
Test case 02: Robots' hell
#me
random question
#bot
random answer

Printing the Console output in the Azure DevOps Test Run task

I am doing some initial one-off setup in a [BeforeTestRun] hook for my SpecFlow tests. It checks whether certain users exist and, if they don't, creates them with specific roles and permissions so the automated tests can use them. The function that does this prints a lot of useful information via Console.WriteLine.
When I run the tests on my local system I can see the output of this hook function on the main feature file, and the output of each scenario under each of them. But when I run the tests via the Azure DevOps pipeline, I am not sure where to find the output of [BeforeTestRun], because it is not bound to a particular test scenario. The console of the Run Tests task has no information about it.
Can someone please help me get this output to show somewhere so I can act on it accordingly?
I tried System.Diagnostics.Debug.Print, System.Diagnostics.Debug.WriteLine and System.Diagnostics.Trace.WriteLine, but nothing seems to show up in the pipeline console.
[BeforeTestRun]
public static void BeforeRun()
{
    Console.WriteLine(
        "Before Test run analyzing the users and their needed properties for performing automation run");
}
I want this output to be visible somewhere so I can act on that information if needed.
It's not possible for the console logs.
The product currently does not support printing console logs for passing tests and we do not currently have plans to support this in the near future.
(Source: https://developercommunity.visualstudio.com/content/problem/631082/printing-the-console-output-in-the-azure-devops-te.html)
However, there's another way:
Your build will have an attachment with the file extension .trx. This is an XML file that contains an Output element for each test (see also https://stackoverflow.com/a/55452011):
<TestRun id="[omitted]" name="[omitted] 2020-01-10 17:59:35" runUser="[omitted]" xmlns="http://microsoft.com/schemas/VisualStudio/TeamTest/2010">
  <Times creation="2020-01-10T17:59:35.8919298+01:00" queuing="2020-01-10T17:59:35.8919298+01:00" start="2020-01-10T17:59:26.5626373+01:00" finish="2020-01-10T17:59:35.9209479+01:00" />
  <Results>
    <UnitTestResult testName="TestMethod1">
      <Output>
        <StdOut>Test</StdOut>
      </Output>
    </UnitTestResult>
  </Results>
</TestRun>
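If you want to pull that output out automatically rather than opening the .trx by hand, a minimal sketch along these lines reads each test's StdOut element. It assumes the attachment has been downloaded locally as results.trx, a file name chosen here purely for illustration.
using System;
using System.Xml.Linq;

class TrxOutputDump
{
    static void Main()
    {
        // Namespace taken from the TestRun element shown above.
        XNamespace ns = "http://microsoft.com/schemas/VisualStudio/TeamTest/2010";
        var doc = XDocument.Load("results.trx");

        // Print the captured standard output for every test result that has one.
        foreach (var result in doc.Descendants(ns + "UnitTestResult"))
        {
            var stdOut = result.Element(ns + "Output")?.Element(ns + "StdOut");
            if (stdOut != null)
                Console.WriteLine($"{result.Attribute("testName")?.Value}: {stdOut.Value}");
        }
    }
}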

Heroku Review Apps not deploying at all

I'm trying to automatically create review apps as part of my pipeline and testing procedure when pull requests are created on the corresponding GitHub repository. When a PR is created, it shows up in the pipeline as a review app, but the app never actually gets created.
In the DevTools console there is a 404 error about review-app-config. I'm not sure whether this is directly related, as I've successfully created a review app on a different pipeline (with a different owner) that shows the same error.
This 404 error alternates between the file not being available at all and the request returning an error. When it's the latter, the response contains the following:
{"id":"missing_version","error":"Please specify a version along with Heroku's API MIME type. For example, `Accept: application/vnd.heroku+json; version=3`.\n"}
I'm creating and managing all of the apps/pipelines with the GUI on dashboard.heroku.com. The version Accept header appears to be needed for the Heroku API, but I've no idea how to supply it. Any help would be greatly appreciated!
Firstly, check that your app.json file is valid JSON. If it isn't, that will cause the deployment to fall over.
Secondly, check whether you have anything in the scripts key of app.json. If you do and the scripts are incorrect, this will also cause the deployment to hang and fall over with no warning displayed.
{
  "name": "App name",
  "scripts": {
    "deploy": "command that won't work!!"
  },
  ...
}
You may not need any scripts here, so it can also be empty:
{
  "name": "App name",
  "scripts": {},
  ...
}

How to get the Deployment Risk (Bluemix DevOps Insights) gate to pass?

I set up a Bluemix DevOps pipeline with a DevOps Insights Gate node included. The unit test results (Mocha format) and coverage results (Istanbul format) have been uploaded in the test jobs (using the grunt-idra3 npm plugin, just as the tutorial does ⇒ github url).
However, my gate job still fails, even though the unit tests show 100% passed.
I'd much appreciate it if someone could help me.
Snapshot of DevOps Insights ⇒
All unit tests passed, but the "decision for Unit Test" is still red/failed ⇒
Detail of the policy & rules:
Policy "Standard Mocha Test Policy"
Rule 1: Functional verification test
  Rule type: Functional verification test
  Results file format: xUnit
  Percent passes: 100%
Rule 2: Istanbul Coverage Rule
  Rule type: Code Coverage
  Results file format: istanbul
  Minimum code coverage required: 80%
Rule 3: Mocha Unit Test Rule
  Rule type: Unit Test
  Results file format: xUnit
  Percent passes: 100%
There seems to be a mismatch between the format specified in the rule (xUnit) and the format of the actual test results (Mocha).
Please update the rule to select the "Mocha" format for unit tests, then rerun the gate.
After spending almost 3 weeks on this, I finally got the DevOps gate job all green. Thanks @Vijay Aggarwal, and everyone else who helped with this issue.
Here is what actually happened and how it was finally solved.
[Root Cause]
DevOps Insights is "environment sensitive" in the decision phase (though not in the results display). In my case, I put "STAGING" into the "Environment Name" property of the Gate job, so DevOps Insights did not properly evaluate all the test results I had uploaded in both the Staging phase and the Build phase.
DevOps rules are "result format sensitive" too, so be careful when choosing the "reporter" for Mocha or Istanbul. In my case, I defined the gulp file as follows, but incorrectly set the result type to "mocha" in the policy rule definition.
gulp.task("test", ["pre-test"], function() {
return gulp.src(["./test/**/*.js"], {read: false})
.pipe(mocha({
reporter: "mocha-junit-reporter",
reporterOptions: {
mochaFile: './testResult/testResult-summary.xml'
}
}));
[How it is solved]
Keep the "Environment Name" field empty for the Gate job.
In the rule definition page (inside the DevOps policy page), make sure the format type of the unit test results is "xUnit".
Screenshot of the DevOps gate finally passing:

TeamCity not showing service messages with PowerShell

I'm running a project configuration using PowerShell/psake with the TeamCity PowerShell module (https://github.com/JamesKovacs/psake-contrib/wiki/teamcity.psm1), yet TeamCity only shows the configuration as "Running".
However, the build log clearly displays all of the service messages:
[15:41:34]WARNING: Some imported command names include unapproved verbs which might make
[15:41:34]them less discoverable. Use the Verbose parameter for more detail or type
[15:41:34]Get-Verb to see the list of approved verbs.
[15:41:34]##teamcity[progessMessage 'Running task Clean']
[15:41:34]Executing Clean
[15:41:34]running the build
[15:41:34]##teamcity[progessMessage 'Running task Build']
[15:41:34]Executing Build
Am I wrong to think these should be showing up in the project status instead of just "Running"?
It is a typo in the generated message ("progessMessage" instead of "progressMessage"). I just created a pull request with a fix: https://github.com/JamesKovacs/psake-contrib/pull/1