Jenkins NUnit/XUnit Test Descriptions: How do I import them into the results?

Greetings and salutations:
I am trying to get the test descriptions, which I have verified are present in the test results, to show up when a user clicks on a given test result. Example: I have a test "My_Test_One" that has a description of "This is test one". When the Jenkins user clicks on the test result and drills down to My_Test_One, they should see that description. How do I get the description into Jenkins?
I have been looking at both of the following plugins for a solution to this problem:
Jenkins NUnit Plugin
xUnit Plugin
After a few days of searching the Jenkins JIRA site and many Google searches, I have to admit that I am stumped. Any assistance any of you can offer would be appreciated.

It appears that the xUnit plugin strips out the descriptions.
I'm using NUnit, and I do have a Description attribute on my tests.
My XML output from NUnit shows the description, but when the xUnit plugin aggregates my test results, it strips the description field out.
NUnit also supports PropertyAttributes, but the xUnit plugin strips these out as well.
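For reference, here is roughly what a single test-case entry in the raw NUnit 2.x result XML contains (a sketch; exact attributes vary by NUnit version, and the property name here is just an example):

    <test-case name="MyTests.My_Test_One" description="This is test one"
               executed="True" result="Success" success="True" time="0.016" asserts="1">
      <properties>
        <property name="Priority" value="High" />
      </properties>
    </test-case>

Both the description attribute and the properties element are gone after the xUnit plugin converts the file.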
I've also tried using TestCaseSourceAttributes to modify the names of the tests on the fly, but that resulted in the tests not showing up in the test results at all.
Since we are fairly set on using Jenkins for our test runs (we already use it for all builds and environment maintenance), I'll be looking at taking our raw NUnit XML output and building some SSRS reports from it for the testers instead.

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements coverage measuring. All my searches for coverage with pytest end up at line coverage for Python, and that is not what I want.
Actually I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do would be to add an empty test case in pytest and mark it as skipped (or some "not tested" state).
After running the tests, a report could be generated showing the total coverage, with links. This would require the information to be dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers which could help do this, or of projects which have had a similar idea?
We are using a marker which we created ourselves:
@pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script, using some pytest hooks: collecting the marker, checking Jira via the Python API, and creating metrics out of it (we are using Testspace). A sketch of the marker and hook is shown after the list below.
We have not found a way to add this marker to the junit report.
Linking to Jira can be done in different ways:
- use the Sphinx docs and link the Jira id automatically from the test case description to Jira
- use the Python script, which analyzes the requirements coverage and creates a link to Jira
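A minimal sketch of what the custom marker and collection hook might look like; the "reqcov" marker name and the reqcov.json output file are our own inventions, not anything built into pytest:

    # conftest.py
    import json

    def pytest_configure(config):
        # Register the marker so pytest does not warn about an unknown mark.
        config.addinivalue_line(
            "markers", "reqcov(req_id): link a test to a requirement (e.g. a Jira id)"
        )

    def pytest_collection_modifyitems(config, items):
        # Map each requirement id to the tests that claim to cover it.
        coverage = {}
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker:
                req_id = marker.args[0]
                coverage.setdefault(req_id, []).append(item.nodeid)
        # Dump the mapping for the post-processing script (Jira lookup, metrics, ...).
        with open("reqcov.json", "w") as fh:
            json.dump(coverage, fh, indent=2)

A test then just carries the marker:

    import pytest

    @pytest.mark.reqcov("JIRA-123")
    def test_feature_x():
        ...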
hope that helps

How to display junit test reports in concourse in a usable/interactive way?

The company where I work is evaluating different CI/CD systems; we tried GoCD (v17.4), Jenkins 2 (v2.7) and Concourse (v3.2.1).
We liked Concourse, but a big downside was that the test reports were not displayed in a usable way. I asked in the Slack chat and was told that Concourse shows the console output, respecting ANSI colors, if any...
...but the thing is, XML test reports contain a lot more information than just a red color for failing tests, and we need to use that information.
I created a failing test, and Jenkins has a nice plugin that groups all the tests, shows extra info/metrics, and collects the failing tests so you can spot them at once. It also keeps a history of test results.
In Concourse, without a test reporter, one has to scroll through a log to see all the failing tests... my colleagues are concerned about this.
Is there a way in Concourse to parse a JUnit XML test report and show it in the UI in a usable/interactive (clickable) way, as Jenkins does?
From what I have learnt, Concourse has no plugins and is simple by design, so it seems the answer is: "No, there isn't: you can just see the console logs as-is". But if I'm wrong, please let me know... Thanks
Concourse deliberately doesn't treat any type of output specially.
Concourse is made to be generic, so that there aren't highly specialized, unrepeatable deployments of it.
Jenkins is specialized to solve these types of issues, to the extent that it has deep integration for displaying custom output in its UI.
It sounds like Jenkins solves all your use cases. I wouldn't try to hammer Concourse into this use case.
Concourse is minimal in that way. It is made to run tasks in a pipeline configuration, and to do so in an atomic container setup. That is also why it does not store build artifacts and so on. It forces you to do the right thing and save everything you need elsewhere, like buckets etc. Push the XML to a service or store it in a bucket for a tool to use later on.
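A sketch of the "store it in a bucket" approach using the standard s3 resource; the bucket name, credential variables, task file and output paths are all placeholders:

    resources:
      - name: test-reports
        type: s3
        source:
          bucket: my-test-reports
          regexp: reports/junit-(.*).xml
          access_key_id: ((aws_access_key))
          secret_access_key: ((aws_secret_key))

    jobs:
      - name: run-tests
        plan:
          - get: source-code
          - task: test
            file: source-code/ci/test.yml   # the task writes JUnit XML into its output dir
          - put: test-reports
            params:
              file: test-output/junit-*.xml

A separate tool then pulls the reports from the bucket and owns the presentation.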

Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce

I am automating deployment and CI to our Salesforce orgs using Ant. In my build XML, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage on a per-file basis rather than allowing 75% based on the total code base. Some of our old files do not have that level of coverage, and I am trying not to have to go back and create a ton of new tests for old software.
It seems like Salesforce is doing the code coverage based on the quickdeploy model, rather than the aggregate.
Does anyone know a way I can tell Salesforce not to use the quickdeploy model (if that is what it is doing)? I have checked the Migration Tool docs, but don't see anything.
Thanks...
Have you tried setting the attribute runAllTests="true" inside the sf:deploy task, rather than listing each test out?
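A sketch of what that might look like in the build file; the property names and deploy root are placeholders from a typical Force.com Migration Tool setup:

    <sf:deploy username="${sf.username}"
               password="${sf.password}"
               serverurl="${sf.serverurl}"
               deployRoot="src"
               runAllTests="true"/>

If I recall correctly, when all tests run, the 75% requirement applies to the aggregate coverage rather than per class covered by a specified test list.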

Show only specific Tests or TestFixtures in NUNIT via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers" and they don't all need to see every test. For example, I have engineers using low-level tests, and I have a QA department that is using higher-level tests. If I could have a configuration (XML?) file that I distributed with the dll, that would be ideal. Can someone point me to the documentation and an example? I did search the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be to have a configuration file that can apply [Ignore] to various tests or test fixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave
Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly. If that is the case, you may wish to break up the test assembly. Here is the documentation on the NUnit ProjectEditor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10
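A sketch of a .nunit project file along those lines, assuming the low-level and high-level tests have been split into separate assemblies (all names here are placeholders); you would ship each customer a project file with a different active config:

    <NUnitProject>
      <Settings activeconfig="QA" />
      <Config name="Engineering">
        <assembly path="LowLevelTests.dll" />
        <assembly path="HighLevelTests.dll" />
      </Config>
      <Config name="QA">
        <assembly path="HighLevelTests.dll" />
      </Config>
    </NUnitProject>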

Writing Logs/results or generating reports using Selenium C# API

I am testing a web application using Selenium RC. Everything works fine; I have written many test cases and execute them using NUnit.
The hurdle I am now facing is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.
Because you're using NUnit, you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line, or from NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt, you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them to the XML output.
Selenium itself doesn't have report because it is only a library used by many different languages.
For anyone else happening randomly onto this question, the 'nunit2report' task is now available in NAntContrib.
NAntContrib nunit2report task
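A rough sketch of how the two tasks might fit together in a NAnt build file; the assembly name, result file name and output paths are placeholders:

    <nunit2>
      <formatter type="Xml" usefile="true" extension=".xml" outputdir="results" />
      <test assemblyname="MyTests.dll" />
    </nunit2>
    <nunit2report out="results/report.html">
      <fileset>
        <include name="results/MyTests.dll-results.xml" />
      </fileset>
    </nunit2report>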