Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements coverage measuring. All the searches on coverage for pytest end up in line coverage for Python, and that is not what I want.
Actually I'm not testing Python code at all, but remotely testing features on an embedded target.
Now my idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do is to add an empty test case in pytest and mark it as skipped (or otherwise flagged as not yet tested).
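A minimal sketch of the idea, assuming a custom marker named requirement and the target fixture from the LabGrid pytest plugin (the marker name, the Jira URLs and the ShellDriver call are just illustrations, not an existing pytest feature):

    import pytest

    # register the marker in pytest.ini / conftest.py to avoid warnings:
    #   markers = requirement(url): link a test to a requirement
    requirement = pytest.mark.requirement

    @requirement("https://jira.example.com/browse/PROJ-123")
    def test_boot_message(target):
        # real remote check against the embedded target goes here,
        # e.g. via a LabGrid ShellDriver
        output = target.get_driver("ShellDriver").run_check("cat /etc/os-release")
        assert "example-distro" in "\n".join(output)

    # requirement identified, but no test written yet: placeholder kept as skipped
    @requirement("https://jira.example.com/browse/PROJ-124")
    @pytest.mark.skip(reason="requirement PROJ-124 not yet covered by a test")
    def test_watchdog_reset(target):
        ...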
After running the tests, a report could be generated showing the total coverage, with links. This would require the marker information being dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, expressed directly in the test code.
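One possible route for getting the marker into junit.xml (a sketch only, using the illustrative requirement marker from above) is pytest's user_properties, which the junit writer can emit as <property> elements, depending on the junit_family setting:

    # conftest.py
    def pytest_collection_modifyitems(config, items):
        for item in items:
            for marker in item.iter_markers(name="requirement"):
                # shows up as <property name="requirement" value="..."/> in junit.xml
                item.user_properties.append(("requirement", marker.args[0]))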
Does anybody know of markers which could help do this, or even of projects which have had a similar idea?

We are using a marker which we created ourselves:
pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script. Using some pytest hooks, it collects the markers, checks JIRA via its Python API, and creates metrics out of it (we are using Testspace).
We have not found a way to add this marker to the junit report.
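A stripped-down sketch of that hook-based collection step could look like this (the JIRA lookup and the Testspace upload are left out; only standard pytest hooks are used):

    # conftest.py
    requirements_seen = {}

    def pytest_collection_modifyitems(config, items):
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker:
                requirements_seen.setdefault(marker.args[0], []).append(item.nodeid)

    def pytest_terminal_summary(terminalreporter):
        # print a small coverage summary at the end of the run
        terminalreporter.write_line(
            f"requirements referenced by tests: {len(requirements_seen)}")
        for req, tests in sorted(requirements_seen.items()):
            terminalreporter.write_line(f"  {req}: {len(tests)} test(s)")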
Linking to JIRA can be done in different ways:
- using the Sphinx documentation and linking the Jira id automatically from the test case description to Jira (see the conf.py sketch after this list), or
- using the Python script which analyzes the requirements coverage and creates a link to Jira.
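For the Sphinx route, sphinx.ext.extlinks can turn a Jira id into a link automatically; a minimal conf.py sketch with a placeholder base URL (caption syntax as in Sphinx 4+):

    # conf.py
    extensions = ["sphinx.ext.extlinks"]

    # allows writing :jira:`JIRA-123` in docstrings/docs and getting a link to the issue
    extlinks = {
        "jira": ("https://jira.example.com/browse/%s", "%s"),
    }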
hope that helps

Related

How to make a Github action pipeline to build, test (gtest), and document (doxygen?) C++

I want to create a pipeline on GitHub for a C++ project that will build, test, and document it. The project is supposed to be compiled with GNU Make, but for now it can be done using CMake, as I can change it later. I want it to run tests using Google Test and also automatically create documentation (I've used Doxygen in the past, which nicely makes HTML-formatted documentation from your comments).
I've tried to get this working and used a bunch of different YAML files I've found online, but I can't get it working exactly right. The best I've been able to do is get it to build and run the tests, but I can't get the automatic documentation to work. Doxygen relies on a Doxyfile for its configuration, but I'm not sure of a simple way to set one up (what I've found online seems overly complicated for what I want). I'm open to using a different method for automatically generating documentation if there's one that would work better.

Do we have any sample ui integration tests for vscode extensions

I am trying to write e2e integration tests for a VS Code extension. I didn't find any UI integration test samples. Can you please provide links, if there are any?
I recommend using extensions/vscode-api-tests/src/singlefolder-tests/editor.test.ts in the vscode sources as a starting point for integration tests. If that particular test isn't quite what you want, there are a bunch of tests adjacent to it that might be.
See also this answer I gave to a related question about using the API from within tests.

Is it possible to specify aggregate code coverage testing when deploying with a list of tests to Salesforce

I am automating deployment and CI to our Salesforce orgs using Ant. In my build.xml, I am specifying the complete list of our tests to run. Salesforce is returning code coverage errors, demanding 75% code coverage on a per-file basis rather than allowing 75% based on the total code base. Some of our old files do not have that level of coverage, and I am trying not to have to go back and create a ton of new tests for old software.
It seems like Salesforce is doing the code coverage based on the quickdeploy model, rather than the aggregate.
Does anyone know a way I can tell Salesforce not to use the quickdeploy model (if that is what it is doing)? I have checked the Migration Tool docs, but don't see anything.
Thanks...
Have you tried setting the attribute runAllTests="true" inside the sf:deploy task, rather than listing each test out?

Jenkins NUnit/XUnit Test Descriptions: How do I import them into the results?

Greetings and salutations:
I am looking for a way to make sure the test descriptions, which I have verified make it into the test results, get shown when you click on a given test result. Example: I have a test "My_Test_One" that has a description of "This is test one". When the Jenkins user clicks on the test result and drills down to My_Test_One, they should see the description. How do I get that description into Jenkins?
I have been looking in both of the following plugins for a solution to this problem:
Jenkins NUnit Plugin
xUnit Plugin
After a few days of looking in the Jenkins JIRA site and many Google searches I have to admit that I am stumped. Any assistance any of you have would be appreciated.
It appears that the xUnit plugin strips out the descriptions.
I'm using NUnit, and correctly have a description attribute on my tests.
My XML output from NUnit shows the description, but when xUnit aggregates my test results, it strips the description field out.
NUnit also supports PropertyAttributes, but the xUnit plugin strips these out as well.
I've also tried to use TestCaseSourceAttributes to somehow modify the name of the tests on the fly, but this resulted in the tests not even showing up in the test results.
As we are fairly set on using Jenkins for our test runs (as we are using it for all builds and environment maintenance) I'll be looking to take our raw NUnit XML output and make some SSRS reports out of it for testers instead.

Writing Logs/results or generating reports using Selenium C# API

I am testing a web application using Selenium RC. Everything works fine; I have written many test cases and am executing them using NUnit.
Now the hurdle that I am facing is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.
Because you're using NUnit you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line, or NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them against the XML output.
Selenium itself doesn't have reporting because it is only a library used from many different languages.
For anyone else happening randomly into this question, the 'nunit2report' task is now available in NAntContrib.
NAntContrib NUnit2Report task