Writing logs/results or generating reports using the Selenium C# API (NUnit)

I am testing a web application using Selenium RC. Everything works fine: I have written many test cases and I execute them with NUnit.
The hurdle I am facing now is how to keep track of failures and how to generate reports.
Please advise on the best way to capture this.

Because you're using NUnit, you'll want to use NUnit's reporting facilities. If the GUI runner is enough, that's great. But if you're running from the command line or from NAnt, you'll want to use the XML output; take a look at the NAnt documentation for more information.
If you're using NAnt you'll want to look at NUnit2Report. It's no longer maintained, but it may suit your needs. Alternatively, you could extract its XSLT files and apply them against the XML output.
Selenium itself doesn't produce reports, because it is only a library with bindings for many different languages.
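For example, with the NUnit 2.x console runner the XML results file is produced on the command line like this (a sketch only; the assembly name is a placeholder for your test DLL):

    nunit-console.exe MySeleniumTests.dll /xml:TestResult.xml

The resulting TestResult.xml is what NAnt, NUnit2Report, or your own XSLT would then consume.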

For anyone else happening randomly into this question, the 'nunit2report' task is now available in NAntContrib.
NAntContrib nunit2report task
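A build-file fragment along these lines gives the idea (a sketch only; the exact attribute names vary between NAntContrib versions, so check the task documentation):

    <nunit2report todir="reports" out="index.html">
        <fileset>
            <include name="TestResult.xml" />
        </fileset>
    </nunit2report>

This turns the XML results written by the console runner (or the nunit2 task) into an HTML report.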

Related

Requirements coverage using pytest

We use LabGrid for our testing, which is based on pytest. I would like to do some requirements coverage measuring. All my searches for coverage with pytest end up at line coverage for Python code, and that is not what I want.
Actually I'm not testing Python code at all, but remotely testing features on an embedded target.
My idea was to create a marker for each test function with a URL to a requirement (e.g. in Jira). When a requirement is identified, the first thing to do would be to add an empty test case in pytest and mark it as skipped (or as "not tested").
After running the tests, a report could be generated showing the total coverage, with links. This would require the information to be dumped into the junit.xml file.
In this way I get a tight link between the test and the requirement, bound in the test code.
Does anybody know of markers that could help with this, or of projects that have pursued a similar idea?
We are using a marker that we created ourselves:
@pytest.mark.reqcov("JIRA-123")
Afterwards we analyze the test run with a self-written script: using some pytest hooks, it collects the markers, checks JIRA via its Python API, and creates metrics out of it (we are using Testspace). We have not found a way to add this marker to the JUnit report.
Linking to JIRA can be done in different ways:
- use Sphinx documentation and link the Jira ID automatically from the test case description to Jira, or
- use the Python script that analyzes the requirements coverage and creates a link to Jira.
Hope that helps.
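As an illustration, a minimal conftest.py sketch of such a hook setup could look like the following. The marker name reqcov, the JIRA ID format, and the output file are assumptions taken from this answer, not a standard pytest feature:

    # conftest.py - minimal sketch of a self-written requirements-coverage hook.
    import pytest

    collected_requirements = {}

    def pytest_configure(config):
        # Register the custom marker so pytest does not warn about it.
        config.addinivalue_line(
            "markers", "reqcov(jira_id): link a test to a JIRA requirement")

    def pytest_collection_modifyitems(config, items):
        # Collect the JIRA id attached to each test via @pytest.mark.reqcov("JIRA-123").
        for item in items:
            marker = item.get_closest_marker("reqcov")
            if marker and marker.args:
                collected_requirements.setdefault(marker.args[0], []).append(item.nodeid)

    def pytest_sessionfinish(session, exitstatus):
        # Write a very small coverage summary; a real script would query JIRA here.
        with open("requirements_coverage.txt", "w") as fh:
            for jira_id, tests in sorted(collected_requirements.items()):
                fh.write("%s: %d test(s)\n" % (jira_id, len(tests)))

A test is then linked to a requirement like this:

    @pytest.mark.reqcov("JIRA-123")
    def test_feature_x():
        ...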

Are there any sample UI integration tests for VS Code extensions?

I am trying to write end-to-end integration tests for a VS Code extension. I couldn't find any UI integration tests. Can you please provide links, if any exist?
I recommend using extensions/vscode-api-tests/src/singlefolder-tests/editor.test.ts in the vscode sources as a starting point for integration tests. If that particular test isn't quite what you want, there are a bunch of tests adjacent to it that might be.
See also this answer I gave to a related question about using the API from within tests.
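For orientation, those tests are Mocha suites that drive the live VS Code API from inside an Extension Development Host. A minimal sketch in that style (this is not the contents of the real editor.test.ts, just an assumed example) looks like this:

    import * as assert from 'assert';
    import * as vscode from 'vscode';

    suite('editor integration sketch', () => {
        test('opens a document and reads it back', async () => {
            // Open an untitled document with known content and show it in an editor.
            const doc = await vscode.workspace.openTextDocument({ content: 'hello world' });
            const editor = await vscode.window.showTextDocument(doc);
            assert.strictEqual(editor.document.getText(), 'hello world');
        });
    });

Tests like this have to be launched through the extension test runner so that the vscode module is available to them.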

Show only specific tests or TestFixtures in NUnit via a configuration file or another way

I have a bunch of NUnit tests in several TestFixtures. Currently, I just display all the tests for everyone. Is there a way to hide some tests and/or test fixtures? I have various "customers" and they don't all need to see every test. For example, I have engineers using low-level tests, and a QA department using higher-level tests. If I could have a configuration (XML?) file that I distributed with the DLL, that would be ideal. Can someone point me to the documentation and an example? I did search the NUnit site and did not see anything.
I am aware of the [Ignore] attribute, and I suppose a somewhat acceptable solution would be a configuration file that can apply Ignore to various tests or TestFixtures. I'd hand out a different version of the configuration file to each customer. At least that way certain customers would not be able to run certain tests.
I'm using version 2.5.5
Ideas?
Thanks,
Dave
Yes - if the tests are in separate assemblies, this can be accomplished by proper configuration of your NUnit projects. However, this is not an option if the tests are in one large test assembly; in that case, you may wish to break up the test assembly. Here is the documentation on the NUnit ProjectEditor: http://www.nunit.org/index.php?p=projectEditor&r=2.2.10
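For illustration, a hypothetical .nunit project file with one configuration per audience might look like this (the assembly names are placeholders, and the format shown is the NUnit 2.x project file format):

    <NUnitProject>
        <Settings activeconfig="QA" />
        <Config name="Engineering">
            <assembly path="LowLevelTests.dll" />
            <assembly path="HighLevelTests.dll" />
        </Config>
        <Config name="QA">
            <assembly path="HighLevelTests.dll" />
        </Config>
    </NUnitProject>

You would then hand each customer the project file (or active configuration) that lists only the assemblies they should see.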

Using a build system for reproducible research?

I am doing a research project that involves a pipeline of programs, each generating an output file that becomes the input for the next program. I would like to make it easy to repeat the series of commands that I used to create the desired output. It seems like make or any other build system would be a good fit for this task, but all the build systems that I've looked at (except maybe make itself) seem to be strongly biased toward building executable files from source code, and I can't figure out how to do anything else with them. Does anyone have experience using a build system for tasks other than compiling source code into executables? Can I easily use a build system to facilitate reproducible research, or should I be looking for a different kind of tool?
Well, I figured this out by myself eventually. I'm using plain old (GNU) Makefiles.
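For anyone looking for a starting point, a pipeline Makefile boils down to one rule per generated file, listing its inputs as prerequisites. The file and script names below are placeholders, not from the original question:

    # Makefile sketch: each result is rebuilt only when its inputs change.
    # (Recipe lines must be indented with a tab, not spaces.)
    all: results/figure.png

    data/cleaned.csv: data/raw.csv clean.py
    	python clean.py data/raw.csv > data/cleaned.csv

    results/analysis.csv: data/cleaned.csv analyze.py
    	python analyze.py data/cleaned.csv > results/analysis.csv

    results/figure.png: results/analysis.csv plot.py
    	python plot.py results/analysis.csv results/figure.png

    .PHONY: all

Running make then re-runs only the steps whose inputs have changed, which gives you the reproducible, repeatable pipeline described in the question.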

Using NUnit to test scripts and show output

I am learning how to use NUnit to test some scripts written within ASP.NET in C#. Does anyone know how to tell whether a script has passed or failed in NUnit, i.e. what the test result output looks like?
Or does anyone know any good websites, where I can read up on learning how to use NUnit?
http://www.nunit.org/index.php?p=quickStart&r=2.4
http://nunitasp.sourceforge.net/tutorial/index.html
Lots more links via Google. Your question does not contain enough information; 'scripts written within asp.net' is very vague.
When using Visual Studio with NUnit, open the Test Explorer window. There you'll see a pass/fail result for each test as well as its output.
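As a minimal illustration (NUnit 2.x-style attributes; the class and method names are made up), a test reports pass or fail through its assertions:

    using NUnit.Framework;

    [TestFixture]
    public class ExampleTests
    {
        [Test]
        public void TwoPlusTwoIsFour()
        {
            // The runner (GUI, console, or Test Explorer) shows this test as
            // passed if the assertion holds and as failed if it throws.
            Assert.AreEqual(4, 2 + 2);
        }
    }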