Running NCover from code - NUnit

Is it possible to run NCover automatically from code instead of running NCover manually or via command line?
Here is the scenario: I have written a few tests; I execute all the tests, and after the tests are completed, NCover should run automatically for that particular test project and store the coverage report as XML in a given location.
Is this possible to do? Kindly help.

Running NCover from the command line was the only option with NC3. With NC4 the default works like this: you create a project, the NCover service watches for a process to start that meets the match rules defined in the project, and it then collects coverage on that process.
This doc may be of some help: http://www.ncover.com/support/docs/desktop/user-guide/coverage_scenarios/how_do_i_collect_data_from_nunit
If you have more questions, please reach out to us at support@ncover.com.

Store results of unit test run into variables

I have a TeamCity build configuration that builds a C# project, runs some unit tests, and then does some extra things. My question is: Can I get information about my unit test run stored into build configuration variables (i.e. how many tests were run, how many were successful, how many failed, how many were skipped) so that I can then check these variables in a PowerShell script in later build steps and perform different actions depending on how many tests have passed?
AFAIK the best way is to ask for this information directly from the TeamCity server using its REST API (pay attention: the build locator can be a little tricky to find if the build is still running).
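For example, a minimal PowerShell sketch could look like this (the server URL, build id, credentials, and the statistic property names such as PassedTestCount/FailedTestCount are assumptions you should verify against your TeamCity version):
# Minimal sketch - query TeamCity for the test statistics of a build via the REST API.
# Server URL, build id and property names below are assumptions; adjust to your setup.
$serverUrl = "http://teamcity.example.local"
$buildId   = 12345                # e.g. taken from %teamcity.build.id%
$cred      = Get-Credential       # or use whatever authentication your server requires
$stats = Invoke-RestMethod -Uri "$serverUrl/httpAuth/app/rest/builds/id:$buildId/statistics" -Credential $cred
$passed  = ($stats.properties.property | Where-Object { $_.name -eq 'PassedTestCount' }).value
$failed  = ($stats.properties.property | Where-Object { $_.name -eq 'FailedTestCount' }).value
$ignored = ($stats.properties.property | Where-Object { $_.name -eq 'IgnoredTestCount' }).value
Write-Host "Passed: $passed  Failed: $failed  Ignored: $ignored"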
On the other hand, you can parse your NUnit test result file (or files, if you run more than one NUnit test runner step in your build) on your build agent machine.
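If you go the file-parsing route, here is a minimal sketch, assuming the NUnit 2.x TestResult.xml schema and a hypothetical path; the setParameter service message at the end is one way to hand a value to later build steps:
# Minimal sketch - read the summary attributes from an NUnit 2.x result file on the agent.
# The path is hypothetical and the attribute names assume the NUnit 2.x schema.
[xml]$results = Get-Content "C:\BuildAgent\work\MyProject\TestResult.xml"
$root = $results.'test-results'
Write-Host "Total: $($root.total)  Failures: $($root.failures)  Errors: $($root.errors)  Ignored: $($root.ignored)  Skipped: $($root.skipped)"
# Expose a value to later build steps as a TeamCity parameter (the parameter name is up to you).
Write-Host "##teamcity[setParameter name='tests.failed' value='$($root.failures)']"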

Attach Current Build to Test

I'm playing around with Microsoft Test Manager 2013 (though it appears it is just MTM 2012) to try and get a better understanding of test cases and test suites, as I want to use this at work. I was hoping that I could run a test suite on a build which gets included in this test suite. That is what I WANT to do, but it could very well be wrong. So maybe a better scope of what I'm doing at work might lead to a better answer.
My company makes tablet PCs. I write programs for those tablets. For the sake of argument let's just say there are 5 tablets that run a similar array of OSes. Tablets 1, 2, 3 and 4 can run WinXP, WinXP Embedded, Win7, and Win7 Embedded, and Tablet 5 can run Win7, Win7 Embedded, and Win8 Embedded. Let's say I'm making a display test program. Naturally this display test will run differently on each tablet, but the program itself is supposed to be able to handle that without having to worry about the OS. So I wrote out a very simple test: open the program, try to open it again, verify only 1 instance, check the display, close the program.
I figured it would be good to make a Test Suite called "Complete Display Program Test" and put 5 sub test suites under it, one for each tablet. Then I moved the 5 test cases into a single test suite. I configured all test cases to only have the correct tablet/OS configuration. I queued a build and waited for it to finish. I then attached that build to the main test suite. I then clicked on run a test for tablet 1, but I didn't see the build attached in the test runner. I've looked around a little bit to see why or how and haven't found anything. The question is: how do I do that? Or, if you are scratching your head and wondering why in the world I am doing it this way, then by all means suggest another way. This is the second time I have ever looked into MTM, so I might not be doing it right.
Thank you for your time.
When running manual tests from MTM you will not see the build you are using in Test Runner.
But if you complete the test and set the test outcome, you will be able to check which build you ran the test against.
Just double-click on the test or select "View Results" to display test results:
This column is not visible by default. You will have to right-click on the column row and select the "Build number" column to be displayed.
You will also be able to see the build number in "Analyse Test Runs" area:
Things are slightly different if you are running automated tests.
Consider following approach:
Automate your Test Cases
See How to: Associate an Automated Test with a Test Case for details.
Create a Build Definition building your application under test AND assemblies containing your tests.
I strongly recommend building the application you want to test and the test assemblies in the same Build Definition. (You will see why a little bit later.)
Run this build definition and deploy the latest version of the application to the environment where you want to run the tests.
This is very important to understand: if you run automated tests, only the test assemblies are deployed automatically to the environment.
It's your job to deploy the right version of the application you are going to test.
Now you can run tests from MTM.
You can do it the way described by @AndrewClear in the comment to this answer: "choose "Run with Options" when you're beginning a test run" and select the latest build.
Now the test assemblies containing the tests which are used to automate the Test Cases will be deployed automatically to the test environment and the tests will be executed.
That is the point where you should recognize why it is so important to build the application and the tests with a single Build Definition: since the build number you've just selected when starting the tests is stored along with the test results in TFS, you will later know which version of your application you were testing (assuming you deployed the right version, of course).
You could go a little bit further if you want even more automation (this is the way I'm currently running automated tests).
Use the Build-Deploy-Test template (this is a good place to start reading about Setting Up Automated Build-Deploy-Test Workflows).
Using this approach you will be able to automate deployment of the application you want to test.

OpenCover without running unit tests

Is it possible to run opencover without running unit tests?
I have the TestResults.xml from NUnit and want to pass this to OpenCover without running the unit tests again.
Is this possible?
Q1. Is it possible to run opencover without running unit tests?
OpenCover can run against most .NET applications that can be launched from the command line. With a little effort you can get it to run against a service like IIS.
Q2. I have the TestResults.xml from NUnit and want to pass this to OpenCover without running the unit tests again. Is this possible?
No, it will not be able to do what you want as the information in the TestResults.xml is about tests (pass/fail) and is not enough to determine what code was actually executed by those tests.
Just run your tests with OpenCover using the nunit-console.exe as the target - instructions exist in the documentation provided to help you.
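For reference, a typical invocation looks roughly like this (the assembly name and paths are placeholders; check the OpenCover usage documentation for the exact switches available in your version):
OpenCover.Console.exe -register:user -target:"nunit-console.exe" -targetargs:"MyProject.Tests.dll /noshadow" -output:coverage.xml
The coverage.xml produced by OpenCover is separate from the TestResults.xml produced by NUnit; if you need readable output you can feed it to a report tool such as ReportGenerator afterwards.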
I do not know OpenCover, but judging from how dotCover works, it needs to run alongside the unit tests as they progress through your code line by line. Code coverage is then determined by the percentage of your code that has been visited.

Is it possible to suppress NAnt's exec task's "[exec]" console output prefix

I'm trying to integrate Robot Framework (an acceptance testing framework) with TeamCity. In order to do this it needs to send messages to the console output, which TeamCity then reads to report real-time test progress/results. I'm doing this by calling the command line to run the tests with a simple exec task. Everything seemed to be working, except that I was only getting the results at the end of the run and not on the fly.
After a bit of struggling with NAnt I swapped to using MSBuild and everything worked first time.
I have what I need now, but for completeness I'd like to find out why I couldn't get it working with NAnt. As far as I can tell the issue is that NAnt is prefixing all console output with [exec]. Is it possible to suppress this?
I don't think the console output is configurable.
NAnt is open source: you could fork your own copy and/or submit a feature patch.

How can I run NUnit tests in parallel?

I've got a large acceptance test suite (~10 seconds per test) written using NUnit. I would like to make use of the fact that my machines are all multi-core boxes. Ideally, I'd be able to have one test running per core, independently of other tests.
There is PNUnit, but it's designed for testing for threading synchronization issues and things like that, and I didn't see an obvious way to accomplish this.
Is there a switch/tool/option I can use to run the tests in parallel?
If you want to run NUnit tests in parallel, there are at least 2 options:
NCrunch offers it out of the box (without changing anything, but it is a commercial product)
NUnit 3 offers a Parallelizable attribute, which can be used to denote which tests can be run in parallel
NUnit version 3 will support running tests in parallel:
Adding the attribute to a class: [Parallelizable(ParallelScope.Self)] will run your tests in parallel.
• ParallelScope.None indicates that the test may not be run in parallel with other tests.
• ParallelScope.Self indicates that the test itself may be run in parallel with other tests.
• ParallelScope.Children indicates that the descendants of the test may be run in parallel with respect to one another.
• ParallelScope.Fixtures indicates that fixtures may be run in parallel with one another.
NUnit Framework-Parallel-Test-Execution
If your project contains multiple test DLLs you can run them in parallel using this MSBuild script. Obviously you'll need to tweak the paths to suit your project layout.
To run with 8 cores run with: c:\proj> msbuild /m:8 RunTests.xml
RunTests.xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="RunTestsInParallel" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
  <Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
  <PropertyGroup>
    <Configuration Condition=" '$(Configuration)' == '' ">Release</Configuration>
    <Nunit Condition=" '$(Nunit)' == '' ">$(MSBuildProjectDirectory)\..\tools\nunit-console-x86.exe</Nunit>
  </PropertyGroup>
  <!-- see http://mikefourie.wordpress.com/2010/12/04/running-targets-in-parallel-in-msbuild/ -->
  <Target Name="RunTestsInParallel">
    <ItemGroup>
      <TestDlls Include="..\bin\Tests\$(Configuration)\*.Tests.dll" />
    </ItemGroup>
    <ItemGroup>
      <TempProjects Include="$(MSBuildProjectFile)">
        <Properties>TestDllFile=%(TestDlls.FullPath)</Properties>
      </TempProjects>
    </ItemGroup>
    <MSBuild Projects="@(TempProjects)" BuildInParallel="true" Targets="RunOneTestDll" />
  </Target>
  <Target Name="RunOneTestDll">
    <Message Text="$(TestDllFile)" />
    <Exec Command="$(Nunit) /exclude=Integration $(TestDllFile) /labels /xml:$(TestDllFile).results.xml"
          WorkingDirectory="$(MSBuildProjectDirectory)\..\bin\Tests\$(Configuration)" />
  </Target>
</Project>
Update
If I were answering this question now I would highly recommend NCrunch and its command line test running tool for maximum test run performance. There's nothing like it and it'll revolutionise your code-test-debug cycle at the same time.
As an alternative to adding the Parallelizable attribute to every test class:
Add this to the test project's AssemblyInfo.cs for NUnit 3 or greater:
// Make all tests in the test assembly run in parallel
[assembly: Parallelizable(ParallelScope.Fixtures)]
In this article it is mentioned that, in order to speed up tests, the poster runs multiple instances of NUnit with command line parameters specifying which tests each instance should run.
From the article:
I ran into an odd problem. We use nunit-console to run tests on our continuous integration server. Recently we were moving from NUnit 2.4.8 to 2.5.5 and from .NET 3.5 to 4.0. To speed up test execution we run multiple instances of NUnit in parallel with different command line arguments.
We have two copies of our test assemblies and the NUnit binaries in folders A and B.
In folder A we execute
nunit-console-x86.exe Model.dll Test.dll /exclude:MyCategory /xml=TestResults.xml /framework=net-4.0 /noshadow
In folder B we execute
nunit-console-x86.exe Model.dll Test.dll /include:MyCategory /xml=TestResults.xml /framework=net-4.0 /noshadow
If we execute the commands in sequence both run successfully. But if we execute them in parallel only one succeeds. As far as I can tell it's the one that first loads the test fixtures. The other fails with the message "Unable to locate fixture".
Is this problem already known? I could not find anything related in the bug list on Launchpad. BTW our server runs Windows Server 2008 64-bit. I could also reproduce the problem on Windows 7 64-bit.
Assuming this bug has since been fixed, or that you are not running the affected versions of the software mentioned, you should be able to replicate their technique.
Update
TeamCity looks like a tool you can use to automatically run NUnit tests. They have an NUnit launcher discussed here that could be used to launch multiple NUnit instances. Here is a blog post discussing the merging of multiple NUnit XML results into a single result file.
So theoretically you could have TeamCity automatically launch multiple NUnit tests based on however you want to split up the workload and then merge the results into a single file for post test processing.
Is that automated enough for your needs?
Just because PNUnit can do synchronization inside test code doesn't mean that you actually have to use that aspect. As far as I can see, there's nothing to prevent you from just spawning a set of tests and ignoring the synchronization features until you need them.
BTW I don't have the time to read all of their source, but I was curious enough to check out the Barrier class, and it's a very simple lock counter. It just waits till N threads enter and then sends the pulse for all of them to continue running at the same time. That's all there is to it - if you don't touch it, it won't bite you.
It might be a bit counterintuitive for normal threaded development (locks are normally used to serialize access - one by one), but it is quite a spirited diversion :-)
You can now use NCrunch to parallelize your unit tests and you can even configure how many cores should be used by NCrunch and how many should be used by Visual Studio.
Plus you get continuous testing as a bonus :)
It would be a bit of a hack, but you could split the unit tests into a number of categories. Then, start up a new instance of NUnit for each category.
Edit: It looks like they have added a /process option to the console app. The command-line help states this is the "Process model for tests: Single, Separate, Multiple". The test runner also appears to have this feature.
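For example, something along these lines should run each test assembly in its own process (the assembly names are placeholders and the exact switch syntax may vary between NUnit versions):
nunit-console.exe ProjectA.Tests.dll ProjectB.Tests.dll /process=Multiple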
Edit 2: Unfortunately, although it does create separate processes for each assembly, the process isolation option (/process from the command line) runs the agents one at a time.
Since the project hasn't been mentioned here, I would like to bring up NUnit.Multicore. I haven't tried the project myself, but it seems to have an interesting approach to the parallel test problem with NUnit.
You can try my small tool TBox, its console parallel runner, or even the plugin for distributed calculations (SkyNet), which can also run unit tests on a set of PCs.
TBox was created to simplify work with big solutions that contain many projects. It supports many plugins, and one of them provides the ability to run NUnit tests in parallel. This plugin does not require any changes to your existing tests.
It also supports:
Cloning of the folder with the unit tests (if your tests change local data),
Synchronization of tests (for example, if your tests kill all dev servers or the Chrome runner for QUnit in the test fixture teardown),
x86 mode and admin privileges to run tests,
Batch runs - you can run tests for many assemblies in parallel.
Even for single-threaded runs it works faster than the standard NUnit runner if you have many small tests.
The tool also includes a command line test runner (for parallel runs), so you can use it with continuous integration.
I have successfully used NUnit 3.0.0 beta-4 to run tests in parallel:
Runs on a build server
Runs Selenium tests
Has Visual Studio support
No ReSharper support yet
Thanks to peer's answer.
Gotcha: the Parallelizable attribute is not inherited, so it has to be specified on each test class.
You can use the following PowerShell command (for NUnit 3; for NUnit 2, change the runner name):
PS> nunit3-console (ls -r *\bin\Debug\*.Tests.dll | % FullName | sort-object -Unique)
The presented command runs all test assemblies in a single NUnit instance, which lets you leverage the engine's built-in parallel test execution.
Remarks
Remember to tweak the directory search pattern. The example above runs only assemblies ending with .Tests.dll located inside \bin\Debug directories.
Be aware of the -Unique filtering - you may not want it.
To achieve this level of parallelism, make sure you do these two things:
1) NUnit Explorer - Settings - Run tests in parallel
2) LevelOfParallelism
This is an assembly-level attribute used to specify the level of parallelism, that is, the maximum number of worker threads executing tests in the assembly.
In AssemblyInfo.cs, set
[assembly: LevelOfParallelism(N)]
where N is the number of worker threads.