When running NUnit and specifying a category, can all uncategorized tests be included too?

We have several hundred test classes, with a few dozen of them marked with the following attributes:
[TestFixture]
[Explicit]
[Category("IntegrationTests")]
so they will only be run in our over-night automated build. The remaining TestFixtures don't have a Category specified (and are not marked Explicit either).
Here is the NAnt task we are running to execute our tests:
<nunit2>
<test>
...
<categories>
<include name="IntegrationTests" />
</categories>
...
</test>
</nunit2>
This, of course, will not execute any of the uncategorized tests.
I'd like to be able to do something like this:
<nunit2>
<test>
...
<categories>
<include name="*" />
<include name="IntegrationTests" />
</categories>
...
</test>
</nunit2>
where all of the uncategorized tests will be run along with the integration tests. Is this possible? If so, what is the syntax?
(Note: I'm looking for either a NAnt solution, as above, or an NUnit command-line solution. I can certainly run NUnit twice with different options, or put Categories on all of my TestFixtures. These are workarounds that I'm OK using if I have to, but it would be more cool to be able to specify uncategorized tests directly.)

I'm in the same boat, and was getting frustrated until I just discovered that the Category attribute can be applied not just to a test or test fixture, but to a whole assembly.
I have two test assemblies with tests that I run locally, and one more with tests that should only run on the build server. I added this attribute in AssemblyInfo.cs in the first two projects: [assembly: NUnit.Framework.Category("Always")]. The third test project uses category attributes like [Explicit, Category("PublicDatabase")] as you describe. The build server invokes NUnit with /include=Always,PublicDatabase, which has the desired result: all of the tests in the first two assemblies run, and just the PublicDatabase tests in the third assembly run.
When I run NUnit locally on the first two projects, I just run it on the individual assemblies, and don't have to specify categories at all.
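For illustration, here is a rough C# sketch of that setup (the category names "Always" and "PublicDatabase" are the ones used in this answer; the fixture and test names are invented):
// AssemblyInfo.cs in each test project whose tests should always run
[assembly: NUnit.Framework.Category("Always")]

// A fixture in the build-server-only assembly keeps its own category
using NUnit.Framework;

[TestFixture]
[Explicit, Category("PublicDatabase")]
public class PublicDatabaseTests
{
    [Test]
    public void CanQueryPublicDatabase()
    {
        // exercises the public database; runs on the build server via /include=Always,PublicDatabase
    }
}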

No, given your situation there is no way to do what you want in a single run of NUnit. If you took off the Explicit attribute, you could do it in a single run by excluding all the categorized tests you don't want.
Basically, if you make the jump to categories, all your tests should be categorized.

NUnit results file in TeamCity

With TeamCity 8, how do I produce / find a results file for an NUnit run?
We currently also run MsTest which produces a TRX file. We then use a TRX->HTML report tool to pass a report up the management food chain. How do we do the same with NUnit in TeamCity?
Right now I'm thinking I need to execute NUnit as a Command Line build step, but that seems crazy considering there's an NUnit add-in, and the MsTest add-in offers me a "Results file:" option.
TeamCity executes MSTest and NUnit differently.
NUnit is not run through the NUnit console executable but instead through TeamCity's own NUnit runner. This allows TeamCity to report NUnit test results on the fly--executing test 3...4...5...of 78--and allows instant notification of failed tests, even if all tests have not yet been executed.
MSTest, on the other hand, goes directly through the MSTest executables and does not have on-the-fly reporting. There is no progress other than "in progress". Test results, including any failures, are only reported after every test has been run.
TeamCity requires and parses the MSTest TRX file to do its own reporting, including on any failures, so it is also made available to you. However, the NUnit reporting files are a part of the NUnit console, and not a part of the TeamCity runner, so there is no report file to provide.
If you need the report file, you will need to run the NUnit tests through the NUnit console. There are several ways of doing this, only one of which is using a Command Line step. But be aware, you will lose the on-the-fly reporting, no matter which alternative you use.
Jay's description is correct; this is the TeamCity behaviour that makes this task impossible out of the box.
There is a known workaround though:
http://devnet.jetbrains.com/message/5218450#5218450
Essentially, you invoke the TeamCity NUnit runner manually (e.g. from MSBuild). The runner can then output a result.xml file (one per test assembly). Those result files then have to be merged back into one in order to simulate the behaviour of nunit-console.
Davy Brion has even posted the MSBuild tasks for this:
http://web.archive.org/web/20080808215345/http://davybrion.com/blog/2008/07/using-teamcitys-nunit-support-while-keeping-the-output-around/
http://web.archive.org/web/20080809002009/http://davybrion.com/blog/stuff/
He has since nuked his blog, so the Wayback Machine to the rescue. In case those links die too, here are the snippets:
NUnitMergeOutput
This task combines the output of multiple NUnit xml reports into one combined xml report.
The combined report will contain the results of each xml report that was fed to it, and it contains the total number of tests, failures, duration and overall success status of the entire test run.
To define the task:
<UsingTask AssemblyFile="$(MSBuildProjectDirectory)\Libs\Brion.MSBuildTasks\Brion.MSBuildTasks.dll"
TaskName="NUnitMergeOutput"/>
And to use it in a target:
<CreateItem Include="TestResults\*.xml" >
<Output TaskParameter="Include" ItemName="NUnitOutputXmlFiles"/>
</CreateItem>
<NUnitMergeOutput NUnitOutputXmlFiles="@(NUnitOutputXmlFiles)"
PathOfMergedXmlFile="TestResults\TestResults.xml" />
BuildTeamCityNUnitArguments
TeamCity doesn’t easily allow you to enable its integrated NUnit testing support while still keeping the NUnit output xml files around after the build. This task prepares an xml arguments file to pass to TeamCity’s NUnitLauncher task which does make it possible to keep the NUnit output xml in a directory you can specify. You can find more info on this problem here and more info on this workaround here.
To define the task:
<UsingTask AssemblyFile="$(MSBuildProjectDirectory)\Libs\Brion.MSBuildTasks\Brion.MSBuildTasks.dll"
TaskName="BuildTeamCityNUnitArguments"/>
And to use it in a target:
<CreateItem Include="**\Bin\Debug\*Tests*.dll" >
<Output TaskParameter="Include" ItemName="TestAssemblies" />
</CreateItem>
<BuildTeamCityNUnitArguments HaltOnError="true" HaltOnFirstTestFailure="true"
HaltOnFailureAtEnd="true" TestAssemblies="@(TestAssemblies)"
NUnitResultsOutputFolder="TestResults"
PathOfNUnitArgumentsXmlFile="nunitarguments.xml" />
<Exec Command="$(teamcity_dotnet_nunitlauncher) @@ nunitarguments.xml" />

SpecFlow wrongly using NUnit

I've just (today) tried SpecFlow for the first time. I'm playing about by creating a new class library in VS2010 Pro and adding a SpecFlow Feature Definition file.
Thing is, the integration doesn't appear to be working properly, with a variety of different errors. I've selected MsTest as the test runner, because I can't be bothered with invoking NUnit (I'd like to use NUnit in the long term but at the moment I just want to get some BDD code working). The generated code files however continue to reference NUnit - which is obviously wrong, since I've just told SpecFlow to run using MsTest. I've done everything I can think of to invoke the code generation again, including creating a brand new class library project with the MsTest option selected in Tools > Options > SpecFlow.
If I leave the test runner field set to 'Auto' and right-click a feature file, then select 'Run SpecFlow Scenarios' I get an error message "Could not find matching test runner".
If I instead change the test runner field to MsTest, I get a different error message on doing the same thing - "Object Reference not set to an instance of an object". I'm not surprised at this one since it's still trying to run NUnit tests even though I've explicitly asked for MsTest, though obviously it shouldn't nullref and present that to the user.
What am I doing wrong? The documentation is not helpful, and as far as I can see, there's no FAQ.
edit #1: I've established that the actual setting I'm looking for is provided in App.config via the element <unitTestProvider name="MsTest" />. I can see what's happened - the field in the Visual Studio options menu doesn't seem to modify the project you're currently working on. Thing is, this makes it look like that field doesn't do anything at all. I've now persuaded SpecFlow to generate MsTest classes and run using the MSTest runner.
So now the question morphs into a slightly different one: What (if anything) does the Tools > Options > SpecFlow > Test Runner Tool field do?
With VS2010 the correct value is MsTest.2010, not MsTest as documented. Change your app.config (for the test assembly) and it will work fine (at least with SpecFlow 1.8):
<?xml version="1.0" encoding="utf-8"?>
<configuration>
<configSections>
<section name="specFlow" type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow" />
</configSections>
<specFlow>
<unitTestProvider name="MsTest.2010" />
<!-- For additional details on SpecFlow configuration options see https://github.com/techtalk/SpecFlow/wiki/Configuration -->
</specFlow>
</configuration>
In answer to your latest question, "What does the Tools > Options > SpecFlow > Test Runner Tool setting do?": this setting controls what will actually run the tests, not what will generate the test code. If it is set to Auto, I believe it looks at the App.config file, where you have set the unitTestProvider, to determine the best tool to run the tests. An alternative test runner made by the same guys as SpecFlow is SpecRun: http://www.specrun.com/
So when you go to run the tests it will use this option. As you have discovered, though, the code generator uses the config file to determine what type of test it should generate (MsTest/NUnit/...).
If you ran the SpecFlow installer ( https://github.com/downloads/techtalk/SpecFlow/SpecFlowSetup_v1.8.1.msi ) to install all the Visual Studio integration components, then when you change the App.config file it normally prompts you to regenerate the features using the new provider. The manual way to do this is to right-click the feature file and select "Run Custom Tool".
In regards to documentation, have you found the GitHub wiki?
https://github.com/techtalk/SpecFlow/wiki/Documentation
The way I've read this is that the test runner is entirely different from the code generator, although that doesn't always make sense when the MsTest runner doesn't know about NUnit (I think). Out of the box, the latest version (v2.3.2), even when installed with the SpecFlow.MsTest NuGet package (of the same version), does not configure your machine to generate MsTest-based classes in the background. I am running VS2017 and have ReSharper installed as my 'test runner', but the main requirement for generating MsTest-based code is a change to the app.config. As per the wiki documentation you also need the following in your app.config. When you save the config you should be prompted for the files to be regenerated.
<?xml version="1.0" encoding="utf-8" ?>
<configuration>
<configSections>
<section name="specFlow"
type="TechTalk.SpecFlow.Configuration.ConfigurationSectionHandler, TechTalk.SpecFlow"/>
</configSections>
<specFlow>
<unitTestProvider name="MsTest" />
</specFlow>
</configuration>
We are using ReSharper as a runner for SpecFlow acceptance tests; it worked well right out of the box. ReSharper is not free, but it is worth every penny...
I was never able to get SpecFlow working right from Visual Studio; I spent some time working on it but never got anywhere. Though I found these instructions on setting up NUnit in Visual Studio 2010, and I use this shortcut to run my SpecFlow tests with good effect.
Overall we use PowerShell to run a lot of tests and I was able to incorporate the NUnit command line runner and SpecFlow report generator into a single script I can run easily.

TFS2010 Build Definition to Deploy to multiple servers?

I've been looking into TFS2010's new build and deployment features with MSDeploy. So far everything is going well (although it's been hard to find information about specific scenarios).
Can I modify my Build Definition to specify 2 or more servers to deploy to? What I need to do is deploy to multiple servers (as I have two in my testing environment which uses a NLB).
What I have now is a Build definition which Builds, runs my tests, and then Deploys to ONE of my testing servers (which has the MsDeployAgentService running on it). It works fine, and each web project is deployed as configured in its project file. The MSBuild Arguments I use are:
* /p:DeployOnBuild=True
* /p:DeployTarget=MsDeployPublish
* /p:MSDeployServiceURL=http://oawww.testserver1.com.au/MsDeployAgentService
* /p:CreatePackageOnPublish=True
* /p:MsDeployPublishMethod=RemoteAgent
* /p:AllowUntrustedCertificate=True
* /p:UserName=myusername
* /p:Password=mypassword
NB: I don't use /p:DeployIISAppPath="xyz" as it doesn't deploy all my projects and overrides my project config.
Can I add another build argument to get it to call more than one MSDeployServiceURL? Like something like a second /p:MSDeployServiceURL argument that specifies another server?
Or do I have to look for another solution, such as editing the WF?
I saw an almost exact same question here posted 2 months ago: TFS 2010 - Deploy to Multiple Servers After Build , so it doesn't look like I'm the only one trying to solve this.
I also posted on the IIS.NET forums where MSDeploy is discussed: http://forums.iis.net/t/1170741.aspx . It's had quite a lot of views, but again, no answers.
You don't have to build the project twice to deploy to two servers. The build process will build a set of deployment files. You can then use the InvokeProcess to deploy to multiple servers.
First create a variable named ProjectName. Then add an Assign activity to the "Compile the Project" sequence. This is located in the "Try to Compile the Project" sequence. Here are the properties of the Assign:
To: ProjectName
Value: System.IO.Path.GetFileNameWithoutExtension(localProject)
Here are the properties of our InvokeProcess activity that deploys to the test server:
Arguments: "/y /M:<server> /u:<domain>\<user> /p:<password>"
FileName: String.Format("{0}\{1}.deploy.cmd", BuildDetail.DropLocation, ProjectName)
You will need to change <server>, <domain>, <user>, and <password> to the values that reflect your environment.
If you need to manually deploy to a server you can run the command below from your build folder:
deploy.cmd /y /M:<server> /u:<domain>\<user> /p:<password>
I couldn't find the solution I was looking for, but here's what I came up with in the end.
I wanted to keep the solution simple and configurable within the TFS arguments while at the same time staying in line with the already provided MSBuildArguments method which has been promoted a lot. So I created a new Build Template, and added a new TFS WorkFlow Argument called MSBuildArguments2 in the Arguments tab of the WorkFlow.
I searched through the BuildTemplate WorkFlow for all occurrences of MSBuildArguments (there were two occurrences).
The two tasks that use MSBuildArguments are called Run MSBuild for Project. Directly below this task, I added a new "If" block with the condition:
Not String.IsNullOrEmpty(MSBuildArguments2)
I then copied the "Run MSBuild for Project" task and pasted it into the new If's "Then" block, updating its title accordingly. You'll also need to update the new task's CommandLineArguments property to use your new argument.
CommandLineArguments = String.Format("/p:SkipInvalidConfigurations=true {0}", MSBuildArguments2)
After these modifications, the WorkFlow has the original "Run MSBuild for Project" task followed by the new conditional copy.
Save and Check In the new WorkFlow. Update your Build Definition to use this new WorkFlow, then in the build definition's Process tab you will find a new section called Misc with the new argument ready to be used. Because I'm simply using this new argument for deployment, I copied the exact same arguments I used for MSBuild Arguments and updated the MSDeployServiceURL to my second deployment server.
And that's that. I suppose a more elegant method would be to convert MSBuildArguments into an array of strings and then loop through them during the WorkFlow process. But this suits our 2 server requirements.
Hope this helps!
My solution to this is a new Target that runs after Package. Each project that needs to produce a package includes this targets file, and I chose to make the Include conditional on an externally-set "DoDeployment" property. Additionally each project defines the DeploymentServerGroup property so that the destination server(s) are properly filtered depending on what kind of project it is.
As you can see towards the bottom I'm simply executing the command file with the server list, pretty simple.
<!--
This targets file allows a project to deploy its package.
As it is used by all project types, it is conditionally included from the project file.
-->
<UsingTask TaskName="Microsoft.TeamFoundation.Build.Tasks.BuildStep" AssemblyFile="$(TeamBuildRefPath)\Microsoft.TeamFoundation.Build.ProcessComponents.dll" />
<!-- Each Server needs the Group metadatum, either Webservers, Appservers, or Batch. -->
<Choose>
<When Condition="'$(Configuration)' == 'DEV'">
<ItemGroup>
<Servers Include="DevWebServer">
<Group>Webservers</Group>
</Servers>
<Servers Include="DevAppServer">
<Group>Appservers</Group>
</Servers>
</ItemGroup>
</When>
<When Condition="'$(Configuration)' == 'QA'">
<ItemGroup>
<Servers Include="QAWebServer1">
<Group>Webservers</Group>
</Servers>
<Servers Include="QAWebServer2">
<Group>Webservers</Group>
</Servers>
<Servers Include="QAAppServer1">
<Group>Appservers</Group>
</Servers>
<Servers Include="QAAppServer2">
<Group>Appservers</Group>
</Servers>
</ItemGroup>
</When>
</Choose>
<!-- DoDeploy can be set in the build definition -->
<Target Name="StartDeployment" AfterTargets="Package">
<PropertyGroup>
<!-- The _PublishedWebsites area -->
<PackageLocation>$(WebProjectOutputDir)_Package</PackageLocation>
<!-- Override for local testing -->
<PackageLocation Condition="$(WebProjectOutputDirInsideProject)">$(IntermediateOutputPath)Package\</PackageLocation>
</PropertyGroup>
<Message Text="Tier servers are #(Servers)" />
<!-- A filtered list of the servers. DeploymentServerGroup is defined in each project that does deployment -->
<ItemGroup>
<DestinationServers Include="@(Servers)" Condition="'%(Servers.Group)' == '$(DeploymentServerGroup)'" />
</ItemGroup>
<Message Text="Dest servers are #(DestinationServers)" />
</Target>
<!-- Only perform the deployment if any servers fit the filters -->
<Target Name="PerformDeployment" AfterTargets="StartDeployment" Condition="'#(DestinationServers)' != ''">
<Message Text="Deploying $(AssemblyName) to #(DestinationServers)" />
<!-- Fancy build steps so that they better appear in the build explorer -->
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Message="Deploying $(AssemblyName) to #(DestinationServers)...">
<Output TaskParameter="Id" PropertyName="StepId" />
</BuildStep>
<!-- The deployment command will be run for each item in the DestinationServers collection. -->
<Exec Command="$(AssemblyName).deploy.cmd /Y /M:%(DestinationServers.Identity)" WorkingDirectory="$(PackageLocation)" />
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Status="Succeeded"
Message="Deployed $(AssemblyName) to #(DestinationServers)"/>
<OnError ExecuteTargets="MarkDeployStepAsFailed" />
</Target>
<Target Name="MarkDeployStepAsFailed">
<BuildStep
TeamFoundationServerUrl="$(TeamFoundationServerUrl)"
BuildUri="$(BuildUri)"
Id="$(StepId)"
Status="Failed" />
</Target>
I am the author of the other similar post. I have yet to find a solution. I believe it will involve modifying the workflow to add a post-processing MSBuild sync task. That seems to be the most elegant approach, but I was still hoping to find something a bit less intrusive.
I'm not sure if that could help you with TFS 2010, but I have a blog post for TFS 2012: Multiple web projects deployment from TFS 2012 to NLB enabled environment.

How can I run NUnit tests in parallel?

I've got a large acceptance test suite (~10 seconds per test) written using NUnit. I would like to make use of the fact that my machines are all multi-core boxes. Ideally, I'd be able to have one test running per core, independently of other tests.
There is PNUnit, but it's designed for testing for threading synchronization issues and things like that, and I didn't see an obvious way to accomplish this.
Is there a switch/tool/option I can use to run the tests in parallel?
If you want to run NUnit tests in parallel, there are at least 2 options:
NCrunch offers it out of the box (without changing anything, but is a commercial product)
NUnit 3 offers a Parallelizable attribute, which can be used to denote which tests can be run in parallel
NUnit version 3 will support running tests in parallel:
Adding the attribute to a class: [Parallelizable(ParallelScope.Self)] will run your tests in parallel.
• ParallelScope.None indicates that the test may not be run in parallel with other tests.
• ParallelScope.Self indicates that the test itself may be run in parallel with other tests.
• ParallelScope.Children indicates that the descendants of the test may be run in parallel with respect to one another.
• ParallelScope.Fixtures indicates that fixtures may be run in parallel with one another.
NUnit Framework-Parallel-Test-Execution
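As a rough sketch of what that looks like on a fixture (the class and test names here are invented):
using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.Self)]
public class CheckoutAcceptanceTests
{
    [Test]
    public void PlacesOrderSuccessfully()
    {
        // ~10 second acceptance test body goes here
        Assert.Pass();
    }
}
Note that the attribute has to be applied to each fixture you want parallelized (or at assembly level, as a later answer shows); it is not inherited.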
If your project contains multiple test DLLs you can run them in parallel using this MSBuild script. Obviously you'll need to tweak the paths to suit your project layout.
To run with 8 cores run with: c:\proj> msbuild /m:8 RunTests.xml
RunTests.xml
<?xml version="1.0" encoding="utf-8"?>
<Project DefaultTargets="RunTestsInParallel" xmlns="http://schemas.microsoft.com/developer/msbuild/2003">
<Import Project="$(MSBuildExtensionsPath)\MSBuildCommunityTasks\MSBuild.Community.Tasks.Targets"/>
<PropertyGroup>
<Configuration Condition=" '$(Configuration)' == '' ">Release</Configuration>
<Nunit Condition=" '$(Nunit)' == '' ">$(MSBuildProjectDirectory)\..\tools\nunit-console-x86.exe</Nunit>
</PropertyGroup>
<!-- see http://mikefourie.wordpress.com/2010/12/04/running-targets-in-parallel-in-msbuild/ -->
<Target Name="RunTestsInParallel">
<ItemGroup>
<TestDlls Include="..\bin\Tests\$(Configuration)\*.Tests.dll" />
</ItemGroup>
<ItemGroup>
<TempProjects Include="$(MSBuildProjectFile)" >
<Properties>TestDllFile=%(TestDlls.FullPath)</Properties>
</TempProjects>
</ItemGroup>
<MSBuild Projects="@(TempProjects)" BuildInParallel="true" Targets="RunOneTestDll" />
</Target>
<Target Name="RunOneTestDll">
<Message Text="$(TestDllFile)" />
<Exec Command="$(Nunit) /exclude=Integration $(TestDllFile) /labels /xml:$(TestDllFile).results.xml"
WorkingDirectory="$(MSBuildProjectDirectory)\..\bin\Tests\$(Configuration)" />
</Target>
</Project>
Update
If I were answering this question now I would highly recommend NCrunch and its command line test running tool for maximum test run performance. There's nothing like it and it'll revolutionise your code-test-debug cycle at the same time.
As an alternative to adding the Parallelizable attribute to every test class:
Add this to the test project's AssemblyInfo.cs file for NUnit 3 or greater:
// Make all tests in the test assembly run in parallel
[assembly: Parallelizable(ParallelScope.Fixtures)]
In this article it is mentioned that in order to speed up tests the poster runs multiple instances of NUnit with command parameters specifying which tests each instance should run.
FTA:
I ran into an odd problem. We use nunit-console to run tests on our continuous integration server. Recently we were moving from NUnit 2.4.8 to 2.5.5 and from .NET 3.5 to 4.0. To speed up test execution we run multiple instances of NUnit in parallel with different command line arguments.
We have two copies of our test assemblies and the NUnit binaries in folders A and B.
In folder A we execute
nunit-console-x86.exe Model.dll Test.dll /exclude:MyCategory /xml=TestResults.xml /framework=net-4.0 /noshadow
In folder B we execute
nunit-console-x86.exe Model.dll Test.dll /include:MyCategory /xml=TestResults.xml /framework=net-4.0 /noshadow
If we execute the commands in sequence both run successfully. But if we execute them in parallel only one succeeds. As far as I can tell it's the one that first loads the test fixtures. The other fails with the message "Unable to locate fixture".
Is this problem already known? I could not find anything related in the bug list on Launchpad. BTW our server runs Windows Server 2008 64-bit. I could also reproduce the problem on Windows 7 64-bit.
Assuming this bug is fixed or you are not running the newer version(s) of the software mentioned you should be able to replicate their technique.
Update
TeamCity looks like a tool you can use to automatically run NUnit tests. They have an NUnit launcher, discussed here, that could be used to launch multiple NUnit instances. Here is a blog post discussing the merging of multiple NUnit XML results into a single result file.
So theoretically you could have TeamCity automatically launch multiple NUnit tests based on however you want to split up the workload and then merge the results into a single file for post test processing.
Is that automated enough for your needs?
Just because PNUnit can do synchronization inside test code doesn't mean that you actually have to use that aspect. As far as I can see there's nothing to prevent you from just spawning a set of parallel test runs and ignoring the synchronization features until you need them.
BTW I don't have the time to read all of their source but was curious to check out the Barrier class and it's a very simple lock counter. It just waits till N threads enter and then sends the pulse for all of them to continue running at the same time. That's all there is to it - if you don't touch it, it won't bite you.
Might be a bit counter intuitive for a normal threaded development (locks are normally used to serialize access - 1 by 1) but it is quite a spirited diversion :-)
You can now use NCrunch to parallelize your unit tests and you can even configure how many cores should be used by NCrunch and how many should be used by Visual Studio.
plus you get continuous testing as a bonus :)
It would be a bit of a hack, but you could split the unit tests into a number of categories. Then, start up a new instance of NUnit for each category.
Edit: It looks like they have added a /process option to the console app. The command-line help states this is the "Process model for tests: Single, Separate, Multiple". The test runner also appears to have this feature.
Edit 2: Unfortunately, although it does create separate processes for each assembly, the process isolation option (/process from the command line) runs the agents one at a time.
Since the project hasn't been mentioned here, I would like to bring up NUnit.Multicore. I haven't tried the project myself, but it seems to have an interesting approach to the parallel test problem with NUnit.
You can try my small tool TBox, its console parallel runner, or even the plugin for distributed calculations (SkyNet), which can also run unit tests on a set of PCs.
TBox was created to simplify work with big solutions that contain many projects. It supports many plugins, and one of them provides the ability to run NUnit tests in parallel. This plugin does not require any changes to your existing tests.
It also supports:
Cloning of the folder with unit tests (if your tests change local data),
Synchronization of tests (for example, if your TestFixtureTearDown kills all dev servers, or a Chrome runner for QUnit),
x86 mode and admin privileges to run tests,
Batch runs - you can run tests for many assemblies in parallel.
Even for single-threaded runs it works faster than the standard NUnit runner if you have many small tests.
This tool also includes a command line test runner (for parallel runs), so you can use it with continuous integration.
I have successfully used NUnit 3.0.0 beta-4 to run tests in parallel
Runs on build server
Runs Selenium tests
Has Visual Studio support
No ReSharper support yet
Thanks for peers' answer.
Gotchas:
Parallelizable attribute is not inherited, so it has to be specified on the test class.
You can use following PowerShell command (for NUnit3, for NUnit2 change runner name):
PS> nunit3-console (ls -r *\bin\Debug\*.Tests.dll | % FullName | sort-object -Unique)
The presented command runs all test assemblies in a single NUnit instance, which lets you leverage the engine's built-in parallel test execution.
Remarks
Remember to tweak the directory search pattern; the example only picks up assemblies ending with .Tests.dll inside \bin\Debug directories.
Be aware of the -Unique filtering - you may not want it.
To achieve this level of parallelism, ensure you do these two things:
1) NUnit Explorer - Settings - Run tests in parallel
2) LevelOfParallelism
This is an assembly-level attribute used to specify the level of parallelism, that is, the maximum number of worker threads executing tests in the assembly.
In AssemblyInfo.cs, set
[assembly: LevelOfParallelism(N)] where N is the number of worker threads.
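For example, a minimal AssemblyInfo.cs sketch combining the assembly-level attributes mentioned in this thread (the value 4 is just an illustrative worker count):
using NUnit.Framework;

// Let fixtures in this assembly run in parallel with one another...
[assembly: Parallelizable(ParallelScope.Fixtures)]
// ...using at most 4 worker threads.
[assembly: LevelOfParallelism(4)]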

how to do builds with nant

In my org, we are planning to go with NAnt for .NET web applications. Source control is TFS, with Visual Studio 2008. I would like to know how to do builds with NAnt. How do I create an MSI and deploy the application using NAnt? Is a separate build machine required to do builds with NAnt? Somebody please help me out; I need a step-wise process. Thanks in advance.
Thanks
Shanthi
For a step-by-step guide to using NAnt I suggest referring to the NAnt project documentation for the fundamental concepts. Once you are familiar with its basic usage I suggest investigating the nant-contrib project to obtain more build tasks.
One part of your question that I would like to address directly here is whether a separate machine is required to use NAnt. NAnt does not strictly require a separate machine; however, a separate machine might be beneficial if your build process is automated or particularly intensive.
[Update]
In response to comment from OP:
NAnt views the build process as a series of individual tasks to be performed as part of a target. The normal process for building an application would be to invoke a compiler on the source files in order to produce a binary, and NAnt has a number of tasks that invoke language compilers.
In this example I will invoke the C# language compiler (csc.exe) using the csc task in an NAnt build file for a Hello World application that consists of a single source file named HelloWorld.cs.
<?xml version="1.0"?>
<project name="Hello World" default="build" basedir=".">
<property name="debug" value="true" overwrite="false" />
<target name="build" description="compiles the source code">
<csc target="exe" output="HelloWorld.exe" debug="${debug}">
<sources>
<includes name="HelloWorld.cs" />
</sources>
</csc>
</target>
</project>
Let's examine this XML:
<project name="Hello World" default="build" basedir=".">
Things to Note:
The value of the default property is "build". This means that the target named "build" will be invoked if no other target is specified.
Next is the build target; as its description states, it compiles the source code. To do this the csc task is used. The csc task has a number of options, including:
target: This specifies the type of binary the target will produce. In this case an executable will be produced
output: specifies the name of the executable file that will be created
debug: takes its value from the debug property defined above (true), which determines whether the compiler produces an executable that contains debugging information
sources & includes: specifies the source files that the compiler will parse in order to produce the executable
As you can see, the actions necessary to build the source code are defined in a target. A build file can define many targets, each of which calls many tasks. To produce an MSI file you would invoke a task that produces one; unfortunately, as I don't actually use NAnt regularly, you will have to do some research to find it, although I have a feeling the nant-contrib project includes such a task given how common it is to produce an MSI.
I hope this explanation has clarified things for you.
The information in this update has been distilled from this document in the NAnt documentation
A separate build machine is not necessarily required, but it's definitely recommended.
You'll want to look into using the following tools:
CruiseControl .NET
TFS Plugin for CC.NET