What happened to nunit extensions/rowtest? - nunit

In NUnit 2.4.7, nunit.framework.extensions.dll was included which made it possible to do RowTests.
When downloading the newest version (2.5.8) I can't find it. What happened to it?

Instead of using RowTest, you can use TestCase. A test that previously used RowTest would look like:
[RowTest]
[Row("foo", false)]
[Row("", true)]
public void Some_test(string value, bool expected)
{
    // test
}
And the same thing with TestCase looks like this:
[TestCase("foo", false)]
[TestCase("", true)]
public void Some_test(string value, bool expected)
{
// test
}
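TestCase can also check a return value directly. A minimal sketch, assuming NUnit 2.5.x, where the named parameter is Result (later versions call it ExpectedResult):
[TestCase("foo", Result = false)]
[TestCase("", Result = true)]
public bool Some_test(string value)
{
    // Hypothetical implementation under test: report whether the string is empty.
    return string.IsNullOrEmpty(value);
}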

RowTest was an extension that was merged into NUnit's extension assemblies temporarily and was removed again in 2.5 Alpha 2.
Quote from the Release Notes for 2.4.8:
NUnit now includes the RowTest extension, written by Andreas Schlapsi, in its extension assemblies. This extension allows you to write test methods that take arguments and to provide multiple sets of argument values using the RowAttribute. To use RowTest, your test must reference the nunit.framework.extensions assembly. Note: Merging extensions into NUnit's own extension assembly is an experiment we are trying for this release. The approach may change in future releases.
Quote from the 2.5 alpha 2 Release Notes:
The RowTestExtension, which was merged into the nunit extension dlls in Alpha-1, is now provided as a separate addin. This is the general approach we plan to take with regard to any bundled addins, since it permits the creator of an addin to provide updates separately from the NUnit release.
You can now download the RowTest extension from here.

Related

How may I set category in NUnit 3 in GUI

I have the following code:
using NUnit.Framework;

[TestFixture]
class A : B
{
    [Test(Description = "SW"), Category("Minimum")]
    public void TS1()
    {
    }
}

[TestFixture]
class C : B
{
    [Test(Description = "ex"), Category("Minimum")]
    public void TS2()
    {
    }
}
When I run the NUnit 3.0.1 GUI and filter the tests by category, I get only one test in the Minimum category instead of two (the other test shows up under the None category even though it is marked Minimum).
How may I get the correct filter?
Sounds like a bug, which you can report at https://github.com/nunit/nunit-gui/issues.
That said, since the GUI is not yet released there could be an issue in how you are building it. You should also keep in mind that the source for the GUI may depend on a version of NUnit that has not yet been released.
Even when we do release a binary, it will be called 0.1 and will be mainly for the purpose of getting feedback on usability of the GUI.
The NUnit team is very conservative about recommending use of our pre-alpha source code in production use - that is, we don't recommend it. :-)
All that said, if it's a problem, we'd like to hear about it. :-)

How to wrap all calls to an assembly with PostSharp?

My C# project refers to an external .NET assembly. I would like to insert locking statements around every call from my project to that assembly. I've been trying to establish this with PostSharp but can't find a way to do it. I have the source to the external assembly and I probably could achieve my goal the easiest by inserting the aspect there, but I prefer a non-intrusive solution where I can leave the external assembly untouched.
Approach 1
I have found out that I can wrap calls to the external assembly. Sadly, PostSharp is unable to wrap calls to abstract methods, and interface members are abstract methods. Therefore this approach doesn't cover calls through interface types.
// Assumes: using System; using System.Threading;
// using PostSharp.Aspects; using PostSharp.Extensibility;
[assembly: WrappingAspect(
    AttributeTargetAssemblies = "Library",
    AttributeTargetExternalMemberAttributes = MulticastAttributes.NonAbstract)]

[Serializable]
internal class WrappingAspect : OnMethodBoundaryAspect {
    // Shared lock object guarding all wrapped calls (assumed).
    private static readonly object SyncRoot = new object();

    public override void OnEntry(MethodExecutionArgs args) {
        Monitor.Enter(SyncRoot);
    }

    public override void OnExit(MethodExecutionArgs args) {
        Monitor.Exit(SyncRoot);
    }
}
Approach 2
Perhaps I could wrap all the methods in my project that refer to types in the external assembly. I'm thinking along the lines below. However, I cannot try this out because ReflectionSearch requires a PostSharp license that I don't currently have.
// Assumes: using System; using System.Linq; using System.Reflection;
// using System.Threading; using PostSharp.Aspects; using PostSharp.Reflection;
[assembly: WrappingAspect]

[Serializable]
internal class WrappingAspect : OnMethodBoundaryAspect {
    // Shared lock object guarding all wrapped calls (assumed).
    private static readonly object SyncRoot = new object();

    public override void OnEntry(MethodExecutionArgs args) {
        Monitor.Enter(SyncRoot);
    }

    public override void OnExit(MethodExecutionArgs args) {
        Monitor.Exit(SyncRoot);
    }

    // Apply the aspect only to methods that reference a type from the "Library" assembly.
    public override bool CompileTimeValidate(MethodBase method) {
        return ReflectionSearch.GetDeclarationsUsedByMethod(method)
            .Any(r => r.UsedType.Assembly.FullName.StartsWith("Library"));
    }
}
Questions
Is there a non-intrusive way to wrap all calls to an external assembly, including calls to methods through an interface type?
Would my second approach work; to detect which methods refer to the external assembly and wrap them?
Are there other approaches to this problem?
Have you tried adding them via the XML approach? Straight from the (slightly outdated) PostSharp docs:
Adding aspects through XML gives the advantage of applying aspects without modifying the source code, which could be an advantage in some legacy projects.
Answering my own question number 1: yes, there is. Here's how I did it, using PostSharp configuration files as @Mikee suggested. I used PostSharp 3.1.39.
In short, you can run PostSharp to weave code into a DLL without changing the source code of that DLL. The command might look like this (split into multiple lines for readability):
postsharp.4.0-x64.exe temp\mylib.dll /P:Output=mylib.dll /NoLogo
/X:myconfig.psproj
"/P:ReferenceDirectory=$(ProjectDir) "
"/P:SearchPath=$(OutDir) "
"/P:Configuration=$(Configuration)"
"/P:Platform=$(Platform)"
"/P:MSBuildProjectFullPath=$(ProjectPath) "
/P:TargetFrameworkIdentifier=.NETFramework
The $(variables) in this command come straight out of Visual Studio, e.g. if you run this in your post-build event. Beware trailing backslashes in Visual Studio variables; adding an extra space before the closing quote is a necessary precaution.
mylib.dll is the target assembly where the weaving will be done. The input and output DLL must be two different files, hence the input is in a temp folder.
The configuration file myconfig.psproj looks like this in my case:
<?xml version="1.0" encoding="utf-8"?>
<Project xmlns="http://schemas.postsharp.org/1.0/configuration">
  <Multicast xmlns:my="clr-namespace:MyApp.Aspects;assembly:MyApp">
    <my:MyAspect AttributeTargetMemberAttributes="Public"/>
  </Multicast>
</Project>
This configuration will apply the aspect MyApp.Aspects.MyAspect from the MyApp assembly to all public members in the target assembly. More configuration attributes can be found in the documentation of MulticastAttribute.
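For reference, a minimal sketch of what the MyApp.Aspects.MyAspect class referenced in the configuration might look like; the lock object and the OnEntry/OnExit behavior here are assumptions, and any OnMethodBoundaryAspect-derived class would be applied the same way:
using System;
using System.Threading;
using PostSharp.Aspects;

namespace MyApp.Aspects
{
    [Serializable]
    public class MyAspect : OnMethodBoundaryAspect
    {
        // Assumed shared lock; replace with whatever the aspect should actually do.
        private static readonly object SyncRoot = new object();

        public override void OnEntry(MethodExecutionArgs args)
        {
            Monitor.Enter(SyncRoot);
        }

        public override void OnExit(MethodExecutionArgs args)
        {
            Monitor.Exit(SyncRoot);
        }
    }
}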
To run PostSharp in a more complex scenario, more configuration parameters may be required. Running postsharp.4.0-x64.exe /? gives you a somewhat unhelpful list of command line parameters. To find out what kind of parameters PostSharp really uses when it's run as part of a Visual Studio project, you can do this:
Add PostSharp to your C# project (as a NuGet).
Add some dummy aspect to a method.
Run the build with verbose output (in Visual Studio 2012: Tools -> Options -> Projects and Solutions -> Build and Run -> MSBuild project build output verbosity -> Diagnostic).
After the build, search the build output window for a line containing Task "PostSharp30" and then browse down to find a line starting with "Connected to the pipe after XXX ms. Requesting with XX arguments".
What follows on the line is a list of parameters to postsharp.4.0-x64.exe. Note that the parameters are separated by semicolons; remove the semicolons and quote the parameters to preserve meaningful spaces.
The list of parameters I got for my test project was much longer than the final command above. Many of the parameters weren't necessary.
Caveat: The free Express version of PostSharp doesn't support weaving iterator methods (ones that use yield return). You'll get warnings about them.

How to Order NUnit Tests

More than once the question has been asked on SO. But the only answers that are given read "you should not need to order your unit tests, it is bad because" or "you can avoid that if..."
I already know it is bad, why it is bad, and techniques to avoid it. But that is not what I want to know. I'd like to know if it is possible to order the execution of NUnit tests, other than in alphabetical order. To be blunt: I actually want state to propagate from one test to the next. Trust me that I have a clever reason for this, one that defies the usual philosophy.
MSTest has the "ordered test" capability, which is very useful in certain cases. I'd like to have that ability in NUnit. Can it be done?
Update for NUnit 3.2.0 - it now supports the OrderAttribute.
The OrderAttribute may be placed on a test method to specify the order in which tests are run.
Example:
public class MyFixture
{
    [Test, Order(1)]
    public void TestA() { ... }

    [Test, Order(2)]
    public void TestB() { ... }

    [Test]
    public void TestC() { ... }
}
https://github.com/nunit/docs/wiki/Order-Attribute
The work-around (hack) is to alphabetize your test case names. See this thread:
https://bugs.launchpad.net/nunit-3.0/+bug/740539
Relying on alphabetical order is a workaround that you can use, but it is not documented and supported beyond the visual order of the display. In theory it could change at any time. In practice it won't change until NUnit 3.0, so you're pretty safe using it as a workaround.
This quote is from Charlie Poole, the main dev on NUnit.
It also seems they have a scheme cooking to support ordered tests in NUnit 3, though how they will do so is still under discussion.
Just an update for NUnit 2.5.1: according to the documentation, there are cases where even alphabetical order is not guaranteed.
NUnit TestCaseAttribute
Order of Execution
In NUnit 2.5, individual test cases are sorted alphabetically and executed in that order. With NUnit 2.5.1, the individual cases are not sorted, but are executed in the order in which NUnit discovers them. This order does not follow the lexical order of the attributes and will often vary between different compilers or different versions of the CLR.
As a result, when TestCaseAttribute appears multiple times on a method or when other data-providing attributes are used in combination with TestCaseAttribute, the order of the test cases is undefined.
Try using the TestName named parameter to give each TestCase a name of your choosing, so that the cases are ordered by TestName.
[TestCase(..., TestName = "1stTest")]
[TestCase(..., TestName = "2ndTest")]
For NUnit you can use the following code.
[TestMethod]
[Priority(2)]

How can I get DevExpress CodeRush to create NUnit test methods with the tm template instead of microsoft test methods in VS 2010

When I try to execute the test or testmethod templates in Visual Studio 2010 using t and tm respectively, it gives me:
using Microsoft.VisualStudio.TestTools.UnitTesting;
[TestMethod]
public void Test()
{
}
I want it to give me the NUnit attribute instead and stop adding the using directive for the Microsoft tooling.
I can't figure it out.
Thanks.
This is likely being caused by your test project having a reference to "Microsoft.VisualStudio.TestTools.UnitTesting.dll"
Remove this reference and replace it with one to "nunit.framework.dll", and the templates should detect and use that instead.
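With the NUnit reference in place, the tm template should expand to something along these lines (a sketch assuming a standard NUnit test method template):
using NUnit.Framework;

[Test]
public void Test()
{
}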

Can CodeDom add Source Code Files to a Project?

I have been using CodeDom to do some code generation. It works great, but I haven't found a way to include the generated source code files in a project. I started using T4 and the T4Toolbox to generate code because it supports integration with project files.
Does anyone know if CodeDom supports this functionality too? I'd consider taking a second look at CodeDom if it only supported this one feature.
Here is an example of how I make a source code file with CodeDom:
protected void CreateSourceFile(CodeCompileUnit codeCompileUnit,
                                string fileName,
                                out string fileNameWithExtension)
{
    fileNameWithExtension = string.Format("{0}.{1}",
                                          fileName,
                                          CodeProvider.FileExtension);
    var indentedTextWriter =
        new IndentedTextWriter(new StreamWriter(fileNameWithExtension, false),
                               TabString);
    CodeProvider.GenerateCodeFromCompileUnit(codeCompileUnit,
                                             indentedTextWriter,
                                             new CodeGeneratorOptions());
    indentedTextWriter.Close();
}
That works fine but it just outputs the file to the hard drive somewhere (probably bin folder).
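For illustration, a minimal sketch of calling this method; the namespace and class names below are hypothetical, and CodeProvider is assumed to be a CSharpCodeProvider:
// Build a tiny CodeDOM tree: one namespace containing one empty class.
var compileUnit = new CodeCompileUnit();
var ns = new CodeNamespace("MyApp.Generated");            // hypothetical namespace
ns.Types.Add(new CodeTypeDeclaration("GeneratedClass"));  // hypothetical class
compileUnit.Namespaces.Add(ns);

string fileNameWithExtension;
CreateSourceFile(compileUnit, "GeneratedClass", out fileNameWithExtension);
// fileNameWithExtension is now "GeneratedClass.cs", written to the current
// working directory; nothing tells the .csproj about it.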
Here is a second example of some code I use with T4; this one specifies the output as part of the project in which the template is transformed:
public class RDFSClassGenerator : Generator
{
    private readonly string rootNamespace;
    private readonly string ontologyLocation;

    public RDFSClassGenerator(
        string rootNamespace,
        string ontologyLocation)
    {
        this.rootNamespace = rootNamespace;
        this.ontologyLocation = ontologyLocation;
    }

    protected override void RunCore()
    {
        XElement ontology = XElement.Load(ontologyLocation);
        var service = new RDFSGeneratorService(ontology);
        foreach (MetaClass metaClass in service.MetaClasses)
        {
            var rdfsClassTemplate = new RDFSClassTemplate(rootNamespace, metaClass);
            rdfsClassTemplate.Output.File = "Domain/" + metaClass.Name + ".cs";
            rdfsClassTemplate.Render();
        }
    }
}
So the T4 code will output the file into the "Domain" folder of my project. But the CodeGen stuff just outputs the file on disk and doesn't update the project file.
Yes, it can. Here is how: http://www.olegsych.com/2009/09/t4-and-codedom-better-together/
Short answer is no, but I could be wrong (ever try to prove a negative?)
Your question was a little confusing, as CodeDom isn't exactly comparable with T4. T4 templates are a convenient way of generating code files the same way that, for example, ASP.NET generates HTML files: mixing text and code that gets executed to generate a file that is then interpreted by something else (such as a compiler or a browser). CodeDom is usually used to generate assemblies at runtime rather than files, although you can do it (as you have discovered).
While T4 makes it easy to add files to the solution, you can do this with CodeDom as well. I don't believe it supports interaction with the solution directly, but you can manage this using EnvDTE, the automation model for Visual Studio.
The problem with this is that the automation model isn't easy to work with. EnvDTE is a wrapper around COM classes, which is always fun to code against. Also, you have to be careful when attempting to get the object: the naive implementation will get the object from the first instance of Visual Studio loaded, so you have to poll the Running Object Table to find the current instance. Once you have it, you must deal with searching through the DTE for the location you're looking for, dealing with source control, locked files, and so on.
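For illustration only, a rough sketch of that EnvDTE approach; the DTE ProgID, project name, and file path are assumptions, and Marshal.GetActiveObject is the naive first-instance approach mentioned above rather than a proper Running Object Table search:
using System.Runtime.InteropServices;
using EnvDTE;

static class ProjectFileAdder
{
    public static void AddGeneratedFile(string projectName, string filePath)
    {
        // Naively attach to a running Visual Studio 2010 instance.
        var dte = (DTE)Marshal.GetActiveObject("VisualStudio.DTE.10.0");

        foreach (Project project in dte.Solution.Projects)
        {
            if (project.Name != projectName)
                continue;

            // Adds the file on disk to the project so it shows up in Solution
            // Explorer and is saved into the project file.
            project.ProjectItems.AddFromFile(filePath);
            project.Save();
            break;
        }
    }
}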
Working with it, you start to learn why T4 was created in the first place.
The question you have to ask yourself is, "Does CodeDom give me enough that T4 doesn't to make up for all its shortcomings?"