Execute TestSetup only some of the time? - nunit

Within one fixture, is it possible to mark up tests in such a way that the test setup is only called for some tests and not for others?
[Test] public void TestWithoutSetup() { .. }
[Test] public void TestWithSetup() { .. }
What would I need to do to make the above work?

There's no attribute to accomplish what you want.
I would suggest refactoring your test cases into two separate classes that inherit base functionality from an abstract test class. Each test class can have its own setup method.
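A minimal sketch of that layout, with illustrative names (the list and fixture names below are not from your code):
using System.Collections.Generic;
using NUnit.Framework;

public abstract class WidgetTestsBase
{
    // Shared state and helpers live in the base class.
    protected List<string> Items;

    protected void CreateItems()
    {
        Items = new List<string> { "a", "b" };
    }
}

[TestFixture]
public class WidgetTestsWithSetup : WidgetTestsBase
{
    [SetUp]
    public void SetUp()
    {
        CreateItems();   // runs before every test in this fixture only
    }

    [Test]
    public void TestWithSetup()
    {
        Assert.That(Items, Has.Count.EqualTo(2));
    }
}

[TestFixture]
public class WidgetTestsWithoutSetup : WidgetTestsBase
{
    // No [SetUp] here, so nothing runs before these tests.

    [Test]
    public void TestWithoutSetup()
    {
        Assert.That(Items, Is.Null);
    }
}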

Run long-running NUnit/xUnit tests so that they are not blocking other tests

I'm running a set of integration tests, and while most of them finish within a reasonable time, there are two tests that wait for specific conditions (financial market conditions, to be precise) and can last 2-3 hours. So ideally I'd like to achieve two things:
Start those two tests after other tests are finished
Run them in parallel
Is there a way to achieve that in NUnit/XUnit (or other test runner)?
Start those two tests after other tests are finished
You could keep those two tests in a separate NUnit test project, allowing you to run all the other tests separately.
For running tests in parallel, this blog has a nice article:
https://blog.sanderaernouts.com/running-unit-tests-in-parallel-with-nunit
Mark your test fixtures with the Parallelizable attribute and set the parallel scope to ParallelScope.All.
Create a private class called TestScope and implement IDisposable.
Put all startup and clean-up logic inside the TestScope constructor and .Dispose() method respectively.
Wrap your test code in a using (var scope = new TestScope()) { ... } block.
[TestFixture]
[Parallelizable(ParallelScope.All)]
public class MyClassTests
{
    [Test]
    public void MyParallelTest()
    {
        using (var scope = new TestScope())
        {
            scope.Sut.DoSomething();
            scope.Repository.Received(1).Save();
        }
    }

    private sealed class TestScope : IDisposable
    {
        public IRepository Repository { get; }
        public MyClass Sut { get; }

        public TestScope()
        {
            Repository = Substitute.For<IRepository>();
            Sut = new MyClass(Repository);
        }

        public void Dispose()
        {
            // clean-up code goes here
            Repository?.Dispose();
        }
    }
}
You should take precautions to ensure that while running in parallel, your tests do not interfere with each other.
As the article states:
How to safely run tests in parallel
To allow tests to run in parallel without them interfering with each other, I have been applying the following pattern for a while:
Create a nested private TestScope class that implements IDisposable.
All initialization or startup code that would go into the SetUp method goes into the constructor of the TestScope class.
Any clean-up or teardown code that would go into the TearDown method goes into the Dispose method.
All tests run inside a using block that handles the creation and disposal of the TestScope.
The article provides more valuable advice. I suggest reading it, and thanking the author.
Whether tests run in parallel depends on the arguments you pass to the test runner. If you are using the xUnit console runner there is a -parallel argument (or the corresponding MSBuild options); see https://xunit.net/docs/running-tests-in-parallel. In any case, you have to split your long-running tests into separate test classes.
It is harder to guarantee the order in which tests run. You could use a test collection (although, according to the guide, tests within a collection run sequentially). You could rename your long-running tests so they are placed at the end of the list, e.g. TestClass2 will be executed after TestClass1. You could also separate the tests with a category attribute and run them via two dotnet test commands using --filter TestCategory=LongTests (one for the long tests and another for the rest); see https://learn.microsoft.com/ru-ru/dotnet/core/testing/selective-unit-tests?pivots=mstest
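A sketch of the category-based split, assuming NUnit and a LongTests category (both the fixture name and the category name below are examples):
using NUnit.Framework;

[TestFixture]
[Category("LongTests")]   // tag the long-running fixtures
public class MarketConditionTests
{
    [Test]
    public void WaitsForMarketConditions()
    {
        // long-running assertions go here
    }
}

// Run the fast tests first, then the long-running ones:
//   dotnet test --filter TestCategory!=LongTests
//   dotnet test --filter TestCategory=LongTests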

Why is NUnit 3 OneTimeSetUp() called after [Test] and not before?

I have my unit tests written in NUnit 2.6 but am planning to upgrade to NUnit 3.6.1. However, I noticed a weird problem with NUnit 3.6.1 (or maybe I did not understand it correctly). The problem is around OneTimeSetUp().
In NUnit 2.6.3, I had a SetUpFixture ([SetUpFixture]) with a [SetUp] method inside it, and it worked as expected for me; the flow was
SetUpFixture.Setup
TestFixture.Setup
TestFixture.Test
TestFixture.TearDown
TestFixture.Setup
TestFixture.Test
TestFixture.TearDown
SetUpFixture.TearDown
When I upgraded to NUnit 3, I replaced the SetUp() inside the SetUpFixture with OneTimeSetUp, and after running my code I got the following flow
TestFixture.Setup
TestFixture.Test
TestFixture.TearDown
SetUpFixture.OneTimeSetUp
SetUpFixture.OneTimeTearDown
Following is the sample code which I tried on my machine, along with the command-line output.
[SetUpFixture]
public class TestBase
{
    [OneTimeSetUp]
    //[SetUp]
    public static void MyTestSetup()
    {
        Console.WriteLine(" ---------- Calling OneTimeSetUp ----------");
    }
}

[TestFixture]
class TestClass : TestBase
{
    [Test]
    public void test()
    {
        Console.WriteLine("\n ....I'm inside TestClass.test() ....");
    }
}
Console Output
=> TestSample.TestClass.test
....I'm inside TestClass.test() ....
=> TestSample.TestClass
---------- Calling OneTimeSetUp ----------
=> TestSpecflow.TestBase
---------- Calling OneTimeSetUp ----------
Can someone please suggest what I am missing here?
I'm running the tests via nunit-console.
The issue is that the output is misleading and not in the order the code is executed. Because NUnit 3 supports parallel execution, it captures output and displays it on the console when that level of test execution is completed.
In your case, the fixture setup wraps the tests, so it finishes executing after the tests and outputs the captured text afterward.
If you debug your tests, or switch your Console.WriteLine calls to TestContext.Progress.WriteLine which outputs immediately, you will see that the code is executed in the order you expect.
If it is not in the order you expect, look at the namespaces. Remember that a [SetUpFixture] is used to set up at the namespace level. If your tests are in a different namespace, they may be called in a different order. If you want a setup for all of your tests, put the class in your top-level namespace, or, if you have multiple namespaces, in no namespace at all.
Here is some test code,
namespace NUnitFixtureSetup
{
    [SetUpFixture]
    public class SetupClass
    {
        [OneTimeSetUp]
        public void MyTestSetup()
        {
            TestContext.Progress.WriteLine("One time setup");
        }
    }

    [TestFixture]
    public class TestClass
    {
        [Test]
        public void TestMethod()
        {
            TestContext.Progress.WriteLine("Test Method");
        }
    }
}
And here is the output from running with nunit3-console.exe
=> NUnitFixtureSetup.SetupClass
One time setup
=> NUnitFixtureSetup.TestClass.TestMethod
Test Method
Rob's answer covers the fundamental reason for the behaviour you see, but there is an additional problem, one that is present in your code although not in Rob's.
In your code, you are using TestBase twice: as a SetUpFixture and as the base class for your TestFixture.
That means the OneTimeSetUp method will be used twice: once before all the fixtures in the namespace and once before any test fixture that inherits from it. Using a SetUpFixture in this way defeats its purpose, which is to have some code that runs only once before all the fixtures in a namespace.
Use separate classes as a base class (if you need one) and as a setup fixture (if you need one of those).
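A sketch of that separation (class names here are illustrative); the SetUpFixture sits outside any namespace, so it applies to the whole assembly:
using NUnit.Framework;

// Runs once before and after all fixtures it covers; placed outside any
// namespace, it covers the entire assembly.
[SetUpFixture]
public class AssemblySetup
{
    [OneTimeSetUp]
    public void RunBeforeAllFixtures()
    {
        TestContext.Progress.WriteLine("One-time setup for all fixtures");
    }

    [OneTimeTearDown]
    public void RunAfterAllFixtures()
    {
        TestContext.Progress.WriteLine("One-time teardown for all fixtures");
    }
}

// Plain base class for shared per-test logic; note it is not a SetUpFixture.
public abstract class TestBaseClass
{
    [SetUp]
    public void CommonSetUp()
    {
        TestContext.Progress.WriteLine("Per-test setup from the base class");
    }
}

[TestFixture]
public class SomeTests : TestBaseClass
{
    [Test]
    public void SomeTest()
    {
        TestContext.Progress.WriteLine("Test method");
    }
}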

Mocking a super method call using EasyMock

Is it possible to mock a superclass method call? I have seen many posts, but they are either irrelevant or use a different testing framework.
Is it possible with easymock?
If not, what other framework would allow me to do it?
No, it's not. And I don't think it is possible with other frameworks either; that would require bytecode manipulation of the base class. So maybe PowerMock, but I'm not sure.
However, I have never needed to do that in 20 years. In general, it points to a bad implementation of the template pattern.
So instead of something like
public void foo() {
    // do stuff
    super.foo(); // don't forget to call super
    // do some other stuff
}
you are better off doing this in the base class:
public void foo() {
    doBeforeFoo();
    // ... stuff that is in super
    doAfterFoo();
}
and then you fill the holes in the child class, as sketched below.
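To make the "fill the holes" step concrete, here is a minimal sketch of the template-method shape in C# (the language used elsewhere on this page); the class names are purely illustrative:
public abstract class BaseProcessor
{
    // Template method: the skeleton is fixed, so the child never has to
    // remember to call the base implementation.
    public void Foo()
    {
        DoBeforeFoo();
        // ... stuff that used to live only in the base class
        DoAfterFoo();
    }

    // The "holes" the child class may fill in.
    protected virtual void DoBeforeFoo() { }
    protected virtual void DoAfterFoo() { }
}

public class ChildProcessor : BaseProcessor
{
    protected override void DoBeforeFoo() { /* do stuff */ }
    protected override void DoAfterFoo() { /* do some other stuff */ }
}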

How to verify a static method in Moq

I am new to NUnit and Moq.
I have two static classes like this:
public static class StaticClass1
{
    public static void Prepare()
    {
        // some logic
    }
}

public static class StaticClass2
{
    public static void Initialize(some_parameter)
    {
        // some logic
        if (some_condition(some_parameter))
        {
            StaticClass1.Prepare();
        }
    }
}
I need to test the function StaticClass2.Initialize(), in which I need to verify that StaticClass1.Prepare() is called at least once.
To answer this question I would say something like: you need to get experience in how to layer a project.
When unit testing a method you want to test that single method and mock its dependencies, which is exactly what you are trying to do, if I understand you correctly. Calling public static methods on one class from a static method in another class is not optimal, because it makes it difficult to isolate your unit tests and what they should verify (you end up testing two completely different methods in the same unit test instead of separating the code and the unit tests).
This approach also breaks the D in SOLID (the Dependency Inversion Principle), which you can read more about here -> https://en.wikipedia.org/wiki/SOLID_(object-oriented_design). You want to depend upon abstractions rather than concrete classes.
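As an illustration of depending on an abstraction instead of a static call, here is a minimal sketch; the IPreparer interface, the Initializer class, and the condition used are hypothetical stand-ins for the question's code, not a drop-in rewrite:
using Moq;
using NUnit.Framework;

public interface IPreparer
{
    void Prepare();
}

public class Initializer
{
    private readonly IPreparer preparer;

    public Initializer(IPreparer preparer)
    {
        this.preparer = preparer;
    }

    public void Initialize(int someParameter)
    {
        if (someParameter > 0)   // stand-in for "some_condition"
        {
            preparer.Prepare();
        }
    }
}

[TestFixture]
public class InitializerTests
{
    [Test]
    public void Initialize_WhenConditionIsMet_CallsPrepare()
    {
        var preparer = new Mock<IPreparer>();
        var sut = new Initializer(preparer.Object);

        sut.Initialize(1);

        preparer.Verify(p => p.Prepare(), Times.AtLeastOnce());
    }
}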
Lastly, at the risk of being a bit selfish, I'll share a link to an article series I have written myself. It is about test-driven development, uses Moq as the mocking tool, and focuses on how to think when layering and unit testing a complete project (on a small scale). I'm certain it will help you work out how to continue with your own projects and code.
It is based on four articles; the first in the series is here -> http://www.andreasjohansson.eu/technical-blog/getting-started-unit-testing-a-web-project-part-1-introduction-and-setting-up-the-project/
Hope it helps!

Best practice in dependency injection

This is a question about how best to do DI, so it is not tied to any particular DI/IoC framework because, well, the framework should be chosen based on the pattern and practice rather than the other way around, no?
I'm doing a project where repositories have to be injected into services, and a service may require multiple repositories. I'm curious about the pros and cons of the following approaches:
Inject repositories in service constructor
public class SomeService : ISomeService
{
    private IRepository1 repository1;
    private IRepository2 repository2;

    public SomeService(IRepository1 repository1, IRepository2 repository2)
    {
        this.repository1 = repository1;
        this.repository2 = repository2;
    }

    public void DoThis()
    {
        // Do something with repository1
    }

    public void DoThat()
    {
        // Do something with both repository1 and repository2
    }
}
Inject a custom context class that includes everything any service may need, lazily instantiated (the IServiceContext will be a protected field in BaseService)
public class SomeService : BaseService, ISomeService
{
    public SomeService(IServiceContext serviceContext)
    {
        this.serviceContext = serviceContext;
    }

    public void DoThis()
    {
        // Do something with serviceContext.repository1
    }

    public void DoThat()
    {
        // Do something with both serviceContext.repository1 and serviceContext.repository2
    }
}
Inject repositories only into the methods that need them
public class SomeService : ISomeService
{
    public void DoThis(IRepository1 repository1)
    {
        // Do something with repository1
    }

    public void DoThat(IRepository1 repository1, IRepository2 repository2)
    {
        // Do something with both repository1 and repository2
    }
}
Some pointers would be appreciated. Moreover, what aspects should I consider when evaluating alternatives like these?
The preferred way of injecting dependencies is Constructor Injection.
Method Injection is less ideal, because it quickly results in having to pass many dependencies around from service to service, and it causes implementation details (the dependencies) to leak through the API (your methods).
Both options 1 and 2 do Constructor Injection, which is good. If you find yourself having to inject too many dependencies in a constructor, there is something wrong. Either you are violating the Single Responsibility Principle, or you are missing some sort of aggregate service, and this is what you are doing in option 2.
In your case however, your IServiceContext aggregate service is grouping multiple repositories together. Many repositories behind one class smells like a unit of work to me. Just add a Commit method to the IServiceContext and you will surely have a unit of work. Think about it: don't you want to inject an IUnitOfWork into your service?
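A rough sketch of that unit-of-work shape, using the question's repository interfaces and an illustrative IUnitOfWork name:
public interface IUnitOfWork
{
    IRepository1 Repository1 { get; }
    IRepository2 Repository2 { get; }
    void Commit();   // persists everything changed through the repositories
}

public class SomeService : ISomeService
{
    private readonly IUnitOfWork unitOfWork;

    public SomeService(IUnitOfWork unitOfWork)
    {
        this.unitOfWork = unitOfWork;
    }

    public void DoThat()
    {
        // Work with unitOfWork.Repository1 and unitOfWork.Repository2,
        // then commit the whole operation as one unit.
        unitOfWork.Commit();
    }
}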
The first option seems to be the most natural from a DI perpective. The service class requires both repositories to perform its function, so making them required in order to construct an instance makes sense semantically (and practically).
The second option sounds a bit like Service Location, which is generally considered an anti-pattern (see http://blog.ploeh.dk/2010/02/03/ServiceLocatorIsAnAntiPattern.aspx). In a nutshell, it creates implicit dependencies, where explicit dependencies are always preferred.
I would do either constructor-based injection or property-based injection. I would not pass in a context that contains the dependencies unless that context is serving some other purpose.
I prefer constructor-based injection for required dependencies, as it makes it super easy for object creation to blow up if something is missing. I got that from here. If you are going to verify that your dependencies are met, then you have to do it with constructor-based injection, since there is no way to tell which setter is the last setter to be fired.
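A small sketch of the "blow up if something is missing" idea with constructor injection and guard clauses:
using System;

public class SomeService : ISomeService
{
    private readonly IRepository1 repository1;
    private readonly IRepository2 repository2;

    public SomeService(IRepository1 repository1, IRepository2 repository2)
    {
        // Fail fast at construction time when a required dependency is missing.
        this.repository1 = repository1 ?? throw new ArgumentNullException(nameof(repository1));
        this.repository2 = repository2 ?? throw new ArgumentNullException(nameof(repository2));
    }
}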