StructureMap is not reset between NUnit tests

I'm testing some code that uses StructureMap for Inversion of Control and problems have come up when I use different concrete classes for the same interface.
For example:
[Test]
public void Test1()
{
    ObjectFactory.Inject<IFoo>(new TestFoo());
    ...
}

[Test]
public void Test2()
{
    ObjectFactory.Initialize(
        x => x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<RealFoo>()
    );
    // ObjectFactory.Inject<IFoo>(new RealFoo()) doesn't work either.
    ...
}
Test2 works fine if it runs by itself, using a RealFoo. But if Test1 runs first, Test2 ends up using a TestFoo instead of RealFoo. Aren't NUnit tests supposed to be isolated? How can I reset StructureMap?
Oddly enough, Test2 fails if I don't include the Initialize expression. But if I do include it, it gets ignored...

If you must use ObjectFactory in your tests, make a call to ObjectFactory.ResetAll() in your SetUp or TearDown.
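For example, a minimal sketch reusing the IFoo/TestFoo types from the question:

[TestFixture]
public class FooTests
{
    [SetUp]
    public void SetUp()
    {
        // Wipe whatever configuration a previous test left behind,
        // then set up exactly what this fixture needs.
        ObjectFactory.ResetAll();
        ObjectFactory.Inject<IFoo>(new TestFoo());
    }

    [TearDown]
    public void TearDown()
    {
        // Leave a clean container for whichever test runs next.
        ObjectFactory.ResetAll();
    }
}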
Even better, try to migrate your code away from depending on ObjectFactory. Any class that needs to pull stuff out of the container (other than the startup method) can take in an IContainer, which will automatically be populated by StructureMap (assuming the class itself is retrieved from the container). You can reference the IContainer wrapped by ObjectFactory through its Container property. You can also avoid using ObjectFactory completely and just create an instance of a Container that you manage yourself (it can be configured in the same way as ObjectFactory).

Yes, NUnit tests are supposed to be isolated, and it is your responsibility to make sure they are. The solution is to reset ObjectFactory in the TearDown method of your test fixture. You can use ObjectFactory.EjectAllInstancesOf<T>(), for example.
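As a rough sketch, ejecting just the interface the test configured:

[TearDown]
public void TearDown()
{
    // Remove every configured instance of IFoo so the next test starts clean.
    ObjectFactory.EjectAllInstancesOf<IFoo>();
}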

Of course it doesn't reset between tests. ObjectFactory is a static wrapper around an InstanceManager; it is static for the lifetime of the AppDomain, and because all tests run in the same AppDomain, its configuration carries over from one test to the next. You need to tear down the ObjectFactory between tests or configure a new Container for each test (i.e., get away from using the static ObjectFactory).
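A per-test Container, reusing the IFoo/RealFoo types from the question, could look something like this:

[Test]
public void Test2()
{
    // A container private to this test; nothing leaks into other tests.
    var container = new Container(
        x => x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<RealFoo>()
    );
    var foo = container.GetInstance<IFoo>();
    // ... exercise foo and assert ...
}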
Incidentally, this is the main reason for avoiding global state and singletons: they are not friendly to testing.
From the Google guide to Writing Testable Code:
Global State: Global state is bad from theoretical, maintainability, and understandability point of view, but is tolerable at run-time as long as you have one instance of your application. However, each test is a small instantiation of your application in contrast to one instance of application in production. The global state persists from one test to the next and creates mass confusion. Tests run in isolation but not together. Worse yet, tests fail together but problems can not be reproduced in isolation. Order of the tests matters. The APIs are not clear about the order of initialization and object instantiation, and so on. I hope that by now most developers agree that global state should be treated like GOTO.

Related

Define order of "Parallelizable" in NUnit

I have 4 test classes inside one project (let's call them Class A, Class B, Class C, Class D).
Each of these 4 classes has two [TestFixture("string")] attributes, which gives 8 test fixtures in total.
All classes have the [Parallelizable] attribute.
When I start all the tests at once by clicking the project name in the Test Explorer and choosing "Run", all 8 fixtures start at the same time.
The problem is that this consumes a lot of RAM and the tests fail because loading takes too long and I get a timeout error (I'm doing automation tests with Selenium in Chrome).
Now I want to define an order.
For example:
Class A and Class B should start in parallel.
Class C and Class D should start in parallel once Class A and Class B are done.
Is that possible?
I tried the attribute [Order(1)] on Class A and Class B and [Order(2)] on Class C and Class D.
But when I run the tests, all 8 fixtures start loading.
Example from my code:
[TestFixture("normalUser")]
[TestFixture("adminUser")]
[Parallelizable]
public class ImportTest
{
private IWebDriver webDriver;
private const int waitTimer = 60;
public WebDriverWait w;
public string userRole;
// Constructor
public ImportTest(string userRole)
{
this.userRole = userRole;
Console.WriteLine(userRole);
}
////-----------------------------
[SetUp]
{
}
//-------------------------------
[Test]
public void Test1()
{
Do Test
}
[Test]
public void Test2()
{
Do Test
}
//--------------------------
[TearDown]
public void CloseBrowser()
{
webDriver.Quit();
}
}
First, I'll describe what's happening...
The OrderAttribute was created in NUnit V2, before parallel tests existed. It defines the order in which tests are started. Since there was no parallelism at the time, one test had to finish before the next one started.
When parallel execution was introduced in NUnit 3, Order was not exactly broken, because it continued to start tests in the specified order. But many users perceived it as broken, because they expected one test not to start until the prior one had finished.
Order could, of course, be changed to work like that. However, at this point, that would be a breaking change for some people, so you most likely won't see it happen until there's an NUnit 4.
So... what can you do as a workaround? I can see three options...
The simplest approach would be to make each fixture [NonParallelizable]. Then they would all run separately. You should try that first and see if the performance is acceptable to you. If you want the tests within each fixture to run in parallel, you could use [Parallelizable(ParallelScope.Children)] instead but that might break things if the tests change the state of the fixture or of any common references found in the fixture.
Alternatively, you could pick only some fixtures to mark as [NonParallelizable]. In that case, I'd do it for the ones that consume a lot of memory.
The option requiring the most effort is to implement the ordering yourself for these classes. I'd do that by creating some sort of shared token... e.g. a lock... which each class has to acquire on startup. I'd grab the lock in the OneTimeSetUp for a fixture and release it in the OneTimeTearDown. The lock should be acquired before any setup code that acquires resources, and released after your teardown has released those resources.
I made option 3 rather sketchy because (a) I don't know precisely how your application works and (b) I presume that you won't do it unless it's absolutely necessary.
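As an illustration only (this limits how many fixtures run concurrently rather than enforcing the exact A/B-then-C/D order, and FixtureThrottle is my own placeholder, not an NUnit API), option 3 might look roughly like this:

// SemaphoreSlim comes from System.Threading.
public static class FixtureThrottle
{
    // Allow at most two fixtures to run at the same time.
    public static readonly SemaphoreSlim Slots = new SemaphoreSlim(2, 2);
}

[TestFixture("normalUser")]
[TestFixture("adminUser")]
[Parallelizable]
public class ImportTest
{
    public ImportTest(string userRole) { }

    [OneTimeSetUp]
    public void FixtureSetUp()
    {
        // Take a slot before any setup that acquires expensive resources (e.g. browsers).
        FixtureThrottle.Slots.Wait();
    }

    [OneTimeTearDown]
    public void FixtureTearDown()
    {
        // Release the slot after the fixture's own cleanup has released its resources.
        FixtureThrottle.Slots.Release();
    }
}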
Final advice: don't make any assumptions about the performance impact of any of these options, even the first. Measure first!

Modifying Autofac Scope to Support XUnit Testing

I use Autofac extensively. Recently I've gotten interested in tweaking the lifetime scopes when registering items for XUnit testing. Basically I want to register a number of standard components I use as "instance per test" rather than what I normally do for runtime (I've found a useful library on github that defines an instance-per-test lifetime).
One way to do this is to define two separate container builds, one for runtime and one for xunit testing. That would work but it gets increasingly expensive to maintain.
What I'd like to do (I think) is modify the registration pipeline dynamically depending upon the context -- runtime or xunit test -- in which it is being built. In pseudocode:
builder.RegisterType<SomeType>().AsImplementedInterfaces().SingleInstance();
...

void TweakPipeline(...)
{
    if( Testing )
    {
        TypeBeingRegistered.InstancePerTest();
    }
    else
    {
        TypeBeingRegistered.SingleInstance();
    }
}
Is this something Autofac middleware can do? If not is there another capability in the Autofac API which could address it? As always, links to examples would be appreciated.
This is an interesting question. I like that you've started thinking about some of the new features in Autofac; very few people do. So, kudos for the good question.
If you think about the middleware, yes, you can probably use it to muck with lifetime scope, but we didn't really make "change the lifetime scope on the fly" something easy to do and... I'll be honest, I'm not sure how you'd do it.
However, I think there are a couple of different options you have to make life easier. In the order in which I'd do them if it was me...
Option 1: Container Per Test
This is actually what I do for my tests. I don't share a container across multiple tests, I actually make building the container part of the test setup. For Xunit, that means I put it in the constructor of the test class.
Why? A couple reasons:
State is a problem. I don't want test ordering or state on singletons in the container to make my tests fragile.
I want to test what I deploy. I don't want something to test out OK only to find that it worked because of something I set up in the container special for testing. Obvious exceptions for mocks and things to make the tests actually unit tests.
If the problem is that the container takes too long to set up and is slowing the tests down, I'd probably troubleshoot that. I usually find the cause of this to be either that I'm assembly scanning and registering way, way too much (oops, forgot the Where statement to filter things down) or I've started trying to "multi-purpose" the container to start orchestrating my app startup logic by registering code to auto-execute on container build (which is easy to do... but don't forget the container isn't your app startup logic, so maybe separate that out).
Container per test really is the easiest, most isolated way to go and requires no special effort.
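As a rough sketch of what that looks like with xUnit (ISomeType/SomeType here are placeholder names, not anything from the question):

public class SomeTypeTests : IDisposable
{
    private readonly IContainer _container;

    public SomeTypeTests()
    {
        // xUnit creates a new instance of the test class for every test,
        // so each test gets its own freshly built container.
        var builder = new ContainerBuilder();
        builder.RegisterType<SomeType>().AsImplementedInterfaces();
        _container = builder.Build();
    }

    public void Dispose()
    {
        // Dispose the container (and everything it tracks) when the test finishes.
        _container.Dispose();
    }

    [Fact]
    public void Resolves_some_type()
    {
        var instance = _container.Resolve<ISomeType>();
        Assert.NotNull(instance);
    }
}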
Option 2: Modules
Modules are a nice way to encapsulate sets of registrations and can be a good way to take parameters like this. In this case, I might do something like this for the module:
public class MyModule : Module
{
    public bool Testing { get; set; }

    protected override void Load(ContainerBuilder builder)
    {
        var toUpdate = new List<IRegistrationBuilder<object, ConcreteReflectionActivatorData, SingleRegistrationStyle>>();
        toUpdate.Add(builder.RegisterType<SomeType>());
        toUpdate.Add(builder.RegisterType<OtherType>());
        foreach(var reg in toUpdate)
        {
            if(this.Testing)
            {
                reg.InstancePerTest();
            }
            else
            {
                reg.SingleInstance();
            }
        }
    }
}
Then you could register it:
var module = new MyModule { Testing = true };
builder.RegisterModule(module);
That makes the list of registrations easier to tweak (foreach loop) and also keeps the "things that need changing based on testing" isolated to a module.
Granted, it could get a little complex in there if you have lambdas and all sorts of other registrations in there, but that's the gist.
Option 3: Builder Properties
The ContainerBuilder has a set of properties you can use while building stuff to help avoid having to deal with environment variables but also cart around arbitrary info you can use while setting up the container. You could write an extension method like this:
public static IRegistrationBuilder<TLimit, TActivatorData, TRegistrationStyle>
    EnableTesting<TLimit, TActivatorData, TRegistrationStyle>(
        this IRegistrationBuilder<TLimit, TActivatorData, TRegistrationStyle> registration,
        ContainerBuilder builder)
{
    if(builder.Properties.TryGetValue("testing", out var testing) && Convert.ToBoolean(testing))
    {
        registration.InstancePerTest();
    }
    return registration;
}
Then when you register things that need to be tweaked, you could do it like this:
var builder = new ContainerBuilder();

// Set this in your tests, not in production
// builder.Properties["testing"] = true;

builder.RegisterType<Handler>()
    .SingleInstance()
    .EnableTesting(builder);

var container = builder.Build();
You might be able to clean that up a bit, but again, that's the general idea.
You might ask why use the builder as the mechanism to transport properties if you have to pass it in anyway.
Fluent syntax: Due to the way registrations work, they're all extension methods on the registration, not on the builder. The registration is a self-contained thing that doesn't have a reference to the builder (you can create a registration object entirely without a builder).
Internal callbacks: The internals on how registration works basically boil down to having a list of Action executed where the registrations have all the variables set up in a closure. It's not a function where we can pass stuff in during build; it's self-contained. (That might be interesting to change, now I think of it, but that's another discussion!)
You can isolate it: You could put this into a module or anywhere else and you won't be adding any new dependencies or logic. The thing carting around the variable will be the builder itself, which is always present.
Like I said, you could potentially make this better based on your own needs.
Recommendation: Container Per Test
I'll wrap up by just again recommending container per test. It's so simple, it requires no extra work, there are no surprises, and it "just works."

Consecutive unit tests fail, although they succeed individually

I have a unit test suite where every test succeeds when it is executed individually.
However, if I execute the whole suite, one test hangs when it should init a singleton. It hangs only if it is executed after a certain other unit test - if I change the order, the whole test suite executes successfully.
If I pause the execution of the hanging unit test, I can see that it hangs at the statement static let shared = StoreManager():
class StoreManager: NSObject, CalledByDataStoreInStoreManager {
    static let shared = StoreManager() // Instantiate the singleton
    // …
}
The other unit test that executes before it and causes the hang does not use the StoreManager singleton.
My question is:
What could be the reason that the 1st test makes the initialisation of the singleton in the 2nd test hang, although this singleton is not used in the 1st test?
So the obvious answer is that there's a side effect occurring in your 1st test that is affecting your second test. Exactly what that side effect is, though, is dependent on two things:
What is being tested in the first test
What occurs in the initializer for StoreManager
From your description, it sounds like the first test is somehow affecting what's then used in the StoreManager init, which is unaffected when that test is not run before your 2nd test.
Solved: The side effect that BHendricks' answer pointed me to was the following:
Since my app also has a watch app, the watch session is activated in the app delegate, as recommended by Apple. This was done by calling a function on a WatchSessionManager instance, and the initialisation of this instance tried to init a singleton that the StoreManager also tries to init.
This created an init cycle deadlock.
Since I use a mock app delegate for unit tests, I now activate the watch connectivity session there directly, without the unnecessary initialisation of the WatchSessionManager instance, and the deadlock is avoided.

Why is this Autofac mock's lifetime disposed in a simple MSpec test?

I've got a base class I'm using with MSpec which provides convenience methods around AutoMock:
public abstract class SubjectBuilderContext
{
static AutoMock _container;
protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
{
_container = AutoMock.GetLoose();
return new SubjectBuilderConfigurationContext<T>(_container);
}
protected static Mock<TDouble> GetMock<TDouble>()
where TDouble : class
{
return _container.Mock<TDouble>();
}
}
Occasionally, I'm seeing an exception happen when attempting to retrieve a Mock like so:
It should_store_the_receipt = () => GetMock<IFileService>().Verify(f => f.SaveFileAsync(Moq.It.IsAny<byte[]>(), Moq.It.IsAny<string>()), Times.Once());
Here's the exception:
System.ObjectDisposedException: Instances cannot be resolved and nested lifetimes cannot be created from this LifetimeScope as it has already been disposed.
I'm guessing it has something to do with the way MSpec runs the tests (via reflection) and that there's a period of time when nothing actively holds a reference to any of the objects in the underlying lifetime scope used by AutoMock, which causes the lifetime scope to get disposed. What's going on here, and is there some simple way for me to keep it from happening?
The AutoMock lifetime scope from Autofac.Extras.Moq is disposed when the AutoMock instance itself is disposed. If you're getting this, it means the AutoMock instance has been disposed or has otherwise lost scope and the GC has cleaned it up.
Given that, there are a few possibilities.
The first possibility is that you've got some potential threading challenges around async method calls. Looking at the method that's being mocked, I see you're verifying the call to a SaveFileAsync method. However, I don't see any async-related code in there, and I'm not entirely sure when/how the running tests call it given the currently posted code, but if there is a situation where an async call causes the test to run on one thread while the AutoMock loses scope or otherwise gets killed on the other thread, I could see this happening.
The second possibility is the mix of static and instance items in the context. You are storing the AutoMock as a static, but it appears the context class in which it resides is a base class that is intended to supply instance-related values. If two tests run in parallel, for example, the first test will set the AutoMock to what it thinks it needs, then the second test will overwrite the AutoMock and the first will go out of scope, disposing the scope.
The third possibility is multiple calls to BuildSubject<T> in one test. The call to BuildSubject<T> initializes the AutoMock. If you call that multiple times in one test, despite changing the T type, you'll be stomping the AutoMock each time and the associated lifetime scope will be disposed.
The fourth possibility is a test ordering problem. If you only see it sometimes but not other times, it could be that certain tests inadvertently assume that some setup, like the call to BuildSubject<T>, has already been done; while other tests may not make that assumption and will call BuildSubject<T> themselves. Depending on the order the tests run, you may sometimes get lucky and not see the exception, but other times you may run into the problem where BuildSubject<T> gets called at just the wrong time and causes you pain.
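If you want the AutoMock's lifetime managed deterministically rather than left to the GC and test runner, one hedged option (a sketch only, reusing the question's base class) is to dispose the container explicitly at context boundaries:

public abstract class SubjectBuilderContext
{
    static AutoMock _container;

    protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
    {
        // Dispose any container left over from a previous context before creating a new one.
        _container?.Dispose();
        _container = AutoMock.GetLoose();
        return new SubjectBuilderConfigurationContext<T>(_container);
    }

    protected static Mock<TDouble> GetMock<TDouble>()
        where TDouble : class
    {
        return _container.Mock<TDouble>();
    }

    // MSpec runs Cleanup delegates after each context; disposing here makes a
    // missing BuildSubject<T> call show up as a null reference instead of a
    // disposed lifetime scope.
    Cleanup after = () =>
    {
        _container?.Dispose();
        _container = null;
    };
}

This doesn't address the parallel-execution or async possibilities above, but it at least pins down exactly when the scope gets disposed.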

Autofac and Quartz.Net Integration

Does anyone have any experience integrating autofac and Quartz.Net? If so, where is it best to control lifetime management -- the IJobFactory, within the Execute of the IJob, or through event listeners?
Right now, I'm using a custom Autofac IJobFactory to create the IJob instances, but I don't have an easy way to plug an ILifetimeScope into the IJobFactory to ensure any expensive resources injected into the IJob are cleaned up. The job factory just creates an instance of a job and returns it. Here are my current ideas (hopefully there are better ones...)
It looks like most Autofac integrations somehow wrap an ILifetimeScope around the unit of work they create. The obvious brute-force way seems to be to pass an ILifetimeScope into the IJob and have the Execute method create a child ILifetimeScope and instantiate any dependencies there. This seems a little too close to a service locator pattern, which in turn seems to go against the spirit of Autofac, but it might be the most obvious way to ensure proper handling of a scope.
I could plug into some of the Quartz events to handle the different phases of the Job execution stack, and handle lifetime management there. That would probably be a lot more work, but possibly worth it if it gets cleaner separation of concerns.
Ensure that an IJob is a simple wrapper around an IServiceComponent type, which would do all the work, and request it as Owned<T>, or Func<Owned<T>>. I like how this seems to vibe more with Autofac, but I don't like that it's not strictly enforceable for all implementers of IJob.
Without knowing too much about Quartz.Net and IJobs, I'll venture a suggestion still.
Consider the following Job wrapper:
public class JobWrapper<T> : IJob where T : IJob
{
    private Func<Owned<T>> _jobFactory;

    public JobWrapper(Func<Owned<T>> jobFactory)
    {
        _jobFactory = jobFactory;
    }

    // The exact Execute signature depends on the Quartz.NET version;
    // 2.x uses void Execute(IJobExecutionContext context).
    void IJob.Execute(IJobExecutionContext context)
    {
        // Resolve the inner job in its own lifetime scope and dispose it when done.
        using (var ownedJob = _jobFactory())
        {
            var theJob = ownedJob.Value;
            theJob.Execute(context);
        }
    }
}
Given the following registrations:
builder.RegisterGeneric(typeof(JobWrapper<>));
builder.RegisterType<SomeJob>();
A job factory could now resolve this wrapper:
var job = _container.Resolve<JobWrapper<SomeJob>>();
Note: a lifetime scope will be created as part of the ownedJob instance, which in this case is of type Owned<SomeJob>. Any dependencies required by SomeJob that are registered as InstancePerLifetimeScope or InstancePerDependency will be created and destroyed along with the Owned instance.
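To connect this to the custom IJobFactory from the question, a rough sketch (assuming the Quartz.NET 2.x IJobFactory signatures; AutofacJobFactory is a placeholder name) might look like:

public class AutofacJobFactory : IJobFactory
{
    private readonly ILifetimeScope _scope;

    public AutofacJobFactory(ILifetimeScope scope)
    {
        _scope = scope;
    }

    public IJob NewJob(TriggerFiredBundle bundle, IScheduler scheduler)
    {
        // Resolve JobWrapper<TJob> for whatever concrete job type Quartz asks for.
        var wrapperType = typeof(JobWrapper<>).MakeGenericType(bundle.JobDetail.JobType);
        return (IJob)_scope.Resolve(wrapperType);
    }

    public void ReturnJob(IJob job)
    {
        // Nothing to do here: the Owned<T> inside JobWrapper<T> disposes the job's scope.
    }
}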
Take a look at https://github.com/alphacloud/Autofac.Extras.Quartz. It is also available as a NuGet package: https://www.nuget.org/packages/Autofac.Extras.Quartz/
I know it's a bit late :)