I have a unit test suite where every test succeeds when it is executed individually.
However, if I execute the whole suite, one test hangs when it should initialise a singleton. It hangs only if it is executed after one particular other unit test - if I change the order, the whole test suite executes successfully.
If I pause the execution of the hanging unit test, the stack trace shows that the execution hangs at the statement static let shared = StoreManager():
class StoreManager: NSObject, CalledByDataStoreInStoreManager {
static let shared = StoreManager() // Instantiate the singleton
// …
}
The other unit test that is executed before it, and that causes the test to hang, does not use the StoreManager singleton.
My question is:
What could be the reason that the 1st test makes the initialisation of the singleton in the 2nd test hang, although this singleton is not used in the 1st test?
So the obvious answer is that there's a side effect occurring in your 1st test that is affecting your 2nd test. Exactly what that side effect is, though, depends on two things:
What is being tested in the first test
What occurs in the initializer for StoreManager
From your description, it sounds like the first test somehow changes something that the StoreManager init then depends on, and that something is untouched when that test is not run before your 2nd test.
Solved: The side effect that BHendricks' answer pointed me to was the following:
Since my app also has a watch app, the watch session is activated in the app delegate, as recommended by Apple. This was done by calling a function on a WatchSessionManager instance, and the initialisation of this instance tried to initialise a singleton that the StoreManager also tries to initialise.
This created an initialisation-cycle deadlock.
Since I use a mock app delegate for unit tests, I now activate the watch connectivity session there without the unnecessary initialisation of the WatchSessionManager instance, and the deadlock is avoided.
Related
Today I refactored a ViewModel for a SwiftUI view to structured concurrency. It fires a network request and, when the request comes back, updates a @Published property to update the UI. Since I use a Task to perform the network request, I have to get back to the MainActor to update my property, and I was exploring different ways to do that. One straightforward way was to use MainActor.run inside my Task, which works just fine. I then tried to use @MainActor, and don't quite understand the behaviour here.
A bit simplified, my ViewModel would look somewhat like this:
class ContentViewModel: ObservableObject {
    @Published var showLoadingIndicator = false

    @MainActor func reload() {
        showLoadingIndicator = true
        Task {
            try await doNetworkRequest()
            showLoadingIndicator = false
        }
    }

    @MainActor func someOtherMethod() {
        // does UI work
    }
}
I would have expected this to not work properly.
First, I expected SwiftUI to complain that showLoadingIndicator = false happens off the main thread. It didn't. So I put in a breakpoint, and it seems even the Task within a @MainActor method is run on the main thread. Why that is is maybe a question for another day; I think I haven't quite figured out Task yet. For now, let's accept this.
So then I would have expected the UI to be blocked during my networkRequest - after all, it is run on the main thread. But this is not the case either. The network request runs, and the UI stays responsive during that. Even a call to another method on the main actor (e.g. someOtherMethod) works completely fine.
Even running something like Task.sleep() within doNetworkRequest will STILL work completely fine. This is great, but I would like to understand why.
My questions:
a) Am I right in assuming a Task within a MainActor does not block the UI? Why?
b) Is this a sensible approach, or can I run into trouble by using @MainActor for dispatching asynchronous work like this?
await marks a possible suspension point in Swift. It's where the current Task can release the queue and allow something else to run. So at this line:
try await doNetworkRequest()
your Task will let go of the main queue, and let something else be scheduled. It won't block the queue waiting for it to finish.
This means that after the await returns, it's possible that other code has been run by the main actor, so you can't trust the values of properties or other preconditions you've cached before the await.
Currently there's no simple, built-in way to say "block this actor until this finishes." Actors are reentrant.
I have 4 test classes inside one project (let's call them Class A, Class B, Class C and Class D).
Each of these 4 classes has two [TestFixture("string")] attributes, which makes 8 test fixtures in total.
All classes have the [Parallelizable] attribute.
When I start the tests all at once by clicking on the project name inside the Test Explorer and choosing "Run", all 8 fixtures start at the same time.
The problem here is that this consumes a lot of RAM, and the tests fail because loading takes too long and I get a timeout error (I am doing automation tests with Selenium in Chrome).
Now I want to define an order.
For example:
Class A and Class B should start in parallel
Class C and Class D should start in parallel once Class A and Class B are done
Is it possible?
I tried the attribute [Order(1)] for Class A and Class B and [Order(2)] for Class C and Class D.
But when I run the tests, all 8 fixtures still start at the same time.
Example from my code:
[TestFixture("normalUser")]
[TestFixture("adminUser")]
[Parallelizable]
public class ImportTest
{
private IWebDriver webDriver;
private const int waitTimer = 60;
public WebDriverWait w;
public string userRole;
// Constructor
public ImportTest(string userRole)
{
this.userRole = userRole;
Console.WriteLine(userRole);
}
////-----------------------------
[SetUp]
public void Setup()
{
    // browser/driver setup omitted
}
//-------------------------------
[Test]
public void Test1()
{
// Do test
}
[Test]
public void Test2()
{
// Do test
}
//--------------------------
[TearDown]
public void CloseBrowser()
{
webDriver.Quit();
}
}
First, I'll describe what's happening...
The OrderAttribute was created in NUnit V2, before parallel tests existed. It defines the order in which tests are started. Since there was no parallelism at the time, one test had to finish before the next one started.
When parallel execution was introduced in NUnit 3, Order was not exactly broken, because it continued to start tests in the specified order. But many users' perception was that it was broken, because they expected that one test would not start until the prior one had finished.
Order could, of course, be changed to work like that. However, at this point, that would be a breaking change for some people, so you most likely won't see it happen until there's an NUnit 4.
So... what can you do as a workaround? I can see three options...
The simplest approach would be to make each fixture [NonParallelizable]. Then they would all run separately. You should try that first and see if the performance is acceptable to you. If you want the tests within each fixture to run in parallel, you could use [Parallelizable(ParallelScope.Children)] instead but that might break things if the tests change the state of the fixture or of any common references found in the fixture.
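Applied to the fixture from the question, that first option might look something like this (the ParallelScope.Children variant is shown as a comment; only one of the two attributes should be used):

using NUnit.Framework;

[TestFixture("normalUser")]
[TestFixture("adminUser")]
[NonParallelizable] // replaces [Parallelizable]: this fixture no longer runs alongside other fixtures
// [Parallelizable(ParallelScope.Children)] // alternative: only the tests inside the fixture run in parallel
public class ImportTest
{
    public string userRole;

    public ImportTest(string userRole)
    {
        this.userRole = userRole;
    }

    // [SetUp], [Test] and [TearDown] methods unchanged from the question
}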
Alternatively, you could pick only some fixtures to mark as [NonParallelizable]. In that case, I'd do it for the ones that consume a lot of memory.
For the most effort required, you could implement the ordering yourself for these classes. I'd do that by creating some sort of shared token... e.g. a lock... which each class has to acquire on startup. I'd grab the lock in the OneTimeSetUp for a fixture and release it in the OneTimeTearDown. The locking code should come before any setup code that acquires resources, and the lock should be released only after your teardown has released those resources. (A rough sketch follows at the end of this answer.)
I made option 3 rather sketchy because (a) I don't know precisely how your application works and (b) I presume that you won't do it unless it's absolutely necessary.
Final advice: don't make any assumptions about the performance impact of any of these options, even the first. Measure first!
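To make option 3 a bit more concrete, here is a rough sketch under the assumption that a SemaphoreSlim plays the role of the shared token. FixtureGate is a made-up name, and the semaphore only caps how many of these fixtures run at once; it does not enforce an exact "A/B first, then C/D" order:

using System.Threading;
using NUnit.Framework;

// Illustrative shared token: at most two of these heavyweight fixtures
// run at the same time; use an initial count of 1 to serialize them fully.
public static class FixtureGate
{
    public static readonly SemaphoreSlim Token = new SemaphoreSlim(2, 2);
}

[TestFixture("normalUser")]
[TestFixture("adminUser")]
[Parallelizable]
public class ImportTest
{
    public string userRole;

    public ImportTest(string userRole)
    {
        this.userRole = userRole;
    }

    [OneTimeSetUp]
    public void AcquireToken()
    {
        // Block here, before any setup that acquires browsers or other expensive resources.
        FixtureGate.Token.Wait();
    }

    [OneTimeTearDown]
    public void ReleaseToken()
    {
        // Release only after the per-test [TearDown] has already quit the browser.
        FixtureGate.Token.Release();
    }

    // [SetUp], [Test] and [TearDown] methods unchanged from the question
}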
I am testing a framework.
Every test case sets a static variable, and it looks like XCTest shares the static area of the framework across tests.
As a result, running the tests together makes them fail from the second test onwards.
Note that running the tests individually makes all of them succeed.
I am writing unit tests, not UI tests, but should I reset the app in the class tearDown()?
If so, please tell me how to do it, because
XCUIApplication.terminate() fails in a subclass of XCTestCase.
I am using Xcode 11 and Swift 5.1.
I believe what you're looking for are the instance methods, not the type methods.
Furthermore, consider setUp over tearDown:
Here is the description for the instance method setUp from Apple's documentation:
Before each test begins, XCTest calls setUpWithError(), followed by setUp(). Override this method to reset state for each test method. If state preparation might throw errors, override setUpWithError().
https://developer.apple.com/documentation/xctest/xctest/1500341-setup
Example usage:
override func setUp() {
// reset state for each test
}
For comparison, here is the description for the type method setUp from Apple's documentation:
The setUp() class method is called exactly once for a test case, before its first test method is called. Override this method to customize the initial state for all tests in the test case.
https://developer.apple.com/documentation/xctest/xctestcase/1496262-setup
Example usage:
override class func setUp() {
// customize the initial state for all tests
}
I don't know what the purpose of your static variables is, but it sounds like they are responsible for changing behavior in your app. I recommend using mock objects and/or data. I would also consider looking into writing "testable code"; it will help you keep your code clean and make it easier for you to write unit tests, and it fits well with object-oriented design.
I've got a base class I'm using with MSpec which provides convenience methods around AutoMock:
public abstract class SubjectBuilderContext
{
static AutoMock _container;
protected static ISubjectBuilderConfigurationContext<T> BuildSubject<T>()
{
_container = AutoMock.GetLoose();
return new SubjectBuilderConfigurationContext<T>(_container);
}
protected static Mock<TDouble> GetMock<TDouble>()
where TDouble : class
{
return _container.Mock<TDouble>();
}
}
Occasionally, I'm seeing an exception happen when attempting to retrieve a Mock like so:
It should_store_the_receipt = () => GetMock<IFileService>().Verify(f => f.SaveFileAsync(Moq.It.IsAny<byte[]>(), Moq.It.IsAny<string>()), Times.Once());
Here's the exception:
System.ObjectDisposedException: Instances cannot be resolved and nested
lifetimes cannot be created from this LifetimeScope as it has already
been disposed.
I'm guessing it has something to do with the way MSpec runs the tests (via reflection) and that there's a period of time when nothing actively holds references to any of the objects in the underlying lifetime scope used by AutoMock, which causes the lifetime scope to get disposed. What's going on here, and is there some simple way for me to keep it from happening?
The AutoMock lifetime scope from Autofac.Extras.Moq is disposed when the mock itself is disposed. If you're getting this, it means the AutoMock instance has been disposed or has otherwise lost scope and the GC has cleaned it up.
Given that, there are a few possibilities.
The first possibility is that you've got some potential threading challenges around async method calls. Looking at the method that's being mocked, I see you're verifying the call to a SaveFileAsync method. However, I don't see any async-related code in there, and I'm not entirely sure when or how the running tests call it given the currently posted code. But if an async call causes the test to continue on one thread while the AutoMock loses scope or otherwise gets cleaned up on another thread, I could see this happening.
The second possibility is the mix of static and instance items in the context. You are storing the AutoMock as a static, but it appears the context class in which it resides is a base class that is intended to supply instance-related values. If two tests run in parallel, for example, the first test will set the AutoMock to what it thinks it needs, then the second test will overwrite the AutoMock and the first will go out of scope, disposing the scope.
The third possibility is multiple calls to BuildSubject<T> in one test. The call to BuildSubject<T> initializes the AutoMock. If you call that multiple times in one test, despite changing the T type, you'll be stomping the AutoMock each time and the associated lifetime scope will be disposed.
The fourth possibility is a test ordering problem. If you only see it sometimes but not other times, it could be that certain tests inadvertently assume that some setup, like the call to BuildSubject<T>, has already been done; while other tests may not make that assumption and will call BuildSubject<T> themselves. Depending on the order the tests run, you may sometimes get lucky and not see the exception, but other times you may run into the problem where BuildSubject<T> gets called at just the wrong time and causes you pain.
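If one of the scope or ordering problems above turns out to be the culprit, one possible mitigation (just a sketch, not your actual setup) is to give each context its own AutoMock instead of sharing the static field in the base class, and to dispose it explicitly. ReceiptService and its Save method are stand-ins for whatever is really under test:

using Autofac.Extras.Moq;
using Machine.Specifications;
using Moq;

class When_saving_the_receipt
{
    static AutoMock _container;
    static ReceiptService _subject; // stand-in for the real subject type

    Establish context = () =>
    {
        _container = AutoMock.GetLoose();               // fresh scope for this context only
        _subject = _container.Create<ReceiptService>();
    };

    Because of = () => _subject.Save();                 // stand-in for the real action

    It should_store_the_receipt = () =>
        _container.Mock<IFileService>()
            .Verify(f => f.SaveFileAsync(Moq.It.IsAny<byte[]>(), Moq.It.IsAny<string>()), Times.Once());

    Cleanup after = () => _container.Dispose();         // dispose deterministically, not via the GC
}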
I'm testing some code that uses StructureMap for Inversion of Control and problems have come up when I use different concrete classes for the same interface.
For example:
[Test]
public void Test1()
{
ObjectFactory.Inject<IFoo>(new TestFoo());
...
}
[Test]
public void Test2()
{
ObjectFactory.Initialize(
x => x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<RealFoo>()
);
// ObjectFactory.Inject<IFoo>(new RealFoo()) doesn't work either.
...
}
Test2 works fine if it runs by itself, using a RealFoo. But if Test1 runs first, Test2 ends up using a TestFoo instead of RealFoo. Aren't NUnit tests supposed to be isolated? How can I reset StructureMap?
Oddly enough, Test2 fails if I don't include the Initialize expression. But if I do include it, it gets ignored...
If you must use ObjectFactory in your tests, in your SetUp or TearDown, make a call to ObjectFactory.ResetAll().
Even better, try to migrate your code away from depending on ObjectFactory. Any class that needs to pull stuff out of the container (other than the startup method) can take in an IContainer, which will automatically be populated by StructureMap (assuming the class itself is retrieved from the container). You can reference the IContainer wrapped by ObjectFactory through its Container property. You can also avoid using ObjectFactory completely and just create an instance of a Container that you manage yourself (it can be configured in the same way as ObjectFactory).
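A minimal sketch of that first suggestion, using the types from the question:

using NUnit.Framework;
using StructureMap;

[TestFixture]
public class FooTests
{
    [TearDown]
    public void ResetContainer()
    {
        // Wipe all configuration and injected instances so the next
        // test starts from a clean ObjectFactory.
        ObjectFactory.ResetAll();
    }

    [Test]
    public void Test1()
    {
        ObjectFactory.Inject<IFoo>(new TestFoo());
        // ...
    }

    [Test]
    public void Test2()
    {
        ObjectFactory.Initialize(
            x => x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<RealFoo>());
        // ...
    }
}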
Yes, NUnit tests are supposed to be isolated and it is your responsibility to make sure they are isolated. The solution would be to reset ObjectFactory in the TearDown method of your test fixture. You can use ObjectFactory.EjectAllInstancesOf() for example.
Of course it doesn't reset between tests. ObjectFactory is a static wrapper around an InstanceManager; it is static for the lifetime of an AppDomain, and since all your tests run in the same AppDomain, it is not reset between them. You need to tear down the ObjectFactory between tests or configure a new Container for each test (i.e., get away from using the static ObjectFactory).
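A sketch of that second route, where each test builds its own Container (configured with the same syntax as in the question) instead of touching the static ObjectFactory:

using NUnit.Framework;
using StructureMap;

[TestFixture]
public class FooTestsWithOwnContainer
{
    private Container _container;

    [SetUp]
    public void CreateContainer()
    {
        // A brand-new container per test, so nothing leaks between tests.
        _container = new Container(
            x => x.ForRequestedType<IFoo>().TheDefaultIsConcreteType<RealFoo>());
    }

    [Test]
    public void Test2()
    {
        var foo = _container.GetInstance<IFoo>();
        // ...
    }
}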
Incidentally, this is the main reason for avoiding global state and singletons: they are not friendly to testing.
From the Google guide to Writing Testable Code:
Global State: Global state is bad from theoretical, maintainability, and understandability point of view, but is tolerable at run-time as long as you have one instance of your application. However, each test is a small instantiation of your application in contrast to one instance of application in production. The global state persists from one test to the next and creates mass confusion. Tests run in isolation but not together. Worse yet, tests fail together but problems can not be reproduced in isolation. Order of the tests matters. The APIs are not clear about the order of initialization and object instantiation, and so on. I hope that by now most developers agree that global state should be treated like GOTO.