Is it possible to Unit Test WinUI 3 code using NUnit?

I've got a WinUI 3 solution with three projects: a class library for the backend and ViewModels, a unit-test project using NUnit, and the WinUI 3 app itself.
Everything was working fine, with all tests passing, until I called Colors.Transparent in one of my ViewModel methods (invoked from the ViewModel constructor). Now all the tests for that ViewModel fail with:
System.Reflection.TargetInvocationException : Exception has been thrown by the target of an invocation.
----> System.Runtime.InteropServices.COMException : Class not registered (0x80040154 (REGDB_E_CLASSNOTREG))
Stack Trace: 
RuntimeType.CreateInstanceOfT()
Activator.CreateInstance[T]()
WeakLazy`1.get_Value()
_IColorsStatics.get_Instance()
Colors.get_Transparent()
MainViewModel.ctor(IServiceProvider provider) line 58
MainViewModelTests.Constructor_SetsCachedSortedColumn_ToLTV() line 64
--COMException
BaseActivationFactory.ctor(String typeNamespace, String typeFullName)
_IColorsStatics.ctor()
RuntimeType.CreateInstanceOfT()
It looks like a dependency-injection issue. Any ideas how to get around this, or does NUnit just not support WinUI 3?

You need to create an app that runs the tests on the UI thread, as described here.
Otherwise the idea is to keep your view models free of WinUI-specific logic and test those individually in a plain unit-test project, as sketched below.
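A minimal sketch of that separation, assuming the ViewModel only needs the color value itself; IColorProvider, WinUIColorProvider, and the property names are illustrative placeholders, not code from the question:

public interface IColorProvider
{
    // A platform-neutral representation (here just a hex string) so the
    // class library needs no WinUI reference at all.
    string TransparentHex { get; }
}

// Lives in the WinUI 3 app project, which the test runner never loads.
public sealed class WinUIColorProvider : IColorProvider
{
    public string TransparentHex => Microsoft.UI.Colors.Transparent.ToString();
}

// Lives in the class library; tests can inject a trivial fake instead.
public sealed class MainViewModel
{
    public string CachedColor { get; }

    public MainViewModel(IColorProvider colors)
    {
        CachedColor = colors.TransparentHex;
    }
}

A test can then pass a stub IColorProvider, and Colors.Transparent is only ever touched inside the packaged app, where the COM activation it depends on is available.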

Related

How do I undo a Setup call for a Moq mock?

This might be a special use case that I'm dealing with here. Here is what my simple C# NUnit test using Moq looks like:
Mock<ISomeRepository> mockR = new Mock<ISomeRepository>();
mockR.Setup(x => x.GetSomething()).Returns(new Something(a: 1, b: 2));
// use the mocked repository here
Later, in this same unit test or in another test case, I want to invoke the real implementation of the GetSomething() method on this mockR object.
Is there a way to do that? My repository is a singleton at heart, so even if I create a new object, the GetSomething method still returns the mocked value.
That would largely depend on your implementation of GetSomething, which is something you're not showing here ;). Also, I'm not sure that's even a valid setup as originally posted; shouldn't there be a complete .Setup(..).Returns(..) call there?
Mocks represent the dependencies of a class, allowing that class to be tested without using its actual dependencies. Alternatively, you can write tests that involve the actual dependencies.
But using a mocked dependency and the real dependency within the same unit test suggests you're not clear on what your test is testing.
If it's another test case, it shouldn't be a problem either. Each test should not impact another, so if you set up the class under test separately that should be fine, even with a singleton.
I'm assuming that you're injecting the singleton dependency. If not, do that.
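A hedged sketch of that separation; ClassUnderTest, its DoWork method, and SomeRepository.Instance are placeholders rather than code from the question:

using Moq;
using NUnit.Framework;

[TestFixture]
public class SomeRepositoryConsumerTests
{
    [Test]
    public void DoWork_WithMockedRepository()
    {
        // A fresh mock per test, so no Setup call leaks into other tests.
        var mockR = new Mock<ISomeRepository>();
        mockR.Setup(x => x.GetSomething()).Returns(new Something(a: 1, b: 2));

        var sut = new ClassUnderTest(mockR.Object);

        Assert.That(sut.DoWork(), Is.Not.Null);
    }

    [Test]
    public void DoWork_WithRealRepository()
    {
        // Inject the real singleton when you want the actual implementation.
        var sut = new ClassUnderTest(SomeRepository.Instance);

        Assert.That(sut.DoWork(), Is.Not.Null);
    }
}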

How to pass a set of system properties to only one particular test suite instead of all of them in a Gradle test

I have a set of JUnit test suites, and all of them work fine in Eclipse.
In one of the test suites we pass some system properties.
But those modified system properties should not propagate to the other test suites, so I set them only in the setup method, like below:
@BeforeClass
public static void setUp() {
    System.setProperty("public", "publicfolder");
    System.setProperty("private", "privatefolder");
}
But this works fine only in Eclipse. When I run the build outside Eclipse, all the other test suites pass except the one above.
I know how to pass system properties to Gradle in the build file, but how can I pass those system properties to only one test suite instead of all of them? That's my question here.
You could add another Test task so there are two in total. Each could have a different filter to run separate test suites, and each could pass different system properties.
See here for a similar solution.
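A hedged sketch of that setup in the Gradle Groovy DSL; the task name and the suite pattern com.example.FolderPropertiesSuite are placeholders:

// build.gradle: a second Test task that runs only the one suite
// and gets its own system properties.
task propertiesSuiteTest(type: Test) {
    filter {
        includeTestsMatching 'com.example.FolderPropertiesSuite'
    }
    systemProperty 'public', 'publicfolder'
    systemProperty 'private', 'privatefolder'
}

// Keep that suite out of the default test task so the properties
// never reach the other suites.
test {
    filter {
        excludeTestsMatching 'com.example.FolderPropertiesSuite'
    }
}

check.dependsOn propertiesSuiteTest

With this split, gradle check runs both tasks, but only the one suite ever sees the two properties.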

ASP Boilerplate problems using Effort in unit testing with EFProf (Entity Framework Profiler)

I'm having issues using EFProf (http://www.hibernatingrhinos.com/products/EFProf) with ASP Boilerplate (http://www.aspnetboilerplate.com/).
For unit testing, ASP Boilerplate uses Effort (https://github.com/tamasflamich/effort) for mocking the database in-memory.
If I run the unit tests without adding the reference to EFProf, the tests run correctly (green).
If I add the initialization line:
HibernatingRhinos.Profiler.Appender.EntityFramework.EntityFrameworkProfiler.Initialize();
in either my test base ctor or my application project's Initialize(), I get the following error:
Castle.MicroKernel.ComponentActivator.ComponentActivatorException
ComponentActivator: could not instantiate MyApp.EntityFramework.MyAppDataContext
The inner exception has the relevant information:
Error: Unable to cast object of type 'Effort.Provider.EffortConnection' to type 'HibernatingRhinos.Profiler.Appender.ProfiledDataAccess.ProfiledConnection'.
Is Effort just not compatible with EFProf? Or am I doing something blindingly obvious wrong in my initialization?
Answering my own question: Effort fakes the DbContext object but does not actually generate SQL for the in-memory database, so there is nothing for a profiler to intercept. That is also the reason the CommandText is always null when using EF6's Database.Log with Effort.
I'm going to try using Moq with EF6 to mock an in-memory database implementation for testing, as an alternative to ASP Boilerplate's testing project that utilizes Effort, per this article: https://msdn.microsoft.com/en-us/library/dn314429(v=vs.113).aspx
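For reference, the approach in that article boils down to roughly the following sketch (Blog and BloggingContext are the article's illustrative names, and the Blogs property must be declared virtual for Moq to override it):

using System.Collections.Generic;
using System.Data.Entity;
using System.Linq;
using Moq;

var data = new List<Blog>
{
    new Blog { Name = "AAA" },
    new Blog { Name = "BBB" },
}.AsQueryable();

// Back a mocked DbSet<Blog> with the in-memory list.
var mockSet = new Mock<DbSet<Blog>>();
mockSet.As<IQueryable<Blog>>().Setup(m => m.Provider).Returns(data.Provider);
mockSet.As<IQueryable<Blog>>().Setup(m => m.Expression).Returns(data.Expression);
mockSet.As<IQueryable<Blog>>().Setup(m => m.ElementType).Returns(data.ElementType);
mockSet.As<IQueryable<Blog>>().Setup(m => m.GetEnumerator()).Returns(() => data.GetEnumerator());

// The code under test then sees an ordinary context containing two blogs.
var mockContext = new Mock<BloggingContext>();
mockContext.Setup(c => c.Blogs).Returns(mockSet.Object);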

Issue with GHUnit Testing for iPhone

I am working on unit tests using the third-party framework GHUnit. I created a project and added the GHUnit framework plus the other frameworks that are needed.
I created one class called TestCases that imports the GHUnit library and the class that needs test cases.
I need to write test cases for 40 classes.
Do I need to write all the test cases in one single class?
Or do I need to create a separate class per test case?
If the latter: when I try to create the separate classes testCase1, testCase2, ..., testCase40, the runner doesn't show them.
It shows me a table view with a run button and only the first class's test methods; it doesn't show the methods of the remaining test classes.
Please advise what I should do in this situation.
Thanks in advance, all.
Separate unit testing (functionality testing) from integration testing (testing the complete system working together).
UNIT TESTING: (for each of those 40 classes)
Usually you write a different test class for each production class, so that if there is a change in any one class you can test it by running that particular test class. So whether there is one class, 40, or hundreds, it is better to write a unit-test class for each of them to ensure its functionality.
Within each test class, it is better to write different test cases for different pieces of functionality, so that it is easy to identify where an error comes from (for a third person, not the one who developed the code and wrote the test cases) and to manage them.
Each test method should ideally cover only one case; write separate test cases for the different functional behaviors of each function. That may result in 100 test cases in a single class when testing a class with 10 functions, but that is good.
INTEGRATION TESTING: (for testing the dependent functionality of the 40 classes)
For integration testing, write test cases for the different behaviors of the complete system in a single class, covering the different possibilities (test cases).
And finally, "spend more time on testing than on coding."
Also ensure that test coverage of the code is between 90% and 100%.
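As a concrete illustration of the one-test-class-per-class advice, a hypothetical GHUnit test class; Calculator and its add:to: method are invented for the example:

#import <GHUnitIOS/GHUnit.h>
#import "Calculator.h"

// One test class dedicated to Calculator; each method tests one case.
@interface CalculatorTests : GHTestCase
@end

@implementation CalculatorTests

- (void)testAddTwoPositiveNumbers {
    Calculator *calc = [[Calculator alloc] init];
    GHAssertEquals([calc add:2 to:3], 5, @"2 + 3 should be 5");
}

- (void)testAddWithZero {
    Calculator *calc = [[Calculator alloc] init];
    GHAssertEquals([calc add:0 to:7], 7, @"0 + 7 should be 7");
}

@end

GHUnit discovers every GHTestCase subclass compiled into the test target at runtime, which is why each class can live in its own file.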

Delay-loading TestCaseSource in NUnit

I have some NUnit tests that use a TestCaseSource function. Unfortunately, the TestCaseSource function that I need takes a long time to initialize, because it recursively scans a folder tree to find all of the test images to pass into the test function. (Alternatively it could load from an XML file list every time it's run, but automatic discovery of new image files is still a requirement.)
Is it possible to specify an NUnit attribute together with TestCaseSource such that NUnit does not enumerate the test cases (does not call the TestCaseSource function) until either the user clicks on the node, or until the test suite is being run?
The need to get all test images stored in a folder is a project requirement because other people who do not have access to the test project will need to add new test images to the folder, without having to modify the test project's source code. They would then be able to view the test result.
Some dogmatic unit-testers may counter that I am using NUnit to do something it's not supposed to do. I have to admit that I have to meet a requirement, and NUnit is such a great tool with a great GUI that satisfies most of my requirements, such that I do not care about whether it is proper unit testing or not.
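For context, the source in question is shaped roughly like the following sketch (the folder path and all names are placeholders); the recursive scan runs as soon as NUnit loads the fixture, which is the delay being asked about:

using System.Collections.Generic;
using System.IO;
using NUnit.Framework;

[TestFixture]
public class ImageRegressionTests
{
    // NUnit enumerates this at load time, before any test is run.
    public static IEnumerable<TestCaseData> TestImages()
    {
        const string root = @"C:\TestImages"; // placeholder folder
        foreach (var path in Directory.EnumerateFiles(root, "*.png", SearchOption.AllDirectories))
        {
            yield return new TestCaseData(path).SetName(Path.GetFileName(path));
        }
    }

    [TestCaseSource(nameof(TestImages))]
    public void Image_ProcessesCorrectly(string imagePath)
    {
        Assert.That(File.Exists(imagePath), Is.True);
        // ... the real image-processing assertions would go here ...
    }
}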
Additional info (from NUnit documentation)
Note on Object Construction
NUnit locates the test cases at the time the tests are loaded, creates instances of each class with non-static sources and builds a list of tests to be executed. Each source object is only created once at this time and is destroyed after all tests are loaded.
If the data source is in the test fixture itself, the object is created using the appropriate constructor for the fixture parameters provided on the TestFixtureAttribute or the default constructor if no parameters were specified. Since this object is destroyed before the tests are run, no communication is possible between these two phases - or between different runs - except through the parameters themselves.
It seems the purpose of loading the test cases up front is to avoid communication (or side effects) between TestCaseSource and the execution of the tests. Is this true? Is this the only reason test cases are required to be loaded up front?
Note:
A modification of NUnit was needed, as documented in http://blog.sponholtz.com/2012/02/late-binded-parameterized-tests-in.html
There are plans to introduce this option in later versions of NUnit.
I don't know of a way to delay-load test names in the GUI. My recommendation would be to move those tests into a separate assembly. That way you can quickly run all of your other tests, and load the slower, exhaustive tests only when needed.