I'm attempting to do some memory leak checks in my automation tests using the following:
nUnit 3.8.1
TestStack.White 0.13.3
dotMemory 3.0.20171219.105559
I'm launching my tests with the following console command, as outlined here:
dotMemoryUnit.exe "E:\nunit3-console.exe" -- "C:\Dev\White\bin\Debug\Automation.dll"
The tests (outlined below, mostly in pseudocode) launch the application, grab a snapshot, navigate into various sub-pages, return to the base page, and then take another snapshot so I can compare for survived objects. The snapshot comparison is done using the method outlined here.
private MemoryCheckPoint snapshot1;
[ OneTimeSetUp ]
public void SetUp()
{
// launch application, hook up with teststack.white
LaunchApplication();
}
[ Test, Order(1) ]
public void GetSnapshot()
{
snapshot1 = dotMemory.Check();
}
[ Test, Order(2) ]
public void DoStuff()
{
//Many tests like this that test navigation from this page
//making sure controls work and values are returned as expected
}
[ Test, Order(3) ]
public void CheckMemory()
{
dotMemory.Check(memory =>
{
// Compare two checkpoints
Assert.That(memory.GetDifference(snapshot1)
.GetSurvivedObjects(where => where.Type.Is<string>())
.ObjectsCount, Is.EqualTo(0));
});
}
[ OneTimeTearDown ]
public void CloseWindow()
{
Application.Close();
}
The idea is that if there are any UI elements that don't get disposed of due to events, etc., this should pick them up as survived objects, and then I can manually repeat the test later to track down the source of the problem.
However, when I run the tests using the dotMemoryUnit.exe console, I get the following error:
1) Error : White.Tests.MemoryCheck.System.ArgumentException : You are trying to compare the snapshot with itself at JetBrains.dotMemoryUnit.Kernel.dotMemory.Api.GetDifference(Snapshot snapshot1, Snapshot snapshot2)
Considering they're definitely different snapshots, I cannot figure out why this is failing.
The reason I'm using the console runner is that, for some reason, when I try to run the automation tests using the ReSharper test runner, they don't run and it just returns Inconclusive: test not run.
By default, dotMemory Unit works in the context of a single test: you can think of it as if DotMemoryUnitController.TestStart were called at the very beginning of the test method and DotMemoryUnitController.TestEnd at the very end. All data are only valid inside one test.
You can switch off this behaviour by specifying the --no-instrumentation command line parameter and calling DotMemoryUnitController.TestStart and DotMemoryUnitController.TestEnd manually, as described in this article.
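For example, a minimal sketch of that manual wiring applied to the fixture above (assuming the run is started with --no-instrumentation): one dotMemory Unit session spans the whole fixture, so checkpoints taken in different NUnit tests remain comparable.
[ OneTimeSetUp ]
public void SetUp()
{
// Open one dotMemory Unit "test" session for the whole fixture
DotMemoryUnitController.TestStart();
// launch application, hook up with teststack.white
LaunchApplication();
}
[ OneTimeTearDown ]
public void CloseWindow()
{
Application.Close();
// Close the session so everything captured in between stays valid
DotMemoryUnitController.TestEnd();
}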
Related
Consider this Test
[TestFixture]
class Sample
{
[Test]
public void Test()
{
Thread.CurrentThread.Name = "Foo";
}
}
If I debug this test, it passes without error.
If I run this test, it fails with the following exception
System.InvalidOperationException : This property has already been set and cannot be modified.
In run mode, the test's thread's name is "NonParallelWorker".
In debug mode, the test's thread's name is null
As a constraint, assume the code under test is not allowed to change, and that it attempts to set the thread's name without checking for null first.
E.g.
public void SampleMethodUnderTest()
{
// It is important that this method gets to set this field.
Thread.CurrentThread.Name = "Important Value";
}
My search through the documentation and others' posts has come up dry...
Question
Is there any way to disable/modify NUnit's thread-naming behavior?
Try adding the RequiresThreadAttribute.
[TestFixture]
class Sample
{
[Test, RequiresThread]
public void Test()
{
Thread.CurrentThread.Name = "Foo";
}
}
I think this will work currently, although the fact that this creates an unnamed thread may be an implementation detail rather than something that will necessarily hold going forward; I'm not sure. The alternative, of course, is to create your own user-controlled thread in the test and pass any exceptions back to NUnit, as sketched below.
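A minimal sketch of that alternative (illustrative only, not something NUnit prescribes):
[Test]
public void Test()
{
Exception failure = null;
var worker = new Thread(() =>
{
try
{
// This thread is created by the test, so it has no name yet.
Thread.CurrentThread.Name = "Foo";
}
catch (Exception ex)
{
failure = ex;
}
});
worker.Start();
worker.Join();
// Surface any failure from the worker thread back to NUnit.
if (failure != null)
throw failure;
}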
Having a test class like this
public class VerySimpleFactory {
@TestFactory
public Stream<? extends DynamicNode> someTests() {
DynamicContainer container1 = DynamicContainer.dynamicContainer("A",
Arrays.asList(t("A1"), t("A2"), t("A3"), t("A4"), t("A5")));
DynamicContainer container2 = DynamicContainer.dynamicContainer("B",
Arrays.asList(t("B1"), t("B2"), t("B3"), t("B4"), t("B5")));
DynamicContainer container3 = DynamicContainer.dynamicContainer("C",
Arrays.asList(t("C1"), t("C2"), t("C3"), t("C4"), t("C5")));
DynamicContainer container4 = DynamicContainer.dynamicContainer("D",
Arrays.asList(t("D1"), t("D2"), t("D3"), t("D4"), t("D5")));
return Arrays.asList(container1, container2, container3, container4).stream();
}
@Test
public void t1() throws Exception {
Thread.sleep(1000);
}
@Test
public void t2() throws Exception {
Thread.sleep(1000);
}
public DynamicTest t(String name) {
return DynamicTest.dynamicTest(name, () -> Thread.sleep(1000));
}
}
the tests with a @Test annotation are discovered instantly by the JUnit view, but the tests from the @TestFactory are only discovered at runtime, each one after the previous test has completely executed. This leads to a changing and "jumping" JUnit view. Also, I cannot select a specific test I'm interested in to run as a single test until all previous tests have executed.
It would be much nicer if all dynamic tests were also shown in the JUnit view at the beginning of test execution.
If this doesn't happen, is it a problem with JUnit 5, Eclipse, or my code?
Dynamic tests are dynamic, not static.
It is not possible to know beforehand which and how many tests will be generated by a @TestFactory annotated method... actually, it may produce tests in an endless loop.
Copied from https://junit.org/junit5/docs/current/user-guide/#writing-tests-dynamic-tests-examples
generateRandomNumberOfTests() implements an Iterator that generates random numbers, a display name generator, and a test executor and then provides all three to DynamicTest.stream(). Although the non-deterministic behavior of generateRandomNumberOfTests() is of course in conflict with test repeatability and should thus be used with care, it serves to demonstrate the expressiveness and power of dynamic tests.
I'm using NUnit 3.0 and TestFixtureSource to run test cases inside a fixture multiple times with different parameters/configurations (I do want to do this at TestFixture level). Simple example:
[TestFixtureSource(typeof(ConfigurationProvider))]
public class Fixture
{
public Fixture(Configuration configuration)
{
_configuration = configuration;
}
private Configuration _configuration;
[Test]
public void Test()
{
//do something with _configuration
Assert.Fail();
}
}
Let's say Test() fails for one of the configurations and succeeds for another. In the run report file and in Visual Studio's Test Explorer the name for both the failed and the succeeded runs will be displayed as just Test(), which doesn't tell me anything about which setup caused issues.
Is there a way to affect the test cases names in this situation (i.e. prefix its name per fixture run/configuration)? As a workaround I'm currently printing to the results output before each test case fires but I would rather avoid doing that.
Since NUnit 3.0 is in beta and this feature is fairly new I wasn't able to find anything in the docs. I found TestCaseData but I don't think it's tailored to be used with fixtures just yet (it's designed for test cases).
I can't find a way to change the test name, but it should not be necessary, because NUnit 3 constructs the test name by including a description of the test fixture.
The example class Fixture from the question can be used unchanged if Configuration and ConfigurationProvider have an implementation like this:
public class Configuration
{
public string Description { get; }
public Configuration(string description)
{
Description = description;
}
public override string ToString()
{
return Description;
}
}
public class ConfigurationProvider : IEnumerable
{
public IEnumerator GetEnumerator()
{
yield return new Configuration("Foo");
yield return new Configuration("Bar");
yield return new Configuration("Baz");
}
}
The 'trick' is to make sure the constructor parameter to the fixture is a string or has a ToString method that gives a sensible description of the fixture.
If you are using the NUnit 3 Test Adapter in Visual Studio, the test fixtures will be displayed as Fixture(Foo), Fixture(Bar) and Fixture(Baz), so you can easily distinguish between their tests. The XML output from nunit3-console.exe also uses descriptive names, e.g. fullname=MyTests.Fixture(Bar).Test:
<test-case id="0-1003" name="Test" fullname="MyTests.Fixture(Bar).Test" methodname="Test" classname="MyTests.Fixture" runstate="Runnable" result="Failed" ... >
<failure>
<message><![CDATA[]]></message>
<stack-trace><![CDATA[at MyTests.Fixture.Test() in ... ]]></stack-trace>
</failure>
...
</test-case>
One way to perform such actions is to put find-and-replace tokens in the source code and dynamically build the test libraries before execution using command-line msbuild. The high-level steps are:
Define test case names as sometest_TOKEN in the source, then use a command-line tool like fnr.exe to replace _TOKEN with whatever you like, for example sometest_build2145.
Compile the DLL using msbuild, for example msbuild /t:Rebuild mytestproj.sln. Then execute all test cases in mytestproj.dll.
Is it possible to write a Unit Test that calls the Messenger.Default.Register method and then write an Assertion to be used by the Action?
I would like to determine if my ViewModel is sending the correct message after calling an Execute on one of my Commands.
I have tried writing the Assert.AreEqual as the Action, but this doesn't seem to be working correctly.
Sounds like a job for mocking! Assuming you're passing in the messenger interface to your viewmodel (because dependency inversion is a Good Thing, for this very reason), your code should look something like this if I understand you correctly:
public class YourViewModel
{
readonly IMessenger messenger;
public YourViewModel(IMessenger messenger)
{
this.messenger = messenger;
// setup of your delegate command to call Execute
}
void Execute(object parameter)
{
messenger.Send(new YourMessageType());
}
}
Then in your unit test you'd mock the messenger and verify that the right method is called, which in this case is Send. So, using the popular mocking framework Moq:
public class YourViewModelTests
{
[Test]
public void Execute_Always_SendsYourMessageType()
{
// arrange
var mockRepository = new MockRepository(MockBehavior.Loose);
var mockMessenger = mockRepository.Create<IMessenger>();
var systemUnderTest = new YourViewModel(mockMessenger.Object);
// act
systemUnderTest.YourCommand.Execute(null);
// assert
mockMessenger.Verify(p => p.Send<YourMessageType>(
It.Is<YourMessageType>(m => /* return true if it's the right message */)));
}
}
Usually I'd move just about all of the "arrange" phase into a test setup method, but you should get the idea.
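For example, a minimal sketch of that refactoring (same hypothetical types as above):
MockRepository mockRepository;
Mock<IMessenger> mockMessenger;
YourViewModel systemUnderTest;
[SetUp]
public void SetUp()
{
// arrange once, before every test
mockRepository = new MockRepository(MockBehavior.Loose);
mockMessenger = mockRepository.Create<IMessenger>();
systemUnderTest = new YourViewModel(mockMessenger.Object);
}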
If you'd still like to do it without mocking the messenger and also use Messenger.Default, you can do the following:
public class YourViewModelTests
{
[Test]
public void Execute_Always_SendsYourMessageType()
{
// arrange
var systemUnderTest = new YourViewModel();
// Set the action to store the message that was sent
YourMessageType actual = null;
Messenger.Default.Register<YourMessageType>(this, t => actual = t);
// act
systemUnderTest.YourCommand.Execute(null);
// assert
YourMessageType expected = /* set up your expected message */;
Assert.That(actual, Is.EqualTo(expected));
}
}
Alternatively, for each test it is possible to create a separate copy of the Messenger. For the runtime you want to use the Default instance of the Messenger, but for Unit Tests, as I said, create a separate copy for each test:
return new GalaSoft.MvvmLight.Messaging.Messenger(); // Unit Tests
return GalaSoft.MvvmLight.Messaging.Messenger.Default; // Runtime
Otherwise one might end up re-inventing the wheel: in more complex situations where you need to test ViewModel communications, you will have to manage Messenger subscribers, message types and so on, and then probably write unit tests for the messenger mock to make sure it behaves the same way as the original messenger. There is nothing in the engine of the Messenger that should behave differently between runtime and test executions.
So for testing, the factory returns the same instance of the Messenger to everyone: the test method subscribes and waits, the ViewModel publishes, then the test receives the message and exits; otherwise the test times out and reports an error. I found this approach "closer to reality" than mocking the messenger and verifying through the mock that the method was called.
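A minimal sketch of such a factory (MessengerFactory and Reset are hypothetical names, not part of MVVM Light):
public static class MessengerFactory
{
static GalaSoft.MvvmLight.Messaging.IMessenger current;
// Call from the unit test's [SetUp] to get a fresh, isolated Messenger per test.
public static void Reset()
{
current = new GalaSoft.MvvmLight.Messaging.Messenger();
}
// At runtime Reset() is never called, so everyone shares the Default instance.
public static GalaSoft.MvvmLight.Messaging.IMessenger Current
{
get { return current ?? GalaSoft.MvvmLight.Messaging.Messenger.Default; }
}
}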
We've got some integration tests in our solution. To run these tests, simulation software must be installed on the developer PC. This software is, however, not installed on every developer PC. If the simulation software is not installed, these tests should be skipped; otherwise they fail with a NullReferenceException.
I'm now looking for a way to do a "conditional ignore" for tests/test fixtures.
Something like
if(simulationFilesExist)
do testfixture
else
skip testfixture
NUnit gives some useful things like Ignore and Explicit, but that's not quite what I need.
Use some code in your test or fixture set up method that detects if the simulation software is installed or not and calls Assert.Ignore() if it isn't.
[SetUp]
public void TestSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting." );
}
}
or
[TestFixtureSetUp]
public void FixtureSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting fixture." );
}
}
In NUnit 3.0 and higher you have to use the OneTimeSetUp attribute instead of TestFixtureSetUp.
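For NUnit 3.x the fixture-level check above would then look like this (same hypothetical TestHelper as above):
[OneTimeSetUp]
public void FixtureSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting fixture." );
}
}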
NUnit also gives you the option to supply a Category attribute.
Depending on how you are launching your tests, it may be appropriate to flag all the tests that require the simulator with a known category (e.g., [Category("RequiresSimulationSoftware")]). Then from the NUnit GUI you can choose to exclude certain categories. You can do the same thing from the NUnit command line runner (specify /exclude:RequiresSimulationSoftware if applicable).
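For example, a minimal sketch of tagging a test with that category (the category name is just the example used above):
[Test, Category("RequiresSimulationSoftware")]
public void TestThatNeedsSimulator()
{
// body that relies on the simulation software being installed
}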
I didn't want to duplicate the Assert.Ignore condition in every test case, so I ended up using a custom attribute class derived from the NUnitAttribute class:
[AttributeUsage(AttributeTargets.Method, AllowMultiple = false, Inherited = false)]
public class SimulatorOnlyAttribute : NUnitAttribute, IApplyToTest
{
public void ApplyToTest(Test test)
{
if (test.RunState == RunState.NotRunnable)
{
return;
}
if (!Helper.RunsOnSimulator)
{
test.RunState = RunState.Ignored;
test.Properties.Set(PropertyNames.SkipReason, "This test should run only on simulator");
}
}
}
So now I can just mark required test cases with the new attribute:
[SimulatorOnly]
public void Test()
For reference, you could investigate the source code of the IgnoreAttribute.
Use:
[SetUp]
public void TestSetUp()
{
if (!TestHelper.SimulationFilesExist())
{
Assert.Ignore( "Simulation files are not installed. Omitting." );
}
}
You can also use this type of condition in a [TestFixtureSetUp] method, but if the fixture has parameterized tests and you try to ignore them from there, the run can end up in an infinite loop and hang. So it is better to put the condition in the [SetUp] method.
There are a lot of ways to alter the result status of a test. Here are a few, along with ways to read out the various statuses:
TestExecutionContext.CurrentContext.CurrentTest.MakeInvalid("I want this test to be SKIPPED");
ResultState resultStateObject = new ResultState(TestStatus.Skipped);
TestExecutionContext.CurrentContext.CurrentResult.SetResult(resultStateObject, "this test is being skipped derp derp");
TestExecutionContext.CurrentContext.CurrentTest.RunState = RunState.Ignored;
Logger.log("After doing things");
resultstate = TestExecutionContext.CurrentContext.CurrentResult.ResultState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State: " + resultstate);
resultstatestatus = TestExecutionContext.CurrentContext.CurrentResult.ResultState.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result State Status: " + resultstate);
runstate = TestExecutionContext.CurrentContext.CurrentTest.RunState.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Run State: " + runstate); //test="#runstate = 'Skipped' or #runstate = 'Ignored' or #runstate='Inconclusive'
status = TestContext.CurrentContext.Result.Outcome.Status.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Result Status: " + status);
message = TestExecutionContext.CurrentContext.CurrentResult.Message.ToString();
Logger.log("%%%%%%%%%%%%%%%%%% Message: " + message);