Containing the combinatorial explosion of test cases using Catch

Let us say I have a simple class which takes 6 boolean arguments in its constructor and performs some computation based on the state of those arguments.
If I want to use Catch to adequately test all of the cases, I would need 2^6 = 64 separate unit tests.
Now, let's say at some point in the future, I add a 7th boolean argument. Now the number of test cases that I have to write has doubled to 128.
Is there a way that I can craft my unit test in such a way as to automatically "generate" all 2^n test cases from a single test specification?

You can use the generators part of Catch to automatically walk through all the different combinations of bools that your function takes:
TEST_CASE("where is my sandwich", "[hunger][food]")
{
bool wantLettuce = GENERATE(Values(false, true));
bool wantTomato = GENERATE(Values(false, true));
bool wantBacon = GENERATE(Values(false, true));
bool wantCheese = GENERATE(Values(false, true));
bool wantEgg = GENERATE(Values(false, true));
CHECK(sandwichAssembler(wantLettuce, wantTomato, wantBacon, wantCheese, wantEgg));
}
IIRC this will cause Catch to run the test body 2^5 = 32 times, trying all the combinations.
...but I'm assuming that all you want to do is exercise all the combinations. If you actually want to verify the output meets some expectation in each case, you'll need to do more.
(I've not tested this - recalling from memory while waiting for my sandwich in a cafe in Barcelona)
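If you do want to verify the result for each combination, one option is to compute the expected answer from a small reference model inside the test and check the call against it. A minimal sketch, assuming the Catch2 v3 catch_all.hpp header, and assuming purely for illustration that sandwichAssembler succeeds whenever at least one ingredient is requested:

#include <catch2/catch_all.hpp>

// Hypothetical function under test (declaration only for the sketch).
bool sandwichAssembler(bool lettuce, bool tomato, bool bacon, bool cheese, bool egg);

TEST_CASE("sandwich assembler handles every combination", "[hunger][food]")
{
    bool wantLettuce = GENERATE(values({ false, true }));
    bool wantTomato  = GENERATE(values({ false, true }));
    bool wantBacon   = GENERATE(values({ false, true }));
    bool wantCheese  = GENERATE(values({ false, true }));
    bool wantEgg     = GENERATE(values({ false, true }));

    // Reference model for this combination (an assumption for illustration,
    // not part of the original question).
    const bool expected = wantLettuce || wantTomato || wantBacon || wantCheese || wantEgg;

    // CAPTURE makes the failing combination visible in the test report.
    CAPTURE(wantLettuce, wantTomato, wantBacon, wantCheese, wantEgg);
    CHECK(sandwichAssembler(wantLettuce, wantTomato, wantBacon, wantCheese, wantEgg) == expected);
}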

Related

How to capture the exception when an Assert fails in either NUnit or MSTest

How to capture a failed Assert's exception message or state when using the NUnit or MSTest Assert methods.
I was trying to capture the AssertionException when my assert fails; how can I capture it using either NUnit or MSTest, given that the Assert methods don't return anything? My requirements are: the test should continue even when an assert fails, so that the remaining assertions still run; the failure should be captured; and the assert should still be reported as failed. I was using the code below. With NUnit the assert fails and execution continues to the next assert, but I am not able to capture the failure even with a try..catch block, whereas with MSTest the assert fails and I can capture it with a try..catch block, but execution does not continue to the next assertion.
Much appreciated for any help!
public static void ResponseValueAssert(dynamic actualValue, dynamic expectedValue, string nameOfAssert)
{
    //var ex = Assert.Throws<AssertionException>(() =>
    //    Assert.AreEqual(expectedValue, actualValue, "Actual value doesn't match with Expected value {0}", nameOfAssert));
    if (ResponseValueAssertImplicit(actualValue, expectedValue, nameOfAssert))
    {
        Console.WriteLine("\r\nResponse Assert:- {0}: <PASS>", nameOfAssert);
    }
    else
    {
        Console.WriteLine("\r\nResponse Assert:- {0}: <<FAIL>>", nameOfAssert);
        Console.Error.WriteLine("\r\nResponse Assert:- {0}: <<FAIL>>", nameOfAssert);
        // Assert.Fail();
    }
    Console.WriteLine("Expected Value: {0}.\r\nActual Value: {1}.", expectedValue, actualValue);
}

public static bool ResponseValueAssertImplicit(dynamic actualValue, dynamic expectedValue, string nameOfAssert)
{
    try
    {
        Assert.AreEqual(expectedValue, actualValue, "Actual value doesn't match with Expected value {0}", nameOfAssert);
        return true;
    }
    catch (AssertionException)
    {
        return false;
    }
}
It's not useful to ask a question about two completely different pieces of software as one thing. The answer would obviously be different for NUnit and MSTest, which are implemented quite differently these days.
So I'll answer only with respect to NUnit, since I have no idea what you would do with MSTest anyway.
In NUnit, if you want a test to continue so that more than one assertion can be reported on in the same test, you wrap them in a multiple-assert block. That is...
Assert.Multiple(() =>
{
    // Put your various asserts here
});
NUnit will report all the failed asserts. At the end of the block, the test will be terminated if any of the asserts failed.
Note that many people will say that more than one assert in a test is a bad idea. I believe it is most of the time but that there are situations, like checking multiple properties of the same object, where it can be useful.
Also, for the record, you should never be catching exceptions that are used internally by the test framework. They are basically hidden implementation details and all your work can be lost in the next release of the software... as has happened already to some people in this case.

Using std::bind to capture a parameter pack "by move"

I'm attempting to implement std::async from scratch, and have run into a hiccup with arguments of move-only type. The gist of it is, C++14 init-captures allow us to capture single variables "by move" or "by perfect forwarding", but they do not appear to let us capture parameter packs "by move" nor "by perfect forwarding", because you can't capture a parameter pack by init-capture — only by named capture.
I've found what appears to be a workaround, by using std::bind to capture the parameter pack "by move", and then using a wrapper to move the parameters out of the bind object's storage into the parameter slots of the function I really want to call. It even looks elegant, if you don't think too much about it. But I can't help thinking that there must be a better way — ideally one that doesn't rely on std::bind at all.
(Worst case, I'd like to know how much of std::bind I'd have to reimplement on my own in order to get away from it. Part of the point of this exercise is to show how things are implemented all the way down to the bottom, so having a dependency as complicated as std::bind really sucks.)
My questions are:
How do I make my code work, without using std::bind? (I.e., using only core language features. Generic lambdas are fair game.)
Is my std::bind workaround bulletproof? That is, can anybody show an example where the STL's std::async works and my Async fails?
Pointers to discussion and/or proposals to support parameter-pack capture in C++1z will be gratefully accepted.
Here's my code:
#include <cassert>
#include <future>
#include <memory>
#include <thread>
#include <unistd.h>   // for sleep()

template<typename UniqueFunctionVoidVoid>
auto FireAndForget(UniqueFunctionVoidVoid&& uf)
{
    std::thread(std::forward<UniqueFunctionVoidVoid>(uf)).detach();
}

template<typename Func, typename... Args>
auto Async(Func func, Args... args)
    -> std::future<decltype(func(std::move(args)...))>
{
    using R = decltype(func(std::move(args)...));
    std::packaged_task<R(Args...)> task(std::move(func));
    std::future<R> result = task.get_future();
#ifdef FAIL
    // sadly this syntax is not supported
    auto bound = [task = std::move(task), args = std::move(args)...]() { task(std::move(args)...); };
#else
    // this appears to work
    auto wrapper = [](std::packaged_task<R(Args...)>& task, Args&... args) { task(std::move(args)...); };
    auto bound = std::bind(wrapper, std::move(task), std::move(args)...);
#endif
    FireAndForget(std::move(bound));
    return result;
}

int main()
{
    auto f3 = [x = std::unique_ptr<int>{}](std::unique_ptr<int> y) -> bool { sleep(2); return x == y; };
    std::future<bool> r3 = Async(std::move(f3), std::unique_ptr<int>{});
    std::future<bool> r4 = Async(std::move(f3), std::unique_ptr<int>(new int));
    assert(r3.get() == true);
    assert(r4.get() == false);
}
It was suggested to me offline that another approach would be to capture the args pack in a std::tuple, and then re-expand that tuple into the argument list of task using something like std::experimental::apply (coming soon to a C++17 standard library near you!).
auto bound = [task = std::move(task), args = std::make_tuple(std::move(args)...)]() mutable {
    std::experimental::apply(std::move(task), std::move(args));
};
This is much cleaner. We've reduced the amount of library code involved, down from bind to "merely" tuple. But that's still a big dependency that I'd love to be able to get rid of!
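If the goal is to drop the library dependency entirely, the piece of apply that this trick needs is small enough to hand-roll on top of std::index_sequence. A rough sketch with my own names (not a conforming std::apply, but enough to expand a tuple of move-only arguments into a call):

#include <cstddef>
#include <tuple>
#include <utility>

// Expand the tuple elements into the argument list of func.
template<typename Func, typename Tuple, std::size_t... Is>
decltype(auto) my_apply_impl(Func&& func, Tuple&& tup, std::index_sequence<Is...>)
{
    return std::forward<Func>(func)(std::get<Is>(std::forward<Tuple>(tup))...);
}

// Minimal stand-in for std::experimental::apply; passing the tuple as an
// rvalue moves each stored element out into its parameter slot.
template<typename Func, typename Tuple>
decltype(auto) my_apply(Func&& func, Tuple&& tup)
{
    constexpr std::size_t N = std::tuple_size<std::decay_t<Tuple>>::value;
    return my_apply_impl(std::forward<Func>(func), std::forward<Tuple>(tup),
                         std::make_index_sequence<N>{});
}

Inside the mutable lambda above you would then call my_apply(std::move(task), std::move(args)), so the captured arguments are moved into the task's parameter slots without pulling in bind or the library apply.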

Good way to repeat a test, inserting an extra action?

I like the way Catch has nested hierarchies of tests, and it works through the combinations. It feels more natural than the setup/teardown of xUnit frameworks.
I now have a set of tests. What I want to do, about halfway down, is insert a load/save serialization test, and then repeat all the tests below that point: first without the load/save, then again using the data loaded by the serialization process, i.e. to prove that the load/save was correct.
I cannot work out whether Catch has anything that can help with this. If it were PHPUnit, I would be thinking about a chain of @depends tests and a @dataProvider with a boolean input. A bit ugly.
(If that does not make sense, let me know, and I'll try to work out a minimal example)
The issue here is that Catch is designed to descend a tree-like organisation of tests and it automatically discovers all of the leaf-nodes of the structure and calls back into the test cases with untested code paths until they're all tested. The leaf nodes (tests, sections) are meant to be independent.
It sounds like you want to test a repository - something that can persist some data and then load it back in.
To repeat the exact same tests in two different scenarios (before serialisation, after serialisation) you'd need to put the same tests into some common place and call into that place. You can still use the same Catch macros in a non-test-case function, as long as you call it from a test case.
One possible way to do this is:
struct TestFixture {
    Data data;
    Repository repository;
    TestFixture() : data(), repository() { }
};

void fillUpData(Data& data) {
    // ...
}

void isDataAsExpected(Data& data) {
    // Verify that 'data' is what we expect it to be, whether we
    // loaded it or filled it up manually
    SECTION("Data has ...") {
        REQUIRE(data...);
    }
}

TEST_CASE_METHOD(TestFixture, "Test with raw data") {
    fillUpData(data);
    isDataAsExpected(data);
    REQUIRE(repository.save(data));
}

TEST_CASE_METHOD(TestFixture, "Operate on serialised data") {
    REQUIRE(repository.load(data));
    isDataAsExpected(data);
}
One possible alternative is to supply your own main and then use command-line arguments to control whether or not the data is first serialised.
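For that route, Catch lets you take over main by defining CATCH_CONFIG_RUNNER and driving Catch::Session yourself. A rough sketch, assuming the classic single-header catch.hpp; the --use-serialised flag and the global it sets are made up for illustration:

#define CATCH_CONFIG_RUNNER
#include "catch.hpp"

#include <string>
#include <vector>

// Hypothetical switch that the test cases read to decide whether to load the
// data from the repository instead of filling it up manually.
bool g_useSerialisedData = false;

int main(int argc, char* argv[])
{
    // Strip our custom flag before handing the remaining arguments to Catch.
    std::vector<char*> catchArgs;
    for (int i = 0; i < argc; ++i) {
        if (std::string(argv[i]) == "--use-serialised")
            g_useSerialisedData = true;
        else
            catchArgs.push_back(argv[i]);
    }

    return Catch::Session().run(static_cast<int>(catchArgs.size()), catchArgs.data());
}

The test cases would then branch on g_useSerialisedData to decide whether to load from the repository or fill the data up manually, much as the generator-based version below branches on doLoad and doSave.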
There's a third way I can think of that uses a non-quite-ready-yet feature of Catch - Generators:
TEST_CASE("...") {
using Catch::Generators;
int iteration(GENERATE(values(0, 1)));
const bool doSave(iteration == 0);
const bool doLoad(iteration == 1);
Repository repository;
Data data;
if (doLoad) {
REQUIRE(repository.load(data));
} else {
// fill up data
}
REQUIRE(..data..test..);
if (doSave) {
REQUIRE(repository.save(data));
}
}
The advantage of this method is you can see the flow and the test runs twice (for the two values) but the major disadvantage is that Generators are incompatible with SECTIONs and BDD-style features.

FakeItEasy expectation fail against HashSet comparisons

I am using Xamarin Studio 5.2 on Mac OS X 10.9.4 with NUnit 2.6.3 and FakeItEasy 1.23.0.
When I run tests for this code:
using System;
using ValueSet = System.Collections.Generic.HashSet<uint>;
using NUnit.Framework;
using FakeItEasy;

namespace SetTest
{
    [TestFixture]
    class TestFixture
    {
        [Test]
        public void CallsUsersWithSetAndReducedSet()
        {
            var values = new ValueSet { 1, 2, 3 };
            var setUser = A.Fake<SetUser>();
            ClassUnderTest testInstance = new ClassUnderTest();
            using (var scope = Fake.CreateScope())
            {
                testInstance.RunWith(setUser);
                using (scope.OrderedAssertions())
                {
                    A.CallTo(() => setUser.Use(A<ValueSet>.That.IsEqualTo(values))).MustHaveHappened(Repeated.Exactly.Once);
                    A.CallTo(() => setUser.Use(A<ValueSet>.That.Matches(set =>
                        set.Count == 2 && set.Contains(1)))).MustHaveHappened(Repeated.Exactly.Once);
                }
            }
        }
    }

    public class SetUser
    {
        public virtual void Use(ValueSet set)
        {
        }
    }

    class ClassUnderTest
    {
        public static void Main(string[] arguments)
        {
        }

        public void RunWith(SetUser setUser)
        {
            var values = new ValueSet { 1, 2, 3 };
            setUser.Use(values);
            values.Remove(3);
            setUser.Use(values);
        }
    }
}
I get the following error output:
FakeItEasy.ExpectationException: Assertion failed for the following call: SetTest.SetUser.Use(<System.Collections.Generic.HashSet`1[System.UInt32]>) Expected to find it exactly once but found it #0 times among the calls:
1. SetTest.SetUser.Use(set: System.Collections.Generic.HashSet`1[System.UInt32]) repeated 2 times
I don't understand what is causing this failure and how to fix it.
What is needed to get this type of test to pass?
@Tim Long is on the right track in his comment.
Here's a little more detail, as well as updates to respond to your comments of 2014-08-11 03:25:56:
The first reason the first MustHaveHappened fails:
According to the FakeItEasy argument constraints documentation, That.IsEqualTo tests for "object equality using object.Equals". That's what's causing the unexpected behaviour.
Not passing values into the method isn't necessarily a problem, or wouldn't be if ValueSet.Equals performed a value comparison, but ValueSet is a HashSet<uint>, so you can see from that class's method documentation that it doesn't—it uses object.Equals, which tests for reference equality. Thus, your IsEqualTo assertion fails. If you use a more sophisticated matcher that performed a value-type comparison for HashSet, perhaps something closer to what you use in your second A.CallTo, or maybe something using That.Contains, I think you'll have better success.
You may think to use That.IsSameSequenceAs, but be careful if doing so: the HashSet doesn't guarantee the order of the elements in the enumeration, so even if the set has the same elements, you may get a failure.
The second reason the first MustHaveHappened fails:
RunWith changes the contents of the values set between calls to setUser.Use. So the same set is used in two calls, first with 3 elements, then when it has only 2 elements. This means that by the time the first MustHaveHappened call is made, the set has only 2 elements, so the comparison fails. You could see this more clearly by writing an argument formatter for the ValueSet. That would provide more information.
The cause of the mismatch is that when a call is made to a faked method, FakeItEasy captures the arguments. However, for reference types, such as ValueSet (HashSet), only the reference to the argument is kept. Thus, if the object is modified later, in particular between the execution and the verification stages of the test, the object will look different than it did at the time of the faked call. See #jimmy_keen's answer to MustHaveHappened fails when called twice on the same object. There's a little more discussion over at FakeItEasy Issue 306 - Verifying multiple method calls with reference parameters.
In this case, the usual approach is to do as he suggests—provide code to capture the important state of the incoming argument at call time, and then query that saved state later.
You might be able to use something like this:
[Test]
public void CallsUsersWithSetAndReducedSet()
{
    var capturedValueSets = new List<List<uint>>();
    var setUser = A.Fake<SetUser>();

    A.CallTo(() => setUser.Use(A<ValueSet>._)) // matches any call to setUser.Use
        .Invokes((ValueSet theSet) => capturedValueSets.Add(theSet.ToList()));

    ClassUnderTest testInstance = new ClassUnderTest();
    testInstance.RunWith(setUser);

    Assert.That(capturedValueSets, Has.Count.EqualTo(2),
        "not enough calls to setUser.Use");
    Assert.That(capturedValueSets[0], Is.EquivalentTo(new uint[] {1, 2, 3}),
        "bad set passed to first call to setUser.Use");
    Assert.That(capturedValueSets[1], Has.Count.EqualTo(2) & Has.Member(1),
        "bad set passed to second call to setUser.Use");
}
You can see that each time Use is called, we add the contents of the ValueSet argument to capturedValueSets. Then at the end we
make sure 2 calls were made, by checking the length of capturedValueSets
make sure that the first time Use was called, the set had the elements 1, 2, and 3. Is.EquivalentTo checks the two lists but ignores order
make sure that the second time Use was called, the set had 2 elements, one of which was 1
By checking the two captured value sets in turn, all the bits about the scopes and ordered assertions became unnecessary.

Handling errors in an observable sequence using Rx

Is there a way to have an observable sequence to resume execution with the next element in the sequence if an error occurs?
From this post it looks like you need to specify a new observable sequence in Catch() to resume execution, but what if you needed to just continue processing with the next element in the sequence instead? Is there a way to achieve this?
UPDATE:
The scenario is as follows:
I have a bunch of elements that I need to process. The processing is made up of a bunch of steps. I have
decomposed the steps into tasks that I would like to compose.
I followed the guidelines for ToObservable() posted here
to convert my tasks to observables for composition.
So basically I'm doing something like this:
foreach (var element in collection)
{
    var result = from aResult in DoAAsync(element).ToObservable()
                 from bResult in DoBAsync(aResult).ToObservable()
                 from cResult in DoCAsync(bResult).ToObservable()
                 select cResult;
    result.Subscribe(/* register OnNext and OnError handlers here */);
}
or I could do something like this:
var result =
    from element in collection.ToObservable()
    from aResult in DoAAsync(element).ToObservable()
    from bResult in DoBAsync(aResult).ToObservable()
    from cResult in DoCAsync(bResult).ToObservable()
    select cResult;
What is the best way here to continue processing the other elements even if, say, the processing of one of the elements throws an exception? Ideally I would like to be able to log the error and move on.
Both James & Richard made some good points, but I don't think they have given you the best method for solving your problem.
James suggested using .Catch(Observable.Never<Unit>()). He was wrong when he said that "will ... allow the stream to continue" because once you hit an exception the stream must end - that is what Richard pointed out when he mentioned the contract between observers and observables.
Also, using Never in this way will cause your observables to never complete.
The short answer is that .Catch(Observable.Empty<Unit>()) is the correct way to change a sequence from one that ends with an error to one that ends with completion.
You've hit on the right idea of using SelectMany to process each value of the source collection so that you can catch each exception, but you're left with a couple of issues.
You're using tasks (TPL) just to turn a function call into an observable. This forces your observable to use task pool threads which means that the SelectMany statement will likely produce values in a non-deterministic order.
Also, you hide the actual calls that process your data, making refactoring and maintenance harder.
I think you're better off creating an extension method that allows the exceptions to be skipped. Here it is:
public static IObservable<R> SelectAndSkipOnException<T, R>(
    this IObservable<T> source, Func<T, R> selector)
{
    return
        source
            .Select(t =>
                Observable.Start(() => selector(t)).Catch(Observable.Empty<R>()))
            .Merge();
}
With this method you can now simply do this:
var result =
    collection.ToObservable()
        .SelectAndSkipOnException(t =>
        {
            var a = DoA(t);
            var b = DoB(a);
            var c = DoC(b);
            return c;
        });
This code is much simpler, but it hides the exception(s). If you want to hang on to the exceptions while letting your sequence continue then you need to do some extra funkiness. Adding a couple of overloads to the Materialize extension method works to keep the errors.
public static IObservable<Notification<R>> Materialize<T, R>(
    this IObservable<T> source, Func<T, R> selector)
{
    return source.Select(t => Notification.CreateOnNext(t)).Materialize(selector);
}

public static IObservable<Notification<R>> Materialize<T, R>(
    this IObservable<Notification<T>> source, Func<T, R> selector)
{
    Func<Notification<T>, Notification<R>> f = nt =>
    {
        if (nt.Kind == NotificationKind.OnNext)
        {
            try
            {
                return Notification.CreateOnNext<R>(selector(nt.Value));
            }
            catch (Exception ex)
            {
                ex.Data["Value"] = nt.Value;
                ex.Data["Selector"] = selector;
                return Notification.CreateOnError<R>(ex);
            }
        }
        else
        {
            if (nt.Kind == NotificationKind.OnError)
            {
                return Notification.CreateOnError<R>(nt.Exception);
            }
            else
            {
                return Notification.CreateOnCompleted<R>();
            }
        }
    };
    return source.Select(nt => f(nt));
}
These methods allow you to write this:
var result =
    collection
        .ToObservable()
        .Materialize(t =>
        {
            var a = DoA(t);
            var b = DoB(a);
            var c = DoC(b);
            return c;
        })
        .Do(nt =>
        {
            if (nt.Kind == NotificationKind.OnError)
            {
                /* Process the error in `nt.Exception` */
            }
        })
        .Where(nt => nt.Kind != NotificationKind.OnError)
        .Dematerialize();
You can even chain these Materialize methods and use ex.Data["Value"] & ex.Data["Selector"] to get the value and selector function that threw the error out.
I hope this helps.
The contract between IObservable and IObserver is OnNext* (OnCompleted|OnError)?, which is upheld by all operators, even if not by the source.
Your only choice is to re-subscribe to the source using Retry, but if the source returns the same IObservable instance for every subscription you won't see any new values.
Could you supply more information on your scenario? Maybe there is another way of looking at it.
Edit: Based on your updated feedback, it sounds like you just need Catch:
var result =
    from element in collection.ToObservable()
    from aResult in DoAAsync(element).ToObservable().Log().Catch(Observable.Empty<TA>())
    from bResult in DoBAsync(aResult).ToObservable().Log().Catch(Observable.Empty<TB>())
    from cResult in DoCAsync(bResult).ToObservable().Log().Catch(Observable.Empty<TC>())
    select cResult;
This replaces an error with an Empty, which will not trigger the next step in the sequence (since the query uses SelectMany under the hood).