Is there a way to specify that you want an NUnit test to fail, meaning that a fail should be reported as a pass and a pass should be reported as a fail? This would be useful when testing your own NUnit extensions. Here is an example of something I would like to be able to do:
[Test]
[ExpectFail]
public void TypeOf_fail() {
    string str = "abc";
    str.Should().Be.TypeOf<int>();
}
This does not compile because [ExpectFail] is an imaginary attribute that illustrates what I want to do, but the code inside the method works fine. This problem is specific to testing an NUnit extension: normally you can simply write your tests to pass, but in this case you need to prove that it is possible to write a failing test using the NUnit extension that you are testing.
I know this is an old post, but here is what has helped me, using NUnit:
[TestCase("SomeValidValue")]
[TestCase("{3X5XFX9X-7XCX-4XCX-8X3X-3XAXEX9X0XBX}", ExpectedException = typeof(AssertionException))]
public void GetSpecificConfiguration(string key)
{
Assert.IsNotNull(Config.Instance.GetRecord(key));
}
This approach allows me to run the same test with two expectations: one succeeding and one failing.
Unit tests should be designed so that:
They set up some state
They run the method under test
They assert that one thing is correct after the method under test has completed
(reference: The Art of Unit Testing by Roy Osherove)
Why are tests that are designed to fail a bad thing? They could fail in unexpected ways, and still be marked as a pass because they failed. In your example, assuming that Should() is the method under test (though this point remains even if it isn't), you write the test above and mark it as 'expected to fail'. It fails. Everything is fine. In a couple of months you come back to Should() and realise it needs some refactoring, so you change its implementation.
Now suppose Should() throws an exception, because you've accidentally introduced a bug. Your test now fails because of the exception, not the logic, but it is marked as expected to fail, and fail it does, so it's still reported as a pass, despite the breaking change.
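To make that concrete, here is a hypothetical sketch (the TypeChecker type and ForValue method are invented purely for illustration) of how a broken refactor slips past an expected-to-fail test:
public static class ShouldExtensions {
    // Hypothetical buggy refactor of the extension under test.
    public static TypeChecker Should(this string value) {
        TypeChecker checker = null;
        // NullReferenceException here is still a "failure", so a test marked
        // as expected-to-fail keeps passing and the bug goes unnoticed.
        return checker.ForValue(value);
    }
}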
The test should be designed to pass, not to fail, that way if it fails in another, unexpected, way you'll be notified. So in your example you should write tests with opposite logic:
[Test]
public void TypeOf_ShouldBeString() {
    string str = "abc";
    str.Should().Be.TypeOf<string>();
}
or:
[Test]
public void TypeOf_ShouldNotBeInt() {
    string str = "abc";
    str.Should().Not.Be.TypeOf<int>();
}
(Not sure of the syntax you're using, so .Not may need replacing with the correct syntax, but the sentiment holds.)
Edit 2: If what you're trying to do is ensure that your Should() method fails (by failing an Assert.* method), then what you want to do is catch the NUnit AssertionException which the Assert.* static methods throw. Try this:
[Test]
[ExpectedException(typeof(AssertionException))]
public void ShouldBeTypeOf_WithInt_Fails() {
    string str = "abc";
    str.Should().Be.TypeOf<int>();
}
What about Assert.Throws<XXXException>(() => x.DoSomething());
The nice thing here is that if the test passes (i.e. the exception was thrown), the return value is the actual exception itself, and you can then interrogate it and make further assertions based on that.
Given that in this case you want to test NUnit extensions, which, I'm assuming, work by making Assert calls, you could use Assert.Throws<AssertionException>(...) and then do the above.
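For example, a minimal sketch of that idea, assuming (as the question suggests) that the Should() extension reports failures through NUnit's Assert calls; the message fragment being checked is purely illustrative:
[Test]
public void ShouldBeTypeOf_WithInt_Fails() {
    string str = "abc";
    // Assert.Throws executes the delegate and returns the caught exception,
    // so the extension's failure message can be inspected afterwards.
    var ex = Assert.Throws<AssertionException>(() => str.Should().Be.TypeOf<int>());
    StringAssert.Contains("Int32", ex.Message);
}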
I'm thinking of writing some similar test plumbing code which I might in turn need tests for, so I'll let you know if I discover anything else in this area.
If you mean that the block of code is expected to throw an exception for the test to pass, here's the code:
[Test]
[ExpectedException(typeof(Exception))]
public void TypeOf_fail() {
    string str = "abc";
    str.Should().Be.TypeOf<int>();
}
Of course, replace Exception with the most specific exception possible.
Googling has led me to general Swift error handling links, but I have a more specific question. I think the answer here is "no, you're out of luck" but I want to double check to see if I'm missing something. This question from a few years ago seems similar and has some answers with gross looking workarounds... I'm looking to see if the latest version of Swift makes something more elegant possible.
Situation: I have a function which is NOT marked with throws, and uses try!.
Goal: I want to create a unit test which verifies that, yep, giving this function the wrong thing will in fact fail and throw a (runtime/fatal) error.
Problems:
When I wrap this function in a do-catch, the compiler warns me that the catch block is unreachable.
When I run the test and pass in the bad arguments, the do-catch does NOT catch the error.
XCTAssertThrows also does not catch the error.
This function is built to have a signature identical to another, silently failing function; I swap the silent one out for this one on simulators so that I can fail loudly during testing (either automated or manual). So I can't just change this to a throwing function, because then the other function would also have to be marked as throwing, and I want it to fail silently.
So, is there a way to throw an unhandled error that I can catch in a unit test?
Alternatively, can I make this function blow up in a testable way without changing the signature?
There is no way to catch errors from a non-throwing call in Swift, and that is exactly what you declare by putting ! after try.
But you can refactor your code so that you have more control from outside the function, like this:
Factor out the throwing function, so you can test it in the right way:
func throwingFunc() throws {
    // Decoding fails on purpose: the string is not valid JSON for an Int.
    let json = "catch me if you can".data(using: .utf8)!
    _ = try JSONDecoder().decode(Int.self, from: json)
}
Write a non-throwing wrapper with a custom error handler:
func nonThrowingFunc(catchHandler: ((Error) -> Void)? = nil) {
    // Without a handler, keep the original crashing behavior.
    guard let handler = catchHandler else { return try! throwingFunc() }
    do {
        try throwingFunc()
    } catch {
        handler(error)
    }
}
So the handler will be called only if you pass one:
// Test the function and fail the test if needed
nonThrowingFunc { error in
XCTFail(error.localizedDescription)
}
And you have the crashing one:
// Crash the program
nonThrowingFunc()
Note
! (force) is designed for situations where you are pretty sure about the result. Good examples:
Decoding hardcoded or static JSON
Force unwrapping hardcoded values
Force-trying a call when you know, at that point, what its implementation is
etc.
If your function is not pure enough and may fail for certain arguments, you should consider NOT forcing it, and instead refactor your code to a safer version.
Swift (until & including current 5.5) postulates explicitly that all errors inside non-throwing function MUST be handled inside(!). So swift by-design has no public mechanism to intervene in this process and generates run-time error.
You might not even like my answer. But here goes:
Though I agree that there are plenty of cases where try! is more useful (or even better) than try, I would also argue that such calls are not meant to be tested, because they signify programmer mistakes. A programmer mistake is unplanned; you cannot test something unplanned. If you expect (or suspect) a mistake to happen in production, then you should not be using try! in the first place. To me this is a violation of basic programming standards.
Throwing functions were added to handle expected errors, and using try! tells the compiler that you expect a mistake will NEVER happen. For example, when you are parsing a hard-coded value that you know will never fail. So why would you ever need to test a mistake that will never happen?
You can also use an assertionFailure if you want to be rigorous in debug but safe in release.
Runtime errors, like:
array index out of bounds
forcibly unwrapping nil values
division by zero
are considered programming errors, and need precondition checks to reduce the chances that they happen in production. The precondition failures are usually caught in the testing phase, as QA people exercise the application from all sides, trying to find implementation flaws.
As they are programming errors, you should not need to test for them. If the execution of the code indeed reaches a point where an assertion fails, it means the other code failed to provide valid data. And that's what you should unit test: the other code.
It's best if you avoid the need for the assertions in the first place. !, try!, and IUOs (implicitly unwrapped optionals) all trap on nil values, so it's better to avoid these constructs unless you're 100% sure that you'll receive only non-nil values.
A programming error most of the time means that your application has reached an unrecoverable state, and there's little that can be done afterwards, so it's better to let it crash than continue with an invalid state.
Thus, Swift doesn't need to expose an API to handle this kind of scenario. The existing workarounds are complicated and fragile, and aren't worth using in real-life applications.
To conclude:
replace the forced unwraps (try! included) with code that can handle nils, and unit test that, or,
unit test the caller code, since that's the actual problematic code.
The latter case assumes that the forced unwrap usage is legitimate, as the code expects the value to be non-nil; the burden then moves to the other piece of code, the provider of the value that's being forcefully unwrapped.
Use the generic function XCTAssertThrowsError in Swift for unit testing.
Asserts that an expression throws an error.
func XCTAssertThrowsError<T>(_ expression: #autoclosure () throws -> T, _ message: #autoclosure () -> String = "", file: StaticString = #filePath, line: UInt = #line, _ errorHandler: (_ error: Error) -> Void = { _ in })
https://developer.apple.com/documentation/xctest/1500795-xctassertthrowserror
I have a parameterised test that unit tests a certain logic. There are several test cases captured by the NUnit TestCaseAttribute.
Now I wish to utilize exactly the same parameters to test a slightly different logic.
I realize that I can deliver the parameters through a different attribute - TestCaseSourceAttribute and use the same source for multiple unit tests.
But I wonder if one can both use TestCaseAttribute (which I find more convenient in this particular test) and reuse the parameters for another test?
My solution involves reflection:
[TestCase(Impl.SqlErrorCode.PartiallyDocumentedColumn, 1978.14, "MyTable", ChangeTypeCode.AddTable, "dbo.MyAuxTable:MyTableId")]
[TestCase(Impl.SqlErrorCode.UndocumentedColumn, 1978.15, "MyAuxTable", ChangeTypeCode.AddTable, "dbo.MyAuxTable:MyAuxTableId")]
[TestCase(Impl.SqlErrorCode.UndocumentedColumn, 1978.16, "MyTable", ChangeTypeCode.AddTable, "dbo.MyTable:MyAuxTableId")]
[TestCase(Impl.SqlErrorCode.NonExistingColumnInComments, 1969.19, "MyTable", ChangeTypeCode.None, "dbo.MyTable:Remarks")]
public async Task AddTableWithBadComments(Impl.SqlErrorCode expectedSqlErrorCode, decimal step, string tableName, int sqlErrorState, string expectedObjectName)
{
// ...
}
private static IEnumerable GetParametersOfAnotherTest(string testName)
{
    var testCaseAttrs = typeof(IntegrationTests).GetMethod(testName).GetCustomAttributes<TestCaseAttribute>();
    return testCaseAttrs.Select(a => a.Arguments);
}
[TestCaseSource(nameof(GetParametersOfAnotherTest), new object[] { nameof(AddTableWithBadComments) })]
public async Task AddTableWithBadCommentsNoVerify(Impl.SqlErrorCode expectedSqlErrorCode, double _step, string tableName, int sqlErrorState, string expectedObjectName)
{
    // A different logic, but with the same parameters.
}
It has some problems though.
So, my question is this - is there an NUnit way to run a test method Y with the parameters of the test method X, where the latter uses TestCaseAttribute to provide the parameters?
I use NUnit 3.7.1.
The actual answer is quite short. The NUnit way to reuse parameters is TestCaseSourceAttribute. :-)
I thought I would explain why your solution doesn't work.
In NUnit 3+, attributes like TestCase and TestCaseSource are not just containers of data. They implement interfaces, which NUnit calls in order to have the attributes operate on a particular test.
Your code is treating TestCaseAttribute as if it were no more than a data store for arguments. But the attribute actually does several things, and some of them differ from what TestCaseSourceAttribute does.
From your code, I can see you figured part of that out yourself. Your first method relies on the attribute converting double to decimal, while your second takes the argument as a double. That difference is of course due to the fact that you can't have a decimal argument to an attribute.
Unfortunately, for a full solution you would have to duplicate, or make allowances for, the other differences between the two attributes, which are all due to the restrictions C# places on attribute arguments. IMO, it's not worth it. It's trivial to create a static array of TestCaseData items and use them for both methods (see the sketch below). If you make your approach work (which is possible), its only advantage will be its cleverness. :-)
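For instance, a minimal sketch of that suggestion, loosely reusing names from the question (the reduced parameter lists and values are illustrative):
private static readonly TestCaseData[] SharedCases = {
    // decimal works directly here: TestCaseData arguments are ordinary
    // expressions, not attribute arguments.
    new TestCaseData(Impl.SqlErrorCode.PartiallyDocumentedColumn, 1978.14m, "MyTable"),
    new TestCaseData(Impl.SqlErrorCode.UndocumentedColumn, 1978.15m, "MyAuxTable")
};

[TestCaseSource(nameof(SharedCases))]
public void AddTableWithBadComments(Impl.SqlErrorCode expectedSqlErrorCode, decimal step, string tableName) {
    // ... original logic ...
}

[TestCaseSource(nameof(SharedCases))]
public void AddTableWithBadCommentsNoVerify(Impl.SqlErrorCode expectedSqlErrorCode, decimal step, string tableName) {
    // ... a different logic, same parameters ...
}
Because the shared cases are ordinary expressions rather than attribute arguments, the decimal parameter needs no double-to-decimal conversion in either test.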
Everyone says that we should use the new assertThat from JUnit, but for comparing big Strings it seems to lack a feature.
Example:
@Test
public void testAssertThat() throws Exception {
    Assert.assertThat("auiehaeiueahuiheauihaeuieahuiaehuieahuaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei",
        CoreMatchers.equalTo("auiehaeiueahuiheauihaeuieahuiaehuieaheaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei"));
}
@Test
public void testAssertEquals() throws Exception {
    Assert.assertEquals("auiehaeiueahuiheauihaeuieahuiaehuieahuaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei",
        "auiehaeiueahuiheauihaeuieahuiaehuieaheaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei");
}
assertEquals prints an easier-to-read error message:
org.junit.ComparisonFailure:
expected:<...uihaeuieahuiaehuieah[u]aiehiaueheauihaeuiha...> but
was:<...uihaeuieahuiaehuieah[e]aiehiaueheauihaeuiha...>
while assertThat prints this:
java.lang.AssertionError: Expected:
"auiehaeiueahuiheauihaeuieahuiaehuieaheaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei"
but: was "auiehaeiueahuiheauihaeuieahuiaehuieahuaiehiaueheauihaeuihaeuiaehuiaehuiaehuiaehaeuihaei"
Is there a way to get the same behavior with assertThat?
The friendly message org.junit.ComparisonFailure: expected... comes from the way JUnit implements assertEquals for String inputs.
In that case, JUnit throws org.junit.ComparisonFailure if the String comparison fails.
In your IDE, the comparison is indeed more readable. For example, in Eclipse, you can double-click on the failed JUnit test to display a string comparison view.
assertThat has different semantics, and the Javadoc says so explicitly:
Asserts that actual satisfies the condition specified by matcher. If not, an AssertionError is thrown with information about the matcher and failing value.
And as the name implies, AssertionError has wider semantics.
To conclude: if you want to keep the friendly message for Strings, you should go on using assertEquals for String comparisons.
I'm working on an expression evaluator. There is an evaluate() function which is called many times depending on the complexity of the expression processed.
I need to break and investigate when this method returns null. There are many paths and return statements.
It is possible to break on the method exit event, but I can't find how to put a condition on the returned value.
I got stuck on that frustration too. One can inspect (and write conditions on) named variables, but not on something unnamed like a return value. Here are some ideas, for whoever might be interested:
You could include something like evaluate() == null in the breakpoint's condition. Tests performed (Eclipse 4.4) show that in such a case the function is executed again for the breakpoint's purposes, but this time with the breakpoint disabled, so at least you avoid a stack overflow situation. Whether this is useful depends on the nature of the function under consideration: will it return the same value at breakpoint time as at run time? (Some sample code to test:)
class TestBreakpoint {
    int counter = 0;
    boolean eval() { /* <== breakpoint here, [x]on exit, [x]condition: eval()==false */
        System.out.println("Iteration " + ++counter);
        return true;
    }
    public static void main(String[] args) {
        TestBreakpoint app = new TestBreakpoint();
        System.out.println("STARTED");
        app.eval();
        System.out.println("STOPPED");
    }
}
// RESULTS:
// Normal run: shows 1 iteration of eval()
// Debug run: shows 2 iterations of eval(), no stack overflow, no stop on breakpoint
Another way to make it easier (to potentially do debugging in future) would be to have coding conventions (or personal coding style) that require one to declare a local variable that is set inside the function, and returned only once at the end. E.g.:
public MyType evaluate() {
    MyType result = null;
    if (conditionA) result = new MyType('A');
    else if (conditionB) result = new MyType('B');
    return result;
}
Then you can at least do an exit breakpoint with a condition like result == null. However, I agree that this is unnecessarily verbose for simple functions, is a bit contrary to flow that the language allows, and can only be enforced manually. (Personally, I do use this convention sometimes for more complex functions (the name result 'reserved' just for this use), where it may make things clearer, but not for simple functions. But it's difficult to draw the line; just this morning had to step through a simple function to see which of 3 possible cases was the one fired. For today's complex systems, one wants to avoid stepping.)
Barring the above, you would need to modify your code case by case, as in the previous point, so that the function assigns its return value to some variable which you can test. If some work policy disallows such non-functional changes, you are quite stuck... It is of course also possible that such a rewrite inadvertently resolves the bug, if the original code was a bit convoluted, so beware of reverting to the original after debugging, only to find that the bug is back.
You didn't say what language you were working in. If it's Java or C++ you can set a condition on a Method (or Function) breakpoint using the breakpoint properties. Here are images showing both cases.
In the Java example you would uncheck Entry and put a check in Exit.
[Image: Java Method Breakpoint Properties dialog]
[Image: C++ Function Breakpoint Properties dialog]
This is not yet supported by the Eclipse debugger; it has been filed as an enhancement request, and I'd appreciate it if you vote for it:
https://bugs.eclipse.org/bugs/show_bug.cgi?id=425744
If I write a parameterized NUnit test, using something like [TestCaseSource] or [ValueSource], NUnit will pass the parameters directly to my test method. But is there any other way to access those parameters, e.g. from SetUp, or from a helper method (without having to explicitly pass the parameter value to that helper method)?
For example, suppose I have three different scenarios (maybe it's "rising rates", "falling rates", and "constant rates"). I'm writing tests for a particular calculation, and some tests will have the same behavior in all three scenarios; others in two of the three (and I'll write a second test for the other scenario); others will have a separate test for each scenario. Parameterized tests seem like a good way to model this; I can write a strategy object for each scenario, and parameterize the tests based on which scenarios each test should apply to.
I can do something like this:
public IEnumerable<RateStrategy> AllScenarios {
get {
yield return new RisingRatesStrategy();
yield return new FallingRatesStrategy();
yield return new ConstantRatesStrategy();
}
}
[TestCaseSource("AllScenarios")]
public void SomethingThatIsTheSameInAllScenarios(RateStrategy scenario) {
InitializeScenario(scenario);
... arrange ...
... act ...
... assert ...
}
The downside to this is that I need to remember to call InitializeScenario in every test. This is easy to mess up, and it also makes the tests harder to read: in addition to the attribute that says exactly which scenarios this test applies to, I need an extra line of code cluttering up my test, saying that oh yes, there are scenarios.
Is there some other way I could access the test parameters? Is there a static property, similar to those on TestContext, that would let me access the test's parameters from, say, my SetUp method, so I could make my tests more declarative (convention-based) and less repetitive?
(TestContext looked promising, but it only tells me the test's name and whether it passed or failed. The test's parameters are sort of there, but only as part of a display string, not as actual objects; I can't grab the strategy object and start calling methods on it.)
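One avenue worth checking, depending on your NUnit version: later NUnit 3.x releases expose the current test's parameters through TestContext.CurrentContext.Test.Arguments. Assuming that property is available in your version (it may not have been at the time of the question), a convention-based SetUp could look roughly like this, reusing RateStrategy and InitializeScenario from above:
[SetUp]
public void SetUpScenario() {
    // Arguments holds the current test's parameters as an object[];
    // assumption: your NUnit version exposes this property.
    var args = TestContext.CurrentContext.Test.Arguments;
    if (args.Length > 0 && args[0] is RateStrategy scenario)
        InitializeScenario(scenario);
}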