JUnit4 assert methods

Hello, I've written a JUnit test with Eclipse to check the status of a GUI component. I use an assert like textfield.assert("expected message"). I'm looking for a way to get the error message produced by the assert: the message saying that the expected text doesn't match the typed text is printed in the Eclipse console, and I'd like to capture it to generate a report. Is there an easy way, maybe a JUnit method?

int a = 0;
int b = 1;
AssertEquals("The value of A is not equal to the value of B", a, b);
The error message should be printed to the console when you run the above.
If AssertEquals does not meet your needs, you can use AssertTrue, or any of the other methods in the JUnit API.

It sounds like you want to add your own JUnit RunListener.
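As a rough sketch of that idea (the listener and test class names below are placeholders for illustration, not something from your project): a RunListener receives a Failure for every failed assertion, and failure.getMessage() is the same text you see in the Eclipse console, so you can write it into your report instead.

import org.junit.runner.JUnitCore;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Illustrative listener that collects assertion messages for a report
public class ReportListener extends RunListener {
    @Override
    public void testFailure(Failure failure) {
        // failure.getMessage() holds the "expected ... but was ..." text from the assert
        System.out.println(failure.getDescription() + " -> " + failure.getMessage());
        // ...or append it to your report file here
    }
}

// Wiring it up when running tests programmatically (MyGuiTest is a placeholder for your test class):
// JUnitCore core = new JUnitCore();
// core.addListener(new ReportListener());
// core.run(MyGuiTest.class);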

Related

Pytest - How to report test with status error instead of status failure

I have seen several questions on this subject, but no answer matching exactly what I want to do.
What I want to do with Pytest: don't report tests where an error occurred (any kind of error, ZeroDivision for example) as a failure, but as an error.
The reason is simple: I want to know which tests highlight a possible bug and which tests didn't execute as expected for any other reason than a bug (bad configuration, network issue,...).
In this answer, the test calling the fixture is reported as an error, which is good. But I don't know in advance which tests will cause an error, so how can I apply that? If I call the fixture, the test will automatically be reported with an error.
I also tried to use a wrapper function as explained here, but I can't make it work in Pytest.
To summarize, what I want to do is following:
class TestErrorVsFailure:
    def test_fail(self):
        print("Test expected to report a failure")
        assert 2 == 1

    def test_error(self):
        print("Test expected to report an error")
        a = 5 / 0
Result of Pytest execution: both tests are reported as failed
================================== FAILURES ===================================
________________________ TestErrorVsFailure.test_fail _________________________

self = <test_programs_services.TestErrorVsFailure object at 0x00000125F0633640>

    @pytest.mark.poc
    def test_fail(self):
        print("Test expected to report a failure")
>       assert 2 == 1
E       assert 2 == 1

tests\scenario\web\test_programs_services.py:43: AssertionError
________________________ TestErrorVsFailure.test_error ________________________

self = <test_programs_services.TestErrorVsFailure object at 0x00000125F066A490>

    @pytest.mark.poc
    def test_error(self):
        print("Test expected to report an error")
>       a = 5 / 0
E       ZeroDivisionError: division by zero

tests\scenario\web\test_programs_services.py:48: ZeroDivisionError
================= 2 failed in 1.48s =================
I can also try to catch as many exceptions as possible, but raising a custom exception results in a failed test too, so that doesn't seem to be the right solution...
Thanks in advance if you have the solution!
UPDATE:
The pytest-html report also shows 2 tests failed, matching the pytest output.
However, the allure-pytest HTML report shows 1 test failed and 1 test broken, which is what I want.
I am still interested in your feedback if you have any.

How to configure NUnit TestResult output to include Expected Result and Actual Result

I am using NUnit version 2.6.4.14350. The output generated by NUnit does not have the ExpectedResult attribute value and ActualResult attribute value for each test case. Hence, I want to include them in the NUnit TestResult output.
Current output:
<test-case name="MyTestClass.TC-40" description="SUM of Two Numbers :SUM(2,3)" executed="True" result="Success" success="True" time="0.347" asserts="1" />
Desired output:
<test-case name="MyTestClass.TC-40" description="SUM of Two Numbers :SUM(2,3)" executed="True" result="Success" success="True" time="0.347" asserts="1" expectedresult="5" actualresult="5" />
Thanks in advance for helping.
There is no capability in NUnit to add attributes or elements to the XML result. You would have to modify NUnit to do this.
If you use an equality constraint, the error message contains both the expected and actual values. Normally there is no message for success, but you can use Assert.Pass to create one.
You could request a new feature for a future version of NUnit, of course.

Displaying message on a successful unit test in Matlab

Is there a way to display a message when a unit test passes, such as what it was testing?
I know I can show a message when it fails to pass
function testOne(testCase)
    % some test here
    msg = 'This will show when it fails';
    testCase.assertEqual(properties(Object), expProp, msg);
end
Expanding on kyamagu's comment, assuming you are using R2014a, you can write a listener that listens to AssertionPassed events. This listener is a function that takes the source object (the TestCase instance) and an event data instance which contains information about the assertion, like the actual value, the constraint used, and diagnostics passed by the user. If you are doing this for one test, you can just add the listener directly inside the test:
methods(Test)
    function testOne(testCase)
        testCase.addlistener('AssertionPassed', ...
            @(src, evd) disp('This will show when it succeeds'));
        % some test here
        msg = 'This will show when it fails';
        testCase.assertEqual(properties(Object), expProp, msg);
    end
end
If you want to show something about the success for every single assertion or verification, you may be able to get what you want by writing your own plugin. Plugins get handed the correct TestCase instances and can use them to add these listeners for both passing and failing qualifications. Once you write a plugin, you can install it onto a TestRunner and be able to get the desired behavior for all assertions, verifications, etc.
What about helpdlg?
function testOne(testCase)
    % some test here, result is a boolean in variable Test
    if Test
        msg = 'This will show when it succeeds';
        helpdlg(msg);
    else
        msg = 'This will show when it fails';
        testCase.assertEqual(properties(Object), expProp, msg);
    end
end
You could add a timer so the dialog disappears after a certain amount of time... Also, disp is a solution if it's for inline commands...

JUnit4: assumeThat message is not printing

How can I see the error messages of assumeThat in JUnit4?
This test passes but does not print anything. The reason is that to perform the test on read1, I require b to be true. Since b is false, the test does not apply.
boolean b = false;
byte[] read1 = null;
assumeThat("b shold be true to continue testing", b, is(true));
assertThat("read1 should not be null", read1, is(notNullValue()));
I have not found any kind of log with the assumeThat messages. My issue here is that I would like to know which tests passed but did not complete because assumption X failed. It would be nice if a message (for example, similar to when an assertion fails) were still printed for the assumptions.
Strange as it may seem, I haven't been able to find comments on this problem (including on the JUnit page). So there is a chance that I'm just doing something wrong or looking for messages in the wrong place. I'm using Eclipse.
If I understand you correctly, the
assertThat("read1 should not be null", read1, is(notNullValue()));
should only be executed if b == true, but it's not an error if b == false?
Then why don't you simply do:
if (b) {
    assertThat("read1 should not be null", read1, is(notNullValue()));
} else {
    log.warn("b == false, Test on read1 skipped");
}
? Or do you want to have a special test result state ("success-but-suspicious") if that happens?
Update: I just re-read the docs on Assume. It states that if an assumption fails, the entire test is considered "not meaningful" and marked as "ignored". With small, focused tests that should usually give you all the info you need.
Update 2: I just tried it with the JUnit 4.8 in Eclipse 4.2 I have on hand here. That just marks the test as successful. So I guess the JUnit runner in Eclipse might not support that feature correctly.
An error isn't printed because an assumption failure isn't an error. The assume methods are used to short-circuit tests that should not be run due to the value or state of one or more objects (or primitives). Assumptions were designed for the Theories runner, but they can be used in other runners as well.
One common usage of assumptions is when you have multiple test methods that need to perform some set of operations to get an object into a particular state before the actual test case logic can be performed.
Before assumptions, each test would get the object into a desired state, assert that it is in that state, then continue. For example, for a test of a stack, many tests require a non-empty stack, so you would create a stack, push an item, assert the stack is not empty, then do the rest of the test (pop an item, push another item, etc).
The problem comes if you do this pattern in multiple tests, and then a bug gets introduced that causes all of these tests to fail (for example, isEmpty() always returning true). When you run the tests you get so many failures that you don't know where to start.
So instead you have one test that verifies that if you push an item onto an empty stack, isEmpty() returns false. Then any test that needs a non-empty stack does this:
Stack<Object> stack = new Stack<>();
stack.push(new Object());
assumeFalse(stack.isEmpty());
// Continue with the test methods
If you want an error message, you probably want to use an assertion.
Agreeing with the previous answer (by NamshubWriter).
However, if you really want (justifiably) to have a clear indication that the test was ignored because some assumption failed, you can do something like:
@Before
public void before() {
    final String mandatoryPropKey = "system.mandatory-property-for-this-test";
    final String mandatoryPropExpectedVal = "true";
    try { // try/catch because the Assume failure won't print any message (by design)
        Assume.assumeTrue(
            "Assumed that the \"" + mandatoryPropKey + "\" system property is \"" + mandatoryPropExpectedVal + "\".",
            Boolean.valueOf(System.getProperty(mandatoryPropKey, mandatoryPropExpectedVal)));
    } catch (AssumptionViolatedException ave) {
        logger.warn("Not executing this test because this assumption failed: {}", ave.getMessage());
        throw ave;
    }
}
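Along the same lines, if you would rather collect assumption failures in one place for the whole suite instead of a try/catch in each test, a RunListener gets a dedicated callback for them. A minimal sketch (class names are illustrative; this assumes a JUnit 4 version whose RunListener exposes testAssumptionFailure, e.g. 4.12):

import org.junit.runner.JUnitCore;
import org.junit.runner.notification.Failure;
import org.junit.runner.notification.RunListener;

// Illustrative listener that logs every test skipped because an assumption failed
public class AssumptionLogListener extends RunListener {
    @Override
    public void testAssumptionFailure(Failure failure) {
        // failure.getMessage() contains the text passed to assumeThat/assumeTrue
        System.out.println("Assumption failed in " + failure.getDescription()
                + ": " + failure.getMessage());
    }
}

// Wiring it up when running tests programmatically (MyTestClass is a placeholder):
// JUnitCore core = new JUnitCore();
// core.addListener(new AssumptionLogListener());
// core.run(MyTestClass.class);

The advantage over the try/catch is that the logging lives outside the tests, so every assumption in the suite is covered without touching the test code itself.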

FakeItEasy & "params" arguments

I have a method with the following signature.
Foo GetFooById( int id, params string[] children )
This method is defined on an interface named IDal.
In my unit test I write the following:
IDal dal = A.Fake<IDal>();
Foo fooToReturn = new Foo();
fooToReturn.Id = 7;
A.CallTo(()=>dal.GetFooById(7, "SomeChild")).Returns(fooToReturn);
When the test runs, the signature isn't being matched on the second argument. I tried changing it to:
A.CallTo(()=>dal.GetFooById(7, new string[]{"SomeChild"})).Returns(fooToReturn);
But that was also unsuccessful. The only way I can get this to work is to use:
A.CallTo(()=>dal.GetFooById(7, A<string[]>.Ignored )).Returns(fooToReturn);
I'd prefer to be able to specify the value of the second argument so the unit test will break if someone changes it.
Update: I'm not sure when, but the issue has long since been resolved. FakeItEasy 2.0.0 supports the desired behaviour out of the box.
It might be possible to special case param-arrays in the parsing of the call-specification. Please submit an issue at: https://github.com/patrik-hagne/FakeItEasy/issues?sort=created&direction=desc&state=open
Until then, the best workaround is this:
A.CallTo(() => dal.GetFooById(7, A<string[]>.That.IsSameSequenceAs("SomeChild"))).Returns(fooToReturn);