What event handler can I subscribe to when Assert.AreEqual fails? - nunit

I want to invoke a method when my integration test fails (i.e., when Assert.AreEqual fails). Is there any event delegate I can subscribe to in the NUnit framework? The delegate would, of course, have to be fired when a test fails.
This is because my tests contain a lot of Assert statements, and I need to log the failing Assert, along with the assertion information that caused the problem, in my bug tracker. The cleanest way to do this would be for a delegate method to be invoked whenever a test fails.

I'm curious: what problem are you trying to solve by doing this? I don't know of a built-in way to do it with NUnit, but there is a messy workaround. You can supply a Func<string> and invoke it to produce the failure message; the body of the Func can provide the delegation you're looking for.
var callback = new Func<string>(() =>
{
    // do something, e.g. report to the bug tracker
    return "Reason this test failed";
});
Assert.AreEqual("", " ", callback.Invoke());

It seems that there is no event I can subscribe to that fires on an assert failure.
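As a side note: if you can move to NUnit 3.x, there is still no failure event, but a [TearDown] method can inspect the outcome of the test that just finished via TestContext. A minimal sketch, assuming NUnit 3 (the fixture name is made up, and the bug-tracker call is a placeholder):

```csharp
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class BugTrackerReportingTests
{
    [TearDown]
    public void ReportFailureToBugTracker()
    {
        // TestContext.CurrentContext.Result describes the test that just ran.
        var result = TestContext.CurrentContext.Result;
        if (result.Outcome.Status == TestStatus.Failed)
        {
            // Placeholder: replace with a real call to your bug tracker.
            TestContext.WriteLine(
                "Test failed: " + TestContext.CurrentContext.Test.FullName +
                " - " + result.Message);
        }
    }
}
```

The teardown runs after every test in the fixture, whether it passed or failed, so each failing Assert gets reported exactly once.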

This sounds like you want to create your own test runner.
There is a distinction between the tests themselves (which define the actions) and the running of the tests. If you want to detect that a test failed, you have to do it at the point where the tests are run.
If all you want to do is send an email on failure, it may be easiest to create a script (PowerShell/batch) that runs the command-line runner and sends an email on failure (to your bug-tracking system?).
If you want more complex interactivity, you may need to consider creating a custom runner. The runner can run the tests and then take action based on the results. In that case, look at the test runner interfaces in the NUnit.Core namespace.
For example (untested):
TestPackage package = new TestPackage(@"c:\some\path\to.dll");
SimpleTestRunner runner = new SimpleTestRunner();
if (runner.Load(package))
{
    TestResult results = runner.Run(new NullListener(), TestFilter.Empty, false, LoggingThreshold.Off);
    if (results.ResultState != ResultState.Success)
    {
        // ... do something interesting ...
    }
}
EDIT: a better snippet of code: https://stackoverflow.com/a/5241900/1961413

Related

How to call certain steps even if a test case fails in SOAP UI to clean up before proceeding?

I use SOAP UI for testing a REST API. I have a few test cases which are independent of each other and can be executed in random order.
I know that one can keep the whole run from aborting by disabling the Fail on error option, as shown in this answer on SO. However, it may be that TestCase1 first prepares certain data for the tests and then breaks in the middle of its run because an assertion fails or for some other reason. TestCase2 then starts running after it and tests some other things; however, because not all of TestCase1's steps (including those that clean up) were executed, TestCase2 may fail.
I would like to be able to run all of the tests even if a certain test fails, but I also want to execute a number of test-case-specific steps whenever a test fails. In programming terms, I would like a finally: each test case would have a number of steps that are executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a Teardown Script at the test case level.
In the example below, a test step fails but the teardown script still runs, so it behaves much like a finally block.
Alternatively, you can try creating your own soft assertions, which do not stop the test case even when they fail. For example:
def err = []
then whenever there is an error you can do
err.add("Values did not match")
and at the end you can log and check
log.info err
assert err.isEmpty() : "There were errors: ${err}"
This way you can capture the errors and do the actual assertion at the end; alternatively, you can use the teardown script approach mentioned above.

How to stop a serenity jbehave test and set test out come

I'm using Serenity Core and JBehave. I would like to stop a test and set its outcome to PASS based on some condition; if the condition fails, the test continues.
How can I stop a JBehave test during suite execution and mark it as PASS, so that the suite then continues with the next test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it fails the entire scenario and continues with any other scenarios in the story, as applicable.
In situations like this, I believe your only solution is in the step methods themselves. Check your "pass" condition in the step method and continue the code only if that condition fails. It's possible your story's scenario definition is not concise enough; keep it to a simple Given, When, Then instead of breaking the steps up in too detailed a manner.

Redirect standard output and standard err when executing a method

I have a program that tests each method in a Test subclass and outputs XML in JUnit's XML format.
For instance:
class ExampleTest : Test
{
  Void testOne()
  {
    ...
  }
}
I want to execute the testOne method and capture the standard output and standard error produced in it. This out and err output will be included in the XML report.
My first idea was to look at sys::Env. The environment class sys::Env has err and out fields, but they are read-only.
My second idea is that sys::Process can be launched for each test method and redirect sys::Process#.err and sys::Process#.out in it, but I'm afraid it will be very slow.
Is there another way to do it?
You won't be able to redirect output from your current process (and really should not).
If the output absolutely has to be stdout/err, you'll need to go the Process route. You'll take the fork/JVM/stream-setup hit, but that may be negligible compared to your test runtime.
A better option would be to log using the Logging API - which will give more control over what gets logged, and where things go.

Scala Test: Auto-skip tests that exceed timeout

As I have a collection of Scala tests that connect to remote services (some of which may not be available at test-execution time), I would like a way to mark Scala tests as ignored if a time-out exceeds a desired threshold.
Indeed, I could enclose the body of a test in a future and have it auto-pass when the time-out is exceeded, but having slow tests silently pass strikes me as risky. It would be better if the test were explicitly skipped during the test run. So what I would really like is something like the following:
ignorePast(10 seconds) should "execute a service that is sometimes unavailable" in {
  invokeServiceThatIsSometimesUnavailable()
  ...
}
Looking at the ScalaTest documentation, I don't see this feature supported directly, but I suspect there might be a way to add this capability. Indeed, I could just tag the "slow" tests and tell the runner not to execute them, but I would rather have the tests skipped automatically when the timeout is exceeded.
I believe that's not something your test framework should be responsible for.
Wrap your invokeServiceThatIsSometimesUnavailable() in an exception handling block and you'll be fine.
try {
  invokeServiceThatIsSometimesUnavailable()
} catch {
  case e: YourServiceTimeoutException => reportTheIgnoredTest()
}
I agree with Maciej that exceptions are probably the best way to go, since the timeout happens within your test itself.
There's also assume (see here), which allows you to cancel a test if a prerequisite fails. You could use it within a single test as well, I think.

Rhino Mocks Calling instead of Recording in NUnit

I am trying to write unit tests for a bit of code involving events. Since I need to raise an event at will, I've decided to rely on Rhino Mocks to do so for me, and then make sure that the results of the event being raised are as expected (when the user clicks a button, values should change in a predictable manner; in this example, the height of the object should decrease).
So I did a bit of research and realized I need an event raiser for the event in question. Then it's as simple as calling eventraiser.Raise(); and we're good.
The code for obtaining the event raiser is as follows (written in C#, more or less copied straight off the net):
using (mocks.Record())
{
    MyControl testing = mocks.DynamicMock<MyControl>();
    testing.Controls.Find("MainLabel", false)[0].Click += null;
    LastCall.IgnoreArguments();
    LastCall.Constraints(Rhino.Mocks.Constraints.Is.NotNull());
    Raiser1 = LastCall.GetEventRaiser();
}
I then test it in playback mode:
using (mocks.Playback())
{
    MyControl thingy = new MyControl();
    int temp = thingy.Size.Height;
    Raiser1.Raise();
    Assert.Greater(temp, thingy.Size.Height);
}
The problem is that when I run these tests through NUnit, they fail. An exception is thrown at the line testing.Controls.Find("MainLabel", false)[0].Click += null;, complaining about adding null to the event listener: "System.NullReferenceException: Object reference not set to an instance of an object".
Now, I was under the impression that code under the mocks.Record heading wouldn't actually be called; instead it would create expectations for calls during playback. However, this is the second time I've had a problem like this (the first involved classes and cases that were a lot more complicated) where NUnit appears to be calling the code normally instead of creating expectations. Can anyone point out what I am doing wrong, or suggest an alternative way to solve the core issue?
I'm not sure, but you might get that behaviour if you haven't made the event virtual in MyControl. If methods, events, or properties aren't virtual, then I don't think DynamicMock can replace their behaviour with recording and playback versions.
Personally, I like to define interfaces for the classes I'm going to mock out and then mock the interface. That way, I'm sure to avoid this kind of problem.
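A minimal sketch of that interface-based approach (all names here are hypothetical, and for clarity the fake is hand-rolled rather than generated by Rhino Mocks): the behaviour under test depends on a small interface exposing the event, so no virtual members are needed, and the test can raise the event on demand:

```csharp
using System;

// Hypothetical seam: the code under test depends on this interface
// instead of on a concrete WinForms control.
public interface IMainLabel
{
    event EventHandler Clicked;
}

// A hand-rolled fake that can raise the event on demand, playing the
// role that the event raiser plays in the Rhino Mocks version.
public class FakeMainLabel : IMainLabel
{
    public event EventHandler Clicked;

    public void RaiseClicked() => Clicked?.Invoke(this, EventArgs.Empty);
}

// Stand-in for the control whose height should shrink on each click.
public class Shrinker
{
    public int Height { get; private set; } = 100;

    public Shrinker(IMainLabel label)
    {
        label.Clicked += (s, e) => Height -= 10;
    }
}

public static class Demo
{
    public static void Main()
    {
        var label = new FakeMainLabel();
        var thingy = new Shrinker(label);
        int before = thingy.Height;
        label.RaiseClicked();
        Console.WriteLine(before > thingy.Height); // prints "True"
    }
}
```

Because the event lives on an interface, any mocking framework (or a plain fake like this one) can intercept it, which sidesteps the non-virtual-member problem entirely.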