Many of my tests are dependent on the database. I use the following to check the connection before running the test case:
assume(database.isAvailable, "Database is down")
When I add it to each test case, the expected !!! CANCELED !!! status with the correct message is displayed in the output.
When I add it to the beforeEach method:
override def beforeEach() = {
  assume(database.isAvailable, "Database is down")
}
all I see is Exception encountered when attempting to run a suite with the class name, and *** ABORTED *** (reported on the line with the assume call).
Do I really need to add this assumption to each test case?
Apparently this is intended behavior. See
http://www.scalatest.org/user_guide/sharing_fixtures
Mix in a before-and-after trait when you want an aborted suite, not a
failed test, if the fixture code fails.
On that same page there are other alternatives; withFixture is possibly worth a look.
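For instance, overriding withFixture lets you cancel each test individually instead of aborting the whole suite. A minimal, untested sketch, assuming a ScalaTest 3.x AnyFlatSpec-style suite and the database object from the question:
import org.scalatest.{Canceled, Outcome}
import org.scalatest.flatspec.AnyFlatSpec

class DatabaseDependentSpec extends AnyFlatSpec {
  // Runs around every test; returning Canceled marks that single test as canceled
  // instead of aborting the suite, which is what a failing beforeEach does.
  override def withFixture(test: NoArgTest): Outcome = {
    if (database.isAvailable) super.withFixture(test)
    else Canceled("Database is down")
  }

  // ... tests that talk to the database ...
}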
Related
Can I perform an action in XCUITest after a test failure, after an assert, or even when an element is not found or not hittable?
If a test fails or hits an assertion like this assert or fail, the app will stay in that state for hours.
Can I have the test call a method when it fails?
If you execute a single test, it will stop in case of failure.
However, if you execute multiple tests together, it will not stop but instead log the failure, so that the other tests are still executed (see the Report navigator in the leftmost pane of Xcode).
And you can of course execute any method before you call the method that declares a test as failed. Just call it before, say, XCTFail("error message").
EDIT:
In the setUp() function, you should also set continueAfterFailure = true
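Putting the pieces of this answer together, a rough sketch; the failAndRecover and recoverAppState helpers and the "Submit" button are made up for illustration:
import XCTest

class ServiceUITests: XCTestCase {

    override func setUp() {
        super.setUp()
        // keep executing the remaining tests after a failure, as suggested above
        continueAfterFailure = true
    }

    // hypothetical helper: run recovery code first, then report the failure
    func failAndRecover(_ message: String) {
        recoverAppState()
        XCTFail(message)
    }

    // hypothetical cleanup so later tests don't start from a stuck app
    func recoverAppState() {
        XCUIApplication().launch()
    }

    func testSubmitButton() {
        let app = XCUIApplication()
        app.launch()
        guard app.buttons["Submit"].waitForExistence(timeout: 5) else {
            failAndRecover("Submit button was not found")
            return
        }
        app.buttons["Submit"].tap()
    }
}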
I use SOAP UI for testing a REST API. I have a few test cases which are independent of each other and can be executed in random order.
I know that one can prevent the whole run from aborting by disabling the Fail on error option, as shown in this answer on SO. However, it may happen that TestCase1 has prepared certain data for the tests and then breaks in the middle of its run because an assertion fails or for some other reason. TestCase2 then starts running after it and tests some other things; however, because TestCase1 has not had all of its steps (including those that clean up) executed, TestCase2 may fail.
I would like to be able to run all of the tests even if a certain test fails; however, I also want to be able to execute a number of test-case-specific steps when a test fails. In programming terms, I would like to have a finally, where each test case has a number of steps that are executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown Script at the test case level.
A teardown script runs even when a test step fails, so it behaves much like a finally block.
Alternatively, you can try creating your own soft assertion, which will not stop the test case even when a check fails. For example:
def err = []
then whenever there is an error you can do
err.add("Values did not match")
and at the end you can log the collected errors and assert that none occurred:
log.info err
assert err.size() == 0, "There are errors: " + err
This way you can capture errors and do the actual assertion at the end, or alternatively you can use the teardown script provided by SoapUI, as sketched below.
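The teardown script itself isn't shown above, so here is a rough sketch of what a cleanup-oriented TearDown Script at the test case level might look like; the step name "Cleanup" is made up, and testRunner, context and log are the variables SoapUI passes into the script:
// log how each step finished, then run the cleanup step no matter what
testRunner.results.each { stepResult ->
    log.info "${stepResult.testStep.name}: ${stepResult.status}"
}
def cleanupStep = testRunner.testCase.getTestStepByName("Cleanup")
if (cleanupStep != null) {
    cleanupStep.run(testRunner, context)
}
If "Cleanup" is a regular step of the same test case, you would typically disable it in the normal flow so it only runs from the teardown script.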
I am migrating our project from MSTest to NUnit.
I have a scenario where I need to evaluate the condition below:
testContext.CurrentTestOutcome.Equals(UnitTestOutcome.Timeout)
Can you please suggest the NUnit equivalent to MSTest's UnitTestOutcome.Timeout?
The question is not entirely clear. @Francesco B. already interpreted it as meaning "How can I specify a timeout?" and answered accordingly.
I understand you to be asking "How can I detect that my test has timed out?" Short answer - you can't detect it in the test itself. It can only be detected by a runner that is executing your test.
Longer answer...
You can examine the test context in your teardown to see what the outcome of the test was, using TestContext.CurrentContext.Result.Outcome. This is useful if your teardown needs to know whether the test failed.
However, you will never see an outcome of "timed out" because...
1. Your teardown is included in what gets timed by the Timeout attribute.
2. Your teardown won't be called if the test method triggers the timeout.
3. Even if the first two points were not true, there is no "timed out" outcome. The test is marked as a failure and only the message indicates it timed out.
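For completeness, here is roughly what the teardown check mentioned above might look like; a sketch, with the fixture and test names made up:
using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class ServiceTests
{
    [Test]
    public void SomeTest()
    {
        Assert.Pass();
    }

    [TearDown]
    public void LogOutcome()
    {
        // A timed-out test shows up here as Failed (when the teardown is reached at all),
        // never as a dedicated "timed out" state.
        var context = TestContext.CurrentContext;
        if (context.Result.Outcome.Status == TestStatus.Failed)
        {
            TestContext.WriteLine($"{context.Test.Name} failed");
        }
    }
}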
Of course, if I misunderstood the question and you just wanted to know how to specify a timeout, the other answer is what you want. :-)
As per the official documentation, you can use the Timeout attribute:
[Test, Timeout(2000)]
public void PotentiallyLongRunningTest()
{
...
}
Of course you will have to provide the timeout value in milliseconds; past that limit, your test will be listed as failed.
There is a known "rare" case where NUnit doesn't respect the timeout, which has already been discussed.
As I have a collection of Scala tests that connect to remote services (some of which may not be available at the time of test execution), I would like a way of marking Scala tests that should be ignored if they exceed a desired time-out threshold.
Indeed, I could enclose the body of a test in a future and have it auto-pass if the time-out is exceeded, but having slow tests silently pass strikes me as risky. It would be better if the test were explicitly skipped during the test run. So, what I would really like is something like the following:
ignorePast(10 seconds) should "execute a service that is sometimes unavailable" in {
invokeServiceThatIsSometimesUnavailable()
....
}
Looking at the ScalaTest documentation, I don't see this feature supported directly, but I suspect there might be a way to add this capability. Indeed, I could just add a "tag" to "slow" tests and tell the runner not to execute them, but I would rather the tests be automatically skipped when the timeout is exceeded.
I believe that's not something your test framework should be responsible for.
Wrap your invokeServiceThatIsSometimesUnavailable() in an exception handling block and you'll be fine.
try {
  invokeServiceThatIsSometimesUnavailable()
} catch {
  case e: YourServiceTimeoutException => reportTheIgnoredTest()
}
I agree with Maciej that exceptions are probably the best way to go, since the timeout happens within your test itself.
There's also assume (see here), which allows you to cancel a test if some prerequisite fails. You could also use it within a single test, I think.
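In the same spirit, ScalaTest's TimeLimits trait offers cancelAfter (alongside failAfter), which, if I remember the API correctly, cancels rather than fails the enclosed code when the time limit is exceeded. A sketch, assuming ScalaTest 3.x and the invokeServiceThatIsSometimesUnavailable() call from the question:
import org.scalatest.concurrent.{Signaler, ThreadSignaler, TimeLimits}
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.time.SpanSugar._

class FlakyServiceSpec extends AnyFlatSpec with TimeLimits {
  // interrupt the blocked call when the limit is hit (assumes the call is interruptible)
  implicit val signaler: Signaler = ThreadSignaler

  "the client" should "execute a service that is sometimes unavailable" in {
    cancelAfter(10.seconds) {
      invokeServiceThatIsSometimesUnavailable() // the asker's method, assumed in scope
    }
  }
}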
I want to invoke a method when my integration test fails (i.e., when Assert.AreEqual fails). Is there any event delegate that I can subscribe to in the NUnit framework? Of course, the event delegate must be fired when the tests fail.
This is because my tests contain a lot of Assert statements, and I need to log the Assert that caused the problem, along with the assertion information, in my bug tracker. The best way to do this would be to have a delegate method invoked when the test fails.
I'm curious, what problem are you trying to solve by doing this? I don't know of a built-in way to do this with NUnit, but there is a messy way: you can supply a Func and invoke it as the fail message. The body of the Func can provide the delegation you're looking for.
var callback = new Func<string>(() =>
{
    // do something (e.g. log to the bug tracker)
    return "Reason this test failed";
});

// Note: callback.Invoke() is evaluated before the assertion runs,
// so it executes even when the test passes.
Assert.AreEqual("", " ", callback.Invoke());
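If you are on a reasonably recent NUnit 3, there is, if I recall the API correctly, an Assert.That overload that accepts a Func<string> for the failure message; that delegate is only evaluated when the assertion fails, so the side effect does not run for passing tests. A sketch, with NotifyBugTracker standing in for whatever you want to run on failure:
Assert.That(" ", Is.EqualTo(""), () =>
{
    NotifyBugTracker();               // hypothetical call into your logging / bug tracker code
    return "Reason this test failed"; // used as the failure message
});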
It seems that there is no event I can subscribe to in the case of an assert failure.
This sounds like you want to create your own test runner.
There is a distinction between tests (which define the actions) and the running of the tests (actually executing them). In this case, if you want to detect that a test failed, you have to do it when the test is run.
If all you are looking to do is send an email on failure, it may be easiest to create a script (PowerShell/batch) that runs the command-line runner and sends an email on failure (to your bug tracking system?).
If you want more complex interactivity, you may need to consider creating a custom runner. The runner can run the tests and then take action based on the results. In this case you should look in the NUnit.Core namespace for the test runner interface.
For example (untested):
TestPackage package = new TestPackage(@"c:\some\path\to.dll");
SimpleTestRunner runner = new SimpleTestRunner();
if (runner.Load(package))
{
    var results = runner.Run(new NullListener(), TestFilter.Empty, false, LoggingThreshold.Off);
    if (results.ResultState != ResultState.Success)
    {
        // ... do something interesting ...
    }
}
EDIT: better snippet of code https://stackoverflow.com/a/5241900/1961413