UnitTestOutcome.Timeout equivalent in NUnit

I am migrating our Project from MSTest to NUnit.
I have a scenario where I need to check the condition below:
testContext.CurrentTestOutcome.Equals(UnitTestOutcome.Timeout)
Can you please suggest the NUnit equivalent to MSTest's UnitTestOutcome.Timeout?

The question is not entirely clear. @Francesco B. already interpreted it as meaning "How can I specify a timeout?" and answered accordingly.
I understand you to be asking "How can I detect that my test has timed out?" Short answer - you can't detect it in the test itself. It can only be detected by a runner that is executing your test.
Longer answer...
You can examine the test context in your teardown to see what the outcome of the test was, using TestContext.CurrentContext.Result.Outcome. This is useful if your teardown needs to know that the test has failed.
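For illustration, here is a minimal sketch of that check in a TearDown method (NUnit 3; the class and test names are placeholders):

using NUnit.Framework;
using NUnit.Framework.Interfaces;

[TestFixture]
public class OutcomeAwareTests
{
    [TearDown]
    public void AfterEachTest()
    {
        // Outcome.Status reports Passed, Failed, Skipped or Inconclusive;
        // a timeout shows up as Failed rather than as a dedicated status.
        if (TestContext.CurrentContext.Result.Outcome.Status == TestStatus.Failed)
        {
            TestContext.WriteLine("Test failed: " + TestContext.CurrentContext.Result.Message);
        }
    }

    [Test]
    public void SomeTest()
    {
        Assert.Pass();
    }
}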
However, you will never see an outcome of "timed out" because...
Your teardown is included in what gets timed by the Timeout attribute.
Your teardown won't be called if the test method triggers a timeout.
Even if the first two points were not true, there is no "timed out" outcome. The test is marked as a failure and only the message indicates it timed out.
Of course, if I misunderstood the question and you just wanted to know how to specify a timeout, the other answer is what you want. :-)

As per the official documentation, you can use the Timeout attribute:
[Test, Timeout(2000)]
public void PotentiallyLongRunningTest()
{
    ...
}
Of course you will have to provide the timeout value in milliseconds; past that limit, your test will be listed as failed.
There is a known "rare" case where NUnit doesn't respect the timeout, which has already been discussed.

Related

Letting Concourse retry a build which failed because of a flaky issue

According to the Concourse documentation:
If any step in the build plan fails, the build will fail and subsequent steps will not be executed
That makes sense. However, I'm wondering how I could deal with flaky steps.
For instance, if I have a pipeline with
a get step with trigger: true
and then a task which performs several operations, including an HTTP call to an external service.
If the HTTP call fails because of a temporary network error, then it makes sense that Concourse fails the build. But I would also appreciate a way to tell Concourse that this type of error does not mean that the current version is corrupted, and that it should automatically retry the build after some time.
I've looked for it in the Concourse documentation but couldn't find such a feature. Is it possible?
Check out the attempts step modifier; here is the example from the docs:
plan:
- get: foo
- task: unit
  file: foo/unit.yml
  attempts: 10
It will attempt to run the task 10 times before it declares the task failed.
Using attempts as explained in the other answer can be an option. But, before going that route, I would think more about the possible consequences and alternatives.
Attempts has two potential problems:
it cannot know whether the failure is due to a flake or to a real error. If it is due to a real error, it will keep retrying the task, say, 10 times, potentially wasting compute resources (it depends on how heavy the task is).
it will work as expected only if the task is as focused as possible and idempotent. For example, if the flaky HTTP request you mention comes after other operations that change the external world, then you must ensure (when designing the task) that redoing those operations because the HTTP request flaked is safe.
If you know that your task is not subject to these kinds of problems, then attempts can make sense.
On the other hand, this discussion makes us realize that maybe we can restructure the pipeline to be more Concourse idiomatic.
Since you mention an HTTP request, another option is to proxy that HTTP request via a Concourse resource (see https://concourse-ci.org/implementing-resource-types.html). Once done, the side-effect is visible in the pipeline (instead of being hidden in the task) and its success could be made optional with try or another hook modifier (see https://concourse-ci.org/try-step.html and https://concourse-ci.org/modifier-and-hook-steps.html).
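As a rough sketch, assuming the HTTP call is wrapped in a hypothetical resource called external-http-call (the resource name and params are illustrative only), the plan could look something like this:

plan:
- get: foo
  trigger: true
- task: unit
  file: foo/unit.yml
- try:
    put: external-http-call   # hypothetical resource proxying the HTTP request
    params:
      payload: foo/build-info.json

With try, a failure of the put step no longer fails the build; you could instead attach an on_failure hook to it if you want to react to the failure.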
The trade-off in this case is the time to write your own Concourse resource (in case you don't find a community-provided one). Only you are in the position to take this decision. What I can say is that writing a resource is not that complicated, once you get familiar with the concept. For some tricks on quick iterations during development, that apply to any Concourse resource, you can have a look at https://github.com/Pix4D/cogito/blob/master/CONTRIBUTING.md#quick-iterations-during-development.

How to call certain steps even if a test case fails in SOAP UI to clean up before proceeding?

I use SOAP UI for testing a REST API. I have a few test cases which are independent of each other and can be executed in random order.
I know that one can disable aborting the whole run by disabling the option Fail on error, as shown in this answer on SO. However, it can happen that TestCase1 first prepares certain data in order to run its tests and then breaks in the middle of its run because an assertion fails or for some other reason. Now TestCase2 starts running after it and tests some other things; however, because not all of TestCase1's steps (including those that clean up) were executed, it may fail.
I would like to be able to run all of the tests even if a certain test fails; however, I also want to be able to execute a number of test-case-specific steps should a test fail. In programming terms, I would like to have a finally where each test case has a number of steps that will be executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown Script at the test case level.
Even when a test step fails, the TearDown Script still runs, so it behaves much like a finally block.
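A minimal sketch of such a TearDown Script, assuming the test case contains a utility step named "Cleanup" (the step name and the logging are illustrative):

// TestCase-level TearDown Script (Groovy)
log.info "Test case finished with status: ${testRunner.status}"

// Run the cleanup step regardless of whether the test passed or failed,
// i.e. the "finally" behaviour asked about in the question.
def cleanupStep = testRunner.testCase.getTestStepByName("Cleanup")
if (cleanupStep != null) {
    cleanupStep.run(testRunner, context)
}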
Alternatively, you can try creating your own soft assertion which will not stop the test case even if a check fails. For example:
def err = []
Then whenever there is an error you can do:
err.add("Values did not match")
At the end you can log the collected errors and make the actual assertion:
log.info err
assert err.isEmpty() : "There were errors: ${err}"
This way you can capture errors as you go and do the actual assertion at the end, or alternatively use the TearDown Script mechanism provided by SoapUI described above.

How to stop a Serenity JBehave test and set the test outcome

I'm using Serenity Core and JBehave. I would like to stop the test and set the test outcome to PASS based on some condition; if the condition fails, the test continues.
How could I stop the JBehave test during suite execution and set it to PASS, so that the suite then continues with another test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it will fail an entire scenario and continue with any other scenarios in the story as applicable.
In situations like this, I believe your only solution is through the step methods themselves. Check your "pass" condition in the step method and then continue the code only if that condition fails. Could it be that your story's scenario definition is not concise enough? Keep it to a simple Given, When, Then instead of breaking the steps up in too detailed a manner.
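For example, a step method could short-circuit like this (a sketch only; the step text and helper methods are made up for illustration):

import org.jbehave.core.annotations.When;

public class ProcessingSteps {

    @When("the optional processing is performed")
    public void performOptionalProcessing() {
        if (desiredConditionAlreadyMet()) {
            // Condition holds: skip the rest of the work and let the step,
            // and therefore the scenario, simply pass.
            return;
        }
        // Condition does not hold: carry on with the remaining work.
        runRemainingProcessing();
    }

    private boolean desiredConditionAlreadyMet() {
        // Placeholder for the real check.
        return false;
    }

    private void runRemainingProcessing() {
        // Placeholder for the rest of the step's logic.
    }
}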

Scala Test: Auto-skip tests that exceed timeout

As I have a collection of scala tests that connect with remote services (some of which may not be available at the time of test execution), I would like to have a way of indicating Scala tests that should be ignored, if the time-out exceeds a desired threshold.
Indeed, I could enclose the body of a test in a future and have it auto-pass if the time-out is exceeded, but having slow tests silently pass strikes me as risky. It would be better if the test were explicitly skipped during the test run. So, what I would really like is something like the following:
ignorePast(10 seconds) should "execute a service that is sometimes unavailable" in {
invokeServiceThatIsSometimesUnavailable()
....
}
Looking at the ScalaTest documentation, I don't see this feature supported directly, but I suspect that there might be a way to add this capability? Indeed, I could just add a "tag" to "slow" tests and tell the runner not to execute them, but I would rather the tests be automatically skipped when the timeout is exceeded.
I believe that's not something your test framework should be responsible for.
Wrap your invokeServiceThatIsSometimesUnavailable() in an exception handling block and you'll be fine.
try {
  invokeServiceThatIsSometimesUnavailable()
} catch {
  case e: YourServiceTimeoutException => reportTheIgnoredTest()
}
I agree with Maciej that exceptions are probably the best way to go, since the timeout happens within your test itself.
There's also assume (see here), which allows you to cancel a test if some prerequisite fails. You could also use it within a single test, I think.
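For instance, one way to combine a time limit with cancellation is the following sketch (the 10-second budget, the spec class and the service call are assumptions; cancelled tests are reported as canceled, not passed):

import org.scalatest.concurrent.{Signaler, ThreadSignaler, TimeLimits}
import org.scalatest.exceptions.TestFailedDueToTimeoutException
import org.scalatest.flatspec.AnyFlatSpec
import org.scalatest.time.SpanSugar._

class FlakyServiceSpec extends AnyFlatSpec with TimeLimits {

  // Interrupt the test thread when the time limit is hit.
  implicit val signaler: Signaler = ThreadSignaler

  // Placeholder for the remote call from the question.
  def invokeServiceThatIsSometimesUnavailable(): Unit = ???

  "a sometimes-unavailable service" should "be exercised when it responds in time" in {
    try {
      failAfter(10.seconds) {
        invokeServiceThatIsSometimesUnavailable()
      }
    } catch {
      case _: TestFailedDueToTimeoutException =>
        cancel("Service did not respond within 10 seconds; skipping this test")
    }
  }
}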

How should I deal with failing tests for bugs that will not be fixed

I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, then we miss out on running that test against all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had todo is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
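A minimal sketch of such a TODO block (the bug number and the test subroutine are placeholders):

use Test::More tests => 1;

TODO: {
    local $TODO = "Bug #1234: fails for this rare combination of data";

    # Runs and reports as usual, but a failure is flagged as "todo"
    # and does not fail the suite.
    ok( run_combination_test(), "rare data combination handled" );
}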
I see two major options:
disable the test (commenting it out), with a reference to your bug tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for this bug
move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while the suite is green the bug is still there, and if it becomes red either the bug is gone or something else is fishy. Of course a link to the bug tracking system and bug is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see comment from @daxim). If not, think of a similar approach:
# In your testing module
our $TODO;
# ...
if (defined $TODO) {
    # only print warnings
}
# in a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";