I'm using Serenity Core and JBehave. I would like to stop a test and set its outcome to PASS based on some condition; if the condition is not met, the test continues.
How can I stop a JBehave test during suite execution and mark it as PASS, so that the suite then continues with the next test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it will fail an entire scenario and continue with any other scenarios in the story as applicable.
In situations like this, I believe your only solution is through the step methods themselves. Check your "pass" condition in the step method and continue the code only if that condition is not met, as in the sketch below. It's also possible your story's scenario definition is not concise enough; keep it to a simple Given, When, Then instead of breaking the steps up in too detailed a manner.
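A rough sketch of that idea in a Serenity step class (the class, step name, parameters and the "already archived" condition are illustrative, not taken from the question, and AssertJ is assumed to be on the classpath):

import net.thucydides.core.annotations.Step;               // Serenity @Step (package name varies by version)
import static org.assertj.core.api.Assertions.assertThat;

public class ReportSteps {

    @Step
    public void verifyRowCount(boolean reportAlreadyArchived, int actualRows, int expectedRows) {
        // "Pass" condition: if the report was already archived there is nothing
        // left to check, so return early and the step (and scenario) ends green.
        if (reportAlreadyArchived) {
            return;
        }
        // Condition not met: run the real assertion, which may fail the scenario as usual.
        assertThat(actualRows).isEqualTo(expectedRows);
    }
}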
I want to use browser.waitForAngular() in all my e2e test scripts instead of explicit waits. Any help is appreciated.
Usually you don't need to call the .waitForAngular() function manually in your tests, since it is executed automatically before every action you perform on the page - http://www.protractortest.org/#/infrastructure
However, this does not mean you can remove the explicit waits from your code. Your tests will be much more stable if you use both: the automatic wait for Angular AND explicit waits for the element conditions you need.
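For example, a small sketch of combining the two; the page URL, the '#save' locator and the 5000 ms timeout are placeholders:

import { browser, element, by, ExpectedConditions as EC } from 'protractor';

it('saves the form', async () => {
  const saveButton = element(by.css('#save'));

  // Protractor calls waitForAngular() automatically before interacting with the page...
  await browser.get('/form');

  // ...but an explicit wait still helps when an element only becomes usable after
  // non-Angular activity (animations, third-party widgets, etc.).
  await browser.wait(EC.elementToBeClickable(saveButton), 5000);
  await saveButton.click();
});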
I use SOAP UI for testing a REST API. I have a few test cases which are independent of each other and can be executed in random order.
I know that one can keep the whole run from aborting by disabling the option Fail on error, as shown in this answer on SO. However, it can happen that TestCase1 has prepared certain data for the tests and then breaks in the middle of its run because an assertion fails or for some other reason. TestCase2 then starts running after it and tests some other things; however, because TestCase1 did not have all of its steps (including those that clean up) executed, TestCase2 may fail.
I would like to be able to run all of the tests even if a certain test fails, but I also want to be able to execute a number of test-case-specific steps should a test fail. In programming terms, I would like to have a finally, where each test case has a number of steps that are executed regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown Script at the test case level.
In the example below, a test step fails but the TearDown Script still runs, so it behaves much like a finally block.
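A minimal sketch of such a script (SoapUI exposes log, context, testRunner and testCase to the TearDown Script); the clean-up step name is a placeholder:

// Runs after the test case, whether its steps passed or failed
log.info "TearDown for '${testCase.name}', status: ${testRunner.status}"

// Execute the clean-up step even if an earlier step failed and aborted the run
testRunner.runTestStepByName("Cleanup - delete test data")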
Alternatively, you can try creating your own soft assertion which will not stop the test case even when a check fails. For example:
def err = []
then whenever there is an error you can do
err.add("Values did not match")
and at the end, log the collected errors and make the real assertion, so the test case only fails here after all checks have run:
log.info err
assert err.isEmpty() : "There were errors: ${err}"
This way you can capture errors and do the actual assertion at the end. Alternatively, you can use a TearDown Script along the lines of the one below to report any failed steps.
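A sketch of such a reporting TearDown Script, assuming the usual testRunner.results API (the exact status value may differ between SoapUI versions):

// Report every step that failed during this test case run
for (result in testRunner.results) {
    if (result.status.toString() == "FAILED") {
        log.error "Step '${result.testStep.name}' failed: ${result.messages.join('; ')}"
    }
}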
Well, the system we have has a bunch of dependencies, but I'll try to summarize what's going on without divulging too many details.
The test assembly being executed is a .dll, and a lot of its tests call an API.
In the problematic method there are two API calls with an await on them: one to write a record to an external interface, and another to extract all records and read the last one from that interface, both via the API. The test simply checks whether writing the last record was successful in an end-to-end context; that's why there is both a write and then a read.
If we execute the test in Visual Studio, everything works as expected. I also ran it manually via vstest.console.exe on the command line, and the expected results always come out as well.
However, when it comes to the VS Test task in VSTS, it fails for some reason. We've been trying to figure it out, and eventually we reached the point where we printed the list from the 'read' part. It turns out the last record we inserted isn't in the data we pulled, yet if we check the external interface via a different method, we can confirm that the write actually happened. What gives? Why is VSTest getting what looks like an outdated set of records?
We also noticed two things:
1.) For the tests that passed, none of the Console.WriteLine outputs appear in the logs; they only appear for failed tests.
2.) Even though our Data.Should.Be call is at the very end of the TestMethod, the logs report the failure BEFORE they print out the lines! And even then, the printing should happen after reading the list of records, yet when the prints do appear we're still missing the record we just wrote.
Is there some bottom-to-top thing we're missing here? It really seems as though the VSTS vstest is executing the assert before the actual code. The TestMethods do run in the right order, though (the 4th test written top-to-bottom in the code is executed 4th rather than 4th to last), and we need them to run in order because some of the later tests depend on the earlier tests succeeding.
Anything we're missing here? I'd put a source code but there's a bunch of things I need to scrub first if so.
It turns out we were sorely misunderstanding what 'await' does. We're now using .Wait() on the culprit instead, and we'll also go back through the other tests to check for the same problem.
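For anyone hitting the same symptom, a rough sketch of the shape of the fix described above; the client and record types are stand-ins, not the original code, and MSTest plus FluentAssertions are assumed:

using System.Collections.Generic;
using System.Linq;
using System.Threading.Tasks;
using FluentAssertions;
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Stand-in types so the sketch compiles; the real ones sit behind the API from the question.
public class Record { public string Id { get; set; } }

public class ApiClient
{
    private readonly List<Record> store = new List<Record>();
    public Task WriteRecordAsync(Record r) => Task.Run(() => store.Add(r));
    public Task<List<Record>> ReadAllRecordsAsync() => Task.FromResult(store.ToList());
}

[TestClass]
public class RecordWriteTests
{
    [TestMethod]
    public void WriteThenReadBack_LastRecordIsPresent()
    {
        var apiClient = new ApiClient();
        var newRecord = new Record { Id = "e2e-check" };

        // Blocking with .Wait()/.Result makes sure both the write and the read have
        // finished before the assertion runs, instead of the test method returning
        // while the awaited work is still in flight.
        apiClient.WriteRecordAsync(newRecord).Wait();
        var records = apiClient.ReadAllRecordsAsync().Result;

        records.Last().Should().Be(newRecord);
    }
}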
Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
The docs say:
pytest_configure(config): called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config): called before the test process is exited.
The session hooks are the same thing, right?
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. sessionstart in particular is not much documented because the semantics differ depending on whether you are in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a fixture with @pytest.fixture(scope="session") you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave; for single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h", or other options that do not involve running tests, then fixture functions will not execute at all.
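For example, a minimal sketch of such a session-scoped fixture (the resource is just a placeholder; real one-time setup and cleanup would go where the comments indicate):

import pytest

@pytest.fixture(scope="session")
def session_resource(request):
    resource = {"started": True}          # placeholder for real one-time setup

    def teardown():
        resource["started"] = False       # placeholder for real one-time cleanup

    # Runs once per process, after the last test that used the fixture
    request.addfinalizer(teardown)
    return resource

def test_uses_session_resource(session_resource):
    assert session_resource["started"]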
Hope this clarifies.
I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, that means we miss out on running that check against all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had to do is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
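A minimal example of marking just the fragile checks as TODO; the bug reference and the deliberately failing check are placeholders:

use Test::More tests => 1;

TODO: {
    local $TODO = "bug #1234: this data combination still fails";

    # Runs normally, but a failure inside this block is reported as "todo", not as a suite failure
    is( 1 + 1, 3, "placeholder standing in for the fragile assertion" );
}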
I see two major options:
1) Disable the test (commenting it out), with a reference to your bug-tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it.
2) Move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while that suite is green the bug is still there, and if it turns red either the bug is gone or something else is fishy. Of course a link to the bug-tracking system and the bug is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see the comment from @daxim). If not, think of a similar approach:
# In your testing module
package My::Test;
our $TODO;
# ...
if (defined $TODO) {
    # only print warnings instead of failing the test
}

# In a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";