I want to implement browser.waitForAngular() in all my e2e test scripts instead of explicit waits. Any help is appreciated.
Usually you don't need to call the .waitForAngular() function manually in your tests, since it is executed automatically before every action you perform on the page - http://www.protractortest.org/#/infrastructure
However, this does not mean you can remove your explicit waits from your code. Your tests will be much more stable if you use both - the automatic wait for Angular AND explicit waits for the element conditions you need.
I'm using Serenity Core and JBehave. I would like to stop a test and set its outcome to PASS based on some condition; if the condition fails, the test continues.
How can I stop a JBehave test during suite execution, mark it as PASS, and have the suite continue with the next test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it will fail an entire scenario and continue with any other scenarios in the story as applicable.
In situations like this, I believe your only solution is through the step methods themselves. Check your "pass" condition in the step method and continue the code only if that condition fails. It's also possible that your story's scenario definition is not concise enough. Keep it to a simple Given, When, Then instead of breaking the steps up into overly detailed pieces.
I'm trying to automate a lengthy process that can be broken down into several steps. (say Steps 1-5)
I have written a script that separates these into functions and calls them sequentially.
However, we now have the additional requirement of making the script restartable. That is, if it fails in any one of the steps, rerunning the script would cause it to skip all completed steps and retry from the failed one.
Is this at all possible without referencing an external log file?
I've tried using workflows but it seems like recursion isn't supported.
Any ideas?
Some options aside from using a log file:
Use the registry
You can set a registry value to a number recording which step you stopped on. This removes the need for a log file, but it is somewhat similar in that it is still 'external' storage.
Check the task status on each run
Depending on the tasks, the script could 'test' whether, for example, step 3 has already been completed, then check steps 4, 5, etc. until it encounters one that still needs to run, and continue from there. This may be impossible for some tasks, though, or require a lot of overhead code for not much payoff.
Allow the user to continue from within the script.
This is probably the best way of doing it (aside from just using a log file). Run the script in blocks, and when an error is encountered, prompt the user to fix the issue before pressing Enter to re-run the previous script block. This also makes it easy to provide information about what failed.
The main thing here is that once a script quits, it needs an external source of information to know what happened in its last run, or it has to handle the problem in some other way.
My workflow is: start ipcontroller/ipengines, then run 'python test_script.py' several times with different parameters. This script includes a map_async call. The ipengines don't recognize changes to the code between calls to the script, and static class variables are not reset to their defaults. It seems like a magic %reset call would do the trick, but attempting to execute this command on the ipengines does not seem to do anything.
My solution was to have the ipengine start a new subprocess that performs the desired operations. This subprocess has its own memory. Not ideal, but it provides the desired functionality.
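For what it's worth, here is a minimal sketch of that workaround (worker.py and its argument are made-up names): the function the engines execute just launches a fresh interpreter for the real work, so stale module code and class attributes on the engine no longer matter.

# Sketch: each engine task runs the real work in a fresh Python process.
import subprocess
import sys

def run_in_fresh_process(param):
    # "worker.py" is a placeholder for whatever does the actual work;
    # a new interpreter means freshly imported code and default class variables.
    out = subprocess.check_output([sys.executable, "worker.py", str(param)])
    return out.decode().strip()

# Client side (older IPython.parallel API; adjust for ipyparallel):
# from IPython.parallel import Client
# view = Client().load_balanced_view()
# results = view.map_async(run_in_fresh_process, range(10)).get()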
Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
The docs say:
pytest_configure(config): called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config): called before the test process is exited.
Are the session hooks essentially the same thing?
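For context, this is roughly what I mean, all implemented in a conftest.py (empty bodies, just to show the signatures):

# conftest.py (sketch; bodies intentionally empty)
def pytest_configure(config):
    # "called after command line options have been parsed and all plugins
    # and initial conftest files have been loaded" (from the docs)
    pass

def pytest_unconfigure(config):
    # "called before the test process is exited" (from the docs)
    pass

def pytest_sessionstart(session):
    # not described in the hook docs -- when exactly does this run?
    pass

def pytest_sessionfinish(session):
    # not described in the hook docs -- when exactly does this run?
    pass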
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. Sessionstart in particular is not much documented, because its semantics differ depending on whether you are in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a @fixture with scope="session" you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave. For single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h" or other options that do not involve the running of tests, then fixture functions will not execute at all.
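For illustration, a minimal conftest.py sketch of such a session-scoped fixture (the fixture name and the setup/teardown work here are made up):

# conftest.py -- runs once per test process (once per slave under xdist)
import pytest

@pytest.fixture(scope="session", autouse=True)
def per_process_setup(request):
    resource = {"started": True}      # placeholder setup work

    def teardown():
        resource.clear()              # runs once when this process finishes testing
    request.addfinalizer(teardown)

    return resource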
Hope this clarifies.
I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't just let them fail, because then the whole test suite would fail.
2) If we comment them out, we lose that test for all the other datasets as well.
3) I could add a flag to the specific dataset that fails and have the test skip when that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had to do is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
I see two major options:
Disable the test (comment it out), with a reference to your bug-tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it.
Move the failing tests into a separate test suite. You could even reverse the failing assertions, so that while the suite is green the bug is still there, and if it turns red either the bug is gone or something else is fishy. Of course, a link to the bug-tracking system and the bug is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see comment from @daxim). If not, think of a similar approach:
# In your testing module (My::Test, per the usage below)
package My::Test;
our $TODO;

# ... inside your check/assertion code:
if (defined $TODO) {
    # a check failed while $TODO is set:
    # only print warnings instead of failing the test
    warn "TODO ($TODO): check failed\n";
}

# In a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";