Is it possible to reorder the NUnit tests at run-time before the execution begins?

We have quite a lot of tests that need to bypass the load balancer in order to talk directly to a specific web server.
Each test is decorated with a TestCaseSource attribute specifying a function that, at run-time, determines the list of web servers to hit.
So, if we have n tests T1, T2, ..., Tn and m Web Servers W1, W2, ..., Wm (discovered at run-time), the tests run in the following order:
T1W1
T1W2
...
T1Wm
T2W1
T2W2
...
T2Wm
...
TnW1
TnW2
...
TnWm
Now, I need them to run in a different order, namely:
T1W1
T2W1
...
TnW1
T1W2
T2W2
...
TnW2
...
T1Wm
T2Wm
...
TnWm
I understand that I can modify the test name using the TestCaseData.TestName property. But doing so would still run the child test cases together. For example, see below:
The tests nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge and nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge run one after another rather than:
nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app01_
nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app02_
So essentially, renaming the test cases does not split the child test cases. Not good for me.
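For context, the tests currently look roughly like this (simplified; the source method and names are illustrative, not my actual code):

using System.Collections.Generic;
using NUnit.Framework;

public class ShiftTests
{
    // stand-in for the run-time discovery of web servers behind the load balancer
    public static IEnumerable<TestCaseData> WebServers()
    {
        foreach (var server in new[] { "nan4dfc1app01", "nan4dfc1app02" })
            yield return new TestCaseData(server)
                .SetName(server + "_RegisterAndStartShiftAndEnsureInvalidBadge");
    }

    [TestCaseSource(nameof(WebServers))]
    public void RegisterAndStartShiftAndEnsureInvalidBadge(string webServer)
    {
        // talks directly to webServer, bypassing the load balancer
    }
}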
So, is there a way to change the order at run-time the way I need it?

It's not possible to do this with a TestCaseSourceAttribute. All the test cases generated for a single test method are run together.
The other mechanism for grouping tests is by fixture. If you made your class a parameterized fixture and passed it the web servers using TestFixtureSourceAttribute, then you could control the order of the tests within each fixture.
You would save the passed in parameter for the fixture as an instance member and use it within every test. This is probably simpler and easier to read than what you are doing anyway, because there is only one reference to the source rather than many.
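A rough sketch of that layout, with illustrative class and server names:

using System.Collections.Generic;
using NUnit.Framework;

public static class WebServers
{
    // stand-in for your run-time discovery of servers
    public static IEnumerable<string> All()
    {
        yield return "nan4dfc1app01";
        yield return "nan4dfc1app02";
    }
}

[TestFixtureSource(typeof(WebServers), nameof(WebServers.All))]
public class DirectServerTests
{
    private readonly string _webServer;

    public DirectServerTests(string webServer)
    {
        _webServer = webServer;   // saved once, used by every test in this fixture
    }

    [Test]
    public void RegisterAndStartShiftAndEnsureInvalidBadge()
    {
        // talk directly to _webServer, bypassing the load balancer
    }
}

NUnit creates one fixture instance per server, and all the tests of one instance run together, which gives you the T1W1, T2W1, ..., TnW1, T1W2, ... ordering you are after.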

Related

NUnit running classes in parallel

I am a little bit confused about the Parallelizable attribute of NUnit:
Say I have 3 classes, each filled with some tests:
ClassA
- Test1
- Test2
- Test3
ClassB
- Test1
ClassC
- Test1
- Test2
I would like to run every test in ClassA and ClassB in parallel (I don't care about the order).
I would also like to run ClassC while ClassA and ClassB are running, but within this class I want to keep the order in which I specified the tests.
So my question is: how should I set the attributes to get behaviour like this?
I checked the docs (https://github.com/nunit/docs/wiki/Framework-Parallel-Test-Execution) but I am still confused.
Starting simple...
1) If you do nothing with ParallelizableAttribute, then nothing runs in parallel. :-)
2) If you add Parallelizable to each fixture, then the three fixtures will run in parallel, but the individual tests will not. That is, up to three things can be running at one time, one from each class.
3) If you add [Parallelizable(ParallelScope.Fixtures)] at the assembly level, the effect is the same as (2). You should only do this if almost all of your fixtures will successfully run in parallel, in which case you would mark those that can't as [NonParallelizable]. My experience in helping people is that too many people do this without realizing that their tests may not always run correctly in parallel when not written to do so. Starting out, it's safest to default to non-parallel and only add it when it works for you.
4) Starting with (2), change the attribute on A and B to [Parallelizable(ParallelScope.All)] or [Parallelizable(ParallelScope.Self | ParallelScope.Children)]. I like the longer form because it's much clearer to readers as to what it does. This will have exactly the effect that you want (see the sketch below).
One more note: you should probably make sure that any fixture in which you specify the order of tests does not run in parallel. NUnit lets you specify both parallel and order without error. In that case, it simply starts the tests in the order you give, but that may not be what you intended.
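A minimal sketch of that setup (test bodies omitted; Order is one way to pin the sequence inside ClassC):

using NUnit.Framework;

[TestFixture]
[Parallelizable(ParallelScope.Self | ParallelScope.Children)]   // the fixture and its tests all run in parallel
public class ClassA
{
    [Test] public void Test1() { }
    [Test] public void Test2() { }
    [Test] public void Test3() { }
}

[TestFixture]
[Parallelizable(ParallelScope.Self | ParallelScope.Children)]
public class ClassB
{
    [Test] public void Test1() { }
}

[TestFixture]
[Parallelizable]   // ParallelScope.Self: runs alongside A and B, but its own tests run one at a time, in order
public class ClassC
{
    [Test, Order(1)] public void Test1() { }
    [Test, Order(2)] public void Test2() { }
}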

How to stop a serenity jbehave test and set test out come

I'm using Serenity Core and JBehave. I would like to stop the test and set the test outcome to PASS based on some condition; if the condition fails, the test continues.
How can I stop a JBehave test during suite execution and mark it as PASS, so that the suite then continues with the next test?
To the best of my knowledge, JBehave is designed to optionally stop a test upon failure only. By default it will fail an entire scenario and continue with any other scenarios in the story as applicable.
In situations like this, I believe your only solution is through the step methods themselves. Check your "pass" condition in the step method and continue the code only if that condition fails. It's possible your story's scenario definition is not concise enough? Keep it to a simple Given, When, Then instead of breaking the steps up in too detailed a manner.
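For example, a rough sketch of that idea in a step class (the step text, class name, and condition check are hypothetical):

import org.jbehave.core.annotations.Then;

public class BadgeSteps {

    @Then("the invalid badge is rejected")
    public void theInvalidBadgeIsRejected() {
        if (passConditionAlreadyMet()) {
            // treat the step as passed and skip the remaining checks
            return;
        }
        // ...run the normal assertions only when the condition fails
    }

    private boolean passConditionAlreadyMet() {
        // hypothetical condition check
        return false;
    }
}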

Managing multiple anylogic simulations within an experiment

We are developing an ABM under AnyLogic 7 and are at the point where we want to make multiple simulations from a single experiment. Different parameters are to be set for each simulation run so as to generate results for a small suite of standard scenarios.
We have an experiment that auto-starts without the need to press the "Run" button. Subsequent presses of Run do increment the experiment counter and rerun the model.
What we'd like is a way to have the auto-run, or single press of Run, launch a loop of simulations. Within that loop would be the programmatic adjustment of the variables linked to passed parameters.
EDIT: One wrinkle is that some parameters are strings. The Optimization and Parameter Variation experiments don't lend themselves to enumerating a set of strings to be used across a set of simulation runs; you can only set a single string per parameter, applied to all the simulation runs within one experiment.
We've used the help sample "Running a Model from Outside Without Presentation Window" to add the auto-run capability to the initial experiment setup block of code. A method to wait for Run 0 to complete, then dispatch Run 1, 2, etc., is still needed.
Pointers to tutorial models with such features, or to a snip of code for the experiment's java blocks are much appreciated.
Maybe I don't understand your need, but this certainly sounds like you'd want to use a "Parameter Variation" experiment. You can specify which parameters should be varied in which steps, and running the experiment automatically starts as many simulation runs as needed, all without animation.
Hope that helps.
Like you, I was confronted with this problem. My aim was to use parameter variation with a model where the variation was on non-numeric data, and I knew the number of runs to start in advance.
I succeeded in this task with the help of a custom experiment.
First, I built an experiment typed as "multiple run" and created my GUI (the user was able to select the string values used in each run).
Then I created a new Java class which inherits from that "multiple run" experiment.
This class (called MyMultipleRunClass) contained:
- an override of the getMaximumIterations method from the default experiment, to give the default AnyLogic callback the correct number of iterations (the iteration index was also used to retrieve my parameter value from an array),
- an implementation of the static method start:
public static void start() {
    // AnyLogic internal (_xjal) bootstrap calls, as used by the generated experiment code
    prepareBeforeExperimentStart_xjal(MyMultipleRunClass.class);
    MyMultipleRunClass ex = new MyMultipleRunClass();
    ex.setCommandLineArguments_xjal(null);
    ex.setup(null);
}
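And a rough sketch of the getMaximumIterations override from the first bullet (the base class name and the array are illustrative; it assumes the generated multiple-run experiment exposes getMaximumIterations, as described above):

public class MyMultipleRunClass extends MyGeneratedMultipleRunExperiment {   // hypothetical generated base class

    // the non-numeric (string) values to use, one per run
    static final String[] SCENARIOS = { "scenarioA", "scenarioB", "scenarioC" };

    @Override
    public int getMaximumIterations() {
        return SCENARIOS.length;   // one iteration per string value
    }

    // the static start() method shown above also lives in this class
}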
The experiment to run is then the "empty" custom experiment, which automatically starts the other multiple-run experiment through the subclass presented above.
Maybe a shorter path exists, but from my point of view AnyLogic is used correctly (no tricks with non-exposed interfaces) and it works as expected.

pytest: are pytest_sessionstart() and pytest_sessionfinish() valid hooks?

Are pytest_sessionstart(session) and pytest_sessionfinish(session) valid hooks? They are not described in the dev hook docs or the latest hook docs.
What is the difference between them and pytest_configure(config)/pytest_unconfigure(config)?
The docs say:
pytest_configure(config): called after command line options have been parsed and all plugins and initial conftest files have been loaded.
and
pytest_unconfigure(config): called before the test process is exited.
So the sessionstart/sessionfinish hooks are the same thing, right?
Thanks!
The bad news is that the situation with sessionstart/configure is not very well specified. sessionstart in particular is not much documented, because the semantics differ depending on whether you are in the xdist/distributed case or not. One can distinguish these situations, but it's all a bit too complicated.
The good news is that pytest-2.3 should make things easier. If you define a @pytest.fixture with scope="session", you can implement a fixture that is called once per process within which tests execute.
For distributed testing, this means once per test slave. For single-process testing, it means once for the whole test run. In either case, if you do a "--collectonly" run, or "-h", or other options that do not involve running tests, then fixture functions will not execute at all.
Hope this clarifies.
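For illustration, a minimal sketch of such a session-scoped fixture (names are made up; the fixture would normally live in conftest.py):

import pytest

@pytest.fixture(scope="session")
def shared_resource(request):
    # set up once per test process (once per slave when running under xdist)
    resource = {"ready": True}

    def teardown():
        # torn down once, after the last test in this process
        resource["ready"] = False

    request.addfinalizer(teardown)
    return resource

def test_uses_the_resource(shared_resource):
    assert shared_resource["ready"]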

How should I deal with failing tests for bugs that will not be fixed

I have a complex set of integration tests that uses Perl's WWW::Mechanize to drive a web app and check the results based on specific combinations of data. There are over 20 subroutines that make up the logic of the tests, loop through data, etc. Each test runs several of the test subroutines on a different dataset.
The web app is not perfect, so sometimes bugs cause the tests to fail with very specific combinations of data. But these combinations are rare enough that our team will not bother to fix the bugs for a long time; building many other new features takes priority.
So what should I do with the failing tests? It's just a few tests out of several dozen per combination of data.
1) I can't let it fail because then the whole test suite would fail.
2) If we comment them out, that means we miss out on making that test for all the other datasets.
3) I could add a flag in the specific dataset that fails, and have the test not run if that flag is set, but then I'm passing extra flags all over the place in my test subroutines.
What's the cleanest and easiest way to do this?
Or are clean and easy mutually exclusive?
That's what TODO is for.
With a todo block, the tests inside are expected to fail. Test::More will run the tests normally, but print out special flags indicating they are "todo". Test::Harness will interpret failures as being ok. Should anything succeed, it will report it as an unexpected success. You then know the thing you had todo is done and can remove the TODO flag.
The nice part about todo tests, as opposed to simply commenting out a block of tests, is it's like having a programmatic todo list. You know how much work is left to be done, you're aware of what bugs there are, and you'll know immediately when they're fixed.
Once a todo test starts succeeding, simply move it outside the block. When the block is empty, delete it.
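For instance, a minimal sketch of a TODO block (the checks and bug number are made up):

use strict;
use warnings;
use Test::More tests => 2;

our $TODO;   # the package variable Test::More inspects inside todo blocks

ok( 1, 'a normal check that must keep passing' );

TODO: {
    local $TODO = 'bug #1234: fails for this rare data combination';

    # runs normally, but a failure here is reported as an expected (todo)
    # failure, so the whole suite still passes; an unexpected pass gets flagged
    ok( 0, 'rare data combination handled' );   # stand-in for the real check
}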
I see two major options:
- disable the test (commenting it out), with a reference to your bug-tracking system (i.e. a bug id), possibly keeping a note in the bug as well that there is a test ready for it
- move the failing tests into a separate test suite. You could even reverse the failing assertion, so that while the suite is green the bug is still there, and if it becomes red either the bug is gone or something else is fishy. Of course, a link to the bug-tracking system and bug id is still a good thing to have.
If you actually use Test::More in conjunction with WWW::Mechanize, case closed (see the comment from daxim). If not, think of a similar approach:
# In your testing module
our $TODO;
# ...
if (defined $TODO) {
    # only print warnings
}

# In a test script
local $My::Test::TODO = "This bug is delayed until iteration 42";