Dependencies between tests in Scala

I want to test a complex workflow, both by unit-testing its components, and running an integration test for the whole thing, without running sub-components twice unnecessarily.
For example, routine c processes the results of a and b. I would like to have the following test suites:
Unit test 1: Running a and validating the results
Unit test 2: Running b and validating the results
Nightly integration test: Running a and b, validating their results, then running c and validating its results, without re-running a and b, but re-using the outputs
Running each component takes some time, so the obvious solution of "just run everything every time" is not practical.
The code is in Scala, but I don't mind which test framework is used: ScalaTest, specs2, even TestNG are all fine, though I would prefer a Scala-ish solution. Thanks!

"Reusing the ouputs" means your unit tests would have produced some artifact, which the nightly integration test will consume. This is probably a bad practice that will end up hurting you more often than not. Unless those tests truly take a large amount of time, i'd repeat them, or find better ways to verify their output (DRYer tests, don't test the same everywhere)

Related

NUnit: Start automation tests in parallel in a defined order (Selenium)

I'm writing automation tests for our website using NUnit and Selenium.
We have 2 different users (adminInt and hsuInt) and 3 features which need to be tested (in the example below: TestA, TestB, TestC).
In my example below, there are 6 automation tests in total in the test explorer, as each feature is tested with both users.
Everything works. Each test gets its own WebDriver and all the tests are independent.
Now I want to run the tests in parallel.
I have already tried everything I could find online, including the different parameters of Parallelizable, but I can't get it right.
I would like to start 2 tests at a time.
For example:
First test:
TestA adminInt
TestA hsuInt
After both tests above are done, it should start:
Second test:
TestB adminInt
TestB hsuInt
If your goal is to save time and your tests are truly independent, then trying to control the order of execution isn't what you want. Essentially, it's NUnit's job to figure out how to run in parallel. Your job is merely to tell NUnit whether the tests you wrote are capable of running in parallel.
To tell NUnit which tests may be run in parallel, use the [Non]ParallelizableAttribute. If you place the attributes on fixtures, their meanings are as follows...
[NonParallelizable] means that the fixture is not capable of running in parallel with any other fixtures. That's the default if you don't specify any attribute.
[Parallelizable] means that the fixture is capable of running in parallel with other fixtures in your tests.
[Parallelizable(ParallelScope.All)] means that both the fixture and all the individual tests under that fixture are capable of running in parallel.
[Parallelizable(ParallelScope.Children)] means that the fixture is not capable of running in parallel with other fixtures, but the test methods under it may run in parallel with one another.
I stressed capable above because that's what you should focus on. Don't use the attribute with the expectation that NUnit will run some tests together with other specific tests because there is no NUnit feature to do that. Keep your focus on what you can control... writing independent, parallelizable tests and telling NUnit which ones they are.
If you find that NUnit starts too many test threads at one time, you can place a LevelOfParallelism attribute in your AssemblyInfo.cs. However, keep in mind that NUnit's default depends on the number of cores available, so what works for you on your development machine may not give the best performance in your CI builds. My advice is to let NUnit use its defaults for most things until you see a reason to override them.
Thanks for the answers.
I found a way to start just 2 tests in parallel at a time.
With this attribute, only 2 tests will run in parallel:
    [assembly: LevelOfParallelism(2)]
It is an assembly-level attribute, so you only need to declare it in one place, not in every class. If you have different classes, each with their own tests, declaring it once is enough and all tests will run in parallel, as long as the fixtures are marked appropriately too, for example:
    [TestFixture("userA")]
    [TestFixture("userB")]
to run the tests for the 2 different users.
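Back in the Scala world of the original question, ScalaTest takes the same "declare capability, let the runner schedule" approach: mixing in ParallelTestExecution states that the tests in a suite are independent and may run concurrently, while the runner or build tool decides how much parallelism to use. A small illustrative sketch (the suite and its checks are made up):

    import org.scalatest.ParallelTestExecution
    import org.scalatest.funsuite.AnyFunSuite

    // Mixing in ParallelTestExecution declares that the tests in this suite
    // are independent and may be scheduled concurrently by the runner.
    class IndependentChecksSpec extends AnyFunSuite with ParallelTestExecution {
      test("check one") { assert(1 + 1 == 2) }
      test("check two") { assert("ab".reverse == "ba") }
    }

As with NUnit, this only declares capability; thread counts and suite-level parallelism are configured in the runner (for example ScalaTest Runner's -P switch, or sbt's Test / parallelExecution setting), not by ordering tests against each other.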

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any one time?
Or does the testing framework only execute a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does the testing framework run all tests and test suites completely independently of each other at the same time, completely asynchronously?
This is important because it has a direct impact on the way we are or aren't able to structure our tests, especially when it comes to the set up and tear downs of tests, and the way we assert functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration to your CI server, your tests might pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once, before all the tests in a test group; however, this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
This has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs to each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally setting up such dependencies between tests that might invalidate their results (e.g. perhaps test2 only passes if it happens to run after test1; randomizing the ordering will catch this).
TLDR
Many tests run in parallel. Try to keep tests self-contained, with no dependence on the order of tests. Extract setup logic into functions and pass them to setUp(). If you really, really need the performance, you can try setUpAll(), but avoid it if possible.
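ScalaTest's closest equivalent to setUpAll, and also the most direct built-in answer to the original question's expensive-setup problem, is the BeforeAndAfterAll trait, which runs code once per suite. The same caveats apply, so this is only a sketch; the expensive Seq here is a placeholder for whatever is actually slow to build:

    import org.scalatest.BeforeAndAfterAll
    import org.scalatest.funsuite.AnyFunSuite

    class SharedSetupSpec extends AnyFunSuite with BeforeAndAfterAll {
      // Placeholder for something slow to build (a dataset, a server, ...).
      private var expensive: Seq[Int] = Nil

      override def beforeAll(): Unit = {
        expensive = (1 to 1000000).toSeq   // runs once, before all tests in this suite
      }

      override def afterAll(): Unit = {
        expensive = Nil                    // runs once, after the last test
      }

      test("the shared data is non-empty") {
        assert(expensive.nonEmpty)
      }

      test("the shared data has the expected size") {
        assert(expensive.size == 1000000)
      }
    }

As with setUpAll, tests sharing such state should only read it; the moment one test mutates it, ordering starts to matter again and randomized or parallel runs will expose that.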

Run NUnit Test After All Others Are Complete

I have a situation (detailed below) in which I want to run one NUnit test after all the other tests have completed. I know that I can use the order attribute to start my tests in a certain order but in this case:
I want to attribute (or otherwise change) only one test out of several hundred.
I want this test to run last, not first.
I want this test to run after all other tests have completed, not after they've started.
I have experimented with OneTimeTearDown, but ideally this would run as a regular, named test and appear that way in the test results.
(Why)
I have several hundred named, hand-crafted tests that run against different folders of JSON test files. Non-programmers add files to these folders from time to time. The purpose of this final test is to introspect those folders and compare the contents on disk with the files for which a test has already been executed (these are recorded by each test). If this indicates that there are untested files, that in itself constitutes a test failure.
It's an interesting question. Basically you want a meta-test... one that tests how well you are testing. For that reason, it's logical for it to actually be a test.
Unfortunately, NUnit only supports this sort of "run after everything" in a OneTimeTearDown. Now, you can perform assertions in a OneTimeTearDown, but any failures are treated as errors, i.e. unexpected exceptions. For some purposes this may be workable, but it doesn't look quite the same as a failure.
If I were doing this, I think I'd make it a separate analytical step in my script, after the tests had been run.
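In the Scala setting of the original question, that separate analytical step can be a tiny standalone program run after the test task. Everything below is hypothetical (the fixture folder, the record file, and its one-filename-per-line format are invented for illustration); it only shows the shape of such an audit:

    import java.nio.file.{Files, Paths}
    import scala.io.Source
    import scala.jdk.CollectionConverters._

    // Hypothetical post-test audit: exits non-zero if any .json fixture
    // on disk was never recorded as exercised by the test run.
    object CoverageAudit {
      def main(args: Array[String]): Unit = {
        val fixtureDir   = Paths.get("src/test/resources/fixtures")   // hypothetical folder
        val recordedFile = Paths.get("target/tested-fixtures.txt")    // hypothetical record

        val onDisk = Files.list(fixtureDir).iterator().asScala
          .map(_.getFileName.toString)
          .filter(_.endsWith(".json"))
          .toSet

        val source = Source.fromFile(recordedFile.toFile)
        val tested = try source.getLines().toSet finally source.close()

        val untested = onDisk -- tested
        if (untested.nonEmpty) {
          System.err.println(s"Untested fixture files: ${untested.mkString(", ")}")
          sys.exit(1)
        }
      }
    }

The build (sbt, a CI script, etc.) would run this after the test task and fail the job on a non-zero exit, which keeps the failure visible without bending OneTimeTearDown into a test.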

AutoFixture AutoMoqData gets slow as more tests are added

Using NUnit 2.6.4 and AutoMoqData, the ReSharper runner appears to evaluate all of the parameters to be passed into all tests before executing a single test, even if all I want to do is run a single test or a small suite of tests. Right now (we have thousands of tests) it's taking 2-3 minutes to run a single test, which doesn't work for TDD.
I tried switching to xUnit to see if NUnit was the issue, and there was still a big delay before running the first test.
Is this behaviour to be expected? Or are we doing something wrong?
The results of my investigation are that when NUnit discovers the tests, it runs through the attributes and creates the objects, and NUnit 2 discovers all the tests even if you are only interested in running one. Apparently this will change at some point in NUnit 3.
The complicated and large object graph was the reason the tests were slowing down, and by customising AutoFixture to brutally prune this graph, the tests are now much, much faster (from 260s down to 8s).
I tried using Autofixture.AutoEntityFramework, but although it was doing what I wanted it to do, the speed gains were not enough to effectively TDD (from 260s down to about 100s).

"Inverted" Timeout functionality in NUnit tests

I'm looking for a way to write a unit test using NUnit so that if the current test takes more than X milliseconds to complete, it terminates and is reported as skipped/successful, but not failed. This would basically be the opposite of the NUnit Timeout attribute.
A bit of context to the problem: I have some unit tests that invoke remote servers and check their responses. If there is a network issue I don't want the test to fail; I only want the test to fail if it was able to get a response and that response was incorrect. At the same time, if a response doesn't arrive, I'd like to move on and skip the test.
(I realize that this approach might result in some errors not being reported; however, in my situation I'm looking for no false negatives being reported (e.g. not having a test that succeeds once and fails another time based on network connectivity, which I don't intend to test).)
Wouldn't it be better to mock the invocation of the remote servers?
Using mocking, you will be able to isolate your unit from its dependencies, and precisely test the unit itself, how it deals with responses etc., and not worry about the servers at all.
Indeed, you can use mocking to purposely create incorrect responses to see how your unit deals with them, check for any expected exceptions and so on.
Have a look at Moq.
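For the Scala side of the original question, the same advice translates directly: put the remote call behind a trait and substitute a mock, for example with Mockito. A hedged sketch in which RemoteClient, fetchStatus, and StatusChecker are all invented for illustration:

    import org.mockito.Mockito.{mock, when}
    import org.scalatest.funsuite.AnyFunSuite

    // Hypothetical abstraction over the remote server call.
    trait RemoteClient {
      def fetchStatus(id: String): String
    }

    // Hypothetical unit under test: interprets the server's response.
    class StatusChecker(client: RemoteClient) {
      def isHealthy(id: String): Boolean = client.fetchStatus(id) == "OK"
    }

    class StatusCheckerSpec extends AnyFunSuite {
      test("a correct response is interpreted as healthy") {
        val client = mock(classOf[RemoteClient])
        when(client.fetchStatus("node-1")).thenReturn("OK")   // canned response, no network
        assert(new StatusChecker(client).isHealthy("node-1"))
      }

      test("an incorrect response is interpreted as unhealthy") {
        val client = mock(classOf[RemoteClient])
        when(client.fetchStatus("node-1")).thenReturn("DEGRADED")
        assert(!new StatusChecker(client).isHealthy("node-1"))
      }
    }

This makes the test deterministic: network trouble can no longer fail (or skip) it, while a genuinely wrong response still fails, which is the behaviour the asker wanted.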