Using NUnit 2.6.4 and AutoMoqData, the ReSharper runner appears to evaluate the parameters for all tests before executing a single one, even if all I want to do is run a single test or a small suite of tests. Right now (we have thousands of tests) it takes 2-3 minutes to run a single test, which doesn't work for TDD.
I tried switching to xUnit to see if NUnit was the issue, but there was still a big delay before the first test ran.
Is this behaviour to be expected? Or are we doing something wrong?
So the results of my investigation are: when NUnit discovers the tests, it runs through the attributes and creates the objects, and NUnit 2 discovers all the tests even if you are only interested in running one. Apparently this will change at some point in NUnit 3.
The complicated, large object graph was the reason the tests were slowing down, and by customising AutoFixture to brutally prune this graph the tests are now much, much faster (from 260s down to 8s).
I tried using Autofixture.AutoEntityFramework, but although it did what I wanted it to do, the speed gains were not enough to TDD effectively (from 260s down to about 100s).
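For reference, graph pruning with AutoFixture can be sketched as follows. OmitOnRecursionBehavior and Customize/Without are standard AutoFixture APIs; the Order entity and its navigation properties are hypothetical stand-ins for whatever deep graph your own tests build:

```csharp
using System.Linq;
using Ploeh.AutoFixture;   // namespace is just "AutoFixture" in v4+

var fixture = new Fixture();

// Swap the default throw-on-recursion behavior for one that omits
// circular references, so cyclic navigation properties stop
// exploding the object graph.
fixture.Behaviors.OfType<ThrowingRecursionBehavior>().ToList()
       .ForEach(b => fixture.Behaviors.Remove(b));
fixture.Behaviors.Add(new OmitOnRecursionBehavior());

// Brutally prune expensive branches you don't need in most tests
// (Order, Customer and LineItems are made-up names for illustration).
fixture.Customize<Order>(c => c
    .Without(o => o.Customer)
    .Without(o => o.LineItems));
```

The same customizations can be packaged into a custom AutoMoqData-style attribute so every parameterised test picks them up automatically.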
I'm writing automation tests for our website using NUnit and Selenium.
We have 2 different users (adminInt and hsuInt) and 3 features which need to be tested (in the example below: TestA, TestB, TestC).
In my example below there are 6 automation tests in total in the test explorer, as each feature is tested with both users.
Everything works. Each test is getting its own webDriver and all the tests are independent.
Now I want to run the tests in parallel.
I have already tried everything I could find online, including the different parameters of [Parallelizable], but I can't get it right.
I would like to run 2 tests at a time.
For example:
First test:
TestA adminInt
TestA hsuInt
after both tests above are done, it should start:
Second Test:
TestB adminInt
TestB hsuInt
If your goal is to save time and your tests are truly independent, then trying to control the order of execution isn't what you want. Essentially, it's NUnit's job to figure out how to run in parallel. Your job is merely to tell NUnit whether the tests you wrote are capable of running in parallel.
To tell NUnit which tests may be run in parallel, use the [Non]ParallelizableAttribute. If you place the attributes on fixtures, their meanings are as follows...
[NonParallelizable] means that the fixture is not capable of running in parallel with any other fixtures. That's the default if you don't specify any attribute.
[Parallelizable] means that the fixture is capable of running in parallel with other fixtures in your test run.
[Parallelizable(ParallelScope.All)] means that both the fixture and all the individual tests under that fixture are capable of running in parallel.
[Parallelizable(ParallelScope.Children)] means that the fixture is not capable of running in parallel with other fixtures, but the test methods under it may run in parallel with one another.
I stressed capable above because that's what you should focus on. Don't use the attribute with the expectation that NUnit will run some tests together with other specific tests because there is no NUnit feature to do that. Keep your focus on what you can control... writing independent, parallelizable tests and telling NUnit which ones they are.
If you find that NUnit starts too many test threads at one time, you can place a LevelOfParallelism attribute in your AssemblyInfo.cs. However, keep in mind that NUnit's default depends on the number of cores available. So what works on your development machine may not give the best performance in your CI builds. My advice is to let NUnit use its defaults for most things until you see a reason to override them.
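For illustration, the four options above look like this on fixtures (the fixture names here are made up; only the attributes matter):

```csharp
using NUnit.Framework;

[NonParallelizable]                      // runs alone; also the default with no attribute
public class LegacyDatabaseTests { /* ... */ }

[Parallelizable]                         // may run alongside other parallelizable fixtures
public class CatalogTests { /* ... */ }

[Parallelizable(ParallelScope.All)]      // fixture AND its individual tests may run in parallel
public class SearchTests { /* ... */ }

[Parallelizable(ParallelScope.Children)] // its tests may run in parallel with one another,
public class CheckoutTests { /* ... */ } // but the fixture won't run alongside other fixtures
```

Again, these declare capability only; NUnit decides the actual scheduling.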
Thanks for the answers.
I found a solution for how to start just 2 tests in parallel.
With this assembly-level attribute, only 2 tests will start in parallel:
[assembly: LevelOfParallelism(2)]
You need this attribute in only one place. So if you have different classes, each with their own tests, add it just once and all tests will run in parallel, as long as you also have the fixture attributes:
[TestFixture("userA")]
[TestFixture("userB")]
to run the tests for 2 different users, for example.
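Putting the pieces together, a minimal sketch might look like this (adminInt and hsuInt are the user names from the question; the class name and method bodies are placeholders):

```csharp
using NUnit.Framework;

// Declared once per assembly (e.g. in AssemblyInfo.cs): at most 2 worker threads.
[assembly: LevelOfParallelism(2)]

[TestFixture("adminInt")]
[TestFixture("hsuInt")]
[Parallelizable]                 // the two fixture instances may run in parallel
public class FeatureTests
{
    private readonly string user;

    public FeatureTests(string user)
    {
        this.user = user;        // each fixture instance tests one user
    }

    [Test] public void TestA() { /* drive the site as `user` */ }
    [Test] public void TestB() { /* ... */ }
    [Test] public void TestC() { /* ... */ }
}
```

With two worker threads, NUnit will typically run the two fixture instances side by side, which approximates the "TestA for both users, then TestB for both users" pattern without hard-coding any ordering.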
I want to test a complex workflow, both by unit-testing its components, and running an integration test for the whole thing, without running sub-components twice unnecessarily.
For example, routine c processes the results of a and b. I would like to have the following test suites:
Unit test 1: Running a and validating the results
Unit test 2: Running b and validating the results
Nightly integration test: Running a and b, validating their results, then running c and validating its results, without re-running a and b, but re-using the outputs
Running each component takes some time, so the obvious solution of "just run everything every time" is not practical.
The code is in Scala, but I don't care which test framework to use - scalatest, specs2, even TestNG, all are fine, though I would prefer a Scala-ish solution. Thanks!
"Reusing the outputs" means your unit tests would produce some artifact which the nightly integration test then consumes. This is probably a bad practice that will end up hurting you more often than not. Unless those tests truly take a large amount of time, I'd repeat them, or find better ways to verify their output (DRYer tests; don't test the same thing everywhere).
I'm running JUnit 4 with AnyLogic. In one of my tests, I need access to the Experiment running the test. Is there any clean way to access it at runtime? E.g., is there a static method along the lines of Experiment.getRunningExperiment()?
There isn't a static method that I know of. (If there were, it might be complicated by multi-run experiments, which permit parallel execution; then again, perhaps not, since there's still a single Experiment, though there would be thread-safety issues.)
However, you can use getEngine().getExperiment() from within a model. You probably need to explain more about your usage context. If you're using AnyLogic Pro and exporting the model to run standalone, then you should have access to the experiment instance anyway (as in the help "Running the model from outside without UI").
Are you trying to run JUnit tests from within an Experiment? If so, what's your general design? Obviously JUnit doesn't sit as well in that scenario since it 'expects' to be instantiating and running the thing to be tested. For my automated tests (where I can't export it standalone because I don't use AnyLogic Pro), I judged that it was easier to avoid JUnit (it's just a framework after all) and implement the tests 'directly' (by having my model components write outputs and, at the end of the run in the Experiment, having the Experiment compare the outputs to pre-prepared expected ones and flag if the test was passed or failed). With AnyLogic Pro, you could still export standalone and use JUnit to run the 'already-a-test' Experiments (with the JUnit test checking the Experiment for a testPassed Boolean being set at the end or whatever).
The fact that you want to get the running experiment suggests that you may be doing this while runs are executing. If so, could you explain a bit about your requirements?
I have a test suite of ~ 1500 tests and they generally run and finish within 'reasonable time'.
Recently, however, I've changed parts of the code to use threads -- and now my builds fail from time to time by simply timing out. I imagine that a thread refuses to die and the build waits until reaching the maximum build time.
My problem is: how do I detect which test is causing the problem?
Can I activate some logging that shows me that a test has started/finished? It can of course be done by inserting code in every single test method, or just the fixtures, but that is A LOT of work that I'd rather avoid.
I'd suggest upgrading to NUnit 2.5 and decorating your tests with the Timeout attribute, specifying a maximum per-test run time. For example, you can put this in your AssemblyInfo.cs:
[assembly: NUnit.Framework.Timeout(100)]
NUnit will now launch each test in a separate thread and cancel it if it exceeds its time slot. However, this might be costly, so it's probably better to identify long-running tests and then replace the assembly-level attribute with test-fixture time slots. You can also override this on individual tests, giving them more time to run.
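As a sketch, the assembly-wide default and a per-test override look like this (Timeout values are in milliseconds; the fixture and test names below are made up for illustration):

```csharp
using NUnit.Framework;

// Assembly-wide default, as suggested above: every test gets 100 ms.
[assembly: Timeout(100)]

[TestFixture]
public class SlowIoTests
{
    // Per-test override: this test alone may take up to 5 seconds
    // before NUnit cancels it and reports a failure.
    [Test, Timeout(5000)]
    public void WritesLargeFile()
    {
        /* exercise the slow I/O path here */
    }
}
```

A hanging test then shows up as a timed-out failure in the NUnit report instead of an opaque CruiseControl.NET build timeout.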
This way you move the timeout/hang detection from CruiseControl.Net to NUnit and get information inside the report on the tasks that did not complete properly. Unfortunately there's no way for CC.Net to get this information when it has to kill the process because of timeout.
This might seem a silly question to those who are well versed in automation, but I am struggling with many things. Here's one:
I am finding that the tests I created with Selenium RC in Visual Studio 2008 are run by NUnit in alphabetical order of their names.
What am I missing? Is there a way to control the order in which the tests in NUnit are run?
Thanks!
Technically, your unit tests should all be able to run independently of each other, so ordering should not matter.
I think it might be time to re-think how you are doing your tests.
If you just want them run in a specific order because that's how you "prefer" them to be run, then I would argue that you are throwing away precious time doing something that really isn't that important.
If you need specific things done before the tests are executed, check out the SetUp and TearDown methods.
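For example, with NUnit's [SetUp] and [TearDown] each test gets a fresh, self-contained environment, so order stops mattering. This is a sketch only: the fixture name, page URL, and Selenium RC server endpoint below are assumptions, not taken from the question:

```csharp
using NUnit.Framework;
using Selenium;   // Selenium RC .NET client, as used in the question's setup

[TestFixture]
public class LoginPageTests
{
    private ISelenium selenium;

    [SetUp]
    public void BeforeEachTest()
    {
        // Runs before every [Test] in this fixture: start a clean browser session
        // against a hypothetical local Selenium RC server.
        selenium = new DefaultSelenium("localhost", 4444, "*firefox", "http://example.test/");
        selenium.Start();
    }

    [TearDown]
    public void AfterEachTest()
    {
        // Runs after every [Test], pass or fail: shut the session down
        // so no state leaks into the next test.
        selenium.Stop();
    }

    [Test]
    public void CanLoadHomePage()
    {
        selenium.Open("/");
    }
}
```

With per-test setup and teardown like this, the alphabetical execution order NUnit happens to use becomes irrelevant.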