CC.Net Build Server -- how to find the NUnit test that does not complete at all?

I have a test suite of ~ 1500 tests and they generally run and finish within 'reasonable time'.
Recently, however, I've changed parts of the code to use threads -- and now my builds fail from time to time by simply timing out. I imagine that a thread refuses to die and the build waits until reaching the maximum build time.
My problem is: how do I detect which test is causing the problem?
Can I activate some logging that shows me when a test has started/finished? It can of course be done by inserting code in every single test method - or just the fixtures - but that is A LOT of work that I'd rather avoid.

I'd suggest upgrading to NUnit 2.5 and decorating your tests with the Timeout attribute, which specifies the maximum per-test run time in milliseconds. For example, you can put this in your AssemblyInfo.cs:
[assembly: NUnit.Framework.Timeout(100)]
NUnit will now launch each test in a separate thread and cancel it if it exceeds its time slot. However, this might be costly, so it's probably better to identify long-running tests and then remove the assembly-level attribute in favor of test-fixture time slots. You can also override this on individual tests, giving them more time to run.
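For illustration, a rough sketch of fixture-level and test-level overrides (the value is in milliseconds; the fixture and test names here are made up):

using NUnit.Framework;

[TestFixture, Timeout(10000)]       // fixture-level time slot: each test here gets 10 seconds
public class MessagingTests
{
    [Test]
    public void Subscribe_Completes()
    {
        // uses the fixture-level 10-second timeout
    }

    [Test, Timeout(60000)]          // per-test override: this known-slow test gets a full minute
    public void Publish_LargeBatch_Completes()
    {
        // ...
    }
}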
This way you move the timeout/hang detection from CruiseControl.NET to NUnit and get information in the report about the tests that did not complete properly. Unfortunately, there's no way for CC.Net to get this information when it has to kill the process because of a timeout.

Related

Are Dart/Flutter tests executed synchronously or asynchronously

Could someone please explain how Flutter/Dart tests are executed by the test runner?
Are the tests executed synchronously or asynchronously?
Does the testing framework execute every single test synchronously, meaning that only a single test and test suite is executed at any one time?
Or does the testing framework only execute a single test at a time within a test suite, but is able to execute multiple test suites at the same time?
Or does the testing framework run all tests and test suites completely independently of each other, at the same time, completely asynchronously?
This is important because it has a direct impact on the way we are or aren't able to structure our tests, especially when it comes to the set-up and tear-down of tests, and the way we assert that functionality is working correctly.
Thanks!
In general, dart test will execute many tests in parallel (the parallelism level varies based on CPU core count), but you can disable this with a command line flag.
You should not write tests with any inter-dependence (i.e. one test should not rely on some global state set up by another test). For example, you may find that because your laptop has a different CPU configuration from your CI server, your tests pass locally but fail in CI due to different ordering.
If you have some setup logic that is very expensive and needs to be reused between multiple tests, you can use setUpAll() to run some code once before all the tests in a test group, although this is still discouraged. Personally, I prefer to just join the tests into one long test, to keep all tests self-contained.
This has some advantages. For example, you can use --total-shards and --shard-index to parallelize tests in CI (by creating multiple jobs that each run a different subset of the test suite).
You can also randomize the order of your tests with --test-randomize-ordering-seed, to make sure you aren't accidentally setting up dependencies between tests that might invalidate their results (e.g. perhaps test2 only passes if it happens to run after test1; randomizing the ordering will catch this).
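As a rough sketch of the command lines in question (assuming a recent version of package:test; exact flag behaviour may vary between versions):

dart test --concurrency=1                          # run suites one at a time (disables parallelism)
dart test --total-shards=3 --shard-index=0         # CI job 0 of 3; each job runs a different subset
dart test --test-randomize-ordering-seed=random    # shuffle ordering to expose hidden inter-test dependencies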
TLDR
Many tests run in parallel. Try to keep tests self-contained, with no dependence on the order of tests. Extract setup logic into functions and pass it into setUp. If you really need the performance, you can try setUpAll, but avoid it if possible.

Using NUnit for functional tests and guarantees about execution sequence and threads

I asked this question on NUnit-Discuss, but I realize that group is not very active, so I'm giving it a try here:
We've been using MSTests until now for some functional tests.
I know, neither MSTest nor NUnit is really meant for functional tests, but we need those tests with simple integration into Visual Studio.
The tests will launch other executables, connect, do stuff, disconnect and kill the processes.
We're having trouble with MSTest in that it launches tests in separate threads, and it seems that some execution overlaps between tests, even when they are executed sequentially.
So I'm thinking about moving to NUnit.
The question I have is:
Can NUnit be configured in any way such as to give the following guarantees:
Tests will be executed sequentially, in an order that can be specified.
Tests will be executed from the same thread.
TearDown code of one test will have been fully executed before Setup code of a following test will be called.
If so, what would that configuration be, if any particular one is needed?
Thank you.
By default, NUnit does not execute any tests in parallel. If you never use the ParallelizableAttribute, then your tests run one at a time.
Of course, that does not mean your tests can't break NUnit, for example, by starting a thread or process that never terminates after the test thread terminates. NUnit only takes responsibility for the tests it runs itself.
NUnit does not guarantee that all tests will be executed from the same thread. That is a separate matter from parallelization, of course. Separate threads may be started for individual tests, based on attributes you specify. You may, for example, designate some tests to run in a Single-threaded Apartment, while others run by default in an MTA. You might use the RequiresThreadAttribute, which asks NUnit to use a new thread for the test it decorates. You might use the SingleThreadedAttribute on a class, to indicate that all the code in that class must run on the same thread.
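A rough sketch of those attributes in NUnit 3 syntax (the fixture and test names are made up, and the OrderAttribute shown for explicit ordering is an NUnit 3 feature not mentioned above):

using System.Threading;
using NUnit.Framework;

[TestFixture, SingleThreaded]               // all code in this fixture runs on the same thread
public class FunctionalTests
{
    [Test, Order(1)]                        // explicit ordering within the fixture
    public void StartProcessesAndConnect() { /* ... */ }

    [Test, Order(2)]
    public void DisconnectAndKillProcesses() { /* ... */ }
}

[TestFixture]
public class ThreadingExamples
{
    [Test, Apartment(ApartmentState.STA)]   // run this test in a Single-threaded Apartment
    public void StaOnlyTest() { /* ... */ }

    [Test, RequiresThread]                  // ask NUnit for a dedicated new thread
    public void DedicatedThreadTest() { /* ... */ }
}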
One trick, which is currently available but may not exist in all future releases, is to specify --workers=0 on the command line to nunit3-console. That tells NUnit to simply run the tests without creating any test workers, and gives an execution path that more closely resembles that of NUnit v2.
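For example (the assembly name here is hypothetical):

nunit3-console FunctionalTests.dll --workers=0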
So, in general, I think your needs can be met, but it could require some tinkering with your tests to make it work the way you want.

Selenoid query priority

My question: is there a way to set test priority in Selenoid?
Problem: there is a suite of more than 20 tests, which fills the queue at startup. After that, another test is launched, and it goes to the end of the queue.
Is there an option to make it run as soon as a browser is freed, without waiting for all the tests queued before it?
No, this is not possible in the current implementation. All incoming requests have equal priority. Two alternatives:
I think such issues should be addressed in the test framework of your choice. For example, for py.test a quick search shows a plugin for ordering your tests: https://github.com/ftobia/pytest-ordering I'm not sure whether it works.
You could also install Ggr and use different Selenoids and quota names for different tests, but this seems too complicated for your case.

selenium tests failing randomly when run in parallel - pytest-django

When I run my selenium tests (django StaticLiveServerTestCase tests using selenium webdriver), I get random failures when running my tests in parallel using pytest-xdist.
Sometimes my full test suite will pass, and other times it will not.
There are two tests in my test suite that seem to fail most often. All of my tests load data from a fixture, but these two that fail create new objects to test specific edge cases. After they create the objects, I have my logged-in client make a GET request to the URL of the page under test.
Failure modes:
1) The objects that were created during my two tests sometimes will not show up, and I'll get a NoSuchElementException.
2) The object will show up, but the values will be incorrect (they will render as n/a instead of as a number that I assign at object creation).
I'm new to parallelizing my test builds. My debugging has been fairly rudimentary so far. Any help would be appreciated, whether that be through debugging techniques or otherwise!
It seems this has something to do with the database transaction I use to manipulate the state of the data before accessing the app with the webdriver: it does not finish before the webdriver connection is made, so the webdriver reads the old state of the database.
I just need to figure out how to make sure the old transaction is finished.

AutoFixture AutoMoqData gets slow as more tests are added

Using NUnit 2.6.4 and AutoMoqData, the ReSharper runner appears to be evaluating all of the parameters to be passed into all tests before executing a single test, even if all I want to do is run a single test or a small suite of tests. Right now (we have thousands of tests) it's taking 2-3 minutes to run a single test, which doesn't work for TDD.
I tried switching to xUnit to see if NUnit was the issue, and there was still a big delay before running the first test.
Is this behaviour to be expected? Or are we doing something wrong?
The result of my investigation is that when NUnit discovers the tests, it runs through the attributes and creates the objects, and NUnit 2 discovers all the tests even if you are only interested in running one. Apparently this will change at some point in NUnit 3.
The complicated and large object graph was the reason the tests were slowing down; by customising AutoFixture to brutally prune this graph, the tests are now much, much faster (260 s down to 8 s).
I tried using AutoFixture.AutoEntityFramework, and although it did what I wanted it to do, the speed gains were not enough to TDD effectively (260 s down to about 100 s).
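As an illustration only (not the poster's actual code), a customization along these lines can prune the graph. The AutoMoqDataAttribute wrapper, the entity types and the property names are all made up, and the namespaces assume the older Ploeh.AutoFixture packages that went with NUnit 2.x:

using System.Collections.Generic;
using System.Linq;
using Ploeh.AutoFixture;
using Ploeh.AutoFixture.AutoMoq;
using Ploeh.AutoFixture.NUnit2;

// Made-up entity types, used only to show pruning an expensive navigation property.
public class LineItem { }
public class Order
{
    public virtual ICollection<LineItem> LineItems { get; set; }
}

public class AutoMoqDataAttribute : AutoDataAttribute
{
    public AutoMoqDataAttribute()
        : base(new Fixture()
            .Customize(new AutoMoqCustomization())
            .Customize(new PruneObjectGraphCustomization()))
    {
    }
}

// Hypothetical customization that stops AutoFixture from building deep or recursive object graphs.
public class PruneObjectGraphCustomization : ICustomization
{
    public void Customize(IFixture fixture)
    {
        // Swap the throwing recursion guard for one that simply omits recursive branches.
        fixture.Behaviors.OfType<ThrowingRecursionBehavior>()
            .ToList()
            .ForEach(b => fixture.Behaviors.Remove(b));
        fixture.Behaviors.Add(new OmitOnRecursionBehavior());

        // Skip the expensive navigation property entirely.
        fixture.Customize<Order>(c => c.Without(o => o.LineItems));
    }
}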