I'm using OpenCover to report on my code coverage for my NUnit tests, and when I run a suite of tests which take a long time I get the following exception:
An exception occured: The number of WaitHandles must be less than or equal to 64.
stack: at System.Threading.WaitHandle.WaitAny(WaitHandle[] waitHandles, Int32 millisecondsTimeout, Boolean exitContext)
at OpenCover.Framework.Manager.ProfilerManager.ProcessMessages(List`1 handles, GCHandle pinnedComms)
at OpenCover.Framework.Manager.ProfilerManager.RunProcess(Action`1 process, Boolean isService)
at OpenCover.Console.Program.Main(String[] args)
This only happens when I replace my mock DAL with a real DAL in my tests. Basically I'm running the same set of tests against the same interfaces, just with an integrated implementation instead of a mock implementation. The mock DAL tests run fine, and another DAL implementation that uses XML files also runs fine (just expectedly slower). The slowest of the three, the actual SQL implementation (slow because of the teardown/setup between each test), triggers this error.
There's no shortage of information online about threading and WaitHandles in custom code, but this is happening inside a third-party tool. Is there something I can do with OpenCover to fix this? Some command-line argument that explicitly directs the threading to accommodate these long-running tests? Perhaps an argument it needs to pass to NUnit?
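For context on the message itself: the 64-handle ceiling is a hard limit of .NET's WaitHandle.WaitAny rather than anything OpenCover chose, so any code that accumulates more than 64 handles and hands them to WaitAny fails this way. A trivial repro of just the .NET limit:

    using System.Threading;

    class WaitHandleLimitDemo
    {
        static void Main()
        {
            // 65 handles: one more than WaitAny supports.
            var handles = new WaitHandle[65];
            for (int i = 0; i < handles.Length; i++)
                handles[i] = new ManualResetEvent(false);

            // Throws NotSupportedException: "The number of WaitHandles
            // must be less than or equal to 64."
            WaitHandle.WaitAny(handles, 1000);
        }
    }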
Related
We have quite a lot of tests that need to bypass the load balancer in order to talk directly to a specific web server.
Each test is decorated with a TestCaseSource attribute specifying a function that determines, at run-time, the list of web servers to hit.
So, if we have n tests T1, T2, ..., Tn and m web servers W1, W2, ..., Wm (discovered at run-time), the tests run in the following order:
T1W1
T1W2
...
T1Wm
T2W1
T2W2
...
T2Wm
...
TnW1
TnW2
...
TnWm
Now, I need them to run in a different order, namely:
T1W1
T2W1
...
TnW1
T1W2
T2W2
...
TnW2
...
T1Wm
T2Wm
...
TnWm
I understand that I can modify the test name using the TestCaseData.TestName property. But doing so would still run the child test cases together. For example, see below:
The tests nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge and nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge run one after another rather than:
nan4dfc1app01_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app01_
nan4dfc1app02_RegisterAndStartShiftAndEnsureInvalidBadge running with all other tests starting with nan4dfc1app02_
So essentially, renaming the test cases does not split the child test cases. Not good for me.
So, is there a way to change the order at run-time the way I need it?
It's not possible to do this with a TestCaseSourceAttribute. All the test cases generated for a single test method are run together.
The other mechanism for grouping tests is by fixture. If you made your class a parameterized fixture and passed it the web servers using TestFixtureSourceAttribute, then you could control the order of the tests within each fixture.
You would save the passed-in parameter for the fixture as an instance member and use it within every test. This is probably simpler and easier to read than what you are doing anyway, because there is only one reference to the source rather than many.
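A minimal sketch of that arrangement; the hardcoded server names stand in for whatever your run-time discovery returns:

    using System.Collections.Generic;
    using NUnit.Framework;

    // One fixture instance is created per web server, so every test in the
    // class runs against W1 before any test runs against W2, and so on.
    [TestFixtureSource(nameof(WebServers))]
    public class ShiftTests
    {
        // Stand-in for the run-time discovery of servers behind the load balancer.
        public static IEnumerable<string> WebServers =>
            new[] { "nan4dfc1app01", "nan4dfc1app02" };

        private readonly string _webServer;

        public ShiftTests(string webServer)
        {
            _webServer = webServer;
        }

        [Test]
        public void RegisterAndStartShiftAndEnsureInvalidBadge()
        {
            // Use _webServer to address the specific box directly.
            Assert.That(_webServer, Is.Not.Empty);
        }
    }

Because NUnit creates one fixture instance per source item and runs each instance's tests together, this yields exactly the T1W1 ... TnW1, T1W2 ... TnW2 grouping asked for.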
I use SoapUI for testing a REST API. I have a few test cases which are independent of each other and can be executed in any order.
I know that one can stop the whole run from aborting by disabling the Fail on error option, as shown in this answer on SO. However, suppose TestCase1 prepares some data for the run and then breaks partway through because an assertion fails or for some other reason. TestCase2 starts running after it and tests some other things, but because TestCase1 never executed all of its steps, including those that clean up, TestCase2 may fail.
I would like to run all of the tests even if a certain test fails, but I also want to execute a number of test-case-specific steps whenever a test fails. In programming terms, I would like a finally: each test case would have a number of steps that execute regardless of whether the test failed or passed.
Is there any way to achieve this?
You can use a TearDown script at the test case level.
In the example below, a test step fails but the TearDown script still runs, so it behaves like a finally block.
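A minimal sketch of such a TearDown script; the cleanup step name is hypothetical:

    // TearDown script (test case level): runs whether the case passed or failed.
    log.info "TestCase '${testCase.name}' finished with status: ${testRunner.status}"

    // Explicitly run the cleanup step so it executes even when an earlier
    // assertion aborted the normal flow.
    testRunner.runTestStepByName("Cleanup - remove test data")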
Alternatively, you can build your own soft assertion, one that does not stop the test case when a check fails. For example, declare a list to collect errors:

    def err = []

Then, whenever there is an error, you can do:

    err.add("Values did not match")

At the end, log whatever was collected and fail the case if the list is not empty:

    log.info err
    assert err.isEmpty() : "There were errors: ${err}"
This way you can capture errors as they occur and still do the actual assertion at the end. Alternatively, you can combine this with the TearDown script facility provided by SoapUI, described above.
I am writing a regression suite for APIs using ScalaTest, and I am stuck on the following scenario:
For instance, I have two tests:
    test("test-1") {
      Call for API-1
      Call for API-2
      Call for API-3
    }

    test("test-2") {
      Call for API-5
      Call for API-6
      Call for API-7
    }
I have created a generalized function to call the APIs, and I have set up separate JSON files for the URI, method, body, and headers.
Now my question: since all these calls are async and return Futures, one way to handle them within a single test is flatMap (or a for-comprehension).
But what about the second test? Do I need to block the main thread here, or is there a smarter solution? I can't afford to run multiple cases in parallel because of interdependencies on the resources they use.
It's better for your tests to be executed sequentially; for this, please refer to the ScalaTest user guide on how to deal with Futures.
Play also provides some utilities for handling a Future; the usage is described in its testing documentation.
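A minimal sketch of the sequential style using ScalaTest's ScalaFutures trait; callApi stands in for the generalized API-calling function mentioned above, and the timeout is illustrative:

    import org.scalatest.concurrent.ScalaFutures
    import org.scalatest.funsuite.AnyFunSuite
    import org.scalatest.time.{Seconds, Span}
    import scala.concurrent.ExecutionContext.Implicits.global
    import scala.concurrent.Future

    class ApiRegressionSuite extends AnyFunSuite with ScalaFutures {

      // Give slow API calls room to finish before a Future is abandoned.
      implicit override val patienceConfig: PatienceConfig =
        PatienceConfig(timeout = Span(30, Seconds))

      // Stand-in for the generalized API call driven by the JSON files.
      def callApi(name: String): Future[Int] = Future(200)

      test("test-1: API-1, API-2, API-3") {
        // futureValue blocks until each Future completes (or times out),
        // so the calls run strictly in order within the test.
        assert(callApi("API-1").futureValue == 200)
        assert(callApi("API-2").futureValue == 200)
        assert(callApi("API-3").futureValue == 200)
      }

      test("test-2: API-5, API-6, API-7") {
        assert(callApi("API-5").futureValue == 200)
        assert(callApi("API-6").futureValue == 200)
        assert(callApi("API-7").futureValue == 200)
      }
    }

By default ScalaTest runs the tests within a suite sequentially, so as long as you don't mix in ParallelTestExecution, test-2 will not start until test-1 has finished.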
I have written some code to assert values. I have also extended NUnit so that

    public void TestFinished(TestResult result) { }

is run at the end of each test.
But I am confused about its potential usefulness. I suppose the TestFinished method is meant for writing helpful data about the test run, but there seems to be no way to reference the exceptions thrown by the Asserts from within TestFinished.
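For what it's worth, the exception object itself really is out of reach by then: the framework catches the AssertionException and folds it into the TestResult before TestFinished fires. What survives is the failure message and stack trace on the result, as in this sketch against the NUnit 2.x EventListener interface:

    using System;
    using NUnit.Core; // NUnit 2.x runner API

    public class LoggingListener : EventListener
    {
        public void TestFinished(TestResult result)
        {
            // The AssertionException is gone, but its message and stack
            // trace were captured into the TestResult.
            if (!result.IsSuccess)
            {
                Console.WriteLine("{0} failed: {1}", result.Name, result.Message);
                Console.WriteLine(result.StackTrace);
            }
        }

        // The interface requires these as well; no-ops for this sketch.
        public void RunStarted(string name, int testCount) { }
        public void RunFinished(TestResult result) { }
        public void RunFinished(Exception exception) { }
        public void TestStarted(TestName testName) { }
        public void SuiteStarted(TestName testName) { }
        public void SuiteFinished(TestResult result) { }
        public void UnhandledException(Exception exception) { }
        public void TestOutput(TestOutput testOutput) { }
    }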
I've been doing unit testing and I ran into this weird problem.
I'm doing user authentication tests with some of my services/mappers.
I run about 307 tests all together right now, and this only really happens when I run them all in one batch.
I try to instantiate only one Zend_Application object and use it for all my tests. I instantiate it only to take care of the db connection, the session, and the autoloading of my classes.
Here is the problem.
Somewhere along the line of tests the __destruct method of Zend_Session_SaveHandler_DbTable gets called. I have NO IDEA why, but it does.
The __destruct method renders any further writes to my session objects useless, because they are marked as read-only. I have no clue why the destruct method is being called.
It gets called many tests before my authentication tests. If I run each folder of tests individually, there is no problem; it's only when I try to run all 307 tests. I do have some tests that do database work, but my code is not closing the db connections or destructing the save handler.
Does anyone have any idea why this would be happening and why my Zend_Session_SaveHandler_DbTable is being destructed? Does this have anything to do with its default lifetime?
I think what was happening is that garbage collection was kicking in: whenever I ran all 307 tests, the garbage collector had to run, and it probably destroyed the Zend_Session_SaveHandler_DbTable for some reason.
That would explain why it didn't get destroyed when fewer tests were being run.
At first I assumed PHPUnit was doing the collection, but PHP itself doing the garbage collection makes more sense.
Either way, my current solution is to create a new Zend_Application object for each test class, so that all the tests within that class have a fresh Zend_Application object to work with.
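A sketch of that arrangement, assuming the usual ZF1 APPLICATION_ENV / APPLICATION_PATH constants and an illustrative config path:

    <?php
    abstract class ZendAppTestCase extends PHPUnit_Framework_TestCase
    {
        /** @var Zend_Application */
        protected static $application;

        public static function setUpBeforeClass()
        {
            // A fresh Zend_Application per test class, so a save handler
            // destructed during an earlier class cannot leave this class's
            // session marked read-only.
            self::$application = new Zend_Application(
                APPLICATION_ENV,
                APPLICATION_PATH . '/configs/application.ini'
            );
            self::$application->bootstrap();
        }
    }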
Here is some interesting information.
I put an echo statement in the __destruct method of the save handler.
The method was being called (X + 1) times, where X was the number of tests I ran: 50 tests gave 51 echoes, 307 tests gave 308 echoes, and so on.
Here is the interesting part. If I ran only a few tests, the echoes would all come at the END of the test run. If I tried to run all 307 tests, 90 echoes would show up after what I assume were the first 90 tests, and the rest would come at the end of the remaining tests. The number of echoes was X + 1 again, in this case 308.
So this is where I assume it has something to do with either the tearDown method that PHPUnit calls or the PHP garbage collector; maybe PHPUnit invokes the garbage collector at teardown. Who knows, but I'm glad I got it working, as my tests were all passing beforehand.
If any of you have a better solution, let me know. Maybe I uncovered a previously unknown flaw in my code, PHPUnit, or Zend, and there is some way to fix it.
It's an old question, but I have just had the same problem and found the solution here. I think it's the right way to solve it.
Zend_Session::$_unitTestEnabled = true;
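For completeness: set the flag before anything touches the session, e.g. in the PHPUnit bootstrap file (the path is illustrative):

    <?php
    // tests/bootstrap.php
    require_once 'Zend/Session.php';

    // Tell ZF1 it is running under unit tests, which disables several of
    // the session guards and avoids the read-only session problem
    // described above.
    Zend_Session::$_unitTestEnabled = true;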