I'm running a test to verify that a method was called on an OCMockObject. I expect it to fail, and the Issue Navigator does show it failing, but the notification pops up (before all the tests finish running) and says Tests Succeeded.
I realize it's most likely a threading issue, because one of my tests reads a massive JSON file on every run, parses it, and then runs checks to confirm the parsing. I'm betting this slows the tests down, but I would expect the unit test runner to account for that; apparently it doesn't, and it probably only counts the OCUnit assertions as passing (in reality I'm using a third-party fluent assertion library that doesn't use OCUnit methods).
Actual Question:
Is there a way to make the unit test runner wait until all the test methods have finished? I'm also not doing any kind of threading inside the tests or in the code being tested.
I'm trying to figure out the current status of the GWTTestCase suite/methodology.
I've read a few things saying that GWTTestCase is more or less obsolete. If that's true, what is the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says he has, and that it takes seconds or tens of seconds to run a single test. Is this true? (i.e. is it common for a GWTTestCase test to take tens of seconds, or is it more likely a configuration error on our side?)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class, and run in the JVM from the test runner main() - though you don't normally invoke this directly, just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript, and then run in a browser of some kind.
That compilation is what takes time - for a minimal test, running in only one browser (i.e. one permutation), it is going to take at least 10 seconds on most machines just to compile the host page for the GWTTestCase, which lets the JVM tell the browser which test to run and collect results, stack traces, or timeouts back. Then add in how long the tested part of your project takes to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and build a single, larger module in which to run all of the tests (see the sketch after this list). Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't bother running the code in the browser just to test its logic. If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works or to exercise your logic.
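Here is a minimal sketch of both suggestions, assuming JUnit 4 on the classpath; WidgetFooTest, WidgetBarTest, LoginPresenter, and FakeLoginView are hypothetical names, not from the question:

```java
// --- AllGwtTests.java: group the GWTTestCases into one GWTTestSuite ---
// GWTTestSuite sorts the cases by module, so each module is compiled only once.
import junit.framework.Test;
import com.google.gwt.junit.tools.GWTTestSuite;

public class AllGwtTests {
    public static Test suite() {
        GWTTestSuite suite = new GWTTestSuite("Browser tests");
        suite.addTestSuite(WidgetFooTest.class);   // hypothetical, extends GWTTestCase
        suite.addTestSuite(WidgetBarTest.class);   // hypothetical, extends GWTTestCase
        return suite;
    }
}

// --- LoginPresenterTest.java: a plain JVM JUnit test, no browser, no GWT compile ---
import static org.junit.Assert.assertFalse;
import org.junit.Test;

public class LoginPresenterTest {
    @Test
    public void loginClickDisablesButton() {
        FakeLoginView view = new FakeLoginView();            // hand-written fake of the view interface
        LoginPresenter presenter = new LoginPresenter(view); // hypothetical presenter, pure Java
        presenter.onLoginClicked();
        assertFalse(view.loginButtonEnabled);                // logic is exercised without any browser
    }
}
```

You hand AllGwtTests to the JUnit runner and pay the compile cost once per run; the presenter test runs as fast as any ordinary unit test.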
Is this just one of those things that enterprise developers need to get used to?
Is there some advantage to having the application spin up every time you run your unit tests?
Am I doing something so completely wrong that no one else understands this question? Googling doesn't turn up anyone else complaining about it.
Jon Reid wrote a good blog post about not running the application when running unit tests: http://qualitycoding.org/app-delegate-for-tests/.
The idea is basically to replace the App Delegate in main.swift with an "empty" App Delegate when you run the test target, so you don't run any application logic while the tests run.
I have a suite of WebDriver tests in NUnit that I have running repeatedly on a TeamCity server. I'm getting the driver type from a config file that is altered between test runs; this was the most elegant way of doing cross-browser testing within NUnit that I could come up with. Fortunately, TeamCity combines the output of all the test runs quite neatly. Unfortunately, when a test fails more than once, only one failure and its accompanying stack trace is displayed, with an annotation of "2 failures in one build." Since these are actually different tests, being in different browsers, I would like to view the error outputs separately. Is there a way to do this?
For cross-browser testing, you could consider using Selenium Grid 2 instead of changing the browser type from the configuration file.
In addition, you'll be able to run your tests in parallel on different browsers, which also addresses the problem of multiple failures being reported as one.
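As a sketch of that idea in Java (the .NET bindings follow the same pattern), assuming a Grid hub is already running at a hypothetical http://localhost:4444/wd/hub:

```java
import java.net.URL;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.remote.DesiredCapabilities;
import org.openqa.selenium.remote.RemoteWebDriver;

public class CrossBrowserDriverFactory {
    // Hypothetical hub address - point this at your own Grid hub.
    private static final String HUB_URL = "http://localhost:4444/wd/hub";

    // Ask the hub for a node that matches the requested browser,
    // e.g. "firefox", "chrome" or "internet explorer".
    public static WebDriver forBrowser(String browserName) throws Exception {
        DesiredCapabilities caps = new DesiredCapabilities();
        caps.setBrowserName(browserName);
        return new RemoteWebDriver(new URL(HUB_URL), caps);
    }
}
```

Each fixture (or parallel run) can then request its own browser by name, and the hub routes the session to a matching node, so the per-browser results stay separate.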
We often use test suites to ensure we start from a known, working situation.
In this case, we want the whole test suite to run to test all the pages. If some fail, we still want the other tests to run.
The test suite seems to stop the moment it finds an error in one of the tests.
Can it be set up to keep going and run all the tests, irrespective of results?
Check that you are using Verify and not Assert; Assert will STOP your run the moment it fails.
Have each test case restart from a fixed point (like loading the base URL).
Set your timeout (how long you want the system to wait before moving on); a sketch that ties these together follows below.
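Here is a minimal Java/WebDriver version of those three points, using JUnit's ErrorCollector rule to get "verify" behaviour (record the failure, keep going) rather than "assert" behaviour (stop immediately); the URL and the expected texts are hypothetical placeholders:

```java
import static org.hamcrest.CoreMatchers.is;

import java.util.concurrent.TimeUnit;

import org.junit.Rule;
import org.junit.Test;
import org.junit.rules.ErrorCollector;
import org.openqa.selenium.By;
import org.openqa.selenium.WebDriver;
import org.openqa.selenium.firefox.FirefoxDriver;

public class HomePageChecksTest {
    // ErrorCollector records failures but lets the test continue,
    // which is what "verify" does, as opposed to "assert".
    @Rule
    public ErrorCollector verify = new ErrorCollector();

    @Test
    public void checksSeveralThingsWithoutStoppingAtTheFirstFailure() {
        WebDriver driver = new FirefoxDriver();
        try {
            // Timeout: how long to wait for elements before moving on.
            driver.manage().timeouts().implicitlyWait(10, TimeUnit.SECONDS);

            // Fixed starting point for the test case.
            driver.get("http://example.com/");

            verify.checkThat("page title", driver.getTitle(), is("Example Domain"));
            verify.checkThat("heading text",
                    driver.findElement(By.tagName("h1")).getText(), is("Example Domain"));
            // Both checks are reported at the end, even if the first one fails.
        } finally {
            driver.quit();
        }
    }
}
```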
I'm developing a static library in Objective-C for a Cocoa Touch project. I've added unit testing to my Xcode project using the built-in OCUnit framework. I can run the tests successfully upon building the project and everything looks good. However, I'm a little confused about something.
Part of what the static library does is connect to a URL and download the resource there. I wrote a test case that invokes the method that creates a connection and checks that the connection succeeds. However, when my tests run, a connection is never made to my test web server (where the connection is pointed).
It seems my code is not actually being run when the tests happen?
Also, I have some NSLog calls in the unit tests and in the code they exercise, but I never see their output. I'm new to unit testing, so I'm obviously not fully grasping what is going on here. Can anyone help me out?
P.S. These are "Logical Tests," as Apple calls them, so they are not linked against the library; instead, the implementation files are included in the testing target.
Code-wise, how are you downloading your data? Most of the time, URL connection methods are asynchronous, and you get notified of data being available via a callback on the run loop. You very likely are not running the run loop.
If this is the problem, read up on run loops.