I'm developing a static library in Obj-C for a Cocoa Touch project. I've added unit testing to my Xcode project using the built-in OCUnit framework. I can run tests successfully when building the project, and everything looks good. However, I'm a little confused about something.
Part of what the static library does is connect to a URL and download the resource there. I constructed a test case that invokes the method that creates a connection and checks that the connection succeeds. However, when my tests run, a connection is never made to my testing web server (where the connection is set to go).
It seems my code is not actually being run when the tests happen?
I'm also making some NSLog calls in the unit tests and in the code they exercise, but I never see that output. I'm new to unit testing, so I'm obviously not fully grasping what's going on here. Can anyone help me out?
P.S. These are "logic tests", as Apple calls them, so they are not linked against the library; instead, the implementation files are included directly in the testing target.
Code-wise, how are you downloading your data? Most URL connection methods are asynchronous: you get notified that data is available via a callback delivered off the run loop. Very likely you are not running the run loop.
If this is the problem, read up on run loops.
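For illustration, here is a minimal sketch of the idea in Swift (using URLSession for brevity; the URL and deadline are placeholders): start the asynchronous download, then keep pumping the run loop until the completion handler fires or you give up.

    import Foundation

    // Not thread-safe; fine for a sketch. Set by the completion handler.
    var finished = false

    // Placeholder for the test server URL from the question.
    let url = URL(string: "http://localhost:8080/resource")!
    let task = URLSession.shared.dataTask(with: url) { data, _, error in
        print("got \(data?.count ?? 0) bytes, error: \(String(describing: error))")
        finished = true
    }
    task.resume()

    // Pump the run loop so asynchronous callbacks and timers can fire,
    // instead of letting the test method return immediately.
    let deadline = Date(timeIntervalSinceNow: 10)
    while !finished && Date() < deadline {
        RunLoop.current.run(mode: .default, before: Date(timeIntervalSinceNow: 0.1))
    }

Without that loop (or something equivalent), the test method returns before the connection has had any chance to do its work, which matches the symptoms you describe.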
I'm trying to figure out the status of the GWTTestCase suite/methodology.
I've read some things that say GWTTestCase is kind of obsolete. If this is true, what would be the preferred methodology for client-side testing?
Also, although I haven't tried it myself, someone here says he tried it and it takes seconds or tens of seconds to run a single test. Is this true? (That is, is it common for a single GWTTestCase test to take tens of seconds, or is it more likely a configuration error on our side?)
Do you use any other methodology for GWT client-side testing that has worked well for you?
The problem is that any GWT code has to be compiled to run within a browser. If your code is just Java, you can run it in a typical JUnit or TestNG test, and it will run as instantly as you expect.
But consider that a JUnit test must be compiled to .class and run in the JVM from the test runner's main() - though you don't normally invoke this directly, you just start it from your build tool or IDE. In the same way, your GWT/Java code must be compiled into JavaScript and then run in a browser of some kind.
That compilation is what takes time. For a minimal test running in only one browser (i.e. one permutation), this is going to take a minimum of about 10 seconds on most machines - that covers compiling the test module and the host page that lets the JVM tell the browser which test to run and collect results, stack traces, or timeouts. Then add however long the tested part of your project takes to compile, and you should have a good idea of how long that test case will take.
There are a few measures you can take to minimize the time taken, though 10 seconds is pretty much the bare minimum if you need to run in the browser.
Use test suites - these tell the compiler to go ahead and make a single larger module in which to run all of the tests. Downside: if you do anything clever with your modules, joining them into one might have other ramifications.
Use JVM tests - if you are just testing a presenter, and the presenter is pure Java (with a mock view), then don't mess with running the code in the browser just to test its logic. If you are concerned about differences, consider whether the purpose of the test is to make sure the compiler works or to exercise the logic.
Is this just one of those things that enterprise developers need to get used to?
Is there some advantage to the application spinning up every time you run your unit tests?
Am I doing something so completely wrong that no one understands this question? Googling doesn't seem to turn up anyone else complaining about it.
Jon Reid wrote a good blog post about not running the application when running unit tests: http://qualitycoding.org/app-delegate-for-tests/.
The idea is basically replacing the App Delegate in main.swift with an "empty" App Delegate when you run the tests target, so you don't run any application logic when running the tests.
I'm running a test to verify that a method was called on an OCMockObject. I expect it to fail, and the Issue Navigator does indeed show it failing, but Notification Center pops up (before all the tests finish running) and says "Tests Succeeded".
I realize it's most likely a threading issue, because one of my tests reads from a massive JSON file on every run, parses it, and then runs assertions to confirm the parsing. I'm betting this slows the tests down, but I would have thought the unit test runner would account for that. Apparently it doesn't, and it probably only considers the OCUnit assertions when reporting a pass (in reality I'm using a third-party fluent assertion library that doesn't use OCUnit methods).
Actual Question:
Is there a way to make the unit test runner wait until all test methods have finished? I'm also not doing any kind of threading inside the tests or in the code being tested.
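For what it's worth, a sketch of how this looks under XCTest (OCUnit's successor), where expectations make the runner block until asynchronous work completes or times out. parseLargeJSON here is a hypothetical stand-in for the parsing code:

    import XCTest

    // Hypothetical stand-in for the asynchronous JSON parsing under test.
    func parseLargeJSON(completion: @escaping ([String: Any]?) -> Void) {
        DispatchQueue.global().async {
            completion(["parsed": true])
        }
    }

    final class ParsingTests: XCTestCase {
        func testParsingFinishesBeforeVerdict() {
            let done = expectation(description: "parsing finished")
            parseLargeJSON { result in
                XCTAssertNotNil(result)
                done.fulfill()
            }
            // The runner blocks here and fails the test on timeout,
            // instead of reporting success while work is still in flight.
            wait(for: [done], timeout: 10)
        }
    }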
I have been trying to add some (logic) unit tests to my code recently. I've set up the tests with Kiwi; I like the BDD style and the syntax.
My problem now is that I'm trying to test some code that relies on CLLocationManager sending a correct locationManager:didUpdateToLocation:fromLocation:. However, this never happens when I run the test, presumably because CLLocationManager thinks it's not authorised. For the record, I have added a .gpx file to the test target and edited the scheme to use that file as the location (under Edit Scheme... -> Test -> Info). The same code works fine when I run the full app in the simulator. Any idea how I can get (simulated) location updates to be sent in a test case?
Use dependency injection to specify the location manager you'd like to use. You can either:
Specify it as an argument to the initializer (constructor injection)
Set it as a property (setter injection)
Try to use constructor injection if you can.
Then, for real use, pass in a real CLLocationManager. For test use, provide a fake that you can trigger to send the desired delegate message with preset test arguments. This also makes your test deterministic by removing any dependence on your actual location.
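A minimal sketch of the constructor-injection idea, in Swift for brevity (class names are illustrative, and I'm using the newer locationManager(_:didUpdateLocations:) callback rather than the deprecated didUpdateToLocation:fromLocation:):

    import CoreLocation

    // The object under test accepts any CLLocationManager, so a test
    // can inject a fake instead of relying on real (simulated) GPS.
    final class LocationTracker: NSObject, CLLocationManagerDelegate {
        private let manager: CLLocationManager
        private(set) var lastLocation: CLLocation?

        init(manager: CLLocationManager = CLLocationManager()) {
            self.manager = manager
            super.init()
            self.manager.delegate = self
        }

        func locationManager(_ manager: CLLocationManager,
                             didUpdateLocations locations: [CLLocation]) {
            lastLocation = locations.last
        }
    }

    // A fake the test can drive deterministically.
    final class FakeLocationManager: CLLocationManager {
        func simulateUpdate(_ location: CLLocation) {
            delegate?.locationManager?(self, didUpdateLocations: [location])
        }
    }

    // In a test:
    // let fake = FakeLocationManager()
    // let tracker = LocationTracker(manager: fake)
    // fake.simulateUpdate(CLLocation(latitude: 52.5, longitude: 13.4))
    // // tracker.lastLocation is now set; no real location involved.

The default argument means production code doesn't even have to know about the injection point.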
I ended up going a different way: I converted my logic test into an application test, so the test actually runs alongside the app in the simulator. This has the decided advantage that I don't have to jump through hoops to get [NSBundle mainBundle] and CLLocationManager to work exactly as they do in the app. I'd have preferred the conceptual cleanliness of a separate logic test, but I don't think it makes sense to rewrite code just for that.
On numerous unrelated projects, the CPU usage of NUnit has often ended up at about 50% even when I'm not running my tests. From other information I've read, this supposedly has more to do with my code than with NUnit.
Does anyone know how I can isolate the problems in my code that are causing this and fix them?
Thanks
I have the same problem, and it seems to rather consistently affect only one test project that does integration testing (calling web services, checking stuff over HTTP, etc.). I'm very careful to dispose of networked objects (with using(...) { }), so I don't quite understand why NUnit should continue to use 90% CPU days after the test has finished, when all objects used by the test should have been disposed.
The really strange thing is that while running the test, NUnit uses no more than 10%-50% CPU. It's only after the test has completed that CPU usage surges and stays constantly at 80%-100% indefinitely. Reloading or closing the test project (File > Close) doesn't help either; NUnit itself has to be closed.