I'm a big believer in testing, but not a very good practitioner. I've done pretty well at getting coverage on my model objects and programming them in a TDD style. I'm actually enjoying it so much I'd love to extend this to my controller layer, particularly my UIViewController subclasses.
Unfortunately, many UIKit classes don't function in independent tests. However, I'm unhappy with the restriction of only running my dependent tests on the device. It's really important to me to run all unit tests before every build, and it seems to me like it's possible and worthwhile to unit test (as opposed to other types of testing) controller code.
My question is simply this: How do I test UIViewControllers in such a way that the tests run before every build? I am aware of a couple of different solutions to this problem, but don't know a whole lot about the various benefits of each one.
In general, if you are stuck with an environmental problem that seems to make testing impossible, you can work around it if the benefits are great enough to warrant jumping through some hoops.
The situation here is a special case of unwanted outside dependencies, which I generally solve in my unit tests by either #ifdefing out tiny bits of dependent code, or by implementing stub classes to fill the expected dependency roles enough that my code can be tested.
So in this specific case, you might create a new source file, linked only in the test bundle, called "UIKitStubClasses.m" … within it you could implement the bare necessities to simulate UIKit dependent classes such as UIViewController, so that your tests link and thoroughly exercise their own logic.
The important thing to remember is that this is usually not all that much work. The tests will let you know what you need to implement in the stub, for example by issuing exceptions about unimplemented methods. You just add what you need to quiet the errors and test your code, and then your stub class is as sufficient for testing as any of the legitimate system framework classes would be.
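As a hedged sketch, a first cut at that file might look something like this; the exact members are whatever your failing tests complain about, and the title property here is just an invented example:

```objc
// UIKitStubClasses.m, linked only into the independent test bundle,
// which does NOT link UIKit, so this minimal stand-in takes the place
// of the real UIViewController. Everything here is a hypothetical
// starting point; add methods only as failing tests demand them.
#import <Foundation/Foundation.h>

@interface UIViewController : NSObject
@property (nonatomic, copy) NSString *title;
- (void)viewDidLoad;
@end

@implementation UIViewController
- (void)viewDidLoad
{
    // Intentionally empty; your subclasses under test override this.
}
@end
```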
Can you not use the tips/methods in Chris Hanson's blog here: http://chanson.livejournal.com/120263.html
It seems that you would probably need to run your app as the test harness, but it seems doable. Just because you are running the 'app' doesn't mean it necessarily has to do the normal thing your application does. Alternatively, you could create another application target that is just a simple dummy application that will load your test bundles and run them.
I agree it might not be perfect, but it sounds like it might work, and it seems like it could be automated.
So I was unhappy with this situation as it stood, and both Daniel's and Ed's answers spurred me on to improve things further. I decided to take matters into my own hands.
What I ended up writing was a small Cocoa Touch application that in -applicationDidFinishLaunching scans the class hierarchy for any SenTestCase subclasses and runs them. Unlike GoogleToolboxForMac's unit testing stuff, I tried to use as much of the built in SenTesting machinery as possible and as a result it didn't really end up as that much code. I haven't done too much testing beyond verifying that UILabel allocates without crashing the test rig (otest does in fact crash if you try), but it should work.
I've put the source online on both Bitbucket and GitHub.
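For anyone curious how such a runner finds test cases, here is a rough sketch of the general technique, assuming SenTestingKit is linked into the app; this is my reconstruction, not necessarily the code as published:

```objc
// Walk the Objective-C runtime's class list and collect every
// SenTestCase subclass, without instantiating anything.
#import <objc/runtime.h>
#import <SenTestingKit/SenTestingKit.h>

static NSArray *AllTestCaseClasses(void)
{
    NSMutableArray *testClasses = [NSMutableArray array];
    int count = objc_getClassList(NULL, 0);
    Class *classes = (Class *)malloc(sizeof(Class) * count);
    count = objc_getClassList(classes, count);
    for (int i = 0; i < count; i++) {
        // Walk the superclass chain rather than messaging unknown
        // classes, which is safer for exotic runtime classes.
        for (Class cls = class_getSuperclass(classes[i]);
             cls != Nil;
             cls = class_getSuperclass(cls)) {
            if (cls == [SenTestCase class]) {
                [testClasses addObject:classes[i]];
                break;
            }
        }
    }
    free(classes);
    return testClasses;
}

// The app delegate can then hand each collected class to the stock
// SenTesting machinery, e.g. by running [cls defaultTestSuite].
```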
While Apple's tools have improved over time, they still leave much to be desired. GHUnit is much better, has a real test runner, and makes it dead simple to set breakpoints, etc. You can even leverage all your existing SenTests.
Get it on GitHub.
You can instantiate all your UIKit code with no worries. I use it to test UIViewControllers, assert outlets, and do behavior testing.
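For example, a GHUnit test that loads a controller's view and asserts an outlet might look roughly like this; MyViewController and its nameLabel outlet are hypothetical names:

```objc
#import <GHUnitIOS/GHUnit.h>
#import "MyViewController.h" // hypothetical controller under test

@interface MyViewControllerTest : GHTestCase
@end

@implementation MyViewControllerTest

- (void)testOutletsAreConnectedAfterViewLoads
{
    MyViewController *controller =
        [[MyViewController alloc] initWithNibName:@"MyViewController"
                                           bundle:nil];
    [controller view]; // force the view (and its outlets) to load

    GHAssertNotNil(controller.nameLabel,
                   @"nameLabel outlet should be connected in the nib");
}

@end
```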
I want to take a BDD approach to unit testing in an iOS project, and I just realized that there may not be an existing library that provides test doubles of the test spy variety. Ideally, I'm looking for something similar to Mockito, Jasmine, or RR.
Before I go and spend a week of free time writing a test spy library, I thought I'd pose the question here on SO first.
So far I've looked at OCMock and Kiwi, but they both seem to be traditional, high-specification-by-default mocking frameworks that require expectation assertions to be set in the arrange phase, prior to the act phase. Obviously, this is hampering my vision of beautiful, DRY, nested specs (which I plan on authoring in either Kiwi or Cedar).
Just saw this.
Kiwi definitely does not do this now. You are right that the mocks in it are built for a 'standard' arrange prior to act phase.
That said, at first glance it seems that adding the basics for spy functionality would not require too much reengineering. Every message (barring some implementation-critical, reserved selectors) that gets to a mock goes through -[KWMock forwardInvocation:].
Essentially, the current -[KWMock forwardInvocation:] would need to be modified to record/copy all invocations that pass through it, instead of what it does now. This would be the primitive functionality that allows expectations to be verified later by querying the recorded invocations. Of course, coming up with a nice, readable form for verification isn't trivial either.
The spy/mock would still need to know what class/protocol it is standing in for upfront. This is so it will be able to generate valid method signatures for selectors of messages sent to it that allows the runtime forwarding machinery to generate the actual NSInvocation that will be forwarded.
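To make the idea concrete, here is a hedged sketch, not Kiwi's actual code, of a bare-bones recording spy built on the same forwarding machinery:

```objc
#import <Foundation/Foundation.h>

// A hypothetical minimal spy: it records every invocation forwarded
// to it so expectations can be queried after the act phase.
@interface RecordingSpy : NSProxy {
    NSMutableArray *_recordedInvocations;
    Class _mockedClass; // needed to produce valid method signatures
}
- (id)initWithMockedClass:(Class)aClass;
- (BOOL)didReceiveSelector:(SEL)sel;
@end

@implementation RecordingSpy

- (id)initWithMockedClass:(Class)aClass
{
    // NSProxy has no -init, so we just set up our own state.
    _mockedClass = aClass;
    _recordedInvocations = [[NSMutableArray alloc] init];
    return self;
}

- (NSMethodSignature *)methodSignatureForSelector:(SEL)sel
{
    // The runtime needs a signature to build the NSInvocation it
    // will hand to forwardInvocation:.
    return [_mockedClass instanceMethodSignatureForSelector:sel];
}

- (void)forwardInvocation:(NSInvocation *)invocation
{
    [invocation retainArguments];
    [_recordedInvocations addObject:invocation];
}

- (BOOL)didReceiveSelector:(SEL)sel
{
    for (NSInvocation *inv in _recordedInvocations) {
        if (inv.selector == sel) return YES;
    }
    return NO;
}

@end
```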
I am too preoccupied with other things right now to get an implementation in there, but I'll be happy to answer more questions or merge any pull requests. HTH.
I took the time to write some unit tests and set up the targets in Xcode, etc., and they're pretty useful for a few classes. However:
I want to test small UI pieces for which I don't want to launch the entire application. There is no concept of pass/fail: I need to "see" the pieces, and I can make dummy instances of all the relevant classes to do this. My question is: how can I set this up in Xcode?
I realize I could use another Xcode project for each class (or groups of classes), but that seems a bit cumbersome. Another target for each?
I know that you're looking for an approach to testing UI components that doesn't require a fully functional application, but I've been impressed with what the new UI Automation instrument introduced in iOS 4.0 lets you do.
This instrument lets you use JavaScript scripts to interactively test your application's interface, and it does so in a way that does not require checking exact pixel values or positions on a screen. It uses the built-in accessibility hooks present in the system for VoiceOver to identify and interact with components.
Using this instrument, I have been able to script tests that fully exercise my application as a user would interact with it, as well as ones that hammer on particular areas and look for subtle memory buildups.
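The one prerequisite is that your controls are reachable through accessibility. A hypothetical snippet (inside a view controller's @implementation) showing the kind of setup that makes a control scriptable:

```objc
// Give the button an accessibility label that a UI Automation script
// (and VoiceOver) can look up by name. All names here are invented.
- (void)viewDidLoad
{
    [super viewDidLoad];
    UIButton *submit = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    submit.frame = CGRectMake(20.0f, 20.0f, 280.0f, 44.0f);
    submit.accessibilityLabel = @"Submit Order";
    [self.view addSubview:submit];
}

// A UI Automation script could then reach it as, roughly:
//   UIATarget.localTarget().frontMostApp().mainWindow()
//       .buttons()["Submit Order"].tap();
```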
The documentation on this part of Instruments is a little sparse, but I recently taught a class covering the subject for which the video is available on iTunes U for free (look for the Testing class in the Fall semester). My course notes (in VoodooPad format) cover this as well. I also highly recommend watching the WWDC 2010 video session 306 - "Automating User Interface Testing with Instruments".
Well, you cannot call showing a piece of some GUI testing, even if that GUI is part of a large application. What you can do here is create a separate executable target and write a small tool that reuses GUI components from your application and shows them to you based on input parameters. This will eliminate the need for many different targets.
If you still insist on using unit tests, you can show your GUI for some period of time, for example 10 seconds. The test case will then run until the GUI is closed or the timeout elapses, so each test will take up to N seconds to execute.
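As a hedged sketch, such a test could simply spin the run loop; MyWidgetView is a hypothetical class, and pass/fail is entirely up to the human watching:

```objc
// Inside a SenTestCase subclass: put the view on screen and keep the
// run loop alive long enough to eyeball it.
- (void)testShowWidgetForVisualInspection
{
    UIWindow *window =
        [[UIWindow alloc] initWithFrame:[[UIScreen mainScreen] bounds]];
    MyWidgetView *widget =
        [[MyWidgetView alloc] initWithFrame:CGRectMake(0, 0, 320, 100)];
    [window addSubview:widget];
    [window makeKeyAndVisible];

    // Spin the run loop for 10 seconds so a human can look at the view.
    [[NSRunLoop currentRunLoop]
        runUntilDate:[NSDate dateWithTimeIntervalSinceNow:10.0]];
}
```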
This is a good question. I think you actually do not want to use unit tests for those 'visual confirmations'. Personally I usually write little test apps to do this kind of testing or development. I don't like separate targets in the same project so I usually just create a test project next to the original one and then reference those classes and resources using relative paths. Less clutter. And it is really nice to be able to test more complex user interface elements in their own little test environment.
I would take a two-level approach to UI "unit testing":
Although Cocoa/CocoaTouch are still closer to the Model-View-Controller than the Model-View-ViewModel paradigm, you can gain much of the testability advantage by breaking your "View" into a "view model" and a "presenter" view (note that this is somewhat along the lines of the NSView/NSCell pair; Cocoa engineers had this one a long time ago). If the view is a simple presentation layer, then you can test the behavior of the view by unit testing the "view model".
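A hypothetical sketch of that split, with all names invented for illustration: the view model owns the presentation logic and can be unit tested without any view on screen.

```objc
#import <Foundation/Foundation.h>

// The "view model": all formatting and display decisions live here,
// where plain unit tests can reach them.
@interface AccountSummaryViewModel : NSObject
@property (nonatomic, assign) double balance;
- (NSString *)balanceText;       // formatting logic, testable
- (BOOL)shouldHighlightBalance;  // e.g. negative balances get flagged
@end

@implementation AccountSummaryViewModel
- (NSString *)balanceText
{
    return [NSString stringWithFormat:@"$%.2f", self.balance];
}
- (BOOL)shouldHighlightBalance
{
    return self.balance < 0.0;
}
@end

// The unit test never touches a view:
//   viewModel.balance = -12.5;
//   STAssertTrue([viewModel shouldHighlightBalance], nil);
```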
To test the drawing/rendering of your views, you will have to either do human testing or do rendering/pixel-based tests. Google's Toolbox for Mac has several tools for doing pixel-by-pixel comparison of rendered NSViews, CALayers, UIViews, etc. I've written a tool for the Core Plot project to make dealing with the test failures and merging the reference files back into your unit test bundle a little easier.
I think the title speaks for itself guys - why should I write an interface and then implement a concrete class if there is only ever going to be 1 concrete implementation of that interface?
I think you shouldn't ;)
There's no need to shadow all your classes with corresponding interfaces.
Even if you're going to make more implementations later, you can always extract the interface when it becomes necessary.
This is a question of granularity. You shouldn't clutter your code with unnecessary interfaces, but they are useful at boundaries between layers.
Someday you may try to test a class that depends on this interface. Then it's nice that you can mock it.
I'm constantly creating and removing interfaces. Some were not worth the effort and some are really needed. My intuition is mostly right but some refactorings are necessary.
The question is, if there is only going to ever be one concrete implementation, should there be an interface?
YAGNI - "You Ain't Gonna Need It" (from Wikipedia)
According to those who advocate the YAGNI approach, the temptation to write code that is not necessary at the moment, but might be in the future, has the following disadvantages:
* The time spent is taken from adding, testing or improving necessary functionality.
* The new features must be debugged, documented, and supported.
* Any new feature imposes constraints on what can be done in the future, so an unnecessary feature now may prevent implementing a necessary feature later.
* Until the feature is actually needed, it is difficult to fully define what it should do and to test it. If the new feature is not properly defined and tested, it may not work right, even if it eventually is needed.
* It leads to code bloat; the software becomes larger and more complicated.
* Unless there are specifications and some kind of revision control, the feature may not be known to programmers who could make use of it.
* Adding the new feature may suggest other new features. If these new features are implemented as well, this may result in a snowball effect towards creeping featurism.
Two somewhat conflicting answers to your question:
You do not need to extract an interface from every single concrete class you construct, and
Most Java programmers don't build as many interfaces as they should.
Most systems (even "throwaway code") evolve and change far past what their original design intended for them. Interfaces help them to grow flexibly by reducing coupling. In general, here are the warning signs that you ought to be coding to an interface:
Do you even suspect that another concrete class might need the same interface (like, if you suspect your data access objects might need XML representation down the road -- something that I've experienced)?
Do you suspect that your code might need to live on the other side of a Web Services layer?
Does your code form a service layer to some outside client?
If you can honestly answer "no" to all these questions, then an interface might be overkill. Might. But again, unforeseen consequences are the name of the game in programming.
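To illustrate with a hedged Objective-C sketch (all names hypothetical), coding to an interface at a boundary just means the client takes a protocol, never a concrete class:

```objc
#import <Foundation/Foundation.h>

@class Order; // hypothetical model object

// The boundary: any store (SQL-backed, XML-backed, web-service-backed)
// can play this role later without the client changing.
@protocol OrderStore <NSObject>
- (void)saveOrder:(Order *)order;
- (Order *)orderWithIdentifier:(NSString *)identifier;
@end

@interface OrderService : NSObject
// Handed "some OrderStore", never a concrete store class.
- (id)initWithStore:(id<OrderStore>)store;
@end
```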
You need to decide what the programming interface is, by specifying the public functions. If you don't do a good job of that, the class would be difficult to use.
Therefore, if you decide later you need to create a formal interface, you should have the design ready to go.
So, you do need to design an interface, but you don't need to write it as an interface and then implement it.
I use a test driven approach to creating my code. This will often lead me to create interfaces where I want to supply a mock or dummy implementation as part of my test fixture.
I would not normally create any code unless it has some relevance to my tests, and since you cannot easily test an interface, only an implementation, that leads me to create interfaces if I need them when supplying dependencies for a test case.
I will also sometimes create interfaces when refactoring, to remove duplication or improve code readability.
You can always refactor your code to introduce an interface if you find out you need one later.
The only exception to this would be if I were designing an API for release to a third party - where the cost of making API changes is high. In this case I might try to predict the type of changes I might need to do in the future and work out ways of creating my API to minimise future incompatible changes.
One thing which no one has mentioned yet is that sometimes it is necessary in order to avoid dependency issues. You can have the interface in a common project with few dependencies and the implementation in a separate project with lots of dependencies.
"Only Ever going to have One implementation" == famous last words
It doesn't cost much to make an interface and then derive a concrete class from it. The process of doing it can make you rethink your design and often leads to a better end product. And once you've done it, if you ever find yourself eating those words - as frequently happens - you won't have to worry about it. You're already set. Whereas otherwise you have a pile of refactoring to do and it's gonna be a pain.
Edited to clarify: I'm working on the assumption that this class is going to be spread relatively far and wide. If it's a tiny utility class used by one or two other classes in a single package then yeah, don't worry about it. If it's a class that's going to be used in multiple packages by multiple other classes then my previous answer applies.
The question should be: "how can you ever be sure, that there is only going to ever be one concrete implementation?"
How can you be totally sure?
By the time you thought this through, you would already have created the interface and be on your way without assumptions that might turn out to be wrong.
With today's coding tools (like Resharper), it really doesn't take much time at all to create and maintain interfaces alongside your classes, whereas discovering that now you need an extra implementation and to replace all concrete references can take a long time and is no fun at all - believe me.
A lot of this is taken from a Rainsberger talk on InfoQ: http://www.infoq.com/presentations/integration-tests-scam
There are 3 reasons to have a class:
* It holds some Value
* It helps Persist some entity
* It performs some Service
The majority of services should have interfaces. An interface creates a boundary, hides the implementation, and you already have a second client: all of the tests that interact with that service.
Basically if you would ever want to Mock it out in a unit test it should have an interface.
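For example, with OCMock (mentioned elsewhere on this page) a service protocol can be mocked in a few lines; PaymentService, Checkout, and their methods are all hypothetical names:

```objc
#import <SenTestingKit/SenTestingKit.h>
#import <OCMock/OCMock.h>

// A hypothetical service boundary.
@protocol PaymentService <NSObject>
- (BOOL)chargeAmount:(double)amount;
@end

@interface CheckoutTest : SenTestCase
@end

@implementation CheckoutTest

- (void)testCheckoutChargesTheCard
{
    id service = [OCMockObject mockForProtocol:@protocol(PaymentService)];
    BOOL success = YES;
    [[[service expect] andReturnValue:OCMOCK_VALUE(success)]
        chargeAmount:9.99];

    // Checkout is a hypothetical class that takes the service as a
    // dependency, which is exactly what makes it mockable.
    Checkout *checkout = [[Checkout alloc] initWithPaymentService:service];
    [checkout complete];

    [service verify]; // fails the test if chargeAmount: was never sent
}

@end
```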
I recently wrote a small tool to generate a class for each tier I hand write for the boring "forms over data" work where I spend almost 90% of my time (depressing I know) ... more on this as the economy improves ;)
My question is this - will using this tool instead of hand typing all this code from day to day actually hurt me as a developer? I feel like I will always be making changes to this tool and thus I "should" stay on top of the patterns used/ choices made/ etc... but some small part of me feels like I might lose my edge ... am I wrong?
If the tool can spit the code out without thought, then it probably saves you lots of thoughtless typing.
Writing the tool in the first place requires thinking, so I'd guess you'd be more "on the edge" maintaining and writing the tool.
That's good! Of course, writing a tool to do the whole job for you is impossible and wrong.
But automating repeatable tasks is always good - and sometimes writing specific types of code is repeatable.
It is even encouraged in the "Pragmatic Programmer" book.
Make sure that you have checked the code generator into source control, and not its output (unless you have to modify the generated code later by hand)!
You are most definitely not wrong. I use code generators anywhere I can - I currently use CodeSmith to create my DAOs by looking at the database.
What edge are you afraid of losing? In my mind going to code generation is actually giving you an edge.
Larry Wall (of Perl fame) describes the three cardinal virtues of programming as Laziness, Impatience, and Hubris.
Congratulations! You have shown good laziness, in that you have identified some work you can pass off to an automated process and done so. (Bad laziness leads to cutting corners, procrastination, and generally postponing rather than eliminating work.) If you can successfully palm off some work onto another program, you are spending less time on annoying triviality and more on accomplishing things and learning.
Generate what you can. Code generation is one of the best tools I've picked up over the last 2 or 3 years. Typing the same code over and over (or copy and pasting it) is prone to error.
Spending less time doing something by having something/someone else do it, and more time researching better ways to do it will generally lead to doing it in a better way.
This doesn't have to just apply to programming....
Your code generator (at least in principle - I haven't looked at it myself) is The Right Thing, at least as far as it goes.
The next step would be to see whether you can, instead of generating all this redundant code, create a base class whose functionality matches the generated code and then derive your application code from it. Using inheritance rather than generation will allow you to benefit from improvements without needing to re-run the generator on all your projects. Perhaps more importantly, if you customize the generated code, the customizations would be lost if you re-run the generator, but customizations in a derived class will be preserved when the base class is changed.
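A hypothetical sketch (names invented) of that inheritance arrangement: the generator owns the base class and may overwrite it at any time; hand-written customizations live in the subclass and survive regeneration.

```objc
#import <Foundation/Foundation.h>

// GeneratedCustomerDAO.h/.m: regenerated whenever the schema changes;
// never edited by hand.
@interface GeneratedCustomerDAO : NSObject
- (NSArray *)fetchAllCustomers;
@end

// CustomerDAO.h/.m: hand-written overrides and additions live here
// and are untouched by the generator.
@interface CustomerDAO : GeneratedCustomerDAO
- (NSArray *)fetchPreferredCustomers;
@end
```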
No. Why do you think IDEs are so popular? Imagine if all the people who use Visual Studio had to programmatically create their GUIs without help from the IDE; it'd be terrible. I would be willing to bet most people who use Visual Studio wouldn't know how to manually create the forms they're building in the IDE. But there's nothing wrong with that.
I believe in code generation wherever possible to remove the rote tasks of programming. You will not lose your edge, you will probably become a better programmer because you will spend more time working on the important and interesting stuff.
BTW, your tool sounds interesting. Have you released it anywhere?
Code generation is fine as long as you understand what you are generating. Physicists use calculators because they understand the formulas they are automating and realize that their precious time is better spent on important tasks.
Code generation is one of those invaluable DO:s that The Pragmatic Programmer advocates. I truly recommend that book. Here's a Pragmatic Programmer quick ref.
It's almost hypocritical not to code generate. Here we are automating all of these tasks that were traditionally done by hand... and yet many of us still hand-crank all of our code, even if it can be easily generated.
My only experience with code generation is the macros of Common Lisp. They are used all the time. Everything that automates repetitive tasks is beneficial; that is what programming is about.
Read the story of Mel.
Imagine that each time you made a change to the tool and regenerated your code, you had made that design change by hand on all of your modules.
Since I've started generating code and gotten up to speed, I've found that I rarely get bugs in the generated code.
I find that writing code gen does help me learn the nuances of good architecture. You start seeing common patterns as opposed to a narrow view of your design. That said, don't use code gen as a substitute for good object-oriented code, and don't love your code gen so much you ignore new technologies. For example, if you're in .NET and are writing code-gen for data access, you'd better have a good excuse for not using Linq to SQL or NHibernate. Similarly, Dynamic Data can help in many forms-on-data scenarios. So, my advice: spike new stuff and code gen as needed.
My 2 cents on code gen is that it is also critical for use in refactoring. I have found that partial classes and a good file-comparison utility (Araxis or Beyond Compare) are essential.
Keep your generated code in one file and the custom Tweaks you made for that class in another file.
This practice will allow you to implement comprehensive framework changes quickly, and will also help you move to a new paradigm while easily preserving your custom logic.
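Partial classes are a C# feature, but in Objective-C a roughly analogous split can be done with a category in its own file; all names here are invented:

```objc
#import <Foundation/Foundation.h>

// CustomerRecord.m: generated file, overwritten on every run of the
// tool. Never edited by hand.
@interface CustomerRecord : NSObject
@property (nonatomic, copy) NSString *name;
@end

@implementation CustomerRecord
@end

// CustomerRecord+Tweaks.m: hand-written category in its own file,
// untouched by regeneration.
@interface CustomerRecord (Tweaks)
- (NSString *)displayName;
@end

@implementation CustomerRecord (Tweaks)
- (NSString *)displayName
{
    return [self.name uppercaseString];
}
@end
```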
CodeSmith FTW!
While build servers are great for making sure all your code compiles, they don't address differences in signatures with your stored procs or the like. If you routinely run the code gen you can more easily identify when those changes occur. A unit test will tell you the SP is wrong; code gen will tell you how to make it right.
Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. Having said that, using them to ensure how different methods on particular components are called (and in what order) seems bad to me. It exposes the behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally, I feel like I've been chained to a heavy ball.
I much prefer to just test against the interface, giving test data as input and asserting on the result. Better yet, use some testing tool that generates test data automatically to verify a given property, e.g. that adding one element to a list and removing it immediately yields the same list.
In our workplace, we use Hudson, which reports test coverage. Unfortunately, it makes it easy to become blindly obsessed with everything being tested. I strongly feel that one shouldn't test everything if one wants to be productive in maintenance mode as well. A good example would be controllers in web frameworks: since they should generally contain very little logic, testing with a mock framework that a controller calls such-and-such methods in a particular order is nonsensical, in my honest opinion.
Dear SOers, what are your opinions on this?
I read 2 questions:
What is your opinion on testing that particular methods on components are called in a particular order?
I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days.
We try to write unit tests which test only one thing. When we do this it's normally possible to write a very simple test which stubs out interactions with most other components. And we very rarely assert ordering. This helps to make the tests less brittle.
Tests which test only one thing are easier to understand and maintain.
Also, if you find yourself having to write lots of expectations for interactions with lots of components there could well be a problem in the code you're testing anyway. If it's difficult to maintain tests the code you're testing can often be refactored.
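As a hedged sketch of that style (every name here is invented), a hand-rolled stub plus a single state assertion might look like:

```objc
#import <SenTestingKit/SenTestingKit.h>

// A hypothetical collaborator boundary and a trivial stub for it.
@protocol Clock <NSObject>
- (NSDate *)now;
@end

@interface StubClock : NSObject <Clock>
@property (nonatomic, strong) NSDate *fixedNow;
@end

@implementation StubClock
- (NSDate *)now { return self.fixedNow; }
@end

@interface ReceiptPrinterTest : SenTestCase
@end

@implementation ReceiptPrinterTest

- (void)testReceiptIsStampedWithCurrentTime
{
    StubClock *clock = [[StubClock alloc] init];
    clock.fixedNow = [NSDate dateWithTimeIntervalSince1970:0];

    // ReceiptPrinter and Receipt are hypothetical classes under test.
    ReceiptPrinter *printer = [[ReceiptPrinter alloc] initWithClock:clock];
    Receipt *receipt = [printer printReceipt];

    // One assertion about state; no expectations about how the
    // printer talked to its collaborators, and no ordering asserted.
    STAssertEqualObjects(receipt.timestamp, clock.fixedNow, nil);
}

@end
```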
Should one be obsessed with test coverage?
When writing unit tests for a given class I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.
Overall unit test coverage stats? Not particularly interested so long as they're high.
100% unit test coverage for an entire system? Not interested at all.
I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
The book The Art of Unit Testing has plenty of good advice in these areas.
100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc. seem unlikely to provide good ROI. TDD will provide high test coverage in any case. Test what might break.
It depends on how you model the domain(s) of your program.
If you model the domains in terms of data stored in data structures, and methods that read data from one data structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want: the outcome you care about is that a procedure puts the right data in the right variables, and what it calls to make that happen is just an implementation detail.
If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about and what data the objects store to coordinate their behaviour in the protocols in which they play roles is just implementation detail. In that case, mock objects are the right tool for the job and state based testing ties the tests too closely to unimportant implementation details.
And in most object-oriented programs there is a mix of styles. Some code will be written in a purely functional style, transforming immutable data structures. Other code will coordinate the behaviour of objects that change their hidden, internal state over time.
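A hedged illustration of the two styles side by side, using invented names and the SenTest/OCMock tools mentioned elsewhere on this page:

```objc
#import <SenTestingKit/SenTestingKit.h>
#import <OCMock/OCMock.h>

// A hypothetical collaboration protocol for the interaction style.
@protocol Auditor <NSObject>
- (void)balanceChangedTo:(double)newBalance;
@end

@interface AccountTest : SenTestCase
@end

@implementation AccountTest

// State-based: the outcome that matters is the data produced.
- (void)testInterestIsAppliedToTheBalance
{
    Account *account = [[Account alloc] initWithBalance:100.0]; // hypothetical class
    [account applyInterestAtRate:0.05];
    STAssertEqualsWithAccuracy(account.balance, 105.0, 0.0001, nil);
}

// Interaction-based: the outcome that matters is the message
// protocol; the mock verifies the auditor was told about the change,
// not what data the account stored.
- (void)testInterestChangeIsReportedToTheAuditor
{
    id auditor = [OCMockObject mockForProtocol:@protocol(Auditor)];
    [[auditor expect] balanceChangedTo:105.0];

    Account *account = [[Account alloc] initWithBalance:100.0
                                                auditor:auditor];
    [account applyInterestAtRate:0.05];

    [auditor verify];
}

@end
```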
As for high test coverage, it really doesn't tell you that much. Low test coverage shows you where you have inadequate testing, but high test coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths and so increase the coverage stats but not actually make any assertions about what those code paths did. Also, what really matters is how different parts of the program behave in combination, which unit test coverage won't tell you. If you want to verify that your tests really are testing your system's behaviour adequately you could use a Mutation Testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
I'd asked a similar question, "How Much Unit Testing is a Good Thing?", which might help give an idea of the variety of levels of testing people feel are appropriate.
What is the probability that during your code's maintenance some junior employee will break the part of code that runs "controller calls such and such method in particular order"?
What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?
Now, multiply #1 and #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk.
Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).
E.g., if you maintain a web app that is not production critical and has 100 users who have a workaround if the app is broken (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.
If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage becomes a lot more sensible.
Also, I'm not sure if I'm reading too much into your question, but you seem to be implying that mocking-enabled unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion; the two testing approaches must coexist.