When it comes to writing unit tests for UI, what do you write tests for?
Do you test each method? (e.g., that a method returns the correct data)
Or do you test the functionality? (Making sure that the table populates with the data it is supposed to.)
Do I need to mock everything except the item I am testing? Say I am testing that a table view populates correctly; do I mock everything else?
Please provide as much detail as possible.
I'll try to answer this in a general way.
When testing UI-ish code, it's often a good idea to target the tests "one step away" from the UI itself, e.g., run them against the models instead of the UI if possible. It's much less brittle this way. I'm not familiar with iOS UI test automation, but those sorts of tests tend to break upon the smallest layout changes, etc.
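To make that concrete, here is a minimal sketch of testing "one step away" (written in TypeScript with Jest-style test globals purely for illustration; the same idea carries over directly to an iOS view model). TableViewModel, User, and every other name here are hypothetical:

```typescript
// Hypothetical view model that feeds a table; we test it instead of the table.
interface User {
  id: number;
  name: string;
}

class TableViewModel {
  private rows: string[] = [];

  // Derive displayable row titles from raw model objects.
  load(users: User[]): void {
    this.rows = users.map((u) => u.name);
  }

  numberOfRows(): number {
    return this.rows.length;
  }

  titleForRow(index: number): string {
    return this.rows[index];
  }
}

// No UI is instantiated, so layout changes cannot break this test.
test("view model exposes one row per user", () => {
  const vm = new TableViewModel();
  vm.load([{ id: 1, name: "Ada" }, { id: 2, name: "Grace" }]);
  expect(vm.numberOfRows()).toBe(2);
  expect(vm.titleForRow(0)).toBe("Ada");
});
```

If the table simply asks the view model for its row count and titles, asserting on the view model answers the "does the table populate correctly" question without touching the UI at all.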
I suggest you take a look at FoneMonkey by Gorilla Logic. They have a very nice utility for writing unit tests which actually test from the user's perspective, i.e., check that the UI is as it should be: loads correctly, contains the correct values, etc.
You can even run it in a faceless environment, e.g., on a Continuous Integration server.
This is clearly a practical question. Internet wisdom suggests logically organizing tests for maintainability. When it comes to REST API controllers, one could include all integration tests for all actions of the controller in a single file.
Assuming we have 4 CRUD actions per controller, with an average of 6 tests per action, we are bound to end up with at least 24 tests in one file. In industrial-grade web servers I suspect that this number would balloon way further upwards.
The thing is that these actions, even though they are part of one class (the controller), are complex and orthogonal (different resources/artifacts/mocks are needed to test each group, etc.).
I'm having a hard time coming to terms with the idea that all these tests should be placed in one file, given that they test almost entirely different things and could be placed in 4 separate files. Isn't this more aligned with the spirit of TDD, after all?
Is my intuition misplaced?
Nothing is stopping you from organizing your tests in any way that makes sense for your project.
Internet wisdom has nothing to do with this; do what works for you. You want one test file per CRUD action? Then do that.
Where your tests live is far less important than what you actually test and how.
Just one note of caution: I have seen people spend a long time testing completely the wrong things.
For example: instantiating controllers to call methods on them (don't; this is a sign of really bad separation of concerns), or mocking who knows what.
You will achieve a much better result if you unit test what should be unit tested, and that is the functional stuff: classes and methods that do actual work. For an API this will mean business rules, data transformations, model transformations, that kind of thing.
For the rest, I'd stick with integration tests: call your endpoints like a normal user would. Use something like Postman to organize collections of tests.
You'll have a lot less to mock, your tests won't be affected much if you change implementation details, you'll be able to take measurements on your endpoints, and you'll actually test the real thing, going all the way to your storage and back, which no amount of mocking will give you.
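As a sketch of what that endpoint-level testing can look like in code, here is a minimal example using supertest against a throwaway Express app (the /users route and its payload are invented for illustration):

```typescript
import express from "express";
import request from "supertest";

// Throwaway app standing in for your real server; normally you'd import it.
const app = express();
app.get("/users", (_req, res) => {
  res.json([{ id: 1, name: "Ada" }]);
});

test("GET /users returns the user list", async () => {
  const response = await request(app).get("/users").expect(200);
  // Assert only on observable output; implementation details stay free to change.
  expect(response.body).toEqual([{ id: 1, name: "Ada" }]);
});
```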
I have a question about react-testing-library. It seems like this is the go-to testing library if you're doing hooks development, since Enzyme doesn't seem to support hooks at this time, and who knows if it ever will, at least from the shallow-rendering perspective, from what I've read.
What is driving me a little crazy about react-testing-library is that it suggests doing full renders, firing clicks, changes, etc. to test your components. So what if you were to change the functionality of, say, a Button component: are all the tests that use it going to break? Doesn't it seem odd to render and run tests on every child component of a component when you're already testing that child component? Are you expected to mock all those components inside a parent component? And doesn't it seem redundant to do clicks and changes if you're already doing that in automation testing, such as with WebDriver?
The idea is that you test 'mission critical' things in end-to-end testing.
These tests rely on lots of features all working together: the entire app running and every single piece of functionality in between working.
Because they rely on so many things, and take so long to develop and run, you don't want to test everything with an end-to-end test.
And if one breaks, where did it break? Which piece of functionality is no longer working?
If you change the functionality of a button that was used in an end-to-end test, it would fail, as it should. But say the end-to-end test fails and your integration/unit tests on the button also fail: you know straight away where your problem is.
And what if you refactor the button so that it still functions the same but the code implementing it is much cleaner? Then you should design your tests so that they still pass, and this is actually where react-testing-library really shines.
You mimic how a user might interact with the component and what you expect the component to do, not what its internal state looks like, as you might in Enzyme.
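For example, a minimal react-testing-library sketch along those lines (the Counter component is hypothetical; note that getByText throws if the text is absent, so the final expect just makes the assertion explicit):

```tsx
import React, { useState } from "react";
import { render, screen, fireEvent } from "@testing-library/react";

// Hypothetical component used purely for illustration.
function Counter() {
  const [count, setCount] = useState(0);
  return (
    <button onClick={() => setCount(count + 1)}>Clicked {count} times</button>
  );
}

test("clicking the button updates the visible label", () => {
  render(<Counter />);
  // Interact the way a user would: find the button and click it.
  fireEvent.click(screen.getByRole("button"));
  // Assert on what the user sees, not on the component's internal state.
  expect(screen.getByText("Clicked 1 times")).toBeTruthy();
});
```

A refactor that keeps the user-facing behaviour (say, extracting the label into a helper) leaves this test green, which is exactly the point.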
I'm not a professional developer, but that's my two cents.
You must take a look at the "Testing Trophy" philosophy that Kent C. Dodds talks about: https://testingjavascript.com/
Like Michael mentions in the other answer, if you change the functionality of your Button components, your tests are expected to break. Tests are a clear translation of the business needs, so if needs change, your existing tests are supposed to break, so that the new ones may be incorporated.
On your point about doing automation testing instead, where I'm assuming you mean end-to-end testing: this is different from the tests that react-testing-library suggests you write. The philosophy asks you to write a good number of integration tests on your parent component, so that you can be sure that the way the parent component uses the child component is in harmony. These tests validate the configuration you gave the child component that is specific to the behavior of this parent component, hence integration tests.
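As a hedged sketch of such a parent-level integration test (SearchForm and Button are hypothetical components invented for the example):

```tsx
import React from "react";
import { render, screen, fireEvent } from "@testing-library/react";

// Hypothetical child component, presumably covered by its own tests.
function Button(props: { label: string; onPress: () => void }) {
  return <button onClick={props.onPress}>{props.label}</button>;
}

// Hypothetical parent; the wiring between parent and child is what we test.
function SearchForm(props: { onSearch: () => void }) {
  return <Button label="Search" onPress={props.onSearch} />;
}

test("SearchForm triggers a search through its Button", () => {
  const onSearch = jest.fn();
  render(<SearchForm onSearch={onSearch} />);
  fireEvent.click(screen.getByText("Search"));
  // Confirms the parent configured the child correctly, without mocking it.
  expect(onSearch).toHaveBeenCalledTimes(1);
});
```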
While working on projects, I often come to the dilemma of working on the UI first or on the logic first. Having the UI first gives a nice overview of how the end product is going to look, while having the logic first uncovers any possible roadblocks in technology.
However, it is not always that crystal clear. Sometimes the UI may need data to be populated to really show what it means, and simulating the data could be more difficult than implementing the logic. What is your preferred approach for development, and why? Which is more efficient and effective?
(I am seeing this issue more and more with iPhone projects.)
Neither.
You will have to do some thinking at the beginning of the project anyway, deciding the general approach you will take and what operations you will support.
Do that reasonably well and you will have defined the interface between the view and the underlying logic. Look at the Model-View-Controller approach for some inspiration.
What you want to have early on is an idea of the basic operations your logic code needs to perform in order to achieve a purpose. Usually it will be a simple function call, but sometimes it may involve more than that. Have that clear first.
Next, a complex system that works is based on a simple system that works.
Which means you will need to have a basic UI you'll use to test a basic logic implementation first.
A simple form with a button which presents a message is basic enough. Then it can grow: you implement a piece of functionality, then you add a simple UI you can test it with.
It is easier to do both piece by piece, as the logic and UI for a small piece of functionality are conceptually similar, and it will be easy to keep track of both while you implement and test.
The most important part is to keep the UI and logic decoupled, making them talk through a common interface. This will allow you to make quick edits when needed and improve the looks of the GUI towards the end.
You will be better able to just scrap the UI if you don't like it. All you'll need to do is implement the same interface, which you know how to do because you wrote it and have already implemented it once.
If at some point you realize you've made a big mistake you can still salvage parts of the code, again because the UI and logic are decoupled and, hopefully, the logic part is also modular enough.
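To make the common-interface idea concrete, here is a minimal sketch in TypeScript; every name in it is hypothetical:

```typescript
// The logic only ever talks to this interface, never to a concrete UI.
interface TaskView {
  showTasks(titles: string[]): void;
  showError(message: string): void;
}

class TaskPresenter {
  constructor(private view: TaskView) {}

  // Pure logic: decides *what* to show; the view decides *how* to show it.
  loadTasks(tasks: { title: string; done: boolean }[]): void {
    const open = tasks.filter((t) => !t.done).map((t) => t.title);
    if (open.length === 0) {
      this.view.showError("Nothing to do");
    } else {
      this.view.showTasks(open);
    }
  }
}

// Any UI (console, web, mobile) can implement TaskView; scrapping this one
// and writing another does not touch the logic at all.
class ConsoleView implements TaskView {
  showTasks(titles: string[]): void {
    titles.forEach((t) => console.log(`- ${t}`));
  }
  showError(message: string): void {
    console.error(message);
  }
}

new TaskPresenter(new ConsoleView()).loadTasks([
  { title: "Write tests", done: false },
  { title: "Ship it", done: true },
]);
```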
In short: think first, do both the UI and the logic in small increments and keep things modular.
Iterate between both (logic and UI). Back and forth. The UI may change as you understand how and what you can do with the logic and whatever constraints that may present. The logic may change as you change the features, behavior, or performance requirements of an easy-to-use, responsive, and decent looking UI.
I usually do the minimum possible of each until I have some barely working mockups. Then I use each mockup to test where my assumptions about the proper UI and/or logic might be right or wrong. Pick the most flawed, and start iterating.
Apple suggests mocking up the UI on paper first. (Have an eraser handy...)
If possible, in parallel.
But personally, I advocate logic first.
I start with the fundamentals first and that means getting the logic coded and working first. There are two reasons for this:
If I can't get the logic working correctly, having a pretty UI is useless and a waste of my time.
You will most likely change the UI as you work on the logic, making the UI process longer and more expensive.
I usually get my UI in order first. The reason? As I prototype different designs, sometimes my idea for the app changes. If it does, it's of no consequence: there is no code to rewrite.
However, sometimes it is helpful to get the fundamentals first in order to determine if the app is going to work or not. If it's not going to function, then why waste time making interfaces?
I like to start by laying out the different parts of my project in something like Visio.
I make boxes for the different views I expect to have, and I fill them with the information I expect them to contain.
I make another set of boxes for my expected model objects (logic). I fill them with the information I expect they will work with, and I draw lines between models and views where I think it will be necessary.
I do the same thing for object graphs (if I plan on using CoreData), and for database tables if I am going to have an external database.
Laying everything out visually helps me decide if I am missing any important features or interactions between project components. It also gives me something to quickly refer to if I later lose track of what I was doing.
From that point, I tend to work on a model until it has enough done to fill out part of a view, then I work on a view until it can interact with the model.
I also try to identify views or models that could be reused for multiple purposes so that I can reduce the overall amount of work I have to do.
Take the agile approach and work on small amounts of both in iterations, slowly building each functional piece of the program so as not to build any monolithic piece at once.
I took the time to set up some unit tests and the targets in Xcode, etc., and they're pretty useful for a few classes. However:
I want to test small UI pieces for which I don't want to launch the entire application. There is no concept of pass/fail: I need to "see" the pieces, and I can make dummy instances of all the relevant classes to do this. My question is: how can I set this up in Xcode?
I realize I could use another Xcode project for each class (or group of classes), but that seems a bit cumbersome. Another target for each?
I know that you're looking for an approach to testing UI components that doesn't require a fully functional application, but I've been impressed with what the new UI Automation instrument introduced in iOS 4.0 lets you do.
This instrument lets you use JavaScript scripts to interactively test your application's interface, and it does so in a way that does not require checking exact pixel values or positions on screen. It uses the built-in accessibility hooks present in the system for VoiceOver to identify and interact with components.
Using this instrument, I have been able to script tests that fully exercise my application as a user would interact with it, as well as ones that hammer on particular areas and look for subtle memory buildups.
The documentation on this part of Instruments is a little sparse, but I recently taught a class covering the subject for which the video is available on iTunes U for free (look for the Testing class in the Fall semester). My course notes (in VoodooPad format) cover this as well. I also highly recommend watching the WWDC 2010 video session 306 - "Automating User Interface Testing with Instruments".
Well, you cannot call showing a piece of GUI "testing", even if that GUI is part of a large application. What you can do here is create a separate executable target and write a small tool that reuses GUI components from your application and shows them to you based on input parameters. This will eliminate the need for many different targets.
If you still insist on using unit tests, you can show your GUI for some period of time, for example 10 seconds. The test case will then run until the GUI is closed or the timeout elapses, so each test will take up to N seconds to execute.
This is a good question. I think you actually do not want to use unit tests for those 'visual confirmations'. Personally I usually write little test apps to do this kind of testing or development. I don't like separate targets in the same project so I usually just create a test project next to the original one and then reference those classes and resources using relative paths. Less clutter. And it is really nice to be able to test more complex user interface elements in their own little test environment.
I would take a two-level approach to UI "unit testing":
Although Cocoa/CocoaTouch are still closer to the Model-View-Controller than the Model-View-ViewModel paradigm, you can gain much of the testability advantage by breaking your "View" into a "view model" and a "presenter" view (note that this is somewhat along the lines of the NSView/NSCell pair; Cocoa engineers had this one a long time ago). If the view is a simple presentation layer, then you can test the behavior of the view by unit testing the "view model".
To test the drawing/rendering of your views, you will have to either do human testing or do rendering/pixel-based tests. Google's Toolbox for Mac has several tools for doing pixel-by-pixel comparison of rendered NSViews, CALayers, UIViews, etc. I've written a tool for the Core Plot project to make dealing with the test failures and merging the reference files back into your unit test bundle a little easier.
Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. Having said that, using them to verify how different methods on particular components are called (and in what order) seems bad to me. It exposes the behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally, I feel like I've been chained to a heavy ball.
I'd much rather just test against the interface, giving test data as input and asserting on the result. Better yet, use a testing tool that automatically generates test data to verify a given property, e.g., adding one element to a list and removing it immediately yields the same list.
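That exact property can be written down directly. A sketch using the fast-check library as one possible tool (any property-based testing framework works the same way):

```typescript
import fc from "fast-check";

test("adding then immediately removing an element yields the same list", () => {
  fc.assert(
    // fast-check generates many random lists and elements for us.
    fc.property(fc.array(fc.integer()), fc.integer(), (xs, x) => {
      const added = [...xs, x];
      const removed = added.slice(0, -1); // drop the element we just appended
      expect(removed).toEqual(xs);
    })
  );
});
```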
In our workplace, we use Hudson, which reports test coverage. Unfortunately it makes it easy to become blindly obsessed with having everything tested. I strongly feel that one shouldn't test everything, if one also wants to be productive in maintenance mode. One good example would be controllers in web frameworks. As they should generally contain very little logic, testing with a mock framework that the controller calls such-and-such methods in a particular order is nonsensical, in my honest opinion.
Dear SOers, what are your opinions on this?
I read 2 questions:
What is your opinion on testing that particular methods on components are called in a particular order?
I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days.
We try to write unit tests which test only one thing. When we do this it's normally possible to write a very simple test which stubs out interactions with most other components. And we very rarely assert ordering. This helps to make the tests less brittle.
Tests which test only one thing are easier to understand and maintain.
Also, if you find yourself having to write lots of expectations for interactions with lots of components, there could well be a problem in the code you're testing anyway. If the tests are difficult to maintain, the code under test can often be refactored.
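For illustration, a minimal sketch of the stub-over-mock style described above (TypeScript with Jest-style globals; all names are hypothetical):

```typescript
interface RateProvider {
  rateFor(currency: string): number;
}

class PriceConverter {
  constructor(private rates: RateProvider) {}
  convert(amount: number, currency: string): number {
    return amount * this.rates.rateFor(currency);
  }
}

test("converts using the provided rate", () => {
  // A stub just supplies canned data; we never verify how, how often,
  // or in what order it was called, which keeps the test less brittle.
  const stubRates: RateProvider = { rateFor: () => 1.5 };
  const converter = new PriceConverter(stubRates);
  expect(converter.convert(10, "EUR")).toBe(15);
});
```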
Should one be obsessed with test coverage?
When writing unit tests for a given class I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.
Overall unit test coverage stats? Not particularly interested so long as they're high.
100% unit test coverage for an entire system? Not interested at all.
I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
The book The Art of Unit Testing has plenty of good advice in these areas.
100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc. seem unlikely to provide good ROI. TDD will provide high test coverage in any case. Test what might break.
It depends on how you model the domain(s) of your program.
If you model the domains in terms of data stored in data structures and methods that read data from one structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want: the outcome you care about is that a procedure puts the right data in the right variables, and what it calls to make that happen is just an implementation detail.
If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about, and whatever data the objects store to coordinate their behaviour in those protocols is just an implementation detail. In that case, mock objects are the right tool for the job, and state-based testing ties the tests too closely to unimportant implementation details.
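A small sketch contrasting the two styles (TypeScript with Jest; the deposit/audit example is invented):

```typescript
// State-based: the outcome is the data that ended up in the structure.
test("deposit stores the new balance (state-based)", () => {
  const account = { balance: 0 };
  const deposit = (acc: { balance: number }, amount: number) => {
    acc.balance += amount;
  };
  deposit(account, 100);
  expect(account.balance).toBe(100);
});

// Interaction-based: the message sent to a collaborator is the outcome,
// so a mock verifies the protocol rather than any stored state.
test("deposit notifies the audit log (interaction-based)", () => {
  const auditLog = { record: jest.fn() };
  const deposit = (amount: number, log: { record: (msg: string) => void }) => {
    log.record(`deposited ${amount}`);
  };
  deposit(100, auditLog);
  expect(auditLog.record).toHaveBeenCalledWith("deposited 100");
});
```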
And in most object-oriented programs there is a mix of styles. Some code will be written in a purely functional style, transforming immutable data structures. Other code will coordinate the behaviour of objects that change their hidden, internal state over time.
As for high test coverage, it really doesn't tell you that much. Low test coverage shows you where you have inadequate testing, but high test coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths and so increase the coverage stats but not actually make any assertions about what those code paths did. Also, what really matters is how different parts of the program behave in combination, which unit test coverage won't tell you. If you want to verify that your tests really are testing your system's behaviour adequately you could use a Mutation Testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
I'd asked a similar question, "How Much Unit Testing is a Good Thing?", which might help give an idea of the variety of levels of testing people feel are appropriate.
What is the probability that during your code's maintenance some junior employee will break the part of code that runs "controller calls such and such method in particular order"?
What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?
Now, multiply #1 and #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk.
Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).
E.g. if you maintain a web app that is not production-critical and has 100 users who have a workaround if the app is broken (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.
If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage makes a lot more sense.
Also, I'm not sure if I'm reading too much into your question, but you seem to be implying that mocking-enabled unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion: the two testing approaches must coexist.