Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. Having said that, using them to verify how different methods on particular components are called (and in what order) seems bad to me. It exposes the behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally I feel like I've been chained to a heavy ball.
I'd much rather just test against the interface, giving test data as input and asserting the result. Better yet, use a testing tool that generates test data automatically to verify a given property, e.g. that adding one element to a list and immediately removing it yields the original list.
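For the property-based idea, here is a minimal hand-rolled sketch in Java/JUnit. No particular property-testing library is assumed (tools such as QuickCheck or jqwik would generate the inputs for you); the class and method names are just for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.Random;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

class ListAddRemoveProperty {

    @Test
    void addThenRemoveYieldsOriginalList() {
        Random random = new Random(42); // fixed seed keeps the run reproducible
        for (int run = 0; run < 1_000; run++) {
            // Generate an arbitrary list...
            List<Integer> original = new ArrayList<>();
            int size = random.nextInt(20);
            for (int i = 0; i < size; i++) {
                original.add(random.nextInt(100));
            }

            // ...add one element, then remove the element we just added...
            List<Integer> mutated = new ArrayList<>(original);
            mutated.add(random.nextInt(100));
            mutated.remove(mutated.size() - 1);

            // ...and the property should hold for every generated input.
            assertEquals(original, mutated);
        }
    }
}
```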
In our workplace we use Hudson, which reports test coverage. Unfortunately it makes it easy to become blindly obsessed with having everything tested. I strongly feel that one shouldn't test everything if one also wants to stay productive in maintenance mode. A good example is controllers in web frameworks: since they should generally contain very little logic, using a mock framework to test that a controller calls such-and-such method in a particular order is nonsensical, in my honest opinion.
Dear SOers, what are your opinions on this?
I read 2 questions:
What is your opinion on testing that particular methods on components are called in a particular order?
I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days.
We try to write unit tests which test only one thing. When we do this it's normally possible to write a very simple test which stubs out interactions with most other components. And we very rarely assert ordering. This helps to make the tests less brittle.
Tests which test only one thing are easier to understand and maintain.
Also, if you find yourself having to write lots of expectations for interactions with lots of components, there may well be a problem in the code you're testing anyway. If the tests are difficult to maintain, the code you're testing can often benefit from refactoring.
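A sketch of that "stub, then assert the result" style with Mockito. The PriceCatalog/InvoiceCalculator names are hypothetical; the point is that the dependency is stubbed just enough to exercise the behaviour, and the test asserts the outcome rather than the interactions:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

// Hypothetical collaborator, just to illustrate the stubbing style.
interface PriceCatalog {
    double priceOf(String sku);
}

class InvoiceCalculator {
    private final PriceCatalog catalog;

    InvoiceCalculator(PriceCatalog catalog) {
        this.catalog = catalog;
    }

    double total(String sku, int quantity) {
        return catalog.priceOf(sku) * quantity;
    }
}

class InvoiceCalculatorTest {

    @Test
    void totalMultipliesUnitPriceByQuantity() {
        // Stub the dependency: define what it returns, nothing more.
        PriceCatalog catalog = mock(PriceCatalog.class);
        when(catalog.priceOf("apple")).thenReturn(2.50);

        InvoiceCalculator calculator = new InvoiceCalculator(catalog);

        // Assert on the result; no verification of which calls happened in which order.
        assertEquals(5.00, calculator.total("apple", 2), 0.0001);
    }
}
```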
Should one be obsessed with test coverage?
When writing unit tests for a given class I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.
Overall unit test coverage stats? Not particularly interested so long as they're high.
100% unit test coverage for an entire system? Not interested at all.
I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
The book The Art of Unit Testing has plenty of good advice in these areas.
100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc. seem unlikely to provide good ROI. TDD will provide high test coverage in any case. Test what might break.
It depends on how you model the domain(s) of your program.
If you model the domains in terms of data stored in data structures and methods that read data from one data structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want: the outcome you care about is that a procedure puts the right data in the right variables, and what it calls to make that happen is just an implementation detail.
If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about, and the data the objects store to coordinate their behaviour in those protocols is just an implementation detail. In that case, mock objects are the right tool for the job, and state-based testing ties the tests too closely to unimportant implementation details.
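A sketch of that interaction-focused style in Java with Mockito, using hypothetical names: the thing under test is a coordinator whose whole job is the protocol ("reserve, then charge; never charge if the reservation fails"), so verifying the calls and their order is testing exactly what matters:

```java
import org.junit.jupiter.api.Test;
import org.mockito.InOrder;
import static org.mockito.Mockito.*;

// Hypothetical collaborators: the protocol between them is the specification.
interface Inventory {
    boolean reserve(String sku);
}

interface PaymentGateway {
    void charge(String customerId, double amount);
}

class CheckoutCoordinator {
    private final Inventory inventory;
    private final PaymentGateway payments;

    CheckoutCoordinator(Inventory inventory, PaymentGateway payments) {
        this.inventory = inventory;
        this.payments = payments;
    }

    void checkout(String customerId, String sku, double amount) {
        if (inventory.reserve(sku)) {
            payments.charge(customerId, amount);
        }
    }
}

class CheckoutCoordinatorTest {

    @Test
    void chargesOnlyAfterSuccessfulReservation() {
        Inventory inventory = mock(Inventory.class);
        PaymentGateway payments = mock(PaymentGateway.class);
        when(inventory.reserve("book")).thenReturn(true);

        new CheckoutCoordinator(inventory, payments).checkout("c42", "book", 9.99);

        // Here the ordering IS the specification, so verifying it is appropriate.
        InOrder order = inOrder(inventory, payments);
        order.verify(inventory).reserve("book");
        order.verify(payments).charge("c42", 9.99);
    }

    @Test
    void neverChargesWhenReservationFails() {
        Inventory inventory = mock(Inventory.class);
        PaymentGateway payments = mock(PaymentGateway.class);
        when(inventory.reserve("book")).thenReturn(false);

        new CheckoutCoordinator(inventory, payments).checkout("c42", "book", 9.99);

        verify(payments, never()).charge(anyString(), anyDouble());
    }
}
```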
And in most object-oriented programs there is a mix of styles. Some code will be written in a purely functional style, transforming immutable data structures. Other code will coordinate the behaviour of objects that change their hidden, internal state over time.
As for high test coverage, it really doesn't tell you that much. Low test coverage shows you where you have inadequate testing, but high test coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths and so increase the coverage stats but not actually make any assertions about what those code paths did. Also, what really matters is how different parts of the program behave in combination, which unit test coverage won't tell you. If you want to verify that your tests really are testing your system's behaviour adequately you could use a Mutation Testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
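To make the "coverage without assertions" point concrete, here is a deliberately bad, hypothetical Java example: it drives the coverage numbers up while verifying nothing, which is exactly the kind of test a mutation testing tool (e.g. PIT for Java) will flag:

```java
import org.junit.jupiter.api.Test;

// Hypothetical class under test, kept trivial for illustration.
class ReportGenerator {
    String generate(String period) {
        if (period == null || period.isEmpty()) {
            return "EMPTY";
        }
        return "Report for " + period;
    }
}

class ReportGeneratorTest {

    // This test executes both branches and bumps the coverage stats,
    // but it asserts nothing, so it can never fail for a behavioural reason.
    @Test
    void generatesReportWithoutCheckingAnything() {
        new ReportGenerator().generate("2024-Q1");
        new ReportGenerator().generate("");
    }
}
```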
I'd asked a similar question, "How Much Unit Testing is a Good Thing?", which might help give an idea of the variety of levels of testing people feel are appropriate.
What is the probability that during your code's maintenance some junior employee will break the part of code that runs "controller calls such and such method in particular order"?
What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?
Now, multiply #1 and #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk.
Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).
E.g. if you maintain a web app that is not production-critical and has 100 users who have a workaround if the app is broken (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.
If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage makes a lot more sense.
Also, I'm not sure if I'm reading too much into your question, but you seem to be implying that having mocking-enabled unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion - the two testing approaches must co-exist.
This is clearly a practical question. Internet wisdom suggests logically organizing tests for maintainability. When it comes to REST API controllers, one could include all integration tests for all actions of a controller in a single file.
Assuming we have 4 CRUD actions per controller, with an average of 6 tests per action, we are bound to end up with at least 24 tests in one file. In industrial-grade web servers I suspect that this number would balloon much further upwards.
The thing is that these actions, even though they are part of one class (the controller), are complex and largely orthogonal to one another (different resources/artifacts/mocks are needed to test each group, etc.).
I'm having a hard time coming to terms with the idea that all these tests should be placed in one file, given that they test almost entirely different things and could be placed in 4 separate files. Isn't that more aligned with the spirit of TDD after all?
Is my intuition misplaced?
Nothing is stopping you from organizing your tests in any way that makes sense for your project.
Internet wisdom has nothing to do with this; do what works for you. You want one test file per CRUD method? Then do that.
Where your tests live is far less important than what you actually test and how.
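For what it's worth, one common arrangement (sketched here with JUnit 5 and hypothetical names) is one file per controller with a nested group per action; splitting those groups into separate files is just as valid:

```java
import org.junit.jupiter.api.Nested;
import org.junit.jupiter.api.Test;

class OrderControllerTests {

    @Nested
    class Create {
        @Test void returns201OnValidPayload() { /* arrange / act / assert */ }
        @Test void returns400OnMissingFields() { /* ... */ }
    }

    @Nested
    class Read {
        @Test void returns200ForExistingOrder() { /* ... */ }
        @Test void returns404ForUnknownOrder() { /* ... */ }
    }

    // ...and likewise for Update and Delete, or split each group into its own file.
}
```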
Just one note of caution: I have seen people spend a long time testing completely the wrong things.
Let's instantiate controllers to call methods on them (don't; this is a sign of really bad separation of concerns), let's mock who knows what.
You will achieve a much better result if you unit test what should be unit tested, and that is the functional stuff: the classes and methods that do the actual work. For an API this will mean business rules, data transformations, model transformations, that kind of thing.
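As an illustration of the kind of test that pays off here (sketched in Java with hypothetical names): a pure model transformation tested directly, with no controller, no HTTP and nothing to mock:

```java
import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;

// Hypothetical entity, DTO and mapper, kept minimal for illustration.
class OrderEntity {
    String id;
    int cents;
}

class OrderDto {
    String id;
    String displayPrice;
}

class OrderMapper {
    OrderDto toDto(OrderEntity entity) {
        OrderDto dto = new OrderDto();
        dto.id = entity.id;
        dto.displayPrice = String.format("$%d.%02d", entity.cents / 100, entity.cents % 100);
        return dto;
    }
}

class OrderMapperTest {
    @Test
    void formatsCentsAsDollars() {
        OrderEntity entity = new OrderEntity();
        entity.id = "o-1";
        entity.cents = 1205;

        OrderDto dto = new OrderMapper().toDto(entity);

        assertEquals("o-1", dto.id);
        assertEquals("$12.05", dto.displayPrice);
    }
}
```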
For the rest, I'd stick with integration tests: call your endpoints like a normal client would. Use something like Postman to organize collections of tests.
You'll have a lot less to mock, your tests won't be affected much if you change your implementation details, you'll be able to take measurements on your endpoints, and you'll actually test the real thing, going all the way to your storage and back, which no amount of mocking will give you.
I'm currently working on an MVC 5 project using Entity Framework 5 (I may switch to 6 soon). I use a database-first approach and MySQL with an existing database (about 40 tables). This project started as a “proof of concept” and now my company has decided to go ahead with the software I'm developing. I am struggling with the testing part.
My first idea was to use mostly integration tests. That way I felt I could test my code and also my underlying database. I created a script that dumps the existing database schema into a “test database” in MySQL. I always start my tests with a clean database containing no data, and create/delete a bit of data for each test. The thing is that this takes a fair amount of time when I run my tests (and I run my tests very often).
I am thinking of replacing my integration tests with unit tests in order to speed up the time it takes to run them. I would “remove” the test database and only use mocks instead. I have tested a few methods and it seems to work great, but I'm wondering:
Do you think mocking my database can “hide” bugs that only occur when my code is running against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people from Microsoft did a great job on that), but could my code run well against mocks and break against MySQL?
Do you think going from integration testing to unit testing is a kind of “downgrade”?
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
I'm aware that some frameworks exist that run tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this vs mocking. What am I missing?
I'm aware that this kind of question is prone to “it depends on your needs” kinds of responses, but I'm sure some of you have been through this and can share your knowledge. I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
Do you think mocking my database can “hide” bugs that only occur when my code is running against a real database? Note that I don't want to test Entity Framework (I'm sure the fine people from Microsoft did a great job on that), but could my code run well against mocks and break against MySQL?
Yes, if you only test your code using mocks, it's very easy to gain false confidence in your code. When you're mocking the database, what you're doing is saying "I expect these calls to take place". If your code makes those calls, it'll pass the test, but if they're the wrong calls, it won't work in production. At a simple level, if you add or remove a column from your database the database interaction may need to change, but the process of adding/removing the column is hidden from your tests until you update the mocks.
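An analogous sketch in Java with Mockito (the question is about EF and Moq, but the failure mode is identical), with all names hypothetical: because the repository is a mock, this test keeps passing even if the real query or schema behind it no longer works.

```java
import java.util.Optional;

import org.junit.jupiter.api.Test;
import static org.junit.jupiter.api.Assertions.assertEquals;
import static org.mockito.Mockito.mock;
import static org.mockito.Mockito.when;

interface CustomerRepository {
    // In production this might run something like
    //   SELECT name, middle_name FROM customers WHERE id = ?
    // If the middle_name column is dropped, the real query breaks,
    // but the mock below never notices.
    Optional<String> findDisplayName(long id);
}

class GreetingService {
    private final CustomerRepository customers;

    GreetingService(CustomerRepository customers) {
        this.customers = customers;
    }

    String greet(long id) {
        return "Hello, " + customers.findDisplayName(id).orElse("guest");
    }
}

class GreetingServiceTest {

    @Test
    void greetsByName() {
        CustomerRepository customers = mock(CustomerRepository.class);
        when(customers.findDisplayName(7L)).thenReturn(Optional.of("Ada"));

        assertEquals("Hello, Ada", new GreetingService(customers).greet(7L));
        // Passes regardless of whether the real SQL/mapping still works.
    }
}
```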
Do you think going from integration testing to unit testing is a kind of “downgrade”?
It's not a downgrade, it's different. Unit testing and integration testing have different benefits that in most cases will complement each other.
Do you think dropping integration testing and adopting unit testing for speed considerations is OK?
"OK" is very subjective. I'd say no; however, you don't have to run all of your tests all of the time. Most testing frameworks (if not all) allow you to categorise your tests in some way. This allows you to create subsets of your tests, so you could for example have a "DatabaseIntegration" category that you put all of your database integration tests in, or "EndToEnd" for full end-to-end tests.

My preferred approach is to have separate builds. The usual/continuous build that I would run before/after each check-in only runs unit tests. This gives quick feedback and validation that nothing has broken. A less common / daily / overnight build, in addition to running the unit tests, would also run slower / repeatable integration tests. I would also tend to run integration tests for areas that I've been working on before checking in the code, if there's a possibility of the code impacting the integration.
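The exact mechanism depends on the framework (NUnit has [Category], MSTest has [TestCategory]); here is the idea sketched with JUnit 5 tags, since it carries over directly. The test class and method are hypothetical:

```java
import org.junit.jupiter.api.Tag;
import org.junit.jupiter.api.Test;

class CustomerRepositoryIntegrationTest {

    // Tagged so the continuous build can exclude it and the nightly build can
    // include it (via the build tool's tag/category filter).
    @Tag("DatabaseIntegration")
    @Test
    void savesAndReloadsCustomer() {
        // ...talks to the real database...
    }
}
```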
I'm aware that some frameworks exist that run tests against an in-memory database (e.g. the Effort framework), but I don't see the advantage of this vs mocking. What am I missing?
I haven't used them, so this is speculation. I would imagine the main benefit is that rather than having to simulate the database interaction with mocks, you instead set up the database and check the resulting state. The tests become less about how you did something and more about what data moved. On the face of it, this could lead to less brittle tests; however, you're effectively writing integration tests against another data provider that you're not going to use in production. Whether it's the right thing to do is, again, very subjective.
I guess the second benefit is likely to be that you don't necessarily need to refactor your code in order to take advantage of the in-memory database. If your code hasn't been constructed to support dependency injection, then there is a good chance that you will need to perform some level of refactoring in order to support mocking.
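The usual shape of that refactoring, sketched in Java with hypothetical names, is moving from an internally constructed dependency to constructor injection so a test can hand in a mock or stub:

```java
// Before: the dependency is constructed inside the class, so a test cannot
// substitute a test double without a real database behind it.
//
//   class LoyaltyService {
//       private final OrderRepository orders = new MySqlOrderRepository();
//       ...
//   }
//
// After: the dependency is injected, so a test can pass in a mock or stub.
interface OrderRepository {
    int countFor(String customerId);
}

class LoyaltyService {
    private final OrderRepository orders;

    LoyaltyService(OrderRepository orders) { // constructor injection
        this.orders = orders;
    }

    boolean isLoyal(String customerId) {
        return orders.countFor(customerId) >= 10;
    }
}
```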
I'm also aware that in a perfect world I would do both (tests using mocks and tests using the database), but I don't have that kind of time.
I don't really understand why you feel this is the case. You've already said that you have integration tests that you're planning on replacing with unit tests. Unless you need to do major refactoring in order to support the unit tests, your integration tests should still work. You don't usually need as many integration tests as unit tests, since the unit tests are there to verify the functionality and the integration tests are there to verify the integration, so the overhead of creating them should be relatively small. Using categorisation to determine which tests you run will reduce the time impact of running your tests.
As a side question, what tool would you recommend for mocking? I was told that Moq is a good framework but that it's a little bit slow. What do you think?
I've used quite a few different mocking libraries and for the most part they are all very similar. Some things are easier with different frameworks, but without knowing what you're doing it's hard to say whether you will notice. If you haven't built your code with dependency injection in mind, then you may find it challenging to get your mocks to where you need them.
Mocking of any kind is generally quite fast; you're usually (unless you're using partial mocks) removing all of the functionality of the class/interface you're mocking, so it's going to perform faster than your normal code. The only performance issues I've heard about are with MS Fakes/Shims: sometimes (depending on the complexity of the assembly being faked) it can take a while for the fake assemblies to be created.
The two frameworks I've used that are a bit different are MS Fakes/Shims and Typemock. The MS version requires a certain edition of Visual Studio, but allows you to generate fake assemblies with shims for certain types of object, which means you don't have to pass your mocks from your test through to where they're used. Typemock is a commercial solution that uses the profiling API to inject code while your tests are running, which means it can reach parts other mocking frameworks can't. Both are particularly useful for bridging the gap when you've got a codebase that wasn't written with unit testing in mind.
A few days back I went to an interview where they asked me: which modules will you test in regression testing? How do you find out which test cases need to be executed in regression testing?
Those modules in which existing code has been modified, whether a little or a lot, require regression testing.
The best way to do this is to have some insight into which test cases cover which parts of the product. Then when a part of the product changes, you can run just the cases that cover the change. This isn't always easy. In a complex piece of software, a change in one part can have an effect in a seemingly disconnected part.
The best solution I have seen to this problem is to use code coverage data. If you know which blocks are hit by each test and you know which blocks were changed by the fix, you can know exactly which test cases to run.
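A sketch of that coverage-based selection in Java (the data and names are hypothetical): given a map from each test to the code blocks it covered on its last run, and the set of blocks touched by the fix, pick every test whose coverage intersects the change.

```java
import java.util.HashSet;
import java.util.LinkedHashSet;
import java.util.Map;
import java.util.Set;

class RegressionSelector {

    static Set<String> selectTests(Map<String, Set<String>> coverageByTest,
                                   Set<String> changedBlocks) {
        Set<String> selected = new LinkedHashSet<>();
        for (Map.Entry<String, Set<String>> entry : coverageByTest.entrySet()) {
            Set<String> covered = new HashSet<>(entry.getValue());
            covered.retainAll(changedBlocks);  // intersection with the change
            if (!covered.isEmpty()) {
                selected.add(entry.getKey());  // this test exercises changed code
            }
        }
        return selected;
    }

    public static void main(String[] args) {
        Map<String, Set<String>> coverage = Map.of(
                "LoginTest", Set.of("auth.validate", "auth.session"),
                "CheckoutTest", Set.of("cart.total", "payment.charge"),
                "ProfileTest", Set.of("auth.session", "profile.render"));

        // Suppose the fix touched auth.session:
        System.out.println(selectTests(coverage, Set.of("auth.session")));
        // prints LoginTest and ProfileTest (in some order); CheckoutTest is skipped
    }
}
```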
If you don't have a lot of data, your best bet is to think about the change and what things it could affect and then run cases that are in those areas.
I want to take a BDD approach to unit testing in an iOS project, and I just realized that there may not be an existing library that provides test doubles of the test spy variety. Ideally, I'm looking for something similar to Mockito, Jasmine, or RR.
Before I go and spend a week of free time writing a test spy library, I thought I'd pose the question here on SO first.
So far I've looked at OCMock and Kiwi, but they both seem to be traditional, high-specification-by-default mocking frameworks that require expectation assertions to be set in the arrange phase, prior to the act phase. Obviously, this hampers my vision of beautiful, DRY, nested specs (which I plan on authoring in either Kiwi or Cedar).
Just saw this.
Kiwi definitely does not do this now. You are right that the mocks in it are built for a 'standard' arrange prior to act phase.
Moving on: at first glance, at least, it seems that adding the basics of spy functionality would not require too much reengineering. Every message (barring some implementation-important, reserved selectors) that gets to a mock goes through -[KWMock forwardInvocation:].
Essentially, the current -[KWMock forwardInvocation:] would need to be modified to record/copy all invocations that pass through it, instead of what it does now. This would be the primitive functionality that would allow expectations to be verified later by querying the recorded invocations. Of course, coming up with a nice readable form for verification isn't trivial either.
The spy/mock would still need to know up front what class/protocol it is standing in for. This is so it will be able to generate valid method signatures for the selectors of messages sent to it, which allows the runtime forwarding machinery to generate the actual NSInvocation that will be forwarded.
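To show the general shape of the idea, here is an analogous sketch in Java rather than Objective-C: a dynamic proxy plays the role that forwardInvocation: plays for KWMock, recording every invocation so expectations can be checked after the act phase. The Mailer interface and all names are hypothetical.

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Method;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.Arrays;
import java.util.List;

class RecordingSpy implements InvocationHandler {
    record Call(String method, List<Object> args) {}

    private final List<Call> calls = new ArrayList<>();

    @SuppressWarnings("unchecked")
    static <T> T spyFor(Class<T> type, RecordingSpy handler) {
        return (T) Proxy.newProxyInstance(
                type.getClassLoader(), new Class<?>[] {type}, handler);
    }

    @Override
    public Object invoke(Object proxy, Method method, Object[] args) {
        // Record every invocation instead of acting on it.
        calls.add(new Call(method.getName(),
                args == null ? List.of() : Arrays.asList(args)));
        return null; // good enough for void/reference-returning methods in a sketch
    }

    boolean received(String methodName) {
        return calls.stream().anyMatch(c -> c.method().equals(methodName));
    }
}

interface Mailer {
    void send(String to, String body);
}

class SpyUsage {
    public static void main(String[] args) {
        RecordingSpy handler = new RecordingSpy();
        Mailer mailer = RecordingSpy.spyFor(Mailer.class, handler);

        mailer.send("a@example.com", "hi");            // act first...
        System.out.println(handler.received("send"));  // ...verify afterwards: true
    }
}
```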
I am too preoccupied with other things right now to get an implementation in there, but I'll be happy to answer more questions or merge any pull requests. HTH.
I'm a big believer in testing, but not a very good practitioner. I've done pretty well at getting coverage on my model objects and programming them in a TDD style. I'm actually enjoying it so much I'd love to extend this to my controller layer, particularly my UIViewController subclasses.
Unfortunately, many UIKit classes don't function in independent tests. However, I'm unhappy with the restriction of only running my dependent tests on the device. It's really important to me to run all unit tests before every build, and it seems to me like its possible and worthwhile to unit test (as opposed to other types of testing) controller code.
My question is simply this: How do I test UIViewControllers in such a way that the tests run before every build? I am aware of a couple of different solutions to this problem, but don't know a whole lot about the various benefits of each one.
In general if you are stuck with an environmental problem where unconditional testing seems to be made impossible, you can work around it if the benefits are great enough to warrant your jumping through some hoops.
The situation here is a special case of unwanted outside dependencies, which I generally solve in my unit tests by either #ifdefing out tiny bits of dependent code, or by implementing stub classes to fill the expected dependency roles enough that my code can be tested.
So in this specific case, you might create a new source file, linked only into the test bundle, called "UIKitStubClasses.m"; within it you could implement the bare necessities to simulate UIKit-dependent classes such as UIViewController, so that your tests link and thoroughly exercise their own logic.
The important thing to remember is that this is usually not all that much work. The tests will let you know what you need to implement in the stub, for example by throwing exceptions about unimplemented methods. You just add what you need to quiet the errors and test your code, and then your stub class is as sufficient for testing as any of the legitimate system framework classes would be.
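The pattern itself is language-agnostic; sketched here in Java with hypothetical names, a hand-written stub lives only in the test sources, fills the dependency's role just enough for the code under test to run, and fails loudly on anything not yet implemented, which is exactly how the tests tell you what to fill in next:

```java
interface PushNotifier {
    void notifyUser(String userId, String message);
    int pendingCount(String userId);
}

// Lives in the test sources only.
class StubPushNotifier implements PushNotifier {
    String lastMessage; // recorded so tests can assert on it if they need to

    @Override
    public void notifyUser(String userId, String message) {
        this.lastMessage = message;
    }

    @Override
    public int pendingCount(String userId) {
        // Implemented only when a test actually exercises it.
        throw new UnsupportedOperationException("pendingCount not stubbed yet");
    }
}
```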
Can you not use the tips/methods in Chris Hanson's blog here: http://chanson.livejournal.com/120263.html
It seems that you would probably need to run your app as the test harness, but it looks doable. Just because you are running the 'app' doesn't mean it has to do the normal thing your application does. Alternatively, you could create another application target that is just a simple dummy application which loads your test bundles and runs them.
I agree it might not be perfect, but it sounds like it might work, and it seems like it could be automated.
So I was unhappy with this situation as it stood, and both Daniel's and Ed's answers spurred me on to improve things further. I decided to take matters into my own hands.
What I ended up writing was a small Cocoa Touch application that, in -applicationDidFinishLaunching, scans the class hierarchy for any SenTestCase subclasses and runs them. Unlike GoogleToolboxForMac's unit testing stuff, I tried to use as much of the built-in SenTesting machinery as possible, and as a result it didn't really end up being that much code. I haven't done much testing beyond verifying that UILabel allocates without crashing the test rig (otest does in fact crash if you try), but it should work.
I've put the source online on both Bitbucket and Github.
While Apple's tools have improved over time, they still leave much to be desired. GHUnit is much better, has a real test runner, and makes it dead simple to set breakpoints, etc. You can even leverage all your existing SenTests.
Get it on github.
You can instantiate all your UIKit code with no worries. I use it to test UIViewControllers, assert outlets, and do behavior testing.