When to use unit tests? [closed]

I understand how to implement unit tests; I'm just struggling to figure out when to use them.
Let's say I have a basic Reminders app. A user can add/edit/delete reminders and view them in a table view. What parts of the app would I want to set up unit tests for?

The ideal-world answer would say that every line of code you write should be unit tested.
But let's forget about that for a moment and move back to the real world. Write tests for code that is important, where having another line of defense is worth it. In other words, does it make much sense to test a constructor that simply assigns a value to one field? Most likely not. Is it worth unit testing a parser that extracts account data from the complex XML your client provides? Probably yes.
Where does this difference come from? Two major reasons:
it's less likely that constructor code will suffer from unpredictable changes (vs. much-evolving parser code that must keep up with changing requirements/optimizations/refactorings)
constructor code is fairly simple and you've written such code many times already, so tests might not offer you a huge advantage in spotting issues; a quick glance at such code will most likely tell you what's going on (vs. complex XML parser code)
Why make the distinction? Why test this and not that? Wouldn't it be easier to simply test everything (as the ideal-world answer would suggest)?
No. Because of time and money constraints. Writing code takes both. And there's only a certain amount of money somebody is willing to pay for your product, just as there's only a certain amount of time they're going to wait for it to be delivered. Some tests are simply not worth it (again, the constructor code example). Remember that unit tests are not immune to diminishing returns (covering 80% of the code base with tests might take an extra 20% of development time and later save 20% of the time spent on debugging/maintenance, while going for another 10% might be twice as time-consuming yet yield much smaller gains).
Again, you probably want to ask, "Where's the line?" When do you decide, "OK, unit tests for this piece of code are not really needed"? Unfortunately, this kind of judgement comes with experience. Write code, read code, see what others (possibly more experienced developers) do, and learn.
If I were to give a couple of generic pieces of advice (on what to unit test), they would be:
start with business/domain logic code
make sure to test all kinds of converters/parsers/calculators (they are fairly easy to test, tend to change often [either due to changing requirements or refactorings], and are by nature error-prone); see the sketch below
avoid testing simple one-liner methods, unless that one line is crucial in some way
write tests for bugs that you discover (and keep them!)
don't blindly follow magic fairy tales of "good code must have 99.99% test coverage"
reading questions on the topic at programmers.stackexchange.com can often give you a different perspective on how to approach problems
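To make the parser advice concrete, here is a minimal sketch of the kind of test that earns its keep, written against XCTest. The AccountParser class, the Account type, and the accountFromXMLData: method are invented names for illustration, not an existing API:

    #import <XCTest/XCTest.h>
    #import "AccountParser.h" // hypothetical parser class under test

    @interface AccountParserTests : XCTestCase
    @end

    @implementation AccountParserTests

    // Error-prone parsing logic is worth a test; a field-assigning constructor is not.
    - (void)testParserExtractsAccountNumberFromClientXML {
        NSData *data = [@"<account><number>12345</number><owner>Jane</owner></account>"
                        dataUsingEncoding:NSUTF8StringEncoding];

        Account *account = [AccountParser accountFromXMLData:data]; // hypothetical API

        XCTAssertEqualObjects(account.number, @"12345");
        XCTAssertEqualObjects(account.owner, @"Jane");
    }

    // "Write tests for bugs that you discover (and keep them!)": a regression test
    // pinned to a failure once caused by a missing element.
    - (void)testParserSurvivesMissingOwnerElement {
        NSData *data = [@"<account><number>12345</number></account>"
                        dataUsingEncoding:NSUTF8StringEncoding];
        XCTAssertNoThrow([AccountParser accountFromXMLData:data]);
    }

    @end

Note that there is no corresponding test for a trivial init method: that is exactly the diminishing-returns line described above.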

Test all the code you write. And if you want to be really cool, write the test first. If you have a method on a model or controller, you should also have a test for it.
Without knowing more about your code, it's hard to advise. But it sounds like you would have a controller (like RemindersController) and a model (like Reminder). This would be a basic outline I would start with:
RemindersController
should add a new reminder
should update an existing reminder
should delete an existing reminder
Reminder
initWithMessage:atTime: should set a message
initWithMessage:atTime: should set a time
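As a sketch, that outline translates almost mechanically into XCTest. The message and time properties on Reminder are assumed here, since the actual class isn't shown:

    #import <XCTest/XCTest.h>
    #import "Reminder.h" // assumed to expose initWithMessage:atTime:, message, time

    @interface ReminderTests : XCTestCase
    @end

    @implementation ReminderTests

    // Reminder: initWithMessage:atTime: should set a message
    - (void)testInitSetsMessage {
        Reminder *reminder = [[Reminder alloc] initWithMessage:@"Buy milk"
                                                        atTime:[NSDate date]];
        XCTAssertEqualObjects(reminder.message, @"Buy milk");
    }

    // Reminder: initWithMessage:atTime: should set a time
    - (void)testInitSetsTime {
        NSDate *time = [NSDate date];
        Reminder *reminder = [[Reminder alloc] initWithMessage:@"Buy milk" atTime:time];
        XCTAssertEqualObjects(reminder.time, time);
    }

    @end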

Assuming that you're storing your reminders somewhere, perhaps in a plist, you could write a unit test to generate a Reminder object, store it, retrieve the data, and finally reconstruct a usable Reminder object.
That way you know several things:
A: Your Reminder generation is working
B: Your method of storing the data is working
C: Going from Data to your Reminder object is working
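A sketch of that round trip as a single unit test, assuming Reminder adopts NSCoding and exposes the initializer and properties used above (both assumptions; the original question doesn't show the class):

    #import <XCTest/XCTest.h>
    #import "Reminder.h" // assumed to conform to NSCoding

    @interface ReminderPersistenceTests : XCTestCase
    @end

    @implementation ReminderPersistenceTests

    - (void)testReminderSurvivesArchivingRoundTrip {
        NSString *path = [NSTemporaryDirectory()
                          stringByAppendingPathComponent:@"reminder-test.plist"];
        Reminder *original = [[Reminder alloc] initWithMessage:@"Call Bob"
                                                        atTime:[NSDate date]];

        // A: Reminder generation is working.
        XCTAssertNotNil(original);

        // B: storing the data is working.
        XCTAssertTrue([NSKeyedArchiver archiveRootObject:original toFile:path]);

        // C: going from data back to a usable Reminder is working.
        Reminder *restored = [NSKeyedUnarchiver unarchiveObjectWithFile:path];
        XCTAssertEqualObjects(restored.message, original.message);
        XCTAssertEqualObjects(restored.time, original.time);
    }

    @end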
However, you should not expect to be able to unit test the actual "functionality" of your app, such as touch events or navigation controls. These should be left to acceptance testing, which is an entirely different discussion.

I follow these principles in choosing what types of tests to write and when:
Focus on writing end-to-end tests. You cover more code per test than with unit tests and so get more testing bang for the buck. Make these the bread-and-butter automated validation of your system as a whole.
Drop down to writing unit tests around nuggets of complicated logic. Unit tests are worth their weight in situations where end-to-end tests would be difficult to debug or unwieldy to write for adequate code coverage.
Wait until the API you are testing against is stable to write either type of test. You want to avoid having to refactor both your implementation and your tests.
Rob Ashton has a fine article on this topic, which I drew heavily from to articulate the principles above.


Tips to Learning Code in a Big Project [closed]

Not sure if this is the best place to ask or not, but if it gets closed down, oh well. I am in computer programming and starting my first work term. I will be doing 2D game programming for iPhone in Objective-C. I was just wondering if you had any tips for learning how the code works on a big project. In college I have never worked with something of this scope. I am used to a project with maybe a dozen source files, while what I'll be working on has hundreds. It is very overwhelming for me.
Any tips would be appreciated. Thanks very much.
This is how I do it. Opinions and methods may vary.
Generally speaking, I find the best way to learn about a system is to go through the code while the app is running.
Pick a significant place in the UI (the startup screen, some other screen).
Find the class for that view. Generally just ask a senior developer. Developers are happy to give a pointer (no pun intended) to someone who wants to learn by himself instead of having to explain everything.
Place a breakpoint in that class and run the app in Xcode until you hit your breakpoint.
Then start tracing in there to see how things happen.
Repeat the process at different spots in the app and soon you'll get a general idea of how the app works. Then it's a lot easier to catch the details.
If the system is really enormous (like an enterprise app that runs on multiple systems), then a diagram showing all the architecturally significant pieces would probably help. For an iOS app, it's probably not needed.
Good luck...
I am a 3rd-year computer engineer who has done four work terms, and I can offer the following:
Some general advice:
Compartmentalizing your approach is still very useful on a big project, just as on a small one. The more specific the parts you focus on at a time, the easier they will be to understand. This is not always practical due to the interdependence of programs, but it is still possible to, say, work on the graphics portion alone, or the character's movement algorithm, etc. You should know that in the past it was possible for an educated person to know the sum of human knowledge, but that is impossible today. Even senior engineers/programmers have specific areas of expertise, and other areas where they are fuzzy. Find what you most enjoy/are talented at, and devote time to that.
A basic foundation is important. Study the basic ideas of loop structures, classes, methods and the like, and know them like the back of your hand, so when applying them across languages/platforms, all you need to do is refresh yourself on the syntax. The same basic ideas apply across a range of languages.
Most of all, do not panic. It is your first work term, and you are assigned mentors/supervisors, as well as working with a team. Doing it alone would be difficult, so network well with your teammates/superiors so you can all learn from each other, divide the work, and lessen the stress on yourself!
Good luck! :)
Short Answer:
Read less, do more, then read when you get stuck. In my opinion, that's the best way to learn any new language. And, as someone said:
"We learn by doing, there is no other way".
Long Answer:
Rule 1: Relax.
Rule 2: You gotta understand that this is not easy stuff to master. That is why people who do it get paid really well. If you had the idea that you could bang this stuff out in a couple of weeks, you need to dump it. Plan to spend months working up on it.
Rule 3: Understand that the Apple API is HUGE and it is always evolving. There is enough content to learn something new every day.
Rule 4: The fewer programming languages you've had to learn, the harder it is to learn new ones. You will learn more slowly than someone who has already learned half a dozen languages/APIs.
Rule 5: Don't be afraid to use repetition and brute force. I think the thing that slows novices down is not learning the behaviors and methods of common Foundation classes like NSString, NSArray, NSDictionary, etc. (see the short sketch after these rules).
Rule 6: As a learning exercise, copy-pasting might not be the right thing to do. If there's an Apple example of how to do something, rather than copy-pasting I tend to rewrite it manually. I find it sticks better in my mind.
Rule 7: Use any resources you like. There are no rules on how you should learn.
Rule 8: iPhone is a memory-constrained device where network and local storage access is slow. Parts of your application can be unloaded at any time, your application is responsible for maintaining its memory footprint (not the user), and an event (phone call, memory, etc.) may require the app to respond accordingly and quickly.
Rule 9: It isn't about you. It isn't about your code. And it isn't about your code doing this or that. It's first about the user and responding to the user. It's second about your code responding to the framework. You don't usually tell the framework what to do. It asks you for things when it needs something. You sit and wait for it to talk to you. You're not in charge. You don't control the runloop; it controls you. You register to be told when things happen, and you indicate that you're the object who knows something about something (data for a table for instance). And then you let go, and let Cocoa do the rest. It's a very different world. I like it very much.
Rule 10: Relax.
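As a small illustration of Rule 5, here is a sketch of the sort of everyday Foundation idioms worth drilling until they are second nature (the snippet is representative usage, not taken from any particular app):

    #import <Foundation/Foundation.h>

    int main(void) {
        @autoreleasepool {
            // NSString: formatting and immutability basics.
            NSString *name = [NSString stringWithFormat:@"Reminder #%d", 1];

            // NSArray and NSDictionary: literals, counting, lookup.
            NSArray *items = @[@"milk", @"eggs", @"bread"];
            NSDictionary *info = @{@"title": name, @"count": @(items.count)};

            // Fast enumeration and membership tests come up constantly.
            for (NSString *item in items) {
                NSLog(@"item: %@", item);
            }
            if ([items containsObject:@"milk"]) {
                NSLog(@"title: %@", info[@"title"]);
            }
        }
        return 0;
    }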
When I'm coming to a new Xcode project, I open it up in OmniGraffle Pro. If the project is well organized, you'll see a nice diagram with a summary of the classes, the methods that are present and a little bit of how things relate to each other, important enums to know about, and other helpful information for getting a good overview of the project.
After that, pick a point like #mprivat said and run it in the debugger to get a feel for how things run. I like to set breakpoints that log the breakpoint name and hit count (and maybe the value of some variable or parameter if it seems relevant) and automatically continue, to avoid the pesky timing issues that can sometimes creep up when the debugger pauses execution. I use breakpoint logging so I don't have to worry about accidentally committing clutter code. (Be careful when pulling new code, though, because breakpoints don't move with your changing codebase. :))

Guidance needed in Writing Specifications [closed]

I was asked (at a place I just began working) to create simple specs for some new functionality that is going to be added to an existing registration system. I need a little help, since I've never done this before.
Here are two diagrams that show the current workflow and the new workflow.
Current Workflow: http://img80.imageshack.us/img80/102/currentworkflow.png
New Workflow http://img245.imageshack.us/img245/6748/newworkflow.png
I know they might be a bit vague but here's what's basically happening.
We are adding a new import form to an existing Windows application.
We are modifying an existing form by adding a search button which will search and populate data read by an OCR.
I'm a new developer and I'm pretty bad at writing documents in general, but I would like to improve at this. Maybe some examples of how to go about writing something like this would be helpful. I've googled for examples, but most of the ones I've found are about creating a brand-new system. I need something that shows how to write one for modifying an existing system.
Here's my attempt at a specification. Maybe someone can critique it. At least then I will know what I need to improve. http://cid-ddb3f6a92ec2b97e.skydrive.live.com/self.aspx/.Public/Specs.docx
Thanks
I love writing specs (I'm a rare one in my company).
Diagrams are a good way to go, but for the more literally minded I start with a full specification template that has a ton of headings in it. For a new system, you'd generally have something to say under every one of them. In your case you've specifically mentioned it's an existing app you're modifying, but the point is not to fill out all of the headings - the point is to think about them, and then delete them after due consideration. For example:
Business Requirements (short synopsis of the need, as explained to the business, non-technical users)
Use Cases (usually for bigger specs only)
Functional Requirements
Overview
Flowcharts etc.
Configuration
Error Reporting
Testing
Documentation
Training
Assumptions and Additional Constraints
Third-party Software Requirements
Internationalization
Expandability (e.g. for bits that might need to plug in to others, etc.)
Customization
Questions (for questions that still need to be answered by someone to finish the spec)
Also, if it's really technical, then you might need introductory sections for:
- Target Audience
- Terminology
- Examples
All of this is generally overkill for all but the largest of designs. But even for a modification, I'd go through every item and consider whether I need to write anything or not. I think this is where a lot of the value of writing a spec comes from - the process of creation. In other words, trying to be thorough and not miss too much. All the benefits that come afterwards - like being able to do estimates, being able to explain the functionality to others, etc. - are nice side effects. As long as it doesn't end up completely garbled, and suits your company okay, I think that's more important than the specific appearance, format or contents of the spec.
EDIT: Comments on your specification
I think you've done a reasonable job here. Most developers should be able to take the spec and produce something sensible, and most business analysts should be able to look at the spec and work out what it does and how it works. In my comments below, keep in mind that there's always a trade-off between how detailed you want the spec and how much time you have. I tend to believe the more detailed the spec the more time everyone saves, but that's not the case for everyone.
If you want this to be clearly understood by a business user (e.g. the customer), then the Objective section could maybe contain a sentence or two describing the problem it solves. In other words not what it will achieve, but why.
It's worth explicitly naming the intermediate staging table here. At the very least it means if someone comes back to the spec a year from now, they know exactly where to look in the database.
Minor point: in my experience screenshots that contain unrealistic data are harder to understand. Instead of showing "My Sample Form", "Name", "Address" etc, it'd perhaps be easier to understand with some sensible data. Can still be fake to protect the customer's data, like "123 Fake St" etc. Not a huge deal though.
It's not clear what will happen when something goes wrong. Are there to be any checks that the data in the staging table is in a valid format? If not, is the user given an error message, or is the failure logged somewhere? One error per row of invalid data, or one for the whole batch? The form consists of a single button - something I think we can agree isn't the world's greatest UI, but I understand sometimes these things happen - perhaps it could be enhanced with a logging window to show the results of the import. The answers might be straightforward, but the developer needs to know what they are.
Perhaps not an issue depending on how much data there is, but if there was a lot and it will take a while, it might be worth having a progress bar. Or, mention if the data will be imported in stages.
Would it be worth mentioning the definition of the permanent table to which data is moved? Are all fields moved from the staging table to the permanent table, or only some? If only some, can you show what maps to what? If the permanent table has different data lengths - for example if Address Street is a Varchar(30) - what would happen if the data won't fit? Again, perhaps simple answers, but ones that would be very usefully answered here.
Perhaps worth mentioning whether the data will be imported in a single transaction or not - if the data import fails partway through, is everything rolled back, or is half the imported data left in place?
If another developer will be doing this work, I think they're far more likely to get it right if you mock up / draw the screens for them. Even if it's just a form with one button, and even though I can take a good guess at what your search pop-up form will look like, I would make no mistakes if I knew exactly what it's supposed to look like. Tools like Balsamiq Mockups (and see examples here) are wonderful for quick mocks, though the default "comic sans" look may not ride well with managers. I'd rather have a dirty mockup than none at all, though. (Note: the free version of Balsamiq doesn't let you save your mockups, but you can achieve the same thing with the export/import functionality. Also, you can't save to an image file like PNG etc., but you can use a screen-capture program to take a picture of what you draw.)
Minor point: I try to avoid personal pronouns like "I", "we", "our", just to make it a little more professional and better for customers to read if necessary. I only noticed one "our", so you've mostly got it right in terms of tone here.
Minor point: are varchars enough or will there be non-standard characters in there that require unicode (i.e. nvarchar)?
It's less clear to me what's happening in the Voter Add/Update Form, but I don't have knowledge of your application - maybe everyone involved will say "oh right, I get it". For example I don't understand the relevance of "ImpRecord001" and "ImpRecord002" - would it be worth mentioning in the design what these batch codes actually mean in the real world?
Is the "Search Data" button the same as the "Search OCR" button?
For any document: first consider why you are writing it - who will read it, and what do they need to know? How much detail is appropriate? Here are another couple of general ideas.
It may be useful to then think about the sources of information that go into what you are writing. One result of that might be that you make sure that what you write can be verified. If, for example, an information source is a person - especially for IT docs it might be a non-IT person telling you stuff - then you may be quite careful about how you present some information, so that the "source" can also understand what you are saying.
Also consider carefully what comes after the current document. For example might a test plan be written on the basis of what you write? This might lead you to present information in tables that quite naturally get expanded to test cases.
So to your specific question. What do you mean by "spec"? The workflow you give isn't enough for a user to look at and agree "Yes please, that's what I want". It's not enough for someone to write some code. I'm thinking you might need several documents.
1). Some kind of requirements doc. One format you might use is a storyboard. This focuses on what the user can see and do. Exactly what data is shown on each screen. If there are computations underlying what's displayed you may need to have appendices describing these. This doc is read by both users and developers. Powerpoint or Word could be used.
2). From that you might derive some explicit data models. Item by item, field by field: data types, sizes, validation, etc. I might use a data modelling tool, or UML, or just a spreadsheet. The primary audience is the developer, but ideally a user (or a business analyst intermediary) could verify the details. [If you don't have a business analyst, you probably are the business analyst :-) ]
3). More technical: a spec for the developers referring to items 1 & 2. A decomposition of the implementation. Names of modules, packages, classes or whatever you are using. Definitions of transformations, algorithms and calculations. A more technical doc. I would use UML, but any precise form of capture would do. This is where we might really drill down into what some of the detailed boxes in your workflow mean.
As has been observed, in general we also need to make sure the developer understands the non-functional requirements, such as security and data volumes. In your situation these may be implicitly understood, so possibly you may not need them now ... in some other life you may, so it might be a good idea to at least have a one-liner in place to remind you for the future.
Those are an excellent start for a spec.
I would add to them by creating mock screen shots of what you want the windows application to look like.
On top of that you can add the details of each data field, and what the allowed values are.
Include details of any exceptions you can think of, and how you want errors reported.
You might also want to consider what sort of reporting, and security/auditing you need, as these will need to be included in the design.
Finally, it's worthwhile to sit down with the developer and talk them through the process, going through each step, as I'm sure further details will be needed.
Some of the steps down at the bottom are a bit wordy. Try splitting them up, and make sure the word IF never appears: IFs should be designated by a diamond that splits the flow into paths based on the conditional.

For a large project, what planning should be done before coding and how should it be approached? [closed]

What is your method of "mapping out" an idea before creating it?
Say I wanted to take on a big project, for example at the scale of a site like Facebook or MySpace. What planning/design steps should I take before I start the actual work?
For example, should I map everything out page by page (their functionalities, data, etc.)?
For a large project, first think of a one-line description of your site (try not to use any buzzwords here). Next, think of three design maxims (rules your design should never conflict with). Then draw a few views and think up a few use cases (1 day), then work in code for 2 weeks. This will be a throwaway prototype, so just work as fast as you can; forget about bugs and details, don't worry about code smells or design patterns, just make as much as you can. Then re-evaluate all the steps above, throw away your two-week prototype, and begin the project in a serious manner, applying solid engineering and design. After a month has gone by, evaluate your team's morale and get feedback. If it all seems to be going OK, continue; you've got a long ride ahead. Otherwise, give up, do a postmortem, and start over with new goals.
I always start with the user interface design. I figure out what the user should be able to do and what controls I will give them to do it. Once I get that laid out in a way I like it, then I start with the code "wiring".
Make a list of all the features the site should have.
Make a list of nice-to-have features.
Make a list of the site's weaknesses.
Order that list and prioritize the items that will be built first.
Identify what will be possible to do and what is not.
Meet with your customer and present these results.
Usually I do a mindmap of:
the problem I am trying to solve,
translated into exact requirements,
then mapped to user workflows.
The cross-linking features of mind-mapping software make this a lot easier. Since mind mapping is 'kind of freeform', I end up concentrating on the 'task' rather than the 'representation' (e.g. which type of UML diagram should I use to represent this?).
Once initial ideas are clear then I can work on project plan, spec/design documents using UML for more low level details. This approach usually works well for me.
To see whether it works for you or not, you can use FreeMind (open-source mind-mapping software; good, but with currently limited functionality). Then you can try MindManager or iMindMap for mind mapping. Both integrate well with other Office products.
Usually I start out by grabbing my scratchbook and just writing down what I want in terms of features. This should be quite detailed, and can be quite messy, with everything scrambled together; if so, when you're done, make an 'official version' of your ideas on paper (REAL pen and paper works best for this, in my opinion).
Then I start making some sketches of what the pages would look like and what information they must contain, and translate that to a global database design. Then I work that global design up to a more advanced level where all pages come together, with relations between tables and so on.
After that I build up the most important pages on a code framework (I always make use of a framework; if you don't, then forget the framework part), and by 'most important pages' I mean, in the example of a blog, the posts. After that, build the not-so-important pages; in the case of a blog, that could be an archive of posts.
If you have that done, put the code together with a design, or do that while coding if you do not separate code from HTML/CSS/JS.
Oh, and yes: do NOT expand your first idea along the way. Just write new ideas down and implement them afterwards. So if, in the case of the blog again, you think halfway through that you want YouTube tags in your BB-code, write it down. Add it later, of course before your initial site release.
That's my workflow, or at least a basic, basic description of it.
Start with "paper prototypes", i. e. take a pencil and sketch each page very roughly. This lets you start from the user perspective, which I think is a good idea.
You can then use the sketches for a first hallway usability test and later as the basis for "wireframes" you would give a web designer to work from.
If you've gone through the complete site once, you probably have a good idea of what the backend should be able to do. You can now use your page sketches and compile a list of the actions a user can trigger by clicking on things. This is the raw material for designing the server-side API that the frontend can call.
Using the calls that need to be served, you can design the backend: What functionalities group nicely, what data needs to be fetched, what do you need to store between page calls (== Session variables) etc.
In this process, I have fared quite well by postponing technology decisions (frameworks, protocols etc.) and even class structure etc., until I've gone through the whole thing once in terms of "what things should do what to what other things" (I guess there's a better term).
I think I would start with an open-source SNS solution that comes close to what you need and then figure out how to add use-specific plug-ins, modules, and themes that achieve your purposes. There are a lot of them out there. Building from scratch is going to take a lot more effort and planning. Most SNS functionality is not worth reinventing. Focus on what will make your site unique and build upward toward that.
I'm a fairly visual person when it comes to designing software so I sketch out dataflows, class hierarchies, UI and flow charts on whiteboards and paper first.
Butcher paper and colored pens can be particularly fun to use as it's 3 feet wide and comes in 100 foot rolls. When you've got a design that's satisfying or sufficiently complete, tear it off the roll and pin to the wall. Update as necessary.
That technique has worked for some large refactors as well as new projects.
You could start with something very simple and then add features a little at a time. You may reach a point where you want to start over, but the groundwork you did will be beneficial. Or you can try to do the whole thing at once, in which case you'll need the advice already given in the other replies.
One more idea: Specify those features you are not going to include, and other restrictions. These are called constraints, and are as important as the rest of the plan, as it gives you boundaries so you know when you're done planning!
If you work for the same company as this person, start by getting everything in writing so you aren't the one to take the fall when the inevitable happens...

How important is it to write functional specs? [closed]

I've never written functional specs; I prefer to jump into the code and design things as I go. So far it's worked fine, but for a recent personal project I'm writing out some specs which describe all the features of the product and how it should 'work' without going into the details of how it will be implemented, and I'm finding it very valuable.
What are your thoughts, do you write specs or do you just start coding and plan as you go, and which practice is better?
If you're driving from your home to the nearest grocery store, you probably don't need a map. But...
If you're driving to a place you've never been before in another state, you probably do.
If you're driving around at random for the fun of driving, you probably don't need a map. But...
If you're trying to get somewhere in the most effective fashion (minimize distance, minimize time, make three specific stops along the way, etc.) you probably do.
If you're driving by yourself and can take as long as you like, stopping any time you see something interesting or to reconsider your destination or route, you may not need a map. But...
If you're driving as part of a convoy, and all need to make food and overnight lodging stops together, and need to arrive together, you probably do.
If you think I'm not talking about programming, you probably don't need a functional spec, story cards, narrative, CRCs, etc. But...
If you think I am, you might want to consider at least one of the above.
;-)
For someone who "jumps into the code" and "design[s] as they go", I would say writing anything including a functional spec is better than your current methods. A great deal of time and effort can be saved if you take the time to think it through and design it before you even start.
Requirements help define what you need to make.
Design helps define what you are planning on making.
User Documentation defines what you did make.
You'll find that most places will have some variation of these three documents. The functional spec can be lumped into the design document.
I'd recommend reading Rapid Development if you're not convinced. You truly can get work done faster if you take more time to plan and design.
Jumping "straight to code" on large software projects would almost surely lead to failure (just as immediately starting to lay bricks would when building a bridge).
The guys at 37signals would say that it's better to write a short document on paper than a complex spec. I'd say this could be true for quickly mocking up new websites (where the design and the idea can lead better than a rigid schema), but it is not always acceptable in other real-life situations.
Just think of the (legal, even) importance a spec document signed by your customer can have.
The moral probably is: be flexible, and plan with functional or technical specs as much as you need, according to your project's scenario.
For one-off hacks and small utilities, don't bother.
But if you're writing a serious, large application that has demanding customers and has to run for a long time, it's a MUST. Read Joel's great articles on the subject - they're a good start.
I do it both ways, but I've learned something from Test Driven Development...
If you go into coding with a roadmap you will get to the end of the trip a helluva lot faster than you will if you just start walking down the road without having any idea of how it is going to fork in the middle.
You don't have to write down every detail of what every function is going to do, but define your basics, so that you know what you need to get done to make everything work well together.
All that being said, I needed to write a series of exception handlers yesterday and I just dove right in without trying to architect it out at all. Maybe I should reread my own advice ;)
What a lot of people don't want to admit or realize is that software development is an engineering discipline, and a lot can be learned from how other engineering disciplines approach things. Mapping out what you're going to do in an application isn't necessarily vital on small projects, as it is normally easier to quickly go back and fix your mistakes; you don't notice how much time is wasted compared to writing down first what the system is going to do.
In reality, on large projects it's almost necessary to have a road map of how the system works and what it does. Call it a functional spec if you will, but normally you have to have something that can show you why step B follows step A. We all think we can make it up on the fly (I am definitely guilty of this too), but in reality it causes us problems. Think back and ask yourself how many times you encountered something and said to yourself, "Man, I wish I had thought of that earlier"? Or someone else sees what you've done and shows you that you could have taken 3 steps to accomplish a task where you took 10.
Putting it down on paper really forces you to think about what you're going to do. Once it's on paper, it's not a nebulous thought anymore, and you can look at it and evaluate whether what you were thinking really makes sense. Changing a one-page document is easier than changing 5000 lines of code.
If you are working in an XP (or similar) environment, you'll use stories to guide development along with lots of unit and hallway usability testing (I've drunk the Kool-Aid, I guess).
However, there is one area where a spec is absolutely required: when coordinating with an external team. I had a project with a large insurance company where we needed to have an agreement on certain program behaviors, some aspects of database design and a number of file layouts. Without the spec, I was wide open to a creative interpretation of what we had promised. These were good people - I trusted them and liked working with them. But still, without that spec it would have been a death march. With the spec, I could always point out where they had deviated from the agreed-to layout or where they were asking for additional custom work ($$!). If working with a semi-antagonistic relationship, the spec can save you from even worse: a lawsuit.
Oh yes, and I agree with Kieveli: "jumping right to code" is almost never a good idea.
I would say it totally "depends" on the type of problem. I tend to ask myself whether I'm writing it for the sake of it or for the layers above me. I have also debated this, and my personal experience says you should write specs, since they keep the project on track with expectations (rather than going off course).
I like to decompose any non-trivial problem loosely on paper first, rather than jumping into code, for a number of reasons:
The stuff I write on paper doesn't have to compile or make any sense to a computer
I can work at arbitrary levels of abstraction on paper
I can add pictures and diagrams really easily
I can think through and debug a concept very quickly
If the problem I'm dealing with is likely to involve either a significant amount of time, or a number of other people, I'll write it up as an outline functional spec. If I'm being paid by someone else to develop the software, and there is any potential for ambiguity, I will add enough extra detail to remove this ambiguity. I also like to use this documentation as a starting point for developing automated test cases, once the software has been written.
Put another way, I write enough of a functional specification to properly understand the software I am writing myself, and to resolve any possible ambiguities for anyone else involved.
I rarely feel the need for a functional spec. OTOH I always have the user responsible for the feature a phone call away, so I can always query them for functional requirements as I go.
To me, a functional spec is more of a political tool than a technical one. I guess once you have a spec you can always blame the spec if you later discover problems with the implementation. But who to blame is really of no interest to me; the problem will still be there even if you find a scapegoat. Better, then, to revisit the implementation and try to do it right.
It's virtually impossible to write a good spec, because you really don't know enough of either the problem or the tools or future changes in the environment to do it right.
Thus I think it's much more important to adopt an agile approach to development and dedicate enough resources and time to revisiting and refactoring as you go.
It's important not to write them: There's Nothing Functional about a Functional Spec

Are mock frameworks and high test coverage important?

Mock frameworks, e.g. EasyMock, make it easier to plug in dummy dependencies. Having said that, using them to verify how different methods on particular components are called (and in what order) seems bad to me. It exposes the behaviour to the test class, which makes the production code harder to maintain. And I really don't see the benefit; mentally, I feel like I've been chained to a heavy ball.
I much prefer to just test against the interface, giving test data as input and asserting on the result. Better yet, use a testing tool that automatically generates test data to verify a given property, e.g. that adding one element to a list and removing it immediately yields the same list.
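A sketch of that add/remove property as a plain XCTest, with hand-rolled random sampling standing in for a real property-based testing tool (Objective-C has no widely used one, so the generator here is deliberately crude):

    #import <XCTest/XCTest.h>

    @interface ListPropertyTests : XCTestCase
    @end

    @implementation ListPropertyTests

    // Property: for any list, adding an element and then removing it
    // immediately yields the original list.
    - (void)testAddThenRemoveYieldsSameList {
        for (NSUInteger trial = 0; trial < 100; trial++) {
            // Generate a random list of numbers (a crude stand-in for a generator).
            NSMutableArray *list = [NSMutableArray array];
            NSUInteger length = arc4random_uniform(10);
            for (NSUInteger i = 0; i < length; i++) {
                [list addObject:@(arc4random_uniform(1000))];
            }
            NSArray *original = [list copy];

            [list addObject:@(arc4random_uniform(1000))];
            [list removeLastObject];

            XCTAssertEqualObjects(list, original);
        }
    }

    @end

Note the test asserts only on input and output; there are no expectations about which internal methods were called.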
In our workplace, we use Hudson, which reports test coverage. Unfortunately, it makes it easy to become blindly obsessed with having everything tested. I strongly feel that one shouldn't test everything if one wants to be productive in maintenance mode as well. A good example is controllers in web frameworks: since they should generally contain very little logic, testing with a mock framework that a controller calls such-and-such methods in a particular order is nonsensical, in my honest opinion.
Dear SOers, what are your opinions on this?
I read 2 questions:
What is your opinion on testing that particular methods on components are called in a particular order?
I've fallen foul of this in the past. We use a lot more "stubbing" and a lot less "mocking" these days.
We try to write unit tests which test only one thing. When we do this, it's normally possible to write a very simple test which stubs out interactions with most other components. And we very rarely assert ordering. This helps to make the tests less brittle.
Tests which test only one thing are easier to understand and maintain.
Also, if you find yourself having to write lots of expectations for interactions with lots of components, there could well be a problem in the code you're testing anyway. If tests are difficult to maintain, the code under test can often be refactored; a stub-based sketch follows.
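To make the stub-over-mock preference concrete, here is a self-contained sketch (all names are invented for the example). The stub merely supplies canned data; the test then asserts on observable state rather than on which calls happened in what order:

    #import <XCTest/XCTest.h>

    // The controller depends on this protocol, not on a concrete store.
    @protocol ReminderStore <NSObject>
    - (NSArray *)allReminders;
    @end

    // Minimal controller for the sketch.
    @interface RemindersController : NSObject
    - (instancetype)initWithStore:(id<ReminderStore>)store;
    - (NSUInteger)numberOfReminders;
    @end

    @implementation RemindersController {
        NSArray *_reminders;
    }
    - (instancetype)initWithStore:(id<ReminderStore>)store {
        if ((self = [super init])) {
            _reminders = [store allReminders];
        }
        return self;
    }
    - (NSUInteger)numberOfReminders {
        return _reminders.count;
    }
    @end

    // A stub: returns canned data, records nothing, verifies nothing.
    @interface StubReminderStore : NSObject <ReminderStore>
    @end

    @implementation StubReminderStore
    - (NSArray *)allReminders {
        return @[@"Buy milk", @"Call Bob"];
    }
    @end

    @interface RemindersControllerTests : XCTestCase
    @end

    @implementation RemindersControllerTests

    // One thing tested, state asserted, no ordering expectations.
    - (void)testControllerExposesAllRemindersFromStore {
        RemindersController *controller =
            [[RemindersController alloc] initWithStore:[StubReminderStore new]];
        XCTAssertEqual([controller numberOfReminders], (NSUInteger)2);
    }

    @end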
Should one be obsessed with test coverage?
When writing unit tests for a given class I'm pretty obsessed with test coverage. It makes it really easy to spot important bits of behaviour that I haven't tested. I can also make a judgement call about which bits I don't need to cover.
Overall unit test coverage stats? Not particularly interested so long as they're high.
100% unit test coverage for an entire system? Not interested at all.
I agree - I'm in favor of leaning heavily towards state verification rather than behavior verification (a loose interpretation of classical TDD while still using test doubles).
The book The Art of Unit Testing has plenty of good advice in these areas.
100% test coverage, GUI testing, testing getters/setters or other no-logic code, etc. seem unlikely to provide good ROI. TDD will provide high test coverage in any case. Test what might break.
It depends on how you model the domain(s) of your program.
If you model the domains in terms of data stored in data structures, and methods that read data from one data structure and store derived data in another (procedures or functions, depending on how procedural or functional your design is), then mock objects are not appropriate. So-called "state-based" testing is what you want: the outcome you care about is that a procedure puts the right data in the right variables, and what it calls to make that happen is just an implementation detail.
If you model the domains in terms of message-passing communication protocols by which objects collaborate, then the protocols are what you care about and what data the objects store to coordinate their behaviour in the protocols in which they play roles is just implementation detail. In that case, mock objects are the right tool for the job and state based testing ties the tests too closely to unimportant implementation details.
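A sketch of that interaction-based style, using a hand-rolled mock so as not to assume any particular framework's API (the Doorbell/Chime collaboration is invented for the example):

    #import <XCTest/XCTest.h>

    // The protocol is the thing under specification.
    @protocol Chime <NSObject>
    - (void)ring;
    @end

    @interface Doorbell : NSObject
    - (instancetype)initWithChime:(id<Chime>)chime;
    - (void)press;
    @end

    @implementation Doorbell {
        id<Chime> _chime;
    }
    - (instancetype)initWithChime:(id<Chime>)chime {
        if ((self = [super init])) {
            _chime = chime;
        }
        return self;
    }
    - (void)press {
        [_chime ring];
    }
    @end

    // Hand-rolled mock: records the message so the test can assert on the protocol.
    @interface MockChime : NSObject <Chime>
    @property (nonatomic) NSUInteger ringCount;
    @end

    @implementation MockChime
    - (void)ring {
        self.ringCount++;
    }
    @end

    @interface DoorbellTests : XCTestCase
    @end

    @implementation DoorbellTests

    // Interaction-based: the outcome that matters is the message sent,
    // not any data stored inside Doorbell.
    - (void)testPressingTheDoorbellRingsTheChime {
        MockChime *chime = [MockChime new];
        Doorbell *bell = [[Doorbell alloc] initWithChime:chime];
        [bell press];
        XCTAssertEqual(chime.ringCount, (NSUInteger)1);
    }

    @end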
And in most object-oriented programs there is a mix of styles. Some code will be written in a purely functional style, transforming immutable data structures. Other code will coordinate the behaviour of objects that change their hidden, internal state over time.
As for high test coverage, it really doesn't tell you that much. Low test coverage shows you where you have inadequate testing, but high test coverage doesn't show you that the code is adequately tested. Tests can, for example, run through code paths and so increase the coverage stats but not actually make any assertions about what those code paths did. Also, what really matters is how different parts of the program behave in combination, which unit test coverage won't tell you. If you want to verify that your tests really are testing your system's behaviour adequately you could use a Mutation Testing tool. It's a slow process, so it's something you'd run in a nightly build rather than on every check-in.
I'd asked a similar question How Much Unit Testing is a Good Thing, which might help give an idea of the variety of levels of testing people feel are appropriate.
What is the probability that during your code's maintenance some junior employee will break the part of code that runs "controller calls such and such method in particular order"?
What is the cost to your organization if such a thing occurs - in production outage, debugging/fixing/re-testing/re-release, legal/financial risk, reputation risk, etc...?
Now, multiply #1 and #2 and check whether your reluctance to achieve a reasonable amount of test coverage is worth the risk.
Sometimes, it will not be (this is why in testing there's a concept of a point of diminishing returns).
E.g. if you maintain a web app that is not production-critical and has 100 users who have a workaround if the app breaks (and/or can do an easy and immediate rollback), then spending 3 months achieving full test coverage of that app is probably nonsensical.
If you work on an app where a minor bug can have multi-million-dollar or worse consequences (think space shuttle software, or the guidance system for a cruise missile), then thorough testing with complete coverage becomes a lot more sensible.
Also, I'm not sure if I'm reading too much into your question, but you seem to be implying that mocking-enabled unit testing somehow excludes application/integration functional testing. If that is the case, you are right to object to such a notion: the two testing approaches must co-exist.