Providing suggested solutions in error messages - frameworks

Developer tools and software typically do not provide solution suggestions in error messages. This makes sense for compilers, because they are supposed to tell you precisely what went wrong.
There are "lint" tools to provide suggestions, but AFAIK, few developers use lint tools regularly or even at all.
There is a large set of developer-oriented software that would do well to have a "suggested solution(s)" part to error messages. This is one of the great features that IDEs like Eclipse have. But software like web application frameworks, standard/popular libraries, etc. do not have this helpful feature.
Is this something that is just lacking in user-friendly design (one could consider it unnecessary, given that Google is so good), or is there a good reason for it? Do any compilers, frameworks, or platforms that you use provide error messages with solution suggestions? If not, why not?

What do you want to see?
Error: Null Pointer Exception (suggested solution: Set the object to something).
I mean, it's not the error writer's job to educate you. I prefer simple error messages that point to the exact problem, so I can determine for myself just what is causing it this time. For me, this certainly lies in the domain of 3rd-party tools; perhaps the compilers could provide extensive context to them to do their analysis, but it's not something I would really find valuable.

The primary thing I want from a compiler or runtime error is context - where did it happen and where was it called from when it failed.
I think most modern compilers and runtimes (Java, Ruby, Go) do a decent job there; with line numbers and stack traces you can find most bugs. Even the JavaScript options are getting good, and it certainly beats the good old "alert()" approach to debugging.
Isn't it fair enough to leave suggested solutions to the IDEs?
But I do agree that I have seen frameworks/libraries that were very sparse with error messages, and "NullPointerException at line 264" deep inside some third-party library where you do not have the source code tells you very close to nothing.
If this is an issue, I think it is primarily restricted to third-party libraries. The "good reason" is presumably that it was developed in a hurry in somebody's spare time, and they did not put meaningful error messages very high on the priority list.

It's hard to present a solution along with an error. There are so many possibilities, and as #silky pointed out, some just cannot be diagnosed.
Warnings are a different beast. In many situations modern compilers use these to say "I think you meant X when you said Y; you might want to check that."
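To make the "I think you meant X" style concrete, here is a minimal C++ sketch (assumption: built with GCC or Clang and -Wall; the exact diagnostic wording varies by compiler version): an assignment used where a comparison was almost certainly intended, which the compiler flags and effectively asks you to double-check.

    #include <cstdio>

    int main() {
        int limit = 10;
        int count = 0;
        // Classic typo: '=' where '==' was intended. With -Wall, GCC and Clang
        // warn along the lines of:
        //   "suggest parentheses around assignment used as truth value"
        // i.e. "I think you meant a comparison when you wrote an assignment."
        if (count = limit) {
            std::printf("count reached the limit\n");
        }
        return 0;
    }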

A programming language has the opportunity to be the ultimate in flexibility in terms of user interface. You can make the computer do anything you want. The flip side of the coin is that if you type so much as one character wrong, it might have no idea along which axis your mistake was made, or where.
Systems with less flexibility offer more opportunities for offering solutions to problems. If you type (a b c) to your Lisp compiler and it doesn't know what a is, it's so close to so many valid lines of code that it can't exactly suggest a single fix. If you misspell "IDENTIFICATION DIVISION" at the start of your COBOL program, it's relatively easy for the compiler to spot the error and help you out. Most other languages lie between these extremes.
Programmers tend to move, over their careers, from less powerful and more structured languages, into more powerful and flexible languages. (At least, that's what I saw happen before Javascript became such a hot newbie language.) This means their discipline improves to the point where they are able to use tools that offer power at the expense of being told what to do. The environments I've used that can tell me what to fix, tend to be those that I dislike using.
It's no different from any other art. Look at musicians or painters or martial artists or actors or writers or chefs or even people learning to speak Spanish: when they're young and inexperienced, they're put in a system where there's a lot of structure, and if they make a mistake somebody can easily correct them. As they become more skilled, they need and want less and less support. When they've become experts themselves, they need no support at all, but the flip side of the coin is that you can't as easily point out what's right or wrong. If your kid colors outside the lines, you can explain the issue, but if Picasso or Pollock makes a bad brushstroke, what would you say? Or if Philip Glass puts a note out of place, or Bruce Lee turns his body too far into a punch? And who would want to work in an art form that's so limited that profane things aren't possible? COBOL compilers still exist if anybody really wants them, but far more people pay money for awful paintings than masterful color-by-number prints.
More directly, there's a site, ErrorHelp (nee bug.gd), that lets you type in an error message and get a result, and it's older than SO but nobody uses it. I've tried. Unless you're in a context where there's only one possible answer, a simple problem-encountered to suggested-solution dictionary does not work, and therefore it's an utter failure in any creative field.

Most IDEs have their own compilers. This allows them to do partial compilation, code refactorings, and many other tricks. I find the error messages and suggestions quite useful. Just because the compiler isn't invoked on the command line doesn't mean it's not a compiler.
(image omitted; source: theeggeadventure.com)


How can I convince people to use proper object oriented Perl?

I have been using an object-oriented MVC architecture for a web project, and all the models are OO Perl. But I have noticed a couple of people on the team are reverting to procedural techniques and are essentially using "objects" as dumping grounds for related functions. Their functions basically read/write directly to/from the database.
What's the best way to convince them this is the wrong way? Are there some good tutorials I can get them to read?
I can think of three possible reasons why some team members may not produce nice OO code:
They don't care
They care, try to do it right but lack skills to do it right
They are doing it right, or at least right enough! Some matters of design are matters of opinion. Go back and look at some of your own old code, code that you thought was quite good; there's a good chance that you would revise some of it now.
My approach is to make the assumption that everybody wants to do the job right, so I assume (or pretend to assume) that folks care. One tries to build an ethos of wanting to do it right.
So that leaves building our skills, theirs and yours (and mine). Code reviews seem like one obvious way to do this. Talk about alternatives. Also maybe pair-programming?
OOP is just a tool to accomplish some great things, like more reliable, usable and maintainable software. If you cannot convince your team to write tests, give up on strict OOP.
Are they all out of step with you? Perhaps it's just you who is out of step!
Lest I seem facetious, I mean what are the answers to the following questions:
What has the team agreed should be the approach?
Whose responsibility is it to enforce purity? If you don't go for enforced purity, whose responsibility is it to encourage purity? If it's not you, have a look in the mirror and make sure you are not the annoying noob on the team telling the old sweats how to do the jobs they've been doing since you were in Perl baby-clothes.
What mechanisms do you have in place for sorting out such team issues? Regular code reviews, pair programming, bare-knuckle fights?
Asking them to read some tutorials is likely to be a waste of breath. What are your arguments for doing things the OO way? As Boris has observed, OO is not an end in itself, it is only a means to an end, and not the only means.
There are probably more things you want to consider in your situation, but these should get you started.

Do Poor Code Samples Turn You Away From Libraries?

I've been evaluating a framework that on paper looks great. The problem is that the sample code is incomplete and of poor quality. The supplied reference implementations are for the most part not meant to be used (so they can be considered as sample code as well) and have only succeeded at confusing me.
I know that it's common for things to look better on paper, but my experience with the sample code is turning me away from further investigation.
Do you let poor code samples change your judgment of frameworks/libraries? So far my experience has been similar to the "resume effect": if someone doesn't put the effort into spell checking their resume, they probably won't get the job...
For me, it does. I tend to want to avoid libraries where the code samples are incomplete. If the library is open source, I will overlook poor samples, since I can look directly at the code and see if the library's internals are reasonable, and I know that, if there is a problem someday, I could (if I had to) fix it.
If the library is commercial, and its samples and/or documentation are poor, I look elsewhere. I just see it as risk management - poor samples make me fear the quality of the library in general.
No matter how good something is on paper or in theory, it can still be crap when programmed.
I think this is a valid reason to turn away and evaluate other libraries. As a potential user of a library, a lack of documentation and/or bad code samples gives the impression that the library is not yet mature enough for use by third parties. In time it may well gain the missing pieces, but until then I think it's reasonable to look elsewhere.
I was recently evaluating the multitude of blogging applications that people have uploaded to github.com. I quickly skipped the ones with no documentation, as they obviously weren't ready for others to use. The ones that remained at the end had a good README with info on how to get the app up and running, as well as an online example of the code running.
Poor code samples combined with poor documentation will make me turn away from a library unless there is a compelling reason to use it. However, a library that has either good code samples or good documentation is usually worth using. (Assuming that the library itself otherwise meets my needs.)
If I can't find good examples (and/or documentation) illustrating how to use the library, I'm definitely less likely to use it - just as a practical matter, it'll be harder for me to figure out how. But I don't care what the code that implements the library itself looks like. I don't think I'd choose one library/framework over another just because the developers of the one have shown an ability to write cleaner code (which is what I understand the "resume effect" to mean).
Lack of documentation and examples makes me a whole lot less likely to use that particular library. It's not worth my time testing and trying to figure out how a black box works if there are alternate solutions to the problem out there.
Yes, definitely. Every library should come with a simple example program and a CLI interface (for very simple libraries with <3 methods and <10 hooks, one example should suffice).
And why does your framework "look great" if it's so hard to use that even the original coders make mistakes using it?
It certainly matters to me. Evidence of sloppy/incomplete coding and poor communication decreases my confidence that the actual implementation code is stable and robust.
Myself, yes, but there must be people out there who aren't turned off by this; otherwise plenty of open source projects would have died a long, long time ago.

What inherited code has impressed or inspired you?

I've heard a ton of complaining over the years about inherited projects that we developers have to work with. The WTF site has tons of examples of code that make me actually mutter under my breath "WTF?"
But have any of you actually been presented with code that made you go, "Holy crap this was well thought out!" or "Wow, I never thought of that!"
What inherited code have you had to work with that made you smile and why?
Long ago, I was responsible for the Turbo C/C++ run-time library. Tanj Bennett wrote the original 80x87 floating point emulator in 16-bit assembler. I hadn't looked closely at Tanj's code since it worked well and didn't require attention. But we were making the move to 32-bits and the task fell to me to stretch the emulator.
If programming could ever be said to have something in common with art this was it.
Tanj's core math functions managed to keep an 80-bit floating point temporary result in five 16-bit registers without having to save and restore them from memory. x86 assembly programmers will understand just what an accomplishment this was. Register space was scarce, and keeping five registers as your temp while simultaneously doing complex math was a beautiful sight to behold.
If it were only a matter of clever coding, that would have been enough to qualify it as art, but it was more than that. Tanj had carefully picked the underlying math algorithms that would be most suitable for keeping the temp in registers. The result was a blazing-fast floating point emulator, which was an important selling point for many of our customers.
By the time the 386 came along most people who cared about floating-point performance weren't using an emulator but we had to support Intel's 386SX so the emulator needed an overhaul. I rewrote the instruction-decode logic and exception handling but left the core math functions completely untouched.
In my first job, I was amazed to discover a "safe ID" class in the codebase (C++), which wrapped numerical IDs in a class templated on an empty tag class, ensuring that the compiler would complain if you tried, for example, to compare or assign a UserId to an OrderId.
Not only did I make sure that I had an equivalent Id class in all subsequent codebases I would be using, but it actually opened my eyes to what the compiler can do to guarantee correctness and help write stronger code.
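A minimal sketch of the idea, reconstructed from the description above rather than from the original codebase (UserTag, OrderTag and Id are illustrative names): the raw number is wrapped in a class templated on an empty tag class, so mixing two kinds of ID becomes a compile error instead of a silent bug.

    #include <cstdint>

    // Empty tag classes: they exist only to make the template instantiations distinct types.
    struct UserTag {};
    struct OrderTag {};

    template <typename Tag>
    class Id {
    public:
        explicit Id(std::uint64_t value) : value_(value) {}
        std::uint64_t value() const { return value_; }

        // Comparison is only defined between Ids carrying the same tag.
        bool operator==(const Id& other) const { return value_ == other.value_; }

    private:
        std::uint64_t value_;
    };

    using UserId  = Id<UserTag>;
    using OrderId = Id<OrderTag>;

    int main() {
        UserId  user(42);
        OrderId order(42);

        bool same = (user == UserId(42));   // fine: same tag
        // user == order;                   // compile error: different Id types
        // OrderId oops = user;             // compile error: no conversion between tags
        return same ? 0 : 1;
    }

The empty tag structs cost nothing at runtime; all the checking happens in the type system at compile time.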
The code that impresses me the most, and which I try to emulate - is code that seems too simple and easy to understand.
It is damn difficult to write that kind of code. :-)
I have a funny story to tell here.
I was working on this Javaish application, filled with getters & setters that did nothing but get or set, and interfaces and everything ever invented to make code unreadable. One day I stumbled upon some code that seemed very well crafted -- it was basically an algorithm implementation that looked very elegant: a few lines of readable code, even though it respected every possible rule the project had to adhere to (it was checkstyled automatically).
I couldn't figure out who on the team could have written such code. I was dying to discuss it with him and share thoughts. Thankfully, we had switched to Subversion (from CVS) a few months earlier and I quickly ran an 'svn blame'. I LOLed all over the place, seeing my name next to the implementation.
I had heard stories about people not remembering code they wrote 6 months back, code that is a nightmare to maintain. I could not believe such a thing could happen: how can you forget code you wrote? Well, now I'm convinced it can happen. Thankfully the code was alright and easy to extend, so I've only experienced half of the story.
Some VB6 code I came across by another programmer at my company that handled the error conditions very well (whether by dealing with them directly or logging them).
Along with some rather complex code that was well commented.
I know this will bring a lot of answers like,
"I've never find good code before I step in" and variations.
I think the real problem there is not that there isn't good coders or excellent projects out there, is that there's an excess of NIH syndrome and the fact that no body likes code from others. The latter is just because you have to make an intellectual effort to understand it, a much bigger effort than you need to understand you own code so that you dislike it (it's making you think and work after all).
Personally I can remember (as everyone I guess) some cases of really bad code but also I remember some pretty well documented, elegant code.
Currently, the project that most impressed me was a very potent Dynamic Workflow Engine, not only for its simplicity but also for the way it is coded. I can remember some very clever snippets here and there, as well as a beautiful metaprogramming library based on a full IDL developed by some friends of mine (Aspl.es).
I inherited a large bunch of code that was SO well written I actually spent the $40 online to find the guy; I went to his house and thanked him.
I think Rocky Lhotka should get the credit, but I had to touch a CSLA.NET application recently {in my private practice on the side} and I was very impressed with the orderliness of the code. The app worked extremely well, but the client needed a few extensions. The original author had died tragically, and the new guy was unsophisticated. He didn't understand CSLA.NET's business object based approach, and he wanted to do it all over again in cut-and-dried VB.NET, without any fancy framework.
So I got the call. Looking at a working example of WinForm binding and CSLA.NET was pretty instructive about a lot of things.
Symbian OS - the old core bit of it anyway, the bit that dated back to the Psion days, or to those who even today keep that spirit alive.
And sitting right along side it and all over it is all the new crap created by the lowest bidders hired by the big phone corporations. It was startling, you could actually feel in your bones whether a bit of the code-base was old or new somehow.
I remember when I wrote my bachelor thesis on type inference, my Pascal-to-Pascal 'compiler' was an extension of a Parser my supervisor programmed (in Java). It had a pretty good structure as far as I can remember, and for me who had never done any serious Object-oriented programming, it was quite a revelation.
I've been doing a lot of Eclipse plug-in development and often had to debug into the actual Eclipse source code. While I haven't "inherited" it in the sense that I'm not continuing work on it, I've always been impressed with the design and quality of the early core.

Getting your head around other people's code

I'm occasionally unfortunate enough to have to make alterations to very old, poorly (or not at all) documented and poorly designed code.
It often takes a long time to make a simple change because there is not much structure to the existing code and I really have to read a lot of code before I have a feel for where things would be.
What I think would help a lot in cases like this is a tool that would allow one to visualise an overview of the code, and then maybe even drill down for more detail. I suspect such a tool would be very hard to get right, given that it is trying to find structure where there is little or none.
I guess this is not really a question, but rather a musing. I should make it into a question - what do others do to assist in getting their heads around other people's code, the good and the bad?
Hmm, this is a hard one, so much to say so little time ...
1) If you can run the code, it makes life soooo much easier; breakpoints (especially conditional ones) are your friend.
2) A purist's approach would be to write a few unit tests for known functionality, then refactor to improve the code and your understanding, then re-test (see the sketch after this list). If things break, create more unit tests - repeat until bored/old/moved to a new project.
3) ReSharper is good at showing where things are being used - what's calling a method, for instance. It's static, but it's a good start, and it helps with refactoring.
4) Many .NET events are coded as public, and events can be a pain to debug at the best of times. Recode them to be private and expose them via add/remove accessors. You can then use a breakpoint to see what is listening on an event.
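As a rough sketch of point 2 above (language-agnostic; shown here in C++ with plain asserts, and legacy_discount is a made-up stand-in for whatever inherited function you are about to touch): first pin down what the code currently does, then refactor and re-run the tests after every step.

    #include <cassert>

    // Hypothetical inherited function whose behaviour we want to pin down before refactoring.
    int legacy_discount(int order_total) {
        if (order_total > 100) return order_total / 10;
        return 0;
    }

    int main() {
        // Characterization tests: they record what the code *currently* does,
        // not what we believe it ought to do.
        assert(legacy_discount(50)  == 0);
        assert(legacy_discount(100) == 0);   // boundary: no discount at exactly 100
        assert(legacy_discount(200) == 20);

        // Now refactor legacy_discount(); if any assert fires, the behaviour changed.
        return 0;
    }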
BTW - I'm playing in the .NET space, and would love a tool to help do this kind of stuff. Like Joel, does anyone out there know of a good dynamic code reviewing tool?
I have been asked to take ownership of some NASTY code in the past - both work and "play".
Most of the amateurs I took over code for had just sort of evolved the code to do what they needed over several iterations. It was always a giant incestuous mess of library A calling B, calling back into A, calling C, calling B, etc. A lot of the time they'd use threads and not a critical section was to be seen.
I found the best/only way to get a handle on the code was to start at the OS entry point [main()] and build my own call stack diagram showing the call tree. You don't really need to build a full tree at the outset. Just trace through the section(s) you're working on at each stage and you'll get a good enough handle on things to be able to run with it.
To top it all off, use the biggest slice of dead tree you can find and a pen. Laying it all out in front of you so you don't have to jump back and forward on screens or pages makes life so much simpler.
EDIT: There's a lot of talk about coding standards... they will just make poor code look consistent with good code (and usually make it harder to spot). Coding standards don't always make maintaining code easier.
I do this on a regular basis. And have developed some tools and tricks.
Try to get a general overview (object diagram or other).
Document your findings.
Test your assumptions (especially for vague code).
The problem with this is that at most companies you are judged by results. That's why some programmers write poor code fast and move on to a different project. So you are left with the garbage, and your boss compares your sluggish progress with that of the quick-and-dirty guy. (Luckily my current employer is different.)
I generally use UML sequence diagrams of various key ways that the component is used. I don't know of any tools that can generate them automatically, but many UML tools such as BoUML and EA Sparx can create classes/operations from source code which saves some typing.
The definitive text on this situation is Michael Feathers' Working Effectively with Legacy Code. As S. Lott says, get some unit tests in to establish the behaviour of the legacy code. Once you have those in, you can begin to refactor. There seems to be a sample chapter available on the Object Mentor website.
I strongly recommend BOUML. It's a free UML modelling tool, which:
is extremely fast (fastest UML tool ever created, check out benchmarks),
has rock solid C++ import support,
has great SVG export support, which is important, because viewing large graphs in vector format, which scales fast in e.g. Firefox, is very convenient (you can quickly switch between "birds eye" view and class detail view),
is full featured, intensively developed (look at the development history; it's hard to believe that such fast progress is possible).
So: import your code into BOUML and view it there, or export to SVG and view it in Firefox.
See Unit Testing Legacy ASP.NET Webforms Applications for advice on getting a grip on legacy apps via unit testing.
There are many similar questions and answers. Here's the search https://stackoverflow.com/search?q=unit+test+legacy
The point is that getting your head around legacy is probably easiest if you are writing unit tests for that legacy.
I haven't had great luck with tools to automate the review of poorly documented/executed code, because a confusing/badly designed program generally translates to a less than useful model. It's not exciting or immediately rewarding, but I've had the best results with picking a spot and following the program execution line by line, documenting and adding comments as I go, and refactoring where applicable.
A good IDE (Emacs or Eclipse) can help in many cases. Also, on a UNIX platform there are some tools for cross-referencing (etags, ctags) or checking (lint), or gcc with many, many warning options turned on.
First, before trying to comprehend a function/method, I would refactor it a bit to fit your coding conventions (spaces, braces, indentation) and remove most of the comments if they seem to be wrong.
Then I would refactor and comment the parts you understood, and try to find/grep those parts over the whole source tree and refactor them there also.
Over time, you get nicer code that you like to work with.
I personally do a lot of drawing of diagrams, and figuring out the bones of the structure.
The fad du jour (and possibly quite rightly) has got me writing unit tests to test my assertions, and build up a safety net for changes I make to the system.
Once I get to a point where I'm comfortable enough knowing what the system does, I'll take a stab at fixing bugs in the sanest way possible, and hope my safety nets are near completion.
That's just me, however. ;)
I have actually been using the refactoring features of ReSharper to help me get a handle on a bunch of projects that I inherited recently. So, to figure out another programmer's very poorly structured, undocumented code, I actually start by refactoring it.
Cleaning up the code, renaming methods, classes and namespaces properly, and extracting methods are all structural changes that can shed light on what a piece of code is supposed to do. It might sound counterintuitive to refactor code that you don't "know", but trust me, ReSharper really allows you to do this. Take, for example, the issue of red-herring dead code. You see a method in a class or perhaps a strangely named variable. You can start by trying to look up usages or, ugh, do a text search, but ReSharper will actually detect dead code and color it gray. As soon as you open a file, you see in gray (and with scroll-bar flags) what would in the past have been confusing red herrings.
There are dozens of other tricks and probably a number of other tools that can do similar things, but I am a ReSharper junkie.
Cheers.
Get to know the software intimately from a user's point of view. A lot can be learnt about the underlying structure by studying and interacting with the user interface(s).
Printouts
Whiteboards
Lots of notepaper
Lots of Starbucks
Being able to scribble all over the poor thing is the most useful method for me. Usually I turn up a lot of "huh, that's funny..." while trying to make basic code structure diagrams that turns out to be more useful than the diagrams themselves in the end. Automated tools are probably more helpful than I give them credit for, but the value of finding those funny bits exceeds the value of rapidly generated diagrams for me.
For diagrams, I look for mostly where the data is going. Where does it come in, where does it end up, and what does it go through on the way. Generally what happens to the data seems to give a good impression of the overall layout, and some bones to come back to if I'm rewriting.
When I'm working on legacy code, I don't attempt to understand the entire system. That would result in complexity overload and subsequent brain explosion.
Rather, I take one single feature of the system and try to understand completely how it works, from end to end. I will generally debug into the code, starting from the point in the UI code where I can find the specific functionality (since this is usually the only thing I'll be able to find at first). Then I will perform some action in the GUI, and drill down in the code all the way down into the database and then back up. This usually results in a complete understanding of at least one feature of the system, and sometimes gives insight into other parts of the system as well.
Once I understand what functions are being called and what stored procedures, tables, and views are involved, I then do a search through the code to find out what other parts of the application rely on these same functions/procs. This is how I find out if a change I'm going to make will break anything else in the system.
It can also sometimes be useful to attempt to make diagrams of the database and/or code structure, but sometimes it's just so bad or so insanely complex that it's better to ignore the system as a whole and just focus on the part that you need to change.
My big problem is that I (currently) have very large systems to understand in a fairly short space of time (I pity contract developers on this point) and don't have a lot of experience doing this (having previously been fortunate enough to be the one designing from the ground up.)
One method I use is to try to understand the meaning of the naming of variables, methods, classes, etc. This is useful because it (hopefully increasingly) embeds a high-level view of a train of thought from an atomic level.
I say this because typically developers will name their elements (with what they believe are) meaningfully and providing insight into their intended function. This is flawed, admittedly, if the developer has a defective understanding of their program, the terminology or (often the case, imho) is trying to sound clever. How many developers have seen keywords or class names and only then looked up the term in the dictionary, for the first time?
It's all about the standards and coding rules your company is using.
If everyone codes in a different style, then it's hard to maintain another programmer's code, and so on. If you decide what standard you'll use and have some rules, everything will be fine :) Note that you don't have to make a lot of rules, because people should have the possibility to code in the style they like; otherwise you can be very surprised.

How important is it to write functional specs? [closed]

I've never written functional specs; I prefer to jump into the code and design things as I go. So far it's worked fine, but for a recent personal project I'm writing out some specs which describe all the features of the product, and how it should 'work' without going into details of how it will be implemented, and I'm finding it very valuable.
What are your thoughts? Do you write specs, or do you just start coding and plan as you go, and which practice is better?
If you're driving from your home to the nearest grocery store, you probably don't need a map. But...
If you're driving to a place you've never been before in another state, you probably do.
If you're driving around at random for the fun of driving, you probably don't need a map. But...
If you're trying to get somewhere in the most effective fashion (minimize distance, minimize time, make three specific stops along the way, etc.) you probably do.
If you're driving by yourself and can take as long as you like, stopping any time you see something interesting or to reconsider your destination or route, you may not need a map. But...
If you're driving as part of a convoy, and all need to make food and overnight lodging stops together, and need to arrive together, you probably do.
If you think I'm not talking about programming, you probably don't need a functional spec, story cards, narrative, CRCs, etc. But...
If you think I am, you might want to consider at least one of the above.
;-)
For someone who "jumps into the code" and "design[s] as they go", I would say writing anything, including a functional spec, is better than your current methods. A great deal of time and effort can be saved if you take the time to think it through and design it before you even start.
Requirements help define what you need to make.
Design helps define what you are planning on making.
User Documentation defines what you did make.
You'll find that most places will have some variation of these three documents. The functional spec can be lumped into the design document.
I'd recommend reading Rapid Development if you're not convinced. You truly can get work done faster if you take more time to plan and design.
Jumping "straight to code" for large software projects would almost surely lead to failure (as immediatley starting posing bricks to build a bridge would).
The guys at 37signals would say that it is better to write a short document on paper than a complex spec. I'd say that this could be true for mocking up new websites quickly (where the design and the idea can lead better than a rigid schema), but it isn't always acceptable in other real-life situations.
Just think of the (legal, even) importance a spec document signed by your customer can have.
The moral probably is: be flexible, and plan with functional or technical specs as much as you need, according to your project's scenario.
For one-off hacks and small utilities, don't bother.
But if you're writing a serious, large application that has demanding customers and has to run for a long time, it's a MUST. Read Joel's great articles on the subject - they're a good start.
I do it both ways, but I've learned something from Test Driven Development...
If you go into coding with a roadmap you will get to the end of the trip a helluva lot faster than you will if you just start walking down the road without having any idea of how it is going to fork in the middle.
You don't have to write down every detail of what every function is going to do, but define your basics so that you know what you should get done to make everything work well together.
All that being said, I needed to write a series of exception handlers yesterday and I just dove right in without trying to architect it out at all. Maybe I should reread my own advice ;)
What a lot of people don't want to admit or realize is that software development is an engineering discipline. A lot can be learned from how other engineering disciplines approach things. Mapping out what you're going to do in an application isn't necessarily vital on small projects, as it is normally easier to quickly go back and fix your mistakes; but you don't see how much time is wasted compared to writing down first what the system is going to do.
In reality, in large projects it's almost necessary to have a road map of how the system works and what it does. Call it a functional spec if you will, but normally you have to have something that can show you why step B follows step A. We all think we can think it up on the fly (I am definitely guilty of this too), but in reality it causes us problems. Think back and ask yourself how many times you encountered something and said to yourself, "Man, I wish I had thought of that earlier." Or someone else sees what you've done and shows you that you could have taken 3 steps to accomplish a task where you took 10.
Putting it down on paper really forces you to think about what you're going to do. Once it's on paper it's not a nebulous thought anymore, and then you can look at it and evaluate whether what you were thinking really makes sense. Changing a one-page document is easier than changing 5000 lines of code.
If you are working in an XP (or similar) environment, you'll use stories to guide development, along with lots of unit and hallway usability testing (I've drunk the Kool-Aid, I guess).
However, there is one area where a spec is absolutely required: when coordinating with an external team. I had a project with a large insurance company where we needed to have an agreement on certain program behaviors, some aspects of database design and a number of file layouts. Without the spec, I was wide open to a creative interpretation of what we had promised. These were good people - I trusted them and liked working with them. But still, without that spec it would have been a death march. With the spec, I could always point out where they had deviated from the agreed-to layout or where they were asking for additional custom work ($$!). If working with a semi-antagonistic relationship, the spec can save you from even worse: a lawsuit.
Oh yes, and I agree with Kieveli: "jumping right to code" is almost never a good idea.
I would say it totally "depends" on the type of problem. I tend to ask myself whether I am writing it for its own sake or for the layers above me. I have also debated this, and my personal experience says you should, since it keeps the project on track with expectations (rather than going off course).
I like to decompose any non-trivial problems loosely on paper first, rather than jumping into code, for a number of reasons:
The stuff I write on paper doesn't have to compile or make any sense to a computer
I can work at arbitrary levels of abstraction on paper
I can add pictures and diagrams really easily
I can think through and debug a concept very quickly
If the problem I'm dealing with is likely to involve either a significant amount of time, or a number of other people, I'll write it up as an outline functional spec. If I'm being paid by someone else to develop the software, and there is any potential for ambiguity, I will add enough extra detail to remove this ambiguity. I also like to use this documentation as a starting point for developing automated test cases, once the software has been written.
Put another way, I write enough of a functional specification to properly understand the software I am writing myself, and to resolve any possible ambiguities for anyone else involved.
I rarely feel the need for a functional spec. OTOH I always have the user responsible for the feature a phone call away, so I can always query them for functional requirements as I go.
To me a functional spec is more of a political tool than a technical one. I guess once you have a spec you can always blame the spec if you later discover problems with the implementation. But who to blame is really of no interest to me; the problem will still be there even if you find a scapegoat. Better, then, to revisit the implementation and try to do it right.
It's virtually impossible to write a good spec, because you really don't know enough of either the problem or the tools or future changes in the environment to do it right.
Thus I think it's much more important to adapt an agile approach to development and dedicate enough resources and time to revisit and refactor as you go.
It's important not to write them: There's Nothing Functional about a Functional Spec