Reduce the number of type 2 trampolines used

We are developing an iOS app using Unity3D and we keep running into the "Ran out of trampolines type 2" error. Since Unity doesn't allow us to adjust the trampoline limit, we have to reduce the number of trampolines used. So far we have done a few things, like removing some LINQ calls, which seems to have improved the situation a bit. But in reality we are still not sure whether the things we are doing are helping or not.
So I'm wondering if someone can tell us what exactly type 2 trampolines are, and how we can reduce the number of type 2 trampolines used by our app. Thanks!
This is my original question from a few weeks ago:
MonoDevelop settings to fix "ran out of trampolines type 2" error


Tips to Learning Code in a Big Project [closed]

Not sure if this is the best place to ask or not, but if it gets closed down, oh well. I am in computer programming and starting my first work term. I will be doing 2D game programming for iPhone in Objective-C. I was just wondering if you had any tips for learning how the code works on a big project. In college I have never worked with anything of this scope. I am used to a project with maybe a dozen source files, while what I'll be working on has hundreds. It is very overwhelming for me.
Any tips would be appreciated. Thanks very much.
This is how I do it. Opinions and methods may vary.
Generally speaking, I find the best way to learn about a system is to go through the code while the app is running.
Pick a significant place in the UI (the startup screen, some other screen).
Find the class for that view. Generally just ask a senior developer. Developers are happy to give a pointer (no pun intended) to someone who wants to learn by himself instead of having to explain everything.
Place a breakpoint in that class and run the app in Xcode until you hit your breakpoint.
Then start tracing in there to see how things happen.
Repeat the process at different spots in the app and soon you'll get a general idea of how the app works. Then it's a lot easier to catch the details.
If the system is really enormous (like an enterprise app that runs on multiple systems), then a diagram showing all the architecturally significant pieces would probably help. For an iOS app, it's probably not needed.
Good luck...
I am a 3rd year computer engineer who has done four work terms, and I can offer the following:
Some general advice:
Compartmentalizing your approach is still very useful on a big project, just as on a small one. The more specific the parts you focus on at a time, the easier it will be to understand them. This is not always practical due to the interdependence of programs, but it is still possible to, say, work on the graphics portion alone, or the character's movement algorithm, etc. You should know that in the past it was possible for an educated person to know the sum of human knowledge, but that is impossible today. Even senior engineers/programmers have specific areas of expertise, and other areas where they are fuzzy. Find what you most enjoy/are talented at, and devote time to that.
A basic foundation is important. Study the basic ideas of loop structures, classes, methods and the like, and know them like the back of your hand, so when applying them across languages/platforms, all you need to do is refresh yourself on the syntax. The same basic ideas apply across a range of languages.
Most of all, do not panic. It is your first work term, and you are assigned mentors/supervisors, as well as working with a team. Doing it alone would be difficult, so network well with your teammates/superiors so you can all learn from each other, divide the work, and lessen the stress on yourself!
Good luck! :)
Short Answer :
Read less, do more, then read when you get stuck. In my opinion, that's the best way to learn any new language. And, as someone said:
"We learn by doing, there is no other way".
Long Answer :
Rule 1: Relax.
Rule 2: You gotta understand that this is not easy stuff to master. That is why the people who do get paid really well. If you had the idea that you could bang this stuff out in a couple of weeks, you need to dump that. Plan to spend months working up to it.
Rule 3: Understand that the Apple API is HUGE and it is always evolving. There is enough content to learn something new every day.
Rule 4: The fewer programming languages you've had to learn, the harder it is to learn new ones. You will learn slower than someone else who has learned half dozen languages/APIs already.
Rule 5: Don't be afraid to use repetition and brute force. I think the thing that slows novices down is not learning the behaviors and methods of common foundation classes like NSString, NSArray, NSDictionary etc.
Rule 6: As a learning exercise, copy-pasting might not be the right thing to do. If there's an Apple example of how to do something, rather than copy-pasting I tend to rewrite it manually. I find it sticks better in my mind.
Rule 7: Use any resources you like. There are no rules on how you should learn.
Rule 8: iPhone is a memory-constrained device where network and local storage access is slow. Parts of your application can be unloaded at any time, your application is responsible for maintaining its memory footprint (not the user), and an event (phone call, memory, etc.) may require the app to respond accordingly and quickly.
Rule 9: It isn't about you. It isn't about your code. And it isn't about your code doing this or that. It's first about the user and responding to the user. It's second about your code responding to the framework. You don't usually tell the framework what to do. It asks you for things when it needs something. You sit and wait for it to talk to you. You're not in charge. You don't control the runloop; it controls you. You register to be told when things happen, and you indicate that you're the object who knows something about something (data for a table for instance). And then you let go, and let Cocoa do the rest. It's a very different world. I like it very much.
Rule 10: Relax.
When I'm coming to a new Xcode project, I open it up in OmniGraffle Pro. If the project is well organized, you'll see a nice diagram with a summary of the classes, the methods that are present and a little bit of how things relate to each other, important enums to know about, and other helpful information for getting a good overview of the project.
After that, pick a point like #mprivat said and run it in the debugger and get a feel for how things run. I like to set breakpoints with logs of the breakpoint name and hit count (and maybe the value of some variable or parameter if it seems relevant) and automatic continue after a little while to avoid pesky timing issues that can sometimes creep up when the debugger pauses execution. I use breakpoint logging so I don't have to worry about accidentally committing clutter code. (Be careful of pulling new code though because breakpoints don't move with your changing codebase. :))

Is there a way to use assembly hand-coded shaders instead of using GLSL on iPhone?

I would like to use hand-coded assembly language vertex and fragment shaders in order to program very optimized shaders on iPhone with OpenGL ES 2.0.
I googled around but I can't find an example, or even whether it is allowed by the Apple SDK.
On the iPhone, there is no way to hand tune shaders. It's worth noting that on the iPhone in particular, there are no optimizations that you can do that a compiler can't. That said, the GLSL compiler will probably beat or match your hand-tuned assembly.
On the PC, however, I personally do not have faith in the driver to know that a looping shader should use fewer registers and more instructions in order to achieve greater throughput through higher occupancy. Drivers simply don't have enough context to make the right choice all of the time. Data-specific compile-time optimizations are a great example of this problem.
As someone who has actually looked at the GLSL compiler's assembly output and tried to game the compiler's register allocation strategy, I can tell you that not having assembly access absolutely does hurt performance (on the PC, there are some publicly available tools from NVidia and AMD that allow you to do this). The trade-off to using assembly is that every shader needs to be hand tuned for every supported part in order to achieve the maximum possible performance. While this is a bit extreme, if I want to invest my own time into fine tuning a rendering back-end for each video card my products support, then I should be able to do so. A more practical example would be hand-tuning for low-end video cards, but letting the GLSL compiler do its job on more high-end video cards.
Further, offline compilers provide a safety mechanism. Many video games today rely on drivers to emulate a lot of the functionality available in modern graphics APIs. As a game developer who works in the game-as-a-service space on the PC, I can tell you that it's extremely upsetting to get a call in the middle of the night because of a minor GLSL bug in a newly released graphics driver. Driver bugs severely impact the overall player experience. Most players simply think that your game is broken, and you can actually lose players as a result (and we probably have). Being able to compile once for each supported video card and hand-tune after the fact would be a huge win in this regard. It simply means that the driver would have to do less work. Code is evil, so the less code that executes, the better =).
As a side note, I made the following demo using the 'compile'-'view assembly'-'modify'-'repeat' approach: http://www.youtube.com/watch?v=km0DpZUgvbg. I can tell you with 100% certainty that I could further improve the performance of this ray-tracer with assembly language, and AFAIK, it's the fastest voxel ray-tracer whose existence has been published (that was the case as of Mar 2012, but is likely no longer true). Unsurprisingly, each time a new driver would come out, I would see this demo's performance go from 125-130 fps down to 30 fps - all because the driver didn't know how to optimize my shader correctly. That means I'd have to repeat my optimization process each time a new driver came out, which caused me to simply mothball the project (ACK!). Even though my voxel raytracer can support a large variety of hardware in a performant manner, drivers are currently making it impossible to support this technology in a full product. I simply do not have the weight to put this technology in action because it would require driver vendors to know the ways in which they need to optimize my shader. How many other technologies would be possible if we simply had direct assembly shader access? This implies that lacking assembly access is actually a serious cost. For anyone else in this position, I recommend the following: Use NVidia's assembly language when possible and fall back to GLSL when it's not. If we show the advantage of assembly over GLSL, then hopefully we'll get first-class assembly support from all vendors =).
And finally, not to pick on another author, but I want to point out that the argument made by 'Nicol Bolas' is almost entirely fallacious (sorry Nicol, I have nothing against you, but I wanted to point out some popular arguments that simply don't hold up to an ethics test). Please note that a fallacious argument does not mean that a particular conclusion is incorrect -- just that the argument posited is simply fallacious.
"Why? You don't trust the compiler to do it's job? Do you really think that you know enough about the GPU in question to be able to consistently beat the compiler?"
This is not an argument. This is simply a question that a lot of people can't think of a real answer to. Therefor, they come to the conclusion that they should just trust the compiler. This prevents people from further investigating the ramifications of trusting the compiler, and prevents logical discourse of the actual pros and cons from taking place. Furthermore, the use of the word "really" in your second question implies that someone answering yes must be deluded. This also hints at what you think about yourself, Nicol -- it implies that you value your opinion above all others, and anyone who doesn't think like you must have something wrong with them (not that they are wrong, but that they have something wrong with them - big difference). That said, you might want to take some time to think about your thought process, your feelings, and your emotional state. Taking this approach will seriously limit your ability to learn, as you won't be challenging your own ideas with enough rigor. Please stop using this argument. It's not healthy or ethical.
"Ultimately, you're just going to have to trust the compiler made by the people who built your GPU. Nobody else has a problem with that these days."
I have a problem with it. Also, even if no one did have a problem with it, that still wouldn't matter. The fact is that there might still be some benefit to allowing assembly shaders. That said, a consensus does not mean correctness. Further, this argument is especially unethical because there is an implied "Nobody else has a problem with that these days, so if you have a problem with it, you must be weird or out of date in your thinking". People have a natural desire to fit in, and using this argument is a way of dividing people on this issue and persecuting people that don't think like you. That said, this argument is especially insidious because of its implications. Please don't use this argument.
Nicol, both of your fallacies imply that you are right and normal, and anyone who doesn't agree with you is wrong and has something wrong with them. These are extremely unhealthy viewpoints, and you should examine them rigorously for your own mental health and career.
For future reference: http://en.wikipedia.org/wiki/List_of_fallacies#Formal_fallacies
Thanks!
I would like to use hand-coded assembly language vertex and fragment shaders in order to program very optimized shaders on iPhone with OpenGL ES 2.0.
Why? You don't trust the compiler to do its job? Do you really think that you know enough about the GPU in question to be able to consistently beat the compiler?
Anyway, you can't. Nor could you do it in desktop OpenGL. ARB assembly shaders aren't that much closer to the hardware than GLSL; they both go through compilation and optimization by an internal compiler.
Ultimately, you're just going to have to trust the compiler made by the people who built your GPU. Nobody else has a problem with that these days.
I understand your point very well: you want to see the generated assembly source code, and maybe modify it.
In fact, GLSL compilers are not as well optimized as HLSL ones. To convince yourself, just compare the generated assembly in ShaderAnalyzer for the same shader in GLSL and HLSL; you will immediately see that they don't give the same optimization at all.
Even for trivial optimizations, like factorizing if() conditions, most GLSL compilers don't do the job.
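To make that concrete, here is a small sketch in plain C-style code (GLSL shares this syntax; the function and variable names are invented for illustration) of the sort of if() factorization meant here - the hand-factored version tests the shared condition once instead of in every branch:

```cpp
#include <cstdio>

// Before: the shared condition (d > 0.0f) is written in every branch and,
// if the compiler does nothing clever, evaluated in every branch.
float shade_naive(float d, float lit, float ambient) {
    float c = 0.0f;
    if (d > 0.0f && lit > 0.5f)     c += d * lit;          // "diffuse" term
    if (d > 0.0f && ambient > 0.0f) c += 0.1f * ambient;   // "ambient" term
    return c;
}

// After: the common condition is factored out by hand, so it is tested once.
float shade_factored(float d, float lit, float ambient) {
    float c = 0.0f;
    if (d > 0.0f) {
        if (lit > 0.5f)     c += d * lit;
        if (ambient > 0.0f) c += 0.1f * ambient;
    }
    return c;
}

int main() {
    // Both versions compute the same value; only the branch structure differs.
    std::printf("%f %f\n", shade_naive(1.0f, 0.9f, 0.5f),
                           shade_factored(1.0f, 0.9f, 0.5f));
    return 0;
}
```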
I really would like to see the ASM generated by Apple's compiler (especially for iOS platforms). If you know a way to get the assembly, I'm very interested in the process.

Providing suggested solutions in error messages

Developer tools and software typically do not provide solution suggestions in error messages. This makes sense for compilers because they are supposed to tell precisely what went wrong.
There are "lint" tools to provide suggestions, but AFAIK, few developers use lint tools regularly or even at all.
There is a large set of developer-oriented software that would do well to have a "suggested solution(s)" part to error messages. This is one of the great features that IDEs like Eclipse have. But software like web application frameworks, standard/popular libraries, etc. do not have this helpful feature.
Is this something that is just lacking in user-friendly design (one could consider it unnecessary, given that Google is so good), or is there a good reason for it? Do any compilers, frameworks, or platforms that you use provide error messages with solution suggestions? If not, why not?
What do you want to see?
Error: Null Pointer Exception (suggested solution: Set the object to something).
I mean, it's not the error writer's job to educate you. I prefer simple error messages that point to the exact problem, so I myself can determine just what is causing it this time. For me, this certainly lies in the domain of 3rd-party tools; perhaps the compilers could provide extensive context to them to do their analysis, but it's not something I would really find valuable.
The primary thing I want from a compiler or runtime error is context - where did it happen and where was it called from when it failed.
I think most modern compilers and runtimes (Java, Ruby, Go) do a decent job there; with line numbers and stack traces you can find most bugs. Even the JavaScript options are getting good; it certainly beats the good old "alert()" approach to debugging.
Isn't it fair enough to leave suggested solutions to the IDEs?
But I do agree that I have seen frameworks/libraries that were very sparse with error messages, and "NullPointerException at line 264" deep inside some third-party library where you do not have the source code tells you very close to nothing.
If this is an issue, I think it is primarily restricted to third-party libraries. The "good reason" is presumably that it was developed in a hurry in somebody's spare time, and they did not put meaningful error messages very high on the priority list.
It's hard for a solution to an error to be presented. There are so many possibilities and, as #silky pointed out, some just cannot be diagnosed.
Warnings are a different beast. In many situations modern compilers use these to say "I think you meant X when you said Y; you might want to check that."
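A small, hedged illustration of that style of warning (the variable name is made up, and the exact wording differs between compilers and versions): with warnings enabled, GCC and Clang both flag the assignment used as a condition below and suggest either extra parentheses or an == comparison.

```cpp
#include <cstdio>

int main() {
    int status = 0;
    // Compiled with -Wall, GCC/Clang warn about the assignment in the
    // condition and hint that '==' (or extra parentheses) was probably meant.
    if (status = 1) {            // almost certainly intended: status == 1
        std::printf("status is %d\n", status);
    }
    return 0;
}
```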
A programming language has the opportunity to be the ultimate in flexibility in terms of user interface. You can make the computer do anything you want. The flip side of the coin is that if you type so much as one character wrong, it might have no idea along which axis your mistake was made, or where.
Systems with less flexibility offer more opportunities for offering solutions to problems. If you type (a b c) to your Lisp compiler and it doesn't know what a is, it's so close to so many valid lines of code that it can't exactly suggest a single fix. If you misspell "IDENTIFICATION DIVISION" at the start of your COBOL program, it's relatively easy for the compiler to spot the error and help you out. Most other languages lie between these extremes.
Programmers tend to move, over their careers, from less powerful and more structured languages, into more powerful and flexible languages. (At least, that's what I saw happen before Javascript became such a hot newbie language.) This means their discipline improves to the point where they are able to use tools that offer power at the expense of being told what to do. The environments I've used that can tell me what to fix, tend to be those that I dislike using.
It's no different from any other art. Look at musicians or painters or martial artists or actors or writers or chefs or even people learning to speak Spanish: when they're young and inexperienced, they're put in a system where there's a lot of structure, and if they make a mistake somebody can easily correct them. As they become more skilled, they need and want less and less support. When they've become experts themselves, they need no support at all, but the flip side of the coin is that you can't as easily point out what's right or wrong. If your kid colors outside the lines, you can explain the issue, but if Picasso or Pollock makes a bad brushstroke, what would you say? Or if Philip Glass puts a note out of place, or Bruce Lee turns his body too far into a punch? And who would want to work in an art form that's so limited that profane things aren't possible? COBOL compilers still exist if anybody really wants them, but far more people pay money for awful paintings than masterful color-by-number prints.
More directly, there's a site, ErrorHelp (nee bug.gd), that lets you type in an error message and get a result, and it's older than SO but nobody uses it. I've tried. Unless you're in a context where there's only one possible answer, a simple problem-encountered to suggested-solution dictionary does not work, and therefore it's an utter failure in any creative field.
Most IDEs have their own compilers. This allows them to do partial compilation, code refactoring, and many other tricks. I find the error messages and suggestions quite useful. Just because the compiler isn't invoked on the command line doesn't mean it's not a compiler.

What inherited code has impressed or inspired you?

I've heard a ton of complaining over the years about inherited projects that we developers have to work with. The WTF site has tons of examples of code that make me actually mutter under my breath "WTF?"
But have any of you actually been presented with code that made you go, "Holy crap this was well thought out!" or "Wow, I never thought of that!"
What inherited code have you had to work with that made you smile and why?
Long ago, I was responsible for the Turbo C/C++ run-time library. Tanj Bennett wrote the original 80x87 floating point emulator in 16-bit assembler. I hadn't looked closely at Tanj's code since it worked well and didn't require attention. But we were making the move to 32-bits and the task fell to me to stretch the emulator.
If programming could ever be said to have something in common with art, this was it.
Tanj's core math functions managed to keep an 80-bit floating point temporary result in five 16-bit registers without having to save and restore them from memory. x86 assembly programmers will understand just what an accomplishment this was. Register space was scarce, and keeping five registers as your temp while simultaneously doing complex math was a beautiful sight to behold.
If it were only a matter of clever coding, that would have been enough to qualify it as art, but it was more than that. Tanj had carefully picked the underlying math algorithms that would be most suitable for keeping the temp in registers. The result was a blazing-fast floating point emulator, which was an important selling point for many of our customers.
By the time the 386 came along most people who cared about floating-point performance weren't using an emulator but we had to support Intel's 386SX so the emulator needed an overhaul. I rewrote the instruction-decode logic and exception handling but left the core math functions completely untouched.
In my first job, I was amazed to discover a "safe ID" class in the codebase (C++), which wrapped numerical IDs in a class templated on an empty tag class, ensuring that the compiler would complain if you tried, for example, to compare or assign a UserId to an OrderId.
Not only did I make sure that I had an equivalent Id class in all subsequent codebases I would be using, but it actually opened my eyes to what the compiler can do to guarantee correctness and help write stronger code.
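A minimal sketch of the idea (this is not the original code; the names Id, UserTag, OrderTag and the choice of a 64-bit integer are my own assumptions):

```cpp
#include <cstdint>

// The numeric value is wrapped in a class template parameterized on an empty
// tag type, so UserId and OrderId are distinct types even though both hold an
// integer, and mixing them fails at compile time.
template <typename Tag>
class Id {
public:
    explicit Id(std::uint64_t value) : value_(value) {}
    std::uint64_t value() const { return value_; }

    friend bool operator==(Id a, Id b) { return a.value_ == b.value_; }
    friend bool operator!=(Id a, Id b) { return !(a == b); }

private:
    std::uint64_t value_;
};

struct UserTag {};   // empty tag types; never instantiated
struct OrderTag {};

using UserId  = Id<UserTag>;
using OrderId = Id<OrderTag>;

int main() {
    UserId  user(42);
    OrderId order(42);

    bool same_user = (user == UserId(42));   // fine: same tag type
    // bool oops   = (user == order);        // compile error: different types
    // OrderId o   = user;                   // compile error: no conversion
    return same_user ? 0 : 1;
}
```

The tag structs carry no data; they exist only to make the two instantiations of Id different types, which is what lets the compiler reject an accidental comparison or assignment between a UserId and an OrderId.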
The code that impresses me the most, and which I try to emulate - is code that seems too simple and easy to understand.
It is damn difficult to write that kind of code. :-)
I have a funny story to tell here.
I was working on this Javaish application, filled with getters and setters that did nothing but get or set, with interfaces and everything else ever invented to make code unreadable. One day I stumbled upon some code which seemed very well crafted: it was basically an algorithm implementation that looked very elegant, a few lines of readable code, even though it respected every possible rule the project had to adhere to (it was checkstyled automatically).
I couldn't figure out who on the team could have written such code. I was dying to discuss it with him and share thoughts. Thankfully, we had switched to Subversion (from CVS) a few months earlier and I quickly ran an 'svn blame'. I loled all over the place, seeing my name next to the implementation.
I had heard stories about people not remembering code they wrote 6 months back, code that is a nightmare to maintain. I could not believe such a thing could happen: how can you forget code you wrote? Well, now I'm convinced it can happen. Thankfully the code was alright and easy to extend, so I've only experienced half of the story.
Some VB6 code I came across by another programmer at my company that handled error conditions very well (whether dealing with them directly or logging them).
Along with some rather complex code that was well commented.
I know this will bring a lot of answers like,
"I've never found good code before I stepped in" and variations.
I think the real problem there is not that there aren't good coders or excellent projects out there; it's that there's an excess of NIH syndrome and the fact that nobody likes code from others. The latter is just because you have to make an intellectual effort to understand it, a much bigger effort than you need to understand your own code, so you dislike it (it's making you think and work, after all).
Personally, I can remember (as everyone can, I guess) some cases of really bad code, but I also remember some pretty well-documented, elegant code.
Currently, the project that has most impressed me was a very potent dynamic workflow engine, not only for its simplicity but also for the way it is coded. I can remember some very clever snippets here and there, as well as a beautiful metaprogramming library based on a full IDL developed by some friends of mine (Aspl.es).
I inherited a large bunch of code that was SO well written I actually spent the $40 online to find the guy, I went to his house and thanked him.
I think Rocky Lhotka should get the credit, but I had to touch a CSLA.NET application recently {in my private practice on the side} and I was very impressed with the orderliness of the code. The app worked extremely well, but the client needed a few extensions. The original author had died tragically, and the new guy was unsophisticated. He didn't understand CSLA.NET's business object based approach, and he wanted to do it all over again in cut-and-dried VB.NET, without any fancy framework.
So I got the call. Looking at a working example of WinForm binding and CSLA.NET was pretty instructive about a lot of things.
Symbian OS - the old core bit of it anyway, the bit that dated back to the Psion days, or to those who even today keep that spirit alive.
And sitting right alongside it and all over it is all the new crap created by the lowest bidders hired by the big phone corporations. It was startling; you could actually feel in your bones whether a bit of the codebase was old or new somehow.
I remember when I wrote my bachelor thesis on type inference: my Pascal-to-Pascal "compiler" was an extension of a parser my supervisor had programmed (in Java). It had a pretty good structure as far as I can remember, and for me, who had never done any serious object-oriented programming, it was quite a revelation.
I've been doing a lot of Eclipse plug-in development and often had to debug into the actual Eclipse source code. While I haven't "inherited" it in the sense that I'm not continuing work on it, I've always been impressed with the design and quality of the early core.

How important is it to write functional specs? [closed]

I've never written functional specs; I prefer to jump into the code and design things as I go. So far it's worked fine, but for a recent personal project I'm writing out some specs which describe all the features of the product, and how it should 'work' without going into details of how it will be implemented, and I'm finding it very valuable.
What are your thoughts, do you write specs or do you just start coding and plan as you go, and which practice is better?
If you're driving from your home to the nearest grocery store, you probably don't need a map. But...
If you're driving to a place you've never been before in another state, you probably do.
If you're driving around at random for the fun of driving, you probably don't need a map. But...
If you're trying to get somewhere in the most effective fashion (minimize distance, minimize time, make three specific stops along the way, etc.) you probably do.
If you're driving by yourself and can take as long as you like, stopping any time you see something interesting or to reconsider your destination or route, you may not need a map. But...
If you're driving as part of a convoy, and all need to make food and overnight lodging stops together, and need to arrive together, you probably do.
If you think I'm not talking about programming, you probably don't need a functional spec, story cards, narrative, CRCs, etc. But...
If you think I am, you might want to consider at least one of the above.
;-)
For someone who "jumps into the code" and "design[s] as they go", I would say writing anything including a functional spec is better than your current methods. A great deal of time and effort can be saved if you take the time to think it through and design it before you even start.
Requirements help define what you need to make.
Design helps define what you are planning on making.
User Documentation defines what you did make.
You'll find that most places will have some variation of these three documents. The functional spec can be lumped into the design document.
I'd recommend reading Rapid Development if you're not convinced. You truly can get work done faster if you take more time to plan and design.
Jumping "straight to code" for large software projects would almost surely lead to failure (as immediatley starting posing bricks to build a bridge would).
The guys at 37signals would say that it is better to write a short document on paper than a complex spec. I'd say this could be true for quickly mocking up new websites (where the design and the idea can guide you better than a rigid schema), but not always acceptable in other real-life situations.
Just think of the (legal, even) importance a spec document signed by your customer can have.
The moral probably is: be flexible, and plan with functional or technical specs as much as you need, according to your project's scenario.
For one-off hacks and small utilities, don't bother.
But if you're writing a serious, large application that has demanding customers and has to run for a long time, it's a MUST. Read Joel's great articles on the subject - they're a good start.
I do it both ways, but I've learned something from Test Driven Development...
If you go into coding with a roadmap you will get to the end of the trip a helluva lot faster than you will if you just start walking down the road without having any idea of how it is going to fork in the middle.
You don't have to write down every detail of what every function is going to do, but define your basics so that you know what you should get done to make everything work well together.
All that being said, I needed to write a series of exception handlers yesterday and I just dove right in without trying to architect it out at all. Maybe I should reread my own advice ;)
What a lot of people don't want to admit or realize is that software development is an engineering discipline. A lot can be learned from how other engineering disciplines approach things. Mapping out what you're going to do in an application isn't necessarily vital on small projects, as it is normally easier to quickly go back and fix your mistakes. You don't notice how much time is wasted compared to writing down first what the system is going to do.
In reality, on large projects it's almost necessary to have a road map of how the system works and what it does. Call it a Functional Spec if you will, but normally you have to have something that can show you why step b follows step a. We all think we can think it up on the fly (I am definitely guilty of this too), but in reality it causes us problems. Think back and ask yourself how many times you encountered something and said to yourself, "Man, I wish I had thought of that earlier." Or someone else sees what you've done and shows you that you could have taken 3 steps to accomplish a task where you took 10.
Putting it down on paper really forces you to think about what you're going to do. Once it's on paper it's not a nebulous thought anymore, and then you can look at it and evaluate whether what you were thinking really makes sense. Changing a one-page document is easier than changing 5000 lines of code.
If you are working in an XP (or similar) environment, you'll use stories to guide development, along with lots of unit and hallway usability testing (I've drunk the Kool-Aid, I guess).
However, there is one area where a spec is absolutely required: when coordinating with an external team. I had a project with a large insurance company where we needed to have an agreement on certain program behaviors, some aspects of database design and a number of file layouts. Without the spec, I was wide open to a creative interpretation of what we had promised. These were good people - I trusted them and liked working with them. But still, without that spec it would have been a death march. With the spec, I could always point out where they had deviated from the agreed-to layout or where they were asking for additional custom work ($$!). If working with a semi-antagonistic relationship, the spec can save you from even worse: a lawsuit.
Oh yes, and I agree with Kieveli: "jumping right to code" is almost never a good idea.
I would say it totally "depends" on the type of problem. I tend to ask myself whether I am writing it for the sake of it or for the layers above me. I have also debated this, and my personal experience says you should, since it keeps the project on track with expectations (rather than going off course).
I like to decompose any non-trivial problems loosely on paper first, rather than jumping into code, for a number of reasons:
The stuff I write on paper doesn't have to compile or make any sense to a computer
I can work at arbitrary levels of abstraction on paper
I can add pictures and diagrams really easily
I can think through and debug a concept very quickly
If the problem I'm dealing with is likely to involve either a significant amount of time, or a number of other people, I'll write it up as an outline functional spec. If I'm being paid by someone else to develop the software, and there is any potential for ambiguity, I will add enough extra detail to remove this ambiguity. I also like to use this documentation as a starting point for developing automated test cases, once the software has been written.
Put another way, I write enough of a functional specification to properly understand the software I am writing myself, and to resolve any possible ambiguities for anyone else involved.
I rarely feel the need for a functional spec. OTOH I always have the user responsible for the feature a phone call away, so I can always query them for functional requirements as I go.
To me a functional spec is more of a political tool than a technical one. I guess once you have a spec you can always blame the spec if you later discover problems with the implementation. But who to blame is really of no interest to me; the problem will still be there even if you find a scapegoat. It's better, then, to revisit the implementation and try to do it right.
It's virtually impossible to write a good spec, because you really don't know enough of either the problem or the tools or future changes in the environment to do it right.
Thus I think it's much more important to adopt an agile approach to development and dedicate enough resources and time to revisit and refactor as you go.
It's important not to write them: There's Nothing Functional about a Functional Spec