Documentation and version control

A project I'm about to start will produce documentation.
What is the best practice for this?
Should the documents live with the code and assets, or should there be a separate documentation store?
Edit: I'd like a wiki, but I will need to print the documents etc. It's a university project.

It really depends on your team. Where I work, we keep documentation in a wiki which is linked in with our team website. When we need to ship documentation, the wiki can be exported and run through a parser that "fancifies" the look and feel for customers.
Storing the documentation with the code (typically in your source repository) is not a bad idea. Just make sure to keep them separated; for example, keep a docs folder at the same level as your src folder in your repository. This way, you can quickly ship the current documentation, you can easily track revisions, and anybody new to the project can immediately jump in without having to go to multiple locations for information.

Storing it in source control is fine.

This is an interesting question -- basically, what others are saying about generated documentation is right: source files, templates, etc. should be stored in source control, and the documentation generated during your build process.
As far as requirements/specs/etc. documentation, I have worked both ways, and I very much prefer using SharePoint or a Wiki/document portal that is designed for document sharing/versioning. The reason is, most non-developer folks aren't comfortable working with source control systems, and you don't gain any of the advantages of intelligent merging if you are using a binary format like Word. Plus it's nice to have internet-based access so you can reference and work on the docs in a distributed team without people having to install extra software.

Here's a 2017 summary of the options and my experience:
(extreme 1) Completely external (e.g. a wiki, Google Docs, LaTeX, MS Word, MS OneDrive)
People aren't bothered to keep it up to date (half of them don't even know where to find the page that needs updating, since it's so far out of the trenches).
Wiki platforms are “captive user interfaces” - your data gets stored in their proprietary schemas and is not easy to examine with a simple text editor (Confluence is even worse, in that you no longer have any access to the plaintext content at all).
(extreme 2) Completely internal (e.g. javadoc)
It pollutes the source code, and is usually too low-level to be of any use. Well-written source code is still the best form of low-level documentation.
However, I feel package-info.java files are underutilized.
(balance) Colocated documentation (e.g. README.md)
A good halfway solution, with the benefits of version control. If a single README.md file is not enough, consider a doc/ folder. The only drawback I've seen is deciding whether to put helpful graphics (e.g. PNG files) under source control and risk bloating the repo.
One interesting way to avoid this problem is to use plaintext diagram tools (I find Grapheasy and Text Diagram to be a breath of fresh air).
Plaintext can be easily read even if your rendering engine changes as the years go by.
GitHub's success is in no small part thanks to the README.md located in the root of the project.
One tiny disadvantage of this approach though is that your continuous integration system will trigger a new build each time you make edits to the README.md file.

If you are writing versioned user documentation associated with each release of the product, then it makes sense to put the documentation in source control along with its associated product release.
If you are writing internal developer documentation, use automated internal source code documentation (javadoc, doxygen, .net annotations, etc) for source level documentation and a project wiki for design level documentation.

I think most of us in the industry are not really following best practices, and of course it also depends a lot on your situation.
In an agile environment with a very iterative release process, you will want to "travel light". In this particular case, Jason's suggestion of a separate wiki really works great.
In a waterfall/big-bang model, you will have a better opportunity to make a decent documentation update with each new release. You will also need to clearly document which version of the requirements was agreed on, and keep loads of documentation for every tiny change you make to requirements (due to the effects it has on subsequent stages). Often it is best if the documentation can live together with the version-controlled source code.

Are you using any sort of auto-documentation or is it completely manual? Assuming that you are using an auto-documentation system, the documentation is more or less generated on the fly, and would be part of the code itself.
To me (assuming that it's possible with whatever code you are using), this would be the preferred method of handling it, as you wouldn't need to maintain the documentation source at all.
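To make that concrete, here is a minimal sketch (in Python, purely illustrative; javadoc/doxygen follow the same idea) of documentation living in the code itself: the docstrings are the documentation source, and a doc tool renders them.

```python
# Illustrative sketch: docstrings as the documentation source.
# A doc tool (pydoc here) renders browsable documentation straight
# from the code, so there is no separate document to keep in sync.

def transfer(amount: float, source: str, target: str) -> None:
    """Move `amount` from account `source` to account `target`.

    Raises:
        ValueError: if `amount` is negative.
    """
    if amount < 0:
        raise ValueError("amount must be non-negative")
    # ... actual transfer logic would go here ...
```

Running `python -m pydoc <module>` (or the equivalent javadoc/doxygen invocation) then regenerates the docs from whatever the code currently says.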


LiveCode Source Control

Anyone out there using LiveCode in a multi-developer project?
Either way, can someone recommend a good source control system / plugin to use?
We've looked at MagicCarpet but since it is no longer developed we wish to use something else.
Thanks
I'm working on a solution to this problem by exporting the stack file as a structured directory of script, JSON and image files, which will diff and merge nicely in most VCSs. It is not yet available, but the intention is that it will be open source. My goal is to demonstrate it at the RunRevLive conference in May.
Here's the repo for lcVCS: https://github.com/montegoulding/lcVCS
I have put a git library stack on revOnline (libVersionControl) that exports to structured XML files that git can handle. It works as far as it goes, but I have hopes that Monte's solution will supersede this effort.
revOnline link to stack
Yes, our team has been using LiveCode with multiple developers. Since the LiveCode community is still young, acquiring good source control tools can be a challenge. Our solution has been to break code into modules (stack files). When there are updates to merge into the main codebase, we clone our existing codebase and merge code changes manually, using line-by-line compare in a text editor. This is not a fun process, but is much less painful than it sounds.
If I were to redesign the system, we would simply use Git (Github.com etc.). There is no reason this would not work with LiveCode stacks.
We use LiveCode in a small team with Subversion.
We don't have a perfect solution, but it is very lightweight; we all use a custom extension to the standard toolbar, which among other things has a 'save+backup' button. When we started using it with Subversion, we added code to this button which saves an XML sidecar file for the stack. The file contains all the scripts, custom properties, and optionally fields (controlled by user property in each stack). In our case almost all of our work is in scripts, so this works for us.
The effect is that each time we commit to SVN, we're always committing two files, the LiveCode stack and the accompanying sidecar file - the latter works fine for diffing etc.
Where this lets us down is that we don't have any solution for merging. If we were working on larger systems more actively, I expect we'd look to modify the sidecar format into a complete folder of files. For now, however, this makes the situation workable (and it takes no noticeable time to generate the sidecar file).
Happy to share code if that was useful.
I know of a tool that's being worked on that is going to really help in this regard. When he showed it to me it looked very functional already. But I'm not sure when he will share it with the community.
So the point is, it's just a matter of time before people's work comes together to make a turnkey solution for this.

PowerBuilder 11.5 & Version Control

What is the best version control system to implement with PowerBuilder 11.5?
If you have examples of how you have done branching/trunk/tags, that would be awesome. We have tried to wrap our heads around it a few times and always run into problems, because we use shared libraries such as PFC/PFE in multiple applications.
Right now we are only using PBNative, and it sucks.
Agent SVN is an MS-SCCI Subversion plug-in that works with PowerBuilder.
Here is a link that describes how to set up Agent SVN to work with PowerBuilder and Subversion.
We currently use Perforce and its P4SCC plugin, which works very well. In fact, I'm sure I read somewhere that the guys at Sybase who write PowerBuilder actually use Perforce themselves.
So, to be fair, let's start out by saying that while you're asking about version control, PBNative is source control. If you compare something that is intended to have more features than just keep two developers from editing the same piece of source, then yes, PBNative will suck. The Madone SL may be an incredible bicycle, but if you're trying to take a couple of laps around an Indy track, it will suck.
"Best" is a pretty subjective word. There are lots of features available in version control and configuration management tools. You can get tons of features, but you'll pay through the nose. StarTeam has some nice features like being able to trace a client change request or bug report all the way through to the changed code, and being able to link in a customized diff tool (which is particularly useful in PB). Then again, if cost is your key criteria rather than features, there are lots of free options that will get the job done. As long as the tool supports the Microsoft SCC interface, you should be OK.
There is a relatively active NNTP newsgroup that focuses on source control with PowerBuilder, which you can also access via the web. You can probably find some already-posted opinions there.
Many years ago I used Starteam to control PB applications. PowerBuilder needless to say is an outdated bear, and it has to export each and every object from its "libraries" into source control.
Currently our legacy PB apps have their libraries saved whole into Subversion, without any support for diffs, etc.
We use Visual SourceSafe. We don't use PFC, but we do have libraries that are shared among several projects. Till now, each project was developed separately from the others, and so the shared libraries were duplicated. To have them synchronized, they were all shared at the VSS level. Lately we've reorganized our sources so all projects are near each other, and there's only one instance of the shared libraries.
VSS is definitely not the best source control system, to say the least, but it integrates into PB without the need of any bridges. PB has an inherent problem working with source control, so it probably won't make much of a difference working with one instead of the other (at least from the PB point of view).
Now, on a personal note, I'd like to say PB 11.5 is a piece of sh*t. It constantly crashes, is full of unbelievable UI nuisances, and just brings productivity to its knees. It's probably the worst IDE ever created. Stay away if possible.
FYI: The new PB12 (PB.NET) will integrate with SCC systems so you can easily choose which source control system that you want to use. Since we basically have dropped PBLs (they are now directories) files can be checked in/out individually - even with a plain vanilla editor since files are now normal (unicode) text files.
StarTeam integrates so beautifully with the PB IDE. I used that combination at my previous company (PB9 and ST5.x) for several years. You should be managing your code at the object level - don't log the entire PBL into ST...
If you're having problems with that setup, hit me up offline. phoran at sybase dot com.
We use Merant Version Manager for older projects and TFS for newer work. The only issues we have are that TFS does not support keyword expansion, and changing the 'read the flowerbox comments' attitude people have. Some folks are nervous about losing the inline versioning history.
We use StarTeam and have been very pleased with it. It combines bug tracking with version control. Unfortunately though we don't store our files on the object level. We just store the PBL files directly in source control. Anything that supports the SCC interface theoretically should work correctly in PowerBuilder.
PB9: We used PVCS but had stability problems with PBL corruption, and also problems co-existing with later versions of Crystal Reports (a DLL conflict), so now we use PB9 with Dynamsoft's Source Anywhere Standalone. This system is more primitive; it is missing the more advanced features for promotion levels and for pulling out an older milestone version of all objects to make a patch build.
What we are looking for now is something which will allow more advanced "change management", to support promotion levels at the change level (rather than at the object level). Would it be better to use perforce, starteam, or (harvest change manager + HarPB), or something else? Any advice on these combinations would be greatly appreciated.
You can always use Plastic SCM with PowerBuilder through SCC. Plastic is pretty advanced in terms of graphics, tools, replicas and so on, so it's always a good choice to keep in mind.

Should I store generated code in source control

This is a debate I'm taking a part in. I would like to get more opinions and points of view.
We have some classes that are generated at build time to handle DB operations (in this specific case, with SubSonic, but I don't think that is very important for the question). The generation is set as a pre-build step in Visual Studio. So every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project.
Now some people are claiming that having these classes saved in source control could cause confusion, in case the code you get doesn't match what would have been generated in your own environment.
I would like to have a way to trace back the history of the code, even if it is usually treated as a black box.
Any arguments or counter arguments?
UPDATE: I asked this question since I really believed there is one definitive answer. Looking at all the responses, I could say with high level of certainty, that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below could provide a very good guideline to the types of questions you should be asking yourself when having to decide on this issue.
I won't select an accepted answer at this point for the reasons mentioned above.
Saving it in source control is more trouble than it's worth.
You have to do a commit every time you do a build for it to be of any value.
Generally we leave generated code (IDL, JAXB stuff, etc.) outside source control where I work, and it's never been a problem.
Put it in source code control. The advantage of having the history of everything you write available for future developers outweighs the minor pain of occasionally rebuilding after a sync.
Every time I want to show changes to a source tree on my own personal repo, all the 'generated files' will show up as having changed and needing committing.
I would prefer to have a cleaner list of modifications that only include real updates that were performed, and not auto-generated changes.
Leave them out, and then after a build, add an 'ignore' on each of the generated files.
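A sketch of that post-build "ignore" step, shown for git (paths are illustrative; Subversion's svn:ignore property serves the same purpose):

```python
# Append any freshly generated files to .gitignore so they stop
# showing up as unversioned/modified after a build.
from pathlib import Path

generated = Path("src/generated").rglob("*")   # hypothetical output directory
gitignore = Path(".gitignore")
known = set(gitignore.read_text().splitlines()) if gitignore.exists() else set()

with gitignore.open("a") as fh:
    for path in generated:
        if path.is_file() and path.as_posix() not in known:
            fh.write(path.as_posix() + "\n")
```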
Look at it this way: do you check your object files into source control? Generated source files are build artifacts just like object files, libraries and executables. They should be treated the same. Most would argue that you shouldn't be checking generated object files and executables into source control. The same arguments apply to generated source.
If you need to look at the historical version of a generated file you can sync to the historical version of its sources and rebuild.
Checking generated files of any sort into source control is analogous to database denormalization. There are occasionally reasons to do this (typically for performance), but this should be done only with great care as it becomes much harder to maintain correctness and consistency once the data is denormalized.
I would say that you should avoid adding any generated code (or other artifacts) to source control. If the generated code is the same for the given input then you could just check out the versions you want to diff and generate the code for comparison.
I call the DRY principle. If you already have the "source files" in the repository which are used to generate these code files at build time, there is no need to have the same code committed "twice".
Also, you might avert some problems this way if for example the code generation breaks someday.
No, for three reasons.
Source code is everything necessary and sufficient to reproduce a snapshot of your application as of some current or previous point in time - nothing more and nothing less. Part of what this implies is that someone is responsible for everything checked in. Generally I'm happy to be responsible for the code I write, but not the code that's generated as a consequence of what I write.
I don't want someone to be tempted to try to shortcut a build from primary sources by using intermediate code that may or may not be current (and, more importantly, that I don't want to accept responsibility for). And it's too tempting for some people to get caught up in a meaningless process of debugging conflicts in intermediate code based on partial builds.
Once it's in source control, I accept responsibility for (a) it being there, (b) it being current, and (c) it being reliably integratable with everything else in there. That includes removing it when I'm no longer using it. The less of that responsibility the better.
I really don't think you should check them in.
Surely any change in the generated code is either going to be noise - changes between environments, or changes as a result of something else - e.g. a change in your DB. If your DB's creation scripts (or any other dependencies) are in source control then why do you need the generated scripts as well?
The general rule is no, but if it takes time to generate the code (because of DB access, web services, etc.) then you might want to save a cached version in the source control and save everyone the pain.
Your tooling also needs to be aware of this and handle checking out from source control when needed; too many tools decide to check out from source control without any reason.
A good tool will use the cached version without touching it (nor modifying the timestamps on the file).
Also you need to put big warning inside the generated code for people to not modify the file, a warning at the top is not enough, you have to repeat it every dozen lines.
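A sketch of that banner-stamping step, assuming your generator lets you post-process its output (all names are illustrative):

```python
# Stamp a DO-NOT-EDIT banner at the top of generated output and
# repeat it every dozen lines, as suggested above.
WARNING = "// GENERATED CODE - DO NOT EDIT. Changes will be overwritten.\n"

def stamp_warnings(lines, every=12):
    out = [WARNING]
    for i, line in enumerate(lines, start=1):
        out.append(line)
        if i % every == 0:
            out.append(WARNING)
    return "".join(out)

# Usage: open("dal.cs", "w").write(stamp_warnings(generated_lines))
```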
We don't store generated DB code either: since it is generated, you can get it at will at any given version from the source files. Storing it would be like storing bytecode or such.
Now, you need to ensure the code generator used at a given version is available! Newer versions can generate different code...
There is a special case where you want to check in your generated files: when you may need to build on systems where tools used to generate the other files aren't available. The classic example of this, and one I work with, is Lex and Yacc code. Because we develop a runtime system that has to build and run on a huge variety of platforms and architectures, we can only rely on target systems to have C and C++ compilers, not the tools necessary to generate the lexing/parsing code for our interface definition translator. Thus, when we change our grammars, we check in the generated code to parse it.
Leave it out.
If you're checking in generated files, you're doing something wrong. What's wrong may differ; it could be that your build process is inefficient, or something else, but I can't see it ever being a good idea. History should be associated with the source files, not the generated ones.
It just creates a headache for people who then end up trying to resolve differences, find the files that are no longer generated by the build and then delete them, etc.
A world of pain awaits those who check in generated files!
In some projects I add generated code to source control, but it really depends. My basic guideline is: if the generated code is an intrinsic part of the compiler, then I won't add it. If the generated code is from an external tool, such as SubSonic in this case, then I would add it to source control. If you periodically upgrade the component, then I want to know the changes in the generated source in case bugs or issues arise.
As far as generated code needing to be checked in, a worst-case scenario is manually diffing the files and reverting them if necessary. If you are using svn, you can add a pre-commit hook to deny a commit if the file hasn't really changed.
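A rough sketch of such a hook (Python, driving svnlook; the gen/ path and the rejection policy are illustrative assumptions):

```python
#!/usr/bin/env python
# SVN pre-commit hook sketch: reject commits that touch files under
# gen/ without actually changing their content. Non-zero exit denies.
import subprocess
import sys

repos, txn = sys.argv[1], sys.argv[2]

def svnlook(*args):
    return subprocess.run(["svnlook", *args],
                          capture_output=True, text=True).stdout

for line in svnlook("changed", repos, "--transaction", txn).splitlines():
    status, path = line.split(None, 1)
    if status == "U" and path.startswith("gen/"):   # illustrative path
        new = svnlook("cat", repos, path, "--transaction", txn)
        old = svnlook("cat", repos, path)           # content at HEAD
        if new == old:
            sys.stderr.write("No real change in generated file: %s\n" % path)
            sys.exit(1)
sys.exit(0)
```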
Arriving a bit late... anyway...
Would you put a compiler's intermediate files into version control?
In the case of code generation, by definition the source code is the input of the generator, while the generated code can be considered intermediate files between the "real" source and the built application.
So I would say: don't put generated code under version control, but do put the generator and its input under it.
Concretely, I work with a code generator I wrote: I never had to maintain the generated source code under version control. I would even say that once the generator reached a certain maturity level, I didn't have to inspect the contents of the generated code even though the input (for instance the model description) changed.
The job of configuration management (of which version control is just one part) is to be able to do the following:
Know which changes and bug fixes have gone into every delivered build.
Be able to reproduce exactly any delivered build, starting from the original source code. Automatically generated code does not count as "source code" regardless of the language.
The first one ensures that when you tell the client or end user "the bug you reported last week is fixed and the new feature has been added" they don't come back two hours later and say "no it hasn't". It also makes sure they don't say "Why is it doing X? We never asked for X".
The second one means that when the client or end user reports a bug in some version you issued a year ago, you can go back to that version, reproduce the bug, fix it, and prove that it was your fix that eliminated the bug, rather than some perturbation of the compiler and other fixes.
This means that your compiler, libraries etc also need to be part of CM.
So now to answer your question: if you can do all the above then you don't need to record any intermediate representations, because you are guaranteed to get the same answer anyway. If you can't do all the above then all bets are off because you can never guarantee to do the same thing twice and get the same answer. So you might as well put all your .o files under version control as well.
There are good arguments both for and against presented here.
For the record, I built the T4 generation system in Visual Studio, and our default out-of-the-box option causes generated code to be checked in. You have to work a bit harder if you prefer not to check in.
For me the key consideration is diffing the generated output when either the input or generator itself is updated.
If you don't have your output checked in, then you have to take a copy of all generated code before upgrading a generator or modifying input in order to be able to compare that with the output from the new version. I think this is a fairly tedious process, but with checked in output, it's a simple matter of diffing the new output against the repository.
At this point, it is reasonable to ask "Why do you care about changes in generated code?" (Especially as compared to object code.)
I believe there are a few key reasons, which come down to the current state of the art rather than any inherent problem.
You craft handwritten code that meshes tightly with generated code. That's not the case on the whole with obj files these days. When the generated code changes, it's sadly quite often the case that some handwritten code needs to change to match. Folks often don't observe a high degree of backwards compatibility with extensibility points in generated code.
Generated code simply changes its behavior. You wouldn't tolerate this from a compiler, but in fairness, an application-level code generator is targeting a different field of problem with a wider range of acceptable solutions. It's important to see if assumptions you made about previous behavior are now broken.
You just don't 100% trust the output of your generator from release to release. There's a lot of value to be had from generator tools even if they aren't built and maintained with the rigor of your compiler vendor. Release 1.0 might have been perfectly stable for your application, but maybe 1.1 has a few glitches for your use case now. Alternatively, you change input values and find that you are exercising a new piece of the generator that you hadn't used before - potentially you get surprised by the results.
Essentially all of these things come down to tool maturity - most business app code generators aren't close to the level that compilers or even lex/yacc-level tools have been for years.
Both sides have valid and reasonable arguments, and it's difficult to agree on something common. Version Control Systems (VCSs) track the files developers put into them, and assume that the files inside the VCS are hand-crafted by developers, and that developers are interested in the history of, and changes between, any revisions of those files. This assumption equates two concepts: "I want to get this file when I do a checkout" and "I am interested in the changes to this file."
Now, the arguments from both sides could be rephrased like this:
"I want to get all these generated files when I do a checkout, because I don't have the tool to generate them on this machine."
"I should not put them into the VCS, since I am not interested in the changes to this file."
Fortunately, it seems that the two requirements are not fundamentally in conflict. With some extension of current VCSs, it should be possible to have both. In other words, it's a false dilemma. If we ponder a while, it's not hard to realize that the problem stems from the assumption VCSs hold. VCSs should distinguish files which are hand-crafted by developers from files which are not hand-crafted by developers, but just happen to be inside the VCS. For the first category of files, which we usually call source files (code), VCSs have done a great job. For the latter category, VCSs have no such concept yet, as far as I know.
Summary
I will take git as one example to illustrate what I mean.
git status should not show generated files by default.
git commit should include generated files as snapshot.
git diff should not show generated files by default.
PS: Git hooks could be used as a workaround, but it would be great if git supported this natively. gitignore doesn't meet our requirement, for ignored files won't go into the VCS.
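For example, a pre-commit hook can regenerate and stage the generated files automatically, so every commit snapshots them without developers handling them by hand (the generator command below is a placeholder); diff noise can additionally be cut by marking the paths with the `-diff` attribute in .gitattributes:

```python
#!/usr/bin/env python3
# .git/hooks/pre-commit sketch: regenerate derived files and stage them,
# so commits always include a current snapshot. "codegen" stands in for
# whatever generator the project actually uses.
import subprocess

subprocess.check_call(["codegen", "model.xml", "-o", "generated/"])
subprocess.check_call(["git", "add", "generated/"])
```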
I would argue for. If you're using a continuous integration process that checks out the code, modifies the build number, builds the software and then tests it, then it's simpler and easier to just have that code as part of your repository.
Additionally, it's part and parcel of every "snapshot" that you take of your software repository. If it's part of the software, then it should be part of the repository.
It really depends. Ultimately, the goal is to be able to reproduce what you had, if need be. If you are able to regenerate your binaries exactly, there is no need to store them. But you need to remember that in order to recreate your stuff, you will probably need the exact configuration you built it with in the first place, and that means not only your source code, but also your build environment, your IDE, maybe even other libraries, generators or tools, in the exact configuration (versions) you used.
I have run into trouble in projects where we upgraded our build environment to newer versions, or even to another vendor's, and were unable to recreate the exact binaries we had before. This is a real pain when the binaries to be deployed depend on a kind of hash, especially in secured environments, and the recreated files somehow differ because of compiler upgrades or whatever.
So, would I store generated code? I would say no. The released binaries or deliverables, including the tools you produced them with, I would store. And then there is no need to store those in source control; just make a good backup of the files.
I (regretfully) wind up putting a lot of derived sources under source control because I work remotely with people who either can't be bothered to set up a proper build environment or who don't have the skills to set it up so that the derived sources are built exactly right. (And when it comes to Gnu autotools, I am one of those people myself! I can't work with three different systems each of which works with a different version of autotools—and only that version.)
This sort of difficulty probably applies more to part-time, volunteer, open-source projects than to paid projects where the person paying the bills can insist on a uniform build environment.
When you do this, you're basically committing to building the derived files only at one site, or only at properly configured sites. Your Makefiles (or whatever) should be set up to notice where they are running and should refuse to re-derive sources unless they know they are running at a safe build site.
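The same guard, sketched as a Python check rather than Makefile logic (the marker file and tool name are invented for illustration):

```python
# Refuse to re-derive sources unless this machine is a blessed build
# site: here, a provisioning marker exists and the required tool is
# actually installed.
import os
import shutil
import sys

def at_safe_build_site() -> bool:
    return (os.path.exists("/etc/blessed-build-site")      # hypothetical marker
            and shutil.which("autoreconf") is not None)    # required tool present

if not at_safe_build_site():
    sys.exit("Refusing to regenerate derived sources: not a configured build site.")
print("Safe build site; regenerating derived sources...")
```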
The correct answer is "It Depends". It depends upon what the client's needs are.
Even if you can roll back code to a particular release, can you stand up to any external audit without the generated files? You may still not be on firm ground. As devs, we need to consider not just 'noise', pain and disk space, but the fact that we are tasked with generating intellectual property, and there may be legal ramifications. Would you be able to prove to a judge that you're able to regenerate a web site exactly the way a customer saw it two years ago?
I'm not suggesting you save or don't save generated files; whichever way you decide, if you're not involving the Subject Matter Experts in the decision, you're probably wrong.
My two cents.
I would say that yes, you want to put it under source control. From a configuration management standpoint, EVERYTHING that is used to produce a software build needs to be controlled so that the build can be recreated. I understand that generated code can easily be recreated, but an argument can be made that it is not the same, since the date/timestamps will differ between the two builds. In some areas, such as government, this is often what's required.
In general, generated code need not be stored in source control because the revision history of this code can be traced by the revision history of the code that generated it!
However, it sounds like the OP is using the generated code as the data access layer of the application instead of manually writing one. In this case, I would change the build process and commit the code to source control, because it is a critical component of the runtime code. This also removes the dependency on the code generation tool from the build process, in case the developers need to use different versions of the tool for different branches.
It seems that the code only needs to be generated once, instead of on every build. When a developer needs to add/remove/change the way an object accesses the database, the code should be generated again, just like making manual modifications. This speeds up the build process, allows manual optimizations to be made to the data access layer, and retains the history of the data access layer in a simple manner.
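A sketch of that "regenerate only when the model changes" guard (paths and the generator command are hypothetical):

```python
# Skip code generation when the generated output is already newer than
# the model it is derived from, so routine builds don't touch it.
import os
import subprocess

MODEL = "db/schema.xml"          # hypothetical generator input
OUTPUT = "src/generated/dal.cs"  # hypothetical generated data access layer

def stale(src: str, dst: str) -> bool:
    return not os.path.exists(dst) or os.path.getmtime(src) > os.path.getmtime(dst)

if stale(MODEL, OUTPUT):
    subprocess.check_call(["codegen", MODEL, "-o", OUTPUT])  # placeholder command
else:
    print("Generated code is up to date; skipping generation.")
```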
If it is part of the source code then it should be put in source control regardless of who or what generates it. You want your source control to reflect the current state of your system without having to regenerate it.
Absolutely have the generated code in source control, for many reasons. I'm reiterating what a lot of people have already said, but some reasons I'd do it are
With codefiles in source control, you'll potentially be able to compile the code without using your Visual Studio pre-build step.
When you're doing a full comparison between two versions, it would be nice to know if the generated code changed between those two tags, without having to manually check it.
If the code generator itself changes, then you'll want to make sure that the generated code changes appropriately; i.e., if your generator changes but the output isn't supposed to change, then when you go to commit your code there should be no differences between what was previously generated and what's generated now (see the sketch below).
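That check is easy to automate once the output is in the repository; a sketch, with the generator command again a placeholder:

```python
# After upgrading the generator, regenerate and let version control
# prove whether the output actually changed.
import subprocess
import sys

subprocess.check_call(["codegen", "model.xml", "-o", "generated/"])  # placeholder
diff = subprocess.run(["git", "diff", "--stat", "--", "generated/"],
                      capture_output=True, text=True).stdout
if diff:
    sys.exit("Generator output changed:\n" + diff)
print("Generator change is output-neutral.")
```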
I would leave generated files out of a source tree, but put them in a separate build tree.
e.g. workflow is
checkin/out/modify/merge source normally (w/o any generated files)
At appropriate occasions, check out source tree into a clean build tree
After a build, check in all "important" files ("real" source files, executables + generated source files) that must be present for auditing/regulatory purposes. This gives you a history of all appropriate generated code + executables + whatever, at time increments that are related to releases/testing snapshots, etc., and decoupled from day-to-day development.
There are probably good ways in Subversion/Mercurial/Git/etc. to tie the history of the real source files in both places together.
Looks like there are very strong and convincing opinions on both sides. I would recommend reading all the top voted answers, and then deciding what arguments apply to your specific case.

How do you organize your temporary workfiles?

I do a lot of bugfixing and implementing of new features for several different customers. These customers all report their bugs, change requests and new feature requests into our Trac system.
Sometimes these requests result in me creating some SQL change scripts; sometimes there are Excel documents or Access databases with test data, Word documents from the customer, and so on. A lot of files are used to fix one ticket and can then be deleted when the ticket is closed.
I usually do this by creating folders in the filesystem like this: /customerXX/TicketNNNNN and then just dumping everything in there.
How do you organize your workfiles? Have you found some fantastic tool to do this?
I would say for scripts or files that are related to a particular ticket, the best thing to do would be to attach the file to that ticket in your issue tracking software - almost all issue trackers that I've worked with will allow you to do this. That way, you can look back and a) see exactly what you did in case something goes wrong, or b) do exactly the same thing if the issue comes up again later. That's almost certainly the best place to keep files with extra info from the customer, too (or at least the first place most people will look).
For frequently re-used scripts that aren't specific to a particular ticket, I would create a scripts/ or bin/ directory in the associated project, and keep them in there.
I also have a small handful of useful files that I keep in src/misc/ off my home directory, with things like SQL queries to get readable "explain" output out of Oracle and such, that aren't specific to any particular project. The number of these is small enough that subdirectories aren't necessary, though - I suspect if you ended up with a large number of these files, many of them could/should be moved to specific projects or your issue tracking system.
JIRA has been quite helpful for this at my site. It supports issue tracking, file attachments,and you can easily customize and categorize your projects and issues.
I use FogBugz and I add all files to the case. I believe that no matter what application you use, the important thing is to keep these files for future reference. If your bug-tracking tool does not let you attach files, then add the files to version control.
We use CaWeb4 and find it very easy to use for our bug tracking.

Version Control for Graphics

Say a development team includes (or makes use of) graphic artists who create all the images that go into a product. Such things include icons, bitmaps, window backgrounds, button images, animations, etc.
Obviously, everything needed to build a piece of software should be under some form of version control. But most version control systems for developers are designed primarily for text-based information. Should the graphics people use the same version-control system and repository that the coders do? If not, what should they use, and what is the best way to keep everything synchronized?
Yes, having art assets in version control is very useful. You get the ability to track history, roll back changes, and you have a single source to do backups with. Keep in mind that art assets are MUCH larger so your server needs to have lots of disk space & network bandwidth.
I've had success using Perforce on very large projects (100+ GB); however, we had to wrap access to the version control server with something a little more artist-friendly.
I've heard some good things about Alienbrain as well, it does seem to have a very slick UI.
GitHub recently introduced "image view modes", take a look: https://github.com/blog/817-behold-image-view-modes.
We, too, just put the binaries in source control. We use Git, but it would apply just as well to Subversion.
One suggestion I have is to use SVGs where possible, because you can see actual differences. With binaries (most other image formats), the best you can get is a version history.
A lot of the graphics type people will want something more sophisticated than subversion. While it's good for version control, they will want a content management system that allows cross-referencing of assets, tagging, thumbnails and that sort of thing (as well as versioning).
TortoiseSVN can show image revisions side-by-side, which is really useful. I've used it with different teams with a great degree of success. The artists loved having the ability to roll back things (after they got used to the concepts). It does take a lot of space, though.
Interesting question. I don't have a bunch of experience working directly with designers on a project. When I have, it's been through a contractual sort of agreement where they "delivered" a design. I have done some of my own design work for both web sites and desktop applications, and though I have not used source control in the past, I am in the process of implementing SVN for my own use as I am starting to do some paid freelance work. I intend to utilize version/source control precisely the way I would with source code. It just becomes another folder in the project trunk. The way I have worked without source control is to create an assets folder in which all media files that are equivalents of source code reside. I like to think of Photoshop PSD's as graphics source code while the JPEG output for a web site or otherwise is the compiled version.
In the case of working with designers, which is a distinct possibility I face in the near future, I'd like to make an attempt to have them "check-in" their different versions of their source files on a regular basis. I'll be curious to read what others with some experience will say in response to this.
We use subversion. Just place a folder under /trunk/docs for comps and have designers check out and commit to that folder. Works like a champ.
#lomaxx TortoiseSVN includes a program called TortoiseIDiff which looks to be a diff for images. I haven't used it but looks intriguing.
I would definitely put the graphics under version control. The diff might not be very useful from within a diff tool like diffmerge, but you can still checkout two versions of the graphic and view them side by side to see the differences.
I don't see any reason why the resultant graphics shouldn't be kept in the same version control system that the coders use. However, when you're creating graphics using PSD files or PDN files, you might want to create a separate repository for those, as they have a different context to the actual end JPEG or GIF that is produced and deployed with the developed application.
In my opinion, Pixelapse combined with a backup solution is the best version control software for graphics that I've found thus far. It supports Adobe files and a bunch of normal raster images. It has version-by-version preview. It autosaves when the files update (on save). It works like Dropbox but has a great web interface.
You can use it in teams and share projects with different people. It also supports unlimited reviewers, which is great for design agencies. And if you want, you can publicly collaborate on projects that are "open".
Unfortunately you can't have a local Pixelapse server, so for backup my current setup is that I have the Pixelapse folder (like a Dropbox folder) inside a git repo for snapshot creation.
With respect to diff and merging, I think the version control is more critical for graphics and media elements. If you think about it, most designers are going to be the sole owners of a file -- at least in the case of graphics -- or at least I would think that'd be the case. I'd be curious to hear from a designer.
#Damian - Good point about the tagging and cross-referencing. That's true; while I haven't worked with many designers on a software development project, I have worked for a company that had a design department and know that this is an issue. Designers are still (perpetually) looking for the perfect system to handle this sort of thing. I think this is more suited to a design department that needs shared access, searching, versioning, etc. for all assets -- where there is a business incentive not to reinvent the wheel wherever/whenever possible. I don't think it would apply in a project-oriented setting, as tagging and cross-referencing wouldn't be quite as applicable.
We keep the binary files and images in revision control, using Perforce. It's great!
We keep a lot of art assets, and it scales well for lots of large files. It recognizes binary files, the ones that can't be diffed, and stores them as full file copies in the back end.
It has P4V (cross-platform visual browser), and a thumbnail system so image files can be seen in the browser.
You might want to take a look at Boar: "Simple version control and backup for photos, videos and other binary files". It can handle binary files of any size. http://code.google.com/p/boar/
A free and slightly wonky solution is Adobe Version Cue; it comes with the Adobe Suites up to CS4 and is easy to install and maintain. It offers user-level control and is artist-friendly. Adobe has discontinued support for it, though, which is a shame. Adobe Bridge acts as the client between the user and the Version Cue server. If used properly, it's an inexpensive solution to version control. I use CS3 Version Cue with CS3 Bridge. Works great for small teams.