How to proceed as a module author after a trial release - Perl

The problem occurred to me while using Perl 5 for programming and Dist::Zilla (dzil) for deploying to CPAN. But the question is probably a general one, i.e. independent of the programming language.
Problem description
Assume I'm releasing a new version 0.1 of the module Foo with a new feature called bar. I would like to release it as a trial release (dzil build --trial), to be able to get feedback on the change without forcing it onto unaware users. When I do so using my toolchain, the Changes file will look something like this:
Revision history for Foo

0.1 2019-06-19 17:49:09+02:00 Continent/City (TRIAL RELEASE)
    - added bar
The packed distribution (for upload to CPAN) is a file named like so: Foo-0.1-TRIAL.tar.gz
Up to here, everything was easy. But from now on, I'm unsure how to react to upcoming events:
Feedback is positive. No changes needed. Bring the release into production.
What should the new (but unaltered) release look like?
Should I do a release with the same version or bump the version? Should I add a new line to the Changes file (e.g. "bring trial 0.1 into production") or alter the existing one (meaning just removing the (TRIAL RELEASE))?
Feedback is negative. Changes needed. Add feature/change baz.
Same questions here. The new change surely needs a mention in Changes. But again: bump the version, or leave it be? Make a new entry in Changes or alter the existing one?
This is one of those questions where I feel like: it probably doesn't matter too much, but there must be a "best practice". And in the long term it kind of pays off to do it right.
Question
How should I continue with version numbering and the Changes file after doing a trial release, specifically to CPAN?
(General answers that are independent of Perl and CPAN are also of interest.)

My general advice is: if the TRIAL turns out to be a resounding success and there are zero changes aside from releasing the same thing without --trial, you can reuse the version, since at runtime the two tarballs will have identical behavior. If you have to make any changes that may affect users or tests, bump the version. If you're unsure, bump the version -- versions are free. Remember that without a distinct version, other distributions can't distinguish the changes in their dependencies, CPAN clients can't request a specific version easily, etc.
As far as the changelog, it may help that MetaCPAN now includes the changes for any trial releases leading up to the stable release in its changes preview, provided it can parse your changelog and the trial releases are marked as such. (example) So I would just include an entry for each distinct version.
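For illustration, here is how the Changes file might read once a bumped stable release follows the trial (a hypothetical sketch based on the question's example; exact dates and layout are up to your toolchain):

Revision history for Foo

0.2 2019-06-26 17:00:00+02:00 Continent/City
    - added baz
    - promote trial 0.1 to stable

0.1 2019-06-19 17:49:09+02:00 Continent/City (TRIAL RELEASE)
    - added bar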

Related

Cannot find unitypackage mentioned in Getting Started with MRTK

I am trying to download the MRTK by following https://microsoft.github.io/MixedRealityToolkit-Unity/Documentation/GettingStartedWithTheMRTK.html. But in the Assets folder on GitHub for MRTK, I cannot find the two packages mentioned below:
Microsoft.MixedRealityToolkit.Unity.Examples.unitypackage
Microsoft.MixedRealityToolkit.Unity.Foundation.unitypackage
Did I miss anything simple?
The exact naming of the packages is going to be different from release to release because the version number increases each time a new version is published.
The docs will say something like
"Microsoft.MixedRealityToolkit.Unity.Examples.unitypackage"
But the latest right now actually is named:
"Microsoft.MixedReality.Toolkit.Unity.Examples-v2.0.0-RC2.1.unitypackage"
It's kinda done this way to reduce the number of things that have to get changed with each release (making sure that every single mention of a version number stays up to date is somewhat risk prone in terms of missing things, so when possible, if we can just say "grab the latest package" or "grab the thing that is the examples package", it also helps to reduce the number of version mismatches out there).
So to be super clear here, basically go to the releases page:
https://github.com/microsoft/MixedRealityToolkit-Unity/releases
Scroll to the bottom of the latest release, expand the Assets expando, and get the two unitypackages:
https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Examples-v2.0.0-RC2.1.unitypackage
https://github.com/microsoft/MixedRealityToolkit-Unity/releases/download/v2.0.0-RC2.1/Microsoft.MixedReality.Toolkit.Unity.Foundation-v2.0.0-RC2.1.unitypackage
Would it be helpful to document the packages as
Microsoft.MixedRealityToolkit.Unity.Foundation[Version].unitypackage?
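If you'd rather script the "grab the latest package" step than click through the releases page, here is a minimal Python sketch against GitHub's public releases API (the endpoint is standard GitHub; the asset-name filter is an assumption based on the naming above):

import json
import urllib.request

# Ask GitHub for the latest published release of the MRTK repository.
url = "https://api.github.com/repos/microsoft/MixedRealityToolkit-Unity/releases/latest"
with urllib.request.urlopen(url) as resp:
    release = json.load(resp)

# Print download URLs for the Examples and Foundation unitypackages,
# whatever version suffix they carry in this release.
for asset in release["assets"]:
    name = asset["name"]
    if name.endswith(".unitypackage") and ("Examples" in name or "Foundation" in name):
        print(asset["browser_download_url"])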

what's the meaning of the word 'next' in a filename?

For example: easeljs-NEXT.js
I sense that the NEXT in caps has meaning, but don't know what.
I've tried searching on Bing, for example "what does NEXT mean in a filename".
Also tried a similar search here in stackoverflow with no result.
CreateJS contributor here.
The "NEXT" naming is the file convention we have chosen for the upcoming/in-progress version of CreateJS libraries. Typically, we commit changes/fixes over a period of time, and then eventually tag a new version that gets put on the CDN and (ideally/eventually) included in an updated version of Adobe Animate.
Due to our testing process, and inter-reliance across libraries (Preload, Sound, Easel, Tween), we are pretty conservative when it comes to making official builds. This is our way of making sure there are easy-to-use, compiled builds in GitHub with the latest features, fixes, and documentation. They aren't "official releases", as they might not play well with other content.
Releases:
easeljs.js (with comments and whitespace, good for testing)
easeljs.min.js (minified)
Upcoming/Latest:
easeljs-NEXT.js
easeljs-NEXT.min.js
Prior to version 1.0, we used version names:
easeljs-0.6.2.min.js
easeljs-0.6.2.combined.js (the old testing version, not included on CDN)
You can also find "Combined" scripts on the CDN (and other CDNs) that have all 4 libs included. We didn't build NEXT versions of these, since they would be prone to issues:
1.0.0/createjs.js
1.0.0/createjs.min.js
Again, before 1.0, we used a version, which for combined libs was a release date, since the actual version numbers of the libs didn't all align.
createjs-2015.11.26.min.js
createjs-2015.11.26.combined.js
We are working on improving our release schedule, so there are more official releases than in the past. Hope that provides some insight!

Recommendations for migrating custom code mods to new major release of open source software?

I've got a (dirty) production installation of Simple Machines Forum (SMF 1.1.13). It was a clean install, once... about five years, twenty updates, and 40 mods ago. Not to mention the custom code that was patched directly into the code base. This started as a for-fun side project, and didn't have any code management practices at the get-go.
Now SMF 2 is (getting closer to) going live, and I want to upgrade. But without leaving the custom features behind.
Keep reading, this is a general software management question, not an SMF support question...
I'm trying to figure out the best way to port the custom features into the new code branch.
In some cases, the custom 1.1.x functionality will already exist in 2.0. Yay, no work for me!
In some cases, there will be mod packages versioned for 2.0, and I can just install them directly on a clean SMF 2 build. Yay, minimal work for me!
In some cases, the code port will be fairly straightforward between the two versions (e.g. a few small changes in queries or global variable construction). (I've ported a few features/mods back from 2.0 to 1.1.x, so I'm starting to get familiar with it.)
In some cases, I'm just going to have to redevelop the features mostly from scratch.
Those last two options are gonna be hard to manage.
Any suggestions on how to port a large number of changes from one branch to another?
When it's not my own in-house code, that is. Here's my initial plan:
Diff between a clean version of 1.1.x and my "dirty" production code
Map each line diff to a feature ("That code update is the custom tagging feature, gonna have to port it line by line, and that one over there is the gallery, I can probably install an updated mod.") This would be SO MUCH EASIER if there were a diff tool that generated a consolidated report, instead of having to go through scores of files one at a time. Google and SO searches didn't find a tool like that -- is there one?
Install a clean 2.0 branch
Install the available updated mods
Roll up my sleeves and go through my diffs feature by feature (this is why I need the consolidated diff report; it would be hell to do page by page) and build them back in.
Any better ideas? (Pointers to release management info are welcome, though of course with the caveat that it's not actually my code so I have limited control.)
Otherwise? I fear my options are ditch the custom features (not really feasible) or stay on the old branch. Both suck. Help!
tl;dr: Point me to a diff tool that will do consolidated file-by-file diff reports for entire directories. And/or help me figure out an easier way to migrate my custom code.
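For the consolidated-report part, a small script may do: here's a minimal Python sketch (directory names are placeholders) that walks two trees with filecmp and writes a single unified-diff report covering every changed file:

import sys
import filecmp
import difflib
from pathlib import Path

def report(clean, dirty, out=sys.stdout):
    """Recursively compare two trees, emitting one consolidated unified diff."""
    cmp = filecmp.dircmp(clean, dirty)
    for name in cmp.diff_files:
        a = (Path(clean) / name).read_text(errors="replace").splitlines(keepends=True)
        b = (Path(dirty) / name).read_text(errors="replace").splitlines(keepends=True)
        out.writelines(difflib.unified_diff(
            a, b,
            fromfile=str(Path(clean) / name),
            tofile=str(Path(dirty) / name)))
    for sub in cmp.common_dirs:
        report(str(Path(clean) / sub), str(Path(dirty) / sub), out)

# Placeholder directory names for a clean checkout vs. the production tree.
report("smf-1.1.13-clean", "smf-1.1.13-production")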
Your plan is generally the most practical approach, although I would say that you're heading in the wrong direction by looking for code-level differences. With no version control over the project lifetime and with no concise record of applied changes, in examining differences at the code level you are looking for a level of detail that may not give you the information you need to apply the same changes to a new implementation.
Move away from thinking of code-level changes and consider application-level feature and behavioural changes. What features have your changes introduced? In what ways does your application now behave differently due to your changes?
You say that there have been many unversioned changes over a long period - you will fail to find all the changes no matter what tools you have at your disposal, and you will still need to consider the feature and behavioural changes that exist in order to fully represent the same features and behaviours in any upgraded implementation.
You know your application well, you use it and you do appreciate the feature changes that you have introduced even if you may not realise this.
Install a vanilla 2.0 release
Apply all appropriate mods
Apply relevant styling
Use the new system, note the differences in behaviour and develop from this a set of required features
Your feature set does not need to be complete - don't stall at the stage of trying to figure out all required changes, this will take too long.
Apply features gathered from most recent feedback (ideally through revertable mods)
Note the differences in behaviour and develop from this a set of required features
Repeat

Should I store generated code in source control

This is a debate I'm taking a part in. I would like to get more opinions and points of view.
We have some classes that are generated at build time to handle DB operations (in this specific case, with SubSonic, but I don't think it is very important for the question). The generation is set as a pre-build step in Visual Studio. So every time a developer (or the official build process) runs a build, these classes are generated and then compiled into the project.
Now some people claim that having these classes saved in source control could cause confusion, in case the code you get doesn't match what would have been generated in your own environment.
I would like to have a way to trace back the history of the code, even if it is usually treated as a black box.
Any arguments or counter arguments?
UPDATE: I asked this question since I really believed there is one definitive answer. Looking at all the responses, I could say with high level of certainty, that there is no such answer. The decision should be made based on more than one parameter. Reading the answers below could provide a very good guideline to the types of questions you should be asking yourself when having to decide on this issue.
I won't select an accepted answer at this point for the reasons mentioned above.
Saving it in source control is more trouble than it's worth.
You have to do a commit every time you do a build for it to be of any value.
Generally, where I work, we leave generated code (IDL, JAXB stuff, etc.) outside source control, and it's never been a problem.
Put it in source code control. The advantage of having the history of everything you write available for future developers outweighs the minor pain of occasionally rebuilding after a sync.
Every time I want to show changes to a source tree on my own personal repo, all the 'generated files' will show up as having changed and need committing.
I would prefer to have a cleaner list of modifications that only include real updates that were performed, and not auto-generated changes.
Leave them out, and then after a build, add an 'ignore' on each of the generated files.
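With Git, for instance, that "ignore" is just a couple of patterns in .gitignore (the paths here are hypothetical):

# .gitignore -- keep generated sources out of status and commits
build/
src/generated/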
Look at it this way: do you check your object files into source control? Generated source files are build artifacts just like object files, libraries and executables. They should be treated the same. Most would argue that you shouldn't be checking generated object files and executables into source control. The same arguments apply to generated source.
If you need to look at the historical version of a generated file you can sync to the historical version of its sources and rebuild.
Checking generated files of any sort into source control is analogous to database denormalization. There are occasionally reasons to do this (typically for performance), but this should be done only with great care as it becomes much harder to maintain correctness and consistency once the data is denormalized.
I would say that you should avoid adding any generated code (or other artifacts) to source control. If the generated code is the same for the given input then you could just check out the versions you want to diff and generate the code for comparison.
I call the DRY principle. If you already have the "source files" in the repository which are used to generate these code files at build time, there is no need to have the same code committed "twice".
Also, you might avert some problems this way if for example the code generation breaks someday.
No, for three reasons.
Source code is everything necessary and sufficient to reproduce a snapshot of your application as of some current or previous point in time - nothing more and nothing less. Part of what this implies is that someone is responsible for everything checked in. Generally I'm happy to be responsible for the code I write, but not the code that's generated as a consequence of what I write.
I don't want someone to be tempted to try to shortcut a build from primary sources by using intermediate code that may or may not be current (and, more importantly, that I don't want to accept responsibility for). And it's too tempting for some people to get caught up in a meaningless process of debugging conflicts in intermediate code based on partial builds.
Once it's in source control, I accept responsibility for a. it being there, b. it being current, and c. it being reliably integratable with everything else in there. That includes removing it when I'm no longer using it. The less of that responsibility the better.
I really don't think you should check them in.
Surely any change in the generated code is either going to be noise - changes between environments, or changes as a result of something else - e.g. a change in your DB. If your DB's creation scripts (or any other dependencies) are in source control then why do you need the generated scripts as well?
The general rule is no, but if it takes time to generate the code (because of DB access, web services, etc.) then you might want to save a cached version in the source control and save everyone the pain.
Your tooling also needs to be aware of this and handle checking out from source control when needed; too many tools decide to check out from source control without any reason.
A good tool will use the cached version without touching it (nor modifying the timestamps on the file).
Also, you need to put a big warning inside the generated code telling people not to modify the file; a warning at the top is not enough, you have to repeat it every dozen lines.
We don't store generated DB code either: since it is generated, you can get it at will at any given version from the source files. Storing it would be like storing bytecode or such.
Now, you need to ensure the code generator used at a given version is available! Newer versions can generate different code...
There is a special case where you want to check in your generated files: when you may need to build on systems where tools used to generate the other files aren't available. The classic example of this, and one I work with, is Lex and Yacc code. Because we develop a runtime system that has to build and run on a huge variety of platforms and architectures, we can only rely on target systems to have C and C++ compilers, not the tools necessary to generate the lexing/parsing code for our interface definition translator. Thus, when we change our grammars, we check in the generated code to parse it.
Leave it out.
If you're checking in generated files you're doing something wrong. What's wrong may differ, it could be that your build process is inefficient, or something else, but I can't see it ever being a good idea. History should be associated with the source files, not the generated ones.
It just creates a headache for people who then end up trying to resolve differences, find the files that are no longer generated by the build and then delete them, etc.
A world of pain awaits those who check in generated files!
In some projects I add generated code to source control, but it really depends. My basic guideline is: if the generated code is an intrinsic part of the compiler, then I won't add it. If the generated code is from an external tool, such as SubSonic in this case, then I would add it to source control. If you periodically upgrade the component, then I want to know the changes in the generated source in case bugs or issues arise.
As far as generated code needing to be checked in, a worst case scenario is manually differencing the files and reverting the files if necessary. If you are using svn, you can add a pre-commit hook in svn to deny a commit if the file hasn't really changed.
Arriving a bit late ... anyway ...
Would you put a compiler's intermediate files into version control?
In the case of code generation, by definition the source code is the input of the generator, while the generated code can be considered intermediate files between the "real" source and the built application.
So I would say: don't put generated code under version control, but the generator and its input.
Concretely, I work with a code generator I wrote: I never had to keep the generated source code under version control. I would even say that once the generator reached a certain maturity level, I didn't have to inspect the contents of the generated code even though the input (for instance the model description) changed.
The job of configuration management (of which version control is just one part) is to be able to do the following:
Know which changes and bug fixes have gone into every delivered build.
Be able to reproduce exactly any delivered build, starting from the original source code. Automatically generated code does not count as "source code" regardless of the language.
The first one ensures that when you tell the client or end user "the bug you reported last week is fixed and the new feature has been added" they don't come back two hours later and say "no it hasn't". It also makes sure they don't say "Why is it doing X? We never asked for X".
The second one means that when the client or end user reports a bug in some version you issued a year ago, you can go back to that version, reproduce the bug, fix it, and prove that your fix eliminated the bug, rather than some perturbation of compiler and other fixes.
This means that your compiler, libraries etc also need to be part of CM.
So now to answer your question: if you can do all the above then you don't need to record any intermediate representations, because you are guaranteed to get the same answer anyway. If you can't do all the above then all bets are off because you can never guarantee to do the same thing twice and get the same answer. So you might as well put all your .o files under version control as well.
There are good arguments both for and against presented here.
For the record, I built the T4 generation system in Visual Studio, and our default out-of-the-box option causes generated code to be checked in. You have to work a bit harder if you prefer not to check in.
For me the key consideration is diffing the generated output when either the input or generator itself is updated.
If you don't have your output checked in, then you have to take a copy of all generated code before upgrading a generator or modifying input in order to be able to compare that with the output from the new version. I think this is a fairly tedious process, but with checked in output, it's a simple matter of diffing the new output against the repository.
At this point, it is reasonable to ask "Why do you care about changes in generated code?" (Especially as compared to object code.)
I believe there are a few key reasons, which come down to the current state of the art rather than any inherent problem.
You craft handwritten code that meshes tightly with generated code. That's not the case on the whole with obj files these days. When the generated code changes, it's sadly quite often the case that some handwritten code needs to change to match. Folks often don't observe a high degree of backwards compatibility with extensibility points in generated code.
Generated code simply changes its behavior. You wouldn't tolerate this from a compiler, but in fairness, an application-level code generator is targeting a different field of problem with a wider range of acceptable solutions. It's important to see if assumptions you made about previous behavior are now broken.
You just don't 100% trust the output of your generator from release to release. There's a lot of value to be had from generator tools even if they aren't built and maintained with the rigor of your compiler vendor. Release 1.0 might have been perfectly stable for your application, but maybe 1.1 has a few glitches for your use case now. Alternatively, you change input values and find that you are exercising a new piece of the generator that you hadn't used before - potentially you get surprised by the results.
Essentially all of these things come down to tool maturity - most business app code generators aren't close to the level that compilers or even lex/yacc-level tools have been for years.
Both sides have valid and reasonable arguments, and it's difficult to agree on something common. Version Control Systems (VCSs) track the files developers put into them, and assume that the files inside the VCS are hand crafted by developers, and that developers are interested in the history of, and changes between, any revisions of the files. This assumption equates two concepts: "I want to get this file when I do a checkout" and "I am interested in the changes to this file."
Now, the arguments from both sides could be rephrased like this:
"I want to get all these generated files when I do a checkout, because I don't have the tool to generate them on this machine."
"I should not put them into the VCS, since I am not interested in the changes to this file."
Fortunately, it seems that the two requirements are not fundamentally conflicting. With some extension of current VCSs, it should be possible to have both. In other words, it's a false dilemma. If we ponder a while, it's not hard to realize that the problem stems from the assumption VCSs hold. VCSs should distinguish the files which are hand crafted by developers from the files which are not hand crafted, but just happen to be inside the VCS. For the first category of files, which we usually call source files (code), VCSs have done a great job. For the latter category, VCSs have no such concept yet, as far as I know.
Summary
I will take git as one example to illustrate what I mean.
git status should not show generated files by default.
git commit should include generated files as snapshot.
git diff should not show generated files by default.
PS
Git hooks could be used as a workaround, but it would be great if git supported this natively. gitignore doesn't meet our requirement, for ignored files won't go into the VCS.
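As a partial workaround with today's Git, files you do decide to commit can at least be hidden from diffs by unsetting the diff attribute in .gitattributes (the path pattern is a placeholder):

# .gitattributes -- commit generated files, but treat them as opaque in diffs
src/generated/** -diff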
I would argue for. If you're using a continuous integration process that checks out the code, modifies the build number, builds the software and then tests it, then it's simpler and easier to just have that code as part of your repository.
Additionally, it's part and parcel of every "snapshot" that you take of your software repository. If it's part of the software, then it should be part of the repository.
It really depends. Ultimately, the goal is to be able to reproduce what you had if need be. If you are able to regenerate your binaries exactly, there is no need to store them. But you need to remember that in order to recreate your stuff, you will probably need the exact configuration you did it with in the first place, and that means not only your source code, but also your build environment, your IDE, maybe even other libraries, generators or other stuff, in the exact configuration (versions) you used.
I have run into trouble in projects where we upgraded our build environment to newer versions, or even to another vendor's, and we were unable to recreate the exact binaries we had before. This is a real pain when the binaries to be deployed depend on a kind of hash, especially in secured environments, and the recreated files somehow differ because of compiler upgrades or whatever.
So, would I store generated code? I would say no. What I would store are the released binaries or deliverables, including the tools you produced them with. And then there is no need to keep them in source control; just make a good backup of those files.
I (regretfully) wind up putting a lot of derived sources under source control because I work remotely with people who either can't be bothered to set up a proper build environment or who don't have the skills to set it up so that the derived sources are built exactly right. (And when it comes to Gnu autotools, I am one of those people myself! I can't work with three different systems each of which works with a different version of autotools—and only that version.)
This sort of difficulty probably applies more to part-time, volunteer, open-source projects than to paid projects where the person paying the bills can insist on a uniform build environment.
When you do this, you're basically committing to building the derived files only at one site, or only at properly configured sites. Your Makefiles (or whatever) should be set up to notice where they are running and should refuse to re-derive sources unless they know they are running at a safe build site.
The correct answer is "It Depends". It depends upon what the client's needs are.
Even if you can roll back code to a particular release and stand up to any external audits without it, you're still not necessarily on firm ground. As devs, we need to consider not just 'noise', pain, and disk space, but the fact that we are tasked with the role of generating intellectual property, and there may be legal ramifications. Would you be able to prove to a judge that you're able to regenerate a web site exactly the way a customer saw it two years ago?
I'm not suggesting you save or don't save gen'd files. Whichever way you decide, if you're not involving the Subject Matter Experts in the decision, you're probably wrong.
My two cents.
I would say that yes, you want to put it under source control. From a configuration management standpoint, EVERYTHING that is used to produce a software build needs to be controlled so that it can be recreated. I understand that generated code can easily be recreated, but an argument can be made that it is not the same, since the date/timestamps will differ between the two builds. In some areas, such as government, this is often what's required.
In general, generated code need not be stored in source control because the revision history of this code can be traced by the revision history of the code that generated it!
However, it sounds like the OP is using the generated code as the data access layer of the application instead of writing one manually. In this case, I would change the build process and commit the code to source control, because it is a critical component of the runtime code. This also removes the dependency on the code generation tool from the build process, in case the developers need to use a different version of the tool for different branches.
It seems that the code only needs to be generated once instead of every build. When a developer needs to add/remove/change the way an object accesses the database, the code should be generated again, just like making manual modifications. This speeds up the build process, allows manual optimizations to be made to the data access layer, and history of the data access layer is retained in a simple manner.
If it is part of the source code then it should be put in source control regardless of who or what generates it. You want your source control to reflect the current state of your system without having to regenerate it.
Absolutely have the generated code in source control, for many reasons. I'm reiterating what a lot of people have already said, but some reasons I'd do it are
With codefiles in source control, you'll potentially be able to compile the code without using your Visual Studio pre-build step.
When you're doing a full comparison between two versions, it would be nice to know if the generated code changed between those two tags, without having to manually check it.
If the code generator itself changes, then you'll want to make sure that the generated code changes appropriately. i.e. If your generator changes, but the output isn't supposed to change, then when you go to commit your code, there will be no differences between what was previously generated and what's generated now.
I would leave generated files out of the source tree, but put them in a separate build tree.
e.g. workflow is
checkin/out/modify/merge source normally (w/o any generated files)
At appropriate occasions, check out source tree into a clean build tree
After a build, checkin all "important" files ("real" source files, executables + generated source file) that must be present for auditing/regulatory purposes. This gives you a history of all appropriate generated code+executables+whatever, at time increments that are related to releases / testing snapshots, etc. and decoupled from day-to-day development.
There are probably good ways in Subversion/Mercurial/Git/etc. to tie the history of the real source files in both places together.
Looks like there are very strong and convincing opinions on both sides. I would recommend reading all the top voted answers, and then deciding what arguments apply to your specific case.

What is source code control for? [closed]

I read all over the Internet (various sites and blogs) about version control. How great it is and how all developers NEED to use it because it is very useful.
Here is the question: do I really need this? I'm a front-end developer (usually just HTML/CSS/JavaScript) and I NEVER had a problem like "Wow, I lost my files from yesterday!".
I've tried to use it, installed Subversion and TortoiseSVN, I understand the concept behind version control but... I can't use it (weird for me).
OK, so... Is that bad? I usually work alone (freelancer) and I've had no client that asked me to use Subversion (but it's never too late for this, right?). So, should I start and struggle to learn to use Subversion (or something similar)? Or is it just a waste of time?
Related question: Good excuses NOT to use version control.
Here's a scenario that may illustrate the usefulness of source control even if you work alone.
Your client asks you to implement an ambitious modification to the website. It'll take you a couple of weeks, and involve edits to many pages. You get to work.
You're 50% done with this task when the client calls and tells you to drop what you're doing to make an urgent but more minor change to the site. You're not done with the larger task, so it's not ready to go live, and the client can't wait for the smaller change. But he also wants the minor change to be merged into your work for the larger change.
Maybe you are working on the large task in a separate folder containing a copy of the website. Now you have to figure out how to do the minor change in a way that can be deployed quickly. You work furiously and get it done. The client calls back with further refinement requests. You do this too and deploy it. All is well.
Now you have to merge it into the work in progress for the major change. What did you change for the urgent work? You were working too fast to keep notes. And you can't just diff the two directories easily now that both have changes relative to the baseline you started from.
The above scenario shows that source control can be a great tool, even if you work solo.
You can use branches to work on longer-term tasks and then merge the branch back into the main line when it's done.
You can compare whole sets of files to other branches or to past revisions to see what's different.
You can track work over time (which is great for reporting and invoicing by the way).
You can recover any revision of any file based on date or on a milestone that you defined.
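Translated into commands, the scenario above is only a handful of Git operations (branch names are illustrative):

git checkout -b big-redesign        # long-running work on the major change
git commit -am "redesign: halfway there"
git checkout master                 # drop back to the live site's baseline
git checkout -b urgent-fix          # make and deploy the minor change
git commit -am "urgent client fix"
git checkout big-redesign
git merge urgent-fix                # fold the urgent change into the big work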
For solo work, Subversion or Git is recommended. Anyone is free to prefer one or the other, but either is clearly better than not using any version control. Good books are "Pragmatic Version Control using Subversion, 2nd Edition" by Mike Mason or "Pragmatic Version Control Using Git" by Travis Swicegood.
There are lots of benefits, yes you need it.
What tools/techniques can benefit a solo developer?
Best Version control for lone developer
You don't need version control any more than a trapeze artist needs a safety net. It's like backing up your hard drive—most of the time it seems redundant as nothing happens, but it will be needed eventually. There are no maybes here. It will happen. And you can never predict when, and the past is a poor indicator of when it will happen. It may happen only once ever in the future, but even if you know it'll happen once, you won't know how bad it will be.
Yes!
Do it. It won't hurt you.
I usually switch from laptop to PC and back, and it's absolutely great to have your code somewhere in a central repository.
Sometimes it's great to just revert to the latest revision because you screwed up something that would be too difficult to fix.
The biggest advantage that is missing is being able to re-produce the source code that generated an old build.
At build time, you tag the source control with 'Build 4.26'. The next day you start coding Build 4.27. Three months later, when a client says, "I'm using Build 4.26, and there's a bug in the Frickershaw feature. I can't upgrade to any other build because of some changes to file formats you made in build 4.27. Is there anything you can do for me? I'm willing to pay."
Then, you can checkout a branch of the 4.26 source code... fix the Frickershaw feature, and then re-build the package for the user in about an hour or two. Then you can switch back to version 4.39, and keep working.
In the same vein, you can track down the exact point at which a bug was added. Test version 4.25 for the bug, then 4.20, then 4.10, and eventually find that the bug was introduced in version 4.12. Then you look for all changes made between 'Build 4.11' and 'Build 4.12', and then focus on the Frickershaw feature. You can quickly find the source code for the bug without ever debugging it.
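Incidentally, that manual walk back through 4.25, 4.20, 4.10 is exactly the binary search that Git automates with git bisect (tag names here are illustrative):

git bisect start
git bisect bad Build-4.26      # the bug is present here
git bisect good Build-4.11     # the bug was absent here
# git now checks out midpoints; test each one and report:
git bisect good                # or: git bisect bad
git bisect reset               # finished -- git has named the first bad commit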
Branching doesn't seem useful to you? Have you never wanted to just try something out to see if it would work? I do a lot of plain old html/css stuff too, and I find that invaluable. There is literally no danger in branching to test something, seeing if it works, and deciding "meh" and then just rolling back.
I've never needed to get to a backup (knock on wood), but I find just the rolling back functionality invaluable.
A few perks as a freelancer:
Know definitively what you changed in every single file and when (as long as you check in often)
Rollback to any version in your past. Surprising how often this is valuable.
Track a set of changes as a 'release'. This way you know what each client is currently using and what's in development.
Backup
The ability to easily share a project if you suddenly aren't solo
Try a DVCS like Git or Bazaar. They are incredibly easy to set up, easy to use, and offer all the important features of Subversion, CVS, etc.
The key is that you can revert back to a working version when you break something, which is often much faster than undoing the change manually.
I wouldn't hire a contractor without them integrating into our processes. They would have to access code via our SVN, and would be unlikely to get paid without meeting unit testing and code review requirements.
If contracting I'd make sure to have solid experience of both VSS (check-in/out) and CVS (merge & conflict) models.
Working on your own you have a great opportunity to play and learn with the latest - I'd be trying out Git.
As a lone developer you can think of source control as an unlimited undo - one that works across sessions and reboots.
A minor advantage of source control for me is that I work on multiple development computers. It is easy to move my work around between machines.
The greatest advantage in my opinion has already been listed. It allows me to sleep at night knowing that if we have to roll-back a change it will be fairly easy.
I think the main advantage in moving from a "keep-all-versions file system" to a source code control system lies in the fact that the sccs adds structure to all those versions you kept of all those files, and provides you with records of "what was the consistent state of the whole file system at point X".
In other words, "Which version of file A goes with which versions of B, C, D, ...".
And an afterthought: the special act of committing or checking in makes you think about "what is this?", and the resulting log message can, hopefully, serve as a memory aid...
The literal answer to this question is, No, you do not NEED version control.
You do, however, WANT version control, even if you don't know it.
That said, many SCM tools can be mysterious or downright unpleasant to use until you break through the Grok Barrier, so let's revise that a bit:
"You do, however, want EASY-TO-USE version control." And it's out there...download a bunch of recommended visual clients and give them a whirl, then try whichever maps best to the way you think.
Which leads to the question you meant to ask:
"Why do I want to use version control?"
Answer: Version control allows you to be FEARLESS!
Yes you need it.
At the very least, for a single developer shop, you need to move code into a ProjectName-Date-Time directory several times a day.
That is, write a script that will automatically back up your working directory at lunch and at quitting time, without overwriting other back ups.
This can grow pretty fast, so eventually you'll want to save only the differences between files, which is what a VC application does.
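A minimal sketch of such a script in Python (directory names are placeholders):

import shutil
import time
from pathlib import Path

def backup(project_dir, backup_root):
    """Copy the working directory to a ProjectName-Date-Time folder, never overwriting."""
    src = Path(project_dir)
    stamp = time.strftime("%Y%m%d-%H%M%S")
    dest = Path(backup_root) / f"{src.name}-{stamp}"
    shutil.copytree(src, dest)  # raises if dest already exists, so old backups survive
    return dest

# Run at lunch and at quitting time, e.g. from a scheduled task.
print(backup("MyProject", "backups"))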
Since you usually work alone, I would say that it is a good idea to use version control. One of the main benefits I have found in using version control (Subversion in my case), is that when working alone it gives me more confidence in trying a new approach to the problem. You can always branch to a new method or framework of solving the problem and see if you like it better. If it turns out that this branch doesn't work, you can just abandon it and go back to the old method. This also makes it easier to try out the different solutions side by side.
So, if you have ever seen a different approach to solving a problem and you wanted to try it out, I would definitely use version control as a tool to make this easier.
If you are working on your own and are performing backups on a regular basis, VC may not be needed (unless you count your backups as version history). As soon as you start working with another developer, you should get version control in place so that you don't start overwriting each other's work.
Even if you don't need it right now, it is something you will need whenever you work in a team.
Having a history of changes to your html/css/javascript can be a godsend. Being able to compare your front-end to the code a month, or several months ago can really go a long way in figuring out why suddenly the template is all askew.
Plus if you ever want/need to get help on your project(s), you'll have an easy system to distribute, track, and deploy updated content.
Definitely do it, you'll thank yourself once you get used to it.
Checkout (one time)
Update (beginning of day)
Commit (end of task/change after testing)
That's all there is to it. Don't commit EVERY single modification that you're refreshing in the browser, just the ones you want to go live.
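With the Subversion setup the question already mentions, that whole routine is three commands (the URL and commit message are placeholders):

svn checkout http://example.com/svn/mysite/trunk mysite   # one time
svn update                                                # beginning of day
svn commit -m "fix nav template"                          # end of task, after testing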
Think of it like a backup. It is a little irritating until the day you need it. Then the amount of work you lose is directly proportional to the frequency of your backups.
Another benefit is being able to look at old ways you did things that may have become obsolete in a certain spot but could be useful in another. Just cut and paste the old code that you got when doing a compare.
That is, unless you like reinventing the wheel you already reinvented...
When things can go wrong they will.
It is very nice to have the ability to reference code you may have deleted a month ago, or to recover the entire project after a hardware failure or upgrade.
There may also be a point in the future when the project is worked on by more than you. The ability to prevent (or control) branching and versioning is a must in an environment like that.
Must must must must must must. You must use version control.
This is of the deepest importance.
If you don't understand why now, you will one day.
When your client phones up in a panic because something is broken on the live site and it's a regression, you'll be glad you can just open TortoiseSVN and see what it was you did last Tuesday that caused the breakage.
It's really odd. Ever since I started using version control, I've very occasionally had the need to look up old copies of my code and use them. I never needed to do this before... probably because the idea of doing so never really stuck. It's easy not to notice those times when you could have found version control helpful.
Search within an entire codebase. It's a killer feature, mainly because the search gets actioned on another machine so you can get on with your work undisturbed.
Which incidentally, is the reason why we didn't change to SourceGear Vault. It can't do this. We're still looking for a SourceSafe-compatible replacement for... well, SourceSafe. Despite what everyone says, it hasn't let us down yet*
* this may just be a matter of time.
I think you've made the right decision to use some kind of version control. For simplicity, I'd go with SVN (ignore CVS as SVN is basically a "better" CVS)
SVN can work with "local" repositories right on the filesystem and on lots of platforms, so you don't have to bite off too much in infrastructure (servers, networks, etc.)
Great resource for SVN: http://svnbook.red-bean.com
I don't know much about Git, but it being open source and gaining lots of mindshare, it probably has a lot of similar advantages!
Borrowing a quote from somewhere: You might not need it now, but when you do, you'll be glad you did.
Happy versioning!
Although old and crude, we have found Microsoft Visual SourceSafe to work. It works great for keeping version history. If you are not too worried about branching, which as a solo developer you may not be, it might just fit the bill.
As far as checking in goes, finding a good checkpoint can be a challenge, but checking in on every save will only make versioning hard to track.
"Do I really need version control?"
Yes. Unless you write perfect code that never needs to get changed.
An example:
I had a requirement. I built up a webpage, spent a day or so on the page and its Section 508 compatibility (this was about 6-7 years ago), and uploaded it to the website. Next, the requirement was changed drastically. I spent another day working on the page (and Hayuge Excel files didn't convert into accessible HTML easily). About a week later, the client asked that we go back to Version A. Source control would have done this in about 10 minutes. As it was, I had to blow another %$#^^&$# day on the task.
Yes, you need version control, whether for development purposes or simply for storing your documents. That way, you can go back in time when required, in order to revert changes or mistakes made to code or documents.
Once you start working on a team that references ever-upgrading "components" from multiple callers/applications, version control will be an absolute must. In that environment, there is no way you can keep up with all the permutations of possible change.
You need version control just like you need insurance in life.
You need version control to manage different file versions. With it, you can get and work on files from different places, and it makes it easier for team members to collaborate on the same project.