I'm interviewing this week for a position at a firm where I would be the sole initial developer, as well as supporting the application I am taking over. Because positions like this can vary so wildly in the details, I plan to go in advocating a number of specific approaches that would make the job workable.
One thing that I'm considering bringing up is an inclination to move the existing source code out of SourceSafe (where it currently resides) into a better version control product like Perforce.
I've had a number of bad experiences with SourceSafe causing massive problems like permanent file lock-out and code corruption. Alone, I'm afraid that those anecdotes sound like "I want to change it because I don't like it." If I'm going to bring the subject up, I want to have a slam dunk case.
So, what are the empirical reasons that SourceSafe is viewed as an inferior product?
See also:
Any Real-Life Visual Source Safe Horror Stories
How do I convince my team to drop sourcesafe and move to SVN?
Empirically, it makes no sense to trust your precious source code to a piece of software that isn't even up to the reliability level of Microsoft Access. The product should have been dumped years ago. It's just not up to modern standards.
I'd rate it below any open source product like CVS or SVN, and I don't know of any product I'd rate below it, except maybe an older version of VSS.
There's a long list of problems here (admittedly from 2002, but the product hasn't really changed since then).
Edit: here's the text from the link, in case it disappears. Page is licensed under CCA3.
Visual SourceSafe: Microsoft's Source Destruction System
by Alan De Smet
There are many fine solutions for revision control systems. SourceSafe isn't one of them.
I used SourceSafe for five years, through spring 2002. It has consistently
been an unpleasant experience. New versions failed to improve anything of import.
I hope to dissuade you from using SourceSafe,
sparing you the bad experiences I have had.
Missing Features
SourceSafe lacks usable branching support
A revision control system should provide powerful branching
support. With strong branching support, developers can easily
make minor revisions of old versions while work toward the next
major release continues. Highly experimental code can be checked
into a branch, keeping it separate from mainstream development
but backing it up and making it available to other developers.
If the project is "frozen" while a milestone or final release is
built, a developer can continue development toward the next
version on a branch. (Or more commonly, a new branch can be
created for the freeze while general development continues on the
main branch. When the release is done, changes on the frozen
branch can be merged back into the main branch.) SourceSafe's
branching support fails to effectively support any of this.
With powerful branching, a revision control system must
also provide strong merging support to reconcile different
branches. At the least, the system must allow a developer to
examine the differences between two branches, modify them to
create a merged version, and when satisfied check them in.
SourceSafe's merge support is tightly integrated with checking
in, making it difficult to examine differences and test the
proposed merge before checking it into the tree. With this weak
level of support, it's easy to check non-functioning code into
the revision control system.
SourceSafe cannot be safely extended
It should be possible to easily extend your revision control
system with additional functionality. The ability to send out
emails summarizing check-ins is essential. When working with a
team, regular email messages listing files checked in and the
check in messages associated with them really help keep everyone
up to date with recent changes. You might also want to add
filters to prevent check-ins of code that doesn't meet certain
requirements (for example, missing standard copyright statements,
or failing to compile).
SourceSafe barely supports this. While it is possible, every single
client needs to have the additional functionality installed. If
a single client lacks the extension, it will quietly fail to
behave as expected.
(For details, see
Visual SourceSafe 6.0 Automation.
Check the section "Trapping SourceSafe Events: An Overview".) You can
pay even
more for a third party solution, but does it make sense to
invest more money in a fundamentally broken product?
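For comparison, systems with a true server let you install such functionality once, centrally, for every client. As a rough sketch (assuming Subversion, which invokes hooks/post-commit with the repository path and new revision; the addresses below are hypothetical), a hook that mails a summary of each check-in could be a small Python script:

    #!/usr/bin/env python
    # Sketch of a Subversion post-commit hook that mails a check-in summary.
    # Subversion passes two arguments: the repository path and the revision.
    import subprocess
    import sys
    import smtplib
    from email.mime.text import MIMEText

    repo, rev = sys.argv[1], sys.argv[2]

    def svnlook(subcommand):
        # svnlook must be on the server's PATH for this to work.
        return subprocess.check_output(
            ["svnlook", subcommand, "-r", rev, repo], text=True)

    body = "Author: %s\nLog message:\n%s\nChanged paths:\n%s" % (
        svnlook("author"), svnlook("log"), svnlook("changed"))

    msg = MIMEText(body)
    msg["Subject"] = "Commit r%s" % rev
    msg["From"] = "svn@example.com"        # hypothetical sender
    msg["To"] = "dev-team@example.com"     # hypothetical list

    with smtplib.SMTP("localhost") as smtp:
        smtp.send_message(msg)

Because the hook lives on the server, no client can silently skip it, which is exactly the guarantee SourceSafe cannot make.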
SourceSafe silently leaves stale files on your local system
When updating your local workspace to match the server, files
which were deleted on the server should be brought to your
attention. (Or deleted, since the old version can be retrieved
from the revision control system.) Failure to do so risks out of
date files being used in your project, often causing problems.
I've frequently run into this problem when an out of date header
file is incorrectly included into my project. SourceSafe fails
to delete the out of date file or provide any warning.
SourceSafe badly handles slow networks and the public internet
SourceSafe is unusable over slow network connections. It's
effectively unusable over the public internet. In addition,
because SourceSafe works over network shares, if you place a
SourceSafe server on the internet, you're exposing any weaknesses
in your server's file sharing implementation to the entire world.
Of course, if
you're willing to invest more money in your ineffective revision
control system, you can buy a third party
product to solve this problem.
Managing third party modules is difficult with SourceSafe
It's not uncommon for a developer to use third party modules
in your project to quickly add required functionality. For
example, you might use Codejock Software's Xtreme
Toolkit. It's natural to check these third party modules
into your revision control system. This way, when you step
backward in time to examine a previous revision, you can get the
same versions of supporting libraries and third party modules
that were used to build your code at this time.
Unfortunately, SourceSafe makes tracking a third party module
extremely difficult. Initially checking the first version in
isn't hard. Checking a new version in requires a good memory and
attention to detail. To add a new version, you first recursively
check out the folder holding the module. Now delete the
directory on disk and replace it with the new version. Check the
new version in. You now need to identify any files or
directories added in the new version. Right click on the
module's folder in SourceSafe and use "Show difference" to
recursively generate a list of files which have been added. Note
which directories hold files which have been added and which
directories have been added. Now close the report of differences
(the report is modal, preventing you from using SourceSafe while it is
visible). Add the new directories as you would normally add
directories. To add the new files, visit each directory holding
new files and use File > Add Files to add them. Again, use the
"Show difference" command to recursively generate a list of files
which have been removed. Note these files and close the
report of differences again. Now delete each of these files in
SourceSafe.
If you've actually tweaked the third party module, SourceSafe
provides no particular help in tracking down the differences and
merging them into the new version.
(For comparison, to check in a new version of a third party
module using CVS, you would
simply run the command "cvs -q import -m 'Import of Xtreme
Toolkit 1.9' xtremetoolkit Codejock XT_1_9". That's it. If
you've made changes to the module that need to be integrated, you
would use "cvs checkout -j XT_1_8 -j XT_1_9
xtremetoolkit". That will give you a local copy of the
merged changes which you can immediately check in if
satisfactory.)
Viewing and retrieving historical versions is extremely slow
It's not unusual to need to get a historical version of the
source code. You might need an older version to investigate a
bug report, or the current code is malfunctioning and you need to
get a functioning version. SourceSafe supports this, but it's
extremely slow for non-trivial projects. To get a historical
version, you first need to generate a history for the entire
project you're interested in. On a project with hundreds of
files and just over one year of history, this can easily take
over five minutes (even if you restrict the actual search to the
last 48 hours of changes). Once this history is generated, you
specify the version to get by selecting the last check-in to
accept. The slow speed at which this process is completed
discourages developers from examining previous versions,
defeating much of the purpose of a revision control system.
Difficult to maintain multiple local copies of one project
While making extensive changes to a copy of the project, you
may be asked to make a small change to the project. The most
efficient and safest way to do this is to get another copy of the
project to make the change on. SourceSafe presents two problems
in doing so. First, SourceSafe only recognizes a single copy of
the project on your system. You'll need to either move the
project directories back where SourceSafe expects the canonical
copy, or you'll need to reset SourceSafe's notion of where the
canonical copy exists. Using either technique, it's easy to
accidentally point SourceSafe at the wrong project and check the
wrong versions of files in. Secondly, SourceSafe's weak merging
features mean that if you need to change the same file in both
copies of the project, you'll need to be very careful that
changes to one project don't destroy changes in the other.
Safety
SourceSafe degrades on large projects
Microsoft recommends that your database not exceed 5 GB.
(Source: Microsoft Best Practices)
While this is a large database, it's not unreasonable for a large
project, especially if you check in large binary files (like
Microsoft Word documents).
SourceSafe integration can crash Visual Studio
SourceSafe can hang or crash when your system loses connection
to the SourceSafe database. While this is irritating for Visual
SourceSafe, this can cause you to lose work when Visual Studio is
using SourceSafe integration. Simply having a SourceSafe-managed
project open in Visual Studio is enough to open yourself to the
risk. To minimize this risk (and speed up ClassView), I suggest you
follow Microsoft's directions on disabling SourceSafe integration.
SourceSafe relies on dangerous file sharing
SourceSafe doesn't really run as a server, but as a set of
files shared over SMB. As a result, you're relying on each
individual client to not misbehave. A single misbehaving
computer can destroy the database. A problem in the file sharing
implementation on your operating system can damage the database. Users who only
need read-only access to the revision control system still need write access to
the server, increasing the risk (see Required Network Rights for the SourceSafe
Directories).
SourceSafe should be scanned for corruption weekly
Of course, with this high risk of corruption,
Microsoft recommends that you run the Analyze diagnostic program
weekly.
(Source: Microsoft Best Practices)
While Analyze is running, all of your developers are locked
out of the system (I hope everyone remembered to quit
SourceSafe first!). My experience with SourceSafe shows
that checking a 2 gigabyte system running under Windows 2000
takes several hours, even when Analyze is run weekly.
SourceSafe handles multiple time zones badly
If you have teams using the same SourceSafe repository in
different time zones, you're likely to have problems. (See
Microsoft's details on the time zone bug.) The only solutions
Microsoft provides are to incorrectly set the clocks of the
computers to a single time zone, or to purchase a third party
product.
Relatedly, this is a potential problem if any of the client
computers using SourceSafe fail to have synchronized clocks.
Differences of several minutes between computers can cause
strange behavior from SourceSafe when it tries to reconcile
information that appears to come from the future.
SourceSafe becomes corrupted
Your revision control system must be trustworthy. You're
entrusting your hard work to your revision control system. If
your data is corrupted, the system is worthless. SourceSafe's
fundamental design assumes that clients are trustworthy, always
function correctly, and that nothing interferes with the
communication causing corrupted data. As a result, SourceSafe is
fragile and untrustworthy. I have worked with SourceSafe at
three different jobs. In each case, eventually the SourceSafe
database became corrupted. Data has been corrupted, work has
been lost, time has been wasted on the problem. Speaking with
other developers, I have learned that my experiences are not
unique.
Irritations
Minor actions like changing the directory erase the entire
contents of the output window, making it difficult to examine past
actions.
Comparing your local version to the remote repository is
clumsy. In SourceSafe, you select the directory you're interested in
and select Compare Differences. The resulting report is modal,
preventing you from working with SourceSafe while examining the
report.
When getting the latest version of files from SourceSafe,
each file changed locally causes a dialog to pop up to confirm
the update. The update action entirely stops while the dialog
waits for your response. This is particularly irritating if you
get the latest version, step away from your computer for a while,
then return to discover that SourceSafe is only 10% done and
waiting for your response. You can prevent the dialog from
returning in several ways, but in doing so you get no
indication that any such files were encountered. So when you
return to the finished update, you will have no idea that
SourceSafe encountered potential problems. SourceSafe should
note these files in the output window when encountered, making it
easy to scan the output window for files to be investigated.
Conclusion
If you're considering SourceSafe, consider something else. If
you're using SourceSafe now, migrate away as soon as possible;
there are plenty of better alternatives.
If you simply must use SourceSafe, definitely take the time to
look at Microsoft's list
of bugs in Visual SourceSafe 6.0 and list
of fixed bugs in Visual SourceSafe 6.0 so you know what to
expect. (These links were originally taken from Microsoft's Bugs page.
This page may be useful if you have a different version of
SourceSafe or the above links fail.)
The big one that I've experienced personally is database corruption. It happens and it is painful. Aside from that, it's pretty slow compared to more modern SCMs.
If I were you, I'd recommend moving to at least TFS. The integration with Visual Studio is just as tight, it is much speedier, and the idiom is pretty much the same. I've had no problems with it in the 4 years I've been using it. Perforce is expensive, and that's probably not something to toss around in an interview.
Back in 2002/2003 in a former job, I ended up being the guy who had to babysit our VSS installation.
We gave VSS its own dedicated server - real physical hardware - in order to minimize disruptions, and still we had regular problems.
Once a week I had to go and fix broken locks - locks that couldn't be released by the developer who placed them.
Around twice a year I had to recover from a corrupted repository - there seemed to be some kind of built-in limit around the 1 GB mark; whenever the repository grew much past 1 GB, things went bad pretty quickly.
Given that there are better tools - with better integrations - that are now available at zero-cost, switching from VSS (to me) is a no-brainer.
I agree that VSS is a horrible piece of software, but other than the possible database corruption problem, it seems like your situation will be a difficult one to sell people on. For example, you can't say VSS has terrible merge support because, well, you're the only developer. You can't complain about locking checkouts for the same reason. Unless your app is pretty good-sized, you can't argue from the maximum database size that VSS suggests.
I personally think that in an interview like you're describing, you'd be better off looking for lower-hanging fruit to suggest, like iterative development, TDD, or a wiki for documentation. I have fought the good fight to move from VSS to Perforce in an enterprise situation, and that was hard enough. I can't imagine trying to convince management of a major source control change for an application that has one developer. YMMV.
SourceSafe is an antiquated technology built upon Windows shares. The storage mechanism (non-transactional "flat files") is a recipe for poor performance and bugs. Its adoption has nothing to do with what it has over other SCM's, and everything to do with the fact that it was "already there".
I can't comment on Perforce, but I can say that VSS compared to, say, Team Foundation Server is a very weak offering and should be used only in circumstances where there is already a large investment in it and NO money can be spent.
Microsoft does not use it internally. Instead, they got a source license to Perforce, and they've hacked it up to suit their needs. This is telling, since Microsoft proudly dogfoods their other products, like Windows and Office.
My last experience with SourceSafe was years ago, so take this with a grain of salt. In my experience, it doesn't scale well as the number of developers touching the same code goes up.
There is no way to have multiple people work on the same code and then merge their changes together on check-in. Instead, each developer has to lock the files they are working on, while the other developers can't make progress on anything touching those same files.
Because it makes your source unsafe even when it's in the repository.
The issue I'm wrestling with now is Visual SourceSafe's insistence that my project's folder structure cannot possibly be represented within the target working directory. It always thinks that a service setup project is trying to encroach on the service project and refuses to check it in. When you add a project to source control, it invariably adds another folder to its path in source control. Version control should just represent a file structure that is ALREADY WORKING on a developer's machine, without complaint.
See also Better SCM Initiative: Version Control Systems to Avoid.
One of the issues that I have read there, and haven't seen mentioned so far in the answers, is that VSS has no support for files that are deleted and then recreated: either you purge the file's history (and can never recover an old version), or you cannot create a file with the same name as a deleted file. Even CVS (which is also file-based) tried to do this right by using an 'Attic' area.
Code Corruption (including your entire history stack)
Binary file corruption (we had this issue specifically with PDF templates)
Poor integration into Visual Studio IDE (very buggy)
Constant file lockouts
Did I forget to mention repository corruption... eeek
No branching
I can go on and on. The bottom line is to absolutely avoid this product. It may be adequate for smaller projects, but developing enterprise-level applications should not involve dealing with constant codebase repository issues.
I am working on the development of an application that will perform online backup of the files and folders on a PC, automatically or manually. Currently, I keep only the latest version of each file at the server. Now I have to implement versioning, so that only the changes are transferred to the online server, and the user must be able to download any available version of a file from the backup server.
I need to perform deduplication for this. I am able to do it using a fixed block size, but I am facing the overhead of transferring a file of CRC information with each version backup.
I have never worked with this technology, so I lack experience. I am eager to know whether there is a feasible method to embed this functionality in the application without much pain. Would any third-party tool help to do the same thing?
Note: I am using the FTP protocol to transfer the data.
There's a program called dump that does something similar, but it operates on filesystem blocks rather than files. rsync may also be of interest.
You will need to keep track of a large number of blocks with multiple versions and how they fit into the various versions of the original files, so you will need some kind of database to track this information, and an efficient way to query it to determine which blocks in a given file need to be transferred. Also note that adding something to the beginning of a file will cause all your blocks to be "new" if you use a naive blocking and diff scheme.
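To illustrate that boundary problem, here is a rough Python sketch (illustrative only, not production code) of content-defined chunking, where chunk boundaries are chosen from the data itself rather than at fixed offsets:

    import hashlib

    WINDOW = 48      # minimum chunk length before a boundary is allowed
    MASK = 0x1FFF    # boundary when hash & MASK == 0 (~8 KiB average chunks)

    def chunks(data):
        """Yield (digest, chunk) pairs with content-defined boundaries."""
        start = 0
        rolling = 0
        for i, byte in enumerate(data):
            # Toy stand-in for a real rolling hash (e.g. a Rabin
            # fingerprint): old bytes shift out of the 32-bit window.
            rolling = ((rolling << 1) ^ byte) & 0xFFFFFFFF
            if i - start >= WINDOW and (rolling & MASK) == 0:
                chunk = data[start:i + 1]
                yield hashlib.sha1(chunk).hexdigest(), chunk
                start = i + 1
        if start < len(data):
            chunk = data[start:]
            yield hashlib.sha1(chunk).hexdigest(), chunk

    def chunks_to_upload(data, digests_on_server):
        """Only chunks the server has not seen need to be transferred."""
        return [(d, c) for d, c in chunks(data) if d not in digests_on_server]

Because boundaries depend on the surrounding bytes rather than on fixed offsets, prepending data shifts the boundaries along with the content, so most chunk digests (and hence the blocks already stored on the server) stay valid.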
To do this well will be very complex. I highly recommend you thoroughly research already-available solutions, and if you decide you need to write your own, consider the benefits of their designs carefully.
I am interested to know what strategies people have to keep their code AND work versioned across multiple machines. For example, I have a desktop PC running XP, a MacBook running OS X, VMware running XP, and a sales laptop for running product demos. I want to know how I can always keep these in sync. Subversion is a possibility, but I find it less useful for dealing with binary files - maybe I have overlooked something here. What do other people use, as they must have similar issues? Do they keep all files on a USB drive and never on the local file system? I am not always online, so remote storage is not really an option.
Like others have said, subversion is your best bet for code. For binary files/non-code, I find DropBox to be very convenient. It stores revisions, has undelete, easy sharing, etc. basically an automagic, web-friendly SVN. Not having to think about it is the biggest plus for me.
I use Mercurial for keeping my work files in sync. It's not great for big binaries either, but it lets me commit without being online and makes it easy to branch/merge different versions.
Ah the old VCS Debate.
The simplest way to share/sync source code is to use some sort of VCS (version control system) - this gives you plenty of benefits beyond just keeping things synced. There are many VCSs out there; I personally use Bazaar-NG and Subversion, though I'd suggest you trial a few and see how you feel using them.
For syncing general files, especially if it's only for yourself, I'd recommend using DropBox (http://www.getdropbox.com/) - I've been using this for the last week or so, and it makes syncing up my multiple machines with a certain set of files so much easier.
It also has some extra features that'd probably be useful for collaboration too, but I haven't tried those out yet.
Subversion works just great in our office for sales, project management, design and code files.
I store my dotfiles (.zshrc, etc) in a Git repository that is checked out into my homedir. I also do the same for the LaTeX files comprising my classwork.
I put important builds in Source Control -- it's fine for binary files.
For most files, including source code, we use Subversion. It's really great.
If there are larger files or project-management-related documents used by people who have no access to the source control system, we use Microsoft SharePoint.
This is especially useful if you are working with people outside your company.
I keep all my work encrypted on a USB stick. It also has a bootable Linux partition so I can get into a sensible working development environment from any machine, such as a borrowed work laptop with some software to carry to a conference that I can't move to my own machine.
When you have more people working on the same code, I'd put it in a central Subversion repository and set up scripts (in Windows you could use the autorun feature for the USB stick) to synchronize things between the repo and a USB stick always carried along.
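For the script part, a minimal sketch of a one-way mirror from the working copy onto the stick (paths hypothetical; assumes a Subversion working copy) might be as simple as:

    import os
    import shutil

    def mirror(src, dst):
        """Copy files from src to dst when missing or newer, skipping .svn."""
        for root, dirs, files in os.walk(src):
            dirs[:] = [d for d in dirs if d != ".svn"]  # skip svn metadata
            target = os.path.join(dst, os.path.relpath(root, src))
            os.makedirs(target, exist_ok=True)
            for name in files:
                s = os.path.join(root, name)
                t = os.path.join(target, name)
                if (not os.path.exists(t)
                        or os.path.getmtime(s) > os.path.getmtime(t)):
                    shutil.copy2(s, t)  # copy2 preserves timestamps

    mirror(r"C:\work\project", r"E:\project")  # hypothetical paths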
The main point I see regarding using SVN as a central repository for binary files is that if those files are of any reasonable size, they will take some time to be synced over the net.
So if you don't want to spend time waiting for your files to come in over the net, here are the building blocks for another mirroring solution:
MirrorFolder
No better tool to be found when it comes to syncing a data tank with several other "local" copies.
TrueCrypt
Use this to encrypt your USB tank, just in case you drop it somewhere.
FolderShare (http://foldershare.com) is also nice for syncing files. I use it to keep documents, etc. in sync between my laptop and my desktop, for example.
Of course, for code especially this doesn't obviate the need for source control.
Our project is held in a SourceSafe database. We have an automated build, which runs every evening on a dedicated build machine. As part of our build process, we get the source and associated data for the installation from SourceSafe. This can take quite some time and makes up the bulk of the build process (which is otherwise dominated by the creation of installation files).
Currently, we use the command line tool, ss.exe, to interact with SourceSafe. The commands we use are for a recursive get of the project source and data, checkout of version files, check-in of updated version files, and labeling. However, I know that SourceSafe also supports an object model.
Does anyone have any experience with this object model?
Does it provide any advantages over using the command line tool that might be useful in our process?
Are there any disadvantages?
Would we gain any performance increase from using the object model over the command line?
I should imagine the command line is implemented internally with the same code as you'd find in the object model, so unless there's a large amount of startup required, it shouldn't make much of a difference.
The cost of rewriting to use the object model is probably more than would be saved in just leaving it go as it is. Unless you have a definite problem with the time taken, I doubt this will be much of a solution for you.
You could investigate shadow directories so that the latest version is always available and you don't have to perform a 'get latest' every time, and you could ensure that you're talking to a local VSS (all commands are performed directly on the file system, so WAN operations are tremendously expensive).
Otherwise, you're stuck unless you'd like to go with a different SCM (and I recommend SVN - there's an excellent converter available on CodePlex for it, with example code showing how to use the VSS and SVN object models).
VSS uses a mounted file system to share the database. When you get a file from SourceSafe, it works at the file system level, which means that instead of just sending you the file, it sends you all the disk blocks needed to locate the file as well as the file itself. This adds up to a lot more transactions and extra data.
When using VSS over a remote or slow connection, or with huge projects, it can be pretty much unusable.
There is a product which, amongst other things, improves the speed of VSS by roughly 12 times when used over a network. It does this by implementing a client-server protocol. This can additionally be encrypted, which is useful when using VSS over the internet.
I don't work for them or have any connection with them; I just used it at a previous company.
See SourceOffSite at www.sourcegear.com.
In answer to the only part of your question which seems to have any substance: no, switching to the object model will not be any quicker, as the "slowness" comes from the protocol used for sharing the files between VSS and the database - see my other answer.
The product I mentioned works alongside VSS to address the problem you have. You still use VSS and have to have licences to use it... it just speeds it up where you need it.
Not sure why you marked me down?!
We've since upgraded our source control to Team Foundation Server. When we were using VSS, I noticed the same thing in the CruiseControl.Net build logs (caveat: I never researched what CC uses; I'm assuming the command line).
Based on my experience, I would say the problem is VSS. Our TFS is located over 1000 miles away and gets are faster than when the servers were separated by about 6 feet of ethernet cables.
Edit: To put on my business hat: the time spent waiting for builds plus the time spent trying to speed them up may be enough to warrant upgrading, or buying the VSS add-on mentioned in another post (already +1'd it). I wouldn't spend much of your time building a solution on VSS.
I'm betting running the object model will be slower by at least 2 hours.... ;-)
How is the command line tool used? You're not by chance calling the tool once per file?
It doesn't sound like it ('recursive get' pretty much implies you're not), but I thought I'd throw this thought in. Others may have similar problems to yours, and this seems frighteningly common with source control systems.
ClearCase at one client performed like a complete dog because the client's backend scripts did this. Each command line call created a connection, authenticated the user, got a file, and closed the connection. Tens of thousands of times. Oh, the dangers of a command line interface and a little bit of Perl.
With the API, you're very likely to properly hold the session open between actions.
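To make that concrete, here is a hedged sketch of holding one session open through the VSS object model, via COM from Python with pywin32. The project path is hypothetical, and the flag value is taken from the VSS automation documentation, so verify both against your installation:

    import win32com.client

    VSSFLAG_RECURSYES = 8192  # recursive operation, per the VSS automation docs

    # One connection, opened once and reused for every operation below,
    # instead of reconnecting per command-line invocation.
    db = win32com.client.Dispatch("SourceSafe")
    db.Open(r"\\server\vss\srcsafe.ini", "builduser", "password")

    project = db.VSSItem("$/MyProject")  # hypothetical project path
    # Recursive get into the build directory (byref handling of the local
    # path can vary by COM binding; test against your setup).
    project.Get(r"C:\build\MyProject", VSSFLAG_RECURSYES)
    project.Label("nightly-build", "Automated build label")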
This is not strictly a technical question, however I feel this will be useful for many technical people as well.
I'm looking for a version management / backup solution which need not be only for source code. This could be for non-text files e.g. images.
The requirement is this -
Every time I save the file from within the application, it should create a version.
I should be able to add comments for say, major revisions.
At any time, there should be only one version current.
I should be able to view previous versions without doing a 'restore'
I should be able to move back and forth between versions.
A calendar feature showing the various versions of a file would be helpful, if I could get to it for a specific file from the Explorer context menu
I don't really need to compare different versions or anything like that.
Windows solutions only. I've looked at NTI Shadow and it comes a bit close to what I'm looking for.
Are there any paid / free / open source solutions for the above requirements?
Pretty much any version control system I know of supports binary uploads. Subversion (SVN for short) is free and pretty popular. If you also download TortoiseSVN, you can handle everything from within Explorer.
The only requirement I cannot help you with is the first one, automatically creating a version when you save from within the application. But you can of course approximate this by copying the saved file over the old version in the file system and committing your changes via TortoiseSVN.
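If you wanted to automate even that, a rough polling sketch might look like the following (paths hypothetical; assumes the file has already been added to an SVN working copy and that the svn client is on your PATH):

    import os
    import shutil
    import subprocess
    import time

    DOC = r"C:\Documents\design.psd"   # the file your application saves
    WC = r"C:\versions\design.psd"     # same file inside an svn working copy

    last_seen = 0.0
    while True:
        mtime = os.path.getmtime(DOC)
        if mtime > last_seen:
            shutil.copy2(DOC, WC)
            subprocess.run(["svn", "commit", "-m", "automatic save", WC],
                           check=True)
            last_seen = mtime
        time.sleep(30)  # poll every 30 seconds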
PS: For some reason I cannot connect to the SVN site right now. It might be down at the moment. It is still a great product, though :)
[not an actual answer, just a note about DVCS backup capabilities]
I would not advise for a DVCS (Distributed Version control System) like Git or alike for backup strategy.
As stated in DVCS Myths:
So, why make backups of a source control server with so many backups?
It is improbable that many servers will suffer catastrophic hardware failures simultaneously, but it is not impossible.
A more likely scenario might be a particularly nasty computer virus that sinks its teeth into an entire network of vulnerable machines.
In any case, the probability of any or all of your backups becoming suddenly unavailable is really not the point.
The bottom line is that using independent clones as canonical backups (as opposed to temporary stopgaps) is a suboptimal strategy.
Security, for example, should be considered.
If you are using authorization rules to control access to specific portions of your repository, canonicalizing an arbitrary clone of the repository effectively renders those rules useless.
While this would rarely be a matter of practical concern in a controlled corporate environment, it is nonetheless possible.
(My input:) Full data backup is not really possible with a DVCS, since it would imply that all repositories push their changes to a "central" repository, which is not the main use-case scenario for a DVCS (whereas with a classical VCS, anything committed is stored in one place).
The key win of DVCS for backups, then, is that you don't really need to invest in a "hot" backup.
When the server inevitably goes down, DVCS will buy you time. Lots of time. You'll essentially be running at full productivity (or very nearly so) while you rebuild your server from backup.
When changesets created during the server downtime are pushed back to the restored server, the freshly restored authorization rules will be reapplied and you'll be back on track.
So, for us:
hot "backup" is actually achieved with SRDF (Symmetrix Remote Data Facility), but that is commercial and is tied to our infrastructure, which supports LUN duplication to achieve data replication.
incremental daily backup is achieved for a limited set of repositories (including some "central" Git repos), but in our case, with a custom tool.
I think you're looking for the benefits of a versioning file system that takes immutable snapshots of files upon each write. You could build this into a DVCS if something set up watches on files contained in the versioned directory (committing each time a file is changed) but that would get ugly, quick.
This topic was also explored in this question. I think your ideal solution would be a DVCS repository that resides on a versioning/copy-on-write file system of some type. This lets you manage revisions of each file independently of the commits that you make in the DVCS.
Unless, of course, toxic revisions would not be an issue for you.
In order for this to be transparent to applications (i.e., applications would not need to implement a different API for saving/loading files to access these backup features), you'd want to do this in the operating system, at its file system layer.
The ZFS filesystem could be wrapped to provide the user-interface capabilities you describe, but it is doubtful that this filesystem will ever reach Windows (directly, at least).
A simpler way to think of this is to look at network storage systems which can provide you the features you need.
NetApp Snapshot offers capabilities that could be tapped to do this at the network storage level. It implements CIFS, so is definitely available on windows. Open your wallet.
If you think this is an extremely important feature, you may consider other OSes than Windows; filesystems and filesystem support in OSes other than Windows are more diverse.
I strongly suggest using subversion. I have used 4 different version control systems and have found subversion powerful and easy to use.
For Windows, the easiest server to install is VisualSVN.
And SmartSVN is the best Subversion client I've used.