I want to use some version control for my projects, but I'm the only developer; there are no others. I want to use my pendrive as the repository because I develop in many different places (but on the same project).
I've only worked with SVN, but it wasn't a good fit in that case; I think a DVCS would be better.
But now I really don't know what to use, or whether SVN is the best option. I've looked at other solutions like Mercurial, Git, and Fossil, but I don't understand the differences, and mainly whether they are the best options for my situation.
I need to know what is best in this case.
If you're the only developer, then the best version control is the one that you are most comfortable with. The goal of version control is to make your life as a developer easier and safer, so there's no point in fighting with a version control that you don't know.
However, if you want to learn how to use a new version control system, this is a great opportunity.
If you think that you're going to have more developers working on this project later, then you want to think about a robust solution like SVN.
I would definitely recommend Git. It is a little cryptic at times, but it seems to be becoming the de facto standard for most open source projects. It's very powerful and it'll let you work with the great service that is GitHub :)
There are good topics on SO that compare Git, Mercurial, svn, etc.
What is the Difference Between Mercurial and Git?
To me an important requirement was easy, free, private repositories online, so I started to use http://Bitbucket.org. They support both Mercurial and Git.
+1 to #dudemonkey's answer, except the last sentence - tastes can differ. Even as a sole developer, you can (and should) use the best techniques - i.e. you may have non-linear development (thus branching and merging), code refactoring, different targets. Try the options and select the best solution for you.
Nobody has mentioned Fossil SCM - a small, portable app in a single executable, with all the basic features of a DVCS plus an integrated wiki and tickets - which you can keep (repository included) on a USB drive, for maximum mobility.
First of all, stay away from SVN or any other CVCS; they suck. To learn more about DVCS, I recommend Eric Sink's book, which is free: http://www.ericsink.com/vcbe/
As you plan to work on different machines, the best solution would be to have an online repository; I think it's more practical and safer than the pendrive. Some of the best known out there are: https://bitbucket.org/, https://github.com/, https://launchpad.net/, http://code.google.com/projecthosting/. Remember that with a DVCS you don't need to be online all the time: you can commit locally and push to the server later. If your project is not open source, you should stick with Bitbucket, as it's the only one of these that offers free hosting for closed-source projects.
If you really want to work from just the pendrive, don't leave the repository only on the pendrive. It's safer to clone the repository from the pen drive to the machines you will work on, and then push/pull to the pen drive's repository to synchronize.
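For example, a minimal sketch of that pendrive workflow with Git (the mount point and repository name are just placeholders, the pendrive repo is assumed to be a bare one created with git init --bare, and the same idea works with Mercurial):
    git clone /media/usb/project.git ~/project   # clone from the pendrive onto this machine
    cd ~/project
    # ...edit and commit locally as usual...
    git commit -am "Work done on this machine"
    git push origin master                        # synchronize back to the pendrive repository
    git pull origin master                        # later, pick up work pushed from another machine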
In my company, we are presently using Rational ClearCase as the software configuration management tool for our embedded software development. The software is basically for automobiles, to be specific for engines (I don't think this information really matters).
But I find ClearCase to be very slow in performing most activities (accessing files, branching and labelling), in addition to which there are various other limitations.
We have recently decided to research free and open source distributed version control systems that could handle our large projects with speed and efficiency. The tool should be a full-fledged repository with complete history and full revision-tracking capabilities, not dependent on network access or a central server. Branching and merging should be fast and easy to do. It should also support multi-site development.
With these above mentioned requirement, we have come up with some of the tools that are presently available in the market:
Git, Mercurial, Bazaar, Subversion, CVS, Perforce, and Visual SourceSafe.
I need everybody's help in finding an appropriate SCM tool which meets the above-mentioned requirements.
Thanking you in Advance,
Rahamath.
Mercurial or Git are the most popular Distributed Version Control Systems. I believe Git has the speed advantage, particularly in committing, branching and merging. Furthermore, its merging algorithm is the best I've yet come across; most merges can be handled automatically without user input.
From my own experience, I would recommend Git without hesitation were it not for its very steep learning curve. However, I believe much of this is due to the paradigm shift when switching to a DVCS, such as getting the hang of pushing and pulling, the way repositories become decentralised.
Subversion, CVS, Perforce and SourceSafe are not distributed; furthermore, Perforce and SourceSafe are neither free nor open source. CVS is all but obsolete, with Subversion being its natural successor, so I wouldn't consider it any further.
If you want something "not dependent on network access or a central server", then the centralised SCMs from your list (Subversion, CVS, p4) have to go.
If you want cross platform, then I think Visual Source Safe would have to go.
Also, you mentioned Open Source, that kicks out p4 and Visual Source Safe.
CVS is quite old and if you're planning to use that, you'd best ignore it and use SVN instead.
Git is something that you can add to the list, but its support on Windows is not as good as that of bzr and Mercurial.
I use git myself, but I develop exclusively on GNU/Linux and so can't comment on Windows support. Also, it's a bit quirky, but once you get used to it, it can be really powerful. There is a learning-curve problem, so you might have to spend some time training your team on the tool.
Bzr, I don't know. When I last touched it, it had repository format issues and was horribly slow. It's much better now but I was scarred by my first exposure.
Hg is sweet and works fine on Windows and GNU/Linux, but since I've used git quite heavily, I miss some of its features in hg.
We are using ClearCase (with its advantages and pain points), and we are considering DVCS.
Right now, we are introducing Git, both on Windows (msysgit) and on a "central" Solaris server, which does meet our needs in terms of merging and in terms of distribution (for offshore development).
But we have to set up "central" repositories for the developers to use as a reference, and for that we had to use gitolite (the pu branch) for its fine-grained access control (repo, branch, and directory access per user or per group, LDAP-based).
The integration with Eclipse is in progress, and we are confident on the support level since all Eclipse projects have switched from CVS to Git (so they are committed to support it).
Mercurial has been considered and can certainly offer the same level of features, but has a more complex branching model.
Git has no extension to install. It just works (with a learning curve we manage to keep at a reasonable level through my user support services)
At work we are actually on ClearCase too, and not satisfied, for some of the same reasons... Very slow to update big projects (particularly when not on the local network)...
We (not me) benchmarked some products, and Mercurial was chosen as the future solution to be used.
I've been using SVN for some time now, and am pretty happy with how it works (but I can't say I'm an expert, and I haven't really done much with branches and merging). However, an opportunity has arisen to put in some new practices on a new team, and so I thought I'd take a look at DVCSs to see if it's worth making the jump.
The company I work for is a pretty standard company where we all work in the same location (or sometimes at home) and we want to keep a central store of all code.
My question is: if all you are doing with a DVCS is creating a central hub that everyone pushes their changes to, is there really any benefit to moving to a DVCS and its extra overheads in this sort of environment?
With DVCSs, people can maintain their own local branches without making any changes in the central repository, and push their changes to the master repository when they think the work is ready. Our project is stored in an SVN repository, but personally I use git-svn to manage my local changes and find it quite useful, because we are not allowed to submit all changes immediately (they have to be approved by the integrator first).
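As a rough sketch of that git-svn arrangement (the SVN URL and branch name below are just placeholders):
    git svn clone https://svn.example.com/repo/trunk project   # one-time import of the SVN trunk
    cd project
    git checkout -b my-feature        # local branch the SVN server never sees
    git commit -am "Work in progress" # commit locally as often as you like
    git svn rebase                    # pull in new SVN revisions and replay local work on top
    git svn dcommit                   # once approved, send the local commits to SVN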
It all depends on how you want to work on projects. Distributed environments are great if everybody wants to build on their own branch. I prefer a central repository for my work (in a small team), as it makes the developers think about releasing one version of our product.
In my experience I see a lot of DVCS users who think of their own changes as the ones they don't have to review and these users review the changes of all other developers before merging them in their own tree. I like to see my changes as the change to the core product, so I review these changes before I commit them. As a result we try to keep the product pretty stable during the entire development cycle. Refactoring works OK, as we all update often.
Several DVCS users I know prefer to work on their feature on an independent tree and leave the integration with the central product to the final phase of their development. This works fine if the feature is independent, but I wouldn't like to be the one who has to integrate all the features developed this way with a deadline in sight.
If you integrate often, DVCSs don't differ much from central VCSs, and most DVCSs support a central repository, while more and more central VCSs support several features that were previously unique to DVCSs, like offline commits and shelving.
(FYI: Offline commits are planned for Subversion 1.8)
Personally, I find it's a huge benefit. Even with a central repo, a DVCS changes the flow from "edit code, update from central, commit" to "edit code, commit, push to central". Among other things, that means that conflict resolution is far less stressful. It can also encourage development in smaller chunks, since you don't have to push after every commit. If your team is OK with it, that means your individual commits might leave the app in a strange state, as long as it's working when you finally push to the central repo. If they're not OK with that, as long as you're using git (or patch queues for hg, IIRC), you can still do dev in the same style, but then condense all your smaller commits into one larger commit that is complete before you push it to the central repo.
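If your team prefers only complete changes in the central repo, one possible way to condense the small local commits before pushing is a squash merge (assuming Git; the branch names are only examples):
    git checkout -b wip                    # experiment freely on a local branch
    # ...edit and commit in small steps, even if the app is temporarily in a strange state...
    git checkout master
    git merge --squash wip                 # fold all the small commits into one staged change
    git commit -m "Complete feature as a single commit"
    git push origin master                 # only the finished commit reaches the central repo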
The big benefit of using a DVCS for me is that I can commit to my local repository without having to share these changes with everyone else. So when working on a big change I do small incremental commits, meaning I can revert just the last 30 minutes work, or do a diff against a version that was working yesterday, but then only push to the central repository once all my work is complete.
I think this benefit alone is worth moving to a DVCS for.
However, using a DVCS does require a little more thought and understanding than using a "standard" version control system like SVN or CVS, so you will need to consider the training overhead if moving to a DVCS, or your central repository will end up full of lots of different branches people didn't realise they were creating.
You'll get the inevitable war of Git vs. Mercurial starting here soon... :-) I personally use Mercurial, but what I've got to say should be suitable for all DVCS.
In my opinion, yes, they are suitable for corporate use. I use them at my own company, albeit with a small number of developers using it, but if you're worried about scalability, look at the large Open source projects using git and mercurial, e.g. Mozilla, Python.
The central hub approach works well - it's a familiar working model to users of subversion and you've always got a "definitive" version. Lock down access to this and apply any hooks to enforce commit policies and after that, developers have a large amount of freedom to work how they like with their local copies.
Another big plus is that I've found merging much less painful with mercurial than with subversion.
What's trickier with a DVCS is managing binary files - you can't require a lock on a binary file like you can with subversion (amongst others). Manage this with communication ideally.
Finally, cloning a repo is great for keeping checkouts in sync if you're working from several PCs.
Hope this helps.
K
I think the main benefit of a DVCS comes when you want to push your changes directly to other people (or machines, e.g. taking the repository home with you), without going through a central hub. If you have the need to do this, a DVCS is definitely the way to go. If, as you say,
all you are doing with a DVCS is creating a central hub that everyone pushes their changes to
then you’re not really taking advantage of the main purpose of a DVCS and I would say SVN is sufficient.
P.S. One might also make the argument that a DVCS encourages users to commit more often since they can do so in their personal repository and only publish their changes when they’re ready — but this can be easily accomplished in SVN using branches, with the only “downside” being that “personal” commits increment the version number of the whole repository.
Even with a hub workflow, a DVCS gets you the ability to make small commits locally, merge them only when you want to, and push them when they are ready.
With a non-DVCS, you are forced to either:
do your work without committing, until it's polished and you push a huge commit.
make small commits as you go, which everyone has to merge often, though merging intermediate commits brings them nothing of value.
And if you explore a dead end without a DVCS: with the first method you can't rewind, since you don't have a commit to go back to; with the second, both your commits and their reverts have to be merged pointlessly by everyone else.
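A minimal illustration of rewinding a local dead end, assuming Git (the commit message is made up):
    git commit -am "Try approach A"   # local checkpoint nobody else ever sees
    # ...approach A turns out to be a dead end...
    git reset --hard HEAD~1           # throw it away; the central repo never noticed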
Personally, I think the biggest advantage of a DVCS is that you commit (locally) before merging, so if halfway through the merge it turns out to be more complex than you originally thought, it is trivial to get back to a clean state without losing your work. Compare this to a CVCS, where you usually have to merge successfully before you can commit.
Additional advantages include:
Working from home or at a client's site becomes easier, as you don't need a network connection just to check something in; and if you wait until you're back at base to push changes, the history is preserved rather than everything being lumped into one change.
Most DVCS operations are actually a lot faster, as they don't need to pull data over the network.
Some things (e.g. user settings scripts) are better shared directly between the developers who want them rather than via a central location.
In my experience there are several ways to use a DVCS inside a corporate environment:
Multi-site support: you have teams at separate locations and you use your DVCS to set up different "servers" at each location, so they're not limited by the underlying network problems (and believe me, there will be some). It used to be done with "big things" like ClearCase MultiSite or WANdisco (for SVN/CVS), but now it's quite doable with DVCS systems.
Support for "roaming users": you're a corporate developer but you want to work at home for a while (or permanently): instead of relying on the VPN, you can have a DVCS on your laptop and then you're free to commit, review, diff, branch and merge without being slowed down by the central server. You sync back when you're online or back at the office.
True "distributed development": the extreme case, with each developer having his own DVCS (as you'd do in the OSS world). It will really depend on the team's skills and motivation: if the team really wants to move to it, they'll benefit; otherwise it will be a sysadmin's nightmare having to manage not a single repo but hundreds... with their corresponding issues.
The overhead is not so big; in fact, in our environment, the added hg push is less of an overhead than committing to the central svn repo was. But the biggest plus is all the bells and whistles that come with Mercurial, which are great for an individual developer regardless of team size or workflow. First and foremost, the fact that every working copy is a repo is great, since you can experiment much more freely without polluting the master repo. Then there is functionality that builds on the working-copy == repo equality: bisect to quickly find the revision where a bug sneaked in, grep to, well, grep the history, as well as functionality simply missing from Subversion, like colored diffs in the terminal.
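A quick sketch of those history tools in Mercurial (the revision number and search pattern are purely illustrative):
    hg bisect --reset
    hg bisect --bad                # the current revision is broken
    hg bisect --good 1200          # revision 1200 was known to work
    # hg now checks out revisions for you to test; mark each --good or --bad until the culprit is found
    hg grep --all "some_function"  # search for a pattern throughout the history of tracked files
    hg diff                        # with the color extension enabled, this gives colored diffs in the terminal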
Bazaar VCS can work as a distributed VCS and as a centralized VCS, so you have the freedom to select the workflow you need. At the same time, local private branches (where people can experiment while working on new features and still commit their progress regularly) are a huge benefit.
A DVCS also makes for a natural development workflow when mandatory code review is required before new changes land on trunk. This workflow (in terms of SVN) is described brilliantly in the UQDS article. And despite the fact that the article describes an SVN-based workflow, you'll find it more natural with any DVCS, because in a DVCS branching and merging are basic, first-class operations.
It seems rather common (around here, at least) for people to recommend SVN to newcomers to source control because it's "easier" than one of the distributed options. As a very casual user of SVN before switching to Git for many of my projects, I found this to be not the case at all.
It is conceptually easier to set up a DVCS repository with git init (or whichever), without the problem of having to set up an external repository as in the case of SVN.
And the base functionality between SVN, Git, Mercurial, Bazaar all use essentially identical commands to commit, view diffs, and so on. Which is all a newcomer is really going to be doing.
The small difference in the way Git requires changes to be explicitly added before they're committed, as opposed to SVN's "commit everything" policy, is conceptually simple and, unless I'm mistaken, not even an issue when using Mercurial or Bazaar.
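As a rough side-by-side sketch of the day-one commands (the repository path is just a placeholder):
    git init && git add . && git commit -m "First commit"    # Git: explicit add, then commit
    hg init && hg addremove && hg commit -m "First commit"   # Mercurial: much the same
    svnadmin create /path/to/repo                             # SVN first needs a separate repository...
    svn checkout file:///path/to/repo wc                      # ...and then a working copy checked out from it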
So why is SVN considered easier? I would argue that this is simply not true.
If you use version control only for yourself, SVN is probably harder, since the setup is harder. If, however, you want to work with multiple developers over the web, a server-side system has advantages:
You have one central place, that always has the official state-of-the-art source, being the SVN server
Since everyone always merges his changes against a central server, there are much less collisions and much less manual fixing of collisions
You have central control over the source
You have an official revision number instead of a revision hash, which is the same among all developers and shows the official progress (it's a counting-up number, unlike a hash, which is just an identity fingerprint, so you can see which code is newer or older just by this number).
A distributed versioning system is A Very Good Thing (tm), but I find the primary barrier to adoption being educating users on the new possibilities a new SCM gives.
This, coupled with an often lackluster set of UI tools (half-finished Tortoise implementations etc.), brings a blank stare to the eyes of many co-workers who long since forswore the command line for the sake of a good UI tool.
Also, with tools like CVS I find that people loathe branching and merging because they really really don't want to be stuck an entire day doing three way merges, often not really sure which would be the right merge to do.
My point is: Start out by telling people what they gain (not just "hey watch this new cool toy"), and prep them to the fact that using a commandline IS the way to go and that frequent constant time branching is a good thing.
Many systems, such as Mercurial, come with a complete patch-queue system, meaning that from a continuous integration standpoint you know that whatever goes into production has been approved by QA. Stuff like this is hard to do properly with CVS or SVN.
With Mercurial people would have private repositories for their current work and all developers share a developer-tree on a server. The CI system monitors the developer-tree and pulls all changes, builds, and performs unittests. If everything passes it propagates the changes to a Testing-tree from where a deliverable is built for the QA persons. Every changeset that is added gets a token. When QA deems a feature to be complete, they annotate the Testing tree with this token, and the associated changesets are then automatically propagated to the Production-tree.
Using this approach you never commit anything by hand to the production branch or the testing branch. Rather, the state of the code and the sign-off from QA determine the contents of your production branch.
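A very rough sketch of that promotion flow with Mercurial (the repository URLs, the tag name, and the revision placeholder are all invented for illustration):
    hg pull -u http://server/dev-tree                # CI server pulls the developers' shared tree
    # ...build and run the unit tests...
    hg push http://server/testing-tree               # on success, propagate the changesets to the Testing tree
    # later, when QA signs off on a feature:
    hg tag --rev <approved-changeset> qa-approved    # annotate the approved changeset with a token
    hg push --rev <approved-changeset> http://server/production-tree   # push only that approved history to Production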
I believe it is conceptually simpler to think of a centralized repository where each developer commits his work vs. multiple copies of the entire repository, none of which represents the 'truth'. Since most developers are familiar with the notion of a client-server model and backend database, this is a natural concept.
Of course, the very strength of distributed source control systems is that they don't have to adhere to this model, but for a newcomer, it seems easier to grasp.
SVN forces a single work-line. You can either commit, or you can't. Having had similar experiences, I don't find Hg or Git hard to use; however, I work on a team who seem to find them nightmarish to use.
The whole concept of auto-branching, multiple heads, when to and when not to merge completely eludes them, not to mention they have trouble breaking free of the "commit/checkout with $somecore" mentality which leads to complete confusion when they see pushes between 2 checked out copies.
( Had a problem for a while where somebody repeatedly merged 2 branches that were not supposed to be merged because they were unable to grasp the concept. )
However, the same people whom have trouble with distributed SCMS caused me to ask this question
Edit/note: the biggest difference I've noted in Mercurial vs. Git that makes Mercurial harder for newbies is that Mercurial's default push/pull behaviour is like doing git push --all / git pull --all, which can propagate private branches and add lots of confusion (especially since, when a new branch turns up, Mercurial freezes in fear and asks you how to handle it instead of just carrying on), as well as the default merge/conflict-resolution tool on Mac just blindly clobbering one set of changes.
I think the toolset for SVN is much broader, so you could sit down and teach people (TortoiseSVN, RapidSVN etc) even if they did not have much conceptual idea of how the repository worked. It is also relatively easy to get SVN hosted for you (with trac for example) without needing to know anything. Distributed ones have not had this backing yet and I am sure opinions of them will change when they do.
I would argue that setting up a repository with a DVCS is practically easier, but conceptually harder. After all, with a centralized VCS the users do not set up their own repository, they just create an account on Assembla or have the repo set up for them.
DVCS is currently lacking good desktop clients. Despite what most people say, version control systems can be quite hard to use correctly so a good desktop client can really help - and here TortoiseSVN excels.
We struggle to make it as easy as possible at Codice, but it's always a little bit harder to explain, of course, it depends on the audience.
For OSS projects and small teams, especially people working on their laptops and moving here and there, working at home, sometimes on a plane, and so on, it's pretty easy. But whenever you talk to corporations/enterprises, they get excited about its multi-site role, but not so much about "distributed" at first glance. It all depends on whether the group has a majority of advanced developers or not.
It's poor marketing, simple as that. Far too many DVCS introductions focus on the command line and say "wow, isn't it fantastic, you can do a merge just by typing hg merge" completely oblivious to the fact that many people (especially in Windows land) are terrified of the command line. Yes, Joel Spolsky, I'm looking at your own hginit.com here -- we need a TortoiseHg version please!
Maybe it was the case two years ago that they had poor GUI implementations, but they've come on in leaps and bounds recently. TortoiseHg is now in version 1.0, and while it may not be anything to write home about visually, it's pretty solid, stable, and easy to use. TortoiseGit is also rock solid, and it does a great job of abstracting away all the complexities of the git command line.
I'd like to hear from people who are using distributed version control (aka distributed revision control, decentralized version control) and how they are finding it. What are you using, Mercurial, Darcs, Git, Bazaar? Are you still using it? If you've used client/server rcs in the past, are you finding it better, worse or just different? What could you tell me that would get me to jump on the bandwagon? Or jump off for that matter, I'd be interested to hear from people with negative experiences as well.
I'm currently looking at replacing our current source control system (Subversion) which is the impetus for this question.
I'd be especially interested in anyone who's used it with co-workers in other countries, where your machines may not be on at the same time, and your connection is very slow.
If you're not sure what distributed version control is, here are a couple articles:
Intro to Distributed Version Control
Wikipedia Entry
I've been using Mercurial both at work and in my own personal projects, and I am really happy with it. The advantages I see are:
Local version control. Sometimes I'm working on something, and I want to keep a version history on it, but I'm not ready to push it to the central repositories. With distributed VCS, I can just commit to my local repo until it's ready, without branching. That way, if other people make changes that I need, I can still get them and integrate them into my code. When I'm ready, I push it out to the servers.
Fewer merge conflicts. They still happen, but they seem to be less frequent, and are less of a risk, because all the code is checked in to my local repo, so even if I botch the merge, I can always back up and do it again.
Separate repos as branches. If I have a couple of development vectors running at the same time, I can just make several clones of my repo and develop each feature independently (see the sketch after this list). That way, if something gets scrapped or slipped, I don't have to pull pieces out. When they're ready to go, I just merge them together.
Speed. Mercurial is much faster to work with, mostly because most of your common operations are local.
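A minimal sketch of that clone-per-feature idea in Mercurial (directory and feature names are only examples):
    hg clone project project-feature-x      # each clone is an independent line of development
    cd project-feature-x
    # ...hack and commit locally...
    cd ../project
    hg pull ../project-feature-x             # when feature X is ready, pull its changesets back
    hg merge                                 # merge (needed if the main repo also gained new changesets)
    hg commit -m "Merge feature X"
    # a scrapped feature is simply a directory you delete; nothing has to be backed out of the main repo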
Of course, like any new system, there was some pain during the transition. You have to think about version control differently than you did when you were using SVN, but overall I think it's very much worth it.
At the place where I work, we decided to move from SVN to Bazaar (after evaluating git and mercurial). Bazaar was easy to start off, with simple commands (not like the 140 commands that git has)
The advantages that we see are the ability to create local branches and work on them without disturbing the main version, and being able to work without network access; doing diffs is also faster.
One command in bzr which I like is the shelve extension. If you start working on two logically different pieces of code in a single file and want to commit only one piece, you can use the shelve extension to literally shelve the other changes for later. In Git you can do the same by playing around in the index (staging area), but bzr has a better UI for it.
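A quick sketch of shelving in Bazaar (the commit message is just an example):
    bzr shelve                   # interactively pick which pending hunks to set aside
    bzr commit -m "Commit only the first logical change"
    bzr unshelve                 # bring the shelved changes back and carry on working
    # the rough Git equivalent is staging hunk-by-hunk with git add -p, or using git stash -p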
Most of the people were reluctant to move over as they have to type in two commands to commit and push (bzr ci + bzr push). Also it was difficult for them to understand the concept of branches and merging (no one uses branches or merges them in svn).
Once everyone understands that, it will increase the developers' productivity. Until everyone does, there will be inconsistent behaviour across the team.
At my workplace we switched to Git from CVS about two months ago (the majority of my experience is with Subversion). While there was a learning curve involved in becoming familiar with the distributed system, I've found Git to be superior in two key areas: flexibility of working environment and merging.
I don't have to be on our VPN, or even have network connectivity at all, to have access to full versioning capabilities. This means I can experiment with ideas or perform large refactorings wherever I happen to be when the urge strikes, without having to remember to check in that huge commit I've built up or worrying about being unable to revert when I make a mess.
Because merges are performed client-side, they are much faster and less error-prone than initiating a server-side merge.
My company currently uses Subversion, CVS, Mercurial and git.
When we started five years ago we chose CVS, and we still use that in my division for our main development and release maintenance branch. However, many of our developers use Mercurial individually as a way to have private checkpoints without the pain of CVS branches (and particularly merging them) and we are starting to use Mercurial for some branches that have up to about 5 people. There's a good chance we'll finally ditch CVS in another year. Our use of Mercurial has grown organically; some people still never even touch it, because they are happy with CVS. Everyone who has tried Mercurial has ended up being happy with it, without much of a learning curve.
What works really nicely for us with Mercurial is that our (home brewed) continuous integration servers can monitor developer Mercurial repositories as well as the mainline. So, people commit to their repository, get our continuous integration server to check it, and then publish the changeset. We support lots of platforms so it is not feasible to do a decent level of manual checks. Another win is that merges are often easy, and when they are hard you have the information you need to do a good job on the merge. Once someone gets the merged version to work, they can push their merge changesets and then no one else has to repeat the effort.
The biggest obstacle is that you need to rewire your developers' and managers' brains so that they get away from the single linear branch model. The best medicine for this is a dose of Linus Torvalds telling you you're stupid and ugly if you use centralised SCM. Good history visualisation tools would help, but I'm not yet satisfied with what's available.
Mercurial and CVS both work well for us with developers using a mix of Windows, Linux and Solaris, and I've noticed no problems with timezones. (Really, this isn't too hard; you just use epoch seconds internally, and I'd expect all the major SCM systems get this right).
It was possible, with a fair amount of effort, to import our mainline CVS history into Mercurial. It would have been easier if people had not deliberately introduced corner cases into our mainline CVS history as a way to test history migration tools. This included merging some Mercurial branches into the CVS history, so the project looks like it was using Mercurial from day one.
Our silicon design group chose Subversion. They are mainly eight timezones away from my office, and even over a fairly good dedicated line between our offices Subversion checkouts are painful, but workable. A big advantage of centralised systems is that you can potentially check big binaries into them (e.g. vendor releases) without making all the distributed repositories huge.
We use git for working with Linux kernel. Git would be more suitable for us once a native Windows version is mature, but I think the Mercurial design is so simple and elegant that we'll stick with it.
Not using distributed source control myself, but maybe these related questions and answers give you some insights:
Distributed source control options
Why is git better than Subversion
I personally use the Mercurial source control system. I've been using it for a bit more than a year now. It was actually my first experience with a VCS.
I tried Git, but never really pushed into it because I found it was too much for what I needed. Mercurial is really easy to pick up if you're a Subversion user since it shares a lot of commands with it. Plus I find the management of my repositories to be really easy.
I have 2 ways of sharing my code with people:
I share a server with a co-worker and we keep a main repo for our project.
For some OSS projects I work on, we create patches of our work with Mercurial (hg export) and the maintainer of the project just applies them to the repository (hg import).
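A small sketch of that patch exchange (the patch file name is just an example):
    hg export --output my-fix.patch tip    # contributor: write the latest changeset out as a patch file
    # ...send my-fix.patch to the maintainer by mail, bug tracker, etc....
    hg import my-fix.patch                 # maintainer: apply it, keeping the original commit message and author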
Really easy to work with, yet very powerful. But generally, choosing a VCS really depends on your project's needs...
Back before we switched off of Sun workstations for embedded systems development, we were using Sun's TeamWare solution. TeamWare is a fully distributed solution using SCCS as the local repository file-revision system, wrapped with a set of tools to handle the merging operations (done through branch renaming) back to the centralized repositories, of which there can be many. In fact, because it is distributed, there really is no master repository per se (except by convention, if you want one) and all users have their own copies of the entire source tree and revisions. During "put back" operations, the merge tool, using 3-way diffs, algorithmically sorts out what is what and allows you to combine the changes from different developers that have accumulated over time.
After switching to Windows for our development platform, we ended up switching to AccuRev. While AccuRev, because it depends on a centralized server, is not truly a distributed solution, logically its workflow model comes very close. Where TeamWare would have had completely separate copies of everything at each client, including all the revisions of all files, under AccuRev this is maintained in the central database and the local client machines only have the flat-file current version of things for editing locally. However, these local copies can be versioned through the client connection to the server and tracked completely separately from any other changes (i.e. branches) implicitly created by other developers.
Personally, I think the distributed model implemented by TeamWare, or the sort of hybrid model implemented by AccuRev, is superior to completely centralized solutions. The main reason for this is that there is no notion of having to check out a file or having a file locked by another user. Also, users don't have to create or define the branches; the tools do this for you implicitly. When there are larger teams or different teams contributing to or maintaining a set of source files, this resolves "tool-generated" locking-related collisions and allows code changes to be coordinated more at the level of the developers, who ultimately have to coordinate changes anyway. In a sense, the distributed model allows for a much finer-grained "lock" rather than the coarse-grained locking instituted by the centralized models.
Have used darcs on a big project (GHC) and for lots of small projects. I have a love/hate relationship with darcs.
Pluses: incredibly easy to set up repository. Very easy to move changes around between repositories. Very easy to clone and try out 'branches' in separate repositories. Very easy to make 'commits' in small coherent groups that makes sense. Very easy to rename files and identifiers.
Minuses: no notion of history---you can't recover 'the state of things on August 5'. I've never really figured out how to use darcs to go back to an earlier version.
Deal-breaker: darcs does not scale. I (and many others) have gotten into big trouble with GHC using darcs. I've had it hang with 100% CPU usage for 9 days trying to pull in 3 months' worth of changes. I had a bad experience last summer where I lost two weeks trying to make darcs function and eventually resorted to replaying all my changes by hand into a pristine repository.
Conclusion: darcs is great if you want a simple, lightweight way to keep yourself from shooting yourself in the foot for your hobby projects. But even with some of the performance problems addressed in darcs 2, it is still not for industrial strength stuff. I will not really believe in darcs until the vaunted 'theory of patches' is something a bit more than a few equations and some nice pictures; I want to see a real theory published in a refereed venue. It's past time.
I really love Git, especially with GitHub. It's so nice being able to commit and roll back locally. And cherry-picking merges, while not trivial, is not terribly difficult, and far more advanced than anything Svn or CVS can do.
My group at work is using Git, and it has been all the difference in the world. We were using SCCS and a steaming pile of csh scripts to manage quite large and complicated projects that shared code between them (attempted to, anyway).
With Git, submodule support makes a lot of this stuff easy, and only a minimum of scripting is necessary. Our release engineering effort has gone way, way down because branches are easy to maintain and track. Being able to cheaply branch and merge really makes it reasonably easy to maintain a single collection of sources across several projects (contracts), whereas before, any disruption to the typical flow of things was very, very expensive. We've also found the scriptability of Git to be a huge plus, because we can customize its behavior through hooks or through scripts that do . git-sh-setup, and it doesn't seem like a pile of kludges like before.
We also sometimes have situations in which we have to maintain our version control across distributed, non-networked sites (in this case, disconnected secure labs), and Git has mechanisms for dealing with that quite smoothly (bundles, the basic clone mechanism, formatted patches, etc).
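A rough sketch of moving history between disconnected sites with bundles (Git; the file and branch names are only illustrative):
    git bundle create project.bundle master   # at the connected site: pack the branch's history into a single file
    # ...carry project.bundle across the air gap on removable media...
    git clone project.bundle project           # at the disconnected site: clone straight from the bundle
    cd project
    git pull ../project.bundle master          # or pull incremental updates from a later bundle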
Some of this is just us stepping out of the early 80s and adopting some modern version control mechanisms, but Git "did it right" in most areas.
I'm not sure of the extent of answer you're looking for, but our experience with Git has been very, very positive.
Using Subversion with SourceForge and other servers over a number of different connections with medium sized teams and it's working very well.
I am a huge proponent of centralized source control for a lot of reasons, but I did try BitKeeper on a project briefly. Perhaps after years of using a centralized model in one format or another (Perforce, Subversion, CVS) I just found distributed source control difficult to use.
I am of the mindset that our tools should never get in the way of the actual work; they should make work easier. So, after a few head-pounding experiences, I bailed. I would advise doing some really thorough tests with your team before rocking the boat, because the model is very different from what most devs are probably accustomed to in the SCM world.
I've used bazaar for a little while now and love it. Trivial branching and merging back in give great confidence in using branches as they should be used. (I know that central vcs tools should allow this, but the common ones including subversion don't allow this easily).
bzr supports quite a few different workflows from solo, through working as a centralised repository to fully distributed. With each branch (for a developer or a feature) able to be merged independently, code reviews can be done on a per branch basis.
bzr also has a great plugin (bzr-svn) allowing you to work with a Subversion repository. You can make a copy of the svn repo (which initially takes a while as it fetches the entire history for your local repo). You can then make branches for different features. If you want to do a quick fix to the trunk while halfway through your feature, you can make an extra branch, work in that, and then merge back to trunk, leaving your half-done feature untouched and outside of trunk. Wonderful. Working against Subversion has been my main use so far.
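A loose sketch of that bzr-svn flow (the SVN URL and branch names are invented, and the exact push step can vary with how bzr-svn is configured):
    bzr branch svn+https://svn.example.com/repo/trunk trunk   # one-time mirror of the SVN trunk (slow the first time)
    bzr branch trunk feature-x                                  # local branch for the ongoing feature
    bzr branch trunk quick-fix                                   # separate branch for an urgent trunk fix
    cd quick-fix
    # ...make the fix, then commit and send it back to Subversion...
    bzr commit -m "Quick fix"
    bzr push svn+https://svn.example.com/repo/trunk
    # feature-x remains untouched and can be merged into trunk whenever it's finished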
Note I've only used it on Linux, and mostly from the command line, though it is meant to work well on other platforms, has GUIs such as TortoiseBZR and a lot of work is being done on integration with IDEs and the like.
I'm playing around with Mercurial for my home projects. So far, what I like about it is that I can have multiple repositories. If I take my laptop to the cabin, I've still got version control, unlike when I ran CVS at home. Branching is as easy as hg clone and working on the clone.
Using Subversion
Subversion isn't distributed, so that makes me think I need a wikipedia link in case people aren't sure what I'm talking about :)
Been using darcs 2.1.0 and it's great for my projects. Easy to use. Love cherry-picking changes.
I use Git at work, together with one of my coworkers. The main repository is SVN, though. We often have to switch workstations and Git makes it very easy to just pull changes from a local repository on another machine. When we're working as a team on the same feature, merging our work is effortless.
The git-svn bridge is a little wonky, because when checking into SVN it rewrites all the commits to add its git-svn-id comment. This destroys the nice history of merges between my coworker's repo and mine. I predict that we wouldn't use a central repository at all if every team member were using Git.
You didn't say what OS you develop on, but Git has the disadvantage that you have to use the command line to get all the features. Gitk is a nice GUI for visualizing the merge history, but the merging itself has to be done manually. Git-Gui and the Visual Studio plugins are not that polished yet.
We use distributed version control (Plastic SCM) for both multi-site and disconnected scenarios.
1- Multi-site: if you have distant groups, sometimes you can't rely on the internet connection, or it's not fast enough and slows down developers. Then having an independent server at each site which can synchronize back (Plastic replicates branches back and forth) is very useful and speeds things up. It's probably one of the most common scenarios for companies, since most of them are still wary of "totally distributed" practices where each developer has his own replicated repository.
2- Disconnected (or truly distributed, if you prefer): every developer has his own repository, which is replicated back and forth with his peers or the central location. It's very convenient to go to a customer's location or just go home with your laptop, and continue being able to switch branches, check out and check in code, look at the history, run annotates and so on, without having to access the remote "central" server. Then whenever you go back to the office you just replicate your changes (normally branches) back with a few clicks.