I've done a lot of reading, and have been trialing Git, TortoiseGit, TortoiseSVN and PlasticSCM, to find the right source control for our small team (5-10 users).
Some background on our team: 6 copy writers/editors (2 remote), 2 developers, 2 graphic designers. We are not always working on projects together; sometimes up to 5 of us might be working on a given project. I'm unconcerned about the developers with DVCS; my concern is mainly around the other roles, who are (in the nicest way) limited in their technical capability. Some of our copy writers update multiple source files (HTML, PDFs, concept graphics) in live, unversioned build directories (backed up as build.23.06.11.new.new.final.zip!). The copy and GD teams will not have time or, to be brutally honest, the inclination to merge/resolve conflicts, or probably even remember to switch branches.
A few SO questions have shed light on what seems to be a fairly consistent approach: a main trunk (no junk in the trunk!) with teams having their own branches, release branches, and so on.
Every time I've re-read the links...
https://stackoverflow.com/questions/3854583/version-control-system-for-small-in-house-team
Getting started with Version Control
http://svn-ref.assembla.com/subversion-how-tos.html
...and Google in general, I still end up asking myself the same questions:
1. Is it a Bad Idea to create role-specific branches for "trouble points" (copy team), where they can push to the repo, then our developers will merge their work into the actual project branch?
2. Should I still try to enforce a task-per-branch for everyone else?
3. Should I do task-per-branch for everyone but let the copy team create very broad tasks?
4. Is there usually a team/group/person who is considered an "admin" role for a repo who does crucial merges?
5. (Is there an alternative suggested workflow where copy writers don't touch source?)
Unfortunately, the copy teams play a vital role in updating files which in turn affect layouts and all sorts of things, on a continual basis during dev. It's not like I can keep them in a bubble until the end of a project and chuck their work in.
... the good news is that, after a number of years, I'm hopefully ready to force everyone to move to version control! We've also settled on PlasticSCM for its intuitive GUI and Windows integration.
The best answer to this question would try to answer the 4 points above (tackle point 5 if you like), explain weak points where possible, and provide advice, gotchas, etc.
cheers!
So basically you want to know how to get team members of different skill levels to use SCM and play nice with each other.
Buy-in from your team is priority #1. If you can't make them learn it, then you're left with providing a path of least resistance, so you really need to be flexible. There might be a wrong way and a right way to use the tool, but if the users won't accept the right way, then the wrong way is better than them not using it at all. How you achieve this balance is going to be different for every team.
Is it a Bad Idea to create role-specific branches for "trouble points" (copy team), where they can push to the repo, then our developers will merge their work into the actual project branch?
No. Maybe it's not optimal, but if this makes it easy for the copy team, then that's what you're left with. You could probably go even further and set up each user with their own branch; then they never have to worry about merging other people's changes.
Should I still try to enforce a task-per-branch for everyone else?
Each dev should have a unique "local" branch that is not tracking an upstream branch. For example, use something generic like mydev. This makes it easy for them to switch between their local code and the current upstream branch.
You don't necessarily need to force everyone to create a local branch for every task, because in the end you're going to want them to just rebase their working branch onto the upstream one and commit, so it becomes a fast-forward (i.e. linear commit).
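As a rough sketch of that flow (assuming a remote named origin and an upstream branch named master; adjust to taste):

    git checkout -b mydev          # one-time: local branch, no upstream tracking
    # ... hack and commit locally ...
    git fetch origin               # pick up the latest upstream work
    git rebase origin/master       # replay local commits on top of it
    git checkout master
    git merge mydev                # now just a fast-forward, so history stays linear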
Now, for tasks that multiple devs are working on, or features that involve groups of smaller commits, then yes, it does make sense to force them to create a new, task-specific branch. When they merge, they can make sure to force a merge commit, so it is clear that a set of commits are grouped together and were all part of a specific task. The merge commit will display as something like "Merged branch feature-X".
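For example, assuming a task branch named feature-X, forcing that merge commit is one flag away:

    git checkout master
    git merge --no-ff feature-X    # --no-ff forces a merge commit even when a
                                   # fast-forward is possible, keeping the group visible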
Should I do task-per-branch for everyone but let the copy team create very broad tasks?
It really comes down to how much buy-in you can get from the copy team. I think if they really get confused by the DVCS tools, then you have to scale back until you find something that does not cause too much of an impact.
One solution is to have one of your devs help integrate the copy team's changes into another branch that everyone else will look at. That helps offload the learning curve of the tool onto someone outside the copy team.
Is there usually a team/group/person who is considered an "admin" role for a repo who does crucial merges?
Yes, this makes sense. However, the great thing about SCM is that everyone can go back and review a merge. So if a merge breaks the code, you can either append the corrections after the merge, or remove the merge and do it over.
(is there an alternative suggested workflow where copy writers don't touch source?)
Well, one possible technique is the Integration Manager model: the developers commit changes to their own shared repos, but it's up to the integration manager to merge the changes into the blessed repository.
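A minimal sketch of that model in Git terms (the repository URLs and names here are made up for illustration):

    # developer: publish work to a personal public repo
    git push ssh://host/repos/alice.git mydev

    # integration manager: pull each developer's work into the blessed repo
    git fetch ssh://host/repos/alice.git mydev
    git merge FETCH_HEAD           # review, then merge the fetched branch
    git push ssh://host/repos/blessed.git master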
I'm sure there are other methods that might work for your users, but this question is slightly ambiguous.
I work in an engineering lab, not a computer science lab. As such, our in-house software is not the deliverable product. Instead, the in-house software is used to analyze engineering problems, and we deliver the results.
This makes version control a living hell. Or perhaps I should just say that the standard "trunk and branch" version control tree structure doesn't seem to apply. I'm hoping someone can suggest a better way of doing things.
For example, each engineering project requires adding case specific input files, run-time files, and post-processing files. None of these really belong in trunk, because they aren't general, but each new project needs these files. We tried putting templates in trunk but there was no clear best practice as to when templates should be merged up.
Similarly, the in-house code is always evolving as we add new capabilities. Many of these should be merged into trunk so they will be available for future applications. However, there are also quite a few case-specific hacks which the trunk doesn't need to see.
How should we organize this mess? Obviously, the simpler the better.
We really try for our projects to keep separate:
source files (managed in any VCS of your choice, like SVN)
configuration files (specific to a team or an environment)
Branches are for development effort and those "input files, run-time files, and post-processing files" will evolve at their own pace.
For that kind of file, what we manage in a VCS are:
templates
scripts able to take that template and generate the (private, as in not versioned) config file with the right values in it (see the sketch below).
The values come from another referential, like a database, where the teams (or environment administrators) can update them at will, without any concern about checkout/check-in/merge.
That database can then be versioned in its own VCS if needed (see this SO question for instance, or, as an alternative, that one)
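To make that concrete, here is a minimal sketch of such a generation script in shell; the template name, the placeholder syntax and the get_value helper (which would query the referential) are all assumptions:

    #!/bin/sh
    # Build the private (unversioned) config from the versioned template.
    DB_HOST=$(get_value db_host)   # hypothetical lookup against the database
    DB_PORT=$(get_value db_port)
    sed -e "s/@DB_HOST@/$DB_HOST/" \
        -e "s/@DB_PORT@/$DB_PORT/" \
        app.conf.template > app.conf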
In engineering, version control is often underestimated, even though it is essential for restoring a given set of settings in order to repeat experiments. For general adoption, easy-to-use, mostly GUI-oriented tools help a lot.
Leveraging version control with issue tracking that relates issues to code commits increases productivity tremendously.
Concerning repository structure, at least looking at Subversion, there are just conventions, not strict rules imposed by the tool. How about having a tree called 'trunk' where all common code is managed?
For every engineering task, a branch is created, which is nothing more than a 'project folder' under version control. Source code relevant to other projects is merged back to trunk.
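In Subversion terms, a sketch of that branch-per-task flow might look like this (repository URL and task name are placeholders):

    # create the 'project folder' branch for a new engineering task
    svn copy http://server/repo/trunk \
             http://server/repo/branches/task-1234 \
             -m "Create branch for task 1234"
    # later, from an up-to-date trunk working copy, fold the branch back
    svn merge http://server/repo/branches/task-1234   # pre-1.8 servers want --reintegrate
    svn commit -m "Merge task 1234 back to trunk"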
I'm wondering if there is any best practice for maintaining your source code under version control among different companies. In open source there is a maintainer, who receives patches, decides on them and applies them. But what about closed-source projects where different companies get different workloads and just commit them to the trunk and branches? Is this maintainer concept applicable to a project that multiple companies work on?
You can choose from a wide range of version control systems (not only Subversion).
With the "versioning" concept you are safe in the knowledge that no one can damage the project permanently.
So there is no need for a manual approval process, especially when there are, for example, contracts between the participating companies.
I'd also set up a commit mailing list so you have some kind of peer review of changes; that way, no change can slip in unnoticed.
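One common way to wire that up with Subversion is a post-commit hook that mails a summary of each revision; a rough sketch (the list address and the mail command are assumptions):

    #!/bin/sh
    # hooks/post-commit -- Subversion passes the repository path and revision
    REPOS="$1"
    REV="$2"
    {
      svnlook info "$REPOS" -r "$REV"      # author, date, log message
      svnlook changed "$REPOS" -r "$REV"   # list of changed paths
    } | mail -s "commit r$REV" dev-commits@example.com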
If applicable, set up some kind of continuous integration environment to keep the quality up.
I don't understand the question about the branches. The decision whether to use them does not, IMHO, depend on whether the committers are employed by the same company or not.
It's really up to you to decide which workflow works best for the companies involved. Subversion has the ability to add permissions to your trunk and branches, allowing you to lock down certain parts of your repository to people who are "trusted" with merge access to trunk. You'll need good communication amongst the companies. The open-source Trac provides a wiki, integrated RSS feeds of commits to the project, and a code browser.
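With Subversion's path-based authorization, that lockdown amounts to a few lines in the repository's authz file; a sketch (paths and usernames invented):

    # append path-based rules to the repository's authz file
    cat >> /var/svn/repo/conf/authz <<'EOF'
    [/trunk]
    * = r
    trusted-dev = rw

    [/branches]
    * = rw
    EOF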
Usually, each site works on its dedicated branch and can import the other remote site branch, to decide what to integrate in its own work.
But if a site needs to work directly on the other site's branch, one possible practice is the concept of branch membership, which allows only one site at a time to work on a given branch.
(not sure it is possible with SVN though)
That allows two remote sites (with a large time shift) to work on the same task in a tightly integrated manner.
My recommendation: Subversion. Once it's configured, you give out a URL; everyone checks out, updates, and gets things done, and when you judge the project ready, you snapshot (tag) it and deliver.
It seems rather common (around here, at least) for people to recommend SVN to newcomers to source control because it's "easier" than one of the distributed options. As a very casual user of SVN before switching to Git for many of my projects, I found this to be not the case at all.
It is conceptually easier to set up a DVCS repository with git init (or whichever), without the problem of having to set up an external repository, as in the case of SVN.
And for the base functionality, SVN, Git, Mercurial and Bazaar all use essentially identical commands to commit, view diffs, and so on, which is all a newcomer is really going to be doing.
The small difference in the way Git requires changes to be explicitly added before they're committed, as opposed to SVN's "commit everything" policy, is conceptually simple and, unless I'm mistaken, not even an issue when using Mercurial or Bazaar.
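A side-by-side look at the everyday commands backs this up; none of these is meaningfully harder than the others:

    svn diff                 # Subversion
    git diff                 # Git
    hg diff                  # Mercurial
    bzr diff                 # Bazaar

    svn commit -m "fix"      # Subversion commits everything changed
    git commit -am "fix"     # Git: -a stages tracked changes (the 'add' step above)
    hg commit -m "fix"       # Mercurial
    bzr commit -m "fix"      # Bazaar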
So why is SVN considered easier? I would argue that this is simply not true.
If you use version control only for yourself, SVN is probably harder, since the setup is harder. If, however, you want to work with multiple developers over the web, a server-based system has advantages:
You have one central place that always has the official state-of-the-art source: the SVN server
Since everyone always merges his changes against a central server, there are far fewer collisions and much less manual fixing of collisions
You have central control over the source
You have an official revision number instead of a revision hash that is the same among all developers and shows the official progress (it's an up-counting number, unlike a hash, which is just an identity fingerprint, so you can see which code is newer or older just by this number)
A distributed versioning system is A Very Good Thing (tm), but I find the primary barrier to adoption is educating users on the new possibilities a new SCM gives them.
This, coupled with an often lackluster set of UI tools (half-finished Tortoise implementations, etc.), brings a blank stare to the eyes of many co-workers who long since forswore the command line for the sake of a good UI tool.
Also, with tools like CVS, I find that people loathe branching and merging because they really, really don't want to be stuck for an entire day doing three-way merges, often not really sure which would be the right merge to do.
My point is: start out by telling people what they gain (not just "hey, watch this new cool toy"), and prepare them for the fact that using a command line IS the way to go and that frequent, constant-time branching is a good thing.
Many systems, such as Mercurial, come with a complete patch-queue system, meaning that from a continuous integration standpoint you know that whatever goes into production has been approved by QA. Stuff like this is hard to do properly with CVS or SVN.
With Mercurial, people would have private repositories for their current work, and all developers share a developer tree on a server. The CI system monitors the developer tree and pulls all changes, builds, and performs unit tests. If everything passes, it propagates the changes to a testing tree, from which a deliverable is built for the QA people. Every changeset that is added gets a token. When QA deems a feature complete, they annotate the testing tree with this token, and the associated changesets are then automatically propagated to the production tree.
Using this approach you will never commit anything by hand to the production branch or the testing branch. Rather, the state of the code and the sign-off from QA determine the contents of your production branch.
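Stripped to its essentials, the CI job in that setup is a pull-build-push loop; a sketch, with the tree URLs and build commands as placeholders:

    # CI job: vet the developer tree, then propagate
    hg clone http://server/developer-tree build
    cd build
    make && make test                    # build and run the unit tests
    # only on success do the changesets move on
    hg push http://server/testing-tree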
I believe it is conceptually simpler to think of a centralized repository where each developer commits his work vs. multiple copies of the entire repository, none of which represents the 'truth'. Since most developers are familiar with the notion of a client-server model and backend database, this is a natural concept.
Of course, the very strength of distributed source control systems is that they don't have to adhere to this model, but for a newcomer, it seems easier to grasp.
SVN forces a single work-line: you can either commit or you can't. Having had similar experiences, I don't find Hg or Git hard to use; however, I work on a team who seem to find them nightmarish to use.
The whole concept of auto-branching, multiple heads, and when to merge (and when not to) completely eludes them, not to mention they have trouble breaking free of the "commit/checkout with $somecore" mentality, which leads to complete confusion when they see pushes between two checked-out copies.
(We had a problem for a while where somebody repeatedly merged two branches that were not supposed to be merged, because they were unable to grasp the concept.)
However, the same people who have trouble with distributed SCMs caused me to ask this question.
Edit/note: Mercurial's biggest difference vs. Git that I've noted, and what makes Mercurial harder for newbies, is that its default push/pull behavior is like doing git push --all / git pull --all, which can propagate private branches and add lots of confusion (especially when a new branch turns up: Mercurial freezes in fear and asks you how to handle it instead of just keeping on trucking). The default merge/conflict-resolution tool on Mac also just clobbers one set of changes blindly.
I think the toolset for SVN is much broader, so you could sit down and teach people (TortoiseSVN, RapidSVN etc) even if they did not have much conceptual idea of how the repository worked. It is also relatively easy to get SVN hosted for you (with trac for example) without needing to know anything. Distributed ones have not had this backing yet and I am sure opinions of them will change when they do.
I would argue that setting up a repository with a DVCS is practically easier, but conceptually harder. After all, with a centralized VCS the users do not set up their own repository, they just create an account on Assembla or have the repo set up for them.
DVCS is currently lacking good desktop clients. Despite what most people say, version control systems can be quite hard to use correctly so a good desktop client can really help - and here TortoiseSVN excels.
We struggle to make it as easy as possible at Codice, but it's always a little bit harder to explain; of course, it depends on the audience.
For OSS projects and small teams, especially people working on their laptops and moving here and there, working at home, sometimes on a plane, and so on, it's pretty easy. But whenever you talk to corporations/enterprises, they get excited about the multi-site role, but not so much about distributed at first glance. It all depends on whether the group has a majority of advanced developers or not.
It's poor marketing, simple as that. Far too many DVCS introductions focus on the command line and say "wow, isn't it fantastic, you can do a merge just by typing hg merge" completely oblivious to the fact that many people (especially in Windows land) are terrified of the command line. Yes, Joel Spolsky, I'm looking at your own hginit.com here -- we need a TortoiseHg version please!
Maybe it was the case two years ago that they had poor GUI implementations, but they've come on in leaps and bounds recently. TortoiseHg is now in version 1.0, and while it may not be anything to write home about visually, it's pretty solid, stable, and easy to use. TortoiseGit is also rock solid, and it does a great job of abstracting away all the complexities of the git command line.
I'd like to hear from people who are using distributed version control (aka distributed revision control, decentralized version control) and how they are finding it. What are you using: Mercurial, Darcs, Git, Bazaar? Are you still using it? If you've used client/server revision control in the past, are you finding it better, worse, or just different? What could you tell me that would get me to jump on the bandwagon? Or jump off, for that matter; I'd be interested to hear from people with negative experiences as well.
I'm currently looking at replacing our current source control system (Subversion) which is the impetus for this question.
I'd be especially interested in anyone who's used it with co-workers in other countries, where your machines may not be on at the same time, and your connection is very slow.
If you're not sure what distributed version control is, here are a couple articles:
Intro to Distributed Version Control
Wikipedia Entry
I've been using Mercurial both at work and in my own personal projects, and I am really happy with it. The advantages I see are:
Local version control. Sometimes I'm working on something and I want to keep a version history on it, but I'm not ready to push it to the central repositories. With distributed VCS, I can just commit to my local repo until it's ready, without branching. That way, if other people make changes that I need, I can still get them and integrate them into my code. When I'm ready, I push it out to the servers (see the sketch after this list).
Fewer merge conflicts. They still happen, but they seem to be less frequent, and are less of a risk, because all the code is checked in to my local repo, so even if I botch the merge, I can always back up and do it again.
Separate repos as branches. If I have a couple development vectors running at the same time, I can just make several clones of my repo and develop each feature independently. That way, if something gets scrapped or slipped, I don't have to pull pieces out. When they're ready to go, I just merge them together.
Speed. Mercurial is much faster to work with, mostly because most of your common operations are local.
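A sketch of the first and third points above in Mercurial commands (the repo and feature names are invented):

    # local version control: commit freely, publish only when ready
    hg commit -m "work in progress"
    hg pull -u                    # still able to pull others' changes (-u updates)
    hg push                       # push to the server only when it's ready

    # separate repos as branches: one clone per development vector
    hg clone project project-featureX
    # ...develop in project-featureX; when it's ready:
    cd project
    hg pull ../project-featureX
    hg merge
    hg commit -m "Merge featureX"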
Of course, like any new system, there was some pain during the transition. You have to think about version control differently than you did when you were using SVN, but overall I think it's very much worth it.
At the place where I work, we decided to move from SVN to Bazaar (after evaluating Git and Mercurial). Bazaar was easy to start off with, with simple commands (not like the 140 commands that Git has).
The advantages we see are the ability to create local branches and work on them without disturbing the main version, the ability to work without network access, and faster diffs.
One command in bzr which I like is the shelve extension. If you start working on two logically different pieces of code in a single file and want to commit only one piece, you can use the shelve extension to literally shelve the other changes for later. In Git you can do the same by playing around in the index (staging area), but bzr has a better UI for it.
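In practice that looks something like this (a sketch; shelve prompts interactively for which hunks to set aside):

    bzr shelve                         # pick the hunks to put aside
    bzr commit -m "First logical change"
    bzr unshelve                       # restore the shelved changes and carry on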
Most people were reluctant to move over, as they have to type two commands to commit and push (bzr ci + bzr push). It was also difficult for them to understand the concept of branches and merging (no one uses branches or merges them in SVN).
Once you understand that, it will increase the developers' productivity. Until everyone understands it, there will be inconsistent behaviour across the team.
At my workplace we switched to Git from CVS about two months ago (the majority of my experience is with Subversion). While there was a learning curve involved in becoming familiar with the distributed system, I've found Git to be superior in two key areas: flexibility of working environment and merging.
I don't have to be on our VPN, or even have network connectivity at all, to have access to full versioning capabilities. This means I can experiment with ideas or perform large refactorings wherever I happen to be when the urge strikes, without having to remember to check in that huge commit I've built up or worrying about being unable to revert when I make a mess.
Because merges are performed client-side, they are much faster and less error-prone than initiating a server-side merge.
My company currently uses Subversion, CVS, Mercurial and Git.
When we started five years ago we chose CVS, and we still use that in my division for our main development and release maintenance branch. However, many of our developers use Mercurial individually as a way to have private checkpoints without the pain of CVS branches (and particularly merging them) and we are starting to use Mercurial for some branches that have up to about 5 people. There's a good chance we'll finally ditch CVS in another year. Our use of Mercurial has grown organically; some people still never even touch it, because they are happy with CVS. Everyone who has tried Mercurial has ended up being happy with it, without much of a learning curve.
What works really nicely for us with Mercurial is that our (home-brewed) continuous integration servers can monitor developer Mercurial repositories as well as the mainline. So, people commit to their repository, get our continuous integration server to check it, and then publish the changeset. We support lots of platforms, so it is not feasible to do a decent level of manual checks. Another win is that merges are often easy, and when they are hard you have the information you need to do a good job on the merge. Once someone gets the merged version to work, they can push their merge changesets and then no one else has to repeat the effort.
The biggest obstacle is that you need to rewire your developers' and managers' brains so that they get away from the single linear branch model. The best medicine for this is a dose of Linus Torvalds telling you you're stupid and ugly if you use centralised SCM. Good history-visualisation tools would help, but I'm not yet satisfied with what's available.
Mercurial and CVS both work well for us with developers using a mix of Windows, Linux and Solaris, and I've noticed no problems with timezones. (Really, this isn't too hard; you just use epoch seconds internally, and I'd expect all the major SCM systems to get this right.)
It was possible, with a fair amount of effort, to import our mainline CVS history into Mercurial. It would have been easier if people had not deliberately introduced corner cases into our mainline CVS history as a way to test history-migration tools. This included merging some Mercurial branches into the CVS history, so the project looks like it was using Mercurial from day one.
Our silicon design group chose Subversion. They are mainly eight timezones away from my office, and even over a fairly good dedicated line between our offices, Subversion checkouts are painful but workable. A big advantage of centralised systems is that you can check big binaries into them (e.g. vendor releases) without making all the distributed repositories huge.
We use Git for working with the Linux kernel. Git would be more suitable for us once a native Windows version is mature, but I think the Mercurial design is so simple and elegant that we'll stick with it.
Not using distributed source control myself, but maybe these related questions and answers give you some insights:
Distributed source control options
Why is git better than Subversion
I personally use the Mercurial source control system. I've been using it for a bit more than a year now. It was actually my first experience with a VCS.
I tried Git, but never really got into it because I found it was too much for what I needed. Mercurial is really easy to pick up if you're a Subversion user, since it shares a lot of commands with it. Plus, I find the management of my repositories to be really easy.
I have 2 ways of sharing my code with people:
I share a server with a co-worker and we keep a main repo for our project.
For some OSS projects I work on, we create patches of our work with Mercurial (hg export) and the maintainer of the project just applies them to the repository (hg import)
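The patch round-trip is only two commands; a sketch, with the revision and file name as placeholders:

    # contributor: export a changeset as a patch
    hg export tip > fix-widget.patch

    # maintainer: apply it, keeping the original author and message
    hg import fix-widget.patch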
Really easy to work with, yet very powerful. But generally, choosing a VCS really depends on your project's needs...
Back before we switched off Sun workstations for embedded systems development, we were using Sun's TeamWare solution. TeamWare is a fully distributed solution using SCCS as the local repository file-revision system, wrapping that with a set of tools to handle the merging operations (done through branch renaming) back to the centralized repositories, of which there can be many. In fact, because it is distributed, there really is no master repository per se (except by convention, if you want one), and all users have their own copies of the entire source tree and revisions. During "put back" operations, the merge tool, using three-way diffs, algorithmically sorts out what is what and allows you to combine the changes from different developers that have accumulated over time.
After switching to Windows as our development platform, we ended up switching to AccuRev. While AccuRev, because it depends on a centralized server, is not truly a distributed solution, logically, from a workflow model, it comes very close. Where TeamWare would have had completely separate copies of everything at each client, including all the revisions of all files, under AccuRev this is maintained in the central database, and the local client machines have only the flat-file current version of things for editing locally. However, these local copies can be versioned through the client connection to the server and tracked completely separately from any other changes (i.e. branches) implicitly created by other developers.
Personally, I think the distributed model implemented by TeamWare, or the sort of hybrid model implemented by AccuRev, is superior to completely centralized solutions. The main reason is that there is no notion of having to check out a file or having a file locked by another user. Also, users don't have to create or define branches; the tools do this for them implicitly. When larger teams or different teams contribute to or maintain a set of source files, this resolves "tool-generated", locking-related collisions and allows code changes to be coordinated more at the level of the developers, who ultimately have to coordinate changes anyway. In a sense, the distributed model allows for a much finer-grained "lock", rather than the coarse-grained locking instituted by the centralized models.
Have used darcs on a big project (GHC) and for lots of small projects. I have a love/hate relationship with darcs.
Pluses: incredibly easy to set up a repository. Very easy to move changes around between repositories. Very easy to clone and try out "branches" in separate repositories. Very easy to make "commits" in small, coherent groups that make sense. Very easy to rename files and identifiers.
Minuses: no notion of history; you can't recover "the state of things on August 5". I've never really figured out how to use darcs to go back to an earlier version.
Deal-breaker: darcs does not scale. I (and many others) have gotten into big trouble with GHC using darcs. I've had it hang with 100% CPU usage for 9 days trying to pull in 3 months' worth of changes. I had a bad experience last summer where I lost two weeks trying to make darcs function and eventually resorted to replaying all my changes by hand into a pristine repository.
Conclusion: darcs is great if you want a simple, lightweight way to keep from shooting yourself in the foot on your hobby projects. But even with some of the performance problems addressed in darcs 2, it is still not for industrial-strength stuff. I will not really believe in darcs until the vaunted "theory of patches" is something a bit more than a few equations and some nice pictures; I want to see a real theory published in a refereed venue. It's past time.
I really love Git, especially with GitHub. It's so nice being able to commit and roll back locally. And cherry-picking merges, while not trivial, is not terribly difficult, and far more advanced than anything SVN or CVS can do.
My group at work is using Git, and it has made all the difference in the world. We were using SCCS and a steaming pile of csh scripts to manage quite large and complicated projects that shared code between them (or attempted to, anyway).
With Git, submodule support makes a lot of this stuff easy, and only a minimum of scripting is necessary. Our release engineering effort has gone way, way down because branches are easy to maintain and track. Being able to branch and merge cheaply makes it reasonably easy to maintain a single collection of sources across several projects (contracts), whereas before, any disruption to the typical flow of things was very, very expensive. We've also found the scriptability of Git to be a huge plus, because we can customize its behavior through hooks or through scripts that do ". git-sh-setup", and it doesn't seem like a pile of kludges like before.
We also sometimes have situations in which we have to maintain our version control across distributed, non-networked sites (in this case, disconnected secure labs), and Git has mechanisms for dealing with that quite smoothly (bundles, the basic clone mechanism, formatted patches, etc).
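The bundle mechanism in particular fits the disconnected-site case; a sketch, assuming the usual origin/master naming:

    # inside the disconnected site: pack the new history into one file
    git bundle create changes.bundle origin/master..master
    # carry the file across the air gap, then on the other side:
    git fetch changes.bundle master:incoming
    git merge incoming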
Some of this is just us stepping out of the early 80s and adopting some modern version control mechanisms, but Git "did it right" in most areas.
I'm not sure of the extent of answer you're looking for, but our experience with Git has been very, very positive.
We're using Subversion with SourceForge and other servers, over a number of different connections, with medium-sized teams, and it's working very well.
I am a huge proponent of centralized source control for a lot of reasons, but I did try BitKeeper on a project briefly. Perhaps after years of using a centralized model in one format or another (Perforce, Subversion, CVS) I just found distributed source control difficult to use.
I am of the mindset that our tools should never get in the way of the actual work; they should make work easier. So, after a few head-pounding experiences, I bailed. I would advise doing some really thorough tests with your team before rocking the boat, because the model is very different from what most devs are probably accustomed to in the SCM world.
I've used Bazaar for a little while now and love it. Trivial branching and merging back give great confidence in using branches as they should be used. (I know central VCS tools should allow this, but the common ones, including Subversion, don't make it easy.)
bzr supports quite a few different workflows, from solo, through working with a centralised repository, to fully distributed. With each branch (for a developer or a feature) able to be merged independently, code reviews can be done on a per-branch basis.
bzr also has a great plugin (bzr-svn) allowing you to work with a Subversion repository. You make a copy of the svn repo (which initially takes a while, as it fetches the entire history for your local repo). You can then make branches for different features. If you want to do a quick fix to the trunk while halfway through your feature, you can make an extra branch, work in that, and then merge back to trunk, leaving your half-done feature untouched and outside of trunk. Wonderful. Working against Subversion has been my main use so far.
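A sketch of that bzr-svn flow (URLs and branch names invented; depending on the history, bzr dpush may be needed instead of push):

    bzr branch http://server/svn/trunk trunk   # needs the bzr-svn plugin
    bzr branch trunk feature-x                 # local feature branch
    # ...work and commit in feature-x...
    cd trunk
    bzr merge ../feature-x
    bzr commit -m "Merge feature-x"
    bzr push                                   # back into the Subversion repository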
Note that I've only used it on Linux, and mostly from the command line, though it is meant to work well on other platforms, has GUIs such as TortoiseBZR, and a lot of work is being done on integration with IDEs and the like.
I'm playing around with Mercurial for my home projects. So far, what I like about it is that I can have multiple repositories. If I take my laptop to the cabin, I've still got version control, unlike when I ran CVS at home. Branching is as easy as hg clone and working on the clone.
Using Subversion
Subversion isn't distributed, so that makes me think I need a Wikipedia link in case people aren't sure what I'm talking about :)
Been using darcs 2.1.0 and it's great for my projects. Easy to use. Love cherry-picking changes.
I use Git at work, together with one of my coworkers. The main repository is SVN, though. We often have to switch workstations and Git makes it very easy to just pull changes from a local repository on another machine. When we're working as a team on the same feature, merging our work is effortless.
The git-svn bridge is a little wonky, because when checking into SVN it rewrites all the commits to add its git-svn-id comment. This destroys the nice history of merges between my coworker's repo and mine. I predict that we wouldn't use a central repository at all if every team member were using Git.
You didn't say what OS you develop on, but Git has the disadvantage that you have to use the command line to get all the features. Gitk is a nice GUI for visualizing the merge history, but the merging itself has to be done manually. Git-Gui and the Visual Studio plugins are not that polished yet.
We use distributed version control (Plastic SCM) for both multi-site and disconnected scenarios.
1- Multi-site: if you have distant groups, sometimes you can't rely on the internet connection, or it's not fast enough and slows developers down. Then having an independent server which can synchronize back (Plastic replicates branches back and forth) is very useful and speeds things up. It's probably one of the most common scenarios for companies, since most of them are still wary of "totally distributed" practices where each developer has his own replicated repository.
2- Disconnected (or truly distributed, if you prefer): every developer has his own repository, which is replicated back and forth with his peers or the central location. It's very convenient to go to a customer's location or just go home with your laptop and still be able to switch branches, check out and check in code, look at the history, run annotates and so on, without having to access the remote "central" server. Then, whenever you go back to the office, you just replicate your changes (normally branches) back with a few clicks.