Source Control Repository - Per Client or Per Application?

I've recently taken over a project from another consulting firm. I'm assuming this can happen somewhat frequently in the industry, so I'm wondering how I should set up my Source Control Repository.
Should I create one repository simply for this application/client, and then create others as we do more work?
Or should I just dump everything into one single repository?
Thanks guys!

You need to be able to deliver the full source control repo to the customer, as it is probably their work product (e.g., work-for-hire). I recommend using one repo per customer. I kept them all in one area: //depot/clients/CorpA, //depot/clients/cust-b, etc.
That made it easy for me to burn a CD with their project at the end of a contract, and by deleting the entire tree I could provide reliable assurance that I had destroyed all my copies of their IP.
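If your shop uses Git rather than Perforce, a minimal sketch of the same per-client workflow might look like this (the client name and every path here are placeholders I've made up):

    # one repository per client, kept under a common parent directory
    mkdir -p ~/clients/corp-a
    git init --bare ~/clients/corp-a/project.git

    # at the end of the contract, hand the complete history over as a single file
    mkdir -p ~/handoff
    git -C ~/clients/corp-a/project.git bundle create ~/handoff/corp-a.bundle --all

    # and destroy your copies of their IP by deleting the whole client tree
    rm -rf ~/clients/corp-a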

One repository per client. This will make it much easier to hand off the application, change development environments, and so on.

Best practice for project with multiple related components

Background: I'm using Jira for bug tracking and Git for source control. I've got a complete end-to-end system comprising an iOS front end and a Java/Tomcat back end that provides web services and a GUI. Right now I've got a single Git repository holding all the software and a single Jira project tracking issues for the whole system.
Now that the software is live, I'm finding that changes are being made to either the iOS application or the server, but generally not both. The version numbers of the two components have diverged somewhat.
It's probably too late for this project, but in future:
Should I pursue the path of having all related components in a single source repository and tracked using a single bug-tracking project; or
Should each component be in a separate repository and be managed by a separate bug-tracking project?
I can see pros and cons for both approaches, and I can also see that the answer could easily be "it depends".
Which way would you lean, and why?
I'd go with distinct source repositories, for a few reasons:
The developers working on the two are likely to have distinct skill sets. Also, you may have management reasons for wanting to segregate who sees what.
They should not be tightly coupled at the protocol level, since different versions will need to interoperate.
The first point becomes even more important when you add another front end.
The second reason is my main one.
However, I'd go with a common bug database. Defects/features may need changes on both ends. Also, it is extremely likely you will have bugs that are believed to be in one component but actually end up fixed in the other. If you try to migrate across databases, information will get lost. I've seen that too many times.

Version Control from a different age

At my work I'm on a separate network from my colleague due to clearance reasons, and we both need to share code. I am wondering what the best versioning system would be? There's got to be something better than having project1.zip, project2.zip, etc., but something not as expansive as Git or Hg.
I would still recommend Git, as it allows you to:
make a bundle (only one file, and it can be an incremental bundle)
mail that bundle to your colleague (meaning it will work even if your separate networks have no other way to communicate)
The idea is to exchange one file (from which you can pull any new history bundled in it).
And Git makes it very cheap to create a repo on top of an existing code base.
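For example, here is a minimal sketch of that exchange (the branch name and file paths are placeholders):

    # on your side: write the whole history (or just what's new) into one file
    git bundle create project.bundle --all
    # incremental variant, assuming a tag marking what you sent last time:
    # git bundle create update.bundle last-sent..master

    # on your colleague's side: treat the bundle like a read-only remote
    git bundle verify project.bundle
    git clone -b master project.bundle project        # first exchange
    cd project && git pull ../project.bundle master   # later exchanges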
That being said, any communication procedure will have to be approved by your employer: don't bypass any security measure ;)

How to move to a new version control system

My employer has tasked me with becoming our new version control admin. We are currently using two different version control systems for two different code bases. The code/functionality in the two code bases overlap in some areas. We will be moving both code bases to a new version control system.
I am soliciting ideas on how to do this. I suppose we could add the two code bases to the new version control system as siblings in the new depot's hierarchy and remove redundancy by gradually promoting shared code to a third sibling in the hierarchy, ultimately working out of the third sibling exclusively. However, this is just a 30,000 ft view of the problem, not a solution. Any ideas, gotchas, or procedures to avoid catastrophe?
Thanks
Git can be set up in such a way that SVN, Git, and CVS clients can all connect. That way you can move over to a central Git repo, but people who are still used to SVN can continue to use it.
It sounds like, in your specific situation, with two code bases you want to combine, you should create three repositories and start combining the first two into the third.
My advice is to experiment with a few "test" migrations. See how it goes and adjust your scripts as necessary.
Then once you're set, you can execute it for real and you're done. Archive your old repos too.
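As a concrete (hedged) example: if one of the old systems happens to be Subversion and the new one is Git, a trial run for a single code base could look like this. The URL, layout, and directory names are assumptions; adjust them to your repositories:

    # trial import of one code base, converting branch/tag history as well
    git svn clone http://svn.example.com/codebase-a --stdlayout codebase-a-trial

    # sanity-check what came across before doing it for real
    cd codebase-a-trial
    git branch -r                  # did the SVN branches/tags map as expected?
    git log --oneline | wc -l      # rough check that the history depth looks right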
Another place you might find inspiration is OpenOffice.org. They are in the middle of going from SVN to Mercurial. They probably have published information on their migration work.
Issues to consider:
How much history will you migrate?
How long will you need to continue using the old systems for patch-work, etc?
How long will you need to keep the old systems around to access historical information?
Does the new target VCS provide an automatic or quasi-automatic migration method from either of the two old VCS?
How will you reconcile branching systems in the two old VCS with the model used in the new VCS?
Will tagging work properly?
Can tags be transferred (which will not matter if you are not importing much history)?
What access controls are applied to the old VCS that must be reproduced in the new?
What access controls are to be applied to the new VCS?
This is at least a starting point - I've no doubt forgotten many important topics.

Merging and branching shared code between projects in TFS

I'm currently in charge of migrating our ASP.NET applications from SourceSafe to TFS. We have three or four very similar apps (let us say e-commerce) that currently share a core library (services, business logic, entities, data access, etc.).
The applications are similar but not identical so one app might get a feature set the others won't get etc.
I want to stop the sharing of code and instead set up branches (if that fits), so if I change something in Application A's core library I will need to merge the changes into the other branches instead of them getting the changes automatically. This is to avoid surprises where you update from your trunk and suddenly the core has changed for another project, and this project breaks in some way.
Any suggestions on how I should set this up in TFS? Should I have a "main" Core that is not directly used in any project that is the parent of all the other cores so I can push changes up to that one from one core and then distribute it to the other cores? Does that make sense and would it be easy to set up in TFS?
In response to your comment, I'd suggest you read up on feature branches on the CodePlex website.
Scenario 4 – Branch for Feature
In this scenario, you create a development branch, perform work in that branch, and then merge your work back into your main source tree. You organize your development branches based on product features. The following is a physical view showing branching for feature development:
My Team Project
    Development -> Isolated development branch container
        Feature A -> Feature branch
            Source
        Feature B -> Feature branch
            Source
        Feature C -> Feature branch
            Source
    Main -> Main Integration branch
        Source
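As a rough sketch, creating and later reintegrating one of those feature branches from the command line with tf.exe might look like this (the server paths are placeholders for your own team project):

    rem branch Main into an isolated feature branch
    tf branch $/MyTeamProject/Main $/MyTeamProject/Development/FeatureA
    tf checkin

    rem ...work happens in the feature branch, then flows back to Main...
    tf merge $/MyTeamProject/Development/FeatureA $/MyTeamProject/Main /recursive
    tf checkin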
We are also moving from SourceSafe to TFS in the near future.
As I perceive it, we are going to keep our SourceSafe repository online and start fresh over in TFS. Our framework will probably get its own project in TFS. Project-specific shared units will need to get merged from time to time.
The way you structure your repository depends on your specific situation. Every branching scenario has its own advantages and drawbacks. Things to consider:
How many projects?
How many developers?
Are the developers dedicated?
Do you need concurrent hotfixes?
Do you need service packs?
Take a look at the CodePlex branching guide for all the information you need to make an informed decision about your TFS structure. Print out the cheat sheets and pin them to your wall for quick reference.
Before executing on your branch plan, pay attention to this cautionary message: every branch you create does have a cost, so make sure you get some value from it. The mechanics of branching in TFS are simplified to a single right-click branch command. However, the total cost of branching is paid in reduced code velocity to main, merge conflicts, and additional testing, which can be expensive.
I am assuming you have already investigated whether you truly need to make your "copies" separate team projects. Remember, the TFS concept of a "Team Project" is a VERY LARGE high-level container. It is not the same thing as what most IT shops consider a "project". In the Team Project sense, think of "Microsoft Vista" or "Office 2007" as a project, not, say, "a new release of Company XYZ's Accounts Receivable System".
I have a client that decided on one single Team Project for TFS. There is nothing wrong with this - and it is truly the best scenario in many circumstances.
If you truly need very strong isolation between your copies of the application (perhaps they are separate clients and you need very strong security separation), then you may indeed need separate team projects.
That said, you still, as you've stated, need to share code between instances of your application. The first thing I would strongly recommend is to get away from "cut and paste" sharing. I would try hard to isolate the shared code into a separate solution and generate binaries for it (perhaps you've already done this!).
This is covered in the CodePlex TFS Guide: http://tfsguide.codeplex.com/
Another approach I've used for several clients is to have a Team Project that contains the shared code. The "Build" creates the binaries for the shared code, and the "Deploy" simply copies them to a "known location" (i.e., a UNC share on the build machine).
For the applications that are "consumers" of the "framework", we simply used the "AdditionalReferencesPath" item group to point at that known location.
Furthermore, this tool can be helpful: http://tfsdepreplicator.codeplex.com/. It would allow you to have builds automatically triggered for your "consumer" projects whenever the "framework" solution is built.
My brief answer is that you should set up only one 'TFS project' and simply organize your different projects, i.e. your individual applications and each shared library, as separate folders under that one TFS project. The alternative is to include specific (binary) builds of the shared libraries in each individual application; if you do that, then you can organize each application into its own TFS project, though you can't merge changes or branch between those projects without using the TFS command line (and some non-obvious commands to boot).
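For what it's worth, the "non-obvious" command-line part boils down to branching and merging across team project boundaries, roughly like this (the server paths are made up for illustration):

    rem branch the shared library from one team project into another
    tf branch $/ApplicationA/SharedLib $/ApplicationB/SharedLib
    tf checkin

    rem later, flow a fix across the team-project boundary
    tf merge $/ApplicationA/SharedLib $/ApplicationB/SharedLib /recursive
    tf checkin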
I was trying to find the same information; this guide on CodePlex is perfect:
http://vsarbranchingguide.codeplex.com/releases
Includes terminology and different branching workflow approaches as well as cheat sheets.

Should I use a software hosting solution for my personal projects?

Right now, I keep all of my projects on my laptop. I'm thinking that I shouldn't do this, but instead use a version control system and check them in/out from an external hosting repository (Google Code, SourceForge, etc). I see several benefits here - first, I don't have to worry about losing my code if my computer crashes and burns or my external HDD crashes and burns; second, I can share my code with the world and perhaps even get more help when I need it.
Is this a good idea? If so, what are some other project hosts that I should investigate (other than Google Code and SourceForge)?
Assembla is awesome.
EDIT: Yes, this is a good idea. I used to use a personal copy of Vault and found it was more than I cared to manage (in case my server went down or my hard drive crashed; not only was it painful to worry about losing and backing up data, but also the downtime). Of course, it doesn't hurt to have your own backup as well. Cover all your bases!
After losing some freelance work to a hard drive crash, I've become keen on the philosophy that "it doesn't exist until it's in source control". As I don't necessarily want to share the source for my projects with the rest of the world, I pay for web hosting (using Dreamhost, who have great deals on basic shared hosting and easy one-click installs for things like Subversion) and store my data that way. They don't claim to be any sort of backup service, but all I really want is a second copy offsite somewhere.
If I do decide to share the code, I can always make it public later. Do note that SourceForge does not allow private/personal projects, and Google Code forces you to license your code under an open source license. Both have some limitations on the number of projects you can create (and aren't really intended to store everybody and their brother's personal projects).
Assembla looks pretty slick although it is hard to tell what all you get for free. I'm definitely going to try it out.
There is an extensive list at Wikipedia.
GitHub is a really great option for git.
Most of the free, public hosting sites will insist that you license your code with an OSS license (and, possibly, your documentation). That's potentially a different thing from what you're talking about (backups).
For just backups, you may want to try a for-pay service or even something like Mozy.
I use Assembla - You can share your code if you want, but you are not required to. That's a big plus to me.
Online backup is cheap and easy. Why would you not?
I host most of my non-code backups on Amazon's S3 service.
Code goes on a Slicehost virtual server that has automated snapshot backups (daily as well as weekly) and runs Subversion and the Trac web interface to it.
GitHub is a really great hosting service if you use Git; and of course everyone should use Git. The default is free public project hosting, but if your stuff is proprietary (or perhaps embarrassing) you can get private hosting from them for some cost per month.
If you want to make your projects public in some form, then a hosting solution may be useful for you.
I made a list of project hosting sites in response to this question. Of that list, only Origo also allows you to host a closed-source project. As long as you are willing to open up your source, you can choose any of them.
For my personal projects I use a Git repository on a local Fedora server (that is backed up daily). I tar/gzip the repository and the MySQL database (for Bugzilla) and back them up on Carbonite AND a local, redundant hard drive.
I can clone the Git repository from any of my other machines into all other environments.
With this you have a backup and version control. I think my system is better than the one I have at work, LOL.
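A rough sketch of that nightly backup job, with the paths and database name as placeholders:

    #!/bin/sh
    # dump the Bugzilla database and archive it together with the git repositories
    DATE=$(date +%F)
    mysqldump bugzilla > /srv/backup/bugzilla-$DATE.sql
    tar czf /srv/backup/projects-$DATE.tgz /srv/git /srv/backup/bugzilla-$DATE.sql

    # Carbonite picks up /srv/backup; also copy to the local redundant drive
    cp /srv/backup/projects-$DATE.tgz /mnt/backup-drive/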
As long as you want to publish your personal projects as open source, you have a lot of possibilities to choose from, because there are lots of hosters that provide this.
If you just want to store your code somewhere online, but not share it with the world:
Some hosters also allow private repositories, but the only free one that I know of is Bitbucket (which I use myself for my private and open source projects).
They allow an unlimited number of public and private Mercurial and Git repositories; the only limitation is that no more than five users can access your private repositories (you can have more, but then it's not free anymore).