At my work I'm on a separate network from my colleague for clearance reasons, and we both need to share code. I'm wondering what the best versioning system would be? There's got to be something better than having project1.zip, project2.zip, etc., but something not as heavyweight as git or hg.
I would still recommend Git, as it allows you to:
make a bundle (only one file, and it can be an incremental bundle)
mail that bundle to your colleague (meaning it will work even if your separate networks have no other way to communicate)
The idea is to exchange one file (from which you can pull any new history bundled in it).
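A minimal sketch of that round trip (the file names and the `master` branch name here are assumptions):

```
# On your machine: pack everything reachable from master into one file
git bundle create project.bundle master

# ...transfer project.bundle over whatever approved channel you have...

# On your colleague's machine: sanity-check the file and pull from it
git bundle verify project.bundle
git pull project.bundle master

# Next time, send only the commits added since the last exchange,
# using a tag to remember what was already sent
git bundle create update.bundle lastsent..master
git tag -f lastsent master
```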
And Git is very cheap for creating a repo on top of an existing code base.
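Turning an existing directory into a repository is just a few commands:

```
cd existing-project
git init
git add .
git commit -m "Import existing code base"
```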
That being said, any communication procedure will have to be approved by your employer: don't bypass any security measure ;)
I have a project just getting started at http://sourceforge.net/projects/iotabuildit/ (more details at http://sourceforge.net/p/iotabuildit/wiki/Home/) that is currently using Mercurial for revision control. It seems like Mercurial and SourceForge have almost all the right features or elements to put together the collaboration mechanism I have in mind for this project, but I think I'm not quite there yet. I want people to be able to submit, discuss and vote on individual changes from a large number of individuals (more developers than a project would normally have), and I want it to be as easy as possible for users to participate.

The thought right now is that people can clone the "free4all" fork, which is a clone of the base "code" repository, or they can create their own fork in their own SourceForge user project (SourceForge now provides a workspace for every user to host miscellaneous project-related content). Then they can clone that to their local repository (after downloading TortoiseHg or their preferred Mercurial client), make modifications, commit them, push them to the fork, and request a merge into the base "code" repository, at which point we can discuss/review the merge request. This is all still far too many steps, and more formal than I'd like.
I see there is such a thing as "shelving" in Mercurial, but I don't see how/if that is supported in the SourceForge repository. And there probably isn't a way to discuss shelved changes as there is for merge requests.
I'm looking for any suggestions that would make this easier. Ideally, I would like users to be able to:
Specify any version that they would like to play, and have that requested version extracted from source control and hosted at SourceForge for the user to play (the game can't be played locally due to security restrictions the Chrome browser properly applies to JavaScript code accessing image content in separate files)
Download the requested version of the project for local editing (a C# version built from the same source is also playable locally; alternatively, Internet Explorer apparently ignores the security restriction, allowing local play in a browser)
Submit modifications in a form that can be merged with any other compatible "branch" or version of the game that has been submitted/posted (ideally this would be very simple -- perhaps the user just uploads the whole set of files back to the server and the compare and patch/diff extraction is performed there)
Let other players see a list of available submitted patches and choose any set to play/test with, then discuss and vote on changes.
Clearly some of these requirements are very specific, and I will probably need to write some server side code if I want to reach the ideal goal. But I want to take the path of least resistance and use the technologies available if much of the functionality I need is already almost there. Or I'd like to see if I can get any closer than the process I outlined earlier without writing any server code. So what pieces will help me do this? Does Mercurial & SourceForge support storing and sharing shelved code in the way I would want? Is there something to this "Patch Queue" (that I see, but can't understand or get to work yet) that might help? Is there a way to extract a patch file from a given set of files compared to a specific revision in a repository (on the server side), without having the user download any Mercurial components?
It sounds like something you could do with Mercurial Queues (mq) patch queues. The patch queue can be its own, separate versioned repository, and people can use 'guards' to apply only the patches they want to try.
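A rough sketch of that setup (the patch and guard names here are made up for illustration; mq ships with Mercurial but has to be enabled in your ~/.hgrc under [extensions]):

```
hg init --mq                                # make .hg/patches its own versioned repo
hg qnew fix-scoring.patch                   # capture current changes as a new patch
hg qguard fix-scoring.patch +experimental   # tag the patch with a guard
hg qselect experimental                     # activate that guard...
hg qpush -a                                 # ...so only matching patches are applied
hg commit --mq -m "Add scoring patch"       # commit the patch queue itself
```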
But really, it sounds even easier to use Bitbucket or GitHub, both of which have excellent patch-submission, review, and acceptance workflows built in.
I've been tasked with setting up a version control for our web developers. The software, which was chosen for me because we already have other non-web developers using it, is Serena PVCS.
I'm having a hard time trying to decide how to set it up so I'm going to describe how development happens in our system, and hopefully it will generate some discussion on how best to do it.
We have 3 servers, Development, UAT/Staging, and Production. The web developers only have access to write and test their code on the Development server. Once they write the code, they must go through a certification process to get the code moved to UAT/Staging, then after the code is tested thoroughly there, it gets moved to Production.
It seems like making the developers use version control for their code on Development, which they are constantly changing and testing, would be an annoyance. Normally only one developer works on a module at a time, so there isn't much, if any, risk of overwriting other people's work.
My thought was to have them only use version control when they are ready to go to UAT/Staging. This allows them to develop and test without constantly checking in their code.
The certification group could then use the version control to see what changes had been made to the module and to make sure they were always getting the latest revision from the developer to put up on UAT/Staging (right now we rely on the developer zipping up their changed files and uploading them via a web request system).
This would take care of the file side of development, but leaves the whole database side out of version control. That's something else that I need to consider...
Any thoughts or ideas would be greatly appreciated. Thanks.
I would not treat source control as an annoyance. See Nick's answer for the reasons.
If I were you, I would not decide this on my own, because it is not a matter of setting up version control software on some server but a matter of changing and improving development procedures.
In your case, it might be worth explaining and discussing release branches with your developers and with quality assurance.
This means that your developers decide which features to include in a release, and while the staging crew is busy testing the "staging" branch of the source, your developers can already work on the next release without interfering with the staging team.
You can also think about feature branches, which means that there is a new branch for every specific new feature of the web site. Those branches are merged back once the feature is implemented.
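Serena PVCS has its own commands for this, but the shape of the workflow is the same in any VCS. As an illustration in Subversion syntax (the branch names and paths are made up):

```
# Cut a release branch for the staging team to certify and test
svn copy ^/trunk ^/branches/release-1.0 -m "Branch for UAT/Staging"

# Developers keep working on trunk; fixes made on the release branch
# get merged back so trunk never loses them
svn merge ^/branches/release-1.0
svn commit -m "Merge release-1.0 fixes back to trunk"

# A feature branch is created the same way, then merged when done
svn copy ^/trunk ^/branches/feature-login -m "Start login feature"
```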
But again: make sure that your teams agree to the new development process. Otherwise, you waste your time setting up a version control system.
The process should at least include:
When to commit.
When to branch/merge.
What/When to tag.
The overall workflow.
I have used Serena, and it is indeed an annoyance. In addition to the unpleasant workflow overhead Serena puts on top of the check-in/check-out process, it is a real pain for anything beyond the simplest of tasks.
In Serena ChangeMan, all code on local machines is managed through a central server. This is a really bad design. It means a lot of the day-to-day branch maintenance work that would ordinarily be done by developers has to go through whoever has administrator privileges, making that person 1) a bottleneck and 2) embittered, because they have a soul-sucking job.
The centralized management also strictly limits what developers are able to do with the code on their own machine. For example, if you want to create a second copy of the code locally on your box, just to do a quick test or whatever, you have to get the administrator to set up a second repository on your box. When you limit developers like this, you limit the productivity and creativity of your team.
Also, the tools are bad and the user interface is horrendous. And you will never be able to find developers who are already trained to use it, because it's too obscure.
So, if another team says you have to use Serena, push back. That product is terrible.
Using source control isn't an annoyance; it's a tool. Having the benefits of branching and tagging is invaluable when working with new APIs and libraries.
And just a side note: a couple of months back, one of the devs' machines failed and he lost all his newest source. We asked when he had last committed code to source control, and it was two months earlier. Sometimes just having it to back things up when you reach milestones is nice.
I usually commit to source control a couple of times a week, depending if I've hit a good stopping point and I'm about to move on to something different or bigger.
Following on from the last two good points, I would also ask your other non-web developers what development process they are using, so you won't have to create a new one. They will also have encountered many of the problems that occur in your environment, both technical (using the same OS and setup) and managerial.
I've recently taken over a project from another consulting firm. I'm assuming this can happen somewhat frequently in the industry so I'm wondering how I should setup my Source Control Repository.
Should I create one repository simply for this application/client, and then create others as we do more work?
Or should I just dump everything into one single repository?
Thanks guys!
You need to be able to deliver the full source control repo to the customer, as it is probably their work product (e.g., work-for-hire). I recommend using one repo per customer. I had them all in one area: //depot/clients/CorpA, //depot/clients/cust-b, etc.
That made it easy for me to burn a CD with their project at the end of a contract, and by deleting the entire tree I could provide reliable assurance that I had destroyed all my copies of their IP.
One repository per client. This will give you a much easier method to hand off the application, change development environments, etc.
I was overseeing branching and merging throughout the last release at my company, and a number of times had to modify our Subversion pre-commit hooks to enforce different requirements on check-in comments and such. I was a bit nervous every time I was editing those files, because (a) they're part of a live production system, albeit only used internally (and we're not a huge organization), and (b) they're not under version control themselves.
I'm curious what sort of fail-safes people have in place on their version control infrastructure. Daily backups? "Meta" version control? I suppose the former is in place here as part of the backup of the whole repository. And the latter would be useful as the complexity of check-in requirements grows...
Natch - the version-control and any other infrastructure code is also under version control, but I would use a separate project from any development project.
I prefer a searchable wiki or similar knowledge-base repository to clogging up your bug-tracking system with things like VCS config.
Most importantly, make sure that the documentation is kept up to date - in my experience, people are vastly better at keeping code docs up to date than admin docs (though that may have been down to the individuals concerned). One thing that is often overlooked: if systems are configured according to standard Unix practices or a similar philosophy, that implies a body of knowledge about locations that may not be familiar to an OS X or Windows programmer suddenly faced with fixing a broken script. Without being condescending, make sure basic assumptions about location and interdependency are documented.
You should document all "setup" configuration for all your tools, and these documents should be checked into version control. For tools with text file configurations that allow comments, you could just check in the config file. But for tools that have to be configured through the interface, you should have a full document with images of the dialog boxes showing which choices were made.
Most importantly though, these documents should say WHY you chose the values you did (when not taking the default).
Second, as backup, the same documents should be included in your bug tracking software under a "How do I setup the version control software?" bug. (The bug tracking database is located on a different physical server, right?)
Third, all of this should be backed up off-site. I'm sure there are questions on SO about backup strategies.
What's wrong with using the same version control repository for the commit hooks and other configuration files? That's how I've handled it in the past when I've been responsible for a project's configuration management.
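One way to make that concrete (the paths here are assumptions): keep the hook scripts versioned inside the repository they govern, check out a working copy on the server, and point the live hooks directory at it.

```
# Keep hook sources versioned, e.g. under tools/svn-hooks in the repo,
# then check them out on the server hosting the repository
svn checkout file:///var/svn/repo/tools/svn-hooks /var/svn/hook-src

# Point the live hook at the working copy (the script must be executable)
ln -s /var/svn/hook-src/pre-commit /var/svn/repo/hooks/pre-commit

# After committing a hook change, refresh the server's working copy
svn update /var/svn/hook-src
```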
You should also back up your svn repository. That way if the repository itself becomes corrupted or the server catches fire or something, you can recover both your project and the svn control files.
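Subversion's own admin tools cover both cases; the paths here are placeholders:

```
# Full consistent copy of the repository, hooks and config included
svnadmin hotcopy /var/svn/repo /backups/repo-hotcopy

# Or a portable dump of just the versioned history
svnadmin dump /var/svn/repo | gzip > /backups/repo-$(date +%F).dump.gz
```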
If you have build scripts that do this (such as NAnt scripts), then you could check those in as well.
Right now, I keep all of my projects on my laptop. I'm thinking that I shouldn't do this, but instead use a version control system and check them in/out from an external hosting repository (Google Code, SourceForge, etc). I see several benefits here - first, I don't have to worry about losing my code if my computer crashes and burns or my external HDD crashes and burns; second, I can share my code with the world and perhaps even get more help when I need it.
Is this a good idea? If so, what are some other project hosts that I should investigate (other than Google Code and SourceForge)?
Assembla is awesome.
EDIT: Yes, this is a good idea - I used to use a personal copy of Vault and found it was more than I cared to manage (in case my server went down or my hard drive crashed - it was painful to worry about losing data, backing it up, and the downtime). Of course, it doesn't hurt to have your own backup as well. Cover all your bases!
After losing some freelance work to a hard drive crash, I've become keen on the philosophy that "it doesn't exist until it's in source control". As I don't necessarily want to share the source for my projects with the rest of the world, I pay for webhosting (using Dreamhost, who have great deals on basic shared hosting and easy one-click installs for things like Subversion) and store my data that way. They don't claim to be any sort of backup service, but all I really want is a second copy offsite somewhere.
If I do decide to share the code, I can always make it public later. Do note that SourceForge does not allow private/personal projects, and Google Code forces you to license your code under an open source license. Both have some limitations on the number of projects you can create (and aren't really intended to store everybody and their brother's personal projects).
Assembla looks pretty slick although it is hard to tell what all you get for free. I'm definitely going to try it out.
There is an extensive list on Wikipedia.
GitHub is a really great option for git.
Most of the free, public hosting sites will insist that you license your code with an OSS license (and possibly your documentation, too). That's potentially a different thing than what you're talking about (backups).
For just backups, you may want to try a for-pay service or even something like Mozy.
I use Assembla - You can share your code if you want, but you are not required to. That's a big plus to me.
Online backup is cheap and easy. Why would you not?
I host most of my non-code backups on Amazon's S3 service.
Code goes on a Slicehost virtual server that has automated snapshot backups (daily as well as weekly) and runs Subversion and the Trac web interface to it.
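A sketch of how that kind of offsite backup can be wired together (the bucket name and paths are made up, and s3cmd is just one of several S3 clients):

```
#!/bin/sh
# Nightly cron job: dump the repository and push it to S3
svnadmin dump -q /var/svn/repo | gzip > /tmp/repo-$(date +%F).dump.gz
s3cmd put /tmp/repo-$(date +%F).dump.gz s3://my-backup-bucket/svn/
```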
Github is a really great hosting service if you use Git; and of course everyone should use Git. The default is free public project hosting, but if your stuff is proprietary (or perhaps embarrassing) you can get private hosting from them for some cost per month.
If you want to make your projects public in some form, then a hosting solution may be useful for you.
I made a list of project hosting sites in this question. Of that list, only Origo also allows you to host a closed-source project. As long as you are willing to open up your source, you can choose any site on the list.
For my personal projects I use a git repository on a local Fedora server (that is backed up daily). I tar/gzip the repository and the MySQL database (for Bugzilla) and back them up on Carbonite AND a local, redundant hard drive.
I can clone the git repository onto any of my other machines, in any environment.
With this you have a backup and version control. I think my system is better than the one I have at work, LOL.
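A sketch of that setup, with the hostname, paths, and database name as assumptions:

```
# Clone from the home server over ssh onto any other machine
git clone user@fedorabox:/srv/git/myproject.git

# Nightly backup: archive the repository and dump the Bugzilla database
tar czf /backups/myproject-$(date +%F).tgz /srv/git/myproject.git
mysqldump bugs | gzip > /backups/bugzilla-$(date +%F).sql.gz
```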
As long as you want to publish your personal projects as open source, you have a lot of possibilities to choose from, because there are lots of hosts that provide this.
If you just want to store your code somewhere online, but not share it with the world:
Some hosts also allow private repositories, but the only free one that I know of is Bitbucket (which I use myself for my private and open source projects).
They allow an unlimited number of public and private Mercurial and Git repositories; the only limitation is that no more than five users can access your private repositories (you can have more, but then it's not free anymore).