How do you manage your project life cycle? [closed] - version-control

Closed. This question is opinion-based. It is not currently accepting answers. Closed 2 years ago.
How do you manage your project life cycle?
For example: Do you start with a template? Do you use versioning such as SVN as the authoritative source? Do you archive the projects, if so when and how? When a project is revived (work resumes), how’s that handled? Do you use automated scripts to do things such as create IIS sites, DBs, archive, launch, etc?
Of particular interest is management of many projects at varying points of development.

Development: We do not start with a template, because the world changes quickly enough to make template maintenance a full-time job. We do encourage everybody to use the same IDE (Eclipse), so that they can help each other with their environments.
Project Management: We are using GForge to manage our projects. SourceForge is slightly better, but GForge is much cheaper and has a different licensing fee model. GForge incorporates CVS, SVN, document storage and issue trackers, and integrates everything nicely. This makes it easy to see where a project stands: open issues, and closed issues with their connected code changes, are all in one place.
Versioning: Although we tried SVN, we switched back to CVS because it fits our needs better and works fine.
Backups: Our GForge server, housing all our projects and source code, is running on a VMware ESX server. Backups are done daily at the VM level, and we take VM snapshots if we feel that we need more frequent restore points for some reason.
Reviving projects: This is very common in our business. Every project has all its libraries and build requirements in CVS. The project always has an up-to-date development manual which describes all the steps to get a development environment running, with a chapter on everything that deviates from the defaults and needs attention. We try to build software in an as-default-as-possible environment so that developers don't have to spend days tweaking their settings.
Nearly all projects are built using Maven, which also makes life easy for our developers. Usually reviving a project only takes a few steps (a command-line sketch of these steps follows the list):
Download Eclipse
Connect to CVS over SSH (extssh is built into Eclipse)
Check out the project (default "Check Out" option)
Run "Maven Eclipse" and refresh Eclipse
Run the unit tests in Eclipse to see if everything is working.
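For reference, here is a rough command-line equivalent of those steps outside the IDE; the module name, CVS host and repository path are made up for illustration:

    export CVS_RSH=ssh                                   # the extssh equivalent on the command line
    CVSROOT=:ext:you@cvs.example.com:/cvsroot/projects

    # Check out the project (same as Eclipse's default "Check Out")
    cvs -d "$CVSROOT" checkout myproject
    cd myproject

    # Generate the Eclipse project files and run the unit tests
    mvn eclipse:eclipse
    mvn test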
Builds: All our projects are built on a separate build server. Every morning the build server does a complete build and tags CVS if all unit tests succeed. During the day, hourly builds are made, and when there are failures the team automatically gets an email. Usually we use one build server per project, and it is a simple Luntbuild server (Linux, Tomcat, Luntbuild).
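As a rough illustration of what such a nightly job does (ours lives inside Luntbuild, so this shell sketch with made-up module names and mail addresses is only an approximation):

    #!/bin/sh
    DATESTAMP=$(date +%Y%m%d)
    cvs -d :ext:build@cvs.example.com:/cvsroot/projects checkout myproject
    cd myproject

    if mvn clean test; then
        # All unit tests succeeded: tag the checked-out sources
        cvs tag "nightly-$DATESTAMP"
    else
        echo "Nightly build failed" | mail -s "Build FAILED ($DATESTAMP)" team@example.com
    fi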
Both the build server and sometimes even the developer machines are VMs. This makes reviving a project really easy: get the VM from the file server, start it up, and you're good to go.
The build server creates daily sites which show unit test coverage statistics, complexity measurements, CVS activity and developer activity (who changed what and when).
All our software comes with self-building database scripts built in. Point the config file to the database, start the software, and it figures out what it needs to do to the database itself. This really comes in handy because the build server can just start the software; no special steps needed. Our customers are also happy: they never need to worry about their database or upgrade scripts.
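The mechanism is built into the software itself, but the idea boils down to something like the following sketch; the table name, credentials and script layout are invented for illustration:

    # Compare the version recorded in the database with the upgrade scripts we ship,
    # and apply whatever is missing, in order.
    MYSQL="mysql -N -u myapp -pSECRET myapp_db"
    CURRENT=$(echo "SELECT COALESCE(MAX(version), 0) FROM schema_version;" | $MYSQL)

    for script in sql/upgrade-*.sql; do
        VERSION=$(basename "$script" .sql | sed 's/^upgrade-//')
        if [ "$VERSION" -gt "$CURRENT" ]; then
            $MYSQL < "$script"
            echo "INSERT INTO schema_version (version) VALUES ($VERSION);" | $MYSQL
        fi
    done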
The whole project lifecycle is managed, documented and tracked in GForge, with the addition of some external spreadsheets for budget tracking because that's simply easier.
Whether you have an integrated project server or not, I think it is really important to have a system. This enables you to switch developers between projects without them getting lost. It saves time, particularly when a customer comes back to you after 2 or 3 years for modifications on old software (yes, that happens).
All the stuff we use is open source (you can even use an open source fork of GForge). It's not in the tools, it's how you use them.

It would depend on the nature of the work. When working at home for private clients, I start by opening a folder for the client with a bunch of standard documents, which I customize, such as contracts, invoices, reports, requirements, testing, code repository, etc. As the project develops, I add/modify the directory as required.
If I had to go back to a project, I would reopen that directory, and for any non-common components, create a new directory. For example, if my client had a web application built, and now they need a second application, I would use the same directories for invoices and contracts and create new directories for the code base, requirements and testing.
In terms of backup, I archive the work at any point where I've reached a milestone, with the exception of code, which I back up daily at a minimum. At the end of each project, when I close a contract, I take the entire directory and compress it and store it on a remote server.

I create folders containing the project stages, like "initialize software process", where we place docs like the business proposal; we use another one for requirements, another for construction, releases, one for meeting minutes, and so on.
We keep those under a Subversion repository, but it really depends on what methodology you are using; it also depends on how you handle configuration management and how organized you want to be. And yes, we use templates for most of our artifacts, so we assure in some way the quality of our products.

As for source code, we have it all in a Subversion repository. After each release, we make a branch - new features only get added to the current branch (on which the next release will be based), critical bug fixes are done in the current and the old branch (so we can deliver hotfixes for the version the customers currently have).
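A hedged sketch of that branching scheme with Subversion, using one common layout (trunk as the current line of development); the repository URL, release number and revision numbers are placeholders:

    # Branch off the release as soon as it ships
    svn copy https://svn.example.com/repo/trunk \
             https://svn.example.com/repo/branches/release-1.2 \
             -m "Create branch for the 1.2 release"

    # New features keep landing on the current line; a critical fix is merged to the release branch
    svn checkout https://svn.example.com/repo/branches/release-1.2 release-1.2
    cd release-1.2
    svn merge -c 4711 https://svn.example.com/repo/trunk    # r4711 = the fix on the current line
    svn commit -m "Hotfix: merge r4711"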
As for all documents belonging to a release - from the planning & resource sheets to specifications, test cases, user and technical manuals for the software we create, etc. - we store them in a SharePoint portal site. The advantages of this SharePoint site are that users have access via a website (so no need to grant management access to your repository ;-), you can finely control the access rights, and you can turn on versioning. We also use tagging to mark whether a document belongs to a specific release (e.g. service pack xy) or product, or whether it is generally valid.
Concerning scripts, we use several to perform e.g. the nightly build plus unit tests (we usually do that for the last and the current release), but also to deploy the complete software solution (including IIS site creation, database data model upgrade, ...) on our test servers. These are NAnt scripts using lots of variables for paths, version numbers etc., so it is very easy to copy and modify them for a new release.
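For illustration, invoking such scripts might look like the following; the build file names, targets and property names are all made up, and NAnt properties are passed with -D:

    # Nightly build plus unit tests for a given release line
    nant -buildfile:nightly.build -D:version=2.3.1 -D:branch=release-2.3 build test

    # Deploy the complete solution (IIS site creation, DB model upgrade, ...) to a test server
    nant -buildfile:deploy.build -D:version=2.3.1 -D:target.server=testweb01 deploy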

Related

ColdFusion Builder 3 vs. Dreamweaver & local and remote paths

I can't wrap my head around how I'm supposed to use ColdFusion Builder 3 (akin to Eclipse).
Up until this point, I've been using Dreamweaver 5, which is getting 'long-in-the-tooth', and I wanted to give CF Builder a try.
So, in Dreamweaver, it's pretty simple: you set up connections to servers using credentials... There's a Local path, which is the local copy of your code, and the webroot of the Server, which is the 'live' copy of your code. Basically, you make a change to the local copy and PUT the change to the Server. Easy peasy lemon squeezy, right?
But, how does this translate to ColdFusion Builder 3?
Just to give you an idea of our infrastructure: we have Development and Production. Each of these boxes has multiple web instances, for example Accounting, Human Resources, IT. Each of those web instances could have multiple applications... I'm only concerned about my instance, IT, on both the Production and Development servers.
Is a workspace supposed to represent an instance on a web server?
In CFBuilder, should I configure 1 server per web app?
Is a project supposed to represent a web app?
Am I supposed to use drive mappings to the inetpub wwwroot for access to web applications? Is it even considered kosher to have a drive mapping to the web root? \\server\c$\inetpub\wwwroot
Where do I keep my local copy of my code?
How do I move items from Development to Production?
My main confusion is with workspaces, projects, and servers... My intent is to debug and 'view page in browser' from CFBuilder.... However, when you setup a server, under Server Mapping and URL Prefix, you're supposed to indicate the Local and Remote paths, plus this is not directly related to the physical location of the project.... and as I've mentioned, there's multiple instances, multiple applications, and the development box is not my local machine, it's a remote server...
I would really like to know how others have made this work for them.
I really don't mind this question even though it's not directly code related because I've been using ColdFusion Builder (CFB) for years and there just isn't enough good documentation out there. I now enjoy a great experience with CFB thanks to blog posts and sharing experiences with other devs :)
My setup: CFB3 running on Windows 8.1, dev server running on a Virtual Machine so it is treated as "remote server" just like yours. I also update remote staging and production servers (although not directly from CFB).
First, let's set some reasonable expectations: Dreamweaver and CFB are very different in that CFB focuses on programming and Dreamweaver on design. CFB is built on Eclipse and therefore has the advantage of benefiting from most Eclipse plugins.
Your question is specifically about how to set up your projects in CFB using 2 remote servers (dev and prod). It's different for everyone but I'll share my setup with you. (sidenote: My projects are also stored in Git repositories - 1 repo for every app)
Starting from the top: A workspace in CFB deals with your whole Eclipse application, not just your projects. The most important things kept in this directory are snippets and plugins. You do NOT need to keep your project files in here; this is merely the main directory where all of your settings are kept. You are not required to have more than 1 workspace (I only have one). Why would you need more than one? You may be a multifaceted programmer who needs to keep separate workspaces using separate tools (like different plugins, snippets, window layouts...).
To answer your next question (1 server per web app): all you need to do is configure your dev servers in the "CF Servers" tab. You need to add 1 server per web instance, for every instance that you'd like to test on. Hopefully your dev server has RDS enabled (very helpful for remote database and file viewing, just like in Dreamweaver). During configuration, don't worry about Mappings or Virtual Host Settings (I have another recommendation later). Once configured, you'll be able to assign that server to a project.
Drive mappings: I would never recommend mapping to the webroot of a shared dev server. If you were to use that drive map as your local directory, your changes would be made directly to the development server. What you want to do is create a new project by right-clicking in the Navigator area and selecting Import > Other > FTP. Follow the steps, choose anywhere on your local drive to store the files, then choose "New project" at the end (this will add the .project file necessary for CFB to control the project).
Once the project is created, right click on it, select ColdFusion Project and choose the CFML Dictionary version you'll be using (CF11, 10, 9...). Then, select ColdFusion Server Settings and choose the dev server. This is necessary for testing.
What you now have is a local directory with your app and eclipse knows about the remote server. In order to synchronize, you right click on the project, go to Team and synchronize from there. For detailed information about synchronization over FTP, see the help section "Guide to WebDAV and FTP".
Moving to production is not as simple as it was in Dreamweaver. The FTP configuration information only allows for 1 connection (thus giving you a list of files synchronized between your project and the dev server). Therefore, you'll need a third party FTP client to synchronize between your local project and your prod server.
As promised, my last entry will be about the "debugging", which is why I said to skip the mappings and virtual host settings in CF Server config. I really, really recommend using a third party paid plugin called FusionDebug (http://www.fusion-debug.com/). This plugin facilitates the setup and allows you to step into all of your code (which doesn't work so well in native CFB). There's a 30 day trial and I recommend you try before you buy (or license for a year in this case!)

Domino 8.5.3 - Create an organization extension library / codestore

This is a project I've been working on off and on for months and I feel like I'm pretty close, but I just can't seem to get past the final hurdle.
The goal is to develop an organization extension library that contains both internal and 3rd party code that we frequently rely on.
History
As a test project, I started with Apache POI because that is already in wide use in our environment. I have a plug-in and feature built just from the POI .jars that allows me to build our current POI applications as long as I add the plug-in (from my workspace) to my build path. The apps work on the servers because we have already distributed the POI .jars by manually copying them.
The next step is taking that plug-in and getting it into an updatesite so that all of the servers and developers can synchronize on one version. I found and followed these two excellent blog articles (that I wish existed when I started this project):
http://www.dalsgaard-data.eu/blog/wrap-an-existing-jar-file-into-a-plug-in/
http://www.dalsgaard-data.eu/blog/deploy-an-eclipse-update-site-to-ibm-domino-and-ibm-domino-designer/
With the caveat that the articles are written for Domino 9 and we are running 8.5.3 here, but that only matters in the last (installation) step.
Current
This brings us to the problem. All of the above seems to have worked great up to a point. I can install my feature to my designer client from the eclipse update site and it works great. However, the install is failing when I import that into our updatesite.nsf database. This means that while the developers can all install from the updatesite if I put it on a network drive, that doesn't deploy updates to our servers.
The problem is that when I try to install from the .nsf update site, the Eclipse Updater just hangs. I've let it go for well over an hour and eventually Notes becomes completely unresponsive.
So the question is, is there anything I might have done wrong, either in the development of the plug-in or server configuration that might be causing this issue?
Additional Info
I'm looking at the osgi console and that is largely unhelpful. I am getting the following errors as I'm trying to install: SEVERE Could not access digest on the site: no protocol: 0/5B004DDD5E38F3FF85257CAF004C72C7/$file/digest.zip ::class.method=unknown ::thread=Worker-7 ::loggername=org.eclipse.update.core
I could generate dumps if that would be useful.
Security is also locked down fairly tight here. It could be a security issue - is there a way to troubleshoot that? Once I get to the hang I'm just stuck guessing.
This has been edited for clarity and to update information
I know that this post is over 5 years old, but for those that find this and are trying to resolve the error
SEVERE Could not access digest on the site: no protocol:
it is due to the update site project not having the URL of the Domino updatesite.nsf added to the Archives tab of the site.xml.
I found the updatesite.nsf also needs to be anonymously accessible, as no credentials are prompted for or passed through to the Domino server hosting the updatesite.nsf database (at least from DDE; YMMV from Eclipse). So if anonymous connections are blocked on the Domino server, you will be out of luck.
To develop a plug-in you really want to have 3 projects:
the plug-in
the feature
the update site
Of course a feature can contain more than one plug-in (and probably should), and an update site can contain more than one feature (and probably should). Once you have an update site project, it features a handy "Build All" button that makes sure plug-in, feature and update site get compiled in one go. And that button is what you really want.
You can point your Domino Designer (or local Domino server) to the feature directory using a setting: add a plain-text .link file to framework/rcp/eclipse/links that contains the path to your install site - it then picks up the features and plug-ins from there. After a build you would need to restart Designer/the server to activate the updated feature.
For the Domino server, the approach using an updatesite.nsf and the respective notes.ini setting makes the most sense (to me). An HTTP restart is required. Lazy people script the whole thing.
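A minimal sketch of both pointers; the paths are examples, and you should check your Domino version's documentation for the exact notes.ini variable (commonly OSGI_HTTP_DYNAMIC_BUNDLES):

    # Designer / local server: a plain-text .link file under framework/rcp/eclipse/links
    # pointing at the directory that contains your features/ and plugins/ folders.
    echo "path=C:/builds/my-updatesite" > "C:/Notes/framework/rcp/eclipse/links/my-updatesite.link"

    # Domino server: reference the update site database from notes.ini instead,
    # then restart the HTTP task (e.g. "restart task http" from the console).
    echo "OSGI_HTTP_DYNAMIC_BUNDLES=updatesite.nsf" >> /local/notesdata/notes.ini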
I still don't have a great answer for this, but I believe the issue is related to the environment here. I don't have the authority to change the environment, even if I were able to conclusively demonstrate it is the cause of this problem, so it is a moot point. All I can say is that at least one administrator computer had no issue installing from the update site.
For me, the solution for distributing the update site is to put it on a network drive and have everyone install it from there. The server has no problem using it from the updatesite.nsf.

Need advice/resources in starting nightly builds in TFS

I'm currently looking to start setting up nightly builds with TFS and our company has never done this before. I'm looking for some pointers on maybe where to get started, what I should look out for as well as structure of solutions.
Background
Current TFS source location has 2 web projects, 5-10 windows services, 10-15 supporting dlls. These will continue to grow.
Currently there are solution files for each web project and each windows service. Each of these solutions contain the supporting projects (internal dlls) and also the correlating unit testing projects.
All of our external dependencies (log4net, nhibernate etc) are managed by NuGet and are in a folder within TFS called packages
Some of my questions include but are not limited to
Should I have a master solution file that contains all of these projects? Maybe this is easier when setting up nightly builds?
I'd also like to run the unit and integration tests as part of the nightly builds. Is this just additional configuration on the build server?
What tools are involved when setting up nightly builds with TFS?
I'm not necessarily looking for complete answers but it would be great if someone could point me to some good resources (books, websites, blogs)? Like I said I'm really green as far as nightly builds are concerned and I just want to make sure I start off on the right foot. Hopefully I can learn from others mistakes.
Here are some simple "answers" to your 3 questions (though I agree with the comments above that this isn't the most answerable SO question):
A good read on creating reliable builds in MSBuild: http://msdn.microsoft.com/en-us/magazine/dd483291.aspx
Yes, running tests is just an option in a TFS Build Definition; you can configure a few options in addition to "on/off": http://msdn.microsoft.com/en-us/library/ms253138.aspx
You can also use TFS Lab Management and test agents to execute tests in a different manner: http://blogs.msdn.com/b/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx
Configuring TFS builds: http://msdn.microsoft.com/en-us/library/dd647547.aspx

What's best Drupal deployment strategy? [closed]

Closed. This question is off-topic. It is not currently accepting answers. Closed 9 years ago.
I am working on my first Drupal project on XAMPP on my MacBook. It's a prototype and has received positive feedback from my client.
I am going to deploy the project on a Linux VPS two weeks later. Is there a better way than 're-do'ing everything on the server from scratch?
install Drupal
download modules (CCK, Views, Date, Calendar)
create the Contents
...
Thanks
A couple of tips:
Use source control, NOT FTP/etc., for the files. It doesn't matter what you use; we tend to spin up an Unfuddle.com Subversion account for each client so they have a place to log bugs as well, but the critical first step is getting the full source tree of your site into version control. When changes are made on the testing or staging server, you see if they work, you commit, then you update on the live server. Rollbacks and deployment get a lot, lot simpler. For clusters of multiple webheads you can repeat the process, or rsync from a single 'canonical' server.
If you use SVN, though, you can also use CVS checkouts of Drupal and other modules/themes and the SVN/CVS metadata will be able to live beside each other happily.
For bulky folders like the files directory, use a symlink in the 'proper' location to point to a server-side directory outside of the webroot. That lets your source control repo include all the code and a symlink, instead of all the code and all the files users have uploaded.
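A minimal sketch of that symlink arrangement for a Drupal site; the paths and web-server user are assumptions to adjust to your own layout:

    # Keep user uploads outside the webroot, owned by the web server user
    mkdir -p /var/drupal-files/example.com
    chown -R www-data:www-data /var/drupal-files/example.com

    # In the checked-out site, replace sites/default/files with a symlink to it
    cd /var/www/example.com
    rm -rf sites/default/files
    ln -s /var/drupal-files/example.com sites/default/files

    # The repository then tracks the symlink (and the code), not the uploaded files
    svn add sites/default/files
    svn commit -m "Point files directory at server-side storage"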
Databases are trickier; cleaning up the dev/staging DB and pushing it to live is easiest for the initial rollout but there are a few wrinkles when doing incremental DB updates if users on the live site are also generating content.
I did a presentation on Drupal deployment best practices last year. Feel free to check the slides out.
Features.module is an extremely powerful tool for managing Drupal configuration changes.
Content Types, CCK settings, Views, Drupal Variables, Contexts, Imagecache presets, Menus, Taxonomies, and Permissions can all be rolled into a feature, which can be checked into version control. From there, deploying a new site, or pushing changes to an existing one, is easily managed with the Features UI or Drush.
Make sure you install Strongarm.module for exporting Drupal config that gets stored in your Variables table. You can also export static content/nodes (i.e. about us, FAQs, etc.) into Features by installing uuid_features.module.
Hands down, this is the best way to work with other developers on the same site, and to move your site from Development to Testing to Staging and Production.
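A rough Drush workflow for this; the feature name and components are invented, and the exact component keys depend on your modules:

    # On the development site: bundle config into a feature module
    drush features-export blog_feature node:blog variable:site_name

    # Commit the generated module to version control, then on staging/production:
    drush pm-enable blog_feature -y        # first deployment
    drush features-revert blog_feature -y  # later deployments: re-align DB config with the code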
We've had an extensive discussion on this at my workplace, and the way we finally settled on was pushing code updates (including modules and themes) from development to staging to production. We're using Subversion for this, and it's working well so far.
What's particularly important is that you automate a process for pushing the database back from production, so that your developers can keep their copies of the database as close to production as possible. In a mission-critical environment, you want to be absolutely certain a module update isn't going to hose your database. The process we use is as follows:
Install a module on the development server.
Take note of whatever changes and updates were necessary. If there are any hitches, revert and do again until you have a solid, error-free process.
Test your changes! Repeat your testing process as a normal, logged-in user, and again as an anonymous user.
If the update process involved anything other than running update.php, then write a script to do it.
Copy the production database to your staging server, and perform the same steps immediately. If it fails, diagnose the failure and return to step 1. Otherwise, continue.
Test your changes!
BACK UP YOUR PRODUCTION DATABASE and TAKE NOTE OF THE REVISION YOU HAVE CHECKED OUT FROM SVN.
Put your production Drupal in maintenance mode, run "svn update" on your production tree, and go through your update process.
Take Drupal out of maintenance mode and test everything (as admin, regular user, and anonymous)
And that's it. One thing you can never really expect for a community framework such as Drupal is to be able to move your database from testing to production after you go live. From then on, all database moves are from production to testing, which complicates the deployment process somewhat. Be careful! :)
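As a sketch, the production steps above (backup, svn update, update process) for a Drupal site managed with SVN and Drush might look like the following; the paths, database names and the maintenance-mode variable are examples, not prescriptions:

    cd /var/www/example.com

    # Back up the production database and record the deployed revision
    mysqldump -u produser -p proddb > /backups/proddb-$(date +%Y%m%d).sql
    svn info | grep '^Revision' > /backups/deployed-rev-$(date +%Y%m%d).txt

    # Maintenance mode on, pull the new code, run the update process
    drush vset maintenance_mode 1     # Drupal 7; Drupal 6 uses the site_offline variable
    svn update
    drush updatedb -y                 # the update.php step; add any scripted steps here

    # Back online, then test as admin, a regular user and anonymously
    drush vset maintenance_mode 0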
We use the Features module extensively to capture features and then install them easily at the production site.
I'm surprised that no one mentioned the Deployment module. Here is an excerpt from its project page:
... designed to allow users to easily stage content from one Drupal site to another. Deploy automatically manages dependencies between entities (like node references). It is designed to have a rich API which can be easily extended to be used in a variety of content staging situations.
I don't work with Drupal, but I do work with Joomla a lot. I deploy by archiving all the files in the web root (tar and gzip in my case, but you could use zip) and then uploading and expanding that archive on the production server. I then take a SQL dump (mysqldump -u user -h host -p databasename > dump.sql), upload that, and use the reverse command to insert the data (mysql -u produser -h prodDBserver -p prodDatabase < dump.sql). If you don't have shell access you can upload the files one at a time and write a PHP script to import dump.sql.
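Laid out end to end, with placeholder paths, that process is just:

    # On the development machine: archive the webroot and dump the database
    tar czf site-backup.tar.gz -C /var/www/devsite .
    mysqldump -u user -h host -p databasename > dump.sql

    # On the production server: unpack the archive and load the data
    tar xzf site-backup.tar.gz -C /var/www/prodsite
    mysql -u produser -h prodDBserver -p prodDatabase < dump.sql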
Any version control system (GIT, SVN) + Features module to deploy Drupal code + custom settings (content types, custom fields, module dependencies, views etc.).
As the Deploy module is still in development, you may prefer to use the Node export module in Drupal 7 to deploy your content/nodes.
If you're new to deployment (and/or Drupal) then be sure to do everything in one lump.
You have to be quite careful once there are users affecting content while you are working on another copy.
It is possible to leave the data in the tables that relate to actual content, taxonomy, users, etc. (rather than their structure) and push only the tables relating to configuration. However, this adds an order of magnitude of complexity.
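One hedged way to do that with MySQL is to dump everything except the content-related tables; the table list below is illustrative only and depends heavily on your Drupal version and modules:

    # Dump only the configuration: skip content, user and taxonomy tables
    # (illustrative list for Drupal 6; a real site will have more to exclude)
    mysqldump -u dev -p devdb \
      --ignore-table=devdb.node --ignore-table=devdb.node_revisions \
      --ignore-table=devdb.users --ignore-table=devdb.comments \
      --ignore-table=devdb.term_data --ignore-table=devdb.term_node \
      > config-only.sql

    # Load it into production only after taking a full backup of the production database
    mysql -u prod -p proddb < config-only.sql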
Apologies if deployment is old hat to you and all of this comes across as vaguely insulting.
A good strategy that I have found and am currently implementing is to use a combination of the deploy module to migrate my content, and then drush along with dbscripts to merge and update the core and modules. It takes care of database merging even if you have live content, security and module updates, and I currently have mine set up to work with svn.

How to version control the build tools and libraries?

What are the recommendations for including your compiler, libraries, and other tools in your source control system itself?
In the past, I've run into issues where, although we had all the source code, building an old version of the product was an exercise in scurrying around trying to get the exact correct configuration of Visual Studio, InstallShield and other tools (including the correct patch version) used to build the product. On my next project, I'd like to avoid this by checking these build tools into source control, and then build using them. This would also simplify things in terms of setting up a new build machine -- 1) install our source control tool, 2) point at the right branch, and 3) build -- that's it.
Options I've considered include:
Copying the install CD ISO to source control - although this provides the backup we need if we have to go back to an older version, it isn't a good option for "live" use (each build would need to start with an install step, which could easily turn a 1 hour build into 3 hours).
Installing the software to source control. ClearCase maps your branch to a drive letter; we could install the software under this drive. This doesn't take into account non-file part of installing your tools, like registry settings.
Installing all the software and setting up the build process inside a virtual machine, storing the virtual machine in source control, and figuring out how to get the VM to do a build on boot. While we capture the state of the "build machine" with ease, we get the overhead of a VM, and it doesn't help with the "make the same tools available to developers" issue.
It seems such a basic idea of configuration management, but I've been unable to track down any resources for how to do this. What are the suggestions?
I think the VM is your best solution. We always used dedicated build machines to get consistency. In the old COM DLL Hell days, there were dependencies (COMCAT.DLL, anyone?) on non-development software being installed (Office). Your first two options don't solve anything that has shared COM components. If you don't have any shared-component issues, maybe they will work.
There is no reason the developers couldn't take a copy of the same VM to be able to debug in a clean environment. Your issues would be more complex if there are a lot of physical layers in your architecture, like mail server, database server, etc.
This is something that is very specific to your environment. That's why you won't see a guide to handle all situations. All the different shops I've worked for have handled this differently. I can only give you my opinion on what I think has worked best for me.
Put everything needed to build the application on a new workstation under source control.
Keep large applications out of source control, stuff like IDEs, SDKs, and database engines. Keep these in a directory as ISO files.
Maintain a text document, with the source code, that has a list of the ISO files that will be needed to build the app.
I would definitely consider the legal/licensing issues surrounding the idea. Would it be permissible according to the various licenses of your toolchain?
Have you considered ghosting a fresh development machine that is able to build the release, if you don't like the idea of a VM image? Of course, keeping that ghosted image running as hardware changes might be more trouble than it's worth...
Just a note on the versioning of libraries in your version control system:
it is a good solution, but it implies packaging (i.e. reducing the number of files of that library to a minimum)
it does not solve the 'configuration aspect' (that is, "what specific set of libraries does my '3.2' project need?").
Do not forget that this set will evolve with each new version of your project. UCM and its 'composite baseline' might give the beginning of an answer for that.
The packaging aspect (minimum number of files) is important because:
you do not want to access your libraries through the network (like through a dynamic view), because the compilation times are much longer than when you use locally accessed library files.
you do want to get those libraries on your disk, meaning a snapshot view, meaning downloading those files... and this is where you might appreciate the packaging of your libraries: the fewer files you have to download, the better off you are ;)
My organisation has a "read-only" filesystem, where everything is put into releases and versions. Releaselinks (essentially symlinks) point to the version being used by your project. When a new version comes along it is just added to the filesystem and you can swing your symlink to it. There is full audit history of the symlinks, and you can create new symlinks for different versions.
This approach works great on Linux, but it doesn't work so well for Windows apps that tend to like to use things local to the machine such as the registry to store things like configuration.
Are you using a build automation tool like NAnt, as part of a continuous integration (CI) setup, to do your builds?
As a .Net example, you can specify specific frameworks for each build.
Perhaps the popular CI tool for whatever you're developing in has options that will allow you to avoid storing several IDEs in your version control system.
In many cases, you can force your build to use compilers and libraries checked into your source control rather than relying on global machine settings that won't be repeatable in the future. For example, with the C# compiler, you can use the /nostdlib switch and manually /reference all libraries to point to versions checked in to source control. And of course check the compilers themselves into source control as well.
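For example, a build step along these lines keeps the compiler and the referenced assemblies entirely inside the source tree (paths are hypothetical):

    # Invoke the checked-in compiler with only the checked-in libraries
    ./tools/csc/csc.exe /noconfig /nostdlib \
        /reference:libs/mscorlib.dll \
        /reference:libs/System.dll \
        /out:bin/MyApp.exe src/*.cs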
Following up on my own question, I came across this posting referenced in the answer to another question. Although it is more of a discussion of the issue than an answer, it does mention the VM idea.
As for "figuring out how to build on boot": I've developed using a build farm system custom-created very quickly by one sysadmin and one developer. Build slaves query a taskmaster for suitable queued build requests. It's pretty nice.
A request is 'suitable' for a slave if its toolchain requirements match the toolchain versions on the slave - including what OS, since the product is multi-platform and a build can include automated tests. Normally this is "the current state of the art", but doesn't have to be.
When a slave is ready to build, it just starts polling the taskmaster, telling it what it's got installed. It doesn't have to know in advance what it's expected to build. It fetches a build request, which tells it to check certain tags out of SVN, then run a script from one of those tags to take it from there. Developers don't have to know how many build slaves are available, what they're called, or whether they're busy, just how to add a request to the build queue. The build queue itself is a fairly simple web app. All very modular.
Slaves needn't be VMs, but usually are. The number of slaves (and the physical machines they're running on) can be scaled to satisfy demand. Slaves can obviously be added to the system any time, or nuked if the toolchain crashes. That's actually the main point of this scheme, rather than your problem with archiving the state of the toolchain, but I think it's applicable.
Depending how often you need an old toolchain, you might want the build queue to be capable of starting VMs as needed, since otherwise someone who wants to recreate an old build has to also arrange for a suitable slave to appear. Not that this is necessarily difficult - it might just be a question of starting the right VM on a machine of their choosing.
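The slave side of such a scheme can be surprisingly small; this sketch invents the taskmaster endpoint and response format purely to illustrate the polling loop described above:

    #!/bin/sh
    TASKMASTER="http://taskmaster.example.com"
    TOOLCHAIN="gcc-4.2-linux-x86"      # what this slave has installed

    while true; do
        # Ask for a queued request matching our toolchain; an empty response means nothing to do
        TAG=$(curl -s "$TASKMASTER/next-job?toolchain=$TOOLCHAIN")
        if [ -n "$TAG" ]; then
            svn checkout "http://svn.example.com/tags/$TAG" work
            (cd work && ./build-and-run-tests.sh)   # the tag carries its own build script
            rm -rf work
        fi
        sleep 60
    done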