A solution to manage application packaging and updates? - nuget

We have a few Windows apps and a gallery of non-executable assets. All are quite weighty, so we also have an Electron-based app that delivers app/asset updates to our customers as they are published. Currently we use an in-house tool which, much like git, compares the local and remote repos and downloads and applies patches on the user's side rather than entire packages. The problem is that our tool is outdated and we want to switch to a decent 3rd-party solution (preferably open source). Any suggestions?

There are 3rd-party solutions out there, but which one fits will depend on the kind of application you are building and on how you package and update it. From what you say I can't tell what the implications of your tool being "outdated" are - what exactly do you want to solve?
One starting point would be to check out WinSparkle or Squirrel; both frameworks have macOS counterparts, and Squirrel in particular ships delta packages rather than full downloads, which matches your patch-based approach.
For a more "web-based" solution you can check electron-release-server.
For the packaging there are many solutions; it depends on the platforms you are targeting. There's no single solution that covers them all, besides making a ZIP file.

Related

Best version control system for a mobile phone application?

I'm developing a mobile phone application that targets a lot of mobile devices based on the capabilities they offer. There would be a base feature set which all phones are expected to support, and then there would be additional features that depend on specific sets of phones.
How do I manage such a code base in terms of a version control system?
I have experience with CVS and VSS, but neither quite fits my needs for this kind of application. The last thing I would want to do is branch the code for each of these device sets.
Let me make this a bit clearer with the help of an example. Let's say I'm developing a J2ME application using MIDP 2.0. This is the base feature set that I would expect all phones supporting MIDP 2.0 to have. On top of this I would extend the application for specific sets of phones using their SDKs, e.g. Nokia S40, Nokia S60, Sony Ericsson, BlackBerry etc. All of these provide additional functionality that lets you build more on top of your base application, and most of the time these additions affect your whole code base, from UI to core logic.
One way to achieve this is to use a combination of a build system and preprocessor flags, trying to separate the differences enough to not have too many dependencies. This can get quite complicated at times. I am wondering if there is an easier way to handle this using a smart source control system...
I would look at Subversion's svn:externals.
With svn 1.5 you can use relative references for directories, and svn 1.6 supports file-based externals.
For example, a structure like this:
/Phone1/Base
/Phone1/Feature1
/Phone1/Feature2
/Phone2/Base
/Phone2/Feature1
/Phone2/Feature3
would be easily achieved using svn:externals.
In Subversion, your repository structure is very flexible; here is one way (of many) you could lay that out:
trunk/Features/Base
trunk/Features/Feature1
trunk/Features/Feature2
trunk/Features/Feature3
trunk/Phones/Phone1 (with svn:externals to Base, Feature1, ...)
trunk/Phones/Phone2 (with svn:externals to Base, Feature3, ...)
One hint though: Make sure that you use a specific Subversion revision for each external reference, it may not seem important when starting out, but 6 months down the track it will :)
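To make that concrete, here is a rough sketch of how you might define and pin those externals from the command line (the revision number is illustrative, and the paths match the layout above; the URL-first format and ^/ relative URLs require svn 1.5+):
svn propset svn:externals "-r 1234 ^/trunk/Features/Base Base
-r 1234 ^/trunk/Features/Feature1 Feature1" trunk/Phones/Phone1
# the property change is committed like any other change
svn commit -m "Pin Phone1 externals at r1234" trunk/Phones/Phone1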
I like Subversion for projects which don't have a lot of developers on them. From your problem statement, it sounds to me like you should be able to achieve what you want with a good build system, so I don't think the source control itself would make much of a difference. But I may be misunderstanding your problem.
Sayed Ibrahim Hashimi
My Book: Inside the Microsoft Build Engine: Using MSBuild and Team Foundation Build
I don't think VCS will solve your problem.
Your best bet may be to abstract out the phone-specific functionality as much as possible and/or go with a plugin-type model.
I've only had experience with Subversion, CVS, StarTeam, and VSS. Branches are a pain no matter what... especially if you have multiple active branches. You won't get around doing constant merges, branch comparisons, and trying to track whether you've made a change to all branches.
If you organize your code into some core modules and some phone-specific modules which depend on the core modules then it doesn't really matter which VCS you use. I would recommend a decentralized VCS anyway (Mercurial, Bazaar, Git).
You could consider describing how you want to achieve this (different app versions with different feature sets) to get more reasonable advice.
If you use Perforce, you can use different mappings between the depot and your workspaces and do something like:
depot/
    common/
    platform1/
        someportedfile
    platform2/
        someportedfile
and have it mapped in your workspace to:
platform1/
    someportedfile
    common/
platform2/
    someportedfile
    common/
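One caveat: in a classic Perforce client view each depot file maps to a single workspace path, so the usual way to get this layout is one workspace per platform. A sketch of what such a client spec might look like (client name, root and paths are illustrative):
Client: platform1-ws
Root:   C:\work\platform1
View:
    //depot/platform1/...  //platform1-ws/platform1/...
    //depot/common/...     //platform1-ws/platform1/common/...
Each platform team syncs its own workspace, and the shared common tree shows up under that platform's directory.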

Arguments for and against including 3rd-party libraries in version control? [closed]

I've met quite a few people lately who say that 3rd-party libraries don't belong in version control. These people haven't been able to explain to me why, so I hoped you guys could come to my rescue :)
Personally, I think that when I check out the trunk of a project, it should just work - no need to go to other sites to find libraries. More often than not you'd otherwise end up with different developers using different versions of the same 3rd-party lib - and sometimes with incompatibility problems.
Is it so bad to have a libs folder up there, with "guaranteed-to-work" libraries you could reference?
In SVN, there is a pattern used to store third-party libraries called vendor branches. This same idea would work for any other SVN-like version control system. The basic idea is that you include the third-party source in its own branch and then copy that branch into your main tree so that you can easily apply new versions over your local customizations. It also cleanly keeps things separate. IMHO, it's wrong to directly include the third-party stuff in your tree, but a vendor branch strikes a nice balance.
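As a rough sketch of that workflow (the repository URL and library name are illustrative; the SVN book's vendor-branches chapter has the full recipe):
# import the pristine third-party release into a vendor area
svn import libfoo-1.0/ http://svn.example.com/repos/vendor/libfoo/current -m "Import libfoo 1.0"
# tag the release so you can diff against it when the next version arrives
svn copy ^/vendor/libfoo/current ^/vendor/libfoo/1.0 -m "Tag libfoo 1.0"
# copy it into the main tree, where local customizations live
svn copy ^/vendor/libfoo/1.0 ^/trunk/thirdparty/libfoo -m "Bring libfoo into trunk"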
Another reason to check in libraries to your source control which I haven't seen mentioned here is that it gives you the ability to rebuild your application from a specific snapshot or version. This allows you to recreate the exact version that someone may report a bug on. If you can't rebuild the exact version you risk not being able to reproduce/debug problems.
Yes you should (when feasible).
You should be able to take a fresh machine and build your project with as few steps as possible. For me, it's:
Install IDE (e.g. Visual Studio)
Install VCS (e.g. SVN)
Checkout
Build
Anything more has to have very good justification.
Here's an example: I have a project that uses Yahoo's YUI Compressor to minify JS and CSS. The YUI .jar files go into source control in a tools directory alongside the project. The Java runtime, however, does not - that has become a prerequisite for the project, much like the IDE. Considering how widespread the JRE is, that seems like a reasonable requirement.
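For illustration, invoking the checked-in compressor during a build then looks something like this (the jar version and file names are assumptions):
# minify JS and CSS with the jar stored in the project's tools directory
java -jar tools/yuicompressor-2.4.2.jar site.js -o site.min.js
java -jar tools/yuicompressor-2.4.2.jar site.css -o site.min.css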
No - I don't think you should put third party libraries into source control. The clue is in the name 'source control'.
Although source control can be used for distribution and deployment, that is not its prime function. And the arguments that you should just be able to check out your project and have it work are not realistic. There are always dependencies. In a web project, they might be Apache, MySQL, the programming runtime itself, say Python 2.6. You wouldn't pile all those into your code repository.
Extra code libraries are just the same. Rather than include them in source control for ease of deployment, create a deployment/distribution mechanism that allows all dependencies to be easily obtained and installed. This makes the steps for checking out and running your software something like:
Install VCS
Sync code
Run setup script (which downloads and installs the correct version of all dependencies)
To give a specific example (and I realise this is quite web-centric), a Python web application might contain a requirements.txt file which reads:
simplejson==1.2
django==1.0
otherlibrary==0.9
Run that through pip and the job is done. Then when you want to upgrade to use Django 1.1 you simply change the version number in your requirements file and re-run the setup.
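For completeness, that pip step is just the standard one-liner:
# install exactly the pinned versions from the requirements file
pip install -r requirements.txt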
The source of 3rd-party software doesn't belong (except maybe as a static reference), but the compiled binaries do.
If your build process will compile an assembly/dll/jar/module, then only keep the 3rd party source code in source control.
If you won't compile it, then put the binary assembly/dll/jar/module into source control.
This could depend on the language and/or environment you have, but for projects I work on I place no libraries (jar files) in source control. It helps to be using a tool such as Maven which fetches the necessary libraries for you. (Each project maintains a list of required jars, Maven automatically fetches them from a common repository - http://repo1.maven.org/maven2/)
That being said, if you're not using Maven or some other means of managing and automatically fetching the necessary libraries, by all means check them into your version control system. When in doubt, be practical about it.
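For context, with Maven the jars stay out of version control and each project instead declares coordinates in its pom.xml, something like this (the artifact and version are illustrative):
<!-- declared in pom.xml; Maven fetches the jar from the repository -->
<dependency>
  <groupId>commons-io</groupId>
  <artifactId>commons-io</artifactId>
  <version>1.4</version>
</dependency>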
The way I've tended to handle this in the past is to take a pre-compiled version of 3rd party libraries and check that in to version control, along with header files. Instead of checking the source code itself into version control, we archive it off into a defined location (server hard drive).
This kind of gives you the best of both worlds: a one-step fetch process that fetches everything you need, without bogging down your version control system with a bunch of files it doesn't need to track. Also, by fetching pre-compiled binaries, you can skip that phase of compilation, which makes your builds faster.
You should definitely put 3rd-party libraries under source control. Also, you should try to avoid relying on stuff installed on individual developers' machines. Here's why:
All developers will then share the same version of the component. This is very important.
Your build environment will become much more portable. Just install source control client on a fresh machine, download your repository, build and that's it (in theory, at least :) ).
Sometimes it is difficult to obtain an old version of some library. Keeping them under your source control makes sure you won't have such problems.
However, you don't need to add 3rd-party source code to your repository if you don't plan to change the code. I tend just to add binaries, but I make sure only these libraries are referenced in our code (and not the ones from the Windows GAC, for example).
We do, because we want to be able to test an updated version of the vendor branch before we integrate it with our code; we commit changes to it when testing new versions. Our philosophy is that everything you need to run the application should be in SVN, so that:
You can get new developers up and running
Everyone uses the same versions of various libraries
We can know exactly what code was current at a given point in time, including third party libraries.
No, it isn't a war crime to have third-party code in your repository, but I find that it upsets my sense of aesthetics. Many people here seem to be of the opinion that it's good to have your whole development team on the same version of these dependencies; I say it is a liability. You end up dependent on a specific version of that dependency, which makes it a lot harder to use a different version later. I prefer a heterogeneous development environment - it forces you to decouple your code from specific versions of dependencies.
IMHO the right place to keep the dependencies is on your tape backups, and in your escrow deposit, if you have one. If your specific project requires it (and projects are not all the same in this respect), then also keep a document under your version control system that links to these specific versions.
I like to check 3rd-party binaries into a "lib" directory that contains any external dependencies. After all, you want to keep track of specific versions of those libraries, right?
When I compile the binaries myself, I often check in a zipped-up copy of the code alongside the binaries. That makes it clear that the code is not there for compiling, manipulating, etc. I almost never need to go back and reference the zipped code, but a couple of times it has been helpful.
If I can get away with it, I keep them out of my version control and out of my file system. The best case of this is jQuery where I'll use Google's AJAX Library and load it from there:
<script src="http://ajax.googleapis.com/ajax/libs/jquery/1/jquery.min.js" type="text/javascript"></script>
My next choice would be to use something like Git submodules. And if neither of those suffices, they'll end up in version control, but at that point, it's only as up to date as you are...

Salesforce - How to Deploy between Environments (Sandboxes, Live etc)

We're looking into setting up a proper deployment process.
From what I've read, there seem to be 4 methods of doing this.
Copy & Paste -- We don't want to do this
Using the "Package" mechanism built into the Salesforce Web Interface
Eclipse Force IDE "Deploy to Server" option
Ant Script (haven't tried this one yet)
Does anyone have advice on the limitations of the various methods?
Can you include everything in a Web Interface package?
We're looking to deploy the following items:
Apex Classes
Apex Triggers
Workflows
Email Templates
MailMerge Templates -- Can't seem to find these in Eclipse
Custom Fields
Page Layout
RecordTypes (can't seem to find these in Website or Eclipse)
PickList items?
SControls
I recommend the Force.com Migration Tool.
For reference:
Force.com Migration Tool Documentation
Migration Tool Guide
The Migration Tool allows you to use ant targets to move your metadata between salesforce.com organizations.
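A minimal build.xml using the tool's ant task might look roughly like this (the credentials and paths are placeholders; check the Migration Tool documentation for the exact attributes):
<project name="sf-deploy" default="deployCode" xmlns:sf="antlib:com.salesforce">
  <!-- pushes the metadata under deployRoot to the target org -->
  <target name="deployCode">
    <sf:deploy username="${sf.username}" password="${sf.password}"
               serverurl="${sf.serverurl}" deployRoot="src"/>
  </target>
</project>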
I can speak to this from recent painful experience.
Packaging: this is a very old method that predates the metadata API on which both Ant and Eclipse rely. In our experience, packaging's only benefit is in defining your project. If you're using Eclipse (which we do, and I recommend), you can define your project as being based on a particular package. As long as you remember to add new components to your package, your project hangs together.
One thing that baffled us for a while, btw, is the many uses of "package". We've noted the following:
Installed packages: these come in managed and unmanaged flavors and are really, in the words of a recent post on the SFDC boards, for ISVs to deploy their stuff into various unknown orgs "out there". Both managed and unmanaged packages have limitations that make them unsuitable and unneeded for deployment from development to production within an org, or in any case where you're doing custom development and don't intend to distribute code to a large anonymous base.
Non-installed packages: this is what you see when you click "Packages" in the web UI. These, that we sometimes call "development packages", seem to be just a convenient way to keep a project definition together.
Anyway, the conclusion I'm coming toward is that our team (custom development, not an ISV) does not need packages in any form.
The other forms of deployment, both Eclipse and Ant, rely on the Metadata API. In theory they are capable of exactly the same things. In reality they appear to be complementary. The Force.com migration tool, built into the Force.com IDE for Eclipse, makes deployment as easy as it can be (which is not very) and gives you a nice look at what it intends to deploy. On the other hand, we've seen Ant do some things the IDE could not. So it's probably worthwhile to learn both.
The process we're leaning toward is to keep all our projects in SVN, and use the SVN structure as the project definition (Eclipse will work with this and respect it). And we use Eclipse and sometimes Ant for migration. No apparent need for packages anywhere.
By the way, one more thing to be aware of - not all components are migratable. Some things must be reconfigured by hand in the target environment. One example would be time-based workflows. Queues and Groups also need to be hand-created, I think. Likewise, the metadata API can't directly process field deletions, so if you deleted a field in your source you need to delete it by hand in the target. There are other cases as well.
Hope that's useful --
-- Steve Lane
As of Spring '09, mail merge templates are not supported in metadata but record types are. You will find record types as an XML element in the file for the object they belong to. Everything else on your list is supported with a small exception. Picklist values for standard fields cannot be edited in Spring '09. Stay tuned for news on Summer '09 feature announcements.
Update: Standard picklists on standard objects are now metadata exposed (as of API v16):
http://www.salesforce.com/us/developer/docs/api_meta/Content/meta_picklist.htm
Otherwise, Steve Lane's response is pretty accurate. The advantage of using unmanaged packages (what Steve calls non-installed packages) is that when you add metadata to a package, the metadata it depends on will automatically be added. So it's easier to grab a full set of metadata containing all its dependencies. If you are repeatedly moving metadata from one org (sandbox) to another (production), Steve's approach is probably the best way to go and certainly the most common today. I frequently use unmanaged "developer" packages to move something I've developed in one org to another unrelated org. For my purpose, I like to have the package defined in the org as opposed to an Eclipse project / SVN. But that probably doesn't make sense if you are doing team development across many dev/sandbox orgs and are using SVN already.
Jesper
Another option is to use Change Sets if you want to move meta data from a sandbox to production.
There are currently some limitations on how change sets can be used:
Sending a change set between two organizations requires a deployment connection. Currently, change sets can only be sent between organizations that are affiliated with a production organization, for example, a production organization and a sandbox, or two sandboxes created from the same organization.
From the docs:
A package must be managed for it to be published publicly on AppExchange, and for it to support upgrades. An organization can create a single managed package that can be downloaded and installed by many different organizations. They differ from unmanaged packages in that some components are locked, allowing the managed package to be upgraded later. Unmanaged packages do not include locked components and cannot be upgraded. In addition, managed packages obfuscate certain components (like Apex) on subscribing organizations, so as to protect the intellectual property of the developer.
The advantage of a managed package is that it allows you to easily version and distribute things across multiple SFDC organizations.

Version Controlling for Designers in a Digital Agency

I'm trying to implement a version control system, but as most of us know designers don't feel comfortable with version control systems. I'm looking for a solution mostly for our designers using Photoshop, Flash and other design tools.
It's not a big deal to get a version control system, like VSS 2005, used by our frontend and backend coders, but we have some serious problems with our designers. They mostly refuse to use version control systems, and they have a point in some respects, mostly on the productivity level. They usually work on more than one file at a time, and in more than one application, like Photoshop and Flash.
I don't know if version control is the right answer or not. Maybe we have to implement a backup system, but there has to be some versioning in it, I think. Our designers and I are very tired of redoing the same work or going back to previous designs over and over again.
It would be wonderful to know how digital agencies overcome this problem. If version control is the answer, please share your tips on how to make designers comfortable with it.
EDIT 1: Maybe it would be great to have a solution like Dropbox, as it doesn't disturb you with check-ins/check-outs. All you have to do is to open up a file, work on it and save it, the rest is handled by Dropbox.
EDIT 2: We are on Windows, so no chance to implement anything other than Windows support :(
Thanks...
I haven't actually ever done this with graphic designers, but is it possible that Subversion's WebDAV support might work for them? You can mount a WebDAV share as a drive under Mac OS X and Windows XP & Vista, I believe. Each save becomes a new revision in the repository.
And as for your second, hidden question: Yes, you do need to implement a backup system. At least if you value your data.
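If you try the WebDAV route, the key piece is mod_dav_svn's autoversioning switch in the Apache config, roughly like this (the paths are illustrative):
<Location /designs>
  DAV svn
  SVNPath /var/svn/designs
  # every WebDAV save becomes a new revision automatically
  SVNAutoversioning on
</Location>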
Adobe has its own version control, Version Cue, which is bundled with the Creative Suite package: http://www.adobe.com/products/creativesuite/versioncue/sdk_overview/ Apparently, Eclipse can plug into this. I haven't tried it extensively, but I know it integrates nicely into the file dialog in Creative Suite.
NOTE: Version Cue has been discontinued by Adobe after the release of CS5:
http://www.adobe.com/products/adobedrive/
Adobe Version Cue maybe?
You might want to try Subversion, because there are plugins for Windows Explorer and Mac OS X Finder. Integration with the filesystem has been a big help for me on projects where non-developers had to work with source control, including projects that have had designers.
Another key thing that helped was having a good directory structure for the files the designers and other non-developers worked with.
I just came across ConceptShare and it's pretty great... it's not automated version control, but you could use it for that, and it's a great way to collect and document feedback.
You can try Subversion (installed on a local or remote server) plus an Adobe Creative Suite plug-in that faces the designers - Pixelnovel Timeline.
It's compact, has previews of all versions (submitted via the plug-in), works for Photoshop, Illustrator & InDesign.
If developers also use Subversion, everything (code & design) can be kept in one place.
Instead of trying to integrate a version control system with lots of applications on different operating systems, you might want to have a look at copy-on-write file systems such as ext3cow (http://en.wikipedia.org/wiki/Ext3cow). That way your designers won't even notice a difference; all they have to do is save their work to a network share on a Linux/Samba server using ext3cow.
I'm both a designer and coder. I usually version control code (text data) with git, and simply use "save as" with a version name for graphics (binary data). And I run Apple's Time Machine on top of all that, for safety.
To me, version control on graphic files would just be a burden. I'd have to roll back to see changes, and I wouldn't even get one of the great features of version control: seeing the changes made in a specific commit just by looking at diffs. The log is nice, though, for seeing how you progressed over time and for notes, but to me personally it's not worth it.
Take a look at Perforce - it has a plugin and tools that allow you to use it from within designer tools such as Photoshop. It's also super fast, integrates well with Visual Studio, and runs on Windows as well as Linux.
What I did once was create a "Snapshot" shortcut on the desktop that added and committed everything from a specific directory.
If every designer commits to their own branch (trivial with a DVCS but easy with SVN too) there will be no conflicts, and the cross-branch merging can be done at intervals by someone who isn't afraid of it.
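Since the original poster is on Windows, such a snapshot shortcut can point at a tiny batch file along these lines (the path is illustrative, assuming an SVN working copy):
cd /d C:\work\designs
rem pick up any new files, then commit everything in one go
svn add --force . -q
svn commit -m "snapshot %date% %time%"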
I've had my eyes on GridIron's Flow for a while now. It looks like a competent version control suite with some neat asset management features, such as visualizing differences between versions of graphics and relationships between assets. Flow supports handling files for Adobe Photoshop, Illustrator, Flash, etc. However, as of now (early January) GridIron hasn't released Flow, beyond announcing the beta program.
Most digital agencies that I know of that mainly do web development use Subversion for version control. To avoid conflicts on image files, an artist will lock the files he or she is working on. That way, another artist won't make the mistake of overwriting changes. This requires some coordination among artists and devs so that no one steps on anyone's toes. Also, if someone forgets to unlock, it is possible to break locks.
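Subversion supports that workflow directly: set svn:needs-lock on binary files so they stay read-only until someone takes the lock, for example:
# make the file read-only for everyone until it is locked
svn propset svn:needs-lock '*' header.psd
# take the lock before editing; committing releases it again by default
svn lock -m "reworking the header" header.psd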
If you're into distributed version control you might want to take a look at Mercurial, as it has good support for Windows and some neat cheat sheets. The Ruby kids are using git, but its Windows support is rather lacking.
Before using version control with artists, at least make sure they know the basics of version control and let them fool around with it in a sandbox. Also make sure they've set up some basic rules of conduct for collaborating with each other and interacting through version control (i.e. ways to make sure they don't destroy each other's work or step on each other's toes).

Storing third-party libraries in source control

Should libraries that the application relies on be stored in source control? One part of me says it should and another part says no. It feels wrong to add a 20 MB library that dwarfs the entire app just because you rely on a couple of functions from it (albeit rather heavily). Should you just store the jar/dll, or maybe even the distributed zip/tar of the project?
What do other people do?
Store everything you will need to build the project 10 years from now. I store the entire zip distribution of any library, just in case.
Edit for 2017:
This answer did not age well :-). If you are still using something old like Ant or make, the above still applies. If you use something more modern like Maven or Gradle (or NuGet on .NET, for example) with dependency management, you should be running a dependency management server in addition to your version control server. As long as you have good backups of both, and your dependency management server does not delete old dependencies, you should be OK. For examples of dependency management servers, see Sonatype Nexus or JFrog Artifactory, among many others.
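For the Maven case, pointing all builds at such an internal server is a small entry in settings.xml, roughly (the server URL is a placeholder):
<!-- in ~/.m2/settings.xml, inside the <mirrors> section -->
<mirror>
  <id>internal-repo</id>
  <mirrorOf>*</mirrorOf>
  <url>https://nexus.example.com/repository/maven-public/</url>
</mirror>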
As well as having third-party libraries in your repository, it's worth doing it in a way that makes it easy to track and merge in future updates to the library (for example, security fixes). If you are using Subversion, a proper vendor branch is worthwhile.
If you know that it'd be a cold day in hell before you'd modify your third party's code then (as @Matt Sheppard said) an external makes sense and gives you the added benefit that it becomes very easy to switch to the latest version of the library should security updates or a must-have new feature make that desirable.
Also, you can skip externals when updating your code base saving on the long slow load process should you need to.
@Stu Thompson mentions storing documentation etc. in source control. In bigger projects I've stored our entire "clients" folder in source control, including invoices / bills / meeting minutes / technical specifications etc. The whole shooting match. Although, ahem, do remember to store these in a SEPARATE repository from the one you'll be making available to: other developers; the client; your "browser source view"... cough... :)
Don't store the libraries; they're not strictly speaking part of your project and they uselessly take up room in your revision control system. Do, however, use Maven (or Ivy for Ant builds) to keep track of which versions of external libraries your project uses. You should run a mirror of the repo within your organisation (and back it up) to ensure you always have the dependencies under your control. This ought to give you the best of both worlds: external jars outside your project, but still reliably available and centrally accessible.
We store the libraries in source control because we want to be able to build a project by simply checking out the source code and running the build script. If you aren't able to get latest and build in one step then you're only going to run into problems later on.
Never store your 3rd-party binaries in source control. Source control systems are platforms that support concurrent file sharing, parallel work, merging efforts, and change history. Source control is not an FTP site for binaries. 3rd-party assemblies are NOT source code; they change maybe twice per SDLC. The desire to be able to wipe your workspace clean, pull everything down from source control, and build does not mean 3rd-party assemblies need to be stuck in source control. You can use build scripts to control pulling 3rd-party assemblies from a distribution server. If you are worried about controlling which branch/version of your application uses a particular 3rd-party component, you can control that through build scripts as well. People have mentioned Maven for Java, and you can do something similar with MSBuild for .NET.
I generally store them in the repository, but I do sympathise with your desire to keep the size down.
If you don't store them in the repository, they absolutely do need to be archived and versioned somehow, and your build system needs to know how to get them. Lots of people in the Java world seem to use Maven for fetching dependencies automatically, but I've not used it, so I can't really recommend for or against it.
One good option might be to keep a separate repository of third-party systems. If you're on Subversion, you could then use Subversion's externals support to automatically check out the libraries from the other repository. Otherwise, I'd suggest keeping an internal anonymous FTP (or similar) server which your build system can automatically fetch requirements from. Obviously you'll want to make sure you keep all the old versions of libraries, and have everything there backed up along with your repository.
What I have is an intranet Maven-like repository where all 3rd-party libraries are stored (not only the libraries, but their respective source distributions with documentation, Javadoc and everything). The reasons are the following:
why store files that don't change in a system specifically designed to manage files that change?
it dramatically speeds up check-outs
each time I see "something.jar" stored under source control I ask "and which version is it?"
I put everything except the JDK and IDE in source control.
Tony's philosophy is sound. Don't forget database creation scripts and data structure update scripts. Before wikis came out, I used to even store our documentation in source control.
My preference is to store third party libraries in a dependency repository (Artifactory with Maven for example) rather than keeping them in Subversion.
Since third-party libraries aren't managed or versioned like source code, it doesn't make a lot of sense to intermingle them. Remote developers also appreciate not having to download large libraries over a slow VPN link when they can get them more easily from any number of public repositories.
At a previous employer we stored everything necessary to build the application(s) in source control. Spinning up a new build machine was a matter of syncing with the source control and installing the necessary software.
Store third party libraries in source control so they are available if you check your code out to a new development environment. Any "includes" or build commands that you may have in build scripts should also reference these "local" copies.
As well as ensuring that third party code or libraries that you depend on are always available to you, it should also mean that code is (almost) ready to build on a fresh PC or user account when new developers join the team.
Store the libraries! The repository should be a snapshot of what is required to build a project at any moment in time. As the project comes to require different versions of external libraries, you will want to update / check in the newer versions of those libraries. That way you will be able to get all the right versions to go with an old snapshot if you have to patch an older release, etc.
Personally I have a dependencies folder as part of my projects and store referenced libraries in there.
I find this makes life easier as I work on a number of different projects, often with inter-depending parts that need the same version of a library meaning it's not always feasible to update to the latest version of a given library.
Having all dependencies used at compile time for each project means that a few years down the line, when things have moved on, I can still build any part of a project without worrying about breaking other parts. Upgrading to a new version of a library is simply a case of replacing the file and rebuilding related components - not too difficult to manage if need be.
Having said that, I find most of the libraries I reference are relatively small weighing in at around a few hundred kb, rarely bigger, which makes it less of an issue for me to just stick them in source control.
Use git submodules, and either reference the 3rd-party library's main git repository or (if it doesn't have one) create a new git repository for each required library. There's no reason you're limited to just one git repository, and I don't recommend using somebody else's project as merely a directory in your own.
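Adding such a submodule is a single command, and a fresh clone then needs one extra step (the URL and path are illustrative):
# record the library as a submodule pinned at a specific commit
git submodule add git://example.com/libfoo.git lib/libfoo
# after cloning the parent project, fetch the pinned submodules
git submodule update --init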
Store everything you'll need to build the project, so you can check it out and build without doing anything else.
(and, as someone who has experienced the pain - please keep a copy of everything needed to get the controls installed and working on a dev platform. I once got a project that could build - but without an installation file and reg keys, you couldn't make any alterations to the third-party control layout. That was a fun rewrite)
You have to store everything you need in order to build the project.
Furthermore, different versions of your code may have different dependencies on 3rd parties.
You'll want to branch your code into a maintenance version together with its 3rd-party dependencies...
Personally, what I have done - and have so far liked the results of - is store libraries in a separate repository and then link to each library that I need in my other repositories through Subversion's svn:externals feature. This works nicely because I can keep versioned copies of most of our libraries (mainly managed .NET assemblies) in source control without bulking up the size of our main source code repository at all. Having the assemblies stored in the repository this way means the build server doesn't have to have them installed to make a build. Getting a build to succeed without Visual Studio installed was quite a chore, but now that we have it working we are happy with it.
Note that we don't currently use many commercial third-party control suites or that sort of thing, so we haven't run into licensing issues where it may be required to actually install an SDK on the build server, but I can see how that could easily become a problem. Unfortunately I don't have a solution for that and will plan on addressing it when I first run into it.