Need advice/resources on starting nightly builds in TFS - version-control

I'm currently looking to start setting up nightly builds with TFS, something our company has never done before. I'm looking for pointers on where to get started, what I should look out for, and how to structure our solutions.
Background
Our current TFS source location has 2 web projects, 5-10 Windows services, and 10-15 supporting DLLs. These will continue to grow.
Currently there are solution files for each web project and each Windows service. Each of these solutions contains the supporting projects (internal DLLs) as well as the corresponding unit test projects.
All of our external dependencies (log4net, NHibernate, etc.) are managed by NuGet and live in a folder within TFS called packages.
Some of my questions include, but are not limited to:
Should I have a master solution file that contains all of these projects? Would that make setting up nightly builds easier?
I'd also like to run the unit and integration tests as part of the nightly builds. Is this just additional configuration on the build server?
What tools are involved when setting up nightly builds with TFS?
I'm not necessarily looking for complete answers, but it would be great if someone could point me to some good resources (books, websites, blogs). Like I said, I'm really green as far as nightly builds are concerned and I just want to make sure I start off on the right foot. Hopefully I can learn from others' mistakes.

Here are some simple "answers" to your 3 questions (though I agree with the comments above that this isn't the most answerable SO question):
A good read on creating reliable builds with MSBuild: http://msdn.microsoft.com/en-us/magazine/dd483291.aspx
Yes, running tests is just an option in a TFS build definition, and you can configure a few options beyond simple on/off: http://msdn.microsoft.com/en-us/library/ms253138.aspx
You can also use TFS Lab Management and test agents to execute tests in a different manner: http://blogs.msdn.com/b/lab_management/archive/2009/05/18/vsts-2010-lab-management-basic-concepts.aspx
Configuring TFS builds: http://msdn.microsoft.com/en-us/library/dd647547.aspx
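On the master-solution question: a TFS build definition can be pointed at several solutions, so a single master solution is a convenience rather than a requirement. To get a feel for what the nightly build actually does before you wire up a build definition, here is a minimal PowerShell sketch of the same steps; the solution and test-assembly paths are hypothetical placeholders.

```powershell
# Minimal sketch of what a nightly build boils down to, runnable from a
# scheduled task. Solution and test-assembly paths are hypothetical.
$msbuild = "$env:WINDIR\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe"
$solutions = @('.\WebProject1\WebProject1.sln', '.\ServiceA\ServiceA.sln')

foreach ($sln in $solutions) {
    & $msbuild $sln /t:Rebuild /p:Configuration=Release
    if ($LASTEXITCODE -ne 0) { throw "Build failed: $sln" }
}

# Run the unit tests (a TFS build definition does this step for you once
# test runs are enabled in the definition).
$mstest = "${env:ProgramFiles(x86)}\Microsoft Visual Studio 10.0\Common7\IDE\MSTest.exe"
& $mstest /testcontainer:.\WebProject1.Tests\bin\Release\WebProject1.Tests.dll
```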

Related

Automatically Combine/Bundle EF Migration(s)

I'm wondering if it is possible to run some automated tasks either on a release (Web Deploy) action or on a branch merge (TFS) action?
Ideally I would like to set up a process that will automatically combine EF migrations since the last release. I'm still looking into how I would automate this, but I think the first step is hooking into a suitable event.
I haven't set up a build server yet, but I'm guessing that if the above isn't possible, attaching a custom procedure to the MSBuild task would be an option?
Alternatively, if anyone has experience automating things like this, I would be happy to hear it. I am the head of development at a web development company and I would like to streamline our current processes by automating some of our standard procedures, and this is something we do over and over again for every project!
I appreciate your time looking at my question, thanks.
VSTS and TFS 2015 both support a CI/CD process via their new build and release system. It's very flexible and powerful. Check it out!
https://msdn.microsoft.com/Library/vs/alm/Release/getting-started/understand-rm
VS/WebDeploy does support deploying EF migrations with a web application:
https://msdn.microsoft.com/en-us/library/dd394698?f=255&MSPPError=-2147217396#efcfmigrations
This works fine for deploying a small application/system but when you want to deploy a larger system with many components it doesn't work as well. We create MSDeploy packages for each component of the system. For example, this is how we deploy SQL databases:
http://dotnetcatch.chief7.space/2016/02/10/deploying-a-database-project-with-msdeploy/
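If you do end up scripting this on a build server, note that EF 6 ships a command-line runner, migrate.exe, in the EntityFramework NuGet package's tools folder. It applies pending migrations; it does not combine them (combining still means re-scaffolding by hand), but it is a common hook point for a custom build or release step. A minimal sketch, with the package version, assembly name, and paths all hypothetical:

```powershell
# Minimal sketch: apply pending EF migrations from a deployment script.
# Package version, assembly name, and paths are hypothetical.
$migrate = '.\packages\EntityFramework.6.1.3\tools\migrate.exe'

& $migrate MyApp.dll `
    /startUpDirectory:.\Deploy\bin `
    /startUpConfigurationFile:.\Deploy\Web.config `
    /verbose
if ($LASTEXITCODE -ne 0) { throw 'EF migration step failed' }
```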

How can I share deployment code between Lab Management and Release Management

After having just started using Microsoft Release Management, I am more and more convinced that it is not well suited to running integration tests. This might be a false impression, and I'd love to get more input on it. When we first considered the tool, I had the intention to run the tests defined in our test plan through its pipeline, but now I'm seeing that we should be running those as frequently as possible. We would like to run integration testing every night, but our release candidates are only defined at the end of sprints, so using Release Management for that seems conflicting.
With the tool out of the equation, we are considering exploring the Lab Template again. We did some very minor tests with it a few months ago in a legacy project but never went too far. My main concern now is that both stages need deployment:
the Release Management pipeline needs to deploy our projects to the QA and production environment
the Lab Template also needs to deploy the project on a few virtual machines to run integration tests on
Release Management uses some very nice abstractions to achieve that. You can define machine scopes and define components based on the drop folder structure to describe each part of the whole application to be deployed. On the other hand, the Lab Management workflow does not support this (or perhaps I'm just missing it). The standard way to make deployment work for lab testing is to write a custom PowerShell script that moves the files from the build drop folder to the correct places, creates the application pools for web projects, and so on, all by hand.
Ideally, I'd like to just share the entire deployment workflow between both tools and, since the Release Management way of doing it seems much simpler, I'd use that. This would make it easier to maintain both pipelines at the same time, which I assume is actually commonplace.
What is the correct approach to share the deployment code as much as possible between the two tools?
I would expect that better integration between RM and MTM/LM will be a future feature. In the interim, you could investigate using Desired State Configuration to handle having a single script that configures environments for you.
DSC support isn't really out-of-the-box in RM Update 2, but RM Update 3 will have built-in support for DSC to both Azure and on-prem VMs. Update 3 CTP 1 is out right now, but it's not production-ready.
You can still use DSC from RM in Update 2, it just requires a bit more work.
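For what it's worth, a DSC configuration is plain PowerShell, so the same document can be invoked from a Release Management component, from a lab deployment script, or by hand, which is exactly the kind of sharing being asked about. A minimal sketch using only built-in resources; the node name and paths are hypothetical:

```powershell
# Minimal DSC sketch: ensure IIS is installed and copy the application
# from the build drop. Node name and paths are hypothetical.
Configuration WebAppDeployment {
    Node 'QA-WEB-01' {
        WindowsFeature IIS {
            Ensure = 'Present'
            Name   = 'Web-Server'
        }
        File AppFiles {
            Ensure          = 'Present'
            Type            = 'Directory'
            Recurse         = $true
            SourcePath      = '\\build\drop\WebApp'
            DestinationPath = 'C:\inetpub\wwwroot\WebApp'
            DependsOn       = '[WindowsFeature]IIS'
        }
    }
}

WebAppDeployment -OutputPath .\Mof     # compile the MOF
Start-DscConfiguration -Path .\Mof -Wait -Verbose
```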

Choosing the right build and deployment tools

I have a very large, mostly HTML/SSI site that I manage part-time and deploy weekly, in addition to being an enterprise Magento developer. The site in question has ~5000 static HTML files and requires a lot of upkeep to manage deployments.
In addition to that site, I manage numerous Magento installs. I currently manage them from SVN and do exports/checkouts from various production and QA branches/tags.
While this is manageable, I don't get some of the things that I know build tools provide. Some of those features would be:
Automatic Minification of CSS/JS
Revision History
Multi-server deployment
Runtime configuration
Stats of broken builds/build time/deployment frequency
Integration with Testing frameworks
The three tools I've been reviewing are:
Apache Ant
phpUnderControl
Capistrano (at the insistence of a friend of mine who is a RoR dev)
I briefly looked at Hudson, and had a ton of problems trying to get it up and running.
My Questions:
What is the upside/downside of going to this type of strategy?
Any hidden pitfalls that you've experienced?
Which tool do you think would best fit for the deployment/management of the HTML site?
Does anyone have experience with deploying distributed Magento from a deployment/build management system?
Thanks in advance...
Update
Still no movement here, so I'm going to ask this:
Should I instead rebuild on HTML5 Boilerplate, which has Ant build scripts out of the box? This would afford me the ability to use Ant, and since the build scripts are already pre-made I would have a good starting point. Your thoughts and suggestions are welcome.
I've got one more tool for you to review: Jenkins (formerly Hudson).
It's a great tool to run and control your builds. Furthermore, you can access the console remotely and get notifications via the Jabber protocol.

How do you manage your project life cycle? [closed]

How do you manage your project life cycle?
For example: Do you start with a template? Do you use versioning such as SVN as the authoritative source? Do you archive the projects, if so when and how? When a project is revived (work resumes), how’s that handled? Do you use automated scripts to do things such as create IIS sites, DBs, archive, launch, etc?
Of particular interest is management of many projects at varying points of development.
Development: We do not start with a template, because the world changes quickly enough to make template maintenance a full-time job. We do encourage everybody to use the same IDE (Eclipse), so that they can help each other with their environments.
Project Management: We are using GForge to manage our projects. SourceForge is slightly better, but GForge is much cheaper and has a different licensing-fee model. GForge incorporates CVS, SVN, document storage, and issue trackers, and integrates everything nicely. This makes it easy to see where the project is at: open issues, closed issues with connected code changes, everything is integrated.
Versioning: Although we tried SVN, we switched back to CVS because it fits our needs better and works fine.
Backups: Our GForge server, housing all our projects and source code, runs on a VMware ESX server. Backups are done daily at the VM level, and we make VM snapshots if we feel we need more frequent restore points for some reason.
Reviving projects: This is very common in our business. Every project has all its libraries and build requirements in CVS. The project always has an up-to-date development manual which describes all the steps to get a development environment running, and has a chapter on everything non-default to pay attention to. We try to build software in an as-default-as-possible environment so that developers don't have to spend days tweaking their settings.
Nearly all projects are built using Maven, which also makes life easy for our developers. Usually reviving a project only takes a few steps:
Download eclipse
Connect to CVS over SSH (extssh is built into Eclipse)
Check out the project (default "Check Out" option)
Run "Maven Eclipse" and refresh Eclipse
Run the unit tests in Eclipse to see if everything is working.
Builds: All our projects are built on a separate build server. Every morning the build server does a complete build and tags CVS if all unit tests succeed. During the day, hourly builds are made, and when there are failures the team automatically gets an email. Usually we use one build server per project, and it is a simple Luntbuild server (Linux, Tomcat, Luntbuild).
Both the build server and sometimes even the developer machines are VMs. This makes reviving a project really easy: get the VM from the file server, start it up, and you're good to go.
The build server creates daily sites which show unit test coverage statistics, complexity measurements, CVS activity and developer activity (who changed what and when).
All our software comes with self-building database scripts built in. Point the config file at the database, start the software, and it figures out what it needs to do to the database by itself. This really comes in handy because the build server can just start the software, with no special steps needed. Our customers are also happy; they never need to worry about their database or upgrade scripts.
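The pattern described here is essentially a schema-version check at startup. The original software does this internally in Java; below is a hedged, standalone PowerShell sketch of the same idea, in which the version table, script layout, and connection string are all hypothetical:

```powershell
# Sketch of the self-upgrading database idea: compare the schema version
# recorded in the database with the numbered upgrade scripts shipped with
# the build, and apply any missing ones in order. Table name, script
# layout, and connection string are hypothetical.
$conn = New-Object System.Data.SqlClient.SqlConnection `
    'Server=.;Database=AppDb;Integrated Security=true'
$conn.Open()

$cmd = $conn.CreateCommand()
$cmd.CommandText = 'SELECT MAX(Version) FROM SchemaVersion'
$current = [int]$cmd.ExecuteScalar()

# Upgrade scripts named 0001.sql, 0002.sql, ... ship next to the app.
Get-ChildItem .\db-scripts\*.sql | Sort-Object Name | ForEach-Object {
    $version = [int]$_.BaseName
    if ($version -gt $current) {
        $cmd.CommandText = Get-Content $_.FullName -Raw
        [void]$cmd.ExecuteNonQuery()
        $cmd.CommandText = "INSERT INTO SchemaVersion (Version) VALUES ($version)"
        [void]$cmd.ExecuteNonQuery()
    }
}
$conn.Close()
```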
The whole project lifecycle is managed, documented and tracked in GForge, with the addition of some external spreadsheets for budget tracking because that's simply easier.
Whether you have an integrated project server or not, I think it is really important to have a system. This enables you to switch developers between projects without them getting lost. It saves time, particularly when a customer comes back to you after 2 or 3 years for modifications on old software (yes, that happens).
All the stuff we use is open source (you can even use an open source fork of GForge). It's not in the tools, it's how you use them.
It would depend on the nature of the work. When working at home for private clients, I start by opening a folder for the client with a bunch of standard documents, which I customize, such as contracts, invoices, reports, requirements, testing, code repository, etc. As the project develops, I add/modify the directory as required.
If I had to go back to a project, I would reopen that directory, and for any non-common components, create a new directory. For example, if my client had a web application built, and now they need a second application, I would use the same directories for invoices and contracts and create new directories for the code base, requirements and testing.
In terms of backup, I archive the work at any point where I've reached a milestone, with the exception of code, which I back up daily at a minimum. At the end of each project, when I close a contract, I take the entire directory and compress it and store it on a remote server.
I create folders corresponding to the project stages, like "initialize software process", where we place documents such as the business proposal; we use another for requirements, another for construction, one for releases, one for meeting minutes, and so on.
We keep those under a Subversion repository, but it really depends on what methodology you are using, how you handle configuration management, and how organized you want to be. And yes, we use templates for most of our artifacts, so we assure in some way the quality of our products.
As for source code, we have it all in a Subversion repository. After each release, we make a branch - new features only get added to the current branch (on which the next release will be based), critical bug fixes are done in the current and the old branch (so we can deliver hotfixes for the version the customers currently have).
As for all documents belonging to a release - from the planning & resource sheets to specifications, test cases, and the user and technical manuals for the software we create - we store them in a SharePoint portal site. The advantages of this SharePoint site are that users have access via a website (so there is no need to grant management access to your repository ;-), you can finely control access rights, and you can turn on versioning. We also use tagging to mark whether a document belongs to a specific release (e.g. service pack xy) or product, or whether it is generally valid.
Concerning scripts, we use several to perform e.g. the nightly build plus unit tests (we usually do that for the last and the current release), but also to deploy the complete software solution (including IIS site creation, database data model upgrade, ...) on our test servers. These are NAnt scripts using lots of variables for paths, version numbers etc., so it is very easy to copy and modify them for a new release.
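The same parameterization works in any scripting tool. As a rough PowerShell equivalent of what such a deployment script does (this is a sketch, not the NAnt scripts described above; the site name, paths, and the database upgrade helper are all hypothetical):

```powershell
# Rough sketch of a parameterized deployment: copy the build output,
# create the IIS site, run the database upgrade. All names hypothetical.
param(
    [string]$Version  = '2.3.0',
    [string]$DropPath = "\\build\drop\MyProduct\$Version",
    [string]$SiteName = 'MyProduct',
    [string]$WebRoot  = "C:\inetpub\$SiteName"
)

Import-Module WebAdministration      # IIS cmdlets (New-Website, IIS: drive)

# Copy the build output to the web root.
Copy-Item "$DropPath\Web\*" $WebRoot -Recurse -Force

# Create the IIS site if it doesn't exist yet.
if (-not (Test-Path "IIS:\Sites\$SiteName")) {
    New-Website -Name $SiteName -PhysicalPath $WebRoot -Port 80
}

# Hypothetical helper that upgrades the database data model.
& "$DropPath\Tools\upgrade-db.ps1" -Version $Version
```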

What tools do you use for Automated Builds / Automated Deployments? Why? [closed]

What tools do you use for Automated Builds / Automated Deployments? Why?
What tools do you recommend?
Hudson for automated builds. I chose it because it was the easiest to set up and demo. A system that's too complex and isn't slick-looking won't impress management enough to get them on board with automated builds, especially in a project that has a lot of inertia.
NAnt for builds (but MSBuild, Rake, almost anything would be fine) and CruiseControl.NET for deployments. I'm currently working with the new Cruise from ThoughtWorks Studios, as it provides a better way to stage the various pipelines and lets me deploy any version I want to a target environment.
We use TeamCity, from JetBrains. They also make ReSharper and IntelliJ.
We use it for building our .NET applications, and it has been quite easy to set up, connect to TFS, and run additional tools from. It is very polished, and actually kind of reminds me of this site. We found it much nicer than CruiseControl, and for our team size it is free. If you need lots of different builds, more per-user builds, and so on, then it costs a bit (but is still quite reasonable).
Funnily enough, I just spent two weeks overhauling (read: implementing from scratch) our nightly build process. Great fun (no, really). I toyed with the idea of installing Team Foundation Server, but we use Perforce for source control and I didn't think it was worth the hassle.
Our process is now a set of PowerShell scripts that run on a dedicated build/test server as a scheduled task and do the following (a skeleton is sketched after the list):
Wipe out the entire source tree (check that you didn't have anything checked out first!)
Bring down the entire source tree from Perforce (from the last labelled build)
Generate a change report (by syncing to HEAD and watching what comes down)
Build the App
Index the PDB files to the Perforce sources
Store the binaries and symbols in a dedicated symbol server
Run the test projects
Build the installer
Label
Send out emails to the group with status reports on all of the above
Works well.
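A skeleton of such a script might look like the following; depot paths, tool locations, and addresses are hypothetical, MSBuild and MSTest are assumed to be on the PATH, and the symbol-indexing and symbol-store steps (plus real error handling) are omitted for brevity.

```powershell
# Skeleton of the scheduled nightly build script described above.
$ErrorActionPreference = 'Stop'
Set-Location C:\build\src

p4 sync -f //depot/project/...               # force-fetch the source tree
p4 changes -m 50 //depot/project/... |
    Out-File C:\build\logs\changes.txt       # crude change report

& MSBuild.exe .\App.sln /t:Rebuild /p:Configuration=Release   # build the app
& MSTest.exe /testcontainer:.\Tests\bin\Release\Tests.dll     # run the tests
& MSBuild.exe .\Installer\Installer.wixproj                   # build the installer

p4 tag -l "nightly-$(Get-Date -Format yyyyMMdd)" //depot/project/...   # label

Send-MailMessage -To 'team@example.com' -From 'build@example.com' `
    -SmtpServer 'mail.example.com' -Subject 'Nightly build report' `
    -Body (Get-Content C:\build\logs\changes.txt -Raw)
```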
make and bash on Linux
make and cmd on Windows
Visual Build Pro
We use a combination of build tools and continuous integration server:
Build tools:
Maven
SBT
Gradle
Rake
Continuous Integration Servers:
Jenkins
Hudson
Travis CI
Automated Build Studio.
Instead of making you mess with scripts or XML files, it comes with predefined graphical macro operations that allow you to create tasks easily.
For our Windows-compilable stuff, we use FinalBuilder.
CruiseControl for automated builds. Works great.
For automated builds, I think the best tool going right now is JetBrains' TeamCity. The free version has all the features you'll need for most 5-10 person teams. Setup is easy, configuring new projects is painless (relatively), and most importantly, it's reliable.
For automated migrations, nothing beats PowerShell.
UppercuT uses NAnt to build, and it is an insanely easy-to-use build framework.
Automated Builds as easy as (1) solution name, (2) source control path, (3) company name for most projects!
http://code.google.com/p/uppercut/
Some good explanations here: UppercuT
More information
UppercuT is a conventional automated build, which means you set up a config file and then get a bunch of features for free. Arguably the most powerful feature is the ability to specify environment settings in ONE place and have them applied everywhere, including in the documentation when it builds the source.
Documentation available: https://github.com/chucknorris/uppercut/wiki
Features:
Simple setup
Simple upgrades
Custom extension points (pre, post, and replace) for each step of the build process http://uppercut.pbworks.com/CustomizeUsingExtensionPoints
Has documentation for integration w/Team City, CruiseControl.NET, and Jenkins (formerly Hudson) https://github.com/chucknorris/uppercut/tree/master/docs
Works on Linux w/Mono
Versioning DLLs based on build number and source control revisions (SVN, TFS, Git, HG)
Compile activities - F5 or Ctrl + Shift + B
Strong naming made as easy as true/false
Code Testing and Analysis
Testing
NUnit
MbUnit v2
Gallio
xUnit
NCover
NDepend
Nitriq
Mono Migration Analyzer
Obfuscation
ILMerge
Environment Templating and Building (ConfigBuilder, DocBuilder, SQLBuilder, DeploymentBuilder) https://github.com/chucknorris/uppercut/blob/master/docs/ConfigBuilder.doc?raw=true
Packaging output to prepare for deployment
Zips up output
At work we use good ol' Ant to build our Java servlets.
We used to use Visual Build from Kinook Software, but recently, with our new application, we switched to MSBuild since it had better integration with TFS and the ability to create custom tasks.
The GNU Autotools, definitely. autoconf and automake are the de facto standard for Unix systems.
I've had success using buildbot, triggered by a post-commit script on a subversion repository. This has been used for both automated builds and automated testing.
ANT for both build and deployment/installs.
Makes a great cross-platform installer.
We use Hericus Zed Builds And Bugs Management for our automated builds.
We have 4 branches of code, each with Java, C++, and C# components, cross-platform compiles, and installers for 5 OSes.
Make for the builds.
Debian packages for deployments (since our production servers runs it).
TeamCity running NAnt scripts for building/packaging and PowerShell for deployment.
I've found that using NAnt, driven by TeamCity, instead of the native TeamCity runners allows us to have a much richer build process (e.g. a CSS minimiser, etc.). It also means the full build/package process can be run on any developer's PC instead of just the TeamCity servers, making it much easier to customise and debug problems in the build process.