We have our build and deployment scripts set up in TFS 2010.
But we are also evaluating Inedo BuildMaster. Has anyone used this before?
Also, in general, for a full .NET house, does it make sense to have another SCM management tool?
Here is the link for Inedo
I found this while researching Inedo's BuildMaster as well. We're a .NET/TFS shop, and BuildMaster solves all sorts of different problems.
Here's a blog post I found that discusses the differences:
http://blog.inedo.com/2011/06/06/how-does-buildmaster-compare-to-team-foundation-server/
We're using the free version of BuildMaster and may upgrade to enterprise once we use it for other projects.
BuildMaster has a TFS plugin that helps grab builds from TFS Build. We use gated check-in to ensure the code builds, and BuildMaster to package the build for one-click deployment through the environments. BuildMaster takes a fix-forward approach (as in, no rollbacks): you create many builds for a release, and each propagates through each environment. When one or more builds sit in, say, QA and have not yet moved to Staging, they will all be moved to Staging at the same time, but in order, thereby ensuring all artifacts move through every environment.
Prior to BuildMaster, we used an XML-driven PowerShell script that worked well, but BuildMaster agents saved us from executing scripts over remote desktop. Our PowerShell script has one advantage that BuildMaster does not yet have: we used the XML configuration file to hold application configuration file information, including file names, relative paths and XPath settings, to inject values, insert XML fragments and remove XML nodes from configuration files coming from source control. BuildMaster instead uses template configuration files stored in BuildMaster, with tag replacement for each environment. This results in high maintenance should anything change in a configuration file, such as an additional environment-agnostic section being added, which would require recreating the template.
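To illustrate the kind of XML-driven injection described above, here is a minimal PowerShell sketch. The manifest format, file paths and XPath expressions are hypothetical, not taken from the actual script:

# manifest.xml (hypothetical) describes what to change in each config file:
# <configChanges>
#   <file path="Web.config">
#     <set xpath="/configuration/appSettings/add[@key='ServiceUri']/@value" value="https://servicepoint.qa.example.net/" />
#     <remove xpath="/configuration/system.diagnostics" />
#   </file>
# </configChanges>

$manifest = [xml](Get-Content ".\manifest.xml")
foreach ($file in $manifest.configChanges.file) {
    $config = [xml](Get-Content $file.path)
    foreach ($set in $file.set) {
        # inject a value at the attribute the XPath points to
        $node = $config.SelectSingleNode($set.xpath)
        if ($node -ne $null) { $node.Value = $set.value }
    }
    foreach ($remove in $file.remove) {
        # drop whole nodes that should not ship to this environment
        $node = $config.SelectSingleNode($remove.xpath)
        if ($node -ne $null) { $node.ParentNode.RemoveChild($node) | Out-Null }
    }
    $config.Save((Resolve-Path $file.path).Path)
}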
BuildMaster does have a custom action that allows you to run executables, so in theory you can run your own commands to cover functionality that BuildMaster does not have built in, but this is not ideal.
We are still using an in-house TFS 2012 server but I'm now looking at moving to VSTS. I have a couple of questions though:
Years ago I customised our build process template to perform a number of additional tasks, and I was wondering if VSTS builds can be customised in a similar way, specifically to do what we currently do:
Run StyleCop
Change the version number in every AssemblyInfo.cs file in the solution prior to building (the major and minor numbers are specified in the build definition); see the sketch after this list.
Run a batch file at the end of the build which runs an InnoSetup script to create a "setup.exe" file (the batch file name is again specified in the build definition).
(The first two are (I think) DLLs that came from the now defunct tfsbuildextensions.codeplex.com site).
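For the version-stamping step, a short PowerShell script run as a build task can do the job. This is only a sketch under assumed conventions: the version scheme is invented, and BUILD_SOURCESDIRECTORY / BUILD_BUILDID are standard build variables you would swap for whatever your definition exposes.

# Stamp every AssemblyInfo.cs under the sources directory with a build-generated version.
$major = 2      # would come from a build variable
$minor = 1      # would come from a build variable
$version = "$major.$minor.0.$($env:BUILD_BUILDID)"   # hypothetical numbering scheme

Get-ChildItem -Path $env:BUILD_SOURCESDIRECTORY -Recurse -Filter "AssemblyInfo.cs" | ForEach-Object {
    $content = Get-Content $_.FullName
    $content = $content -replace 'AssemblyVersion\("[^"]*"\)', "AssemblyVersion(""$version"")"
    $content = $content -replace 'AssemblyFileVersion\("[^"]*"\)', "AssemblyFileVersion(""$version"")"
    Set-Content -Path $_.FullName -Value $content
}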
Second question: we currently have an in-house NuGet repository. Am I right in saying I could host this on VSTS instead? And will that be accessible both to VSTS builds and our dev team?
The newer build system is fully extensible. You can simply add "Command Line", "Batch File", or "PowerShell" tasks to run whatever commands you'd like during your build process. Any customizations you made to your XAML build process templates will have to be ported manually, but it's entirely possible that someone has created free extensions that are available to install from the VSTS marketplace that replicate the behavior you're seeking.
VSTS supports package management feeds. It's an extension and requires additional licensing, but the simple answer to your question is "yes".
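For flavour, consuming and publishing to such a feed typically looks like the commands below. The account name, feed name and URL format are placeholders (copy the real source URL from the feed's Connect dialog), and authenticated pushes normally rely on the VSTS credential provider:

# Register the VSTS feed as a package source (URL is illustrative).
nuget sources add -Name "TeamFeed" -Source "https://youraccount.pkgs.visualstudio.com/_packaging/TeamFeed/nuget/v3/index.json"

# Push an internal package to the feed.
nuget push .\MyLibrary.1.0.0.nupkg -Source "TeamFeed" -ApiKey VSTS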
I am using Sitecore 6.6.0, we have multiple environments
Local
DEV
QA
PROD
I have to deploy a few changes directly from Local to PROD (don't ask me why directly to PROD; even if it were to QA, my question remains the same). What I am doing is creating a package on my local instance with all the items, and separately creating a folder structure for all the files related to the fix, and deploying that to PROD.
There is always a chance of human error, since I will have to remember all associated items and files for a fix, so is there a better automated way, which will not skip any changed Items or Files?
On another note, I am using Bitbucket for source controlling the Sitecore code, but what about the Sitecore databases? Most Sitecore development lives in the databases. What is the best approach to source control Sitecore DBs?
Update
Installed packages from NuGet:
After installing Unicorn from NuGet, along with unicorn.default.config, I get the following error:
Attempt by method 'Unicorn.Data.DataProvider.UnicornDataProvider..ctor(Unicorn.Data.ITargetDataStore, Unicorn.Data.ISourceDataStore, Unicorn.Predicates.IPredicate, Rainbow.Filtering.IFieldFilter, Unicorn.Data.DataProvider.IUnicornDataProviderLogger, Unicorn.Data.DataProvider.IUnicornDataProviderConfiguration, Unicorn.Predicates.PredicateRootPathResolver)' to access method 'System.Action`1<System.__Canon>..ctor(System.Object, IntPtr)' failed.
Further, after following the ReadMe on GitHub, when I do a sync on site/unicorn.aspx:
[P] Auto-publishing of synced items is beginning.
ERROR: Method not found: 'Sitecore.Publishing.Pipelines.Publish.PublishResult Sitecore.Publishing.Publisher.PublishWithResult()'. (System.MissingMethodException)
at Unicorn.Publishing.ManualPublishQueueHandler.PublishQueuedItems(Item triggerItem, Database[] targets, IProgressStatus progress)
at Unicorn.Pipelines.UnicornSyncEnd.TriggerAutoPublishSyncedItems.Process(UnicornSyncEndPipelineArgs args)
at (Object , Object[] )
at Sitecore.Pipelines.CorePipeline.Run(PipelineArgs args)
at Unicorn.ControlPanel.SyncConsole.Process(IProgressStatus progress)
Solution:
For older Sitecore versions (pre 7.2, IIRC) you need to disable the auto-publish config file, as it relies on a method added in a later Sitecore release.
https://github.com/kamsar/Unicorn/issues/103
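In Sitecore the usual way to disable a patch file is to append .disabled to its name so it is no longer loaded. The exact file name below is an assumption based on Unicorn's NuGet layout, so check your App_Config\Include folder for the actual name:

# Rename Unicorn's auto-publish include file so Sitecore ignores it (file name may differ per Unicorn version).
Rename-Item -Path ".\App_Config\Include\Unicorn\Unicorn.AutoPublish.config" -NewName "Unicorn.AutoPublish.config.disabled"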
In order to track the database changes you are making, you will first need to install software that can serialize your changes and store them in source control. Team Development for Sitecore (TDS) and Unicorn are the two most popular options.
You will also want to make sure you have your own local database where you are making your changes, so you can isolate those changes from QA, PROD, etc., allowing you to maintain the same level of isolation you have for developing code.
Automation of this process helps reduce the human error you mention for the deployment by introducing a repeatable and known process. Here are a few blogs that can help you get started:
Jason Bert - Continuous Deployment (Git/TDS/TeamCity)
Jason St-Cyr - Automating with TeamCity and TFS (TFS/TDS/Team Build)
Andrew Lansdowne - Auto deploy Sitecore Items using Unicorn and TeamCity (Unicorn/TeamCity)
Brian Beckham - TDS and Build Configurations
You may also want to look into configuration transforms to support different values in your Sitecore Include patch files. The SlowCheetah plugin will let you create the transforms in Visual Studio (it might be built into Visual Studio 2015 now...). TDS can pick up those transforms automatically and execute them on the build server for you, or you can do it with Visual Studio itself to create published packages.
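If you need to apply an XDT transform outside Visual Studio (for example on a build server), a minimal PowerShell sketch looks like the following. The assembly path is an assumption; Microsoft.Web.XmlTransform.dll ships with Visual Studio's web tooling and is also available as a NuGet package:

# Apply Web.QA.config (an XDT transform) to Web.config and write the result.
Add-Type -Path "C:\path\to\Microsoft.Web.XmlTransform.dll"   # adjust to wherever the assembly lives

$doc = New-Object Microsoft.Web.XmlTransform.XmlTransformableDocument
$doc.PreserveWhitespace = $true
$doc.Load("C:\build\Web.config")

$transform = New-Object Microsoft.Web.XmlTransform.XmlTransformation("C:\build\Web.QA.config")
if ($transform.Apply($doc)) {
    $doc.Save("C:\build\Web.transformed.config")
} else {
    Write-Error "Transform failed"
}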
For Sitecore versioning and deployment Unicorn is also a good option.
https://github.com/kamsar/Unicorn
Cheers,
Bo
I'm looking for a good way to manage deployment to a test environment of several branches of the same codebase.
My setup includes a Main branch and several Dev branches:
Main
Dev1 (feature set 1)
Dev2 (feature set 2)
etc...
These are all part of a continuous integration process where they get deployed daily for testing.
At the moment there is a "Testing" configuration, with corresponding Web.config transformation and publish profile, that control things like connection strings (Web.config) and location to deploy the application to (MSDeploy configuration in the publish profile).
It all worked fine until merging started happening. Once we start merging code, the different Web.config transformations and publish profiles inevitably get mixed up, and Visual Studio happily merges everything without finding conflicts, resulting in situations like the Main branch being deployed to the Dev1 location, or the Dev2 application getting configured with Main's connection string.
I can think of one solution, which is creating a solution configuration for each branch, and the corresponding Web.config transformations and publish profiles. This would definitely do the job of keeping everything separate and safe from merging problems, but it creates a whole lot of new configuration files all over the place. And on top of that it makes the task of creating or deleting a branch much more complicated.
Is there another way of doing it?
You should look at Release Management for Visual Studio 2013. You can configure a release pipeline with different variables at each environment that you are deploying to.
You would then only have two transformations. One, the default, for localhost development. And a second for use on the build server that contains the RM replaceable variables.
http://nakedalm.com/building-release-pipeline-release-management-visual-studio-2013/
Trying to make my life easier: currently we have 4 developers working in Visual Studio 2012 and we are using TFS 2012 for source control. The project we work on is a multi-tenant web application (single source directory with multiple DBs) that is a mixture of legacy ASP and VB6 COM components coupled with new C# code. We use TFS for source control and for managing User Stories and Bugs. Because of the way our site works it cannot be run or debugged locally, only on the server.
Source control is currently set up with a separate branch for each developer, whose working directory is mapped to a shared network path on the dev server with a web site pointed at it in IIS (Dev01-Dev05, etc.). The developers work on projects in their branch, test them using their dev website, then check in changes to their own branch and merge those into the trunk. The trunk's workspace is mapped to the main dev website so that the developers can test their changes against the other customers' dev domains, to cover customizations and variances in functionality based on the specific DBs they are connected to.
Very long explanation but basically each dev has a branch and a site, that are then merged into the trunk with its own site.
In order to deploy to our staging server:
I compile the trunk's website via a bat file on the server
Run a windows app I built to query TFS for changesets associated with specific WorkItems in a certain status, and copy all the files for those changesets from the publish folder to a deployment folder
Run another bat file on the server to use RedGate's Deployment Manager to create a package from those new files
Go to the DM site on our network to create and deploy that release (haven't been able to get the command line tools to work for this, so I have to do it manually)
Run any SQL scripts that have been saved off in Folders that match ticket numbers on each database (10 or so customer dbs) to support the release
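For the last step, a loop like this PowerShell sketch can run the per-ticket script folders against each customer database. The folder layout, server name and database names are made up for the example, and it assumes sqlcmd is available on the box:

# Run every .sql file in each ticket folder against each customer database.
$ticketFolders = Get-ChildItem -Path "\\devserver\Releases\SqlScripts" -Directory    # one folder per ticket number
$customerDbs   = @("CustomerA_Db", "CustomerB_Db", "CustomerC_Db")                    # ~10 in reality

foreach ($folder in $ticketFolders) {
    foreach ($script in Get-ChildItem -Path $folder.FullName -Filter *.sql | Sort-Object Name) {
        foreach ($db in $customerDbs) {
            Write-Host "Applying $($script.Name) to $db"
            sqlcmd -S "STAGING-SQL01" -d $db -i $script.FullName -b
            if ($LASTEXITCODE -ne 0) { throw "Script $($script.Name) failed against $db" }
        }
    }
}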
I have tried using TFS automated build stuff and never really got it to build the website correctly. Played around with Cruise Control also with little success. Using a mishmash of skunk works projects to do this is very time consuming and unreliable at best.
My perfect scenario would be:
Gated check-in: attempt a build/publish every time a developer merges into the trunk, and reject the check-in and notify the developer if the build fails.
At the end of the day, collect the TFS items in a certain status and deploy the files associated with them to the staging site
Deploy SQL scripts for those TFS items across all the customer dbs in staging
(Eventually) run automated regression UI tests, and create new WorkItems or email the devs if they fail
Update TFS WorkItems to new state so QA/Customers know their items are ready to test in our staging environment
Send report of what items were deployed successfully
How can I get there so that I am not spending hours preparing and deploying releases to staging and eventually production? I'm pretty open to potential solutions; the one thing that would be hard to change is the source control we are using. We can't really switch to Subversion or something else, so we are pretty stuck with TFS.
Thanks
Went back in and started trying to get TFS to build/publish my web solution. I was able to get a build to complete successfully; adding the MSBuild argument /p:DeployOnBuild=True and setting the MSBuild platform to x86 seemed to do the trick.
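For reference, the equivalent command line looks roughly like the sketch below; the solution name is a placeholder, the 32-bit Framework MSBuild mirrors the "MSBuild platform: x86" setting, and a /p:PublishProfile=... argument can be added if your web publish tooling uses profiles:

# Build and publish the web solution roughly the way the TFS build does (paths and names are illustrative).
& "C:\Windows\Microsoft.NET\Framework\v4.0.30319\MSBuild.exe" .\MyWebSolution.sln /p:Configuration=Release /p:DeployOnBuild=True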
Then I found https://github.com/red-gate/deployment-manager-tfs which gives you a build process template to do the package and deployment using the redgate tools. After playing with that for a bit I finally got it to create, package and deploy my build to our staging environment.
Next up will be to modify the template to run some custom scripts to collect only the correct items to deploy, deploy all the sql files and then to set the workitems to the appropriate statuses after completion.
Really detailed description of your process. Thanks for sharing!
I believe you can set up TFS to have gated check-in on a single branch, which, if you can set it up on trunk, would make sure the merges built successfully. That could trigger MSBuild, if you can get that working, or a custom build job.
If you can get that working then you'd be able to use that trunk code as the artifact to send to Deployment Manager. That avoids having to assemble the files for deployment through the TFS change sets, as you'd be confident that the trunk could always build.
Are you using Deployment Manager to deploy the database from source control as well as the application?
That could be a way to further automate the process. SQL Source Control and SQL CI allow you to source control the structure of a database, keep a database up to date on each check-in, and run database unit tests. They also produce database packages for Deployment Manager, so you can deploy a release that contains both the application and the database.
If you want to send me the command you're using in step 4 to deploy the release using Deployment Manager I can help out with that. The commands I use are:
DeploymentManager.exe --create-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1
DeploymentManager.exe --deploy-release --server=http://localhost:81 --project="Project Name" --apiKey=XXXXXXXXXXX --version=1.1 --deployto=CI-Environment-Name
That will create a release version 1.1 using the latest available packages for that project. You can optionally specify the package to be used when creating the release with
--packageversion=<package name>=<version>
--packageversion="application=1.5
I recently joined a company as Release Engineer where a large number of development teams develop numerous services, applications, web-apps in various languages with various inter-dependencies among them.
I am trying to find a way to simplify and preferably automate releases. Currently the release team is doing the following to "release" the software:
CURRENT PROCESS OF RELEASE
Diff the latest revision from SCM between QA and INTEGRATION branches.
Manually copy/paste "relevant" changes between those branches.
Copy the latest binaries to the right location (this is automated using a .cmd script).
Restart any services
MY QUESTION
I am hoping to avoid steps 1 and 2 altogether (obviously), but I am running into issues where differences between the environments cause the config files to differ per environment (e.g. QA vs. INTEGRATION). Here is a sample:
IN THE QA ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.QA.domain.net/</value>
</setting>
IN THE INTEGRATION ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
<value>https://servicepoint.integration.domain.net/</value>
</setting>
If you look closely, the only difference between the two <setting> tags above is the URL in the <value> tag. This is because the QA and INTEGRATION environments are in different data centers and are ever so slightly out of sync (and growing further apart as development gets faster/better/stronger). Changes such as this, where only the URL/endpoint differs, are TO BE IGNORED during a "release" (i.e. these are not "relevant" changes to merge from QA to INTEGRATION).
Even in a regular release (about once a week) I have to deal with a dozen config file changes that have to be released from QA to INTEGRATION, and I have to manually go through each config file and copy/paste the non-URL-related changes between the files. I can't simply take the entire package that the CI tool spits out from QA (or after QA), since the URLs/endpoints are different.
Since there are multiple programming languages in use, the config file example above could be C#, C++ or Java. So am hoping any solution would be language agnostic.
SUMMARY OF ENVIRONMENTS/PROGRAMMING LANGUAGES/OS/ETC.
Multiple programming languages - C#, C++, Java, Ruby. Management is aware of this as one of the problems, since the release team has to be king-of-all-trades, and is addressing it.
Multiple OS - Windows 2003/2008/2012, CentOS, Red Hat, HP-UX. Management is addressing this too - starting to consolidate and limit to Windows 2012 and CentOS.
SCM - Perforce, TFS. Management is trying to move everyone to a single tool (likely TFS)
CI is being advocated, though not mandatory - management is pushing the change through, but it is taking time.
I have given the example of QA and INTEGRATION, but in reality there are QA (managed by developers+testers), INTEGRATION (managed by my team), STABLE (releases to STABLE by my team but supported by Production Ops) and PRODUCTION (supported by Production Ops). These are the official environments; devs and test teams have a few more unofficial ones. I would eventually want to start standardizing/consolidating these unofficial envs too, since devs+testers should not have to worry about doing this kind of stuff.
There is a lot of work being done to standardize how the binaries are being deployed using tools like DeployIT (http://www.xebialabs.com/products) which may provide some way to simplify these config changes.
The devs teams are agile and release often, but that just means more work diffing config files.
SOLUTIONS SUGGESTED BY TEAM MEMBERS:
The current mind-set is to use a load balancer and standardize names across the different environments, but I am not sure if "a process" such as this is the right solution. There must be a better way, one that starts with how devs write configs and extends to how release environments meet dependencies.
Alternatively, some team members are working on install scripts (InstallShield / MSI) to automate find/replace of URLs/endpoints between envs. I am hoping this is not the solution, but it is doable.
If I have missed anything or should provide more information, please let me know.
Thanks
[Update]
References:
Managing complex Web.Config files between deployment environments - C# web.config specific, though a very good start.
http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx - OK, though at first look this seems rather rudimentary and may break easily.
Generally the problem isn't too difficult - you need branches for each of the environments and a CI build set up for each. So a merge to the QA branch would trigger a build of that code and a custom deployment to QA. Simple.
Now, managing multiple config files isn't quite so easy, unless you have one per environment, in which case you just call them Int.config, QA.config, etc., store them all in the SCM, and pick the appropriate one in each branch's deployment script - e.g. when the build for QA runs, it picks QA.config, copies it to the correct location and renames it to the correct name. (Incidentally, this is the approach I tend to use, as it's very simple.)
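As a rough illustration of that pick-and-rename step (the file names and paths are invented for the example), the QA branch's deployment script might do something like:

# Pick this environment's config out of source control and drop it into the deployment as Web.config.
$environment = "QA"    # each branch's script hard-codes (or is passed) its environment name
Copy-Item -Path ".\Configs\$environment.config" -Destination ".\Deploy\Web.config" -Force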
If you have multiple configs you need to use, then it's always going to be a manual process - but you can help yourself by copying all the relevant configs to a build staging area that an admin will use to perform the deployment. It's a good first step in that the build they have in the staging directory will be the correct one; they just have to choose which config to use, either during installation (e.g. as an option in the installer) or by manually copying the appropriate config over.
I would not try to manage some automated way of taking a single config file in source control and re-writing it with different data in the build, or pre-deploy steps. That way lies madness, and a lot of continual hassle trying to maintain the data and the tooling. Keep separate configs in place and make sure the devs know to update all of them when they make a change. (Or, you can hold 1 config in the SCM tree and make sure they know that merging their changes must not overwrite any existing modifications - multiple configs is easier)
I agree with @gbjbaanb. Have one config for each environment. Get your developers to write apps that read their properties (including their URLs) from config files, and commit config files for each environment. Not only does this help you with deployment, but config files under revision control provide reproducibility, full transparency, and an audit trail of your environment-specific settings.
Personally, I prefer to create a single deployable package that works on any environment by including all of the environment configs (even the ones you aren't using). You can then have some deployment automation that figures out which config files the apps should use and sets that up appropriately.
Thanks to @gman and @gbjbaanb for the answers (https://stackoverflow.com/a/16310735/143189, https://stackoverflow.com/a/16246598/143189), but I felt that they didn't help me solve the underlying problem I am facing, so I am restating it to make it clear.
The code seems very aware of the environment in which it runs. How do you write environment-agnostic code?
The suggestions in the answers above are to store 1 config file for each environment (environment-config). This is possible, but any addition/deletion/edit of non-environment settings will have to be ported over to each environment-config.
After some study, I wonder if the following would work better?
Keep the config file's structure consistent/standardized e.g. XML. Try to keep the environment-specific endpoints in this config-file but store them in a way that allows easy access to the specific individual nodes/settings (e.g. using XPath).
When deploying to a specific environment, then your deployment tool should be able to parse (e.g. using XPath) and update the environment-specific endpoint to the value for the specific environment to which you are deploying.
The above is not a unique idea. There are some existing implementations that tackle the above solution already:
http://www.iis.net/learn/develop/windows-web-application-gallery/reference-for-the-web-application-package & http://www.iis.net/learn/publish/using-web-deploy/web-deploy-parameterization (WebDeploy)
http://docs.xebialabs.com/releases/3.9/deployit/packagingmanual.html#using-placeholders-in-ci-properties (DeployIt)
Home-spun solutions using XPath find and replace.
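To make the home-spun option concrete, a deploy-time step could look like the sketch below; the config path and environment map are placeholders, and the XPath targets the ServiceUri sample shown earlier in the question:

# Rewrite the environment-specific endpoint in a config file at deploy time.
$environmentUrls = @{
    "QA"          = "https://servicepoint.QA.domain.net/"
    "INTEGRATION" = "https://servicepoint.integration.domain.net/"
}
$targetEnvironment = "INTEGRATION"

$configPath = "C:\Deploy\App.config"    # illustrative path
$config = [xml](Get-Content $configPath)

# XPath to the <value> element under the ServiceUri setting.
$node = $config.SelectSingleNode("//setting[@name='ServiceUri']/value")
$node.InnerText = $environmentUrls[$targetEnvironment]
$config.Save($configPath)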
In short, while there are programming-language-specific solutions and programming-language-agnostic solutions, I guess the big downside is that release management needs to be considered during development too, or it will cause deployment headaches. I don't like that, since it sounds like "development should be aware of what tests will be designed". Whether there is a need AND a way to avoid this is the big question.
I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.
First, read this book. I'm 2/3 of the way through it and it's answering every question I ever had about software delivery, and many that I never thought to ask: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ref=sr_1_1?s=books&ie=UTF8&qid=1371099379&sr=1-1
Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.
Use a Continuous Integration server, we use TeamCity and are pretty happy with it so far.
The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.
Tools like Visual Studio support config file transformation, which we looked at briefly but quickly realized depends on environment-specific config files being prepared alongside the code, by the developers, in order to be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply it to the package as it deploys. Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and falsely inflate the build number.
This way, your environment-specific config files only contain things that change on a per-environment basis, and only if that environment needs something different from the default. Contrary to @gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc., so I guess we're heading down the path of madness. :-)
For us, Powershell, XML and Web Deploy parameterization will be instrumental.
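To give a flavour of Web Deploy parameterization (the parameter name, values and package below are illustrative, not from our pipeline): a SetParameters file carries the per-environment values, and msdeploy applies them when syncing the environment-agnostic package.

# QA.SetParameters.xml - per-environment values fed to Web Deploy (names must match the package's declared parameters).
@"
<parameters>
  <setParameter name="ServiceUri" value="https://servicepoint.QA.domain.net/" />
</parameters>
"@ | Set-Content "QA.SetParameters.xml"

# Deploy the environment-agnostic package, applying QA's parameter values at deploy time.
& "C:\Program Files\IIS\Microsoft Web Deploy V3\msdeploy.exe" `
    -verb:sync `
    -source:package="MyApp.zip" `
    -dest:auto,computerName="qa-web01" `
    -setParamFile:"QA.SetParameters.xml"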
I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated several times in various places.
Good luck!