Version control vs. automatic distribution vs. dependency management

Imagine a distributed software system, installed on a group of a few hundred computers (nodes). Nodes are responsible for automatically running scheduled tasks. There are hundreds of tasks, and every task is scheduled to run on about 5-10 nodes. Nodes may stop for days, and may be removed from the system. Every task is defined by one or more source files, and node-specific config files. The code is developed and tested directly on nodes (using remote access), since only these are equipped with the special hardware and have the network context required to run the tasks (building a separate test system would be too expensive). The source files of every task refer to shared source files (libraries), and libraries may refer to other libraries. The dependency tree of tasks and libraries is complicated.
I don't have any experience with distributed version control systems, but I feel that this system could be built around a DVCS. Different libraries, and source files of different tasks, would have their own repository. Every node which runs a given task should have an instance of the repo of that task. The repo of every library, used by at least one task of a node, should also be present on that node. Developers would modify and commit code locally on nodes, and distribute the modifications to repos on other nodes using DVCS techniques.
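To make that layout concrete: the set of repos a node must host is the repos of its own tasks plus the transitive closure of their library dependencies. A small sketch of that closure computation in Python (the graph contents are invented):

# Sketch: which repos must a node host? Its tasks' repos plus the
# transitive closure of their library dependencies. Graph contents invented.
DEPENDS_ON = {
    "task-ingest": ["lib-net", "lib-hw-io"],
    "lib-net": ["lib-core"],
    "lib-hw-io": ["lib-core"],
    "lib-core": [],
}

def repos_for(tasks):
    needed, stack = set(), list(tasks)
    while stack:
        repo = stack.pop()
        if repo not in needed:
            needed.add(repo)
            stack.extend(DEPENDS_ON.get(repo, []))
    return needed

print(repos_for(["task-ingest"]))
# -> {'task-ingest', 'lib-net', 'lib-hw-io', 'lib-core'}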
Question #1
What would be the best approach to distribute code changes to other nodes?
Some possible scenarios:
Developers push their modifications to every other node which has an instance of the same repo. (But they may forget, or not have time, to do so.)
Nodes automatically pull every change from every other remote repo, and update themselves. (But there may be conflicts.)
For each repo, one of the instances is used as a "reference". Developers push their modifications to this instance, and every other node having an instance automatically pulls from here and updates itself. (But the node having the reference instance may stop.)
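To illustrate the third scenario, each node could run a periodic updater that tries the reference repo first and falls back to peer clones when the reference node is down. A minimal sketch in Python, assuming Mercurial; every path and URL here is invented:

import subprocess

REFERENCE_URL = "ssh://ref-node//srv/repos/task-foo"   # designated reference
PEER_URLS = [                                          # other nodes with clones
    "ssh://node-12//srv/repos/task-foo",
    "ssh://node-47//srv/repos/task-foo",
]
LOCAL_REPO = "/srv/repos/task-foo"

def pull_and_update(source):
    """Pull changesets from `source`, then update the working copy.

    `hg update --check` refuses to update over uncommitted local changes
    rather than merging, so a conflict is never created silently."""
    if subprocess.run(["hg", "pull", source, "-R", LOCAL_REPO]).returncode != 0:
        return False
    return subprocess.run(["hg", "update", "--check", "-R", LOCAL_REPO]).returncode == 0

# Try the reference repo first; if its node is stopped, fall back to peers.
for source in [REFERENCE_URL] + PEER_URLS:
    if pull_and_update(source):
        break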
Question #2
What would be the best way to handle dependencies?
If more than one task (or library) refers to the same library, and the referred library has to be modified, one or more referring tasks (or libraries) may stop working (dependency hell). It would be better to stick with the originally referred version, and upgrade to the new one only after proper testing. That is, more than one version of the same source file should be present in the same repo, which does not seem possible. Do I have to create a new branch for the new version of the referred library? If yes, how should I upgrade the referring repos?
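To make the dilemma concrete, one hedged approach is to give each task its own clone of each library repo, pinned to the revision the task was tested against; two tasks on the same node can then run against different library versions, and an upgrade is a deliberate bump of the pin after testing. A rough Python sketch (the manifest format, repo paths, and revisions are invented):

import subprocess

# library clone (private to this task) -> pinned changeset (tag or hash)
PINNED_LIBS = {
    "/srv/tasks/task-foo/libs/lib-net": "v1.4",           # tested, known good
    "/srv/tasks/task-foo/libs/lib-hw-io": "a3f9c2d1b0e4", # pre-upgrade revision
}

for repo, rev in PINNED_LIBS.items():
    # `hg update --rev` pins the working copy to exactly this revision, so a
    # newer commit to the library cannot break the task until the pin is
    # deliberately bumped after testing.
    subprocess.run(["hg", "update", "--rev", rev, "-R", repo], check=True)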
Thank you for your help.

I don't have any experience with distributed version control systems,
but I feel that this system could be built around a DVCS.
That feeling is wrong, in general. A VCS (SCM) is a Version Control System (Source Control Management), i.e. it tracks changes, mostly as history: a flat sequence without complex dependencies (some dependencies are still taken into account).
You have to look at another category of tools: configuration management software, which handles deploys, policies, complex dependencies, conditions, etc. natively.
You can get some approximation of CM with a DVCS, but it will be hard work, and the result will be a pale imitation of existing, well-established tools.

Related

ALM - Track application deployment

We have been using TFS 2015 for quite some time. We have a lot of projects running on TFS, and we use the built-in Release Management tools to make deployment and tracking simpler. But in our business context, we must deploy many different projects to many different servers (on-premise, Azure, third-party hosting). Some projects depend on other projects.
The major problem we currently have is that we cannot effectively track where our different applications are deployed, and which have dependencies on others.
When it is time to do maintenance on a server or service, we have a hard time identifying all the dependencies and all the projects that are going to be affected.
We can go through every TFS project, open its Release Management definitions and inspect every environment's config, but this is not a solution as the number of projects grows.
What is the best strategy or guideline?
Thanks
I'm afraid Release Management is still the best choice for your situation, even though you will meet some performance and management limitations as the number of projects grows.
Release Management retains full traceability (permanent audit trails):
Monitor the current status of the releases and deployments to all the environments.
Track the status of recent deployments in each of the environments.
Retain detailed audit history of all the activities performed on a release.
See the commits and work items that are associated with each release.
And since you are already using it for deployment, it really is the best choice. You have said your team must deploy many different projects to many different servers (on-premise, Azure, third-party hosting); it is hard to imagine which third-party tool or system could handle such a complex cross-platform environment while also tracking the source code and the deployment results the way Release Management does.

Release Management to different environments (Dev/QA/Integration/Stable)

I recently joined a company as Release Engineer where a large number of development teams develop numerous services, applications, web-apps in various languages with various inter-dependencies among them.
I am trying to find a way to simplify and preferably automate releases. Currently the release team is doing the following to "release" the software:
CURRENT PROCESS OF RELEASE
1. Diff the latest revision from SCM between the QA and INTEGRATION branches.
2. Manually copy/paste "relevant" changes between those branches.
3. Copy the latest binaries to the right location (this is automated using a .cmd script).
4. Restart any services.
MY QUESTION
I am hoping to avoid steps 1 and 2 altogether (obviously), but am running into issues where differences between the environments are causing the config files to differ per environment (e.g. QA vs. INTEGRATION). Here is a sample:
IN THE QA ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
    <value>https://servicepoint.QA.domain.net/</value>
</setting>
IN THE INTEGRATION ENVIRONMENT:
<setting name="ServiceUri" serializeAs="String">
    <value>https://servicepoint.integration.domain.net/</value>
</setting>
If you look closely, the only difference between the two <setting> tags above is the URL in the <value> tag. This is because the QA and INTEGRATION environments are in different data-centers and are ever so slightly out of sync (and growing further apart as development gets faster/better/stronger). Changes such as this, where only the URL/endpoint differs, are TO BE IGNORED during "release" (i.e. these are not "relevant" changes to merge from QA to INTEGRATION).
Even in a regular release (about once a week) I have to deal with a dozen config file changes that have to be released from QA to INTEGRATION, and I have to manually go through each config file and copy/paste the non-URL-related changes between the files. I can't simply take the entire package that the CI tool spits out from QA (or after QA), since the URL/endpoints are different.
Since there are multiple programming languages in use, the config file example above could be C#, C++ or Java. So I am hoping any solution would be language-agnostic.
SUMMARY OF ENVIRONMENTS/PROGRAMMING LANGUAGES/OS/ETC.
Multiple programming languages - C#, C++, Java, Ruby. Management is aware that this is one of the problems, since the Release team has to be a jack-of-all-trades, and is addressing it.
Multiple OS - Windows 2003/2008/2012, CentOS, Red Hat, HP-UX. Management is addressing this too - starting to consolidate and limit to Windows 2012 and CentOS.
SCM - Perforce, TFS. Management is trying to move everyone to a single tool (likely TFS)
CI is being advocated, though not mandatory - Management is pushing change through but is taking time.
I have given the example of QA and INTEGRATION, but in reality there is QA (managed by developers+testers), INTEGRATION (managed by my team), STABLE (releases to STABLE by my team but supported by Production Ops), and PRODUCTION (supported by Production Ops). These are the official environments; the devs and test teams have a few more that are currently unofficial. I would eventually want to start standardizing/consolidating these unofficial envs too, since devs+testers should not have to worry about doing this kind of stuff.
There is a lot of work being done to standardize how the binaries are being deployed using tools like DeployIT (http://www.xebialabs.com/products) which may provide some way to simplify these config changes.
The devs teams are agile and release often, but that just means more work diffing config files.
SOLUTIONS SUGGESTED BY TEAM MEMBERS:
The current mind-set is to use a load balancer and standardize names across the different environments, but I am not sure a "process" such as this is the right solution. There must be a better way, one that spans from how devs write configs to how release environments resolve dependencies.
Alternatively, some team members are working on install scripts (InstallShield / MSI) to automate find/replace of URLs/endpoints between envs. I am hoping this is not the solution, but it is doable.
If I have missed anything or should provide more information, please let me know.
Thanks
[Update]
References:
Managing complex Web.Config files between deployment environments - C# web.config specific, though a very good start.
http://www.hanselman.com/blog/ManagingMultipleConfigurationFileEnvironmentsWithPreBuildEvents.aspx - OK, though at first look this seems rather rudimentary and may break easily.
Generally the problem isn't too difficult - you need branches for each of the environments and CI build setup for them. So a merge to the QA branch would trigger a build of that code and a custom deployment to QA. Simple.
Now, managing multiple config files isn't quite so easy, unless you have one for each environment, in which case you just call them Int.config, QA.config, etc., store them all in the SCM, and pick the appropriate one in each branch's deployment script: e.g., when the build for QA runs, it picks QA.config, copies it to the correct location and renames it to the correct name. (Incidentally, this is the approach I tend to use, as it's very simple.)
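A sketch of that pick-and-rename step as it might appear in a branch's deployment script (Python purely for illustration; the staging path and the deployed config name are invented):

import shutil
import sys

# The CI job for each branch passes its environment name, e.g. "QA" or "Int".
environment = sys.argv[1]

# Pick <env>.config out of source control, copy it into the staging area and
# rename it to the name the application expects.
shutil.copyfile(f"configs/{environment}.config",
                "deploy_staging/app.exe.config")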
If you have multiple configs you need to use, then it's always going to be a manual process, but you can help yourself by copying all the relevant configs to a build staging area that an admin will use to perform the deployment. It's a good first step in that the build they have in the staging directory will be the correct one for them; they just have to choose which config to use, either during deployment (e.g. as an option in the installer) or by manually copying the appropriate config over.
I would not try to manage some automated way of taking a single config file in source control and re-writing it with different data in the build, or pre-deploy steps. That way lies madness, and a lot of continual hassle trying to maintain the data and the tooling. Keep separate configs in place and make sure the devs know to update all of them when they make a change. (Or, you can hold 1 config in the SCM tree and make sure they know that merging their changes must not overwrite any existing modifications - multiple configs is easier)
I agree with @gbjbaanb. Have one config for each environment. Get your developers to write apps that read their properties (including their URLs) from config files, and commit config files for each environment. Not only does this help you with deployment, but config files under revision control provide reproducibility, full transparency, and an audit trail of your environment-specific settings.
Personally, I prefer to create a single deployable package that works on any environment by including all of the environment configs (even the ones you aren't using). You can then have some deployment automation that figures out which config files the apps should use and sets that up appropriately.
Thanks to @gman and @gbjbaanb for the answers (https://stackoverflow.com/a/16310735/143189, https://stackoverflow.com/a/16246598/143189), but I felt that they didn't help me solve the underlying problem I am facing, so I am restating it to make it clear.
The code seems very aware of the environment in which it runs. How do you write environment-agnostic code?
The suggestions in the answers above are to store 1 config file for each environment (environment-config). This is possible, but any addition/deletion/edit of non-environment settings will have to be ported over to each environment-config.
After some study, I wonder if the following would work better?
Keep the config file's structure consistent/standardized e.g. XML. Try to keep the environment-specific endpoints in this config-file but store them in a way that allows easy access to the specific individual nodes/settings (e.g. using XPath).
When deploying to a specific environment, your deployment tool should be able to parse the file (e.g. using XPath) and update the environment-specific endpoint to the value for the environment to which you are deploying.
The above is not a unique idea. There are some existing implementations that tackle the above solution already:
http://www.iis.net/learn/develop/windows-web-application-gallery/reference-for-the-web-application-package & http://www.iis.net/learn/publish/using-web-deploy/web-deploy-parameterization (WebDeploy)
http://docs.xebialabs.com/releases/3.9/deployit/packagingmanual.html#using-placeholders-in-ci-properties (DeployIt)
Home-spun solutions using XPath find and replace.
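As a sketch of such a home-spun solution, a deploy step could use XPath to rewrite only the environment-specific nodes of the config shown earlier (Python's ElementTree here purely for illustration; the endpoint table and file path are assumptions):

import xml.etree.ElementTree as ET

# Per-environment endpoint values (assumed; in practice these could live
# in a small per-environment settings file).
ENDPOINTS = {
    "QA": "https://servicepoint.QA.domain.net/",
    "INTEGRATION": "https://servicepoint.integration.domain.net/",
}

def parameterize(config_path, environment):
    tree = ET.parse(config_path)
    # Locate the <value> under the ServiceUri setting, as in the sample config.
    node = tree.find(".//setting[@name='ServiceUri']/value")
    node.text = ENDPOINTS[environment]
    tree.write(config_path, encoding="utf-8", xml_declaration=True)

parameterize("deploy_staging/app.exe.config", "INTEGRATION")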
In short, while there are programming-language-specific solutions and programming-language-agnostic solutions, I guess the big downfall is that Release Management needs to be considered during development too, or else it will cause deployment headaches. I don't like that, since it sounds like "development should be aware of what tests will be designed". Whether there is a need AND a way to avoid this is the big question.
I'm working through the process of creating a "deployment pipeline" for a web application at the moment and am sifting my way through similar problems. Your environment sounds more complicated than ours, but I've got some thoughts.
First, read this book. I'm 2/3 of the way through it and it's answering every question I ever had about software delivery, and many that I never thought to ask: http://www.amazon.com/Continuous-Delivery-Deployment-Automation-Addison-Wesley/dp/0321601912/ref=sr_1_1?s=books&ie=UTF8&qid=1371099379&sr=1-1
Version Control Systems are your best friend. Absolutely everything required to build a deployable package should be retrievable from your VCS.
Use a Continuous Integration server, we use TeamCity and are pretty happy with it so far.
The CI server builds packages that are totally agnostic to the eventual target environment. We still have a lot of code that "knows" about the target environments, which of course means that if we add a new environment, we have to modify all such code to make sure it will cope and then re-test it to make sure we didn't break anything in the process. I now see that this is error-prone and completely avoidable.
Tools like Visual Studio support config file transformation, which we looked at briefly but quickly realized depends on environment-specific config files being prepared alongside the code by the developers so they can be added to the package. Instead, break out any settings that are specific to a particular environment into their own config mechanism (e.g. another XML file) and have your deployment tool apply this to the package as it deploys. Keep these files in VCS, but use a separate repository so that revisions to config don't trigger new builds and cause the build number to get falsely inflated.
This way, your environment-specific config files only contain things that change on a per-environment basis, and only if that environment needs something different from the default. Contrary to @gbjbaanb's recommendation, we are planning to do whatever is necessary to keep the package "pure" and the environment-specific config separate, even if it requires custom scripting etc., so I guess we're heading down the path of madness. :-)
For us, Powershell, XML and Web Deploy parameterization will be instrumental.
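For what it's worth, a rough sketch of that "pure package plus separate overrides" idea (we plan PowerShell and Web Deploy; this Python version and its override-file format are only illustrative):

import xml.etree.ElementTree as ET

def apply_overrides(package_config, overrides_file):
    config = ET.parse(package_config)
    overrides = ET.parse(overrides_file)
    # Each override names a setting and supplies the per-environment value;
    # anything not listed keeps the package default.
    for override in overrides.findall("setting"):
        name = override.get("name")
        target = config.find(f".//setting[@name='{name}']/value")
        if target is not None:
            target.text = override.findtext("value")
    config.write(package_config, encoding="utf-8", xml_declaration=True)

# e.g. overrides/integration.xml contains only what differs from the default
apply_overrides("deploy_staging/app.exe.config", "overrides/integration.xml")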
I'm also planning to be quite aggressive about refactoring the config files so that the same information isn't repeated several times in various places.
Good luck!

How to manage shared libraries between applications?

We develop enterprise software and we wish to promote more code reuse between our developers (to keep this problem simple, let's assume all .NET). We are about to move to a new VCS system (most likely Mercurial) and I want to have a strategy in place for how we will share libraries.
What is the best process for managing shared libraries that meets the following use cases:
Black Box - only the public API of the library is known, and there is no assumption that consuming developers will be able to "step into" or set breakpoints in the library. The library is a black box. Often a dev does not care about the details: "just give me the version of the lib that has always worked".
Debug - the developer should be able to at least "step into" the library during development. Setting breakpoints would be a bonus too.
Parallel Development - while most likely the minority, there are seemingly valid use cases for developing the library in parallel with the consuming application. Often the authors of the library and component are the same developer. For better or worse, the applications and libraries can often be tightly coupled. Being able to make changes and debug into both can be a very productive way for us to develop.
It should be noted that solving 3 may implicitly solve 2.
Solutions may involve additional tools (such as NuGet, etc.).
When sharing libraries, you must distinguish between:
source dependencies (you share the sources, implying a recompilation within your project)
binary dependencies (you share the deliverable, compiled from common sources, and link to it from your project).
Regarding both, NuGet (2.0) finally introduced "Package Restore During Build", in order not to commit to source control whatever is built into the Lib or ExternalDependencies folder.
NuGet (especially with its new hierarchical config, NuGet 2.1) is well suited for module management within a C# project, and will interface with both git and Mercurial.
Combine it with the Mercurial subrepos, and you should be able to isolate in its own repo the common code base you want to reuse.
I have two possible solutions to this problem, neither of which seems ideal (which is why I posted the question).
Use the VCS to manage the dependencies. Specifically, use mercurial subrepos and always share by source.
Advantages:
All 3 use cases are solved.
Only one tool is required for source control and dependency management
Disadvantages:
The subrepos feature is considered a feature of last resort by the Mercurial developers, and from experimentation and reading it has the following issues:
Tags cannot be easily or atomically applied to multiple repos.
Root/shell repos are inherently fragile (they can be broken if the pathing to subrepos changes). The Mercurial developers suggest mitigating this issue by including no content in the shell repo, using it only to define (and track the revisions of) the subrepos, therefore allowing a dev to manually recreate a moment in time even if the subrepo pathing is broken.
Branching cannot cross repo boundaries (most likely not a big issue as one could argue that branches should only occur in a given subrepo).
Use Ivy or NuGet to manage the dependencies. There are two ways this could work.
Dependencies/Packages can simply contain official binaries. A build server can be configured to publish a new dependency/package into the company repository when a developer submits a build for a new version. This solves case 1. NuGet seems to support symbol packages that may solve case 2. Case 3 is not solved, and leaves developers in that case hung out to dry, left to come up with their own solution (there is basically no way to commit applications to the VCS that include dependencies by source). This seems to be the traditional way that dependency management tools are used.
Dependencies/Packages can contain a script that gets the source from Mercurial, as sketched below. The script could be executed automatically when the dependency/package is installed. Some magic has to be performed to have the .NET solution include the reference by project (rather than by browsing the filesystem), but in theory this could happen in the NuGet install script and be reversed in the uninstall script.
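A sketch of what such an install-time fetch could do (the real NuGet hook would be a PowerShell install script; the repo URL and the revision stamp below are placeholders):

import subprocess

SOURCE_REPO = "https://hg.example.com/shared-lib"
BUILT_FROM_REV = "f00dbabe1234"  # stamped into the package at build time

# Clone the library source at exactly the revision the binaries were built
# from, next to the installed package.
subprocess.run(["hg", "clone", "--rev", BUILT_FROM_REV,
                SOURCE_REPO, "packages/shared-lib.source"], check=True)
# Rewriting the .sln/.csproj to reference the cloned project instead of the
# binary is the "magic" mentioned above, and is not shown here.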
Switching between "source" and "binary" dependencies seems to be a manual step. I would argue devs should switch to binary dependencies for releases, and perhaps this could be enforced on the build server when creating a release. This is further complicated by the fact that the VS solution needs to be modified to reference a project vs. a binary.
How many source packages exist? Does every binary package contain the script to fetch the source that it was built with? Or do we create separate source packages that use the install-script magic to get the source? This leads to the question: is there a source package for every tag in Mercurial? Every changeset? Or simply one source package that just clones and updates to the tip, leaving the dev to update to a previous revision (but this creates the problem of knowing which revision to update to)?
If the dev then uses mercurial to change the revision of the source, how can this be reflected in the consuming application? The dependency/package that was used to fetch the source has not changed, but the source itself has...

Source Control for CM users

I'm looking to steer my team into this century and use source control. The developers are very capable of handling source control software, be it command-line based or GUI based, Windows or *nix.
The reason they've been handling their code locally and individually (which deeply frightens me) is that our CM group is not as technically savvy, nor as comfortable with the whole check-in/out process.
Is there a source control software out there that is geared towards the CM group? I'm thinking of one that would allow them to select a version of a file out of all that have been checked in and mark it for the build they are trying to create.
If you consider the CM (Configuration Management) group as in charge of a release management process, then you could isolate them from the "technical details" of any (D)VCS tool you might choose by establishing a good publication process.
The publication consists of making visible somewhere (a shared directory, an artifact repository dedicated to releases like Nexus, ...):
a delivery (a set of binaries and their dependencies) necessary to run your program
a clear list of versions for those binaries (SVN revision number or tag, git tag, Nexus Group-Artifact-Version, ...), allowing the developers to find the exact set of code whenever the CM group gets back to them with a list of defects to fix
a document explaining the deployment
The CM group takes that set of deliveries, manages the release process, and handles the promotion between the different deployment environments (Integration, UAT, pre-prod, prod, ...), without having to deal with the VCS tool.
That also enforces a strong separation between dev and prod (both in terms of environment and process), which allows the devs to adopt whatever development workflow they want without affecting the way the CM group works.
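A minimal sketch of such a publication step (the share path, manifest fields, and the git call are assumptions; an SVN revision or Nexus coordinates would serve equally well):

import json
import os
import shutil
import subprocess

RELEASE_SHARE = "/mnt/releases/myapp"

def publish(version, artifacts):
    dest = os.path.join(RELEASE_SHARE, version)
    os.makedirs(dest, exist_ok=True)
    for artifact in artifacts:
        shutil.copy(artifact, dest)
    # Record exactly which code produced these binaries, so the devs can
    # get back to it when the CM group reports defects against a release.
    revision = subprocess.run(
        ["git", "rev-parse", "HEAD"],
        capture_output=True, text=True, check=True,
    ).stdout.strip()
    manifest = {
        "version": version,
        "vcs_revision": revision,
        "artifacts": [os.path.basename(a) for a in artifacts],
        "deploy_doc": "deploy.md",  # the document explaining the deployment
    }
    with open(os.path.join(dest, "manifest.json"), "w") as f:
        json.dump(manifest, f, indent=2)

publish("2.3.0", ["build/app.jar", "build/worker.jar"])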

How do you keep track of what you have released in production?

Typically a deploy to production does not involve just a mere source code update (build), but requires a lot of other important tasks, for example:
Db scripts
Configuration files (different between test and production)
Batch to schedule
Executables to move to the correct path
Etc. etc.
In our company we just send an email to a "Release email address" describing the tasks in order: which changesets need to be published (TFS), which SPs need to be updated, db scripts, and so on.
I believe there's not a magic tool that does these tasks automagically in order, rollback included; but probably there's something better than email that helps to keep track of releases in production.
Do you have any tools to suggest or practices to share?
When multiple tasks are required to support a full project deployment (and that's frequently the case, in my experience), I'd suggest using a build/deployment tool. I've used Ant in the past with great success, but know others who swear by Capistrano, Maven and others.
Using Ant, I wrote a script that would:
Pull the specific revision I wanted from my VCS
Create a tarball of the target directory on the remote machine (in case a rollback was required)
Create a MySQL dump file of the database (also for rollback purposes)
Delete the remote directory and push the new content just pulled from the VCS over SSH
Perform various other logistical operations (setting file perms, ownership, etc.)
Create a release branch on the VCS itself
Create a tag with the appropriate version information so I always had a snapshot of the code base at that moment of deployment.
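Not the author's actual Ant script, but a rough Python rendering of the same sequence (every host, path, URL, and revision below is invented):

import subprocess

REV = "1234"
HOST = "deploy@www.example.com"
REMOTE_DIR = "/var/www/app"
TRUNK = "https://vcs.example.com/repo/trunk"

def run(*cmd):
    subprocess.run(cmd, check=True)

# 1. Pull the specific revision from the VCS
run("svn", "export", "-r", REV, TRUNK, "build")
# 2. Tarball the target directory on the remote machine (for rollback)
run("ssh", HOST, f"tar czf /tmp/rollback-{REV}.tgz {REMOTE_DIR}")
# 3. Dump the database (also for rollback)
run("ssh", HOST, f"mysqldump appdb > /tmp/rollback-{REV}.sql")
# 4. Delete the remote directory and push the fresh export over SSH
run("ssh", HOST, f"rm -rf {REMOTE_DIR}")
run("scp", "-r", "build", f"{HOST}:{REMOTE_DIR}")
# 5. Other logistics: permissions, ownership, ...
run("ssh", HOST, f"chown -R www-data:www-data {REMOTE_DIR}")
# 6./7. Tag the release so the deployed code base is snapshotted
#       (creating a release branch is an analogous `svn copy` to branches/)
run("svn", "copy", "-r", REV, TRUNK,
    f"https://vcs.example.com/repo/tags/release-{REV}",
    "-m", f"Tag release at r{REV}")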
Hope that helps some. I've written a few blog posts about this that may (or may not) be useful. They're dated now, but the general information should still be solid enough.
Introductory thoughts
Details of how I use Ant for deploying--including scripts
You might be interested in the Team Foundation Build Recipes Website, that showcases some build scripts developed using SDC Tasks Library and the MSBuildTasks library
How about something like SVN? You can put all of your code in a repository, then when you are ready to release to production, bring your stuff over from test. Then you'll have very specific revisions with information on what happened. SVN keeps track of all of it.