Class library referenced by multiple websites + version control branching

Consider the following -
I have a solution that consists of multiple projects:
DAL (Class Library)
BusinessLogic (Class Library)
Website1 (Web Application)
Website2 (Web Application)
Both Website1 and Website2 share a reference to BusinessLogic, which in turn references the DAL.
Since these are just websites, I don't need to keep track of multiple versions, as such, but I do like to have the following branches:
Trunk
Production
Trunk is where I do all my development work, and after everything is tested and ready to go, I merge from Trunk to Production when a website is actually deployed to production servers. This allows me to shelve my current work, check out the Production branch and address any major bugs that were found after deployment and immediately deploy the fix.
My problem is that, using this approach, what lives in the Production branch isn't always correct. Let's say I make an update to BusinessLogic which is utilised by Website1. It passes testing and is deployed. If I merge all the projects to the Production branch, then it's wrong because Website2 wasn't deployed to production at that time.
Or, I could merge only the relevant projects to Production. So, in this case, I would merge Website1, BusinessLogic and DAL. This is still wrong, however. If I were to check out the Production branch to do work on Website2, it would have a newer version of BusinessLogic and DAL than actually exist on our production servers.
What is the correct approach here?

You should not use a code sharing or code promotion model. It reduces quality and forces rework. Instead, look to create a release pipeline where you create a package for your business and DAL layers and consume those packages in the web apps.
The best approach for this is to use a build server and create a NuGet package for your DAL that is consumed by the Business Layer. This in turn is packaged as a NuGet package that your websites can consume.
Your workflow for getting a change to the business layer into your website is then:
Open Business Layer solution and make fix
Check in and trigger CI build
CI build creates and publishes NuGet package
Open Website solution and update NuGet package
Clean and simple. No branching is good branching.
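As a rough sketch of what such a package might look like (the package IDs, versions, paths and target framework here are assumptions, not taken from your setup), the BusinessLogic .nuspec can declare the DAL package as a dependency so the websites pull in both:
<?xml version="1.0"?>
<!-- BusinessLogic.nuspec (hypothetical): packed and pushed by the CI build -->
<package>
  <metadata>
    <id>MyCompany.BusinessLogic</id>
    <version>1.2.0</version>
    <authors>MyCompany</authors>
    <description>Business logic layer shared by Website1 and Website2.</description>
    <dependencies>
      <!-- The DAL is itself a NuGet package produced by its own CI build -->
      <dependency id="MyCompany.DAL" version="1.0.3" />
    </dependencies>
  </metadata>
  <files>
    <!-- Ship the compiled assembly; net45 is an assumed target framework -->
    <file src="bin\Release\BusinessLogic.dll" target="lib\net45" />
  </files>
</package>
The websites then reference MyCompany.BusinessLogic through NuGet and pick up new DAL builds transitively when the package is updated.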

There can be many correct ways, and it always depends on what is correct for you. Each way has its pros and cons, of course.
If you'd like source-level dependencies, create a separate production branch per site, each of which includes the shared library as an external:
/Site1/Production
site content
BusinessLogic [external] -> /BusinessLogic/v1Branch
/Site2/Production
site content
BusinessLogic [external] -> /BusinessLogic/v1Branch
/BusinessLogic/v1Branch (from DevBranch)
/BusinessLogic/DevBranch
Here is how you perform a version upgrade:
Make a change to BusinessLogic/DevBranch, test it.
Branch it as BusinessLogic/v2Branch
Update Site2/Production's external to point to BusinessLogic/v2Branch
Build Site2, test and deploy.
So you'll have -
/Site1/Production
site content
BusinessLogic [external] -> /BusinessLogic/v1Branch
/Site2/Production
site content
BusinessLogic [external] -> /BusinessLogic/v2Branch
/BusinessLogic/v1Branch (from DevBranch)
/BusinessLogic/v2Branch (from DevBranch)
/BusinessLogic/DevBranch
This requires a certain level of development culture and some amount of svn management.
You can also put binaries into such svn branches, which is pretty much the same scheme. In general this approach is known as vendor branches.
If you prefer binary dependencies outside of source control, you could use a local NuGet repository. It works the same as the official one: you create a new version, publish it to NuGet, reference it from the site, then build and deploy. This requires additional setup and maintenance effort and is more appropriate for larger projects.
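For example (the source name and share path below are hypothetical), the internal feed is just another package source registered in NuGet.config alongside the official one:
<?xml version="1.0" encoding="utf-8"?>
<!-- NuGet.config: registers a hypothetical internal package source next to nuget.org -->
<configuration>
  <packageSources>
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
    <add key="internal" value="\\buildserver\nuget-packages" />
  </packageSources>
</configuration>
Each site then installs or updates the shared libraries from the internal source like any other NuGet package.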

Related

Azure DevOps & copying code base from one project to another or finding a better way of doing this

I'm seeking advice on the following:
In my development shop we support a SaaS solution for our customers. We currently have 10 sites that we develop and provide technical support for. We're a small team, just 2 of us. We're using Azure DevOps Services to host and manage our code; right now we're just using it as a code repo. Within our organization, we have multiple projects, each representing a site. Each site uses the same code base, except for the web.config. The web.config is used to change the UI/theme for each customer. When we get a request to create a new site, we first create a new site project and then copy our code base from the "golden copy" project.
We use the "golden copy" code base to make feature changes and bug fixes. Once we develop a new feature (or fix an issue) in the golden copy, we push it to test and QA begins testing. If testing is successful, the development team copies the entire "golden copy" code base to each site project, then builds and deploys to test so QA can ensure each site works with the new changes. This can be time consuming and prone to errors.
I would like to know the following:
- Is there a way in Azure DevOps where we can merge/copy from our golden copy to our other site projects' repos?
- Can you offer a better way of reorganizing our organization/projects setup based on our current setup/workflow?
Thank you,
As Shayki mentioned, you can consider adopting a Git branching strategy. Distributed version control systems like Git give you flexibility in how you use version control to share and manage code.
Keep your branch strategy simple. Build your strategy from these three concepts:
Use feature branches for all new features and bug fixes.
Merge feature branches into the master branch using pull requests.
Keep a high quality, up-to-date master branch.
A strategy that extends these concepts and avoids contradictions will result in a version control workflow for your team that is consistent and easy to follow. For details, please refer to this official document.
Is there a way in Azure DevOps where we can merge/copy from our golden copy to our other site projects' repos?
For this issue, do you mean synchronizing changes from the golden copy to the other projects' repos? If so, I think it can only be done manually (copying the entire "golden copy" code base to each site project) or by cloning the entire repo into the other projects through the following steps.
In the other projects, select the Import repository option.

How to Accomplish This Branching and Deployment Strategy Using TeamCity and Octopus

I have been researching and am trying to figure out the best branching and deployment strategy to accomplish the requirements below. Maybe I’m missing something but it is more complicated than it seems. Ideally, we’d just have one permanent branch, ‘master’, that could have specific commits tagged to mark releases to production.
Our current strategy is based on Git Flow and has permanent branches ‘master’ (only has releases to production) and ‘develop’. The primary thing that complicates using a multiple permanent-branches model is the concept of “promoting” the same build from the staging environment to production. Currently, this needs to be done in a separate source code branch (deployments to staging come from ‘develop’, deployments to prod come from ‘master’).
Tools: Git (VSTS), TeamCity, Octopus Deploy
Requirements (feature and hotfix lifecycles):
All code is reviewed via pull requests (enforced via branch policies)
All code gets deployed to a staging environment for testing
We can quickly go back to any snapshot of code that was deployed previously
If testing is successful, then the same build can be “promoted” from our staging environment to production (no need to build again)
Features accumulate over time before pushing out to production as a single release. Hotfixes have to be able to go through without getting caught up in the "all or nothing" next regular release.
I like the idea of having one permanent branch with tags (re: the master/develop split is redundant, http://endoflineblog.com/gitflow-considered-harmful), but having additional permanent branches may better facilitate deploying different lifecycles/versions (feature and hotfix) to Octopus.
I have been wrestling with how best to pull this off and I may be over complicating things. Any feedback is appreciated.
It seems you have a number of questions and they are quite broad... I'll add some comments to each of your requirements as a conversation starter, but this whole thread might get blocked by moderators as it is definitely not the style of questions SO was made for.
All code is reviewed via pull requests (enforced via branch policies)
I haven't looked at VSTS for ages, but I'd expect it already supports branch policies and pull requests, so I'm not sure there's anything you need here other than configuring settings in your repositories.
In case VSTS does not support that, you might consider moving to a tool that does e.g. BitBucket, GitHub, etc. Both of these have an on-premises version in case you can't (or don't want to) use the cloud hosted version.
All code gets deployed to a staging environment for testing
You achieve that by setting up lifecycles in Octopus Deploy, to make sure deployments/promotions follow the sequence you want.
We can quickly go back to any snapshot of code that was deployed previously
You already have source control, so all you need now is traceability from the code that is deployed in an environment, to the deployment version in Octopus Deploy, the build job in TeamCity, the branch and exact commit in your source control.
There's a few things that you can do, to achieve that:
Define a versioning scheme that works for you. I like to use semantic versioning. "Major" and "Minor" versions are defined by the developers, and the "Patch" is the auto-incremented number from TeamCity (%build.number%). Every git push builds the code and generates a unique build version (%major%.%minor%.%build.number%).
As part of the build steps in TeamCity, before you compile the code, make sure your source files are patched with the version number assigned by each build, the commit hash from your source control, and the branch name. For example, if you are using .NET, make sure all the AssemblyInfo.cs files are updated with that version so that it is embedded in the binaries (see the sketch after this list). This allows anyone to query the version by looking at the properties of the binary files, and also allows you to display the app version in the app itself (e.g. status bar, footer, caption, about box, etc.)
Have TeamCity tag your source control with the version number of every build, so you can quickly see it in your source control history. You probably only want to do that for the master branch, though, which is what you care about.
Have Octopus tag your source control with the deployment version number and the environment name, so that you can quickly see (from your source control) what got deployed where.
Steps 1 and 2 are the most important ones, really. 3 and 4 are just nice-to-have. Most of the time you'll just open the app in the environment, check the commit hash in the "About", and do a git checkout to that commit hash...
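As an illustration of step 2 (a sketch only: the property defaults are hypothetical, and this assumes SDK-style MSBuild projects as an alternative to patching AssemblyInfo.cs directly), the build server can pass the version and commit hash in as MSBuild properties, e.g. msbuild /p:Version=2.3.145 /p:SourceRevisionId=abc1234:
<!-- Directory.Build.props (hypothetical): defaults that the CI build overrides -->
<Project>
  <PropertyGroup>
    <!-- Version set by TeamCity per build; falls back to a local dev value -->
    <Version Condition="'$(Version)' == ''">0.0.0-local</Version>
    <!-- The SDK appends SourceRevisionId to the InformationalVersion, so the
         commit hash ends up embedded in the compiled binaries -->
    <SourceRevisionId Condition="'$(SourceRevisionId)' == ''">dev</SourceRevisionId>
  </PropertyGroup>
</Project>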
If testing is successful, then the same build can be "promoted" from our staging environment to production (no need to build again)
Again, Octopus Deploy lifecycles, and make sure anything different in each environment is defined in the configuration file of the application, which is updated during the Octopus deployment, using environment-specific variables.
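As a small example of that environment-specific configuration (the setting name is hypothetical), the value is left as an Octopus variable placeholder in the config file and substituted per environment at deployment time:
<!-- Web.config fragment: Octopus's variable substitution replaces #{ApiBaseUrl}
     with the value defined for the target environment during deployment -->
<appSettings>
  <add key="ApiBaseUrl" value="#{ApiBaseUrl}" />
</appSettings>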
In terms of branch workflow, this last requirement makes it mandatory to merge changes into master (or whatever your "production" branch is) before the deployment lifecycle can begin.

How to manage share libraries between applications?

We develop enterprise software and we wish to promote more code reuse between our developers (to keep this problem simple, let's assume all .NET). We are about to move to a new VCS system (most likely Mercurial) and I want to have a strategy in place for how we will share libraries.
What is the best process for managing shared libraries that meets the following use cases:
Black Box - only the public API of the library is known and there is no assumption that consuming developers will be able to "step into" or set breakpoints into the library. The library is a black box. Often a dev does not care about the details, just give me the version of the lib that has always "worked".
Debug - the developer should be able to at least "step into" the library during development. Setting breakpoints would be a bonus too.
Parallel Development - while most likely the minority, there are seemingly valid use cases for developing the library in parallel with the consuming application. Often the authors of the library and component are the same developer. For better or worse, the applications and libraries can often be tightly coupled. Being able to make changes and debug into both can be a very productive way for us to develop.
It should be noted that solving 3, may implicitly solve 2.
Solutions may involve additional tools (such as NuGet, etc.).
When sharing libraries, you must distinguish between:
source dependencies (you are sharing sources, implying a recompilation within your project)
binary dependencies (you are sharing the deliverable, compiled from common sources, and link to it from your project).
Regarding both, NuGet (2.0) finally introduced "Package Restore During Build", in order to not commit to source control whatever is built in the Lib or ExternalDependencies folder.
NuGet (especially with its new hierarchical config, NuGet 2.1) is well suited for module management within a C# project, and will interface with both git and Mercurial.
Combine it with the Mercurial subrepos, and you should be able to isolate in its own repo the common code base you want to reuse.
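As a small sketch (the package id, version and target framework are made up), the consuming application would then just record the shared library in its packages.config and let package restore fetch it at build time instead of committing binaries:
<?xml version="1.0" encoding="utf-8"?>
<!-- packages.config of a consuming application -->
<packages>
  <package id="MyCompany.SharedLib" version="1.4.2" targetFramework="net40" />
</packages>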
I have 2 possible solutions to this problem, neither of which seems ideal (which is why I posted the question).
Use the VCS to manage the dependencies. Specifically, use mercurial subrepos and always share by source.
Advantages:
All 3 usecases are solved.
Only one tool is required for source control and dependency management
Disadvantages:
The subrepos feature is considered a feature of last resort by the Mercurial developers and, from experimentation and reading, has the following issues:
Tags cannot be easily or atomically applied to multiple repos.
Root/shell repos are inherently fragile (they can break if the pathing to subrepos changes). Mercurial developers suggest mitigating this issue by including no content in the shell repo and only using it to define (and track the revision of) the subrepos, therefore allowing a dev to manually recreate a moment in time even if the subrepo pathing is broken.
Branching cannot cross repo boundaries (most likely not a big issue as one could argue that branches should only occur in a given subrepo).
Use Ivy or NuGet to manage the dependencies. There are two ways this could work.
Dependencies/packages can simply contain official binaries. A build server can be configured to publish a new dependency/package into the company repository when a developer submits a build for a new version. This solves case 1. NuGet seems to support symbol packages that may solve case 2. Case 3 is not solved and leaves developers in that case out to dry, having to come up with their own solution (there is basically no way to commit applications to the VCS that include dependencies by source). This seems to be the traditional way that dependency management tools are used.
Dependencies/packages can contain a script that gets the source from Mercurial. The script could be automatically executed when the dependency/package is installed. Some magic has to be performed to have the .NET solution include the reference by project (rather than by browsing the filesystem), but in theory this could happen in the NuGet install script and be reversed in the uninstall script.
Switching between "source" and "binary" dependencies seems to be a manual step. I would argue devs should switch to binary dependencies for releases, and perhaps this could be enforced on the build server when creating a release. This is further complicated by the fact that the VS solution needs to be modified to reference a project vs a binary.
How many source packages exist? Does every binary package contain the script to fetch the source that it was built with? Or do we create separate source packages that use the install script magic to get the source? This leads to the question: is there a source package for every tag in Mercurial? Every changeset? Or simply one source package that just clones and updates to the tip and leaves the dev to update to a previous revision (but this creates the problem of knowing which revision to update to)?
If the dev then uses mercurial to change the revision of the source, how can this be reflected in the consuming application? The dependency/package that was used to fetch the source has not changed, but the source itself has...

Maven best practices for versioning different branches [development, qa / pre-release]

I have a couple of projects which are developed and released on different branches, namely development and release. The process works pretty well but unfortunately it has some drawbacks and I have been wondering if there is a better versioning scheme to apply in my situation.
The main development happens on a development branch (i.e. Subversion trunk but it doesn't matter much) where team of developers commit their changes. After building and packaging artifacts, Jenkins deploys them to maven repository and development integration application server. This is a DEVELOPMENT-SNAPSHOT and basically is just a feature branch containing all developed features on one common branch:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>D.16-SNAPSHOT</version>
When one particular business change is done and requested by QA team, this single change is then being merged to the release branch (branches/release). Jenkins deploys the resulting artifact to QA application server:
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>R.16-SNAPSHOT</version>
Then there's a release which happens via maven-release-plugin on the release branch version of software (which creates a maintenance tag/branch for quick bug fixing). (R.16-SNAPSHOT => R.16)
Development and release branches are currently being versioned as D.16-SNAPSHOT and R.16-SNAPSHOT respectively. This allows us to separate artifacts in the Maven repository, but it creates problems with various Maven mechanisms that rely on the standard Maven versioning style. It breaks OSGi versioning as well.
Now, how would you name and version maven artifacts in such a scheme? Is there a better way? Maybe I could make some changes to maven structures other than simply changing the versioning and naming schemes? But I need to keep development and QA (release) SCM branches separate.
Would a maven classifier of 'development'/'production' be a reasonable alternative?
<groupId>pl.cyfrowypolsat.process-engine</groupId>
<artifactId>process-engine</artifactId>
<version>16-SNAPSHOT</version>
<classifier>D</classifier>
As far as I know, the common naming convention for a release artifact is just the name of the artifact with only the version specified, without any suffix. A development branch would have the same artifact name but with -SNAPSHOT appended.
For example, take twitter4j. The artifact name of the release version is
twitter4j-2.5.5
The snapshot of their development version is
twitter4j-2.6.5-SNAPSHOT
That is the naming convention almost everybody uses and it is recognized by most tools. For example, my Nexus repository can specify a policy to ignore development releases, which basically means it ignores artifacts containing -SNAPSHOT in their name.
EDIT:
To your followup question:
Well, depending on your build tool, you can create your snapshots to have a timestamp or some other unique identifier. However, I have never heard of branching logic being embedded in the artifact's name just so the continuous integration server can distinguish it. From the artifact's perspective, it is either a release or a SNAPSHOT; I don't see the benefit of embedding more logic into the name of the artifact just because your Hudson allows you to do so. To be honest, your release cycle seems OK to me, but it would require some fine tweaking of your Maven tools. If you can't live with that, I would suggest using a classifier instead of relying on the name, as it is always easier to tweak the integration server than the many plugins that rely on the standard naming convention. In conclusion, I believe you are on the right track.
I think you could simplify the process by having only two types as far as Maven is concerned:
Snapshot (In perpetual development)
Releasable (with a version number that can be deployed to maven repository or production release)
I would handle your branching a little differently. If you look at the iterative/Scrum development model, your code should be releasable/shippable at the end of an iteration/sprint:
Main sub version trunk is where developers commit their code
At the end of the sprint/iteration, branch the main trunk and call it the release branch (there should not be a QA branch; any code that is to be released is tested for quality)
Bug fixes should happen on the release branch and periodically merged back to main trunk
This way you can keep creating branches for a release and any bug fixes are committed to branch
Before creating a new branch from the main trunk, always make sure it has all the merges from previous branches
The release plugin from Maven supports branching. It appears to work by assuming that the branch is created to support the next version of your code.
Personally, I'm more inclined to use the versions plug-in, and explicitly set my Maven project's version numbers.
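For instance (the plugin version and the version number passed on the command line are illustrative), declaring the versions plugin once lets you set an explicit version on a branch with mvn versions:set -DnewVersion=16.1:
<!-- pom.xml fragment: the versions plugin used to set branch versions explicitly -->
<build>
  <plugins>
    <plugin>
      <groupId>org.codehaus.mojo</groupId>
      <artifactId>versions-maven-plugin</artifactId>
      <version>2.7</version>
    </plugin>
  </plugins>
</build>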

Promoting several modules (integration -> milestone) in ivy

Ivy is great for managing dependencies, but it isn't meant to handle the entire software lifecycle across many modules. That said, it does have several features that seem to support it (such as the status and branch attributes), and the ivy best practices blurb alludes to being able to promote integration revisions to milestone or release, "with some work".
Unfortunately I haven't found definitive guidance on how to manage the dev -> test -> deploy cycle. Here are some things I want to achieve:
(Given that devs typically work across many modules in a local workspace)
Dev can locally publish changes to a module, so that other modules in the workspace can get the updated artifacts.
Dev can designate a version as "ready to deploy to test" with one command.
Tester can designate a version as "ready for prod" with one command.
Dev can rebuild any version from source and the appropriate dependencies are picked up correctly (aka repeatable builds).
Some things I'm fairly clear about are:
The revision status should be used to denote whether that revision is meant only for development, is ready for testing, or is ready for production
The branch attribute should be sufficient to handle different project branches
Here is what I'm grappling with:
How to promote integration builds
Say I have modules a through e checked out in my workspace, where a depends on b and c, c depends on d, and both b and d depend on e.
Now I'm happy with module a, and decide to publish a milestone using the checked out versions in my workspace. What needs to happen in the repo is:
e-1.0-RC1 gets published
d-1.1-RC2 gets published, referencing e-1.0-RC1 as a dependency
c-2.0-RC1 gets published, referencing d-1.1-RC2 as a dependency
b-3.3-RC1 gets published, referencing e-1.0-RC1 as a dependency
Finally, a-7.1-RC2 gets published, referencing c-2.0-RC1 and b-3.3-RC1 as dependencies.
If I try to roll my own for this, I'd probably end up doing some workspace management, ivy.xml find & replace, etc. Before I open that can of worms, I'd like to get some opinions. What's the best way to tackle this?
You can use recursive delivery to publish modules and their dependencies with a higher status.
Using your example:
e-1.0-RC1 gets published with an integration status
d-1.1-RC2 gets published with an integration status, referencing e-1.0-RC1 as a dependency
c-2.0-RC1 gets published with an integration status, referencing d-1.1-RC2 as a dependency
b-3.3-RC1 gets published with an integration status, referencing e-1.0-RC1 as a dependency
a-7.1-RC2 gets published with an integration status, referencing c-2.0-RC1 and b-3.3-RC1 as dependencies.
Finally, you decide to promote a-7.1-RC2 to a milestone status, so you do a recursive delivery (use the delivertarget attribute). This will recursively call the delivertarget for each dependency that has a status lower than milestone and publish it with a milestone status.
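A rough Ant sketch of that promotion (target names, file patterns and the properties passed to the recursive target are assumptions; check the ivy:deliver documentation for the exact contract, and note the ivy antlib namespace must be declared on the project):
<!-- build.xml fragment: deliver module "a" with milestone status; delivertarget
     names the Ant target Ivy calls for each dependency whose status is still
     lower than milestone, so the whole dependency graph gets promoted -->
<target name="promote-milestone">
  <ivy:deliver status="milestone"
               pubrevision="${version}"
               deliverpattern="${build.dir}/[module]-ivy.xml"
               delivertarget="promote-dependency"/>
</target>

<target name="promote-dependency">
  <!-- Hypothetical promotion of one dependency; Ivy exposes the dependency
       being delivered via properties (see the docs for the exact names) -->
  <echo message="Promoting a dependency to milestone"/>
</target>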
The nice thing about this is that you don't need (or want) to have each project checked out in your workspace, just a. This also means that it's much easier to create a deployment pipeline and have your CI server:
run unit tests for a,
build a,
publish a as integration,
deploy a to a System Test environment,
run some System Tests
promote a from integration to milestone (which promotes its dependencies)
deploy a to an Acceptance Test environment,
run some Acceptance Tests
promote a from milestone to release (which promotes its dependencies)
deploy a to production (or upload it to a download site)
At no time does the pipeline need to access the dependent projects and, since the recursive delivery is generic, when you add or remove dependencies (via your ivy.xml files) you don't need to change anything in your pipeline.
I've marked this answer as a community wiki. Anyone else care to expand on it or correct anything I got wrong?
How do you do this step?:
promote a from milestone to release (which promotes its dependencies)
I was planning on doing a retrieve and publish. Is there a better way?