Deploying code without disrupting testers

I'm trying to find the best practice for deploying code from a TFS Development branch to a Test branch without disrupting the users who are testing.
We currently have continuous integration: we merge the code from Dev to Test, then fire off the separate Test - Nightly build to get those changes onto the build server and thus onto the test site.
Are there practices/tools available to allow users to not be impacted by a deployment? I don't have any experience with Git, which I've seen a few times on Google.
Ideally we could do a deployment, the users wouldn't feel it, and if there was an issue with the code that was introduced we could quickly flip it back to its previous state, rather than perform a rollback, rebuild, etc.
I'm sure I'm explaining something that already exists I just don't know what it is.

Related

How to organise development lifecycle with TFVC

We're working on a system where we're going to be making a lot of changes over the next year or so. We use Azure DevOps with TFVC. Our plan is to develop, test internally, release into a staging environment for the client to test internally, and then release into the live environment once the bugs are ironed out, but we want to do this in fairly small iterations, so we're not releasing massive changes at once and we can get early feedback on changes to make sure we're on the right track and don't get flooded with loads of issues to fix after a big update.
The problem that is worrying me is that once we've released into the testing environment, it is going to take time for the client to test it, and during that time we'll have been working on other improvements and bug fixes. If the client simply says that everything is fine, I can use VS to Get Specific Version as of the time we released the staging version, then build and release that to live. But if any issues come up and we need to fix them, we'll end up having to get the client to test all of the new stuff that we've been working on, and we may be in the middle of large changes that aren't practical to release to staging.
I've looked at TFS branches, but they seem to mess up the paths and be a pain to set up and work with, and I feel like they aren't really for this kind of situation.
How do people generally solve this problem?
(I appreciate that many people will argue that Git is a better source control system, but I'm specifically asking how to do it with TFVC)
Branches in TFVC are far from lightweight and can make switching between branches locally awkward.
The changing of local paths when switching is hard to avoid; it can be worked around by changing your workspace definition, remapping the folders, and forcing a get-latest.
Another option here is git-tfs: it allows you to keep TFVC on the server but use Git locally, with the local benefits (including multiple local branches and the ability to switch between them in place).
A technique for working with environments and approvals while development is ongoing is to create promotion-level branches: one branch for active development, branched from that a branch for the next environment (test), then the next (acceptance?), and so on until you reach production.

Development
 └─ Test
     └─ Acceptance
         └─ Production
Each time development has stabilized enough, merge the code to Test. Any issues can be fixed either directly on Test or by fixing them in Development and then cherry-picking.
The larger the distance between Development and Production the harder it will become to keep the branches in sync.
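In plain git terms (for instance via the git-tfs route mentioned above), the promote-then-cherry-pick flow might look like the sketch below. Branch names are illustrative, and with raw TFVC the equivalents would be "tf branch" and "tf merge" instead.

```shell
#!/bin/sh
# Sketch: promotion branches with a cherry-picked fix (illustrative names).
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -b development
git commit -q --allow-empty -m "initial"
echo "feature work" > feature.txt
git add feature.txt
git commit -q -m "new feature"

# Development has stabilized enough: cut the Test branch from it.
git branch test

# Later, a fix lands on development...
echo "fix" > fix.txt
git add fix.txt
git commit -q -m "urgent fix"
FIX=$(git rev-parse HEAD)

# ...and is cherry-picked onto Test without taking the rest of development.
git checkout -q test
git cherry-pick -x "$FIX"
git log --oneline
```

The `-x` flag records the original commit id in the cherry-picked commit's message, which helps keep track of what has and has not been promoted.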
An alternative is to use release branches, basically collapsing the above structure a bit:

Main
 ├─ Release 1
 └─ Release 2

Active development happens in Main; code is merged to, and hotfixed on, release branches. Any changes made on a release branch must also be merged back.
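A minimal sketch of that release-branch flow in git (branch names and file contents are illustrative):

```shell
#!/bin/sh
# Sketch: release branch cut from main, hotfixed, then merged back.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -b main
echo "v1" > app.txt
git add app.txt
git commit -q -m "release 1 content"
git branch release-1              # cut the release branch

echo "v2 work" > new.txt          # active development continues on main
git add new.txt
git commit -q -m "start release 2 work"

git checkout -q release-1         # a bug is found: fix it on the release branch
echo "v1 + hotfix" > app.txt
git commit -q -am "hotfix"

git checkout -q main              # merge the fix back so it is not lost
git merge -q --no-edit release-1
cat app.txt                       # main now carries the hotfix as well
```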
There is a better solution
Instead of trying to be more productive by working ahead on new features, can you help the client test the potential releases faster? In that case you won't need to wait long for the testing to finish, you can provide very quick fixes, and you can then switch focus back to new development. It will prevent all kinds of issues with merging, reappearing bugs from code that wasn't merged back to main correctly, etc.

Versioning, CI, build automation best practices

I am looking for some advice on the best practice for versioning software.
Background
Build automation with gradle.
Continuous integration with Jenkins
CVS as SCM
Semantic Versioning
Sonatype Nexus in-house repo
Question
Let's say I make a change to some code. An automated CI job will pull it in and run some tests against it. If these tests pass, should Jenkins update the version of the code and push it to Nexus? Should it be pushed up as a "SNAPSHOT"? Should it be pushed up to Nexus at all, or instead just left in the repository until I want to do a release?
Thanks in advance
I know you said you are using CVS, but first of all, have you checked the git-flow methodology?
http://nvie.com/posts/a-successful-git-branching-model/
I have little experience with CVS, but the methodology can be applied to it, and a good versioning and CI procedure begins with having well-defined branches: basically at least one for the latest release, and one for the latest in-development version.
With this you can tell the CI application what it is working with.
I didn't have time for a more detailed answer before, so I will expand now, trying to give a general answer.
Branches
Once you have clearly defined branches you can control your workflow. For example, it is usual to have a 'master' and a 'develop' branch, where master contains the latest release and develop contains the next release.
This means you can always point to the latest release of the code (it is in the master branch), while the next version is in the develop branch. Of course, this can be more detailed, such as tagging the master branch for the various releases, or having an additional branch for each main feature, but these two are enough.
Following this, if you need to change something in the code, you edit the develop branch and make sure it is all correct, then keep making changes until you are happy with the current version, and move this code to master.
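A rough sketch of that promotion in git, including the release tagging mentioned above (the version number and commit messages are made up):

```shell
#!/bin/sh
# Sketch: merge develop into master and tag the release (illustrative names).
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -b master
git commit -q --allow-empty -m "initial release"
git checkout -q -b develop
echo "next version" > change.txt
git add change.txt
git commit -q -m "work for the next release"

# Happy with the current state of develop: promote it to master.
git checkout -q master
git merge -q --no-ff --no-edit develop
git tag -a v1.1.0 -m "release 1.1.0"   # tag master for this release
git tag --list
```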
Tests
Now, how do you make sure everything is correct and valid? By including tests in your project. There is a lot that can be read about testing, but let's keep it simple. There are two main types of tests:
White-box tests, where you know the insides of the code and prepare the tests for the specific implementation, making sure it is built as you want
Black-box tests, where you don't know how the code is implemented (or at least you act as if you didn't) and prepare more generic tests, meant to make sure it works as expected
Now, going to the next step, you won't hear much about these two types; instead people will talk about the following ones:
Unit tests, where you test the smallest piece of code possible
Integration tests, where you connect several pieces of code and test them together
"The smallest piece of code possible" has a lot of different meanings, depending on the person and project. But keeping with the simplification: if you can't make a white-box test of it, then you are creating an integration test.
Integration tests include things like database access and running servers, and they take a long time, at least much longer than unit tests. Also, due to their complexity, integration tests may require setting up a specific environment.
This means that while unit tests can be run locally with ease, integration tests may be so slow that people dislike running them, or may simply be impossible to run on your machine.
So what do you do? Easy: separate the tests, so unit tests can be run locally after each change, while integration tests are run (after the unit tests) by the CI server after each commit.
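One hedged way to get that separation is a directory per suite, so each can be discovered and run independently. The layout and file names below are illustrative:

```shell
#!/bin/sh
# Sketch: fast tests in tests/unit, slow ones in tests/integration, so the
# developer runs the first locally and the CI server runs both per commit.
set -e
cd "$(mktemp -d)"
mkdir -p tests/unit tests/integration
cat > tests/unit/test_fast.py <<'EOF'
import unittest

class FastTest(unittest.TestCase):
    def test_add(self):
        self.assertEqual(1 + 1, 2)
EOF

# Locally, after each change, run only the fast suite:
python3 -m unittest discover -s tests/unit
# The CI server would additionally run:
#   python3 -m unittest discover -s tests/integration
```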
Additional tests
Just as a comment, don't stop at this simplified vision of tests. There are several ways of handling tests, and some tests don't fit neatly into unit or integration testing. For example, it is always a good idea to validate code-style rules, or you can make a test which just deploys the project into a server, to make sure it doesn't break.
CI
Your CI server should be listening for commits, and if correctly configured it will know when a commit comes from a development version, a release or anything else, allowing you to customise the process as you wish.
First of all, it should run all the tests. No excuses, and don't worry if it takes two hours: it should run all the tests, as this is your shield against future problems.
If there are errors, then the CI server will stop and send a warning. Fix the code and start again. If all tests passed, then congratulations.
Now it is time to deploy.
Deploying should always be taken with care. The latest version available in the dependencies repository should, always, be the most current one.
It is nice having a script to deploy the releases into the repository after a commit, but unless you have some sort of final validation, a manual human-controlled one, you may end up releasing a bad version.
Of course, you may ignore this for development versions, as long as they are segregated from the actual releases, but otherwise it is best to handle the final deployment by hand.
Well, it may be done with a script or whatever you prefer, but it should be you who begins the deployment of releases, not the machine.
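A sketch of that gate, assuming versions follow the usual -SNAPSHOT naming convention; the version string and the commented-out Gradle call are placeholders, not a definitive implementation:

```shell
#!/bin/sh
# Sketch: CI auto-publishes development builds (-SNAPSHOT versions), while
# release versions wait for a human to start the upload.
VERSION="1.4.0-SNAPSHOT"

case "$VERSION" in
  *-SNAPSHOT)
    echo "auto-publishing $VERSION to the snapshot repository"
    # ./gradlew publish   # would target the Nexus snapshot repo
    ;;
  *)
    echo "$VERSION is a release: waiting for a manual deployment"
    ;;
esac
```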
CI customisation
Having a CI server allows for much more than just testing and building.
Do you like reports? Then generate a test coverage report, a quality metrics one, or whatever you prefer. Are you using an external service for this? Then send it the files and let it work.
Does your project contain files to generate documentation? Build it and store it somewhere.

continuous integration pain points

Recently my fledgling team (just two devs) attempted to implement continuous delivery practices as described by Jez Humble.
That is, we ditched feature branches and pull requests (in git) and aimed to commit to the mainline branch at least daily.
We have a comprehensive unit and functional test suite for both the front and back end which is triggered automatically by Jenkins, when pushing to git.
We configured a feature switching app and resolved to use it for longer running features.
However, we encountered several problems and I'm curious to get a perspective from people who are successfully using this approach.
Delays due to Vetting/ Manual QA process
Often tasks were small enough that we didn't think they warranted configuring feature switching, e.g. adding an extra field to a form, or changing some field labels. However, for various reasons such a ticket would become blocked (e.g. some unforeseen aspect of the task needing UX input).
This would mean mainline ended up in a compromised state while we waited for external dependencies to unblock the task. Often we'd be saying "we can't deploy anything until Thursday, as that's when we can get an IA review".
The answer here is probably a much tighter vetting of which tasks are started. However, it was often difficult to completely anticipate every potential blocker. Maybe if a task becomes blocked, additional dev work should be done to add a feature switch, or the commits should be reverted? Tricky situation.
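For what it's worth, a switch does not have to be heavyweight: even an environment variable checked at render time can keep a half-finished change dark on mainline deployments. The variable name and the "render" step below are entirely hypothetical:

```shell
#!/bin/sh
# Sketch: a trivial feature switch via an environment variable, defaulting
# to off, so an unfinished form field ships dark.
FEATURE_EXTRA_FIELD="${FEATURE_EXTRA_FIELD:-off}"

if [ "$FEATURE_EXTRA_FIELD" = "on" ]; then
  echo "render form with the extra field"
else
  echo "render form without the extra field"
fi
```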
Issues with code review during integration on mainline branch
Branches and pull requests give a nice breakdown of the changes made on a single task. However, attempting CD we ended up with a mish-mash of unrelated commits on mainline, with the code reviewer having to somehow piece together the commits that related to the task he was reviewing. And often there'd be a number of additional minor bug fixes and changes-in-response-to-review commits at the end of a task. Essentially we couldn't figure out a clean way to code review work with this setup.
Generic code review issues
We used phabricator for a bit to do post-commit code reviews, but found it was flagging every single commit (some very minor) for code review, rather than showing us a list of changes per individual dev task. So it made reviewing the code onerous compared to git pull requests. Is there a better way?
We've now reverted to short-lived feature branching in git, raising pull requests to initiate code review, and it's a nice setup; but if we could fix the issues we're having with non-feature-branch CD, we'd like to re-attempt that approach.
Could you automate the vetting process and/or run it before you integrate? If you automate the vetting process (for example, adding a form or button), you just need a suite of tests to run post-integration to validate that your mainline is not broken.
You need to code review before integration, i.e. on the pull request. If issues are caught during a code review and fixed, the pull request is updated and the mainline is not messed up.
Code review tools are very specific to a group of developers and the team's needs. I suggest you play with a few code review tools to see which one suits your needs.
Based on most of your questions, I would recommend running all your vetting, code review, etc. before you merge (you can do it in increments if the process is too cumbersome) and running an automated suite of tests for all the stuff that you want to do post-integration.
If the process you have set up in your team is too complicated to be finished in a day and can have multiple iterations, then it is worthwhile for you to evaluate a modified version of gitflow rather than a fork-based CI model.
If you use feature branches to work on tasks, then when you finish a task you can either merge it back to the integration branch or create a pull request for the merge back to the integration branch.
In both cases you get a merge commit: a summary of every change you made on the feature branch.
Do you need something more than this?
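As a sketch of that per-task summary: with --no-ff merges, the first-parent history of the integration branch reads as one entry per task (branch and commit names below are made up):

```shell
#!/bin/sh
# Sketch: one --no-ff merge commit per task, then read mainline task by task.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -b main
git commit -q --allow-empty -m "initial"
git checkout -q -b task/field-labels
echo "new labels" > form.txt
git add form.txt
git commit -q -m "update field labels"
git checkout -q main
git merge -q --no-ff --no-edit task/field-labels  # one merge commit per task
git log --first-parent --oneline                  # mainline, task by task
```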

What is the correct/best/most efficient way to use centralised version control?

I work as part of a small development team (4 people).
None of us are incredibly experienced with version control, but we are required to use Perforce under our company's policies. For the most part it has been great, but we have kept to a simple process agreed between ourselves that is starting to become less than ideal. I was wondering if people could share their experiences of version control working smoothly and efficiently.
Our original setup is this:
We have a trunk, which holds production code as it is now.
Each user creates a development branch for their work, as we have always worked on separate areas that don't really affect each other.
We develop on Red Hat Linux boxes and the code is run from /var/www/html. So we sync to a workspace and copy those files to this path, change the permissions, and then perform our changes there. When we want to check in, we check out the files in the workspace, overwrite them with what we have changed, and submit (I think this might be our weakest part).
Any changes to trunk will be incorporated if they affect the functionality in question. The code is then deployed for testing.
When testing is complete, we merge the branch into trunk and then create a release branch from the current trunk; this is tested again and then released into production.
This worked fine previously because our projects were small and very separated. Now, however, we are all working on the same big dev branch. Changes have been released since the creation of the dev branch, and more will be made before it is finished.
We are also required to deploy the code for testing at various stages of its development, and this code needs to be up to date with both the development changes and any changes that have been made to production.
We have decided at this stage that we will create the release branch at the same time as the dev branch, into which we will merge the current trunk (production) and the current dev branch each time we need a testing version, so that it is completely up to date. However, this merge takes a lot of time from the whole team and isn't really working out too well.
I've been told that different teams have different ways of going about things, so I'm not looking for a fix for my process, but I would love to hear what setup you use, if you're willing to share.
If you are not particularly familiar with version control and best practices, I would suggest utilizing Streams in Perforce. Functionally, streams and branches are very similar. The difference with streams is that Perforce utilizes pre-built relationships based on the stream type and gives basic governance (i.e. you can't copy files to another stream until you merge).
All the commands CAN be overridden by an admin.
Once you are utilizing streams you can do things a few different ways. You have three types of streams, Release (most stable), Main (stable), and Development (least stable). You can create any hierarchy you like.
I suppose in your case I would have a Mainline, an integration development stream, and then a development stream for each developer to utilize. That way you each have your own playground and can move your changes to the integration stream once they are complete. Those completed changes can then be merged down to the other developer streams.

force stable trunk/master branch

Our development department is growing and I want to enforce a stable master/trunk.
Up to now every developer can commit into the master/trunk. In future, developers should commit into a staging area, and if all tests pass the code gets moved to the trunk automatically. If the tests fail, the developer gets a mail with the failed tests.
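The desired flow, push to staging, run the tests, promote only on green, can be sketched with plain git and a stand-in test script (branch names and run_tests.sh are illustrative; a real setup would send the failure mail instead of printing):

```shell
#!/bin/sh
# Sketch: run the suite against staging and fast-forward master only on green.
set -e
cd "$(mktemp -d)"
git init -q
git config user.email dev@example.com
git config user.name dev
git checkout -q -b master
git commit -q --allow-empty -m "initial"
git checkout -q -b staging
echo "exit 0" > run_tests.sh      # stand-in for the real test suite
git add run_tests.sh
git commit -q -m "candidate change"

if sh run_tests.sh; then
  git checkout -q master
  git merge -q --ff-only staging  # promote only green commits
  echo "tests passed: master updated"
else
  echo "tests failed: master untouched, mailing the developer" >&2
fi
```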
We have several repositories: One for the core product, several plugins and a repository for every customer.
Up to now we run SVN and git, but switching all repos to git could be done, if necessary.
Which software could help us to get this done?
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Environment: We are 10 developers, and use python and django.
Question: Which tool can help me to force a stable master branch?
Update
I was on holiday, and now the bounty has expired. I am sorry. Thank you for your answers.
Question: Which tool can help me to force a stable master branch?
Having researched this particular aspect of CI quasi-pathologically since our ~20-person PHP/ZF1-based dev team made the switch from SVN to Git over the winter (and I became the de facto git mess-fixer), I can't help but share my experience with this particular aspect of continuous integration.
While obviously, having a "critical mass of unit tests running" in combination with a slew of conditionally parameterized Jenkins jobs, triggering infinitely more conditionally parameterized jobs, covering every imaginable circumstance would (perhaps) be the best and most proper way to move towards a Continuous Integration/Delivery/Deployment model, the meatspace resources required for such a migration are not insignificant.
Some questions:
Does your team have some kind of VCS workflow or, minimally, rules defined?
What percentage would you say, roughly, of your codebase is under some kind of behavioral (e.g. Selenium), functional or unit testing?
Does your team ( / senior devs ) actually have the time / interest to get the most out of gerrit's peer-based code review functionality?
On average, how many times do you deploy to production in any given day / week / month?
If the answers to more than one of these questions are 'no', 'none', or 'very little/few', then I'd perhaps consider investing in some whiteboard time to think through your team's general workflow before throwing Jenkins into the mix.
Also, git-hooks. Seriously.
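For instance, a minimal client-side pre-push hook might look like the sketch below ("./run_tests.sh" is a stand-in command; a server-side pre-receive hook would be the stricter variant). Installed as .git/hooks/pre-push, git feeds it one line per ref being pushed:

```shell
#!/bin/sh
# Sketch: a pre-push hook that refuses a push to master when tests fail.
set -e
cd "$(mktemp -d)"
cat > pre-push <<'EOF'
#!/bin/sh
while read local_ref local_sha remote_ref remote_sha; do
  case "$remote_ref" in
    refs/heads/master)
      ./run_tests.sh || {
        echo "pre-push: tests failed, refusing push to master" >&2
        exit 1
      }
      ;;
  esac
done
EOF
chmod +x pre-push

# Simulate git invoking the hook for a push to master while tests fail:
printf 'exit 1\n' > run_tests.sh
chmod +x run_tests.sh
echo "refs/heads/master aaa refs/heads/master bbb" | ./pre-push || echo "push blocked"
```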
However, if you're super keen on having a CI/Jenkins server, or you have all those basics covered already, then I'd point you to this truly remarkable gem of a blog post:
http://twasink.net/2011/09/16/making-parallel-branches-meet-regularly-with-git-and-jenkins/
And its equally savvy cousin:
http://twasink.net/2011/09/20/git-feature-branches-and-jenkins-or-how-i-learned-to-stop-worrying-about-broken-builds/
Oh, and of course, the very necessary devopsreactions tumblr.
There are some articles on the web which explain how to use Gerrit and Jenkins to enforce a stable branch.
I am unsure if I need both, or if it is better to use something else.
Gerrit is for code review.
Jenkins is a job scheduler that can run any job you want, including one that:
compiles everything
launches the unit test suite.
In each case, the idea is to do a guarded commit, i.e. push to an intermediate repo (Gerrit's, or one monitored by Jenkins), and only push to the final repo if the intermediate process (review or automatic build/test) passes successfully.
By adding intermediate repos, you can easily enforce one unique branch on the final "blessed" repo, to which those intermediate repos will push only the commits that are deemed worthy.
It sounds like you are looking to establish a standard CI capability. You will need the following essential tools:
Source version control: SVN, git (you are already covered here)
CI server: Jenkins (you will need to build and run tests with each check-in and report results; Jenkins is the de facto standard tool used for this)
Testing: PyUnit
Artifact repository: you will need a mechanism for organizing and archiving the increments created with each build. This could be a simple home-grown directory-based system. I have also used Archiva, but there are other tools.
There are many additional tools that might be useful depending on your development process:
Code review: if you want to make code review a formal gate in your process, Gerrit is a good tool.
Code coverage analysis: I've used EMMA in the past for Java. I am sure there are some good tools for Python coverage.
Many others: a library of Jenkins plugins providing a variety of useful tools is available to you. Taking some time to review the available plugins will definitely be time well spent.
In my experience, establishing the right culture is as important as finding the right tooling.
Testing: one of the 10 principles of CI is "self-testing builds". In other words, you must have a critical mass of unit tests running. Developers must become test-infected. Unit testing must become a natural, highly valued part of each developer's individual development process. In my experience, establishing a culture of test infection is the hardest part of deploying CI.
Frequent check-in: developers and managers must organize their work in a way that allows for frequent small check-ins. CI calls for daily check-ins. This is sometimes a difficult habit to establish.
Responsiveness to feedback: CI is about immediate feedback. Developers must be conditioned to respond to the immediate feedback. If unit tests fail, the build is broken. Within 15 minutes of a CI build breaking, the responsible developer should either have a fix checked in, or have the original bad check-in backed out.