How to start parallel releases in VSTS?

We have 10 projects to release. It would be a time saver to start or create those releases in parallel. Right now we have to click and approve 10 pipelines. It takes around half an hour and 30 clicks.
Facts about Architecture
every microservice has its own pipeline
every pipeline has 3 artifacts

Enable the Continuous Delivery trigger on each release definition.
Don't require approvals for lower environments within your deployment pipeline. Set the deployment condition for your lower environment to "After release is created".
That will immediately start deploying to the chosen environments after artifacts are published.
Then promote each service to the next stage in your pipeline as it's ready to go.
Since you stated they're microservices, they should be deployable independently of one another with no degradation of service. Breaking changes should be extremely rare. If you find that you have to deploy many services in lock-step, you probably have an architecture problem, because the entire point of microservices is that they are independent of each other. At that point, you're no longer working with microservices; you're working with small, tightly coupled services.
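If you are on YAML multi-stage pipelines rather than classic release definitions, a minimal sketch of the same shape could look like the following (the branch, stage, environment, and script names are placeholders, not taken from the question):

```yaml
# Hypothetical sketch: CI trigger, an ungated dev stage, and a gated test stage.
trigger:
  branches:
    include:
      - main            # every push to main creates a new run automatically

stages:
  - stage: Build
    jobs:
      - job: Build
        pool:
          vmImage: ubuntu-latest
        steps:
          - script: echo "build and publish the three artifacts here"   # placeholder

  - stage: Dev
    dependsOn: Build    # starts as soon as the build succeeds, no approval needed
    jobs:
      - deployment: DeployDev
        environment: dev          # no approval check configured on this environment
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to dev"    # placeholder deployment step

  - stage: Test
    dependsOn: Dev      # promoted only after Dev succeeded
    jobs:
      - deployment: DeployTest
        environment: test         # add an approval check here for manual promotion
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to test"   # placeholder deployment step
```

With that shape replicated per microservice, every push starts its own deployment to dev without anyone clicking through ten release definitions, and promotion to higher stages stays a single approval per service.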

Related

CI/CD or Release Management - More than one artifact for one release

I am trying to find the most correct and simplest deployment process for each application release we have. The difficulties are the following:
It is Azure DevOps Releases.
The application has more than one artifact, and they should be delivered as a whole: the application distro, its modules, some configuration for servers that are not directly linked with the application, and of course automation scripts (which are versioned). All of them have their own build process and can be versioned separately.
All those components (App, separate server config, automation) are delivered as one product version. However, they could have different versions, for example: 12.2.1 (app), 12.2.3 (server), 12.2.1 (automation).
The question is how to build the process so that, after an official release, we can stick all of the final versions together (i.e. without specifying them manually for each pipeline during release creation), taking into account that any one component's version can be increased and we should be able to increase the release version, for example for hotfixes.
Release pipelines and 3 artifacts: there are 3 artifacts, and a user has to specify all 3 versions manually during creation - quite a high risk of a misclick. Unfortunately, there are 10 of them... 10 × 3 = 30 chances to make a mistake.
Release pipeline and 1 artifact (app): consider only the version of the application and automatically obtain the automation scripts and configuration from the feed using the app version. Could work, but there is no observability into which artifacts are going to be used, no way to downgrade, and only the latest version of the artifacts (12.2.3.*) is available.
Specify the version in the variable group connected to the stage (environment). It is easy to make a mistake because a release uses the baked-in version of the variable group. If you update the variable group but don't create a new release, it is an epic fail. Moreover, there is no visibility into what it is going to install/update, etc.
Please share your ideas on how to manage multiple artifact versions within one product release, to make the process more robust and clear with a little flexibility.
Here's how it works for me, via this simple approach:
A single build pipeline with many Agent Jobs, where each one represents an artifact.
Here you publish the different artifacts under separate folders and automatically increase versions if needed, using a script for each; at the end, everything ends up under the same BuildId.
Alternatively, if separation isn't needed, everything can run under the same Agent Job.
A single release pipeline that is linked to this big nested artifact and performs the deployment.
Here too, if separation is needed, you could have multiple stages or multiple Agent Jobs as before.
Version increases either happen manually at commit time, or scripts automatically bump them on each build.
I would prefer the first, as the second doesn't indicate whether there was any actual change.
The possibility of a developer forgetting to update the version is about the same as a developer introducing a bug, so it can be fixed later.
Then at the end, you have a Single Release => Single Build => Many versions => A commit.
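For illustration, a minimal sketch of such a single build pipeline with one job per artifact might look like this (the job names, script names, and paths are placeholders, not taken from the answer):

```yaml
# Hypothetical sketch: one build, three artifacts, all published under the same BuildId.
trigger:
  - main

pool:
  vmImage: ubuntu-latest

jobs:
  - job: App
    steps:
      - script: ./build-app.sh                       # placeholder build script
      - publish: $(Build.SourcesDirectory)/out/app   # placeholder output folder
        artifact: app

  - job: ServerConfig
    steps:
      - script: ./package-server-config.sh           # placeholder packaging script
      - publish: $(Build.SourcesDirectory)/out/config
        artifact: server-config

  - job: Automation
    steps:
      - script: ./package-automation.sh              # placeholder packaging script
      - publish: $(Build.SourcesDirectory)/out/automation
        artifact: automation
```

The release pipeline then links to this one build, so picking a single BuildId pins the app, server config, and automation versions together instead of asking the user for three separate versions.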

Azure DevOps Release Pipelines: Letting release flow through multiple environments with manual triggers

I'm trying to configure Azure DevOps Release pipelines for our projects, and I have a pretty clear picture of what I want to achieve, but I'm only getting almost all the way there.
Here's what I'd like:
The build pipeline for each respective project outputs, as artifacts, all the things needed to deploy that version into any environment.
The release pipeline automatically deploys to the first environment ("dev" in our case) on each successful build, including PR builds.
For each successive environment, the release must have been deployed successfully to all previous environments. In other words, in order to deploy to the second environment ("st") it must have been deployed to the first one ("dev"), and in order to deploy to the third ("at") it must have been successfully deployed to all previous (both "dev" and "st"), etc.
All environments can have specific requirements on from what branches deployable artifacts must have been built; e.g. only artifacts built from master can be deployed to "at" and "prod".
Each successive deploy to any environment after the first one is triggered manually, by someone on a list of approvers. The list of approvers differs between environments.
The only way I've found to sort of get all of the above working at the same time is to automatically trigger the next environment after a successful deployment, and add a pre-deployment gate with a manual approval step. This works, except the manual approval doesn't trigger the deployment per se, but rather lets an already triggered deployment start executing. This means that any release that's not approved for lifting into the next environment is left hanging until manually dismissed.
I can avoid that by having a manual trigger instead of automatic, but then I can't enforce the flow from one environment to the next (it's e.g. possible to deploy to "prod" without waiting for successful deployments to the previous stages).
Is there any way to configure Azure DevOps Release Pipelines to do all of the things I've outlined above at once?
I think you are correct: you can only achieve that by setting automatic releases after a successful release, with approval gates. I don't see any other options with current Azure DevOps capabilities.
A manual trigger with approval gates doesn't check that previous environments were successfully deployed to, unfortunately.
I hope this provides some clarity after the fact. Have you looked at YAML pipelines? In those you can specify the conditions on each stage.
The stages can then have approvals on them as well.
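As a rough sketch of that approach, assuming the dev/st/at environment names from the question (approvers are configured as approval checks on each environment in the Azure DevOps UI, not in the YAML itself):

```yaml
# Hypothetical sketch: stage order enforced with dependsOn, branch restriction via a condition.
stages:
  - stage: Dev
    jobs:
      - deployment: DeployDev
        environment: dev          # no approval check: deploys on every successful build
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to dev"     # placeholder

  - stage: St
    dependsOn: Dev                # cannot start unless Dev succeeded
    jobs:
      - deployment: DeploySt
        environment: st           # approval check on this environment = manual promotion
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to st"      # placeholder

  - stage: At
    dependsOn: St                 # requires St (and therefore Dev) to have succeeded
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/master'))
    jobs:
      - deployment: DeployAt
        environment: at           # approval check on this environment
        pool:
          vmImage: ubuntu-latest
        strategy:
          runOnce:
            deploy:
              steps:
                - script: echo "deploy to at"      # placeholder
```

Each stage only runs after the stage it depends on has succeeded, the condition restricts the "at" stage to builds of master, and the approval checks make promotion a deliberate, per-environment decision by the configured approvers. Note that a run waiting on an approval still times out eventually rather than being dismissed immediately, so this doesn't fully remove the "hanging release" behaviour described above.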

VSTS + Octopus Deploy? Why do I see a lot of CI/CD setups with both?

I'm a developer who's transitioning into DevOps. By observation, I've noticed that a lot of dev shops have started using Octopus Deploy and Azure DevOps Services (AzDo, formerly VSTS), or they are starting new projects to set up DevOps CI/CD pipelines AND they spec to use both tools.
I've been through some quick training for both tools and though they aren't perfectly the same, AzDo seems to offer all of the same features as Octopus Deploy.
So, my question is: if a company is already using AzDo for much of their version control, or anything CI/CD pipeline-related, why would you use Octopus? What benefit does it offer to use Octopus for your builds and deploys alongside AzDo?
Note, I am very, very new to DevOps. I'm just asking because, at the "10,000-foot view", there doesn't seem to be any reason for Octopus if you're already using AzDo. I mention Octopus Deploy by name because I see it come up frequently. However, I assume there could be other tools that serve the same purpose of automated building and deploying that might also integrate with AzDo. However, AzDo offers build and deploy built into one. Why split out the work?
Let me preface this with why I like to both build and deploy with VSTS:
Same permissioning end to end
Line of sight from end to end build and deployment
Reasons I favor Octopus Deploy over VSTS Release:
Ability to upload packages/artifacts
External ones that are maybe one-off packages to get deployed for a specific release
Target Definition
When you create Targets, i.e. the servers you are deploying to, you are able to add a target to one or multiple environments and assign tags/roles to it. What does this mean? A more flexible server definition: rather than strictly assigning Agents to a pool or servers to a Deployment Group, you can allow a target to span multiple environments (e.g. a testing server that spans your Dev and Test environments and only gets triggered on steps that are defined for that role). I realize you can accomplish similar things in VSTS, but in my opinion it's far more cumbersome.
Variable Definition
Variables can be grouped at a global level and grouped by a specific pipeline/process (that part is similar to VSTS). Variables can also be grouped or scoped by environments or roles (above), so you are able to have different variable values per role per environment; both super granular and flexible. One place this comes in handy is if you have a backend server with a connection string and maybe 2 content delivery nodes (role: content delivery) that get slightly different values than the backend server. At the moment, I do not know (other than creating new environments) how one would accomplish the same in VSTS.
Process Definition
All of the above comes together in the process definition of Octopus Deploy. The super flexible and granular variable and target definitions let you focus on the actual deployment process rather than getting hung up on the nuances of the UI and its limitations. One example would be defining a process where step one takes node one out of a load balancer (called from a central server), step two deploys code to node one, step three puts it back in the load balancer, step four takes node two out of the load balancer from the central server, step five deploys code to node two, and the last step puts node two back into the load balancer. I realize it's a very simple hypothetical, but within Octopus Deploy it's one steady process filtered to execute on specific roles, whereas within VSTS you would have to break that down into different agent phases and probably pipelines.
Above are really the biggest points I see for using Octopus Deploy over VSTS Release. Now why would someone use VSTS to build and OD to release/deploy? There are a lot of different factors that go into it; some are corporate drivers, like having an enterprise git client with permissions handled through MSDN. Sometimes it's a project management driver of having work items tied tightly to commits and builds, but with the added flexibility that OD brings to the table for free/minimal cost.
Hoping this helps shine a little light on why some people are crossing streams and using both VSTS and OD.
A lot of good points have been made already, but it really comes down to what you need. I would venture a lot of us started using Octopus before Release Management was really a thing.
We use VSTS for all our source control and builds and then all our deployments are handled through Octopus.
When we started evaluating tools, VSTS had nothing for deployments. Even now, they are still playing catch up to Octopus in feature set.
If you are doing true multi-tenanted and multi-environment deployments, I don't think VSTS really compares. We are using Octopus with around 30 tenants, some on Azure, some on premise. We deploy a mix of web and desktop apps. We are even using Octopus to deploy some legacy VB6 and winforms applications.
Multi-Tenancy (critical for us)
VSTS added Deployment Groups a while ago which sound pretty similar to Octopus Environments before multi-tenancy was implemented. Before Octopus had true multi-tenancy (it's been around a while now), people would work around it by creating different environments per tenant, like "CustomerA - Dev", "CustomerA - Prod", etc. Now you just have your Dev/Test/Prod environments and each tenant can have variables scoped to those individual environments.
Support
Documentation is excellent and it's really easy to get up and running.
The few times I've needed to contact someone at Octopus, they've answered very quickly and knowledgeably.
Usability
Having the Octopus dashboard give us an overview of all our projects is amazing. I don't know of any way to do this in VSTS without going into each individual project.
Octopus works great on a mobile device for checking deployment status and even starting new deployments.
Community
Octopus works with their customers to understand what they want and they often release draft RFCs and have several times completely changed course based on customer feedback.
If we know what sort of applications you are deploying, and to what kinds of environments, we would be able to better tailor our responses.
The features you see today in VSTS weren't there a few years ago, so there might be an historical reason.
But I want to state here some non-opinionated reasons that may lead an organization to opt for different tools instead of one.
Separate responsibility and access levels
Multiple CI tools in dev teams (orgs that are using also Jenkins or TeamCity or else) and need to standardize and control deployments
An org needs a feature available only in Octopus (maybe Multi-tenancy)
Octopus does a great job of focusing on deployments. Features reach Octopus before VSTS, and support is local and responsive. That, and you never run out of build/release minutes!
Seriously though, I just like to support smaller companies where possible and if all features were equal, I'd still pick them.
The big reason in the past was that on-prem TFS and early VSTS did NOT support non-Microsoft (.NET) code very well, if at all. You could use the source control and work item features of TFS and then use Octopus/Jenkins etc. as the build/release parts to cover code that TFS didn't really know what to do with.
Also, the release pipelines used to be very simplistic and not that useful, whereas the other products were all plugin-based and could do (almost) anything you needed them to. Most of that has changed, so VSTS is much better at working with non-Microsoft code bases than it used to be. Over time, integrations get created inside a company's walls, and undoing those decisions can be more painful than just having "too many" tools. I also feel there are just more people out there familiar with those tools, since they have been mature longer and cover a larger part of the development world than VSTS has in the past.
To fully implement CD you need both. VSTS runs tests and is a build server. OD isn’t. VSTS is light on sophisticated application installations. And if you are provisioning environments, IaC style, you need Terraform in addition. Don’t try to shoehorn everything into a single tool. DevOps requires a whole ecosystem. The reasons are not historical.

TFS Release Management 2015 - How to restrict environment deployment order

Quick question.
Is there a way to constrain/restrict the order in which users can deploy builds to environments?
For example, if I have these four environments configured with manual push-button deploy (not automated), I can start all four together if I want. I don't have to wait for one to finish before kicking off the next:
DEV
TEST
STAGE
PROD
Microsoft seems to be missing this feature in TFS 2015. It would make sense to offer a deployment condition that states that previous environments must have successful deployments before you can run push-button deploy for the next.
Yes, I know, you are going to say "but you can automate that so the deploys run in the order you want." Management here does NOT want that. They want push button deployment for each environment WITH a constraint that previous environments must be completed first.
This means a manual start for each environment.
Other than having the release manager "eyeball" the situation before pushing the button for the next environment I can't see a way to configure this rule.
Any ideas?
There isn't any restriction on the manual deploy situation for now. This is by design, to give you the ability to override the release process.
Note that you can always deploy a release directly to any of the environments in your release definition by selecting the Deploy action when you create a new release.
In this case, the environment triggers you configure, such as a trigger on successful deployment to another environment, do not apply. The deployment occurs irrespective of these settings. This gives you the ability to override the release process. Performing such direct deployments requires the Manage deployments permission, which should only be given to selected and approved users.
Source Link: Environment triggers
I suggest you use automation triggers. You could use parallel forked and joined deployments, in combination with the ability to define pre- and post-deployment approvals; this enables the configuration of complex and fully managed deployment pipelines to suit almost any release scenario.
If you insist on manual push-button deploys, you may have to ask the release manager to "eyeball" the situation to restrict the environment deployment order, as you mentioned.

Bluemix: Does Delivery Pipeline support roll back of deployment

I am trying to understand the key differences between the deployment capabilities of the Delivery Pipeline and Active Deploy.
From various pieces of documentation, I understand that Active Deploy can deploy with zero downtime and supports rollback.
I am curious to know the deployment capabilities of Delivery Pipeline.
The Delivery Pipeline is capable of doing zero-downtime deployments. There is an article here that describes how to do it. It is a little outdated, but it mostly holds.
As for doing rollbacks, the pipeline probably does not handle them as well as active deploy would. You can redeploy an older build over a newer one to do the rollback, but that is about it.
There is currently work being done to add some of the Active Deploy features to the Delivery Pipeline.