How to deal with IaC code (Infrastructure part of build pipeline) when the pipeline fails - azure-devops

This is a general question that I have been having for a couple of days now, and after hours of searching Google I am still not sure how it works.
Say I have a single pipeline that looks for changes to my IaC code, deploys them if there are any, and then builds the application code and deploys it to the same infrastructure created in the previous step.
So the pipeline will look something like this:
Step 1/Stage 1: Look for changes in the IaC code (Terraform) and deploy if there are any changes to .tf files
Step 2/Stage 2: Build the npm application
Step 3/Stage 3: Run the tests
Step 4/Stage 4: Deploy the built application to the infrastructure
Now let's say the application fails to build (step 2) or the tests fail (step 3): how do we deal with rolling back the infrastructure?

You can always deploy previous versions of your application in a different release or build.
You should have a quality assurance environment before the production environment, so you can check whether new changes will work.
If you want to include a rollback deployment inside the same build, you can use stage conditions to add a new stage that runs only if previous stages fail.
Check the failed() condition and combine it with the 'and' and 'or' keywords:
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/stages?view=azure-devops&tabs=yaml#conditions
# stage B runs if A fails
- stage: B
  condition: failed()
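
Applied to the pipeline in the question, the rollback could live in an extra stage that only runs when the build or test stages fail. Below is a minimal sketch; the stage names, placeholder scripts and the rollback action itself (e.g. re-applying the last known-good Terraform configuration) are illustrative assumptions, and the dependency-result expressions should be verified against the Azure Pipelines documentation linked above:

stages:
- stage: Infrastructure        # Terraform plan/apply when .tf files change
  jobs:
  - job: Terraform
    steps:
    - script: echo "terraform init / plan / apply here"
- stage: Build                 # build the npm application
  dependsOn: Infrastructure
  jobs:
  - job: BuildApp
    steps:
    - script: echo "npm ci && npm run build"
- stage: Test
  dependsOn: Build
  jobs:
  - job: RunTests
    steps:
    - script: echo "npm test"
- stage: Deploy
  dependsOn: Test
  jobs:
  - job: DeployApp
    steps:
    - script: echo "deploy the built application to the infrastructure"
- stage: RollbackInfrastructure
  dependsOn:
  - Build
  - Test
  # run only when the build or the tests failed
  condition: or(eq(dependencies.Build.result, 'Failed'), eq(dependencies.Test.result, 'Failed'))
  jobs:
  - job: Rollback
    steps:
    - script: echo "re-apply the previous known-good Terraform configuration here"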

Related

Azure pipeline Stage and Job Dependencies

I'm trying to create two pipeline templates used to integrate with an API which 1) installs an application and 2) deploys the application to some devices that the API administers. The install template handles the install and gets back an application ID that the deploy template will require in order to deploy the app. There is no way to query the API later on to get the app ID, so I must make it available from the install template for later use by the deploy template. The install task will be called once, but the deploy task can be called multiple times for different device "rings".
I need to support the following scenario where the deploy template can have a dependency on both a job in the current stage, and can be dependent on the same job in a later stage.
stages:
- stage: NonProd
  jobs:
  - template: install.yml#pipeline_template
  - template: deploy.yml#pipeline_template
- stage: Prod
  dependsOn: NonProd
  jobs:
  - template: deploy.yml#pipeline_template
I read here that we can now create these types of dependencies, but is it possible to set the dependencies to come from either a prior stage or a prior job?
I considered trying to combine these into a single template, but unfortunately if the same version of the app already exists the install step will not provide the app id back, just an error.
According to the update on 5/4, jobs can access output variables from previous stages:
You can currently specify that a stage runs based on the value of an output variable set in a previous stage.
This is done in the conditions of a stage.
but is it possible to set the dependencies to come from either a prior stage or a prior job?
You can't directly depend on a job from another stage.
But, when you define multiple jobs in a single stage, you can specify dependencies between them. Pipelines must contain at least one job with no dependencies.
Besides, you can also make one stage depend on other stages.
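
As a sketch of the output-variable approach, a job in a later stage can read a value set by the install job through stageDependencies. The stage, job, step and variable names below are invented for the example:

stages:
- stage: NonProd
  jobs:
  - job: Install
    steps:
    # isOutput=true makes the variable readable by later jobs and stages
    - script: echo "##vso[task.setvariable variable=appId;isOutput=true]example-app-id"
      name: setAppId
  - job: DeployNonProd
    dependsOn: Install
    variables:
      appId: $[ dependencies.Install.outputs['setAppId.appId'] ]
    steps:
    - script: echo "Deploying app $(appId) to NonProd"
- stage: Prod
  dependsOn: NonProd
  jobs:
  - job: DeployProd
    variables:
      # jobs in a later stage read outputs through stageDependencies
      appId: $[ stageDependencies.NonProd.Install.outputs['setAppId.appId'] ]
    steps:
    - script: echo "Deploying app $(appId) to Prod"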

Deployment configured as YAML as part of a Pipeline

We have been using a YAML file to do our CI in Azure DevOps for a few months with the idea that we would get our release configured using YAML in the future.
Well, that time is now and I'm confused about how you would introduce a CD process. With MyProject-CI.yml being a Build Pipeline and our Releases being Classic Pipelines, I assumed that when the time came to express the CD process as YAML we would create a MyProject-CD.yml, triggered by the dropping of an artifact within the MyProject-CI.yml CI.
However, I think that was just a misunderstanding on my part, and what we are supposed to do is convert the original MyProject-CI.yml into a multi-stage pipeline with the following stages:
Build and Run Unit Tests
Deploy to Development and run WebTests
Deploy to Production and run WebTests
Is the switch to a multi-stage CI/CD pipeline in one file correct, rather than Release and Build in separate files?
The short answer is yes, you've got the idea. A single multi-stage pipeline YAML is the way to do both build and deploy, and that is the intended approach. Here is an exercise that parallels your case and might help.
As your pipelines get more complex, you will likely get into scenarios with multiple files, as you can template parts of your pipeline for reuse in multiple places, or to enforce conventions from a central location.
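
A minimal sketch of what such a single multi-stage file could look like; the stage names, environment names (Development/Production) and placeholder steps are assumptions, not taken from the question:

trigger:
- main

pool:
  vmImage: ubuntu-latest

stages:
- stage: Build
  jobs:
  - job: BuildAndUnitTest
    steps:
    - script: |
        echo "build and run unit tests here"
        echo "example output" > $(Build.ArtifactStagingDirectory)/app.txt
    - publish: $(Build.ArtifactStagingDirectory)
      artifact: drop

- stage: DeployDev
  dependsOn: Build
  jobs:
  - deployment: DeployToDev
    environment: Development        # approvals and checks live on the environment
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the 'drop' artifact and run web tests"

- stage: DeployProd
  dependsOn: DeployDev
  jobs:
  - deployment: DeployToProd
    environment: Production
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo "deploy the 'drop' artifact and run web tests"

Deployment jobs also download the pipeline's published artifacts by default, which is one reason to prefer them over plain jobs for the deploy stages.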

Execute a stage in a DevOps Release pipeline every night on a schedule

I have an Azure DevOps CI Build and Release pipeline in the following setup:
CI Build runs with each new commit in develop branch and creates a Build Drop (Artifact)
Release pipeline runs with each new Artifact and deploys to INT and eventually to PROD (after manual approval)
I would like to add a third stage (called e.g. MONITOR) which would run after the PROD release every night, using the same drop that the PROD stage used, with the following schema:
[Build Drop] -> [INT] -> manual approval: [PROD] -> nightly scheduler: [MONITOR]
This seems impossible to me; do you know how to achieve this goal?
The following is crucial for me:
MONITOR and PROD always run from exactly the same artifact
MONITOR is executed only if PROD was successful
if there is a newer PROD release, the old MONITOR is no longer executed; instead the newest one is executed, using the newest artifact that made it to PROD
So far I have tried the following:
merging develop to master when the commit made it to PROD, and then using a scheduled nightly build from master with a MONITOR stage - it works, but MONITOR uses a different artifact than PROD, so it is not usable for me
using a scheduled trigger for MONITOR after PROD - this does not work; MONITOR is executed only once at the scheduled time and never again
creating an extra release pipeline based on a specific artifact version with a scheduled trigger - this works, but I have to maintain the specific artifact version manually with each successful PROD release. Another caveat is that I have to use two separate pipelines, which makes the overview less tidy (but so far this is the best solution I have achieved)
Do you have better ideas? Many thanks.
What I would do is have 2 separate Release Pipelines.
This allows you to schedule the release without producing a new artifact (scheduled build).
Then, I would do some of what #Soccerjoshj07 suggested in that I would invoke the REST API in a task on the MONITOR pipeline/stage.
I would make the REST API call to the Releases endpoint to get the top=1 releases for releasedefinitionid=x. Then use the Release Environment endpoint to get the PROD environment for that latest release ID. With the environment in hand, check whether the status is succeeded. If not, fail the release.
Edit as per new requirement outlined in comment
Given PROD.1 is succeeded and PROD.2 is failed when MONITOR is triggered, then the artifact from PROD.1 should be used for MONITOR.
With these criteria I would change some things. Rather than have MONITOR go digging for the latest PROD release and fail if the latest is failed, I would make the successful PROD stage tag its build artifact and employ artifact filters on the MONITOR pipeline.
The tagging can occur via the REST API or by using the Tag Build or Release Task from Colin's ALM Corner Build & Release Tools.
Are you using a YAML template, and if so have you played with the cron schedules? https://learn.microsoft.com/en-us/azure/devops/pipelines/build/triggers?view=azure-devops&tabs=yaml#scheduled-triggers
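For reference, a nightly YAML schedule is just a few lines; the branch name and time below are placeholders:

schedules:
- cron: "0 2 * * *"            # 02:00 UTC every night
  displayName: Nightly MONITOR run
  branches:
    include:
    - master
  always: true                 # run even if nothing has changed since the last run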
If using the classic Release UI, I think you can have the definition trigger on a schedule, but that would queue the entire definition. You might have to get creative with variables: maybe create 'isScheduled=true' and use that to determine whether tasks should be skipped.
Other ideas:
Create a logic app or function app that calls the REST API? Sample app and github link here: https://oshamrai.wordpress.com/2019/04/22/azure-devops-rest-api-19-queue-builds-and-download-build-results/
The Azure-Devops AZ CLI extension might be easier, though: https://learn.microsoft.com/en-us/cli/azure/ext/azure-devops/pipelines/build?view=azure-cli-latest#ext-azure-devops-az-pipelines-build-queue
Besides setting up two release pipelines, if you want to use a scheduled trigger for only one stage, I am afraid there is no out-of-the-box way to achieve that; a scheduled trigger applies only to the entire pipeline.
As a workaround, you can add a custom condition to the job of the MONITOR stage.
For example, in YAML:
- stage: MONITOR
  jobs:
  - job:
    condition: and(always(), eq(variables['Release.Reason'], 'Schedule'))
    steps:
In the UI, you can set this in the 'Run this job' option of the agent job.
In this case, the stage is only executed when the release is triggered by the scheduled trigger. If the release is triggered for any other reason, the MONITOR stage will be skipped.
The limitation of this workaround is that when your pipeline is triggered by the scheduled trigger, the two other stages are also executed.
Or write a script in a PowerShell task (in the INT/PROD stages) to determine whether Release.Reason is Schedule. If it is, skip the current stage.
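A minimal sketch of that script-based idea, reusing the Release.Reason variable from above; the step names, the skipDeploy variable and the placeholder deployment step are invented for illustration (in a classic release the same check would sit in a PowerShell task with custom conditions on the deployment tasks):

steps:
- powershell: |
    # When the run was started by the nightly schedule, flag the deployment steps to be skipped
    if ("$(Release.Reason)" -eq "Schedule") {
      Write-Host "##vso[task.setvariable variable=skipDeploy]true"
    }
  displayName: Check trigger reason
- script: echo "deploy to INT/PROD here"
  condition: ne(variables['skipDeploy'], 'true')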
For how to obtain the latest artifact version of PROD and determine the deployment status of PROD, you can refer to the two answers above.

Pre-deployment conditions: deploy to prod if one of the test deployments is successful

Based on the latest build I want to deploy to one of the test environments (there are many test environments). I will choose the test environment during the release, and then deploy to preProd and Prod if the test deployment is successful.
How do I add pre-deployment conditions in the triggers so that deployment proceeds if one of the deployments is successful?
The example below is the closest to my scenario: deployment to the Production stage occurs if one of the QA and Pre-prod stages is successful, like an 'or' condition.
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/triggers?view=vsts#parallel-forked-and-joined-deployments
Generally speaking, we don't recommend triggering the production deployment when some of the test deployments have failed.
If you still want to do that, you could use a gate for the stage. It allows you to customize the trigger conditions yourself. Based on my experience, you could use the Invoke REST API or Invoke Azure Function gates to implement the logic yourself.
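
In a YAML multi-stage pipeline, a rough equivalent of that forked/joined 'or' behaviour can be sketched with parallel stages and a custom condition on the Prod stage. The stage names and placeholder steps are assumptions, and the dependency-result expressions should be checked against the expressions documentation:

stages:
- stage: QA
  jobs:
  - job: DeployQA
    steps:
    - script: echo "deploy to the chosen test environment"
- stage: PreProd
  dependsOn: []                 # run in parallel with QA (forked)
  jobs:
  - job: DeployPreProd
    steps:
    - script: echo "deploy to pre-prod"
- stage: Prod
  dependsOn:                    # joined
  - QA
  - PreProd
  # run Prod when at least one of the earlier deployments succeeded
  condition: or(eq(dependencies.QA.result, 'Succeeded'), eq(dependencies.PreProd.result, 'Succeeded'))
  jobs:
  - job: DeployProd
    steps:
    - script: echo "deploy to production"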

Maintainability of TFS xaml build vs TFS vNext build vs Octopus Deploy

My question is about the maintainability of vNext/Octopus processes vs XAML-based processes. Or rather about the impossibility of maintaining them sanely, which leads me to believe we are doing something terribly wrong.
Given:
Microsoft is pushing to phase out its TFS XAML builds in favour of the vNext builds
Octopus Deploy is a popular deployment automation framework
We have many XAML-based builds, but are starting to port them to vNext
The deployments are automated with Octopus Deploy
Concretely, we have three kinds of builds going on in QA:
Old XAML based compilation builds producing artifacts to be deployed
Ultimately just builds the code, zips it and places it in a well-known location
New vNext compilation builds producing artifacts to be deployed
Same as above
Deployment builds
XAML-based build definition per deployment environment. This is the source of truth for the particular deployment, containing all the configuration URLs, connection strings, certificate thumbprints, etc. The build definition has over 100 build parameters. Each time a new environment is set up we clone an existing XAML build definition and change the parameters.
This build unpacks the build artifact, generates all the web/app config files based on the configuration parameters, and kicks off Octopus Deploy with a lot of parameters using octo.exe, waiting for it to finish
Octopus Deploy process
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
Delivers the relevant packages to the relevant tentacles.
The tentacles unpack and setup their respective packages
So, if we have 50 deployment environments, then we have 50 XAML deployment builds, each capturing the context of the respective environment. But the XAML deployment build delegates the deployment job to Octopus and here we are forced to have 50 Octopus projects - one per deployment.
Why is it so? We examined the option of having just one Octopus project, but what would the release versions of such a project be? In order to navigate among the gazillion releases, the release version must include:
The build version of the deployed code, e.g. 55.0.18709.3
The name of the deployment environment, e.g. atwfm
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact to the same deployment environment twice. The only Octopus project would already have the release 55.0.18709.3-atwfm, so how do we deploy 55.0.18709.3 to atwfm again without deleting the already existing release?
We could not find a workaround, and so we have an Octopus project per deployment.
THIS IS ABSOLUTELY CRAZY, because Octopus projects are a pain to update. Suppose we need to add a step - go do it in 50 projects. There is great advice on the Internet to use automation to edit multiple projects. Not ideal at all.
vNext, BTW, has the same problem. If I port the existing XAML builds to vNext I will end up with 50 vNext deployment builds. If I decide to add a step, I need to do it in all 50 builds!!!
Note that XAML builds do not have this problem (they have many others, though), because their process is separate from the parameters. I can modify the workflow once and all the XAML builds are now updated with the new process change.
My question is - how do people work with vNext and Octopus, because our process drives me crazy. There must be a better way.
EDIT 1
I would like to clarify. We sometimes want to deploy the same build artifacts twice. We are not recompiling them and reusing the same version. No. We already have the build artifacts handy with the build version baked inside the artifact. We may want to deploy it the second time into the same environment because, for example, some databases in that environment have been misconfigured and now this is fixed and we need to redeploy. This does not mean we can rerun the already existing Octopus release, because the fix may involve tweaking the deployment parameters of the respective XAML deployment build definition. Hence we may be forced to restart the XAML deployment build, which never compiles code.
EDIT 2
First of all, why do we drive the deployment from TFS XAML builds rather than from Octopus? Historic reasons. We did not have Octopus at first. The deployment was done by our ad hoc code. When we introduced Octopus we decided to keep the XAML deployment builds for two reasons:
To save the cost of migrating all the XAML deployment builds with all the gazillion deployment parameters to Octopus. Maybe it was a wrong decision, maybe we could have automated the migration.
Because TFS has a better facility for displaying test results. The deployment may end with deployment smoke tests, and their results have to be published somewhere. We do not see how Octopus can help us publish the results; TFS can.
Why would one redeploy? For example, one of the deployment parameters is the certificate thumbprint. When the certificate is renewed, this parameter must be changed (we do have automation for updating XAML build parameters). But often we discover that it was already deployed with the wrong thumbprint. So, we fix the deployment and redeploy. Or we discover some strange behavior of the deployed application and wish to redeploy with some extra tracing/debugging features.
There is a lot to unpack here, but I'll give it a go.
TL;DR It's the way you version the releases that's causing you all the pain. Change that and everything else will fall into place.
Let's start at the end and work backwards.
Octopus Deploy has a concept of Environments. This means that you can deploy the same project to multiple environments and use Octopus's scoping mechanism to manage environment-specific configuration.
So, using your example:
Creates 3 packages from the build artifact previously unpacked by the XAML build to match three areas of deployment - web farm, background job engine cluster and the database
I set up an Environment in Octopus for each of your 50 Environments. (I'll use 3 environments in the example to keep it simple, but the principles apply no matter how many environments you have)
In my Dev Environment I have a single server so I create an environment called "Dev" and add the tentacle for that specific server. Then I tag the tentacle with the deployment type "Web", "Job", "Database"
I then set up a test environment which has 3 servers so I create the Environment and add the 3 servers. I then tag each tentacle with the deployment type "Web", "Job", "Database"
Finally I set up the Production environment. This has 5 web servers, 1 job server and 1 database server. I add all 7 tentacles to the environment, and tag them appropriately.
Now I only need 1 project to deploy to all 3 environments. In my project I have 3 steps.
Step 1 Deploy Web Site
Step 2 Deploy Jobs
Step 3 Deploy database
I can tag each of these steps to say what kind of tentacle I want to deploy to. Now when I run the deployment, the link between the tags on the step and the tags on the tentacle means Octopus knows where to deploy the code.
Variables: Your variables can be scoped to an environment. So, for example, if your dev environment database connection string is dev.database.net/Instance and your test environment database connection string is test.Database.net/Instance, then you can scope these in the variables section of the project. If your DNS is consistent with your environment names, you could even use some of the built-in variables to make adding environments easier, i.e. ${Octopus.Environment.Name}.Database.net/Instance
Releases and version numbers: So here is where I think your problem lies. Adding the environment name to the release and trying to create multiple releases with the same version is basically causing you all of the pain.
Using the example above this gives us 55.0.18709.3-atwfm, but sometimes we want to deploy the same build artifact in the same deployment environment twice. But the only Octopus project would already have the release 55.0.18709.3-atwfm, so how to deploy 55.0.18709.3 in atwfm again, without deleting the already existing release?
There are a couple of things here. In Octopus you can easily deploy again from the UI; however, it sounds like you're rebuilding the artifact and trying to create a new release with the same version number. Don't do this! Each new build should have a distinct and unique build number / release number.
The principle I follow is "build once deploy many"
When you create a release it requires a version number, and this release then flows through the environments. So I build my code and it gets a version number, 55.0.18709.3, then I deploy it to Dev. When the deployment has been verified I then want to "Promote" the release to Test; I can do this from within Octopus or I can get TFS to do this.
So I promote 55.0.18709.3 to Test and then on to Prod. If I need to know which release is in which environment, Octopus tells me this via the dashboard or API.
Finally, I can "orchestrate" the flow of releases through my environments using Build vNext.
So my end-to-end process looks something like this (a rough sketch of the Octopus calls follows the list below):
Build vNext Build
Compile
Run Unit Tests
Package output
Publish package
Build vNext Release
Call Octopus to create the release passing in the version number
Optionally deploy the release to the first environment on your way to live
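
A minimal sketch of those two release steps, using octo.exe as the question already does; the project name, variable names and environment name are placeholders, and the exact arguments should be checked against your Octopus CLI version:

steps:
- script: >
    octo create-release
    --project "MyApp"
    --version "$(Build.BuildNumber)"
    --server "$(OctopusServerUrl)"
    --apiKey "$(OctopusApiKey)"
  displayName: Create Octopus release
- script: >
    octo deploy-release
    --project "MyApp"
    --version "$(Build.BuildNumber)"
    --deployTo "Dev"
    --server "$(OctopusServerUrl)"
    --apiKey "$(OctopusApiKey)"
  displayName: Deploy release to the first environment

Promotion to later environments is then just another deploy-release of the same version with a different --deployTo value, whether you run it from the pipeline or from the Octopus UI.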
I now have everything I need in Octopus to deploy to ANY environment with a single project and my environment specific configuration.
I can either "Deploy" the release to a specific environment or "Promote" the release from one environment to another. This can be done easily from within the Octopus UI
Or I can create a "Promote" using the Octopus plugin in TFS and use that to orchestrate the promotion of code through the environments.
Octopus Terminology.
Create release - This pulls together the artifacts and release number in Octopus to create an immutable thing which will be deployed to one or more environments.
Deploy release - The act of pushing your code to a specific environment.
Promote release - Once the code has been deployed in a single environment, it can then be promoted into other environments.
If you have a specific sequence of environments, then you can use the "Lifecycles" feature of Octopus to enforce that workflow, but that's a topic for another day!
EDIT1 Response
I don't think the edit changes my answer: you can re-deploy the same release as many times as you like. What you cannot do is create a new release with the same version number. You might want to decouple these steps. Could you add some more detail about what changes in the XAML build? You can change variables in a release; you can update them in Octopus and then redeploy.
EDIT 2 Response
That makes things clearer. I think you need to take the hit and migrate the parameters to Octopus. Its variable management is much better than that of XAML builds, and although build vNext is comparable to Octopus it makes more sense to have the config in Octopus. As XAML builds are on their way out, it makes sense to move this stuff now. Whilst it might be a lot of work, at the end you'll have a much smoother workflow and you can really take advantage of the power of Octopus.
On the test results point, I agree this is better suited to build vNext, so at this point you will be using build vNext as your orchestrator and Octopus Deploy as your release management tool.
The process would look something like this:
Build vNext
Compile code.
Run Unit tests
Run Octopack
Publish packages
Call Octopus and Create release
Call Octopus to Deploy to "Dev"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Test"
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests
Call Octopus to Promote release to "Production" (Perhaps with a manual innervation)
Run Smoke tests
Run Integration Tests
Call "Selenium" to run Run UI tests