Azure Pipelines AzureWebApp@1 Task Start and Stop App Service

I am deploying my web app via Azure Pipelines to our Azure App Service with the following YAML script:
- deployment: Api
  displayName: Deploy Web Api
  pool:
    name: 'MyApi-FMMR'
  environment: 'Prod'
  strategy:
    runOnce:
      deploy:
        steps:
        - task: AzureWebApp@1
          inputs:
            azureSubscription: 'AzureHFMG'
            appType: webApp
            appName: 'cp-admin-api-prod'
            package: '$(Pipeline.Workspace)/drop/*.zip'
But I am wondering whether it is necessary to stop the web app before using this task to deploy a new version.
In the old classic pipelines I always observed something like this: two tasks, "Stop" and "Start", added before and after the deploy task. But if I try this out, it works even without those.
Is it a best practice to add those tasks? Or are they implicitly called by the "AzureWebApp@1" task?

You don't necessarily need those tasks; the deploy task will automatically restart the service.
However, this has downsides:
- there will be a short amount of downtime
- some services suffer from being stopped and restarted abruptly, for example not being able to respond until they have warmed up
These can be mitigated by deploying the code to a staging slot first, then swapping the slots so the new deployment takes over seamlessly.
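For illustration, a minimal sketch of the slot approach, reusing the inputs from the question; the slot name 'staging' and the resource group are assumptions, and the slot must already exist:

# Deploy to a staging slot instead of directly to production
- task: AzureWebApp@1
  inputs:
    azureSubscription: 'AzureHFMG'
    appType: webApp
    appName: 'cp-admin-api-prod'
    deployToSlotOrASE: true
    resourceGroupName: 'my-resource-group'   # assumption
    slotName: 'staging'                      # assumption: a pre-created slot
    package: '$(Pipeline.Workspace)/drop/*.zip'

# Swap the warmed-up staging slot into production
- task: AzureAppServiceManage@0
  inputs:
    azureSubscription: 'AzureHFMG'
    action: 'Swap Slots'
    webAppName: 'cp-admin-api-prod'
    resourceGroupName: 'my-resource-group'   # assumption
    sourceSlot: 'staging'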

Related

AzDO ManualValidation step failing in YAML pipeline with no explanation of why

I'm converting to a full YAML AzDO pipeline and need to wait for manual validation for certain stages of my pipeline. I added the new ManualValidation task into a serverless job; however, it fails immediately with no details about why. I did add a Delay task in there as well (just as a sanity check to make sure my serverless job was actually running successfully), and it runs fine.
- job: waitForValidation
  displayName: Wait for external validation
  pool: Server
  timeoutInMinutes: 4320 # job times out in 3 days
  steps:
  - task: Delay@1
    inputs:
      delayForMinutes: '1'
  - task: ManualValidation@0
    timeoutInMinutes: 1440 # task times out in 1 day
    inputs:
      notifyUsers: |
        me@email.com
        you@email.com
      instructions: 'Please validate deployment can continue and resume'
      onTimeout: 'reject'
These are the docs I'm using:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-validation?view=azure-devops&tabs=yaml
I also dropped into the GitHub project just to make sure the task is still version 0 (it is).
Suggestions on why this might be failing and/or ways I can get some more details in the pipeline about WHY it failed?
Turns out we are actually using AzDO Server, not AzDO Services (thanks, Microsoft for naming them so similarly) and this task is not yet available in the Server version :(
For anyone also frustrated by this lack of functionality on-prem, here's the documentation on using deployment jobs and environments.
We were able to get most of the functionality we were looking for this way, though it does require setting up environments.
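For reference, a minimal sketch of that approach, assuming an environment named 'Prod' with an Approvals check configured in the UI (approvals live on the environment, not in the YAML):

# The pipeline pauses here until an approver signs off on the 'Prod' environment
- deployment: DeployToProd
  displayName: Deploy after approval
  environment: 'Prod'   # assumption: an environment with an Approvals check
  strategy:
    runOnce:
      deploy:
        steps:
        - script: echo Deploying after approval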

How to schedule stage deployments in Azure DevOps Pipelines?

With the classic Azure DevOps release pipeline our release flow was very easy to set up.
We had a build pipeline running many times during the day. On success it deployed to our development environment. Every night the latest successful deployment to dev was released to our test environment (running automated tests for hours), before it deployed to UAT. But often we also need to deploy to test during the day, if we have a new change which needs to go directly into test or UAT. The classic pipelines allowed us to skip a stage, or deploy if the previous was only partly successful.
1) Development - automatic
2) Test - nightly or manually
3) UAT - nightly or manually
4) Staging - manual approval
5) Production - manual approval
With the multi-stage pipelines the same flow seems very difficult to achieve, at least as a single deployment pipeline. The first part is fine: we can have our build trigger the development deployment. But how can we hold back the release to the test environment until 0:30am, while still retaining the ability to also release it manually? If I created a separate test-environment pipeline, it could work if it had no triggers, but a schedule.
Same with UAT: since we also need the flexibility to run UAT deployments manually, it would also need to go into its own pipeline. Releases to our staging and production environments we "gate" with manual approvals, which is fine.
While this could technically work, splitting the deployment pipeline into multiple pipelines makes it really difficult to manage "a release". Not to mention that creating a separate pipeline per stage kind of goes against the whole multi-stage pipeline principle.
But with this being so easy to set up in the classic pipelines, I cannot really imagine that other companies have not run into the same limitations. Is it just me who cannot see the light, or can this really not be done with multi-stage pipelines?
manually run UAT deployments
We could add Azure DevOps multi-stage pipeline approval strategies in the YAML build.
Steps:
Open the Environments tab and click New environment -> click Approvals and checks. My environment name is TEST.
Then use it in the YAML pipeline (just a sample):
trigger: none

pool:
  vmImage: 'ubuntu-latest'

stages:
- stage: A
  jobs:
  - deployment: 'MyDeployment'
    displayName: MyDeployment
    environment: 'TEST'
  - job: A1
    steps:
    - script: echo "##vso[task.setvariable variable=skipsubsequent;isOutput=true]false"
      name: printvar
- stage: B
  condition: and(succeeded(), ne(stageDependencies.A.A1.outputs['printvar.skipsubsequent'], 'true'))
  dependsOn: A
  jobs:
  - job: B1
    steps:
    - script: echo hello from Stage B
We could also configure a schedule trigger and use it in the multi-stage pipeline.
Note: the schedule trigger and the approval strategies are both used at the stage level.
For scheduled jobs: you can use something like this in your YAML:
(Copied from Microsoft documentation)
schedules:
- cron: string # cron syntax defining a schedule
  displayName: string # friendly name given to a specific schedule
  branches:
    include: [ string ] # which branches the schedule applies to
    exclude: [ string ] # which branches to exclude from the schedule
  always: boolean # whether to always run the pipeline or only if there have been source code changes since the last successful scheduled run. The default is false.
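As a concrete sketch matching the nightly 0:30 release from the question (the branch name is an assumption, and cron schedules run in UTC):

schedules:
- cron: '30 0 * * *'       # every day at 00:30 UTC
  displayName: Nightly test/UAT release
  branches:
    include: [ master ]    # assumption: your default branch
  always: false            # only run if there are new changes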
For manual jobs, you can use the Create Release button to create and deploy a release manually. Do note that sometimes this can conflict with the schedule. Also, to "hold back a release", put an approver on the release, and then defer the release when approving. Note that the deferral time is in UTC and defaults to tomorrow; you can change it to any time after now.

Azure DevOps Pipeline As Code - Validate ARM Template

I am exploring Azure Pipelines as code and would like to understand how to make use of "deploymentMode" for validating and deploying ARM templates for each Azure environment.
I already have release pipelines created in Azure DevOps via the visual designer for deployment tasks, with one main ARM template and multiple parameter JSON files corresponding to each environment in Azure. Each of those pipelines has two stages: one for validation of the ARM templates and a second for deployment.
I am now trying to convert those release pipelines to Azure Pipelines as code in YAML format, and would like to create one YAML file consolidating the deployment validation tasks (deploymentMode: 'Validation') for each environment first, followed by the actual deployment (deploymentMode: 'Incremental').
1) Is this the right strategy for carrying out Azure DevOps pipelines as code for a multi-environment release cycle?
2) Will the YAML have two stages (one for validation and another for deployment), with each stage having many tasks (one task per environment)?
3) Do I need to create each Azure environment first in the 'Environments' section under Pipelines and configure the virtual machines for managing the deployment of the various environments via the YAML file?
Thanks.
According to your requirements, you could configure virtual machines for each Azure environment under Azure Pipelines -> Environments, then reference the environments in the YAML code.
Here are the steps you could refer to.
Step 1: Configure a virtual machine for each Azure environment.
Note: If the virtual machines are under the same environment, you need to add tags to each virtual machine. Tags can be used to distinguish virtual machines in the same environment.
Step 2: Create the YAML file and add multiple stages (e.g. a validation stage and a deployment stage). Each stage can use the environments and contain multiple tasks.
Here is an example:
trigger:
- master

stages:
- stage: validation
  jobs:
  - deployment: validation
    displayName: validation ARM
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
- stage: deployment
  jobs:
  - deployment: deployment
    displayName: deploy
    environment:
      name: testmachine
      resourceType: VirtualMachine
      tags: tag
    strategy:
      runOnce:
        deploy:
          steps:
          - task: AzureResourceManagerTemplateDeployment@3
            ...
          - task:
            ...
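To make the deploymentMode difference concrete, here is a hedged sketch of the task inputs; the service connection, subscription, resource group, location, and file paths are placeholders, not values from the original pipelines:

# Validation stage: checks the template against the resource group without deploying
- task: AzureResourceManagerTemplateDeployment@3
  inputs:
    deploymentScope: 'Resource Group'
    azureResourceManagerConnection: 'MyServiceConnection'  # placeholder
    subscriptionId: '$(subscriptionId)'                    # placeholder
    resourceGroupName: 'my-rg-test'                        # placeholder
    location: 'West Europe'                                # placeholder
    csmFile: 'azuredeploy.json'                            # placeholder
    csmParametersFile: 'azuredeploy.test.parameters.json'  # placeholder
    deploymentMode: 'Validation'

# Deployment stage: the same inputs, but actually deploying:
#     deploymentMode: 'Incremental'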
Here are the docs about using multiple stages and virtual machines.
Hope this helps.

Checks (approvals) for a deployment job are blocking the entire stage

I have the following YML file for my pipeline:
trigger: none

stages:
# Other stages here...
- stage: Release
  jobs:
  - deployment: Staging
    environment: staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: DownloadBuildArtifacts@0
            # ...
          - task: AzureRmWebAppDeployment@4
            displayName: Deploy in staging
            # ...
  - deployment: Production
    environment: prod
    dependsOn: Staging
    strategy:
      runOnce:
        deploy:
          steps:
          - download: none
          - task: AzureAppServiceManage@0
            displayName: Swap stg-prod slots
            # ...
Based on this, to give more context, my thinking is to have two stages: the first one builds my application, the second one releases to staging (QA) and then to production.
The environment "prod", though, has a check (or approval, whatever you want to call it).
I'm not sure if I'm encountering a bug or not, but what is happening is that when stage 1 (the build phase) completes, the release phase of stage 2 is blocked and waiting for approval, even though "staging" does not have any check enabled (only prod does).
The easiest workaround is to create different stages, one for staging and one for production, but that does not match my expected behaviour. I expect the deployment for the "staging" job to complete successfully, and then the "production" job to wait for approval.
Do you have any suggestion regarding this? Is this a bug?
Sorry for any inconvenience.
This behavior is by design at the moment.
As the documentation states:
Approvals in multi-stage YAML pipelines
We continue to improve multi-stage YAML pipelines; we now let you add manual approvals to these pipelines. Infrastructure owners can protect their environments and seek manual approvals before a stage in any pipeline deploys to them.
This feature is designed around stages, not environments, so it blocks the whole stage.
In my test I could reproduce the issue just as you describe. Personally, though, I agree your request is reasonable; this feature should arguably be scoped to the environment.
You could add your request for this feature on our UserVoice site (https://developercommunity.visualstudio.com/content/idea/post.html?space=21), which is our main forum for product suggestions. Thank you for helping us build a better Azure DevOps.
Hope this helps.
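Until the behavior changes, a minimal sketch of the split-stage workaround mentioned in the question; the stage names are made up, and the approval check stays configured on the prod environment, now gating only its own stage:

- stage: ReleaseStaging
  jobs:
  - deployment: Staging
    environment: staging      # no checks, deploys immediately
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo deploy to staging
- stage: ReleaseProduction
  dependsOn: ReleaseStaging
  jobs:
  - deployment: Production
    environment: prod         # approval check pauses only this stage
    strategy:
      runOnce:
        deploy:
          steps:
          - script: echo swap staging and prod slots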

Azure Pipelines - Parallel steps (YAML)

I'm setting up my Azure DevOps pipelines and have a build that requires some fairly lengthy setup steps to run. These need to run before other tasks, which can then be run in parallel.
However, I can only see this being done by specifying jobs, which would require running these lengthy setup steps each time, i.e.:
jobs:
- job: Run1
  steps:
  - task: Long running setup task
  - task: Run taskA
- job: Run2
  steps:
  - task: Long running setup task
  - task: Run taskB
Is there a way to have this long-running setup task run once, and have tasks A/B depend on that environment without running them sequentially? Ideally it'd be something like:
- job:
  steps:
  - task: Long running setup
  - task: Parallel: taskA
  - task: Parallel: taskB
Or have the previous job take a container/image snapshot and reuse it, if that's possible?
Short answer: you can't.
Tasks within a job cannot run in parallel, since they run on the same agent, and environments can't be "snapshotted" by Azure DevOps for reuse by other jobs later. Jobs, however, can run in parallel, because they can be scheduled on different agents; the setup will then run twice, but in parallel. So there is a trade-off between time and resource usage which you will need to decide on based on your requirements.
There is another solution though, based on how much you would like to invest in this:
If your "setup" doesn't change that often and you are willing to host your own agents. Then you could run a separate "setup + agent" build which creates a docker image of your agents, pushes it out to your azure container registry and then deploys this image to your self-hosted agents (Azure Kubernetes Service) cluster. Then Task A and Task B can easily run in parallel with the assumption that the environment they are running in (agent + setup docker image) is always ready. This is exactly my setup.
See : Azure DevOps Docker
An update to this: a Docker image, as suggested by @dparker, is probably the better way, but it was a bit OTT for me. Instead I used a pipeline artifact to cache some build/setup files, and then made each other job dependent on this setup job.
This obviously doesn't sound great, but it works fine and gets the performance optimizations I was after.
E.g. Job1 would include this:
- task: PublishPipelineArtifact@0
  inputs:
    artifactName: 'Setup-Build'
    targetPath: '$(buildDir)'
And Job2 to X would download this as an artifact:
- task: DownloadPipelineArtifact@1
  displayName: 'Download Setup'
  inputs:
    targetPath: '$(buildDir)'
    artifactName: 'Setup-Build'
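To show how the jobs tie together, a minimal sketch; the job names, the setup script, and the $(buildDir) variable are assumptions carried over from the snippets above:

jobs:
- job: Setup
  steps:
  - script: ./long-running-setup.sh   # placeholder for the lengthy setup work
  - task: PublishPipelineArtifact@0
    inputs:
      artifactName: 'Setup-Build'
      targetPath: '$(buildDir)'
- job: RunA
  dependsOn: Setup   # RunA and RunB start after Setup and run in parallel
  steps:
  - task: DownloadPipelineArtifact@1
    inputs:
      artifactName: 'Setup-Build'
      targetPath: '$(buildDir)'
  - script: echo run task A
- job: RunB
  dependsOn: Setup
  steps:
  - task: DownloadPipelineArtifact@1
    inputs:
      artifactName: 'Setup-Build'
      targetPath: '$(buildDir)'
  - script: echo run task B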
Additionally, there is the option to use caching, but this didn't quite fit my scenario. I'd recommend you make a call between artifacts and caching:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops
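For completeness, a hedged sketch of the caching alternative with the Cache task; the cache key and the file it fingerprints are placeholders:

- task: Cache@2
  displayName: Cache setup output
  inputs:
    key: 'setup | "$(Agent.OS)" | setup-lockfile.txt'   # placeholder key; the fingerprint file is an assumption
    path: '$(buildDir)'                                 # restore/save the setup output here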