I am looking for a way to add a simple custom task with a button to the release pipelines of Azure DevOps (not the new YAML-based pipelines).
What I basically need to do is pause the deployment and provide a button for a user to click that will bring them to a separate HTML page I am hosting in an S3 bucket. The button link would be dynamic (set at deployment time).
I create a small JSON file with CloudFormation stack details and save it during the first agent job. From there I set a release pipeline variable:
resultsUrl=$(jq -r .resultsUrl deploy-$(Release.ReleaseId)-$(Release.EnvironmentName)-stackInfo.json)
echo "##vso[task.setvariable variable=deploymentResultsUrl;]$resultsUrl"
Right now I have the Manual Intervention task in place, but since that runs as an agentless job, it does not pick up the variables I set in the previous agent job. The variable would need to be set at the time the release is triggered, but I won't know its value at that point.
I know we can extend the UI for the new YAML-based pipelines (I have already authored a few in-house extensions), but I need this to work in the "classic" release pipeline. That is where all (hundreds) of our deployment definitions live.
I am trying to find out the equivalent ADO YAML task for the classic "Task group: Docker Container Image Creation".
I tried the Docker and Docker Compose tasks, but neither has an argument to capture the environment the application package was built for.
Since the name of the task in the classic editor is "Task group: Docker Container Image Creation", what you see there is probably a task group:
A task group allows you to encapsulate a sequence of tasks, already defined in a build or a release pipeline, into a single reusable task that can be added to a build or release pipeline, just like any other task. You can choose to extract the parameters from the encapsulated tasks as configuration variables, and abstract the rest of the task information.
...
Task groups are a way to standardize and centrally manage deployment steps for all your applications. When you include a task group in your definitions, and then make a change centrally to the task group, the change is automatically reflected in all the definitions that use the task group. There is no need to change each one individually.
When a task group is created, the creator can define their own parameters and use them in one or more subtasks inside the task group.
To replicate this behavior in YAML pipelines, you need to examine the task group to understand what tasks it contains, and then define a reusable template in YAML, which lets you define reusable content, logic, and parameters.
Task groups are only available in classic pipelines (see here).
For YAML pipelines, you can set up a step template to reuse the same steps in different YAML pipelines, for example:
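A minimal sketch of such a step template, with hypothetical file, repository, and service connection names; the environment parameter stands in for the value the classic task group captured:

# templates/docker-image.yml (hypothetical file name)
parameters:
- name: environment
  type: string
  default: dev

steps:
- task: Docker@2
  displayName: Build and push image for ${{ parameters.environment }}
  inputs:
    containerRegistry: my-acr-connection   # hypothetical service connection
    repository: myapp-${{ parameters.environment }}
    command: buildAndPush
    dockerfile: '**/Dockerfile'
    tags: $(Build.BuildId)

The consuming pipeline then references it like any other step:

steps:
- template: templates/docker-image.yml
  parameters:
    environment: prod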
I am doing CI/CD on Azure Data Factory.
I have a DEV instance and a PROD instance of Azure Data Factory.
The deployment process is going smoothly, except for one problem with the triggers.
I have around 20 triggers, of which 15 are in a running state and 5 have been stopped for a while in PROD.
Since DEV ADF is a development instance and I do not want any of the triggers to run on a schedule there, the status of all triggers in DEV is set to stopped.
Currently the trigger status is changed during the DEV-to-PROD deployment by replacing the string '"runtimeState": "Stopped"' with '"runtimeState": "Started"' in the ARM template JSON file.
But this will start all of the triggers in production after deployment, including the 5 triggers that were deliberately stopped.
Is there any way to leave the trigger status untouched at deployment time in PROD, and only add newly created triggers without touching the existing triggers in PROD?
You can add an Azure PowerShell task to do this.
You can write a PowerShell script to start/stop triggers and place it in the relevant pipeline.
In the case below, I have stopped all triggers before deployment; you can add this as a pre- or post-deployment PowerShell step.
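A minimal sketch of such a script, assuming the Az.DataFactory module is available on the agent; the resource names are hypothetical. Note that in a real pipeline the pre- and post-deployment parts run as separate tasks, so the list of running triggers would need to be persisted between them (for example in a file or a pipeline variable):

# Pre-deployment: record which PROD triggers are currently running, then stop them
$rg = 'my-prod-rg'    # hypothetical resource group
$df = 'my-prod-adf'   # hypothetical data factory name

$running = Get-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $df |
    Where-Object { $_.RuntimeState -eq 'Started' }

foreach ($t in $running) {
    Stop-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $df -Name $t.Name -Force
}

# ... the ARM template deployment runs here ...

# Post-deployment: restart only the triggers that were running before,
# so the triggers that were deliberately stopped stay stopped
foreach ($t in $running) {
    Start-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $df -Name $t.Name -Force
}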
Hi!
I am using deployment slots to deploy my function app first to staging and then to production, and I have the corresponding deploy and swap tasks in my release pipeline. After the function is deployed to the staging slot, I want to hold the swap task until someone (a user) verifies the deployment.
So, how can we add user approval before the slot swap task?
Thank you
You can split the two tasks into two stages, and then set pre-deployment approvals for the stage where the slot swap task is located.
There is also a task called "Manual Intervention". You can use it to pause the pipeline and resume it when validation is complete.
So the steps would be:
deploy to staging
manual intervention -> validate and click on Resume button
swap slots
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/utility/manual-intervention?view=azure-devops
I'm having a hard time getting a grasp on this to be honest.
Right now my lab project is as follows:
PR to master -> Triggers Pre-Build Pipeline as condition to merge the code ->
On merge Infrastructure pipe runs only if any changes happen in my Infrastructure folder ->
On merge I want to run my deploy pipeline to deploy my web app to Azure.
The pipes in question do the things they ought to, i.e.
Pre-build builds, publishes an artifact, runs unit tests, and validates ARM templates.
Infra pipe deploys the necessary infrastructure for my web app, such as the resource group, app plan, app service, and key vault.
Deploy pipe downloads the artifact produced in pre-build, deploys it to a staging slot, and swaps it into the production slot.
What I can't seem to get to work is the pipeline chaining through dependencies: if changes happen to both infra and web app code in master, I want the infra pipe to run first and the deploy pipe only if it succeeds.
If I merge only app code I want only the deploy pipe to run regardless if the infra pipe ran or not.
If I merge only infra code I want only the infra pipe to run.
If I merge both app and infra code I want both infra and deploy pipe to run in specific order.
I feel this shouldn't be all that hard to accomplish, but I've spent way too much time trying to solve it to no avail. Anyone able to help? :)
Edit:
Hey, sorry @HughLin-MSFT, I've been trying to work around this a bit, since we're trying to avoid running scripts left and right. :)
I saw you have build queuing planned in an upcoming release, so for now I think we might have to wait for that.
If I were to merge my deploy and infra pipes, can I use:
trigger:
  branches:
    include:
    - master
  paths:
    include:
    - Infrastructure/*
At stage level and somehow skip a stage instead?
I've seen multiple articles mention "Continue if skipped", but I can't find any information on how to actually skip a stage.
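For reference, a stage can be skipped with a stage-level condition, and a later stage can still run after a skipped one by checking the dependency's result, which is what the "Continue if skipped" articles refer to. A minimal sketch (the runInfra variable is hypothetical):

stages:
- stage: Infra
  condition: eq(variables['runInfra'], 'true')   # stage is skipped unless runInfra is 'true'
  jobs:
  - job: DeployInfra
    steps:
    - script: echo "deploying infrastructure"
- stage: Deploy
  dependsOn: Infra
  condition: in(dependencies.Infra.result, 'Succeeded', 'Skipped')   # run even if Infra was skipped
  jobs:
  - job: DeployApp
    steps:
    - script: echo "deploying web app"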
For the first and second cases, you just need to set path filters in Triggers; the pipeline then only triggers when a file at the specified path is changed. Please refer to this.
For the third case, you can try adding two agent jobs to the infra pipe: add the Trigger Azure DevOps Pipeline task to the second agent job to trigger the deploy pipe, and set "Only when all previous jobs have succeeded" in the "Run this job" drop-down for job 2. In addition, add a PowerShell task before the Trigger Azure DevOps Pipeline task that uses a script to detect whether there is app code; run job 2 if there is, and cancel job 2 if not.
Update:
First, create a new pipeline and create a variable: changedcode.
Use the Builds - Get REST API to get the commit, then get the changed code folder with the Commits - Get Changes REST API.
Assign the changed code folder name as the value of the changedcode variable.
Set custom conditions for the agent jobs: in the Infra job's condition, check whether the changedcode value contains Infra; inside the Infra job, use the Builds - Queue REST API or the Trigger Azure DevOps Pipeline task to trigger the Infra pipeline. The same goes for the Deploy job; the only difference is the custom condition expression.
Here is a sample structure in YAML:
jobs:
- job: GetChanges
  steps:
  - powershell: |
      # Get the changed code folder with rest api, then publish it, e.g.:
      # Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]<folders>"
    name: setvar
- job: Infra
  dependsOn: GetChanges
  condition: contains(dependencies.GetChanges.outputs['setvar.changedcode'], 'Infra')
  steps:
  - powershell: |
      # queue the Infra pipeline with the rest api or the Trigger Azure DevOps Pipeline task
- job: Deploy
  dependsOn:
  - GetChanges
  - Infra
  condition: and(contains(dependencies.GetChanges.outputs['setvar.changedcode'], 'deploy'), in(dependencies.Infra.result, 'Succeeded', 'Skipped'))
  steps:
  - powershell: |
      # queue the Deploy pipeline with the rest api or the Trigger Azure DevOps Pipeline task
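A hedged sketch of what the script in the first job could look like, assuming "Allow scripts to access the OAuth token" is enabled so that $(System.AccessToken) can be mapped into the SYSTEM_ACCESSTOKEN environment variable; it calls the Builds - Get and Commits - Get Changes APIs mentioned above:

$headers = @{ Authorization = "Bearer $env:SYSTEM_ACCESSTOKEN" }
$base    = "$(System.CollectionUri)$(System.TeamProject)"

# Builds - Get: look up the commit this build was queued for
$build  = Invoke-RestMethod -Headers $headers -Uri "$base/_apis/build/builds/$(Build.BuildId)?api-version=5.1"
$commit = $build.sourceVersion

# Commits - Get Changes: list the paths changed by that commit
$changes = Invoke-RestMethod -Headers $headers -Uri "$base/_apis/git/repositories/$(Build.Repository.ID)/commits/$commit/changes?api-version=5.1"

# Reduce the changed paths to their top-level folders and publish them as the output variable
$folders = $changes.changes | ForEach-Object { ($_.item.path -split '/')[1] } | Select-Object -Unique
Write-Host "##vso[task.setvariable variable=changedcode;isOutput=true]$($folders -join ',')"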
I have a pipeline defined with two stages:
one building a combined Helm chart from two separate artifacts
two deploying this chart to a cluster
Both stages first load a secret and then run a bash script to do the work.
My challenge now is to pass the Helm chart name and version from the Build stage to the Deploy stage, so that the second stage can fetch the right chart.
How can I achieve this?
Trial 1: Using ##vso[task.setvariable from the script - did not work:
echo "##vso[task.setvariable variable=HELM_CHART_NAME]$HELM_CHART_NAME"
echo "##vso[task.setvariable variable=HELM_CHART_VERSION]$HELM_CHART_VERSION"
Using ##vso[task.setvariable - Did not work from the script
This is caused by the different stages using different agents: a dynamic variable set with ##vso[task.setvariable is only agent-scoped. Its life cycle matches the agent job, so it disappears once that agent job finishes.
To pass a variable from the Build stage to the Deploy stage, you'd better store these variables in some shared storage, such as Azure Key Vault, via the Azure Key Vault actions task or the Write Secrets to Key Vault task. For example:
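A minimal sketch of the Key Vault route, shown as YAML steps (the same two tasks can be added in the classic editor); the vault and service connection names are hypothetical:

# Build stage: push the chart coordinates into Key Vault with the Azure CLI
- task: AzureCLI@2
  inputs:
    azureSubscription: my-service-connection   # hypothetical service connection
    scriptType: bash
    scriptLocation: inlineScript
    inlineScript: |
      az keyvault secret set --vault-name my-vault --name HELM-CHART-NAME --value "$HELM_CHART_NAME"
      az keyvault secret set --vault-name my-vault --name HELM-CHART-VERSION --value "$HELM_CHART_VERSION"

# Deploy stage: read the secrets back; they become available as pipeline variables
- task: AzureKeyVault@1
  inputs:
    azureSubscription: my-service-connection
    KeyVaultName: my-vault
    SecretsFilter: 'HELM-CHART-NAME,HELM-CHART-VERSION'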
Another way is to use the REST API to add it as a release variable with a PowerShell script:
PUT https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=5.0
Then, in the next stage, it can be accessed like any other release variable.
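A minimal PowerShell sketch of that approach, assuming the agent job can access the OAuth token (mapped into SYSTEM_ACCESSTOKEN) and leaving {organization} as a placeholder; it reads the current release, upserts the variable, and writes the release back:

$headers = @{
    Authorization  = "Bearer $env:SYSTEM_ACCESSTOKEN"
    'Content-Type' = 'application/json'
}
$url = "https://vsrm.dev.azure.com/{organization}/$(System.TeamProject)/_apis/release/releases/$(Release.ReleaseId)?api-version=5.0"

# Read the current release
$release = Invoke-RestMethod -Uri $url -Headers $headers -Method Get

# Add (or overwrite) a release-scoped variable holding the chart version
$release.variables | Add-Member -NotePropertyName HELM_CHART_VERSION `
    -NotePropertyValue @{ value = $env:HELM_CHART_VERSION } -Force

# Write the modified release back; later stages can then read $(HELM_CHART_VERSION)
Invoke-RestMethod -Uri $url -Headers $headers -Method Put -Body ($release | ConvertTo-Json -Depth 100)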