ADF deployment without changing trigger status - azure-data-factory

I am doing CI/CD on Azure Data Factory.
I have a DEV instance and a PROD instance of Azure Data Factory.
The deployment process is going smoothly except for one problem with the triggers.
I have around 20 triggers in PROD, of which 15 are in a running state and 5 have been stopped for a while.
Since DEV is a development instance and I do not want any of the triggers to run on a schedule there, the status of all triggers in DEV is set to Stopped.
Currently the trigger status is changed during the DEV-to-PROD deployment by replacing the string '"runtimeState": "Stopped"' with '"runtimeState": "Started"' in the ARM template JSON file.
But this starts all of the triggers in production after deployment, including the 5 triggers that are supposed to stay stopped.
Is there any way to leave the trigger status untouched during the PROD deployment, and only add newly created triggers to PROD without touching the existing triggers in PROD?

You can add an Azure PowerShell task to do this.
You can write a PowerShell script to start/stop triggers and place it at the relevant point in the pipeline.
In the case below, all triggers are stopped before deployment; you can add a pre- or post-deployment step via PowerShell to achieve this, as shown in the sketch that follows.
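A minimal sketch of such a script, assuming the Az.DataFactory module is available to the Azure PowerShell task; the resource names are placeholders. The pre-deployment step records which triggers are currently running and stops them, and the post-deployment step restarts only that recorded set, so the 5 triggers that are deliberately stopped in PROD keep their state:

# Placeholder resource names -- substitute your own.
$rg  = "rg-adf-prod"
$adf = "adf-prod"
$stateFile = "$env:AGENT_TEMPDIRECTORY/started-triggers.txt"

# Pre-deployment: record which triggers are running, then stop them.
$running = Get-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf |
    Where-Object { $_.RuntimeState -eq "Started" }
$running | ForEach-Object {
    Stop-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf -Name $_.Name -Force
}
$running.Name | Out-File $stateFile

# Post-deployment (a separate task): restart only the triggers recorded above.
Get-Content $stateFile | ForEach-Object {
    Start-AzDataFactoryV2Trigger -ResourceGroupName $rg -DataFactoryName $adf -Name $_ -Force
}

Newly deployed triggers are not in the recorded set, so they simply keep whatever runtimeState the ARM template gives them and can be started once in PROD.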

Related

Custom Release Pipeline Task with UI Control for Azure DevOps

I am looking for a way to add a simple custom task with a button to the release pipelines of Azure DevOps (not the new YAML-based pipelines).
What I basically need to do is pause the deployment and provide a button for a user to click that will bring them to a separate HTML page I am hosting in an S3 bucket. The button link would be dynamic (set at deployment time).
I create a small json file with Cloud Formation stack details and save it during the first agent job. From there I set a pipeline release variable.
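# Read the results URL from the generated stack-info file (requires jq),
# then publish it to later tasks as the release variable 'deploymentResultsUrl':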
resultsUrl=$(jq -r .resultsUrl deploy-$(Release.ReleaseId)-$(Release.EnvironmentName)-stackInfo.json)
echo "##vso[task.setvariable variable=deploymentResultsUrl;]$resultsUrl"
Right now I have the Manual Intervention task in place, but since that is an agentless job, it does not pick up the variables I set in the previous agent job. The variable would need to be set at the time the release is triggered, but I won't know it then.
I know we can extend the UI for the new YAML-based pipelines (I have already authored a few in-house extensions). I need this to be in the "classic" release pipelines; that is where all (hundreds) of our deployment definitions live.

Cannot run ASP.NET Core Web API on Azure DevOps deployment group (self-hosted)

I'm working on a simple deployment pipeline with Azure DevOps. I created a deployment pipeline running on a self-hosted Ubuntu deployment group.
The pipeline looks like this:
Download artifacts from CI pipeline (created with dotnet publish)
Stop running deployment
Unzip the ASP.NET Core Web API to the deployment directory
Run new deployment with dotnet MyApp.dll
The first two steps work as expected. However, when the dotnet MyApp.dll command is run, the process runs for 10 seconds with the following "error" message printed at the end:
The STDIO streams did not close within 10 seconds of the exit event from process '/usr/bin/bash'. This may indicate a child process inherited the STDIO streams and has not yet exited.
The deployment task is successful despite the message, and the app is not running. I tried to work around this by using nohup ... & and redirecting the command output. After some research I found that all processes started by a pipeline agent are stopped after the agent's work is done, meaning this behaviour is intended and my understanding of Azure deployments/agents was wrong.
How do I deploy and run my app in an automated way on my own ubuntu machine using azure devops pipelines?
You are already on the right track.
All the processes launched in the pipeline are finished and cleaned up in the "Finalize Job" step when the pipeline is over.
If you don't want the process to be closed, try setting the variable Process.Clean = false, which stops the "Finalize Job" step from killing all processes.
But on the next pipeline run you then need to stop the running app yourself before starting it again; a sketch of such a step follows.
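A minimal sketch of what that deployment step could look like, assuming PowerShell 7 on the Ubuntu agent; the MyApp.dll name, the process match, and the log file paths are placeholders, and the variable is set here with a logging command although it can equally be defined on the pipeline:

# Keep the agent's "Finalize Job" step from killing processes started here.
Write-Host "##vso[task.setvariable variable=Process.Clean]false"

# Stop a previously deployed instance, if one is still running
# (the CommandLine property is available on PowerShell 7 process objects).
Get-Process -Name "dotnet" -ErrorAction SilentlyContinue |
    Where-Object { $_.CommandLine -like "*MyApp.dll*" } |
    Stop-Process -Force

# Launch the app detached, redirecting output so the STDIO streams close cleanly.
Start-Process -FilePath "dotnet" -ArgumentList "MyApp.dll" `
    -RedirectStandardOutput "app.log" -RedirectStandardError "app.err"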

Continuous monitoring Azure DevOps Release

I am trying to create a setup where I deploy a "Web App for Containers", but I want to build in some checks via Azure Monitor. My idea is to deploy the web app and then have a gate that checks an Azure Monitor alert set on availability. When the availability check fails, it should roll back.
The documentation states "When the release pipeline detects an Application Insights alert, the pipeline can gate or roll back the deployment until the alert is resolved", but I don't know how to configure this in Azure DevOps.
I have an App Service plan and a web app running. I also created an Application Insights instance and enabled continuous monitoring through the "Azure App Service Manage" task.
The alert I created is:
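# Metric alert on the Application Insights resource: fires when average
# availability drops below 90%.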
az monitor metrics alert create -n 'Availability' -g ${RG_NAME} --scopes "${APP_INSIGHTS_PROD}" \
--condition 'avg availabilityResults/availabilityPercentage < 90' \
--description "created from Azure DevOps"
As a post-deployment condition I enabled gates and configured them to check for the Availability alert, which works. When I adjust something to make the app fail on purpose, the gate works and eventually fails the stage.
I also enabled auto-redeploy to deploy the last successful deployment, but that does not do anything because the actual deployment task finished successfully; it is just the gate that fails and fails the stage.
I originally built the release via the UI; to make this work I had to recreate the pipeline in YAML. With deployment jobs I could use environments, and in the Azure DevOps UI you can configure an environment to add a check. Query Azure Monitor Alerts is one of the available checks for an environment.
The check is done at the beginning of a job, so I created a separate deployment job that refers to the environment with the Query Azure Monitor Alerts check.
After that I created another job with a dependsOn and a condition so that it only runs if the alert-check job failed. That job swaps the slots back, along the lines of the sketch below.
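A hedged sketch of the swap that rollback job could run, assuming the last good build is still sitting in a "staging" slot; the resource names are placeholders (Az.Websites module):

# Swap the staging slot (last good build) back into production.
Switch-AzWebAppSlot -ResourceGroupName "rg-webapp-prod" `
    -Name "my-webapp" `
    -SourceSlotName "staging" `
    -DestinationSlotName "production"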

I'm having issues with DevOps production deployment - Unable to edit or replace deployment

Up until yesterday morning I was able to deploy Data Factory v2 changes in my release pipeline. Then last night during deployment I received an error that the connection was forcibly closed. Now when I try to deploy to the production environment, I get this error: "Unable to edit or replace deployment 'ArmTemplate_18': previous deployment from '12/10/2019 10:19:27 PM' is still active (expiration time is '12/17/2019 10:19:23 PM')". Am I supposed to wait a week for this error to clear itself?
This message indicates that there's another deployment going on, with the same name, in the same ARM resource group. In order to perform your new deployment, you'll need to either:
Wait for the existing deployment to complete
Stop the in-progress / active deployment
You can stop an active deployment by using the Stop-AzureRmResourceGroupDeployment PowerShell command or the azure group deployment stop command in the xPlat CLI tool.
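For instance, a minimal sketch using the legacy AzureRM module named above; the resource group name is a placeholder, and the deployment name is taken from the error message:

# Cancel the stuck deployment so a new one can be submitted.
Stop-AzureRmResourceGroupDeployment -ResourceGroupName "my-resource-group" `
    -Name "ArmTemplate_18"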
Alternatively, you can open the target resource group in the Azure portal, go to the Deployments tab, find the deployment that has not completed, cancel it, and start a new deployment.
In addition, there was a recent availability degradation event in Azure DevOps, which could also have had an impact. The engineers have since mitigated this event.

Azure DevOps - Auto-redeploy trigger when release fails at certain stage

I want to roll back to the previous successful deployment in case any stage fails in a release/deployment. For that I am trying to use the "Auto-redeploy trigger" under "Post-deployment conditions" in a release definition in Azure DevOps.
However, every time I have a failed deployment, no redeployment is triggered. Am I missing any additional configuration? Or how can I achieve this in another simple, feasible way?
Here is the release definition history. (I am sure that the branch is the same for all definitions.) All releases are triggered via CI/CD.
You can view the Deployments section in the release pipeline to check whether the redeployment is triggered as expected.
Which job do you use in your stage task, an agent job or a deployment group job? In my test, when I ran the task in an agent job, the auto-redeploy trigger was not triggered as expected, but when I ran the task in a deployment group job, it worked as expected. So that may be the reason for your problem; you can check it on your side.