How to know which machines were skipped during a DevOps Release using Azure Pipeline Agents for a Deployment Group

I'm using Azure Pipeline agents on several machines, those machines belong to a deployment group, and I have a DevOps release that does some work on each machine. If the Azure Pipeline agent isn't running on a machine at release time, the release skips over that machine (see the image below). How can I know which machines were skipped?
[Image: deployment log showing a machine being skipped because its agent was offline]

How can I know which machines were skipped?
The easiest way is to manually check the detailed deployment log, where each skipped job is listed along with its agent name.
On the other hand, you could also use the REST API Releases - Get Release. In the API response, you can check the job status and the agent name.
Here is a sample request:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0
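For instance, here is a rough PowerShell sketch that walks the response and prints each deployment job's agent name and status; the organization, project, release ID, and PAT are placeholders, and the property path follows the Releases - Get Release response shape:

# Placeholders: supply your own org, project, release ID, and a PAT with Release (read) scope.
$org = "your-org"; $project = "your-project"; $releaseId = 123
$pat = "your-pat"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$release = Invoke-RestMethod -Headers $headers -Uri `
    "https://vsrm.dev.azure.com/$org/$project/_apis/release/releases/${releaseId}?api-version=6.0"

# Walk environments -> deployment attempts -> phases -> jobs; a machine whose
# agent was offline shows up here with a skipped job status.
foreach ($environment in $release.environments) {
    foreach ($attempt in $environment.deploySteps) {
        foreach ($phase in $attempt.releaseDeployPhases) {
            foreach ($job in $phase.deploymentJobs) {
                "{0}: agent '{1}' -> {2}" -f $environment.name, $job.job.agentName, $job.job.status
            }
        }
    }
}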

Related

Azure DevOps Server - multistage YAML pipeline - release couldn't start

On my Azure DevOps Server instance (2020 Update 1.1) I have a simple multistage YAML pipeline with a build job (run against BuildPool) and a release job (run against ReleasePool). The build job executes successfully. There are many idle agents in the release pool, but the job sits in the waiting state with this message:
The agent request is not running because all potential agents are running other requests. Current position in queue: 1
No agents in pool ReleasePool are currently able to service this request.
Other pipelines on the server that target ReleasePool run fine.
This pipeline also ran successfully a month ago, and the YAML definition has not changed since then.
The pipeline has no explicit demands. I'm trying to identify implicit demands from the tasks it uses (I have checked the task.json manifest of each task), but none of the tasks declare demands.
I have no idea what to try next.
Is there a way to diagnose how agents are assigned to pipeline jobs? I have admin permissions and access to the database, and I'm ready to do a very deep analysis.
Since other pipelines are running fine, maybe specific demands on the selected agents are not met, and this is preventing an agent from being found.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops-2020&tabs=yaml
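To check the demand side yourself, you can list the pool's agents together with their capabilities through the REST API and compare them against the job's demands. A minimal PowerShell sketch; the server URL, pool ID, and PAT below are placeholders:

# List each agent's status and capabilities for a pool so they can be
# compared with the job's demands.
$baseUrl = "https://your-server/DefaultCollection"   # or https://dev.azure.com/your-org
$poolId  = 1
$pat     = "your-pat"
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$pat")) }

$agents = Invoke-RestMethod -Headers $headers -Uri `
    "$baseUrl/_apis/distributedtask/pools/$poolId/agents?includeCapabilities=true&api-version=6.0"

foreach ($agent in $agents.value) {
    "{0}: status={1}, enabled={2}" -f $agent.name, $agent.status, $agent.enabled
    # System capabilities are what implicit demands are matched against.
    $agent.systemCapabilities.PSObject.Properties |
        ForEach-Object { "    {0} = {1}" -f $_.Name, $_.Value }
}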
Or else, it's a bug.
I found a similar issue here, and Microsoft responded in July 2022 with:
"We have released a fix for this issue"
https://developercommunity.visualstudio.com/t/no-agent-found-in-pool-azure-pipelines/870000
It is, however, not clear to me whether this applies only to Azure DevOps Services or also to Azure DevOps Server.
But since you're on 2020 Update 1.1, updating couldn't hurt:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020u1?view=azure-devops-2020

Run Azure DevOps stage if no agent available

End goal is to provision a build agent if one isn't already available.
Reason:
I'm currently provisioning build agents dynamically for each pipeline run. However, provisioning and configuring a Windows build agent on EC2 (registering it in the ADO pool) takes about 5 minutes. Instead, I'd like to leave a few build agents running during normal business hours, but still allow dynamic provisioning off-hours.
Is there a way to set up a condition that checks pool/agent status? Or would my best bet be a stage that first runs some API calls to get the status of the agents and sets a variable based on that, then checks that variable later? Is anyone doing something similar?
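For what it's worth, a PowerShell sketch of that second approach: an early script step queries the pool for an idle, online agent and publishes the result as an output variable that later stages can use in a condition. The pool ID is a placeholder, and the PAT (read from ADO_PAT) would need Agent Pools read scope:

# Check the pool for an agent that is online, enabled, and not busy.
$org    = "your-org"
$poolId = 10
$headers = @{ Authorization = "Basic " + [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes(":$($env:ADO_PAT)")) }

$agents = (Invoke-RestMethod -Headers $headers -Uri `
    "https://dev.azure.com/$org/_apis/distributedtask/pools/$poolId/agents?includeAssignedRequest=true&api-version=6.0").value

# Available = online, enabled, and not currently serving a request.
$available = @($agents | Where-Object { $_.status -eq 'online' -and $_.enabled -and -not $_.assignedRequest }).Count -gt 0

# Later stages can read this output variable in a condition and skip the provisioning stage.
Write-Host "##vso[task.setvariable variable=agentAvailable;isOutput=true]$available"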

Run Kubectl DevOps task with run-time specified resource details

We're building out a release pipeline in Azure DevOps which pushes to a Kubernetes cluster. The first step in the pipeline is to run an Azure CLI script which sets up all the resources - this is an idempotent script so we can run it each time we run the release pipeline. Our intention is to have a standardised release pipeline which we can run against several clusters, existing and new.
The final step in the pipeline is to run the Kubectl task with the apply command.
However, this pipeline task requires the names of the resource group and cluster to be specified in advance (at the time the pipeline is built). But the point of the idempotent script in the first step is to check that those resources exist and to create them if they don't.
So there's the possibility that neither the resource group nor the cluster will exist before the pipeline is run.
How can I achieve this in a DevOps pipeline if the Kubectl task requires a resource group and a cluster to be specified at design time?
The Kubectl task works with the Azure Resource Manager service connection type, and it requires you to fill in the Resource group and Kubernetes cluster fields after you select the Azure subscription.
After testing, we found that these two fields support variables. So you can use variables in both fields and set their values with a PowerShell task that runs before the Kubectl task. See Set variables in scripts for details.
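For example, the PowerShell task could look like the sketch below; the naming convention derived from an EnvironmentName variable is purely illustrative. The Kubectl task's two fields would then contain $(resourceGroup) and $(clusterName):

# Set the variables the Kubectl task's Resource group / Kubernetes cluster fields reference.
$envName       = "$(EnvironmentName)"   # macro syntax; the agent substitutes this before the script runs
$resourceGroup = "rg-$envName"          # illustrative naming convention
$clusterName   = "aks-$envName"

# Logging commands make the values available to the tasks that follow in this job.
Write-Host "##vso[task.setvariable variable=resourceGroup]$resourceGroup"
Write-Host "##vso[task.setvariable variable=clusterName]$clusterName"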

Command into the VM that runs a Azure DevOps pipeline

I'm new to Azure DevOps Pipelines, and currently nothing works.
I am using Azure DevOps Services with a Microsoft-hosted agent. Can I somehow keep the VM that runs the Azure DevOps pipeline running? I want a faster way to test my azure-pipelines.yml file by accessing that VM from a terminal.
You cannot access Microsoft-hosted agents via a terminal. They are assigned for your build, and afterwards they go back to the pool to be used by someone else.
If you want access to the agents, you must host your own. You can create them on your own Azure VMs, for instance.
He is right: hosted agents are just containers that are disposed of when the pipeline is done. If you want to debug, like checking files or working out what's not working, you need a self-hosted agent. It can be on your own computer for debugging, and you use the hosted one during normal processing.
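For reference, registering a self-hosted agent on your own Windows machine is a short download-and-configure exercise. A sketch from PowerShell; the organization URL, PAT, pool name, and agent package version are all placeholders (take the current download link from the Agent Pools page in the web UI):

mkdir C:\agent; Set-Location C:\agent
Invoke-WebRequest -Uri "https://vstsagentpackage.azureedge.net/agent/3.236.1/vsts-agent-win-x64-3.236.1.zip" -OutFile agent.zip
Expand-Archive agent.zip -DestinationPath .

# Unattended configuration against your organization and pool.
.\config.cmd --unattended --url https://dev.azure.com/your-org `
    --auth pat --token $env:ADO_PAT --pool Default --agent $env:COMPUTERNAME

.\run.cmd   # interactive; pass --runAsService to config.cmd to install it as a Windows service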

What is the recommended release definition for starting and stopping an Azure VM?

I'd like to enhance the release definition so that I don't need to have a separate environment that only starts an Azure VM.
Take a scenario where we have Test, Beta, and Production environments. The client wants the application installed in Beta and Production on their local network. Internally, we want a Test environment to run E2E tests against, to let non-technical folks exercise the app without needing VPN access to the customer's beta environment, etc.
So here is each environment, followed by where its agent runs:
Test - Azure VM
Beta - Client machine
Production - Client machine
How we've solved this is to install the VSTS Agent on a machine at the client, which allows us to target that agent queue in the Beta and Production environments defined for that release. Then we typically build an Azure VM and target that agent queue for the Test environment.
We don't want to run that Azure VM 24/7/365. However if it's not running, then it can't respond to requests from Release Management.
What I've done is create environments named Start Test VM and Stop Test VM that use the Azure Resource Group Deployment task to start and stop the VM. Those two additional environments can have their agent queue set to Hosted.
I'd like to figure out how to combine the first 3 environments into a logical Test instead of having to create 3 release management environments.
Start Test VM - Hosted
Test - Azure VM
Stop Test VM - Hosted
Beta - Client machine
Production - Client machine
The problem is that this can be rather ugly and confusing when handing it over to one of our PMs, or even to myself when I circle back around three months later and think, "What the hell is this environment? Oh, it's just there to start/stop the VM."
Options:
Stay with the status quo: keep it like it is; it can't be fixed.
We could open up a port on the Azure VM and use PowerShell remoting, then run on the hosted agent or on an on-premises agent to start the VM, deploy the application, and stop the VM. We really dislike this because the deployment would not be the same as the client's on-premises deployment; we'd like each environment's tasks to be the same, just with different variables.
You can use the Azure PowerShell and Azure SQL Database Deployment tasks to configure your Azure VM and SQL, or call another script to run on the Azure VM.
There isn't any way to set the agent per task. You can submit a feature request for this on VSTS User Voice.
Another way to reduce the number of environments: if you deploy every build linked to the release, you can add a "Start Test VM" task to your build definition to start the VM when the build succeeds, and add a "Stop Test VM" task to the "Beta" environment.
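The start/stop itself is a one-liner each way from an Azure PowerShell task. A sketch using the current Az module, with placeholder names (the AzureRM cmdlets of that era had matching Start-AzureRmVM/Stop-AzureRmVM equivalents):

# Resource group and VM names are placeholders; requires the Az.Compute module.
$rg = "rg-test"
$vm = "vm-test-agent"

# "Start Test VM" environment:
Start-AzVM -ResourceGroupName $rg -Name $vm

# "Stop Test VM" environment (deallocates the VM, so compute charges stop):
Stop-AzVM -ResourceGroupName $rg -Name $vm -Force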
What we've currently settled on is to continue having an 'environment' that isn't really what I would consider an environment, but more of a stage in the release pipeline that starts and/or shuts down a VM. We run it on a hosted agent so it can start the VM, and we make sure to check Skip artifacts download on that environment.
For a continuous integration build, we set a chain so the VM gets started, CI environment gets kicked off and then VM gets stopped. The remaining environments are then manually deployed up the chain as desired.
So here's an example:
Start CI VM
CI
Stop CI VM
Beta
Production
And here's how it looks in Release Management as of 2016-06-27: the five environments chained in exactly that order.
I put quotes around 'environment' because I think I agree with this User Voice request: it's really more of a stage in the release pipeline. Much like database development, the logical and the physical don't necessarily map one-to-one.