Issue when running build definition on a Microsoft-hosted agent? - azure-devops

When I execute the build definition using a Microsoft-hosted agent, the error below appears:
Error: The agent request is not running because all potential agents are running other requests. The current position in the queue: 1
The build has been queued for days and is stuck. Can someone please help me get out of this issue?
Scenarios I tried:
Reinstalling the self-hosted agent and reconfiguring it.
I am trying with the Microsoft-hosted pool "Azure Pipelines".

If your pipeline queues but never gets an agent, check the following items:
Parallel job limits - no available agents or you have hit your free limits
You don't have enough concurrency
Your job may be waiting for approval
All available agents are in use
Demands that don't match the capabilities of an agent
Check Azure DevOps status for a service degradation
Please check this document for some more details.
Note: Please also check whether the Microsoft-hosted pool `Azure Pipelines` is stuck on builds that you cancelled and deleted.
If the above doesn't help, please share more info about your definition so we can find the cause of this issue:
Agent pool info:
Execution plan info:
Parallel jobs info (And click the link View in-progress jobs):
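As a sanity check on the demands point above: a job that targets the Microsoft-hosted pool should only specify a `vmImage` and no demands. A minimal sketch (the image name is just one common choice):

```yaml
# Minimal job targeting the Microsoft-hosted "Azure Pipelines" pool.
pool:
  vmImage: 'windows-latest'   # hosted image; do not add demands here

steps:
- script: echo "Running on a Microsoft-hosted agent"
```

If your definition names a pool or adds demands beyond this, that is the first place to look for a mismatch.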

Related

Azure DevOps Server - multistage YAML pipeline - release couldn't start

On my Azure DevOps Server instance (2020 Update 1.1) I have a simple multistage YAML pipeline with a build job (run against BuildPool) and a release job (run against ReleasePool). The build job executes successfully. There are many idle agents in the release pool, but the job stays in a waiting state with the message:
The agent request is not running because all potential agents are running other requests. Current position in queue: 1
No agents in pool ReleasePool are currently able to service this request.
Other pipelines on the server that target ReleasePool execute fine.
This pipeline was also executed successfully a month ago, and the YAML definition has not changed since then.
The pipeline has no explicit demands. I'm trying to identify implicit demands (from the tasks used - I have checked the task.json manifest of each task), but none of the tasks declares any demands.
I have no idea what to try next.
Is there a way to diagnose how agents are assigned to pipeline jobs? I have admin permissions and access to the DB, and I'm ready to do a very deep analysis.
Since your other pipelines run fine, maybe specific demands on the selected agents are not met, and this is preventing an agent from being found.
https://learn.microsoft.com/en-us/azure/devops/pipelines/process/demands?view=azure-devops-2020&tabs=yaml
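For reference, explicit demands in YAML look roughly like this (the pool and capability names below are hypothetical); a job whose demands no agent in the pool satisfies will sit in the queue with exactly the message above:

```yaml
jobs:
- job: Release
  pool:
    name: ReleasePool
    demands:
    - Agent.OS -equals Windows_NT   # match a system capability exactly
    - MyCustomCapability            # require that this capability exists
  steps:
  - script: echo "Release steps here"
```

Comparing any such demands against the capabilities listed on the agents' pages in the pool is a quick way to spot the mismatch.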
Or else, it's a bug.
I found a similar issue here, and Microsoft responded July 2022 with:
"We have released a fix for this issue"
https://developercommunity.visualstudio.com/t/no-agent-found-in-pool-azure-pipelines/870000
It is, however, not clear to me whether this applies only to Azure DevOps Services or also to Azure DevOps Server.
But since you're on 2020 Update 1.1, updating can't hurt:
https://learn.microsoft.com/en-us/azure/devops/server/release-notes/azuredevops2020u1?view=azure-devops-2020

The agent request is not running because all potential agents are running other requests

We are running a pipeline that has run successfully for ages. It is failing as of yesterday. We are getting the error
The agent request is not running because all potential agents are running other requests.
The agent pool is offline
This Stack Overflow solution says I need to run run.cmd, but I am not running a self-hosted agent.
We also get this additional error
Failed to get scaleset from Azure with error: No service Endpoint found with Id: xxxx and Scope xxxx
How do I bring the agent back online, and is that the fix for the error that is preventing the publish/deploy? If not, how do we fix this issue?
There are three kinds of agent-pool icons in Azure DevOps:
Microsoft-hosted agent:
Self-hosted agent:
VMSS (Azure Virtual Machine Scale Set):
So the type you are using is a VMSS (Azure Virtual Machine Scale Set) pool. This type of agent pool is based on a service connection in one of your projects or subscriptions.
If the machines still exist in the VMSS, you can edit your VMSS pool settings on the Azure DevOps side and set up another service connection (make sure the related app registration on the Azure portal side has the required permissions on the VMSS).
If there are no machine instances, or the previous instances have been deleted, the pool object on the DevOps side cannot be recovered. You can reconfigure it against a new VMSS, or delete the old pool and create a new one.

Deployment started failing - decryption operation failed

The deployment task of a pipeline had been working fine until yesterday. No changes that I'm aware of. This is for a build that uses a deployment agent on an on-prem target. Not sure where to look, other than possibly reinstalling the build agent via the script?
2021-08-11T18:36:20.7233450Z ##[warning]Failed to download task 'DownloadBuildArtifacts'. Error The decryption operation failed, see inner exception.
2021-08-11T18:36:20.7243097Z ##[warning]Inner Exception: {ex.InnerException.Message}
2021-08-11T18:36:20.7247393Z ##[warning]Back off 28.375 seconds before retry.
2021-08-11T18:36:51.6834124Z ##[error]The decryption operation failed, see inner exception.
Please check the announcement here.
The updated implementation of the BuildArtifact tasks requires an agent upgrade, which should happen automatically unless automatic upgrades have been specifically disabled or the firewalls are incorrectly configured.
If your agents run in firewalled environments that did not follow the linked instructions, they may see failures when updating the agent, or in the PublishBuildArtifacts or DownloadBuildArtifacts tasks, until the firewall configuration is corrected.
A common symptom of this problem is sudden errors relating to SSL handshakes or artifact download failures, generally on deployment pools targeted by Release Management definitions. Alternatively, if agent upgrades have been blocked, you might observe that releases wait for an agent in the pool that never arrives, or that agents go offline halfway through their update (the latter is related to environments that erroneously block the agent CDN).
To fix this, please update your self-hosted agents.

Why is my Azure DevOps Migration timing out after several hours?

I have a long running Migration (don't ask) being run by an AzureDevOps Release pipeline.
Specifically, it's an "Azure SQL Database deployment" activity, running a "SQL Script File" Deployment Type.
Despite having configured maximums in all the timeouts in the Invoke-Sql Additional Parameters settings, my migration is still timing out.
Specifically, I get:
We stopped hearing from agent Hosted Agent. Verify the agent machine is running and has a healthy network connection. Anything that terminates an agent process, starves it for CPU, or blocks its network access can cause this error.
So far it's timed out after:
6:13:15
6:13:18
6:14:41
6:10:19
So "after 6 and a bit hours". It's ~22,400 seconds, which doesn't seem like any obvious kind of number either :)
Why? And how do I fix it?
It turns out that Azure DevOps uses hosted agents to execute each task in a pipeline, and those agents have innate lifetimes, independent of whatever task they're running.
https://learn.microsoft.com/en-us/azure/devops/pipelines/troubleshooting/troubleshooting?view=azure-devops#job-time-out
A pipeline may run for a long time and then fail due to job time-out. Job timeout closely depends on the agent being used. Free Microsoft hosted agents have a max timeout of 60 minutes per job for a private repository and 360 minutes for a public repository. To increase the max timeout for a job, you can opt for any of the following.
Buy a Microsoft hosted agent which will give you 360 minutes for all jobs, irrespective of the repository used
Use a self-hosted agent to rule out any timeout issues due to the agent
Learn more about job timeout.
So I'm hitting the "360 minute" limit (presumably they give you a little extra on top, so that no-one complains?).
Solution is to use a self-hosted agent. (or make my Migration run in under 6 hours, of course)
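If you do switch to a self-hosted agent, it's also worth raising the job-level timeout explicitly; a minimal sketch (the pool name is hypothetical; on self-hosted agents `timeoutInMinutes: 0` means no limit, while on Microsoft-hosted agents the caps quoted above still apply):

```yaml
jobs:
- job: Migration
  pool:
    name: SelfHostedPool     # hypothetical self-hosted pool name
  timeoutInMinutes: 0        # 0 = unlimited on self-hosted agents
  steps:
  - script: echo "long-running migration goes here"
```

Without this, the job would still be cut off at the pipeline's default 60-minute job timeout even on a self-hosted agent.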

How to know which machines were skipped during DevOps Release using Azure Pipeline Agents for a Deployment Group

I'm using Azure Pipeline Agents on machines, have those machines in a Deployment Group, and have a DevOps release which does some things on each machine. If the Azure Pipeline Agent isn't running on a machine at release time, the release will skip over that machine. How can I know which machines were skipped?
How can I know which machines were skipped?
The easiest way is to manually check the detailed deployment log.
For example:
There you can find the name of the skipped agent.
On the other hand, you could also use the REST API: Releases - Get Release. In the API response, you can check the job status and the agent name.
Here is a sample:
GET https://vsrm.dev.azure.com/{organization}/{project}/_apis/release/releases/{releaseId}?api-version=6.0
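To automate the check, you can filter the response for jobs that did not run. A minimal sketch in Python; the sample payload below is hypothetical and shows only the nesting this snippet assumes (environments → deploySteps → releaseDeployPhases → deploymentJobs), so verify the field names against a real response from your organization:

```python
import json

# Hypothetical, heavily trimmed fragment of a Releases - Get Release
# response; a real payload is much larger and the exact field names
# should be confirmed against your own API output.
sample_response = json.loads("""
{
  "environments": [
    {
      "name": "Production",
      "deploySteps": [
        {
          "releaseDeployPhases": [
            {
              "deploymentJobs": [
                {"job": {"name": "MACHINE-01", "status": "succeeded"}},
                {"job": {"name": "MACHINE-02", "status": "skipped"}}
              ]
            }
          ]
        }
      ]
    }
  ]
}
""")

def skipped_agents(release):
    """Collect the names of deployment jobs whose status is 'skipped'."""
    names = []
    for env in release.get("environments", []):
        for step in env.get("deploySteps", []):
            for phase in step.get("releaseDeployPhases", []):
                for deployment in phase.get("deploymentJobs", []):
                    job = deployment.get("job") or {}
                    if job.get("status") == "skipped":
                        names.append(job.get("name"))
    return names

print(skipped_agents(sample_response))  # -> ['MACHINE-02']
```

In practice you would fetch the JSON from the GET URL above (with a PAT for authentication) and pass the decoded body to `skipped_agents`.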