I am running a pipeline on a macos-12 hosted agent that has a NodeTool@0 task:
- task: NodeTool@0
  inputs:
    versionSource: 'spec'
    versionSpec: '16.x'
    checkLatest: false
This link: https://github.com/actions/runner-images/blob/main/images/macos/macos-12-Readme.md
mentions that Node.js 16.8.1 is cached on the macos-12 hosted agents.
However, I often (but not always) get the following error when running my pipeline:
##[error]Unable to create directory '/opt/hostedtoolcache/node/16.18.1/x64'. EACCES: permission denied, mkdir '/opt/hostedtoolcache'
Why? And what can I do to prevent this error?
Thank you
Kiko
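One workaround that is often suggested for this EACCES error (a sketch only, not verified against this exact pipeline) is to pre-create the tool cache directory and hand ownership to the build user before NodeTool@0 runs; hosted macOS images normally allow passwordless sudo:

- script: |
    # Assumption: passwordless sudo is available on the hosted macOS image.
    # Pre-create the tool cache so NodeTool@0 can download Node 16 into it
    # instead of failing with EACCES when creating /opt/hostedtoolcache.
    sudo mkdir -p /opt/hostedtoolcache
    sudo chown -R "$USER" /opt/hostedtoolcache
  displayName: 'Ensure the tool cache is writable'

- task: NodeTool@0
  inputs:
    versionSource: 'spec'
    versionSpec: '16.x'
    checkLatest: false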
This post will be a bit long, as I not only describe my problem but also show my different attempts to solve it.
I have a solution containing a .NET 6 Web API project (csproj) and a C++/CLI wrapper project (vcxproj), with a reference from the C# project to the C++ project. I use Azure DevOps Server 2019 and VS 2022 on my local build agent.
I am not able to publish this solution successfully through an Azure DevOps pipeline using the DotNetCoreCLI@2 task, the VSBuild@1 task, or a custom script as a workaround for MSBuild@1.
VSBuild
My initial approach was simply to use the VSBuild@1 task. With this task the pipeline does not even start; it fails with the following error:
##[Error 1]
No agent found in pool My_Pool which satisfies the specified demands:
agent.name -equals My_Agend_Unity_1
Cmd
msbuild
visualstudio
Agent.Version -gtVersion 2.153.1
The cause is a compatibility issue between Azure DevOps Server 2019 and VS 2022: the agent does not recognize VS 2022 and therefore does not create system capabilities for it. It's the same issue for MSBuild@1, and the reason I tried a custom script as a workaround, because the task couldn't find MSBuild.
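For illustration only (this snippet is not from the original post), the demands listed in that error would look like this if written out explicitly in the pool definition, which shows why an agent without a detected Visual Studio 2022 capability can never be matched:

pool:
  name: My_Pool
  demands:
  - agent.name -equals My_Agend_Unity_1
  - Cmd
  - msbuild
  - visualstudio
  - Agent.Version -gtVersion 2.153.1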
DotNetCoreCLI
The first error I got was:
error MSB4019: The imported project "C:\Microsoft.Cpp.Default.props" was not found. Confirm that the expression in the Import declaration "\Microsoft.Cpp.Default.props" is correct, and that the file exists on disk.
So I fixed that by adding this environment variable to the task:
env:
  PATH: 'C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170'
The next error was:
##[error]Error: Unable to locate executable file: 'dotnet'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also verify the file has a valid extension for an executable file.
So I tried to fix it by using the UseDotNet@2 task, even though that doesn't make sense to me. In the end I still get an error similar to the first one:
MSBuild version 17.3.2+561848881 for .NET
C:\agent\_work\2\s\XXX\YYY\CPPWrapper\MyProject.vcxproj : warning NU1503: Skipping restore for project "C:\agent\_work\2\s\XXX\YYY\CPPWrapper\MyProject.vcxproj". The project file may be invalid or missing targets required for restore. [C:\agent\_work\2\s\XXX\YYY\MySolution.sln]
Determining projects to restore...
"C:\agent\_work\2\s\XXX\YYY\DotNet6Project\MyProject.csproj" restored (in "2,4 sec").
C:\agent\_work\2\s\XXX\YYY\CPPWrapper\MyProject.vcxproj(21,3): error MSB4019: The imported project "C:\Microsoft.Cpp.Default.props" was not found. Confirm that the expression in the Import declaration "\Microsoft.Cpp.Default.props" is correct, and that the file exists on disk.
##[error]Error: The process 'C:\agent\_work\_tool\dotnet\dotnet.exe' failed with exit code 1
##[error]Dotnet command failed with non-zero exit code on the following projects : C:\agent\_work\2\s\XXX\YYY\MySolution.sln
##[section]Finishing: Build & Publish XXX Service - DotNetCoreCLI#2
MSBuild
My last hope was the custom script I already use in another pipeline that runs on the same agent and uses MSBuild from VS 2022. This is the approach I've gotten furthest with: the projects appear to build fine, but the run then fails with this error:
(ResolvePackageAssets Target) -> C:\Program Files\dotnet\sdk\7.0.101\Sdks\Microsoft.NET.Sdk\targets\Microsoft.PackageDependencyResolution.targets(267,5):
error NETSDK1064: Package "Microsoft.EntityFrameworkCore.Analyzers",
Version 6.0.4, not found. It may have been deleted after the NuGet restore.
Otherwise, the NuGet restore may have been only partially completed due to limitations on the maximum path length.
[C:\agent\_work\2\s\XXX\YYY\DotNet6Project\MyProject.csproj]
I do not know how to proceed from here. I have already enabled long paths via Group Policy Editor → Administrative Templates → All Settings → Enable Win32 long paths.
My YAML file:
pool:
  name: 'My_Pool'
  demands:
  - agent.name -equals My_Agent

variables:
  buildPlatform: 'x64'
  buildConfiguration: 'Release'
  solution: '$(System.DefaultWorkingDirectory)/XXX/YYY/MySolution.sln'
  DotNet6Project: '$(System.DefaultWorkingDirectory)/XXX/YYY/DotNet6Project/MyProject.csproj'
  CPPWrapper: '$(System.DefaultWorkingDirectory)/XXX/YYY/CPPWrapper/MyProject.vcxproj'

steps:
- task: NuGetToolInstaller@0
  displayName: 'NuGet Tool Installer - NuGetToolInstaller@0'
  name: 'NuGetToolInstaller'
  inputs:
    versionSpec: '>=6.1.0'

- task: NuGetCommand@2
  displayName: 'NuGet Restore - NuGetCommand@2'
  inputs:
    command: 'restore'
    restoreSolution: '$(solution)'
    noCache: true

- task: BatchScript@1
  displayName: 'Run BatchScript to create DLLs, Libs & Header - BatchScript@1'
  inputs:
    filename: '$(System.DefaultWorkingDirectory)/ICP/ZZZ/build_release.bat'
  env:
    PATH: 'C:\Program Files\CMake\bin'

- task: PowerShell@2
  displayName: 'Run PowerShell script to unpack packages from BatchScript for ZZZWrapper - PowerShell@2'
  inputs:
    filePath: '$(System.DefaultWorkingDirectory)/XXX/YYY/CPPWrapper/install_ZZZ_package.ps1'

# Workaround for MSBuild@1
- script: |
    @echo off
    setlocal enabledelayedexpansion
    for /f "usebackq tokens=*" %%i in (`"!ProgramFiles(x86)!\Microsoft Visual Studio\Installer\vswhere.exe" -latest -products * -requires Microsoft.Component.MSBuild -find MSBuild\**\Bin\MSBuild.exe`) do (set msbuild_exe=%%i)
    "!msbuild_exe!" "$(solution)" /p:Configuration="$(buildConfiguration)" /p:Platform="$(buildPlatform)" /p:PackageLocation="$(build.artifactStagingDirectory)" /t:rebuild
  displayName: 'Build - Script'

# ---------- VSBuild ----------------
#- task: VSBuild@1
#  inputs:
#    solution: '$(solution)'
#    msbuildArgs: '/p:DeployOnBuild=true /p:WebPublishMethod=Package /p:PackageAsSingleFile=true /p:SkipInvalidConfigurations=true /p:PackageLocation="$(build.artifactStagingDirectory)"'
#    platform: '$(buildPlatform)'
#    configuration: '$(buildConfiguration)'

# ---------- DotNetCoreCLI ----------
#- task: UseDotNet@2
#  inputs:
#    packageType: 'sdk'
#    version: '6.x'
#- task: DotNetCoreCLI@2
#  displayName: 'Build & Publish - DotNetCoreCLI@2'
#  inputs:
#    command: 'publish'
#    publishWebProjects: false
#    projects: '$(solution)'
#    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
#    zipAfterPublish: false
#  env:
#    PATH: 'C:\Program Files\Microsoft Visual Studio\2022\Community\MSBuild\Microsoft\VC\v170'

- task: PublishBuildArtifacts@1
  displayName: 'Publish Build Artifacts - PublishBuildArtifacts@1'
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'XXXArtifact'
    publishLocation: 'Container'
From your YAML file, I understand you want to build a solution and then publish a build artifact. Based on the errors of the tasks you described, here are some suggestions you can check.
1. VSBuild@1
Error information: No agent found in pool My_Pool which satisfies the specified demands. This indicates there is no agent in your pool that meets the demand requirements. You should check whether an agent named My_Agend_Unity_1 exists and whether it satisfies the demands Cmd, msbuild, visualstudio and Agent.Version -gtVersion 2.153.1.
For more information, refer to the doc: pool definition.
2. UseDotNet@2
There is a warning, NU1503: Skipping restore for project, which indicates that the packages required by the project MyProject are not restored correctly. You should edit the affected project to add the targets required for restore.
Please refer to the doc: NuGet Warning NU1503.
About the error MSB4019: check whether the path "C:\Microsoft.Cpp.Default.props" actually exists. Here is a ticket similar to your issue; you can try that workaround and see if it works.
3. MSBuild
About error NETSDK1064: this error occurs when the build tools can't find a NuGet package that is needed to build a project. This is typically a package-restore issue, related to warning NU1503 from the UseDotNet@2 section above. You can refer to the doc NETSDK1064: Package not found and take the actions it suggests to resolve this error.
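For what it is worth, one way to combine points 2 and 3 is to scope the dotnet restore/publish to the csproj instead of the whole solution, so the vcxproj (which the dotnet CLI cannot build anyway) never goes through the NuGet restore that produces NU1503 and NETSDK1064. This is only a sketch, reusing the DotNet6Project variable already defined in the question's YAML, and it has not been verified against this solution:

- task: DotNetCoreCLI@2
  displayName: 'Publish .NET 6 project only'
  inputs:
    command: 'publish'
    publishWebProjects: false
    # Restore and publish only the csproj; the C++/CLI wrapper is built
    # separately by the vswhere/MSBuild script step.
    projects: '$(DotNet6Project)'
    arguments: '--configuration $(buildConfiguration) --output $(Build.ArtifactStagingDirectory)'
    zipAfterPublish: false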
I'm using the Cache@2 Azure DevOps task to cache NuGet packages from multiple projects:
variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

- task: Cache@2
  displayName: 'NuGet cache'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**,!**/obj/**'
    restoreKeys: |
      nuget | "$(Agent.OS)"
      nuget
    path: $(NUGET_PACKAGES)
    cacheHitVar: 'CACHE_RESTORED'

- task: NuGetCommand@2
  displayName: 'NuGet restore'
  condition: ne(variables.CACHE_RESTORED, true)
  inputs:
    command: 'restore'
    restoreSolution: '$(solution)'

- task: VSBuild@1
  displayName: 'Build solution'
  ...
I'm following the documentation here:
https://learn.microsoft.com/en-us/azure/devops/pipelines/artifacts/caching-nuget?view=azure-devops
In the 'NuGet cache' step, if there is a cache, it is restored:
Resolved to: nuget|"Windows_NT"|Gor2Y1OZWvAeaan3RC3GH9D0ldp6z17wm6JB2YUxrS0=
There is a cache hit: `nuget|"Windows_NT"|Gor2Y1OZWvAeaan3RC3GH9D0ldp6z17wm6JB2YUxrS0=`
Path =
Type = tar
Code Page = UTF-8
Characteristics = ASCII
Everything is Ok
Folders: 6022
Files: 8938
Size: 1894324465
Compressed: 7660544
Process exit code: 0
Cache restored.
If there is a cache hit, the 'NuGet restore' task is skipped:
Evaluating: ne(variables['CACHE_RESTORED'], True)
Expanded: ne('true', True)
Result: False
Then comes my problem. The 'Build solution' task fails with thousands of errors like this:
##[error]NHO.Core\IdentityServerClientStartup.cs(32,12): Error CS0246: The type or namespace name 'OwinStartupAttribute' could not be found (are you missing a using directive or an assembly reference?)
It cannot resolve references to classes in NuGet packages.
Any idea why this fails? I'm following the documentation exactly, but maybe I'm still missing something?
Update: Possible solution/source of error
It seems to be an error in the documentation.
If I run the NuGet restore step normally, without the condition
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    command: 'restore'
    restoreSolution: '$(solution)'
it is a lot quicker than normal (20 seconds instead of 2 minutes), so I'm guessing it uses the cache.
The documentation covers only the basic case, in which all packages and references live in .nuget/packages and no per-project assets files are required. In that case you can skip the restore task and rely on the cache task alone.
In many other cases, however, performing a restore task is still necessary, particularly when using a Microsoft-hosted agent. Since most packages are already present from the cache task, the running time of the NuGet restore task is significantly reduced.
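Put together, that variant of the documented sample looks roughly like this (the same tasks as above, with the cacheHitVar/condition pair removed so that restore always runs):

variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

- task: Cache@2
  displayName: 'NuGet cache'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json,!**/bin/**,!**/obj/**'
    restoreKeys: |
      nuget | "$(Agent.OS)"
      nuget
    path: $(NUGET_PACKAGES)

# Always restore: with a warm cache the packages are already in
# $(NUGET_PACKAGES), so this is fast, but it still regenerates the
# per-project assets files the build needs.
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    command: 'restore'
    restoreSolution: '$(solution)'

- task: VSBuild@1
  displayName: 'Build solution'
  ...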
I want to set up CI/CD for my Azure Databricks notebooks using a YAML pipeline.
I have followed the flow below:
Pushed my code from the Databricks notebook to Azure Repos.
Created a build using the YAML script below.
stages:
- stage: Build
  displayName: Build stage
  jobs:
  - job: Build
    displayName: Build
    steps:
    - task: CopyFiles@2
      displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
      inputs:
        SourceFolder: '$(System.DefaultWorkingDirectory)'
        TargetFolder: '$(build.artifactstagingdirectory)'
    - task: PublishBuildArtifacts@1
      displayName: 'Publish Artifact: notebooks'
      inputs:
        ArtifactName: dev_release
    - task: PublishBuildArtifacts@1
      inputs:
        PathtoPublish: '$(Build.ArtifactStagingDirectory)'
        ArtifactName: 'publish build'
        publishLocation: 'Container'
Doing the above, I was able to create an artifact.
Now I have added another stage to deploy that artifact to my Databricks workspace, using the YAML script below.
- stage: Deploy
  displayName: Deploy stage
  jobs:
  - job: Deploy
    displayName: Deploy
    pool:
      vmImage: 'vs2017-win2016'
    steps:
    - task: DownloadBuildArtifacts@0
      inputs:
        buildType: 'current'
        downloadType: 'single'
        artifactName: 'dev_release'
        downloadPath: '$(System.ArtifactsDirectory)'
    - task: databricksDeployScripts@0
      inputs:
        authMethod: 'bearer'
        bearerToken: 'dapj0ee865674cd9tfb583dbad61b78ce9b1-4'
        region: 'Central US'
        localPath: '$(System.DefaultWorkingDirectory)'
        databricksPath: '/Shared'
Now I want to run the deployed notebook from here as well, so I added a "Configure Databricks CLI" task and an "Execute Databricks" task to execute the notebook.
I get the errors below:
##[error]Error: Unable to locate executable file: 'databricks'. Please verify either the file path exists or the file can be found within a directory specified by the PATH environment variable. Also verify the file has a valid extension for an executable file.
##[error]The given notebook does not exist.
How can I execute a notebook from Azure DevOps? My notebooks are in Scala.
Is there any other way to do this on production servers?
As you have deployed the Databricks notebook using Azure DevOps and are asking for another way to run it, I would like to suggest the Azure Data Factory service.
In Azure Data Factory, you can create a pipeline that executes a Databricks notebook against the Databricks jobs cluster. You can also pass Azure Data Factory parameters to the Databricks notebook during execution.
Follow the official tutorial, Run a Databricks notebook with the Databricks Notebook Activity in Azure Data Factory, to deploy and run a Databricks notebook.
Additionally, you can schedule the pipeline trigger at a particular time or event to make the process completely automatic. Refer to https://learn.microsoft.com/en-us/azure/data-factory/concepts-pipeline-execution-triggers
Try this:
- job: job_name
  displayName: test job
  pool:
    name: agent_name(selfhostedagent)
  workspace:
    clean: all
  steps:
  - checkout: none
  - task: DownloadBuildArtifacts@0
    displayName: 'Download Build Artifacts'
    inputs:
      artifactName: app
      downloadPath: $(System.DefaultWorkingDirectory)
  - task: riserrad.azdo-databricks.azdo-databricks-configuredatabricks.configuredatabricks@0
    displayName: 'Configure Databricks CLI'
    inputs:
      url: '$(Databricks_URL)'
      token: '$(Databricks_PAT)'
  - task: riserrad.azdo-databricks.azdo-databricks-deploynotebooks.deploynotebooks@0
    displayName: 'Deploy Notebooks to Workspace'
    inputs:
      notebooksFolderPath: '$(System.DefaultWorkingDirectory)/app/path/to/notebook'
      workspaceFolder: /Shared
  - task: riserrad.azdo-databricks.azdo-databricks-executenotebook.executenotebook@0
    displayName: 'Execute /Shared/path/to/notebook'
    inputs:
      notebookPath: '/Shared/path/to/notebook'
      existingClusterId: '$(cluster_id)'
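If the databricks executable still cannot be found even with the Configure Databricks CLI task, one thing worth checking (a sketch under the assumption that Python is available on the agent) is that a Python runtime is selected and the CLI is installed before the configure and execute steps:

- task: UsePythonVersion@0
  inputs:
    versionSpec: '3.x'
# Install the Databricks CLI so the 'databricks' executable is on PATH.
- script: pip install databricks-cli
  displayName: 'Install Databricks CLI'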
(Sorry for the long post! I wanted to put in as much detail as possible)
Using Azure DevOps, I am trying to deploy with AWS CodeDeploy. I already have a successful AWS pipeline that uses AWS CodeBuild and AWS CodeDeploy, so I know everything works in that environment. My organization now wants to convert some of our processes to use Azure DevOps.
In Azure I have successfully built the code into a war file that will be used in the deployment process, which proves the azure-pipelines.yml file is set up correctly to build. Just to get familiar with Azure DevOps talking to AWS, I was also able to upload the war artifact to S3 using the Azure releases process and a properly configured S3 upload step. I now want to deploy the war artifact directly from Azure to AWS CodeDeploy. To make sure I can interface with AWS CodeDeploy from Azure, I successfully had the war artifact copied into the AWS CodeDeploy application/deployment group. It did copy it over, but the deployment fails because CodeDeploy cannot find the appspec.yml and the hook scripts referenced in the appspec.yml.
I have the hooks located in my code project under /aws
I have the following:
/aws/before_install.sh
/aws/after_install.sh
/aws/application_start.sh
/aws/application_stop.sh
/aws/before_allow_traffic.sh
/aws/after_allow_traffic.sh
The appspec.yml I placed in the root directory in my code project:
/appspec.yml.
I assume that I need to have the appspec.yml and all the shell scripts in “/aws” copied to the same location that the war file is being copied to (I assume that this would be the Revision Bundle location). I observed that the Azure Release process/CodeDeploy agent looks at the “Revision Bundle” location, zips all the files it finds there and passes that to AWS CodeDeploy.
What do I need to add to the azure-pipelines.yml to tell it to copy the appspec.yml and all the shell scripts in /aws to the same location that the war file is placed under and make this the Revision Bundle location?
With the azure-pipelines.yml below, the war builds, and as an experiment I can see that it also copies the one shell script I defined, because the run reports that two artifacts were uploaded (pro-0.0.1-SNAPSHOT.war and after_install.sh):
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'build'

- publish: $(System.DefaultWorkingDirectory)/build/libs/
  artifact: pro-0.0.1-SNAPSHOT.war

- task: CopyFiles@2
  displayName: 'Copy the needed AWS files to the artifact directory'
  inputs:
    SourceFolder: '$(build.artifactstagingdirectory)'
    Contents: 'aws/*.sh'
    TargetFolder: '$(System.DefaultWorkingDirectory)/build/libs/'

- publish: $(System.DefaultWorkingDirectory)/aws/after_install.sh
  artifact: after_install.sh
Now how do I get all the shell scripts uploaded? I tried wildcards but it didn’t work.
How do I have the Azure Codedeploy agent access/find all the files together to allow them to be zipped up and sent along to AWS CodeDeploy?
To summarize - I need all of the following files in one directory so that the Azure Codedeploy agent can access them to zip up and send to AWS CodeDeploy:
pro-0.0.1-SNAPSHOT.war, appspec.yml, after_install.sh, before_install.sh, application_start.sh, application_stop.sh, after_allow_traffic.sh, before_allow_traffic.sh
FYI – I have the “Revision Bundle” location set to “$(System.DefaultWorkingDirectory)/_Trove /pro-0.0.1-SNAPSHOT.war/
Thanks for any help or advice!!
Try the YAML script below. It only makes small changes to your CopyFiles task and second publish task:
trigger:
- master

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'build'

- publish: $(System.DefaultWorkingDirectory)
  artifact: pro-0.0.1-SNAPSHOT.war

- task: CopyFiles@2
  displayName: 'Copy the needed AWS files to the artifact directory'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: |
      aws/*.sh
      appspec.yml
    TargetFolder: '$(build.artifactstagingdirectory)'

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)'
    ArtifactName: 'build.gradle'
    publishLocation: 'Container'
For others who might have a similar situation, here is what finally worked for me:
trigger:
- '*'

name: $(SourceBranchName)_$(Year:yy).$(Month).$(DayOfMonth)$(Rev:.r)-$(BuildId)

pool:
  vmImage: 'ubuntu-latest'

steps:
- task: Gradle@2
  inputs:
    workingDirectory: ''
    gradleWrapperFile: 'gradlew'
    gradleOptions: '-Xmx3072m'
    javaHomeOption: 'JDKVersion'
    jdkVersionOption: '1.11'
    jdkArchitectureOption: 'x64'
    publishJUnitResults: true
    testResultsFiles: '**/TEST-*.xml'
    tasks: 'build'

# - publish: $(System.DefaultWorkingDirectory)
#   artifact: pro-0.0.1-SNAPSHOT.war

- task: CopyFiles@2
  displayName: 'Copy the needed AWS files to the artifact directory'
  inputs:
    SourceFolder: '$(System.DefaultWorkingDirectory)'
    Contents: |
      aws/*.sh
      aws/appspec.yml
      build/libs/*
    TargetFolder: '$(build.artifactstagingdirectory)/output'
    flattenFolders: true

- task: PublishBuildArtifacts@1
  inputs:
    PathtoPublish: '$(Build.ArtifactStagingDirectory)/output'
    ArtifactName: 'pro'
    publishLocation: 'Container'
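For completeness: with everything flattened into a single output folder as above, the appspec.yml that CodeDeploy reads from the root of the revision bundle can reference the hook scripts by bare file name. A minimal sketch (the destination path and timeout are illustrative, not taken from the original post):

version: 0.0
os: linux
files:
  - source: pro-0.0.1-SNAPSHOT.war
    destination: /opt/app          # illustrative install location on the instance
hooks:
  ApplicationStop:
    - location: application_stop.sh
  BeforeInstall:
    - location: before_install.sh
      timeout: 300
  AfterInstall:
    - location: after_install.sh
  ApplicationStart:
    - location: application_start.sh
  BeforeAllowTraffic:
    - location: before_allow_traffic.sh
  AfterAllowTraffic:
    - location: after_allow_traffic.sh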
We have a working classic build job in Azure DevOps with a self-hosted agent pool. But when we tried to convert this build job to the YAML method, no agents get assigned during execution and the run hangs. Could you please correct me if I am doing something wrong?
Error:
"All eligible agents are disabled or offline"
Below is the YAML file converted from the classic build agent job.
pool:
  name: MYpool
  demands: maven

# Your build pipeline references an undefined variable named 'Parameters.mavenPOMFile'. Create or edit the build pipeline for this YAML file, define the variable on the Variables tab. See https://go.microsoft.com/fwlink/?linkid=865972

steps:
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    mavenPomFile: '$(Parameters.mavenPOMFile)'

- task: CopyFiles@2
  displayName: 'Copy Files to: $(build.artifactstagingdirectory)'
  inputs:
    SourceFolder: '$(system.defaultworkingdirectory)'
    Contents: '**/*.war'
    TargetFolder: '$(build.artifactstagingdirectory)'
  condition: succeededOrFailed()

- task: PublishBuildArtifacts@1
  displayName: 'Publish Artifact: Root'
  inputs:
    PathtoPublish: '$(build.artifactstagingdirectory)'
    ArtifactName: Root
  condition: succeededOrFailed()

- task: CopyFiles@2
  displayName: 'Copy wars to build directory'
  inputs:
    SourceFolder: '$(build.artifactstagingdirectory)/target'
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'

- task: CopyFiles@2
  displayName: 'Copying docker file to build directory'
  inputs:
    SourceFolder: Admin
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'

- bash: |
    # Rename the war file to ROOT.war
    mv /home/myadmin/builds/$(build.buildnumber)/mypack0.0.1.war /home/myadmin/builds/$(build.buildnumber)/ROOT.war
  displayName: 'Name war file Root.war'

- task: Docker@2
  displayName: 'Build the docker image'
  inputs:
    repository: 'mycontainerregistry.azurecr.io/myservice'
    command: build
    Dockerfile: '/home/myadmin/builds/$(build.buildnumber)/Dockerfile'
    tags: '$(Build.BuildNumber)-DEV'

- bash: |
    # Push the image to the container registry
    docker login mycontainerregistry.azurecr.io
    docker push mycontainerregistry.azurecr.io/myservice:$(Build.BuildNumber)-DEV
  displayName: 'Push Docker Image'

- task: CopyFiles@2
  displayName: 'Copy Deployment file'
  inputs:
    SourceFolder: /home/myadmin/kubernetes
    TargetFolder: '/home/myadmin/builds/$(build.buildnumber)'

- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace image in deployment file'
  inputs:
    rootDirectory: '/home/myadmin/builds/$(build.buildnumber)'
    targetFiles: '**/*.yml'
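As an aside, the converted YAML still references the classic-editor variable Parameters.mavenPOMFile flagged in the comment above. A minimal fix (assuming the POM sits at the repository root) is to define an ordinary pipeline variable and point the Maven task at it:

variables:
  mavenPOMFile: 'pom.xml'   # assumed location; adjust to your repository layout

steps:
- task: Maven@3
  displayName: 'Maven pom.xml'
  inputs:
    mavenPomFile: '$(mavenPOMFile)'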
In my previous answer, I said that when I wait for nearly 20-30 minutes, the agent interface prompts the message below.
In fact, this is the process that upgrades the agent to the latest version automatically.
Yes, when you use YAML with a private agent, the agent version must be the latest one, whether or not you add the demands.
For our system, the agent version is an implicit demand: your agent must be on the latest version when you use it from YAML.
If that demand is not satisfied, the job is blocked, and the agent upgrade process is eventually forced to run automatically by the system.
So, to run a private agent from YAML successfully, please upgrade the agent to the latest version manually.
Since what my colleague and I discussed in that ticket is all private to Microsoft, unfortunately you cannot see that summary. So I have taken screenshots of it, which you can refer to here: https://imgur.com/a/4OnzHp3
We are still working out why the system prompts such a confusing message ("All eligible agents are disabled or offline"), and I am trying to contribute to making this message clearer, for example: "no agents meet demands: agent version xxx".