New Azure DevOps pipeline using ASP.NET yaml template failing - azure-devops

I have a GitHub repository with a .NET Core 3.0 website solution in it. In Azure DevOps, I went through the wizard to create a new pipeline linked to that repository using the ASP.NET Core template on the Configure step of the wizard. This is what my YAML looks like:
# ASP.NET Core
# Build and test ASP.NET Core projects targeting .NET Core.
# Add steps that run tests, create a NuGet package, deploy, and more:
# https://learn.microsoft.com/azure/devops/pipelines/languages/dotnet-core

trigger:
- develop

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'

steps:
- script: dotnet build --configuration $(buildConfiguration)
  displayName: 'dotnet build $(buildConfiguration)'
When I try to manually run the pipeline to test it, this is the output I get every time:
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[warning]There was a failure in sending the provision message: Unexpected response code from remote provider NotFound
##[error]Provisioning request delayed or failed to send 5 time(s). This is over the limit of 3 time(s).
Pool: Azure Pipelines
Image: ubuntu-latest
Started: Yesterday at 10:04 PM
Duration: 10h 54m 5s
Job preparation parameters
ContinueOnError: False
TimeoutInMinutes: 60
CancelTimeoutInMinutes: 5
Expand:
MaxConcurrency: 0
########## System Pipeline Decorator(s) ##########
Begin evaluating template 'system-pre-steps.yml'
Evaluating: eq('true', variables['system.debugContext'])
Expanded: eq('true', Null)
Result: False
Evaluating: resources['repositories']['self']
Expanded: Object
Result: True
Evaluating: not(containsValue(job['steps']['*']['task']['id'], '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Expanded: not(containsValue(Object, '6d15af64-176c-496d-b583-fd2ae21d4df4'))
Result: True
Evaluating: resources['repositories']['self']['checkoutOptions']
Result: Object
Finished evaluating template 'system-pre-steps.yml'
********************************************************************************
Template and static variable resolution complete. Final runtime YAML document:
steps:
- task: 6d15af64-176c-496d-b583-fd2ae21d4df4@1
  inputs:
    repository: self
I thought maybe ubuntu-latest was no longer a valid vmImage, so I tried changing it to ubuntu-18.04 and got the same result. The Microsoft-hosted agents documentation says either should be valid.
Do I have something wrong with my YAML file? I have set up pipelines before with the old non-YAML interface with no issues, so I am a little confused.

I think nowadays it should look like this:
trigger:
- develop

jobs:
- job: buildjob
  variables:
    buildConfiguration: 'Release'
  pool:
    vmImage: 'ubuntu-latest'
  steps:
  - script: dotnet build --configuration $(buildConfiguration)
    displayName: 'dotnet build $(buildConfiguration)'
Although this says you can omit jobs if you only have a single job, I don't see anything wrong with your YAML other than the fact that you use steps directly (which, again, should be fine).

It looks like there is nothing wrong with your YAML file and format.
Since you are using a GitHub repository with a .NET Core 3.0 website, please pay attention when you create the pipeline: make sure you have selected GitHub, not Azure Repos Git.
Also, as you mentioned you have set up pipelines before with the old non-YAML interface with no issues, you could set up your pipeline with the classic editor first.
There is also a View YAML option.
You could follow that format and content to create a YAML template, which may do the trick.
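For example, here is a hedged sketch of the kind of YAML the classic editor's View YAML option typically produces for a .NET Core build (the DotNetCoreCLI@2 inputs and the project glob are illustrative assumptions, not your exact output):

trigger:
- develop

pool:
  vmImage: 'ubuntu-latest'

variables:
  buildConfiguration: 'Release'

steps:
# Restore and build with the built-in .NET Core CLI task instead of a raw script step
- task: DotNetCoreCLI@2
  displayName: 'dotnet restore'
  inputs:
    command: 'restore'
    projects: '**/*.csproj'   # assumption: adjust the glob to your solution layout

- task: DotNetCoreCLI@2
  displayName: 'dotnet build $(buildConfiguration)'
  inputs:
    command: 'build'
    projects: '**/*.csproj'
    arguments: '--configuration $(buildConfiguration)'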

I was looking through my account and noticed my Agent Pool settings looked a little suspect on the project. It showed I had 11 available agents that were online, even though I was using the free plan on a private repository, so it should only have had one.
I ended up deleting my Azure DevOps organization and creating a new one. Now the YAML configuration I initially posted works fine.

Related

Creating an .appinstaller in Azure DevOps

The Goal
I have a WPF application that is to be updated from an SFTP server. Azure DevOps is being used as the CI/CD, and the build output is pushed out to the SFTP server.
Question
I wish to build a .appinstaller. From what I gather, when creating a .appinstaller for MSIX you are required to have a public-facing URI. I do not wish to do so; I want it to be stored on the SFTP server, then downloaded from there and installed locally.
Any tips?
This is a partial YAML file used for the build:
- task: MsixPackaging@1
  inputs:
    outputPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildNumber).msix'
    solution: 'app.sln'
    clean: true
    generateBundle: true
    buildConfiguration: 'release'
    msbuildLocation: 'C:\Download'
    buildForX86: false
    updateAppVersion: false
    appPackageDistributionMode: 'SideloadOnly'
    msbuildLocationMethod: 'version'
    msbuildVersion: 'latest'
    msbuildArchitecture: 'x64'

- task: AppInstallerFile@1
  displayName: 'Create AppInstaller'
  inputs:
    package: '$(Build.ArtifactStagingDirectory)/$(Build.BuildNumber).msix'
    outputPath: '$(Build.ArtifactStagingDirectory)/$(Build.BuildNumber).appinstaller'
    uri: '\\PC\Download'
    mainItemUri: '\\PC\Download'
    showPromptWhenUpdating: true
    updateBlocksActivation: true
Since you are using an SFTP server, the SSH port on that server should already be open.
You can use the SSH protocol to handle the file you want to upload/push.
You first need to create an SSH service connection in the DevOps project settings.
Then push the file with the CopyFilesOverSSH@0 task:
trigger:
- none

pool: VMAS # This is the agent pool on my side; you can set up your own agent pool.
# vmImage: ubuntu-latest

steps:
# Put your logic here.
- task: CopyFilesOverSSH@0
  inputs:
    sshEndpoint: 'SSH_To_Remote_VM'
    contents: 'test.appinstaller'
    readyTimeout: '20000'
It works fine on my side.
You can also use the SSH@0 task to run scripts on the remote machine.
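A minimal sketch of what that could look like, reusing the same SSH service connection as above (the inline command is only an illustration):

- task: SSH@0
  inputs:
    sshEndpoint: 'SSH_To_Remote_VM'   # assumption: reuse the service connection from the copy step
    runOptions: 'inline'
    inline: |
      # e.g. check that the uploaded file arrived on the remote machine
      ls -l ~/test.appinstaller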

Azure DevOps: The code coverage value (0%, 0 lines) is lower than the minimum value

My Azure DevOps pipeline started failing on a self-hosted agent on the Check build quality task:
steps:
- task: mspremier.BuildQualityChecks.QualityChecks-task.BuildQualityChecks@6
  displayName: 'Check build quality'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed
    coverageType: lines
    coverageThreshold: 50
    buildPlatform: '$(BuildPlatform)'
  enabled: false
  timeoutInMinutes: 24
I tried all the task versions.
I tried all the coverage types and added timeout minutes too, but it's not working.
I tried on a Microsoft-hosted Windows 2019 agent and it worked; until last week it was working on the self-hosted 2019 agent.
Happy to provide more information if needed.
Also, I would like to understand: if we are already doing a Sonar code scan, does this task add any value?
If you're running this check on a PR without any code changes (i.e. only adding this build check), the code coverage check will fail by default, as there's no additional coverage added.
Have a look at the treat0of0as100 option listed here: https://github.com/MicrosoftPremier/VstsExtensions/blob/master/BuildQualityChecks/en-US/overview.md.
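A minimal sketch of how that option might be added to the existing task (verify the exact option name and behaviour against the linked documentation for your task version):

- task: mspremier.BuildQualityChecks.QualityChecks-task.BuildQualityChecks@6
  displayName: 'Check build quality'
  inputs:
    checkCoverage: true
    coverageFailOption: fixed
    coverageType: lines
    coverageThreshold: 50
    treat0of0as100: true   # assumption: treat "0 of 0 lines covered" as 100% so PRs with no new code pass
    buildPlatform: '$(BuildPlatform)'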

How to find out Azure DevOps Services pipeline demand / capability requirements to select some specific agent? (implicit and explicit demands)

I have a simple YAML pipeline file as follows:
stages:
- stage: build_xm2simu
  displayName: This is the build stage of the XM Simu project
  # dependsOn: string | [ string ]
  # condition: string
  jobs:
  - job: linux_dotnet_build
    pool:
      name: my-desktop
      # demands:
      # - netcore -equals 3.1
    steps:
    - powershell: dotnet restore source\backend\XM2Simu\XM2Simu.csproj
and I also have configured three different agents:
my-desktop / Windows host agent → capabilities: {plenty, but no netcore}
my-desktop / linux docker agent with dotnet → capabilities: netcore 3.1, PowerShell 6.x, {some more}
my-desktop / linux docker agent with azure cli
If I remove the explicit netcore demand, the job runs on my Windows host agent and fails as expected, since there is no source\backend\XM2Simu\XM2Simu.csproj file currently there.
If the netcore demand is added, no suitable agent is found, and I only get the following message:
Waiting for an available agent. All eligible agents are disabled or offline
I also get the above message when the netcore demand is removed but the Windows host agent is offline.
Question: How to find out Azure DevOps Services pipeline demand / capability requirements to select some specific agent? (implicit and explicit demands)
Note: I'm currently investigating this issue, and it may be related to this post.
How to find out Azure DevOps Services pipeline demand / capability requirements to select some specific agent? (implicit and explicit demands)
What you did should be correct.
According to your troubleshooting steps and the error message you get:
Waiting for an available agent. All eligible agents are disabled or offline
Besides, I also tested your YAML file with my private agent, and it works fine.
It seems the error comes from the linux docker agent with dotnet itself; it does not appear to be an eligible agent.
So, first, check whether that agent stays online. If it does, try updating the agent.
Second, if the above does not help, please download the latest agent from the VSTS web page and configure it again.
Hope this helps.
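For reference, explicit demands are declared under the pool, as in the commented-out part of your YAML; a minimal sketch (the capability name and value mirror the question's setup, and tasks in the job can add further implicit demands on top of this):

jobs:
- job: linux_dotnet_build
  pool:
    name: my-desktop
    demands:
    - netcore -equals 3.1   # explicit demand: only agents advertising this capability are eligible
  steps:
  - powershell: dotnet restore source\backend\XM2Simu\XM2Simu.csproj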

"No agent found in pool Default which satisfies the specified demands:

I am getting the below message while running a build in my Azure DevOps pipeline. I am using Azure DevOps Pipelines, VS2017 and Windows 2016.
"No agent found in pool Default which satisfies the specified demands:
msbuild
visualstudio
vstest
Agent.Version -gtVersion 2.161.0 "
This is failing when I'm using three agent jobs in a single pipeline. If I run the same tasks in a new pipeline, it works fine. Could you please suggest a solution?
In my case, we were getting the error, and this is what solved it:
I logged on to our build server and restarted these three services.
In our case, it was just a problem with a single pipeline, as the other pipeline we use was running fine. I don't know why one pipeline worked and the other didn't, since they both use the same agent, but restarting the services resolved it.
This issue is caused by the "Download Pipeline Artifacts@2" task.
It was reported to the product group not long ago, and our engineers have released fixes that resolve the compatibility issues. This issue has now been fixed. I apologize for the inconvenience here.
For details, please refer to this case on our Developer Community forum.
Same here.
We have the same issue and MS is tracking it.
https://twitter.com/AzureDevOps/status/1207288336206815232
I got this error when I created a new agent.
This new agent didn't receive existing User-defined capabilities that were on older agents.
After comparing agent capabilities, I added the missing user-defined capabilities and it started to compile.
In your pipeline definition YAML file you have to specify 'windows-2016' before specifying the agent pool, see below:
stages:
- stage: Build
  displayName: 'IaC Build'
  variables:
  - name: var
    value: val
  jobs:
  - job: Build
    pool:
      vmImage: 'windows-2016'
    steps:
    - task: ...

# Deploy Dev
- stage: DeployDevInfra
  displayName: 'Deploy: DEV'
  dependsOn: build
  variables:
  - group: your-var-group
  - name: var
    value: val
  jobs:
  - template: another-pipeline.yml
    parameters:
      agentpool: 'here-come-name-of-your-agent-pool'
      environment: 'your-dev-environment'
In my case, I had to delete and recreate the Release Pipeline and it started working. None of the other answers worked/applied for me.
I had a similar problem: I downloaded a self-hosted agent and ran it. I downloaded it after opening my custom agent pool, so I expected that the agent would be found there. The issue was that the agent was still created in the Default pool (me being a newbie in this). The end result was that I was trying to use the wrong agent pool, which would cause the same issue.
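If the agent did end up registered in a custom pool, a minimal sketch of pointing the job at that pool instead of Default (the pool name MyCustomPool is hypothetical; the demands mirror the ones from the error message):

pool:
  name: MyCustomPool   # hypothetical: the pool your self-hosted agent was actually registered into
  demands:
  - msbuild
  - visualstudio
  - vstest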

Azure Pipeline with task SonarQube https

I added a SonarQube task to my Azure build pipeline. In order to log in to my SonarQube server I need to run a command which uses an SSL trust store.
My pipeline looks like this:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com
      sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit
I am not sure whether the property "sonar.ce.javaAdditionalOpts=-Djavax.net.ssl.trustStore=mvn/sonar.truststore -Djavax.net.ssl.trustStorePassword=changeit" is correct.
I got the error "API GET '/api/server/version' failed, error was: {"code":"UNABLE_TO_VERIFY_LEAF_SIGNATURE"}".
PS: my project is an Angular project.
Any solutions?
Azure Pipeline with task SonarQube https
This issue is related to how the configuration task works. Even if we add the certificate to the Java truststore, the task that sets the configuration uses a different runtime (not Java, at least) to communicate with the server; that's why you still get that certificate error.
To resolve this issue, you could try to set a global variable, NODE_EXTRA_CA_CERTS, and set it to a copy of the root certificate stored locally in a directory. See this article.
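A minimal sketch of how that variable could be set in the pipeline, assuming the root certificate is available in the repository at certs/root-ca.pem (a hypothetical path):

variables:
  # Read by the Node.js runtime the scanner task uses, per the suggestion above
  NODE_EXTRA_CA_CERTS: '$(Build.SourcesDirectory)/certs/root-ca.pem'   # hypothetical path to your root CA certificate

steps:
- task: SonarSource.sonarqube.15B84CA1-B62F-4A2A-A403-89B77A063157.SonarQubePrepare@4
  displayName: 'Prepare analysis on SonarQube'
  inputs:
    SonarQube: abc-sonarqube
    scannerMode: CLI
    configMode: manual
    cliProjectKey: 'abc'
    cliProjectName: 'abc'
    cliSources: src
    extraProperties: |
      sonar.host.url=https://sonarqube.build.abcdef.com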
Check the related ticket for some more details.
Hope this helps.