Nuget restore task taking an excessive length of time - azure-devops

We have DevOps pipelines being run on self-hosted build servers, building a large solution (~50 projects, currently a mix of full framework and NetStandard) with a lot of NuGet packages being referenced. Unfortunately the codebase is far too large to be pulled onto the standard Azure build servers.
We use the NuGet restore step (v4.9.1) against the standard NuGet feed & a couple of secured customer DevOps feeds, but the restore time has gone from ~2 minutes to more than 10 minutes. At the outset the NuGet restore was failing completely, allegedly due to an authentication timeout. We have managed to get it working again, but it takes these much longer times with the following settings:
steps:
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: src/DesktopComponents.sln
    feedsToUse: config
    nugetConfigPath: nuget.config
    disableParallelProcessing: true
    restoreDirectory: .packages
An example of the log output can be seen at https://1drv.ms/u/s!AjPEk97mW_qqh7wfcn3bCwctYIQ6wA?e=jWEXqF
Any assistance that anyone could offer would be much appreciated. If there are any logs/settings that would be useful please let me know
Thanks in advance
Mark Middlemist

Related

Azure pipelines on a self hosted agent gives error NU1301: Unable to load the service index for source during dotnet restore

Having the same issue on a self-hosted agent, but I'm not specifying a password in the YAML, just the vstsFeed:
- checkout: self
  submodules: true
  persistCredentials: true
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: 6.2.1
- task: UseDotNet@2
  displayName: Using Dotnet Version 6.0.400
  inputs:
    packageType: 'sdk'
    version: '6.0.400'
- task: DotNetCoreCLI@2
  displayName: Restore Nuget packages
  inputs:
    command: 'restore'
    projects: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'ba05a72a-c4fd-43a8-9505-a97db9bf4d00/6db9ddb0-5c18-4a24-a985-75924292d079'
and it fails with the following error: NU1301: Unable to load the service index for source.
The NuGet feed is in another project of the same organization. I can see that the pipeline produces a temporary nuget.config where it specifies a username and password for this feed during the run. I've been breaking my head for the last 72 hours non-stop trying to find the issue. Azure Pipelines and NuGet suck; 99% of the problems we've had so far have been with NuGet not working smoothly with Azure Pipelines. Microsoft has to take a step back and resolve the pipelines and NuGet issues.
Just to make sure: the NuGet feed is on the same Azure DevOps instance the agent is registered with, right?
I remember similar issues on my on-premises Azure DevOps Server, but also sometimes on the paid cloud variant. Sometimes it was a flaky service state, sometimes the agent itself.
Kevin made a good point about permissions - if those are set, you're good to go from a permissions point of view (Reader permission is actually enough for a restore) - but make sure to check the Views panel too.
If after the permissions check you still have issues, you might try my "just making sure" lines in your .yml file:
# NuGet Authentication (safety step, normally not required as all within the same organization/project)
- task: NuGetAuthenticate@1
  displayName: "Nuget Authentication"
It shouldn't be required, but I have it on all my pipes since I had such issues, and it reduced the occurrence of the error you posted in my cases (hybrid DevOps architecture).
Another thing I ended up doing is specifying the feeds explicitly in a repository-wide "NuGet.Config" file, and using this file within my yml files or with script lines instead of tasks.
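For illustration, a minimal sketch of that script-line approach, assuming a NuGet.Config at the repository root and a placeholder solution path (both names are placeholders, not from the original post):

# Restore via a plain script line against the repository's own NuGet.Config
- script: dotnet restore src/MySolution.sln --configfile NuGet.Config
  displayName: 'dotnet restore against repo NuGet.Config'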
If nothing helps, enable diagnostics/verbose logging to get more error details. In the worst case: log in to your agent machine, open a terminal in the agent's work folder and manually issue a dotnet restore command to see what's going on.
Post the additional results if still no progress.
Good luck
From your description, you are using a NuGet feed from another project in the same organization.
You need to check the following points:
Check the permissions of the Build Service account.
Here are the steps:
Step 1: Navigate to Artifacts -> Target Feed -> Feed Settings -> Permissions.
Step 2: Grant the build service account the Contributor role. The build service account is named: {Project name the pipeline is located in} Build Service ({Organization name}).
Also check whether the Limit job authorization scope to current project for non-release pipelines option is enabled in Project Settings -> Settings.
If it is, you need to disable the option so that the pipeline can use resources outside the project.
Note: to disable this option, you need to disable it in Organization Settings -> Settings first. Then you can disable it at the project level.

GitHub Actions, Azure DevOps "Publish Pipeline Artifact" Equivalent?

It looks like Microsoft is likely going to shy away from Azure DevOps and lean more heavily on GitHub Actions as its primary automation platform (speculation, not sure if it's true), so I am trying to move all of my automation off DevOps onto GitHub Actions, and in doing so I noticed some gaps in feature parity.
In this specific case, I am wondering if there is an equivalent to Azure DevOps "Publish Pipeline Artifacts" task in GitHub Actions?
The closest thing I can find in GitHub Actions is actions/upload-artifact@v2, however this more closely resembles Azure DevOps' "Publish build artifacts". I get the use case and understand what I could use it for, but I want to see if I can upload an entire pipeline/workflow output as a package, rather than file by file.
In Azure DevOps, my pipeline runs in 5-7 minutes because I can use the "Publish Pipeline Artifacts" task, but in GitHub Actions I only have the actions/upload-artifact@v2 action, and now it takes up to 3 hours to do the same automation tasks (an insane difference!). I think the added time is due to the upload/publish step in GitHub Actions going file by file, whereas in Azure DevOps the upload/publish task somehow condenses it all and only takes ~1 minute to finish.
Any/All help is greatly appreciated! My Google Fu is not coming up with anything atm.
It is slow because:
GZip is used internally to compress individual files before starting an upload.
So not only is each file sent individually, but each file is also compressed individually. Your best workaround at the moment would be to compress the whole directory, as riQQ already wrote.
It can be done like this:
- name: 'Tar files'
  run: tar -cvf my_files.tar /path/to/my/directory
- name: 'Upload Artifact'
  uses: actions/upload-artifact@v2
  with:
    name: my-artifact
    path: my_files.tar
A big drawback is that you now need to unpack the artifact each time you download it.
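For completeness, the matching download side would look something like this (a sketch only; the artifact and tar file names follow the example above):

- name: 'Download Artifact'
  uses: actions/download-artifact@v2
  with:
    name: my-artifact
- name: 'Untar files'
  run: tar -xvf my_files.tar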
For more details please check this topic - Upload artifact dir is very slow

Project code is not being analyzed by SonarQube

I have a repo in Azure DevOps with only one folder, test.
Now, I have set up the task in this way in Azure DevOps, but I cannot see the code getting analyzed in SonarQube; the Code tab shows blank. Could someone help me with where I am going wrong? I do not want to give a folder name in sources; I want whatever code I add to the branch to be analyzed.
Edit: I just realized this is happening only for a short-lived feature branch. My SonarQube version is 8.0.
steps:
- task: SonarQubePrepare@4
  inputs:
    SonarQube: 'connection name'
    scannerMode: 'CLI'
    configMode: 'manual'
    cliProjectKey: 'pipeline-sonar-demo'
    cliProjectName: 'pipeline-sonar-demo'
    cliSources: "."
    extraProperties: |
      # Additional properties that will be passed to the scanner,
      # Put one key=value per line, example:
      sonar.exclusions=**/*.xml
The SonarQube extension provides three tasks you will use in your build definitions to analyze your projects:
Prepare Analysis Configuration task, to configure all the required settings before executing the build. This task is mandatory. In the case of .NET solutions or Java projects, it helps to integrate seamlessly with MSBuild, Maven and Gradle tasks.
Run Code Analysis task, to actually execute the analysis of the source code. This task is not required for Maven or Gradle projects, because the scanner will be run as part of the Maven/Gradle build.
Publish Quality Gate Result task, to display the Quality Gate status in the build summary and give you a sense of whether the application is ready for production "quality-wise". This task is optional. It can significantly increase the overall build time because it will poll SonarQube until the analysis is complete. Omitting this task will not affect the analysis results on SonarQube - it simply means the Azure DevOps Build Summary page will not show the status of the analysis or a link to the project dashboard on SonarQube.
It seems you still need to add the Run Code Analysis task. Regarding how to use SonarScanner for Azure DevOps, please refer to the following documentation:
https://docs.sonarqube.org/latest/analysis/scan/sonarscanner-for-azure-devops/
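For illustration, a minimal sketch of the full sequence (exact task versions may differ in your setup; SonarQubeAnalyze and SonarQubePublish are the Run Code Analysis and Publish Quality Gate Result tasks from the extension):

steps:
- task: SonarQubePrepare@4
  inputs:
    SonarQube: 'connection name'
    scannerMode: 'CLI'
    configMode: 'manual'
    cliProjectKey: 'pipeline-sonar-demo'
    cliProjectName: 'pipeline-sonar-demo'
    cliSources: '.'
# ... your build/test steps go here ...
- task: SonarQubeAnalyze@4      # Run Code Analysis
- task: SonarQubePublish@4      # Publish Quality Gate Result (optional)
  inputs:
    pollingTimeoutSec: '300'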

Azure DevOps Pipeline: Possible to cache task container?

I'm setting up a multi-stage Azure DevOps YAML pipeline for a .NET Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My Task looks like so...
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However, the AWSPowerShellModuleScript tasks are quite slow to initialise. The PowerShell script itself is very fast, but the task takes approximately 1.5 minutes to set up.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container, and then installing the AWS Powershell tools, every time it needs to run one of these tasks.
This seems to be very wasteful and inefficient, so I was wondering if there might be some better way to handle it, for example, caching the built container after the powershell tools are installed, or use an existing image with the tools already installed etc.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it just bothers me that this repetitive install process re-runs every time I need to run a simple PowerShell script.
Also, I should mention that I'm using a hosted DevOps agent... vmImage: 'windows-2019'
Just in case it helps. This is from the task log output...
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name Version Source Summary
---- ------- ------ -------
nuget 2.8.5.208 https://onege... NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses NuGet to install them?
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
With a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, so the tool needs to be installed on every run.
A stage is one or more jobs, which are units of work assignable to the same machine. With Microsoft-hosted agents, each stage generally uses a separate agent, so the tool will be installed in each stage.
In short, Microsoft-hosted agents cannot cache tools. To pre-install the tool, or avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
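If you do switch to a self-hosted agent, pre-installing the module is a one-off step per agent machine. As a sketch (assuming the monolithic AWSPowerShell module the log above refers to, rather than the modular AWS.Tools variant):

# Idempotent step: install the AWS Tools module once so the AWSPowerShellModuleScript
# task finds it already present on later runs of this self-hosted agent.
- powershell: |
    if (-not (Get-Module -ListAvailable -Name AWSPowerShell)) {
      Install-Module -Name AWSPowerShell -Scope CurrentUser -Force
    }
  displayName: 'Ensure AWS Tools for Windows PowerShell is installed'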

How to make the Nuget restore work faster?

We are building a CD pipeline using VSTS hosted build servers. It takes more than 3 minutes to restore NuGet packages. This is too much time.
How can I make it run faster? Is there any sort of caching system we can use?
UPDATE: Caching is now generally available (docs)
Caching is currently on the feature pipeline with a TBD date. In the meantime you can use the Upload Pipeline Artifact/Download Pipeline Artifact tasks to store results in your Azure DevOps account and speed up uploads/downloads.
The Work-in-progress can be tracked here.
In the meantime, Microsoft's 1ES (One Engineering System, an internal organization) has released its internal solution, which uses Universal Packages to store arbitrary packages in your Azure DevOps account. It's very fast because it can sync the delta between previous packages. There is a sample on how to configure your Azure Pipeline to store the NuGet package cache in your sources directory in order for the task to cache them.
variables:
  NUGET_PACKAGES: $(Build.SourcesDirectory)/packages
  keyfile: '**/*.csproj, **/packages.config, salt.txt'
  vstsFeed: 'feed name'

steps:
- task: 1ESLighthouseEng.PipelineArtifactCaching.RestoreCache@1
  displayName: 'Restore artifact'
  inputs:
    keyfile: $(keyfile)
    targetfolder: $(NUGET_PACKAGES)
    vstsFeed: $(vstsFeed)
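Since the update above, a similar result can be had with the built-in Cache@2 task. A minimal sketch for NuGet, assuming packages.lock.json files are committed (key and path follow the Microsoft pipeline-caching docs):

variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: 'Cache NuGet packages'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
      nuget
    path: $(NUGET_PACKAGES)
# NUGET_PACKAGES redirects the global-packages folder into the cached path,
# so the restore step below reuses whatever the cache step restored.
- task: NuGetCommand@2
  inputs:
    command: 'restore'
    restoreSolution: '**/*.sln'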
In my scenario, NuGet restore ran quickly when run interactively, but very slowly when run through the CD pipeline (Jenkins). Setting the revocation check mode to offline reduced my NuGet restore times from 13+ minutes to under 30 seconds (I found this solution here).
I set an environment variable in my build script prior to running Nuget restore:
SET NUGET_CERT_REVOCATION_MODE=offline
Disclaimer: Turning off certificate revocation has implications - see this link.
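In an Azure Pipelines YAML build, the same switch can be set as a pipeline variable (pipeline variables are surfaced to tasks as environment variables), for example:

variables:
  NUGET_CERT_REVOCATION_MODE: 'offline'  # skip certificate revocation checks during restore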