We are building a CD pipeline using VSTS hosted build servers. The NuGet restore step alone takes more than 3 minutes, which is too much.
How can I make it run faster? Is there any sort of caching system we can use?
UPDATE: Caching is now generally available (docs)
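For NuGet specifically, the documented pattern is to point the global packages folder into the pipeline workspace and cache it with the Cache task. A minimal sketch (the key assumes you commit packages.lock.json files; adjust it to whatever files describe your dependencies):

variables:
  NUGET_PACKAGES: $(Pipeline.Workspace)/.nuget/packages

steps:
- task: Cache@2
  displayName: 'Cache NuGet packages'
  inputs:
    key: 'nuget | "$(Agent.OS)" | **/packages.lock.json'
    restoreKeys: |
      nuget | "$(Agent.OS)"
    path: $(NUGET_PACKAGES)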
Caching is currently on the feature roadmap with a TBD date. In the meantime you can use the Upload Pipeline Artifact/Download Pipeline Artifact tasks to store results in your Azure DevOps account and speed up uploads/downloads.
The work in progress can be tracked here.
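As a rough sketch of that interim approach, you can persist the packages folder as a pipeline artifact and pull it back from the latest successful run. The artifact name and the continue-on-error handling for the first run are my assumptions, and a real setup would also want logic to skip re-publishing an unchanged cache:

steps:
- task: DownloadPipelineArtifact@2
  displayName: 'Download package cache from a previous run'
  continueOnError: true  # the very first run has nothing to download
  inputs:
    buildType: specific
    project: $(System.TeamProject)
    definition: $(System.DefinitionId)
    buildVersionToDownload: latest
    artifactName: nuget-packages
    targetPath: $(Build.SourcesDirectory)/packages

# ... restore/build steps ...

- task: PublishPipelineArtifact@1
  displayName: 'Publish package cache for the next run'
  inputs:
    targetPath: $(Build.SourcesDirectory)/packages
    artifact: nuget-packages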
In the meantime, Microsoft's 1ES team (One Engineering System, an internal organization) has released their internal solution, which uses Universal Packages to store arbitrary packages in your Azure DevOps account. It's very fast because it can sync the delta between previous packages. There is a sample showing how to configure your Azure Pipeline to store the NuGet package cache in your sources directory so the task can cache it.
variables:
  NUGET_PACKAGES: $(Build.SourcesDirectory)/packages
  keyfile: '**/*.csproj, **/packages.config, salt.txt'
  vstsFeed: 'feed name'

steps:
- task: 1ESLighthouseEng.PipelineArtifactCaching.RestoreCache@1
  displayName: 'Restore artifact'
  inputs:
    keyfile: $(keyfile)
    targetfolder: $(NUGET_PACKAGES)
    vstsFeed: $(vstsFeed)
In my scenario, NuGet restore ran quickly when run interactively, but very slowly when run through the CD pipeline (Jenkins). Setting the certificate revocation check mode to offline reduced my NuGet restore times from 13+ minutes to under 30 seconds (I found this solution here).
I set an environment variable in my build script prior to running NuGet restore:
SET NUGET_CERT_REVOCATION_MODE=offline
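If the restore runs from an Azure Pipelines YAML build rather than a batch script, the same switch can be applied as a pipeline variable, since variables are exposed to tasks as environment variables. A minimal sketch:

variables:
  NUGET_CERT_REVOCATION_MODE: offline  # skip the online certificate revocation check

steps:
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    command: restore
    restoreSolution: '**/*.sln'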
Disclaimer: Turning off certificate revocation has implications - see this link.
Having the same issue on a self-hosted agent, but I'm not specifying a password in the yml, just the vstsFeed:
- checkout: self
  submodules: true
  persistCredentials: true
- task: NuGetToolInstaller@1
  inputs:
    versionSpec: 6.2.1
- task: UseDotNet@2
  displayName: Using Dotnet Version 6.0.400
  inputs:
    packageType: 'sdk'
    version: '6.0.400'
- task: DotNetCoreCLI@2
  displayName: Restore Nuget packages
  inputs:
    command: 'restore'
    projects: '**/*.sln'
    feedsToUse: 'select'
    vstsFeed: 'ba05a72a-c4fd-43a8-9505-a97db9bf4d00/6db9ddb0-5c18-4a24-a985-75924292d079'
and it fails with the following error: error NU1301: Unable to load the service index for source
The NuGet feed is in another project of the same organization. I can see that the pipeline produces a temporary nuget.config where it specifies a username and password for this feed during the run. I've been breaking my head for the last 72 hours non-stop to find the issue. Azure Pipelines and NuGet suck - 99% of the problems we've had so far have been with NuGet not working smoothly with Azure Pipelines. Microsoft has to take a step back and resolve these pipeline and NuGet issues.
Just to make sure: the NuGet feed is on the same Azure DevOps instance as the one the agent is registered with, right?
I remember similar issues on my on-premises Azure DevOps Server, but also sometimes on the paid cloud variant. Sometimes it was flaky service state, sometimes the agent itself...
Kevin made a good point about the permissions - if those are set, you're good to go from a permissions point of view. Actually, Reader permission is enough for a restore - make sure to check the Views panel too.
If you still have issues after the permissions check, you might try my "just-making-sure" lines for your .yml file:
# NuGet Authentication (safety step, normally not required as all within the same organization/project)
- task: NuGetAuthenticate@1
  displayName: "NuGet Authentication"
It shouldn't be required, but I have it in all my pipelines since I had such issues, and it reduced the occurrence of the error line you posted in my cases (hybrid DevOps architecture).
Another thing I ended up with is specifying the feeds explicitly in a repository-wide NuGet.Config file, and using this file within my yml files or with script lines instead of tasks.
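A minimal NuGet.Config of that kind might look like the following - the feed name and URL are placeholders for your own organization, project and feed:

<?xml version="1.0" encoding="utf-8"?>
<configuration>
  <packageSources>
    <!-- drop all inherited sources so only the feeds below are used -->
    <clear />
    <add key="MyOrgFeed" value="https://pkgs.dev.azure.com/{organization}/{project}/_packaging/{feed}/nuget/v3/index.json" />
    <add key="nuget.org" value="https://api.nuget.org/v3/index.json" />
  </packageSources>
</configuration>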
If nothing helps, enable diagnostics/verbose logging to get more error details. In the worst case: log in to your agent machine, open a terminal in the same agent work folder and manually issue a dotnet restore command to see what's going on.
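For the logging part, setting the system.debug variable and raising the restore verbosity usually surfaces enough detail - a sketch:

variables:
  system.debug: true  # verbose pipeline logs

steps:
- script: dotnet restore --verbosity detailed
  displayName: 'Restore with detailed logging'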
Post the additional results if still no progress.
Good luck
From your description, you are using a NuGet feed in another project of the same organization.
You need to check the following points:
Check the permission of the Build Service account.
Here are the steps:
Step 1: Navigate to Artifacts -> Target Feed -> Feed Settings -> Permissions.
Step 2: Grant the build service account the Contributor role. The build service account name has the format: [Project name where the pipeline is located] Build Service ([Organization name])
Check whether the option "Limit job authorization scope to current project for non-release pipelines" is enabled in Project Settings -> Settings.
If it is, you need to disable it so the pipeline can use resources outside the project.
Note: to disable this option, you first need to disable it in Organization Settings -> Settings. Then you can disable it at the project level.
TL;DR
I have an Azure DevOps pipeline that uses chocolatey to install dependencies. I would like to cache downloaded chocolatey packages similar to how node dependencies are cached (see below), or use a caching proxy server.
Is it possible to cache downloaded packages from chocolatey to be available at the next run?
If that is not currently easily possible, then is it possible to run a caching proxy server similar to AptCacherNg for chocolatey packages?
Current Setup
I currently have a pipeline set up in Azure DevOps that requires packages from the chocolatey community repository. The step runs the equivalent of:
choco install nasm --confirm --no-progress
I am caching node dependencies using the following:
steps:
- task: Cache@2
  displayName: 'Cache npm packages'
  inputs:
    key: '**/package-lock.json, !**/node_modules/**/package-lock.json, !**/.*/**/package-lock.json'
    path: '$(System.DefaultWorkingDirectory)/node_modules'
I have considered whether it is possible to modify the keys of this step to track the choco packages, or to add a duplicate step that uses this plugin, but I don't know specifically how to do this.
Background
Recently, the website for one of the modules went offline for a few hours. While it was offline, I noticed the logs stated that licensed users were likely unaffected because they cache packages. I checked the license pricing: there is a reasonable $96/year cost for a single-user license covering up to 8 machines, but the license states that using it for business would violate the terms. The business license is $16/year/machine with a minimum of 100 machines, and $1600/year is a bit more than I want to pay at this time for such a small dev team that only needs a few packages installed. They suggested the community edition.
Choco has a handy --cache option that lets you specify the cache location. Use this option together with a dedicated Cache step:
- task: Cache@2
  displayName: 'Cache choco'
  inputs:
    key: 'path_to_a_file_or_just_a_string_that_you_update_manually'
    path: '$(System.DefaultWorkingDirectory)/choco_cache'
- script: choco install nasm --confirm --no-progress --cache $(System.DefaultWorkingDirectory)/choco_cache
  displayName: 'install choco packages'
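If you keep the package list in a chocolatey packages.config, you could key the cache on that file instead of bumping a string by hand - a sketch under that assumption:

- task: Cache@2
  displayName: 'Cache choco (keyed on packages.config)'
  inputs:
    key: 'choco | "$(Agent.OS)" | packages.config'
    path: '$(System.DefaultWorkingDirectory)/choco_cache'
- script: choco install packages.config --confirm --no-progress --cache $(System.DefaultWorkingDirectory)/choco_cache
  displayName: 'install choco packages'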
To achieve continuous deployment, we used a (classic) release pipeline in Azure DevOps to deploy a web service to a VM in our intranet. To benefit from a YAML deployment pipeline, I replaced our former deployment pool agent on that VM with an environment agent.
For the actual deployment, we use the IIS Web App Deploy task. The source directory for this task defaults to $(System.DefaultWorkingDirectory)\**\*.zip, and $(System.DefaultWorkingDirectory) translates to the a subdirectory of the concrete release.
Unfortunately for me, the environment agent downloads the artifact next to the a folder instead of into it, as the deployment pool agent did. Thus the default settings of the deploy task cannot find it. I am aware that I can easily work around this issue by using $(System.DefaultWorkingDirectory)\..\**\*.zip; I was just wondering why Microsoft introduced such a development speed bump into the environment agents.
Is there any way I can make the environment agent download the artifact into $(System.DefaultWorkingDirectory), i.e. a, instead of next to it?
If you are using a deployment job in a YAML pipeline, artifacts will be automatically downloaded to $(Pipeline.Workspace) (the folder next to $(System.DefaultWorkingDirectory)) in deployment jobs. See the extract below from here:
Artifacts from the current pipeline are downloaded to $(Pipeline.Workspace)/.
Artifacts from the associated pipeline resource are downloaded to $(Pipeline.Workspace)/{pipeline resource identifier}/.
All available artifacts from the current pipeline and from the associated pipeline resources are automatically downloaded in deployment jobs and made available for your deployment. To prevent downloads, specify download: none.
To make the environment agent download the artifact into $(System.DefaultWorkingDirectory), you can specify download: none and use the Download Pipeline Artifacts task with its path parameter set to $(System.DefaultWorkingDirectory). See below:
- deployment:
  environment: Dev
  strategy:
    runOnce:
      deploy:
        steps:
        - download: none  # prevent automatic download
        - task: DownloadPipelineArtifact@2
          inputs:
            buildType: 'current'
            targetPath: '$(System.DefaultWorkingDirectory)'  # download to default folder
Another workaround is to change the source directory for the IIS Web App Deploy task from its default to $(Pipeline.Workspace)\**\*.zip.
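That second workaround might look like this; the website name is a placeholder, and the task shown is the IIS Web App Deploy task used in deployment groups/environments:

- task: IISWebAppDeploymentOnMachineGroup@0
  inputs:
    webSiteName: 'MyWebSite'                   # placeholder
    package: '$(Pipeline.Workspace)\**\*.zip'  # search the workspace instead of the default working directory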
When changing from Classic to YAML pipelines for deployments, you will probably need to change $(System.DefaultWorkingDirectory) to $(Pipeline.Workspace). You can verify this with these steps:
- pwsh: Get-ChildItem $(System.DefaultWorkingDirectory) -Recurse
  displayName: Check System.DefaultWorkingDirectory
- pwsh: Get-ChildItem $(Pipeline.Workspace) -Recurse
  displayName: Check Pipeline.Workspace
(Yes, this is an awfully verbose way of doing it. It will, though, give you more complete insight into the file structure supporting your job.)
I'm setting up a multi-stage Azure DevOps YAML pipeline for a .NET Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My task looks like this:
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However, the AWSPowerShellModuleScript tasks are quite slow to initialise. The PowerShell itself is very fast, but the task requires approximately 1.5 minutes to set up.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time-consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container and then installing the AWS PowerShell tools every time it needs to run one of these tasks.
This seems very wasteful and inefficient, so I was wondering if there might be some better way to handle it - for example, caching the built container after the PowerShell tools are installed, or using an existing image with the tools already installed.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it just bothers me that this repetitive install process is re-run every time I need to run a simple PowerShell script.
Also, I should mention that I'm using a hosted DevOps agent: vmImage: 'windows-2019'
Just in case it helps, this is from the task log output:
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name Version Source Summary
---- ------- ------ -------
nuget 2.8.5.208 https://onege... NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses nuget to install it??
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
Using a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, so the tool needs to be installed on every pipeline run.
A stage is one or more jobs, which are units of work assignable to the same machine. Using a Microsoft-hosted agent, each stage generally uses a separate agent, so the tool will be installed in each stage.
In a word, a Microsoft-hosted agent is not able to cache tools. In order to pre-install the tool, or avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
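If self-hosted agents are not an option, one speculative variation on the caching idea from the question is to cache a module folder yourself and point PSModulePath at it. Whether the AWSPowerShellModuleScript task honours that path depends on how it probes for the module, so treat this strictly as an experiment:

steps:
- task: Cache@2
  displayName: 'Cache AWS Tools module'
  inputs:
    key: 'psmodules | "$(Agent.OS)" | awstools'  # bump the key to refresh the module
    path: '$(Pipeline.Workspace)/psmodules'
- pwsh: |
    # Download the module into the cached folder on a cache miss
    if (-not (Test-Path '$(Pipeline.Workspace)/psmodules/AWSPowerShell')) {
      Save-Module -Name AWSPowerShell -Path '$(Pipeline.Workspace)/psmodules'
    }
    # Expose the cached folder to later steps via PSModulePath
    Write-Host "##vso[task.setvariable variable=PSModulePath]$(Pipeline.Workspace)/psmodules;$env:PSModulePath"
  displayName: 'Warm AWS Tools module'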
We have DevOps pipelines being run on self-hosted build servers, building a large solution (~50 projects, currently a mix of full framework and .NET Standard) with a lot of NuGet packages being referenced. Unfortunately the codebase is far too large to be pulled onto the standard Azure build servers.
We use the NuGet restore step (v4.9.1) against the standard NuGet feed and a couple of secured customer DevOps feeds, but the restore time has gone from ~2 minutes to more than 10 minutes. At the outset of this, the NuGet restore was failing completely, allegedly due to an authentication timeout. We have managed to get it working again, but it now takes these much longer times, using the following settings:
steps:
- task: NuGetCommand@2
  displayName: 'NuGet restore'
  inputs:
    restoreSolution: src/DesktopComponents.sln
    feedsToUse: config
    nugetConfigPath: nuget.config
    disableParallelProcessing: true
    restoreDirectory: .packages
An example of the log output can be seen at https://1drv.ms/u/s!AjPEk97mW_qqh7wfcn3bCwctYIQ6wA?e=jWEXqF
Any assistance that anyone could offer would be much appreciated. If there are any logs/settings that would be useful, please let me know.
Thanks in advance
Mark Middlemist