I have an Azure Repo with a pipeline that calls a script when triggered. The script needs a few dependencies to perform the work. Is there a way to have the dependencies available by default, to avoid having to install them every time the script is triggered?
If you want to avoid installing dependencies each time the pipeline runs, you need to set up your own self-hosted agent.
From https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops&tabs=browser
Self-hosted agents give you more control to install dependent software
needed for your builds and deployments. Also, machine-level caches and
configuration persist from run to run, which can boost speed.
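For reference, a minimal sketch of pointing a pipeline at a self-hosted pool, assuming the dependencies have already been installed on that machine (the pool name and script path below are placeholders, not values from the question):

pool:
  name: 'MySelfHostedPool'        # placeholder: the pool your self-hosted agent is registered in

steps:
- script: ./run-my-script.sh      # placeholder script; its dependencies persist on the agent between runs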
Related
I'm running Azure pipelines on a Windows self-hosted agent. One of my pipelines can do both a 32-bit build and a 64-bit build. I want to use the matrix and maxParallel capabilities to do both builds at once, on the same agent, to save time.
This isn't possible, because the 32-bit build and the 64-bit build both write to the registry, and whichever gets there second errors out.
The obvious solution is to get a second Azure VM and run a second self-hosted agent on that VM. But I want to see if I can run the two build tasks as two different users, on the theory that they will then write to their own HKCU and not clobber each other.
This would require the default pipeline to trigger a second pipeline, or perhaps run a template, and run it as a different user.
Can this be done?
OTHER USEFUL INFO:
On an Azure DevOps skill-level scale of Beginner-Intermediate-Expert, I'm smack in the middle of Intermediate. Still learning.
The build step uses the built-in VSBuild task.
You can trigger a second pipeline (e.g. by using the Trigger Build Task extension), but pipelines don't have a concept of running as a user - they run on an agent. That agent runs as a specific user, and it would be tricky to try to execute code as a different user.
Running a second self-hosted agent is a good direction. You don't necessarily need another VM - you could run another agent on the same machine, but as a different user and with a different work directory.
You could use agent capabilities and demands to fine-tune which kind of build runs on which agent.
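For illustration only, a sketch of routing the two builds to two agents registered on the same machine under different user accounts - the pool name, agent names, and VSBuild inputs are placeholders, not values from the question:

jobs:
- job: Build_x86
  pool:
    name: 'MySelfHostedPool'            # placeholder pool name
    demands:
    - Agent.Name -equals BuildAgent32   # agent running as the first user
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: 'x86'

- job: Build_x64
  pool:
    name: 'MySelfHostedPool'
    demands:
    - Agent.Name -equals BuildAgent64   # agent running as the second user
  steps:
  - task: VSBuild@1
    inputs:
      solution: '**/*.sln'
      platform: 'x64'

This sketch uses two explicit jobs with different demands rather than a matrix, since each job carries its own demands block.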
When using Azure DevOps I notice that occasionally my pull request builds will fail. After some tracking down I noticed that this is only happening when another build is already running.
It seems that the reason is that the files in the build output (exe, dll, node_modules, etc.) are locked, so when another build is started the new build fails until the currently running one is finished, and then I have to manually re-queue the build.
I am not very familiar with Azure DevOps pipelines, since we recently migrated to this platform, and I'm not sure of the best way to fix this issue. The solutions being built include .NET Framework, .NET Core, TypeScript, and Node.js, if that helps at all.
I would love to post the logs and current configuration, but due to company policy I'm not allowed to... :(
Azure DevOps build pipelines fail when another build is already running
You could add a capability, such as Agent.Name, to those two specific build agents, and then set that capability as a demand in the build definition.
As stated here:
How to send TFS build to a specific agent or server
The Capabilities of the agent:
Project Settings->Agent pools->Your agent pool-> Agents->Agent->Capabilities
The Demands of the build pipeline:
Options-> Demands:
In this case, when a pipeline is running on this particular agent, another new build will stay in a pending state until the current build is completed.
I'm using a self-hosted agent in Azure Pipelines and I installed Terraform 0.13 there. When I use the Terraform tasks in Azure DevOps, I pass '-plugin-dir=/usr/local/bin/.terraform.d/plugins' as commandOptions to skip plugin downloading. Unfortunately, Terraform downloads the plugins into the artifact, which makes it much heavier than it should be. Also, the next stage (the deployment stage) uses only the plugins from the artifact, not the ones on our agent.
We do not have much space on our virtual machine, which is why we want to avoid unnecessary downloads.
In addition, we defined .terraformrc in the home directory with a plugin directory, and we added the environment variable as described here:
https://www.terraform.io/docs/commands/cli-config.html#provider-installation
Thank you in advance!
You can try to set the -get-plugins=false option.
-get-plugins=false — Skips plugin installation. Terraform will use plugins installed in the user plugins directory, and any plugins already installed for the current working directory. If the installed plugins aren't sufficient for the configuration, init fails.
This is stated in this document.
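As a rough sketch only - the task name and version below are assumptions, so adapt the inputs to whichever Terraform task you already use - the option can be passed through commandOptions:

- task: TerraformTaskV2@2                  # assumption: replace with your actual Terraform task
  displayName: terraform init (no plugin download)
  inputs:
    command: 'init'
    workingDirectory: '$(System.DefaultWorkingDirectory)/terraform'   # placeholder path
    commandOptions: '-get-plugins=false -plugin-dir=/usr/local/bin/.terraform.d/plugins'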
Eventually I did it another way - I used the Cache task from Azure Pipelines. Here's a solution from ITNext:
https://itnext.io/infrastructure-as-code-iac-with-terraform-azure-devops-f8cd022a3341
This is how I described the Cache task: describe the keys well and choose the right path.
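A minimal sketch of such a Cache@2 step, where the key and path are placeholders and need to match where your provider plugins actually end up:

- task: Cache@2
  displayName: Cache Terraform plugins
  inputs:
    key: 'terraform | "$(Agent.OS)" | terraform/versions.tf'                  # placeholder key parts
    restoreKeys: |
      terraform | "$(Agent.OS)"
    path: '$(System.DefaultWorkingDirectory)/terraform/.terraform/plugins'    # placeholder path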
I'm setting up a multi-stage Azure Devops yaml pipeline for a .Net Framework application.
Part of the pipeline will involve using the AWSPowerShellModuleScript task to configure load balancer rules in AWS.
My Task looks like so...
- task: AWSPowerShellModuleScript@1.7.0
  name: SetupLoadBalancerRules
  inputs:
    awsCredentials: 'My AWS Service Connection'
    regionName: 'ap-southeast-2'
    scriptType: 'filepath'
    filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
Everything is working correctly. However, the AWSPowerShellModuleScript tasks are quite slow to initialise. The PowerShell itself is very fast, but the task requires approximately 1.5 minutes to set up.
I'm running 2 of these tasks in different stages of my pipeline, so this adds 3 minutes to the total time. This may not seem like a lot, but the application itself is quite small, so the setup for these tasks is actually the most time consuming part of the pipeline.
As far as I can tell, it seems that the pipeline is starting a generic container, and then installing the AWS Powershell tools, every time it needs to run one of these tasks.
This seems to be very wasteful and inefficient, so I was wondering if there might be some better way to handle it, for example, caching the built container after the powershell tools are installed, or use an existing image with the tools already installed etc.
I'm very new to using the yaml pipelines, so I'm not sure what's possible.
I like my pipelines to be as efficient as possible, so it just bothers me that this repetitive install process is re-run every time I need to run a simple PowerShell script.
Also, I should mention that I'm using a hosted DevOps agent... vmImage: 'windows-2019'
Just in case it helps. This is from the task log output...
Checking install status for AWS Tools for Windows PowerShell module.
AWS Tools for Windows PowerShell module not found.
Installing AWS Tools for Windows PowerShell module to current user scope
Name Version Source Summary
---- ------- ------ -------
nuget 2.8.5.208 https://onege... NuGet provider for the OneGet meta-package manager
So it determines that the AWS Tools are not installed, and then possibly uses nuget to install it??
I thought perhaps I could use a cache task to cache the install, but even if I could find where the tools are installed to, it seems unlikely that simply restoring the folder would be sufficient.
Using a Microsoft-hosted agent, each time you run a pipeline you get a fresh virtual machine, so the tool needs to be installed on every run.
A stage is one or more jobs, which are units of work assignable to the same machine. With Microsoft-hosted agents, each stage generally uses a separate agent, so the tool will be installed in each stage.
In short, Microsoft-hosted agents are not able to cache tools. To pre-install the tool, or to avoid installing it every time, you could deploy self-hosted Windows agents and install the tool on every machine running the agent service.
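To illustrate the point about jobs being assigned to one machine, here is a sketch that keeps both AWS steps inside a single job, so the module installed by the first task is still present for the second - the first task block is copied from the question, and whether this restructuring fits a multi-stage pipeline depends on your deployment flow:

jobs:
- job: ConfigureAws
  pool:
    vmImage: 'windows-2019'
  steps:
  - task: AWSPowerShellModuleScript@1.7.0
    name: SetupLoadBalancerRules
    inputs:
      awsCredentials: 'My AWS Service Connection'
      regionName: 'ap-southeast-2'
      scriptType: 'filepath'
      filePath: 'pipeline-scripts/manage-aws-load-balancer-rules.ps1'
  # A second AWSPowerShellModuleScript step placed here runs on the same agent,
  # so the AWS Tools module installed by the first step is reused.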
My team has multiple Concourse pipelines and as we refactor tasks, we've realized the need to test our actual pipelines.
We already test our tasks by using environment variables enabling task scripts to be run locally, but the pipeline yaml is another matter.
What is the best way to accomplish testing of the pipeline itself?
You can use the Concourse Pipeline Resource to monitor the git repository where you keep your pipeline config. Whenever the pipeline resource detects a change, it will automatically run a fly set-pipeline to update the config in your running Concourse installation. From there, it's easy to script tests against the updated pipeline that is now running in your Concourse installation.
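A rough sketch of wiring that up might look like the following - the target URL, team, credentials, repository, and file paths are placeholders, and the field names should be checked against the concourse-pipeline-resource documentation:

resource_types:
- name: concourse-pipeline
  type: docker-image
  source:
    repository: concourse/concourse-pipeline-resource

resources:
- name: pipeline-config                        # the repo holding your pipeline yaml (placeholder)
  type: git
  source:
    uri: https://example.com/ci-config.git
- name: running-pipelines
  type: concourse-pipeline
  source:
    target: https://ci.example.com             # placeholder Concourse URL
    teams:
    - name: main
      username: ((ci-username))
      password: ((ci-password))

jobs:
- name: set-pipelines
  plan:
  - get: pipeline-config
    trigger: true
  - put: running-pipelines
    params:
      pipelines:
      - name: my-pipeline
        team: main
        config_file: pipeline-config/ci/pipeline.yml   # placeholder path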
fly validate-pipeline is pretty useful; running it against pipelines before merging has caught a few bugs in "obviously correct" changes for me.
If you want to test the whole pipeline before merging you need to make sure that the data it's using is static and working (no sense in failing the pipeline if it's the repo that's broken), and that there are no side effects (like notifications) shared between the 'real pipeline' and the 'test pipeline'. I suspect that as long as you're careful with the restrictions, you could make it work, but it would have to be designed in the context of your existing pipelines and infrastructure.