Authenticating with Azure Repos Git module sources in an Azure Pipelines build

I'm currently creating a pipeline for Azure DevOps to validate and apply a Terraform configuration to different subscriptions.
My Terraform configuration uses modules, which are "hosted" in other repositories in the same Azure DevOps project as the Terraform configuration.
Sadly, when I try to perform terraform init to fetch those modules, the pipeline task hangs there waiting for credentials input.
As recommended in the pipeline documentation on running Git commands in a script, I tried to add a checkout step with the persistCredentials: true attribute.
From what I can see in the log of the task (see below), the credential information is added specifically to the current repo and is not usable for other repos.
The command performed when adding persistCredentials:true
2018-10-22T14:06:54.4347764Z ##[command]git config http.https://my-org@dev.azure.com/my-org/my-project/_git/my-repo.extraheader "AUTHORIZATION: bearer ***"
The output of terraform init task
2018-10-22T14:09:24.1711473Z terraform init -input=false
2018-10-22T14:09:24.2761016Z Initializing modules...
2018-10-22T14:09:24.2783199Z - module.my-module
2018-10-22T14:09:24.2786455Z Getting source "git::https://my-org@dev.azure.com/my-org/my-project/_git/my-module-repo?ref=1.0.2"
How can I set up the Git credentials to work for other repositories?

You have essentially two ways of doing this.
Prerequisite
Make sure that you read and, depending on your needs, apply the "Enable scripts to run Git commands" section from the "Run Git commands in a script" doc.
Solution #1: dynamically insert the System.AccessToken (or a PAT, but I would not recommend it) at pipeline runtime
You could do this either by:
inserting a replacement token such as __SYSTEM_ACCESSTOKEN__ in your code (as Nilsas suggests) and using some token-replacement code or the qetza.replacetokens.replacetokens-task.replacetokens task to insert the value. The disadvantage of this solution is that you would also have to replace the token when you run your Terraform locally.
using some code to replace all git::https://dev.azure.com text with git::https://YOUR_ACCESS_TOKEN@dev.azure.com.
I used the second approach, with the following bash task script (it searches terragrunt files, but you can adapt it to Terraform files without much change):
- bash: |
    find $(Build.SourcesDirectory)/ -type f -name 'terragrunt.hcl' -exec sed -i 's~git::https://dev.azure.com~git::https://$(System.AccessToken)@dev.azure.com~g' {} \;
Abu Belai offers a PowerShell script to do something similar.
This type of solution does not work, however, if the modules in your Terraform modules' Git repo themselves call modules in another Git repo, which was our case.
Solution #2: globally adding the access token to the extraheader of the URL of your Terraform modules' Git repos
This way, all the module repos, whether called directly by your code or indirectly by the called modules' code, will be able to use your access token. I did so by adding the following step before the terraform/terragrunt calls:
- bash: |
    git config --global http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git config --global http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
You will need to set the extraheader for each of the called git repos.
Beware that you might need to unset the extraheader after your Terraform calls if your pipeline sets the extraheader several times on the same worker, because Git can get confused by multiple extraheader declarations. You do this by adding the following step:
- bash: |
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader
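If you have more than a couple of module repos, the set and unset steps can be driven by a loop instead. A minimal sketch, assuming a hypothetical list of repo URLs (replace the placeholders with your own org/project/repo names):
- bash: |
    # hypothetical list of module repo URLs; replace with your own
    REPOS="https://dev.azure.com/<your-org>/<project-a>/_git/<repo-a>
           https://dev.azure.com/<your-org>/<project-b>/_git/<repo-b>"
    for repo in $REPOS; do
      # set the bearer token header for each module repo
      git config --global http.${repo}.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    done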

I had the same issue; what I ended up doing is tokenizing SYSTEM_ACCESSTOKEN in the Terraform configuration. I used the tokenization task in Azure DevOps, where a __ prefix and suffix are used to identify and replace tokens with actual variables (it is customizable, but I find double underscores best for not interfering with any code that I have).
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace tokens'
  inputs:
    targetFiles: |
      **/*.tfvars
      **/*.tf
    tokenPrefix: '__'
    tokenSuffix: '__'
Something like find $(Build.SourcesDirectory)/ -type f -name 'main.tf' -exec sed -i 's~__SYSTEM_ACCESSTOKEN__~$(System.AccessToken)~g' {} \; would also work if you do not have the ability to install custom extensions in your DevOps organization.
My terraform main.tf looks like this:
module "app" {
source = "git::https://token:__SYSTEM_ACCESSTOKEN__#dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules//azure/app-service?ref=__app-service-module-ver__"
....
}
It's not beautiful, but it gets the job done. Module source (at the time of writing) does not support variable input from Terraform. So what we can do is use Terrafile, an open-source project that helps with keeping up with modules and the different versions of the same module you might use, by keeping a simple YAML file next to your code. It seems it is no longer actively maintained; however, it just works: https://github.com/coretech/terrafile
my example of Terrafile:
app:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "feature/handle-twitter"
app-stable:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "1.0.5"
Terrafile by default downloads your modules to the ./vendor directory, so you can point your module source to something like:
module "app" {
source = "./vendor/modules/app-stable/azure/app_service"
....
}
Now you just have to figure out how to execute the terrafile command in the directory where the Terrafile is present.
My azure.pipelines.yml example:
- script: curl -L https://github.com/coretech/terrafile/releases/download/v0.6/terrafile_0.6_Linux_x86_64.tar.gz | tar xz -C $(Agent.ToolsDirectory)
  displayName: Install Terrafile

- script: |
    cd $(Build.Repository.LocalPath)
    $(Agent.ToolsDirectory)/terrafile
  displayName: Download required modules

I did this
_ado_token.ps1
# used in Azure DevOps to allow Terraform to auth with Azure DevOps Git repos
$tfmodules = Get-ChildItem $PSScriptRoot -Recurse -Filter "*.tf"
foreach ($tfmodule in $tfmodules) {
    $content = [System.IO.File]::ReadAllText($tfmodule.FullName).Replace("git::https://myorg@", "git::https://" + $env:SYSTEM_ACCESSTOKEN + "@")
    [System.IO.File]::WriteAllText($tfmodule.FullName, $content)
}
azure-pipelines.yml
- task: PowerShell@2
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    filePath: '_ado_token.ps1'
    pwsh: true
  displayName: '_ado_token.ps1'

I solved the issue by creating a pipeline template that runs an inline PowerShell script. I then pull in the template as a "resource" when using any Terraform module from a different repo.
The script does a recursive search for all the .tf files, then uses a regex to update all the module source URLs.
I chose regex over tokenizing the module URL because this makes sure the modules can be pulled in on a development machine without any changes to the source.
parameters:
- name: terraform_directory
  type: string

steps:
- task: PowerShell@2
  displayName: Tokenize TF-Module Sources
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    targetType: 'inline'
    script: |
      $regex = "https://*(.+)dev.azure.com"
      $tokenized_url = "https://token:$($env:SYSTEM_ACCESSTOKEN)@dev.azure.com"
      Write-Host "Recursive Search in ${{ parameters.terraform_directory }}"
      $tffiles = Get-ChildItem -Path "${{ parameters.terraform_directory }}" -Filter "*main.tf" -Recurse -Force
      Write-Host "Found $($tffiles.Count) files ending with 'main.tf'"
      if ($tffiles) { Write-Host $tffiles }
      $tffiles | % {
          Write-Host "Updating file $($_.FullName)"
          $content = Get-Content $_.FullName
          Write-Host "Replace Strings: $($content | Select-String -Pattern $regex)"
          $content -replace $regex, $tokenized_url | Set-Content $_.FullName -Force
          Write-Host "Updated content"
          Write-Host (Get-Content $_.FullName)
      }
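To consume the template from another repo, the calling pipeline has to declare it as a repository resource first. A minimal sketch, where the repository alias, project/repo name, and template file name are all placeholders:
resources:
  repositories:
  - repository: templates                  # alias, hypothetical
    type: git
    name: MyProject/pipeline-templates     # placeholder project/repo holding the template

steps:
- template: tokenize-tf-modules.yml@templates   # placeholder template file name
  parameters:
    terraform_directory: $(Build.SourcesDirectory)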

As far as I can see, the best way to do this is exactly the same as with any other Git provider; Azure DevOps is the only place I have ever come across the extraheader approach. I have always used the following, and after not being able to get a satisfactory result with the other suggested approaches, I went back to it:
- script: |
    MY_TOKEN=foobar
    git config --global url."https://${MY_TOKEN}@dev.azure.com".insteadOf "https://dev.azure.com"
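In a pipeline you would typically swap the hard-coded value for the job access token. A minimal sketch, assuming $(System.AccessToken) has read access to the module repos:
- script: |
    # substitute the job access token for the placeholder value above
    git config --global url."https://${SYSTEM_ACCESSTOKEN}@dev.azure.com".insteadOf "https://dev.azure.com"
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)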

I don't think you can. Usually, you create another build and link to the artifacts from that build to use them in your current definition. That way you don't need to connect to a different Git repository.
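For example, a download step along these lines can pull a published artifact from another pipeline; the project name, definition ID, and artifact name below are placeholders:
- task: DownloadBuildArtifacts@0
  inputs:
    buildType: 'specific'
    project: 'MyProject'               # placeholder
    pipeline: '42'                     # placeholder definition ID of the other build
    buildVersionToDownload: 'latest'
    artifactName: 'my-artifact'        # placeholder
    downloadPath: '$(System.ArtifactsDirectory)'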

Related

How to create and save files using PowerShell in GitHub action?

I'm new to GitHub Actions. I want to use PowerShell and save the output of a command to a file in my repository, but I don't know the right way to do it.
name: get hash
on: [workflow_dispatch]
jobs:
  build:
    name: Run Script
    runs-on: windows-latest
    steps:
      - uses: actions/checkout@v3
      - name: Script
        run: ./script.ps1
        shell: pwsh
Using the Get-FileHash command, I'm trying to learn to create a simple GitHub workflow that saves the hash of a file to a text file in my repository.
This is the script.ps1 file content:
$wc = [System.Net.WebClient]::new()
$pkgurl = 'File on the web'
$FileHashSHA512 = Get-FileHash -Algorithm SHA512 -InputStream ($wc.OpenRead($pkgurl))
$FileHashSHA512.Hash > .\sss.txt
I know it's not practical, but I want to learn the basics of workflows. The command works on my computer; I just don't get how GitHub workflows work when it comes to saving files to my repository.
I also want to make this command work in the same workflow:
Invoke-WebRequest -Uri $pkgurl -OutFile .\file.zip
but again, I don't know how to make the workflow save the file in the repository's root.
Since these are very basic, I'm trying to create/write the workflow files myself and not use other people's pre-made actions.
The actions runner is just a PC that clones the repo and runs a few commands in it. So when you create a new file on the runner, you have to add it to the repo on the runner, create a commit and push that to the repo over on GitHub.
It might help to understand that the repo in GitHub and the runner in GitHub Actions are two completely separate things. It's not like the actions runner somehow runs 'inside' your repo.
So one way to save files is to run:
git add fileyoujustcreated.txt
git commit -m "updating file"
git push
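A minimal workflow step along those lines might look like this (a sketch: the file name comes from the question's script, the bot identity is a common convention, and the workflow's GITHUB_TOKEN needs contents: write permission for the push to succeed):
- name: Commit and push generated file
  run: |
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add sss.txt
    git commit -m "updating file"
    git push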
Here is a sample GitHub action I use to update a file in my repo each time the content changes:
Composite action that changes a set of files and commits them
Workflow that calls the composite action
Alternatively, you can edit files in GitHub using the REST API, so you could invoke:
curl \
  -X PUT \
  -H "Accept: application/vnd.github+json" \
  -H "Authorization: Bearer <YOUR-TOKEN>" \
  -H "X-GitHub-Api-Version: 2022-11-28" \
  https://api.github.com/repos/OWNER/REPO/contents/PATH \
  -d '{"message":"my commit message","committer":{"name":"Monalisa Octocat","email":"octocat@github.com"},"content":"bXkgbmV3IGZpbGUgY29udGVudHM="}'
Or the equivalent in PowerShell.
You can use the ${{ github.token }} variable to authenticate.
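For instance, wrapped in a workflow step (a sketch reusing the placeholder OWNER/REPO/PATH from above; bash is used so the line continuations work):
- name: Update file via REST API
  shell: bash
  run: |
    curl -X PUT \
      -H "Accept: application/vnd.github+json" \
      -H "Authorization: Bearer ${{ github.token }}" \
      -H "X-GitHub-Api-Version: 2022-11-28" \
      https://api.github.com/repos/OWNER/REPO/contents/PATH \
      -d '{"message":"my commit message","content":"bXkgbmV3IGZpbGUgY29udGVudHM="}'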
See:
https://docs.github.com/en/rest/repos/contents?apiVersion=2022-11-28#create-or-update-file-contents

How to download build artifact from other versions (runs) published build artifact?

My pipeline publishes two different build artifacts when all its tests have passed - stage: publish_pipeline_as_build.
One of my tests needs to use the build that was made in the current run, of the current version.
But additionally, I need to get the build artifact of the previous version, in order to run some compatibility tests.
How do I download the build artifact from that other pipeline run?
I know the build artifact name (from a runtime script), but how would I find the run it belongs to?
I tried playing around with azure-cli's az pipelines runs artifact list. It requires a --run-id, and my script won't have that.
So far I kind of managed, assuming the response of az pipelines runs list returns the latest match to the query first:
az pipelines runs list --project PROJNAME --query "[?sourceBranch=='refs/heads/releases/R21.3.0.2']" | jq '.[0]'
I'm currently running out of ideas.
Perhaps just some confused/frustrated questions that pop up:
How can I find that specific build artifact name's latest version and download it?
How are pipeline tasks fed with runtime-generated values?
Is this so ridiculously difficult in Azure DevOps, or am I just going the wrong way?
The job I'm trying to get there with:
jobs:
- job: test_session_integration
  dependsOn: easysales_Build
  steps:
  - template: ./utils/cache_yarn_and_install.yml
  - template: ./utils/update_webdriver.yml
  - template: ./utils/download_artifact.yml
    parameters:
      artifact: easysales_$(Build.BuildId)_build
      path: $(System.DefaultWorkingDirectory)/dist
  # current release name as output
  - template: ./utils/get_release_name.yml
  # previous release name, branch and build name output
  - template: ./utils/get_prev_release.yml
  # clone prev version manually - can't use output variables as task input
  # (BTW: why? that is super inconvenient, is there really no way?)
  - bash: |
      git clone --depth 1 -b $(get_prev_release.BRANCH_NAME) \
        "https://${REPO_USERNAME}:${REPO_TOKEN}@dev.azure.com/organisation/PROJECTNAME/_git/frontend-app" \
        ./reference
    workingDirectory: $(System.DefaultWorkingDirectory)
    env:
      REPO_TOKEN: $(GIT_AUTH_TOKEN)
      REPO_USERNAME: $(GIT_AUTH_USERNAME)
    name: clone_reference_branch
Any clues?
I'd be glad for any rubber-ducking hints or clues on how to achieve what I need.
I'm new to Azure DevOps and currently struggle to find my way around the vast but rather piecemeal documentation Microsoft offers. Is it just me having this problem?
All stages and full YAML on pastebin
The main template with the stages (Expanded templates made with "download full YAML"):
stages:
- stage: install_prepare
  displayName: install & prepare
  jobs:
  - template: az_templates/install_hls_lib_build_job.yml
- stage: test_and_build
  displayName: test and build projects
  dependsOn: install_prepare
  jobs:
  - template: az_templates/build_projects_jobs.yml
  - template: az_templates/test_session_integration_job.yml
- stage: publish_pipeline_as_build
  displayName: Publish finished project artifacts as builds
  dependsOn: test_and_build
  jobs:
  - template: az_templates/build_artifact_publish_jobs.yml
I by now found a solution. Perhaps not a definitive one, but it should more or less work:
In my library variable group I added an Azure DevOps personal access token (PAT) with read access to the necessary scopes, as ADO_PAT_TOKEN.
I sign in to azure-cli with that token.
I get the latest run id with the Azure CLI:
- bash: |
    az devops configure --defaults organization=$(System.CollectionUri)
    az devops configure --defaults project=$(System.TeamProject)
    echo "$AZ_DO_TOKEN" | az devops login
    AZ_QUERY="[?sourceBranch=='refs/heads/$(prev_release.BRANCH_NAME)'] | [0].id"
    ID=$(az pipelines runs list --query-order FinishTimeDesc --query "$AZ_QUERY")
    echo "##vso[task.setvariable variable=ID;isOutput=true]$ID"
  env:
    AZ_DO_TOKEN: $(ADO_PAT_TOKEN)
  name: prev_build_run
I then download the artifact with azure-cli and the queried run id:
- bash: |
    az pipelines runs artifact download \
      --artifact-name 'easysales_$(prev_release.PREV_RELEASE_VERSION)' \
      --run-id $(prev_build_run.ID) \
      --path '$(System.DefaultWorkingDirectory)/reference/dist/easySales'
  workingDirectory: $(System.DefaultWorkingDirectory)
  name: download_prev_release_build_artifact
This roughly seems to work for me now... finally 😉
Missing
The personal access token I added to the secrets may work, but as far as I can see, these tokens cannot be created with an expiry date more than one year in the future.
That is not ideal, since I don't want my pipeline to stop working when perhaps no one around knows how to fix it.
Perhaps someone knows how I can use the Azure CLI within the current pipeline without separate authentication, given it only accesses the current organization and project?
Or does anyone see a more elegant solution to my admittedly clumsy one?
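One option worth trying (a sketch; it assumes the job access token has enough scope for your project) is to feed $(System.AccessToken) to az devops login instead of a PAT, which removes the expiry problem:
- bash: |
    # log in with the job access token instead of a long-lived PAT
    echo "$SYSTEM_ACCESSTOKEN" | az devops login
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)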

Avoid git clean with Azure Devops self-hosted Build Agent

I have a YAML build script in an Azure DevOps-hosted Git repository which gets triggered across 7 build agents running on a local VM. Every time this runs, the build performs a git clean, which takes a significant amount of time due to a large node_modules folder.
The MSDN page here seems to suggest this is configurable but shows no detail of how to configure it. I can't tell whether this is a setting that should be specified on the agent, in the YAML script, within DevOps on the pipeline, or where.
Is there any other documentation I'm missing, or is this not possible?
Update:
The start of the YAML file is here:
variables:
  BUILD_VERSION: 1.0.0.$(Build.BuildId)
  buildConfiguration: 'Release'
  process.clean: false

jobs:
###### ######################################################
###### 1 - Build and publish .NET
#############################################################
- job: net_build_publish
  displayName: .NET build and publish
  pool:
    name: default
  steps:
  - script: echo $(BUILD_VERSION)
  - task: DotNetCoreCLI@2
    displayName: dotnet build $(buildConfiguration)
    inputs:
      command: 'build'
      projects: |
        myrepo/**/API/*.csproj
      arguments: '-c $(buildConfiguration) /p:Version=$(BUILD_VERSION)'
The complete YAML is a lot longer, but the output from the first job includes this output in a Checkout task:
Starting: Checkout myrepo@master to s
==============================================================================
Task : Get sources
Description : Get sources from a repository. Supports Git, TfsVC, and SVN repositories.
Version : 1.0.0
Author : Microsoft
Help : [More Information](https://go.microsoft.com/fwlink/?LinkId=798199)
==============================================================================
Syncing repository: myrepo (Git)
Prepending Path environment variable with directory containing 'git.exe'.
git version
git version 2.26.2.windows.1
git lfs version
git-lfs/2.11.0 (GitHub; windows amd64; go 1.14.2; git 48b28d97)
git config --get remote.origin.url
git clean -ffdx
Removing myrepo/Data/Core/API/bin/
Removing myrepo/Data/Core/API/customersettings.json
Removing myrepo/Data/Core/API/obj/
Removing myrepo/Data/Core/Shared/bin/
Removing myrepo/Data/Core/Shared/obj/
....
We have another job further down which runs npm install and npm build for an Angular project, and every build in the pipeline is taking 5 minutes to perform the npm install step, possibly because of this git clean when retrieving the repository?
1. Click on your pipeline to show the run history
2. Click Edit
3. Click the 3-dot kebab menu
4. Click Triggers
5. Click YAML
6. Click Get Sources
7. Set Clean to False and Save
To say this is obfuscated is an understatement!
I can't say what effect this will have, though. I think the agent reuses the same folder each time a pipeline runs, and I'm not a Node.js developer, so I don't know what leaving old node_modules hanging around will do!
P.S. What people were saying about pipeline caching is not, I think, what you were asking. Also, pipeline caching zips up the cached folder, uploads it to your artifact storage, and downloads it again each run, so if you only have one build agent, not doing a git clean might actually be more efficient; I'm not 100% sure.
You need to calculate a hash before you run npm install. If the hash is the same as the one kept next to node_modules, you can skip installing dependencies. This may help you achieve that:
steps:
- task: PowerShell@2
  displayName: 'Calculate and save package-lock.json hash'
  inputs:
    targetType: 'inline'
    pwsh: true
    script: |
      # generates a hash of package-lock.json
      $newHash = (Get-FileHash -Algorithm MD5 -Path (Get-ChildItem package-lock.json)).Hash
      $hashPath = "$(System.DefaultWorkingDirectory)/cache-npm/hash.txt"
      if (Test-Path -Path $hashPath) {
        if ($newHash -eq (Get-Content $hashPath)) {
          # hashes match, so the dependencies are unchanged
          Write-Host "no need to install node_modules"
          Write-Host "##vso[task.setvariable variable=NodeModulesAreUpToDate;]true"
        } else {
          # hashes differ: save the new hash and let npm install run below
          $newHash > $hashPath
          Write-Host ("Hash file saved to " + $hashPath)
        }
      } else {
        # first run: save the hash and install
        $newHash > $hashPath
        Write-Host ("Hash file saved to " + $hashPath)
      }
      Write-Host (Get-Content $hashPath)
    workingDirectory: '$(System.DefaultWorkingDirectory)/cache-npm'
- script: npm install
  workingDirectory: '$(Build.SourcesDirectory)/cache-npm'
  condition: ne(variables['NodeModulesAreUpToDate'], true)
git clean -ffdx will clean any changes untracked by source control in the sources. You may try pipeline caching, which can help reduce build time by allowing the outputs or downloaded dependencies from one run to be reused in later runs, reducing or avoiding the cost to recreate or redownload the same files again. Check the following link:
https://learn.microsoft.com/en-us/azure/devops/pipelines/release/caching?view=azure-devops#nodejsnpm
variables:
  npm_config_cache: $(Pipeline.Workspace)/.npm

steps:
- task: Cache@2
  inputs:
    key: 'npm | "$(Agent.OS)" | package-lock.json'
    restoreKeys: |
      npm | "$(Agent.OS)"
    path: $(npm_config_cache)
  displayName: Cache npm
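Note that the Cache task only restores npm's download cache, so an install step still has to run after it; the linked docs follow the cache task with npm ci, which then pulls packages from the restored cache instead of the network:
- script: npm ci
  displayName: npm ci (reuses the restored npm cache)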
In the checkout step, we can set the boolean option clean to true or false. The default is true, so it runs git clean by default.
Below is a minimal example with clean set to false.
jobs:
- job: Build_Job
  timeoutInMinutes: 0
  pool: 'PoolOne'
  steps:
  - checkout: self
    clean: false
    submodules: recursive
  - task: PowerShell@2
    displayName: Make build
    inputs:
      targetType: 'inline'
      script: |
        bash -c 'make'
More documentation and related options can be found here

Azure DevOps pipeline for deploying only changed arm templates

We have a project with repo on Azure DevOps where we store ARM templates of our infrastructure. What we want to achieve is to deploy templates on every commit on master branch.
The question is: is it possible to define one pipeline which triggers a deployment of only the ARM templates changed in that commit? Let's go with an example. We have 3 templates in the repo:
t1.json
t2.json
t3.json
The latest commit changed only t2.json. In this case we want the pipeline to deploy only t2.json, as t1.json and t3.json haven't been changed in this commit.
Is it possible to create one universal pipeline, or should we rather create a separate pipeline for every template, triggered by a commit on a specific file?
It is possible to define only one pipeline to deploy the changed template. You need to add a script task to get the changed template file name in your pipeline.
It is easy to get the changed files using the git command git diff-tree --no-commit-id --name-only -r <commitId>. When you get the changed file's name, you need to assign it to a variable using the logging command ##vso[task.setvariable variable=VariableName]value. Then you can set the csmFile parameter like this: csmFile: '**\$(fileName)' in the AzureResourceGroupDeployment task.
You can check the below YAML pipeline for an example:
- powershell: |
    # get the changed template
    $a = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    # assign the filename to a variable
    echo "##vso[task.setvariable variable=fileName]$a"

- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\$(fileName)'
It is also easy to define multiple pipelines so that only the changed template is deployed. You only need to add a paths trigger for the specific template file in each pipeline, so that a changed template file only triggers its corresponding pipeline.
trigger:
  paths:
    include:
    - pathTo/template1.json

...

- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\template1.json'
Hope above helps!
What you asked for is not supported out of the box. From what I understood, you want triggers (based on file changes) per step or per job (depending on how you organize your pipeline). However, I'm not sure you need this: deploying an ARM template which was not changed will not affect your Azure resources if you use Create Or Update Resource Group (doc here).
You can also try to manually detect which file was changed (using PowerShell and git commands, for instance), then set a flag and later use this flag to fire or skip some steps, but it looks like overkill for what you want to achieve.
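A rough sketch of that manual detect-and-flag idea, reusing the git diff-tree command from the first answer (the t2.json file name and the deployT2 variable are placeholders):
- powershell: |
    # list the files changed in the triggering commit
    $changed = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    # raise a flag when the template of interest is among them
    if ($changed -match 't2.json') {
      Write-Host "##vso[task.setvariable variable=deployT2]true"
    }

- task: AzureResourceGroupDeployment@2
  condition: eq(variables['deployT2'], 'true')
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\t2.json'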

How do I obtain (recreate) the bearer token used in AzureDevOps Pipelines?

I have a YAML Infrastructure-as-Code deployment that is failing at the first step:
- task: ArchiveFiles@1
  displayName: 'Archive createADPDC.ps1 DSC files'
  inputs:
    rootFolder: 'Core/Templates/createADPDC.ps1'
    includeRootFolder: false
    replaceExistingArchive: true
    archiveFile: '$(Build.ArtifactStagingDirectory)/createADPDC.ps1.zip'
To troubleshoot this, I've started a line-by-line attempt to simulate what's being done on the hosted pipeline servers, and I'm getting stuck at the bearer token. Unless there is a better way to diagnose why files are missing from the ArtifactStagingDirectory, I'm running the commands below to inspect the files and structure being downloaded.
git init "C:\a\1\s"
Initialized empty Git repository in C:/a/1/s/.git/
git remote add origin https://MyLabs@dev.azure.com/MyLabs/Core/_git/Core
git config gc.auto 0
git config --get-all http.https://MyLabs@dev.azure.com/MyLabs/Core/_git/Core.extraheader
git config --get-all http.proxy
git -c http.extraheader="AUTHORIZATION: bearer ***" fetch --force --tags --prune --progress --no-recurse-submodules origin
fatal: Authentication failed for 'https://dev.azure.com/MyLabs/Core/_git/Core/'
Question
Either:
What is a better way to determine or understand why the ArchiveFiles task would return
[error]ENOENT: no such file or directory, stat 'D:\a\1\s\Core\Templates\createADPDC.ps1'
What is the correct way to obtain the bearer token (a PAT?) for use in the command line shown in the logs?
So it's probably a good idea to get a handle on the directory structure used within the pipeline.
\agent\_work\1    $(Agent.BuildDirectory)
\agent\_work\1\a  $(Build.ArtifactStagingDirectory)
\agent\_work\1\b  $(Build.BinariesDirectory)
\agent\_work\1\s  $(Build.SourcesDirectory)
$(Agent.BuildDirectory) where all folders for a given build pipeline are created
$(Build.ArtifactStagingDirectory) artifacts are copied to before being pushed to their destination.
$(Build.BinariesDirectory) you can use as an output folder for compiled binaries
$(Build.SourcesDirectory) where your source code files are downloaded
Links for Variables and SystemAccessToken
From the error message, it looks like the rootFolder location is relative to the $(Build.SourcesDirectory). To get a good look at your files inside the $(Agent.BuildDirectory) I like to use the tree command.
- task: PowerShell@2
  displayName: tree $(Agent.BuildDirectory)
  inputs:
    targetType: 'inline'
    script: 'tree /F'
    pwsh: true
    workingDirectory: '$(Agent.BuildDirectory)'
Are you sure the directory is correct?
You can access the token in pipeline scripts by using $(System.AccessToken).
Make sure you enable persistCredentials at the job level in your YAML, as sketched below.
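A minimal sketch of what that looks like (the fetch command is just illustrative):
jobs:
- job: build
  steps:
  - checkout: self
    persistCredentials: true   # keeps the bearer token in git config for later steps
  - script: git fetch --force --tags --prune origin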