How do I obtain (recreate) the bearer token used in Azure DevOps Pipelines? - azure-devops

I have a YAML Infrastructure-as-Code deployment that is failing at the first YAML step:
- task: ArchiveFiles@1
  displayName: 'Archive createADPDC.ps1 DSC files'
  inputs:
    rootFolder: 'Core/Templates/createADPDC.ps1'
    includeRootFolder: false
    replaceExistingArchive: true
    archiveFile: '$(Build.ArtifactStagingDirectory)/createADPDC.ps1.zip'
To troubleshoot this, I've started a line-by-line attempt to simulate what's being done on the hosted pipeline servers, and am getting stuck at the bearer token. Unless there is a better way to diagnose why files are missing from ArtifactStagingDirectory, I'm running the commands below to inspect the files and structure that's being downloaded.
git init "C:\a\1\s"
Initialized empty Git repository in C:/a/1/s/.git/
git remote add origin https://MyLabs@dev.azure.com/MyLabs/Core/_git/Core
git config gc.auto 0
git config --get-all http.https://MyLabs@dev.azure.com/MyLabs/Core/_git/Core.extraheader
git config --get-all http.proxy
git -c http.extraheader="AUTHORIZATION: bearer ***" fetch --force --tags --prune --progress --no-recurse-submodules origin
fatal: Authentication failed for 'https://dev.azure.com/MyLabs/Core/_git/Core/'
Question
Either:
1. What is a better way to determine or understand why the ArchiveFiles task would return
##[error]ENOENT: no such file or directory, stat 'D:\a\1\s\Core\Templates\createADPDC.ps1'
2. What is the correct way to obtain the bearer token (a PAT?) for use in the command line shown in the logs?

So it's probably a good idea to get a handle on the directory structure used within the pipeline.
\agent\_work\1     $(Agent.BuildDirectory)
\agent\_work\1\a   $(Build.ArtifactStagingDirectory)
\agent\_work\1\b   $(Build.BinariesDirectory)
\agent\_work\1\s   $(Build.SourcesDirectory)
$(Agent.BuildDirectory): where all folders for a given build pipeline are created
$(Build.ArtifactStagingDirectory): where artifacts are copied to before being pushed to their destination
$(Build.BinariesDirectory): a folder you can use as an output folder for compiled binaries
$(Build.SourcesDirectory): where your source code files are downloaded
See the documentation for predefined variables and System.AccessToken.
From the error message, it looks like the rootFolder location is relative to the $(Build.SourcesDirectory). To get a good look at your files inside the $(Agent.BuildDirectory) I like to use the tree command.
- task: PowerShell@2
  displayName: tree $(Agent.BuildDirectory)
  inputs:
    targetType: 'inline'
    script: 'tree /F'
    pwsh: true
    workingDirectory: '$(Agent.BuildDirectory)'

Are you sure the directory is correct?
You can access the job's access token in pipeline scripts by using $(System.AccessToken).
Make sure you enable persistCredentials on the checkout step in your YAML.
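For example, a minimal sketch showing both pieces together: persistCredentials keeps the checkout credentials configured for later git commands, and $(System.AccessToken) can also be passed explicitly, mirroring the command from the logs:
steps:
- checkout: self
  persistCredentials: true
- script: |
    # reuse the job's OAuth token instead of a PAT
    git -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" fetch --force --tags --prune origin
  displayName: Fetch using the job access token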

Related

Checkov scan particular folder or PR custom branch files

Trying to run Checkov (for IaC validation) via Azure DevOps YAML pipelines, for ARM template files stored in Azure DevOps version control. The code below:
trigger: none
pool:
  vmImage: ubuntu-latest
stages:
- stage: 'runCheckov'
  displayName: 'Checkov - Scan ARM files'
  jobs:
  - job: 'RunCheckov'
    displayName: 'Checkov solution'
    steps:
    - bash: |
        docker pull bridgecrew/checkov
      workingDirectory: $(System.DefaultWorkingDirectory)
      displayName: 'Pull bridgecrew/checkov image'
    - bash: |
        docker run \
          --volume $(pwd):/scripts bridgecrew/checkov \
          --directory /scripts \
          --output junitxml \
          --soft-fail > $(pwd)/CheckovReport.xml
      workingDirectory: $(System.DefaultWorkingDirectory)
      displayName: 'Run checkov'
    - task: PublishTestResults@2
      inputs:
        testRunTitle: 'Checkov run results'
        failTaskOnFailedTests: false
        testResultsFormat: 'JUnit'
        testResultsFiles: 'CheckovReport.xml'
        searchFolder: '$(System.DefaultWorkingDirectory)'
        mergeTestResults: false
        publishRunAttachments: true
      displayName: 'Publish Test results'
The problem: how do I change the path/folder of the ARM templates to scan? Right now it scans all ARM templates found under my whole repo1, regardless of what directory value I set.
Also, how do I scan the PR files committed to a custom branch during PR review, so that the build is triggered but scans only the files in that custom branch? I know how to trigger the build via the DevOps repository settings, but again, how do I make sure the build pipeline uses/scans only the files of a particular PR commit, not the whole repo1 (and master branch)?
I recommend you use the Docker image bridgecrew/checkov to set up a container job to run the Checkov scan. A container job runs all the tasks of the job inside a Docker container started from this image.
In the container job, you can check out the source repository into the container, then use a script task (such as the Bash task) to run the Checkov CLI to scan the files. On the script task, you can use the workingDirectory option to specify the path/folder the command lines run in. Normally, the command lines will only act on files in the specified directory and its subdirectories.
If you want to scan only the files of a specific branch in the job, you can clone/check out that branch into the working directory of the job in the container, then, as mentioned above, run the Checkov CLI against the specified directory.
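However the container is run, narrowing the scan comes down to pointing Checkov at a subfolder. A sketch adapting the asker's docker run step (the arm-templates subfolder is a hypothetical path; adjust it to wherever your templates live):
- bash: |
    # scan only the arm-templates subfolder of the mounted workspace
    docker run \
      --volume $(pwd):/scripts bridgecrew/checkov \
      --directory /scripts/arm-templates \
      --output junitxml \
      --soft-fail > $(pwd)/CheckovReport.xml
  workingDirectory: $(System.DefaultWorkingDirectory)
  displayName: 'Run checkov on a single folder'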
[UPDATE]
In the pipeline job, you can try to call the Azure DevOps REST API "Commits - Get Changes" to get all the changed files and folders for the particular commit.
Then use the Checkov CLI with the parameter --directory (-d) or --file (-f) to scan the specified file or folder.
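A sketch of that flow, assuming jq is available on the agent and that the changed paths come back under changes[].item.path (verify the response shape for your api-version):
- bash: |
    # List the files changed in the commit that triggered this build,
    # then scan each one individually with --file. Response shape assumed;
    # see the "Commits - Get Changes" REST API docs for your api-version.
    changed=$(curl -s -u ":$(System.AccessToken)" \
      "$(System.CollectionUri)$(System.TeamProject)/_apis/git/repositories/$(Build.Repository.Name)/commits/$(Build.SourceVersion)/changes?api-version=6.0" \
      | jq -r '.changes[].item.path')
    for f in $changed; do
      checkov --file ".$f" --soft-fail
    done
  displayName: 'Scan only files changed in this commit'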

How to copy specific files from one directory to another directory in the same repository in Azure DevOps

I am moving specific files (e.g. 5 JSON files) from one directory to another directory in the same repository. I was able to extract the files and publish them to DevOps successfully, but those specific files were not uploaded to the main root directory of the Azure DevOps repository where all the other files are.
Steps performed:
1. Downloaded the specific files from the old repository as a zip folder on my local machine.
2. Uploaded the same zip file to the new Azure DevOps repository where I want to copy those files (see Screenshot 1).
3. Created a pipeline which includes the following tasks:
- Used the Extract Files built-in task with the zip file in the archive file pattern and $(Build.SourcesDirectory) as the destination folder.
- Copied all the extracted files from $(Build.SourcesDirectory) to $(Build.ArtifactStagingDirectory).
- Published artifacts from $(Build.ArtifactStagingDirectory) to the 'Azure Pipelines' artifact publish location.
Up to this point the artifact has been generated and published successfully, and I can see the specific extracted files there (e.g. 5 files; see Screenshot 2).
Screenshot 3 shows where I want to copy the specific files.
source (zip file) = main root directory of Azure DevOps Repository XX
Target = main root directory of Azure DevOps Repository XX
The YAML below will move the .json files within an Azure Git repository:
trigger:
- none
pool:
  vmImage: windows-latest
variables:
- name: system.debug
  value: true
steps:
- checkout: self
  persistCredentials: true
- task: PythonScript@0
  inputs:
    scriptSource: 'inline'
    script: |
      # cut json files from dir1 to dir2
      import os
      import shutil

      sourcefolder = "./dir1"
      targetfolder = "./dir2"

      def move_json_file_fromxtox(sourcefolder, targetfolder):
          try:
              for root, dirs, files in os.walk(sourcefolder):
                  for file in files:
                      if file.endswith(".json"):
                          shutil.move(os.path.join(root, file), targetfolder)
          except Exception as e:
              print(e)

      move_json_file_fromxtox(sourcefolder, targetfolder)
- task: PowerShell@2
  inputs:
    targetType: 'inline'
    script: |
      git config --global user.email "<email address here>"
      git config --global user.name "<user name here>"
      git add .
      git commit -m 1
      git push origin HEAD:main
You need these settings before running the pipeline: the build service account must have permission to contribute to the repository, since the pipeline pushes a commit.
This was my original repository structure; after running the above pipeline, the files are moved successfully and the repository structure reflects the change.

Can we replace the nuget.config file with command parameters?

I am working on an Azure pipeline that runs on a Windows self-hosted agent.
We configured an Artifacts feed with an upstream source to connect to NuGet.
As we are behind a firewall, this seems to be the only way to connect to NuGet.
My pipeline was working with this nuget.config file:
<packageSources>
  <clear />
  <add key="FeedName" value="https://***.pkgs.visualstudio.com/***/_packaging/FeedName/nuget/v3/index.json" />
</packageSources>
And this YAML:
- task: NuGetAuthenticate@0
- task: CmdLine@2
  inputs:
    script: '"C:\dotnet\dotnet.exe" publish ${{ parameters.solutionToPublishPath }} -c ${{ parameters.buildConfiguration }} -o $(Build.BinariesDirectory)'
The nuget.config file breaks the previous pipelines in TeamCity!
To keep the old pipeline running while I work on the new one, I am looking for a way to move the information from the nuget.config file into the script.
Is it possible?
I tried with this:
- task: CmdLine@2
  inputs:
    script: '"C:\dotnet\dotnet.exe" add "src/project/project.API.csproj" package FeedName -s https://***.pkgs.visualstudio.com/***/_packaging/FeedName/nuget/v3/index.json'
I get this message, which to me indicates that it tried to reach nuget.org directly and failed; this is exactly why we use a feed.
error: Unable to load the service index for source https://api.nuget.org/v3/index.json.
error: Response status code does not indicate success: 302 (Moved Temporarily).
Thanks for any help
You may check the Replace Tokens extension to see whether it helps you. It can replace tokens in files with variable values during the pipeline run.
I would not call it a solution, as I can't move the nuget.config information out of the file onto the command line; I'll remove the file to let TeamCity work and put it back when running Azure pipelines. Thanks.
We are overriding nuget.config in our Azure DevOps pipeline script with DotNetCoreCLI@2 and restoreArguments:
- task: DotNetCoreCLI@2
  displayName: Restore
  inputs:
    command: 'restore'
    projects: |
      $(buildProjects)
      !$(testProjects)
    restoreArguments: --source https://api.nuget.org/v3/index.json --source $(Build.SourcesDirectory)/Nugets
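If you want to keep the asker's CmdLine approach instead, the dotnet CLI also accepts --source on publish (it applies to the implicit restore), so the feed URL can move from nuget.config onto the command line. A sketch, assuming NuGetAuthenticate@0 still runs first and reusing the feed URL from the original nuget.config:
- task: NuGetAuthenticate@0
- task: CmdLine@2
  inputs:
    # --source overrides the package sources for the implicit restore
    script: '"C:\dotnet\dotnet.exe" publish ${{ parameters.solutionToPublishPath }} -c ${{ parameters.buildConfiguration }} -o $(Build.BinariesDirectory) --source https://***.pkgs.visualstudio.com/***/_packaging/FeedName/nuget/v3/index.json'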

How to keep secure files after a job finishes in Azure Devops Pipeline?

Currently I'm working on a pipeline script for Azure DevOps. I want to provide a Maven settings file as a secure file for the pipeline. The problem is that when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried to define a job with a DownloadSecureFile task and a copy command to get the settings file, but when the next job starts the file isn't there anymore and therefore can't be used.
I already verified that by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
# some variables
# ...
trigger:
  branches:
    include:
    - stable
    - master
jobs:
- job: Latest_Release
  condition: eq(variables['Build.SourceBranchName'], 'master')
  steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: |
      cp $(settingsxml.secureFilePath) ./settings.xml
      docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
      docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
      docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
# ...
# other jobs
other jobs
I wanted to put the DownloadSecureFile task and the cp $(settingsxml.secureFilePath) ./settings.xml command into their own job, because more jobs need this file for other branches/releases and I don't want to copy the exact same code into all jobs.
This is the YAML file as I wanted it:
# some variables
# ...
trigger:
  branches:
    include:
    - stable
    - master
jobs:
- job: provide_maven_settings
  # no condition because all branches need the file
  steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: |
      cp $(settingsxml.secureFilePath) ./settings.xml
- job: Latest_Release
  condition: eq(variables['Build.SourceBranchName'], 'master')
  steps:
  - script: |
      docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
      docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
      docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
# ...
# other jobs
In my dockerfile the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
# can't find the file when executing this COPY:
COPY settings.xml /root/.m2/
WORKDIR /tmp/
RUN mvn install
...
The error happens when docker build is started, because it can't find the settings file. It does work when I use my first YAML example. I have a feeling it has something to do with each job having its own "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps runs on a different agent, so when you use Microsoft-hosted agents and split the pipeline into several jobs, a file copied in one job won't exist in the next job, which runs on a fresh agent.
You can solve your issue by using a self-hosted agent (the file is copied onto your machine, and the second job runs on the same machine).
Or you can upload the file somewhere else (secured) and download it in the second job (but then, why not just fetch the secure file there from the start?).
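A third option, which avoids repeating the same code in every job without needing a separate job at all, is a step template; a minimal sketch (the file name maven-settings-steps.yml is hypothetical):
# maven-settings-steps.yml (hypothetical file name)
steps:
- task: DownloadSecureFile@1
  name: settingsxml
  displayName: Download maven settings xml
  inputs:
    secureFile: settings.xml
- script: cp $(settingsxml.secureFilePath) ./settings.xml
  displayName: Copy settings.xml into the workspace
Each job that needs the file then starts with - template: maven-settings-steps.yml before its docker steps, so the download runs on whichever agent the job lands on.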

Authenticating with Azure Repos git module sources in an Azure Pipelines build

I'm currently creating a pipeline for Azure DevOps to validate and apply a Terraform configuration to different subscriptions.
My Terraform configuration uses modules that are "hosted" in other repositories in the same Azure DevOps project as the Terraform configuration.
Sadly, when I try to perform terraform init to fetch those modules, the pipeline task hangs, waiting for credentials input.
As recommended in the Pipeline Documentation on Running Git Commands in a script, I tried to add a checkout step with the persistCredentials: true attribute.
From what I can see in the task log (see below), the credentials are added specifically for the current repo and are not usable for other repos.
The command performed when adding persistCredentials: true:
2018-10-22T14:06:54.4347764Z ##[command]git config http.https://my-org@dev.azure.com/my-org/my-project/_git/my-repo.extraheader "AUTHORIZATION: bearer ***"
The output of terraform init task
2018-10-22T14:09:24.1711473Z terraform init -input=false
2018-10-22T14:09:24.2761016Z Initializing modules...
2018-10-22T14:09:24.2783199Z - module.my-module
2018-10-22T14:09:24.2786455Z Getting source "git::https://my-org@dev.azure.com/my-org/my-project/_git/my-module-repo?ref=1.0.2"
How can I set up the git credentials to work for other repositories?
You have essentially two ways of doing this.
Pre-requisite
Make sure that you read and, depending on your needs, apply the "Enable scripts to run Git commands" section from the "Run Git commands in a script" doc.
Solution #1: dynamically insert the System.AccessToken (or a PAT, but I would not recommend that) at pipeline runtime
You could do this either by:
inserting a replacement token such as __SYSTEM_ACCESSTOKEN__ in your code (as Nilsas suggests) and using some token-replacement code or the qetza.replacetokens.replacetokens-task.replacetokens task to insert the value. The disadvantage of this solution is that you would also have to replace the token when you run your Terraform locally.
using some code to replace all git::https://dev.azure.com text with git::https://YOUR_ACCESS_TOKEN@dev.azure.com.
I used the second approach, with the following Bash task script (it targets terragrunt files, but you can adapt it to Terraform files without much change):
- bash: |
    find $(Build.SourcesDirectory)/ -type f -name 'terragrunt.hcl' -exec sed -i 's~git::https://dev.azure.com~git::https://$(System.AccessToken)@dev.azure.com~g' {} \;
Abu Belai offers a PowerShell script to do something similar.
This type of solution does not work, however, if the modules in your Terraform modules git repo themselves call modules in another git repo, which was our case.
Solution #2: adding globally the access token in the extraheader of the url of your terraform modules git repos
This way, all the module repos, whether called directly by your code or indirectly by the called modules' code, will be able to use your access token. I did so by adding the following step before the terraform/terragrunt calls:
- bash: |
    git config --global http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git config --global http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
You will need to set the extraheader for each of the called git repos.
Beware that you might need to unset the extraheader after your Terraform calls if your pipeline sets the extraheader several times on the same worker, because git can get confused by multiple extraheader declarations. You do this by adding the following step:
- bash: |
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader
I had the same issue; what I ended up doing is tokenizing SYSTEM_ACCESSTOKEN in the Terraform configuration. I used the tokenization task in Azure DevOps, where a __ prefix and suffix identify the tokens to replace with actual variables (this is customizable, but I find double underscores best for not interfering with any code that I have):
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace tokens'
  inputs:
    targetFiles: |
      **/*.tfvars
      **/*.tf
    tokenPrefix: '__'
    tokenSuffix: '__'
Something like find $(Build.SourcesDirectory)/ -type f -name 'main.tf' -exec sed -i 's~__SYSTEM_ACCESSTOKEN__~$(System.AccessToken)~g' {} \; would also work if you do not have the ability to install custom extensions in your DevOps organization.
My terraform main.tf looks like this:
module "app" {
source = "git::https://token:__SYSTEM_ACCESSTOKEN__#dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules//azure/app-service?ref=__app-service-module-ver__"
....
}
It's not beautiful, but it gets the job done. A module source (at the time of writing) does not support variable input from Terraform. So what we can do is use Terrafile, an open-source project that helps keep track of the modules, and of the different versions of the same module you might use, through a simple YAML file kept next to your code. It seems it's no longer actively maintained, but it just works: https://github.com/coretech/terrafile
my example of Terrafile:
app:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "feature/handle-twitter"
app-stable:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "1.0.5"
Terrafile by default downloads your modules to the ./vendor directory, so you can point your module source to something like:
module "app" {
source = "./vendor/modules/app-stable/azure/app_service"
....
}
Now you just have to figure out how to execute the terrafile command in the directory where the Terrafile is present.
My azure.pipelines.yml example:
- script: curl -L https://github.com/coretech/terrafile/releases/download/v0.6/terrafile_0.6_Linux_x86_64.tar.gz | tar xz -C $(Agent.ToolsDirectory)
  displayName: Install Terrafile
- script: |
    cd $(Build.Repository.LocalPath)
    $(Agent.ToolsDirectory)/terrafile
  displayName: Download required modules
I did this
_ado_token.ps1
# used in Azure DevOps to allow terraform to auth with Azure DevOps git repos
$tfmodules = Get-ChildItem $PSScriptRoot -Recurse -Filter "*.tf"
foreach ($tfmodule in $tfmodules) {
    $content = [System.IO.File]::ReadAllText($tfmodule.FullName).Replace("git::https://myorg@", "git::https://" + $env:SYSTEM_ACCESSTOKEN + "@")
    [System.IO.File]::WriteAllText($tfmodule.FullName, $content)
}
azure-pipelines.yml
- task: PowerShell@2
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    filePath: '_ado_token.ps1'
    pwsh: true
  displayName: '_ado_token.ps1'
I solved the issue by creating a pipeline template that runs an inline PowerShell script. I then pull the template in as a resource whenever a pipeline uses a Terraform module from a different repo.
The script does a recursive search for all the .tf files, then uses a regex to update all the module source URLs.
I chose regex over tokenizing the module URL because this ensures the modules can be pulled on a development machine without any changes to the source.
parameters:
- name: terraform_directory
  type: string
steps:
- task: PowerShell@2
  displayName: Tokenize TF-Module Sources
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    targetType: 'inline'
    script: |
      $regex = "https://*(.+)dev.azure.com"
      $tokenized_url = "https://token:$($env:SYSTEM_ACCESSTOKEN)@dev.azure.com"
      Write-Host "Recursive Search in ${{ parameters.terraform_directory }}"
      $tffiles = Get-ChildItem -Path "${{ parameters.terraform_directory }}" -Filter "*main.tf" -Recurse -Force
      Write-Host "Found $($tffiles.Count) files ending with 'main.tf'"
      if ($tffiles) { Write-Host $tffiles }
      $tffiles | % {
          Write-Host "Updating file $($_.FullName)"
          $content = Get-Content $_.FullName
          Write-Host "Replace Strings: $($content | Select-String -Pattern $regex)"
          $content -replace $regex, $tokenized_url | Set-Content $_.FullName -Force
          Write-Host "Updated content"
          Write-Host (Get-Content $_.FullName)
      }
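A hypothetical usage of the template above, assuming it is saved as tokenize-tf-modules.yml next to the pipeline definition:
steps:
# tokenize-tf-modules.yml is a hypothetical name for the template above
- template: tokenize-tf-modules.yml
  parameters:
    terraform_directory: $(Build.SourcesDirectory)
- script: terraform init -input=false
  workingDirectory: $(Build.SourcesDirectory)
  displayName: terraform init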
As far as I can see, the best way to do this is exactly the same as with any other Git provider; it is only for Azure DevOps that I have ever come across the extraheader approach. I have always used the following, and after not getting a satisfactory result with the other suggested approaches, I went back to it:
- script: |
    MY_TOKEN=foobar
    git config --global url."https://${MY_TOKEN}@dev.azure.com".insteadOf "https://dev.azure.com"
I don't think you can. Usually, you create another build and link to the artifacts from that build to use them in your current definition. That way you don't need to connect to a different Git repository.
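For example, a minimal sketch of that pattern using a pipeline resource (the pipeline name Modules-CI and the alias modules are hypothetical):
resources:
  pipelines:
  - pipeline: modules      # alias used by the download step below
    source: 'Modules-CI'   # the other build that publishes the artifacts
steps:
- download: modules        # artifacts land under $(Pipeline.Workspace)/modules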