We are using Classic Azure DevOps Pipelines (due to an organization restriction). Normally you can use multiple repos in a YAML pipeline during setup, but the same doesn't apply to Classic. I wanted to check out another repo in the same project in my Classic pipeline.
I got this document from Microsoft for accessing an additional repo (Microsoft Document) and followed the same steps: I added a script task running git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" and enabled the 'Allow scripts to access the OAuth token' option.
Starting: Bash Script
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.214.0
Author : Microsoft Corporation
Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
Script contents:
git clone -c http.extraheader="AUTHORIZATION: bearer ***" "https://Project@dev.azure.com/Project/DevOps%20Integration%20Demo/_git/DevOps%20Integration%20Demo"
"C:\Program Files\Git\bin\bash.exe" -c pwd
/d/a/_temp
========================== Starting Command Output ===========================
"C:\Program Files\Git\bin\bash.exe" /d/a/_temp/d0ad30a0-9750-4561-9854-3bb4d4da9743.sh
Cloning into 'DevOps%20Integration%20Demo'...
remote: TF401019: The Git repository with name or identifier DevOps Integration Demo does not exist or you do not have permissions for the operation you are attempting.
fatal: repository 'https://dev.azure.com/Project/DevOps%20Integration%20Demo/_git/DevOps%20Integration%20Demo/' not found
##[error]Bash exited with code '128'.
Finishing: Bash Script
I get this error while running the pipeline. The project is accessible and I am a Project Administrator of the project. How can I solve this issue?
I fixed the issue using:
git clone https://:$env:SYSTEM_ACCESSTOKEN@dev.azure.com/MyOrg/MyProj/_git/MyRepo
You need to disable the 'Protect access to repositories in YAML pipelines' setting in the project's pipeline settings for cloning in a Classic pipeline.
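In full, a minimal sketch of the working script step (a PowerShell task with 'Allow scripts to access the OAuth token' enabled; MyOrg, MyProj and MyRepo are placeholders):

# SYSTEM_ACCESSTOKEN is only exposed to scripts when the OAuth token option is enabled
git clone "https://:$env:SYSTEM_ACCESSTOKEN@dev.azure.com/MyOrg/MyProj/_git/MyRepo" "$(Agent.BuildDirectory)/MyRepo"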
I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub Actions, since we are unsure about the kind of access a third-party service would get to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed for a given new pull request. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd-party library every time I want to conduct a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server and re-use it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited for a service like CircleCI (or other recommendations are also welcome)?
Here is the full yml file for the workflow for the Python 3 environment. The Python2 one is identical except for the anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]

  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}

    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2

      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info

      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list

      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list

      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas

      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl

      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single github workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as badge. Only the success status of "running all tests" is available.
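One way to avoid rebuilding the environment on every run is to cache it between workflow runs with actions/cache. A minimal sketch, assuming the environment built under ~/conda3-env-py3 can be restored as-is and keyed on the environment file (step names and cache key are illustrative):

      # Restore the conda environment from cache; build steps can be skipped on a hit
      # by guarding them with: if: steps.cache-conda.outputs.cache-hit != 'true'
      - name: Cache conda environment
        id: cache-conda
        uses: actions/cache@v2
        with:
          path: ~/conda3-env-py3
          key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}

For per-aspect feedback, splitting python run_tests.py into several named steps in the same job would at least show in the checks UI which step failed without rebuilding anything; as far as I know, the native README badge only reflects the status of a whole workflow, not of individual steps.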
Is there a way to set default shell options for an entire Azure Pipelines file?
In GitHub Actions you can do it at the workflow level using the 'defaults > shell' value, e.g. bash --noprofile --norc -eo pipefail {0},
and this affects all shell steps in the entire workflow. As far as I can tell, there is no equivalent option in Azure DevOps, and I would be forced to copy/paste it for each 'bash' task.
You can add the SHELLOPTS variable for the entire pipeline, which would at least get you the -eo pipefail part:
variables:
  SHELLOPTS: errexit:pipefail
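A minimal sketch of how this plays out in a pipeline: bash reads SHELLOPTS from the environment at startup, so every Bash task gets errexit and pipefail without a set line (step contents are illustrative):

variables:
  SHELLOPTS: errexit:pipefail

steps:
  - bash: |
      # 'false' fails the pipe under pipefail, and errexit stops the script,
      # so this step fails instead of silently succeeding
      false | cat
      echo "not reached"
    displayName: Shell options inherited from SHELLOPTS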
Setting default shell options for a whole pipeline is currently not supported in Azure DevOps. Every Bash task runs in a new shell session within one Azure Pipelines run.
You could create a new Azure DevOps feature request via: https://developercommunity.visualstudio.com/report?space=21&entry=suggestion
My gitlab-ci.yml contains the following code:
stages:
  - build

build:
  stage: build
  script:
    - exit 1
When running, the job doesn't fail!
Running with gitlab-runner 13.10.0 (54944146)
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository
Checking out b70613dd
git-lfs/2.13.2 (GitHub; windows amd64; go 1.14.13; git fc664697)
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ exit 1
Cleaning up file based variables
00:03
Job succeeded
How can I avoid false successes when the job should fail?
PowerShell 5 (the default Windows PowerShell instance) returns a false result. When using PowerShell Core, the problem no longer appears.
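If you control the runner, a minimal sketch of the corresponding config.toml change (the runner name is a placeholder); switching the shell executor to PowerShell Core makes exit 1 fail the job as expected:

[[runners]]
  name = "my-windows-runner"  # placeholder
  executor = "shell"
  shell = "pwsh"              # PowerShell Core instead of the PowerShell 5 default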
I'm setting up a CI/CD pipeline using GitHub Actions and a self-hosted agent installed on a Windows 2019 server.
The problem I'm facing is that the action actions/checkout@v2 fails to check out the repo and fully unzip it. When I say "fully unzip" I mean that there are some files in the target folder it managed to unzip before stalling.
From the log:
Run actions/checkout@v2
Syncing repository: Syd/ExternWebb
Getting Git version info
Deleting the contents of 'C:\actions-runner\_work\ExternWebb\ExternWebb'
The repository will be downloaded using the GitHub REST API
To create a local Git repository instead, add Git 2.18 or higher to the PATH
Downloading the archive
Writing archive to disk
Extracting the archive
C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe -NoLogo -Sta -NoProfile -NonInteractive -ExecutionPolicy Unrestricted -Command "$ErrorActionPreference = 'Stop' ; try { Add-Type -AssemblyName System.IO.Compression.FileSystem } catch { } ; [System.IO.Compression.ZipFile]::ExtractToDirectory('C:\actions-runner\_work\ExternWebb\ExternWebb\b3193a49-e100-4cbd-81c9-6bd23ff47313.tar.gz', 'C:\actions-runner\_work\ExternWebb\ExternWebb\b3193a49-e100-4cbd-81c9-6bd23ff47313')"
Exception calling "ExtractToDirectory" with "2" argument(s): "Could not find a part of the path 'C:\actions-runner\_wor
k\ExternWebb\ExternWebb\b3193a49-e100-4cbd-81c9-6bd23ff47313\Syd-ExternWebb-77d0427f54bc3e4d6694
f0719ca9fe3ab3be3706\ExternWebb.Library\Custom\Plugins\DomainRedirectModule\DomainRedirectConfigurationCollection
.cs'."
At line:1 char:111
+ ... catch { } ; [System.IO.Compression.ZipFile]::ExtractToDirectory('C:\a ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : DirectoryNotFoundException
Error: The process 'C:\Windows\System32\WindowsPowerShell\v1.0\powershell.exe' failed with exit code 1
I've tried running the same action on a locally self-hosted agent without any trouble.
A Reddit post suggests that the error can be mitigated by fetching the repo via Git instead, pointing to this clue in the log:
The repository will be downloaded using the GitHub REST API
To create a local Git repository instead, add Git 2.18 or higher to the PATH
I tried installing Git v2.30 and added it to the PATH as instructed above, but for some reason the action still downloads the repo via the REST API. I don't know if a restart of the server is required, but the git commands are available in PowerShell.
I'm guessing that's why the locally self-hosted agent (which has Git available) can check out the repo, but the agent running on the server can't.
This is the workflow yml-file:
name: Build Stage

on:
  push:
    branches: [ develop ]
  pull_request:
    branches: [ develop ]

jobs:
  build:
    runs-on: [self-hosted, stage]
    env:
      CONFIG: Stage
      BUILD_FOLDER: _build
      GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
    steps:
      - name: Checking out source code
        uses: actions/checkout@v2
Any insights are appreciated.
Ensure that you stop and restart the GitHub Actions Runner service after installing Git. That did it for me.
Installing Git and rebooting the runner PC is enough if the Actions Runner is hosted as a service.
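A minimal sketch of the restart from an elevated PowerShell prompt on the runner host, assuming the default service naming:

# The runner service is normally named like 'actions.runner.<scope>.<runner-name>'
Get-Service "actions.runner.*" | Restart-Service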
I'm currently creating an Azure DevOps pipeline to validate and apply a Terraform configuration to different subscriptions.
My terraform configuration uses modules, which are "hosted" in other repositories in the same Azure DevOps project as the terraform configuration.
Sadly, when I try to perform terraform init to fetch those modules, the pipeline task hangs there waiting for credentials input.
As recommended in the Pipeline Documentation on Running Git Commands in a script, I tried to add a checkout step with the persistCredentials: true attribute.
From what I can see in the log of the task (see below), the credential information is added specifically for the current repo and is not usable for other repos.
The command performed when adding persistCredentials:true
2018-10-22T14:06:54.4347764Z ##[command]git config http.https://my-org@dev.azure.com/my-org/my-project/_git/my-repo.extraheader "AUTHORIZATION: bearer ***"
The output of terraform init task
2018-10-22T14:09:24.1711473Z terraform init -input=false
2018-10-22T14:09:24.2761016Z Initializing modules...
2018-10-22T14:09:24.2783199Z - module.my-module
2018-10-22T14:09:24.2786455Z Getting source "git::https://my-org@dev.azure.com/my-org/my-project/_git/my-module-repo?ref=1.0.2"
How can I set up the git credentials to work for other repositories?
You have essentially two ways of doing this.
Pre-requisite
Make sure that you read and, depending on your needs, that you apply the Enable scripts to run Git commands section from the "Run Git commands in a script" doc.
Solution #1: dynamically insert the System.AccessToken (or a PAT, but I would not recommend it) at pipeline runtime
You could do this either by:
inserting a replacement token such as __SYSTEM_ACCESSTOKEN__ in your code (as Nilsas suggests) and using some token-replacement code or the qetza.replacetokens.replacetokens-task.replacetokens task to insert the value. The disadvantage of this solution is that you would also have to replace the token when you run your terraform locally.
using some code to replace all git::https://dev.azure.com text with git::https://YOUR_ACCESS_TOKEN@dev.azure.com.
I used the second approach via the following bash task script (it searches terragrunt files, but you can adapt it to terraform files without much change):
- bash: |
    find $(Build.SourcesDirectory)/ -type f -name 'terragrunt.hcl' -exec sed -i 's~git::https://dev.azure.com~git::https://$(System.AccessToken)@dev.azure.com~g' {} \;
Abu Belai offers a PowerShell script to do something similar.
This type of solution does not, however, work if the modules in your terraform modules git repo themselves call modules in another git repo, which was our case.
Solution #2: globally adding the access token to the extraheader of the URL of your terraform modules' git repos
This way, all the modules' repos, called directly by your code or called indirectly by the called modules' code, will be able to use your access token. I did so by adding the following step before your terraform/terragrunt calls:
- bash: |
    git config --global http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
    git config --global http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader "AUTHORIZATION: bearer $(System.AccessToken)"
You will need to set the extraheader for each of the called git repos.
Beware that you might need to unset the extraheader after your terraform calls if your pipeline sets the extraheader several times on the same worker. This is because git can get confused by multiple extraheader declarations. You do this by adding the following step:
- bash: |
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-first-repo-project>/_git/<your-first-repo>.extraheader
    git config --global --unset-all http.https://dev.azure.com/<your-org>/<your-second-repo-project>/_git/<your-second-repo>.extraheader
I had the same issue; what I ended up doing is tokenizing SYSTEM_ACCESSTOKEN in the terraform configuration. I used the Tokenization task in Azure DevOps, where the __ prefix and suffix are used to identify and replace tokens with actual variables (it is customizable, but I find double underscores best for not interfering with any code that I have).
- task: qetza.replacetokens.replacetokens-task.replacetokens@3
  displayName: 'Replace tokens'
  inputs:
    targetFiles: |
      **/*.tfvars
      **/*.tf
    tokenPrefix: '__'
    tokenSuffix: '__'
Something like find $(Build.SourcesDirectory)/ -type f -name 'main.tf' -exec sed -i 's~__SYSTEM_ACCESSTOKEN__~$(System.AccessToken)~g' {} \; would also work if you do not have the ability to install custom extensions in your DevOps organization.
My terraform main.tf looks like this:
module "app" {
source = "git::https://token:__SYSTEM_ACCESSTOKEN__#dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules//azure/app-service?ref=__app-service-module-ver__"
....
}
It's not beautiful, but it gets the job done. The module source (at the time of writing) does not support variable input from terraform. So what we can do is use Terrafile, an open-source project that helps with keeping track of your modules and the different versions of the same module you might use, via a simple YAML file kept next to your code. It seems that it's no longer being actively maintained, however it just works: https://github.com/coretech/terrafile
my example of Terrafile:
app:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "feature/handle-twitter"

app-stable:
  source: "https://token:__SYSTEM_ACCESSTOKEN__@dev.azure.com/actualOrgName/actualProjectName/_git/TerraformModules"
  version: "1.0.5"
Terrafile by default downloads your modules to the ./vendor directory, so you can point your module source to something like:
module "app" {
source = "./vendor/modules/app-stable/azure/app_service"
....
}
Now you just have to figure out how to execute the terrafile command in the directory where the Terrafile is present.
My azure.pipelines.yml example:
- script: curl -L https://github.com/coretech/terrafile/releases/download/v0.6/terrafile_0.6_Linux_x86_64.tar.gz | tar xz -C $(Agent.ToolsDirectory)
  displayName: Install Terrafile

- script: |
    cd $(Build.Repository.LocalPath)
    $(Agent.ToolsDirectory)/terrafile
  displayName: Download required modules
I did this
_ado_token.ps1
# Used in Azure DevOps to allow terraform to auth with Azure DevOps git repos
$tfmodules = Get-ChildItem $PSScriptRoot -Recurse -Filter "*.tf"

foreach ($tfmodule in $tfmodules) {
    $content = [System.IO.File]::ReadAllText($tfmodule.FullName).Replace("git::https://myorg@", "git::https://" + $env:SYSTEM_ACCESSTOKEN + "@")
    [System.IO.File]::WriteAllText($tfmodule.FullName, $content)
}
azure-pipelines.yml
- task: PowerShell@2
  env:
    SYSTEM_ACCESSTOKEN: $(System.AccessToken)
  inputs:
    filePath: '_ado_token.ps1'
    pwsh: true
  displayName: '_ado_token.ps1'
I solved the issue by creating a pipeline template that runs an inline PowerShell script. I then pull in the template as a resource when using any terraform module from a different repo.
The script does a recursive search for all the .tf files, then uses a regex to update all the module source URLs.
I chose regex over tokenizing the module URL because this makes sure the modules can be pulled in on a development machine without any changes to the source.
parameters:
  - name: terraform_directory
    type: string

steps:
  - task: PowerShell@2
    displayName: Tokenize TF-Module Sources
    env:
      SYSTEM_ACCESSTOKEN: $(System.AccessToken)
    inputs:
      targetType: 'inline'
      script: |
        $regex = "https://*(.+)dev.azure.com"
        $tokenized_url = "https://token:$($env:SYSTEM_ACCESSTOKEN)@dev.azure.com"
        Write-Host "Recursive Search in ${{ parameters.terraform_directory }}"
        $tffiles = Get-ChildItem -Path "${{ parameters.terraform_directory }}" -Filter "*main.tf" -Recurse -Force
        Write-Host "Found $($tffiles.Count) files ending with 'main.tf'"
        if ($tffiles) { Write-Host $tffiles }
        $tffiles | % {
            Write-Host "Updating file $($_.FullName)"
            $content = Get-Content $_.FullName
            Write-Host "Replace Strings: $($content | Select-String -Pattern $regex)"
            $content -replace $regex, $tokenized_url | Set-Content $_.FullName -Force
            Write-Host "Updated content"
            Write-Host (Get-Content $_.FullName)
        }
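For completeness, a sketch of consuming the template from a pipeline, assuming it is saved as tokenize-tf-modules.yml (a hypothetical file name) in a repo declared as a repository resource with the alias 'templates':

steps:
  # 'templates' is the repository resource alias pointing at the repo that holds the template
  - template: tokenize-tf-modules.yml@templates
    parameters:
      terraform_directory: $(Build.SourcesDirectory)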
As far as I can see, the best way to do this is exactly the same as with any other Git provider. It is only for Azure DevOps that I have ever come across the extraheader approach. I have always used this, and after not being able to get a satisfactory result with the other suggested approaches, I went back to it:
- script: |
    # MY_TOKEN=foobar is a placeholder; in a pipeline you would typically use $(System.AccessToken) or a PAT
    MY_TOKEN=foobar
    git config --global url."https://${MY_TOKEN}@dev.azure.com".insteadOf "https://dev.azure.com"
I don't think you can. Usually, you create another build and link to the artifacts from that build to use them in your current definition. That way you don't need to connect to a different Git repository.
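A minimal sketch of that pattern in YAML, where 'Other-Repo-CI' is a hypothetical name of the build that produces the artifact and 'drop' its published artifact name:

resources:
  pipelines:
    - pipeline: upstream        # local alias for the other build
      source: 'Other-Repo-CI'   # hypothetical pipeline name

steps:
  # downloads the 'drop' artifact to $(Pipeline.Workspace)/upstream/drop
  - download: upstream
    artifact: drop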