Deployment to Azure Blob Storage ends up with an error

Trying to deploy artifacts ends up with the following error:
The input is not a valid Base-64 string as it contains a non-base 64
character, more than two padding characters, or an illegal character
among the padding characters.
I'm running two scripts before and after I build the app in AppVeyor:
cd $env:APPVEYOR_BUILD_FOLDER\patch;
npm install;
node patch-project-json.js $env:APPVEYOR_BUILD_FOLDER\src\Project1\project.json $env:APPVEYOR_BUILD_VERSION;
node patch-project-json.js $env:APPVEYOR_BUILD_FOLDER\src\Project2\project.json $env:APPVEYOR_BUILD_VERSION;
node patch-project-json.js $env:APPVEYOR_BUILD_FOLDER\src\Project3\project.json $env:APPVEYOR_BUILD_VERSION;
cd $env:APPVEYOR_BUILD_FOLDER
dotnet restore
and
dotnet publish .\src\Project1 --output $env:APPVEYOR_BUILD_FOLDER\deploy\Project1 --configuration Release --no-build;
dotnet publish .\src\Project2 --output $env:APPVEYOR_BUILD_FOLDER\deploy\Project2 --configuration Release --no-build;
dotnet publish .\src\Project3 --output $env:APPVEYOR_BUILD_FOLDER\deploy\Project3 --configuration Release --no-build
As you can see, I am using these scripts to set the version in the project.json files based on $env:APPVEYOR_BUILD_VERSION; I don't know whether that is relevant. After a successful build and publish, I want to upload the artifacts to Blob Storage.
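For reference, the wiring in appveyor.yml would look roughly like this (a sketch, not the actual config; before_build and after_build are standard AppVeyor sections, and the paths come from the scripts above):
before_build:
  - ps: |
      cd $env:APPVEYOR_BUILD_FOLDER\patch
      npm install
      node patch-project-json.js $env:APPVEYOR_BUILD_FOLDER\src\Project1\project.json $env:APPVEYOR_BUILD_VERSION
      # ... same for Project2 and Project3
      cd $env:APPVEYOR_BUILD_FOLDER
      dotnet restore
after_build:
  - ps: |
      dotnet publish .\src\Project1 --output $env:APPVEYOR_BUILD_FOLDER\deploy\Project1 --configuration Release --no-build
      # ... same for Project2 and Project3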

It turned out I had typos in the Deployment settings, in the Storage access key entry ;)

Related

Is it possible to split up a GitHub workflow such that each step has a separate badge?

I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub Actions, since we are unsure about the kind of access a third-party service would get to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed for a new pull request. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating one workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd-party library every time I want to run a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server and re-use it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited to a service like CircleCI (other recommendations are also welcome)? (A sketch addressing the re-use and per-test visibility points follows the workflow file below.)
Here is the full yml file for the workflow for the Python 3 environment. The Python 2 one is identical except for the Anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Update the root environment and install dependencies
      # NOTE: the environment file (YAML) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Apply patch to pandas-msgpack (generate files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single GitHub workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as a badge. Only the success status of "running all tests" is available.
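A sketch of two patterns that may get most of the way there. GitHub renders one badge per workflow, so true per-test badges would need one workflow per test; within a single workflow, however, splitting run_tests.py into separate named steps at least shows which area failed directly in the run and pull-request checks UI, and if: always() keeps later steps running after an earlier failure (the test module names below are invented):
      - name: Test core
        run: python -m pytest tests/test_core.py
      - name: Test io
        if: always()   # run even if the previous test step failed
        run: python -m pytest tests/test_io.py
For re-using the built environment, the usual answer within GitHub Actions is caching rather than sharing state between workflows: actions/cache can restore the conda environment between runs, keyed on the environment file, so the expensive install steps only rerun when dependencies change (whether the Anaconda prefix used above survives being cached and restored would need testing):
      - name: Cache conda environment
        uses: actions/cache@v2
        with:
          path: ~/conda3-env-py3
          key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}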

Entity Framework Core Migration in Azure Pipeline

Does anyone know why a connectionString is required when running an EF migration from YAML but not from the Developer Command Prompt?
I believe it might have something to do with not pulling the connectionString value from the Pipeline Variable.
I am setting the connection string here:
builder.Services.AddDbContext<myContext>(options =>
    options.UseSqlServer(builder.Configuration.GetConnectionString("MyDB")));
I have tried the following names for my Pipeline Variable:
MyDB
ConnectionStrings_MyDB
ConnectionStrings:MyDB
The following YAML gives me the error "Value cannot be null. (Parameter 'connectionString')"
- task: DotNetCoreCLI@2
  displayName: Create SQL Scripts
  inputs:
    command: custom
    custom: 'ef '
    arguments: migrations script --output $(sqlOutputPath) --idempotent --project $(solution)
However, running the following command from the Developer Command Prompt executes successfully:
dotnet ef migrations script --output complete.sql --idempotent --project myproject
It turned out that I had forgotten to add the Azure Key Vault reference to my project; therefore it was not even looking at Key Vault for the connection string. Also, the name needs to be ConnectionStrings--MyDB.
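For context on the naming: Key Vault secret names may only contain alphanumerics and dashes, and the Key Vault configuration provider maps -- to :, which is why ConnectionStrings--MyDB resolves to ConnectionStrings:MyDB. If the value came from a plain pipeline variable instead of Key Vault, it would have to reach the process as an environment variable, where .NET configuration maps a double underscore to :. A sketch of that variant ($(MyDbConnectionString) is a made-up variable name):
- task: DotNetCoreCLI@2
  displayName: Create SQL Scripts
  inputs:
    command: custom
    custom: 'ef '
    arguments: migrations script --output $(sqlOutputPath) --idempotent --project $(solution)
  env:
    # double underscore maps to ':' in .NET configuration, i.e. ConnectionStrings:MyDB
    ConnectionStrings__MyDB: $(MyDbConnectionString)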

How to download another run's (version's) published build artifact?

My pipeline publishes two different build artifacts when all its tests have passed - stage: publish_pipeline_as_build.
One of my tests needs to use the build that was made in the current run, of the current version.
But additionally, I need to get the build artifact of the previous version, in order to run some compatibility tests.
How do I download the build artifact from that other pipeline run?
I know the build artifact's name (it is constructed in a runtime script), but how would I find the run it belongs to?
I tried playing around with azure-cli's az pipelines runs artifact list. It requires a --run-id, and my script won't have that.
So far I have kind of managed, assuming az pipelines runs list returns the latest match to the query first:
az pipelines runs list --project PROJNAME --query "[?sourceBranch=='refs/heads/releases/R21.3.0.2']" | jq '.[0]'
I am currently running out of ideas.
Perhaps just some confused/frustrated questions that pop up:
How can I find that specific build artifact name's latest version and download it?
How are pipeline tasks fed with runtime generated values?
Is this so ridiculously difficult when doing it in Azure DevOps, or am I just going the wrong way?
The job I'm trying to get there with:
jobs:
  - job: test_session_integration
    dependsOn: easysales_Build
    steps:
      - template: ./utils/cache_yarn_and_install.yml
      - template: ./utils/update_webdriver.yml
      - template: ./utils/download_artifact.yml
        parameters:
          artifact: easysales_$(Build.BuildId)_build
          path: $(System.DefaultWorkingDirectory)/dist
      # current release name as output
      - template: ./utils/get_release_name.yml
      # previous release name, branch and build name output
      - template: ./utils/get_prev_release.yml
      # clone prev version manually - can't use output variables as task input
      # (BTW: why? that is super inconvenient, is there really no way?)
      - bash: |
          git clone --depth 1 -b $(get_prev_release.BRANCH_NAME) \
            "https://${REPO_USERNAME}:${REPO_TOKEN}@dev.azure.com/organisation/PROJECTNAME/_git/frontend-app" \
            ./reference
        workingDirectory: $(System.DefaultWorkingDirectory)
        env:
          REPO_TOKEN: $(GIT_AUTH_TOKEN)
          REPO_USERNAME: $(GIT_AUTH_USERNAME)
        name: clone_reference_branch
Any clues?
I'd be glad for any rubber ducking hints or clues on how I would be able to achieve what I need.
I'm new to Azure DevOps and I am struggling to find my way around the vast, but in many places bits-and-pieces, documentation Microsoft offers. Is it just me having this problem?
All stages and full YAML on pastebin
The main template with the stages (Expanded templates made with "download full YAML"):
stages:
  - stage: install_prepare
    displayName: install & prepare
    jobs:
      - template: az_templates/install_hls_lib_build_job.yml
  - stage: test_and_build
    displayName: test and build projects
    dependsOn: install_prepare
    jobs:
      - template: az_templates/build_projects_jobs.yml
      - template: az_templates/test_session_integration_job.yml
  - stage: publish_pipeline_as_build
    displayName: Publish finished project artifacts as builds
    dependsOn: test_and_build
    jobs:
      - template: az_templates/build_artifact_publish_jobs.yml
I have by now found a solution. Perhaps not a definitive one, but it should more or less work:
In my library variable group I added an Azure DevOps personal access token, with read access to the necessary scopes, as ADO_PAT_TOKEN.
I sign in to azure-cli with that token.
I get the latest run id with azure-cli:
- bash: |
    az devops configure --defaults organization=$(System.CollectionUri)
    az devops configure --defaults project=$(System.TeamProject)
    echo "$AZ_DO_TOKEN" | az devops login
    AZ_QUERY="[?sourceBranch=='refs/heads/$(prev_release.BRANCH_NAME)'] | [0].id"
    ID=$(az pipelines runs list --query-order FinishTimeDesc --query "$AZ_QUERY")
    echo "##vso[task.setvariable variable=ID;isOutput=true]$ID"
  env:
    AZ_DO_TOKEN: $(ADO_PAT_TOKEN)
  name: prev_build_run
I then download the artifact with azure-cli and the queried run id
- bash: |
    az pipelines runs artifact download \
      --artifact-name 'easysales_$(prev_release.PREV_RELEASE_VERSION)' \
      --run-id $(prev_build_run.ID) \
      --path '$(System.DefaultWorkingDirectory)/reference/dist/easySales'
  workingDirectory: $(System.DefaultWorkingDirectory)
  name: download_prev_release_build_artifact
This roughly seems to work for me now... finally 😉
Missing
The personal access token I added to the secrets may work, but as far as I can see these tokens cannot be created with an expiry date further than one year in the future.
That is not ideal, since I don't want my pipeline to stop working when perhaps no one around knows how to fix it.
Does anyone know how I can use the Azure CLI within the current pipeline without that kind of authentication, given it accesses only the current organization and project?
Or does anyone see a more elegant alternative to my admittedly clumsy solution?
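One possible way around the expiring PAT (untested here, but both pieces are documented): the azure-devops CLI extension reads a token from the AZURE_DEVOPS_EXT_PAT environment variable, and every run exposes a job-scoped token as $(System.AccessToken), so no long-lived secret is needed, provided the build service identity is allowed to read the project's runs:
- bash: |
    az devops configure --defaults organization=$(System.CollectionUri) project=$(System.TeamProject)
    az pipelines runs list --query-order FinishTimeDesc
  env:
    # job-scoped token minted for this run; nothing to rotate or renew
    AZURE_DEVOPS_EXT_PAT: $(System.AccessToken)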

Dotnet command failed with non-zero exit code on a unit test suite, no apparent error details

I am running a unit-test suite and suddenly dotnet exits with an error, but I cannot see what the error relates to:
2020-09-28T14:45:44.4406132Z ##[error]Error: The process 'C:\ag2\_w\_tool\dotnet\dotnet.exe' failed with exit code 1
2020-09-28T14:45:45.0411575Z Result Attachments will be stored in LogStore
2020-09-28T14:45:45.0682638Z Run Attachments will be stored in LogStore
2020-09-28T14:45:45.1670771Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_12.trx'.
2020-09-28T14:45:45.1766433Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_27.trx'.
2020-09-28T14:45:45.1863176Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_44.trx'.
2020-09-28T14:45:45.1889993Z Info: Azure Pipelines hosted agents have been updated to contain .Net Core 3.x (3.1) SDK/Runtime along with 2.1. Unless you have locked down a SDK version for your project(s), 3.x SDK might be picked up which might have breaking behavior as compared to previous versions.
2020-09-28T14:45:45.1890767Z Some commonly encountered changes are:
2020-09-28T14:45:45.1891763Z If you're using `Publish` command with -o or --Output argument, you will see that the output folder is now being created at root directory rather than Project File's directory. To learn about more such changes and troubleshoot, refer here: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#troubleshooting
2020-09-28T14:45:45.1893725Z ##[error]Dotnet command failed with non-zero exit code on the following projects :
2020-09-28T14:45:45.1924867Z ##[section]Async Command Start: Publish test results
2020-09-28T14:45:45.3885730Z Publishing test results to test run '564748'.
2020-09-28T14:45:45.3917393Z TestResults To Publish 21, Test run id:564748
2020-09-28T14:45:45.3960633Z Test results publishing 21, remaining: 0. Test run id: 564748
2020-09-28T14:45:45.3977089Z Publishing test results to test run '564754'.
2020-09-28T14:45:45.3978225Z TestResults To Publish 17, Test run id:564754
It seems you are running the dotnet publish command. Check whether your pipeline has similar syntax:
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '-o testpath'
    zipAfterPublish: false
    modifyOutputPath: true
In addition, check the following link to see whether it helps you:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#troubleshooting
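Separately from the publish syntax, the info lines in the log point at another common cause: the agent silently picking up a newer SDK than the projects target. If that is the suspicion, pinning the SDK before the test step is cheap to try (the 2.1.x below is a guess; match it to your projects):
- task: UseDotNet@2
  displayName: Pin the .NET Core SDK
  inputs:
    packageType: sdk
    version: 2.1.x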

Azure DevOps pipeline for deploying only changed arm templates

We have a project with a repo on Azure DevOps where we store the ARM templates of our infrastructure. What we want to achieve is to deploy templates on every commit to the master branch.
The question is: is it possible to define one pipeline which triggers a deployment only of the ARM templates changed by that commit? Let's go with an example. We have 3 templates in the repo:
t1.json
t2.json
t3.json
The latest commit changed only t2.json. In this case we want the pipeline to deploy only t2.json, as t1.json and t3.json haven't been changed by this commit.
Is it possible to create one universal pipeline, or should we rather create a separate pipeline for every template, each triggered by commits to its specific file?
It is possible to define only one pipeline that deploys just the changed template. You need to add a script task to your pipeline that gets the changed template's file name.
It is easy to get the changed files using the git command git diff-tree --no-commit-id --name-only -r commitId. Once you have the changed file's name, assign it to a variable using the logging command ##vso[task.setvariable variable=VariableName]value. Then you can set the csmFile parameter like this: csmFile: '**\$(fileName)' in the AzureResourceGroupDeployment task.
You can check below yaml pipeline for example:
- powershell: |
    # get the changed template
    $a = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    # assign the filename to a variable
    echo "##vso[task.setvariable variable=fileName]$a"
- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\$(fileName)'
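One caveat to the snippet above: git diff-tree prints one file name per line, so a commit touching several templates puts several names into $(fileName) and the glob breaks. A variant that loops over all changed templates instead, using an Azure CLI task so the script is already signed in ('my-service-connection' and $(resourceGroup) are placeholders, not values from the question):
- task: AzureCLI@2
  inputs:
    azureSubscription: 'my-service-connection'
    scriptType: ps
    scriptLocation: inlineScript
    inlineScript: |
      # deploy every ARM template changed by this commit
      $changed = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion) |
        Where-Object { $_ -like '*.json' }
      foreach ($file in $changed) {
        az deployment group create --resource-group $(resourceGroup) --template-file $file
      }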
It is also easy to define multiple pipelines that each deploy only their own template. You only need to add a paths trigger for the specific template file in each pipeline, so that a changed template file triggers only its corresponding pipeline.
trigger:
  paths:
    include:
    - pathTo/template1.json
...
- task: AzureResourceGroupDeployment@2
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\template1.json'
Hope the above helps!
What you ask for is not supported out of the box. From what I understood, you want triggers (based on file changes) per step or per job (depending on how you organize your pipeline). However, I'm not sure you need this: deploying an ARM template which was not changed will not affect your Azure resources if you use the Create Or Update Resource Group deployment (doc here).
You can also try to manually detect which file was changed (using PowerShell and git commands, for instance), then set a flag and later use this flag to run or skip some steps. But that looks like overkill for what you want to achieve. A minimal sketch of the flag idea follows.
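For completeness, the flag approach could look like this (a sketch; t2Changed is an invented variable name, and the condition syntax is standard Azure Pipelines):
- powershell: |
    # set a flag when t2.json is among the files changed by this commit
    $changed = git diff-tree --no-commit-id --name-only -r $(Build.SourceVersion)
    $flag = if ($changed -contains 't2.json') { 'true' } else { 'false' }
    echo "##vso[task.setvariable variable=t2Changed]$flag"
- task: AzureResourceGroupDeployment@2
  condition: eq(variables['t2Changed'], 'true')
  inputs:
    ....
    templateLocation: 'Linked artifact'
    csmFile: '**\t2.json'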