With PowerShell, a GitLab CI job passes even though a failure occurred

My .gitlab-ci.yml contains the following (lint-valid) code:
stages:
  - build

build:
  stage: build
  script:
    - exit 1
When running, the job doesn't fail!
Running with gitlab-runner 13.10.0 (54944146)
Preparing the "shell" executor
Using Shell executor...
Preparing environment
Getting source from Git repository
Fetching changes with git depth set to 50...
Reinitialized existing Git repository
Checking out b70613dd
git-lfs/2.13.2 (GitHub; windows amd64; go 1.14.13; git fc664697)
Skipping Git submodules setup
Executing "step_script" stage of the job script
00:03
$ exit 1
Cleaning up file based variables
00:03
Job succeeded
How can I avoid false successes when the job should fail?

PowerShell 5 (the default Windows PowerShell) reports a false success here. When PowerShell Core is used as the runner shell, the problem no longer appears.
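A minimal sketch of the runner-side fix in the runner's config.toml, assuming a Windows shell-executor runner (the name and url values are placeholders):

[[runners]]
  name = "windows-runner"              # placeholder
  url = "https://gitlab.example.com/"  # placeholder
  executor = "shell"
  # "pwsh" selects PowerShell Core; the legacy "powershell" (PowerShell 5)
  # is the shell that produces the false success above
  shell = "pwsh"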

Related

Access multiple repositories in an Azure DevOps Classic pipeline

We are using classic Azure DevOps pipelines (due to an organizational restriction). Normally we can use multiple repos in YAML during setup, but the same doesn't apply to Classic. I wanted to access another repo in the same project from my classic pipeline.
I found Microsoft's documentation for accessing an additional repo and followed its steps: I added a script task running git clone -c http.extraheader="AUTHORIZATION: bearer $(System.AccessToken)" and enabled the OAuth token option.
Starting: Bash Script
==============================================================================
Task : Bash
Description : Run a Bash script on macOS, Linux, or Windows
Version : 3.214.0
Author : Microsoft Corporation
Help : https://docs.microsoft.com/azure/devops/pipelines/tasks/utility/bash
==============================================================================
Generating script.
Script contents:
git clone -c http.extraheader="AUTHORIZATION: bearer ***" "https://Project@dev.azure.com/Project/DevOps%20Integration%20Demo/_git/DevOps%20Integration%20Demo"
"C:\Program Files\Git\bin\bash.exe" -c pwd
/d/a/_temp
========================== Starting Command Output ===========================
"C:\Program Files\Git\bin\bash.exe" /d/a/_temp/d0ad30a0-9750-4561-9854-3bb4d4da9743.sh
Cloning into 'DevOps%20Integration%20Demo'...
remote: TF401019: The Git repository with name or identifier DevOps Integration Demo does not exist or you do not have permissions for the operation you are attempting.
fatal: repository 'https://dev.azure.com/Project/DevOps%20Integration%20Demo/_git/DevOps%20Integration%20Demo/' not found
##[error]Bash exited with code '128'.
Finishing: Bash Script
I get this error while running the pipeline, even though the project is accessible and I am a Project Administrator of the project. How can I solve this issue?
I fixed the issue using:
git clone https://:$env:SYSTEM_ACCESSTOKEN@dev.azure.com/MyOrg/MyProj/_git/MyRepo
You also need to disable the "Protect access to repositories in YAML pipelines" setting under the project's pipeline settings for cloning to work in a classic pipeline.

Is it possible to split up a GitHub workflow such that each step has a separate badge?

I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid services like CircleCI for the time being and see how much we can do with just the integrated GitHub Actions, since we are unsure what kind of access a third-party service would get to the repo.
Currently we have two workflows (each one tests the same code for a separate Python environment) that are triggered on push or pull request on the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed for a given pull request. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the third-party library every time I want to run a single test. These tests already take quite some time.
My question is: is there any way to avoid that? Is there a way to build everything on the Linux server and reuse it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited to a service like CircleCI (other recommendations are also welcome)?
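On the reuse question, one commonly used building block is actions/cache; a hedged sketch, assuming the environment is fully determined by etc/env-py3.yml and installed into the prefix used in the workflow below:

- name: Cache conda environment
  uses: actions/cache@v2
  with:
    # the install prefix created by the Anaconda/conda-env steps
    path: ~/conda3-env-py3
    # rebuild only when the environment file changes
    key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}

On a cache hit, the installation steps can be skipped by guarding them with an if: condition on the cache step's cache-hit output.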
Here is the full yml file of the workflow for the Python 3 environment. The Python 2 one is identical except for the Anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single github workflow, testing everything in the same workflow.
Expected:
Gaining information on specific steps that failed or worked. Displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as a badge; only the success status of "running all tests" is available.
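For per-part status, one hedged option is to split the single job into several jobs connected by needs: each job is reported separately in a pull request's checks, and keeping one thin workflow file per aspect yields one badge per aspect (badge URLs are per workflow file: https://github.com/OWNER/REPO/actions/workflows/<file>/badge.svg). A sketch in which the suite names and the --suite flag are hypothetical:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # ... install conda, create the environment, patch and compile
      #     (the existing steps), then save the results with actions/cache
  test:
    needs: build
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false                # let every suite report its own status
      matrix:
        suite: [core, dask, neuron]   # hypothetical groupings of run_tests.py
    steps:
      - uses: actions/checkout@v2
      # ... restore the cached environment from the build job
      - run: python run_tests.py --suite ${{ matrix.suite }}   # hypothetical flag

Since each job starts on a fresh runner, the built environment has to travel between jobs via actions/cache or upload/download-artifact.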

Avoid GitHub action buffering

I have a GitHub repo that contains some Linux utilities.
To run tests I use GitHub Actions and a simple custom workflow that runs on a remote Linux server using a self-hosted runner:
name: CMake

on: [push]

env:
  BUILD_TYPE: Release

jobs:
  build:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v2
      - name: Build
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: cmake --build . --config $BUILD_TYPE
      - name: Run test
        working-directory: ${{runner.workspace}}/Utils
        shell: bash
        run: ./testing/test.sh
The workflow is very simple: I compile the source using cmake and then run a script from the same repo to test the build. The test script is a bash script that looks as follows:
#!/bin/bash
export LD_LIBRARY_PATH=/path/to/bin
cd /path/to/bin
echo -e "Starting the tests"
./run_server &   # start the compiled server in the background
./run_tests      # run the test utility against it
if [ $? -eq 0 ]; then
    echo "successful"
else
    echo "failed"
    exit 1
fi
Here the script starts a compiled application from my code (run_server) and then runs the testing utility that communicates with it and prints the results.
I use C printf() inside run_tests to print the output strings. If I run it on a local machine, I get output like the following:
Starting the tests
test1 executed Ok
test2 executed Ok
test3 executed Ok
successful
Each test takes about one second, so the application prints a testX executed Ok line roughly once per second.
But when it runs under GitHub Actions, the output looks different (copied from the GitHub Actions console):
./testing/test.sh
shell: /bin/bash --noprofile --norc -e -o pipefail {0}
env:
BUILD_TYPE: Release
Starting the tests
successful
Even this output from the bash script appeared only after the script had finished.
So I have two problems:
no printf() output from the test application
the script output (printed using echo) appears only after the script has finished
I expected the same behavior from the GitHub Action as on my local machine, i.e. lines appearing immediately, about one per second.
It looks like GitHub Actions buffers all the output until a step has finished and drops the output printed by the application that runs inside the bash script.
Is there a way to get all the output in real time while a step executes?
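The symptoms are consistent with C stdio buffering rather than anything GitHub-specific: printf() output is line-buffered only when stdout is a terminal, and under a runner it is a pipe, so output is fully buffered and flushed only when the program exits (and output of a background process still running when the step ends may be lost entirely). A hedged sketch of a workaround, assuming GNU coreutils' stdbuf is available on the self-hosted runner:

- name: Run test
  working-directory: ${{runner.workspace}}/Utils
  shell: bash
  # stdbuf forces line-buffered stdout/stderr; the setting is inherited by
  # child processes (run_server, run_tests), so lines appear as printed
  run: stdbuf -oL -eL ./testing/test.sh

An alternative on the C side is to call setvbuf(stdout, NULL, _IOLBF, 0) at the start of run_tests, or fflush(stdout) after each printf().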

Dotnet command failed with non-zero exit code on a unit test suite, no apparent error details

I am running a unit-test suite and suddenly dotnet exits with an error, but I cannot see what the error relates to:
2020-09-28T14:45:44.4406132Z ##[error]Error: The process 'C:\ag2\_w\_tool\dotnet\dotnet.exe' failed with exit code 1
2020-09-28T14:45:45.0411575Z Result Attachments will be stored in LogStore
2020-09-28T14:45:45.0682638Z Run Attachments will be stored in LogStore
2020-09-28T14:45:45.1670771Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_12.trx'.
2020-09-28T14:45:45.1766433Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_27.trx'.
2020-09-28T14:45:45.1863176Z No Result Found to Publish 'C:\ag2\_w\_temp\bld_bc-dev-bld-01_2020-09-28_14_45_44.trx'.
2020-09-28T14:45:45.1889993Z Info: Azure Pipelines hosted agents have been updated to contain .Net Core 3.x (3.1) SDK/Runtime along with 2.1. Unless you have locked down a SDK version for your project(s), 3.x SDK might be picked up which might have breaking behavior as compared to previous versions.
2020-09-28T14:45:45.1890767Z Some commonly encountered changes are:
2020-09-28T14:45:45.1891763Z If you're using `Publish` command with -o or --Output argument, you will see that the output folder is now being created at root directory rather than Project File's directory. To learn about more such changes and troubleshoot, refer here: https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#troubleshooting
2020-09-28T14:45:45.1893725Z ##[error]Dotnet command failed with non-zero exit code on the following projects :
2020-09-28T14:45:45.1924867Z ##[section]Async Command Start: Publish test results
2020-09-28T14:45:45.3885730Z Publishing test results to test run '564748'.
2020-09-28T14:45:45.3917393Z TestResults To Publish 21, Test run id:564748
2020-09-28T14:45:45.3960633Z Test results publishing 21, remaining: 0. Test run id: 564748
2020-09-28T14:45:45.3977089Z Publishing test results to test run '564754'.
2020-09-28T14:45:45.3978225Z TestResults To Publish 17, Test run id:564754
It seems you are running the dotnet publish command. Check whether your pipeline has similar syntax:
steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet publish'
  inputs:
    command: publish
    publishWebProjects: false
    projects: '**/*.csproj'
    arguments: '-o testpath'
    zipAfterPublish: false
    modifyOutputPath: true
In addition, check the following link to see whether it helps you:
https://learn.microsoft.com/en-us/azure/devops/pipelines/tasks/build/dotnet-core-cli?view=azure-devops#troubleshooting
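If the troubleshooting page doesn't pinpoint the cause, a diagnostic variant of the test task often surfaces the error that the summary line hides. A minimal sketch, assuming the suite runs through DotNetCoreCLI@2 (the project glob is a placeholder):

steps:
- task: DotNetCoreCLI@2
  displayName: 'dotnet test (diagnostic)'
  inputs:
    command: test
    projects: '**/*Tests.csproj'   # placeholder pattern
    # detailed verbosity prints the underlying error instead of just the
    # exit code; --blame isolates a test that crashes the test host
    arguments: '--verbosity detailed --blame'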

How to build a GitLab pipeline if my code needs to run on a Windows server?

I implemented a bunch of infrastructure checks (PowerShell scripts) that need to run on Windows Servers (most of them use the Get-WmiObject cmdlet). I put them, along with their Pester tests, on GitLab and am trying to build a pipeline.
I have read creating-your-first-windows-container-with-docker-for-windows and building-a-simple-release-pipeline-in-powershell-using-psake-pester-and-psdeploy, but I am very confused. My understanding is that to have the code run on GitLab CI, I will need to build a Windows Server Docker image?
The following is my .gitlab-ci.yml file, but it fails with authentication errors; the image can be found here:
image: ltsc2019

stages:
  - build
  - test
  - deploy

build:
  stage: build
  script:
    # run PowerShell script
    - powershell -File "\Deploy\Build.ps1"

test:
  stage: test
  script:
    - powershell -File "\Deploy\CodeCoverage.ps1"

deploy:
  stage: deploy
  script:
    - powershell -File "\Deploy\Deploy_Local.ps1"
It doesn't get past the initial build; here are the errors I got:
# Error 1
ERROR: Job failed: Error response from daemon: pull access denied for ltsc2019, repository does not exist or may require 'docker login' (executor_docker.go:168:3s)
# Error 2 (this happened because I added 'shell: "powershell"'
# after executor in the gitlab-runner config file)
ERROR: Preparation failed: Docker doesn't support shells that require script file
ltsc2019 is a tag of the mcr.microsoft.com/windows/servercore image.
You need to reference this image at the beginning of your .gitlab-ci.yml:
image: mcr.microsoft.com/windows/servercore:ltsc2019
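For context, a minimal sketch of the corrected file header with the fully qualified image (the jobs below it stay as they are):

image: mcr.microsoft.com/windows/servercore:ltsc2019

stages:
  - build
  - test
  - deploy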
If you are struggling to get Windows Docker images working with Docker for Windows, note that the GitLab Docker executor currently doesn't support Docker for Windows. Check the executors documentation if you are building a pipeline that needs a container to run it.