Where is coverage.xml located in Codecov?

I use /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml but I think the path is not correct. My report is not uploading on GitHub Actions.
Full file.

The coverage file is located at /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml. In general, it is /home/runner/work/<project>/<project>/coverage.xml.
I solved it with the following code.
- name: Generate Report
  run: |
    pip install codecov
    pip install pytest-cov
    pytest --cov=./ --cov-report=xml
    codecov
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3.1.0
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    directory: ./coverage/reports/
    env_vars: OS,PYTHON
    files: /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml
    flags: tests
Full code is here.
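Since the job runs from $GITHUB_WORKSPACE, which resolves to /home/runner/work/<project>/<project>, a relative path should point at the same file; a minimal sketch of the upload step, not from the original answer:
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3.1.0
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    files: ./coverage.xml  # equivalent to the absolute path above when run from the workspace
    flags: tests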

From your last run, check whether the "Upload coverage to Codecov" step is actually executed.
The workflow seems to stop before that, in the "test with pytest" step:
/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/distutils/file_util.py:44: DistutilsFileError
=========================== short test summary info ============================
FAILED tests/__main__.py::test_merge_ani - distutils.errors.DistutilsFileErro...
FAILED tests/tests.py::test_merge_ani - distutils.errors.DistutilsFileError: ...
========================= 2 failed, 30 passed in 4.23s =========================
If it does not stop before that, you need to check if the file is indeed created, considering the error message is:
None of the following appear to exist as files: ./coverage.xml
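A quick way to confirm the report is created is a hypothetical debugging step placed just before the upload (not part of the original workflow):
- name: Check that the coverage report exists
  run: ls -l /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml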


Problem showing contents of file on GitHub Actions

In the GitHub Actions setup we have for doxygen (see https://github.com/doxygen/doxygen, especially https://github.com/doxygen/doxygen/blob/master/.github/workflows/build_cmake.yml) there is this step:
- name: Run tests
  id: test
  shell: cmake -P {0}
  run: |
    set(ENV{CTEST_OUTPUT_ON_FAILURE} "ON")
    execute_process(
      COMMAND cmake --build build --target tests TEST_FLAGS="--xml --xmlxsd --xhtml --qhp --docbook --rtf --pdf"
      RESULT_VARIABLE result
    )
    if (NOT result EQUAL 0)
      message(FATAL_ERROR "Running tests failed!")
    endif()
Due to the --pdf flag this nowadays results in an error, but it is not clear why this error occurs. A file build/testing/test_output_053/latex/refman.log is written with the log information from the PDF process, so in case of an error we want to be able to view it.
Based on https://stackoverflow.com/a/60679655/1657886 I first changed the if construct into:
if (NOT result EQUAL 0)
  echo ::set-output name=tst_053::$(cat build/testing/test_output_053/latex/refman.log)
  echo ${{ steps.test.outputs.tst_053 }}
  message(FATAL_ERROR "Running tests failed!")
endif()
but this gave an error:
CMake Error at /home/runner/work/_temp/cf662be5-7e60-4c73-89b3-cdd645a722b9:8:
Parse error. Expected "(", got unquoted argument with text "::set-output".
CMake Error: Error processing file: /home/runner/work/_temp/cf662be5-7e60-4c73-89b3-cdd645a722b9
Error: Process completed with exit code 1.
so I changed it into:
if (NOT result EQUAL 0)
echo "name=tst_053::$(cat build/testing/test_output_053/latex/refman.log)" >> $GITHUB_OUTPUT
echo ${{ steps.test.outputs.tst_053 }}
message(FATAL_ERROR "Running tests failed!")
endif()
which also gave an error:
CMake Error at /home/runner/work/_temp/d4611e08-3fc9-4a91-942b-fc0cc78a9904:8:
Parse error. Expected "(", got quoted argument with text
"name=version::$(cat build/testing/test_output_053/latex/refman.log)".
CMake Error: Error processing file: /home/runner/work/_temp/d4611e08-3fc9-4a91-942b-fc0cc78a9904
Error: Process completed with exit code 1.
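Both attempts fail for the same underlying reason: with shell: cmake -P {0} the run body is executed as a CMake script, so a shell builtin like echo is parsed as an unknown CMake command. Purely as a sketch, a CMake-native way to surface the log before failing would be:
if (NOT result EQUAL 0)
  # read the LaTeX log with CMake itself and print it to the job output
  file(READ build/testing/test_output_053/latex/refman.log LOG_CONTENTS)
  message("${LOG_CONTENTS}")
  message(FATAL_ERROR "Running tests failed!")
endif()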
The GitHub action from https://github.com/juliangruber/read-file-action, used like
steps:
  - name: Read log file
    id: package
    uses: juliangruber/read-file-action@v1
    with:
      path: ./build/testing/test_output_053/latex/refman.log
  - name: Echo log file
    run: echo "${{ steps.package.outputs.content }}"
could be an alternative, but I don't see a way to plug this into the if... endif block as run and uses are not allowed in one step.
Any suggestions / solutions?
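One possibility (an untested sketch, not from the original question): keep run and uses in separate steps and guard the log-reading steps with the standard GitHub Actions condition if: failure(), so they only execute when the CMake test step has failed:
- name: Read log file
  if: failure()
  id: package
  uses: juliangruber/read-file-action@v1
  with:
    path: ./build/testing/test_output_053/latex/refman.log
- name: Echo log file
  if: failure()
  run: echo "${{ steps.package.outputs.content }}"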

How to access Bitrise iOS test results from Swift DangerFile?

I'm trying to implement danger-swift but I don't know where I can access test results and flag failing ones from Bitrise.
I'm using a plugin for danger-swift called DangerXCodeSummary, but I don't know where Bitrise stores the test results from xcode-test@2.
DangerFile:
import Danger
import DangerXCodeSummary
import Foundation
let danger = Danger()
let testReportPath = "??" // What's the path for the test results?
XCodeSummary(filePath: testReportPath).report()
Bitrise script:
...
UnitTests:
  before_run:
  - _ensure_dependencies
  after_run:
  - _add_build_result_to_pr
  steps:
  - xcode-test@2: {} # What's the file path for the test results?
  - deploy-to-bitrise-io@1: {}
  envs:
  - opts:
      is_expand: false
    BITRISE_PROJECT_PATH: MyApp.xcodeproj
  - opts:
      is_expand: false
    BITRISE_SCHEME: AppTests
  description: Unit Tests running at every commit.
...
_add_build_result_to_pr:
  steps:
  - script@1:
      title: Commenting on the PR
      is_always_run: true
      inputs:
      - content: |-
          #!/usr/bin/env bash
          # fail if any command fails
          set -e
          echo "################### DANGER ######################"
          echo "Install Danger"
          brew install danger/tap/danger-swift
          echo "Run danger"
          danger-swift ci
          echo "#################################################"
You can see what outputs a step generates in the Workflow Editor UI, and alternatively in the step's repository in step.yml. For the Xcode Test step specifically: https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L287
Am I right that you're looking for the xcresult test output? If you are, you can read it from the BITRISE_XCRESULT_PATH environment variable (https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L296), which is the output of Xcode Test (once Xcode Test is finished, it sets this environment variable to the xcresult's path). Keep in mind this is in .xcresult format (the official Xcode test result format).
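Wired into the DangerFile from the question, that could look like the following sketch; note that depending on the plugin version, XCodeSummary may expect a converted JSON report rather than the raw .xcresult bundle:
import Danger
import DangerXCodeSummary
import Foundation

let danger = Danger()
// BITRISE_XCRESULT_PATH is exported by the xcode-test step once it finishes
let testReportPath = ProcessInfo.processInfo.environment["BITRISE_XCRESULT_PATH"] ?? ""
XCodeSummary(filePath: testReportPath).report()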

Luigi does not send error codes to concourse ci

I have a test pipeline on Concourse with one job that runs a set of luigi tasks. My problem: failures in the luigi tasks do not propagate up to the Concourse job. In other words, if a luigi task fails, Concourse does not register that failure and states that the job completed successfully. I will first post the code I am running, then the solutions I have tried.
luigi-tasks.py
import luigi
from tasks import Task1, Task2, Task3  # tasks.py shown below, assumed to be on PYTHONPATH

class Pipeline1(luigi.WrapperTask):
    def requires(self):
        yield Task1()
        yield Task2()
        yield Task3()
tasks.py
import luigi
import pandas as pd

class Task1(luigi.Task):
    def requires(self):
        return None

    def output(self):
        return luigi.LocalTarget('stuff/task1.csv')

    def run(self):
        # uncomment line below to generate task failure
        # assert(True==False)
        print('task 1 complete...')
        t = pd.DataFrame()
        with self.output().open('w') as outtie:
            outtie.write('complete')
# Tasks 2 and 3 are duplicates of this, but with 1s replaced with 2s or 3s.
config file
[retcode]
# codes are in increasing level of severity (for most applications)
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
begin.sh
#!/bin/sh
set -e
export PYTHONPATH='.'
luigi --module luigi-tasks Pipeline1 --local-scheduler
echo $?
pipeline.yml
# <resources, resource types, and docker image build job defined here>
# job of interest
- name: run-docker-image
  plan:
  - get: timer
    trigger: true
  - get: docker-image-ecr
    passed: [build-docker-image]
  - get: run-git
  - task: run-script
    image: docker-image-ecr
    config:
      inputs:
      - name: run-git
      platform: linux
      run:
        dir: ./run-git
        path: /bin/bash
        args: ["begin.sh"]
I've introduced errors in a few ways: assertions/raising an exception (ValueError) within an individual task's run() method and within the wrapper, and sys.exit(luigi.retcodes.retcode().unhandled_exception). I also tried failing all tasks. I did this in case the error needed to be generated in a specific manner/location. Though they all produced a failed task, none of them produced an error in the concourse server.
At first, I thought concourse just gives a success if it can run the file or command tasked to it. I'm not sure it's that simple, though. Interestingly, when I run the pipeline on my local computer (luigi --modules luigi-tasks Pipeline1 --local-scheduler) I get an appropriate return code (e.g. 30), but when I run the pipeline within the concourse server, I get a return code of 0 after the luigi tasks complete (from echo $? in the bash script).
Would appreciate any insight into this problem.
My suspicion is that luigi doesn't see your config file with return codes. Its default behavior is to return 0, whether tasks fail or succeed.
This experiment should help to debug that:
1. Force a failed job: add an exit 1 at the end of begin.sh.
2. Hijack the job: fly -t <target> i -j <pipeline>/<job> and select run-script.
3. Inside the container, run: cd ./run-git; /bin/bash begin.sh
4. Ensure the luigi config is present and named appropriately, e.g. luigi.cfg.
5. Re-run the command: LUIGI_CONFIG_PATH=luigi.cfg bash ./begin.sh
6. Check the exit code: echo $?
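If the missing config turns out to be the cause, the fix can live in begin.sh itself; a sketch assuming the retcode config shown above is saved as luigi.cfg next to the script:
#!/bin/sh
set -e
export PYTHONPATH='.'
# point luigi at the retcode config so task failures map to nonzero exit codes
export LUIGI_CONFIG_PATH=luigi.cfg
luigi --module luigi-tasks Pipeline1 --local-scheduler
# note: with set -e, a nonzero exit from luigi aborts the script here,
# which is exactly what Concourse needs to mark the job as failed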

gitlab-ci.yml: 'script: -pytest cannot find any tests to check'

I'm having trouble implementing a toy example that runs pytest within .gitlab-ci.yml
gitlab_ci is a repo containing a single file test_hello.py
gitlab_ci/
    test_hello.py
test_hello.py
# test_hello.py
import pytest

def hello():
    print("hello")

def hello_test():
    assert hello() == 'hello'
.gitlab-ci.yml
# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
    - apt-get update -q -y
    - pip install pytest
    - pytest # if this is removed, the job outputs 'Success'
CI/CD terminal output
$ pytest
=== test session starts ===
platform linux -- Python 3.6.9, pytest-5.2.0, py-1.8.0, pluggy-0.13.0
rootdir: /builds/kunov/gitlab_ci
collected 0 items
=== no tests ran in 0.02s ===
ERROR: Job failed: exit code 1
I'm not sure why the test did not run... pytest does not seem to recognize test_hello.py
Solution
Put the Python file inside the newly created tests folder:
gitlab_ci/
    .gitlab-ci.yml
    tests/
        test_hello.py
Modify .gitlab-ci.yml in the following manner:
# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
    - apt-get update -q -y
    - pip install pytest
    - pwd
    - ls -l
    - export PYTHONPATH="$PYTHONPATH:."
    - python -c "import sys;print(sys.path)"
    - pytest
And test_hello.py would stay the same as before.
This blog post mentions a similar pipeline, but:
However, this did not work as pytest was unable to find the ‘bild’ module (ie. the source code) to test.
The problem encountered here is that the ‘bild’ module is not able to be found by the test_*.py files, as the top-level directory of the project was not being specified in the system path:
pytest:
  stage: Test
  script:
    - pwd
    - ls -l
    - export PYTHONPATH="$PYTHONPATH:."
    - python -c "import sys;print(sys.path)"
    - pytest
The OP kunov confirms in the comments:
It works now! I put the single file inside a newly created folder called 'test'
Manipulation of the PYTHONPATH variable is considered by some to be a bad practice (see e.g. this answer on Stack Overflow or this Level Up Coding post). While this is possibly not a huge issue in the scope of a GitLab CI job, here is a solution based on Alberto Mardegan's comment at the mentioned blog post, without the need to fiddle with PYTHONPATH (also somewhat cleaner):
pytest:
  stage: Test
  script:
    - pwd
    - ls -l
    - python -m pytest
Why does this work? From the pytest docs:
You can invoke testing through the Python interpreter from the command line:
python -m pytest [...]
This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.
test_hello.py
def test_hello():  # func name must start with "test_", not "hello_test"
    assert hello() == 'hello'

How to force Devel::Cover to ignore a folder when using perl-helpers via Travis CI

The MetaCPAN Travis CI coverage builds are quite slow. See https://travis-ci.org/metacpan/metacpan-web/builds/238884497 This is likely in part because we're not successfully ignoring the /local folder that gets created by Carton as part of our build. See https://coveralls.io/builds/11809290
We're using perl-helpers to help with our Travis configuration. I thought I should be able to use the DEVEL_COVER_OPTIONS environment variable in order to fix this, but I guess I don't have the correct incantation. I've included the entire config below because a few snippets out of context seemed misleading.
language: perl
perl:
  - "5.22"
matrix:
  fast_finish: true
  allow_failures:
    - env: COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - env: USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
env:
  global:
    # Carton --deployment only works on the same version of perl
    # that the snapshot was built from.
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore ^local/"
  matrix:
    # Get one passing run with coverage and one passing run with Test::Vars
    # checks. If run together they more than double the build time.
    - COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
    - USE_CPANFILE_SNAPSHOT=true
before_install:
  - git clone git://github.com/travis-perl/helpers ~/travis-perl-helpers
  - source ~/travis-perl-helpers/init
  - npm install -g less js-beautify
  # Pre-install from backpan to avoid upgrade breakage.
  - cpanm -n http://cpan.metacpan.org/authors/id/M/ML/MLEHMANN/common-sense-3.6.tar.gz
  - cpanm -n App::cpm Carton
install:
  - cpan-install --coverage # installs coverage prereqs, if enabled
  - 'cpm install `test "${USE_CPANFILE_SNAPSHOT}" = "false" && echo " --resolver metadb" || echo " --resolver snapshot"`'
before_script:
  - coverage-setup
script:
  # Devel::Cover isn't in the cpanfile
  # but if it's installed into the global dirs this should work.
  - carton exec prove -lr -j$(test-jobs) t
after_success:
  - coverage-report
notifications:
  email:
    recipients:
      - olaf@seekrit.com
    on_success: change
    on_failure: always
  irc: "irc.perl.org#metacpan-travis"
# Use newer travis infrastructure.
sudo: false
cache:
  directories:
    - local
The syntax for the Devel::Cover options on the command line is a bit odd: the options need to be comma-separated, at least when you set them via PERL5OPT.
DEVEL_COVER_OPTIONS="-ignore,^local/"
See for example https://github.com/simbabque/AWS-S3/blob/master/.travis.yml#L26, where a whole list of options is passed comma-separated:
PERL5OPT=-MDevel::Cover=-ignore,"t/",+ignore,"prove",-coverage,statement,branch,condition,path,subroutine prove -lrs t
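Applied to the Travis configuration above, only the separator changes; the relevant env section would then read (a sketch):
env:
  global:
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore,^local/"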