Problem showing contents of file on GitHub Actions

In the GitHub Actions workflows we have for doxygen (see https://github.com/doxygen/doxygen, especially https://github.com/doxygen/doxygen/blob/master/.github/workflows/build_cmake.yml) we have the following step:

- name: Run tests
  id: test
  shell: cmake -P {0}
  run: |
    set(ENV{CTEST_OUTPUT_ON_FAILURE} "ON")
    execute_process(
      COMMAND cmake --build build --target tests TEST_FLAGS="--xml --xmlxsd --xhtml --qhp --docbook --rtf --pdf"
      RESULT_VARIABLE result
    )
    if (NOT result EQUAL 0)
      message(FATAL_ERROR "Running tests failed!")
    endif()
Due to the --pdf flag this nowadays results in an error, but it is not clear why the error occurs. A file build/testing/test_output_053/latex/refman.log is written with the log information from the PDF process, so in case of an error we want to be able to view this file.
Based on https://stackoverflow.com/a/60679655/1657886 I first changed the if construct into:
if (NOT result EQUAL 0)
  echo ::set-output name=tst_053::$(cat build/testing/test_output_053/latex/refman.log)
  echo ${{ steps.test.outputs.tst_053 }}
  message(FATAL_ERROR "Running tests failed!")
endif()
but this gave an error:
CMake Error at /home/runner/work/_temp/cf662be5-7e60-4c73-89b3-cdd645a722b9:8:
Parse error. Expected "(", got unquoted argument with text "::set-output".
CMake Error: Error processing file: /home/runner/work/_temp/cf662be5-7e60-4c73-89b3-cdd645a722b9
Error: Process completed with exit code 1.
so I changed it into:
if (NOT result EQUAL 0)
  echo "name=tst_053::$(cat build/testing/test_output_053/latex/refman.log)" >> $GITHUB_OUTPUT
  echo ${{ steps.test.outputs.tst_053 }}
  message(FATAL_ERROR "Running tests failed!")
endif()
which also gave an error:
CMake Error at /home/runner/work/_temp/d4611e08-3fc9-4a91-942b-fc0cc78a9904:8:
Parse error. Expected "(", got quoted argument with text
"name=version::$(cat build/testing/test_output_053/latex/refman.log)".
CMake Error: Error processing file: /home/runner/work/_temp/d4611e08-3fc9-4a91-942b-fc0cc78a9904
Error: Process completed with exit code 1.
Also, the GitHub action from https://github.com/juliangruber/read-file-action, used like:

steps:
- name: Read log file
  id: package
  uses: juliangruber/read-file-action@v1
  with:
    path: ./build/testing/test_output_053/latex/refman.log
- name: Echo log file
  run: echo "${{ steps.package.outputs.content }}"
could be an alternative, but I don't see a way to plug this into the if ... endif() block, as run and uses are not allowed in one step.
Any suggestions / solutions?
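One observation: because the step declares shell: cmake -P {0}, the whole run block is executed as a CMake script, so shell builtins like echo are simply not valid there, which is exactly what both parse errors are pointing at. A minimal sketch of a pure-CMake alternative that dumps the log into the step's console output before failing (the path is taken from the question; file(READ) and message() are standard CMake commands):

```cmake
if (NOT result EQUAL 0)
  # Dump the LaTeX log so it shows up in the action's console output
  if (EXISTS "build/testing/test_output_053/latex/refman.log")
    file(READ "build/testing/test_output_053/latex/refman.log" REFMAN_LOG)
    message(STATUS "${REFMAN_LOG}")
  endif()
  message(FATAL_ERROR "Running tests failed!")
endif()
```

Alternatively, a separate follow-up step with a normal shell and a step-level condition of if: failure() could simply run cat build/testing/test_output_053/latex/refman.log, since step-level if conditions are evaluated by GitHub Actions itself rather than by CMake.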

Related

How to access Bitrise iOS test results from Swift DangerFile?

I'm trying to implement danger-swift, but I don't know where I can access test results and flag failing ones from Bitrise.
I'm using a plugin for danger-swift called DangerXCodeSummary, but I don't know where Bitrise stores the test results from xcode-test@2.
DangerFile:

import Danger
import DangerXCodeSummary
import Foundation

let danger = Danger()
let testReportPath = "??" // What's the path for the test results?
XCodeSummary(filePath: testReportPath).report()
Bitrise script:

...
UnitTests:
  before_run:
  - _ensure_dependencies
  after_run:
  - _add_build_result_to_pr
  steps:
  - xcode-test@2: {} # What's the file path for the test results?
  - deploy-to-bitrise-io@1: {}
  envs:
  - opts:
      is_expand: false
    BITRISE_PROJECT_PATH: MyApp.xcodeproj
  - opts:
      is_expand: false
    BITRISE_SCHEME: AppTests
  description: Unit Tests running at every commit.
...
_add_build_result_to_pr:
  steps:
  - script@1:
      title: Commenting on the PR
      is_always_run: true
      inputs:
      - content: |-
          #!/usr/bin/env bash
          # fail if any commands fails
          set -e
          echo "################### DANGER ######################"
          echo "Install Danger"
          brew install danger/tap/danger-swift
          echo "Run danger"
          danger-swift ci
          echo "#################################################"
You can see what outputs the step generates on the Workflow Editor UI, and alternatively in the Step's repository in step.yml. For Xcode Test step specifically: https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L287
Am I right that you're looking for the xcresult test output? If so, you can read it from the BITRISE_XCRESULT_PATH environment variable (https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L296), which is an output of the Xcode Test step: once the step finishes, it sets this environment variable to the xcresult's path. Keep in mind this is in .xcresult format (the official Xcode test result format).
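Applied to the DangerFile from the question, the placeholder path could be filled from that variable. A sketch only: it assumes the xcode-test step has already finished (so the variable is set), and whether XCodeSummary can consume an .xcresult bundle directly is a separate question that the format caveat above already flags.

```swift
import Danger
import DangerXCodeSummary
import Foundation

let danger = Danger()
// BITRISE_XCRESULT_PATH is exported by the xcode-test step after it
// finishes; fall back to an empty string when running outside Bitrise.
let testReportPath = ProcessInfo.processInfo.environment["BITRISE_XCRESULT_PATH"] ?? ""
XCodeSummary(filePath: testReportPath).report()
```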

Where is coverage.xml located in Codecov?

I use /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml but I think the path is not correct. My report is not uploading on GitHub Actions.
Full file.
The coverage file is located at /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml. In general, it is /home/runner/work/<project>/<project>/coverage.xml.
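As a quick sanity check, the path pattern can be reconstructed in shell; the repository name below is taken from the question, and on GitHub-hosted runners this directory is also what $GITHUB_WORKSPACE points to after actions/checkout has run:

```shell
repo="SIESTAstepper"
# actions/checkout places the repository at /home/runner/work/<project>/<project>,
# i.e. $GITHUB_WORKSPACE, so a coverage.xml written to the repo root ends up here:
path="/home/runner/work/${repo}/${repo}/coverage.xml"
echo "${path}"
```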
I solved it with the following code.
- name: Generate Report
  run: |
    pip install codecov
    pip install pytest-cov
    pytest --cov=./ --cov-report=xml
    codecov
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3.1.0
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    directory: ./coverage/reports/
    env_vars: OS,PYTHON
    files: /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml
    flags: tests
Full code is here.
From your last run, check whether the "Upload coverage to Codecov" step is actually executed.
The workflow seems to stop before that, in the pytest test step:
/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/distutils/file_util.py:44: DistutilsFileError
=========================== short test summary info ============================
FAILED tests/__main__.py::test_merge_ani - distutils.errors.DistutilsFileErro...
FAILED tests/tests.py::test_merge_ani - distutils.errors.DistutilsFileError: ...
========================= 2 failed, 30 passed in 4.23s =========================
If it does not stop before that, you need to check if the file is indeed created, considering the error message is:
None of the following appear to exist as files: ./coverage.xml

How do you write a cmake.yml which allows you to review a file on GitHub?

I have written a program which takes a single command-line argument containing the input file, runs it through some algorithms, and creates a text file containing the results.
What I need to be able to do is review that file using GitHub Actions. My program builds with GitHub Actions; I just can't review the output files.
Currently this is how I have my cmake.yml set up:
on:
  push:
    branches: [ main ]
env:
  BUILD_TYPE: Debug
  TEST_EXE: 22s_pa01_nicandpaige
  INPUT_FILE: input/input.txt
  OUTPUT_FILE: output/test-highvalue.txt output/test-custom.txt. output/test-bruteforce
jobs:
  build:
    runs-on: ubuntu-latest
    timeout-minutes: 3
    steps:
    - uses: actions/checkout@v2
    - name: Configure CMake
      run: cmake -B ${{github.workspace}}/build -DCMAKE_BUILD_TYPE=${{env.BUILD_TYPE}}
    - name: Build
      run: cmake --build ${{github.workspace}}/build --config ${{env.BUILD_TYPE}}
    - name: Execute Project
      working-directory: ${{github.workspace}}/build
      run: ${{github.workspace}}/build/${{env.TEST_EXE}} ${{env.INPUT_FILE}} ${{env.OUTPUT_FILE}}
    - name: Upload output files to GitHub so they can be reviewed
      uses: actions/upload-artifact@v2
      with:
        name: project_output
        path: ${{github.workspace}}/build/output
And with regard to how the text files are written within my program:
std::ofstream myfile;
myfile.open("../output/test-highvalue.txt");
for (auto & i : x) {
    myfile << i.getId();
    myfile << " ";
    myfile << i.getValue();
    totalValue += i.getValue();
    myfile << " ";
    myfile << i.getLength();
    myfile << " ";
    myfile << i.getH();
    myfile << std::endl;
}
The code above is replicated across three different classes and gets called in main.
As for how we read the input file:
std::vector<paintingData> read_paintings(char* arg) {
    std::ifstream inFS(arg);
    if (!inFS.is_open()) {
        std::cout << "Failed to open " << arg << std::endl;
        return std::vector<paintingData>();
    }
The function is more extensive than shown here, but it gets called in main, and the data gets passed to the necessary algorithms.
Although it builds, when I look at the build more closely I get the following messages:
Please execute this program with the input file name included as an argument.
and
Warning: No files were found with the provided path: /home/runner/work/22s-pa01-nicandpaige/22s-pa01-nicandpaige/build/output. No artifacts will be uploaded.
I am not too sure where exactly the problem lies, as this is my first time writing a cmake.yml, and I have hit a block.
Any guidance would be greatly appreciated.
This seems to be an error message emitted by your program:
Please execute this program with the input file name included as an argument.
We can't know what goes wrong because you don't show the part of your program that emits this.
You say
I have written a program which takes in an single command line argument
but you give multiple arguments:
${{env.INPUT_FILE}} ${{env.OUTPUT_FILE}}
You show the code line
myfile.open("../output/test-highvalue.txt");
Your working directory is ${{github.workspace}}/build, therefore that path resolves to ${{github.workspace}}/output, not ${{github.workspace}}/build/output.
You don't show how you create the output directory.
To sum up: There are some fishy things but you don't give enough information for us to actually figure out what goes wrong.

Luigi does not send error codes to concourse ci

I have a test pipeline on concourse with one job that runs a set of luigi tasks. My problem is: failures in the luigi tasks do not propagate up to the concourse job. In other words, if a luigi task fails, concourse will not register that failure and states that the concourse job completed successfully. I will first post the code I am running, then the solutions I have tried.
luigi-tasks.py

class Pipeline1(luigi.WrapperTask):
    def requires(self):
        yield Task1()
        yield Task2()
        yield Task3()
tasks.py

class Task1(luigi.Task):
    def requires(self):
        return None
    def output(self):
        return luigi.LocalTarget('stuff/task1.csv')
    def run(self):
        # uncomment line below to generate task failure
        # assert(True==False)
        print('task 1 complete...')
        t = pd.DataFrame()
        with self.output().open('w') as outtie:
            outtie.write('complete')

# Tasks 2 and 3 are duplicates of this, but with 1s replaced with 2s or 3s.
config file
[retcode]
# codes are in increasing level of severity (for most applications)
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
begin.sh
#!/bin/sh
set -e
export PYTHONPATH='.'
luigi --module luigi-tasks Pipeline1 --local-scheduler
echo $?
pipeline.yml
# <resources, resource types, and docker image build job defined here>
# job of interest
- name: run-docker-image
  plan:
  - get: timer
    trigger: true
  - get: docker-image-ecr
    passed: [build-docker-image]
  - get: run-git
  - task: run-script
    image: docker-image-ecr
    config:
      inputs:
      - name: run-git
      platform: linux
      run:
        dir: ./run-git
        path: /bin/bash
        args: ["begin.sh"]
I've introduced errors in a few ways: assertions/raising an exception (ValueError) within an individual task's run() method and within the wrapper, and sys.exit(luigi.retcodes.retcode().unhandled_exception). I also tried failing all tasks. I did this in case the error needed to be generated in a specific manner/location. Though they all produced a failed task, none of them produced an error in the concourse server.
At first, I thought concourse just gives a success if it can run the file or command tasked to it. I'm not sure it's that simple, though. Interestingly, when I run the pipeline on my local computer (luigi --modules luigi-tasks Pipeline1 --local-scheduler) I get an appropriate return code (e.g. 30), but when I run the pipeline within the concourse server, I get a return code of 0 after the luigi tasks complete (from echo $? in the bash script).
Would appreciate any insight into this problem.
My suspicion is that luigi doesn't see your config file with return codes. Its default behavior is to return 0, whether tasks fail or succeed.
This experiment should help to debug that:
Force a failed job: add an exit 1 at the end of begin.sh
Hijack the job: fly -t <target> i -j <pipeline>/<job> -> select run-script
cd ./run-git; /bin/bash begin.sh
Ensure the luigi config is present and named appropriately, e.g. luigi.cfg
Re-run the command: LUIGI_CONFIG_PATH=luigi.cfg bash ./begin.sh
Check output: echo $?
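Putting the last two checks together, begin.sh could point luigi at the config explicitly. A sketch, assuming the retcode config shown in the question is committed as luigi.cfg in the repository root:

```shell
#!/bin/sh
set -e
export PYTHONPATH='.'
# Point luigi at the retcode config explicitly; without a discoverable
# config, luigi falls back to its default behavior of exiting with 0
# whether tasks fail or succeed.
export LUIGI_CONFIG_PATH="$(pwd)/luigi.cfg"
luigi --module luigi-tasks Pipeline1 --local-scheduler
```

With set -e in place, a non-zero retcode from luigi now aborts the script immediately, so the Concourse task inherits the failure.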

gitlab-ci.yml: 'script: -pytest cannot find any tests to check'

I'm having trouble implementing a toy example that runs pytest within .gitlab-ci.yml
gitlab_ci is a repo containing a single file test_hello.py
gitlab_ci/
    test_hello.py
test_hello.py

# test_hello.py
import pytest

def hello():
    print("hello")

def hello_test():
    assert hello() == 'hello'
.gitlab-ci.yml

# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
  - apt-get update -q -y
  - pip install pytest
  - pytest # if this is removed, the job outputs 'Success'
CI/CD terminal output
$ pytest
=== test session starts ===
platform linux -- Python 3.6.9, pytest-5.2.0, py-1.8.0, pluggy-0.13.0
rootdir: /builds/kunov/gitlab_ci
collected 0 items
=== no tests ran in 0.02s ===
ERROR: Job failed: exit code 1
I'm not sure why the test did not run... pytest does not seem to recognize test_hello.py
Solution
Put the python file inside the newly created tests folder:

gitlab_ci/
    .gitlab-ci.yml
    tests/
        test_hello.py
Modify .gitlab-ci.yml in the following manner:
# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
  - apt-get update -q -y
  - pip install pytest
  - pwd
  - ls -l
  - export PYTHONPATH="$PYTHONPATH:."
  - python -c "import sys;print(sys.path)"
  - pytest
And test_hello.py would stay the same as before.
This blog post mentions a similar pipeline, but:
However, this did not work as pytest was unable to find the ‘bild’ module (ie. the source code) to test.
The problem encountered here is that the ‘bild’ module is not able to be found by the test_*.py files, as the top-level directory of the project was not being specified in the system path:
pytest:
  stage: Test
  script:
  - pwd
  - ls -l
  - export PYTHONPATH="$PYTHONPATH:."
  - python -c "import sys;print(sys.path)"
  - pytest
The OP kunov confirms in the comments:
It works now! I put the single file inside a newly created folder called 'test'
Manipulation of the PYTHONPATH variable is considered by some to be a bad practice (see e.g. this answer on Stack Overflow or this Level Up Coding post). While this is possibly not a huge issue in the scope of a GitLab CI job, here is a solution based on Alberto Mardegan's comment on the mentioned blog post, without the need to fiddle with PYTHONPATH (also somewhat cleaner):
pytest:
  stage: Test
  script:
  - pwd
  - ls -l
  - python -m pytest
Why does this work? From the pytest docs:

You can invoke testing through the Python interpreter from the command line:
python -m pytest [...]
This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.
Note also that the test function itself must be renamed in test_hello.py:

def test_hello():  # function name must start with "test_", not "hello_test"

pytest's default collection only picks up functions whose names match test_*, which is why the original hello_test was never collected and the job reported "collected 0 items".
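For completeness, a corrected version of the file. Note one added fix beyond the rename: hello() as originally written prints instead of returning, so it evaluates to None and the assertion would still fail even once pytest collects the test.

```python
# test_hello.py
def hello():
    # Return the value instead of printing it; print("hello") would make
    # hello() return None and the assertion below would fail.
    return "hello"

def test_hello():  # collected by pytest: name matches the test_* pattern
    assert hello() == "hello"
```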