How to access Bitrise iOS test results from a Swift Dangerfile?

I'm trying to implement danger-swift, but I don't know where I can access the test results from Bitrise so I can flag the failing ones.
I'm using a danger-swift plugin called DangerXCodeSummary, but I don't know where Bitrise stores the test results from xcode-test@2.
DangerFile:
import Danger
import DangerXCodeSummary
import Foundation
let danger = Danger()
let testReportPath = "??" // What's the path for the test results?
XCodeSummary(filePath: testReportPath).report()
Bitrise script:
...
UnitTests:
  before_run:
  - _ensure_dependencies
  after_run:
  - _add_build_result_to_pr
  steps:
  - xcode-test@2: {} # What's the file path for the test results?
  - deploy-to-bitrise-io@1: {}
  envs:
  - opts:
      is_expand: false
    BITRISE_PROJECT_PATH: MyApp.xcodeproj
  - opts:
      is_expand: false
    BITRISE_SCHEME: AppTests
  description: Unit Tests running at every commit.
...
_add_build_result_to_pr:
  steps:
  - script@1:
      title: Commenting on the PR
      is_always_run: true
      inputs:
      - content: |-
          #!/usr/bin/env bash
          # fail if any command fails
          set -e
          echo "################### DANGER ######################"
          echo "Install Danger"
          brew install danger/tap/danger-swift
          echo "Run danger"
          danger-swift ci
          echo "#################################################"

You can see which outputs a step generates in the Workflow Editor UI, or alternatively in the step's repository in step.yml. For the Xcode Test step specifically: https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L287
Am I right that you're looking for the xcresult test output? If so, you can read its location from the BITRISE_XCRESULT_PATH environment variable (https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L296), which the Xcode Test step sets once it finishes. Keep in mind this is an .xcresult bundle (the official Xcode test result format).
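For the Dangerfile above, one way to pick this up is to read the variable at runtime. A minimal sketch, assuming the Danger script step runs after xcode-test in the same workflow (note that DangerXCodeSummary's README describes consuming a JSON report generated by xcpretty, so whether it accepts a raw .xcresult bundle directly is worth verifying):
import Danger
import Foundation

let danger = Danger()

// BITRISE_XCRESULT_PATH is exported by the xcode-test step once it finishes.
let env = ProcessInfo.processInfo.environment
if let xcresultPath = env["BITRISE_XCRESULT_PATH"] {
    message("Test results bundle: \(xcresultPath)")
    // Hand xcresultPath to the summary plugin here (converting it first
    // if the plugin expects a JSON report rather than an .xcresult bundle).
} else {
    warn("BITRISE_XCRESULT_PATH is not set; did the xcode-test step run?")
}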

Related

Where is coverage.xml located in Codecov?

I use /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml but I think the path is not correct. My report is not uploading on GitHub Actions.
Full file.
The coverage file is located at /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml. In general, it is /home/runner/work/<project>/<project>/coverage.xml.
I solved it with the following code.
- name: Generate Report
  run: |
    pip install codecov
    pip install pytest-cov
    pytest --cov=./ --cov-report=xml
    codecov
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3.1.0
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    directory: ./coverage/reports/
    env_vars: OS,PYTHON
    files: /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml
    flags: tests
Full code is here.
From your last run, check whether the "Upload coverage to Codecov" step is actually executed.
The workflow seems to stop before that, in the pytest test step:
/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/distutils/file_util.py:44: DistutilsFileError
=========================== short test summary info ============================
FAILED tests/__main__.py::test_merge_ani - distutils.errors.DistutilsFileErro...
FAILED tests/tests.py::test_merge_ani - distutils.errors.DistutilsFileError: ...
========================= 2 failed, 30 passed in 4.23s =========================
If it does not stop before that, you need to check if the file is indeed created, considering the error message is:
None of the following appear to exist as files: ./coverage.xml
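To verify that, a couple of shell commands in the job just before the upload step can confirm the file exists (a quick check based on the error message; not part of the original answer):
pwd                     # should print /home/runner/work/SIESTAstepper/SIESTAstepper
ls -l ./coverage.xml    # errors out if the report was never generated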

Luigi does not send error codes to concourse ci

I have a test pipeline on concourse with one job that runs a set of luigi tasks. My problem is: failures in the luigi tasks do not propagate up to the concourse job. In other words, if a luigi task fails, concourse will not register that failure and states that the concourse job completed successfully. I will first post the code I am running, then the solutions I have tried.
luigi-tasks.py
import luigi
from tasks import Task1, Task2, Task3

class Pipeline1(luigi.WrapperTask):
    def requires(self):
        yield Task1()
        yield Task2()
        yield Task3()
tasks.py
import luigi
import pandas as pd

class Task1(luigi.Task):
    def requires(self):
        return None

    def output(self):
        return luigi.LocalTarget('stuff/task1.csv')

    def run(self):
        # uncomment the line below to generate a task failure
        # assert(True == False)
        print('task 1 complete...')
        t = pd.DataFrame()
        with self.output().open('w') as outtie:
            outtie.write('complete')
# Tasks 2 and 3 are duplicates of this, but with 1s replaced with 2s or 3s.
config file
[retcode]
# codes are in increasing level of severity (for most applications)
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
begin.sh
#!/bin/sh
set -e
export PYTHONPATH='.'
luigi --module luigi-tasks Pipeline1 --local-scheduler
echo $?
pipeline.yml
# <resources, resource types, and docker image build job defined here>
#job of interest
- name: run-docker-image
plan:
- get: timer
trigger: true
- get: docker-image-ecr
passed: [build-docker-image]
- get: run-git
- task: run-script
image: docker-image-ecr
config:
inputs:
- name: run-git
platform: linux
run:
dir: ./run-git
path: /bin/bash
args: ["begin.sh"]
I've introduced errors in a few ways: assertions/raising an exception (ValueError) within an individual task's run() method and within the wrapper, and sys.exit(luigi.retcodes.retcode().unhandled_exception). I also tried failing all tasks. I did this in case the error needed to be generated in a specific manner/location. Though they all produced a failed task, none of them produced an error in the concourse server.
At first, I thought concourse just gives a success if it can run the file or command tasked to it. I'm not sure it's that simple, though. Interestingly, when I run the pipeline on my local computer (luigi --modules luigi-tasks Pipeline1 --local-scheduler) I get an appropriate return code (e.g. 30), but when I run the pipeline within the concourse server, I get a return code of 0 after the luigi tasks complete (from echo $? in the bash script).
Would appreciate any insight into this problem.
My suspicion is that luigi doesn't see your config file with return codes. Its default behavior is to return 0, whether tasks fail or succeed.
This experiment should help to debug that (a sketch of the adjusted begin.sh follows the list):
1. Force a failed job: add an exit 1 at the end of begin.sh.
2. Hijack the job: fly -t <target> i -j <pipeline>/<job> -> select run-script.
3. Run the script: cd ./run-git; /bin/bash begin.sh
4. Ensure the luigi config is present and named appropriately, e.g. luigi.cfg.
5. Re-run the command: LUIGI_CONFIG_PATH=luigi.cfg bash ./begin.sh
6. Check the exit code: echo $?
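If the missing config turns out to be the culprit, a begin.sh along these lines should let the failure propagate (a sketch; it assumes luigi.cfg is checked in next to begin.sh inside the run-git input):
#!/bin/sh
set -e
export PYTHONPATH='.'
# Point luigi at the retcode config explicitly; without it, luigi's
# default is to exit 0 whether tasks fail or succeed.
export LUIGI_CONFIG_PATH="$(pwd)/luigi.cfg"
luigi --module luigi-tasks Pipeline1 --local-scheduler
# With set -e, a non-zero retcode (e.g. 30 for task_failed) aborts the
# script right here, so the Concourse job registers the failure.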

gitlab-ci.yml: 'script: -pytest cannot find any tests to check'

I'm having trouble implementing a toy example that runs pytest within .gitlab-ci.yml.
gitlab_ci is a repo containing a single file, test_hello.py:
gitlab_ci/
    test_hello.py
test_hello.py
# test_hello.py
import pytest

def hello():
    print("hello")

def hello_test():
    assert hello() == 'hello'
.gitlab-ci.yml
# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
    - apt-get update -q -y
    - pip install pytest
    - pytest # if this is removed, the job outputs 'Success'
CI/CD terminal output
$ pytest
=== test session starts ===
platform linux -- Python 3.6.9, pytest-5.2.0, py-1.8.0, pluggy-0.13.0
rootdir: /builds/kunov/gitlab_ci
collected 0 items
=== no tests ran in 0.02s ===
ERROR: Job failed: exit code 1
I'm not sure why the test did not run... pytest does not seem to recognize test_hello.py
Solution
Put the python file inside a newly created tests folder:
gitlab_ci/
    .gitlab-ci.yml
    tests/
        test_hello.py
Modify .gitlab-ci.yml in the following manner:
# .gitlab-ci.yml
pytest:
  image: python:3.6
  script:
    - apt-get update -q -y
    - pip install pytest
    - pwd
    - ls -l
    - export PYTHONPATH="$PYTHONPATH:."
    - python -c "import sys;print(sys.path)"
    - pytest
And test_hello.py would stay the same as before.
This blog post mentions a similar pipeline, but:
However, this did not work as pytest was unable to find the ‘bild’ module (i.e. the source code) to test.
The problem encountered here is that the ‘bild’ module is not able to be found by the test_*.py files, as the top-level directory of the project was not being specified in the system path:
pytest:
  stage: Test
  script:
    - pwd
    - ls -l
    - export PYTHONPATH="$PYTHONPATH:."
    - python -c "import sys;print(sys.path)"
    - pytest
The OP kunov confirms in the comments:
It works now! I put the single file inside a newly created folder called 'test'
Manipulation of the PYTHONPATH variable is considered by some to be bad practice (see e.g. this answer on Stack Overflow or this Level Up Coding post). While this is possibly not a huge issue in the scope of a GitLab CI job, here is a solution based on Alberto Mardegan's comment at the mentioned blog post, without the need to fiddle with PYTHONPATH (also somewhat cleaner):
pytest:
  stage: Test
  script:
    - pwd
    - ls -l
    - python -m pytest
Why does this work? From the pytest docs:
You can invoke testing through the Python interpreter from the command line: python -m pytest [...] This is almost equivalent to invoking the command line script pytest [...] directly, except that calling via python will also add the current directory to sys.path.
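A quick way to observe the difference inside a job (my own check, not part of the quoted docs):
python -c "import sys; print(sys.path)"   # baseline interpreter path
pytest --collect-only -q                  # project root not on sys.path by default
python -m pytest --collect-only -q        # current directory prepended to sys.path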
test_hello.py
# the function name must start with "test_" — "hello_test" is not collected
def test_hello():
    assert hello() == 'hello'

How to force Devel::Cover to ignore a folder when using perl-helpers via Travis CI

The MetaCPAN Travis CI coverage builds are quite slow. See https://travis-ci.org/metacpan/metacpan-web/builds/238884497 This is likely in part because we're not successfully ignoring the /local folder that gets created by Carton as part of our build. See https://coveralls.io/builds/11809290
We're using perl-helpers to help with our Travis configuration. I thought I should be able to use the DEVEL_COVER_OPTIONS environment variable in order to fix this, but I guess I don't have the correct incantation. I've included the entire config below because a few snippets out of context seemed misleading.
language: perl
perl:
  - "5.22"
matrix:
  fast_finish: true
  allow_failures:
    - env: COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - env: USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
env:
  global:
    # Carton --deployment only works on the same version of perl
    # that the snapshot was built from.
    - DEPLOYMENT_PERL_VERSION=5.22
    - DEVEL_COVER_OPTIONS="-ignore ^local/"
  matrix:
    # Get one passing run with coverage and one passing run with Test::Vars
    # checks. If run together they more than double the build time.
    - COVERAGE=1 USE_CPANFILE_SNAPSHOT=true
    - USE_CPANFILE_SNAPSHOT=false HARNESS_VERBOSE=1
    - USE_CPANFILE_SNAPSHOT=true
before_install:
  - git clone git://github.com/travis-perl/helpers ~/travis-perl-helpers
  - source ~/travis-perl-helpers/init
  - npm install -g less js-beautify
  # Pre-install from backpan to avoid upgrade breakage.
  - cpanm -n http://cpan.metacpan.org/authors/id/M/ML/MLEHMANN/common-sense-3.6.tar.gz
  - cpanm -n App::cpm Carton
install:
  - cpan-install --coverage # installs coverage prereqs, if enabled
  - 'cpm install `test "${USE_CPANFILE_SNAPSHOT}" = "false" && echo " --resolver metadb" || echo " --resolver snapshot"`'
before_script:
  - coverage-setup
script:
  # Devel::Cover isn't in the cpanfile
  # but if it's installed into the global dirs this should work.
  - carton exec prove -lr -j$(test-jobs) t
after_success:
  - coverage-report
notifications:
  email:
    recipients:
      - olaf@seekrit.com
    on_success: change
    on_failure: always
  irc: "irc.perl.org#metacpan-travis"
# Use newer travis infrastructure.
sudo: false
cache:
  directories:
    - local
The syntax for the Devel::Cover options on the command line is weird: the values need to be comma-separated, at least when you use PERL5OPT.
DEVEL_COVER_OPTIONS="-ignore,^local/"
See for example https://github.com/simbabque/AWS-S3/blob/master/.travis.yml#L26, where it's a whole lot of stuff with commas.
PERL5OPT=-MDevel::Cover=-ignore,"t/",+ignore,"prove",-coverage,statement,branch,condition,path,subroutine prove -lrs t

How to fix "Test reports were found but none of them are new. Did tests run?" in Jenkins

I am getting the error "Test reports were found but none of them are new. Did tests run?" when trying to send unit test results by email. The reason is that I have a dedicated Jenkins job that imports the artifacts from a test job into itself and sends the test results by email. I am doing this because I don't want Jenkins to email all the developers during the night :) so I am postponing the email sending, since Jenkins itself sadly does not support delayed email notifications.
However, by the time the "send test results by email" job executes, the tests are hours old and I get the error as specified in the question title. Any ideas on how to get around this problem?
You could try updating the timestamps of the test reports as a build step ("Execute shell script"). E.g.
cd path/to/test/reports
touch *.xml
mvn clean test
via terminal or Jenkins. This generates new test reports.
The other answer that says to run cd path/to/test/reports && touch *.xml didn't work for me, but mvn clean test did.
Updating the last-modified date can also be achieved in gradle itself, if desired:
task jenkinsTest {
    inputs.files test.outputs.files
    doLast {
        def timestamp = System.currentTimeMillis()
        test.testResultsDir.eachFile { it.lastModified = timestamp }
    }
}
build.dependsOn(jenkinsTest)
As mentioned here: http://www.practicalgradle.org/blog/2011/06/incremental-tests-with-jenkins/
Here's an updated version for Jenkinsfile (Declarative Pipeline):
pipeline {
    agent any
    stages {
        stage('Build') {
            steps {
                sh 'make build'
            }
        }
        stage('Test') {
            steps {
                sh 'make test'
                script {
                    // findFiles and touch are steps from the
                    // Pipeline Utility Steps plugin, not shell commands.
                    def testResults = findFiles(glob: 'build/reports/**/*.xml')
                    for (xml in testResults) {
                        touch xml.getPath()
                    }
                }
            }
        }
    }
    post {
        always {
            archiveArtifacts artifacts: 'build/libs/**/*.jar', fingerprint: true
            junit 'build/reports/**/*.xml'
        }
    }
}
Because gradle caches results from previous builds I ran into the same problem.
I fixed it by adding this line to my publish stage:
sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
So my file is like this:
....
stage('Unit Tests') {
    sh './gradlew test'
}
stage('Publish Results') {
    // Fool Jenkins into thinking the test results are new
    sh 'find . -name "TEST-*.xml" -exec touch {} \\;'
    junit '**/build/test-results/test/TEST-*.xml'
}
Had the same issue for jobs running repeatedly (every 30 mins).
For the job, go to Configure > Build > Advanced and, within the Switches section, add:
--stacktrace
--continue
--rerun-tasks
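These are standard Gradle command-line flags, so the equivalent when the job invokes Gradle from a shell step would be (for reference):
./gradlew test --stacktrace --continue --rerun-tasks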
This worked for me:
1. Navigate to the report directory: cd /report_directory
2. Delete all older reports: rm *.xml
3. Add junit report_directory/*.xml in the pipeline.
4. Rerun the test script, then navigate to Build Number → Test Result.
Make sure you have one successful build without any failure; only after that will you be able to see the reports.
Make sure that you have specified the correct path under "Test report XMLs" in the Jenkins configuration, such as "target/surefire-reports/*.xml".
There is no need to touch *.xml, as Jenkins won't complain even if the test results XML files haven't changed.
If you use a Windows slave, you can 'touch' the results in a Groovy pipeline stage with PowerShell:
powershell 'ls "junitreports\\*.*" | foreach-object { $_.LastWriteTime = Get-Date }'
It happens if you are using a test report that was not modified by that job in that run.
If, for testing purposes, you are working with an already-created file, add the commands below to the Jenkins job under Build > Execute Shell:
chmod -R 775 /root/.jenkins/workspace/JmeterTest/output.xml
echo " " >> /root/.jenkins/workspace/JmeterTest/output.xml
The commands above change the file's timestamp, so the error won't appear.
Note: to achieve the same in Execute Shell, do not try renaming the file with mv etc.; it won't work. Only appending to the file (or deleting from it) changes the timestamp.
For me, commands like chmod -R 775 test-results.xml or touch test-results.xml did not work due to a permission error. As a workaround, set a new file in the test report settings and add a command that copies the old XML report to the new file.
You can add the following shell command to the "Pre Steps" section when configuring your job on Jenkins:
mvn clean test
This will clean out the old test results.
Here's an updated version of the gradle task that touches each test result file.
From a Jenkins pipeline script, just call the "testAndTouchTestResult" task instead of the "test" task.
The code below uses Kotlin syntax:
tasks {
    register("testAndTouchTestResult") {
        setGroup("verification")
        setDescription("touch Test Results for Jenkins")
        inputs.files(test.get().outputs)
        doLast {
            val timestamp = System.currentTimeMillis()
            fileTree(test.get().reports.junitXml.destination).forEach { f ->
                f.setLastModified(timestamp)
            }
        }
    }
}
The solution for me was to delete node_modules and change the Node version (from 7.1 to 8.4) on Jenkins. That's it.