I have a test pipeline on Concourse with one job that runs a set of Luigi tasks. My problem is that failures in the Luigi tasks do not propagate up to the Concourse job: if a Luigi task fails, Concourse does not register that failure and reports the job as having completed successfully. I will first post the code I am running, then the solutions I have tried.
luigi-tasks.py
import luigi
from tasks import Task1, Task2, Task3

class Pipeline1(luigi.WrapperTask):
    def requires(self):
        yield Task1()
        yield Task2()
        yield Task3()
tasks.py
import luigi
import pandas as pd

class Task1(luigi.Task):
    def requires(self):
        return None

    def output(self):
        return luigi.LocalTarget('stuff/task1.csv')

    def run(self):
        # uncomment the line below to generate a task failure
        # assert(True == False)
        print('task 1 complete...')
        t = pd.DataFrame()
        with self.output().open('w') as outtie:
            outtie.write('complete')
# Tasks 2 and 3 are duplicates of this, but with 1s replaced with 2s or 3s.
config file
[retcode]
# codes are in increasing level of severity (for most applications)
already_running=10
missing_data=20
not_run=25
task_failed=30
scheduling_error=35
unhandled_exception=40
begin.sh
#!/bin/sh
set -e
export PYTHONPATH='.'
luigi --module luigi-tasks Pipeline1 --local-scheduler
echo $?
pipeline.yml
# <resources, resource types, and docker image build job defined here>
#job of interest
- name: run-docker-image
  plan:
    - get: timer
      trigger: true
    - get: docker-image-ecr
      passed: [build-docker-image]
    - get: run-git
    - task: run-script
      image: docker-image-ecr
      config:
        inputs:
          - name: run-git
        platform: linux
        run:
          dir: ./run-git
          path: /bin/bash
          args: ["begin.sh"]
I've introduced errors in a few ways: assertions / raising an exception (ValueError) within an individual task's run() method and within the wrapper, and sys.exit(luigi.retcodes.retcode().unhandled_exception). I also tried failing all tasks, in case the error needed to be generated in a specific manner or location. Though they all produced a failed task, none of them caused a failure on the Concourse server.
At first, I thought Concourse just reports success if it can run the file or command given to it. I'm not sure it's that simple, though. Interestingly, when I run the pipeline on my local computer (luigi --module luigi-tasks Pipeline1 --local-scheduler) I get an appropriate return code (e.g. 30), but when I run the pipeline within the Concourse server I get a return code of 0 after the luigi tasks complete (from echo $? in the bash script).
Would appreciate any insight into this problem.
My suspicion is that luigi doesn't see your config file with return codes. Its default behavior is to return 0, whether tasks fail or succeed.
This experiment should help to debug that:
1. Force a failed job: add an exit 1 at the end of begin.sh.
2. Hijack the job: fly -t <target> i -j <pipeline>/<job> -> select run-script
3. cd ./run-git; /bin/bash begin.sh
4. Ensure the luigi config is present and named appropriately, e.g. luigi.cfg.
5. Re-run the command: LUIGI_CONFIG_PATH=luigi.cfg bash ./begin.sh
6. Check the output: echo $?
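If the re-run with LUIGI_CONFIG_PATH then produces the expected non-zero code, the fix can be made permanent by exporting the config path inside begin.sh. A minimal sketch, assuming luigi.cfg (with the [retcode] section from the question) sits next to begin.sh in the run-git input:

    #!/bin/sh
    set -e
    export PYTHONPATH='.'
    # assumption: luigi.cfg lives in the same directory begin.sh runs from
    export LUIGI_CONFIG_PATH=luigi.cfg
    luigi --module luigi-tasks Pipeline1 --local-scheduler

With set -e already in place, a non-zero luigi return code (e.g. 30 for task_failed) ends the script with that code, and Concourse marks the task as failed.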
I'm trying to implement danger-swift, but I don't know where I can access the test results on Bitrise to flag failing ones.
I'm using a plugin for danger-swift called DangerXCodeSummary, but I don't know where Bitrise stores the test results from xcode-test@2.
DangerFile:
import Danger
import DangerXCodeSummary
import Foundation
let danger = Danger()
let testReportPath = "??" // What's the path for the test results?
XCodeSummary(filePath: testReportPath).report()
Bitrise script:
...
UnitTests:
  before_run:
    - _ensure_dependencies
  after_run:
    - _add_build_result_to_pr
  steps:
    - xcode-test@2: {} # What's the file path for the test results?
    - deploy-to-bitrise-io@1: {}
  envs:
    - opts:
        is_expand: false
      BITRISE_PROJECT_PATH: MyApp.xcodeproj
    - opts:
        is_expand: false
      BITRISE_SCHEME: AppTests
  description: Unit Tests running at every commit.
...
_add_build_result_to_pr:
  steps:
    - script@1:
        title: Commenting on the PR
        is_always_run: true
        inputs:
          - content: |-
              #!/usr/bin/env bash
              # fail if any commands fails
              set -e
              echo "################### DANGER ######################"
              echo "Install Danger"
              brew install danger/tap/danger-swift
              echo "Run danger"
              danger-swift ci
              echo "#################################################"
You can see what outputs the step generates on the Workflow Editor UI, and alternatively in the Step's repository in step.yml. For Xcode Test step specifically: https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L287
Am I right that you're looking for the xcresult test output? If you are, you can read it from the BITRISE_XCRESULT_PATH environment variable (https://github.com/bitrise-steplib/steps-xcode-test/blob/deb39d7e9e055a22f33550ed3110fb3c71beeb79/step.yml#L296), which is the output of Xcode Test (once Xcode Test is finished, it sets this environment variable to the xcresult's path). Keep in mind this is in .xcresult format (the official Xcode test result format).
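If that is the path you need, a minimal sketch of reading it in the Dangerfile could look like the following (the environment lookup is standard Foundation; whether DangerXCodeSummary can consume an .xcresult bundle directly is worth verifying, given the format caveat above):

    import Danger
    import DangerXCodeSummary
    import Foundation

    let danger = Danger()
    // BITRISE_XCRESULT_PATH is exported by the Xcode Test step once it finishes
    let testReportPath = ProcessInfo.processInfo.environment["BITRISE_XCRESULT_PATH"] ?? ""
    XCodeSummary(filePath: testReportPath).report()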
I use /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml, but I think the path is not correct, because my report is not being uploaded from GitHub Actions.
Full file.
The coverage file is located at /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml. In general, it is /home/runner/work/<project>/<project>/coverage.xml.
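On GitHub-hosted runners that directory is also what $GITHUB_WORKSPACE points to, so a quick sanity check (a hypothetical extra step, not from the original workflow) would be:

    - name: Confirm coverage report exists
      run: ls -l "$GITHUB_WORKSPACE/coverage.xml"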
I solved it with the following code.
- name: Generate Report
  run: |
    pip install codecov
    pip install pytest-cov
    pytest --cov=./ --cov-report=xml
    codecov
- name: Upload coverage to Codecov
  uses: codecov/codecov-action@v3.1.0
  with:
    token: ${{ secrets.CODECOV_TOKEN }}
    directory: ./coverage/reports/
    env_vars: OS,PYTHON
    files: /home/runner/work/SIESTAstepper/SIESTAstepper/coverage.xml
    flags: tests
Full code is here.
From your last run, check whether the "Upload coverage to Codecov" step is actually executed.
The workflow seems to stop before that, in the pytest test step:
/opt/hostedtoolcache/Python/3.7.13/x64/lib/python3.7/distutils/file_util.py:44: DistutilsFileError
=========================== short test summary info ============================
FAILED tests/__main__.py::test_merge_ani - distutils.errors.DistutilsFileErro...
FAILED tests/tests.py::test_merge_ani - distutils.errors.DistutilsFileError: ...
========================= 2 failed, 30 passed in 4.23s =========================
If it does not stop before that, you need to check if the file is indeed created, considering the error message is:
None of the following appear to exist as files: ./coverage.xml
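One way to verify that (a hypothetical debug step, not part of the original workflow) is to list the file right after the pytest step, keeping the step running even when the tests fail:

    - name: Check that coverage.xml was written
      if: always()   # run even if the pytest step failed
      run: ls -l coverage.xml || echo "coverage.xml was not created"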
My ultimate goal is to have tests run automatically any time a container is updated. For example, if I update /api, it should sync the changes between local and the container, and after that it should automatically run the tests.
I'm starting out with Hello World!, though, per the example:
# DevSpace --version = 5.16.0
version: v1beta11
...
hooks:
  - command: |
      echo Hello World!
    container:
      imageSelector: ${APP-NAME}/${API-DEV}
    events: ["after:initialSync:${API}"]
...
I've tried all of the following and don't get the desired behavior:
stop:sync:${API}
restart:sync:${name}
after:initialSync:${API}
devCommand:after:sync
At best I can get Hello World! to print on the initial run of devspace dev -b, but nothing prints after I make changes to the files for /api, which causes files to sync.
Suggestions?
You will need a post-sync hook for this, which is separate from the DevSpace lifecycle hooks. You can define it directly in the dev.sync configuration, and it looks like this:
dev:
  sync:
    - imageSelector: john/devbackend
      onUpload:
        execRemote:
          onBatch:
            command: bash
            args:
              - -c
              - "echo 'Hello World!' && other commands..."
More information in the docs: https://devspace.sh/cli/docs/configuration/development/file-synchronization#onupload
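For the original goal of running tests on every sync, the echo in onBatch would be swapped for the test command. A hypothetical variant, reusing the imageSelector from the question and assuming the tests live in /api and are run with pytest:

    dev:
      sync:
        - imageSelector: ${APP-NAME}/${API-DEV}  # selector from the question's config
          onUpload:
            execRemote:
              onBatch:
                command: bash
                args:
                  - -c
                  - "cd /api && pytest"  # hypothetical test command; substitute your own runner

onBatch runs the command once per synced batch rather than once per changed file, which suits a test run.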
Imagine I have a workflow with 5 steps.
Step 2 may or may not create a file as its output (which is then used as input to subsequent steps).
If the file is created, I want to run the subsequent steps.
If no file gets created in step 2, I want to mark the workflow as completed and not execute steps 3 through to 5.
I'm sure there must be a simple way to do this yet I'm failing to figure out how.
I tried making step 2 return a non-zero exit code when no file is created and then using
when: "{{steps.step2.outputs.exitCode}} == 0" on step 3, but that still executes steps 4 and 5 (not to mention it marks step 2 as "failed").
So I'm out of ideas, any suggestions are greatly appreciated.
By default, a step that exits with a non-zero exit code fails the workflow.
I would suggest writing an output parameter to determine whether the workflow should continue.
- name: yourstep
  container:
    command: [sh, -c]
    args: ["yourcommand; if [ -f /tmp/yourfile ]; then echo continue > /tmp/continue; fi"]
  outputs:
    parameters:
      - name: continue
        valueFrom:
          default: "stop"
          path: /tmp/continue
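The downstream steps can then gate on that parameter with a when expression. A sketch, using the question's step naming and assuming step 2 runs the template above (the step3 template name is hypothetical):

    - name: step3
      template: step3-template   # hypothetical template name
      when: "{{steps.step2.outputs.parameters.continue}} == continue"

Steps 4 and 5 would carry the same condition (or depend on step 3) so that they are also skipped when the parameter stays at its "stop" default.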
Alternatively, you can override the fail-on-nonzero-exitcode behavior with continueOn.
continueOn:
  failed: true
I'd caution against continueOn.failed: true. If your command returns a non-zero exit code for an unexpected reason, the workflow won't fail like it should, and the bug might go unnoticed.
I am new to Concourse, and I set up the environment on my CentOS 7.6 machine like below.
$ wget https://concourse-ci.org/docker-compose.yml
$ docker-compose up -d
Then I log in with `fly --target example login --team-name main --concourse-url http://192.168.77.140:8080/ -u test -p test`.
I can see the below.
[root@centostest ~]# fly targets
name url team expiry
example http://192.168.77.140:8080 main Sun, 16 Jun 2019 02:23:48 UTC
I used the below YAML file, named 2.yaml.
---
resources:
  - name: my-git-repo
    type: git
    source:
      uri: https://github.com/ruanbekker/concourse-test
      branch: basic-helloworld

jobs:
  - name: hello-world-job
    public: true
    plan:
      - get: my-git-repo
      - task: task_print-hello-world
        file: my-git-repo/ci/task-hello-world.yml
Then I run the below commands step by step.
fly -t example sp -c 2.yaml -p pipeline-01
fly -t example up -p pipeline-01
fly -t example tj -j pipeline-01/hello-world-job --watch
But it just hangs there with no useful response, like below.
[root@centostest ~]# fly -t example tj -j pipeline-01/hello-world-job --watch
started pipeline-01/hello-world-job #3
Theoretically, it should print something like below.
Cloning into '/tmp/build/get'...
Fetching HEAD
292c84b change task name
initializing
running echo hello world
hello world
succeeded
Where did I go wrong? Thanks.
Welcome to Concourse!
One thing that can be confusing when starting with Concourse is understanding when Concourse detects that the pipeline has changed and what happens if the pipeline is one file or multiple files.
Your pipeline (like the majority of real-world pipelines) is "nested": the main pipeline file, 2.yaml, refers to a task file named my-git-repo/ci/task-hello-world.yml.
What sets Concourse apart from other CI systems is that:
1. The main pipeline file (2.yaml) can reside anywhere, even in a different repository.
2. Due to 1, Concourse is unable to detect a change to the main pipeline file; you have to tell Concourse that the file has changed, either with fly set-pipeline or with automatic means such as the concourse-pipeline-resource.
So the following errors happen often:
- Changing the main pipeline file, committing and pushing, and expecting Concourse to pick up the change. Missing: you have to run fly set-pipeline.
- Once running fly set-pipeline becomes second nature, you can stumble on the opposite error: changing both the main pipeline file and the nested task file, running set-pipeline, but not pushing. In this case, the only changes picked up by Concourse will be the ones to the main pipeline file, not to the task file. Missing: commit and push.
From the description of your problem, I have the feeling that it is a mixture of the gotchas I mentioned.
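As a concrete illustration of that loop (a sketch, reusing the target, pipeline, and file names from the question, and assuming you control the repository holding the task file):

    # after editing the main pipeline file, re-apply it explicitly:
    fly -t example set-pipeline -c 2.yaml -p pipeline-01
    fly -t example unpause-pipeline -p pipeline-01

    # after editing the nested task file, commit and push it so the
    # pipeline's git resource can fetch the new version:
    git add ci/task-hello-world.yml
    git commit -m "update task"
    git push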