I want to run tests through GitHub Actions for each PR.
My GitHub repository structure is:
Root Dir
├── Sub Dirs
│   └── Child Dir
│       └── Java Module Dir
├── Sub Dirs
│   └── Child Dir
│       └── Java Module Dir
├── Sub Dirs
│   └── Child Dir
│       └── Java Module Dir
└── ...
In this situation, I want to test only the Java modules.
I don't want to set up the repository as a multi-module project.
The reason is that the repository is going to be used in combination with other languages, so I don't want it to look like it's Java-based.
Anyway, to run the Java tests per subdirectory, do I have to hardcode each directory into the workflow file?
Is it possible to dynamically find and test the Java-based directories?
(The Java-based directories are probably prefixed with "Java-".)
I already tried setting up a matrix in a GitHub Actions workflow with the directories written out explicitly; what I want is to discover and test the directories dynamically in GitHub Actions.
You can dynamically create the input for the matrix in the first job and then use that as an input for the matrix in the second job.
For a project where all test directories are in a directory called tests, you could collect the directories like so:
collect_dirs:
  runs-on: ubuntu-latest
  outputs:
    dirs: ${{ steps.dirs.outputs.dirs }}
  steps:
    - uses: actions/checkout@v2
    - id: dirs
      run: echo "dirs=$(ls -d tests/*/ | jq --raw-input --slurp --compact-output 'split("\n")[:-1]')" >> "$GITHUB_OUTPUT"
The tricky part here is how to construct the dirs output:
First, we list all directories inside tests/.
Second, jq formats the output as a JSON array. For example, if tests/ contains the subdirectories a, b, and c, then the output is set to ["tests/a/","tests/b/","tests/c/"].
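The listing-plus-jq pipeline can be tried locally; the test directories a, b, and c below are hypothetical stand-ins:

```shell
# Recreate the answer's listing step locally (hypothetical dirs a, b, c).
mkdir -p tests/a tests/b tests/c
# ls -d prints each directory with a trailing slash; jq turns the lines into a JSON array.
dirs=$(ls -d tests/*/ | jq --raw-input --slurp --compact-output 'split("\n")[:-1]')
echo "$dirs"    # ["tests/a/","tests/b/","tests/c/"]
```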
In a second job, you can use the matrix based on that output:
run_tests:
  needs: collect_dirs
  runs-on: ubuntu-latest
  strategy:
    matrix:
      dir: ${{ fromJson(needs.collect_dirs.outputs.dirs) }}
  steps:
    - run: echo ${{ matrix.dir }}
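To match the original question's layout, where the Java modules sit at varying depths and are (per the question's own guess) prefixed with "Java-", the listing command could use find instead of ls. This is a sketch under that assumption; the sample directory names are hypothetical:

```shell
# Collect every directory named "Java-*" (assumed prefix) at any depth
# as a sorted, compact JSON array suitable for the matrix input.
mkdir -p 'sub1/child/Java-module-a' 'sub2/child/Java-module-b'   # hypothetical sample layout
dirs=$(find . -type d -name 'Java-*' | sort | jq --raw-input --slurp --compact-output 'split("\n")[:-1]')
echo "$dirs"    # ["./sub1/child/Java-module-a","./sub2/child/Java-module-b"]
```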
I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub actions, since we are unsure about the kind of access a third party service would be getting to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request in the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code failed for a given pull request. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd-party library every time I want to run a single test. These tests already take quite some time.
My question is: is there any way to avoid doing that? Is there a way to build everything on the Linux server and reuse it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited for a service like CircleCI (other recommendations are also welcome)?
Here is the full yml file for the workflow for the Python 3 environment. The Python 2 one is identical except for the Anaconda environment steps.
name: (Python 3) install and test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single GitHub workflow and testing everything in the same workflow.
Expected:
Gaining information on the specific steps that failed or worked, and displaying this information as a badge on the readme page.
Actual result:
Only the overall success status can be displayed as a badge; only the success status of "running all tests" is available.
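One way to address the reuse question is GitHub's actions/cache. This is a sketch, not from the original thread; it assumes that caching the conda prefix ~/conda3-env-py3, keyed on the environment file etc/env-py3.yml, is sound for this project:

```yaml
# Sketch: cache the built conda environment between workflow runs so the
# expensive create step only re-runs when the environment file changes.
- name: Cache conda environment
  id: conda-cache
  uses: actions/cache@v3
  with:
    path: ~/conda3-env-py3
    key: conda-py3-${{ hashFiles('etc/env-py3.yml') }}
- name: Install ISF dependencies
  if: steps.conda-cache.outputs.cache-hit != 'true'
  run: |
    source ~/conda3-env-py3/bin/activate
    conda-env create --name isf-py3 --file etc/env-py3.yml --quiet
```

On the badge question: each job in a workflow gets its own status in the PR checks list, so splitting run_tests.py into one job per aspect (each restoring the cached environment) would at least surface a pass/fail per aspect, even though the standard README badge remains per-workflow.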
I am trying to set my working directory with an environment variable so that I can handle two different situations. However, if I use context with working-directory, it does not find files in the directory, but if I hardcode the path, it does. I have tried many different syntax variations; I will paste one iteration below so it is easier to see what I am trying to accomplish.
- uses: actions/checkout@v2
. . .
- name: Monorepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MONO' # Run if monorepo
  run: |
    echo "WORKING_DIR = ${GITHUB_WORKSPACE}/actors/${{ env.ACTOR_NAME }}" >> $GITHUB_ENV
- name: Multirepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MULTI' # Run if multirepo
  run: |
    echo "WORKING_DIR = ${GITHUB_WORKSPACE}" >> $GITHUB_ENV
- name: Build wasmcloud actor
  run: make
  working-directory: ${{ env.WORKING_DIR }} # If I hardcode the path here, it works
The environment variable shows the correct path during debugging, formatted as: /home/runner/work/REPO_NAME/REPO_NAME/actors/ACTOR_NAME
For step 3, if I type working-directory: actors/ACTOR_NAME it works (until a later issue :P where it again does not find a directory).
These are my first few days with GitHub Actions, so help would be really appreciated. I am not sure why context is not being accepted but a static path is. Examples often show working-directory using context.
See Benjamin W.'s comments above for the answer.
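For readers who cannot see those comments, the likely culprit is the whitespace around the equals sign: lines written to $GITHUB_ENV must use the exact name=value format, so "WORKING_DIR = path" defines a variable whose name ends in a space, and ${{ env.WORKING_DIR }} resolves to nothing. A sketch of one corrected step:

```yaml
# $GITHUB_ENV expects exact `name=value` lines; no spaces around `=`.
- name: Monorepo - Set working Directory
  if: env.WASMCLOUD_REPO_STYLE == 'MONO'
  run: |
    echo "WORKING_DIR=${GITHUB_WORKSPACE}/actors/${{ env.ACTOR_NAME }}" >> $GITHUB_ENV
```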
My current GitHub workflow executes test cases even for changes to .md or .txt files.
I want to skip the tests for any changes made to documentation files. My folder structure is:
project
├── source-codes
└── docs
    ├── file001.txt
    ├── file.rst
    └── file.md
The GitHub workflow must be skipped for the docs folder or for any .rst or .md file.
I have the code below, but it doesn't seem to work. I am currently trying this from my feature branch.
.github/workflows/main.yml
# This is a basic workflow to help you get started with Actions
name: Documentation Test

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "main" branch
  push:
    branches:
      - 'main'
  # Allows you to run this workflow manually from the Actions tab
  # workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "Documentation Testcase"
  documentation_testcase:
    # The type of runner that the job will run on
    name: Documentation Job
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - name: Checkout
        uses: actions/checkout@v3
        with:
          # Checkout as many commits as needed for the diff
          fetch-depth: 2
      - shell: pwsh
        # Give an id to the step, so we can reference it later.
        id: check_file_changed
        run: |
          # Diff HEAD with the previous commit
          $diff = git diff --name-only HEAD^ HEAD
          # Check if a file under docs/ or with the .rst extension has changed (added, modified, deleted)
          $SourceDiff = $diff | Where-Object { $_ -match '^docs/' -or $_ -match '\.rst$' }
          $HasDiff = $SourceDiff.Length -gt 0
          # Set the output named "docs_changed"
          Write-Host "::set-output name=docs_changed::$HasDiff"
      # Run the next step only when "docs_changed" equals "True"
      - shell: pwsh
        # steps.<step_id>.outputs.<output name>
        if: steps.check_file_changed.outputs.docs_changed == 'True'
        run: echo publish docs
Instead of a complex shell script, I'd recommend using the paths-ignore setting.
Read more about this configuration here: Link
Example Action: Link
Code Example:
on:
  push:
    paths-ignore:
      - 'README.md'
      - 'backup/**'
      - 'masterDir/contentDir/**/*.draft.md'
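Adapted to the folder structure in the question (a sketch; it assumes the docs folder sits at the repository root and that .md and .rst files can appear anywhere), the trigger might look like:

```yaml
on:
  push:
    branches:
      - 'main'
    # Skip the workflow entirely for documentation-only changes
    paths-ignore:
      - 'docs/**'
      - '**.md'
      - '**.rst'
```

Note that if a push touches both documentation and source files, the workflow still runs; paths-ignore only skips runs where every changed file matches an ignore pattern.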
I'm trying to set up and run a GitHub action in a nested folder of the repo.
I thought I could use working-directory, but if I write this:
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - run: echo test
I get an error:
Run echo test
echo test
shell: /usr/bin/bash -e {0}
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/my_repo/my_repo/my_folder'. No such file or directory
I notice my_repo appears twice in the path of the error.
Here is the run on my repo, where:
my_repo = novade_flutter_packages
my_folder = packages
What am I missing?
You didn't check out the repository in this job.
Each job runs on a fresh runner instance, so you have to check out the repository separately in every job that needs it; without a checkout step the workspace is empty and my_folder does not exist.
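A sketch of the fixed job (assuming my_folder exists in the repository) simply adds the checkout step:

```yaml
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      # Without this, the workspace is empty and my_folder cannot be found
      - uses: actions/checkout@v3
      - run: echo test
```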
I am new to GitHub Actions and need some help.
I have a readme.md file for a collection of scripts that is hundreds of lines long. I want to translate it into different languages using this GitHub action (https://github.com/dephraiim/translate-readme).
But instead of putting all the files it creates into the main directory, I want to create a folder inside my directory and put all the created files in there.
So in the main directory I have all the scripts I've written and one readme.md file. What I want is something like this:
Maindirectory/newreadmefolder/(all the new language readme.md files)
A directory can be created this way:
- name: Create dir
  run: |
    mkdir mydirname
This has to be added in the steps part, as in this full action file:
name: mkdir example

on:
  push:
    branches: [ '**' ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Create dir
        run: |
          mkdir mydirname
If you also want to move some files, you can add something like mv readme-*.md mydirname at the end, e.g.:
- name: Create dir
  run: |
    mkdir mydirname
    mv readme-*.md mydirname
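The shell part of that step can be tried locally; readme-es.md and readme-fr.md below are hypothetical stand-ins for the files translate-readme would generate:

```shell
# Simulate the step: create the folder and move the generated readme files into it.
cd "$(mktemp -d)"
touch readme-es.md readme-fr.md README.md
mkdir mydirname
mv readme-*.md mydirname
ls mydirname    # readme-es.md  readme-fr.md  (README.md doesn't match readme-*.md)
```

Note that on Linux the glob is case-sensitive, so the original README.md stays in the main directory.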