Is it possible to split up a GitHub workflow such that each step has a separate badge?

I am relatively new to GitHub workflows and testing. I am working in a private GitHub repository with a dozen colleagues. We want to avoid using services like CircleCI for the time being and see how much we can do with just the integrated GitHub Actions, since we are unsure about the kind of access a third-party service would get to the repo.
Currently, we have two workflows (each one tests the same code for a separate Python environment) that get triggered on push or pull request on the master branch.
The steps of the workflow are as follows (the full workflow yml file is given at the bottom):
Install Anaconda
Create the conda environment (installing dependencies)
Patch libraries
Build a 3rd party library
Run python unit tests
It would be amazing to know immediately which part of the code fails when a new pull request comes in. Right now, every aspect of the codebase gets tested by a single Python file, run_tests.py. I was thinking of splitting up this file and creating a workflow per aspect I want to test separately, but then I would have to create a whole new environment, patch the libraries, and build the 3rd party library every time I want to conduct a single test. These tests already take quite some time.
My question is now: is there any way to avoid doing that? Is there a way to build everything on the Linux server and re-use it, so that it doesn't need to be rebuilt for every test? Is there a way to display a badge per Python test that fails/succeeds, so that we can give more information than just "everything passed" or "everything failed"? Is such a thing better suited to a service like CircleCI (other recommendations are also welcome)? One possible direction is sketched below.
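One pattern that might address the rebuild cost (a hedged sketch, not the setup from this question): keep the expensive environment build in one job, cache it with actions/cache keyed on the environment file, and fan the tests out over a matrix of jobs that depend on it via needs:. Each matrix job then reports its own pass/fail on the checks page of a pull request. The test file names below are illustrative stand-ins for a split-up run_tests.py:

jobs:
  build-env:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      # Restore (or save, at job end) the conda environment; it is rebuilt only
      # when the environment file changes
      - uses: actions/cache@v2
        id: conda-cache
        with:
          path: ~/conda3-env-py3
          key: conda-${{ hashFiles('etc/env-py3.yml') }}
      # ... on a cache miss, install Anaconda, create the env, patch and build here ...
  tests:
    needs: build-env
    runs-on: ubuntu-latest
    strategy:
      fail-fast: false
      matrix:
        suite: [test_core.py, test_io.py] # illustrative split of run_tests.py
    steps:
      - uses: actions/checkout@v2
      # Restore the environment built in the previous job
      - uses: actions/cache@v2
        with:
          path: ~/conda3-env-py3
          key: conda-${{ hashFiles('etc/env-py3.yml') }}
      - run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          python ${{ matrix.suite }}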
Here is the full yml file for the workflow for the Python 3 environment. The Python 2 one is identical except for the Anaconda environment steps.
name: (Python 3) install and test
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the master branch
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    defaults:
      run:
        shell: bash -l {0}
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Install Anaconda3 and update conda package manager
      - name: Install Anaconda3
        run: |
          wget https://repo.anaconda.com/archive/Anaconda3-2020.11-Linux-x86_64.sh --quiet
          bash Anaconda3-2020.11-Linux-x86_64.sh -b -p ~/conda3-env-py3
          source ~/conda3-env-py3/bin/activate
          conda info
      # Updating the root environment. Install dependencies (YAML)
      # NOTE: The environment file (yaml) is in the 'etc' folder
      - name: Install ISF dependencies
        run: |
          source ~/conda3-env-py3/bin/activate
          conda-env create --name env-py3 --file etc/env-py3.yml --quiet
          source activate env-py3
          conda list
      # Patch Dask library
      - name: Patch dask library
        run: |
          echo "Patching dask library."
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd installer
          python patch_dask_linux64.py
          conda list
      # Install pandas-msgpack
      - name: Install pandas-msgpack
        run: |
          echo "Installing pandas-msgpack"
          git clone https://github.com/abast/pandas-msgpack.git
          # Applying patch to pandas-msgpack (generating files using newer Cython)
          git -C pandas-msgpack apply ../installer/pandas_msgpack.patch
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          cd pandas-msgpack; python setup.py install
          pip list --format=freeze | grep pandas
      # Compile neuron mechanisms
      - name: Compile neuron mechanisms
        run: |
          echo "Compiling neuron mechanisms"
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          pushd .
          cd mechanisms/channels_py3; nrnivmodl
          popd
          cd mechanisms/netcon_py3; nrnivmodl
      # Run tests
      - name: Testing
        run: |
          source ~/conda3-env-py3/bin/activate
          source activate env-py3
          export PYTHONPATH="$(pwd)"
          dask-scheduler --port=38786 --dashboard-address=38787 &
          dask-worker localhost:38786 --nthreads 1 --nprocs 4 --memory-limit=100e15 &
          python run_tests.py
Many thanks in advance
Tried:
Building everything in a single GitHub workflow and testing everything in that same workflow.
Expected:
Gaining information on which specific steps failed or worked, and displaying this information as a badge on the README page.
Actual result:
Only the overall success status can be displayed as a badge; only the success status of "running all tests" is available.
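Regarding the badge part specifically: GitHub renders one status badge per workflow file, not per step or job, so getting several badges means splitting the tests across several workflow files. The badge URL follows this pattern (OWNER, REPO, and the workflow file name are placeholders):

![tests](https://github.com/OWNER/REPO/actions/workflows/tests.yml/badge.svg)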

Related

Update/Edit of workflow file in GitHub action

I have configured a manual workflow and it runs OK, but once I update/edit it and commit it to the same branch, the changes do not take effect. I mean, the action still runs but uses the old version of the workflow file. Is there any step I'm missing?
Steps I followed for editing the workflow file:
https://docs.github.com/en/actions/learn-github-actions/finding-and-customizing-actions#browsing-marketplace-actions-in-the-workflow-editor
Here are the workflow file details, just in case.
The original:
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the "development" branch
  release:
    types: [created]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v3
      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo Hello, world Nikzad!, 3
Note: Let's say I just replace the last line with the line below; my output still says Hello, world Nikzad!, 3 where it should say Hello, world Nikzad!, 4.
        run: echo Hello, world Nikzad!, 4
I found my problem. Actually, when we create a new release, we provide a tag name for that release, and the version of the workflow at the commit that tag points to is what matters. So when I want to edit or update my workflow, I do the following:
Edit the workflow (make any change you want, e.g. change the title or add an echo)
Commit and push the workflow changes
Create a new tag and push it (see the example commands after this list)
From the GitHub panel, create a new release based on the new tag
Now I see the action runs based on the new workflow.
Note: Maybe it is more about learning the process, but I did not find it explained anywhere in a straightforward way; I hope it helps someone.
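A minimal sketch of the commit-tag-push sequence (the workflow path, branch, and tag name are illustrative):

git add .github/workflows/ci.yml
git commit -m "Update workflow"
git push origin master
git tag v1.0.4
git push origin v1.0.4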

How to use working-directory in GitHub actions?

I'm trying to set up and run a GitHub Action in a nested folder of the repo.
I thought I could use working-directory, but if I write this:
jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - run: echo test
I get an error:
Run echo test
echo test
shell: /usr/bin/bash -e {0}
Error: An error occurred trying to start process '/usr/bin/bash' with working directory '/home/runner/work/my_repo/my_repo/my_folder'. No such file or directory
I notice my_repo appears twice in the path of the error.
Here is the run on my repo, where:
my_repo = novade_flutter_packages
my_folder = packages
What am I missing?
You didn't check out the repository in that job.
Each job runs on a fresh runner instance, so you have to check out the repository separately in each job that needs it.
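In other words, adding a checkout step before the run step makes my_folder exist on the runner. A minimal sketch of the fix:

jobs:
  test-working-directory:
    runs-on: ubuntu-latest
    name: Test
    defaults:
      run:
        working-directory: my_folder
    steps:
      - uses: actions/checkout@v3 # populates the workspace, so my_folder exists
      - run: echo test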

Updating GitHub issues from GitHub Actions

I was trying to make a GitHub Action using some simple scripts (which I already use locally) that I would like to run inside a Docker container.
A new issue should trigger the event to update the said issue with its content based on some processing. An example of this might be:
Say I have a list of labels defined in my script, and it checks the issue's title and adds a label to the issue.
I'm still reading the GitHub Actions documentation, so I may not be completely informed, but the issue I seem to have is that on my local machine these scripts use the gh CLI for such tasks (e.g. adding labels). So I was wondering: do I need to have gh installed in that Docker container, or is there a better way to update the issue? I'm very much willing to write these scripts from scratch again using GitHub's event payloads and such, as long as I don't have to write TypeScript.
I've looked around the documentation and couldn't find anything about updating issues. I also couldn't find a similar question asked here; it may be that I've missed something, so if that is the case, please direct me to the relevant material and I would very much appreciate it.
One option could be (as you said) to install gh in that Docker container, and then run gh commands.
Example using a container:
jobs:
  build:
    runs-on: ubuntu-latest
    container:
      image: docker://myrepoandimagewithghinstalled
    steps:
      - name: Github CLI Authentication
        run: gh auth login --hostname <your hostname>
      - name: Github CLI commands execution samples
        run: |
          gh command1
          gh command2
          gh command3
Another option could be to install gh directly on the OS (for example, ubuntu-latest), authenticate, and then use the "run" option to execute gh commands.
Example installing GH on the OS:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Install Github CLI
        run: |
          sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-key C99B11DEB97541F0
          sudo apt-add-repository https://cli.github.com/packages
          sudo apt update
          sudo apt install gh
      - name: Github CLI Authentication
        run: gh auth login --hostname <your hostname>
      - name: Github CLI commands execution samples
        run: |
          gh command1
          gh command2
          gh command3
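For the labeling use case from the question, the placeholder commands could be made concrete with gh issue edit, which is an existing gh subcommand; the label name here is illustrative, and in a workflow gh can authenticate through the GH_TOKEN environment variable instead of an interactive gh auth login (gh is also preinstalled on GitHub-hosted ubuntu runners):

on:
  issues:
    types: [opened]
permissions:
  issues: write
jobs:
  label:
    runs-on: ubuntu-latest
    steps:
      - name: Label the new issue based on its title
        env:
          GH_TOKEN: ${{ secrets.GITHUB_TOKEN }} # lets gh authenticate non-interactively
        run: |
          gh issue edit ${{ github.event.issue.number }} \
            --repo ${{ github.repository }} \
            --add-label "needs-triage"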
Finally, you could also create a script that consumes the GitHub API to update an issue, and execute that script using the run option.
Example executing a Python script in your workflow:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: checkout repo content
        uses: actions/checkout@v2 # checkout the repository content to the github runner
      - name: setup python
        uses: actions/setup-python@v2
        with:
          python-version: 3.8 # install the python version needed
      - name: execute py script # run run.py to get the latest data
        run: |
          python run.py
        env:
          key: ${{ secrets.key }} # if run.py requires passwords etc., set them as secrets
      - name: export index
        .... # use the corresponding script or actions to help export

How can I run ipynb file in Github in some period via Github Action

I want to run an ipynb file in my GitHub repository periodically (like every 30 minutes).
I know that I can use GitHub Actions and create a yml file for this process, but I have no idea how to organize the yml file.
How can I do it?
Here is my test yml file, defined below.
# This is a basic workflow to help you get started with Actions
name: CI
# Controls when the action will run. Triggers the workflow on push or pull request
# events but only for the master branch
on:
  schedule:
    - cron: '*/5 * * * *'
  push:
    branches: [ master ]
  pull_request:
    branches: [ master ]
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      # Checks-out your repository under $GITHUB_WORKSPACE, so your job can access it
      - uses: actions/checkout@v2
      # Runs a single command using the runners shell
      - name: Run a one-line script
        run: echo Hello, world!
      # Runs a set of commands using the runners shell
      - name: Run a multi-line script
        run: |
          echo Add other actions to build,
          echo test, and deploy your project.
You can check out this GitHub action that runs your jupyter notebook and lets you upload the artifacts. As for how to organize your workflow file, you can read the documentation here.
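If you prefer not to depend on a marketplace action, a minimal sketch using jupyter nbconvert on a 30-minute schedule could look like the following (notebook.ipynb and the Python version are placeholders; note that scheduled workflows run against the default branch):

name: Run notebook
on:
  schedule:
    - cron: '*/30 * * * *' # every 30 minutes
  workflow_dispatch:
jobs:
  run-notebook:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - uses: actions/setup-python@v2
        with:
          python-version: 3.8
      - name: Install Jupyter
        run: pip install jupyter
      - name: Execute the notebook in place
        run: jupyter nbconvert --to notebook --execute --inplace notebook.ipynb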

How to keep secure files after a job finishes in Azure Devops Pipeline?

Currently I'm working on a pipeline script for Azure DevOps. I want to provide a Maven settings file as a secure file for the pipeline. The problem is, when I define a job only for providing the file, the file isn't there anymore when the next job starts.
I tried to define a job with a DownloadSecureFile task and a copy command to get the settings file. But when the next job starts the file isn't there anymore and therefore can't be used.
I already checked that by using pwd and ls in the pipeline.
This is part of my current YAML file (that actually works):
some variables
...
trigger:
  branches:
    include:
      - stable
      - master
jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
I wanted to put the DownloadSecureFile task and "cp $(settingsxml.secureFilePath) ./settings.xml" into a job of their own, because more jobs need this file for other branches/releases and I don't want to copy the exact same code into all of them.
This is the YAML file as I wanted it:
some variables
...
trigger:
  branches:
    include:
      - stable
      - master
jobs:
  - job: provide_maven_settings
    # no condition because all branches need the file
    steps:
      - task: DownloadSecureFile@1
        name: settingsxml
        displayName: Download maven settings xml
        inputs:
          secureFile: settings.xml
      - script: |
          cp $(settingsxml.secureFilePath) ./settings.xml
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - script: |
          docker login -u $(AzureRegistryUser) -p $(AzureRegistryPassword) $(AzureRegistryUrl)
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .
          docker push $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest)
....
other jobs
In my dockerfile the settings file is used like this:
FROM maven:3.6.1-jdk-8-alpine AS MAVEN_TOOL_CHAIN
COPY pom.xml /tmp/
COPY src /tmp/src/
# can't find the file when executing this:
COPY settings.xml /root/.m2/
WORKDIR /tmp/
RUN mvn install
...
The error happens, when docker build is started, because it can't find the settings file. It can though, when I use my first YAML example. I have a feeling that it has something to do with each job having a "Checkout" phase, but I'm not sure about that.
Each job in Azure DevOps runs on a different agent, so when you use Microsoft-hosted agents and split the pipeline into a few jobs, copying the secure file in one job doesn't help: the second job runs on a fresh agent that, of course, doesn't have the file.
You can solve your issue by using a self-hosted agent (then copy the file to your machine, and the second job runs on the same machine).
Or you can upload the file somewhere else (secured) and download it in the second job (but then why not just download it there from the start...).
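Alternatively, if the goal is only to avoid repeating the same steps in every job, an Azure Pipelines step template lets each job download the secure file itself without duplicating the code. A hedged sketch; the template file name is illustrative:

# templates/maven-settings.yml
steps:
  - task: DownloadSecureFile@1
    name: settingsxml
    displayName: Download maven settings xml
    inputs:
      secureFile: settings.xml
  - script: cp $(settingsxml.secureFilePath) ./settings.xml

# azure-pipelines.yml (repeated per job that needs the file)
jobs:
  - job: Latest_Release
    condition: eq(variables['Build.SourceBranchName'], 'master')
    steps:
      - template: templates/maven-settings.yml # inlines the shared steps into this job
      - script: |
          docker build -t $(AzureRegistryUrl)/$(projectName):$(projectVersionNumber-Latest) .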