I just got a GitHub account and am writing small scripts in Python, which I am learning.
While adding my code to GitHub I noticed there is an option to run tests/validation on my code, but mine is empty.
I googled around and found that pylint and black are good checks.
I found this Action that I want to add - https://github.com/marketplace/actions/python-quality-and-format-checker
There is a "script" and a "config" that I think I need to add/update somewhere. Also, when I click "Use latest version" it tells me to add the code into some .yml file.
Can anyone assist me in installing this Action or point me in the right direction? Also, how can I use this Action on all my repositories/code?
=======================================
EDIT:
This link has the instructions - https://help.github.com/en/actions/configuring-and-managing-workflows/configuring-a-workflow
place a .yaml or .yml file in this directory -> .github/workflows
For this Action: https://github.com/marketplace/actions/python-quality-and-format-checker
the code inside the file will look like this:
on: [push, pull_request]

name: Python Linting

jobs:
  PythonLinting:
    name: Python linting
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@master
      - name: Konstruktoid Python linting
        uses: konstruktoid/action-pylint@master
Thanks to Bertrand Martel.
pylint is part of the new GitHub Super Linter (github/super-linter):
Introducing GitHub Super Linter: one linter to rule them all
The Super Linter is a source code repository that is packaged into a Docker container and called by GitHub Actions. This allows for any repository on GitHub.com to call the Super Linter and start utilizing its benefits.
When you’ve set your repository to start running this action, any time you open a pull request, it will start linting the code base and return results via the Status API.
It will let you know if any of your code changes passed successfully, or if any errors were detected, where they are, and what they are.
This then allows the developer to go back to their branch, fix any issues, and create a new push to the open pull request.
At that point, the Super Linter will run again and validate the updated code and repeat the process.
And you can set it up to only lint new files if you want.
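For example, a minimal Super Linter workflow that lints only new or changed files might look like the sketch below; VALIDATE_ALL_CODEBASE is the documented switch for that, while the workflow name and checkout depth are assumptions on my part:

```yaml
name: Lint Code Base

on: [push, pull_request]

jobs:
  lint:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          # Super Linter needs the full git history to diff changed files
          fetch-depth: 0
      - name: Run Super Linter
        uses: github/super-linter@v4
        env:
          # Lint only new/changed files instead of the whole repo
          VALIDATE_ALL_CODEBASE: false
          GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
```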
Update August 2020:
github/super-linter issue 226 has been closed with PR 593:
This PR will add:
Black Python linting
Updated tests
Related
I am working on using GitHub Actions to build and deploy some Sphinx documentation to GitHub pages and am running into a problem. When the action runs, I get the error:
Notebook error:
PandocMissing in examples/Admissions Data.ipynb:
Pandoc wasn't found.
Please check that pandoc is installed:
https://pandoc.org/installing.html
make: *** [Makefile:20: html] Error 2
This is very similar to this question, Building docs fails due to missing pandoc, but when I try the solutions outlined in those answers it does not fix the problem. Also, I am using Sphinx, not Read the Docs.
My sphinx.yml file (according to the suggestion outlined in the deploying Sphinx docs online tutorial) is:
name: Sphinx build

on: push

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: Build HTML
        uses: ammaraskar/sphinx-action@master
      - name: Upload artifacts
        uses: actions/upload-artifact@v3
        with:
          name: html-docs
          path: docs/build/html/
      - name: Deploy
        uses: peaceiris/actions-gh-pages@v3
        if: github.ref == 'refs/heads/main'
        with:
          github_token: ${{ secrets.GITHUB_TOKEN }}
          publish_dir: docs/build/html
Additionally, my requirements.txt file is:
furo==2021.11.16
nbsphinx
sphinx_gallery
sphinx_rtd_theme
markupsafe==2.0.1
geostates
geopandas
pandas
jinja2>=3.0
pypandoc
Clearly something is breaking here with the pandoc installation, where I have .ipynb files trying to be converted into markdown files to be rendered in HTML. Does anyone have insight into what the fix is here? I tried adding the code from the answer in the aforementioned question into my conf.py file, but this did not fix the problem. I am thinking it might have to do with conda vs pip, but I am not sure how to fix this.
My repo for this testing project is available here where you can view the entire build error messages under the Actions tab.
Update: It seems like I need to do (possibly) something related to adding some lines of code to the sphinx.yml file?
EDIT 9/1/22:
From reading the Pandoc tutorial online, it says that something along the lines of:
name: Simple Usage

on: push

jobs:
  convert_via_pandoc:
    runs-on: ubuntu-18.04
    steps:
      - uses: docker://pandoc/core:2.9
        with:
          args: "--help" # gets appended to pandoc command
is how Pandoc is set up with GitHub Actions. The only thing is, I am pretty confused about how to integrate this code into the sphinx.yml file (where and what code to copy over) and which commands to pass through as the arguments.
The error happens in the ammaraskar/sphinx-action@master action. The action sets up its own Docker container, so installing pandoc in the main runner will have no effect.
The docs of that action make it seem like it should be possible to install pandoc by setting an appropriate pre-build-command, e.g.,
- name: Build HTML
  uses: ammaraskar/sphinx-action@master
  with:
    pre-build-command: >-
      apt-get update && apt-get install -y pandoc
However, this does not work; there is a pull request that might fix this, but it hasn't been merged.
One possibility, which has also been noted in the comments by @mzjn, would be to use pypandoc_binary instead of pypandoc in the file docs/requirements.txt. This would usually be the preferred solution. However, the versions used in the action image are rather old, so the installer cannot find a matching version, and it errors with a message that it "cannot find a matching distribution for pypandoc_binary".
It therefore seems like the only real option is to use a different action, or to fork and fix ammaraskar/sphinx-action. E.g., after forking the repository on GitHub, the action's Dockerfile can be changed to include a copy of pandoc:
COPY --from=pandoc/minimal:2.19.2 /pandoc /usr/bin/pandoc
Adding the above line somewhere in the middle of the Dockerfile will include a statically compiled pandoc binary in the image used to run the action.
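A rough sketch of what the forked Dockerfile might look like; note that the base image and the trailing comment are placeholders, as the actual ammaraskar/sphinx-action Dockerfile differs, and only the COPY line is the fix described above:

```dockerfile
# Hypothetical sketch of a forked sphinx-action Dockerfile.
# The base image below is a placeholder; keep the fork's original base.
FROM python:3.10-slim

# The actual fix: pull a statically linked pandoc binary
# from the official minimal pandoc image
COPY --from=pandoc/minimal:2.19.2 /pandoc /usr/bin/pandoc

# ... remainder of the original Dockerfile (dependencies, entrypoint) ...
```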
The new action can be used by replacing ammaraskar/sphinx-action in sphinx.yml with the name of the new fork. Everything should run smoothly with that in place.
I'm not sure if this is a bug or a breaking change that happened as of yesterday. I have a pretty simple setup calling three reusable workflows:
name: pr-checks

on:
  pull_request:
    branches: "**"

jobs:
  lint:
    name: Call Lint
    uses: ./.github/checks/check-lint.yaml
  test:
    name: Call Test
    uses: ./.github/checks/check-test.yaml
  e2e:
    name: Call E2E
    uses: ./.github/checks/check-e2e.yaml
But this now throws
"invalid value workflow reference: no version specified"
even though identical workflows worked yesterday.
When reusing workflows like this at the job level, it is not necessary to specify a version; in fact, it used to error out if I specified one.
Screenshots attached as I think this doesn't make much sense.
I did click on "re-run all jobs" and it re-ran successfully.
However, there was no discernible difference, even after removing the build step just to be sure there was nothing weird happening there:
As you can see in your 2 screenshots, one is referring to the .github/workflows directory (the one which worked), and the other to the .github/checks directory (the one which didn't).
Short answer: If you change the workflow folder back to workflows instead of checks, it should work as expected.
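Concretely, after moving the three check files into .github/workflows unchanged, the caller would look like this (a sketch assuming the file names stay the same):

```yaml
name: pr-checks

on:
  pull_request:
    branches: "**"

jobs:
  lint:
    name: Call Lint
    # the reusable workflows now live under .github/workflows
    uses: ./.github/workflows/check-lint.yaml
  test:
    name: Call Test
    uses: ./.github/workflows/check-test.yaml
  e2e:
    name: Call E2E
    uses: ./.github/workflows/check-e2e.yaml
```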
Long answer: It seems there is a confusion between the syntax of two different concepts:
local actions (using an action in the same repo)
reusable workflows (reusing the same workflow in different workflows)
LOCAL ACTIONS
To access local actions (folders with an action.yml file) from your workflow, you need to use actions/checkout first, to allow the workflow to access the repository's other folders and files.
Example:
steps:
  - uses: actions/checkout@v3 # Necessary to access local actions
  - name: Local Action Call
    uses: ./.github/actions/local-action # path/to/action
I've made a POC here some time ago if you want to have a look.
REUSABLE WORKFLOWS
Now, if you want to use reusable workflows, the issue is different:
As with other workflow files, you locate reusable workflows in the
.github/workflows directory of a repository. Subdirectories of the
workflows directory are not supported.
GitHub documentation reference
In that case, according to this other section from the documentation:
You reference reusable workflow files using one of the following
syntaxes:
{owner}/{repo}/.github/workflows/{filename}@{ref} for reusable workflows in public repositories.
./.github/workflows/{filename} for reusable workflows in the same repository.
{ref} can be a SHA, a release tag, or a branch name.
Example:
lint:
  name: Call Lint
  uses: ./.github/workflows/check-lint.yaml@{SHA/TAG/BRANCH}

or

lint:
  name: Call Lint
  uses: ./.github/workflows/check-lint.yaml
Here is another POC for the workflow call using this reusable workflow
CONCLUSION
It's like you were trying to call a reusable workflow as if it was a local action, which won't work as reusable workflows need to be located in the .github/workflows directory.
Note that you could eventually add @branch-name at the end of the workflow call, to be sure to use the workflow from the branch you want to test if the reusable workflow is already present on the default branch.
How to test a custom GitHub action without publishing it to marketplace?
Similar to the question above: I'm trying to test a GHA in a different public repository without publishing it.
This solution doesn't work for me:
steps:
  - uses: <username>/<repo-name>@<branch-name>
    with:
      # input params, if any
I get this error when running my GHA
line 1: <actualUserName>/<actual-repo-name>@main: No such file or directory
Error: Process completed with exit code 127.
Do I need to provide a specific URL to the repo?
I couldn't find much about whether "unpublishing" a GitHub Action from the Marketplace is possible, as of Dec, 2020.
There is a lot of doc regarding "how to publish", but couldn't find anything about unpublishing.
Am I using a bad keyword? Do I understand how publishing works correctly? I had assumed published actions are different from public actions available on GitHub directly, but I'm not so sure anymore.
Also, I read https://julienrenaux.fr/2019/12/20/github-actions-security-risk/, which basically states there is a huge security issue in blindly using something like peter-evans/create-or-update-comment@v1 without pinning a specific hash. But I haven't seen hashes used anywhere so far.
Here is an example of code that we actually use in our company, in our GitHub Action:
# On E2E success, add a comment to the PR, if there is an open PR for the current branch
- name: Comment PR (E2E success)
  uses: peter-evans/create-or-update-comment@v1
  if: steps.pr_id_finder.outputs.number && success()
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    issue-number: ${{ steps.pr_id_finder.outputs.number }}
    body: |
      :white_check_mark: E2E tests **SUCCESS** for commit ${{ github.sha }} previously deployed at [${{ env.VERCEL_DEPLOYMENT_URL }}](${{ env.VERCEL_DEPLOYMENT_URL }})
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
Basically, the above article states that using peter-evans/create-or-update-comment@v1 is dangerous, because the author can republish against the v1 tag and change the version we're using without us even noticing. (And from there come all sorts of dangerous scenarios, such as secrets stealing.)
The article is 1 year old, maybe things have changed since then? It's hard to believe GitHub would leave such a security hole at the heart of their Actions Marketplace. I never heard of it before and I'm quite shocked/concerned about that.
So, there were 2 questions at hand:
What is the role of the GitHub Actions Marketplace?
How to unpublish an action published in GitHub Actions Marketplace?
About 1): unlike what I assumed, the role of the Marketplace is limited to indexing GitHub Actions so that they're easier to find. It's very different from npm, whose role is to secure published packages so that no one can tamper with them.
Because Actions are referenced by their GitHub path, an author can destroy their own action at any time. Deleting the repository or marking it private will break all existing integrations right away.
Short story: forking the actions you use, and referencing them by their hash/SHA, is the only way to build resilient workflows that won't break your CI when someone changes their branch/tag or deletes/hides their GitHub repository.
Long story: See https://github.com/UnlyEd/next-right-now/discussions/223
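For instance, pinning a step to a full commit SHA instead of a mutable tag looks like the sketch below; note that the SHA shown is a made-up placeholder, so you should copy the actual commit hash you audited from the action's repository:

```yaml
- name: Comment PR (E2E success)
  # Pin to a full commit SHA instead of a mutable tag like v1.
  # The SHA below is a placeholder, not a real commit.
  uses: peter-evans/create-or-update-comment@0123456789abcdef0123456789abcdef01234567
  with:
    token: ${{ secrets.GITHUB_TOKEN }}
    issue-number: ${{ steps.pr_id_finder.outputs.number }}
    body: "E2E tests passed."
```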
About 2): by "unpublishing" an Action from the Marketplace, what you really do is "deindex" it, that's all. You don't destroy anything or break any workflow; you only remove it from the Marketplace, so it won't be shown there anymore.
You can do so by editing your releases (on GitHub) and unchecking "Publish this action to the GitHub Marketplace".
See https://docs.github.com/en/free-pro-team@latest/actions/creating-actions/publishing-actions-in-github-marketplace#removing-an-action-from-github-marketplace
I currently have TravisCI building on PRs in a public GitHub repo.
The instructions for Coveralls say to put this in a .coveralls.yml file:
service_name: travis-pro
repo_token: <my_token>
That doesn't work for me because the .coveralls.yml file would be public, checked into GitHub. My TravisCI is integrated with my GitHub repo, wired to a branch, and fires on PRs.
So I tried this:
On TravisCI's site I set an environment variable, COVERALLS_REPO_TOKEN, to my token's value.
Then I modified my .travis.yml to look like this:
language: scala

scala:
  - 2.11.7

notifications:
  email:
    recipients:
      - me@my_email.com

jdk:
  - oraclejdk8

script: "sbt clean coverage test"

after_success: "sbt coverageReport coveralls"

script:
  - sbt clean coverage test coverageReport &&
    sbt coverageAggregate

after_success:
  - sbt coveralls
Now when I create a PR on the branch this runs OK, with no errors, and I see output in Travis' console showing that the coverage test ran and generated files. But when I go to Coveralls I see nothing: "There have been no builds for this repo."
How can I set this up?
EDIT: I also tried creating a .coveralls.yml with just service_name: travis-ci
No dice, sadly.
How can I set this up?
Step 1 - Enable Coveralls
The first thing to do is to enable Coveralls for your repository.
You can do that on their website http://coveralls.io:
go to http://coveralls.io
sign in with your GitHub credentials
click on "Repositories", then "Add Repo"
if the repo isn't listed, yet, then "Sync GitHub Repos"
finally, flip the "enable coveralls" switch to "On"
Step 2 - Setup Travis-CI to push the coverage infos to Coveralls
Your .travis.yml file contains multiple entries in the script and after_success sections. So, let's clean that up a bit:
language: scala
scala: 2.11.7
jdk: oraclejdk8

script: "sbt clean coverage test"

after_success: "sbt coveralls"

notifications:
  email:
    recipients:
      - me@my_email.com
Now, when you push, the commands in the script section are executed.
This is where your coverage data is generated.
When the commands finish successfully, the after_success section is executed.
This is where the coverage data is pushed to Coveralls.
The .coveralls.yml config file
Public Travis-CI repos do not need this config file, since Coveralls can get the information via their API (via access-token exchange).
The repo_token (found on the repo page on Coveralls) is only needed for private repos and should be kept secret. If you publish it, then anyone could submit coverage data for your repo.
Boils down to: you need the file only in two cases:
to specify a custom location for the files containing the coverage data
or when you are using Travis-Pro and private repositories. Then you have to configure "travis-pro" and add the token:
service_name: travis-pro
repo_token: ...
I thought it might be helpful to explain how to set this up for PHP, given that the question applies essentially to any language that Coveralls supports (and not just Lua).
The process is particularly elusive for PHP because the PHP link on Travis-CI's website points to a password-protected page on Coveralls' site that provides no means by which to login using GitHub, unlike the main Coveralls site.
Equally confusing is that the primary PHP page on Coveralls' site seems to contain overly-complicated instructions that require yet another library called atoum/atoum (which looks to be defunct) and are anything but complete.
What ended up working perfectly for me is https://github.com/php-coveralls/php-coveralls/. The documentation is very thorough, but it boils down to this:
Enable Coveralls for your repository (see Step 1 in the Accepted Answer).
Ensure that xdebug is installed and enabled in PHP within your Travis-CI build environment (it should be by default), which is required for code-coverage support in PHPUnit.
Add phpunit and the php-coveralls libraries to the project with Composer:
composer require phpunit/phpunit php-coveralls/php-coveralls
Update .travis.yml at the root of the project to include the following directives:
script:
  - mkdir -p build/logs
  - vendor/bin/phpunit tests --coverage-clover build/logs/clover.xml

after_success:
  - travis_retry php vendor/bin/php-coveralls
Create .coveralls.yml at the root of the project and populate it with:
service_name: travis-ci
I'm not positive that this step is necessary for public repositories (the Accepted Answer implies that it's not), but the php-coveralls documentation says of this directive (emphasis mine):
service_name: Allows you to specify where Coveralls should look to find additional information about your builds. This can be any string, but using travis-ci or travis-pro will allow Coveralls to fetch branch data, comment on pull requests, and more.
Push the above changes to the remote repository on GitHub and trigger a Travis-CI build (if you don't already have hooks to make it happen automatically).
Slap a Coveralls code-coverage badge in your README (or wherever else you'd like). The required markup may be found on the Coveralls page for the repository in question, in the Badge column.
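As an illustration, Coveralls badge markup generally follows the pattern below; the user, repo, and branch values here are placeholders, so copy the exact markup from your repository's Coveralls page rather than this sketch:

```markdown
[![Coverage Status](https://coveralls.io/repos/github/<user>/<repo>/badge.svg?branch=main)](https://coveralls.io/github/<user>/<repo>?branch=main)
```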