GitHub Actions: get URL of test build

I can't seem to find anywhere in the docs how to get the URL of a test workflow. I have a simple job that runs tests; on failure it needs to post the URL of the failed job to another web service.
I was expecting this to be in the default env vars, but apparently not.
Thanks,

${{ github.server_url }}/${{ github.repository }}/actions/runs/${{ github.run_id }}
This was originally suggested in a post on the GitHub forum.

I'm convinced there is nothing about this in the docs, but I eventually found that this works:
https://github.com/<name>/<repo>/commit/$GITHUB_SHA/checks

You can get the GitHub Actions URL for a particular commit by constructing the URL as in the shell-script step below:
- name: Run shell cmd
  run: echo "https://github.com/${{ github.repository }}/commit/${{ github.sha }}/checks/${{ github.run_id }}"
Alternatively, GitHub Actions provides the environment variables GITHUB_REPOSITORY, GITHUB_SHA, and GITHUB_RUN_ID in each step, so you only need to construct the URL in the above pattern.
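The URL construction from the answers above can be sketched as a small shell helper; the example server, repository, and run ID values below are placeholders:

```shell
#!/usr/bin/env bash
# Sketch: build the run URL from the default environment variables that
# GitHub Actions sets in every step (no extra configuration needed).
build_run_url() {
  # $1 = server URL, $2 = owner/repo, $3 = run id
  printf '%s/%s/actions/runs/%s\n' "$1" "$2" "$3"
}

# Inside a real step this would be:
#   build_run_url "$GITHUB_SERVER_URL" "$GITHUB_REPOSITORY" "$GITHUB_RUN_ID"
url="$(build_run_url "https://github.com" "octo-org/octo-repo" "12345")"
echo "$url"
```

In a workflow you would guard the notification step with `if: failure()` so it only runs when the tests fail.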

Related

How can I use web-deploy on github actions?

I have my domain and my hosting through one.com and I'm tired of moving individual files through filezilla and wanted to automate that process using github actions.
I guess I'll start out by saying I'm completely new to this and this is my first time trying to set up an action. Basically what I want is to just push the code to my GitHub repo and then have it get built and sent to my host. Kinda like how it is with Netlify.
I stumbled upon this https://github.com/SamKirkland/web-deploy which should do the trick. I've seen tutorials using this method on youtube, but I guess they have a different provider than I do making it easier.
This is what information I have to go off of and I hope it will be enough to set it up:
Host: ssh.one-example.com
Username: one-example.com
Password: the one you chose for SSH in your Control Panel
Port: 22
and this is what I put in the yml file:
on:
  push:
    branches:
      - main
name: Publish Website
jobs:
  web-deploy:
    name: 🚀 Deploy Website Every Commit
    runs-on: ubuntu-latest
    steps:
      - name: 🚚 Get Latest Code
        uses: actions/checkout@v3
      - name: 📂 Sync Files
        uses: SamKirkland/web-deploy@v1
        with:
          target-server: ${{ secrets.ftp_host }}
          remote-user: ${{ secrets.ftp_username }}
          private-ssh-key: ${{ secrets.ftp_password }}
          destination-path: ~/destinationFolder/
I've tried having the target server be both ssh.one-example.com (obviously using my own here) and one-example.com@ssh.one-example.com
But I'm ending up with the following error when the action is running:
Error: Error: The process '/usr/bin/rsync' failed with exit code 255
So safe to say I'm a little lost and would like some guidance on how to make it work. Is it what I'm typing that's the issue, is it the host? And if so how do I fix it?
Any help is much appreciated.
Try an SSH action first, to see if you can actually open an SSH session. This is just for testing the connection: try to execute a trivial command (ls, or pwd) on the remote server.
Then, regarding your original action, check the error messages that appear before "The process '/usr/bin/rsync' failed with exit code 255".
See as an example of previous error messages SamKirkland/web-deploy issue 5 (not yet resolved).
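Exit code 255 from rsync generally means the underlying SSH connection itself failed. A minimal connection test could look like the sketch below, which assumes the widely used third-party action appleboy/ssh-action and reuses the secret names from the question:

```yaml
- name: 🔑 Test SSH connection
  uses: appleboy/ssh-action@master  # pin to a release you trust
  with:
    host: ${{ secrets.ftp_host }}        # e.g. ssh.one-example.com
    username: ${{ secrets.ftp_username }}
    password: ${{ secrets.ftp_password }}
    port: 22
    script: |
      pwd
      ls
```

Note also that web-deploy's private-ssh-key input appears to expect an actual SSH private key, not the control-panel password, which may itself explain the rsync failure.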

Ensure that a workflow in Github Actions is only ever triggered once

I have workflow that is triggered when a specific file is changed. Is it possible to ensure that this workflow is only triggered the first time that file is changed?
The use case is:
a repository is created from a template repository
to initialize the README and other things in the repo, some variables can be set in a JSON config file
once that file is committed, the workflow runs, creates the README from a template etc.
I have tried letting the workflow delete itself before it commits and pushes the changed files. That doesn't work: fatal: not in a git directory and fatal: unsafe repository ('/github/workspace' is owned by someone else).
What might work is adding a topic like initialized to the repo at the end of the workflow and checking for the presence of this topic at the beginning of the workflow. However, this feels like a hack in that I'd be abusing topics for something they're probably not meant to do.
So my question remains: is there a good way to only run the workflow once?
With the help of the comment by frennky I managed to solve this by using the GitHub CLI to disable the workflow.
Here is the step at the end of the workflow that does this:
- name: Disable this workflow
  shell: bash
  run: |
    gh workflow disable -R $GITHUB_REPOSITORY "${{ github.workflow }}"
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
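One caveat worth adding (based on the GitHub REST API permissions for disabling workflows): the default GITHUB_TOKEN needs actions: write for this call, so in repositories with restricted default token permissions you may have to widen the job's permissions block, sketched here:

```yaml
permissions:
  actions: write   # lets GITHUB_TOKEN run `gh workflow disable`
  contents: write  # only needed if the workflow also commits and pushes files
```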

Does skip_deploy_on_missing_secrets work in static web app pipeline?

I would like to only build my static web app and not deploy it. I saw there is an env setting skip_deploy_on_missing_secrets, but after setting it in the pipeline it just gets ignored and the pipeline fails with an error saying the deployment token is not set. How exactly should I use this env setting? Does it actually work?
There's not much info on the internet about this parameter. However, at least Dapr docs suggest that it should work, and I doubt they'd put it in their docs if it didn't (here).
However, I had problems getting it working as well.
One thing to notice is that the Dapr docs actually show a GitHub Action, and those work a little differently from Azure DevOps YAML pipelines, which is what I was using.
Finally I stumbled upon a comment on a similar GitHub issue which hints that this magic undocumented parameter should be passed as an environment variable, whereas I was passing it as an input. Maybe GitHub Actions forwards these parameters to env vars automatically?
So I tried setting it as an env var, and it worked!
- task: AzureStaticWebApp@0
  inputs:
    app_location: ...blahblahblah
    ....
    # skip_deploy_on_missing_secrets: true
    # ABOVE: this one is documented in a few places, but it's expected to be an ENV var!
    # see https://github.com/Azure/static-web-apps/issues/679
  env:
    SKIP_DEPLOY_ON_MISSING_SECRETS: true
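For comparison, a GitHub Actions sketch of the same idea (the secret name, app_location, and output_location here are placeholders; the env var is the one discussed above):

```yaml
- name: Build (skip deploy when no token is available)
  uses: Azure/static-web-apps-deploy@v1
  with:
    azure_static_web_apps_api_token: ${{ secrets.AZURE_STATIC_WEB_APPS_API_TOKEN }}
    action: upload
    app_location: "/"
    output_location: "dist"
  env:
    SKIP_DEPLOY_ON_MISSING_SECRETS: true
```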

"Error: timeout of 600000ms exceeded" in github build with ghost inspector tests

I have the below error in a GitHub build with Ghost Inspector tests:
"Error: timeout of 600000ms exceeded"
I tried maxTimeout in the build .yml file, but it's not working.
https://ghostinspector.com/docs/integration/github-actions/
If anyone knows the solution, please share it with me.
Instead of changing the concurrency in Ghost Inspector, you can use the Ghost Inspector CLI to run the tests from GitHub Actions.
Ex:
https://ghostinspector.com/docs/api/cli/
- uses: docker://ghostinspector/cli
  with:
    args: suite execute ${{ secrets.GI_SUITE }} \
      --apiKey ${{ secrets.GI_API_KEY }} \
      --errorOnFail
secrets.GI_SUITE and secrets.GI_API_KEY are the Ghost Inspector suite and API keys; you can get those from the Ghost Inspector settings. They are stored as GitHub secrets: https://docs.github.com/en/actions/security-guides/encrypted-secrets
The above issue was caused by concurrency limitations.
Previously I used 25 tests and it was not working.
Now I changed that to 50 tests, and maxTimeout=600000ms works correctly for me.

GitHub Actions: How to dynamically set environment url based on deployment step output?

I found out about a really nice GitHub Actions feature called Environments. Using the appropriate syntax, an environment can also be created inside a GitHub Actions workflow.yml like this:
environment:
  name: test_environment
  url: https://your-apps-url-here.com
As the docs state, that's a valid way to create GitHub Actions environments:
Running a workflow that references an environment that does not exist
will create an environment with the referenced name.
But inside my current GitHub Actions workflow, is there a way to dynamically set the url based on a deployment step's output? I have a dynamic URL resulting from the deployment process to AWS which I can't define up front.
The job workflow docs tell us that there's also a way of using expressions inside the url field:
environment:
  name: test_environment
  url: ${{ steps.step_name.outputs.url_output }}
Now imagine a ci.yml workflow file that uses the AWS CLI to deploy a static website to S3, where we use a tool like Pulumi to dynamically create an S3 bucket inside our AWS account. We can read the dynamically created bucket name using the command pulumi stack output bucketName. The deploy step inside the ci.yml could then look like this:
- name: Deploy Nuxt.js generated static site to S3 Bucket via AWS CLI
  id: aws-sync
  run: |
    aws s3 sync ../dist/ s3://$(pulumi stack output bucketName) --acl public-read
    echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"
  working-directory: ./deployment
There are two crucial points here. First, we use id on the deployment step to define a step name we can easily access via step_name inside our environment:url. Second, we define a step output using echo "::set-output name=s3_url::http://$(pulumi stack output bucketUrl)"; in this example it creates a variable s3_url. You could replace pulumi stack output bucketUrl with any other command or tool that returns your dynamic environment URL.
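A side note: the `::set-output` workflow command has since been deprecated on newer runners, and the same step output can instead be written to the file referenced by `$GITHUB_OUTPUT`. A shell sketch, where the bucket URL value stands in for `$(pulumi stack output bucketUrl)`:

```shell
#!/usr/bin/env bash
# Fall back to a temp file when run outside of GitHub Actions.
GITHUB_OUTPUT="${GITHUB_OUTPUT:-$(mktemp)}"

# Placeholder for: bucket_url="$(pulumi stack output bucketUrl)"
bucket_url="example-bucket.s3-website.eu-central-1.amazonaws.com"

# Equivalent of: echo "::set-output name=s3_url::http://${bucket_url}"
echo "s3_url=http://${bucket_url}" >> "$GITHUB_OUTPUT"
```

The rest of the workflow, including the environment:url reference, stays the same either way.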
Also be sure to add a http:// or https:// prefix in order to prevent an error message like this:
Environment URL 'microservice-ui-nuxt-js-hosting-bucket-bc75fce.s3-website.eu-central-1.amazonaws.com' is not a valid http(s) URL, so it will not be shown as a link in the workflow graph.
Now the environment definition at the top of our ci.yml can access the s3_url output variable from our deployment step like this:
jobs:
  ci:
    runs-on: ubuntu-latest
    environment:
      name: microservice-ui-nuxt-js-deployment
      url: ${{ steps.aws-sync.outputs.s3_url }}
    steps:
      - name: Checkout
        ...
steps:
- name: Checkout
...
Using steps.aws-sync we reference the deployment step directly, since we defined it with that id. The appended .outputs.s3_url then references the variable containing our S3 URL. If you defined everything correctly, the GitHub Actions UI will render the environment URL directly below the finished job.
Here's also a fully working workflow embedded inside an example project.