GitHub Actions - Have Checkout action in its own job - github

I'm trying to configure a workflow in GitHub Actions using a self-hosted runner.
The runner itself has Node installed for the checkout task, but not Python, which is why I'm trying to run the Python script inside the specified container. I'm trying to execute a simple Python script from inside the repo; however, when the second job runs inside the container, it cannot find the file main.py.
name: GitHub Actions Test
on:
  workflow_dispatch:
    inputs:
      job:
        description: 'checkout and run'
        required: true
        default: 'checkout-repo'
jobs:
  checkout-repo:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v3
  run-python:
    runs-on: self-hosted
    container:
      image: <some_python3_docker_image>
      credentials:
        username: ${{ github.actor }}
        password: ${{ secrets.github_token }}
    steps:
      - run: python3 main.py
Is there any way to make the repo workspace persist between the two jobs?
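One possible approach, sketched here under the assumption that the repository is small enough to pass around as an artifact (the upload/download steps are mine, not part of the original workflow; <some_python3_docker_image> remains a placeholder), is to hand the checked-out files from the first job to the second:

jobs:
  checkout-repo:
    runs-on: self-hosted
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Upload repo as artifact
        uses: actions/upload-artifact@v3
        with:
          name: repo
          path: .
  run-python:
    runs-on: self-hosted
    needs: checkout-repo
    container:
      image: <some_python3_docker_image>
    steps:
      - name: Download repo artifact
        uses: actions/download-artifact@v3
        with:
          name: repo
      # main.py should now be in the job's working directory
      - run: python3 main.py

Alternatively, running actions/checkout as the first step of the container job itself may also work and avoids the artifact round-trip, since container jobs get the runner's bundled Node mounted in.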

Related

Execute github actions workflow/job in directory where code was changed

I am trying to implement a GitHub Actions workflow with a job which will plan and apply my Terraform code changes only for the directory where changes were made. The problem I am currently facing is that I can't figure out how to switch directories so that terraform plan is executed from the directory where the code has been updated/changed.
I have a monorepo setup which is as follows:
repo
tf-folder-1
tf-folder-2
tf-folder-3
Each folder contains an independent Terraform configuration. So, for example, I would like to run a workflow only when files change inside tf-folder-1. Such a workflow needs to switch to the working directory tf-folder-1 and then run terraform plan/apply.
jobs:
  terraform:
    name: "Terraform"
    runs-on: ubuntu-latest
    defaults:
      run:
        working-directory: ./tf-folder-1
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Configure AWS credentials from Test account
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: arn:aws:iam::000000000000000:role/deploy-role
          aws-region: eu-west-2
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
      ...
So far I have the above Terraform job, but it only runs for a statically defined working-directory. It doesn't cover the use case where the workflow should run when changes happen within a specific folder. Can someone advise how to fix this pipeline?
Thanks
GitHub Actions has path filtering you can take advantage of when you are working with workflows that are triggered off a push or pull_request event.
For example, say you have a monorepo with the directories tf_1, tf_2, and tf_3. You can do something like the following for when changes occur in the directory tf_1.
name: Demonstrate GitHub Actions on Monorepo
on:
  push:
    branches:
      - master
    paths:
      - 'tf_1/**'
defaults:
  run:
    working-directory: tf_1
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
For more details on path filtering, please refer to the GitHub Actions syntax documentation.
You can use a GitHub Action that outputs the directories where files have been changed/modified, for example this one: Changed-files, or even perform the calculation with a shell step using git diff.
If you use the suggested action, you can set the input dir_names to true, which outputs unique changed directories instead of filenames; based on those results you can change the directory in which to run your Terraform operations.
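A minimal sketch of that idea, assuming the tj-actions/changed-files action (check the version and output names against its documentation) and a plain shell loop over the changed directories:

- uses: actions/checkout@v3
  with:
    fetch-depth: 0
- name: Get changed directories
  id: changed-dirs
  uses: tj-actions/changed-files@v35
  with:
    dir_names: "true"
- name: Plan each changed directory
  run: |
    # all_changed_files holds unique directory names because dir_names is set
    for dir in ${{ steps.changed-dirs.outputs.all_changed_files }}; do
      terraform -chdir="$dir" init -input=false
      terraform -chdir="$dir" plan -input=false
    done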
Here is a solution that runs multiple jobs based on the number of directories that have been updated.
In the snippet below, the directories job checks which directories have been updated and outputs them as an array, which is then used in a matrix strategy for the terraform job.
jobs:
  directories:
    name: "Directory-changes"
    runs-on: ubuntu-latest
    steps:
      - uses: theappnest/terraform-monorepo-action@master
        id: directories
        with:
          ignore: |
            aws/**/policies
            aws/**/templates
            aws/**/scripts
      - run: echo ${{ steps.directories.outputs.modules }}
    outputs:
      dirs: ${{ steps.directories.outputs.modules }}
  terraform:
    name: "Terraform"
    runs-on: ubuntu-latest
    needs: directories
    strategy:
      matrix:
        directories: ${{ fromJson(needs.directories.outputs.dirs) }}
    defaults:
      run:
        working-directory: ${{ matrix.directories }}
    steps:
      - name: Checkout
        uses: actions/checkout@v3
      - name: Setup Terraform
        uses: hashicorp/setup-terraform@v2
        with:
          cli_config_credentials_token: ${{ secrets.TF_CLOUD_TEAM_API_TOKEN_PREPROD }}
      - name: Terraform Format
        id: fmt
        run: terraform fmt -check
      - name: Terraform Init
        id: init
        run: terraform init
      - name: Terraform Validate
        id: validate
        run: terraform validate -no-color
      - name: Terraform Plan
        id: plan
        if: github.event_name == 'pull_request'
        run: terraform plan -no-color -input=false
        continue-on-error: true

Github CI/CD reuse step

I'm kind of a beginner with CI/CD, but I wrote a workflow that deploys a Vue/Vite project to an Ubuntu VPS. But it's not as it should be. So what am I actually doing?
First, as usual, installing the project's dependencies and building it.
jobs:
  build:
    name: "Build"
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: Install
        run: yarn
      - name: Build
        run: yarn build
The problem comes once that passes. I'm connecting over SSH like this:
deploy:
  name: "Deploy"
  needs: project_setup
  runs-on: ubuntu-latest
  steps:
    - name: Deploy to server
      uses: appleboy/ssh-action@master
      env:
        GIT_REPO: Comet-Frontend
        GIT_SSH: ${{ github.repositoryUrl }}
      with:
        host: ${{ secrets.VPS_IP }}
        username: ${{ secrets.VPS_USER }}
        password: ${{ secrets.VPS_PASSWORD }}
        port: ${{ secrets.VPS_PORT }}
        envs: GIT_SSH, GIT_REPO
and at the very bottom:
script: |
  cd /var/www/vue
  git pull
  ls
  yarn
  yarn build
  cp -R /root/Frontend/dist /var/www/vue
So I would like to define the SSH connection once and run those scripts separately with different step names. Is that possible, or do I have to connect over SSH for every step?
If each step needs SSH to access either a remote repository URL or your VPS target server, then yes, you would need SSH in each step.
The alternative is to copy a deployment script to the server (through SSH): the steps included in that script can then be executed directly on that server, where the script has been copied. No SSH is needed for the script execution itself, since it is already running at the target.
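A rough sketch of that alternative, assuming a deploy.sh committed to the repository (the script name and target path are illustrative) and the appleboy scp/ssh actions:

- name: Copy deploy script to server
  uses: appleboy/scp-action@master
  with:
    host: ${{ secrets.VPS_IP }}
    username: ${{ secrets.VPS_USER }}
    password: ${{ secrets.VPS_PASSWORD }}
    port: ${{ secrets.VPS_PORT }}
    source: "deploy.sh"
    target: "/var/www/vue"
- name: Run deploy script on server
  uses: appleboy/ssh-action@master
  with:
    host: ${{ secrets.VPS_IP }}
    username: ${{ secrets.VPS_USER }}
    password: ${{ secrets.VPS_PASSWORD }}
    port: ${{ secrets.VPS_PORT }}
    # deploy.sh would contain the git pull / yarn / yarn build / cp steps
    script: bash /var/www/vue/deploy.sh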

github `registry_package` event doesn’t trigger

I managed to create two actions on one private repository:
The first one builds the Docker image and pushes it to the GitHub Container Registry.
The second one needs to be triggered when a newer image is published to the GitHub Container Registry, and deploys the image.
The issue is that the second one doesn't get triggered and doesn't run. I use the GitHub repo token, and I found this, which says triggering new workflows should be done using a personal access token. Is this the real issue, or is there some workaround? Personally, I don't want to put my GitHub token there.
For reference, here is the YAML for the first GitHub Action:
name: Build Docker Image
on:
  push:
    branches:
      - feature/ver-64/service-template
  workflow_dispatch:
env:
  REGISTRY: ghcr.io
  IMAGE_NAME: ${{ github.repository }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Docker meta
        id: meta
        uses: docker/metadata-action@v3
        with:
          images: |
            ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}
          tags: |
            type=ref,event=branch
            type=ref,event=pr
            type=semver,pattern={{version}}
            type=sha
      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v1
      - name: Login to Github Container Repository
        if: github.event_name != 'pull_request'
        uses: docker/login-action@v1
        with:
          registry: ${{ env.REGISTRY }}
          username: ${{ github.actor }}
          password: ${{ secrets.GITHUB_TOKEN }}
      - name: Build and push
        uses: docker/build-push-action@v2
        with:
          context: .
          push: ${{ github.event_name != 'pull_request' }}
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
And this is the YAML for the second one, which needs to be triggered once the first one publishes a new image to the registry:
name: Deploy to Azure
on:
  registry_package:
    types: [published, updated]
jobs:
  debug:
    runs-on: ubuntu-latest
    steps:
      - uses: hmarr/debug-action@v2
GitHub Actions prevents workflows from triggering further workflows, partly to protect against infinite loops. That is why the token used by GitHub Actions has a special flag on it which causes the second workflow not to trigger.
You have a few options:
1. Use a PAT to push to the GitHub Container Registry (as per the docs).
2. Have a second stage that depends on the first one in your existing workflow to perform the deployment (see the sketch after this list).
3. A variation on 2: extract the deploy logic into a single template/composite action and use it both in the workflow that pushes the image and in the workflow that triggers when an image is pushed.
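A minimal sketch of option 2, assuming the deployment can live in the same workflow that builds and pushes the image (the deploy step body is a placeholder for your actual Azure deployment):

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      # ... existing build-and-push steps ...
  deploy:
    runs-on: ubuntu-latest
    needs: build
    if: github.event_name != 'pull_request'
    steps:
      # assumes the workflow-level env (REGISTRY, IMAGE_NAME) defined above
      - name: Deploy the freshly pushed image
        run: echo "deploy ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }} to Azure here"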

Github actions how to configure two runners in two servers

I have a GitHub repo called api.
api has two branches, DEV and QA.
I have set up a workflow for the DEV branch and it works correctly.
This is the workflow for the DEV branch:
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Node.js CI
on:
  push:
    branches: [DEV]
  pull_request:
    branches: [DEV]
jobs:
  build:
    runs-on: self-hosted
    strategy:
      matrix:
        node-version: [14.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: "npm"
      - run: npm ci
      # - run: pm2 stop app.js
      - run: pm2 start ecosystem.config.js --update-env
Then I created my second EC2 instance, a second runner, and another workflow file:
# This workflow will do a clean install of node dependencies, cache/restore them, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: QA Build
on:
  push:
    branches: [QA]
  pull_request:
    branches: [QA]
jobs:
  build:
    runs-on: self-hosted
    strategy:
      matrix:
        node-version: [14.x]
        # See supported Node.js release schedule at https://nodejs.org/en/about/releases/
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v2
        with:
          node-version: ${{ matrix.node-version }}
          cache: 'npm'
      - run: npm i
      - run: pm2 start ecosystem.config.js --update-env
But whenever I push some code to the QA branch, the first runner on the first EC2 instance still runs. It seems the second instance and workflow aren't used at all.
How do I specify the runner and the instance based on the branch?
If you just have two runners with the default setup, you will not be able to differentiate between the two; a job will simply take either of them.
A label can mark one specific runner, which you can then choose directly. See the GitHub self-hosted runners docs on labels.
You can then target the specific runner like this:
runs-on: [self-hosted, dev]
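For example, assuming the first runner is registered with an extra dev label and the second with a qa label (labels are assigned when configuring each runner), the two workflows can each target their own machine:

# DEV workflow
jobs:
  build:
    runs-on: [self-hosted, dev]

# QA workflow
jobs:
  build:
    runs-on: [self-hosted, qa]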

GitHub Actions unable to load via SSH despite it appearing to work using ssh-access

I am working on a GitHub Action that runs tests on my PRs and pushes, but I am having trouble ensuring that the tests are able to access my private repos.
I have tested the SSH credentials I am using locally and they 100% work.
Here is the SSH agent action I am using: https://github.com/webfactory/ssh-agent
And here is my GitHub Actions workflow:
# This workflow will do a clean install of node dependencies, build the source code and run tests across different versions of node
# For more information see: https://help.github.com/actions/language-and-framework-guides/using-nodejs-with-github-actions
name: Node.js CI
on:
  push:
    branches:
      - master
      - release/*
  pull_request:
    branches:
      - master
      - release/*
jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [10.x, 12.x, 14.x]
    steps:
      - uses: actions/checkout@v2
      - uses: webfactory/ssh-agent@v0.4.0
        with:
          ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: npm ci
      - run: npm run build --if-present
      - run: npm test
It appears to be making no attempt to use the SSH keys that it is given.
Since https://github.com/Tixpire/tixpire-server seems to be private, you will need to use a PAT (personal access token) to access it.
See also actions/checkout issue 95.
It is an HTTPS URL, so no amount of SSH keys will work: you would need an SSH URL for that (git@github.com:Tixpire/tixpire-server).
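If you prefer to keep using the SSH key loaded by webfactory/ssh-agent rather than switching to a PAT, one common workaround (sketched here; adjust to how your private dependency is referenced) is to have git rewrite HTTPS GitHub URLs to SSH before installing:

- uses: webfactory/ssh-agent@v0.4.0
  with:
    ssh-private-key: ${{ secrets.SSH_PRIVATE_KEY }}
- name: Use SSH instead of HTTPS for GitHub dependencies
  # rewrites https://github.com/... references to git@github.com:... so the SSH key is used
  run: git config --global url."git@github.com:".insteadOf "https://github.com/"
- run: npm ci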