How can I trigger a single deployment hook from a matrix in GitHub?

So I have this matrix:
name: test
on: [create, push]
jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker compose
        run: |
          docker-compose -f docker-compose.${{ matrix.context }}.yml up -d
          docker ps
      - name: install liquibase
        run: |
          wget --quiet https://github.com/liquibase/liquibase/releases/download/v3.8.4/liquibase-3.8.4.tar.gz
          wget --quiet https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
          mkdir -p liquibase
          tar --extract --file liquibase-*.tar.gz --directory liquibase
      - name: wait for dbs
        run: |
          set -x
          wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
          chmod +x wait-for-it.sh
          ./wait-for-it.sh localhost:5432
          docker pull postgres:alpine
          isReady() {
            docker run --network host --rm postgres:alpine pg_isready \
              --host localhost --dbname test --username postgres --timeout 30
          }
          until isReady
          do
            sleep 1
          done
      - name: db migration
        run: |
          ./liquibase/liquibase --defaultsFile=liquibase-${{ matrix.context }}.properties update \
            || ( docker-compose logs && exit 1 )
The matrix's only point is to test different contexts for Liquibase. I don't actually want to create different binaries for each matrix entry or anything like that. I see the matrix as a kind of thread fork, but I don't know how to join at the end so I can kick off a single deployment event.
I thought that running on check_run.completed would allow me to do this, but that event doesn't seem to trigger either.
How can I kick off a single deployment event after the entire matrix has run?

If I'm understanding your requirement correctly, you can just add another job that depends on the build job containing the matrix, using needs. It will wait for all of the matrix jobs to finish before running deploy.
on: push
jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - name: Tests
        run: echo "Testing ${{ matrix.context }}"
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: echo "Deploying"
Perhaps this answer completes the solution for you?

I see two ways of accomplishing this:
Split the current matrix into two jobs and have the deployment hook depend on the test job. This way test and pgtest run in parallel, and when test finishes the deployment will start. The problem with this approach is code readability and maintenance, as you'll have the code completely duplicated unless you encapsulate it into an action itself, which is extreme overkill.
Run the deployment hook as a conditional last step of test. This seems the best option given the question you've asked, but there might be situations where this itself is not optimal.
The last step for solution (2) would look something like this:
- name: Deployment
  if: matrix.context == 'test'
  run: echo "Do something"
Hope this helps.

Related

npm run prod not actually running in github action despite showing successful

I don't believe that npm run prod is actually running(?) in my GitHub action, despite it not throwing any kind of error. The reasons I believe that are:
If I delete my public/js/app.js file locally and push the change, it doesn't get rebuilt and my production site breaks as there's no app.js file.
If I leave the file in place and push my code to production, it's not minified, and one of the keys I need to reference still contains the dev value.
If I replace the aforementioned key with a different value and run npm run prod locally, then app.js is minified and contains my updated value.
Why would the npm run prod command not work within a github action, and also indicate that it ran successfully?
Here's my entire workflow file:
name: Prod
on:
  push:
    branches: [ main ]
jobs:
  laravel_tests:
    runs-on: ubuntu-20.04
    env:
      DB_CONNECTION: mysql
      DB_HOST: localhost
      DB_PORT: 3306
      DB_DATABASE: testdb
      DB_USERNAME: root
      DB_PASSWORD: root
    steps:
      - name: Set up MySQL
        run: |
          sudo systemctl start mysql
          mysql -e 'CREATE DATABASE testdb;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      - uses: actions/checkout@main
      - name: Copy .env
        run: php -r "file_exists('.env') || copy('.env.example', '.env');"
      - name: Install Dependencies
        run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress
      - name: Generate key
        run: php artisan key:generate
      - name: Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Clean Install
        run: npm ci
      - name: Compile assets
        run: npm run prod
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
  forge_deploy:
    runs-on: ubuntu-20.04
    needs: laravel_tests
    steps:
      - name: Make Get Request
        uses: satak/webrequest-action@master
        with:
          url: ${{ secrets.PROD_DEPLOY_URL }}
          method: GET
UPDATE:
My suspicion is that running the build process in the action isn't actually updating the repo (actually, I'm fairly certain it's not, as that would likely not be the desired behavior). So the deploy URL that I'm using to push the code is likely just grabbing the repo as-is and deploying it.
I need a way to update only the public folder on the repo with the output of the npm run prod command. Not sure if this is possible, or advisable, but I'm nearly positive that's what's going on.
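One way to do that, sketched below as an extra step after "Compile assets" (this is an assumption about the setup, not the asker's confirmed fix: it presumes actions/checkout has run and the workflow's GITHUB_TOKEN is permitted to push; the step name and commit message are illustrative):

```yaml
- name: Commit compiled assets
  run: |
    # Commit only the built public folder back to the repo,
    # so the deploy URL picks up the compiled files.
    git config user.name "github-actions[bot]"
    git config user.email "github-actions[bot]@users.noreply.github.com"
    git add public
    # Only commit if something actually changed
    git diff --staged --quiet || git commit -m "Build assets [skip ci]"
    git push
```

The `[skip ci]` marker is there to keep the pushed commit from re-triggering the workflow in a loop.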

JMeter: upload test artifacts on Git

Hello, I want to upload the HTML file generated from the execution of my JMeter test; unfortunately, I'm encountering an error when executing my script. Your response is highly appreciated. Thank you.
Here's my YAML file.
name: CI
on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:
    inputs:
      choice:
        type: choice
        description: Environment
        options:
          - test
          - dev
          - uat
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup-jmeter
        run: |
          sudo apt-get update
          sudo apt install curl -y
          sudo apt install -y default-jdk
          sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
          sudo tar -xvf apache-jmeter-5.3.tgz
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib/ext && sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
      - name: run-jmeter-test
        run: |
          echo "choice is ${{ github.event.inputs.choice }} / ${{ inputs.choice }}"
          $GITHUB_WORKSPACE/apache-jmeter-5.3/bin/jmeter.sh -n -t testGIT.jmx -Jchoice="${{ github.event.inputs.choice }}" -l result.jtl -e -o $GITHUB_WORKSPACE/html/test
      - name: Upload Results
        uses: actions/upload-artifact@v2
        with:
          name: jmeter-results
          path: result.jtl
      - name: Upload HTML
        uses: actions/upload-artifact@v2
        with:
          name: jmeter-results-HTML
          path: index.html
Expected Result:
I should be able to see 2 entries for the results: one for jmeter-results and the other for jmeter-results-HTML.
Note: the index.html generated from my local run is what I want my execution to produce.
You're creating the HTML Reporting Dashboard under the html/test folder but trying to upload an index.html file from the current folder. I believe you need to change the artifact path to
path: html/test/index.html
It doesn't make sense to archive index.html alone, though: it relies on the content and sbadmin2-1.0.7 folders, so it's better to upload the whole folder; otherwise the dashboard will not be usable.
According to JMeter Best Practices you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.5 (or whatever the latest stable version available on the JMeter Downloads page is).
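For instance, the last step of the workflow above could become (a sketch keeping the same artifact name, and assuming the dashboard was written to html/test as in the run-jmeter-test step):

```yaml
- name: Upload HTML
  uses: actions/upload-artifact@v2
  with:
    name: jmeter-results-HTML
    # Upload the whole dashboard folder, not just index.html,
    # so the content/ and sbadmin2-1.0.7/ assets come with it.
    path: html/test/
```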

I don't have access to my code in the runner in GitHub Actions

I created the following "main.yml" file.
name: Deploy
# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:
# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      - name: test
        run: ls -al && cd .. && ls -al
      - name: Create SSH key
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/private.key
          sudo chmod 600 ~/.ssh/private.key
          ssh-keyscan -H ${{secrets.SSH_HOST}} > ~/.ssh/known_hosts
          echo "Host ${{secrets.SSH_HOST}}
            User ${{secrets.SSH_USER}}
            IdentityFile ~/.ssh/private.key" > ~/.ssh/config
          cat ~/.ssh/config
        shell: bash
        env:
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
      - name: test-remote
        run: rsync -r ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}}:~/${{secrets.SSH_HOST}}
      - name: Deploy with rsync
        run: cd .. && ls -al && rsync -avz ./ ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/${{ secrets.SSH_HOST }}
However, I cannot access the code from my GitHub repository, as seen in the runner's output.
Maybe I'm using the rsync command incorrectly, so I tried listing the directory with ls, even from its parent directory. How do you think I can solve this?
Junior things... I forgot to check out the repository at the beginning. I added checkout to the start of the steps as below, and the problem was solved.
- name: Checkout
  uses: actions/checkout@main

How to add scripts to image before running them in gitlab CI

I am trying to run a CI job in GitLab, where the integration tests depend on PostgreSQL.
In GitLab I've used the PostgreSQL runner. The issue is that the integration tests require the extension uuid-ossp. I could run the SQL commands before each test to ensure the extension is applied, but I'd rather apply it once before running all the tests.
So I've used the image tag in the CI script to add a .sh file to the PostgreSQL image in /docker-entrypoint-initdb.d/, and then tried to run the integration tests with the same image. The problem is that it doesn't seem to apply the extension, as the integration tests fail where the uuid functions are used: function uuid_generate_v4() does not exist.
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - echo "#!/bin/bash
      set -e
      psql \"$POSTGRES_DB\" -v --username \"$POSTGRES_USER\" <<-EOSQL
          create extension if not exists \"uuid-ossp\";
      EOSQL" > /docker-entrypoint-initdb.d/create-uuid-ossp-ext.sh
  artifacts:
    untracked: true

test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - go test ./... -v -race -tags integration
An alternative I was hoping would work was:
prep-postgres:
  stage: setup-db
  image: postgres:12.2-alpine
  script:
    - psql -d postgresql://postgres@localhost:5432/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"
  artifacts:
    untracked: true
But in this case the client is unable to connect to Postgres (I imagine it's because I'm editing the image, not running it?).
I must be missing something obvious, or is this even possible?
In both cases, in the prep-postgres job, you make changes in a running container (from the postgres:12.2-alpine image) but you don't save these changes, so the test-integration job can't use them.
I advise you to build your own image using a Dockerfile and the entrypoint script for the Postgres Docker image. This answer from @Elton Stoneman could help.
After that, you can refer to your previously built image under services: in the test-integration job, and you will benefit from the created extension.
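As a sketch, such an image needs only a two-line Dockerfile (the script name create-uuid-ossp-ext.sh is carried over from the first job; where you push the resulting image is up to you):

```dockerfile
# The official postgres entrypoint runs any *.sh / *.sql files placed
# in /docker-entrypoint-initdb.d/ when the database is first initialized.
FROM postgres:12.2-alpine
COPY create-uuid-ossp-ext.sh /docker-entrypoint-initdb.d/
```

Build and push it once to a registry your runners can reach, then list that image (instead of postgres:12.2-alpine) under services: in test-integration.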
At the moment I've had to do something a little smelly and download the Postgres client before running the extension installation.
.prepare_db: &prepare_db |
  apt update \
    && apt install -y postgresql-client \
    && psql -d postgresql://postgres@localhost/db_name -c "CREATE EXTENSION IF NOT EXISTS \"uuid-ossp\";"

test-integration:
  stage: test
  services:
    - postgres:12.2-alpine
  variables:
    POSTGRES_DB: db_name
    POSTGRES_USER: postgres
  script:
    - *prepare_db
    - go test ./... -v -race -tags integration
This isn't perfect. I was hoping there was a way to save the state of the Docker image between stages, but there doesn't seem to be that option. So the options seem to be either:
install it during the test-integration stage, or
create a base image specifically for this purpose, where the installation of the extension has already been done.
I've gone with option 1 for now, but will reply if I find something more concise, easier to maintain, and faster.

How to build/run one Dockerfile per build job in Travis-CI?

I've got a repository with multiple Dockerfiles which take ~20 min each to build: https://github.com/fredrikaverpil/pyside2-wheels
I'd like to divide these Dockerfiles efficiently so that each is built in its own job.
Right now, this is my .travis.yml:
language: python
sudo: required
dist: trusty
python:
  - 2.7
  - 3.5
services:
  - docker
install:
  - docker build -f Dockerfile-Ubuntu16.04-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-ubuntu16.04-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This creates two jobs, one per Python version. However, I'd rather have one job per Dockerfile, as I'm about to add more such files.
How can this be achieved?
Managed to solve it, I think.
language: python
sudo: required
dist: trusty
services:
  - docker
matrix:
  include:
    - env: DOCKER_OS=ubuntu16.04
      python: 2.7
    - env: DOCKER_OS=ubuntu16.04
      python: 3.5
    - env: DOCKER_OS=centos7
      python: 2.7
install:
  - docker build -f Dockerfile-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} -t fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION} .
  - docker run --rm -v $(pwd):/pyside-setup/dist fredrikaverpil/pyside2-$DOCKER_OS-py${TRAVIS_PYTHON_VERSION}
script:
  - ls -al *.whl /
This results in three job builds.