npm run prod not actually running in GitHub Action despite showing success

I don't believe that npm run prod is actually running in my GitHub Action, despite it not throwing any kind of error. The reasons I believe that are:
If I delete my public/js/app.js file locally and push the change, it doesn't get rebuilt and my production site breaks as there's no app.js file.
If I leave the file in place and push my code to production, it's not minified, and one of the keys I need to reference still contains the dev value.
If I replace the aforementioned key with a different value and run npm run prod locally, then app.js is minified and contains my updated value.
Why would the npm run prod command not work within a GitHub Action while still indicating that it ran successfully?
Here's my entire workflow file:
name: Prod

on:
  push:
    branches: [ main ]

jobs:
  laravel_tests:
    runs-on: ubuntu-20.04
    env:
      DB_CONNECTION: mysql
      DB_HOST: localhost
      DB_PORT: 3306
      DB_DATABASE: testdb
      DB_USERNAME: root
      DB_PASSWORD: root
    steps:
      - name: Set up MySQL
        run: |
          sudo systemctl start mysql
          mysql -e 'CREATE DATABASE testdb;' -uroot -proot
          mysql -e 'SHOW DATABASES;' -uroot -proot
      - uses: actions/checkout@main
      - name: Copy .env
        run: php -r "file_exists('.env') || copy('.env.example', '.env');"
      - name: Install Dependencies
        run: composer install -q --no-ansi --no-interaction --no-scripts --no-progress
      - name: Generate key
        run: php artisan key:generate
      - name: Directory Permissions
        run: chmod -R 777 storage bootstrap/cache
      - name: Clean Install
        run: npm ci
      - name: Compile assets
        run: npm run prod
      - name: Execute tests (Unit and Feature tests) via PHPUnit
        run: vendor/bin/phpunit
  forge_deploy:
    runs-on: ubuntu-20.04
    needs: laravel_tests
    steps:
      - name: Make Get Request
        uses: satak/webrequest-action@master
        with:
          url: ${{ secrets.PROD_DEPLOY_URL }}
          method: GET
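One quick way to confirm whether npm run prod really produces output on the runner would be a step that lists the compiled files right after Compile assets (a trivial sketch; the path matches the public/js/app.js mentioned above):

      - name: Verify compiled assets
        run: ls -la public/js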
UPDATE:
My suspicion is that running the build process in the action isn't actually updating the repo (in fact I'm fairly certain it isn't, as that would usually not be the desired behavior). So the deploy URL I'm hitting is likely just grabbing the repo as-is and deploying it.
I need a way to update only the public folder in the repo with the output of the npm run prod command. I'm not sure if this is possible, or advisable, but I'm nearly positive that's what's going on.
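If committing the compiled assets back to the repo from CI is the route taken, a step along these lines could be appended after Compile assets in the laravel_tests job (a minimal sketch; the bot identity and commit message are placeholders, and it assumes the default actions/checkout credentials are still available for the push):

      - name: Commit compiled assets
        run: |
          git config user.name "github-actions[bot]"
          git config user.email "github-actions[bot]@users.noreply.github.com"
          git add public
          # commit only when the build actually changed something
          git diff --staged --quiet || git commit -m "Rebuild production assets"
          git push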

Related

JMeter upload test artifacts on Git

Hello, I want to upload the HTML file generated by the execution of my JMeter test; unfortunately, I'm encountering an error when executing my script. Your response is highly appreciated. Thank you.
Here's my YAML file.
name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:
    inputs:
      choice:
        type: choice
        description: Environment
        options:
          - test
          - dev
          - uat

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup-jmeter
        run: |
          sudo apt-get update
          sudo apt install curl -y
          sudo apt install -y default-jdk
          sudo curl -O https://archive.apache.org/dist/jmeter/binaries/apache-jmeter-5.3.tgz
          sudo tar -xvf apache-jmeter-5.3.tgz
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo curl -O https://repo1.maven.org/maven2/kg/apc/cmdrunner/2.2.1/cmdrunner-2.2.1.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib/ext && sudo curl -O https://repo1.maven.org/maven2/kg/apc/jmeter-plugins-manager/1.6/jmeter-plugins-manager-1.6.jar
          cd $GITHUB_WORKSPACE/apache-jmeter-5.3/lib && sudo java -jar cmdrunner-2.2.1.jar --tool org.jmeterplugins.repository.PluginManagerCMD install-all-except jpgc-hadoop,jpgc-oauth,ulp-jmeter-autocorrelator-plugin,ulp-jmeter-videostreaming-plugin,ulp-jmeter-gwt-plugin,tilln-iso8583
      - name: run-jmeter-test
        run: |
          echo "choice is ${{ github.event.inputs.choice }}" / ${{ inputs.choice }}
          $GITHUB_WORKSPACE/apache-jmeter-5.3/bin/./jmeter.sh -n -t testGIT.jmx -Jchoice="${{ github.event.inputs.choice }}" -l result.jtl -e -o $GITHUB_WORKSPACE/html/test
      - name: Upload Results
        uses: actions/upload-artifact@v2
        with:
          name: jmeter-results
          path: result.jtl
      - name: Upload HTML
        uses: actions/upload-artifact@v2
        with:
          name: jmeter-results-HTML
          path: index.html
Expected Result:
I should be able to see two artifact entries: one for jmeter-results and the other for jmeter-results-HTML.
Note: the index.html generated from my local run is what I want this execution to produce.
You're creating the HTML Reporting Dashboard under the html/test folder but trying to upload the index.html file from the current folder. I believe you need to change the artifact path to
path: html/test/index.html
It doesn't make sense to archive index.html alone, though: it relies on the content and sbadmin2-1.0.7 folders, so it's better to upload the whole folder, otherwise the dashboard will not be usable.
According to JMeter Best Practices, you should always be using the latest version of JMeter, so consider upgrading to JMeter 5.5 (or whatever the latest stable version available at the JMeter Downloads page is).
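So the final upload step could look something like this (a sketch; the path mirrors the -o $GITHUB_WORKSPACE/html/test output directory used in the workflow above):

      - name: Upload HTML
        uses: actions/upload-artifact@v2
        with:
          name: jmeter-results-HTML
          path: html/test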

Introducing secret variables to dockerfile on Github Actions

I am trying to configure my etc/pip.conf file to download from a private PyPI Artifactory repository while using a secret variable in my Dockerfile.
Dockerfile
FROM python
WORKDIR ./app
COPY . /app
RUN pip install --upgrade pip
RUN pip install -r pre-requirements.txt
RUN echo ${{ secrets.PIP }} > etc/pip.conf
RUN pip install -r post-requirements.txt
CMD ["python", "./simpleflask.py"]
docker-image.yml
name: CI

on:
  push:
    branches: [ "main" ]
  pull_request:
    branches: [ "main" ]
  workflow_dispatch:

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Setup JFrog CLI
        uses: jfrog/setup-jfrog-cli@v2
        env:
          JF_ARTIFACTORY_SERVER: ${{ secrets.JFROG_CLI }}
      - name: Checkout
        uses: actions/checkout@v3
      - name: Build
        run: |
          docker build -t simple-flask .
          docker tag simple-flask awakzdev.jfrog.io/docker-local/simple-flask:latest
          docker push awakzdev.jfrog.io/docker-local/simple-flask:latest
Pretty simple and straightforward, but my pipeline returns the following:
Step 6/8 : RUN echo ${{ secrets.PIP }} > etc/pip.conf
---> Running in deb3e3f4167f
/bin/sh: 1: Bad substitution
The command '/bin/sh -c echo ${{ secrets.PIP }} > etc/pip.conf' returned a non-zero code: 2
Error: Process completed with exit code 2.
Edit:
Trying a slightly different approach, I went to install the dependencies in the pipeline instead. My .yml looks like this now:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - name: Setup JFrog CLI
        uses: jfrog/setup-jfrog-cli@v2
        env:
          JF_ARTIFACTORY_SERVER: ${{ secrets.JFROG_CLI }}
      - name: Checkout
        uses: actions/checkout@v3
      - name: install dependencies
        run: |
          pip config -v list
          echo "${{ secrets.PIP }}" > /etc/pip.conf
          pip install ganesha-experimental==2.0.1
      - name: Build
        run: |
          docker build -t simple-flask .
          docker tag simple-flask awakzdev.jfrog.io/docker-local/simple-flask:latest
          docker push awakzdev.jfrog.io/docker-local/simple-flask:latest
but the following error is being returned:
Run pip config -v list
For variant 'global', will try loading '/etc/xdg/pip/pip.conf'
For variant 'global', will try loading '/etc/pip.conf'
For variant 'user', will try loading '/home/runner/.pip/pip.conf'
For variant 'user', will try loading '/home/runner/.config/pip/pip.conf'
For variant 'site', will try loading '/usr/pip.conf'
/home/runner/work/_temp/09382b8f-ce09-4646-816f-fb337f40ad4b.sh: line 2: /etc/pip.conf: Permission denied
Error: Process completed with exit code 1.
I've placed the secret in my .yml file instead.
As for the broken pip permissions, I used
sudo chown runner /etc/
echo ${{ secrets.PIP }} > /etc/pip.conf
which resulted in another error with the contents of the pip.conf file (even though it was populated correctly through secrets).
So instead I found you can specify the URL like so:
ganesha_experimental==5.0.0 --find-links=https://awakzdev.jfrog.io/artifactory/
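For anyone hitting the original Bad substitution error: ${{ ... }} is GitHub Actions workflow syntax and is never evaluated inside a Dockerfile, so the shell in the image sees the literal text and fails. If the value genuinely has to reach the image build, one common pattern (a sketch, not the approach used above; PIP_CONF is a placeholder build-arg name, and plain build args remain visible in the image history, so treat this as a convenience rather than real secrecy) is to pass it in as a build argument:

# Dockerfile: declare the build arg and write it out during the build
ARG PIP_CONF
RUN echo "$PIP_CONF" > /etc/pip.conf

# Workflow: expand the secret here, where ${{ ... }} is valid
      - name: Build
        run: docker build --build-arg PIP_CONF="${{ secrets.PIP }}" -t simple-flask .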

I don't have access to my code in the runner in GitHub Actions

I created the following "main.yml" file.
name: Deploy

# Controls when the workflow will run
on:
  # Triggers the workflow on push or pull request events but only for the main branch
  push:
    branches: [ main ]
  pull_request:
    branches: [ main ]
  # Allows you to run this workflow manually from the Actions tab
  workflow_dispatch:

# A workflow run is made up of one or more jobs that can run sequentially or in parallel
jobs:
  # This workflow contains a single job called "build"
  build:
    # The type of runner that the job will run on
    runs-on: ubuntu-latest
    # Steps represent a sequence of tasks that will be executed as part of the job
    steps:
      - name: test
        run: ls -al && cd .. && ls -al
      - name: Create SSH key
        run: |
          mkdir -p ~/.ssh/
          echo "$SSH_PRIVATE_KEY" > ~/.ssh/private.key
          sudo chmod 600 ~/.ssh/private.key
          ssh-keyscan -H ${{secrets.SSH_HOST}} > ~/.ssh/known_hosts
          echo "Host ${{secrets.SSH_HOST}}
          User ${{secrets.SSH_USER}}
          IdentityFile ~/.ssh/private.key" > ~/.ssh/config
          cat ~/.ssh/config
        shell: bash
        env:
          SSH_PRIVATE_KEY: ${{secrets.SSH_PRIVATE_KEY}}
      - name: test-remote
        run: rsync -r ${{secrets.SSH_USER}}@${{secrets.SSH_HOST}}:~/${{secrets.SSH_HOST}}
      - name: Deploy with rsync
        run: cd .. && ls -al && rsync -avz ./ ${{ secrets.SSH_USER }}@${{ secrets.SSH_HOST }}:/var/www/${{ secrets.SSH_HOST }}
However, I cannot access the code from my GitHub repository, as you can see from the output in the runner.
Maybe I'm using the rsync command incorrectly, so I tried listing the directory with ls, and even listing from its parent directory. How do you think I can solve this?
Junior things... I forgot to check out the repository at the beginning. I added a checkout step to the start of the steps, as below, and the problem was solved.
- name: Checkout
  uses: actions/checkout@main

How can I trigger a single deployment hook from a matrix in github?

So I have this matrix
name: test

on: [create, push]

jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
      - name: docker compose
        run: |
          docker-compose -f docker-compose.${{ matrix.context }}.yml up -d
          docker ps
      - name: install liquibase
        run: |
          wget --quiet https://github.com/liquibase/liquibase/releases/download/v3.8.4/liquibase-3.8.4.tar.gz
          wget --quiet https://jdbc.postgresql.org/download/postgresql-42.2.9.jar
          mkdir -p liquibase
          tar --extract --file liquibase-*.tar.gz --directory liquibase
      - name: wait for dbs
        run: |
          set -x
          wget --quiet https://raw.githubusercontent.com/vishnubob/wait-for-it/master/wait-for-it.sh
          chmod +x wait-for-it.sh
          ./wait-for-it.sh localhost:5432
          docker pull postgres:alpine
          isReady() {
            docker run --network host --rm postgres:alpine pg_isready \
              --host localhost --dbname test --username postgres --timeout 30
          }
          until isReady
          do
            sleep 1
          done
      - name: db migration
        run: |
          ./liquibase/liquibase --defaultsFile=liquibase-${{ matrix.context }}.properties update \
            || ( docker-compose logs && exit 1 )
The matrix's only point is to test different contexts for Liquibase. I don't actually want to create different binaries for each matrix entry or anything like that. I see the matrix as a kind of thread fork, but I don't know how to join at the end so I can kick off a single deployment event.
I thought that running on check_run.completed would allow me to do this, but... that event doesn't seem to trigger either.
How can I kick off a single deployment event after the entire matrix has run?
If I'm understanding your requirement correctly, you can just add another job that depends on the build job containing the matrix using needs. It will wait for all the matrix jobs to finish before running deploy.
on: push

jobs:
  build:
    strategy:
      matrix:
        context: [test, pgtest]
    runs-on: ubuntu-latest
    steps:
      - name: Tests
        run: echo "Testing ${{ matrix.context }}"
  deploy:
    needs: build
    runs-on: ubuntu-latest
    steps:
      - name: Deploy
        run: echo "Deploying"
Perhaps this answer completes the solution for you?
I see two ways of accomplishing this:
Split the current matrix into two jobs and have the deployment hook depend on the test job. This way test and pgtest run in parallel, and when test finishes the deployment will start. The problem with this approach is code readability and maintenance, as you'll have the code completely duplicated unless you encapsulate it into an action itself, which is extreme overkill.
Run the deployment hook as a conditional last step of test. This seems the best option given the question you've asked, but there might be situations where this itself is not optimal.
The last step for solution (2) would look something like this:
- name: Deployment
  if: matrix.context == 'test'
  run: echo "Do something"
Hope this helps.

GitHub Actions scp into VPS via SSH only

This is currently my workflow:
name: CI

on: [push]

jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v1
      - uses: actions/setup-node@v1
        with:
          node-version: '10.x'
      - run: npm install
      - run: npm install -g @angular/cli > /dev/null
      - run: ng build --prod
      - run: scp -o StrictHostKeyChecking=no -r ./dist/pwa/* user@domain.com:/home/user/domain.com/pwa
The above is roughly a translation of what I have on CircleCI. However, obviously, the above fails.
CircleCI allowed adding 'SSH Permissions' to a project, so that when the build was set up to run, those credentials were attached to the environment, making any ssh commands to the VPS easy.
How can I accomplish a similar approach in GitHub? Does GitHub Actions support SSH permissions? If not, is there a workaround?
How do you folks copy files from your workflow builds to an external server via ssh (i.e. scp)?
This is what I do, after adding the SSH key to GitHub secrets:
run: |
  mkdir -p ~/.ssh
  echo "${{ secrets.SSH_KEY }}" > ~/.ssh/id_rsa
  chmod 700 ~/.ssh/id_rsa
  ssh-keyscan -H domain.com >> ~/.ssh/known_hosts
  scp -o StrictHostKeyChecking=no -r ./dist/pwa/* user@domain.com:/home/user/domain.com/pwa
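If you'd rather not manage the key file by hand, an alternative (a sketch; the inputs follow the community appleboy/scp-action's documented names) is to let a dedicated action do the copy:

      - name: Copy dist via scp
        uses: appleboy/scp-action@master
        with:
          host: domain.com
          username: user
          key: ${{ secrets.SSH_KEY }}
          source: "dist/pwa/*"
          target: "/home/user/domain.com/pwa"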