GitHub Actions deploy artifacts

The workflow below fails when deploying the artifacts to S3. The final step of the deploy job complains that ./build doesn't exist, presumably because it can't find the artifacts:
The user-provided path ./build does not exist.
##[error]Process completed with exit code 255.
How do I make it recognise the artifacts created in the build job?
name: Build and deploy
on:
  push:
    branches:
      - master

jobs:
  build:
    name: Build
    runs-on: ubuntu-18.04
    strategy:
      matrix:
        node-version: [10.x]
    steps:
      - uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - run: |
          npm i -D
          npm test --if-present
          npm run build:prod
        env:
          CI: true
      - name: Upload Artifact
        uses: actions/upload-artifact@master
        with:
          name: build
          path: build

  deploy:
    name: Deploy
    needs: build
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@master
      - name: Deploy to S3
        env:
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
        run: |
          aws s3 cp \
            --recursive \
            --acl public-read \
            --region ap-southeast-2 \
            ./build s3://example

You need to download the artifact in the deploy job. See the actions/download-artifact action.
deploy:
  name: Deploy
  needs: build
  runs-on: ubuntu-18.04
  steps:
    - uses: actions/checkout@master
    - name: Download Artifact
      uses: actions/download-artifact@master
      with:
        name: build
        path: build
    - name: Deploy to S3
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      run: |
        aws s3 cp \
          --recursive \
          --acl public-read \
          --region ap-southeast-2 \
          ./build s3://example

Related

Caching artifacts in GitHub actions using runner controller

I want to set up self-hosted runners on a k8s cluster using actions-runner-controller.
My question is: given that, per the official docs, persistent runners are not recommended,
Although not generally recommended, it’s possible to disable the
passing of the --ephemeral flag by explicitly setting ephemeral: false
in the RunnerDeployment or RunnerSet spec. When disabled, your runner
becomes “persistent”.
how can one leverage artifact caching when using this controller?
Where will the cache content be stored in the k8s cluster, given that containers are ephemeral?
If you are not using the enterprise version, the caches are handled by GitHub itself. I ran into similar problems on my self-hosted runner when setting up caching for Node.js, Vue.js, and Java. Here's what I did:
Vue.js (moving the dist folder; note the actions/upload-artifact@v3):
name: CI
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:

jobs:
  build-web:
    runs-on: self-hosted
    container:
      image: node:14
    steps:
      - uses: actions/checkout@v3
      - name: Build shc-web
        run: |
          yarn config set cache-folder .yarn
          yarn
          yarn run build
      - uses: actions/upload-artifact@v3
        with:
          name: dist-folder
          path: dist/

  registry-web:
    runs-on: self-hosted
    needs: ['build-web']
    steps:
      - uses: actions/checkout@v3
      - uses: actions/download-artifact@v3
        with:
          name: dist-folder
          path: dist/
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Registry on AWS repository
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: shccp
        run: |
          docker build -t $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID .
          docker push $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID
Also, I used two different jobs to handle the build. It could have been done in a single job, with no need to upload/download the dist at all. In fact, that is exactly what I had to do in the Node.js workflow: node_modules is simply too big to upload.
Node.js:
name: CI
on:
  push:
    branches: [ "stage" ]
  pull_request:
    branches: [ "stage" ]
  workflow_dispatch:

jobs:
  ci-api:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 14
      - name: Build api
        run: npm install
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Registry on AWS repository
        id: registry-aws
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: shcapi
        run: |
          docker build -t $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID .
          docker push $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID
          echo "::set-output name=image-tag::$REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID"
No cache is needed once everything runs in a single job, which is a pretty nice feature of GitHub Actions, by the way.
The Java cache, on the other hand, is handled by the following action:
name: CI
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:

jobs:
  ci-etlv4:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: adopt-openj9
          java-version: 8
          cache: 'maven'
      - uses: stCarolas/setup-maven@v4.4
        with:
          maven-version: 3.8.2
      - name: Build ETLv4
        run: |
          echo ${{ secrets.SETTINGS_BASE64 }} | base64 -d > settings.xml
          mvn --settings settings.xml --global-settings settings.xml clean package -DskipTests=true
      - uses: docker/login-action@v2
        with:
          registry: "iad.ocir.io"
          username: ${{ secrets.OCI_REGISTRY_USER }}
          password: ${{ secrets.OCI_REGISTRY_PASSWORD }}
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
        with:
          driver: docker
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: XXXXX
The actions/setup-java@v3 action can handle the Maven/Gradle caches.
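The same built-in cache works for Gradle projects; here is a minimal sketch (the job name, distribution, Java version, and the gradlew wrapper are assumptions, adjust them to your project):
  ci-gradle:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: temurin
          java-version: 17
          cache: 'gradle'   # lets setup-java cache the Gradle dependencies between runs
      - name: Build
        run: ./gradlew build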
Hope it helps.

AWS CI/CD with GitHub Actions and CodeDeploy to an EC2 instance

I am trying to do CI/CD with GitHub Actions and AWS CodeDeploy to an EC2 instance.
I have one EC2 instance and three GitHub repositories (each repository also has its own git flow).
name: Deployment
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  buildAndTest:
    name: CI Pipeline
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ '14.x' ]
    steps:
      - uses: actions/checkout@v2
      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      # Install project dependencies, test and build
      - name: Install dependencies
        run: yarn
      - name: Run build
        run: yarn build

  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: ['14.x']
        appname: ['app_name']
        deploy-group: ['group_name']
        region: ['region']
    needs: [buildAndTest]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v2
      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      # Step 1
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ matrix.region }}
      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.appname }} \
            --deployment-group-name ${{ matrix.deploy-group }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
It works fine when I push or open a pull request to a single repo, but when I push to two repos at once, meaning I push and deploy concurrently, only one succeeds and the other fails.
version: 0.0
os: linux
files:
  - source: .
    destination: /var/www/source
hooks:
  ApplicationStart:
    - location: deploy.sh   # yarn install and restart server
      timeout: 300
      runas: root
What is really curious is that, apart from the main location (on EC2), some files in the other two repos (excluding build and the like) get removed.
I am using the same application and deployment group for all three repositories; is that a problem?
Any help would be super helpful :)
An AWS CodeDeploy deployment group cannot run two deployments at the same time.
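One way around it, as a sketch rather than a verified fix (the application and deployment-group names below are placeholders), is to give each repository its own CodeDeploy application and deployment group in its workflow matrix, and to let the deploy step wait until the deployment has finished:
    # in each repository's workflow, point the matrix at that repository's own
    # CodeDeploy application and deployment group
    strategy:
      matrix:
        node-version: ['14.x']
        appname: ['app_name_repo_one']
        deploy-group: ['group_name_repo_one']
        region: ['region']

    # ...rest of the deploy job unchanged, except the final step, which now
    # captures the deployment id and waits for it to complete
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          DEPLOYMENT_ID=$(aws deploy create-deployment \
            --application-name ${{ matrix.appname }} \
            --deployment-group-name ${{ matrix.deploy-group }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }} \
            --query deploymentId --output text)
          aws deploy wait deployment-successful --deployment-id "$DEPLOYMENT_ID"
Sharing one deployment group across the three repositories is also the likely reason files disappear: before installing a new revision, CodeDeploy removes the files it recorded for the group's previous deployment, so the three pipelines keep cleaning up each other's files under /var/www/source.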

How to create a dotenv file using Github Actions in the Deploy steps?

I'm learning CI/CD to deploy a REST API to a VPS using GitHub Actions.
I am confused about how to create a dotenv file on the VPS server using GitHub Actions, because this REST API needs a jsonwebtoken key and I have to put it in the .env file. I've tried several ways but nothing works.
If you have an answer, please modify my .yml file below and include it so I can understand clearly.
I really appreciate any answer.
name: Node.js CICD
on:
  push:
    branches: [master]
  pull_request:
    branches: [master]

jobs:
  test:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [14.x]
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: create env file
        run: |
          touch .env
          echo JWT_ACCESS_TOKEN_SECRET=${{ secrets.JWT_ACCESS_TOKEN_SECRET }} >> .env
      - name: install and test
        run: |
          npm i
          npm run build --if-present
          npm run test
        env:
          CI: true

  deploy:
    needs: [test]
    runs-on: ubuntu-latest
    steps:
      - name: deploy with SSH
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: 22
          script: |
            cd ~/apps/routeros-api
            git pull origin master
            npm i --production
            pm2 restart routeros-api
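One way to get the .env onto the VPS is to write it inside the SSH script itself, forwarding the secret through the step's environment. A sketch based on the workflow above (it assumes appleboy/ssh-action's envs input, which passes the listed variables through to the remote script):
      - name: deploy with SSH
        uses: appleboy/ssh-action@master
        env:
          JWT_ACCESS_TOKEN_SECRET: ${{ secrets.JWT_ACCESS_TOKEN_SECRET }}
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          key: ${{ secrets.PRIVATE_KEY }}
          port: 22
          envs: JWT_ACCESS_TOKEN_SECRET
          script: |
            cd ~/apps/routeros-api
            git pull origin master
            # recreate the .env on the server from the forwarded secret
            echo "JWT_ACCESS_TOKEN_SECRET=$JWT_ACCESS_TOKEN_SECRET" > .env
            npm i --production
            pm2 restart routeros-api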

Inject .env.local file or custom set of environment variables to yarn build in Github Actions

I have a GitHub Action which builds a React app (based on create-react-app) and deploys it to AWS S3. I have to pass some environment variables for the yarn build command to run correctly.
I could keep them directly in a .env file, but I don't want to keep them inside the repository. Currently I'm just prepending the environment variables to the yarn build command, but that's an annoying solution and seems a bit hacky. Ideally, I'd like to inject a .env.local file with my own configuration, but I don't have a good idea of how to do it.
Here's my build.yml file:
name: Build
on:
  push:
    branches:
      - master

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.13.1]
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Yarn install
        run: yarn install
      - name: Build
        run: REACT_APP_GRAPHQL_URL=https://some.url/graphql CI=false yarn build
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --delete
        env:
          AWS_S3_BUCKET: my-bucket-name
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: "build"
So as you can see, the magic happens here:
run: REACT_APP_GRAPHQL_URL=https://some.url/graphql CI=false yarn build
How can I make it look nicer? It's quite OK with two variables, but what if I end up with dozens of them?
By the way, it's a private repository, if that makes any difference.
And I don't want to switch to another CI solution; for now GitHub Actions is enough for me.
You can move the variables to a workflow-level env block, like this:
name: Build
on:
  push:
    branches:
      - master

env:
  CI: false
  REACT_APP_GRAPHQL_URL: https://some.url/graphql

jobs:
  build:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [12.13.1]
    steps:
      - uses: actions/checkout@v1
      - name: Use Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      - name: Yarn install
        run: yarn install
      - name: Build
        run: yarn build
      - name: Deploy to S3
        uses: jakejarvis/s3-sync-action@master
        with:
          args: --acl public-read --delete
        env:
          AWS_S3_BUCKET: my-bucket-name
          AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
          AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          AWS_REGION: ${{ secrets.AWS_REGION }}
          SOURCE_DIR: "build"
I think it looks nicer.
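If the list grows to dozens of variables, another option is to keep the whole file in a single secret and write it out just before the build (a sketch; ENV_FILE is a hypothetical secret holding the complete contents of your .env.local, which create-react-app reads at build time):
      - name: Create .env.local
        run: echo "${{ secrets.ENV_FILE }}" > .env.local   # write the hypothetical multi-line secret to disk
      - name: Build
        run: yarn build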

Github Action: [!] Error: Cannot find module 'rollup-plugin-commonjs'

My package.json includes rollup and rollup-plugin-commonjs, but inside GitHub Actions those packages cannot be found!
If I don't add rollup to the global package installation step of the workflow, it says rollup is not found. But after adding both rollup and rollup-plugin-commonjs I get [!] Error: Cannot find module 'rollup-plugin-commonjs'.
This is my workflow file:
name: Github Action
on:
  push:
    branches:
      - fix/auto-test

jobs:
  test:
    runs-on: ubuntu-18.04
    steps:
      - uses: actions/checkout@v1
      - name: Bootstrap app on Ubuntu
        uses: actions/setup-node@v1
        with:
          node-version: '11.x.x'
      - name: Install global packages
        run: npm install -g prisma rollup rollup-plugin-commonjs
      - name: Get yarn cache directory path
        id: yarn-cache-dir-path
        run: echo "::set-output name=dir::$(yarn cache dir)"
      - name: Cache Project dependencies test
        uses: actions/cache@v1
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install project deps
        if: steps.yarn-cache.outputs.cache-hit != 'true'
        run: yarn
      - name: Run docker
        run: docker-compose -f docker-compose.test.prisma.yml up --build -d
      - name: Sleep
        uses: jakejarvis/wait-action@master
        with:
          time: '30s'
      - name: Reset the database for safety
        run: yarn reset:backend
      - name: Deploy
        run: yarn deploy:backend
      - name: Build this great app
        run: yarn build
      - name: start app and worker concurrently and create some instances
        run: |
          yarn start &
          yarn start:worker &
          xvfb-run --auto-servernum yarn test:minimal:runner
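One detail in the workflow above can produce exactly this error (an observation plus a sketch, not a verified fix): the cached path is yarn's download cache, not node_modules, so on a cache hit the "Install project deps" step is skipped and node_modules is never created, which is when locally declared packages such as rollup-plugin-commonjs cannot be resolved. Running the install unconditionally keeps the cache benefit while still creating node_modules, so the build can use the local rollup instead of the global one:
      - name: Cache Project dependencies test
        uses: actions/cache@v1
        id: yarn-cache
        with:
          path: ${{ steps.yarn-cache-dir-path.outputs.dir }}
          key: ${{ runner.os }}-yarn-${{ hashFiles('**/yarn.lock') }}
          restore-keys: |
            ${{ runner.os }}-yarn-
      - name: Install project deps   # always run; a cache hit only makes it faster
        run: yarn
      - name: Build this great app
        run: yarn build               # resolves rollup and its plugins from node_modules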