GitHub Actions - Multiple branches - AWS S3 static site

I'm working with GitHub Actions to push my site to S3 for continuous integration. It's working for my main branch. I was looking to have it also push the QA branch's code to my Quality Assurance bucket:
Main branch -> live production S3 bucket
QA branch -> QA S3 bucket
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://PROD_Bucket --delete
I tried the following, but it fails. I'm not sure if I'm misunderstanding the logic of the YAML.
name: Upload Website
on:
  push:
    branches:
      - master
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://PROD_Bucket --delete
on:
  push:
    branches:
      - QA
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://QA_Bucket --delete
Is there an easy way to say "when a commit is made to QA, go to bucket_2; when a commit is made to main, go to bucket_1"?

Oh, you just need a second YAML file under root/.github/workflows/. It would just whitelist the second branch instead. Duh.
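For reference, a minimal sketch of that second file (say, .github/workflows/qa.yml; the filename is arbitrary), mirroring the question's workflow but with the QA branch and bucket:

name: Upload Website (QA)
on:
  push:
    branches:
      - QA
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v1
      - name: Configure AWS Credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-2
      - name: Deploy static site to S3 bucket
        run: aws s3 sync . s3://QA_Bucket --delete

Each file's on.push.branches filter decides which bucket a given push ends up in.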

Related

GitHub Actions workflow token not set error

Hello everyone, I am currently writing a workflow to auto-merge when a pull request is made, but I am stuck at an error telling me my token is not set; more specifically: 2023-02-19T02:09:08.581Z ERROR environment variable GITHUB_TOKEN not set!. I have set all my tokens in my repo's settings tab. Any help would be appreciated.
name: CI/CD
on:
  pull_request:
    branches: [ master ]
jobs:
  super-linter:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout code
        uses: actions/checkout@v2
      - name: Super-Linter
        uses: github/super-linter@v4.10.1
        with:
          files: ${{ join(github.event.pull_request.changed_files, ',') }}
  Merge:
    runs-on: ubuntu-latest
    needs: super-linter
    steps:
      - name: Checkout Code
        uses: actions/checkout@v2
      - name: Merge pull requests
        uses: pascalgn/automerge-action@v0.14.1
        with:
          GITHUB_TOKEN: ${{ secrets.TOKEN }}
  deploy:
    runs-on: self-hosted
    needs: Merge
    steps:
      # - uses: actions/checkout@v2  # this is used if you want to push all source code into the runner
      - name: update code base
        working-directory: /test_pipe/www/html
        run: sudo git pull origin master
      - name: restart
        working-directory: /test_pipe/www/html
        run: sudo systemctl restart nginx
pascalgn/automerge-action accepts GITHUB_TOKEN as an environment variable, not as an input. So it should be:
- name: Merge pull requests
  uses: pascalgn/automerge-action@v0.14.1
  env:
    GITHUB_TOKEN: ${{ secrets.TOKEN }}
Refer to the documentation: https://github.com/pascalgn/automerge-action#usage
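As an aside, if a personal access token is not specifically required, the token GitHub automatically provisions for every workflow run can be passed the same way (a sketch; note that pushes made with the default token will not trigger further workflow runs):

- name: Merge pull requests
  uses: pascalgn/automerge-action@v0.14.1
  env:
    GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}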

Cannot send GitHub Actions to a specific branch 'staging'

I've created a master.yml and a staging.yml to deploy my PHP backend with FTP deploy, but GitHub Actions isn't scheduling the job.
This is my staging.yml
on:
  push:
    branches: [staging]
name: Staging 🚀
jobs:
  web-deploy:
    name: 🎉 Deploy
    runs-on: ubuntu-latest
    steps:
      - name: 🚚 Get latest code
        uses: actions/checkout@v2
      - name: 📂 Sync files
        uses: SamKirkland/FTP-Deploy-Action@4.3.2
        with:
          server: ${{ secrets.ftp_host }}
          username: ${{ secrets.ftp_username }}
          password: ${{ secrets.ftp_password }}
          server-dir: staging/backend/

Getting "Unable to resolve action `my_repo/s3-sync-action@v0.5.1`, repository not found"

I am not a DevOps expert, so I'm not sure what this means exactly. I have a GitHub Actions pipeline, and when I try to run it I get the above error. My code is below.
Main.yml
deploy_dev:
  name: deploy_dev
  runs-on: ubuntu-latest
  container:
    image: some-image
  needs: build-dev
  steps:
    - name: "Checkout Repo"
      uses: actions/checkout@v2
    - name: artifacts
      uses: actions/upload-artifact@v2
      with:
        name: "upload artifacts"
        path: build/
    # S3 Deployment
    - name: S3 Deployment
      uses: my_repo/s3-sync-action@v0.5.1
      with:
        args: --acl public-read --follow-symlinks --delete
      env:
        AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID_DEV }}
        AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY_DEV }}
        AWS_S3_BUCKET: ${{ secrets.BUCKET_NAME_DEV }}
        DISTRIBUTION: ${{ secrets.CDN_DISTRIBUTN_ID_DEV }}
Any idea how I can fix this error? Thank you.
Maybe you have forked s3-sync-action, but your fork doesn't have the v0.5.1 tag. So I would suggest using main or master as the reference (depending on your repo's config), like:
# S3 Deployment
- name: S3 Deployment
  uses: my_repo/s3-sync-action@main
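Alternatively, if you would rather keep the version pin, you could create the missing tag in your fork yourself (a sketch, assuming you have push access to my_repo/s3-sync-action and that the missing tag really is the cause):

git clone https://github.com/my_repo/s3-sync-action
cd s3-sync-action
git tag v0.5.1          # tag the commit you want to pin to
git push origin v0.5.1

After that, uses: my_repo/s3-sync-action@v0.5.1 should resolve again.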

Caching artifacts in GitHub Actions using runner controller

I want to set up self-hosted runners on a k8s cluster using actions-runner-controller.
My question is: given that, per the official docs, persistent runners are not recommended —
Although not generally recommended, it's possible to disable the passing of the --ephemeral flag by explicitly setting ephemeral: false in the RunnerDeployment or RunnerSet spec. When disabled, your runner becomes "persistent".
— how can one leverage artifact caching when using this controller? Where will the cache content be stored in the k8s cluster, given that the containers are ephemeral?
If you are not using the enterprise version, the caches will be handled by GitHub itself. I ran into similar problems on my self-hosted runner when creating caches for Node.js, Vue.js, and Java. Here's what I did:
Vue.js (moving the dist folder; note the actions/upload-artifact@v3):
name: CI
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:
jobs:
  build-web:
    runs-on: self-hosted
    container:
      image: node:14
    steps:
      - uses: actions/checkout@v3
      - name: Build shc-web
        run: |
          yarn config set cache-folder .yarn
          yarn
          yarn run build
      - uses: actions/upload-artifact@v3
        with:
          name: dist-folder
          path: dist/
  registry-web:
    runs-on: self-hosted
    needs: ['build-web']
    steps:
      - uses: actions/checkout@v3
      - uses: actions/download-artifact@v3
        with:
          name: dist-folder
          path: dist/
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Registry on AWS repository
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: shccp
        run: |
          docker build -t $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID .
          docker push $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID
Also, I used two different jobs to handle the build. It could have been done in only one, removing the need to upload/download the dist. Actually, that is precisely what I had to do in the Node.js action: node_modules is just too big to be uploaded.
Node.js:
name: CI
on:
  push:
    branches: [ "stage" ]
  pull_request:
    branches: [ "stage" ]
  workflow_dispatch:
jobs:
  ci-api:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-node@v3
        with:
          node-version: 14
      - name: Build api
        run: npm install
      - name: Configure AWS
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-east-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Registry on AWS repository
        id: registry-aws
        env:
          REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          REPOSITORY: shcapi
        run: |
          docker build -t $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID .
          docker push $REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID
          echo "::set-output name=image-tag::$REGISTRY/$REPOSITORY:3.1.x-$GITHUB_RUN_ID"
No cache is needed once everything happens in a single job. That is a pretty nice feature of GitHub Actions, btw.
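If the install ever does need to be shared across jobs again, actions/cache would be the usual alternative to artifacts for node_modules; like artifacts, the cache is stored on GitHub's side rather than in the cluster. A sketch, assuming a yarn.lock at the repository root:

- uses: actions/checkout@v3
- uses: actions/cache@v3
  with:
    path: node_modules
    key: ${{ runner.os }}-node-${{ hashFiles('yarn.lock') }}
    restore-keys: |
      ${{ runner.os }}-node-
- run: yarn install   # fast when the cache above was restored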
The Java cache, on the other hand, is handled by the following action:
name: CI
on:
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]
  workflow_dispatch:
jobs:
  ci-etlv4:
    runs-on: self-hosted
    steps:
      - uses: actions/checkout@v3
      - uses: actions/setup-java@v3
        with:
          distribution: adopt-openj9
          java-version: 8
          cache: 'maven'
      - uses: stCarolas/setup-maven@v4.4
        with:
          maven-version: 3.8.2
      - name: Build ETLv4
        run: |
          echo ${{ secrets.SETTINGS_BASE64 }} | base64 -d > settings.xml
          mvn --settings settings.xml --global-settings settings.xml clean package -DskipTests=true
      - uses: docker/login-action@v2
        with:
          registry: "iad.ocir.io"
          username: ${{ secrets.OCI_REGISTRY_USER }}
          password: ${{ secrets.OCI_REGISTRY_PASSWORD }}
      - uses: docker/setup-qemu-action@v2
      - uses: docker/setup-buildx-action@v2
        with:
          driver: docker
      - uses: docker/build-push-action@v3
        with:
          context: .
          push: true
          tags: XXXXX
actions/setup-java@v3 can deal with the Maven/Gradle caches.
Hope it helps.

AWS CI/CD with GitHub Actions and CodeDeploy to an EC2 instance

I am trying to do CI/CD with GitHub Actions and AWS CodeDeploy to an EC2 instance.
I have one EC2 instance and three GitHub repositories (each repository has its own git flow as well).
name: Deployment
on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main
jobs:
  buildAndTest:
    name: CI Pipeline
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ '14.x' ]
    steps:
      - uses: actions/checkout@v2
      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      # Install project dependencies, test and build
      - name: Install dependencies
        run: yarn
      - name: Run build
        run: yarn build
  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: ['14.x']
        appname: ['app_name']
        deploy-group: ['group_name']
        region: ['region']
    needs: [buildAndTest]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v2
      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}
      # Step 1
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ matrix.region }}
      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.appname }} \
            --deployment-group-name ${{ matrix.deploy-group }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
It works fine when I push or open a pull request to a single repo, but when I push to two repos at once, i.e. push and deploy concurrently, only one succeeds and the other fails. This is my appspec.yml:
version: 0.0
os: linux
files:
  - source: .
    destination: /var/www/source
hooks:
  ApplicationStart:
    - location: deploy.sh  # runs yarn install and restarts the server
      timeout: 300
      runas: root
What is really curious is that, apart from the main location on the EC2 instance, some files (excluding the build output and such) belonging to the other two repos get removed???
I am using the same application and deployment group for all three repositories; is that a problem?
Any help would be super helpful :)
An AWS CodeDeploy deployment group cannot run two deployments at the same time.
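Since all three repositories deploy through the same application and deployment group, concurrent pushes race against each other, and each new deployment also cleans up the files installed by the previous revision of that group, which here came from a different repository; that would explain the disappearing files. One way out is a deployment group per repository (a sketch with hypothetical group names; each repo's appspec.yml should then also point at its own destination, e.g. /var/www/source/repo-one):

aws deploy create-deployment \
  --application-name app_name \
  --deployment-group-name group_name_repo_one \
  --deployment-config-name CodeDeployDefault.OneAtATime \
  --github-location repository=${{ github.repository }},commitId=${{ github.sha }}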