GitHub Actions Rollback Strategy

I am trying to create a rollback strategy for an ECS task that is managed with GitHub Actions. What I am trying to do is:
If the previous task definition's image is not found on ECR, decrement the revision number by 1 and check the next older task definition's image, until a valid image is found (an image tag, actually, but that doesn't matter).
If the previous task definition revision number is not found, check the one before it (previous revision number minus 1, as above) until a valid revision is found.
Given that goal: when the step with id: tag-checker hits its else block, I need to repeat all the steps from id: previous-revision-image-tag downwards until my if/else checks pass.
So how can I achieve this with GitHub Actions?
Basically, I want to repeat a step that I pick and all the steps below it.
name: AWS Rollback
on:
  workflow_dispatch:
env:
  AWS_REGION: "region"
  ECR_REPOSITORY: "nodejs-1"
  ECS_SERVICE: "nodejs-service"
  ECS_CLUSTER: "test-1"
  ECS_TASK_DEFINITION: ".aws/staging.paris.json"
  CONTAINER_NAME: "nodejs-test"
jobs:
  Rollback:
    name: "Rollback"
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ env.AWS_REGION }}
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Set Current Task Revision
        id: current-revision
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          IMAGE_TAG: ${{ steps.date.outputs.date }}-${{ steps.vars.outputs.sha_short }}
        run: |
          echo "REVISION_NUMBER=$(aws ecs describe-services --cluster ${{ env.ECS_CLUSTER }} --query "services[].taskDefinition" --services ${{ env.ECS_SERVICE }} --output text | cut -d: -f7)" >> $GITHUB_ENV
          echo "REVISION_NAME=$(aws ecs describe-services --cluster ${{ env.ECS_CLUSTER }} --query "services[].taskDefinition" --services ${{ env.ECS_SERVICE }} --output text | cut -d: -f1-6)" >> $GITHUB_ENV
      - name: Set Previous Task Revision Number
        id: previous-revision-number
        run: |
          echo "PREVIOUS_REVISION_NUMBER"=$((${{ env.REVISION_NUMBER }}-1)) >> $GITHUB_ENV
      - name: Set Previous Task Revision Image Tag
        id: previous-revision-image-tag
        env:
          PREVIOUS_REVISION_NUMBER: ${{ env.PREVIOUS_REVISION_NUMBER }}
        run: |
          echo "IMAGE_TAG"=$(aws ecs describe-task-definition --task-definition "${{ env.ECR_REPOSITORY }}:$PREVIOUS_REVISION_NUMBER" --query "taskDefinition.containerDefinitions[0].image" --output text | cut -d: -f2) >> $GITHUB_ENV
      - name: Check if previous revision image exists or not
        id: tag-checker
        env:
          IMAGE_TAG: ${{ env.IMAGE_TAG }}
        run: |
          if (aws ecr describe-images --repository-name=${{ env.ECR_REPOSITORY }} --image-ids=imageTag=$IMAGE_TAG &> /dev/null); then
            echo "Image Found"
          else
            echo 'Image is Not Found'
          fi
      - name: Check if previous task revision exists or not
        id: revision-checker
        env:
          PREVIOUS_REVISION_NUMBER: ${{ env.PREVIOUS_REVISION_NUMBER }}
        run: |
          if (aws ecs describe-task-definition --task-definition "${{ env.ECR_REPOSITORY }}:$PREVIOUS_REVISION_NUMBER" --output text &> /dev/null); then
            echo "Task definition Found"
          else
            echo 'Task definition not Found'
          fi
      # - name: Rollback to previous version
      #   id: rollback
      #   run: |
      #     aws ecs update-service --cluster ${{ env.ECS_CLUSTER }} --service ${{ env.ECS_SERVICE }} --task-definition ${{ env.REVISION_NAME }}:${{ env.PREVIOUS_REVISION_NUMBER }}

I have a solution for you that avoids walking back revisions and task definitions.
Let's say you have an ECR repo with the tags:
latest
v1.0.2
v1.0.1
v1.0.0
latest points to your latest version (v1.0.2).
Update your ECS task definition so that it always uses the latest tag.
When you want to roll back, you can do a small hack on ECR: repoint latest to v1.0.1, then force ECS to redeploy the service.
IMAGE_TAG_YOU_WANT_TO_DEPLOY="v1.0.1"
# fetch v1.0.1 manifest
MANIFEST=$(aws ecr batch-get-image --repository-name ${ECR_REPOSITORY} --image-ids imageTag=${IMAGE_TAG_YOU_WANT_TO_DEPLOY} --output json | jq --raw-output --join-output '.images[0].imageManifest')
# move latest tag pointer to v1.0.1
aws ecr put-image --repository-name ${ECR_REPOSITORY} --image-tag latest --image-manifest "$MANIFEST"
aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --force-new-deployment --region us-east-2
For a new deployment you create both image tags (v1.0.3 and latest) together and push both to ECR,
then just invoke update-service (the new latest is v1.0.3):
aws ecs update-service --cluster ${ECS_CLUSTER} --service ${ECS_SERVICE} --force-new-deployment --region us-east-2
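Alternatively, if you want to keep the original walk-back idea, the whole revision loop can live inside a single step's shell script instead of repeating steps. A rough, untested sketch, assuming the workflow-level env vars from the question (ECS_CLUSTER, ECS_SERVICE, ECR_REPOSITORY) and mirroring its commented-out rollback command:
- name: Find last revision with a valid image and roll back
  run: |
    # Start from the revision currently used by the service
    REVISION_NUMBER=$(aws ecs describe-services --cluster "$ECS_CLUSTER" --services "$ECS_SERVICE" --query "services[].taskDefinition" --output text | cut -d: -f7)
    REVISION_NAME=$(aws ecs describe-services --cluster "$ECS_CLUSTER" --services "$ECS_SERVICE" --query "services[].taskDefinition" --output text | cut -d: -f1-6)
    PREV=$((REVISION_NUMBER - 1))
    # Walk back one revision at a time until both the task definition and its image tag exist
    while [ "$PREV" -gt 0 ]; do
      IMAGE_TAG=$(aws ecs describe-task-definition --task-definition "$ECR_REPOSITORY:$PREV" --query "taskDefinition.containerDefinitions[0].image" --output text 2>/dev/null | cut -d: -f2)
      if [ -n "$IMAGE_TAG" ] && aws ecr describe-images --repository-name "$ECR_REPOSITORY" --image-ids imageTag="$IMAGE_TAG" > /dev/null 2>&1; then
        echo "Rolling back to revision $PREV (image tag $IMAGE_TAG)"
        aws ecs update-service --cluster "$ECS_CLUSTER" --service "$ECS_SERVICE" --task-definition "$REVISION_NAME:$PREV"
        exit 0
      fi
      PREV=$((PREV - 1))
    done
    echo "No previous revision with a valid image found" && exit 1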

Related

GitHub Actions pull request event is not running

I have a GitHub Actions workflow with Terraform and ECR/ECS. I have two branches, master and feature. When I create a pull request from feature to master, only my terraform plan code should run; but when I create a pull request and merge it to master, my GitHub Action runs and that part is skipped. I am not sure why this is happening. Please find the code attached below.
---
name: "workflow"
on:
# Triggers the workflow on push or pull request events but only for the "master" branch
push:
branches: [ "master" ]
# Allows you to run this workflow manually from the Actions tab
workflow_dispatch:
jobs:
cd:
name: "Deployment"
runs-on: "ubuntu-latest"
#if: startsWith(github.ref, 'refs/tags/')
steps:
- name: "Checkout Code"
uses: "actions/checkout#v2"
- name: Set tag
id: vars
run: echo "::set-output name=tag::${GITHUB_REF#refs/*/}"
- name: Configure AWS credential
uses: aws-actions/configure-aws-credentials#v1
with:
aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
aws-region: ${{ secrets.AWS_REGION }}
- name: Login to Amazon ECR
id: login-ecr
uses: aws-actions/amazon-ecr-login#v1
- name: Build, tag, and push image to Amazon ECR
id: build-image
env:
ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
ECR_REPOSITORY: my_ecr_repi
IMAGE_TAG: ${{ github.event.head_commit.message }}
run: |
docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
- name: Setup Terraform
uses: hashicorp/setup-terraform#v1
with:
cli_config_credentials_token: ${{ secrets.TF_API_TOKEN }}
- name: Terraform Init
run: |
cd terraform_with_ALB
terraform init
- name: Terraform Format
id: fmt
run: |
cd terraform_with_ALB
terraform fmt -check
- name: Terraform Validate
id: validate
run: |
cd terraform_with_ALB
terraform validate -no-color
- name: Terraform Plan
id: plan
if: github.event_name == 'pull_request'
run: |
cd terraform_with_ALB
terraform plan -no-color -input=false
continue-on-error: true
Up to terraform validate it is working fine; after that it skips the terraform plan part.
You are missing the pull_request element in the on section:
on:
  # Triggers the workflow on push or pull request events but only for the "master" branch
  push:
    branches: [ "master" ]
  pull_request:
    branches: [ "master" ]

Github Actions "unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials"

I have created a GitHub workflow to deploy to GCP, but when it comes to pushing the Docker image to GCP I get this error:
...
346fddbbb0ff: Waiting
a6fc7a8843ca: Waiting
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Error: Process completed with exit code 1.
Here is my YAML file:
name: Build for Dev
on:
  workflow_dispatch:
env:
  GKE_PROJECT: bi-dev
  IMAGE: gcr.io/bi-dev/bot-dev
  DOCKER_IMAGE_TAG: JAVA-${{ github.sha }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.commit_sha }}
      - name: Build Docker Image
        run: docker build -t ${{env.IMAGE}} .
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ env.GKE_PROJECT }}
          service_account_key: ${{ secrets.GKE_KEY }}
          export_default_credentials: true
      - name: Push Docker Image to GCP
        run: |
          gcloud auth configure-docker
          docker tag ${{env.IMAGE}} ${{env.IMAGE}}:${{env.DOCKER_IMAGE_TAG}}
          docker push ${{env.IMAGE}}:${{env.DOCKER_IMAGE_TAG}}
      - name: Update Deployment in GKE
        env:
          GKE_CLUSTER: bots-dev-test
          GKE_DEPLOYMENT: bot-dev
          GKE_CONTAINER: bot-dev
        run: |
          gcloud container clusters get-credentials ${{ env.GKE_CLUSTER }} --zone us-east1-b --project ${{ env.GKE_PROJECT }}
          kubectl set image deployment/$GKE_DEPLOYMENT ${{ env.GKE_CONTAINER }}=${{ env.IMAGE }}:${{ env.TAG }}
          kubectl rollout status deployment/$GKE_DEPLOYMENT
Surprisingly, when I run docker push manually it works fine.
I am also using a similar YAML file to push other projects and they work totally fine. It's just this GitHub Action that fails.
Any leads would be appreciated.
It turned out that I had missed a step and had not added the service account key to the GitHub Actions secrets, and that led to the failure of this particular action.
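For reference, the missing secret can be added in the repository settings (Settings → Secrets → Actions) or, assuming the GitHub CLI is available and key.json is a hypothetical filename for the downloaded service account key, with something like:
gh secret set GKE_KEY < key.json
The setup-gcloud step then picks it up via ${{ secrets.GKE_KEY }}, as in the workflow above.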

Share variables of GitHub Actions job to multiple subsequent jobs while retaining specific order

We have a GitHub Actions workflow consisting of 3 jobs:
provision-eks-with-pulumi: Provisions AWS EKS cluster (using Pulumi here)
install-and-run-argocd-on-eks: Installing & configuring ArgoCD using kubeconfig from job 1.
install-and-run-tekton-on-eks: Installing & running Tekton using kubeconfig from job 1., but depending on job 2.
We are already aware of this answer and the docs and use jobs.<job_id>.outputs to define the variable in job 1 and jobs.<job_id>.needs to use the variable in the subsequent jobs. But it only works for our job 2 and fails for job 3. Here's our workflow.yml:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
The first dependent job gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig, but the second dependent job does not (see this GitHub Actions log). We also don't want our job 3 to depend only on job 1, because then jobs 2 and 3 would run in parallel.
How can our job 3 run after job 2, but still use the kubeconfig variable from job 1?
That's easy, because a GitHub Actions job can depend on multiple jobs via the needs keyword. All you have to do in job 3 is use array notation, like needs: [job1, job2].
So for your workflow it will look like this:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...

GitHub Actions Error: Input required and not supplied: task-definition

on:
  push:
    branches:
      - soubhagya
name: Deploy to Amazon ECS
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: af-south-1
      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1
      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: new-cgafrica-backend
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"
      - name: Fill in the new image ID in the Amazon ECS task definition
        id: cgafrica-new-backend-task
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: cgafrica-backend-container
          image: ${{ steps.build-image.outputs.image }}
      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: cgafrica-backend-service
          cluster: cgafrica-backend-cluster
          wait-for-service-stability: true
Here is my YAML file code; please check it.
I have shared my task-definition.json and the GitHub Actions pipeline progress.
But I am getting the error Input required and not supplied: task-definition.
Please let me know what the issue is.
The problem is in the last step, Deploy Amazon ECS task definition.
The problematic part is ${{ steps.task-def.outputs.task-definition }}, which doesn't refer to an existing step; there is no step with id task-def.
For it to work it should be ${{ steps.cgafrica-new-backend-task.outputs.task-definition }}:
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.cgafrica-new-backend-task.outputs.task-definition }}
    service: cgafrica-backend-service
    cluster: cgafrica-backend-cluster
    wait-for-service-stability: true
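Equivalently, the render step's id could be changed to task-def so the original reference works; either way, the id and the steps.<id> reference just have to match. A sketch based on the question's own render step:
- name: Fill in the new image ID in the Amazon ECS task definition
  id: task-def
  uses: aws-actions/amazon-ecs-render-task-definition@v1
  with:
    task-definition: task-definition.json
    container-name: cgafrica-backend-container
    image: ${{ steps.build-image.outputs.image }}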

How to save the output of a bash command to output parameter in github actions

This is similar to what is being asked here, but with more explanation and the desire for an up-to-date answer (that answer uses set-env, which is now deprecated).
Say I have the following GitHub Actions YAML:
name: pull-request-pipeline
on: [pull_request]
jobs:
  deploy-to-dev-and-test:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout action
        uses: actions/checkout@v2
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: us-west-2
      - name: Update API gateway definitions
        run: |
          aws apigateway put-rest-api --rest-api-id xxxxxxxx --mode merge --body 'file://SummitApi.yaml'
      - name: Get and store current deploymentId
        run: |
          OLD_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id xxxxxxxx --stage-name dev --query deploymentId --output text)
          echo '::set-output name=OLD_DEPLOYMENT_ID::$OLD_DEPLOYMENT_ID'
        id: old-deployment-id
      - name: Echo old deployment before deploy
        run: |
          echo ${{ steps.old-deployment-id.outputs.OLD_DEPLOYMENT_ID }}
      - name: Deploy to dev stage
        run: |
          aws apigateway create-deployment --rest-api xxxxxxxx --stage-name dev
      - name: Echo old deployment id after deploy
        run: |
          echo ${{ steps.old-deployment-id.outputs.OLD_DEPLOYMENT_ID }}
When this runs it produces the following:
When taking another approach, specifically changing the Get and store current deploymentId step to:
- name: Get and store current deploymentId
  run: |
    OLD_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id xxxxxxxx --stage-name dev --query deploymentId --output text)
    echo '::set-output name=OLD_DEPLOYMENT_ID::$OLD_DEPLOYMENT_ID'
  id: old-deployment-id
I get the following:
It seems like it sets the value of the step output to the unevaluated text, either $OLD_DEPLOYMENT_ID or $(aws apigateway get-stage --rest-api-id xxxxxxxx --stage-name dev --query deploymentId --output text). I want to store the evaluated value, in this case the actual deployment ID returned by the command. Any ideas on how to do this with GitHub Actions?
References:
https://docs.github.com/en/free-pro-team@latest/actions/reference/workflow-commands-for-github-actions
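A hedged sketch of the usual fix: with single quotes the shell does not expand $OLD_DEPLOYMENT_ID, so the literal text is stored. Double quotes (or, on current runners, the $GITHUB_OUTPUT file, since ::set-output is now deprecated) store the evaluated value:
- name: Get and store current deploymentId
  id: old-deployment-id
  run: |
    OLD_DEPLOYMENT_ID=$(aws apigateway get-stage --rest-api-id xxxxxxxx --stage-name dev --query deploymentId --output text)
    # double quotes so the shell expands the variable before the output is written
    echo "::set-output name=OLD_DEPLOYMENT_ID::$OLD_DEPLOYMENT_ID"
    # equivalent on current runners, where ::set-output is deprecated:
    echo "OLD_DEPLOYMENT_ID=$OLD_DEPLOYMENT_ID" >> "$GITHUB_OUTPUT"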