I'm trying to deploy an image I built to an EKS Kubernetes cluster using GitHub Actions:
- name: Configure AWS credentials
  uses: aws-actions/configure-aws-credentials@v1
  with:
    aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
    aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
    aws-region: ${{ secrets.AWS_REGION }}
- name: Login to Amazon ECR
  id: login-ecr
  uses: aws-actions/amazon-ecr-login@v1
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
    RELEASE_IMAGE: docker.pkg.github.com/ahmedappout08/dockerwebapp/demo:${GITHUB_REF##*/}
  with:
    args: set image deployment/my-app app=${{ env.RELEASE_IMAGE }} --record -n lg-gulf-ka-robodesk
- name: verify deployment
  uses: kodermax/kubectl-aws-eks@master
  env:
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA }}
  with:
    args: rollout status deployment/my-app
But I got this error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any help to fix this? Thanks in advance.
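For reference, the localhost:8080 message usually means kubectl could not load any kubeconfig at all. Below is a minimal sketch of the deploy step, assuming (this is my assumption about the kodermax/kubectl-aws-eks action, not something stated in the question) that KUBE_CONFIG_DATA must hold a base64-encoded kubeconfig:

# A minimal sketch, assuming kodermax/kubectl-aws-eks expects KUBE_CONFIG_DATA to be
# a base64-encoded kubeconfig; an empty or malformed value leaves kubectl with no
# cluster configured, so it falls back to localhost:8080.
- name: deploy to cluster
  uses: kodermax/kubectl-aws-eks@master
  env:
    # Secret generated locally with something like: kubectl config view --raw | base64
    KUBE_CONFIG_DATA: ${{ secrets.KUBE_CONFIG_DATA_STAGING }}
  with:
    # Sanity check that the kubeconfig works before running set image
    args: get deployment my-app -n lg-gulf-ka-robodesk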
I am using GitHub Actions to push an image to GCP Artifact Registry and later deploy it to Cloud Run.
The whole process goes fine, except the automatic deployment to Cloud Run.
Below is the link to the example that guided me:
https://github.com/codeedu/live-imersao-fullcycle10-nestjs-tests/blob/main/.github/workflows/ci_cd.yml
The error is as below:
Deploying...
failed
Deployment failed
ERROR: (gcloud.run.deploy) spec.template.spec.containers[0].image: Must provide an image URL to deploy
I appreciate any help to accomplish this task.
Below is the workflow file:
name: CI and CD

on:
  workflow_dispatch:
  push:
    branches: [main, develop]

env:
  REGISTRY: gcr.io
  IMAGE_NAME: ${{ secrets.GCP_PROJECT_NAME }}/${{ secrets.CLOUD_RUN_SERVICE }}
  REGION: us-central1
  # REGISTRY_GIT: ghcr.io
  # IMAGE_NAME_GIT: ${{ github.repository }}

jobs:
  test-code:
    runs-on: ubuntu-20.04
    steps:
      - uses: actions/checkout@v3
      - name: Use Node.js 16.x
        uses: actions/setup-node@v3
        with:
          node-version: 16.x
      - run: npm ci
      - run: npm run test

  build-image:
    needs: test-code
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-20.04
    outputs:
      tags: ${{ steps.meta.outputs.tags }}
    concurrency: build-image-process
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      # Workaround: https://github.com/docker/build-push-action/issues/461
      - name: Setup Docker buildx
        uses: docker/setup-buildx-action@79abd3f86f79a9d68a23c75a09a9a85889262adf

      # Login against a Docker registry except on PR
      # https://github.com/docker/login-action
      - name: Log into registry ${{ env.REGISTRY }}
        if: github.event_name != 'pull_request'
        uses: docker/login-action@28218f9b04b4f3f62068d7b6ce6ca5b26e35336c
        with:
          registry: ${{ env.REGISTRY }}
          username: _json_key
          #username: ${{ github.actor }}
          password: ${{ secrets.GCP_SERVICE_ACCOUNT }}
          #password: ${{ secrets.GITHUB_TOKEN }}

      # Extract metadata (tags, labels) for Docker
      # https://github.com/docker/metadata-action
      - name: Extract Docker metadata
        id: meta
        uses: docker/metadata-action@98669ae865ea3cffbcbaa878cf57c20bbf1c6c38
        with:
          images: ${{ env.REGISTRY }}/${{ env.IMAGE_NAME }}

      # Build and push Docker image with Buildx (don't push on PR)
      # https://github.com/docker/build-push-action
      - name: Build and push Docker image
        id: build-and-push
        uses: docker/build-push-action@ac9327eae2b366085ac7f6a2d02df8aa8ead720a
        if: ${{ github.event_name != 'pull_request' }}
        with:
          context: .
          file: ./Dockerfile.prod
          push: true
          tags: ${{ steps.meta.outputs.tags }}
          labels: ${{ steps.meta.outputs.labels }}
          cache-from: type=gha
          cache-to: type=gha,mode=max

      - name: Outputs tags
        run: echo "${{ steps.meta.outputs.tags }}"

  deploy-image:
    needs: build-image
    if: github.ref == 'refs/heads/main'
    runs-on: ubuntu-20.04
    steps:
      - name: Checkout repository
        uses: actions/checkout@v3

      - id: 'auth'
        uses: 'google-github-actions/auth@v0'
        with:
          credentials_json: '${{ secrets.GCP_SERVICE_ACCOUNT }}'

      - name: 'Deploy to Cloud Run'
        uses: 'google-github-actions/deploy-cloudrun@v0'
        with:
          service: ${{ secrets.CLOUD_RUN_SERVICE }}
          image: ${{ needs.build-image.outputs.tags }}
          region: ${{ env.REGION }}
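As a hedged debugging sketch (my addition, not part of the original post): one quick check is to print the value handed to deploy-cloudrun inside the deploy-image job. If the build-image job output arrives empty, the gcloud error above ("Must provide an image URL to deploy") is exactly what you would see.

      # Hedged sketch: confirm the image reference actually crosses the job boundary
      # before the deploy step runs.
      - name: Show image passed from build-image
        env:
          IMAGE_FROM_BUILD: ${{ needs.build-image.outputs.tags }}
        run: |
          echo "image from build-image outputs:"
          echo "$IMAGE_FROM_BUILD"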
I am working on syncing my GitHub repo with an S3 bucket, and I don't want to pass my AWS credentials as GitHub secrets. I already tried passing my credentials through GitHub secrets and the code works. However, when I try to get GitHub to assume a role to perform the operations, I keep getting errors. Please see the code and images below.
GitHub main.yml
name: Upload Website

on:
  push:
    branches:
      - main

jobs:
  deploy:
    runs-on: ubuntu-latest
    permissions:
      id-token: write
      contents: read
    steps:
      - name: Git checkout
        uses: actions/checkout@v3

      - name: Configure AWS credentials from AWS account
        uses: aws-actions/configure-aws-credentials@v1
        with:
          role-to-assume: ${{ secrets.AWS_ROLE }}
          aws-region: ${{ secrets.AWS_REGION }}
          role-session-name: GitHub-OIDC-frontend

      - uses: actions/checkout@master

      - uses: jakejarvis/s3-sync-action@master
        with:
          args: --follow-symlinks --exclude '.git/*' --exclude '.github/*'
        env:
          AWS_REGION: ${{ secrets.AWS_REGION }}
          AWS_S3_BUCKET: ${{ secrets.AWS_S3_BUCKET }}

      - name: Invalidate CloudFront
        uses: chetan/invalidate-cloudfront-action@v2
        env:
          DISTRIBUTION: ${{ secrets.AWS_CF_DISTRIBUTION_ID }}
          PATHS: "/index.html"
AWS ROLE POLICY
{
  "Version": "2008-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Principal": {
        "Federated": "arn:aws:iam::************:oidc-provider/token.actions.githubusercontent.com"
      },
      "Action": "sts:AssumeRoleWithWebIdentity",
      "Condition": {
        "StringEquals": {
          "token.actions.githubusercontent.com:sub": [
            "repo:ACCOUNT_ID/REPO_NAME:*",
            "repo:ACCOUNT_ID/REPO_NAME:*"
          ],
          "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
        }
      }
    }
  ]
}
GITHUB ERROR
Run aws-actions/configure-aws-credentials@v1
  with:
    role-to-assume: ***
    aws-region: ***
    role-session-name: GitHub-OIDC-frontend
    audience: sts.amazonaws.com
Error: Not authorized to perform sts:AssumeRoleWithWebIdentity
Did you set the claim_keys via the GitHub REST API?
If you are using the GitHub CLI, it looks something like this:
gh api /repos/ACCOUNT_ID/REPO_NAME/actions/oidc/customization/sub --method PUT --input ./body.txt
where body.txt looks like:
{"use_default":false,"include_claim_keys":["repo"]}
I'm also curious whether there is an issue with your token.actions.githubusercontent.com:sub values. Is that star just explicitly allowing any other claims in? You may want (or need) to knock that down to just repo:ACCOUNT_ID/REPO_NAME.
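A hedged sketch of that tightening, assuming the intent is to allow any ref in that one repository (my assumption, not the poster's confirmed setup). Note that a * wildcard in the sub value only matches under a StringLike condition, never under StringEquals, so the narrowed condition block might look like this:

{
  "Condition": {
    "StringLike": {
      "token.actions.githubusercontent.com:sub": "repo:ACCOUNT_ID/REPO_NAME:*"
    },
    "StringEquals": {
      "token.actions.githubusercontent.com:aud": "sts.amazonaws.com"
    }
  }
}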
Why not create a user? Here is a solution to a problem similar to the one described.
# Workflow name
name: S3 Deploy

on:
  workflow_dispatch:
  push:
    paths:
      - 'app/**'
      - '.github/workflows/deploy.yml'

jobs:
  build-and-deploy:
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      BUCKET_NAME: caiogomes.me
    steps:
      - name: Install hugo
        run: sudo apt install hugo
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Checkout repository
        uses: actions/checkout@v3
        with:
          submodules: 'true'
      - name: Build
        run: cd app/ && hugo
      - name: Upload files to S3
        run: aws s3 sync app/public/ s3://${{ env.BUCKET_NAME }}/ --exact-timestamps --delete

  create-cloudfront-invalidation:
    needs: build-and-deploy
    runs-on: ubuntu-latest
    env:
      AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
      AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
      AWS_DEFAULT_REGION: sa-east-1
      CLOUDFRONT_DISTRIBUTION_ID: ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }}
    steps:
      - name: Install aws cli
        id: install-aws-cli
        uses: unfor19/install-aws-cli-action@v1
        with:
          version: 2
          verbose: false
          arch: amd64
          rootdir: ""
          workdir: ""
      - name: Set AWS credentials
        run: export AWS_ACCESS_KEY_ID=${{ secrets.AWS_ACCESS_KEY_ID }} && export AWS_SECRET_ACCESS_KEY=${{ secrets.AWS_SECRET_ACCESS_KEY }}
      - name: Invalidate cloudfront distribution
        run: aws cloudfront create-invalidation --distribution-id ${{ secrets.CLOUDFRONT_DISTRIBUTION_ID }} --paths "/*"
Here is the repo: https://github.com/caiocsgomes/caiogomes.me
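If you do go the IAM-user route, here is a hedged sketch of the minimal permissions that user would need for the sync and invalidation above. The bucket name mirrors the workflow's BUCKET_NAME; treat the resource ARNs as illustrative placeholders for your own account:

{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::caiogomes.me"
    },
    {
      "Effect": "Allow",
      "Action": ["s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::caiogomes.me/*"
    },
    {
      "Effect": "Allow",
      "Action": ["cloudfront:CreateInvalidation"],
      "Resource": "*"
    }
  ]
}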
I have the following job definition:
- uses: actions/checkout@v2
- uses: azure/login@v1
  with:
    creds: ${{ secrets.BETA_AZURE_CREDENTIALS }}
- uses: azure/docker-login@v1
  with:
    login-server: ${{ secrets.BETA_ACR_SERVER }}
    username: ${{ secrets.BETA_ACR_USERNAME }}
    password: ${{ secrets.BETA_ACR_PASSWORD }}
- run: docker build -f .ops/account.dockerfile -t ${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }} -t ${{ secrets.BETA_ACR_SERVER }}/account:latest .
  working-directory: ./Services
- run: docker push ${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }}
- uses: azure/setup-kubectl@v2.0
- uses: azure/aks-set-context@v2.0
  with:
    resource-group: ${{ secrets.BETA_RESOURCE_GROUP }}
    cluster-name: ${{ secrets.BETA_AKS_CLUSTER }}
- run: kubectl -n pltfrmd set image deployments/account account=${{ secrets.BETA_ACR_SERVER }}/account:${{ github.sha }}
The Docker part works fine and it pushes to ACR without issue.
But even though aks-set-context works, the kubectl run step doesn't execute and just hangs, waiting for an interactive login prompt.
What am I doing wrong? How do I get GitHub Actions to let me execute the kubectl command properly?
Setting 'admin' to true worked for me.
- uses: azure/aks-set-context@v2.0
  with:
    resource-group: ${{ secrets.BETA_RESOURCE_GROUP }}
    cluster-name: ${{ secrets.BETA_AKS_CLUSTER }}
    admin: true
I am trying to do CI/CD with GitHub Actions and AWS CodeDeploy to an EC2 instance.
I have one EC2 instance and three GitHub repositories (each repository has its own git flow as well).
name: Deployment

on:
  push:
    branches:
      - main
  pull_request:
    branches:
      - main

jobs:
  buildAndTest:
    name: CI Pipeline
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: [ '14.x' ]
    steps:
      - uses: actions/checkout@v2

      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}

      # Install project dependencies, test and build
      - name: Install dependencies
        run: yarn
      - name: Run build
        run: yarn build

  deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        node-version: ['14.x']
        appname: ['app_name']
        deploy-group: ['group_name']
        region: ['region']
    needs: [buildAndTest]
    if: github.ref == 'refs/heads/main'
    steps:
      - uses: actions/checkout@v2

      # Initialize Node.js
      - name: Install Node.js ${{ matrix.node-version }}
        uses: actions/setup-node@v1
        with:
          node-version: ${{ matrix.node-version }}

      # Step 1
      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ matrix.region }}

      # Step 2
      - name: Create CodeDeploy Deployment
        id: deploy
        run: |
          aws deploy create-deployment \
            --application-name ${{ matrix.appname }} \
            --deployment-group-name ${{ matrix.deploy-group }} \
            --deployment-config-name CodeDeployDefault.OneAtATime \
            --github-location repository=${{ github.repository }},commitId=${{ github.sha }}
It works fine when I push or open a pull request to one repo, but when I push to two repos at once, which means I push and deploy concurrently, only one deployment succeeds and the other one fails.
version: 0.0
os: linux
files:
  - source: .
    destination: /var/www/source
hooks:
  ApplicationStart:
    - location: deploy.sh   # yarn install and restart the server
      timeout: 300
      runas: root
What is really curious is that, apart from the main location (on EC2), some files in the other two repos (excluding the build output and such) get removed.
I am using the same CodeDeploy application and deployment group for all three repositories. Is that a problem?
Any help would be super helpful :)
An AWS CodeDeploy deployment group cannot run two deployments at the same time.
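A hedged sketch of one way to respect that limit (my own suggestion, not part of the original answer): have each repository's deploy job poll the shared deployment group and wait until nothing is in progress before calling create-deployment. The statuses and flags are standard aws deploy CLI options; the matrix values are the poster's placeholders.

      # Hedged sketch: wait until the shared deployment group is idle before deploying.
      - name: Wait for in-progress CodeDeploy deployments
        run: |
          while aws deploy list-deployments \
              --application-name ${{ matrix.appname }} \
              --deployment-group-name ${{ matrix.deploy-group }} \
              --include-only-statuses Created Queued InProgress \
              --query 'deployments[0]' --output text | grep -qv '^None$'; do
            echo "Another deployment is running against this group; retrying in 30s..."
            sleep 30
          done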
on:
  push:
    branches:
      - soubhagya

name: Deploy to Amazon ECS

jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-latest
    environment: production
    steps:
      - name: Checkout
        uses: actions/checkout@v2

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: af-south-1

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        id: build-image
        env:
          ECR_REGISTRY: ${{ steps.login-ecr.outputs.registry }}
          ECR_REPOSITORY: new-cgafrica-backend
          IMAGE_TAG: ${{ github.sha }}
        run: |
          # Build a docker container and
          # push it to ECR so that it can
          # be deployed to ECS.
          docker build -t $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG .
          docker push $ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG
          echo "::set-output name=image::$ECR_REGISTRY/$ECR_REPOSITORY:$IMAGE_TAG"

      - name: Fill in the new image ID in the Amazon ECS task definition
        id: cgafrica-new-backend-task
        uses: aws-actions/amazon-ecs-render-task-definition@v1
        with:
          task-definition: task-definition.json
          container-name: cgafrica-backend-container
          image: ${{ steps.build-image.outputs.image }}

      - name: Deploy Amazon ECS task definition
        uses: aws-actions/amazon-ecs-deploy-task-definition@v1
        with:
          task-definition: ${{ steps.task-def.outputs.task-definition }}
          service: cgafrica-backend-service
          cluster: cgafrica-backend-cluster
          wait-for-service-stability: true
Here is my YAML workflow code; please check it.
I have shared my task-definition.json and the GitHub Actions pipeline progress.
But I am getting the error: Input required and not supplied: task-definition
Please let me know what the issue is here.
The problem is in the last step, Deploy Amazon ECS task definition.
The problematic part is ${{ steps.task-def.outputs.task-definition }}, which doesn't refer to an existing step. There is no step with the id task-def.
In order to work, it should be ${{ steps.cgafrica-new-backend-task.outputs.task-definition }}:
- name: Deploy Amazon ECS task definition
  uses: aws-actions/amazon-ecs-deploy-task-definition@v1
  with:
    task-definition: ${{ steps.cgafrica-new-backend-task.outputs.task-definition }}
    service: cgafrica-backend-service
    cluster: cgafrica-backend-cluster
    wait-for-service-stability: true
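If you want to double-check the wiring, a small hedged debugging step (my addition) placed between the render and deploy steps prints what the render step emitted, so an empty value shows up before the deploy action complains:

# Hedged sketch: confirm the render step's output is populated before deploying.
- name: Show rendered task definition path
  run: echo "rendered file: ${{ steps.cgafrica-new-backend-task.outputs.task-definition }}"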