Github Actions "unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials" - github

I have created a GitHub workflow to deploy to GCP, but when it comes to pushing the Docker image to GCP I get this error:
...
346fddbbb0ff: Waiting
a6fc7a8843ca: Waiting
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Error: Process completed with exit code 1.
Here is my YAML file:
name: Build for Dev
on:
  workflow_dispatch:
env:
  GKE_PROJECT: bi-dev
  IMAGE: gcr.io/bi-dev/bot-dev
  DOCKER_IMAGE_TAG: JAVA-${{ github.sha }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.commit_sha }}
      - name: Build Docker Image
        run: docker build -t ${{ env.IMAGE }} .
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ env.GKE_PROJECT }}
          service_account_key: ${{ secrets.GKE_KEY }}
          export_default_credentials: true
      - name: Push Docker Image to GCP
        run: |
          gcloud auth configure-docker
          docker tag ${{ env.IMAGE }} ${{ env.IMAGE }}:${{ env.DOCKER_IMAGE_TAG }}
          docker push ${{ env.IMAGE }}:${{ env.DOCKER_IMAGE_TAG }}
      - name: Update Deployment in GKE
        env:
          GKE_CLUSTER: bots-dev-test
          GKE_DEPLOYMENT: bot-dev
          GKE_CONTAINER: bot-dev
        run: |
          gcloud container clusters get-credentials ${{ env.GKE_CLUSTER }} --zone us-east1-b --project ${{ env.GKE_PROJECT }}
          kubectl set image deployment/$GKE_DEPLOYMENT ${{ env.GKE_CONTAINER }}=${{ env.IMAGE }}:${{ env.TAG }}
          kubectl rollout status deployment/$GKE_DEPLOYMENT
Surprisingly, when I manually run docker push it works fine.
I am also using a similar YAML file to push other projects and they work totally fine. It's just this GitHub Action that fails.
Any leads would be appreciated.

It turned out I had missed a step: I had not added the service account key to the repository Secrets for GitHub Actions, and that led to the failure of this particular workflow.
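For anyone hitting the same thing, here is a minimal sketch of how to wire the secret up, assuming the GKE_KEY name used in the workflow above and a placeholder service account name: export a JSON key for the service account and store it as a repository secret with the GitHub CLI.

# "deployer" is a placeholder service account name for this sketch
gcloud iam service-accounts keys create key.json \
  --iam-account=deployer@bi-dev.iam.gserviceaccount.com
# store the key under the GKE_KEY secret name that setup-gcloud reads
gh secret set GKE_KEY < key.json

The service account itself also needs permission to push to GCR (for example a Storage role on the project's registry bucket).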

Related

GitHub Actions - Invalid workflow file

I am trying to build CI/CD pipelines using GitHub Actions, but unfortunately I am stuck with an error in the YAML file.
Here is my YAML file:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs_on: unbuntu-latest
      env:
        IMAGE_NAME: learning_cicd
        PROJECT_ID: personal-370316
    steps:
      - name: Checkoutstep
        uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ env.PROJECT_ID }}
          export_default_credentials: true
      - name: Build Docker Image
        run: docker build -t $IMAGE_NAME:latest .
      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
      - name: Push Docker Image to Container Registry (GCR)
        env:
          GIT_TAG: v0.1.0
        run: |-
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
Here is the error I am stuck with:
GitHub Actions
/ .github/workflows/gcp.yaml
Invalid workflow file
You have an error in your yaml syntax on line 15
I tried all possible indentations I could find on the internet but had no luck. I also ran the file through a YAML linter but still could not find where the error comes from. Please point me to where I am going wrong.
Thanks.
The key should be runs-on (not runs_on), indented two spaces relative to the job identifier. Also, the OS should be ubuntu-latest (not unbuntu-latest).
Then, env should have the same indentation as runs-on and name, and the same goes for steps.
Here is the corrected workflow:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: learning_cicd
      PROJECT_ID: personal-370316
    steps:
      - name: Checkoutstep
        uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY }}
          project_id: ${{ env.PROJECT_ID }}
          export_default_credentials: true
      - name: Build Docker Image
        run: docker build -t $IMAGE_NAME:latest .
      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
      - name: Push Docker Image to Container Registry (GCR)
        env:
          GIT_TAG: v0.1.0
        run: |-
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
I would recommend debugging such issues in the GitHub file editor (editing the .yml file directly in the .github/workflows directory); it highlights all workflow syntax problems. Demo.
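For a local check before pushing, keep in mind that a generic YAML linter only validates YAML syntax, so it will not flag keys such as runs_on; a workflow-aware linter like actionlint (assuming you have it installed) catches both kinds of problems:

actionlint .github/workflows/gcp.yaml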

How to pull a private image from Docker Hub using GitHub Actions

I have a workflow where I need to pull an image from a private repository on Docker Hub. My job is the following:
run-flake8:
  name: Run Flake 8
  runs-on: "ubuntu-20.04"
  needs: [django-dev-image]
  container:
    image: docker://index.docker.io/v1/<repository_name>/<image_name>:latest
    credentials:
      username: ${{ secrets.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_PASSWORD }}
  steps:
    - name: Print something
      run: echo "Testing flake8 job"
The job fails with repository does not exist or may require 'docker login': denied: requested access to the resource is denied. I feel like my Docker Hub registry URL is wrong, but I can't figure out what the correct one is. Any help is more than appreciated.
Thank you all.
When referring to Docker Hub, you should not need to specify the registry host at all:
run-flake8:
  name: Run Flake 8
  runs-on: "ubuntu-20.04"
  needs: [django-dev-image]
  container:
    image: <repository_name>/<image_name>:latest
    credentials:
      username: ${{ secrets.DOCKERHUB_USERNAME }}
      password: ${{ secrets.DOCKERHUB_PASSWORD }}
  steps:
    - name: Print something
      run: echo "Testing flake8 job"

Getting Access Denied for S3 when using GitHub Actions but not for Terminal

I am able to access the bucket (SG) from my command line. However, when I run the same sync in GitHub Actions, I get an access denied error. I have set the keys three times, so I know I am not using the wrong keys, and I know I have the correct permissions because it works in the terminal.
Some notes: the SG bucket is in a different region than mine. Also, the SG bucket originally had a different endpoint, and I need to ensure that it is pointing to the correct endpoint (https://s3.amazonaws.com). Currently I have the endpoint as a variable but do not set it.
Is there something wrong with my workflow YAML?
jobs:
  Sync:
    runs-on: ubuntu-latest
    env:
      SG_S3_ENDPOINT: https://s3.amazonaws.com
      SG2_ACCESS_KEY_ID: ${{ secrets.SG2_ACCESS_KEY_ID }}
      SG2_SECRET_ACCESS_KEY: ${{ secrets.SG2_SECRET_ACCESS_KEY }}
      AWS_REGION_NAME: ${{ secrets.AWS_REGION_NAME }}
    steps:
      - uses: actions/checkout@v2
      - name: install dependencies ...
        run: |
          pip3 install -r requirements.txt
      - name: Sync Weekly Patterns
        run: |
          aws configure set aws_access_key_id $SG2_ACCESS_KEY_ID
          aws configure set aws_secret_access_key $SG2_SECRET_ACCESS_KEY
          aws configure set region $AWS_REGION_NAME
          aws s3 sync s3://sg-places-outgoing/my_org/weekly/ s3://sg-my-org/weekly-patterns/
I think my problem was that my environment variables were not using the AWS-recognized names, even though I was setting them via aws configure.
Sync:
  runs-on: ubuntu-latest
  env:
    AWS_REGION_NAME: ${{ secrets.AWS_REGION_NAME }}
    AWS_ACCESS_KEY_ID: ${{ secrets.SG_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SG_SECRET_ACCESS_KEY }}
    AWS_S3_ENDPOINT: https://s3.amazonaws.com
  steps:
    - uses: actions/checkout@v2
    - name: install dependencies ...
      run: |
        pip3 install -r requirements.txt
    - name: Sync Weekly Patterns
      run: |
        aws configure set aws_access_key_id $AWS_ACCESS_KEY_ID
        aws configure set aws_secret_access_key $AWS_SECRET_ACCESS_KEY
        aws configure set region $AWS_REGION_NAME
        aws s3 sync s3://safegraph-places-outgoing/nyc_gov/weekly/ s3://safegraph-post-rdp/weekly-patterns/
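Since the AWS CLI reads AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY and AWS_DEFAULT_REGION directly from the environment, the aws configure set lines may not even be needed once the standard names are used. A sketch of the same job under that assumption (note AWS_DEFAULT_REGION in place of the custom AWS_REGION_NAME):

Sync:
  runs-on: ubuntu-latest
  env:
    AWS_ACCESS_KEY_ID: ${{ secrets.SG_ACCESS_KEY_ID }}
    AWS_SECRET_ACCESS_KEY: ${{ secrets.SG_SECRET_ACCESS_KEY }}
    AWS_DEFAULT_REGION: ${{ secrets.AWS_REGION_NAME }}
  steps:
    - uses: actions/checkout@v2
    - name: Sync Weekly Patterns
      # the CLI picks up credentials and region from the env vars above
      run: aws s3 sync s3://safegraph-places-outgoing/nyc_gov/weekly/ s3://safegraph-post-rdp/weekly-patterns/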

Share variables of a GitHub Actions job with multiple subsequent jobs while retaining a specific order

We have a GitHub Actions workflow consisting of 3 jobs:
1. provision-eks-with-pulumi: provisions the AWS EKS cluster (using Pulumi here)
2. install-and-run-argocd-on-eks: installs & configures ArgoCD using the kubeconfig from job 1
3. install-and-run-tekton-on-eks: installs & runs Tekton using the kubeconfig from job 1, but depending on job 2
We are already aware of this answer and the docs, and we use jobs.<job_id>.outputs to define the variable in job 1 and jobs.<job_id>.needs together with the needs context to read it in the subsequent jobs. But it only works for job 2 and fails for job 3. Here's our workflow.yml:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
The first dependent job (job 2) gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig, but the second one (job 3) does not (see this GitHub Actions log). We also don't want job 3 to depend only on job 1, because then jobs 2 and 3 would run in parallel.
How can job 3 run after job 2 but still use the kubeconfig variable from job 1?
That's easy: a GitHub Actions job can depend on multiple jobs via the needs keyword. All you have to do in job 3 is use the array notation needs: [job1, job2].
So for your workflow it will look like this:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
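As a side note, the ::set-output workflow command used above has since been deprecated in favour of writing to $GITHUB_OUTPUT. Because a kubeconfig spans multiple lines, the delimiter form is needed; a sketch of just that part of the pulumi-up step, keeping the rest unchanged:

# multiline output via the GITHUB_OUTPUT delimiter syntax
{
  echo "kubeconfig<<EOF"
  pulumi stack output kubeconfig
  echo "EOF"
} >> "$GITHUB_OUTPUT"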

Running a Docker image via SSH in GitHub Actions

I'm currently trying to make GitHub Actions/CI SSH into my VPS and run a Docker image. The main problem is that the job doesn't finish after running the final command.
This is my YML file:
name: SSH & Deploy Image
on:
  workflow_run:
    workflows: ["Timmy Docker Build"]
    branches: [ main ]
    types:
      - completed
jobs:
  build:
    name: Build
    runs-on: ubuntu-latest
    steps:
      - name: Run Docker CMD
        uses: appleboy/ssh-action@master
        with:
          host: ${{ secrets.HOST }}
          username: ${{ secrets.USERNAME }}
          password: ${{ secrets.PASSWORD }}
          port: ${{ secrets.PORT }}
          script: |
            docker stop ss-timmy && docker rm ss-timmy
            docker pull spaceturtle0/ss-timmy:latest
            docker run --env-file=Timmy-SchoolSimplified/.env spaceturtle0/ss-timmy &
Even though I put the & at the end of the final command, the job just hangs until the process is killed. Is there something to fix this?
You should use the -d (detached) flag instead of the & for the last docker command. The full command will be:
docker run -d --env-file=Timmy-SchoolSimplified/.env spaceturtle0/ss-timmy
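Applied to the workflow above, the script block becomes the following sketch (the || true on the first line is an optional extra, not from the original, so the step does not fail when the container does not exist yet):

script: |
  docker stop ss-timmy && docker rm ss-timmy || true
  docker pull spaceturtle0/ss-timmy:latest
  docker run -d --env-file=Timmy-SchoolSimplified/.env spaceturtle0/ss-timmy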