How do you set KUBECONFIG to connect to Azure Kubernetes Service from a GitHub Actions deployment?

I am trying to set up a deployment pipeline to configure an Azure Kubernetes Service cluster from GitHub Actions. I have found actions on the GitHub Actions Marketplace for the various steps, however I cannot get any combination of them to work correctly. I keep getting errors like
error loading config file "/home/runner/work/_temp/kubeconfig_xxxx": yaml: did not find expected key
or similar errors like
error loading config file: couldn't get version/kind; json parse error: json: cannot unmarshal array into Go value of type struct { APIVersion string `json:"apiVersion,omitempty"`; Kind string `json:"kind,omitempty"` }
depending on how I try to pass the kube_config output from Terraform. If I run the same steps locally it works, so I am assuming there is something wrong with how it is set up on GitHub Actions.
Here is my deployment file:
name: Deploy
on:
  workflow_dispatch:
    inputs:
      <redacted>
jobs:
  deploy:
    name: Deploy
    runs-on: ubuntu-18.04
    env:
      <redacted>
    defaults:
      run:
        shell: bash
    steps:
      - uses: actions/checkout@v2
      - uses: azure/login@v1.1
        with:
          creds: ${{ <redacted> }}
      - name: Generate Terraform backend
        uses: azure/cli@v1.0.3
        with:
          azcliversion: 2.11.1
          inlineScript: |
            <redacted>
      - uses: hashicorp/setup-terraform@v1.1.0
        with:
          terraform_version: 0.13.0
      - name: Terraform Init
        run: |
          terraform init
      - name: Terraform Plan
        run: |
          terraform plan \
            <redacted>
            -out=tfplan
      - name: Terraform Apply
        run: |
          terraform apply \
            -auto-approve \
            tfplan
      - uses: azure/setup-kubectl@v1
        with:
          version: 'v1.19.2'
      - uses: azure/setup-helm@v1
        with:
          version: 'v3.3.1'
      - name: Save Config
        run: |
          terraform output kube_config > ./aks.yml
      - name: Set Env
        run: |
          echo ::set-env name=XXX::$(cat ./aks.yml)
      - uses: azure/k8s-set-context@v1
        with:
          method: kubeconfig
          kubeconfig: "${{ env.XXX }}"
      - name: Test
        run: |
          kubectl get pods -o wide
I have also tried setting KUBECONFIG and running kubectl get pods in a single bash step, and it fails in the same way. Any ideas what I am missing? Thanks in advance!
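For reference, here is a minimal sketch of an alternative wiring that may avoid the multiline/quoting pitfalls (untested; it assumes the kube_config output is a plain kubeconfig string and a Terraform version that supports output -raw, i.e. 0.14+, whereas the workflow above pins 0.13.0). Note that echo ::set-env name=XXX::$(cat ./aks.yml) collapses the multiline file onto a single line through shell word splitting, which by itself can produce the "did not find expected key" error:
      - name: Save Config
        run: |
          # -raw prints the bare string; without it, some Terraform versions
          # wrap the value in quotes or heredoc markers that kubectl cannot parse
          terraform output -raw kube_config > ./aks.yml
      - name: Test
        env:
          # point kubectl directly at the file instead of round-tripping the
          # multiline config through set-env
          KUBECONFIG: ${{ github.workspace }}/aks.yml
        run: |
          kubectl get pods -o wide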

Related

Terraform Kubernetes provider fails on GitHub Actions with: 'config_path' refers to an invalid path: "/github/home/.kube/config"

I am trying to create a CI build in GitHub Actions for a Kubernetes deployment with Terraform on Minikube. The Terraform apply fails when configuring the provider with the following message:
Invalid attribute in provider configuration
  with provider["registry.terraform.io/hashicorp/kubernetes"],
  on providers.tf line 18, in provider "kubernetes":
  18: provider "kubernetes" {
'config_path' refers to an invalid path: "/github/home/.kube/config": stat /github/home/.kube/config: no such file or directory
How can I resolve it? I have tried various approaches but so far nothing works. Everything works fine when I deploy it locally with Minikube.
Relevant code snippets from Terraform:
variables.tf:
variable "kube_config" {
type = string
default = "~/.kube/config"
}
providers.tf:
provider "kubernetes" {
config_path = pathexpand(var.kube_config)
config_context = "minikube"
}
Github Actions job:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup minikube
        uses: manusa/actions-setup-minikube@v2.7.2
        with:
          minikube version: 'v1.28.0'
          kubernetes version: 'v1.25.4'
          github token: ${{ secrets.GITHUB_TOKEN }}
          driver: docker
          container runtime: docker
      - name: terraform-apply
        uses: dflook/terraform-apply@v1.29.1
        with:
          path: terraform-k8s
          auto_approve: true
I have also tried running it with the official setup-minikube action, but that doesn't work either.
It seems I have managed to make it work by using the official HashiCorp action instead of the original one. Gonna check if it deploys everything in the end :)
      - uses: hashicorp/setup-terraform@v2
      - name: terraform-init
        run: terraform -chdir=terraform-k8s init
      - name: terraform-apply
        run: terraform -chdir=terraform-k8s apply -auto-approve
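For completeness, the combined job would presumably look like the sketch below (assembled from the snippets above, not a tested configuration). A likely reason the original setup failed is that dflook/terraform-apply runs as a Docker container action, so inside the action HOME is /github/home rather than the runner home where minikube wrote ~/.kube/config; running Terraform directly on the runner via hashicorp/setup-terraform keeps the provider's config_path valid:
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v3
      - name: setup minikube
        uses: manusa/actions-setup-minikube@v2.7.2
        with:
          minikube version: 'v1.28.0'
          kubernetes version: 'v1.25.4'
          github token: ${{ secrets.GITHUB_TOKEN }}
          driver: docker
          container runtime: docker
      # run Terraform on the runner itself so ~/.kube/config resolves
      - uses: hashicorp/setup-terraform@v2
      - name: terraform-init
        run: terraform -chdir=terraform-k8s init
      - name: terraform-apply
        run: terraform -chdir=terraform-k8s apply -auto-approve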

Github Actions - Invalid workflow file

I am trying to build a CI/CD pipeline using GitHub Actions but unfortunately I am stuck on an error in the YAML file.
Here is my YAML file:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs_on: unbuntu-latest
      env:
        IMAGE_NAME: learning_cicd
        PROJECT_ID: personal-370316
      steps:
        - name: Checkoutstep
          uses: actions/checkout@v2
        - uses: google-github-actions/setup-gcloud@master
          with:
            service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY}}
            project_id: ${{ env.PROJECT_ID }}
            export_default_credentials: true
        - name: Build Docker Image
          run: docker build -t $IMAGE_NAME:latest .
        - name: Configure Docker Client
          run: |-
            gcloud auth configure-docker --quiet
        - name: Push Docker Image to Container Registry (GCR)
          env:
            GIT_TAG: v0.1.0
          run: |-
            docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
            docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
            docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
            docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
Here is the error I am stuck with:
GitHub Actions / .github/workflows/gcp.yaml
Invalid workflow file
You have an error in your yaml syntax on line 15
I tried all the indentation variants I could find on the internet but had no luck. I also tried a YAML linter but still could not find where the error comes from. Please point me to where I am going wrong.
Thanks.
The runs-on key (not runs_on) should be indented two spaces relative to the job identifier. Also, the OS should be ubuntu-latest, not unbuntu-latest.
Then, env should have the same indentation as runs-on and name, and so should steps.
Here is the correct WF:
---
name: Build and push python code to gcp with github actions
on:
  push:
    branches:
      - main
jobs:
  build_push_grc:
    name: Build and push to gcr
    runs-on: ubuntu-latest
    env:
      IMAGE_NAME: learning_cicd
      PROJECT_ID: personal-370316
    steps:
      - name: Checkoutstep
        uses: actions/checkout@v2
      - uses: google-github-actions/setup-gcloud@master
        with:
          service_account_key: ${{ secrets.SERVICE_ACCOUNT_KEY}}
          project_id: ${{ env.PROJECT_ID }}
          export_default_credentials: true
      - name: Build Docker Image
        run: docker build -t $IMAGE_NAME:latest .
      - name: Configure Docker Client
        run: |-
          gcloud auth configure-docker --quiet
      - name: Push Docker Image to Container Registry (GCR)
        env:
          GIT_TAG: v0.1.0
        run: |-
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker tag $IMAGE_NAME:latest gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:latest
          docker push gcr.io/$PROJECT_ID/$IMAGE_NAME:$GIT_TAG
I would recommend debugging such issues in the GitHub file edit form (editing the yml file in the .github/workflows directory). It will highlight all the issues regarding the workflow syntax. Demo.
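If a local check is preferred, a schema-aware workflow linter can catch this class of mistake that a plain YAML linter misses, since runs_on is perfectly valid YAML but not a valid workflow key. As a sketch, using the third-party actionlint tool (install path and output are assumptions to verify against its documentation):
# install (requires Go) and run from the repository root
go install github.com/rhysd/actionlint/cmd/actionlint@latest
actionlint
# expected to flag unknown job keys such as "runs_on" and unknown
# runner labels such as "unbuntu-latest" in .github/workflows/*.yaml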

Github actions workflow error: You have an error in your yaml syntax

I am trying to deploy to Google App Engine using GitHub Actions, and my YAML config is as follows:
name: "Deploy to GAE"
on:
push:
branches: [production]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Install Dependencies
run: composer install -n --prefer-dist
- name: Generate key
run: php artisan key:generate
- name: GCP Authenticate
uses: GoogleCloudPlatform/github-actions/setup-gcloud#master
with:
version: "273.0.0"
service_account_key: ${{ secrets.GCP_SA_KEY }}
- name: Set GCP_PROJECT
env:
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
run: gcloud --quiet config set project ${GCP_PROJECT}
- name: Deploy to GAE
run: gcloud app deploy app.yaml
and GitHub Actions is throwing the error below:
Invalid workflow file: .github/workflows/main.yml#L10
You have an error in your yaml syntax on line 10
FYI, line 10 is - uses: actions/checkout@v2
The steps indentation level is incorrect; it should be inside deploy:
name: "Deploy to GAE"
on:
push:
branches: [production]
jobs:
deploy:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout#v2
- name: Install Dependencies
run: composer install -n --prefer-dist
- name: Generate key
run: php artisan key:generate
- name: GCP Authenticate
uses: GoogleCloudPlatform/github-actions/setup-gcloud#master
with:
version: "273.0.0"
service_account_key: ${{ secrets.GCP_SA_KEY }}
- name: Set GCP_PROJECT
env:
GCP_PROJECT: ${{ secrets.GCP_PROJECT }}
run: gcloud --quiet config set project ${GCP_PROJECT}
- name: Deploy to GAE
run: gcloud app deploy app.yaml

Github Actions "unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials"

I have created a GitHub workflow to deploy to GCP. But when it comes to pushing the Docker image to GCP I get this error:
...
346fddbbb0ff: Waiting
a6fc7a8843ca: Waiting
unauthorized: You don't have the needed permissions to perform this operation, and you may have invalid credentials. To authenticate your request, follow the steps in: https://cloud.google.com/container-registry/docs/advanced-authentication
Error: Process completed with exit code 1.
Here is my YAML file:
name: Build for Dev
on:
  workflow_dispatch:
env:
  GKE_PROJECT: bi-dev
  IMAGE: gcr.io/bi-dev/bot-dev
  DOCKER_IMAGE_TAG: JAVA-${{ github.sha }}
jobs:
  build:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v2
        with:
          ref: ${{ github.event.inputs.commit_sha }}
      - name: Build Docker Image
        run: docker build -t ${{env.IMAGE}} .
      - uses: google-github-actions/setup-gcloud@v0.2.0
        with:
          project_id: ${{ env.GKE_PROJECT }}
          service_account_key: ${{ secrets.GKE_KEY }}
          export_default_credentials: true
      - name: Push Docker Image to GCP
        run: |
          gcloud auth configure-docker
          docker tag ${{env.IMAGE}} ${{env.IMAGE}}:${{env.DOCKER_IMAGE_TAG}}
          docker push ${{env.IMAGE}}:${{env.DOCKER_IMAGE_TAG}}
      - name: Update Deployment in GKE
        env:
          GKE_CLUSTER: bots-dev-test
          GKE_DEPLOYMENT: bot-dev
          GKE_CONTAINER: bot-dev
        run: |
          gcloud container clusters get-credentials ${{ env.GKE_CLUSTER }} --zone us-east1-b --project ${{ env.GKE_PROJECT }}
          kubectl set image deployment/$GKE_DEPLOYMENT ${{ env.GKE_CONTAINER }}=${{ env.IMAGE }}:${{ env.TAG }}
          kubectl rollout status deployment/$GKE_DEPLOYMENT
Surprisingly, when I manually run docker push it works fine. I am also using a similar YAML file to push other projects and they work totally fine. It is just this GitHub Actions workflow that fails.
Any leads would be appreciated.
It turned out that I had missed a step and had not added the service account key to the repository secrets for GitHub Actions, and that led to the failure of this particular workflow.
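For reference, a minimal sketch of one way to add the missing secret with the GitHub CLI (the secret name matches the workflow above; the key file path is hypothetical):
# add the service account key the workflow reads as secrets.GKE_KEY
# (gke-sa-key.json is a hypothetical path to the downloaded JSON key)
gh secret set GKE_KEY < gke-sa-key.json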

Share variables of GitHub Actions job to multiple subsequent jobs while retaining specific order

We have a GitHub Actions workflow consisting of 3 jobs:
provision-eks-with-pulumi: Provisions AWS EKS cluster (using Pulumi here)
install-and-run-argocd-on-eks: Installing & configuring ArgoCD using kubeconfig from job 1.
install-and-run-tekton-on-eks: Installing & running Tekton using kubeconfig from job 1., but depending on job 2.
We are already aware of this answer and the docs, and use jobs.<job_id>.outputs to define the variable in job 1 and needs.<job_id>.outputs to consume it in the subsequent jobs. BUT it only works for our job 2 and fails for job 3. Here's our workflow.yml:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: install-and-run-argocd-on-eks
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...
The first dependent job (install-and-run-argocd-on-eks) gets the kubeconfig correctly using needs.provision-eks-with-pulumi.outputs.kubeconfig, but the second one (install-and-run-tekton-on-eks) does not (see this GitHub Actions log). We also don't want job 3 to depend only on job 1, because then job 2 and job 3 would run in parallel.
How can our job 3 run after job 2, but still use the kubeconfig variable from job 1?
That's easy, because a GitHub Actions job can depend on multiple jobs via the needs keyword. All you have to do in job 3 is use the array notation needs: [job1, job2].
So for your workflow it will look like this:
name: provision
on: [push]
env:
  AWS_ACCESS_KEY_ID: ${{ secrets.AWS_ACCESS_KEY_ID }}
  AWS_SECRET_ACCESS_KEY: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
  AWS_DEFAULT_REGION: 'eu-central-1'
jobs:
  provision-eks-with-pulumi:
    runs-on: ubuntu-latest
    env:
      PULUMI_ACCESS_TOKEN: ${{ secrets.PULUMI_ACCESS_TOKEN }}
    outputs:
      kubeconfig: ${{ steps.pulumi-up.outputs.kubeconfig }}
    steps:
      ...
      - name: Provision AWS EKS cluster with Pulumi
        id: pulumi-up
        run: |
          pulumi stack select dev
          pulumi up --yes
          echo "Create ~/.kube dir only, if not already existent (see https://stackoverflow.com/a/793867/4964553)"
          mkdir -p ~/.kube
          echo "Create kubeconfig and supply it for depending Action jobs"
          pulumi stack output kubeconfig > ~/.kube/config
          echo "::set-output name=kubeconfig::$(pulumi stack output kubeconfig)"
      - name: Try to connect to our EKS cluster using kubectl
        run: kubectl get nodes
  install-and-run-argocd-on-eks:
    runs-on: ubuntu-latest
    needs: provision-eks-with-pulumi
    environment:
      name: argocd-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install ArgoCD
        run: ...
  install-and-run-tekton-on-eks:
    runs-on: ubuntu-latest
    needs: [provision-eks-with-pulumi, install-and-run-argocd-on-eks]
    environment:
      name: tekton-dashboard
      url: ${{ steps.dashboard-expose.outputs.dashboard_host }}
    steps:
      - name: Checkout
        uses: actions/checkout@master
      - name: Configure kubeconfig to use with kubectl from provisioning job
        run: |
          mkdir ~/.kube
          echo '${{ needs.provision-eks-with-pulumi.outputs.kubeconfig }}' > ~/.kube/config
          echo "--- Checking connectivity to cluster"
          kubectl get nodes
      - name: Install Tekton Pipelines, Dashboard, Triggers
        run: ...