Deploying to a Kubernetes cluster using a GitLab runner

I've integrated GitLab with my Digital Ocean Kubernetes cluster. I am trying to set up a simple manual build that will deploy to my Kubernetes cluster.
My .gitlab-ci.yml file is below:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  script:
    - kubectl version
    - kubectl apply -f web.yaml
I am not sure why this is not working. I am currently getting the following error:
Error from server (Forbidden): error when retrieving current
configuration ... from server for: "web.yaml": ingresses.extensions "hmweb-ingress" is forbidden: User "system:serviceaccount:gitlab-managed-apps:default" cannot get resource "ingresses" in API group "extensions" in the namespace "hm-ns01"
As far as I can understand, it cannot execute the kubectl apply .. commands.
Am I doing something wrong?

I think you are missing the environment in your deploy job.
Modify your job definition to look something like this:
deploy:
  stage: deploy
  image: bitnami/kubectl:latest
  environment:
    name: production
  script:
    - kubectl version
    - kubectl apply -f web.yaml
Where "production" is interchangable with any environment name.
At least that fixed the issue for me.
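For completeness: the Forbidden error itself is an RBAC problem, meaning the system:serviceaccount:gitlab-managed-apps:default service account has no permission to read Ingress objects in the hm-ns01 namespace. If adding the environment does not resolve it on its own, a Role/RoleBinding along the following lines should; this is only a sketch based on the error message, so the role name and the exact resource/verb lists are assumptions you would tighten for your own setup:

# Grants the GitLab runner's service account the access it needs inside hm-ns01 (sketch)
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: gitlab-deploy
  namespace: hm-ns01
rules:
  - apiGroups: ["", "apps", "extensions", "networking.k8s.io"]
    resources: ["deployments", "services", "ingresses", "pods"]
    verbs: ["get", "list", "watch", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: gitlab-deploy
  namespace: hm-ns01
subjects:
  - kind: ServiceAccount
    name: default
    namespace: gitlab-managed-apps
roleRef:
  kind: Role
  name: gitlab-deploy
  apiGroup: rbac.authorization.k8s.io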

Related

Error in connecting to GKE cluster from Google Cloud Build

I am trying to connect to a GKE cluster from Google Cloud Build. I performed the build step below:
steps:
  - name: gcr.io/cloud-builders/gcloud
    args:
      - container
      - clusters
      - get-credentials
      - cluster-name
      - '--zone=us-central1-a'
Error: Kubernetes cluster unreachable: Get "https://IP/version": error executing access token command "/builder/google-cloud-sdk/bin/gcloud config config-helper --format=json": err=exit status 127 output= stderr=/builder/google-cloud-sdk/bin/gcloud: exec: line 192: python: not found
Update: Though it can be done using gcloud, using the kubectl cloud builder seems to be a better fit here for connecting to the GKE cluster.
You can set a few environment variables directly and the builder will do the rest for you.
Refer to this link: https://github.com/GoogleCloudPlatform/cloud-builders/tree/master/kubectl#usage
My build step looks like this:
- name: gcr.io/cloud-builders/kubectl
  env:
    - CLOUDSDK_COMPUTE_ZONE=us-central1-a
    - CLOUDSDK_CONTAINER_CLUSTER=cluster-name
    - KUBECONFIG=/workspace/.kube/config
  args:
    - cluster-info
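As a follow-up, since /workspace is shared between build steps, a later step can run the actual deployment with the same builder. A minimal cloudbuild.yaml sketch; web.yaml stands in for whatever manifest you actually apply:

steps:
  # Resolve cluster credentials and confirm connectivity
  - name: gcr.io/cloud-builders/kubectl
    env:
      - CLOUDSDK_COMPUTE_ZONE=us-central1-a
      - CLOUDSDK_CONTAINER_CLUSTER=cluster-name
      - KUBECONFIG=/workspace/.kube/config
    args:
      - cluster-info
  # Apply a manifest using the same credentials
  - name: gcr.io/cloud-builders/kubectl
    env:
      - CLOUDSDK_COMPUTE_ZONE=us-central1-a
      - CLOUDSDK_CONTAINER_CLUSTER=cluster-name
      - KUBECONFIG=/workspace/.kube/config
    args:
      - apply
      - -f
      - web.yaml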

How to run the kubectl apply command properly using Ansible

I'm trying to automate the following:
Apply the Persistent Volumes
kubectl apply -f food-pv.yaml
kubectl apply -f bar-pv.yaml
Apply the Persistent Volume Claims
kubectl apply -f foo.yaml
kubectl apply -f bar.yaml
Apply the Services
kubectl apply -f this-service.yaml
kubectl apply -f that-nodeport.yaml
Apply the Deployment
kubectl apply -f something.yaml
Now I could run these as shell commands, but I don't think that's the proper way to do it. I've been reading through the Ansible documentation, but I'm not seeing what I need to do for this. Is there a better way to apply these YAML files without using a series of shell commands?
Thanks in advance
The best way to do this would be to use the Ansible kubernetes.core collection.
An example with a file:
- name: Create a Deployment by reading the definition from a local file
  kubernetes.core.k8s:
    state: present
    src: /testing/deployment.yml
So you could loop over the different folders containing the YAML definitions for your objects with state: present.
I don't currently have a running kube cluster to test this against, but you should basically be able to run all of this in a single task with a loop using the kubernetes.core.k8s module.
Here is what I believe should meet your requirement (provided your access to your kube instance is configured and working in your environment, and that you installed the above collection as described in the documentation):
- name: install my kube objects
  hosts: localhost
  gather_facts: false

  vars:
    obj_def_path: /path/to/your/obj_def_dir/
    obj_def_list:
      - food-pv.yaml
      - bar-pv.yaml
      - foo.yaml
      - bar.yaml
      - this-service.yaml
      - that-nodeport.yaml
      - something.yaml

  tasks:
    - name: Install all objects from def files
      k8s:
        src: "{{ obj_def_path }}/{{ item }}"
        state: present
        apply: true
      loop: "{{ obj_def_list }}"
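One assumption worth making explicit, since it is not shown in the playbook: the kubernetes.core.k8s module needs both the collection and the Python kubernetes client installed on the host the task runs on (localhost here), roughly:

# Install the collection and its Python dependency on the Ansible controller
ansible-galaxy collection install kubernetes.core
pip install kubernetes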

Ansible K8s Module - Apply Multiple Yaml Files at Once

I am writing an Ansible playbook where we pull projects from Git repos and thereafter apply all of the pulled YAML manifests to a Kubernetes cluster.
I only see examples that apply a single YAML file to a Kubernetes cluster, not multiple ones at once, e.g.:
- name: Apply metrics-server manifest to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/metrics-server.yaml
Is there any way of applying multiple YAML files? Something like:
- name: Apply Services to the cluster.
  community.kubernetes.k8s:
    state: present
    src: ~/svc-*.yaml
Or:
- name: Apply Ingresses to the cluster.
  community.kubernetes.k8s:
    state: present
    dir: ~/ing/
Is there another Ansible K8s module I should be looking at instead?
Or should we just run kubectl commands directly in Ansible tasks, e.g.:
- name: Apply Ingresses to the cluster.
  command: kubectl apply -f ~/ing/*.yaml
What is the best way to achieve this using Ansible?
You can use the k8s Ansible module along with a 'with_fileglob' loop. The code below should work for your requirement.
- name: Apply K8s resources
  k8s:
    definition: "{{ lookup('template', '{{ item }}') | from_yaml }}"
  with_fileglob:
    - "~/ing/*.yaml"

ADO Pipeline Environment Kubernetes On-Prem Resource Connection failing with x509: certificate signed by unknown authority

I am trying to set up a multi-stage ADO pipeline using the ADO pipeline Environment feature.
Stage 1: Builds the Spring-boot based Java Micro-service using Maven.
Stage 2: Deploys the above using Helm 3. The HelmDeploy@0 task uses an Environment which has a Resource called tools-dev (a Kubernetes namespace) where I want this service to be deployed using a Helm chart.
It fails at the last step with this error:
/usr/local/bin/helm upgrade --install --values /azp/agent/_work/14/a/values.yaml --wait --set ENV=dev --set-file appProperties=/azp/agent/_work/14/a/properties.yaml --history-max 2 --stderrthreshold 3 java-rest-template k8s-common-helm/rest-template-helm-demo
Error: Kubernetes cluster unreachable: Get "https://rancher.msvcprd.windstream.com/k8s/clusters/c-gkffz/version?timeout=32s": x509: certificate signed by unknown authority
##[error]Error: Kubernetes cluster unreachable: Get "https://rancher.msvcprd.windstream.com/k8s/clusters/c-gkffz/version?timeout=32s": x509: certificate signed by unknown authority
Finishing: Helm Deploy
I created the Kubernetes resource in the Environment using the kubectl commands specified in the settings section.
Deploy stage pipeline excerpt:
- stage: Deploy
  displayName: kubernetes deployment
  dependsOn: Build
  condition: succeeded('Build')
  jobs:
    - deployment: deploy
      pool: $(POOL_NAME)
      displayName: Deploy
      environment: dev-az-s-central-k8s2.tools-dev
      strategy:
        runOnce:
          deploy:
            steps:
              - bash: |
                  helm repo add \
                    k8s-common-helm \
                    http://nexus.windstream.com/repository/k8s-helm/
                  helm repo update
                displayName: 'Add and Update Helm repo'
                failOnStderr: false
              - task: HelmDeploy@0
                inputs:
                  command: 'upgrade'
                  releaseName: '$(RELEASE_NAME)'
                  chartName: '$(HELM_CHART_NAME)'
                  valueFile: '$(Build.ArtifactStagingDirectory)/values.yaml'
                  arguments: '--set ENV=$(ENV) --set-file appProperties=$(Build.ArtifactStagingDirectory)/properties.yaml --history-max 2 --stderrthreshold 3'
                displayName: 'Helm Deploy'
Environment Settings:
Name: dev-az-s-central-k8s2
Resource: tools-dev (Note: this is an on-prem k8s cluster that I am trying to connect to).
Can you please let me know what additional configuration is required to resolve this x509 certificate issue?
Check this documentation:
The issue is that your local Kubernetes config file must have the correct credentials.
When you create a cluster on GKE, it will give you credentials, including SSL certificates and certificate authorities. These need to be stored in a Kubernetes config file (default: ~/.kube/config) so that kubectl and helm can access them.
Also, check the answer to Helm 3: x509 error when connecting to local Kubernetes:
Helm looks for the kubeconfig at the path $HOME/.kube/config.
Please run this command:
microk8s.kubectl config view --raw > $HOME/.kube/config
This will save the config at the required path in your directory and should work.
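The same reasoning applies to the on-prem Rancher cluster above: the kubeconfig used by the ADO agent (and therefore by Helm) must carry a CA that signed the Rancher endpoint's certificate, otherwise the TLS connection is rejected. A minimal sketch of the relevant cluster entry; the certificate-authority-data value is a placeholder you would take from your own cluster:

apiVersion: v1
kind: Config
clusters:
  - name: tools-dev
    cluster:
      server: https://rancher.msvcprd.windstream.com/k8s/clusters/c-gkffz
      # Base64-encoded CA certificate of the Rancher endpoint (placeholder)
      certificate-authority-data: <BASE64_ENCODED_CA_CERT>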

Deploying a Container Image to Azure Kubernetes Service (AKS) using Azure DevOps

I have configured an Azure Kubernetes Service.
I have completed a couple of deployments successfully using the Kubectl task in Azure DevOps. The task command is "kubectl apply -f deployment.yaml".
In the deployment.yaml I have some items which I would like to configure as variables, for example the image, as below:
containers:
  - name: xxxxx
    image: containerregistry.azurecr.io/xxxxx:5517
    ports:
      - containerPort: 80
Now I am publishing the Docker image with the build number as the tag: 5517, 5518 and so on. So how can I change the image tag on the fly when "kubectl apply -f deployment.yaml" is executed? The deployment.yaml is checked into my Azure DevOps repo.
So you have 2 options:
- preprocess the file and replace tokens (there is a task for that)
- use Helm
You obviously have other options like using Pulumi/Terraform/Flux/etc., but these are the most straightforward ones to use from your starting point.
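To illustrate the first option as a sketch: check in the deployment.yaml with a placeholder instead of a fixed tag, and substitute the build number in a step just before your existing Kubectl apply task. The __IMAGE_TAG__ placeholder name and the sed step are assumptions; a marketplace "Replace Tokens" task achieves the same thing declaratively:

# deployment.yaml (excerpt, checked in with a placeholder)
containers:
  - name: xxxxx
    image: containerregistry.azurecr.io/xxxxx:__IMAGE_TAG__
    ports:
      - containerPort: 80

# Pipeline step that runs before the existing "kubectl apply -f deployment.yaml" task
- bash: |
    sed -i "s/__IMAGE_TAG__/$(Build.BuildId)/" deployment.yaml
  displayName: 'Set image tag to the current build number'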