Integration test for Kubernetes deployment with Helm on OpenShift

I am trying to use Ansible or helm test to verify that all resources are up and running after the deployment of Ansible Automation Platform (automation controller, private-automation-hub) on OpenShift.
Currently I am using Ansible assertions to check the deployments, but it seems I could also use --atomic with the helm commands and verify that all resources are up after the Helm deployment.
Can you help me check all the resources with Ansible (not only the deployments, but every resource I deployed with the Helm chart)? Example code would be great, and, if possible, some examples with helm test as well.
Thank you.
- name: Test deployment
  hosts: localhost
  gather_facts: false
  # vars:
  #   deployment_name: "pah-api"
  tasks:
    - name: Gather all deployments
      shell: oc get deployment -o template --template '{{"{{"}}range.items{{"}}"}}{{"{{"}}.metadata.name{{"}}"}}{{"{{"}}"\n"{{"}}"}}{{"{{"}}end{{"}}"}}'
      register: deployed_resources
    # - name: Print the output of deployments
    #   debug:
    #     var: deployed_resources.stdout_lines
    - name: Get deployment status
      shell: oc get deployment {{ item }} -o=jsonpath='{.status.readyReplicas}'
      with_items: "{{ deployed_resources.stdout_lines }}"
      register: deployment_status
      failed_when: deployment_status.rc != 0
    - name: Verify deployment is running
      assert:
        that:
          - item.stdout != 'null'
          - item.stdout != '0'
        fail_msg: 'Deployment {{ item.item }} is not running.'
      with_items: "{{ deployment_status.results }}"
Currently I only check the deployments, but it would be nice to check all the resources I deployed with the Helm chart, either with Ansible or via helm test.

You could use the Ansible Helm module. The atomic parameter is available out of the box: https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html
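A minimal sketch of that approach, assuming the kubernetes.core collection is installed; the release name, chart path and namespace below are placeholders. With wait: true and atomic: true the task only succeeds once every resource created by the chart is ready, and the release is rolled back (or a failed first install is purged) otherwise:

- name: Deploy the chart and wait for every resource to become ready
  hosts: localhost
  gather_facts: false
  tasks:
    - name: Install or upgrade the release atomically
      kubernetes.core.helm:
        name: my-release                 # placeholder release name
        chart_ref: ./my-chart            # placeholder chart path
        release_namespace: my-namespace  # placeholder namespace
        state: present
        atomic: true      # roll back / purge automatically if the release does not become ready
        wait: true        # wait until all chart resources reach a ready state
        wait_timeout: 10m

For helm test, the usual pattern is to put a test hook into the chart and run helm test <release-name> after the deployment. A minimal sketch, assuming your chart exposes a service you can probe (the service name and port below are made up):

# templates/tests/connection-test.yaml (hypothetical file inside the chart)
apiVersion: v1
kind: Pod
metadata:
  name: "{{ .Release.Name }}-connection-test"
  annotations:
    "helm.sh/hook": test
spec:
  restartPolicy: Never
  containers:
    - name: probe
      image: busybox
      command: ['wget', '-qO-', 'http://{{ .Release.Name }}-web:80']   # assumed service name/port

Running helm test <release-name> then executes this pod and reports success or failure.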

Related

DevOps CI/CD pipelines broken after Kubernetes upgrade to v1.22

Present state
In v1.22 Kubernetes dropped a number of deprecated beta APIs, including extensions/v1beta1 for Ingress. That made our release pipeline crash, and we are not sure how to fix it.
We use build pipelines to build .NET Core applications and push them to Azure Container Registry. Then there are release pipelines that use Helm to upgrade them in the cluster from that ACR. This is how it looks exactly:
Build pipeline:
.NET download, restore, build, test, publish
Docker task v0: Build task
Docker task v0: Push to the ACR task
Artifact publish to Azure Pipelines
Release pipeline:
Helm tool installer: install Helm v3.2.4 ("Check for latest version of Helm" unchecked) and install the newest kubectl ("Check for latest version" checked)
Bash task:
az acr login --name <acrname>
az acr helm repo add --name <acrname>
Helm upgrade task:
chart name <acrname>/<chartname>
version empty
release name `
After the upgrade to Kubernetes v1.22 we are getting the following error in release step 3:
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1".
What I've already tried
The error is pretty obvious, and the Helm compatibility table clearly states that I need to upgrade the release pipelines to use at least Helm v3.7.x. Unfortunately, in that version the OCI functionality (more on this shortly) is still experimental, so at least v3.8.x has to be used.
Bumping helm version to v3.8.0
That makes release step 3 report:
Error: looks like "https://<acrname>.azurecr.io/helm/v1/repo" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "acrMetadata"
After reading the Microsoft tutorial on how to use Helm with ACR, I learned that the az acr helm commands use Helm v2 and are therefore deprecated, and that OCI artifacts should be used instead.
Switching to OCI part 1
After reading that, I changed release step 2 to a one-liner:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
That now gives me Login Succeeded in release step 2, but release step 3 fails with:
Error: failed to download "<acrname>/<reponame>".
Switching to OCI part 2
I thought the Helm task was somehow incompatible with the new approach, so I removed release step 3 and decided to do it from the command line in step 2. So now step 2 looks like this:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
helm upgrade --install --wait -n <namespace> <deploymentName> oci://<acrname>.azurecr.io/<reponame> --version latest --values ./values.yaml
Unfortunately, that still gives me:
Error: failed to download "oci://<acrname>.azurecr.io/<reponame>" at version "latest"
Helm pull, export, upgrade instead of just upgrade
The next try was to split the helm upgrade into separate helm pull, helm export and helm upgrade steps, but
helm pull oci://<acrname>.azurecr.io/<reponame> --version latest
gives me:
Error: manifest does not contain minimum number of descriptors (2), descriptors found: 0
Changing docker build and docker push tasks to v2
I also tried changing the docker tasks in the build pipelines to v2. But that didn't change anything at all.
Have you tried changing the Ingress object's apiVersion to networking.k8s.io/v1? Support for Ingress in the extensions/v1beta1 API version is dropped in k8s 1.22 (and networking.k8s.io/v1beta1 is removed there as well).
Our ingress.yaml file in our helm chart looks something like this to support multiple k8s versions. You can ignore the AWS-specific annotations since you're using Azure. Our chart has a global value of ingress.enablePathType because at the time of writing the yaml file, AWS Load Balancer did not support pathType and so we set the value to false.
{{- if .Values.global.ingress.enabled -}}
{{- $useV1Ingress := and (.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress") .Values.global.ingress.enablePathType -}}
{{- if $useV1Ingress -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: example-ingress
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
  annotations:
    {{- if .Values.global.ingress.group.enabled }}
    alb.ingress.kubernetes.io/group.name: {{ required "ingress.group.name is required when ingress.group.enabled is true" .Values.global.ingress.group.name }}
    {{- end }}
    {{- with .Values.global.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    # Add these tags to the AWS Application Load Balancer
    alb.ingress.kubernetes.io/tags: k8s.namespace/{{ .Release.Namespace }}={{ .Release.Namespace }}
spec:
  rules:
    - host: {{ include "my-chart.applicationOneServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $applicationOneServiceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ $applicationOneServiceName }}
              servicePort: http-grails
          {{- end }}
    - host: {{ include "my-chart.applicationTwoServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.global.applicationTwo.serviceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ .Values.global.applicationTwo.serviceName }}
              servicePort: http-grails
          {{- end }}
{{- end }}
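As a side note, in recent Helm 3 versions you can check which branch of a template like this will render for a given cluster without deploying anything, by simulating the cluster's capabilities with helm template. The release and chart names below are just placeholders:

helm template my-release ./my-chart \
  --kube-version 1.22.0 \
  --api-versions networking.k8s.io/v1/Ingress | grep apiVersion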
Just to make the picture complete: the change to the ingress YAML in the chart definition mentioned by #wubbalubba wasn't the only thing I had to do to fix our pipelines:
So first, obviously, change the API to v1 in the ingress YAML file inside the chart definition and increment the chart version. Then package it again and push it to the ACR:
helm package .
helm push .\generated-new-chart.tgz oci://<acrname>.azurecr.io/
The next thing, learned from this guide, was to update (or rather, I just removed) all the secrets and configmaps connected with my services:
kubectl delete secret -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
kubectl delete configmap -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
Lastly, I removed the Helm upgrade deployment task. A shell script took over its responsibility instead:
helm registry login $(ContainerRegistryUrl) --username $(ContainerRegistryUsername) --password $(ContainerRegistryPassword)
az aks get-credentials --resource-group $(Kubernetes__ResourceGroup) --name $(Kubernetes__Cluster)
helm upgrade --install --wait -n $(NamespaceName) $(ServiceName) oci://$(ContainerRegistryUrl)/services-generic-chart --version 2 -f ./values.yaml
Only then was I able to redeploy everything successfully.

How to connect to Kubernetes using Ansible?

I want to connect to Kubernetes using Ansible. I want to run some Ansible playbooks to create Kubernetes objects such as roles and rolebindings using the Ansible k8s module. I want to know whether the Ansible k8s module is a standard Kubernetes client that can use a kubeconfig in the same way as helm and kubectl.
Please let me know how to configure the kubeconfig for Ansible to connect to the K8s cluster.
You basically specify the kubeconfig parameter on the task in the Ansible YAML file (it defaults to ~/.kube/config). For example:
---
- hosts: localhost
  gather_facts: false
  vars_files:
    - vars/main.yml
  tasks:
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '~/.kube/config'
        state: present
      loop: "{{ lookup('template', 'myapp/mysql-pass.yml') | from_yaml_all | list }}"
      no_log: "{{ k8s_no_log }}"
...
You can also make it a variable:
...
    - name: Deploy my app secrets.
      k8s:
        definition: '{{ item }}'
        kubeconfig: '{{ k8s_kubeconfig }}'
...
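Another option, instead of repeating the parameter on every task, is to set it once for the whole play; the kubernetes.core (and older community.kubernetes) modules also read the kubeconfig path from the K8S_AUTH_KUBECONFIG environment variable. A rough sketch, with an example path:

- hosts: localhost
  gather_facts: false
  environment:
    K8S_AUTH_KUBECONFIG: '~/.kube/config'   # example path
  tasks:
    - name: Create a namespace using the kubeconfig from the environment
      kubernetes.core.k8s:
        name: testing
        api_version: v1
        kind: Namespace
        state: present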
Thank you, it worked for me. I tried the below:
- hosts: localhost
  gather_facts: false
  tasks:
    - name: Create a k8s namespace
      k8s:
        kubeconfig: '~/Documents/sample-project/eks-kubeconfig'
        name: testing1
        api_version: v1
        kind: Namespace
        state: present

How to set Java environment variables in a Helm chart?

What is the best practice for setting environment variables for a Java app's deployment in a Helm chart, so that I can use the same chart for the dev and prod environments? I have separate Kubernetes deployments for the two environments.
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc ..."
Similarly, my prod variables would be something like:
"-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc ..."
Now, how can I leverage Helm to write a single chart but create separate sets of pods with different properties according to the environment, as in
helm install my-app --set env=prod ./test-chart
or
helm install my-app --set env=dev ./test-chart
The best way is to use a single deployment template and a separate values file for each environment.
This is not limited to the environment variables used in the application;
the same can be applied to any environment-specific configuration.
Example:
deployment.yaml
spec:
  containers:
    env:
      - name: SYSTEM_OPTS
        value: "{{ .Values.opts }}"
values-dev.yaml
# system opts
opts: "-Dapp1.url=http://dev.app1.xyz -Dapp2.url=http://dev.app2.abc "
values-prod.yaml
# system opts
opts: "-Dapp1.url=http://prod.app1.xyz -Dapp2.url=http://prod.app2.abc "
Then specify the related values file in the helm command.
For example, deploying to the dev environment:
helm install -f values-dev.yaml my-app ./test-chart
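The prod deployment is the same command pointed at the other values file:

helm install -f values-prod.yaml my-app ./test-chart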

How to set up an Ansible playbook that is able to execute kubectl (Kubernetes) commands

I'm trying to write a simple Ansible playbook that would be able to execute an arbitrary command against a pod (container) running in a Kubernetes cluster.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
A couple of questions:
Do I first need to have an inventory defined for k8s? Something like: https://docs.ansible.com/ansible/latest/plugins/inventory/k8s.html. My understanding is that I would define the kubeconfig via the inventory, which would then be used by the kubectl plugin to actually connect to the pods and perform the specific action.
If yes, is there any example of an arbitrary command executed via the kubectl plugin (not via the shell module invoking kubectl on some remote machine - that is not what I'm looking for)?
I'm assuming that, during the ansible-playbook invocation, I would point it at the k8s inventory.
Thanks.
I would like to utilise the kubectl connection plugin: https://docs.ansible.com/ansible/latest/plugins/connection/kubectl.html but I'm struggling to figure out how to actually do that.
The fine manual describes how one uses connection plugins, and while it is possible to use one in tasks, that is unlikely to make sense unless your inventory starts with Pods.
The way I have seen that connection used is to start by identifying the Pods against which you might want to take action, and then run a playbook against a unique group for that purpose:
- hosts: all
  tasks:
    - set_fact:
        # this is *just an example for brevity*
        # in reality you would use `k8s:` or `kubectl get -o name pods -l my-selector=my-value` to get the pod names
        pod_names:
          - nginx-12345
          - nginx-3456
    - add_host:
        name: '{{ item }}'
        groups:
          - my-pods
      with_items: '{{ pod_names }}'

- hosts: my-pods
  connection: kubectl
  tasks:
    # and now you are off to the races
    - command: ps -ef
    # watch out if the Pod doesn't have a working python installed
    # as you will have to use raw: instead
    # (and, of course, set "gather_facts: no")
    - raw: ps -ef
First, install the Kubernetes collection:
ansible-galaxy collection install community.kubernetes
Here is a playbook; it lists all the pods in a namespace and runs a command in every pod:
---
- hosts: localhost
  vars_files:
    - vars/main.yaml
  collections:
    - community.kubernetes
  tasks:
    - name: Get the pods in the specific namespace
      k8s_info:
        kubeconfig: '{{ k8s_kubeconfig }}'
        kind: Pod
        namespace: test
      register: pod_list

    - name: Print pod names
      debug:
        msg: "pod_list: {{ pod_list | json_query('resources[*].status.podIP') }} "

    - set_fact:
        pod_names: "{{ pod_list | json_query('resources[*].metadata.name') }}"

    - k8s_exec:
        kubeconfig: '{{ k8s_kubeconfig }}'
        namespace: "{{ namespace }}"
        pod: "{{ item.metadata.name }}"
        command: apt update
      with_items: "{{ pod_list.resources }}"
      register: exec
      loop_control:
        label: "{{ item.metadata.name }}"
Maybe you can use it like this...
- shell: |
    kubectl exec -i -n {{ namespace }} {{ pod_name }} -- bash -c 'clickhouse-client --query "INSERT INTO customer FORMAT CSV"
    --user=test --password=test < /mnt/azure/azure/test/test.tbl'
As per the latest documentation, you can use the kubernetes.core.k8s module.
The following are some examples:
- name: Create a k8s namespace
  kubernetes.core.k8s:
    name: testing
    api_version: v1
    kind: Namespace
    state: present

- name: Create a Service object from an inline definition
  kubernetes.core.k8s:
    state: present
    definition:
      apiVersion: v1
      kind: Service
      metadata:
        name: web
        namespace: testing
        labels:
          app: galaxy
          service: web
      spec:
        selector:
          app: galaxy
          service: web
        ports:
          - protocol: TCP
            targetPort: 8000
            name: port-8000-tcp
            port: 8000

- name: Remove an existing Service object
  kubernetes.core.k8s:
    state: absent
    api_version: v1
    kind: Service
    namespace: testing
    name: web
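These examples assume the kubernetes.core collection is available; if it is not already installed, it can be pulled in with:

ansible-galaxy collection install kubernetes.core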

Kubernetes w/ helm: MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal

I'm using a script to run a helm command which upgrades my k8s deployment.
Before, I used kubectl to deploy directly. Since I've moved to Helm and started using charts, I see an error on the k8s pods after deploying:
MountVolume.SetUp failed for volume "secret" : invalid character '\r' in string literal
My script looks similar to:
value1="foo"
value2="bar"
helm upgrade deploymentName --debug --install --atomic --recreate-pods --reset-values --force --timeout 900 pathToChartDir --set value1 --set value2
The deployment.yaml is as follows:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: deploymentName
spec:
  selector:
    matchLabels:
      run: deploymentName
  replicas: 2
  template:
    metadata:
      labels:
        run: deploymentName
        app: appName
    spec:
      containers:
        - name: deploymentName
          image: {{ .Values.image.acr.registry }}/{{ .Values.image.name }}:{{ .Values.image.tag }}
          volumeMounts:
            - name: secret
              mountPath: /secrets
              readOnly: true
          ports:
            - containerPort: 1234
          env:
            - name: DOTENV_CONFIG_PATH
              value: "/secrets/env"
      volumes:
        - name: secret
          flexVolume:
            driver: "azure/kv"
            secretRef:
              name: "kvcreds"
            options:
              usepodidentity: "false"
              tenantid: {{ .Values.tenantid }}
              subscriptionid: {{ .Values.subsid }}
              resourcegroup: {{ .Values.rg }}
              keyvaultname: {{ .Values.kvname }}
              keyvaultobjecttype: secret
              keyvaultobjectname: {{ .Values.objectname }}
As can be seen, the error relates to the secret volume and its values.
I've triple-checked that there is no line break or anything like that in the values.
I've run helm lint - no errors found.
I've run helm template - nothing strange or missing in output.
Update:
I've copied the output of helm template and put it in a deploy.yaml file.
Then I used kubectl apply -f deploy.yaml to manually deploy the service, and... it works.
That makes me think it's actually some kind of bug in Helm? Does that make sense?
Update 2:
I've also tried replacing the azure/kv volume with an emptyDir volume, and I was able to deploy using Helm. It looks like a specific issue of Helm with the azure/kv volume?
Any ideas for a workaround?
To be completely accurate, I should say that the actual details of your \r problem might be different from mine.
I found the issue in my case by looking in the Key Vault flexvolume driver log on the AKS node (/var/log/kv-driver.log). The error was:
Original Error: autorest/azure: Service returned an error. Status=403 Code="Forbidden" Message="Access denied. Caller was not found on any access policy.\r\n
You can learn to SSH into the node on this page:
https://learn.microsoft.com/en-us/azure/aks/ssh
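Once you have a shell on the node, something as simple as the following is enough to pull the relevant lines out of that log (the grep pattern is just a suggestion):

sudo tail -n 100 /var/log/kv-driver.log
sudo grep -i 'error' /var/log/kv-driver.log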
If you want to follow the solution, I opened an issue:
https://github.com/Azure/kubernetes-keyvault-flexvol/issues/121