DevOps CI/CD pipelines broken after Kubernetes upgrade to v1.22

Present state
In v1.22 Kubernetes dropped support for several v1beta1 APIs, including extensions/v1beta1 for Ingress. That made our release pipeline fail and we are not sure how to fix it.
We use build pipelines to build .NET Core applications and push the images to Azure Container Registry. Release pipelines then use Helm to upgrade the deployments in the cluster from that ACR. This is how it looks exactly:
Build pipeline:
.NET download, restore, build, test, publish
Docker task v0: Build task
Docker task v0: Push to the ACR task
Artifact publish to Azure Pipelines
Release pipeline:
Helm tool installer: install Helm v3.2.4 ("Check for latest version of Helm" unchecked) and install the newest kubectl ("Check for latest version" checked)
Bash task:
az acr login --name <acrname>
az acr helm repo add --name <acrname>
Helm upgrade task:
chart name <acrname>/<chartname>
version empty
release name `
After the upgrade to Kubernetes v1.22 we are getting the following error in release step 3:
Error: UPGRADE FAILED: unable to recognize "": no matches for kind "Ingress" in version "extensions/v1beta1".
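You can confirm which Ingress API versions the upgraded cluster still serves with kubectl; on v1.22 only networking.k8s.io/v1 remains for Ingress:
# list served API groups/versions; extensions/v1beta1 no longer appears on v1.22
kubectl api-versions | grep -E 'networking|extensions'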
What I've already tried
The error is pretty obvious: the Helm compatibility table states clearly that I need to upgrade the release pipelines to use at least Helm v3.7.x. Unfortunately, in that version the OCI functionality (more on this shortly) is still experimental, so at least v3.8.x has to be used.
Bumping helm version to v3.8.0
That makes release step 3 report:
Error: looks like "https://<acrname>.azurecr.io/helm/v1/repo" is not a valid chart repository or cannot be reached: error unmarshaling JSON: while decoding JSON: json: unknown field "acrMetadata"
After reading the Microsoft tutorial on using Helm with ACR, I learned that the az acr helm commands use Helm v2 and are therefore deprecated; OCI artifacts should be used instead.
Switching to OCI part 1
After reading that, I changed release step 2 to a one-liner:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
That now gives me Login Succeeded in release step 2, but release step 3 fails with:
Error: failed to download "<acrname>/<reponame>".
Switching to OCI part 2
I thought the Helm task might be incompatible with the new approach, so I removed release step 3 and decided to do the upgrade from the command line in step 2. Step 2 now looks like this:
helm registry login <acrname>.azurecr.io --username <username> --password <password>
helm upgrade --install --wait -n <namespace> <deploymentName> oci://<acrname>.azurecr.io/<reponame> --version latest --values ./values.yaml
Unfortunately, that still gives me:
Error: failed to download "oci://<acrname>.azurecr.io/<reponame>" at version "latest"
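In hindsight (see the answer below), part of the problem is that Helm resolves OCI chart references by the chart's semver version, so --version latest can never match a chart version. A working reference looks like this sketch (1.2.3 is an illustrative version number):
# reference the chart by its semver version, not a floating tag like "latest"
helm upgrade --install --wait -n <namespace> <deploymentName> \
  oci://<acrname>.azurecr.io/<reponame> --version 1.2.3 --values ./values.yaml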
Helm pull, export, upgrade instead of just upgrade
The next try was to split the single helm upgrade into separate helm pull, helm export and helm upgrade steps, but
helm pull oci://<acrname>.azurecr.io/<reponame> --version latest
gives me:
Error: manifest does not contain minimum number of descriptors (2), descriptors found: 0
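That error usually means the artifact stored under <reponame> is not an OCI Helm chart at all (e.g. it was pushed through the legacy az acr helm route). One way to inspect what is actually stored in the repository:
# list the tags in the repository to see what was actually pushed
az acr repository show-tags --name <acrname> --repository <reponame>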
Changing docker build and docker push tasks to v2
I also tried changing the Docker tasks in the build pipeline to v2, but that didn't change anything at all.

Have you tried changing the Ingress object's apiVersion to networking.k8s.io/v1? Support for Ingress in both extensions/v1beta1 and networking.k8s.io/v1beta1 is dropped in k8s 1.22, so on a 1.22 cluster only networking.k8s.io/v1 will work.
The ingress.yaml file in our Helm chart looks something like this to support multiple k8s versions. You can ignore the AWS-specific annotations since you're on Azure. Our chart has a global ingress.enablePathType value because, at the time the YAML was written, the AWS Load Balancer controller did not support pathType, so we set the value to false.
{{- if .Values.global.ingress.enabled -}}
{{- /* NOTE: $applicationOneServiceName is defined elsewhere in our chart; the assignment below is an assumed stand-in so this snippet is self-contained */ -}}
{{- $applicationOneServiceName := .Values.global.applicationOne.serviceName -}}
{{- $useV1Ingress := and (.Capabilities.APIVersions.Has "networking.k8s.io/v1/Ingress") .Values.global.ingress.enablePathType -}}
{{- if $useV1Ingress -}}
apiVersion: networking.k8s.io/v1
{{- else if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: example-ingress
  labels:
    {{- include "my-chart.labels" . | nindent 4 }}
  annotations:
    {{- if .Values.global.ingress.group.enabled }}
    alb.ingress.kubernetes.io/group.name: {{ required "ingress.group.name is required when ingress.group.enabled is true" .Values.global.ingress.group.name }}
    {{- end }}
    {{- with .Values.global.ingress.annotations }}
    {{- toYaml . | nindent 4 }}
    {{- end }}
    # Add these tags to the AWS Application Load Balancer
    alb.ingress.kubernetes.io/tags: k8s.namespace/{{ .Release.Namespace }}={{ .Release.Namespace }}
spec:
  rules:
    - host: {{ include "my-chart.applicationOneServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ $applicationOneServiceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ $applicationOneServiceName }}
              servicePort: http-grails
          {{- end }}
    - host: {{ include "my-chart.applicationTwoServerUrl" . | quote }}
      http:
        paths:
          {{- if $useV1Ingress }}
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.global.applicationTwo.serviceName }}
                port:
                  name: http-grails
          {{- else }}
          - path: /*
            backend:
              serviceName: {{ .Values.global.applicationTwo.serviceName }}
              servicePort: http-grails
          {{- end }}
{{- end }}
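You can sanity-check both branches of a template like this without a live cluster, because .Capabilities.APIVersions only contains Helm's built-in defaults during offline rendering; helm template accepts the extra capability via its --api-versions flag. A sketch, run from the chart directory:
# render as a cluster serving networking.k8s.io/v1 Ingress would see it
helm template my-release . \
  --set global.ingress.enabled=true \
  --set global.ingress.enablePathType=true \
  --api-versions networking.k8s.io/v1/Ingress

# render without the flag to exercise the v1beta1/extensions fallback branches
helm template my-release . --set global.ingress.enabled=true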

Just to make the picture complete: the change to the ingress YAML in the chart definition mentioned by #wubbalubba wasn't the only thing I had to do to fix our pipelines.
First, obviously, change the API version to networking.k8s.io/v1 in the ingress YAML inside the chart definition and increment the chart version. Then package it again and push it to the ACR:
helm package .
helm push .\generated-new-chart.tgz oci://<acrname>.azurecr.io/
The next thing, learned from this guide, was to update (or rather, I just removed) all the Helm release secrets and configmaps connected with my services:
kubectl delete secret -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
kubectl delete configmap -l owner=helm,status=deployed,name=<release_name> --namespace <release_namespace>
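An alternative to deleting them, which I didn't try: the helm mapkubeapis plugin rewrites the deprecated API versions stored inside those release secrets in place:
helm plugin install https://github.com/helm/helm-mapkubeapis
helm mapkubeapis <release_name> --namespace <release_namespace>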
Lastly, I removed the Helm upgrade deployment step. A shell script took over its responsibility instead:
helm registry login $(ContainerRegistryUrl) --username $(ContainerRegistryUsername) --password $(ContainerRegistryPassword)
az aks get-credentials --resource-group $(Kubernetes__ResourceGroup) --name $(Kubernetes__Cluster)
helm upgrade --install --wait -n $(NamespaceName) $(ServiceName) oci://$(ContainerRegistryUrl)/services-generic-chart --version 2 -f ./values.yaml
Only then was I able to redeploy everything successfully.
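For completeness, the same script expressed as an Azure Pipelines YAML Bash task would look roughly like this (a sketch; variable names as above):
- task: Bash@3
  displayName: Helm OCI deploy
  inputs:
    targetType: inline
    script: |
      helm registry login $(ContainerRegistryUrl) --username $(ContainerRegistryUsername) --password $(ContainerRegistryPassword)
      az aks get-credentials --resource-group $(Kubernetes__ResourceGroup) --name $(Kubernetes__Cluster)
      helm upgrade --install --wait -n $(NamespaceName) $(ServiceName) oci://$(ContainerRegistryUrl)/services-generic-chart --version 2 -f ./values.yaml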

Related

Integration test for kubernetes deployment with helm on openshift

I am trying to use Ansible or helm test to verify that all resources are up and running after deploying Ansible Automation Platform (automation controller, private automation hub) on OpenShift.
Currently I am using Ansible assertions to check the deployments, but it seems I could use --atomic with the Helm commands to verify that all resources are up after the Helm deployment.
Can you help me check all the resources with Ansible (not only deployments, but everything I deployed with the Helm chart)? Example code would be appreciated, ideally also some helm test examples.
Thank you.
- name: Test deployment
  hosts: localhost
  gather_facts: false
  # vars:
  #   deployment_name: "pah-api"
  tasks:
    - name: Gather all deployments
      shell: oc get deployment -o template --template '{{"{{"}}range.items{{"}}"}}{{"{{"}}.metadata.name{{"}}"}}{{"{{"}}"\n"{{"}}"}}{{"{{"}}end{{"}}"}}'
      register: deployed_resources
    # - name: Print the output of deployments
    #   debug:
    #     var: deployed_resources.stdout_lines
    - name: Get deployment status
      shell: oc get deployment {{ item }} -o=jsonpath='{.status.readyReplicas}'
      with_items: "{{ deployed_resources.stdout_lines }}"
      register: deployment_status
      failed_when: deployment_status.rc != 0
    - name: Verify deployment is running
      # loop over the per-deployment results; a variable registered inside a
      # loop exposes .results, not a single .stdout
      assert:
        that:
          - item.stdout != 'null'
          - item.stdout != '0'
        fail_msg: 'Deployment {{ item.item }} is not running.'
      with_items: "{{ deployment_status.results }}"
Currently I only check deployments, but it would be nice to check all the resources I deployed with the Helm chart, either with Ansible or via helm test.
You could use the Ansible Helm module. The atomic parameter is available out of the box: https://docs.ansible.com/ansible/latest/collections/kubernetes/core/helm_module.html
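A minimal sketch of that module with atomic enabled (release name, chart path and namespace are placeholders):
- name: Deploy the chart and roll back automatically if resources fail to come up
  kubernetes.core.helm:
    name: my-release                 # placeholder release name
    chart_ref: ./my-chart            # placeholder chart path
    release_namespace: my-namespace  # placeholder namespace
    atomic: true                     # roll the release back if the upgrade fails
    wait: true                       # wait for resources to reach a ready state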

adding multiple ips/domains to filebeat.yml output.elasticsearch via helm

If you have multiple Elasticsearch/Logstash nodes that you want to point output.elasticsearch.hosts in filebeat.yml to from the Helm chart, you can do it like this:
values.yaml
note: define hosts as a string, not an array
logstash:
  hosts: "192.168.1.2:5444', '192.168.2.100:5544"
filebeat-deployment.yml
env:
  - name: ELASTICSEARCH_HOSTS
    {{- range $key, $val := .Values.logstash }}
    value: {{ . | quote }}
    {{- end }}
The result will be:
$ kubectl exec filebeat-pod -n filebeat -- cat /etc/filebeat/filebeat.yml
setup.template.overwrite: true
setup.ilm.enabled: false
output.elasticsearch:
  hosts: ['192.168.1.2:5444', '192.168.2.100:5544']
  #username:
  #password:
  #ssl.verification_mode:
  #ssl.certificate_authorities:
  #ssl.certificate:
  #ssl.key:
Filebeat pod logs:
$ kubectl logs filebeat-pod -n filebeat
2022-10-04T09:54:04.539Z INFO eslegclient/connection.go:99 elasticsearch url: http://192.168.1.2:5444
2022-10-04T09:54:04.539Z INFO eslegclient/connection.go:99 elasticsearch url: http://192.168.2.100:5544
NOTE: if you have other solutions for adding the multiple IPs/domains to the container environment via the Helm chart, just reply to this.
Hope you find this post helpful.
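One such alternative (an untested sketch) keeps hosts as a real YAML list in values.yaml and joins it into the exact same string in the template:
# values.yaml (sketch)
logstash:
  hosts:
    - 192.168.1.2:5444
    - 192.168.2.100:5544

# filebeat-deployment.yml (sketch)
env:
  - name: ELASTICSEARCH_HOSTS
    value: {{ .Values.logstash.hosts | join "', '" | quote }}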

Helm - Configmap - Read and update the file name

I have the application properties defined for each environment inside a config folder.
config/
  application-dev.yml
  application-dit.yml
  application-sit.yml
When I'm deploying the application in dev, I need to create the configmap from application-dev.yml, but under the name application.yml.
When I'm deploying the application in dit, I need to create the configmap from application-dit.yml; again, the file name inside the configmap must always be application.yml.
Any suggestions?
When using helm to manage projects, different values.yaml files are generally used to distinguish between different environments (development/pre-release/online).
Suppose your configmap file is as follows:
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ $.Values.cm.name }}
data:
  application.yml: |-
    {{- $.Files.Get $.Values.cm.path | nindent 4 }}
In dev, define a values-dev.yaml file:
cm:
  name: test
  path: config/application-dev.yml
When you install the chart in dev, you can use the following command:
helm install test . -f values-dev.yaml
In dit, define a values-dit.yaml file:
cm:
  name: test
  path: config/application-dit.yml
When you install the chart in dit, you can use the following command:
helm install test . -f values-dit.yaml
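You can verify the rendered ConfigMap before installing it:
# prints the manifest with application-dit.yml embedded as application.yml
helm template test . -f values-dit.yaml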

Why does helm not accept kubernetes secrets for deployment?

I have created a secret inside the Kubernetes cluster for pulling images from a private repository and added it to the helm values.yml.
After the deployment starts (helm install chart /chart), I see that the Helm deployment keeps failing with a timeout.
kubectl describe pod shows me the errors ImagePullBackOff and "wrong credentials".
At the same time, if I deploy the same app with kubectl apply -f deployment.yml, the secret works as expected: the image is downloaded without any issues and the deployment succeeds.
The question is: how do I get this secret to work with Helm charts?
Try creating the secret using this command:
kubectl create secret docker-registry mysecret --docker-server=<docker-repo> --docker-username=<docker-username> --docker-password=<docker-password> --docker-email=<email>
(Provide your respective inputs in the above command)
From the Helm documentation:
First, assume that the credentials are defined in the values.yaml file like so:
imageCredentials:
  registry: quay.io
  username: someone
  password: sillyness
We then define our helper template as follows:
{{- define "imagePullSecret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.imageCredentials.registry (printf "%s:%s" .Values.imageCredentials.username .Values.imageCredentials.password | b64enc) | b64enc }}
{{- end }}
Finally, we use the helper template in a larger template to create the Secret manifest:
apiVersion: v1
kind: Secret
metadata:
  name: myregistrykey
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "imagePullSecret" . }}
In the deployment spec:
  containers:
    - name: private-reg-container
      image: <your-private-image>
  imagePullSecrets:
    - name: myregistrykey
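To double-check that the generated secret decodes to valid Docker credentials, something like this works:
# decode the rendered dockerconfigjson and inspect the auths entry
kubectl get secret myregistrykey -o jsonpath='{.data.\.dockerconfigjson}' | base64 -d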

Gitlab 10.1 Deploy to Google Kubernetes Engine

How does one deploy a Node app from GitLab CI to GKE? I already have cluster integration enabled and functional, but the documentation on what that means is almost non-existent. I don't know what variables having a GKE cluster connected gives me or how to use them in my CI.
Here's my gitlab-ci.yml. It puts the image in the GitLab registry, meaning I'll have to copy it to Google or somehow set up GKE to use a private registry, which no one seems to have managed to do.
image: docker:git
services:
  - docker:dind

stages:
  - build
  - test
  - release
  - deploy

variables:
  DOCKER_DRIVER: overlay2
  CONTAINER_TEST_IMAGE: registry.gitlab.com/my-proj:$CI_BUILD_REF_NAME
  CONTAINER_RELEASE_IMAGE: registry.gitlab.com/my-proj:latest

before_script:
  - docker login -u gitlab-ci-token -p $CI_BUILD_TOKEN registry.gitlab.com

build:
  stage: build
  script:
    - docker build -t $CONTAINER_TEST_IMAGE .
    - docker push $CONTAINER_TEST_IMAGE

.test1:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run eslint

.test2:
  stage: test
  script:
    - docker run $CONTAINER_TEST_IMAGE npm run mocha

release-image:
  stage: release
  script:
    - docker pull $CONTAINER_TEST_IMAGE
    - docker tag $CONTAINER_TEST_IMAGE $CONTAINER_RELEASE_IMAGE
    - docker push $CONTAINER_RELEASE_IMAGE
  only:
    - master

deploy:
  ??????
I haven't used Auto DevOps integration, but I can try and generalize a working approach.
If you have tiller installed on the k8s cluster, it's best if you create a helm chart for your application. If you haven't done that already, there is a tutorial on how to do that here:
https://github.com/kubernetes/helm/blob/master/docs/charts.md (check "Using Helm to Manage Charts")
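If you're starting from scratch, helm create scaffolds a chart skeleton you can adapt:
helm create my-app   # generates Chart.yaml, values.yaml and a templates/ directory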
A basic deployment.yaml managed by helm would look like this:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: {{ template "name" . }}
  labels:
    app: {{ template "name" . }}
    chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  replicas: {{ .Values.replicaCount }}
  template:
    metadata:
      labels:
        app: {{ template "name" . }}
        release: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Chart.Name }}
          image: "{{ .Values.image.repository }}:{{ .Values.image.tag }}"
          imagePullPolicy: {{ .Values.image.pullPolicy }}
and the corresponding values in values.yaml:
image:
  repository: registry.gitlab.com/my-proj
  tag: latest
A sample .gitlab-ci.yml file should look like this:
...
deploy:
  stage: deploy
  script:
    - helm upgrade <your-app-name> <path-to-the-helm-chart> --install --set image.tag=$CI_BUILD_REF_NAME
The build phase publishes the docker image and the deploy phase installs a helm chart which tries to download that image from registry.gitlab.com/my-proj.
I take it that the k8s cluster has access to that registry. If the registry is private, you need to create a secret in Kubernetes that holds the authorization token (unless it is created automatically):
https://kubernetes.io/docs/tasks/configure-pod-container/pull-image-private-registry/
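Creating that secret for the GitLab registry looks roughly like this (a sketch; use a token with registry read access rather than your account password):
kubectl create secret docker-registry gitlab-registry \
  --docker-server=registry.gitlab.com \
  --docker-username=<registry-username> \
  --docker-password=<registry-token> \
  --docker-email=<email>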
The default pipeline image you're using (image: docker:git) doesn't have the helm CLI installed, so you should replace that image with one that has helm and kubectl installed.
In the gitlab tutorial, they seem to be doing the installation on each run:
https://gitlab.com/gitlab-org/gitlab-ci-yml/blob/master/Auto-DevOps.gitlab-ci.yml (check function install_dependencies())