Azure DevOps + Helm chart = ##[warning]Capturing deployment metadata failed with error: TypeError: Cannot read property 'kind' of null

I use a Helm chart to deploy to an AKS cluster via Azure DevOps. Everything works fine, but I see a warning at the end of the deployment step:
Starting: Deploy Helm chart to qa3 environment
==============================================================================
Task : Package and deploy Helm charts
Description : Deploy, configure, update a Kubernetes cluster in Azure Container Service by running helm commands
Version : 0.201.0
Author : Microsoft Corporation
Help : https://aka.ms/azpipes-helm-tsg
==============================================================================
/usr/local/bin/helm upgrade --namespace qa3 --install --values /home/vsts/work/1/s/invitation/values.yaml --set deployment.image.tag=***,deployment.environment=qa3,cluster.name=dev,azure.region=westus2,azure.appInsightsKey=***,deployment.deployedBy='cd',application.publicJwtValidationCertPemBase64=***,application.endpointPath=invitational,application.sendGridTemplateId=***,application.twillioFromPhoneNumber=***,secret.AuthToken=***,secret.AccountSid=***,secret.SendGridApiKey=*** --wait --install --reuse-values q5id-app-invitation /home/vsts/work/1/s/invitation
Release "q5id-app-invitation" has been upgraded. Happy Helming!
NAME: q5id-app-invitation
LAST DEPLOYED: Wed Aug 10 14:53:19 2022
NAMESPACE: qa3
STATUS: deployed
REVISION: 3
TEST SUITE: None
/usr/local/bin/kubectl cluster-info
Kubernetes control plane is running at https://***:443
CoreDNS is running at https://***:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://***:443/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
##[warning]Capturing deployment metadata failed with error: TypeError: Cannot read property 'kind' of null
Finishing: Deploy Helm chart to qa3 environment
It looks like the template was successfully deployed, then kubectl cluster-info was run, then something else happened. I cannot understand what might cause this warning:
##[warning]Capturing deployment metadata failed with error: TypeError: Cannot read property 'kind' of null
How can I fix it?

It seems I just had the same issue. The problem is that one of your Helm templates is not rendering properly.
Let's say your template is conditional, and before the condition there is a comment:
# enables PodDisruptionBudget
{{- if .Values.pdb }}{{ if .Values.pdb.enabled }}
apiVersion: policy/v1
kind: PodDisruptionBudget
...
{{- end }}{{ end }}
When the condition is true, this usually renders without a warning. However, when it is false, the rendered template is essentially empty and Azure should skip it. Unfortunately it cannot, because the comment is still rendered (Azure prints it in the pipeline logs). Since the template is therefore not treated as empty, Azure looks for the kind of this template, finds nothing, and the warning appears. To solve it, move the comment inside the condition.
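A minimal sketch of the fixed template: with the comment moved inside the condition, a false condition renders no output at all, so the task has nothing to parse and no warning is raised (the pdb values are the same hypothetical ones as above):
{{- if .Values.pdb }}{{ if .Values.pdb.enabled }}
# enables PodDisruptionBudget
apiVersion: policy/v1
kind: PodDisruptionBudget
...
{{- end }}{{ end }}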

Related

Pulumi - Chart - Failed checking the Kubernetes version: argocd: >= 1.22.0-0 and got Kubernetes 1.20.0

I got an issue deploying the argo-cd Helm chart; it seems to fail while checking the Kubernetes version: argocd: >= 1.22.0-0 and got Kubernetes 1.20.0.
Pulumi is not using the Helm installed on my Mac and seems to have kube-version set to 1.20.0!
Pulumi Chart resource:
new k8s.helm.v3.Chart(
  'argo-cd',
  {
    chart: 'argo-cd',
    fetchOpts: {
      repo: 'https://argoproj.github.io/argo-helm'
    },
    namespace: 'argo',
    values: {}
  },
  {
    providers: {
      kubernetes: cluster.provider
    }
  }
);
Result:
pulumi:pulumi:Stack (my-project-prod):
error: Error: invocation of kubernetes:helm:template returned an error: failed to generate YAML for specified Helm chart: failed to create chart from template: chart requires kubeVersion: >=1.22.0-0 which is incompatible with Kubernetes v1.20.0
The chart is working as intended. >=1.22.0-0 means the chart must be rendered with a Kubernetes client version of at least 1.22.0.
Check that your Helm version is compiled against 1.22.
If you want to render your chart against your cluster's capabilities, use helm template --validate. That tells Helm to pull the Kubernetes version from your cluster; otherwise it uses the version of Kubernetes the Helm client was compiled against.
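For example, assuming your kubeconfig already points at the target cluster (the argo repo alias below is just an illustration), you could render the chart against the cluster's real capabilities with:
helm repo add argo https://argoproj.github.io/argo-helm
helm template argo-cd argo/argo-cd --namespace argo --validate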

Helm: Conditional deployment of a dependent chart, only install if it has not been installed before

While installing a Helm chart, a condition on a dependency works well, as in the following Chart.yaml file. But it doesn't allow applying the condition based on an existing Kubernetes resource.
# Chart.yaml
apiVersion: v1
name: my-chart
version: 0.3.1
appVersion: 0.4.5
description: A helm chart with dependency
dependencies:
- name: metrics-server
  version: 2.5.0
  repository: https://artifacts.myserver.com/v1/helm
  condition: metrics-server.enabled
I did a local install of the chart (my-chart) in one namespace (default); then, when I try to install the same chart in another namespace (pb), I get the following error saying the resource already exists. This resource, "system:metrics-server-aggregated-reader", was installed cluster-wide by the previous dependency (metrics-server). The steps to reproduce follow.
user@hostname$ helm install my-chart -n default --set metrics-server.enabled=true ./my-chart
NAME: my-chart
LAST DEPLOYED: Wed Nov 25 16:22:52 2020
NAMESPACE: default
STATUS: deployed
REVISION: 1
NOTES:
My Cluster
user@hostname$ helm install my-chart -n pb --set metrics-server.enabled=true ./my-chart
Error: rendered manifests contain a resource that already exists. Unable to continue with install: ClusterRole "system:metrics-server-aggregated-reader" in namespace "" exists and cannot be imported into the current release: invalid ownership metadata; annotation validation error: key "meta.helm.sh/release-namespace" must equal "pb": current value is "default"
There is a way to modify the templates inside the metrics-server chart to conditionally generate the manifest files, as described in Helm Conditional Templates. But to do this I would have to modify and maintain the metrics-server chart in an internal artifact repository, which would keep me from using the most recent charts.
I am looking for an approach that queries for the existing Kubernetes resource, "system:metrics-server-aggregated-reader", and only installs the dependency chart if no such resource exists.
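For what it's worth, Helm 3 has a lookup template function that queries the cluster at render time, which is the closest built-in to what is being asked. A hedged sketch of such a guard inside the dependency's template (note that lookup returns an empty result under helm template and --dry-run, so it only takes effect against a live cluster, and this still means modifying the dependency chart):
{{- if not (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "system:metrics-server-aggregated-reader") }}
# render the cluster-wide resource only when it does not already exist
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: system:metrics-server-aggregated-reader
...
{{- end }}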

no matches for kind "Deployment" in version "extensions/v1beta1"

While deploying mojaloop, Kubernetes responds with the following errors:
Error: validation failed: [unable to recognize "": no matches for kind
"Deployment" in version "apps/v1beta2", unable to recognize "": no
matches for kind "Deployment" in version "extensions/v1beta1", unable
to recognize "": no matches for kind "StatefulSet" in version
"apps/v1beta2", unable to recognize "": no matches for kind
"StatefulSet" in version "apps/v1beta1"]
My Kubernetes version is 1.16.
How can I fix the problem with the API version?
From investigating, I have found that Kubernetes 1.16 no longer supports apps/v1beta2 or apps/v1beta1.
How can I make Kubernetes use a non-deprecated, supported version?
I am new to Kubernetes, so I would be happy for any support.
In Kubernetes 1.16 some APIs have been removed.
You can check which API group serves a given Kubernetes object using:
$ kubectl api-resources | grep deployment
deployments   deploy   apps   true   Deployment
This means that only an apiVersion from the apps group is valid for Deployments (the extensions group no longer serves Deployment). The same applies to StatefulSet.
You need to change Deployment and StatefulSet apiVersion to apiVersion: apps/v1.
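A minimal before/after sketch of that change (note that apps/v1 also makes spec.selector with matchLabels mandatory for a Deployment, as shown further below):
# before - removed in Kubernetes 1.16
apiVersion: extensions/v1beta1
kind: Deployment

# after
apiVersion: apps/v1
kind: Deployment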
If this does not help, please add your YAML to the question.
EDIT
As the issue is caused by Helm templates that use old apiVersions for Deployments, which are not supported in version 1.16, there are two possible solutions:
1. git clone the whole repo and replace the apiVersion with apps/v1 in every templates/deployment.yaml using a script.
2. Use an older version of Kubernetes (1.15), whose validator accepts extensions as an apiVersion for Deployment and StatefulSet.
To convert an older Deployment to apps/v1, you can run:
kubectl convert -f ./my-deployment.yaml --output-version apps/v1
You can make the change manually as an alternative. Fetch the Helm chart:
helm fetch --untar stable/metabase
Access the chart folder:
cd ./metabase
Change API version:
sed -i 's|extensions/v1beta1|apps/v1|g' ./templates/deployment.yaml
Add spec.selector.matchLabels:
spec:
  [...]
  selector:
    matchLabels:
      app: {{ template "metabase.name" . }}
  [...]
Finally install your altered chart:
helm install ./ \
-n metabase \
--namespace metabase \
--set ingress.enabled=true \
--set ingress.hosts={metabase.$(minikube ip).nip.io}
Enjoy!
I prefer kubectl explain.
# kubectl explain deploy
KIND:     Deployment
VERSION:  apps/v1

DESCRIPTION:
     Deployment enables declarative updates for Pods and ReplicaSets.

FIELDS:
   apiVersion   <string>
     APIVersion defines the versioned schema of this representation of an
     object. Servers should convert recognized schemas to the latest internal
     value, and may reject unrecognized values. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#resources

   kind   <string>
     Kind is a string value representing the REST resource this object
     represents. Servers may infer this from the endpoint the client submits
     requests to. Cannot be updated. In CamelCase. More info:
     https://git.k8s.io/community/contributors/devel/sig-architecture/api-conventions.md#types-kinds

   metadata   <Object>
     Standard object metadata.

   spec   <Object>
     Specification of the desired behavior of the Deployment.

   status   <Object>
     Most recently observed status of the Deployment.
With kubectl explain you can also see specific parameters of an object:
# kubectl explain Service.spec.externalTrafficPolicy
KIND:     Service
VERSION:  v1

FIELD:    externalTrafficPolicy <string>

DESCRIPTION:
     externalTrafficPolicy denotes if this Service desires to route external
     traffic to node-local or cluster-wide endpoints. "Local" preserves the
     client source IP and avoids a second hop for LoadBalancer and NodePort type
     services, but risks potentially imbalanced traffic spreading. "Cluster"
     obscures the client source IP and may cause a second hop to another node,
     but should have good overall load-spreading.
To put it simply, you don't force the current installation to use an outdated version of the API; you fix the version in your config files.
If you want to check which versions your current cluster supports, run:
root@ubn64:~# kubectl api-versions | grep -i apps
apps/v1
I was getting the error below:
error: unable to recognize "deployment.yaml": no matches for kind "Deployment" in version "extensions/v1beta1"
The solution that worked for me: I modified the line apiVersion: extensions/v1beta1 to apiVersion: apps/v1 in deployment.yaml.
Reason: we had upgraded the K8s cluster, hence this error occurred.
This was annoying me because I am testing lots of Helm packages, so I wrote a quick script, which could perhaps be adapted to your workflow; see below.
New workflow
First fetch the chart as a tgz to your working directory
helm fetch repo/chart
then in your working directory run the bash script below, which I named helmk:
helmk myreleasename mynamespace chart.tgz [any parameters for kubectl create]
Contents of helmk (you need to edit the kubeconfig cluster name for it to work):
#!/bin/bash
echo usage $0 releasename namespace chart.tgz [createparameter1] [createparameter2] ... [createparameter n]
echo This will use your namespace then shift back to default so be careful!!
kubectl create namespace $2  # harmless error if the namespace already exists; ignore it
kubectl config set-context MYCLUSTERNAME --namespace $2
helm template -n $1 --namespace $2 $3 | kubectl convert -f /dev/stdin | kubectl create --save-config=true "${@:4}" -f /dev/stdin
# note: the --namespace parameter in helm template above seems to be ignored, so we have to switch context manually
kubectl config set-context MYCLUSTERNAME --namespace default
It's a slightly dangerous hack, since I manually switch to the new desired namespace context and then back again, so it should really only be used by single-user devs (or comment that part out).
You will get a warning about using the kubectl convert facility.
If you need to edit the YAML to customise, just replace one of the /dev/stdin references with an intermediate file. However, it's probably better to bring things up using "create" with --save-config as I have, and then simply "apply" your changes, which means they will be recorded in Kubernetes too.
Good luck
I was facing the same issue on a cluster that was upgraded to a version that no longer supports certain API versions (v1.17, which no longer serves apps/v1beta2).
$ helm get manifest some-deployment
...
# Source: some-deployment/templates/deployment.yaml
apiVersion: apps/v1beta2
kind: Deployment
metadata:
  name: some-deployment
  labels:
...
Looking at the Helm docs, it seems that the manifest is stored in the cluster for Helm to reference, and it may include invalid API versions, leading to errors.
The two proposed methods are to either manually edit the manifest (a rather tedious multi-stage process) or use a Helm plugin called mapkubeapis that does it automatically.
$ helm plugin install https://github.com/helm/helm-mapkubeapis
It can be run with the --dry-run flag to simulate the effects:
$ helm mapkubeapis --dry-run some-deployment
2021/02/15 09:33:29 NOTE: This is in dry-run mode, the following actions will not be executed.
2021/02/15 09:33:29 Run without --dry-run to take the actions described below:
2021/02/15 09:33:29
2021/02/15 09:33:29 Release 'some-deployment' will be checked for deprecated or removed Kubernetes APIs and will be updated if necessary to supported API versions.
2021/02/15 09:33:29 Get release 'some-deployment' latest version.
2021/02/15 09:33:30 Check release 'some-deployment' for deprecated or removed APIs...
2021/02/15 09:33:30 Found deprecated or removed Kubernetes API:
"apiVersion: apps/v1beta2
kind: Deployment"
Supported API equivalent:
"apiVersion: apps/v1
kind: Deployment"
2021/02/15 09:33:30 Finished checking release 'some-deployment' for deprecated or removed APIs.
2021/02/15 09:33:30 Deprecated or removed APIs exist, updating release: some-deployment.
2021/02/15 09:33:30 Map of release 'some-deployment' deprecated or removed APIs to supported versions, completed successfully.
and then run without the flag to apply the changes.
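For example:
$ helm mapkubeapis some-deployment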

Namespace deployment issue in Kubernetes Helm Chart

I am now testing deployment into different namespaces using Kubernetes. I am using a Kubernetes Helm chart for that; in my chart, I have deployment.yaml and service.yaml.
When I define the "namespace" parameter with the helm upgrade --install command, it is not working. When I read about that, I found a statement that in Helm 2 the namespace is not overwritten by the --namespace parameter.
I tried the following command:
helm upgrade --install kubedeploy --namespace=test pipeline/spacestudychart
NB: here my service is deploying into the default namespace.
(Screenshot of kubectl describe pod omitted.)
Here my "helm version" command output is like follows:
docker@mildevdcr01:~$ helm version
Client: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.14.3", GitCommit:"0e7f3b6637f7af8fcfddb3d2941fcc7cbebb0085", GitTreeState:"clean"}
Because of this, I tried adding the namespace in deployment.yaml, under metadata.namespace, like the following:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: test
I created two namespaces, test and prod. But this is not working either: when I add the namespace like this, my service does not come up and is not accessible, yet there is no error in the Jenkins console. When I defined the namespace in the helm upgrade --install command, it at least deployed (into the default namespace); here it is not deploying at all.
After this, I removed the namespace from deployment.yaml and added the same metadata.namespace again; there also I am not able to access the deployed service, but the Jenkins console output still shows success.
Why is the namespace not working with my Helm deployment? What changes do I need to make here to deploy to test/prod instead of the default namespace?
Remove namespace: test from all of your chart files and helm install --namespace=namespace2 ... should work.
On Helm 3.2+, I would suggest (based on this thread) moving the namespace creation to the CLI:
1) Add --create-namespace after the -n flag:
helm upgrade --install <name> <repo> -n <namespace> --create-namespace
2) Inside the different resources, pass the release namespace:
namespace: {{ .Release.Namespace }}
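Applied to the question's own deployment.yaml, that would look roughly like this (a sketch reusing the chart's existing fullname helper):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ include "spacestudychart.fullname" . }}
  namespace: {{ .Release.Namespace }}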

Helm Hook not triggered

Context: Kubernetes 1.0.3, Helm 2.8.2
Helm Hook: pre-install
weight: 0
delete-policy: before-hook-creation
Helm command: helm upgrade --install -n namespace
Problem description:
The hook block renders correctly when running in --dry-run mode, but after installing (without dry-run), no hook job is triggered.
I check the jobs using the command kubectl get jobs -n namespace.
A hook is a mechanism in Helm for intervening at certain points in a release's life cycle.
Hooks are defined via special annotations in the metadata section, e.g. "pre-install", "post-install", "pre-upgrade", etc. Example of a hook:
apiVersion: ...
kind: ....
metadata:
  annotations:
    "helm.sh/hook": "pre-install"
The full list of hooks can be found in the Helm documentation. In addition, more than one hook can be used.
In this case the "pre-upgrade" hook resolved the issue; it "executes on an upgrade request after templates are rendered, but before any resources are loaded into Kubernetes (e.g. before a Kubernetes apply operation)".