Helm upgrade error. atlassian-jira-software ingress - kubernetes

I am trying to upgrade the Helm release for the atlassian-jira-software ingress.
I have the following ingress template:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "atlassian-jira-software.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "atlassian-jira-software.name" . }}
    chart: {{ template "atlassian-jira-software.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
{{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
  {{- end }}
{{- end }}
I execute this command:
helm upgrade --dry-run -n atlassian jira .
The output of this command:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "jira-atlassian-jira-software" in namespace "atlassian" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "jira"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "atlassian"
kubectl version --short
The output:
Client Version: v1.19.12
Server Version: v1.19.13-eks-8df270
Please, help me!

Was the ingress originally installed by Helm? Check out its "labels" section:
kubectl get ingress jira-atlassian-jira-software -o json
If you don't find the expected values (as described in the error messages) and you are sure you know what you are doing, you can try adding the labels yourself by editing the ingress:
kubectl edit ingress jira-atlassian-jira-software
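Alternatively, instead of editing by hand, you could add exactly the keys named in the error message with kubectl label and kubectl annotate. This is only a sketch, with the values taken from your error output (release jira in namespace atlassian):
# adopt the existing ingress into the Helm release named "jira"
kubectl -n atlassian label ingress jira-atlassian-jira-software app.kubernetes.io/managed-by=Helm
kubectl -n atlassian annotate ingress jira-atlassian-jira-software meta.helm.sh/release-name=jira
kubectl -n atlassian annotate ingress jira-atlassian-jira-software meta.helm.sh/release-namespace=atlassian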
If you do this, make sure that you run a diff before you do the helm upgrade again (to ensure that you see what is going to happen in advance and that you don't blow away anything you did not intend to):
helm diff upgrade -n atlassian jira .
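Note that helm diff is provided by a plugin rather than core Helm; if it is not installed yet, something like the following should work (assuming the commonly used databus23/helm-diff plugin):
helm plugin install https://github.com/databus23/helm-diff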

Related

Issues migrating from v1beta to v1 for kubernetes ingress

At my firm our Kubernetes cluster (AKS) was recently updated to 1.22+, so I had to change our ingress manifest, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.
This is the earlier manifest for the ingress file:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          #{{- range .paths }}
          #- path: {{ . }}
          #  backend:
          #    serviceName: {{ $fullName }}
          #    servicePort: {{ $svcPort }}
          #{{- end }}
          - path: /callista/?(.*)
            backend:
              serviceName: amro-amroingress
              servicePort: 8080
    {{- end }}
{{- end }}
and after my changes it looks like this:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
  name: {{ include "amroingress.fullname" . }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /callista/?(.*)
            pathType: Prefix
            backend:
              service:
                name: amro-amroingres
                port:
                  number: 8080
          {{- end }}
    {{- end }}
But after I made the changes and tried to deploy using Helm, I received this error:
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
I am not sure why this error occurs even though the ingress manifest has changed and I have been stuck at this for a few days now. I am new to kubernetes and ingress in general, any help will be massively appreciated.
The API resources on the control plane have been upgraded, but the ones in the Helm-stored release manifest (kept within a Secret resource) are old.
Here is the resolution:
$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
After this, run helm upgrade again.
Hi, I had the same problem: one of our deployments was failing after we updated our ingress files from apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. We had the same error with the app that was using Helm:
UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
Here is the solution: install the mapkubeapis plugin for Helm by running the commands below in your terminal.
helm plugin install https://github.com/hickeyma/helm-mapkubeapis
Downloading and installing helm-mapkubeapis v0.0.15 ...
https://github.com/hickeyma/helm-mapkubeapis/releases/download/v0.0.15/helm-mapkubeapis_0.0.15_darwin_amd64.tar.gz
Installed plugin: mapkubeapis
helm plugin list
NAME         VERSION  DESCRIPTION
mapkubeapis  0.0.15   Map release deprecated Kubernetes APIs in-place
$ helm mapkubeapis release-name --namespace test-namespace --dry-run
Then change release-name to the name of your release/deployment that is failing. With --dry-run, this command will list the manifests that still contain the old v1beta1 APIs.
helm mapkubeapis release-name --namespace test-namespace
Finally, run the command above without --dry-run; it will update the stored manifests and remove the deprecated APIs. Now go to your pipeline and run the deployment again, and it should work this time.
After trying out a lot more stuff, I finally decided to use helm uninstall to remove the deployments and the charts currently in the cluster.
I then simply installed again with the new ingress manifest mentioned in the question, and that worked; I was finally able to deploy. So it seems the manifest I had modified did not have any issues itself.
Uninstalling and installing release worked for me.
1. helm uninstall <release>
2. helm install <release> <chart>
If you are doing the deployments through a pipeline, you will have to perform step 1 manually and then just re-trigger the pipeline.

Helm upgrade is causing deployment failure

We configured the CSI driver in our cluster for secret management and used the below SecretProviderClass template to automatically assign secrets to the deployment's environment variables. This setup is working fine.
But there are 2 things I have issues with. Whenever changes are made to the secrets, say adding a new secret to the YAML and the key vault, the next release fails on the helm upgrade command, stating that the specified secret is not found.
To work around this I have to uninstall all Helm releases and install them again, which means downtime. How can I handle this scenario without any downtime?
Secondly, is there any recommended way to restart the Pods when the secret template changes?
values.yaml for MyAppA
keyvault:
  name: mykv
  tenantId: ${tenantId}$
  clientid: "#{spid}#"
  clientsecret: "#{spsecret}#"
  secrets:
    - MyAPPA_SECRET1_NAME1
    - MyAPPA_SECRET2_NAME2
    - MyAPPA_SECRET3_NAME3
deployment.yaml, the env part is as below:
{{- if eq .Values.keyvault.enabled true }}
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
  - name: {{ . }}
    valueFrom:
      secretKeyRef:
        name: {{ $.Release.Name }}-kvsecret
        key: {{ . }}
{{- end }}
{{- end }}
volumeMounts:
  - name: {{ $.Release.Name }}-volume
    mountPath: '/mnt/secrets-store'
    readOnly: true
volumes:
  - name: {{ $.Release.Name }}-volume
    csi:
      driver: 'secrets-store.csi.k8s.io'
      readOnly: true
      volumeAttributes:
        secretProviderClass: {{ $.Release.Name }}-secretproviderclass
      nodePublishSecretRef:
        name: {{ $.Release.Name }}-secrets-store-creds
The SecretProviderClass yaml file is as below:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: {{ $.Release.Name }}-secretproviderclass
  labels:
    app: {{ $.Release.Name }}
    chart: "{{ $.Release.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  provider: azure
  secretObjects:
    - data:
        {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - key: {{ . }}
          objectName: {{ $.Release.Name | upper }}-{{ . }}
        {{- end }}
      secretName: {{ $.Release.Name }}-kvsecret
      type: opaque
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: {{ .Values.keyvault.name | default "mydev-kv" }}
    objects: |
      array:
        {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - |
          objectName: {{ $.Release.Name | upper }}-{{ . }}
          objectType: secret
        {{- end }}
    tenantId: {{ .Values.keyvault.tenantid }}
{{- end }}
{{- end -}}
{{- define "commonobject.secretproviderclass" -}}
{{- template "commonobject.util.merge" (append . "commonobject.secretproviderclass.tpl") -}}
{{- end -}}
The problem is not in the "helm upgrade" command. I discovered this is a limitation of a CSI driver or SecretProviderClass. When the deployment is already created, the SecretProviderClass resource is updated but the "SecretProviderClassPodStatuses" is not, so secrets are not updated.
Two potential solutions to update secrets:
1. delete the secret and restart/recreate the pod => this works but it sounds more like a workaround than an actual solution
2. set enableSecretRotation to true => it has been implemented in the CSI driver recently and it's in an 'alpha' version
https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html
Edited:
In the end, I used this command to enable automatic secret rotation in Azure Kubernetes Service:
az aks addon update -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 0.5m
You can use the following command to check if this option is enabled:
az aks addon show -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider
More info here:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation
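For the second part of the question (restarting the Pods when the secret template itself changes), a common Helm-side pattern, independent of the CSI driver, is to add a checksum of the rendered template as a pod annotation so that any change to that template forces a rollout. A minimal sketch, assuming the SecretProviderClass template lives in templates/secretproviderclass.yaml (the file name here is an assumption):
# in the Deployment's pod template; hypothetical file name for the SecretProviderClass template
spec:
  template:
    metadata:
      annotations:
        checksum/secretproviderclass: {{ include (print $.Template.BasePath "/secretproviderclass.yaml") . | sha256sum }}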

nil pointer evaluating interface when installing a helm chart

I'm trying to install a chart to my cluster but I'm getting an error:
Error: template: go-api/templates/deployment.yaml:18:24: executing "go-api/templates/deployment.yaml"
at <.Values.deployment.container.name>: nil pointer evaluating interface {}.name
However, I executed the same commands for two other charts and it worked fine.
The template file I'm using is this:
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: {{ .Values.namespace }}
  labels: {{- include "chart.labels" . | nindent 4 }}
  name: {{ .Values.deployment.name }}
spec:
  replicas: {{ .Values.deployment.replicas }}
  selector:
    matchLabels: {{ include "chart.matchLabels" . | nindent 6 }}
  template:
    metadata:
      labels:
        app.kubernetes.io/name: {{ template "chart.name" . }}
        app.kubernetes.io/instance: {{ .Release.Name }}
    spec:
      containers:
        - name: {{ .Values.deployment.container.name }}
          image: {{ .Values.deployment.container.image }}
          imagePullPolicy: Never
          ports:
            - containerPort: {{ .Values.deployment.container.port }}
This can happen if the Helm values you're using to install don't have that particular block:
namespace: default
deployment:
  name: a-name
  replicas: 1
  # but no container:
To avoid this specific error in the template code, it's useful to pick out the parent dictionary into a variable; then if the parent is totally absent, you can decide what to do about it. This technique is a little more useful if there are optional fields or sensible defaults:
{{- $deployment := .Values.deployment | default dict }}
metadata:
  name: {{ $deployment.name | default (include "chart.fullname" .) }}
spec:
{{- if $deployment.replicas }}
  replicas: {{ $deployment.replicas }}
{{- end }}
If you really can't work without the value, Helm has a required function that can print a more specific error message.
{{- $deployment := .Values.deployment | required "deployment configuration is required" }}
(My experience has been that required values are somewhat frustrating as an end user, particularly if you're trying to run someone else's chart, and I would try to avoid this if possible.)
Given what you show, it's also possible you're making the chart too configurable. The container name, for example, is mostly a detail that only appears if you have a multi-container pod (or are using Istio); the container port is a fixed attribute of the image you're running. You can safely fix these values in the Helm template file, and then it's reasonable to provide defaults for things like the replica count or image name (consider setting the repository name, image name, and tag as separate variables).
{{- $deployment := .Values.deployment | default dict }}
{{- $registry := $deployment.registry | default "docker.io" }}
{{- $image := $deployment.image | default "my/image" }}
{{- $tag := $deployment.tag | default "latest" }}
containers:
  - name: app # fixed
    image: {{ printf "%s/%s:%s" $registry $image $tag }}
{{- with .Values.imagePullPolicy }}
    imagePullPolicy: {{ . }}
{{- end }}
    ports:
      - name: http
        containerPort: 3000 # fixed
If the value is defined in your values file and you're still getting the error, the issue could be due to accessing that value inside range or a similar function that changes the context.
For example, to use a named template mySuffix that was defined using .Values, and to use it inside range with the template function, we need to pass $ to the template function instead of the usual .:
{{- define "mySuffix" -}}
{{- .Values.suffix }}
{{- end }}
...
{{- range .Values.listOfValues }}
echo {{ template "mySuffix" $ }}
{{- end }}

Multiple resources using single HELM template

We had been using a single (public) ingress per application by default, but with a recent requirement we need to expose a (private) endpoint as well for some of the apps. That means we had a single template that looks like this:
templates/ingress.yaml
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
{{ include "app.labels" . | indent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: {{ . }}
            backend:
              serviceName: {{ $fullName }}
              servicePort: http
          {{- end }}
    {{- end }}
{{- end }}
templates/cert.yaml
{{- if .Values.ingress.tls -}}
apiVersion: certmanager.k8s.io/v1alpha1
kind: Certificate
metadata:
  name: {{ .Values.ingress.name }}
  namespace: {{ .Values.ingress.namespace }}
spec:
  {{- range .Values.ingress.tls }}
  secretName: {{ .secretName }}
  duration: 24h
  renewBefore: 12h
  issuerRef:
    name: {{ .issuerRef.name }}
    kind: {{ .issuerRef.kind }}
  dnsNames:
    {{- range .hosts }}
    - {{ . | quote }}
    {{- end }}
  {{- end -}}
{{- end -}}
And the values.yaml looks like this:
ingress:
  enabled: true
  name: apps-ingress
  namespace: app1-namespace
  annotations:
    kubernetes.io/ingress.class: hybrid-external
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  hosts:
    - host: apps.test.cluster
      paths:
        - /
  tls:
    - secretName: app1-tls
      issuerRef:
        name: vault-issuer
        kind: ClusterIssuer
      hosts:
        - "apps.test.cluster"
So, to accommodate the new setup, I have added the below block to the values.yaml file:
ingress-private:
  enabled: true
  name: apps-ingress-private
  namespace: app1-namespace
  annotations:
    kubernetes.io/ingress.class: hybrid-internal
    nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  hosts:
    - host: apps.internal.test.cluster
      paths:
        - /
  tls:
    - secretName: app1-tls
      issuerRef:
        name: vault-issuer
        kind: ClusterIssuer
      hosts:
        - "apps.internal.test.cluster"
I also duplicated both templates, i.e. templates/ingress-private.yaml and templates/certs-private.yaml, and it is working fine, but my question here is: is there a way to use a single template for each of the ingress and certs and create the resources conditionally?
As I mentioned above, some apps need an internal ingress and some don't. What I want to do is make the public ingress/certs the default and the private ones optional. I have been using the {{- if .Values.ingress.enabled -}} check to decide whether an ingress is required, but in 2 different files.
Also, in the values.yaml file, rather than having 2 different blocks, is there a way to use a list when multiple resources are required?
There are a couple of ways to approach this problem.
The way you have it now, with one file per resource but some duplication of logic, is a reasonably common pattern. It's very clear exactly what resources are being created, and there's less logic involved. The Go templating language is a little bit specialized, so this can be more approachable to other people working on your project.
If you do want to combine things together there are a couple of options. As @Matt notes in their comment, you can put multiple Kubernetes resources in the same file so long as they're separated by the YAML --- document separator.
{{/* ... arbitrary templating logic ... */}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}
...
{{/* ... more logic ... */}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $fullName }}-private
...
The only thing that matters here is that the output of the template is a valid multi-document YAML file. You can use the helm template command to see what comes out without actually sending it to the cluster.
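For example, something like this should print the rendered documents locally (the release name and chart path here are placeholders):
helm template my-release . -f values.yaml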
This approach pairs well with having a list of configuration rules in your YAML file
ingresses:
  - name: apps-ingress
    annotations:
      kubernetes.io/ingress.class: hybrid-external
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
  - name: apps-ingress-private
    annotations:
      kubernetes.io/ingress.class: hybrid-internal
      nginx.ingress.kubernetes.io/backend-protocol: "HTTP"
You can use the Go template range construct to loop over all of these. Note that this borrows the . special variable, so if you do refer to arbitrary other things in .Values you need to save away the current value of it.
{{- $top := . -}}
{{- range $ingress := .Values.ingresses -}}
---
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ $ingress.name }}
  annotations: {{- toYaml $ingress.annotations | nindent 4 }}
...
{{ end }}
Thanks David for getting us started, but it did not work for me.
I used the following to generate multiple resources in a single YAML file.
{{- if .Values.nfs }}
{{- $top := . -}}
{{- range $index, $pvc := .Values.nfs }}
{{- if $pvc.persistentClaim }}
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: "{{ $top.Chart.Name }}-{{ $top.Release.Name }}-nfs-{{ $index }}"
  namespace: {{ $top.Release.Namespace }}
....
  resources:
    requests:
      storage: {{ $pvc.persistentClaim.storageRequest }}
{{- end }}
{{- end }}
{{- end }}
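For reference, the values this loop expects would look roughly like the following. This is only a sketch that assumes .Values.nfs is a list and uses only the key the template above actually reads (persistentClaim.storageRequest); the sizes are illustrative:
nfs:
  - persistentClaim:
      storageRequest: 10Gi
  - persistentClaim:
      storageRequest: 50Gi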
The Helm docs show how to define and use range:
https://helm.sh/docs/chart_template_guide/variables/

Helm nginx-ingress 404 error on all routes on GKE

I am trying to deploy an app to GKE. I am using the nginx-ingress controller and I have a bunch of ingress rules for every route. For some reason I am not sure of, I am getting a 404 error on all the routes/pages.
One of the ingress resource definitions is:
{{- $fullName := include "nginx.fullname" . -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
name: {{ $fullName }}-base
labels:
app: oxtrust
annotations:
kubernetes.io/ingress.class: "nginx"
nginx.ingress.kubernetes.io/app-root: "/identity"
nginx.ingress.kubernetes.io/affinity: "cookie"
nginx.ingress.kubernetes.io/session-cookie-name: "route"
nginx.ingress.kubernetes.io/session-cookie-hash: "sha1"
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ . | quote }}
http:
paths:
- path: {{ $ingressPath }}
backend:
serviceName: oxtrust
servicePort: 8080
{{- end }}
On minikube the setup is working okay since I do need to use nginx-ingress.
I really don't understand what I am doing wrong. I read somewhere that the mandatory and cloud-generic objects should be deployed, but I am not sure about that.
Could someone shed some light on this, please? Thank you.