I'm currently using kubectl create -f clusterRole.yaml, and I was wondering whether I can use Helm to install it automatically with my chart.
I was looking at the Helm documentation, and it uses kubectl create -f for the clusterRole file. Is there any reason this can't be done through Helm? Is it because of access-privilege concerns?
As already mentioned in the comments, you can install your RBAC roles using your Helm chart. In fact, many Helm charts configure Roles/ClusterRoles at install time. Here's an example from the Elasticsearch Helm chart, which configures a Role (and a matching RoleBinding) at install time:
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $fullName | quote }}
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: {{ $fullName | quote }}
rules:
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    resourceNames:
      {{- if eq .Values.podSecurityPolicy.name "" }}
      - {{ $fullName | quote }}
      {{- else }}
      - {{ .Values.podSecurityPolicy.name | quote }}
      {{- end }}
    verbs:
      - use
{{- end -}}
Another example with clusterRole can be found here.
To sum up, if your context allows you to install the desired RBAC objects (or anything else) with kubectl, then you will be able to do the same with Helm.
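For completeness, here is a minimal sketch of what a ClusterRole plus ClusterRoleBinding template in your own chart could look like; the rbac flag, names, rules and the referenced ServiceAccount are illustrative assumptions, not taken from any particular chart:
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ .Release.Name }}-reader   # hypothetical name
rules:
  - apiGroups: [""]
    resources: ["pods", "services"]
    verbs: ["get", "list", "watch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: {{ .Release.Name }}-reader
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: {{ .Release.Name }}-reader
subjects:
  - kind: ServiceAccount
    name: {{ .Release.Name }}-sa     # assumed to be created elsewhere in the chart
    namespace: {{ .Release.Namespace }}
{{- end -}}
Dropping a file like this into the chart's templates/ directory means helm install creates the RBAC objects together with the rest of the release, exactly as kubectl create -f would have.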
Related
I'm setting up two ingresses in different namespaces with ingress-nginx (https://github.com/kubernetes/ingress-nginx). My understanding is that I need to install ingress-nginx for each namespace, which creates the IngressClass I need.
I've installed the ingress-nginx with this:
helm install ingress-ns1 ingress-nginx/ingress-nginx \
--namespace ns1 \
--set controller.ingressClassResource.name=ns1-class \
--set controller.scope.namespace=ns1 \
--set controller.ingressClassByName=true
and then the same again for namespace ns2. My understanding is that this created the IngressClasses I need, and it seems to work.
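For reference, the second install is just the mirror image of the first; roughly (the release name ingress-ns2 is assumed here):
helm install ingress-ns2 ingress-nginx/ingress-nginx \
  --namespace ns2 \
  --set controller.ingressClassResource.name=ns2-class \
  --set controller.scope.namespace=ns2 \
  --set controller.ingressClassByName=true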
I've also got an Ingress configuration templated by Helm that uses the IngressClasses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Values.ingress.name }}"
  namespace: "{{ .Values.namespace }}"
  annotations:
    cert-manager.io/cluster-issuer: "{{ .Values.issuer.name }}"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  rules:
  {{- range $v := .Values.ingress.rules }}
    - host: {{ $v.host.name }}
      http:
        paths:
        {{- range $p := $v.host.paths }}
          - path: {{ $p.path.path }}
            pathType: Prefix
            backend:
              service:
                name: {{ $p.path.name }}
                port:
                  number: {{ $p.path.port }}
        {{- end }}
  {{- end }}
  tls:
    - hosts:
      {{- range $v := .Values.ingress.rules }}
        - {{ $v.host.name }}
      {{- end }}
      secretName: "{{ .Values.issuer.name }}"
This seems to work and uses the IngressClass that I've templated into {{ .Values.ingress.ingressClassName }}; these end up being ns1-class and ns2-class.
However, I then end up with four load balancers created, rather than two!
Looking at k9s, it seems that installing ingress-nginx with Helm creates the two IngressClasses I want, but also adds its own ingress controller pods. I only want the two created with my Ingress definition above.
How do I still set up the IngressClass to use ingress-nginx, but not have the extra controllers created by installing ingress-nginx?
I've read https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-controllers a few times, but I find it quite confusing: there are snippets of configuration that I don't know what to do with or where to put.
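For reference, an IngressClass by itself is only a small cluster-scoped object that points at a controller implementation; a minimal hand-written one for ingress-nginx looks roughly like this (the name is illustrative):
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ns1-class
spec:
  controller: k8s.io/ingress-nginx
On its own it doesn't create any pods or load balancers; some controller deployment still has to claim it.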
In my firm, our Kubernetes cluster was recently updated to 1.22+, and we are using AKS. So I had to change our ingress manifest, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.
This is the earlier manifest for the ingress file:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          #{{- range .paths }}
          #- path: {{ . }}
          #  backend:
          #    serviceName: {{ $fullName }}
          #    servicePort: {{ $svcPort }}
          #{{- end }}
          - path: /callista/?(.*)
            backend:
              serviceName: amro-amroingress
              servicePort: 8080
  {{- end }}
{{- end }}
and after my changes it looks like this:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
  name: {{ include "amroingress.fullname" . }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . | quote }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /callista/?(.*)
            pathType: Prefix
            backend:
              service:
                name: amro-amroingres
                port:
                  number: 8080
          {{- end }}
  {{- end }}
But after I made the changes and tried to deploy using Helm, I received this error:
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
I am not sure why this error occurs even though the ingress manifest has changed, and I have been stuck on this for a few days now. I am new to Kubernetes and ingress in general; any help will be massively appreciated.
The API resources on the control plane are upgraded, but the ones in the Helm-stored release manifest (kept within a Secret resource) are still old.
Here is the resolution:
$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
After this, run helm upgrade again.
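A minimal sketch of the sequence, assuming a release called my-release-name in namespace ns and a chart directory ./mychart (all placeholders); the label selector works because Helm 3 stores its release records in Secrets labelled owner=helm:
# inspect the stored release records that still contain the old API versions
kubectl get secrets -n ns -l owner=helm,name=my-release-name
# preview what mapkubeapis would rewrite, then apply it for real
helm mapkubeapis my-release-name --namespace ns --dry-run
helm mapkubeapis my-release-name --namespace ns
# the upgrade can now diff against the patched release manifest
helm upgrade my-release-name ./mychart --namespace ns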
Hi, I had the same problem: one of the deployments was failing after we updated our ingress files from apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1, and the app that was deployed with Helm gave the same error:
UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
Here is the solution. Install the mapkubeapis plugin by running the command below in your terminal:
helm plugin install https://github.com/hickeyma/helm-mapkubeapis
Downloading and installing helm-mapkubeapis v0.0.15 ...
https://github.com/hickeyma/helm-mapkubeapis/releases/download/v0.0.15/helm-mapkubeapis_0.0.15_darwin_amd64.tar.gz
Installed plugin: mapkubeapis
$ helm plugin list
NAME         VERSION  DESCRIPTION
mapkubeapis  0.0.15   Map release deprecated Kubernetes APIs in-place
$ helm mapkubeapis release-name --namespace test-namespace --dry-run
Change release-name to the name of your release/deployment that is failing. With --dry-run, this command will list the manifests that still contain the old v1beta1 APIs.
helm mapkubeapis release-name --namespace test-namespace
Finally, run the command above without --dry-run; it will update the stored manifests and remove the deprecated APIs. Now go to your pipeline, run the deployment again, and it will work this time.
After trying out a lot more stuff, I finally decided to use helm uninstall to remove the deployments and charts currently in the cluster.
I then simply installed again with the new ingress manifest mentioned in the question, and that worked: I was finally able to deploy. So the modified manifest itself did not have any issues, it seems.
Uninstalling and reinstalling the release worked for me.
1. helm uninstall <release>
2. helm install <release>
If you are doing the deployments through a pipeline, you will have to perform step 1 manually and then just re-trigger the pipeline (see the sketch below).
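A minimal sketch of that sequence, with placeholder release, chart and namespace names:
# 1. remove the stuck release (its stored manifests go away with it)
helm uninstall my-release -n my-namespace
# 2. install again from the chart that now uses networking.k8s.io/v1
helm install my-release ./mychart -n my-namespace
Note that uninstalling deletes the release's resources, so expect downtime between the two steps.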
We configured the Secrets Store CSI driver in our cluster for secret management and used the SecretProviderClass template below to automatically assign secrets to the deployment's environment variables. This setup is working fine.
But there are two things I have issues with. First, whenever the secrets change, say a new secret is added to the YAML and to the key vault, the next release fails at the helm upgrade command, stating that the specified secret is not found.
To work around this I have to uninstall the Helm release and install it again, which means downtime. How can I handle this scenario without any downtime?
Secondly, is there any recommended way to restart the Pods when the secret template changes?
values.yaml for MyAppA
keyvault:
  name: mykv
  tenantId: ${tenantId}$
  clientid: "#{spid}#"
  clientsecret: "#{spsecret}#"
  secrets:
    - MyAPPA_SECRET1_NAME1
    - MyAPPA_SECRET2_NAME2
    - MyAPPA_SECRET3_NAME3
deployment.yaml, the env part is as below:
{{- if eq .Values.keyvault.enabled true }}
{{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
- name: {{ . }}
  valueFrom:
    secretKeyRef:
      name: {{ $.Release.Name }}-kvsecret
      key: {{ . }}
{{- end }}
{{- end }}
volumeMounts:
  - name: {{ $.Release.Name }}-volume
    mountPath: '/mnt/secrets-store'
    readOnly: true
volumes:
  - name: {{ $.Release.Name }}-volume
    csi:
      driver: 'secrets-store.csi.k8s.io'
      readOnly: true
      volumeAttributes:
        secretProviderClass: {{ $.Release.Name }}-secretproviderclass
      nodePublishSecretRef:
        name: {{ $.Release.Name }}-secrets-store-creds
The secretProviderClass yaml file is as below:
apiVersion: secrets-store.csi.x-k8s.io/v1
kind: SecretProviderClass
metadata:
  name: {{ $.Release.Name }}-secretproviderclass
  labels:
    app: {{ $.Release.Name }}
    chart: "{{ $.Release.Name }}-{{ .Chart.Version }}"
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
spec:
  provider: azure
  secretObjects:
    - data:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - key: {{ . }}
          objectName: {{ $.Release.Name | upper }}-{{ . }}
      {{- end }}
      secretName: {{ $.Release.Name }}-kvsecret
      type: opaque
  parameters:
    usePodIdentity: "false"
    useVMManagedIdentity: "false"
    userAssignedIdentityID: ""
    keyvaultName: {{ .Values.keyvault.name | default "mydev-kv" }}
    objects: |
      array:
      {{- range .Values.keyvault.secrets }}{{/* <-- only one range loop */}}
        - |
          objectName: {{ $.Release.Name | upper }}-{{ . }}
          objectType: secret
      {{- end }}
    tenantId: {{ .Values.keyvault.tenantid }}
{{- end }}
{{- end -}}
{{- define "commonobject.secretproviderclass" -}}
{{- template "commonobject.util.merge" (append . "commonobject.secretproviderclass.tpl") -}}
{{- end -}}
The problem is not in the helm upgrade command. I discovered this is a limitation of the CSI driver / SecretProviderClass: when the deployment has already been created, the SecretProviderClass resource is updated but the SecretProviderClassPodStatuses is not, so the secrets are not updated.
Two potential solutions to update secrets:
- delete the secret and restart/recreate the pod => this works, but it sounds more like a workaround than an actual solution
- set enableSecretRotation to true => this has been implemented in the CSI driver recently and is still in an 'alpha' state (see the sketch after this list)
https://secrets-store-csi-driver.sigs.k8s.io/topics/secret-auto-rotation.html
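A rough sketch of what enabling rotation looks like if you run the upstream Secrets Store CSI Driver Helm chart yourself; the release name, namespace, repo alias and two-minute poll interval are all assumptions (on AKS, the managed add-on command in the edit below is the equivalent):
helm upgrade csi-secrets-store secrets-store-csi-driver/secrets-store-csi-driver \
  --namespace kube-system \
  --reuse-values \
  --set enableSecretRotation=true \
  --set rotationPollInterval=2m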
Edited:
In the end, I used this command to enable automatic secret rotation in Azure Kubernetes Service:
az aks addon update -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider --enable-secret-rotation --rotation-poll-interval 0.5m
You can use the following command to check if this option is enabled:
az aks addon show -g [resource-group] -n [aks-name] -a azure-keyvault-secrets-provider
More info here:
https://learn.microsoft.com/en-us/azure/aks/csi-secrets-store-driver#enable-and-disable-autorotation
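For the second part of the question (restarting Pods when the secret template changes), a common Helm pattern is a checksum annotation on the pod template, so that any change to the rendered SecretProviderClass rolls the Deployment. A minimal sketch, assuming the template file is templates/secretproviderclass.yaml:
spec:
  template:
    metadata:
      annotations:
        # re-rendering the SecretProviderClass changes this hash and triggers a rollout
        checksum/secretproviderclass: {{ include (print $.Template.BasePath "/secretproviderclass.yaml") . | sha256sum }}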
While deploying a Kubernetes application, I want to check whether a resource is already present; if so, it shall not be rendered. To achieve this behaviour, Helm's lookup function is used. As it seems, the lookup is always empty while deploying (not a dry run). Any ideas what I am doing wrong?
---
{{- if not (lookup "v1" "ServiceAccount" "my-namespace" "my-sa") }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}#{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
Running the corresponding kubectl command lists the expected service account:
kubectl get ServiceAccount my-sa -n my-namespace
Helm version: 3.5.4
I think you cannot use this if-statement to validate what you want.
The lookup function returns a list of the objects found by your lookup. So, if you want to validate that there is no ServiceAccount with the properties you specified, you should check whether the returned result is empty.
Test something like:
---
{{ if eq (len (lookup "v1" "ServiceAccount" "my-namespace" "my-sa")) 0 }}
apiVersion: v1
kind: ServiceAccount
metadata:
  name: {{ .Chart.Name }}-{{ .Values.environment }}
  namespace: {{ .Values.namespace }}
  labels:
    app: {{ $.Chart.Name }}
    environment: {{ .Values.environment }}
  annotations:
    "helm.sh/resource-policy": keep
    iam.gke.io/gcp-service-account: "{{ .Chart.Name }}-{{ .Values.environment }}#{{ .Values.gcpProjectId }}.iam.gserviceaccount.com"
{{- end }}
see: https://helm.sh/docs/chart_template_guide/functions_and_pipelines/#using-the-lookup-function
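One detail worth keeping in mind from those docs: lookup only talks to the cluster during a real install or upgrade; during helm template or any --dry-run it returns an empty result, so the guard will always render the resource there. A quick way to see the difference, with placeholder release and chart names:
# rendered offline: lookup returns an empty result, the ServiceAccount is always emitted
helm template my-release ./mychart
# real upgrade: lookup queries the API server, so the guard can actually skip the resource
helm upgrade --install my-release ./mychart -n my-namespace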
I'm trying to update the Helm ingress for Atlassian Jira Software.
I have this ingress template:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "atlassian-jira-software.fullname" . -}}
{{- $servicePort := .Values.service.port -}}
{{- $ingressPath := .Values.ingress.path -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    app: {{ template "atlassian-jira-software.name" . }}
    chart: {{ template "atlassian-jira-software.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  {{- with .Values.ingress.annotations }}
  annotations:
{{ toYaml . | indent 4 }}
  {{- end }}
spec:
{{- if .Values.ingress.tls }}
  tls:
  {{- range .Values.ingress.tls }}
    - hosts:
      {{- range .hosts }}
        - {{ . }}
      {{- end }}
      secretName: {{ .secretName }}
  {{- end }}
{{- end }}
  rules:
  {{- range .Values.ingress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - path: {{ $ingressPath }}
            backend:
              service:
                name: {{ $fullName }}
                port:
                  name: http
  {{- end }}
{{- end }}
Execute this command:
helm upgrade --dry-run -n atlassian jira .
The output of this command:
Error: UPGRADE FAILED: rendered manifests contain a resource that already exists. Unable to continue with update: Ingress "jira-atlassian-jira-software" in namespace "atlassian" exists and cannot be imported into the current release: invalid ownership metadata; label validation error: missing key "app.kubernetes.io/managed-by": must be set to "Helm"; annotation validation error: missing key "meta.helm.sh/release-name": must be set to "jira"; annotation validation error: missing key "meta.helm.sh/release-namespace": must be set to "atlassian"
kubectl version --short
The output:
Client Version: v1.19.12
Server Version: v1.19.13-eks-8df270
Please, help me!
Was the ingress originally installed by Helm? Check out its "labels" section:
kubectl get ingress jira-atlassian-jira-software -o json
If you don't find the expected values (as described in the error message) and you are sure you know what you are doing, you can try adding the labels and annotations yourself by editing the ingress:
kubectl edit ingress jira-atlassian-jira-software
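Equivalently, a non-interactive sketch that sets exactly the ownership metadata named in the error message (values taken from the error above):
kubectl -n atlassian label ingress jira-atlassian-jira-software \
  app.kubernetes.io/managed-by=Helm
kubectl -n atlassian annotate ingress jira-atlassian-jira-software \
  meta.helm.sh/release-name=jira \
  meta.helm.sh/release-namespace=atlassian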
If you do this, make sure that you run a diff (from the helm-diff plugin) before you run helm upgrade again, to ensure that you see in advance what is going to happen and that you don't blow away anything you did not intend to:
helm diff upgrade -n atlassian jira .