k8s: helm install ingress-nginx only create IngressClass?

I'm setting up two ingresses in different namespaces with ingress-nginx (https://github.com/kubernetes/ingress-nginx). My understanding is that I need to install ingress-nginx for each namespace, which creates the IngressClass I need.
I've installed the ingress-nginx with this:
helm install ingress-ns1 ingress-nginx/ingress-nginx \
  --namespace ns1 \
  --set controller.ingressClassResource.name=ns1-class \
  --set controller.scope.namespace=ns1 \
  --set controller.ingressClassByName=true
then the same again for namespace ns2 (roughly the command shown below). My understanding is that this creates the IngressClasses I need, and it seems to work.
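The second install would presumably be the same command with ns2 substituted:
helm install ingress-ns2 ingress-nginx/ingress-nginx \
  --namespace ns2 \
  --set controller.ingressClassResource.name=ns2-class \
  --set controller.scope.namespace=ns2 \
  --set controller.ingressClassByName=true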
I've also got an Ingress configuration templated by Helm that uses the IngressClasses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Values.ingress.name }}"
  namespace: "{{ .Values.namespace }}"
  annotations:
    cert-manager.io/cluster-issuer: "{{ .Values.issuer.name }}"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  rules:
    {{- range $v := .Values.ingress.rules }}
    - host: {{ $v.host.name }}
      http:
        paths:
          {{- range $p := $v.host.paths }}
          - path: {{ $p.path.path }}
            pathType: Prefix
            backend:
              service:
                name: {{ $p.path.name }}
                port:
                  number: {{ $p.path.port }}
          {{- end }}
    {{- end }}
  tls:
    - hosts:
        {{- range $v := .Values.ingress.rules }}
        - {{ $v.host.name }}
        {{- end }}
      secretName: "{{ .Values.issuer.name }}"
This seems to work and uses the IngressClass which I've templated into {{ .Values.ingress.ingressClassName }}; these end up being ns1-class and ns2-class.
However, I then end up with four load balancers created, rather than two!
Looking in k9s, it seems that installing ingress-nginx with Helm creates the two IngressClasses I want, but also adds its own ingress controller pods. I only want the two created by my Ingress definitions above.
How do I still set up the IngressClass to use ingress-nginx, but not have the controller created by installing ingress-nginx?
I've read this (https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-controllers) a few times; I find it quite confusing, as there are snippets of configuration that I don't know what to do with or where to put.
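(For context, an IngressClass on its own is just a small cluster-scoped object that names a controller; a minimal standalone manifest, assuming the default ingress-nginx controller string, would look roughly like:)
apiVersion: networking.k8s.io/v1
kind: IngressClass
metadata:
  name: ns1-class
spec:
  # controller identifier that a running ingress-nginx instance watches for
  controller: k8s.io/ingress-nginx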

Related

Removing secretName from ingress.yaml results in a dangling certificate in the K8s cluster, as it doesn't automatically get deleted; any workaround?

My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}-a
  namespace: {{ .Release.Namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  tls:
    - hosts:
        - {{ .Values.urlFormat | quote }}
      secretName: {{ .Values.name }}-cert   # <-------------- This Line
  ingressClassName: nginx-customer-wildcard
  rules:
    - host: {{ .Values.urlFormat | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.name }}-a
                port:
                  number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName becomes customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts to use the default certificate, which is fine and what I expect, but this also means the customer-tls-cert certificate is still hanging around in the cluster, unused. Is there a way that, when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little, but that's not supported anymore on the versions I am using. I am not even sure if it would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge, where edge is the namespace where the cert resides.
Edit: this is what my certificate.yaml looks like:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  namespace: edge
  annotations:
    vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
    vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
    vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
    vault.security.banzaicloud.io/vault-namespace: {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.certificate.cert }}
  tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership, which doesn't apply here (if you delete a Job, the cluster also deletes the corresponding Pods). If you have a Secret or a ConfigMap that's referenced by name, the object will remain even if you delete the last reference to it.
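To illustrate (names here are hypothetical): a Pod created by a Job carries an ownerReference pointing at the Job, which is why deleting the Job garbage-collects the Pod; a Secret that an Ingress references by name has no such link.
apiVersion: v1
kind: Pod
metadata:
  name: my-job-abc12              # hypothetical Pod created by the Job controller
  ownerReferences:
    - apiVersion: batch/v1
      kind: Job
      name: my-job                # deleting this Job garbage-collects the Pod
      uid: <uid-of-the-owning-job>
      controller: true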
In Helm, if a chart contains some object, and then you upgrade the chart to a newer version or values that don't include that object, then Helm will delete the object. This would require that the Secret actually be part of the chart, like
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  ...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
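For example (the release and chart names here are placeholders):
helm upgrade my-release ./my-chart --set createSecret=false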
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.

Issues migrating from v1beta to v1 for kubernetes ingress

In my firm our Kubernetes cluster was recently updated to 1.22+ and we are using AKS, so I had to change the manifest of our ingress YAML file, which was using networking.k8s.io/v1beta1, to be compliant with the new apiVersion networking.k8s.io/v1.
This is the earlier manifest for the ingress file:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          #{{- range .paths }}
          #- path: {{ . }}
          #  backend:
          #    serviceName: {{ $fullName }}
          #    servicePort: {{ $svcPort }}
          #{{- end }}
          - path: /callista/?(.*)
            backend:
              serviceName: amro-amroingress
              servicePort: 8080
    {{- end }}
{{- end }}
and after my changes it looks like this:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "amroingress.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
apiVersion: networking.k8s.io/v1
{{- end }}
kind: Ingress
metadata:
  name: {{ include "amroingress.fullname" . }}
  labels:
    {{- include "amroingress.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /callista/?(.*)
            pathType: Prefix
            backend:
              service:
                name: amro-amroingres
                port:
                  number: 8080
          {{- end }}
    {{- end }}
But after I made the changes and tried to deploy using Helm, I received this error:
Error: UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
I am not sure why this error occurs even though the ingress manifest has changed, and I have been stuck on this for a few days now. I am new to Kubernetes and ingress in general; any help will be massively appreciated.
The API resources on the control plane are upgraded, but the ones in the Helm-stored release manifest (kept within a Secret resource) are old.
Here is the resolution:
$ helm plugin install https://github.com/helm/helm-mapkubeapis
$ helm mapkubeapis my-release-name --namespace ns
After this run a helm upgrade again.
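For example (the release name, chart path and namespace are placeholders):
helm mapkubeapis my-release-name --namespace ns
helm upgrade my-release-name ./my-chart --namespace ns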
Hi, I had the same problem: one of our deployments was failing after we updated our ingress files from apiVersion: networking.k8s.io/v1beta1 to apiVersion: networking.k8s.io/v1. We had the same error with that app, which was deployed using Helm:
UPGRADE FAILED: current release manifest contains removed kubernetes api(s) for this kubernetes version and it is therefore unable to build the kubernetes objects for performing the diff. error from kubernetes: unable to recognize "": no matches for kind "Ingress" in version "networking.k8s.io/v1beta1"
Here is the solution: install the mapkubeapis plugin for Helm by running the command below in your terminal.
helm plugin install https://github.com/hickeyma/helm-mapkubeapis
Downloading and installing helm-mapkubeapis v0.0.15 ...
https://github.com/hickeyma/helm-mapkubeapis/releases/download/v0.0.15/helm-mapkubeapis_0.0.15_darwin_amd64.tar.gz
Installed plugin: mapkubeapis
helm plugin list
NAME VERSION DESCRIPTION
mapkubeapis 0.0.15 Map release deprecated Kubernetes APIs in-place
$ helm mapkubeapis release-name --namespace test-namespace --dry-run
Change release-name to the name of your release/deployment that is failing. Run with --dry-run first: this command lists the stored manifests that still use the old v1beta1 APIs.
helm mapkubeapis release-name --namespace test-namespace
Finally, run the command above (without --dry-run) to update the stored manifests and remove the deprecated APIs. Now go to your pipeline and run the deployment again, and it will work this time.
After trying out a lot more stuff, I finally decided to just use helm uninstall to remove the deployments and the charts currently in the cluster.
I then simply installed with the new ingress manifest which I mentioned in the question, and that worked; I was finally able to deploy. So it seems the manifest I had modified did not have any issues itself.
Uninstalling and reinstalling the release worked for me:
1. helm uninstall <release>
2. helm install <release> <chart>
If you are doing the deployments through a pipeline, you will have to perform step 1 manually and just re-trigger the pipeline.

Using helm to install clusterRole.yaml

I'm currently using kubectl create -f clusterRole.yaml; I was wondering if I can use Helm to install it automatically with my chart.
I was looking at the Helm documentation, and it used kubectl create -f for the clusterRole file. Is there any reason this can't be done through Helm? Is it because of access privilege concerns?
As already mentioned in the comments, you can install your RBAC roles using your Helm chart. As a matter of fact, many Helm charts do configure Roles/ClusterRoles at install time. Here's an example from the Elasticsearch Helm chart, which configures a Role and RoleBinding at install level:
{{- if .Values.rbac.create -}}
{{- $fullName := include "elasticsearch.uname" . -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: {{ $fullName | quote }}
  labels:
    heritage: {{ .Release.Service | quote }}
    release: {{ .Release.Name | quote }}
    chart: "{{ .Chart.Name }}-{{ .Chart.Version }}"
    app: {{ $fullName | quote }}
rules:
  - apiGroups:
      - extensions
    resources:
      - podsecuritypolicies
    resourceNames:
      {{- if eq .Values.podSecurityPolicy.name "" }}
      - {{ $fullName | quote }}
      {{- else }}
      - {{ .Values.podSecurityPolicy.name | quote }}
      {{- end }}
    verbs:
      - use
{{- end -}}
Another example, with a ClusterRole, can be found here.
To sum up, if your context allows you to install the desired RBAC objects (or anything else) with kubectl, then you will basically be able to do so with Helm as well.
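As a rough sketch of the cluster-scoped variant (the helper name and rules below are made up for illustration), a templates/clusterrole.yaml could look like:
{{- if .Values.rbac.create -}}
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: {{ include "mychart.fullname" . }}
  labels:
    release: {{ .Release.Name | quote }}
rules:
  # example rule: read-only access to nodes and pods across the cluster
  - apiGroups: [""]
    resources: ["nodes", "pods"]
    verbs: ["get", "list", "watch"]
{{- end -}}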

Trouble setting up secure connection for ingress behind ingress - bad gateway

I'm trying to set up a connection to a Kubernetes cluster and I'm getting a 502 Bad Gateway error.
The cluster has an nginx ingress and a service (listening on both HTTP and HTTPS). In addition, the ingress is behind an nginx ingress service (I have the nginx Helm chart installed) with a static IP address.
I can see in the description of the cluster ingress that it knows the service's endpoints.
I see that the pods communicate successfully with each other (there are 3 pods), but I can't ping the external nginx from within a shell.
These are the cluster's ingress values in values.yaml:
ingress:
  # If `true`, an Ingress is created
  enabled: true
  # The Service port targeted by the Ingress
  servicePort: http
  # Ingress annotations
  annotations:
    kubernetes.io/ingress.class: "nginx"
  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: my-app.com
      # Paths for the host
      paths:
        - /
  # TLS configuration
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls
When I go to my-app.com I see in the browser that I'm on a secure connection (the lock icon next to the URL), but like I said I get a 502 Bad Gateway error. If I change servicePort from http to https I get a '400 Bad Request' error instead.
How should I set up both ingresses to allow a secured connection to my app?
I tried all sorts of annotations, such as the ones below, but always got the errors above.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Thank you!
What you have shared is not the Ingress definition itself but its values file.
The Ingress definition should look something like the one below:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
This gets rendered only if your values set ingress enabled to true:
service:
  type: ClusterIP
  port: 8080

ingress:
  enabled: true
The missing annotation was nginx.org/ssl-services, which accepts the list of secure services.
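For illustration, such an annotation is set in the Ingress metadata with the service names as its value (the service name here is a placeholder):
metadata:
  annotations:
    # comma-separated list of secure (TLS) backend services
    nginx.org/ssl-services: "my-app-service"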

How does one pass an override file to specific Service YAML file using Helm?

I am trying to pass a toleration when deploying to a chart located in stable. The toleration should be applied to a specific YAML file in the templates directory, NOT the values.yaml file, as happens by default.
I've applied it using patch and I can see that the change I need would work if it were applied to the right service, which is a DaemonSet.
Currently I'm trying "helm install -f tolerations.yaml --name release_here".
This simply creates a one-off entry when running get chart release_here, and it is not in the correct service YAML.
Quoting your requirement: "The toleration should be applied to a specific YAML file in the templates directory."
First, in order to make this happen, your particular Helm chart needs to allow such end-user customization.
Here is an example based on the stable/kiam chart.
Definition of kiam/templates/server-daemonset.yaml:
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: {{ template "kiam.name" . }}
    chart: {{ template "kiam.chart" . }}
    component: "{{ .Values.server.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kiam.fullname" . }}-server
spec:
  selector:
    matchLabels:
      app: {{ template "kiam.name" . }}
      component: "{{ .Values.server.name }}"
      release: {{ .Release.Name }}
  template:
    metadata:
      {{- if .Values.server.podAnnotations }}
      annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
      {{- end }}
      labels:
        app: {{ template "kiam.name" . }}
        component: "{{ .Values.server.name }}"
        release: {{ .Release.Name }}
{{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
{{- end }}
    spec:
      serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
      hostNetwork: {{ .Values.server.useHostNetwork }}
      {{- if .Values.server.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
      {{- end }}
      tolerations:            # <---- TOLERATIONS !
{{ toYaml .Values.server.tolerations | indent 8 }}
      {{- if .Values.server.affinity }}
      affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
      {{- end }}
      volumes:
        - name: tls
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet:
server:
  enabled: true
  tolerations: ## Agent container resources
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: foo.bar.com/role
                operator: In
                values:
                  - master
Render the resulting manifest file to see how it would look when overriding the default values with a helm install/upgrade command using the --values/--set argument:
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
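(That is Helm 2 syntax; with Helm 3 the equivalent would presumably use --show-only instead of -x and a positional release name, roughly:)
helm template my-release . --show-only templates/server-daemonset.yaml --values custom-values.yaml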
Rendered file (output truncated):
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: foo.bar.com/role
                    operator: In
                    values:
                      - master
      volumes:
...
I hope this will help you to solve your problem.