Helm Template - Multiple Ingress in a single chart - kubernetes-helm

I'm writing some Helm charts which I want to use as generic charts for deploying various things. When it comes to ingress, I'm toying with ideas as to how to handle multiple ingress entries in values.yaml. For example:
svc:
  app1:
    ingress-internal:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: "internal"
        kubernetes.io/internal-dns.create: "true"
      hosts:
        - host: app1.int.example.com
          paths:
            - /
    ingress-public:
      enabled: true
      annotations:
        kubernetes.io/ingress.class: "external"
        kubernetes.io/internal-dns.create: "true"
      hosts:
        - host: app1.example.com
          paths:
            - /
I plan to have one values file which contains a bunch of semi-static config that is managed by one team and is not visible to the app config teams. So the values file could contain multiple app entries (although the chart will handle one at a time). I do this with this logic:
{{- if (index .Values.svc (include "app.refName" .) "something") -}}
Where app.refName is in this case equal to app1.
I'm now toying with ideas for multiple ingresses. I'm thinking along the lines of:
{{- $ingressName := regexMatch "^ingress-.*" (index .Values.svc (include "app.refName" .)) -}}
{{- $ingress := index .Values.svc (include "app.refName" .) $ingressName -}}
{{- if $ingress -}}
Am I right in saying regexMatch would match both ingress-internal and ingress-public? And then I could use $ingressName as the name of the ingress entry... but then how would I do the following:
Ensure it loops through all ^ingress-* entries and creates a manifest for each?
Do nothing if none exist? (I suppose I could use a default to return an empty dict, maybe?)
Is there a better way of achieving this?

You should use an array of ingress definitions. For example:
svc:
  app1:
    ingress:
      - name: internal
        annotations:
          kubernetes.io/ingress.class: "internal"
          kubernetes.io/internal-dns.create: "true"
        hosts:
          - host: app1.int.example.com
            paths:
              - /
      - name: public
        annotations:
          kubernetes.io/ingress.class: "external"
          kubernetes.io/internal-dns.create: "true"
        hosts:
          - host: app1.example.com
            paths:
              - /
Then, in your template file, range over the ingresses:
{{- range .Values.svc.app1.ingress }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .name }}
  ...
{{- end }}
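Note that `range` over a missing or empty list simply emits nothing, which covers the "do nothing if none exist" case with no extra guard. A fuller sketch, assuming the array layout above (the release-name prefix and annotation handling are illustrative, not part of the question's chart):

```yaml
{{- range .Values.svc.app1.ingress }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $.Release.Name }}-{{ .name }}
  {{- with .annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  rules:
    {{- range .hosts }}
    - host: {{ .host }}
    {{- end }}
{{- end }}
```

The `---` inside the loop makes each iteration a separate manifest, and an app with no ingress entries renders to an empty file.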


Create Istio Service Entry with Range in HELM Template

I am creating 3 Istio ServiceEntry resources using a Helm template. When I use range it only creates the last one. Here are the values.yaml and the ServiceEntry yaml. How do I create all 3 service entries here?
serviceentry:
  appdb01: APPDB01.domain.com
  appdb02: APPDB02.domain.com
  appdb03: APPDB03.domain.com
{{- range $key, $val := .Values.serviceentry }}
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
    - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
    - name: tcp1433
      number: 1433
      protocol: TCP
    - name: udp1434
      number: 1434
      protocol: UDP
  resolution: DNS
{{- end }}
Result:
Only appdb03 is created. When running helm template, it only renders appdb03 and not the other two.
You need to make sure you include a YAML start-of-document marker, --- on its own line, inside the range loop. Without it, the rendered output is a single YAML document with repeated top-level keys, and the parser keeps only the last set of values, which is why only appdb03 survives. This applies whenever you're producing multiple Kubernetes manifests from the same Helm template file; it's not specific to this Istio use case.
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: ...
{{- end }}

Removing secretName from ingress.yaml results in dangling certificate in the K8s cluster as it doesn't automatically get deleted, any workaround?

My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}-a
  namespace: {{ .Release.Namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  tls:
    - hosts:
        - {{ .Values.urlFormat | quote }}
      secretName: {{ .Values.name }}-cert   # <-------------- This Line
  ingressClassName: nginx-customer-wildcard
  rules:
    - host: {{ .Values.urlFormat | quote }}
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: {{ .Values.name }}-a
                port:
                  number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName becomes customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts to use the default certificate, which is fine and what I expect, but the customer-tls-cert certificate is still hanging around in the cluster, unused. Is there a way that when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little, but that's not supported anymore on the versions I am using. I am not even sure it would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge where edge is the namespace where cert resides.
Edit: this is what my certificate.yaml looks like:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  namespace: edge
  annotations:
    vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
    vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
    vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
    vault.security.banzaicloud.io/vault-namespace: {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.certificate.cert }}
  tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership, which doesn't apply here (for example, if you delete a Job, the cluster also deletes the Pods it owns). If you have a Secret or a ConfigMap that's referenced by name, the object will still remain even if you delete the last reference to it.
In Helm, if a chart contains some object, and you then upgrade the chart to a newer version or to values that don't include that object, Helm will delete the object. This requires that the Secret actually be part of the chart, like
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  ...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.
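In fact, the certificate.yaml shown in the edit already gates the Secret on a flag, so the same mechanism should work with the existing certificate.enabled value. A sketch of the override (the file name here is illustrative):

```yaml
# values-nocert.yaml (hypothetical override file)
certificate:
  enabled: false
```

Running helm upgrade <release> <chart> -f values-nocert.yaml should then remove the Secret from the release, since the rendered chart no longer contains it.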

Trouble setting up secure connection for ingress behind ingress - bad gateway

I'm trying to set up a connection to a kubernetes cluster and I'm getting a 502 bad gateway error.
The cluster has an nginx ingress and a service (listening on both http and https). In addition, the ingress is behind an nginx ingress service (I have the nginx helm chart installed) with a static IP address.
I can see in the description of the cluster ingress that it knows the service's endpoints.
I see that the pods communicate successfully with each other (there are 3 pods), but I can't ping the external nginx from within a shell.
These are the cluster's ingress values in values.yaml:
ingress:
  # If `true`, an Ingress is created
  enabled: true
  # The Service port targeted by the Ingress
  servicePort: http
  # Ingress annotations
  annotations:
    kubernetes.io/ingress.class: "nginx"
  # Additional Ingress labels
  labels: {}
  # List of rules for the Ingress
  rules:
    -
      # Ingress host
      host: my-app.com
      # Paths for the host
      paths:
        - /
  # TLS configuration
  tls:
    - hosts:
        - my-app.com
      secretName: my-app-tls
When I go to my-app.com I see in the browser that I'm on a secure connection (the lock icon next to the URL), but as I said I get a 502 bad gateway error. If I change servicePort from http to https I get a '400 bad request' error instead.
How should I set up both ingresses to allow a secured connection to my app?
I tried all sorts of annotations, but always got the errors above.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Thank you!
The snippet you have shared isn't a complete ingress definition; please share the full ingress template and its values file. An ingress definition should look something like the below:
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: {{ $fullName }}
  labels:
    {{- include "app.labels" . | nindent 4 }}
  {{- with .Values.ingress.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
spec:
  {{- if .Values.ingress.tls }}
  tls:
    {{- range .Values.ingress.tls }}
    - hosts:
        {{- range .hosts }}
        - {{ . | quote }}
        {{- end }}
      secretName: {{ .secretName }}
    {{- end }}
  {{- end }}
  rules:
    {{- range .Values.ingress.hosts }}
    - host: {{ .host | quote }}
      http:
        paths:
          {{- range .paths }}
          - path: /
            backend:
              serviceName: {{ $fullName }}
              servicePort: {{ $svcPort }}
          {{- end }}
    {{- end }}
{{- end }}
This gets executed only if your values set ingress.enabled=true:
service:
  type: ClusterIP
  port: 8080
ingress:
  enabled: true
The missing annotation was nginx.org/ssl-services, which accepts the list of secure services.
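For reference, with the NGINX Inc. controller (the nginx.org/* annotation family) that could look something like the following; the service name here is illustrative:

```yaml
ingress:
  enabled: true
  annotations:
    kubernetes.io/ingress.class: "nginx"
    # comma-separated list of services whose upstream connections should use TLS
    nginx.org/ssl-services: "my-app-service"
```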

Verify if path value is unique in helm template

I'm trying to check whether my path value is unique. This is my values.yaml, for example:
ingresses:
  - name: ingress-1
    path: /route2
    host: example.com
  - name: ingress-2
    path: /route2
    host: example.com
In this example I want to exclude or concatenate the second route.
This is my ingress.yml template:
{{- range $ingress := .Values.ingresses -}}
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: sampleName
  labels:
    app: sampleName
    deploymentStrategy: sampleStrategy
spec:
  rules:
    - host: "{{ $ingress.host }}"
      http:
        paths:
          - backend:
              serviceName: SampleName
              servicePort: 80
            path: /sampleApp/{{ $ingress.path }}
---
{{- end -}}
I'm in the range context, so I can't check the other ingresses. Do you have any idea how to do this?
Since (as you note) you can't enforce uniqueness across multiple Ingress objects, I'd probably accept that "one service declares the same endpoint" is just a specific case of "the same endpoint can be declared multiple times" and do nothing.
Helm templates have access to a support library called Sprig that allows some more general-purpose data structures. If you just want to check that there aren't duplicates, you can use a dictionary:
{{- $paths := dict -}}
{{- range $ingress := .Values.ingresses -}}
{{- if hasKey $paths $ingress.path -}}
{{- printf "Duplicate ingress path %s" $ingress.path | fail -}}
{{- else -}}
{{- $_ := set $paths $ingress.path $ingress.path -}}
{{- end -}}
{{- end -}}
You can use a similar approach to only emit the first Ingress object that has a given path (don't fail if the key exists, do include the template for it immediately after the set).
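A sketch of that variant, keeping only the first Ingress for each path (the manifest fields are abbreviated from the question's template):

```yaml
{{- $paths := dict -}}
{{- range $ingress := .Values.ingresses }}
{{- if not (hasKey $paths $ingress.path) }}
{{- $_ := set $paths $ingress.path true }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ $ingress.name }}
spec:
  rules:
    - host: {{ $ingress.host | quote }}
{{- end }}
{{- end }}
```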

Helm: Cannot overwrite table with non table for tls

This is most likely really simple, but I haven't been able to figure it out for two hours now. I have the following Consul Ingress that I want to parameterize through a parent chart:
spec:
  rules:
    {{- range .Values.uiIngress.hosts }}
    - host: {{ . }}
      http:
        paths:
          - backend:
              serviceName: {{ $serviceName }}
              servicePort: {{ $servicePort }}
    {{- end -}}
  {{- if .Values.uiIngress.tls }}
  tls:
{{ toYaml .Values.uiIngress.tls | indent 4 }}
  {{- end -}}
{{- end }}
I want to parameterize spec.tls in the above.
In the values.yaml file for Consul we have the following template for it:
uiIngress:
  enabled: false
  annotations: {}
  hosts: []
  tls: {}
The closest I got to parameterizing it is the following:
uiIngress:
  tls:
    - hosts:
        - "some.domain.com"
      secretName: "ssl-default"
When I do that I get this error though:
warning: cannot overwrite table with non table for tls (map[])
Can someone please help? I've tried a million things.
Check your helm version. I think there were some issues in older versions. This one is fine:
$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
I followed exactly the steps you mentioned:
Added consul as a subchart (in charts/consul)
In the parent chart, created a values.yaml with:
consul:
  uiIngress:
    tls:
      - hosts:
          - "some.domain.com"
        secretName: "ssl-default"
Helm install the parent chart
If the default configuration values for this chart, defined in Consul's values.yaml, have this structure:
uiIngress:
  enabled: false
  annotations: {}
  hosts: []
  tls: {}
and when executing the helm command you are sending values like this:
uiIngress:
  tls:
    - hosts:
        - "some.domain.com"
      secretName: "ssl-default"
then the error warning: cannot overwrite table with non table for tls (map[]) is happening because tls is defined as a dict ({}) in values.yaml, while you are trying to set a value of list type ([], the - hosts: entry) on it.
To fix the warning, change the chart's values.yaml to:
uiIngress:
  enabled: false
  annotations: {}
  tls: []
  hosts: []