This is most likely really simple, but I haven't been able to figure it out for two hours now. I have the following Consul Ingress template that I want to parameterize through a parent chart:
spec:
rules:
{{- range .Values.uiIngress.hosts }}
- host: {{ . }}
http:
paths:
- backend:
serviceName: {{ $serviceName }}
servicePort: {{ $servicePort }}
{{- end -}}
{{- if .Values.uiIngress.tls }}
tls:
{{ toYaml .Values.uiIngress.tls | indent 4 }}
{{- end -}}
{{- end }}
I want to parameterize spec.tls in the above.
In the values.yaml file for Consul we have the following template for it:
uiIngress:
enabled: false
annotations: {}
hosts: []
tls: {}
The closest I got to parameterizing it is the following:
uiIngress:
tls:
- hosts:
- "some.domain.com"
secretName: "ssl-default"
When I do that I get this error though:
warning: cannot overwrite table with non table for tls (map[])
Can someone please help? I've tried a million things.
Check your Helm version. I think there were some issues in older versions. This one is fine:
$ helm version
Client: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.9.1", GitCommit:"20adb27c7c5868466912eebdf6664e7390ebe710", GitTreeState:"clean"}
I followed exactly the steps you mentioned:
Added consul as a subchart (in charts/consul)
In the parent chart, created a values.yaml with:
consul:
uiIngress:
tls:
- hosts:
- "some.domain.com"
secretName: "ssl-default"
Ran helm install on the parent chart
If the default configuration values for this chart, defined in the values.yaml file for Consul, have this structure:
uiIngress:
enabled: false
annotations: {}
hosts: []
tls: {}
And when you execute the helm command you are passing values like this:
uiIngress:
tls:
- hosts:
- "some.domain.com"
secretName: "ssl-default"
The error warning: cannot overwrite table with non table for tls (map[]) happens because tls is defined as a dict ({}) in values.yaml, while you are trying to assign it a value of type list ([], the - hosts: entry).
To fix the warning, change the default values.yaml to declare tls as a list:
uiIngress:
enabled: false
annotations: {}
tls: []
hosts: []
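With both the default and the override declared as lists, Helm can merge the values cleanly. One way to check the result locally is to render the parent chart and inspect the generated tls: block; the chart path below is a placeholder:

$ helm template ./parent-chart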
Related
My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ .Values.name }}-a
namespace: {{ .Release.Namespace }}
annotations:
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/canary: "true"
nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
tls:
- hosts:
- {{ .Values.urlFormat | quote }}
secretName: {{ .Values.name }}-cert # <-------------- This Line
ingressClassName: nginx-customer-wildcard
rules:
- host: {{ .Values.urlFormat | quote }}
http:
paths:
- path: /
pathType: Prefix
backend:
service:
name: {{ .Values.name }}-a
port:
number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName becomes customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts to use the default certificate, which is fine and what I expect, but this also leaves the customer-tls-cert certificate hanging around in the cluster, unused. Is there a way that, when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little bit, but that's not supported anymore on the versions I am using. I am not even sure it would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge where edge is the namespace where cert resides.
Edit: This is what my certificate.yaml looks like:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.name }}-cert
namespace: edge
annotations:
vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
vault.security.banzaicloud.io/vault-namespace : {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
tls.crt: {{ .Values.certificate.cert }}
tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership which doesn't apply here (if you delete a Job, the cluster also deletes the corresponding Pod). If you have a Secret or a ConfigMap that's referenced by name, the object will still remain even if you delete the last reference to it.
In Helm, if a chart contains some object, and then you upgrade the chart to a newer version or values that don't include that object, then Helm will delete the object. This would require that the Secret actually be part of the chart, like
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
name: {{ .Values.name }}-cert
...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.
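For example, a minimal sketch of that flow; the release and chart names here are placeholders:

$ helm upgrade my-release ./my-chart --set createSecret=true    # Secret is rendered and created
$ helm upgrade my-release ./my-chart --set createSecret=false   # Secret is gone from the rendered chart, so Helm deletes it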
Bug Description
I can add multiple destinations for a canary deployment without problems, but when I try to add retries it fails with my custom-built Helm chart, as I can't iterate over it.
This is a problem because, in my values, the retry settings are tied to each destination, so the whole thing should be iterated.
Please find the Helm chart template below.
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: {{ .Values.virtualservice.name }}
namespace: {{ .Values.namespace }}
spec:
hosts:
- {{ .Values.virtualservice.hosts }}
gateways:
- {{ .Values.virtualservice.gateways }}
http:
- route:
{{- range $key, $value := .Values.destination }}
- destination:
host: {{ $value.host }}
subset: {{ $value.subset }}
weight: {{ $value.weight }}
retries:
attempts: {{ $value.retries.attempts }}
perTryTimeout: {{ $value.retries.perTryTimeout }}
retryOn: {{ $value.retries.retryOn }}
timeout: {{ $value.retries.timeout }}
{{- end }}
Error log
$ helm install asm-helm ./asm-svc-helm-chart -f values.yaml --dry-run
Error: INSTALLATION FAILED: YAML parse error on asmvrtsvc/templates/retry-svc.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
Version
$ kubectl version --short
Client Version: v1.24.0
Kustomize Version: v4.5.4
Server Version: v1.22.12-gke.300
$ helm version
v3.9.4+gdbc6d8e
Added example for reference
spec:
hosts:
- reviews
http:
- route:
- destination:
host: reviews
subset: v1
weight: 75
retries:
attempts: 3
perTryTimeout: 2s
- destination:
host: reviews
subset: v2
weight: 25
retries:
attempts: 3
perTryTimeout: 2s
According to the schema of VirtualService, the route field can have only one retries field; retries is not a per-destination setting.
So the loop should only build the destination entries as an array, with retries set once at the route level.
*: https://istio.io/latest/docs/reference/config/networking/virtual-service/
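A minimal sketch of the template restructured under that constraint: only the destinations are ranged over, while retries and timeout are emitted once per route. The .Values.retries block used here is a hypothetical values structure, not part of the original chart:

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: {{ .Values.virtualservice.name }}
  namespace: {{ .Values.namespace }}
spec:
  hosts:
    - {{ .Values.virtualservice.hosts }}
  gateways:
    - {{ .Values.virtualservice.gateways }}
  http:
    - route:
        {{- range $key, $value := .Values.destination }}
        - destination:
            host: {{ $value.host }}
            subset: {{ $value.subset }}
          weight: {{ $value.weight }}
        {{- end }}
      retries:
        attempts: {{ .Values.retries.attempts }}
        perTryTimeout: {{ .Values.retries.perTryTimeout }}
        retryOn: {{ .Values.retries.retryOn }}
      timeout: {{ .Values.retries.timeout }}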
I'm writing some Helm charts which I want to use as generic charts for deploying various things. When it comes to ingress, I'm toying with ideas as to how to handle multiple ingress entries in values.yaml... for example:
svc:
app1:
ingress-internal:
enabled: true
annotations:
kubernetes.io/ingress.class: "internal"
kubernetes.io/internal-dns.create: "true"
hosts:
- host: app1.int.example.com
paths:
- /
ingress-public:
enabled: true
annotations:
kubernetes.io/ingress.class: "external"
kubernetes.io/internal-dns.create: "true"
hosts:
- host: app1.example.com
paths:
- /
I plan to have one values file which contains a bunch of semi-static config that is managed by one team and has no visibility to the app config teams. So the values file could contain multiple app entries (although the chart will handle one at a time). I do this with this logic:
{{- if (index .Values.svc (include "app.refName" .) "something") -}}
Where app.refName is in this case equal to app1.
I'm now toying with the idea of multiple ingresses... I'm thinking along the lines of:
{{- $ingressName := regexMatch "/^ingress-*/" index ( .Values.svc (include "app.refName" .)) -}}
{{- $ingress := index .Values.svc (include "bvnk.refame" .) $ingressName -}}
{{- if $ingress -}}
Am I right in saying regexMatch would match both ingress-internal and ingress-public? And then I could use $ingressName as the name of the ingress entry... but then how would I do the following:
Ensure it loops through all ^ingress-* entries and creates a manifest for each?
Do nothing if none exist? (I suppose I could use a default to return an empty dict maybe?)
Is there a better way of achieving this?
You should use an array of ingress definitions. For example:
svc:
app1:
ingress:
- name: internal
annotations:
kubernetes.io/ingress.class: "internal"
kubernetes.io/internal-dns.create: "true"
hosts:
- host: app1.int.example.com
paths:
- /
- name: public
annotations:
kubernetes.io/ingress.class: "external"
kubernetes.io/internal-dns.create: "true"
hosts:
- host: app1.example.com
paths:
- /
Then, in your template file, range over the ingresses:
{{- range .Values.svc.app1.ingress }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .name }}
...
{{- end }}
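Each loop iteration emits its own YAML document (hence the --- separator inside the range), so every entry in the list becomes a separate Ingress manifest. If the app key needs to stay dynamic rather than hard-coded to app1, a small variation along these lines should also work; this is a sketch assuming the app.refName helper from the question resolves to the app's key in .Values.svc, and the default covers the case where no ingresses are defined at all:

{{- /* look up the current app's block by its dynamic key */ -}}
{{- $app := index .Values.svc (include "app.refName" .) }}
{{- /* render nothing if the app defines no ingresses */ -}}
{{- range ($app.ingress | default list) }}
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .name }}
...
{{- end }}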
This is my first question ever on the internet. I've been helped much by reading other people's fixes but now it's time to humbly ask for help myself.
I get the following error from Helm (helm3 install ist-gw-t1 --dry-run):
Error: INSTALLATION FAILED: YAML parse error on istio-gateways/templates/app-name.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key
However, the file is only 27 lines long! It used to be longer, but I removed the other Kubernetes resources to narrow down where the issue is.
Template file
{{- range .Values.ingressConfiguration.app-name }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
name: "{{ .name }}-tcp-8080"
namespace: {{ .namespace }}
labels:
app: {{ .name }}
chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
release: {{ $.Release.Name }}
heritage: {{ $.Release.Service }}
spec:
hosts: # This was line 40 before shortening the file.
- {{ .fqdn }}
gateways:
- {{ .name }}
{{ (.routing).type }}:
- route:
- destination:
port:
number: {{ (.services).servicePort2 }}
host: {{ (.services).serviceUrl2 }}
match:
port: "8080"
uri:
exact: {{ .fqdn }}
{{- end }}
values.yaml
networkPolicies:
ingressConfiguration:
app-name:
- namespace: 'namespace'
name: 'app-name'
ingressController: 'internal-istio-ingress-gateway' # ?
fqdn: '48.characters.long'
tls:
credentialName: 'name-of-the-secret' # ?
mode: 'SIMPLE' # ?
serviceUrl1: 'foo' # ?
servicePort1: '8080'
routing:
# Available routing types: http or tls
# In case of tls routing type selected, the matchingSniHost(resp. rewriteURI/matchingURIs) has(resp. have) to be filled(resp. empty)
type: http
rewriteURI: ''
matchingURIs: ['foo']
matchingSniHost: []
- services:
serviceUrl2: "foo"
servicePort2: "8080"
- externalServices:
mysql: 'bar'
Where does the error come from?
Why does Helm still report line 40 as problematic, even after the shortening of the file?
Can you please recommend some Visual Studio Code extension that could have helped me? I have the following slightly relevant ones, but they do not have linting (or I do not know how to use it): YAML by Red Hat, and Kubernetes by Microsoft.
I'm trying to set up a connection to a kubernetes cluster and I'm getting a 502 bad gateway error.
The cluster has an nginx ingress and a service (listening at both http and https). In addition the ingress is behind an nginx ingress service (I have nginx helm chart installed) with a static IP address.
I can see in the description of the cluster ingress that it knows the service's endpoints.
I see that the pods communicate successfully with each other (there are 3 pods), but I can't ping the external nginx from within a shell.
These are the cluster's ingress values in values.yaml:
ingress:
# If `true`, an Ingress is created
enabled: true
# The Service port targeted by the Ingress
servicePort: http
# Ingress annotations
annotations:
kubernetes.io/ingress.class: "nginx"
# Additional Ingress labels
labels: {}
# List of rules for the Ingress
rules:
-
# Ingress host
host: my-app.com
# Paths for the host
paths:
- /
# TLS configuration
tls:
- hosts:
- my-app.com
secretName: my-app-tls
When I go to my-app.com I see in the browser that I'm on a secure connection (the lock icon next to the URL), but like I said I get a 502 bad gateway error. If I replace the servicePort from http to https I get the '400 bad request' error.
How should I set up both ingresses to allow a secured connection to my app?
I tried all sorts of annotations, but always got the errors above.
nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
nginx.ingress.kubernetes.io/ssl-redirect: "false"
Thank you!
What you have shared are only the ingress values, not the ingress definition (template) itself.
The ingress definition should look something like the one below.
{{- if .Values.ingress.enabled -}}
{{- $fullName := include "app.fullname" . -}}
{{- $svcPort := .Values.service.port -}}
{{- if semverCompare ">=1.14-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1beta1
{{- else -}}
apiVersion: extensions/v1beta1
{{- end }}
kind: Ingress
metadata:
name: {{ $fullName }}
labels:
{{- include "app.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: /
backend:
serviceName: {{ $fullName }}
servicePort: {{ $svcPort }}
{{- end }}
{{- end }}
{{- end }}
This only gets rendered if your values set ingress.enabled to true:
service:
type: ClusterIP
port: 8080
ingress:
enabled: true
The missing annotation was nginx.org/ssl-services, which accepts the list of secure services.
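For reference, a minimal sketch of how that annotation can be set on the Ingress; the service name my-app-service is a placeholder, and the annotation takes a comma-separated list of service names whose backends should be reached over TLS:

metadata:
  annotations:
    nginx.org/ssl-services: "my-app-service"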