Create Istio ServiceEntry with Range in Helm Template - kubernetes-helm

I am creating 3 Istio service entries using a Helm template. When I use range, it only creates the last one. Here are the values.yaml and the ServiceEntry template. How do I create all 3 service entries?
serviceentry:
  appdb01: APPDB01.domain.com
  appdb02: APPDB02.domain.com
  appdb03: APPDB03.domain.com
{{- range $key, $val := .Values.serviceentry }}
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
  - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
  - name: tcp1433
    number: 1433
    protocol: TCP
  - name: udp1434
    number: 1434
    protocol: UDP
  resolution: DNS
{{- end }}
Result:
When running helm template, only the appdb03 ServiceEntry is created, not the other two.

You need to make sure you include a YAML start-of-document marker, --- on its own line, inside the range loop. Without it, the loop renders one big YAML document instead of three, so the later entries override the earlier ones and only the last survives. This is true whenever you're producing multiple Kubernetes manifests from the same Helm template file; it's not specific to this Istio use case.
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: ...
{{- end }}
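Applied to the template above, the complete fixed loop looks like this:
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
  - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
  - name: tcp1433
    number: 1433
    protocol: TCP
  - name: udp1434
    number: 1434
    protocol: UDP
  resolution: DNS
{{- end }}
helm template now renders three separate documents, one ServiceEntry per key in .Values.serviceentry.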

Related

Removing secretName from ingress.yaml results in a dangling certificate in the K8s cluster as it doesn't automatically get deleted. Any workaround?

My ingress.yaml looks like so:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: {{ .Values.name }}-a
  namespace: {{ .Release.Namespace }}
  annotations:
    nginx.ingress.kubernetes.io/backend-protocol: "HTTPS"
    nginx.ingress.kubernetes.io/canary: "true"
    nginx.ingress.kubernetes.io/canary-weight: "{{ .Values.canary.weight }}"
    nginx.ingress.kubernetes.io/proxy-read-timeout: "120"
spec:
  tls:
  - hosts:
    - {{ .Values.urlFormat | quote }}
    secretName: {{ .Values.name }}-cert   # <-------------- This Line
  ingressClassName: nginx-customer-wildcard
  rules:
  - host: {{ .Values.urlFormat | quote }}
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: {{ .Values.name }}-a
            port:
              number: {{ .Values.backendPort }}
Assume .Values.name = customer-tls; then secretName becomes customer-tls-cert.
On removing secretName: {{ .Values.name }}-cert, the nginx ingress starts to use the default certificate, which is fine, as I expect it to, but this also leaves the customer-tls-cert certificate hanging around in the cluster, unused. Is there a way that when I delete the cert from the Helm config, the certificate is also removed from the cluster?
Otherwise, is there some mechanism that will figure out which certificates are no longer in use and delete them automatically?
My nginx version is nginx/1.19.9
K8s versions:
Client Version: v1.25.2
Kustomize Version: v4.5.7
Server Version: v1.24.6
I experimented with --enable-dynamic-certificates a little, but that's not supported anymore on the versions I am using. I am not even sure if it would have solved my problem.
For now I have just manually deleted the certificate from the cluster using kubectl delete secret customer-tls-cert -n edge, where edge is the namespace the cert resides in.
Edit: This is how my certificate.yaml looks:
{{- if eq .Values.certificate.enabled true }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
  namespace: edge
  annotations:
    vault.security.banzaicloud.io/vault-addr: {{ .Values.vault.vaultAddress | quote }}
    vault.security.banzaicloud.io/vault-role: {{ .Values.vault.vaultRole | quote }}
    vault.security.banzaicloud.io/vault-path: {{ .Values.vault.vaultPath | quote }}
    vault.security.banzaicloud.io/vault-namespace: {{ .Values.vault.vaultNamespace | quote }}
type: kubernetes.io/tls
data:
  tls.crt: {{ .Values.certificate.cert }}
  tls.key: {{ .Values.certificate.key }}
{{- end }}
Kubernetes in general will not delete things simply because they are not referenced. There is a notion of ownership, but it doesn't apply here (for example, if you delete a Job, the cluster also deletes the corresponding Pods, because they are owned by the Job). If you have a Secret or a ConfigMap that's referenced by name, the object will still remain even if you delete the last reference to it.
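For illustration, ownership is recorded in an object's metadata. A Pod created by a Job carries an ownerReferences entry roughly like the following (the names here are hypothetical):
apiVersion: v1
kind: Pod
metadata:
  name: my-job-abc12            # hypothetical Pod created by a Job
  ownerReferences:
  - apiVersion: batch/v1
    kind: Job
    name: my-job                # deleting this Job garbage-collects the Pod
    uid: 1234abcd-...           # filled in by the cluster
    controller: true
Nothing like this links a Secret to the Ingress that references it, which is why the Secret is left behind.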
In Helm, if a chart contains some object, and you then upgrade the chart to a newer version or to values that don't include that object, Helm will delete the object. This requires that the Secret actually be part of the chart, like:
{{/* templates/cert-secret.yaml */}}
{{- if .Values.createSecret -}}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.name }}-cert
...
{{ end -}}
If your chart already included this, and you ran helm upgrade with values that set createSecret to false, then Helm would delete the Secret.
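As a minimal sketch, assuming the release is named edge-app and you're running from the chart directory (both assumptions), that upgrade would be:
helm upgrade edge-app . --set createSecret=false
Helm compares the rendered manifests against the previous release, sees the Secret is no longer part of it, and deletes it from the cluster.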
If you're not in this situation, though – your chart references the Secret by name, but you expect something else to create it – then you'll also need to manually destroy it, maybe with kubectl delete.

k8s: helm install ingress-nginx only creates IngressClass?

I'm setting up two ingresses in different namespaces with ingress-nginx (https://github.com/kubernetes/ingress-nginx). My understanding is that I need to install ingress-nginx for each namespace, which creates the IngressClass I need.
I've installed the ingress-nginx with this:
helm install ingress-ns1 ingress-nginx/ingress-nginx \
--namespace ns1 \
--set controller.ingressClassResource.name=ns1-class \
--set controller.scope.namespace=ns1 \
--set controller.ingressClassByName=true
then the same again for namespace ns2. My understanding is that this created the IngressClasses I need, and it seems to work.
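Spelled out, that second install would be the same command with ns2 substituted:
helm install ingress-ns2 ingress-nginx/ingress-nginx \
--namespace ns2 \
--set controller.ingressClassResource.name=ns2-class \
--set controller.scope.namespace=ns2 \
--set controller.ingressClassByName=true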
I've also got an Ingress configuration, templated with Helm, that uses the IngressClasses:
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: "{{ .Values.ingress.name }}"
  namespace: "{{ .Values.namespace }}"
  annotations:
    cert-manager.io/cluster-issuer: "{{ .Values.issuer.name }}"
    nginx.ingress.kubernetes.io/from-to-www-redirect: "true"
spec:
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  rules:
  {{- range $v := .Values.ingress.rules }}
  - host: {{ $v.host.name }}
    http:
      paths:
      {{- range $p := $v.host.paths }}
      - path: {{ $p.path.path }}
        pathType: Prefix
        backend:
          service:
            name: {{ $p.path.name }}
            port:
              number: {{ $p.path.port }}
      {{- end }}
  {{- end }}
  tls:
  - hosts:
    {{- range $v := .Values.ingress.rules }}
    - {{ $v.host.name }}
    {{- end }}
    secretName: "{{ .Values.issuer.name }}"
This seems to work and uses the IngressClass, which I've templated into {{ .Values.ingress.ingressClassName }}. These end up being ns1-class and ns2-class.
However, I then end up with 4 load balancers created rather than two!
Looking at k9s, it seems that installing ingress-nginx with Helm creates the two IngressClasses I want, but also adds its own ingress controller pods. I only want the two created with my Ingress definition above.
How do I still set up the IngressClass to use ingress-nginx, but not have the controller created by installing ingress-nginx?
I've read this (https://kubernetes.github.io/ingress-nginx/user-guide/multiple-ingress/#multiple-ingress-controllers) a few times; I find it quite confusing, as there are snippets of configuration that I don't know what to do with or where to put.

Range issue in go template in vault configuration in k8s

I don't know Golang at all, but I need to implement Go template syntax in my Kubernetes config (where HashiCorp Vault is configured). What I'm trying to do is modify a file in order to change its format. The source looks like this:
data: map[key1:value1]
metadata: map[created_time:2021-10-06T21:02:18.41643371Z deletion_time: destroyed:false version:1]
The part of the Kubernetes config containing the Go template used to format the file is here:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: test
spec:
  replicas: ${REPLICAS}
  selector:
    matchLabels:
      component: test
  template:
    metadata:
      labels:
        component: test
      annotations:
        vault.hashicorp.com/agent-inject: 'true'
        vault.hashicorp.com/agent-inject-status: 'update'
        vault.hashicorp.com/role: 'test'
        vault.hashicorp.com/agent-inject-secret-config: 'secret/data/test/config'
        vault.hashicorp.com/agent-inject-template-config: |
          {{- with secret "secret/data/test/config" -}}
          {{ range $k, $v := .Data.data }}
          export {{ $k }}={{ $v | quote }}
          {{ end }}
          {{- end }}
    spec:
      serviceAccountName: test
      containers:
      - name: test
        image: ${IMAGE}
        ports:
        - containerPort: 3000
But the error I'm getting is:
runtime error encountered: error="template server: (dynamic): parse: template: :2: unexpected "," in range"
EDIT:
To deploy Vault on k8s I'm using the Vault Helm chart.
From what I can see, you have env variables in your YAML file (${REPLICAS}, ${IMAGE}), which makes me think you are using something like cat file.yml | envsubst | kubectl apply --wait=true -f - in order to replace those env vars with the real values.
The issue with this is that $k and $v are also being replaced with '' (since you do not have those env vars in your system).
One ugly but effective solution is to export k='$k' and export v='$v' (single-quoted, so each variable expands to its own literal text), which will generate your YAML file correctly.
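A minimal sketch of that workaround, assuming the manifest above is saved as deploy.yml (the file name and the REPLICAS/IMAGE values are assumptions):
# Make envsubst substitute $k and $v with themselves, leaving the Vault template intact
export k='$k' v='$v'
export REPLICAS=3 IMAGE=myrepo/test:latest
cat deploy.yml | envsubst | kubectl apply --wait=true -f -
With GNU gettext's envsubst you can instead restrict substitution to the variables you actually intend to replace, which avoids the dummy exports:
cat deploy.yml | envsubst '${REPLICAS} ${IMAGE}' | kubectl apply --wait=true -f -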

Looping with helm

I'm reading the Helm documentation about how to do some loops for Kubernetes. Basically, what I want to do is something like this.
What I have:
values.yaml
dnsAliases:
- test1
- test2
- test3
In services-external.yaml:
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- range .Values.dnsAliases }}
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}   # to create the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ .Values.var3.var3parent }}-{{ .Values.var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
{{ end }}
{{ end }}
but I'm getting the error:
Error: UPGRADE FAILED: render error in "services-external.yaml": template: templates/services-external.yaml:312:32: executing "services-external.yaml" at <.Values.var3.var3parent>: can't evaluate field Values in type interface {}
I also tried with "with", but got the same error. Is there some way to achieve this by using the "if" with a loop in Helm?
The error shows that the template can't find values for <.Values.var3.var3parent>. Since you're inside a range block, . refers to the current loop element, not to the root scope, so .Values is no longer reachable through it. You need to refer to the global scope instead. This can be achieved with two approaches:
Use $ before the variable you need to invoke (shown below with var3)
Define a new variable and save the values you need in it (shown below with var4)
Here's a tested template with both approaches from the above:
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- $var4 := .Values.var4 -}}
{{- range .Values.dnsAliases }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}   # creates the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ $.Values.var3.var3parent }}-{{ $var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
{{ end }}
{{ end }}
You can read more about it here.
There is also one more possible solution: reset the scope to root and work with the loop as usual, but that's a more sketchy approach (here's a link).
Thanks @moonkotte, I managed to make it work using the approach of defining a new variable to save the scope. Here's the example.
In values.yaml:
dnsShortNames:
  short1: "short1"
  short2: "short2"
  short3: "short3"
dnsAliases:
- test1
- test2
- test3
In services-external.yaml:
{{- $dns_short_names := .Values.dnsShortNames }}
{{- range .Values.dnsAliases }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}
spec:
  type: ExternalName
  externalName: {{ $dns_short_names.short1 }}-{{ $dns_short_names.short2 }}-{{ $dns_short_names.short3 }}.{{ . }}.svc.cluster.local
  ports:
  - port: 80
    protocol: TCP
    targetPort: 80
{{- end }}
Applying this, Kubernetes will create 3 different ExternalName services:
short1-short2-short3.test1.svc.cluster.local
short1-short2-short3.test2.svc.cluster.local
short1-short2-short3.test3.svc.cluster.local
Public thanks to my friend Xavi <3.

Kubernetes internal socket.io connection

I am following this architecture diagram from K8s. However, I cannot seem to connect to the socket.io server from within the cluster using the service name.
Current situation:
From Pod B:
Can connect directly to App A's pod using WS ( ws://10.10.10.1:3000 ) ✅
Can connect to App A's service using HTTP ( http://orders:8000 ) ✅
Cannot connect to App A's service using WS ( ws://orders:8000 ) ❌
From the outside world / Internet:
Can connect to App A's service using WS ( ws://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
Can connect to App A's service using HTTP ( http://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
My current service configuration
spec:
  ports:
  - name: http
    protocol: TCP
    port: 8000
    targetPort: 3000
  selector:
    app: orders
  clusterIP: 172.20.115.234
  type: ClusterIP
  sessionAffinity: None
My Ingress Helm chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "app.name" $ }}-backend
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
    ingress.kubernetes.io/auth-response-headers: authorization
  labels:
{{- include "api-gw.labels" $ | indent 4 }}
spec:
  rules:
  - host: {{ .Values.deploy.host | quote }}
    http:
      paths:
      - path: /socket/events
        backend:
          serviceName: orders
          servicePort: 8000
My Service Helm chart
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.name" . }}
spec:
  {{ if not $isDebug -}}
  selector:
    app: {{ template "app.name" . }}
  {{ end -}}
  type: NodePort
  ports:
  - name: http
    port: {{ template "app.svc.port" . }}
    targetPort: {{ template "app.port" . }}
    nodePort: {{ .Values.service.exposedPort }}
    protocol: TCP
# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
# {{ default "default" .Chart.Name }}
# {{- end -}}
# {{- define "app.port" -}}
# 3000
# {{- end -}}
# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}
The service's DNS name must be set in your container to access its VIP address.
Kubernetes automatically sets environment variables in all pods which have the same selector as the service.
In your case, all pods with selector A have environment variables, set when the container is deployed, that contain the service's VIP and PORT.
The other pod, with selector B, is not linked as an endpoint for the service; therefore, it does not contain the environment variables needed to access the service.
Here is the k8s documentation related to your problem.
To solve this, you can set up a DNS service, which k8s offers as a cluster add-on.
Just follow the documentation.
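As a sketch, once cluster DNS is in place, Pod B can reach the service by its DNS name instead of relying on environment variables. Assuming the orders Service lives in the default namespace (an assumption; substitute the real namespace), the address would be:
# from Pod B, using the cluster-internal DNS name of the Service
ws://orders.default.svc.cluster.local:8000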