Looping with Helm - Kubernetes

I'm reading the Helm documentation about how to do loops for Kubernetes. Basically, what I want to do is something like this.
What I have...
values.yaml
dnsAliases:
- test1
- test2
- test3
On services-external.yaml:
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- range .Values.dnsAliases }}
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }} # for creating the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ .Values.var3.var3parent }}-{{ .Values.var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{ end }}
{{ end }}
but I'm getting this error:
Error: UPGRADE FAILED: render error in "services-external.yaml": template: templates/services-external.yaml:312:32: executing "services-external.yaml" at <.Values.var3.var3parent>: can't evaluate field Values in type interface {}
I also tried with "with", but I get the same error. Is there a way to achieve this using "if" together with a loop in Helm?

The error you're getting shows that the template can't find values for <.Values.var3.var3parent>. Inside a range block, . refers to the current element of the loop, so .Values no longer points at the top-level values; you need another way to reach the global scope. This can be achieved with two approaches:
Use $ before the variable you need to access (shown below with var3)
Define a new variable before the loop and save the values you need into it (shown below with var4)
Here's a tested template using both approaches:
{{- if and .Values.var1.var1parent (eq .Values.var2.var2parent "value") }}
{{- $var4 := .Values.var4 -}}
{{- range .Values.dnsAliases }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }} # creates the names "name-test1", "name-test2", and so on
spec:
  type: ExternalName
  externalName: {{ $.Values.var3.var3parent }}-{{ $var4.var4parent }}-{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{ end }}
{{ end }}
You can read more about it here.
There is also one more possible solution: reset the scope to root inside the loop and then work with the loop as usual (though it's a hackier approach).
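A minimal sketch of that scope-reset idea: save the current loop element into a variable, then use with $ to restore the root scope inside the loop:
{{- range .Values.dnsAliases }}
{{- $alias := . }}
{{- with $ }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ $alias }}
spec:
  type: ExternalName
  externalName: {{ .Values.var3.var3parent }}-{{ .Values.var4.var4parent }}-{{ $alias }}.svc.cluster.local
{{- end }}
{{- end }}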

Thanks @moonkotte, I managed to make it work using the approach of defining a new variable to save the scope. Here's the example.
On values.yaml:
dnsShortNames:
  short1: "short1"
  short2: "short2"
  short3: "short3"
dnsAliases:
  - test1
  - test2
  - test3
On services-external.yaml:
{{- $dns_short_names := .Values.dnsShortNames }}
{{- range .Values.dnsAliases }}
---
apiVersion: v1
kind: Service
metadata:
  name: name-{{ . }}
spec:
  type: ExternalName
  externalName: {{ $dns_short_names.short1 }}-{{ $dns_short_names.short2 }}-{{ $dns_short_names.short3 }}.{{ . }}.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80
{{- end }}
Applying this, Kubernetes will create 3 different ExternalName services:
short1-short2-short3.test1.svc.cluster.local
short1-short2-short3.test2.svc.cluster.local
short1-short2-short3.test3.svc.cluster.local
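For reference, the first of these renders to roughly the following manifest:
apiVersion: v1
kind: Service
metadata:
  name: name-test1
spec:
  type: ExternalName
  externalName: short1-short2-short3.test1.svc.cluster.local
  ports:
    - port: 80
      protocol: TCP
      targetPort: 80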
Public thanks to my friend Xavi <3.

Related

Create Istio Service Entry with Range in HELM Template

I am creating 3 Istio service entries using a Helm template. When I use range, it only creates the last one. Here are the values.yaml and the service entry YAML. How do I create 3 service entries here?
serviceentry:
  appdb01: APPDB01.domain.com
  appdb02: APPDB02.domain.com
  appdb03: APPDB03.domain.com
{{- range $key, $val := .Values.serviceentry }}
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
    - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
    - name: tcp1433
      number: 1433
      protocol: TCP
    - name: udp1434
      number: 1434
      protocol: UDP
  resolution: DNS
{{- end }}
Result: only appdb03 is created.
When rendering the Helm template, it only creates appdb03 but not the other two.
You need to make sure you include a YAML start-of-document marker, --- on its own line, inside the range loop. This is true whenever you're producing multiple Kubernetes manifests from the same Helm template file; it's not specific to this Istio use case.
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: ...
{{- end }}
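Applied to the template from the question, the full fixed version is unchanged apart from the added separator:
{{- range $key, $val := .Values.serviceentry }}
---
apiVersion: networking.istio.io/v1beta1
kind: ServiceEntry
metadata:
  name: {{ $key }}
  namespace: mytest
spec:
  hosts:
    - {{ $val | quote }}
  location: MESH_EXTERNAL
  ports:
    - name: tcp1433
      number: 1433
      protocol: TCP
    - name: udp1434
      number: 1434
      protocol: UDP
  resolution: DNS
{{- end }}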

Helm for loop list

I want to use one deployment file and one values file to create charts for multiple services.
My values file has the values for all of the services that should be used by that single deployment file.
Below is my deployment file:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .Values.PA.name }}-deployment
  labels:
    app: {{ .Values.PA.name }}
spec:
  replicas: {{ .Values.PA.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.PA.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.PA.name }}
    spec:
      containers:
        - name: {{ .Values.PA.name }}
          image: {{ .Values.PA.image }}:{{ .Values.PA.tag }}
          ports:
            - containerPort: {{ .Values.PA.port }}
Below is my values file:
PA:
  name: povisioning_adapter
  replicas: 1
  env: dev
  image: provisioning_adapter
  tag: master
  port: 8001
  service:
    protocol: TCP
    port: 8001
    targetPort: 8001
    nodePort: 30100
SA:
  name: service_adapter
  replicas: 1
  env: dev
  image: service_adapter
  tag: master
  port: 8002
  service:
    protocol: TCP
    port: 8002
    targetPort: 8002
    nodePort: 30200
Now I want to iterate through the PA, SA, etc. values inside my deployment file.
How do I declare a list [PA, SA, ...] and loop through it inside the deployment file?
You can wrap this in a range loop:
{{- range list .Values.PA .Values.SA -}}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ .name }}-deployment
  ...
{{ end -}}
If you need to refer to the top-level .Values in this setup, you'd need to "escape" the range loop's scoping by explicitly referring to the top-level value $. You might also need this as the parameter you pass to an included template:
metadata:
  labels:
    name: {{ .name }}
{{ include "myapp.labels" $ | indent 4 }}
{{/*                      ^ */}}
You could do something similar by breaking this out into a helper template that produces one of the Kubernetes objects. You may also be able to restructure this to use the name of the component rather than its specific settings: where you currently have .Values.PA.name, if you have the top-level $.Values object and you know the name, then index $.Values "PA" "name" is equivalent, and any of those parts can be replaced by variables, as in the sketch below.
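A minimal sketch of that idea, assuming you iterate over the PA and SA component keys from the values file above:
{{- range $component := list "PA" "SA" }}
{{- $config := index $.Values $component }}
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ $config.name }}-deployment
  labels:
    app: {{ $config.name }}
spec:
  replicas: {{ $config.replicas }}
  ...
{{- end }}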

Helm (jinja2): parse error "did not find expected key" at a non-existent further line

This is my first question ever on the internet. I've been helped a lot by reading other people's fixes, but now it's time to humbly ask for help myself.
I get the following error from Helm (helm3 install ist-gw-t1 --dry-run):
Error: INSTALLATION FAILED: YAML parse error on istio-gateways/templates/app-name.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key
However, the file is only 27 lines long! It used to be longer, but I removed the other Kubernetes resources so that I could narrow down where to search for the issue.
Template file
{{- range .Values.ingressConfiguration.app-name }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "{{ .name }}-tcp-8080"
  namespace: {{ .namespace }}
  labels:
    app: {{ .name }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
  hosts: # This was line 40 before shortening the file.
    - {{ .fqdn }}
  gateways:
    - {{ .name }}
  {{ (.routing).type }}:
    - route:
        - destination:
            port:
              number: {{ (.services).servicePort2 }}
            host: {{ (.services).serviceUrl2 }}
      match:
        port: "8080"
        uri:
          exact: {{ .fqdn }}
{{- end }}
values.yaml
networkPolicies:
ingressConfiguration:
  app-name:
    - namespace: 'namespace'
      name: 'app-name'
      ingressController: 'internal-istio-ingress-gateway' # ?
      fqdn: '48.characters.long'
      tls:
        credentialName: 'name-of-the-secret' # ?
        mode: 'SIMPLE' # ?
      serviceUrl1: 'foo' # ?
      servicePort1: '8080'
      routing:
        # Available routing types: http or tls
        # In case of tls routing type selected, the matchingSniHost (resp. rewriteURI/matchingURIs) has (resp. have) to be filled (resp. empty)
        type: http
        rewriteURI: ''
        matchingURIs: ['foo']
        matchingSniHost: []
    - services:
        serviceUrl2: "foo"
        servicePort2: "8080"
    - externalServices:
        mysql: 'bar'
Where does the error come from?
Why does Helm still report line 40 as problematic, even after the file was shortened?
Can you recommend a Visual Studio Code extension that could have helped me? I have the following slightly relevant ones, but they do not have linting (or I do not know how to use it): YAML by Red Hat and Kubernetes by Microsoft.

Kubernetes internal socket.io connection

I am following this architecture image from K8s.
However, I cannot seem to connect to the socket.io server from within the cluster using the service name.
Current situation:
From POD B
Can connect directly to App A's pod using WS ( ws://10.10.10.1:3000 ) ✅
Can connect to App A's service using HTTP ( http://orders:8000 ) ✅
Cannot connect to App A's service using WS ( ws://orders:8000 ) ❌
From the outside world / Internet
Can connect to App A's service using WS ( ws://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
Can connect to App A's service using HTTP ( http://my-external-ip/orders ) ✅ // using traefik to route my-external-ip/orders to service orders:8000
My current service configuration
spec:
  ports:
    - name: http
      protocol: TCP
      port: 8000
      targetPort: 3000
  selector:
    app: orders
  clusterIP: 172.20.115.234
  type: ClusterIP
  sessionAffinity: None
My Ingress Helm chart
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: {{ template "app.name" $ }}-backend
  annotations:
    kubernetes.io/ingress.class: traefik
    ingress.kubernetes.io/auth-type: forward
    ingress.kubernetes.io/auth-url: "http://auth.default.svc.cluster.local:8000/api/v1/oauth2/auth"
    ingress.kubernetes.io/auth-response-headers: authorization
  labels:
{{- include "api-gw.labels" $ | indent 4 }}
spec:
  rules:
    - host: {{ .Values.deploy.host | quote }}
      http:
        paths:
          - path: /socket/events
            backend:
              serviceName: orders
              servicePort: 8000
My Service Helm chart
apiVersion: v1
kind: Service
metadata:
  name: {{ template "app.name" . }}
spec:
{{ if not $isDebug -}}
  selector:
    app: {{ template "app.name" . }}
{{ end -}}
  type: NodePort
  ports:
    - name: http
      port: {{ template "app.svc.port" . }}
      targetPort: {{ template "app.port" . }}
      nodePort: {{ .Values.service.exposedPort }}
      protocol: TCP
# Helpers..
# {{/* vim: set filetype=mustache: */}}
# {{- define "app.name" -}}
# {{ default "default" .Chart.Name }}
# {{- end -}}
# {{- define "app.port" -}}
# 3000
# {{- end -}}
# {{- define "app.svc.port" -}}
# 8000
# {{- end -}}
The service's DNS name must be resolvable from your container in order to access its VIP address.
Kubernetes automatically sets environment variables in all pods which have the same selector as the service.
In your case, all pods with selector A have environment variables set in them when the container is deployed, which contain the service's VIP and PORT.
The other pod, with selector B, is not linked as an endpoint for the service; therefore, it does not contain the environment variables needed to access the service.
Here is the k8s documentation related to your problem.
To solve this, you can set up a DNS service, which k8s offers as a cluster add-on.
Just follow the documentation.
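For illustration only: assuming the orders service lives in the default namespace (an assumption; adjust if not), its fully qualified cluster DNS name from Pod B would be:
ws://orders.default.svc.cluster.local:8000
http://orders.default.svc.cluster.local:8000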

Is it allowed to have multiple services in a Helm chart?

I am pretty new to Helm and would like to know if it is allowed to have multiple services in the service.yaml file, like this:
apiVersion: v1
kind: Service
metadata:
  name: {{ include "keycloak.fullname" . }}
  labels:
    {{- include "keycloak.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "keycloak.selectorLabels" . | nindent 4 }}
---
apiVersion: v1
kind: Service
metadata:
  name: {{ include "keycloak.fullname" . }}
  labels:
    {{- include "keycloak.labels" . | nindent 4 }}
spec:
  type: {{ .Values.service.type }}
  ports:
    - port: {{ .Values.service.port }}
      targetPort: http
      protocol: TCP
      name: http
  selector:
    {{- include "keycloak.selectorLabels" . | nindent 4 }}
Yes, it is. Are you facing any issue?
A cleaner way is to use two different files, service-a.yaml and service-b.yaml.
Note: it's better not to give both services the same name (in the example above, both use the same keycloak.fullname); see the sketch below.
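For example, a minimal way to keep the names distinct would be to add a suffix to one of them (the -secondary suffix here is just an illustrative choice):
# first Service
metadata:
  name: {{ include "keycloak.fullname" . }}
...
---
# second Service
metadata:
  name: {{ include "keycloak.fullname" . }}-secondary
...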