Conditional creation of a template in Helm - Kubernetes

I need the installation of this resource file to be applied conditionally, based on the istioEnabled flag from values.yaml.
virtualService.yaml
  {{ if .Values.istioEnabled }}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: {{ .Values.istio.destinationRule.name }}
  namespace: {{ .Values.namespace }}
spec:
  host:
    - {{ .Values.service.name }}
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: {{ .Values.istio.destinationRule.maxConnections }}
    loadBalancer:
      simple: {{ .Values.istio.destinationRule.loadBalancer }}
  subsets:
    - name: v1
      labels: {{ toYaml .Values.app.labels | indent 10 }}
  {{ end }}
values.yaml
namespace: default
app:
  name: backend-api
  labels: |+
    app: backend-api
service:
  name: backend-service
istioEnabled: "true"
istio:
  destinationRule:
    name: backend-destination-rule
    loadBalancer: RANDOM
  virtualService:
    name: backend-virtual
    timeout: 5s
    retries: 3
    perTryTimeout: 3s
This file causes an error while installing the Helm chart.
The error is:
Error: INSTALLATION FAILED: YAML parse error on backend-chart/templates/virtualservice.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context

It's caused by the leading two spaces in your file:
  {{ if eq 1 1 }}
apiVersion: v1
kind: Kaboom
  {{ end }}
produces the same error:
Error: YAML parse error on helm0/templates/debug.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
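The usual fix is either to delete the stray indentation or to switch to the whitespace-trimming form of the actions ({{- ... }}), which chomps the spaces and newlines around the directive so nothing leaks into the rendered output. A minimal sketch based on the reproduction above:
{{- if eq 1 1 }}
apiVersion: v1
kind: Kaboom
{{- end }}
The same applies to the DestinationRule template: {{- if .Values.istioEnabled }} ... {{- end }} renders cleanly whichever way the flag is set.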


Can I use ne and eq in deployment.yaml like below?

I am trying to use ne and eq in deployment.yaml, but when I run helm template I get the error below:
Error: YAML parse error on cdp/templates/cdp-deployment.yaml: error converting YAML to JSON: yaml: line 50: did not find expected key
{{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
ports:
  - containerPort: {{ .Values.service.port }}
envFrom:
  - configMapRef:
      name: {{ .Values.metadata.name }}
  - secretRef:
      name: {{ .Values.metadata.name }}
{{- end }}
Thank you in advance
There is no problem with this if statement. I wrote a demo to test it, and this snippet renders fine.
values.yaml
metadata:
  name: application-B
templates/cm.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: test
data:
  cfg: |-
    {{- if (or (ne .Values.metadata.name "application-A") (eq .Values.metadata.name "application-B") )}}
    ok
    {{- else }}
    notok
    {{- end }}
output
apiVersion: v1
kind: ConfigMap
metadata:
  name: test-v32
  labels:
    helm.sh/chart: test-0.1.0
    app.kubernetes.io/name: test
    app.kubernetes.io/instance: test
    app.kubernetes.io/version: "1.16.0"
    app.kubernetes.io/managed-by: Helm
data:
  cfg: |-
    ok
The line number in the error refers to the rendered manifest, not to the line in your template source, so you should run the helm template --debug test . command and inspect the rendered output to see where the problem actually is.

HorizontalPodAutoscaler scales up pods but then terminates them instantly

So I have a HorizontalPodAutoscaler set up for my backend (an fpm-server and an Nginx server for a Laravel application).
The problem is that when the HPA is under load, it scales up the pods but it terminates them instantly, not even letting them get into the Running state.
The metrics are good and the scale-up behavior is as expected; the only problem is that the pods get terminated right after scaling.
What could be the problem?
Edit: The same HPA is used on the frontend and it's working as expected, the problem seems to be only on the backend.
Edit 2: I have the Cluster Autoscaler enabled and it does its job: nodes get added when they are needed and then cleaned up, so it's not an issue of available resources.
deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: fpm-server
  labels:
    tier: backend
    layer: fpm
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      tier: backend
      layer: fpm
  template:
    metadata:
      labels:
        tier: backend
        layer: fpm
    spec:
      {{- with .Values.imagePullSecrets }}
      imagePullSecrets:
        {{- toYaml . | nindent 8 }}
      {{- end }}
      containers:
        - name: fpm
          image: "{{ .Values.fpm.image.repository }}:{{ .Values.fpm.image.tag }}"
          ports:
            - name: http
              containerPort: 9000
              protocol: TCP
          env:
            {{- range $name, $value := .Values.env }}
            - name: {{ $name }}
              value: "{{ $value }}"
            {{- end }}
          envFrom:
            - secretRef:
                name: backend-secrets
          resources:
            {{- toYaml .Values.resources | nindent 12 }}
hpa.yaml
apiVersion: autoscaling/v2beta1
kind: HorizontalPodAutoscaler
metadata:
  name: fpm-server-hpa
  labels:
    tier: backend
    layer: fpm
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: fpm-server
  minReplicas: {{ .Values.autoscaling.minReplicas }}
  maxReplicas: {{ .Values.autoscaling.maxReplicas }}
  metrics:
    {{- if .Values.autoscaling.targetCPUUtilizationPercentage }}
    - type: Resource
      resource:
        name: cpu
        targetAverageUtilization: {{ .Values.autoscaling.targetCPUUtilizationPercentage }}
    {{- end }}
    {{- if .Values.autoscaling.targetMemoryUtilizationPercentage }}
    - type: Resource
      resource:
        name: memory
        targetAverageUtilization: {{ .Values.autoscaling.targetMemoryUtilizationPercentage }}
    {{- end }}
It seems the problem was the replicas: {{ .Values.replicaCount }} definition. If you are using an HPA, you shouldn't set replicas on the Deployment as well: whenever the chart is re-applied, the fixed replica count overrides whatever the autoscaler has scaled to, so the freshly created pods get scaled straight back down. I removed this line and the HPA started scaling.
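For reference, the scaffolding that helm create generates avoids this by only rendering replicas when autoscaling is disabled. A minimal sketch of that pattern, assuming an autoscaling.enabled flag in values.yaml (which this chart's values may not have yet):
spec:
  {{- if not .Values.autoscaling.enabled }}
  replicas: {{ .Values.replicaCount }}
  {{- end }}
  selector:
    matchLabels:
      tier: backend
      layer: fpm
That way the Deployment never fights the HPA over the replica count.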

HELM cannot find Deployment.spec.template.spec.containers[0]

I am building a boilerplate Helm chart, but Helm cannot find the container name. I have tried a hard-coded name as well as various formulations of the variable. Nothing works. I am stumped. Please help!
ERROR MSG
Error: unable to build kubernetes objects from release manifest: error validating "": error validating data: ValidationError(Deployment.spec.template.spec.containers[0]): missing required field "name" in io.k8s.api.core.v1.Container
deployment.yaml
apiVersion: "apps/ {{ .Release.ApiVersion }}"
kind: Deployment
metadata:
  name: {{ .Release.Name }}
  labels:
    app: {{ .Values.deploy.image.name }}
spec:
  replicas: {{ .Values.deploy.replicas }}
  selector:
    matchLabels:
      app: {{ .Values.deploy.image.name }}
  template:
    metadata:
      labels:
        app: {{ .Values.deploy.image.name }}
    spec:
      containers:
        - name: {{ .Values.deploy.image.name }}
          image: {{ .Values.deploy.image.repository }}
          imagePullPolicy: {{ .Values.deploy.image.pullPolicy }}
          resources: {}
values.yaml
deploy:
  type: ClusterIP
  replicas: 5
  image:
    name: test
    repository: k8stest
    pullPolicy: IfNotPresent
service:
  name: http
  protocol: TCP
  port: 80
  targetPort: 8000
Your example works just fine for me; I copy-pasted your code and only changed apiVersion to apps/v1. Since you say you have tried to hard-code the name and it still isn't working for you, I would think the problem is somewhere in the whitespace characters.
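As a purely hypothetical illustration (not taken from the asker's files), a stray dash or misaligned indent can produce perfectly valid YAML that still fails validation, because the first list entry ends up without a name field:
      containers:
        - image: k8stest
          imagePullPolicy: IfNotPresent
        - name: test
Running helm template and reading the rendered manifest is usually the quickest way to spot this kind of slip.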

Does helm support Endpoints object type?

I've created the following two objects:
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
spec:
  ports:
    - port: {{ .Values.postgres.port}}
  selector: {}
for a service and its endpoint:
kind: Endpoints
apiVersion: v1
metadata:
  name: {{ .Values.serviceName }}
  namespace: {{ .Values.global.namespace }}
  labels:
    chart: {{ template "chartName" . }}
    env: {{ .Values.global.env }}
  annotations:
    "helm.sh/hook": "pre-install"
    "helm.sh/hook-weight": "10"
    "helm.sh/hook-delete-policy": "before-hook-creation"
subsets:
  - addresses:
      - ip: "{{ .Values.external.ip }}"
    ports:
      - name: "db"
        port: {{ .Values.external.port }}
When I use Helm, even in dry-run mode, I can see the Service object but not the Endpoints object.
Why? Doesn't Helm support all Kubernetes objects?
Helm is just a "templating" tool, so technically it supports everything that your underlying Kubernetes cluster supports.
In your case, please check that both files are in the templates directory.
Actually, it does work. The problem was that the Service and the Endpoints object must have the same name (which I knew), and the port names must also match exactly.
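A minimal sketch of a matched pair, reusing the values keys from the question (the object names and the port name must be identical across both resources):
apiVersion: v1
kind: Service
metadata:
  name: {{ .Values.serviceName }}
spec:
  ports:
    - name: "db"
      port: {{ .Values.postgres.port }}
---
apiVersion: v1
kind: Endpoints
metadata:
  name: {{ .Values.serviceName }}
subsets:
  - addresses:
      - ip: "{{ .Values.external.ip }}"
    ports:
      - name: "db"
        port: {{ .Values.external.port }}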

Why doesn't helm use the name defined in the deployment template?

i.e. from name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod below
# deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: {{ template "project1234.name" . }}
    chart: {{ template "project1234.chart" . }}
    release: {{ .Release.Name }}
    heritage: {{ .Release.Service }}
  name: {{ template "project1234.module5678.fullname" . }}
spec:
  replicas: {{ .Values.replicaCount }}
  selector:
    matchLabels:
      app: {{ template "project1234.name" . }}
  template:
    metadata:
      labels:
        app: {{ template "project1234.name" . }}
    spec:
      containers:
        - image: "{{ .Values.image.name }}:{{ .Values.image.tag }}"
          name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod
          ports:
            - containerPort: 1234
      imagePullSecrets:
        - name: {{ .Values.image.pullSecret }}
I am expecting the pod name to be:
pod/project1234-module5678-pod
Instead, the resulting Pod name is:
pod/chartname-project1234-module5678-dc7db787-skqvv
...where (in my understanding):
chartname is from: helm install --name chartname -f values.yaml .
project1234 is from:
# Chart.yaml
apiVersion: v1
appVersion: "1.0"
description: project1234 Helm chart for Kubernetes
name: project1234
version: 0.1.0
module5678 is from:
# values.yaml
rbac:
  create: true
serviceAccounts:
  module5678:
    create: true
    name:
image:
  name: <image location>
  tag: 1.5
  pullSecret: <pull secret>
gitlab:
  secretName: <secret name>
  username: foo
  password: bar
module5678:
  enabled: true
  name: module5678
ingress:
  enabled: true
replicaCount: 1
resources: {}
I've tried changing name: {{ .Chart.Name }}-{{ .Values.module5678.name }}-pod to a plain string value like "podname1234" and it isn't used. I even tried removing the name setting entirely, and the resulting Pod name stayed the same.
Pods created from a Deployment always have a generated name based on the Deployment's name (and also the name of the intermediate ReplicaSet, if you go off and look for it). You can't override it.
Given the YAML you've shown, I'd expect that this fragment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: {{ template "project1234.module5678.fullname" . }}
expands out to a Deployment name of chartname-project1234-module5678; the remaining bits are added in by the ReplicaSet and then the Pod itself.
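The helper itself isn't shown in the question, but a typical _helpers.tpl definition that would produce such a name looks roughly like this (an assumption about the chart's code, not taken from it):
{{- define "project1234.module5678.fullname" -}}
{{- printf "%s-%s-%s" .Release.Name .Chart.Name .Values.module5678.name | trunc 63 | trimSuffix "-" -}}
{{- end -}}
With the release installed as chartname, this evaluates to chartname-project1234-module5678, which matches the prefix of the Pod name you are seeing.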
If you do look up the Pod and kubectl describe pod chartname-project1234-module5678-dc7db787-skqvv, you will probably see that it has a single container with the expected name project1234-module5678-pod. Pretty much the only time you need the container name is for kubectl logs (or, more rarely, kubectl exec) in a multi-container Pod; if you are in that situation, you'll appreciate having a shorter name, and since container names are always scoped to the specific Pod in which they appear, there's nothing wrong with using a short fixed name here:
spec:
  containers:
    - name: container