Unable to parse YAML: error converting YAML to JSON: did not find expected key on "advanced" field

The build is throwing the error: did not find expected key on line "advanced" field.
Below is the YAML snippet that I have written:
```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {{ .Values.service.name }}-sqs
spec:
  scaleTargetRef:
    name: {{ .Values.service.name }}
  minReplicaCount: {{ .Values.hpa.minReplicaCount }}
  maxReplicaCount: {{ .Values.hpa.maxReplicaCount }}
  fallback:
    failureThreshold: 3
    replicas: {{ .Values.hpa.minReplicaCount }}
  advanced:
    behavior:
      scaleDown:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleDown.stabilizationWindowSeconds }}
        policies:
{{ toYaml .Values.hpa.scaleDown.policies | indent 6 }}
      scaleUp:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleUp.stabilizationWindowSeconds }}
        policies:
{{ toYaml .Values.hpa.scaleUp.policies | indent 6 }}
        selectPolicy: {{ .Values.hpa.selectPolicyForScaleUp }}
```

There are some formatting and indentation issues in the YAML file you have provided, as noted by Rafał Leszko. You can use helm lint to validate your charts before deploying them. The Helm linter checks your chart's syntax and points out all the errors, warnings, and info messages that would keep your chart from working. (Source: Helm docs)
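As a concrete illustration (a sketch, not the asker's verified fix): with advanced/behavior/scaleDown nested as above, the policies key sits eight spaces deep, so piping toYaml through indent 6 emits list items that are too shallow and breaks the mapping. Using nindent with a matching width avoids the problem:

```yaml
  advanced:
    behavior:
      scaleDown:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleDown.stabilizationWindowSeconds }}
        policies:
          {{- toYaml .Values.hpa.scaleDown.policies | nindent 10 }}
      scaleUp:
        stabilizationWindowSeconds: {{ .Values.hpa.scaleUp.stabilizationWindowSeconds }}
        policies:
          {{- toYaml .Values.hpa.scaleUp.policies | nindent 10 }}
        selectPolicy: {{ .Values.hpa.selectPolicyForScaleUp }}
```

Running helm template on the chart renders the manifest locally, which makes it easy to eyeball the resulting indentation before installing.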

Related

helm: no endpoints available for service "external-secrets-webhook"

When running:

```shell
helm upgrade --install backend ./k8s "$#"
```

it gives the following error (this did not happen before):

```
Error: UPGRADE FAILED: cannot patch "api" with kind ExternalSecret: Internal error occurred: failed calling webhook "validate.externalsecret.external-secrets.io": Post "https://external-secrets-webhook.external-secrets.svc:443/validate-external-secrets-io-v1beta1-externalsecret?timeout=5s": no endpoints available for service "external-secrets-webhook"
```

Any idea what this is or how to debug it? --atomic also doesn't roll back, for the same reason.
The Helm config is:

```yaml
{{- if .Values.awsSecret.enabled }}
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: {{ .Values.applicationName }}
  namespace: {{ .Values.namespace }}
  labels:
    {{- include "application.labels" . | nindent 4 }}
spec:
  refreshInterval: 1m
  secretStoreRef:
    name: cluster-secret-store
    kind: ClusterSecretStore
  target:
    name: {{ .Values.applicationName }}
    creationPolicy: Owner
  dataFrom:
    - extract:
        key: {{ .Values.awsSecret.name }}
{{- end }}
```
and the GitHub Actions config:

```yaml
- helm/upgrade-helm-chart:
    atomic: false
    chart: ./k8s
    helm-version: v3.8.2
    release-name: backend
    namespace: default
    values: ./k8s/values-${ENV}.yaml
    values-to-override:
      "image.tag=${CIRCLE_TAG},\
      image.repository=trak-${ENV}-backend,\
      image.registry=${AWS_ECR_ACCOUNT},\
      env=${ENV},\
      applicationName=api,\
      applicationVersion=${CIRCLE_TAG}"
```
Thank you
I have tried setting --atomic to true, but it doesn't roll back. This morning we made a few changes to roles and permissions, but that should not affect this at all.

Conditional creation of a template in Helm

I need to make the installation of this resource file conditional on the istioEnabled flag from the values.yaml.
virtualService.yaml:
```yaml
  {{ if .Values.istioEnabled }}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: {{ .Values.istio.destinationRule.name }}
  namespace: {{ .Values.namespace }}
spec:
  host:
    - {{ .Values.service.name }}
  trafficPolicy:
    connectionPool:
      tcp:
        maxConnections: {{ .Values.istio.destinationRule.maxConnections }}
    loadBalancer:
      simple: {{ .Values.istio.destinationRule.loadBalancer }}
  subsets:
    - name: v1
      labels: {{ toYaml .Values.app.labels | indent 10 }}
  {{ end }}
```
values.yaml:

```yaml
namespace: default
app:
  name: backend-api
  labels: |+
    app: backend-api
service:
  name: backend-service
istioEnabled: "true"
istio:
  destinationRule:
    name: backend-destination-rule
    loadBalancer: RANDOM
  virtualService:
    name: backend-virtual
    timeout: 5s
    retries: 3
    perTryTimeout: 3s
```
This file causes an error while installing the Helm chart. The error is:

```
Error: INSTALLATION FAILED: YAML parse error on backend-chart/templates/virtualservice.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
```
It's caused by the leading two spaces in your file. For example, this template:

```yaml
  {{ if eq 1 1 }}
apiVersion: v1
kind: Kaboom
  {{ end }}
```

produces the same error:

```
Error: YAML parse error on helm0/templates/debug.yaml: error converting YAML to JSON: yaml: line 2: mapping values are not allowed in this context
```
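A sketch of the fix: start the template actions in column 0 (the {{- form additionally chomps the whitespace the action would otherwise leave behind), e.g. for the DestinationRule above:

```yaml
{{- if .Values.istioEnabled }}
apiVersion: networking.istio.io/v1alpha3
kind: DestinationRule
metadata:
  name: {{ .Values.istio.destinationRule.name }}
{{- end }}
```

As a side note, istioEnabled: "true" in the values.yaml shown is a string, not a boolean; any non-empty string (including "false") is truthy in {{ if }}, so an unquoted boolean is the safer choice.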

Helm (jinja2): parse error "did not find expected key" at a non-existent further line

This is my first question ever on the internet. I've been helped a lot by reading other people's fixes, but now it's time to humbly ask for help myself.
I get the following error from Helm (helm3 install ist-gw-t1 --dry-run):

```
Error: INSTALLATION FAILED: YAML parse error on istio-gateways/templates/app-name.yaml: error converting YAML to JSON: yaml: line 40: did not find expected key
```

However, the file is only 27 lines long! It used to be longer, but I removed the other Kubernetes resources to narrow down the search for the issue.
Template file:
```yaml
{{- range .Values.ingressConfiguration.app-name }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "{{ .name }}-tcp-8080"
  namespace: {{ .namespace }}
  labels:
    app: {{ .name }}
    chart: {{ $.Chart.Name }}-{{ $.Chart.Version | replace "+" "_" }}
    release: {{ $.Release.Name }}
    heritage: {{ $.Release.Service }}
spec:
  hosts: # This was line 40 before shortening the file.
    - {{ .fqdn }}
  gateways:
    - {{ .name }}
  {{ (.routing).type }}:
    - route:
        - destination:
            port:
              number: {{ (.services).servicePort2 }}
            host: {{ (.services).serviceUrl2 }}
      match:
        port: "8080"
        uri:
          exact: {{ .fqdn }}
{{- end }}
```
values.yaml:

```yaml
networkPolicies:
  ingressConfiguration:
    app-name:
      - namespace: 'namespace'
        name: 'app-name'
        ingressController: 'internal-istio-ingress-gateway' # ?
        fqdn: '48.characters.long'
        tls:
          credentialName: 'name-of-the-secret' # ?
          mode: 'SIMPLE' # ?
        serviceUrl1: 'foo' # ?
        servicePort1: '8080'
        routing:
          # Available routing types: http or tls
          # If the tls routing type is selected, matchingSniHost (resp. rewriteURI/matchingURIs) has (resp. have) to be filled (resp. empty)
          type: http
          rewriteURI: ''
          matchingURIs: ['foo']
          matchingSniHost: []
      - services:
          serviceUrl2: "foo"
          servicePort2: "8080"
      - externalServices:
          mysql: 'bar'
```
Where does the error come from?
Why does Helm still report line 40 as problematic, even after shortening the file?
Can you please recommend a Visual Studio Code extension that could have helped me? I have the following slightly relevant ones, but they do not have linting (or I do not know how to use it): YAML by Red Hat, and Kubernetes by Microsoft.
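One thing worth checking in this template (an observation based on Go template rules, not a confirmed root cause): a key containing a hyphen, such as app-name, cannot be reached with dot notation, because the - is parsed as part of an expression; the index function is the usual workaround. Note also that in the values.yaml shown, ingressConfiguration sits under networkPolicies, so the path needs that prefix. A minimal sketch:

```yaml
{{- range index .Values.networkPolicies.ingressConfiguration "app-name" }}
apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: "{{ .name }}-tcp-8080"
{{- end }}
```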

Helm Error converting YAML to JSON: yaml: line 20: did not find expected key

I don't really know what the error is here; it is a simple Helm deploy with a _helpers.tpl. It doesn't make sense and is probably a silly mistake. The code:
deploy.yaml:
```yaml
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: {{ include "metadata.name" . }}-deploy
spec:
  selector:
    matchLabels:
      app: nginx
  replicas: 2 # tells deployment to run 2 pods matching the template
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          vars: {{- include "envs.var" .Values.secret.data }}
```
_helpers.tpl:

```yaml
{{- define "envs.var"}}
{{- range $key := . }}
- name: {{ $key | upper | quote}}
  valueFrom:
    secretKeyRef:
      key: {{ $key | lower }}
      name: {{ $key }}-auth
{{- end }}
{{- end }}
```
values.yaml:

```yaml
secret:
  data:
    username: root
    password: test
```
The error:

```
Error: YAML parse error on mychart/templates/deploy.yaml: error converting YAML to JSON: yaml: line 21: did not find expected key
```

This problem happens because of indentation. You can resolve it by updating the line to:

```yaml
env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
```
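In context, the fixed line would sit like this (a sketch; nindent 12 first emits a newline and then indents every line of the helper output by 12 spaces, the column where list items under env: must start in this layout):

```yaml
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env: {{- include "envs.var" .Values.secret.data | nindent 12 }}
```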
The simplest way to resolve these kinds of issues is to use tools. They are mostly indentation problems and can be fixed very easily with the right tool; yaml-lint is one such tool:

```shell
npm install -g yaml-lint
```

```
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
× YAML Lint failed for C:/Users/mnadeem6/vsc-workspaces/grafana-1/grafana.yaml
× bad indentation of a mapping entry at line 137, column 11:
      restartPolicy: Always
      ^
```

and, after fixing the indentation:

```
PS E:\vsc-workspaces\grafana-1> yamllint .\grafana.yaml
√ YAML Lint successful.
```
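For illustration, the kind of "bad indentation of a mapping entry" that yamllint flags looks like this (a hypothetical minimal example, not the actual grafana.yaml):

```yaml
spec:
  containers:
    - name: grafana
      image: grafana/grafana
   restartPolicy: Always  # 3 spaces deep: neither a sibling of containers (2) nor part of the list item
```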

How does one pass an override file to a specific Service YAML file using Helm?

I am trying to pass a toleration when deploying a chart located in stable. The toleration should be applied to a specific YAML file in the templates directory, NOT the values.yaml file, as happens by default.
I've applied it using patch, and I can see that the change I need would work if it were applied to the right service, which is a DaemonSet.
Currently I'm trying:

```shell
helm install -f tolerations.yaml --name release_here
```

This simply creates a one-off entry when running get chart release_here, and it is not in the correct service YAML.
Quoting your requirement:

> The toleration should be applied to a specific YAML file in the templates directory

First, in order to make that happen, your particular Helm chart needs to allow such end-user customization.
Here is an example based on the stable/kiam chart.
Definition of kiam/templates/server-daemonset.yaml:
```yaml
{{- if .Values.server.enabled -}}
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: {{ template "kiam.name" . }}
    chart: {{ template "kiam.chart" . }}
    component: "{{ .Values.server.name }}"
    heritage: {{ .Release.Service }}
    release: {{ .Release.Name }}
  name: {{ template "kiam.fullname" . }}-server
spec:
  selector:
    matchLabels:
      app: {{ template "kiam.name" . }}
      component: "{{ .Values.server.name }}"
      release: {{ .Release.Name }}
  template:
    metadata:
      {{- if .Values.server.podAnnotations }}
      annotations:
{{ toYaml .Values.server.podAnnotations | indent 8 }}
      {{- end }}
      labels:
        app: {{ template "kiam.name" . }}
        component: "{{ .Values.server.name }}"
        release: {{ .Release.Name }}
      {{- if .Values.server.podLabels }}
{{ toYaml .Values.server.podLabels | indent 8 }}
      {{- end }}
    spec:
      serviceAccountName: {{ template "kiam.serviceAccountName.server" . }}
      hostNetwork: {{ .Values.server.useHostNetwork }}
      {{- if .Values.server.nodeSelector }}
      nodeSelector:
{{ toYaml .Values.server.nodeSelector | indent 8 }}
      {{- end }}
      tolerations: # <---- TOLERATIONS!
{{ toYaml .Values.server.tolerations | indent 8 }}
      {{- if .Values.server.affinity }}
      affinity:
{{ toYaml .Values.server.affinity | indent 10 }}
      {{- end }}
      volumes:
        - name: tls
```
Override the default values.yaml with your custom values to set the toleration in the Pod spec of the DaemonSet:

```yaml
server:
  enabled: true
  tolerations: ## Agent container resources
  affinity:
    nodeAffinity:
      requiredDuringSchedulingIgnoredDuringExecution:
        nodeSelectorTerms:
          - matchExpressions:
              - key: foo.bar.com/role
                operator: In
                values:
                  - master
```
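If you also want a non-empty toleration list there, server.tolerations takes the usual Kubernetes toleration shape; a sketch with a hypothetical taint key:

```yaml
server:
  enabled: true
  tolerations:
    - key: "dedicated"    # hypothetical taint key
      operator: "Equal"
      value: "kiam"
      effect: "NoSchedule"
```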
Render the resulting manifest file to see how it would look when overriding the default values in an install/upgrade helm command with the --values/--set argument:

```shell
helm template --name my-release . -x templates/server-daemonset.yaml --values custom-values.yaml
```
Rendered file (output truncated):

```yaml
---
# Source: kiam/templates/server-daemonset.yaml
apiVersion: apps/v1beta2
kind: DaemonSet
metadata:
  labels:
    app: kiam
    chart: kiam-2.5.1
    component: "server"
    heritage: Tiller
    release: my-release
  name: my-release-kiam-server
spec:
  selector:
    matchLabels:
      app: kiam
      component: "server"
      release: my-release
  template:
    metadata:
      labels:
        app: kiam
        component: "server"
        release: my-release
    spec:
      serviceAccountName: my-release-kiam-server
      hostNetwork: false
      tolerations:
      affinity:
        nodeAffinity:
          requiredDuringSchedulingIgnoredDuringExecution:
            nodeSelectorTerms:
              - matchExpressions:
                  - key: foo.bar.com/role
                    operator: In
                    values:
                      - master
      volumes:
...
```
I hope this helps you solve your problem.