How to replace some of the values in a Helm template definition? - kubernetes-helm

I have a value someField in my values.yaml file that looks like this:
someField:
  field1: 1
  field2: someValue
  field3: someObject
  ...
I now want to add this to my template, but change some of the fields, add some others, and maybe remove some too.
someField:
  field1: 2
  field3: someObject
  ...
  field100: someNewValue
I could write all the fields one by one into my template and then do what I want, but there are many fields available and I would like to avoid listing them all.
I can use toYaml, but that writes out the original value in full, and I do not know how to augment it on the fly.
Extended example:
I have another Helm chart installed, called KEDA, which defines a CRD:
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: {scaled-object-name}
spec:
  scaleTargetRef:
    apiVersion: {api-version-of-target-resource}  # Optional. Default: apps/v1
    kind: {kind-of-target-resource}               # Optional. Default: Deployment
    name: {name-of-target-resource}               # Mandatory. Must be in the same namespace as the ScaledObject
    envSourceContainerName: {container-name}      # Optional. Default: .spec.template.spec.containers[0]
  pollingInterval: 30                             # Optional. Default: 30 seconds
  cooldownPeriod: 300                             # Optional. Default: 300 seconds
  idleReplicaCount: 0                             # Optional. Default: ignored, must be less than minReplicaCount
  minReplicaCount: 1                              # Optional. Default: 0
  maxReplicaCount: 100                            # Optional. Default: 100
  fallback:                                       # Optional. Section to specify fallback options
    failureThreshold: 3                           # Mandatory if fallback section is included
    replicas: 6                                   # Mandatory if fallback section is included
  advanced:                                       # Optional. Section to specify advanced options
    restoreToOriginalReplicaCount: true/false     # Optional. Default: false
    horizontalPodAutoscalerConfig:                # Optional. Section to specify HPA related options
      name: {name-of-hpa-resource}                # Optional. Default: keda-hpa-{scaled-object-name}
      behavior:                                   # Optional. Use to modify HPA's scaling behavior
        scaleDown:
          stabilizationWindowSeconds: 300
          policies:
            - type: Percent
              value: 100
              periodSeconds: 15
  triggers:
    - type: service-bus
      authenticationRef: name-of-auth
I have my own Helm chart where I want to generate this CRD from a definition, but change some values.
So a user would provide spec, and I would augment it, e.g. by adding authenticationRef to every element of the triggers array.
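A minimal sketch of one way to do this, assuming the user's input lives under .Values.someField and .Values.spec as in the examples above (all field names here are placeholders from the question): copy the map with Sprig's deepCopy, mutate it with the dict functions set and unset, and render the result with toYaml:

someField:
{{- /* start from the user-provided map, then override, remove, and add fields */}}
{{- $merged := deepCopy .Values.someField }}
{{- $_ := set $merged "field1" 2 }}
{{- $_ := unset $merged "field2" }}
{{- $_ := set $merged "field100" "someNewValue" }}
{{ toYaml $merged | indent 2 }}

The same idea works for augmenting every element of a list, e.g. adding authenticationRef to each trigger of the ScaledObject:

  triggers:
  {{- /* copy each element so .Values itself is not mutated, then add the field */}}
  {{- range $trigger := .Values.spec.triggers }}
  {{- $t := deepCopy $trigger }}
  {{- $_ := set $t "authenticationRef" (dict "name" "name-of-auth") }}
    - {{ toYaml $t | indent 6 | trim }}
  {{- end }}

If the overrides themselves live in another map, Sprig's mergeOverwrite can combine the two maps in one call instead of individual set/unset operations.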

Related

HELM YAML - array values that sometimes require spaces/tabs but sometimes not?

I am confused about when to use spaces and when not to when it comes to arrays and configs.
I think for single-value arrays you need to use spaces, i.e.:
values:
  - "hello"
  - "bye"
  - "yes"
However this is wrong:
spec:
  scaleTargetRef:
    name: sb-testing
  minReplicaCount: 3
  triggers:
    - type: azure-servicebus
    metadata:
      direction: in
When the values are a map, the helm interpreter complains when I add spaces:
error: error parsing deploy.yaml: error converting YAML to JSON: yaml: line 12: did not find expected '-' indicator
It doesn't when I don't:
spec:
  scaleTargetRef:
    name: sb-testing
  minReplicaCount: 3
  triggers:
  - type: azure-servicebus
    metadata:
      direction: in
I can't seem to find any rules about this.
An array of objects in YAML can start with or without leading spaces; both are valid YAML syntax.
values:
  - "hello"
  - "bye"
  - "yes"

values:
- "hello"
- "bye"
- "yes"
Just make sure that the keys of the same block are in the same column.
Sample:
spec:
  scaleTargetRef:
    name: sb-testing
  minReplicaCount: 3
  triggers:
    - type: azure-servicebus
      metadata: # "metadata" and "type" in the same column
        direction: in
or
spec:
  scaleTargetRef:
    name: sb-testing
  minReplicaCount: 3
  triggers:
  - type: azure-servicebus
    metadata:
      direction: in

Check if a values.yaml property has any entries in it when you don't know the names?

I have a helm chart template that looks like this:
volumes:
  - name: secrets
    projected:
      sources:
      {{- range $secretKey := .Values.secrets }}
        - secret:
            name: {{ $secretKey | kebabcase }}-secret
      {{- end }}
This works perfectly, except for when .Values.secrets has no entries in it. Then it gives this error:
error validating data: ValidationError(Deployment.spec.template.spec.volumes[0].projected): missing required field "sources" in io.k8s.api.core.v1.ProjectedVolumeSource
Basically, it is complaining that sources does not have any values.
But I can't find a way to render this section only when .Values.secrets has entries. My values.yaml file is filled automatically and sometimes has no values for the secrets.
But because it is filled automatically, I don't know the names of the values in it. As such I cannot just do a test for one of the entries (like most examples do).
How can I check if .Values.secrets has any values?
You only need to add a conditional check around the block, so that no object is generated when there is no value.
According to the Helm documentation, an if statement evaluates to false when the object is empty:
A pipeline is evaluated as false if the value is:
- a boolean false
- a numeric zero
- an empty string
- a nil (empty or null)
- an empty collection (map, slice, tuple, dict, array)
volumes:
{{- if .Values.secrets }}
  - name: secrets
    projected:
      sources:
      {{- range $secretKey := .Values.secrets }}
        - secret:
            name: {{ $secretKey | kebabcase }}-secret
      {{- end }}
{{- end }}
case 1:
values.yaml:

secrets:

output:

volumes:

case 2:
values.yaml:

secrets:
  - "aaa"
  - "bbb"

output:

volumes:
  - name: secrets
    projected:
      sources:
        - secret:
            name: aaa-secret
        - secret:
            name: bbb-secret
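As a side note, a sketch of an equivalent guard using with instead of if: with also skips the block when .Values.secrets is empty, and additionally rescopes . to the secrets list inside the block:

volumes:
{{- with .Values.secrets }}
  - name: secrets
    projected:
      sources:
      {{- /* . is now the secrets list from values.yaml */}}
      {{- range $secretKey := . }}
        - secret:
            name: {{ $secretKey | kebabcase }}-secret
      {{- end }}
{{- end }}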

Safely Evaluating Input of Multiple Types - OPA Gatekeeper/Rego

I'm trying to deploy a Constraint Template to my Kubernetes cluster for enforcing that PodDisruptionBudgets contain a maxUnavailable percentage at or above a given minimum, and for denying plain integer values.
However, I'm unsure how to safely evaluate maxUnavailable since it can be an integer or a string. Here is the constraint template I am using:
apiVersion: templates.gatekeeper.sh/v1beta1
kind: ConstraintTemplate
metadata:
  name: pdbrequiredtolerance
spec:
  crd:
    spec:
      names:
        kind: PdbRequiredTolerance
      validation:
        # Schema for the `parameters` field
        openAPIV3Schema:
          properties:
            minAllowed:
              type: integer
  targets:
    - target: admission.k8s.gatekeeper.sh
      rego: |
        package pdbrequiredtolerance

        # Check that maxUnavailable exists
        violation[{"msg": msg}] {
          not input.review.object.spec.maxUnavailable
          msg := "You must use maxUnavailable on your PDB"
        }

        # Check that maxUnavailable is a string
        violation[{"msg": msg}] {
          not is_string(input.review.object.spec.maxUnavailable)
          msg := "maxUnavailable must be a string"
        }

        # Check that maxUnavailable is a percentage
        violation[{"msg": msg}] {
          not endswith(input.review.object.spec.maxUnavailable, "%")
          msg := "maxUnavailable must be a string ending with %"
        }

        # Check that maxUnavailable is in the acceptable range
        violation[{"msg": msg}] {
          percentage := split(input.review.object.spec.maxUnavailable, "%")
          to_number(percentage[0]) < input.parameters.minAllowed
          msg := sprintf("You must have maxUnavailable of %v percent or higher", [input.parameters.minAllowed])
        }
When I enter a PDB with a percentage that's too low, I receive the expected error:
Error from server ([pdb-must-have-max-unavailable] You must have maxUnavailable of 30 percent or higher)
However, when I use a PDB with an integer value:
Error from server (admission.k8s.gatekeeper.sh: __modset_templates["admission.k8s.gatekeeper.sh"]["PdbRequiredTolerance"]_idx_0:14: eval_type_error: endswith: operand 1 must be string but got number)
This is because endswith expects a string operand and receives a number. Is there any way around this in Gatekeeper? Both PDBs I specified are valid Kubernetes manifests. I do not wish to return this confusing error to our end users, and would rather clarify that they cannot use integers.
I believe this was solved elsewhere, but for posterity: one solution would be to simply convert the value of variable type to a known type (like a string) before doing the comparison or operation.
maxUnavailable := sprintf("%v", [input.review.object.spec.maxUnavailable])
maxUnavailable can now safely be dealt with as a string regardless of the original type.

How to read an env property file into a config map

I have a property file in the chart/properties folder. For example, chart/properties/dev is the file,
and its contents look like this:
var1=somevalue1
var2=somevalue2
var3=somepwd=
var4=http://someurl.company.com
Some of the value strings in the property file contain an =, and there are also some empty lines in the file.
chart/configmap.yaml looks like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
  namespace: {{ .Release.Namespace }}
data:
{{ range .Files.Lines (printf "properties/%s" .Values.env.required.environment) }}
{{ . | replace "=" ": " }}
{{ end }}
Generated yaml file:
---
# Source: app/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
  namespace: default
data:
  var1: somevalue1
  var2: somevalue2
  var3: somepwd:
  var4: http://someurl.company.com
The generated property entries are missing double quotes around the values; as a result, the deployment complains when the value strings contain special characters.
I'm expecting the configmap.yaml data block to be proper YAML (indent 2) like the file below. With the template above, there are also extra blank lines after each property entry in the YAML file. I got this to work partially when there are no blank lines and no value strings with =. I need help getting this working correctly.
Expected yaml file:
---
# Source: app/templates/configmap.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
  namespace: default
data:
  var1: "somevalue1"
  var2: "somevalue2"
  var3: "somepwd="
  var4: "http://someurl.company.com"
You can follow Go template syntax to do that. I updated the configmap.yaml like the following, which works:
apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
  namespace: {{ .Release.Namespace }}
data:
{{- range .Files.Lines (printf "properties/%s" .Values.env.required.environment) }}
{{- if ne . "" }}
{{- /* splitn splits on the first "=" only, so values that themselves contain "="
      stay intact; see http://masterminds.github.io/sprig/string_slice.html */}}
{{- $parts := splitn "=" 2 . }}
  {{ $parts._0 }}: {{ $parts._1 | quote }}
{{- end }}
{{- end }}
I could not comment on your question because of my reputation. If it is possible for your case, you can use the ConfigMap as a file. I think reading the property file in your code is easier.
https://kubernetes.io/docs/concepts/configuration/configmap/#using-configmaps-as-files-from-a-pod
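A minimal sketch of that suggestion, assuming the same chart layout as in the question (chart/properties/dev, selected via .Values.env.required.environment; the key name app.properties is made up for illustration): embed the file verbatim with .Files.Get and mount the ConfigMap as a file, instead of parsing it line by line:

apiVersion: v1
kind: ConfigMap
metadata:
  name: env-configmap
  namespace: {{ .Release.Namespace }}
data:
  # the whole property file becomes a single ConfigMap entry
  app.properties: |-
{{ .Files.Get (printf "properties/%s" .Values.env.required.environment) | indent 4 }}

The application then reads the mounted file itself, so quoting and = handling are no longer the template's concern.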

How to explicitly write two references in ruamel.yaml

If I have multiple references and I write them to a YAML file using ruamel.yaml from Python, I get:
<<: [*name-name, *help-name]
but instead I would prefer to have
<<: *name-name
<<: *help-name
Is there an option to achieve this while writing to the file?
UPDATE
descriptions:
  - &description-one-ref
    description: >
helptexts:
  - &help-one
    help_text: |
questions:
  - &question-one
    title: "title test"
    reference: "question-one-ref"
    field: "ChoiceField"
    choices:
      - "Yes"
      - "No"
    required: true
    <<: *description-one-ref
    <<: *help-one
    riskvalue_max: 10
    calculations:
      - conditions:
          - comparator: "equal"
            value: "Yes"
        actions:
          - riskvalue: 0
      - conditions:
          - comparator: "equal"
            value: "No"
        actions:
          - riskvalue: 10
Currently I'm reading such a file, modifying specific values within Python, and then writing it back. When writing, I run into the issue that the references are written as a list and not as outlined above.
That means the workflow is: I'm reading the doc via
import ruamel.yaml

yaml = ruamel.yaml.YAML()
with open('test.yaml') as f:
    data = yaml.load(f)

for k in data.keys():
    if k == 'questions':
        q = data.get(k)
        for i in range(0, len(q)):
            q[i]['title'] = "my new title"

with open('new_file.yaml', 'w') as g:
    yaml.dump(data, g)
No, there is no such option, as it would lead to an invalid YAML file.
The << is a mapping key, for which the value is interpreted specially, assuming the parser implements the language-independent merge key specification. And a mapping key must be unique according to the YAML specification:
The content of a mapping node is an unordered set of key: value node
pairs, with the restriction that each of the keys is unique.
That ruamel.yaml (< 0.15.75) doesn't throw an error on such a duplicate key is a bug. On duplicate normal keys, ruamel.yaml does throw an error. The bug is inherited from PyYAML (which is not specification conformant, and does not throw an error even on duplicate normal keys).
However, with a little pre- and post-processing, what you want to do can easily be achieved. The trick is to make the YAML valid before parsing by making the offending duplicate << keys unique (but recognisable), and then, when writing the YAML back to file, substituting these unique keys with <<: * again. In the following, the first occurrence of <<: * is replaced by [<<, 0]: *, the second by [<<, 1]: *, etc. The * needs to be part of the substitution, as there are no anchors in the document for those aliases.
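For illustration, after convert() the two merge lines from the example above become unique (but recognisable) sequence keys, which is valid YAML and loads fine:

    <<: *description-one-ref
    <<: *help-one

becomes

    [<<, 0]: *description-one-ref
    [<<, 1]: *help-one

and revert() substitutes them back to <<: * lines while dumping.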
import sys
import subprocess
import ruamel.yaml

yaml = ruamel.yaml.YAML()
yaml.preserve_quotes = True
yaml.indent(sequence=4, offset=2)


class DoubleMergeKeyEnabler(object):
    def __init__(self):
        self.pat = '<<: *'  # could be at the root level mapping, so no leading space
        self.r_pat = '[<<, {}]: *'  # probably not using sequences as keys
        self.pat_nr = -1

    def convert(self, doc):
        while self.pat in doc:
            self.pat_nr += 1
            doc = doc.replace(self.pat, self.r_pat.format(self.pat_nr), 1)
        return doc

    def revert(self, doc):
        while self.pat_nr >= 0:
            doc = doc.replace(self.r_pat.format(self.pat_nr), self.pat, 1)
            self.pat_nr -= 1
        return doc


dmke = DoubleMergeKeyEnabler()

with open('test.yaml') as fp:
    # we don't do this line by line, that would not work well on flow style mappings
    orgdoc = fp.read()
    doc = dmke.convert(orgdoc)

data = yaml.load(doc)
data['questions'][0].anchor.always_dump = True

#######################################
# >>>> do your thing on data here <<< #
#######################################

with open('output.yaml', 'w') as fp:
    yaml.dump(data, fp, transform=dmke.revert)

res = subprocess.check_output(['diff', '-u', 'test.yaml', 'output.yaml']).decode('utf-8')
print('diff says:', res)
which gives:
diff says:
which means the files are the same on round-trip (as long as you don't
change anything before dumping).
Setting preserve_quotes and calling indent() on the YAML instance are necessary to preserve your superfluous quotes, resp. keep the indentation.
Since the anchor question-one has no alias, you need to enable dumping it explicitly by setting always_dump on that attribute to True. If necessary you can recursively walk over data and set anchor.always_dump = True when .anchor.value is not None.