Dynamically merge fields in helm chart - kubernetes

I'm trying to combine sections of a helm template with sections provided in the values file.
I have this in my template YAML:
{{- $name := "test" }}
{{- if hasKey .Values.provisioners $name }}
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 20
    cpu: 16
  requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values:
        - t
        - m
{{- end }}
and this is my values file:
provisioners:
  gp:
    limits:
      cpu: 75
      nvidia.com/gpu: 2
  test:
    limits:
      memory: 10
      cpu: 10
    requirements:
      - key: karpenter.k8s.aws/instance-category
        operator: In
        values:
          - r
This will only install the manifest if there is a "test" section under provisioners. What I want to do is 'inject' the limits and requirements from the matching provisioners section of the values file, overwriting a value if the item already exists in the template.
One possible complication is that the fields in the values file won't always be static: there can be any number of limits, so every item in that section needs to be copied.
Likewise, the requirements section can contain any number of keys. If there's a matching key in the values file it should overwrite the template's entry; otherwise it should be appended.
So the resulting template would be this if $name is set to "test"
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 10
    cpu: 10
  requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values:
        - r
And the resulting template would be this if $name is set to "gp"
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    memory: 20
    cpu: 75
    nvidia.com/gpu: 2
  requirements:
    - key: karpenter.k8s.aws/instance-category
      operator: In
      values:
        - t
        - m
I'm hoping someone can point me in the right direction on how this can be achieved. I have no idea where to start with this!
Thanks in advance
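One possible starting point (a sketch, not a tested solution): Helm ships Sprig's mergeOverwrite, which deep-merges dictionaries with the right-hand argument taking precedence, so the template can declare its defaults as a dict and merge the matching provisioners entry over them before rendering with toYaml. Note that mergeOverwrite replaces lists wholesale rather than merging individual list items by key, so the per-key overwrite/append behaviour for requirements would still need a custom loop.
{{- $name := "test" }}
{{- if hasKey .Values.provisioners $name }}
{{- $defaultLimits := dict "memory" 20 "cpu" 16 }}
{{- $defaultRequirements := list (dict "key" "karpenter.k8s.aws/instance-category" "operator" "In" "values" (list "t" "m")) }}
{{- $defaults := dict "limits" $defaultLimits "requirements" $defaultRequirements }}
{{- /* the entry from .Values.provisioners wins over the template defaults */ -}}
{{- $merged := mergeOverwrite $defaults (deepCopy (get .Values.provisioners $name)) }}
apiVersion: karpenter.sh/v1alpha5
kind: Provisioner
metadata:
  name: test
spec:
  providerRef:
    name: p2p
  labels:
    workload: test
  limits:
    {{- toYaml $merged.limits | nindent 4 }}
  requirements:
    {{- toYaml $merged.requirements | nindent 4 }}
{{- end }}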

Related

add operation with Kustomize only if not exists

I want to add resource limits and requests using Kustomize, but only if they're not already configured. The problem is that the deployment is in fact a list of deployments, so I cannot use default values:
values.yaml
myDeployments:
  - name: deployment1
  - name: deployment2
    resources:
      limits:
        cpu: 150
        memory: 200
kustomize.yaml
- target:
    kind: Deployment
  patch: |-
    - op: add
      path: "/spec/template/spec/containers/0/resources"
      value:
        limits:
          cpu: 300
          memory: 400
The problem here is that it replaces both deployments' resources, ignoring the resources defined in values.yaml.
You can't make Kustomize conditionally apply a patch based on whether or not the resource limits already exist. You could use labels to identify deployments that should receive the default resource limits, e.g. given something like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  replicas: 1
  template:
    metadata:
      labels:
        example.com/default_limits: "true"
    spec:
      [...]
You could do something like this in your kustomization.yaml:
- target:
    kind: Deployment
    labelSelector: example.com/default_limits=true
  patch: |-
    - op: add
      path: "/spec/template/spec/containers/0/resources"
      value:
        limits:
          cpu: 300
          memory: 400
However, you could also simply set default resource limits in your target namespace. See "Configure Default CPU Requests and Limits for a Namespace" for details. You would create a LimitRange resource in your namespace:
apiVersion: v1
kind: LimitRange
metadata:
  name: cpu-limit-range
spec:
  limits:
    - type: Container
      default:
        cpu: 150
        memory: 200
This would be applied to any containers that don't declare their own resource limits, which I think is the behavior you're looking for.

Reading values from a ConfigMap that is mounted as a file in a Kubernetes cluster (Java)

Here is my ConfigMap
apiVersion: v1
data:
  application_config.properties: |-
    id= abc
    mode= abc
    username= abc
    endpoint: abc
    url= abc
    id= abc
kind: ConfigMap
metadata:
  name: yml-config
and here is my deployment
---
apiVersion: apps/v1
kind: Deployment
metadata:
  ....
spec:
  ....
  template:
    metadata:
      labels:
        app: demo
      name: demo
    spec:
      containers:
        - image: abc:1.0
          imagePullPolicy: Always
          name: demo
          resources:
            limits:
              cpu: 500m
              memory: 650Mi
            requests:
              cpu: 500m
              memory: 650Mi
          volumeMounts:
            - mountPath: /opt/config/application_config.properties
              subPath: application_config.properties
              name: application-config-volume
          ........
      volumes:
        - name: application-config-volume
          configMap:
            name: yml-config
What I need: I want to mount my ConfigMap as a single properties file at the mentioned location and read the values in plain Java with some file I/O.
I have tried many approaches, including the subPath, items and keys fields, but I only get a FileNotFoundException.
(Note: I don't have access to look inside the container, which makes this harder to debug.)
It would be great if someone could help, ideally with Java code showing how to fetch the values from the mount path, which I can try.
Thanks in advance.
You can use any function or library that reads a file at that location.
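For example, a minimal sketch in plain Java (the class name is arbitrary; the path assumes the mountPath and subPath from the Deployment above, so the file ends up at /opt/config/application_config.properties) using java.util.Properties:

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.util.Properties;

public class ConfigReader {
    public static void main(String[] args) throws IOException {
        // mountPath + subPath from the Deployment above
        Path configPath = Path.of("/opt/config/application_config.properties");

        // Load the "key= value" lines of the mounted ConfigMap file
        Properties props = new Properties();
        try (InputStream in = Files.newInputStream(configPath)) {
            props.load(in);
        }

        System.out.println("id   = " + props.getProperty("id"));
        System.out.println("mode = " + props.getProperty("mode"));
    }
}

If this still throws FileNotFoundException, the usual suspects are a volume name mismatch between volumeMounts and volumes, or a subPath that doesn't match the ConfigMap key (application_config.properties).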

yq - How to keep only certain keys in a (nested) object?

I have a bunch of Kubernetes resources (i.e. a lot of yaml files), and I would like to have a result with only certain paths.
My current brutal approach looks like:
cat my-list-of-deployments | yq eval 'select(.kind == "Deployment") \
| del(.metadata.labels, .spec.replicas, .spec.selector, .spec.strategy, .spec.template.metadata) \
| del(.spec.template.spec.containers.[0].env, del(.spec.template.spec.containers.[0].image))' -
Of course this is super inefficient.
In the path .spec.template.spec.containers.[0] I actually want ideally delete anything except: .spec.template.spec.containers.[*].image and .spec.template.spec.containers.[*].resources (where "*" means, keep all array elements).
I tried something like
del(.spec.template.spec.containers.[0] | select(. != "name"))
But this did not work for me. How can I make this better?
Example input:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
        - image: app-one:0.2.0
          name: app-one
          ports:
            - containerPort: 80
              name: http
          resources:
            limits:
              cpu: 50m
              memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
        - image: redis:3.2-alpine
          livenessProbe:
            exec:
              command:
                - redis-cli
                - info
                - server
            periodSeconds: 20
          name: app-two
          readinessProbe:
            exec:
              command:
                - redis-cli
                - ping
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
          startupProbe:
            periodSeconds: 2
            tcpSocket:
              port: 6379
Desired output:
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-one
spec:
  template:
    spec:
      containers:
        - name: app-one
          resources:
            limits:
              cpu: 50m
              memory: 512Mi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: app-two
spec:
  template:
    spec:
      containers:
        - name: app-two
          resources:
            limits:
              cpu: 100m
              memory: 128Mi
The key is to use the with_entries function on the elements of the .containers array to keep only the required fields (name, resources), and the |= update operator to put the modified result back:
yq eval '
select(.kind == "Deployment").spec.template.spec.containers[] |=
with_entries( select(.key == "name" or .key == "resources") ) ' yaml
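If, as mentioned earlier in the question, image should also be kept, the same pattern extends naturally (a sketch, not taken from the original answer; it just adds one more key to the select):
yq eval '
  select(.kind == "Deployment").spec.template.spec.containers[] |=
    with_entries( select(.key == "name" or .key == "image" or .key == "resources") ) ' yaml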

K8S deployments with shared environment variables

We have a set of deployments (sets of pods) that all use the same Docker image. Examples:
web api
web admin
web tasks worker nodes
data tasks worker nodes
...
They all require a set of environment variables that are common, for example the location of the database host, secret keys to external services, etc. They also have a set of environment variables that are not common.
Is there any way one could either:
Reuse a template where environment variables are defined
Load environment variables from file and set them on the pods
The optimal solution would be one that is namespace aware, as we separate the test, stage and prod environment using kubernetes namespaces.
Something similar to Docker's env_file would be nice, but I cannot find any examples or references related to this. The only thing I can find is setting env via secrets, but that is not clean and way too verbose, as I still need to write all environment variables for each deployment.
You can create a ConfigMap with all the common key-value pairs of environment variables.
Then you can reuse that ConfigMap to declare all of its values as environment variables in each Deployment.
Here is an example taken from the official Kubernetes docs.
Create a ConfigMap containing multiple key-value pairs.
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Use envFrom to define all of the ConfigMap’s data as Pod environment variables. The key from the ConfigMap becomes the environment variable name in the Pod.
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config # every key-value pair is taken as an environment variable
      env:
        - name: uncommon
          value: "uncommon value"
  restartPolicy: Never
You can specify the uncommon env variables in the env field.
Now, to verify that the environment variables are actually available, check the logs:
$ kubectl logs -f test-pod
KUBERNETES_PORT=tcp://10.96.0.1:443
SPECIAL_LEVEL=very
uncommon=uncommon value
SPECIAL_TYPE=charm
...
Here you can see that all the provided environment variables are available.
You can create a Secret first, then use the newly created Secret in as many deployment files as you like to share the same environment variable and value:
kubectl create secret generic jwt-secret --from-literal=JWT_KEY=my_awesome_jwt_secret_code
apiVersion: apps/v1
kind: Deployment
metadata:
  name: auth-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: auth
  template:
    metadata:
      labels:
        app: auth
    spec:
      containers:
        - name: auth
          image: lord/auth
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
In the service code the value is then available as process.env.JWT_KEY. The same Secret can be referenced from the next deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: tickets-depl
spec:
  replicas: 1
  selector:
    matchLabels:
      app: tickets
  template:
    metadata:
      labels:
        app: tickets
    spec:
      containers:
        - name: tickets
          image: lord/tickets
          resources:
            requests:
              memory: "128Mi"
              cpu: "250m"
            limits:
              memory: "256Mi"
              cpu: "500m"
          env:
            - name: JWT_KEY
              valueFrom:
                secretKeyRef:
                  name: jwt-secret
                  key: JWT_KEY
Again, the value is available in the service code as process.env.JWT_KEY.
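If repeating the env entry in every deployment still feels too verbose (the original complaint in the question), envFrom also accepts a secretRef, which exposes every key of the Secret at once; a minimal sketch of the container section, reusing the jwt-secret above:
containers:
  - name: auth
    image: lord/auth
    envFrom:
      - secretRef:
          name: jwt-secret # every key in the Secret becomes an environment variable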

helm: how to remove newline after toYaml function

From official documentation:
When the template engine runs, it removes the contents inside of {{ and }}, but it leaves the remaining whitespace exactly as is. The curly brace syntax of template declarations can be modified with special characters to tell the template engine to chomp whitespace. {{- (with the dash and space added) indicates that whitespace should be chomped left, while -}} means whitespace to the right should be consumed.
But I have tried all the variations with no success. Does anyone have a solution for how to place YAML inside YAML? I don't want to use range.
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: test
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
When I use this code without -}}, it adds a newline:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi

  volumes:
  - name: test
    emptyDir: {}
but when I use -}}, it concatenates with the next block:
    resources:
      limits:
        cpu: 100m
        memory: 128Mi
      requests:
        cpu: 20m
        memory: 64Mi
      volumes: <- should be at indent 2
      - name: test
        emptyDir: {}
values.yaml is
pod:
  resources:
    requests:
      cpu: 20m
      memory: 64Mi
    limits:
      cpu: 100m
      memory: 128Mi
This worked for me:
{{ toYaml .Values.pod.resources | trim | indent 6 }}
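A closely related option (not mentioned in the answers here, but available through Sprig in Helm) is nindent, which prepends the newline itself, so the left-chomping marker can safely consume the one in the template:
    resources:
      {{- toYaml .Values.pod.resources | nindent 6 }}
This avoids both the stray blank line and the concatenation problem, because the only newline in play is the one nindent emits.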
The below variant is correct:
{{ toYaml .Values.pod.resources | indent 6 }}
Adding a newline doesn't create any issue here.
I've tried your pod.yaml and got the following error:
$ helm install .
Error: release pilfering-pronghorn failed: Pod "app" is invalid: spec.containers[0].volumeMounts[0].mountPath: Invalid value: "test": must be an absolute path
which means that mountPath of volumeMounts should be something like /mnt.
So, the following pod.yaml works well and creates a pod with the exact resources we defined in values.yaml:
apiVersion: v1
kind: Pod
metadata:
  name: app
  labels:
    app: app
spec:
  containers:
  - name: app
    image: image
    volumeMounts:
    - mountPath: /mnt
      name: test
    resources:
{{ toYaml .Values.pod.resources | indent 6 }}
  volumes:
  - name: test
    emptyDir: {}
{{- toYaml .Values.pod.resources | indent 6 -}}
This removes a new line
@Nickolay, that is not a valid YAML file according to helm - at least helm barfs and says:
error converting YAML to JSON: yaml: line 51: did not find expected key
For me, line 51 is the blank line, and whatever follows should not be indented to the same level.