How to reuse variables in a kubernetes yaml? - kubernetes

I have a number of repeated values in my Kubernetes YAML file and I was wondering if there is a way to store variables somewhere in the file, ideally at the top, that I can reuse further down,
sort of like
variables:
  - appName: &appname myapp
  - buildNumber: &buildno 1.0.23
that I can reuse further down like
labels:
  app: *appname
  tags.datadoghq.com/version: *buildno
containers:
  - name: *appname
    ...
    image: 123456.com:*buildno
if something like that is possible.
I know anchors are a thing in YAML; I just couldn't find anything on setting variables.

You can't do this with Kubernetes manifests alone, because you would need a processor to manipulate the YAML files. You can, however, share anchors within the same YAML manifest, like this:
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: &cmname myconfig
  namespace: &namespace default
  labels:
    name: *cmname
    deployedInNamespace: *namespace
data:
  config.yaml: |
    [myconfig]
    example_field=1
This will result in:
apiVersion: v1
data:
  config.yaml: |
    [myconfig]
    example_field=1
kind: ConfigMap
metadata:
  creationTimestamp: "2023-01-25T10:06:27Z"
  labels:
    deployedInNamespace: default
    name: myconfig
  name: myconfig
  namespace: default
  resourceVersion: "147712"
  uid: 4039cea4-1e64-4d1a-bdff-910d5ff2a485
As you can see, the labels name and deployedInNamespace carry the values produced by the anchor evaluation.
Based on your use case description, what you actually need is to go down the Helm chart path and template your manifests. You can then leverage helper functions and easily customize these fields wherever you want them. From my experience, Helm is the way to go for a use case like this, because it lets you customize everything in your manifests when you later decide to change something.
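For illustration, a minimal sketch of the Helm approach (the file layout and quoting are assumptions, not from your setup): define the shared values once in values.yaml and reference them from any template.

# values.yaml
appName: myapp
buildNumber: "1.0.23"

# templates/deployment.yaml (excerpt)
metadata:
  labels:
    app: {{ .Values.appName }}
    tags.datadoghq.com/version: {{ .Values.buildNumber | quote }}
spec:
  template:
    spec:
      containers:
        - name: {{ .Values.appName }}
          image: "123456.com:{{ .Values.buildNumber }}"

Running helm template (or helm install) then renders every occurrence from that single definition, which is exactly the reuse YAML anchors can't give you across documents.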

There is a similar question with an answer; please check below:
How to reuse an environment variable in a YAML file?

Related

custom environment variable - argocd

There are build environment variables (https://argoproj.github.io/argo-cd/user-guide/build-environment/), so you can inject something like $ARGOCD_APP_NAME into the application/Helm YAML file and it resolves to the actual value.
Is there a way to set custom environment variables so they can be resolved in the Argo CD Application YAML file?
For example, in the Application YAML below, I need to set the ENV value so Helm knows which values.yaml to use.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    ...
    helm:
      valueFiles:
        - values_${ENV}.yaml
It's a late answer, but you can: use the plugin field to add the ENV variables at the application level. An example follows:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  ...
spec:
  ...
  source:
    plugin:
      env:
        - name: ENV_VARIABLE
          value: ENV_VALUE
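For context, those variables are consumed by a config management plugin rather than by Argo CD itself. A rough sketch of a legacy plugin definition in the argocd-cm ConfigMap (the plugin name and command line here are illustrative assumptions):

data:
  configManagementPlugins: |
    - name: helm-with-env            # hypothetical plugin name
      generate:
        command: ["sh", "-c"]
        # ENV_VARIABLE is injected from spec.source.plugin.env above
        args: ["helm template . --values values_${ENV_VARIABLE}.yaml"]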

Set values in a knative service.yaml file using environment variables

Is there a way to set the values of some keys in a Knative service.yaml file using environment variables?
More detail
I am trying to deploy a Knative service to a Kubernetes cluster using GitLab CI. Some of the variables in my service.yaml file depend on the project and environment of the GitLab CI pipeline. Is there a way I can seamlessly plug those values into my service.yaml file without resorting to hacks like sed -i ...?
For example, given the following script, I want the $(KUBE_NAMESPACE), $(CI_ENVIRONMENT_SLUG), and $(CI_PROJECT_PATH_SLUG) values to be replaced by accordingly-named environment variables.
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: "$(KUBE_NAMESPACE)"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: "$(CI_ENVIRONMENT_SLUG)"
        app.gitlab.com/app: "$(CI_PROJECT_PATH_SLUG)"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
I don't think there is a great way to expand environment variables inside an existing YAML file, but if you don't want to use sed, you might be able to use envsubst:
envsubst < original.yaml > modified.yaml
You would just run this command before you use the yaml to expand the environment variables contained within it.
Also I think you'll need your variables to use curly braces, instead of parentheses, like this: ${KUBE_NAMESPACE}.
EDIT: You might also be able to use this inline like this: kubectl apply -f <(envsubst < service.yaml)
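One caveat: by default envsubst substitutes every variable it finds, which can mangle other literal $ strings in the manifest. GNU envsubst takes an optional shell-format argument that restricts substitution to the listed variables:
# only these three variables are replaced; any other $... is left intact
envsubst '${KUBE_NAMESPACE} ${CI_ENVIRONMENT_SLUG} ${CI_PROJECT_PATH_SLUG}' \
  < service.yaml | kubectl apply -f -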
More than a Knative issue, this is a Kubernetes limitation. Kubernetes allows some expansion, but not in annotations or namespace definitions. For example, you can do it in container env definitions:
containers:
  - env:
      - name: PODID
        valueFrom: ...
      - name: LOG_PATH
        value: /var/log/$(PODID)
If this is a CI/CD system like GitLab, the environment variables should be available in the shell environment, so a simple shell expansion will do. For example:
#!/bin/bash
# inner quotes are escaped so they survive into the generated YAML
echo -e "
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: design
  namespace: \"${KUBE_NAMESPACE}\"
spec:
  template:
    metadata:
      name: design-v1
      annotations:
        app.gitlab.com/env: \"${CI_ENVIRONMENT_SLUG}\"
        app.gitlab.com/app: \"${CI_PROJECT_PATH_SLUG}\"
    spec:
      containers:
        - name: user-container
          image: ...
      timeoutSeconds: 600
      containerConcurrency: 8
" | kubectl apply -f -
You can also use envsubst as a helper, as mentioned in the other answer.

Kubernetes Dynamic Configmapping in StatefulSet

In Kubernetes you have the ability to dynamically grab the name of a pod and reference it in a yaml file (Pod Field) like so:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
and reference it later in the yaml file like so:
- name: FOO
  value: $(POD_NAME)-bar
Where in the case of a StatefulSet the value of FOO may be something like "app_thing-0-bar", "app_thing-1-bar", etc. However, this doesn't seem to work for dynamically setting the name of a ConfigMap. For example, the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app_thing-0-config
data:
  FOO: BAR
and this in the StatefulSet deployment yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app_thing
...
envFrom:
  - configMapRef:
      name: $(POD_NAME)-config
will not reference the configmap correctly as it doesn't seem to like the $() syntax. Is there any way to do this without resorting to init containers and entrypoint scripting?
If I understand you correctly, there is a tool that can make this work. It's called Reloader:
Problem: We would like to watch if some change happens in ConfigMap and/or Secret; then perform a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset and Statefulset
Solution: Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets
You can find all the necessary info in the link above, and if you need more details you can check the documentation.
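For a quick sense of the usage, a minimal sketch based on Reloader's documented auto annotation, applied to the workload (here the StatefulSet):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app_thing
  annotations:
    # Reloader watches the ConfigMaps/Secrets this workload references
    # and performs a rolling upgrade when any of them change
    reloader.stakater.com/auto: "true"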
Please let me know if that helped.

Helm pre-install yaml for config

I have a dependency on a priority class inside my k8s YAML config files, and I need to install the priority class before any of the YAML inside the template folder:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
value: 1000
globalDefault: false
After reading the Helm docs it seems that I can use the pre-install hook.
I've changed my YAML and added an annotations section with the pre-install hook, but it still doesn't work. Any idea what I'm missing here?
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    "helm.sh/hook": pre-install
value: 1000
globalDefault: false
The YAML is located inside the template folder.
You put quotation marks around the helm.sh/hook annotation key, which is incorrect - quotation marks are only for the annotation values.
You can also add a description field to your configuration file; note that this field is an arbitrary string. It is meant to tell users of the cluster when they should use this PriorityClass.
Your PriorityClass should look like this:
apiVersion: scheduling.k8s.io/v1beta1
kind: PriorityClass
metadata:
  name: ocritical
  annotations:
    helm.sh/hook: pre-install,pre-upgrade
    helm.sh/hook-delete-policy: before-hook-creation
value: 1000
globalDefault: false
description: "This priority class should be used for XYZ service pods only."
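For completeness, pods then opt into the class via priorityClassName (standard Kubernetes usage; the pod below is just an illustration):

apiVersion: v1
kind: Pod
metadata:
  name: critical-app          # illustrative name
spec:
  priorityClassName: ocritical
  containers:
    - name: app
      image: ...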
More information about the proper configuration of a PriorityClass can be found here: PriorityClass.
More information about installing hooks can be found here: helm-hooks.
I hope it helps.

Changing public url in knative service definition

I'm playing around with Knative currently and bootstrapped a simple installation using Gloo and glooctl. Everything worked fine out of the box. However, I just asked myself if there is a possibility to change the generated URL where the service is made available.
I already changed the domain, but I want to know if I can choose a domain name that doesn't contain the namespace, so helloworld-go.namespace.mydomain.com would become helloworld-go.mydomain.com.
The current YAML-definition looks like this:
apiVersion: serving.knative.dev/v1alpha1
kind: Service
metadata:
  labels:
    name: helloworld-go
  namespace: default
spec:
  template:
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          env:
            - name: TARGET
              value: Go Sample v1
Thank you for your help!
This is configurable via the ConfigMap named config-network in the namespace knative-serving. See the ConfigMap in the deployment resources:
apiVersion: v1
data:
  _example: |
    ...
    # domainTemplate specifies the golang text template string to use
    # when constructing the Knative service's DNS name. The default
    # value is "{{.Name}}.{{.Namespace}}.{{.Domain}}". And those three
    # values (Name, Namespace, Domain) are the only variables defined.
    #
    # Changing this value might be necessary when the extra levels in
    # the domain name generated is problematic for wildcard certificates
    # that only support a single level of domain name added to the
    # certificate's domain. In those cases you might consider using a value
    # of "{{.Name}}-{{.Namespace}}.{{.Domain}}", or removing the Namespace
    # entirely from the template. When choosing a new value be thoughtful
    # of the potential for conflicts - for example, when users choose to use
    # characters such as `-` in their service, or namespace, names.
    # {{.Annotations}} can be used for any customization in the go template if needed.
    # We strongly recommend keeping namespace part of the template to avoid domain name clashes
    # Example '{{.Name}}-{{.Namespace}}.{{ index .Annotations "sub"}}.{{.Domain}}'
    # and you have an annotation {"sub":"foo"}, then the generated template would be {Name}-{Namespace}.foo.{Domain}
    domainTemplate: "{{.Name}}.{{.Namespace}}.{{.Domain}}"
    ...
kind: ConfigMap
metadata:
  labels:
    serving.knative.dev/release: "v0.8.0"
  name: config-network
  namespace: knative-serving
Therefore, your config-network should look like this:
apiVersion: v1
data:
  domainTemplate: "{{.Name}}.{{.Domain}}"
kind: ConfigMap
metadata:
  name: config-network
  namespace: knative-serving
You can also have a look at, and customize, the config-domain ConfigMap to configure the domain name that is appended to your services.
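A minimal sketch of that ConfigMap, assuming mydomain.com is the domain you want (replace it with your own):

apiVersion: v1
kind: ConfigMap
metadata:
  name: config-domain
  namespace: knative-serving
data:
  # use this domain for all Knative routes by default
  mydomain.com: ""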
Assuming you're running Knative over an Istio service mesh, there's an example in the Knative docs of how to use an Istio VirtualService to accomplish this at the service level.
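A rough sketch of that pattern, assuming a stock Knative-with-Istio install (the gateway reference, namespaces, and generated internal host are assumptions, so check them against your cluster): the custom host is exposed on the ingress gateway and rewritten to the domain Knative generated.

apiVersion: networking.istio.io/v1alpha3
kind: VirtualService
metadata:
  name: helloworld-go-custom-domain
  namespace: default
spec:
  gateways:
    - knative-serving/knative-ingress-gateway   # assumed gateway name/namespace
  hosts:
    - helloworld-go.mydomain.com                # the URL you want to serve
  http:
    - rewrite:
        # rewrite to the host Knative generated so its routing still matches
        authority: helloworld-go.default.mydomain.com
      route:
        - destination:
            host: istio-ingressgateway.istio-system.svc.cluster.local
          weight: 100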