Kubernetes Kustomize: replace variable in patch file

Given the following kustomize patch:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:${PASSWORD}#domain.de
I want to use kubectl apply -k and somehow pass a value for ${PASSWORD} which I can set from my build script.
The only solution I got to work so far was replacing the ${PASSWORD} with sed, but I would prefer a kustomize solution.

As @Jonas already suggested, you should consider using a Secret. It's nicely described in this article.
I want to use kubectl apply -k and somehow pass a value for
${PASSWORD} which I can set from my build script.
I guess your script can store the generated password as a variable or save it to some file. You can easily create a Secret as follows:
$ kustomize edit add secret sl-demo-app --from-literal=db-password=$PASSWORD
or from a file:
$ kustomize edit add secret sl-demo-app --from-file=file/path
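For example, a minimal build-script sketch (the file name is illustrative) could be:
$ printf '%s' "$PASSWORD" > ./db-password.txt
$ kustomize edit add secret sl-demo-app --from-file=db-password=./db-password.txt
$ kubectl apply -k .
The --from-file variant keeps the password itself out of kustomization.yaml; only the file path is recorded there.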
As you can read in the mentioned article:
These commands will modify your kustomization.yaml and add a
SecretGenerator inside it.
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
bases:
  - ../../base
patchesStrategicMerge:
  - custom-env.yaml
  - replica-and-rollout-strategy.yaml
secretGenerator:
  - literals:
      - db-password=12345
    name: sl-demo-app
    type: Opaque
Running kustomize build in your project directory will create, among others, the following Secret:
apiVersion: v1
data:
  db-password: MTIzNDU=
kind: Secret
metadata:
  name: sl-demo-app-6ft88t2625
type: Opaque
...
You can find more details in the article.
If we want to use this secret from our deployment, we just have, like
before, to add a new layer definition which uses the secret.
For example, this file will mount the db-password value as an
environment variable:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: sl-demo-app
spec:
  template:
    spec:
      containers:
        - name: app
          env:
            - name: "DB_PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: sl-demo-app
                  key: db.password
In your Deployment definition file it may look similar to this; note that Kubernetes expands environment variables referenced in args with the $(VAR_NAME) syntax:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: flux
spec:
  template:
    spec:
      containers:
        - name: some-name
          env:
            - name: "PASSWORD"
              valueFrom:
                secretKeyRef:
                  name: git-secret
                  key: git.password
          args:
            - --some-key=some-value
            ...
            - --git-url=https://user:$(PASSWORD)#domain.de
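To tie this back to the build script, the Secret itself can be generated the same way as before (a sketch; the secret name and key match the Deployment above):
$ kustomize edit add secret git-secret --from-literal=git.password=$PASSWORD
$ kubectl apply -k .
kustomize appends a content hash to the generated Secret's name (e.g. git-secret-6ft88t2625) and automatically rewrites the secretKeyRef in the Deployment to match.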

Related

How can I delete an environment variable with kustomize?

I want to remove a few environment variables in a container with kustomize. Is that possible? When I patch, it just adds, as you may know.
If it's not possible, can we replace the environment variable name and the secret key name/key pair all together?
containers:
  - name: container1
    env:
      - name: NAMESPACE
        valueFrom:
          secretKeyRef:
            name: x
            key: y
Any help on this will be appreciated! Thanks!
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so.
If I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
        - name: example
          image: docker.io/traefik/whoami:latest
          env:
            - name: ENV_VAR_1
              valueFrom:
                secretKeyRef:
                  name: someSecret
                  key: someKeyName
            - name: ENV_VAR_2
              value: example-value
If I write in my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - patch: |
      apiVersion: apps/v1
      kind: Deployment
      metadata:
        name: example
      spec:
        template:
          spec:
            containers:
              - name: example
                env:
                  - name: ENV_VAR_1
                    $patch: delete
Then the output of kustomize build is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
        - env:
            - name: ENV_VAR_2
              value: example-value
          image: docker.io/traefik/whoami:latest
          name: example
Using a strategic merge patch like this has an advantage over a JSONPatch-style patch like the one in Nijat's answer, because it doesn't depend on the order in which the environment variables are defined.
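For comparison, a JSONPatch version has to address the variable by its list index (a sketch; the 0 indices assume ENV_VAR_1 is the first entry of the first container):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
  - deployment.yaml
patches:
  - target:
      kind: Deployment
      name: example
    patch: |
      - op: remove
        path: /spec/template/spec/containers/0/env/0
If someone later reorders the env list, this patch silently removes the wrong variable, which is exactly the fragility the strategic merge patch avoids.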

How to supply a value of a server in NFS mount in a k8s Deployment via a ConfigMap

I'm writing a Helm chart where I need to supply an nfs.server value for the volume mount from a ConfigMap (efs-url in the example below).
There are examples in the docs on how to pass values from a ConfigMap to environment variables or even mount ConfigMaps as files. I understand how I can pass this value from values.yaml, but I just can't find an example of how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
        - name: efs-provisioner
          image: quay.io/external_storage/efs-provisioner:latest
          env:
            - name: FILE_SYSTEM_ID
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: file.system.id
            - name: AWS_REGION
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: aws.region
            - name: PROVISIONER_NAME
              valueFrom:
                configMapKeyRef:
                  name: efs-provisioner
                  key: provisioner.name
          volumeMounts:
            - name: pv-volume
              mountPath: /persistentvolumes
      volumes:
        - name: pv-volume
          nfs:
            server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
            path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
Note that volume fields such as nfs.server are not on that list. To read more about ConfigMaps and how they can be utilized, see the "ConfigMaps" and "Configure a Pod to Use a ConfigMap" sections of the documentation.
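Since this is a Helm chart, the usual workaround is to template the value from values.yaml instead of a ConfigMap (a sketch; the efs.url key is an assumption):
# values.yaml
efs:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
# templates/deployment.yaml (fragment)
volumes:
  - name: pv-volume
    nfs:
      server: {{ .Values.efs.url }}
      path: /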

How to add an external yml config file into a Kubernetes ConfigMap

I have a config file inside a config folder, i.e. console-service.yml. I am trying to load it at runtime using a ConfigMap; below is my deployment YAML:
kind: Deployment
apiVersion: apps/v1
metadata:
  name: consoleservice
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: consoleservice
    spec:
      containers:
        - name: consoleservice
          image: docker.example.com/app:1
          volumeMounts:
            - name: console-config-volume
              mountPath: /config/console-server.yml
              subPath: console-server.yml
              readOnly: true
      volumes:
        - name: console-config-volume
          configMap:
            name: console-config
---
kind: ConfigMap
apiVersion: v1
metadata:
  name: consoleservice
data:
  pool.size.core: 1
  pool.size.max: 16
I am new to configMap. How I can read .yml configuration from config/ location?
There are two possible solutions for your problem.
1. Embed your file directly into the ConfigMap
This could look similar to this:
kind: ConfigMap
apiVersion: v1
metadata:
  name: some-yaml
data:
  file.yaml: |
    pool:
      size:
        core: 1
        max: 16
2. Create a ConfigMap from your YAML file
This can be done using kubectl:
kubectl create configmap some-yaml \
  --from-file=./some-yaml-file.yaml
This would create a ConfigMap containing the selected file. You can add multiple files to a single ConfigMap.
You can find more information in the documentation.
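Either way, the Deployment can then mount the ConfigMap at the expected path (a sketch; the key name file.yaml matches the first solution, and the mounted file name is taken from the question):
volumes:
  - name: console-config-volume
    configMap:
      name: some-yaml
      items:
        - key: file.yaml               # key inside the ConfigMap
          path: console-server.yml     # file name the container sees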

CronJob: unknown field "configMapRef"

I'm applying a Kubernetes CronJob.
So far it works.
Now I want to add the environment variables (env: - name ... see below).
While trying to apply it, I get the error
unknown field "configMapRef" in io.k8s.api.core.v1.EnvVarSource
I don't want to set all the single variables here; I prefer to link the ConfigMap so as not to duplicate the variables. How is it possible to set a link to the configmap.yaml variables in a CronJob file, and how do I code it?
Frank
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: ad-sync
  creationTimestamp: 2019-02-15T09:10:20Z
  namespace: default
  selfLink: /apis/batch/v1beta1/namespaces/default/cronjobs/ad-sync
spec:
  concurrencyPolicy: Allow
  failedJobsHistoryLimit: 1
  successfulJobsHistoryLimit: 3
  suspend: false
  schedule: "0 */1 * * *"
  jobTemplate:
    metadata:
      labels:
        job: ad-sync
    spec:
      template:
        spec:
          containers:
            - name: ad-sync
              image: foo.azurecr.io/foobar/ad-sync
              command: ["dotnet", "AdSyncService.dll"]
              args: []
              env:
                - name: AdSyncService
                  valueFrom:
                    configMapRef:
                      name: ad-sync-service-configmap
          restartPolicy: OnFailure
There is no such field as configMapRef in the env field; instead, there is a field called configMapKeyRef.
To get more detail about Kubernetes objects, it's convenient to use kubectl explain.
For example, if you would like to check all of the keys and their types, you can use the following commands:
kubectl explain cronjob --recursive
kubectl explain cronjob.spec.jobTemplate.spec.template.spec.containers.env.valueFrom.configMapKeyRef
You should use configMapKeyRef for a single value, or configMapRef with envFrom for the whole ConfigMap.
It works this way:
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  ...
spec:
  ...
  jobTemplate:
    metadata:
      ...
    spec:
      template:
        spec:
          containers:
            - name: ad-sync
              ...
              envFrom:
                - configMapRef:
                    name: ad-sync-service-configmap
              command: ["dotnet", "AdSyncService.dll"]
There are two approaches: using valueFrom for individual values, or envFrom for multiple values.
valueFrom is used inside the env attribute, like this:
spec:
  template:
    spec:
      containers:
        - name: ad-sync
          image: foo.azurecr.io/foobar/ad-sync
          command: ["dotnet", "AdSyncService.dll"]
          args: []
          env:
            - name: AdSyncService
              valueFrom:
                configMapKeyRef:
                  name: ad-sync-service-configmap
                  key: log_level
envFrom is used directly inside the containers attribute, like this:
spec:
  template:
    spec:
      containers:
        - name: ad-sync
          image: foo.azurecr.io/foobar/ad-sync
          command: ["dotnet", "AdSyncService.dll"]
          envFrom:
            - configMapRef:
                name: ad-sync-service-configmap
ConfigMap for reference:
apiVersion: v1
kind: ConfigMap
metadata:
  name: ad-sync-service-configmap
  namespace: default
data:
  log_level: INFO
The main difference between the two is:
valueFrom will inject the value of a single key from the referenced ConfigMap.
envFrom will inject all ConfigMap keys as environment variables.
The main issue with your example is that you used configMapRef (which belongs to envFrom) inside valueFrom, where it should actually be configMapKeyRef.
Also, configMapKeyRef needs a key attribute to identify where the data is coming from.
For more details, please check these docs.
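Once the CronJob has produced a Job, you can sanity-check the injected variables on one of its pods (the names in angle brackets are illustrative):
$ kubectl get pods --selector=job-name=<job-name> -o name
$ kubectl exec <pod-name> -- printenv | grep log_level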

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using ConfigMaps.
The configuration file has the following key, and we are trying to replace the value of "ChildKey" without replacing the entire file:
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The ConfigMap looks like this:
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the Deployment, the ConfigMap is used like this:
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
        - env:
            - name: ParentKey
              valueFrom:
                configMapKeyRef:
                  key: ParentKey
                  name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner.
The ConfigMap carries a simpler structure, only the child element:
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the Deployment, the environment variable key refers to the child key like this:
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
        - env:
            - name: ParentKey__ChildKey
              valueFrom:
                configMapKeyRef:
                  key: ChildKey
                  name: cf
Posting this for reference.
Use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom:
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
  - name: $(name)
    image: $(image)
    envFrom:
      - configMapRef:
          name: common-config
      - configMapRef:
          name: specific-config
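As a side note, the double-underscore convention works because many configuration loaders (for example the .NET configuration binder) translate __ in environment variable names into the section separator, so ParentKey__ChildKey resolves to ParentKey:ChildKey. A quick sanity check inside the running container (the deployment name is illustrative):
$ kubectl exec deploy/<deployment-name> -- printenv ParentKey__ChildKey
456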