Replace contents of an item in a list using Kustomize - kubernetes

I'm having difficulty trying to get kustomize to replace contents of an item in a list.
My kustomize file
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
patches:
- patch.yaml
My patch.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          value: web-pgdb
My resource.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          valueFrom:
            secretKeyRef:
              name: web-pgdb
              key: database
kustomize build returns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - env:
        - name: PG_DATABASE
          value: web-pgdb
          valueFrom:
            secretKeyRef:
              key: database
              name: web-pgdb
        name: web-service-migration
What I want kustomize build to return:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - env:
        - name: PG_DATABASE
          value: web-pgdb
        name: web-service-migration

If I remember correctly, patches in kustomize use a strategic merge by default, so you need to nullify valueFrom. Your patch should look like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          value: web-pgdb
          valueFrom: null
More details about strategic merge patch and how to delete maps: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps
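If you prefer a single file, the same strategic merge patch can also be supplied inline in kustomization.yaml (a minimal sketch assuming a reasonably recent kustomize; the file-based form above works just as well):
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
patches:
- patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: web-service
    spec:
      template:
        spec:
          initContainers:
          - name: web-service-migration
            env:
            - name: PG_DATABASE
              value: web-pgdb
              valueFrom: null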

Related

How can I delete environment variable with kustomize?

I want to remove a few environment variables in a container with kustomize. Is that possible? When I patch, it just adds them, as you may know.
If it's not possible, can we replace the environment variable name and the secret key name/key pair all together?
containers:
- name: container1
  env:
  - name: NAMESPACE
    valueFrom:
      secretKeyRef:
        name: x
        key: y
Any help on this will be appreciated! Thanks!
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so.
If I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: docker.io/traefik/whoami:latest
        env:
        - name: ENV_VAR_1
          valueFrom:
            secretKeyRef:
              name: someSecret
              key: someKeyName
        - name: ENV_VAR_2
          value: example-value
If I write in my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example
    spec:
      template:
        spec:
          containers:
          - name: example
            env:
            - name: ENV_VAR_1
              $patch: delete
Then the output of kustomize build is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - env:
        - name: ENV_VAR_2
          value: example-value
        image: docker.io/traefik/whoami:latest
        name: example
Using a strategic merge patch like this has an advantage over a JSONPatch-style patch like the one in Nijat's answer, because it doesn't depend on the order in which the environment variables are defined.
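For comparison, a JSON 6902 patch has to address the variable by array index, which is what makes it order-dependent. A minimal sketch (assuming ENV_VAR_1 is the first entry of the first container):
patches:
- target:
    kind: Deployment
    name: example
  patch: |
    - op: remove
      path: /spec/template/spec/containers/0/env/0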

Kubernetes Deployment: Is there a way to copy a file from a mount directory to the root directory / after the deployment manifest is applied

The problem is that the mount path cannot be /, but I need to move the demo.txt file into / once the container is created.
I have this sample deployment.yaml:
kind: ConfigMap
apiVersion: v1
metadata:
  name: demo-configfile
data:
  myfile: |
    This my demo file's text info
    This is just dummy text
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo
spec:
  selector:
    matchLabels:
      name: demo-configmaps-test
  template:
    metadata:
      labels:
        name: demo-configmaps-test
    spec:
      containers:
      - name: demo-container
        image: alpine
        imagePullPolicy: Always
        command: ['sh', '-c', 'sleep 36000']
        volumeMounts:
        - name: demo-files
          mountPath: /demo/files
      volumes:
      - name: demo-files
        configMap:
          name: demo-configfile
          items:
          - key: myfile
            path: demo.txt
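One common workaround, sketched here as an assumption rather than taken from the thread: keep the ConfigMap mounted under a subdirectory (a ConfigMap volume cannot be mounted at /) and copy the file into / when the container starts, for example:
containers:
- name: demo-container
  image: alpine
  imagePullPolicy: Always
  # copy the projected file to / on startup, then keep the container alive
  command: ['sh', '-c', 'cp /demo/files/demo.txt / && sleep 36000']
  volumeMounts:
  - name: demo-files
    mountPath: /demo/files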

How to set Kubernetes config map and secret to mongodb environment variables

I am trying to set the two env variables of mongo namely - MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD using kubernetes ConfigMap and Secret as follows:
When I don't use the ConfigMap and Secret, i.e. I hardcode the username and password, it works, but when I try to replace them with the ConfigMap and Secret, it says
'Authentication failed.'
My username and password are the same: admin.
Here's the YAML definition for these objects; can someone help me figure out what is wrong?
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  username: admin
---
apiVersion: v1
kind: Secret
metadata:
  name: mongodb-password
data:
  password: YWRtaW4K
type: Opaque
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        # env:
        # - name: MONGO_INITDB_ROOT_USERNAME
        #   value: admin
        # - name: MONGO_INITDB_ROOT_PASSWORD
        #   value: admin
        env:
        - name: MONGO_INITDB_ROOT_USERNAME
          valueFrom:
            configMapKeyRef:
              name: mongodb-username
              key: username
        - name: MONGO_INITDB_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: mongodb-password
              key: password
Finally I was able to find the solution after hours; it was nothing on the Kubernetes side, it was how I did the base64 encoding.
The correct way to encode is with the following command:
echo -n 'admin' | base64
and that was the issue in my case.
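To illustrate the difference (my own check, not part of the original answer): without -n, echo appends a trailing newline, which ends up inside the encoded value and therefore inside the password MongoDB receives:
echo 'admin' | base64      # prints YWRtaW4K (encodes "admin\n")
echo -n 'admin' | base64   # prints YWRtaW4= (encodes "admin")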
Your deployment YAML is fine; just change spec.template.spec.containers[0].env to spec.template.spec.containers[0].envFrom:
spec:
  containers:
  - name: mongodbtest
    image: mongo:3
    envFrom:
    - configMapRef:
        name: mongodb-username
    - secretRef:
        name: mongodb-password
That will put all keys of your secret and configmap as environment variables in the deployment.
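Note that envFrom uses the ConfigMap/Secret keys verbatim as variable names, so for this to yield MONGO_INITDB_ROOT_USERNAME and MONGO_INITDB_ROOT_PASSWORD the keys themselves would need those names (my reading, not stated in this answer; the next answer below shows the same idea), for example:
apiVersion: v1
kind: ConfigMap
metadata:
  name: mongodb-username
data:
  MONGO_INITDB_ROOT_USERNAME: admin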
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: root
  MONGO_INITDB_ROOT_PASSWORD: password
kind: ConfigMap
metadata:
  name: mongo-cred
  namespace: default
Inject it into the deployment like this:
envFrom:
- configMapRef:
    name: mongo-cred
The deployment will then look something like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: mongodbtest
spec:
  # serviceName: mongodbtest
  replicas: 1
  selector:
    matchLabels:
      app: mongodbtest
  template:
    metadata:
      labels:
        app: mongodbtest
        selector: mongodbtest
    spec:
      containers:
      - name: mongodbtest
        image: mongo:3
        envFrom:
        - configMapRef:
            name: mongo-cred
If you want to store this data in a Secret instead, that is the better practice for sensitive values (note that Secret data is only base64-encoded, not encrypted):
envFrom:
- secretRef:
    name: mongo-cred
You can create the Secret with:
apiVersion: v1
data:
  MONGO_INITDB_ROOT_USERNAME: YWRtaW4K # base64 encoded
  MONGO_INITDB_ROOT_PASSWORD: YWRtaW4K
kind: Secret
type: Opaque
metadata:
  name: mongo-cred
  namespace: default
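Alternatively (my addition, not part of the original answer), kubectl can do the base64 encoding for you, which also avoids the trailing-newline pitfall mentioned above:
kubectl create secret generic mongo-cred \
  --namespace default \
  --from-literal=MONGO_INITDB_ROOT_USERNAME=admin \
  --from-literal=MONGO_INITDB_ROOT_PASSWORD=admin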

Adding containers in Custom Resource Definition for deployment Pod

I have a sample Deployment.yaml which has containers in it
kind: Pod
metadata:
  generateName: test-pod-
spec:
  containers:
  - name: test-pod
    image: test/mypod:v5.16
    env:
    - name: testenv
      valueFrom:
        configMapKeyRef:
          name: kubernetes-config
          key: type
    volumeMounts:
    - name: test-vol
      mountPath: "/test/vol"
      readOnly: true
This creates a pod with a random name such as test-pod-vdffg.
Now I want to do this Pod generation using a Custom Resource Definition.
So I created the CRD below:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testconfigs.demo.k8s.com
  namespace: testns
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpodconfigs
    singular: testpodconfig
    kind: TestPodConfig
And a custom resource like this
apiVersion: demo.k8s.com/v1
kind: TestPodConfig
metadata:
  generateName: test-pod-
  namespace: testns
spec:
  image: test/mypod:v5.16
  env:
  - name: testenv
    valueFrom:
      configMapKeyRef:
        name: kubernetes-config
        key: type
Here I am not sure whether the image property will add the container to the PodSpec or not, since it is a simple string. Also, how can I add the volumes and environment variables using a client-go program?

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using configmaps.
The configuration file has the following key and we are trying to replace the value of the "ChildKey" without replacing the entire file -
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The configmap looks like -
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the deployment, the configmap is used like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey
          valueFrom:
            configMapKeyRef:
              key: ParentKey
              name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner -
The configmap carries a simpler structure - only the child element -
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey__ChildKey
          valueFrom:
            configMapKeyRef:
              key: ChildKey
              name: cf
Posting this for reference.
Use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
- name: $(name)
  image: $(image)
  envFrom:
  - configMapRef:
      name: common-config
  - configMapRef:
      name: specific-config
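To verify the result (a sketch; the deployment name is a placeholder, and the ConfigMap above would need to be one of the referenced names, e.g. common-config), the flattened key arrives in the container as a single environment variable:
kubectl exec deploy/<deployment-name> -- printenv ParentKey__ChildKey
# expected output: 456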