Adding containers in Custom Resource Definition for deployment Pod - kubernetes

I have a sample Deployment.yaml which has containers in it:
apiVersion: v1
kind: Pod
metadata:
  generateName: test-pod-
spec:
  containers:
  - name: test-pod
    image: test/mypod:v5.16
    env:
    - name: testenv
      valueFrom:
        configMapKeyRef:
          name: kubernetes-config
          key: type
    volumeMounts:
    - name: test-vol
      mountPath: "/test/vol"
      readOnly: true
This creates the Pod with a random name like test-pod-vdffg.
Now I want to do this Pod generation using a Custom Resource Definition, so I created the CRD below:
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  name: testpodconfigs.demo.k8s.com
spec:
  group: demo.k8s.com
  versions:
  - name: v1
    served: true
    storage: true
  scope: Namespaced
  names:
    plural: testpodconfigs
    singular: testpodconfig
    kind: TestPodConfig
And a custom resource like this:
apiVersion: demo.k8s.com/v1
kind: TestPodConfig
metadata:
  generateName: test-pod-
  namespace: testns
spec:
  image: test/mypod:v5.16
  env:
  - name: testenv
    valueFrom:
      configMapKeyRef:
        name: kubernetes-config
        key: type
Here I am not sure whether the image property will actually add a container to the PodSpec, since it is just a plain string. Also, how can I add the volumes and environment variables using a client-go program?
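For what it's worth, a plain string field in a custom resource does nothing by itself: the API server only stores the TestPodConfig object, and a controller watching those objects has to read the spec and build the PodSpec. Below is a minimal client-go sketch of that construction step. The kubeconfig loading, the hard-coded values (which stand in for fields a real controller would read from the CR's spec), and the emptyDir volume source are all assumptions for illustration:

// Hypothetical sketch: builds and creates the Pod that a controller for
// TestPodConfig would generate. The literal values below stand in for
// fields read from the custom resource's spec.
package main

import (
    "context"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    // Load the local kubeconfig (a real controller would use in-cluster config).
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        panic(err)
    }
    clientset, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        panic(err)
    }

    pod := &corev1.Pod{
        ObjectMeta: metav1.ObjectMeta{
            GenerateName: "test-pod-", // same server-side name generation as the plain Pod manifest
            Namespace:    "testns",
        },
        Spec: corev1.PodSpec{
            Containers: []corev1.Container{{
                Name:  "test-pod",
                Image: "test/mypod:v5.16", // the CR's plain image string only becomes a container here
                Env: []corev1.EnvVar{{
                    Name: "testenv",
                    ValueFrom: &corev1.EnvVarSource{
                        ConfigMapKeyRef: &corev1.ConfigMapKeySelector{
                            LocalObjectReference: corev1.LocalObjectReference{Name: "kubernetes-config"},
                            Key:                  "type",
                        },
                    },
                }},
                VolumeMounts: []corev1.VolumeMount{{
                    Name:      "test-vol",
                    MountPath: "/test/vol",
                    ReadOnly:  true,
                }},
            }},
            Volumes: []corev1.Volume{{
                Name: "test-vol",
                // Assumption: an emptyDir stands in for whatever volume source you actually need.
                VolumeSource: corev1.VolumeSource{EmptyDir: &corev1.EmptyDirVolumeSource{}},
            }},
        },
    }

    if _, err := clientset.CoreV1().Pods("testns").Create(context.TODO(), pod, metav1.CreateOptions{}); err != nil {
        panic(err)
    }
}

So the image property by itself adds nothing; it only becomes a container because controller code like this copies it into corev1.Container.Image. Volumes and environment variables are likewise just slices on the PodSpec you construct, so the usual pattern is to extend the CRD's spec with env and volumes fields and copy them across the same way.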

Related

How can I delete environment variable with kustomize?

I want to remove a few environment variables in a container with kustomize. Is that possible? When I patch, it just adds, as you may know.
If it's not possible, can we replace the environment variable name and the secret key name/key pair all together?
containers:
- name: container1
  env:
  - name: NAMESPACE
    valueFrom:
      secretKeyRef:
        name: x
        key: y
Any help on this will be appreciated! Thanks!
If you're looking to remove that NAMESPACE variable from the manifest, you can use the special $patch: delete directive to do so.
If I start with this Deployment:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - name: example
        image: docker.io/traefik/whoami:latest
        env:
        - name: ENV_VAR_1
          valueFrom:
            secretKeyRef:
              name: someSecret
              key: someKeyName
        - name: ENV_VAR_2
          value: example-value
If I write in my kustomization.yaml:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- deployment.yaml
patches:
- patch: |
    apiVersion: apps/v1
    kind: Deployment
    metadata:
      name: example
    spec:
      template:
        spec:
          containers:
          - name: example
            env:
            - name: ENV_VAR_1
              $patch: delete
Then the output of kustomize build is:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example
spec:
  template:
    spec:
      containers:
      - env:
        - name: ENV_VAR_2
          value: example-value
        image: docker.io/traefik/whoami:latest
        name: example
Using a strategic merge patch like this has an advantage over a JSONPatch style patch like Nijat's answer because it doesn't depend on the order in which the environment variables are defined.
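For reference, the JSON6902 flavour of the same patch (a sketch of what the referenced answer presumably looks like; the index values here are the assumption) has to address the variable by its position in the list:

patches:
- target:
    kind: Deployment
    name: example
  patch: |
    - op: remove
      path: /spec/template/spec/containers/0/env/0

If ENV_VAR_1 later moves within the env list, /env/0 deletes whichever variable now sits first, which is exactly the order dependence the strategic merge patch avoids.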

Replace contents of an item in a list using Kustomize

I'm having difficulty trying to get kustomize to replace contents of an item in a list.
My kustomization.yaml file:
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization
resources:
- resource.yaml
patches:
- patch.yaml
My patch.yaml file
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          value: web-pgdb
My resource.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          valueFrom:
            secretKeyRef:
              name: web-pgdb
              key: database
kustomize build returns
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - env:
        - name: PG_DATABASE
          value: web-pgdb
          valueFrom:
            secretKeyRef:
              key: database
              name: web-pgdb
        name: web-service-migration
What I want kustomize build to return:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - env:
        - name: PG_DATABASE
          value: web-pgdb
        name: web-service-migration
If I remember correctly, patches in kustomize use strategic merge by default, so you need to nullify valueFrom; your patch should look like:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-service
spec:
  template:
    spec:
      initContainers:
      - name: web-service-migration
        env:
        - name: PG_DATABASE
          value: web-pgdb
          valueFrom: null
More details about strategic merge patch and how to delete maps: https://github.com/kubernetes/community/blob/master/contributors/devel/sig-api-machinery/strategic-merge-patch.md#maps

How to supply a value of a server in NFS mount in a k8 Deployment via a ConfigMap

I'm writing a Helm chart where I need to supply an nfs.server value for the volume mount from the ConfigMap (efs-url in the example below).
There are examples in the docs of how to pass values from a ConfigMap to environment variables, or even how to mount ConfigMaps. I understand how I can pass this value from values.yaml, but I just can't find an example of how it can be done using a ConfigMap.
I have control over this ConfigMap so I can reformat it as needed.
Am I missing something very obvious?
Is it even possible to do?
If not, what are the possible workarounds?
---
apiVersion: v1
kind: ConfigMap
metadata:
  name: efs-url
data:
  url: yourEFSsystemID.efs.yourEFSregion.amazonaws.com
---
kind: Deployment
apiVersion: extensions/v1beta1
metadata:
  name: efs-provisioner
spec:
  replicas: 1
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: efs-provisioner
    spec:
      containers:
      - name: efs-provisioner
        image: quay.io/external_storage/efs-provisioner:latest
        env:
        - name: FILE_SYSTEM_ID
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: file.system.id
        - name: AWS_REGION
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: aws.region
        - name: PROVISIONER_NAME
          valueFrom:
            configMapKeyRef:
              name: efs-provisioner
              key: provisioner.name
        volumeMounts:
        - name: pv-volume
          mountPath: /persistentvolumes
      volumes:
      - name: pv-volume
        nfs:
          server: <<< VALUE SHOULD COME FROM THE CONFIG MAP >>>
          path: /
Having analysed the comments, it looks like the ConfigMap approach is not suitable for this example, as a ConfigMap
is an API object used to store non-confidential data in key-value pairs. Pods can consume ConfigMaps as environment variables, command-line arguments, or as configuration files in a volume.
None of those consumption modes can inject a value into an arbitrary Pod spec field such as nfs.server. To read more about ConfigMaps and how they can be utilized, see the "ConfigMaps" and "Configure a Pod to Use a ConfigMap" sections of the documentation.
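Since this manifest is part of a Helm chart anyway, one workaround is to skip the ConfigMap for this particular value and template the server straight from the chart's values (a sketch; efsUrl is an assumed value name, not something defined in the chart above):

# values.yaml (assumed value name)
efsUrl: yourEFSsystemID.efs.yourEFSregion.amazonaws.com

# templates/deployment.yaml (volume fragment)
      volumes:
      - name: pv-volume
        nfs:
          server: {{ .Values.efsUrl }}
          path: /

Helm resolves the value at render time, before the Deployment ever reaches the API server, which sidesteps the limitation described above.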

Kubernetes puzzle: Populate environment variable from file (mounted volume)

I have a Pod or Job yaml spec file (I can edit it) and I want to launch it from my local machine (e.g. using kubectl create -f my_spec.yaml).
The spec declares a volume mount. There would be a file in that volume that I want to use as the value for an environment variable.
I want to make it so that the volume file's contents end up in the environment variable (without me jumping through hoops by somehow "downloading" the file to my local machine and inserting it into the spec).
P.S. It's obvious how to do that if you have control over the command of the container (see the sketch after the answer below). But when launching an arbitrary image, I have no control over the command attribute, as I do not know it.
apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      containers:
      - name: main
        image: arbitrary-image
        env:
        - name: my_var
          valueFrom: <Contents of /mnt/my_var_value.txt>
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc
You can create a deployment running an endless kubectl loop which will constantly poll the volume and update a configmap from it. After that you can mount the created configmap into your pod. It's a little bit hacky, but it will work and update your configmap automatically. The only requirement is that the PV must be ReadWriteMany or ReadOnlyMany (but in that case you can mount it in read-only mode in all pods).
apiVersion: v1
kind: ServiceAccount
metadata:
  name: cm-creator
  namespace: default
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  namespace: default
  name: cm-creator
rules:
- apiGroups: [""]
  resources: ["configmaps"]
  verbs: ["create", "update", "get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: cm-creator
  namespace: default
subjects:
- kind: User
  name: system:serviceaccount:default:cm-creator
  apiGroup: rbac.authorization.k8s.io
roleRef:
  kind: Role
  name: cm-creator
  apiGroup: rbac.authorization.k8s.io
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: cm-creator
  namespace: default
  labels:
    app: cm-creator
spec:
  replicas: 1
  selector:
    matchLabels:
      app: cm-creator
  template:
    metadata:
      labels:
        app: cm-creator
    spec:
      serviceAccountName: cm-creator
      containers:
      - name: cm-creator
        image: bitnami/kubectl
        command:
        - /bin/bash
        - -c
        args:
        - while true; do
          kubectl create cm myconfig --from-file=my_var=/mnt/my_var_value.txt --dry-run=client -o yaml | kubectl apply -f -;
          sleep 60;
          done
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
          readOnly: true
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc
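And for the case the question's P.S. calls obvious, i.e. when you do control the container's command, no extra controller is needed: the file can be read into the variable when the container starts. A minimal sketch, where /entrypoint.sh stands in for the image's real (unknown) entrypoint:

apiVersion: batch/v1
kind: Job
metadata:
  generateName: puzzle
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
      - name: main
        image: arbitrary-image
        command: ["/bin/sh", "-c"]
        # Read the file into the variable, then hand off to the original entrypoint.
        args: ["export my_var=$(cat /mnt/my_var_value.txt); exec /entrypoint.sh"]
        volumeMounts:
        - name: my-vol
          mountPath: /mnt
      volumes:
      - name: my-vol
        persistentVolumeClaim:
          claimName: my-pvc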

Kubernetes - ConfigMap for nested variables

We have an image deployed in an AKS cluster for which we need to update a config entry during deployment using configmaps.
The configuration file has the following key and we are trying to replace the value of the "ChildKey" without replacing the entire file -
{
  "ParentKey": {
    "ChildKey": "123"
  }
}
The configmap looks like -
apiVersion: v1
data:
  ParentKey: |
    ChildKey: 456
kind: ConfigMap
metadata:
  name: cf
And in the deployment, the configmap is used like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey
          valueFrom:
            configMapKeyRef:
              key: ParentKey
              name: cf
The replacement is not working with the setup above. Is there a different way to declare the key names for nested structures?
We have addressed this in the following manner -
The configmap carries a simpler structure - only the child element -
apiVersion: v1
data:
  ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
In the deployment, the environment variable key refers to the child key like this -
apiVersion: extensions/v1beta1
kind: Deployment
spec:
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: abc
    spec:
      containers:
      - env:
        - name: ParentKey__ChildKey
          valueFrom:
            configMapKeyRef:
              key: ChildKey
              name: cf
Posting this for reference.
Use the double underscore for nested environment variables and arrays, as explained here.
To avoid explicit environment variables and typing names twice, you can use envFrom.
configMap.yaml
apiVersion: v1
data:
  ParentKey__ChildKey: "456"
kind: ConfigMap
metadata:
  name: cf
deployment.yml
containers:
- name: $(name)
  image: $(image)
  envFrom:
  - configMapRef:
      name: common-config
  - configMapRef:
      name: specific-config