In Kubernetes you can dynamically grab the name of a pod and reference it in a YAML file (via a pod field) like so:
env:
- name: POD_NAME
  valueFrom:
    fieldRef:
      fieldPath: metadata.name
and reference it later in the yaml file like so:
- name: FOO
  value: $(POD_NAME)-bar
In the case of a StatefulSet, the value of FOO would be something like "app_thing-0-bar", "app_thing-1-bar", etc. However, this doesn't seem to work for dynamically setting the name of a ConfigMap. For example, take the following ConfigMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app_thing-0-config
data:
  FOO: BAR
and this in the StatefulSet yaml:
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: app_thing
.
.
.
.
.
        envFrom:
        - configMapRef:
            name: $(POD_NAME)-config
will not reference the ConfigMap correctly, as it doesn't seem to accept the $() syntax. Is there any way to do this without resorting to init containers and entrypoint scripting?
If I understand you correctly, there is a tool that can make it work. It's called Reloader:
Problem: We would like to watch if some change happens in ConfigMap and/or Secret; then perform a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset and Statefulset
Solution: Reloader can watch changes in ConfigMap and Secret and do rolling upgrades on Pods with their associated DeploymentConfigs, Deployments, Daemonsets and Statefulsets.
You can find all the necessary info in the project's repository: https://github.com/stakater/Reloader
Also, if you need more details, you can check the documentation.
Please let me know if that helped.
Related
I have a pod with the following spec:
apiVersion: v1
kind: Pod
metadata:
  name: busybox
  namespace: default
spec:
  containers:
  - image: busybox
    command:
    - sleep
    - "3600"
    imagePullPolicy: IfNotPresent
    name: busybox
    env:
    - name: WATCH_NAMESPACE
      valueFrom:
        configMapKeyRef:
          name: watch-namespace-config
          key: WATCH_NAMESPACE
  restartPolicy: Always
I also created a ConfigMap
kubectl create configmap watch-namespace-config \
--from-literal=WATCH_NAMESPACE=dev
The pod looks for values in the watch-namespace-config configmap.
When I manually change the ConfigMap values, I want the pod to restart automatically to reflect the change. Is that possible in any way?
This is currently a feature in progress: https://github.com/kubernetes/kubernetes/issues/22368
For now, use Reloader - https://github.com/stakater/Reloader
It watches if some change happens in ConfigMap and/or Secret; then performs a rolling upgrade on relevant DeploymentConfig, Deployment, Daemonset, Statefulset and Rollout
How to use it - https://github.com/stakater/Reloader#how-to-use-reloader
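For a concrete picture, usage is essentially just an annotation on the workload. A minimal sketch, assuming the ConfigMap from the question and a Deployment managing the busybox pod (Reloader rolls Deployments/StatefulSets/DaemonSets, not bare pods); the annotation names come from the Reloader README:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: busybox
  annotations:
    # reload whenever any ConfigMap/Secret referenced by this Deployment changes
    reloader.stakater.com/auto: "true"
    # or, to watch one specific ConfigMap instead:
    # configmap.reloader.stakater.com/reload: "watch-namespace-config"
spec:
  replicas: 1
  selector:
    matchLabels:
      app: busybox
  template:
    metadata:
      labels:
        app: busybox
    spec:
      containers:
      - name: busybox
        image: busybox
        command: ["sleep", "3600"]
        env:
        - name: WATCH_NAMESPACE
          valueFrom:
            configMapKeyRef:
              name: watch-namespace-config
              key: WATCH_NAMESPACE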
As you correctly mentioned, once you update a ConfigMap or Secret, the Deployment/Pod/StatefulSet is not updated.
An optional solution for this scenario is to use Kustomization.
Kustomize generates a unique name for the ConfigMap/Secret every time you update it, by appending a hash of its content, for example: ConfigMap-xxxxxx.
If you then run:
kubectl kustomize . | kubectl apply -f -
kubectl will apply the changes under the new ConfigMap name, and the workloads referencing it will roll to pick up the new values.
Working Example(s) using Kustomization:
https://github.com/nirgeier/KubernetesLabs/tree/master/Labs/08-Kustomization
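As a rough sketch of that flow (file names here are assumptions), you declare the ConfigMap through a generator instead of creating it by hand; kustomize appends a content hash to the generated name and rewrites every reference to it:
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment.yaml            # references the ConfigMap by its base name "watch-namespace-config"

configMapGenerator:
- name: watch-namespace-config
  literals:
  - WATCH_NAMESPACE=dev
After running kubectl kustomize . | kubectl apply -f - again, the ConfigMap comes out with a new hashed name (something like watch-namespace-config-xxxxxx) and the Deployment's reference is rewritten to match, which is what triggers the rolling update.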
I am trying to map a Kubernetes Secret value to an environment variable. My Secret is shown below:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  tls.crt: {{ required "A valid value is required for tls.crt" .Values.tlscrt }}
I mapped the key to an environment variable in the Deployment yaml:
env:
- name: TEST_VALUE
  valueFrom:
    secretKeyRef:
      name: test-secret
      key: tls.crt
The value gets mapped when I do helm install. However, when I do helm upgrade, the changed value is not reflected in the environment variable; it still has the old value. Can anyone please help here?
Changes to secret or configMap data are not reflected in existing pods. You have to delete and recreate the pod in order to see changes. There are ways to automate the process (see this Q/A for example: Helm chart restart pods when configmap changes) and they all have one thing in common: you need to modify something in pod definition to trigger a restart. It does not happen when you update a linked secret or a configMap because the link remains the same.
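The most common way to do that with Helm (the trick behind the linked Q/A, from the Helm documentation's "Automatically Roll Deployments" tip) is to hash the rendered secret template into a pod annotation, so helm upgrade changes the pod template whenever the secret data changes. A minimal sketch, assuming the Secret is rendered from templates/secret.yaml in the same chart:
# deployment.yaml (Helm template), inside the Deployment's pod template
spec:
  template:
    metadata:
      annotations:
        # changes whenever the rendered secret changes, forcing a rollout on helm upgrade
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}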
We have a namespace in Kubernetes where I would like some secrets (files like jks, properties, ts, etc.) to be made available to all the containers in all the pods (we have one JVM per container and one container per pod kind of Deployment).
I have created the secrets using Kustomize and plan to use them as a volume in the spec of each Deployment, and then volumeMount them into the container of that Deployment. I would like this volume to be mounted on each of the containers deployed in our namespace.
I want to know if kustomize (or anything else) can help me mount this volume on all the Deployments in this namespace.
I have tried the following patchesStrategicMerge
---
apiVersion: apps/v1
kind: Deployment
metadata:
  namespace: myNamespace
spec:
  template:
    spec:
      imagePullSecrets:
      - name: pull-secret
      containers:
      - volumeMounts:
        - name: secret-files
          mountPath: "/secrets"
          readOnly: true
      volumes:
      - name: secret-files
        secret:
          secretName: mySecrets
          items:
          - key: key1
            path: ...somePath
          - key: key2
            path: ...somePath
It requires a name in the metadata section, which does not help me, as all my Deployments have different names.
Inject Information into Pods Using a PodPreset
You can use a PodPreset object to inject information like secrets, volume mounts, and environment variables etc into pods at creation time.
Update, Feb 2021: the PodPreset feature only made it to alpha and was removed in Kubernetes v1.20. See the release notes: https://kubernetes.io/docs/setup/release/notes/
The v1alpha1 PodPreset API and admission plugin has been removed with no built-in replacement. Admission webhooks can be used to modify pods on creation. (#94090, #deads2k) [SIG API Machinery, Apps, CLI, Cloud Provider, Scalability and Testing]
PodPreset (https://kubernetes.io/docs/tasks/inject-data-application/podpreset/) is one way to do this, but all pods in your namespace have to match the label selector you specify in the PodPreset spec.
Another way (which is the most popular) is to use Dynamic Admission Control (https://kubernetes.io/docs/reference/access-authn-authz/extensible-admission-controllers/) and write a mutating webhook in your cluster which will edit your pod spec and add all the secrets you want to mount. Using this you can also make other changes to your pod spec, like mounting volumes, adding labels and more.
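To give an idea of the moving parts, the registration object looks roughly like the sketch below (the secret-injector name, service and /mutate path are all hypothetical; the webhook server itself, which returns a JSONPatch adding the volume and volumeMounts to each pod, is not shown and has to serve TLS with a certificate the API server trusts):
apiVersion: admissionregistration.k8s.io/v1
kind: MutatingWebhookConfiguration
metadata:
  name: secret-injector                      # hypothetical
webhooks:
- name: secret-injector.mynamespace.svc      # hypothetical, must be a fully qualified name
  admissionReviewVersions: ["v1"]
  sideEffects: None
  failurePolicy: Fail
  clientConfig:
    service:
      name: secret-injector                  # hypothetical Service in front of the webhook server
      namespace: myNamespace
      path: /mutate
    caBundle: <base64-encoded CA bundle that signed the webhook server certificate>
  rules:
  - operations: ["CREATE"]
    apiGroups: [""]
    apiVersions: ["v1"]
    resources: ["pods"]
  namespaceSelector:
    matchLabels:
      kubernetes.io/metadata.name: myNamespace   # limit the mutation to the one namespace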
Standalone kustomize supports applying a patch to many resources; see the example "Patching multiple resources at once" in its documentation. The kustomize built into kubectl doesn't support this feature.
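A sketch of what that looks like with standalone kustomize (the patch file is essentially the patch from the question, minus metadata.name; the target selector without a name is what fans it out to every Deployment kustomize is building):
# kustomization.yaml
apiVersion: kustomize.config.k8s.io/v1beta1
kind: Kustomization

resources:
- deployment-a.yaml          # placeholder names for your existing Deployment manifests
- deployment-b.yaml

patches:
- path: mount-secret-patch.yaml
  target:
    kind: Deployment         # no name given, so the patch is applied to all Deployments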
To mount a Secret as a volume you need to update the YAML of your pod/deployment manifest files and redeploy them.
apiVersion: v1
kind: Pod
metadata:
  name: my-pod
spec:
  containers:
  - name: my-container
    image: nginx
    volumeMounts:
    - name: my-secret-volume
      mountPath: /etc/secretpath
  volumes:
  - name: my-secret-volume
    secret:
      secretName: my-secret
kustomize (or anything else) will not mount it for you.
I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var, and the ConfigMap will be mounted as a volume from within the spec:
env:
- name: PASSWORD
  valueFrom:
    secretKeyRef:
      name: foo
      key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's a somewhat controversial topic because the discussion kept going for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasoning and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend you this workaround:
Import Data to Config Map from Kubernetes Secret
It makes the substitution with a shell in the entrypoint of the container.
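The gist of that workaround, as a sketch under some assumptions (container name, image and the java command are placeholders; the mounted properties file keeps the $(PASSWORD) token from the question and is rendered at startup, when the PASSWORD env var from the Secret is available to the shell):
containers:
- name: app                              # placeholder
  image: my-registry/my-app:latest       # placeholder
  env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
  command: ["/bin/sh", "-c"]
  args:
  - |
    # replace the $(PASSWORD) token in the mounted template with the real value,
    # then start the application against the rendered file
    sed "s|\$(PASSWORD)|${PASSWORD}|g" /config/application.properties > /tmp/application.properties
    exec java -jar /app/app.jar --config=/tmp/application.properties
  volumeMounts:
  - name: config
    mountPath: /config
volumes:
- name: config
  configMap:
    name: application.properties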
How do I load a configMap into an environment variable?
Things I've done
Kubernetes documentation describes just this scenario, and following it:
I've actually set up my configMap through Terraform with this:
resource "kubernetes_config_map" "production_database_host" {
  metadata {
    name = "production-database-host"
  }

  data {
    connection_name = "${google_sql_database_instance.master.connection_name}"
  }
}
But via Kubernetes, it would look like this:
apiVersion: v1
data:
  connection_name: this_string_is_redacted
kind: ConfigMap
metadata:
  creationTimestamp: 2018-10-12T05:49:49Z
  name: production-database-host
  namespace: default
  resourceVersion: "316273"
  selfLink: /api/v1/namespaces/default/configmaps/production-database-host
  uid: a1c06423-cde2-11e8-b615-42010a800235
(Fetched by running kubectl get configmap production-database-host -o yaml)
Now, I also have a working container, in a deployment, where I added an environment variable like so:
env:
- name: INSTANCE_CONNECTION_NAME
  valueFrom:
    configMapKeyRef:
      name: production-database-host
      key: connection_name
However, applying this config gives me:
$ kubectl apply -f .
error: error converting YAML to JSON: yaml: line 39: did not find expected key
What am I doing wrong here? Why won't this simply load this_string_is_redacted into the INSTANCE_CONNECTION_NAME environment variable?
Edit: All the source for my infrastructure is in this repo. The Terraform files are applied first; they create the Kubernetes cluster and add the configMap. Then I apply the Kubernetes config.
It was a formatting issue; unfortunately the block:
env:
- name: INSTANCE_CONNECTION_NAME
  valueFrom:
    configMapKeyRef:
      name: production-database-host
      key: connection_name
was indented one space more than it should have been. Everything else works fine.
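For context, the env block has to line up with the other fields of the container entry; under an otherwise standard Deployment it sits like this (container name and image are placeholders):
spec:
  template:
    spec:
      containers:
      - name: app                          # placeholder
        image: gcr.io/example/app:latest   # placeholder
        env:
        - name: INSTANCE_CONNECTION_NAME
          valueFrom:
            configMapKeyRef:
              name: production-database-host
              key: connection_name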