I am trying to map a Kubernetes Secret value to an environment variable. My secret is shown below:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: Opaque
data:
  tls.crt: {{ required "A valid value is required for tls.crt" .Values.tlscrt }}
I mapped the key to an environment variable in the deployment YAML:
env:
  - name: TEST_VALUE
    valueFrom:
      secretKeyRef:
        name: test-secret
        key: tls.crt
The value gets mapped when I do helm install. However, when I do helm upgrade, the changed value is not reflected in the environment variable; it still has the old value. Can anyone please help here?
Changes to Secret or ConfigMap data are not reflected in existing pods. You have to delete and recreate the pod in order to see the changes. There are ways to automate the process (see this Q/A for example: Helm chart restart pods when configmap changes), and they all have one thing in common: you need to modify something in the pod definition to trigger a restart. That does not happen when you only update a linked Secret or ConfigMap, because the link itself remains the same.
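The pattern from that Q/A, sketched below, is to add a checksum of the rendered secret template as a pod-template annotation, so that any change to the secret also changes the Deployment's pod spec and triggers a rollout. This sketch assumes the Secret above is rendered from templates/secret.yaml in the same chart:
kind: Deployment
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}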
I have a k8s Secret YAML definition with some data items already applied in the cluster. After removing some data items from the YAML file and updating the Secret with kubectl apply, those removed data items still persist in the Secret object in the cluster, and I am not able to remove them without deleting and recreating the Secret from scratch. However, this is not the usual behavior and only happens on rare occasions. Any idea why this is happening and how I can fix it without deleting the whole Secret?
Example:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  user: foo
  password: bar
EOF
The secret is created with data items user and password.
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  password: bar
EOF
After removing user from the secret definition, the secret is updated with kubectl apply but the user data item still remains in the secret.
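One thing worth checking in that situation is what kubectl recorded as the last applied configuration, since pruning of removed keys relies on that annotation (this assumes client-side apply; the names are taken from the example above):
$ kubectl apply view-last-applied secret db-credentials-secret -n default
If user is missing from that output, apply has no record that it ever managed the key and will not remove it.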
I have created a kubernetes secret from a file (secret.txt):
k1=v1
k2=v2
k3=v3
The resulting secret looks like this:
apiVersion: v1
kind: Secret
metadata:
  name: secret007
data:
  secret.txt: bWFza1NhbHQ9InRlc3RzYWx0IgpzM1
I am using it as an environment variable in the pod like this:
- name: KEY1
  valueFrom:
    secretKeyRef:
      key: k1
      name: secret007
      optional: false
Problem
The problem is that the data is a single base64 value under the secret.txt key. I am not able to refer to k1 in the pod and I am getting this error:
Warning Failed 6s (x2 over 6s) kubelet, Error: couldn't find key k1 in Secret kube-system/secret007
Please suggest how to do this without changing the secret format, i.e. the Secret stays a single key (the filename) with all of secret.txt as one base64 value. Is it possible?
You can create secret entries from an environment file:
kubectl create secret generic test --from-env-file=secret.txt
and the resulting secret will have distinct keys:
apiVersion: v1
data:
  k1: djE=
  k2: djI=
  k3: djM=
kind: Secret
metadata:
  creationTimestamp: null
  name: test
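The pod spec from the question can then reference an individual key directly, assuming the secret keeps the name test used above:
- name: KEY1
  valueFrom:
    secretKeyRef:
      key: k1
      name: test
      optional: false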
As already mentioned by @DavidMaze, Kubernetes will not look "inside" Secret or ConfigMap values:
You can refer to the entire Secret value as you've shown in the question, but there's no way to tell it (a) that the value is actually newline-separated key=value pairs (as opposed to TOML, YAML, JSON, XML, ...) and (b) that you want to pick some specific value out of there.
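If the secret format really cannot change, the usual alternative is to mount the secret as a volume and let the application parse secret.txt itself. A minimal sketch (the container name and mount path are placeholders):
containers:
  - name: app
    volumeMounts:
      - name: secret-volume
        mountPath: /etc/secrets
        readOnly: true
volumes:
  - name: secret-volume
    secret:
      secretName: secret007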
I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the secret as an env var; the ConfigMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it is a somewhat controversial topic because the discussion kept going for many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasons and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It performs the substitution with a shell in the entrypoint of the container.
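A minimal sketch of that idea, assuming the ConfigMap is mounted read-only at /config/application.properties and the app reads a writable copy from /app (paths and the final start command are placeholders, not from the question):
command: [ "/bin/sh", "-c" ]
args:
  - |
    # Replace the $(PASSWORD) placeholder with the value injected from the Secret,
    # writing the result to a writable location before starting the application.
    sed "s|\$(PASSWORD)|${PASSWORD}|g" /config/application.properties > /app/application.properties
    exec java -jar /app/app.jar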
I created a Helm chart which has the following secrets.yaml:
apiVersion: v1
kind: Secret
type: Opaque
metadata:
  name: appdbpassword
stringData:
  password: password#1
My pod is:
apiVersion: v1
kind: Pod
metadata:
  name: expense-pod-sample-1
spec:
  containers:
    - name: expense-container-sample-1
      image: exm:1
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      envFrom:
        - secretRef:
            name: appdbpassword
Whenever I run the kubectl get secrets command, I get the following secrets:
NAME                                      TYPE                 DATA   AGE
appdbpassword                             Opaque               1      41m
sh.helm.release.v1.myhelm-1572515128.v1   helm.sh/release.v1   1      41m
Why am I getting that extra secret? Am I missing something here?
Helm v2 used ConfigMaps by default to store release information. The ConfigMaps were created in the same namespace as the Tiller (generally kube-system).
In Helm v3 the Tiller was removed, and the information about each release version had to go somewhere:
In Helm 3, release information about a particular release is now stored in the same namespace as the release itself.
Furthermore, Helm v3 uses Secrets as the default storage driver instead of ConfigMaps (i.e., it is expected that you see these Helm secrets in each namespace that has a release in it).
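You can list just the release bookkeeping Secrets by filtering on the owner label that Helm 3 sets on them:
$ kubectl get secrets -l owner=helm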
There is an option to helm upgrade to limit the number of old release secrets that are kept:
--history-max int   limit the maximum number of revisions saved per release. Use 0 for no limit (default 10)
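For example, to keep only the three most recent revisions of the release from the question (the chart path ./mychart is a placeholder):
$ helm upgrade myhelm-1572515128 ./mychart --history-max 3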
This is because there is no Tiller anymore in Helm 3. Hence, release information is now stored as a Secret in the same namespace as the release itself, Secrets being the default storage driver that Helm uses.
I have a sample Node.js application which uses an envVar environment variable, and I have deployed it on a Kubernetes cluster. I am passing the env variable through a ConfigMap.
Once it is deployed and the pods are all running, if I change my ConfigMap with a new value, does the deployment of my Node.js application need to be redone after this?
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '12345' # initial value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
After updating the configmap.yaml:
configmap.yaml
kind: ConfigMap
apiVersion: v1
metadata:
  name: app1-config
  namespace: default
data:
  envVal: '56789' # changed value
  apiUrl: http://a4235a7ee247011e8aa6f0213eb6eb14-1392003683.us-west-2.elb.amazonaws.com/myapp4
When you consume keys from the ConfigMap as environment variables, you need to restart your pod for the changes to take effect.
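A simple way to force that restart without editing the pod spec is a rolling restart of the owning Deployment (the name app1 is a placeholder):
$ kubectl rollout restart deployment/app1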
When you mount it as a volume into your pod, the files in the volume are updated automatically. The update is not immediate; there is a TTL configured in the kubelet before it checks for changes and does the update, but it is normally quite quick. However, it still depends on your application how it loads the data from the file: whether it is able to update itself on the fly when the files change, or whether the data is loaded only once at startup.
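For the volume approach, a minimal sketch of mounting the ConfigMap from the question as files (the container name and mount path are placeholders):
containers:
  - name: app1
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
volumes:
  - name: config-volume
    configMap:
      name: app1-config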