I have a k8s Secret YAML definition with some data items already applied in the cluster. After removing some data items from the YAML file and updating the Secret with kubectl apply, the removed data items still persist in the Secret object in the cluster, and I am not able to remove them without deleting and recreating the Secret from scratch. However, this is not the usual behavior and only happens on rare occasions. Any idea why this is happening and how I can fix it without deleting the whole Secret?
Example:
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  user: foo
  password: bar
EOF
The secret is created with data items user and password.
$ cat <<EOF | kubectl apply -f -
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials-secret
  namespace: default
type: Opaque
stringData:
  password: bar
EOF
After removing user from the Secret definition, the Secret is updated with kubectl apply, but the user data item still remains in the Secret object.
Related
I am trying to map a Kubernetes Secret value to an environment variable. My Secret is shown below:
apiVersion: v1
kind: Secret
metadata:
  name: test-secret
type: opaque
data:
  tls.crt: {{ required "A valid value is required for tls.crt" .Values.tlscrt }}
I mapped the key to an environment variable in the Deployment YAML:
env:
  - name: TEST_VALUE
    valueFrom:
      secretKeyRef:
        name: test-secret
        key: tls.crt
The value gets mapped when I do helm install. However, when I do helm upgrade, the changed value is not reflected in the environment variable; it still has the old value. Can anyone please help here?
Changes to Secret or ConfigMap data are not reflected in existing pods. You have to delete and recreate the pod in order to see the changes. There are ways to automate the process (see this Q/A for example: Helm chart restart pods when configmap changes) and they all have one thing in common: you need to modify something in the pod definition to trigger a restart. It does not happen when you update a linked Secret or ConfigMap because the link remains the same.
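For instance, a common Helm pattern is a checksum annotation on the pod template. This is only a sketch, and it assumes your secret is rendered from templates/secret.yaml (the path is an assumption; adjust it to your chart):

# Fragment of a Deployment template: hashing the rendered secret into a
# pod annotation changes the pod template whenever the secret changes,
# which makes helm upgrade roll the pods.
spec:
  template:
    metadata:
      annotations:
        checksum/secret: {{ include (print $.Template.BasePath "/secret.yaml") . | sha256sum }}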
I created a secret of type service-account using the code below. The secret got created, but when I run kubectl get secrets, the service-account secret is not listed. Where am I going wrong?
apiVersion: v1
kind: Secret
metadata:
  name: secret-sa-sample
  annotations:
    kubernetes.io/service-account.name: "sa-name"
type: kubernetes.io/service-account-token
data:
  # You can include additional key value pairs as you do with Opaque Secrets
  extra: YmFyCg==

kubectl create -f sa-secret.yaml
secret/secret-sa-sample created
It might have been created in the default namespace.
Specify the namespace explicitly using the -n $NS argument to kubectl, for example:
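Using the secret-sa-sample name from the question:

# Find which namespace the secret actually landed in
kubectl get secrets --all-namespaces | grep secret-sa-sample

# Then query it with an explicit namespace
kubectl get secret secret-sa-sample -n <namespace>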
I am using Kubernetes to deploy my Grafana dashboard, and I am trying to use a Kubernetes Secret to store the Grafana admin-password. Here is my YAML file for the Secret:
apiVersion: v1
kind: Secret
metadata:
  name: $APP_INSTANCE_NAME-grafana
  labels:
    app.kubernetes.io/name: $APP_INSTANCE_NAME
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # By default, admin-user is set to `admin`
  admin-user: YWRtaW4=
  admin-password: "$GRAFANA_GENERATED_PASSWORD"
The value for GRAFANA_GENERATED_PASSWORD is base64-encoded and exported like
export GRAFANA_GENERATED_PASSWORD="$(echo -n $PASSWORD | base64)"
where PASSWORD is a variable that I exported on my machine like
export PASSWORD=qwerty123
I am trying to substitute the value of GRAFANA_GENERATED_PASSWORD into the Secret YAML like
envsubst '$GRAFANA_GENERATED_PASSWORD' > "grafana_secret.yaml"
The YAML file, after substituting the base64-encoded value, looks like
apiVersion: v1
kind: Secret
metadata:
  name: kafka-monitor-grafana
  labels:
    app.kubernetes.io/name: kafka-monitor
    app.kubernetes.io/component: grafana
type: Opaque
data:
  # By default, admin-user is set to `admin`
  admin-user: YWRtaW4=
  admin-password: "cXdlcnR5MTIz"
After deploying all my objects, I couldn't log in to my dashboard using the password qwerty123, even though it is encoded properly.
But when I encode my password like
export GRAFANA_GENERATED_PASSWORD="$(echo -n 'qwerty123' | base64)"
it works properly and I can log in to my dashboard using the password qwerty123.
It looks like the problem occurs when I encode my password using a variable,
but I have to encode my password using a variable.
As mentioned in Pratheesh's comment, after deploying Grafana for the first time, the persistent volume was not deleted/recreated, and the grafana.db file that contains the Grafana dashboard password still keeps the old password.
In order to solve this, the PersistentVolume (PV) needs to be deleted before applying the Secret with the new password.
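A sketch of that cleanup; the PVC name below is an assumption, so check yours with the first command:

# Find the Grafana PersistentVolumeClaim and its bound volume
kubectl get pvc

# Delete the claim; the bound PV is released (and removed if its reclaim
# policy is Delete), so grafana.db is recreated with the new password
kubectl delete pvc grafana-pvc

# Re-apply the secret with the new password before Grafana restarts
kubectl apply -f grafana_secret.yaml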
(Using Kubernetes v1.15.7 in minikube, with a matching client version, and minikube 1.9.0.)
If I kubectl apply a secret like this:
apiVersion: v1
data:
  MY_KEY: dmFsdWUK
  MY_SECRET: c3VwZXJzZWNyZXQK
kind: Secret
metadata:
  name: my-secret
type: Opaque
then subsequently kubectl apply a secret removing the MY_SECRET field, like this:
apiVersion: v1
data:
  MY_KEY: dmFsdWUK
kind: Secret
metadata:
  name: my-secret
type: Opaque
The data field in the result is what I expect when I kubectl get the secret:
data:
  MY_KEY: dmFsdWUK
However, if I do the same thing using stringData instead for the first kubectl apply, it does not remove the missing key on the second one:
First kubectl apply:
apiVersion: v1
stringData:
  MY_KEY: value
  MY_SECRET: supersecret
kind: Secret
metadata:
  name: my-secret
type: Opaque
Second kubectl apply (the same, except MY_KEY's value is replaced with b2hubyEK to show that the configuration DID change):
apiVersion: v1
data:
  MY_KEY: b2hubyEK
kind: Secret
metadata:
  name: my-secret
type: Opaque
kubectl get result after applying the second case:
data:
  MY_KEY: b2hubyEK
  MY_SECRET: c3VwZXJzZWNyZXQ=
The field also does not get removed if the second apply uses stringData instead. So it seems that once stringData has been used, it's impossible to remove a field without deleting the secret. Is this a bug? Or should I be doing something differently when using stringData?
kubectl apply needs to merge/patch the changes here. How this works is described in How apply calculates differences and merges changes.
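You can see what that merge is working from by inspecting the last-applied configuration: with stringData, it records the keys under stringData while the live object stores them under data, so the three-way merge never computes a deletion for data.MY_SECRET. For example, with the my-secret name from the question:

# Show the manifest kubectl apply recorded last time; note it contains
# stringData, while the live object only has a data field
kubectl apply view-last-applied secret my-secret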
I would recommend using kustomize with kubectl apply -k and using the secretGenerator to create a unique Secret name for every change. Then you are practicing Immutable Infrastructure and do not run into this kind of problem.
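A minimal sketch of that setup; the literal values are taken from the question, the file layout is an assumption:

# kustomization.yaml
secretGenerator:
  - name: my-secret
    literals:
      - MY_KEY=value

kubectl apply -k . then creates a Secret named my-secret-<hash>. Any change to the literals produces a new hash and therefore a new Secret, and kustomize rewrites references to it in resources declared in the same kustomization, so stale keys cannot linger in the Secret your workloads use.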
A brand new tool for config management is kpt, and kpt live apply may also be an interesting solution for this.
The problem is that stringData is a write-only field. It doesn't have convergent behavior, so it breaks the merge patch generator system. Most high-level tools fix this by converting to normal data before dealing with the patch.
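If you only need to drop the leftover key without deleting the Secret, a JSON patch against the live object works, since it bypasses apply's merge logic entirely (my-secret and MY_SECRET are the names from the question):

# Remove the stale key directly from the live Secret
kubectl patch secret my-secret --type=json \
  -p='[{"op": "remove", "path": "/data/MY_SECRET"}]'

After that, re-apply your manifest using data rather than stringData so the recorded last-applied configuration matches the live state again.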
I know you can use ConfigMap properties as environment variables in the pod spec, but can you use environment variables declared in the pod spec inside the ConfigMap?
For example:
I have a secret password which I wish to access in my configmap application.properties. The secret looks like so:
apiVersion: v1
data:
  pw: THV3OE9vcXVpYTll==
kind: Secret
metadata:
  name: foo
  namespace: foo-bar
type: Opaque
So inside the pod spec I reference the Secret as an env var, and the ConfigMap will be mounted as a volume from within the spec:
env:
  - name: PASSWORD
    valueFrom:
      secretKeyRef:
        name: foo
        key: pw
...
and inside my configMap I can then reference the secret value like so:
apiVersion: v1
kind: ConfigMap
metadata:
  name: application.properties
  namespace: foo-bar
data:
  application.properties: |
    secret.password=$(PASSWORD)
Anything I've found online is just about consuming configMap values as env vars and doesn't mention consuming env vars in configMap values.
Currently this is not a Kubernetes feature.
There is a closed issue requesting this feature, and it's a somewhat controversial topic because the discussion was still ongoing many months after it was closed:
Reference Secrets from ConfigMap #79224
Referencing the closing comment:
Best practice is to not use secret values in envvars, only as mounted files. if you want to keep all config values in a single object, you can place all the values in a secret object and reference them that way.
Referencing secrets via configmaps is a non-goal... it confuses whether things mounting or injecting the config map are mounting confidential values.
I suggest you read the entire thread to understand the reasons and maybe find another approach for your environment to get these variables.
"OK, but this is Real Life, I need to make this work"
Then I recommend this workaround:
Import Data to Config Map from Kubernetes Secret
It makes the substitution with a shell in the entrypoint of the container, as sketched below.
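A minimal sketch of that workaround, assuming the ConfigMap above is mounted at /config and the image ships envsubst (both assumptions; note that envsubst expands ${PASSWORD}, so the properties template would use that shell syntax instead of $(PASSWORD)). The image name and startup command are placeholders:

containers:
  - name: app
    image: my-app:latest          # placeholder image
    command: ["/bin/sh", "-c"]
    args:
      - |
        # Render the properties file with the secret-backed env var,
        # then hand off to the real process
        envsubst < /config/application.properties > /app/application.properties
        exec /app/start           # placeholder startup command
    env:
      - name: PASSWORD
        valueFrom:
          secretKeyRef:
            name: foo
            key: pw
    volumeMounts:
      - name: config
        mountPath: /config
volumes:
  - name: config
    configMap:
      name: application.properties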