I have a situation where I want to use one Opaque secret in different services.
The only difference is that the key should have a different name in each, e.g.:
service1 should have an env variable named TOKEN with the value SUperPassword111!
service2 should have an env variable named SRV__TOKEN with the same value SUperPassword111!
Is it possible to use the following secret for those two services?
Here is the YAML for the secret:
kind: Secret
apiVersion: v1
metadata:
  name: some-secret # must be a DNS-1123 subdomain, so no underscores
immutable: false
data:
  TOKEN: U1VwZXJQYXNzd29yZDExMSEK
type: Opaque
The name of an environment variable is specified within the container spec, while its value is referenced with secretKeyRef, which specifies the secret to use and the key within that secret.
In other words, the name of the environment variable and the key used in the secret are entirely independent. So, if I understood your question correctly, the answer is: yes, it is possible.
See https://kubernetes.io/docs/concepts/configuration/secret/ for a detailed explanation and a full example for referencing a secret from a pod.
Here is a simple excerpt tailored to your question:
container-spec for "service1":
...
containers:
  - name: service1
    image: service1-image
    env:
      - name: TOKEN # the name of the env var within your container
        valueFrom:
          secretKeyRef:
            name: some-secret
            key: TOKEN # the key as specified in the secret
...
container-spec for "service2":
...
containers:
  - name: service2
    image: service2-image
    env:
      - name: SRV__TOKEN # the name of the env var within your container
        valueFrom:
          secretKeyRef:
            name: some-secret
            key: TOKEN # the key as specified in the secret
...
I have the following environment variable in my Kubernetes template:
envFrom:
  - configMapRef:
      name: configmap
env:
  - name: MACHINENAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
I would like to use the value from 'fieldRef' in a config map instead. Would this kind of modification be possible?
In other words, I want to add the 'MACHINENAME' environment variable to the config map, so I don't have to use the 'env:' block.
You cannot do this in the way you describe.
A ConfigMap only contains fixed string-key/string-value pairs. You cannot embed a more complex structure into a ConfigMap, or say that a ConfigMap value will be resolved using the downward API when a Pod is created. The node name of the pod, and most of the other downward API information, will be different for each pod using the ConfigMap (and likely even for each replica of the same deployment) and so there is no fixed value you can put into a ConfigMap.
You tagged this question with the Helm deployment tool. If you're using Helm and are simply trying to avoid repeating this boilerplate in every Deployment spec, you can write a helper template that includes this definition:
{{/* templates/_helpers.tpl */}}
{{- define "machinename" -}}
- name: MACHINENAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
{{- end -}}
Now in your Deployment spec, you can include this template rather than retyping the whole YAML block:
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
{{ include "machinename" . | indent 6 }}
(The exact indent value will depend on the context where you include it, and should be two more than the number of spaces at the start of the env: line. It is important that the line containing indent not itself be indented.)
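For reference, the rendered output would look roughly like this (a sketch assuming the env: block sits at the indentation shown above):
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
      - name: MACHINENAME
        valueFrom:
          fieldRef:
            fieldPath: spec.nodeName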
Yes, using a ConfigMap would be possible. This Stack Overflow post is quite old but has some good information in it:
Advantage of using configmaps for environment variables with Kubernetes/Helm
You would need to either mount the ConfigMap as a volume or consume it via environment variables using envFrom. This guide provides examples of both:
https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html
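For quick reference, here is a minimal sketch of both approaches (the names example-config, app, and the mount path are placeholders, not from the question):
containers:
  - name: app
    image: my-app:latest # placeholder image
    envFrom: # option 1: every ConfigMap key becomes an env variable
      - configMapRef:
          name: example-config
    volumeMounts: # option 2: expose the keys as files under /etc/config
      - name: config-volume
        mountPath: /etc/config
volumes:
  - name: config-volume
    configMap:
      name: example-config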
You can use a projected volume to merge different ConfigMaps, Secrets, and downward API values into a single mount:
volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: mysecret
            items:
              - key: username
                path: my-group/my-username
        - downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "cpu_limit"
                resourceFieldRef:
                  containerName: container-test
                  resource: limits.cpu
        - configMap:
            name: myconfigmap
            items:
              - key: config
                path: my-group/my-config
Ref: https://kubernetes.io/docs/concepts/storage/projected-volumes/
initContainer
There is another alternative if you want to merge those values: an init container. Since the node name is only known at runtime, the init container can read the downward API value first and then merge it with the ConfigMap data, for example by writing a combined file to a shared volume before the main container starts.
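A minimal sketch of that idea, assuming the main container reads the merged value from a shared emptyDir volume (all names here are placeholders):
spec:
  volumes:
    - name: shared
      emptyDir: {}
  initContainers:
    - name: merge-config
      image: busybox
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      # combine the downward API value with any static config and write it out
      command: ["sh", "-c", "echo \"MACHINENAME=$NODE_NAME\" > /shared/env.properties"]
      volumeMounts:
        - name: shared
          mountPath: /shared
  containers:
    - name: app
      image: my-app:latest # placeholder image
      volumeMounts:
        - name: shared
          mountPath: /shared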
In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in my Deployment's configuration:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This deployment is a Spring Boot application that intends to communicate with the database. Thus it reads the database's URL from the DB_ADDRESS environment variable. (Ignore the default values; those are used only during development.)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml, I would need to concatenate the protocol prefix with the variable. Any idea how to do that in YAML, or a suggestion for some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.
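For example, a sketch of that parameterization, assuming the ConfigMap also carried database_port and database_name keys (both keys are assumptions for illustration):
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: DB_PORT
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_port # assumed key
  - name: DB_NAME
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_name # assumed key
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):$(DB_PORT)/$(DB_NAME)'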
I have a template part like:
spec:
  containers:
    - name: webinspect-runner-{{ .Values.pipeline.sequence }}
      ...
      env:
        - name: wi_base_url
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-base-url
        - name: wi_type
          valueFrom:
            secretKeyRef:
              name: webinspect
              key: wi-type
The webinspect secret's wi-type key may be missing. In that case I want the container to either not have the wi_type env var at all or (better) get a default value, but k8s just reports CreateContainerConfigError: couldn't find key wi-type in Secret namespace/webinspect and the pod fails.
Is there a way to use a default value, or to skip the block if the secret does not exist?
Two options: the first is to add optional: true to the secretKeyRef block(s), which makes Kubernetes skip the variable when the secret or key is missing, as sketched below. The second is a much more complex approach using the lookup template function in Helm. Probably go with the first :)
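A minimal sketch of the first option, applied to the spec from the question:
- name: wi_type
  valueFrom:
    secretKeyRef:
      name: webinspect
      key: wi-type
      optional: true # the env var is simply omitted if the secret or key is missing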
I have a template with a Deployment in it. One of the Deployment's env variables takes its value from a secret whose name contains the name of the namespace the template was instantiated in.
The secret's name must contain the namespace; I can't change that.
I know that it is possible to use env variables as values of other env variables. It looks like this:
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: FULL_NAME
    value: ${MY_NAME}.$(MY_NAMESPACE)
This way, if my namespace's name is app and the MY_NAME parameter is dan:
echo $FULL_NAME
> dan.app
But I want to use that value as the name of a secret, like this:
env:
  - name: MY_NAMESPACE
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.namespace
  - name: FULL_NAME_SECRET
    valueFrom:
      secretKeyRef:
        name: ${MY_NAME}.$(MY_NAMESPACE)
        key: app_name
I get an event that says:
Pod "test-0" is invalid[spec.containers[0].env[1].valueFrom.secretKeyRef.name: Invalid value: "dan.$(MY_NAMESPACE)": a DNS-1123 subdomain must consist of lower case alphanumeric characters, '-' or '.', and must start with an alphanumeric character
Basically, it does not translate $(MY_NAMESPACE) into the actual value of the env variable.
The secretKeyRef: secret name must be a fixed string. Variable expansion only happens in a couple of places in the pod spec (in env: values, command:, and args:, but nowhere else).
For your immediate problem, it may help to recognize that a Pod and any matching Secret must be in the same namespace. You don't need to include the namespace name in the Secret name, and there's no way to refer to a Secret in a different namespace if you needed to.
- name: FULL_NAME_SECRET
  valueFrom:
    secretKeyRef:
      name: dan # always in the same namespace
      key: app_name
More broadly, you can use a tool like Helm to fill in some of these values from templates. These can use data known at install time to construct "fixed strings" that can be submitted to Kubernetes.
{{/* In a Helm chart's templates/deployment.yaml */}}
- name: FULL_NAME_SECRET
  valueFrom:
    secretKeyRef:
      name: {{ include "dan.name" . }}.{{ .Release.Namespace }}
      key: app_name
Here, the parts in curly braces {{ ... }} are expanded by Helm's template engine. .Release.Namespace is the namespace into which you're installing. include "dan.name" . calls a helper template that constructs the "dan" name, in a way that supports multiple installations in the same namespace.
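For completeness, a sketch of what such a helper might look like in templates/_helpers.tpl; this mirrors the standard name helper that helm create generates and is an assumption, not the chart's actual definition:
{{/* templates/_helpers.tpl -- illustrative definition */}}
{{- define "dan.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}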
I am trying to add config data as environment variables, but Kubernetes warns about invalid variable names. The configmap data contains JSON and property files.
spec:
  containers:
    - name: env-var-configmap
      image: nginx:1.7.9
      envFrom:
        - configMapRef:
            name: example-configmap
After deploying, I do not see them added to the process environment. Instead, I see a warning message like the one below:
Config map example-configmap contains keys that are not valid environment variable names. Only config map keys with valid names will be added as environment variables.
But I see that it works if I add a value directly as a key-value pair:
env:
  # Define the environment variable
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
        name: special-config
        # Specify the key associated with the value
        key: special.how
I have thousands of key-value pairs in the ConfigMap data, and I cannot add them all as separate key-value pairs.
Is there any short syntax to add all values from a ConfigMap as environment variables?
While @P-Ekambaram already helped you out, I was getting the same error message; it turned out that my issue was that I named the ConfigMap ms-provisioning-broadsoft-adapter and was trying to use ms-provisioning-broadsoft-adapter as the key. As soon as I changed the key to ms_provisioning_broadsoft_adapter, i.e. underscores instead of hyphens, it happily let me add it to the application.
Hope this might help someone else who also runs into the error "invalid variable name cannot be added as environment variable".
A sample reference is given below.
Create the ConfigMap as shown:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Load the ConfigMap data as environment variables in the pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
Output:
master $ kubectl logs dapi-test-pod | grep SPECIAL
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
You should rename your variables.
In my case they were:
VV-CUSTOMER-CODE
VV-CUSTOMER-URL
I just renamed them to:
VV_CUSTOMER_CODE
VV_CUSTOMER_URL
That works fine: OpenShift/Kubernetes accepts underscores (_) in environment variable names, but not hyphens (-).
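For instance, a ConfigMap along these lines loads cleanly via envFrom (the name and values are illustrative):
apiVersion: v1
kind: ConfigMap
metadata:
  name: customer-config # illustrative name
data:
  VV_CUSTOMER_CODE: "1234" # underscores are valid in env var names
  VV_CUSTOMER_URL: "https://example.com"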
I hope this helps.