How to refer to a Secret object holding environment variables inside a container - Kubernetes

I have a question and I hope someone can help me.
I have a deployment YAML file with a pod for an application, and this app must connect to a Redis database using environment variables. I have already set the environment variables on the pod, as you can see here:
spec:
  containers:
    - name: app
      image: nix/python
      ports:
        - containerPort: 8000
      imagePullPolicy: Always
      env:
        - name: ENVIRONMENT
          value: "DEV"
        - name: HOST
          value: "localhost"
        - name: PORT
          value: "8000"
        - name: REDIS_HOST
          value: "nix"
        - name: REDIS_PORT
          value: "6379"
        - name: REDIS_DB
          value: "0"
But I don't think this is a secure best practice, so I am thinking of defining all of these environment variables in a Secret object and referring to it under the container's env. I just want to refer to the name of the Secret, and the container must read all the variables at once, not one by one. How can I do that, please?

Replace the env field with this:
envFrom:
  - secretRef:
      name: {{ .name }}
      optional: false
Set {{ .name }} to the name of the secret object you create.
Your secret object should look like this:
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
type: Opaque
stringData:
  ENVIRONMENT: "DEV"
  HOST: "localhost"
  PORT: "8000"
  REDIS_HOST: "nix"
  REDIS_PORT: "6379"
  REDIS_DB: "0"
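For example, assuming you name the Secret app-env, the container section of the Deployment would end up looking roughly like this:

spec:
  containers:
    - name: app
      image: nix/python
      ports:
        - containerPort: 8000
      imagePullPolicy: Always
      envFrom:
        - secretRef:
            name: app-env      # the Secret's metadata.name; "app-env" is just an example
            optional: false

Every key in the Secret then becomes an environment variable in the container, so no per-variable env entries are needed.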

Related

Combining ENV variables in helm chart

Based on this SO answer, this should work, and I'm not sure what I'm missing.
I'm trying to combine env variables in a Helm chart (TARGET and TARGET_KEY), but I'm getting:
- name: TARGET_KEY # combining keys together
  value: Hello $(TARGET)
I'm expecting:
- name: TARGET_KEY # combining keys together
  value: Hello World
apiVersion: serving.knative.dev/v1
kind: Service
metadata:
  name: hello
  namespace: myapp
spec:
  template:
    metadata:
      annotations:
        autoscaling.knative.dev/minScale: "1"
        autoscaling.knative.dev/target: "10"
    spec:
      containers:
        - image: gcr.io/knative-samples/helloworld-go
          ports:
            - containerPort: 8080
          env:
            - name: TARGET
              value: "World"
            - name: APIKEY
              valueFrom: # get single key from secret at key
                secretKeyRef:
                  name: {{ .Values.keys.name }}
                  key: apiKey
            - name: TARGET_KEY # combining keys together
              value: Hello $(TARGET)
          envFrom: # set ENV variables from all the values in secret
            - secretRef:
                name: {{ .Values.keys.name }}
I am using ArgoCD to sync the Helm charts and am checking the newly deployed pod's env vars.
@David is correct. The env variable shown in the template and in the pod description keeps the un-expanded $(TARGET) reference, but once I ssh'd into the pod, running printenv showed the env variable was properly filled in.
However, I did read that there are issues with alphabetical sorting and ordering when trying to mix multiple env vars this way. That's a topic for another SO question.
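A minimal sketch of the same dependent-variable mechanism without Helm (the pod and container names are arbitrary): $(TARGET) is expanded when the container starts, which is why the pod spec and kubectl describe still show the literal $(TARGET) while printenv inside the container shows "Hello World".

apiVersion: v1
kind: Pod
metadata:
  name: env-expansion-demo          # hypothetical name
spec:
  containers:
    - name: demo
      image: busybox
      command: [ "/bin/sh", "-c", "printenv TARGET_KEY && sleep 3600" ]
      env:
        - name: TARGET               # must be defined before it is referenced
          value: "World"
        - name: TARGET_KEY
          value: "Hello $(TARGET)"   # resolves to "Hello World" at container start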

Kubernetes YAML file with if-else condition

I have a YAML file which I use to deploy my application in all environments. I want to add some JVM args only for the test environment. Is there any way I can do that in the YAML file?
Here is the YAML:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
      env:
        - name: JAVA_OPTS
          value: "
            -Dlog4j.configurationFile=log4j2.xml
            -Denable.scan=true
            "
Here I want -Denable.scan=true to be conditional and added only for the test environment.
I tried the following, but it is not working and Kubernetes throws the error: error converting YAML to JSON: yaml: line 53: did not find expected key
Tried:
apiVersion: v1
kind: Pod
metadata:
  name: rss-site
  labels:
    app: web
spec:
  containers:
    - name: front-end
      image: nginx
      ports:
        - containerPort: 80
    - name: rss-reader
      image: nickchase/rss-php-nginx:v1
      ports:
        - containerPort: 88
      env:
        - name: JAVA_OPTS
          value: "
            -Dlog4j.configurationFile=log4j2.xml
            ${{if eq "TEST" "TEST" }} # just a sample condition, it will change
            -Denable.scan=true
            ${{end }}
            "
Helm will do that. In fact, the syntax is almost identical to what you've written, and would be something like this:
env:
  - name: JAVA_OPTS
    value: "
      -Dlog4j.configurationFile=log4j2.xml
      {{- if eq .Values.profile "TEST" }}
      -Denable.scan=true
      {{- end }}
      "
And you declare which profile you want to use via the install package (called a chart), i.e. you set the .Values.profile value, as sketched below.
You can check out https://helm.sh/ for details and examples.
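For example, a values file for the test environment could be as small as this (the file name is an assumption; only the profile key matters for the template above):

# values-test.yaml (hypothetical file name)
profile: "TEST"

Installing or upgrading with helm install my-release ./my-chart -f values-test.yaml (or with --set profile=TEST) then renders the extra -Denable.scan=true flag; any other profile value leaves it out.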

microprofile-config.properties as ConfigMap in Openliberty

This example shows how to use ConfigMaps with Open Liberty.
The problem for me is that you have to create an env entry for each variable in every Kubernetes deployment.
containers:
  - name: system-container
    image: system:1.0-SNAPSHOT
    ports:
      - containerPort: 9080
    # Set the environment variables
    env:
      - name: CONTEXT_ROOT
        valueFrom:
          configMapKeyRef:
            name: sys-app-root
            key: contextRoot
      - name: SYSTEM_APP_USERNAME
        valueFrom:
          secretKeyRef:
            name: sys-app-credentials
            key: username
      - name: SYSTEM_APP_PASSWORD
        valueFrom:
          secretKeyRef:
            name: sys-app-credentials
            key: password
Wouldn't it be easier to just upload microprofile-config.properties as a ConfigMap and mount it as a volume at the right location?
You don't have to create an env entry for each value. You can create your ConfigMap from the microprofile-config.properties file and then just use envFrom to load all the key/value pairs, like this:
envFrom:
  - configMapRef:
      name: my-microprofile-config
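For reference, a ConfigMap built from such a properties file could look roughly like this (the key names below are placeholders, not your actual properties); keep in mind that only keys that are valid environment variable names can be picked up via envFrom:

apiVersion: v1
kind: ConfigMap
metadata:
  name: my-microprofile-config
data:
  # placeholder keys; use the properties from your own
  # microprofile-config.properties file here
  CONTEXT_ROOT: "/system"
  APP_GREETING: "Hello"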
But maybe I missed your real problem...
Since you're using Open Liberty, I also think that creating your own ConfigMap from the microprofile-config.properties file is a good idea.
According to this documentation:
MicroProfile Config allows you to define configuration values, which are referred to as "config property" values, in a huge range of locations that are known as ConfigSources.
To load all the key pairs, one can use envFrom (just like @Gus suggested).
According to this documentation:
One can use envFrom to define all of the ConfigMap's data as container environment variables.
The key from the ConfigMap becomes the environment variable name in the Pod.
Here is an example of using envFrom:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
To load all the key pairs you need this snippet:
envFrom:
  - configMapRef:
      name: special-config
See also this and this documentation.

Pod does not see secrets

A pod created in the same (default) namespace as its secret does not see the values from it.
The secret's file contains the following:
apiVersion: v1
kind: Secret
metadata:
  name: backend-secret
data:
  SECRET_KEY: <base64 of value>
  DEBUG: <base64 of value>
After creating this secret via kubectl create -f backend-secret.yaml, I'm launching a pod with the following configuration:
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
  imagePullSecrets:
    - name: dockerhub-credentials
  volumes:
    - name: secret
      secret:
        secretName: backend-secret
But the pod crashes when trying to read this environment variable via Python's os.environ['DEBUG'].
How can I make it work?
If you mount a secret as a volume, it will be mounted in the specified directory, where each key name becomes a file name; see the sketch below for an example.
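A sketch of mounting the backend-secret from the question as files (the mount path is an assumption); each key then shows up as a file, e.g. /etc/backend-secret/DEBUG:

spec:
  containers:
    - image: backend
      name: backend
      volumeMounts:
        - name: secret
          mountPath: /etc/backend-secret   # assumed path; each key becomes a file here
          readOnly: true
  volumes:
    - name: secret
      secret:
        secretName: backend-secret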
If you want to access secrets from the environment in your pod, then you need to use the secret in an environment variable, like the following.
apiVersion: v1
kind: Pod
metadata:
  name: backend
spec:
  containers:
    - image: backend
      name: backend
      ports:
        - containerPort: 8000
      env:
        - name: DEBUG
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: DEBUG
        - name: SECRET_KEY
          valueFrom:
            secretKeyRef:
              name: backend-secret
              key: SECRET_KEY
  imagePullSecrets:
    - name: dockerhub-credentials
Ref: https://kubernetes.io/docs/concepts/configuration/secret/#using-secrets-as-environment-variables
Finally, I used these lines under Deployment.spec.template.spec.containers:
containers:
  - name: backend
    image: zuber93/wts_backend
    imagePullPolicy: Always
    envFrom:
      - secretRef:
          name: backend-secret
    ports:
      - containerPort: 8000

Use Kubernetes secrets as environment variables inside a config map

I have an application in a container which reads certain data from a ConfigMap that goes like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: hello123
Now I created a secret for the password and mounted it as an env variable while starting the container.
apiVersion: v1
kind: Secret
metadata:
  name: appdbpassword
type: Opaque
stringData:
  password: hello123
My pod looks like:
apiVersion: v1
kind: Pod
metadata:
  name: {{ .Values.pod.name }}
spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      command: [ "/bin/sh", "-c", "--" ]
      args: [ "while true; do sleep 30; done;" ]
      env:
        - name: password
          valueFrom:
            secretKeyRef:
              name: appdbpassword
              key: password
      volumeMounts:
        - name: config-volume
          mountPath: /app/app-config/application.yaml
          subPath: application.yaml
  volumes:
    - name: config-volume
      configMap:
        name: app-config
I tried using this env variable inside the configMap:
apiVersion: v1
kind: ConfigMap
metadata:
  name: app-config
data:
  application.yaml: |
    server:
      port: 8080
      host: 0.0.0.0
    ##
    ## UCP configuration.
    ## If skipped, it will default to looking inside of the connections.xml file.
    database:
      ApplicationDB:
        username: username
        password: ${password}
But my application is unable to read this password. Am I missing something here?
EDIT:
I cannot change application.yaml to any other form, as my server looks for application.yaml in the source path. Is there any way to use that environment variable in the Helm values.yaml file and then use it in the ConfigMap?
Your ${password} variable will not be replaced by its value, because application.yaml is a static file. If you use this YAML file in some process that actually performs substitution, then it is possible that it will get replaced by its value.
Consider a scenario where, instead of application.yaml, you pass this file:
application.sh: |
  echo "${password}"
Now go inside /app/app-config and you will see the application.sh file. If you run sh application.sh, you will see the value of the environment variable, because the shell expands ${password} at runtime.
I hope this clears things up.
You cannot use a Secret in a ConfigMap, as ConfigMaps are intended for non-sensitive data (see here).
Also, you should not pass Secrets using env vars, as this creates a potential risk (read more here about why env shouldn't be used).
Applications usually dump env variables in error reports or even write them to the app logs at startup, which could lead to exposing Secrets.
The best way would be to mount the Secret as a file.
Here's a simple example of how to mount it as a file:
spec:
  template:
    spec:
      containers:
        - image: "my-image:latest"
          name: my-app
          ...
          volumeMounts:
            - mountPath: "/var/my-app"
              name: ssh-key
              readOnly: true
      volumes:
        - name: ssh-key
          secret:
            secretName: ssh-key
The Kubernetes documentation explains well how to use and mount Secrets.
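Applied to the appdbpassword Secret from the question, a sketch of mounting just the password key as a file might look like this (the mount path is an assumption; the application would then read the password from /app/secrets/password instead of expecting it in the ConfigMap):

spec:
  containers:
    - name: {{ .Values.container.name }}
      image: {{ .Values.image }}
      volumeMounts:
        - name: db-password
          mountPath: /app/secrets        # assumed path; adjust to wherever the app reads from
          readOnly: true
  volumes:
    - name: db-password
      secret:
        secretName: appdbpassword
        items:
          - key: password
            path: password               # becomes /app/secrets/password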