I have some CATALINA_OPTS properties (regarding database port, user and so on) set up in a ConfigMap file. This file is then passed to the Docker image via a Pod environment variable.
One of the CATALINA_OPTS properties is the database password, and it needs to be moved from the ConfigMap to a Secrets file.
I can expose a key from the Secrets file through an environment variable:
apiVersion: v1
kind: Pod
...
containers:
  - name: myContainer
    image: myImage
    env:
      - name: CATALINA_OPTS
        valueFrom:
          configMapKeyRef:
            name: catalina_opts
            key: CATALINA_OPTS
      - name: MY_ENV_PASSWORD
        valueFrom:
          secretKeyRef:
            name: db-pass
            key: my-pass
The thing is, I need to append this password to CATALINA_OPTS. I tried to do it in the Dockerfile:
RUN export CATALINA_OPTS="$CATALINA_OPTS -Dmy.password=$MY_ENV_PASSWORD"
However, MY_ENV_PASSWORD is not appended to the existing CATALINA_OPTS. When I list my environment variables (I'm checking the log in Jenkins), I cannot see the password.
Am I doing something wrong here? Is there any 'regular' way to do this?
Dockerfile RUN steps are run as part of your image build step and NOT during your image execution. Hence, you cannot rely on RUN export (build step) to set K8S environment variables for your container (run step).
Remove the RUN export from your Dockerfile and ensure you are setting CATALINA_OPTS in your catalina_opts ConfigMap like this:
apiVersion: v1
kind: ConfigMap
metadata:
  name: catalina_opts
data:
  SOME_ENV_VAR: INFO
  CATALINA_OPTS: opts... -Dmy.password=$MY_ENV_PASSWORD
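Note that Kubernetes itself only expands $(VAR) references inside env: value fields of the Pod spec, not inside ConfigMap data values. As an alternative, here is a minimal sketch (reusing the db-pass/my-pass Secret from the question; CATALINA_OPTS_BASE is an illustrative name, not part of the original setup) that composes the final value in the container spec:

env:
  - name: MY_ENV_PASSWORD
    valueFrom:
      secretKeyRef:
        name: db-pass
        key: my-pass
  - name: CATALINA_OPTS_BASE
    valueFrom:
      configMapKeyRef:
        name: catalina_opts
        key: CATALINA_OPTS
  - name: CATALINA_OPTS
    # $(VAR) references are only expanded for variables declared earlier in this list
    value: '$(CATALINA_OPTS_BASE) -Dmy.password=$(MY_ENV_PASSWORD)'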
In my Kubernetes cluster, I have a ConfigMap object containing the address of my Postgres pod. It was created with the following YAML:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: postgres-service
Now I reference this value in one of my Deployment configurations:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
This Deployment is a Spring Boot application that intends to communicate with the database, so it reads the database's URL from the DB_ADDRESS environment variable. (Ignore the default values; those are used only during development.)
datasource:
  url: ${DB_ADDRESS:jdbc:postgresql://localhost:5432/users}
  username: ${POSTGRES_USER:postgres}
  password: ${POSTGRES_PASSWORD:mysecretpassword}
So, according to the logs, the problem is that the address has to have the jdbc:postgresql:// prefix. Either in the ConfigMap's YAML or in the application.yml, I would need to concatenate the protocol prefix with the variable. Any idea how to do this in YAML, or a suggestion for some other workaround?
If you create a Service, that will provide you with a hostname (the name of the service) that you can then use in the ConfigMap. E.g., if you create a service named postgres, then your ConfigMap would look like:
apiVersion: v1
kind: ConfigMap
metadata:
  name: postgres-configmap
data:
  database_url: jdbc:postgresql://postgres:5432/users
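For reference, a minimal sketch of such a Service (the selector label and port are assumptions, not taken from the question):

apiVersion: v1
kind: Service
metadata:
  name: postgres
spec:
  selector:
    app: postgres   # assumed Pod label
  ports:
    - port: 5432
      targetPort: 5432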
Kubernetes environment variable declarations can embed the values of other environment variables. This is the only string manipulation that Kubernetes supports, and it pretty much only works in env: blocks.
For this setup, once you've retrieved the database hostname from the ConfigMap, you can then embed it into a more complete SPRING_DATASOURCE_URL environment variable:
env:
  - name: DB_ADDRESS
    valueFrom:
      configMapKeyRef:
        name: postgres-configmap
        key: database_url
  - name: SPRING_DATASOURCE_URL
    value: 'jdbc:postgresql://$(DB_ADDRESS):5432/users'
You might similarly parameterize the port (though it will almost always be the standard port 5432) and database name. Avoid putting these settings in a Spring profile YAML file, where you'll have to rebuild your application if any of the deploy-time settings change.
I am trying to add config data as environment variables, but Kubernetes warns about invalid variable names. The configmap data contains JSON and property files.
spec:
  containers:
    - name: env-var-configmap
      image: nginx:1.7.9
      envFrom:
        - configMapRef:
            name: example-configmap
After deploying, I do not see them added to the process environment. Instead I see a warning message like the one below:
Config map example-configmap contains keys that are not valid environment variable names. Only config map keys with valid names will be added as environment variables.
But I see it works if I add a key directly as a key-value pair:
env:
  # Define the environment variable
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
        name: special-config
        # Specify the key associated with the value
        key: special.how
I have thousands of key-values in the ConfigMap data, and I cannot add them all as separate key-value pairs.
Is there any short syntax to add all values from a ConfigMap as environment variables?
While #P-Ekambaram already helped you out, I was getting the same error message; it turned out that my issue was that I named the ConfigMap ms-provisioning-broadsoft-adapter and was trying to use ms-provisioning-broadsoft-adapter as the key. As soon as I changed the key to ms_provisioning_broadsoft_adapter, i.e. used underscores instead of hyphens, it happily let me add it to an application.
Hope this might help someone else who also runs into the error: invalid variable name cannot be added as environment variable.
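For illustration, a minimal sketch based on the names above (the value is an assumption):

apiVersion: v1
kind: ConfigMap
metadata:
  name: ms-provisioning-broadsoft-adapter
data:
  ms_provisioning_broadsoft_adapter: some-value   # underscores make this a valid env var name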
A sample reference is given below.
Create the ConfigMap as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
Load the ConfigMap data as environment variables in the Pod:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
Output:
master $ kubectl logs dapi-test-pod | grep SPECIAL
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
You should rename your variable.
In my case they were like this:
VV-CUSTOMER-CODE
VV-CUSTOMER-URL
I just renamed them to:
VV_CUSTOMER_CODE
VV_CUSTOMER_URL
It works fine. OpenShift/Kubernetes works with underscores (_), but not with hyphens (-).
I hope this helps.
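A minimal sketch of the renamed keys in a ConfigMap (the ConfigMap name and the values are assumptions):

apiVersion: v1
kind: ConfigMap
metadata:
  name: vv-customer-config   # assumed name
data:
  VV_CUSTOMER_CODE: "1234"                  # assumed value
  VV_CUSTOMER_URL: "https://example.com"    # assumed value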
I need to add a JKS file to my JVM for the SSL handshake with the server. The JKS is mounted in a volume and available to the Docker container. How do I pass the JKS truststore path and password to Spring Boot (the JVM) during startup?
One option, I think, is to pass them as environment variables (-Djavax.net.ssl.trustStore, -Djavax.net.ssl.trustStorePassword). For OpenShift, the following works, as described in the URL below.
Option 1:
env:
  - name: JAVA_OPTIONS
    value: -Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit
https://developers.redhat.com/blog/2017/11/22/dynamically-creating-java-keystores-openshift/
But I don't seem to find a similar JAVA_OPTIONS environment variable for Kubernetes.
Option 2:
My Dockerfile is:
FROM openjdk:8-jre-alpine
..........
........
ENTRYPOINT ["java", "-jar", "xxx.jar"]
Can this be changed as below, with $JAVA_OPTS set as an environment variable for the JVM via a ConfigMap?
FROM openjdk:8-jre-alpine
..........
........
ENTRYPOINT [ "sh", "-c", "java $JAVA_OPTS -jar xxx.jar" ]
ConfigMap:
JAVA_OPTS: "-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"
Please suggest whether this would work, or any other better solutions. It would be preferable if we could store the password in a Secret.
A couple of options:
1: You can break it all up: use a Secret to store your credentials as env vars, a Secret to store the keystore (which can be mounted as a file on disk in the container), and a ConfigMap to hold the other Java options as env variables, then use an entrypoint script in your container to validate and mash it all together into the JAVA_OPTS string.
2: You can put the whole string in a JAVA_OPTS secret that you consume at run-time.
apiVersion: v1
kind: Pod
metadata:
  name: secret-env-pod
spec:
  containers:
    - name: mycontainer
      image: myimage
      env:
        - name: JAVA_OPTS
          valueFrom:
            secretKeyRef:
              name: mysecret
              key: JAVA_OPTS
  restartPolicy: Never
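For option 2, a minimal sketch of the mysecret Secret consumed above (the JAVA_OPTS value simply reuses the truststore options from the question):

apiVersion: v1
kind: Secret
metadata:
  name: mysecret
type: Opaque
stringData:
  # plain-text value; Kubernetes stores it base64-encoded
  JAVA_OPTS: "-Djavax.net.ssl.trustStore=/var/run/secrets/java.io/keystores/truststore.jks -Djavax.net.ssl.trustStorePassword=changeit"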
I am able to do this in a K8s Deployment using the _JAVA_OPTIONS environment variable for a Spring Boot 2.3.x application in a Docker container running Java 8. (I got the tip for this env var from this SO answer: https://stackoverflow.com/a/11615960/309261.)
env:
  - name: _JAVA_OPTIONS
    value: >
      -Djavax.net.ssl.trustStore=/path/to/truststore.jks
      -Djavax.net.ssl.trustStorePassword=changeit
I am currently creating pods on AKS from a .NET Core project. The problem is that I have a Secret, generated from appsettings.json, that I created previously in the pipeline. During the deployment phase I load this Secret inside a volume of the pod itself. What I want to achieve is to read the values from the Kubernetes Secret and load them as env variables inside the Helm chart. Any help is appreciated. Thanks :)
Please see how you can use a Secret as an environment variable.
As a single variable:
containers:
  - name: mycontainer
    image: redis
    env:
      - name: SECRET_USERNAME
        valueFrom:
          secretKeyRef:
            name: mysecret
            key: username
Or the whole Secret:
containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "/bin/sh", "-c", "env" ]
    envFrom:
      - secretRef:
          name: mysecret
Your secrets should not be in your appsettings.json because they will end up in your source control repository.
Reading secrets from k8s into a Helm chart is something you should never attempt to do.
Ideally your secrets sit in a secure secret store (a vault) that either has an API your k8s-hosted app(s) can call into, or has an integration with k8s that mounts your secrets as a volume in your pods (the volume being in-memory, read-only storage).
This way your secrets are only kept in the vault, which ensures they are encrypted both at rest and in transit.
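For reference, mounting a Kubernetes Secret as a read-only volume looks roughly like the sketch below (the Secret name and mount path are assumptions; a vault integration would typically produce something equivalent for you):

containers:
  - name: mycontainer
    image: myimage
    volumeMounts:
      - name: app-secrets
        mountPath: /etc/secrets
        readOnly: true
volumes:
  - name: app-secrets
    secret:
      secretName: appsettings-secret   # assumed Secret name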
Use case:
I want to be able to re-run a job from where the first job left off. I am using Helm to deploy into Kubernetes.
I have the idea of saving the state of the first job in a ConfigMap. The YAML defining the ConfigMap is packaged up with the job, and both are deployed at the same time with Helm.
apiVersion: v1
kind: ConfigMap
metadata:
  name: NameOfMyConfigMap
data:
  someKey: someValue
  MY_STATE: state <---- See below as to whether this should be included or not
The job is run with an ENV variable set from the ConfigMap:
env:
  - name: MY_STATE
    valueFrom:
      configMapKeyRef:
        name: NameOfMyConfigMap
        key: MY_STATE
The job runs a script that checks whether $MY_STATE is set. If it is not set, the job is being run for the first time; otherwise the job closes down the already-running first job, saves the first job's state into the MY_STATE ConfigMap variable, and launches the job again using the saved state.
If I don't declare the MY_STATE key in the initial ConfigMap definition then the first run of the job will fail, as the ENV definition above cannot find the ConfigMap variable.
If I do declare the value (MY_STATE: "") in the ConfigMap definition, then the first deployment will work. However, if I re-deploy the job with helm upgrade, won't the value I enter in the definition overwrite an existing value in the existing ConfigMap?
What is the best method of storing state in between runs of the same job?
Have you tried using volumes? In this case it should not be overwritten when using helm upgrade.
Could an example like this work? (From
https://groups.google.com/forum/#!msg/kubernetes-users/v2806ezEdPk/1geJCO8-AQAJ)
apiVersion: batch/v1
kind: Job
metadata:
  name: keystore-configmap-job
spec:
  template:
    metadata:
      name: keystore-configmap
    spec:
      containers:
        - name: keystore
          image: ubuntu
          volumeMounts:
            - name: keystore-configmap-volume
              mountPath: /config-base64
          command: [ "sh", "-c", "cat /config-base64/keystore.jks | base64 --decode | sha256sum" ]
      restartPolicy: Never
      volumes:
        - name: keystore-configmap-volume
          configMap:
            name: keystore-configmap
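Note that a configMap volume is read-only, so state that the job itself writes would typically go to a persistent volume instead. A minimal sketch of such a claim (the name and size are assumptions); the claim survives a helm upgrade as long as it is not deleted:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: job-state-pvc   # assumed name
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 1Gi      # assumed size

The Job's pod template would then mount this claim via a persistentVolumeClaim volume and read/write its state file there.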