kubernetes: set environment variable as integer

I want to set an environment variable in a Kubernetes Deployment to an integer value, but the Deployment only accepts the value if I quote it. Quoting makes the value a string, which causes a TypeError in the app.
Is there any workaround to set an integer or float value in env?

Generally this was answered in a comment, but I'll add references from the official Kubernetes documentation.
The env field takes an array of EnvVar objects. Per the EnvVar v1 core API reference, both name and value must be strings.
Please see EnvVar v1 core.
And here is an official example showing how variables are set:
apiVersion: v1
kind: Pod
metadata:
  name: dependent-envars-demo
spec:
  containers:
    - name: dependent-envars-demo
      args:
        - while true; do echo -en '\n'; printf UNCHANGED_REFERENCE=$UNCHANGED_REFERENCE'\n'; printf SERVICE_ADDRESS=$SERVICE_ADDRESS'\n';printf ESCAPED_REFERENCE=$ESCAPED_REFERENCE'\n'; sleep 30; done;
      command:
        - sh
        - -c
      image: busybox
      env:
        - name: SERVICE_PORT
          value: "80"
        - name: SERVICE_IP
          value: "172.17.0.1"
        - name: UNCHANGED_REFERENCE
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: PROTOCOL
          value: "https"
        - name: SERVICE_ADDRESS
          value: "$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
        - name: ESCAPED_REFERENCE
          value: "$$(PROTOCOL)://$(SERVICE_IP):$(SERVICE_PORT)"
Link to this example is here
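In short, the manifest value has to stay a quoted string, and the conversion back to a number has to happen inside the application (or its entrypoint). A minimal sketch of a Deployment env entry, using MAX_RETRIES as a hypothetical variable name:
env:
  - name: MAX_RETRIES
    # Kubernetes only accepts strings here; the app must parse it back to a number,
    # e.g. int(os.environ["MAX_RETRIES"]) in Python or parseInt(process.env.MAX_RETRIES) in Node.
    value: "5"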

Related

Pass current date to kubernetes cronjob

I have a Docker image that receives an env var named SINCE_DATE.
I have created a CronJob to run that container and I want to pass it the current date.
How can I do it?
Trying this, I get the literal string date -d "yesterday 23:59":
apiVersion: batch/v1beta1
kind: CronJob
metadata:
  name: my-cron
spec:
  schedule: "* * * * *"
  jobTemplate:
    spec:
      template:
        spec:
          restartPolicy: Never
          containers:
            - name: my-cron
              image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
              imagePullPolicy: {{ .Values.image.pullPolicy }}
              env:
                - name: SINCE_DATE
                  value: $(date -d "yesterday 23:59")
You could achieve it by overriding the container entrypoint command and setting the environment variable there.
In your case it would look like this:
containers:
  - name: my-cron
    image: nginx
    #imagePullPolicy: {{ .Values.image.pullPolicy }}
    command:
      - bash
      - -c
      - |
        export SINCE_DATE=`date -d "yesterday 23:59"`
        exec /docker-entrypoint.sh
Note:
The Nginx docker-entrypoint.sh is located in /. If your image keeps it at a different path, use that path instead, for example exec /usr/local/bin/docker-entrypoint.sh.
A very similar use case can be found in this Stack question.
What does this solution do?
It overrides the default script set in the container ENTRYPOINT with the same script, but first sets the environment variable dynamically.
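Applied to the chart from the question, it would look roughly like the sketch below; /app/entrypoint.sh is a placeholder for whatever your image's original ENTRYPOINT is:
containers:
  - name: my-cron
    image: {{ .Values.image.repository }}:{{ .Values.image.tag }}
    imagePullPolicy: {{ .Values.image.pullPolicy }}
    command:
      - bash
      - -c
      - |
        # compute the value at container start, then hand control back to the image's own entrypoint
        export SINCE_DATE=$(date -d "yesterday 23:59")
        exec /app/entrypoint.sh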
I solved the same problem recently using KubeMod, which patches resources as they are created/updated in K8S. It is nice for this use case since it requires no modification to the original job specification.
In my case I needed to insert a date into the middle of a previously existing string in the spec, but it's the same concept.
For example, this matches a specific job by regex, and alters the second argument of the first container in the spec.
apiVersion: api.kubemod.io/v1beta1
kind: ModRule
metadata:
  name: 'name-of-your-modrule'
  namespace: default
spec:
  type: Patch
  match:
    - select: '$.metadata.name'
      matchRegex: 'regex-that-matches-your-job-name'
    - select: '$.kind'
      matchValue: 'Job'
  patch:
    - op: replace
      path: '/spec/template/spec/containers/0/args/1'
      select: '$.spec.template.spec.containers[0].args[1]'
      value: '{{ .SelectedItem | replace "Placeholder Value" (cat "The time is" (now | date "2006-01-02T15:04:05Z07:00")) | squote }}'
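For context, a minimal sketch of a Job that this ModRule would patch; the name, image and the "Placeholder Value" argument are assumptions for illustration:
apiVersion: batch/v1
kind: Job
metadata:
  name: my-dated-job        # must match the ModRule's matchRegex
spec:
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: main
          image: busybox
          command: [ "sh" ]
          args:
            - -c
            - echo "Placeholder Value"   # args[1], rewritten by KubeMod to contain the current time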

Helm chart not allowing me to consume values with special characters, e.g. '/' or '='

I am trying to set the below value in values.yaml, for example:
envVar: KY13o5+J/jHpg==
and consume that value in the deploy.yaml file as:
.
.
containers:
  - name: 'app-container'
    .
    .
    env:
      - name: ACCESS_KEY
        value: {{ .Values.envVar }}
The ACCESS_KEY gets passed to the container as an env variable if I don't use characters like / and =. If I use those characters, then the ACCESS_KEY env variable is not available in the running container.
I need a way to escape those two characters. I tried using \ and it worked for / but not for =.
Note: I am not facing any problems with +. I am facing this problem when deploying the container to a Kubernetes cluster.
Try using the quote string function to escape special characters in env vars:
env:
  - name: ACCESS_KEY
    value: {{ .Values.envVar | quote }}
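For reference, a minimal sketch of how the value flows from values.yaml to the rendered manifest, assuming the key is envVar as above:
# values.yaml
envVar: "KY13o5+J/jHpg=="

# rendered output of the template with | quote
env:
  - name: ACCESS_KEY
    value: "KY13o5+J/jHpg=="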
Update:
Even without quotes, env var is properly loaded. Are you facing issues reading this variable?
pod.yaml
apiVersion: v1
kind: Pod
metadata:
  name: test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: ACCESS_KEY
          value: {{ .Values.envVar }}
kubectl logs --previous test-pod -n test
SHLVL=1
HOME=/root
ACCESS_KEY=KY13o5+J/jHpg==
KUBERNETES_PORT_443_TCP_ADDR=172.20.0.1
...
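If the variable still seems to go missing, it can help to inspect the rendered manifest and the pod output directly; a quick check, assuming the chart lives in ./mychart and the pod is named test-pod as above:
helm template ./mychart | grep -A1 'name: ACCESS_KEY'
kubectl logs test-pod -n test | grep ACCESS_KEY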

Unable to add configmap data as environment variables into a pod. It says invalid variable name cannot be added as environmental variable

I am trying to add config data as environment variables, but Kubernetes warns about invalid variable names. The configmap data contains JSON and property files.
spec:
  containers:
    - name: env-var-configmap
      image: nginx:1.7.9
      envFrom:
        - configMapRef:
            name: example-configmap
After deploying, I do not see them added to the process environment. Instead I see a warning message like the one below:
Config map example-configmap contains keys that are not valid environment variable names. Only config map keys with valid names will be added as environment variables.
But I see it works if I add it directly as a key-value pair
env:
  # Define the environment variable
  - name: SPECIAL_LEVEL_KEY
    valueFrom:
      configMapKeyRef:
        # The ConfigMap containing the value you want to assign to SPECIAL_LEVEL_KEY
        name: special-config
        # Specify the key associated with the value
        key: special.how
I have thousands of key-values in the configmap data and I cannot add them all as separate key-value pairs.
Is there any short syntax to add all values from a configmap as environment variables?
While #P-Ekambaram already helped you out, I was getting the same error message. It turned out that my issue was that I named the ConfigMap ms-provisioning-broadsoft-adapter and was trying to use ms-provisioning-broadsoft-adapter as the key. As soon as I changed the key to ms_provisioning_broadsoft_adapter, i.e. used underscores instead of hyphens, it happily let me add it to the application.
Hope this helps someone else who runs into the error invalid variable name cannot be added as environmental variable.
A sample reference is given below.
Create the ConfigMap as shown below:
apiVersion: v1
kind: ConfigMap
metadata:
  name: special-config
  namespace: default
data:
  SPECIAL_LEVEL: very
  SPECIAL_TYPE: charm
load configmap data as environment variables in the pod
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "/bin/sh", "-c", "env" ]
      envFrom:
        - configMapRef:
            name: special-config
  restartPolicy: Never
output
master $ kubectl logs dapi-test-pod | grep SPECIAL
SPECIAL_LEVEL=very
SPECIAL_TYPE=charm
You should rename your variables.
In my case they were like this:
VV-CUSTOMER-CODE
VV-CUSTOMER-URL
I just renamed them to:
VV_CUSTOMER_CODE
VV_CUSTOMER_URL
and it works fine. OpenShift/Kubernetes works with an underscore _, but not with a hyphen -.
I hope this helps.
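To see which keys Kubernetes is skipping, one option is to list the ConfigMap keys and filter out the valid names; a rough sketch, assuming the ConfigMap is named example-configmap:
# print keys that are NOT valid environment variable names ([A-Za-z_][A-Za-z0-9_]*)
kubectl get configmap example-configmap -o go-template='{{range $k, $v := .data}}{{$k}}{{"\n"}}{{end}}' \
  | grep -Ev '^[A-Za-z_][A-Za-z0-9_]*$'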

Escaping '.' in Helm chart yaml file

I need to define an env var whose name contains '.' characters, and Kubernetes does not seem to like it.
spec:
  containers:
    env:
      - name: "com.my.app.dir"
        value: "/myapp/subdir/"
I tried single quotes, double quotes, backslashes, double backslashes, and many other ways. Still cannot make it work. I wonder if anyone knows a way to escape the '.' characters. Thanks in advance.
Kubernetes doesn't have a problem setting an environment variable with a . in its name.
Here's a simple spec that logs the environment by directly running the node executable:
apiVersion: v1
kind: Pod
metadata:
  name: env-node
spec:
  containers:
    - image: 'node:12-slim'
      name: env-node
      command:
        - node
        - '-pe'
        - process.env
      env:
        - name: OTHER
          value: here
        - name: 'ONE_two-Three.four'
          value: 'diditwork'
And the environment output (with some kubernetes default vars removed for brevity)
{
  PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
  HOSTNAME: 'env-node',
  NODE_VERSION: '12.16.1',
  OTHER: 'here',
  'ONE_two-Three.four': 'diditwork',
  HOME: '/root'
}
Most shells (sh, bash, zsh) won't accept environment variables with a . in them. POSIX defines [a-zA-Z_][a-zA-Z0-9_]* as the allowed characters in the name of an environment variable.
So running the same node process via a shell:
spec:
containers:
- image: 'node:12-slim'
name: nodeenvtest-simple-shell
command:
- sh
- '-c'
- 'node -e "console.log(process.env)"'
env:
- name: 'ONE_two-Three.four'
value: 'diditwork'
- name: 'OTHER'
value: 'here'
Results in a missing environment variable:
{
  NODE_VERSION: '12.16.1',
  HOSTNAME: 'env-shell',
  HOME: '/root',
  OTHER: 'here',
  PATH: '/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin',
  PWD: '/'
}
If there is no shell between the container and the app running, a . after the first character in the environment variable should be fine.
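If you do need the value from a shell script anyway, one workaround (a sketch, assuming printenv is available in the image) is to read it from the raw environment rather than expanding it as a shell variable, since the entry still exists in the process environment:
# the shell cannot expand a variable named com.my.app.dir, but printenv can still look it up in environ
APP_DIR=$(printenv 'com.my.app.dir')
echo "$APP_DIR"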

Replication Controller replica ID in an environment variable?

I'm attempting to inject a ReplicationController's randomly generated pod ID extension (i.e. multiverse-{replicaID}) into a container's environment variables. I could manually get the hostname and extract it from there, but I'd prefer if I didn't have to add the special case into the script running inside the container, due to compatibility reasons.
If a pod is named multiverse-nffj1, INSTANCE_ID should equal nffj1. I've scoured the docs and found nothing.
apiVersion: v1
kind: ReplicationController
metadata:
  name: multiverse
spec:
  replicas: 3
  template:
    spec:
      containers:
        - env:
            - name: INSTANCE_ID
              value: $(replicaID)
I've tried adding a command into the controller's template configuration to create the environment variable from the hostname, but couldn't figure out how to make that environment variable available to the running script.
Is there a variable I'm missing, or does this feature not exist? If it doesn't, does anyone have any ideas on how to make this to work without editing the script inside of the container?
There is an answer provided by Anton Kostenko about inserting DB credentials into container environment variables, but it can be applied to your case as well. It is all about the content of the initContainers spec.
You can use an init container to get the hash from the container's hostname and put it into a file on a shared volume that you mount into the container.
In this example the init container puts the Pod name into the INSTANCE_ID environment variable, but you can modify it according to your needs:
Create the init.yaml file with the content:
apiVersion: v1
kind: Pod
metadata:
  name: init-test
spec:
  containers:
    - name: init-test
      image: ubuntu
      args: [bash, -c, 'source /data/config && echo $INSTANCE_ID && while true ; do sleep 1000; done ']
      volumeMounts:
        - name: config-data
          mountPath: /data
  initContainers:
    - name: init-init
      image: busybox
      command: ["sh","-c","echo -n INSTANCE_ID=$(hostname) > /data/config"]
      volumeMounts:
        - name: config-data
          mountPath: /data
  volumes:
    - name: config-data
      emptyDir: {}
Create the pod using following command:
kubectl create -f init.yaml
Check if Pod initialization is done and is Running:
kubectl get pod init-test
Check the logs to see the results of this example configuration:
$ kubectl logs init-test
init-test
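For the original ReplicationController case, where only the random suffix (e.g. nffj1) is wanted rather than the full pod name, the init container command can strip everything up to the last hyphen; a hedged sketch of just that part, assuming pods are named multiverse-<id>:
initContainers:
  - name: init-init
    image: busybox
    # hostname is the pod name, e.g. multiverse-nffj1; keep only the part after the last '-'
    command: ["sh", "-c", "echo -n INSTANCE_ID=$(hostname | sed 's/.*-//') > /data/config"]
    volumeMounts:
      - name: config-data
        mountPath: /data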