How to pass creation timestamp to kubernetes cronjob - kubernetes

Inside my CronJob, under spec.jobTemplate.spec.template.spec, I have:
containers:
  - name: "run"
    env:
      {{ include "schedule.envVariables" . | nindent 16 }}
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_CREATION_TIMESTAMP
        valueFrom:
          fieldRef:
            fieldPath: metadata.creationTimestamp
However, I just get the error:
CronJob.batch "schedule-3eb71b12d3" is invalid: spec.jobTemplate.spec.template.spec.containers[0].env[19]
When changing to:
- name: POD_CREATION_TIMESTAMP
  value: ""
I get no errors. Any idea?

The reason is that fieldRef doesn't support the use of metadata.creationTimestamp.
$ kubectl explain job.spec.template.spec.containers.env.valueFrom
...
   fieldRef     <Object>
     Selects a field of the pod: supports metadata.name, metadata.namespace,
     `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
     spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
...

Kubernetes lacks that capability. When I try to add that field using fieldPath, I see:
field label not supported: metadata.creationTimestamp
The only guaranteed way of getting that value (that I can think of) would be to give your cronjob RBAC access to request that info at runtime. Then, you could run this from inside the pod to get that value:
kubectl get pod ${POD_NAME} -o=jsonpath='{.metadata.creationTimestamp}'
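If you go the RBAC route, a Role and RoleBinding along these lines would grant that access (a minimal sketch; the ServiceAccount name schedule-sa and the Role/RoleBinding names are placeholders, and the CronJob's pod template must set serviceAccountName accordingly):
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader            # placeholder name
  namespace: default
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]            # just enough to read the pod's own metadata
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-own-pod          # placeholder name
  namespace: default
subjects:
  - kind: ServiceAccount
    name: schedule-sa         # assumed ServiceAccount used by the CronJob
    namespace: default
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io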

Related

Use fieldRef in Kubernetes configMap

I have the following environment variable in my Kubernetes template:
envFrom:
  - configMapRef:
      name: configmap
env:
  - name: MACHINENAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
I would like to use the value from 'fieldRef' in a config map instead. Would this kind of modification be possible?
In other words, I want to add the 'MACHINENAME' environment variable to the config map, so I don't have to use the 'env:' block.
You cannot do this in the way you describe.
A ConfigMap only contains fixed string-key/string-value pairs. You cannot embed a more complex structure into a ConfigMap, or say that a ConfigMap value will be resolved using the downward API when a Pod is created. The node name of the pod, and most of the other downward API information, will be different for each pod using the ConfigMap (and likely even for each replica of the same deployment) and so there is no fixed value you can put into a ConfigMap.
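For reference, a ConfigMap can only hold literal strings like these (a sketch; the keys and values are made up):
apiVersion: v1
kind: ConfigMap
metadata:
  name: configmap
data:
  LOG_LEVEL: "info"                     # fixed string values only
  API_URL: "https://example.internal"
  # there is no value of MACHINENAME that would be correct for every pod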
You tagged this question with the Helm deployment tool. If you're using Helm, and you're simply trying to avoid repeating this boilerplate in every Deployment spec, you can write a helper template that includes this definition:
{{/* templates/_helpers.tpl */}}
{{- define "machinename" -}}
- name: MACHINENAME
  valueFrom:
    fieldRef:
      fieldPath: spec.nodeName
{{- end -}}
Now in your Deployment spec, you can include this template rather than retyping the whole YAML block:
containers:
  - envFrom:
      - configMapRef:
          name: configmap
    env:
{{ include "machinename" . | indent 6 }}
(The exact indent value will depend on the context where you include it, and should be two more than the number of spaces at the start of the env: line. It is important that the line containing indent not itself be indented.)
Yes, using a ConfigMap would be possible. This Stack Overflow post is quite old, but has some good information in it:
Advantage of using configmaps for environment variables with Kubernetes/Helm
You would need to either mount the ConfigMap as a volume or consume it via environment variables using envFrom. This guide provides both examples:
https://matthewpalmer.net/kubernetes-app-developer/articles/ultimate-configmap-guide-kubernetes.html
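For reference, the two approaches look roughly like this (a sketch, assuming a ConfigMap named configmap and a mount path of /etc/config):
containers:
  - name: app
    image: myapp:latest        # placeholder image
    # Option 1: every key of the ConfigMap becomes an environment variable
    envFrom:
      - configMapRef:
          name: configmap
    # Option 2: each key of the ConfigMap becomes a file under /etc/config
    volumeMounts:
      - name: config-volume
        mountPath: /etc/config
volumes:
  - name: config-volume
    configMap:
      name: configmap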
You can use the volume mount option and merge different ConfigMaps, Secrets, and downward API fields with a projected volume:
volumes:
  - name: all-in-one
    projected:
      sources:
        - secret:
            name: mysecret
            items:
              - key: username
                path: my-group/my-username
        - downwardAPI:
            items:
              - path: "labels"
                fieldRef:
                  fieldPath: metadata.labels
              - path: "cpu_limit"
                resourceFieldRef:
                  containerName: container-test
                  resource: limits.cpu
        - configMap:
            name: myconfigmap
            items:
              - key: config
                path: my-group/my-config
Ref: https://kubernetes.io/docs/concepts/storage/projected-volumes/
initContainer
There is another alternative you can follow if you want to merge those values: an initContainer. Since the node name is only known at runtime, the initContainer can read it first (via the downward API) and then add or edit the value in the ConfigMap, or write it somewhere the main container can pick it up, before the main container starts.
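A sketch of that pattern (the volume name shared-env, the images, and the file path are made up): the init container writes the node name to a shared emptyDir, and the main container sources it at startup.
spec:
  volumes:
    - name: shared-env
      emptyDir: {}
  initContainers:
    - name: write-node-name
      image: busybox
      env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      # write an env file that the main container will source
      command: ["sh", "-c", "echo \"export MACHINENAME=$NODE_NAME\" > /shared/env.sh"]
      volumeMounts:
        - name: shared-env
          mountPath: /shared
  containers:
    - name: myapp
      image: myapp:latest      # placeholder image
      command: ["sh", "-c", ". /shared/env.sh && exec /app/start"]
      volumeMounts:
        - name: shared-env
          mountPath: /shared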

Is it possible to retrieve pod name from values.yaml file of helm chart?

I'm quite new to Helm. Currently, I create an env variable so that when I deploy my pod, I can see the pod name in the environment variables list. This can be done like so in the template file:
containers:
  - name: my_container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use this value in the .tpl? Other configuration, like ConfigMap names, depends on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my_vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!

Empty variable when using `status.hostIP` as reference field for my env variable in kubernetes

I'm deploying a Kubernetes pod using Helm v3; my kubectl client and server are above 1.7, so reference fields should be supported. However, when I deploy, the value is just empty.
Using:
environment:
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
where DD_AGENT_HOST is the env variable that should be given the host IP.
Any idea on why this might be happening?
I had to add this to the container specification directly, rather than passing it from an env value and using a Helm include, as that doesn't work.
The issue is related to the Helm app deployment template (if you use one). For instance, if you have a deployment.yaml with:
env:
{{- range .Values.env }}
  - name: {{ .name }}
    value: {{ .value | quote }}
{{- end }}
and one of the env values is a valueFrom, you have to add it explicitly (unless there is a nicer way of doing it):
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Otherwise the range above will only render name/value pairs, will not use valueFrom, and as a result DD_AGENT_HOST will be empty.
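One nicer way (a sketch, assuming each entry under .Values.env may carry either a value or a valueFrom block) is to branch inside the range:
# values.yaml (assumed layout)
env:
  - name: LOG_LEVEL
    value: info
  - name: DD_AGENT_HOST
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP

# deployment.yaml
env:
{{- range .Values.env }}
  - name: {{ .name }}
    {{- if .valueFrom }}
    valueFrom:
      {{- toYaml .valueFrom | nindent 6 }}
    {{- else }}
    value: {{ .value | quote }}
    {{- end }}
{{- end }}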

Using different Secrets in sts replicas

I'm trying to use different Secrets in a StatefulSet, based on the index of the pods.
Here is what I tried:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: SECRET_KEY
    valueFrom:
      secretKeyRef:
        key: key
        name: mysecret-$(POD_NAME)
  - name: SECRET_HOST
    value: myhost-$(POD_NAME)
However, mysecret-$(POD_NAME) is not correctly substituted as a parameter, while myhost-$(POD_NAME) works correctly.
How can I solve this problem? The goal is to set different variables from secret/configmaps on different replicas of the StatefulSet.
AFAIK this is not supported. The only per-replica resources you can have differ are the PVs. Instead, use a single Secret with keys based on the pod index, and write your software to read from the correct key.
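A sketch of that pattern (the Secret name, key names, image, and mount path are made up; it assumes the StatefulSet is named myapp, so the pods are myapp-0, myapp-1, ...): keep one key per ordinal in a single Secret, expose the pod name via the downward API, and have the entrypoint pick the right key at startup.
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
stringData:
  myapp-0: secret-for-pod-0
  myapp-1: secret-for-pod-1
---
# In the StatefulSet pod template:
containers:
  - name: myapp
    image: myapp:latest        # placeholder image
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
    # read the key whose name matches this pod's name
    command: ["sh", "-c", "export SECRET_KEY=$(cat /etc/mysecret/$POD_NAME) && exec /app/start"]
    volumeMounts:
      - name: mysecret
        mountPath: /etc/mysecret
volumes:
  - name: mysecret
    secret:
      secretName: mysecret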

How do you get the Node IP from inside a Pod?

I am running a Go app that creates node-specific Prometheus metrics, and I want to be able to add the node IP as a label.
Is there a way to capture the Node IP from within the Pod?
The accepted answer didn't work for me; it seems fieldPath is required now:
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Is there a way to capture the Node IP from within the Pod?
Yes, easily, using the env: valueFrom: fieldRef: status.hostIP; the whole(?) list is presented in the envVarSource docs, I guess because objectFieldSelector can appear in multiple contexts.
so:
containers:
  - env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            status.hostIP