How do you get the Node IP from inside a Pod? - kubernetes

I am running a Go app that exposes node-specific Prometheus metrics, and I want to be able to add the node IP as a label on those metrics.
Is there a way to capture the Node IP from within the Pod?

The accepted answer didn't work for me; it seems fieldPath is required now:
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP

Is there a way to capture the Node IP from within the Pod?
Yes, easily, using env: valueFrom: fieldRef: with status.hostIP; the whole(?) list of supported fields is presented in the EnvVarSource docs, I guess because objectFieldSelector can appear in multiple contexts.
So:
containers:
  - env:
      - name: NODE_IP
        valueFrom:
          fieldRef:
            fieldPath: status.hostIP
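On the app side, the injected NODE_IP can then be attached as a constant label. A minimal sketch in Go, assuming prometheus/client_golang; the metric name node_example_metric is just illustrative:
package main

import (
    "log"
    "net/http"
    "os"

    "github.com/prometheus/client_golang/prometheus"
    "github.com/prometheus/client_golang/prometheus/promhttp"
)

func main() {
    // NODE_IP is injected by the Downward API env var shown above.
    nodeIP := os.Getenv("NODE_IP")

    gauge := prometheus.NewGauge(prometheus.GaugeOpts{
        Name:        "node_example_metric",
        Help:        "Example node-specific metric.",
        ConstLabels: prometheus.Labels{"node_ip": nodeIP},
    })
    prometheus.MustRegister(gauge)
    gauge.Set(1)

    http.Handle("/metrics", promhttp.Handler())
    log.Fatal(http.ListenAndServe(":8080", nil))
}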

Related

How to pass creation timestamp to kubernetes cronjob

In my CronJob, under spec.jobTemplate.spec.template.spec, I have:
containers:
  - name: "run"
    env:
      {{ include "schedule.envVariables" . | nindent 16 }}
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
      - name: POD_CREATION_TIMESTAMP
        valueFrom:
          fieldRef:
            fieldPath: metadata.creationTimestamp
However, I just get the error:
CronJob.batch "schedule-3eb71b12d3" is invalid: spec.jobTemplate.spec.template.spec.containers[0].env[19]
When changing to:
- name: POD_CREATION_TIMESTAMP
  value: ""
I get no errors. Any idea?
The reason is that fieldRef doesn't support the use of metadata.creationTimestamp.
$ kubectl explain job.spec.template.spec.containers.env.valueFrom
...
fieldRef     <Object>
  Selects a field of the pod: supports metadata.name, metadata.namespace,
  `metadata.labels['<KEY>']`, `metadata.annotations['<KEY>']`, spec.nodeName,
  spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs.
...
Kubernetes lacks that capability. When I try to add that field using fieldPath, I see:
field label not supported: metadata.creationTimestamp
The only guaranteed way of getting that value (that I can think of) would be to give your cronjob RBAC access to request that info at runtime. Then, you could run this from inside the pod to get that value:
kubectl get pod ${POD_NAME} -o=jsonpath='{.metadata.creationTimestamp}'
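For that to work, the pod needs a service account bound to a role that can read pods. A minimal sketch of the RBAC objects, with hypothetical names (schedule-sa, pod-reader); the CronJob's pod template would also need serviceAccountName: schedule-sa:
apiVersion: v1
kind: ServiceAccount
metadata:
  name: schedule-sa
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: pod-reader
rules:
  - apiGroups: [""]
    resources: ["pods"]
    verbs: ["get"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: read-pods
subjects:
  - kind: ServiceAccount
    name: schedule-sa
    namespace: default    # adjust to your namespace
roleRef:
  kind: Role
  name: pod-reader
  apiGroup: rbac.authorization.k8s.io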

Is it possible to retrieve pod name from values.yaml file of helm chart?

Quite new to Helm. Currently, I create an env variable so that, when I deploy my pod, the pod name appears in the environment variables list. This is done like so in the template file:
containers:
  - name: my-container
    env:
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Is it possible to do something similar in the values.yaml file (maybe in an extraEnv field?) and then use that value in the .tpl? Other configurations, like ConfigMap names, depend on it in order to be unique between pods, and I want to easily retrieve the value like so:
volumes:
  - name: my-vol
    configMap:
      name: {{ .Values.pathto.extraEnv.podname }}
Thanks in advance!

Set Kubernetes env variable from container's image version

I was wondering if it's possible to refer to the image field in a Kubernetes deployment YAML file, as in:
env:
  - name: VERSION
    valueFrom:
      containerField: spec.image
Please let me know. Thank you.
The image value in a pod definition cannot be passed as an environment variable using fieldRef.
The only supported values are metadata.name, metadata.namespace, metadata.labels, metadata.annotations, spec.nodeName, spec.serviceAccountName, status.hostIP, status.podIP, status.podIPs, resource fields (memory/CPU requests and limits), and the container's ephemeral-storage limit/request.
As a workaround, you can set the version as a pod label and then expose that label as an environment variable, for example:
env:
  - name: VERSION
    valueFrom:
      fieldRef:
        fieldPath: metadata.labels['version']
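To complete the picture, a minimal sketch of the pod template for that workaround; the label value has to be kept in sync with the image tag yourself (the version value and my-app names here are illustrative):
metadata:
  labels:
    version: "1.2.3"
spec:
  containers:
    - name: my-app
      image: my-app:1.2.3
      env:
        - name: VERSION
          valueFrom:
            fieldRef:
              fieldPath: metadata.labels['version']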

Using different Secrets in sts replicas

I'm trying to use different Secrets on a StatefulSet, based on the index of the pods.
Here is what I tried:
env:
  - name: POD_NAME
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: SECRET_KEY
    valueFrom:
      secretKeyRef:
        key: key
        name: mysecret-$(POD_NAME)
  - name: SECRET_HOST
    value: myhost-$(POD_NAME)
However, mysecret-$(POD_NAME) is not correctly substituted as a parameter, while myhost-$(POD_NAME) works correctly.
How can I solve this problem? The goal is to set different variables from Secrets/ConfigMaps on different replicas of the StatefulSet.
AFAIK this is not supported; $(VAR) substitution works in value fields but not in secretKeyRef names. The only per-replica resources you can vary are the PersistentVolumes. Instead, use a single Secret with keys based on the pod index, and write your software to read from the correct key, as sketched below.
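A minimal sketch of that approach, assuming the Secret is mounted at /etc/mysecret and the ordinal is derived from the StatefulSet hostname (all names here are illustrative):
apiVersion: v1
kind: Secret
metadata:
  name: mysecret
stringData:
  key-0: secret-for-replica-0
  key-1: secret-for-replica-1
---
# In the StatefulSet pod template:
containers:
  - name: app
    command: ["sh", "-c"]
    args:
      - |
        ordinal=${HOSTNAME##*-}   # e.g. myapp-1 -> 1
        export SECRET_KEY=$(cat /etc/mysecret/key-${ordinal})
        exec /app/server
    volumeMounts:
      - name: mysecret
        mountPath: /etc/mysecret
        readOnly: true
volumes:
  - name: mysecret
    secret:
      secretName: mysecret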

kubernetes transfer Physical IP to dubbo

I want to pass the physical (node) IP to a Dubbo pod via YAML, but the parameter is a fixed value. For example:
dubbo.yaml
spec:
  replicas: 2
  ...
  env:
    - name: PhysicalIP
      value: 192.168.1.1
In the pod, before starting Dubbo, I can rewrite the container IP, for example:
echo "rewrite /etc/hosts"
cp /etc/hosts /etc/hosts.tmp
sed -i "s/.*$(hostname)/${PhysicalIP} $(hostname)/" /etc/hosts.tmp
cat /etc/hosts.tmp > /etc/hosts
The problem: when the pods are deployed to hosts 192.168.1.1 and 192.168.1.2, the pod on host 192.168.1.2 still gets ${PhysicalIP} set to 192.168.1.1. I want ${PhysicalIP} to be 192.168.1.2 on host 192.168.1.2. Is there any way to do this?
You should be able to get information about the pod through Environment Variables or using a DownwardAPIVolumeFile.
Using environment variables:
You should add to your yaml something like this:
env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
As far as I know, the name of the node is, right now, the best approach to what you need that you can get from inside the container.
Using a DownwardAPIVolumeFile:
You should add to your yaml something like this:
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "nodename"
          fieldRef:
            fieldPath: spec.nodeName
and mount it in the container:
volumeMounts:
  - name: podinfo
    mountPath: /etc/podinfo
This way you will have the node name stored in /etc/podinfo/nodename.
Issue #24657 and pull request #42717 on the Kubernetes GitHub are related to this.
As you can see there, access to the node IP through the downwardAPI should be available soon (using status.hostIP, probably).
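That has since landed: as the accepted answer at the top of this page shows, the node IP can now be exposed directly via status.hostIP, which removes the need for the /etc/hosts rewrite:
env:
  - name: PhysicalIP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP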