How to get the system hostname of a Kubernetes deployment inside a pod?

In Kubernetes we can use an environment variable to pass the host IP to a pod:
env:
  - name: NODE_IP
    valueFrom:
      fieldRef:
        fieldPath: status.hostIP
Similarly, how do I get the host name instead of the host IP?

env:
  - name: MY_NODE_NAME
    valueFrom:
      fieldRef:
        fieldPath: spec.nodeName
See: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#the-downward-api
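For completeness, a minimal sketch of a pod that exposes both the node name and the node IP through the Downward API (the pod name, container image, and env var names here are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: node-info-demo            # example name
spec:
  restartPolicy: Never
  containers:
    - name: app
      image: busybox              # example image
      command: [ "sh", "-c", "printenv MY_NODE_NAME NODE_IP" ]
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName   # the node's name (normally its hostname)
        - name: NODE_IP
          valueFrom:
            fieldRef:
              fieldPath: status.hostIP   # the node's IP
After applying it, kubectl logs node-info-demo should print the node name and IP.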

That should be, as already answered, spec.nodeName.
What I also want to mention is that only a limited set of fields is available. Check Capabilities of the Downward API.
Information available via fieldRef:
metadata.name
metadata.namespace
metadata.uid
metadata.labels['<KEY>']
metadata.annotations['<KEY>']
In addition, the following information is available through a downwardAPI volume fieldRef (see the volume sketch after this list):
metadata.labels
metadata.annotations
The following information is available through environment variables:
status.podIP - the pod's IP address
spec.serviceAccountName
spec.nodeName - the node's name, available since v1.4.0-alpha.3
status.hostIP - the node's IP, available since v1.7.0-alpha.1
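For the fields that are only exposed through a downwardAPI volume (the full metadata.labels and metadata.annotations maps), a minimal sketch might look like this (pod, volume, and mount names are just examples):
apiVersion: v1
kind: Pod
metadata:
  name: downwardapi-volume-demo   # example name
  labels:
    app: demo
spec:
  containers:
    - name: app
      image: busybox              # example image
      command: [ "sh", "-c", "cat /etc/podinfo/labels; sleep 3600" ]
      volumeMounts:
        - name: podinfo
          mountPath: /etc/podinfo
  volumes:
    - name: podinfo
      downwardAPI:
        items:
          - path: labels
            fieldRef:
              fieldPath: metadata.labels
          - path: annotations
            fieldRef:
              fieldPath: metadata.annotations
The labels and annotations then appear as files under /etc/podinfo and are refreshed if they change.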

Related

Using the Downward API to access multiple container names in Kubernetes

Is there a programmatic way to get the container name in the pod spec?
I have multiple app containers running in a single pod via a Deployment YAML.
My fluentd instance runs as a sidecar in the deployment, and it needs to collect the logs emitted by these containers. How can it identify a container's name?
The downward API looks promising in this regard; however, the container name is not mentioned in the Capabilities of the Downward API section.
Is there any workaround?
- name: fluentd
  image: fluent/fluentd-kubernetes-daemonset:v1-debian-forward
  env:
    - name: FLUENT_FOWARD_HOST
      value: "10.10.132.59"
    - name: FLUENT_FOWARD_PORT
      value: "24224"
    - name: K8S_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: K8S_POD
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
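There is no Downward API field for container names, but since the container names are fixed in the same manifest, one common workaround is to pass them in as literal values alongside the Downward API fields. A sketch, where K8S_CONTAINER_NAME, K8S_APP_CONTAINERS, and the app container names are just example values:
- name: fluentd
  image: fluent/fluentd-kubernetes-daemonset:v1-debian-forward
  env:
    - name: K8S_POD
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: K8S_CONTAINER_NAME      # this container's own name, set by hand
      value: "fluentd"
    - name: K8S_APP_CONTAINERS      # the app containers in this pod, set by hand
      value: "app-one,app-two"
Fluentd can then read these variables from its environment when tagging or routing the collected logs.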

Assigning a unique number in env inside each pod in a kubernetes cluster

I have a Kubernetes cluster in which a number of pods will be running. For each pod I want to assign a unique ID as an env variable, e.g. pod 1 gets server_id=1, pod 2 gets server_id=2, and so on.
Does anyone have an idea how this can be done? I am building my Docker image and deploying to the cluster through GitLab CI.
Adding env variables via Helm or a YAML template:
You can add a variable to the YAML file and apply it as required if your deployments are different, then read the variable's value inside the pod and use it.
Alternatively, you can use the pod name, which is different for every pod of a deployment; if a value that is merely unique is fine for you, that will do:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
            printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
If you want to set your own variables and values, you have to use separate Deployments.
If you want to manage a sequence, you need to use a StatefulSet, which names its pods in a sequence like
POD-1
POD-2
POD-3
If a sequential number is all you need, you can take the StatefulSet pod name inside the pod, derive the ordinal from it, and let the application use it further, as sketched below.
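A sketch of that idea, assuming a StatefulSet named web: the pods are named web-0, web-1, ..., the hostname inside each pod equals the pod name, and the ordinal is stripped off at startup and exported as SERVER_ID (all names here are examples):
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: web                        # pods become web-0, web-1, web-2
spec:
  serviceName: web
  replicas: 3
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: app
          image: busybox           # example image
          command: [ "sh", "-c" ]
          args:
            - |
              # The hostname equals the pod name, e.g. web-2 -> SERVER_ID=2
              export SERVER_ID="${HOSTNAME##*-}"
              echo "server_id=$SERVER_ID"
              sleep 3600
A plain Deployment generates random pod name suffixes, so if the number has to be stable and sequential, the StatefulSet ordinal is the simplest source.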

kubernetes: using value of runAsUser in an environment variable using valueFrom?

I have a kubernetes deployment that starts a pod that includes a runAsUser key in its securityContext. I was hoping I could stick this value in the environment of an initContainer using valueFrom, like this:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdeployment
spec:
  template:
    spec:
      containers:
        - name: myservice
          image: myimage
          securityContext:
            runAsUser: 1000
      initContainers:
        - name: initialize_things
          image: myimage
          env:
            - name: CONTAINER_UID
              valueFrom:
                fieldRef:
                  fieldPath: spec.containers[0].securityContext.runAsUser
That doesn't seem to work:
The Deployment "testdeployment" is invalid: spec.template.spec.initContainers[0].env[0].valueFrom.fieldRef.fieldPath: Invalid value: "spec.containers[0].securityContext.runAsUser": error converting fieldPath: field label not supported: spec.containers[0].securityContext.runAsUser
Is there any way to make this work? I'm trying to reduce the number of places I'm hardcoding that UID.
I think you can't make this work, because the downward API doesn't support spec.containers[0].securityContext.runAsUser as a field.
By the way, in your case the full path would more logically be spec.template.spec.containers[0].securityContext.runAsUser, but it won't help anyway.
As per Capabilities of the Downward API, you can use only a few fields:
Information available via fieldRef:
metadata.name
metadata.namespace
metadata.uid
metadata.labels['<KEY>']
metadata.annotations['<KEY>']
In addition, the following information is available through downwardAPI volume fieldRef:
metadata.labels
metadata.annotations
The following information is available through environment variables:
status.podIP
spec.serviceAccountName
spec.nodeName
status.hostIP
You can find a very similar (closed) issue on GitHub: how to get imageID in container.
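If the goal is just to avoid hardcoding the UID in several places, one workaround is to set runAsUser once at the pod level, where it is inherited by the init container and the app container alike, and let the init container discover its own UID at runtime with id -u. A minimal sketch with example names:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: testdeployment
spec:
  selector:
    matchLabels:
      app: testapp                 # example label
  template:
    metadata:
      labels:
        app: testapp
    spec:
      securityContext:
        runAsUser: 1000            # defined once for the whole pod
      initContainers:
        - name: initialize-things
          image: myimage           # example image
          command: [ "sh", "-c", "CONTAINER_UID=$(id -u); echo initializing as uid $CONTAINER_UID" ]
      containers:
        - name: myservice
          image: myimage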

exporting POD_HOST + random_string as an environment variable in a pod

I am writing a StatefulSet and I need to export POD_HOST plus the suffix abc as an environment variable.
The second environment variable should be named differently from POD_HOST.
I did something like:
env:
  - name: POD_HOST
    valueFrom:
      fieldRef:
        fieldPath: metadata.name
  - name: POD_FULL_NAME
    value: $POD_HOST"abc"
which I can see exported in the environment literally as $POD_HOST"abc". Is there a way to have POD_HOST resolved inside the pod before it is exported to the environment?
You just need to use parentheses to reference one environment variable inside another. Note that the referenced variable has to be defined earlier in the env list than the variable that uses it, otherwise the reference is left unresolved.
Example:
env:
  - name: POD_HOST
    valueFrom:
      fieldRef:
        apiVersion: v1
        fieldPath: metadata.name
  - name: POD_FULL_NAME
    value: $(POD_HOST)-abc

How to see which node/pod served a Kubernetes Ingress request?

I have a Deployment with three replicas, each started on a different node, behind an Ingress. For tests and troubleshooting, I want to see which pod/node served my request. How is this possible?
The only way I know is to open the logs of all the pods, make my request, and search for the pod that has it in its access log. But this is complicated and error prone, especially on production apps with requests from other users.
I'm looking for something like an HTTP response header like this:
X-Kubernetes-Pod: mypod-abcdef-23874
X-Kubernetes-Node: kubw02
AFAIK, there is no such feature out of the box.
The easiest way I can think of is to add this information as headers yourself from your API.
You would have to expose pod information to the containers through environment variables and read it in your code to add the headers to the response.
It would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
            printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
            sleep 10;
          done;
      env:
        - name: MY_NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: MY_POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: MY_POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
  restartPolicy: Never
Then, in your API, read these environment variables and insert them into the response headers.