Kubernetes: how to get the name of the pod I'm running in?

I want to run Python code inside a pod. The pod is created by Airflow, which I don't control.
I want to somehow get the name of the pod I'm running in.
How can it be done?

You can tell Kubernetes to inject an environment variable for you:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
    - name: test-container
      image: gcr.io/google_containers/busybox
      command: [ "/bin/sh", "-c", "env" ]
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
and then in Python you can access it like:
import os
pod_name = os.environ['MY_POD_NAME']
Or you can just open and read /etc/hostname (the pod name is also used as the pod's hostname):
with open('/etc/hostname') as f:
    pod_name = f.read().strip()  # strip the trailing newline

Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or the K8s environment in your application, e.g. to add Pod information as metadata to logs (see the Python sketch after the example below), such as:
Pod IP
Pod Name
Service Account of the Pod
NOTE: All of this Pod information can be made available in the pod's config file.
There are two ways to expose Pod fields to a running container:
Environment variables
Volume files
Example using environment variables:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx
          ports:
            - containerPort: 80
        - name: log-sidecar
          image: busybox
          command: [ 'sh', '-c' ]
          args:
            - while true; do
                echo sync app logs;
                printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
                sleep 20;
              done;
          env:
            - name: POD_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.name
            - name: POD_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.podIP
            - name: POD_SERVICE_ACCOUNT
              valueFrom:
                fieldRef:
                  fieldPath: spec.serviceAccountName
Kubernetes Docs
Source
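On the application side these are ordinary environment variables, so attaching them to logs is straightforward. A minimal Python sketch (the variable names match the manifest above; the logging setup itself is just an illustration):
import logging
import os

# Pod fields injected by the downward API env entries in the manifest above.
pod_name = os.environ.get("POD_NAME", "unknown")
pod_ip = os.environ.get("POD_IP", "unknown")
service_account = os.environ.get("POD_SERVICE_ACCOUNT", "unknown")

logging.basicConfig(level=logging.INFO)
log = logging.getLogger(__name__)

# Tag log messages with the pod metadata.
log.info("starting up (pod=%s ip=%s serviceaccount=%s)", pod_name, pod_ip, service_account)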

One way can be:
In the Kubernetes cluster you are operating in, run
kubectl get pods
then look at the YAML of your pod with
kubectl get pod <pod-name> -o yaml
In that output, find the container images used by the pod and identify the image name and tag that belong to the container you built.
That is: when you build your image, it gets a name and a tag, which are pushed to some registry from which your pod pulls the image and starts the container. You need to find that image name and tag in the pod YAML using the commands above.

Try the below:
# List all pods in all namespaces
kubectl get pods --all-namespaces
# List all pods in the current namespace
kubectl get pods -o wide
Then you can see more details using:
kubectl describe pod <pod-name>
You can also refer to the following Stack Overflow question and the related answers:
get-current-pod

Related

How to get self pod with kubernetes client-go

I have a Kubernetes service, written in Go, and am using client-go to access the Kubernetes APIs.
I need the Pod object for the service's own pod.
The PodInterface allows me to iterate all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.
It appears by reading /var/run/secrets/kubernetes.io/serviceaccount/namespace and searching pods in the namespace for the one matching hostname, I can determine the "self" pod.
Is this the proper solution?
Expose POD_NAME and POD_NAMESPACE to your pod as environment variables, then use those values to fetch your own Pod object (see the client sketch after the manifest below).
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
    - name: test-container
      image: k8s.gcr.io/busybox
      command: [ "sh", "-c"]
      args:
        - while true; do
            echo -en '\n';
            printenv MY_POD_NAME MY_POD_NAMESPACE;
            sleep 10;
          done;
      env:
        - name: MY_POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: MY_POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
  restartPolicy: Never
Ref: environment-variable-expose-pod-information
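The question is about client-go, but the pattern is the same in any client library. A minimal sketch in Python (assuming the official kubernetes client package is installed, the pod's service account may read pods, and the MY_POD_NAME / MY_POD_NAMESPACE variables from the manifest above are set):
import os
from kubernetes import client, config

# Inside the cluster, load_incluster_config() picks up the mounted
# service account token and CA certificate automatically.
config.load_incluster_config()

pod_name = os.environ["MY_POD_NAME"]
namespace = os.environ["MY_POD_NAMESPACE"]

v1 = client.CoreV1Api()
self_pod = v1.read_namespaced_pod(name=pod_name, namespace=namespace)
print(self_pod.metadata.uid, self_pod.status.pod_ip)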

Add pod name to a ConfigMap in Kubernetes

I'm trying to figure out how to change one string inside a ConfigMap in Kubernetes. I have a pretty simple ConfigMap:
apiVersion: v1
data:
  config.cfg: |-
    [authentication]
    USERNAME=user
    PASSWORD=password
    [podname]
    PODNAME=metadata.podName
kind: ConfigMap
metadata:
  name: name_here
And I need to mount the ConfigMap inside a couple of pods, but PODNAME should match the name of the current pod. Is this possible in this or any other way? Thanks!
I do not think it can be done with a ConfigMap, but you can set environment variables in your pod spec that reference pod fields.
apiVersion: v1
kind: Pod
metadata:
  name: test-ref-pod-name
spec:
  containers:
    - name: test-container
      image: busybox
      command: [ "sh", "-c"]
      args:
        - env | grep PODNAME
      env:
        - name: PODNAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
  restartPolicy: Never
See official documentation: https://kubernetes.io/docs/tasks/inject-data-application/environment-variable-expose-pod-information/#use-pod-fields-as-values-for-environment-variables
This doesn't answer your question exactly, but the pod name normally ends up as the hostname inside the pod and can be accessed as a standard environment variable:
echo $HOSTNAME
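The Python equivalent is a one-liner (a sketch; it relies on the hostname defaulting to the pod name, which does not hold if hostname/subdomain are overridden in the pod spec):
import os
import socket

# The container hostname normally defaults to the pod name.
pod_name = os.environ.get("HOSTNAME") or socket.gethostname()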

Is it possible to know if the node where a Kubernetes Pod is being scheduled is master or worker?

I'm currently using Kubernetes to schedule a DaemonSet on both master and worker nodes.
The DaemonSet definition is the same for both node types (same image, same volumes, etc.); the only difference is that when the entrypoint is executed, I need to write a different configuration file (which is generated in Python with some dynamic values) depending on whether the node is a master or a worker.
Currently, to overcome this I'm using two different DaemonSet definitions with an env value which tells if the node is a master or not. Here's the yaml file (only relevant parts):
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: worker-ds
  namespace: kube-system
  labels:
    k8s-app: worker
spec:
  ...
    spec:
      hostNetwork: true
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "false"
          ...
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  name: master-ds
  namespace: kube-system
  labels:
    k8s-app: master
spec:
  ...
    spec:
      hostNetwork: true
      nodeSelector:
        node-role.kubernetes.io/master: ""
      tolerations:
        - key: node-role.kubernetes.io/master
          operator: Exists
          effect: NoSchedule
      containers:
        - name: my-image
          ...
          env:
            - name: NODE_IP
              valueFrom:
                fieldRef:
                  fieldPath: status.hostIP
            - name: IS_MASTER
              value: "true"
          ...
However, since the only difference is the IS_MASTER value, I want to collapse both the definitions in a single one that programmatically understands if the current node where the pod is being scheduled is a master or a worker.
Is there any way to know this information about the node programmatically (even by reading a configuration file on the node, for example something that only the master has, or vice versa)?
Thanks in advance.
Unfortunately, there is no convenient way to access node information from inside a pod.
If you only want a single DaemonSet definition, you can add a sidecar container to your pod; the sidecar container can access the k8s API, and your main container can then get what it needs from the sidecar (a sketch of such an API lookup follows below).
By the way, I think your current solution is fine :)
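As a sketch of what such an API lookup could look like (an assumption on my part, not something shown in the question): expose the node name to the container with the downward API (fieldPath: spec.nodeName) as a NODE_NAME env var, allow the service account to read nodes, and check the node's labels with the Python client:
import os
from kubernetes import client, config

# Assumes a NODE_NAME env var injected via the downward API
# (fieldRef: spec.nodeName) and RBAC permission to get nodes.
config.load_incluster_config()
node_name = os.environ["NODE_NAME"]

node = client.CoreV1Api().read_node(name=node_name)
labels = node.metadata.labels or {}
is_master = "node-role.kubernetes.io/master" in labels
print("master node" if is_master else "worker node")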
You can tell a node is the master if it has the label node-role.kubernetes.io/master: "".
What you need to do is access that label from your containers, which can be done with the Downward API (Edit: wrong, only Pod information can be accessed from the Downward API). You can mount the labels inside your containers using:
volumes:
  - name: podinfo
    downwardAPI:
      items:
        - path: "labels"
          fieldRef:
            fieldPath: metadata.labels
You can then search the content of that file from within the container.
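For completeness, a small Python sketch for reading such a downwardAPI labels file (the file contains one key="value" pair per line; per the edit above, these are the pod's own labels, not the node's):
def read_downward_labels(path="/etc/podinfo/labels"):
    # The downwardAPI file holds one key="value" pair per line.
    labels = {}
    with open(path) as f:
        for line in f:
            key, _, value = line.strip().partition("=")
            if key:
                labels[key] = value.strip('"')
    return labels

print(read_downward_labels())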

Service selection from only one pod of one statefulset

Is it possible to create a Service that only points to one pod created by a StatefulSet?
The solutions I can think of would be:
Using the name of the pod as the selector.
Dynamic labels with the name of the pod.
As of Kubernetes 1.9 you can use: statefulset.kubernetes.io/pod-name
From the documentation:
"When the StatefulSet controller creates a Pod, it adds a label,
statefulset.kubernetes.io/pod-name, that is set to the name of the
Pod. This label allows you to attach a Service to a specific Pod in
the StatefulSet."
https://kubernetes.io/docs/concepts/workloads/controllers/statefulset/#pod-name-label
You can use the ordinal value of a StatefulSet pod to label that pod.
I would use kubectl in an initContainer to label the pods from within the pods created by the StatefulSet, and use that label in the Service selector spec.
Example init container:
initContainers:
  - name: set-label
    image: lachlanevenson/k8s-kubectl:v1.8.5
    command:
      - sh
      - -c
      - '/usr/local/bin/kubectl label pod $POD_NAME select-id=${HOSTNAME##*-} --server=https://kubernetes.default --token=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token) --certificate-authority=/var/run/secrets/kubernetes.io/serviceaccount/ca.crt -n $POD_NAMESPACE'
    env:
      - name: POD_NAMESPACE
        valueFrom:
          fieldRef:
            fieldPath: metadata.namespace
      - name: POD_NAME
        valueFrom:
          fieldRef:
            fieldPath: metadata.name
Example service selector:
selector:
  app: my-app
  select-id: <0|1|2 ordinal value>

Kubernetes deployment name from within a pod?

How can I get the name of the Kubernetes deployment/job that spawned the current pod, from within the pod?
In many cases the hostname of the Pod equals the name of the Pod (you can access it via the HOSTNAME environment variable). However, that's not a reliable method of determining the Pod's identity.
You will want to use the Downward API, which allows you to expose metadata as environment variables and/or files on a volume.
The name and namespace of a Pod can be exposed as environment variables (fields: metadata.name and metadata.namespace) but the information about the creator of a Pod (which is the annotation kubernetes.io/created-by) can only be exposed as a file.
Example:
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: busybox
  labels: {app: busybox}
spec:
  selector: {matchLabels: {app: busybox}}
  template:
    metadata: {labels: {app: busybox}}
    spec:
      containers:
        - name: busybox
          image: busybox
          command:
            - "sh"
            - "-c"
            - |
              echo "I am $MY_POD_NAME in the namespace $MY_POD_NAMESPACE"
              echo
              grep ".*" /etc/podinfo/*
              while :; do sleep 3600; done
          env:
            - name: MY_POD_NAME
              valueFrom: {fieldRef: {fieldPath: metadata.name}}
            - name: MY_POD_NAMESPACE
              valueFrom: {fieldRef: {fieldPath: metadata.namespace}}
          volumeMounts:
            - name: podinfo
              mountPath: /etc/podinfo/
      volumes:
        - name: podinfo
          downwardAPI:
            items:
              - path: "labels"
                fieldRef: {fieldPath: metadata.labels}
              - path: "annotations"
                fieldRef: {fieldPath: metadata.annotations}
To see the output:
$ kubectl logs `kubectl get pod -l app=busybox -o name | cut -d / -f2`
Output:
I am busybox-1704453464-m1b9h in the namespace default
/etc/podinfo/annotations:kubernetes.io/config.seen="2017-02-16T16:46:57.831347234Z"
/etc/podinfo/annotations:kubernetes.io/config.source="api"
/etc/podinfo/annotations:kubernetes.io/created-by="{\"kind\":\"SerializedReference\",\"apiVersion\":\"v1\",\"reference\":{\"kind\":\"ReplicaSet\",\"namespace\":\"default\",\"name\":\"busybox-1704453464\",\"uid\":\"87b86370-f467-11e6-8d47-525400247352\",\"apiVersion\":\"extensions\",\"resourceVersion\":\"191157\"}}\n"
/etc/podinfo/annotations:kubernetes.io/limit-ranger="LimitRanger plugin set: cpu request for container busybox"
/etc/podinfo/labels:app="busybox"
/etc/podinfo/labels:pod-template-hash="1704453464"
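If you need that creator reference programmatically, the annotations file can be parsed inside the container. A minimal Python sketch, assuming the downwardAPI volume above and a cluster old enough to still set the (since-removed) kubernetes.io/created-by annotation; deriving the Deployment name by stripping the ReplicaSet's trailing pod-template-hash is a naming convention, not an API guarantee:
import json

def creator_reference(path="/etc/podinfo/annotations"):
    # Each line is key="value"; the created-by value is an escaped JSON blob.
    with open(path) as f:
        for line in f:
            key, _, raw = line.partition("=")
            if key == "kubernetes.io/created-by":
                ref = json.loads(json.loads(raw))["reference"]
                return ref["kind"], ref["name"]
    return None, None

kind, name = creator_reference()
if kind == "ReplicaSet":
    # e.g. "busybox-1704453464" -> "busybox"
    print(name.rsplit("-", 1)[0])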
If you are using the Downward API to get the deployment name from inside the pod and you want to avoid the volume-mount approach, there is one opinionated way to get deployment info exposed to the pod as environment variables.
Template labels specified in a Deployment spec are added as pod labels to each pod of that deployment.
Example: the app label below will be added to all pods of this deployment:
...
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
...
It is a commonly followed convention (again, not necessarily true in your case) to keep the app label value the same as the deployment name, as shown in the above example. If your deployments follow this convention (mine did), you can expose this label's value (essentially, the name of the deployment) as an environment variable to the pod, using the downward API.
Continuing the above example:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
        - name: nginx
          image: nginx:1.14.2
          ports:
            - containerPort: 80
          env:
            - name: NAMESPACE
              valueFrom:
                fieldRef:
                  fieldPath: metadata.namespace
            - name: DEPLOYMENT_NAME
              valueFrom:
                fieldRef:
                  fieldPath: metadata.labels['app']
Again, to be clear, this is not a guaranteed solution to your problem, since it exposes the label value rather than the actual deployment name; it is just an opinionated approach that I found useful and thought would be good to share.
In my case there were a lot of deployments (>20) and I didn't want to add the deployment name manually as an env variable to each deployment config. As my deployments already followed the above convention, I just copied the bit of YAML specifying the NAMESPACE and DEPLOYMENT_NAME variables into each deployment config.
References:
https://kubernetes.io/docs/tasks/inject-data-application/downward-api-volume-expose-pod-information/#capabilities-of-the-downward-api