How to get self pod with kubernetes client-go

I have a kubernetes service, written in Go, and am using client-go to access the kubernetes apis.
I need the Pod object for the service's own pod.
The PodInterface allows me to iterate over all pods, but what I need is a "self" concept to get the currently running pod that is executing my code.
It appears that by reading /var/run/secrets/kubernetes.io/serviceaccount/namespace and then searching the pods in that namespace for the one whose name matches the hostname, I can determine the "self" pod.
Is this the proper solution?

Expose the POD_NAME and POD_NAMESPACE to your pod as environment variables. Later use those values to get your own pod object.
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c" ]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_POD_NAME MY_POD_NAMESPACE;
        sleep 10;
      done;
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
  restartPolicy: Never
Ref: environment-variable-expose-pod-information
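With those variables set (the manifest above names them MY_POD_NAME and MY_POD_NAMESPACE), a minimal client-go sketch for fetching the "self" Pod object could look like this, assuming in-cluster configuration and a client-go version recent enough that Get takes a context:

package main

import (
	"context"
	"fmt"
	"os"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	// In-cluster config uses the service account token mounted into the pod.
	config, err := rest.InClusterConfig()
	if err != nil {
		panic(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// These variables are injected via the downward API, as in the manifest above.
	name := os.Getenv("MY_POD_NAME")
	namespace := os.Getenv("MY_POD_NAMESPACE")

	self, err := clientset.CoreV1().Pods(namespace).Get(context.TODO(), name, metav1.GetOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("running as pod %s on node %s\n", self.Name, self.Spec.NodeName)
}

Note that the pod's service account needs RBAC permission to get pods in its own namespace.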

Related

Kubernetes: how to get pod name I'm running in?

I want to run Python code inside a pod. The pod is created by Airflow, which I don't control.
I want to somehow get the name of the pod I'm running in.
How can it be done?
You can tell Kubernetes to inject an environment variable for you:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-test-pod
spec:
  containers:
  - name: test-container
    image: gcr.io/google_containers/busybox
    command: [ "/bin/sh", "-c", "env" ]
    env:
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
and then in Python you can access it like:
import os
pod_name = os.environ['MY_POD_NAME']
Or you can just open and read /etc/hostname (a pod's hostname defaults to its pod name):
with open('/etc/hostname') as f:
    pod_name = f.read().strip()
Exposing Pod and Cluster Vars to Containers
Let's say you need some data about the Pod or the K8s environment in your application, e.g. to add Pod information as metadata to logs, such as:
Pod IP
Pod Name
Service Account of Pod
NOTE: all of this Pod information can be made available to the application through the pod's config file (manifest).
There are two ways to expose Pod fields to a running container:
Environment Variables
Volume Files
Example of Environment Variable
apiVersion: apps/v1
kind: Deployment
metadata:
  name: nginx-deployment-env
  labels:
    app: nginx
spec:
  replicas: 3
  selector:
    matchLabels:
      app: nginx
  template:
    metadata:
      labels:
        app: nginx
    spec:
      containers:
      - name: nginx
        image: nginx
        ports:
        - containerPort: 80
      - name: log-sidecar
        image: busybox
        command: [ 'sh', '-c' ]
        args:
        - while true; do
            echo sync app logs;
            printenv POD_NAME POD_IP POD_SERVICE_ACCOUNT;
            sleep 20;
          done;
        env:
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: POD_SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
Kubernetes Docs
Source
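For the volume-files variant, the pod fields end up as plain files inside the container. Here is a minimal Go sketch to read them, assuming the pod spec mounts a downwardAPI volume at /etc/podinfo with an item for metadata.labels (that path is an assumption matching the Kubernetes docs example, not the manifest above):

package main

import (
	"fmt"
	"os"
)

func main() {
	// Assumes a downwardAPI volume mounted at /etc/podinfo with a
	// "labels" item populated from metadata.labels.
	labels, err := os.ReadFile("/etc/podinfo/labels")
	if err != nil {
		panic(err)
	}
	fmt.Printf("pod labels:\n%s\n", labels)
}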
One way can be:
In the Kubernetes cluster you are operating in, run
kubectl get pods
then view the YAML of a pod with
kubectl get pods <pod-name> -o yaml
and in that output find the container images used by the pod.
Identify the image name and tag that belong to your container.
That is, when you build your image, the image gets a name and a tag, which is then pushed to some registry, from where your pod pulls the image and starts the container.
You need to find that image name and tag in the pod YAML using the commands above.
Try the following:
# List all pods in all namespaces
kubectl get pods --all-namespaces
# List all pods in the current namespace
kubectl get pods -o wide
Then you can see more details using:
kubectl describe pod <pod-name>
You can also refer to the following Stack Overflow question and its related answers:
get-current-pod

Assigning a unique number in env inside each pod in a kubernetes cluster

I have a Kubernetes cluster in which there will be some pods running. For each pod I want to assign a unique id as an env variable, e.g. pod 1 gets server_id=1, pod 2 gets server_id=2, etc.
Does anyone have an idea how this can be done? I am building my Docker image and deploying to the cluster through GitLab CI.
Add ENV variables via the Helm or YAML template:
You can add a variable to the YAML file and apply it as required, if your deployments are different.
Get the variable's value inside the POD and use it.
Alternatively, you can use the POD name, which is different for each pod of a deployment; if you are fine with using that, it will be unique:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c" ]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
        printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
        sleep 10;
      done;
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
  restartPolicy: Never
If you want to set your own variables and values, you have to use different deployments.
If you want to manage the sequence, you need to use StatefulSets.
A StatefulSet gives you a stable sequence of pod names, something like
POD-1
POD-2
POD-3
If that is all you want, you can take the StatefulSet POD name (injected as an environment variable as above, or read from the hostname), extract the ordinal, and have the application use it as the server_id; see the sketch below.
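A minimal Go sketch of that last approach: StatefulSet pods are named <statefulset-name>-<ordinal>, and a pod's hostname matches its pod name, so the application can derive its server_id from the hostname (the variable names here are illustrative):

package main

import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

func main() {
	// StatefulSet pods are named <name>-<ordinal>; the hostname equals
	// the pod name, so the ordinal can be parsed off the end.
	hostname, err := os.Hostname()
	if err != nil {
		panic(err)
	}
	idx := strings.LastIndex(hostname, "-")
	serverID, err := strconv.Atoi(hostname[idx+1:])
	if err != nil {
		panic(err)
	}
	fmt.Printf("server_id=%d\n", serverID)
}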

How to run consul via kubernetes?

I tried running the pod (https://www.consul.io/docs/platform/k8s/run.html)
It failed with... containers with unready status: [consul]
kubectl create -f consul-pod.yml
apiVersion: v1
kind: Pod
metadata:
  name: consul-example
spec:
  containers:
  - name: example
    image: "consul:latest"
    env:
    - name: HOST_IP
      valueFrom:
        fieldRef:
          fieldPath: status.hostIP
    command:
    - "/bin/sh"
    - "-ec"
    - |
      export CONSUL_HTTP_ADDR="${HOST_IP}:8500"
      consul kv put hello world
  restartPolicy: Never

How to see which node/pod served a Kubernetes Ingress request?

I have a Deployment with three replicas, each started on a different node, behind an Ingress. For tests and troubleshooting, I want to see which pod/node served my request. How is this possible?
The only way I know is to open the logs of all the pods, make my request, and search for the pod that has my request in its access log. But this is complicated and error-prone, especially on production apps with requests from other users.
I'm looking for something like a HTTP Response header like this:
X-Kubernetes-Pod: mypod-abcdef-23874
X-Kubernetes-Node: kubw02
AFAIK, there is no feature like that out of the box.
The easiest way I can think of is adding this information as headers yourself, from your API.
Technically, you have to expose the Pod information to the container through environment variables and read it in your code to add the headers to the response.
It would be something like this:
apiVersion: v1
kind: Pod
metadata:
  name: dapi-envars-fieldref
spec:
  containers:
  - name: test-container
    image: k8s.gcr.io/busybox
    command: [ "sh", "-c" ]
    args:
    - while true; do
        echo -en '\n';
        printenv MY_NODE_NAME MY_POD_NAME MY_POD_NAMESPACE;
        printenv MY_POD_IP MY_POD_SERVICE_ACCOUNT;
        sleep 10;
      done;
    env:
    - name: MY_NODE_NAME
      valueFrom:
        fieldRef:
          fieldPath: spec.nodeName
    - name: MY_POD_NAME
      valueFrom:
        fieldRef:
          fieldPath: metadata.name
    - name: MY_POD_NAMESPACE
      valueFrom:
        fieldRef:
          fieldPath: metadata.namespace
    - name: MY_POD_IP
      valueFrom:
        fieldRef:
          fieldPath: status.podIP
    - name: MY_POD_SERVICE_ACCOUNT
      valueFrom:
        fieldRef:
          fieldPath: spec.serviceAccountName
  restartPolicy: Never
And in the API you read that information and insert it into the response headers.
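For example, a minimal Go handler that reads the variables from the manifest above and echoes them as the headers the question asks for (header names taken from the question; this is a sketch, not a drop-in implementation):

package main

import (
	"net/http"
	"os"
)

func main() {
	// MY_POD_NAME and MY_NODE_NAME come from the downward API env vars above.
	podName := os.Getenv("MY_POD_NAME")
	nodeName := os.Getenv("MY_NODE_NAME")

	http.HandleFunc("/", func(w http.ResponseWriter, r *http.Request) {
		// Echo the pod/node identity so you can see which replica answered.
		w.Header().Set("X-Kubernetes-Pod", podName)
		w.Header().Set("X-Kubernetes-Node", nodeName)
		w.Write([]byte("ok\n"))
	})
	http.ListenAndServe(":8080", nil)
}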

Make external API hit from kubernetes

I have a Kubernetes service running, and we have external APIs that depend on this service.
We would like to be notified if there is any service restart. Is there any possibility to hit an API endpoint on every service restart?
Hi and welcome to the community!
There are multiple ways to achieve this. A really simple one (as pointed out by Thomas) is an Init Container. Refer to the Kubernetes docs for more on how to get those running! This init container would do nothing more than send an HTTP request to your external API once the pod is started, and terminate immediately afterwards.
The other way is much more complex and will require you to write some code yourself. What you'd have to do is write your own controller that watches the pods through the Kubernetes API and notifies your external service when a pod is rescheduled, killed, dies, etc.
(You could, however, have your external service do exactly that by accessing the kube-api directly...)
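A bare-bones sketch of that watch approach with client-go (the namespace, label selector, and notification URL are placeholders; a production controller would normally use informers instead of a raw watch, since informers handle reconnects and caching):

package main

import (
	"context"
	"log"
	"net/http"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/watch"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/rest"
)

func main() {
	config, err := rest.InClusterConfig()
	if err != nil {
		log.Fatal(err)
	}
	clientset, err := kubernetes.NewForConfig(config)
	if err != nil {
		log.Fatal(err)
	}

	// Placeholder namespace and label selector.
	w, err := clientset.CoreV1().Pods("default").Watch(context.TODO(),
		metav1.ListOptions{LabelSelector: "app=my-service"})
	if err != nil {
		log.Fatal(err)
	}
	for event := range w.ResultChan() {
		if event.Type == watch.Added || event.Type == watch.Deleted {
			// Notify the external API about the pod lifecycle event.
			if resp, err := http.Get("https://example.com/notify?event=" + string(event.Type)); err == nil {
				resp.Body.Close()
			}
		}
	}
}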
Expanding on enzian's comment about using initContainers, here is an example using a curl-based initContainer and exposing pod metadata as an environment variable passed in the call:
apiVersion: apps/v1
kind: Deployment
metadata:
  name: some-service
  namespace: the-project
  labels:
    app: some-service
spec:
  replicas: 1
  selector:
    matchLabels:
      app: some-service
  template:
    metadata:
      labels:
        app: some-service
    spec:
      initContainers:
      - name: service-name-init
        image: txn2/curl:v3.0.0
        env:
        - name: SOME_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        command: [
          "/bin/sh",
          "-c",
          "/usr/bin/curl -sX GET example.com/notify/$(SOME_NAME)"
        ]
      containers:
      - name: ok
        image: txn2/ok
        imagePullPolicy: Always
        env:
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: POD_NAME
          valueFrom:
            fieldRef:
              fieldPath: metadata.name
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        ports:
        - name: ok-port
          containerPort: 8080