Get IP addresses of all k8s pods from within a container - kubernetes

I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I ran refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately, I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need a way to dynamically get the IPs of all the pods backing my-service, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.

The best way to achieve this is to use the in-cluster Kubernetes config from within the pod.
The official Kubernetes Python client can help here. Here is an example Python script that gets pods and their metadata from inside a pod.
from kubernetes import client, config

def trigger_refresh_cache():
    # load_incluster_config() works only when this script runs inside a pod
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    # Rest of the logic goes here to trigger the endpoint
The load_incluster_config() method used here loads the cluster configuration from inside the pod, via the service account attached to that pod.
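From there, triggering the endpoint on every pod is just a loop over the pod IPs. Below is a minimal sketch of that follow-up step; the port 8080 and the /refreshCache path are assumptions, so adjust them to match your service:

from kubernetes import client, config
import requests

def trigger_refresh_cache():
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        # Port and path are assumptions; adjust to your service.
        url = "http://%s:8080/refreshCache" % i.status.pod_ip
        try:
            requests.post(url, timeout=5)
        except requests.RequestException as e:
            print("Failed to refresh %s: %s" % (i.metadata.name, e))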

You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses from the returned JSON, you can pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
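For completeness, the same lookup can be done from Python without the kubernetes client, using plain HTTP requests; a minimal sketch, assuming the default in-cluster token and CA certificate paths:

import requests

TOKEN_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/token"
CA_PATH = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"

def endpoint_ips(namespace, name="my-service"):
    with open(TOKEN_PATH) as f:
        token = f.read()
    url = ("https://kubernetes.default.svc:443/api/v1/"
           "namespaces/%s/endpoints/%s" % (namespace, name))
    resp = requests.get(url,
                        headers={"Authorization": "Bearer " + token},
                        verify=CA_PATH)
    resp.raise_for_status()
    # Same extraction as the jq filter: .subsets[].addresses[].ip
    return [a["ip"]
            for subset in resp.json()["subsets"]
            for a in subset["addresses"]]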
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.

Related

How to get another pod's name from its IP?

I have a pod that exposes a port (Server). Other pods (Clients) can communicate with it.
The server can see the remote IP and port on the socket when a client connects to it. I am looking for a way to get the client's pod name from its IP and port.
I saw a bunch of questions/answers about getting pod names via kubectl. However, I am not sure whether I can run kubectl from within the cluster itself.
I am trying to figure out what is available to something running on the cluster. It's OK if it requires some special privileges; it's more complicated if it requires authentication.
List all the Pods with the List Pods API operation and parse the JSON response for the podIP field (e.g. with jq or some other JSON parsing tool) to find the JSON object of the Pod that has your desired IP address. Then, extract the metadata.name field from this JSON object to get the name of the Pod.
You can do this by either directly using the Kubernetes API (e.g. with curl) or with kubectl (e.g. kubectl get pods -o json | jq ...). In any case, you must include in this request the ServiceAccount token of the ServiceAccount used by the Pod from which you are issuing the request (if you use the Kubernetes API directly, as a Bearer token in the Authorization header, and if you use kubectl with the --token command-line flag).
Regarding authorisation, you need a Role allowing the list verb on the pods resource and a RoleBinding that binds this Role to the ServiceAccount that your Pod is using (by default, Pods use a ServiceAccount named default in their namespace, but you can specify a custom ServiceAccount with the serviceAccountName field of the Pod).
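As a sketch of what this looks like in code, here is the lookup with the official Python client from inside the cluster (pod_name_for_ip is a hypothetical helper; it assumes a ServiceAccount with list permission on pods):

from kubernetes import client, config

def pod_name_for_ip(ip, namespace="default"):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    # Find the pod whose podIP matches and return its name.
    for pod in v1.list_namespaced_pod(namespace).items:
        if pod.status.pod_ip == ip:
            return pod.metadata.name
    return None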
Whatever can be done via kubectl can be done via direct requests to the API server as well. All you need is a properly set up ServiceAccount for your pod. Once you have it, you can use plenty of client libraries dedicated to communicating with the k8s API server.

kubectl get nodes from pod (NetworkPolicy)

I am trying to use Python to run kubectl get nodes from inside the pod.
How should I set up a NetworkPolicy for this pod?
I tried to allow traffic from my namespace to the kube-system namespace, but it did not work.
Thanks.
As per Accessing the API from a Pod:
The recommended way to locate the apiserver within the pod is with the
kubernetes.default.svc DNS name, which resolves to a Service IP which
in turn will be routed to an apiserver.
The recommended way to authenticate to the apiserver is with a service
account credential. By default, a pod is associated with a service account, and a credential (token) for that service account is placed into the filesystem tree of each container in that pod, at /var/run/secrets/kubernetes.io/serviceaccount/token.
All you need is a service account with enough privileges, and use the API server DNS name as stated above. Example:
# Export the token value
export TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
# Use kubectl to talk internally with the API server
kubectl --insecure-skip-tls-verify=true \
--server="https://kubernetes.default.svc:443" \
--token="${TOKEN}" \
get nodes
A restrictive NetworkPolicy could prevent this type of call; by default, however, the above should work.
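Since the question mentions Python, the equivalent of kubectl get nodes with the official Python client would look roughly like this (a sketch; it assumes the pod's ServiceAccount is allowed to list nodes):

from kubernetes import client, config

def list_nodes():
    # Uses the ServiceAccount token mounted into the pod
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    for node in v1.list_node().items:
        print(node.metadata.name)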

kubernetes api-server understanding

Hi, I'm a newbie with k8s and I was wondering where and how kubectl sends requests to the kube api-server.
So for example, if I send a request such as "kubectl get pods --all-namespaces" (and my default Kubernetes endpoint is set to "192.168.64.2:8443"), my understanding is that this translates to an HTTPS request such as "https://192.168.64.2:8443/api/v1/pods...etc", and kubectl uses the authentication stored in the .kube/config file. Am I right?
And I also have a metrics-server up and running on endpoint "172.17.0.8:4443", but how does kubectl know to use this IP when I run "kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<NODE_NAME> | jq"? Are all kubectl commands directed to one IP?
Thanks in advance.
Authentication is one of the steps that kubectl performs. You can see what happens when a command runs by increasing the verbosity. For example:
kubectl get pods -v9 --all-namespaces
Kubernetes knows the resource definitions and their implementations; you can check the available resource types with:
kubectl api-resources
So the Kubernetes api-server knows which resources are served by the metrics-server and how to route calls to them.
All kubectl requests go to the api-server. The api-server either answers by itself or delegates to other components, for example an extension API server (which is how the metrics API is served).
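To see that core and aggregated APIs are served through the same endpoint, you can request both kinds of paths against the in-cluster api-server address; a sketch, assuming the pod's ServiceAccount is allowed to read both resources:

import requests

TOKEN = open("/var/run/secrets/kubernetes.io/serviceaccount/token").read()
CA = "/var/run/secrets/kubernetes.io/serviceaccount/ca.crt"
BASE = "https://kubernetes.default.svc:443"

# A core API path and an aggregated (metrics-server) path,
# both answered by the same api-server endpoint.
for path in ("/api/v1/pods?limit=1", "/apis/metrics.k8s.io/v1beta1/nodes"):
    r = requests.get(BASE + path,
                     headers={"Authorization": "Bearer " + TOKEN},
                     verify=CA)
    print(path, r.status_code)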

A way to communicate to a Pod that it's restarting

I need to communicate to a Pod whether it is restarting or not, because depending on the situation my app works differently (it is a stateful app). I would rather not create another pod that runs a kind of watchdog and then informs my app whether it is restarting (after a fault). But maybe there is a way to do it with Kubernetes components (kubelet, ...).
Quoting from Kubernetes Docs:
Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account
(for example, default)
A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.
An RBAC Role or ClusterRole contains rules that represent a set of
permissions.
A Role always sets permissions within a particular namespace.
ClusterRole, by contrast, is a non-namespaced resource
So, in order to get/watch the status of the other pod, you can call the Kubernetes API from the pod running your code by using a ServiceAccount. Follow the steps below to retrieve another pod's status from a given pod without any external dependency (due to reliability concerns, you shouldn't rely upon nodes).
Create a serviceaccount in the requesting pod's namespace
kubectl create sa pod-reader
If both pods are in the same namespace, create a Role and a RoleBinding.
Create a role
kubectl create role pod-reader --verb=get,watch --resource=pods
Create a rolebinding
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Otherwise, i.e. if the pods are in different namespaces, create a ClusterRole and a ClusterRoleBinding.
Create a clusterrole
kubectl create clusterrole pod-reader --verb=get,watch --resource=pods
Create a clusterrolebinding
kubectl create clusterrolebinding pod-reader-binding --clusterrole=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Verify the permissions
kubectl auth can-i watch pods --as=system:serviceaccount:<NAMESPACE>:pod-reader
Now deploy your pod (your app) with this serviceaccount.
kubectl run <MY-POD> --image=<MY-CONTAINER-IMAGE> --serviceaccount=pod-reader
This will mount the serviceaccount's secret token into your pod at /var/run/secrets/kubernetes.io/serviceaccount/token. Your app can use this token to make GET requests to the Kubernetes API server in order to get the status of the other pod. See the example below (it assumes your pod has the curl utility installed; alternatively, you can make the equivalent API call from your code, passing the header after reading the serviceaccount token file mounted in your pod).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k
curl https://kubernetes.default/api/v1/watch/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD>?timeoutSeconds=30 -H "Authorization: Bearer ${TOKEN}" -k
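If you'd rather do this from code than curl, the official Python client wraps the same watch API in a helper; a sketch (namespace and pod name are placeholders):

from kubernetes import client, config, watch

def watch_pod(namespace, pod_name):
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    w = watch.Watch()
    # Streams ADDED/MODIFIED/DELETED events for the named pod.
    for event in w.stream(v1.list_namespaced_pod, namespace,
                          field_selector="metadata.name=" + pod_name,
                          timeout_seconds=30):
        print(event["type"], event["object"].status.phase)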
References:
Kubernetes API
serviceaccount

How to make a service accessible via the service proxy running on the master in Kubernetes

How can I make a service accessible via the service proxy running on the master in Kubernetes?
Like the kube-ui or fluentd-elasticsearch services in the examples, which can be accessed at a URL such as: http://[masterIP:port]/api/v1/proxy/namespaces/kube-system/services/kube-ui/
I cannot access http://[masterIP:port]/api/v1/proxy/namespaces/test/services/myweb when I create a service named myweb in the test namespace.
So how do I do this?
If you're trying to access it from a pod running in the cluster, you're best off just accessing the service directly. Services are made available using DNS within the cluster. If your pod is in the same namespace as the service, you should be able to access it simply by its name, e.g. at myweb in this case. If your pod is in a different namespace, you can reach it at service-name.namespace, e.g. myweb.test in this case.
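A quick way to verify the in-cluster DNS name from Python (a sketch; it assumes the myweb Service exists in the test namespace):

import socket

# Resolves to the Service's ClusterIP from any pod in the cluster.
print(socket.gethostbyname("myweb.test"))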
If you're trying to access it from outside the cluster, then you shouldn't need to do anything different than you do for the default services. If you're unable to access it in the same way, it's likely that your service doesn't have any pods backing it, or that those pods aren't working. You can check which pods are backing your service using kubectl get endpoints myweb --namespace=test. If that's empty, then you should make sure you've scheduled the pods that are meant to implement the service, and if so, that their labels are correct.
You might find the documentation on services useful.