A way to communicate to a Pod that it's restarting - kubernetes

I need a way to tell a Pod whether it is restarting or not, because depending on the situation my app behaves differently (it's a stateful app). I would rather not create another pod that runs as a kind of watchdog and then informs my app whether it is restarting (after a fault). But maybe there is a way to do it with Kubernetes components (the kubelet, ...)?

Quoting from Kubernetes Docs:
Processes in containers inside pods can also contact the apiserver.
When they do, they are authenticated as a particular Service Account
(for example, default)
A RoleBinding or ClusterRoleBinding binds a role to subjects. Subjects can be groups, users or ServiceAccounts.
An RBAC Role or ClusterRole contains rules that represent a set of
permissions.
A Role always sets permissions within a particular namespace.
ClusterRole, by contrast, is a non-namespaced resource
So, in order to get/watch the status of the other pod, you can call the Kubernetes API from the pod running your code by using a ServiceAccount. Follow the steps below to retrieve the other pod's status from your pod without any external dependency (due to reliability concerns, you shouldn't rely on nodes).
Create a ServiceAccount in your pod's (the requesting pod's) namespace
kubectl create sa pod-reader
If both pods are in the same namespace, create a Role and a RoleBinding
Create a role
kubectl create role pod-reader --verb=get,watch --resource=pods
Create a rolebinding
kubectl create rolebinding pod-reader-binding --role=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Otherwise, i.e. if the pods are in different namespaces, create a ClusterRole and a ClusterRoleBinding
Create a clusterrole
kubectl create clusterrole pod-reader --verb=get,watch --resource=pods
Create a clusterrolebinding
kubectl create clusterrolebinding pod-reader-binding --clusterrole=pod-reader --serviceaccount=<NAMESPACE>:pod-reader
Verify the permissions
kubectl auth can-i watch pods --as=system:serviceaccount:<NAMESPACE>:pod-reader
Now deploy your pod (your app) with this ServiceAccount.
kubectl run <MY-POD> --image=<MY-CONTAINER-IMAGE> --serviceaccount=pod-reader
This will mount the ServiceAccount's secret token in your pod at /var/run/secrets/kubernetes.io/serviceaccount/token. Your app can use this token to make GET requests to the Kubernetes API server to get the status of the other pod. See the example below (it assumes your pod has the curl utility installed; however, you can make the equivalent API call from your code and pass the Authorization header by reading the ServiceAccount token file mounted in your pod).
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> -H "Authorization: Bearer ${TOKEN}" -k
curl https://kubernetes.default/api/v1/watch/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD>?timeoutSeconds=30 -H "Authorization: Bearer ${TOKEN}" -k
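Tying this back to the original question, here is a minimal sketch of checking whether a container has restarted. It assumes jq is available in the image; a restartCount greater than zero means the container has been restarted at least once:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
RESTARTS=$(curl -s -k -H "Authorization: Bearer ${TOKEN}" \
  https://kubernetes.default/api/v1/namespaces/<NAMESPACE>/pods/<NAME_OF_THE_OTHER_POD> \
  | jq -r '.status.containerStatuses[0].restartCount')
if [ "${RESTARTS}" -gt 0 ]; then
  echo "Pod has restarted ${RESTARTS} time(s)"
fi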
References:
Kubernetes API
serviceaccount

Related

What is the behaviour of Secrets for Kubernetes ServiceAccounts?

kubectl explain serviceaccount.secrets describes ServiceAccount Secrets as the secrets allowed to be used by Pods running using this ServiceAccount, but what effect does adding a Secret name to this list have?
The ServiceAccount token Secret (which is automatically added to this list) gets automatically mounted as a volume into all containers in a Pod running using this ServiceAccount (as long as the ServiceAccount admission controller is enabled), but what happens for other secrets?
It holds the names of all Secrets containing tokens for that ServiceAccount, so when the controller goes to rotate things, it knows where to find them.
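You can inspect that list yourself; a quick sketch, assuming the default ServiceAccount in the current namespace:
kubectl get serviceaccount default -o jsonpath='{.secrets[*].name}'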

Why can't I create pods as a user with enough permissions in Kubernetes?

I am following a tutorial regarding RBAC. I think I understand the main idea, but I don't get why this is failing:
kc auth can-i "*" pod/compute --as deploy#test.com
no
kc create clusterrole deploy --verb="*" --resource=pods --resource-name=compute
clusterrole.rbac.authorization.k8s.io/deploy created
kc create clusterrolebinding deploy --user=deploy#test.com --clusterrole=deploy
clusterrolebinding.rbac.authorization.k8s.io/deploy created
# this tells me that deploy#test.com should be able to create a pod named compute
kc auth can-i "*" pod/compute --as deploy#test.com
yes
# but it fails when trying to do so
kc run compute --image=nginx --as deploy#test.com
Error from server (Forbidden): pods is forbidden: User "deploy#test.com" cannot create resource "pods" in API group "" in the namespace "default"
the namespace name should be irrelevant afaik, since this is a clusterrole.
Restricting the create permission to a specific resource name is not supported.
This is from the Kubernetes documentation:
Note: You cannot restrict create or deletecollection requests by resourceName. For create, this limitation is because the object name is not known at authorization time.
This means the ClusterRole you created doesn't allow you to create any Pod.
You need to have another ClusterRole assigned where you don't specify the resource name.
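For example, a sketch of a second ClusterRole that permits create without a resource name (the names pod-creator and pod-creator-binding are made up for illustration):
kubectl create clusterrole pod-creator --verb=create --resource=pods
kubectl create clusterrolebinding pod-creator-binding --clusterrole=pod-creator --user=deploy#test.com
kubectl auth can-i create pods --as deploy#test.com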

Get IP addresses of all k8s pods from within a container

I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I was running refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.
The best way to achieve this is to use the in-cluster Kubernetes config from inside the pod.
The Kubernetes Python client can help with this. Here is an example Python script that can be used to get pods and their metadata from inside the pod:
from kubernetes import client, config

def trigger_refresh_cache():
    # it works only if this script is run by K8s as a Pod
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
        # Rest of the logic goes here to trigger the endpoint
Here the method load_incluster_config() is used, which loads the in-cluster configuration via the ServiceAccount attached to the pod.
You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses of the returned JSON, you could pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
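Putting the two together, a sketch of fanning the refreshCache call out to every endpoint IP; the port (8080) and URL path are assumptions, adjust them to whatever your service actually exposes:
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
for ip in $(curl -s -k -H "Authorization: Bearer ${TOKEN}" \
    https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service \
    | jq -r '.subsets[].addresses[].ip'); do
  curl -s "http://${ip}:8080/refreshCache"
done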
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.

Can I use Role and ServiceAccounts with several namespaces?

I'm trying to connect my k8s cluster to my ceph cluster with this manual:
https://akomljen.com/using-existing-ceph-cluster-for-kubernetes-persistent-storage/
I want to deploy rbd-provision pods into kube-system namespace like this https://paste.ee/p/C1pB4
After deploying the PVC I get errors because my PVC is in the default namespace. Can I do anything about that? I read the docs, and if I understand correctly I can't use a ServiceAccount with two namespaces, or can I?
No. A ServiceAccount is a namespaced object and is limited to a particular namespace only.
Service accounts can be granted permissions in another namespace.
For example, within the namespace "acme", grant the permissions in the view ClusterRole to the service account in the namespace "acme" named "myapp":
kubectl create rolebinding myapp-view-binding \
--clusterrole=view --serviceaccount=acme:myapp \
--namespace=acme
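So for the scenario above you could keep the provisioner's ServiceAccount in kube-system and bind it to permissions in the default namespace where the PVC lives. A sketch, with the ServiceAccount and ClusterRole names (rbd-provisioner) assumed from a typical rbd-provisioner setup, adjust them to your manifest:
kubectl create rolebinding rbd-provisioner-default \
  --clusterrole=rbd-provisioner \
  --serviceaccount=kube-system:rbd-provisioner \
  --namespace=default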

Kubernetes RBAC authentication for default user

I am using kops in AWS to create my Kubernetes cluster.
I have created a cluster with RBAC enabled via --authorization=RBAC as described here.
I am trying to use the default service account token to interact with the cluster and getting this error:
Error from server (Forbidden): User "system:serviceaccount:default:default" cannot list pods in the namespace "default". (get pods)
Am I missing a role or binding somewhere?
I think it is not a good idea to give the cluster-admin role to the default service account in the default namespace.
If you give cluster-admin access to the default service account in the default namespace, every app (pod) deployed in the cluster in the default namespace will be able to manipulate the cluster (delete system pods/deployments or do other bad things).
By default, the ClusterRole cluster-admin is given to the default service account in the kube-system namespace.
You can use that one for interacting with the cluster.
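If all the default ServiceAccount actually needs is to read pods, a narrower sketch would be to bind the built-in view ClusterRole only in the default namespace (the binding name default-view is made up):
kubectl create rolebinding default-view \
  --clusterrole=view \
  --serviceaccount=default:default \
  --namespace=default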
Try giving it the cluster-admin role and try again:
kubectl create clusterrolebinding add-on-cluster-admin --clusterrole=cluster-admin --serviceaccount=default:default