Can a pod within my cluster get a list of pods in a service?

Suppose I have a Service within my cluster called workers. From kubectl, I can list the pods in that service using the selector the service uses:
kubectl get pods --selector app=worker
Can a pod within the cluster get a list of that service's pods?

You can achieve this quite easily with the Kubernetes Python client library. The following code can be run locally, as it uses your .kube/config file for authentication:
from kubernetes import client, config

# Authenticate using your local kubeconfig (~/.kube/config).
config.load_kube_config()
v1 = client.CoreV1Api()
# List pods across all namespaces that match the service's selector.
list_pods = v1.list_pod_for_all_namespaces(label_selector="app=worker")
for item in list_pods.items:
    print(item.metadata.name)
Before that, you will need to install the library, which can be done with pip:
pip3 install kubernetes
For production use I would recommend building the script into your Python Docker image. Also, instead of using a .kube/config file, you will need to create a ServiceAccount for your client Pod so it can authenticate to the Kubernetes API. This is nicely described in this answer, provided by krelst.
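For completeness, here is a minimal sketch of the in-cluster variant, assuming the script runs inside a Pod whose ServiceAccount is allowed to list pods and that the worker pods live in the default namespace:

from kubernetes import client, config

# Inside a Pod: authenticate with the mounted ServiceAccount token
# instead of a local kubeconfig file.
config.load_incluster_config()
v1 = client.CoreV1Api()
# Assumption: the workers Service selects pods labeled app=worker
# in the default namespace.
pods = v1.list_namespaced_pod("default", label_selector="app=worker")
for pod in pods.items:
    print(pod.metadata.name)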

Related

Kubernetes, deploy from within a pod

We have an AWS EKS Kubernetes cluster with two-factor authentication for all kubectl commands.
Is there a way of deploying an app into this cluster using a pod deployed inside the cluster?
Can I deploy using helm charts or by specifying service account instead of kubeconfig file?
Can I specify a service account (e.g. the one assigned to the pod) for all kubectl actions?
All this is meant to bypass two-factor authentication for continuous deployment via Jenkins, by deploying a Jenkins agent into the cluster and using it for deployments. Thanks.
You can use a supported Kubernetes client library, kubectl, or curl directly to call the REST API exposed by the Kubernetes API server from within a pod.
You can use helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. The service account mounted in the pod needs a Role and RoleBinding associated with it to be able to call the Kubernetes API.
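As an illustration, here is a sketch of such an in-cluster deployment with the Python client; the name, image, and namespace are placeholders, and the pod's service account needs RBAC permissions on deployments in the apps API group:

from kubernetes import client, config

# Inside the cluster: authenticate with the pod's mounted service account.
config.load_incluster_config()
apps = client.AppsV1Api()

# Placeholder spec: name, image, and namespace are assumptions.
container = client.V1Container(name="my-app", image="registry.example.com/my-app:1.0")
template = client.V1PodTemplateSpec(
    metadata=client.V1ObjectMeta(labels={"app": "my-app"}),
    spec=client.V1PodSpec(containers=[container]),
)
deployment = client.V1Deployment(
    api_version="apps/v1",
    kind="Deployment",
    metadata=client.V1ObjectMeta(name="my-app"),
    spec=client.V1DeploymentSpec(
        replicas=2,
        selector=client.V1LabelSelector(match_labels={"app": "my-app"}),
        template=template,
    ),
)
apps.create_namespaced_deployment(namespace="default", body=deployment)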

Get IP addresses of all k8s pods from within a container

I have a service called my-service with an endpoint called refreshCache. my-service is hosted on multiple servers, and occasionally I want an event in my-service on one of the servers to trigger refreshCache on my-service on all servers. To do this I manually maintain a list of all the servers that host my-service, pull that list, and send a REST request to <server>/.../refreshCache for each server.
I'm now migrating my service to k8s. Similarly to before, where I was running refreshCache on all servers that hosted my-service, I now want to be able to run refreshCache on all the pods that host my-service. Unfortunately I cannot manually maintain a list of pod IPs, as my understanding is that IPs are ephemeral in k8s, so I need to be able to dynamically get the IPs of all pods in a node, from within a container in one of those pods. Is this possible?
Note: I'm aware this information is available with kubectl get endpoints ..., however kubectl will not be available within my container.
The best way to achieve this is to use the in-cluster K8s config from inside the pod.
The K8s client library helps with this. Here is an example Python script that gets pods and their metadata from inside a pod:
from kubernetes import client, config

def trigger_refresh_cache():
    # This works only when the script is run by K8s as a Pod.
    config.load_incluster_config()
    v1 = client.CoreV1Api()
    print("Listing pods with their IPs:")
    ret = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
    for i in ret.items:
        print("%s\t%s\t%s" %
              (i.status.pod_ip, i.metadata.namespace, i.metadata.name))
    # Rest of the logic goes here to trigger the endpoint
Here the method load_incluster_config() is used, which configures the client from inside the pod via the service account attached to that pod.
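To complete the picture, the trigger step might look like the following sketch; the port, the /refreshCache path, and the use of the requests library are assumptions, so adapt them to your service:

import requests
from kubernetes import client, config

config.load_incluster_config()
v1 = client.CoreV1Api()
pods = v1.list_pod_for_all_namespaces(label_selector='app=my-service')
for pod in pods.items:
    # Assumption: my-service listens on port 8080 and exposes /refreshCache.
    url = "http://%s:8080/refreshCache" % pod.status.pod_ip
    try:
        requests.post(url, timeout=5)
    except requests.RequestException as e:
        print("Failed to refresh %s: %s" % (pod.metadata.name, e))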
You don't need kubectl to access the Kubernetes API. You can do it with any tool that can make HTTP requests.
The Kubernetes API is a simple HTTP REST API, and all the authentication information that you need is present in the container if it runs as a Pod in the cluster.
To get the Endpoints object named my-service from within a container in the cluster, you can do:
curl -k -H "Authorization: Bearer $(cat /var/run/secrets/kubernetes.io/serviceaccount/token)" \
https://kubernetes.default.svc:443/api/v1/namespaces/{namespace}/endpoints/my-service
Note: replace {namespace} with the namespace of the my-service Endpoints resource.
And to extract the IP addresses from the returned JSON, you can pipe the output to a tool like jq:
... | jq -r '.subsets[].addresses[].ip'
Note that the Pod from which you are executing this needs read permissions for the Endpoints resource, otherwise the API request is denied.
You can do this with a ClusterRole, ClusterRoleBinding, and Service Account (you need to set this up only once):
kubectl create sa endpoint-reader
kubectl create clusterrole endpoint-reader --verb=get,list --resource=endpoints
kubectl create clusterrolebinding endpoint-reader --serviceaccount=default:endpoint-reader --clusterrole=endpoint-reader
Then, use the endpoint-reader ServiceAccount for the Pod from which you want to execute the above curl command by specifying it in the pod.spec.serviceAccountName field.
Granting permissions for any other API operations (i.e. combinations of verbs and resources) works in the same way.
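If you prefer Python over shelling out to curl, here is an equivalent sketch using the requests library (an assumption, not part of the original answer); it reads the same mounted service account credentials the curl command uses:

import requests

SA_DIR = "/var/run/secrets/kubernetes.io/serviceaccount"
with open(SA_DIR + "/token") as f:
    token = f.read()
with open(SA_DIR + "/namespace") as f:
    namespace = f.read()

# Query the Endpoints object for my-service in the pod's own namespace.
resp = requests.get(
    "https://kubernetes.default.svc/api/v1/namespaces/%s/endpoints/my-service" % namespace,
    headers={"Authorization": "Bearer " + token},
    verify=SA_DIR + "/ca.crt",  # validate the API server cert instead of curl's -k
)
resp.raise_for_status()
ips = [a["ip"] for s in resp.json().get("subsets", []) for a in s.get("addresses", [])]
print(ips)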

Deploy to IBM Cloud Kubernetes from deployment.yml

I have developed my microservice ecosystem and managed to deploy and run it locally using Docker containers and minikube. For each service I have specified two files: deployment.yml (pod specification) and service.yml (service specification). When I deploy a service to the minikube cluster, I simply run:
kubectl create -f deployment.yml
and after that
kubectl create -f service.yml
Now I want to deploy the microservice ecosystem to IBM Cloud Services. I spent some time researching the deployment procedures and did not find any that use deployment.yml and service.yml files when deploying services.
My question is, can I just somehow deploy my services using existing deployment.yml and service.yml files?
Thank you for the answers.
As long as it's Kubernetes under the hood and the Kubernetes API is accessible (kubectl works), you can do exactly the same. Whether that is sustainable in the long term depends on your case, but it likely is not, and I would suggest looking into tools like Helm.
So I was confused about the deployment steps.
I just needed to go to IBM Cloud Service dashboard, find my cluster, click on cluster link and follow the steps in the Access section on the page.
After finishing the steps described in that section, we can deploy our services just as we did with minikube and kubectl locally.

Deploying Images from gitlab in a new namespace in Kubernetes

I have integrated GitLab with a Kubernetes cluster hosted on AWS. Currently it builds the code from GitLab into the default namespace. I have created two namespaces in Kubernetes, one for production and one for development. What are the steps if I want the build to be deployed to the dev or production namespace? Do I need to make changes at the GitLab level or at the Kubernetes level?
This is done at the kubernetes level. Whether you're using helm or kubectl, you can specify the desired namespace in the command.
As in:
kubectl create -f deployment.yaml --namespace <desired-namespace>
helm install stable/gitlab-ce --namespace <desired-namespace>
Alternatively, you can just change your current namespace to the desired namespace and install as you did before. By default, helm charts and Kubernetes yaml files install into your current namespace unless specified otherwise.

Kubernetes helm - Running helm install in a running pod

I want to spin up a single installer pod with helm install that once running, will apply some logic and install other applications into my cluster using helm install.
I'm aware of helm dependencies, but I want to run some business logic along with the installations, and I'd rather do it in the installer pod than on the host that triggers the whole installation process.
I found suggestions on using the Kubernetes REST API when inside a pod, but helm requires kubectl installed and configured.
Any ideas?
It seems this was a lot easier than I thought...
On a simple pod running Debian, I just installed kubectl, and with the default service account's secret that is already mounted, kubectl was already configured to talk to the cluster's API.
Note that the configured default namespace is the one that my installer pod is deployed to.
Verified with
$ kubectl cluster-info
$ kubectl get ns
I then installed helm, which used the same kubectl configuration to access the cluster and install Tiller.
Verified with
$ helm version
$ helm init
I installed a test chart
$ helm install --name my-release stable/wordpress
It works!!
I hope this helps
You could add kubectl to your installer pod.
"In cluster" credentials could be provided via service account in "default-token" secret: https://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/