Kubernetes, deploy from within a pod

We have an AWS EKS Kubernetes cluster with two-factor authentication for all kubectl commands.
Is there a way of deploying an app into this cluster using a pod deployed inside the cluster?
Can I deploy using Helm charts, or by specifying a service account instead of a kubeconfig file?
Can I specify a service account (e.g. the one assigned to the pod) for all kubectl actions?
All this is meant to avoid two-factor authentication for continuous deployment via Jenkins, by deploying a Jenkins agent into the cluster and using it for deployments. Thanks.

You can use a supported Kubernetes client library, kubectl, or plain curl to call the REST API exposed by the Kubernetes API server from within a pod.
You can use Helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. The service account mounted in the pod needs a Role and RoleBinding associated with it to be able to call the Kubernetes API.
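As a sketch of what that looks like without any client library: every pod gets its service account token mounted at a well-known path, and the API server is reachable at the in-cluster kubernetes.default.svc address. A minimal example using only the Python standard library (the paths and hostname follow standard Kubernetes conventions; the commented-out call at the end only works inside a pod whose service account has the needed RBAC):

```python
import json
import ssl
import urllib.request
from pathlib import Path

# Standard mount point for the pod's service account credentials.
SA_DIR = Path("/var/run/secrets/kubernetes.io/serviceaccount")

def build_api_request(path, sa_dir=SA_DIR, host="kubernetes.default.svc"):
    """Build an authenticated request for the cluster API server,
    using the bearer token mounted into the pod."""
    token = (sa_dir / "token").read_text().strip()
    return urllib.request.Request(
        f"https://{host}{path}",
        headers={"Authorization": f"Bearer {token}"},
    )

# Example (in-cluster only): list pods in the pod's own namespace.
# Requires a Role/RoleBinding granting "list" on "pods" to the service account.
# namespace = (SA_DIR / "namespace").read_text()
# req = build_api_request(f"/api/v1/namespaces/{namespace}/pods")
# ctx = ssl.create_default_context(cafile=str(SA_DIR / "ca.crt"))
# pods = json.load(urllib.request.urlopen(req, context=ctx))
```

The same token is what kubectl or Helm running inside the pod will pick up automatically, so in practice you rarely need to build the request by hand.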

Related

ArgoCD deployment to eks and aks

Is there any way that Argo CD can deploy to AKS and EKS clusters simultaneously? I don't see any setting in Argo CD to connect to another cluster. My aim is for Argo CD to deploy to both AKS and EKS. As of now, since Argo CD is deployed to EKS, it picks that cluster up by default, but I want to connect Argo CD to AKS as well. If there is a way, please tell me.
Yes, you can deploy to multiple or external clusters using Argo CD.
Please check this out: https://blog.doit-intl.com/automating-kubernetes-multi-cluster-config-with-argo-cd-5ac5e371ef01
If your Argo CD is running locally on the same host, you can list the existing clusters with:
kubectl config get-contexts
Using the respective context name, you can then add that cluster to Argo CD via the argocd CLI:
argocd cluster add RESPECTIVE-CONTEXT-NAME
https://argoproj.github.io/argo-cd/user-guide/commands/argocd_cluster_add/
Read more at: https://itnext.io/argocd-setup-external-clusters-by-name-d3d58a53acb0

Kubernetes API for the cluster in AKS

I am trying to list all the workloads/deployments running on our AKS clusters. I don't see an endpoint for this in the AKS REST API reference; how do I get the deployments etc.?
The AKS API is for managing clusters.
See the Kubernetes API if you want to access anything within a cluster, e.g. the workloads.

Can a pod within my cluster get a list of pods in a service?

Suppose I have a Service within my cluster called workers. From kubectl, I can list the pods in that service using the selector the service uses:
kubectl get pods --selector app=worker
Can a pod within the cluster get a list of that service's pods?
You can achieve it quite easily with the Kubernetes Python client library. The following code can be run locally, as it uses your .kube/config file for authentication:
from kubernetes import client, config

config.load_kube_config()
v1 = client.CoreV1Api()
list_pods = v1.list_pod_for_all_namespaces(label_selector="app=worker")
for item in list_pods.items:
    print(item.metadata.name)
Before that, you will need to install the mentioned library, which can be done with pip:
pip3 install kubernetes
For production use I would recommend integrating it into your Python Docker image, and of course instead of using the .kube/config file you will need to create a ServiceAccount for your client Pod so it can authenticate to the Kubernetes API. This is nicely described in this answer, provided by krelst.
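As background on what the label_selector above does: an equality-based selector such as app=worker simply matches pods whose labels contain those key/value pairs, which is exactly how a Service finds its pods. A minimal sketch of that matching logic in plain Python (no cluster needed; the function name is illustrative):

```python
def matches_selector(labels, selector):
    """Return True if a pod's labels satisfy an equality-based
    selector string such as "app=worker,tier=backend"."""
    for clause in selector.split(","):
        key, _, value = clause.strip().partition("=")
        if labels.get(key) != value:
            return False
    return True

# A Service with selector "app=worker" would match the first pod, not the second:
print(matches_selector({"app": "worker", "pod-template-hash": "abc"}, "app=worker"))  # True
print(matches_selector({"app": "web"}, "app=worker"))  # False
```

Real selectors also support set-based expressions (in, notin, exists), which this sketch omits.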

How to talk to Kubernetes CRD service within a pod in the same k8s cluster?

I installed the Spark on K8s operator in my K8s cluster, and I have an app running within the same cluster. I'd like to enable this app to talk to the SparkApplication CRD service. What endpoint should I use (i.e., what is the service's endpoint within the K8s cluster)?
It's documented here: basically, the operator creates a NodePort-type Service, and it can also create an Ingress to access the UI. For example:
...
status:
  sparkApplicationId: spark-5f4ba921c85ff3f1cb04bef324f9154c9
  applicationState:
    state: COMPLETED
  completionTime: 2018-02-20T23:33:55Z
  driverInfo:
    podName: spark-pi-83ba921c85ff3f1cb04bef324f9154c9-driver
    webUIAddress: 35.192.234.248:31064
    webUIPort: 31064
    webUIServiceName: spark-pi-2402118027-ui-svc
    webUIIngressName: spark-pi-ui-ingress
    webUIIngressAddress: spark-pi.ingress.cluster.com
In this case, you could use 35.192.234.248:31064 to access your UI. Internally within the K8s cluster, you could use spark-pi-2402118027-ui-svc.<namespace>.svc.cluster.local or simply spark-pi-2402118027-ui-svc if you are within the same namespace.
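The internal name in the answer above follows the standard Kubernetes Service DNS convention, &lt;service&gt;.&lt;namespace&gt;.svc.cluster.local, which can be sketched as a tiny helper (the namespace and http scheme here are illustrative assumptions):

```python
def in_cluster_url(service, namespace, port=None):
    """Build the cluster-internal DNS URL for a Service, following the
    <service>.<namespace>.svc.cluster.local naming convention."""
    host = f"{service}.{namespace}.svc.cluster.local"
    return f"http://{host}:{port}" if port else f"http://{host}"

# Assuming the UI service lives in the "default" namespace:
print(in_cluster_url("spark-pi-2402118027-ui-svc", "default"))
```

Note that the default cluster domain cluster.local can be changed at cluster setup time, so the short form &lt;service&gt;.&lt;namespace&gt; is the more portable choice.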

What's the purpose of Kubernetes ServiceAccount

I've read the documentation and I've seen examples, but I don't know why I would add a serviceAccount to my pods.
The 'elasticsearch' example from Kubernetes (https://github.com/kubernetes/kubernetes/tree/master/examples/elasticsearch) has a service account 'elasticsearch'; what does it grant?
Thank you.
Service accounts inject authentication credentials into the pod so it can talk to the Kubernetes service (i.e. the apiserver).
This is important if you are building an application that needs to inspect the pods/services/controllers running in the cluster in order to behave correctly. For example, the kube2sky container watches services and endpoints to provide DNS within the cluster by connecting to the Kubernetes service.
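If no service account is set, a pod gets its namespace's default account; an explicit one is chosen with serviceAccountName in the pod spec, and that account's credentials are then mounted into the pod. A minimal sketch (the pod name and image here are illustrative, not from the elasticsearch example):

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: es-client                    # illustrative name
spec:
  serviceAccountName: elasticsearch  # this account's token is mounted into the pod
  containers:
  - name: app
    image: example/app:latest        # illustrative image
```

What the account actually grants is determined separately, by the Roles/ClusterRoles bound to it.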