Kubernetes API for the cluster in AKS

I am trying to list all the workloads/deployments running on the clusters we run on AKS. I don't see an endpoint for this in the AKS REST API reference, so how do I get the deployments etc.?

The AKS API is for managing the clusters themselves.
Use the Kubernetes API if you want to access anything within a cluster, e.g. the workloads.
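For example, a minimal sketch (resource group and cluster names are placeholders): fetch credentials through the AKS side, then list deployments through the Kubernetes API.

```sh
# Get a kubeconfig for the cluster via the AKS management plane (placeholder names)
az aks get-credentials --resource-group my-rg --name my-aks-cluster

# List deployments in all namespaces via the Kubernetes API
kubectl get deployments --all-namespaces

# Or hit the Kubernetes REST endpoint directly
kubectl get --raw /apis/apps/v1/deployments
```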

Related

What is the Best Way to Scale an External (non-EKS) EC2 Auto Scaling Group from Inside a Kubernetes Cluster Based on Prometheus Metrics?

I am currently autoscaling an HPA via internal Prometheus metrics which then filters down to scale the cluster via the AWS Cluster Autoscaler. That HPA is tied to an external service run on bare EC2 instances. I would like to use the same metrics that I use to scale that HPA to also scale the ASG behind that service that is external to the Kubernetes cluster.
What is the best way to do this? It is preferable that the external EC2 cluster does not have network access to the EKS cluster.
I was thinking about just writing a small service that does it via the AWS API based on polling Prometheus intermittently but I figured that there must be a better way.
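(For illustration only, the small polling service mentioned above could be sketched roughly like this; the Prometheus address, query, scaling rule, and ASG name are all placeholders.)

```sh
# Rough sketch of the polling idea; requires curl, jq, awk and the AWS CLI.
VALUE=$(curl -s 'http://prometheus.example.com:9090/api/v1/query' \
  --data-urlencode 'query=sum(rate(my_service_requests_total[5m]))' \
  | jq -r '.data.result[0].value[1]')

# e.g. one EC2 instance per 100 req/s, minimum of one (placeholder rule)
DESIRED=$(awk -v v="$VALUE" 'BEGIN { print int(v / 100) + 1 }')

aws autoscaling set-desired-capacity \
  --auto-scaling-group-name my-external-asg \
  --desired-capacity "$DESIRED"
```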

Kubernetes, deploy from within a pod

We have an AWS EKS Kubernetes cluster with two factor authentication for all the kubectl commands.
Is there a way of deploying an app into this cluster using a pod deployed inside the cluster?
Can I deploy using helm charts or by specifying service account instead of kubeconfig file?
Can I specify a service account (use the one assigned to the pod) for all kubectl actions?
All this is meant to bypass two-factor authentication for the continuous deployment via Jenkins, by deploying jenkins agent into the cluster and using it for deployments. Thanks.
You can use a supported Kubernetes client library, kubectl, or plain curl to call the REST API exposed by the Kubernetes API server from within a pod.
You can use Helm as well, as long as you install it in the pod.
When you call the Kubernetes API from within a pod, the pod's service account is used by default. That service account needs a Role and RoleBinding associated with it to be allowed to call the Kubernetes API.
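A minimal sketch of that setup, assuming a service account named deployer in a namespace ci (both names are placeholders): bind it to a Role that allows deployment operations, then call the API with the token mounted into the pod.

```yaml
# RBAC for the hypothetical "deployer" service account in namespace "ci"
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: deployer
  namespace: ci
rules:
  - apiGroups: ["apps"]
    resources: ["deployments"]
    verbs: ["get", "list", "create", "update", "patch"]
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: deployer
  namespace: ci
subjects:
  - kind: ServiceAccount
    name: deployer
    namespace: ci
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: deployer
```

```sh
# From inside a pod running under that service account:
# the token and CA certificate are mounted automatically.
TOKEN=$(cat /var/run/secrets/kubernetes.io/serviceaccount/token)
curl --cacert /var/run/secrets/kubernetes.io/serviceaccount/ca.crt \
     -H "Authorization: Bearer $TOKEN" \
     https://kubernetes.default.svc/apis/apps/v1/namespaces/ci/deployments
```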

How to integrate Prometheus with Kubernetes where both are running on different host?

My Prometheus server is running on a separate server, and I also have a Kubernetes cluster. So I need to monitor Kubernetes pod metrics using the Prometheus instance running on that other server.
To monitor an external cluster I would take advantage of a Prometheus federation topology.
In your Kubernetes cluster, install node-exporter pods and a Prometheus instance configured with short-term storage.
Expose the Prometheus service outside of the Kubernetes cluster; this can be done either with a load balancer or a NodePort.
Configure the external Prometheus server to scrape metrics from those Kubernetes endpoints, with the correct labels and proper authentication.
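For example, a federation scrape job on the external Prometheus could look roughly like this sketch (the target address, i.e. wherever the in-cluster Prometheus is exposed, is a placeholder):

```yaml
# prometheus.yml on the external Prometheus server (sketch)
scrape_configs:
  - job_name: 'federate-k8s'
    honor_labels: true
    metrics_path: '/federate'
    params:
      'match[]':
        - '{job="node-exporter"}'   # pull the node-exporter series collected in-cluster
    static_configs:
      - targets: ['k8s-prometheus.example.com:30090']   # LB or NodePort exposing the in-cluster Prometheus
```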

Kubernetes: Is it possible to get master nodes IP/Name in GKE regional cluster

As per the Google GKE documentation, for regional clusters the masters and nodes are spread across multiple zones. Is there any way in GKE to see which zone each master node is running in?
I also tried kubectl cluster-info and it gives me the result below. Does that mean my cluster has only one master running?
Kubernetes master is running at https://xx.xx.xx.xx
GLBCDefaultBackend is running at https://xx.xx.xx.xx/api/v1/namespaces/kube-system/services/default-http-backend:http/proxy
Heapster is running at https://xx.xx.xx.xx/api/v1/namespaces/kube-system/services/heapster/proxy
KubeDNS is running at https://xx.xx.xx.xx/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Metrics-server is running at https://xx.xx.xx.xx/api/v1/namespaces/kube-system/services/https:metrics-server:/proxy
Master nodes in GKE are managed by Google, thus you will not be able to see them.
By default, a regional GKE cluster will create three master nodes spread across three zones.
Therefore, only a single static API endpoint is provided for the entire cluster.
The https://xx.xx.xx.xx address is your cluster's API endpoint.
I invite you to look at the GKE documentation for insight into how GKE operates.
In GKE you don't have a master per se.
GKE and EKS take care of the master nodes for you; that is largely the advantage of those services over just spawning a VM on AWS or GCP and running Kubernetes yourself.
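You can still inspect the cluster's single endpoint and the zones the regional control plane spans from the GKE side, e.g. with something like this sketch (cluster name and region are placeholders):

```sh
# Show the API endpoint and the zones a regional cluster uses
gcloud container clusters describe my-regional-cluster \
  --region us-central1 \
  --format="value(endpoint,locations)"
```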

Prometheus: Better Option to monitor external K8s Cluster

I have two Kubernetes clusters that do not talk to one another in any way. The idea is to maintain one Prometheus instance (in a third cluster) that can scrape endpoints from both clusters.
I created a service account in each cluster, gave it a ClusterRole and ClusterRoleBinding, and exported the service account's secret to a YAML file. I then imported that secret into the third cluster where I have Prometheus running. Using these mounted secrets, I was able to pull data from all pods in clusters 1 and 2.
Are there any better options to achieve this use case?
I am, in a way, transferring secrets from one cluster to another to reuse the same ca.crt and token.
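(For reference, the setup described above boils down to a scrape configuration on the central Prometheus roughly like this sketch; the API server address and secret mount paths are placeholders.)

```yaml
scrape_configs:
  - job_name: 'cluster-1-pods'
    scheme: https
    kubernetes_sd_configs:
      - role: pod
        api_server: https://cluster-1-apiserver.example.com:443
        tls_config:
          ca_file: /etc/prometheus/secrets/cluster-1/ca.crt      # imported from cluster 1's service account secret
        bearer_token_file: /etc/prometheus/secrets/cluster-1/token
    tls_config:
      ca_file: /etc/prometheus/secrets/cluster-1/ca.crt
    bearer_token_file: /etc/prometheus/secrets/cluster-1/token
```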
I think it is not safe to share secrets between clusters.
What about Prometheus federation? One Prometheus instance can export some data, which can then be consumed by an external Prometheus instance.
For example, a cluster scheduler running multiple services might expose resource usage information (like memory and CPU usage) about service instances running on the cluster. On the other hand, a service running on that cluster will only expose application-specific service metrics. Often, these two sets of metrics are scraped by separate Prometheus servers.
Or deploy an exporter that can be scraped by the external Prometheus, e.g. https://github.com/kubernetes/kube-state-metrics (though it does not provide CPU/memory usage of pods).
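If you go the exporter route, the external Prometheus only needs a plain scrape job against whatever address you expose kube-state-metrics on (hostname and port below are placeholders):

```yaml
scrape_configs:
  - job_name: 'cluster-1-kube-state-metrics'
    static_configs:
      - targets: ['ksm.cluster-1.example.com:8080']   # kube-state-metrics exposed via LB / NodePort / ingress
```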