Access Kubernetes cluster details such as namespaces, their pods, and pod images using only REST APIs - kubernetes

I have a Kubernetes cluster on an IBM Cloud account that I have access to. Using an API key, I am able to generate an IAM (Bearer) token for authorization against the APIs in this Swagger: https://containers.cloud.ibm.com/global/swagger-global-api/ and can get details as far as the namespace list.
But I still need to dig into each namespace, list its pods, and get each pod's image, using only REST APIs/client libraries (no kubectl or shell commands). How would I achieve this from my external Node application?
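A minimal sketch of that flow in Node/TypeScript, assuming you already have the cluster's Kubernetes API server URL (the IBM containers API returns it with the cluster details) and a token the API server accepts as a Bearer credential; K8S_SERVER and K8S_TOKEN are placeholder environment variables, and Node 18+ is assumed for the global fetch:

// List the pods in one namespace and print each container's image,
// using only the Kubernetes REST API (no kubectl).
// Assumes the API server's CA is trusted, e.g. via
// NODE_EXTRA_CA_CERTS=/path/to/ca.pem.
const apiServer = process.env.K8S_SERVER!; // placeholder: cluster API server URL
const token = process.env.K8S_TOKEN!;      // placeholder: Bearer token

async function listPodImages(namespace: string): Promise<void> {
  const res = await fetch(`${apiServer}/api/v1/namespaces/${namespace}/pods`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (!res.ok) throw new Error(`API server returned ${res.status}`);
  const podList = (await res.json()) as any;
  for (const pod of podList.items) {
    const images = pod.spec.containers.map((c: any) => c.image);
    console.log(`${pod.metadata.name}: ${images.join(", ")}`);
  }
}

listPodImages("default").catch(console.error);

The same pattern covers the namespace list itself (/api/v1/namespaces) and every other resource; these are the same paths kubectl calls under the hood.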

Related

Kubernetes - access Metrics Server

In the official Kubernetes documentation:
https://kubernetes.io/docs/tasks/run-application/horizontal-pod-autoscale-walkthrough/
We can see the following:
This example requires a running Kubernetes cluster and kubectl, version 1.2 or later. Metrics server monitoring needs to be deployed in the cluster to provide metrics through the Metrics API. Horizontal Pod Autoscaler uses this API to collect metrics. To learn how to deploy the metrics-server, see the metrics-server documentation.
To specify multiple resource metrics for a Horizontal Pod Autoscaler, you must have a Kubernetes cluster and kubectl at version 1.6 or later. To make use of custom metrics, your cluster must be able to communicate with the API server providing the custom Metrics API. Finally, to use metrics not related to any Kubernetes object you must have a Kubernetes cluster at version 1.10 or later, and you must be able to communicate with the API server that provides the external Metrics API. See the Horizontal Pod Autoscaler user guide for more details.
In order to verify I can "make use of custom metrics", I ran:
kubectl get metrics-server
And got the result: error: the server doesn't have a resource type "metrics-server"
May I ask what I can do to verify that metrics-server monitoring is deployed in the cluster, please?
Thank you
Behind the scenes, kubectl sends an API request to a particular endpoint of the Kubernetes API server. A number of resource types come predefined with kubectl, but for endpoints that kubectl does not know about, you can use the --raw flag to send the request to the API server directly.
In your case, you can check out the built-in metrics with this command.
> kubectl get --raw /apis/metrics.k8s.io
{"kind":"APIGroup","apiVersion":"v1","name":"metrics.k8s.io","versions":[{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}],"preferredVersion":{"groupVersion":"metrics.k8s.io/v1beta1","version":"v1beta1"}}
You will get the JSON response from kubectl. You can then follow the paths listed in the response to query your target resources. In my case, to get the actual metrics, I need to use this command.
> kubectl get --raw /apis/metrics.k8s.io/v1beta1/pods
This endpoint serves the built-in metrics, which are CPU and memory. If you want to use custom metrics, you will need to install Prometheus, the Prometheus adapter, and a corresponding exporter for your application. To verify the custom metrics setup, you can query the following endpoint.
> kubectl get --raw /apis/custom.metrics.k8s.io
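For a programmatic check without kubectl, the same endpoints can be queried directly over REST. A sketch in TypeScript (Node 18+), with K8S_SERVER and K8S_TOKEN as placeholder credentials; a 404 here tells you the metrics API group is simply not being served, i.e. metrics-server (or the custom-metrics adapter) is not deployed:

// Equivalent of `kubectl get --raw <path>` over plain HTTP.
const apiServer = process.env.K8S_SERVER!; // placeholder: API server URL
const token = process.env.K8S_TOKEN!;      // placeholder: Bearer token

async function getRaw(path: string): Promise<unknown> {
  const res = await fetch(`${apiServer}${path}`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  if (res.status === 404) {
    // The API group is not registered: metrics-server (or the
    // custom-metrics adapter) is not deployed in this cluster.
    throw new Error(`${path} is not served by this cluster`);
  }
  return res.json();
}

getRaw("/apis/metrics.k8s.io/v1beta1/pods")
  .then((metrics) => console.log(JSON.stringify(metrics, null, 2)))
  .catch(console.error);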

How to extract information about a PV attached to a pod from an app running in another pod?

Among a big stack of orchestrated k8s pods, I have the following two pods of interest:
An Elasticsearch pod attached to a PV
A Tomcat-based application pod that serves as an administrator for all the other pods
I want to be able to query and display very minimal/basic disk availability and usage statistics of the PV (attached to pod #1) on the app running in pod #2
Can this be achieved without having to run a web server inside my ES pod? Since ES might be heavily loaded, I would prefer not to add a web server to it.
The PV attached to the ES pod also holds the logs, so I want to avoid any log-extraction-based solution for getting this information over to pod #2.
You need to get the PV details from the Kubernetes cluster API, wherever you are running.
Accessing the Kubernetes cluster API from within a Pod
When accessing the API from within a Pod, locating and authenticating to the API server are slightly different to the external client case described above.
The easiest way to use the Kubernetes API from a Pod is to use one of the official client libraries. These libraries can automatically discover the API server and authenticate.
Using Official Client Libraries
From within a Pod, the recommended ways to connect to the Kubernetes API are:
For a Go client, use the official Go client library. The rest.InClusterConfig() function handles API host discovery and authentication automatically. See an example here.
For a Python client, use the official Python client library. The config.load_incluster_config() function handles API host discovery and authentication automatically. See an example here.
There are a number of other libraries available, please refer to the Client Libraries page.
In each case, the service account credentials of the Pod are used to communicate securely with the API server.
Reference
https://kubernetes.io/docs/tasks/administer-cluster/access-cluster-api/#accessing-the-api-from-within-a-pod
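For completeness, the same in-cluster pattern without a client library, sketched in TypeScript to match the other examples on this page (the Go/Python client calls quoted above do this discovery and authentication automatically). The PV name "my-es-pv" is a placeholder for the volume bound to the Elasticsearch pod, and the pod's service account needs RBAC permission to read persistentvolumes:

// Read a PersistentVolume from inside a pod using the mounted
// service-account credentials.
import { readFileSync } from "fs";
import https from "https";

const saDir = "/var/run/secrets/kubernetes.io/serviceaccount";
const token = readFileSync(`${saDir}/token`, "utf8");
const ca = readFileSync(`${saDir}/ca.crt`);

https.get(
  {
    host: process.env.KUBERNETES_SERVICE_HOST,
    port: process.env.KUBERNETES_SERVICE_PORT,
    path: "/api/v1/persistentvolumes/my-es-pv", // placeholder PV name
    ca, // trust the cluster CA
    headers: { Authorization: `Bearer ${token}` },
  },
  (res) => {
    let body = "";
    res.on("data", (chunk) => (body += chunk));
    res.on("end", () => {
      const pv = JSON.parse(body);
      // The PV object carries the declared capacity and phase; live
      // disk usage is not stored on the PV and would have to come
      // from the metrics/stats APIs instead.
      console.log(pv.spec.capacity, pv.status.phase);
    });
  }
);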

app in its own namespace with a service account available in any namespace

I have a very specific scenario I'm trying to solve for:
Using Kubernetes (single cluster)
Installing Vault on that cluster
sending GitLab containers to the same cluster.
I need to install Vault in such a way that:
Vault lives in its own namespace (easy/solved)
Vault's service account (vault-auth) is available to all other namespaces (unsolved)
GitLab's default behavior is to put all apps/services into their own namespaces keyed by the project ID, e.g. repo_name+project_id. It's predictable, but the two options are:
When the app is in its own namespace, it cannot access the Vault service account in the 'vault' namespace. This requires you to create a Vault service account in each application namespace; hot garbage, or...
Put ALL apps + Vault in the default namespace, where applications can easily find the 'vault-auth' service account. Messy, but it totally works.
To use GitLab the way it is intended (and I don't disagree) is to leave each app in its own namespace. The question then becomes:
How would one create the Kubernetes service account for Vault (vault-auth) so that the Vault application is in its own namespace but the service account itself is available to ALL namespaces?
Then, no matter which namespace GitLab creates, the containers have equal access to the 'vault-auth' service account.
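A ServiceAccount is always namespaced, so it cannot literally live in every namespace. One standard pattern for Vault's Kubernetes auth method (a sketch, not necessarily the only fit here) is to keep vault-auth in the vault namespace and grant it cluster-wide token-review rights with a ClusterRoleBinding, which is cluster-scoped; each app then authenticates to Vault with its own namespace-local service-account token, and vault-auth only validates those tokens. Creating the binding over the raw REST API, with K8S_SERVER and K8S_TOKEN as placeholders:

// Grant the vault-auth ServiceAccount (namespace: vault) the standard
// system:auth-delegator ClusterRole via a cluster-scoped binding.
const apiServer = process.env.K8S_SERVER!; // placeholder: API server URL
const token = process.env.K8S_TOKEN!;      // placeholder: Bearer token

const binding = {
  apiVersion: "rbac.authorization.k8s.io/v1",
  kind: "ClusterRoleBinding",
  metadata: { name: "vault-auth-delegator" },
  roleRef: {
    apiGroup: "rbac.authorization.k8s.io",
    kind: "ClusterRole",
    name: "system:auth-delegator", // lets vault-auth perform TokenReviews
  },
  subjects: [
    { kind: "ServiceAccount", name: "vault-auth", namespace: "vault" },
  ],
};

fetch(`${apiServer}/apis/rbac.authorization.k8s.io/v1/clusterrolebindings`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(binding),
}).then((res) => console.log(`create binding: ${res.status}`));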

React when a pod is created (hook)

I'd like to know if it's possible to get information from a pod as soon as it is created.
I'm spending time developing a Kubernetes controller process that reacts when a pod is created in the cluster.
When a pod has just been created, the service has to be able to get some basic information from the pod, for example its IP, annotations, and so on.
I'd like to use a Java service.
Any ideas?
You can use the Kubernetes api-server to get information regarding endpoints (services). Kubernetes exposes its API via REST, so you can use anything to communicate with it. Also, verify the results with the kubectl tool during development. For example, if you want to monitor the pods behind a service called myservice:
kubectl get endpoints myservice --watch
This will notify you of any activity on the pods related to myservice. IMO, in Java you would have to use a polling mechanism to mimic the --watch functionality.
Well, if you use a Kubernetes API client you can just watch for changes across all pods and then get their details (assuming you have been granted the necessary RBAC permissions).
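A sketch of that watch over the raw REST API, in TypeScript for brevity (the official Java client's Watch consumes the same streaming endpoint, so polling is not strictly required). K8S_SERVER and K8S_TOKEN are placeholders, and the token needs RBAC permission to list and watch pods:

// The API server holds the connection open and streams one JSON
// object per line for every pod event (ADDED / MODIFIED / DELETED),
// which is exactly what a controller reacts to. Node 18+ assumed.
const apiServer = process.env.K8S_SERVER!; // placeholder: API server URL
const token = process.env.K8S_TOKEN!;      // placeholder: Bearer token

async function watchPods(): Promise<void> {
  const res = await fetch(`${apiServer}/api/v1/pods?watch=true`, {
    headers: { Authorization: `Bearer ${token}` },
  });
  const reader = res.body!.getReader();
  const decoder = new TextDecoder();
  let buffer = "";
  for (;;) {
    const { done, value } = await reader.read();
    if (done) break;
    buffer += decoder.decode(value, { stream: true });
    let nl;
    while ((nl = buffer.indexOf("\n")) >= 0) {
      const line = buffer.slice(0, nl);
      buffer = buffer.slice(nl + 1);
      if (!line.trim()) continue;
      const event = JSON.parse(line);
      // event.type is ADDED when a pod is created; the full pod
      // object (IP, annotations, ...) is in event.object.
      console.log(event.type, event.object.metadata.name, event.object.status.podIP);
    }
  }
}

watchPods().catch(console.error);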

Superagent request within a kubernetes cluster

I have two Kubernetes controllers and services, named web and api respectively, each with pods running.
In my web pod I am using superagent to try to access an api pod via http://api:3000/api/user; this results in the error ERR_NAME_NOT_RESOLVED.
However, if I run a shell in my web pod and curl http://api:3000/api/user, everything works as it should.
Am I missing something fundamental about how superagent works? Or something else?
If your superagent request runs in a browser, the browser is not part of the Kubernetes cluster, so it neither uses the cluster DNS nor can it reach cluster IPs.
To make it work you need to expose your api service to the external world by means of a NodePort/LoadBalancer service or an Ingress.
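For example, a NodePort service created over the REST API (a sketch; the app: api selector label and the port numbers are assumptions that must match your actual api pods, and K8S_SERVER/K8S_TOKEN are placeholders):

// Expose the api pods on a fixed port of every cluster node so a
// browser outside the cluster can reach them.
const apiServer = process.env.K8S_SERVER!; // placeholder: API server URL
const token = process.env.K8S_TOKEN!;      // placeholder: Bearer token

const service = {
  apiVersion: "v1",
  kind: "Service",
  metadata: { name: "api-external" },
  spec: {
    type: "NodePort",
    selector: { app: "api" }, // assumption: your api pods carry this label
    ports: [{ port: 3000, targetPort: 3000, nodePort: 30300 }],
  },
};

fetch(`${apiServer}/api/v1/namespaces/default/services`, {
  method: "POST",
  headers: {
    Authorization: `Bearer ${token}`,
    "Content-Type": "application/json",
  },
  body: JSON.stringify(service),
}).then((res) => console.log(`create service: ${res.status}`));

The browser would then call http://<node-ip>:30300/api/user, since the http://api:3000 name only resolves inside the cluster.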