kubernetes api-server understanding

Hi, I'm a newbie to k8s and I was wondering where and how kubectl sends requests to the kube-apiserver.
So for example, if I run "kubectl get pods --all-namespaces" (and my default Kubernetes endpoint is set to "192.168.64.2:8443"), my understanding is that this would translate to an HTTPS request such as "https://192.168.64.2:8443/api/v1/pods...etc" and kubectl would use the credentials stored in the .kube/config file. Am I right?
And I also have a metrics-server up and running on endpoint "172.17.0.8:4443", but how does kubectl know to use this IP when I run "kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes/<NODE_NAME> | jq"? Are all kubectl commands directed to one IP?
Thanks in advance.

Authentication is one of the steps that kubectl performs. You can see exactly what happens when a command runs by increasing the verbosity, for example:
kubectl get pods -v9 --all-namespaces
Kubernetes knows the resource definitions and which component implements them; you can check the available resource types with:
kubectl api-resources
So the Kubernetes api-server knows which resources belong to the metrics-server and how to call it.
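As a minimal sketch of that, you can ask kubectl which resources are registered under the metrics API group (assuming metrics-server is installed under its usual group name):

# list only the resources served by the metrics.k8s.io API group
kubectl api-resources --api-group=metrics.k8s.io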

All kubectl requests go to the api-server. The api-server can either answer by itself or delegate to other components, for example an extension api-server (this is how the metrics.k8s.io API is served).
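To see that routing in practice, here is a minimal sketch; both raw requests below go to the same api-server address from your kubeconfig, even though the second is ultimately answered by the extension api-server:

# print the api-server address kubectl will use (read from your kubeconfig)
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
# core API request, answered by the api-server itself
kubectl get --raw /api/v1/pods
# aggregated API request, proxied by the api-server to the extension api-server
kubectl get --raw /apis/metrics.k8s.io/v1beta1/nodes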

Related

How can I use Kubernetes Python API to check health endpoints?

I'm seeking an answer regarding how to use the Kubernetes Python API to check health.
I am using:
kubectl get --raw='/readyz?verbose'
kubectl get --raw='/readyz?verbose' is probing the /readyz endpoint of the Kubernetes apiserver.
If you want to check the current running and ready state of a Pod and its containers, you can look at the Pod's status.conditions and status.containerStatuses fields.
If you want to probe the health endpoints of a container directly, you can use kubectl proxy or kubectl port-forward to open a proxy connection into the Pod's network namespace, then probe the health endpoint through that proxy.
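A minimal sketch of both approaches using kubectl; the pod name, namespace, and port are placeholders you would replace with your own:

# read the Ready condition straight from the Pod's status
kubectl get pod my-app -n default -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
# forward a local port into the Pod, then probe its health endpoint directly
kubectl port-forward pod/my-app -n default 8080:8080 &
curl -s http://127.0.0.1:8080/healthz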

Using a kubectl command to get cluster status

I need to create a shell script which examines the cluster status.
I saw that kubectl describe nodes provides lots of data.
I can output it to JSON and then parse it, but maybe that's overkill.
Is there a simple way with a kubectl command to get the status of the cluster? Just whether it's up or down.
The least expensive way to check if you can reach the API server is kubectl version. In addition, kubectl cluster-info gives you some more info.
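As a quick sketch, the two checks together:

# cheapest reachability check: fails fast if the api-server is unreachable
kubectl version
# prints the control-plane endpoint and core service addresses
kubectl cluster-info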
In addition to Michael's answer, that would only tell you about the API server or master and internal services like KubeDNS etc., but not the nodes.
It depends on your need and your definition of "status" here. You could run kubectl cluster-info followed by kubectl get nodes and check the STATUS column for all nodes, using parsing tools like awk, jq, or kubectl's own -o jsonpath option, to verify that all nodes are Ready; see the sketch below.
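A minimal sketch of the jsonpath variant; it prints each node name with its Ready condition, so a script can flag anything that is not "True":

# one line per node: <name> <Ready condition status>
kubectl get nodes -o jsonpath='{range .items[*]}{.metadata.name}{"\t"}{.status.conditions[?(@.type=="Ready")].status}{"\n"}{end}'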
The command below displays the health of the scheduler, controller-manager, and etcd:
kubectl get cs
The command below lists Kubernetes core components such as etcd, the controller-manager, the scheduler, kube-proxy, CoreDNS, and the network plugin. All of those pods should be Running for Kubernetes to be healthy.
kubectl get pod -n kube-system
Finally, deploy one front-end and one back-end Pod and verify inter-pod communication to ensure that the cluster is up and working correctly, as sketched below.
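A hedged smoke test; the image names (nginx, busybox) and pod names are assumptions, not part of the original answer:

# back-end pod plus an in-cluster service in front of it
kubectl run backend --image=nginx --port=80
kubectl expose pod backend --port=80
# front-end pod that fetches the back-end through cluster DNS
kubectl run frontend --image=busybox --restart=Never -- sh -c 'wget -qO- http://backend'
# the fetched page appears in the front-end pod's logs if networking works
kubectl logs frontend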
Below are the commands to get cluster status, based on what you need:
To see where your Kubernetes master, CoreDNS, and kubernetes-dashboard are running, use
kubectl cluster-info
To get detailed information for further debugging and diagnosing cluster problems, use kubectl cluster-info dump
To get only the health status of the control-plane components, use kubectl get componentstatus or kubectl get cs
To show detailed information about a resource, use kubectl describe node <node>

Kubernetes: Follow kubectl proxy logs

Is it possible to see what traffic is going through kubectl proxy? For example, the HTTP requests and responses.
Is it possible to follow that log (kind of -f)?
kubectl --v=10 proxy follows the log.
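A minimal usage sketch; the port is an assumption (8001 is kubectl proxy's default):

# run the proxy at maximum verbosity; traffic details are written to stderr
kubectl --v=10 proxy --port=8001
# in another shell, send a request through the proxy and watch it show up in the log
curl -s http://127.0.0.1:8001/api/v1/namespaces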

Getting Kubernetes cluster information

I have deployed my Kubernetes cluster using kubeadm.
Now I want to gather cluster-level information like the master node IP, the port the apiserver is listening on, and the name of the cluster.
kubectl cluster-info gives me some data, but I am looking to fetch cluster-level information with the help of the K8s REST API.
One way I have tried is to look at the apiserver pod and get the data from there. It gives me cluster-level data, but I need some cleaner way of doing it.
Thanks in advance!
If the apiserver is running, you can access the Kubernetes REST API on port 8001 through kubectl proxy.
One way to expose it is like this:
sudo kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$'&
Then you can visit http://YOUR_VM_IP:8001/api
There you can see the list of APIs and all the information you want.
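A minimal sketch of pulling the same information programmatically; the proxy address assumes the command above is running, and the jsonpath expressions only read what is already in your kubeconfig:

# list the API versions the apiserver serves
curl -s http://127.0.0.1:8001/api
# read the cluster name and apiserver endpoint from the kubeconfig
kubectl config view -o jsonpath='{.clusters[0].name}{"\t"}{.clusters[0].cluster.server}{"\n"}'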

I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system

I have deployed a Kubernetes cluster. The issue I have is that the dashboard is not accessible from an external desktop system.
Following is my setup:
Two VMs with the cluster deployed, one master and one node.
The dashboard is running without any issue and kube-dns is also working as expected.
The Kubernetes version is 1.7.
Issue: when trying to access the dashboard externally through kubectl proxy, I get an unauthorized response.
This is with RBAC roles and rolebindings enabled.
How do I configure the cluster for HTTP browser access to the dashboard from an external system?
Any hints/suggestions are most welcome.
kubectl proxy is not working on versions > 1.7.
Try this:
Copy the ~/.kube/config file to your desktop, then run kubectl like this:
export POD_NAME=$(kubectl --kubeconfig=config get pods -n kube-system -l "app=kubernetes-dashboard,release=kubernetes-dashboard" -o jsonpath="{.items[0].metadata.name}")
echo http://127.0.0.1:9090/
kubectl --kubeconfig=config -n kube-system port-forward $POD_NAME 9090:9090
Then access the UI like this: http://127.0.0.1:9090
Hope this helps.
If kubectl proxy gives the Unauthorized error, there can be 2 reasons:
Your user cert doesn't have the appropriate permissions. This is unlikely, since you successfully deployed kube-dns and the dashboard.
kubelet authn/authz is enabled and it's not set up correctly. See the answer to my question.
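To narrow it down to the first reason, a hedged sketch using kubectl's built-in authorization query; the exact resources to test are assumptions based on a default dashboard install served behind the apiserver's service proxy:

# can the current credentials read pods in kube-system?
kubectl auth can-i list pods -n kube-system
# can they reach services through the apiserver's proxy subresource?
kubectl auth can-i get services/proxy -n kube-system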