Getting Kubernetes cluster information - kubernetes

I have deployed my Kubernetes cluster using kubeadm.
Now I want to gather cluster-level information such as the master node IP, the port on which the apiserver is listening, and the name of the cluster.
kubectl cluster-info gives me some data, but I am looking to fetch cluster-level information with the help of the K8s REST API.
One way which I have tried is to look for the apiserver pod and get the data from it. It's giving me cluster-level data, but I need some other, cleaner way of doing it.
Thanks in advance!

If you have run the apiserver, you can access the Kubernetes REST API on port 8001.
One way to expose it is like this:
sudo kubectl proxy --address='0.0.0.0' --port=8001 --accept-hosts='^*$'&
Then you can visit http://YOUR_VM_IP:8001/api
There you can see the list of all APIs and all the information you want.
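The `/api` endpoint returns an `APIVersions` object whose `serverAddressByClientCIDRs` list carries the address and port the apiserver advertises. The following is an added example, not part of the original answer, that picks the entry for a given client IP (the longest matching CIDR wins). The sample response and its addresses are constructed, the default port of 443 is an assumption, and the JSON is assumed to have been fetched through `kubectl proxy` on its default loopback bind (127.0.0.1:8001).

```python
import ipaddress
import json

# Constructed sample of a GET /api response (addresses are made up).
SAMPLE_API_RESPONSE = """
{
  "kind": "APIVersions",
  "versions": ["v1"],
  "serverAddressByClientCIDRs": [
    {"clientCIDR": "0.0.0.0/0", "serverAddress": "192.0.2.10:6443"},
    {"clientCIDR": "10.0.0.0/8", "serverAddress": "10.0.0.1:6443"}
  ]
}
"""


def split_host_port(server_address, default_port=443):
    """Split 'host:port' or '[v6]:port'; use default_port (assumed) if absent."""
    if server_address.startswith("["):              # bracketed IPv6 literal
        host, _, rest = server_address[1:].partition("]")
        port = rest.lstrip(":")
    elif server_address.count(":") == 1:
        host, port = server_address.split(":")
    else:                                           # bare host or bare IPv6
        host, port = server_address, ""
    return host, int(port) if port else default_port


def apiserver_endpoint(api_response, client_ip):
    """Return (host, port) the apiserver advertises for client_ip."""
    doc = json.loads(api_response)
    if doc.get("kind") != "APIVersions":
        raise ValueError("not an APIVersions document")
    client = ipaddress.ip_address(client_ip)
    best = None
    for entry in doc.get("serverAddressByClientCIDRs", []):
        net = ipaddress.ip_network(entry["clientCIDR"], strict=False)
        # Keep the most specific (longest-prefix) CIDR containing the client.
        if client in net and (best is None or net.prefixlen > best[0].prefixlen):
            best = (net, entry["serverAddress"])
    if best is None:
        raise LookupError(f"no serverAddress advertised for {client_ip}")
    return split_host_port(best[1])


if __name__ == "__main__":
    print(apiserver_endpoint(SAMPLE_API_RESPONSE, "203.0.113.7"))
    print(apiserver_endpoint(SAMPLE_API_RESPONSE, "10.1.2.3"))
```

This covers the address and port only; the `/api` response does not carry a cluster name.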

Related

What is node/proxy subresource in kubernetes?

You can find mentions of that resource in the following Questions: 1, 2. But I am not able to figure out what is the use of this resource.
Yes, it's true, the link to the documentation provided in the comments might be confusing, so let me try to clarify this for you.
As per the official documentation the apiserver proxy:
is a bastion built into the apiserver
connects a user outside of the cluster to cluster IPs which otherwise might not be reachable
runs in the apiserver processes
client to proxy uses HTTPS (or http if apiserver so configured)
proxy to target may use HTTP or HTTPS as chosen by proxy using available information
can be used to reach a Node, Pod, or Service
does load balancing when used to reach a Service
So, answering your question: setting the node/proxy resource in a clusterRole allows k8s services to access kubelet endpoints on a specific node and path.
As per the official documentation:
There are two primary communication paths from the control plane
(apiserver) to the nodes. The first is from the apiserver to the
kubelet process which runs on each node in the cluster. The second is
from the apiserver to any node, pod, or service through the
apiserver's proxy functionality.
The connections from the apiserver to the kubelet are used for:
fetching logs for pods
attaching (through kubectl) to running pods
providing the kubelet's port-forwarding functionality
Here are also a few running examples of using the node/proxy resource in a clusterRole:
How to Setup Prometheus Monitoring On Kubernetes Cluster
Running Prometheus on Kubernetes
It is hard to find in Kubernetes official documents any information about this sub-resource.
In the context of RBAC, the format node/proxy can be used to grant access to the sub-resource named proxy for the node resource. The same access can also be granted for pods and services.
We can see it from the output of available resources from the Kubernetes API server (API Version: v1.21.0):
===/api/v1===
...
nodes/proxy
...
pods/proxy
...
services/proxy
...
Detailed information about usage of proxy sub-resource can be found in The Kubernetes API (depends on the version you use) - section Proxy Operations for every mentioned resource: pods, nodes, services.
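Because `nodes/proxy` gives its holder a path to kubelet endpoints through the apiserver, it is worth knowing which ClusterRoles grant it. The following is an added example, not code from either answer, that scans JSON shaped like `kubectl get clusterroles -o json` output for rules covering `nodes/proxy`, including the wildcard forms `*` and `*/proxy`. The role names and the sample document are constructed.

```python
import json

# Constructed sample shaped like `kubectl get clusterroles -o json` output.
# Role names are hypothetical.
SAMPLE_CLUSTERROLES = json.dumps({"kind": "List", "items": [
    {"kind": "ClusterRole", "metadata": {"name": "metrics-scraper"},
     "rules": [{"apiGroups": [""],
                "resources": ["nodes", "nodes/proxy", "services", "pods"],
                "verbs": ["get", "list", "watch"]}]},
    {"kind": "ClusterRole", "metadata": {"name": "ops-admin"},
     "rules": [{"apiGroups": ["*"], "resources": ["*"], "verbs": ["*"]}]},
    {"kind": "ClusterRole", "metadata": {"name": "node-reader"},
     "rules": [{"apiGroups": [""], "resources": ["nodes"],
                "verbs": ["get", "list"]}]},
    {"kind": "ClusterRole", "metadata": {"name": "apps-proxy"},
     "rules": [{"apiGroups": ["apps"], "resources": ["*/proxy"],
                "verbs": ["get"]}]},
]})


def rule_covers(rule, resource, subresource, api_group=""):
    """True if an RBAC rule applies to resource/subresource in api_group.

    Matching resource forms: exact 'nodes/proxy', '*', and '*/proxy'.
    """
    groups = rule.get("apiGroups", [])
    if api_group not in groups and "*" not in groups:
        return False
    wanted = {f"{resource}/{subresource}", "*", f"*/{subresource}"}
    return bool(wanted & set(rule.get("resources", [])))


def find_proxy_grants(clusterroles_json, resource="nodes"):
    """List (role, verbs, resourceNames) for rules granting <resource>/proxy."""
    findings = []
    for role in json.loads(clusterroles_json).get("items", []):
        for rule in role.get("rules") or []:    # aggregated roles may have null
            if rule_covers(rule, resource, "proxy"):
                findings.append((role["metadata"]["name"],
                                 sorted(rule.get("verbs", [])),
                                 sorted(rule.get("resourceNames", []))))
    return findings


if __name__ == "__main__":
    for name, verbs, names in find_proxy_grants(SAMPLE_CLUSTERROLES):
        print(f"{name}: verbs={verbs} resourceNames={names or 'any'}")
```

Passing `resource="pods"` or `resource="services"` checks the other two proxy sub-resources listed above.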

Can't access kubernetes service which has externalTrafficPolicy as "Local"

I'm following this guide to preserve source ip for service type nodeport.
kubectl create deployment source-ip-app --image=k8s.gcr.io/echoserver:1.4
kubectl expose deployment source-ip-app --name=clusterip --port=80 --target-port=8080
At this point my service is accessible externally with nodeip:nodeport
When I change the service traffic policy,
kubectl patch svc nodeport -p '{"spec":{"externalTrafficPolicy":"Local"}}'
my service is not accessible.
I found a similar issue, but the solution is not very helpful or understandable to me. I saw some GitHub threads which say it's something to do with the hostname override in kube-proxy; I'm not clear on that either.
I'm using kubernetes version v1.15.3. Kube proxy is running in iptables mode. I have a single master node and few worker nodes.
I'm facing the same issue in my minikube too.
Any help would be greatly appreciated.
From the docs here
If there are no local endpoints, packets sent to the node are dropped
So you need to use the correct node IP of the kubernetes node to access the service. Here correct node IP is the node's IP where the pod is scheduled.
This is not necessary if you can make sure every node (master and workers) has a replica of the pod.
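The rule quoted from the docs can be checked against a live cluster's objects before changing the policy. The following is an added example, not part of the original answer, that takes JSON shaped like `kubectl get nodes -o json` and `kubectl get endpoints nodeport -o json` output and reports which node IPs will still answer under each policy. The node names and pod IPs are constructed, and node readiness is not modelled.

```python
import json

# Constructed sample input shaped like `kubectl get nodes -o json` and
# `kubectl get endpoints nodeport -o json`; node names and IPs are made up.
SAMPLE_NODES = json.dumps({"items": [
    {"metadata": {"name": "master-1"}},
    {"metadata": {"name": "worker-1"}},
    {"metadata": {"name": "worker-2"}},
]})
SAMPLE_ENDPOINTS = json.dumps({
    "kind": "Endpoints", "metadata": {"name": "nodeport"},
    "subsets": [{
        "addresses": [{"ip": "10.244.1.5", "nodeName": "worker-1"}],
        "notReadyAddresses": [{"ip": "10.244.2.9", "nodeName": "worker-2"}],
        "ports": [{"port": 8080}],
    }],
})


def nodes_serving(nodes_json, endpoints_json, policy="Local"):
    """Map node name -> True if traffic sent to that node reaches a pod.

    Cluster: every node forwards as long as one ready endpoint exists.
    Local: only nodes hosting a ready endpoint answer; the rest drop.
    """
    if policy not in ("Local", "Cluster"):
        raise ValueError(f"unknown externalTrafficPolicy: {policy}")
    nodes = [n["metadata"]["name"] for n in json.loads(nodes_json)["items"]]
    ready_nodes, any_ready = set(), False
    for subset in json.loads(endpoints_json).get("subsets") or []:
        # notReadyAddresses are skipped on purpose: they receive no traffic.
        for addr in subset.get("addresses") or []:
            any_ready = True
            if addr.get("nodeName"):
                ready_nodes.add(addr["nodeName"])
    if policy == "Cluster":
        return {name: any_ready for name in nodes}
    return {name: name in ready_nodes for name in nodes}


if __name__ == "__main__":
    for mode in ("Cluster", "Local"):
        print(mode, nodes_serving(SAMPLE_NODES, SAMPLE_ENDPOINTS, mode))
```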

Kubernetes cluster instance

I have created a Kubernetes cluster and one of the instances in the cluster is inactive.
I want to review the configured Kubernetes Engine cluster of an inactive configuration. Which command should I use to check?
Should I use this "kubectl config get-contexts"?
or
kubectl config use-context and kubectl config view?
I am a beginner to cloud. Can anyone please explain?
The kubectl config get-contexts command will not help you debug why the instance is failing. Basically, it will just show you the list of contexts. A context is a group of cluster access parameters. Each context contains a Kubernetes cluster, a user, and a namespace. The current context is the cluster that is currently the default for kubectl. On the other hand, kubectl config view will just print your kubeconfig settings.
The best way to start is the official Kubernetes documentation. It provides good basic steps for troubleshooting your cluster. Some of the steps can be applied to GKE as well as to kubeadm or Minikube clusters.
If you're using GKE, then you can read the node logs from Stackdriver. This document is an excellent start when you want to check the logs directly in the log viewer.
If one of your instances reports NotReady after listing them with kubectl get nodes, I suggest you ssh to that instance and check the Kubernetes components (kubelet and kube-proxy). You can view the GKE nodes from the instances page.
Kube Proxy logs:
/var/log/kube-proxy.log
If you want to check the kubelet logs, they're a unit in systemd in COS that can be accessed using journalctl.
Kubelet logs:
sudo journalctl -u kubelet
For further debugging it is worth mentioning that the GKE master is a node inside a Google-managed project and it is different from your cluster project.
For the detailed master logs you will have to open a Google support ticket. Here is more information about how GKE cluster architecture works, in case there's something related to the api-server.
Let me know if that was helpful.
You can run the commands below to check the status of all the nodes of a Kubernetes cluster. Please note that if you are using the GKE managed service you will not be able to see the status of master nodes; you will only see the status of worker nodes.
kubectl get nodes -o wide
kubectl describe node nodename
You can also run the command below to check the status of control plane components.
kubectl get componentstatus
You can use the command below to get a list of all the nodes in a GKE cluster:
kubectl get nodes -o wide
Once you have the list of nodes, you can describe the node to get the events:
kubectl describe node <Node-Name>
Based on the events you can debug the node.

How to access pods without services in Kubernetes

I was wondering how pods are accessed when no service is defined for that specific pod. If it's through the environment variables, how does the cluster retrieve these?
Also, when services are defined, where on the master node is it stored?
Kind regards,
Charles
If you define a service for your app, you can access it outside the cluster using that service.
Services are of several types, including NodePort, where you can access that port on any cluster node and you will have access to the service regardless of the actual location of the pod.
You can access the endpoints or actual pod ports inside the cluster as well, but not outside.
All of the above uses the Kubernetes service discovery.
There are two types of service discovery though:
Internal Service discovery
External Service Discovery.
You cannot "access" a pod's container port(s) without a service. Services are objects that define the desired state of an ultimate set of iptables rule(s).
Also, services, like all other objects, are stored in etcd and maintained through your master(s).
You could however manually create an iptables rule forwarding traffic to the local container port that docker has exposed.
Hope this helps! If you still have any questions drop them here.
Just for debugging purposes, you can forward a port from your machine to one in the pod:
kubectl port-forward POD_NAME HOST_PORT:POD_PORT
If you have to access it from anywhere, you should use services, but you have to have a deployment created.
Create deployment
kubectl create -f https://raw.githubusercontent.com/kubernetes/website/master/content/en/examples/service/networking/run-my-nginx.yaml
Expose the deployment with a NodePort service
kubectl expose deployment my-nginx --type=NodePort --name=nginx-service
Then list the services and get the port of the service
kubectl get services | grep nginx-service
All cluster data is stored in etcd which is a distributed key-value store. If etcd goes down, cluster becomes unstable and no new pods can come up.
Kubernetes has a way to access any pod within the cluster. A Service is a logical way to access a set of pods bound by a selector. An individual pod can still be accessed irrespective of the service. Further, a service can be created to access the pods from outside the cluster (NodePort service).

Is kubectl port-forward encrypted?

I couldn't find any information on whether the connection created between a cluster's pod and localhost is encrypted when running the "kubectl port-forward" command.
It seems like it uses "socat" library which supports encryption, but I'm not sure if kubernetes actually uses it.
As far as I know, when you port-forward the port of choice to your machine, kubectl connects to one of the masters of your cluster, so yes, normally communication is encrypted. How your master communicates with the pod, though, depends on how you set up internal comms.
kubectl port-forward uses socat to make an encrypted TLS tunnel with port forwarding capabilities.
The tunnel goes from you to the kube api-server to the pod so it may actually be 2 tunnels with the kube api-server acting as a pseudo router.
An example of where I've found it useful: I was doing a quick PoC of a Jenkins Pipeline hosted on Azure Kubernetes Service, and earlier in my Kubernetes studies I didn't know how to set up an Ingress. I could reach the server via port 80 unencrypted, but I knew my traffic could be snooped on. So I just used kubectl port-forward to temporarily log in and securely debug my PoC. It is also really helpful with a RabbitMQ cluster hosted on Kubernetes: you can go into the management webpage with kubectl port-forward and make sure that it's clustering the way you wanted it to.