Cannot get nodes using kubectl get nodes with gcloud shell - kubernetes

My GCP GKE cluster is connected to Rancher (v2.3.3), but Rancher shows it as unavailable with the message:
Failed to communicate with API server: Get https://X.x.X.x:443/api/v1/namespaces/kube-system?timeout=30s: waiting for cluster agent to connect
When I try to connect to the GKE cluster via Cloud Shell, I cannot retrieve any info with the command kubectl get nodes.
Any idea why this is happening? All workloads and services are running and green; only the Ingress resources show warnings, some of them with Unhealthy status from the backend services. But first I need to know how to troubleshoot the connectivity problem to the Kubernetes cluster with gcloud or Rancher.
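For reference, a minimal connectivity check from Cloud Shell would look something like this (the cluster name, zone, and project below are placeholders):
gcloud container clusters list
# refresh the kubeconfig entry for the cluster
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
# check whether the API server endpoint responds at all
kubectl cluster-info
# --v=6 prints the request URLs, which helps separate credential problems from network/firewall problems
kubectl get nodes --v=6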

Related

Accessing etcd metrics from a pod

I'm trying to launch a prometheus pod in order to scrape the etcd metrics from within our kubernetes cluster.
I was trying to reproduce the solution proposed here: Access etcd metrics for Prometheus
Unfortunately, the etcd containers seem to be unavailable from the cluster.
# nc -vz etcd1 2379
nc: getaddrinfo for host "etcd1" port 2379: Name or service not known
In a way, this seems logical since no etcd container appears in the cluster:
kubectl get pods -A | grep -i etcd does not return anything.
However, when I connect to the machine hosting the master nodes, I can find the containers using the docker ps command.
The cluster has been deployed using Kubespray.
Do you know if there is a way to reach the etcd containers from the cluster pods?
Duh… the etcd container is configured with the host network. Therefore, the metrics endpoint is directly accessible on the node.
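For reference, since etcd runs on the host network, the metrics endpoint has to be reached through the node's address rather than a Service name. A rough sketch (the node IP and certificate paths are placeholders and depend on how Kubespray laid out the etcd certificates):
# from a pod, test reachability against the master node's IP instead of a hostname
nc -vz 10.0.0.10 2379
# etcd normally requires client TLS, so /metrics needs the etcd client certificates
curl --cacert /path/to/etcd-ca.pem \
     --cert /path/to/etcd-client.pem \
     --key /path/to/etcd-client-key.pem \
     https://10.0.0.10:2379/metrics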

Kubectl port-forward not working with IBM Cluster

When I do kubectl port-forward with an IBM cluster I get connection refused. I have access to other clusters like Azure Kubernetes Service, and kubectl port-forward works fine there. Also, when I fetch a pod's logs using kubectl logs {pod_name} I get a TLS handshake error, but other kubectl commands like get pod and describe pod work fine.
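One way to narrow this down (the pod name and ports below are placeholders) is to rerun the failing commands with verbose client output; both kubectl logs and kubectl port-forward are proxied by the API server to the kubelet on the node, so a TLS handshake error there usually points at that hop rather than at your kubeconfig:
kubectl logs my-pod --v=8
kubectl port-forward pod/my-pod 8080:80 --v=8
# compare with a command that only talks to the API server
kubectl get pod my-pod --v=8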

Enable unsafe sysctls on a cluster managed by Amazon EKS

I'm attempting to follow instructions for resolving a data congestion issue by enabling 2 unsafe sysctls for certain pods running in a Kubernetes cluster managed by Amazon EKS. To do this, I must enable those parameters on the nodes running those pods. The following command enables them on a per-node basis:
kubelet --allowed-unsafe-sysctls \
'net.unix.max_dgram_qlen,net.core.somaxconn'
However, the nodes in the cluster I am working with are deployed by EKS. The EKS cluster was deployed using the AWS console (not a YAML config file/Terraform/etc.). I am not sure how to translate the above step so that all nodes in my cluster have those sysctls enabled.
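One possible approach, assuming the EKS-optimized Amazon Linux AMI and a node group created from a launch template (a node group created purely through the console would typically have to be recreated from a launch template first; the cluster name, image, and sysctl values below are placeholders), is to pass the flag through the bootstrap script in the launch template's user data:
#!/bin/bash
# forward the kubelet flag to every node in this node group
/etc/eks/bootstrap.sh my-cluster \
  --kubelet-extra-args '--allowed-unsafe-sysctls=net.unix.max_dgram_qlen,net.core.somaxconn'
Each pod that needs the sysctls then still has to request them explicitly in its securityContext:
kubectl apply -f - <<EOF
apiVersion: v1
kind: Pod
metadata:
  name: sysctl-demo
spec:
  securityContext:
    sysctls:
    - name: net.core.somaxconn
      value: "1024"
    - name: net.unix.max_dgram_qlen
      value: "512"
  containers:
  - name: app
    image: nginx
EOF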

How does kube-apiserver know where the kubelet service/process is running on a worker node?

I have bootstrapped a Kubernetes cluster in VirtualBox (Kubernetes the Hard Way by Kelsey Hightower) with 2 masters, 2 workers, and 1 load balancer in front of the two masters' kube-apiservers. By the way, kubelet is not running on the masters, only on the worker nodes.
Now the cluster is up and running, but I am not able to understand how the kube-apiserver on the master connects to the kubelet to fetch the node's metric data, etc.
Could you please explain this in detail?
The Kubernetes API server is not configured with the Kubelets' addresses, but the Kubelets are aware of the Kubernetes API server. The Kubelet registers the node and reports metrics to the Kubernetes API server, which persists them into the etcd key-value store. Kubelets use a kubeconfig file to communicate with the Kubernetes API server; this kubeconfig file contains the endpoint of the API server. The communication between the Kubelet and the Kubernetes API server is secured with mutual TLS.
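For illustration, in Kubernetes the Hard Way the kubelet's kubeconfig is generated roughly like this (the worker name, file names, and load balancer address are placeholders); the --server field is the API server endpoint the kubelet dials, and the client certificate is what the mutual TLS authentication uses:
kubectl config set-cluster kubernetes-the-hard-way \
  --certificate-authority=ca.pem --embed-certs=true \
  --server=https://LB_ADDRESS:6443 --kubeconfig=worker-0.kubeconfig
kubectl config set-credentials system:node:worker-0 \
  --client-certificate=worker-0.pem --client-key=worker-0-key.pem \
  --embed-certs=true --kubeconfig=worker-0.kubeconfig
kubectl config set-context default --cluster=kubernetes-the-hard-way \
  --user=system:node:worker-0 --kubeconfig=worker-0.kubeconfig
kubectl config use-context default --kubeconfig=worker-0.kubeconfig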
In Kubernetes the Hard Way, the control plane components - API Server, Scheduler, Controller Manager - run as systemd units, and that's why there is no Kubelet running on the control plane nodes; if you run kubectl get nodes you will not see the master nodes listed, since there is no Kubelet to register them.
A more standard way to deploy the control plane components - API Server, Scheduler, Controller Manager - is via the Kubelet (as static pods) rather than systemd units, and that is how kubeadm deploys the Kubernetes control plane.
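A quick way to see the difference on your own nodes (the unit names and manifest path below are the usual defaults, not guaranteed for every setup):
# Kubernetes the Hard Way: control plane components are systemd services on the masters
systemctl status kube-apiserver kube-controller-manager kube-scheduler
# kubeadm: the same components run as static pods defined on the control plane node
ls /etc/kubernetes/manifests
kubectl get pods -n kube-system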
Official documentation on Master to Cluster communication.

Kubernetes, Unable to connect to the server: EOF

Environment of kubectl: Windows 10.
Kubectl version: https://storage.googleapis.com/kubernetes-release/release/v1.15.0/bin/windows/amd64/kubectl.exe
Hello. I've just installed a Kubernetes cluster on Google Cloud Platform, then ran the following command:
gcloud container clusters get-credentials my-cluster --zone europe-west1-b --project my-project
It successfully added the credentials at %UserProfile%\.kube\config
But when I try kubectl get pods it returns Unable to connect to the server: EOF. My computer accesses the internet through a corporate proxy. How and where can I provide a cert file for kubectl so that it uses the cert with all requests? Thanks.
You would get EOF if there is no response to kubectl's API calls within a certain time (the idle timeout is 300 seconds by default).
Try increasing the cluster idle time, or you might need a VPN to access those pods (something along those lines).
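Since the question mentions a corporate proxy, it is also worth checking that kubectl's traffic is actually routed through it; kubectl honours the standard proxy environment variables. A sketch for a Windows command prompt (the proxy address is a placeholder):
set HTTPS_PROXY=http://proxy.corp.example:3128
set NO_PROXY=localhost,127.0.0.1
kubectl get pods --v=6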