Output of "kubectl cluster-info" command - kubernetes

Below is the output:
C:\Windows\system32>kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:32772
KubeDNS is running at https://127.0.0.1:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
C:\Windows\system32>
I am using Minikube to run a single-node cluster on my local box to learn Kubernetes. I Googled the Minikube docs, and what I understood is that Minikube launches a VM (in my case I used Oracle VirtualBox) on my local machine and runs a single-node Kubernetes cluster inside that VM.
In the above output, does "Kubernetes master is running at https://127.0.0.1:32772" mean that the Kubernetes master is running on my local box, or inside the VM launched by Minikube?
UPDATE 1:
I tried to see which service is running on this port and below is the output:
C:\Users>netstat -a -o -n | find "32772"
TCP 127.0.0.1:32772 0.0.0.0:0 LISTENING 8892
C:\Users>
And PID 8892 belongs to com.backend.docker.exe. I am more confused now: is Docker running my cluster? If not, why does it show com.backend.docker.exe listening on port 32772?

Both assumptions are correct.
It's running in a VM: a VM which runs on your PC and exposes the Kubernetes API so that you can access it with kubectl without needing to get into the VM.
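If you want to check this yourself, a couple of standard commands (assuming a regular minikube install) show where kubectl is pointing and what minikube started:
kubectl config view --minify   # prints the server URL kubectl talks to
minikube ip                    # prints the address of the minikube VM/container
minikube ssh                   # opens a shell inside the minikube node
If minikube was started with the Docker driver rather than VirtualBox, the node runs as a container inside Docker Desktop's VM and the API server port is published on 127.0.0.1 of your host, which would explain why netstat shows the Docker backend process listening on 32772.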

Related

How to connect to a minikube cluster created in a linux VM from windows 10 local computer?

I have created the following minikube cluster on a Linux machine. Now I want to connect to the nodes of the cluster from my local Windows 10 machine. I have kubectl installed on the local machine. How do I connect to the worker nodes of a minikube cluster from my Windows machine? I am new to Kubernetes, please let me know if any details need to be added to the question.
NAME STATUS ROLES AGE VERSION
minikube Ready control-plane,master 52d v1.22.3
minikube-m02 Ready worker 49m v1.22.3
minikube-m03 Ready worker 43m v1.22.3
Deploying an Nginx reverse proxy in front of minikube lets local machines interact with the virtual machine where minikube is installed.
You can't access minikube remotely because it is only reachable locally. For this reason, you need to deploy an Nginx reverse proxy next to minikube that receives requests from remote clients and forwards them to kube-apiserver. The Kubernetes API server is the point all your requests go to when you use the command-line tool kubectl, which lets you run commands against Kubernetes clusters.
Refer to this document for the detailed procedure of installing an Nginx reverse proxy in front of minikube.
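A minimal sketch of such a reverse proxy (assuming nginx with the stream module; 192.168.49.2:8443 is a placeholder for the API server address reported by kubectl cluster-info on the Linux machine):
stream {
    server {
        listen 6443;                   # port exposed to remote clients
        proxy_pass 192.168.49.2:8443;  # minikube kube-apiserver (placeholder address)
    }
}
Remote clients would then point kubectl at https://<linux-machine-ip>:6443 with a kubeconfig copied from the Linux machine; the API server certificate may need extra SANs for that address (e.g. via minikube start --apiserver-ips).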

Kubernetes Nginx Ingress controller Readiness Probe failed

I am trying to set up my very first Kubernetes cluster, and it seemed to set up fine until the nginx-ingress controller.
Here is my cluster information:
Nodes: three RHEL7 nodes and one RHEL8 node
Master is running on RHEL7
Kubernetes server version: 1.19.1
Networking used: flannel
coredns is running fine.
SELinux and the firewall are disabled on all nodes.
Here are all my pods running in kube-system:
I then followed instructions on following page to install nginx ingress controller: https://docs.nginx.com/nginx-ingress-controller/installation/installation-with-manifests/
Instead of a Deployment, I decided to use a DaemonSet since I am going to have only a few nodes running in my Kubernetes cluster.
After following the instructions, pod on my RHEL8 is constantly failing with the following error:
Readiness probe failed: Get "http://10.244.3.2:8081/nginx-ready": dial tcp 10.244.3.2:8081: connect: connection refused
Back-off restarting failed container
Here is the screenshot showing that the RHEL7 pods are working just fine and the RHEL8 pod is failing:
All nodes are setup exactly the same way and there is no difference.
I am very new to Kubernetes and don't know much about its internals. Can someone please point me to how I can debug and fix this issue? I am really willing to learn from issues like this.
This is how I provisioned the RHEL7 and RHEL8 nodes:
Installed docker version: 19.03.12, build 48a66213fe
Disabled firewalld
Disabled swap
Disabled SELinux
To enable iptables to see bridged traffic, set net.bridge.bridge-nf-call-ip6tables = 1 and net.bridge.bridge-nf-call-iptables = 1
Added hosts entries for all the nodes involved in the Kubernetes cluster so that they can find each other without hitting DNS
Added the IP addresses of all nodes in the Kubernetes cluster to no_proxy in /etc/environment so that traffic doesn't hit the corporate proxy
Verified the docker cgroup driver to be "systemd" and NOT "cgroupfs"
Reboot server
Install kubectl, kubeadm, kubelet as per kubernetes guide here at: https://kubernetes.io/docs/tasks/tools/install-kubectl/
Start and enable kubelet service
Initialize master by executing the following:
kubeadm init --pod-network-cidr=10.244.0.0/16 --service-cidr=10.96.0.0/12
Apply node-selector patch for mixed OS scheduling
wget https://raw.githubusercontent.com/Microsoft/SDN/master/Kubernetes/flannel/l2bridge/manifests/node-selector-patch.yml
kubectl patch ds/kube-proxy --patch "$(cat node-selector-patch.yml)" -n=kube-system
Apply flannel CNI
wget https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
Modify the net-conf.json section of kube-flannel.yml to use backend type "host-gw" (see the snippet after these steps)
kubectl apply -f kube-flannel.yml
Apply node selector patch
kubectl patch ds/kube-flannel-ds-amd64 --patch "$(cat node-selector-patch.yml)" -n=kube-system
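For reference, the host-gw change mentioned above looks roughly like this inside kube-flannel.yml; the Network value must match the --pod-network-cidr given to kubeadm init:
  net-conf.json: |
    {
      "Network": "10.244.0.0/16",
      "Backend": {
        "Type": "host-gw"
      }
    }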
Thanks
According to the Kubernetes documentation, the list of supported host operating systems is as follows:
Ubuntu 16.04+
Debian 9+
CentOS 7
Red Hat Enterprise Linux (RHEL) 7
Fedora 25+
HypriotOS v1.0.1+
Flatcar Container Linux (tested with 2512.3.0)
This article mentions that there are network issues on RHEL 8:
(2020/02/11 Update: After installation, I keep facing a pod network issue, such as a deployed pod being unable to reach the external network, or pods deployed on different workers being unable to ping each other, even though I can see all nodes (master, worker1 and worker2) are Ready via kubectl get nodes. After checking the Kubernetes.io official website, I observed that the nftables backend is not compatible with the current kubeadm packages. Please refer to the following link, "Ensure iptables tooling does not use the nftables backend".)
The simplest solution here is to reinstall the node on a supported operating system.
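If reinstalling is not an option, the kubeadm documentation referenced in the quote suggests forcing the iptables tooling to the legacy backend, roughly:
update-alternatives --set iptables /usr/sbin/iptables-legacy
update-alternatives --set ip6tables /usr/sbin/ip6tables-legacy
Note that RHEL 8 may not ship the legacy alternatives at all, which is why reinstalling on a supported OS such as RHEL 7 remains the simplest fix.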

Kubernetes: nodes joining a cluster - unable to connect to master

I've got 3 VMs which can connect.
I've started up 1 master and 2 nodes.
However, I'm not sure what IP address to use here:
sudo kubeadm join <ip address>:6443 --token <token> --discovery-token-ca-cert-hash <ca-cert-hash>
The actual IP I used to deploy the master (i.e. with kubeadm) was 192.168.56.101.
And I can telnet from the node to the master using:
telnet 192.168.56.101 6443
E.g.
telnet 192.168.56.101 6443
Trying 192.168.56.101...
Connected to 192.168.56.101.
Escape character is '^]'.
However trying kubeadm join on the node with that IP does not work. It just hangs.
Any suggestions?
Run 'hostname -i' on the master and grab the IP address; use it in the init command. The master IP address should be reachable from all the nodes.
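A sketch using the address from the question (192.168.56.101); --apiserver-advertise-address tells kubeadm which address the API server should bind to and advertise, which can matter on VirtualBox VMs that have both a NAT and a host-only adapter:
kubeadm init --apiserver-advertise-address=192.168.56.101
kubeadm join 192.168.56.101:6443 --token <token> --discovery-token-ca-cert-hash <ca-cert-hash>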
Run:
kubectl cluster-info
Kubernetes master is running at https://xxx.xxx.xx.xx:6443
KubeDNS is running at https://xxx.xxx.xx.xx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
The Kubernetes master IP is what you are looking for.
Did you deploy your CNI network (Flannel or Calico, for example)?
Run this command to see if all your master pods are running:
kubectl get pods --all-namespaces
On your node, did you install docker and kubelet?
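A quick way to check both on the node (a sketch):
systemctl status kubelet     # should be active (running)
systemctl status docker      # the container runtime should be running
journalctl -u kubelet -f     # follow kubelet logs while the join command hangs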

SSH to Kubernetes pod using Bastion

I have deployed a Google Cloud Kubernetes cluster. The cluster has an internal IP only.
In order to access it, I created a virtual machine bastion-1 which has external IP.
The structure:
My Machine -> bastion-1 -> Kubernetes cluster
Connecting to the bastion as a SOCKS proxy:
$ ssh bastion -D 1080
Now using kubectl through the proxy:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl get pods
No resources found.
The Kubernetes master server is responding, which is a good sign.
Now, trying to ssh a pod:
$ HTTPS_PROXY=socks5://127.0.0.1:1080 kubectl exec -it "my-pod" -- /bin/bash
error: error sending request: Post https://xxx.xxx.xxx.xxx/api/v1/namespaces/xxx/pods/pod-xxx/exec?command=%2Fbin%2Fbash&container=xxx&container=xxx&stdin=true&stdout=true&tty=true: EOF
Question:
How do I allow an ssh (exec) connection to a pod via the bastion? What am I doing wrong?
You can't do this right now.
The reason is that the connections used for commands like exec and proxy use SPDY2.
There's a bug report here with more information.
You'll have to switch to using an HTTP proxy.
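As an alternative workaround (not part of the answer above, just a sketch), you can tunnel the API server port through the bastion with plain SSH port forwarding, which carries the exec connection fine; xxx.xxx.xxx.xxx stands for the cluster's internal API endpoint:
ssh -L 6443:xxx.xxx.xxx.xxx:6443 bastion
kubectl --server=https://127.0.0.1:6443 exec -it my-pod -- /bin/bash
You may need --insecure-skip-tls-verify=true (or a kubeconfig entry whose certificate covers the forwarded address), since the API server certificate is typically not issued for 127.0.0.1.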

The connection to the server localhost:8080 was refused

I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
From master (node1),
$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
From worker (node 2),
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
I am not sure how to fix this issue. Any help is appreciated.
On executing "journalctl -xeu kubelet", I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network, which I am not able to do because of the above error.
Thanks!
kubectl interfaces with kube-apiserver for cluster management. The command works on the master node because that's where kube-apiserver runs. On the worker nodes, only kubelet and kube-proxy are running.
In fact, kubectl is supposed to be run on a client (e.g. a laptop or desktop) and not on the Kubernetes nodes.
From the master you need ~/.kube/config; pass this file as an argument to the kubectl command. Copy the config file to the other server or your laptop, then pass it to kubectl,
e.g.:
kubectl --kubeconfig ~/.kube/config get nodes
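A sketch of the whole flow (user and host are placeholders):
mkdir -p ~/.kube
scp <user>@<master-host>:~/.kube/config ~/.kube/config   # copy the kubeconfig from the master
kubectl get nodes                                        # kubectl now finds the API server from the copied config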
This worked for me after executing the following commands:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As a hint, the message being prompted indicates it's related to the network.
So one potential answer, which worked for my resolution, is to have a look at the cluster value for the context within contexts.
My error was that I had placed an incorrect cluster name there.
Having the appropriate cluster name is crucial, so that the cluster can be found for the respective context, and the error will disappear.
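The standard way to inspect and fix this:
kubectl config get-contexts               # lists contexts and the cluster each one points to
kubectl config view --minify              # shows the cluster entry behind the current context
kubectl config use-context <context-name> # switch to the correct context if needed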
To solve the issue The connection to the server localhost:8080 was refused - did you specify the right host or port?, you may be missing a step.
My Fix:
On macOS, if you install K8s with brew, you still need to brew install minikube; afterwards, run minikube start. This will start your cluster.
Run the command kubectl cluster-info and you should get a happy path response similar to:
Kubernetes control plane is running at https://127.0.0.1:63000
KubeDNS is running at https://127.0.0.1:63308/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes install steps: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Minikube docs: https://minikube.sigs.k8s.io/docs/start/
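In short, on macOS with Homebrew:
brew install minikube     # installs minikube (kubectl can be installed the same way)
minikube start            # creates and starts the local cluster
kubectl cluster-info      # should now report the control plane address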
Check which context is selected if you're running Kubernetes in Docker Desktop.
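With Docker Desktop's built-in cluster the context is normally named docker-desktop, so for example:
kubectl config get-contexts
kubectl config use-context docker-desktop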
Once you've selected the right one, you'll be able to run kubectl commands without any exception:
% kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 2d11h v1.22.5