Kubernetes: nodes joining a cluster - unable to connect to master

I've got 3 VMs which can connect.
I've started up 1 master and 2 nodes.
However, I'm not sure what IP address to use here:
sudo kubeadm join <ip address>:6443 --token <token> --discovery-token-ca-cert-hash <ca-cert-hash>
The actual IP I used to deploy the master (i.e. with kubeadm) was 192.168.56.101.
And I can telnet from the node to the master using:
telnet 192.168.56.101 6443
E.g.
telnet 192.168.56.101 6443
Trying 192.168.56.101...
Connected to 192.168.56.101.
Escape character is '^]'.
However, running kubeadm join on the node with that IP does not work; it just hangs.
Any suggestions?

Run 'hostname -i' on the master and grab the IP address, then use it in the init command. The master IP address should be reachable from all the nodes.
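For example, a minimal sketch, assuming 192.168.56.101 (the address from the question) is the interface the nodes can reach:
# On the master: advertise the API server on the node-reachable address
sudo kubeadm init --apiserver-advertise-address=192.168.56.101
The join command printed at the end of kubeadm init will then contain that address.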

run
kubectl cluster-info
Kubernetes master is running at https://xxx.xxx.xx.xx:6443
KubeDNS is running at https://xxx.xxx.xx.xx:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
The Kubernetes master IP is what you are looking for.
Did you deploy your CNI network? Flannel or Calico, for example?
Run this command to see if all your master pods are running:
kubectl get pods --all-namespaces
Did you install Docker and kubelet on your node?
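If the join still hangs, a quick debugging sketch on the worker (standard systemd and kubeadm flags; the token values are placeholders as in the question):
systemctl status kubelet    # is kubelet installed and running?
systemctl status docker     # is the container runtime up?
sudo kubeadm join 192.168.56.101:6443 --token <token> \
    --discovery-token-ca-cert-hash <ca-cert-hash> --v=5   # --v=5 shows where the join gets stuck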

Output of "kubectl cluster-info" command

Below is the output:
C:\Windows\system32>kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:32772
KubeDNS is running at https://127.0.0.1:32772/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
C:\Windows\system32>
I am using Minikube to run a single-node cluster on my local machine to learn Kubernetes. I googled the Minikube docs, and what I understood is that Minikube launches a VM (in my case in Oracle VirtualBox) on my local machine and runs a single-node Kubernetes cluster inside that VM.
In the above output, does "Kubernetes master is running at https://127.0.0.1:32772" mean the Kubernetes master is running on my local machine, or inside the VM launched by Minikube?
UPDATE 1:
I tried to see which service is running on this port and below is the output:
C:\Users>netstat -a -o -n | find "32772"
TCP 127.0.0.1:32772 0.0.0.0:0 LISTENING 8892
C:\Users>
And PID 8892 is com.backend.docker.exe. I am more confused now: is Docker running my cluster? If not, why does it show com.backend.docker.exe listening on port 32772?
Both assumptions are correct.
It's running in a VM; that VM runs on your PC and exposes the Kubernetes API, so you can access it with kubectl without needing to get into the VM.
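To see exactly which endpoint your kubectl is talking to, a small sketch (standard kubectl; nothing assumed beyond a default kubeconfig):
# Print the API server endpoint of the current context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
For the output above it should print https://127.0.0.1:32772, the local port that Docker/VirtualBox forwards into the VM.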

Expose Kubernetes Ingress to LAN computers

I have computer A and B on LAN:
A at IP 192.168.0.104
B at IP 192.168.0.110
On computer B I have a Kubernetes service with ingress:
path hello
host hello-node.com
minikube ip is 192.168.49.2
/etc/hosts has a line:
192.168.49.2 hello-node.com
On B I see the service respond at hello-node.com/hello but not at
192.168.49.2/hello; on 192.168.49.2/hello I get a 404 error from nginx.
How do I access either hello-node.com/hello or 192.168.49.2/hello from computer A?
I do not want to rely on any 3rd-party service (load balancer, etc.)
info:
minikube version: v1.16.0
$ kubectl cluster-info
Kubernetes control plane is running at https://192.168.49.2:8443
KubeDNS is running at https://192.168.49.2:8443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Workaround without using ingress: expose the service with a NodePort instead, based on the link from @mariusz-k:
kubectl expose deployment/hello-node --type="NodePort" --port 8080
SERVICE_NODE_IP=$(minikube ip)
FORWARD_PORT=8090
SERVICE_NODE_PORT=$(kubectl get services/hello-node -o go-template='{{(index .spec.ports 0).nodePort}}')
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$SERVICE_NODE_IP -NL \*:$FORWARD_PORT:0.0.0.0:$SERVICE_NODE_PORT
You need to get the address of Computer B (the cluster ip) and then connect to it.
# Get the cluster "master" ip
$ kubectl cluster-info
Kubernetes master is running at https://<the desired ip/DNS record>......:443
# use the above ip to get the content of your service
curl -vsI <ip>/hello
You can access your minikube service from another machine by following the steps from this GitHub issue:
service_name=web # This is what you need to replace with your own service
service_port=$(minikube service $service_name --url | cut -d':' -f3)
ssh -i ~/.minikube/machines/minikube/id_rsa docker@$(minikube ip) -NL \*:${service_port}:0.0.0.0:${service_port}
After that your service will be available under `<minikube's-host-ip>:<service_port>`.
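The 404 on 192.168.49.2/hello happens because the nginx ingress routes by the HTTP Host header. If you keep the ingress, a hedged sketch of a request from computer A (assuming B's LAN IP 192.168.0.110 and a forwarded port such as 8090 from the workaround above):
# From computer A: supply the Host header the ingress rule matches on
curl -H "Host: hello-node.com" http://192.168.0.110:8090/hello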

CoreDNS only works on one host in Kubernetes cluster

I have a Kubernetes cluster with 3 nodes:
[root@ops001 ~]# kubectl get nodes
NAME STATUS ROLES AGE VERSION
azshara-k8s01 Ready <none> 143d v1.15.2
azshara-k8s02 Ready <none> 143d v1.15.2
azshara-k8s03 Ready <none> 143d v1.15.2
After I deployed some pods, I found that only one node, azshara-k8s03, could resolve DNS; the other two nodes could not. This is /etc/resolv.conf on the azshara-k8s03 host node:
options timeout:2 attempts:3 rotate single-request-reopen
; generated by /usr/sbin/dhclient-script
nameserver 100.100.2.136
nameserver 100.100.2.138
This is /etc/resolv.conf on the other 2 nodes:
nameserver 114.114.114.114
Should I keep them the same? What should I do to make DNS work fine on all 3 nodes?
Did you check whether 114.114.114.114 is actually reachable from your nodes? If not, change it to something that actually is ;-]
Also check which resolv.conf your kubelets actually use: it is often something other than /etc/resolv.conf. Run ps ax | grep kubelet, check the value of the --resolv-conf flag, and see if the DNS servers in that file work correctly.
update:
What names are failing to resolve on the 2 problematic nodes? Are these public names or internal only? If they are internal only, then 114.114.114.114 will not know about them. 100.100.2.136 and 100.100.2.138 are not reachable for me: are they your internal DNS servers? If so, try changing /etc/resolv.conf on the 2 nodes that don't work to match the one that works.
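A quick hedged test from one of the failing nodes (the failing name is a placeholder; substitute one that actually fails for you):
# Does the public resolver answer at all?
nslookup kubernetes.io 114.114.114.114
# Do the internal resolvers from the working node answer?
nslookup <some-failing-name> 100.100.2.136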
First step: check that CoreDNS is listening on the port you specified. Log in to a pod and use the telnet command to make sure the exposed DNS port is reachable (I am using Alpine here; on CentOS use yum, on Ubuntu or Debian use apt-get):
apk add busybox-extras
telnet <your coredns server ip> <your coredns listening port>
Second step: log in to pods on each host machine and make sure the port is reachable from every pod. If telnet cannot connect, you should fix your cluster network first.
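A one-shot variant of the same check, as a hedged sketch (assuming the default kube-dns ClusterIP 10.96.0.10; substitute the ClusterIP shown by kubectl -n kube-system get svc kube-dns):
# Run a throwaway pod and query CoreDNS directly
kubectl run dns-test --rm -it --image=busybox --restart=Never -- \
  nslookup kubernetes.default.svc.cluster.local 10.96.0.10
Repeating this while pods land on different nodes shows which nodes can reach CoreDNS.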

How to access the services in a Kubernetes cluster when sshing to other VMs via a proxy?

Consider two VMs built on a bare-metal server and connected over a network: one is the master and the other is a worker. I ssh to the master and construct a cluster using kubeadm, with three pods and a service of type: ClusterIP. So when I want to access the cluster I run kubectl proxy on the master. Now I can explore the API with curl and wget from the VM I ssh'd into, like this:
$ curl http://localhost:8080/api/
So far, so good! But what if I want to access the services from my laptop? The localhost above refers to the bare-metal server! How can I access the services through the proxy from my laptop when the cluster is on another machine?
When I run $ curl http://localhost:8080/api/ on my laptop it says:
127.0.0.1 refused to connect
which makes sense! But what is the solution to this?
If you forward port 8080 when sshing to the master, you can use localhost on your laptop to access the APIs on the cluster.
You can try adding the -L flag to your ssh command:
$ ssh -L 8080:localhost:8080 your.master.host.com
Then the curl to localhost will work.
You can also pass extra arguments to the kubectl proxy command to make the reverse proxy listen on an address other than the default (127.0.0.1) and expose it outside:
kubectl proxy --port=8001 --address='<MASTER_IP_ADDRESS>' --accept-hosts="^.*$"
You can get your master IP address by issuing the following command: kubectl cluster-info
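With the proxy bound to the master's address, a hedged sketch of the call from the laptop (port 8001 as in the command above; the placeholder is yours to fill in):
# From the laptop: talk to the kubectl proxy running on the master
curl http://<MASTER_IP_ADDRESS>:8001/api/
Keep in mind that kubectl proxy serves plain HTTP and does no authentication of its own, so only expose it on a trusted network.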

The connection to the server localhost:8080 was refused

I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
From master (node1),
$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
From worker (node 2),
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
I am not sure how to fix this issue. Any help is appreciated.
On executing "journalctl -xeu kubelet" I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network ... which I am not able to do because of the above error.
Thanks!
kubectl interfaces with kube-apiserver for cluster management. The command works on the master node because that's where kube-apiserver runs. On the worker nodes, only kubelet and kube-proxy are running.
In fact, kubectl is supposed to be run on a client (e.g. a laptop or desktop) and not on the Kubernetes nodes.
From the master you need ~/.kube/config; pass this file as an argument to the kubectl command. Copy the config file to the other server or laptop, then point kubectl at it,
e.g.:
kubectl --kubeconfig=~/.kube/config get nodes
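A hedged sketch of that copy step (user and host are placeholders; /etc/kubernetes/admin.conf is the kubeconfig kubeadm writes on the master, as the next answer shows):
# From the laptop: fetch the admin kubeconfig and point kubectl at it
scp <user>@<master-host>:/etc/kubernetes/admin.conf ~/.kube/config
kubectl --kubeconfig=$HOME/.kube/config get nodes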
This worked for me after executing the following commands:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As a hint, the message being prompted indicates it's network-related.
So one potential answer, which worked in my case, is to have a look at the cluster key for your context within the contexts section of your kubeconfig.
My error was that I had placed an incorrect cluster name there.
Having the appropriate cluster name is crucial: it lets kubectl find the cluster for the respective context, and the error will disappear.
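To inspect this, a small sketch using standard kubectl config subcommands (assuming a kubeconfig at the default path):
# List contexts and the cluster each one points at
kubectl config get-contexts
# Show only the entries belonging to the current context
kubectl config view --minify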
To solve the issue The connection to the server localhost:8080 was refused - did you specify the right host or port?, you may be missing a step.
My fix:
On macOS, if you install kubectl with brew, you still need to brew install minikube; afterwards, run minikube start. This will start your cluster.
Run the command kubectl cluster-info and you should get a happy path response similar to:
Kubernetes control plane is running at https://127.0.0.1:63000
KubeDNS is running at https://127.0.0.1:63308/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes install steps: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Minikube docs: https://minikube.sigs.k8s.io/docs/start/
Ensure the right context is selected if you're running Kubernetes in Docker Desktop.
Once you've selected it correctly, you'll be able to run kubectl commands without any exception:
% kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 2d11h v1.22.5
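For completeness, a hedged sketch of selecting that context (docker-desktop is the context name Docker Desktop creates by default; yours may differ):
# Switch kubectl to the Docker Desktop cluster
kubectl config use-context docker-desktop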