Kubectl commands cannot be executed from another VM - kubernetes

I'm having an issue when executing kubectl commands. My cluster consists of one Master and one Worker node. The kubectl commands can be executed from the Master server without any issue. But I also have another VM that I use as a Jump server to log in to the Master and Worker nodes, and I need to execute kubectl commands from that Jump server. I created the .kube directory and copied the kubeconfig file from the Master node to the Jump server, and I also set the context correctly. But kubectl commands hang when executed from the Jump server and eventually give a timeout error.
Below is the information.
kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.240.0.30:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
ubuntu@ansible:~$ kubectl config use-context kubernetes-admin@kubernetes
Switched to context "kubernetes-admin@kubernetes".
ubuntu@ansible:~$ kubectl get pods
Unable to connect to the server: dial tcp 10.240.0.30:6443: i/o timeout
ubuntu@ansible:~$ kubectl config current-context
kubernetes-admin@kubernetes
Everything seems OK to me, and I'm wondering why kubectl commands hang when executing from the Jump server.

I troubleshot the issue by verifying whether the Jump VM could telnet to the Kubernetes Master node, by executing the below.
telnet <ip-address-of-the-kubernetes-master-node> 6443
Since the error was a "Connection Timed Out", I had to add a firewall rule to the Kubernetes Master node, as below. Note: In my case I'm using GCP.
gcloud compute firewall-rules create allow-kubernetes-apiserver \
--allow tcp:22,tcp:6443,icmp \
--network kubernetes \
--source-ranges 0.0.0.0/0
Then I was able to telnet to the Master node without any issue. If you still can't connect to the Master node, change the internal IP in the kubeconfig file under the .kube directory to the public IP address of the Master node.
Then switch the context using the command below.
kubectl config use-context <context-name>
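As a minimal sketch (the cluster name kubernetes and the placeholder IP are assumptions; adjust them to your own kubeconfig), the connectivity check and the kubeconfig repointing from the Jump server could look like this:
# check that the API server port is reachable from the Jump server
nc -zv <master-public-ip> 6443
# point the existing cluster entry at the public IP instead of the internal one
kubectl config set-cluster kubernetes --server=https://<master-public-ip>:6443
# confirm kubectl can now reach the API server
kubectl cluster-info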

Related

Configure Kubectl to connect to a local network Kubernetes cluster

I'm trying to connect to a kubernetes cluster running on my Windows PC from my Mac. This is so I can continue to develop from my Mac but run everything on a machine with more resources. I know that to do this I need to change the kubectl context on my Mac to point towards my Windows PC but don't know how to manually do this.
When I've connected to a cluster before on AKS, I would use az aks get-credentials and this would correctly add an entry to .kube/config and change the context to it. I'm basically trying to do this but on a local network.
I've tried to add an entry into kubeconfig but get The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?. I've also checked my antivirus on the Windows computer and no requests are getting blocked.
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: {CERT}
    server: https://192.168.1.XXX:6443
  name: windows-docker-desktop
current-context: windows-docker-desktop
kind: Config
preferences: {}
users:
- name: windows-docker-desktop
  user:
    client-certificate-data: {CERT}
    client-key-data: {KEY}
I've also tried using kubectl --insecure-skip-tls-verify --context=windows-docker-desktop get pods which results in the same error: The connection to the server 192.168.1.XXX:6443 was refused - did you specify the right host or port?.
Many thanks.
From your Mac, check whether the port is open, e.g. nc -zv 192.168.<your-windows-IP> 6443. If it doesn't respond "open", you have a network problem.
Try this.
clusters:
- cluster:
    server: https://192.168.1.XXX:6443
    insecure-skip-tls-verify: true
  name: windows-docker-desktop
directly in the config file (a command-line sketch for building this entry is shown after the links below).
You don't need to run set-context, since you only have one context.
To be sure it is not your firewall, disable it just for a very short period, only to test the connection.
Last thing: it seems you are using Kubernetes in Docker Desktop. If not, and you have a local cluster with more than one node, you need to install a network fabric in your cluster, like Flannel or Calico.
https://projectcalico.docs.tigera.io/about/about-calico
https://github.com/flannel-io/flannel
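If you prefer not to hand-edit ~/.kube/config on the Mac, roughly the same entry can be created with kubectl config commands. This is only a sketch: the cluster/user names match the snippet above, but the certificate paths are placeholders for files exported from the Windows machine.
# add the remote cluster, skipping TLS verification as in the snippet above
kubectl config set-cluster windows-docker-desktop --server=https://192.168.1.XXX:6443 --insecure-skip-tls-verify=true
# add the user with the exported client certificate and key
kubectl config set-credentials windows-docker-desktop --client-certificate=./client.crt --client-key=./client.key
# tie them together in a context and switch to it
kubectl config set-context windows-docker-desktop --cluster=windows-docker-desktop --user=windows-docker-desktop
kubectl config use-context windows-docker-desktop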

Expose kube-apiserver with additional IP address

I set up a k8s cluster using kubeadm init on bare metal.
I noticed the kube-apiserver is exposing its interface on a private IP:
# kubectl get pods kube-apiserver-cluster1 -n kube-system -o wide
NAME                      READY   STATUS    RESTARTS   AGE     IP           NODE       NOMINATED NODE   READINESS GATES
kube-apiserver-cluster1   1/1     Running   0          6d22h   10.11.1.99   cluster1   <none>           <none>
Here's the kube config inside the cluster:
# kubectl config view
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://10.11.1.99:6443
  name: kubernetes
contexts:
- context:
    cluster: kubernetes
    user: kubernetes-admin
  name: kubernetes-admin@kubernetes
current-context: kubernetes-admin@kubernetes
kind: Config
preferences: {}
users:
- name: kubernetes-admin
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
This is fine for using kubectl locally on the cluster, but I want to add an additional interface to expose the kube-apiserver using the public IP address. Ultimately I'm trying to configure kubectl from a laptop to remotely access the cluster.
How can I expose the kube-apiserver on an external IP address?
Execute the following command:
$ kubeadm init --pod-network-cidr=<ip-range> --apiserver-advertise-address=0.0.0.0 --apiserver-cert-extra-sans=<PRIVATE_IP>[,<PUBLIC_IP>,...]
Don't forget to replace the private IP with the public IP in your .kube/config if you use kubectl from remote.
You can also forward the private IP of the master node to the public IP of the master node on the worker node. Run this command on worker node before running kubeadm join:
$ sudo iptables -t nat -A OUTPUT -d <Private IP of master node> -j DNAT --to-destination <Public IP of master node>.
But keep in mind that you'll also have to forward worker private IPs the same way on the master node to make everything work correctly (if they suffer from the same issue of being covered by cloud provider NAT).
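For example, the mirrored rule on the master node would look roughly like this (the IPs are placeholders, as above):
sudo iptables -t nat -A OUTPUT -d <Private IP of worker node> -j DNAT --to-destination <Public IP of worker node>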
See more: apiserver-ip, kube-apiserver.
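If the cluster already exists and you'd rather not re-run a full kubeadm init, one commonly used alternative (a sketch only; check the kubeadm documentation for your version and back up /etc/kubernetes/pki first) is to regenerate just the API server certificate with the public IP as an extra SAN:
# move the old API server certificate and key out of the way (after backing them up)
sudo mkdir -p /root/pki-backup
sudo mv /etc/kubernetes/pki/apiserver.crt /etc/kubernetes/pki/apiserver.key /root/pki-backup/
# regenerate the certificate with the public IP added as an extra SAN
sudo kubeadm init phase certs apiserver --apiserver-cert-extra-sans=<PUBLIC_IP>
# then restart the kube-apiserver static pod (for example by restarting kubelet) so it picks up the new certificate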

Node joined master node successfully, but kubectl get nodes gives the error "The connection to the server localhost:8080 was refused"

I'm using two virtual machines with CentOS 8:
master-node:
kubeadm init
node-1:
kubeadm join
node-1 joined successfully, and the join output said to run "kubectl get nodes".
But running kubectl get nodes gives the response "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
I've checked my config using the command kubectl config view and got this result:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null
I've run ls /etc/kubernetes/ and it shows kubelet.conf only.
From what I see, you are trying to use kubectl on the worker node after a successful kubeadm join.
kubeadm init generates admin credentials/config files that are used to connect to the cluster, and you were expecting that kubeadm join would also create similar credentials so you could run kubectl commands from the worker node. The kubeadm join command does not place any admin credentials on worker nodes (where applications are running), for security reasons.
If you want them there, you need to copy them from master to worker manually (or create new ones).
Based on the writing, once kubeadm init is completed the master node is initialized and the components are set up.
Running kubeadm join on the worker node joins this node to the previous master.
After this step, if you're running kubectl get nodes on the master and encountering the above-mentioned issue, it's because the cluster config is missing for kubectl.
The default config is /etc/kubernetes/admin.conf, which can be set via the KUBECONFIG environment variable.
Or the simplest way would be to copy this file into the .kube folder:
cp -f /etc/kubernetes/admin.conf ~/.kube/config
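For completeness, a rough sketch of getting the admin kubeconfig onto the machine that will run kubectl, based on the points above (the master hostname is a placeholder):
# option 1: copy the admin kubeconfig from the master to another machine (e.g. the worker)
mkdir -p $HOME/.kube
scp root@<master-node>:/etc/kubernetes/admin.conf $HOME/.kube/config
# option 2: on the master itself, point KUBECONFIG at the default admin config
export KUBECONFIG=/etc/kubernetes/admin.conf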

Kubectl get nodes returns "the server doesn't have a resource type "nodes""

I installed Kubernetes and performed kubeadm init, and kubeadm join from the worker too. But when I run kubectl get nodes it gives the following response:
the server doesn't have a resource type "nodes"
What might be the problem here? Could not see anything in /var/log/messages.
Any hints here?
In my case, I wanted to see the description of my pods.
When I used kubectl describe postgres-deployment-866647ff76-72kwf, the error said error: the server doesn't have a resource type "postgres-deployment-866647ff76-72kwf".
I corrected it by adding pod before the pod name, as follows:
kubectl describe pod postgres-deployment-866647ff76-72kwf
It looks to me like the authentication credentials were not set correctly. Did you copy the kubeconfig file /etc/kubernetes/admin.conf to ~/.kube/config? If you used kubeadm, the API server should be configured to run on 6443, not on 8080. Could you also check that the KUBECONFIG variable is not set?
It would also help to increase the verbosity level using the flag --v=99. Moreover, are you accessing from the same machine where the Kubernetes master components are installed, or are you accessing from outside?
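As a concrete starting point (exact output will differ per setup), these commands show which config and endpoint kubectl is actually using:
# see whether an environment variable is overriding ~/.kube/config
echo $KUBECONFIG
# show only the config for the current context, including the server URL
kubectl config view --minify
# re-run the failing command with request-level logging to see which URL it hits
kubectl get nodes --v=6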
I got this message when I was trying to play around with Docker Desktop. I had previously been doing a few experiments with Google Cloud and had run some kubectl commands for that. The result was that in my ~/.kube/config file I still had stale config related to a now non-existent GCP cluster, and my default k8s context was set to that.
Try the following:
# Find what current contexts you have
kubectl config view
I get:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
contexts:
- context:
    cluster: docker-desktop
    user: docker-desktop
  name: docker-desktop
current-context: docker-desktop
kind: Config
preferences: {}
users:
- name: docker-desktop
  user:
    client-certificate-data: REDACTED
    client-key-data: REDACTED
So only one context now. If you have more than one context here, check that the one you expect is set as current-context. If not, change it with:
# Get rid of old contexts that you don't use
kubectl config delete-context some-old-context
# Selecting the context that I have auth for
kubectl config use-context docker-desktop

Error while running kubectl commands

I have recently installed minikube and kubectl. However, when I run kubectl get pods or any other kubectl-related command I get the error
Unable to connect to the server: unexpected EOF
Does anyone know how to fix this? I am using Ubuntu Server 16.04. Thanks in advance.
The following steps can be used for further debugging.
Check the minikube local cluster status using the minikube status command.
$: minikube status
minikube: Running
cluster: Running
kubectl: Correctly Configured: pointing to minikube-vm at 172.0.x.y
If there is a problem with the kubectl configuration, then configure it using the kubectl config use-context minikube command.
$: kubectl config use-context minikube
Switched to context "minikube".
Check the cluster status using the kubectl cluster-info command.
$: kubectl cluster-info
Kubernetes master is running at ...
Heapster is running at ...
KubeDNS is running at ...
...
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
Note: It can even be due to a very simple reason: internet speed (it happened to me just now).
I had the same problem too. I solved it after changing the server address to localhost:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /var/lib/minikube/certs/ca.crt
    server: https://localhost:8443   # check it
  name: m01
...
users:
- name: m01
  user:
    client-certificate: /var/lib/minikube/certs/apiserver.crt
    client-key: /var/lib/minikube/certs/apiserver.key
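If the minikube VM's address has simply changed since the kubeconfig was written, minikube can usually repair the entry itself (a sketch; behaviour may differ between minikube versions):
# show the IP minikube thinks the cluster is on
minikube ip
# rewrite the kubeconfig entry to match the running cluster
minikube update-context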
I think your Kubernetes master is not set up properly. You can check that by verifying that the following services on the master node are active and running:
etcd2.service
kube-apiserver.service Kubernetes API Server
kube-controller-manager.service Kubernetes Controller Manager
kube-scheduler.service Kubernetes Scheduler
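If those components run as systemd units on your master (as the service names above suggest, rather than as static pods), a quick check could look like this; adjust the unit names to your setup:
# check that the control-plane services are active on the master node
sudo systemctl status etcd2.service kube-apiserver.service kube-controller-manager.service kube-scheduler.service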