KIND and kubectl: The connection to the server localhost:8080 was refused - did you specify the right host or port? - kubernetes

I'm trying to use KIND to spin up my Kubernetes cluster and use it with kubectl, but I'm stymied at the first hurdle.
I set up a cluster using the following kind config:
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - hostPort: 80
    containerPort: 80
Then I use kubectl
kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This makes sense, because if I do docker ps I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9dcd3b6fd19d kindest/node:v1.17.11 "/usr/local/bin/entr…" 14 minutes ago Up 14 minutes 127.0.0.1:44609->6443/tcp kind-control-plane
What do I have to do to start the kubernetes API server and get the nodes on it?

I solved this issue with sudo chmod 0644 /etc/kubernetes/admin.conf followed by export KUBECONFIG=/etc/kubernetes/admin.conf.

The most likely problem, and the one that usually gets me, is that you don't have a .kube directory with the right config in it. Try this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
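For a kind cluster specifically, kind generates its own kubeconfig when the cluster is created, so copying /etc/kubernetes/admin.conf usually isn't what you want. A minimal sketch, assuming the default cluster name "kind":
kind export kubeconfig --name kind
kubectl cluster-info --context kind-kind
kubectl get nodes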

Related

"The connection to the server localhost:8080 was refused - did you specify the right host or port?"

I'm on an EC2 instance trying to get my cluster created. I already have kubectl installed, and here are my services and workloads YAML files.
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: stockapi-webapp
spec:
  selector:
    app: stockapi
  ports:
  - name: http
    port: 80
  type: LoadBalancer
workloads.yaml
apiVersion: v1
kind: Deployment
metadata:
  name: stockapi
spec:
  selector:
    matchLabels:
      app: stockapi
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: stockapi
    spec:
      containers:
      - name: stock-api
        image: public.ecr.aws/u1c1h9j4/stock-api:latest
When I try to run
kubectl apply -f workloads.yaml
I get this as an error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I also tried changing the port in my services.yaml to 8080 and that didn't fix it either
This error comes when you don't have a ~/.kube/config file present, or it is not configured correctly, on the client where you run the kubectl command.
kubectl reads the cluster info, including which host and port to connect to, from the ~/.kube/config file.
If you are using EKS, here's how you can create the config file:
aws eks create kubeconfig file
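For example, with the AWS CLI installed and credentials configured, something like the following writes an EKS entry into ~/.kube/config (the cluster name and region here are placeholders):
aws eks update-kubeconfig --region us-east-1 --name my-cluster
kubectl get svc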
I encountered the exact same error in my cluster when I executed the "kubectl get nodes" command.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I ran the following commands on the master node and they fixed the error.
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
I was following the instructions on AWS.
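If it helps, a quick sanity check after that change (not part of the original answer) is to confirm that the runtime and kubelet are running and that the node reports Ready:
sudo systemctl status containerd kubelet
kubectl get nodes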
In my case, I was on a Mac with Docker Desktop installed, which seemed to include its own kubectl alongside the Homebrew one.
I traced it down to a link in /usr/local/bin and renamed it to kubectl-old.
Then I reinstalled kubectl, put it on my path, and everything worked.
I know this is very specific to my case, but it may help others.
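To check whether you are hitting the same duplicate-binary problem, something like this shows every kubectl on your PATH, what any symlink points at, and which binary actually answers:
which -a kubectl
ls -l /usr/local/bin/kubectl
kubectl version --client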
I found how to solve this issue. Run the commands below:
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then you can run kubectl get nodes again.
Cheers!
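Note that swapoff -a only disables swap until the next reboot; to make it permanent you typically also comment out the swap line in /etc/fstab. A sketch, assuming a standard fstab layout:
sudo swapoff -a
# comment out any line containing " swap " so it stays off after reboot
sudo sed -i '/ swap / s/^/#/' /etc/fstab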
In my case I had a problem with the certificate authority. I found that out by checking the kubectl config:
kubectl config view
The clusters section was null, instead of containing something similar to
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
It was not being accepted because of a time difference between my machine and the server (several seconds was enough).
Running
sudo apt-get install ntp
sudo apt-get install ntpdate
sudo ntpdate ntp.ubuntu.com
solved the issue.
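On newer Ubuntu releases that ship systemd-timesyncd instead of the ntp package, a rough equivalent is to check and enable time synchronization with timedatectl:
timedatectl status
sudo timedatectl set-ntp true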
My intention is to create my own bare-metal Kubernetes cloud with up to six nodes. I immediately ran into the intermittent issue below.
"The connection to the server 192.168.1.88:6443 was refused - did you specify the right host or port?"
I have performed a two-node installation (master and slave) about 20 different times using a multitude of "how to" sites. I would say I have had the most success with the link below…
https://www.knowledgehut.com/blog/devops/install-kubernetes-on-ubuntu
…however every install results in the intermittent issue above.
Given this issue is intermittent, I have to assume the necessary folder structure and permissions exist, the necessary application prerequisites exist, services/processes are starting under the correct context, and the installation was performed properly (using sudo at the appropriate times).
Again, the problem is intermittent. It will work, then stop, and then start again. A reboot sometimes corrects the issue.
Using Ubuntu ubuntu-22.04.1-desktop-amd64.
I have read a lot of comments online concerning this issue, and a majority of the recommended fixes deal with creating directories and installing packages under the correct user context.
I do not believe this is the problem I am having given the issue is intermittent.
Linux is not my strong suit, and I would rather not go the Windows route. I am doing this to learn something new and would like to experience the entire enchilada (from OS install, to Kubernetes cluster install, and then to Docker deployments).
It could be said that problems are the best teachers, and I would agree with that. However, I would like to know for certain that I am starting with a stable, working set of instructions before devoting endless hours to troubleshooting bad/incorrect documentation.
Any ideas on how to proceed with correcting this problem?
d0naldashw0rth(at)yahoo(dot)com
I got the same error, and after switching from the root user to a regular user (ubuntu, etc.) my problem was fixed.
Exactly the same issues as Donald wrote.
I've tried all the suggestions described above:
sudo systemctl stop kubelet
sudo systemctl start kubelet
strace -eopenat kubectl version
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
The cluster crashes intermittently. Sometimes it works, sometimes not.
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
Any other ideas? Thank you!
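When the API server on 6443 drops out intermittently like this, it is usually worth checking whether swap has come back on and what the kubelet and kube-apiserver are logging around the failure. A diagnostic sketch, assuming a kubeadm cluster on containerd:
swapon --show   # should print nothing if swap is off
sudo systemctl status kubelet
sudo journalctl -u kubelet --since "10 minutes ago" | tail -n 50
sudo crictl ps -a | grep kube-apiserver   # is the apiserver container restarting?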

The connection to the server localhost:8080 was refused - did you specify the right host or port? FAQ

I am new to Kubernetes. I got the below error while interacting with the cluster using kubectl get nodes.
ERROR:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
After searching the internet, I fixed my issue:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
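Keep in mind that export KUBECONFIG only lasts for the current shell session; to make it permanent you can append it to your shell profile (a sketch, assuming bash):
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
source $HOME/.bashrc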
Your kubectl is probably not referring to the right kubeconfig file, or the kubeconfig file does not have the right details.
kubeadm init gives clear instructions to execute the following commands as a regular user; if you miss running them, you end up with the issue reported.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you check back through the logs from kubeadm init, you will find something similar to the below asking you to execute those commands.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
where MasterIP is your Kubernetes master IP.
The issue is with the context used by the kubectl command. Please check the current-context in your kubeconfig file.
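For example, to see which context kubectl is currently using and switch to the right one (context names depend on your setup):
kubectl config get-contexts
kubectl config current-context
kubectl config use-context <your-context-name>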

When I start preparing Kubernetes in AWS, I get the errors shown below

Below are the commands and their outputs:
root@k8s-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@k8s-master:~# kubectl get services -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It looks like you are not running EKS; otherwise you could not access the masters at all. With EKS, the masters are managed by AWS and you can't SSH to them.
Your kubectl command makes a call to the Kubernetes API server, so you have to check whether it is actually running on localhost on port 8080.
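A quick way to see what is actually listening (a sketch; on a kubeadm cluster the API server normally listens on 6443, not 8080):
sudo ss -ltnp | grep -E '6443|8080'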

Cannot connect to Kubernetes api on AWS vm's

I have deployed Kubernetes using the Kubernetes official page.
I can see that Kubernetes is deployed because at the end I got this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I do kubectl get pods it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check the cluster-info it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config it shows the following:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.32.101:6443
Even telnet shows that there is a process running on 6443 but not on 6553
How can I change the port, and how can I fix the issue?
Any help would be of great use
Thanks in advance.
It looks like your latest kubectl config interferes with previous clusters' configurations.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the existing cluster using kubeadm reset and initializing a fresh cluster using kubeadm init, new certificates will be generated. To operate the new cluster, you have to update the kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This gives you an up-to-date configuration for the last cluster you created using the kubeadm tool.
Note: You should copy the kubectl configuration for all user accounts that you are going to use to manage the cluster.
Here are some examples of how to manage config file using the command line.
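If you would rather keep several clusters around instead of wiping ~/.kube, one approach is to merge kubeconfig files through the KUBECONFIG variable and flatten the result (a sketch, with a hypothetical file name for the second cluster):
export KUBECONFIG=$HOME/.kube/config:$HOME/.kube/new-cluster.conf
kubectl config view --flatten > $HOME/.kube/merged-config
mv $HOME/.kube/merged-config $HOME/.kube/config
kubectl config get-contexts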
I figured out the issue: it was the firewall on the machine. I could join nodes to the cluster once I allowed traffic on port 6443. I didn't fix the issue with this post, but for beginners, use this K8s on AWS guide for a better idea.
Thanks for the help, guys!
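For reference, opening the API server port on an Ubuntu host often looks something like this (a sketch using ufw; on EC2 the security group also has to allow inbound 6443):
sudo ufw allow 6443/tcp
sudo ufw status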

Why does kubectl have different behavior with sudo?

Running kubectl get pods with sudo:
sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running as a normal user:
kubectl get pods
No resources found.
By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine what server to connect to. Your home directory and environment are different when running commands as root. When no connection info is found, kubectl defaults to localhost:8080.
You would have run these commands as the normal user:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
which would have copied the config file into your normal user's home directory, and that is why you are able to connect as the normal user but not via sudo.
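If you genuinely need to run kubectl through sudo, you can point it at your user's kubeconfig explicitly or preserve the environment variable (a sketch):
sudo kubectl --kubeconfig $HOME/.kube/config get pods
# or, keeping KUBECONFIG from your environment:
sudo --preserve-env=KUBECONFIG kubectl get pods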