"The connection to the server localhost:8080 was refused - did you specify the right host or port?" - kubernetes

I'm on an EC2 instance trying to get my cluster created. I have kubectl installed already, and here are my services and workloads YAML files.
services.yaml
apiVersion: v1
kind: Service
metadata:
  name: stockapi-webapp
spec:
  selector:
    app: stockapi
  ports:
    - name: http
      port: 80
  type: LoadBalancer
workloads.yaml
apiVersion: v1
kind: Deployment
metadata:
  name: stockapi
spec:
  selector:
    matchLabels:
      app: stockapi
  replicas: 1
  template: # template for the pods
    metadata:
      labels:
        app: stockapi
    spec:
      containers:
        - name: stock-api
          image: public.ecr.aws/u1c1h9j4/stock-api:latest
When I try to run
kubectl apply -f workloads.yaml
I get this as an error
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I also tried changing the port in my services.yaml to 8080 and that didn't fix it either

This error occurs when the ~/.kube/config file is missing or not configured correctly on the client where you run the kubectl command.
kubectl reads the cluster info, including which host and port to connect to, from the ~/.kube/config file.
If you are using EKS, the AWS guide "Create a kubeconfig for Amazon EKS" describes how to create the config file.
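For reference, a hedged sketch of the usual EKS workflow; the region and cluster name below are placeholders you would replace with your own values:
# write/merge an entry for your EKS cluster into ~/.kube/config
aws eks update-kubeconfig --region us-east-1 --name my-cluster
# confirm kubectl now targets the cluster instead of localhost:8080
kubectl config current-context
kubectl get svc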

I encountered the exact same error in my cluster when I executed the "kubectl get nodes" command:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I ran the following commands on the master node and that fixed the error.
apt-get update && apt-get install -y apt-transport-https ca-certificates curl software-properties-common
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | apt-key add -
add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu focal stable"
apt-get update && apt-get install -y containerd.io
Configure containerd
mkdir -p /etc/containerd
containerd config default > /etc/containerd/config.toml
systemctl restart containerd
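Not part of the original answer, but as a hedged follow-up you can confirm the runtime and kubelet are healthy before retrying kubectl; the service names assume a standard systemd-managed kubeadm install:
# check the container runtime, then bounce and check the kubelet
systemctl status containerd --no-pager
systemctl restart kubelet
systemctl status kubelet --no-pager
# then retry
kubectl get nodes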

I was following the instructions on AWS.
In my case I was on a Mac with Docker Desktop installed, which seemed to conflict with the kubectl installed via Homebrew.
I traced it down to a symlink in /usr/local/bin and renamed it to kubectl-old.
Then I reinstalled kubectl, put it on my PATH, and everything worked.
I know this is very specific to my case, but it may help others.
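A hedged way to spot the same conflict on macOS is to list every kubectl on the PATH and see where the first one points:
which -a kubectl                 # all kubectl binaries on the PATH, in order
ls -l /usr/local/bin/kubectl     # is it a symlink into Docker Desktop or Homebrew?
kubectl version --client         # confirm which client binary actually runs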

I found out how to solve this. Run the commands below:
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then you can run kubectl get nodes again.
Cheers!
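One hedged addition beyond the original answer: swapoff -a only lasts until the next reboot, so on a kubeadm node you usually also comment out the swap entry in /etc/fstab:
sudo swapoff -a
# keep swap disabled across reboots by commenting out its fstab entry
sudo sed -i '/ swap / s/^\(.*\)$/#\1/g' /etc/fstab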

In my case I had a problem with the certificate authority. I found that out by checking the kubectl config:
kubectl config view
The clusters section was null, instead of containing something similar to
- cluster:
    certificate-authority-data: DATA+OMITTED
    server: https://kubernetes.docker.internal:6443
  name: docker-desktop
It was not accepted because of a time difference between my machine and the server (several seconds was enough).
Running
sudo apt-get install ntp
sudo apt-get install ntpdate
sudo ntpdate ntp.ubuntu.com
solved the issue.
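As a hedged alternative on systemd-based distributions, you can check and enable time synchronization without installing the ntp packages:
timedatectl status              # look for "System clock synchronized: yes"
sudo timedatectl set-ntp true   # enable systemd-timesyncd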

My intention is to create my own bare-metal Kubernetes cloud with up to six nodes. I immediately ran into the intermittent issue below.
"The connection to the server 192.168.1.88:6443 was refused - did you specify the right host or port?"
I have performed a two node installation (master and slave) about 20 different times using a multitude of “how to” sites. I would say I have had the most success with the below link…
https://www.knowledgehut.com/blog/devops/install-kubernetes-on-ubuntu
…however every install results in the intermittent issue above.
Given this issue is intermittent, I have to assume the necessary folder structure and permissions exist, the necessary application prerequisites exist, services/processes are starting under the correct context, and the installation was performed properly (using sudo at the appropriate times).
Again, the problem is intermittent. It will work, then stop, and then start again. A reboot sometimes corrects the issue.
Using Ubuntu ubuntu-22.04.1-desktop-amd64.
I have read a lot of comments online concerning this issue, and a majority of the recommended fixes deal with creating directories and installing packages under the correct user context.
I do not believe this is the problem I am having given the issue is intermittent.
Linux is not my strong suit, and I would rather not go the Windows route. I am doing this to learn something new and would like to experience the whole enchilada (from OS install, to Kubernetes cluster install, and then to Docker deployments).
It could be said that problems are the best teachers, and I would agree with that. However, I would like to know for certain that I am starting with a stable, working set of instructions before devoting endless hours to troubleshooting bad/incorrect documentation.
Any ideas on how to proceed with correcting this problem?
d0naldashw0rth(at)yahoo(dot)com

I got the same error, and after switching from the root user to a regular user (ubuntu, etc.) my problem was fixed.

Exactly the same issue as Donald wrote.
I've tried all the suggestions described above:
sudo systemctl stop kubelet
sudo systemctl start kubelet
strace -eopenat kubectl version
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
export KUBECONFIG=$HOME/.kube/config
The cluster crashes intermittently. Sometimes it works, sometimes not.
The connection to the server x.x.x.x:6443 was refused - did you specify the right host or port?
Any other ideas? Thank you!
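Not from the original thread, but a hedged way to narrow down an intermittent refusal on port 6443 is to check whether the kube-apiserver static pod is crash-looping while the kubelet stays up; this assumes a kubeadm install using containerd:
sudo systemctl status kubelet --no-pager
sudo journalctl -u kubelet --since "10 min ago" | tail -n 50
sudo crictl ps -a | grep kube-apiserver   # look for repeated restarts/exits of the API server container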


kubectl in pod works unexpectedly

kubectl is installed in the pod. When kubectl is used to execute commands in another pod, some commands work as expected, and others behave abnormally, such as echo.
execute on host / execute in pod (the respective outputs were shown as screenshots in the original post)
The host and pod have the same version of kubectl. Is it related to some concept of tty, or something else? Thanks for any advice.
I've tried adding privileged, tty, and stdin to my kubectl pod, but that didn't work. The YAML part is below:
containers:
  - name: bridge
    image: registry:5000/bridge:v1
    imagePullPolicy: IfNotPresent
    securityContext:
      privileged: true
    stdin: true
    tty: true
It depends on the container environment and its limitations, such as permissions or whether the pod has the required software to run the commands.
It appears echo is not present in the image deployed on the pod. Although the echo command is part of the standard utilities and usually comes by default, you can try installing it again; on Ubuntu, for example, use the following:
sudo apt-get update
sudo apt-get install coreutils
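A hedged check before reinstalling anything: see whether the target container's shell already has echo as a builtin or a binary (vnc-test is the pod name used later in this thread; substitute your own):
kubectl exec vnc-test -- sh -c 'command -v echo; type echo'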
It finally turned out to be related to my kubectl command in the pod, which was overridden by the following script. When I changed back to the original kubectl command, everything worked fine.
root@bridge-66d98bd46d-zk65m:/data/ww# cat /usr/bin/kubectl
#!/bin/bash
/opt/kubectl1 -ndefault $#
root@bridge-66d98bd46d-zk65m:/data/ww# /opt/kubectl1 -ndefault exec vnc-test -- bash -c 'which echo '
/usr/bin/echo
root@bridge-66d98bd46d-zk65m:/data/ww#
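As a hedged aside (not from the original answer): if you do want a wrapper like that, forwarding the original arguments verbatim with "$@" (rather than $#, which expands to the argument count) keeps kubectl exec and commands like echo working:
#!/bin/bash
# sketch of a /usr/bin/kubectl wrapper that pins the namespace but
# passes every original argument (and its quoting) through to the real binary
exec /opt/kubectl1 -n default "$@"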

Why do I have to edit /etc/hosts just sometimes when using nginx-ingress controller and resources in my local k8s environment?

Not sure if this is OS specific, but on my M1 Mac I'm installing the NGINX controller and the resource example located in the official Quick Start guide for the controller, for Docker Desktop for Mac. The instructions are as follows:
# Create the Ingress
helm upgrade --install ingress-nginx ingress-nginx \
--repo https://kubernetes.github.io/ingress-nginx \
--namespace ingress-nginx --create-namespace
# Pre-flight checks
kubectl get pods --namespace=ingress-nginx
kubectl wait --namespace ingress-nginx \
--for=condition=ready pod \
--selector=app.kubernetes.io/component=controller \
--timeout=120s
# and finally, deploy and test the resource
kubectl create deployment demo --image=httpd --port=80
kubectl expose deployment demo
kubectl create ingress demo-localhost --class=nginx \
--rule=demo.localdev.me/*=demo:80
kubectl port-forward --namespace=ingress-nginx service/ingress-nginx-controller 8080:80
I noticed that the instructions did not mention having to edit the /etc/hosts file, which I found strange. And, when I tested it by putting demo.localdev.me:8080 into the browser, it did work as expected!
But why? What happened that an application inside of a docker container was able to influence behavior on my host machine and intercept its web traffic without me having to edit the /etc/hosts file?
For my next test, I re-executed everything above with the only change being that I switched demo to demo2. That did not work. I did have to go into /etc/hosts and add demo2.localdev.me 127.0.0.1 as an entry. After that both demo and demo2 work as expected.
Why is this happening? Not having to edit the /etc/hosts file is appealing. Is there a way to configure it so that they all work? How would I turn it "off" from happening automatically if I needed to route traffic back out to the internet rather than my local machine?
I replicated your issue and got a similar behaviour on the Ubuntu 20.04.3 OS.
The problem is that the NGINX Ingress controller local-testing guide does not mention that the demo.localdev.me address points to 127.0.0.1 - that's why it works without editing the /etc/hosts or /etc/resolv.conf file. It probably works like the *.localtest.me addresses:
Here's how it works. The entire domain name localtest.me - and all wildcard entries - point to 127.0.0.1. So without any changes to your host file you can immediately start testing with a local URL.
There is also a good and detailed explanation in this topic.
So Docker Desktop / Kubernetes change nothing on your host.
The address demo2.localdev.me also points to 127.0.0.1, so it should work for you as well - and as I tested in my environment, the behaviour was exactly the same as for demo.localdev.me.
You may run the nslookup command and check which IP address a given domain name points to, for example:
user@shell:~$ nslookup demo2.localdev.me
Server: 127.0.0.53
Address: 127.0.0.53#53
Non-authoritative answer:
Name: demo2.localdev.me
Address: 127.0.0.1
You may try to do some tests with other hostnames, either existing or non-existing ones; then of course it won't work, because the address won't be resolved to 127.0.0.1 and thus won't be forwarded to the Ingress NGINX controller. In these cases, you can edit /etc/hosts (as you did) or use the curl -H flag, for example:
I created the ingress using following command:
kubectl create ingress demo-localhost --class=nginx --rule=facebook.com/*=demo:80
Then I started port-forwarding and I run:
user@shell:~$ curl -H "Host: facebook.com" localhost:8080
<html><body><h1>It works!</h1></body></html>
You wrote:
For my next test, I re-executed everything above with the only change being that I switched demo to demo2. That did not work. I did have to go into /etc/hosts and add demo2.localdev.me 127.0.0.1 as an entry. After that both demo and demo2 work as expected.
Well, that sounds strange. Could you run nslookup demo2.localdev.me without adding an entry in /etc/hosts and then check? Are you sure you performed the correct query before? Did you change something on the Kubernetes configuration side? As I tested (and presented above), it should work exactly the same as for demo.localdev.me.
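On the "how would I turn it off" part of the question (a hedged note, not from the answer above): the only thing routing a name to your machine is ordinary name resolution, so removing or commenting out whatever /etc/hosts entry you added restores normal routing; on macOS you may also need to flush the DNS cache. The hostname below is just the one from the question:
sudo sed -i '' '/demo2.localdev.me/ s/^/# /' /etc/hosts           # comment out the override (BSD/macOS sed)
sudo dscacheutil -flushcache && sudo killall -HUP mDNSResponder   # flush the macOS DNS cache
nslookup demo2.localdev.me                                        # confirm which address the name resolves to now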

KIND and kubectl: The connection to the server localhost:8080 was refused - did you specify the right host or port?

I'm trying to use KIND to spin up my Kubernetes cluster and use it with kubectl, but I'm stymied at the first hurdle.
I set up a cluster using the following kind config
kind: Cluster
apiVersion: kind.x-k8s.io/v1alpha4
nodes:
- role: control-plane
  extraPortMappings:
  - hostPort: 80
    containerPort: 80
Then I use kubectl
kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This makes sense, because if I do docker ps I see:
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
9dcd3b6fd19d kindest/node:v1.17.11 "/usr/local/bin/entr…" 14 minutes ago Up 14 minutes 127.0.0.1:44609->6443/tcp kind-control-plane
What do I have to do to reach the Kubernetes API server and get the nodes from it?
I solved this issue with sudo chmod 0644 /etc/kubernetes/admin.conf and export KUBECONFIG=/etc/kubernetes/admin.conf.
The most likely problem, and the one that usually gets me, is that you don't have a .kube directory with the right config in it. Try this:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
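A hedged addition for kind specifically (not from the answers above): kind writes a context into your kubeconfig when the cluster is created, so rather than copying /etc/kubernetes/admin.conf you can usually just re-export it and switch to the kind context:
kind export kubeconfig --name kind    # "kind" is the default cluster name
kubectl config use-context kind-kind  # kind prefixes context names with "kind-"
kubectl get nodes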

Cannot connect to Kubernetes api on AWS vm's

I have deployed Kubernetes following the Kubernetes official page.
I can see that Kubernetes is deployed, because at the end I got this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I do kubectl get pods, it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check the cluster-info, it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config, it shows the following:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.32.101:6443
Even telnet shows that there is a process running on 6443 but not on 6553
How can I change the port, and how can I fix the issue?
Any help would be of great use
Thanks in advance.
It looks like your latest kubectl config interferes with the previous clusters' configurations.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the exited cluster using kubeadm reset followed by initializing fresh cluster using kubeadm init, new certificates will be generated. To operate the new cluster, you have to update kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
It gives you up-to-date configuration for the last cluster you've created using kubeadm tool.
Note: You should copy kubectl configuration for all users accounts which you are going to use to manage the cluster.
Here are some examples of how to manage the kubeconfig file using the command line.
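A hedged illustration of the kind of commands meant here, using standard kubectl config subcommands; the context name is a placeholder:
kubectl config get-contexts                # list the contexts known to the current kubeconfig
kubectl config use-context <context-name>  # switch the active cluster/user
kubectl config view --minify               # show only the active context's settings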
I figured out the issue: it was because of the firewall on the machine. I could join nodes to the cluster once I allowed traffic via port 6443. I didn't fix the issue with this post, but beginners can use this guide to K8s on AWS for a better idea.
Thanks for the help guys...!!!

Why does tiller connect to localhost:8080 for the Kubernetes API?

When using helm for Kubernetes package management, after installing the helm client and running
helm init
I can see the tiller pod running on the Kubernetes cluster, but then when I run helm ls, it gives an error:
Error: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
and with kubectl logs I can see a similar message:
[storage/driver] 2017/08/28 08:08:48 list: failed to list: Get http://localhost:8080/api/v1/namespaces/kube-system/configmaps?labelSelector=OWNER%3DTILLER: dial tcp 127.0.0.1:8080: getsockopt: connection refused
I can see the tiller pod is running on one of the worker nodes instead of the master, and there is no API server running on that node, so why does it connect to 127.0.0.1 instead of my master IP?
Run this before doing helm init. It worked for me.
kubectl config view --raw > ~/.kube/config
First delete the tiller deployment and stop the tiller service by running the commands below:
kubectl delete deployment tiller-deploy --namespace=kube-system
kubectl delete service tiller-deploy --namespace=kube-system
rm -rf $HOME/.helm/
By default, helm init installs the Tiller pod into the kube-system namespace, with Tiller configured to use the default service account.
Configure Tiller with cluster-admin access with the following command:
kubectl create clusterrolebinding tiller-cluster-admin \
--clusterrole=cluster-admin \
--serviceaccount=kube-system:default
Then install helm server (Tiller) with the following command:
helm init
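A hedged variant that many Helm v2 setups used instead of binding the default service account: create a dedicated tiller service account, bind it to cluster-admin, and tell helm init to use it (the binding name here is just an example):
kubectl create serviceaccount tiller --namespace kube-system
kubectl create clusterrolebinding tiller-cluster-rule \
  --clusterrole=cluster-admin \
  --serviceaccount=kube-system:tiller
helm init --service-account tiller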
I had been having this problem for a couple of weeks on my workstation, and none of the answers provided (here or on GitHub) worked for me.
What did work is this:
sudo kubectl proxy --kubeconfig ~/.kube/config --port 80
Notice that I am using port 80, so I needed sudo to be able to bind the proxy there; if you are using 8080 you won't need that.
Be careful with this, because the kubeconfig file that the command above points to is /root/.kube/config rather than the one in your usual $HOME. You can either use an absolute path to point to the config you want to use, create one in root's home, or use the sudo flag that preserves your original HOME env var: --preserve-env=HOME.
Now, if you are using helm by itself, I guess this is it. Getting my setup working was harder: I am using Helm through the Terraform provider on GKE, so this was a pain to debug, as the message I was getting doesn't even mention Helm and is returned by Terraform when planning. For anybody that may be in a similar situation:
The errors when doing a plan/apply operation in Terraform in any cluster with Helm releases in the state:
Error: error installing: Post "http://localhost/apis/apps/v1/namespaces/kube-system/deployments": dial tcp [::1]:80: connect: connection refused
Error: Get "http://localhost/api/v1/namespaces/system/secrets/apigee-secrets": dial tcp [::1]:80: connect: connection refused
You get one of these errors for every helm release in the cluster, or something like that. In this case, for a GKE cluster, I had to ensure that the env var GOOGLE_APPLICATION_CREDENTIALS points to a key file with valid credentials (the application-default credentials, unless you are not using the default setup for application auth):
gcloud auth application-default login
export GOOGLE_APPLICATION_CREDENTIALS=/home/$USER/.config/gcloud/application_default_credentials.json
With the kube proxy in place and the correct credentials, I am able to use Terraform (and Helm) as usual again. I hope this is helpful for anybody experiencing this.
kubectl config view --raw > ~/.kube/config
export KUBECONFIG=~/.kube/config
worked for me