I'm following the steps mentioned here to configure a Kubernetes cluster, but when I execute kubectl get nodes to check the status of my newly created cluster, I get the error message: The connection to the server localhost:8080 was refused - did you specify the right host or port?
The link describes the setup for an Ubuntu 14.04 LTS 64-bit server, but I'm using Ubuntu 16.04 64-bit.
Can an expert please help me solve this issue?
You need to run the following commands (as a regular user) after you have initialized your cluster:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
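If you want this setting to survive new shell sessions, you can also append the export to your shell profile (a minimal sketch, assuming bash):
# Optional: persist KUBECONFIG for future shells (assumes bash)
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc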
Follow this issue on GitHub if you still encounter it.
The document you're following is more than a year old; its steps are not up to date.
I would recommend creating the cluster using kubeadm with version v1.7.0:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/
Use this to configure the master:
kubeadm init --pod-network-cidr=10.244.0.0/16 --kubernetes-version v1.7.0
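Note that 10.244.0.0/16 is the pod CIDR used by the Flannel network add-on. After kubeadm init finishes you still need to apply a pod network before the nodes become Ready; for example, with Flannel (the manifest URL below reflects the Flannel repository layout at the time and is an assumption, so check the current docs):
# Apply the Flannel pod network add-on (URL is an assumption; verify against the Flannel docs)
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml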
It looks like your client configuration thinks the server runs on localhost:8080 which is probably not the case.
Can you look at the ~/.kube/config file to see what is configured there?
Also try running kubectl --server=<SERVER_IP>:<SERVER_PORT> get nodes and see if this helps.
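A quick way to see what kubectl is actually pointed at (a small sanity check; the exact output depends on your setup):
# An empty or missing config here explains the localhost:8080 default
kubectl config view
kubectl config current-context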
I have installed Oracle VM and created an Ubuntu machine. I have not tried any major cluster or deployed anything yet; I have just started reading about Kubernetes and, as an example, tried creating a simple pod. But I am getting a host error. Can anyone tell me where I am going wrong?
I have tried a simple kubectl run command. Here is a snip of my issue:
You need to disable swap on the system for the kubelet to work. You can disable swap with sudo swapoff -a and then restart the kubelet service with sudo systemctl restart kubelet.
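Note that swapoff -a only lasts until the next reboot. To keep swap off permanently you also need to remove or comment out the swap entry in /etc/fstab, roughly like this (a sketch; inspect your fstab before editing it):
sudo swapoff -a
# Comment out any swap lines so the change survives reboots
sudo sed -i '/ swap / s/^/#/' /etc/fstab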
What happened:
When I reboot the CentOS 7 server and run kubectl get pod, I see the error below:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
What you expected to happen:
Before I rebooted the system, the Kubernetes cluster had three nodes, and pods, services, etc. were all working fine.
How to reproduce it (as minimally and precisely as possible):
reboot the server
kubectl get pod
Anything else we need to know?
I even used sudo kubeadm reset and init again but the issue still exists!
There are a few things to consider:
kubeadm reset performs a best-effort revert of the changes made by kubeadm init or kubeadm join, so some configuration may remain on the cluster.
Make sure you run kubectl as the proper user. You might need to copy admin.conf to .kube/config in that user's home directory.
After kubeadm init you need to run the following commands:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
Make sure you do so.
Check CentOS's firewall configuration; after the restart it might revert to its defaults.
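For the firewall point, re-opening the ports kubeadm needs on a firewalld-based CentOS master looks roughly like this (port list taken from the kubeadm requirements; adjust to your setup):
sudo firewall-cmd --permanent --add-port=6443/tcp        # Kubernetes API server
sudo firewall-cmd --permanent --add-port=2379-2380/tcp   # etcd
sudo firewall-cmd --permanent --add-port=10250-10252/tcp # kubelet, scheduler, controller-manager
sudo firewall-cmd --reload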
Please let me know if that helped.
I am trying to install a kops cluster on AWS, and as a pre-requisite I installed kubectl as per the instructions provided here:
https://kubernetes.io/docs/tasks/tools/install-kubectl/#install-kubectl
But when I try to verify the installation, I get the error below:
ubuntu@ip-172-31-30-74:~$ kubectl version
Client Version: version.Info{Major:"1", Minor:"8", GitVersion:"v1.8.0", GitCommit:"6e937839ac04a38cac63e6a7a306c5d035fe7b0a", GitTreeState:"clean", BuildDate:"2017-09-28T22:57:57Z", GoVersion:"go1.8.3", Compiler:"gc", Platform:"linux/amd64"}
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I am not sure why, because I set up a cluster the same way earlier and everything worked fine.
Now I wanted to set up a new cluster, but I'm stuck on this.
Any help appreciated.
Two things:
If every instruction was followed properly and you are still facing the same issue, @VAS's answer might help.
However, in my case I was trying to verify with kubectl as soon as I had deployed the cluster. Note that, depending on the size of the master and worker nodes, the cluster takes some time to come up.
Once the cluster was up, kubectl was able to communicate successfully. As silly as it sounds, I waited about 15 minutes until my master was up and running, and then everything worked fine.
The connection to the server localhost:8080 was refused - did you specify the right host or port?
This error usually means that your kubectl config is not correct: it either points to the wrong address or has the wrong credentials.
If you have successfully created a cluster with kops, you just need to export its connections settings to kubectl config.
kops export kubecfg --name=<your_cluster_name> --config=~/.kube/config
If you want to use a separate config file for this cluster, you can do it by setting the environment variable:
export KUBECONFIG=~/.kube/your_cluster_name.config
kops export kubecfg --name your_cluster_name --config=$KUBECONFIG
You can also create a kubectl config for each team member using KOPS_STATE_STORE:
export KOPS_STATE_STORE=s3://<somes3bucket>
# NAME=<kubernetes.mydomain.com>
kops export kubecfg ${NAME}
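After exporting, a quick sanity check that kubectl no longer falls back to localhost:8080 (output will vary with your cluster):
kubectl config current-context   # should print your cluster name
kubectl cluster-info             # should show the real API server address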
In my particular case I forgot to configure kubectl after the installation which resulted in the exact same symptoms.
More specifically, I forgot to create and populate the config file in the $HOME/.kube directory. You can read about how to do this properly here, but the following should suffice to make the error go away:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
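To confirm the config was picked up, you can check which server address kubectl now resolves (a small sanity check):
kubectl config view --minify | grep server
kubectl get nodes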
I'm running kubectl create -f notRelevantToThisQuestion.yml
The response I get is:
Error from server (NotFound): the server could not find the requested resource
Is there any way to determine which resource was requested that was not found?
kubectl get ns returns
NAME          STATUS    AGE
default       Active    243d
kube-public   Active    243d
kube-system   Active    243d
This is not a cron job.
Client version 1.9
Server version 1.6
This is very similar to https://devops.stackexchange.com/questions/2956/how-do-i-get-kubernetes-to-work-when-i-get-an-error-the-server-could-not-find-t?rq=1, but my k8s cluster has been deployed correctly (everything has been working for almost a year; I'm adding a new pod now).
To solve this, downgrade the client or upgrade the server. In my case I had upgraded the server (a new minikube) but forgot to upgrade the client (kubectl), and ended up with these versions:
$ kubectl version --short
Client Version: v1.9.0
Server Version: v1.14.1
Once I upgraded the client version (in this case to v1.14.2), everything started working again.
Instructions on how to install (in your case, upgrade) the client are here: https://kubernetes.io/docs/tasks/tools/install-kubectl
I had the same error when trying to do CD with Jenkins and Kubernetes. In the pipeline I execute kubectl create -f app-deployment.yml -v=8. This image shows more information about the error:
The cause of the problem was in the versions:
From documentation
a client should be skewed no more than one minor version from the master, but may lead the master by up to one minor version. For example, a v1.3 master should work with v1.1, v1.2, and v1.3 nodes, and should work with v1.2, v1.3, and v1.4 clients.
From http://words.yuvi.in/post/kubectl-rbac/:
Running kubectl create -f notRelevantToThisQuestion.yml -v=8 will print all the HTTP traffic (requests and responses!) in an easy-to-read way. That way, you can identify from the HTTP responses which resource is not available.
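Another way to narrow it down, assuming your kubectl is recent enough to have these commands, is to ask the server which API groups and resource kinds it actually serves, and compare that against the kind in your YAML:
kubectl api-versions    # API groups/versions the server supports
kubectl api-resources   # resource kinds the server knows about (kubectl v1.11+)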
Apply these commands and then try again:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This solution is particularly for Mac users.
Step 1: Update kubernetes-cli
brew upgrade kubernetes-cli
Step 2: Overwrite the existing links
brew link --overwrite kubernetes-cli
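To confirm the upgrade took effect and that the Homebrew binary is the one on your PATH:
which kubectl            # should point into the Homebrew prefix, e.g. /usr/local/bin/kubectl
kubectl version --client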
For OpenShift, I was using an old oc CLI version; updating to the latest oc CLI solved my issue.
I stumbled upon this question when creating a resource from the Dashboard.
The resource was namespaced and I had no namespace selected. Selecting a namespace fixed the "the server could not find the requested resource" error.
In my case, I hadn't enabled Kubernetes in Docker Desktop.
I enabled it, which fixed the problem.
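If Kubernetes is enabled in Docker Desktop but kubectl still fails, also make sure the right context is selected (docker-desktop is the usual context name, but it can differ between versions):
kubectl config get-contexts
kubectl config use-context docker-desktop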
I am trying to install Kubernetes locally on my CentOS machine. I am following this blog http://containertutorials.com/get_started_kubernetes/index.html, with appropriate changes for CentOS and the latest Kubernetes version.
The ./kube-up.sh script runs and exits with no errors, yet I don't see the server started on port 8080. Is there a way to find out what the error was, and is there any other procedure to follow on CentOS 6.3?
The easiest way to install a Kubernetes cluster is using kubeadm. The initial post detailing the setup steps is here, and the detailed documentation for kubeadm can be found here. With this you will get the latest released Kubernetes.
If you really want to use the script to bring up the cluster, here is what I did:
Install the required packages
yum install -y git docker etcd
Start and enable the docker service
systemctl enable --now docker
Install golang
Install the latest Go version, because the default CentOS golang package is old and Kubernetes needs at least go1.7 to compile:
curl -O https://storage.googleapis.com/golang/go1.8.1.linux-amd64.tar.gz
tar -C /usr/local -xzf go1.8.1.linux-amd64.tar.gz
export PATH=$PATH:/usr/local/go/bin
Setup GOPATH
export GOPATH=~/go
export GOBIN=$GOPATH/bin
export PATH=$PATH:$GOBIN
Download k8s source and other golang dependencies
Note: this might take some time depending on your internet speed.
go get -d k8s.io/kubernetes
go get -u github.com/cloudflare/cfssl/cmd/...
Start cluster
cd $GOPATH/src/k8s.io/kubernetes
./hack/local-up-cluster.sh
In a new terminal:
alias kubectl=$GOPATH/src/k8s.io/kubernetes/cluster/kubectl.sh
kubectl get nodes