The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port? - kubernetes

I have set up a Kubernetes cluster with Kubespray.
Once I restart the node and check its status, I get the following:
$ kubectl get nodes
The connection to the server 127.0.0.1:6443 was refused - did you specify the right host or port?
Environment:
OS : CentOS 7
Kubespray
kubelet version: 1.22.3
Need your help on this.
Regards,
Zain

This worked for me. I'm using minikube.
When checking the minikube status by running the command minikube status, you'll probably get something like this:
E0121 07:14:19.882656 7165 status.go:415] kubeconfig endpoint: got:
127.0.0.1:55900, want: 127.0.0.1:49736
type: Control Plane
host: Running
kubelet: Stopped
apiserver: Stopped
kubeconfig: Misconfigured
To fix it, I just followed the next steps:
minikube update-context
minikube start
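As a quick check (not part of the original answer), you can confirm that the kubeconfig now points at the endpoint minikube expects:
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'
minikube status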

The steps below may solve your issue.
The kubelet may be down; use the following commands on the master node.
1. sudo -i
2. swapoff -a
3. exit
4. strace -eopenat kubectl version
Then try using kubectl get nodes.
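If kubectl still fails after this, it is worth confirming that the kubelet itself is running on the master node; a couple of standard checks (not specific to this answer):
sudo systemctl status kubelet
sudo journalctl -xeu kubelet | tail -n 50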

Thank you Sai for your inputs. The journalctl -xeu kubelet output was: Error while dialing dial unix /var/run/cri-dockerd.sock: connect: no such file or directory. I restarted and enabled the cri-dockerd service:
sudo systemctl enable cri-dockerd.service
sudo systemctl restart cri-dockerd.service
then sudo systemctl start kubelet, and finally it worked for me.
# kubectl cluster-info
Kubernetes control plane is running at https://127.0.0.1:6443
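In case the kubelet still refuses to start at this point, a quick sanity check (not part of the original steps) is to confirm the cri-dockerd socket now exists and both services are active:
ls -l /var/run/cri-dockerd.sock
sudo systemctl is-active cri-dockerd.service kubelet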
This link gives more info: https://github.com/kubernetes-sigs/kubespray/issues/8734
Regards, Zain

Related

Kubernetes Deployment: Error: failed to create deployment:

Environment Details:
Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
I have installed and configured the Kubernetes cluster: the master node on a bare-metal server and the worker node on a Windows Server 2012 Hyper-V VM. Both master and worker nodes have the same Kubernetes version (v1.20.2) and CentOS 7. I successfully joined the worker node to the master; below is the get nodes status.
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    <none>                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
While creating a deployment on the worker node I get the error message below.
On the worker node, I issued the following command:
$ kubectl create deployment nginx-depl --image=nginx
Error message is:
error: failed to create deployment: Post "http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Please help me to resolve this issue, as I am not able to understand what the problem is.
Maybe you have to run minikube start first. I'm learning, and between one class and another I forgot to run this command. I hope this helps someone.
This worked for me.
It seems that you are issuing the kubectl create deployment command on the worker node. This won't work because kubectl communicates with the kube-apiserver for cluster communication. Since the apiserver does not run on the worker node, executing the command there will raise an error.
Instead, execute the same kubectl command on the master node as a non-root user, with the following additional commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create deployment nginx-depl --image=nginx
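If you genuinely need to run kubectl on the worker node as well, one option is to copy the admin kubeconfig from the master to the worker; a sketch, assuming SSH access to the master and with <master-ip> as a placeholder for your master's address (keep in mind this distributes full admin credentials):
$ mkdir -p $HOME/.kube
$ scp root@<master-ip>:/etc/kubernetes/admin.conf $HOME/.kube/config
$ kubectl get nodes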

K8s error: [kube-peers] Could not get peers: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host

I am new to K8s.
I set up a cluster with 1 master and 1 worker in Azure. I am using Azure VMs.
I was able to set up etcd, the API server, the scheduler, etc. on the master, and the kubelet and kube-proxy on the worker, and I can fetch nodes using kubectl get nodes on the master.
The nodes are in the NotReady state, as I was still trying to set up networking using weavenet.
But the pods are not being created. One pod is created but the other one is throwing an error. Upon investigation it looks like the kubernetes service has an endpoint which is not reachable from the worker node. How can I fix this?
As suggested in the comment section, the root cause of the issue was related to firewall configuration. After adjusting the firewall rules, the OP confirmed that it's working.
Thank you confusedgenius and PjoterS, yes it was a firewall issue.
In general, no route to host indicates that either the host is unavailable or there are network issues.
In this scenario the firewall probably blocked traffic to port 443 on the 10.96.0.1 address.
Depending on your OS, you can use preinstalled firewall management tools such as firewalld.
### Temporarily disable firewalld
$ sudo systemctl stop firewalld
### Add a port on the firewall
$ sudo firewall-cmd --add-port=<port>/<protocol> --permanent
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --list-all
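For example, to open the ports relevant to this error with firewalld (assuming the defaults: 6443/tcp for the API server and 443/tcp for the HTTPS endpoint mentioned above):
$ sudo firewall-cmd --add-port=6443/tcp --permanent
$ sudo firewall-cmd --add-port=443/tcp --permanent
$ sudo firewall-cmd --reload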
UFW on Ubuntu
$ sudo ufw disable
Firewall stopped and disabled on system startup
Or just flush iptables as in this article:
systemctl stop kubelet
systemctl stop docker
iptables --flush
iptables -t nat --flush
systemctl start kubelet
systemctl start docker

How to join worker node in existing cluster?

I was facing some issues while joining a worker node to an existing cluster.
Please find below the details of my scenario.
I've created an HA cluster with 4 masters and 3 workers.
I removed 1 master node.
The removed node is no longer part of the cluster, and the reset was successful.
Now I am joining the removed node as a worker node to the existing cluster.
I'm running the command below:
kubeadm join --token giify2.4i6n2jmc7v50c8st 192.168.230.207:6443 --discovery-token-ca-cert-hash sha256:dd431e6e19db45672add3ab0f0b711da29f1894231dbeb10d823ad833b2f6e1b
In the above command, 192.168.230.207 is the cluster IP.
Result of the above command:
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.230.206:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.230.206:6443: connect: connection refused
Steps already tried:
Edited this file (kubectl -n kube-system get cm kubeadm-config -o yaml) using kubectl patch and removed references to the removed node ("192.168.230.206").
We are using external etcd, so I checked the member list to confirm the removed node is no longer part of etcd. I ran the command below: etcdctl --endpoints=https://cluster-ip --ca-file=/etc/etcd/pki/ca.pem --cert-file=/etc/etcd/pki/client.pem --key-file=/etc/etcd/pki/client-key.pem member list
Can someone please help me resolve this issue as I'm not able to join this node?
In addition to P Ekambaram's answer, I assume that you probably have to completely dispose of all redundant data from the previous kubeadm join setup.
Remove cluster entries via kubeadm command on worker node: kubeadm reset;
Wipe all redundant data residing on worker node: rm -rf /etc/kubernetes; rm -rf ~/.kube;
Try to re-join worker node.
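If the original bootstrap token has also expired, you can generate a fresh join command on any working control-plane node (a standard kubeadm step, not specific to this answer):
kubeadm token create --print-join-command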
Use these instructions one after another to completely remove the old installation on the worker node:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker.service
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
then reinstall using
yum install -y kubelet kubeadm kubectl
reboot
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
then use kubeadm join command
Fix these problems and run the join command:
docker service is not enabled, please run 'systemctl enable docker.service'
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
kubelet service is not enabled, please run 'systemctl enable kubelet.service'
kubeadm join tries to reach 192.168.230.206, whereas the new IP is 192.168.230.207.
In addition to the change in the kubeadm-config ConfigMap, you may need to change your cluster IP address (cluster.server) in this ConfigMap:
kubectl edit cm -n kube-public cluster-info
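To check the current value non-interactively before editing, something like this works (the server field should point at the reachable endpoint, e.g. https://192.168.230.207:6443):
kubectl -n kube-public get cm cluster-info -o jsonpath='{.data.kubeconfig}' | grep server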

Container runtime network not ready: cni config uninitialized

I'm installing Kubernetes (kubeadm) on a CentOS VM running inside VirtualBox, so with yum I installed kubeadm, kubelet, and docker.
Now while trying to set up the cluster with kubeadm init --pod-network-cidr=192.168.56.0/24 --apiserver-advertise-address=192.168.56.33/32, I run into the following errors:
Unable to update cni config: No networks found in /etc/cni/net.d
Container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
So I checked: there is no cni folder in /etc even though kubernetes-cni-0.6.0-0.x86_64 is installed. I tried commenting out KUBELET_NETWORK_ARGS in /etc/systemd/system/kubelet.service.d/10-kubeadm.conf, but it didn't work.
PS:
I'm installing behind proxy.
I have multiple network adapters:
NAT : 10.0.2.15/24 for Internet
Host Only : 192.168.56.33/32
And docker interface : 172.17.0.1/16
Docker version: 17.12.1-ce
kubectl version: Major:"1", Minor:"9", GitVersion:"v1.9.3"
Centos 7
Add a pod network add-on:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/2140ac876ef134e0ed5af15c65e414cf26827915/Documentation/kube-flannel.yml
If flannel doesn't work, then try Calico:
kubectl apply -f https://docs.projectcalico.org/manifests/calico-typha.yaml
There are several points to remember when setting up the cluster with "kubeadm init", and they are clearly documented on the Kubernetes site (kubeadm cluster create):
"kubeadm reset" if you have already created a previous cluster
Remove the ".kube" folder from the home or root directory
(Also stopping the kubelet with systemctl will allow for a smooth setup)
Disable swap permanently on the machine, especially if you are rebooting your Linux system (see the example after this list)
And not to forget, install a pod network add-on according to the instructions provided on the add-on site (not the Kubernetes site)
Follow the post initialization steps given on the command window by kubeadm.
If all these steps are followed correctly then your cluster will run properly.
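For the swap point above, a commonly used pair of commands to turn swap off now and keep it off across reboots is (a sketch; adjust the fstab edit to your distribution):
sudo swapoff -a
sudo sed -i '/ swap / s/^/#/' /etc/fstab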
And don't forget to run the following command to enable scheduling on the created cluster:
kubectl taint nodes --all node-role.kubernetes.io/master-
For how to install from behind a proxy, you may find this useful:
install using proxy
Check this answer.
Use this PR (until it is approved):
kubectl -n kube-system apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
It's a known issue: coreos/flannel#1044
Stopping and disabling AppArmor and restarting the containerd service on that node will solve your issue:
root@node:~# systemctl stop apparmor
root@node:~# systemctl disable apparmor
root@node:~# systemctl restart containerd.service
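You can then confirm the container runtime came back healthy before re-checking the node (a quick check, assuming containerd with crictl installed):
systemctl is-active containerd
sudo crictl info | grep -i networkready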
I could not see the helm server version:
$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Error: could not find a ready tiller pod
The kubectl describe node kubernetes-master --namespace digital-ocean-namespace command was showing the message:
NetworkPluginNotReady message:docker: network plugin is not ready: cni config uninitialized
The nodes were not ready:
$ kubectl get node --namespace digital-ocean-namespace
NAME STATUS ROLES AGE VERSION
kubernetes-master NotReady master 82m v1.14.1
kubernetes-worker-1 NotReady <none> 81m v1.14.1
I had a version compatibility issue between Kubernetes and the flannel network.
My k8s version was 1.14 as seen in the command:
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:11:31Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.1", GitCommit:"b7394102d6ef778017f2ca4046abbaa23b88c290", GitTreeState:"clean", BuildDate:"2019-04-08T17:02:58Z", GoVersion:"go1.12.1", Compiler:"gc", Platform:"linux/amd64"}
After re-installing the flannel network with the command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
I could then see the helm server version:
$ helm version --tiller-namespace digital-ocean-namespace
Client: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
Server: &version.Version{SemVer:"v2.13.1", GitCommit:"618447cbf203d147601b4b9bd7f8c37a5d39fbb4", GitTreeState:"clean"}
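After the network add-on is re-applied, the nodes should move to Ready within a minute or two; you can watch that happen with a routine check:
$ kubectl get pods -n kube-system -o wide | grep -i flannel
$ kubectl get nodes -w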
Resolved this issue by installing the Calico CNI plugin using the following commands:
curl https://docs.projectcalico.org/manifests/calico.yaml -O
kubectl apply -f calico.yaml
It was a proxy error, as mentioned on GitHub: https://github.com/kubernetes/kubernetes/issues/34695
They suggested using kubeadm init --use-kubernetes-version v1.4.1, but I changed my network entirely (no proxy) and managed to set up my cluster.
After that we can set up the pod network with kubectl apply -f ..., see https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/#pod-network
I solved this by installing a pod network add-on. I used the Flannel pod network, which is a very simple overlay network that satisfies the Kubernetes requirements.
You can do it with this command:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
You can read more about this in the Kubernetes documentation:
https://kubernetes.io/docs/setup/production-environment/tools/kubeadm/create-cluster-kubeadm/#pod-network
I faced the same errors and it seems that systemd had a problem. I don't remember my last systemd version, but updating it solved the problem for me.
In my case I restarted Docker and the status changed to Ready:
sudo systemctl stop docker
sudo systemctl start docker
I faced the same errors. I was seeing the issue after the slave node joined the cluster; the slave node was showing the status 'NotReady' after joining.
I checked kubectl describe node ksalve and observed the mentioned issue.
After digging deeper I found that the cgroup driver was different on the master and slave nodes.
On the master I had configured systemd, whereas the slave had only the default cgroupfs.
Once I removed the systemd driver from the master node, the slave status immediately changed to Ready.
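To see which cgroup driver each side is actually using, a couple of quick read-only checks (assuming Docker as the runtime, as in this setup; the kubelet config path can differ by install method):
docker info | grep -i "cgroup driver"
grep -i cgroup /var/lib/kubelet/config.yaml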
My problem was that I was updating the hostname after the cluster was created. By doing that, it's like the master didn't know it was the master.
I am still running:
sudo hostname $(curl 169.254.169.254/latest/meta-data/hostname) [1][2]
but now I run it before the cluster initialization.
The error that led me to this, from running sudo journalctl -u kubelet:
Unable to register node "ip-10-126-121-125.ec2.internal" with API server: nodes "ip-10-126-121-125.ec2.internal" is forbidden: node "ip-10-126-121-125" cannot modify node "ip-10-126-121-125.ec2.internal"
This is for the AWS VPC CNI:
Step 1: kubectl get mutatingwebhookconfigurations -oyaml > mutating.txt
Step 2: kubectl delete -f mutating.txt
Step 3: Restart the node
Step 4: You should see that the node is Ready
Step 5: Install the mutatingwebhookconfiguration back
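For step 5, one way to put the configuration back is to re-apply the dump saved in step 1 (assuming mutating.txt is still valid for your CNI version):
kubectl apply -f mutating.txt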
In my case, it was because I forgot to open port 8285. Port 8285 is used by flannel; you need to open it in the firewall.
For example, if you use the flannel add-on and your OS is CentOS:
firewall-cmd --permanent --add-port=8285/tcp
firewall-cmd --reload
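Note that flannel's documentation lists these as UDP ports (8285/udp for the udp backend and 8472/udp for the default vxlan backend), so depending on your backend you may also need:
firewall-cmd --permanent --add-port=8472/udp
firewall-cmd --reload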

The connection to the server localhost:8080 was refused

I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
From master (node1),
$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
From worker (node 2),
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
I am not sure how to fix this issue. Any help is appreciated.
On executing,"journalctl -xeu kubelet" I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network ... which I am not able to because of the above error.
Thanks!
kubectl interfaces with kube-apiserver for cluster management. The command works on the master node because that's where kube-apiserver runs. On the worker nodes, only the kubelet and kube-proxy are running.
In fact, kubectl is supposed to be run on a client (e.g. a laptop or desktop) and not on the Kubernetes nodes.
From the master you need ~/.kube/config; pass this file as an argument to the kubectl command. Copy the config file to the other server or laptop, then pass it as an argument to kubectl, e.g.:
kubectl --kubeconfig=~/.kube/config
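Alternatively, you can point kubectl at the copied file via the environment so you don't have to pass the flag every time (equivalent effect):
export KUBECONFIG=$HOME/.kube/config
kubectl get nodes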
This worked for me after executing the following commands:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As a hint, the message being prompted indicates it's related to the network configuration.
So one potential answer, which worked in my case, is to have a look at the cluster value for the context within contexts in your kubeconfig.
My error was that I had placed an incorrect cluster name there.
Having the appropriate cluster name is crucial for finding it for the respective context, and the error will disappear.
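To inspect which cluster name each context references, these read-only commands help (standard kubectl, nothing specific to this answer):
kubectl config get-contexts
kubectl config view --minify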
To solve the issue "The connection to the server localhost:8080 was refused - did you specify the right host or port?", you may be missing a step.
My Fix:
On macOS, if you install K8s with brew, you still need to brew install minikube; afterwards you should run minikube start. This will start your cluster.
Run the command kubectl cluster-info and you should get a happy path response similar to:
Kubernetes control plane is running at https://127.0.0.1:63000
KubeDNS is running at https://127.0.0.1:63308/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes install steps: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Minikube docs: https://minikube.sigs.k8s.io/docs/start/
Ensure the right context is selected if you're running Kubernetes in Docker Desktop.
Once you've selected it correctly, you'll be able to run kubectl commands without any exception.
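To list the available contexts and switch to the one Docker Desktop creates (the context is named docker-desktop by default):
% kubectl config get-contexts
% kubectl config use-context docker-desktop
After switching, the commands below should succeed: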
% kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 2d11h v1.22.5