Environment Details:
Kubernetes version: `v1.20.2`
Master Node: `Bare Metal/Host OS: CentOS 7`
Worker Node: `VM/Host OS: CentOS 7`
I have installed and configured the Kubernetes cluster: the master node on a bare-metal server and the worker node in a Hyper-V VM on Windows Server 2012 (guest OS CentOS 7). Both master and worker nodes run the same Kubernetes version (v1.20.2) on CentOS 7. The worker node joined the master successfully; below is the get nodes status.
$ kubectl get nodes
NAME               STATUS   ROLES                  AGE    VERSION
k8s-worker-node1   Ready    <none>                 2d2h   v1.20.2
master-node        Ready    control-plane,master   3d4h   v1.20.2
While creating a deployment on the worker node, I am getting the error message below.
On the worker node, I issued the following command:
$ kubectl create deployment nginx-depl --image=nginx
Error message is:
error: failed to create deployment: Post "http://localhost:8080/apis/apps/v1/namespaces/default/deployments?fieldManager=kubectl-create": dial tcp: lookup localhost on 8.8.8.8:53: no such host
Please help me to resolve this issue, as I am not able to understand what the problem is.
Maybe you have to run minikube start first. I'm learning, and between one class and another I forgot to run this command. I hope this helps someone.
This worked for me.
It seems that you are issuing the kubectl create deployment command on the worker node. This won't work because kubectl communicates with the kube-apiserver for cluster operations, and the apiserver does not run on the worker node, so executing the command there raises an error.
Instead, execute the same kubectl command on the master node as a non-root user, after running the following additional commands:
$ mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
$ kubectl create deployment nginx-depl --image=nginx
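If you are working as root on the master and don't want to copy the file, the kubeadm docs offer an alternative: point KUBECONFIG at admin.conf directly. A minimal sketch of that variant:
$ export KUBECONFIG=/etc/kubernetes/admin.conf
$ kubectl cluster-info
$ kubectl create deployment nginx-depl --image=nginx
Either way, this only works on the node where kubeadm init was run, because that is where /etc/kubernetes/admin.conf is generated.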
Below are the commands and their outputs:
root@k8s-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@k8s-master:~# kubectl get services -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It looks like you are not running EKS; otherwise you could not access the masters at all, since with EKS the masters are managed by AWS and you can't SSH into them.
Your kubectl commands make calls to the Kubernetes API server, so you have to check whether it is actually running on localhost on port 8080.
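A quick way to see where kubectl is pointing and whether an API server is actually reachable there (<master-ip> below is a placeholder for your master's address; kubeadm clusters normally serve on 6443, and localhost:8080 is just the fallback kubectl uses when no kubeconfig is configured):
$ kubectl config view --minify
$ curl -k https://<master-ip>:6443/healthz
If kubectl config view prints an empty configuration or complains about a missing current context, the kubeconfig setup described above has not been done on that machine.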
I was facing some issues while joining a worker node to an existing cluster.
Please find the details of my scenario below.
I've created an HA cluster with 4 masters and 3 workers.
I removed 1 master node.
The removed node is no longer part of the cluster, and the reset was successful.
Now I am joining the removed node to the existing cluster as a worker node.
I'm running the command below:
kubeadm join --token giify2.4i6n2jmc7v50c8st 192.168.230.207:6443 --discovery-token-ca-cert-hash sha256:dd431e6e19db45672add3ab0f0b711da29f1894231dbeb10d823ad833b2f6e1b
In the above command, 192.168.230.207 is the cluster IP.
Result of the above command:
[preflight] Running pre-flight checks
[WARNING Service-Docker]: docker service is not enabled, please run 'systemctl enable docker.service'
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
[WARNING FileExisting-tc]: tc not found in system path
[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
[preflight] Reading configuration from the cluster...
[preflight] FYI: You can look at this config file with 'kubectl -n kube-system get cm kubeadm-config -oyaml'
error execution phase preflight: unable to fetch the kubeadm-config ConfigMap: failed to get config map: Get https://192.168.230.206:6443/api/v1/namespaces/kube-system/configmaps/kubeadm-config: dial tcp 192.168.230.206:6443: connect: connection refused
Steps already tried:
Edited this ConfigMap (kubectl -n kube-system get cm kubeadm-config -o yaml) with a patch and removed the references to the removed node ("192.168.230.206").
We are using external etcd, so I checked the member list to confirm the removed node is no longer part of etcd. I ran the command below:
etcdctl --endpoints=https://cluster-ip --ca-file=/etc/etcd/pki/ca.pem --cert-file=/etc/etcd/pki/client.pem --key-file=/etc/etcd/pki/client-key.pem member list
Can someone please help me resolve this issue as I'm not able to join this node?
In addition to @P Ekambaram's answer, I assume that you probably have to completely dispose of all redundant data from the previous kubeadm join setup.
Remove cluster entries via kubeadm command on worker node: kubeadm reset;
Wipe all redundant data residing on worker node: rm -rf /etc/kubernetes; rm -rf ~/.kube;
Try to re-join worker node.
Run these commands one after another to completely remove the old installation from the worker node:
kubeadm reset
systemctl stop kubelet
systemctl stop docker
rm -rf /var/lib/cni/
rm -rf /var/lib/kubelet/*
rm -rf /etc/cni/
ifconfig cni0 down
ifconfig flannel.1 down
ifconfig docker0 down
ip link delete cni0
ip link delete flannel.1
systemctl start docker.service
yum remove kubeadm kubectl kubelet kubernetes-cni kube*
yum autoremove
rm -rf ~/.kube
then reinstall using
yum install -y kubelet kubeadm kubectl
reboot
systemctl start docker && systemctl enable docker
systemctl start kubelet && systemctl enable kubelet
Then run the kubeadm join command.
Fix these problems and run the join command:
docker service is not enabled, please run 'systemctl enable docker.service'
detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd".
kubelet service is not enabled, please run 'systemctl enable kubelet.service
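For the first and third warnings, enabling the services is enough; for the cgroup-driver warning, the Kubernetes container-runtime docs recommend switching Docker to the systemd driver via /etc/docker/daemon.json. A sketch of the fixes (this assumes /etc/docker/daemon.json does not already contain other options, otherwise merge the key in by hand):
systemctl enable docker.service
systemctl enable kubelet.service
cat <<EOF > /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"]
}
EOF
systemctl daemon-reload
systemctl restart docker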
kubeadm join tries to reach 192.168.230.206, whereas the new IP is 192.168.230.207.
In addition to the change in the kubeadm-config ConfigMap, you may need to update your cluster IP address (cluster.server) in this ConfigMap:
kubectl edit cm -n kube-public cluster-info
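In that ConfigMap, the value to update is the server field of the embedded kubeconfig. The relevant excerpt looks roughly like this (addresses taken from the question, everything else trimmed):
data:
  kubeconfig: |
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: <...>
        server: https://192.168.230.206:6443   # change to https://192.168.230.207:6443
The same stale endpoint may also still be listed under ClusterStatus.apiEndpoints in the kubeadm-config ConfigMap in kube-system.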
Running under Ubuntu, I used kubeadm init to set up my cluster (master node) and copied /etc/kubernetes/admin.conf to $HOME/.kube/config, and all was well when using kubectl.
However, after a reboot my master node's IP address changed, and it is no longer the same as the one in $HOME/.kube/config, so kubectl can no longer connect.
So how do I regenerate admin.conf now that I have a new IP address? Running kubeadm init would just kill everything, which is not what I want.
I found this solution on the internet and it works for me:
systemctl stop kubelet docker
cd /etc/
mv kubernetes kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup
mkdir -p kubernetes
cp -r kubernetes-backup/pki kubernetes
rm kubernetes/pki/{apiserver.*,etcd/peer.*}
systemctl start docker
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
#Run "kubeadm reset" on all nodes if was this error "error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists"
cp kubernetes/admin.conf ~/.kube/config
kubectl get nodes --sort-by=.metadata.creationTimestamp
kubectl delete node $(kubectl get nodes -o jsonpath='{.items[?(@.status.conditions[0].status=="Unknown")].metadata.name}')
kubectl get pods --all-namespaces
After These, Join your Slaves to Master.
Reference: https://medium.com/@juniarto.samsudin/ip-address-changes-in-kubernetes-master-node-11527b867e88
The following command can be used to regenerate admin.conf
kubeadm alpha phase kubeconfig admin --apiserver-advertise-address <new_ip>
However, if you use an IP instead of a hostname, your API server certificate will be invalid. So either regenerate your certs (kubeadm alpha phase certs renew apiserver), use hostnames instead of IPs, or add the (insecure) --insecure-skip-tls-verify flag when using kubectl.
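For a quick check with the regenerated kubeconfig before the certificates are sorted out, the flag can be passed directly, for example:
kubectl --insecure-skip-tls-verify get nodes
This only skips verification of the API server certificate; it is a stopgap, not a fix for the certificate itself.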
You do not want to use kubeadm reset. That will reset everything and you would have to start configuring your cluster again.
Well, in your scenario, please have a look at the steps below:
nano /etc/hosts (update your new IP against YOUR_HOSTNAME)
nano /etc/kubernetes/config (configuration settings related to your cluster): in this file, look for the following params and update them accordingly:
KUBE_MASTER="--master=http://YOUR_HOSTNAME:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME:2379" #2379 is default port
nano /etc/etcd/etcd.conf (conf related to etcd)
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME/WHERE_EVER_ETCD_HOSTED:2379"
2379 is the default port for etcd, and you can define multiple etcd servers here, comma-separated.
Restart kubelet, apiserver, etcd services.
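Assuming these components run as plain systemd services, which is the case for this older /etc/kubernetes/config style of installation, the restart could look like:
systemctl restart etcd
systemctl restart kube-apiserver
systemctl restart kubelet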
It is better to use hostnames instead of IPs to avoid such scenarios.
Hope it helps!
I was able to cluster 2 nodes together in Kubernetes. The master node seems to be running fine but running any command on the worker node results in the error: "The connection to the server localhost:8080 was refused - did you specify the right host or port?"
From master (node1),
$ kubectl get nodes
NAME STATUS AGE VERSION
node1 Ready 23h v1.7.3
node2 Ready 23h v1.7.3
From worker (node 2),
$ kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
$ telnet localhost 8080
Trying ::1...
telnet: connect to address ::1: Connection refused
Trying 127.0.0.1...
telnet: connect to address 127.0.0.1: Connection refused
$ ping localhost
PING localhost (127.0.0.1) 56(84) bytes of data.
64 bytes from localhost (127.0.0.1): icmp_seq=1 ttl=64 time=0.032 ms
I am not sure how to fix this issue. Any help is appreciated.
On executing,"journalctl -xeu kubelet" I see:
"CNI failed to retrieve network namespace path: Cannot find network namespace for the terminated container", but this seems to be related to installing a pod network ... which I am not able to because of the above error.
Thanks!
kubectl interfaces with kube-apiserver for cluster management. The command works on the master node because that's where kube-apiserver runs. On the worker nodes, only kubelet and kube-proxy are running.
In fact, kubectl is supposed to be run on a client (eg. laptop, desktop) and not on the kubernetes nodes.
From the master you need ~/.kube/config; pass this file as an argument to kubectl. Copy the config file to the other server or a laptop, then point kubectl at it with the --kubeconfig flag,
e.g.:
kubectl --kubeconfig ~/.kube/config get nodes
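A fuller sketch of that workflow, with placeholder user and host names:
scp <user>@<master-host>:~/.kube/config ./master-kubeconfig
kubectl --kubeconfig ./master-kubeconfig get nodes
Alternatively, export KUBECONFIG=./master-kubeconfig avoids repeating the flag on every command.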
This worked for me after executing following commands:
$ sudo mkdir -p $HOME/.kube
$ sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
$ sudo chown $(id -u):$(id -g) $HOME/.kube/config
As a hint, the message being prompted indicates it's related to the network.
So another potential answer, which worked for my resolution, is to have a look at the cluster value for your context under contexts in your kubeconfig.
My error was that I had placed an incorrect cluster name there.
The cluster name in the context must match a cluster defined in the same kubeconfig; once it does, the error disappears (see the sketch below).
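For reference, this is roughly the shape of the relevant part of ~/.kube/config (the names here are made up for illustration); the cluster value inside the context must exactly match a name under clusters:
clusters:
- cluster:
    server: https://192.168.0.10:6443
  name: my-cluster
contexts:
- context:
    cluster: my-cluster      # must match clusters[].name above
    user: kubernetes-admin
  name: kubernetes-admin@my-cluster
current-context: kubernetes-admin@my-cluster
kubectl config get-contexts and kubectl config view are handy for checking these values without opening the file.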
To solve the issue The connection to the server localhost:8080 was refused - did you specify the right host or port?, you may be missing a step.
My Fix:
On macOS, if you install K8s with brew, you still need to brew install minikube; afterwards you should run minikube start. This will start your cluster.
Run the command kubectl cluster-info and you should get a happy path response similar to:
Kubernetes control plane is running at https://127.0.0.1:63000
KubeDNS is running at https://127.0.0.1:63308/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
Kubernetes install steps: https://kubernetes.io/docs/tasks/tools/install-kubectl-macos/
Minikube docs: https://minikube.sigs.k8s.io/docs/start/
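To confirm that the cluster minikube started is the one kubectl is talking to, a quick check using standard minikube and kubectl commands:
minikube status
kubectl config current-context
The current context should be minikube.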
Make sure the right context is selected if you're running Kubernetes in Docker Desktop.
Once you've selected it correctly, you'll be able to run kubectl commands without any exception (the commands for checking and switching the context are shown after the output below):
% kubectl cluster-info
Kubernetes control plane is running at https://kubernetes.docker.internal:6443
CoreDNS is running at https://kubernetes.docker.internal:6443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
% kubectl get nodes
NAME STATUS ROLES AGE VERSION
docker-desktop Ready control-plane,master 2d11h v1.22.5
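Listing and switching contexts is done with the standard kubectl config subcommands; docker-desktop is the context name Docker Desktop creates:
% kubectl config get-contexts
% kubectl config use-context docker-desktop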
On OS X 10.11.6, I created 4 CentOS 7 VMs in VirtualBox, each with two interfaces (one NAT and one host-only network). Each VM's host-only interface receives an IP via DHCP and DNS via dnsmasq.
OS X is running dnsmasq configured via a /usr/local/etc/dnsmasq.conf file that contains:
interface=vboxnet0
bind-interfaces
dhcp-range=vboxnet0,192.168.56.100,192.168.56.200,255.255.255.0,infinite
dhcp-leasefile=/usr/local/etc/dnsmasq.leases
local=/dev/
expand-hosts
domain=dev
address=/kube-master.dev/192.168.56.100
address=/kube-minion1.dev/192.168.56.101
address=/kube-minion2.dev/192.168.56.102
address=/kube-minion3.dev/192.168.56.103
address=/vbox-host.dev/192.168.56.1
dhcp-host=08:00:27:09:48:16,192.168.56.100
dhcp-host=0a:00:27:00:00:00,192.168.56.1
dhcp-host=08:00:27:95:AE:39,192.168.56.101
dhcp-host=08:00:27:97:C9:D4,192.168.56.102
dhcp-host=08:00:27:9B:AD:B5,192.168.56.103
I can ssh into each VM through its respective host-only adapter's associated address (e.g., kube-master.dev, kube-minion1.dev, kube-minion2.dev, kube-minion3.dev), and then run
yum update
Skipping a few steps, I get to the point of installing kubeadm as per http://kubernetes.io/docs/getting-started-guides/kubeadm/, that is:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=http://yum.kubernetes.io/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg
https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
setenforce 0
yum install -y docker kubelet kubeadm kubectl kubernetes-cni ebtables
systemctl enable docker && systemctl start docker
systemctl enable kubelet && systemctl start kubelet
Then, it is unclear to me if the following is correct, but on kube-master.dev I execute:
kubeadm init --api-advertise-addresses=192.168.56.100 --api-external-dns-names=kube-master.dev
And then on each minion execute:
rm -Rf /etc/kubernetes/manifests/
kubeadm join --token=e7cd12.68011e93d5db7670 192.168.56.100
On kube-master.dev, I then run
kubectl get nodes
to verify that each node has joined the cluster.
The command returns:
NAME STATUS AGE
kube-master.dev Ready 44m
kube-minion1.dev Ready 40m
kube-minion2.dev Ready 39m
kube-minion3.dev Ready 39m
indicating things are groovy.
Afterward, things go entirely off the rails when I attempt to install a pod network.
On kube-master.dev, I run:
kubectl apply -f https://git.io/weave-kube
to install Weave Net, and once the pod network is installed I start monitoring whether the network is working by executing:
watch kubectl get pods --all-namespaces
And
kube-dns-654381707-05i1t 0/3
never moves off of zero.
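The usual way to dig into a pod stuck like this is describe plus the namespace events (the pod name is copied from the output above):
kubectl --namespace=kube-system describe pod kube-dns-654381707-05i1t
kubectl --namespace=kube-system get events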
So please, what am I doing wrong? I've hammered at this for days. The kubeadm documentation is a bit thin in a few places, so I'm not sure I init'ed the master correctly, and installing the pod network is a bit of conjecture on my part. Also, I haven't found a tutorial other than the Kubernetes kubeadm guide and the associated YouTube video documenting the use of kubeadm to set up a Kubernetes cluster.