I have deployed Kubernetes by following the official Kubernetes documentation.
I can see that Kubernetes is deployed because at the end I got this output:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I run kubectl get pods, it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check the cluster info, it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config, it shows the following:
apiVersion: v1
clusters:
- cluster:
certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
server: https://172.16.32.101:6443
Even telnet shows that there is a process running on 6443 but not on 6553
How can I change the port, and how can I fix this issue?
Any help would be of great use
Thanks in advance.
It looks like your latest kubectl config conflicts with the configurations of previous clusters.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the existing cluster using kubeadm reset and initializing a fresh cluster using kubeadm init, new certificates are generated. To operate the new cluster, you have to update the kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This gives you an up-to-date configuration for the last cluster you created with the kubeadm tool.
Note: You should copy the kubectl configuration for every user account that you are going to use to manage the cluster.
Here are some examples of how to manage the config file from the command line.
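For instance, these standard kubectl config subcommands let you inspect and switch between the clusters stored in a kubeconfig (a minimal sketch; the context name my-new-cluster is just a placeholder):
kubectl config view                         # show the merged kubeconfig currently in use
kubectl config get-contexts                 # list all contexts (cluster/user/namespace combinations)
kubectl config current-context              # show which context kubectl uses right now
kubectl config use-context my-new-cluster   # switch to another context by name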
I figured out the issue: it was caused by the firewall on the machine. I was able to join nodes to the cluster once I allowed traffic through port 6443. I didn't fix the issue with this post, but for beginners, this K8s-on-AWS guide gives a better idea.
Thanks for the help guys...!!!
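For reference, allowing traffic on the API server port looks roughly like this, depending on which firewall the machine runs (a sketch; adjust to your distribution):
# firewalld (e.g. CentOS/RHEL)
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload
# ufw (e.g. Ubuntu)
sudo ufw allow 6443/tcp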
I have created a Kubernetes cluster with 2 nodes, one Master node and one Worker node (2 different VMs).
The worker node has joined the cluster successfully, so when I run the command
kubectl get nodes on my master node, both nodes appear in the cluster!
However, when I run the command kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml from my worker node's terminal, in order to create a deployment on the worker node, I get the following error:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Any idea what is going on here?
It looks like you have an issue with your kubeconfig file; localhost:8080 is the default server kubectl connects to in the absence of that file. Kubernetes uses the kubeconfig file to store cluster authentication information and a list of contexts to which kubectl refers when running commands - that's why kubectl can't work properly without this file.
To check for the presence of a kubeconfig file, run: kubectl config view.
Or just check whether a file named config exists in the $HOME/.kube directory, which is the default location for the kubeconfig file.
If it is absent, you would need to copy the config file to your node, e.g.:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
sudo service kubelet restart
It is also possible to generate the config file - instead of copying it - as described here.
The easy way to do it is to copy the config from the master node, usually found at /etc/kubernetes/admin.conf, to whatever node you want to use kubectl from (even the master node itself). The destination is $HOME/.kube/config.
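From the worker node that could look roughly like the sketch below; <user> and <master-ip> are placeholders, and because admin.conf is root-owned you may first need to copy it to a location the SSH user can read on the master.
mkdir -p $HOME/.kube
# copy the kubeconfig from the master (assumes the remote user can read the file)
scp <user>@<master-ip>:/etc/kubernetes/admin.conf $HOME/.kube/config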
Also, you can run this command from the master node and target a specific node by specifying a nodeSelector or label, as sketched below (see the Kubernetes docs on assigning Pods to nodes).
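A rough illustration of that approach (the node name and label are assumptions; the manifest's pod spec would also need a matching nodeSelector entry):
# apply the manifest from the master (or any machine with a working kubeconfig);
# the scheduler decides where pods run, not the machine you run kubectl from
kubectl apply -f https://k8s.io/examples/controllers/nginx-deployment.yaml
# to pin the pods to a particular node, label it ...
kubectl label nodes <worker-node-name> role=worker
# ... and add "nodeSelector: { role: worker }" under spec.template.spec in the manifest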
After certificate renewal in our K8s cluster, we are getting a permission error while running some kubectl commands. For example, we get this error while running the commands "kubectl get deployments" and "kubectl get pv":
"Error from server (Forbidden): pods "<>" is forbidden: User "system:node:<>" cannot create resource "pods/exec" in API group "" in the namespace "......"
But we are able to run commands such as "kubectl get nodes" and "kubectl get pods" without any issues.
During the cert renewal process, we ran the command below and manually updated kubelet.conf in /etc/kubernetes and the config file in the /root/.kube directory.
Are there any other files we need to update with the new certificate details? Kindly help us with the remediation process/steps as soon as possible.
• sudo kubeadm alpha kubeconfig user --org system:nodes --client-name system:node:$(hostname) > kubelet.conf
Our Prod Kubernetes Cluster Information:
Kubernetes Cluster Version - 1.13.x
Master Node - 1
Worker Nodes - 11
The below-mentioned certificates were renewed recently.
apiserver.crt
apiserver-kubelet-client.crt
front-proxy-client.crt
apiserver-etcd-client.crt
Apparently, you are using the kubelet certificate instead of the kubectl (admin) certificate.
Try running this command and see if it works:
sudo kubectl get pv --kubeconfig /etc/kubernetes/admin.conf
If it does, do:
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
kubectl get pv
kubectl get deploy
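To confirm which identity a kubeconfig presents, you can inspect its embedded client certificate (a sketch; the /root/.kube/config path and the embedded-certificate layout are assumptions based on the question):
# decode the embedded client certificate and print its subject:
# "CN = kubernetes-admin" indicates the admin cert, "CN = system:node:<hostname>" the kubelet cert
grep client-certificate-data /root/.kube/config | awk '{print $2}' | base64 -d | openssl x509 -noout -subject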
I am new to Kubernetes. I got the below error while interacting with the cluster using kubectl get nodes.
ERROR:
The connection to the server localhost:8080 was refused - did you specify the right host or port?
After searching the internet, I fixed my issue with the following commands:
#sudo cp /etc/kubernetes/admin.conf $HOME/
#sudo chown $(id -u):$(id -g) $HOME/admin.conf
#export KUBECONFIG=$HOME/admin.conf
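Note that the export only lasts for the current shell session; to make it persistent you could append it to your shell profile (a sketch, assuming bash):
echo 'export KUBECONFIG=$HOME/admin.conf' >> $HOME/.bashrc
source $HOME/.bashrc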
Your kubectl is probably not referring to the right kubeconfig file, or the kubeconfig file does not have the right details.
kubeadm init gives clear instructions to execute the following commands as a regular user; if you miss running them, you end up with the reported issue.
To make kubectl work for your non-root user, run these commands, which are also part of the kubeadm init output:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
If you check the kubeadm init output again, you will find something similar to the below, asking you to execute those commands.
[addons] Applied essential addon: CoreDNS
[addons] Applied essential addon: kube-proxy
Your Kubernetes control-plane has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a Pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join <control-plane-host>:<control-plane-port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>
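After copying the config, a quick way to verify that kubectl is talking to the right API server (standard commands, nothing cluster-specific assumed):
kubectl cluster-info   # should print the control-plane endpoint, not localhost:8080
kubectl get nodes      # should list the cluster nodes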
Maybe you have not set the environment variable; try this:
export KUBERNETES_MASTER=http://MasterIP:8080
MasterIP is your Kubernetes master's IP address.
The issue is with the context used by kubectl (kubectl config use-context). Please check the current context in your kubeconfig file.
Running under Ubuntu, I used kubeadm init to set up my cluster (master node) and copied /etc/kubernetes/admin.conf to $HOME/.kube/config, and all was well when using kubectl.
However, after a reboot my master node's IP address changed, so it no longer matches what is in $HOME/.kube/config, and kubectl can no longer connect.
So how do I regenerate admin.conf now that I have a new IP address? Running kubeadm init again would just wipe everything, which is not what I want.
I found this solution on the internet and it works for me:
systemctl stop kubelet docker
cd /etc/
# back up the old cluster state but keep the existing CA material
mv kubernetes kubernetes-backup
mv /var/lib/kubelet /var/lib/kubelet-backup
mkdir -p kubernetes
cp -r kubernetes-backup/pki kubernetes
# remove the certificates that embed the old IP address
rm kubernetes/pki/{apiserver.*,etcd/peer.*}
systemctl start docker
# re-initialize, reusing the existing etcd data directory
kubeadm init --ignore-preflight-errors=DirAvailable--var-lib-etcd
#Run "kubeadm reset" on all nodes if was this error "error execution phase preflight: [preflight] Some fatal errors occurred:
[ERROR FileAvailable--etc-kubernetes-kubelet.conf]: /etc/kubernetes/kubelet.conf already exists
[ERROR Port-10250]: Port 10250 is in use
[ERROR FileAvailable--etc-kubernetes-pki-ca.crt]: /etc/kubernetes/pki/ca.crt already exists"
cp kubernetes/admin.conf ~/.kube/config
kubectl get nodes --sort-by=.metadata.creationTimestamp
kubectl delete node $(kubectl get nodes -o jsonpath='{.items[?(@.status.conditions[0].status=="Unknown")].metadata.name}')
kubectl get pods --all-namespaces
After these steps, join your worker nodes to the master again.
Reference: https://medium.com/@juniarto.samsudin/ip-address-changes-in-kubernetes-master-node-11527b867e88
The following command can be used to regenerate admin.conf
kubeadm alpha phase kubeconfig admin --apiserver-advertise-address <new_ip>
However, if you use an IP instead of a hostname, your API server certificate will be invalid. So either regenerate your certs (kubeadm alpha phase certs renew apiserver), use hostnames instead of IPs, or add the --insecure-skip-tls-verify flag when using kubectl.
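Put together, the sequence could look roughly like the sketch below; it reuses the commands named above (exact kubeadm subcommands vary between versions), and <new_ip> is a placeholder for the node's new address:
# regenerate the apiserver certificate and the admin kubeconfig for the new address
kubeadm alpha phase certs renew apiserver
kubeadm alpha phase kubeconfig admin --apiserver-advertise-address <new_ip>
# copy the regenerated admin.conf (default location assumed) over your kubeconfig
cp /etc/kubernetes/admin.conf $HOME/.kube/config
# or, as a temporary workaround, skip certificate verification
kubectl get nodes --insecure-skip-tls-verify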
You do not want to use kubeadm reset. That will reset everything and you would have to start configuring your cluster again.
Well, in your scenario, please have a look at the steps below:
nano /etc/hosts (map your new IP to YOUR_HOSTNAME; see the example after these steps)
nano /etc/kubernetes/config (configuration settings related to your cluster); in this file, look for the following params and update them accordingly:
KUBE_MASTER="--master=http://YOUR_HOSTNAME:8080"
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME:2379" #2379 is default port
nano /etc/etcd/etcd.conf (conf related to etcd)
KUBE_ETCD_SERVERS="--etcd-servers=http://YOUR_HOSTNAME/WHERE_EVER_ETCD_HOSTED:2379"
2379 is the default port for etcd, and you can define multiple etcd servers here, comma-separated.
Restart the kubelet, apiserver, and etcd services.
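For the /etc/hosts step, the entry would look something like this (the IP and hostname are hypothetical placeholders):
# /etc/hosts
10.0.0.5    k8s-master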
It is good to use a hostname instead of an IP to avoid such scenarios.
Hope it helps!
Running kubectl get pods with sudo:
sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running as a normal user:
kubectl get pods
No resources found.
By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine which server to connect to. Your home directory and environment are different when running commands as root. When no connection info is found, kubectl defaults to localhost:8080.
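If you really need to run kubectl as root, point it at your user's kubeconfig explicitly (a sketch; -E only helps when KUBECONFIG is exported in your environment):
sudo kubectl get pods --kubeconfig $HOME/.kube/config   # $HOME expands in your shell before sudo runs
sudo -E kubectl get pods                                # preserves exported env vars such as KUBECONFIG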
You would have run these commands as the normal user:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
which copied the config file into your normal user's home directory; that is why the connection works as the normal user but not with sudo.