Deleted ~/.kube/config - kubernetes

I accidentally deleted the config file at ~/.kube/config. Now every kubectl command fails because the config is missing.
Example:
kubectl get nodes
The connection to the server localhost:8080 was refused - did you specify the right host or port?
I have already installed k3s using:
export K3S_KUBECONFIG_MODE="644"
curl -sfL https://get.k3s.io | sh -s - --docker
and kubectl using:
snap install kubectl --classic
Does anyone know how to fix this?

The master copy is available at /etc/rancher/k3s/k3s.yaml, so copy it back to ~/.kube/config:
cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
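If the ~/.kube directory itself is gone, recreate it first; and if you'd rather not copy the file at all, pointing KUBECONFIG at the k3s copy also works. A minimal sketch, using the same paths:
mkdir -p ~/.kube && cp /etc/rancher/k3s/k3s.yaml ~/.kube/config
# or, without copying:
export KUBECONFIG=/etc/rancher/k3s/k3s.yaml
kubectl get nodes   # should reach the cluster again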
Reference: https://rancher.com/docs/k3s/latest/en/cluster-access/

Related

Installed minikube (Kubernetes) because I have only one master server/node, but it is not pointing to my IP; it points to IP 172.17.0.2

Good afternoon. I am new to Kubernetes and I am installing it for a development environment. I have a Red Hat server that will act as node/master. I followed the steps below to install it:
Following the tutorial on the page:
https://www.linuxtechi.com/install-kubernetes-k8s-minikube-centos-8/
sudo dnf update -y
sudo setenforce 0
sudo sed -i --follow-symlinks 's/SELINUX=enforcing/SELINUX=disabled/g' /etc/sysconfig/selinux
sudo systemctl start docker
sudo systemctl enable docker
sudo dnf install conntrack -y
#Installing Kubectl
cat <<EOF > /etc/yum.repos.d/kubernetes.repo   # as root; repo definition omitted here, see the tutorial
yum install -y kubectl                         # as root
#Installing Minikube
curl -Lo minikube https://storage.googleapis.com/minikube/releases/latest/minikube-linux-amd64
chmod +x minikube
sudo mkdir -p /usr/local/bin/
sudo install minikube /usr/local/bin/
However, when I configure minikube, it does not point to my server's IP but to 172.17.0.2:
minikube ip
172.17.0.2
kubectl cluster-info
To further debug and diagnose cluster problems, use 'kubectl cluster-info dump'.
The connection to the server 172.17.0.2:8443 was refused - did you specify the right host or port?
My IP is 10.154.7.209
What can I be doing wrong? If I can't use minikube to stand up a single master/node server, what can I use?
You are not doing anything wrong 👌. minikube is using the docker driver, so 172.17.0.2 is the IP address of the container where your cluster is running.
But it looks like you are using a proxy somewhere on your system, so you need to include the 172.17.0.0/24 range in the NO_PROXY environment variable.
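To confirm the subnet the cluster container sits on before editing NO_PROXY, you can ask docker directly. A quick check, assuming the docker driver and the default bridge network:
docker network inspect bridge --format '{{(index .IPAM.Config 0).Subnet}}'
# prints the bridge subnet; include it (or a range covering it) in NO_PROXY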
Something like this:
export HTTP_PROXY=http://<proxy hostname:port>
export HTTPS_PROXY=https://<proxy hostname:port>
export NO_PROXY=localhost,127.0.0.1,10.96.0.0/12,192.168.99.0/24,192.168.39.0/24,172.17.0.0/24 👈
minikube start
# get pods
minikube kubectl -- get pods --all-namespaces
✌️

Errors shown below when I start preparing Kubernetes in AWS

Below are the commands and their outputs:
root@k8s-master:~# sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
cp: cannot stat '/etc/kubernetes/admin.conf': No such file or directory
root@k8s-master:~# kubectl get services -n kube-system
The connection to the server localhost:8080 was refused - did you specify the right host or port?
It looks like you are not running EKS; otherwise you could not access the masters at all, since with EKS the masters are managed by AWS and you can't SSH to them.
Your kubectl command makes a call to the Kubernetes API server, so you have to check whether it is actually running on localhost port 8080.
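A quick way to check both things at once, sketched below (ss comes with iproute2; ports per the errors above):
kubectl config view                  # which server, if any, is kubectl configured to use?
ss -tlnp | grep -E '8080|6443'       # is an API server actually listening there?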

How to login to Kubernetes using service account?

I am trying to perform a simple operation: logging into my cluster to update the image of a deployment. I am stuck at the first step. I get an error that the connection to localhost:8080 was refused. Please help.
$ chmod u+x kubectl && mv kubectl /bin/kubectl
$ echo "$KUBE_CERT" > ca.crt
$ kubectl config set-cluster cfc --server=$KUBE_URL --certificate-authority=ca.crt
Cluster "cfc" set.
$ kubectl config set-context cfc --cluster=cfc
Context "cfc" created.
$ kubectl config set-credentials gitlab-admin --token=$KUBE_TOKEN
User "gitlab-admin" set.
$ kubectl config set-context cfc --user=gitlab-admin
Context "cfc" modified.
$ kubectl config use-context cfc
Switched to context "cfc".
$ echo "Deploying dashboard with version extracted from tag ${CI_COMMIT_TAG}"
Deploying dashboard with version extracted from tag dev-1.0.4-22
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason you get the connection refused is that your proxy is not started. Try executing the command below so kubectl can access the cluster via the proxy (localhost:8080).
kubectl proxy --port 8080 --address 0.0.0.0 --accept-hosts '.*' &
Another approach is to use curl and talk to your cluster directly, as in the following example:
curl --cacert /path/to/cert -H "Authorization: Bearer {your token}" "${KUBE_URL}/api"
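If you go the proxy route instead, you can sanity-check it before retrying. A sketch, assuming the proxy above is listening on port 8080:
curl http://localhost:8080/api                    # the proxy injects credentials for you
kubectl --server=http://localhost:8080 get pods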

Cannot connect to Kubernetes api on AWS vm's

I have deployed Kubernetes following the linked Kubernetes official page.
I can see that Kubernetes is deployed because at the end I got this:
Your Kubernetes master has initialized successfully!
To start using your cluster, you need to run the following as a regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
You should now deploy a pod network to the cluster.
Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
https://kubernetes.io/docs/concepts/cluster-administration/addons/
You can now join any number of machines by running the following on each node
as root:
kubeadm join 172.16.32.101:6443 --token ma1d4q.qemewtyhkjhe1u9f --discovery-token-ca-cert-hash sha256:408b1fdf7a5ea5f282741db91ebc5aa2823802056ea9da843b8ff52b1daff240
When I do kubectl get pods, it throws this error:
# kubectl get pods
The connection to the server 127.0.0.1:6553 was refused - did you specify the right host or port?
When I check cluster-info, it says the following:
kubectl cluster-info
Kubernetes master is running at https://127.0.0.1:6553
But when I look at the config, it shows the following:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUN5RENDQWJDZ0F3SUJBZ0lCQURBTkJna3Foa2lHOXcwQkFRc0ZBREFWTVJNd0VRWURWUVFERXdwcmRXSmwKY201bGRHVnpNQjRYRFRFNE1EWXlNREEyTURJd04xb1hEVEk0TURZeE56QTJNREl3TjFvd0ZURVRNQkVHQTFVRQpBeE1LYTNWaVpYSnVaWFJsY3pDQ0FTSXdEUVlKS29aSWh2Y05BUUVCQlFBRGdnRVBBRENDQVFvQ2dnRUJBT0ZXCkxWQkJoWmZCQms4bXJrV0w2MmFGd2U0cUYvZkRHekJidnE5TFpGN3M4UW1rdDJVUlo5YmtSdWxzSlBrUXV1U2IKKy93YWRoM054S0JTQUkrTDIrUXdyaDVLSy9lU0pvbjl5TXJlWnhmRFdPTno2Y3c4K2txdnh5akVsRUdvSEhPYQpjZHpuZnJHSXVZS3lwcm1GOEIybys0VW9ldytWVUsxRG5Ra3ZwSUZmZ1VjVWF4UjVMYTVzY2ZLNFpweTU2UE4wCjh1ZjdHSkhJZFhNdXlVZVpFT3Z3ay9uUTM3S1NlWHVhcUlsWlFqcHQvN0RrUmFZeGdTWlBqSHd5c0tQOHMzU20KZHJoeEtyS0RPYU1Wczd5a2xSYjhzQjZOWDB6UitrTzhRNGJOUytOYVBwbXFhb3hac1lGTmhCeGJUM3BvUXhkQwpldmQyTmVlYndSWGJPV3hSVzNjQ0F3RUFBYU1qTUNFd0RnWURWUjBQQVFIL0JBUURBZ0trTUE4R0ExVWRFd0VCCi93UUZNQU1CQWY4d0RRWUpLb1pJaHZjTkFRRUxCUUFEZ2dFQkFDTFBlT0s5MUdsdFJJTjdmMWZsNmlINTg0c1UKUWhBdW1xRTJkUHFNT0ZWWkxjbDVJZlhIQ1dGeE13OEQwOG1NNTZGUTNEVDd4bi9lNk5aK1l2M1JrK2hBdmdaQgpaQk9XT3k4UFJDOVQ2S1NrYjdGTDRSWDBEamdSeE5WTFIvUHd1TUczK3V2ZFhjeXhTYVJJRUtrLzYxZjJsTGZmCjNhcTdrWkV3a05pWXMwOVh0YVZGQ21UaTd0M0xrc1NsbDZXM0NTdVlEYlRQSzJBWjUzUVhhMmxYUlZVZkhCMFEKMHVOQWE3UUtscE9GdTF2UDBDRU1GMzc4MklDa1kzMDBHZlFEWFhiODA5MXhmcytxUjFQbEhJSHZKOGRqV29jNApvdTJ1b2dHc2tGTDhGdXVFTTRYRjhSV0grZXpJRkRjV1JsZnJmVHErZ2s2aUs4dGpSTUNVc2lFNEI5QT0KLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
    server: https://172.16.32.101:6443
Even telnet shows that there is a process listening on 6443, but nothing on 6553.
How can I change the port, and how can I fix the issue?
Any help would be of great use
Thanks in advance.
It looks like your latest kubectl config conflicts with the configurations of previous clusters.
It is possible to have settings for several different clusters in one .kube/config or in separate files.
But in some cases, you may want to manage only the cluster you've just created.
Note: After tearing down the existing cluster using kubeadm reset and then initializing a fresh cluster using kubeadm init, new certificates are generated. To operate the new cluster, you have to update the kubectl configuration or replace it with the new one.
To clean up old kubectl configurations and apply the last one, run the following commands:
rm -rf $HOME/.kube
unset KUBECONFIG
# Check if you have KUBECONFIG configured in profile dot files and comment or remove it.
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
This gives you the up-to-date configuration for the last cluster you created with the kubeadm tool.
Note: You should copy the kubectl configuration for all user accounts you are going to use to manage the cluster.
Here are some examples of how to manage the config file from the command line.
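For instance (standard kubectl config subcommands; the last one would have exposed the stray 127.0.0.1:6553 entry right away):
kubectl config get-contexts          # list all contexts in the merged config
kubectl config use-context <name>    # switch the active context
kubectl config view --minify -o jsonpath='{.clusters[0].cluster.server}'   # API server of the active context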
I figured out the issue: it was the firewall on the machine. I could join nodes to the cluster once I allowed traffic through port 6443. This post didn't end up fixing my issue, but beginners should use this K8s on AWS guide for a better overview.
Thanks for the help, guys!
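For reference, allowing 6443 on the node itself might look like this (a sketch assuming firewalld; on AWS the instance's security group also needs a matching inbound rule):
sudo firewall-cmd --permanent --add-port=6443/tcp
sudo firewall-cmd --reload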

Why does kubectl have different behavior with sudo?

Running kubectl get pods with sudo:
sudo kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
Running as a normal user:
kubectl get pods
No resources found.
By default, kubectl looks in ~/.kube/config (or the file pointed to by $KUBECONFIG) to determine what server to connect to. Your home directory and environment are different when running commands as root, so no connection info is found there, and kubectl falls back to localhost:8080.
You would have run these commands as the normal user:
sudo cp /etc/kubernetes/admin.conf $HOME/
sudo chown $(id -u):$(id -g) $HOME/admin.conf
export KUBECONFIG=$HOME/admin.conf
which copied the config file into your normal user's home directory; that is why you can connect as the normal user but not through sudo.
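If you do need to run kubectl under sudo, pass the config explicitly rather than relying on root's environment. A sketch, assuming admin.conf sits in your home directory as above:
sudo kubectl --kubeconfig="$HOME/admin.conf" get pods
# or, with newer sudo versions, carry the variable through:
sudo --preserve-env=KUBECONFIG kubectl get pods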